
Hands-On with the SQL Monitor Part 1: Top Requests, SQL Profile and Nested Loops


In the first post of this series I provided you with the big picture of what the SQL Monitor is and what it can do for you. In a nutshell, it’s like an aggregated SQL Trace that runs system-wide and supplies various kinds of runtime measures as well as a connection between the SQL statement and the driving business process. Today you will finally see the SQL Monitor in action as I will guide you through the first usage scenario.

 

From an end-user’s perspective the SQL Monitor consists of two components. The administration transaction SQLM, on the one hand, provides an overview of the monitor’s current status and allows you to activate and deactivate the monitoring. The data display transaction SQLMD, on the other hand, enables you to analyze the recorded data. A separate blog post will be devoted to transaction SQLM and the associated admin tasks at a later time. In this post, however, we will focus on analyzing the collected data using transaction SQLMD.

 

Without further ado, let us jump right into today’s scenario. I activated the SQL Monitor in one of our demo systems where it has collected a significant amount of data. Our goal is to analyze that data and to answer the following questions:

 

  1. What are our system’s most expensive request entry points (transactions, reports, …) in terms of the total database time?
  2. Which SQL statements dominate the SQL profile of the one most time-consuming entry point?
  3. How can we improve the (database) performance of this entry point?

 

To answer these questions, we log on to our system and fire up transaction SQLMD. Since the amount of collected monitoring data can get quite large, SQLMD first brings up a selection screen. This screen allows you to limit the amount of displayed data through several different filter options. For instance, you may restrict the objects containing the SQL statements (section “Objects”) – for example, to your custom code in packages Y* and Z*. In addition, you can filter the request entry points (section “Requests”) as well as the tables involved in the SQL statements (section “Tables”) and you can also choose how to aggregate the data (section “Aggregation”). Finally, you can decide by which criterion to sort the data and how many records to display (section “Order by”). Keep in mind that here the term “records” refers to the number of records displayed on the following screen and not to the number of records accessed by the SQL statements.

Capture.PNG
Screenshot 1: Selection screen of transaction SQLMD. Note that all screenshots were taken from ABAP in Eclipse, which I highly recommend to you. Nevertheless, the SQL Monitor can be used in the classical SAP GUI as well.

 

In order to answer question number one, we choose to aggregate the data by request and to order the results by the total (database) time. In addition, we consider only the top 200 records – that is, the 200 most time-consuming request entry points – which will suffice for a rough overview. Note that we don’t use any of the filter settings since we want to get the top entries with respect to our entire system. Hitting F8 now takes us directly into the data by bringing up the following screen.

Capture1.PNG
Screenshot 2: Result screen displaying aggregated runtime data for the most time-consuming request entry points.

 

This list displays our system’s most expensive entry points – that is, business processes – in terms of database time together with a wide range of aggregated runtime data. As you can see, the data involves entry points in both custom and SAP code. Hence, the SQL Monitor provides us with a comprehensive profile of our system.

 

In order to analyze the data, it is important to realize that all of the runtime measures refer to the SQL statements triggered by an entry point rather than to the entry point itself. For instance, the column “Executions” denotes the total number of executions for all SQL statements caused by an entry point but not the number of times the entry point itself was executed. Moreover, all time related columns are denoted in units of milliseconds.

 

Turning back to question number one, we now inspect the column “Total Time”, which provides the total database time of all SQL statements that were executed by an entry point. Comparing the values, we easily recognize that in terms of database time our system is clearly dominated by the three entry points at the top of the list, all of which lie in custom code. Hence, the first question is answered and if we wanted to reduce our system’s database load we would definitely start with those three entry points.

 

Moving on to the second question, we now focus on the one most time-consuming request entry point, which is the record at the very top. As you can see from the screenshot above, this entry point has caused more than 250 million database accesses (column “Executions”) which have resulted in a total of nearly 150 million milliseconds of database time (column “Total Time”). To get the SQL profile of this entry point, we have to use the column “Records”, which – contrary to what you might think – has nothing to do with the number of database records processed by an entry point. “Records” is a generic column which denotes the number of detail records and offers a hotspot link to drill down. Since we chose to aggregate by the request entry point, the detail records are the source code locations of the individual SQL statements that were triggered by an entry point. Consequently, clicking the “Records” hotspot link for our top entry point takes us to its SQL profile which looks like this:

Capture2.PNG
Screenshot 3: SQL profile of the most time-consuming request entry-point. The column “Exe./Sess.” is normally located further to the right and has been dragged to the visible area for this screenshot.

 

Each row in this list corresponds to an individual SQL statement triggered by our top entry point. For all of these statements the list includes a variety of aggregated runtime data as well as meta information like the involved database tables and the SQL operation type (SELECT, INSERT, UPDATE, …).

 

To analyze the SQL profile we first note that the list of statements is again sorted by the total database time. Observing the column “Total Time”, we then see that the uppermost statement obviously dominates over all other statements. In fact, with an aggregated runtime of more than 133 million milliseconds, it makes up about 90% of the database time of the entire entry point. This observation answers the second question and if we wanted to optimize the entry point’s database performance, we would doubtlessly inspect this top statement first.

 

Considering the third and last question, we take a closer look at the data for the topmost statement. For one thing, the column “Mean Time” reveals that the statement itself is rather small since it has a mean runtime of a mere 0.5 milliseconds. Furthermore, the column “Exe./Sess.” (executions per internal session) tells us that on average the statement is executed almost 300,000 times in each internal session. Hence, we can conclude that the statement must be nested in a loop – maybe even in multiple loops across different stack levels.

 

Background Knowledge: Internal Session

In this context, the number of internal sessions is the number of different roll areas in which an SQL statement was executed. A roll area is an area of memory that belongs to a work process. You can read up on the terms “roll area” and “work process” at http://help.sap.com/.

 

In layman's terms, this means that for the most important request entry points like transactions or reports the number of internal sessions simply equals the number of times the transaction or report was executed. For RFC function modules things are a little more involved. This is due to the fact that as long as an RFC connection remains open, all calls to RFC modules in one and the same function group will use the same roll area in the remote system. This can, for instance, happen when you make successive calls to an RFC module from one and the same report. Thus, for RFC modules the number of internal sessions can be smaller than the number of times the module was called.

 

In any case, if an SQL statement is executed very often in one internal session – that is, if you observe high values in the column “Exe./Sess.” – this is a clear indication of a nested loop.

 

To validate our assumption we navigate to the source code which is achieved by simply double-clicking the row in the list.

Capture4.PNG
Screenshot 4: Source code for the dominating SQL statement within the most time-consuming request entry point.

 

As you can easily see from the screenshot, the statement is a SELECT SINGLE – that means it is small – and it is wrapped in a local loop. In light of the rather large number of average executions per internal session it is even likely that the statement is also contained in a non-local loop. This means that the method get_details is probably called in an external loop, which could be verified by generating the corresponding where-used list. What’s remarkable at this point is that the SQL Monitor data had already provided us with correct and valuable insights before we inspected the source code itself.

 

So what can you do as an ABAP developer in such a situation to improve the performance? Well, if the combination of the words “small statement” and “loop” has not rung your alarm bells yet, you should first of all study SAP’s golden Open SQL rules. Having done so, you will come to the conclusion that it is best to resolve the loop(s) and to rewrite the sequence of small statements into a single large statement (see the sketch below). This could, for instance, be achieved by using a join, a subquery or a FOR ALL ENTRIES, all of which will in general exhibit a much better performance. This is especially true on SAP HANA, where the database engine is optimized for large-scale block operations. Thus, question number three is answered as well.
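To make this concrete, here is a minimal sketch of such a rewrite using FOR ALL ENTRIES. All table, field and variable names are invented for illustration and will differ in your code.

" Before: a small SELECT SINGLE that runs once per loop pass
LOOP AT lt_orders INTO ls_order.
  SELECT SINGLE * FROM zorder_details INTO ls_detail
    WHERE order_id = ls_order-order_id.
  " ... process ls_detail ...
ENDLOOP.

" After: one array operation that fetches all details in a single statement
IF lt_orders IS NOT INITIAL.
  SELECT * FROM zorder_details INTO TABLE lt_details
    FOR ALL ENTRIES IN lt_orders
    WHERE order_id = lt_orders-order_id.
ENDIF.

Note the guard against an empty FOR ALL ENTRIES table – more on that pitfall in part 2 of this series.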

 

Performing the code optimization itself is beyond the scope of this blog post which brings us to the end of our discussion. To recap, we have used the SQL Monitor to get an overview of our system’s most expensive request entry points in terms of total database time. Moreover, for the one most time-consuming request we have received the SQL profile through a simple UI drill-down. Finally, we have inspected the top statement within this profile and leveraged the monitoring data to identify performance issues. Astonishingly, all of this could be done with the greatest of ease and required nothing more than what is shown in the screenshots.

 

That’s all for today folks. If you like what you saw, make sure to watch out for my next blog post in ABAP for SAP HANA.


SAP TechEd 2013 - applications based on ABAP and SAP HANA, part 1


Disclaimer: This blog partly covers features of ABAP 7.4 which are not available yet for customers and partners, but are only planned to be made available with the next support package.


Two weeks ago, I came home from SAP TechEd. And just like in 2012 I gave a lecture about ABAP for SAP HANA.

 

I know that not everybody interested in the topic could make it to Vegas or Amsterdam... or will be able to make it to Bangalore. Hence I decided to summarize what I talked about at this year's SAP TechEd. I decided to write three blogs outlining some of the key takeaways and I hope that some of you enjoy reading them:

 

I guess all of you heard about the 'code pushdown' paradigm, which tells us to move complex and expensive calculations from the application to the database layer to benefit from the in-memory technology (if you don't know what I am talking about, take a look at this blog). With ABAP 7.4 we started a journey to simplify code pushdown within ABAP-based applications.

 

ABAP 7.4, support package 2: birth of SAP HANA content integration

Bottom_Up.png

Starting with ABAP 7.4, support package 2, you can use SAP HANA content integration to consume SAP HANA database artifacts in ABAP. I refer to it as a 'bottom-UP' approach, which allows you to:

  • FIRST create views and database procedures in SAP HANA (i.e. via the SAP HANA Studio)
  • THEN make them available in ABAP by means of proxy objects. The three types of proxy objects currently supported are:
    • External views (allowing the consumption of attribute views, analytic views and calculation views through Open SQL)
    • Database procedure proxies (allowing database procedures to be called easily from ABAP)
    • HANA transport containers (linking a delivery unit to an ABAP transport request and ensuring a consistent lifecycle management when using the two other types of proxies)
  • AND LAST BUT NOT LEAST use the database artifacts through their proxies

The SAP HANA content integration made it possible for ABAP developers to easily consume the most important SAP HANA artifacts (views and database procedures). However, the content is not natively managed by ABAP, which has several drawbacks especially for integrated applications.
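To give you an impression of what consumption through proxy objects looks like, here is a minimal, hedged sketch. The proxy names ZEV_CUSTOMER and ZDP_CALC_DISCOUNT as well as their parameters are invented for illustration:

" Reading an external view (proxy for a HANA attribute view) via Open SQL
SELECT * FROM zev_customer
  INTO TABLE lt_customers
  WHERE country = 'DE'.

" Calling a HANA database procedure through its database procedure proxy
CALL DATABASE PROCEDURE zdp_calc_discount
  EXPORTING iv_category = lv_category
  IMPORTING ev_discount = lv_discount.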


ABAP 7.4, next support package: the rise of advanced ABAP database programming

Top_Down.png

For many years ABAP developers were used to writing programs solely in the development environment of the application server. With SAP HANA content integration they need(ed) to partly leave the ABAP environment. They need(ed) to work with the SAP HANA Studio and with a database user of their own. I am not saying that working with the SAP HANA Studio does not make sense (e.g. to write native applications based on XS). I am just saying it is unusual for the ABAP community.

 

Therefore we racked our brains and came up with ideas to minimize the need for ABAP developers to leave the ABAP development environment. That was the birth of advanced ABAP database programming. This is a 'top-DOWN' approach. It mainly consists of three new features:

  • Advanced Open SQL (which enriches the feature set of Open SQL to support more SQL-92 features and thereby allows you to push down calculations to the database layer that could not be pushed down by means of Open SQL in the past)
  • Advanced view building (similar to advanced Open SQL; in addition it is supposed to simplify the consumption of relational data models)
  • ABAP-managed database procedures (which allow you to write SQLScript embedded in ABAP methods)


With advanced ABAP database programming ABAP developers work with the ABAP development tools in Eclipse. When needed, they can implement complex and expensive calculations with SQLScript, but (in contrast to the past) tightly integrated into ABAP.

 

If you would like to know more, stay tuned. In the next blog I will explain in detail what advanced ABAP database programming offers. And you will hear why advanced ABAP database programming should be your PREFERRED approach when pushing calculations from ABAP down to the database layer.

SAP HANA : THROUGH THE EYES OF AN ABAP NEWBIE


So with all these blogs going around I felt out of place, and I decided to write a blog post of my own. But then came the real problem: what would I write about? All the topics seemed taken. So I decided to write about SAP HANA, which seems like the new iPhone 5s in the Apple ecosystem, or KitKat in the Android world. SAP HANA is the coolest topic in SAP right now – or so it seems to me. So I went through some blogs written by some good people, and whatever I understood I am putting in one place. Experts, please forgive me if I am wrong. So let’s start with the basic question.

 

So what is this SAP HANA stuff?

 

Well according to Wikipedia “SAP HANA, short for 'High Performance Analytic Appliance' is an in-memory, column-oriented, relational database management system developed and marketed by SAP AG”.

Good, so it’s a database. So what’s so special about it? Well, it’s not just a database; it’s an in-memory, column-oriented, relational database system. I know what you guys are thinking: so what? You can put all the jargon you want in the definition, but in the end it’s just another database system. Even ORACLE is a relational database system. So why would anyone bother about it? And who is going to use it anyway? We already have proven, trustworthy databases.

Well, it’s not a normal database system. It’s much more than that. It’s the next step in database technology. Let’s begin by looking into what this in-memory stuff is, okay?

In-memory basically means that all data is stored in RAM. That means there is no loading of data from disk into RAM, no storing of some data temporarily on disk and reading it back as required, and none of the mess that comes with all that. Everything is there in memory, available to you all the time, which means faster access to the data by the CPU. All that’s nice to hear, but what’s the use of it? Why do I need all the data in memory all the time?

Well, if you have all the data available all the time, then you can do real-time analytics, which means there is no wait time between the data being entered and the data being available to you. Still don’t see why that’s good? Let me give you an example. Imagine there are two sellers, A and B, selling the same phone on a site like Flipkart or Amazon. Seller B suddenly offers a free cover with the phone, and at that instant you are there to buy the phone. You look at the options and decide which vendor you are going to buy the phone from. Unless there is a guy working for seller A who is always watching the competitors’ offers and prices, seller A is at a disadvantage. Now imagine this at a larger scale, where you are selling a large number of products and there are many competitors competing with you through different offers and discounts. To gain an advantage you need the current data at that moment, not one hour later; by then your customer would have purchased his product and left. This is where real-time analytics comes into the picture, and hence SAP HANA. And HANA is designed in such a way that it can process large amounts of data, so you, the customer of SAP HANA, get your data at the right time, which is NOW.

Now, how is this column storage stuff going to affect the database? Why does it matter whether I store data in columns or rows? Well, for that let’s look at the following table.

 

NAME     AGE   HEIGHT (cm)
Amit     24    181
Sachin   23    183

 

In row-based technology the data will be stored like:

Amit-24-181 Sachin-23-183

And in column-based technology it will look like:

Amit-Sachin  24-23  181-183

 

Now imagine I wanted to calculate the average age of all the people present in the database – which layout would take less time to read? Obviously the column one, right? Because the database doesn’t need to go through the name and height every time it accesses an age. It can just zero in on AGE and do the required calculation without bothering about anything else. This is exactly why SAP HANA is fast.
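In ABAP terms, this is the kind of calculation you would hand over to the database with an aggregate function instead of looping over all the rows yourself. A tiny sketch, with a made-up table ZPEOPLE and field AGE:

" The database scans only the AGE column and returns a single value
DATA lv_avg_age TYPE f.
SELECT AVG( age ) FROM zpeople INTO lv_avg_age.
WRITE: / 'Average age:', lv_avg_age.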

 

But what about writing new data? This was exactly the biggest problem with column-based technology: to modify or add new data you need to do a lot more work than in a row-based database. But SAP HANA overcomes this by using delta merge technology (don’t know what that is? Me neither. But apparently it works, and that’s all we care about, right?).

 

And regarding the relational part, I guess most of you are already aware of what an RDBMS is. If not, just google it.

 

All that seems cool. But how is it going to affect me? I am not the customer.

 

Well, SAP HANA has some advantages over existing data warehouse technologies, like SAP BW for instance. In SAP BW it is the IT administrator who decides which information might be needed for analysis, not the end user. Performance gets drastically affected if the user decides to analyze something new that may not have occurred to the administrator. This problem is not present in HANA: the user can perform any type of aggregation and the performance will not suffer. So I think that if it catches on, SAP HANA is going to replace BW in many cases – and this is my simple, inexperienced opinion.

And now to the favourite part:

Good to know, but database or no database, I am an ABAPER. Not going to bother. Thank you!

 

Apparently SAP is planning three stages of implementation:

 

Stage 1: ABAP can access SAP HANA

At present ABAP can access SAP HANA like a traditional database by means of a secondary database connection.

Stage 2: ABAP runs on SAP HANA

With the release of SAP NetWeaver 7.3 the SAP HANA database can be used as the primary database.

Stage 3: ABAP optimized for SAP HANA

In the next stage SAP will allow other solutions to access SAP HANA as the primary database, in particular the SAP Business Suite. And during this process SAP is planning to optimize ABAP for use on SAP HANA.

 

So whether you like it or not, SAP HANA is coming, and it looks like it’s here to stay. But hopefully it also brings lots of new opportunities to the ABAPER. So to all the newbies like me: “The cycle is never going to end”.

 

 

On an interesting note, go through the success stories of SAP HANA. Did you know that they used it in FOOTBALL for real-time data? That, my friend, is amazing.

 

Disclaimer: I hereby pledge that I am an ABAP newbie and might say some stupid stuff in this blog. Kindly forgive the mistakes and please provide your valuable insights. Good day to all of you.

Hands-On with the SQL Monitor Part 2: Top Statements and Empty FOR ALL ENTRIES Tables


Welcome to the third post in my blog series on the SQL Monitor. Last time I showed you how to use the SQL Monitor to find and analyze your system’s most time-consuming requests – that is, business processes. In addition to this request-based approach, it can also be useful to perform an analysis on the SQL statement level. This is owing to the fact that an SQL statement might be triggered by several different requests. Hence, if such a statement exhibits particularly poor performance, this can affect multiple business processes.

 

One of the common reasons for miserable performance is reading unnecessary database records. Therefore, today we will scan our demo system for SQL statements that are expensive due to the huge amount of accessed data. In particular, we want to answer the following questions:

 

  1. Which SQL statements have read or modified the most database records?
  2. Which requests trigger the top SQL statement in terms of accessed database records?
  3. How can we improve this statement’s performance?

 

Question 1

 

"Which SQL statements have read or modified the most database records?"

 

As before, the starting point of our analysis is the SQL Monitor’s data display transaction SQLMD. To answer the first question, we need an overview of the SQL statements executed in our system ordered by the total number of accessed database records. The first requirement – that is, grouping the data by SQL statement – can easily be met by selecting to aggregate by source position on the selection screen (section “Aggregation”). Establishing the appropriate ordering, however, requires some manual steps on our NetWeaver 7.40 SP3 demo system. This is because in the aforementioned release there is no option to sort the data by the total number of accessed database records in section “Order By” (a corresponding option is planned for NetWeaver 7.40 SP5). The workaround is to clear the field “Maximal Number of Records”, which will cause all available monitoring records to be displayed. Afterwards we can manually sort the result list in the ALV.

 

Note that if you have a large number of monitoring records in your system, SQLMD might require a substantial amount of time to build the result list. In such a case you may be able to speed up the process by setting a high number (for instance 10,000) in the field “Maximal Number of Records” instead of clearing it completely.

 

Altogether, the configured selection screen looks like this:

Capture1.PNG

Screenshot 1: Configuration of the selection screen in transaction SQLMD.

 

After hitting F8 we are then presented with the list of all SQL statements that were executed in our system. To obtain the desired ordering, all we need to do is locate the column “Total Records” and sort it in descending order. A word of advice: be careful not to confuse the columns “Records” and “Total Records”. As explained in my last post, the former denotes the number of available detail records while the latter indicates the number of accessed database records. For our demo system the final result is depicted in the following screenshot.

Capture2.PNG

Screenshot 2: List of SQL statements ordered by the total number of accessed database records.

 

This list provides an overview of SQL statements that have caused a large amount of data to be transferred between the database and the application server. As you can see, all of these statements are located in custom code. The two dominating statements at the top of the list have each accessed more than a billion database records in total. Hence, question number one is answered.

 

Question 2

 

"Which requests trigger the top SQL statement in terms of accessed database records?"

 

Turning to the second question, we now focus on the statement at the very top of the list which is located in include ZSQLM_TEST11 line number 34. We are interested to know which request entry points have caused the statement to be executed. If you have worked through my previous post, you might already guess that this information is just a single click away. Remember the column “Records”? As explained in the aforementioned post, this is a generic column which denotes the number of available detail records. Since we have chosen to aggregate the data by the source position, in our case the detail records are the requests that have triggered the SQL statement. To drill down into these detail records, all you have to do is click the hotspot link in the “Records” column. It couldn’t be any easier, could it?

 

For the SQL statement at the top of the list, the “Records” column indicates that there are three different driving requests – that means business processes. In particular, clicking the hotspot link takes us to the following list:

Capture3.PNG

Screenshot 3: List of request entry points that have caused the top SQL statement to be executed.

 

As you can see, all of the driving entry points caused our statement to access an average of about 25,000 database records on each execution (column “Mean Recs.”). The majority of executions – and thereby the majority of accessed database records – were caused by the report ZSQLM_TEST11. In addition, the statement was also triggered by an RFC module and a transaction both of which are, however, almost negligible due to their low number of executions. Thus, if we optimized our top statement’s performance, the request that would benefit most is the report ZSQLM_TEST11. These observations answer the second of our questions.

 

Question 3

 

"How can we improve this statement’s performance?"

 

Focusing on the last question, let us now turn back to the SQL statement itself and think about how we could speed it up a little. Your first impulse might be to investigate the code, but hold on a second: before digging through the ABAP sources, the SQL Monitor may already provide you with valuable insights!

 

When dealing with SQL statements that process an immense number of database records, it is advisable to check the maximum and minimum number of accessed records in the columns “Max. Records” and “Min. Records”, respectively. As you can see from the second screenshot, for our top statement the maximum amounts to 75,000 records while the minimum is 0. Moreover, the third screenshot indicates that the statement accessed the database table ZSQLM_TEST_USERS (column “Table Names”). Checking the table contents with transaction SE16, we realize that it contains exactly 75,000 records.

Capture5.PNG

Screenshot 4: Number of records contained in the table accessed by the top SQL statement.

 

Hence, sometimes our top statement accessed all the contents of the database table while other times it accessed nothing. This is very suspicious and especially the fact that at times the whole content of the database table was accessed indicates that our top statement may involve an empty FOR ALL ENTRIES table. To check this assumption we can navigate to the source code by performing a double click either on the top statement itself in the overview (second screenshot) or on one of its driving requests in the detail view (third screenshot).

Capture4.PNG

Screenshot 5: Source code for the top statement.

 

Just as suspected, the statement is a SELECT FOR ALL ENTRIES without any prior content check on the FOR ALL ENTRIES table. When the FOR ALL ENTRIES table is empty, the SELECT yields all the records contained in the database table. What’s particularly important is that an empty FOR ALL ENTRIES table invalidates the WHERE clause completely, even if it contains additional conditions. In the majority of cases this is, however, not what the developer intended, especially when the database table contains even more records than in our case. To put it straight, this is not just a statement with room for performance optimization – it’s a bug. To fix it, all you need to do is wrap the SELECT in an IF statement, as sketched below, to make sure it is never executed with an empty FOR ALL ENTRIES table. If you really want to access all database records for an empty FOR ALL ENTRIES table, add an ELSE branch and use a plain SELECT without any WHERE clause. This makes your code much more robust and readable and, finally, answers the third and last question.
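Here is a minimal sketch of the fix; the internal table lt_users and the field names are assumptions based on the scenario above:

IF lt_users IS NOT INITIAL.
  SELECT * FROM zsqlm_test_users INTO TABLE lt_result
    FOR ALL ENTRIES IN lt_users
    WHERE user_id = lt_users-user_id.
ELSE.
  " Only if reading the entire table is really the intended behavior
  SELECT * FROM zsqlm_test_users INTO TABLE lt_result.
ENDIF.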

 

Wrap-Up

Stepping through today’s scenario, I showed you how to analyze the SQL Monitor data starting from the SQL statement level. For this purpose we generated a list of all SQL statements executed in our system and sorted the results by the total number of accessed database records. Focusing on the top statement we used a simple one-click drill-down operation to obtain the list of requests (business processes) that caused the statement to be executed. Furthermore, we leveraged the monitoring data to reveal that the top statement involves an empty FOR ALL ENTRIES table.

 

One final remark: In view of SAP HANA, sorting the SQL Monitor data by the total number of accessed database records can also be useful when using aggregation by request. This allows you to locate data-intense requests which might significantly benefit from a code push-down onto the database.

 

That’s it for today. If you still feel like walking through another hands-on scenario (I hope you do), don’t miss my next post in ABAP for SAP HANA.

SAP TechEd 2013 - applications based on ABAP and SAP HANA, part 2


Disclaimer: This blog partly covers features of ABAP 7.4 which are not available yet for customers and partners, but are only planned to be made available with the next support package.


In my last blog I wrote about 'bottom-up' and 'top-down' approaches to leverage SAP HANA capabilities from ABAP. In this blog I want to give you a glimpse of the new features planned for the next support package of ABAP 7.4.

 

Advanced Open SQL

As you probably know, Open SQL is our database-independent interface connecting the ABAP application server to the underlying database. The big advantage is that code making use of Open SQL runs on all database platforms supported by SAP NetWeaver AS ABAP.

 

The big disadvantage is that the feature set of Open SQL is quite restricted. Take a look at the following examples:

  • You want to sum up the costs of 12 periods which are stored in 12 different attributes of a record? Read all 12 attributes and sum them up in ABAP.
  • You want to concatenate the product ID and name? Read both and concatenate them in ABAP.
  • You want to calculate freight costs based on the maximum of the weight and the volume weight of a material? Read weight and volume weight and use an IF-clause in ABAP.

 

Is there really no better way? In the future there will be!

 

We plan to enrich the feature set of Open SQL. This will allow you to push down calculations to the database layer that could not be pushed down by means of Open SQL in the past. With the next support package of ABAP 7.4 we, for example, plan to support:

  • string expressions (concatenation of attributes)
  • usage of ABAP constants and variables in the projection list
  • CASE expressions ('simple CASE')
  • certain arithmetic expressions for integral, decimal and floating point calculations
  • certain built-in SQL functions (e.g. CAST, COALESCE)

 

The following example shows what advanced Open SQL can look like:

"product ID and product category are concatenated using a string expression

SELECT product_id && ',' &&@space && category AS product,

       "the price including the VAT is calculated in the database by means of

       "a CASE statement

       CASE tax_tarif_code

         WHEN 1 THEN price * @lc_factor_1

         WHEN 2 THEN price * @lc_factor_2

         WHEN 3 THEN price * @lc_factor_3

       END AS price_vat,"projection list needs to be separated by comma

       currency_code AS currency

       FROM snwd_pd

       INTO CORRESPONDING FIELDS OF @ls_result."variables have to be escaped by @

  WRITE: / ls_result-product,

           ls_result-price_vat CURRENCY ls_result-currency,

           ls_result-currency.

ENDSELECT.

 

Advanced view building

What I have written about Open SQL is basically also true for the view building capabilities of the ABAP Dictionary. In line with Advanced Open SQL we also plan to introduce features for advanced view building. These features will ease code pushdown and simplify the consumption of relational data models.

 

In the future we plan to allow you to create views by means of a new Eclipse-based editor (integrated into the Eclipse-based ABAP development environment). The following screenshot shows how this editor will look.

Eclipse_DDL_Source.png

 

And the following two snippets illustrate how you will define views in the new editor. Views can be nested (i.e. a view consumes another view) and they can be linked with associations.

 

  • In the given example the view Z_DEMO_REVENUES reads certain attributes from table SNWD_SO. It summarizes and groups the data.

@AbapCatalog.sqlViewName: 'Z_DEMO_R'
define view z_demo_revenues as select from snwd_so
{
  snwd_so.buyer_guid,
  sum(snwd_so.gross_amount) as gross_amount,
  sum(snwd_so.net_amount) as net_amount,
  sum(snwd_so.tax_amount) as tax_amount,
  snwd_so.currency_code as currency
} group by snwd_so.buyer_guid, snwd_so.currency_code

  • The view Z_DEMO_CUSTOMER reads data from tables SNWD_BPA and SNWD_AD. It also defines an association to the first view.

@AbapCatalog.sqlViewName: 'Z_DEMO_C'
define view z_demo_customer as select from snwd_bpa
  inner join snwd_ad on
    snwd_ad.node_key = snwd_bpa.address_guid
  association[*] to z_demo_revenues as revenues on
    revenues.buyer_guid = snwd_bpa.node_key
{
  snwd_bpa.node_key,
  snwd_bpa.bp_id,
  snwd_bpa.company_name,
  snwd_ad.country,
  snwd_ad.postal_code,
  snwd_ad.city,
  revenues.gross_amount,
  revenues.currency
}
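Once activated, such a view is planned to be consumable like any other dictionary view. A hedged sketch of what that could look like with the new Open SQL syntax (the exact syntax may differ in the shipped support package):

SELECT * FROM z_demo_customer
  INTO TABLE @DATA(lt_customers)
  WHERE country = 'DE'.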

ABAP-managed database procedures

The last planned feature is ABAP-managed database procedures. You might have heard about database procedures already. They can be used to implement complex calculations by means of SQLScript (including calculation engine functions). With the next support package of AS ABAP 7.4 we plan to support database procedures which are managed by the ABAP application server.

 

The following example shows how ABAP methods can be used as containers for database procedures (you might notice that the code inside the method body is not ABAP, but SQLScript).

CLASS zcl_demo_amdp DEFINITION
  ...
  "marker interface (e.g. for where-used list)
  INTERFACES: if_amdp_marker_hdb.
  METHODS: determine_sales_volume
             IMPORTING VALUE(iv_client) TYPE mandt
             EXPORTING VALUE(et_sales_volume) TYPE tt_sales_volume.
  ...
ENDCLASS.

 


CLASS zcl_demo_amdp IMPLEMENTATION.
                                "additions for implementation
  METHOD determine_sales_volume BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT
                                "forward declaration of used artifacts
                                USING snwd_so_i snwd_so_sl snwd_pd.

    lt_sales_volume = SELECT product_guid, SUM(quantity) AS quantity,
                             quantity_unit
                             FROM snwd_so_i AS i
                             INNER JOIN snwd_so_sl AS sl
                                ON sl.client = i.client
                               AND sl.parent_key = i.node_key
                             WHERE i.client = :iv_client
                             GROUP BY product_guid, quantity_unit;

    et_sales_volume = SELECT product_id, quantity, quantity_unit
                             FROM snwd_pd AS pd
                             LEFT OUTER JOIN :lt_sales_volume AS sv
                               ON sv.product_guid = pd.node_key
                             WHERE pd.client = :iv_client
                             ORDER BY product_id;

  ENDMETHOD.
ENDCLASS.
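Calling such an ABAP-managed database procedure then looks like any other method call. A minimal sketch, assuming the table type tt_sales_volume from the definition above:

DATA(lo_amdp) = NEW zcl_demo_amdp( ).
lo_amdp->determine_sales_volume(
  EXPORTING iv_client       = sy-mandt
  IMPORTING et_sales_volume = DATA(lt_sales_volume) ).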

 

Now you know which new features are planned to ease code pushdown and to simplify consumption of relational data models.


The remaining question for my last blog will be: how can optimized infrastructure components help you to benefit from SAP HANA? If you would like to learn about fuzzy-enabled value helps or the evaluation of business rules in SAP HANA, you will soon be able to read more... but most likely only after Christmas.

Custom code and SAP HANA

When you plan to migrate to SAP HANA, the amount of custom code in your system will have an impact on the total cost, time and quality of the end result. In this article, we look at the potential impact of that custom code, and discuss how to tune your custom code to perform optimally on SAP HANA. By using the tips and tools mentioned here, you will be able to boost your ROI and smooth the migration process.

Incentive

Even though system owners, software manufacturers and consultants do not like to admit it, custom code is a central part of the business functionality. In fact, in some processes custom code is a requirement in order to make the standard functionality work with the business requirements. With that in mind, we can expect that all SAP installations, small or large, will have some elements of custom code.
When migrating to SAP HANA, we must therefore answer the following questions:
  1. Which parts of my custom code must be changed in order to make the code compile and avoid potential functional issues?
  2. Which parts of my code shall be optimized to achieve the performance expected with SAP HANA?
  3. How can I identify which of my main business processes have the potential to be massively accelerated with SAP HANA?
[Ref. 1: Bresch, Gebhardt & co.]
   
The answer to the first of these questions will provide insight into potential hurdles in the migration process, while the answers to the latter two will be essential parts of your ROI estimates.

Part 1 – The code cleansing

In general, all the code that runs on your existing platform will continue to run as before on SAP HANA. That is the case for standard SAP code, as the migration requires certain application enhancement package levels in combination with a compulsory NetWeaver stack level. We can expect that SAP has replaced potentially troublesome code, and even optimised it in several areas.
That leaves you with your custom code. The general rule applies to these parts as well, and you can anticipate that most parts of your custom code will still do what you expect. However, as some of the fundamental characteristics of the underlying database change when you replace your old database with HANA, the code needs a thorough check.
First of all, we need to find and replace any parts of the code that rely on database-specific features. Examples are native SQL statements and the use of DB hints in Open SQL statements. Take the code from Example 1 into consideration.
EXEC SQL PERFORMING loop_output.
  SELECT connid, cityfrom, cityto
  INTO :wa
  FROM spfli
  WHERE carrid = :c1
ENDEXEC.
Example 1: Native SQL.
This notation uses native SQL. It relies on the database to accept that exact syntax. The example is not very advanced; however, in order to eliminate the possibility of compatibility errors, a level of transparent abstraction should be introduced. This is done with the database-independent statement set called Open SQL. By using Open SQL, the programmer makes sure that the code runs on any database chosen for your SAP installation or migration.
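For Example 1, a database-independent Open SQL equivalent could look like this (a sketch; wa and c1 are assumed to be declared as in the original program):

SELECT connid cityfrom cityto
  FROM spfli
  INTO CORRESPONDING FIELDS OF wa
  WHERE carrid = c1.
  PERFORM loop_output.
ENDSELECT.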
Secondly, we need to identify, examine and possibly replace code that relies on implicit sorting done by the database. The reason for this is that the migration to SAP HANA includes a change from a row-based to a column-based architecture. Before the migration, your row-based database returned your result set in implicit primary key sequence if the SQL didn’t request otherwise. After conversion to the column-based database, the implicit sort sequence is no longer returned.

We can expect that programmers have based code on the previously existing feature of implicit index sorting. When building and testing their code, they will have seen this behaviour in the debugger or in the final result set, and therefore omitted sorting in their own code. After migration to HANA, the code will still compile, but may not provide the correct result to the end user. Therefore we need to place sorting in the custom code. This can be done by one of the following actions (see the sketch after this list):
  • Adding “order by” in the select statement. This is the preferred choice if indexed fields should determine sort order.
  • Adding a SORT on the result set. The SORT should follow immediately after the SELECT statement in order to prevent duplicated processing.
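A sketch of both options, using the flight demo table SPFLI from Example 1:

DATA lt_spfli TYPE TABLE OF spfli.

" Option 1: let the database order the result set (here by the primary key)
SELECT * FROM spfli INTO TABLE lt_spfli
  WHERE carrid = 'LH'
  ORDER BY PRIMARY KEY.

" Option 2: sort explicitly in ABAP, directly after the SELECT
SELECT * FROM spfli INTO TABLE lt_spfli
  WHERE carrid = 'LH'.
SORT lt_spfli BY connid.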
The third important issue with migration to a column-based database is the conversion of pool and cluster tables. Cluster tables and some of the pool tables will be transformed into transparent tables, and the relation between them is broken [Ref. 2, SAP Note 1785057]. Here is an example:
Before migration, a delete statement on a table cluster would delete from multiple clustered tables. After migration, the delete will only remove data from the table named in the SQL, not from tables that made up the cluster.
Example 2: Cluster and pool tables.
 
The code must be altered in order to cope with how cluster and pool tables are transformed into transparent tables. Legacy code should be adjusted with separate SQL calls to all tables that previously were in the cluster or pool.
Rewriting the code in accordance with the recommendations above does not sound so complicated. Finding the code that must be changed, however, may seem like an overwhelming task. No doubt, the amount of custom code will be a factor in estimating the effort. Luckily though, the vast majority of code lines that make up your applications are written by SAP – not by you. Hence, SAP has needed tools to correct its own code. Some of these tools are now made available for customers to use in their optimising work, and they will be of great value in the process of migrating to SAP HANA.
In particular, the ABAP Code Inspector is essential in this work. In this tool you can define check variants specifying which elements you want to analyze. The following categories will provide good help in identifying problematic code:

 

 

  • Critical statements: Find native SQL and DB hints.
  • Use of ADBC Interface: Find native SQL and DB administrative statements.
  • SELECT/OPEN CURSOR without ORDER BY: Finds problematic statements where database tables are read without ordering and the result set is not sorted before a subsequent read, search or delete.
  • Search ABAP Statement Patterns: Lets you search for index specific code.
To support the process of generally lifting the quality of the code, the Code Inspector is part of the ABAP Test Cockpit (ATC). Here, the code quality manager can schedule periodic runs, add quality gates with priorities, and publish the results back to developers. Even if you have no immediate plans to migrate to HANA, this tool should catch your interest. Putting your programming standards into a benchmarking regime will result in better quality and, in the end, better-running business processes.

Part 2 – Boost your custom code

In part one we made the code run on SAP HANA and eliminated potential code problems that may occur in the migration process. In this second part, we will look at how we can boost custom code so that you achieve the “HANA effect”.
Now that your custom code will compile and run on SAP HANA, you can already expect better response times on your SQLs without putting in any additional work. This will certainly be the case for database-intensive programs where the programmers have followed best practice. In processes where programmers have not focused on writing efficient code, the switch to SAP HANA will not result in massively reduced runtimes.
So what are these golden rules of SQL, and what is their importance in terms of HANA? Bresch, Gebhardt & co. [Ref. 1] have made an overview of the most critical concepts:

Golden rule: Keep the result sets small.
  • Do not retrieve rows from the database and discard them on the application server using CHECK or EXIT, e.g. in SELECT loops.
  • Make the WHERE clause as specific as possible.
HANA relevance: This rule is as important as before when migrating to HANA.

Golden rule: Minimise the amount of transferred data.
  • Use SELECT with a field list instead of SELECT * in order to transfer just the columns you really need.
  • Use aggregate functions (COUNT, MIN, MAX, SUM, AVG) instead of transferring all the rows to the application server.
HANA relevance: When shifting to a column-based database, this becomes more important. The reason is that whole columns must be read by the database in order to fetch the returned result set.

Golden rule: Minimise the number of data transfers.
  • Use JOINs and/or sub-queries instead of nested SELECT loops.
  • Use SELECT .. FOR ALL ENTRIES instead of lots of SELECTs or SELECT SINGLEs.
  • Use array variants of INSERT, UPDATE, MODIFY, and DELETE.
HANA relevance: Array operations will be more efficient with a column-based architecture. Nested SELECTs will cause more inefficiency (relatively speaking) than with row-based databases.

Golden rule: Minimise the search overhead.
  • Define and use appropriate secondary indexes.
HANA relevance: As secondary indexes are not required by SAP HANA, this rule has lost some of its importance.

Golden rule: Keep load away from the database.
  • Avoid reading data redundantly.
  • Use table buffering where possible and do not bypass it.
  • Sort data in your programs (unless ordering is with the primary table key).
HANA relevance: In terms of HANA, you still want to keep unnecessary load away from the database. You DO, however, want to give the database your most data-intensive calculations. This can be achieved by heavy SQL called from the application side or by code pushdown to SAP HANA.
In example three there is a case of SQL code that produces a correct result set, but not in an optimized way.
SELECT *
  FROM ekko INTO TABLE it_ekko
  WHERE ebeln = lv_ebeln.

SELECT *
  FROM ekpo INTO TABLE it_ekpo
  FOR ALL ENTRIES IN it_ekko
  WHERE ebeln EQ it_ekko-ebeln.

Example 3: Inefficient SQL.

There are three main problems with these statements. First of all, they trigger two separate roundtrips to the database. Secondly, the second SQL may produce an unnecessarily large result set that may never be used, as there is no check for an empty FOR ALL ENTRIES table after the first SQL. Thirdly, the complete field list is fetched for both tables – which should only be the case if all fields will be used in the subsequent application logic.
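A possible rewrite addressing all three points (a sketch; the EKPO field list is a placeholder for the columns actually needed, and a JOIN could even merge the two statements into a single roundtrip):

SELECT ebeln
  FROM ekko INTO CORRESPONDING FIELDS OF TABLE it_ekko
  WHERE ebeln = lv_ebeln.

IF it_ekko IS NOT INITIAL.
  SELECT ebeln ebelp matnr
    FROM ekpo INTO CORRESPONDING FIELDS OF TABLE it_ekpo
    FOR ALL ENTRIES IN it_ekko
    WHERE ebeln = it_ekko-ebeln.
ENDIF.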

 

As with the replacement of malfunctioning code in the first chapter, adopting these best practices should not be programmatically challenging – but the code bits worth changing may be difficult to locate. With that in mind, SAP has extended the previously discussed Code Inspector for this purpose. In the tool, you will find analysis of WHERE conditions, buffer bypass checks, nested SELECTs, unsecure FOR ALL ENTRIES checks and more.

Part 3 – Identify business processes

As the custom code potentially contains hundreds or even thousands of SQL statements, knowing where to start optimising can be a challenge. Trying to validate and correct all hits returned by the Code Inspector would be time-consuming and would not provide an immediate boost to the processes that have the most to gain. As most systems have some amount of dead code, some of the effort would be time wasted. Somehow, you would like to find processes that are time-consuming in terms of SQL, highly frequent or data-intensive. Subsequently, you would want to combine those results with the potential code optimisations from the Code Inspector.
Most systems have large amounts of code that is unused (dead). Getting rid of dead code will be beneficial in terms of reducing the maintenance effort for your system. SAP Usage and Procedure Logging is a tool that can help identify the code you can delete from your system. The tool integrates with Custom Code Lifecycle Management in Solution Manager.

Dead code, SAP UP Logging

[SAP Active Global Support, Ref. 3]

In order to find and prioritize processes that are expensive in terms of database calls or volume, SAP has provided the new SQL Monitor. The tool can be activated in your SAP environment without disturbing the business processes, and can be executed even before migrating to HANA. It will provide performance data on all Open SQL statements executed in the system [Ref. 1: Bresch, Gebhardt & co.].

By letting the tool run in your production environment, you will be provided with valuable logs that can be sorted and filtered in several ways and dimensions. It will tell you which SQLs have the highest frequencies of use within the timeframe, and which ones are the most expensive in terms of runtime and load. Starting your optimizing efforts based on this result set would make sense. That would certainly be the case if you can identify some extreme cases of highly frequent SQLs or processes that stand out by their high execution time. However, it is likely that you will be faced with a long list of SQLs that are frequent, data-intensive and time-consuming all at once.

 

As a result of this, we want to find the areas that both have the potential for optimization and show up high in your SQL Monitor log. This can be done with the SQL Performance Tuning Worklist. It combines findings from both the Code Inspector and the new SQL Monitor: it correlates execution time, the amount of data involved and potential code deficits, and then points to the exact bits of code where you should place your effort.

Summary

In order to prepare your custom code for the migration to SAP HANA, you need to make sure that your code still works and does not produce functional errors. After that step the focus shifts towards optimising your processes. Several tools are suggested to achieve this, along with a set of rules and best practices that should be in focus for quality management and programmers. The end result should be a smooth migration of your custom code along with the SAP standard code, and a notable performance boost for your prioritised business processes.

References

Ref 1: “CD200: Tune Your Custom ABAP Code - Get Ready for SAP HANA” by Stefan Bresch, Boris Gebhardt, Jens Lieberum, Johannes Marbach, as presented at SAP TechEd in Amsterdam, November 2013.

 

Ref 2: “Recommendations for migrating suite systems to SAP HANA”, by SAP, SAP Note 1785057 v7, found at https://websmp209.sap-ag.de/sap/support/notes/1785057, retrieved November 2013.

 

Ref 3: “ITM114: Real Software Utilization with Usage and Procedure Logging” by SAP Active Global Support, as presented at SAP TechEd in Amsterdam, November 2013.

 

 

Consuming HANA Views, Procedures, External Views in ABAP 7.40 Syntax - Part 1


In this series I have tried to document my learning on ABAP 7.4 and ABAP for HANA.

 

 

Topics:

 

1. ABAP Report with new data declaration syntaxes on 7.40

2. ABAP Report on HANA using ADBC

3. Consuming Attribute View using External View.

4. Consuming Attribute View using Native SQL

5. Consuming Analytic View/Calculation View in ABAP

6. Consuming HANA artifact Stored Procedure using ABAP Proxy Procedure.

7. Consume HANA artifact Stored Procedure by Calling it in ABAP Code.

 


Part 1: http://scn.sap.com/community/abap/hana/blog/2014/01/08/consuming-hana-views-procedures-external-views-in-abap-740-syntax--part-1


  • ABAP Report with new data declaration syntaxes on 7.40
  • ABAP Report on HANA using ADBC


Part 2: http://scn.sap.com/community/abap/hana/blog/2014/01/08/consuming-hana-views-procedures-external-views-in-abap-740-syntax--part-2


  • Consuming Attribute View using External View.
  • Consuming Attribute View using Native SQL
  • Consuming Analytic View/Calculation View in ABAP


Part 3: http://scn.sap.com/community/abap/hana/blog/2014/01/08/as

  • Consuming HANA artifact Stored Procedure using ABAP Proxy Procedure.
  • Consume HANA artifact Stored Procedure by Calling it in ABAP Code.


 

T1. ABAP Report with new data declaration syntaxes on 7.40

* Open SQL, Native SQL, Defining native SQL with String Templates & Expressions

 

REPORT zabap_for_hana.

* Declarations
* ADBC objects and variables
DATA: lo_sql_stmt TYPE REF TO cl_sql_statement,
      lr_data     TYPE REF TO data.

* Exception handling
DATA: lx_sql_exc TYPE REF TO cx_sql_exception,
      lv_text    TYPE string,
      lo_alv     TYPE REF TO cl_salv_table,
      lx_msg     TYPE REF TO cx_salv_msg.

* Data objects
DATA: gt_pernr   TYPE ztt_emp_bill,
      lv_start   TYPE timestampl,
      lv_end     TYPE timestampl,
      lv_message TYPE string.

* Hello World program with the new syntax available for data declaration in 7.40
*DATA: lv_name TYPE string VALUE 'Hello World'.  " old syntax
DATA(lv_name) = 'Hello World'.                   " new syntax

DATA: lr_conn     TYPE REF TO cl_sql_connection,
      lt_emp_info TYPE TABLE OF dtab_emp_info.

WRITE: lv_name.
SKIP.

CREATE OBJECT lr_conn.
lr_conn->ping( ).

* Select
SELECT * UP TO 5 ROWS
  INTO TABLE lt_emp_info
  FROM dtab_emp_info.
SKIP.

* Read syntax
*DATA: ls_emp_info_2    TYPE dtab_emp_info.
*DATA: ls_emp_info_read TYPE dtab_emp_info.
*READ TABLE lt_emp_info INTO ls_emp_info_2 INDEX 2.
*READ TABLE lt_emp_info INTO ls_emp_info_read WITH KEY pernr = '00000005' department = 'SUPPORT'.

* The code above can be replaced as follows
DATA(ls_emp_info_2) = lt_emp_info[ 2 ].
WRITE: 'Read statement for index 2:-', ls_emp_info_2-pernr.
SKIP.

* Reading the internal table with a condition
DATA(ls_emp_info_read) = lt_emp_info[ pernr = '00000005' department = 'SUPPORT' ].
WRITE: 'Read statement with condition:-', ls_emp_info_read-pernr.
SKIP.

*READ TABLE lt_emp_info TRANSPORTING NO FIELDS WITH KEY pernr = '00000005' department = 'SUPPORT'.
* The same can be written as below with the new syntax
IF line_exists( lt_emp_info[ pernr = '00000005' department = 'SUPPORT' ] ).
  WRITE: 'Condition satisfied'.
ENDIF.

* Declaring the work area inline in the LOOP statement
LOOP AT lt_emp_info INTO DATA(ls_emp_info).
  WRITE: ls_emp_info-pernr.
ENDLOOP.
SKIP.

* Field symbols can also be assigned in the same fashion
LOOP AT lt_emp_info ASSIGNING FIELD-SYMBOL(<fs_emp_info>).
  WRITE: <fs_emp_info>-pernr.
ENDLOOP.
SKIP.

* Inline data declaration when calling a method
CALL METHOD zcl_test=>get_pernr
  EXPORTING
    iv_location = 'BANGALORE'
    iv_country  = 'IN'
  IMPORTING
    et_emp_info = DATA(lt_emp_info_tab).

DESCRIBE TABLE lt_emp_info_tab LINES DATA(lv_lines).
WRITE: 'Number of Records', lv_lines.
SKIP.

1.jpg

 

 

T2. ABAP Report on HANA using ADBC


TRY.
    CLEAR gt_pernr.

* Open SQL version for comparison
*    SELECT empinfo~mandt empinfo~pernr empbill~bill_rate
*      FROM dtab_emp_info AS empinfo INNER JOIN dtab_emp_bill AS empbill ON empinfo~pernr = empbill~pernr
*      INTO TABLE gt_pernr
*      GROUP BY empinfo~mandt empinfo~pernr empbill~bill_rate
*      ORDER BY empinfo~pernr.

* Differences between Open SQL and native SQL: comma-separated field list,
* explicit client handling, no 'INTO' clause

* Defining native SQL with string templates & expressions
    DATA(lv_sql) = | SELECT empinfo.mandt, empinfo.pernr, empbill.bill_rate, |
*          use HANA built-in function
          && | SUM( DAYS_BETWEEN( empbill.bill_date, CURRENT_UTCDATE ) ) AS last_bill_rev |
*         && | AVG( DAYS_BETWEEN( empbill.bill_date, CURRENT_UTCDATE ) ) AS last_bill_rev |
          && | FROM dtab_emp_info AS empinfo INNER JOIN dtab_emp_bill AS empbill ON empinfo.pernr = empbill.pernr |
          && | WHERE empbill.mandt = { sy-mandt } |
          && | GROUP BY empinfo.mandt, empinfo.pernr, empbill.bill_rate |
          && | ORDER BY empinfo.pernr |.

* Alternative: defining the same native SQL with CONCATENATE
* (commented out, since lv_sql is already declared above)
*    CONCATENATE ` SELECT empinfo.mandt, empinfo.pernr, empbill.bill_rate, `
*      ` DAYS_BETWEEN(empbill.BILL_DATE,CURRENT_UTCDATE) AS LAST_BILL_REV `
*      ` FROM DTAB_EMP_INFO AS empinfo INNER JOIN DTAB_EMP_BILL AS empbill ON empinfo.pernr = empbill.pernr `
*      ` WHERE empbill.mandt = ` sy-mandt
*      ` GROUP BY empinfo.mandt, empinfo.pernr, empbill.bill_rate, empbill.bill_date `
*      ` ORDER BY empinfo.pernr `
*      INTO lv_sql SEPARATED BY space.

* Create an SQL statement to be executed via the default secondary DB connection
    CREATE OBJECT lo_sql_stmt EXPORTING con_ref = cl_sql_connection=>get_connection( ).

* Execute the native SQL query
    DATA(lo_result) = NEW cl_sql_statement( )->execute_query( lv_sql ).   " new syntax

* Read the result into the internal table gt_pernr
    GET REFERENCE OF gt_pernr INTO lr_data.
    lo_result->set_param_table( lr_data ).  " retrieve result of native SQL call
    lo_result->next_package( ).
    lo_result->close( ).

  CATCH cx_sql_exception INTO lx_sql_exc.
    lv_text = lx_sql_exc->get_text( ).
    MESSAGE lv_text TYPE 'E'.
ENDTRY.

* Display
TRY.
    cl_salv_table=>factory(
      IMPORTING
        r_salv_table = lo_alv
      CHANGING
        t_table      = gt_pernr ).

    lo_alv->display( ).

  CATCH cx_salv_msg INTO lx_msg.
    lv_text = lx_msg->get_text( ).
    MESSAGE lv_text TYPE 'E'.
ENDTRY.

 

2.jpg

Consuming HANA Views, Procedures, External Views in ABAP 7.40 Syntax - Part 2


Part 1: http://scn.sap.com/community/abap/hana/blog/2014/01/08/consuming-hana-views-procedures-external-views-in-abap-740-syntax--part-1


  • ABAP Report with new data declaration syntaxes on 7.40
  • ABAP Report on HANA using ADBC


Part 2: http://scn.sap.com/community/abap/hana/blog/2014/01/08/consuming-hana-views-procedures-external-views-in-abap-740-syntax--part-2

  • Consuming Attribute View using External View.
  • Consuming Attribute View using Native SQL
  • Consuming Analytic View/Calculation View in ABAP


Part 3: http://scn.sap.com/community/abap/hana/blog/2014/01/08/as

  • Consuming HANA artifact Stored Procedure using ABAP Proxy Procedure.
  • Consume HANA artifact Stored Procedure by Calling it in ABAP Code.

 

 

T3. Consuming Attribute View using External View.


Step 1:  Create HANA Attribute view :


1.jpg

2.jpg



Step 2: Save and Activate the view and check the data by using ‘Data Preview’ option.


Steps to create external view:


Step 1: Go to the ‘ABAP’ perspective.

 

3.jpg

Step 2: Right-click on the ABAP package under which you want to create this external view. Under ‘New’, click on ‘Other ABAP Repository Object’.

 

4.jpg.png

 

Step 3: Expand ‘Dictionary’ folder and click on ‘Dictionary View’.

 

5.jpg

 

 

Step 4: Enter the name and description of the view, select the ‘External View’ radio button, then browse and select your HANA view.

 

6.jpg

 

Step 5: Click Next and Finish, then activate the view. This creates the external view in your ABAP system, which you can cross-check in SE11. The ‘Synchronize’ button should be used whenever changes are made to the HANA view.

 

7.jpg8.jpg

Source Code to consume the above created external View:


* External View
DATA: lt_tab TYPE TABLE OF external_view.

SELECT *
  INTO TABLE lt_tab
  FROM external_view.

LOOP AT lt_tab ASSIGNING FIELD-SYMBOL(<fs>).
  WRITE: / 'Pernr:', <fs>-pernr.
  WRITE: '=', <fs>-last_rev_bill_date, /.
ENDLOOP.


Output:


9.jpg

 


T4. Consuming Attribute View using Native SQL


DATA: lt_tab2 TYPE TABLE OF external_view.

* Consuming the attribute view via native SQL
TRY.

    lv_sql = | SELECT bill_rate, emp_name, bill_date, pernr, |
*          use HANA built-in function
          && | DAYS_BETWEEN(BILL_DATE,CURRENT_UTCDATE) AS LAST_BILL_REV |
          && |   FROM _SYS_BIC."mohas97_ha5/AT_EMP_BILL" |.

*   Create an SQL statement to be executed via the default DB connection
    CREATE OBJECT lo_sql_stmt EXPORTING con_ref = cl_sql_connection=>get_connection( ).

*   Execute the native SQL query / SQL call
    lo_result = NEW cl_sql_statement( )->execute_query( lv_sql ).   " new syntax

*   Read the result into the internal table lt_tab2
    GET REFERENCE OF lt_tab2 INTO lr_data.
    lo_result->set_param_table( lr_data ).  " retrieve result of native SQL call
    lo_result->next_package( ).
    lo_result->close( ).

  CATCH cx_sql_exception INTO lx_sql_exc.
    lv_text = lx_sql_exc->get_text( ).
    MESSAGE lv_text TYPE 'E'.

ENDTRY.

LOOP AT lt_tab2 ASSIGNING FIELD-SYMBOL(<fs>).
  WRITE: / 'Pernr:', <fs>-pernr.
  WRITE: '=', <fs>-last_rev_bill_date, /.
ENDLOOP.

 

10.jpg

 

T5. Consuming Analytic View/Calculation View in ABAP


Calculation view can also be consumed in the same way.


Step 1:  Create HANA Analytic view

11.jpg


Step 2: Save and Activate the view and check the data by using ‘Data Preview’ option.


Source Code to consume the above Analytic View:


* Consuming an analytic view with an input parameter
DATA: lt_proj TYPE ztt_emp_proj.

TRY.

    lv_sql = | SELECT mandt, PERNR, PROJ_NAME, RESOURCE_NO |
          && |   FROM _SYS_BIC."mohas97_ha5/AN_EMP_PROJ" |
*         && |  ('PLACEHOLDER' = ('$$IP_PERNR$$', ' { lv_pernr } ' ) ) |
          && |  WHERE mandt = { sy-mandt } |.
*         && |  ORDER BY bill_rate |.

* Source code for using an input parameter:
*    lv_sql = | SELECT mandt, PERNR, BILL_RATE, BILL_DATE |
*          && |   FROM _SYS_BIC."mohas97_ha5/AN_BILL_DATE" |
*          && |  ('PLACEHOLDER' = ('$$BILL_DATE$$', ' { sy-datum } ' )) |
*          && |  WHERE mandt = { sy-mandt }  GROUP BY mandt, pernr, bill_rate, bill_date |.

*   Create an SQL statement to be executed via the default DB connection
    CREATE OBJECT lo_sql_stmt EXPORTING con_ref = cl_sql_connection=>get_connection( ).

*   Execute the native SQL query / SQL call
    lo_result = NEW cl_sql_statement( )->execute_query( lv_sql ).   " new syntax

*   Read the result into the internal table lt_proj
    GET REFERENCE OF lt_proj INTO lr_data.
    lo_result->set_param_table( lr_data ).  " retrieve result of native SQL call
    lo_result->next_package( ).
    lo_result->close( ).

  CATCH cx_sql_exception INTO lx_sql_exc.
    lv_text = lx_sql_exc->get_text( ).
    MESSAGE lv_text TYPE 'E'.

ENDTRY.

LOOP AT lt_proj ASSIGNING FIELD-SYMBOL(<fs_proj>).
  WRITE: / 'Employee Proj Info', <fs_proj>-pernr.
  WRITE: '=', <fs_proj>-proj_name, /.
ENDLOOP.

 

 

Output:

12.jpg


Consuming HANA Views, Procedures, External Views in ABAP 7.40 Syntax - Part 3


Part 1: http://scn.sap.com/community/abap/hana/blog/2014/01/08/consuming-hana-views-procedures-external-views-in-abap-740-syntax--part-1


  • ABAP Report with new data declaration syntaxes on 7.40
  • ABAP Report on HANA using ADBC


Part 2: http://scn.sap.com/community/abap/hana/blog/2014/01/08/consuming-hana-views-procedures-external-views-in-abap-740-syntax--part-2

  • Consuming Attribute View using External View.
  • Consuming Attribute View using Native SQL
  • Consuming Analytic View/Calculation View in ABAP


Part 3: http://scn.sap.com/community/abap/hana/blog/2014/01/08/as

  • Consuming HANA artifact Stored Procedure using ABAP Proxy Procedure.
  • Consume HANA artifact Stored Procedure by Calling it in ABAP Code.



T6. Consuming HANA artifact Stored Procedure using ABAP Proxy Procedure.


Steps to create Stored Procedure:

Step 1: Go to the ‘Modeler’ or ‘SAP HANA Development’ perspective, right-click on your package and create a new procedure.

 

Step 2: Enter the name and description, select the ‘Default Schema’ (SAP-SID) and ‘Run With’ as ‘Invoker’s Rights’, and click Finish.

 

1.jpg

 

Step 3: Add ‘Output’ and ‘Input’ parameters by right clicking on the respective folders

2.jpg

Step 4: Place the code in the procedure editor

 

BEGIN

  et_last_rev_bill = SELECT empinfo.pernr AS PERNR,
                            DAYS_BETWEEN(empbill.BILL_DATE, CURRENT_UTCDATE) AS LAST_REV_BILL
                       FROM DTAB_EMP_INFO AS empinfo INNER JOIN DTAB_EMP_BILL AS empbill
                         ON empinfo.pernr = empbill.pernr
                      WHERE empbill.mandt = empinfo.mandt
                      GROUP BY empinfo.pernr, empbill.bill_date;

END;
/********* End Procedure Script ************/


3.jpg

Step 5: Save and activate the procedure, then execute it in the ‘SQL Console’.

 

Call"_SYS_BIC"."mohas97_ha5/ZEMP_BILL_PP"(?) WITH OVERVIEW;


Call"_SYS_BIC"."mohas97_ha5/ZEMP_BILL_PP"(?);

 

4.jpg

 

 

Now create a Proxy Procedure in ABAP:

Step 1: Go to the ‘ABAP’ perspective.

5.jpg

 

Step 2: Right-click on the ABAP package under which you want to create this proxy procedure. Under ‘New’, click on ‘Other ABAP Repository Object’.

6.jpg

 

Step 3: Choose ‘Database Procedure Proxy’.

 

7.jpg

 

Step 4: Enter the name, the description, and the name of the HANA procedure.
8.jpg

 

Step 5: Now your proxy procedure is created; just activate the object.

9.jpg

 

Source code to Consume the ‘Proxy Procedure’  in ABAP:

 

DATA: lt_bill_rev_days TYPE TABLE OF if_emp_bill_proxy_procedure=>et_last_rev_bill,
      lv_count         TYPE i.

CALL DATABASE PROCEDURE emp_bill_proxy_procedure
  IMPORTING
    et_last_rev_bill = lt_bill_rev_days.

LOOP AT lt_bill_rev_days ASSIGNING FIELD-SYMBOL(<fs_bill_rev_days>).
  WRITE: / 'Days Since Bill Rate is changed', <fs_bill_rev_days>-pernr.
  WRITE: '=', <fs_bill_rev_days>-last_rev_bill, /.
ENDLOOP.

 

Output:

10.jpg

 

 

T7. Consume HANA artifact Stored Procedure by Calling it in ABAP Code.

 

Source Code


* Calling a HANA procedure in ABAP
TYPES: BEGIN OF lty_s_overview,
         param TYPE string,
         value TYPE string,
       END OF lty_s_overview.

DATA: lt_overview TYPE TABLE OF lty_s_overview.
DATA: ls_overview TYPE lty_s_overview.

TRY.

    lv_sql = | CALL _SYS_BIC."mohas97_ha5/ZEMP_BILL_PP" | &&
             | ( null ) WITH OVERVIEW |.

*   Execute the native SQL query / SQL call
    lo_result = NEW cl_sql_statement( )->execute_query( lv_sql ).   " new syntax

*   Read the result into the internal table lt_overview
    GET REFERENCE OF lt_overview INTO lr_data.
    lo_result->set_param_table( lr_data ).  " retrieve result of native SQL call
    lo_result->next_package( ).
    lo_result->close( ).

*   Read the name of the generated result table from the overview
    READ TABLE lt_overview INTO ls_overview WITH KEY param = 'ET_LAST_REV_BILL'.
    lv_sql = ` select * from ` && ls_overview-value.

*   Execute the native SQL query / SQL call
    lo_result = NEW cl_sql_statement( )->execute_query( lv_sql ).   " new syntax

*   Read the result into the internal table lt_bill_rev_days
    GET REFERENCE OF lt_bill_rev_days INTO lr_data.
    lo_result->set_param_table( lr_data ).  " retrieve result of native SQL call
    lo_result->next_package( ).
    lo_result->close( ).

  CATCH cx_sql_exception INTO lx_sql_exc.
    lv_text = lx_sql_exc->get_text( ).
    MESSAGE lv_text TYPE 'E'.

ENDTRY.

LOOP AT lt_bill_rev_days ASSIGNING FIELD-SYMBOL(<fs_bill_rev_days>).
  WRITE: / 'Days Since Bill Rate is changed::', <fs_bill_rev_days>-pernr.
  WRITE: '::', <fs_bill_rev_days>-last_rev_bill, /.
ENDLOOP.

11.jpg

SAP CodeJam Istanbul


Great event at an awesome location!

 

Istanbul was the first location to host the ABAP on HANA CodeJam this year. With 40 registered attendees we decided to send not one but two experts to the event... and so Jens Weiler and I went there... If you followed Jens (@ABAP4H) on Twitter (#SAPCodeJam) you already heard about it... it's an awesome place (completely overwhelming!!).

 

So with a fully packed room of 40+ CodeJamers we started the event - thanks again to Ibrahim Gunduz, Cigdem Gunduz, and Abdulbasit Gulsen (sorry, I'm lacking the Turkish characters...) for the great organization! We had an awesome, chatty networking atmosphere at the event - really great, thanks so much for your hospitality (Turkey is famous for it, but to experience it is even better).

 

Already thinking and talking about further CodeJam events, let me just say: I'd love to come back and do more events!!

 

Again, thanks a lot - hope you enjoyed it as much as we did!

 

  Jasmin

 

 

Just a glance at the event (more pics to come on http://facebook/sapcodejam):

IMG_3997.JPG


And as you can see... even a broken leg could not stop Ibrahim from joining the event :-)
IMG_3979.JPG

Hands-On with the SQL Monitor Part 3: Analysis of Individual Database Tables


Hey folks. I’m back again with another hands-on scenario for the SQL Monitor. Last time we focused on SQL statements accessing huge volumes of data and empty FOR ALL ENTRIES tables. Today I will show you how to use the SQL Monitor to analyze individual database tables.

 

Imagine the following situation: In our system we have a custom database table ZSQLM_TEST_ORDER which is central to a number of our business processes. Since we are curious about the actual usage of this table in our productive system, we pose ourselves the following questions:

 

  1. Which request entry points (that is business processes) have accessed the table and which of them is most expensive in terms of SQL executions and database time?
  2. What is the table’s SQL operation (SELECT, INSERT, …) profile and does the table qualify for table buffering?

 

Question 1

 

"Which request entry points (that is business processes) have accessed the table and which of them is most expensive in terms of SQL executions and database time?"

 

As in the previous hands-on scenarios, the starting point of our analysis is the data display transaction SQLMD. The first question asks us to find the list of request entry points that have accessed our database table ZSQLM_TEST_ORDER. For this purpose we enter the table name in the selection screen field “Table Name” and select to aggregate the result by request in section “Aggregation”. Depending on the number of requests you expect, it might also be advisable to clear the field “Maximal Number of Records” to avoid truncation. However, for our particular case the default value of 200 will be more than sufficient. The configured selection screen looks like this:

Capture_New1.PNG

Screenshot 1: Selection screen of transaction SQLMD configured for the first question

 

After hitting F8, we are then directly presented the list of request entry points we looked for. As you can see in the screenshot below, there are three ABAP reports and an RFC function module that have accessed our database table.

Capture_New2.PNG

Screenshot 2: List of all request entry points (business processes) that have accessed the database table ZSQLM_TEST_ORDER.

 

For the second part of the first question we need to find the most dominant request out of the list. This can easily be achieved by evaluating the columns “Executions” and “Total Time”. A quick check reveals that the record at the very top of the list (ZSQLM_TEST) causes about half of all SQL executions on the table and accounts for about 50% of the total database time spent on the table. So if, for instance, we wanted to reduce load on this specific database table, we would first analyze SQL statements triggered by this top request entry point.

 

Question 2

 

"What is the table’s SQL operation (SELECT, INSERT, …) profile and does the table qualify for table buffering?"

 

Question number two asks for the SQL operation profile of our database table, that is, the distribution of the number of SQL executions over SELECT, INSERT, UPDATE, … operations. In addition, the second part of the question is concerned with the possibility of table buffering. Therefore, we will also need to know how the number of accessed database records spreads across the different operation types. In order to obtain this information, we need to return to the selection screen of SQLMD and make some adaptations. First, we disable aggregation by selecting “None” in section “Aggregation”. Moreover, we clear the field “Maximal Number of Records” in order not to truncate any records. Here is the resulting selection screen:

Capture_New3.PNG

Screenshot 3: Reconfiguration of the selection screen of SQLMD for the second question.

 

Hitting F8 now brings up a list of all unaggregated SQL Monitor records that involve the table ZSQLM_TEST_ORDER. Note that here unaggregated means that the monitoring records were not grouped by source code position or request entry point. The reason for choosing this representation is that now we can see the SQL operation type directly in the top-level list (column “SQL Operation Type”) without having to drill down.

 

To obtain the distribution of SQL executions and accessed database records over the different SQL operation types, we can now use the standard ALV functionality for calculating subtotals. All we need to do is select the column “SQL Operation Type” and click the “Subtotals…” button in the ALV toolbar. Afterwards we can select the columns “Executions” and “Total DB Records” and click the “Total” toolbar button to calculate the relative amounts. The final result is shown in the following screenshot.

Capture_New4.PNG

Screenshot 4: List of all unaggregated SQL Monitor records that relate to the database table ZSQLM_TEST_ORDER . The column “Total DB Records” was dragged to the front. The subtotals for the columns “Executions” and “Total DB Records” were calculated using the standard ALV functionality. The groups for INSERT and DELETE statements were collapsed.

 

As you can see from the screenshot above, more than 99% of all SQL executions on our table are SELECT statements while DELETE and INSERT operations account for significantly less than 1% (see column “Executions”). This almost looks like a read-only scenario. However, despite being negligible in terms of SQL executions, DELETE and INSERT statements are still responsible for more than 40% of all accessed database records (see column “Total DB Records”).

 

So what does this mean in regard to buffering the table? In general, writing accesses are disadvantageous for the table buffer since they lead to potentially costly synchronization processes between application servers. In our particular case a relatively high amount of database records are accessed in a number of DELETE and INSERT statements that is vanishingly small with respect to the vast number of SELECT statements. On average each individual writing access modifies 100,000 records (divide “Total DB Records” by “Executions” in Screenshot 4) and, thus, leads to a complete invalidation of the buffered table. Hence, the following buffer synchronization can be performed in a block operation which is relatively fast as compared to updating the records individually. Since this happens only once for every 35,000 SELECT statements (divide number of SELECTs by number of INSERTs and DELETEs in screenshot 4), it is an acceptable overhead. In total we can conclude that buffering our database table will very likely result in a significant speed-up of reading accesses.

 

Before activating the table buffer for your own database tables you should, however, be aware of the fact that data that is read from the buffer might not be up-to-date. This is because synchronization across multiple application servers due to writing accesses can take up to several minutes. Consequently, you should not use buffering when reading obsolete data is not acceptable for your application. Depending on your specific case this might even have legal implications.

 

Wrap-Up

 

In today’s hands-on scenario I showed you how to analyze your recorded SQL Monitor data on the level of database tables. With great ease, we have obtained a list of request entry points – that is business processes – that have accessed a specific database table. In addition, we employed standard ALV features to construct our table’s SQL operation profiles for the number of executions and the number of accessed database records. These profiles led us to the conclusion that buffering should be considered an option for our particular table.

 

It is important to note that as an application developer you could have had a hard time getting this insight from your source code alone. While you might know how many reading and writing database accesses are in your code, you usually will have no way to determine how often these operations occur and how many database records they access during productive use of your application. It might even get more difficult if your application doesn’t have exclusive access to the database table. The SQL Monitor greatly simplifies this task.

 

Finally, let me remark that the table filtering feature of transaction SQLMD has a wide range of additional use cases. For instance, it can help you to find each and any development object that has accessed a particular database table. This might be important when you want to analyze the encapsulation and security of your data. In addition, the table filtering can also assist you in locating lock situations caused by concurrent accesses to a table from different processes.

 

That’s enough for today. If you still can’t get enough of SQL monitoring make sure not to miss my next post in ABAP for SAP HANA.

New ABAP for HANA features in SAP NW 7.4 SP5


With the newly released AS ABAP 7.4 SP5 we have deepened the interplay between ABAP and SAP HANA and enhanced the capabilities of ABAP for HANA development. Following the code-to-data paradigm, we have focused on enabling you to innovate faster and better with ABAP for HANA.


So what is inside this release for ABAP on HANA developers?


Performance Worklist Tool

 

The SQL Performance Worklist (transaction SWLT) allows you to correlate SQL runtime data with ABAP code analysis to plan optimizations, using results from the new SQL Monitor and the Code Inspector.

swlt.png


More details on the Performance Worklist Tool can soon be found in a detailed document here in SCN (The SQL Performance Worklist Tool ).

 

 

 

CDS Viewbuilding

 

CDS view building drastically improves the existing view building mechanisms in ABAP. It offers a broad variety of features you may know from the SQL-92 standard, such as all kinds of join conditions, support for unions, expressions, associations and much more. Also important: we will continue to extend the feature set in the upcoming releases. CDS view building is completely syntax based, and the design time experience – with syntax checks, code completion and many more features – is directly integrated into the ABAP Development Tools.

 

cds.png
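To give a first impression of the syntax, here is a minimal sketch of a CDS view with a join and an aggregation (the names ZV_INVTOT, Z_Invoice_Totals, zdemo_invoice and zdemo_invoice_item are made up for illustration):

@AbapCatalog.sqlViewName: 'ZV_INVTOT'
define view Z_Invoice_Totals as
  select from zdemo_invoice as inv
    inner join zdemo_invoice_item as item
      on inv.invoice_id = item.invoice_id
{
  inv.invoice_id,
  inv.currency,
  sum( item.amount ) as total_amount
}
group by inv.invoice_id, inv.currency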

 

More details on CDS Viewbuilding can soon be found in a detailed document here in SCN ( CDS Viewbuilding ).

 

 

 

Extended OpenSQL

 

Open SQL has been enhanced with new features; in particular, a lot of limitations have been removed, the JOIN functionality has been enhanced, as have arithmetic and string expressions. Moreover, a new Open SQL syntax has been introduced which incorporates, e.g., the escaping of host variables, comma separation of the field list, and so on.

 

OSQL.png
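To illustrate, here is a minimal sketch of the new Open SQL syntax – comma-separated field list, host variables escaped with @, and an inline-declared result table (using the standard flight demo table SFLIGHT):

" New Open SQL syntax on 7.4 SP5
DATA(lv_carrid) = 'LH'.

SELECT carrid, connid, seatsocc
  FROM sflight
  INTO TABLE @DATA(lt_flights)
  WHERE carrid = @lv_carrid.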

 

ABAP managed Database Procedures (AMDP)

 

AMDPs enable you to create database procedures directly in ABAP using, e.g., SQLScript. And to integrate seamlessly into modern ABAP development, an AMDP is implemented as an ABAP method. Curious how this looks? Here we go:

 

AMDP.png

As you can see in the picture above, the choice of whether the method is classical ABAP or SQLScript is made in the method implementation and not in the definition. This makes it possible to redefine an existing method implemented in ABAP with SQLScript, and the other way around.
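For illustration, here is a minimal sketch of such a class, assuming the standard flight demo table SFLIGHT (the class name is made up; the marker interface and the BY DATABASE PROCEDURE addition are the ones introduced with 7.4 SP5):

CLASS zcl_amdp_demo DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " Marker interface that enables AMDP methods in this class
    INTERFACES if_amdp_marker_hdb.

    TYPES: tt_flights TYPE STANDARD TABLE OF sflight WITH DEFAULT KEY.

    METHODS get_flights
      IMPORTING VALUE(iv_carrid)  TYPE s_carr_id
      EXPORTING VALUE(et_flights) TYPE tt_flights.
ENDCLASS.

CLASS zcl_amdp_demo IMPLEMENTATION.
  " The method body is SQLScript, executed directly on SAP HANA
  METHOD get_flights BY DATABASE PROCEDURE
                     FOR HDB LANGUAGE SQLSCRIPT
                     USING sflight.
    et_flights = SELECT * FROM sflight
                  WHERE carrid = :iv_carrid;
  ENDMETHOD.
ENDCLASS.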


The lifecycle management of an AMDP is exactly the same as for any other ABAP method. The ABAP server takes complete care of the deployment and activation of the procedure on the database.

 

More details for AMDP will follow soon and can be found in a detailed document here in SCN ( ABAP Managed Database Procedures ).

 

 

 

Transparent Optimizations

  • Fast Data Access (new data exchange protocol)
  • optimized SELECT... INTO ITAB and SELECT SINGLE
  • optimized FOR ALL ENTRIES-clause

 

So that's it - hope I made you excited. If you want to try it out, the new AS ABAP 7.4 SP5 Developer Edition will be available beginning of February - See here SCN Trial Editions: SAP NetWeaver Application Server ABAP 7.4.  Please check back on this ABAP for SAP HANA Space. We will continuously deliver detailed information of the new features and how-to use them.

 

Stay tuned & have fun with ABAP

 

Jens

ABAP for HANA and "Code Push-Down"


With the brand new NW AS ABAP 7.4 SP5 we are adding a new possibility for ABAP Developers to leverage HANA capabilities ( New ABAP for HANA features in SAP NW 7.4 SP5).

 

The whole ABAP for HANA story began last year with our first release of NW AS ABAP 7.4 introducing a new coding paradigm (at least for ABAP): “code pushdown”.

codepushdown.png

 

So what does this mean?

Code pushdown means delegating data-intensive calculations to the database layer. It does not mean pushing ALL calculations to the database, but only those that make sense. An easy example: if you want to calculate the total amount of all positions of invoices, you should not select all positions of those invoices and calculate the sum in a loop. This can easily be done by using an aggregation function (here SUM( )) on the database.
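As a minimal sketch of the idea (using the standard flight demo table SFLIGHT; both variants compute the same total):

* Before: all rows are transported to the application server and summed in a loop
DATA: lt_flights TYPE TABLE OF sflight,
      lv_total   TYPE i.

SELECT * FROM sflight INTO TABLE lt_flights WHERE carrid = 'LH'.
LOOP AT lt_flights INTO DATA(ls_flight).
  lv_total = lv_total + ls_flight-seatsocc.
ENDLOOP.

* After: the aggregation is pushed down to the database
SELECT SUM( seatsocc ) FROM sflight INTO lv_total WHERE carrid = 'LH'.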

 

However, pre ABAP 7.4 releases only provided very limited support for such an approach. In the 7.4 SP2 release we did our first step to overcome this limitation. We introduced new ABAP artifacts to consume HANA views and procedures in ABAP: The “Bottom-Up approach”.

 

bottomup.png

 

Sounds perfect? Yes, but unfortunately it has some drawbacks. First of all, as a developer, you have to work in both worlds (HANA and ABAP). This requires a database user, which alone is quite tricky in some enterprise environments. You also have to keep the HANA and ABAP artifacts in sync and take care of the lifecycle management.

 

With the new SP5 release we are taking the next step to enabling code pushdown with ABAP for SAP HANA: The “Top-Down approach”. It is based on your feedback and our own experience using ABAP for SAP HANA. It enables developers to continue working in their ABAP environment and still have similar possibilities to leverage the power of SAP HANA.

 

topdown.png

 

This approach comes with a huge bag of goodies, like writing SQL Script or being able to create database views with new join and aggregation capabilities directly in ABAP. We will describe these features in detail in more blogs that will follow soon.

 

By the way: this does not mean you do not need the Bottom-Up approach anymore. There are still special cases which you won’t be able to tackle with these new “Top-down”-features. But we are continuing to enrich the ABAP possibilities with every SP. Stay tuned.

 

If you are interested in those new features, there are already great videos available in our YouTube Channel: ABAP for SAP HANA - YouTube

 

Cheers

 

Jens

Oracle legacy syntax not working on any other DBMS (Native SQL Statements are not working when migrated to HANA)

Next Generation of ABAP on HANA CodeJams


After conducting the ABAP on HANA CodeJam in Istanbul last month, we went back to our desks and started to update the content of our CodeJam.

 

Having all the new cool features in AS ABAP 7.4 Support Package 5 (see e.g. New ABAP for HANA features in SAP NW 7.4 SP5 blogged by Jens), a lot of new hands-on exercises enter the game.

 

Curious?

... join the event!

The next CodeJam event takes place February, 26th 2014 in Bielefeld ( SAP CodeJam Bielefeld - ABAP on HANA Registration, Bielefeld - Eventbrite ). You have the chance to meet ABAP and HANA experts from the SAP development, and to put your hands on the new features for ABAP on HANA development.

... or organize it yourself!

If you are interested in participating but there is no CodeJam event available near your location, why not host one yourself? Click here for more information!

 

We are looking forward to meeting you at a CodeJam!


DSAG Technologietage 2014: Best practices for Custom Development with ABAP and SAP HANA Platform


DSAG-Technologietage 2014, Stuttgart Feb-18 and Feb-19: For two days 1950 participants created one of the biggest technology events in Germany, Austria and Switzerland (thereby achieving an all-time high at DSAG). During this conference all questions concerning custom development using SAP HANA Platform and SAP NetWeaver played a central role (certainly not the only one, many other technology topics were covered as well). The following notes comprise my takeaways with respect to custom development and extensions of SAP solutions.

 

 

"Level completed? Embark on the journey to the new world of technology" – this theme was illustrated by

Andreas Giraud, DSAG board member for Tecchnology, in his opening keynote.  Andreas explained

how technology acts as a driver for business innovations.

In the SAP space, a survey on planned investments at DSAG members provides helpful

insight into the trends of 2014 (Mobile, User Experience, SAP HANA etc.).

Many customers and partners plan innovation projects with SAP HANA,

with special emphasis on hybrid solutions (cloud and on-premise).

 

Vishal Sikka (SAP board member heading Products and Innovations) presented

major priorities in a short video message. Based on the paradigms of Timeless Software,

Design Thinking and SAP HANA, he outlined the five growth opportunities of this year:

- SAP HANA, special attention is paid to simplicity, user and developer experience (collaboration, real-time source code analysis etc.)

- Cloud Services (creating the freedom to innovate)

- Core Apps (realtime apps, merging OLAP and OLTP, simplifying landscapes and applications, potential to remove 90% of tables) 

- Edge Apps (innovative apps and realtime business networks)

- Co-innovation (pursuing new frontiers, custom development plays an important role)

He summarized his findings by the observation, that trusted co-innovation is at the heart of this journey.

 

Bernd Leukert (Member of the Global Managing Board of SAP AG) provided an overview of technology investment areas at SAP in his keynote. With SAP HANA Cloud Platform, a new developer experience is offered to the community. Even more generally, SAP HANA as an innovation platform enables a significant simplification of development and operations. In particular, he mentioned these tasks:

- Identify business processes which provide competitive advantage

- Componentization: focus on custom development

His keynote was completed by a compelling demo on developer experience.

 

After this series of keynotes, all participants were invited to one of 6 parallel breakout sessions.

I chose some lectures from the track “Application Development, Portals and UI”.

 

Andreas Wesselmann (SAP) started with an overview session on “Application Development with SAP HANA Platform”, where he identified new types of applications, e.g., in health (cancer diagnostics and treatment) or soccer (training analysis). Certainly, optimization of existing custom development solutions also plays a very important role. A major pillar of this optimization is given by the paradigm of “code pushdown”, i.e., delegating data-intensive calculations from AS ABAP to SAP HANA. While BW on HANA served as a frontrunner, other scenarios can be achieved by a co-deployment of AS ABAP and HANA. In addition, hybrid extensions of existing on-premise solutions can easily be established with SAP HANA Cloud Platform (HCP). HCP can also be used for on-demand extensions and net-new apps. Fiori apps can be easily created via HCP.

 

Customers using SAP NetWeaver could get a concise overview of “SAP NetWeaver 7.4” in the subsequent lecture given by Karl Kessler (SAP), who elaborated on Big Data as a key driver for this release. While the application server layer delegates major tasks to HANA (code pushdown) and SAPUI5, many application services still remain there. These principles were explained in detail using the reference scenario “Open Item Analytics”. Karl also pointed out that SAPUI5 is contained in NW 7.4 out of the box, while it can still be used as an add-on for lower releases. OpenUI5 is an ambitious invitation for the developer community to contribute and participate in innovation cycles.

 

Dirk Basenach (SAP) explained how to run “Application Development in the Cloud”, providing deep insights into SAP HANA Cloud Platform (HCP), the Platform-as-a-Service (PaaS) offering from SAP. Starting from the observation that businesses today show an increasing demand for efficient cost control, fast innovation cycles and minimized risk, Dirk pointed out that HCP as a public cloud fulfills these requirements. SAP also offers dedicated packages for special purposes, e.g., the SuccessFactors extension package, hybrid extensions of on-premise solutions, and net-new apps (e.g., SAP Mobile Documents, SAP Precision Marketing, SAP Service on Demand). HCP enables accelerated development with open and modular services, e.g., a document service (CMIS), a connectivity service (using a reverse proxy approach, HTTP and RFC) and an identity service (SAML-based SSO, support of SAP and 3rd-party identity providers). It is possible to use Java and HANA native development on HCP. Several customer success stories (Accenture, Danone, OPAL etc.) completed the big picture of this story.

 

Day 2 of the DSAG Technologietage is always devoted to work groups (AK = Arbeitskreis), this year with 22 parallel sessions (from “AK Application Integration” to “AK Virtualization and Cloud Computing”).

Here are my notes from the sessions I attended today:

 

AK Development, Jochen Kleimann (Mahle Behr), Jürgen Stolz (StolzIT), Peter Barker (SAP) on „Best practices: Betriebsmittelverwaltung with Interactive Forms“: The speakers explained how Mahle Behr established cross-organization processes with interactive offline forms. They achieved the goal of minimizing training effort for hundreds of suppliers globally. The main technologies are Floorplan Manager (FPM), Sidepanel and SAP Interactive Forms by Adobe (IFbA). While they shared best practices and major achievements, they also identified some gaps, e.g., regarding the integration of BW data in FPM. Finally, Peter outlined how IFbA will enable Mobile Forms (HTML5) and Cloud Forms (ADS on HCP) starting in 2014.

 

AK Development, Tobias Kaufman (SAP) on “Native HANA Development”: Tobias demonstrated how to build customer-specific applications using the two-tier architecture of SAP HANA XS and SAPUI5, including recommendations on when to use what (ABAP or XS). He provided detailed insight into how to use server-side JavaScript, outbound connectivity (e.g., HTTP requests to other servers) and OData. Tobias also explained that SQLScript should be used for heavyweight calculation logic in the HANA DB, and that the benefits of the Application Function Library can be easily obtained in many cases (e.g., Predictive Analysis). He shared details and best practices regarding data access with XSJS, XSODATA and Core Data Services. In addition, special attention was paid to the River concept (a single language for data modeling, business logic and UI), best suited for data-driven apps. He illustrated these observations with a reference scenario (SHINE = SAP HANA Interactive Education).

 

AK Development, Jakob Mainka (Linde AG) on “Implementation of ABAP Test Cockpit (ATC) at Linde AG”: This lecture offered helpful information on how Linde operates a global business in which custom development plays an important role (e.g., 100,000 new lines of code created by 120 developers every month). Obviously, quality assurance and compliance according to GMP (Good Manufacturing Practice) is mission critical in this environment. While Linde spent high effort on and managed tedious processes for code review in old systems (4.6, 6.0), they have chosen ATC as an alternative approach for better quality and agility. With ATC, they have less effort and lower costs. Jakob Mainka explained how quality assurance of external development projects is managed (e.g., internal definition of check variants), and how a proper configuration (release of transports, periodic jobs etc.) can further simplify processes and increase quality.

 

AK Portal and Development: Daniel Rothmund (Geberit) on “Project ‘Mobile Invoices’ with SAPUI5”: The speaker explained how Geberit uses Portal, Gateway and multiple ERP systems in this ambitious project. While the straightforward use cases were implemented quickly, additional requirements could also be mastered, e.g., offline scenarios (based on roundtrips to Gateway), or using a multi-origin function for write operations in OData batch mode (with BAdI Destination Finder). At Geberit, they were also able to define and implement custom-specific controls. The productive start of this project was very successful; they achieved a high level of acceptance, and no classroom training was needed.

 

AK Virtualization and Cloud Computing, Jana Richter (SAP) on “Flexible ABAP Server”: Jana provided very helpful explanations and demos on these aspects:

- Optimized resource consumption, support of adaptive computing (e.g., dynamic parameter values at server start and during operations, flexible license management)

- Ensuring business continuity, simplified maintenance (e.g., soft shutdown, Rolling Kernel Switch with Scale-In scenario and dynamic configuration)

- Extended monitoring and troubleshooting options (SM04, SM51, SM50 etc.)

- Outlook (NW 7.4 SPS 08, reduced downtime (SUM), co-deployment of ABAP and HANA)

 

Overall, I have seen an encouraging spirit at this conference. Custom development is seen as a major use case by almost all SAP customers. DSAG offered a great opportunity to learn more about how to optimize and modernize development processes, with many helpful insights from customers, partners and SAP.

Hands-On with the SQL Monitor Part 4: Administration


Welcome back to another round of SQL Monitor hands-on action. While previous posts in this series were concerned with analyzing the recorded data, today we will focus on administrative tasks. In particular, I will guide you through a full administrative cycle of the SQL Monitor.

 

In general, such a cycle begins with the selection of a suitable measurement period. For example, you might decide to monitor a specific time slice – for instance 7 days – to inspect day-to-day business in your system. Alternatively, the measurement period can also be determined by certain events such as a year-end closing. In either case, you should make sure to limit code changes to a minimum during the monitoring interval since these changes might distort the recorded data and complicate the analysis.

 

Having decided for a monitoring interval, the next steps are to activate the SQL Monitor and to ensure that it remains in a consistent state throughout its operation. At the end of the measurement period you can then either deactivate the SQL Monitor manually or take advantage of the automatic deactivation feature which will be described later in this article. Afterwards you may analyze the recorded data and adapt your source code to improve its performance. Finally, to analyze the effect of your adaptions, you should reset the previously recorded data and start the monitoring cycle over again.

 

Summarizing this brief description, the monitoring cycle consists of the following phases

 

  1. Activation – activate the monitoring

  2. Monitoring – check & ensure the monitoring status

  3. Deactivation – deactivate the monitoring

  4. Reset – archive the collected data & reset the monitoring

Each of these steps will be described in detail in the remainder of this post. Before diving into the specifics of the different phases, however, there is one important point worth covering: authorizations. The SQL Monitor’s authorization concept involves two different roles – the data analyzer and the administrator. The former evaluates the recorded data with the data display transaction SQLMD while the latter administrates the monitor using transaction SQLM. Both of these roles require the authorization profile for object S_DEVELOP (activity 03) as the bare minimum. In addition, you’ll need the authorization profile for object S_ADMI_FCD with value SQMD for the data analyzer role and value SQMA for the administrator role, respectively.

 

That being said, let’s turn to the individual administrative phases 1 to 4.

 

Phase 1 – Activation

 

As mentioned earlier, the starting point of all administrative tasks is the transaction SQLM. Its main screen (see Screenshot 1) contains information about the current status of the monitor (section “State”) as well as a number of control buttons (sections “Activation / Deactivation” and “Data Management”).

Capture1.PNG

Screenshot 1: Main screen of administration transaction SQLM.

 

As you can see, the monitoring is currently deactivated (field “Activation State” in section “State”). To activate it, you can use the buttons “All Servers” and “Select Servers” in section “Activation / Deactivation”. The former activates the monitoring globally – that means for every application server in the system – whereas the latter allows you to control the activation for each server individually.

 

In general, you will want to track all processes that are executed in your system and so you should usually activate the monitoring globally. For demonstrative purposes, however, let’s assume that we wanted to activate the monitoring on the local application server only. After clicking the button “Select Servers” a popup window appears in which we can select the individual servers for activation. The local application server is marked with an arrow which makes the task of locating and selecting it a breeze.

Capture2.PNG

Screenshot 2: Application server selection popup.

 

After selecting the local server and confirming the popup another popup appears and asks us to specify an expiration date for the activation. The monitoring will automatically be deactivated when the specified date and time are reached. Setting the expiration date is mandatory and has two benefits. Firstly, it lets you plan the length of your monitoring phase ahead of time so you don’t need to remember to come back and deactivate the monitoring. Secondly, if for any reason you forget to deactivate the monitoring, the automatic deactivation avoids recording of unneeded monitoring data.

 

Note that you can always change the scheduled deactivation date and time after the monitor was activated by using the application menu (from the menu bar select “SQL Monitor” > “Maintain Deactivation Schedule”).

Capture3.PNG

Screenshot 3: Activation expiration popup.

 

After confirming the popup the monitoring is, finally, activated for the local server and the state information on the main screen is updated accordingly. In addition, we see a tabular representation of the activation state for the individual servers.

Capture4.PNG

Screenshot 4: Updated main screen of transaction SQLM after activation of the monitoring on the local server.

 

Every time you change the activation state of the SQL Monitor a corresponding log entry will be created in the SQL Monitor log. The log can be accessed via the “History” (in later versions “Log”) toolbar button on the main screen of transaction SQLM (see Screenshot 4). It collects all past administrative actions that were performed on the SQL Monitor and helps you answer questions such as “How long did we run the monitoring last time?” or “When was the data reset the last time?”.

Capture5.PNG

Screenshot 5: SQL Monitor History.

 

Now that the SQL Monitor is active we can switch to the second phase – the actual monitoring.

 

Phase 2 – Monitoring

 

Having activated the SQL Monitor there is not much to do for you from an administrator point of view. It might, however, be good practice to cast an eye on the monitoring status from time to time for the rare case of failure. That’s monitoring of the monitor if you like.

 

In general, all required status information and any failures will be presented to you on the start screen of transaction SQLM. One of the things you should watch out for here is an explosion in the number of collected monitoring records. Such a situation could occur if lots of generated coding is executed in your system. The SQL Monitor internally uses a number of aggregation strategies to limit the number of monitoring records but depending on your particular case there might be a rare chance that these counter-measures prove ineffective. Therefore, it is advisable to keep a close eye on the number of monitoring records (field “Number of Records” in section “State”) during the first few hours of operation. In general, the number will quickly saturate. If, however, you find the number of records rising indefinitely without showing signs of saturation, you should deactivate the monitoring to prevent unnecessarily high data volumes. At this point it might be worth mentioning that if you have SAP Note 1972328 applied in your system, the SQL Monitor will automatically shut off when the number of records exceeds 2,000,000. Besides, this note also contains tons of other enhancements and fixes so you should definitely take a look at it.

 

Other possible but rare failures include unscheduled batch jobs. During operation the SQL Monitor writes recorded data to the application servers’ shared memory space from where it is later collected and persisted into the system’s database. This requires periodic execution of the reports RTM_COLLECT_ALL and RSQLM_UPDATE_DATA. The scheduling of both of these jobs is automatically ensured when you activate the monitoring in transaction SQLM. If for any reason one or both of these jobs are unscheduled afterwards, you will see a warning on the start screen of transaction SQLM. If you come across such a warning message, you need to either reschedule the batch job manually or renew the SQL Monitor activation by using the buttons “All Servers” or “Select Servers” in section “Activation / Deactivation”.

Capture6.PNG

Screenshot 6: Main screen of transaction SQLM displaying a warning about an unscheduled batch job.

 

Finally, there is one particular type of error that is not immediately apparent after firing up transaction SQLM – inconsistent activation states. An inconsistent activation state exists when the expected activation state differs from the actual activation state. In other words, the SQL Monitor is activated on a particular server but it is not actually running on that server (or the other way around). This could, for instance, happen when the server becomes unreachable during the activation process. To check the activation consistency you can click the “Servers” button in the toolbar on the main screen of transaction SQLM. This will establish the tabular display of the per-server activation state shown in Screenshot 4. In the rare case of inconsistencies you can try to repair the activation by using the application menu (from the menu bar select “SQL Monitor” > “Server States” > “Ensure”).

Capture7.PNG

Screenshot 7: In the rare case of inconsistent activation states you can try to repair the activation using the application menu.

 

Phase 3 – Deactivation

 

As mentioned in the description of phase number one, the activation expiration will lead to an automatic deactivation of the SQL Monitor once the specified date and time are reached. If, for any reason, you want to halt the monitoring earlier, you can do so by using the button “Deactivate” in section “Activation / Deactivation”. Both options for deactivating the monitor are globally effective and stop the monitoring on all available application servers.

Capture8.PNG

Screenshot 8: Main screen of transaction SQLM after deactivating the SQL Monitor manually.

 

Phase 4 – Reset

 

As mentioned earlier, the monitoring data is most valuable when it was recorded in a period without code changes. However, as you analyze the data you will likely want to change certain sections of code to improve performance. These code changes in turn will lead to a gradual distortion of the monitoring results which makes it harder to draw conclusions from the recorded data. Therefore, it is advisable to reset the SQL Monitor data whenever the source code in the system was changed to a significant extent (for instance due to imported transports).

 

Prior to erasing all SQL Monitor data from the system you have the opportunity to download the data to a file by using the button “Download Data” in section “Data Management”. The button will take you to a new screen that allows you to apply basic filtering (for instance on the package name) in order to limit the amount of downloaded data. Downloading the recorded data can be advantageous for archiving purposes or if you plan to import the data into a different application such as the SQL Performance Tuning Worklist (transaction SWLT).

 

Note that in later versions of the SQL Monitor you can also archive the recorded data into a so-called snapshot. You can then display and analyze the snapshot’s data at any later point in time, independently of the currently recorded monitoring data.

Capture9.PNG

Screenshot 9: Selection screen for downloading the recorded data.

 

Whether or not you have downloaded or archived the recorded data, you can easily wipe all SQL Monitor data off the system by clicking the button “Delete Data” in section “Data Management”. Note, however, that this is only possible when the SQL Monitor is deactivated.

 

Wrap-Up

 

In today’s hands-on example I gave you an introduction to the administrative side of the SQL Monitor. As you’ve seen, the administrative cycle consists of four phases: activation, monitoring, deactivation and reset. The central access point for all of these phases is the transaction SQLM. The main screen of this transaction offers valuable status information and a set of control buttons. As a result, tasks like activating, deactivating or resetting the monitor are a breeze.

 

I hope you had fun working through the present as well as any previous hands-on scenarios. I for sure did! This post marks the end of my series – at least for the time being. As we add new features to the SQL Monitor, I’m sure to return with another pack of hands-on scenarios soon!

 

In case you missed previous posts, you should start working through my series from the initial article: SQL Monitor Unleashed

SIT Hyderabad: Notes from the session on ABAP in HANA


Hi all SCN members,


In this blog I would like to mention the key points that I jotted down in the technical session on ABAP on HANA held at SIT Hyderabad by Sundaresan Krishnamurthy from SAP Labs. These 30 minutes were pure enjoyment for all ABAPers – all credit to the presenter. It was another useful session for all the attendees, and Sundaresan received the winner award for his presentation.

 

Introduction:


ABAP on HANA is an upcoming and hot topic in the SAP domain. Though HANA is also playing a significant role in other fields of SAP, its role in ABAP is immeasurable. In this blog I will briefly explain the different aspects and the role of HANA in ABAP.

 

Role of ABAP for development on HANA:


It basically deals with the following aspects,

 

  • SAP Business Suite: it concentrates mainly on how customers can optimize and develop more applications.
  • SAP NetWeaver BW: it concentrates on update aspects.
  • New applications: it concentrates on developing new applications.

 

Advantages of ABAP in HANA:


  1. Deeper and better integration with real time data access.
  2. New tools to detect optimization potential.
  3. Delightful user experience.
  4. New possibilities to build applications.
  5. Real time analysis.
  6. ABAP in eclipse (develop like never before).

 

ABAP code benefits from SAP HANA:


  1. The SAP HANA platform is a complete relational database, hence there is no problem in reusing the same code.
  2. Custom code must be taken care of during the transition.
  3. Optimization potential will be high and transparent, that is, fast data access and optimized statements.
  4. In-memory computing.
  5. Programming is near the database.
  6. HANA-specific features are used.
  7. No message checks happen during the transition.

 

CODE PUSHDOWN:


This is the main concept underlying the HANA functionality.

Usually in ABAP, the data is moved to the code and the calculations happen on the application server; with HANA, the code is moved to the data instead. This concept is called CODE PUSHDOWN.

Code pushdown need not necessarily be only the example above. A simple select query transformed from a SELECT SINGLE inside a loop into a SELECT FOR ALL ENTRIES also comes under CODE PUSHDOWN.


General Points:


  • In ABAP coding is sequential, whereas in HANA coding is parallel.
  • For getting started with HANA, the ABAP 7.4 trial can be used as a virtual appliance.
  • During the transition, secondary indexes are largely dropped automatically, so indexes are mostly out of the picture for select queries.
  • As far as Open SQL is concerned, extended join support is provided.
  • Support for literals: for example, until now ABAP select statements could only contain fields present in a structure or table, but with HANA even constants can be present in select statements.

 

SQL MONITOR- The tool of HANA:


  • This is an important tool that a customer should be given first in case they need to use HANA.
  • The T-code used for the SQL Monitor tool is SQLM.
  • This tool is mainly used to analyze the code and check its effectiveness.
  • There is another transaction, SWLT, which mainly uses the ABAP Test Cockpit to prioritize which lines of code are used most.
  • In general it is similar to ST05, with the difference that ST05 gives the trace for the entire code in the order in which it is executed, whereas SQLM gives the trace on the basis of the code that is used most.
  • In short, SQLM can serve as an input to ST05.

 

TAKEAWAYS:


  1. ABAP is and will continue to be the basis for applications.
  2. ABAP 7.4 facilitates leveraging SAP HANA features.
  3. SAP HANA offers many new possibilities for ABAP-based applications.
  4. Internally, HANA follows an insert-only approach for changes to the database.
  5. It is the right time to get started with ABAP on HANA.

 

Conclusion:


The session actually gave me a good idea of what HANA is and what role ABAP plays in it. Thanks to the presenter for such efforts with demos and explanations, which made the subject more interesting.

 

I hope I have shared all the content that was taught in the session; please feel free to correct any mistakes. The question and answer session actually made it even more interesting.

 

Hats off to Sundaresan for this wonderful session.


Thanks for reading the blog. Hope you found it useful!


Note: For more information about the happenings in SIT Hyderabad please read the below blog

 

http://scn.sap.com/community/events/inside-track/blog/2014/02/24/sit-hyderabad-an-overview-on-the-technical-track


 

LET'S LEARN, SHARE AND COLLABORATE

 

Thanks and Regards,

Satish kumar Balasubramanian

ABAP On HANA - My experience in SAP Inside Track


Hello all,

 

I would like to share the knowledge I gained from SIT (SAP Inside Track) 2014. Before getting started with a particular topic, I would like to share my experience of SIT, which was conducted in Hyderabad. It was a great pleasure for me to be a part of SIT. There were many valuable sessions throughout the day across 3 tracks (Technical, Functional and Analytical). As I am basically from the technical side, the topics that were chosen were really excellent and presented in a very good manner.

 

Let me share some of the points on ‘ABAP ON HANA’ which I learnt from the sessions, along with points I collected apart from the sessions, which I hope will be useful for you all to get an overview of this topic.

 

 

INTRODUCTION:

 

  • The main theme of this topic is: ‘Accelerate, Extend, Innovate using ABAP and SAP HANA’.
  • This topic shows how the capabilities of ABAP have been enhanced for HANA development.
  • ABAP applications have been optimized for SAP HANA:
    • Deeper and better integration with accelerated, real-time data access.
    • New tools to detect optimization potential.
    • Delightful user experience and easy access.
    • New possibilities to build applications.
    • Real-time analysis with embedded analytics.
    • Develop like never before using ABAP in Eclipse.

 

MIGRATION FROM ABAP TO HANA:

 

  • ABAP code that uses only database-independent features continues to run after migration to HANA.
  • General performance guidelines stay valid for SAP HANA.
  • Custom code transition:
    • Avoid (functional) regressions.
    • Detect (additional) performance optimization potential.
  • Required and recommended adaptations:
    • Database migration related
    • Functional related
    • Performance related

 

NEW PARADIGM:

 

  • Code pushdown means delegating data-intensive calculations to the database layer. It does not mean pushing ALL calculations to the database, but only those that make sense.
  • Example: If you want to calculate the total amount of all items of an invoice, you should not select all those items and calculate the sum in a loop. This can easily be done by using an aggregation function ( SUM() ) on the database (see the sketch after this list).
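
A minimal sketch of this example, assuming a hypothetical invoice-item table zinv_item with columns invoice_id and amount (all names and values invented for illustration):

DATA lv_total TYPE p LENGTH 15 DECIMALS 2.

" Data-to-code: fetch every item and add the amounts up in ABAP ...
SELECT invoice_id, amount
  FROM zinv_item
  WHERE invoice_id = '4711'
  INTO TABLE @DATA(lt_items).

LOOP AT lt_items ASSIGNING FIELD-SYMBOL(<ls_item>).
  lv_total = lv_total + <ls_item>-amount.
ENDLOOP.

" ... code-to-data: push the aggregation down to the database.
SELECT SUM( amount )
  FROM zinv_item
  WHERE invoice_id = '4711'
  INTO @lv_total.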

 

                                   hana_flow.png

 

DIFFERENCE BETWEEN ABAP AND HANA:

 

  • One basic difference is that ABAP processing is based on sequential execution, while HANA is built for parallel execution.
  • The behavior of the widely used INSERT statement also changes: in the HANA column store, inserted records first land in a delta store and are later consolidated into the main store by delta merging.

 

SQL MONITOR:

 

  • Transaction code: SQLM.
  • It displays performance data at process level.
  • It allows drill-down from process level to single database operations.
  • It can run in production with minimal overhead (less than 3%).
  • The monitoring cycle consists of 4 phases:
    • Activation – Activate the monitoring.
    • Monitoring – Check & ensure the monitoring status.
    • Deactivation – Deactivate the monitoring.
    • Reset – Archive the collected data and reset the monitoring.
  • Below are some screenshots of the SQL Monitor.
    • The main screen consists of the following buttons:
      • All Servers: To select all servers for monitoring.
      • Select Servers: To select particular servers for monitoring.
      • Activate/Deactivate: To activate/deactivate the monitoring.

          

     Main Screen of SQLM:

 

     hana_main screen.png

 

     Log History:

 

     hana_history.png

  • Log History shows all the SQL statements running on the server along with the following details:
    • Number of times each statement ran.
    • Duration of each statement.
    • Program in which the SQL statement was executed.
    • Date and time of execution, etc.

 

CONCLUSION:

 

  • ABAP is and will continue to be the basis for applications.
  • ABAP 7.4 facilitates leveraging SAP HANA features.
  • SAP HANA offers many new possibilities for ABAP-based applications.

 

 

Thanks & Regards,

Imran Khan.

Under the HANA hood of an ABAP Managed Database Procedure


Hi All, I've been looking into ABAP managed database procedures for HANA recently and decided to take a look at what's actually created under the hood in the HANA database when an AMDP is created.

 

I created a small test class in our CRM on HANA system with a method to read a couple of columns from the crmd_orderadm_h table using sqlscript. The method takes one input parameter IV_OBJECT_ID and has one export parameter ET_ORDER.

 

class ZCL_TEST_AMDP definition public final create public.
  public section.
    interfaces IF_AMDP_MARKER_HDB.
    types:
      begin of ty_order,
        object_id   TYPE crmd_orderadm_h-object_id,
        description TYPE crmd_orderadm_h-description,
      end of ty_order,
      " fully specified table type, as required for AMDP parameters
      tt_order TYPE STANDARD TABLE OF ty_order WITH DEFAULT KEY.
    methods get_orders_sql
      IMPORTING VALUE(iv_object_id) TYPE crmd_orderadm_h-object_id
      EXPORTING VALUE(et_order)     TYPE tt_order.
  protected section.
  private section.
ENDCLASS.

CLASS ZCL_TEST_AMDP IMPLEMENTATION.
  method get_orders_sql
    by database procedure
    for hdb
    language sqlscript
    using crmd_orderadm_h.

    et_order = select object_id, description
                 from crmd_orderadm_h
                where object_id = iv_object_id;
  endmethod.
ENDCLASS.
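
As a side note, once the class is activated the AMDP can be called like any other ABAP method; here is a hedged usage sketch (the object ID below is a made-up sample value):

DATA lt_order TYPE zcl_test_amdp=>tt_order.

NEW zcl_test_amdp( )->get_orders_sql(
  EXPORTING iv_object_id = '0000004711'   " hypothetical sample ID
  IMPORTING et_order     = lt_order ).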

 

So pretty simple class. My expectation was that in HANA I would see a SQLScript procedure object and a Table Type object for the et_order parameter. What I did find was a bit different.

 

First of all, it created two SQLScript procedures:

- a stub wrapper type procedure

- a main procedure called by the above stub procedure

 

It also created:

- a temporary table for the output data

- a view for the referenced table

 

The stub procedure seems to follow the naming convention of CLASSNAME=>METHODNAME#stub#DATETIME.

As you can see, the definition includes only the input parameter that was defined in the class method; the export parameter is not specified in the definition.

In the body, the stub calls the main procedure and then selects from the ET_ORDER table variable to return the output.

 

create procedure "ZCL_TEST_AMDP=>GET_ORDERS_SQL#stub#20140318165936"
  ( in "IV_OBJECT_ID" NVARCHAR (000010) )
  language sqlscript sql security invoker as
begin
  call "ZCL_TEST_AMDP=>GET_ORDERS_SQL" (
    "IV_OBJECT_ID" => :IV_OBJECT_ID,
    "ET_ORDER"     => :ET_ORDER );
  select * from :ET_ORDER;
end;

 

The main procedure follows the naming convention CLASSNAME=>METHODNAME.

It specifies the input parameter IV_OBJECT_ID and the output parameter ET_ORDER, typed with the newly created temporary table ZCL_TEST_AMDP=>GET_ORDERS_SQL=>ET_ORDER#tft.

 

In the body of the procedure, it then reads from the newly created view ZCL_TEST_AMDP=>CRMD_ORDERADM_H#covw instead of directly from the table CRMD_ORDERADM_H.

create procedure "ZCL_TEST_AMDP=>GET_ORDERS_SQL"
  ( in  "IV_OBJECT_ID" NVARCHAR (000010),
    out "ET_ORDER" "ZCL_TEST_AMDP=>GET_ORDERS_SQL=>ET_ORDER#tft" )
  language sqlscript sql security invoker as
begin
  et_order = select object_id, description
               from "ZCL_TEST_AMDP=>CRMD_ORDERADM_H#covw"
              where object_id = iv_object_id;
end;

 

ZCL_TEST_AMDP=>GET_ORDERS_SQL=>ET_ORDER#tft is defined as a global temporary table whose columns map to the columns of the ET_ORDER parameter.

Pic1.PNG

 

Pic2.PNG

 

 

ZCL_TEST_AMDP=>CRMD_ORDERADM_H#covw is a view based on the CRMD_ORDERADM_H table. Here is the create statement:

 

CREATE VIEW "SAPSR3"."ZCL_TEST_AMDP=>CRMD_ORDERADM_H#covw"
  ( "CLIENT", "GUID", "OBJECT_ID", "PROCESS_TYPE", "POSTING_DATE",
    "DESCRIPTION", "DESCR_LANGUAGE", "LOGICAL_SYSTEM", "CRM_RELEASE",
    "SCENARIO", "TEMPLATE_TYPE", "CREATED_AT", "CREATED_BY", "CHANGED_AT",
    "CHANGED_BY", "HEAD_CHANGED_AT", "ORDERADM_H_DUMMY", "INPUT_CHANNEL",
    "BTX_CLASS", "AUTH_SCOPE", "OBJECT_TYPE", "ARCHIVING_FLAG",
    "DESCRIPTION_UC", "OBJECT_ID_OK", "VERIFY_DATE", "CRM_CHANGED_AT",
    "POSTPROCESS_AT" )
AS select
    "CLIENT", "GUID", "OBJECT_ID", "PROCESS_TYPE", "POSTING_DATE",
    "DESCRIPTION", "DESCR_LANGUAGE", "LOGICAL_SYSTEM", "CRM_RELEASE",
    "SCENARIO", "TEMPLATE_TYPE", "CREATED_AT", "CREATED_BY", "CHANGED_AT",
    "CHANGED_BY", "HEAD_CHANGED_AT", "ORDERADM_H_DUMMY", "INPUT_CHANNEL",
    "BTX_CLASS", "AUTH_SCOPE", "OBJECT_TYPE", "ARCHIVING_FLAG",
    "DESCRIPTION_UC", "OBJECT_ID_OK", "VERIFY_DATE", "CRM_CHANGED_AT",
    "POSTPROCESS_AT"
from "CRMD_ORDERADM_H";

So in all, 4 new objects were created:

- Procedure ZCL_TEST_AMDP=>GET_ORDERS_SQL

- Procedure ZCL_TEST_AMDP=>GET_ORDERS_SQL#stub#20140318165936

- Temporary table ZCL_TEST_AMDP=>GET_ORDERS_SQL=>ET_ORDER#tft

- View ZCL_TEST_AMDP=>CRMD_ORDERADM_H#covw

 

I can kinda understand the reasoning behind the temp table (it maps easily to an ABAP internal table), but I'm unsure of the purpose or logic behind a) the stub procedure and b) the view for the referenced table. Maybe some SAP internal dev folks have some insight here... just to satisfy my curiosity!
