Generic Extractions

 

1. What is the T-code for generic extraction?

A: RSO2

 

2. What types of datasources can we create?

A: 1. Transaction data, 2. master data attributes, 3. texts.

 

3. Why can't we generate a hierarchy datasource generically?

A: Because the database design of the SAP R/3 OLTP system follows the ER model, no hierarchies are maintained there; whenever there is a 1:many relationship, the information is split into separate tables linked by a primary key/foreign key relationship.

 

4. In what ways can we generate a generic datasource?

A: Table, view, InfoSet and function module. (A domain is also available for text datasources.)

 

5. When you create a generic datasource it asks for three descriptions (short, medium and long); which is mandatory?

A: All are mandatory.

 

6. When we generate a datasource using a table, is the extract structure created automatically or not?

A: Yes, it is created automatically.

 

7. What is the difference between creating a generic datasource using a view and using a function module?

A: In the case of a function module, the extract structure must be created explicitly; in the case of a view, the extract structure is generated automatically.

 

8. When do you prefer a view and when do you prefer a function module?

A: When the relationship is 1:1 a view is preferred; when the relationship is 1:many a function module is preferred.

 

9. What is the importance of the selection check box when you are generating the datasource?

A: Whichever field has its selection check box selected is made available in the Data Selection tab of the InfoPackage, so we can use it to extract data selectively.

 

10. What is the importance of the hide check box?

A: It acts on the transfer structure: whichever field you hide will not be available in the transfer structure, either on the BW side or on the R/3 side.

11. When do you go for a generic datasource?

A: 1. When we don't have a suitable business content datasource available ready-made, we go for a generic datasource.

     2. Even if a business content datasource is available but is already in use, and we want to simulate the same kind of extractor, we go for generic extraction.

12. Did you create generic datasources? If yes, what was the scenario?

A: Scenarios:

1. To extract open deliveries: the datasource 2LIS_12_VCITM is available as part of business content to extract all the delivery item-level information, so to extract only open deliveries we built a generic datasource with a function module. Logic: we used three conditions to identify an open delivery:

- A delivery is considered open if it has no PGI (post goods issue) document.

- A delivery is considered open if it has a PGI document, but the PGI document has no billing document.

- A delivery is considered open if it has a PGI document with a billing document, but the billing document has not yet been posted to CO-PA.

To implement this logic: the database table LIPS contains all the delivery item-level information, so we read all the records from LIPS, check for each delivery whether it is open, and if so insert its records into E_T_DATA.

To check whether a given delivery is open:

- We use the table VBFA (sales document flow) to find the PGI document for the delivery; if none exists, the delivery is open.

- Otherwise we feed the PGI document back into VBFA to find the subsequent billing document; if none exists, the delivery is open.

- If a billing document exists, we use the table VBRP to find its posting status; if the status is not equal to 'C' the delivery is open, otherwise it is not.
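The three checks above can be sketched as plain decision logic. The following Python is only an illustration of the function module's flow; the real extractor reads LIPS, VBFA and VBRP in ABAP, and the dict-based stand-ins for those tables are simplified assumptions:

```python
def is_open_delivery(delivery, doc_flow, billing_status):
    """Return True if a delivery is 'open' per the three conditions above.

    delivery       -- delivery document number
    doc_flow       -- dict mapping a document to its follow-on document
                      (stands in for VBFA, the sales document flow)
    billing_status -- dict mapping a billing document to its posting
                      status (stands in for VBRP; 'C' = posted to CO-PA)
    """
    pgi = doc_flow.get(delivery)
    if pgi is None:
        return True          # condition 1: no PGI document yet
    billing = doc_flow.get(pgi)
    if billing is None:
        return True          # condition 2: PGI exists, but no billing doc
    # condition 3: billing exists but has not been posted to CO-PA
    return billing_status.get(billing) != "C"
```

In the real function module, each delivery that passes this check would have its LIPS records appended to E_T_DATA.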

Scenario 2 (this scenario applies only if SAP R/3 is version 3.1):

            In SAP R/3 3.1 there are no business content datasources to extract FI-AR information, so we built a generic datasource using a view on two tables, BSID and BSAD.

 

Scenario 3:

            We built a generic datasource to simulate 0MATERIAL_ATTR, which is a business content datasource. Why? The requirement was to report on the 0MATERIAL master data, so we converted 0MATERIAL into a data target, which can be loaded only with a flexible update, and a flexible update can be assigned only to a transaction datasource. The business content datasource 0MATERIAL_ATTR is a master data attribute datasource, which cannot be used for this, so to solve the problem we built the transaction data datasource ZMATERIAL_ATTR, which simulates 0MATERIAL_ATTR.

 

13. Can we set up delta for a generic datasource?

A: Yes.

 

14. What are the different ways of setting up delta in generic extraction?

A: Calendar day (CALDAY): if you set up delta on the basis of the calendar day, you can run the delta only once per day, and only at the end of the day, to minimize the risk of missing delta records.

            Numeric pointer: this type of delta is suitable only when we are extracting from a table that supports only the creation of new records, not changes to existing records (e.g. CATSDB, the HR time management table).

            Timestamp: using a timestamp we can run the delta several times per day, but we need to use the safety lower limit and safety upper limit with a minimum of 5 minutes (300 seconds).
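As an illustration of the timestamp method with safety intervals, here is a hedged Python sketch. In a real generic delta the pointer handling is managed by the extraction service; the window arithmetic below is a simplified assumption:

```python
from datetime import datetime, timedelta

SAFETY = timedelta(seconds=300)  # safety upper/lower limit, minimum 5 minutes

def delta_window(last_pointer, now):
    """Compute the [lower, upper) timestamp window for one delta run.

    The safety lower limit re-reads a small overlap from the previous run;
    the safety upper limit keeps records that may still be mid-commit out
    of this run so the next run picks them up (illustrative simplification).
    """
    return last_pointer - SAFETY, now - SAFETY

def run_delta(records, last_pointer, now):
    """records: list of (timestamp, payload) pairs.
    Returns the extracted payloads and the new pointer to store."""
    lower, upper = delta_window(last_pointer, now)
    extracted = [p for ts, p in records if lower <= ts < upper]
    return extracted, upper
```

Because the lower limit deliberately re-reads an overlap, the same record can arrive twice, which is another reason such deltas are usually staged through an overwrite-capable ODS.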

 

15. In what scenarios did we set up delta in generic datasources?

A: Scenario I: for the open-delivery scenario, no delta (why? because we always wanted to extract a snapshot of the open deliveries).

    Scenario II: yes, we set up delta here based on calendar day, using the posting date field.

    Scenario III: when simulating the generic datasource for 0MATERIAL_ATTR, we implemented delta through coding in the function module.

 

16. Can we do a datasource enhancement for a generic datasource? If yes, how?

A: Yes. Go to T-code RSA6, select the datasource, click the 'Enhance extract structure' button and specify the fields to be added, prefixing them with ZZ. Then go back to RSA6, select the datasource, click the 'Change' button and deselect the 'Hide' and 'Field only known in exit' check boxes. Finally, write the enhancement code to populate the appended fields using the enhancement RSAP0001, which contains four function exits: EXIT_SAPLRSAP_001 (transaction data datasources), EXIT_SAPLRSAP_002 (master data attributes), EXIT_SAPLRSAP_003 (master data texts) and EXIT_SAPLRSAP_004 (master data hierarchies).
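What the exit code typically does can be sketched as follows. This Python stands in for the ABAP loop over the extracted rows inside EXIT_SAPLRSAP_001; the field names (KUNNR, ZZCUSTGRP) and the lookup table are purely illustrative assumptions:

```python
def exit_fill_zz_fields(extracted_rows, customer_lookup):
    """Sketch of a typical exit: loop over the extracted rows (the
    internal table of the exit in the real ABAP) and fill the appended
    ZZ* field from a lookup on another table.

    extracted_rows  -- list of dicts, one per extracted record
    customer_lookup -- dict standing in for the lookup table read
    """
    for row in extracted_rows:
        # populate the appended field; blank if no lookup entry exists
        row["ZZCUSTGRP"] = customer_lookup.get(row["KUNNR"], "")
    return extracted_rows
```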

 

17. When we are generating the datasource we specify a certain application component; what is it that we are doing there?

A: We are assigning the datasource to that application component.

 

18. When you set up delta for a generic datasource and run a delta load on the BW side, where does the data come from?

A: RSA7 (the delta queue).

19. When you are setting up delta in generic datasources, what are the different images we can set up?

A: We can set up two types of images: 'new status for changed records' and 'additive delta'.

 

20. When setting up delta for a generic datasource, if I select the update method 'new status for changed records', can I load data directly into the cube?

A: No. Why? If the update type is set to 'new status for changed records', it becomes mandatory to load the data into an ODS first and then from the ODS into the cube.
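The reason behind this rule can be illustrated with a small Python sketch of the two image types, assuming a single key figure for simplicity:

```python
def apply_additive(cube_value, delta_records):
    """Additive delta: each record carries the *difference*, so an
    InfoCube can simply sum the records; direct loading works."""
    return cube_value + sum(delta_records)

def apply_new_status(ods_value, delta_records):
    """New status for changed records: each record carries the *full new
    value*, so only an overwrite-capable target (ODS) applies it
    correctly; summing such records into a cube would double-count."""
    for new_value in delta_records:
        ods_value = new_value  # overwrite: keep only the latest status
    return ods_value
```

For an order amount that changes from 100 to 120, additive delta sends [100, 20] (sum 120, cube-safe), whereas new-status sends [100, 120], which must be overwritten in an ODS before flowing to the cube.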

 

21. What is the difference between business content datasources and generic datasources?

A: Business content datasources are supplied ready-made in the delivered version; to use one we have to install it, creating a copy in the active version, via T-code RSA5 (select the datasource and click 'Transfer datasource'). Generic datasources we create ourselves. With respect to performance, we prefer business content datasources.

 

22. What are the steps of the generic extraction method?

A: Go to RSO2, select the type of datasource, specify the name of the datasource and click the 'Create' button. On the next screen, assign the datasource to a particular application component, specify the table name, view name, function module or InfoSet query depending on the requirement, provide the short, medium and long descriptions, and save. On the following screen, which shows detailed information about the datasource and the extract structure, customize the transfer structure using the 'Hide' check box and the 'Selection' check box (for the Data Selection tab), then save to generate the datasource.

 

23. What is the ALE pointer (ALE delta)?

            A: This is one way of setting up delta, based on pre-defined fields defined by SAP for some tables.

 

24. What is a datasource?

A: A datasource defines the extract structure (ES) and transfer structure (TS) in the source system, and the transfer structure in the BW system.

 

25. When does the TS on the SAP R/3 side get generated?

A: When you activate the transfer rules in BW, a transfer structure is created on the SAP R/3 side.

 

26. How do we find the TS in the source system?

A: Go to SE11, select the 'Data type' radio button, type in /BI* and press F4; click the 'New selection' button below, and in the short description type *NAME OF THE DATASOURCE* and press Enter. This gives the name of the transfer structure.

 

 

 

 

Logistics:

 

LIS (Logistics Information System):

 

1.      What do you mean by 'set up LIS environment'?

A:        When we execute the option 'Set up LIS environment', the system generates two transparent tables (SxxxBIW1, SxxxBIW2) and one extract structure (SxxxBIWS). This option is executed only for user-defined information structures; if we try to execute it for an SAP-defined one, it throws an error stating that the LIS environment cannot be set up in the SAP name range, because the environment is set up by default for SAP-defined information structures.

      

2.      When do we extract data using LIS extraction or LIS datasources?

A:   Whenever we want to extract logistics information such as sales orders, deliveries, billing, customers, purchasing or inventory management.

 

3.      How many types of information structures are there, and what are they?

A:    There are two types: user-defined information structures and SAP-defined information structures.

 

4.      What is the T-Code to generate LIS data source?

A:    LBW0

 

5.      LIS data source is generated using what?

A:    Information Structure

 

6.      Why do we use two transparent tables?

A:    To handle delta. Because we use the V1 or V2 update, which can bring in delta records at any point in time, the two tables act interchangeably; using the table TMCBIW we can find out which of them is active at any given time.
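A minimal Python sketch of this double-table mechanism, with the role of TMCBIW reduced to an 'active' flag (illustrative only; the real tables are SxxxBIW1/SxxxBIW2 in the database):

```python
class LisDeltaTables:
    """While BW extracts from one table, V1/V2 updates keep writing new
    LUWs into the other; a flag (the role of TMCBIW) records which
    table is currently active for postings."""

    def __init__(self):
        self.tables = {1: [], 2: []}
        self.active = 1  # which of SxxxBIW1 / SxxxBIW2 receives postings

    def post(self, record):
        self.tables[self.active].append(record)

    def extract(self):
        # flip the flag first so new postings go to the other table,
        # then drain the table that just became inactive
        drained, self.active = self.active, 3 - self.active
        records, self.tables[drained] = self.tables[drained], []
        return records
```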

 

7.      What do you mean by V1 and V2?

A:   V1 stands for synchronous updating: when records are updated, the LUWs are posted to the respective tables and the application is kept waiting until a confirmation signal comes back, which degrades OLTP system performance. V2 stands for asynchronous updating: the LUWs are posted to the respective tables without any signal coming back to the application. V2 is therefore preferred over V1, as it improves OLTP system performance to some extent.

 

8.      What are the disadvantages of LIS?

A: 
1. Information structure: in LIS, when we migrate the data, the information structure is filled with the initial records as well as the delta records, which makes the size of the information structure equivalent to a cube in the BW system; this is of no use and degrades database performance.

2.      V1 or V2: the V1 and V2 updates do not support background processing, thereby affecting OLTP system performance.

3.      Level of information: LIS does not provide datasources specific to the level of information, which increases the input and output volume of the data records.

 

9.      When migrating the data in LIS, which update did you use?

A:        V2

 

10.  Can you explain the steps to generate an LIS datasource in the case of an SAP-defined information structure?

A:  Go to T-code LBW0 and specify the information structure. Select the radio button 'Display settings' to see the status of the information structure and observe whether the datasource has already been generated. If not, select the radio button 'Generate datasource' and click the Execute button; the extract structure (SxxxBIWS) is then assigned to the datasource, which follows the naming convention 2LIS_<application component number, e.g. 01, 02>_Sxxx (name of the information structure), e.g. 2LIS_01_Sxxx. Customize the extract structure using the 'Selection' and 'Hide' check boxes depending on the requirement, then save. Once the datasource is generated, we can replicate it and start using it to extract the data.

 

11.   Can you explain the steps to generate an LIS datasource in the case of a user-defined information structure?

A:   Once the information structure and the update rules to the information structure are ready, go to T-code LBW0, specify the information structure, select the radio button 'Set up LIS environment' and click the Execute button; this generates two transparent tables (SxxxBIW1, SxxxBIW2) and one extract structure (SxxxBIWS). Then select the radio button 'Generate datasource' and click the Execute button; this generates the datasource by assigning the extract structure SxxxBIWS to it, following the naming convention 2LIS_<application component number, e.g. 01, 02>_Sxxx (name of the information structure), e.g. 2LIS_01_Sxxx. Customize the extract structure using the 'Selection' and 'Hide' check boxes depending on the requirement, then save. Once the datasource is generated, we can replicate it and start using it to extract the data.

 

12.  How do we migrate the data in the case of an SAP-defined information structure?

A:    In the SAP R/3 system we lock the related T-codes, then go to T-code SE14 and delete the contents of the information structure Sxxx and of the two transparent tables SxxxBIW1 and SxxxBIW2. Then go to T-code LBW1, select the information structure, set the updating to V2 (asynchronous updating) and save. Once this is done, we run the statistical setup by choosing the respective option for the application from T-code SBIW: specify the information structure name, specify the version as &(A, provide the date and time of termination, and execute the job in the background via the menu path Program -> Execute in Background; the status of the job can be observed in T-code SM37. Once the job finishes successfully, go to T-code LBW2 and convert the data from version &(A into version 000, then go back to T-code LBW1, select the information structure, set the updating to 'No update' and save. Then run the initial or full loads to extract all the data from the information structure into BW. Once the initial loads are successful in BW, go back to R/3 T-code LBW1, select the information structure, set the updating to V2 (asynchronous updating), select the 'Active delta' check box and save. Finally, release the locks so that users can start recording transactions.

  NOTE: These migrations are always done on weekends after office hours, and the T-codes are usually locked and unlocked by the Basis people.

 

13.  How do we migrate the data in the case of a user-defined information structure?

A:    In the SAP R/3 system we lock the related T-codes, then go to T-code SE14 and delete the contents of the information structure Sxxx and of the two transparent tables SxxxBIW1 and SxxxBIW2. Then go to T-code OMO1, select the information structure, set the updating to V2 (asynchronous updating) and save. Once this is done, we run the statistical setup by choosing the respective option for the application from T-code SBIW: specify the information structure name, specify the version as &(A, provide the date and time of termination, and execute the job in the background via the menu path Program -> Execute in Background; the status of the job can be observed in T-code SM37. Once the job finishes successfully, go to T-code LBW2 and convert the data from version &(A into version 000, then go to T-code OMO1, select the information structure, set the updating to 'No update' and save. Then run the initial or full loads to extract all the data from the information structure into BW. Once the initial loads are successful in BW, go back to R/3 T-code LBW0, specify the information structure, select the radio button 'Generate delta updating in LIS' and click the Execute button. Then select the radio button 'Display settings' to see whether the delta is activated or deactivated; if it is deactivated, select the radio button 'Activate/Deactivate' and click the Execute button. Then go back to T-code OMO1, select the information structure, set the updating to V2 (asynchronous updating) and save. Finally, release the locks so that users can start recording transactions.

  NOTE: These migrations are always done on weekends after office hours, and the T-codes are usually locked and unlocked by the Basis people.

 

14.  What is the naming convention of the LIS datasource?

A:   2LIS_<application component number, e.g. 01, 02>_Sxxx (name of the information structure), e.g. 2LIS_01_Sxxx.

 

15.  What is the T-Code to create Field Catalogs?

A:    MC18 – To create, MC19 – To change, MC20 – To display.

 

16.  What is the T-Code to create Information Structure?

A:    MC21 – To create, MC22 – To Change, MC23 – To display.

 

17.  What is the T- Code to create Update Rules between Field Catalogs and Information Structure?

A:    MC24 – To create, MC25 – To Change, MC26 – To Display.

 

18.  Which information structures did you use?

A:   S260 – sales order information, S261 – deliveries, S262 – billing, and S001 – customer.

 

19.  Did you create any user-defined information structure? If yes, what was the scenario?

A:    Yes; to extract pricing information from the KONV and KONP tables, we built our own user-defined information structure S608 and extracted the data.

 

LO – Cockpit:

 

1. What is the difference between LIS and LO?

A:       
1. Level of information: LO provides datasources specific to the level of information, for example 2LIS_11_VAHDR (sales order header), 2LIS_11_VAITM (sales order item) and 2LIS_11_VASCL (sales order schedule line), whereas LIS provides information structures specific to sales order information as a whole.

 2. LO uses the V3 update (asynchronous background process), which gives the option of executing the LUWs in the background; LIS uses V1 or V2.

 3. In LO we fill the setup tables, and once the data has been completely extracted into BW we can delete the setup tables again using T-code LBWG. In LIS we never delete the contents of the information structure once it is filled; moreover, the information structure is also filled with delta records, which is of no use.

 

2.   What are the steps for generating an LO datasource?

A:  LO datasources are delivered as part of business content in the delivered version. To create a copy of the datasource in the active version, go to T-code RSA5, select the datasource and click the 'Transfer datasource' button. To maintain the ready-made extract structure supplied with the datasource, go to T-code LBWE and click the 'Maintenance' link, and maintain the extract structure by moving fields across from the communication structures; if a required field is not available in the communication structure, we go for enhancing the datasource. Since the extract structure is re-generated, we then generate the datasource by clicking the datasource link and customize the extract structure using the check boxes 'Selection', 'Hide', 'Inversion' and 'Field only known in exit', and save. Then choose the delta update mode as queued delta (from among direct delta, serialized V3 update, unserialized V3 update and queued delta) and click the 'Inactive' link to activate the datasource. Finally, replicate the datasource on the BW side and start migrating the data.

 

3. How do we migrate the data in LO?

      A: To be on the safe side, we prefer to delete the setup tables using T-code LBWG, specifying the application component; we also clear the delta queues using T-code SMQ1 and the extractor queues using T-code LBWQ. Then we lock the related T-codes and run the statistical setup using T-code OLI*BW to fill the setup tables. Next we run the initial or full loads to extract all the data from the setup tables into BW. Once this is done successfully, we go back to R/3 T-code LBWE to set the periodicity of the V3 job, which could be hourly or two-hourly depending on the number of transactions (the more transactions, the more frequently the V3 job should be scheduled). Finally, the locks can be released so that users can start recording transactions.

NOTE: These migrations are always done on weekends after office hours, and the T-codes are usually locked and unlocked by the Basis people.

 

 

4.  What are the different delta update modes? Explain each.

A. Direct delta: LUWs are posted directly to the delta queue (RSA7), and we extract the LUWs from the delta queue into SAP BW by running delta loads. Direct delta degrades OLTP system performance, because when the LUWs are posted directly to the delta queue the application is kept waiting until all the enhancement code has executed.

B. Queued delta: LUWs are posted to the extractor queue (LBWQ); by scheduling the V3 job we move the documents from the extractor queue to the delta queue (RSA7), and we extract the LUWs from the delta queue into SAP BW by running delta loads. Queued delta is recommended by SAP; it maintains the extractor log, which helps us handle LUWs that were missed.

C.  Serialized V3 update: LUWs are posted to the update queue (SM13); the scheduled V3 job moves the documents from the update queue to the delta queue (RSA7) in a serialized fashion, and we extract the LUWs from the delta queue into SAP BW by running delta loads. Since the LUWs are moved serially from the update queue to the delta queue, an error document blocks the subsequent documents from being lifted, and because the documents are sorted by creation time there is every possibility of frequent V3 job failures and missed delta records. It also degrades OLTP system performance, as it forms multiple segments according to changes in the logon language.

D.     Unserialized V3 update: LUWs are posted to the update queue (SM13); the scheduled V3 job moves the documents from the update queue to the delta queue (RSA7), and we extract the LUWs from the delta queue into SAP BW by running delta loads. Since the LUWs are not moved serially, an error document does not block the subsequent documents, and there is no sorting by creation time. It improves OLTP system performance, as it forms a single segment per language.
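The queued-delta document flow described above can be sketched as two queues and a periodic job. This is illustrative Python only; the queue names map to the transactions mentioned in the text:

```python
from collections import deque

extractor_queue = deque()  # stands in for LBWQ
delta_queue = deque()      # stands in for RSA7

def post_document(luw):
    """Application posting: the LUW lands in the extractor queue; the
    application is not kept waiting on any extraction code."""
    extractor_queue.append(luw)

def v3_job():
    """The periodically scheduled V3 job moves documents from the
    extractor queue to the delta queue."""
    while extractor_queue:
        delta_queue.append(extractor_queue.popleft())

def bw_delta_load():
    """The BW delta load drains the delta queue."""
    extracted = list(delta_queue)
    delta_queue.clear()
    return extracted
```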

 

5.  How do you enhance an LO datasource?

     A.   The enhancement procedure is the same as for any other datasource; the only difference is that instead of appending the fields to the extract structure, we append them to the communication structure and then move them from the communication structure to the extract structure via the 'Maintenance' link in T-code LBWE. Deselect the 'Hide' and 'Field only known in exit' check boxes when generating the datasource, then write the enhancement code to populate the appended fields using the enhancement RSAP0001, which contains four function exits: EXIT_SAPLRSAP_001 (transaction data datasources), EXIT_SAPLRSAP_002 (master data attributes), EXIT_SAPLRSAP_003 (master data texts) and EXIT_SAPLRSAP_004 (master data hierarchies). We use the internal table C_T_DATA in the coding.

 

6.  What is the T-code for deleting setup tables?

A.        LBWG.

 

 

7.  What is the importance of the delta queue?

A.      The delta queue (RSA7) maintains two images: the delta image and the repeat delta image.

     When we run a delta load from the BW system it sends the delta records, and whenever a delta load fails and we request a repeat delta, it sends the repeat delta records.

 

8. What does the Total column indicate in the RSA7 delta queue?

A.  Each set of LUWs, whether it is present in both the delta and the repeat delta or not, is counted as one LUW.

 

9.  What do you mean by LUW?

A.       LUW stands for Logical Unit of Work. When we create a new document it forms a new image ('N'); whenever an existing document is changed it forms a before image ('X') and an after image (' '), and these before and after images together constitute one LUW. A before image is always backed up by an after image.
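A hedged Python sketch of how one changed document yields a before/after image pair. The field names (ROCANCEL, AMOUNT) and the sign handling for the before image are illustrative assumptions:

```python
def change_images(old_row, new_row):
    """A changed document produces one LUW of two records: a before
    image (flagged 'X', with the old key figure reversed) and an after
    image (flag ' ', carrying the new values); taken together they net
    out to the change."""
    before = {**old_row, "ROCANCEL": "X", "AMOUNT": -old_row["AMOUNT"]}
    after = {**new_row, "ROCANCEL": " "}
    return [before, after]
```

For an amount changed from 100 to 120, the pair sums to +20, which is why both images must travel together as one LUW.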

 

10. When we are initializing the datasource, do we need to lock posting in SAP R/3?

A: Yes; that is the reason we normally do these data migration steps during weekends in non-office hours, so as not to disturb the business.

 

11. How do we re-initialize an LO datasource?

A. First we lock the transactions and check whether there are postings in the extractor queue (LBWQ, in the case of queued delta) or in the update queue (SM13, in the case of serialized or unserialized V3 update). If we don't want the delta records in SM13 and LBWQ, we can delete them; if we want them in SAP BW, we schedule the V3 job to collect the LUWs into the delta queue and extract them by running delta loads in SAP BW. Once the queues are cleaned up, we delete the setup tables using T-code LBWG, specifying the application component; then we run the statistical setup using T-code OLI*BW to fill the setup tables, and run the initial or full loads to extract all the data from the setup tables into BW. Once this is done successfully, we go back to R/3 T-code LBWE to set the periodicity of the V3 job, which could be hourly or two-hourly depending on the number of transactions (the more transactions, the more frequently the V3 job should be scheduled). Finally, the locks can be released so that users can start recording transactions.

     NOTE: These migrations are always done on weekends after office hours, and the T-codes are usually locked and unlocked by the Basis people.

 

 


CO-PA Extraction:

 

1.   What is the naming convention for a CO-PA datasource?

A. 1_CO_PA_(client number: any 3-digit number except 000 and 066)_(any meaningful name).
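The naming rule can be checked mechanically. Here is an illustrative Python validator for the convention quoted above; the exact character rules for the name part are an assumption:

```python
import re

# 1_CO_PA_ + three-digit client number (not 000 or 066) + a name part
PATTERN = re.compile(r"^1_CO_PA_(\d{3})_[A-Z0-9_]+$")

def valid_copa_name(name):
    """Return True if the datasource name follows the stated convention."""
    m = PATTERN.match(name)
    return bool(m) and m.group(1) not in ("000", "066")
```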

 

2.  On what basis do we generate the CO-PA datasource?

A.     Since the CO-PA datasource is generated from an operating concern, we call it a customer-generated extractor.

 

3.  What is the T-Code to Generate the CO-PA data Source?

A.    KEB0.

 

4. Did you generate the datasource based on costing-based or account-based CO-PA? And what is the difference?

A. We generated the CO-PA datasource using costing-based CO-PA.

           Costing-based CO-PA is preferred in the case of a product-based company, and it also considers costs such as advertising and disbursement costs. In costing-based CO-PA the company code field is mandatory, and it gives a detailed split of all the values.

            Account-based CO-PA is preferred in the case of a service-based company. In account-based CO-PA the company code, controlling area and cost element fields are mandatory, and it gives cumulated values for analysis.

 

5.   Do we have a business content InfoSource in the case of CO-PA?

A.    No, only an application proposal or a user-defined InfoSource.

 

6.   In the case of the CO-PA extractor, how is the delta handled?

A.    CO-PA runs delta based on a timestamp. We can look at the timestamps set for a datasource using T-code KEB2.

 

7. What do you mean by safety delta?

A.        When we initialize a CO-PA datasource, we run the delta load only after a minimum of half an hour has passed between the initialize delta update and the delta update; this time gap is called the safety delta.

 

8. Explain the steps to generate a CO-PA datasource.

A.   Go to T-code KEB0 and specify the name of the CO-PA datasource following the naming convention 1_CO_PA_(client number: any 3-digit number except 000 and 066)_(any meaningful name), select the 'Create' radio button, specify the operating concern, choose costing-based or account-based depending on the requirement, and click the Execute button. On the next screen, give the short, medium and long descriptions, specify the field name for partitioning (the profitability segment), click the 'InfoCatalog' button and save; the datasource is generated. Then replicate the datasource on the BW side and start migrating the data.

 

9.   What do you mean by the summarization level or the field name for partitioning?

A. It is nothing but the profitability segment that we specify when we post an entry.

 

10.  Explain the data structures or the different tables in CO-PA.

A.  CE1xxxx – actual line items.

     CE2xxxx – planned line items.

     CE3xxxx – segment level.

     CE4xxxx – segment table.

 
