Tuesday, April 12, 2011

Difference between an Operational Data Store (ODS) and an InfoCube

InfoCubes have a multidimensional structure with dimension tables (a maximum of 16, of which 13 are available for custom use) and one fact table. They are meant for summarized records.
ODS objects store data at a more granular level. They have flat structures, like a table in R/3, and they offer a unique "overwrite" feature that cubes lack.
You can also use an ODS as a source for loading data onward into a cube.
One major difference is the manner of data storage. In an ODS, data is stored in flat tables; by "flat" we mean ordinary transparent tables. A cube, by contrast, is composed of multiple tables arranged in a star schema and joined by SIDs, the purpose being multidimensional reporting.
Another difference: in an ODS you can update an existing record given its key. In cubes there is no such thing; a cube accepts duplicate records and, during reporting, sums the key figures up. There is no editing of previous record contents, only adding. With an ODS, the procedure is: update if the record exists (based on the table key), otherwise add the record.
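To make the contrast concrete, here is a minimal ABAP sketch of the two update behaviors, using made-up internal tables and names rather than real BW loading code:

  REPORT zods_vs_cube_demo.                   " hypothetical program name

  TYPES: BEGIN OF ty_rec,
           doc_no TYPE n LENGTH 10,           " key field / characteristic
           amount TYPE p LENGTH 8 DECIMALS 2, " data field / key figure
         END OF ty_rec.

  DATA: lt_ods  TYPE SORTED TABLE OF ty_rec WITH UNIQUE KEY doc_no,
        lt_cube TYPE STANDARD TABLE OF ty_rec,
        ls_new  TYPE ty_rec.

  " ODS behavior: overwrite if the key exists, otherwise add the record.
  READ TABLE lt_ods TRANSPORTING NO FIELDS
       WITH TABLE KEY doc_no = ls_new-doc_no.
  IF sy-subrc = 0.
    MODIFY TABLE lt_ods FROM ls_new.          " overwrite the data fields
  ELSE.
    INSERT ls_new INTO TABLE lt_ods.          " add a new record
  ENDIF.

  " Cube behavior: always add; key figures are summed at query time.
  APPEND ls_new TO lt_cube.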
ODS
- Stores line-item-level detail; more granular.
- Aggregates cannot be created on an ODS.
- ODS objects are based on flat tables.
- Only two-dimensional reporting is possible on an ODS.
- Overwrite feature is available while loading records.
Infocube
- Stores summarized data; less granular.
- Aggregates can be created on top of InfoCubes for better query performance.
- Multidimensional reporting is possible on an InfoCube.
- There is no overwrite feature while loading records.
InfoCubes are multidimensional objects in which fact and dimension tables are available, whereas an ODS is not a multidimensional object: there are no fact or dimension tables, and it consists of flat transparent tables.
An InfoCube contains characteristics and key figures, while an ODS contains key fields and data fields; non-key characteristics can be kept in the data fields.
Sometimes we need detailed reports, which we can get through an ODS. ODS objects store data in granular form, i.e., at a higher level of detail, whereas the data in an InfoCube is aggregated.
From a reporting point of view, an ODS is used for operational reporting, whereas InfoCubes are used for multidimensional reporting.
ODS objects are used to merge data from one or more InfoSources, a facility InfoCubes do not have.
The default update type for an ODS object is overwrite; for an InfoCube it is addition. ODS objects are used to implement delta handling in BW: data is loaded into the ODS object as new records, as updates to existing records recorded in the change log, or as overwrites of existing records in the active data table, controlled by 0RECORDMODE.
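For illustration, a simplified delta example (real change-log records also carry technical fields such as the request and 0RECORDMODE): suppose a record with key 4711 is first loaded with amount 100 and later changed to 120. The active data table then holds 4711/120, while the change log stores a before image (4711, -100) and an after image (4711, +120). Because the images carry reversed signs, the change log can feed a cube additively: the cube's original +100 plus -100 plus +120 nets out to the correct 120.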
You cannot load data into an ODS using the IDoc transfer method, but you can for an InfoCube.
You cannot create aggregates on an ODS, and you cannot create InfoSets on an InfoCube.
ODS objects can be used in the following scenarios. An ODS is not mandatory; whether to use one depends on the requirements:
- When you want to use the overwrite facility, i.e., to overwrite non-key characteristics and key figures in the data fields.
- When you want detailed reports.
- When you want to merge data from two or more InfoSources.
- When you want to drill down from an InfoCube to the ODS through the RRI (report-to-report interface) to get detailed data.
- When you want to create an external file.
The most important difference between an ODS and an InfoCube is the existence of key fields in the ODS. In an ODS you can have up to 16 InfoObjects as key fields; any other InfoObjects will either be added or overwritten. So if you have flat files and want to be able to upload them multiple times, you should not load them directly into the InfoCube; otherwise you need to delete the old request before uploading a new one. One disadvantage: if you delete rows in the flat file, the rows are not deleted in the ODS.
I also use ODS objects to upload control data for update or transfer routines. You can simply do a SELECT on the ODS active table (/BIC/A<ODS name>00) to get the data.
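For example, a minimal sketch, assuming a hypothetical ODS object ZCTRL (its active table would then be named /BIC/AZCTRL00):

  " Read control data from the active table of the hypothetical ODS ZCTRL.
  DATA lt_ctrl TYPE STANDARD TABLE OF /bic/azctrl00.

  SELECT * FROM /bic/azctrl00 INTO TABLE lt_ctrl.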
An ODS is used as an intermediate storage area for operational data in the data warehouse. It contains highly granular data and is based on flat tables, which keeps ODS modeling simple. We can cleanse, transform, merge, and sort data to build staging tables that can later be used to populate an InfoCube.
An InfoCube is a multidimensional data container used as a basis for analysis and reporting. It consists of a fact table and its associated dimension tables in a star schema: the fact table sits in the middle, surrounded by several dimension tables. The central fact table is usually very large, measured in gigabytes; it is the table from which you retrieve the interesting data. The dimension tables typically amount to only 1 to 5 percent of the size of the fact table. Common dimensions are unit, time, and so on.
There are different types of InfoCubes in BW, such as basic InfoCubes and remote InfoCubes.
An ODS is a flat data container used for reporting and for data cleansing/quality assurance purposes. It is not based on a star schema and is used primarily for detail reporting rather than for dimensional analysis.
An InfoCube has a fact table, which contains the facts (key figures), and relations to dimension tables. This means that an InfoCube consists of more than one table, and these tables all relate to each other. This is also called the star schema, because the dimension tables all relate to the fact table, which is the central point. A dimension is, for example, the customer dimension, which contains all the data that is important about the customer.
An ODS is a flat structure, essentially one table that contains all the data. Most of the time you use an ODS for line-item data and then aggregate this data into an InfoCube.
An ODS holds transaction-level data and behaves like a flat table; it is not based on a multidimensional model. An ODS has three tables: 1. the active data table, 2. the change log, 3. the new data (activation queue) table.
A cube holds aggregated data that is not as detailed as an ODS and is based on a multidimensional model. A cube has two fact tables: 1. the E table (compressed data), 2. the F table (uncompressed requests).

More Interview Questions On CIF

1. We have 50 integration models for each object type, since we have 50 plants. Should we define fewer integration models?
Before Plug-In 2002.1, we recommend that you define fewer models for performance reasons. Generally, the size of the integration models depends on the data volume for each plant. To optimize the number of integration models, we recommend that you purchase consulting expertise.
As of Plug-In 2002.1, the "runtime version of the integration model" is available. Using the runtime version guarantees better performance in online operation (see also the documentation for the report RCIFIMAX).
Even though the number of integration models no longer affects performance significantly, we recommend that you keep it low so as not to increase the runtime for generating the runtime version.
That is, do not regularly create new integration models; rather, only create new versions of existing integration models.

You can find release notes for the Plug-In on SAP Service Marketplace at: "http://service.sap.com/R3-PLUG-IN" -> Media Center -> Release Notes -> PI 2002.1 Release Notes SAP APO.

2. Do we have to transfer the master data of the vendor together with the stock data, so that consignment stocks are transferred?
Yes - this ensures that the consignment stock is correctly linked to the vendor location in SAP APO.

3. Master record objects that were changed since the last transfer are transferred again to SAP APO by initial transfer.
Does this mean that the report RCPTRAN4 (evaluate and send change recordings) does not have to run?
And what about the report RBDCPCLR (delete change pointers) for reorganizing the change pointers?
You do not have to execute the report RCPTRAN4 in this case, since the dataset in SAP APO is up to date due to the initial data transfer.
You should use the report RBDCPCLR to delete "old" change pointers.

4. The master data and movement data for material 4711 is in two active integration models (A and B). Assuming that one of the two is deactivated - what happens then?
The master data and movement data remain active. See also Note 533755 "Description of the delta logic or the program RIMODINI".

5a. What happens if you deactivate an integration model that has master record objects?
Planning in SAP APO is still possible. However, you can no longer transfer the transaction data to SAP R/3.
5b. What happens with the master and movement data in SAP APO after the master data was deactivated?
The master data remains in SAP APO.
5c. What happens with the transaction data if there is another activation?
The transaction data is transferred again. Provided that you reschedule (for example, via automatic planning; this does not apply to planned/manufacturing orders), the old transaction data is deleted. Note that the integration model for the master data must also be active when the transaction data is transferred again.

6. How do I change from small to large integration models?
You activate the large model (all data already selected in active models is not transferred again) and then deactivate the small models.

7. Why are my orders not transferred from SAP R/3 to SAP APO?
Refer to the information contained in Note 424927 "No order transfer from R/3 to APO" and check your settings accordingly.

8. My material withdrawals are not reflected in the APO order, but the stocks do change.
Refer to the information contained in Note 421940 "No reduction of order reservations in APO" and check your settings accordingly.

9. Can data be transferred from SAP R/3 to SAP APO using BTE change pointers (for example, from table MBEW using user exits)?
Since the standard APO system does not require data from table MBEW, this data is not passed to the CIF when data changes are transferred using BTEs. Via BTE, data for all SAP standard fields is transferred from SAP R/3 to SAP APO from table MARA (plant-independent material data), table MARC (plant-dependent material data), table MARM (conversion of units of measure), and table MAKT (material texts). Only this data is available in the customer exit in SAP R/3 as well. An alternative here is to transfer the material master changes using ALE change pointers.
For example, transferring the moving average price/periodic unit price (MBEW-VERPR) using the user exit CIFMAT01 does not work; for this, the BD52 Customizing must be changed and the data must be transferred using the ALE method.
Changes to customer-specific fields can likewise only be transferred to SAP APO using ALE in combination with customer exits.

10. How can I avoid overlaps and thereby inconsistencies during the integration model transfer?
If you use parallel processing for the initial data transfer, transaction data may be transferred to SAP APO before the corresponding master data is available in SAP APO. For example, you can then create in-house production orders in SAP APO without PPM even though this should not be the case. Unfortunately, this cannot be prevented technically. The integration models must be cut accordingly and scheduled in background jobs so that this does not happen. Background jobs also check whether queues have been processed correctly and without errors.

11. Where can I find information about parallel processing during the initial data transfer?
You can find release notes for the Plug-In on SAP Service Marketplace at: "http://service.sap.com/R3-PLUG-IN" -> Media Center -> Release Notes -> PI 2002.1 Release Notes SAP APO.
Application log
1. Is there a way of analyzing errors in the partner system directly from the application log?
For information about this, see the following notes:
Note 396838 "R/3: Displaying application log from queue entry"
Note 396839 "APO: Jump to application log from incorrect queue entry"
Note 457399 "Branching to the application log with inbound queues"
Note 457418 "APO: Branching to the application log with inbound queues"

2. How can I find CIF logs?
In the R/3 and APO SAP systems, you can analyze the application log using the following transactions:
SAP R/3 transaction CFG1 (see also Note 544011) and SAP APO transaction /N/SAPAPO/C3 (see also Note 544389).
Interactive user
Question: When do I have to create a dialog user if no ATP check is to be used?
Answer: This is necessary for analyzing the data transfer and for debugging. Also check Note 352844.
As of Plug-In 2002.2, it is possible to work with separate authorizations for every application.
SNP PPMs
Question: Are SNP PPMs taken into account in change management?
Answer: No (as of Plug-In 2001.2).
Questions on release statuses
1. You want to use a new SAP APO 3.1 with the same system name as your old SAP APO 3.0, which is deactivated. Does this work?
Yes, as long as the "old" APO System is deactivated. The name for a logical system (LOGSYS) can only be assigned once.
You must also consider the following: in the SAP R/3 system, unique GUIDs are created for the mapping between SAP R/3 and SAP APO documents (see the CIF*MAP tables in R/3). This may cause discrepancies in the assignment of GUIDs to documents in SAP APO when you start a new initial data transfer.

2. Does SAP APO 3.1 work with PI 2001.1?
PI 2001.2 is the minimum requirement in this case. For further questions on the Plug-In release, go to SAP Service Marketplace; you will find further information at "http://service.sap.com/R3-PLUG-IN" -> Integration of SAP R/3 and mySAP.com Components.
One SAP R/3 system with several SAP APO systems
Question: A client of an SAP R/3 system is to be operated with several SAP APO Systems (Release 3.0 and 3.1). Does this cause problems?
Answer: In theory, this does not cause problems. However, note the following: a planned order or production order (for example, order 4711), a purchase requisition (for example, PReq 4712, item 0010), or a sales order item can only be sent to one SAP APO system; in other words, a PReq created in SAP APO system 1 is not copied to SAP APO system 2. The SAP APO systems must therefore plan different material/plant combinations.

No stock transfers should occur between the SAP APO systems.

This would cause problems because transaction data that was sent from the R/3 system to both APO systems may receive different updates in the retransfer from the two APO systems. Even if the updates from both APO systems are the same, they cannot be processed in such a way that a consistent status is achieved afterwards.

In the case of other objects, such as TP/VS and production campaigns, problems may occur because updates from the APO systems can no longer take place in a single indivisible logical unit of work (LUW). This may be the case if some of the referencing transaction data originates in one APO system and other transaction data originates in the other APO system.


Further problems may occur in the APO systems due to different release levels, if the release level of the APO system is relevant for the dispatch in the R/3 outbound. In this case, it cannot be guaranteed that all target systems are always handled in a loop for all object types before each APO release query.
qRFC monitor (transactions SMQ1/SMQ2)
1. Can I restrict access to the 'Delete' function in transaction SMQ1 using authorizations (the display and processing functions should remain available to the user)?
There are three authorization groups for transactions SMQ1 and SMQ2:
  • Group 1 cannot call SMQ1/SMQ2 at all.
  • Group 2 can call SMQ1/SMQ2 but can only display and activate queues (not delete them!). The transaction authorization for SMQ1 and SMQ2 is required for this.
  • Group 3 can call SMQ1/SMQ2 and use all functions. For this, the value NADM must be defined in the authorization object S_ADMI_FCD.

2. Is there a better display of the queues than the qRFC monitor for outbound queues (SMQ1) or inbound queues (SMQ2)?
  • Yes, in SAP APO you have the SCM Queue Manager in transaction /N/SAPAPO/CQ (see also Note 419178).
  • As of SCM 4.1, you can also use the CIF cockpit (transaction /SAPAPO/CC), which provides an overview of and access to all CIF-relevant transactions and Customizing settings of the APO system and all connected ERP systems.
The CIF Cockpit
As of SCM 4.1, you can use the CIF cockpit (transaction /SAPAPO/CC) in SAP APO. It provides an overview of and access to all CIF-relevant transactions and Customizing settings of the APO system and all connected ERP systems.
CIF queue names
For a list of all current CIF queue names that are used to transfer data between ERP systems and SAP APO, refer to Note 786446.

Friday, January 21, 2011

Nice SCM portal

Hi Guys,
Today I came across a blog by Shaun Snapp. His articles give you very useful information regarding SCM APO.
If possible, visit this link:
http://www.scmfocus.com/

Thursday, January 13, 2011

BAPI and ALE Integration


The objective of "Business Application Programming Interfaces" (BAPIs) and "Application Link Enabling" (ALE) integration is to enable future ALE scenarios to use BAPI interfaces. One advantage is that it will be much easier for both SAP Development and SAP customers to develop new ALE scenarios. Another advantage is that BAPI interfaces will be able to use already existing ALE functions (e.g. error handling and writing links asynchronously).
Further advantages are:
  • object oriented approach
  • application maintains one interface only
  • reduction of generation program errors
Description of Function
When a BAPI is defined, ALE outbound and inbound interfaces are generated and entered in transport requests, provided that the following functions are generated at the time the BAPI is defined:
  • an IDoc type and its segments (IDoc = intermediate document)
  • a "wrapper" function module which decides whether the BAPI is called locally or from another system (see Overview Graphic). Local calls can call the BAPI immediately or via an IDoc which restarts it.
  • an ALE outbound function module which puts BAPI interface data into an IDoc and triggers ALE outbound processing
  • an ALE inbound function module which transfers BAPI interface data from an IDoc into the BAPI interface structure and calls the BAPI
  • ALE customizing for the new interface.
The flow diagram in the Overview Graphic shows the runtime flow logic of a BAPI call.
The generation can also be used when an R/3 System is connected to a non-R/3 system. "One-way" means that either the outbound or the inbound side, but not both, is implemented in the R/3 System. This means that a one-way interface cannot be used to exchange data between two R/3 Systems.
The generation can only be used for asynchronous interfaces, i.e. a receiver processes data without return parameters to the sender.
In principle the generation can also be used to support an IDoc interface for Electronic Data Interchange (EDI).



To call the BAPI "BAPI_X_CREATE" in System 1, the program calls the generated "wrapper" function module "ALE_X_CREATE", whose interface contains all the BAPI interface parameters.
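A call might look roughly like this. This is only a sketch: the actual parameter names are generated from whatever interface BAPI_X_CREATE defines, so the names below are invented:

  " Hypothetical structures mirroring the BAPI interface of BAPI_X_CREATE.
  DATA: ls_header TYPE zx_header,
        lt_items  TYPE STANDARD TABLE OF zx_item.

  " Call the generated wrapper instead of the BAPI itself; the wrapper
  " decides whether to run BAPI_X_CREATE locally or to hand the data to
  " ALE outbound processing as an IDoc for the receiving system.
  CALL FUNCTION 'ALE_X_CREATE'
    EXPORTING
      header_data = ls_header          " hypothetical parameter
    TABLES
      item_data   = lt_items           " hypothetical parameter
    EXCEPTIONS
      OTHERS      = 1.

  " In the remote case, the COMMIT triggers the dispatch of the IDoc.
  COMMIT WORK.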
Application Area
All applications that provide a write BAPI can use this functionality, especially when the BAPI is to be called from another R/3 System. Customers developing their own ALE scenarios also benefit from these advantages.