
Creating an Excel Data Server
Posted on July 6, 2011

PREPARE YOUR EXCEL SPREADSHEET

First, open a Microsoft Excel spreadsheet; we will need to define a named range.

Step: 1) Open the spreadsheet.

Step: 2) Select Formulas –> Define Name.


Step: 3) Select the range for the data.

DEFINE AN ODBC DATA SOURCE

Step: 1) ODI will use an ODBC connection to natively access Microsoft Excel. Select Start –> Administrative Tools –> Data Sources (ODBC).


Step: 2) Click Add, select the Microsoft Excel Driver, and press 'Finish'. Specify the Data Source Name and Description, then click 'Select Workbook'.


Step: 3) Specify the Excel file path and press OK. You should now see your new ODBC Data Source listed in the ODBC Data Source Administrator.


CREATE A DATA SERVER IN TOPOLOGY MANAGER

Step: 1) Open Topology Manager and go to Physical Architecture. Right-click on the Microsoft Excel technology and select New Data Server. In the Data Server window, enter a name in the Name field.


Step: 2) Go to the JDBC tab. Select the Sun JDBC-ODBC Bridge in the JDBC Driver list.

Step: 3) In the JDBC URL field, replace the DSN placeholder in the URL template with the name of the ODBC Data Source you specified earlier. I used 'Excel Data Server' in this example.
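With the Sun JDBC-ODBC bridge, the URL takes the form jdbc:odbc:&lt;odbc_dsn_name&gt;. Assuming the data source name from this example, the finished URL would look like:

    jdbc:odbc:Excel Data Server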

 Step: 4) Click on Test and make sure you get a successful connection.


Click OK.

Step: 5) Go to the newly created data server, right-click, and select New Physical Schema.

Step: 6) Verify the newly created Physical Schema.


Create New Logical Schema:

Step: 1) Select Microsoft Excel in the Logical Architecture tab, right-click, and select New Logical Schema. Specify the name and map it to Contexts.


Create Model for Excel

Step: 1) Go to the Designer Navigator, select the Models tab, and create a new model for Excel.

Step: 2) Specify the Name, Technology and Logical schema.

 

Step: 3) Select the Reverse Engineer tab. Select the System Table check box and go to the Selective Reverse-Engineering tab.

Step: 4) Here, select the Selective Reverse-Engineering and Objects to Reverse Engineer check boxes.


Step: 5) Save it and perform a reverse-engineer of this model.

Step: 6) Right-click on the datastore and select View Data.
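Behind the scenes, ODI queries the named range as if it were a table through the ODBC bridge. For a named range called EMP_DATA (a hypothetical name), the SQL sent to the driver would resemble:

    select * from EMP_DATA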


Done!

Creating an XML Data Server
Posted on July 6, 2011

An XML Data Server corresponds to one XML file that is accessible through your local network.

XML files are accessed through the Oracle Data Integrator Driver for XML. This JDBC driver loads the XML file's hierarchical structure into a relational structure held in a schema stored in memory, enabling SQL queries through JDBC. It is also able to unload the relational structure back into the XML file.
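For example, once the driver has loaded the file into its in-memory schema, each XML element type can be queried as an ordinary table over JDBC. Assuming an XML file containing a DEPARTMENT element (a hypothetical name), a query would look like:

    select * from DEPARTMENT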

You must have the following information:

·         The location of the DTD file associated with your XML file

·         The location of the XML file

·         The name of the Root element of your XML file

Creation of the Data Server

 

Step: 1) Select XML from the Technology list view. Right click and select New Data Server.


Step: 2) Fill in the following fields in the Definition tab:

·         Name: Name of the Data Server as it will appear in Oracle Data Integrator. 

·         User/Password: Not used here.

Step: 3) Fill in the following fields in the JDBC tab:

                        JDBC Driver: com.sunopsis.jdbc.driver.xml.SnpsXmlDriver

                        JDBC URL: jdbc:snps:xml?[property=value&property=value...]

 

JDBC Driver Properties: 
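The most commonly used properties (per the ODI XML driver documentation; check your driver version for the full list) are:

·         f: the path of the XML file

·         d: the path of the DTD (or XSD) file

·         re: the name of the root element

·         ro: read-only mode (true/false)

·         s: the name of the relational schema

·         case_sens: case sensitivity of object names (true/false)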


Example: jdbc:snps:xml?f=../xml/department.xml&re=Department_DIM&ro=false&case_sens=true&s=Depart

 

 

Step: 4) Click the Test Connection button. It will first ask you to save; press 'OK'. It will then ask to create a physical schema; press 'OK'.

Creating a Physical Schema for XML

The Physical Schema will be a storage location for the tables associated with the XML file.

Step: 1) Select the appropriate XML Data Server then right-click and select New Physical Schema. The Physical

Schema window will appear.


Step: 2) Name the Schema and Work Schema. Note that if you have named the schema with the s= property of the JDBC URL of the XML Data Server, you must use the same schema name here.

Step: 3) Save it. 

Creating a Logical Schema for XML


A Logical Schema can be associated with only one Physical Schema in a given Context.

Creating Model for XML:

Step: 1) Select the Models tab in the Designer Navigator and select New Model.

Step: 2) In the Definition tab, specify the following details.

Step: 3) Perform the reverse-engineering.



Introduction to Knowledge Modules
Posted on June 5, 2011

1) What is a Knowledge Module?

Knowledge Modules (KMs) are code templates. Each KM is dedicated to an individual task in the overall data

integration process. The code in the KMs appears in nearly the form that it will be executed except that it includes

Oracle Data Integrator (ODI) substitution methods enabling it to be used generically by many different integration

jobs. The code that is generated and executed is derived from the declarative rules and metadata defined in the

ODI Designer module.

2)  Types of Knowledge Modules:

2.1) Reverse-Engineering Knowledge Modules (RKM)


The RKM role is to perform customized reverse-engineering for a model. The RKM is in charge of connecting to the application or metadata provider, then transforming and writing the resulting metadata into Oracle Data Integrator's repository. The metadata is written temporarily into the SNP_REV_xx tables. The RKM then calls the Oracle Data Integrator API to read from these tables and write to Oracle Data Integrator's metadata tables of the work repository in incremental update mode.

A typical RKM follows these steps:

a) Cleans up the SNP_REV_xx tables from previous executions using the OdiReverseResetTable tool.

b) Retrieves sub-models, datastores, columns, unique keys, foreign keys and conditions from the metadata provider into the SNP_REV_SUB_MODEL, SNP_REV_TABLE, SNP_REV_COL, SNP_REV_KEY, SNP_REV_KEY_COL, SNP_REV_JOIN, SNP_REV_JOIN_COL and SNP_REV_COND tables.

c) Updates the model in the work repository by calling the OdiReverseSetMetaData tool.

Available RKM in ODI 11.1.1

1) RKM DB2/400
2) RKM FILE (FROM EXCEL)
3) RKM HYPERION ESSBASE
4) RKM FINANCIAL MANAGEMENT
5) RKM HYPERION PLANNING
6) RKM INFORMIX
7) RKM INFORMIX SE
8) RKM MSSQL
9) RKM ORACLE
10) RKM ORACLE (JYTHON)
11) RKM ORACLE DATA QUALITY
12) RKM ORACLE OLAP (JYTHON)
13) RKM SQL (JYTHON)
14) RKM TERADATA

2.2) Check Knowledge Modules (CKM)

The CKM is in charge of checking that records of a data set are consistent with defined constraints. The CKM is used to maintain data integrity and participates in the overall data quality initiative. The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table to which it writes all the rejected records. The CKM can also remove the erroneous records from the checked result set. The CKM can be used in 2 ways:


a)    To check the consistency of existing data. This can be done on any datastore or within interfaces, by setting

the STATIC_CONTROL option to “Yes”. In the first case, the data checked is the data currently in the datastore. In

the second case, data in the target datastore is checked after it is loaded.

b)    To check consistency of the incoming data before loading the records to a target datastore. This is done by

using the FLOW_CONTROL option. In this case, the CKM simulates the constraints of the target datastore on the

resulting flow prior to writing to the target.

The following figures show how a CKM operates in both STATIC_CONTROL and FLOW_CONTROL modes.

(STATIC_CONTROL)

 

  In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them against the data of the table. Records that don’t match the constraints are written to the “E$” error table in the staging area.

(FLOW_CONTROL)

 

In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the Interface. It checks these

constraints against the data contained in the “I$” flow table of the staging area. Records that violate these

constraints are written to the “E$” table of the staging area. 

In both cases, a CKM usually performs the following tasks:

a) Create the “E$” error table on the staging area. The error table should contain the same columns as the

datastore, as well as additional columns to trace error messages, check origin, check date, etc.

b) Isolate the erroneous records in the “E$” table for each primary key, alternate key, foreign key, condition,

mandatory column that needs to be checked.

c) If required, remove erroneous records from the table that has been checked.
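As a concrete illustration, the primary key check in FLOW_CONTROL mode generates SQL roughly like the following sketch. The table and column names here are hypothetical; the real statement is produced from the KM template through ODI substitution methods:

    insert into E$_CUSTOMER (ERR_TYPE, ERR_MESS, CHECK_DATE, CUST_ID, CUST_NAME)
    select 'F', 'Duplicate primary key', sysdate, CUST_ID, CUST_NAME
    from I$_CUSTOMER
    where CUST_ID in (
      select CUST_ID
      from I$_CUSTOMER
      group by CUST_ID
      having count(*) > 1
    );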


Available CKM in ODI 11.1.1

 

a) CKM HSQL

b) CKM NETEZZA

c) CKM ORACLE 

d) CKM SQL

e) CKM SYBASE IQ

f) CKM TERADATA

 

2.3) Loading Knowledge Modules (LKM)

An LKM is in charge of loading source data from a remote server to the staging area. It is used by interfaces when some of the source datastores are not on the same data server as the staging area. The LKM implements the declarative rules that need to be executed on the source server and retrieves a single result set that it stores in a "C$" table in the staging area.

 

The LKM creates the "C$" temporary table in the staging area; this table will hold the records loaded from the source server.

a) The LKM obtains a set of pre-transformed records from the source server by executing the appropriate transformations on the source. Usually, this is done by a single SQL SELECT query when the source server is an RDBMS. When the source doesn't have SQL capacities (such as flat files or applications), the LKM simply reads the source data with the appropriate method (read file or execute API).

b) The LKM loads the records into the “C$” table of the staging area.

An interface may require several LKMs when it uses datastores from different sources. When all source datastores are on the same data server as the staging area, no LKM is required.
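Sketched in SQL, the work of a generic LKM (an LKM SQL to Oracle style strategy; all names hypothetical) reduces to two statements. In practice the agent reads the source result set and writes it into the staging area:

    -- executed on the staging area: create the C$ work table
    create table C$_0CUSTOMER (
      CUST_ID   number,
      CUST_NAME varchar2(100)
    );

    -- executed on the source server: pre-transform and extract;
    -- the agent loads this result set into C$_0CUSTOMER
    select C.CUSTID      as CUST_ID,
           upper(C.NAME) as CUST_NAME
    from   CUSTOMER C
    where  C.ACTIVE = 'Y';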

 Available LKM in ODI 11.1.1

1)    LKM ATTUNITY TO SQL

2)    LKM DB2 400 JOURNAL TO SQL

3)    LKM DB2 400 JOURNAL TO DB2 400

4)    LKM DB2 UDB to DB2 UDB (EXPORT_IMPORT) 

5)    LKM File to DB2 UDB (LOAD) 

6)    LKM File to MSSQL (BULK) 

7)    LKM File to Netezza (EXTERNAL TABLE) 

8)    LKM File to Netezza (NZLOAD) 


9)    LKM File to Oracle (EXTERNAL TABLE) 

10)    LKM File to Oracle (SQLLDR) 

11)    LKM File to SQL 

12)    LKM File to Sybase IQ (LOAD TABLE) 

13)    LKM File to Teradata (TTU) 

14)    LKM Hyperion Essbase DATA to SQL 

15)    LKM Hyperion Essbase METADATA to SQL 

16)    LKM Hyperion Financial Management Data to SQL 

17)    LKM Hyperion Financial Management Members To SQL 

18)    LKM Informix to Informix (SAME SERVER) 

19)    LKM JMS to SQL 

20)    LKM JMS XML to SQL 

21)    LKM MSSQL to MSSQL (BCP) 

22)    LKM MSSQL to MSSQL (LINKED SERVERS) 

23)    LKM MSSQL to ORACLE (BCP SQLLDR) 

24)    LKM MSSQL to SQL (ESB XREF) 

25)    LKM Oracle BI to Oracle (DBLINK) 

26)    LKM Oracle BI to SQL 

27)    LKM Oracle to Oracle (data pump) 

28)    LKM Oracle to Oracle (DBLINK) 

29)    LKM SQL to DB2 400 (CPYFRMIMPF) 

30)    LKM SQL to DB2 UDB 

31)    LKM SQL to DB2 UDB (LOAD) 

32)    LKM SQL to MSSQL 

33)    LKM SQL to MSSQL (BULK) 

34)    LKM SQL to Oracle 

35)    LKM SQL to SQL 

36)    LKM SQL to SQL (ESB XREF) 

37)    LKM SQL to SQL (JYTHON) 

38)    LKM SQL to SQL (row by row) 

39)    LKM SQL to Sybase ASE 

40)    LKM SQL to Sybase ASE (BCP) 

41)    LKM SQL to Sybase IQ (LOAD TABLE) 

42)    LKM SQL to Teradata (TTU) 

43)    LKM Sybase ASE to Sybase ASE (BCP) 

2.4) Integration Knowledge Modules (IKM)

The IKM is in charge of writing the final, transformed data to the target table. Every interface uses a single IKM. When the IKM is started, it assumes that all loading phases for the remote servers have already carried out their tasks. This means that all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or the source datastores are on the same data server as the staging area.

Therefore, the IKM simply needs to execute the "Staging and Target" transformations, joins and filters on the "C$" tables and on tables located on the same data server as the staging area. The resulting set is usually processed by the IKM and written into the "I$" temporary table before being loaded to the target. These final transformed records can be written in several ways depending on the IKM selected in your interface. They may be simply appended to the target, or compared for incremental updates or for slowly changing dimensions.

There are 2 types of IKMs: those that assume that the staging area is on the same server as the target datastore, and those that can be used when it is not. These are illustrated below:

(Staging Area on Target)


When the staging area is on the target server, the IKM usually follows these steps:

a) The IKM executes a single set-oriented SELECT statement to carry out staging area and target declarative rules

on all “C$” tables and local tables (such as D in the figure). This generates a result set.

b) Simple “append” IKMs directly write this result set into the target table. More complex IKMs create an “I$” table

to store this result set.

c) If the data flow needs to be checked against target constraints, the IKM calls a CKM to isolate erroneous records

and cleanse the “I$” table.

d) The IKM writes records from the “I$” table to the target following the defined strategy (incremental update,

slowly changing dimension, etc.).

e) The IKM drops the “I$” temporary table.

f) Optionally, the IKM can call the CKM again to check the consistency of the target datastore.

These types of KMs do not manipulate data outside of the target server. Data processing is set-oriented for

maximum efficiency when performing jobs on large volumes.
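To make step d) concrete, an incremental update strategy on Oracle (in the spirit of IKM Oracle Incremental Update (MERGE)) generates a statement of roughly this shape, with hypothetical table and column names:

    merge into TARGET_CUSTOMER T
    using I$_CUSTOMER I
    on (T.CUST_ID = I.CUST_ID)
    when matched then
      update set T.CUST_NAME = I.CUST_NAME
    when not matched then
      insert (T.CUST_ID, T.CUST_NAME)
      values (I.CUST_ID, I.CUST_NAME);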

(Staging Area Different from Target) 

 


When the staging area is different from the target server, as shown in Figure, the IKM usually follows these steps:

a)  The IKM executes a single set-oriented SELECT statement to carry out declarative rules on all “C$” tables and

tables located on the staging area (such as D in the figure). This generates a result set. 

b) The IKM loads this result set into the target datastore, following the defined strategy (append or incremental

update).

This architecture has certain limitations, such as:

A CKM cannot be used to perform a data integrity audit on the data being processed.

Data needs to be extracted from the staging area before being loaded to the target, which may lead to

performance issues.

Available IKM in ODI 11.1.1
1) IKM Access Incremental Update
2) IKM DB2 400 Incremental Update
3) IKM DB2 400 Incremental Update (CPYF)
4) IKM DB2 400 Slowly Changing Dimension
5) IKM DB2 UDB Incremental Update
6) IKM DB2 UDB Slowly Changing Dimension
7) IKM File to Teradata (TTU)
8) IKM Informix Incremental Update
9) IKM MSSQL Incremental Update
10) IKM MSSQL Slowly Changing Dimension
11) IKM Netezza Control Append
12) IKM Netezza Incremental Update
13) IKM Netezza To File (EXTERNAL TABLE)
14) IKM Oracle AW Incremental Update
15) IKM Oracle BI to SQL Append
16) IKM Oracle Incremental Update
17) IKM Oracle Incremental Update (MERGE)
18) IKM Oracle Incremental Update (PL SQL)
19) IKM Oracle Multi Table Insert
20) IKM Oracle Slowly Changing Dimension
21) IKM Oracle Spatial Incremental Update
22) IKM SQL Control Append
23) IKM SQL Control Append (ESB XREF)
24) IKM SQL Incremental Update
25) IKM SQL Incremental Update (row by row)
26) IKM SQL to File Append
27) IKM SQL to Hyperion Essbase (DATA)
28) IKM SQL to Hyperion Essbase (METADATA)
29) IKM SQL to Hyperion Financial Management Data


30) IKM SQL to Hyperion Financial Management Dimension
31) IKM SQL to Hyperion Planning
32) IKM SQL to JMS Append
33) IKM SQL to JMS XML Append
34) IKM SQL to SQL Append
35) IKM SQL to Teradata (TTU)
36) IKM Sybase ASE Incremental Update
37) IKM Sybase ASE Slowly Changing Dimension
38) IKM Sybase IQ Incremental Update
39) IKM Sybase IQ Slowly Changing Dimension
40) IKM Teradata Control Append
41) IKM Teradata Incremental Update
42) IKM Teradata Multi Statement
43) IKM Teradata Slowly Changing Dimension
44) IKM Teradata to File (TTU)
45) IKM XML Control Append

 2.5) Journalizing Knowledge Modules (JKM)

JKMs create the infrastructure for Change Data Capture on a model, a sub-model or a datastore. JKMs are not used in interfaces, but rather within a model to define how the CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a table of changes, views on this table, and one or more triggers or log capture programs.
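A minimal sketch of the trigger-based variant (in the spirit of JKM Oracle Simple) is shown below. All names are hypothetical, and the real infrastructure also includes subscriber bookkeeping and journalizing (JV$) views:

    -- journal table holding references to the changed rows
    create table J$EMPLOYEE (
      JRN_SUBSCRIBER varchar2(30),
      JRN_FLAG       char(1),   -- 'I' = insert/update, 'D' = delete
      JRN_DATE       date,
      EMPLOYEE_KEY   number     -- primary key of the journalized table
    );

    -- trigger that records every change made to the source table
    create or replace trigger T$EMPLOYEE
    after insert or update or delete on EMPLOYEE
    for each row
    declare
      v_flag char(1);
    begin
      if deleting then v_flag := 'D'; else v_flag := 'I'; end if;
      insert into J$EMPLOYEE (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, EMPLOYEE_KEY)
      values ('SUNOPSIS', v_flag, sysdate,
              nvl(:new.EMPLOYEE_KEY, :old.EMPLOYEE_KEY));
    end;
    /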

Available JKM in ODI 11.1.1

1)    JKM DB2 400 Consistent 

2)    JKM DB2 400 Simple 

3)    JKM DB2 400 Simple (Journal) 

4)    JKM DB2 UDB Consistent 

5)    JKM DB2 UDB Simple 

6)    JKM HSQL Consistent 

7)    JKM HSQL Simple 

8)    JKM Informix Consistent 

9)    JKM Informix Simple 

10)    JKM MSSQL Consistent 

11)    JKM MSSQL Simple 


12)    JKM Oracle 10g Consistent (Streams) 

13)    JKM Oracle 11g Consistent (Streams) 

14)    JKM Oracle Consistent 

15)    JKM Oracle Consistent (Update Date) 

16)    JKM Oracle Simple 

17)    JKM Oracle to Oracle Consistent (OGG) 

18)    JKM Sybase ASE Consistent 

19)    JKM Sybase ASE Simple

2.6) Service Knowledge Modules (SKM)

SKMs are in charge of creating and deploying data manipulation Web Services to your Service Oriented Architecture

(SOA) infrastructure. SKMs are set on a Model. They define the different operations to generate for each datastore's web service. Unlike other KMs, SKMs do not generate executable code but rather the Web Services deployment

archive files. SKMs are designed to generate Java code using Oracle Data Integrator’s framework for Web Services.

The code is then compiled and eventually deployed on the Application Server’s containers.

Available SKM in ODI 11.1.1

1)    SKM HSQL 

2)    SKM IBM UDB 

3)    SKM Informix 

4)    SKM Oracle


Working With Change Data Capture
Posted on April 7, 2011

Changed Data Capture

The purpose of CDC is to enable applications to process changed data only. CDC enables ODI to track changes in source

data caused by other applications. When running integration interfaces, ODI can avoid processing unchanged data

in the flow. Loads will process only changes since the last load. The volume of data to be processed is dramatically

reduced.

Reducing the source data flow to only changed data is useful in many contexts, such as data synchronization and

replication. It is essential when setting up an event-oriented architecture for integration. In such an architecture,

applications make changes in the data (“Customer Deletion”, “New Purchase Order”) during a business process.

These changes are captured by Oracle Data Integrator and transformed into events that are propagated throughout

the information system.

CDC Techniques

1) Trigger-based: ODI will create and maintain triggers to keep track of the changes.

2) Log-based: ODI retrieves changes from the database logs (Oracle, AS/400).


3) Timestamp-based: Processes written with ODI can filter the data by comparing the timestamp value with the last load time (cannot process deletes).

4) Sequence number-based: If the records are numbered in sequence, ODI can filter the data based on the last value loaded (cannot process updates and deletes).
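For instance, the timestamp technique reduces to a filter of this form (column and variable names are hypothetical):

    select *
    from   SRC_ORDERS
    where  LAST_UPDATE_DATE > :LAST_LOAD_DATE  -- value saved at the previous load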

Changed Data Capture is performed by journalizing models. Journalizing a model consists of setting up the

infrastructure to capture the changes (inserts, updates and deletes) made to the records of this model’s datastores.

Oracle Data Integrator supports two journalizing modes:

•Simple Journalizing tracks changes in individual datastores in a model.

•Consistent Set Journalizing tracks changes to a group of the model’s datastores, taking into account the referential

integrity between these datastores. The group of datastores journalized in this mode is called a Consistent Set.

The Journalizing Components

The journalizing components are:

•Journals: Where changes are recorded. Journals only contain references to the changed records along with the

type of changes (insert/update, delete).

•Capture processes: Journalizing captures the changes in the source datastores either by creating triggers on the

data tables, or by using database-specific programs to retrieve log data from data server log files.

•Subscribers: CDC uses a publish/subscribe model. Subscribers are entities (applications, integration processes,

etc.) that use the changes tracked on a datastore or on a consistent set. They subscribe to a model's CDC to have

the changes tracked for them. Changes are captured only if there is at least one subscriber to the changes. When

all subscribers have consumed the captured changes, these changes are discarded from the journals.

•Journalizing views: Provide access to the changes and the changed data captured. They are used by the user to

view the changes captured, and by integration processes to retrieve the changed data.

These components are implemented in the journalizing infrastructure

Setting up Journalizing:

This is the basic process for setting up CDC on an Oracle Data Integrator data model. Each of these steps is

described in more detail below.

1. Set the CDC parameters

2. Add the datastores to the CDC

3. For consistent set journalizing, arrange the datastores in order

4. Add subscribers

5. Start the journals

Journalizing Tools:


Oracle Data Integrator provides a set of tools that can be used in journalizing to refresh information on the captured

changes or trigger other processes:

•SnpsWaitForData waits for a number of rows in a table or a set of tables.

•SnpsWaitForLogData waits for a certain number of modifications to occur on a journalized table or a list of

journalized tables. This tool calls SnpsRefreshJournalCount to perform the count of new changes captured.

•SnpsWaitForTable waits for a table to be created and populated with a pre-determined number of rows.

•SnpsRetrieveJournalData retrieves the journalized events for a given table list or CDC set for a specified

journalizing subscriber. Calling this tool is required if using Database-Specific Processes to load journalizing tables.

This tool needs to be used with specific knowledge modules.

•SnpsRefreshJournalCount refreshes the number of rows to consume for a given table list or CDC set for a

specified journalizing subscriber.

Implementing Changed Data Capture:

Step: 1) Import the appropriate JKM into the project.

Click the Projects tab. Expand the Procedure-Demo > Knowledge Modules node, right-click Journalization (JKM), and

select Import Knowledge Modules.

Step: 2) In the Models tab, create a new model named Oracle_Relational_01. For Technology, enter: Oracle. Select

the logical schema Sales_Order. Click the Reverse Engineer tab and set Context to development. Verify the setting,

as shown in the following screen. Click the Journalizing tab.


Step: 3) In the Knowledge Module menu, select JKM Oracle Simple.Procedure-Demo, as shown in the following screen. Click Save to save your model and then close the tab.

Step: 4) Reverse-engineer the model Oracle_Relational_01. Expand this model and verify its structure as follows.

Step: 5) Set up the CDC Infrastructure. You will start the CDC on the EMPLOYEE table in the Oracle_Relational_01

model.

To add the table to CDC, expand the Oracle_Relational_01 model, right-click the EMPLOYEE table, and select Changed Data Capture > Add to CDC. Click Yes to confirm.


Step: 6) Click the Refresh icon. The small yellow clock icon is added to the table.

Step: 7) Right-click the EMPLOYEE table again and select Changed Data Capture > Start Journal.

Step: 8) You use the default subscriber SUNOPSIS. For that reason, you do not have to add another subscriber.

Click OK to confirm that your subscriber is SUNOPSIS. In the Information window, click OK again. Wait several

seconds, then click Refresh and verify that the tiny clock icon at the EMPLOYEE table is green now. This means that

your journal has started properly.


Step: 9) Click the ODI Operator icon to open the Operator. Click Refresh. Select All Executions and verify that the

EMPLOYEE session executed successfully.

Step: 10) View the data and the changed data. 

In the Designer window, open the Models tab. Right-click the EMPLOYEE datastore and select Data.

Step: 11) Select the row with Employee_Key = 10. Change the value of the NAME2 column to “Symond”. Similarly,

select the row with Employee_Key = 15, and then change the value to “jacob”. Save your changes and close the

tab.


Step: 12) Right-click the table again and select View Data. Scroll down, and verify that the rows are modified.

Close the tab.

To verify that your changed data is captured, right-click EMPLOYEE, and select Changed Data Capture > Journal Data. Find the captured changed records in the journal data. Close the tab.

Done!



Oracle Data Integrator 11g Knowledge Modules Description
Posted on March 26, 2011

RKM Oracle

Reverse-engineers tables, views, columns, primary keys, non-unique indexes and foreign keys.

JKM Oracle 10g Consistent (Streams)

Creates the journalizing infrastructure for consistent set journalizing on Oracle 10g tables, using Oracle Streams.

JKM Oracle 11g Consistent (Streams)

Creates the journalizing infrastructure for consistent set journalizing on Oracle 11g tables, using Oracle Streams.

JKM Oracle Consistent

Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers.

JKM Oracle Consistent (Update Date)

Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last

Update Date column on the source tables.

JKM Oracle Simple

Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers.

JKM Oracle to Oracle Consistent (OGG)

Creates and manages the ODI CDC framework infrastructure when using Oracle GoldenGate for CDC. See Chapter

26, “Oracle GoldenGate” for more information.

CKM Oracle

Checks data integrity against constraints defined on an Oracle table.

LKM File to Oracle (EXTERNAL TABLE)


Loads data from a file to an Oracle staging area using the EXTERNAL TABLE SQL Command.

LKM File to Oracle (SQLLDR)

Loads data from a file to an Oracle staging area using the SQL*Loader command line utility.

LKM MSSQL to Oracle (BCP SQLLDR)

Loads data from a Microsoft SQL Server to Oracle database (staging area) using the BCP and SQL*Loader utilities.

LKM Oracle BI to Oracle (DBLINK)

Loads data from any Oracle BI physical layer to an Oracle target database using database links. See Chapter 17,

“Oracle Business Intelligence Enterprise Edition” for more information.

LKM Oracle to Oracle (DBLINK)

Loads data from an Oracle source database to an Oracle staging area database using database links.

LKM Oracle to Oracle (datapump)

Loads data from an Oracle source database to an Oracle staging area database using external tables in the

datapump format.

LKM SQL to Oracle

Loads data from any ANSI SQL-92 source database to an Oracle staging area.

LKM SAP BW to Oracle (SQLLDR)

Loads data from SAP BW systems to an Oracle staging area using SQL*Loader utilities. See the Oracle Fusion Middleware

Application Adapters Guide for Oracle Data Integrator for more information.

LKM SAP ERP to Oracle (SQLLDR)

Loads data from SAP ERP systems to an Oracle staging area using SQL*Loader utilities. See the Oracle Fusion

Middleware Application Adapters Guide for Oracle Data Integrator for more information.

IKM Oracle AW Incremental Update


Integrates data in an Oracle target table in incremental update mode and is able to refresh a Cube in an Analytical

Workspace. See Chapter 21, “Oracle OLAP” for more information.

IKM Oracle Incremental Update

Integrates data in an Oracle target table in incremental update mode.

IKM Oracle Incremental Update (MERGE)

Integrates data in an Oracle target table in incremental update mode, using a MERGE statement.

IKM Oracle Incremental Update (PL SQL)

Integrates data in an Oracle target table in incremental update mode using PL/SQL.

IKM Oracle Multi Table Insert

Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert

statement (MTI).

IKM Oracle Slowly Changing Dimension

Integrates data in an Oracle target table used as a Type II Slowly Changing Dimension.

IKM Oracle Spatial Incremental Update

Integrates data into an Oracle (9i or above) target table in incremental update mode using the MERGE DML

statement. This module supports the SDO_GEOMETRY datatype.

IKM Oracle to Oracle Control Append (DBLINK)

Integrates data from one Oracle instance into an Oracle target table on another Oracle instance in control append

mode.

This IKM is typically used for ETL configurations: source and target tables are on different Oracle instances and the

interface’s staging area is set to the logical schema of the source tables or a third schema.

SKM Oracle

Generates data access Web services for Oracle databases. See “Working with Data Services” in the Oracle Fusion

Middleware Developer’s Guide for Oracle Data Integrator for information about how to use this SKM.



Extracting Hyperion Financial Management Metadata to Oracle Data Integrator
Posted on March 26, 2011

Step: 1) Go to Start —> Program Files —> Oracle —> Oracle Data Integrator, launch ODI, and log in to the work repository.

Step: 2) After successfully logging in to the work repository, go to the Topology Navigator, select the Physical Architecture tab, and expand Technologies. Go to Hyperion Financial Management, right-click, and select 'New Data Server'.

Step: 3) In the definition tab enter the following details as shown in the picture below:

Name: An appropriate name for the data server

Cluster (Data Server): System name

User: User Name

Password: password


Step: 4) Click Save. It will ask you to create a physical schema; press 'OK'.

Step: 5) In the Technologies tree, go to Hyperion Financial Management, expand it, select the newly created data server, right-click, and select 'New Physical Schema'.

Step: 6) In the definition tab enter the HFM Application name to which you want to connect.


Step: 7) Click Save. It will ask you to specify the context; press 'OK'.

Step: 8) Go to the Logical Architecture tab, expand Technologies, select Hyperion Financial Management, right-click, and select 'New Logical Schema'.

Step: 9) Give an appropriate name to the logical schema and map the Context to the Hyperion Financial Management physical schema.

Step: 10) Click on ‘Save’.

Step: 11) Go to the ODI Designer and create a new project with an appropriate name.


Step: 12) Click 'Save'. You can verify it in the Projects tab.

Step: 13) Expand the project, go to Knowledge Modules, right-click, and import the following knowledge modules:

RKM Hyperion Financial Management

LKM File to SQL

LKM Hyperion Financial Management Members To SQL

LKM Hyperion Financial Management Data To SQL

IKM SQL to Hyperion Financial Management Data

IKM SQL to Hyperion Financial Management Dimension

IKM SQL to File Append

Step: 14) Press Ctrl and select all of the required knowledge modules, then import them.


Step: 15) Click 'Close'.

Step: 16) Now go to the Models tab and create a new model folder.

Step: 17) Go to the Models tab and create a new model.

Step: 18) Go to the Reverse Engineer tab, select the Customized check box, and enter the details as follows:


Step: 19) Click 'Save'.

Step: 20) The newly created model will be available for reverse-engineering. Go to the Models tab, select the Hyperion Financial Management model, right-click, and select Reverse Engineer. Before reverse-engineering the model, make sure your agent service is running; if not, start it with the following command:

C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\bin>agent -NAME=localagent

Step: 21) Now you can reverse engineer it.


Press ‘OK’.

Step: 22) Go to the Operator tab and check whether the execution ran successfully.

If all the checks are green, your model was reversed successfully.

Step: 23) Go to the Designer, open the Models tab, and expand the Hyperion Financial Management model again. You can check that all the dimensions from your HFM application are visible there.


Done!


Configuring ODI for integrating with Hyperion Financial Management Application
Posted on March 26, 2011

I will be using a single-machine architecture; the machine will host ODI as well as HFM and EPM Foundation (Workspace and Shared Services). I have created an HFM application named SAMPLE that is based on the "Simple Demo" information that can be found in "Hyperion\products\Financial Management\Sample Apps\Simple Demo\" of the HFM installation.

The machine is Windows-based, as that is a prerequisite for HFM and also for the HFM client, which has to be installed on the ODI machine.

The EPM version installed is 11.1.1.3.

We need to perform some workarounds for integrating ODI 11g with EPMA 11.1.1.3.

Step: 1) Go to the directory C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\lib, copy the HFMDriver.dll file, and paste it into the directory

C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\drivers


Step: 2) If you will use the "Local Agent", you will need to add the driver location to the Windows Path environment variable:

C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\drivers

Once you have added the location, make sure you restart all the ODI components to pick up the driver, or you will continue to receive the error message.


Creating and Connecting to ODI Master and Work Repositories
Posted on March 25, 2011


The first steps in setting up Oracle Data Integrator are to create the Master repository, connect to the Master repository, create the Work repository, and then connect to the Work repository.

Step: 1) Connecting to the RDBMS (Oracle 10g XE)


Step: 2) Connecting to the RDBMS (Oracle 10g XE)

Step: 3) Creating the RDBMS Schema/User (Oracle 10g XE) for the Master Repository

i. On the Oracle Database Express Edition Home screen, select SQL Commands > Enter Command from the SQL

drop-down list.

Create the schemas by executing the following SQL commands:

create user &lt;user&gt; identified by &lt;password&gt;; grant dba, connect, create session to &lt;user&gt;;

SQL> create user bispmaster identified by password;

SQL> grant dba,connect,create session to bispmaster;


Step: 4) Creating the ODI Master Repository

In the URL field, enter the following URL: jdbc:oracle:thin:@bispcognos:1521:xe, and then enter the User as bispmaster and the Password as password (these are the values used in this setup). In the Technology field, select Oracle. Click the Test Connection button and verify a successful connection. Click Next. Click OK on the Master Repository Creation Wizard screen.


The SQL command is executed in the command window. If the Master repository creation is successful, you will see

the following message. Click OK. The ODI Master repository is now created.

Step: 5) Connecting to the ODI Master Repository

Configure Repository Connections with the parameters provided in the following screen. Click the Test button.

Verify successful connection and click OK. Click OK to save the connection.

Select the newly created repository connection (Master Repository) from the drop-down list. Click OK. The ODI

Topology Manager starts.


A relational schema must be created for each repository. The following steps walk you through

creating the relational schema for the ODI Work repository.

Step: 1) Connecting to the RDBMS

If not opened, open the Oracle XE Database Home page: Start > All Programs > Oracle Database 10g Express

Edition > Go to Database Home page.

The Login screen appears. Log in to Oracle XE Database as system. The default password is password. Click Login.


Step: 2) Creating the RDBMS Schema/User (Oracle 10g XE) for the Work Repository

On the Oracle Database Express Edition Home screen, select SQL Commands > Enter Command from the SQL drop-

down list.

Create the schemas by executing the following SQL commands:

create user &lt;user&gt; identified by &lt;password&gt;; grant dba, connect, create session to &lt;user&gt;;

SQL> create user bispwork identified by password;

SQL> grant dba,connect,create session to bispwork;

Step: 3) Creating the ODI Work Repository

In ODI, click the Topology Navigator tab and then click to the Repositories panel. Right-click the Work Repositories

node and select New Work Repository. The Create Work Repository Wizard opens.


In the screen that follows, enter the parameters shown in the following screen. Click the Test button. Verify

successful connection and click OK. Click Next.

In the Create Work Repository Login window, click Yes. Enter the Login name: WORK_REPO as shown on the screenshot. Click OK. Verify that the newly created Work repository is now in the Work Repositories tree view.


Now you disconnect from the Master repository and connect to the Work repository. Click the ODI menu and select Disconnect "Master Repository".

Click Connect to Repository. Select “WORK_REPO” from the Login Name drop-down list. Enter Password: SUNOPSIS.

Click OK. Click the Designer tab. The following ODI Designer screen appears.