
Using Oracle Data Integrator (ODI) to Load BI Cloud Service (BICS)


For other A-Team articles about BICS, click here

Introduction

Oracle Data Integrator (ODI) is a comprehensive data integration platform that covers most data integration scenarios.  It has long been possible to use ODI to load data into BI Cloud Service (BICS) environments that use Database as a Service (DBaaS) as the underlying database.

The recent 12.2.1.2.6 release of ODI added the ability to load data into BICS environments based on a Schema Service Database.  ODI does this by using the BICS REST API.

This article will walk through the following steps to set up ODI to load data into the BICS schema service database through this method:

  • Downloading the latest version of ODI
  • Configuring the physical and logical connection to BICS in ODI
  • Loading the BICS knowledge modules
  • Reverse engineering the BICS model
  • Creating a simple mapping
  • Importing the BICS certificate into the trust store for the standalone agent

This article will not cover the installation and setup of ODI.  The assumption is that a 12.2.1.2.6 environment has been stood up and is working correctly.  For details on how to install and configure ODI, see this document.

 

Main Article

Download The Latest Version of Oracle Data Integrator

Download and install the latest version of ODI from OTN through this link.

 

Configure and Test Connection to BICS

This article will walk through one (of the several) methods to set up the BICS connection with a Physical and Logical connection.  For more details on topology and other approaches, see this document.

1. In ODI studio, select the ‘Topology‘ tab, and expand ‘Technologies‘ under the Physical Architecture section.


2. Scroll down to the ‘Oracle BI Cloud Service‘ entry, right click and select ‘New Data Server‘.


3. Give the Data Server a name, and enter the BICS Service URL, as well as the user credentials and Identity Domain.

The syntax for the URL is:

https://service-identity_domain.analytics.data_center.oraclecloud.com

This URL can be obtained from the BICS instance by taking the first part of the URL up to ‘oraclecloud.com’.


Note – the Data Loader path will default to /dataload/v1; leave this as-is.
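As an illustration of how that URL is put together, the following Python sketch composes the base URL from its three parts and issues an authenticated request against the Data Loader path.  The table name, credentials, and the assumption that basic authentication alone is sufficient are placeholders for illustration; some environments may require additional identity-domain headers, so check your instance.

import requests

# Placeholder values - substitute your own service, identity domain, and data center.
service = "service"
identity_domain = "identity_domain"
data_center = "data_center"

base_url = f"https://{service}-{identity_domain}.analytics.{data_center}.oraclecloud.com"
data_loader_path = "/dataload/v1"   # default path used by ODI

# Ask the Data Loader API about a target table (a /tables/<name> resource of this
# form appears in the RKM error messages later in this article). Credentials and
# headers are assumptions for illustration only.
response = requests.get(
    f"{base_url}{data_loader_path}/tables/MY_TARGET_TABLE",
    auth=("bics_user", "bics_password"),
    timeout=30,
)
print(response.status_code)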

4. Save the Data Server.  ODI will give you an informational warning about needing to register at least one physical schema.  Click ‘OK‘.


5. Test the connection by selecting ‘Test Connection‘.

For the time being, use the ‘Local (No Agent)‘ option.

NOTE – Once configuration has been completed, the ODI Agent where the execution will be run should also be tested.  It is likely that additional configuration will need to be carried out – this is covered in the last section of this article ‘Importing the BICS certificate into the trust store for the standalone agent’.


If the credentials and URL have been entered correctly, a notification similar to the following should be displayed.  If an error is displayed, troubleshoot and resolve it before continuing.


TIP :  

ODI studio’s local agent uses the JDK’s certificate store, whereas the Standalone Agent does not.  It is therefore possible – and quite likely – that while the local agent will provide a successful Test Connection, the Standalone agent will produce an error similar to the following:

oracle.odi.runtime.agent.invocation.InvocationException: oracle.odi.core.exception.OdiRuntimeException: javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

To resolve this, the BICS Certificate needs to be added to the trust store used by the Standalone agent.  These steps are covered later in this article in the section ‘Importing Certificate into Trust Store‘.

 

6. Right click on the Data Server created in step 2, and select ‘New Physical Schema‘.


ODI can load both Database Objects (Tables) in the Schema Service Database and Data Sets.

This loading option is chosen in the ‘Target Type‘ dropdown.  The selection then associates the appropriate REST Operations for ODI to connect.  Note – once the target type has been chosen and saved, it cannot be changed.

7. In this example the Target Type of Table is selected.


8. Save the Physical Schema.

Because we haven’t associated this with a Logical Architecture yet, the following warning will be shown.  Click OK to complete the save.


9. Expand the Logical Architecture section of Topology, and then right click on ‘Oracle BI Cloud Service‘ and create a ‘New Logical Schema‘.


10. In the configuration window, give the Logical Schema a descriptive name, and associate your context(s) with the physical schema that was created in steps 6-8.  Save the changes.


11. Repeat steps 6 through 10 if you need to create an additional connection to load Data Sets.

 

Load BICS Knowledge Modules

ODI uses 2 different Knowledge Modules for BICS:

– a reverse knowledge module (RKM) called RKM Oracle BI Cloud Service, and

– an integration knowledge module (IKM) called IKM SQL to Oracle BI Cloud Service.

 

1. In ‘Designer‘, expand your project and its Knowledge Modules folder to see whether the KMs are already available.


If they are – continue to the ‘Reverse Engineering‘ section of this article.

2. If the KMs are not shown, right click on the Knowledge Modules section and select ‘Import Knowledge Modules‘.


Browse to a path similar to this to find the import directory.

/u01/oracle/ODI12c/odi/sdk/xml-reference

3. In the import wizard, select the 2 BICS KMs, and then select ‘OK’ to load them.


TIP :  

If you have already used ODI for other integration tasks, you may be tempted to use existing Knowledge Modules.  Please note that the IKM SQL to Oracle BI Cloud Service does not support loading the Oracle SDO_GEOMETRY data type column to the BICS target table.

Oracle BI Cloud Service cannot be used as the staging area, and does not support incremental update or flow/static check. Therefore, the following KMs will not work with the Oracle BI Cloud Service technology:

RKM SQL (JYTHON)

LKM File to SQL

CKM SQL

IKM SQL Incremental Update

IKM SQL Control Append

LKM SQL to SQL (JYTHON)

More details can be found in this document.

 

Reverse Engineer BICS

Reverse-engineering is the process that populates the model in ODI, by retrieving metadata from the data server containing the data structures.

 

1. Create a new model in Designer, by selecting the ‘New Model‘ option as shown below


2. In the Definition tab, give the model a name, select ‘Oracle BI Cloud Service‘ as the technology, and select the Logical Schema created previously.


3. In the Reverse Engineer tab, leave the logical agent set to ‘Local (No Agent)‘, and select the RKM Oracle BI Cloud Service knowledge module.  Then save the changes.


TIP :  

At the time of writing this article, there is a bug in the reverse knowledge module that will present an error if tables in the BICS environment contain non-standard characters.

An error like the following may be generated:

ODI-1590: The execution of the script failed.
Caused By: org.apache.bsf.BSFException: exception from Groovy: oracle.odi.runtime.rest.SnpsRSInvocationException: ODI-30163: REST tool invocation failed with response code : 500. URL : https://businessintelltrialXXXX-usoracletrialXXXXX.analytics.us2.oraclecloud.com/dataload/v1/tables/APEX$TEAM_DEV_FILES

There is at least one Apex related table within BICS environments that has a non-standard character.  That table, as shown in the error above, is ‘APEX$TEAM_DEV_FILES’.

Until this issue is fixed, a workaround is required.

The simplest workaround is to go into the APEX environment attached to the BICS instance, temporarily rename the APEX$TEAM_DEV_FILES table, run the Reverse Engineer process, and then rename the table back.

Another method is to use the ‘Mask’ import option.  If there are specific tables you need to reverse engineer, enter the name followed by ‘%’.

For instance, if there were 5 tables all starting ‘FACT….’, then a mask of ‘FACT%’ could be used to reverse engineer those 5 tables.

 

4. Select the ‘Reverse Engineer‘ action, and then ‘OK‘ to run the action.


5. This will start a session that can be viewed in the Operator.


6. Once the session has completed, expand the model to confirm that the database objects have been imported correctly.  As shown below, the tables in the BICS Schema Service database are now available as targets.


7. Expand the BICS individual database objects that you will load, and confirm within the attributes that the Datatypes have been set correctly.  Adjust where necessary and save.


 

Create Mapping

1. Within the ‘Mapping‘ sub-menu of the project, select ‘New Mapping‘.


2. Drag in the source table from the source that will be loaded into BICS, and then the BICS target table, and link the two together.  For more information on how to create mappings, see this document.

TIP :  

The BICS API only allows data to be loaded, not ‘read’ or ‘selected’.  Because of this, BICS using the Schema Service Database CAN ONLY BE USED as a TARGET for ODI mappings.  It cannot be used as a SOURCE.

 

3. Make sure the Target is using the IKM SQL to Oracle BI Cloud Service:


and that an appropriate loading KM is used:


4. Run the mapping, selecting the Local Agent.


5. Confirm in the Operator that the mapping was successful.  Troubleshoot any errors you find and re-run.


 

Importing Certificate into Trust Store

It is likely that the Standalone Agent will require the BICS certificate to be added to its trust store before it can connect to BICS.

These instructions use Microsoft Internet Explorer, although other browsers offer similar functionality.

1. In a browser, open the BICS /analytics portal, then click on the padlock icon.  This will open an information box, in which select ‘View certificates‘.


2. In the ‘Details‘ tab, select the ‘Copy to File‘ option which will open an export wizard.


3. Select the ‘DER encoded binary‘ format and then ‘Next‘.


4. Choose a path and file name for the certificate, then ‘Next‘, and on the final screen ‘Finish‘ to export the certificate.

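If a browser is not convenient, the same certificate can be retrieved directly on the host that runs the standalone agent.  The sketch below uses Python's standard ssl module; the host name is a placeholder, and the output is PEM encoded, which keytool also accepts.

import ssl

# Placeholder host name - use your BICS instance's host (no protocol prefix).
bics_host = "service-identity_domain.analytics.data_center.oraclecloud.com"

# Retrieve the certificate presented by the server on port 443, PEM encoded.
pem_cert = ssl.get_server_certificate((bics_host, 443))

# Write it to a file that keytool can import (keytool accepts PEM as well as DER).
with open("BICS.cer", "w") as f:
    f.write(pem_cert)

print("Certificate written to BICS.cer")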

 

TIP :  

This article will go through the steps needed to add this certificate to the DemoTrust.jks key store.  This should *ONLY* be followed for demonstration or test environments.  For production environments, follow best practice guidelines as outlined in this document.

 

5. Copy the certificate file created in the previous steps to a file system accessible by the host running the standalone ODI agent.

6. Set JAVA_HOME to the path of the JDK used when installing the standalone agent, for example:

export JAVA_HOME=/u01/oracle/jdk1.8.0_111

7. Browse to the bin directory of the ODI Domain Home.  In this test environment that path is as follows:

/u01/oracle/ODI12c/user_projects/domains/base_domain/bin

8. Run the ‘setODIDomainEnv‘ script.  In a Linux environment this would be:

./setODIDomainEnv.sh

The DemoTrust.jks keystore used by the agent should be located in the following path:

$ORACLE_HOME/wlserver/server/lib

 

TIP :  

It is possible that there are a number of DemoTrust.jks key stores on the file system, so make sure the correct one is updated.  If this process fails to resolve the error with the Standalone Agent, search the file system and see if it is using a different trust store.

 

9. Browse to that directory and confirm the DemoTrust.jks file exists.  In that same directory, run the keytool command to import the certificate created earlier.

The syntax for the command is as follows, where $CERTIFICATE is the name/path of the certificate file downloaded from the BICS environment, $ALIAS is an alias name for the certificate, and $KEYSTORE is the name/path of the key store.

keytool -importcert -file $CERTIFICATE -alias $ALIAS -keystore $KEYSTORE

In this example, the command would be:

keytool -importcert -file /u01/oracle/Downloads/BICS.cer -alias BICS -keystore DemoTrust.jks

The default password is DemoTrustKeyStorePassPhrase.

10. Details of the certificate are displayed and a prompt to ‘Trust this certificate?’ is displayed.  Type ‘yes‘ and then hit enter.


If the import is successful, a confirmation that the certificate was added to the keystore is given.
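To double-check the import, the alias can be listed back out of the keystore.  The short Python sketch below simply shells out to keytool, re-using the file name, alias, and default password from the example above.

import subprocess

# List the BICS alias from the DemoTrust.jks keystore to confirm the import.
# The keystore path, alias, and password match the example above; adjust as needed.
result = subprocess.run(
    [
        "keytool", "-list",
        "-keystore", "DemoTrust.jks",
        "-alias", "BICS",
        "-storepass", "DemoTrustKeyStorePassPhrase",
    ],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)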

11. Return to ODI and run the mapping, this time selecting the Standalone agent, and confirm it runs successfully.

Summary

This article walked through the steps to configure ODI to load data into the BICS schema service database through the BICS REST API.

For other A-Team articles about BICS, click here.


Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services


Introduction

 

This article presents an overview of how to integrate Oracle Data Integrator (ODI) on-premise with cloud services.  Cloud computing is now a service or a utility in high demand.  Enterprises have a mix of on-premise data sources and cloud services.  Oracle Data Integrator (ODI) on-premise can enable the integration of both on-premise data sources and cloud services.

This article discusses how to integrate ODI on-premise with three types of cloud services:  Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).  The first part of this article discusses the required components in order to integrate ODI on-premise with SaaS applications.  The ODI on-premise integration with PaaS is illustrated with three Oracle PaaS products:  Oracle Database Cloud Service (DBCS), Oracle Exadata as a Service (ExaCS), and Oracle Business Intelligence Cloud Service (BICS).  The last part of this article discusses the integration of ODI on-premise with Oracle Storage Cloud Service (SCS), an IaaS product.

 

Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services

 

This article defines ODI on-premise as having ODI installed and running on computers on the premises of a company.  More specifically, ODI on-premise refers to having the ODI agent and the ODI repository installed on computers and databases on the premises of a company.  Additionally, on-premise data sources are data servers running on computers on the premises of a company.  Figure 1 below shows the integration of ODI on-premise with several cloud services:

 


Figure 1 – ODI On-Premise Integration with Cloud Services

Figure 1 above illustrates the ODI on-premise integration with both Oracle and non-Oracle cloud services.  ODI on-premise can integrate with Oracle cloud services such as Database Cloud Service, Business Intelligence Cloud Service, Oracle Storage Cloud Service, and Oracle Sales Cloud.  Likewise, ODI on-premise can integrate with non-Oracle cloud services such as Salesforce, Google Analytics, and Success Factors.

The Oracle cloud services illustrated above are part of the Oracle Public Cloud, which can be organized into three categories:  Oracle Software as a Service (SaaS), Oracle Platform as a Service (PaaS), and Oracle Infrastructure as a Service (IaaS).  Oracle SaaS includes a variety of software services that can be categorized in several areas:  Supply Chain, Human Resources, Analytics, Social, CRM, Financials, and many others.  Oracle PaaS includes database and middleware, as well as development, management, security and integration.  Oracle IaaS offers a set of core infrastructure capabilities like elastic compute and storage to provide customers the ability to run any workload in the cloud.

The next sections of this article provide an overview of how to integrate ODI on-premise with these three types of cloud services.

 

ODI On-Premise Integration with Software as a Service (SaaS)

 

ODI on-premise can integrate with Software as a Service (SaaS) applications.  Figure 2 below shows a list of both Oracle SaaS applications and non-Oracle SaaS applications:

 


Figure 2 – ODI On-Premise to Software as a Service (SaaS)

In order to integrate ODI on-premise with SaaS applications, at a minimum, three components are required:  a JDBC driver, an ODI Technology, and a set of ODI Knowledge Modules (KMs).

The JDBC driver is required to establish the connection between the ODI on-premise and the SaaS application. Some of these drivers are supplied by third-party vendors and Oracle partners such as Progress Data Direct.   These JDBC drivers can connect to Oracle SaaS applications such as Oracle Marketing Cloud, Oracle Service Cloud, and Oracle Eloqua.  In addition to Oracle SaaS applications, JDBC drivers, such as those offered by Progress Data Direct, can connect to other non-Oracle SaaS applications such as Salesforce Cloud, Microsoft SQL Azure, and Google Analytics.   For a list of cloud data sources supported by Progress Data Direct, go to “Progress Data Direct Connectors.”

The ODI Technology is a connection object under the ODI Topology that has been customized to enable connectivity between the client (ODI) and the SaaS application.  This ODI technology must be configured to correctly map the data types of the JDBC driver with those found in the ODI technology.

The ODI Knowledge Modules are required for two purposes:  to reverse-engineer the attributes or objects found in the SaaS application, and to perform the integration task of pulling and sending data – respectively – from and to the SaaS application.  Bristlecone, an Oracle partner, offers the following types of ODI knowledge modules for SaaS applications:  Reverse-Engineering Knowledge Modules (RKMs), Loading Knowledge Modules (LKMs), and Integration Knowledge Modules (IKMs).   For a list of knowledge modules and SaaS applications supported by Bristlecone, go to “Bristlecone ODI Cloud Integration Pack.”

Although JDBC drivers for SaaS applications offer a great deal of functionality, it can be tedious to create and maintain ODI technologies for each SaaS application.  Alternatively, a generic approach can be followed by implementing universal or generic ODI technologies, so they can be used for more than one SaaS application.  An example of how to implement universal ODI technologies for more than one SaaS application can be found in the following Oracle article: “A Universal Cloud Applications Adapter for ODI.”  An ODI webcast demonstration on how to use universal ODI technologies can be found at the following location: “Oracle Data Integrator Special Topic:  Salesforce.com & Universal Cloud.”  This ODI webcast illustrates step-by-step how to use ODI universal technologies with JDBC drivers to extract data from Salesforce.  The JDBC drivers illustrated in this ODI webcast are from Progress Data Direct.

For a complete list of JDBC drivers and ODI knowledge modules available for SaaS applications, see the following Oracle article:  “Need to Integrate your Cloud Applications with On-Premise Systems… What about ODI?”

 

ODI On-Premise Integration with Platform as a Service (PaaS)

 

Oracle Platform as a Service (PaaS) includes a variety of cloud products.  These cloud products are divided into several categories.  Two of these categories include Data Management, and Business Analytics.  Data Management includes products such as Database Cloud Service (DBCS), Exadata Cloud Service (ExaCS), and Big Data Cloud Service (BDCS).  Business Analytics includes cloud products such as Business Intelligence (BICS), Big Data Preparation (BDP), and Big Data Discovery (BDD).

The following sections focus on how to integrate ODI on-premise with three Oracle PaaS products:  Database Cloud Service (DBCS), Exadata Cloud Service (ExaCS), and Business Intelligence Cloud Service (BICS).  For a complete list of Oracle PaaS products and categories, go to “Oracle Cloud.”

 

ODI On-Premise Integration with Oracle Database Cloud Service (DBCS)

 

Oracle Database Cloud Service (DBCS) offers database features and database capabilities.  DBCS offers three products:  Database as a Service (DBaaS), Database Schema Service (DBSS), and Database Exadata Cloud Service (ExaCS).

Each DBCS product offers a predefined set of features and options.  For instance, DBaaS offers a dedicated virtual machine for running the Oracle database instance, full SQL*Net access, and full administrative OS and SYSDBA access.  Both DBaaS and DBSS offer RESTful web services, which enable web applications to access data in the database service.  ExaCS offers the Oracle database on Exadata, Oracle Enterprise Manager, Exadata performance optimizations, and database rack options.  For additional information on Oracle Database Cloud Service offerings, go to “Oracle Database Cloud Service Offerings.”

ODI on-premise can integrate with these three DBCS products.  Figure 3 below shows how ODI on-premise can integrate with Database as a Service (DBaaS).  Three methods are illustrated:

 


Figure 3: ODI On-Premise to Database as a Service (DBaaS)

The first method uses a JDBC driver to access the Oracle Database located in the cloud.  Using the ODI Topology, a JDBC connection is configured under the Oracle technology.  The steps to configure this connection are similar to configuring an Oracle database on-premise.  The JDBC cloud connection can be secured via a Secure Socket Shell (SSH) protocol; thus, ODI can send data via a secure channel to the Oracle database in the cloud.

The second method uses the Oracle Datapump technology.  ODI on-premise can use this technology to extract data from either an Oracle database on-premise or an Oracle Database as a Service.  Using a Secure Copy Protocol (SCP) tool, datapump files can be copied from an Oracle database on-premise into an Oracle Database as a Service, or vice versa.  Datapump files can be uploaded into an Oracle database – on premise or as a service – by executing a mapping from ODI on-premise.  Datapump technology is an Oracle technology; thus, when using this method to extract data from a database on-premise, the database on-premise must be Oracle.

The third method uses text files.  ODI on-premise can extract data from either a SQL database on-premise or an Oracle Database as a Service, and convert data into text files.  Using a Secure Copy Protocol (SCP) tool, text files can be copied from a SQL database on-premise into an Oracle Database as a Service, or vice versa.  Text files can be uploaded into a SQL database – on premise or as a service – by executing a mapping from ODI on-premise.  If the target database is Oracle, ODI on-premise can use the Oracle External Table technology to load the text files.
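The SCP/SFTP copy step used by the datapump and text file methods can be scripted with any tool.  Purely as an illustration, the Python sketch below copies a dump file to the DBaaS compute node with the third-party paramiko library; the host name, SSH key, and paths are placeholders.

import paramiko

# Placeholder connection details for the DBaaS compute node.
host = "dbcs-instance.example.oraclecloud.com"
user = "oracle"
key_file = "/home/odi/.ssh/dbcs_private_key"

# Open an SSH session and copy a local datapump (or text) file to the cloud host.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=host, username=user, key_filename=key_file)

sftp = client.open_sftp()
sftp.put("/u01/exports/sales_export.dmp", "/u01/app/oracle/dumps/sales_export.dmp")
sftp.close()
client.close()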

An example of how to copy text files from an on-premise data server into an Oracle Database as a Service can be found in the following blog:  “ODI 12c and DBCS in the Oracle Public Cloud.”  Also, this blog discusses how to configure a secured JDBC connection using SSH, and how to load the text files into the Oracle Database as a Service using Oracle external tables.

Oracle database tools such as Oracle SQL*Loader can also be used to load text files from an on-premise data server into an Oracle Database as a Service.  However, a secured SSH connection must be established prior to invoking these Oracle database tools.  An example of how to create a secured SSH connection and how to use Oracle SQL*Loader to load text files from an on-premise data server into an Oracle Database as a Service can be found in the following blog:  “Tips for ODI in the Cloud:  ODI On-Premise with DBCS.”

For additional information on how to use Oracle Datapump with ODI, go to “Using Oracle Data Pump in Oracle Data Integrator (ODI).”  Also, additional datapump use-cases and examples can be found at “Using ODI Loading Knowledge Modules on the Oracle Database Cloud Service (DBCS).”  For additional information on how to work with text files in ODI, go to “Working with Files in Oracle Data Integrator (ODI).”

 

 

ODI On-Premise Integration with Oracle Database Exadata Cloud Service (ExaCS)

 

The Oracle Database Exadata Cloud Service (ExaCS) provides the highest-performing and most-available platform for running Oracle databases in the cloud.  This cloud service, based on the Oracle Exadata Database Machine, includes customized combinations of hardware, software, and storage that are highly tuned for maximum performance.

ODI on-premise integration strategies with DBaaS can be extended to ExaCS as well.  ODI on-premise can access and transform data in ExaCS using three methods:  JDBC, datapump (if both the source and the target data servers are Oracle databases), and text files.  Secured connections can be accomplished via a Secure Socket Shell (SSH) protocol, and files can be transferred from the source data server on-premise to the ExaCS – and vice versa – using a Secure Copy Protocol (SCP) tool.

Figure 4 below shows how ODI on-premise can upload and transform data in ExaCS:

 


Figure 4: ODI On-Premise to Exadata Cloud Service

Figure 4, above, also illustrates a fourth method for accessing and transforming data in the cloud:  DBLINK.  This fourth method uses Oracle database links to orchestrate data transfers between DBCS and ExaCS.  For instance, a database link can be created in ExaCS to access data from DBCS.  In ODI on-premise, this database link can be used in an ODI mapping to select data from the DBCS and insert it into ExaCS.
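As a rough sketch of the DBLINK method, the SQL involved looks like the example below, executed here from Python with the cx_Oracle driver.  In practice an ODI mapping would generate and run the equivalent statements; all names, credentials, and connect strings are placeholders.

import cx_Oracle

# Connect to the Exadata Cloud Service database (placeholder credentials and DSN).
conn = cx_Oracle.connect("exacs_user", "exacs_password", "exacs-host:1521/pdb1")
cur = conn.cursor()

# One-time setup: a database link in ExaCS pointing at the DBCS instance.
cur.execute("""
    CREATE DATABASE LINK dbcs_link
    CONNECT TO dbcs_user IDENTIFIED BY dbcs_password
    USING 'dbcs-host:1521/pdb1'
""")

# Move data across the link: select from DBCS and insert into ExaCS.
cur.execute("INSERT INTO sales_fact SELECT * FROM sales_fact@dbcs_link")
conn.commit()

cur.close()
conn.close()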

Additionally, Oracle database links can be used in conjunction with Oracle file transfer utilities to transfer data between Oracle database servers in the cloud.  Thus, ODI on-premise can also be used to orchestrate file transfers – such as datapump files or text files – between DBCS and ExaCS.

For additional information on how to load data into an Oracle Database in an Exadata Cloud Service instance, go to “Loading Data into the Oracle Database in an Exadata Cloud Service Instance.”  For additional information on how to configure network access to an Exadata Cloud Service instance, go to “Managing Network Access to an Exadata Cloud Service Instance.”


ODI On-Premise Integration with Oracle Business Intelligence Cloud Service (BICS)

 

Oracle Business Intelligence Cloud Service (BICS) offers agile analytics for customers interested in analyzing data from a variety of sources, including on-premise sources and other services in the cloud.

In order to store and manage the data that users analyze in BICS, a database cloud service is needed.  BICS can integrate with Oracle Database Cloud using one of two options:  Database Schema Service (DBSS) or Database as a Service (DBaaS).  BICS comes integrated with DBSS, so there is no additional configuration required if users want to use this database schema service.

When using BICS with DBSS, data from on-premise sources can be loaded into BICS using the Oracle BI Cloud Service (BICS) REST API.  The BICS REST API is based on the DBSS RESTful web services.  ODI on-premise can load data from on-premise data sources into BICS using the BICS REST API.  This strategy is illustrated in Figure 5 below, method 1:

 


Figure 5: ODI On-Premise to BI Cloud Service

ODI users and developers can use BICS REST API to programmatically create, manage, and load schemas, tables, and data into BICS.  ODI Knowledge Modules can be used to invoke the BICS REST API and mappings can be designed to load data from on-premise data sources into BICS.  An example of how to use ODI Knowledge Modules to invoke the BICS REST API can be found in the following article:  “ODI Knowledge Modules for BI Cloud Service (BICS).”

ODI 12.2.1.2.6 supports a new technology to directly integrate with BICS:  Oracle BI Cloud Service.  This new ODI technology uses the BICS RESTful APIs to load data into BICS environments that are based on DBSS.  An example of how to use this new ODI technology with BICS can be found in the following article:  “Using Oracle Data Integrator (ODI) to Load BI Cloud Service (BICS).”

When BICS is integrated with the Database as a Service (DBaaS), data from on-premise sources can be loaded directly into DBaaS using various methods such as JDBC, datapump, and text files. Thus, the BICS REST API is not required.  This strategy is also illustrated in Figure 5 above, methods 2, 3, and 4.

The BICS REST API does not include methods for extracting data from the underlying BICS database tables.  However, if BICS has been integrated with DBaaS, ODI on-premise can select data from DBaaS and export it as datapump files or text files.   ODI on-premise can transfer these files from the cloud and load them into an on-premise data server using a Secure Copy Protocol (SCP) tool or using the Oracle database file transfer package called DBMS_FILE_TRANSFER.  An example of using Oracle datapump with Oracle database file transfer can be found at the following article:  “Using Oracle Data Pump in Oracle Data Integrator (ODI).”

The BICS REST API does not include methods for extracting data from BICS reports.  Alternatively, Oracle Application Express (APEX) can be used to create RESTful web services to extract data from BICS reports.  An example of how to use APEX to extract data from BICS reports can be found in the following Oracle article:  “Extracting Data from BICS via APEX RESTful Web Services.”

 

ODI On-Premise Integration with Infrastructure as a Service (IaaS)

 

Oracle Infrastructure as a Service (IaaS) offers three products:  Oracle Compute Cloud Service, Oracle Network Cloud Service, and Oracle Storage Cloud Service.  The Oracle Compute Cloud Service provides virtual compute environments, lifecycle management, dynamic firewalls, and secure access.  The Oracle Network Cloud Service provides connectivity services – such as VPN and FastConnect – between on-premise networks and the Oracle Public Cloud.  The Oracle Storage Cloud Service (SCS) provides storage for files and unstructured data.

The following section illustrates how to integrate ODI on-premise with Oracle Storage Cloud Service (SCS), an Oracle IaaS product.

 

ODI On-Premise Integration with Oracle Storage Cloud Service (SCS)

 

The Oracle Storage Cloud Service can be used by applications that require long-term data retention or as a staging area for data integration tasks in the cloud.  Other Oracle cloud services such as BI Cloud Service (BICS) can use SCS as a staging area for data consumption.

Users can programmatically store and retrieve content from SCS using the SCS REST API.  Additionally, SCS offers Java libraries to wrap the SCS REST API.  In ODI, tools and packages can be designed to invoke the SCS REST API, and ODI can upload files from an on-premise data server into SCS.  Likewise, using the SCS REST API, ODI on-premise can download files from SCS into an on-premise data server.  This strategy is illustrated in Figure 6 below, methods 1 and 2:
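As a sketch of what such a tool or package does under the hood, the Python example below uploads a file using the Swift-style token flow of the Storage Cloud Service Classic REST API.  The identity domain, credentials, container name, and the exact authentication endpoint are assumptions; verify them against the SCS REST API reference linked below.

import requests

# Placeholder values for a Storage Cloud Service (Classic) account.
identity_domain = "myidentitydomain"
username = "storage.user@example.com"
password = "password"
auth_url = f"https://{identity_domain}.storage.oraclecloud.com/auth/v1.0"

# 1. Authenticate to obtain a token and the account's storage URL
#    (Swift-style flow; verify against the SCS REST API documentation).
auth = requests.get(
    auth_url,
    headers={
        "X-Storage-User": f"Storage-{identity_domain}:{username}",
        "X-Storage-Pass": password,
    },
)
token = auth.headers["X-Auth-Token"]
storage_url = auth.headers["X-Storage-Url"]

# 2. PUT a local file as an object into a container used as a staging area.
with open("/u01/exports/sales_export.csv", "rb") as f:
    requests.put(
        f"{storage_url}/staging_container/sales_export.csv",
        headers={"X-Auth-Token": token},
        data=f,
    )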

 


Figure 6: ODI on Premise to Storage Cloud Service

 

Examples of how to use the ODI Open Tools framework to invoke the SCS REST API can be found in the following Oracle article:  “ODI Integration with Oracle Storage Cloud Service.”  Rittman Mead, an Oracle partner in the data integration space, has an example of how to use ODI Open Tools to copy files from an on-premise data server into SCS:  “Oracle Data Integrator to Load Oracle BICS and Oracle Storage Cloud Service.”  This article also discusses how to load data into Business Intelligence Cloud Service (BICS) using the BICS REST API.  For additional information on how to develop and use ODI Open Tools, go to “Oracle Data Integrator Tools Reference.”

 

Conclusion

 

Cloud computing is now a service or utility in high demand.  Enterprises have a mix of on-premise environments and cloud computing services.  Oracle Data Integrator (ODI) on-premise can enable the integration of both on-premise data sources and cloud services.

This article presented an overview of how to integrate ODI with on-premise data sources and cloud services.  The article covered ODI on-premise integration with the following cloud services:  SaaS, PaaS, and IaaS.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit “Oracle A-Team Chronicles for Oracle Data Integrator (ODI).”

 

Other ODI Cloud Articles

 

Using Oracle Data Integrator (ODI) to Load BI Cloud Service (BICS)

Using ODI Loading Knowledge Modules on the Oracle Database Cloud Service (DBCS)

Oracle Data Integrator Best Practices: Using Loading Knowledge Modules on both On-Premises and Cloud Computing

Need to Integrate your Cloud Applications with On-Premise Systems… What about ODI?

Webcast: Oracle Data Integrator Special Topic:  Salesforce.com & Universal Cloud

Webcast: Integrating Database Cloud Service (DBCS), Big Data Preparation Cloud Service (BDP), and Business Intelligence Cloud Service (BICS) with Oracle Data Integrator (ODI)

A Universal Cloud Applications Adapter for ODI

Using Oracle Data Pump in Oracle Data Integrator (ODI)

ODI 12c and DBCS in the Oracle Public Cloud

ODI Knowledge Modules for BI Cloud Service (BICS)

ODI Integration with Oracle Storage Cloud Service

Oracle Data Integrator to Load Oracle BICS and Oracle Storage Cloud Service

 

Oracle PaaS Resources

Oracle Platform as a Service (PaaS)

Oracle Database Cloud Service Offerings

Oracle Database Cloud Service (DBCS)

Using Oracle Database Schema Cloud Service

Using RESTful Web Services in Oracle Schema Service

Oracle Exadata Cloud Service (ExaCS)

Loading Data into the Oracle Database in an Exadata Cloud Service Instance

Managing Network Access to an Exadata Cloud Service Instance

Oracle Business Intelligence Cloud Service (BICS)

Preparing Data in Oracle Business Intelligence Cloud Service

REST API Reference for Oracle Business Intelligence Cloud (BICS)

Extracting Data from BICS via APEX RESTful Web Services

Oracle Application Express (APEX) RESTful APIs

Oracle Application Express

Oracle IaaS Resources

Infrastructure as a Service (IaaS)

Oracle Storage Cloud Service (SCS)

Oracle Storage Cloud Service REST API

Oracle SaaS Resources

Applications as a Service (SaaS)

 

Other ODI Related Articles

Using Oracle Data Pump in Oracle Data Integrator (ODI)

Working with Files in Oracle Data Integrator (ODI)

Oracle Data Integrator (ODI) Tools Reference

 

Other Resources

Progress DataDirect Connectors

Bristlecone ODI Cloud Integration Pack

Oracle External Table Technology

 

 

Eloqua ICS Integration


Introduction

Oracle Eloqua, part of Oracle’s Marketing Cloud suite of products, is a cloud-based B2B marketing platform that helps automate the lead generation and nurture process. It enables the marketer to plan and execute marketing campaigns while delivering a personalized customer experience to prospects.

In this blog I will describe how to integrate Eloqua with other SaaS applications using Oracle’s iPaaS platform, the Integration Cloud Service (ICS).
ICS provides an intuitive web-based integration designer for point-and-click integration between applications and a rich monitoring dashboard that gives real-time insight into transactions, all running on a standards-based, mature runtime platform on Oracle Cloud. ICS also offers a large library of SaaS, application, and technology adapters that add to its versatility.

One such adapter is the Eloqua adapter, which allows synchronizing accounts, contacts and custom objects with other applications. The Eloqua Adapter can be used in two ways in ICS:

  • As the target of an integration where external data is sent to Eloqua,
  • Or as the source of an integration where contacts (or other objects) flowing through a campaign or program canvas in Eloqua are sent out to any external application.

This blog provides a detailed functional as well as technical introduction to the Eloqua Adapter’s capabilities.
The blog is organized as follows:

  a. Eloqua Adapter Concepts
  b. Installing the ICS App in Eloqua
  c. Creating the Eloqua connection
  d. Designing the Inbound->Eloqua flows
  e. Designing the Eloqua->Outbound flows

This blog assumes that the reader has basic familiarity with ICS as well as Eloqua.

a. Eloqua Adapter concepts

In this section we’ll go over the technical underpinnings of the ICS Eloqua adapter.

The ICS adapter is also referred to as the ICS connector; the two terms mean the same thing.

The adapter can be used in ICS integrations both to trigger the integration and as a target (Invoke) within an integration.

When used as a target:

  • The adapter can be used to create/update Account, Contact and custom objects defined within Eloqua.
  • Under the hood the adapter uses the Eloqua Bulk 2.0 APIs to import data into Eloqua. More on this later.

When used as a trigger:

  • The Eloqua Adapter allows instantiating an ICS integration when a campaign or program canvas runs within Eloqua.
  • The adapter must be used in conjunction with a corresponding ‘ICS App’ installed within Eloqua.

    Installing the ICS App is mandatory for triggering ICS integrations. The next section describes the installation.

    The marketer in Eloqua uses this app as a step in his campaign, and the app in turn invokes the ICS endpoint at runtime. The image below shows a sample ICS App in use in a campaign canvas within Eloqua:


  • The Eloqua ICS App resides within the Eloqua AppCloud, and complements the ICS Eloqua Adapter such that contacts and other objects flow out from the campaign, into the ICS App and eventually to the ICS integration. The image below describes this.

b. Installing the ICS App in Eloqua

As explained above, installing the ICS App in Eloqua is mandatory for the Eloqua->Outbound scenarios.

The app is available in Oracle Marketplace, and the installation is straightforward:

  • Open the ICS App on Oracle Marketplace at https://cloud.oracle.com/marketplace/app/AppICS
  • Click ‘Get App’. Accept the terms and conditions in the popup. Click ‘Next’. This will redirect you to your Eloqua login page. Sign in, and click ‘Accept and Install’

  • The next page takes you to the ICS configuration, where you need to provide the ICS URL, username and password. Click ‘Save’.

  • Click ‘Sign In’ on the next page, thus providing the app access to Eloqua on your behalf (OAuth2).

  • Click ‘Accept’ on the next page.
  • The ICS App is now installed and ready to use as an ‘Action’ in Eloqua Canvas.

Now we will look at creating Eloqua connections and integrations in ICS.

c. Creating Eloqua connection in ICS

  1. Log on to the ICS home page. Click on ‘Create Connections’, then ‘New Connection’ , and choose ‘Eloqua’.
  2. Name the connection appropriately.

  4. The Connection Role can be:
    • a. Trigger, used in integrations where the connection is only used to trigger the integration.
    • b. Invoke, used in integrations where the connection is only used as target.
    • c. Or Trigger and Invoke, which can be used either way.
  5. Click ‘Create’. Click on the ‘Configure Security’ button, and enter the Eloqua Company name, username and password. Then click on ‘Test’.
  6. At this point ICS authenticates with Eloqua using the credentials provided above. The authentication process depends on the connection role:

  • a. If the role is ‘Invoke’, ICS performs HTTP Basic Authentication against https://login.eloqua.com using a base64-encoded “<company>\<username>:<password>” string (see the sketch after this list). This process is described in more detail here.
  • b. If the role is ‘Trigger’ or ‘Trigger and Invoke’, then along with the above test ICS also reaches out to the Eloqua AppCloud and checks whether the Eloqua ICS App has been installed. If it is not installed, the connection test will fail.
  • Once the connection test is successful, save the connection.
  • Now that the connection has been defined, we can use the Eloqua adapter in an ICS integration to sync data. Let’s look at designing the Inbound->Eloqua use cases, i.e. where Eloqua is the target application.
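A minimal sketch of the ‘Invoke’ credential check described in (a) above: it builds the base64-encoded company\username:password value and sends it as an HTTP Basic Authorization header to login.eloqua.com.  The /id resource used here to exercise the credentials is an assumption; use whatever endpoint you normally use for verification.

import base64
import requests

# Placeholder Eloqua credentials.
company = "MyCompany"
username = "integration.user"
password = "password"

# Eloqua basic authentication uses base64("<company>\<username>:<password>").
raw = f"{company}\\{username}:{password}"
auth_header = "Basic " + base64.b64encode(raw.encode("utf-8")).decode("ascii")

# Exercise the credentials against login.eloqua.com; the /id resource is assumed
# here as a lightweight endpoint that echoes the authenticated user details.
response = requests.get(
    "https://login.eloqua.com/id",
    headers={"Authorization": auth_header},
)
print(response.status_code, response.text[:200])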

    d. Designing the Inbound->Eloqua flows

    The Eloqua adapter for inbound->Eloqua flows only relies on the Bulk 2.0 APIs and doesn’t need the ICS App to be installed in Eloqua.
    Below are the steps to configure the adapter.

    Design time:

    • Create an ICS integration, and drag the Eloqua connection on the target or as an invoke activity in an orchestration.
    • Name your endpoint and click Next.
    • On the operations page, you can choose the Eloqua business object that needs to be created/updated, as well as fields within the object. You can choose the field to be uniquely matched on, etc.

    • You can also set the Auto-Sync time interval so that data inserted into the Eloqua staging area is periodically synced to the actual Eloqua tables.
    • Finish the wizard, complete the rest of the integration, and then activate it.

    At runtime, because the Bulk Import APIs are used under the hood, the following specific events happen (a code sketch follows below):

    • Depending on the business object and the fields chosen, an import definition is created by POSTing to the “/bulk/2.0/<object>/imports/” Eloqua endpoint.
    • This returns a unique URI in the response, which is used to POST the actual data to Eloqua. Thus, as data gets processed through the ICS integration, it reaches the Eloqua Invoke activity, which internally uses the URI returned above to POST the data to Eloqua. The data is now in the Eloqua staging area, ready to be synced into Eloqua.
    • Now, depending on the ‘Auto-Sync’ interval defined in design-time, periodically the ‘/bulk/2.0/syncs’ endpoint is invoked which moves the data from the staging area to Eloqua database tables.

    The Bulk API steps above are described in more detail here.
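The three runtime steps above can be sketched as direct Bulk 2.0 API calls in Python.  The base URL, field names, and request bodies below are illustrative assumptions; the adapter derives the real import definition from the object and fields chosen in the wizard.

import requests

# Placeholder Bulk 2.0 base URL and basic-auth credentials for the Eloqua pod.
base = "https://secure.p01.eloqua.com/api/bulk/2.0"
auth = ("MyCompany\\integration.user", "password")

# 1. Create an import definition for the chosen object and fields.
definition = {
    "name": "ICS contact import (sketch)",
    "fields": {"EmailAddress": "{{Contact.Field(C_EmailAddress)}}"},
    "identifierFieldName": "EmailAddress",
}
imp = requests.post(f"{base}/contacts/imports", json=definition, auth=auth).json()
import_uri = imp["uri"]   # unique URI returned for this import definition

# 2. POST the actual records to the returned URI; they land in the staging area.
rows = [{"EmailAddress": "jane.doe@example.com"}]
requests.post(f"{base}{import_uri}/data", json=rows, auth=auth)

# 3. Trigger a sync so the staged data is moved into the Eloqua tables
#    (this is what the adapter's Auto-Sync interval does periodically).
requests.post(f"{base}/syncs", json={"syncedInstanceUri": import_uri}, auth=auth)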

    e. Designing the Eloqua->Outbound flows

    Design time :

    • Create an ICS integration, and drag the Eloqua connection as the source of the integration.
    • Select the business object, select the fields, followed by selecting the response fields.
    • Finish the wizard. Complete the integration and activate it.

    When the integration is activated, ICS makes a callout to the Eloqua ICS App, registering the integration name, its ICS endpoint, and request and response fields chosen above.

    At this point, back in the Eloqua UI, the marketer can configure the ICS App in her campaign by choosing among the activated ICS integrations and configuring them appropriately. For example, the screenshot below shows the ICS App’s ‘cloud action’ configuration screen from a sample Eloqua campaign, after an integration called ‘eloqua_blog’ with the Eloqua Adapter as source is activated:

    The Marketer now runs her campaign. Contacts start flowing through various campaign steps, including the ICS App step, at which point the ICS App gets invoked, which in turn invokes the configured ICS integration.

    Integrating Oracle Project Cloud with Documents Cloud Service using REST APIs and business object-level security.


    Introduction

    Oracle Documents Cloud Service (DCS) enables collaboration through a rich set of social and mobile-optimized features. Customers often come across requirements to integrate DCS with Oracle ERP Cloud. Such integration improves productivity by taking advantage of DCS features. In this post, let’s take a look at integrating Project Management Cloud, a part of Oracle ERP Cloud, with DCS. The contents of this post apply to R11 of Project Management Cloud and R16.4.5 of DCS.

    Main Article

    Project Cloud and Documents Cloud both provide secure REST APIs for integration. In addition, Documents Cloud offers UI integration through applinks, short-lived links accessible through an HTML IFRAME. Project Cloud offers UI customization through Page Composer, which is sufficient to implement this solution. See links to the documentation for these APIs in the References section below. The solution described in this post uses the aforementioned APIs and tools and a custom integration service deployed to JCS-SX. It leverages parts of the design described in another blog post on integrating DCS and Sales Cloud (link provided in the References section). Below is a high-level depiction of the solution.


    Figure 1 – Overview of the solution

     

    JCS-SX is a PaaS-for-SaaS offering usually deployed alongside the Oracle SaaS application and pre-integrated with SaaS through Single-Sign-on. Guidance to implement this solution is split into subsections. For ease of comprehension, these instructions are abstracted. Click on one of the links below to jump to a subsection of interest.

    Documents Cloud REST API

    The following actions need to be performed through the API:

    • Query whether a sub-folder exists in DCS for the selected project.
    • Create a sub-folder for the project, based on project name.
    • Get an appslink to the sub-folder

    Get the contents of a folder, in order to verify the existence of a sub-folder with the same name as the project:
    Request:

    GET /documents/api/1.1/folders/F7A4AF94F58A48892821654E3B57253386C697CACDB0/items HTTP/1.1
    Host: <DocsCloudHostName:port>
    Authorization: Basic am9obi5kdW5iYXI6VmlzaW9uMTIzIQ==
    .....
    

    Response:

    ....
    {
    "type": "folder",
    "id": "FE4E22621CBDA1E250B26DD73B57253386C697CACDB0",
    "parentID": "F7A4AF94F58A48892821654E3B57253386C697CACDB0",
    "name": "Cloud based HCM",
    "ownedBy": {
    "displayName": "John Doe",
    "id": "UDFE5D9A1F50DAA96DA5F4723B57253386C6",
    "type": "user"
    }
    ...

    Create a new sub-folder:

    Request:

    POST /documents/api/1.1/folders/F7A4AF94F58A48892821654E3B57253386C697CACDB0 HTTP/1.1
    Host: <hostname:port>
    Authorization: Basic am9obi5kdW5iYXI6VmlzaW9uMTIzIQ==
    …..
    {
        "name": "TestFolder1",
        "description": "TestFolder"
    }

    Response:

    HTTP/1.1 201 Created
    Date: Tue, 24 Jan 2017 22:14:50 GMT
    Location: https://docs-gse00000310.documents.us2.oraclecloud.com/documents/api/1.1/folders/F073C821561724BDA2E6B6C73B57253386C697CACDB0
    ….

    Create appslink to a subfolder:
    Request:

    POST /documents/api/1.1/applinks/folder/F7A4AF94F58A48892821654E3B57253386C697CACDB0 HTTP/1.1
    Host: <DCS host:port>
    Authorization: Basic am9obi55iYXI6VmlzaW9uMTIzIQ==
    ....
    
    {
        "assignedUser": "casey.brown",
        "role":"contributor"
    }

    Response:

    HTTP/1.1 200 OK
     Date: Wed, 25 Jan 2017 00:52:40 GMT
     Server: Oracle-Application-Server-11g
     .....
    
    {
     "accessToken": "eDkMUdbNQ2ytyNTyghBbyj43yBKpY06UYhQer3EX_bAQKbAfv09d4T7zuS5AFHa2YgImBiecD2u-haE_1r3SYA==",
     "appLinkID": "LF0fW2LLCZRsnvk1TVNcz5UhiqDSflq_2Kht39UOZGKsglZo_4WT-OkR1kEA56K91S1YZxSa8pBpQZD6BSWYCnAXZZKAZaela3IySlgJaaAvJrijCvWTazDqCeY56DvyYgHNjAoZPSy2dL0DzaCWi0XA==",
     "appLinkUrl": "https://docs-gse00000310.documents.us2.oraclecloud.com/documents/embed/link/app/LF0fW2LLCZRsnvk1TVNcz5UhiqDSflq_2Kht39UOZGKsglZo_4WT-OkR1kEA56K91S1YZxSa8pBpQZD6BSWYCnAXZZKAZaela3IySlgJaaAvJrijCvWTazDqCeY56DvyYgHNjAoZPSy2dL0DzaCWi0XA==/folder/F7A4AF94F58A48892821654E3B57253386C697CACDB0/_GruppFinancial",
     "errorCode": "0",
     "id": "F7A4AF94F58A48892821654E3B57253386C697CACDB0",
     "refreshToken": "LugYsmKWK6t5aCfAb8-lgdmp7jgF8v3Q9aEtits4oy0Oz9JtaYnL9BOs8q4lwXK8",
     "role": "contributor",
     "type": "applink"
     }

    Project Cloud REST API

    JCS-SX in the solution ensures that only users with access to a project can access the corresponding folder in DCS. This is achieved by invoking the Project Cloud API with the JWT passed to the service by Project Cloud. Without a valid token, the JCS-SX service returns an error.

    Here is the sample payload for the service.
    Request:

    GET /projectsFinancialsApi/resources/11.1.11/projects/300000058801556?fields=ProjectId,ProjectName&onlyData=true HTTP/1.1
    Host: <Project Cloud>
    Authorization:Bearer <JWT token>
    ...

    Response:

    HTTP/1.1 200 OK
    Server: Oracle-Application-Server-11g
    …
    {
      "ProjectId" : 300000058801556,
      "ProjectName" : "Dixon Financials Upgrade"
    }

    Security

    There are several aspects of security addressed by this solution.

    • Project Cloud and JCS-SX integration is secured by single-sign-on infrastructure of which both systems are participants. Single sign-on is enabled for JCS-SX instances and their co-located Fusion SaaS applications. This integration only ensures that the service is invoked on behalf of a valid user of ERP Cloud.
    • The API calls from JCS-SX to Project Cloud are secured by JWT tokens supplied by Project Cloud upon invoking the JCS-SX service. This JWT token is bound to the currently logged-in Project Cloud user. JWT tokens are issued with a predetermined expiry time.
    • JCS-SX to DCS integration in this solution is secured by basic authentication. Federation of identity domains could allow seamless authentication and authorization of users between these two systems, with additional effort.

    JCS-SX Service

    This is a Java EE servlet that takes the project name, project ID, and a JWT token as query string parameters. The functions of the service are as follows (a Python sketch of the same logic follows the list):

    • Using the supplied JWT token and Project ID, try to get information about the project using the Project Cloud REST API. If the request fails, stop processing and return an “HTTP 401 Unauthorized” error.
    • If the previous step succeeds, query DCS for a sub-folder with the supplied project name. The root folder ID in DCS and the basic authentication credentials are available to the servlet.
    • If a sub-folder does not exist, create a new sub-folder.
    • Create an appslink to the sub-folder. Generate HTML content with an IFRAME element pointing to the appslink returned by DCS API.
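The original service is a Java EE servlet; the Python sketch below mirrors the same four steps using the REST calls shown earlier.  Host names, the DCS root folder ID, and credentials are placeholders, and error handling is reduced to the minimum.

import requests

PROJECTS_HOST = "https://projects-host.example.oraclecloud.com"   # placeholder
DCS_HOST = "https://docs-host.documents.us2.oraclecloud.com"      # placeholder
DCS_AUTH = ("dcs.service.user", "password")                       # placeholder
ROOT_FOLDER_ID = "F7A4AF94F58A48892821654E3B57253386C697CACDB0"

def project_folder_applink(project_id, project_name, jwt, dcs_user):
    # 1. Validate access: look the project up with the caller's JWT.
    check = requests.get(
        f"{PROJECTS_HOST}/projectsFinancialsApi/resources/11.1.11/projects/{project_id}",
        params={"fields": "ProjectId,ProjectName", "onlyData": "true"},
        headers={"Authorization": f"Bearer {jwt}"},
    )
    if check.status_code != 200:
        return 401, "Unauthorized"

    # 2. Look for an existing sub-folder named after the project.
    items = requests.get(
        f"{DCS_HOST}/documents/api/1.1/folders/{ROOT_FOLDER_ID}/items",
        auth=DCS_AUTH,
    ).json()
    folder_id = next(
        (i["id"] for i in items.get("items", []) if i.get("name") == project_name),
        None,
    )

    # 3. Create the sub-folder if it does not exist yet.
    if folder_id is None:
        created = requests.post(
            f"{DCS_HOST}/documents/api/1.1/folders/{ROOT_FOLDER_ID}",
            json={"name": project_name, "description": project_name},
            auth=DCS_AUTH,
        ).json()
        folder_id = created["id"]

    # 4. Create an appslink to the sub-folder and wrap it in an IFRAME for the page.
    applink = requests.post(
        f"{DCS_HOST}/documents/api/1.1/applinks/folder/{folder_id}",
        json={"assignedUser": dcs_user, "role": "contributor"},
        auth=DCS_AUTH,
    ).json()
    html = '<iframe src="' + applink["appLinkUrl"] + '" width="100%" height="600"></iframe>'
    return 200, html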

    Customizing Project Cloud

    For this integration, Project Cloud must be customized for the following:

    • Invoke JCS-SX service
    • Pass Project information such as Name and Id, along with a JWT token to JCS-SX service.
    • Display the appslink content from DCS.

    At the time of publishing this post, Project Cloud does not yet provide the App Composer tool available in Sales Cloud. However, Page Composer’s features are sufficient for this integration.  Here are the steps to implement it:

    • Create and activate a sandbox, if the current user does not have one already.
    • Navigate to an appropriate page of Project Management Cloud where Documents Cloud content can be displayed. For this solution, let’s navigate to Home->Projects->Project Financial Management. Then, search for projects and click on a project, then click on the Documents tab.


    • Click the top-right menu and select “Customize Pages”. Page Composer is now activated for the current page.
    • Click on the section of the page where the DCS appslink should be displayed.
    • On the top-left menu of Page Composer, click “View” and select “Source”. Click “Add Content”, then “Components”, and select the “Web Page” widget.
    • Once the widget is displayed, drag the edges to the desired size. Then, while the web page widget is selected, click “Edit” in the Page Composer menu, on the top left. The Web Page component’s property dialog is displayed. Click the drop-down next to the “Source” field and select “Expression Builder”. Enter the appropriate JCS-SX host and service URI for the JCS-SX service. Notice the binding variables for project information and the JWT token supplied through the query string. These variables are available to the page by default.
      https://<JCS-SX HOST>:<PORT>/doccloud?projectID=#{bindings.ProjectId.inputValue}&projectName=#{bindings.Name.inputValue}&buname=#{bindings.Name3.inputValue}&customername=#{bindings.Customer.inputValue}&jwt=#{applCoreSecuredToken.trustToken}


    • Click OK to submit and click “Apply” on the Component Properties page. If the integration works end-to-end, the DCS page should be displayed as shown below, with a sub-folder named after the project in focus. Users can drag and drop documents into the widget to add documents.


    Summary

    This article explains how to integrate Oracle Project Management Cloud and DCS using REST API and JCS-SX.  It provides API snippets, instructions for customizing Project Cloud and the overall logic of the service deployed on JCS-SX. This approach is suitable for R11 of ERP cloud and R16.4.5 of DCS. Subsequent releases of these products offer equivalent or better integration capabilities. Refer to product documentation for later versions before implementing a solution based on this article. 

    References

    DCS REST API:

    http://docs.oracle.com/cloud/latest/documentcs_welcome/WCDRA/index.html

    Project Portfolio Management Cloud REST API:

    http://docs.oracle.com/cloud/latest/projectcs_gs/FAPAP/

    Blog on Sales Cloud to DCS integration:

    http://www.ateam-oracle.com/integrating-oracle-document-cloud-and-oracle-sales-cloud-maintaining-data-level-business-object-security/

     

     

    Oracle GoldenGate: Replicating “Soft” Deletes To Data Warehouses


    Introduction

    We receive a lot of questions on how to set up Oracle GoldenGate to perform “soft” deletes in Data Warehouses. By default, Oracle GoldenGate replicates data operations exactly as they occur in the source database; however, in Data Warehouses there is typically a requirement to retain the original data record and set a flag that shows the record was deleted. This is what we mean by “soft” delete.

    In this article we shall present the three most common use cases and Oracle GoldenGate Replicat configurations that address the requirements. I shall be using Teradata as my target database; however, the concepts presented are valid for any RDBMS database supported as an Oracle GoldenGate target.

    Main Article

    When configuring replication to Data Warehouses, the three most common use cases we encounter are:

    (1) Retain a record of all source database operations for regulatory requirements.

    (2) Update the target database to set a delete flag and then insert the delete record as a new row. If the same data is re-inserted into the source database, create a new row in the target for this record.

    (3) Retain a single row in the target for the source data. If the source row is deleted, update a flag to record the delete. If the record is re-inserted into the source database, update the target row with the source record data and reset the delete flag.

    For our tests, we shall use the following source and target tables:

    Oracle Source Table
    create table tpc.repldel (
    myrowid   number(11) not null,
    atextrow  varchar2(50),
    insert_ts timestamp(6) not null,
    updt_ctr  number(18),
    update_ts timestamp(6)
    );

    Teradata Target Table
    create multiset table FDSUSER.REPLDEL (
    MYROWID   decimal(11),
    ATEXTROW  varchar(50),
    INSERT_TS timestamp(6),
    UPDT_CTR  decimal(18),
    UPDATE_TS timestamp(6),
    DEL_FLAG  char(1)
    )
    primary index (MYROWID);

    My source table definition does not contain a primary key or unique index, so we must ensure that all columns are logged for update and delete operations. Likewise, my Teradata target is a Multiset table that does not have any unique indexes defined; this allows duplicate data rows to exist in the table.

    Use Case #1

    Use case: Retain A Record Of All Source Database Operations.

    For this use case, we configure Replicat to insert every source record, no matter the source operation type, by using the INSERTALLRECORDS parameter. The Replicat configuration I’ll use for my test is:

    replicat deltest
    targetdb tdexpress, userid ggadmin, password Oracle1
    maxtransops 500
    batchsql batchtransops 500, bytesperqueue 1000000, opsperbatch 500
    dboptions nocatalogconnect
    insertallrecords
    map PDBORCL.TPC.REPLDEL, target FDSUSER.REPLDEL,
    COLMAP( USEDEFAULTS,
    DEL_FLAG = @IF (@STRCMP (@GETENV ('GGHEADER', 'OPTYPE'), 'DELETE') = 0, 'Y', 'N')
    );

    Generate test data:

    insert into tpc.repldel values (1, 'Insert row 1', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (2, 'Insert row 2', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (3, 'Insert row 3', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (4, 'Insert row 4', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (5, 'Insert row 5', CURRENT_TIMESTAMP, 0, NULL);
    commit;

    update tpc.repldel set atextrow = 'Update row 2', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 2;
    commit;

    update tpc.repldel set atextrow = 'Update row 5', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 5;
    commit;

    update tpc.repldel set atextrow = 'Update row 2', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 2;
    update tpc.repldel set atextrow = 'Update row 5', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 5;
    commit;

    delete from tpc.repldel where myrowid = 5;
    commit;

    Target table contents:

    SELECT MYROWID, ATEXTROW, INSERT_TS, UPDT_CTR, UPDATE_TS, DEL_FLAG FROM FDSUSER.REPLDEL order by MYROWID, UPDT_CTR;

     

    MYROWID ATEXTROW INSERT_TS UPDT_CTR UPDATE_TS DEL_FLAG
    1 Insert row 1 1/25/2017 13:35:52.291655 0 ? N
    2 Insert row 2 1/25/2017 13:35:52.297039 0 ? N
    2 Update row 2 1/25/2017 13:35:52.297039 1 1/25/2017 13:37:28.070826 N
    2 Update row 2 1/25/2017 13:35:52.297039 2 1/25/2017 13:37:28.080780 N
    3 Insert row 3 1/25/2017 13:35:52.297880 0 ? N
    4 Insert row 4 1/25/2017 13:35:52.298722 0 ? N
    5 Insert row 5 1/25/2017 13:35:52.299372 0 ? N
    5 Update row 5 1/25/2017 13:35:52.299372 1 1/25/2017 13:37:28.076635 N
    5 Update row 5 1/25/2017 13:35:52.299372 2 1/25/2017 13:37:28.081412 N
    5 Update row 5 1/25/2017 13:35:52.299372 2 1/25/2017 13:37:28.081412 Y

     

    As we can see in the output, a row exists in the target corresponding to each source operation, with deleted rows identified by the DEL_FLAG column.
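
    If downstream reporting later needs only the most recent image of each source row from this history table, a windowed query along the lines of the sketch below can be used. This is an illustration only; it relies on Teradata QUALIFY syntax and assumes the delete image is the one to keep when timestamps tie.

    SELECT MYROWID, ATEXTROW, INSERT_TS, UPDT_CTR, UPDATE_TS, DEL_FLAG
    FROM FDSUSER.REPLDEL
    /* keep the most recent image per row; ties on timestamps resolve to the delete image */
    QUALIFY ROW_NUMBER() OVER (PARTITION BY MYROWID
            ORDER BY COALESCE(UPDATE_TS, INSERT_TS) DESC, DEL_FLAG DESC) = 1;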

    Use Case #2

    Use case: Update the target database to set a delete flag and then insert the delete record as a new row. If the same data is re-inserted into the source database, create a new row in the target for this record.

    Because there are no unique indexes on the target table, we need to define KEYCOLS on the Replicat to define uniqueness for update operations. We handle the delete requirement by setting the parameter INSERTDELETES. The Replicat configuration I’ll use for this test is:

    replicat deltest
    targetdb tdexpress, userid ggadmin, password Oracle1
    maxtransops 500
    batchsql batchtransops 500, bytesperqueue 1000000, opsperbatch 500
    dboptions nocatalogconnect
    insertdeletes
    map PDBORCL.TPC.REPLDEL, target FDSUSER.REPLDEL,
    KEYCOLS (MYROWID, INSERT_TS), COLMAP( USEDEFAULTS,
    DEL_FLAG = @IF (@STRCMP (@GETENV ('GGHEADER', 'OPTYPE'), 'DELETE') = 0, 'Y', 'N')
    );

    Generate test data:

    insert into tpc.repldel values (1, 'Insert row 1', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (2, 'Insert row 2', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (3, 'Insert row 3', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (4, 'Insert row 4', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (5, 'Insert row 5', CURRENT_TIMESTAMP, 0, NULL);
    commit;

    update tpc.repldel set atextrow = 'Update row 2', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 2;
    commit;

    update tpc.repldel set atextrow = 'Update row 5', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 5;
    commit;

    update tpc.repldel set atextrow = 'Update row 2', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 2;
    update tpc.repldel set atextrow = 'Update row 5', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 5;
    commit;

    delete from tpc.repldel where myrowid = 5;
    delete from tpc.repldel where myrowid = 1;
    commit;

    insert into tpc.repldel values (1, 'Insert row 1', CURRENT_TIMESTAMP, 0, NULL);
    commit;

    Target table contents:

    SELECT MYROWID, ATEXTROW, INSERT_TS, UPDT_CTR, UPDATE_TS, DEL_FLAG FROM FDSUSER.REPLDEL order by MYROWID, UPDT_CTR;

    MYROWID ATEXTROW INSERT_TS UPDT_CTR UPDATE_TS DEL_FLAG
    1 Insert row 1 1/25/2017 14:20:06.915726 0 ? N
    1 Insert row 1 1/25/2017 14:19:06.064428 0 ? Y
    1 Insert row 1 1/25/2017 14:19:06.064428 0 ? N
    2 Update row 2 1/25/2017 14:19:06.065892 2 1/25/2017 14:19:42.388967 N
    3 Insert row 3 1/25/2017 14:19:06.067126 0 ? N
    4 Insert row 4 1/25/2017 14:19:06.068325 0 ? N
    5 Update row 5 1/25/2017 14:19:06.069736 2 1/25/2017 14:19:42.389616 Y
    5 Update row 5 1/25/2017 14:19:06.069736 2 1/25/2017 14:19:42.389616 N

     

    As we can see in the output, we now have 3 rows for MYROWID 1; one for the original insert, one for the delete, and one for the re-insert of the row in the source database. Likewise, there are two rows for MYROWID 5, one for the update of the original row and one for the delete.

    Use Case #3

    Use case: Retain a single row in the target for the source data. If the source row is deleted, update a flag to record the delete. If the record is re-inserted into the source database, update the target row with the source record data and reset the delete flag.

    This use case requires a change to the target table, as only one row per source record is retained. For Teradata, we either need to redefine the table as a Set table or add a Unique Primary Index to the existing Multiset table. To make this change, I deleted all of my test data from the Teradata table and executed the command:

    alter table FDSUSER.REPLDEL modify unique primary index (MYROWID);

    In the Replicat, I’ll use the parameter UPDATEDELETES to set the DEL_FLAG and retain a copy of the source record. Because we set a Unique Primary Index on the column MYROWID, Teradata will return a duplicate row violation if an insert comes from the source for a row that had previously been deleted. We’ll set Replicat to catch the error returned by Teradata and then execute an exceptions map to update the existing row with the new data. The Replicat configuration I’ll use for this test is:

    replicat deltest
    targetdb tdexpress, userid ggadmin, password Oracle1
    maxtransops 500
    batchsql batchtransops 500, bytesperqueue 1000000, opsperbatch 500
    dboptions nocatalogconnect
    reperror (-2801, exception)
    updatedeletes
    map PDBORCL.TPC.REPLDEL, target FDSUSER.REPLDEL,
    COLMAP( USEDEFAULTS,
    DEL_FLAG = @IF (@STRCMP (@GETENV ('GGHEADER', 'OPTYPE'), 'DELETE') = 0, 'Y', 'N')
    );
    -- Exceptions handlers
    allowduptargetmap
    updateinserts
    map PDBORCL.TPC.REPLDEL, target FDSUSER.REPLDEL, EXCEPTIONSONLY
    COLMAP( USEDEFAULTS,
    DEL_FLAG = 'N'
    );

    Generate test data:

    insert into tpc.repldel values (1, 'Insert row 1', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (2, 'Insert row 2', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (3, 'Insert row 3', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (4, 'Insert row 4', CURRENT_TIMESTAMP, 0, NULL);
    insert into tpc.repldel values (5, 'Insert row 5', CURRENT_TIMESTAMP, 0, NULL);
    commit;

    update tpc.repldel set atextrow = 'Update row 2', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 2;
    commit;

    update tpc.repldel set atextrow = 'Update row 5', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 5;
    commit;

    update tpc.repldel set atextrow = 'Update row 2', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 2;
    update tpc.repldel set atextrow = 'Update row 5', updt_ctr = updt_ctr+1, update_ts = CURRENT_TIMESTAMP
    where myrowid = 5;
    commit;

    delete from tpc.repldel where myrowid = 5;
    delete from tpc.repldel where myrowid = 1;
    commit;

    Target table contents:

    SELECT MYROWID, ATEXTROW, INSERT_TS, UPDT_CTR, UPDATE_TS, DEL_FLAG FROM FDSUSER.REPLDEL order by MYROWID, UPDT_CTR;

    MYROWID ATEXTROW INSERT_TS UPDT_CTR UPDATE_TS DEL_FLAG
    1 Insert row 1 1/25/2017 14:50:48.629563 0 ? Y
    2 Update row 2 1/25/2017 14:50:48.635908 2 1/25/2017 14:51:20.975013 N
    3 Insert row 3 1/25/2017 14:50:48.637100 0 ? N
    4 Insert row 4 1/25/2017 14:50:48.638282 0 ? N
    5 Update row 5 1/25/2017 14:50:48.639429 2 1/25/2017 14:51:20.975918 Y

     

    We now have a single row in the target for each source record. The DEL_FLAG column shows that the records for MYROWID 1 and 5 were deleted from the source table.

    Now re-insert data for MYROWID 5 in the source table.

    insert into tpc.repldel values (5, 'Insert row 5', CURRENT_TIMESTAMP, 0, NULL);
    commit;

    The target table now contains:

    MYROWID ATEXTROW INSERT_TS UPDT_CTR UPDATE_TS DEL_FLAG
    1 Insert row 1 1/25/2017 14:50:48.629563 0 ? Y
    2 Update row 2 1/25/2017 14:50:48.635908 2 1/25/2017 14:51:20.975013 N
    3 Insert row 3 1/25/2017 14:50:48.637100 0 ? N
    4 Insert row 4 1/25/2017 14:50:48.638282 0 ? N
    5 Insert row 5 1/25/2017 14:58:21.334681 0 ? N

     

    The data for MYROWID 5 was overlaid with the incoming source data and the DEL_FLAG was reset.

     

    Summary

    In this article we presented solutions for the three most common use cases requiring the replication of “soft” deletes to Data Warehouses.

    Bulk import of sales transactions into Oracle Sales Cloud Incentive Compensation using Integration Cloud Service

    $
    0
    0

    Introduction

    The Sales Cloud Incentive Compensation application provides an API to import sales transactions in bulk. These could be sales transactions exported out of an ERP system. Integration Cloud Service (ICS) offers extensive data transformation and secure file transfer capabilities that could be used to orchestrate, administer and monitor file transfer jobs. In this post, let’s look at an ICS implementation to transform and load sales transactions into Incentive Compensation. Instructions provided in this post are applicable to Sales Cloud Incentive Compensation R11 and ICS R16.4.1 or higher.

    Main Article

    Figure 1 provides an overview of the solution described in this post. A text file contains sales transactions, in CSV format, exported out of ERP Cloud. ICS imports the file from a file server using SFTP, transforms the data to a format suitable for Incentive Compensation and submits an import job to Sales Cloud. The data transfer is over encrypted connections end-to-end. ICS is Oracle’s enterprise-grade iPaaS offering, with adapters for Oracle SaaS and other SaaS applications and native adapters that allow connectivity to most cloud and on-premise applications. To learn more about ICS, refer to documentation at this link.

    Figure1

    Figure 1 – Overview of the solution

    Implementation of the solution requires the following high-level tasks.

    For the solution to work, ICS should be able to connect with Sales Cloud and the file server.  ICS agents can easily enable connectivity if one of these systems is located on-premise, behind a firewall.

    Configuring a file server to host ERP export file and enable SFTP

    A file server is an optional component of the solution. If the source ERP system that produces the CSV file allows Secure FTP access, ICS could connect to it directly. Otherwise, a file server could host exported files from the ERP system. One way to quickly achieve this is to provision a compute node on Oracle Public Cloud and enable SFTP access to a staging folder with read/write access for the ERP system and ICS.

    Defining data mapping for file-based data import service

    The file-based data import service requires that each import job specify a data mapping. This data mapping helps the import service assign the fields in the input file to fields in the Incentive Compensation Transaction object. There are two ways to define such a mapping.

    • Import mapping from a Spreadsheet definition
    • Define a new import by picking and matching fields on UI

    Here are the steps to complete import mapping:

    • Navigate to “Setup and Maintenance”.

    Figure2

    • Search for “Define File Import” task list.

    Figure3

    • Click on “Manage File Import Mappings” task from list.

    Figure4

    • On the next page, there are options to look up an existing mapping or create a new one for a specified object type. The two options, import from file or create a new mapping, are highlighted.

    Figure5

    • If you have an Excel mapping definition, then click on “Import Mapping”, provide the information and click “OK”.

    Figure6

    • Otherwise, create a new mapping by clicking on “Actions”->“Create”.

    Figure7

    • The next page allows field-by-field mapping between the CSV file’s fields and the fields under “Incentive Compensation Transactions”.

    Figure8

    The new mapping is now ready for use.

    Identifying Endpoints

    Importing sales transactions requires a file import web service and another, optional, web service to collect transactions.

    • Invoke file-based data import and export service with transformed and encoded file content.
    • Invoke ‘Manage Process Submission’ service with a date range for transactions.

    The file-based data import and export service can be used to import data into and export data out of all applications on Sales Cloud. For this solution we’ll use the “submitImportActivity” operation.  The WSDL is typically accessible at this URL for Sales Cloud R11.

    https://<Sales Cloud CRM host name>:<CRM port>/mktImport/ImportPublicService

    The next task could be performed by logging into the Incentive Compensation application or by invoking a web service. The ‘Manage Process Submission’ service is specific to the Incentive Compensation application. The file-based import processes the input file and loads the records into staging tables.  The ‘submitCollectionJob’ operation of the ‘Manage Process Submission’ service initiates the processing of the staged records into Incentive Compensation. This service is typically accessible at this URL. Note that this action can also be performed in the Incentive Compensation UI, as described in the final testing section of this post.

    https://<IC host name>:<IC port number>/publicIncentiveCompensationManageProcessService/ManageProcessSubmissionService

    Implementing an ICS Orchestration

    An ICS orchestration glues the other components together in a flow. ICS orchestrations can be invoked in flexible ways, such as scheduled triggers or an API interface. Orchestrations can perform a variety of tasks and implement complex integration logic. For the solution described in this post, ICS needs to perform the following tasks:

    • Connect to file server and import files that matches specified filename pattern.
    • Parse through file contents and for each record, transform the record to the format required by Incentive Compensation.
    • Convert the transformed file contents to Base64 format and store in a string variable.
    • Invoke the file-based data import web service with the Base64-encoded data. Note this service triggers the import process but does not wait for its completion.
    • Optionally, the service could invoke the ‘Manage Process Submission’ service after a delay to ensure that the file-based import completed in Sales Cloud.

    For the sake of brevity, only the important parts of the orchestration are addressed in detail here. Refer to ICS documentation for more information on building orchestrations.

     

    FTP adapter configuration

    FTP adapters could be used with ‘Basic Map my data’ or Orchestration patterns. To create a new FTP connection, navigate to “Connections” tab, click on “New Connection” and choose FTP as type of connection.

    On the “Configure Connection” page, set the “SFTP” drop-down to “Yes”. The FTP adapter allows login through an SSL certificate or username and password.

    Figure9

    On the “Configure Security” page, provide credentials, such as a username and password or the password for an SSL certificate. The FTP adapter also supports PGP encryption of content.

    Figure10

    Transforming the source records to destination format

    Source data from ERP could be in a different format than the format required by the target system. ICS provides a sophisticated mapping editor to map fields of the source record to the target record. Mapping could be as easy as dragging and dropping fields from source to target, or could use complex logic written in the XML style sheet language (XSLT).  Here is a snapshot of the mapping used for the transformation, primarily to convert a date string from one format to another.

    Figure15

    Mapping for SOURCE_EVENT_DATE requires a transformation, which is done using transformation editor, as shown.

    Figure16

    Converting file content to a Base64-encoded string

    The file-based data import service requires the content of a CSV file to be Base64-encoded. This encoding could be done using a simple XML schema used in the FTP invoke task of the orchestration. Here is the content of the schema.

    <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/" xmlns="http://www.w3.org/2001/XMLSchema">
    <element name="opaqueElement" type="base64Binary"/>
    </schema>

    To add the FTP invoke to the orchestration, drag and drop the connection configured previously.
    Figure11

    Select operations settings as shown.
    Figure12

    Choose options to select an existing schema.

    Figure13

    Pick the schema file containing the schema.

    Figure14

    The FTP invoke is ready to get a file via SFTP and return the contents to the orchestration as a Base64-encoded string. Map the content to a field in the SOAP message to be sent to Incentive Compensation.

    Testing the solution

    To test the solution, place a CSV-formatted file in the staging folder on the file server. Here is sample content from the source file.

    SOURCE_TRX_NUMBER,SOURCE_EVENT_DATE,CREDIT_DATE,ROLLUP_DATE,TRANSACTION_AMT_SOURCE_CURR,SOURCE_CURRENCY_CODE,TRANSACTION_TYPE,PROCESS_CODE,BUSINESS_UNIT_NAME,SOURCE_BUSINESS_UNIT_NAME,POSTAL_CODE,ATTRIBUTE21_PRODUCT_SOLD,QUANTITY,DISCOUNT_PERCENTAGE,MARGIN_PERCENTAGE,SALES_CHANNEL,COUNTRY
    TRX-SC1-000001,1/15/2016,1/15/2016,1/15/2016,1625.06,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU1,8,42,14,DIRECT,US
    TRX-SC1-000002,1/15/2016,1/15/2016,1/15/2016,1451.35,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU2,15,24,13,DIRECT,US
    TRX-SC1-000003,1/15/2016,1/15/2016,1/15/2016,3033.83,USD,INVOICE,CCREC,US1 Business Unit,US1 Business Unit,90071,SKU3,13,48,2,DIRECT,US

    After ICS fetches this file and transforms content, it invokes file-based data import service, with the payload shown below.

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/oracle/apps/marketing/commonMarketing/mktImport/model/types/" xmlns:mod="http://xmlns.oracle.com/oracle/apps/marketing/commonMarketing/mktImport/model/">
     <soapenv:Header/>
     <soapenv:Body>
     <typ:submitImportActivity>
     <typ:importJobSubmitParam>
     <mod:JobDescription>Gartner demo import</mod:JobDescription>
     <mod:HeaderRowIncluded>Y</mod:HeaderRowIncluded>
     <mod:FileEcodingMode>UTF-8</mod:FileEcodingMode>
     <mod:MappingNumber>300000130635953</mod:MappingNumber>
     <mod:ImportMode>CREATE_RECORD</mod:ImportMode>
     <mod:FileContent>U09VUkNFX1.....JUkVDVCxVUw==</mod:FileContent>
     <mod:FileFormat>COMMA_DELIMITER</mod:FileFormat>
     </typ:importJobSubmitParam>
     </typ:submitImportActivity>
     </soapenv:Body>
    </soapenv:Envelope>


    At this point, the import job has been submitted to Sales Cloud. The status of the file import job can be tracked on Sales Cloud, under ‘Setup and Maintenance’, by opening “Manage File Import Activities”. As shown below, several Incentive Compensation file imports have been submitted, in status ‘Base table upload in progress’.

    Figure17

    Here is a more detailed view of one job, opened by clicking on the status column of the job. This job has imported records into a staging table.

    Figure18

    To complete the job and see transactions in Incentive Compensation, follow one of these two methods.

    • Navigate to “Incentive Compensation” -> “Credits and Earnings” and click on “Collect Transactions” to import data
    • OR, Invoke ‘Manage Process Submission’ service with payload similar to sample snippet below.
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/incentiveCompensation/cn/processes/manageProcess/manageProcessSubmissionService/types/">
       <soapenv:Header/>
       <soapenv:Body>
          <typ:submitCollectionJob>
             <typ:scenarioName>CN_IMPORT_TRANSACTIONS</typ:scenarioName>
             <typ:scenarioVersion>001</typ:scenarioVersion>
             <typ:sourceOrgName>US1 Business Unit</typ:sourceOrgName>
             <typ:startDate>2016-01-01</typ:startDate>
             <typ:endDate>2016-01-31</typ:endDate>
             <typ:transactionType>Invoice</typ:transactionType>
          </typ:submitCollectionJob>
       </soapenv:Body>
    </soapenv:Envelope>

    Finally, verify that transactions are visible in Incentive Compensation by navigating to “Incentive Compensation” -> “Credits and Earnings” from the home page and clicking on “Manage Transactions”.

    Figure19

    Summary

    This post explained a solution to import transactions into Incentive Compensation using web services provided by Sales Cloud and Incentive Compensation application. It also explained several features of Integration Cloud Service utilized to orchestrate the import. The solution discussed in this post is suitable for Sales Cloud R11 and ICS R16.4.1. Subsequent releases of these products might offer equivalent or better capabilities out-of-box. Refer to product documentation for later versions before implementing a solution based on this post.

     

     

    Test blog

    $
    0
    0

    This blog is for testing purposes only, please ignore.

    Understanding the Enterprise Scheduler Service in ICS

    $
    0
    0

    Introduction

     

    In many enterprise integration scenarios there is a requirement to initiate tasks at scheduled times or at user-defined intervals. The Oracle Integration Cloud Service (ICS) provides scheduling functionality via the Oracle Enterprise Scheduler to satisfy these types of requirements.  The Oracle Enterprise Scheduler Service (ESS) is primarily a Java EE application that provides time-based and schedule-based callbacks to other applications to run their jobs. Applications define jobs and specify when those jobs need to be executed, and Oracle ESS then gives these applications a callback at the scheduled time or when a particular event arrives. Oracle ESS does not execute the jobs itself; it generates a callback to the application, and the application actually executes the job request. This implies that the Oracle Enterprise Scheduler Service is not aware of the details of the job request; all the job request details are owned and managed by the application.

     

    What follows is a discussion of how ICS utilizes the ESS feature.  The document covers how the ESS threads are allocated and the internal preparation completed for file processing.

     

    Quick ICS Overview

     

    The Integration Cloud Service deployment topology consists of one cluster.  The cluster has two managed servers along with one administration server.  This bit of information is relevant to the discussion of how the Enterprise Scheduler Service works and how it is used by applications like an ICS flow that runs in a clustered HA environment.

    A common use case for leveraging ESS is to set up a schedule to poll for files on an FTP server at regular intervals.  At the time files are found and then selected for processing, the ESS does some internal scheduling of these files to ensure the managed servers are not overloaded.  Understanding how this file processing works and how throttling might be applied automatically is valuable information as you take advantage of this ICS feature.

    An integration can be scheduled using the ICS scheduling user interface (UI). The UI provides a basic and an advanced option.  The basic option provides UI controls to schedule when to execute the integration.

    schedulingBasic

     

     

    The advanced option allows one to enter an iCal expression for the scheduling of the integration.

    schedulingView

     

     

    The ESS allows for two jobs to be executed at a time per JVM.  This equates to a maximum of four files being processed concurrently in a two-instance ICS cluster.  So how does ICS process these files, especially if multiple integrations could pick up twenty-five files at a time?

    As previously stated, there are two asynchronous worker resources per managed server. These asynchronous worker resources are known as an ICSFlowJob or AsyncBatchWorkerJob.   At the scheduled time, the ESS reserves one of the asynchronous worker resources, if one is available.  The initial asynchronous worker is the ICSFlowJob.  This is what we call the parent job.

    It is important to digress at this point to make mention of the database that backs the ICS product. The ICS product has a backing store of an Oracle database.  This database hosts the metadata for the ICS integrations, BPEL instances that are created during the execution of orchestration flows, and the AsyncBatchWorker metadata.  There is no need for the customer to maintain this database – no purging, tuning, or sizing.  ICS will only keep three days of BPEL instances in the database.  The purging is automatically handled by the ICS infrastructure.

    The ICSFlowJob invokes the static ScheduledProcessFlow BPEL. This process does the file listing, creates batches with one file per batch, and submits AsyncBatchWorker jobs to process the files. The AsyncBatchWorker jobs are stored within a database table.  These worker jobs will eventually be picked up by one of the two threads available to execute on the JVM. The graphic below demonstrates the parent and subprocess flows that have successfully completed.

    emcc

     

    Each scheduled integration will have at most ten batch workers (AsyncWorkerJob) created and stored within the database table.  The batch workers will have one or more batches assigned.  A batch is equivalent to one file. After the batch workers are submitted, the asynchronous worker resource, held by the ICSFlowJob, is released so it can be used by other requests.

    Scenario One

    1. One integration that is scheduled to run every 10 minutes
    2. Ten files are placed on the FTP server location, all with the same timestamp

    At the scheduled time, the ICSFlowJob request is assigned one of the four available threads (from two JVMs) to begin the process of file listing and assigning the batches.  In the database there will be ten rows stored, since there are ten files.  Each row will reference a file for processing.  These batches will be processed at a maximum of four at a time.  Recall that there are only two threads per JVM for processing batches.

    At the conclusion of processing all of the AsyncWorkerJob subrequests one of the batch processing threads notifies the parent request, ICSFlowJob, that all of the subrequests have completed.

     

    Scenario Two

    1. Two integrations are scheduled to run every 10 minutes
    2. There are 25 files, per integration, at each integration’s specified FTP server location

    This scenario will behave just as in scenario one; however, since each integration has more than ten files to process, the subrequests, AsyncWorkerJob, must each process more than one file.  Each integration will assign and schedule the file processing as follows:

    5 AsyncWorkerJob subrequests will process 2 files each

    5 AsyncWorkerJob subrequests will process 3 files each

    At the conclusion of the assignment and scheduling of the AsyncWorkerJob subrequests, there will be 20 rows in the database; 10 rows per integration.

    The AsyncWorkerJobs are executed on a first-come, first-served basis.  Therefore, the 20 subrequests will more than likely be interleaved with each other, and the processing of all of the files will take longer than if the integrations had not been kicked off at the same time.  The number of scheduler threads to process the batch integrations does not change.  There will only be a maximum of two scheduler threads per JVM.

     

    Summary

    The ESS scheduler provides useful features for having batch processes kicked off at scheduled intervals.  These intervals are user defined providing great flexibility as to when to execute these batch jobs.  However, care must be taken to prevent batch jobs from consuming all of the system resources, especially when there are real-time integrations being run through this same system.

     

    The Integration Cloud Service has a built-in feature to prevent batch jobs from overwhelming the service.  This is done by allowing only two scheduler threads to process files at a time per JVM.   This may mean that some batch integrations take longer; however, it prevents the system from being overwhelmed and negatively impacting other ICS flows not related to batch file processing with the ESS.

     

    As we have discussed in this article, the use case here is all about polling for files that ICS needs to process at specified times or intervals; however, the ESS may also be used to trigger integrations such as REST and SOAP-based web services.  When using ESS to initiate file processing, the system is limited to two scheduler threads per JVM.

     

    The polling approach may not always be the best one, since the delivery of the file may not be on a regularly scheduled cycle.  When the delivery of the file from the source is not on a regular schedule, it is probably better to implement a push model.   In a coming blog, I will demonstrate how to implement the push model.  With the push model, the system is no longer under the constraint of two scheduler threads per JVM.

     

    To learn more about the Enterprise Scheduler Service one should reference the Oracle documentation.

     


    Using Process Cloud Service REST API Part 2

    $
    0
    0

    Introduction

    In Part 1 we looked at using the Process Cloud Service REST API and making REST calls from an HTML page using jQuery/AJAX. In Part 2 we’ll take the next step by using and presenting data retrieved from PCS. We started out stuffing the string representation of the JSON data into an HTML <div> on the page. Now let’s present interesting bits of the data in a clear and appealing user interface. Building presentation structure separate from data and code is a fundamental graphical user interface application architecture. The declarative “view” in HTML, the “model” JSON data from the REST calls and the “controller” JavaScript make up a classic Model-View-Controller (MVC) architecture. In our case we’ll be using a “view model” instead of a controller, resulting in a Model-View-ViewModel (MVVM) architecture. Building on the tiny bit of CSS started in Part 1, we’ll use CSS to create the look and feel of the HTML view.

    Organizing Source Files

    Bundling CSS and JavaScript in with the HTML was convenient for our small start in Part 1, but now let’s organize CSS in a separate file style.css and JavaScript in a file pcs-api.js. Let’s also take advantage of a development environment. There are a number to choose from; I like NetBeans, so we’ll use it.

    Create a new HTML5/JS Application project, name it PCSrestDemo accepting the new project wizard defaults.

    NetBeams-HTML5-project

    We’ll be leveraging parts of the APITest1.html file from Part 1, so copy and paste from it where convenient. Edit the HTML <head> and the first <div> for process definitions, replacing the contents of the auto-generated index.html with the HTML shown below.

    <!DOCTYPE html>
    <html>
        <head>
            <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
            <title>PCS REST API Demo</title>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
        </head>
        <body>
            <h1>PCS REST API Demo</h1>
            <h2>Part 1</h2>
            <p>Use the process-definitions call to get a list of registered process names and definition IDs</p>
            <input type="button" value="Get Process List" onClick="getProcessList()">
            <br><br>
            <div id="proclist">
                <h3>Registered Processes</h3>
            </div>
            <br><br>
        </body>
    </html>

    We’ve changed the <div> id to proclist and we’ll place the process definition response data there on the page. Without any styling, the bare HTML will look like the image below.

    bare-html

    In Part 1 we had a tiny bit of CSS to set the button width and h1 color.

    input {width:300px;}
    h1    {color: blue;}

    We’re not looking to win a design contest, but the CSS code below is a good start at getting control over the style elements of the application. Create a folder css and file style.css with the contents as shown. A real designer would make something a bit more elegant but we’re mainly interested in the mechanics of using CSS in an application.

    input {
        width: 200px;
        height: 50px;
        border-radius: 5px;
        color: DarkBlue;
        background-color: White;
        border: 3px solid DeepSkyBlue;
        cursor: pointer;
        font-size: 20px;
        font-family: Tahoma, Verdana, Arial;
        -webkit-transition: width 1s, height 1s;
        transition: width 1s, height 1s;
    }
    
    input:hover {
        width: 250px;
        height: 62px;
        background-color: DeepSkyBlue;
        box-shadow: 8px 8px 4px Grey;
    }
    
    input:active {
        box-shadow: 8px 4px 4px Grey;
        transform: translateY(4px);
    }
    
    h1, h2, h3 {
        color: DeepSkyBlue;
        text-align: Center; 
        font-family: Tahoma, Verdana, Arial;
    }
    
    h2, h3 {
        text-align: Left;
    }
    
    ul {
        list-style: none;
    }

    also add the CSS link to the <head> section of index.html

            <link rel="stylesheet" href="css/style.css" type="text/css"/>

    This will change the appearance from the bare HTML shown above to that below.

    Create another folder named js and a file pcs-api.js with contents shown below.

     function getProcessList()
     {
       $.ajax(
           {
             type: "GET",
             url: "http://pcshost:7003/bpm/api/4.0/process-definitions",
             headers: {'Authorization': "Basic d2VibG9naWM6d2VibG9naWMx"},
             contentType: "application/json",
             dataType: "json",
             success: function(json){
                         $("#response").html(JSON.stringify(json));
             },
             failure: function(errMsg) {
                  alert(errMsg);
             },
             error: function(xhr){
                  alert("An error occured: " + xhr.status + " " + xhr.statusTextt);
             }
           });
     }

    This is the same REST call we had in Part 1. The stringify(json) string blob was placed on the page in the #response <div>; now we want to extract process names from the JSON object and place them in a list in the <div> we’re now calling #proclist. Add the script file reference to the <head> section of index.html.

            <script src="js/pcs-api.js"></script>

    vue.js

    Taking the minimalist, clean, simple approach started in Part 1 we’ll use vue.js for the JavaScript framework. There are other bigger, fancier frameworks like Angular.js but vue.js is lightweight, easy to use and a good starting point. In Part 3 of this series we’ll look at heavy duty JavaScript frameworks like Angular.js, node.js and Oracle JET.

    The declaration of what we want is an unordered list of process names

    <ul>
       <li>process name 1</li>
       <li>process name 2</li>
       <li>etc ...</li>
    </ul>

    so that’s what we’ll put in the #proclist <div>, a <ul> containing a declared list of process name, revision and defId items

    <ul>
        <li v-for="proc in procItems">
            {{ proc.processName }} version {{ proc.revision }} -- <b>defId:</b> {{ proc.processDefId }}
        </li>
    </ul>

    “v-for” is the vue.js for-loop binding and the double brace notation “{{ }}” is the data reference.

    In pcs-api.js make the connection between the JSON response data from the REST call and the <ul> on the HTML page. Looking at the PCS REST API documentation (or the JSON response from Postman as we did in Part 1) we see the process definition information is in a JSON array with the key name “items”. The vue.js vm (view-model) is created and used as shown

    var appProcList = new Vue({
        el: '#proclist',
        data: {
            procItems: json.items
        }
    })

    The view-model is a new Vue variable named appProcList, connected to the DOM element #proclist. The data is defined by an array named procItems. Replace the stringify bit in the AJAX success function with the vue.js code above. Also load vue.js from the unpkg CDN by adding the line

    <script src="https://unpkg.com/vue@2.1.10/dist/vue.js"></script>

    in the <head> section of index.html. The results from a PCS instance with five registered processes look like:

    Process Instance and Task List

    Duplicating the approach with process definitions let’s do the process instance call from Part 1 next. Add the HTML code to index.html

            <h2>Part 2</h2>
            <p>Retrieve a Process Instance</p>
            <input type="button" value="Get Process Instance" onClick="getProcessInstance()">
            <br><br>
            <div id="procinstance">
                <h3>Process Instance</h3>
                <ul>
                    <li><b>Title:</b> {{ title }}</li>
                    <li><b>ID:</b> {{ processId }}</li>
                    <li><b>Name:</b> {{ name }}</li>
                    <li><b>Owned By:</b> {{ ownedBy }}</li>
                    <li><b>Priority:</b> {{ priority }}</li>
                    <li><b>State:</b> {{ state }}</li>
                </ul>
            </div>
            <br><br>

    We’ll need a corresponding view-model; in pcs-api.js, add the getProcessInstance() function.

    function getProcessInstance()
    {
      $.ajax(
          {
            type: "GET",
            url: "http://pcshost:7003/bpm/api/4.0/processes/10003",
            headers: {'Authorization': "Basic d2VibG9naWM6d2VibG9naWMx"},
            contentType: "application/json",
            dataType: "json",
            success: function(json){
                var appProcInstance = new Vue({
                    el: '#procinstance',
                    data: {
                        title: json.title,
                        processId: json.processId,
                        name: json.processName,
                        ownedBy: json.ownedBy,
                        priority: json.priority,
                        state: json.state
                    }
                })
            },
            failure: function(errMsg) {
                 alert(errMsg);
            },
            error: function(xhr){
                 alert("An error occured: " + xhr.status + " " + xhr.statusTextt);
            }
          });
    }

    The view-model appProcInstance connects to the #procinstance <div> id and the data items are mapped individually. The result looks like

    process-instance-result

    Similarly, for the Task List call, add the <div> to index.html and the getTaskList() function to pcs-api.js. The <div> looks like

            <h2>Part 3</h2>
            <p>Retrieve Task List</p>
            <input type="button" value="Get Task List" onClick="getTaskList()">
            <br><br>
            <div id="tasklist">
                <h3>Task List</h3>
                <ul>
                    <li v-for="task in taskItems">
                        {{ task.title }} <b>summary:</b> {{ task.shortSummary }} <b>created:</b> {{ task.createdDate }} - {{ task.state }}
                    </li>
                </ul>
            </div>
            <br><br>

    and the JavaScript looks like

    function getTaskList()
    {
      $.ajax(
          {
            type: "GET",
            url: "http://pcshost:7003/bpm/api/4.0/tasks?status=ASSIGNED&assignment=MY_AND_GROUP",
            headers: {'Authorization': "Basic d2VibG9naWM6d2VibG9naWMx"},
            contentType: "application/json",
            dataType: "json",
            success: function(json){
                var appTaskList = new Vue({
                    el: '#tasklist',
                    data: {
                        taskItems: json.items
                    }
                })
            },
            failure: function(errMsg) {
                 alert(errMsg);
            },
            error: function(xhr){
                 alert("An error occured: " + xhr.status + " " + xhr.statusTextt);
            }
          });
    }

    The view-model appTaskList connects to the #tasklist <div> and the data comes from the json.items JSON array in the response. The Task List results look like

    tasklist-results

    Audit Diagram

    Let’s do something a bit flashy with the audit diagram. Retrieving the binary image data with the REST call as we did in Part 1, let’s open a modal overlay, show the diagram there, and return to the main page after closing the modal. This approach and code come from a sample in the CSS section of w3schools (the modal sample is at the bottom of the page on images). The first thing to do is set up the CSS code. Create a file img-modal.css in the css folder of the application and insert the following

    #clickMe {
        border-radius: 5px;
        cursor: pointer;
        transition: 0.3s;
    }
    
    #clickMe:hover {opacity: 0.5;}
    
    /* The Modal (background) */
    .modal {
        display: none; /* Hidden by default */
        position: fixed; /* Stay in place */
        z-index: 1; /* Sit on top */
        padding-top: 100px; /* Location of the box */
        left: 0;
        top: 0;
        width: 100%; /* Full width */
        height: 100%; /* Full height */
        overflow: auto; /* Enable scroll if needed */
        background-color: rgb(0,0,0); /* Fallback color */
        background-color: rgba(0,0,0,0.9); /* Black w/ opacity */
    }
    
    /* Modal Content (image) */
    .modal-content {
        margin: auto;
        display: block;
        width: 80%;
        max-width: 700px;
    }
    
    /* Caption of Modal Image */
    #caption {
        margin: auto;
        display: block;
        width: 80%;
        max-width: 700px;
        text-align: center;
        color: #ccc;
        padding: 10px 0;
        height: 150px;
    }
    
    /* Add Animation */
    .modal-content, #caption {    
        -webkit-animation-name: zoom;
        -webkit-animation-duration: 0.6s;
        animation-name: zoom;
        animation-duration: 0.6s;
    }
    
    @-webkit-keyframes zoom {
        from {-webkit-transform: scale(0)} 
        to {-webkit-transform: scale(1)}
    }
    
    @keyframes zoom {
        from {transform: scale(0.1)} 
        to {transform: scale(1)}
    }
    
    /* The Close Button */
    .close {
        position: absolute;
        top: 15px;
        right: 35px;
        color: #f1f1f1;
        font-size: 40px;
        font-weight: bold;
        transition: 0.3s;
    }
    
    .close:hover,
    .close:focus {
        color: #bbb;
        text-decoration: none;
        cursor: pointer;
    }
    
    /* 100% Image Width on Smaller Screens */
    @media only screen and (max-width: 700px){
        .modal-content {
            width: 100%;
        }
    }

    also add the CSS link in the <head> section of index.html

            <link rel="stylesheet" href="css/img-modal.css" type="text/css"/>

    Add the audit diagram section to index.html.

            <h2>Part 4</h2>
            <p>Retrieve Audit Diagram</p>
            <h3>Audit Diagram</h3>
            <img id="clickMe" src="images/GoGetIt.png" alt="Audit Diagram for Process" onClick="getAndShowAudit()" width="300" height="200">
            <!-- Audit Diagram Modal -->
            <div id="auditModal" class="modal">
                <span class="close">×</span>
                <img class="modal-content" id="imgFromPCS">
                <div id="caption"></div>
            </div>

    The clickMe image file GoGetIt.png has been added to a folder images in the project. You can create your own .png or download the completed NetBeans project attached to this blog. The getAndShowAudit() function needs to be added to pcs-api.js

    function getAndShowAudit()
    {
        var modal = document.getElementById('auditModal');
        var clickMeImg = document.getElementById('clickMe');
        var modalImg = document.getElementById('imgFromPCS');
        var auditCaption = document.getElementById('caption');
    
        var oReq = new XMLHttpRequest();
        oReq.open("GET", "http://pcshost:7003/bpm/api/4.0/processes/10003/audit", true);
        oReq.responseType = "blob";
        oReq.setRequestHeader("Authorization", "Basic d2VibG9naWM6d2VibG9naWMx");
        oReq.onreadystatechange = function () {
                                    if (oReq.readyState == oReq.DONE) {
                                      modalImg.src = window.URL.createObjectURL(oReq.response);
                                      clickMeImg.src = window.URL.createObjectURL(oReq.response);
                                    }
                                  }
        oReq.send();
    
        modal.style.display = "block";
        auditCaption.innerHTML = clickMeImg.alt + " 10003";
    
        var span = document.getElementsByClassName("close")[0];
        span.onclick = function() { modal.style.display = "none"; }
    }

    The REST call is the same as in Part 1; code to open and close the modal has been added. The section of the page before retrieving the audit diagram

    audit-diagram-go

    the diagram is displayed in an overlay

    audit-diagram-modal

    Use the x in the upper right to close the overlay. The retrieved diagram also replaces the clickMe image.

    Summary

    A modern HTML/CSS application using the Process Cloud Service REST API is a convenient and straightforward approach to developing a custom user interface for PCS-based workflow applications. In Part 3 we’ll take a look at components in vue.js and other JavaScript frameworks like Oracle JET and Angular.js.

    Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using REST

    $
    0
    0

    Introduction

    This post details a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS) using RESTful services. It is a companion to the A-Team post Loading Data from Oracle Field Service Cloud into Oracle BI Cloud Service using SOAP. Both this post and the SOAP post offer methods to complement the standard OFSC Daily Extract described in Oracle Field Service Cloud Daily Extract Description.

    One case for using this method is analyzing trends regarding OFSC events.

    This post uses RESTful web services to extract JSON-formatted data responses. It also uses the PL/SQL language to call the web services, parse the JSON responses, and perform database table operations in a Stored Procedure. It produces a BICS staging table which can then be transformed into star-schema object(s) for use in modeling. The transformation processes and modeling are not discussed in this post.

    Finally, an example of a database job is provided that executes the Stored Procedure on a scheduled basis.

    The PL/SQL components are for demonstration purposes only and are not intended for enterprise production use. Additional detailed information, including the complete text of the PL/SQL procedure described, is included in the References section at the end of this post.

    Update: As of December, 2016 the  APEX 5.1 APEX_JSON package has removed the limitation of 32K lengths for JSON values. A new section has been added to this post named Parsing Events Responses using APEX_JSON.

    Rationale for Using PL/SQL

    PL/SQL is the only procedural tool that runs on the BICS / Database Schema Service platform. Other wrapping methods e.g. Java, ETL tools, etc. require a platform outside of BICS to run on.

    PL/SQL may also be used in a DBaaS (Database as a Service) that is connected to BICS.

    PL/SQL can utilize native SQL commands to operate on the BICS tables. Other methods require the use of the BICS REST API.

    Note: PL/SQL is very good at showcasing functionality. However, it tends to become prohibitively resource-intensive when deployed in an enterprise production environment. For the best enterprise deployment, an ETL tool such as Oracle Data Integrator (ODI) should be used to meet these requirements and more:

    * Security

    * Logging and Error Handling

    * Parallel Processing – Performance

    * Scheduling

    * Code Re-usability and Maintenance

    About the OFSC REST API

    The document REST API for Oracle Field Service Cloud Service should be used extensively, especially the Authentication, Paginating, and Working with Events sections. Terms described there such as subscription, page, and authorization are used in the remainder of this post.

    In order to receive events, a subscription is needed listing the specific events desired. The creation of a subscription returns both a subscription ID and a page number to be used in the REST calls to receive events.

    At this time, a page contains 0 to 100 items (events) along with the next page number to use in a subsequent call.

    The following is a list of supported events types available from the REST API:

    Activity Events
    Activity Link Events
    Inventory Events
    Required Inventory Events
    User Events
    Resource Events
    Resource Preference Events

    This post uses the following subset of events from the Activity event type:

    activityCreated
    activityUpdated
    activityStarted
    activitySuspended
    activityCompleted
    activityNotDone
    activityCanceled
    activityDeleted
    activityDelayed
    activityReopened
    activityPreworkCreated
    activityMoved

    The process described in this post can be modified slightly for each different event type. Note: the columns returned for each event type differ slightly and require modifications to the staging table and parsing section of the procedure.

    Using Oracle Database as a Service

    This post uses the new native support for JSON offered by the Oracle 12c database. Additional information about these new features may be found in the document JSON in Oracle Database.

    These features provide a solution that overcomes a current limitation in the APEX_JSON package. The maximum length of JSON values in that package is limited to 32K characters. Some of the field values in OFSC events exceed this length.

    Preparing the DBaaS Wallet

    Create an entry in a new or existing Oracle database wallet for the trusted public certificates used to secure connections to the web service via the Internet. A link to the Oracle Wallet Manager documentation is included in the References section. Note the location and password of the wallet as they are used to issue the REST request.

    The need for a trusted certificate is detected when the following error occurs: ORA-29024: Certificate validation failure.

    An example certificate path found using Chrome browser is shown below. Both of these trusted certificates need to be in the Oracle wallet.

    • 2

    Creating a BICS User in the Database

    The complete SQL used to prepare the DBaaS may be viewed here.

    Example SQL statements are below:

    CREATE USER "BICS_USER" IDENTIFIED BY password
    DEFAULT TABLESPACE "USERS"
    TEMPORARY TABLESPACE "TEMP"
    ACCOUNT UNLOCK;
    -- QUOTAS
    ALTER USER "BICS_USER" QUOTA UNLIMITED ON USERS;
    -- ROLES
    ALTER USER "BICS_USER" DEFAULT ROLE "CONNECT","RESOURCE";
    -- SYSTEM PRIVILEGES
    GRANT CREATE VIEW TO "BICS_USER";
    GRANT CREATE ANY JOB TO "BICS_USER";

    Creating Database Schema Objects

    Three tables need to be created prior to compiling the PL/SQL stored procedure; a DDL sketch follows the column descriptions below. These tables are:

    *     A staging table to hold OFSC Event data

    *     A subscription table to hold subscription information.

    *     A JSON table to hold the JSON responses from the REST calls

    The staging table, named OFSC_EVENT_ACTIVITY, has columns described in the OFSC REST API for the Activity event type. These columns are:

    PAGE_NUMBER — for the page number the event was extracted from
    ITEM_NUMBER — for the item number within the page of the event
    EVENT_TYPE
    EVENT_TIME
    EVENT_USER
    ACTIVITY_ID
    RESOURCE_ID
    SCHEDULE_DATE
    APPT_NUMBER
    CUSTOMER_NUMBER
    ACTIVITY_CHANGES — To store all of the individual changes made to the activity

    The subscription table, named OFSC_SUBSCRIPTION_PAGE, has the following columns:

    SUBSCRIPTION_ID     — for the supported event types
    NEXT_PAGE                — for the next page to be extracted in an incremental load
    LAST_UPDATE            — for the date of the last extract
    SUPPORTED_EVENT — for the logical name for the subscription event types
    FIRST_PAGE               — for the first page to be extracted in a full load

    The JSON table, named OFSC_JSON_TMP, has the following columns:

    PAGE_NUMBER — for the page number extracted
    JSON_CLOB       — for the JSON response received for each page
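
    The column lists above translate into DDL along the lines of the sketch below. The data types and lengths here are assumptions for illustration only; the complete SQL and procedure text referenced elsewhere in this post contain the authoritative definitions.

    -- Sketch only: possible DDL for the three tables (data types and lengths are assumptions)
    CREATE TABLE OFSC_EVENT_ACTIVITY (
      PAGE_NUMBER      VARCHAR2(50),
      ITEM_NUMBER      NUMBER,
      EVENT_TYPE       VARCHAR2(100),
      EVENT_TIME       VARCHAR2(50),
      EVENT_USER       VARCHAR2(100),
      ACTIVITY_ID      VARCHAR2(50),
      RESOURCE_ID      VARCHAR2(50),
      SCHEDULE_DATE    VARCHAR2(50),
      APPT_NUMBER      VARCHAR2(100),
      CUSTOMER_NUMBER  VARCHAR2(100),
      ACTIVITY_CHANGES CLOB
    );

    CREATE TABLE OFSC_SUBSCRIPTION_PAGE (
      SUBSCRIPTION_ID VARCHAR2(100),
      NEXT_PAGE       VARCHAR2(50),
      LAST_UPDATE     DATE,
      SUPPORTED_EVENT VARCHAR2(100),
      FIRST_PAGE      VARCHAR2(50)
    );

    CREATE TABLE OFSC_JSON_TMP (
      PAGE_NUMBER VARCHAR2(50),
      JSON_CLOB   CLOB
    );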

    Using API Testing Tools

    The REST requests should be developed in API testing tools such as cURL and Postman. The JSON expressions for parsing should be developed and tested in a JSON expression testing tool such as CuriousConcept. Links to these tools are provided in the References section.

    Note: API testing tools such as SoapUI, CuriousConcept, Postman, and so on are third-party tools for using SOAP and REST services. Oracle does not provide support for these tools or recommend a particular tool for its APIs. You can select the tool based on your requirements.

    Subscribing to Receive Events

    Create subscriptions prior to receiving events. A subscription specifies the types of events that you want to receive. Multiple subscriptions are recommended. For use with the method in this post, a subscription should only contain events that have the same response fields.

    The OFSC REST API document describes how to subscribe using a cURL command. Postman can also easily be used. Either tool will provide a response as shown below:

    {
    "subscriptionId": "a0fd97e62abca26a79173c974d1e9c19f46a254a",
    "nextPage": "160425-457,0",
    "links": [ … omitted for brevity ]
    }

    Note: The default next page is for events after the subscription is created. Ask the system administrator for a starting page number if a past date is required.

    Use SQL*Plus or SQL Developer and insert a row for each subscription into the OFSC_SUBSCRIPTION_PAGE table.

    Below is an example insert statement for the subscription above:

    INSERT INTO OFSC_SUBSCRIPTION_PAGE
    (
    SUBSCRIPTION_ID,
    NEXT_PAGE,
    LAST_UPDATE,
    SUPPORTED_EVENT,
    FIRST_PAGE
    )
    VALUES
    (
    'a0fd97e62abca26a79173c974d1e9c19f46a254a',
    '160425-457,0',
    sysdate,
    'Required Inventory',
    '160425-457,0'
    );

    Preparing and Calling the OFSC RESTful Service

    This post uses the events method of the OFSC REST API.

    This method requires HTTP Basic authentication and mandates a base64 encoded value for the following information: user-login "@" instance-id ":" user-password

    An example encoded result is:

    dXNlci1sb2dpbkBpbnN0YW5jZS1pZDp1c2VyLXBhc3N3b3Jk

    The authorization header value is the concatenation of the string ‘Basic’ with the base64 encoded result discussed above. The APEX_WEB_SERVICE package is used to set the header as shown below:

    v_authorization_token := 'dXNlci1sb2dpbkBpbnN0YW5jZS1pZDp1c2VyLXBhc3N3b3Jk';
    apex_web_service.g_request_headers(1).name  := 'Authorization';
    apex_web_service.g_request_headers(1).value := 'Basic '||v_authorization_token;
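    The token can also be computed in the database rather than with an external encoder. A minimal PL/SQL sketch using UTL_ENCODE is below; the CHR(13)/CHR(10) replacements strip the line breaks that BASE64_ENCODE inserts for longer input strings.

    v_authorization_token := replace(replace(
        utl_raw.cast_to_varchar2(
          utl_encode.base64_encode(
            utl_raw.cast_to_raw('user-login@instance-id:user-password'))),
        chr(13), ''), chr(10), '');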

    The wallet path and password discussed in the Preparing the DBaaS Wallet section are also required. An example path from a Linux server is:

    /u01/app/oracle

    Calling the Events Request

    The events request is called for each page available for each subscription stored in the OFSC_SUBSCRIPTION_PAGE table using a cursor loop as shown below:

    For C1_Ofsc_Subscription_Page_Rec In C1_Ofsc_Subscription_Page
    Loop
    V_Subscription_Id := C1_Ofsc_Subscription_Page_Rec.Subscription_Id;
    Case When P_Run_Type = 'Full' Then
    V_Next_Page := C1_Ofsc_Subscription_Page_Rec.First_Page;
    Else
    V_Next_Page := C1_Ofsc_Subscription_Page_Rec.Next_Page;
    End Case; … End Loop;

    The URL is modified for each call. The subscription_id and the starting page are from the table.

    For the first call only, if the parameter / variable p_run_type is equal to ‘Full’, the staging table is truncated and the page value is populated from the FIRST_PAGE column in the OFSC_SUBSCRIPTION_PAGE table. Otherwise, the staging table is not truncated and the page value is populated from the NEXT_PAGE column.
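    A minimal sketch of that first-call logic inside the cursor loop shown earlier is below; the TRUNCATE is an assumption about how the staging table is cleared.

    If P_Run_Type = 'Full' And C1_Ofsc_Subscription_Page%RowCount = 1 Then
       -- First subscription of a full run: clear the staging table
       Execute Immediate 'TRUNCATE TABLE ofsc_event_activity';
    End If;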

    Subsequent page values come from parsing the nextPage value in the responses.

    An example command to create the URL from the example subscription above is:

    f_ws_url := v_base_url||'/events?subscriptionId=' ||v_subscription_id|| chr(38)||'page=' ||v_next_page;

    The example URL result is:

    https://ofsc-hostname/rest/ofscCore/v1/events?subscriptionId=a0fd97e62abca26a79173c974d1e9c19f46a254a&page=160425-457,0

    An example call using the URL is below:

    f_ws_response_clob := apex_web_service.make_rest_request (
    p_url => f_ws_url
    ,p_http_method => 'GET'
    ,p_wallet_path => 'file:/u01/app/oracle'
    ,p_wallet_pwd => 'wallet-password' );

    Storing the Event Responses

    Each response (page) is processed using a while loop as shown below:

    While V_More_Pages
    Loop
    Extract_Page;
    End Loop;

    Each page is parsed to obtain the event type of the first item. A null (empty) event type signals an empty page and the end of the data available. An example parse to obtain the event type of the first item is below. Note: for usage of the JSON_Value function below see JSON in Oracle Database.

    select json_value (f_ws_response_clob, '$.items[0].eventType' ) into f_event_type from dual;

    If there is data in the page, the requested page number and the response clob are inserted into the OFSC_JSON_TMP table and the response is parsed to obtain the next page number for the next call as shown below:

    f_json_tmp_rec.page_number := v_next_page; -- this is the requested page number
    f_json_tmp_rec.json_clob := f_ws_response_clob;
    insert into ofsc_json_tmp values f_json_tmp_rec;
    select json_value (f_ws_response_clob, '$.nextPage' ) into v_next_page from dual;

    Parsing and Loading the Events Responses

    Each response row stored in the OFSC_JSON_TMP table is retrieved and processed via a cursor loop statement as shown below:

    for c1_ofsc_json_tmp_rec in c1_ofsc_json_tmp
    loop
    process_ofsc_json_page (c1_ofsc_json_tmp_rec.page_number);
    end loop;

    An example response is below with only the first item shown:

    {
    "found": true,
    "nextPage": "170110-13,0",
    "items": [
    {
    "eventType": "activityUpdated",
    "time": "2017-01-04 12:49:51",
    "user": "soap",
    "activityDetails": {
    "activityId": 1297,
    "resourceId": "test-resource-id",
    "resourceInternalId": 2505,
    "date": "2017-01-25",
    "apptNumber": "82994469003",
    "customerNumber": "12797495"
    },
    "activityChanges": {
    "A_LastMessageStatus": "SuccessFlag - Fail - General Exception: Failed to update FS WorkOrder details. Reason: no rows updated for: order_id = 82994469003 service_order_id = NULL"
    }
    }
    ],
    "links": [

    ]
    }

    Each item (event) is retrieved and processed via a while loop statement as shown below:

    while f_more_items loop
    process_item (i);
    i := i + 1;
    end loop;

    For each item, a dynamic SQL statement is prepared and submitted to return the columns needed to insert a row into the OFSC_EVENT_ACTIVITY staging table as shown below (the details of creating the dynamic SQL statement have been omitted for brevity):

    An example of a dynamically prepared SQL statement is below. Note: for usage of the JSON_Table function below see JSON in Oracle Database.

    DYN_SQL
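    As an illustration only (not the exact statement from the screenshot above), a statement prepared for, say, item index 2 of a given page might resemble the following; the item index and the page number are concatenated into the statement at runtime:

    SELECT jt.*
    FROM ofsc_json_tmp t,
         JSON_TABLE(t.json_clob, '$.items[2]'
           COLUMNS (
             event_type      VARCHAR2(50) PATH '$.eventType',
             event_time      VARCHAR2(30) PATH '$.time',
             event_user      VARCHAR2(50) PATH '$.user',
             activity_id     VARCHAR2(20) PATH '$.activityDetails.activityId',
             resource_id     VARCHAR2(50) PATH '$.activityDetails.resourceId',
             schedule_date   VARCHAR2(20) PATH '$.activityDetails.date',
             appt_number     VARCHAR2(50) PATH '$.activityDetails.apptNumber',
             customer_number VARCHAR2(50) PATH '$.activityDetails.customerNumber'
           )) jt
    WHERE t.page_number = '160425-457,0';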

    The execution of the SQL statement and the insert are shown below:

    execute immediate f_sql_stmt into ofsc_event_activity_rec;
    insert into ofsc_event_activity values ofsc_event_activity_rec;

    Parsing Events Responses using APEX_JSON

    Update: As of December 2016, the APEX 5.1 APEX_JSON package has removed the 32K length limitation for JSON values. This update allows the continued use of an Oracle 11g database if desired. This new section demonstrates the usage.

    Each page response clob is parsed with the APEX_JSON.PARSE procedure as shown below. This procedure stores all the JSON elements and values in an internal array which is accessed via JSON Path statements.

    apex_json.parse(F_Ws_Response_Clob);

    Each page is tested to see if it is an empty last page. A page is deemed empty when the first event has a null event type as shown below.

    apex_json.parse(F_Ws_Response_Clob);
    F_Event_Type := apex_json.get_varchar2(p_path => 'items[1].eventType');
    Case When F_Event_Type Is Null
    Then V_More_Pages := False; …

    An example response is shown in the section above.

    Each item (event) is retrieved and processed via a while loop statement as shown below:

    while f_more_items loop
    process_item_JParse (i);
    i := i + 1;
    end loop;

    For each item (event), the event is parsed into a variable row record as shown below:

    OFSC_EVENT_ACTIVITY_rec.PAGE_NUMBER := F_Page_Number;
    OFSC_EVENT_ACTIVITY_rec.ITEM_NUMBER := FI ;
    OFSC_EVENT_ACTIVITY_rec.EVENT_TYPE := apex_json.get_varchar2(p_path => 'items[' || Fi || '].eventType') ;
    OFSC_EVENT_ACTIVITY_rec.EVENT_TIME := apex_json.get_varchar2(p_path => 'items[' || Fi || '].time') ;
    OFSC_EVENT_ACTIVITY_rec.EVENT_USER := apex_json.get_varchar2(p_path => 'items[' || Fi || '].user') ;
    OFSC_EVENT_ACTIVITY_rec.ACTIVITY_ID := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.activityId') ;
    OFSC_EVENT_ACTIVITY_rec.RESOURCE_ID := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.resourceId') ;
    OFSC_EVENT_ACTIVITY_rec.SCHEDULE_DATE := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.date') ;
    OFSC_EVENT_ACTIVITY_rec.APPT_NUMBER := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.apptNumber') ;
    OFSC_EVENT_ACTIVITY_rec.CUSTOMER_NUMBER := apex_json.get_varchar2(p_path => 'items[' || Fi || '].activityDetails.customerNumber') ;
    OFSC_EVENT_ACTIVITY_rec.ACTIVITY_CHANGES := Get_Item_ACTIVITY_CHANGES (FI);
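    The Get_Item_ACTIVITY_CHANGES helper is not shown in this post. A minimal sketch of such a function is below, assuming APEX_JSON.GET_MEMBERS is used to walk the activityChanges object and that the member names are simple identifiers; the parameter name is hypothetical.

    FUNCTION Get_Item_ACTIVITY_CHANGES (p_item IN PLS_INTEGER) RETURN VARCHAR2
    IS
      l_members wwv_flow_t_varchar2;
      l_result  VARCHAR2(4000);
    BEGIN
      -- List the member names of items[p_item].activityChanges in the parsed document
      l_members := apex_json.get_members(p_path => 'items[' || p_item || '].activityChanges');
      FOR i IN 1 .. l_members.COUNT LOOP
        -- Append each change as "name: value; "
        l_result := l_result || l_members(i) || ': ' ||
                    apex_json.get_varchar2(p_path => 'items[' || p_item || '].activityChanges.' || l_members(i)) || '; ';
      END LOOP;
      RETURN l_result;
    EXCEPTION
      WHEN OTHERS THEN
        RETURN NULL; -- no activityChanges object present for this item
    END;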

    The insert of the row is shown below:

    insert into ofsc_event_activity values ofsc_event_activity_rec;

    Verifying the Loaded Data

    Use SQL*Plus, SQL Developer, or a similar tool to display the rows loaded into the staging table.

    A sample set of rows is shown below:

    tabResults

    Troubleshooting the REST Calls

    Common issues are the need for a proxy, the need for an ACL, the need for a trusted certificate (if using HTTPS), and the need to use the correct TLS security protocol. Note: This post uses DBaaS so all but the first issue has been addressed.

    The need for a proxy may be detected when the following error occurs: ORA-12535: TNS:operation timed out. Adding the optional p_proxy_override parameter to the call may correct the issue. An example proxy override is:

    www-proxy.us.oracle.com
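    For example, the earlier events call could pass the proxy override as follows (a sketch reusing the wallet values shown above):

    f_ws_response_clob := apex_web_service.make_rest_request (
       p_url            => f_ws_url
      ,p_http_method    => 'GET'
      ,p_wallet_path    => 'file:/u01/app/oracle'
      ,p_wallet_pwd     => 'wallet-password'
      ,p_proxy_override => 'www-proxy.us.oracle.com' );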

    Scheduling the Procedure

    The procedure may be scheduled to run periodically through the use of an Oracle Scheduler job as described in Scheduling Jobs with Oracle Scheduler.

    A job is created using the DBMS_SCHEDULER.CREATE_JOB procedure by specifying a job name, type, action and a schedule. Setting the enabled argument to TRUE enables the job to automatically run according to its schedule as soon as you create it.

    An example of a SQL statement to create a job is below:

    BEGIN
    dbms_scheduler.create_job (
    job_name => 'OFSC_REST_EVENT_EXTRACT',
    job_type => 'STORED_PROCEDURE',
    enabled => TRUE,
    job_action => 'BICS_OFSC_REST_INTEGRATION',
    start_date => '12-JAN-17 11.00.00 PM Australia/Sydney',
    repeat_interval => 'freq=hourly;interval=24' -- this will run once every 24 hours
    );
    END;
    /

    Note: If using the BICS Schema Service database, the package name is CLOUD_SCHEDULER rather than DBMS_SCHEDULER.

    The job log and status may be queried using the *_SCHEDULER_JOBS views. Examples are below:

    SELECT JOB_NAME, STATE, NEXT_RUN_DATE from USER_SCHEDULER_JOBS;
    SELECT LOG_DATE, JOB_NAME, STATUS from USER_SCHEDULER_JOB_LOG;

    Summary

    This post detailed a method of extracting and loading data from Oracle Field Service Cloud (OFSC) into the Oracle Business Intelligence Cloud Service (BICS) using RESTful services.

    The method extracted JSON-formatted data responses and used the PL/SQL language to call the web services, parse the JSON responses, and perform database table operations in a Stored Procedure. It also produced a BICS staging table which can then be transformed into star-schema object(s) for use in modeling.

    Finally, an example of a database job was provided that executes the Stored Procedure on a scheduled basis.

    For more BICS and BI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for BICS.

    References

    Complete Procedure

    Complete Procedure using APEX_JSON

    JSON in Oracle Database

    REST API for Oracle Field Service Cloud Service

    Scheduling Jobs with Oracle Scheduler

    Database PL/SQL Language Reference

    APEX_WEB_SERVICE Reference Guide

    APEX_JSON Reference Guide

    Curious Concept JSON Testing Tool

    Postman Testing Tool

    Base64 Decoding and Encoding Testing Tool

    Using Oracle Wallet Manager

    Oracle Business Intelligence Cloud Service Tasks

     

    Integrating Sales Cloud and Service Cloud using ICS – troubleshooting issues with security configuration


    Introduction

    This blog talks about a few "gotchas" when integrating Oracle Sales Cloud (OSC) and Oracle Service Cloud (OSvC) using Oracle's iPaaS platform, the Integration Cloud Service (ICS).
    The idea is to provide a ready reckoner for some common issues, so that customers can hit the ground running when integrating OSvC and OSC using ICS.

     

    ICS Integrations for OSC OSvC

    Pre-built ICS integrations are available from Oracle for certain objects and can be downloaded from My Oracle Support. Contact Oracle Support to download the pre-built integrations and the documentation that comes along with it.   

    The pre-built integration provides out of the box standard integration for the following –

    •     Integrate Account and Contacts Objects from Sales Cloud to Service Cloud

    OSC_SVC_integrations

    •    Integrate Organization and Contact objects from Service Cloud to Sales Cloud

    SVC_OSC_integrations
    The pre-built integration is built using ICS and provides a few standard field mappings. It can serve as a template and users can update any custom field mappings as needed.
    The ICS pre-built integrations also serve as a reference for building other custom integrations between OSC and OSvC using ICS. ICS integrations can be built to integrate more objects, such as the Partner and Opportunity objects from OSC. Similarly, flows can be created to integrate the Asset and Incident objects from OSvC. Refer to the Sales Cloud Adapter documentation and the OSvC Adapter documentation here for capabilities that can be used to build custom integrations.

     

    ICS Credential in Sales Cloud

    One issue that users may face after following the steps in the pre-built integrations document and activating the ICS integrations is that Account and Contact subscriptions do not flow from OSC to ICS.
    This is usually due to issues with creating the ICS credentials in OSC.
    Note that a csfKey entry in the Sales Cloud infrastructure stores the ICS credentials used by Sales Cloud. This key is used to connect to ICS and invoke the subscription-based integrations at runtime.

    Refer to this excellent blog post from my colleague Naveen Nahata, which gives simple steps to create the csfKey. The SOA Composer page where the csf key and values are updated is shown below.

    001_CSF_Key

    Note that OSC 'R12' and 'R11' customers can now self-create the csfKey on the SOA Composer App using the steps from Naveen's blog above.
    R10 customers, however, should create a support SR for the csfKey creation. Refer to the steps mentioned in the implementation guide document within the R10 pre-built integration download package.

    Invalid Field Errors in OSvC

    Further, when testing the integration of Contact or Account from OSC to OSvC, the ICS instances could go to a failed state. ICS may show the instance in a failed state as shown below.

    OSC_SVC_ACCT_Created_Error_2
    Tracking the failed instance further may show an error message as seen below

     

    ErrorMessage
    If the OSC_SVC_ACCOUNT_CREATED integration is 'TRACE ENABLED', then the Activity Stream/Diagnostic log file can be downloaded from ICS to further inspect the message payloads flowing through the integration instance.
    Searching the logs for request/response message payloads using the failed ICS instance ID may reveal that the issue is not really at the createOriginalSystemReference stage of the flow, but at the BatchResponse stage from Service Cloud.

     Error:  Invalid Field While processing Organization->ExternalReference(string)

    The response payload from OSvC will look as below

    <nstrgmpr:Create>
    <nstrgmpr:RequestErrorFault xmlns:nstrgmpr="urn:messages.ws.rightnow.com/v1_3">
    <n1:exceptionCode xmlns:nstrgmpr="http://xmlns.oracle.com/cloud/adapter/rightnow/OrganizationCreate_REQUEST/types">INVALID_FIELD</n1:exceptionCode>
    <n1:exceptionMessage xmlns:nstrgmpr="http://xmlns.oracle.com/cloud/adapter/rightnow/OrganizationCreate_REQUEST/types">Invalid Field While processing Organization-&gt;ExternalReference(string).</n1:exceptionMessage>
    </nstrgmpr:RequestErrorFault>
    </nstrgmpr:Create>

    Solution:

    Ensure that the credentials specified in the EVENT_NOTIFICATION_MAPI_USERNAME and EVENT_NOTIFICATION_MAPI_PASSWD settings in OSvC do not refer to a 'real' OSvC user. OSvC user credentials may not have the rights to update External Reference fields. It is important that a dummy username/password is created in the EVENT_NOTIFICATION_MAPI_* fields in OSvC. And remember to use this credential when configuring the OSvC connection in ICS.

    ICS Credential in Service Cloud

    Another crucial part of the OSvC Configuration is setting the Credentials to use for Outgoing Requests from OSvC to ICS. This is done by setting the EVENT_NOTIFICATION_SUBSCRIBER_USERNAME and EVENT_NOTIFICATION_SUBSCRIBER_PASSWD  parameters in OSvC. This credential is used by OSvC to connect and execute ICS integrations and must point to a ‘real’ user on ICS. This user should have the “Integration Cloud Service Runtime Role” granted to it.

    References:

    Using Event Handling Framework for Outbound Integration of Oracle Sales Cloud using Integration Cloud Service
    Service Cloud
    Sales Cloud

     

    Integrating Oracle GoldenGate Cloud Service (GGCS) with Oracle Business Intelligence Cloud Service (BICS)


    Introduction

    This article provides an overview of how to integrate Oracle GoldenGate Cloud Service (GGCS) with Oracle Business Intelligence Cloud Service (BICS) in order to populate or load On-Premises data into BICS. Both GGCS and BICS are Platform as a Service (PaaS) offerings that run in the Oracle Public Cloud (OPC).

    For GGCS to be integrated with BICS, the following prerequisites must be met:

    • BICS has to be provisioned with Database Cloud Service (DBCS), not Schema as a Service, as its data repository
    • GGCS has to be provisioned and attached to the DBCS used by BICS as its data repository
    • The DBCS used by BICS for its data repository and GGCS must be in the same domain

    The high level steps for this On-Premises data with GGCS & BICS data integration are as follows:

    • Configure and Start GGCS Oracle GoldenGate Manager on the OPC side
    • Configure and Start SSH proxy server process on the On-Premises
    • Configure and Start On-Premises OGG Extract process for the tables to be moved to BICS DBCS
    • Configure and Start On-Premises OGG Extract Data Pump process
    • Configure and Start GGCS Replicat process on the OPC side to deliver data into BICS database

    The following assumptions have been made during the writing of this article:

    • The reader has a general understanding of Windows and Unix platforms.
    • The reader has basic knowledge of Oracle GoldenGate products and concepts.
    • The reader has a general understanding of Cloud Computing Principles
    • The reader has basic knowledge of Oracle Cloud Services
    • The reader has basic knowledge of Oracle GoldenGate Cloud Service (GGCS)
    • The reader has basic knowledge of Oracle Business Intelligence Cloud Service (BICS)
    • The reader has a general understanding of Network Computing Principles

    Main Article

    GoldenGate Cloud Service (GGCS)

    The GoldenGate Cloud Service (GGCS) is a cloud-based real-time data integration and replication service, which provides seamless and easy data movement from various On-Premises relational databases to databases in the cloud with sub-second latency, while maintaining data consistency and offering fault tolerance and resiliency.

    Figure 1: GoldenGate Cloud Service (GGCS) Architecture Diagram

    ggcs_architecture_01

    Business Intelligence Cloud Service (BICS)

    Oracle Business Intelligence Cloud Service (BICS) is a robust platform designed for customers who want to simplify the creation, management, and deployment of analyses through interactive visualizations, data model designs, reports and dashboards. It extends customer analytics by enhancing data while ensuring consistency and maintaining governance through standard definitions, advanced calculations and predictive analytical functions.

    Figure 2: On-Premises to Business Intelligence Cloud Service (BICS) Architecture Diagram

    GGCS_BICS_Architecture

    As illustrated in figure 2, there are various ways and tools to integrate or move data between On-Premises systems and Business Intelligence in the cloud; Oracle Data Integrator (ODI), Data Sync for BICS, direct upload via secured file transfer, and Remote Data Connector (RDC) are just some of the options for accessing On-Premises data and moving it into the BICS platform in the cloud.

    For near real-time integration of On-Premises data into Business Intelligence Cloud Service (BICS), the GoldenGate replication platform is the tool to use, and this article presents an overview of how to configure it to load data from On-Premises to BICS via GoldenGate Cloud Service (GGCS).

     

    Oracle GoldenGate Replication

    The high level steps for GoldenGate replication between On-Premises (Source) data with BICS (Target) data via GGCS are as follows:

    • Configure and Start GGCS Oracle GoldenGate Manager on the OPC side
    • Configure and Start SSH proxy server process on the On-Premises
    • Configure and Start On-Premises OGG Extract process for the tables to be moved to BICS DBCS
    • Configure and Start On-Premises OGG Extract Data Pump process
    • Configure and Start GGCS Replicat process on the OPC side to deliver data into BICS database

    GGCS Oracle GoldenGate Manager

    To start configuring Oracle GoldenGate on the GGCS instance, the manager process must be running. Manager is the controller process that instantiates the other Oracle GoldenGate processes such as Extract, Extract Data Pump, Collector and Replicat processes.

    Connect to GGCS Instance through ssh and start the Manager process via the GoldenGate Software Command Interface (GGSCI).

    [oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@mp-ggcs-bics-01

    [opc@bics-gg-ggcs-1 ~]$ sudo su - oracle
    [oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

    Note: By default, "opc" is the only user allowed to ssh to the GGCS instance. We need to switch to the "oracle" user via the "su" command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and is the directory where GoldenGate was installed.

    [oracle@bics-gg-ggcs-1 gghome]$ ggsci

    Oracle GoldenGate Command Interpreter for Oracle
    Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
    Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
    Operating system character set identified as UTF-8.
    Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

    GGSCI (bics-gg-ggcs-1) 1> start mgr

    Manager started.

    GGSCI (bics-gg-ggcs-1) 2> info mgr

    Manager is running (IP port bics-gg-ggcs-1.7777, Process ID 79806).

    Important Note: By default, GoldenGate processes don't accept any remote connections. To enable connections from other hosts via the SSH proxy, we need to add an ACCESS RULE to the Manager parameter file (MGR.prm) to allow connectivity through the public interface of the GGCS instance.

    Here’s the MGR.prm file used in this example:

    --###############################################################
    --## MGR.prm
    --## Manager Parameter Template
    -- Manager port number
    -- PORT <port number>
    PORT 7777
    -- For allocate dynamicportlist. Here the range is starting from
    -- port n1 through n2.
    Dynamicportlist 7740-7760
    -- Enable secrule for collector
    ACCESSRULE, PROG COLLECTOR, IPADDR 129.145.1.180, ALLOW
    -- Purge extract trail files
    PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24
    -- Start one or more Extract and Replicat processes automatically
    -- after they fail. AUTORESTART provides fault tolerance when
    -- something temporary interferes with a process, such as
    -- intermittent network outages or programs that interrupt access
    -- to transaction logs.
    -- AUTORESTART ER *, RETRIES <x>, WAITMINUTES <y>, RESETMINUTES <z>
    -- This is to specify a lag threshold that is considered
    -- critical, and to force a warning message to the error log.
    -- Lagreport parameter specifies the interval at which manager
    -- checks for extract / replicat lag.
    --LAGREPORTMINUTES <x>
    --LAGCRITICALMINUTES <y>
    --Reports down processes
    --DOWNREPORTMINUTES <n>
    --DOWNCRITICAL

    Start SSH Proxy Server on the On-Premises

    By default, the only access allowed to GGCS is via ssh. To allow the GoldenGate processes running On-Premises to communicate with the GoldenGate processes on the GGCS instance, we need to run an SSH proxy server on the On-Premises side.

    Start the SSH proxy via the following ssh command:

    [oracle@ogg-wkshp db_1]$ ssh -i keys/mp_opc_ssh_key -v -N -f -D 127.0.0.1:8888 opc@129.145.1.180 > ./dirrpt/socks.log 2>&1

    Command Syntax: ssh -i <private_key_file> -v -N -f -D <listening_ip_address>:<listening_tcp_port_address> <user>@<remote_host> > <output_file> 2>&1

    SSH Command Options Explained:

    1. -i = Private Key file
    2. -v = Verbose Mode
    3. -N = Do not execute remote command; mainly used for port forwarding
    4. -f = Run ssh process in the background
    5. -D Specifies to run as local dynamic application level forwarding; act as a SOCKS proxy server on a specified interface and port
    6. listening_ip_address = Host Name or Host IP Address where this SOCKS proxy will listen (127.0.0.1 is the loopback address)
    7. listening_tcp_port_address = TCP/IP Port Number to listen on
    8. 2>&1 = Redirect Stdout and Stderr to the output file
    Verify that the SSH Socks Proxy server has started successfully:

    Check the socks proxy output file via the "cat" utility and look for the messages "Local connections to … forwarded to remote address …" and "Local forwarding listening on … port …" (see the log below). Make sure it is connected to the GGCS instance and listening on the right IP and port address.

    [oracle@ogg-wkshp db_1]$ cat ./dirrpt/socks.log

    OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug1: Connecting to 129.145.1.180 [129.145.1.180] port 22.
    debug1: Connection established.
    debug1: identity file keys/mp_opc_ssh_key type 1
    debug1: loaded 1 keys
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
    debug1: match: OpenSSH_5.3 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_4.3
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host ‘129.145.1.180’ is known and matches the RSA host key.

    debug1: Authentication succeeded (publickey).
    debug1: Local connections to 127.0.0.1:8888 forwarded to remote address socks:0
    debug1: Local forwarding listening on 127.0.0.1 port 8888.
    debug1: channel 0: new [port listener]
    debug1: Entering interactive session.

    Configure On-Premises Oracle GoldenGate

    For our test, we shall use the following tables for source (On-Premises) and target (GGCS delivering to BICS DBCS):

    CREATE TABLE ACCTN
    (
    ACCOUNT_NO NUMBER (10,0) NOT NULL
    , BALANCE NUMBER (8,2) NULL
    , PREVIOUS_BAL NUMBER (8,2) NULL
    , LAST_CREDIT_AMT NUMBER (8,2) NULL
    , LAST_DEBIT_AMT NUMBER (8,2) NULL
    , LAST_CREDIT_TS TIMESTAMP NULL
    , LAST_DEBIT_TS TIMESTAMP NULL
    , ACCOUNT_BRANCH NUMBER (10,0) NULL
    , CONSTRAINT PK_ACCTN
    PRIMARY KEY
    (
    ACCOUNT_NO
    )
    USING INDEX
    )
    ;
    CREATE TABLE ACCTS
    (
    ACCOUNT_NO NUMBER (10,0) NOT NULL
    , FIRST_NAME VARCHAR2 (25) NULL
    , LAST_NAME VARCHAR2 (25) NULL
    , ADDRESS_1 VARCHAR2 (25) NULL
    , ADDRESS_2 VARCHAR2 (25) NULL
    , CITY VARCHAR2 (20) NULL
    , STATE VARCHAR2 (2) NULL
    , ZIP_CODE NUMBER (10,0) NULL
    , CUSTOMER_SINCE DATE NULL
    , COMMENTS VARCHAR2 (30) NULL
    , CONSTRAINT PK_ACCTS
    PRIMARY KEY
    (
    ACCOUNT_NO
    )
    USING INDEX
    )
    ;
    CREATE TABLE BRANCH
    (
    BRANCH_NO NUMBER (10,0) NOT NULL
    , OPENING_BALANCE NUMBER (8,2) NULL
    , CURRENT_BALANCE NUMBER (8,2) NULL
    , CREDITS NUMBER (8,2) NULL
    , DEBITS NUMBER (8,2) NULL
    , TOTAL_ACCTS NUMBER (10,0) NULL
    , ADDRESS_1 VARCHAR2 (25) NULL
    , ADDRESS_2 VARCHAR2 (25) NULL
    , CITY VARCHAR2 (20) NULL
    , STATE VARCHAR2 (2) NULL
    , ZIP_CODE NUMBER (10,0) NULL
    , CONSTRAINT PK_BRANCH
    PRIMARY KEY
    (
    BRANCH_NO
    )
    USING INDEX
    )
    ;
    CREATE TABLE TELLER
    (
    TELLER_NO NUMBER (10,0) NOT NULL
    , BRANCH_NO NUMBER (10,0) NOT NULL
    , OPENING_BALANCE NUMBER (8,2) NULL
    , CURRENT_BALANCE NUMBER (8,2) NULL
    , CREDITS NUMBER (8,2) NULL
    , DEBITS NUMBER (8,2) NULL
    , CONSTRAINT PK_TELLER
    PRIMARY KEY
    (
    TELLER_NO
    )
    USING INDEX
    )
    ;

    Start On-Premises Oracle GoldenGate Manager

    [oracle@ogg-wkshp db_1]$ ggsci

    Oracle GoldenGate Command Interpreter for Oracle
    Version 12.1.2.1.10 21604177 23004694_FBO
    Linux, x64, 64bit (optimized), Oracle 12c on Apr 29 2016 01:06:03
    Operating system character set identified as UTF-8.
    Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

    GGSCI (ogg-wkshp.us.oracle.com) 1> start mgr

    Manager started.

    GGSCI (ogg-wkshp.us.oracle.com) 2> info mgr

    Manager is running (IP port ogg-wkshp.us.oracle.com.7809, Process ID 8998).

    Configure and Start Oracle GoldenGate Extract Online Change Capture process 

    Before we can configure the Oracle GoldenGate Extract Online Change Capture process, we need to enable supplemental logging for the schema/tables we need to capture on the source database via the GGSCI utility.

    Enable Table Supplemental Logging via GGSCI:

    GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

    Successfully logged into database.

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> add schematrandata tpcadb

    2017-02-04 11:59:20 INFO OGG-01788 SCHEMATRANDATA has been added on schema tpcadb.
    2017-02-04 11:59:20 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema tpcadb.

    Note: The GGSCI "dblogin" command logs the GGSCI session into the database. Your GGSCI session needs to be connected to the database before you can execute the "add schematrandata" command.

    Create an Online Change Data Capture Extract Group (Process)

    For this test, we will name our Online Change Data Capture group process ETPCADB.

    -> Register the Extract group with the database via GGSCI:

    GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

    Successfully logged into database.

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> register extract etpcadb database

    Extract ETPCADB successfully registered with database at SCN 3112244.

    -> Create/Add the Extract Group in GoldenGate via GGSCI:

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 3> add extract etpcadb, integrated, tranlog, begin now

    EXTRACT added.

    Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

    GGSCI (ogg-wkshp.us.oracle.com) 1> edit param etpcadb

    Here’s the Online Change Capture Parameter (etpcadb.prm) file used in this example:

    EXTRACT ETPCADB
    userid tpcadb, password tpcadb
    EXTTRAIL ./dirdat/ea
    discardfile ./dirrpt/etpcadb.dsc, append
    TABLE TPCADB.ACCTN;
    TABLE TPCADB.ACCTS;
    TABLE TPCADB.BRANCH;
    TABLE TPCADB.TELLER;

    Add a local extract trail to the Online Change Data Capture  Extract Group via GGSCI

    GGSCI (ogg-wkshp.us.oracle.com) 1> add exttrail ./dirdat/ea, extract etpcadb

    EXTTRAIL added.

    Start the Online Change Data Capture  Extract Group via GGSCI

    GGSCI (ogg-wkshp.us.oracle.com) 2> start extract etpcadb

    Sending START request to MANAGER …
    EXTRACT ETPCADB starting

    Check the Status of Online Change Data Capture  Extract Group via GGSCI

    GGSCI (ogg-wkshp.us.oracle.com) 4> dblogin userid tpcadb password tpcadb

    Successfully logged into database.

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 5> info extract etpcadb detail

    EXTRACT ETPCADB Last Started 2017-02-04 12:43 Status RUNNING
    Checkpoint Lag 00:00:03 (updated 00:00:07 ago)
    Process ID 18259
    Log Read Checkpoint Oracle Integrated Redo Logs
    2017-02-04 12:50:52
    SCN 0.3135902 (3135902)
    Target Extract Trails:
    Trail Name Seqno RBA Max MB Trail Type
    ./dirdat/ea 0 1418 100 EXTTRAIL
    Integrated Extract outbound server first scn: 0.3112244 (3112244)
    Integrated Extract outbound server filtering start scn: 0.3112244 (3112244)
    Extract Source Begin End
    Not Available 2017-02-04 12:39 2017-02-04 12:50
    Not Available * Initialized * 2017-02-04 12:39
    Not Available * Initialized * 2017-02-04 12:39
    Current directory /u01/app/oracle/product/12cOGG/v1212110
    Report file /u01/app/oracle/product/12cOGG/v1212110/dirrpt/ETPCADB.rpt
    Parameter file /u01/app/oracle/product/12cOGG/v1212110/dirprm/etpcadb.prm
    Checkpoint file /u01/app/oracle/product/12cOGG/v1212110/dirchk/ETPCADB.cpe
    Process file /u01/app/oracle/product/12cOGG/v1212110/dirpcs/ETPCADB.pce
    Error log /u01/app/oracle/product/12cOGG/v1212110/ggserr.log

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 6> info all

    Program Status Group Lag at Chkpt Time Since Chkpt
    MANAGER RUNNING
    EXTRACT RUNNING ETPCADB 00:00:09 00:00:03

    Configure and Start Oracle GoldenGate Extract Data Pump process 

    For this test, we will name our GoldenGate Extract Data Pump group process PTPCADB.

    Create the Extract Data Pump Group (Process) via GGSCI

    The Extract Data Pump group process will read the trail created by the Online Change Data Capture Extract (ETPCADB) process and sends the data to the GoldenGate process running on the GGCS instance via the SSH Socks Proxy server.

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 7> add extract ptpcadb, exttrailsource ./dirdat/ea

    EXTRACT added.

    Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 8> edit param ptpcadb

    Here’s the Extract Data Pump Parameter (ptpcadb.prm) file used in this example:

    EXTRACT PTPCADB
    RMTHOST 129.145.1.180, MGRPORT 7777, SOCKSPROXY 127.0.0.1:8888
    discardfile ./dirrpt/ptpcadb.dsc, append
    rmttrail ./dirdat/pa
    passthru
    table TPCADB.ACCTN;
    table TPCADB.ACCTS;
    table TPCADB.BRANCH;
    table TPCADB.TELLER;

    Add the remote trail to the Extract Data Pump Group via GGSCI

    The remote trail is the output trail file on the remote side (the GGCS instance). The Extract Data Pump writes data to it, and the Replicat Delivery process reads it and applies the changes to the target database, in this case the DBCS used as the data repository for BICS.

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 9> add rmttrail ./dirdat/pa, extract ptpcadb

    RMTTRAIL added.

    Start the Extract Data Pump Group via GGSCI

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 10> start extract ptpcadb

    Sending START request to MANAGER …
    EXTRACT PTPCADB starting

    Check the Status of Extract Data Pump Group via GGSCI 

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 11> info extract ptpcadb detail

    EXTRACT PTPCADB Last Started 2017-02-04 13:48 Status RUNNING
    Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
    Process ID 29285
    Log Read Checkpoint File ./dirdat/ea000000
    First Record RBA 0
    Target Extract Trails:
    Trail Name Seqno RBA Max MB Trail Type
    ./dirdat/pa 0 0 100 RMTTRAIL
    Extract Source Begin End
    ./dirdat/ea000000 * Initialized * First Record
    ./dirdat/ea000000 * Initialized * First Record
    Current directory /u01/app/oracle/product/12cOGG/v1212110
    Report file /u01/app/oracle/product/12cOGG/v1212110/dirrpt/PTPCADB.rpt
    Parameter file /u01/app/oracle/product/12cOGG/v1212110/dirprm/ptpcadb.prm
    Checkpoint file /u01/app/oracle/product/12cOGG/v1212110/dirchk/PTPCADB.cpe
    Process file /u01/app/oracle/product/12cOGG/v1212110/dirpcs/PTPCADB.pce
    Error log /u01/app/oracle/product/12cOGG/v1212110/ggserr.log

    GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 13> info all

    Program Status Group Lag at Chkpt Time Since Chkpt
    MANAGER RUNNING
    EXTRACT RUNNING ETPCADB 00:00:10 00:00:08
    EXTRACT RUNNING PTPCADB 00:00:00 00:00:02

    Configure and Start GGCS Oracle GoldenGate Delivery Process

    Connect to GGCS Instance through ssh and go GoldenGate Software Command Interface (GGSCI) utility to configure GoldenGate Delivery process.

    [oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@mp-ggcs-bics-01

    [opc@bics-gg-ggcs-1 ~]$ sudo su - oracle
    [oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

    Note: By default, "opc" is the only user allowed to ssh to the GGCS instance. We need to switch to the "oracle" user via the "su" command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and is the directory where GoldenGate was installed.

    [oracle@bics-gg-ggcs-1 gghome]$ ggsci

    Oracle GoldenGate Command Interpreter for Oracle
    Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
    Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
    Operating system character set identified as UTF-8.
    Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

    Configure GGCS Oracle GoldenGate Replicat Online Delivery process

    Configure the Replicat Online Delivery group that reads the trail file the Data Pump writes to and delivers the changes into the BICS DBCS.

    Before configuring the delivery group, make sure that the GGSCI session is connected to the database via the GGSCI "dblogin" command.

    GGSCI (bics-gg-ggcs-1) 1> dblogin useridalias ggcsuser_alias

    Successfully logged into database BICSPDB1.

    Create/Add the Replicat Delivery group; in this example we will name our Replicat Delivery group RTPCADB.

    GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 2> add replicat rtpcadb, integrated, exttrail ./dirdat/pa

    REPLICAT (Integrated) added.

    Note: To edit/create the Replicat Delivery Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

    GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> edit param rtpcadb

    Here’s the GGCS Replicat Online Delivery Parameter (rtpcadb.prm) file used in this example:

    REPLICAT RTPCADB
    useridalias ggcsuser_alias
    –Integrated parameter
    DBOPTIONS INTEGRATEDPARAMS (parallelism 2)
    DISCARDFILE ./dirrpt/rtpcadb.dsc, APPEND Megabytes 25
    ASSUMETARGETDEFS
    MAP TPCADB.ACCTN, TARGET GGCSBICS.ACCTN;
    MAP TPCADB.ACCTS, TARGET GGCSBICS.ACCTS;
    MAP TPCADB.BRANCH, TARGET GGCSBICS.BRANCH;
    MAP TPCADB.TELLER, TARGET GGCSBICS.TELLER;

    Start the GGCS Replicat Online Delivery process via GGCSI 

    GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> start replicat rtpcadb

    Sending START request to MANAGER …
    REPLICAT RTPCADB starting

    Check the Status of GGCS Replicat Online Delivery process via GGSCI 

    GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 4> info replicat rtpcadb detail

    REPLICAT RTPCADB Last Started 2017-02-04 17:12 Status RUNNING
    INTEGRATED
    Checkpoint Lag 00:00:00 (updated 00:00:45 ago)
    Process ID 80936
    Log Read Checkpoint File ./dirdat/pa000000000
    First Record RBA 0
    INTEGRATED Replicat
    DBLOGIN Provided, no inbound server is defined
    Inbound server status may be innacurate if the specified DBLOGIN connects to a different PDB than the one Replicat connects to.
    Current Log BSN value: (no data)
    Integrated Replicat low watermark: (no data)
    (All source transactions prior to this scn have been applied)
    Integrated Replicat high watermark: (no data)
    (Some source transactions between this scn and the low watermark may have been applied)
    Extract Source Begin End
    ./dirdat/pa000000000 * Initialized * First Record
    ./dirdat/pa000000000 * Initialized * First Record
    Current directory /u02/data/gghome
    Report file /u02/data/gghome/dirrpt/RTPCADB.rpt
    Parameter file /u02/data/gghome/dirprm/rtpcadb.prm
    Checkpoint file /u02/data/gghome/dirchk/RTPCADB.cpr
    Process file /u02/data/gghome/dirpcs/RTPCADB.pcr
    Error log /u02/data/gghome/ggserr.log

    At this juncture, you have a complete replication platform that integrates data between On-Premises and the BICS DBCS via GGCS; any changes made to the source tables On-Premises will be moved to the BICS Database Cloud Service through this replication platform.
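    A quick way to verify the end-to-end flow is to change one of the source tables and then query the target; the values below are hypothetical.

    -- On the On-Premises source database (schema TPCADB):
    INSERT INTO TPCADB.TELLER (TELLER_NO, BRANCH_NO, OPENING_BALANCE, CURRENT_BALANCE, CREDITS, DEBITS)
    VALUES (9001, 100, 1000, 1000, 0, 0);
    COMMIT;

    -- A few moments later, on the BICS DBCS target (schema GGCSBICS):
    SELECT TELLER_NO, BRANCH_NO, CURRENT_BALANCE
    FROM GGCSBICS.TELLER
    WHERE TELLER_NO = 9001;

    The GGSCI command "stats replicat rtpcadb" can also be run on the GGCS instance to confirm that the operations were applied.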

    Summary

    This article walked through the steps to configure Oracle GoldenGate to connect to and extract data from On-Premises and deliver it to Business Intelligence Cloud Service (BICS), using Database Cloud Service (DBCS) as its data repository, via GoldenGate Cloud Service (GGCS).

    For further information on other ways to integrate or move data from On-Premises to BICS, check the following A-Team articles:

     


    IDCS and Weblogic Federation with Virtual Users and Groups


    Introduction

    Federation is a well-known pattern and has been discussed at length on this blog. Almost every vendor or cloud provider out there supports Federation and it’s been around for quite some time now.

    In this blog post, I will talk about Federation again, but this time in combination with Weblogic’s Virtual Users and Groups.

    What that means, in practical terms, is that users and groups won’t have to be synchronized between the Identity Provider (Oracle Identity Cloud Service) and the Service Provider (Weblogic).

    This approach presents a great advantage when integrating web applications running in Weblogic with Oracle Identity Cloud Service (IDCS), since we don't have to worry about keeping IDs in sync, and administrators can concentrate user/group management in one single place: IDCS.

    1

    In the following topics we will demonstrate how to implement such use case, please read on…

    Configuration

    Configure Weblogic as the Service Provider (SP).

    Go to “Security Realms > Providers > Authentication”.

    Create a new SAML2IdentityAsserter provider.

    2

    Go to “Security Realms > Providers > Authentication”.

    Create a new SAMLAuthenticator.

    3

    Reorder the SAMLAuthenticator and SAML2IdentityAsserter. Move them to the top, as shown below.

    4

    Click on SAMLAuthenticator, and set its control flag to “SUFFICIENT”.

    5

    Click on the DefaultAuthenticator and set its Control Flag to “OPTIONAL”.

    6

    Restart all servers in the domain.

    Repeat the steps below for each of the managed servers hosting the applications that will be federated with IDCS.

    Go to Servers > MANAGED_SERVER > Configuration > Federation Services > SAML 2.0 Service Provider.

    Enter the following:

    Enabled checked
    Preferred Binding POST
    Default URL* https://HOST:PORT/FederationSampleApp

    The default URL is the landing page of the Federated application.

    HOST:PORT is the host and port of the managed server running the sample application.

    The configuration should look like the picture below.

    Click Save.

    7

    Go to Servers > MANAGED_SERVER > Configuration > Federation Services > SAML 2.0 General.

    Fill the information as the picture below.

    The field “Published Site URL” must be in the format https://HOST:PORT/saml2.

    The field “Entity ID” is the unique identifier of the Service Provider, it will be used later in the IdP configuration.

    HOST:PORT is the host and port of the managed server running the sample application.

    8

    Configure IDCS as Identity Provider (IdP)

    Login to IDCS Admin Console.

    Go to Applications and click “Add”.

    From the list, choose “SAML Application”.

    9

    Enter the following:

    Name Federation Sample Application
    Description Sample application to showcase WLS Virtual Users/Groups.
    Application URL https://HOST:PORT/FederationSampleApp

    HOST:PORT is the host and port of the managed server running the sample application.

    For Application URL use the main page on the application deployed in WLS. Click “Next”.

    30
    In the General panel, enter the following:

    Entity ID FederationDomain
    Assertion Consumer URL https://HOST:PORT/saml2/sp/acs/post
    NameID Format Email address
    NameID Value Primary Email

    Entity ID must match the value used in the Service Provider configuration.

    HOST:PORT is the host and port of the managed server running the sample application.

    11

    In the Advanced Settings panel, enter the following:

    Signed SSO Assertion
    Include Signing Certificate in Signature checked
    Signature Hashing Algorithm SHA-256
    Enable Single Logout checked
    Logout Binding POST
    Single Logout URL https://HOST:PORT/FederationSampleApp/logout
    Logout Response URL https://HOST:PORT/FederationSampleApp

    Single Logout URL is the logout URL of the sample application.

    HOST:PORT is the host and port of the managed server running the sample application.

    12

    In the Attribute Configuration section, add one Group Attribute, with the following information:

    Name Groups
    Format Basic
    Condition All Groups

    Name must be “Groups” and format must be “Basic” so the SAML Identity Asserter can pick up the groups attributes when the SAML Assertion is posted back to WLS.

    13

    Click Finish. And Activate.

    29

    Open the application page, go to “SSO Configuration” tab and click “Download IDCS Metadata” and save the XML file (IDCSMetadata.xml).

    15

    Assign users to your application in IDCS

    Users need to be assigned to applications in the IdP (IDCS) before they can authenticate to those apps.

    We do it by assigning individual users to the application in the “Users” tab.

    Open the application page and go to “Users” tab.

    Click in “Assign Users”.

    16

    Select the users that should have access to the application.

    17

    Configure the Identity Provider Partner in WLS

    Upload the “IDCSMetadata.xml” file to the <DOMAIN_HOME> folder where the WLS Managed Server is running, for example: “/u01/oracle/domains/FederationDomain”

    Login to WLS Admin Console, go to Security Realms > Providers > Authentication and click on the SAML2IdentityAsserter.

    18

    Go to the Management tab and click “New”, and select “New Web Single Sign-On Identity Provider Partner”.

    19

    Enter the following information for the IdP Partner:

    Name: IDCS-IdP

    Choose the IDCSMetadata.xml and click “OK” button.

    20

    Click on the “IDCS-IdP” partner from the Identity Provider Partners list.

    21

    Fill in the following information:

    Enabled checked
    Virtual User checked
    Redirect URIs /FederationSampleApp/protected/*
    Process Attributes checked

    The "Redirect URIs" are all the URIs that should be protected by the SAML SSO policy; that is, every URI that would trigger the SAML SSO flow and/or require authorization.

    22

    Click “Save”.

    This is the key point of this use case: by enabling "Virtual User" and "Process Attributes" we allow users that are only defined in the IdP (IDCS) to log in to our application.

    Testing the setup

    The sample application (FederationSampleApp) deployment descriptor is configured to allow access to resources under <APP_CONTEXT_PATH>/protected/* to users that belong to “FederationSampleAppMembers” group.

    You can modify the deployment descriptor to add the groups you already have created in IDCS or you can create a new group called “FederationSampleAppMembers”.

    Web.xml:

    <web-resource-collection>

    <web-resource-name>ProtectedPages</web-resource-name>

    <url-pattern>/protected/*</url-pattern>

    </web-resource-collection>

    Weblogic.xml:

    <security-role-assignment>

    <role-name>allowedGroups</role-name>

    <principal-name>FederationSampleAppMembers</principal-name>

    </security-role-assignment>

    To create a new Group in IDCS called “FederationSampleAppMembers”, log in to IDCS admin console and go to Groups. Click Add, and provide the group name.

    23

    Assign users to the “FederationSampleAppMembers group”. These users will have access to the sample application deployed in Weblogic.

    24

    Deploy the sample application in your Weblogic and target it to the managed server(s) we configured to Federate with IDCS.

    Open a browser and go to https://HOST:PORT/FederationSampleApp/

    You should see the sample application main page, which is not protected by any security constraint.

    25

    If you click in any of the links, the SAML SSO flow is triggered, and you will be redirected to the IdP (IDCS) for authentication.

    26

    Once you provide your credentials, the IdP (IDCS) will validate them and create a SAMLResponse containing a SAML Assertion that will be posted back to the Service Provider (WLS).

    We can inspect the SAMLResponse that was posted to our SP (WLS), by using Chrome Dev Tools.

    27

    Decoding the SAML Assertion we can see that the interesting pieces are:

    The authenticated Subject

    <saml:Subject>

    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">paulo.pereira@oracle.com</saml:NameID>

    <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">

    <saml:SubjectConfirmationData InResponseTo="_0x81d7ccb7b40001c2b13366d827ab79bf" NotOnOrAfter="2017-01-11T23:56:58Z" Recipient="https://HOST:PORT/saml2/sp/acs/post"/>

    </saml:SubjectConfirmation>

    </saml:Subject>

    Assertion Attributes (additional attributes we configured to include group membership).

    <saml:Attribute Name="Groups" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">

    <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">SalesMembers</saml:AttributeValue>

    <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">FederationSampleAppMembers</saml:AttributeValue>

    <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">Teste1</saml:AttributeValue>

    </saml:Attribute>

    If we navigate to the Principals page, in the sample application, we see that Weblogic created the principals that correspond to our Assertion's authenticated subject and the groups contained in the Assertion's additional attributes.

    28

    The magic here happens because we configured our Identity Asserter (SAML2IdentityAsserter) to use Virtual Users and process SAML Attributes. That means the Identity Asserter, when working with a SAML Authenticator (the SAML Authenticator must run with its control flag set to "SUFFICIENT" and must be invoked before other authenticators), will create Principals that do not correspond to any user or group in the configured ID store in WLS.

    For more information on the SAML Authentication Provider and SAML Identity Asserter, consult the documentation here.

    The other key piece of the solution happens on the IDCS side. We configured our application to generate a SAML assertion that includes the user’s groups as additional attributes of the assertion.

    That way, we can “propagate” down to WLS the authenticated user and his group membership.

    Conclusion

    Configuring Federation between IDCS and Weblogic server with virtual users and groups makes it much easier for applications to integrate with IDCS as a single source of Identity administration. The approach discussed here has the advantage of eliminating the user/groups synch between Service Providers (applications) and the IdP (IDCS). Also, legacy applications or new ones that use standard Java Container Security can leverage this use case with minimal changes – if any at all – since the authorization is already defined in the application’s deployment descriptors.

    Applications that need to obtain additional user profile information can also be registered with IDCS as OAuth clients and consume IDCS APIs to obtain the logged in user information, but that is material for another blog post…

    ICS Connectivity Agent – Update Credentials


    When installing the Connectivity Agent, there are several mandatory command-line arguments that include a valid ICS username (-u=[username]) and password (-p=[password]). These arguments are used to verify connectivity with ICS during installation and are also stored (agent credentials store) for later use by the agent server. The purpose of storing them is to allow the running agent to make a heartbeat call to ICS. This heartbeat is used to provide status in the ICS console regarding the state of the Connectivity Agent. This blog will detail some situations/behaviors relating to the heartbeat that cause confusion when the ICS console contradicts observations on the Connectivity Agent machine.

    Confusing Behaviors/Observations

    The following is a real-world series of events that occurred for an ICS subscriber. Their agent had been started and running for quite a while. The ICS console was used to monitor the health of the agent (i.e., the green icon which indicates the agent is running). Then out of the blue, the console suddenly showed the agent was down (i.e., the red icon):

    AgentCredUpdate-01

    The obvious next step was to check on the agent machine to make sure the agent was running. When looking through the standard out that was being captured, it shows that the agent was in fact still running:

    AgentCredUpdate-02

    Further investigation showed that the agent server logs did not indicate any problems. In an attempt to resolve this strange scenario, the agent server was bounced … but it failed to start with the following:

    AgentCredUpdate-03

    Although the -u and -p command-line parameters contained the correct credentials, the startAgent.sh indicated an error code of 401 (i.e., unauthorized). This error was very perplexing since the agent had been started earlier with the same command-line arguments. After leaving the agent server down for a while, another start was kicked off to demonstrate the 401 problem. Interestingly enough, this time the agent started successfully and went to a running state. However, the ICS console was still showing that the agent was down with no indication of problems on the Connectivity Agent machine. Another attempt was made to bounce the agent server and it again failed to start with a 401.

    At this point, the diagnostic logs were downloaded from the ICS console to see if there was any indication of problems on the ICS side. When analyzing the AdminServer-diagnostic.log, it showed many HTTP authentication/authorization failure messages:

    AgentCredUpdate-04

    At this point it was determined that the password for the ICS user associated with the Connectivity Agent had been changed without notifying the person responsible for managing the agent server. The series of odd behaviors were all tied to the heartbeat. When the ICS user password was changed, the running agent still had the old password. It was the repeated heartbeat calls with invalid credentials that caused the user account to be locked out in ICS. When a user account is locked, it is not accessible for approximately 30 minutes.

    This account locking scenario explained why the agent server could be started successfully and then fail with the 401 within a short period of time. When the account was not locked, the startAgent.sh script would successfully call ICS using the credentials from the command-line. Then the server would start and use the incorrect credentials from the credentials store for the heartbeat, thus locking the user account which caused the problem to repeat itself.

    The Fix

    To fix this issue, a WLST script (updateicscredentials.py) has been provided that will update the Connectivity Agent credentials store. The details on running it can be found in the comments at the top of the script:

    AgentCredUpdate-05

    When executing this script, it is important to make sure the agent server is running. Once the script is done you should see something like the following:

    AgentCredUpdate-06

    At this point, stop the agent server and wait 30 minutes to allow the user account to be unlocked before restarting the server. Everything should now be back to normal:

    AgentCredUpdate-07

    Possible Options For Less Than 30 Minute Waiting Period

    Although I have not yet had an opportunity to test the following out, in theory it should work. To avoid the 30 minute lockout period on ICS due to the Connectivity Agent heartbeat:

    1. Change the credentials on the Connectivity Agent server.
    2. Shutdown the Connectivity Agent server.
    3. Access the Oracle CLOUD My Services console and Reset Password / Unlock Account with the password just used for the agent:

    AgentCredUpdate-08

    4. Verify that the user can login to the ICS console (i.e., that the account is unlocked).
    5. Start the Connectivity Agent and allow the server to get to running state.
    6. Verify that “all is green” in the ICS console.

    Oracle GoldenGate: Network Apply to SQL Server


    Introduction

    Oracle GoldenGate (OGG) best practices dictate that the OGG Apply process run on the target database server. However, there are instances where this configuration is not practical. In this article we shall discuss a solution where OGG Apply may be configured on a mid-tier server and apply data over a network to a remote SQL Server Database.

    Main Article

    A typical Oracle GoldenGate implementation that follows best practices guidelines is depicted below.

    ogg_typical

    In this configuration an OGG Extract process captures transactions from the source database logs and writes them to a Local GoldenGate Trail. An Extract Data Pump reads from the Local Trail and transmits the data over TCP/IP to Remote GoldenGate Trails on a database server. On the target database server, an Oracle GoldenGate Replicat process reads the data sent across the network and applies the records to the target database. For Microsoft SQL Server, Replicat connects to the database through ODBC and OLE DB using the SQL Server Native Client driver, and can use Shared Memory (if running on the same host), Named Pipes, or the TCP/IP network protocol.

    In some instances it may not be practical to have the GoldenGate Replicat run on the target database server. Settings within the Microsoft SQL Server Network Connections provide a way for us to place the Replicat on a mid-tier server isolated from the target Microsoft SQL Server Database. For manageability and ease of use, this mid-tier server should be located within the same data center and network as the source database. One possible architecture for this scenario is depicted below.

    ogg_mss

    Target Database Server Modifications

    To allow the network apply of data via SQL Server Native Client, several modifications must be made to the target SQL Server instance and Windows server.

    Network Protocol Configuration Changes

    Ensure that either the TCP/IP or Named Pipes Network Protocol is enabled for the target SQL Server instance. Enable the protocol via SQL Server Configuration Manager.

    1. Start SQL Server Configuration Manager.
    – Click Start
    – Point to All Programs
    – Click Microsoft SQL Server
    – Click Configuration Tools
    – Click SQL Server Configuration Manager
    2. In SQL Server Configuration Manager, in the console pane, expand SQL Server Network Configuration.
    3. In the console pane, click Protocols for <instance_name>
    4. In the details pane, right-click TCP/IP, and then click Enable.
    5. In the console pane, click SQL Server Services.
    6. In the details pane, right-click SQL Server (<instance_name>), and then click Restart, to stop and restart the SQL Server service.

    Create the GoldenGate Database User

    For this configuration, Oracle GoldenGate requires a database user for SQL Server Authentication. Set up the database user with the privileges specified in the Fusion Middleware Installing and Configuring Oracle GoldenGate for SQL Server documentation.

    Server Changes

    Allow SQL Server connections through the Windows Firewall. One method is to add a program exception to the firewall using the Windows Firewall item in Control Panel (a command-line equivalent is sketched after these steps).

    1. On the Exceptions tab of the Windows Firewall item in Control Panel, click Add a program.
    2. Browse to the location of the instance of SQL Server that you want to allow through the firewall, for example C:\Program Files\Microsoft SQL Server\MSSQL12.<instance_name>\MSSQL\Binn, select sqlservr.exe, and then click Open.
    3. Click OK.
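
    If you prefer to script the exception rather than use the Control Panel, the same program exception can typically be added from an elevated command prompt with netsh. This is only a sketch; the rule name and instance path below are examples and should be adjusted to your installation:

    rem Run from an elevated command prompt on the target database server.
    rem The program path is an example for a default MSSQL12 instance - adjust as needed.
    netsh advfirewall firewall add rule ^
      name="SQL Server (GoldenGate network apply)" ^
      dir=in action=allow enable=yes ^
      program="C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Binn\sqlservr.exe"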

    Mid-Tier Server

    Install Oracle GoldenGate for SQL Server on the mid-tier server.

    Install SQL Server Native Client 11.0 on the mid-tier server. Create a System DSN to the target SQL Server Database instance.

    1. Select Administrative Tools from the Control Panel.
    2. Select Data Sources (ODBC).
    3. Select the System DSN tab.
    4. Click the Add button.
    5. Select the SQL Server Native Client 11.0 Driver and click the Finish button.
    6. Enter a name for this data source. This name will be used as the TARGETDB setting in Replicat.
    7. Enter the server details for the target SQL Server Database.
    8. Click the Next button.
    9. Select With SQL Server authentication and enter the GoldenGate database user login credentials.
    10. Click the Next button.
    11. Click the Next button.
    12. Click the Finish button.
    13. Click the Test Datasource button.

    If the test is successful, a connection was established between the mid-tier server and the SQL Server Database; continue setting up Oracle GoldenGate and test the architecture.
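
    With the DSN working, the Replicat on the mid-tier server references it through the TARGETDB parameter. The parameter file below is a minimal sketch only; the group name, DSN name, credentials, and MAP statements are placeholders for illustration:

    -- Replicat parameter file (e.g., dirprm/rmss.prm) on the mid-tier server
    REPLICAT rmss
    -- "mssql_target" is the System DSN created above; SQL Server Authentication credentials
    TARGETDB mssql_target, USERID gg_user, PASSWORD gg_password
    DISCARDFILE ./dirrpt/rmss.dsc, PURGE
    -- Map source objects to the remote SQL Server target (placeholders)
    MAP dbo.*, TARGET dbo.*;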

    Performance Implications

    It should be noted that data apply performance will be significantly lower than that of a best-practices architecture. Perform thorough testing before attempting to implement this architecture in a production environment.

    Security Implications

    By default, the SQL Server Native Client connection is unencrypted. To ensure your data is secure, select the Use strong encryption for data option when configuring the SQL Server Native Client DSN. This setting requires that a security certificate be provisioned for the mid-tier and database servers.

    Summary

    In this article we discussed how to connect to a remote SQL Server Database from a mid-tier server running Oracle GoldenGate Replicat. Although this capability is present with SQL Server Native Client 11, you should always follow the best practice of having Replicat run on the target database server.


    Lift and Shift to Oracle Data Integrator Cloud Service (ODICS) : Moving your Repository to the Cloud


     

    Oracle Data Integrator (ODI) is now available as a Cloud Service: Oracle Data Integrator Cloud Service (ODICS). For customers who are interested in a subscription model for their ODI installation and want to integrate and transform data in the Cloud, this is the solution.

    Customers who already have ODI developments on premises will want to migrate their existing on-premises repository to the Cloud. For these use cases, we provide here a quick solution to perform this repository migration.

    1.     Make sure you are using the correct version of ODI

    If the version you are using is older than the version used for ODICS, the first thing to do is to upgrade your on-premises repository.

    The following steps allow you to create a copy of your existing installation so that you can continue working with the existing repository. You can then upgrade and copy this repository without any impact on your production environment. When you are satisfied that the Cloud installation works as expected, you can stop the on-premises executions and switch to the Cloud environment.

    If you are using ODI as part of a BI Applications installation, do not follow the steps described here. Make sure that you follow the steps described in Oracle Support Document 1984269.1: OBIA 11g How to Migrate Configurations and Customizations from Development to a Test or a Production Environment. As of the time of writing of this post, BI Applications is NOT compatible with ODICS.

    Here are 8 steps for a quick and easy upgraded clone of your on-premises repository:

    1- Purge the logs in the repository (if you purge before you export the repository, the export and import processes will be much faster).

    2- (11g repository only) Run the RCC tool, which can be obtained from Oracle Support. Fix all errors reported by the tool.

    3- Backup the original repository (I like to use oracle expdp for that – fast, flexible, reliable)

    4- Create a brand new repository (using the RCU tool that comes with your installation of ODI) in the same version as the one you want to upgrade from (yes, FROM): this will set up the Oracle inventory tables as needed without any need to monkey around with these tables.

    5- Overwrite the freshly created repository with the backup of the repository to upgrade: if you used expdp in step 3, you can use impdp to import that file (see the Data Pump sketch after these steps). In essence we are creating a clone of the original repository, properly referenced in the Oracle inventory tables (if you are using impdp, make sure to map the imported tables to the new schema name, and to overwrite the existing tables).

    6- Now this step is very important: Connect to the Master repository of this new repository using the ODI Studio and update the connection parameters of the Work repository to make sure that it points to the NEW schema (as is, it still points to the original, on premises schema)

    7- Optional – backup your cloned repository. Doing so will save you a lot of time if anything goes wrong with the upgrade, otherwise you would have to redo steps 4 and 5.

    8- Download and install the version that matches that of ODICS (12.2.1.2.0 as of February 10, 2017): use the upgrade assistant of that version to upgrade the cloned repository.
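
    For steps 3 through 5, Oracle Data Pump does the heavy lifting. The commands below are only a sketch; the connect strings, directory object, dump file name, and schema names are placeholders to adapt to your environment:

    # Step 3 - export the original repository schema (all names are placeholders)
    expdp system@orig_db schemas=ODI_REPO directory=DATA_PUMP_DIR \
          dumpfile=odi_repo.dmp logfile=odi_repo_exp.log

    # Step 5 - overwrite the freshly created repository schema with that export,
    # remapping to the new schema name and replacing the RCU-created tables
    impdp system@clone_db directory=DATA_PUMP_DIR dumpfile=odi_repo.dmp \
          logfile=odi_repo_imp.log remap_schema=ODI_REPO:ODI_REPO_CLONE \
          table_exists_action=replace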

    You now have a repository in the same version as the one used by ODICS. Time to migrate this over to the cloud…

     

    2.     Migrate your repository over to ODICS

    The steps to migrate the repository to the cloud are very similar to the ones described to upgrade the repository. Just follow these additional steps:

    1- Backup your 12.2.1.2.0 repository (I like to use oracle expdp for that – fast, flexible, reliable)

    2- Copy the backup file to the cloud, on the file system where ODICS will be installed

    3- In the cloud, create a brand new repository using the RCU tool that comes with the installation of ODICS (instructions for the installation are available here). Again, this ensures that all administrative tables are in place. This will be important in the future when you want to further upgrade this repository.

    4- Overwrite the freshly created repository with the backup of the 12.2.1.2.0 repository: if you used expdp in step 1, you can use impdp to import that file (the same Data Pump commands sketched in the previous section apply). In essence we are creating a clone of the original repository, properly referenced in the Oracle inventory tables (if you are using impdp, make sure to map the imported tables to the new schema name, and to overwrite the existing tables).

    5- Now this step is very important: Connect to the Master repository of this new repository using ODICS Studio and update the connection parameters of the Work repository to make sure that it points to the CLOUD schemas (as is, it still points to the original, on premises schema)

    6- Do not forget to update the cloud agents and studios with 3rd party JDBC drivers that are used on premises, if any.

    7- Test your cloud environment: remember to validate your topology connections as your databases – or at least some of them – must have moved to the Cloud as well. You will also have to update your agents’ definitions: your new agents will be in the Cloud, along with this new repository. Note that during this time, the original environment is still up and running.

    8- Once you are satisfied with your tests in the Cloud, switch over to ODICS.

    You are now the proud owner of a fully operational ODICS repository!

    3.     Beyond the repository

    Having your repository in the cloud will only make sense if your ODI agent is in the Cloud (that is the purpose of ODICS), and so is the Studio used by the developer. Stay tuned for more blogs that will look into the details of ODICS best practices in the cloud: installing ODICS, deploying studio for development teams, and more!

     

    Conclusion

    With a limited number of steps, we have quickly upgraded and migrated an on-premises ODI repository to the Cloud, and made it usable as an ODICS repository.

    For more ODI and ODICS best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

    Using Oracle Managed File Transfer (MFT) to Push Files to ICS for Processing


    Introduction

    In a previous article I discussed using the Enterprise Scheduler Service (ESS) to poll MFT for files on a scheduled basis. That article covered how to process many files that have been posted to the FTP server, and at the end of it I mentioned the use of the push pattern for file processing.

    This article will cover how to implement that push pattern with Managed File Transfer (MFT) and the Integration Cloud Service (ICS).  We’ll walk through the configuration of MFT, creating the connections in ICS, and developing the integration in ICS.

    The following figure is a high-level diagram of this file-based integration using MFT, ICS, and an Oracle SaaS application.

    mft2ics

     

    Create the Integration Cloud Service Flow

    This will be a basic integration with an orchestrated flow.  The purpose is to demonstrate how the integration is invoked and how the message is processed as it enters the ICS application.  For this implementation we only need two endpoints: a SOAP connection that MFT will invoke, and an FTP connection back to MFT to write the file to an output directory.

    The flow could include other endpoints but for this discussion additional endpoints will not add any benefits to understanding the push model.

    Create the Connections

    The first thing to do is to create the connections to the endpoints required for the integration.  For this integration we will create two connections.

     

    1. SOAP connection: This connection will be used by MFT to trigger the integration as soon as the file arrives in the specified directory within MFT (this is covered in the MFT section of this article).
    2. FTP connection: This connection will be used to write the file to an output directory within the FTP server.  This second connection is only to demonstrate the flow of processing the file and then writing it to an endpoint.  The endpoint could have been any endpoint; for instance, we could have used the input file to invoke a REST, SOAP, or one of many other endpoints.

    Let’s define the SOAP connection.

    SOAP_Identifier

    Figure 1

    Identifier: Provide a name for the connection

    Adapter: When selecting the adapter type choose the SOAP Adapter

    Connection Role: There are three choices for the connection role; Trigger, Invoke, and Trigger and Invoke.  We will use a role of Trigger, since the MFT will be triggering our integration.

    SOAPConnectionProperties

    Figure 2

    Figure 2 shows the properties that define the endpoint.  The WSDL URL may be added by specifying the actual WSDL as shown above, or the WSDL can be consumed by specifying the host:port/uri/?WSDL.

    In this connection the WSDL was retrieved from the MFT embedded server.  This can be found at $MW_HOME/mft/integration/wsdl/MFTSOAService.wsdl.

    The suppression of the timestamp is specified as true, since the policy being used at MFT does not require the timestamp to be passed.

    Security Policy

    SOAP_Security

     

    Figure 3

    For this scenario we will be using the username-password token policy.  The policy specified on this connection needs to match the policy that is specified for the MFT SOAP invocation.

    The second connection, as mentioned previously, exists only to demonstrate an end-to-end flow; it is not essential to the push pattern itself.  It is a connection back to the MFT server.

    MFT_FTP_Identifier

    Figure 4

    Identifier: Provide a unique name for the connection

    Adapter: When selecting the adapter type choose the FTP Adapter

    Connection Role: For this connection we will specify “Trigger and Invoke”.

    Connection Properties

    MFT_FTP_Connection_Properties

    Figure 5

    FTP Server Host Address:  The IP address of the FTP server.

    FTP Server Port: The listening port of the FTP Server

    SFTP Connection:  Specify “Yes”, since the invocation will be over sFTP

    FTP Server Time Zone: The time zone where the FTP server is located.

    Security Policy

    MFT_FTP_Security

    Figure 6

    Security Policy:  FTP Server Access Policy

    User Name:  The name of the user that has been created in the MFT environment.

    Password: The password for the specified user.

    Create the Integration

    Now that the connections have been created we can begin to create the integration flow.  When the flow is triggered by the MFT SOAP request the file will be passed by reference.  The file contents are not passed, but rather a reference to the file is passed in the SOAP request.  When the integration is triggered the first step is to capture the size of the file.  The file size is used to determine the path to traverse through the flow.  A file size of greater than one megabyte is the determining factor.

    integration

     

    Figure 7

    The selected path is determined by the incoming file size.  When MFT passes the file reference it also passes the size of the file.  We can then use this file size to determine the path to take.  Why do we want to do this?

    If the file is of significant size then reading the entire file into memory could cause an out-of-memory condition.  Keep in mind that memory requirements are not just about reading the file but also the XML objects that are created and the supporting objects needed to complete any required transformations.

    The ICS product provides a feature to prevent an OOM condition when reading large files.  The top path shown in Figure 7 demonstrates how to handle large files.  When processing a file of significant size it is best to first download the file to ICS (an option provided by the FTP adapter when configuring the flow).  After downloading the file to ICS it is processed using a “stage” action, which can chunk the large file and read it across multiple threads.  This article will not provide an in-depth discussion of the stage action; to better understand it, refer to the Oracle ICS documentation.

    The “otherwise” path in the execution flow above is taken when the file size is less than the configured maximum file size.  For the scenario in this blog, I set the maximum size to one megabyte.

    The use case being demonstrated involves passing the file by reference.  Therefore, in order to read or download the file we must obtain the reference location from MFT.  The incoming request provides the reference location.  We must provide this reference location and the target filename to the read or download operation.  This is done with the XSLT mapping shown in figure 8.

    FileReferenceMapping

    Figure 8

    The result mapping is shown in Figure 9.

    MappingPage

    Figure 9

     

    The mapping of the fields is provided below.

    Headers.SOAPHeaders.MFTHeader.TargetFilename -> DownloadFileToICS.DownloadRequest.filename

    substring-before(
        substring-after(InboundSOAPRequestDocument.Body.MFTServiceInput.FTPReference.URL, '7522'),
        InboundSOAPRequestDocument.Headers.SOAPHeaders.MFTHeader.TargetFilename) -> DownloadFileToICS.DownloadRequest.directory

    As previously stated, this is a basic scenario intended to demonstrate the push process.  The integration flow may be as simple or complex as necessary to satisfy your specific use case.

    Configuring MFT

    Now that the integration has been completed it is time to implement the MFT transfer and configure the SOAP request for the callout.  We will first configure the MFT Source.

    Create the Source

    The source specifies the location of the incoming file.  For our scenario the directory we place our file in will be /users/zern/in.  The directory location is your choice but it must be relative to the embedded FTP server and one must have permissions to read from that directory.  Figure 10 shows the configuration for the MFT Source.

    MFT_Source

    Figure 10

    As soon as the file is placed in the directory an “event” is triggered for the MFT target to perform the specified action.

    Create the Target

    The MFT target specifies the endpoint of the service to invoke.  In figure 11, the URL has been specified to the ICS integration that was implemented above.

    MFT_Target_Location

     

    Figure 11

    The next step to specify is the security policy.  This policy must match what was specified by the connection defined in the ICS platform.  We are specifying the username_token_over_ssl_policy as seen in Figure 12.

    MFT_Target_Policy

     

    Figure 12

    Besides specifying the security policy, we must also specify that the timestamp in the response be ignored.  Since the policy is the username_token policy, the request must also include the credentials, which are retrieved from the keystore by providing the csf-key value.

    Create the Transfer

    The last step in this process is to bring the source and target together which is the transfer.  It is within the transfer configuration that we specify the delivery preferences.  In this example we set the “Delivery Method” to “Reference” and the Reference Type to be “sFTP”.

     

    MFT_Transfer_Overview

    Figure 13

    Putting it all together

    1. A “.csv” file is dropped at the source location, /users/zern/in.
    2. MFT invokes the ICS integration via a SOAP request.
    3. The integration is triggered.
    4. The integration determines the size of the incoming file and determines the path of execution
    5. The file is either downloaded to ICS or read into memory.  This is determined by the path of execution.
    6. The file is transformed and then written back to the output directory specified by the FTP write operation.
    7. The integration is completed.

    Push versus Polling

    There is no right or wrong when choosing either a push or poll pattern.  Each pattern has its benefits.  I’ve listed a couple of points to consider for each pattern.

    Push Pattern

    1. The file gets processed as soon as it arrives in the input directory.
    2. You need to create two connections; one SOAP connection and one FTP connection.
    3. Normally used to process only one file.
    4. The files can arrive at any time and there is no need to setup a schedule.

    Polling Pattern

    1. You must create a schedule to consume the file(s).  The polling schedule can be at either specific intervals or at a given time.
    2. You only create one connection for the file consumption.
    3. Many files can be placed in the input directory and the scheduler will make sure each file is consumed by the integration flow.
    4. File processing is delayed by as much as the polling interval.

    Summary

    Oracle offers many SaaS cloud applications such as Fusion ERP and several of these SaaS solutions provide file-based interfaces.  These products require the input files to be in a specific format for each interface.  The Integration Cloud Service is an integration gateway that can enrich and/or transform these files and then pass them along directly to an application or an intermediate storage location like UCM where the file is staged as input to SaaS applications like Fusion ERP HCM.

    With potentially many source systems interacting with Oracle SaaS applications it is beneficial to provide a set of common patterns to enable successful integrations.  The Integration Cloud Service offers a wide range of features, functionality, and flexibility and is instrumental in assisting with the implementation of these common patterns.

     

    Invoking IDCS REST API from PL/SQL


    This post shows a way to make REST API calls to IDCS from an Oracle Database using PL/SQL.  The idea is that a PL/SQL application can manage and search for user and group entities directly in IDCS.

    In the sample code we’ll see how to obtain an access token from IDCS and make calls to create users, query group membership, and retrieve user profile attributes.  The PL/SQL code uses APEX 5.1 with the APEX_WEB_SERVICE package to call IDCS and the APEX_JSON package to parse the JSON response.

    Setup

     

    1. Since the Oracle Database is acting as an IDCS client, we need to register it in IDCS using Client Credentials as the grant type and with permission to invoke Administration APIs as Identity Domain Administrator.  The Client ID and Client Secret returned by the registration are used in the sample code to request an access token.

    Screenshot 2017-02-08 14.35.43

    Screenshot 2017-02-08 14.41.40

    2. Now, we give the Database the appropriate ACLs so it can resolve and call the IDCS URL. Note that DB Cloud instances seem to have existing ACLs for any external host (*).  If needed, execute the following commands to create ACLs for the IDCS host and port:

     

    exec dbms_network_acl_admin.create_acl (acl => 'idcs_apex_acl.xml', description => 'IDCS HTTP ACL', principal => 'APEX_05XXXX', is_grant => TRUE, privilege => 'connect', start_date => null, end_date => null);

    exec dbms_network_acl_admin.add_privilege (acl => 'idcs_apex_acl.xml', principal => 'APEX_05XXXX', is_grant => true, privilege => 'resolve');

    exec dbms_network_acl_admin.assign_acl (acl => 'idcs_apex_acl.xml', host => 'myidcshost.com', lower_port => 8943, upper_port => 8943);

    commit;

     

    Replace the following values accordingly:

    1. 'APEX_05XXXX': the APEX schema owner (it varies with version)
    2. 'myidcshost.com': IDCS host for one tenant.
    3. 8943: IDCS SSL Port

    Verify the ACLs if you see the error “ORA-24247: network access denied by access control list (ACL)” when submitting a request.

     

    3. In a Database Schema of your choice, create the PL/SQL package using this SQL Script.  Before executing the script, replace all occurrences of idcs_app with your schema name.  Run the script as a SYSDBA user or as a user with the permissions to create procedures and types.

     

    4. Add the appropriate root certificate chain to the Database Wallet (trusted certificates) for SSL communication with IDCS.

     

    5. Test requests to IDCS using curl over SSL to verify proper access; a sample request is sketched below.
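
    For reference, a minimal curl sketch of the same token request that the PL/SQL package issues is shown below. The host, port, Client ID, and Client Secret are placeholders, and --cacert points at the root certificate chain added to the wallet in step 4:

    # Request an access token with the client_credentials grant (all values are placeholders)
    curl --cacert /path/to/idcs_root_chain.pem \
         -u "<client_id>:<client_secret>" \
         -H "Content-Type: application/x-www-form-urlencoded" \
         -d "grant_type=client_credentials&scope=urn:opc:idm:__myscopes__" \
         https://myidcshost.com:8943/oauth2/v1/token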

     

    PL/SQL Code

    The sample code is in the form of a PL/SQL Package.  The script to create the package can be downloaded here.  The package specification is as follows:

     

    CREATE or REPLACE PACKAGE IDCS_CLIENT as

    g_base_url    VARCHAR2(500) := 'https://myidcs.oracle.com:8943';        -- Replace with IDCS Base URL
    g_client_id   VARCHAR2(100) := '8105a4f266c745b09a7bbed42ff151eb';      -- Replace with Client ID
    g_client_pwd  VARCHAR2(100) := 'a664583b-2115-4921-bd48-8e4a84b0c7a3';  -- Replace with Client Secret
    g_wallet_path VARCHAR2(200) := 'file:/home/oracle/wallet';              -- Replace with DB Wallet location
    g_wallet_pwd  VARCHAR2(50)  := 'Welcome1';                              -- Replace with DB Wallet password
    g_users_uri   VARCHAR2(200) := '/admin/v1/Users';
    g_groups_uri  VARCHAR2(200) := '/admin/v1/Groups';

    g_bearer_token  VARCHAR2(32767);      -- Stores Access token

    -- Used to return a list of groups from function get_user_membership
    TYPE group_list_t
    IS TABLE OF VARCHAR2(100);

    -- Used to return a list of users and their profiles
    TYPE user_list_t
    IS TABLE OF idcs_user_t;

    -- Create the following TYPE outside of the package using SQL*Plus with a SYSDBA account
    -- TYPE idcs_user_t is used to store a user's profile
    /*
    CREATE TYPE idcs_user_t
    AS OBJECT (
    username      VARCHAR2(100),
    displayname   VARCHAR2(100),
    firstname     VARCHAR2(50),
    lastname      VARCHAR2(50),
    email         VARCHAR2(100)
    );
    */

    Note that the variables g_client_id and g_client_pwd need to have the respective values obtained during the DB client registration above.  The variable g_base_url is the IDCS base URL for a specific tenant.  Also, since communication with IDCS is over SSL, the database needs a wallet, referenced through g_wallet_path and g_wallet_pwd.

    Three types are defined to return multiple groups and users.  The type idcs_user_t is an object that stores user profile information.  This kind of type cannot be created inside the package, so it has to be created in SQL*Plus before creating the package.  The type user_list_t is a table type that holds user profiles, and the type group_list_t is another table type that holds group names.

    -- Obtain access token from IDCS
    PROCEDURE get_authz_token;

    -- Creates user in IDCS
    PROCEDURE create_user (
    username    varchar2,
    first_name  varchar2,
    last_name   varchar2,
    email       varchar2);

    -- Assigns group groupname to user username
    PROCEDURE grant_group (
    username    varchar2,
    groupname   varchar2);

    -- Returns list of all IDCS groups a user is a member of
    FUNCTION get_user_membership (
    username    varchar2)
    RETURN group_list_t;

    -- Returns internal user id from username
    FUNCTION get_user_id (
    username    varchar2)
    RETURN VARCHAR2;

    -- Table function to retrieve username, displayname, firstname, lastname, and email for all users
    /* Sample usage for table function user_profiles:

        SELECT * FROM TABLE((idcs_client.user_profiles));

        SELECT email FROM TABLE((idcs_client.user_profiles)) WHERE username = 'myuser1';

    */
    FUNCTION user_profiles        -- Table function to query user profiles
    RETURN user_list_t PIPELINED;

    END idcs_client;

    The use of the procedures and functions is self-explanatory.  Only the function user_profiles is different, in that it is defined as an Oracle table function that can be queried with SELECT statements to retrieve IDCS user profile attributes.  For example, the query:

    SELECT lastname FROM TABLE((idcs_client.user_profiles)) WHERE username = 'myuser1@demo.com';

    would retrieve, in real time, the last name of the user with username myuser1@demo.com directly from IDCS.

    The actual code is in the Package Body below

     

    CREATE or REPLACE PACKAGE BODY idcs_client AS

    -- Gets access token from IDCS
    PROCEDURE get_authz_token IS

    v_token_request_uri VARCHAR2(50) := '/oauth2/v1/token';
    v_creds VARCHAR2(500) := g_client_id||':'||g_client_pwd; -- Client credentials unencoded
    v_client_creds VARCHAR2(1000) := replace(replace(replace(utl_encode.text_encode(v_creds,'WE8ISO8859P1', UTL_ENCODE.BASE64),chr(9)),chr(10)),chr(13)); -- BASE64-encodes the credentials
    l_idcs_response_clob CLOB; -- JSON response from IDCS
    l_idcs_url VARCHAR2(500);

    BEGIN
    -- Build request headers
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded; charset=UTF-8';

    apex_web_service.g_request_headers(2).name := 'Authorization';
    apex_web_service.g_request_headers(2).value := 'Basic '||v_client_creds;

    l_idcs_url := g_base_url||v_token_request_uri; -- Request URL

    -- Sends a POST SSL request to /oauth2/v1/token with grant_type=client_credentials and the appropriate scope
    l_idcs_response_clob := apex_web_service.make_rest_request
    ( p_url => l_idcs_url,
      p_http_method => 'POST',
      p_wallet_path => g_wallet_path,
      p_wallet_pwd => g_wallet_pwd,
      p_body => 'grant_type=client_credentials'||'&'||'scope=urn:opc:idm:__myscopes__');
    dbms_output.put_line('IDCS Response getting token: '||l_idcs_response_clob);

    -- APEX_JSON package used to parse response
    apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No error checking for simplicity.
    -- Implement verification of response code and error check

    g_bearer_token := apex_json.get_varchar2(p_path => 'access_token'); -- Obtain access_token from parsed response and set variable with token value
    --dbms_output.put_line('Bearer Token: '||g_bearer_token);

    END get_authz_token;

    The procedure get_authz_token obtains the access token from IDCS using the credentials obtained during the application registration.  The credentials in v_creds are in the form 'clientID:clientSecret' and are then BASE64 encoded into v_client_creds.  The local variable l_idcs_response_clob holds the JSON response from IDCS.  After setting the request headers, the request is sent using apex_web_service.make_rest_request with the URL l_idcs_url.  The code does not check for errors in the response; the entire response can be seen in DBMS Output.  The response is parsed from l_idcs_response_clob using apex_json.parse, and the access token is retrieved from the 'access_token' attribute of the response and stored in g_bearer_token.

     

    -- Creates user in IDCS
    PROCEDURE create_user (
         username varchar2,
         first_name varchar2,
         last_name varchar2,
         email varchar2) IS -- work email

    l_idcs_url VARCHAR2(1000);

    l_authz_header APEX_APPLICATION_GLOBAL.VC_ARR2;
    l_idcs_response_clob CLOB;       -- JSON IDCS response
    -- Quickly build a JSON request body for the Create User request from parameter values
    l_users_body VARCHAR2(1000) := '{
              "schemas": [
                   "urn:ietf:params:scim:schemas:core:2.0:User"
               ],
               "userName": "'||username||'",
               "name": {
                   "familyName": "'||last_name||'",
                   "givenName": "'||first_name||'"
               },
              "emails": [
              {
                 "value": "'||email||'",
                 "type": "work",
                 "primary": true
              }
             ]
    }';

    BEGIN

    IF g_bearer_token IS NULL THEN
         idcs_client.get_authz_token; -- Get an access token to be able to make the request
    END IF;
    IF g_bearer_token IS NOT NULL THEN
    -- Build request headers
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/scim+json';
    apex_web_service.g_request_headers(2).name := 'Authorization';
    apex_web_service.g_request_headers(2).value := 'Bearer ' || g_bearer_token; -- Access Token
    l_idcs_url := g_base_url||g_users_uri; -- IDCS URL

    -- Sends a POST SSL request to /admin/v1/Users with the new user in the body
    l_idcs_response_clob := apex_web_service.make_rest_request
    ( p_url => l_idcs_url,
      p_http_method => 'POST',
      p_wallet_path => g_wallet_path,
      p_wallet_pwd => g_wallet_pwd,
      p_body => l_users_body);

    dbms_output.put_line('IDCS Response creating user: '||l_idcs_response_clob);

    apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No error checking for simplicity.
    -- Implement verification of response code and error check
    --dbms_output.put_line(l_idcs_response_clob);
    END IF;
    END create_user;

    The procedure create_user creates a user in IDCS with the specified username, first name, last name, and work email values.  The variable l_users_body is the request body to create a user with the provided parameters.  The procedure first requests the access token, which is stored in g_bearer_token.  After building the headers it invokes apex_web_service.make_rest_request and the response is parsed.  The code does not check the response code or errors; the entire response can be seen in DBMS Output.

     

    -- Returns list of groups user username is a member of
    FUNCTION get_user_membership (
         username varchar2)
    RETURN group_list_t IS

    l_idcs_url VARCHAR2(1000);
    l_idcs_response_clob CLOB; -- JSON IDCS response
    -- Request filter to return the displayName of the groups user username is a member of
    l_groups_filter VARCHAR2(100) := '?attributes=displayName&filter=members+eq+%22'||get_user_id(username)||'%22';
    l_group_count PLS_INTEGER; -- Number of groups the user is a member of
    l_group_names group_list_t := group_list_t(); -- List of user's groups to return

    BEGIN
    IF g_bearer_token IS NULL THEN
            idcs_client.get_authz_token; -- Get access token
    END IF;
    IF g_bearer_token IS NOT NULL THEN
        -- Build request headers
        apex_web_service.g_request_headers(1).name := 'Content-Type';
        apex_web_service.g_request_headers(1).value := 'application/scim+json';
        apex_web_service.g_request_headers(2).name := 'Authorization';
        apex_web_service.g_request_headers(2).value := 'Bearer ' || g_bearer_token;
        l_idcs_url := g_base_url||g_groups_uri||l_groups_filter;

    -- Sends a GET SSL request to /admin/v1/Groups?attributes=displayName&filter=members+eq+%22<user id>%22
    l_idcs_response_clob := apex_web_service.make_rest_request
        ( p_url => l_idcs_url,
          p_http_method => 'GET',
          p_wallet_path => g_wallet_path,
          p_wallet_pwd => g_wallet_pwd);

    dbms_output.put_line('IDCS Response getting membership: '||l_idcs_response_clob);
    apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No error checking for simplicity.
    -- Implement verification of response code and error check
    --dbms_output.put_line(l_idcs_response_clob);
    l_group_count := apex_json.get_count(p_path=>'Resources'); -- Number of Resources (groups) returned, used to extract groups below

    -- The list of groups is returned as l_group_names. This loop populates the table variable.
    FOR i in 1..l_group_count LOOP -- Through all returned groups
       l_group_names.extend;
       -- Find displayName for the current group
       l_group_names(l_group_names.last) := apex_json.get_varchar2(p_path=>'Resources[%d].displayName',p0=>i);
       --dbms_output.put_line(l_group_names(i)); -- Print group displayName
    END LOOP;

    RETURN l_group_names; -- Returns list of the user's groups
    END IF;
    RETURN null;
    END get_user_membership;

    The function get_user_membership returns all groups a user belongs to.  A filter is specified in the variable l_groups_filter to retrieve the group displayName, restricted to groups whose members attribute contains the specified user id.  The user id is retrieved from the username through another call to IDCS using the function get_user_id.  After building the headers it invokes apex_web_service.make_rest_request and the response is parsed.  The group count returned by apex_json.get_count is stored in l_group_count, which drives a loop that populates the variable l_group_names with the displayName of each of the Resources in the response.  The table variable l_group_names is returned with the results.

     

    -- This is a table function; it can be queried as SELECT * FROM TABLE((idcs_client.user_profiles));

    FUNCTION user_profiles
    RETURN user_list_t PIPELINED IS

    l_idcs_url VARCHAR2(1000);
    l_idcs_response_clob CLOB; -- JSON IDCS response
    -- Filter to get displayname, username, active, firstname, lastname and primary email
    l_users_filter VARCHAR2(100) := '?attributes=displayname,username,active,name.givenName,name.familyName,emails.value,emails.primary';
    l_user_count PLS_INTEGER; -- Number of users returned
    l_user_profile idcs_user_t := idcs_user_t(NULL,NULL,NULL,NULL,NULL); -- Initialize variable that holds user profiles

    BEGIN
    IF g_bearer_token IS NULL THEN
         idcs_client.get_authz_token; -- Get access token
    END IF;
    IF g_bearer_token IS NOT NULL THEN
        -- Build request headers
        apex_web_service.g_request_headers(1).name := 'Content-Type';
        apex_web_service.g_request_headers(1).value := 'application/scim+json';
        apex_web_service.g_request_headers(2).name := 'Authorization';
        apex_web_service.g_request_headers(2).value := 'Bearer ' || g_bearer_token;
        l_idcs_url := g_base_url||g_users_uri||l_users_filter;

    -- Sends a GET SSL request to /admin/v1/Users?attributes=displayname,username,active,name.givenName,name.familyName,emails.value,emails.primary to retrieve ALL USERS
    -- No paging is done
    l_idcs_response_clob := apex_web_service.make_rest_request
    ( p_url => l_idcs_url,
      p_http_method => 'GET',
      p_wallet_path => g_wallet_path,
      p_wallet_pwd => g_wallet_pwd);
    --dbms_output.put_line('IDCS Response getting profiles: '||l_idcs_response_clob);
    apex_json.parse(l_idcs_response_clob); -- Parse JSON response. No error checking for simplicity.
    -- Implement verification of response code and error check

    l_user_count := apex_json.get_count(p_path=>'Resources'); -- Number of Resources (users) in response

    -- Loop through all returned users and pipe out an idcs_user_t row with the profile attributes for each user
    -- No paging implemented
    FOR i in 1..l_user_count LOOP
          l_user_profile := idcs_user_t(apex_json.get_varchar2(p_path=>'Resources[%d].userName',p0=>i),
                                        apex_json.get_varchar2(p_path=>'Resources[%d].displayName',p0=>i),
                                        apex_json.get_varchar2(p_path=>'Resources[%d].name.givenName',p0=>i),
                                        apex_json.get_varchar2(p_path=>'Resources[%d].name.familyName',p0=>i),
                                        apex_json.get_varchar2(p_path=>'Resources[%d].emails[1].value',p0=>i)
          );

          --dbms_output.put_line(l_user_profile.username);
          PIPE ROW(l_user_profile); -- Pipe out rows to the invoking select statement
    END LOOP;

    END IF;
    RETURN;
    END user_profiles;

    The table function user_profiles, as mentioned above, is invoked from a SELECT statement to retrieve user profiles.  The variable l_users_filter limits the data that comes back from IDCS.  As declared, it only restricts the list of attributes returned per user and does not filter users by attribute, so it retrieves all users.  An example of SCIM filters when searching users is in this tutorial.  After building the headers it invokes apex_web_service.make_rest_request and the response is parsed.  The number of users returned by the request is stored in l_user_count by calling apex_json.get_count on the Resources in the response.  The table type variable l_user_profile is populated with the attributes of each returned user in a loop.  Finally, the rows are piped out to the SELECT statement that was issued.  Here's a sample of the result of a SELECT statement on user_profiles.

     

    Screenshot 2017-02-14 11.37.00

     
