
BPM 11g to 12c Upgrade Notes


BPM 12c has been the latest released version for almost two years now. BPM 11g installations should be looking at 12c upgrades to either pick up significant new features in 12c or prepare for 11g end of support. The upgrade can be quite involved depending on how complex the existing 11g environment is and how much running, faulted and completed instance data it contains. In all but the most trivial cases it will require careful planning and testing.

Support Dates

A caveat to be aware of is that the support dates for 12.1.3.x, the initial 12c release, are based on WebLogic 12.1.x, which came out a couple of years before BPM 12.1.3. The end of support for WebLogic/BPM 12.1.3 is December 2017, a year earlier than BPM 11g PS6 BP8 (11.1.1.7.8). If your upgrade planning is driven by support considerations, 12c R2 (12.2.1.x) is a better target than 12c R1 (12.1.3.x).

Figure: Fusion Middleware support dates

To in-place or not to in-place?

There are major differences in BPM 12c. Some of the headliners are WebLogic 12.x vs. 10.3.6, a new internal data model for instance data, a completely rewritten and cluster-capable BAM, and a new and improved Oracle Enterprise Manager/Fusion Middleware Control UI (even more so in 12.2.1 than 12.1.3). The cleanest and least problematic upgrade is to build new 12c infrastructure, convert BPM Studio projects, redeploy, and cut over or phase over. It is possible to do an in-place upgrade if a cutover is not an option. Instance data will be restructured, WebLogic domains will be reconfigured, and BPM Composer projects will be converted and transferred from PML to PAM. Deployed composites will continue to work as-is; the BPM run-time will dynamically convert deployed 11g composites to 12c every time they are loaded into memory. If you have a lot of instance data in various states with multiple composites involved in a flow, it may be difficult to do a before-and-after accounting of the data. If you are using partitions for your 11g instance data, in-place upgrade is only possible with 12.2.1, not 12.1.3.

Which 12c: R1 or R2?

In short, R2 (12.2.1) is behind R1 (12.1.3) in bug fixes for engine function and performance and for BPM Composer/PAM function. R1 has BP6, available April 2016, which addresses those bugs, while R2 does not get its first BP until June 2016. If you must do the upgrade before July 2016, then R1 is the only real option. If you have partitioned instance data, then you will need to build R1 new and cut over as mentioned above. Generally it will likely be best to wait for R2 to catch up with fixes in BP1, coming out in June.

11g Starting Point

BPM 11g PS5 (11.1.1.6) is the lowest supported starting point for in-place upgrade. Even so, if possible, move to PS6 (11.1.1.7) first. In either case, apply the latest bundle patch: BP7 for PS5 (11.1.1.6.7) and BP8 for PS6 (11.1.1.7.8). Make sure the 11g installation and instance data are clean and healthy, and purge or archive as much instance data as possible. Identify any deployed 11g composites with missing source that will not be able to be converted and redeployed.

BAM?

If you have BAM in the same domain as BPM, you will need to do a hybrid in-place/cutover upgrade. BAM servers will need to be split out to a separate domain, removing them from the BPM domain. The BPM domain can then be upgraded in place while the BAM domain is built new and cut over.

New Enterprise Manager/Fusion Middleware Control

If EM is being used for operations and monitoring, it is important to learn the differences in flow structure and audit trace and to train operations staff appropriately. The Middleware Control part of EM is significantly changed from 11g to 12.1.3, and the UI changed significantly again from 12.1.3 to 12.2.1. If you use the instance name (via ora:setCompositeInstanceTitle() or otherwise) to search instance data, it will not be obvious how to do a similar search in 12c EM.

Running an Upgrade

The most important part of running an upgrade is to do a test upgrade on a clone of your production environment, or on an environment as close to production as possible. UAT or pre-production are good candidates if you can get representative instance data loaded from production. The second most important thing is to have a complete and proven backup of production before doing the upgrade.

References

Make sure you review the upgrade guide documentation (R1 is here and R2 here). There are also other good blogs; for example, Jay Kasi’s blog for SOA Suite is largely applicable to BPM.


Oracle Data Integrator (ODI) for HCM-Cloud: a Knowledge Module to Generate HCM Import Files


Introduction

For batch imports, Oracle Cloud’s Human Capital Management (HCM) uses a dedicated file format that contains both metadata and data. As far as the data is concerned, the complete hierarchy of parent and children records must be respected for the file content to be valid.

To load data into HCM with ODI, we look here at a new Integration Knowledge Module (KM). This KM allows us to leverage ODI to prepare the data and generate the import file. Traditional Web Services connections can then be leveraged to load the file into HCM.

Description of the Import File Format

HCM uses a structured file format that follows a very specific syntax so that complex objects can be loaded. The complete details of the syntax for the import file are beyond the scope of this article; we only provide an overview of the process here. For more specific instructions, please refer to Oracle Human Capital Management Cloud: Integrating with Oracle HCM Cloud.

The loader for HCM (HCL) uses the following syntax:

  • Comments are used to make the file easier to read by humans. All comment lines must start with the keyword COMMENT.
  • Because the loader can be used to load all sorts of business objects, the file must describe the metadata of the objects being loaded. This includes the object names along with their attributes. Metadata information must be prefixed by the keyword METADATA.
  • The data for the business objects can be inserted or merged. The recommended approach is to merge the incoming data: in this case the data to be loaded is prefixed with the keyword MERGE, immediately followed by the name of the object to be loaded and the values for the different attributes.

The order in which the different elements are listed in the file is very important:

  • Metadata for an object must always be described before data is provided for that object;
  • Parent objects must always be described before their dependent records.

In the file example below we are using the Contact business object because it is relatively simple and makes for easier descriptions of the process. The Contact business object is made of multiple components: Contact, ContactName, ContactAddress, etc. Notice that in the example the Contact components are listed before the ContactName components, and that data entries are always placed after their respective metadata.

COMMENT ##############################################################
COMMENT HDL Sample files.
COMMENT ##############################################################
COMMENT Business Entity : Contact
COMMENT ##############################################################
METADATA|Contact|SourceSystemOwner|SourceSystemId|EffectiveStartDate|EffectiveEndDate|PersonNumber|StartDate
MERGE|Contact|ST1|ST1_PCT100|2015/09/01|4712/12/31|STUDENT1_CONTACT100|2015/09/01
MERGE|Contact|ST1|ST1_PCT101|2015/09/01|4712/12/31|ST1_CT101|2015/09/01
COMMENT ##############################################################
COMMENT Business Entity : ContactName
COMMENT ##############################################################
METADATA|ContactName|SourceSystemOwner|SourceSystemId|PersonId(SourceSystemId)|EffectiveStartDate|EffectiveEndDate|LegislationCode|NameType|FirstName|MiddleNames|LastName|Title
MERGE|ContactName|ST1|ST1_CNTNM100|ST1_PCT100|2015/09/01|4712/12/31|US|GLOBAL|Emergency||Contact|MR.
MERGE|ContactName|STUDENT1|ST1_CNTNM101|ST1_PCT101|2015/09/01|4712/12/31|US|GLOBAL|John||Doe|MR.

Figure 1: Sample import file for HCM

The name of the file is imposed by HCM (the file must have the name of the parent object that is loaded). Make sure to check the HCM documentation for the limits on size and number of records for the file that you are creating. We will also have to zip the file before uploading it to the Cloud.

Designing the Knowledge Module

Now that we know what needs to be generated, we can work on creating a new Knowledge Module to automate this operation for us. If you need more background on KMs, the ODI documentation has a great description available here.

With the new KM, we want to respect all the constraints imposed by the loader for the file format. We also want to simplify the creation of the file as much as possible.

Our reasoning was that if ODI is used to prepare the file, the environment would most likely be such that:

  • Data has to be aggregated, augmented from external sources or somehow processed before generating the file;
  • Some of the data is coming from a database, or a database is generally available.

We designed our solution by creating database tables that match the components of the business object that can be found in the file. This gives us the ability to enforce referential integrity: once primary keys and foreign keys are in place in the database, parent records are guaranteed to be available in the tables when we want to write a child record to the file. Our model is the following for the Contact business object:

Data Model for HCM load

Figure 2: Data structure created in the database to temporarily store and organize data for the import file

We are respecting the exact syntax (case sensitive) for the table names and columns. This is important because we will use this metadata to generate the import file.

The metadata needs to use the proper case in ODI; depending on your ODI configuration, this may result in mixed-case or all-uppercase table names in your database. Either case works for the KM.

At this point, all we need is for our KM to write to the file when data is written to the tables. If the target file does not exist, the KM creates it with the proper header. If it does exist, the KM appends the metadata and data for the current table to the end of the file. Because of the referential integrity constraints in the database, we have to load the parent tables first; this guarantees that the records are added to the file in the appropriate order. All we have to do is to use this KM for all the target tables of our model, and to load the tables in the appropriate order.

For an easy implementation, we took the IKM Oracle Insert and modified it as follows:

  • We added two options: one to specify the path where the HCM import file must be generated, the other for the name of the file to generate;
  • We created a new task to write the content of the table to the file, once data has been committed to the table. This task is written in Groovy and shown below in figure 3:

import groovy.sql.Sql
File file = new File('<%=odiRef.getOption("HCM_IMPORT_FILE_FOLDER")%>/<%=odiRef.getOption("HCM_IMPORT_FILE_NAME")%>')
if (!file.exists()){
file.withWriterAppend{w->
w<<"""COMMENT ##################################################################

COMMENT File generated by ODI
COMMENT Based on HDL Desktop Integrator- Sample files.
"""
  }
}
file.withWriterAppend{w->
  w<<"""
COMMENT ##########################################################################
COMMENT Business Entity : <%=odiRef.getTargetTable("TABLE_NAME")%>
COMMENT ###########################################################################
"""
  }
file.withWriterAppend{w->
w<<"""METADATA|<%=odiRef.getTargetTable("TABLE_NAME")%>|<%=odiRef.getTargetColList("", "[COL_NAME]", "|", "")%>
""".replace('"', '')
  }
// Connect to the target database
def db = [url:'<%=odiRef.getInfo("DEST_JAVA_URL")%>', user:'<%=odiRef.getInfo("DEST_USER_NAME")%>', password:'<%=odiRef.getInfo("DEST_PASS")%>', driver:'<%=odiRef.getInfo("DEST_JAVA_DRIVER")%>']
def sql = Sql.newInstance(db.url, db.user, db.password, db.driver)
// Retrieve data from the target table and write the data to the file
sql.eachRow('select * from  <%=odiRef.getTable("L","TARG_NAME","D")%>') { row ->
     file.withWriterAppend{w->
w<<"""MERGE|<%=odiRef.getTargetTable("TABLE_NAME")%>|<%=odiRef.getColList("","${row.[COL_NAME]}", "|", "", "")%>
""".replace('null','')
  }
 }
sql.close()

Figure 3: Groovy code used in the KM to create the import file

If you are interested in this implementation, the KM is available here for download.

Now all we have to do is to use the KM in our mappings for all target tables.

HCM KM in Use

Figure 4: The KM used in a mapping

We can take advantage of the existing options in the KM to either create the target tables if they do not exist or truncate them if they already exist. This guarantees that we only add new data to the import file.

Testing the Knowledge Module

To validate that the KM creates the file as expected, we created a number of mappings that load the six tables of our data model. Because one of our source files contains data for more than one target table, we created a single mapping to load the first three tables. In this mapping, we specify the order in which ODI must process these loads, as shown in figure 5 below:

Ensure data load order

Figure 5: Ensuring load order for the target tables… and for the file construction.

The remaining table loads can be designed either as individual mappings or consolidated in a single mapping if the transformations are really basic.

We can then combine these mappings in a package that waits for incoming data (incoming files or changes propagated by GoldenGate). The mappings process the data and create the import file. Once the file is created, we can zip it to make it ready for upload and import with web services, a subject that is discussed in Using Oracle Data Integrator (ODI) to Bulk Load Data into HCM-Cloud.

The complete package looks like this:

 

HCM Load package

Figure 6: Package to detect arriving data, process it with the new KM to generate an import file for HCM, compress the file, and invoke the necessary web services to upload and import the file.

With this simple package, you can start bulk loading business objects into HCM-Cloud with ODI.

The web service to import data into HCM requires the use of OWSM security policies. To configure OWSM with ODI, please see Connecting Oracle Data Integrator (ODI) to the Cloud: Web Services and Security Policies.

Conclusion

With relatively simple modifications to an out-of-the-box ODI Knowledge Module, the most advanced features of ODI can now be leveraged to generate an import file for HCM and to automate the load of batch data into the cloud.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator.

Acknowledgements

Special thanks to Jack Desai and Richard Williams for their help and support with HCM and its load process.


Hybrid Mobile Apps: Using the Mobile Cloud Service JavaScript SDK with Oracle JET


Introduction

Oracle’s Mobile Cloud Service has a JavaScript SDK that makes connecting your hybrid mobile app to your mobile backend a breeze. The JavaScript SDK for MCS comes in two flavors: one for web applications and one for Cordova applications. The MCS JavaScript SDK for Cordova has a few more capabilities than the web version, such as methods for registering a device and handling notifications. However, for the most part the two SDKs are quite similar. For creating hybrid mobile apps, choose the Cordova SDK.

To download the JavaScript SDKs for MCS, log in to your MCS instance and go to the “Get Started” page. This page has SDK downloads for native Android and iOS apps as well as the MCS JavaScript SDKs. You can download the SDK with a starter app, or download the SDK alone and add it to an existing project. For the example in this post, I downloaded the SDK by itself and added it to a project created using Oracle JET (JavaScript Extension Toolkit). To get started with Oracle JET, follow the Get Started link on the JET home page.

The steps below include one way to connect the hybrid app to Mobile Cloud Service using the MCS JavaScript SDK. I will cover making calls to MCS for authentication and uploading a picture taken on the device to the MCS Storage repository.

NOTE: This example uses the camera plugin of Cordova. To test this on iOS the sample app will have to be run on an actual iOS device, since the iOS simulator does not have a camera. For Android, the emulator does have a camera, so on Android either a device or emulator will work.

Main Article

To get started, use the handy Oracle JET generator to stamp out a mobile app template. The generator can be installed using npm. Using Yeoman, the app template can be created for whatever platform you wish to use. The steps in this post focus primarily on Android, but also work with iOS hybrid apps.

Install JET generator

To generate a hybrid mobile starter application, the Yeoman generator for Oracle JET must be installed. Use npm to install “generator-oraclejet”. Note that on Mac you may need to use sudo.

npm install -g generator-oraclejet

To verify the generator was installed, run the following command:

npm list -g generator-oraclejet

Scaffold a Mobile Application

Using Yeoman, the JET generator can scaffold three different types of starter applications. This example will use the “navBar” template. To see screens of the options, follow this link.

Open a command prompt and create a directory where you want your mobile application to reside. Change directory to that new folder and run the Yeoman command to create an Oracle JET application. The command below creates a hybrid app named “JETMobileDemo” using the “navBar” template. Note that if on Windows, the platforms option cannot include iOS. On Mac you can use both iOS and android.

yo oraclejet:hybrid --appName=JETMobileDemo --template=navBar --platforms=android

Once this command completes, the directory that you are in will have a JET mobile application created. The output should show this at the end:

  Done, without errors.
  Oracle JET: Your app is ready! Change to your new app directory and try grunt build and serve…

See this link to understand the folder structure that is created from the template. Note that the “src” directory is where the app pages are defined and edited. At build time, the src files are copied to the hybrid directory. This is important to understand so you avoid developing in the hybrid directory, which gets overwritten by the grunt build task (or more specifically the “grunt copy” task).

Build and Serve

If you have installed the Android or iOS tooling, then the template app is ready to be built and run on a device or emulator. To build the app, grunt is used. Set the platform as needed.

grunt build:dev --platform=android

Run the app using “grunt serve”. For more details on the grunt serve command, see the JET documentation on grunt commands. The command below runs the app on an Android device attached to your machine.

grunt serve --platform=android --destination=device

 

On the device or emulator, the starter template should look something like this. If running on iOS, the navigation bar will be at the bottom of the page instead of the top. The navbar can be styled to be on top or bottom for all devices if needed, but the default is top for Android and bottom for iOS. Hint: if you want the header on top for iOS as well, use the JET style “oj-hybrid-applayout-navbar-fixed-top” on the navigation div in index.html. For information on the built-in JET styles, see this link.

 

Figure: The scaffolded starter app running on a device

 

Open the JET Project for Editing

A grunt command can be used to start a local web server to run the project.

grunt serve --platform=android --web=true

For development, use the IDE or editor of your choice.
NOTE: During development of JET hybrid apps, edit the files under the “src” directory, not the “hybrid/www”. When grunt serve is run on the project, changes made in the src directory will be automatically copied over to the hybrid/www directory (this is part of the live reload feature). When grunt build executes, the files from the src directory are copied over to the hybrid directory. Note that if you don’t want to run a full build but just want to copy the files over from src to hybrid/www, run this command, which will delete everything under the hybrid/www and then copy the src files over again:

grunt clean copy

 

Main.js and AppController.js

JET uses Require.js for loading modules. In the index.html file, only two script tags are needed to run in the browser: one for Require.js and one for main.js. The main.js file defines the Require configuration as well as the top-level view model, named MainViewModel, which initializes the single-page application.

<script type="text/javascript" src="js/libs/require/require.js"></script>
<script type="text/javascript" src="js/main.js"></script>

This can be combined into a single line, if preferred. Require.js has a data-main attribute that can be used to reference main.js on the same line as Require.js.

<script type="text/javascript" data-main="js/main" src="js/libs/require/require.js"></script>

The MainViewModel also initializes the AppController.js view model object. The AppController defines the pages for the router and creates the navigation entries. Finally, the router is synchronized and then the bindings are applied to the body of the index.html file.
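
As an illustration, the tail of the generated main.js follows roughly the pattern sketched below (a simplified sketch; the exact module list and the 'globalBody' element id come from the starter template and may differ in your project):

require(['ojs/ojcore', 'knockout', 'jquery', 'appController', 'ojs/ojrouter', 'ojs/ojknockout'],
  function (oj, ko, $, app) {
    $(document).ready(function () {
      // synchronize the router with the current URL, then bind the
      // top-level view model so the navBar and page modules render
      oj.Router.sync().then(function () {
        ko.applyBindings(app, document.getElementById('globalBody'));
      });
    });
  });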

What about Cordova.js?

Cordova.js is referenced in the index.html file, but only in the hybrid/www directory. This script tag is added by the build process to the hybrid/www/index.html file. When “grunt build:dev” is executed, watch for a step in the output that reads “includeCordovaJs”. This step inserts the important cordova.js script tag into index.html.

<script src="cordova.js"></script>

The cordova.js file is needed when running on a device or emulator. When running in the browser, a 404 error will occur because of the script tag that includes cordova.js. Do not be concerned with the 404 on cordova.js when testing in the browser, but do be concerned if you see the same error on a device or emulator. The cordova.js file should be available when running on a device or emulator because the build process places it in the platform directory. For Android the location is:

<project>\hybrid\platforms\android\platform_www\cordova.js

Because cordova.js is not available when running in the browser, testing any usage of Cordova plugins can only be done on a device or emulator/simulator. However, the live reload capability of “grunt serve” makes it less time consuming to update the app on the device when making changes.

Connect to MCS using the JavaScript SDK

Ok, now let’s get to the MCS JavaScript SDK! We’ll get started by creating a simple login page in the app to allow for user/password login and logout using the SDK. As a bonus, the JET components ojLabel, ojInputText, and ojInputPassword will be used with Knockout observables to set the username and password. If you haven’t checked out the plethora of JET UI components, run – don’t walk – to the JET cookbook page. This sample only uses very basic input components, but the JET UI components are extensive and powerful. See the Data Visualization Components for proof.

A JavaScript SDK is available for hybrid apps to connect to Oracle Mobile Cloud Service. This library is not available via bower, so this will be manually copied into the project.

At the end of this section the initial page of the app should look like this image.

Figure: The login page of the app

Get the SDK library

Download the MCS JavaScript SDK from your “Get Started” page in Mobile Cloud Service. From that page, links are provided to the different SDKs available in MCS.

Unzip the file to a temp directory and locate the mcs.js and mcs-min.js files.

 

Add SDK to Project and Require.js configuration

In your app project, under the “src/js” folder, create a new folder called “mcs”. Copy mcs.js into this folder (or use the minified mcs-min.js file for these steps, if you prefer).

Update Require.js Configuration to include the MCS SDK

In the main.js file for the app, add a reference in the Require.js configuration for the MCS JavaScript SDK. Either mcs.js or the minified mcs-min.js can be referenced; the configuration below uses the non-minified version. One line is added below:

'mcs': 'mcs/mcs'

 

Config portion of main.js

requirejs.config(
{
  baseUrl: 'js',
  
  // Path mappings for the logical module names
  paths:
  //injector:mainReleasePaths
  {
    'knockout': 'libs/knockout/knockout-3.4.0.debug',
    'jquery': 'libs/jquery/jquery-2.1.3',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4',
    'promise': 'libs/es6-promise/promise-1.0.0',
    'hammerjs': 'libs/hammer/hammer-2.0.4',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0',
    'ojs': 'libs/oj/v2.0.0/debug',
    'ojL10n': 'libs/oj/v2.0.0/ojL10n',
    'ojtranslations': 'libs/oj/v2.0.0/resources',
    'text': 'libs/require/text',
    'signals': 'libs/js-signals/signals',
    'mcs': 'mcs/mcs'
  }
  //endinjector
  ,
  // Shim configurations for modules that do not expose AMD
  shim:
  {
    'jquery':
    {
      exports: ['jQuery', '$']
    }
  }
}
);

 

Get MCS URL and Keys

Log in to the MCS service console and go to the mobile backend that the app will use. On the Settings page, you can obtain the keys needed to wire your mobile app to the MCS backend. Click the “Show” links to see the URLs and keys for the mobile backend. These will be used to configure the MCS connection object in JavaScript.

Figure: The mobile backend Settings page in MCS

Create the view model

To interact with MCS from the app, an interface to MCS needs to be created. Since the MCS JavaScript SDK is already available to the app, a new JavaScript file can be created to call the MCS methods that the application needs to use.
Like the “mcs” folder that was created in the previous step, add another folder called “mbe”, an abbreviation for “Mobile Backend”. In it, create a JavaScript file called mbe.js. This should be at the same level as the mcs folder previously created.

 

Insert the code below into the file. Change the keys to use your MCS backend keys. This file does several things to configure the application’s connection to the Mobile Backend in MCS:

 

  • The configuration is defined for a backend called “JETSample”. This name doesn’t have to match the name of the backend that is actually in MCS, but can if you wish to match it.
  • The “mcs” reference is included in the define array of dependencies. This reference is available because the Require.js configuration in main.js was already updated in an earlier step.
  • The mcs_config object contains the URL and keys for accessing MCS. This object is defined in the official documentation for MCS. The basicAuth and OAuth portions of the object are defined, but only basicAuth is used in the demo.
  • The init() method in the file declares and initializes the MCS backend object, setting the authentication type to basic auth. A user named jetuser is defined in this particular MCS backend.
  • The methods defined in this file are for login and logout of the backend. A method for anonymous authentication is included but will not be used. The authenticate and logout methods will be used for a demo on the dashboard page to work in concert with JET components for a basic login page.
  • This file also contains methods for adding to a collection in MCS, which we will use later when interacting with the Cordova camera plugin. This example hardcodes a Storage collection name, “JETCollection”, that is assumed to be defined on the MCS backend.

mbe.js

define(['jquery', 'mcs'], function ($) {
    //define MCS mobile backend connection details
    var mcs_config = {
        "logLevel": mcs.logLevelInfo,
        "mobileBackends": {
            "JETSample": {
                "default": true,
                "baseUrl": "https://mobileportalsetrial-yourdomain.mobileenv.us2.oraclecloud.com:443",
                "applicationKey": "0fc655f4-0000-4876-0000-0000000000000",
                "authorization": {
                    "basicAuth": {
                        "backendId": "b00000cf-0000-4cda-a3e0-000000000000",
                        "anonymousToken": "redacted"
                    },
                    "oAuth": {
                        "clientId": "00000000-2c24-4667-b80e-000000000000000",
                        "clientSecret": "mysecretkey",
                        "tokenEndpoint": "https://mobileportalsetrial-yourdomain.mobileenv.us2.oraclecloud.com/oam/oauth2/tokens"
                    }
                }
            }
        }
    };

    function MobileBackend() {
        var self = this;
        self.mobileBackend;
        //Always using the same collection in this example, called JETCollection. Can be dynamic if using multiple collections, but for example using one collection.
        var COLLECTION_NAME = "JETCollection";
        function init() {
            mcs.MobileBackendManager.setConfig(mcs_config);
            //MCS backend name for example is JETSample. 
            self.mobileBackend = mcs.MobileBackendManager.getMobileBackend('JETSample');
            self.mobileBackend.setAuthenticationType("basicAuth");            
        }

        //Handles the success and failure callbacks defined here
        //Not using anonymous login for this example but including here. 
        self.authAnonymous = function () {
            console.log("Authenticating anonymously");
            self.mobileBackend.Authorization.authenticateAnonymous(
                    function (response, data) {                        
                        console.log("Success authenticating against mobile backend");
                    },
                    function (statusCode, data) {
                        console.log("Failure authenticating against mobile backend");
                    }
            );
        };

        //This handles success and failure callbacks using parameters (unlike the authAnonymous example)
        self.authenticate = function (username, password, successCallback, failureCallback) {
            self.mobileBackend.Authorization.authenticate(username, password, successCallback, failureCallback);
        };

        //Logout of the backend. The success and failure callbacks are accepted for
        //symmetry with authenticate(), but are not forwarded to the SDK logout call here.
        self.logout = function (successCallback, failureCallback) {
            self.mobileBackend.Authorization.logout();
        };

        self.isAuthorized = function () {
            return self.mobileBackend.Authorization.isAuthorized;
        };
       
        self.uploadFile = function (filename, payload, mimetype, callback) {            
            self.getCollection().then(success);                        
            
            function success(collection) {                
                //create new Storage object and set its name and payload
                var obj = new mcs.StorageObject(collection);
                obj.setDisplayName(filename);
                obj.loadPayload(payload, mimetype);                
                return self.postObject(collection, obj).then(function (object) {                                        
                    callback(object);
                });
            }
        }
        
        //getCollection taken from official documentation example at site https://docs.oracle.com/cloud/latest/mobilecs_gs/MCSUA/GUID-7DF6C234-8DFE-4143-B138-FA4EB1EC9958.htm#MCSUA-GUID-7A62C080-C2C4-4014-9590-382152E33B24
        //modified to use JQuery deferred instead of $q as shown in the documentation
        self.getCollection = function () {
            var deferred = $.Deferred();

            //return a storage collection with the name assigned to the collection_id variable.
            self.mobileBackend.Storage.getCollection(COLLECTION_NAME, self.mobileBackend.Authorization.authorizedUserName, onGetCollectionSuccess, onGetCollectionFailed);

            return deferred.promise();

            function onGetCollectionSuccess(status, collection) {
                console.log("Collection id: " + collection.id + ", description: " + collection.description);
                deferred.resolve(collection);
            }

            function onGetCollectionFailed(statusCode, headers, data) {
                console.log(mcs.logLevelInfo, "Failed to download storage collection: " + statusCode);
                deferred.reject(statusCode);
            }
        };

        //postObject taken from official documentation example at site https://docs.oracle.com/cloud/latest/mobilecs_gs/MCSUA/GUID-7DF6C234-8DFE-4143-B138-FA4EB1EC9958.htm#MCSUA-GUID-7A62C080-C2C4-4014-9590-382152E33B24
        //modified to use JQuery deferred instead of $q as shown in the documentation
        self.postObject = function (collection, obj) {
            var deferred = $.Deferred();

            //post an object to the collection
            collection.postObject(obj, onPostObjectSuccess, onPostObjectFailed);
            
            return deferred.promise();

            function onPostObjectSuccess(status, object) {            
                console.log("Posted storage object, id: " + object.id);
                deferred.resolve(object.id);
            }

            function onPostObjectFailed(statusCode, headers, data) {
                console.log("Failed to post storage object: " + statusCode);
                deferred.reject(statusCode);
            }
        };

        init();
    }

    return new MobileBackend();
});

 

Add JET Components to the dashboard page

For this example the dashboard page will have a login form added to it. The JET cookbook can be used to understand how to add form elements and buttons into a view and viewModel. Knockout knowledge is important when using JET components. For beginning Knockout users, the http://learn.knockoutjs.com/ tutorial provides a good hands-on lab.

Replace the scaffolded app’s dashboard.html page with the following html, which adds username and password entry fields using JET components. The login form is only visible when the user is not authorized, and the logout button is only visible when the user is authorized. The Knockout visible binding is used to hide and show elements based on the login status.

The username and password entry field values are bound to Knockout observables defined in dashboard.js. JET bindings for form inputs use the “value” attribute. The login button is bound on the click event to a function in the view model called login. Likewise, the logout button is bound on a click event to a logout function in the view model.

Additional elements that will be used are commented out for the time being.

dashboard.html

<div class="oj-hybrid-padding">
  <h3>Dashboard Content Area</h3>
  <div>
    <div data-bind="visible: !isLoggedIn()" class="oj-flex oj-sm-flex-direction-column oj-md-flex-direction-column"> 
      <div class="oj-flex-item">
        <label for="text-input">Username</label>
        <input id="text-input" type="text" data-bind="ojComponent: {component: 'ojInputText', value: username}"/>
      </div>
      <div class="oj-flex-item">
        <label for="password">Password</label>
        <input type="password" id="password" data-bind="ojComponent: {component: 'ojInputPassword', value: password}"/>
      </div>
      <div class="oj-flex-item">
        <input id="inputButton" type="button" data-bind="click: login, ojComponent: {component: 'ojButton', label: 'Login', chroming: 'full'}"/>
      </div>
    </div> 
  </div>
  <div data-bind="visible: isLoggedIn">
    <input id="inputButton" type="button" data-bind="click: logout, ojComponent: {component: 'ojButton', label: 'Logout', chroming: 'full'}"/> 

    <!--<input id="inputButton" type="button" data-bind="click: takePicture, ojComponent: {component: 'ojButton', label: 'Take a picture', chroming: 'none'}"/> 
    <br>
    <img id="cameraImage" src="" height="250" width="100%" data-bind="attr: { src: picture }">

    <input id="inputButton" type="button" data-bind="click: uploadPicture, ojComponent: {component: 'ojButton', disabled: (picture() === null), label: 'Upload to MCS', chroming: 'half'}"/> -->
  </div> 
</div>

 

Edit the dashboard view model

The dashboard.html page changes require related changes to the view model. Replace the dashboard.js file with the JavaScript below.

The first change is to include the proper dependencies in the define method (first line). The Mobile Backend helper class needs to be added here as a reference in order to make calls to the MCS API that we created in mbe.js.

Additional references are needed for the JET components used on the page, in this case buttons and form inputs. Three Knockout observables are defined: one for the login status, one for the username, and one for the password. These are bound in the html to the JET input components. The login status is used to hide or show the login form or logout button based on the status (true or false). The username and password are initialized to working values so no typing is required when testing the login. An observable for the picture is also defined for when we later add the Cordova camera plugin.

Notice the methods “authenticate” and “logout” call the Mobile Backend helper object’s methods to handle calls to MCS. In the case of the login method, callbacks are passed to the Mobile Backend helper so that upon success or failure the dashboard view model can react.

 

dashboard.js

define(['ojs/ojcore', 'knockout', 'jquery', 'mbe/mbe', 'ojs/ojknockout', 'ojs/ojselectcombobox', 'ojs/ojbutton', 'ojs/ojinputtext'],
        function (oj, ko, $, mbe) {
            function DashboardViewModel() {
                var self = this;
                self.isLoggedIn = ko.observable(false);

                //set user to a default value to quicken your login testing
                self.username = ko.observable("jetuser");

                //the password observable is bound to the ojInputPassword field;
                //set it to your test user's password to avoid typing during login testing
                self.password = ko.observable("");

                self.picture = ko.observable(null);

                //pass callbacks to the login to trigger page behavior on success or failure
                self.login = function () {
                    mbe.authenticate(self.username(), self.password(), self.loginSuccess, self.loginFailure);
                };

                //log out of the backend and reset the login status
                self.logout = function () {
                    mbe.logout();
                    self.isLoggedIn(false);
                };

                self.loginSuccess = function (response) {
                    console.log(response);                    
                    self.isLoggedIn(true);
                };

                self.loginFailure = function (statusCode) {
                    self.isLoggedIn(false);
                    alert("Login failed! " + statusCode);
                };

            }
            return new DashboardViewModel;
        }
);

 

Test the login and logout

You may have to run “grunt clean copy” once to get all the updated files in place, and then run grunt serve. If a 401 error occurs on login, make sure that you are not hitting the CORS error (a workaround is described in the next section).

After login, the Knockout visible binding on the isLoggedIn observable then hides the form and shows the logout button.

Figure: The dashboard after login, showing the Logout button

Cross Origin Error?

Calling backend services on MCS from your app while testing in a browser may cause a CORS error. This will show up in the browser console as an error about the “Access-Control-Allow-Origin” header. An example of this error is:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://mymcshost.mobileenv.us2.oraclecloud.com/mobile/platform/users/login. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).

 

To avoid this error while developing and testing in the browser, refer to Appendix B of the MCS guide, which covers Oracle Mobile Cloud Service Environment Policies. Specifically, the Security_AllowOrigin policy can be changed from the default value of “disallow” to “allow” in order to work around this error in your development environment.

Content Security Policy Header in index.html

The Content-Security-Policy meta tag is an important addition to your hybrid app’s index.html file. If the tag is not set, warnings will appear in the browser console while the app is running on Android. The details of this meta tag can be found on the Cordova whitelist-plugin GitHub page.

https://github.com/apache/cordova-plugin-whitelist

A default declaration for your app can be the following. Note that the “gap:” entry is needed for iOS when the Cordova Splash plugin is in use. Put this into the head of the index.html file. Specific needs may require alterations to this, and for production consider changing this to point only to the MCS host (and the specific hosts that your app needs to communicate with).

<!--Allows connection to any host. Consider changing this to only allow calls to your MCS host: connect-src 'self' http://mobilecloudservicehost -->
<meta http-equiv="Content-Security-Policy" content="default-src * data: gap:; script-src 'self' 'unsafe-inline' 'unsafe-eval' 127.0.0.1:* localhost:*; style-src 'self' 'unsafe-inline'; media-src *"/>

Working with Cordova Plugins

This set of steps briefly covers how to add Cordova plugins to the project; for full details on Cordova plugins see the Apache Cordova website. The sample in this demo takes a picture using the device and then uploads it to an MCS collection as a jpg file.
Cordova has a set of core plugins that can be added to the app. To add a plugin, change directory on the command line into your project’s “hybrid” folder. This is where the config.xml file for Cordova is located, which defines the Cordova plugins, platforms, and hooks.
To add a plugin, use the “cordova plugin add” command. To persist the change to the config.xml file, use the --save option on the command. Add the camera and file plugins using these commands.

cordova plugin add cordova-plugin-camera --save
cordova plugin add cordova-plugin-file --save

Once the plugin is installed, the config.xml will have two new lines added to it.

 

<plugin name="cordova-plugin-camera" spec="~2.1.1" />
<plugin name="cordova-plugin-file" spec="~4.1.1" />

These lines should be near the end of the config.xml file. For questions about the contents of the config.xml, Apache has a reference guide for the config.xml file that describes all of the entries in detail.

Add a Mobile Backend Method for Uploading to a MCS Collection

In the Mobile Backend helper file that uses the MCS JavaScript SDK, the methods required for uploading files to a collection were already added when you created mbe.js. In MCS, create a collection named “JETCollection” (or whatever you like). The collection name is set in the mbe.js file in a variable accessible by the view model methods; this was already done in the mbe.js sample code listed earlier in this post.

var COLLECTION_NAME = "JETCollection";

The getCollection and postObject calls are documented using the $q async/promise library at this MCS documentation link. However, since JET already has JQuery available, the $.Deferred object can be used for handling the async calls to MCS for getting a collection and posting a file to a storage collection. In the callbacks, the deferred object can be resolved or rejected.

From the dashboard, the uploadFile method can be used by passing a filename, a file blob (arrayBuffer), the MIME type of the file (image/jpeg), and lastly a callback to run on completion so that the user can be notified that the upload is done.
The uploadFile method performs two tasks: first, it gets the collection from MCS, and second, it POSTs the file object to the storage collection.

 

Using the Plugins in the Dashboard

The capabilities of the “cordova-plugin-camera” plugin are documented on the Apache site where the core Cordova plugins are documented. For this example, we will add a button to the dashboard page that uses the getPicture method of the camera plugin.
First, edit the view and view model. Add two buttons and an image tag to dashboard.html. (These were commented out in the earlier dashboard.html listing, so you can now uncomment these lines.) This html is placed after the “Logout” button but inside the same div so that the picture area is only visible when the user is logged in.

For illustration, this JET button shows the chroming option of “none”, which styles the button differently from the login and logout buttons previously created. A third option for chroming is “half”, which gives another look to the button; the “Upload to MCS” button uses the “half” option. Another JET component option on the button is to make it disabled based on certain criteria in the view model: if the picture variable in the view model is null, the disabled option is turned on. This only allows the user to click the button when a picture has already been taken.

 

Add tags on dashboard.html

<input id="inputButton" type="button" data-bind="click: takePicture, ojComponent: {component: 'ojButton', label: 'Take a picture', chroming: 'none'}"/> 
<br>
<img id="cameraImage" src="" height="250" width="100%" data-bind="attr: { src: picture }"> 
<input id="inputButton" type="button" data-bind="click: uploadPicture, ojComponent: {component: 'ojButton', disabled: (picture() === null), label: 'Upload to MCS', chroming: 'half'}"/>

 

The buttons rely on methods named “takePicture” and “uploadPicture” in the view model, so add these functions in the view model dashboard.js. The takePicture method uses the “navigator.camera” object, which is available thanks to the Cordova plugin. Add a check for the existence of the camera object, because when running in the browser this object will not exist. To take a picture, the camera plugin has a method called “getPicture”. The parameters are success and failure callback methods. A third parameter for camera options is available; the options parameter allows control over image quality, encoding, height, width, and more. Add the code in the block below to dashboard.js.

Add this block to dashboard.js

//use the Cordova camera plugin to take a picture
                self.takePicture = function () {
                    if (navigator.camera && typeof navigator.camera !== "undefined") {       
                        //sample camera options, using defaults here but for illustration....
                        //Note that the destinationType can be a DATA_URL but cordova plugin warns of memory usage on that.
                        var cameraOptions = {
                            quality: 50,
                            destinationType: Camera.DestinationType.FILE_URL,
                            sourceType: Camera.PictureSourceType.CAMERA,
                            allowEdit: false,
                            encodingType: Camera.EncodingType.JPEG,                            
                            saveToPhotoAlbum: false,
                            correctOrientation: true
                        };
                        //use camera pluging method to take a picture, use callbacks for handling result
                        navigator.camera.getPicture(cameraSuccess, cameraError, cameraOptions);
                    } else {
                        //running on web, the navigator.camera object will not be available
                        console.log("The navigator.camera object is not available.")
                    }
                };

                function cameraSuccess(imageData) {
                    //returns a file path such as: file:///storage/emulated/0/Android/data/org.oraclejet.mcsexample/cache/1459277993352.jpg
                    //set observable to the path; the img tag's src attribute in the view will be updated.
                    self.picture(imageData);

                }

                function cameraError(error) {
                    console.log(error);
                }

                self.uploadPicture = function () {
                    //load file as blob, then once loaded add to MCS collection. Use callback for when complete.                    
                    getBlobFromFile(self.picture())
                            .then(function (arrayBuffer) {
                                mbe.uploadFile("picture.jpg", arrayBuffer, "image/jpeg", self.pictureUploadSuccess);                                
                            });
                };

                self.pictureUploadSuccess = function (objectid) {
                    console.log(objectid);
                    //showing alert to notify user that upload completed, MCS object id is shown.
                    alert("Picture uploaded to MCS, id is: " + objectid);
                };

                function getBlobFromFile(filepath) {
                    //Use Cordova file plugin API to get the file:
                    //On success, load the file as array buffer
                    var deferred = $.Deferred();

                    if (window.resolveLocalFileSystemURL && typeof window.resolveLocalFileSystemURL !== "undefined") {
                        window.resolveLocalFileSystemURL(filepath,
                                function (fileEntry) {
                                    //on success use fileEntry handle to read the file, then run callback when onloadend event occurs 
                                    fileEntry.file(function (file) {
                                        var reader = new FileReader();
                                        reader.onloadend = function (e) {                                            
                                            deferred.resolve(this.result);
                                        };
                                        reader.onerror = function (e) {                                            
                                            deferred.reject(e);
                                        };
                                        reader.readAsArrayBuffer(file);
                                    });
                                },
                                function (error) {
                                    console.log("Error getting file. Message: ", error);
                                    deferred.reject(error);                                    
                                }
                        );
                    } else {
                        var msg = "The object window.resolveLocalFileSystemURL does not exist. Cannot get file."
                        console.log(msg);
                        //reject immediately
                        deferred.reject(msg);
                    }
                    return deferred.promise();
                }

Build and Serve to Device

To build and run on an emulator or device, be sure that you have followed the guides for your mobile operating system.

Android: Follow the steps on Cordova’s site for Android setup.
https://cordova.apache.org/docs/en/latest/guide/platforms/android/index.html

iOS: Follow the steps on Cordova’s site for iOS setup.
https://cordova.apache.org/docs/en/latest/guide/platforms/ios/index.html

For Android, if you have an emulator running or a device plugged in via USB, you should be able to see them using the Android Debug Bridge (adb). Running adb devices will show all emulators and devices attached.

adb devices

Output:

List of devices attached
c143bx44 device

If your device or emulator doesn’t show up, check the documentation on the Android Debug Bridge page to see if you have set up the device or emulator properly. For devices that fail to appear in the list, USB debugging may not be enabled on the device itself, or a driver for your device isn’t installed on the machine (see the Google OEM drivers page).

 

Build and Serve for Android

The first step once you are ready to deploy is to build the .apk file for Android. This command is already familiar from when you first created the hybrid app and performed a build on the project. Keep in mind that this command should be run from the project root, where the Gruntfile.js is located; it does not run from the hybrid directory. Only cordova commands need to be run in the hybrid directory.
Note: These steps are covered in the official JET documentation.

grunt build:dev --platform=android

Once the .apk file is built, you can serve the app to the device or emulator. The following command installs the app onto the device.

grunt serve:dev --platform=android --destination=device

The serve command should show a successful build followed by the installation and launch of the .apk file on the device or emulator.

BUILD SUCCESSFUL
Total time: 8.941 secs
Built the following apk(s):
D:\jet\mcsexample\hybrid\platforms\android\build\outputs\apk\android-debug.apk
Using apk: D:\jet\mcsexample\hybrid\platforms\android\build\outputs\apk\android-debug.apk
Installing app on device...
Launching application...
LAUNCH SUCCESS
Done, without errors.

 

Take a Picture

Take a picture. Hopefully your picture will be more inspiring than the picture of my desk! The dashboard should show the image after a picture is taken. The image can then be uploaded to MCS by clicking the “Upload to MCS” button.

Figure: The dashboard showing a captured picture

 

If you are using the Android emulator, and the “Back Camera” is set to “Emulated”, the picture you can take is a green square on a background of checkered squares.

Figure: A picture taken with the Android emulator's emulated camera

 

Once the file is uploaded, a new “picture.jpg” will be visible in the Collection.

Figure: The uploaded picture.jpg visible in the MCS collection

Inspect in Chrome

To troubleshoot the app on Android, open Chrome and enter the address “chrome://inspect”. This should show a list of running applications on the device. In the image below, “WebView in org.oraclejet.mcsexample” is the app that was served. Clicking the “inspect” link brings up Chrome developer tools and allows debugging the app while it runs on the device.

Figure: chrome://inspect showing the running app

The inspect view allows the use of Chrome developer tools while the app runs on the device or emulator.

Figure: Chrome developer tools inspecting the dashboard page

Conclusion

The MCS JavaScript SDK provides the link between your Cordova-based application and Oracle Mobile Cloud Service. As in this example using Oracle JET, the same SDK can be used with Ionic/Angular apps. The MCS JavaScript SDK covers more than just authentication and file storage, including the ability to register the device, log analytics, and call custom APIs. The SDK can save time in developing hybrid apps since the interactions with the backend are already built for you, and all of the power of Oracle Mobile Cloud Service is at your command using a supported SDK.

 

Integration Cloud Service (ICS) Security & Compliance


The attached white paper is the product of a joint A-Team effort that included Deepak Arora, Mike Muller, and Greg Mally. Oracle Integration Cloud Service (ICS) runs within the Oracle Cloud, where the architecture is designed to provide customers with a unified suite of Cloud Services with best-in-class performance, scalability, availability, and security. The Cloud Services are designed to run on a unified data center, hardware, software, and network architecture. This document is based on the Cloud Security Assessment section of the Security for Cloud Computing: 10 Steps to Ensure Success V2.0 document, which is produced by the Cloud Standards Customer Council, of which Oracle is a member.

For more details, see attached:

ICS Security and Compliance_v1.0

Java API for Integration Cloud Service


Introduction

Oracle ICS (Integration Cloud Service) provides a set of handy REST APIs that allow users to manage and monitor related artifacts such as connections, integrations, lookups and packages. It also allows the retrieval of monitoring metrics for further analysis. More details can be found in the following documentation link.

The primary use case for these REST APIs is to allow command-line interactions to perform tasks such as gathering data about ICS integrations or backing up a set of integrations by exporting their contents. In order to interface with these REST APIs, users may adopt command-line utilities such as cURL to invoke them. For instance, the command below shows how to retrieve information about a connection:

curl -u userName:password -H "Accept: application/json" -X GET https://your-ics-pod.integration.us2.oraclecloud.com/icsapis/v1/connections/connectionId

If the command above executes successfully, the output should be something like the following JSON payload:

{
   "links":{
      "@rel":"self",
      "@href":"https:// your-ics-pod.integration.us2.oraclecloud.com:443/icsapis/v1/connections/connectionId"
   },
   "connectionproperties":{
      "displayName":"WSDL URL",
      "hasAttachment":"false",
      "length":"0",
      "propertyGroup":"CONNECTION_PROPS",
      "propertyName":"targetWSDLURL",
      "propertyType":"URL_OR_FILE",
      "required":"true"
   },
   "securityproperties":[
      {
         "displayName":"Username",
         "hasAttachment":"false",
         "length":"0",
         "propertyDescription":"A username credential",
         "propertyGroup":"CREDENTIALS",
         "propertyName":"username",
         "propertyType":"STRING",
         "required":"true"
      },
      {
         "displayName":"Password",
         "hasAttachment":"false",
         "length":"0",
         "propertyDescription":"A password credential",
         "propertyGroup":"CREDENTIALS",
         "propertyName":"password",
         "propertyType":"PASSWORD",
         "required":"true"
      }
   ],
   "adaptertype":{
      "appTypeConnProperties":{
         "displayName":"WSDL URL",
         "hasAttachment":"false",
         "length":"0",
         "propertyGroup":"CONNECTION_PROPS",
         "propertyName":"targetWSDLURL",
         "propertyType":"URL_OR_FILE",
         "required":"true"
      },
      "appTypeCredProperties":[
         {
            "displayName":"Username",
            "hasAttachment":"false",
            "length":"0",
            "propertyDescription":"A username credential",
            "propertyGroup":"CREDENTIALS",
            "propertyName":"username",
            "propertyType":"STRING",
            "required":"true"
         },
         {
            "displayName":"Password",
            "hasAttachment":"false",
            "length":"0",
            "propertyDescription":"A password credential",
            "propertyGroup":"CREDENTIALS",
            "propertyName":"password",
            "propertyType":"PASSWORD",
            "required":"true"
         }
      ],
      "appTypeLargeIconUrl":"/images/soap/wssoap_92.png",
      "appTypeMediumGrayIconUrl":"/images/soap/wssoap_g_46.png",
      "appTypeMediumIconUrl":"/images/soap/wssoap_46.png",
      "appTypeMediumWhiteIconUrl":"/images/soap/wssoap_w_46.png",
      "appTypeName":"soap",
      "appTypeSmallIconUrl":"/images/soap/wssoap_32.png",
      "displayName":"SOAP",
      "features":"",
      "source":"PREINSTALLED",
      "supportedSecurityPolicies":"Basic Authentication, Username Password Token, No Security Policy"
   },
   "code":"connectionCode",
   "imageURL":"/images/soap/wssoap_w_46.png",
   "name":"Connection Name",
   "percentageComplete":"100",
   "securityPolicy":"USERNAME_PASSWORD_TOKEN",
   "status":"CONFIGURED",
   "supportsCache":"true"
}

These APIs were designed to return JSON payloads in most cases. However, some operations allow the result to be returned in the XML format. Users can control this by specifying the “Accept” HTTP header in the request.
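For example, a call equivalent to the one shown earlier could ask for an XML response simply by changing the Accept header (an illustrative command, assuming the operation supports XML output):

curl -u userName:password -H "Accept: application/xml" -X GET https://your-ics-pod.integration.us2.oraclecloud.com/icsapis/v1/connections/connectionId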

Regardless of which format is chosen, users are expected to handle the payload in order to read the data. This means that they will need to develop a program that retrieves the payload, parses it somehow and then works with that data. The same applies to invoking REST endpoints with path parameters or posting payloads to them. The end result is that a considerable amount of boilerplate code must be written, tested and maintained, no matter which programming language is chosen.

Aiming to make things easier, the Oracle A-Team developed a Java API to abstract the technical details about how to interact with the REST APIs. The result is a simple-to-use, very small JAR file that contains all you need to rapidly create applications that interact with ICS. Because the API is written in Java, it can be reused across a wide set of programming languages that can run on a JVM including Clojure, JavaScript, Groovy, Scala, Ruby, Python, and of course Java.

The Java API for ICS is provided free to use “AS-IS” but without any official support from Oracle. Bugs, feedback and enhancement requests are welcome, but they need to be submitted through the comments section of this blog, and the A-Team will help on a best-effort basis.

This blog will walk you through the steps required to use this Java API, providing code samples that demonstrate how to implement a number of common use cases.

Getting Started with the Java API for ICS

The first thing you need to do to start playing with the Java API for ICS is to download a copy of the library. You can get a free copy here. This library also depends on a few Open-Source libraries so you will need to download these as well. The necessary libraries are:

* FasterXML Jackson 2.0: The library uses this framework to handle JSON transformations back and forth to Java objects. You can download the libraries here. The necessary JAR files are: jackson-core, jackson-annotations and jackson-databind.

* Apache HTTP Components: Used to handle any HTTP interaction with the REST APIs for ICS. You can download the libraries here. The necessary JAR files are: http-core, http-client, http-mime, commons-codec and commons-logging.

It is important to remember that you must use JDK 1.6 or higher; any older JDK version won't work. Once all the libraries are on the classpath, you will be ready to get started.
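As a quick illustration of the classpath setup (the lib directory layout below is hypothetical; on Windows use ";" instead of ":" as the path separator), the CreatingTokens sample shown in the next section could be compiled and run like this:

# Assumes all downloaded JAR files were copied into a local "lib" folder
javac -cp "lib/*" CreatingTokens.java
java -cp "lib/*:." CreatingTokens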

Excuse me Sir – May I Have a Token?

The Java API for ICS was designed to provide the highest level of simplicity possible. Thus, pretty much all operations can be executed by calling a single object. This object is called Token. In simpler terms, a token gives you access to execute operations against your ICS pod. However, as you may expect, tokens are not freely accessible. In order to create a token, your code needs to authenticate against ICS. The example below shows how to create a token.

import com.oracle.ateam.cloud.ics.javaapi.Token;
import com.oracle.ateam.cloud.ics.javaapi.TokenManager;

public class CreatingTokens {

   public static void main(String[] args) throws Exception {

      String serviceUrl = "https://your-ics-pod.integration.us2.oraclecloud.com";
      String userName = "yourUserName";
      String password = "yourPassword";

      Token token = TokenManager.createToken(serviceUrl, userName, password);
      System.out.println("Yeah... I was able to create a token: " + token);

   }

}

The parameters used to create a token are pretty straightforward, and you should be familiar with them for your ICS pod. When the createToken() method is executed, it tries to authenticate against the ICS pod mentioned in the service URL. If for some reason the authentication does not happen, an exception will be raised with proper details. Otherwise, the token will be created and returned to the caller.

A token is a very lightweight object that can be reused across your application. Thus, it is a good idea to cache it after its creation. Another important aspect of the token is that it is thread-safe. That means that multiple threads can simultaneously invoke its methods without concerns about locks of any kind.
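For instance, a minimal sketch of such caching could look like the class below (the TokenHolder class is hypothetical and not part of the API; only TokenManager and Token come from the library):

import com.oracle.ateam.cloud.ics.javaapi.Token;
import com.oracle.ateam.cloud.ics.javaapi.TokenManager;

public class TokenHolder {

   // Cached instance, shared by all threads of the application
   private static volatile Token cachedToken;

   // Lazily creates the token once and reuses it afterwards, since tokens are thread-safe
   public static Token getToken(String serviceUrl, String userName, String password) throws Exception {
      if (cachedToken == null) {
         synchronized (TokenHolder.class) {
            if (cachedToken == null) {
               cachedToken = TokenManager.createToken(serviceUrl, userName, password);
            }
         }
      }
      return cachedToken;
   }

}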

Using a Token to Perform Operations against ICS

Once you have properly created a token, you can start writing code to retrieve data from ICS and/or invoke operations against it. The examples below will show various ways in which the Java API for ICS can be used.

Listing all the integrations; who created them and their status

Token token = TokenManager.createToken(serviceUrl, userName, password);
List<Integration> integrations = token.retrieveIntegrations();

for (Integration integration : integrations) {

   System.out.println(integration.getName() + ": Created by '" +
   integration.getCreatedBy() + "' and it is currently " +
   integration.getStatus());

}

Showing source and target connections of one specific integration

Token token = TokenManager.createToken(serviceUrl, userName, password);
Integration integration = token.retrieveIntegration(integrationId, integrationVersion);
		
System.out.println("Integration: " + integration.getName());
Connection sourceConnection = integration.getSourceConnection();
Connection targetConnection = integration.getTargetConnection();
		
System.out.println("   Source Connection: " + sourceConnection.getName() +
         " (" + sourceConnection.getAdapterType().getDisplayName() + ")");
System.out.println("   Target Connection: " + targetConnection.getName() +
         " (" + targetConnection.getAdapterType().getDisplayName() + ")");

Exporting all integrations that are currently active

final String BACKUP_FOLDER = "/home/rferreira/ics/backup/";
Token token = TokenManager.createToken(serviceUrl, userName, password);
List<Integration> integrations = token.retrieveIntegrations();
		
for (Integration integration : integrations) {
						
   if (integration.getStatus().equals("ACTIVATED")) {
				
      Status status = integration.export(BACKUP_FOLDER + integration.getCode());
      System.out.println(integration.getCode() + " = " + status.getStatusInfo());
				
   }
			
}

Alternatively, if you are using JDK 1.8 then you could rewrite the entire for-each code using Lambdas:

integrations.parallelStream()
   .filter(i -> i.getStatus().equals("ACTIVATED"))
   .forEach((i) -> {
				
      String fileName = BACKUP_FOLDER + i.getCode();
      System.out.println(i.export(fileName).getStatusInfo());
			
   });

Printing monitoring metrics of one specific integration

Token token = TokenManager.createToken(serviceUrl, userName, password);
MonitoringMetrics monitoringMetrics = token.retrieveMonitoringMetrics(integrationId, integrationVersion);
		
System.out.println("Flow Name: " + monitoringMetrics.getFlowName());
System.out.println("   Messages Received...: " + monitoringMetrics.getNoOfMsgsReceived());
System.out.println("   Messages Processed..: " + monitoringMetrics.getNoOfMsgsProcessed());
System.out.println("   Number Of Errors....: " + monitoringMetrics.getNoOfErrors());
System.out.println("   Errors in Queues....: " +
   monitoringMetrics.getErrorsInQueues().getErrorObjects().size());
System.out.println("   Success Rate........: " + monitoringMetrics.getSuccessRate());
System.out.println("   Avg Response Time...: " + monitoringMetrics.getAvgRespTime());
System.out.println("   Last Updated By.....: " + monitoringMetrics.getLastUpdatedBy());

Deleting all connections that are currently incomplete

Token token = TokenManager.createToken(serviceUrl, userName, password);
List<Connection> connections = token.retrieveConnections();
		
for (Connection connection : connections) {
			
   if (Integer.parseInt(connection.getPercentageComplete()) < 100) {
				
      connection.delete();
				
   }
			
}

Deactivating all integrations whose name begins with “POC_”

Token token = TokenManager.createToken(serviceUrl, userName, password);
List<Integration> integrations = token.retrieveIntegrations();
		
for (Integration integration : integrations) {
			
   if (integration.getName().startsWith("POC_")) {
				
      System.out.println(integration.deactivate());
				
   }
			
}

Listing all packages and their integrations (using JDK 1.8 Lambdas)

Token token = TokenManager.createToken(serviceUrl, userName, password);
List<com.oracle.ateam.cloud.ics.javaapi.types.Package> pkgs = token.retrievePackages();
		
pkgs.forEach((p) -> {
			
   System.out.println("Package Name: " + p.getPackageName());
   p.getPackageContent().forEach(pc -> System.out.println(pc.getName()));
		
});

Importing integrations into ICS from the previously exported archive

Token token = TokenManager.createToken(serviceUrl, userName, password);
Status status = token.importIntegration("home/rferreira/ics/backup/myInteg.iar", false);
System.out.println(status.getStatusInfo());

Tip: the boolean parameter in the importIntegration() method controls whether or not the integration should be replaced. If that parameter is set to true, it will overwrite any existing integration that has the same name. If it is set to false, it will assume that the integration does not exist in ICS and will create it.
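For example, reusing the archive path from the sample above, the two behaviors would look like this (an illustrative sketch only):

Token token = TokenManager.createToken(serviceUrl, userName, password);

// Replace an existing integration that has the same name, if any
Status replaced = token.importIntegration("home/rferreira/ics/backup/myInteg.iar", true);
System.out.println(replaced.getStatusInfo());

// Assume the integration does not exist yet in ICS and create it
Status created = token.importIntegration("home/rferreira/ics/backup/myInteg.iar", false);
System.out.println(created.getStatusInfo());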

Alternatively, you can import a complete set of integrations at once by importing a package:

Status status = token.importPackage("home/rferreira/ics/backup/samplePackage.par");
System.out.println(status.getStatusInfo());

Conclusion

ICS is a powerful iPaaS solution offered by Oracle that provides a robust set of management capabilities. Along with its development console, it also provides a set of REST APIs that enable the creation of custom apps that can fetch data from ICS. Although those REST APIs are useful, developers often need more productivity while writing their code. This blog introduced the Java API for ICS, a simple-to-use library that abstracts the technical details of the REST APIs. We provided details on how to download, configure, and use the library, and code samples were provided to demonstrate how to use the Java API to access a range of common ICS management functions.

Creating custom Fusion Applications User Interfaces using Oracle JET


Introduction

JET is Oracle’s new mobile toolkit specifically written for developers to help them build client-side applications using JavaScript. Oracle Fusion Applications implementers are often given the requirement to create mobile, or desktop browser, based custom screens for Fusion Applications. There are many options available to the developer, for example Oracle ADF (Java-based) and Oracle JET (JavaScript-based). This blog article gives the reader a tutorial-style document on how to build a hybrid application using data from Oracle Fusion Sales Cloud. It is worth highlighting that although this tutorial uses Sales Cloud, the technique below is equally applicable to HCM Cloud, or any other Oracle SaaS cloud product which exposes a REST API.

Main Article

Pre-Requisites

It is assumed that you’ve already read the getting started guide on the Oracle JET website and installed all the pre-requisites. In addition, if you are going to create a mobile application, you will also need to install the mobile SDK from Apple (XCode) or Android (Android SDK).

 

You must have an Apple Mac to be able to install the Apple iOS developer kit (XCode); it is not possible to run XCode on a Windows PC.

Dealing with SaaS Security

Before building the application itself we need to start executing the REST calls and getting our data, and security is going to be the first hurdle we need to cross. Most Sales Cloud installations allow “Basic Authentication” to their APIs, so in REST this involves creating an HTTP header called “Authorization” with the value “Basic <your username:password>”, with the <username:password> section encoded as Base64. An alternative approach, used when embedding the application within Oracle SaaS, is to use a generated JWT token. This token is generated by Oracle SaaS using either Groovy or expression language. When embedding the application in Oracle SaaS you have the option of passing parameters; the JWT token would be one of these parameters and can subsequently be used instead of the <username:password>. When using a JWT token the Authorization string changes slightly so that instead of “Basic” it becomes “Bearer”.

 

Usage                | Header Name   | Header Value
Basic Authentication | Authorization | Basic <your username:password base64 encoded>
JWT Authentication   | Authorization | Bearer <JWT Token>
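As a simple illustration (assuming the username/password or the JWT token are already available to your script), the Authorization header value could be built in JavaScript like this:

// Basic Authentication: browsers expose btoa() to perform the Base64 encoding
var basicHeader = "Basic " + btoa(username + ":" + password);

// JWT Authentication: the token is typically passed to the page as a parameter by Fusion SaaS
var jwtHeader = "Bearer " + jwtToken;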

 

Groovy Script in SalesCloud to generate a JWT Token

def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication" )
def crmkey= (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
def url = thirdpartyapplicationurl +"?jwt="+crmkey
return (url)

Expression Language in Fusion SaaS (HCM, Sales, ERP etc) to generate a JWT Token

#{EndPointProvider.externalEndpointByModuleShortName['My3rdPartApplication']}?jwt=#{applCoreSecuredToken.trustToken}

Getting the data out of Fusion Applications using the REST API

When retrieving data from Sales Cloud we need to make sure we get the right data, not too much and not too little. Oracle Sales Cloud, like many other Oracle SaaS products, now supports the REST API for inbound and outbound data access. Oracle HCM also has a REST API, but at the time of writing this article that API is in controlled availability.

Looking at the documentation hosted at the Oracle Help Center (http://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/), the REST call to get all Sales Cloud Opportunities looks like this:

https://yourCRMServer/salesApi/resources/latest/opportunities

If you execute the above REST call you will notice that the resulting payload is large, some would say huge. There are good reasons for this: firstly, the Sales Cloud Opportunity object contains a large number of fields; secondly, the result not only contains data but also metadata; and finally, the request above is a select-all query. The metadata includes links to child collections, links to Lists of Values, what tabs are visible in Sales Cloud, custom objects, flexfields etc. Additionally, the query we just executed is the equivalent of a select * from table, i.e. it brings back everything, so we’ll also need to fix that.

 

Example snippet of a SalesCloud Opportunity REST Response showing custom fields, tabs visible, child collections etc.

"Opportunity_NewQuote_14047462719341_Layout6": "https://mybigm.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=3000000xxx44105%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "Opportunity_NewQuote_14047462719341_Layout6_Layout7": "https://mybigMmachine.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=300000060xxxx5%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "ExtnFuseOpportunityEditLayout7Expr": "false",
  "ExtnFuseOpportunityEditLayout6Expr": "false",
  "ExtnFuseOpportunityCreateLayout3Expr": "false",
  "Opportunity_NewQuote_14047462719341_Layout8": "https://mybigm-demo.bigmachines.com/sso/saml_request.jsp?RelayState=/commerce/buyside/document.jsp?process=quickstart_commerce_process_bmClone_4%26formaction=create%26_partnerOpportunityId=300000060744105%26_partnerIdentifier=fusion%26_partnerAccountId=100000001941037",
  "ExtnFuseOpportunityEditLayout8Expr": "false",
  "CreateProject_c": null,
  "Opportunity_DocumentsCloud_14399346021091": "https://mydoccloud.documents.us2.oraclecloud.com/documents/embed/link/LF6F00719BA6xxxxxx8FBEFEC24286/folder/FE3D00BBxxxxxxxxxxEC24286/lyt=grid",
  "Opportunity_DocsCloud_14552023624601": "https://mydocscserver.domain.com:7002/SalesCloudDocCloudServlet/doccloud?objectnumber=2169&objecttype=OPPORTUNITY&jwt=eyJhxxxxxy1pqzv2JK0DX-xxxvAn5r9aQixtpxhNBNG9AljMLfOsxlLiCgE5L0bAI",
  "links": [
    {
      "rel": "self",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169",
      "name": "opportunities",
      "kind": "item",
      "properties": {
        "changeIndicator": "ACED0005737200136A6176612E7574696C2E41727261794C6973747881D21D99C7619D03000149000473697A65787000000002770400000010737200116A6176612E6C616E672E496E746567657212E2A0A4F781873802000149000576616C7565787200106A6176612E6C616E672E4E756D62657286AC951D0B94E08B020000787200106A6176612E6C616E672E4F626A65637400000000000000000000007870000000017371007E00020000000178"
      }
    },
    {
      "rel": "canonical",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169",
      "name": "opportunities",
      "kind": "item"
    },
    {
      "rel": "lov",
      "href": "https://mycrmserver-crm.oracledemos.com:443/salesApi/resources/11.1.10/opportunities/2169/lov/SalesStageLOV",
      "name": "SalesStageLOV",
      "kind": "collection"
    },

Thankfully we can tell the REST API that we:

  • Only want to see the data, achieved by adding onlyData=true parameter
  • Only want to see the following fields OpportunityNumber,Name,CustomerName (TargetPartyName), achieved by adding a fields=<fieldName,fieldname> parameter
  • Only want to see a max of 10 rows, achieved by adding the limit=<value> parameter
  • Only want to see open opportunities, achieved by adding the q= parameter with a query string, in our case StatusCode=OPEN

If we want to get the data in pages/blocks we can use the offset parameter. The offset parameter tells the REST service to get the data “from” this offset. Using offset and limit we can effectively page through the data returned by Oracle Fusion Applications REST Service.
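For example (illustrative values only), with a limit of 10 the first three pages of results would be requested with:

.../opportunities?onlyData=true&limit=10&offset=0     (rows 1-10)
.../opportunities?onlyData=true&limit=10&offset=10    (rows 11-20)
.../opportunities?onlyData=true&limit=10&offset=20    (rows 21-30)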

Our final REST request URL would look like this:

https://myCRMServer.oracledemos.com/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&offset=0&limit=10

The Oracle Fusion Applications REST API is documented in the relevant Oracle Fusion Applications documentation, e.g. for Sales Cloud: http://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/. It is also worth noting that the Oracle Fusion Applications REST services are simply an implementation of the Oracle ADF Business Components REST services, which are very well documented here: https://docs.oracle.com/middleware/1221/adf/develop/GUID-8F85F6FA-1A13-4111-BBDB-1195445CB630.htm#ADFFD53992

Our final tuned JSON result from the REST service will look something like this (truncated):

{
  "items": [
    {
      "Name": "Custom Sentinel Power Server @ Eagle",
      "OptyNumber": "147790",
      "StatusCode": "OPEN",
      "TargetPartyName": "Eagle Software Inc",
      "Revenue": 104000
    },
    {
      "Name": "Ultra Servers @ Beutelschies & Company",
      "OptyNumber": "150790",
      "StatusCode": "OPEN",
      "TargetPartyName": "Beutelschies & Company",
      "Revenue": 175000
    },
    {
      "Name": "Diablo Technologies 1012",
      "OptyNumber": "176800",
      "StatusCode": "OPEN",
      "TargetPartyName": "Diablo Technologies",
      "Revenue": 23650
    }
  ]
}

Creating the Hybrid Application

Now that we have our data source defined we can start to build the application. We want this application to be available on a mobile device, and therefore we will create a “Mobile Hybrid” application using Oracle JET, with the navDrawer template.

yo oraclejet:hybrid OSCOptyList --template=navDrawer --platforms=android

Once the yeoman script has built your application, you can test the (basic) application using the following two commands.

grunt build --platform=android 
grunt serve --platform=android --web=true

The second grunt serve command has a web=true parameter at the end; this tells the script that we’re going to test in our browser and not on the device itself. When this is run you should see a basic shell [empty] application in your browser window.


Building Our JavaScript UI

Now that we have our data source defined we can get on to the task of building the JET user interface. Previously you executed the yo oraclejet:hybrid command, which created a hybrid application for you using a template. Opening the resulting project in an IDE, like NetBeans, we can see that the project template has created a collection of files and that one of them is “dashboard.html” (marked 1 in the image). Edit this file using your editor.

dashboard.html

 

Within the file, delete everything and replace it with this snippet of HTML code:

<div class="oj-hybrid-padding">
    <div class="oj-flex">
        <div class="oj-flex-item">
            <button id= "prevButton" 
                    data-bind="click: previousPage, 
                       ojComponent: { component: 'ojButton', label: 'Previous' }">
            </button>
            <button id= "nextButton"
                    data-bind="click: nextPage, 
                       ojComponent: { component: 'ojButton', label: 'Next' }">
            </button>
        </div>
    </div>
    <div class="oj-flex-item">    
        <div class="oj-panel oj-panel-alt1 oj-margin">
            <table id="table" summary="Opportunity List" aria-label="Opportunity List"
                   data-bind="ojComponent: {component: 'ojTable', 
                                data: opportunityDataSource, 
                                columnsDefault: {sortable: 'none'}, 
                                columns: [{headerText: 'Opty Number', 
                                           field: 'OptyNumber'},
                                          {headerText: 'Name', 
                                           field: 'Name'},
                                          {headerText: 'Revenue', 
                                           field: 'Revenue'},
                                          {headerText: 'Customer Name', 
                                           field: 'TargetPartyName'},
                                          {headerText: 'Status Code', 
                                           field: 'StatusCode'}
           ]}">
            </table>
        </div>    
    </div>
</div>

The above piece of HTML adds a JET table to the page; for prettiness we’ve wrapped the table in a decorative panel and added next and previous buttons. The table definition tells Oracle JET that the data is coming from a JavaScript object called “opportunityDataSource“; it also defines the columns, the column header text, and that the columns are not sortable. The button definitions reference two functions in our JavaScript (to follow) which will paginate the data.

Building The logic

We can now move on to the JavaScript side of things, that is, the part where we get the data from Sales Cloud and make it available to the table object in the HTML file. For this simple example we’ll get the data directly from Sales Cloud and display it in the table, with no caching and nothing fancy like collection models for pagination.

Edit the dashboard.js file; this is marked as 2 in the above image. This file is a RequireJS AMD (Asynchronous Module Definition) file and is pre-populated to support the dashboard.html page.

Within this file, cut-n-paste the following JavaScript snippet.

define(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojtable', 'ojs/ojbutton'],
        function (oj, ko, $) {
            function DashboardViewModel() {
                var self = this;
                var offset = 0;
                var limit = 10;
                var pageSize = 10;
                var nextButtonActive = ko.observable(true);
                var prevButtonActive = ko.observable(true);
                //
                self.optyList = ko.observableArray([{Name: "Fetching data"}]);
                console.log('Data=' + self.optyList);
                self.opportunityDataSource = new oj.ArrayTableDataSource(self.optyList, {idAttribute: 'Name'});
                self.refresh = function () {
                    console.log("fetching data");
                    var hostname = "https://yourCRMServer.domain.com";
                    var queryString = "/salesApi/resources/latest/opportunities?onlyData=true&fields=OptyNumber,Name,Revenue,TargetPartyName,StatusCode&q=StatusCode=OPEN&limit=10&offset=" + offset;
                    console.log(queryString);
                    $.ajax(hostname + queryString,
                            {
                                method: "GET",
                                dataType: "json",
                                headers: {"Authorization": "Basic " + btoa("username:password")},
                                // Alternative headers if using a JWT token:
                                // headers: {"Authorization": "Bearer " + jwttoken},
                                success: function (data)
                                {
                                    self.optyList(data.items);
                                    console.log('Data returned ' + JSON.stringify(data.items));
                                    console.log("Rows Returned"+self.optyList().length);
                                    // Enable / Disable the next/prev button based on results of query
                                    if (self.optyList().length < limit)
                                    {
                                        $('#nextButton').attr("disabled", true);
                                    } else
                                    {
                                        $('#nextButton').attr("disabled", false);
                                    }
                                    if (offset === 0)
                                        $('#prevButton').attr("disabled", true);
                                },
                                error: function (jqXHR, textStatus, errorThrown)
                                {
                                    console.log(textStatus, errorThrown);
                                }
                            }
                    );
                };
                // Handlers for buttons
                self.nextPage = function ()
                {

                    offset = offset + pageSize;
                    console.log("off set=" + offset);
                    self.refresh();
                };
                self.previousPage = function ()
                {
                    offset = offset - pageSize;
                    if (offset < 0)
                        offset = 0;
                    self.refresh();
                };
                // Initial Refresh
                self.refresh();
            }
            
            return new DashboardViewModel;
        }
);

Let’s examine the code

Line 1: Here we’ve modified the standard define so that it includes an 'ojs/ojtable' reference. This tells RequireJS, which the JET toolkit uses, that this piece of JavaScript uses a JET table object
Lines 8 & 9: These lines maintain variables that indicate whether the buttons should be enabled or not
Line 11: Here we create a variable called optyList; importantly, this is created as a Knockout observableArray
Line 13: Here we create another variable called “opportunityDataSource“, which is the variable the HTML page will reference. The main difference here is that this variable is of type oj.ArrayTableDataSource and has an idAttribute defined as its key
Lines 14-47: Here we define a function called “refresh”. When this JavaScript function is called we execute a REST call back to Sales Cloud using jQuery’s ajax call. This call retrieves the data and then populates the optyList Knockout data source with the data from the REST call. Note that we don’t assign the results to the optyList variable directly but purposely pass a child array called “items”. If you execute the REST call we previously discussed, you’ll note that the data is actually stored in an array called items
Line 23: This line defines the headers, specifically in this case a header called “Authorization”, with the username & password formatted as “username:password” and then Base64 encoded
Lines 24-25: These lines define an alternative header which would be appropriate if a JWT token were being used. This token would be passed in as a parameter rather than being hardcoded
Lines 31-40: These examine the results of the query and determine whether the next and previous buttons should be enabled or not, using jQuery to toggle the disabled attribute
Lines 50-63: These manage the next/previous button events
Finally, on line 65 we execute the refresh() method when the module is initiated.

Running the example on your mobile

To run the example on your mobile device, execute the following commands:

grunt build --platform=android 
grunt serve --platform=android

or if you want to test on a device

grunt serve --platform=android --destination=[device or emulator name]

If all is well you should see a table of data populated from Oracle Sales Cloud

 

For more information on building JavaScript applications with the Oracle JET toolkit, make sure to check out our other blog articles on JET here, the Oracle JET website here and the excellent Oracle JET YouTube channel here.

Running the example on the browser and CORS

If you try to run the example in your browser you’ll find it probably won’t work. If you look at the browser console (Ctrl+Shift+I on most browsers) you’ll probably see that the error was something like “XMLHttpRequest cannot load…” etc.

cors

This is because the code has violated “Cross Origin Scripting” rules. In a nutshell, a JavaScript application cannot access a resource which was not served up by the same server the application itself was served from. In my case the application was served up by NetBeans on http://localhost:8090, whereas the REST service from Sales Cloud is on a different server. Thankfully there is a solution called “CORS”. CORS stands for Cross Origin Resource Sharing and is a standard for solving this problem; for more information on CORS see this Wikipedia article, or other articles on the internet.

Configuring CORS in Fusion Applications

For our application to work in a web browser we need to enable CORS in Fusion Applications. We do this with the following steps:

  1. Log into Fusion Applications (Sales Cloud, HCM etc.) using a user who has access to “Setup and Maintenance”
  2. Access the Setup and Maintenance screens
  3. Search for “Manage Administrator Profile Values” and then navigate to that task
  4. Search for the “Allowed Domains” profile name (case sensitive!)
  5. Within this profile name you will see a profile option called “site“; this profile option has a profile value
  6. Within the profile value add the hostname, and port number, of the server hosting your JavaScript application. If you want to allow ALL domains set this value to “*” (a single asterisk). WARNING: ensure you understand the security implications of allowing ALL domains using the asterisk notation!
  7. Save and Close and then retry running your JET application in your browser.
setupandMaiteanceCORS

CORS Settings in Setup and Maintenance (Click to enlarge)

If all is good when you run the application on your browser, or mobile device, you’ll now see the application running correctly.

JETApplication

Running JET Application (Click to enlarge)

 

Final Note on Security

To keep this example simple the security username/password was hard-coded in the mobile application, which is not suitable for a real-world application. For a real application you would create a configuration screen, or use system preferences, to collect and store the username, password and the Sales Cloud server URL, which would then be used in the application.

If the JET Application is to be embedded inside a Fusion Applications Page then you will want to use JWT Token authentication. Modify the example so that the JWT token is passed into the application URL as a parameter and then use that in the JavaScript (lines 24-25) accordingly.
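A minimal sketch of that change could look like the snippet below (the getQueryParam helper is purely illustrative, not part of JET or the original sample):

// Purely illustrative: read the jwt parameter appended to the application URL by Fusion Applications
function getQueryParam(name) {
    var match = new RegExp('[?&]' + name + '=([^&]*)').exec(window.location.search);
    return match ? decodeURIComponent(match[1]) : null;
}
var jwttoken = getQueryParam('jwt');

// Then, in the $.ajax call in dashboard.js, replace the Basic header with:
// headers: {"Authorization": "Bearer " + jwttoken},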

For more information on JWT Tokens in Fusion Applications see these blog entries (Link 1, Link 2) and of course the documentation

Conclusion

As we’ve seen above, it’s quite straightforward to create mobile, and browser, applications using the Oracle JET framework. The above example was quite simple and only queried data; a real application would also have some write/delete/update operations and therefore you would want to start looking at the JET Common Model and Collection Framework (DocLink) instead. Additionally, in the above example we queried data directly from a single Sales Cloud instance and did no processing on it. It is very likely that a single mobile application will need to get its data from multiple data sources and require some backend services to pre-process, and probably post-process, the data, in essence providing an API. We call this backend an “MBaaS”, i.e. Mobile Backend As A Service. Oracle also provides an MBaaS in its PaaS suite of products, called “Mobile Cloud Service”.

In a future article we will explore how to use Oracle Mobile Cloud Service (Oracle MCS) to query SalesCloud and Service cloud and provide an API to the client which would be using the more advanced technique of using the JET Common Model/Collection framework.

 

 

Using Oracle Data Integrator (ODI) to Bulk Load Data into HCM-Cloud


Introduction

With its capacity to handle complex transformations and large volumes of data, and its ability to orchestrate operations across heterogeneous systems, ODI is a great tool to prepare and upload bulk data into HCM Cloud.

In this post, we are looking at the different steps required to perform this task.

Overview of the integration process

There are three steps that are required to prepare and upload data into HCM:

  • Transform data and prepare a file that matches the import format expected by HCM. Then ZIP the generated file;
  • Upload the file to UCM-Cloud using the appropriate web service call;
  • Invoke HCM-Cloud to trigger the import process, using the appropriate web service call.

We will now see how these different steps can be achieved with ODI.

Preparing the data for import

We will not go into the details of how to transform data with ODI here: this is a normal use of the product and as such it is fully documented.

For HCM to be able to import the data, ODI needs to generate a file formatted according to HCM specifications. For ODI to generate the proper file, the most effective approach is to create a custom Knowledge Module (KM). The details for this Knowledge Module, as well as an introduction to the HCM file format, are available here: Oracle Data Integrator (ODI) for HCM-Cloud: a Knowledge Module to Generate HCM Import Files. Using this KM, data can be prepared from different sources, aggregated and augmented as needed. ODI will simply generate the import file as data is loaded into a set of tables that reflect the HCM file’s business objects components.

Once the file has been generated with regular ODI mappings, the ODI tool OdiZip can be used to compress the data. You need to create a package to define the sequence of mappings to transform the data and create the import file. Then add an OdiZip step in the package to compress the file.

ODIZip

The name of the import file is imposed by HCM, but the ZIP file can have any name, which can be very convenient if you want to generate unique file names.

Uploading the file to UCM-Cloud

The web service used to upload the file to UCM is relatively straightforward. The only element we have to be careful with is the need to timestamp the data by setting a start date and a nonce (unique ID) in the header of the SOAP message. We use ODI to generate these values dynamically by creating two variables: StartDate and Nonce.  Both variables are refreshed in the package.

The refresh code for the StartDate variable is the following:

select to_char(sysdate,'YYYY-MM-DD') || 'T' || to_char(systimestamp,'HH:MI:SSTZH:TZM')
from dual

This formats the date like this: 2016-05-15T04:38:59-04:00

The refresh code for the Nonce variable is the following:

select dbms_random.string('X', 9) from dual

This gives us a 9 character random alphanumeric string, like this: 0O0Q3LRKM

We can then set the parameters for the UCM web service using these variables. When we add the OdiInvokeWebService tool to a package, we can take advantage of the HTTP Analyzer to get help with the parameters settings.

HTTP Analyzer

To use the HTTP Analyzer, we first need to provide the WSDL for the service we want to access. Then we click the HTTP Analyzer button: ODI will read the WSDL and build a representation of the service that lets us view and set all possible parameters.

If not obvious, for the Analyzer to work, you need to be able to connect to the WSDL.

The Analyzer lets us set the necessary parameters for the header, where we use the variables that we have previously defined:

UCM soap header

We can then set the rest of the parameters for the web service. To upload a file with UCM, we need the following settings:

IdcService: CHECKIN_UNIVERSAL (for more details on this and other available services, check out the Oracle Fusion Middleware Services Reference Guide for Oracle Universal Content Management)

FieldArray: we use the following fields:

Field name     | Field content        | Comment
dDocName       | Contact.zip          | The name of our file
dDocAuthor     | HCM_APP_ADMIN_ALL    | Our user name in UCM
dDocTitle      | Contact File for HCM | Label for the file
dDocType       | Document             |
dSecurityGroup | Public               |
doFileCopy     | TRUE                 | Keep the file on disk after copy
dDocAccount    | ODIAccount           |

 

The screenshot below shows how these parameters can be entered into the Analyzer:

HTTP Analyzer IdcService

In the File Array we can set the name of the file and its actual location:

HTTP Analyzer - File and send

At this point we can test the web service with the Send Request button located at the bottom of the Analyzer window: you see the response from the server on the right-hand side of the window.

If you want to use this test feature, keep in mind that:
– Your ODI variables need to have been refreshed so that they have a value
– The ODI variables need to be refreshed between subsequent calls to the service: you cannot use the same values twice in a row for StartDate and Nonce (or the server would reject your request).

A few comments on the execution of the web service: a successful call to the web service does not guarantee that the operation is successful. You want to review the response returned by the service to validate the success of the operation. Make sure that you set the name of the Response File when you set the parameters for the OdiInvokeWebService tool to do this.

All we need to validate in this response file is the content of the element StatusMessage: if it contains ‘Successful’ then the file was loaded successfully; if it contains ‘not successful’ then you have a problem. It is possible to build an ODI mapping for this (creating a model for the XML file, reverse-engineering the file, then building the mapping…) but a very simple Groovy script (in an ODI procedure, for instance) can get us there faster and, in case of problems, can surface in the ODI Operator log the exact error message returned by the web service:

import groovy.util.XmlSlurper

// You can replace the following hard-coded values with ODI variables. For instance:
// inputFile=#myProject.MyResponseFile
inputFile = 'D:/TEMP/HCM/UCMResponse.xml'
XMLTag = 'StatusMessage'
fileContents = new File(inputFile).getText('UTF-8')
def xmlFile = new XmlSlurper().parseText(fileContents)
def responseStatus = new String(xmlFile.'**'.find{node-> node.name() == XMLTag}*.text().toString())
if (responseStatus.contains('Successful')) {
// some action
}
else {
throw new Exception(responseStatus)
}

This said, if all parameters are set correctly and if you have the necessary privileges on UCM Cloud, at this point the file is loaded on UCM. We can now import the file into HCM Cloud.

Invoking the HCM-Cloud loader to trigger the import process

The HCM web service uses OWSM security policies. If you are not familiar with OWSM security policies, or if you do not know how to setup ODI to leverage OWSM, please refer to Connecting Oracle Data Integrator (ODI) to the Cloud: Web Services and Security Policies. This blog post also describes how to define a web service in ODI Topology.

Once we have the web service defined in ODI Topology, invoking the web service is trivial. When you set the parameters for the ODI tool OdiInvokeWebService in your package, you only need to select a Context as well as the logical schema that points to the web service. Then you can use the HTTP Analyzer to set the parameters for the web service call:

HCM web service call

In our tests we set the ContentId to the name of the file that we want to load, and the Parameters to the following values:

FileEncryption=NONE, DeleteSourceFile=N.

You can obviously change these values as you see fit. The details for the parameters for this web service are available in the document HCM Data Loader Preparation.

Once we have set the necessary parameters for the payload, we just have to set the remainder of parameters for OdiInvokeWebService. In particular, we need a response file to store the results from the invocation of the web service.

Here again we can use Groovy code to quickly parse the response file and make sure that the load started successfully (this time we are looking for an element named result in the response file).
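A sketch of that check, mirroring the previous script, could look like this (the response file name is hypothetical, and the exact content of the result element depends on the service, so adjust the check for your environment):

import groovy.util.XmlSlurper

// Hypothetical response file captured by OdiInvokeWebService for the HCM import call
inputFile = 'D:/TEMP/HCM/HCMResponse.xml'
XMLTag = 'result'
fileContents = new File(inputFile).getText('UTF-8')
def xmlFile = new XmlSlurper().parseText(fileContents)
def importResult = xmlFile.'**'.find{ node -> node.name() == XMLTag }?.text()
// Fail the ODI step if the element is missing, otherwise log its content for review
if (importResult == null || importResult.trim().isEmpty()) {
    throw new Exception('No result element found in the HCM response')
}
println('HCM import result: ' + importResult)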

Make sure that the user you are using to connect and initiate the import process has enough privileges to perform this operation. One easy way to validate this is with the HCM user interface: if you can import the files manually from the HCM user interface, then you have enough privileges to execute the import process with the web service.

The final ODI package looks like this:

HCM Load Final Package

This final web service call initiates the import of the file. You can make additional calls to check on the status of the import (running, completed, aborted) to make sure that the file is successfully imported. The process to invoke these additional web services is similar to what we have done here to import the file.

Conclusion

The features available in ODI 12.2.1 make it relatively easy to generate a file, compress it, upload it to the cloud and import it into HCM-Cloud: we have generated an import file in a proprietary format with a quick modification of a standard Knowledge Module; we have edited the header and the payload of web services without ever manipulating XML files directly; we have set up security policies quickly by leveraging the ability to define web services in ODI Topology. Now all we have to do is to design all the transformations that will be needed to generate the data for HCM-Cloud!

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator.

References

OAM Protected SPAs and Same-Origin Policy


Introduction

On a previous post, I described the usage of OAM’s SAML Identity Assertion in the context of SPAs (Single Page Applications) and how easy it is to take advantage of it for securely propagating the end user identity from the client to the backend services. However, that post was written with the assumption that both the JavaScript code and the REST services are protected by the same WebGate. Speaking in HTTP protocol terms, we say they have the same origin.

Modern web browsers natively enforce a security measure called the Same-Origin policy, which ensures that a script can only invoke a service if both the script and the service are served from the same host. This prevents a third-party web site from sending a malicious script to the user’s web browser and taking advantage of any browser session data (including HTTP cookies) to execute remote calls to legitimate services.

In real-world OAM deployment scenarios, it is perfectly valid for customers to use separate WebGates for JavaScript and REST services. This breaks the Same-Origin policy right away: the XHRs (XML HTTP Requests) made by the browser on behalf of the JavaScript would be automatically denied. This post describes how to deal with this by using the CORS (Cross-Origin Resource Sharing) mechanism, as well as how to handle pre-flight requests and HTTP redirects in the context of REST services protected by OAM.

Setting the basis for this discussion, the deployment topology is depicted in the following diagram. It’s true that we could have a reverse proxy in front of the two WebGates as well. That would obviously render this discussion moot, but that’s not what we want, since having WebGates directly exposed to clients is totally valid.

Deployment Topology

Deployment Topology

1 – The user first loads a JavaScript into the browser by accessing a resource protected by a WebGate running on host myapp.ateam.com.
2 – The Javascript (an AngularJS application) makes XHRs (GET and POST) to REST services protected by a Webgate running on host myservice.ateam.com.

It looks simple. But there are a few devils on the way.

CORS – Cross-Origin Resource Sharing

Refer to this document for the CORS specification.

In a nutshell, CORS is a mechanism that allows user-defined resources to relax the Same-Origin policy. These resources can basically tell the browser which origins they accept requests from. In our scenario, the REST services running on host myservice.ateam.com tell the browser to accept requests from myapp.ateam.com. There are many other aspects to CORS, like the methods and headers allowed, credentials propagation and pre-flight requests. We’ll certainly touch upon them here, but it is highly recommended that you read the CORS specification to fully understand their semantics.
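As an illustration of the mechanism in our scenario (simplified, with only the CORS-relevant headers shown), a cross-origin GET carries an Origin request header and only succeeds if the response echoes it back:

GET /services/partsinventory/parts HTTP/1.1
Host: myservice.ateam.com:7777
Origin: http://myapp.ateam.com:7777

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://myapp.ateam.com:7777
Access-Control-Allow-Credentials: true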

Handling Same-Origin Policy

Let’s assume the user has been authenticated and the AngularJS code, served by myapp.ateam.com, is loaded by the browser. It’s now going to invoke a REST service on myservice.ateam.com.

partsInvApp.controller('partsInvController', function ($scope, $http) {
    $http.get('http://myservice.ateam.com:7777/services/partsinventory/parts').success(function (data) {
        $scope.parts = data.result;
    });
});

For relaxing the Same-Origin policy, we need the following ‘Header’ directives in the routing rule for the backend services in mod_wl_ohs.conf of myservice.ateam.com’s OHS:

<Location /services>
    ## Handling the internal forward for the WebLogic server actually hosting the services
    SetHandler weblogic-handler
    WebLogicHost int.us.oracle.com
    WeblogicPort 8003
    
    ## The following 'Header always set' directives are mandatory for cross domain XHR
    SetEnvIf Origin "(http://myapp.ateam.com:7777|null)" ACAO=$0
    Header always set Access-Control-Allow-Origin %{ACAO}e env=ACAO
    Header always set Access-Control-Allow-Credentials "true"
    Header always set Access-Control-Allow-Methods "GET, POST, OPTIONS"
    Header always set Access-Control-Allow-Headers "Origin, Content-Type, Accept"    
</Location>

Let’s first focus on the SetEnvIf directive and the 1st Header directive. The SetEnvIf is whitelisting “http://myapp.ateam.com:7777” and “null” values in the Origin request header. Only these two values are echoed back in Access-Control-Allow-Origin response header. Handling “null” is necessary due to HTTP redirects between the WebGate and OAM server. The browser sets the Origin header to “null” when a redirect is made to a different server than the one originally requested. That’s what happens, for instance, when the browser is redirected from http://myservice.ateam.com:7777/services/partsinventory/parts to http://<oam_server>:<port>/oam/server/obrareq.cgi.

Accepting “null” origins may ease the way for CSRF (Cross Site Request Forgery) attacks. In general, for guarding against CSRF attacks, do not rely on the Origin header, do not allow the “safe” operations like GET, HEAD and OPTIONS to change server-side data and use CSRF tokens in those considered “unsafe” operations, like POST, PUT, PATCH and DELETE.

When a request hits the WebGate on myservice.ateam.com, it will be detected there’s no OAM authentication cookie for that WebGate. Hence, an HTTP redirect is made to http://<oam_server>:<port>/oam/server/obrareq.cgi with OAM_ID cookie. OAM verifies the cookie and does another HTTP redirect, this time to myservice.ateam.com WebGate on /obrar.cgi, where the OAM authentication cookie is generated. Finally, the browser is redirected to the originally requested resource. This is just the way OAM works. It’s not at all particular to the use case here described.

From the standpoint of the JavaScript code, it doesn’t need to follow all those redirects; even in the context of an XHR, the redirects are natively handled by the browser. Notice, though, that they happen in the context of the XHR, so to the browser each redirect is just another XHR. As a consequence, CORS headers also need to be defined for them. So, re-reading the paragraph before last, here is the sequence of redirects that takes place:

1 – http://myservice.ateam.com:7777/services/partsinventory/parts -> http://<oam_server>:<oam_port>/oam/server/obrareq.cgi

2 – http://<oam_server>:<oam_port>/oam/server/obrareq.cgi -> http://myservice.ateam.com:7777/obrar.cgi

3 – http://myservice.ateam.com:7777/obrar.cgi -> http://myservice.ateam.com:7777/services/partsinventory/parts

Those redirects tell us we need CORS headers for the OAM server itself. How do we deal with this? We have to front-end the OAM server with a reverse proxy that allows us to set those headers. This is actually a recommended approach anyway, so that the OAM server is not exposed in the DMZ for internet-facing web applications.

In my setup, I’ve used OHS (mylogin.ateam.com:7778) for the purpose and set CORS headers within mod_wl_ohs.conf:

# Handling OAM redirects in the context of XML HTTP Requests
<Location /oam>
SetHandler weblogic-handler
WebLogicHost oamserver.us.oracle.com
WeblogicPort 14100

SetEnvIf Origin "(http://myapp.ateam.com:7777|null)" ACAO=$0

Header always set Access-Control-Allow-Origin %{ACAO}e env=ACAO
Header always set Access-Control-Allow-Credentials "true"
Header always set Access-Control-Allow-Methods "GET, POST, OPTIONS"
Header always set Access-Control-Allow-Headers "Origin, Content-Type, Accept"
</Location>

When front-ending OAM, it’s imperative to update OAM server host and port in OAM Console, per image below:

OAM Front End Host

OAM Front End Host

To fully satisfy the browser in that sequence of redirects, we also need CORS headers for http://myservice.ateam.com:7777/obrar.cgi. I’ve defined them as follows in myservice.ateam.com OHS httpd.conf:

# Handling OAM redirects in the context of XML HTTP Requests
<Location /obrar.cgi>
SetEnvIf Origin "(http://myapp.ateam.com:7777|null)" ACAO=$0
Header always set Access-Control-Allow-Credentials "true"
Header always set Access-Control-Allow-Methods "GET, POST, OPTIONS"
Header always set Access-Control-Allow-Headers "Origin, Content-Type, Accept"
Header always set Access-Control-Allow-Origin %{ACAO}e env=ACAO
</Location>

With these in place, we satisfy requests with the GET method.

Handling POSTs

For handling POSTs, we need to know that browsers may decide to preflight the request. A pre-flight request is a preliminary inquiry to ensure the actual request is safe to be sent. So the browser first sends a request with the OPTIONS method, and the server replies back with the appropriate CORS headers, either allowing or denying the request.
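A simplified view of such an exchange in our scenario (only the CORS-relevant headers shown) would be:

OPTIONS /services/partsinventory/parts HTTP/1.1
Host: myservice.ateam.com:7777
Origin: http://myapp.ateam.com:7777
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://myapp.ateam.com:7777
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Origin, Content-Type, Accept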

Verify in this Firefox screenshot how an OPTIONS request is sent right before the actual POST, basically asking for authorization to submit the POST. This is evidenced by the “Access-Control-Request-Method” request header. The server authorizes the request by issuing back the “Access-Control-Allow-Methods” response header.

OPTIONS Request

OPTIONS Request

Now, a little devil here: do notice that OAM may be protecting OPTIONS requests as well. As such, it would expect cookies in the request. The thing is that pre-flight requests, per the CORS specification, exclude user credentials (including HTTP cookies). That basically means an anonymous request to an OAM-protected resource, initiating an authentication flow, definitely not what our application expects. The solution to this is simply excluding OPTIONS from the supported methods on the protected resource. In fact, there’s no need for us to worry about the OPTIONS method in OAM, as long as the backend REST services either don’t support the OPTIONS method or implement it correctly.

For excluding the OPTIONS method from OAM oversight, edit the corresponding resource in OAM Console:

 

Options OFF

Options OFF

We can even take the precaution of handling OPTIONS in OHS, thus preventing the request from hitting the REST service endpoint on the WebLogic server altogether. This can be implemented with the following directives within <Location /services> in mod_wl_ohs.conf:

RewriteEngine On
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*) $1 [R=200,L]

We’re essentially returning a 200 HTTP status code for any OPTIONS request.

Handling OAM Timeouts

It might be the case that the end user leaves the application idle for some time. Upon her return, OAM might have timed out, either due to the OAM idle timeout or OAM session expiration. The act of clicking some UI element (like a button or link) that triggers a REST service call must be handled in context by the application. One approach is forcing a new user login for the application as a whole. Some people may find this disruptive. I like to think that an OAM-protected REST service is just another OAM-protected server-side resource within traditional web applications, and given the intrinsic stateless orientation of REST-based services, it should be fine for the user interface to call for a re-login.

Upon timeout, OAM does an HTTP redirect to http://<oam_server>:<oam_port>/oam/server/obrareq.cgi, which in turn brings in the SSO login page. Therefore, we have to handle this in OHS. This has already been taken care of when we dealt with the redirects that take place during normal processing of REST service requests on the OAM front-end host.

# Handling OAM redirects in the context of XML HTTP Requests
<Location /oam>
SetHandler weblogic-handler
WebLogicHost oamserver.us.oracle.com
WebLogicPort 14100

SetEnvIf Origin "(http://myapp.ateam.com:7777|null)" ACAO=$0

Header always set Access-Control-Allow-Origin %{ACAO}e env=ACAO
Header always set Access-Control-Allow-Credentials "true"
Header always set Access-Control-Allow-Methods "GET, POST, OPTIONS"
Header always set Access-Control-Allow-Headers "Origin, Content-Type, Accept"
</Location>

The AngularJS code must capture the redirect outcome (which is the SSO page) and preferably redirect the browser window to the protected server-side resource from which the call is being executed. Yes, I agree that figuring out this context may vary in complexity depending on how the application has been designed. My use case here is the simplest, since I have only one HTML page serving the AngularJS code.

My sample employs an AngularJS interceptor, as follows. It basically looks for a specific string that I know is present in OAM’s login page. On finding it, it redirects the browser window to the partsInventory.html location, the resource that actually embeds the AngularJS code. Just remember to register the interceptor with AngularJS $httpProvider.

partsInvApp.factory('redirectInterceptor', function($q,$location,$window){
    return {
        'response': function(response){
            if (typeof response.data === 'string' && response.data.indexOf("Enter your Single Sign-On credentials below") > -1) {
                $window.location = 'partsInventory.html';
                return $q.reject(response);
            }
            else {
              return response || $q.when(response);
            }  
        }
    }
});

partsInvApp.config(['$httpProvider', function($httpProvider) {
    $httpProvider.defaults.withCredentials = true;
    $httpProvider.interceptors.push('redirectInterceptor');
  }]);

Notice the line

$httpProvider.defaults.withCredentials = true;

This is what makes the browser send cookies along with HTTP requests in the context of XHR in AngularJS, a key requirement for OAM-protected applications.

Sample Code

Here’s the HTML code of my suboptimal Inventory application:

<html ng-app="partsInvApp">
  <head>
    <meta charset="utf-8">
    <title>Parts Inventory Application</title>

    <link href="bootstrap/css/bootstrap.min.css" rel="stylesheet">

    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.4/angular.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.4/angular-cookies.js"></script>
    <script src="./partsInvApp.js"></script>
    <script src="./jquery-1.12.3.min.js"></script>
  </head>
  <body>
    <p ng-controller="userProfileController">
        Welcome <b>{{firstName}} {{lastName}}</b>, check out our inventory list</p>
    <div style="width:600px" width="600" class="table-responsive" ng-controller="partsInvController">
      <table class="table table-striped" style="width:600px" width="600">
        <thead>
          <tr>
            <th width="15%">Id</th>
            <th width="15%">Name</th>
            <th width="30%">Description</th>
            <th width="15%">Price</th>
            <th width="10%">Quantity</th>
            <th width="15%"> </th>
          </tr>
        </thead>  
        <tbody>
          <tr ng-repeat="part in parts">
          	<td width="15%">{{part.uniqueid}}</td>
            <td width="15%">{{part.name}}</td>
            <td width="30%">{{part.desc}}</td>
            <td width="15%">{{part.price}}</td>
            <td width="10%" valign="top">
                <input type="text" name="amt" ng-model="amt" size="3"> 
            </td> 
            <td width="15%" valign="top">
              <button ng-click="orderPart(part.uniqueid, amt)" class="btn btn-sm btn-primary" ng-disabled="orderForm.$invalid">Order</button>
            </td>  
          </tr>
        </tbody>  
      </table>
      <h4 align="center" ng-if="PostDataResponse">
        <span class="label label-success">
          {{PostDataResponse}}
        </span>  
      </h4>  
    </div>  
  </body>
</html>

And its AngularJS code:

var partsInvApp = angular.module('partsInvApp', []);

partsInvApp.factory('redirectInterceptor', function($q,$location,$window){
    return {
        'response': function(response){
            if (typeof response.data === 'string' && response.data.indexOf("Enter your Single Sign-On credentials below") > -1) {
                $window.location = 'partsInventory.html';
                return $q.reject(response);

            }
            else {
              return response || $q.when(response);
            }  
        }
    }
});

partsInvApp.config(['$httpProvider', function($httpProvider) {
    $httpProvider.defaults.withCredentials = true;
    $httpProvider.interceptors.push('redirectInterceptor');
  }]);


partsInvApp.controller('partsInvController', function ($scope, $http){
      $http.get('http://myservice.ateam.com:7777/services/partsinventory/parts').success(function(data) {
      $scope.parts = data.result;
    });

    $scope.orderPart = function(partId,amt) {

      if (!amt) {
        alert ("Please inform quantity.");
        return;
      }

      console.log("Placing part order for " + amt + " items of part " + partId);

      var data = $.param({partId: partId,amount: amt});

      console.log(data);
        
      var config = {
        headers : {
          'Content-Type': 'application/x-www-form-urlencoded;charset=utf-8;'
        }
      };

      $http.post('http://myservice.ateam.com:7777/services/partsinventory/order', data, config)

      .success(function (data, status, headers, config) {
        $scope.PostDataResponse = data.result;
      })
      .error(function (data, status, header, config) {
        $scope.ResponseDetails = "Data: " + data +
          "<hr/>status: " + status +
          "<hr/>headers: " + header +
          "<hr/>config: " + config;
      });
    };    
});

partsInvApp.controller('userProfileController', function ($scope, $http) {

      $http.get('http://myservice.ateam.com:7777/services/userprofile/userinfo').success(function(data) {
        userinfo = data.result;
        $scope.firstName = userinfo[0].firstName;
        $scope.lastName = userinfo[0].lastName;
      });
});

Conclusion

In this post I’ve demonstrated how to handle the Same-Origin policy implemented by web browsers in the context of an SPA under the protection of Oracle Access Manager, by using CORS headers along with an understanding of OAM-specific behavior. Combined with the usage of OAM’s Identity Assertion for secure identity propagation, this approach can be used in real-world implementation scenarios, showcasing OAM again as a powerful tool for building modern and secure web applications.


Oracle GoldenGate: Tables Without Keys

$
0
0

Introduction

For data replication products like Oracle GoldenGate (OGG), it is typically a best practice to have a primary key or unique index defined for tables being replicated. However, in practice this may not be possible when working with poorly designed databases or legacy applications. In this article we shall detail OGG’s rules for determining uniqueness in replicated data and present examples and best practices for working with tables without primary keys or unique indexes.

Main Article

The ANSI standard for RDBMS systems states that tables must be defined with constraints, which are rules that define which data values are valid during INSERT, UPDATE, and DELETE operations. The standard defines operational rules for four types of constraint that may be defined for a table: CHECK, PRIMARY KEY, UNIQUE, and FOREIGN KEY; however, different database platforms may allow other types. Furthermore, many RDBMS systems, including Oracle, do not enforce the standard and allow database engineers to create tables with lax or no constraints, which violates the principle that every row within a table must reflect a level of uniqueness across the data structure.

This poses a problem when replicating update and delete operations as the source data has no uniqueness, which may result in data integrity issues when applied to the target database.

How Oracle GoldenGate Ensures Row Uniqueness

OGG requires a unique row identifier on the source and target tables for update and delete operations. The row identifier is selected based upon the following order of priority, depending on the number and type of constraints.

1) Primary key.
2) Unique key.
3) If none of the preceding key types exist (even though there might be other types of keys defined on the table) OGG constructs a pseudo key of all columns.

These are the generic rules and there are additional specific considerations depending upon the database platform. For more details review your specific Oracle GoldenGate Installation and Configuration Guide.
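If you are unsure which category a table falls into, a quick look at the data dictionary settles it. The following is only a sketch for an Oracle source, assuming access to the DBA views; the owner and table name are placeholders matching the examples below:

-- Does the table have a primary key or unique constraint?
SELECT constraint_name, constraint_type
FROM   dba_constraints
WHERE  owner = 'LOREN' AND table_name = 'T_NOPK' AND constraint_type IN ('P', 'U');

-- Does the table have a unique index?
SELECT index_name, uniqueness
FROM   dba_indexes
WHERE  owner = 'LOREN' AND table_name = 'T_NOPK' AND uniqueness = 'UNIQUE';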

To demonstrate the operational differences, let’s setup replication for three tables:

create table loren.t_pkey (
c_num    number (10,0),
c_ts     timestamp(6),
c_txt    varchar (50),
primary key (c_num)
);
create table loren.t_ui (
c_num    number (10,0),
c_ts        timestamp(6),
c_txt       varchar (50)
);
create unique index  tui_idx on loren.t_ui (c_num);
create table loren.t_nopk (
c_num    number (10,0),
c_ts     timestamp(6),
c_txt    varchar (50)
);

 

For this example, we’ll use Integrated Extract and Classic Replicat configured as:

extract eorcl
userid c##ggs, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE BLOWFISH, ENCRYPTKEY DEFAULT
exttrail ./dirdat/eo
logallsupcols
updaterecordformat compact
table pdborcl.loren.*;
replicat cl_amer
userid ggs@pdbamer, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE BLOWFISH, ENCRYPTKEY DEFAULT
assumetargetdefs
map pdborcl.loren.*, target pdbamer.loren.*;

 

When Extract resolves the table structure, we see the following in the Extract Report file:

2016-05-23 13:49:00  INFO    OGG-06509  Using the following key columns for source table PDBORCL.LOREN.T_PKEY: C_NUM.
2016-05-23 13:49:00  INFO    OGG-06509  Using the following key columns for source table PDBORCL.LOREN.T_UI: C_NUM.
2016-05-23 13:49:00  WARNING OGG-06439  No unique key is defined for table T_NOPK. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
2016-05-23 13:49:00  INFO    OGG-06509  Using the following key columns for source table PDBORCL.LOREN.T_NOPK: C_NUM, C_TS, C_TXT.

Likewise, the Replicat report file shows:

2016-05-23 13:49:03  INFO    OGG-06510  Using the following key columns for target table PDBAMER.LOREN.T_PKEY: C_NUM.
2016-05-23 13:49:06  INFO    OGG-06510  Using the following key columns for target table PDBAMER.LOREN.T_UI: C_NUM.
2016-05-23 13:49:06  WARNING OGG-06439  No unique key is defined for table T_NOPK. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
2016-05-23 13:49:06  INFO    OGG-06510  Using the following key columns for target table PDBAMER.LOREN.T_NOPK: C_NUM, C_TS, C_TXT.

Both OGG Groups will use the defined Primary Key to maintain uniqueness for the table T_PKEY, the defined Unique Index for T_UI, and a pseudo key consisting of all table columns for T_NOPK.

Now let’s send an update operation through the replication feed and see what SQL DML statements are built by Replicat. To see the DML statements, I need to modify my Classic Replicat configuration. My Integrated Extract and Classic Replicat configuration files look like this:

extract eorcl
userid c##ggs, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE BLOWFISH, ENCRYPTKEY DEFAULT
exttrail ./dirdat/eo
logallsupcols
updaterecordformat compact
table pdborcl.loren.*;
replicat cl_amer
userid ggs@pdbamer, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE BLOWFISH, ENCRYPTKEY DEFAULT
assumetargetdefs
showsyntax
map pdborcl.loren.*, target pdbamer.loren.*;

 

To see the DML statements, I must start the Replicat from a command prompt.

On the source database, I execute the following update statements:

update loren.t_pkey set c_txt='Update of row 4.' where c_num=4;
update loren.t_ui set c_txt='Update of row 4.' where c_num=4;
update loren.t_nopk set c_txt='Update of row 4.' where c_num=4;
commit;

Replicat displays the DML created from this source transaction:

UPDATE "LOREN"."T_PKEY" x SET x."C_TXT" = 'Update of row 4.' WHERE x."C_NUM"='4'
UPDATE "LOREN"."T_UI" x SET x."C_TS" = TO_TIMESTAMP('2016-05-23 13:48:58.163000000','YYYY-MM-DD HH24:MI:SS.FF'),x."C_TXT" = 'Update of row 4.' WHERE x."C_NUM"='4'
UPDATE "LOREN"."T_NOPK" x SET x."C_NUM" = '4',x."C_TS" = TO_TIMESTAMP('2016-05-23 13:48:58.178000000','YYYY-MM-DD HH24:MI:SS.FF'),x."C_TXT" = 'Update of row 4.' WHERE x."C_NUM"='4' AND x."C_TS"=TO_TIMESTAMP('2016-05-23 13:48:58.178000000','YYYY-MM-DD HH24:MI:SS.FF') AND x."C_TXT"='This is row 4\00\00\00' AND ROWNUM = 1

As you can see above, for the tables with a Primary Key and Unique Index, Replicat used that column in the WHERE clause. For the table with neither, all columns and their before-image data were used to build the WHERE clause. Because there is no Primary Key or Unique Index on the target table, we must assume duplicate data rows may exist; therefore, the number of rows updated is limited to one via the "ROWNUM = 1" predicate appended to the WHERE clause.

Let’s do another update using the column c_txt in the where clause of our source statement:

update loren.t_pkey set c_txt='Update of row 2.' where c_txt='This is row 2';
update loren.t_ui set c_txt='Update of row 2.' where c_txt='This is row 2';
update loren.t_nopk set c_txt='Update of row 2.' where c_txt='This is row 2';
commit;

The statements built and executed by Replicat on the target are:

UPDATE "LOREN"."T_PKEY" x SET x."C_TXT" = 'Update of row 2.' WHERE x."C_NUM"='2'
UPDATE "LOREN"."T_UI" x SET x."C_TS" = TO_TIMESTAMP('2016-05-23 13:48:58.163000000','YYYY-MM-DD HH24:MI:SS.FF'),x."C_TXT" = 'Update of row 2.' WHERE x."C_NUM"='2'
UPDATE "LOREN"."T_NOPK" x SET x."C_NUM" = '2',x."C_TS" = TO_TIMESTAMP('2016-05-23 13:48:58.178000000','YYYY-MM-DD HH24:MI:SS.FF'),x."C_TXT" = 'Update of row 2.' WHERE x."C_NUM"='2' AND x."C_TS"=TO_TIMESTAMP('2016-05-23 13:48:58.178000000','YYYY-MM-DD HH24:MI:SS.FF') AND x."C_TXT"='This is row 2\00\00\00' AND ROWNUM = 1

Once again we see that where a Primary Key or Unique Index exists, Replicat will use it in the WHERE clause, while all columns and their before-image data are used in the WHERE clause for the table with neither.

Why is using all of the columns in an update statement a bad thing? For one, this type of DML is very expensive to execute, as the database performs a full table scan. For my very small test table this is no big deal; but what if the table had millions or billions of rows? Then we would see a degradation in the performance of both the database and Replicat.

Define A Pseudo Key

Now that we have an understanding of how Extract and Replicat function when tables have primary keys, unique indexes, or neither defined, let’s take things a bit further and have OGG override the database settings.

To do this, we set the KEYCOLS option of the Extract TABLE and Replicat MAP statements. KEYCOLS is shorthand for "key columns": a user-defined column, or set of columns, that OGG will use instead of an existing Primary Key or Unique Index. For tables without Primary Keys or Unique Indexes, OGG treats this pseudo-key as a Primary Key.

When using KEYCOLS, it is best practice to define the same pseudo-key in both Extract and Replicat. The pseudo-key data must also create uniqueness for the row being applied to the target; i.e., the WHERE clause created by Replicat must modify one, and only one, row. Furthermore, the database must log the records for these columns as part of any update or delete operations performed. For my Oracle Database, I set this by executing the GGSCI command "add schematrandata pdborcl.loren allcols".
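As a sketch, the supplemental logging setup and a quick verification from GGSCI look like this (the schema follows the examples in this article; replace the credentials with your own):

dblogin userid c##ggs, password <password>
add schematrandata pdborcl.loren allcols
info schematrandata pdborcl.loren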

In my test tables, I have a timestamp column defined. To use it as a pseudo-key, I modify my Extract and Replicat as follows:

extract eorcl
userid c##ggs, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE BLOWFISH, ENCRYPTKEY DEFAULT
exttrail ./dirdat/eo
logallsupcols
updaterecordformat compact
table pdborcl.loren.t_pkey, KEYCOLS (c_ts);
table pdborcl.loren.t_ui, KEYCOLS (c_ts);
table pdborcl.loren.t_nopk, KEYCOLS (c_ts);
replicat cl_amer
userid ggs@pdbamer, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE BLOWFISH, ENCRYPTKEY DEFAULT
assumetargetdefs
showsyntax
map pdborcl.loren.t_pkey, target pdbamer.loren.t_pkey, KEYCOLS (c_ts);
map pdborcl.loren.t_ui, target pdbamer.loren.t_ui, KEYCOLS (c_ts);
map pdborcl.loren.t_nopk, target pdbamer.loren.t_nopk, KEYCOLS (c_ts);

Normally, if a table has a valid primary key or unique index defined, we would not use KEYCOLS to override the database settings. I am doing so here just to show how OGG operates when this option is set.

Now let’s run another update transaction on the source tables:

update loren.t_pkey set c_txt='Update of row 5.' where c_num=5;
update loren.t_ui set c_txt='Update of row 5.' where c_num=5;
update loren.t_nopk set c_txt='Update of row 5.' where c_num=5;
commit;

When Integrated Extract resolves the table structure, we see the following in the Extract Report file:

2016-05-23 15:37:55  INFO    OGG-06509  Using the following key columns for source table PDBORCL.LOREN.T_PKEY: C_TS.
2016-05-23 15:37:55  INFO    OGG-06509  Using the following key columns for source table PDBORCL.LOREN.T_UI: C_TS.
2016-05-23 15:37:55  INFO    OGG-06509  Using the following key columns for source table PDBORCL.LOREN.T_NOPK: C_TS.

Replicat reports:

2016-05-23 15:44:21  INFO    OGG-06510  Using the following key columns for target table PDBAMER.LOREN.T_PKEY: C_TS.
UPDATE "LOREN"."T_PKEY" x SET x."C_NUM" = '5',x."C_TXT" = 'Update of row 5.' WHERE x."C_TS"=TO_TIMESTAMP('2016-05-23 13:48:58.147000000','YYYY-MM-DD HH24:MI:SS.FF')
2016-05-23 15:53:25  INFO    OGG-06510  Using the following key columns for target table PDBAMER.LOREN.T_UI: C_TS.
UPDATE "LOREN"."T_UI" x SET x."C_NUM" = '5',x."C_TXT" = 'Update of row 5.' WHERE x."C_TS"=TO_TIMESTAMP('2016-05-23 13:48:58.163000000','YYYY-MM-DD HH24:MI:SS.FF')
2016-05-23 15:53:35  INFO    OGG-06510  Using the following key columns for target table PDBAMER.LOREN.T_NOPK: C_TS.
UPDATE "LOREN"."T_NOPK" x SET x."C_NUM" = '5',x."C_TXT" = 'Update of row 5.' WHERE x."C_TS"=TO_TIMESTAMP('2016-05-23 13:48:58.178000000','YYYY-MM-DD HH24:MI:SS.FF') AND ROWNUM = 1

Notice that Replicat reports that the column C_TS is now being used as a key column, and it is the column specified in the WHERE clause built from the source updates.
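Because Replicat now filters on C_TS, it is usually worth indexing the KEYCOLS column(s) on the target table so these updates do not require full table scans. A minimal sketch (the index name is illustrative):

create index t_nopk_cts_idx on loren.t_nopk (c_ts);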

Tips For Tables Without A Primary Key Or Unique Index

The following tips have been gathered from Oracle field personnel. The list is not all-inclusive for what to do when replicating tables without primary keys or unique indexes, but it does provide a good summary of things to consider.

1) Classify the tables without PK/UIs into 3 groups: high, medium, or low update/delete activity. Focus most of your tuning effort on the high-activity tables.
a) If high-activity tables have a subset of columns that will ensure uniqueness (even though there isn’t a PK or UI), use KEYCOLS in the Extract and Replicat to reduce the data that needs to be sent from source to target.
b) If time permits, analyze the medium-activity tables too.
c) No need to worry about the low-activity tables, since the impact will be minimal overall.

2) Be aware that all updates will be processed like PK updates, meaning that the full before and after image will be sent across to the target.
a) The before values will be used in the WHERE clause and the after values in the SET clause of the UPDATE statement created by Replicat. The SQL statement could be quite large for tables with lots of columns; this is another reason to determine whether a subset of columns will ensure uniqueness and use KEYCOLS.

3) If you’re planning to use a coordinated Replicat, it is best to choose a few columns that have low odds of being updated and specify them with THREAD or THREADRANGE, so the coordinator Replicat can apply its hash algorithm to them to determine the processing thread.

4) Tables that have no key cannot be used for conflict detection and resolution (CDR).

5) Keep in mind the OGG row size limit of 4 MB for the before AND after image combined.

6) BATCHSQL has a limitation of 25 KB per row; if there is no PK and there are a lot of columns, BATCHSQL should be disabled.

Summary

In this article we described and presented examples of how Oracle GoldenGate functions for tables with Primary Keys, Unique Indexes, and neither. We also presented examples of how end users can create pseudo-keys that Oracle GoldenGate will use for target table update and delete operations, and finished by providing tips to keep in mind when working with tables without primary keys or unique indexes.

Extending Supply Chain Management using Process Cloud Service

$
0
0

Introduction

Oracle Process Cloud Service (PCS), a Platform-as-a-Service offering, enables human workflow capabilities on the cloud with easy-to-use composer and work space. PCS allows authoring of processes using business-analyst friendly BPMN notation over swim-lanes. PCS eliminates the burden of building and maintaining on-premise business process management platforms and allows enterprises to easily transition to the cloud.

 

Key features of Process Cloud Service include:

  • Invoke workflows through Web forms, SOAP service and REST service.
  • Invoke external SOAP and REST/JSON services.
  • Invoke external services synchronously or asynchronously.
  • Import existing BPMN based workflows.

With the rapid adoption of Oracle SaaS applications, PCS comes in handy as an option to extend SaaS with human-task workflows. Here are some scenarios where PCS is a strong candidate:

  • Although most Oracle SaaS offerings support workflow mechanisms natively, PCS provides a consistent workflow mechanism when multiple SaaS products are involved.
  • Workflow capabilities needed to rapidly integrate processes across on-premise and Cloud applications.
  • Orchestration use cases with heavy use of human tasks.

For the purpose of this blog, let’s look at extending Supply Chain Management Cloud with a workflow in PCS to capture, review and submit sales orders. The principles explained in this blog apply to other Oracle SaaS services as well.

Sample Workflow

In this scenario, users in an enterprise submit orders to a PCS workflow. PCS then sends the orders to Supply Chain Management Cloud’s Distributed Order Orchestration (DOO) web services. The status of the SCM Cloud sales order is retrieved for the user’s review before the workflow ends. This sample demonstrates the capabilities of PCS with basic functions of PCS and SCM Cloud; it could be extended for advanced use cases. Figure 1 shows the high-level workflow.

Figure 1

Workflow overview

 

Environment requirements for the sample

Below are the requirements to enable the sample workflow between PCS and SCM Cloud.

Process Cloud Service

  • Access to PCS Composer and the sample workflows.
  • Network connectivity between PCS and SCM verified. This shouldn’t be an issue when using SCM Cloud; however, if you are using the on-premise version of Oracle Fusion SCM, it is necessary to check this connectivity and make any required network changes.

Supply Chain Management Cloud (R11)

  • Access to SCM Cloud with implementation privileges provisioned.
  • The URL for the Order Management order capture service endpoint.
  • Relevant functional setup completed for source systems and item relationships.
  • The job that collects order reference data enabled and running.

The SCM Cloud Order Management module must be implemented in order for the order capture services to function properly. For more information on configuring Order Management, refer to the white papers listed at the bottom of this post. These documents might require access to the Oracle support portal.

 

SCM Cloud order management services

For this sample, a test instance of Oracle Supply Chain Management Cloud Release 11 was used. The order capture service accepts orders from upstream capture systems through ProcessOrderRequest calls, and it provides details of an order through a GetOrderDetails call. XML payloads for both services were captured from the sample workflow and are provided below; for the sake of brevity, detailed instructions on Order Management configuration are left to the support documentation.

As of Release 11, SCM Cloud exposes only SOAP web services for order capture. The service endpoint for R11 is not listed in the catalog. The endpoint is:

https://<hostname>:<port>/soa-infra/services/default/DooDecompReceiveOrderExternalComposite/ReceiveOrderRequestService. Append “?WSDL” to the end of the endpoint to retrieve the WSDL.
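A quick way to verify connectivity and credentials against this endpoint is to fetch the WSDL with a command-line tool such as curl; the host, port, and credentials below are placeholders:

curl -u scm_user:scm_password "https://<hostname>:<port>/soa-infra/services/default/DooDecompReceiveOrderExternalComposite/ReceiveOrderRequestService?WSDL"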

 

Sample payload for ProcessOrderRequest:

<?xml version = '1.0' encoding = 'UTF-8'?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://www.w3.org/2005/08/addressing">
   <env:Header>
      <wsa:To>https://eczc-test.scm.em2.oraclecloud.com:443/soa-infra/services/default/DooDecompReceiveOrderExternalComposite/ReceiveOrderRequestService</wsa:To>
      <wsa:Action>ProcessOrderRequestSync</wsa:Action>
      <wsa:MessageID>urn:eebb5147-2840-11e6-9a89-08002741191a</wsa:MessageID>
      <wsa:RelatesTo>urn:eebb5147-2840-11e6-9a89-08002741191a</wsa:RelatesTo>
      <wsa:ReplyTo>
         <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
         <wsa:ReferenceParameters>
            <orasoa:EndpointAddress xmlns:orasoa="http://xmlns.oracle.com/soa">http://localhost:7003/soa-infra/services/testing/SalesOrderProcess!595*soa_8366f568-20d7-4a6a-ad68-58effa7a29e3/SCMWebService%23SCMSalesOrderProcess/Services.Externals.SCMWebService.reference</orasoa:EndpointAddress>
            <orasoa:PortType xmlns:ptns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/DooDecompReceiveOrderExternalComposite" xmlns:orasoa="http://xmlns.oracle.com/soa">ptns:ReceiveOrderRequestServiceCallback</orasoa:PortType>
            <instra:tracking.ecid xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">297bed2c-1dde-4850-8a34-cf2da58d19ca-0001398a</instra:tracking.ecid>
            <instra:tracking.conversationId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">urn:eebb5147-2840-11e6-9a89-08002741191a</instra:tracking.conversationId>
            <instra:tracking.FlowEventId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">40879</instra:tracking.FlowEventId>
            <instra:tracking.FlowId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">40047</instra:tracking.FlowId>
            <instra:tracking.CorrelationFlowId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">0000LKDtQPbFw000jzwkno1NJbIb0000LQ</instra:tracking.CorrelationFlowId>
         </wsa:ReferenceParameters>
      </wsa:ReplyTo>
      <wsa:FaultTo>
         <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
         <wsa:ReferenceParameters>
            <orasoa:EndpointAddress xmlns:orasoa="http://xmlns.oracle.com/soa">http://localhost:7003/soa-infra/services/testing/SalesOrderProcess!595*soa_8366f568-20d7-4a6a-ad68-58effa7a29e3/SCMWebService%23SCMSalesOrderProcess/Services.Externals.SCMWebService.reference</orasoa:EndpointAddress>
            <orasoa:PortType xmlns:ptns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/DooDecompReceiveOrderExternalComposite" xmlns:orasoa="http://xmlns.oracle.com/soa">ptns:ReceiveOrderRequestServiceCallback</orasoa:PortType>
         </wsa:ReferenceParameters>
      </wsa:FaultTo>
   </env:Header>
   <env:Body>
      <process xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/DooDecompReceiveOrderExternalComposite">
         <OrchestrationOrderRequest>
            <SourceTransactionIdentifier xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">1154551RBWM</SourceTransactionIdentifier>
            <SourceTransactionSystem xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">OPS</SourceTransactionSystem>
            <SourceTransactionNumber xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">1154551RBWM</SourceTransactionNumber>
            <BuyingPartyName xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">Computer Service and Rentals</BuyingPartyName>
            <TransactionalCurrencyCode xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">USD</TransactionalCurrencyCode>
            <TransactionOn xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">2016-06-01T14:36:42.649-07:00</TransactionOn>
            <RequestingBusinessUnitIdentifier xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">300000001548368</RequestingBusinessUnitIdentifier>
            <PartialShipAllowedFlag xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">false</PartialShipAllowedFlag>
            <OrchestrationOrderRequestLine xmlns:ns2="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/" xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/receiveTransform/receiveSalesOrder/model/">
               <ns2:SourceTransactionLineIdentifier>1</ns2:SourceTransactionLineIdentifier>
               <ns2:SourceTransactionScheduleIdentifier>1</ns2:SourceTransactionScheduleIdentifier>
               <ns2:SourceTransactionLineNumber>1</ns2:SourceTransactionLineNumber>
               <ns2:SourceTransactionScheduleNumber>1</ns2:SourceTransactionScheduleNumber>
               <ns2:ProductNumber>AS54888</ns2:ProductNumber>
               <ns2:OrderedQuantity>1</ns2:OrderedQuantity>
               <ns2:OrderedUOMCode>zzx</ns2:OrderedUOMCode>
               <ns2:OrderedUOM>EA</ns2:OrderedUOM>
               <ns2:RequestingBusinessUnitIdentifier>300000001293806</ns2:RequestingBusinessUnitIdentifier>
               <ns2:ParentLineReference/>
               <ns2:RootParentLineReference/>
               <ns2:ShippingInstructions>BM Ship Instructions- Ship it in a day</ns2:ShippingInstructions>
               <ns2:PackingInstructions/>
               <ns2:RequestedShipDate>2016-12-26T00:00:00</ns2:RequestedShipDate>
               <ns2:PaymentTerms/>
               <ns2:TransactionCategoryCode>ORDER</ns2:TransactionCategoryCode>
               <ns2:BillToCustomerName>Computer Service and Rentals</ns2:BillToCustomerName>
               <ns2:BillToAccountSiteUseIdentifier>300000001469016</ns2:BillToAccountSiteUseIdentifier>
               <ns2:BillToCustomerIdentifier>300000001469002</ns2:BillToCustomerIdentifier>
               <ns2:PartialShipAllowedFlag>false</ns2:PartialShipAllowedFlag>
               <ns2:UnitListPrice>100.0</ns2:UnitListPrice>
               <ns2:UnitSellingPrice>100.0</ns2:UnitSellingPrice>
               <ns2:ContractEndDate>2018-12-13</ns2:ContractEndDate>
               <ns2:ExtendedAmount>100.0</ns2:ExtendedAmount>
               <ns2:TaxExempt>S</ns2:TaxExempt>
               <ns2:ShipSetName>{{SHIPSET}}</ns2:ShipSetName>
               <ns2:OrigSysDocumentReference>ORIGSYS</ns2:OrigSysDocumentReference>
               <ns2:OrigSysDocumentLineReference>ORIGSYSLINE</ns2:OrigSysDocumentLineReference>
               <ns2:LineCharge>
                  <ns2:ChargeDefinitionCode>QP_SALE_PRICE</ns2:ChargeDefinitionCode>
                  <ns2:ChargeSubtypeCode>ORA_PRICE</ns2:ChargeSubtypeCode>
                  <ns2:PriceTypeCode>ONE_TIME</ns2:PriceTypeCode>
                  <ns2:PricedQuantity>1</ns2:PricedQuantity>
                  <ns2:PrimaryFlag>true</ns2:PrimaryFlag>
                  <ns2:ApplyTo>PRICE</ns2:ApplyTo>
                  <ns2:RollupFlag>false</ns2:RollupFlag>
                  <ns2:SourceChargeIdentifier>SC2</ns2:SourceChargeIdentifier>
                  <ns2:ChargeTypeCode>ORA_SALE</ns2:ChargeTypeCode>
                  <ns2:ChargeCurrencyCode>USD</ns2:ChargeCurrencyCode>
                  <ns2:SequenceNumber>2</ns2:SequenceNumber>
                  <ns2:PricePeriodicityCode/>
                  <ns2:GsaUnitPrice/>
                  <ns2:ChargeComponent>
                     <ns2:ChargeCurrencyCode>USD</ns2:ChargeCurrencyCode>
                     <ns2:HeaderCurrencyCode>USD</ns2:HeaderCurrencyCode>
                     <ns2:HeaderCurrencyExtendedAmount>150.0</ns2:HeaderCurrencyExtendedAmount>
                     <ns2:PriceElementCode>QP_LIST_PRICE</ns2:PriceElementCode>
                     <ns2:SequenceNumber>1</ns2:SequenceNumber>
                     <ns2:PriceElementUsageCode>LIST_PRICE</ns2:PriceElementUsageCode>
                     <ns2:ChargeCurrencyUnitPrice>150.0</ns2:ChargeCurrencyUnitPrice>
                     <ns2:HeaderCurrencyUnitPrice>150.0</ns2:HeaderCurrencyUnitPrice>
                     <ns2:RollupFlag>false</ns2:RollupFlag>
                     <ns2:SourceParentChargeComponentId/>
                     <ns2:SourceChargeIdentifier>SC2</ns2:SourceChargeIdentifier>
                     <ns2:SourceChargeComponentIdentifier>SCC3</ns2:SourceChargeComponentIdentifier>
                     <ns2:ChargeCurrencyExtendedAmount>150.0</ns2:ChargeCurrencyExtendedAmount>
                  </ns2:ChargeComponent>
                  <ns2:ChargeComponent>
                     <ns2:ChargeCurrencyCode>USD</ns2:ChargeCurrencyCode>
                     <ns2:HeaderCurrencyCode>USD</ns2:HeaderCurrencyCode>
                     <ns2:HeaderCurrencyExtendedAmount>150.0</ns2:HeaderCurrencyExtendedAmount>
                     <ns2:PriceElementCode>QP_NET_PRICE</ns2:PriceElementCode>
                     <ns2:SequenceNumber>3</ns2:SequenceNumber>
                     <ns2:PriceElementUsageCode>NET_PRICE</ns2:PriceElementUsageCode>
                     <ns2:ChargeCurrencyUnitPrice>150.0</ns2:ChargeCurrencyUnitPrice>
                     <ns2:HeaderCurrencyUnitPrice>150.0</ns2:HeaderCurrencyUnitPrice>
                     <ns2:RollupFlag>false</ns2:RollupFlag>
                     <ns2:SourceParentChargeComponentId/>
                     <ns2:SourceChargeIdentifier>SC2</ns2:SourceChargeIdentifier>
                     <ns2:SourceChargeComponentIdentifier>SCC1</ns2:SourceChargeComponentIdentifier>
                     <ns2:ChargeCurrencyExtendedAmount>150.0</ns2:ChargeCurrencyExtendedAmount>
                  </ns2:ChargeComponent>
               </ns2:LineCharge>
            </OrchestrationOrderRequestLine>
         </OrchestrationOrderRequest>
      </process>
   </env:Body>
</env:Envelope>

Sample payload for GetOrderDetails:

<?xml version = '1.0' encoding = 'UTF-8'?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://www.w3.org/2005/08/addressing">
   <env:Header>
      <wsa:To>https://eczc-test.scm.em2.oraclecloud.com:443/soa-infra/services/default/DooDecompReceiveOrderExternalComposite/ReceiveOrderRequestService</wsa:To>
      <wsa:Action>GetOrderDetailsSync</wsa:Action>
      <wsa:MessageID>urn:109ed9ec-2841-11e6-9a89-08002741191a</wsa:MessageID>
      <wsa:RelatesTo>urn:eebb5147-2840-11e6-9a89-08002741191a</wsa:RelatesTo>
      <wsa:ReplyTo>
         <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
         <wsa:ReferenceParameters>
            <orasoa:EndpointAddress xmlns:orasoa="http://xmlns.oracle.com/soa">http://localhost:7003/soa-infra/services/testing/SalesOrderProcess!595*soa_8366f568-20d7-4a6a-ad68-58effa7a29e3/SCMWebService%23SCMSalesOrderProcess/Services.Externals.SCMWebService.reference</orasoa:EndpointAddress>
            <instra:tracking.ecid xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">297bed2c-1dde-4850-8a34-cf2da58d19ca-0001398a</instra:tracking.ecid>
            <instra:tracking.conversationId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">urn:eebb5147-2840-11e6-9a89-08002741191a</instra:tracking.conversationId>
            <instra:tracking.FlowEventId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">40886</instra:tracking.FlowEventId>
            <instra:tracking.FlowId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">40047</instra:tracking.FlowId>
            <instra:tracking.CorrelationFlowId xmlns:instra="http://xmlns.oracle.com/sca/tracking/1.0">0000LKDtQPbFw000jzwkno1NJbIb0000LQ</instra:tracking.CorrelationFlowId>
         </wsa:ReferenceParameters>
      </wsa:ReplyTo>
      <wsa:FaultTo>
         <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
         <wsa:ReferenceParameters>
            <orasoa:EndpointAddress xmlns:orasoa="http://xmlns.oracle.com/soa">http://localhost:7003/soa-infra/services/testing/SalesOrderProcess!595*soa_8366f568-20d7-4a6a-ad68-58effa7a29e3/SCMWebService%23SCMSalesOrderProcess/Services.Externals.SCMWebService.reference</orasoa:EndpointAddress>
         </wsa:ReferenceParameters>
      </wsa:FaultTo>
   </env:Header>
   <env:Body>
      <GetOrderDetailsProcessRequest xmlns="http://xmlns.oracle.com/apps/scm/doo/decomposition/orderDetailServices/DooDecompOrderDetailSvcComposite">
         <SourceOrderInput xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dood="http://xmlns.oracle.com/apps/scm/doo/decomposition/orderDetailServices/DooDecompOrderDetailSvcComposite" xmlns:mod="http://xmlns.oracle.com/apps/scm/doo/decomposition/orderDetailServices/model/">
            <mod:SourceOrderSystem>OPS</mod:SourceOrderSystem>
            <mod:SalesOrderNumber>1154660RBWM</mod:SalesOrderNumber>
         </SourceOrderInput>
      </GetOrderDetailsProcessRequest>
   </env:Body>
</env:Envelope>

PCS workflow in detail

The sample PCS workflow has submission and approval human tasks, associated web forms, a basic gateway rule, a REST web service call to dynamically populate a drop-down list, and two SOAP web service calls to Order Management services. The Order Management service in this case is secured with HTTP basic authentication and is accessible only over TLS. Figure 2 shows the swim-lane representation of the workflow with self-describing flow element names. We’ll focus on the SCM Cloud-specific aspects of the process flow. Process Cloud Service provides several pre-built samples and pattern templates, both of which can be used for quick development of a flow.

Figure 2

processflow



Adding a connector to SCM Cloud web service

In order to use web services in a process, a web service connector with the WSDL and associated schema files must be added to the PCS project in Composer. Figure 3 shows how to create a connector. Once a connector is created, it is available for implementation in a Service flow element. As part of the connector setup, Composer allows security to be configured with options such as HTTP basic authentication or a WS-Security username token. Note that these settings can be changed on the customization page when the flow is deployed.

Figure 3

WebServiceConnector

 

Associating Data between PCS flow elements

As the PCS flow transitions between flow elements, the data input to an element and the data output from it need to be associated with suitable data objects. Data objects can be based on pre-built types such as int or String, or on one of the types defined in imported XML schema files. XML schema types should be imported under the “Business Objects” section of Composer. Figure 4 shows the data association that captures input for the order capture service. As shown, some elements are captured from a web form submitted by a user, and many others are hard-coded for this sample flow.

Figure 4

dataassociation

 

Building web forms from pre-defined business types

Human tasks in PCS are represented by web form-based UIs. Web forms can be built quickly from pre-defined data types, such as XML complex types. Web form elements inherit the data constraints defined in the complex type. Once a form is generated, fields can be rearranged to improve the UI. Figure 5 shows a web form generated from the output of the GetOrderDetails web service call, with details returned by SCM Cloud.

Figure 5

OrderDetailsStatus

 

Customizing process during deployment

Process Cloud Service supports deployments from Composer to a test environment and then to a production or subsequent test environments. During deployment, environment-specific information such as endpoints and credentials can be updated. All web service connectors used by the project are available to be configured. Figure 6 shows the customization page for a test deployment.

Figure 6

DeploymentCustomization

Conclusion

Process Cloud Service offers a quick and reliable way to author and deploy workflows that can orchestrate human tasks and system interactions using industry-standard notations and protocols. This is very useful for integrating and extending SaaS applications from Oracle and other vendors. PCS allows enterprises to leverage in-house expertise in process development without the hassle of building and maintaining the platform, or having to master process flow implementation techniques in multiple SaaS products. This article covered a specific use case where PCS captures orders through rules and approval tasks, sends the order to SCM Cloud’s order capture service, and finally obtains order status and other details from SCM Cloud.

 

References

Using Web Services with Oracle Fusion Order Management Cloud (Doc ID 2051640.1)

Integrating Non-Fusion Fulfillment System With Oracle Fusion Order Management Cloud (Doc ID 2123078.1)

Manage Source Systems for Order Management Cloud

Fusion Applications: Priming the Server Startup

$
0
0

Introduction

After a restart of Fusion Applications (FA), the very first clicks can at times get slow responses compared to subsequent clicks to the same pages, because the caches are not yet populated. As with any Java application this is normal, since a large number of objects (classes, ADF objects, profiles, etc.) need to be loaded before the first request for a web page can be served. Most objects are cached in various layers, and subsequent requests are then very fast.

In rare cases where very heavy usage arrives suddenly after a server restart (for example, hundreds of bank tellers logging in at 8am sharp after an overnight restart), this can lead to load issues that cascade into prolonged delays well after the very first call.

This article provides a method to address this.

Main Article

Cold-server (the term used to indicate a freshly started server) performance can be improved in FA by intelligently priming the various system caches to provide a richer user experience right from the first click. The idea is to identify the most-used artifacts and pre-load them at server start, avoiding on-demand loading during the first request.

The initial work done in this area has been in the Fusion Applications mid-tier Java layers, and further reviews are being done in other areas. Pre-loading artifacts during server startup improves first-user login and navigation response times, bringing them in line with warm-server response times.

This capability has been available since Release 8, though it has been modest in scope and not well known until now.

For the Fusion Applications layer pre-loading, a new application profile has been added: FND_PRELOAD_ARTIFACTS.

Once this profile is enabled and the system restarted, it will first automatically identify a list of artifacts to pre-load and save it in the file system (currently, by default, in each domain home as files named _preload.log). The next server start will pre-load the artifacts listed in these preload files, so even the first click will be faster than if these objects were loaded on demand when users log in.
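To confirm that the preload lists were generated, you can look for them on the file system after the restart; a sketch, assuming the file names end with _preload.log and that $DOMAIN_HOME points at a domain home:

find $DOMAIN_HOME -name "*_preload.log" -exec ls -l {} \;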

Steps to enable the FND_PRELOAD_ARTIFACTS profile are shown below :

1. Go to the Fusion Applications Setup and Maintenance page.
A typical default install URL (change the host and port as needed) is:
https://fa-external.fa.com:10614/setup/faces/TaskListManagerTop
Log in as a user with the Application Implementation Consultant or Administrator role.

2. Select Tasks tab and search for “Manage Administrator Profile Values”.

csb1

3. In the Search Results section click on the “Go to Task” icon.

csb2

4. In the Manage Administrator Profile Values screen search for Profile Option Code “FND_PRELOAD_ARTIFACTS”.

csb3

5. In the search results section you should see FND_PRELOAD_ARTIFACTS.

csb4

6. In the Profile Values section, for the row with the default Profile Level "Site", set the Profile Value (right-most column) to "Yes" using the drop-down list. (On some monitors, you may need to scroll right or adjust the column sizes to see the Profile Value column.)

7. Click on Save button at the top right.

Your system is now enabled to identify a list of artifacts to pre-load during the next shutdown and pre-load these at the next startup.

Tests have shown improvements of about 10 to 20% with this pre-loading technique, and more work is in progress to broaden the scope and improve this further; we will share updates when possible.

Summary

While first-click performance after an application server cold start is not a big concern for normal users, it could impact some users, and we have provided a peek at how pre-loading artifacts in the Java mid-tier layer can help. More work is in progress on other layers, and we will post updates when possible.

Meanwhile, if you do test this, please feel free to drop a note on how much improvement you see in your system for cold-start behavior on the very first calls. Use some normal usage scenarios your users perform, like creating an order, an expense, or a user in the system.

Also remember that this applies to the initial calls to the system after a restart; any performance issues later on need to be reviewed separately.

Have fun with Fusion Applications !

Some ICS API scripts for doing useful things

$
0
0

ICS Rest API

ICS has had a REST API since April, and I have developed some quick and useful scripts that I would like to share.

You can see the REST API documentation here: http://docs.oracle.com/cloud/latest/intcs_gs/ICSRA/

About these scripts

They’re all implemented with Python 3.5 and pycurl. Get python here: https://www.python.org/downloads/release/python-350/ and pycurl here: http://pycurl.io/. Note that on Linux, you’ll probably have both of these readily available in your distribution. Sadly, this post cannot be a technical support post for installing python and pycurl on Windows.

They use Python regular expressions: https://docs.python.org/3/library/re.html#regular-expression-syntax. The default regular expression matches NOTHING; this is by design: you can run these scripts to test connectivity and they won’t do anything else by default, except verify that you can connect and download the relevant resources (the list of integrations or the list of connections).

Help

I tried to make them helpful by themselves (--help works), but here’s a brief overview of each script and its intentions.

Logging

All output is logged to a log file (output.log is the default; use --logfile=[wherever] to change it), and --log=DEBUG will enable a lot of additional debugging information if something isn’t working or you want to see the details of the JSON being exchanged between client and server.

Common parameters

  • [baseurlforicsserver] is the base url for your ICS server. This is the URL without the /ics/faces/global suffix. For example for the ICS instance https://ateamics-thisissecret.integration.us2.oraclecloud.com/ics/faces/global you would use https://ateamics-thisissecret.integration.us2.oraclecloud.com
  • [userwithicsadminpermission] is a user identity in ICS with administrative permissions
  • [password] is their password
  • [pattern] is a python regular expression, with an optional field prefix: 'name:.*' will apply the regular expression .* to the “name” field of the integration (name is the default field if none is specified); 'lastUpdatedBy:christian.*' would apply the regular expression christian.* to the field “lastUpdatedBy”

Download

The scripts can be downloaded here: http://www.ateam-oracle.com/wp-content/uploads/2016/06/icsapiscripts.zip

Exporting and cleaning up integrations

First up, a script for exporting and cleaning up ICS integrations. ICS can sometimes get a bit cluttered, with people popping up test integrations left and right to try different things. This isn’t necessarily a great state of affairs, so this script is useful: it lets you use a regular expression against a particular field in the ICS JSON to identify integrations you wish to deactivate, export, and potentially delete. Export functionality is always enabled; this script will always export the integrations it matches. However, deactivation and deletion are optional toggles. (Deactivation, if enabled, is done before export, and deletion, if enabled, is done after export.)

./exportintegrations.py [pattern] --exportdir [directorytoexportto] --server [baseurlforicsserver] --user [userwithicsadminpermission] --pass [password]

  • [directorytoexportto] is a directory name. If it doesn’t exist it will be created. All matched integrations will be exported to this directory, as IAR files, the same as if they had been exported from the web console

There’s a couple more arguments that toggle various functionality:

  • --deactivate will deactivate all the integrations that match
  • --delete will also delete all the integrations after exporting them.

Cleaning up connections

Second up, a connections cleanup script. This is much akin to the first script; it is designed to clean up an environment that has become full of experiments. However, connections have no export capability, so they can’t be backed up for safekeeping (this is as much by design as an oversight: connections encode secrets such as credentials to other systems, and as such, being able to export them would be a significant security risk).

./cleanconnections.py [pattern] --server [baseurlforicsserver] --user [userwithicsadminpermission] --pass [password]

It takes the common arguments, and a --delete argument will actually perform the delete operation. Without it, it’ll simply log what it would delete based on your pattern matching.

Bouncing integrations

Finally a bounce script. This script will bounce every integration that matches, by issuing a deactivate followed by an activate request. This may be useful on occasion. Again, it takes the common arguments.

./bounceintegrations.py [pattern] --server [baseurlforicsserver] --user [userwithicsadminpermission] --pass [password]
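For example, to bounce every integration whose name starts with ORDER_ (the server URL, user, and password below are purely illustrative):

./bounceintegrations.py 'name:ORDER_.*' --server https://myics.integration.us2.oraclecloud.com --user ics.admin --pass Welcome1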

Summary

As you can see, the ICS REST API is already useful for doing some interesting tasks that are painstaking when using the UI. I hope to expand on it in the coming weeks to more useful tasks, like automatically tracking development in ICS using source code control.

Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI)

$
0
0

Introduction

This article demonstrates how to prepare the Amazon Elastic MapReduce (EMR) cloud service for the Oracle Data Integrator (ODI) installation.  Amazon EMR is a big data cloud service, available on the Amazon Web Services (AWS) cloud computing services.

ODI is well documented to run on both the Cloudera and Hortonworks distributions of Hadoop. ODI can also run on the distribution of Hadoop found on the Amazon EMR cloud service. This is the first of four articles that show how to install, configure, and use ODI on the Amazon EMR cloud service:

 

For a demonstration of how to leverage ODI on Amazon EMR, go to “Webcast: Leveraging Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR).”  Additionally, an ODI 12.2.1 repository with examples of how to leverage ODI with Amazon EMR can be found at “ODI Repository Sample for Amazon Elastic MapReduce (EMR).”

 

Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI)

 

In order to leverage ODI with the Amazon EMR service, three AWS services are required:  Amazon RDS, Amazon EMR, and Amazon S3.  Amazon RDS is the database service, which includes database engines such as Oracle, MySQL, and PostgreSQL.  Amazon S3 is the cloud storage, which allows users to access data stored in the AWS cloud from anywhere on the web.  Amazon EMR is a cluster of Amazon EC2 compute resources, based on the Hadoop distributed processing architecture, MapReduce.

Figure 1 below, illustrates the required AWS cloud services in order to host ODI on Amazon EMR:

 

Figure 1 – ODI on Amazon Elastic MapReduce

The Amazon RDS database instance, on Figure 1 above, is required in order to host the ODI repository.   The ODI repository can be hosted on one of the following two Amazon RDS database engines: Oracle or MySQL.    Using the Oracle database engine, this article illustrates how to install an ODI repository on an Amazon RDS instance.

By default, the Amazon EMR cluster is provisioned with one master node and two slave nodes. This article discusses how to create an Amazon EMR cluster with additional storage in order to host the ODI binaries. Two main ODI components are installed and configured on the master node of the Amazon EMR cluster: the ODI agent and ODI Studio. The ODI standalone agent is the recommended agent type, since it does not require an application server and can run as a standalone Java application in the Amazon EMR cluster. ODI Studio can also be installed on the master node of the Amazon EMR cluster.

This article discusses how to install and configure X Window (X11) software on the master node of the EMR cluster. X11 software allows applications such as the ODI installer to run on a server; thus, the ODI agent and ODI Studio can be installed and configured on the master node of the Amazon EMR cluster. Additionally, X11 software allows the forwarding of application screens, so ODI users can launch the ODI binaries on the master node of the Amazon EMR cluster while the ODI screens display on client computers, as shown in Figure 1 above.

The Amazon S3 storage instance in Figure 1 above can be used to store source data files that need to be ingested into the Amazon EMR cluster. It can also be used as a landing area to store data that has been transformed on the Amazon EMR cluster. Data stored on an Amazon S3 storage instance can be copied or moved into other cloud data services or on-premise data warehouses. This article shows how to create a hadoop user that has access to both file systems, the Hadoop file system and the Amazon S3 file system, so ODI users can create integration tasks in the EMR cluster to transform data on both file systems.
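Once that hadoop user is in place, access to both file systems can be verified from the master node with standard Hadoop commands; a sketch, with an illustrative S3 bucket name:

hadoop fs -ls /user/hadoop
hadoop fs -ls s3://my-odi-bucket/source-files/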

 

Creating the Amazon AWS Account

In order to configure ODI with Amazon EMR, an Amazon AWS account must be created.  Proceed to create the Amazon AWS account.  Once the Amazon AWS account is created, go to the Amazon AWS Management Console, and identify the following four Amazon AWS services:  Amazon EC2, Amazon RDS, Amazon EMR, and Amazon S3.  The following sections of this article show how to configure these four Amazon AWS services to successfully install and leverage ODI on the Amazon EMR cloud service.

Creating the Amazon Key Pair

Amazon EMR is a cluster of Amazon EC2 compute resources. Amazon EC2 uses public-key cryptography to log in and access the Amazon EMR instances. On Amazon EMR, users access the cluster by logging into the master node. Public-key cryptography uses a public key to encrypt the login information when a user attempts to log in to a master node; the master node then uses a private key to decrypt the information and determine whether the user can be logged into the cluster. In Amazon AWS, these public and private keys are known as Amazon key pairs. In order to host ODI on an Amazon EMR instance, an ODI agent must be configured on the master node of the EMR cluster; thus, an Amazon key pair is required in order to access the master node of an EMR cluster.

Return to the Amazon AWS Management Console, and locate the AWS services.  Select EC2 to access the EC2 dashboard.  On the EC2 dashboard, locate the Network & Security section.  This section contains various network and security options, including the Key Pairs option.  Select the Key Pairs option, and create a new EC2 key pair.  Figure 2 below illustrates the location of the Key Pairs option in the EC2 dashboard:

 

Figure 2 – Creating an ODI Key Pair in Amazon AWS

When creating the new EC2 key pair, a file with extension .pem is also created.  Save the .pem file on disk.  This file will be used in a later section of this article in order to configure the SSH connection, and access the master node of the EMR cluster.

 

Creating the Amazon EMR Cluster

Once the EC2 key pair has been created and a .pem file has been saved on disk, proceed to create an instance of the Amazon EMR cloud service.  Using the Amazon AWS Management Console, locate the AWS services and select EMR.

In the Elastic MapReduce service window, select the Create Cluster option, and proceed to create a new EMR cluster.  Select the Advanced Options to customize the configuration of the new EMR cluster.  Select the software vendor and the release version as shown on Figure 3 below.  In this example, the selected vendor is Amazon, and the release version is emr-4.5.0.  In this release, the following application tools have been selected:  Hadoop, Sqoop, Spark, Hive, HCatalog, Oozie, Hue, and Pig.  Select additional application tools if your environment requires them.

 

Figure 3 – Creating the Amazon EMR Cluster

Once the vendor and the release version have been selected, proceed to configure the Hardware options.  Ensure that the selected hardware options meet the requirements of your desired environment.

It is recommended to configure additional storage for the ODI installation and ODI binaries.  In the Hardware Configuration options, locate the Master Node instance, select the Add EBS volumes option, and add a new storage volume as shown on Figure 4 below:

 

Figure 4 – Adding EBS Storage in the Amazon EMR Cluster

Figure 4 above shows the default size of 100 GiB for the new EBS volume.  This size is sufficient for the ODI installation and the additional binaries that will be installed on the master node of the Amazon EMR cluster.  However, additional storage or EBS volumes may be required if the user chooses to store data files or install additional software.

Select the General Cluster Settings options and enter the name of the EMR cluster.  Select the Security options and enter the EC2 key pair that was created in a previous section of this article.  Figure 5 below shows the selection of the EC2 key pair:

 

Figure 5 – Specifying the ODI EC2 Key Pair

 

Once the EC2 key pair has been selected, proceed to create the EMR cluster.  The EMR cluster will be ready for use when its status is Waiting as shown on Figure 6 below.  Once the EMR cluster is ready for use, locate the Connections section and proceed to enable the web connections.   Follow the Amazon instructions on how to enable the Amazon EMR web applications such as Hue, Spark, and the Resource Manager.

 

Figure 6 – Enabling Web Connections

Once the Web connections have been enabled, select the Hue application, and create a Hue account as shown on Figure 7 below:

 

Figure 7 – Creating the Hue Account

The new Hue account, as shown on Figure 7 above, will be used by ODI to access Hive, and other Hadoop resources.  This Hue user will be configured in the ODI Topology.

 

 

Once the Hue user has been created, log in to Hue and browse the file system of the EMR cluster.  On Amazon EMR, users can access both the Amazon Simple Storage Service (S3) and the Hadoop file system as shown on Figure 8 below.  Furthermore, Hive tables can be defined on both the Amazon S3 and Hadoop file systems; thus, ODI can be used to transform data from and into Hive tables that have been defined on both file systems.  For additional information on how to integrate Hive with Amazon S3, go to “Additional Features of Hive in Amazon EMR.”

 

Figure 8 – Browsing Amazon S3 and Amazon Hadoop

 

Configuring SSH to Access the Amazon EMR Cluster

In order to access the master node of the EMR cluster, the user must perform two configurations: create an inbound SSH rule, and configure an SSH client tool to connect to the master node.  The inbound SSH rule must be defined in the security group of the master node.  Return to the Amazon AWS Management Console, and locate the AWS services.  Select EC2 to access the EC2 dashboard.  On the EC2 dashboard, locate the Network & Security section, and select the Security Groups option.  A list of security groups should be displayed on screen.  Identify and select the security group of the master node, and proceed to edit its inbound rules as shown on Figure 9 below:

 

Figure 9 – Editing the EMR Master Security Group

In the Edit Inbound Rules page, add a new inbound rule of type SSH as shown on Figure 10 below.  Specify the IP address of the machine that will be authorized to connect to the Amazon EMR cluster.  If multiple computers must access the Amazon EMR cluster, specify a range of IP addresses.  Save your changes.

 

Figure 10 – Adding an Inbound Rule for SSH Connection

Return to the Amazon AWS Management Console, and select EMR.  In the cluster list, locate and select the new EMR cluster.  Locate the Master Public DNS section of the new cluster, and select the SSH option as shown on Figure 11 below.  Follow the Amazon instructions on how to install PuTTY on a client computer and use the EC2 key pair .pem file to connect and access the master node of the EMR cluster.

 

Figure 11 – Configuring the SSH Connection

Using PuTTY on a client computer, connect to the master node of the EMR cluster, as shown on Figure 12 below.

 

Figure 12 – Accessing the Amazon EMR Master Cluster

 

Adding the Hue User to the Hadoop Group

Before running any Hive queries, add the Hue user to the hadoop group in Linux.  Log in to the master node as the hadoop user and execute the following Linux command:

sudo adduser -g hadoop oracle
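If a Linux account for the Hue user already exists on the master node, it can instead be appended to the hadoop group with the following variant (a sketch, assuming the Hue user is named oracle as in the command above):

# Append an existing user to the hadoop group
sudo usermod -a -G hadoop oracle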

Creating the Amazon RDS Instance

When configuring ODI with Amazon EMR, the ODI repository should be installed on a database instance of Amazon RDS.  The ODI repository installation supports various database engines.  On Amazon RDS, ODI can be installed on one of two Amazon RDS services: MySQL and Oracle.

Return to the Amazon AWS Management Console, and locate the AWS services.  Select RDS to create a new Amazon RDS service instance.  Choose between Oracle and MySQL database engines.  Select the database options that meet your requirements, and proceed to create the database instance.  Figure 13 below shows the Oracle database engines available on Amazon RDS.

 

Figure 13 – Configuring a Database Instance in Amazon RDS

Once the database instance has been created, identify the Endpoint of the new database as shown on Figure 14 below.  Make a note of the Endpoint URL and the port number.  This information is required in order to create an ODI repository on this database instance.

 

Figure 14 – Database Instance Endpoint URL and Port Number

The new database instance will be accessed by the ODI standalone agent as well.  Thus, an RDS inbound rule must be configured to allow the agent to access the RDS database instance.

Return to the Amazon AWS Management Console, and locate the AWS services.  Select EC2 to access the EC2 dashboard.  In the EC2 dashboard, locate the Network & Security section, and select the Security Groups option.  A list of security groups should be displayed on screen.  Identify and select the security group of the RDS instance, as shown on Figure 15 below.  Proceed to edit the inbound rules of the RDS instance security group.

 

Figure 15 – Adding a RDS Inbound Rule for the RDS Database Instance

Create a new inbound rule of type RDS that allows the master node of the EMR cluster to make inbound calls to the RDS database instance.  On Figure 15 above, the Source IP address has been set to 172.31.5.102/32 – this CIDR entry covers the single IP address of the EMR master node.  Save your new inbound rule.

For additional information on how to access an Oracle database instance on the Amazon RDS service, go to “Connecting to a DB Instance Running the Oracle Database Engine.”

To install ODI on the master node of the Amazon EMR cluster, go to “Installing Oracle Data Integrator (ODI) on Amazon Elastic MapReduce (EMR).”

Conclusion

 

ODI is well documented to run on both the Cloudera and Hortonworks distributions of Hadoop.  ODI can also run on the distributions of Hadoop found on the Amazon EMR cloud service.  This article demonstrates how to prepare the Amazon Elastic MapReduce (EMR) cloud service for the Oracle Data Integrator (ODI) installation.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator (ODI).

ODI Related Cloud Articles

Installing Oracle Data Integrator (ODI) on Amazon Elastic MapReduce (EMR)

Configuring Oracle Data Integrator (ODI) for Amazon Elastic MapReduce (EMR)

Using Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR)

Webcast: Leveraging Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR)

ODI Repository Sample for Amazon Elastic MapReduce (EMR)

Integrating Oracle Data Integrator (ODI) On-Premise with Oracle Cloud Services

 

Installing Oracle Data Integrator (ODI) on Amazon Elastic MapReduce (EMR)


Introduction

This article demonstrates how to install Oracle Data Integrator (ODI) on the Amazon Elastic MapReduce (EMR) cloud service.  Amazon EMR is a big data cloud service, available on the Amazon Web Services (AWS) cloud computing services.

ODI is well documented to run on both the Cloudera and Hortonworks distributions of Hadoop.  ODI can also run on the distributions of Hadoop found on the Amazon EMR cloud service.  This is the second article of four publications that show how to install, configure, and use ODI on the Amazon EMR cloud service.

 

For a demonstration of how to leverage ODI on Amazon EMR, go to “Webcast: Leveraging Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR).”  Additionally, an ODI 12.2.1 repository with examples of how to leverage ODI with Amazon EMR can be found at “ODI Repository Sample for Amazon Elastic MapReduce (EMR).”

 

Installing Oracle Data Integrator (ODI) on Amazon Elastic MapReduce (EMR)

 

Prior to installing ODI on the Amazon EMR cloud service, users must prepare and configure the Amazon EMR cluster for the ODI installation.  To prepare the Amazon EMR cloud service for the ODI installation, go to “Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI).”

Once users have prepared the Amazon EMR cloud service for ODI, they need to download the ODI and Java SDK installation files from Oracle.  Go to the Oracle Data Integrator Downloads page, and download the ODI installation files.  Also, go to the Oracle Java Downloads page, and download the Oracle Java SDK installation file – specifically, the Java SE Development Kit tar.gz file.  For a complete list of Oracle certified Java versions for ODI, go to “Oracle Fusion Middleware Certification Matrix for Oracle Data Integrator.”

Install a secured file transfer protocol (SFTP) tool such as FileZilla to transfer the ODI and the Java SDK installation files into the master node of the EMR cluster.  Also, copy any additional jar files such as the JDBC driver file (ojdbc6.jar) required by Sqoop to connect to SQL databases such as the Oracle database.  Figure 1 below shows the ODI and the Java SDK installation files.

 

Figure 1 – Copying Binaries to the Amazon EMR Master Node

 

When using SFTP tools to transfer files into the master node of the EMR cluster, a PuTTY private key file (.ppk) is required.  Configure the SFTP connection with the PuTTY private key file before attempting to connect and transfer files into the master node of the EMR cluster.

Using the SFTP tool, log in to the master node of the EMR cluster, and proceed to transfer the ODI and Java SDK installation files as shown on Figure 1 above.  Use the hadoop user of the EMR cluster to log in to the master node – the hadoop user does not require a password.

Configuring Sqoop in the Master Node of the EMR Cluster

 

Amazon EMR 4.5.0 includes the Sqoop utility.  ODI users can use Sqoop to transfer data between Amazon RDS and Amazon EMR.  In order to use ODI with the Amazon distribution of Sqoop, two configurations must be performed: add the location of Sqoop to the .bash_profile file of the hadoop user, and set the location of the Amazon Java in the sqoop-env.sh file.

Log in to the master node of the EMR cluster, and add the location of the Sqoop tool to the .bash_profile file of the hadoop user as shown on Figure 2 below.

 

Figure 2 – Adding Sqoop in bash_profile
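The resulting addition to the hadoop user’s .bash_profile might look like the following minimal sketch, assuming the default EMR Sqoop location of /usr/lib/sqoop used later in this section:

# Appended to /home/hadoop/.bash_profile
export SQOOP_HOME=/usr/lib/sqoop
export PATH=$PATH:$SQOOP_HOME/bin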

Modify the Sqoop configuration file /usr/lib/sqoop/conf/sqoop-env.sh and set JAVA_HOME to the Amazon distribution of Java as shown on Figure 3 below.  This step is required so that the Sqoop tool is launched with the correct Java – the Amazon distribution of Java.  Notice that the ODI agent uses the Oracle Java SDK to launch ODI tasks, but the Sqoop tool must use the Amazon distribution of Java to run successfully.  The following example opens sqoop-env.sh for editing:

sudo vi sqoop-env.sh

 

Figure 3 – Setting Java Home in the Sqoop Config File
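The resulting entry in sqoop-env.sh might look like the following sketch; the JAVA_HOME path shown here is only illustrative, so use the location of the Amazon distribution of Java displayed in Figure 3 for your cluster:

# /usr/lib/sqoop/conf/sqoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-openjdk   # illustrative path; verify on your EMR release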

Copy any necessary JDBC driver files to the /usr/lib/sqoop/lib directory.  These driver files are required by Sqoop in order to connect to SQL databases such as the Oracle database.  The following example copies the ojdbc6.jar file from the hadoop home directory to the Sqoop lib directory:

sudo cp /home/hadoop/ojdbc6.jar /usr/lib/sqoop/lib

 

Installing X-Window Software to Access the EMR Master Node

 

In order to install ODI on the master node of the EMR cluster, X-Window software is required.  X-Window software allows programs such as the ODI installer to run on the EMR master node while the display of the installer is forwarded to a client computer.  This strategy is known as X11 forwarding.  The X11 forwarding traffic can also be tunneled over SSH; thus, users can securely access the master node of the EMR cluster and install ODI.

The X11 forwarding strategy can be used to install the ODI binaries, create an ODI repository, and configure the ODI agent in the master node of the EMR cluster.  Additionally, the X11 forwarding strategy can be used to run the ODI Studio in the master node as well.  This strategy offers performance benefits when accessing the ODI repository and the ODI Studio from a remote computer.

Before using the X-Window emulator, use PuTTY to log in to the master node of the EMR cluster and install the X11 Authority package as shown on Figure 4 below.  The X11 Authority package enables X11 forwarding between the master node of the EMR cluster and the X-Window client.

 

Figure 4 – Installing X11 in the Amazon EMR Master Node
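On the Amazon Linux image used by EMR, the X11 authority package can typically be installed with yum; the package name below is an assumption to verify against Figure 4:

# Install the X11 authority utilities on the master node (package name assumed)
sudo yum install -y xorg-x11-xauth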

Once the X11 Authority installation is complete, choose an X-Window emulator such as Cygwin/X and install it on your client computer.  Launch the X-Window emulator on your client computer, and run an X-Window terminal to access the master node of the EMR cluster.  To start X11 forwarding, export the DISPLAY of the X-Window terminal and use SSH to connect to the master node as shown on Figure 5 below.  Specify the location and name of the key pair file.  Here is an example of how to connect to the master node using SSH:

ssh -X hadoop@ec2-54-200-52-29.us-west-2.compute.amazonaws.com -i ODIKeyPair.pem

 

Figure 5 – Accessing the Amazon EMR Master Node with an X11 Terminal

 

Installing ODI in the Master Node of the EMR Cluster

 

Using an X-Window terminal, log in to the master node of the EMR cluster and identify the Amazon Elastic Block Store (EBS) volume where the ODI binaries will be installed, as shown on Figure 6 below.  It is recommended to add an EBS volume on the master node of the Amazon EMR cluster to host the ODI binaries.  For additional information on how to add an EBS volume, go to “Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI).”

Use the Linux command df -h to identify the EBS volume.

 

Figure 6 – Locating the EBS Volume Drive

Create a directory in this EBS volume, and copy the ODI and Oracle Java installation files from the hadoop home directory to the EBS volume directory.  Then, install the Oracle Java SDK in the new EBS volume directory.  The following example installs the Oracle Java SDK file called jdk-8u51-linux-x64.gz:

tar zxvf jdk-8u51-linux-x64.gz
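For context, the directory creation and copy that precede the extraction might look like the following sketch; the EBS mount point /mnt1 and the ODI installer file name are assumptions, so verify the mount point with df -h and use the actual file names you transferred:

# Assumption: the EBS volume is mounted at /mnt1
mkdir -p /mnt1/odi
cp /home/hadoop/jdk-8u51-linux-x64.gz /home/hadoop/fmw_12.2.1.0.0_odi_generic.jar /mnt1/odi
cd /mnt1/odi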

 

To install ODI, follow the ODI installation instructions found at “Oracle Fusion Middleware Installing and Configuring Oracle Data Integrator.”  Use an X-Window terminal to run the ODI installer in the master node of the EMR cluster, but forward the screens of the ODI installer to the client computer as shown on Figure 7 below.  For an example of how to use an X-Window terminal to log in to the master node of the EMR cluster, go to the previous section of this article: “Installing X-Window Software to Access the EMR Master Node.”

 

Figure 7 – Installing ODI in the Amazon EMR Master Node
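As an illustration, the ODI generic installer can be launched from the X-Window terminal using the Oracle Java SDK installed earlier; the JDK path and installer file name below are assumptions:

# Assumption: JDK and installer locations are illustrative
export JAVA_HOME=/mnt1/odi/jdk1.8.0_51
$JAVA_HOME/bin/java -jar /mnt1/odi/fmw_12.2.1.0.0_odi_generic.jar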

When installing ODI, select the Standalone Installation type as shown on Figure 8 below.  Ensure that ODI gets installed on the new EBS volume.

 

Figure 8 – Selecting the ODI Standalone Installation Type

 

Once ODI has been installed, return to the Amazon AWS console, and add a new outbound rule that allows the master node to access the RDS database instance created in “Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI).”  Using the security group of the master node, add a new outbound RDS rule as shown on Figure 9 below.

 

Figure 9 – Outbound RDS Security Rule for the Master Node

Proceed to create an ODI repository by launching the Oracle Repository Creation Utility (RCU).  Set JAVA_HOME to the Oracle Java SDK before launching the RCU utility.  Use the RDS database instance created in “Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI)” to host the ODI repository and ODI schemas.
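As a sketch, RCU can be launched from the ODI Oracle home with the Oracle Java SDK on the path; the locations below are illustrative and depend on where ODI and the JDK were installed:

# Assumption: Oracle home and JDK locations are illustrative
export JAVA_HOME=/mnt1/odi/jdk1.8.0_51
export PATH=$JAVA_HOME/bin:$PATH
/mnt1/odi/oracle_home/oracle_common/bin/rcu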

Once the ODI repository is created, launch the Fusion Middleware Configuration Wizard to create a new ODI domain and configure an ODI standalone agent in the master node of the EMR cluster as shown on Figure 10 below.

 

Figure 10 – Installing the ODI Standalone Agent
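The Fusion Middleware Configuration Wizard is typically launched from the ODI Oracle home; the path below is an assumption based on the illustrative install location used earlier:

# Assumption: Oracle home location is illustrative
/mnt1/odi/oracle_home/oracle_common/common/bin/config.sh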

When configuring the ODI standalone agent, specify the Master Public DNS of the EMR cluster as shown on Figure 11 below.  The Master Public DNS can be found in the Elastic MapReduce page of the EMR cluster.

 

Figure 11 – Configuring the ODI Standalone Agent

Log in to the new ODI repository, and select the ODI Topology.  In the ODI Topology, proceed to create the physical ODI standalone agent.  For the agent’s host, specify the Master Public DNS of the EMR cluster as well.  Launch the ODI standalone agent in the master node of the EMR cluster.  Test the agent from the ODI Topology.
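Starting the standalone agent from the newly created domain might look like the following sketch; the domain location and agent name are assumptions:

# Assumption: domain location and agent name are illustrative
cd /mnt1/odi/user_projects/domains/odi_domain/bin
./agent.sh -NAME=OracleDIAgent1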

For additional information on how to create an ODI master and work repository, go to “Creating the ODI Master and Work Repository Schemas.”  For additional information on how to create an ODI standalone agent, go to “Configuring the Standalone Domain for the Standalone Agent.”  For information on how to create an Oracle database instance on Amazon RDS, go to “Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI)”, under section “Creating the Amazon RDS Instance.”

 

 

Conclusion

 

ODI is well documented to run on both the Cloudera and Hortonworks distributions of Hadoop.  ODI can also run on the distributions of Hadoop found on the Amazon EMR cloud service.  This article demonstrates how to install ODI on the Amazon Elastic MapReduce (EMR) cloud service.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator (ODI).

ODI Related Cloud Articles

 

Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI)

Configuring Oracle Data Integrator (ODI) for Amazon Elastic MapReduce (EMR)

Using Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR)

Webcast: Leveraging Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR)

ODI Repository Sample for Amazon Elastic MapReduce (EMR)

Integrating Oracle Data Integrator (ODI) On-Premise with Oracle Cloud Services

Configuring Oracle Data Integrator (ODI) for Amazon Elastic MapReduce (EMR)


Introduction

This article demonstrates how to configure Oracle Data Integrator (ODI) for the Amazon Elastic MapReduce (EMR) cloud service.  Amazon EMR is a big data cloud service, available on the Amazon Web Services (AWS) cloud computing services.

ODI is well documented to run on both the Cloudera and Hortonworks distributions of Hadoop.  ODI can also run on the distributions of Hadoop found on the Amazon EMR cloud service.  This is the third article of four publications that show how to install, configure, and use ODI on the Amazon EMR cloud service.

 

For a demonstration of how to leverage ODI on Amazon EMR, go to “Webcast: Leveraging Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR).”  Additionally, an ODI 12.2.1 repository with examples of how to leverage ODI with Amazon EMR can be found at “ODI Repository Sample for Amazon Elastic MapReduce (EMR).”

 

Configuring Oracle Data Integrator (ODI) for Amazon Elastic MapReduce (EMR)

 

Prior to configuring ODI for the Amazon EMR cloud service, users must install ODI on the Amazon EMR cloud service.  To install ODI on the Amazon EMR cloud service, go to “Installing Oracle Data Integrator (ODI) on Amazon Elastic MapReduce (EMR).”

Once users have installed ODI on the Amazon EMR cloud service, the ODI Topology must be configured.  This section illustrates how to configure ODI Topology with three of the big data technologies found on Amazon EMR:  Hadoop, Hive, and Spark.  Additional technologies such as Pig and Oozie can be configured as well.

The IP address of the master node of the EMR cluster is needed in order to configure ODI with the following two technologies: Hadoop and Hive.  Log in to the master node of the EMR cluster, and identify the IP address of the master node as shown on Figure 1 below.

 

Figure 1 – Identifying the IP Address of the EMR Master Node

Make a note of the IP address of the master node.  Proceed to configure the following three technologies in this order:  Hadoop, Hive, and Spark.

To see a complete list of big data technologies supported by ODI, go to “Fusion Middleware Integrating Big Data with Oracle Data Integrator.”  For information on how to configure ODI technologies, go to “Setting Up the ODI Topology.” 

Hadoop Configuration

Using the Physical Architecture of the ODI Topology, select the Hadoop technology, and create a new data server as shown on Figure 2 below.  Enter a name for the new Hadoop data server.  Under the Connection section, enter the name of the hadoop user.  The hadoop user does not have a password; thus, the password value is not required for this configuration.

Specify the HDFS Name Node URI, and the Resource Manager – use the IP address of the EMR master node for these two metadata elements.  For the ODI HDFS Root, specify the home directory of the hadoop user, and a directory name where ODI can initialize the Hadoop technology.  This directory name will be created when the user chooses to initialize the data server.
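As an illustration only, the Hadoop data server entries might look like the following sketch; the IP address is the example master node address used elsewhere in this series, and the ports are common Hadoop defaults that should be verified for your EMR release:

# Values below are illustrative; substitute the master node IP identified in Figure 1
HDFS Name Node URI:  hdfs://172.31.5.102:8020
Resource Manager:    172.31.5.102:8032
ODI HDFS Root:       /user/hadoop/odi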

Save your new Hadoop data server, and select the Initialize option to start the initialization process.  Once the Initialization process is complete, proceed to test the new data server.

 

Figure 2 – Configuring the ODI Hadoop Technology

Create a physical schema for the new Hadoop technology, and use the default values.  Then, go to the Logical Architecture of the ODI Topology, and create a new Hadoop logical schema.  Configure the ODI Context with the new Hadoop logical and physical schemas.

 

Hive Configuration

Using the Physical Architecture of the ODI Topology, select the Hive technology, and create a new data server as shown on Figure 3 below.  Enter a name for the new Hive data server.  Specify the User and Password of the Hue account.  For details on how to create the Hue account, go to “Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI).”

For the Metastore URI, specify the IP address of the EMR master node.  Under Hadoop Configuration, select the Hadoop data server – this is the Hadoop data server created on the previous section of this article.

 

Figure 3 – Configuring the Hive Physical Data Server

Go to the JDBC tab of the new Hive data server, and select the JDBC driver and the JDBC URL as shown on Figure 4 below.  For the JDBC URL, specify the IP address of the EMR master node.  Save your new Hive data server, and proceed to test the connection.

 

Figure 4 – Specifying the JDBC Parameters for the Hive Physical Data Server
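As a sketch, the JDBC settings for the Amazon distribution of Hive might look like the following; port 10000 is the HiveServer2 default and should be verified for your cluster:

# Values below are illustrative; substitute the master node IP identified in Figure 1
JDBC Driver: org.apache.hive.jdbc.HiveDriver
JDBC URL:    jdbc:hive2://172.31.5.102:10000/default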

 

Create a physical schema for the new Hive technology, and select the default schemas for Hive as shown on Figure 5 below.

 

Figure 5 – Configuring the Hive Physical Schema

Go to the Logical Architecture of the ODI Topology, and create a new Hive logical schema.  Configure the ODI Context with the new Hive logical and physical schemas.

Spark Configuration

Using the Physical Architecture of the ODI Topology, select the Spark technology, and create a new data server as shown on Figure 6 below.  Enter the name of the new Spark data server.  For the Master Cluster (Data Server), enter yarn-client.  Save your new Spark data server.

 

Figure 6 – Configuring the Spark Physical Data Server

Create a physical schema for the new Spark technology, and use the default values.  Then, go to the Logical Architecture of the ODI Topology, and create a new Spark logical schema.  Configure the ODI Context with the new Spark logical and physical schemas.

  

Conclusion

ODI is well documented to run on both the Cloudera and Hortonworks distributions of Hadoop.  ODI can also run on the distributions of Hadoop found on the Amazon EMR cloud service.  This article demonstrates how to configure ODI for the Amazon Elastic MapReduce (EMR) cloud service.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Oracle Data Integrator (ODI).

ODI Related Cloud Articles

Preparing Amazon Elastic MapReduce (EMR) for Oracle Data Integrator (ODI)

Installing Oracle Data Integrator (ODI) on Amazon Elastic MapReduce (EMR)

Using Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR)

Webcast: Leveraging Oracle Data Integrator (ODI) with Amazon Elastic MapReduce (EMR)

ODI Repository Sample for Amazon Elastic MapReduce (EMR)

Integrating Oracle Data Integrator (ODI) On-Premise with Oracle Cloud Services


Integrating with Sales Cloud using SOAP web services and REST APIs (Part 3)


This is part 3 of the blog series that covers SOAP and REST integration with Sales Cloud. In part 1 and part 2, I covered SOAP services. In this part I’ll be covering Sales Cloud REST APIs.

Sales Cloud provides REST APIs that allow external applications to view and edit Sales Cloud data. A complete catalog of these REST APIs is available at the Oracle Help Center.

In this catalog you’ll notice several details, as shown in the screenshot below.

Snap8

Sales Cloud REST APIs are automatically available in your Sales Cloud Instance. No additional configuration, or setup is required to enable REST APIs.

Sales Cloud REST APIs are also completely free of charge to use for any number of invocations you may want to make.

For each Sales Cloud object, the GET, POST, PATCH and DELETE HTTP methods are supported, corresponding to CRUD operations on these Sales objects. These appear in the API catalog, in the left navigation, under the Accounts entry. The request and response payload structures, along with sample payloads, are all provided in this one-stop shop.

The REST APIs also directly correlate to Sales Cloud UI. That is, if you have a field that is required or is of a specific data type, then the REST API automatically reflects those validations.

In addition to the parent object, the REST APIs also support child objects. In the API catalog screenshot above, the Accounts object and its corresponding child objects are shown.

The simplest way for you to get started is to use a tool like SOAPUI or Postman to execute a “Get all” action on the Account object. I’ll use Postman and issue a GET on /crmCommonApi/resources/latest/accounts. Since the latest version is 11.1.11, this is the same as /crmCommonApi/resources/11.1.11/accounts. The only other header I provided is the authentication using Basic Auth.

Snap9
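The same “Get all” request can also be issued from the command line with curl; the host name and the credentials below are placeholders:

# Placeholder credentials and host
curl -u jane.doe:Welcome1 \
  "https://<instance>.crm.<datacenter>.oraclecloud.com/crmCommonApi/resources/latest/accounts"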

You can then query for a specific account, as described in the “Get an Account” task of the catalog. The API will now be https://<instance>.crm.<datacenter>.oraclecloud.com/crmCommonApi/resources/11.1.11/accounts/3008

If you want to understand more about each field within the account object, then use https://<instance>.crm.<datacenter>.oraclecloud.com/crmCommonApi/resources/11.1.11/accounts/describe

It is possible to filter the results so that you see only the data that you need, using the “fields” parameter:

https://<instance>.crm.<datacenter>.oraclecloud.com/crmCommonApi/resources/11.1.11/accounts/3008?fields=PartyUniqueName,PrimaryContactName

Even after specifying only two fields, you will notice that there are several other “links” returned in the JSON response. These links are primarily used to navigate to child objects, self-references, and LOVs. In conformance with the HATEOAS constraints, Oracle Sales Cloud REST APIs provide these links to facilitate dynamic navigation from user interfaces. For example, it is easy to obtain the list of values associated with a given record, or access the child records, or, during pagination, navigate back and forth to other sets of records. These links can optionally be disabled using the onlyData=true parameter.

Several parameters such as query, limit, total count, and order by are available when doing a GET. I won’t go into more details as these are very clearly described in the API catalog https://docs.oracle.com/cloud/latest/salescs_gs/FAAPS/Resource_Actions.html. It is very easy to play with these params. Try them out!
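As a sketch combining a few of the parameters shown in this post (fields, onlyData and limit; host and credentials are placeholders):

curl -u jane.doe:Welcome1 \
  "https://<instance>.crm.<datacenter>.oraclecloud.com/crmCommonApi/resources/11.1.11/accounts?fields=PartyUniqueName,PrimaryContactName&onlyData=true&limit=10"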

Custom Fields, Custom Objects, and REST APIs

All custom objects will automatically have a REST API enabled. So, if you create a custom object called PromotionalMaterial, then you obtain an API /../PromotionalMaterial that exposes all fields of this object. No additional work is required. Any relationships that are modeled between any combination of custom objects and standard objects are also exposed in the API immediately. Similarly, when custom fields are added to an existing standard object (for which a REST API is available), these custom fields are also exposed immediately.

Important: Since you will typically be adding custom fields and custom objects when in an active Sandbox, these API changes will be visible only to those users that are using that sandbox. Once the changes are published to the mainline, all users will be able to access the new/modified APIs

Authentication

All Sales Cloud REST APIs are protected using an OWSM policy – oracle/multi_token_over_ssl_rest_service_policy. This policy currently supports the following client authentication methods:

  1. Basic Auth over SSL (which we used earlier in this post)
  2. SAML 2.0 (provided a SAML trust is already established between the Sales Cloud and the calling service)
  3. JWT Token (A bearer token instead of Basic Auth. If trust is set up between Sales Cloud and the calling service, the JWT token can be issued by the calling service)

Authorization

The data returned by the API is governed by Sales Cloud role and data level security. For example, if John Doe does a GET on all Opportunities, authenticating with Basic Auth as John Doe, then Sales Cloud will check whether John Doe is allowed to access Opportunities through the web service, and will also determine which Opportunities John Doe should be able to view based on standard data level security for Opportunities in Sales Cloud (more details here).

Additionally, clients can pre-verify the level of role access for a given user by using /describe. For example, when John Doe executes /describe and it returns the content below, it is clear that the GET and POST operations are allowed on this specific object. Child objects can be protected separately, and so each will have its own "actions" element with the allowed methods.

"actions": [
          {
            "name": "get",
            "method": "GET",
            "responseType": [
              "application/json",
              "application/vnd.oracle.adf.resourcecollection+json"
            ]
          },
          {
            "name": "create",
            "method": "POST",
            "requestType": [
              "application/vnd.oracle.adf.resourceitem+json"
            ],
            "responseType": [
              "application/json",
              "application/vnd.oracle.adf.resourceitem+json"
            ]
          }
        ] 

Use Cases for SOAP vs REST APIs

Although Sales Cloud REST APIs can be used for both data integration use cases and UI extension use cases, it is likely that they will be used more for the latter. Sales Cloud REST APIs use JSON by default, making them a great choice for UI development (though not the only reason). Sales Cloud SOAP services, on the other hand, use XML, and since there are several mature integration tools based on XML technologies such as XSLT and XQuery, it is easier for integration developers to continue using SOAP services. Additionally, Sales Cloud events and file-based integration choices are also available for data integration.

REST APIs are a huge boon for custom UI development, which could take the form of building specific extensions to Sales Cloud UIs, building fully standalone browser-based JavaScript UIs, or building mobile UIs that are powered by Sales Cloud REST APIs. Since the REST APIs encapsulate all core Sales Cloud business logic and security logic, custom UIs need not duplicate the same.

If you would like to get started on building your first UI based on Sales Cloud REST API, Angelo has written a very nice blog where he shows the usage of Oracle JET to build a custom UI that invokes Sales Cloud REST APIs.

Using Sales Cloud with Oracle Mobile Cloud Service when building Custom UIs

While Sales Cloud REST APIs are very powerful and intuitive to use, you may have requirements where it would make more sense to have a wrapper/proxy API which shapes these Sales Cloud APIs to make them more tailored for your custom UI. For example, your UI requirements may dictate the need for invoking multiple Sales Cloud APIs or even external applications, and shaping the data before it can be displayed meaningfully in your custom UI. When developing Single Page Applications (SPA), you may want to minimize the number of server calls made to render data and use getters and setters that simply bring the data that is relevant to your current screen.

In such cases, as we discussed above, you will typically consider a wrapper/proxy layer. A good choice for implementing this shaping/orchestration/proxy would be any commercial product that serves as an API manager.

I’ll talk next about using Oracle Mobile Cloud Service to implement this layer. Mobile Cloud Service is an Oracle offering which not only offers the power of creating and managing API endpoints, but also provides a Node.js engine to perform this orchestration, and connectors to connect to several end systems. In addition to serving as an API management layer, Mobile Cloud Service (as the name suggests) provides several mobile-centric features such as push notifications, offline support, caching support, and location support.

The focus of this blog is not to describe all Mobile Cloud Service (MCS) features, but to illustrate the usage of MCS with Sales Cloud using a simple example. We’ll also touch upon expanded security options when using this approach.
Note: If you would like to learn more about MCS, please refer to list of A-Team blogs on MCS

I’ll next walk-through a simple use case of using MCS with Sales Cloud REST APIs.

Imagine a Sales Rep walking into a customer site for a meeting. This Rep would like to have a UI which provides details about this customer. Let’s assume that the Sales Rep would like basic account details, open issues with existing products that this customer uses, and the current stock ticker of the company. As you may realize, each piece of this information typically comes from a different system – account details come from Sales Cloud, issue details (or incidents) come from Service Cloud, and the stock ticker probably comes from Google Finance APIs. The developer of this custom UI however would like a single REST API called /Customers providing all this information.

Here are some high level steps to achieve this

Step 1: Create a new API in MCS and decide on a contract with the UI developer

Step 2: Create a REST connector in MCS to connect to Sales Cloud

Step 3: Create connectors to Service Cloud for incidents, and Google Finance for stock ticker

Step 4: Implement logic in MCS Node.js layer to orchestrate these calls

Step 5: Test service, add it to MCS Mobile back end, and expose MCS APIs securely

I’ll only discuss Step 2, Step 4 and Step 5. For step 1, refer to the Oracle MCS documentation. Step 3 is similar to Step 2.

For Step 2, refer to the image below. You will notice that when creating a connector you are providing only the endpoint of the top level Sales Cloud resource, i.e. https://<instance>.crm.<datacenter>.oraclecloud.com/crmCommonApi/resources/latest, with no references to a specific Sales Cloud object. This connector will later be referenced in the Node.js implementation and specific resources will later be requested by the implementation code. For example, Connector/Accounts. This decoupling is very useful when changing any connection details such as pointing the APIs to a different Sales Cloud instance or using different authentication credentials.

In the future, Oracle plans to release pre-built connectors to Sales Cloud which will also allow you to introspect the endpoints and browse different resources.

Snap13

In terms of security, when creating the connector notice in the image below that I’ve chosen the SAML security policy. Since MCS and Sales Cloud are both Oracle Cloud products, SSO and SAML trust is pre-established. By choosing SAML for the MCS to Sales Cloud communication, I ensure that the identity of the user invoking the /Customer MCS API will automatically be propagated to Sales Cloud. Passing the user context is very important because, as you may recollect, the Sales Cloud API returns results specific to a user’s role and data level security access. To ensure that SAML works, MCS and Sales Cloud should be in the same Oracle Cloud identity domain, which inherently sets up the SSO and SAML trust between these services.

Snap15

Now step 4 requires me to invoke the Sales Cloud connector that I created using Node.js code and MCS custom API implementations. I’m not going into the details of how to attach a Node.js implementation to your MCS REST endpoint. This is explained very well in the Oracle MCS documentation. I’ll just focus on the code that calls the Sales Cloud MCS connector that we created earlier. A sample code snippet for this is shown below:

module.exports = function(service) {
	service.get('/mobile/custom/Customers/CustomerId', function(req,res) {
		var result = {};
		req.oracleMobile.connectors.ArvindSalesCloud.get('accounts',null,{qs: {fields: 'PartyUniqueName,StockSymbol',onlyData:'true',limit:'10'}}).then(
		// <<Other code to gather Service Cloud and Stock price. Typically a series of Node.js calls are invoked using Futures or async.waterfall constructs>>
		function(result){
			res.send(result.statusCode, result.result);
		},
		function(error){
			res.send(500, error.error);
		}
	);
	});
};

After implementation is complete, this API is attached to a Mobile Backend and published. The API is then protected for authentication using Basic Auth or OAuth. You can then call this API from the custom UI or using SOAPUI/Postman for the purpose of testing. Remember to pass the MCS Mobile Backend ID in your HTTP headers in addition to providing the authentication headers. More details are available here in Oracle MCS documentation. Among other things, the Mobile Backend acts as an OAuth client whose credentials are used by all mobile apps that are using the MCS APIs. If the called MCS API includes calls to other MCS APIs (chaining) within the same backend, then the identity and credentials of the original caller are propagated through the chain of calls automatically.

As discussed before, when Sales Cloud and MCS are used together, they are typically provisioned in the same Oracle Cloud ID domain, inherently establishing a SAML trust. This is why MCS was able to invoke Sales Cloud using SAML when configuring the connector. In such environments, Sales Cloud is typically the IdP for Federation. In this setup, as soon as the end user using the mobile App logs in using the Sales Cloud credentials, he/she is able to request an OAuth token for the Mobile Backend which will be used for invocations in all subsequent contexts. This ensures end to end identity propagation.

As a closing remark, I’d like to point out that Sales Cloud REST APIs are very powerful, available out of the box, and free of charge to use. This post is to just introduce you to the APIs and point out some common implementation patterns. Expect to see more blogs from the A-Team on this topic.

 

Migrating AMPA Apps to Oracle MAF 2.3.1 Client Data Model


Introduction

Oracle MAF 2.3.1 has just been released. This release contains a major new feature, the client data model (CDM). CDM is the productized version of the A-Team Mobile Persistence Accelerator (AMPA). This article explains how you can migrate your existing MAF app that uses AMPA to MAF 2.3.1 with CDM. We recommend performing this migration as soon as possible since Oracle A-Team will no longer maintain the AMPA open source project in GitHub. The migration steps are pretty straightforward and risk-free since the complete code base of AMPA has been integrated “as is” with MAF; the biggest change is the renaming of Java packages.

Main Article

If you are migrating from MAF 2.3.0, you need to decide whether you want to upgrade your existing JDeveloper 12.2.1 installation to MAF 2.3.1 or you prefer to install another fresh instance of JDeveloper 12.2.1 which allows you to run both MAF 2.3.0 and MAF 2.3.1 apps side by side.

See the article How do I install 2 versions of the same version of JDeveloper for more info.

If you want to upgrade, you need to perform the following steps after you have installed the MAF 2.3.1 extension:

  • Upgrade the MAF JDK 1.8 Compact profile 2
  • Remove AMPA extension

If you do not want to upgrade, or you are coming from an older MAF version that required JDeveloper 12.1.3, you can start with a fresh install of JDeveloper 12.2.1 and install the MAF extension as documented here. You can then proceed with the migration steps:

  • Perform General Migration Steps (not AMPA specific)
  • Change Namespace in persistence-mapping.xml
  • Rename Java packages
  • Change AMPA EL Expressions
  • Change Reusable Feature Archive References
  • Configure Database to be Unencrypted

The next sections will explain all of these steps in detail. The last section will discuss the available CDM documentation.

Upgrade the MAF JDK 1.8 Compact Profile 2

When you install the MAF 2.3.1 extension over the MAF 2.3.0 extension, the MAF JDK 1.8 Compact Profile 2 is not updated automatically. Since the CDM functionality is added to MAF through new jar files that are not automatically added to this profile, JDeveloper will not be able to find the CDM classes. This will cause compilation errors like
package oracle.maf.api.cdm.persistence.model does not exist

Note that you will also get these errors when you create a new app using CDM, not just when migrating an AMPA app.

To fix this you need to upgrade the JDK profile as follows:

  • In JDeveloper, go to Tools > Manage Libraries
  • Click on the tab Java SE Definitions
  • Select the MAF JDK 1.8 Compact 2 Profile library and click the Remove button
  • Restart JDeveloper

The MAF JDK 1.8 Compact 2 Profile should now be re-added automatically with the correct jar files. Here is a screen shot of the correct profile definition:

profile

To test whether JDeveloper can find the CDM classes, you can use the Go to Java File option (Ctrl-Minus on Windows, Cmd-J on Mac) and enter InitDB in it. This should bring up the oracle.maf.impl.cdm.lifecycle.InitDBLifeCycleListener class.

Remove AMPA Extension

To remove the AMPA extension from JDeveloper, you go to the Tools -> Features menu option. Then click on Installed Updates. Select the A-Team Mobile Persistence Accelerator and click the Uninstall button.

RemoveAMPA

This removes the AMPA extension registration and jar files. However, it does not remove the oracle.ateam.mobile.persistence folder in the jdeveloper/jdev/extensions folder.

RemoveAMPAFolder2

You can remove this folder manually.

Perform General Migration Steps (not AMPA specific)

General migration steps are documented in the MAF developer’s guide, section Migrating Your Application to MAF 2.3.1. That section also includes a paragraph about migrating AMPA apps; this blog article is a more comprehensive version of that paragraph, with more background info and some additional steps.

One general migration step is not documented there, the network access plugin has been renamed. If you open the maf-plugins.xml file, you will see that the network pluginId is now invalid:

NetworkPluginError

To fix this, you need to change the pluginId to maf-cordova-plugin-network-access. Or you can remove the network plugin line, go to the Overview tab of maf-application.xml and check the Network Access checkbox on the Plugins tab.

Change Namespace in persistence-mapping.xml

Open the persistence-mapping.xml, located in META-INF directory of your ApplicationController project and change the namespace to http://xmlns.oracle.com/adf/mf/amx/cdm/persistenceMapping.

namespace

Rename Java Packages

All non-deprecated AMPA classes have been included in MAF 2.3.1. MAF makes a distinction between public classes and internal implementation classes. Public classes are included in a package name starting with oracle.maf.api. Implementation classes are included in a package name starting with oracle.maf.impl. The signature of public classes is guaranteed to ensure upwards compatibility; implementation classes might change over time and might break your custom code if you use these classes and then migrate to a newer MAF version. This is why Oracle recommends using only public MAF framework classes in your custom code. For this first release of CDM, the AMPA code has been included “as is” but over time the code base will be improved to support new features. To keep flexibility in improving and refactoring the code over time, a number of the AMPA classes have been moved into implementation packages starting with oracle.maf.impl.cdm. While Oracle generally recommends avoiding the use of implementation classes in your custom code, it is fine and even inevitable to do so with some of the CDM implementation classes. For example, all your service classes will now extend from oracle.maf.impl.cdm.persistence.service.EntityCRUDService.

With this explanation, we are ready to rename the AMPA java packages. The table below lists the global search and replace actions you should perform in order on all files in your application.

Search Text → Replace With

oracle.ateam.sample.mobile.v2.persistence.service.EntityCRUDService → oracle.maf.impl.cdm.persistence.service.EntityCRUDService
oracle.ateam.sample.mobile.v2.persistence.manager.MCSStoragePersistenceManager → oracle.maf.impl.cdm.persistence.manager.MCSStoragePersistenceManager
oracle.ateam.sample.mobile.v2 → oracle.maf.api.cdm
oracle.ateam.sample.mobile.mcs.storage → oracle.maf.api.cdm.mcs.storage
oracle.ateam.sample.mobile → oracle.maf.impl.cdm

You can perform these global search and replace actions in JDeveloper by navigating to the Search -> Replace in Files option.

SearchReplace

Make sure you set the Scope field to your entire application, not just to one of the projects.

If you now try to compile your application, you might still get a few compilation errors. This is because classes in the same AMPA package might have been divided over both the implementation and public CDM packages. The easiest way to fix these errors is to double-click on the error to jump to the Java class, and rename the .api. part of the import to .impl. or vice versa. Alternatively, you can remove the invalid import statement and let JDeveloper automatically suggest the correct import.

There is one remaining change you have to make manually, in maf-application.xml. Because this file is in a directory that is not scanned for global search and replace actions, you need to change the Lifecycle Event Listener property from oracle.ateam.sample.mobile.lifecycle.InitDBLifeCycleListener to oracle.maf.impl.cdm.lifecycle.InitDBLifeCycleListener. If you are using a custom lifecycle listener that extends InitDBLifeCycleListener, you don’t have to do anything because your custom class is already updated to point to the CDM package.

If your application compiles successfully, you can do a final check by doing a global search in your application on the string oracle.ateam.sample which should no longer return any hits.

Change AMPA EL Expressions

AMPA comes with some standard EL expressions around background tasks and pending data sync actions. To update these expressions for CDM you should perform the following global search and replace actions:

Search Text → Replace With

applicationScope.ampa_bg_task_running → applicationScope.maf_bg_task_running
applicationScope.ampa_hasDataSyncActions → applicationScope.maf_hasDataSyncActions
applicationScope.ampa_dataSyncActionsCount → applicationScope.maf_dataSyncActionsCount
applicationScope.[entityName]_hasDataSyncActions → applicationScope.maf_hasDataSyncActions

The last entry in this table is applicable when you are migrating from an earlier AMPA release, not the latest 12.2.1.0.68 release. In previous releases, the data synchronization happened in the context of an entity CRUD service; it only synchronized the data object of the entity CRUD service and its child data objects (if applicable). Therefore the expression to check whether an entity (data object) had pending data sync actions included the entity name as prefix instead of the general ampa_ prefix used in the latest AMPA release.

Change Reusable Feature Archive References

AMPA shipped with two reusable feature archives to inspect web service calls and to view pending synchronization actions. While the web service calls feature is not yet documented in the CDM chapter, both feature archives are included with CDM. If you run the MAF User Interface Generator, they will be automatically added to your application, just like with AMPA. If your existing AMPA application is using one or both of these features, you need to do two things:

  • Change the feature archive library reference
  • Change the feature reference in maf-application.xml

To change the library reference, go to Application -> Application Properties menu option, and click on the Libraries and Classpath option at the left.

FeatureArchives

Remove both jar files, and click the Add Jar/Library button. Navigate to the directory where the CDM feature archives can be found, which is jdeveloper/jdev/extensions/oracle.maf/FARs/CDM.

AddCDMFeatures

Select both jar files and click Open button to add them to the application.

Now, go to the Overview tab of maf-application.xml and update the invalid feature references oracle.ateam.sample.mobile.datasynch and oracle.ateam.sample.mobile.wscalls with the new CDM IDs of these features.

NewFeatureRef

Configure Database to be Unencrypted

Unfortunately, during the integration of AMPA to CDM a minor code change has introduced an issue which currently prevents you from encrypting the SQLite database. This will be fixed with the next MAF release. For now, you should disable DB encryption by adding the following line to the mobile-persistence-config.properties file, located in the META-INF directory of your ApplicationController project:

db.encryption=false

If you don’t have this entry, or you change the value to true, your application will hang at application start up and the log will display the following error:

CDM Database key is null

MAF Client Data Model Documentation

The CDM chapter in the MAF developer’s guide currently contains a subset of the information that you can find in the AMPA Developer’s guide. Sections which refer to AMPA classes that ended up in the implementation packages (see the previous section Rename Java Packages for more info) have not been included yet. In subsequent MAF releases, the remaining documentation topics will be added once final decisions have been made about the location of all CDM classes and methods.

For now, you can continue to consult the AMPA Developer’s guide for those topics, because all of the content is still valid. Most notably, you might want to check out the following sections:

 

Handling Unknown File Formats with Oracle Big Data Preparation Cloud Service (BDP)


Introduction

Recently one of our customers shared with us a Splunk file and asked if we could handle this with Oracle Big Data Preparation Cloud Service (BDP), so we tried it! The product will handle this format out of the box in its next release (coming soon, stay tuned!), but in the meantime we wanted to take this opportunity to see how BDP could help us process a file format that it did not understand out-of-the-box. We are sharing the result of this experience here.

Understanding the file format

A very good introduction to Splunk is available here and can be summarized as: “Splunk is a log aggregation tool […where you] write out your logs into comma delimited key/value pairs”. The file we had to interpret had over 6 different formats for the log entries, with different key/value pairs that needed extraction. The following example has a similar structure to that file. We are using color coding and character thickness to differentiate the different formats:

2016-05-10-08:00:00 action=macdetected equipment=WD23sp001 macaddress=01:11:25:5a:92:a1 accesspoint=sbmalexingctr

2016-05-10-08:00:00 action=macdetected equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:00:00 action=macdetected equipment=WD23sp001 macaddress=45:f1:32:2a:bb:11 accesspoint=sbmalexingctr

2016-05-10-08:00:00 action=macdetected equipment=WD23sq002 macaddress=c1:14:f1:66:48:8b accesspoint=sbmabulwc

2016-05-10-08:01:12 action=signup id=johndoe device=android ip=128.12.25.01 equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:01:15 action=webaccess url=my.yahoo.com equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:02:23 action=webaccess url=https://www.google.com/search?q=johndoe&ie=utf-8&oe=utf-8 equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:03:21 action=maclost reason=outofrange equipment=WD23sq002 macaddress=c1:14:f1:66:48:8b accesspoint=sbmabulwc

2016-05-10-08:03:22 action=macdetected equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:03:22 action=signup id=janedoe device=iphone ip=128.12.25.02 equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:03:22 action=iMessage message=connect server=imessage.apple.com equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:04:45 action=iMessage message=connectResponse server=imessage.apple.com equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:04:47 action=iMessage message=Push Topic server=imessage.apple.com equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:04:49 action=iMessage message=Push Notification server=imessage.apple.com equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:05:05 action=macdetected equipment=WD23sp001 macaddress=33:fb:b8:34:55:04 accesspoint=sbmalexingctr

2016-05-10-08:05:05 action=iMessage message=Push Notification Response server=imessage.apple.com equipment=WD23sp001 macaddress=77:a8:da:c5:33:58 accesspoint=sbmalexingctr

2016-05-10-08:06:57 action=signup id=JimmyFoo device=OSX ip=128.12.25.03 equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:07:35 action=data protocol=ftp equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:07:36 action=disconnect id=johndoe device=android ip=128.12.25.01 equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 accesspoint=sbmalexingctr

2016-05-10-08:07:37 action=maclost reason=poweroff equipment=WD23sq002 macaddress=c1:14:f1:66:48:8b accesspoint=sbmabulwc

2016-05-10-08:08:01 action=macdetected equipment=WD23sq002 macaddress=c1:14:f1:66:48:8b accesspoint=sbmabulwc

There are two challenges with this type of format:

  • First, the shape of the records changes from line to line
  • Second, the records mix metadata (keys or labels) and data (values). We want to separate the two for an easier display with reporting tools, or to feed that data to ETL tools for further processing.

With BDP, we want to extract all the labels and use them as column names. Once we have identified and extracted the column names, we have a separation between data and meta-data… and we can display the information efficiently.

In this early version of BDP, we only see a single column for the original file (Col_0001 in the screenshot below).

Extract from Col_0001

The first thing we do is ask BDP to list all available labels, using a very basic regular expression that matches every string ending with the ‘=’ sign:

([A-z]+=)

You can see below that BDP provides immediate feedback for the result of the expression (see Regex Result) as we are building it:

Labels Regular Expression

This generates a new pseudo column with only the label names. If we ask BDP to display all distinct values for this new column, we now have a list of all possible shapes for the records… along with the exhaustive list of possible labels:

List of Labels
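To make the idea concrete outside of the BDP user interface, here is a minimal Java sketch of the same label-discovery step. This is our own illustrative code, not part of BDP: it runs an equivalent regular expression over a single log line and collects the distinct label names, which is what the pseudo column above gives us across the whole file.

import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LabelLister {

    // Matches any run of letters immediately followed by '=': these are the label names.
    private static final Pattern LABEL = Pattern.compile("([A-Za-z]+)=");

    public static Set<String> listLabels(String line) {
        Set<String> labels = new LinkedHashSet<>();
        Matcher m = LABEL.matcher(line);
        while (m.find()) {
            labels.add(m.group(1));
        }
        return labels;
    }

    public static void main(String[] args) {
        String line = "2016-05-10-08:01:12 action=signup id=johndoe device=android "
                + "ip=128.12.25.01 equipment=LS11ad145 macaddress=1a:22:bc:fe:21:b2 "
                + "accesspoint=sbmalexingctr";
        // Prints: [action, id, device, ip, equipment, macaddress, accesspoint]
        System.out.println(listLabels(line));
    }
}

Collecting these label sets over every line and keeping the distinct combinations is exactly the “list of all possible shapes” that BDP showed us above.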

Now that we know which column names we want to create, we further take advantage of BDP to extract the information that we need from the file and prepare it for our users.

Extracting Column names

For each of the columns of interest to us, we can extract the data with a very simple extract expression. For instance, if we want a column named AccessPoint to store the data from the accesspoint label, all we have to do is use the expression:

accesspoint=([A-z]+)

When we enter this expression, once again BDP gives us immediate feedback on the values found in the data file (sbmalexingctr in the example below):

Access Point regular expression

A column such as Mac Address would use an expression like this:

macaddress=([a-f0-9:]+)

Mac address regular expression
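As another illustration outside of BDP (this helper class is our own sketch, not a BDP API), the per-column extraction boils down to applying each of these expressions to a record and keeping the first capture group, with an empty cell when a record shape does not carry that label:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ColumnExtractor {

    // Returns the value captured by the expression, or null when the label
    // is absent from this particular record shape (the cell stays empty).
    static String extract(String line, Pattern valuePattern) {
        Matcher m = valuePattern.matcher(line);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        Pattern accessPoint = Pattern.compile("accesspoint=([A-Za-z]+)");
        Pattern macAddress  = Pattern.compile("macaddress=([a-f0-9:]+)");

        String line = "2016-05-10-08:03:21 action=maclost reason=outofrange "
                + "equipment=WD23sq002 macaddress=c1:14:f1:66:48:8b accesspoint=sbmabulwc";

        System.out.println(extract(line, accessPoint)); // sbmabulwc
        System.out.println(extract(line, macAddress));  // c1:14:f1:66:48:8b
    }
}

Because each expression simply searches the record, labels can appear in any order or be missing entirely, which is what makes this kind of script robust to the varying record shapes.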

For each column that we create, BDP updates profiling statistics immediately to give us a detailed view of the distribution of the data. In the screenshot below, we can see the distribution for the Action column: distinct values and their count in the table on the left, and a bubble chart at the bottom right.

Actions profiling

This allows us to keep or discard columns as we move along. Columns with only a single value across the board might not be of much interest, for instance…

Publishing the result

Once we have identified all the columns of interest, we can publish the data – either directly to Business Intelligence Cloud Service (BICS), or to a file for further consumption. At this point, removing the original columns (‘Col_0001’ and ‘Labels’) makes sense. We can always retrieve them later if needed since BDP allows us to reverse any operation by clicking on the ‘X’ next to the operation itself in the list of transforms:

Transforms list

We can now execute our script. The progress of the operations is visible in the Activity Stream which can be found on the right-hand side of the dashboard as seen in the screenshot below:

Dashboard

We can then review the result of our operations:

Published Results

We did our first exercise with a small version of the original file. This allowed us to design the solution while the much larger (4Gb) original file was being loaded to the Cloud. Since BDP runs on a Hadoop cluster (leveraging Spark/Scala), it scales easily for massive content.

By the time we had completed this first run, the original file was available in the Cloud. We ran the exact same job definition on the larger file and experienced immediate success: once the transform script is defined, we can run it against any file that more or less matches the original format. Labels can be out of order, added, or missing; none of this will prevent the transformations from running.

These jobs can be operationalized via the scheduler to process new files, or can be based on the detection of new files arriving in the source system.

Conclusion

Even with a file format that is not recognized by BDP (at least in this early release) we can take advantage of its ability to dynamically prepare and profile data to extract relevant information. We can then publish it in a structured format for further consumption with very little effort and no programming skills required.

For more Data Integration best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-team Chronicles for Data Integration.

Acknowledgements

Special thanks to Luis Rivas and Sandrine Riley for continuous support on BDP.

Accessing and Analyzing Twitter Feeds with Oracle Stream Analytics (Part 1)


Oracle Stream Analytics

The new release of Oracle Stream Analytics (OSA) contains a host of new features, some of which are connection capability to Twitter, geo-spatial Exploration display, and machine-learning-based data analysis. In combination those features make OSA an excellent tool for implementing a Twitter sentiment monitoring application. This is Part 1 of a 3-part series demonstrating how to use OSA to monitor a Twitter data stream and analyze the stream for geographical and sentiment information. Part 1 shows how to create a connection to Twitter and stream tweets into OSA. Part 2 will show how to display the geographical origin of tweets. Part 3 will show how to analyze the tweet stream for sentiment information.

Accessing Twitter

The Twitter developer platform provides a REST API, OAuth authentication, Twitter widgets to embed in web and mobile apps, etc. To define the OSA connection to Twitter, first create OAuth credentials at Twitter, then use the built-in Twitter adapter provided in OSA 12.2.1 to access a tweet stream.

At Twitter, create an application definition:

twitter app management-create new app

Click the “Create New App” button to create the app. Fill in the Name, Description and Website fields, agree to the developer agreement, and click the “Create your Twitter application” button at the bottom of the page. The new application will have a Consumer Key and Consumer Secret; you will also need an Access Token and Access Token Secret, so click the “Create my access token” button.

twitter app keys

Record the four key strings as shown above.

OSA Connection

In OSA create the connection definition:

OSA catalog create connection

Provide a Name and select Twitter as the connection type. On the next dialog page, paste the key strings recorded earlier from the Twitter application and click “Save”.

twitter connection keys

OSA Stream

Once the connection is defined it is used to create a stream. Select “Stream” from the “Create New Item” menu, enter a Name, Description and Tags, and pick Twitter from the Stream Type drop-down list. You can also select the “Create Exploration” check box to automatically start the Exploration dialog. On the next page of the stream dialog, select the connection you just created and enter a comma-separated list of Twitter hashtags for the stream.

OSA stream create

On the last page of the stream dialog, select “TwitterShape” from the “Select Existing Shape” drop-down list.

OSA stream twitter shape

OSA Exploration

The create exploration dialog will open after the stream is created. Enter a Name, Description and Tags for the exploration. Select the newly created Stream as the Source and click the Create button.

OSA exploration create

The exploration will show the raw data for the tweets on the selected hashtags.

OSA exploration tweets

Next

In Part 2 we’ll look at using the geographical data in the stream (the latitude/longitude of the tweeter) to create a spatial exploration displaying where tweets originate.

Tuning G1GC For SOA


Garbage-First Garbage Collector (G1GC) is a GC algorithm introduced in a later update of JDK 1.7. Prior to the introduction of G1GC there were two other GC algorithms: ParallelGC and Concurrent Mark Sweep (CMS). While ParallelGC was the choice for high-throughput applications like SOA, CMS was the choice for applications requiring low latency. ParallelGC, CMS and G1GC all exist in JDK 1.7 and higher versions. One must make an informed decision to pick the right algorithm for their application.

This blog covers tuning G1GC specifically for the Oracle SOA product. However, the same tuning process can be applied to other applications and products running with the G1GC algorithm.

Considerations for Moving to G1GC

While ParallelGC may be well suited for SOA applications, there are times when one might consider G1GC instead of ParallelGC. Some of the considerations for moving to G1GC are the following:
1. Full GC Long or Frequent
ParallelGC is a high-throughput algorithm that accumulates garbage over time. Once the garbage has consumed the whole heap it performs a Full GC (garbage collection) to clean it up, and that Full GC takes a long time to execute. This may not be acceptable for many solutions because of the functional implications of the stop-the-world pause a Full GC introduces. A Full GC can also cause several problems like dead node detection, service migration, JTA migration, etc. So, if the Full GC is taking a long time and that is not desirable for the solution, G1GC can be chosen.

2. Heap Size very Large
With a large heap, on the order of 16 GB or more, Full GC times with ParallelGC will be long, and they increase as the heap size grows. With a very large heap, G1GC can be chosen.

3. Hardware
G1GC has more system-thread overhead than ParallelGC. G1GC does incremental GC collections without the usual stop-the-world (STW) pauses. Hence G1GC is recommended on hardware that has sufficient hardware threads so that concurrent GCs can occur without noticeable degradation of application performance.

Tuning G1GC

If your situation meets some or all of the considerations above, it is a good idea to go ahead with the G1GC algorithm.

The SOA application referenced in this blog met all the criteria above. It had a huge heap of about 22 GB, and with 22 GB of heap ParallelGC takes a long time to complete a Full GC. Moreover, this SOA application was running on Exalogic with 22 hardware threads per node, which gave sufficient concurrency to run G1GC without compromising the threads used by the application.

The next step was to tune G1GC to meet the following goals:

Goals:

  1. Best Throughput:

The G1GC algorithm should provide throughput comparable to that given by ParallelGC. The throughput is measured in every test, and tuning is done to improve it test after test until it cannot be improved any further.

  2. Avoid Full GC OR Constant-Time Full GC:

As much as possible the tuning must help avoid Full GC. If Full GCs are unavoidable then at least they should take constant time, and this time should be as low as possible. For this, the promotion rate of objects and the freed memory per minute should be improved test after test (see the sketch after this list for how the freed memory figure can be read off the GC log).
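The analysis tools listed in the next section report the freed memory per minute directly; purely as an illustration (this class is our own sketch, not part of any of those tools), each G1 pause prints a heap transition such as “Heap: 17.7G(18.1G)->6211.3M(18.1G)”, and the difference between the before and after figures is the memory freed by that collection. Summing it over a time window and dividing by the elapsed minutes gives the freed memory per minute.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FreedMemoryEstimator {

    // Matches the heap transition printed at the end of a G1 pause,
    // e.g. "Heap: 16.0G(18.1G)->6211.3M(18.1G)".
    private static final Pattern HEAP =
            Pattern.compile("Heap:\\s*([0-9.]+)([MG])\\([^)]*\\)->([0-9.]+)([MG])");

    private static double toMb(double value, String unit) {
        return "G".equals(unit) ? value * 1024.0 : value;
    }

    // Memory freed by one collection, in MB (negative if the heap grew during the pause).
    public static double freedMb(String gcLogLine) {
        Matcher m = HEAP.matcher(gcLogLine);
        if (!m.find()) {
            return 0.0;
        }
        double before = toMb(Double.parseDouble(m.group(1)), m.group(2));
        double after = toMb(Double.parseDouble(m.group(3)), m.group(4));
        return before - after;
    }

    public static void main(String[] args) {
        String line = "[Eden: 0.0B(920.0M)->0.0B(9832.0M) Survivors: 0.0B->0.0B "
                + "Heap: 17.7G(18.1G)->6211.3M(18.1G)]";
        System.out.printf("Freed by this collection: %.0f MB%n", freedMb(line));
    }
}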

Multiple tests were run, and after each test the results were analyzed to further tune G1GC until the desired goals were met.

Analysis Tools:

For the analysis of the test results the following tools were used:

  1. gceasy.io – a web-based GC analyzer tool available at www.gceasy.io.
  2. GC Viewer – a Java-based GUI tool for GC log analysis. Note: make sure to download the latest source of GC Viewer from Git and then compile the project to get the jar file.
  3. GC Logs – the raw GC logs were analyzed for each test run (the logging flags that produce them are noted after this list).
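All three tools work from GC log files, so the JVM has to be started with GC logging enabled. As a minimal example (assuming HotSpot JDK 7; the log file path is a placeholder), the following flags produce the timestamps, heap transitions and “[G1Ergonomics …]” decision traces quoted in the tests below:

-Xloggc:/path/to/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy

Here -XX:+PrintAdaptiveSizePolicy is the flag that emits the G1Ergonomics lines, while the other flags provide the per-pause detail and wall-clock timestamps.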

Test1:

This is the first test after changing the GC algorithm from ParallelGC to G1GC. The following is the ParallelGC configuration that had been tuned for the SOA application.

Parallel GC configuration:

-Xmx18432M -Xms18432M -Xmn6144M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseParallelOldGC -XX:ParallelGCThreads=22 -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=4

When moving from ParallelGC to G1GC, do not introduce too many tuning changes right in the first test. Tuning should always be incremental and there needs to be a good rationale for tuning any parameter. The following was the first cut of G1GC parameters:

G1GC Configuration:

-Xmx18432M -Xms18432M -Xmn6144M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseG1GC -XX:ConcGCThreads=6 -XX:MaxGCPauseMillis=1000 -XX:ParallelGCThreads=22 -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=4

Clearly some additional parameters have been added and some have been taken off. The following explains the reasoning behind the parameter changes:

  • UseAdaptiveSizePolicy and -Xmn6144M should both be removed with G1GC because G1GC automatically adjusts the sizes of the new gen and old gen.
  • ConcGCThreads is set to 6. Generally this should be about one quarter of the ParallelGC threads.
  • MaxGCPauseMillis is set to 1000 ms.

Observations and Recommendations:

  1. In the logs the following traces are observed:

13491.652: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 107 regions, reclaimable: 543317752 bytes (2.80 %), threshold: 10.00 %]

This shows that the concurrent marking cycle has completed but the mixed GCs are not done because the reclaimable space is less than the tolerable limit set for waste with G1HeapWastePercent. Its default value is 10%. G1HeapWastePercent can be decreased to around 2% so that mixed GCs can occur at a lower threshold.

  2. The heap has grown to 16 GB, there is not enough space left to promote objects, and the young GC fails. At this point, ergonomics requests the start of a concurrent marking cycle because the heap occupancy has crossed the 45% threshold:

2016-05-24T20:38:04.082-0500: 2397.689: [GC pause (young) 2397.689: (to-space exhausted), 52.1631267 secs] [Eden:10096.0M(10096.0M)->0.0B(8192.0K) Survivors: 320.0M->1304.0M Heap: 16.0G(18.1G)->17.4G(18.1G)]

Another collection attempt is made, but that fails too:

2016-05-24T20:38:56.269-0500: 2449.874: [GC pause (young) (initial-mark) 2449.874: (to-space exhausted), 40.8441375 secs] [Eden: 8192.0K(8192.0K)->0.0B(880.0M) Survivors: 1304.0M->40.0M Heap: 17.4G(18.1G)->17.6G(18.1G)]

Now the heap is too full, so it resorts to a Full GC:

2016-05-24T20:39:38.799-0500: 2492.403: [Full GC 17G->6211M(18G), 52.7499550 secs]    [Eden: 0.0B(920.0M)->0.0B(9832.0M) Survivors: 0.0B->0.0B Heap: 17.7G(18.1G)->6211.3M(18.1G)]

To handle this situation the following recommendations can be considered:

i. Decrease the pause-time goal to, say, 500 ms, or leave it at the default, so that young regions are collected before they reach 10 GB of occupancy.

ii. Set G1ReservePercent=15, which would reserve 2.7 GB as free space. This helps avoid to-space exhaustion since we would always have 2.7 GB free for object promotion (we saw from the logs that when heap occupancy was 15.6 GB the young GC could still promote objects successfully). If we still see to-space exhaustion with this setting, we can increase the value further.

iii. Increase the heap size to 18 + 2.7 GB (or higher) since 2.7 GB will be kept as reserved free space.

iv. Tuning InitiatingHeapOccupancyPercent is not recommended for now because almost 10 GB is occupied by the young regions, and if those are collected in time the Full GCs can be avoided.

Test 2:

G1GC Configuration:

-Xmx22528M -Xms22528M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseG1GC -XX:ConcGCThreads=6 -XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=2 -XX:G1ReservePercent=15

Observations and Recommendations:

  1. Avg Promotion Rate=732 kb/sec
  2. Freed Mem/min=3872M/min
  3. Throughput=98.69%
  4. The test indicates that a Full GC was encountered
  5. The following traces are observed:

2016-05-26T02:18:47.053-0500: 26232.157: [GC pause (mixed) 26232.157: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 18970, predicted base time: 408.01 ms, remaining time: 591.99 ms, target pause time: 1000.00 ms] 26232.157: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 105 regions, survivors: 23 regions, predicted young region time: 54.20 ms]
 26232.157: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: reclaimable percentage not over threshold, old: 4 regions, max: 256 regions, reclaimable: 428589248 bytes (2.00 %), threshold: 10.00 %]
 26232.157: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 105 regions, survivors: 23 regions, old: 4 regions, predicted pause time: 480.20 ms, target pause time: 1000.00 ms]
 26232.399: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 11525947392 bytes, allocation request: 0 bytes, threshold: 9663676380 bytes (45.00 %), source: end of GC]
 26232.399: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 103 regions, reclaimable: 428589248 bytes (2.00 %), threshold: 2.00 %], 0.2423031 secs]
[Eden: 840.0M(840.0M)->0.0B(6224.0M) Survivors: 184.0M->96.0M Heap: 11.7G(20.0G)->10.8G(20.0G)]

The GC has collected only 4 old regions. The expectation was that more old regions would be collected, so consider working with the following parameters:

-XX:G1OldCSetRegionThresholdPercent=15 (increase this parameter from 10 to 15 so we have a higher upper limit on the number of old regions that can be collected)

-XX:G1MixedGCLiveThresholdPercent=55 (decrease this parameter from 65 to 55 so that old regions are collected at this occupancy threshold)

  6. There is a large flux in eden sizes. For this issue, limiting the maximum eden space may be a good idea. Setting -XX:G1MaxNewSizePercent=25 binds the eden size to one quarter of the total heap. This should avoid the eden flux.

[Eden: 7984.0M(7984.0M)->0.0B(10.5G) Survivors: 104.0M->168.0M Heap:13.6G(20.0G)->6022.3M(20.0G)]

Test2-Eden Space Decreasing

Here the young gen becomes so large that G1GC cannot possibly finish a marking cycle in time if too much of it survives.

  7. With this in place the behavior should be a bit more predictable, and InitiatingHeapOccupancyPercent can be set to 60; the default value of 45 is too low since there appears to be some sort of steady state around 12-13 GB.

 

Test 3:

G1GC Configuration:

-Xmx22528M -Xms22528M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseG1GC -XX:ConcGCThreads=12 -XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=1 -XX:G1ReservePercent=15 -XX:+UnlockExperimentalVMOptions -XX:G1OldCSetRegionThresholdPercent=20 -XX:G1MixedGCLiveThresholdPercent=45 -XX:G1MaxNewSizePercent=25 -XX:InitiatingHeapOccupancyPercent=60

Observations and Recommendations:

  1. Avg Promotion Rate=567 kb/sec
  2. Freed Mem/min=3925M/min
  3. Throughput=98.53%
  4. The test indicates that a Full GC was encountered
  5. The eden flux is no longer seen after capping the maximum eden space. However, the eden still keeps decreasing over time, making way for the old gen to utilize the space. A few more tests can be run before deciding whether to cap the minimum eden space.

Test3-Eden Space Decreasing

  6. One more problem seen is that not enough old regions were included in the collections.

[Eden: 1008.0M(1008.0M)->0.0B(976.0M) Survivors: 112.0M->144.0M Heap: 18.6G(22.0G)->17.7G(22.0G)] [Times: user=3.41 sys=0.00, real=0.26 secs]
2016-05-31T07:29:30.091-0500: 63963.414: [GC pause (mixed) 63963.414: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 18677, predicted base time: 240.93 ms, remaining time: 759.07 ms, target pause time: 1000.00 ms] 63963.414: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 122 regions, survivors: 18 regions, predicted young region time: 65.67 ms]
 63963.414: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: reclaimable percentage not over threshold, old: 3 regions, max: 564 regions, reclaimable: 236219288 bytes (1.00 %), threshold: 1.00 %]
 63963.414: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 122 regions, survivors: 18 regions, old: 3 regions, predicted pause time: 310.84 ms, target pause time: 1000.00 ms]
 63963.709: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 18824036352 bytes, allocation request: 0 bytes, threshold: 14173392060 bytes (60.00 %), source: end of GC]
 63963.709: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 44 regions, reclaimable: 236219288 bytes (1.00 %), threshold: 1.00 %], 0.2948710 secs]

[Eden: 976.0M(976.0M)->0.0B(976.0M) Survivors: 144.0M->144.0M Heap:18.6G(22.0G)->17.7G(22.0G)]

For this problem, setting G1MixedGCLiveThresholdPercent=85 makes sure that a good number of old regions become candidates for inclusion. It looks like lowering the value of G1MixedGCLiveThresholdPercent (in the previous test) had a negative effect on the tests.

  7. Setting the parameter G1HeapWastePercent to an aggressive value of 1 does not provide much benefit, so for the next test we will set the value back to 2.

Test 4:

G1GC Configuration:

-Xmx20480M -Xms20480M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseG1GC -XX:ConcGCThreads=12 -XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=2 -XX:NewSize=5g -XX:G1ReservePercent=15 -XX:+UnlockExperimentalVMOptions -XX:G1OldCSetRegionThresholdPercent=15 -XX:G1MixedGCLiveThresholdPercent=85 -XX:G1MaxNewSizePercent=25

Observations and Recommendations:

  1. Avg Promotion Rate=567 kb/sec
  2. Freed Mem/min=3949M/min
  3. Throughput=99.56%
  4. The following traces are observed:

47131.902: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 220 regions, reclaimable: 428700856 bytes (2.00 %), threshold: 2.00 %]
 47450.922: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 11114905600 bytes, allocation request: 0 bytes, threshold: 9663676380 bytes (45.00 %), source: end of GC]
 47450.922: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate old regions available, candidate old regions: 223 regions, reclaimable: 437196056 bytes (2.04 %), threshold: 2.00 %]
2016-06-01T07:08:18.279-0500: 47468.369: [GC pause (mixed) 47468.370: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 15583, predicted base time: 436.80 ms, remaining time: 563.20 ms, target pause time: 1000.00 ms]
 47468.742: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 11131682816 bytes, allocation request: 0 bytes, threshold: 9663676380 bytes (45.00 %), source: end of GC]
 47468.742: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 219 regions, reclaimable: 428301560 bytes (1.99 %), threshold: 2.00 %]
 47805.150: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 11140071424 bytes, allocation request: 0 bytes, threshold: 9663676380 bytes (45.00 %), source: end of GC]
 47805.150: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate old regions available, candidate old regions: 227 regions, reclaimable: 441265288 bytes (2.05 %), threshold: 2.00 %]
2016-06-01T07:14:10.805-0500: 47820.886: [GC pause (mixed) 47820.886: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 15272, predicted base time: 487.50 ms, remaining time: 512.50 ms, target pause time: 1000.00 ms]
 47821.265: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 11148460032 bytes, allocation request: 0 bytes, threshold: 9663676380 bytes (45.00 %), source: end of GC]
 47821.265: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 219 regions, reclaimable: 429481256 bytes (2.00 %), threshold: 2.00 %]

To continue having the mixed collections, set G1MixedGCLiveThresholdPercent to 90.

 

Test 5:

G1GC Configuration:

-Xmx22528M -Xms22528M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseG1GC -XX:ConcGCThreads=12 -XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=2 -XX:G1ReservePercent=15 -XX:+UnlockExperimentalVMOptions -XX:G1OldCSetRegionThresholdPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1MaxNewSizePercent=25

Observations and Recommendations:

  1. Avg Promotion Rate=592 kb/sec
  2. Freed Mem/min=3900M/min
  3. Throughput=99.47%
  4. Full GC encountered
  5. We now need to shift our attention to the second goal, i.e. to contain the Full GC times within a limit.

The GC logs show that the mixed GCs are happening and that G1GC is trying to collect as much as it can. The data it is not able to collect is considered live; this live data, however, contains garbage referenced and kept live by unreachable classes. In JDK 7 these unreachable classes are unloaded only as part of a Full GC, and that is when this unclaimed garbage also gets collected. So this is as far as we can go in ensuring that the old regions get collected as part of the mixed GCs.

Now, to contain the Full GCs that are bound to happen in order to unload the unreachable classes and collect the objects they reference, we will also set a minimum size for the young gen: one fifth of the heap as the minimum and one quarter as the maximum.

Test 6:

G1GC Configuration:

-Xmx22528M -Xms22528M -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:+UseG1GC -XX:ConcGCThreads=12 -XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=2 -XX:G1ReservePercent=15 -XX:+UnlockExperimentalVMOptions -XX:G1OldCSetRegionThresholdPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=25

Observations and Recommendations:

  1. Avg Promotion Rate=522 kb/sec
  2. Freed Mem/min=4642M/min
  3. Throughput=99.35%
  4. Full GC encountered
  5. The Avg Promotion Rate and Freed Mem/min are the best of all the test runs. Even with all the tuning, there is no way to avoid a Full GC, because objects in the live data that are referenced by unreachable classes are not collected during mixed GCs in JDK 1.7. This behaviour has changed in JDK 1.8. Since this exercise was performed on SOA 11g, we could only use JDK 1.7, as SOA 11g is not yet supported on JDK 1.8.
  6. It is observed that the eden space stays contained between the minimum and maximum limits.

Test6-Eden space bound between Min and Max

  7. The throughput rate is 99.35%, which is almost the same as the ParallelGC throughput.

Summary:

Although we could not avoid the occurrence of Full GC, and therefore could not keep tenured generation growth flat, we were still able to achieve the following:

  1. The occurrence of Full GCs was restricted to about one every day (a little over a day between Full GCs).
  2. Every Full GC took a roughly constant time, averaging 28 s. This is very reasonable given that G1GC runs Full GC on a single thread, compared to ParallelGC which makes use of all the ParallelGC threads. However, due to G1GC's incremental garbage collection mechanism, the garbage to be cleared by a Full GC is far less than with ParallelGC.
  3. The promotion rate of objects was reduced.
  4. The average throughput was 99.46%, which is very much comparable to the ParallelGC throughput.

Using JDK 8 and following the same tuning method, one can achieve an even flatter growth of the tenured generation.
