
Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 2


Introduction

 

This article expands on “Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 1” published in June 2015.

Part 1 described how to integrate BICS with Oracle Service Cloud (RightNow) Connect Web Services for Simple Object Access Protocol (SOAP).

Part 2 covers using the Connect REST API which allows integration with Oracle Service Cloud (RightNow) using representational state transfer (REST) web services.

REST is currently the recommended method for integrating BICS with Oracle Service Cloud (RightNow). The Connect REST API has been available since the May 2015 release of Oracle Service Cloud (RightNow); however, support for ROQL object and tabular queries only became available in the August 2015 release. Therefore, Oracle Service Cloud (RightNow) August 2015 or higher is required to implement the examples provided.

Additionally, this article showcases the new APEX_JSON package that is available as of Oracle Apex 5.0 for parsing and generating JSON. The following three APEX_JSON functions are utilized in the solution: apex_json.parse, apex_json.get_number, and apex_json.get_varchar2.

The eight steps below explain how to load data from Oracle Service Cloud (RightNow) into BICS using PL/SQL run through Oracle Apex SQL Workshop – using an Oracle Schema Service Database. The code snippets may then be incorporated into a stored procedure or web service and scheduled / triggered. (Such topics have been covered in past BICS blogs.)

1)    Construct the ROQL Query

2)    Test the Connect REST API Query URL

3)    Run the apex_web_service.make_rest_request

4)    Formulate the JSON Path Expression

5)    Create the BICS Database Table

6)    Add the APEX_JSON parse code to the PL/SQL

7)    Execute the PL/SQL

8)    Review the Results

Main Article

 

Step One – Construct the ROQL Query

 

See Step Two of “Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 1” on how to construct the ROQL Query. For this example the ROQL Query is:

select id, subject from incidents where id <12500

Step Two – Test the Connect REST API Query URL

 

This section covers building and testing the URL for ROQL object queries.

The q query parameter ROQL syntax is as follows:

https://<your_site_interface>/services/rest/connect/<version>/<resource>/?q=<ROQL Statement>

1)    Take the ROQL Query from Step One and replace all the spaces with %20 (a short PL/SQL sketch of this encoding appears after this list).

For example: select%20id,%20subject%20from%20incidents%20where%20id%20<12500

2)    Append the ROQL string to the REST API URL. For example:

https://yoursite.rightnowdemo.com/services/rest/connect/v1.3/queryResults/?query=select%20id,%20subject%20from%20incidents%20where%20id%20<12500

3)    Place the URL into a browser. It should prompt for a username and password. Enter the username and password.

4)    The browser will then prompt to save or open the results. Save the results locally. View the results in a text editor.

The text file should contain the results from the ROQL query in JSON format.
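For reference, the same space encoding can be produced in PL/SQL with a simple REPLACE, which is handy once the URL is built dynamically in later steps. This is only a minimal sketch; the variable names are illustrative.

DECLARE
  l_roql    VARCHAR2(500) := 'select id, subject from incidents where id <12500';
  l_encoded VARCHAR2(500);
BEGIN
  -- replace spaces with %20 before appending the ROQL string to the REST URL
  l_encoded := REPLACE(l_roql, ' ', '%20');
  dbms_output.put_line(l_encoded);
END;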


 

Step Three – Run the apex_web_service.make_rest_request

1)    Open SQL Workshop from Oracle Application Express


2)    Launch SQL Commands


3)    Use the snippet below as a starting point to build your PL/SQL.

Run the final PL/SQL in the SQL Commands Window.

Replace the URL site, username, password, and ROQL query.

For a text version of the code snippet click here.

DECLARE
  l_ws_response_clob CLOB;
  l_ws_url           VARCHAR2(500) := 'https://yoursite.rightnowdemo.com/services/rest/connect/v1.3/queryResults/?query=select%20id,%20subject%20from%20incidents%20where%20id%20<12500';
BEGIN
  apex_web_service.g_request_headers(1).name  := 'Accept';
  apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8';
  apex_web_service.g_request_headers(2).name  := 'Content-Type';
  apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';
  l_ws_response_clob := apex_web_service.make_rest_request
  (
    p_url         => l_ws_url,
    p_username    => 'username',
    p_password    => 'password',
    p_http_method => 'GET'
  );
  dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,1));
  dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,12001));
  dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,24001));
END;

4)    Run the query. A subset of the JSON results should be displayed in the Results section of SQL Commands.

Additional dbms_output.put_line calls may be added should further debugging be required.

It is not necessary at this stage to view the entire result set. The key to this exercise is to prove that the URL is correct and can successfully be run through apex_web_service.make_rest_request.


Step Four – Formulate the JSON Path Expression

 

1)    When formulating the JSON path expression, it may be useful to use an online JSON Path Expression Tester.

There are many different free JSON tools available online. The one below is: https://jsonpath.curiousconcept.com/

When testing, reduce the JSON output to something more manageable.

For a text version of the JSON used in this example click here.


2)    For this exercise the following needs to be extracted from the JSON: the count, all the ids, and all the subject data.

Test various path scenarios to confirm that the required data can be extracted from the JSON.

(It is much easier to debug the path in an online JSON editor than in SQL Developer.)

a)    COUNT

JSONPath Expression:

items[0].count

Returns:

[
3
]

b)    ID

id #1

JSONPath Expression:

items[0]["rows"][0][0]

Returns:

[
"12278"
]

id #2

JSONPath Expression:

items[0]["rows"][1][0]

Returns:

[
"12279"
]

id #3

JSONPath Expression:

items[0]["rows"][2][0]

Returns:

[
"12280"
]

c)    SUBJECT

subject #1

JSONPath Expression:

items[0]["rows"][0][1]

Returns:

[
"How long until I receive my refund on my credit card?"
]

subject #2

JSONPath Expression:

items[0]["rows"][1][1]

Returns:

[
"Do you ship outside the US?"
]

subject #3

JSONPath Expression:

items[0]["rows"][2][1]

Returns:

[
"How can I order another product manual?"
]

 

Step Five – Create the BICS Database Table

 

1)    Open SQL Workshop from Oracle Application Express


 

2)    Launch SQL Commands


3)    Create the RIGHT_NOW_REST table in the BICS database.

To view the SQL in plain text click here.

CREATE TABLE "RIGHT_NOW_REST"
(ID NUMBER,
SUBJECT VARCHAR2(500)
);


Step Six – Add the APEX_JSON parse code to the PL/SQL

 

For a text version of the PL/SQL snippet click here.

The first part of the code, up to and including the make_rest_request call, is what was tested in 'Step Three – Run the apex_web_service.make_rest_request'.

Replace the URL site, username, password, and ROQL query, as previously described in ‘Step Three – Run the apex_web_service.make_rest_request’.

The objective of this step is to add the JSON Path Expressions formulated and tested in Step Four into the PL/SQL.

Note: Three debug lines have been left in the code – as they will most likely be needed. These can be removed if desired.

The key parts to the APEX_JSON code are:

1)    Identify a value that can be used to loop through the records.

Apex PL/SQL: apex_json.get_number(p_path => 'items[1].count')

In this example it is possible to use “count” to loop through the records.

However, different use cases may require another parameter to be used.

Note the different numbering in the JSONPath Expression vs. the apex_json function.

In the online JSONPath Expression builder position 1 = [0]. However, in Apex position 1 = [1].


2)    Map the fields to be selected from the JSON.

ID:             apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][1]'),

SUBJECT: apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][2]')

Change the get_datatype accordingly. For example: get_varchar2, get_number, get_date etc. See APEX_JSON function for syntax. It may be easier to return the JSON as varchar2 and convert it in a separate procedure to avoid datatype errors.

Revisit the JSON Path Expressions created in 'Step Four – Formulate the JSON Path Expression' and map them to APEX_JSON.

Convert the JSONPath to APEX_JSON, taking into consideration the LOOP / Count 'i' variable.

Repeat the process of adding one position to the starting value, i.e. [0] -> [1], [1] -> [2].

ID:

items[0]["rows"][0][0]         —————->

items[0]["rows"][1][0]         —————->      apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][1]')

items[0]["rows"][2][0]         —————->

SUBJECT:

items[0]["rows"][0][1]         —————->

items[0]["rows"][1][1]         —————->      apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][2]')

items[0]["rows"][2][1]         —————->

Additional Code Advice (p_path):

Keep in mind that p_path is expecting a string.

Therefore, it is necessary to concatenate any dynamic variables such as the LOOP / Count ‘i’ variable.

DECLARE
  l_ws_response_clob CLOB;
  l_ws_url VARCHAR2(500) := 'https://yoursite.rightnowdemo.com/services/rest/connect/v1.3/queryResults/?query=select%20id,%20subject%20from%20incidents%20where%20id%20<12500';
BEGIN
  apex_web_service.g_request_headers(1).name  := 'Accept';
  apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8';
  apex_web_service.g_request_headers(2).name  := 'Content-Type';
  apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';
  l_ws_response_clob := apex_web_service.make_rest_request
  (
    p_url         => l_ws_url,
    p_username    => 'username',
    p_password    => 'password',
    p_http_method => 'GET'
  );
  apex_json.parse(l_ws_response_clob);
  --sys.dbms_output.put_line('found '||apex_json.get_varchar2(p_path => 'items[1].rows[3][2]'));
  --sys.dbms_output.put_line('Number of Incidents: '||apex_json.get_varchar2(p_path => 'items[1].count'));
  for i in 1..apex_json.get_number(p_path => 'items[1].count') LOOP
    INSERT INTO RIGHT_NOW_REST(ID, SUBJECT)
    VALUES
    (
      apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][1]'),
      apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][2]')
    );
    --sys.dbms_output.put_line('Col1:'|| apex_json.get_varchar2(p_path => l_col1_path) || ' Col2:' || apex_json.get_varchar2(p_path => l_col2_path));
  end loop;
END;
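Building on the datatype advice above, a hedged variation of the loop body fetches the values as VARCHAR2 first and converts the ID explicitly before the insert. This is only a sketch and assumes apex_json.parse has already been called on the response in the same block.

-- Hedged sketch: explicit conversion of the parsed values before inserting.
-- Assumes apex_json.parse(l_ws_response_clob) has already run in this block.
for i in 1..apex_json.get_number(p_path => 'items[1].count') LOOP
  INSERT INTO RIGHT_NOW_REST(ID, SUBJECT)
  VALUES
  (
    TO_NUMBER(apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][1]')),
    apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][2]')
  );
end loop;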

Step Seven – Execute the PL/SQL

 

Run the PL/SQL in Apex SQL Commands.

“1 row(s) inserted.” should appear in the results.

This PL/SQL code snippet could also be incorporated through many other mechanisms such as stored procedure, web service etc. Since these topics have been covered in past blogs this will not be discussed here.

Step Eight – Review the Results

 

Confirm data was inserted as expected.

select * from RIGHT_NOW_REST;


Further Reading

 

Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the Application Express API Reference Guide – APEX_JSON Package.

Click here for the Oracle Service Cloud Connect REST API Developers Guide – August 2015.

Click here for more A-Team BICS Blogs.

Summary

 

This article provided a set of examples that leverage the APEX_WEB_SERVICE API to integrate Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) using the Connect REST API web services.

The use case shown was for BICS and Oracle Service Cloud (RightNow) integration. However, many of the techniques referenced could be used to integrate Oracle Service Cloud (RightNow) with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST and APEX_JSON examples can be easily modified to integrate BICS or standalone Oracle Apex with any other REST web service that can be accessed via a URL and returns JSON data.

Techniques referenced in this blog may be useful for those building BICS REST ETL connectors and plug-ins.

Key topics covered in this article include: Oracle Business Intelligence Cloud Service (BICS), Oracle Service Cloud (RightNow), ROQL, Connect REST API, Oracle Apex API, APEX_JSON, apex_web_service.make_rest_request, PL/SQL, and Cloud to Cloud integration.


OpenCMIS and Oracle Documents Cloud


Introduction

Using Apache Chemistry, a CMIS server can be built to interact with Oracle Documents Cloud Service. Following the example in the OpenCMIS Server Development Guide, a CMIS server for Oracle Documents Cloud can be deployed on a Weblogic or Java Cloud Service (JCS) instance.

 

Main Article

 

Oracle Documents Cloud Service (DOCS) is an Enterprise File Sync and Share (EFSS) solution that allows for collaboration and access from any device. Using it as a CMIS server in this example allows an Oracle Documents Cloud account to serve as the backend content repository. This can allow an application that requires a CMIS connection, such as Primavera, to create folders and files in DOCS that could be further shared out to other DOCS users on their mobile or tablet devices. The possibility of CMIS and DOCS working together can lead to other interesting use cases, since DOCS is available on any device or browser.

This article will discuss how a CMIS server could be built using the DOCS REST API. The Apache Chemistry OpenCMIS development guide provides the key infrastructure for creating a CMIS server, while the DOCS REST API can be used for the backend interactions.

 

Mappings Between DOCS Objects and CMIS Objects

Properties of folders and files have many similarities between CMIS and DOCS. The REST API for DOCS provides parallels to the CMIS properties that can be mapped directly into CMIS objects. One of the duties of a CMIS server is to provide meaningful metadata in the CMIS world for clients to consume. In the table below, the REST attributes returned from DOCS are shown in the first column and the other columns show how that DOCS property fits into the CMIS folder object type.

Folder Properties

The cmis:objectId property is a natural fit for the DOCS folder id. Many of the properties are direct mappings from DOCS to CMIS, but a few must be derived. The cmis:path would currently have to be constructed from a folder’s parents recursively, since DOCS does not have a path associated with a folder. However, the DOCS folder id is the key used to access a folder, not a path. Some CMIS fields would not have a direct mapping to a DOCS attribute at this time, such as cmis:changetoken and cmis:secondaryObjectIds.
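As a rough illustration of that derivation, a cmis:path could be assembled by walking the parent chain through the REST client. The sketch below assumes a hypothetical getFolderInfo(id) helper on the REST client that returns a POJO exposing the DOCS name and parentID attributes; those names are assumptions for this example, not part of the DOCS API.

    // Hedged sketch: build cmis:path by walking parent folders.
    // docsRestClient.getFolderInfo(...) and FolderInfo are assumed helper names.
    private String buildCmisPath(String folderId) throws Exception {
        StringBuilder path = new StringBuilder();
        String currentId = folderId;
        // "self" marks the personal workspace root in DOCS parentID values
        while (currentId != null && !"self".equals(currentId)) {
            FolderInfo info = docsRestClient.getFolderInfo(currentId);
            path.insert(0, "/" + info.getName());
            currentId = info.getParentID();
        }
        return path.length() == 0 ? "/" : path.toString();
    }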

The following folder properties from DOCS to CMIS can be mapped:

 

DOCS Folder REST Attribute | CMIS Property Id | Type | Example Value
id | cmis:objectId | id | ["F5F01D87727350F4BF624ABF15BE5DEA38AC513C7430"]
name | cmis:name | string | ["test"]
createdBy->displayName | cmis:createdBy | string | ["Tom Tester"]
modifiedBy->displayName | cmis:lastModifiedBy | string | ["Tom Tester"]
createdTime | cmis:creationDate | datetime | [2015-10-06 20:53:55 -0700]
modifiedTime | cmis:lastModificationDate | datetime | [2015-10-06 20:54:04 -0700]
description | cmis:description | string | []
type | cmis:baseTypeId | id | ["cmis:folder"]
type | cmis:objectTypeId | id | ["cmis:folder"]
Constructed from name and parent folder names | cmis:path | string | ["/Marketing/Assets"]
parentID | cmis:parentId | id | ["self"]

 

In a prototype, the CMIS Workbench shows this in a better display when selecting a folder in the tree navigation. The REST attributes from DOCS appear as CMIS attributes.

[Screenshot: folder object properties in the CMIS Workbench]

 

Document Properties

A Document object type in CMIS is the equivalent of a File object type in DOCS terminology. Like folders, many of the CMIS document values can be populated from the REST responses from DOCS.

The following document properties from DOCS to CMIS can be mapped:

DOCS File REST Attribute | CMIS Property Id | Type | Example Value
id | cmis:objectId | id | ["DE484E3FA0F3961FBED7551715BE5DEA38AC513C7430"]
name | cmis:name | string | ["barcode.doc"]
createdBy->displayName | cmis:createdBy | string | ["Tom Tester"]
modifiedBy->displayName | cmis:lastModifiedBy | string | ["Tom Tester"]
createdTime | cmis:creationDate | datetime | [2015-10-06 20:49:37 -0700]
modifiedTime | cmis:lastModificationDate | datetime | [2015-10-06 20:49:37 -0700]
description | cmis:description | string | []
type | cmis:baseTypeId | id | ["cmis:document"]
type | cmis:objectTypeId | id | ["cmis:document"]
version | cmis:versionLabel | string | ["1"]
id | cmis:versionSeriesId | id | ["DE484E3FA0F3961FBED7551715BE5DEA38AC513C7430"]
size | cmis:contentStreamLength | integer | [27136]
Derived from extension on name attribute | cmis:contentStreamMimeType | string | ["application/msword"]
name | cmis:contentStreamFileName | string | ["barcode.doc"]

 

 

In a prototype of a DOCS CMIS server, the CMIS Workbench shows which fields are mapped when a document is selected in the navigation tree. The above mappings show DOCS attributes appearing in the CMIS Workbench.

[Screenshot: document object properties in the CMIS Workbench]
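One value that has to be derived rather than mapped is cmis:contentStreamMimeType. A minimal sketch, using the JDK's extension-based lookup and falling back to a generic type when the extension is not recognized, could look like this (the class and method names are illustrative only):

    import java.net.URLConnection;

    public class MimeTypeHelper {
        // Hedged sketch: derive cmis:contentStreamMimeType from the DOCS file name.
        // Coverage depends on the JVM's content-types table, so fall back to a generic type.
        public static String mimeTypeForName(String fileName) {
            String mimeType = URLConnection.guessContentTypeFromName(fileName);
            return (mimeType != null) ? mimeType : "application/octet-stream";
        }
    }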

 

REST Client

A key toward linking the OpenCMIS development guide sample and DOCS is creating a Java REST client. This can be done using Jersey or Apache CXF. Defining POJOs to handle the REST responses is helpful in parsing the responses returned from DOCS.

The client can be created in various ways. One is by using the WADL file for the DOCS REST API, in which JDeveloper can automatically generate a client interface for the entire DOCS REST API. Another method is to create your own class. The snippet below shows use of Jersey to create a client. The client must define the methods that are needed for CMIS to call, which includes most of the folders and files REST calls in DOCS. A sample below shows a GET Personal Workspace REST request, which returns a POJO named “PersonalWorkspace”. A second method does a POST to create or upload a file into DOCS. A set of POJOs can be created to handle the responses that Documents Cloud returns.

    public RestClient(String docshostname, String username, String password) {
        super();
        m_client = getClient();
        
        String authString = username + ":" + password;

        try {
            authString = DatatypeConverter.printBase64Binary(authString.getBytes("UTF-8"));
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        m_auth = authString;
        m_hostname = docshostname;
        m_restBaseUrl = m_hostname + m_restResourceStart;
    }

    public static Client getClient() {
        ClientConfig cc = new DefaultClientConfig();
        cc.getClasses().add(MultiPartWriter.class);
        cc.getFeatures().put(JSONConfiguration.FEATURE_POJO_MAPPING, Boolean.TRUE);
        return Client.create(cc);
    }
    
    public PersonalWorkspace getPersonalWorkspace(String orderby, int limit, int offset) throws Exception {
        String restPath = "folders/items"; 
        String fullRestUrl = m_restBaseUrl + restPath;
        
        //Set defaults for orderby, limit, and offset
        if(orderby == null){
            orderby="name:asc";
        }
        
        if(limit <= 0){
            limit = 50;
        }
        
        if(offset <= 0){
           offset = 0; 
        }
        
        fullRestUrl = fullRestUrl + "?orderby=" + orderby + "&limit=" + limit + "&offset=" + offset;
        WebResource webResource = m_client.resource(fullRestUrl);
        ClientResponse response =
            webResource.header("Authorization", "Basic " + m_auth).get(ClientResponse.class);
        
        PersonalWorkspace pw = null;
        if (response.getStatus() != 200) {                      
            throw new Exception("Failed calling REST GET folder in method getPersonalWorkspace, URL was: " + fullRestUrl + ": \nHTTP error code: " + response.getStatus() + "\nStatus Info: " +
                                       response.getStatusInfo() + "Response was: " + response.getEntity(String.class));
        }
        else{
            pw = response.getEntity(PersonalWorkspace.class);
        }
        return pw;
    }

    public Item postCreateFile(String parentFolder, String primaryFile, InputStream is) throws Exception {
        if (primaryFile == null){
            throw new Exception("postCreateFile: primaryFile cannot be null when creating a new file in DOCS.");
        }            
        String restPath = "files/data"; 
        String fullRestUrl = m_restBaseUrl + restPath;
        
        WebResource webResource = m_client.resource(fullRestUrl);
        
        String jsonPayload = "{\"parentID\":\"" + parentFolder + "\"}";
        //Add the jsonInputParameters to the multi part form
        FormDataMultiPart formDataMultiPart = new FormDataMultiPart();
        FormDataBodyPart fdpjson = new FormDataBodyPart("jsonInputParameters", jsonPayload);
        formDataMultiPart.bodyPart(fdpjson);

        //Add the file contents to the multi part form        
        StreamDataBodyPart sdp = new StreamDataBodyPart("primaryFile", is, primaryFile);
        formDataMultiPart.bodyPart(sdp);

        ClientResponse response =
            webResource.header("Authorization", "Basic " + m_auth).type(MediaType.MULTIPART_FORM_DATA).post(ClientResponse.class, formDataMultiPart);
        
        if (response.getStatus() != 200 && response.getStatus() != 201) {
            throw new Exception("Failed calling REST post create file in method postCreateFile, URL was: " + fullRestUrl + ": \nHTTP error code: " + response.getStatus() + "\nStatus Info: " +
                                       response.getStatusInfo() + "Response was: " + response.getEntity(String.class));
        }
        return response.getEntity(Item.class);            
    }

 

DOCS CMIS Server

Using the FileBridge demo as an example, create classes for an Oracle DOCS CMIS Server, and define the behaviors to work with the REST API instead of a file system. Creating a CMIS project from scratch can generate class names for you and include all of the dependencies. See the section in the OpenCMIS development guide titled “Creating a new server project from scratch (Optional)”.

Once the project is generated, the CMIS methods can be written to interact with the DOCS REST client. For example, the CMIS createDocument method takes an InputStream and uploads it to the repository; in this case the REST client performs the task.

    /**
     * CMIS createDocument.
     */ 
    public String createDocument(CallContext context, Properties properties, String folderId,
            ContentStream contentStream, VersioningState versioningState) {
        checkUser(context, true);

        if (VersioningState.NONE != versioningState) {
            throw new CmisConstraintException("Versioning not supported!");
        }

        // check properties
        checkNewProperties(properties, BaseTypeId.CMIS_DOCUMENT);

        // check the file
        String name = OracleDocsUtils.getStringProperty(properties, PropertyIds.NAME);
        
        Item item = null;
        try {
            item = docsRestClient.postCreateFile(folderId, name, contentStream.getStream());
        } catch (Exception e) {
            e.printStackTrace();
            LOG.severe(e.getMessage());
        }
        return item.getId();
    }

 

Apache Chemistry Links and Information

Chemistry Home Page: http://chemistry.apache.org/

Older version: How to create a CMIS server using OpenCMIS: https://chemistry.apache.org/java/how-to/how-to-create-server.html

Newer version: OpenCMIS Server Development Guide on GitHub: https://github.com/cmisdocs/ServerDevelopmentGuide

This CMIS server was based on the newer version of the CMIS server development guide.

WLS 12c Deployment Notes

The development guide builds a sample server based on a file system, called the FileBridge project. The model provides a starting point for building a custom CMIS server.

The guide contains instructions on using Maven to create a base project that contains the necessary jar files, which can then be imported into Eclipse (or JDeveloper). The FileBridge demo server can be deployed to a Weblogic instance as a war file or an ear. One note is that the version of JAX-WS that the sample uses is not the same version that Weblogic 12c ships with. This is only an issue if you intend to use the Web Services binding; it does not affect the AtomPub or browser bindings. However, if you are using WLS 12c and the Web Services binding is needed, deploy the project as an ear file, with the weblogic-application.xml and weblogic.xml files below.

weblogic-application.xml

A weblogic-application.xml file is needed to deploy successfully on WLS 12c. This is because the JAX-WS version that OpenCMIS prefers to use is older than the one WLS ships with in 12c. This is not an issue in WLS 10.3.6, but it is in 12c. The workaround in 12c is to override the JAX-WS stack bundled with WLS 12c and use the one that the Apache Chemistry OpenCMIS server requires. The only reason the CMIS server needs to be an EAR is to handle this; otherwise, in WLS 10.3.6, it can be deployed as a WAR and a weblogic-application.xml file is not needed.

 

 <?xml version = '1.0' encoding = 'windows-1252'?>
<weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-application http://xmlns.oracle.com/weblogic/weblogic-application/1.5/weblogic-application.xsd"
                      xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <prefer-application-packages>
    <package-name>com.ctc.*</package-name>
    <package-name>com.sun.xml.*</package-name>
    <package-name>com.sun.istack.*</package-name>
    <package-name>com.sun.msv.datatype.*</package-name>
    <package-name>com.sun.msv.driver.*</package-name>
    <package-name>com.sun.msv.grammar.*</package-name>
    <package-name>com.sun.msv.reader.*</package-name>
    <package-name>com.sun.msv.relaxns.*</package-name>
    <package-name>com.sun.msv.scanner.*</package-name>
    <package-name>com.sun.msv.util.*</package-name>
    <package-name>com.sun.msv.verifier.*</package-name>
    <package-name>com.sun.msv.writer.*</package-name>
    <package-name>com.sun.org.apache.xml.internal.*</package-name>
    <package-name>com.sun.wsit.*</package-name>
    <package-name>javax.xml.activation.*</package-name>
    <package-name>javax.xml.annotation.*</package-name>
    <package-name>javax.xml.mail.*</package-name>
    <package-name>javax.xml.security.*</package-name>
    <package-name>javax.xml.registry.*</package-name>
    <package-name>javax.xml.rpc.*</package-name>
    <package-name>javax.xml.crypto.*</package-name>
    <package-name>org.apache.xerces.*</package-name>
    <package-name>javanet.staxutils.*</package-name>
    <package-name>jp.gr.xml.*</package-name>
    <package-name>org.codehaus.stax2.*</package-name>
    <package-name>org.glassfish.gmbal.*</package-name>
    <package-name>org.iso_relax.*</package-name>
    <package-name>org.jcp.xml.dsig.*</package-name>
    <package-name>org.jvnet.*</package-name>
    <package-name>org.relaxng.*</package-name>
  </prefer-application-packages>
</weblogic-application>

 

weblogic.xml

 

A weblogic.xml descriptor is needed to set prefer-web-inf-classes to true.

<?xml version = '1.0' encoding = 'windows-1252'?>
<weblogic-web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.5/weblogic-web-app.xsd"
                  xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <container-descriptor>
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>         
</weblogic-web-app>

 

How to test a custom CMIS Server

For testing, the Chemistry workbench is a useful tool that can be run on Linux or Windows. Download the CMIS Workbench from the Apache site. This is an application for connecting to a CMIS server and interacting with it.

http://chemistry.apache.org/java/developing/tools/dev-tools-workbench.html

Unzip the workbench and run it using workbench.bat (Windows) or workbench.sh (Linux).

 

 

Oracle JET with NodeJS


Introduction

Now that Oracle JET is released, you may be looking for a simple way to start playing with the framework, not only on the UI side but also with some mock-up REST APIs, and probably to build something more realistic. In this article I would like to show you a simple way to use Oracle JET in NodeJS with the ExpressJS framework.

Main Article

This article is a step-by-step tutorial on how to set up a NodeJS project with ExpressJS and include and use Oracle JET inside it. Luckily, there is only a small amount of custom code you need to write to make that possible. Let's get started!

First you need to install NodeJS, if you haven't done so already. I personally do this locally via NVM (Node Version Manager), as I have to run several versions of Node at the same time for different projects. If you use Mac or Linux, this tool provides really good NodeJS version management capabilities; for Windows users there are alternative solutions. Here is the NVM link on GitHub:

https://github.com/creationix/nvm

For the purpose of this example I used NodeJS v0.12.7, but you could use a newer version; it should work without problems. So here are the steps:

– Install NVM: follow the installation instructions from the official site: https://github.com/creationix/nvm

– Install NodeJS: you can now use NVM to do so, or you can just go to the NodeJS page and download and install it from there

– Install ExpressJS: this is a small Node Web Framework and probably one of the most popular for Node, and I will use it for this example

– Generate ExpressJS Project: the ExpressJS framework also delivers a generator which you will use to generate the project structure. To install the generator use the following command:

$ npm install express-generator -g

Generate the Node ExpressJS project using the following:

$ express oraclejetwithnodejs

The default project structure should now look like this:

[Screenshot: default ExpressJS project structure]

Add Oracle JET into the ExpressJS project. To do so, first download JET from the official Oracle page: http://www.oracle.com/webfolder/technetwork/jet/index.html

For my example I use the basic starter template project. You can download it from the Oracle JET download page:
http://www.oracle.com/technetwork/developer-tools/jet/downloads/index.htm

Unzip it, take the entire structure, and copy it under the public folder of the NodeJS project. Your project structure should now look like this:

[Screenshot: ExpressJS project structure with Oracle JET under /public]

In my case I just copied the static JET resources into the public folder. You could also create a sub folder and put the Oracle JET resources inside it.

Notice that I did not copy the template folder with the html resources from the Quickstart JET template project. We will do this in the next step.

Now that we have all the Oracle JET resources we need in the public folder, next we have to copy over the /templates folder and the index.html file from the Quickstart project. However, the template loading will be handled in a different way than the static content files in the public folder: the HTML partial templates will be loaded using an AJAX call. From here you basically have two possibilities to achieve this: you either use the /public folder again, or you use partials. Both methods are described below.

Option #1 Use the static /public folder

You can put the /templates folder and the index.html file from the JET Quickstart project inside the /public folder and serve them from there. This is the easy way; you only have to change the default ExpressJS project in a few places. As the default ExpressJS project comes with the Jade template engine configured, you have to remove it:

On line 15 of the app.js file, remove the view engine line:

app.set('view engine', 'jade');

On line 9 of the app.js file, remove the users router; we do not need it anymore:

var users = require('./routes/users');

Inside the /routes folder open the index.js file and change the code to this:

var express = require('express');
var router = express.Router();

router.get('/', function (req, res, next) {
    //res.sendFile('index.html', {root: './views/'});
    res.sendFile('index.html', {root: './public/'});
});

module.exports = router;

Remove the users.js file from the /routes folder; we don't need it anymore. Also, from the /views folder you can now remove all Jade templates.

If you start your node project now (npm start) you should be able to see everything loading properly.

[Screenshot: Oracle JET Quickstart served from the NodeJS project]

Option #2 Using partials

This approach requires just a few lines of code; however, it gives you more control over the templates. For example, you can additionally secure specific templates if required, or execute specific server-side code before loading templates depending on parameters. You also have the potential, if required, to generate specific HTML on the server side and transfer only that HTML to the client on the partial call. Overall it gives you more possibilities than only serving the templates as static content.

To be able to use partial loading we will create a simple partials.js file in the /routes folder. Here is the example code:

module.exports = function (basepath) {
    return {
        process: function (req, res) {
            res.sendFile('templates/' + req.params.name, {root: basepath + '/views/'});
        }
    };
}

This module basically tells the Express router that within the /views folder there is another folder called /templates, which contains the partial files that the AJAX calls want to load. Now we can initialize the module in the app.js file:

var loadPartials = require('./routes/partials.js');
var partials = loadPartials(__dirname);

// load the templates using the partials
app.get('/templates/:name', partials.process);

Now copy the templates folder under the /views folder and start the NodeJS server. You should now be able to load the templates as partials. Your folder structure should now look like this:

[Screenshot: final folder structure with the templates under /views]

 

Multiple authentication mechanism chaining in OAM


Authentication mechanism chaining

Since the inception of OAM 11g, we have been talking about authentication scheme chaining and being able to invoke multiple authentication schemes in sequence, or to invoke an authentication scheme based on some condition. This has been possible since the OAM R2PS2 release with the introduction of authentication status: you can PAUSE the authentication process to interact with the user and resume authentication once the interaction is over. However, this is all done from within one authentication module by chaining multiple authentication plug-ins instead of chaining authentication schemes. In this post I use this technique to implement a "user chooses the authentication method" use case, described below.

Use case

When a user accesses a protected resource, the user should get a drop-down box with all the supported authentication mechanisms and choose whichever is most convenient. Here is the request flow.

  1. The user accesses a protected resource.
  2. The user is redirected to the authentication mechanism chooser page, where he gets a drop-down box with all the supported authentication mechanisms. The user chooses one of the authentication mechanisms from the drop-down box and submits the choice.
  3. A custom authentication plug-in (Authentication Chooser plug-in) reads the user's choice and, depending on the authentication mechanism chosen, the user is redirected to the respective credential collector page by a credential collector plug-in. Each authentication mechanism has to invoke a different credential collector plug-in. This is accomplished by authentication plug-in orchestration in the authentication module. For example, if the user chooses FORM based login, then the custom plug-in returns SUCCESS, otherwise it returns FAILURE. Based on SUCCESS and FAILURE, you can orchestrate the respective credential collector plug-in. This works fine when there are just two authentication mechanisms supported. If there are more, then FAILURE has to invoke another custom authentication plug-in. You will need N-1 custom authentication plug-ins ('N' being the number of supported authentication mechanisms) to redirect users to the respective credential collector plug-ins.
  4. The user is now challenged for the chosen credentials by the credential collector plug-in. When the user is challenged for credentials, the authentication context status is set to PAUSE.
  5. When the user submits credentials with the authentication context (OAM_REQ cookie), OAM resumes authentication from the next plug-in in the sequence. The next plug-in in the sequence has to process those credentials.
  6. If those credentials are correct, the credentials processing plug-in returns SUCCESS and the user is redirected to the protected resource that the user intended to access.

There are two aspects of the implementation that differ from a plain FORM or Kerberos authentication process: one is the need for a custom authentication plug-in, and the second is the plug-in orchestration in the authentication module.

Authentication plug-in orchestration

Here are all the steps involved in the authentication sequence, along with a description, the plug-in name, and whether it is a custom or an out-of-the-box (OOB) plug-in. The diagram below the table shows the orchestration of the plug-ins.

 

Step Name | Plug-In Name (OOB/Custom) | Description
ChallengeChoice | CredentialsChallengePlugin (OOB) | Forwards the authentication request to the authentication chooser page, which shows a drop-down box with all the supported authentication mechanisms.
AuthSelector (Authentication chooser plug-in) | CustomAuthSelector (Custom) | Reads the user's choice and returns SUCCESS or FAILURE as described in the previous section.
ChallengeForCreds | CredentialsChallengePlugin (OOB) | Forwards the request to the login form.
ValidateUser | UserIdentificationPlugin (OOB) | UserIdentificationPlugin that searches for the user.
ValidatePassword | UserAuthenticationPlugin (OOB) | UserAuthenticationPlugin that validates the password.
ChallengeForToken | CredentialsChallengePlugin (OOB) | Invoked when the user chooses Kerberos authentication; redirects the user to the Kerberos credentials collector page.
ReadKerberos | KerberosTokenReader (Custom) | Reads the Kerberos token from the HTTP header and adds it to the OAM credentials object.
ValidateToken | KerberosTokenAuthenticator (OOB) | Reads the Kerberos token from the OAM credentials object and validates it.
KerberosUserIdentification | UserIdentificationPlugin (OOB) | UserIdentificationPlugin that searches for the logged-in user in LDAP.

 

 

[Diagram: authentication plug-in orchestration]

As I mentioned before, if there are N authentication methods to support, you will need N-1 custom authentication plug-ins to process the user choice and route to the respective credential collection URL. The example below shows authentication plug-in orchestration supporting three authentication methods using two custom authentication plug-ins, as highlighted. Another interesting observation, or guideline, is that there are three successful orchestration completions, one for each supported authentication method.

 

[Diagram: orchestration supporting three authentication methods with two custom plug-ins]

Authentication plug-ins used

Credentials challenge plug-ins

CredentialsChallengePlugin is an out-of-the-box plug-in used by the ChallengeForCreds, ChallengeForToken, and ChallengeChoice steps in the flow. The ChallengeForCreds and ChallengeChoice steps forward the request to the respective login pages, as shown in the screenshots below. However, the ChallengeForToken step redirects the request to the Kerberos challenge URI, which is:

/oam/CredCollectServlet/WNA?spnegotoken=string&challenge_url=%2Foam%2FCredCollectServlet%2FWNA&request_id={KEY_REQUEST_ID}&authn_try_count=0&locale=en_US

/oam/CredCollectServlet/WNA is the OAM out-of-the-box servlet that challenges the user for a Kerberos token.

Please note that the URI contains the request_id query parameter, whose value is {KEY_REQUEST_ID}. This is required to carry forward the same authentication context. I have to pass it along because this is an HTTP redirect request, and HTTP, being a stateless protocol, will lose that information if I do not pass it. In the case of challengeForCreds, however, because it is an HTTP forward request, the login page will get it without me passing it as a query parameter.

One other credentials collector supported by OAM is X509Certificate. Below is the URL for the X509 certificate credentials collector if you need to support that authentication mechanism as well.

/oam/CredCollectServlet/X509?challenge_url=https%3A%2F%2Foamvm.example.com%3A14101%2Foam%2FCredCollectServlet%2FX509&request_id={KEY_REQUEST_ID}&authn_try_count=0&locale=en_US

 

[Screenshot: authentication mechanism chooser page]

 

[Screenshot: Kerberos token challenge]

 

 

Authentication chooser plug-in

You have to read the authentication method chosen and return either SUCCESS or FAILURE, as shown in the sample code below. Please note that I am also reading the request_id and returning an error if I do not find it. What that also tells you is that the authentication chooser login page should read the request_id and pass it as a hidden FORM parameter, just like you do in custom FORM login pages.

[Screenshot: Authentication Chooser plug-in sample code]
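To illustrate that last point, a minimal chooser page could look like the sketch below. The form action and field names are assumptions for this example (custom OAM login pages typically post back to /oam/server/auth_cred_submit); the important part is that request_id is carried along as a hidden parameter.

<!-- Hedged sketch of an authentication chooser page; action URL and field names are assumptions -->
<form method="post" action="/oam/server/auth_cred_submit">
  <select name="authn_method">
    <option value="FORM">Username and Password</option>
    <option value="WNA">Kerberos (Windows Native Authentication)</option>
  </select>
  <!-- carry the OAM authentication context forward -->
  <input type="hidden" name="request_id" value="<%= request.getParameter("request_id") %>"/>
  <input type="submit" value="Continue"/>
</form>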

KerberosTokenAuthenticator plug-in

The KerberosTokenAuthenticator plug-in validates the Kerberos token. It is an out-of-the-box plug-in that reads the Kerberos token from the OAM credentials object and validates it using the keytab.

[Screenshot: KerberosTokenAuthenticator plug-in]

KerberosTokenReader plug-in

KerberosTokenReader is the other custom plug-in in the sequence. When the user submits a Kerberos token, the next step is to validate the token. KerberosTokenAuthenticator is an out-of-the-box plug-in that can validate the token; however, it looks for the Kerberos token in the OAM credentials object, while the user submits the token as an Authorization HTTP header. So KerberosTokenReader reads the Kerberos token from the HTTP request and adds it to the OAM credentials object, as shown below.

[Screenshot: KerberosTokenReader plug-in sample code]

Similarly, X509CredentialExtractor looks for the X509 certificate in the OAM credentials object. If you are planning to use that within the flow, you will have to write an X509CertReader plug-in that reads the certificate from the HTTP request and adds it to the OAM credentials object with "PluginConstants.KEY_CERTIFICATE" as the key. Or you can build a generic plug-in that reads the source (HTTP request), destination (OAM credentials), and object (Negotiate header) from the plug-in configuration and moves the object from the source to the destination.

User Identification Plugin

This is again an out-of-the-box plug-in that reads the user information and searches for the user in LDAP.

User Authentication Plugin

This plug-in is used for FORM based login. The plug-in validates the user password by binding to LDAP with the user DN that it gets from the previous plug-in in the sequence (UserIdentificationPlugin) and the user password.

Conclusion

There are many more such use cases that can be solved with this approach. In this case, we relied on the end user to choose which authentication mechanism (s)he wants to use. We could just as well read whether or not the user is in the domain and challenge for Kerberos, otherwise challenge for FORM login. Another use case is to check whether the user is on a mobile device or a desktop and change the authentication mechanism based on that. FIDO is gaining popularity; we can check whether the device supports FIDO authentication. If it does, then challenge for FIDO authentication, else challenge with the login FORM.

Oracle Mobile Cloud Service by Example Part 2: Create a Hello World Mobile API in 4 mins


Get started with Mobile Cloud Service (we call it MCS) in 4 mins!!!

I published this short video back in June on the A-Team youtube channel.

This is the first thing anyone should do when they work with MCS to establish connectivity

It's already had lots of views, so I thought I would show it here too.

It shows how easy it is to create a simple Hello World REST mock service using MCS.


Amazingly, it takes only a little while longer to really implement integration with your Enterprise Applications.

There are more videos planned with end to end implementations.

Enjoy

If you found it interesting and want to see more Oracle A-Team videos, subscribe to the channel (click the icon in the bottom right corner of the video).

Optimize Oracle JET


Introduction

In my last article, I showed you how to run a Node project using Oracle JET. In this post I will use that project as a basis and talk about another interesting topic: optimization. There are many aspects to how HTML-based client-server applications should be optimized, but it mostly comes down to two key points: minimize the number of requests, and compress the server output to the client. In this article I will talk about how to achieve this with Oracle JET.

Main Article

Today on the Internet there are already a lot of articles about how to optimize web-based applications. Oracle JET also provides a topic on this in the development guide, which you can follow here:

http://docs.oracle.com/middleware/jet112/jet/developer/GUID-3601ED20-4ECE-462C-8020-A55C86854142.htm#JETDG278

Basically, all the best practices you know about optimizing client-side HTML-based applications apply to Oracle JET as well. I would like to pick my top 3, where you can get the best results:

#1 Reduce the number of HTTP requests (aka "The best HTTP request is the one you don't have to do!")

#2 Compress the output – gzip the content output to the client to reduce the bandwidth usage for faster load

#3 Use client/browser cache

The first two points are, in our experience so far, the most important. Even if you don't use the browser cache, if you reduce the number of resources required to render the page and compress the loaded content to a minimum, your page will load fast. In the best case you should have one request loading the page HTML markup, one CSS file request, and one JS file request. You can go even further: for example, if you know which resources you have to load for a specific page hit, merge the CSS and JS code into the HTML page so that you have only one initial request, and then partially load the rest of the resources as required.
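As a small sketch of point #3 in an Express-based setup like the one from the previous article, the static JET resources can be served with far-future cache headers. This is only an illustration; the maxAge value is an example and should be tuned to your release cycle.

// Hedged sketch: serve the static JET resources from /public with browser caching (Express 4).
var express = require('express');
var path = require('path');

var app = express();
app.use(express.static(path.join(__dirname, 'public'), { maxAge: '7d' }));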

To make an example I will use the Oracle JET Quick Start Basic project. If you load the project into the browser you will see the following page footprint (46 requests and 1.8MB of content to load):

[Screenshot: network footprint of the non-optimized Quick Start project]

The reason is that the Quick Start application is delivered in a so-called debug state: you load all modules and files separately, not minified and not compressed. This is the mode you really want during application development. In case of a code error you can debug and see exactly where the problem is, in which file and on which line the error occurred, mostly in a readable and understandable format.

Let's assume that this is my final project and I would like to deploy it on my production system. Before I do so, I have to make sure that I apply rule #1: minimize the number of requests. You can of course do this manually, however I would like to automate the process. Since Oracle JET relies on RequireJS to load its dependencies, I will use the RequireJS Optimizer for that purpose.

http://requirejs.org/docs/optimization.html

The RequireJS Optimizer is a tool which allows you to minify, uglify, and merge modules into a single file. It understands the RequireJS project structure and runs on Node, on Java with Rhino and Nashorn, or in the browser. Since I set up a Node project, I will run it as a Node module. For how to install it, follow the link shared above.

After you have installed the optimizer, you have to create a build file, which I called build.js (you can select any name you like). This file contains the settings that tell the RequireJS Optimizer how to merge the files. For completeness, here is the complete build file:

({
  insertRequire: false,
  baseUrl: 'public/js',
  mainConfigFile: 'public/js/main.js',
  out: 'public/js/main.min.js',
  findNestedDependencies: true,
  //optimize: 'none',
  name: "main",
  paths: {
    'knockout': 'libs/knockout/knockout-3.3.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'ojs': 'libs/oj/v1.1.2/min',
    'ojL10n': 'libs/oj/v1.1.2/ojL10n',
    'ojtranslations': 'libs/oj/v1.1.2/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text'
  },
  shim: {
    jquery: {
      exports: '$'
    }
  },
  include: [
    //'modules/app',
    'modules/footer',
    'modules/graphics',
    'modules/header',
    'modules/home',
    'modules/library',
    'modules/people',
    'modules/performance',

    'ojs/ojcore',
        'knockout',
        'jquery',
        'ojs/ojrouter',
        'ojs/ojknockout',
        'ojs/ojbutton',
        'ojs/ojtoolbar',
        'ojs/ojmenu',
        'ojs/ojmodule',

    'text!templates/compContent.tmpl.html',
    //'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html',
    'text!templates/footer.tmpl.html',
    'text!templates/graphics.tmpl.html',
    'text!templates/header.tmpl.html',
    'text!templates/home.tmpl.html',
    'text!templates/library.tmpl.html',
    'text!templates/navContent.tmpl.html',
    'text!templates/people.tmpl.html',
    'text!templates/performance.tmpl.html'

  ],
  bundles: {
      "main.min": []
  },

})

For complete documentation about what all these settings mean, follow the RequireJS Optimizer link I shared above. Let me explain a few of the options. In our project we have one main module, the main.js file, which loads all dependencies at once. Because we build and know those dependencies, we don't want to load all those modules separately in production, but merge them into one file. For that purpose we use include to tell the optimizer which module content should be merged into the main module file. We also load all the templates: if you look at the non-optimized screen above, you will see that, the way the project is currently developed, it loads all those templates separately on the initial load. A better option would be to load only the currently needed template and then partially load the rest of the templates when required; however, this would require code changes, which I could cover in another article. Our main purpose now is to get the best out of the current application. The new merged file will be called main.min.js and will contain all the dependencies required for the initial load that you saw in the non-optimized run.

Notice also the usage of bundles. Bundles allow you to configure the location of a specific package within the merged modules. For example, now that we merge the header.js file into the main.min.js file, if another module later requires the same header.js we just have to specify that it is in main.min.js, so that no new load of a JS file from the server is triggered if this module is already loaded somewhere.

By the way, the reason that bundles is an empty array in the build file is due to a bug in RequireJS, which will throw an error during the build process if you use bundles in the main file.

The next step is to change the main.js file to use bundles. As mentioned above, we need to tell the main module where to find its dependencies. For that purpose, change the requirejs.config in the main.js file to the following:

requirejs.config({
  bundles: {
    'main.min': [
      //'modules/app',
      'modules/footer',
      'modules/graphics',
      'modules/header',
      'modules/home',
      'modules/library',
      'modules/people',
      'modules/performance',

          'ojs/ojcore',
          'knockout',
          'jquery',
          'ojs/ojrouter',
          'ojs/ojknockout',
          'ojs/ojbutton',
          'ojs/ojtoolbar',
          'ojs/ojmenu',
          'ojs/ojmodule',

      
      'text!templates/compContent.tmpl.html',
      //'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html',
      'text!templates/footer.tmpl.html',
      'text!templates/graphics.tmpl.html',
      'text!templates/header.tmpl.html',
      'text!templates/home.tmpl.html',
      'text!templates/library.tmpl.html',
      'text!templates/navContent.tmpl.html',
      'text!templates/people.tmpl.html',
      'text!templates/performance.tmpl.html'
    ]
  },
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.3.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'ojs': 'libs/oj/v1.1.2/min',
    'ojL10n': 'libs/oj/v1.1.2/ojL10n',
    'ojtranslations': 'libs/oj/v1.1.2/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
    /*,'crossroads': {
        deps: ['signals'],
        exports: 'crossroads'
    }*/
  },
  // This section configures the i18n plugin. It is merging the Oracle JET built-in translation
  // resources with a custom translation file.
  // Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
  // a path that is relative to the location of this main.js file.
  config: {
    ojL10n: {
      merge: {
        //'ojtranslations/nls/ojtranslations': 'resources/nls/menu'
      }
    }
  }
});

We now add the bundles configuration, which says that the specified dependencies can be found within the main.min.js file. You also have to change the path to the templates, to align with the path from the bundles configuration:

oj.ModuleBinding.defaults.viewPath = 'text!templates/';

Now that the configuration is ready we can run the optimizer and generate the new optimized file.

To run the RequireJS Optimizer, execute the following line within the project folder:
r.js -o build.js
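If the r.js command is not available on your system, the optimizer can be installed globally through npm (assuming a Node-based setup as in this article):

$ npm install -g requirejs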

This should produce the following output:

Tracing dependencies for: main
Uglifying file: /Users/lypelov/development/JET/oraclejetonnodejs/public/js/main.min.js

/Users/lypelov/development/JET/oraclejetonnodejs/public/js/main.min.js
----------------
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/ojL10n.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/resources/nls/ojtranslations.js
ojL10n!ojtranslations/nls/ojtranslations
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojcore.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/knockout/knockout-3.3.0.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jquery-2.1.3.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/js-signals/signals.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/es6-promise/promise-1.0.0.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojrouter.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojknockout.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/core.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/widget.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojmessaging.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojcomponentcore.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojbutton.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojtoolbar.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/position.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojpopupcore.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojmenu.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojmodule.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/main.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/footer.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/graphics.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojlistview.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojnavigationlist.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/hammer/hammer-2.0.4.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojjquery-hammer.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojoffcanvas.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojdatacollection-common.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/mouse.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/draggable.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojdialog.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/header.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/home.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/library.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/people.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/performance.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/require/text.js
text!templates/compContent.tmpl.html
text!templates/footer.tmpl.html
text!templates/graphics.tmpl.html
text!templates/header.tmpl.html
text!templates/home.tmpl.html
text!templates/library.tmpl.html
text!templates/navContent.tmpl.html
text!templates/people.tmpl.html
text!templates/performance.tmpl.html

 

The new main.min.js file is ready and we can use it in our index.html page. To do so, change the line that includes the main module to this:

<script data-main="js/main.min" src="js/libs/require/require.js"></script>

If you run the project in the browser again, you should now see a completely different picture:

oraclejetoptimized project

We now have only 9 requests and 721K of content to load. If we take one simple step in our Node project and enable compression on line 22 of the app.js file:

// setup compression
app.use(compression());

…the page footprint will look even better, at only 190K:

oraclejetoptimizedcompression
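For context, a minimal sketch of the surrounding app.js setup is shown below. It assumes the project is an Express application using the compression middleware, which is what the app.use(compression()) call above suggests; adjust the port and folder names to your own project:

var express = require('express');
var compression = require('compression');

var app = express();

// register compression before the static handler so JS and CSS responses are gzipped
app.use(compression());

// serve the Oracle JET files (js, css, index.html) from the public folder
app.use(express.static('public'));

app.listen(3000);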

If you look at the remaining 9 requests you will realize that we can reduce them even further. For example, you can merge the 3 CSS files into one, and replace the pictures with CSS sprites. The optimizer also has an option to include the require.js file in the module file, so all of this could reduce the number of requests to 3.

Note that there is one template called demoAppHeaderOffCanvasTemplate.tmpl.html which we did not include in the main.min.js file. This is because of an issue with the template: it requires a specific DIV to be loaded, otherwise the left side menu won’t work on small screen devices. I will look later to see if there is a way to fix that problem.

 

Optimize Oracle JET – Continue


Introduction

There seems to be a lot of interest in my previous article about Oracle JET optimization techniques. In this second post of the same journey I would like to continue with the optimization. As I mentioned in the previous article, it is possible, by using the Require JS Optimizer, to reduce the requests to only one CSS file and one JS file for the project. In the previous article I used the Oracle JET Getting Started project and optimized it down to 9 requests. I will now continue with the same project and optimizations and reduce the JET dependencies so that they load with only 2 requests.

Main Article

I will continue from the previous state; just to remind you, with the optimizations applied our project now looks like this:

oraclejetoptimizedcompression

We have only 9 requests, however, as you can see, there is more we can do here. We will go ahead and merge all CSS files into one. We will also merge the template demoAppHeaderOffCanvasTemplate.tmpl.html into the main.min.js file, as well as the require.js file. In the end we will have to load only one JS and one CSS file to run the project. I will achieve all of this by continuing to use the Require JS Optimizer and only extending the script from my previous blog post.

Note that the approach here is not suitable for all cases. If you have a big Oracle JET project, you may still want only one JS file for the very first load, but this file does not necessarily have to contain all modules. You can separate the project into sub-modules and load them partially, as sketched below. For example, imagine that the People tab has complicated functionality which requires more JavaScript code. You can merge this code into a separate people.min.js file which you load once the user clicks on the People tab. I will go into detail on how to make that possible in a separate article.
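A minimal sketch of that idea follows; the people.min bundle name comes from the hypothetical people.min.js file mentioned above, and its contents are assumptions for illustration only:

// declare a second bundle so RequireJS downloads people.min.js only when
// one of the modules it contains is actually requested
requirejs.config({
  bundles: {
    'people.min': ['modules/people', 'text!templates/people.tmpl.html']
  }
});

// later, for example when the user clicks the People tab,
// load the module on demand
require(['modules/people'], function (people) {
  // people.min.js is fetched only at this point
});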

For completeness I will again share the complete build file here. Note that the file is now renamed to build-js.js, because we will have a second, separate build file for the CSS files.

The complete modified build-js.js file:

({
  insertRequire: false,
  baseUrl: 'public/js',
  mainConfigFile: 'public/js/main.js',
  out: 'public/js/main.min.js',
  findNestedDependencies: true,
  //optimize: 'none',
  optimize: "uglify2",
  name: "main",
  paths: {
    requireLib: 'libs/require/require',
    'knockout': 'libs/knockout/knockout-3.3.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'ojs': 'libs/oj/v1.1.2/min',
    'ojL10n': 'libs/oj/v1.1.2/ojL10n',
    'ojtranslations': 'libs/oj/v1.1.2/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text'
  },
  shim: {
    jquery: {
      exports: '$'
    }
  },
  include: [
    'requireLib',
    //'modules/app',
    'modules/footer',
    'modules/graphics',
    'modules/header',
    'modules/home',
    'modules/library',
    'modules/people',
    'modules/performance',

    'ojs/ojcore',
        'knockout',
        'jquery',
        'ojs/ojrouter',
        'ojs/ojknockout',
        'ojs/ojbutton',
        'ojs/ojtoolbar',
        'ojs/ojmenu',
        'ojs/ojmodule',

    'text!templates/compContent.tmpl.html',
    'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html',
    'text!templates/footer.tmpl.html',
    'text!templates/graphics.tmpl.html',
    'text!templates/header.tmpl.html',
    'text!templates/home.tmpl.html',
    'text!templates/library.tmpl.html',
    'text!templates/navContent.tmpl.html',
    'text!templates/people.tmpl.html',
    'text!templates/performance.tmpl.html',

  ],
  bundles: {
      "main.min": []
  },
})

The notable changes here are the following:

  • requireLib: 'libs/require/require' – merges the require.js file into the main.min.js file, so we don’t have to load it separately, which saves a request.
  • within the include section, we now have 'requireLib', to indicate that the requirejs dependency will be included and should be loaded from the same location.
  • we now include the template 'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html' in the main.min.js file as well

 

We also need to change the main.js file to declare the new bundle contents and to tell RequireJS where to find the dependencies. We have to change the requirejs.config bundles section:

requirejs.config({
  bundles: {
    'main.min': [
      'require',
      //'modules/app',
      'modules/footer',
      'modules/graphics',
      'modules/header',
      'modules/home',
      'modules/library',
      'modules/people',
      'modules/performance',

          'ojs/ojcore',
          'knockout',
          'jquery',
          'ojs/ojrouter',
          'ojs/ojknockout',
          'ojs/ojbutton',
          'ojs/ojtoolbar',
          'ojs/ojmenu',
          'ojs/ojmodule',

      'text!templates/compContent.tmpl.html',
      'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html',
      'text!templates/footer.tmpl.html',
      'text!templates/graphics.tmpl.html',
      'text!templates/header.tmpl.html',
      'text!templates/home.tmpl.html',
      'text!templates/library.tmpl.html',
      'text!templates/navContent.tmpl.html',
      'text!templates/people.tmpl.html',
      'text!templates/performance.tmpl.html',

    ]
  },
  // Path mappings for the logical module names
  paths: {
    'knockout': 'libs/knockout/knockout-3.3.0',
    'jquery': 'libs/jquery/jquery-2.1.3.min',
    'jqueryui-amd': 'libs/jquery/jqueryui-amd-1.11.4.min',
    'promise': 'libs/es6-promise/promise-1.0.0.min',
    'hammerjs': 'libs/hammer/hammer-2.0.4.min',
    'ojdnd': 'libs/dnd-polyfill/dnd-polyfill-1.0.0.min',
    'ojs': 'libs/oj/v1.1.2/min',
    'ojL10n': 'libs/oj/v1.1.2/ojL10n',
    'ojtranslations': 'libs/oj/v1.1.2/resources',
    'signals': 'libs/js-signals/signals.min',
    'text': 'libs/require/text'
  },
  // Shim configurations for modules that do not expose AMD
  shim: {
    'jquery': {
      exports: ['jQuery', '$']
    }
    /*,'crossroads': {
        deps: ['signals'],
        exports: 'crossroads'
    }*/
  },
  // This section configures the i18n plugin. It is merging the Oracle JET built-in translation
  // resources with a custom translation file.
  // Any resource file added, must be placed under a directory named "nls". You can use a path mapping or you can define
  // a path that is relative to the location of this main.js file.
  config: {
    ojL10n: {
      merge: {
        //'ojtranslations/nls/ojtranslations': 'resources/nls/menu'
      }
    }
  }
});

We made two changes here:

  • we added require, to indicate that requirejs is included in the main.min.js
  • we added the 'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html' template

As mentioned, we will also have a second build file, which I called build-css.js. We will use it to automatically merge the 3 CSS files. The build file has the following content:

({
  cssIn: "public/css/override.css",
  out: "public/css/override.min.css",
  optimizeCss: "standard",
})

 

To make the build-css.js file work, there is one additional change required within the override.css file from the Oracle JET Quick Start project. We have to indicate that we want to import other files, so that the Require JS Optimizer does this job for us. Open the override.css file and add the following at the top:

@import url('libs/oj/v1.1.2/alta/oj-alta-min.css');
@import url('demo-alta-patterns-min.css');

You should add the imports at the very top of the file, before any custom CSS class, to make sure that the two CSS files are concatenated before your CSS overrides.

The two builds are ready to execute. To run the build-js.js file, execute from the command line: r.js -o build-js.js, and you should see the following console output:

Tracing dependencies for: main
Uglify2 file: /Users/lypelov/development/JET/oraclejetonnodejs/public/js/main.min.js

/Users/lypelov/development/JET/oraclejetonnodejs/public/js/main.min.js
----------------
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/require/require.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/ojL10n.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/resources/nls/ojtranslations.js
ojL10n!ojtranslations/nls/ojtranslations
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojcore.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/knockout/knockout-3.3.0.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jquery-2.1.3.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/js-signals/signals.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/es6-promise/promise-1.0.0.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojrouter.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojknockout.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/core.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/widget.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojmessaging.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojcomponentcore.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojbutton.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojtoolbar.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/position.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojpopupcore.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojmenu.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojmodule.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/main.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/footer.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/graphics.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojlistview.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojnavigationlist.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/hammer/hammer-2.0.4.min.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojjquery-hammer.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojoffcanvas.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojdatacollection-common.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/mouse.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/jquery/jqueryui-amd-1.11.4.min/draggable.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/oj/v1.1.2/min/ojdialog.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/header.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/home.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/library.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/service.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/people.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/modules/performance.js
/Users/lypelov/development/JET/oraclejetonnodejs/public/js/libs/require/text.js
text!templates/compContent.tmpl.html
text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html
text!templates/footer.tmpl.html
text!templates/graphics.tmpl.html
text!templates/header.tmpl.html
text!templates/home.tmpl.html
text!templates/library.tmpl.html
text!templates/navContent.tmpl.html
text!templates/people.tmpl.html
text!templates/performance.tmpl.html

Then run the build-css.js file: r.js -o build-css.js, and you should see the following console output:

/Users/lypelov/development/JET/oraclejetonnodejs/public/css/override.min.css
----------------
/Users/lypelov/development/JET/oraclejetonnodejs/public/css/libs/oj/v1.1.2/alta/oj-alta-min.css
/Users/lypelov/development/JET/oraclejetonnodejs/public/css/demo-alta-patterns-min.css
/Users/lypelov/development/JET/oraclejetonnodejs/public/css/override.css

To be able to use the two new files we have to make a few changes in the Oracle JET Quick Start project. We have to change the index.html file to use the new minified files. First remove the CSS loads and replace them with a single one that loads only the override.min.css.

<link rel="stylesheet" href="css/override.min.css" type="text/css"/>

Also change the way the main.min.js file is loaded to only this:

<script src="js/main.min.js"></script>

We also merged 'text!templates/demoAppHeaderOffCanvasTemplate.tmpl.html' into the main.min.js file; however, if you run the project now you may get the following error message:

oraclejetleftsidemenuerror

You get the error message because the template expects to load into an HTML element with the ID app_nav_template, but it is not able to find it. The element is inside the header template, which is now injected from the main.min.js file. To fix the issue we will cut the following code from the header.tmpl.html file:

<!-- template for rendering app nav data -->
            <script type="text/html" id="app_nav_template">
                <li>
                    <a href="#">
                        <span
                            data-bind="css: $data['iconClass']">
                        </span>
                        <!-- ko text: $data['name'] --> <!--/ko-->
                    </a>
                </li>
            </script>

… and paste it into the index.html file after the header tag, like this:

            <!-- Header section which contains the Global Branding, Global Navigation,
            ** and Application Navigation code. Template is located in /templates/header.tmpl.html
            -->
            <header id="headerWrapper" role="banner" data-bind="ojModule: { name: 'header' }"></header>

            <!-- template for rendering app nav data -->
            <script type="text/html" id="app_nav_template">
                <li>
                    <a href="#">
                        <span
                            data-bind="css: $data['iconClass']">
                        </span>
                        <!-- ko text: $data['name'] --> <!--/ko-->
                    </a>
                </li>
            </script>

Now the project is ready. Open it in the browser and you should see the following:

oraclejetoptimizeonejs

That is now a totally different picture. To load the project we now have only one CSS file, override.min.css, and only one JS file, main.min.js. From here a “hard core optimizer” could go even one step further: after optimizing the CSS and the JS file, they could inject the code into the HTML markup returned from the initial page request (the very first request in the console above) to reduce the server round trips to only one. Also, the two images loaded could be reduced to one request by using CSS sprites.

MDC Switch – Configuring Multi-Data Center Types


INTRODUCTION

This post discusses the steps required to convert a “master” data center to a “clone” data center and vice versa.

If you are not familiar with Multi-Data Center (MDC) implementation and Automated Policy Synchronization (APS) please read the following links:

http://www.ateam-oracle.com/multi-data-center-implemenation-in-oracle-access-manager/

http://www.ateam-oracle.com/automated-policy-synchronization-aps-for-oam-cloned-environment/

MAIN ARTICLE

Use Case: Customer wants to convert an existing data center type from “master” to “clone” and the “clone” data center to a “master”.

One scenario for this use case is where the customer is adding a new data center and wants to designate the data center as the “master” and set the old “master” site as “clone”.  Other scenarios include configuring disaster recovery (DR) sites for example.

 

Implementation

I like to automate things where applicable, so after each step below I provide a simple bash script via a GitLab link that you can use as a baseline for your environment. Scripting the process is helpful for quickly switching the data center type. There are some caveats that I will discuss after each step as well.

NOTE: In order for the MDC switch to work, both the “master” and “clone” Weblogic Administration servers must be up and running.

The steps provided below were tested on Oracle Access Manager (OAM) version 11.1.2.3.3 (PS3/BP03) and WebLogic 10.3.6.

  • Step 1) Disable/Delete existing APS agreement.

a) First get the list of agreements:

curl -k -u weblogic:Welcome1 'http://dc1.ateam.com:7001/oam/services/rest/_replication/agreements'

b) Using the replication agreement ID above, disable the consumer agreement:

curl -u weblogic:Welcome1 -H 'Content-Type: application/json' -X PUT 'http://dc1.ateam.com:7001/oam/services/rest/_replication/201409231329353668' -d '{"enabled":"false","replicaType":"CONSUMER"}'

c) Disable supplier

curl -u weblogic:Welcome1 -H 'Content-Type: application/json' -X PUT 'http://dc1.ateam.com:7001/oam/services/rest/_replication/201409231329353668' -d '{"enabled":"false","replicaType":"SUPPLIER"}'

d) Delete both the consumer and supplier agreements

curl -u weblogic:Welcome1 -H 'Content-Type: application/json' -X DELETE 'http://dc1.ateam.com:7001/oam/services/rest/_replication/201409231329353668'

Here is a link to the script:

https://gitlab.com/vkalra/OAMScripts/blob/master/removeAPS.sh

First, notice that the curl command requires a user name and password. The user here must be part of the ‘system store’. By default the system store is the WebLogic embedded LDAP. For a better understanding, check out this link:

http://www.ateam-oracle.com/oam-11g-configuring-data-sources/

Now, if you changed the system store after the replication agreement was created, you may have issues removing the agreement. You have two options here: you can revert the system store configuration and try the script again, or you can delete the replication agreement directly from the database.

Below is a screen shot of SQL Developer that shows the value of the replication agreement in the database. From SQL Developer you can delete the replication agreement. Make sure that you remove both entries from the master and clone database. The replication agreement can be found under the AM_REPLICATION_SETTINGS table.

 

OEL6.6 DC1 (APS working with PS3 base and external LDAP) [Running] - Oracle VM VirtualBox_034

  • Step 2) Modify Clone DC

Run wlst script:

connect() (enter parameters to connect)

domainRuntime()

a) Set the write enabled flag to 'true'

setMultiDataCenterWrite(WriteEnabledFlag="true")

b) Set the data center type to "Master"

setMultiDataCenterType(DataCenterType="Master")

  • Step 3) Modify Master DC

Run wlst script:

connect() (enter parameters to connect)

domainRuntime()

a) Set the data center type to "Clone"

setMultiDataCenterType(DataCenterType="Clone")

b) Set the write enabled flag to 'false'

setMultiDataCenterWrite(WriteEnabledFlag="false")

Here is a link to the script:

https://gitlab.com/vkalra/OAMScripts/blob/master/mdcSwitch.py

 

  • Step 4) Setup a new replication agreement

a) Make sure replication is enabled

curl -u weblogic:Welcome1 'https://dc2.ateam.com:7001/oam/services/rest/_replication/hello'

b) Create the agreement and note the agreement ID

Make sure the host is pointing to the new master site (dc2)

curl -u weblogic:Welcome1 -H 'Content-Type: application/json' -X POST 'http://dc2.ateam.com:7001/oam/services/rest/_replication/setup' -d '{"name":"DC22DC1","source":"dc2East","target":"dc1West","documentType":"ENTITY"}'

{"enabled":"true","identifier":"201510160547491349","ok":"true","pollInterval":"900","startingSequenceNumber":"1761","state":"READY"}

c) Shorten the poll interval for testing purposes

curl -u weblogic:Welcome1 -H 'Content-Type: application/json' -X PUT 'http://dc2.ateam.com:7001/oam/services/rest/_replication/201409040157218184' -d '{"pollInterval":"60","replicaType":"consumer"}'

Here is the link to the script:

https://gitlab.com/vkalra/OAMScripts/blob/master/createAPS.sh

Once this step is complete, you will need to restart the clone admin and managed servers as per the documentation:

Once replication agreement is created successfully, the clone/consumer will start polling for changes from next restart of clone AdminServer onwards. The default poll interval for changes is ‘900’ seconds or 15 mins. The poll interval can be changed by executing an edit replication agreement command.

Done!


Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article outlines how to integrate Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS). Two primary data movement patterns are described:

(a) JSON data is retrieved from an external cloud source using REST Web Services.

(b) Data is inserted into the Schema Service database with Apex PL/SQL functions.

The above topics have previously been discussed in past A-Team BICS blogs. What makes this article unique is that it retrieves and displays the results in real time. These results are stored temporarily in the database while viewed via a Dashboard. This data could then be permanently archived if desired.

The eighteen steps below have been broken into two parts. Part A follows a similar pattern to that covered in “Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 2”. Part B incorporates ideas covered in “Executing a Stored Procedure from Oracle Business Intelligence Cloud Service” and “Using the Oracle Business Intelligence Cloud Service (BICS) REST API to Clear Cached Data”.


PART A – Retrieve & Load Data


1)    Review REST API Documentation

2)    Build REST Search Endpoint URL

3)    Run apex_web_service.make_rest_request

4)    Formulate JSON Path

5)    Create BICS Tables

6)    Parse JSON

7)    Execute PL/SQL

8)    Review Results


PART B – Trigger Results Real-Time


9)    Add Clear BI Server Cache Logic

10)  Create Function – to execute Stored Procedure

11) Test Function – that executes Stored Procedure

12)  Create Dummy Table (to reference the EVALUATE function)

13)  Create Repository Variables

14)  Create Expression in Data Modeler (that references EVALUATE function)

15)  Create Analysis – that executes EVALUATE function

16)  Create Analysis – to display results

17)  Create Dashboard Prompt

18)  Create Dashboard

Main Article

 

Part A – Retrieve & Load Data

 

Step 1 – Review REST API Documentation

 

Begin by reviewing the REST APIs for Oracle Social Data and Insight Cloud Service documentation. This article only covers using the /v2/search endpoint. The /v2/search endpoint is used to retrieve a list of companies or contacts that match given criteria. There are many other endpoints available in the API that may be useful for various integration scenarios.

Step 2 – Build REST Search Endpoint URL

 

Access Oracle Social Data and Insight Cloud Service from Oracle Cloud My Services.

The Service REST Endpoint (Company and Contact Data API) will be listed.

For Example: https://datatrial1234-IdentityDomain.data.us9.oraclecloud.com/data/api

Append v2/search to the URL.

For Example: https://datatrial1234-IdentityDomain.data.us9.oraclecloud.com/data/api/v2/search

Step 3 – Run apex_web_service.make_rest_request

 

1)    Open SQL Workshop from Oracle Application Express

Snap2

2)    Launch SQL Commands

Snap3

3)    Use the code snippet below as a starting point to build your PL/SQL.

Run the final PL/SQL in the SQL Commands Window.

Replace the URL, username, password, identity domain, and body parameters.

For a text version of the code snippet click here.

For detailed information on all body parameters available click here.

DECLARE
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'YourURL/data/api/v2/search';
l_body CLOB;
BEGIN
l_body := '{"objectType":"People","limit":"100","filterFields":
[{"name":"company.gl_ult_dun","value":"123456789"},
{"name":"person.management_level","value":"0"},
{"name":"person.department","value":"3"}],"returnFields":
["company.gl_ult_dun","person.parent_duns","person.first_name",
"person.last_name","person.department","person.management_level",
"person.gender_code","person.title","person.standardized_title",
"person.age_range","person.company_phone","person.company_phone_extn",
"name.mail","person.co_offical_id"]}';
--use rest to retrieve the Data Service Cloud - Social Data
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
apex_web_service.g_request_headers(2).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(2).value := 'Identity Domain';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_username => 'Username',
p_password => 'Password',
p_body => l_body,
p_http_method => 'POST'
);
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,1));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,12001));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,24001));
END;

4)    Run the query. A subset of the JSON results should be displayed in the Results section of SQL Commands.

Additional dbms_output.put_line calls may be added should further debugging be required.

It is not necessary at this stage to view the entire result set. The key to this exercise is to prove that the URL is correct and can successfully be run through apex_web_service.make_rest_request.

5)    Currently the body parameter filterFields only accepts “value” and not “DisplayValue”; thus, it may be necessary to create dimension lookup tables. For this exercise two dimension lookup tables are used.

Look-up values may change over time and should be re-confirmed prior to table creation.

“Step 5 – Create BICS Tables” describes how to create the two lookup tables below.

Department – Lookup Values

Value    DisplayValue
0        Administration
1        Consulting
2        Education
3        Executive
4        Facilities
5        Finance
6        Fraternal Organizations
7        Government
8        Human Resources
9        Operations
10       Other
11       Purchasing
12       Religion
13       Research & Development
14       Sales & Marketing
15       Systems

Management Level – Lookup Values

Value    DisplayValue
0        C-Level
1        Vice-President
2        Director
3        Manager
4        Other

 

Step 4 – Formulate JSON Path


1)    When formulating the JSON path expression, it may be useful to use an online JSON Path Expression Tester.

There are many different free JSON tools available online. The one below is: https://jsonpath.curiousconcept.com

2)    For this exercise the below values will be exacted from the JSON.

Each path was tested in the JSON Path Expression Tester.

The attribute numbers 1-12 are associated with the order in which returnFields have been specified in the body parameter. Thus, attribute numbers may differ from the example if:

a) Fields are listed in a different sequence.

b) An alternative number or combination of fields is defined.

Value                             JSON Path Expression 
totalHits                         'totalHits'
company.gl_ult_dun                 parties[*].attributes[1].value
person.parent_duns                 parties[*].attributes[2].value
person.first_name                  parties[*].attributes[3].value
person.last_name                   parties[*].attributes[4].value
person.department                  parties[*].attributes[5].value
person.management_level            parties[*].attributes[6].value
person.gender_code                 parties[*].attributes[7].value
person.title                       parties[*].attributes[8].value
person.standardized_title          parties[*].attributes[9].value
person.age_range                   parties[*].attributes[10].value
person.company_phone               parties[*].attributes[11].value
person.company_phone_extn          parties[*].attributes[12].value
name.mail                          parties[*].email
person.co_offical_id               parties[*].id

Step 5 –  Create BICS Tables

 

1)    Open SQL Workshop from Oracle Application Express

Snap2

2)    Launch SQL Commands

Snap3

3)    Create the SOCIAL_DATA_CONTACTS table in the BICS database.

To view the SQL in plain text click here.

CREATE TABLE SOCIAL_DATA_CONTACTS(
COMPANY_DUNS_NUMBER VARCHAR2(500),
CONTACT_DUNS_NUMBER VARCHAR2(500),
FIRST_NAME VARCHAR2(500),
LAST_NAME VARCHAR2(500),
DEPARTMENT VARCHAR2(500),
MANAGEMENT_LEVEL VARCHAR2(500),
GENDER VARCHAR2(500),
JOB_TITLE VARCHAR2(500),
STANDARDIZED_TITLE VARCHAR2(500),
AGE_RANGE VARCHAR2(500),
COMPANY_PHONE VARCHAR2(500),
COMPANY_PHONE_EXT VARCHAR2(500),
EMAIL_ADDRESS VARCHAR2(500),
INDIVIDUAL_ID VARCHAR2(500));

4)    Create and populate the DEPARTMENT_PROMPT look-up table in the BICS database.

CREATE TABLE DEPARTMENT_PROMPT(DEPT_NUM VARCHAR(500), DEPT_NAME VARCHAR2(500));

INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('0','Administration');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('1','Consulting');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('2','Education');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('3','Executive');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('4','Facilities');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('5','Finance');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('6','Fraternal Organizations');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('7','Government');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('8','Human Resources');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('9','Operations');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('10','Other');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('11','Purchasing');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('12','Religion');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('13','Research & Development');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('14','Sales & Marketing');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('15','Systems');

5)    Create and populate the MANAGEMENT_LEVEL_PROMPT look-up table in the BICS database.

CREATE TABLE MANAGEMENT_LEVEL_PROMPT(ML_NUM VARCHAR(500), ML_NAME VARCHAR2(500));

INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('0','C-Level');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('1','Vice-President');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('2','Director');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('3','Manager');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('4','Other');

Step 6 – Parse JSON


For a text version of the PL/SQL snippet click here.

Replace URL, username, password, identity domain, and body parameters.

The code snippet has been highlighted in different colors, grouping the various logical components.

Blue

Rest Request that retrieves the data in JSON format as a clob.

Yellow

Lookup “Value” codes based on the user’s selection of “DisplayValue” descriptions.

Purple

Logic to handle ‘All Column Values’ when run from BICS. (This could be handled in many other ways … and is just a suggestion.)

Green

Convert JSON clob to readable list -> Parse JSON values and insert into database.

Code advice: Keep in mind that p_path is expecting a string. Therefore, it is necessary to concatenate any dynamic variables such as the LOOP / Count ‘i’ variable.

Red

Array to handle entering multiple duns numbers.

Grey

Left pad Duns numbers with zeros – as this is how they are stored in Oracle Social Data and Insight Cloud Service.

CREATE OR REPLACE PROCEDURE SP_LOAD_SOCIAL_DATA_CONTACTS(
p_company_duns varchar2
,p_department varchar2
,p_management_level varchar2
) IS
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'YourURL/data/api/v2/search';
l_body CLOB;
l_num_contacts NUMBER;
v_array apex_application_global.vc_arr2;
l_filter_fields VARCHAR2(500);
l_pad_duns VARCHAR2(9);
l_department_num VARCHAR2(100);
l_management_level_num VARCHAR2(100);
BEGIN
DELETE FROM SOCIAL_DATA_CONTACTS;
--lookup department code
IF p_department != 'All Column Values' THEN
SELECT MAX(DEPT_NUM) into l_department_num
FROM DEPARTMENT_PROMPT
WHERE DEPT_NAME = p_department;
END IF;
--lookup management level code
IF p_management_level != 'All Column Values' THEN
SELECT MAX(ML_NUM) into l_management_level_num
FROM MANAGEMENT_LEVEL_PROMPT
WHERE ML_NAME = p_management_level;
END IF;
--loop through company duns numbers
v_array := apex_util.string_to_table(p_company_duns, ',');
for j in 1..v_array.count LOOP
--pad duns numbers with zeros - as they are stored in the system this way
l_pad_duns := LPAD(v_array(j),9,'0');
--logic to handle All Column Values
IF p_department != 'All Column Values' AND p_management_level != 'All Column Values' THEN
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.department","value":"'|| l_department_num || '"},{"name":"person.management_level","value":"'|| l_management_level_num || '"}]';
ELSE
IF p_department = 'All Column Values' AND p_management_level != 'All Column Values' THEN
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.management_level","value":"'|| l_management_level_num || '"}]';
ELSE
IF p_department != 'All Column Values' AND p_management_level = 'All Column Values' THEN
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.department","value":"'|| l_department_num || '"}]';
ELSE
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"}]';
END IF;
END IF;
END IF;
--build dynamic body
l_body := '{"objectType":"People","limit":"100",' || l_filter_fields || ',"returnFields":["company.gl_ult_dun","person.parent_duns","person.first_name","person.last_name","person.department","person.management_level","person.gender_code","person.title","person.standardized_title","person.age_range","person.company_phone","person.company_phone_extn","name.mail","person.co_offical_id"]}';
--use rest to retrieve the Data Service Cloud - Social Data
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
apex_web_service.g_request_headers(2).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(2).value := 'identity domain';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_username => 'UserName',
p_password => 'Password',
p_body => l_body,
p_http_method => 'POST'
);
--parse the clob as JSON
apex_json.parse(l_ws_response_clob);
--get total hits
l_num_contacts := CAST(apex_json.get_varchar2(p_path => 'totalHits') AS NUMBER);
--loop through total hits and insert JSON data into database
IF l_num_contacts > 0 THEN
for i in 1..l_num_contacts LOOP

INSERT INTO SOCIAL_DATA_CONTACTS(COMPANY_DUNS_NUMBER, CONTACT_DUNS_NUMBER,FIRST_NAME,LAST_NAME,DEPARTMENT,MANAGEMENT_LEVEL,GENDER,JOB_TITLE,STANDARDIZED_TITLE,AGE_RANGE,COMPANY_PHONE,COMPANY_PHONE_EXT,EMAIL_ADDRESS,INDIVIDUAL_ID)
VALUES
(
v_array(j),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[2].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[3].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[4].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[5].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[6].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[7].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[8].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[9].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[10].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[11].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[12].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].email'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].id')
);
end loop; --l_num_contacts
END IF;    --greater than 0

end loop; --v_array.count
commit;
END;

Step 7 – Execute PL/SQL

 

Run the PL/SQL in Apex SQL Commands. Test various combinations. Test with duns numbers that have fewer than 9 digits.

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','All Column Values');

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','Administration','All Column Values');

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','Manager');

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','Administration','Manager');

SP_LOAD_SOCIAL_DATA_CONTACTS('1234567','All Column Values','All Column Values');
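If a call does not run on its own in SQL Commands, wrap it in an anonymous PL/SQL block; for example, the first test above can be executed as:

BEGIN
SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','All Column Values');
END;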

Step 8 – Review Results

 

Confirm data was inserted as expected.

SELECT * FROM SOCIAL_DATA_CONTACTS;

Part B – Trigger Results Real-Time

 

Oracle Social Data and Insight Cloud Service data will be retrieved by the following sequence of events.

 

a)    A BICS Consumer selects required input parameters via a Dashboard Prompt.

b)    Selected Dashboard Prompt values are passed to Request Variables.

(The Request Variable temporarily changes the state of the Repository Variable for that Session.)

c)    Session Variables (NQ_SESSION) are read by the Modeler Expression using the VALUEOF function.

d)    EVALUATE is used to call a Database Function and pass the Session Variable values to the Database Function.

e)    The Database Function calls a Stored Procedure – passing parameters from the Function to the Stored Procedure

f)    The Stored Procedure uses apex_web_service to call the Rest API and retrieve data in JSON format as a clob.

g)    The clob is parsed and results are returned and inserted into a BICS database table.

h)    Results are displayed on a BICS Analysis Request, and presented to the BICS Consumer via a Dashboard.


Step 9 – Add Clear BI Server Cache Logic


For a text version of the PL/SQL snippets in step 9-11 click here.

This step is optional depending on how cache is refreshed / recycled.

Replace BICS URL, BICS User, BICS Pwd, and BICS Identity Domain.

Add cache code after insert commit (after all inserts / updates are complete).

Repeat testing undertaken in “Step 7 – Execute PL/SQL”.

DECLARE
l_bics_response_clob CLOB;
BEGIN
--clear BICS BI Server Cache
apex_web_service.g_request_headers(1).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(1).value := 'BICS Identity Domain';
l_bics_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://BICS_URL/bimodeler/api/v1/dbcache',
p_http_method => 'DELETE',
p_username => 'BICS UserName',
p_password => 'BICS Pwd'
);
--dbms_output.put_line('Status:' || apex_web_service.g_status_code);
END;


Step 10 – Create Function – to execute Stored Procedure

 

CREATE OR REPLACE FUNCTION LOAD_SOCIAL_DATA_CONTACTS
(
p_company_duns IN VARCHAR2,
p_department IN VARCHAR2,
p_management_level IN VARCHAR2
) RETURN INTEGER
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
SP_LOAD_SOCIAL_DATA_CONTACTS(p_company_duns,p_department,p_management_level);
COMMIT;
RETURN 1;
END;

Step 11 – Test Function – that executes Stored Procedure

 

SELECT LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','All Column Values') FROM DUAL;

 

Step 12 – Create Dummy Table – to reference the EVALUATE function


For a text version of the PL/SQL snippet click here


1)    Create Table

CREATE TABLE DUMMY_REFRESH
(REFRESH_TEXT VARCHAR2(255));

2)    Insert descriptive text into table

INSERT INTO DUMMY_REFRESH (REFRESH_TEXT)
VALUES ('Hit Refresh to Update Data');

3)    Confirm insert was successful

SELECT * FROM DUMMY_REFRESH;

 

Step 13 – Create Repository Variables


Create a Repository Variable in the BICS Modeler tool for each parameter that needs to be passed to the function and stored procedure.

Snap12

Snap13

Snap14

 

Step 14 –  Create Expression in Data Modeler – that references EVALUATE function

 

Create the expression in the BICS Modeler tool using EVALUATE to call the function and pass necessary parameters to the function and stored procedure.

EVALUATE('LOAD_SOCIAL_DATA_CONTACTS(%1,%2, %3)',VALUEOF(NQ_SESSION."r_company_duns"),VALUEOF(NQ_SESSION."r_department"),VALUEOF(NQ_SESSION."r_management_level"))

 

Snap2

 

Step 15 – Create Analysis – that executes EVALUATE function


Create an Analysis and add both fields from the DUMMY_REFRESH table. Hide both fields so that nothing is returned.

 Snap4

Snap5

Snap6

Step 16 – Create Analysis – to display results

 
Add all, or just the desired, fields from the SOCIAL_DATA_CONTACTS table.

 Snap16

Step 17 – Create Dashboard Prompt


For each Prompt set the corresponding Request Variable.

*** These must exactly match the names of the repository variables created in “Step 13 – Create Repository Variables” ***

Snap9

Snap10

Snap11

Snap15

For each prompt manually add the text for ‘All Column Values’ and exclude NULLs.

Snap19

Snap20

The Dashboard also contains a workaround to deal with multiple Duns numbers. Currently VALUELISTOF is not available in BICS. Therefore, it is not possible to pass multiple values from a Prompt to a request / session variable, since VALUEOF can only handle a single value.

A suggested workaround is to put the multi-selection list into a single comma-delimited string using LISTAGG. The single string can then be read by VALUEOF, and logic in the stored procedure can loop through the resulting array.

CAST(EVALUATE_AGGR('LISTAGG(%1,%2) WITHIN GROUP (ORDER BY %1 DESC)',"DUNS_NUMBERS"."COMPANY_DUNS_NUMBER",',') AS VARCHAR(500))

 
Step 18 – Create Dashboard


There are many ways to design the Dashboard for the BICS Consumer. One suggestion is below:

The Dashboard is processed in seven clicks.

1)    Select Duns Number(s).

2)    Select Confirm Duns Number (only required for multi-select workaround described in Step 17).

3)    Select Department or run for ‘All Column Values’.

4)    Select Management Level or run for ‘All Column Values’.

5)    Click Apply. *** This is a very important step as Request Variables are only read once Apply is hit ***

6)    Click Refresh – to kick off Refresh Analysis Request (built in Step 15).

7)    Click Get Contact Details to display Contact Analysis Request (built in Step 16).

Snap7

Ensure the Refresh Report Link is made available on the Refresh Analysis Request – to allow the BICS Consumer to override cache.

Snap17

Optional: Make use of the “Link – Within the Dashboard” option on the Contact Analysis Request to create the “Get Contact Details” link.

Snap18

Further Reading


Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the Application Express API Reference Guide – APEX_JSON Package.

Click here for the REST APIs for Oracle Social Data and Insight Cloud Service guide.

Click here for more A-Team BICS Blogs.

Summary


This article provided a set of examples that leverage the APEX_WEB_SERVICE API to integrate Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS) using REST web services.

The use case shown was for BICS and Oracle Social Data and Insight Cloud Service integration. However, many of the techniques referenced could be used to integrate Oracle Social Data and Insight Cloud Service with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST and APEX_JSON examples could be easily modified to integrate BICS or standalone Oracle Apex with any other REST web service that is accessed via a URL and returns JSON data.

Techniques referenced in this blog could be useful for those building BICS REST ETL connectors and plug-ins.

Key topics covered in this article include: Oracle Business Intelligence Cloud Service (BICS), Oracle Social Data and Insight Cloud Service, Oracle Apex API, APEX_JSON, apex_web_service.make_rest_request, PL/SQL, BICS Variables (Request, Repository, Session), BICS BI Server Cache, BICS Functions (EVALUATE, VALUEOF, LISTAGG), and Cloud to Cloud integration.

Using JMS Queues with Oracle Data Integrator (ODI)


Introduction

 

Oracle customers use Java Message Service (JMS) queues with Oracle Data Integrator (ODI) to consume, transform, and publish millions of JMS messages every day.  A new Oracle white paper called “Using JMS Queues with Oracle Data Integrator (ODI)” illustrates, step-by-step, how to use JMS queues with ODI.

This white paper uses the airline industry as an example.  Figure 1 illustrates the use-case presented in this white paper:

 

Figure 1 - Using JMS Queues with ODI

Figure 1 – Using JMS Queues with ODI

 

The ODI XML technology is also covered in this white paper, since the JMS messages discussed in this article are XML documents.  Additionally, an ODI repository sample is available in Java.net.  If you would like to download a free copy of this ODI demo, go to “ODI 12.1.3 SmartExport Demo for JMS Queues,” and search for “JMS”.

 

Conclusion

 

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for Oracle Data Integrator (ODI).

 

Additional Resources

 

ODI Resources and References

ODI 12.1.3 SmartExport Demo for JMS Queues

Using Java Message Services (JMS) in Oracle Data Integrator (ODI)

Introduction to Oracle Data Integrator Driver for XML

How to Work with Java Message Services (JMS) in Oracle Data Integrator (ODI)

Creating a JMS XML Data Server in ODI 12c

Creating an Integration Project in ODI

Creating and Using Data Models and Datastores in Oracle Data Integrator

Managing Knowledge Modules in ODI

 

JMS Resources and References

Java Message Service (JMS) Application Programming Interface (API)

Java Messaging Service (JMS) Specifications

Introduction to the Java Message Service (JMS)

 

WebLogic Resources and References

Fusion Middleware Programming JMS for the WebLogic Server

Configuring and Managing JMS for Oracle WebLogic Server 12c

Oracle WebLogic Server 12c:  Configuring JMS Servers and Destinations

WebLogic 12c Connection Factory Configuration

Oracle Fusion Middleware Oracle WebLogic Server API Reference 12c Release 1 (12.1.1)

Understanding the WebLogic Thin T3 Client

 

XML Resources and References

Extensible Markup Language

XML Schema Definition

 

Other Resources and References

Using the QueueSend Program to Send a Message to a JMS Queue

XMLSpear

Custom Transports in Oracle Service Bus 12.2.1


Oracle Service Bus (or Service Bus for short) provides a very powerful set of APIs that allows experienced Java developers to create custom transport providers. This is called the Service Bus Transport SDK. By using this SDK, it is possible to create custom transport providers that handle both inbound and outbound message processing for specific protocols, without having to worry about the internal details of Service Bus.

fig-01

The objective of this post is not to explain how the Service Bus Transport SDK works, nor to provide examples of how to use it. That is covered in detail in the Service Bus documentation. Instead, we are going to cover the specifics of creating custom transport providers for Service Bus 12.2.1. Thus, this post will walk through the changes and challenges introduced by this new version, which may help people who want to port their custom transports from previous versions of Service Bus to 12.2.1.

Changes in the Classpath

No matter which IDE you commonly use to develop the code for custom transport providers, when you try to open your project you will face some annoying classpath issues. This will happen because the 12.2.1 version of Service Bus changed many of its JAR files, in an attempt to create a more consistent system library classpath. This is also true for some JAR files that belong to WebLogic, and many others from the Fusion Middleware stack.

Therefore, you will have to adapt your classpath to be able to compile your source code again, whether you compile the code from the IDE or use the Ant javac task. The XML snippet below is an Eclipse user library export with some of the most important JARs that you might need while working with Service Bus 12.2.1.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<eclipse-userlibraries version="2">

    <library name="Java EE API" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/server/lib/javax.javaee-api.jar"/>
    </library>

    <library name="Service Bus Transport SDK" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/server/lib/weblogic.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.xml.xmlbeans.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.kernel-api.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.configfwk.jar"/>
        <archive path="/oracle/mw-home/osb/lib/transports/main-transport.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.common.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.services.sources.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.services.core.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.platform.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.utils.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.jmspool.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.resources.svcaccount.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.j2ee.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.application.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.descriptor.wl.jar"/>
        <archive path="/oracle/mw-home/osb/lib/modules/oracle.servicebus.resources.service.jar"/>
        <archive path="/oracle/mw-home/wlserver/server/lib/wls-api.jar"/>
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.utils.full.jar"/>
    </library>

    <library name="WebLogic JMS API" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/server/lib/wljmsclient.jar"/>
    </library>

    <library name="WebLogic WorkManager API" systemlibrary="false">
        <archive path="/oracle/mw-home/wlserver/modules/com.bea.core.weblogic.workmanager.jar"/>
    </library>

</eclipse-userlibraries>

You might need to change your Ant script as well:

fig-02
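As a reference for the screenshot above, a compile target along the following lines should work; the target name, the source and output folders, and the exact subset of JARs are assumptions to adapt to your own transport project:

<property name="mw.home" value="/oracle/mw-home"/>

<path id="sb.transport.sdk.classpath">
  <pathelement location="${mw.home}/wlserver/server/lib/weblogic.jar"/>
  <pathelement location="${mw.home}/wlserver/modules/com.bea.core.xml.xmlbeans.jar"/>
  <pathelement location="${mw.home}/osb/lib/transports/main-transport.jar"/>
  <pathelement location="${mw.home}/osb/lib/modules/oracle.servicebus.kernel-api.jar"/>
  <pathelement location="${mw.home}/osb/lib/modules/oracle.servicebus.configfwk.jar"/>
</path>

<target name="compile">
  <!-- compile the custom transport sources against the relocated 12.2.1 JARs -->
  <javac srcdir="src" destdir="classes" source="1.8" target="1.8"
         includeantruntime="false" classpathref="sb.transport.sdk.classpath"/>
</target>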

Changes in the Kernel API

Although they are minimal, there were changes in the Service Bus Kernel API that can prevent your code from compiling. Specifically, you will see some compiler errors in the classes that handle the UI part of your custom transport provider. The first change noticed is the removal of the setRequired() method from the com.bea.wli.sb.transports.ui.TransportEditField class. It seems to have vanished in the 12.2.1 version.

fig-03

Similarly, the Kernel API removed the DISPLAY_LIST constant from the TransportUIFactory.SelectObject inner class:

fig-04

However, if you compile the source code using the Ant javac task, it works. Moreover, all the missing parts are still available in the 12.2.1 version of Service Bus and work at runtime after you install the custom transport provider. For this reason, it can be considered safe to ignore those compiler errors.

Targeting to JDK 1.8

Service Bus 12.2.1 is certified to run on top of Fusion Middleware 12.2.1, which in turn is certified to run on top of JDK 1.8. Thus, it might be a good idea to change your compiler settings to generate JDK 1.8 compliant bytecode.

fig-05

This is not a requirement of course, since the JVM allows the execution of code compiled for earlier versions. But to promote better alignment with the Fusion Middleware certification matrix, it can be considered a best practice. Besides, you might be interested in using some of the new JDK 1.8 features such as lambda expressions, pipelines and streams, and default methods.

Issues with the Service Bus Console

The Service Bus Console had its UI changed in version 12.2.1. It now uses the Oracle Alta UI, the same look-and-feel found in major Cloud offerings such as Integration Cloud Service and SOA Cloud Service. While this is good because it provides a better experience for Service Bus Console users, it brings an additional challenge when you deploy your custom transport provider.

The challenge is that even after the custom transport provider is installed, you will notice that it is not available in the Service Bus Console. At first, you might think that the custom transport provider was not installed properly, but if you strictly followed the instructions about how to deploy custom transport providers, you can be certain that it was installed correctly.

The issue here is a bug in the Service Bus Console regarding internationalization. All the transports must have an entry in a property file that maintains the descriptions of the resources created in Service Bus Console. For the transports that come with Service Bus, these entries are set. But for custom transport providers, you will have to manually create these entries in the properties file in order to have your custom transport provider working with the Service Bus Console. The instructions below will help you to solve this issue.

Firstly, locate the following file:

$MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar

Secondly, open this JAR and edit a properties file that is contained inside. The file to be edited is:

/oracle/soa/osb/console/folder/l10n/FolderBundle.properties

This file generically handles internationalized messages when no language is specified in the browser. You might need to change other files to make your custom transport provider available when specific languages (e.g., Brazilian Portuguese, Spanish, Japanese) have been set.

You will have to create two, maybe three, entries in this file. The first entry provides a generic description of your custom transport provider. If your custom transport provider has inbound support, it must have an entry for the Proxy Service description. If it has outbound support, it must have an entry for the Business Service description.

The example below shows the entries for a custom transport provider named kafka, that has both inbound and outbound support:

desc.res.gallery.kafka=The Kafka transport allows you to create proxy and business services that communicate with Apache Kafka brokers.

desc.res.gallery.kafka.proxy=The Kafka transport allows you to create proxy services that receive messages from Apache Kafka brokers.

desc.res.gallery.kafka.business=The Kafka transport allows you to create business services that route messages to Apache Kafka brokers.

Save all the changes made in the properties file, and save the file back into the JAR. You will need to restart Service Bus for this change to take effect. After restarting Service Bus, you will notice that the Service Bus Console now allows your custom transport provider to be used.
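If you prefer to do this from the command line, the sketch below shows one possible way to back up the JAR, extract the properties file, append an entry, and update the JAR in place. It assumes a Unix shell, the JDK jar tool on the PATH, and the kafka example entry shown above; adjust the entries and paths to your own transport provider.

# Hypothetical sketch: back up the JAR first
cp $MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar \
   $MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar.bak

# Extract the properties file (note: no leading slash inside the JAR)
cd /tmp
jar xf $MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar \
    oracle/soa/osb/console/folder/l10n/FolderBundle.properties

# Append the desc.res.gallery.* entries for your transport (repeat for proxy/business entries)
echo "desc.res.gallery.kafka=The Kafka transport allows you to create proxy and business services that communicate with Apache Kafka brokers." \
    >> oracle/soa/osb/console/folder/l10n/FolderBundle.properties

# Put the edited file back into the JAR, then restart Service Bus
jar uf $MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar \
    oracle/soa/osb/console/folder/l10n/FolderBundle.properties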

fig-06

The Oracle support and engineering teams are aware of this bug, and hopefully future versions of Service Bus will eliminate the need to create these entries manually. This issue has no impact if you develop Service Bus applications using Fusion Middleware JDeveloper.

Oracle HCM Cloud – Bulk Integration Automation Using SOA Cloud Service


Introduction

Oracle Human Capital Management (HCM) Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the batch integration to load and extract data to and from the HCM cloud. HCM provides the following bulk integration interfaces and tools:

HCM Data Loader (HDL)

HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion HCM. It supports important business objects belonging to key Oracle Fusion HCM products, including Oracle Fusion Global Human Resources, Compensation, Absence Management, Performance Management, Profile Management, Global Payroll, Talent and Workforce Management. For detailed information on HDL, please refer to this.

HCM Extracts

HCM Extract is an outbound integration tool that lets you select HCM data elements, extract them from the HCM database, and archive them as XML. This archived raw XML data can be converted into a desired format and delivered to recipients over supported channels.

Oracle Fusion HCM provides the above tools with comprehensive user interfaces for initiating data uploads, monitoring upload progress, and reviewing errors, with real-time information provided for both the import and load stages of upload processing. Fusion HCM provides the tools, but additional orchestration is still required, such as generating the FBL or HDL files, uploading them to WebCenter Content, and initiating the FBL or HDL web services. This post describes how to design and automate these steps leveraging Oracle Service Oriented Architecture (SOA) Cloud Service deployed on Oracle’s cloud Platform As a Service (PaaS) infrastructure. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry’s most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based components to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure. For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

These bulk integration interfaces and patterns are not applicable to Oracle Taleo.

Main Article

 

HCM Inbound Flow (HDL)

Oracle WebCenter Content (WCC) acts as the staging repository for files to be loaded and processed by HDL. WCC is part of the Fusion HCM infrastructure.

The loading process for FBL and HDL consists of the following steps:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke the “LoaderIntegrationService” or the “HCMDataLoader” to initiate the loading process.

However, the above steps assume the existence of an HDL file and do not provide a mechanism to generate an HDL file for the respective objects. In this post we will use a sample use case in which we receive the data file from the customer, transform the data to generate an HDL file, and then initiate the loading process.

The following diagram illustrates the typical orchestration of the end-to-end HDL process using SOA cloud service:

 

hcm_inbound_v1

HCM Outbound Flow (Extract)

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM, either by a user or through the Enterprise Scheduler Service (ESS)
  • The report is stored in WCC under the hcm/dataloader/export account.

 

However, the report must then be delivered to its destination depending on the use cases. The following diagram illustrates the typical end-to-end orchestration after the Extract report is generated:

hcm_outbound_v1

 

For an introduction to HCM bulk integration, including security, roles and privileges, please refer to my blog Fusion HCM Cloud – Bulk Integration Automation using Managed File Transfer (MFT) and Node.js. For an introduction to WebCenter Content integration services using SOA, please refer to my blog Fusion HCM Cloud Bulk Automation.

 

Sample Use Case

Assume that a customer periodically receives benefits data from a partner in a CSV (comma-separated value) file. This data must be converted into HDL format for the “ElementEntry” object, and the loading process must then be initiated in Fusion HCM Cloud.

This is a sample source data:

E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10
E139_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,33,Reason,Corrected one entry value,Date,2013-01-11

This is the HDL format of the ElementEntry object that needs to be generated based on the above sample file:

METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|EntryType|CreatorType
MERGE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H
MERGE|ElementEntry|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|E|H
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Amount|23
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected all entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-10
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Amount|33
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected one entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-11

SOA Cloud Service Design and Implementation

A canonical schema pattern has been implemented to design the end-to-end inbound bulk integration process, from the source data file to generating the HDL file and initiating the loading process in the HCM cloud. An XML schema of the HDL object “ElementEntry” is created, the source data is mapped to this HDL schema, and SOA activities generate the HDL file.

Having a canonical pattern automates the generation of the HDL file, and the pattern becomes a reusable asset for various interfaces. The developer or business user only needs to focus on mapping the source data to this canonical schema. All other activities, such as generating the HDL file, compressing and encrypting it, uploading it to WebCenter Content, and invoking the web services, need to be developed only once; once developed, they also become reusable assets.

Please refer to Wikipedia for the definition of Canonical Schema Pattern

These are the design considerations:

1. Convert source data file from delimited format to XML

2. Generate Canonical Schema of ElementEntry HDL Object

3. Transform source XML data to HDL canonical schema

4. Generate and compress HDL file

5. Upload a file to WebCenter Content and invoke HDL web service

 

Please refer to SOA Cloud Service Develop and Deploy for an introduction and for creating SOA applications.

SOA Composite Design

This is a composite based on above implementation principles:

hdl_composite

Convert Source Data to XML

“GetEntryData” in the above composite is a File Adapter service. It is configured to use native format builder to convert CSV data to XML format. For more information on File Adapter, refer to this. For more information on Native Format Builder, refer to this.

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the delimited format type and use the source data as a sample to generate an XML schema. Please see the following diagrams:

FileAdapterConfig

nxsd1

nxsd2_v1 nxsd3_v1 nxsd4_v1 nxsd5_v1 nxsd6_v1 nxsd7_v1

Generate XML Schema of ElementEntry HDL Object

A similar approach is used to generate ElementEntry schema. It has two main objects: ElementEntry and ElementEntryValue.

ElementEntry Schema generated using Native Format Builder

<?xml version = ‘1.0’ encoding = ‘UTF-8’?>
<xsd:schema xmlns:xsd=”http://www.w3.org/2001/XMLSchema” xmlns:nxsd=”http://xmlns.oracle.com/pcbpel/nxsd” xmlns:tns=”http://TargetNamespace.com/GetEntryHdlData” targetNamespace=”http://TargetNamespace.com/GetEntryHdlData” elementFormDefault=”qualified” attributeFormDefault=”unqualified” nxsd:version=”NXSD” nxsd:stream=”chars” nxsd:encoding=”UTF-8″>
<xsd:element name=”Root-Element”>
<xsd:complexType>
<xsd:sequence>
<xsd:element name=”Entry” minOccurs=”1″ maxOccurs=”unbounded”>
<xsd:complexType>
<xsd:sequence>
<xsd:element name=”METADATA” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”ElementEntry” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”EffectiveStartDate” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”EffectiveEndDate” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”AssignmentNumber” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”MultipleEntryCount” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”LegislativeDataGroupName” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”ElementName” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”EntryType” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”CreatorType” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”${eol}” nxsd:quotedBy=”&quot;”/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

ElementEntryValue Schema generated using Native Format Builder

<?xml version = ‘1.0’ encoding = ‘UTF-8’?>
<xsd:schema xmlns:xsd=”http://www.w3.org/2001/XMLSchema” xmlns:nxsd=”http://xmlns.oracle.com/pcbpel/nxsd” xmlns:tns=”http://TargetNamespace.com/GetEntryValueHdlData” targetNamespace=”http://TargetNamespace.com/GetEntryValueHdlData” elementFormDefault=”qualified” attributeFormDefault=”unqualified” nxsd:version=”NXSD” nxsd:stream=”chars” nxsd:encoding=”UTF-8″>
<xsd:element name=”Root-Element”>
<xsd:complexType>
<xsd:sequence>
<xsd:element name=”EntryValue” minOccurs=”1″ maxOccurs=”unbounded”>
<xsd:complexType>
<xsd:sequence>
<xsd:element name=”METADATA” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”ElementEntryValue” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”EffectiveStartDate” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”EffectiveEndDate” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”AssignmentNumber” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”MultipleEntryCount” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”LegislativeDataGroupName” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”ElementName” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”InputValueName” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”|” nxsd:quotedBy=”&quot;”/>
<xsd:element name=”ScreenEntryValue” type=”xsd:string” nxsd:style=”terminated” nxsd:terminatedBy=”${eol}” nxsd:quotedBy=”&quot;”/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

In Native Format Builder, change the “|” separator to “,” in the sample file, and then change it back to “|” for each element in the generated schema.

Transform Source XML Data to HDL Canonical Schema

Since we are using a canonical schema, all we need to do is map the source data appropriately, and Native Format Builder will convert each object into the HDL output file. The transformation can be complex depending on the source data format and the organization of the data values. In our sample use case, each row has one ElementEntry object and three ElementEntryValue sub-objects.

The following provides the organization of the data elements in a single row of the source:

Entry_Desc_v1

The main ElementEntry entries are mapped to each respective row, but the ElementEntryValue attributes are located at the end of each row. In this sample, that results in three entries per row. This can be achieved easily by splitting and transforming each row with different mappings, as follows:

<xsl:for-each select=”/ns0:Root-Element/ns0:Entry”> – map pair of columns “1” from above diagram

<xsl:for-each select=”/ns0:Root-Element/ns0:Entry”> – map pair of columns “2” from above diagram

<xsl:for-each select=”/ns0:Root-Element/ns0:Entry”> – map pair of columns “3” from above diagram

 

Metadata Attribute

The most common use case is the “merge” action, used for creating and updating objects. In this sample it is hard-coded to “merge”, but the action could be made dynamic if the source data row carries this information. The “delete” action removes the entire record and must not be combined with a “merge” instruction for the same record, as HDL cannot guarantee the order in which the instructions will be processed. It is highly recommended to correct the data rather than delete and recreate it using the “delete” action; deleted data cannot be recovered.
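Purely for illustration, and assuming the same key attributes shown in the ElementEntry METADATA line above, a delete instruction for the first sample record might look like the line below. This is a sketch only; consult the HDL documentation for the exact keys required before using it, and remember the warning above about deleted data.

DELETE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H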

 

This is the sample transformation developed in JDeveloper to split each row into three rows for the ElementEntryValue object:

<xsl:template match=”/”>
<tns:Root-Element>
<xsl:for-each select=”/ns0:Root-Element/ns0:Entry”>
<tns:Entry>
<tns:METADATA>
<xsl:value-of select=”‘MERGE'”/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select=”‘ElementEntryValue'”/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select=”ns0:C2″/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select=”ns0:C3″/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select=”ns0:C1″/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select=”ns0:C4″/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select=”ns0:C5″/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select=”ns0:C6″/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select=”ns0:C9″/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select=”ns0:C10″/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select=”/ns0:Root-Element/ns0:Entry”>
<tns:Entry>
<tns:METADATA>
<xsl:value-of select=”‘MERGE'”/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select=”‘ElementEntryValue'”/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select=”ns0:C2″/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select=”ns0:C3″/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select=”ns0:C1″/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select=”ns0:C4″/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select=”ns0:C5″/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select=”ns0:C6″/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select=”ns0:C11″/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select=”ns0:C12″/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select=”/ns0:Root-Element/ns0:Entry”>
<tns:Entry>
<tns:METADATA>
<xsl:value-of select=”‘MERGE'”/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select=”‘ElementEntryValue'”/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select=”ns0:C2″/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select=”ns0:C3″/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select=”ns0:C1″/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select=”ns0:C4″/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select=”ns0:C5″/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select=”ns0:C6″/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select=”ns0:C13″/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select=”ns0:C14″/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
</tns:Root-Element>
</xsl:template>

BPEL Design – “ElementEntryPro…”

This is the BPEL component where all the major orchestration activities are defined. In this sample, all the activities after the transformation are reusable and can be moved to a separate composite. A separate composite may be developed just for transformation and data enrichment, which in the end invokes the reusable composite to complete the loading process.

 

hdl_bpel_v2

 

 

SOA Cloud Service Instance Flows

The following diagram shows an instance flow:

ElementEntry Composite Instance

instance1

BPEL Instance Flow

audit_1

Receive Input Activity – receives the delimited data and converts it to XML format through Native Format Builder using the File Adapter

audit_2

Transformation to Canonical ElementEntry data

Canonical_entry

Transformation to Canonical ElementEntryValue data

Canonical_entryvalue

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using SOA Cloud Service. It shows how to convert a customer’s data to HDL format and then initiate the loading process. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

Index of JET articles

Improve Oracle Unified Directory 11gR2 Search Performance with Index Entry Limit


Introduction

I am always looking for great tips that deliver big value, and this one is no exception. This article helps you understand how to tweak the index property called “Index Entry Limit” to reap some dramatic ldapsearch performance improvements. I explain what this setting is about, share some of my own test results, show how to determine the correct value, and finally show how to make the change in your OUD instance. This is a tip you will definitely want to add to your OUD ninja black bag.

Main Article

What is Index Entry Limit?

At the time of this publishing, the latest official Oracle® Fusion Middleware Administrator’s Guide for Oracle Unified Directory 11g Release 2 (11.1.2), Part Number E22648-02, says the following in section 6.3, Index Entry Limit:

“The index entry limit is a configuration limit that can be used to control the maximum number of entries that is allowed to match any given index key (that is, the maximum size of an ID list). This provides a mechanism for limiting the performance impact for maintaining index keys that match a large percentage of the entries in the server. In cases where large ID lists might be required, performing an unindexed search can often be faster than one that is indexed.”

So what does this mean? Basically, it is a limit on the index records that OUD uses to maintain the mapping between an index key and its matching entries. It is a little different from the standard LDAP indexes you add for an attribute, such as equality, presence, substring, and so on. The Index Entry Limit value controls the maximum number of entries kept in the OUD index record used when it looks up data. The default value for this limit is 4000, and if an ldapsearch matches more records than this value, increasing it can yield some outstanding performance improvements. Before you jump ahead and simply raise this value to some ridiculous number, read on to see what I discovered and how to determine the right balance: the sweet spot.

 

My Test Results using the Index Entry Limit Tweak

It is important to run various tests to determine the impact of changing the Index Entry Limit value, so to start I want to explain the environment I used. I set up an OUD 11.1.2.3.0 instance populated with 1 million user entries and some dynamic groups. Each entry had several attributes populated with data; for my example test I leveraged the “st” attribute, used to define the US state a person lives in. Within the 1 million user population there were, on average, fewer than 18,000 users for each “st” (state) value. For example, NC, or North Carolina, had the largest population with 17,499 users. This is important to know; more on this later.

For my first test case I ran a simple ldapsearch with a filter of “(st=<st value>)” against 5 different states: TX, FL, RI, VA, and AK. I created a simple script that iterated the search through each state twice. I collected the total time the searches took to complete, then increased the Index Entry Limit and repeated the test. The following table is a summary of the results.

 

Index Entry Limit value        4,000 (base)   10,000   20,000   40,000   60,000
Search Time in Milliseconds    3923           3077     1401     1571     1617
Percentage Improvement         100%           127%     280%     250%     243%

 

Interestingly, you can see that at 20,000 I got the fastest ldapsearch. I ran a similar test against dynamic groups that also used a filter on states. The following table is a summary of the results; similar to the first test, I found that 20,000 gave the best results.

Index Entry Limit value        4,000 (base)   10,000   20,000   40,000   60,000
Search Time in Milliseconds    5466           4876     1434     1511     1441
Percentage Improvement         100%           112%     381%     362%     379%

 

So I learned a couple of things: 1) increasing the Index Entry Limit can deliver dramatic performance improvements, and 2) there is an optimal value that gives the best results, so simply setting the value to a very high number does not yield even better performance and can in fact have a negative impact.

To add more confirmation, I ran large load tests against my OUD instance with a similar adjustment of the Index Entry Limit value, and the results showed the same pattern: increasing the value to what I call the sweet spot led to the biggest gain. So how do you determine this so-called sweet spot value?

 

The Sweet Spot

In the previous section I determined that there is an optimal value for the Index Entry Limit to get the most performance, and in my test case I determined that value to be 20,000. However your value will most likely be completely different.

I already mentioned that OUD uses the Index Entry Limit value as a measure of how many entries it keeps in its index entry record when it looks up data. In my case the largest population for the “st” attribute was NC, North Carolina, with a total of 17,499 users. So in reality the optimum value would have been to set the Index Entry Limit to 17,499. However, there is no guarantee that the population based on the attribute “st” will max out at 17,499 forever, so I decided to add some buffer and make the value 20,000. That said, if I were to just make the value 100,000 to be sure I have a buffer, the problem is that this adds unnecessary index entries that OUD must track, and the impact becomes negative. So there is a balance that must be kept.

Generally, Directory Services data is pretty static, but the volume can change. So even though this indexing tip can reap some pretty big rewards, you must be careful about managing the number. I would even go as far as to say that if the number of entries matched by the filter changes dramatically, you should either monitor this change and tune periodically, or decide on a balanced value and hope it is not too high. Increasing the Index Entry Limit is not free: it makes OUD consume more RAM. For example, in my case, when I increased the “st” Index Entry Limit from 4,000 to 20,000, the demand on RAM increased by around 900MB. That is nothing major, but if you are tuning several attributes it can add up, so for your specific tuning I recommend running tests and monitoring the disk, RAM, and CPU changes so that you do not exhaust limited resources. The performance gains are amazing, but like a superhero, you must treat your new superpowers with great respect.
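A simple way to keep an eye on that cost is to capture the OUD process memory and database disk usage before and after the index change and compare them. The sketch below assumes a Linux host; the JVM main class used to find the process and the instance path are assumptions on my part, so adjust them to your environment.

# Hypothetical monitoring sketch: resident memory of the OUD JVM
OUD_PID=`pgrep -f org.opends.server.core.DirectoryServer | head -n1`
ps -o pid,rss,vsz,args -p $OUD_PID

# Disk usage of the instance database directory (adjust the path to your instance)
du -sh <OUD Instance>/OUD/db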

 

Determining the Index Entry Limit

Sorry for the long previous section, but I wanted to make sure you understood the ramifications of playing with this great tuning tip without writing an entire novel. Now let me show you how to determine the value. There is a MOS (My Oracle Support) article, Doc ID 1526683.1, that explains a way to determine the entries associated with this index by using a special operational attribute, “debugsearchindex”. To illustrate, let’s say you have a dynamic group based on the LDAP filter “(st=NC)”. If you include the attribute “debugsearchindex” at the end of the ldapsearch command, the output will help determine the optimum value to use.

Example LDAP Search command:

 

ldapsearch -h localhost -p 1389 -D cn=Directory\ Manager -w <passwd> \
     -x -LLL -b dc=example,dc=com "(st=NC)" debugsearchindex

dn: cn=debugsearch
debugsearchindex: filter=(st=NC)[INDEX:st.equality][COUNT:17499] scope=sub[LIMIT-EXCEEDED:1000002] final=[COUNT:17499]

 

Depending on what you use for an LDAP filter, your result will probably be different, but the key is to run an ldapsearch with the filter used by your application (or whatever is querying OUD) and include the debugsearchindex attribute. In my example the final COUNT value is 17,499, so this would be the optimum Index Entry Limit value. However, before you run off and think this is all there is to it, let me expand a bit more.

You will also need to account for any variation. For example, in my case I had 52 different states (“st”), which means I had to run this same command against each state, e.g. (st=FL), (st=CA), etc., and from the list of final outputs I determined the largest final COUNT was indeed from the state NC, with a value of 17,499. Since the count of users could vary over time, I decided to go with an Index Entry Limit value of 20,000 to provide some buffer. Typically Directory Services data is fairly static, but in this case it depended on how my population would grow over time. You will have to determine what you think is a good balance, and you may even have to make this tuning an ongoing administrative activity in order to get the most performance.

 

You can create a script to run through all the variations of your LDAP filter to come up with the largest value. Below is a simple example of a script that iterates through a list of states, or whatever attribute values you want to evaluate; feel free to modify it to your specific needs. The idea is that the last output, “MAX:”, tells you the highest count across all the values the LDAP filter was run against, which is the minimum value you would use for the Index Entry Limit. You can also increase the Index Entry Limit value a bit as a buffer depending on your situation, but increasing it too much may not add much value.

/*** Example debugsearchtest_list.txt Input File with list of values ***/
TX
FL
RI
VA
AK

 

/*** Example getMaxEntryLimit.sh Script ***/

#! /bin/bash
INPUT=debugsearchtest_list.txt
OLDIFS=$IFS
IFS=","
attr="st"
ldaphost="oud1.melander.us"
ldapport="1389"
ldapbind="cn=Directory Manager"
ldapcred="Oracle123"
ldapbase="cn=Users,dc=oracle,dc=com"

[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
max=0
while read attr_val
do
     debugcount=`ldapsearch -h $ldaphost -p $ldapport \
     -D "$ldapbind" -w "$ldapcred" -x -LLL \
     -b "$ldapbase" \
     "($attr=$attr_val)" \
     debugsearchindex | \
     grep 'final=' | sed 's/\]//g' | awk -F ":" '{print $3}'`
     echo "$attr_val - $debugcount"
     array+=($debugcount)
done < $INPUT
IFS=$OLDIFS
IFS=$'\n'
echo -ne "-------------\n"
max=`echo "${array[*]}" | sort -nr | head -n1`
echo "MAX: $max"

 

/*** Example Command-line Output ***/
TX – 17149
FL – 17210
RI – 17120
VA – 17198
AK – 17190
————-
MAX: 17210

How to Set the Index Entry Limit

There are three ways to set the index entry limit in OUD: the menu-driven command-line option, the silent command line, or the GUI in ODSM. To keep it simple I am providing the CLI commands, which work perfectly fine. All you need to change is the attribute and the index entry limit value.

STEP 1 – Create the Index if it does not exist

You cannot set the Index Entry Limit for an attribute if it is not indexed at all. By default, only a few attributes are indexed as part of a plain vanilla OUD installation. Create an index of some type, i.e. approximate, equality, extensible, ordering, presence, or substring. The following is a command to create an index; just replace the index-type value with what you need. You can always modify this later if needed.

 

# EXAMPLE COMMAND TO CREATE AN INDEX FOR ATTRIBUTE "ST"

./dsconfig create-local-db-index \
         --element-name userRoot \
         --set index-type:equality \
         --type generic \
         --index-name st \
         --hostname localhost \
         --port 4444 \
         --trustAll \
         --bindDN cn=Directory\ Manager \
         --bindPasswordFile passwd.txt \
         --no-prompt

 

STEP 2 – View the existing attribute indexing

The following command will show the existing indexes for the “st” attribute, so please modify as needed.

 

# VIEW THE EXISTING INDEX ENTRY LIMIT

./dsconfig -n get-local-db-index-prop \
          --element-name userRoot \
          --index-name st \
          --hostname localhost \
          --port 4444 \
          --trustAll \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile passwd.txt \
          --no-prompt

 

STEP 3 – Set the Index Entry Limit

This command is the important one that will set the Index Entry Limit to the value you have decided to be the optimum number.

 

# SET INDEX ENTRY LIMIT TO 20,000

./dsconfig set-local-db-index-prop \
          --element-name userRoot \
          --index-name st \
          --set index-entry-limit:20000 \
          --hostname localhost \
          --port 4444 \
          --trustAll \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile passwd.txt \
          --no-prompt

STEP 4 – Rebuild the Index for a single attribute

The following command will rebuild the “st” attribute index; modify it for your attribute. You can also replace “--index st” with “--rebuildALL” to rebuild all indexes. IMPORTANT: Running this command will take OUD offline while the indexes are rebuilt, so when running it in production make sure it is done during a scheduled off-peak time and put up the appropriate SYSTEM DOWN warnings.

 

# REBUILD INDEX FOR SINGLE ATTRIBUTE

./rebuild-index \
          --hostname localhost \
          --baseDN dc=oracle,dc=com \
          --port 4444 \
          --trustAll \
          --bindDN cn=Directory\ Manager \
          --bindPasswordFile passwd.txt \
          --index st

 

Summary

To summarize, setting the right Index Entry Limit value for an attribute used in an LDAP filter can greatly improve OUD search performance, but be careful about the value you choose. Setting it too high will increase memory and CPU usage, since OUD has to manage the larger index records. That said, used correctly this is a great tuning tip, and most people I have worked with who implemented it have been very successful in improving OUD’s performance.

Working with Oracle Unified Directory 11gR2 Transformation Framework


If you have been using Oracle’s Identity Management software for at least the last few years, you will probably be familiar with, or at least have heard of, OVD (Oracle Virtual Directory), which was originally acquired back in 2005 from a company called OctetString. OVD provides a vast number of great virtual features used to aggregate multiple backend data stores and present LDAP consumers with a single unified Directory Server. Beginning with OUD version 11.1.2.1.0, a number of virtualization features similar to what OVD provides have been added. This trend has continued through the recent release of OUD 11.1.2.3.0, where features such as joining multiple backends have also been added.

The OUD Transformation Framework can do various things, as presented in the latest documentation, “Understanding the Transformation Framework”, but to help illustrate how this feature can really add value, I recently worked with a customer where leveraging a Transformation Rule helped solve a problem. Because the existing documentation is either confusing or lacking, I decided to write this article to help you learn more about the Transformation Framework and how to make it work. An important note: at the time this article was published, in order to use the OUD virtualization features you are required to have what is called an “Oracle Directory Service Plus” license http://www.oracle.com/us/products/middleware/identity-management/oracle-directory-services/overview/index.html. If you have any questions about that, please refer to your local Oracle Sales Representative.

 

Main Article

My Example Data Transformation Use Case

Sometimes the best way to explain something is to tell a story that relates to the problem, so to help illustrate I am building a use case that leverages the Transformation Framework to solve a problem. In actuality, my use case is based on some work I did with a real customer, but I am modifying some of the details in order to protect any confidential details of the solution.

Let’s say Acme Company is using Oracle Unified Directory (OUD) and has millions of users. Each user in the OUD Directory store has an attribute named “st” for State (this is specifically an inetOrgPerson attribute used in the USA, but you can extrapolate and change it to something more relevant to what you are planning), and each user is assigned a single value. For example, if you are from California, your LDAP profile will have the attribute “st” with the abbreviated state value “CA”; for Florida you will have st=FL, and so on. Now, Acme Company is also using Oracle Access Manager (OAM) to authenticate each user and, in the authentication process, gets an incoming value that identifies what state each user is from. The problem lies in the fact that the incoming value is the full state name and not the abbreviated value. For example, when Adina authenticates, she is from Florida and the value passed to OAM is Florida, not FL. Acme Company wants to leverage the incoming value during authentication to look her up in OUD so that they can personalize her experience, but unfortunately the incoming value “Florida” does not match what is in the OUD Directory, “FL”.

 

How OUD Transformation Framework will solve the Problem

We know the incoming value will always be consistent, so all we need to do is use that value to map it to the appropriate attribute and value in the OUD Directory Server. Refer to the following sequence diagram in order to illustrate how the OUD Proxy Transformation Framework will solve the authentication process from OAM to OUD Proxy to OUD Directory.

OUD Proxy Data Transformation Sequence Diagram

Notice in the flow that the OUD Proxy gets an LDAP filter request from OAM in the form of “virtST=Florida” and the OUD Proxy Transformation Rule changes it to “st=FL”, so that the lookup succeeds and Adina is correctly identified as belonging to Florida. By the way, the attribute “virtST” does not exist in the OUD Directory; it only exists in the OUD Proxy (more on that later). Hopefully at this point you can begin to brainstorm how this feature could apply to your identity application. What I am about to cover gives the basics of how to implement the Transformation Framework, but those basics can also be applied to other OUD virtual features.

 

How to Create a OUD Proxy Data Transformation

An important aspect to working with the Transformation Framework is understanding the component dependencies. Below is a diagram to help illustrate those dependencies.

OUD Proxy Transformation Dependencies

 

As the diagram illustrates, the first component (#1) to be created is the custom schema (note that if you follow the command-line option, creating the attribute is not strictly required, though I recommend it), then (#2) the Transformation Rule, and so forth.

There are a couple approaches to creating the Transformation Rule. Option 1) is using the Command-line, and option 2) is using the GUI in ODSM (Oracle Directory Service Manager). I am going to show both approaches because there are pros and cons to both and leave it up to you to pick which option works best in your situation.

 

Command-line Approach to Create the Transformation

First things first, all instructions going forward are done in the OUD Proxy Server, NOT OUD Directory Server. This is a very important point to remember.

To stage all the commands that are going to be executed, I am listing all the associated parameter names and values; please associate the values in the commands and adjust for your environment as needed. In addition, I am also color-coding all associated values between steps to help clearly illustrate how certain components relate to each other as pointed out in the previous OUD Proxy Transformation Dependencies diagram.

OUD Proxy Hostname:       localhost
OUD Proxy Instance Port:  2389
OUD Proxy Admin Port:     2444
OUD Super User Name:      cn=Directory Manager
OUD Super User Password:  passwd.txt(password inside file)
OUD Base DN to Invoke WF: cn=users,dc=oracle,dc=com

Step 1 – Change to the OUD Proxy Instance bin directory

All command line actions will be run from the OUD Proxy Instance bin directory. It is important to know this is the bin directory in the OUD Proxy instance and not the OUD Directory Server instance, so change to the bin directory in the OUD Proxy Instance.

cd <OUD Proxy Instance>/OUD/bin

 

Step 2 – Create a custom schema attribute

Let’s create a custom attribute, which by the way is only extended to the OUD Proxy, and is not created in the OUD Directory. So there is no worry about making any changes in the main OUD Directory Server data store.

If you only use the command-line option, the custom attribute is not strictly required. However, I highly recommend creating it, because if you ever decide to modify the Transformation Rule, saving the changes will fail with a complaint that the “Client attribute” does not exist.

The following custom attribute definition should be placed in some file; e.g. customAttribute.ldif. Keep in mind the following example is for my use case, your attribute will most likely be completely different. Please refer to the OUD Administration Documentation on creating custom attributes in the OUD Proxy Server.

Example File customAttribute.ldif -
-----------------------------------------------------------
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 2.5.4.8.101
  NAME ( 'virtST' )
  DESC 'Virtual transformation attribute'
  SUP name EQUALITY caseIgnoreMatch
  SUBSTR caseIgnoreSubstringsMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
  X-ORIGIN 'RFC 4519'
  USAGE userApplications )

 

Run command -
-----------------------------------------------------------
./ldapmodify \
-h localhost \
-p 2389 \
-D cn=Directory\ Manager \
-j passwd.txt \
-a -f customAttribute.ldif

Example Output –
———————————————————–
Processing MODIFY request for cn=schema
MODIFY operation successful for DN cn=schema

 

Step 3 – Create Outbound Attribute Addition Transformation

The following will create the Outbound Attribute Addition Transformation Rule. In my example below I only included 3 of the 50 states for the sake of illustration, but you can extrapolate for your use case and make the necessary adjustments. Notice that the attribute “virtST” created in the previous step is referenced in the following command (shown in bold red).

./dsconfig \
-h localhost \
-p 2444 \
-D cn=Directory\ Manager \
-j passwd.txt \
-n create-transformation \
--type add-outbound-attribute \
--set client-attribute:virtST=%st%\(Florida,FL\)\(Minnesota,MN\)\(Wisconsin,WI\) \
--transformation-name virtState

 

Step 4 – Create Transformation Workflow Element

The following creates a Transformation Workflow Element and associates the Transformation Rule “virtState” created in the previous step (shown in bold blue).

./dsconfig \
-h localhost \
-p 2444 \
-D cn=Directory\ Manager \
-j passwd.txt \
-n create-workflow-element \
--set enabled:true \
--set next-workflow-element:load-bal-we1 \
--set transformation:virtState \
--type transformations \
--element-name workflow-elem-state

 

Step 5 – Create Workflow

The following command creates the required Workflow and associates the Workflow Element “workflow-elem-state” created from the previous step; shown in bold green.

./dsconfig \
-h localhost \
-p 2444 \
-D cn=Directory\ Manager \
-j passwd.txt \
-n create-workflow \
--set base-dn:cn=users,dc=oracle,dc=com \
--set enabled:true \
--type generic \
--set workflow-element:workflow-elem-state \
--workflow-name workflow-state

 

Step 6 – Set Workflow to Network Group

The final command does not actually create anything, but instead associates the Workflow “workflow-state” created in the previous step to the main network group; shown in bold brown.

./dsconfig \
-h localhost \
-p 2444 \
-D cn=Directory\ Manager \
-j passwd.txt \
-n set-network-group-prop \
--group-name network-group \
--add workflow:workflow-state

 

TIP: In the previous step, part of the command to create the Workflow had a parameter, “--set base-dn:”, which defines the namespace for which the Transformation will be invoked; in my example it was cn=users,dc=oracle,dc=com. If you open ODSM, log into the OUD Proxy, go to the Configuration tab, expand Network Groups, and finally select network-group, you will see the associated workflows. If any workflow has already been defined with the same base DN, you will get an error as follows:

The Network Group could not be modified because of the following reason:

* [LDAP: error code 53 - Entry cn=network-group,cn=Network 
Groups,cn=config cannot be modified because one of the configuration 
change listeners registered for that entry rejected the change: Unable 
to register the workflow because the base DN 'cn=users,dc=oracle,dc=com' 
is already registered with the network group 'network-group']

The reason for this is that a network group can only have one associated workflow per base DN. If you get this error, go into ODSM and review the network group in the Proxy Server to see which base DNs are used by the workflows attached to the network group.

 

If you do not intend to create the Transformation using ODSM, you can skip to the end where you test the Transformation rule out.

 

ODSM Approach to Create the Transformation

The following shows all the steps required to accomplish in ODSM everything that was done using the command line.

 

Step 1 – Login to ODSM

Open ODSM (Oracle Directory Service Manager) and login as an administrator to the OUD Proxy Server instance.

ODSM Login

 

Step 2 – Create a custom schema attribute

NOTE: The following attribute creation is based on my example and not yours, so modify it accordingly. Navigate to the Schema tab and create a new attribute by finding the existing attribute “st” and cloning it. Enter the new name “virtST” and an OID value <your OID value>. Be sure the OID value is unique and does not conflict with any existing schema attributes or any future LDAP RFC schema attributes. Finally, click Apply to create the attribute.

Custom Schema

 

Step 3 – Create Outbound Attribute Addition Transformation

Navigate to the Configuration tab > click on the CORE Configuration icon > right click on Transformation > select Create Outbound Attribute Addition Transformation.

 

 

Outbound Attribute Addition Transformation

 

Enter “virtState” in the Name field and click the Define… button > select Value of Another Attribute > enter or select “st” (the attribute in OUD that the transformation will be mapped to) > and finally click the Create button.

Name: virtST
Attribute: st
Value Mapping:

On Matching    Replace With
FL             Florida
MN             Minnesota
WI             Wisconsin

 

 

Transformation Value Mapping

Step 4 – Create Transformation Workflow Element

In the CORE Configuration area icon right click on Workflow Elements > select Create Transformation Workflow Element > enter the following and click the Create button.

Name: workflow-elem-state
Next Workflow Element: load-bal-we1
Transformations: select “virtState”

Create Workflow Element

 

Step 5 – Create Workflow

In the CORE Configuration icon area right click on the Workflow > select Create Workflow > enter the following information and click the Create button.

 

Name: workflow-state
Base DN: cn=users,dc=oracle,dc=com
Workflow Element: select workflow-elem-state
Criticality: True

Create Workflow

 

Step 6 – Set Workflow to Network Group

In the CORE Configuration icon area expand General Configuration > expand Network Groups > select network-group. Under Workflow click the Add button and enter the name of the workflow created in the previous step “workflow-state” and then click the Apply button.

 

Assign Network Group

 

Testing the OUD Proxy Transformation Rule

Finally, regardless of which approach you took to create the OUD Proxy Transformation Rule, testing it is the same. Testing the Transformation Rule is pretty simple: just run a standard LDAP search against the OUD Proxy instance port, with the correct base DN that was defined when creating the Workflow, and the proper LDAP filter. Below is an example of what it should look like for the example I have presented, though your search could differ depending on how you defined the transformation. If it works, the filter “virtST=Florida” should trigger the Transformation Rule, change the backend search to “st=FL”, and return all matching users.

 

ldapsearch \
-h oud1.melander.us \
-p 2389 \
-D "cn=Directory Manager" \
-w Oracle123 -x -LLL \
-b "cn=users,dc=oracle,dc=com" \
"(virtST=Florida)"

 

Summary

Though this OUD article is specific to a Transformation Rule, you can apply the same basics to create many other types of virtualization features in OUD. I hope this helps anyone who has struggled to get one of these features working.


Configuring Oracle Public Cloud to Federate with Microsoft Azure Active Directory


Introduction

Companies usually have an Identity and Access Management (IAM) solution deployed on premises to manage users and roles and to secure access to their corporate applications. As businesses move to the cloud, companies will most likely want to leverage the investment already made in such IAM solutions and integrate them with the new SaaS or PaaS applications being added to their portfolio.

Oracle Public Cloud and its Shared Identity Management (SIM) services provide integration with third-party Identity Providers (IdPs) by using a well-known industry standard, SAML WebSSO Federation. This way, existing investments in IAM systems can be leveraged to allow users to log in to their corporate IdP and then single sign-on to Oracle Public Cloud applications that use the Shared Identity Management services.

In this post, we will show how to configure Oracle Public Cloud’s SIM (Service Provider) to Federate with Microsoft Azure Active Directory (IdP).

This post assumes the Azure edition used is “Azure Active Directory Premium”.

Before starting this procedure, make sure you have administrator access to both Azure Portal and Oracle Public Cloud Portal.

To expedite your work, open each of the services in a different browser, as we will make changes and copy data from one service to another.

Configure Azure AD as IdP for SAML Federation

Create and configure an application in Azure

Sign in to Azure Portal and browse to “Active Directory”, “Applications”, and click “Add”

img1

Select “Add an application from the gallery”.

img2

Select “Custom”, then “Add an unlisted application my organization is using”, provide a name, and save.

img3

On the Application page, click on “Configure single sign-on”.

img4

Select “Microsoft Azure AD Single Sign-On”, click Next

img5

On the “Configure App Settings” screen, add the values for Oracle Public Cloud.

Don’t worry about them for now; we will enter some temporary values and later obtain the correct ones from OPC.

In “Issuer”, enter the OPC value for Provider ID, for example: https://myservices.us.oraclecloud.com/oam/fed/cloud/ateamiddomain

In “Reply URL”, enter OPC value for Assertion Consumer Service URL, for example: https://myservices.us.oraclecloud.com/oam/server/fed/sp/sso?tenant=ateamiddomain

Click Next.

img14

In the “Configure single sign-on at Oracle Public Cloud” screen, download Azure Metadata (XML) and save the file as “IdP-Metadata.xml”.

We will use this file later to configure Oracle Public Cloud.

Check “Confirm that you have configured single sign-on as described above.”

Click next.

img7

In the next screen, confirm the notification email and save.

Configure Oracle Public Cloud as Service Provider for SAML Federation

Log into Oracle Public Cloud, go to “Users”, “SSO Configuration” and click on “Configure SSO” button.

img8

In the pop-up window, select “Import identity provider metadata” and load the Azure metadata file, IdP-Metadata.xml, that we saved in the previous step.

From the select drop-downs, choose:

SSO Protocol: HTTP POST

User Identifier: User ID

Contained in: NameID

img9

Save it, and from the resulting screen, take a note of the following information from OPC: “Provider Id” and “Assertion Consumer Service URL”.

img10

 

 Update Azure Active Directory with OPC Information

Go back to Azure Portal, and select your directory, then click on “Applications” and then on the application we just created in the previous step, “Oracle Public Cloud”.

img11

Click on “Configure single sign-on”.

img12

Select “Microsoft Active Directory” again, and click Next.

img13

In the “Issuer” field, enter the value you copied from OPC, “Provider Id”.

In the “Reply URL” field, enter the value you copied from OPC, “Assertion Consumer Service URL”.

img14

For the next steps, just go with the default, click next and save.

Assign Azure Users to access Oracle Public Cloud.

In Azure, you have to specify which users and/or groups will have access to the “Oracle Public Cloud” application, otherwise users won’t be able to access the application after log-in.

In Azure portal, navigate to your directory, click on “Applications” and choose the application we created, “Oracle Public Cloud”.

Go to “Users and Groups” tab, and search for the groups you would like to grant access to the application and assign them by clicking on the assign button at the bottom of the page.

img15

Importing Azure Users into Oracle Public Cloud.

Before users can actually log-in to Azure and SSO to Oracle Cloud, we need to have the usernames imported into OPC.

To export the users from Azure, use the recommended method depending on whether your users are sourced from an on-premises Active Directory (use the standard AD tools to export them) or directly from Azure (use the Azure AD Windows Power Tools).

To upload users into Oracle Public Cloud, you need to export your users into a CSV file, with the following structure: First Name, Last Name, Email, User Login.

The User Login must match the same username used to log-in to Azure.
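For illustration, a file with that structure could look like the sample below. The names and domain are fictitious, and you should confirm the exact header requirements in the import documentation referenced next.

First Name,Last Name,Email,User Login
Adina,Smith,adina.smith@example.com,adina.smith@example.com
John,Doe,john.doe@example.com,john.doe@example.com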

Consult the following document on how to manage users in OPC: Importing a Batch of User Accounts

Testing and Enabling SSO for Oracle Public Cloud

After the necessary configurations are done on both sides, we can test if the SSO is working.

Log in to Oracle Public Cloud Portal and go to Users, SSO Configuration.

Click on the “Test” button.

img16

A new browser will open, click on “Start SSO”.

img17

You will be redirected to Azure Portal login page.

Provide the credentials of a user that has access to the Oracle Public Cloud application.

img18

If authentication is successful, you will be redirected to an Oracle Public Cloud page showing the results of the SAML assertion and authentication.

img19

Now, you can enable SSO for this identity domain. Go to Users, SSO Configuration and click on “Enable SSO”.

img20

Once the SSO is enabled for this identity domain, users can log in with their corporate account from Azure, by choosing “Sign in using your company ID” button at the login screen.

img21

Conclusion

Oracle Public Cloud allows customers with a current, functional IAM solution, be it on-premises or in the cloud, to maintain their investment while integrating with new applications on Oracle Cloud that use the Shared Identity Management services.

Customers can easily maintain users in a centralized IdP, and grant or revoke access for external applications without the need to update any account or permissions on the external systems.

A company can revoke a departing employee’s access to Oracle Public Cloud by simply removing them from the group that has access to the application defined in Azure AD, or grant access to an entire organization with a few clicks of the mouse.

Intelligent Devices with Oracle Edge Analytics & MQTT


Introduction

It is becoming increasingly common to see scenarios where technologies like IoT (Internet of Things) and Event Streaming are combined to create solutions that leverage real-time intelligence for business decision making. Typical scenarios involve monitoring large numbers of devices that continuously emit events as a result of some sort of situation detection (commonly performed by specialized sensors), with those events processed, in near real-time, by streaming-analytics-capable technologies such as OSX (Oracle Stream eXplorer).

In this context, OSX is running in some server-side environment such as the Oracle Cloud or On-Premise data centers, and the incoming events must be sent over a network infrastructure in order to be processed. While this seems to be simple, not all use cases allow these device-related events to be easily sent over the network. In fact, most scenarios that leverage sensor-based technologies cannot count on the luxury of having a reliable/stable network infrastructure that connects the edges (sometimes called “Fog”) with the server-side environment. Ships crossing oceans, cities with poor network bandwidth infrastructure, and cities with regulatory security rules are common examples.

figure_x_bad_network_infrastruecture

There are various alternatives for overcoming these types of network bandwidth limitations. If the device is capable of caching the events for a period of time, then those events could be sent in a batch when the network is available. If it is not capable of caching, it can communicate with a central hub (sitting somewhere near the devices) that will act as a concentrator of those events and will dispatch them appropriately. Unfortunately, “band-aid” solutions like this normally add a degree of complexity to the development of those devices, with error-prone programming constructs. Not to mention that the events may be sent in a delayed manner, at which point their value is inherently lost. To be considered useful, events need to be processed as they happen, and this implies that sometimes the streaming analytics technology needs to be as close to the fog as possible.

Oracle addresses this problem by making the streaming analytics technology from OSX available to be used in the fog, by devices with very restricted hardware. This is possible through the OEA (Oracle Edge Analytics) product, an optimized low memory and disk footprint version of OSX. This product is not necessarily new (it was known as Oracle Event Processing for Java Embedded) but has a new set of features that might provide the missing ingredient to the development of intelligent, event-driven devices, capable of handling pattern matching, event correlation, filtering, and aggregation scenarios.

This article will provide an overview of how to get started with OEA, covering details about installation, domain creation, application deployment, and how to leverage the MQTT protocol in an OEA application. If you need to find more information about OEA please consider reading the Getting Started with Oracle Edge Analytics guide, available under the product documentation.

Installing Oracle Edge Analytics

Installing OEA on a device is very straightforward. It basically involves extracting the contents of the OEA distribution file into a folder. However, every device has its own way to transfer files into its file system, and that part can be tricky sometimes. For the sake of clarity, this article will assume that you are using a Raspberry Pi Model 2 (Pi for short) running the latest NOOBS operating system.

raspberry_pi_2_model_b

Figure 1: Raspberry Pi Model 2 (ARM 900 MHz CPU with 1GB of RAM) used in this article.

The first step should be creating a folder on your Pi. You must make sure that this folder has read, write, and execute privileges; otherwise the product will not work correctly. In order to do these steps, you must have access to your Pi. Most people do this by connecting the device to a monitor/TV using an HDMI cable. Alternatively, you can set up SSH access on your Pi to be able to connect to it remotely. The second step should be extracting the contents of the OEA distribution file into the folder created. If you have an OTN (Oracle Technology Network) account, you can get a copy of OEA here.

installingEdgeAnalyticsIntoRaspberryPi

Figure 2: Extracting the contents of the OEA distribution file into a Raspberry Pi folder.

The third step is making sure you have a supported JDK installation in your Pi. Oracle recommends using either JDK 1.8.0_06 (Client JVM Only) or Java Embedded Runtime 1.8.0_33. The easiest way to do this if you are running a Raspbian distribution is using the following command:

apt-get install oracle-java8-jdk

Finally, create an environment variable named JAVA_HOME whose value points to the location where Java is installed. This is very important because most OEA internal scripts expect this variable to be set in order to work. Thus, find where Java is installed. If you installed the JDK using the apt-get command shown before, then you can query the exact location where it is installed using the dpkg-query command:

dpkg-query -L oracle-java8-jdk | grep /bin/javac

The folder that holds your Java installation will be shown in the output. Copy the entire path, except for the “/bin/javac” at the end. Then, create the JAVA_HOME environment variable using the following command:

export JAVA_HOME=<ORACLE_JAVA8_JDK_INSTALL>

Using the Configuration Wizard

OEA, just like OSX and WebLogic, uses the concept of domains. A domain consists of one or more OEA servers (and their associated resources) that you manage with a single administration server. Since OEA does not support clustering, a typical OEA domain has one single server.

In order to create a domain, you must use the Configuration Wizard. In OEA, the Configuration Wizard only works in silent mode, which is a non-interactive way to create and configure a domain. As such, you need to create an XML file that defines the domain configuration settings. Listing 1 below shows an example of this file.

<?xml version="1.0" encoding="UTF-8"?>

<bea-installer xmlns="http://www.bea.com/plateng/wlevs/config/silent">

   <input-fields>

      <data-value name="CONFIGURATION_OPTION" value="createDomain" />

      <data-value name="USERNAME" value="wlevs" />

      <data-value name="PASSWORD" value="welcome1" />

      <data-value name="SERVER_NAME" value="defaultserver" />

      <data-value name="DOMAIN_NAME" value="helloWorldDomain" />

      <data-value name="DOMAIN_LOCATION" value="/oea/domains" />

      <data-value name="NETIO_PORT" value="9002" />

      <data-value name="KEYSTORE_PASSWORD" value="welcome1" />

   </input-fields>

</bea-installer>

Listing 1: Sample silent.xml file, used to create an OEA domain.

Save the silent.xml file in some folder on your Pi. With this file in place you can invoke the Configuration Wizard from the shell. Execute the following command:

$OEA_HOME/oep/common/bin/config.sh -mode=silent -silent_xml=<SILENT.XML_FILE_LOCATION> -log=/home/pi/logs/createDomain.log

If the Configuration Wizard does not complete successfully, check the log file for more information. The shell command exit code can help you learn more about the outcome of silent execution. When completed successfully, the domain will have been created in the location specified in the DOMAIN_LOCATION parameter of the silent.xml file.
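For example, immediately after running config.sh you can inspect the exit code from the shell (by common shell convention, a value of 0 indicates success):

echo $?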

Sample Application Walkthrough

In this section, we are going to show the details behind the sample application used in this article. The use case implemented shows how to automate a hospital room so that temperature drops are detected automatically, preventing the patient from becoming uncomfortable with the cold in the room. Once a temperature drop is detected, a signal can be sent to adjust the thermostat, and a nurse can be called to the room.

figure_2.1_patient_in_hospital

From the device perspective, this requires a sensor capable of measuring temperature and a Raspberry Pi that will run the OEA product with the application. There are several low-cost sensors available in the market, such as the ones sold by Vernier. You can check them here.

The sample application uses the built-in REST adapter from OEA to receive events that, in our example, hold data about temperatures. The idea is that the sensor sends readings to the application by posting a JSON payload containing a sensor identifier and temperature value, to an HTTP endpoint exposed by the application through the REST adapter. Listing 2 shows an example of the JSON payload.

{

   "sensorId":"sensor-01",

   "temperature":72.8

}

Listing 2: Sample JSON payload that sensors must send to the REST endpoint.

Once received by the application, the JSON payload is automatically transformed into an event type and queried by a processor that applies a filter. The filter essentially discards any temperature reading with a value below 70 Fahrenheit. The resulting events that match the criteria are then printed out in the output console. Figure 3 shows the EPN (Event Processing Network) of this application.

figure3_epn_only_rest_adapter

Figure 3: EPN of the OEA application that handles temperatures events.

You can download the OEA application project that contains all the artifacts here. This project has been built using Fusion Middleware JDeveloper contained in the SOA Quick Start Installer version 12.2.1. This JDeveloper version can be downloaded here.

Once the project is open in JDeveloper, you can start checking the implementation details. How the JSON payload is automatically transformed into an event type is a good place to start. If you double-click the restAdapter component, you will see the bean declaration that defines it. The jsonMapper property points to another bean that uses the mapper capabilities available in OEA. Specifically, the mapper used is JAXB-based and is capable of handling JSON payloads and automatically transforming them into event types. The event type used in this case is specified in the eventTypeName property. Figure 4 shows how this is configured.

figure_4_json_2_event_type_mapping_through_rest

Figure 4: Beans configuration for JSON handling through the REST adapter.

Moving forward, you might be curious about how the OEA application handles the event filtering. This is accomplished through a processor named filterTemperatures. This processor uses a simple CQL query to filter out the undesired events. CQL is the acronym for Continuous Query Language, a query language based on ANSI SQL with added constructs that support streaming data and pattern matching. By using CQL, developers can perform event streaming analysis with a high-level programming language. Figure 5 shows the CQL statement that filters the events received by the application.

figure_5_cql_statement_to_filter_events

Figure 5: CQL statement that filters undesired temperature readings.
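Because the CQL statement itself is only shown as an image above, here is a minimal sketch of what such a filter query might look like; the channel name temperatureChannel is an assumption, and the actual statement in the project may differ:

select * from temperatureChannel where temperature >= 70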

Finally, you will notice the outputAdapter component. This is a custom adapter, created using the OEA adapter API, that takes the incoming event and prints its contents in the output console. There are no specific details to cover about this component since it was created only for instructional purposes. Normally, the output events are sent downstream using specialized adapters such as REST or MQTT, to implement the thermostat adjustment, for example.

Deployment and Testing

We are going to deploy and test the sample application discussed in the previous section. The first step is making sure that your OEA server is up and running. If the server is not running, you can easily start it using the built-in script created during the domain creation. Navigate to the folder that contains your OEA server. For example; considering the domain created using the listing 1 silent.xml file, you should navigate to the following folder:

/oea/domains/helloWorldDomain/defaultserver

Inside this folder you will see a script named startwlevs.sh. Execute this script in order to start your OEA server. Wait until the server is completely started. This may take several minutes if it is the first time you have started the server.

Next, you need to deploy the sample application onto the OEA server. You can either generate a deploy bundle directly from your JDeveloper install or download a pre-generated copy here. Copy the bundle JAR file into some local folder of your Pi. Once you have copied the bundle JAR, navigate to the following folder:

$OEA_HOME/oep/bin

This folder contains several files that are used to perform administration and deployment tasks. For this activity we are going to leverage the wlevsdeploy.jar file which contains built-in functions to perform deployment tasks. To deploy the sample application into the local OEA server, execute this command:

java -jar wlevsdeploy.jar -url http://localhost:9002/wlevsdeployer -user wlevs -password welcome1 -install <APPLICATION_BUNDLE_JAR_FILE_LOCATION>

The command above uses several parameters that are defined during the domain creation, such as the NETIO_PORT, USERNAME and PASSWORD. After the bundle JAR is correctly installed, you should see in the output console of the OEA server messages like this:

<BEA-2045000> <The application bundle “GettingStartedWithOEA” was deployed successfully>

<BEA-2047000> <The application context for “GettingStartedWithOEA” was started successfully>

With the sample application properly installed, it is time to run some tests. Since the application exposes an HTTP endpoint to receive the events containing the temperature readings, you can perform HTTP POSTs to your Raspberry Pi using its hostname and the NETIO_PORT as the URI. Also remember that the application registers the HTTP endpoint in the /receiveTemperatures context path, so the target URL will be:

http://hostname:9002/receiveTemperatures

For instance, if you are locally connected into your Pi, you could use the cURL command to perform an HTTP POST. Try posting the JSON payload shown in listing 2 using the following command:

curl -H "Content-Type: application/json" -X POST -d '{"sensorId":"sensor-01","temperature":72.8}' http://localhost:9002/receiveTemperatures

You should see the following message in the output console of the OEA server:

[TemperatureReading] sensorId = sensor-01, temperature = 72.8

This means that the JSON payload was properly received by the application and that the EPN flow was executed completely, including the last stage, which is the outputAdapter component. Go ahead and try sending events that contain temperature values that fall below the 70 Fahrenheit watermark. You will notice that those events will not be printed out in the output console, since they do not satisfy the filter criteria specified in the CQL statement shown in figure 5.
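For instance, a reading below the watermark can be posted the same way; it should be silently discarded by the filter and nothing should appear in the output console:

curl -H "Content-Type: application/json" -X POST -d '{"sensorId":"sensor-01","temperature":65.4}' http://localhost:9002/receiveTemperatures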

Adding Support to MQTT

Although REST is widely leveraged today by devices that need to communicate with the external world, it is not considered the best technology for that purpose. REST is designed around the Request/Response model, running on top of a stateless protocol such as HTTP. This alone brings a tremendous overhead in terms of payload due to the inherently large amount of header data found in HTTP requests and responses. Another important aspect to consider is the lack of reliable messaging over unreliable networks. Many deployed devices rely on extremely low bandwidth, such as radio-frequency-based communication.

There is another problem with using REST. Power is a scarce resource in many edge devices. Every transaction performed by the device must be done efficiently consuming minimal computational cycles in order to save energy, allowing the device to prolong its activities. REST-based architectures tend to consume large amounts of computational cycles by typically using long-running polling techniques, and by hosting a web server (even if it is something lightweight such as Jetty) in the device.

The MQTT protocol was invented to address many of these challenges. MQTT is an M2M (Machine-to-Machine) protocol that allows reliable connectivity using a Publish/Subscribe messaging style, while still being very efficient for small-footprint devices. MQTT allows devices to communicate efficiently, saving battery and bandwidth while maintaining extremely low latency when communicating with the external world. There are many tests that demonstrate MQTT's efficiency, and you can read more about that in this article written by Stephen Nicholas.

The Oracle A-Team developed a native adapter for OEA which was designed to allow both inbound and outbound connectivity with MQTT-based brokers. This section will describe how this adapter works, and how it can be incorporated into the sample application described so far.

The MQTT adapter is provided free of charge to use “AS-IS” but without any official support from Oracle. Bug reports, feedback, and enhancement requests are welcome, but need to be submitted using the comments section of this blog, and the A-Team reserves the right to help in a best-effort capacity.

The first thing you need to do is obtain a copy of the MQTT adapter. You can download it here. Unzip the package contents into a staging folder. Inside it you will find two other packages: the bundle JAR of the adapter and some pre-built samples. Extract the adapter bundle JAR and copy it to the modules folder of your OEA server. For example, considering the domain created using the listing 1 silent.xml file, the modules folder location should be this:

/oea/domains/helloWorldDomain/defaultserver/modules

Bundle JARs copied to the modules folder are automatically loaded into the OEA runtime system, making them available to any deployed application. You will need to restart your OEA server for the changes to take effect. Tip: there is a stopwlevs.sh script under the OEA server folder. Alternatively, you can locally stop the OEA server using the following command:

java -jar wlevsadmin.jar -username wlevs -password welcome1 SHUTDOWN

Download the modified version of the OEA application that contains the MQTT support from here. This new version uses the MQTT adapter to establish communication with an internet-hosted MQTT broker. If the Pi device that you are using does not have an internet connection, then a local install of an MQTT broker will be required for the tests. There are plenty of MQTT brokers available to use, as you can see here. Figure 6 shows the EPN of the modified version of the application.

figure_6_epn_with_rest_and_mqtt

Figure 6: EPN of the OEA application, now handling both REST and MQTT events.

If you double-click the mqttAdapter component, you will see the bean declaration that defines it. Just like the restAdapter component, it uses the JAXB mapper to handle JSON payloads and automatically transform them into event types. The property that links the MQTT adapter with the mapper is called mapper. Unlike the REST adapter, however, there are additional properties to be set on the MQTT adapter in order to establish communication with the broker. The serverURIs property holds the comma-separated list of brokers; the topicName property specifies which topic the adapter should listen to; and the qualityOfService property holds the comma-separated list (one for each broker) of QoS values. Figure 7 shows how this is configured.

figure_7_json_and_event_type_mapping_through_rest_and_mqtt

Figure 7: Beans configuration for JSON handling through the REST and MQTT adapters.
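Since Figure 7 is only shown as an image, the following is a minimal sketch of what such a bean declaration might look like. The adapter class name and broker URI are placeholders/assumptions; only the property names (serverURIs, topicName, qualityOfService, mapper) come from the description above, and the actual configuration in the project may differ:

<bean id="mqttAdapter" class="com.example.oea.mqtt.MqttInboundAdapter">
   <property name="serverURIs" value="tcp://your-broker-host:1883" />
   <property name="topicName" value="temperatures" />
   <property name="qualityOfService" value="0" />
   <property name="mapper" ref="jsonMapper" />
</bean>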

Generate a new bundle JAR with the modified version of the application. Alternatively, you can grab a pre-generated copy of the modified version of the application here. Copy the bundle JAR file into some local folder of your Pi. Once you have copied the bundle JAR, it will be necessary to undeploy the current version of the application. This is necessary because both the old and the new version of the application have the same bundle name, which must be unique in the OEA server. To undeploy the application, use the following command:

java -jar wlevsdeploy.jar -url http://localhost:9002/wlevsdeployer -user wlevs -password welcome1 -uninstall GettingStartedWithOEA

The uninstall command takes the bundle name of the application as parameter. This is how the OEA server knows which application must be undeployed. With the older version of the application properly undeployed, you can deploy the new version of the application using the same deploy command you have used before:

java -jar wlevsdeploy.jar -url http://localhost:9002/wlevsdeployer -user wlevs -password welcome1 -install <APPLICATION_BUNDLE_JAR_FILE_LOCATION>

After the bundle JAR is correctly installed, you should see in the output console the same confirmation messages you saw before, stating that the application was deployed and started successfully.

In order to test the new version of the application, an MQTT-based client will be necessary. This client must connect to the same broker that the application is connected to and publish messages to the topic named temperatures. This can be accomplished in two ways:

    • Writing your own Client: There is an open-source project called Eclipse Paho that provides APIs for several programming languages such as C/C++, Java, JavaScript, Python, Go and C# .NET. A minimal Java sketch is shown after this list.

    • Using a GUI Client Tool: The Eclipse Paho project also provides a graphical utility tool that is intended to be used for testing purposes. There are versions for several operating systems, and you can freely download it here.
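As referenced in the first option above, the following is a minimal sketch of a Java publisher using the Eclipse Paho MQTT v3 client. The broker URL is a placeholder, and the payload simply mirrors the JSON structure used throughout this article:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TemperaturePublisher {

   public static void main(String[] args) throws MqttException {

      // Connect to the same broker the OEA application points to (placeholder URL)
      MqttClient client = new MqttClient("tcp://your-broker-host:1883", MqttClient.generateClientId());
      client.connect();

      // Publish one temperature reading to the topic the mqttAdapter subscribes to
      MqttMessage message = new MqttMessage("{\"sensorId\":\"sensor-02\",\"temperature\":89.88}".getBytes());
      message.setQos(0);
      client.publish("temperatures", message);

      client.disconnect();
   }
}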

For instance, figure 8 below shows the GUI utility tool from the Eclipse Paho project being used to publish a message to the MQTT broker.

figure_8_testing_the_mqtt_adapter

Figure 8: Using the Eclipse Paho GUI Utility Tool to publish messages to a MQTT broker.

Since the application subscribes to the temperatures topic using the MQTT adapter, it will receive a copy of the message sent and process it through the implemented EPN. You should see the following message in the output console of the OEA server:

[TemperatureReading] sensorId = sensor-02, temperature = 89.88

The MQTT protocol, when combined with the OEA product, can help you create very powerful IoT projects that provide not only development speed but also great performance. During the development of the MQTT adapter, the A-Team had the chance to perform some stress testing against OEA while running on top of the Pi, and some interesting numbers came up.

This stress test should not be considered an official Oracle benchmark. It is made available in this article for information purposes only. Moreover, the numbers shown do not reflect the actual limit of the product, nor of the Raspberry Pi device. They might be higher or lower depending on several other factors such as network, hardware, coding, algorithm complexity, etc.

The stress test mainly focused on calculating the actual processing limit of the Pi device while running an OEA application that processes concurrent sensor events. To increase the accuracy of the test, the network latency was reduced as much as possible, so the only hardware resources that would affect the results, and therefore limit scalability, would be CPU and memory. Figure 9 shows the results of this stress test.

figure_9_stress_test_results

Figure 9: Stress test results obtained using OEA, MQTT and Raspberry Pi.

To send the messages, an MQTT client producer written in Java was used, with two threads concurrently sending messages to the topic. Each thread sends 2,000 events per second, totaling 4,000 per second from the consumer point of view. The tests showed that increasing the incoming volume (increasing the number of producer threads or increasing the number of messages sent) did not improve the consumption throughput, indicating some sort of contention. This is how the scalability limit was calculated. Curiously, CPU usage did not increase with the incoming volume, and neither did the JVM GC (Garbage Collection) pressure, which may indicate that the network is the limiting factor for scalability.

Conclusion

This article introduced the OEA product, Oracle’s technology to bring event stream analytics to devices such as Raspberry Pi. Through a series of examples, it was shown how to get the OEA product installed, configured and ready to execute a sample application. The article also showed how to deploy and test this sample application, exploring the administrative capabilities found in the OEA product. Finally, the article showed how to leverage the MQTT protocol in OEA-based applications, through the use of a native MQTT adapter built specifically for OEA.

Event streaming analytics is a very hot topic that has gained even more traction with the explosion of IoT and machine-generated data. However, there are specific use cases that require event data to be analyzed in the fog, with zero connectivity to server-side environments capable of handling the data stream. The example discussed here shows a possible solution that could bring event streaming analytics to devices, even ones with minimal hardware capabilities and network bandwidth constraints.

Integrating Social Relationship Management (SRM) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article outlines how to integrate Social Relationship Management (SRM) with Oracle Business Intelligence Cloud Service (BICS).

Bringing the SRM records into BICS enables the data consumer to refine and focus on relevant data through prompts and filters. Additionally, once in BICS, the SRM data can be mashed with complementary datasets.

Three patterns are covered:

(a) Managing SRM authentication, authorization, and token refresh.

(b) Retrieving SRM data in JSON format using REST Web Services.

(c) Parsing and Inserting SRM data into the Schema Service database with Apex PL/SQL functions.


The article concludes at PL/SQL code snippet stage. Suggestions on how to schedule, trigger, and display results in BICS can be found in past A-Team BICS Blogs.

SRM screen prints in this article have been translated to English from Spanish and text may differ slightly from originals.

For those very familiar with the SRM API and the BICS API, it is possible to jump straight ahead to Step 4 – #4 “Final Code Snippet – add apex_json.parse code to PL/SQL”. That said, Steps 1-3 are valuable for assisting in understanding the solution and will be needed should de-bugging be required.

 

Main Article

 

Step 1 – Review SRM Developer Platform API Documentation

 

Begin by reviewing the SRM Developer Platform API documentation. This article covers Refreshing Tokens (oauth/token) and the List Messages Method (engage/v1/messages). There are many other objects and methods available in the API that may be useful for various integration scenarios.

Step 2 – Authentication

 

Detailed documented steps for Authentication can be found here. The following is a summary of what was found to be the minimal steps required for BICS.


Prerequisites:

* SRM Account must include message routing from Listen to Engage within Social Engagement and Monitoring (SEM) for these to be available in the API.

* SRM Account must have administrator privileges to access the Account link.


1)    Register the Client Application

Go to: https://accounts.vitrue.com/#api_apps

Click: Create New Application

Enter Application Name & URL(URI) Call Back.

Make a note of the Application Name, Callback URL (URI), Client ID (Customer ID), and Client Secret.

Snap1

Snap2


2)    Request User Authorization and Obtain the Authorization Code

Authorize URL:

https://gatekeeper.vitrue.com/oauth/authorize?client_id=abc123&scope=engage&redirect_uri=https://accounts.vitrue.com&response_type=code

Replace client_id and redirect_uri with those associated with the API Application.

There are 3 different scopes:

engage = Access endpoints for messages and replies
publish = Access endpoints for posts
admin = Access endpoints for accounts, bundles, and resources

To determine what scope is required review the URL of the method being used. This example uses engage/v1/messages, thus scope = engage is required. When using admin/v1/accounts, scope = admin is required. If the incorrect scope is referenced, when running the method, the following error will be received:

{"Unauthorized.","detail":"The user credentials provided with the request are invalid or do not provide the necessary level of access.","status":401}

Copy the Authorize URL into a browser. Log into SRM.

Untitled1

Click “Authorize” to grant access. It is very important that the correct scope is displayed. In this case “Engage”.

Untitled3

If successful the browser will re-direct to the URL (URI) Call Back / Redirect URI. The code will be appended. Make a note of the code.

Untitled2


3)    Exchange Authorization Code for an Access Tokens

If necessary install Curl. Download Curl from here.

Run the following replacing client_id, client_secret, redirect_uri, code, and scope.

It is very important that the scope matches what was used to generate the code.

curl -X POST -d grant_type=authorization_code -d client_id=abc123 -d client_secret=abc123 -d redirect_uri="https://accounts.vitrue.com" -d code=abc123 -d scope=engage https://gatekeeper.vitrue.com/oauth/token -k

If successful the access_token and refresh_token will be returned in JSON format. Copy the JSON containing the tokens and save it to notepad for future reference.

{"access_token":"abc123","token_type":"bearer","expires_in":7200,"refresh_token":"abc123","scope":"engage"}

Tokens expire after two hours. If tokens expire generate new tokens by running grant_type=refresh_token.

curl -X POST -d grant_type=refresh_token -d refresh_token=abc123 -d client_id=abc123 -d client_secret=abc123 -d redirect_uri="https://accounts.vitrue.com" https://gatekeeper.vitrue.com/oauth/token -k

If tokens get lost or out of sync obtain a new “authorization code” and repeat the process to get a new code and new access / refresh tokens. If tokens get out of sync the following error will be received:

{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an unsupported parameter value, or is otherwise malformed."}

If an attempt is made to re-authorize a code that is still active the following error will be received. Thus, the need to get a fresh code.

{"error":"invalid_grant","error_description":"The provided authorization grant is invalid, expired, revoked, does not match the redirection URI used in the authorization request, or was issued to another client."}


4)    Save the refresh tokens to the BICS Database

CREATE TABLE REFRESH_TOKEN(CREATE_DATE TIMESTAMP, access_token VARCHAR(500), token_type VARCHAR(10), expires_in VARCHAR(10), refresh_token varchar(500), scope varchar(10));

Replace ‘abc123‘ with the current refresh token. Only the refresh token is needed at this stage.

INSERT INTO REFRESH_TOKEN(CREATE_DATE,refresh_token) VALUES (SYSDATE,'abc123');

5)    Refresh token from BICS

This code snippet provides a very basic example of how to store the refresh token in BICS. It should be used for demo purposes only. In production systems, more secure options for storing refresh tokens and linking them with the user's record or profile should be considered.

Open SQL Workshop from Oracle Application Express

Snap2

Launch SQL Commands

Snap3

Use the code snippet below as a starting point to build the refresh token PL/SQL.

For a text version of the code snippet click here.

Replace the redirect_uri, client_id, client_secret.

Re-run the code snippet to confirm that a new refresh token gets inserted into the REFRESH_TOKEN table each time.

The REFRESH_TOKEN table should only ever have one record in it at all times.

DECLARE
l_ws_response_clob CLOB;
l_refresh_token VARCHAR(500);
l_body VARCHAR(500);
BEGIN
SELECT MAX(refresh_token) INTO l_refresh_token FROM REFRESH_TOKEN;
dbms_output.put_line('Old Refresh Token: ' || dbms_lob.substr(l_refresh_token,12000,1));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
l_body := '{
"refresh_token": "' || l_refresh_token || '",
"grant_type": "refresh_token",
"redirect_uri": "https://accounts.vitrue.com",
"client_id": "abc123",
"client_secret": "abc123"
}';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://gatekeeper.vitrue.com/oauth/token',
p_body => l_body,
p_http_method => 'POST'
);
apex_json.parse(l_ws_response_clob);
DELETE FROM REFRESH_TOKEN;
INSERT INTO REFRESH_TOKEN(CREATE_DATE, access_token, token_type, expires_in, refresh_token, scope)
VALUES (
SYSDATE,
apex_json.get_varchar2(p_path => 'access_token'),
apex_json.get_varchar2(p_path => 'token_type'),
apex_json.get_varchar2(p_path => 'expires_in'),
apex_json.get_varchar2(p_path => 'refresh_token'),
apex_json.get_varchar2(p_path => 'scope')
);
dbms_output.put_line('New Refresh Token: ' || apex_json.get_varchar2(p_path => 'refresh_token'));
COMMIT;
END;

 

Step 3 – Run List Messages Method (engage/v1/messages)


For a text version of the code snippet click here.


Code Breakdown


Orange
– Refresh token code (from Step 2).

Blue – Reads Access Token from REFRESH_TOKEN table

Light Green – Rest Web Service for List Messages Method (engage/v1/messages). Returns data as a clob.

Bright Green – bundleId and resourceId parameters. These were identified through the “List Accounts Method” (admin/v1/accounts) and “List Bundles Method” (admin/v1/accounts/:account_id/bundles).

DECLARE
l_ws_response_clob CLOB;
l_refresh_token VARCHAR(500);
l_body VARCHAR(500);
l_access_token VARCHAR(500);
l_ws_response_clob2 CLOB;
l_ws_url VARCHAR2(500) := 'https://public-api.vitrue.com/engage/v1/messages?bundleId=3876&resourceId=108641';
BEGIN
SELECT MAX(refresh_token) INTO l_refresh_token FROM REFRESH_TOKEN;
--dbms_output.put_line('Old Refresh Token: ' || dbms_lob.substr(l_refresh_token,12000,1));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
l_body := '{
"refresh_token": "' || l_refresh_token || '",
"grant_type": "refresh_token",
"redirect_uri": "https://accounts.vitrue.com",
"client_id": "abc123",
"client_secret": "abc123"
}';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://gatekeeper.vitrue.com/oauth/token',
p_body => l_body,
p_http_method => 'POST'
);
apex_json.parse(l_ws_response_clob);
DELETE FROM REFRESH_TOKEN;
INSERT INTO REFRESH_TOKEN(CREATE_DATE, access_token, token_type, expires_in, refresh_token, scope)
VALUES (
SYSDATE,
apex_json.get_varchar2(p_path => 'access_token'),
apex_json.get_varchar2(p_path => 'token_type'),
apex_json.get_varchar2(p_path => 'expires_in'),
apex_json.get_varchar2(p_path => 'refresh_token'),
apex_json.get_varchar2(p_path => 'scope')
);
--dbms_output.put_line('New Refresh Token: ' || apex_json.get_varchar2(p_path => 'refresh_token'));
--Get Access Token
SELECT MAX(access_token) INTO l_access_token FROM REFRESH_TOKEN;
dbms_output.put_line(dbms_lob.substr(l_access_token,12000,1));
--Set Headers
apex_web_service.g_request_headers(1).name := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Bearer ' || l_access_token;
apex_web_service.g_request_headers(2).name := 'Accept';
apex_web_service.g_request_headers(2).value := 'application/json';
--Get Message
l_ws_response_clob2 := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_http_method => 'GET'
);
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,1));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,12001));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,24001));
COMMIT;
END;
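If needed, the List Messages call can first be verified outside of PL/SQL with curl, using an access token obtained in Step 2; the bundleId, resourceId, and token values below are the same example placeholders used above:

curl -H "Authorization: Bearer abc123" -H "Accept: application/json" "https://public-api.vitrue.com/engage/v1/messages?bundleId=3876&resourceId=108641"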

 

Step 4 – Parse and Insert SRM messages into BICS


1. Validate the JSON


Only valid JSON can be parsed by Apex. If the code fails at the parsing stage, it is recommended to validate it.

There are many free online JSON validating tools such as: https://jsonformatter.curiousconcept.com/

For a sample SRM JSON payload click here.

Viewing the JSON in the formatter allows the different elements to be easily expanded and collapsed, assisting with choosing the desired fields to bring into BICS.

For this example id, type, resourceName, resourceType, and body will be selected.

Snap3

2. Test JSON Path Expressions


When formulating the JSON path expression, it may be useful to use an online JSON Path Expression Tester such as https://jsonpath.curiousconcept.com.

The below example shows testing the “id” path.

Snap4

Snap5

JSON Path Expression’s of all required fields.

Value                       JSON Path Expression 
count                       count
id                          items[*].id
type                        items[*].type
resourceName                items[*].resource.resourceName
resourceType                items[*].resource.resourceType
body                        items[*].body

3. Create SRM_MESSAGE table in BICS

Create the table from Apex SQL Workshop -> SQL Commands.

CREATE TABLE SRM_MESSAGES(ID VARCHAR(100),TYPE VARCHAR(100),RESOURCE_NAME VARCHAR(100),RESOURCE_TYPE VARCHAR(100),BODY VARCHAR(1000));

 4. Final Code Snippet – add apex_json.parse code to PL/SQL

For a text version of the code snippet click here.

DECLARE
l_ws_response_clob CLOB;
l_refresh_token VARCHAR(500);
l_body VARCHAR(500);
l_access_token VARCHAR(500);
l_ws_response_clob2 CLOB;
l_ws_url VARCHAR2(500) := 'https://public-api.vitrue.com/engage/v1/messages?bundleId=3876&resourceId=108641';
BEGIN
SELECT MAX(refresh_token) INTO l_refresh_token FROM REFRESH_TOKEN;
--dbms_output.put_line('Old Refresh Token: ' || dbms_lob.substr(l_refresh_token,12000,1));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
l_body := '{
"refresh_token": "' || l_refresh_token || '",
"grant_type": "refresh_token",
"redirect_uri": "https://accounts.vitrue.com",
"client_id": "abc123",
"client_secret": "abc123"
}';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://gatekeeper.vitrue.com/oauth/token',
p_body => l_body,
p_http_method => 'POST'
);
apex_json.parse(l_ws_response_clob);
DELETE FROM REFRESH_TOKEN;
INSERT INTO REFRESH_TOKEN(CREATE_DATE, access_token, token_type, expires_in, refresh_token, scope)
VALUES (
SYSDATE,
apex_json.get_varchar2(p_path => 'access_token'),
apex_json.get_varchar2(p_path => 'token_type'),
apex_json.get_varchar2(p_path => 'expires_in'),
apex_json.get_varchar2(p_path => 'refresh_token'),
apex_json.get_varchar2(p_path => 'scope')
);
--dbms_output.put_line('New Refresh Token: ' || apex_json.get_varchar2(p_path => 'refresh_token'));
--Get Access Token
SELECT MAX(access_token) INTO l_access_token FROM REFRESH_TOKEN;
--dbms_output.put_line('Access Token' || dbms_lob.substr(l_access_token,12000,1));
--Set Headers
apex_web_service.g_request_headers(1).name := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Bearer ' || l_access_token;
apex_web_service.g_request_headers(2).name := 'Accept';
apex_web_service.g_request_headers(2).value := 'application/json';
--Get Message
l_ws_response_clob2 := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_http_method => 'GET'
);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,1));
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,12001));
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,24001));
--Delete Messages
DELETE FROM SRM_MESSAGES;
--Parse Clob to JSON
apex_json.parse(l_ws_response_clob2);
--Insert data
IF apex_json.get_varchar2(p_path => 'count') > 0 THEN
for i in 1..apex_json.get_varchar2(p_path => 'count') LOOP
INSERT INTO SRM_MESSAGES(ID,TYPE,RESOURCE_NAME,RESOURCE_TYPE,BODY)
VALUES
(
apex_json.get_varchar2(p_path => 'items[' || i || '].id'),
apex_json.get_varchar2(p_path => 'items[' || i || '].type'),
apex_json.get_varchar2(p_path => 'items[' || i || '].resource.resourceName'),
apex_json.get_varchar2(p_path => 'items[' || i || '].resource.resourceType'),
apex_json.get_varchar2(p_path => 'items[' || i || '].body')
);
end loop;
END IF;
COMMIT;
END;

Further Reading


Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the Application Express API Reference Guide – APEX_JSON Package.

Click here for the SRM Developer Platform API Guide.

Click here for more A-Team BICS Blogs.

Summary


This article provided a set of examples that leverage the APEX_WEB_SERVICE_API to integrate Social Relationship Management (SRM) with Oracle Business Intelligence Cloud Service (BICS) using the SRM Developer Platform.

The use case shown was for BICS and SRM. However, many of the techniques referenced could be used to integrate SRM with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST and APEX_JSON examples could be easily modified to integrate BICS or standalone Oracle Apex with any other REST web service that is accessed via a URL and returns JSON data.

Techniques referenced in this blog could be useful for those building BICS REST ETL connectors and plug-ins.

TCP/IP Tuning


Introduction

This article is meant to provide an overview of TCP tuning.

It is important to understand that there is no single set of optimal TCP parameters. The optimal tuning will depend on your specific operating system, application(s), network setup, and traffic patterns.

The content presented here is a guide to common parameters that can be tuned, and how to check common TCP problems.

It is recommended that you consult the documentation for your specific operating system and applications for guidance on recommended TCP settings. It is also highly recommended that you test any changes thoroughly before implementing them on a production system.

 

TCP auto tuning

 

Depending on your specific operating system/version and configuration, your network settings may be autotuned.

To check if autotune is enabled on many linux based systems:

cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf

or

sysctl -a | grep tcp_moderate_rcvbuf

 

If tcp_moderate_rcvbuf is set to 1, autotuning is active and buffer size is adjusted dynamically.

While TCP autotuning provides adequate performance in some applications, there are times where manual tuning will yield a performance increase.

 

Common TCP Parameters

This table shows some commonly tuned linux TCP parameters and what they are for. You can look up the equivalent parameter names for other operating systems.

net.core.rmem_default Default memory size of receive(rx) buffers used by sockets for all protocols. Value is in bytes.
net.core.rmem_max Maximum memory size of receive(rx) buffers used by sockets for all protocols. Value is in bytes.
net.core.wmem_default Default memory size of transmit(tx) buffers used by sockets. Value is in bytes.
net.core.wmem_max Maximum memory size of transmit(tx) buffers used by sockets. Value is in bytes.
net.ipv4.tcp_rmem TCP specific setting for receive buffer sizes. This is a vector of 3 integers: [min, default, max]. The max value can’t be larger than the equivalent net.core.{r,w}mem_max. Values are in bytes.
net.ipv4.tcp_wmem TCP specific setting for transmit buffer sizes. This is a vector of 3 integers: [min, default, max]. The max value can't be larger than the equivalent net.core.{r,w}mem_max. Values are in bytes.
net.core.netdev_max_backlog Incoming connections backlog queue is the number of packets queued when the interface receives packets faster than kernel can process them. Once this number is exceeded, the kernel will start to drop the packets.
File limits While not directly TCP, this is important to TCP functioning correctly. ulimit on linux will show you the limits imposed on the current user and system. You must have enough hard and soft limits for the number of TCP sockets your system will open.
These can be set in files under /etc/security/limits.d/ (or in /etc/security/limits.conf), using the format <domain> <type> <item> <value>, for example:

*    soft    nofile    XXXXX

*    hard    nofile    XXXXX

 

The running value of these parameters can be checked on most linux based operating systems using sysctl.

To see all of your currently configured parameters, use:

sysctl -a

If you want to search for a specific parameter or set of parameters, you can use grep. Example:

sysctl -a | grep rmem

 

The values you set for these depend on your specific usage and traffic patterns. Larger buffers don’t necessarily equate to more speed. If the buffers are too small, you’ll likely see overflow as applications can’t service received data quickly enough. If buffers are too large, you’re placing an unnecessary burden on the kernel to find and allocate memory which can lead to packet loss.

Key factors that will impact your buffer needs are the speed of your network (100mb, 1gb, 10gb) and your round trip time (RTT).

RTT is the measure of time it takes a packet to travel from the host, to a destination, and back to the host again. A common tool to measure RTT is ping.

It is important to note that just because a server has a 10gb network interface, that does not mean it will receive a maximum of 10gb traffic. The entire infrastructure will determine the maximum bandwidth of your network.

A common way to calculate your buffer needs is as follows:

Bandwidth-in-bits-per-second * Round-trip-latency-in-seconds = TCP window size in bits; TCP window size in bits / 8 = TCP window size in bytes

Example, using 50ms as our RTT:

NIC speed is 1000Mbit(1GBit), which equals 1,000,000,000 bits.

RTT is 50ms, which equals .05 seconds.

Bandwidth delay product(BDP) in bits – 1,000,000,000 * .05 = 50,000,000

Convert BDP to bytes – 50,000,000 / 8 = 6,250,000 bytes, or 6.25 MB
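If you prefer to let the shell do the arithmetic, the same calculation (result in bytes) can be expressed as follows, using the values from the example above (1Gbit NIC, 50ms RTT):

echo $(( 1000000000 * 50 / 1000 / 8 ))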

Many products/network appliances state to double, or even triple your BDP value to determine your maximum buffer size.

Table with sample buffer sizes based on NIC speed:

 

NIC Speed (Mbit) RTT(ms) NIC bits BDP(bytes) BDP(mb) net.core.rmem_max net.ipv4.tcp_rmem
100 100 100000000 1250000 1.25 2097152 4096 65536 2097152
1000 100 1000000000 12500000 12.5 16777216 4096 1048576 16777216
10000 100 10000000000 125000000 125 134217728 4096 1048576 33554432

 

Notice in the 10gb NIC row that the net.core.rmem_max value is greater than the net.ipv4.tcp_rmem max value. This is an example of splitting the size for multiple data streams. Depending on what your server is being used for, you may have several streams running at one time. For example, a multistream FTP client can establish several streams for a single file transfer.

Note that for net.ipv4.tcp_{r,w}mem, the max value can’t be larger than the equivalent net.core.{r,w}mem_max.
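As an illustration only, the 1000Mbit row from the table above could be applied on a Linux system with commands like the following; the wmem values mirror the rmem values here purely as an assumption, and these are example values rather than a recommendation, so test them for your environment:

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 1048576 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 1048576 16777216"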

 

net.core.netdev_max_backlog should be set based on your system load and traffic patterns. Some common values used are 32768 or 65536.

 

Jumbo frames

For Ethernet networks, enabling jumbo frames (a larger Maximum Transmission Unit (MTU)) on all systems (hosts and switches) can provide a significant performance improvement, especially when the application uses large payload sizes. Enabling jumbo frames on some hosts in the configuration and not others can cause bottlenecks. It is best to enable jumbo frames on all hosts in the configuration or on none of them.

The default 802.3 Ethernet frame size is 1518 bytes. The Ethernet header consumes 18 bytes of this, leaving an effective maximum payload of 1500 bytes. Jumbo frames increase the payload from 1500 to 9000 bytes. Ethernet frames use a fixed-size header. The header contains no user data and is overhead. Transmitting a larger frame is more efficient, because the overhead-to-data ratio is improved.

 

Setting TCP parameters

The following is a list of methods for setting TCP parameters on various operating systems. This is not an all-inclusive list, consult your operating system documentation for more details.

If you make changes to any kernel parameters, it is strongly recommended that you test these changes before making changes in a production environment.

It is also suggested that you consult product documentation for recommended settings for specific products. Many products will provide minimum required settings and tuning guidance to achieve optimal performance for their product.

Windows

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
MaxUserPort = dword:0000fffe

Solaris

ndd -set /dev/tcp tcp_max_buf 4194304

AIX

/usr/sbin/no -o tcp_sendspace=4194304

 

Linux

sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"

HP-UX

ndd -set /dev/tcp tcp_ip_abort_cinterval 20000
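Note that on Linux, sysctl -w only changes the running value. A common way to make a setting persistent across reboots is to add it to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reload the file, for example:

net.ipv4.tcp_rmem = 4096 87380 8388608

sysctl -p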

 

Common TCP parameters by operating system

The following is a list of commonly tuned parameters for various operating systems. Consulting the documentation for your specific operating system and/or product for more details on what parameters are available, recommended settings, and how to change their values.

Solaris

  • tcp_time_wait_interval
  • tcp_keepalive_interval
  • tcp_fin_wait_2_flush_interval
  • tcp_conn_req_max_q
  • tcp_conn_req_max_q0
  • tcp_xmit_hiwat
  • tcp_recv_hiwat
  • tcp_cwnd_max
  • tcp_ip_abort_interval
  • tcp_rexmit_interval_initial
  • tcp_rexmit_interval_max
  • tcp_rexmit_interval_min
  • tcp_max_buf

 

AIX

  • tcp_sendspace
  • tcp_recvspace
  • udp_sendspace
  • udp_recvspace
  • somaxconn
  • tcp_nodelayack
  • tcp_keepinit
  • tcp_keepintvl

Linux

  • net.ipv4.tcp_timestamps
  • net.ipv4.tcp_tw_reuse
  • net.ipv4.tcp_tw_recycle
  • net.ipv4.tcp_fin_timeout
  • net.ipv4.tcp_keepalive_time
  • net.ipv4.tcp_rmem
  • net.ipv4.tcp_wmem
  • net.ipv4.tcp_max_syn_backlog
  • net.core.rmem_default
  • net.core.rmem_max
  • net.core.wmem_default
  • net.core.wmem_max
  • net.core.netdev_max_backlog

 

HP-UX

  • tcp_conn_req_max
  • tcp_xmit_hiwater_def
  • tcp_ip_abort_interval
  • tcp_rexmit_interval_initial
  • tcp_keepalive_interval
  • tcp_recv_hiwater_def
  • tcp_recv_hiwater_max
  • tcp_xmit_hiwater_def
  • tcp_xmit_hiwater_max

Checking TCP performance

 

The following are some useful commands and statistics you can examine to help determine the performance of TCP on your system.

ifconfig

ifconfig -a, or ifconfig <specific_interface>

Sample output:

eth1 Link encap:Ethernet HWaddr 00:00:27:6F:64:F2

inet addr:192.168.56.102 Bcast:192.168.56.255 Mask:255.255.255.0

inet6 addr: fe80::a00:27ff:fe64:6af9/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:5334443 errors:35566 dropped:0 overruns:0 frame:0

TX packets:23434553 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:15158 (14.8 KiB) TX bytes:5214 (5.0 KiB)

 

Examine the RX and TX packets lines of the output.

errors Packets errors. Can be caused by numerous issues, such as transmission aborts, carrier errors, and window errors.
dropped How many packets were dropped and not processed. Possibly because of low memory.
overruns Overruns often occur when data comes in faster than the kernel can process it.
frame Frame errors, often caused by bad cable, bad hardware.
collisions Usually caused by network congestion.

 

netstat -s

netstat -s will display statistics for various protocols.

Output will vary by operating system. In general, you are looking for anything related to packets being “dropped”, “pruned”, and “overrun”.
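For example, on Linux a quick way to pull out the lines of interest is a case-insensitive match on the keywords mentioned above:

netstat -s | grep -iE "drop|prune|overrun|collapse"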

 

Below is sample TCPExt output.

Depending on your specific system, output for these values will only be displayed if it is non-zero.

XXXXXX packets pruned from receive queue because of socket buffer overrun Receive buffer possibly too small
XXXXXX packets collapsed in receive queue due to low socket buffer Receive buffer possibly too small
XXXXXX packets directly received from backlog Packets being placed in the backlog because they could not be processed fast enough. Check if you are dropping packets. Just because the backlog is being used does not necessarily mean something bad is happening. It depends on the volume of packets in the backlog, and whether or not they are being dropped.

 

Further reading

 

The following additional reading provides the RFC for TCP extensions, as well as recommended tuning for various applications.

RFC 1323

RFC 1323 defines TCP Extensions for High Performance

https://www.ietf.org/rfc/rfc1323.txt

 

Oracle Database 12c

https://docs.oracle.com/database/121/LTDQI/toc.htm#BHCCADGD

 

Oracle Coherence 12.1.2

https://docs.oracle.com/middleware/1212/coherence/COHAG/tune_perftune.htm#COHAG219

JBoss 5 clustering

https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/Clustering_Tuning.html

 

Websphere on System z

https://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaag/wp64bit/l0wpbt00_ds_linux_kernel_settings.htm

 

Tuning for Web Serving on the Red Hat Enterprise Linux 6.4 KVM Hypervisor

ftp://public.dhe.ibm.com/linux/pdfs/Tuning_for_Web_Serving_on_RHEL_64_KVM.pdf

 

Oracle Glassfish server 3.1.2

https://docs.oracle.com/cd/E26576_01/doc.312/e24936/tuning-os.htm#GSPTG00007

 

Solaris 11 tunable parameters

https://docs.oracle.com/cd/E26502_01/html/E29022/appendixa-28.html

 

AIX 7 TCP tuning

http://www.ibm.com/developerworks/aix/library/au-aix7networkoptimize3/

 

Redhat 6 Tuning

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Performance_Tuning_Guide/index.html#main-network

Behind the Delete Trigger in Sales Cloud Application Composer


Cautionary Details on Delete Trigger Behavior with TCA Objects

Developers and technically-inclined users who have ever needed to extend Oracle Sales Cloud are probably familiar with Application Composer (known as App Composer to its friends) — the built-in, browser-based collection of tools that makes it possible to extend Sales Cloud safely without requiring a level of system access that would be inconsistent with and unsafe for the cloud infrastructure. Likewise, many who have built App Composer extensions probably know about object triggers and how to add custom Groovy scripts to these events. Object trigger logic is a major part of most Sales Cloud extensions, especially when there is a need to communicate with external systems. With current Sales Cloud releases (Rel9 and Rel10), harnessing these trigger events and arming them with Groovy scripts arguably has become one of the more effective strategies for developing point-to-point data synchronization extensions.

Existing documentation on trigger usage in App Composer is located in two places: the Groovy Scripting Reference  and the guide for Customizing Sales. But due to the scope of these documents and the extent of topics that require coverage, the authors were unable to provide detailed information on triggers.  These reference guides were never meant to offer best practice recommendations on the use of specific triggers for different purposes. Given this need — that Sales Cloud extension developers need more guidance in order to be more proficient when using object triggers — there are numerous areas requiring deeper technical investigation.  These topics can, and probably should, be covered in detail in the blog arena.

One area requiring additional clarification and vetting of options is a somewhat obscure anomaly in the behavior of delete triggers across different objects in Sales Cloud. By design, objects belonging to Oracle’s Trading Community Architecture (TCA) data model – for example Accounts, Addresses, Contacts, Households, and more – are never deleted physically from the database, at least not through the native Sales Cloud application UI. Therefore, delete triggers do not fire as expected for these objects. In other words, any piece of App Composer extension logic touching TCA objects that includes a delete trigger as one of its components will probably fail. For non-TCA objects, triggers specific to delete actions work as designed. This post will explore differences in delete trigger behavior between TCA and non-TCA data objects in Sales Cloud. The illustrative use case used for this post is a requirement to keep a rudimentary audit trail of deleted object activity in Sales Cloud, tracking the object deleted along with user and timestamp values.

Prerequisites for the Use Case

A custom object, “DeleteAuditLog”, will act as the container for storing the archived object, user, and timestamp details. It has the following attributes:

Field Name Field Type Standard/Custom Additional Info
CreatedBy Text Standard value used to determine who performed the delete
CreationDate Date Standard date of deletion
RecordName Text Standard
LastUpdateDate Date Standard
LastUpdatedBy Text Standard
Id Number Standard
ObjectId Number Custom holds id of object deleted
ObjectType Text Custom holds type of object deleted
ObjectDetail Text Custom holds details for object deleted

Granted, the data elements saved to the custom object do not represent an especially meaningful audit trail; the intent is not to implement a complete solution but rather to demonstrate what is possible with trigger scripts.

Although not strictly required, a global function that handles creating the DeleteAuditLog record and assigning its attribute values eliminates duplicate code, and having it in place as a global function is consistent with modular coding best practices. Here are the details for this global function:

[Screenshot triggers8: definition of the CreateDeleteAuditLog global function in App Composer]
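In case the screenshot is hard to read, the following is a minimal Groovy sketch of what the body of such a global function might look like. It assumes a signature of CreateDeleteAuditLog(String objType, Long objId, String objDtl) returning Boolean; the view accessor name DeleteAuditLog_c and the _c field suffixes are assumptions based on typical App Composer naming conventions for custom objects, so verify them against your own configuration:

// Sketch only: create a DeleteAuditLog record and populate the custom fields
def logView = newView('DeleteAuditLog_c')       // accessor name assumed
def logRow = logView.createRow()                // new audit record
logRow.setAttribute('ObjectType_c', objType)    // type of the deleted object
logRow.setAttribute('ObjectId_c', objId)        // id of the deleted object
logRow.setAttribute('ObjectDetail_c', objDtl)   // free-form details for the deleted object
logView.insertRow(logRow)                       // CreatedBy and CreationDate are populated automatically
return true                                     // callers treat a truthy result as success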

Trigger Overview

For readers who have not yet ventured into the world of App Composer triggers, a short introduction is in order. Creating Groovy scripts for event triggers is a way to extend Sales Cloud by telling the system to do something extra when object-related system events occur. That “something extra” can take a variety of forms: validating a field, populating a custom field, calling out to an external web service, creating new instances of objects (standard or custom), or reacting to a given value in an object field with some extra processing. Different sequences of triggers fire when objects are created, modified, or deleted. Some triggers are shared across multiple UI actions; others are single-purpose.

There are up to fifteen different object trigger events exposed in App Composer, although not all of them are exposed for every object. For example, the “Before Commit to the Database” trigger is not exposed for objects belonging to the Sales application container. To access and extend trigger events, select the correct application container, navigate to the object of interest in the left navigation frame, and expand the object accordion to expose the “Server Scripts” link.

[Screenshot triggers1: App Composer navigation with the object accordion expanded to show the Server Scripts link]

Clicking the Server Scripts link populates the App Composer content window with the Server Scripts navigator for the selected object, one component of which is the tab for Triggers. (There are additional tabs in the content window for Validation Rules and Object Functions.) Selecting the Triggers tab exposes areas for Object Triggers and Field Triggers. New, edit, and delete actions for triggers are available through icons or through the Action drop-down menus for Object Triggers and Field Triggers.

[Screenshot triggers2: Server Scripts navigator showing the Triggers tab with Object Triggers and Field Triggers areas]

The majority of triggers are related to database events: before/after insert, before/after update, before/after delete, before/after commit, before/after rollback, and after changes are posted to the database. There are several remaining triggers related to ADF page life cycle events: after object create, before modify, before invalidate, and before remove.

For investigative purposes, it can be instructive to create Groovy scripts for these trigger events in order to reveal firing order and other trigger behaviors; in fact, that was the strategy used here to clarify trigger behavior across different types of objects. A typical trigger script that does nothing other than log the trigger event might consist of the following two lines:

println 'Entering AfterCreateTrigger ' + adf.util.AddMilliTimestamp()
println 'Exiting AfterCreateTrigger ' + adf.util.AddMilliTimestamp()

(NOTE: AddMilliTimestamp is a global function that displays time formatted with an added milliseconds component.)
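For reference, such a helper could be as simple as the sketch below. It assumes a parameterless global function returning a String, that the chosen format pattern is acceptable for your logs, and that the Groovy Date.format method is available in your release’s scripting whitelist:

// Sketch only: current time with a milliseconds component, e.g. 14:05:32.187
return new Date().format('HH:mm:ss.SSS')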

After Groovy trigger scripts are in place for the object trigger events in focus, it then becomes possible to test multiple actions (e.g. object creates, updates, deletes) across different objects in the Sales Cloud user interface. This results in debug-style statements getting written to server logs, which can then be examined in the App Composer Run Time Messages applet to discover end-to-end trigger behavior for various UI actions. The logs for commonly-performed UI actions on Sales Cloud objects follow below. (Run Time Messages content was exported to spreadsheet format to allow selecting the subset of messages applicable to each UI action).

Object create followed by Save and Close:

[Screenshot triggers3: Run Time Messages log for an object create followed by Save and Close]

Object modify followed by Save and Close:

[Screenshot triggers4: Run Time Messages log for an object modify followed by Save and Close]

Object (non-TCA) delete followed by User Verification of the Delete Action:

[Screenshot triggers5: Run Time Messages log for a non-TCA object delete followed by user verification]

Normal Delete Trigger Behavior

As the above listings show, delete triggers work pretty much as expected, at least for non-TCA objects. The BeforeRemove, BeforeInvalidate, and BeforeModify events occur before the database-related events (Before/After Delete and AfterCommit) fire. Given this sequence, if the goal of the Sales Cloud extension is to log details of the delete event, it makes the most sense to target the trigger that fires once the object deletion is certain but before the transaction commits, so that the log record is created in the same database transaction. Focusing on the AfterDelete event should therefore be optimal; it fires only for the delete action and it occurs, by definition, after the delete happens in the database.

There is a behavior unique to the delete action and its chain of triggers, however, that makes this straightforward approach a bit more complicated. After the BeforeModify trigger event, which occurs fairly early in the chain, it is no longer possible to get a handle to the to-be-deleted record. Any of the record’s attribute values that need to be read must therefore be read during or before the BeforeModify event; after that event the system treats the record as deleted, so it is effectively no longer available.

Because the audit log use case requires reading the value of an object id and then writing it to the audit trail record, hooking into the BeforeModify event is necessary. But because the BeforeModify trigger is not unique to the delete UI action, the script has to include a check to make sure the trigger is firing as part of the delete chain of events and not for a normal update. There does not seem to be a way to perform this check using any native field values, so one option is to push field values onto the session in the BeforeModify trigger and then pull them off the session in the AfterDelete trigger.

Script for the BeforeModify trigger event:

println 'Entering BeforeModify ' + adf.util.AddMilliTimestamp()
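// Stash identifying values on the session; after BeforeModify the record is treated as deleted and can no longer be read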
adf.userSession.userData.put('ObjectId', SalesAccountPartyId)
adf.userSession.userData.put('ObjectDetail', 'Name: ' + Name + ', Org: ' + OrganizationName)
println 'Exiting BeforeModify ' + adf.util.AddMilliTimestamp()

Script for the AfterDelete trigger event:

println 'Entering AfterDelete ' + adf.util.AddMilliTimestamp()
println 'Creating Delete Audit Log Record'
def objType = 'Sales Lead'
def objId = adf.userSession.userData.ObjectId
def objDtl = adf.userSession.userData.ObjectDetail
def logResult = adf.util.CreateDeleteAuditLog(objType, objId, objDtl) ? 
  'Delete Log Record created OK' : 'Delete Log Record create failure'
println logResult
println 'Exiting AfterDelete ' + adf.util.AddMilliTimestamp()

TCA Objects and the Delete Trigger

Implementing a similar trigger for a TCA object (e.g. Account) leads to a far different outcome. The failure to log the Account UI delete event becomes clear when the debug Run Time Messages associated with the event are examined.

The delete action on the Account object results in the following series of triggers getting fired:

[Screenshot triggers6: Run Time Messages log for an Account (TCA) delete action]

The above listing of fired triggers is representative of what takes place when a TCA object record is “deleted” in the UI. By design, soft deletes occur instead of physical deletes, so the sequence of trigger events looks more like the object is being modified than deleted. And record updates are indeed what occur: TCA object records are not physically deleted; instead they are marked as inactive by changing the value of the PartyStatus (or similar) field to ‘I’. This value tells the system to treat the records as if they no longer exist in the database.

Hooking into delete trigger events for TCA objects will therefore never have the desired effect. What can be done about the audit log use case? Knowing that this is the behavior for TCA objects, and that the value of the PartyStatus (or equivalent) field can be used to detect the delete UI action, all of the audit log logic can be contained in the BeforeModify trigger event; there is no need to push and pull values off of the session. Hooking into the BeforeModify trigger event remains viable for TCA objects even though the chain of triggers is far different.

Here is a sample script, which shares some pieces of the delete script for Sales Lead above, for the Account (TCA) object:

println 'Entering BeforeModifyTrigger ' + adf.util.AddMilliTimestamp()
// check for delete activity
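// for TCA objects a UI delete flips PartyStatus to 'I' instead of removing the row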
if (isAttributeChanged('PartyStatus') && PartyStatus == 'I') {
  println 'Creating Delete Audit Log Record'
  def objType = 'Account'
  def objId = PartyId
  def objDtl = OrganizationName
  def logResult = adf.util.CreateDeleteAuditLog(objType, objId, objDtl) ? 
    'Delete Log Record created OK' : 'Delete Log Record create failure'
  println logResult
} else {
  println 'Record Change other than a delete attempt'
}
println 'Exiting BeforeModifyTrigger ' + adf.util.AddMilliTimestamp()

Short-Term Workarounds and Long-Term Prognosis

Multiple customer service requests related to the delete trigger anomaly for TCA objects have been filed across various versions of Sales Cloud, and this activity has resulted in at least one KnowledgeBase article, “Unable To Restrict Deletion Of Contacts Via Groovy” (Doc ID 2044073.1), being published about the behavior. A workaround entirely consistent with what was presented here for TCA objects is discussed in that article. For the longer term, enhancement request #21557055 has been filed and approved for a future release of Sales Cloud.
