
Creating a Documents Cloud Service REST Client from the DOCS WADL


Documents Cloud Service (DOCS) provides a WADL file for the REST API. This can be used in JDeveloper or other tools to quickly create a REST client in Java with minimal coding. The WADL file contains all of the 1.1 REST API service calls, thus the WADL2Java output generates handles for calling any DOCS service.

JDeveloper 12c has a built-in “RESTful Client and Proxy” feature that can take a WADL file and produce Java code. Other tools, such as SoapUI, can also take a WADL file and produce a project that quickly enables calling REST services. Both JDeveloper and SoapUI use WADL2Java to produce the artifacts. The WADL2Java tool can also be used from the command line, but tools like these shorten the turnaround time. For more details on the WADL2Java tool, see this page.

Download the Documents Cloud Service WADL file from this page.

Create a new application and project

Create a new custom Java application in JDeveloper.

Create a new “RESTful Client and Proxy”

Follow the wizard.

Click next on the first page.

 

Create new RESTful Client and Proxy

Select the style of output. The default is Jersey 1.x.

 

Select style for generated Java classes

Browse to the WADL file. Name the package as you like. Click Next.

 

Select the WADL file and name the package

 

Accept the defaults. Note that the Base URI will eventually be changed to your DOCS instance URL, but it isn’t necessary to change it at this point. The WADL uses a Base URI of https://DocumentsCloudService to produce cleaner class names. The WADL file could be updated prior to creating the JDeveloper application; however, the class names would then be generated from the full URI and become cumbersome to use. Changing the Base URI after the code is generated is a simple one-line code change.

Use the default Base URI and Class Name. The URI will be updated in a later step.

On the next page, do not select any policies.

 

Click next on the Policies page

 

Click Next to create the REST client.  This executes the wadl2java tool against the WADL file to create Java classes that can be used to call any Documents Cloud Service.

 

The generated code creates a basic client. The newly generated Java files are DocumentsCloudService.java, which contains the logic to call all services, and a corresponding test client called DocumentsCloudServiceClient.java. Notice that the different API resources are defined (Files, Catalog, Users).


 

One update must be made to the DocumentsCloudService.java file: the endpoint must be set. Change the https://DocumentsCloudService URL to point to a valid DOCS host.

Change:

URI originalURI = URI.create("https://DocumentsCloudService");

To a valid endpoint:

URI originalURI = URI.create("https://scdemos-scuscdc.documents.us2.oraclecloud.com/documents/api/1.1");
Update the DOCS instance URL

 

Adding HTTP Logging for Troubleshooting

 

For troubleshooting REST requests and responses, add the logging filter to the client in DocumentsCloudServiceClient.java. This will print out the HTTP requests and responses when it runs. Note that this is useful for testing or troubleshooting but is verbose and may only be needed when the HTTP dialogue is of interest to you.

public class DocumentsCloudServiceClient {
    public static void main(String[] args) {
        
        Client client = DocumentsCloudService.createClient();
        client.addFilter(new LoggingFilter(System.out));

Adding the Authorization Parameter

 

The Authorization header must be present on all DOCS service calls. For testing, the authorization string can be hard-coded in the client; however, it must be stored in and retrieved from a keystore for anything beyond basic client testing. For demo purposes, the encoded auth string can be kept in a variable in your client class.

    String authorization = "Basic mybase64pw=";
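
One quick way to produce that Base64 value for testing is a shell one-liner (the credentials below are placeholders, not real accounts):

    echo -n 'jane.doe@example.com:MyPassword1' | base64
    # paste the printed value after "Basic " in the authorization variable above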

 

Calling Services

 

Under the “add your code here” comment, add service calls to the REST API. For now, get the responses back as Strings (JSON output) and use the generated clients to call services. The calls to REST services follow the URI format of the resources. For example, documentscloudservicefolders.items() will get the authorized user’s Personal Workspace.

        //************************************************************************************
        // Folders API Test - get personal workspace
        //************************************************************************************
          
        //GET folders/items 
        System.out.println("\n*******************************\n* GET folders/items\n*******************************\n"); 
        ClientResponse response = documentscloudservicefolders.items().get(authorization, ClientResponse.class);
        int status = response.getStatus();
        if(status == 200){
            System.out.println(response.getEntity(String.class));
        }
        else{
            System.out.println("Error: " + status + ": " + response.getStatusInfo());
        }

Services that Require a “role” parameter

The WADL2Java output creates a Role.java class file. This is because for Documents Cloud Service, the “role” parameter contains a fixed set of options that can be used: viewer, downloader, contributor, or manager. The Role.java class contains an Enumeration of these values. To use the values, reference the parameter in the service call using, for example, Role.DOWNLOADER.

        response = documentscloudserviceshares.id(folderId).role().put(userId, Role.DOWNLOADER, authorization, ClientResponse.class);

 

File upload

 

To use the upload service, add a line to the customizeClientConfiguration method in DocumentsCloudService.java so that the client can post multipart forms. customizeClientConfiguration is the method where additional options can be set when the client objects are created. If this is not done, the following error occurs when attempting to upload a file to DOCS:

Caused by: com.sun.jersey.api.client.ClientHandlerException: A message body writer for Java type, class com.sun.jersey.multipart.FormDataMultiPart, and MIME media type, multipart/form-data, was not found

 

    /**
     * Template method to allow tooling to customize the new Client
     *
     */
    private static void customizeClientConfiguration(ClientConfig cc) {
        cc.getClasses().add(MultiPartWriter.class);
    }

 

Then, in the application, include the multipart jar (jersey-multipart-1.18.jar). This jar ships with JDeveloper 12c in oracle_common/modules. It can be included on its own, or via the JDeveloper library “REST Client Lib (JAX-RS 2.x)”. Note that the other two libraries, JAX-RS Jersey 1.x and JAX-RS Jersey Jackson (Bundled), are added automatically when the RESTful Client and Proxy wizard completes.

 

image015

 

Once the multipart jar is available, the POST to upload a file requires a request with two body parts, as shown below. The first is a JSON payload that contains the parent folder ID. The second part is the actual file content. Note that this example uploads a small text file that contains only the text “Hello World!”.

 

----1234567890
Content-Disposition: form-data; name="jsonInputParameters"

{
"parentID": "<yourfolderguid>"
}
----1234567890
Content-Disposition: form-data; name="primaryFile"; filename="helloworld.txt"
Content-Type: text/plain

Hello World!

----1234567890--

 

Sample code to create the multipart request, with json payload and file.

        //POST files/data  - needs MultiPart features.
        System.out.println("\n*******************************\n* POST files/data\n*******************************\n");

        String parentID = "self"; //self is the top level folder. Use a folder GUID to uplaod to a specific subfolder.
        String jsonPayload = "{\"parentID\":\"" + parentID + "\"}";
        
        //Add the jsonInputParameters to the multi part form
        FormDataMultiPart multipart = new FormDataMultiPart();
        FormDataBodyPart fdpjson = new FormDataBodyPart("jsonInputParameters", jsonPayload);
        multipart.bodyPart(fdpjson);

        //Add the file contents to the multipart form. This sample assumes text/plain; update the MediaType according to the file being uploaded.
        File f = new File("c:/temp/helloworld.txt");
        FileDataBodyPart fdp = new FileDataBodyPart("file", f, MediaType.TEXT_PLAIN_TYPE);
        fdp.setName("primaryFile");

        multipart.bodyPart(fdp);
        
        response = documentscloudservicefiles.data().postMultipartFormData(multipart, authorization, ClientResponse.class);
        status = response.getStatus();
        if(status == 200){
            System.out.println(response.getEntity(String.class));
            System.out.println("File uploaded successfully.");
        }
        else{
            System.out.println("Error: " + status + ": " + response.getStatusInfo());
            System.out.println(response.getEntity(String.class));
        }

File Download

 

To download a file, the service call must write the response stream to a file. The filename is in the Content-Disposition header; use a regex to extract it. Once the filename is found, use Java IO classes to write the file to disk.

Add these imports:

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.annotation.Generated;
import javax.ws.rs.core.MultivaluedMap;

Sample code to call the service, extract the Content-Disposition header and filename attribute, and save the file to a directory, with the file name being the same as what is in DOCS.

        //GET files/{id}/data
        response = documentscloudservicefiles.id(fileId).data().get(authorization, ClientResponse.class);
        status = response.getStatus();
        if(status == 200){
            MultivaluedMap<String, String>  headers = response.getHeaders();
            String contentDisposition = headers.getFirst("Content-Disposition");  //This header is formatted like this:     attachment; filename*=UTF-8''helloworld.txt; filename="helloworld.txt"
            
            //Use regex to pull out filename from content-disposition header. Following sample from Stack Overflow http://stackoverflow.com/questions/8092244/regex-to-extract-filename
            String fileName = null;
            Pattern regex = Pattern.compile("(?<=filename=\").*?(?=\")");
            Matcher regexMatcher = regex.matcher(contentDisposition);
            if (regexMatcher.find()) {
                fileName = regexMatcher.group();
            }
    
            String outputDir = "C:/temp/";
            
            InputStream is = response.getEntity(InputStream.class);
            try {
                //For sample test, overwrite file if it exists
                Files.copy(is, new File(outputDir + fileName).toPath(), StandardCopyOption.REPLACE_EXISTING);
                System.out.println("Saved file " + fileId + " to path : " + outputDir + fileName + ".");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        else{
            System.out.println("Error: " + status + ": " + response.getStatusInfo());
        }

Java Flight Recorder


Overview

Performance issues are some of the most difficult and expensive issues to diagnose and fix.  For Java applications there is a great tool called the Java Flight Recorder (JFR) that can be used both proactively, to find potential performance issues during testing before they become apparent through external metrics, and reactively, to troubleshoot issues that have appeared during performance testing or in production.

The Java Flight Recorder captures information about the Java runtime and the Java application running in it, and dumps it to a file with very little overhead (typically less than 2%). The data is recorded as time-stamped data points called events. Typical events include threads waiting for locks, garbage collections, periodic CPU usage data, file/socket input/output, JDBC request processing, servlet invocations, and so on. By default, every event that took more than 10 ms is recorded. When creating a JFR recording, you can create recording templates, which are sets of events that should be recorded. However, the default template is good enough to capture most of the details required to monitor the health of the system.

How to configure Java Flight Recording

How to Enable JFR

JFR is enabled by default for JRockit.

For HotSpot, you have to specify the following Java arguments to enable JFR:
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
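
On Oracle JDK 7u40 and later you can also have a recording start at JVM startup with the -XX:StartFlightRecording option; a minimal sketch (the duration and file name are illustrative):

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=120s,filename=/tmp/startup.jfr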

How to start/stop JFR

Using Command line to Start/Stop JFR

 

  1. Go to the $JAVA_HOME/bin directory on the machine where the JVM is running.
  2. Run the “./jcmd” command to get a list of the running JVM processes along with their process IDs. For JRockit, the command is “./jrcmd”.

jcmdCommand

  3. To get a list of commands available for a JVM process, run the “./jcmd $ProcessID help” command. If JFR is enabled for the JVM instance, you will see JFR-related commands (JFR.start, JFR.stop, etc.).
  4. To start or stop a JFR recording, run the commands shown below. Click here for more details and documentation about the JFR commands.

JFRCommands
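
As a rough sketch of those commands (the process ID, recording name, and file paths below are illustrative):

    ./jcmd 12345 JFR.start name=MyRecording settings=profile duration=120s filename=/tmp/myrecording.jfr
    ./jcmd 12345 JFR.check                                          # list the recordings for the process
    ./jcmd 12345 JFR.dump name=MyRecording filename=/tmp/snapshot.jfr
    ./jcmd 12345 JFR.stop name=MyRecording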

Using Java Mission Control (JMC) to start/stop JFR

Start JMC from $JAVA_HOME/bin with the “./jmc” command. For the JRockit JVM, the command is “./jrmc”. You can see a list of Java Virtual Machine (JVM) processes running under the JVM Browser.

JMCConsole

From the JVM Browser you can select a JVM process and start, stop, or dump a flight recording (if one was already started).

JMCStartJFR

JMCStartJFRScreen

Using triggers to start/stop JFR

You can set up Java Mission Control to automatically start or dump a flight recording when a threshold is reached for some system resource or some condition is met for an MBean. This is done from the JMX console. To start the JMX console, find your application in the JVM Browser, right-click it, and select “Start JMX Console”.

JMCStartJMX

Select “Triggers” tab at the bottom of the screen to configure new trigger.

You can choose to create a trigger on any MBean in the application. There are several default triggers set up for common conditions such as high CPU usage, deadlocked threads, or too large of a live set. Select “Add” to choose any MBean in the application, including your own application-specific ones. When you select your trigger, you can also select the conditions that must be met.

Click the check box next to the trigger to enable it. Select the condition that should be met for the trigger, then click the Action tab. From the Action tab, select what to do when the condition is met.

JMCTriggerCondition

JMCTriggerAction

How to inspect JFR

CPU usage

As shown below, you can check the average CPU usage and the CPU usage while the performance test was run. The performance test was run twice when this JFR was captured, and as shown in the screen shot below, both times the CPU shot up to 100%. This of course is not a good sign and needs to be corrected. So how do you know the number of CPUs assigned to the machine? You can check that on the “System” tab.

JMC2CPU

JMCSystemOverview

As you can see in the above screen shot, two CPUs were assigned to the machine. When I changed it to 4 CPUs, it did help and the CPU was no longer 100% utilized; however, it was still above 90%, as shown in the screen shot below for the same performance test. This illustrates one use case for JFR: you can measure the performance improvement of the system when you make a configuration change.

JMC4CPU

Heap usage

The heap size is one of the most important resources impacting performance. Along with heap size, the garbage collection (GC) pattern also says a lot about performance: how many times GC was run, how much time it took to finish, and how many of those GCs were PermGen or full GCs. The GC cycle also helps detect memory leaks if any exist. If the memory used after each GC operation keeps growing, that is the first indication of a potential memory leak. More information about the Memory tab and detecting memory leaks can be found here.

JMC_JVMInformation

JMC_Memory

Contention

Contention is another easy-to-detect sign of performance issues. Too much contention can be the result of limited CPU, low available memory, or inefficient code. Looking at resource usage and the stack trace of the thread that caused contention can help you find the root cause. As shown in the example below, the same test with 2 CPUs had much more contention than with 4 CPUs.

Note: Ignore Muxer thread contention. These are special threads created by WebLogic to read/write data from/to sockets.

JMC_Contention

System configuration

The system configuration, shown in one of the screen shots above, provides details about the CPU, memory, and hardware. It also shows a list of environment variables and processes running on the system. The process list can help you check and make sure no unwanted process was running when the performance test was done.

JMC_EnvVariables

Weblogic tab pack

One of the interesting parts of JFR is its integration with WebLogic. JFR can capture all the events produced by the WebLogic Diagnostics Framework (WLDF), throttled per ECID (Execution Context Identifier). The WebLogic tab pack is not installed by default; you can follow the steps to install it.

From the WebLogic tab, you can check the time taken to process every request, identified by ECID, HTTP request, or servlet. You can also check the time taken by JDBC queries to identify queries that need tuning.

JMC_ECID

JMC_HTTPRequests

Operative set and Range navigator

Every tab in Java Mission Control has a range navigator at the top. It is a time range. You can select a period of time, say when the test was run or when you observed failures during the performance test. This helps you do more focused troubleshooting and also gives a zoomed view of the event graph. Once you select a range, you can ‘synchronize selection’ so every tab provides data within that time frame.

JMC_RangeNavigator

Another interesting feature is the operative set. When you identify, say, a troubled thread, you can add that thread to the operative set and view data about just that thread or operation in every tab. This helps you pinpoint the issue and recommend very specific tuning. Below is an example of using the operative set.

First, from the WebLogic tab, I select the request instance, identified by ECID, that took the longest to process (more than 2 seconds) and add it to the operative set. Select the ‘Add Selection’ option from the menu, as shown in the screen below.

JMC_AddOperativeSet

Then, from the Events tab, I click ‘Show Only Operative Set’, which shows only those events that happened during processing of the request I selected as the operative set. Looking at that event, you would notice that it spent more than 1 second waiting for a monitor, so you know why the request took more than 2 seconds to process.

JMC_OperativeThread

More information about JFR can be found here.

Getting Started with Oracle Database Cloud Service – Virtual Image – Creating a database


Introduction

The Oracle Database Cloud Service – Virtual Image option gives great flexibility in how to deploy databases in the Oracle Cloud. It allows customization of almost all aspects of the database (e.g. file system layout, memory management, etc.). This makes the option very beneficial – for example – in highly customized database deployments that are being migrated to the cloud. Please note when using the Oracle Database Cloud Service – Virtual Image option: all subsequent maintenance operations, including backup, patching and upgrades, are performed by you. This way all instances can be integrated into your existing maintenance schedule for your existing on-premise databases.

Only when the Oracle Database Cloud Service – Database as a Service option is selected is a database instance created as part of the provisioning. It comes with automatic backup and point-in-time recovery options plus patching and upgrades; however, it offers less flexibility, e.g. for the layout of data files. Please review this document to find the right option for your requirements.

This example will show you how to create a database after the Virtual instance has been provisioned as described in this document.

 

Create Storage Volumes

Log in to your Database Cloud Services Console – Home Page and navigate to the Instance Detail page of your instance by clicking on its name – ‘dbaas-demo-vi1’ in this example. On the details page, locate the node where the database is supposed to be created and select the “Scale Out” option from the context menu, as shown in the picture below.

storage1

 

Select the required size – in this example 30 GB have been chosen which gives plenty of space for patches etc. The disk automatically gets attached to the instance, once “Yes, Scale Up Service” has been clicked.

storage2

After pressing the “Yes, Scale Up Service” the instance goes into Maintenance Mode and is restarted.

storage4

Once the instance is restarted, navigate to the Compute Cloud Services Console and click on the storage section to review the created storage volume. The overview page will show the newly attached volume. It is attached to the instance at the next available slot: in this example this is /dev/xvdc. The first disk – /dev/xvdb – holds the base operating system.

storage3

 

Log in to the VM using the opc user – described here. Issuing fdisk -l will reveal the newly attached volume /dev/xvdc.

image4

In order to use the disk, a partition has to be created. This can be achieved by running fdisk /dev/xvdc – further details on this topic can be found in the Oracle Linux Administrator Guide.

image5

The next step is to create a file system on the new volume – optionally, logical volumes can also be created. For this example ext4 is chosen, which can be seen as the default in the Oracle Cloud. It is recommended to use a label for the volume – this can be done with the “-L” flag of mkfs.ext4 – which ensures that if the order of the disks attached to the VM changes, the volume stays mounted at the right mount point.

image6

After the file system is created, a mount point /u01 needs to be created to hold the binaries for the database home. To enable this mount point across reboots, the mount information should be recorded in /etc/fstab. After mounting the file system on /u01, make sure to change the ownership to the user oracle with the group oinstall.

image7
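
A sketch of the commands behind the last two screenshots, run as root (the device name, partition, and label are illustrative and must match your environment):

    mkfs.ext4 -L u01 /dev/xvdc1                                   # create the file system with a volume label
    mkdir /u01
    echo "LABEL=u01  /u01  ext4  defaults  0 0" >> /etc/fstab     # persist the mount across reboots
    mount /u01
    chown oracle:oinstall /u01                                    # hand the mount point to the oracle user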

Every Virtual Image instance comes with the database binaries pre-deployed in a tar.gz archive. To extract it, switch to the user oracle and extract it in the directory /u01. It should be extracted using the p flag to ensure the proper ownership of the files in the archive after extraction.

image8
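
A sketch of the extraction step (the archive path below is a placeholder for the archive pre-deployed on your instance):

    su - oracle
    cd /u01
    tar xzpf /path/to/predeployed_db_binaries.tar.gz              # the p flag preserves the permissions recorded in the archive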

After the extraction is complete, two scripts have to be run by opc with sudo, or by root. These scripts set permissions on the oraInventory and add certain files outside the Oracle Base, like the /etc/oratab file.

image9

At this stage additional storage should be added and mounted, using the process described here, to satisfy the database storage requirements. To create databases with the Virtual Image option, the Database Configuration Assistant (dbca) is used, following the same process as for on-premise databases.

dbca can be used in the command line as shown in this image:

image10
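
For readers without the screenshot, a representative silent-mode invocation looks roughly like the following; every value (paths, names, passwords, memory size) is an illustrative placeholder:

    $ORACLE_HOME/bin/dbca -silent -createDatabase \
      -templateName General_Purpose.dbc \
      -gdbName orcl -sid orcl \
      -sysPassword MyPassword1 -systemPassword MyPassword1 \
      -datafileDestination /u02/oradata \
      -characterSet AL32UTF8 \
      -totalMemory 2048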

Alternatively, the GUI version can be used once X11 forwarding is configured correctly. To enable the forwarding, simply add the lines to sshd_config as shown and restart the SSH daemon.

image11
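
The change typically amounts to enabling X11 forwarding in /etc/ssh/sshd_config and restarting the daemon; a sketch (directive values may differ in your environment):

    # /etc/ssh/sshd_config
    X11Forwarding yes
    X11UseLocalhost yes

    # restart the SSH daemon as root
    service sshd restart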

Starting a local X server and logging into the instance as oracle allows dbca to be started with the GUI displayed on the local screen. The process of creating a database is identical to creating a database on premise.

image12

 

Further Reading

Creating a Database on a Virtual Image Service Instance

http://www.oracle.com/pls/topic/lookup?ctx=cloud&id=GUID-4851560B-D4B6-4275-9950-9953BB66B040

Oracle Linux Chapter 18 Storage Management

http://docs.oracle.com/cd/E37670_01/E41138/html/ol_about_storage.html

Oracle Database Cloud Service Documentation

http://docs.oracle.com/cloud/latest/dbcs_dbaas/index.html

An ODI Journalizing Knowledge Module for GoldenGate Integrated Replicat


One of the new features in GoldenGate 12c is the Integrated Replicat apply mode.

All out-of-the-box versions of the ODI JKM for GoldenGate to this date were designed for the Classic Replicat apply mode and they rely on the Checkpoint table maintained by GoldenGate. This table is used to figure out which changed records can reliably be processed by ODI. However, if you choose to use the Integrated Replicat apply mode of GoldenGate, there is no Checkpoint table anymore.

This post proposes a solution to modify the out-of-the-box JKM for GoldenGate to support Integrated Replicat apply mode.

Out-of-the-box JKMs for Oracle GoldenGate

In a nutshell, ODI maintains window_ids to keep track of the primary keys (PKs) of new, updated, or deleted records, and uses the GoldenGate Checkpoint table to seed these window_ids: this seeding is called the Extend Window operation. If you want more details on the inner workings of the out-of-the-box JKMs and how they leverage the GoldenGate Checkpoint table with Classic Replicat, the post Understanding the ODI JKMs and how they work with Oracle GoldenGate will provide all the necessary background.

Understanding Oracle SCN and GoldenGate CSN

The Oracle Database documentation provides a very good description of what an SCN is: “A system change number (SCN) is a logical, internal time stamp used by Oracle Database. SCNs order events that occur within the database, which is necessary to satisfy the ACID properties of a transaction. Oracle Database uses SCNs to mark the SCN before which all changes are known to be on disk so that recovery avoids applying unnecessary redo. (…) Every transaction has an SCN.” You can find the complete description here: System Change Numbers (SCNs).

The Oracle GoldenGate documentation describes a CSN in the section About the Commit Sequence Number as follows: “A CSN is a monotonically increasing identifier generated by Oracle GoldenGate that uniquely identifies a point in time when a transaction commits to the database.“

On an Oracle database, GoldenGate will use the database SCN for its CSN.

Description of Integrated Replicat

With the Integrated Replicat mode, GoldenGate constructs logical change records (LCR) that represent source database DML transactions instead of constructing SQL statements. This approach greatly improves performance and reliability of the write process, and as such it does not require a Checkpoint Table.

The Integrated Replicat stores details of its processing in a system table: SYS.DBA_GG_INBOUND_PROGRESS.

This table stores the source SCN (System Change Number) of the last record processed by each Replicat (or GoldenGate CSN for non-Oracle sources). All records up to the APPLIED_LOW_POSITION SCN are guaranteed to be committed. Records between APPLIED_LOW_POSITION and APPLIED_HIGH_POSITION are being processed (i.e. they could be committed or not).

Figure 1 shows the complete structure of the table:

Table SYS.DBA_GG_INBOUND_PROGRESS (

  SERVER_NAME                    VARCHAR2(128)
  PROCESSED_LOW_POSITION         VARCHAR2(4000)
  APPLIED_LOW_POSITION           VARCHAR2(4000)
  APPLIED_HIGH_POSITION          VARCHAR2(4000)
  SPILL_POSITION                 VARCHAR2(4000)
  OLDEST_POSITION                VARCHAR2(4000)
  APPLIED_LOW_SCN                NUMBER
  APPLIED_TIME                   DATE
  APPLIED_MESSAGE_CREATE_TIME    DATE
  SOURCE_DATABASE                VARCHAR2(128)
  SOURCE_ROOT_NAME               VARCHAR2(128))

Figure 1: Structure of the SYS.DBA_GG_INBOUND_PROGRESS view.

If you want more details on Integrated Replicat for Oracle GoldenGate, the Oracle documentation provides an excellent description of the technology and its benefits here: Choosing Capture and Apply Modes.

The challenge for ODI is that there is no way to relate the source SCN to anything that can be stored in the J$ table, hence the need for a new approach.

Description of the new approach

Instead of having GoldenGate provide a WINDOW_ID when PKs are written to the J$ table, we remove the WINDOW_ID column altogether and replace it with the Oracle database ORA_ROWSCN pseudocolumn. The SCN is assigned by the database when the transaction commits: this provides us with a reliable value that we can use as a WINDOW_ID at no additional cost.

To have row level detail in that pseudo column, we have to create the J$ table with the option ROWDEPENDENCIES (for more details on this option, see this post from Tom Kyte: Using the Oracle ORA_ROWSCN). From then on, all we need is to retrieve the current SCN from the database when we do the ‘Extend Window’ operation: all records committed at this point in the J$ table are available for CDC processing. We can retrieve this value with the command:

select CURRENT_SCN from v$database

Note that the ODI user needs the ‘Select’ privilege on the ‘v$database’ view to be able to run this query.
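
A sketch of granting that privilege as a DBA (the ODI user name is illustrative; the grant is made on the underlying V_$DATABASE view, which backs the v$database synonym):

    sqlplus / as sysdba <<'EOF'
    grant select on v_$database to ODI_STAGING;
    EOF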

Steps to modify in the JKM

A modified version of the out-of-the-box JKM is available here: JKM Oracle to Oracle Consistent (OGG Online) Integrated Replicat. In this implementation, all KM tasks that were modified from the original JKM have their name prefixed with a * sign.

Table 1 below shows the alterations to the original JKM for the modified tasks (all changes are done in the Target Command of the tasks):

Create J$ Table
  Change: Remove WINDOW_ID; add the ROWDEPENDENCIES option to the table.
  Code: create table <%=odiRef.getJrnInfo("JRN_FULL_NAME")%>(<%=odiRef.getColList("", "[COL_NAME]\t[DEST_WRI_DT] null", ",\n\t", "", "PK")%>) ROWDEPENDENCIES

Create JV$ view
  Change: Replace the use of the JRN.WINDOW_ID column of the J$ table with the pseudocolumn JRN.ORA_ROWSCN. This is in the where clause of the sub-select.
  Code: and JRN.ORA_ROWSCN >= SUB.MIN_WINDOW_ID and JRN.ORA_ROWSCN < SUB.MAX_WINDOW_ID_DEL

Create data view
  Change: Replace the use of the JRN.WINDOW_ID column of the J$ table with the pseudocolumn JRN.ORA_ROWSCN. This is in the where clause of the sub-select (note that ORA_ROWSCN cannot be null).
  Code: JRN.ORA_ROWSCN >= SUB.MIN_WINDOW_ID

Create apply oby (2)
  Change: Add the INTEGRATED keyword and remove the code for the checkpoint table.
  Code: OdiOutFile (…) add replicat <%= odiRef.getOggProcessInfo(odiRef.getOggModelInfo("REPLICAT_LSCHEMA"), "NAME") %>#ODI_APPLY_NUMBER, INTEGRATED, exttrail <%= odiRef.getOggProcessInfo(odiRef.getOggModelInfo("REPLICAT_LSCHEMA"), "LTRAIL_FILE_PATH") %>
        stop replicat (…)
        start replicat (…)

Create apply prm (3)
  Change: In the GoldenGate PRM file for the replicat, remove the WINDOW_ID column from the KEYCOLS and COLMAP commands.
  Code: OdiOutFile (…) -APPEND COLMAP (…) map (…), KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>), INSERTALLRECORDS, OVERRIDEDUPS,
        COLMAP (<%= odiRef.getColList("", "[COL_NAME] = [COL_NAME],\n", "", "", "PK") %>);

Extend Window
  Change: Retrieve the database current SCN instead of retrieving the ID of the last processed record from the checkpoint table; all records with a lower SCN are guaranteed to be committed.
  Code: update <%=odiRef.getJrnInfo("CDC_SET_TABLE")%> set (CUR_WINDOW_ID, CUR_WINDOW_ID_DEL, CUR_WINDOW_ID_INS) = (select CURRENT_SCN, CURRENT_SCN, CURRENT_SCN from v$database) where CDC_SET_NAME = '<%=odiRef.getObjectName("L", odiRef.getJrnInfo("JRN_COD_MOD"), "D")%>'

Cleanup journalized table
  Change: Since we have replaced the WINDOW_ID column of the J$ table with the pseudocolumn ORA_ROWSCN, the query must be updated accordingly.
  Code: delete from <%=odiRef.getJrnInfo("JRN_FULL_NAME")%> <%=odiRef.getInfo("DEST_TAB_ALIAS_WORD")%> JRN where JRN.ORA_ROWSCN < (select (…))

Adding checkpoint table online
  Change: Keep the login, but remove the creation of the checkpoint table. Rename the step to ‘Online Connect to the Database’.
  Code (removed): add checkpointtable <%= odiRef.getInfo("SRC_DEFW_SCHEMA") %>.<%=odiRef.getOption("CHECKPOINT_TABLE_NAME")%>

Execute apply commands online
  Change: Remove the reference to the checkpoint table.

Table 1: Code changes to the original JKM

You can also delete the KM option called CHECKPOINT_TABLE_NAME option since it is not used by the JKM anymore.

Beyond Integrated Replicat

This approach can be expanded beyond the use-case described here: SCNs can be used in the J$ tables independently of the capture mechanism and, in the case of GoldenGate, can be used whether Integrated Replicat is used or not to deliver the data.

When similar mechanisms are available for non-Oracle databases, they can be used as well. For instance Microsoft SQL server allows the creation of a column of type rowversion that can be used for the same purpose.

Conclusion

With relatively simple modifications to the original JKM, ODI can now support the Oracle GoldenGate Integrated Replicat apply mode. This will allow you to take advantage of all the benefits provided by this new mode and allows further integration with ODI.

References

The following references were used to aggregate the necessary information for this post:

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI. For Oracle GoldenGate, visit “Oracle A-Team Chronicles for GoldenGate”.

Acknowledgements

Special thanks to Tim Garrod for pointing out the possible use of SCNs, Valarie Bedard, Nick Wagner and Mony Subramony for reviewing and validating the successive approaches and attempts, and Pat McElroy for sharing her expertise on Integrated Replicat.

 

Optimizing EhCache configuration for WebCenter Sites


In this blog article I would like to share the ideas and strategies that I have used to configure WebCenter Sites’ page, pagelet, and inCache caches. I will discuss several trade-offs in configuring the caches. I will not go into detail on ticket caching for CAS.

This blog expects that you have a good understanding of WebCenter Sites’ and its caches. It also focussed on Delivery server cache tuning. For Editorial servers the Asset Cache and resultset cache are much more important than pagelet cache.

 

Introduction

 

It is valuable to realise that the caching strategy of WebCenter Sites has two goals:

  1) Take load off the database for repeated-read items, and therefore provide a huge gain in scalability, as a potential source of contention is reduced.
  2) Reduce CPU operations. Because compound objects are cached, a lot of separate calls to either the database or the rendering engine (the JSP engine, for instance) are avoided. For instance, reading an asset from the database can easily take 20 database calls. If the asset POJO is stored in an in-memory map, the CPU cycles involved in combining the results of those 20 calls are avoided, in addition to avoiding the 20 round-trips to the database to read the data.

Now this caching also comes with a cost. Caches need to be managed. This can be done with a time-to-live timeout, or in combination with managed expiration at the time of change. The simplest form to implement (time-to-live [http://en.wikipedia.org/wiki/Time_to_live]) suffers from a data freshness problem: if data is changed before the timeout occurs, the client sees old and stale data.

In WebCenter Sites most caches are managed, meaning that items expire when data changes. This expiration can be at the level of the while cache ‘region’, for instance with result caches, or at the level of an individual cache item, as with pagelet caches.

The other costs of caching is memory consumption. As caching is per definition storing objects in a fast accessible store, all the caches in WebCenter Sites are on-heap caches, sometimes with overflow to disk. As heap memory (and disk memory too) is limited, the cache sizes need to be configured to the correct limits in conjunction with garbage collection tuning.

This overflow to disk has two functions: hold more items in cache than memory allows and be able to persist a cache so it can survive JVM restarts. The latter resolves the common cold start and thundering herd (http://en.wikipedia.org/wiki/Thundering_herd_problem) issues.

After this insight in the trade-offs between CPU cycles, database access, liveliness of data and memory usage, I would like to explain a little bit on how the WebCenter Sites’ caches work together.

At the lowest level is resultset caching. In this cache, the resultsets of data queries are stored (if so requested). As most data, for instance asset data, is read from the database, most data will be stored here. There is a one-to-one relationship between the query and the resultset.

On top of that cache is the AssetCache. In this cache, assets are stored as they are entirely read from the database. As it takes multiple database queries to read one asset, this cache is a composed cache. Because there is considerable overhead in composing an asset from its resultsets, this is a very efficient cache, especially when assets are read in uncached pagelets. The AssetCache sits on top of the resultset cache for all the queries that are fired to read an asset. It is also a subset of the resultset cache, as not all cached queries are used to read assets.

On top of the AssetCache is the ContentServer pagelet cache. This cache stores the output of rendering parts of a page or even the whole page. It stores character data, which can be HTML, JSON, XML or plain text. The cache consists of the rendering results of a call to a SiteCatalog entry, if that SiteCatalog entry was configured to cache its results. The result includes the rendering results of calls to the database, assets, search results, calls to external systems, etc. As such it is a composed cache, layered on top of the resultset cache and AssetCache.

Next to the ContentServer pagelet cache is the Blob Cache. This holds the results for calls to BlobServer. Technically it is comparable to the resultset cache, as it caches the database query and a pointer to the binary file on disk. Where the ContentServer pagelet cache holds character data and the output of rendered ‘elements’, the Blob Cache holds binary data.

On top of the ContentServer pagelet cache and the Blob Cache is the SatelliteServer pagelet cache. This cache is used by SatelliteServer to assemble full pages from the individual pagelets for character data like HTML or JSON. These pagelets can be cached or uncached. It queries the ContentServer pagelet cache for these pagelets, and stores them almost identically in its own cache; just some metadata is different. For binary data it queries the Blob Cache and then also caches the binary data. Depending on the size of the binaries this can be a lot of bytes being stored for a long time on the heap. Depending on the available heap size this might or might not be a challenge to configure optimally.

Besides pagelets, SatelliteServer also caches the web references for the vanity URLs. This is similar to the resultset cache, as there is a direct relationship between the URL (the cache key) and the database query issued. Technically there is more to it that is not relevant to this blog, but for cache configuration it is just an in-memory store of name/value pairs.

Now there is one more cache layer, and that is the Remote SatelliteServer pagelet cache. This is the same cache as the SatelliteServer pagelet cache, but stored on another JVM and thus another heap. The Remote SatelliteServer pagelet cache queries the ContentServer pagelet cache and Blob Cache, just like the Co-Resident SatelliteServer pagelet cache does. For Remote SatelliteServer the transport from ContentServer to SatelliteServer is done over HTTP; for Co-Resident it is in-process.

In the case of Co-Resident SatelliteServer, the resultset, asset, blob, CS pagelet, web reference and SS pagelet caches all live on the same heap. As some caches contain data read from other caches, you may want to use that information to tune the caches when the heap is not large enough to hold all the data in its in-memory caches, since storing all the data in memory will result in the best performance and throughput. And this is the main point that I want to bring across in this blog: how to tune the caches in conjunction with each other.

In the table below is a summary of the different characteristics I just mentioned.

cache            | what                           | on heap | on disk   | composed | remarks
resultset cache  | database queries results       | yes     | no        | no       |
asset cache      | fully read assets              | yes     | no        | yes      |
CS pagelet cache | composed pagelets              | yes     | yes       | yes      |
blob cache       | binary data                    | yes     | no        | no       |
SS pagelet cache | composed pagelets and binaries | yes     | yes (no!) | yes      | subset of CS pagelet cache
SS webref cache  | vanity url mapping             | yes     | no        | no       |

Let’s consider some use cases. In all these cases we only consider the situation where the heap size is not large enough to store all items in memory. This can also be the case when the machine has enough physical memory, but garbage collection tuning indicates that you are better off with a smaller heap to avoid (too) long full GC pauses. If all items can be stored in memory and no items are evicted from the cache because the cache is full, you are done with tuning, as you have reached an ideal situation.

  1) Fully cached pages without a CDN (Content Delivery Network)
  2) Fully cached pages with a CDN
  3) Partially cached pages without a CDN
  4) Partially cached pages with a CDN
  5) Fully uncached pages without a CDN
  6) Fully uncached pages with a CDN

It is important to realise that adding an item to a full EhCache is expensive, because at the same time the item is added, an item needs to be selected (by the same thread) for removal from the cache. The selection for removal is expensive, both in CPU cycles and in the locking involved. It is even more expensive if items also need to be removed from disk, as additional I/O is added to the mix. So a full cache where items are constantly added and removed is something that you desperately want to avoid.

For monitoring the caches it is best to use the WebCenter Sites SupportTools, which can be requested from Oracle Support. A lot of real-time information can be gathered from the reports in the cache section of the SupportTools. JMX can also be used, but then you will need to write the reports yourself. If you want to graph and trend over time, JMX is a very solid way to read cache statistics data.

1) Fully cached pages without a CDN

This is a relatively simple case, as the SatelliteServer pagelet cache is used the most compared to the lower-level caches like AssetCache and resultset cache. The other caches are only used when something changes, i.e. when assets are published. All focus should be on the pagelet cache, and this should be made as large as possible. The Blob Cache is hardly used and serves no function, as blobs will be cached on SatelliteServer. The ContentServer pagelet cache serves as a store for embedded pagelets (see render:calltemplate style=embedded in the WebCenter Sites documentation). But as the same data is available on SatelliteServer, the in-memory pagelet cache is mostly redundant. The ContentServer pagelet cache in-memory store can be sized small and the on-disk store very large. This makes the ContentServer pagelet cache a persistent store for pagelets, and in this way it does not compete for heap with the Satellite pagelet cache.

WebReferences cache should ideally not be full (at 100% utilization) and configured large enough to hold all data.

Depending on the frequency of publishing (or, in general, the frequency and amount of change) you will need to monitor the AssetCache and resultset cache. They serve as helpers to quickly regenerate pagelets that are expired from cache due to the publish: for repeated reads of database queries or assets, both over time (before and after a publish) and across multiple pagelets, it is faster and more scalable to read them from a cache than from the database. But if there is a memory trade-off to be made between the pagelet cache and the asset cache, the pagelet cache should take preference.

2) Fully cached pages with a CDN

With a CDN added to the architecture, the read patterns to the Satellite caches change drastically. As the CDN will cache blobs for a long time, there is not a lot of use in caching these blobs on Satellite. If you use a CDN or another front cache like Squid or Varnish for blob data, there is no need to store blobs in Satellite’s page cache, as is done by default. In this case it is advisable to set the expiration of blobs to immediate, as blobs are cached externally. There is no benefit in polluting the page cache with large blob data.

In satellite.properties you can set expiration=immediate. It is also strongly advisable to add a Cache-Control header with a long expiration time to blobs. This can be done either with the web references configuration for blobs, or with an additional header in the render:bloburl call to construct a URL with the Cache-Control header.
The SatelliteServer pagelet cache serves mainly as a persistent store for the CDN that is initially queried upon first request and periodically when the CDN cache times out. More advanced configuration schemes are possible, but for the cache configuration this is the basic idea. In this use case the HTML/JSON/XML response can also be cached on the CDN and the CDN can be configured to do so.
AssetCache, web reference cache, blob cache and resultset cache should be configured and monitored similarly to use case 1).

3) Partially cached pages without a CDN

Compared to 1), the access characteristics change in such a way that the AssetCache and, to a lesser extent, the resultset cache become more important. As CSElements and Templates are executed more frequently, we cannot rely on pagelet caching alone for optimal results. Again, close monitoring of the AssetCache and resultset cache will guide you to the right configuration. Repeated reads of assets by various uncached pagelets become more important than repeated reads over time. With ‘over time’ I mean that pre- and post-publish reads of non-changed assets can be re-read from cache without going to the database and composing the asset POJO.

4) Partially cached pages with a CDN

Compared to 2), the access characteristics are also radically different. Blobs can still be cached on the CDN, but HTML pages cannot. This means that for each page request the CDN needs to phone home and get the HTML page. SatelliteServer is now accessed much more often for HTML (actually any character output that is rendered through Templates and CSElements). It also means that for the HTML pages the characteristics become similar to 3). The remarks in 3) around AssetCache etc. also hold for this use case.

5) Fully uncached pages without a CDN

This is not a typical use case. It might be one where Sites is used as a CMS only and delivery of the HTML is done through another front-end framework. The output of WebCenter Sites might be binary data (images) and JSON or XML. That JSON is then used by a front-end framework to render pages. Caching might be done in the front-end framework.
SatelliteServer pagelet caching is only used for blobs; its configuration is irrelevant for pagelets. The ContentServer pagelet cache is not used at all. The Blob Cache does not matter much, as the Satellite Server pagelet cache is used to cache the blobs.
The AssetCache and resultset cache are very important; monitor and tune those caches carefully.

6) Fully uncached pages with a CDN

This is also not a typical use case. The difference with 5) is that the blob caching function is now mostly performed at the CDN layer. This means that the Satellite Server pagelet cache layer is not relevant and blobs can be set to expire immediately.

When does deploying Remote Satellite Server make sense?

Remote Satellite Server brings the most value when your ContentServer JVMs are maxed out on either CPU or memory and a lot of CPU cycles are spent on assembling the page. The latter is the case when you have a lot of pagelets per page. In this case you can off-load memory (the Satellite Server pagelet cache) or CPU cycles to another machine. When you have uncached pagelets on your pages (maybe just an uncached outer wrapper) you are likely to see a degradation in performance, as at least some pagelets in the page need to be fetched from ContentServer over HTTP. Those uncached pagelets will also need to be parsed. In this case you will need to balance memory (if you are memory bound) and performance. If you are memory bound and cache blobs in the SatelliteServer page cache, it is advisable to off-load caching of blobs to either a CDN or another front-end cache like Varnish or Squid.
Another pattern you can deploy, if you are memory bound and have many domains/sites to serve on a multi-node cluster, is to shard traffic over the cluster nodes. You can appoint 2 nodes per virtual host to receive traffic, whilst dedicating other nodes to other virtual hosts. A cluster node can serve traffic for multiple virtual hosts, but it should not serve traffic for all hosts. This will limit the number of pagelets and assets cached per JVM.

It should also be noted that the default settings for resultset caching and pagelet caching is not optimal. For instance the default settings for cc.AssetTypeCSz, cc.cacheResults and cc.cacheResultsTimeout are too low for production Delivery use.

 

General Guidelines

To conclude, here are the general guidelines:
  1) Default out-of-the-box settings are not optimal for Production Delivery.
  2) The SatelliteServer pagelet cache should only be used for in-memory caching and the ContentServer pagelet cache mostly as a disk cache. In this way they work well together without much in-memory overhead.
  3) AssetCache and resultset cache become important when you have uncached pagelets. Even with uncached pagelets it is important to understand what is happening in those uncached components to optimise the lower-level caches.
  4) The Web References cache is important and should ideally not be 100% full. Assign a large enough size to it; you don’t want to constantly add and remove items from this cache.
  5) Make sure you shut down your web application nicely. If EhCache does not get shut down properly, it will mark its whole disk cache as invalid. This means that after restart all the pagelets will need to be regenerated, which can be a drain on the system.

As you can see there are many ways to tune the WebCenter Sites caches. This blog should be seen as a starting point or an intermediate touchpoint for an optimal caching strategy. An optimal caching strategy requires a good understanding of the business goals around performance, scalability and content freshness.

Managing Oracle Documents Cloud Service files via curl commands


 

Oracle Documents Cloud Service allows users to keep their files in the cloud, and it allows organizations to centralize their data. This blog post covers how to manage your files from a terminal, whether it is running on your MacBook laptop, Windows desktop, or Linux server. Command-line tools are simple applications for uploading content into the cloud.

I will cover how to use curl, a command-line tool for transferring data, to issue HTTP/HTTPS requests that interact with the Oracle Documents Cloud Service REST API File Resource.

 

curl is free and open-source software. It supports a range of internet protocols including HTTP, HTTPS, FTP, FTPS, etc. It can be downloaded from http://curl.haxx.se/download.html

The Oracle Documents Cloud Service REST API File Resource documentation is available at: http://docs.oracle.com/cloud/latest/documentcs_welcome/WCCCD/GUID-2A7675E9-536D-47FD-B761-DD1881ADBC7E.htm#WCCCD3763

 

Find below a list of operations allowed on the File Resource:

 

1) Get File Metadata (GET)

FileID for the target document is passed as part of the URL

curl -u <username>:<password> https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/D7FA6B2DFA043CF6C8F460BAT0000DEFAULT00000000

 

2) Rename File (PUT)

FileID for the target document is passed as part of the URL and the new filename is passed as a JSON argument

curl -u <username>:<password> -X PUT -d "{\"name\":\"renamed_document.pdf\"}" https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/D7FA6B2DFA043CF6C8F460BAT0000DEFAULT00000000

 

3) Copy File (POST)

FileID for the source document is passed as part of the URL and the FolderID for the destination is passed as a JSON argument

curl -u <username>:<password> -X POST -d "{\"destinationID\":\"FD00F625F56050BA15CF567AT0000DEFAULT00000000\"}" https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/D7FA6B2DFA043CF6C8F460BAT0000DEFAULT00000000/copy

 

4) Delete File (DELETE)

FileID for the document to be deleted is passed as part of the URL. Note that this file will be placed in the Trash bin and will not be removed from the system until it is purged from the Trash.

curl -u <username>:<password> -X DELETE https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/D6178E2C7B4ACAE15E179393T0000DEFAULT00000000

 

5) Upload File (POST)

This request passes a two-part multipart/form-data via the -F curl tag: the FolderID for the destination as a JSON argument and the local filename to be uploaded

curl -u <username>:<password> -X POST -F "jsonInputParameters={\"parentID\":\"F5F20CC273D80927A89D0701T0000DEFAULT00000000\"}" -F "primaryFile=@sample_document.pdf" https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/data

 

6) Upload File Version (POST)

FileID for the target document is passed as part of the URL and the new version of the document is passed as a multipart via the curl tag -F

curl -u <username>:<password> -X POST -F "primaryFile=@sample_document.pdf" https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/DE21B5568A9F9A27B16CBA00T0000DEFAULT00000000/data

 

7) Download File (GET)

FileID for the target document is passed as part of the URL

curl -u <username>:<password> https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/D7FA6B2DFA043CF6C8F460BAT0000DEFAULT00000000/data
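
The command above streams the file content to standard output; to save it to a local file, add curl's -o option (the local file name is illustrative):

curl -u <username>:<password> -o downloaded_document.pdf https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/D7FA6B2DFA043CF6C8F460BAT0000DEFAULT00000000/data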

 

8) Get File Versions (GET)

FileID for the target document is passed as part of the URL

curl -u <username>:<password> https://<ORACLE-DOCS-SERVER>/documents/api/1.1/files/DE21B5568A9F9A27B16CBA00T0000DEFAULT00000000/versions

 

The above operations were tested on a Windows 7 laptop using curl version 7.42.0 (x86_64-pc-win32).

 

Valuable Tools For Diagnostics Gathering and Troubleshooting


Introduction

The Oracle A-Team is often asked to help customers identify a myriad of JVM and SOA application issues.  Without fail, the customer will be asked for data regarding their application.  This is not application data, but rather data about the running application from the JVM’s perspective. The data we ask for normally includes Java thread dumps, garbage collection logs, Java heap dumps, and also Java Flight recordings; one of the “new” favorites.

Once this information is available for analysis there is a good chance that one of our team members will be able to hone in on the underlying issues.  However, until such information is provided the team is pretty much flying blind.

This article will provide a brief description of this diagnostic data and related files, how to collect them, and the tools available to work with them. Our team finds these diagnostic files and tools to be invaluable.  You should consider adding them to your toolbox to help your troubleshooting and performance tuning efforts.

Thread Dumps

A thread dump is a snapshot of the state of all the threads in the JVM process.  Each thread, in the JVM, will provide a stack trace that will show its execution state.  A thread dump reveals information about the Java application’s activity at the time the thread dump was taken as well as all of the other activity that might be occurring in the JVM.  Since the Oracle SOA stack runs on WebLogic server, the stack trace will provide a snapshot of what the Weblogic server is doing, which includes things like handling incoming HTTP requests, authenticating a database adapter call, or dispatching a BPEL composite to perform some work, etc.

It’s important to understand that a single thread dump only provides a small snapshot or view of the activities within the JVM process.  In order to gain a better understanding of what is happening it is necessary to take multiple thread dumps over a period of time.  It varies according to the type of issue being analyzed, but a typical recommendation is to take 3 – 5 thread dumps at an interval of 10 – 30 seconds between each thread dump.  Being able to review the thread snapshots over these time intervals will provide a bigger window into the behavior of the application.

How to Collect Java Thread Dumps

There are a number of ways to collect thread dumps from the JVM:

  • The most common method is to use the kill -3 <pid> command (on Unix), which sends a SIGQUIT signal to the JVM, causing it to write a thread dump to its standard output. In scenarios where the application is executing in a clustered environment, it is wise to create scripts that can be executed on each physical machine (a minimal example is sketched after this list).

 

  • From the WebLogic server console, select the managed server in question, click on the “Monitor” tab, then the “Threads” tab, and then click the “Dump Thread Stacks” button.

 

  • When using WebLogic Server a thread dump can be generated by using WLST (How to take Thread Dumps with WLST Doc Id 1274713.1) or by accessing the administration server for the domain, selecting the server, and then requesting a thread dump.

 

  • On Windows, or on any platform with a JDK installed, you can use the jstack command:

http://docs.oracle.com/javase/6/docs/technotes/tools/share/jstack.html
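
For example, a minimal shell sketch (assuming a Unix-like system and that SERVER_PID holds the process id of the JVM you are interested in) that captures five thread dumps 30 seconds apart could look like this:

for i in 1 2 3 4 5
do
  kill -3 $SERVER_PID   # the thread dump is written to the server's standard output (.out file)
  sleep 30
done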

 

 

ThreadLogic

The ThreadLogic utility is a free tool you can download to assist in analyzing thread dumps taken from a JVM (for the purposes of this discussion we will assume that WLS is running on the JVM).  ThreadLogic can digest the log file containing the thread dump output.  This utility does a fair amount of the initial analysis for you, like finding locks, the holders of locks, and fatal locking conditions.  If you’ve ever read a raw thread dump from a log file then you know it can be daunting, especially if you don’t know exactly what you are looking for.  ThreadLogic helps by recognizing the type of each thread and categorizing them in the tool to help you understand which threads are JVM threads, which are WLS threads, and which are “application” threads. In this context “application” means SOA, ADF, Service Bus, Coherence, etc.  In addition, ThreadLogic can process a series of thread dumps and perform a “diff” operation between them. This is helpful in determining what threads are doing over a period of time.

The utility will not give you the ultimate answer, but it can reduce the overall effort it takes to review the thread dumps.

You can find and download the ThreadLogic tool at this location.

The following figures provide a sample of the ThreadLogic utility.  Figure 1 is the summary view for a selected thread dump.  Figure 2 demonstrates the view once a specific execute thread is selected.  Within the detail pane the advisories, as determined by ThreadLogic, and the stack trace of the selected execute thread are provided.

Figure 1 ThreadLogic Summary Page

Figure 1 ThreadLogic Summary Page

ThreadLogic Thread Detail

Figure 2 ThreadLogic Thread Detail

 

Verbose Garbage Collection

Being able to review the garbage collection (GC) behavior of the JVM is critical to understanding whether or not GC issues are causing slowdowns, high CPU, or hangs when the JVM is not responding. By using the GC logs you can get detailed information on GC performance in order to determine whether the garbage collection algorithms or parameters should be changed.

Requesting the verbose output of the garbage collection has very little overhead.  The little bit of overhead added is well worth what is provided in return.  The following parameters should be added to the JVM (HotSpot) startup command line to ensure verbose garbage collection is captured.

  • -XX:+PrintGCDateStamps
  • -XX:+PrintGCDetails
  • -Xloggc:<gc_log_path>/${SERVER_NAME}.$(date +%s)_gc.log

 

For JRockit, use the following commands:

  • -Xverbose:gc
  • -XverboseTimeStamp
  • -Xverboselog:<gc_log_path>/${SERVER_NAME}.$(date +%s)_gc.log

 

The parameters above are strictly for requesting garbage collection output; they should be added in addition to your other JVM command line parameters. Note also that even though these GC log files are not large, they should be managed, rotated, and cleaned up in the same manner as your other server logs so they do not fill up disk space.
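
As an illustration, on a HotSpot-based WebLogic managed server the flags could simply be appended to the existing Java options, for example in setDomainEnv.sh (the log directory below is a placeholder):

JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:/u01/logs/gc/${SERVER_NAME}.$(date +%s)_gc.log"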

Java Flight Recordings

This utility has been in the JRockit JVM for many years and is an absolutely vital tool for identifying trouble spots in the JVM.  Beginning with the release of JDK 1.7.0_40, the flight recorder features have been added to the Java HotSpot JVM.

The utility gathers detailed run-time information about how the Java Virtual Machine and the Java application are behaving. The data collected includes an execution profile, garbage collection statistics, optimization decisions, object allocation, heap statistics, thread details, and latency events for locks and I/O.  The utility provides very detailed metrics and performance information about JVM execution over a rolling time window. Overhead is extremely low because the monitoring functionality is built into the JVM and is not generated using bytecode instrumentation, as is the case with other profiling tools.

The Mission Control Client is a graphical tool for analyzing the flight recording data that is captured by the JVM to a file. The Mission Control client can also be attached to a running JVM to view real-time performance and profiling data.

In order to instruct the flight recorder to begin collecting this information and storing data for offline analysis, add the following JVM command line parameters for the HotSpot JVM.

  • -XX:+UnlockCommercialFeatures
  • -XX:+FlightRecorder
  • -XX:StartFlightRecording=maxage=<rolling amount of time>,filename=<filename>

 

Add the following command line parameter if you are running JRockit:

  • -XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=<jfr_repos_path>,maxage=30m,dumponexit=true,dumponexitpath=<jfr_path>

 

When specifying the “rolling amount of time”, keep in mind that the recording should be long enough to span the time period you want to analyze. This is important in order to get an understanding of the behavior of the JVM and the application, and of the specific events that need to be captured.  The recommendation is usually to start with 30 minutes (30m).
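
If the HotSpot JVM was started with the flags above, a recording can also be started and checked on demand using the jcmd utility that ships with the JDK (the pid, recording name, duration and file name below are placeholders):

jcmd <pid> JFR.start name=adhoc duration=30m filename=/tmp/adhoc_recording.jfr
jcmd <pid> JFR.check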

Refer to the relevant Java HotSpot Flight Recorder or JRockit documentation for a complete understanding of the recording options and the Mission Control UI.

 

The flight recording offers many views into the JVM and application behavior.  The figures shown below are just a sampling of those views.  The views provided are just the summary page for each of the sections that the flight recorder provides.  When viewing the section summary pages notice that there are several tabs at the bottom of each page.  Selecting the tabs provides a deeper dive into each of the respective sections.

 

Figure 3 JFR General Section

 

Figure 4 JFR Memory Section

Figure 5 JFR Code Section

Figure 6 JFR CPU/Threads Section

Figure 7 JFR Events Section

 

Heap Dump

The heap dump is invaluable when it comes to determining which Java objects are consuming all of the Java heap space.  This is critical to understand when an out-of-memory error occurs.  To ensure a heap dump is generated on an OutOfMemoryError, add the following parameters to the JVM command line.

  • -XX:+HeapDumpOnOutOfMemoryError
  • -XX:HeapDumpPath=<dump path>
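
Heap dumps can also be captured on demand from a running HotSpot JVM with the jmap utility (the pid and file path below are placeholders); be aware that the JVM is paused while the dump file is written:

jmap -dump:live,format=b,file=/tmp/server_heap.hprof <pid>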

 

To get any value from the generated heap dump, the use of an analysis tool such as the Memory Analyzer Tool (MAT) is required.  The Memory Analyzer Tool is downloadable from here.  An overview of the MAT tool and related techniques for Java heap dump analysis is beyond the scope of this post, but you can find more details on using MAT here.  The screenshots below are some of the heap dump views.

 

Figure 8 Heap Dump Overview

Figure 9 Heap Dump Histogram

Figure 10 Heap Dump Dominator Tree

Figure 11 Heap Dump Object Query Page

 

Summary

There are numerous free tools available for assisting in identifying Java application issues, hung threads, blocking threads, slow application and JVM performance, garbage collection, and heap consumption issues.  Unfortunately, too many developers, admins, and architects do not have these tools in their toolboxes, or lack the knowledge to use them.  Becoming familiar with these tools will improve your ability to troubleshoot and tune Java applications more effectively and more rapidly. It is highly recommended that you add them to your toolkit as soon as possible.

Oracle Service Bus Transport for Apache Kafka (Part 1)


Introduction

Few of us realize it, but the heart of OOP (Object-Oriented Programming) has more to do with messaging than with objects and classes or related design principles like inheritance and polymorphism. At the very beginning, OOP was designed to help developers implement data exchange between objects, so that these objects could be coordinated to execute some business process, dividing the problem into smaller pieces, with a key benefit being code that is easier to maintain. Another original OOP design concept dictates that all objects in the system are accountable for data sharing; no object holds more responsibility than another regarding messaging. This design principle helps OO applications to scale as a whole because objects can scale individually.

Considering that the first object-oriented languages (e.g. LISP, MIT ALGOL, Simula 67) were created in the 60’s, we can all agree that messaging has been around for at least four decades. However, messaging looks quite different today from these original OO concepts, and this discrepancy is a direct result of industry demands and technology evolution. Let’s do a quick review:

- To increase performance, objects were supposed to run concurrently using threads;

- To increase scalability, objects were supposed to be spread across different machines;

- To increase availability, messages were stored in a central repository within machines;

- To increase manageability, this repository controls most part of the message handling;

- To increase adoption, standards were created having in mind all these characteristics;

Most of these characteristics define what we refer to today as MOM (Message-Oriented Middleware). This type of middleware is commonly used for exchanging messages in a reliable, fault-tolerant and high performance way. Numerous standards were created to define APIs for MOM’s, with JMS (Java Message Service) and AMQP (Advanced Message Queuing Protocol) as perhaps the most popular ones, with many great implementations available, both commercial and open-source.

But industry demands never slow down, and like any other technology, MOMs have to keep up with them in order not to fall into obsolescence. There is one major demand currently threatening most implementations, and that is data volume: the need to handle not hundreds or thousands but millions of messages per second. The volume of data generated has increased exponentially in the last decade and keeps growing every second. Most implementations follow a MOM design concept, dictated by an API specification, in which the consumers and producers need to be as lightweight as possible, so the MOM ends up handling most of the messaging work. While this is good from an ease-of-use standpoint, it creates a serious bottleneck in the architecture that eventually prevents it from scaling up to meet these ever-increasing data volumes.

Of course, not all applications need to handle millions of messages simultaneously, and most of these implementations might seem “good enough”. However, what happens if one of these applications needs to be modified to handle a high data volume scenario? What are the consequences? Common situations include application adjustments and fine tuning that by themselves bring a new set of complexities to the architecture. For instance:

- Disable message persistence to improve performance, but potentially losing messages;

- For JVM-based implementations, increase the heap size, and possibly cause long GC pauses;

- Trimming the Java EE application server to use only JMS, perhaps losing flexibility and supportability;

- Giving producers/consumers more responsibility, but breaking the API specification;

- Using high-speed appliances to improve performance, but also increasing the TCO;

It seems that industry demands over the years caused us to deviate a lot from the original OOP design that dictates that the system scales as a whole when each object scales individually. This principle is more critical than ever in an era where scalability is the most important driver due to high data volumes. Finally, we’re also in the Cloud Computing era where a significant percentage of enterprise applications and computing infrastructure is either running in, or moving to, the Cloud.

Some years ago, a few LinkedIn enterprise architects responsible for the company’s data infrastructure faced the same challenges while working to build data pipelines throughout the company. They were trying to capture and manage relational database changes, NoSQL database changes, ETL tasks, messaging, metrics, and to integrate tools like Apache Hadoop. After ending up with a different approach for each scenario, they realized that a single approach would be more maintainable, provided it addressed the scalability challenges of each scenario. This is a simplistic overview of how Apache Kafka was created. Apache Kafka (Kafka for short) is a distributed, partitioned and replicated commit log service that provides features similar to other messaging systems, but with a unique design that allows it to handle terabytes of data on a single node. Most importantly, the Kafka architecture allows it to scale as a whole because each layer (including producers and consumers) is designed from the ground up to be scalable.

After having seen numerous requests from customers and partners about being able to integrate with Kafka, the A-Team decided to write a native transport for Oracle Service Bus (Service Bus for short) to allow connection and data exchange with Kafka, supporting both message consumption from and message production to Kafka topics. This is done in a way that allows Service Bus to scale jointly with Kafka, both vertically and horizontally.

The Apache Kafka transport is provided for free to use “AS-IS”, but without any official support from Oracle. Bugs, feedback and enhancement requests are welcome, but they need to be submitted using the comments section of this blog, and the A-Team reserves the right to help in a best-effort capacity.

In order to detail how this transport works, the article is divided into three parts. This first part provides a basic introduction, covering how the transport can be installed and how to create Proxy and Business services to read and write from/to Kafka. The second part will cover more advanced details, like how to implement message partitioning and how to scale the architecture as a whole. The third and last part will discuss common use cases for this transport. This article will not cover Kafka in detail but will instead focus on the Service Bus aspects of the solution. If you need a Kafka overview, it is strongly recommended that you read its official documentation and spend some time trying out its quick start. Jay Kreps, one of the Kafka co-creators, wrote a very good in-depth article about Kafka that also covers all the details.

Installing the Transport in Service Bus

The first step is to make the Kafka main libraries available on the Service Bus classpath. At runtime, the transport implementation uses classes from these libraries to establish connections to the Kafka brokers and to Zookeeper. As a best practice, put these libraries in the following folder: $OSB_DOMAIN/lib. JAR files within this folder are picked up and added dynamically to the end of the server classpath at server startup. Considering Kafka version 0.8.1.1, the libraries that need to be copied are:

$KAFKA_HOME/libs/kafka_2.9.2-0.8.1.1.jar

$KAFKA_HOME/libs/log4j-1.2.15.jar

$KAFKA_HOME/libs/metrics-core-2.2.0.jar

$KAFKA_HOME/libs/scala-library-2.9.2.jar

$KAFKA_HOME/libs/zkclient-0.3.jar

$KAFKA_HOME/libs/zookeeper-3.3.4.jar

Here, $KAFKA_HOME should point to the exact location where Kafka was installed. Keep in mind that the list above shows the files of Kafka version 0.8.1.1. If you decide to use a more recent version of Kafka, the filenames may be quite different.
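
As a sketch, the copy can be performed in one shell command (assuming $KAFKA_HOME and $OSB_DOMAIN are set to your Kafka installation and Service Bus domain directories):

cp $KAFKA_HOME/libs/kafka_2.9.2-0.8.1.1.jar \
   $KAFKA_HOME/libs/log4j-1.2.15.jar \
   $KAFKA_HOME/libs/metrics-core-2.2.0.jar \
   $KAFKA_HOME/libs/scala-library-2.9.2.jar \
   $KAFKA_HOME/libs/zkclient-0.3.jar \
   $KAFKA_HOME/libs/zookeeper-3.3.4.jar \
   $OSB_DOMAIN/lib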

The second step is deploying the transport within the Service Bus domain. You can grab a copy of the transport here. Download the zip file and extract its contents to a staging folder. Copy the implementation files (kafka-transport.ear and kafka-transport.jar) to the same folder where the other transports are kept:

$MW_HOME/osb/lib/transports

And copy the JDeveloper plugin descriptor (transport-kafka.xml) to the plugins folder:

$MW_HOME/osb/config/plugins

Finally, deploy the kafka-transport.ear file as an enterprise application to your Service Bus domain. You can use any deployment method of your choice (Administration Console, WLST, etc.); just make sure to target this deployment to all servers and clusters within your domain, including the administration server. As a best practice, name this deployment “Service Bus Kafka Transport Provider”.
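
If you prefer scripting the deployment, a hedged WLST sketch could look like this (the wlst.sh location, credentials, Admin Server URL and target names are placeholders; adjust them to your domain):

$MW_HOME/oracle_common/common/bin/wlst.sh
connect('weblogic', '<password>', 't3://adminhost:7001')
deploy('Service Bus Kafka Transport Provider', '<MW_HOME>/osb/lib/transports/kafka-transport.ear', targets='AdminServer,osb_cluster')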

Receiving Data from Apache Kafka

This section will show how to create a Service Bus Proxy Service that fetches messages from a Kafka topic and processes them using a Pipeline. The following steps will assume that you have an up-and-running Kafka deployment, and that a topic named “orders” was previously created.

Creating the Service Bus Artifacts

1) In the Service Bus Console, create a session to modify the projects.

2) Create a new project called “article” under the projects folder.

3) Right-click the newly created project and choose Create > Pipeline.

4) Name this Pipeline as “OrdersListenerImpl”.

5) In the Service Type field, choose “Messaging”

6) Set “Text” as the Request Type and “None” as the Response Type.

7) Leave the “Expose as a Proxy Service” check box unchecked.

8) Click the Create button to finish.

9) Open the Message Flow editor.

10) Implement the Message Flow with a Pipeline-Pair, creating one single stage in the request pipeline that prints the headers and the body of the incoming message. Here is an example of this implementation.

figure-1

11) Right-click the article project and choose Create a Proxy Service.

12) Name this Proxy Service as “OrdersListener”.

13) In the Protocol field, choose “kafka”, then click the Next button.

figure-2

14) Set “Text” as the Request Type and “None” as the Response Type.

15) Click the Next button to continue.

16) In the Endpoint URI field, enter the Zookeeper server details.

figure-3

17) Click on the Create button to finish.

18) In the General page, associate this Proxy Service with the previously created Pipeline.

figure-4

19) Click on the Transport Details tab.

20) In the Topic Name field, enter “orders”.

figure-5

21) Save the entire configuration and activate the changes.

Testing the Scenario

In order to test this scenario, you can use the producer utility tool that comes with Kafka. In the “bin” folder of your Apache Kafka installation, type:

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic orders <ENTER>

This will open the producer console for sending messages. Anything you type, followed by the Enter key, will be immediately sent to the Kafka topic. For this simple test, type:

Hello World from Apache Kafka <ENTER>

You should see an output on Service Bus like this:

<Jun 16, 2015 7:02:41 PM EDT> <Warning> <oracle.osb.logging.pipeline> <BEA-000000> < [HandleIncomingPayload, HandleIncomingPayload_request, Logging, REQUEST] <con:endpoint name="ProxyService$article$OrdersListener" xmlns:con="http://www.bea.com/wli/sb/context">
  <con:service/>
  <con:transport>
    <con:uri>localhost:2181</con:uri>
    <con:mode>request</con:mode>
    <con:qualityOfService>best-effort</con:qualityOfService>
    <con:request xsi:type="kaf:KafkaRequestMetaDataXML" xmlns:kaf="http://oracle/ateam/sb/transports/kafka" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <tran:headers xsi:type="kaf:KafkaRequestHeadersXML" xmlns:tran="http://www.bea.com/wli/sb/transports">
        <kaf:message-key/>
        <kaf:partition>1</kaf:partition>
        <kaf:offset>5</kaf:offset>
      </tran:headers>
    </con:request>
  </con:transport>
  <con:security>
    <con:transportClient>
      <con:username>weblogic</con:username>
      <con:principals>
        <con:group>AdminChannelUsers</con:group>
        <con:group>Administrators</con:group>
        <con:group>IntegrationAdministrators</con:group>
        <con:group>Monitors</con:group>
      </con:principals>
    </con:transportClient>
  </con:security>
</con:endpoint>> 

<Jun 16, 2015 7:02:41 PM EDT> <Warning> <oracle.osb.logging.pipeline> <BEA-000000> < [HandleIncomingPayload, HandleIncomingPayload_request, Logging, REQUEST] Hello World from Apache Kafka>

Sending Data to Apache Kafka

This section will show how to create a Service Bus Business Service that sends messages to a Kafka topic. The following steps will assume that you have an up-and-running Kafka deployment, and that a topic named “orders” was previously created.

Creating the Service Bus Artifacts

1) In the Service Bus Console, create a session to modify the projects.

2) Right-click the article project and choose Create > Business Service.

3) Name this Business Service as “OrdersProducer”.

4) In the Transport field, choose “kafka”, then click the Next button.

figure-6

5) Set “Text” as the Request Type and “None” as the Response Type.

6) Click on the Next button to continue.

7) In the Endpoint URIs section, enter the Kafka broker server details.

figure-7

8) Click the Create button to finish.

9) Click the Transport Details tab.

10) In the Topic Name field, enter “orders”.

figure-8

11) Save the entire configuration and activate the changes.

Testing the Scenario

In order to test this scenario, you can use the Test Console that comes with Service Bus.

figure-9

Type “Hello World from Oracle Service Bus” and click on the Execute button to perform the test. Since we have already deployed a Proxy Service that fetches messages from the Kafka topic, you should see a similar output on Service Bus:

<Jun 16, 2015 7:27:48 PM EDT> <Warning> <oracle.osb.logging.pipeline> <BEA-000000> < [HandleIncomingPayload, HandleIncomingPayload_request, Logging, REQUEST] <con:endpoint name="ProxyService$article$OrdersListener" xmlns:con="http://www.bea.com/wli/sb/context">
  <con:service/>
  <con:transport>
    <con:uri>localhost:2181</con:uri>
    <con:mode>request</con:mode>
    <con:qualityOfService>best-effort</con:qualityOfService>
    <con:request xsi:type="kaf:KafkaRequestMetaDataXML" xmlns:kaf="http://oracle/ateam/sb/transports/kafka" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <tran:headers xsi:type="kaf:KafkaRequestHeadersXML" xmlns:tran="http://www.bea.com/wli/sb/transports">
        <kaf:message-key/>
        <kaf:partition>1</kaf:partition>
        <kaf:offset>6</kaf:offset>
      </tran:headers>
    </con:request>
  </con:transport>
  <con:security>
    <con:transportClient>
      <con:username>weblogic</con:username>
      <con:principals>
        <con:group>AdminChannelUsers</con:group>
        <con:group>Administrators</con:group>
        <con:group>IntegrationAdministrators</con:group>
        <con:group>Monitors</con:group>
      </con:principals>
    </con:transportClient>
  </con:security>
</con:endpoint>> 

<Jun 16, 2015 7:27:48 PM EDT> <Warning> <oracle.osb.logging.pipeline> <BEA-000000> < [HandleIncomingPayload, HandleIncomingPayload_request, Logging, REQUEST] Hello World from Oracle Service Bus>
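
If you also want to confirm from the Kafka side that the message reached the topic, you can use the console consumer utility that ships with Kafka (0.8.x syntax shown, assuming Zookeeper is running on localhost):

$KAFKA_HOME/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic orders --from-beginning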

Development using FMW JDeveloper 12c

Alternatively, you can use the Fusion Middleware JDeveloper 12c to create Service Bus projects and also implement integration scenarios using the Kafka transport. This is probably a better approach than using the Service Bus Console when designing end-to-end integration flows, since you can leverage other features available in the IDE such as the debugger, mappers, XPath editors, drag-and-drop components, etc.

figure-10-1

Depending on which version of Fusion Middleware JDeveloper you use, there may be some warnings in the log stating “Failed to invoke edit for <ENDPOINT>: SCA endpoint not available”. This is a known bug related to custom transports that Oracle engineering is already working to solve. Apart from these warnings, everything works fine and you will be able to deploy the Service Bus project from the IDE normally. Also, you will notice that custom transports are not available in the palette, but you can create them using the File > New approach.

Short Interview with Jay Kreps

While I was writing this article, I had the pleasure and honor to talk with Jay Kreps, one of the Kafka co-creators and CEO of Confluent – the company founded by the team who built Kafka and which provides an enterprise-ready version of Kafka called Stream Data Platform. Here’s a transcript of that brief interview.

- Tell us a Little Bit About Yourself:

“I’m the CEO at Confluent. I’m also one of the co-creators of Apache Kafka. I was previously one of the lead architects for data infrastructure at LinkedIn where Kafka was developed.”

- How popular is Apache Kafka Today?

“It’s really popular. You can see some of the users who have added themselves publicly here.”

- What do you Think about the Service Bus Kafka Transport?

“I’m really excited to see the integration. We want to be able to plug in everything!”

- Would you like to see Oracle Supporting Kafka on its Cloud Offerings?

“Sure!”

On behalf of the Oracle community, I would like to thank Jay for his time and support on this project.

Conclusion

Apache Kafka is getting a considerable amount of traction from the industry with its ability to handle high volumes of data (terabytes-scale) with minimum hardware resources, allowing Cloud-based architectures to achieve high throughput and low latency messaging. Providing a native transport that supports Kafka in Service Bus allows for some interesting possibilities and we hope customers consider this option when thinking about using these technologies to solve challenging integration and messaging problems.

This article is the first part of a series that intends to show how Service Bus can be leveraged to connect and exchange data with Kafka, allowing developers and architects to gain from both worlds. As soon as the other parts of the series are available, links to them will be provided here.


OAM Federation: Identity Provider & Service Provider Management


In this blog post I want to clarify a point of initial confusion some people experience with OAM Federation 11.1.2.3. If we go to the “Federation” tab of the OAM Console, we see:

LaunchPadScreenShot

Now the two main objects you manage in your OAM Fed configuration are your IdP Partner definitions and your SP Partner definitions. So, I want to look at the IdP Partner definitions. Which link do I choose? The answer is, “Service Provider Management”. Conversely, to look at the SP Partner definitions, I click on “Identity Provider Management”. To many people, that at first seems back-to-front, but if you think about it some more, it makes perfect sense.

Let’s draw a diagram:

SAMLDiagram

Each Service Provider has a relationship with one or more Identity Providers, and each Identity Provider has a relationship with one or more Service Providers. The owner of each Service Provider has to decide which Identity Providers it is willing to work with, and the owner of each Identity Provider has to decide which Service Providers it is willing to work with. Each of these IdP-SP relations can only exist by mutual agreement of both ends; each side is trusting the other – the service provider needs to trust the identity provider to provide genuine user identities (i.e. only authenticate joe@example.com if it really is Joe); the identity provider needs to trust the service provider not to abuse the identities it is sent (e.g. to maintain the confidentiality of the user attribute data it is sent).

So the most important thing each IdP needs to know is: which SPs am I authorized to talk to? And the most important thing each SP needs to know is: which IdPs am I authorized to talk to? So the SP Partners are part of the IdP configuration, and the IdP Partners are part of the SP configuration.

Concurrent Development Model for WebCenter Sites


1.0 Introduction

 

Several times we are asked the question, “I need to develop multiple versions of the Site at the same time. I have completed phase one and version 1.0 of the website is live on delivery. I am working on version 2.0 in the development environment. It will take another 3 weeks to complete version 2.0 and deploy it to delivery. In the meanwhile, I need to start working on version 3.0. How can I do this?”

 

The answer is very simple. WebCenter Sites development is like any other software development project: a version control system integrated with the development environment is needed to work on multiple versions concurrently. The WebCenter Sites Developer Tools kit (CSDT) has a plugin for Eclipse. This allows developers to use the Eclipse IDE and export/import resources between WebCenter Sites and the Eclipse workspace. Eclipse can then be integrated with a version control system of your choice.

 

Let’s first take a look at the development environment for WebCenter Sites for a medium to large project that requires multiple developers.

 

 

2. WebCenter Sites – IDE Integration

 

As a part of its Developers Tools kit (CSDT), WebCenter Sites has an OOTB integration with Eclipse IDE[1]. Once integrated with Eclipse, the developers interact with WebCenter Sites primarily through Eclipse, which provides a rich set of functions for managing WebCenter Sites resources including Templates, CSElements, Site Entry Assets, Element Catalog Entries and Site Catalog Entries.

 

Eclipse managed resources are stored as files in a file system. This gives developers the option to integrate with a version control system of their choice. If the resources are modified and WebCenter Sites is running, the resources are automatically synchronized, that is, imported into WebCenter Sites, in its native database representation. Manual synchronization can also be performed in both directions.

 

This is depicted in the following diagram:

 

EclipsePlugIn

 

 

Using the Eclipse integration, developers typically perform the following tasks in Eclipse:

* Create, edit, and delete CSElement, Template, and SiteEntry assets

* Develop JSP elements with standard Eclipse features such as tag completion, syntax highlighting, and debugging

* Export and import assets, asset types, flex families, sites, roles, tree tabs, and start menu items

* Preview WebCenter Sites pages within the Eclipse IDE using an embedded preview browser

* View the WebCenter Sites log file in a dynamically refreshing panel

* Integrate with version control systems

 

3. Integration with Version Control System

 

CSDT supports integration with a version control system through the Eclipse IDE. Eclipse-managed WebCenter Sites resources are stored as files in an Eclipse workspace called the Main Developer Tools Workspace. Developers can check resources in and out between their workspace and a version control system. From the version control system, the resources can be imported into the Development Server. The Development Server is used to keep all the files, assets, asset definitions, templates, elements, etc. of the current version. This is shown in the following diagram:

 

DevelopmentEnvironment

 

 

Typically, developers check out the templates and CSElements they need to work on to their local development system. After they have finished their work, they check their changes in to the version control system. From the version control system they are imported into the Development Server. From the Development Server the version can be published to the Testing/QA or Management Server as required. The following diagram depicts this scenario.

 

WCSIDEDevelopmentEnvironment

1. The developers check out the templates/CSElements from the Version Control System and create/modify templates and CSElements on their laptop or on the sandbox. The laptop/sandbox has WebCenter Sites along with the Eclipse plugin.

2. The Templates/CSElements are checked in to the Version Control System.

3.  The Templates/CSElements are imported into the Development Server.

4. Once the build is in the Development Server, it gets deployed to the Management Server (or Test/QA Server) using the WebCenter Sites publishing mechanism. From the Management Server it gets deployed to the Delivery Server, again using the WebCenter Sites publishing mechanism.

 

4. Developing Multiple Versions Concurrently

 

 

In addition to the setup given earlier, developing multiple versions concurrently requires that you set up parallel development environments for the different versions.

 

Say you want to work on both version 2 and version 3 in the development environment. You will need two development servers. The developers working on version 2 will download the version 2 templates/CSElements and other assets (asset types, definitions, site entries, etc.) to their laptop or sandbox.

 

The developers working on version 3 will download the version 3 templates/CSElements and other assets to their laptop/sandbox. Both versions use the same Version Control System, but version 2 and version 3 each have their own Development Server. The templates, elements and other assets for version 2 are imported into the Version 2 Development Server, and those for version 3 are imported into the Version 3 Development Server. The following diagram depicts this:

ParallelDevelopmentEnvironment

 

This setup allows developers to start developing version 3 while version 2 is still being tested and is in the QA cycle.

 

5. Conclusion

 

 

Thus we see that developing multiple versions concurrently in WebCenter Sites can easily be done by utilizing the WebCenter Sites CSDT toolkit, a version control system, and additional development servers.

 

[1] https://docs.oracle.com/cd/E29542_01/doc.1111/e29634/dt_intro.htm#WBCSD929

OAM Federation 11.1.2.3: Performing a Loopback Test


In this blog post I will share steps for performing a loopback test of OAM Federation 11.1.2.3. In a loopback test, we configure OAM’s SP to point to OAM’s IdP. This enables you to confirm the basic functionality of OAM Federation without requiring any external partner server. I also find it useful in plugin development – you can perform initial development of your plugin using just the OAM Federation server, since you might not have an instance of the intended partner server available in your development environment.

You can find instructions here on how to do the same thing in OIF 11.1.1.x federation. (Those instructions are for Fusion Apps, however the loopback test itself is identical for non-Fusion Apps environments.) You’ll find the steps in OAM 11.1.2.3 are very similar, the main difference being that OAM 11.1.2.3 uses OAM Console for configuration rather than Enterprise Manager. Also, while I have provided steps for OAM 11.1.2.3, the steps in 11.1.2.2 are very similar (the configuration screens themselves are similar, but the navigation paths to reach them are different).

A couple of prerequisites:

  • you need OAM 11.1.2.3 installed. I used the Lifecycle Management Tools Deployment Wizard — see 11.1.2.3 IAM Deployment Guide
  • you need the “Identity Federation” service enabled. If it is not already enabled, you can go to oamconsole > Configuration tab > Available Services. (Hint: if it is disabled yet the “Enable Service” link is greyed out, try disabling then re-enabling “Mobile & Social”).

Download the metadata

OAM’s IDP metadata is available using the URL: http://OAMHOST:OAMPORT/oamfed/idp/metadata

Download that and save it to a file.

You can also download the SP metadata from the URL: http://OAMHOST:OAMPORT/oamfed/sp/metadata

However, what you will discover is that they are basically the same thing, with only the ID, timestamps, and digest/signature values differing. To see this, run each XML file through xmllint --format, and then diff the results.
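
For instance, assuming the two documents were saved as idp-metadata.xml and sp-metadata.xml, the comparison could be done like this:

xmllint --format idp-metadata.xml > idp-formatted.xml
xmllint --format sp-metadata.xml > sp-formatted.xml
diff idp-formatted.xml sp-formatted.xml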

Hence, in OAM Federation’s case, we can treat the two sets of metadata interchangeably, and only need to save the one file.

Next we need to create LoopbackIDP

oamconsole > Federation tab > under “Federation” select “Service Provider Management”
Click on “Create Identity Provider Partner”:

Fill in the following data:

  • Name: “LoopbackIDP”
  • “Enable Partner” will be checked by default
  • Select protocol “SAML2.0”, browse to the metadata XML file you saved previously
  • Under “User Identity Store” you can select “OAMIDSTORE”.
  • You can leave the other mapping settings, e.g. “Map assertion Name ID to User ID Store attribute”, to their defaults. Basically, you can use whatever attribute you want, provided you have that attribute defined and unique for your test users, and you make the same configuration at the other end.

Click Save – LoopbackIDP will be created.

 

Now to test this, let’s use the SP test page:

The SP Test page can be accessed via: http://OAMHOST:OAMPORT/oamfed/user/testspsso

 

Initially, this screen will display “System Error”.

If you look at the logs (e.g. wls_oam1-diagnostic.log) you will see:

oracle.security.fed.event.EventException: SP Engine error - the Test SP Engine is not enabled
at oracle.security.fed.eventhandler.fed.authn.engines.testsp.TestSPRetrieveInformationEventHandler.perform(TestSPRetrieveInformationEventHandler.java:77)

Solution: you need to run the configureTestSPEngine("true") command in WLST:

  • Start WLST:
 $IAM_ORACLE_HOME/common/bin/wlst.sh
    • Note that there are multiple copies of WLST installed. You need to use the copy under the $IAM_ORACLE_HOME, since that copy has the configureTestSPEngine() command configured. The other copies of WLST installed in the other homes (oracle_common, wlserver_10.3) will lack this command, and the other OAM-specific commands.
  • Connect to the WLS Admin server:
 connect()

Enter the username (e.g. weblogic), password and Admin Server URL (e.g. t3://myadminserver.example.com:7001).

  • Navigate to Domain Runtime branch:
domainRuntime()
  • Execute the configureTestSPEngine() command: configureTestSPEngine("true")
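
Putting these steps together, a complete WLST session for enabling the Test SP Engine looks roughly like this (the credentials and Admin Server URL are placeholders for your environment):

$IAM_ORACLE_HOME/common/bin/wlst.sh
connect('weblogic', '<password>', 't3://myadminserver.example.com:7001')
domainRuntime()
configureTestSPEngine("true")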

 

Now we try again:

Choose partner “LoopbackIDP” and “Start SSO”.

Once again we get a “System Error”. Looking at the logs:

<Error> <oracle.security.fed.jvt.JVTDiscoveryManager> <FEDSTS-12014> <Discovery Finder Exception: unable to locate object in the repository: {0}
oracle.security.fed.jvt.discovery.exceptions.DiscoveryFinderException: Missing partner configuration for: http://OAMHOST:OAMPORT/oam/fed
at oracle.security.fed.jvt.discovery.model.config.ConfigServiceDiscoveryProvider.getPartnerConfig(ConfigServiceDiscoveryProvider.java:1043)
at oracle.security.fed.jvt.discovery.model.config.ConfigServiceDiscoveryProvider.locateProtocolConfiguration(ConfigServiceDiscoveryProvider.java:910)
at oracle.security.fed.jvt.discovery.model.config.CSFConfigDiscoveryProvider.locateProtocolConfiguration(CSFConfigDiscoveryProvider.java:134)
at oracle.security.fed.jvt.discovery.model.config.ChainingConfigDiscoveryProvider.locateProtocolConfiguration(ChainingConfigDiscoveryProvider.java:42)
at oracle.security.fed.jvt.discovery.model.config.CachingConfigDiscoveryProvider.locateProtocolConfiguration(CachingConfigDiscoveryProvider.java:75)
at oracle.security.fed.jvt.JVTDiscoveryManager.locateProtocolConfiguration(JVTDiscoveryManager.java:1956)

This is because we have configured “LoopbackIDP” as an IDP partner in the SP configuration, but we have not configured the SP as an SP partner in the IDP configuration. Let us do that now:

Next we need to create LoopbackSP

oamconsole > Federation tab > under “Federation” select “Identity Provider Management”

Click on “Create Service Provider Partner”:

Fill in the following data:

  • Name: “LoopbackSP”
  • “Enable Partner” will be checked by default
  • Select protocol “SAML2.0”, browse to the metadata XML file you saved previously
  • For “NameID Format” you can use the default of “Email Address”. For “NameID Value” you can use “User ID Store Attribute” of “mail”.
  • Other settings can be left at their defaults.

Click Save – LoopbackSP will be created.

 

Now let’s repeat the test with the test SP app:

Having tested this with an SP app, we can now do the same test using a protected page:

 

  • In oamconsole, go to Federation tab, Federation tile, Service Provider Management
  • Search for LoopbackIDP and open it
  • Click “Create Authentication Scheme and Module”

This will create an Authentication Module called LoopbackIDPFederationPlugin and an Authentication Scheme called LoopbackIDPFederationScheme

  • Go to your OHS htdocs directory — if you used LCM installation, this is $IDMTOP/config/instances/ohs1/config/OHS/ohs1/htdocs
  • Create a directory called fedtest
  • Inside fedtest create a file called index.html, add some content e.g. “<h1>Hello World!</h1>”
  • In OAM Console, go to “Application Security” tab, “Access Manager” tile, “Application Domains” link:

  • Then press “Search”:
  • Open “IAM Suite”:
  • Go to “Authentication Policies” tab, and click “Create”:

  • On the “Create Authentication Policy” screen, enter name “LoopbackIDPAuthPolicy” and select “Authentication Scheme” of “LoopbackIDPFederationScheme”. Then click “Apply”.

Go back to the “IAM Suite” tab, then select the “Resources” subtab. Click “Create”:

Choose resource type of “HTTP”, host identifier of “IAMSuiteAgent”, Resource URL of /fedtest/.../*


And protection level of “Protected”, with “Authentication Policy” of “LoopbackIDPAuthPolicy” and “Authorization Policy” of “Protected Resource Policy”

Click Apply

 

  • Now finally to test: open a new Private Browsing Window, and navigate to http://OAMHOST:OAMPORT/fedtest/index.html . You should get redirected to the login page, and you should see /oamfed/idp/samlv20 in the address bar of the login page. This shows we are using SAML. Then login and you should see the message “Hello World!”

 

 

Mobile Cloud Service by Example Part1: Defining and Implementing a Mock API


I’m really excited at the moment 

The release of Oracle Mobile Cloud Service (MCS) was unveiled yesterday, together with 24 new Platform and Infrastructure Cloud Services.

Why is this important?

Two words…..

“Mobile First”

“Mobile First” was first used by Marc Davis, who was Yahoo’s Chief Scientist and VP of Early Stage Products of Yahoo! Mobile.

Every company needs to think about a mobile strategy to have a future strategy. If you don’t have one, then you just don’t know it yet!

Going Mobile Cloud

14 months ago I decided to expand my horizons and go mobile. I have been taking MCS through its paces by:

  • Developing REST APIs in Node.js/JavaScript (which is how APIs are implemented in MCS)
  • Developing platform-agnostic mobile applications to consume these APIs

In this series of blogs I plan to:

  • Introduce you to the concepts of MCS
  • Give suggestions for best practices to adopt
  • Demonstrate by example by taking you through real examples you can try and demo yourselves

What is Oracle Mobile Cloud Service?

Enterprise Level Mobile Backend as a Service (MBaaS) delivered as a Mobile Platform (PaaS) solution on Oracle Cloud.

The successful implementation of any Enterprise Mobile solution requires:

  • End to end Security
  • Agnostic to mobile platforms, e.g. iOS, Android, HTML5
  • REST API implementation with orchestration (in MCS this is done using Node.js)
  • Centralised Management of the Mobile Applications
  • Connectivity to enterprise resources on premise or in the cloud
  • Scalable performance
  • Mobile Analytics
  • Platform APIs – e.g to manage push notifications & identity

MCS makes it easy to quickly implement an enterprise mobile solution in the cloud.

I’m not going to give you a long pitch about MCS (that would be far too long for a blog, and it’s not what I’m about).

Instead I want to get you up and running quickly using it.

For a quick overview of MCS:

 

Then read this quick article Build Your Mobile Strategy—Not Just Your Mobile Apps.

Creating a mock API in MCS

One of the first tasks for a service developer in MCS is to create an API with mock data and make it available quickly. This allows the mobile application developers to build their apps against a live API while the real implementation is still being built. In MCS we have several options for implementing the mock data, depending on the complexity of the use case:

  • Directly in the RAML definition
  • As javascript in Custom Code.
  • As JSON objects stored in MCS storage
  • As database tables using the database API
  • As external enterprise resources using REST or SOAP

The use case: the MobileMoney Mobile Application

Today I will take you through the first example.

We will use the example of a mobile payment app that allows the payment of funds from one mobile user to another using a telephone number or bank details. Mobile applications like this already exist in Denmark, for example the banking apps Swipp and MobilePay.

Example Demo Video – create a mock REST API in MCS

The following video shows how to create a mock REST API in MCS in a matter of minutes. I will:

  • Create a Mobile Backend called MobileMoney
  • Create a REST API called payments
  • Define a resource to query the past payments made
  • Test the API in the MCS testing page
  • Test the API from postman using basic security

I also encourage you to subscribe to the Oracle A-Team YouTube channel.

 

 

 

Creating a Mobile-Optimized REST API Using Oracle Mobile Cloud Service – Part 1 API Design


Introduction

To build functional and performant mobile apps, the back-end data services need to be optimized for mobile consumption. RESTful web services using JSON as payload format are widely considered as the best architectural choice for integration between mobile apps and back-end systems. At the same time, most existing enterprise back-end systems provide a SOAP-based web service application programming interface (API) or proprietary file-based interfaces. In this article series we will discuss how Oracle Mobile Cloud Service (MCS) can be used to transform these enterprise system interfaces into a mobile-optimized REST-JSON API. This architecture layer is sometimes referred to as Mobile Backend as a Service (MBaaS). A-Team has been working on a number of projects with MCS to build this architecture layer. We will explain step-by-step how to build an MBaaS, and we will  share tips, lessons learned and best practices we discovered along the way. In this first part we will discuss how to design the REST API.

Other parts coming soon in this series include:

  • Implementing the GET resources
  • Implementing the POST, PUT and DELETE resources
  • Learning techniques for logging, debugging, troubleshooting and exception handling
  • Building a mobile app that consumes the REST API Using Oracle MAF and A-Team Mobile Persistence Accelerator (AMPA)

Main Article

Design Considerations

Let’s start with the first challenge: how do you design an API that is truly optimized for mobile apps? A common pitfall is to start with the back-end web services and take that back-end payload as a starting point. While that may limit the complexity of the transformations you have to do in MCS, it leads to an API which is anything but optimized for mobile. This brings us to our first recommendation:

The REST API design should be driven by the mobile developer.

He (or she) is the only one who can combine all the requirements, information and knowledge required for a good design:

  • he designs and builds the various screens, knows the supported form factors and knows exactly which data should be retrieved for which screen.
  • he knows the requirements for working in offline mode, and knows how this can be supported and implemented using his mobile development tool set.
  • he is responsible for data caching strategies to optimize performance in both online and offline scenarios
  • he decides which read and write actions can be performed in a background thread not impacting the user-perceived performance.

To illustrate how the above aspects impact the design of the API, we will introduce the sample “human resources” app that we will use throughout this article series. Let’s start with the three screen mockups our API should support:

mockups

A first design for the read (GET) resources can look like this:

  • /departments: returns list of departments containing department number and name. A “quickSearch” query parameter might be added to support filtering if this cannot be implemented or is undesirable to perform on the device because of the size of the dataset.
  • /departments/{departmentId}: returns all department attributes for the department matching the {departmentId} path parameter and a sub list of all employees working in this department consisting of id, firstname and lastName attributes.
  • /employees/{employeeId}: returns all employee attributes for the employee matching the {employeeId} path parameter.

As you can see, this design is driven by the screens. It allows for “on-demand” data loading, using lean resources that only send the absolutely necessary set of data across the wire, minimizing payload size and maximizing performance. This design is clearly optimized for online usage of the application. If the mobile developer has to support an offline usage scenario, he would need to do the following to prepare the app for offline usage, storing all data locally on the device:

  • Call the /departments resource
  • Loop over all the departments returned, and for each department call the  /departments/{departmentId} resource.
  • Loop over all employees returned, and for each employee call the /employees/{employeeId} resource

Needless to say, this is not a very efficient way of loading data for offline usage. It can easily result in hundreds of REST calls, causing a delay of many minutes to prepare the app for offline usage. So, to support offline usage, it would be handy to add a query parameter “expandDetails” to the /departments resource which, when set to “true”, would return all department and employee data in one round trip.
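
For illustration only, here is a hedged sketch of what the two calls might look like from the app’s point of view, assuming the API ends up exposed under a path such as /mobile/custom/hr and basic authentication against the mobile backend (the hostname, path and credentials are placeholders; MCS also expects the mobile backend to be identified via the Oracle-Mobile-Backend-Id request header):

curl -u <username>:<password> -H "Oracle-Mobile-Backend-Id: <backend-id>" "https://<MCS-HOST>/mobile/custom/hr/departments"

curl -u <username>:<password> -H "Oracle-Mobile-Backend-Id: <backend-id>" "https://<MCS-HOST>/mobile/custom/hr/departments?expandDetails=true"

The first call is the lean, on-demand variant used in the online scenario; the second returns the fully expanded payload in a single round trip to prime the offline cache.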

Of course there are limits to the amount of data you can offload to your phone or tablet. You sure don’t want to store the complete enterprise back-end database on your phone!  So, in our sample, depending on the number of departments in the back-end database, we might need additional query parameters to allow the mobile user to select a specific subset of departments for offline usage.

At this point you might think: no worries, I only have to support an online usage scenario. Well, not so fast… A-Team has learned from experience in mobile engagements that more aggressive upfront data caching strategies might still be needed for various performance-related reasons:

  • The app might be used in parts of the world where network connectivity and bandwidth are unreliable, so users prefer a longer waiting time at app startup to prevent network hiccups while using the app.
  • The performance of the back-end API calls might turn out to be too slow for “on-demand” data loading.
  • The REST-JSON transformations in MCS are typically very fast. However, the required JSON payload might require assembling data from various back-end data sources, degrading performance when looping over result sets is needed to get additional lookup data.

Let’s clarify this last point with an example. Assume there is a back-end HR interface that returns all employee data except for the job title (only the job code is returned). Another “lookup” interface returns the job details, including the job title. In MCS, a loop over all employees is then needed, and for each employee a call to the jobs “lookup” interface is needed to add the job title to the payload. If the lookup call takes just one second, the performance loss can already be significant with tens of employees returned. In such a situation you have two options: use the object storage facility in MCS to cache the job lookup data to prevent expensive calls for each employee, or modify the JSON payload to pass only the job id and do the job lookup on the mobile device. This last option would require an additional /jobs resource in your design that returns all job titles for you to cache on the device. In this article series, we will implement both options for educational purposes: we will use MCS caching, but we will also provide an additional resource that returns job ids and their titles, since our screen design shows a drop-down list to assign a job to an employee.

In summary: various user groups might have different data caching needs, and initial data caching strategies might need to be revisited for performance reasons.

We can distill an important lesson from the above:

Your REST API design should be flexible in terms of the data caching options it can support.

Creating the API Design in MCS

Developers building traditional enterprise system interfaces often follow a design-by-contract approach. In the XML-based web services world, this means the use of XML Schemas (XSDs) and the Web Service Definition Language (WSDL) to formally specify the interfaces. However, mobile developers live in a different world; they typically just think in JSON payloads. Fortunately, there are emerging standards in the REST-JSON world like RAML (RESTful API Modeling Language) and Swagger. Oracle MCS uses RAML for specifying the API design. The nice thing about RAML is that it supports formal REST specifications using the JSON Schema standard, but also allows you to define request and response payloads using sample data. Sample payloads can be changed easily, reflecting the agile and flexible nature mobile developers are used to when working with JavaScript-based (mobile) frameworks like Cordova, Angular, Ionic, Ember, etc. Moreover, by using sample data while specifying the API design in MCS, you can instantly provide the mobile developer with a mock data server so they can start building the app even before your API design has been implemented in MCS.

So, as we started off by recommending to have the mobile developer drive the design, we should facilitate him with a documentation format he is comfortable with: sample request/response payloads, together with the resource path, path and query parameters, the HTTP method and a short description. We will create this design in MCS by executing the following steps:

1. Create a Mobile Backend

A mobile backend is a container of one or more APIs used by one or more mobile applications. APIs in MCS are always accessed from the outside through a mobile backend. To create the mobile backend, log in to Oracle Mobile Cloud Service and click on the Development tab.

DevHome

On the development home page, click on the Mobile Backend icon. Click on button New Mobile Backend and enter values as shown below.

NewMobileBackend

Click the Create button.

2. Specify the Human Resources API

We will now specify the REST resources and their HTTP methods that make up the human resources API. Click on the Open button of the newly created HR1 mobile backend, go to the APIs menu option and click on the New API button. Enter the values of your new API as shown below.

NewAPI

Click the Create button, this opens up the General tab of your new Human Resources API.

ApiGeneral

On this tab, set the Default Media Type to application/json. We can now start specifying the various REST resources that together form our API design. Click on the Endpoints tab, then click on the New Resource button, and create a new resource named /departments.

NewResourceDepartments

We need to define the HTTP methods that we want to support on this resource. To do this, we click the Methods link at the right and then we click the Add Method button and choose GET.

MewMethodGET

Since we want to support both online and offline scenarios, we click the Add Parameter button and we add a boolean query parameter named expandDetails as discussed above. When set to true, this will return all department data including nested employee data for each department. When set to false or not specified, we only return a list consisting of department id and name.

QueryParam

To specify a sample response payload, we click on the Responses tab. The standard response status code 200 is already selected, as well as the suggested return media type, which is based on the default media type that we specified on the General tab. We now enter a sample response payload as shown below.

DepartmentsResponseSample

To support creation of a new department and update of a department, we also add a POST and PUT method to this same resource. The sample payload for both the request and response payload for these two methods looks like this:

{
    "id": 80,
    "name": "Sales",
    "managerId": 145,
    "locationId": 2500
}

When designing REST resources it is common practice for PUT/POST methods to return the same object(s) as sent in the request, possibly updated with server-derived values like the ID.

Note that this sample payload indicates you can only insert or update one department at a time. When implementing the API you can choose to support either a single department insert/update or an array of departments. For a quick overview of the methods implemented on a resource, you can use the description field.

We now create a nested resource under /departments to retrieve the details of a single department and to remove a department. (Our user interface design doesn’t seem to require deletion, but for completeness we will add it anyway.) Nested resources are created by clicking the plus icon at the left of the parent resource. We then extend the /departments path with a path parameter named id, which should be enclosed in curly brackets.

DepartmentDetailsResource

The GET method should return a sample payload like this:

{
    "id": 80,
    "name": "Sales",
    "managerId": 145,
    "locationId": 2500,
    "employees": [
        {
            "id": 145,
            "name": "John Russel"
        },
        {
            "id": 146,
            "name": "Peter Fletcher"
        }
    ]
}

The DELETE method does not need sample payloads as the department id and HTTP verb provide enough information to execute the delete action.

In a similar way, we can create the /employees resource with GET, PUT and POST methods, and the /employees/{id} resource with GET and DELETE methods. Finally, we need the /jobs resource with a GET method to return a list of jobs. When we are done and we click on the Compact Mode On button, the Endpoints tab should look like this.

CompactMode

3. Publishing the Human Resources API Design

Now that we have completed the design we want to share it with the various stakeholders. One way to do this is to give them (read-only) access to MCS so they can inspect the API design themselves. However, for non-technical people that might be a step too far, so you probably want to publish the API in an easily human-readable format. Well, it turns out that MCS has already done that for you! While entering the various resources and methods, all information entered was directly stored in a RAML document, which through a strict indentation scheme is structured enough for computers to understand but at the same time easily read by humans. To view the RAML document you can click on the rightmost “toggle” icon, which allows you to switch between RAML source mode and design mode.

RAML

The icon with the down arrow can be used to download the RAML document so you can publish/distribute it as desired. Note that if other people have comments and would like to make some changes to the design, they can send you a modified version of the RAML document, which you then can copy and paste in the RAML source mode. You can also create a brand new API design in MCS by directly importing an existing RAML document.

Providing a Mock Data Service for MCS API Design

Time-to-market is critical to stay competitive in this fast-moving mobile world. To avoid losing valuable development time, it is common practice to provide a server with mock-up data once the initial API design is fleshed out. This allows the mobile developer to start building out the mobile user interface while the MCS developer is still busy implementing the real API. Again, the good news is that by using Oracle MCS you get this mock data service for free. By default, calling an MCS REST resource that has not been implemented yet will return the sample payload that we defined while specifying the API design.

We can use a powerful REST client like Postman to test this out. First, we need to find out the full REST resource path. We can find this by going to the General tab for our Human Resources API.

ResourcePath

The resource path shown here can be appended with the specific REST resources we defined in our API, for example /departments or /departments/10. To be able to access the MCS API without user credentials, we also need to allow anonymous user access. We can do this on the Security tab by selecting the ON option.

AninymousAccess

We also need to specify two HTTP request headers, one specifying the mobile backend ID, and one specifying the anonymous user key. The values for these header parameters can be found on the mobile back end overview page by expanding the Keys section.

MobileBackendKeys

With this information we can set up our request header parameters as follows:

  • Oracle-Mobile-Backend-Id = fd27fb4f-90e1-4d4d-9e3f-ca4befdebf2c
  • Authorization = Basic UFJJTUVfREVDRVBUSUNPTl9NT0JJTEVfQU5PTllNT1VTX0FQUElEOml6LmQxdTlCaWFrd2Nz

When we enter this information in Postman and hit the Send button our sample payload is returned correctly.

Postman

We can pass on this information to the mobile developer so he can start accessing our MCS API using mock data. As you may have noticed, this is a static mock data service: the payload returned cannot be changed based on the value of a path or query parameter, or on the request payload. However, in the next part of this article series we will see how we can easily implement a more “dynamic” mock API that takes the value of the expandDetails query parameter into account.
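For reference, the same mock call can also be scripted. Below is a minimal Java sketch using the two request headers shown above; the base URL is a placeholder and should be replaced with the resource path shown on the General tab of your API.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MockApiCall {
    public static void main(String[] args) throws Exception {
        // Placeholder base URL: replace with the resource path from the General tab, plus /departments
        URL url = new URL("https://mymcsinstance.oraclecloud.com/mobile/custom/hr/departments");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Values taken from the Keys section of the mobile backend overview page
        conn.setRequestProperty("Oracle-Mobile-Backend-Id", "fd27fb4f-90e1-4d4d-9e3f-ca4befdebf2c");
        conn.setRequestProperty("Authorization",
                "Basic UFJJTUVfREVDRVBUSUNPTl9NT0JJTEVfQU5PTllNT1VTX0FQUElEOml6LmQxdTlCaWFrd2Nz");
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // prints the sample payload defined in the API design
            }
        }
    }
}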

Implementing the Design

In part 2 of this article series we will start implementing this design using Oracle MCS. We will integrate with a back-end interface that is provided in the form of an ADF BC SDO SOAP web service. In subsequent parts, all aspects of the MCS implementation will be discussed, including Node.js asynchronous JavaScript programming, MCS caching options, sequencing multiple backend calls, error handling, logging and security. No prior knowledge of MCS will be assumed. Stay tuned!

Oracle Service Bus Transport for Apache Kafka (Part 2)


Introduction

The first part of this article briefly discussed the motivations that lead to the usage of Kafka in software architectures, and also focused on how the Kafka transport can be installed and how to create Proxy and Business services to read and write from/to Kafka topics.

As expected, the more people use the Kafka transport, the more they want to know how to extract the best performance from it. After all, one of the key reasons to adopt Kafka is high-throughput, low-latency messaging, so it would make no sense to have a layer in the architecture incapable of keeping up with that kind of performance.

This second part of the article covers advanced details of the Kafka transport, specifically how it can be configured to increase performance and better leverage the hardware resources. Most of the tips shown in this article come from Kafka best practices, so this article assumes that you possess at least minimal knowledge about how Kafka works. A detailed discussion about the Kafka design is available here.

If achieving better performance is your main objective, please consider other references beyond this article alone. Software architectures have many layers that need to be optimized, and the Service Bus transport layer is just one of them. Service Bus/WebLogic provides several tools and resources for fine tuning, and it is strongly recommended to explore each one of them carefully. Also, the design of your Pipelines has a considerable amount of responsibility for how fast your services perform. Excessive use of Service Callouts during message processing, the usage of blocking threads in Java Callouts, too many payload transformations and JVM garbage collection are some of the factors to consider. Keep these factors in mind while developing your services.

Finally, this article will also show how to catch common issues with a few troubleshooting techniques. These techniques can be useful to double-check whether something is working as expected, or to find out why something is not working.

Implementing Fault-Tolerance in the Kafka Transport

The fault tolerance of a software-based system must be measured not only by how well it handles failures by itself, but also in terms of how fault-tolerant its dependencies are. Typical dependencies that must be considered are the sustaining infrastructure, such as the hardware, the network, the operating system, the hypervisor (when using virtualization) and any other stack above those layers.

Equally important are the third-party systems that the system relies on. If, to perform a specific task, the system needs to interface with other systems, those other systems heavily influence its availability. For instance, a system that depends on six other systems cannot score 99% fault tolerance; it should score 93% in the best-case scenario, because the other six systems must be continuously up-and-running in order to provide the remaining 6%. The Kafka transport has dependencies that need to be properly understood in order to provide fault tolerance.

Let’s start by discussing Zookeeper. Since the very beginning, Kafka has relied on Zookeeper to perform some important tasks, such as electing a new leader for a partition when one of the brokers dies. Zookeeper is also used as a repository to maintain offsets: instead of delegating to the consumers the responsibility of managing the offsets (which would lead to the use of custom coordination algorithms), they can be stored in Zookeeper and the consumers simply go there and pick them up.

Recent versions of Kafka allow the brokers themselves to be used as the repository for offsets. In fact, this is considered the best practice, since the brokers provide much better write performance than Zookeeper. This change of behavior can be configured in the Kafka transport using the ‘Offsets Storage’ and ‘Dual Commit Enable’ fields, and it is only available for Proxy Services.

Zookeeper provides built-in clustering features and you should leverage them, along with other infrastructure details, in your deployment. The official Kafka documentation has a section discussing that. Good highly available Zookeeper deployments have at least two, and preferably three, instances up-and-running. From the Kafka transport perspective, the Zookeeper instances need to be configured in the Proxy Services via the Endpoint URI field.

figure-12

The second important dependency is the Kafka brokers themselves. While broker information is not relevant for Proxy Services, it is a requirement for Business Services. Proxy Services act as consumers of Kafka, and consumers interact with Zookeeper to discover which brokers are available. Business Services, however, act as producers, and because of that they need to interact directly with the brokers to transmit the messages. Therefore, when configuring the endpoints of your Business Services, you need to specify at least one, and preferably two, Kafka brokers:

figure-13

It is important to note that there is no need to configure all the Kafka instances in your Business Service. The instances described in the Endpoint URIs field are used only to perform the initial connection to the Kafka cluster, and then the full set of instances is automatically discovered. For this reason, set at least two instances, so that if one instance is down during endpoint activation, the other one can take over the discovery process.

This discovery process does not happen only during endpoint activation. Instead, the Kafka transport periodically asks the cluster for its current state and running instances. This is extremely important for the Business Services to effectively load balance across partitions while transmitting messages. It would make no sense to keep a static view of the cluster while brokers are added to or removed from it; this could seriously affect the system’s availability and scalability. To control the frequency of this discovery process you can use the Metadata Fetch Timeout and Maximum Metadata Age fields.

figure-14

Fine Tuning for High Throughput Message Consumption

In Kafka, when a message is sent to a topic, the message is broadcast to any consumer that subscribes to that topic. In this context, a consumer is a logical entity that is associated with a unique group. For this to work, each consumer must carry a group identifier. If two or more processes subscribe to a topic and share the same group identifier, then they will be handled as one single consumer. By using this design, Kafka can guarantee that each consumer will be able to perform its task in a fault-tolerant and scalable fashion, due to the use of multiple processes instead of one.

The concept of groups in Kafka is very powerful because it simplifies the implementation of messaging styles like P2P (Point-to-Point) and Publish/Subscribe. Unlike other messaging systems such as JMS, Kafka does not use different types of destinations; there is one single abstraction called the topic. Whether a message is delivered P2P or Publish/Subscribe style is simply a question of having processes belong to the same or to different groups.

consumer-groups
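To make the group concept concrete outside of Service Bus, here is a minimal sketch using the plain Kafka consumer API (recent client releases); the broker address and topic name are placeholders. Two processes running this code with the same group.id split the messages between them (P2P style), while giving each process its own group.id makes every process receive a copy of each message (Publish/Subscribe style).

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupDemoConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "orders-processing");         // the group identifier
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));   // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}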

In the Kafka transport, you can use the Group Identifier field to configure which group the Proxy Service will belong to. The Group Identifier field is available in the Transport Details tab of the Proxy Service.

figure-1

When a new Proxy Service is created, the Kafka transport takes care of copying the name you set for it to the Group Identifier field, in order to make sure that the Proxy Service always belongs to a group. But the Group Identifier field allows the value to be changed, which is why you should be extremely careful about the value you set. For instance, if you create two Proxy Services – in the same or different Service Bus projects – that subscribe to a topic to perform distinct tasks when a message arrives, it can happen that only one of them receives the message if they use the same group identifier. To make this work, you will have to set different group identifiers so that each Proxy Service processes a copy of the message.

If you are concerned about scalability, the concept of groups can be quite interesting to scale the system using the Service Bus cluster. At runtime, Service Bus deploys the Proxy Service on each managed server of the cluster. Because each copy of the Proxy Service uses the same Group Identifier value set at design time, from the Kafka perspective they will be seen as one single consumer, causing a natural load balance between them when processing messages. That creates an interesting way to scale message consumption both vertically and horizontally. It can scale up (vertical scalability) because you can have a machine with multiple managed server JVMs, and it can scale out (horizontal scalability) because you can spread the managed servers across different machines in a cluster. Either way, adding more hardware resources will positively impact the message consumption throughput.

figure-2

The concept of groups can also be applied in the case of multiple clusters. For instance, consider a Service Bus cluster running a Proxy Service that subscribes to a topic using the group identifier “X”. If another Service Bus cluster runs a Proxy Service that subscribes to the same topic and also uses “X” as the group identifier, then those Service Bus clusters will load balance in an active-active style to process messages from the topic. Alternatively, if different group identifiers are used, both of them will receive a copy of the messages, and by having only one Service Bus cluster actually processing the messages while the other one stays in stand-by mode, it is possible to implement an active-passive style of processing. This can be particularly interesting if those Service Bus clusters are spread across different data centers.

Kafka also provides support for data center synchronization via a feature called Data Mirroring. That feature can be useful to keep different data centers synchronized, having data from one or more clusters in one data center copied to a remote destination cluster in another data center. You can read more information about this here.

Now let’s discuss another type of scalability, which is a variation of vertical scalability. Having one machine with multiple managed server JVMs can help to leverage a system running with multiple CPU cores, because the tasks can be executed in parallel. For instance, consider a machine equipped with sixteen CPU cores. We could have four of these cores dedicated to operating system tasks and use the other twelve cores to process messages from Kafka, using twelve managed server JVMs. The problem with this approach is that we have no guarantee from the operating system that each core will be solely dedicated to one process. From the operating system perspective, a task is executed on a CPU core through threads, and the operating system schedules the threads with priorities that we cannot control. While the operating system works to provide each process a fair chance to run its tasks, it may happen that processes with a larger number of threads get more priority, leaving other processes waiting.

Parallelism can instead be achieved using multiple threads per managed server JVM, breaking the messages coming from topics down into multiple streams and processing each stream with a thread. In Kafka, a stream is a high-speed data structure that continuously fetches data from a partition in the cluster. By default, each Proxy Service created is configured to handle one thread, and is consequently able to process only one stream. To increase the number of threads you can use the Consumer Threads field available in the Transport Detail tab of the Proxy Service.

figure-3

In the example above, the Proxy Service is configured to create sixteen threads. During endpoint activation the Kafka transport will request sixteen streams from the cluster, and each one of these streams will be handled by a dedicated thread, increasing the message consumption throughput considerably.

Make sure to set enough partitions for the topic, otherwise this feature will not work correctly. The best practice is that the number of partitions of a topic should be equal to the number of consumer threads available. Partitions are configured on a per-topic basis, in the Kafka cluster.

Regardless of how many threads are configured for the Proxy Service, all the work related to processing streams is scheduled by the Work Manager specified in the Dispatch Policy field. The default WebLogic Work Manager is selected when a Proxy Service is created, but it is strongly recommended to use a custom one, for two reasons. First, because the threads are kept in a waiting state to continuously listen for new messages, WebLogic may eventually presume that they have become stuck and react accordingly. This situation can be avoided by using a custom Work Manager capable of ignoring stuck threads.

Secondly, because WebLogic uses a single shared thread pool to execute any type of work, there is no way to guarantee that a Proxy Service will get a fair share of this pool unless a Work Manager with minimum and maximum thread constraints is used. This is particularly important for scenarios where Service Bus manages different types of services.

figure-4

If multiple Proxy Services using the Kafka transport are deployed in the same Service Bus domain, do not share the same Work Manager between them. This may cause thread starvation if the maximum thread constraint of the Work Manager is not high enough to handle the sum of all threads. As a best practice, create one Work Manager for each Proxy Service within your Service Bus domain.

Implementing Partitioning During Message Transmission

At runtime, each topic in Kafka is implemented using a cluster-aware partitioned log. This design is very scalable because a topic can handle a number of messages that goes beyond the limits of a single server, since the topic data is spread across the cluster using multiple partitions. In this context, a partition is a sequence of messages that is continuously appended to, in which each message is uniquely identified within the partition by a sequence identifier called the offset.

figure-5

By using multiple partitions, Kafka can not only handle more data but also scale out message consumption, by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. With this approach Kafka ensures that the consumer is the only reader of that partition, and is able to consume the data in an ordered fashion.

The partition assignment across the consumers is dynamic, even when new brokers/consumers join the cluster, or when they fail and leave it. This is accomplished by a process called Rebalancing that assigns the partitions across the consumer processes. While there is no way to control how the partitions are assigned to the consumer processes, there is a way to control to which partitions the messages are sent. In fact, the absence of control over how the messages are spread across the partitions creates a serious risk of overhead in message consumption if messages always end up in the same partition.

Fortunately, the Kafka transport provides ways to implement message partitioning across the brokers. This can be accomplished by using special headers in the message that is sent downstream to Kafka. During the transmission of a message to Kafka, the following algorithm is used:

1) If a partition is specified in the message header, then it is used.

2) Otherwise, if a key is specified, it is used to figure out the partition.

3) Otherwise, a randomly selected partition will be used instead.

The rationale of this algorithm is that if no additional information is provided in the message header, the message will be sent anyway, but with no control over which partition will be selected. Providing the appropriate information in the message header, either a partition or a key, gives you this fine control over partition assignment. For this to work, the header information must be written just before the message is sent downstream, and the best way to implement this is by using the Transport Headers action.
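This is the same selection logic exposed by Kafka’s native producer API, which the transport relies on underneath. A minimal sketch of the three cases (topic name and broker address are placeholders) might look like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitionDemoProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // 1) Explicit partition: the message goes to partition 2, the key is not used for routing
            producer.send(new ProducerRecord<>("orders", 2, "order-1001", "payload-1"));
            // 2) Key only: the partition is derived from a hash of the key
            producer.send(new ProducerRecord<>("orders", "order-1002", "payload-2"));
            // 3) Neither: the producer picks the partition itself
            producer.send(new ProducerRecord<>("orders", "payload-3"));
            producer.flush();
        }
    }
}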

To illustrate this scenario, consider a Business Service called “DispatchOrders”, created using the Kafka transport. Before routing the message to this Business Service, use a Transport Headers action in the Route node to set the header values. The header options available are: ‘partition’ and ‘message-key’. Here is an example of setting the message-key.

figure-6

The approach to set the partition header is the same. The Transport Headers action can also be used when messages are published in the middle of the Pipeline, so it is not restricted to routing nodes. Assigning both the partition and the message-key value in the header is pointless because, if provided, the partition always takes precedence during message transmission.

Improving the Batching of the Kafka Transport

The Kafka transport was designed from the ground up to leverage the new producer API, to obtain the best possible runtime results in terms of transmission throughput. Because the producer API sends all messages asynchronously, there is no blocking on the Business Services when messages are dispatched, and the threads used to perform this dispatch immediately return to the WebLogic thread pool. An internal listener is created for every message dispatched, to receive the confirmation of the message transmission. When this confirmation arrives (with or without an error), the response thread is scheduled to execute the response Pipeline, if any.

Because messages are sent asynchronously, they are not immediately written into the partitions; instead, they are locally buffered while waiting to be dispatched together. A background thread created by the producer API accumulates the messages and sends them in batches through the TCP connection established with each of the brokers it needs to communicate with. This batching approach considerably improves the transmission throughput by minimizing the overhead that the network may cause.

The Kafka transport provides options to configure how this batching is performed. Most scenarios will be fine using the standard configuration, but for high volume scenarios where Service Bus must write to Kafka, changing these options can positively affect the performance.

figure-15

The most important fields in this context are Buffer Memory and Batch Size, both measured in bytes. The Buffer Memory field controls the number of bytes to be buffered and defaults to 32MB. Changing this value affects the message dispatch and the JVM heap footprint. You can also control what happens when this buffer becomes full with the Block On Buffer Full field. The Batch Size controls the size of each batch and defaults to 16KB. Changing this value will directly affect how frequently writes go over the network.

In scenarios of heavy load, the value set in the Batch Size field will effectively control the frequency of the writes because the batch will fill up quickly. However, under moderate load the batch can take a while to fill up, and you can use the Linger field as an alternative way to control the write frequency. This field is measured in milliseconds, and any positive value will create a delay that waits for new messages to be batched together.
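These transport fields map onto native producer properties with the same names as the ones that appear in the property dump shown later in the troubleshooting section. A minimal sketch of an equivalent native configuration, with placeholder broker addresses and illustrative values, is shown below:

import java.util.Properties;

public class BatchingConfig {
    public static Properties producerBatchingProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
        props.put("buffer.memory", "33554432");    // Buffer Memory: 32MB of local buffering (default)
        props.put("batch.size", "16384");          // Batch Size: 16KB per batch (default)
        props.put("linger.ms", "5");               // Linger: wait up to 5ms for more messages under moderate load
        props.put("block.on.buffer.full", "true"); // Block On Buffer Full behavior
        return props;
    }
}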

Delivery Semantics during Message Transmission

In a distributed architecture, any type of failure can happen during the transmission of messages and prevent the consumers from receiving them. Therefore, it is up to the producer of the message to define the appropriate behavior for handling failures, otherwise data might be lost. Kafka is no different: when a message is sent, the producer needs some degree of guarantee regarding its delivery, in order to decide whether the transmission should be considered complete, or assumed to have failed and be tried again.

This section will describe some options available in the Kafka transport to customize the behavior of the message transmission regarding durability. But to use these options correctly, you need to understand how replication works in Kafka.

The concept of partitions was briefly discussed in the previous section, but one important detail was not mentioned: in Kafka, each partition can have multiple copies across the cluster. This is primarily done to provide high availability for the partitions, so that message consumption from a partition can continue even when the broker that hosts that partition fails. The number of replicas of the partitions can be set administratively on a per-topic basis, so you can better control how available the partitions are depending on the cluster size.

For instance, if you set the replication factor of a topic to three, each partition within that topic will have three replicas in the cluster. One of these replicas is selected as the leader, and the other replicas become followers. The difference between leaders and followers is that only the leader is responsible for reading from and writing into the partition, while the followers are used for backup purposes. Because the followers maintain a synchronized copy of the partition with the leader, they are known as ISRs, an acronym for In-Sync Replicas. The more ISRs exist for a partition, the more highly available it will be. But this high availability does not come for free; message transmission can be heavily impacted in terms of latency. For instance, when using the highest durability, the transmitted message needs to wait until all ISRs acknowledge to the leader before it is considered complete.

Kafka provides ways to customize this behavior, and they were captured as options in the Kafka transport, so you can change them to fit your specific requirements. When you create a Business Service using the Kafka transport, you can use the Acknowledge field to configure the appropriate behavior regarding durability. The available options are:

- Without Acknowledge: The fastest option in terms of latency, but data loss might happen.

- Leader Acknowledge: Waits for the leader to acknowledge, with a modest latency increase.

- ISRs Acknowledge: Highest durability. All ISR’s need to acknowledge, so the latency is higher.

Also, the Timeout field can be modified to customize the amount of time to wait before declaring that the chosen durability option was not met. It defaults to thirty seconds, but this value can be changed to accommodate specific scenarios such as slow network deployments.

figure-8

These options determine how durability behaves, but they do not control what happens if a message transmission is considered failed. How many retries must Service Bus perform? How long should Service Bus wait between each retry? Those are some of the questions that may arise during the implementation of services that send messages to Kafka. In the Kafka transport, this is configured in the Transport tab of your Business Service.

figure-9

These options are self-explanatory and well detailed in the official Service Bus documentation. However, there is one detail that you must be aware of. In Kafka, there are two specific properties that control how retries are performed: ‘retries’ and ‘retry.backoff.ms’. At runtime, the Kafka transport copies the value set in the Retry Count field to the retries property directly, without any extra processing. But in the case of the Retry Iteration Interval field, instead of copying the value to the retry.backoff.ms property directly, a calculation is performed to derive the appropriate value.

The reason this is done is the difference in unit precision. While the Retry Iteration Interval field has second precision, retry.backoff.ms has millisecond precision. Thus, the first step of the calculation converts the Retry Iteration Interval value to milliseconds; an approximate value for the retry.backoff.ms property is then obtained by dividing that value by the value set in the Retry Count field. For instance, if the value set in the Retry Count field is three and the value set in the Retry Iteration Interval field is one, then the value set in the retry.backoff.ms property is 333ms.

If you need the retry.backoff.ms property to use a specific value, you can set it using the Custom Properties field, which will be detailed in the next section.
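Putting the durability and retry settings together, the transport ultimately produces native producer properties such as the ones below; the property names are the same ones that appear in the property dump in the troubleshooting section, while the values and the mapping of the Acknowledge options to acks settings are illustrative assumptions.

import java.util.Properties;

public class DurabilityConfig {
    public static Properties producerDurabilityProperties() {
        Properties props = new Properties();
        props.put("acks", "all");             // ISRs Acknowledge (presumably acks=all); "1" for Leader, "0" for Without
        props.put("timeout.ms", "30000");     // Timeout field: thirty seconds by default
        props.put("retries", "3");            // Retry Count copied directly
        props.put("retry.backoff.ms", "333"); // Retry Iteration Interval of 1s divided by Retry Count 3
        return props;
    }
}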

Guidelines to Perform General Troubleshooting

This section will focus on how to identify and troubleshoot some issues that might happen with the Kafka transport. The first thing to learn is that the Kafka transport logs most internal events in the server output. During a troubleshooting session, make sure to capture the server output and read the logs for additional insight.

For instance, if during server startup the Kafka transport is not able to load the main Kafka libraries from the Service Bus classpath, it will log in the server output the following message:

<Jun 22, 2015 6:09:20 PM EDT> <Warning> <oracle.ateam.sb.transports.kafka> <OSB-000000> <Kafka transport could not be registered due to missing libraries. For using the Kafka transport, its libraries must be available in the classpath.>

This might explain why the Kafka transport seems not to be available in Service Bus. Similarly, if a Proxy Service is not able to connect to Zookeeper, it will print the following log in the server output:

<Jun 22, 2015 6:30:18 PM EDT> <Error> <oracle.ateam.sb.transports.kafka> <OSB-000000> <Error while starting a Kafka endpoint.>

com.bea.wli.sb.transports.TransportException: org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000

        at oracle.ateam.sb.transports.kafka.KafkaEndpoint.start(KafkaEndpoint.java:452)

        at oracle.ateam.sb.transports.kafka.KafkaTransportProvider.activationComplete(KafkaTransportProvider.java:129)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:606)

        at com.bea.wli.sb.transports.Util$1.invoke(Util.java:79)

        at com.sun.proxy.$Proxy158.activationComplete(Unknown Source)

The Kafka transport provides some JVM properties to customize specific behaviors. During endpoint activation, the Kafka transport builds native connections to Kafka using the producer and consumer APIs. For that, it needs to create Kafka-specific properties and fill them with the values set at design time. To display these specific properties in the server output, you can use the following JVM property:

-Doracle.ateam.sb.transports.kafka.endpoint.config.printProperties=true

This JVM property defaults to false. If it is set to true, the following log is printed in the server output during the endpoint activation:

<Jun 22, 2015 6:44:59 PM EDT> <Info> <oracle.ateam.sb.transports.kafka> <OSB-000000> <The endpoint ‘DispatchOrders’ is being created using the following properties>

block.on.buffer.full=true

retries=3

key.serializer=org.apache.kafka.common.serialization.StringSerializer

acks=1

send.buffer.bytes=131072

metadata.max.age.ms=300000

buffer.memory=33554432

batch.size=16384

timeout.ms=30000

max.request.size=1048576

linger.ms=0

bootstrap.servers=riferrei-laptop:9091,riferrei-laptop:9092

metadata.fetch.timeout.ms=60000

client.id=DispatchOrders

value.serializer=org.apache.kafka.common.serialization.StringSerializer

receive.buffer.bytes=32768

retry.backoff.ms=333

compression.type=none

reconnect.backoff.ms=10

The Kafka transport was designed to delay the activation of Proxy Service-based endpoints until the server reaches the RUNNING state. This is important to ensure that the server does not start processing messages during startup, a scenario that can significantly slow down the server and delay startup if the number of messages to consume is too high. For this reason, when the Kafka transport is initialized, it creates an internal timer that keeps polling the server about its state. Each check happens every five seconds, but you can set smaller or higher values using the following JVM property:

-Doracle.ateam.sb.transports.kafka.endpoint.startup.checkInterval=<NEW_VALUE>

This JVM property is measured in milliseconds. Keep in mind that there is a maximum possible value for this property, which is one minute. Any value greater than one minute will be automatically adjusted down to one minute to prevent harming the Service Bus startup.

When the Service Bus server reaches the RUNNING state, all the endpoints are started and the internal timer is destroyed automatically. But reaching the RUNNING state is not the only criterion for destroying the internal timer. As expected, it might happen that the server never reaches the RUNNING state for some reason. In this case, the internal timer will be destroyed if the startup takes more than five minutes. For most Service Bus deployments five minutes may be considered more than enough, but in the case of servers with longer startup times, the endpoints may never be started because the internal timer responsible for this was already destroyed when the server finally got into the RUNNING state. To prevent this from happening, you can change the timeout value by using the following JVM property:

-Doracle.ateam.sb.transports.kafka.endpoint.startup.timeout=<NEW_VALUE>

This JVM property is measured in milliseconds. Adjust it to make sure that the internal timer responsible for starting the Proxy Services waits long enough to do so.

While working with Proxy Services created with the Kafka transport, there will be situations where you want to make sure that the number of threads set in the Consumer Threads field is actually in effect. This can be accomplished by using the standard WebLogic monitoring tools available in the administration console. For instance, by accessing the Monitoring tab of the Kafka transport deployment, you can use the Workload sub-tab to check whether a specific Work Manager scheduled the appropriate work.

figure-10

The example above shows a Work Manager with sixteen pending requests, a value that matches the number of threads set in the Consumer Threads field. Of course, for this to work, the Work Manager needs to be explicitly associated with the Proxy Service.

Another way to verify if the value set in the Consumer Threads field is working is by taking thread dumps from the JVM. In the thread dump report, you will have entries that look like this:

“[ACTIVE] ExecuteThread: ’21’ for queue: ‘weblogic.kernel.Default (self-tuning)'” Id=161 WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3c0fcd49

               at sun.misc.Unsafe.park(Native Method)

               –  waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3c0fcd49

               at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)

               at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)

               at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)

               at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:63)

               at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:33)

               at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)

               at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)

               at oracle.ateam.sb.transports.kafka.KafkaEndpoint$StreamListener.run(KafkaEndpoint.java:510)

               at weblogic.work.WorkAreaContextWrap.run(WorkAreaContextWrap.java:55)

               at weblogic.work.ContextWrap.run(ContextWrap.java:40)

               at com.bea.alsb.platform.weblogic.WlsWorkManagerServiceImpl$WorkAdapter.run(WlsWorkManagerServiceImpl.java:194)

               at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:548)

               at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)

               at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)

The rationale here is looking for the following string:

“oracle.ateam.sb.transports.kafka.KafkaEndpoint$StreamListener.run”

The number of times this string appears in the thread dump report needs to match the value set in the Consumer Threads field.
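A quick way to count those occurrences in a saved thread dump file is sketched below; the file path is passed as a command-line argument.

import java.nio.file.Files;
import java.nio.file.Paths;

public class CountStreamThreads {
    public static void main(String[] args) throws Exception {
        String marker = "oracle.ateam.sb.transports.kafka.KafkaEndpoint$StreamListener.run";
        // Count the lines of the thread dump that reference the stream listener
        long count = Files.lines(Paths.get(args[0]))
                          .filter(line -> line.contains(marker))
                          .count();
        System.out.println("Stream listener threads found: " + count);
    }
}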

Another important feature included in the Kafka transport is the Custom Properties field, available in all types of services created. The primary purpose of this field is to provide a way to directly set any specific Kafka property. This can be quite useful if some property introduced in a particular release of Kafka has no equivalent in the user interface, a situation that may occur often due to the open source nature of Kafka, where new releases are provided by the community at a much faster pace.

figure-11

Any property set in this field is used by the Kafka transport to establish native connections to Kafka. However, there is an important aspect about precedence that needs to be carefully considered. If some property is set in this field, and this property has an equivalent in the user interface, the value used will be the one set in the Custom Properties field, overriding any value set in the user interface.

Conclusion

This article is part two of a series that intends to show how Service Bus can be leveraged to connect to, and exchange data with, Kafka, allowing developers and architects to gain from both worlds. It explained advanced techniques that can be applied to the Kafka transport, and also how to identify and troubleshoot common issues.

Invoke Fusion Cloud Secured RESTFul Web Services


Introduction

The objective of this blog is to demonstrate how to invoke secured RESTful web services from Fusion Cloud using Oracle Service Oriented Architecture (SOA) as an integration hub for real-time integration with other clouds and on-premise applications. SOA could be on-premise or in the cloud (PaaS). The SOA composites deployed in on-premise SOA can be migrated to SOA in the cloud.

What is REST?

REST stands for Representational State Transfer. It ignores the details of implementation and applies a set of interaction constraints. Web service APIs that adhere to the REST architectural constraints are called RESTful. HTTP-based RESTful APIs are defined with the following aspects:

  • Exactly one entry point – For example: http://example.com/resources/
  • Support of media type data – JavaScript Object Notation (JSON) and XML are common
  • Standard HTTP Verbs (GET, PUT, POST, PATCH or DELETE)
  • Hypertext links to reference state
  • Hypertext links to reference related resources

Resources & Collections

The Resources can be grouped into collections. Each collection is homogeneous and contains only one type of resource. For example:

URI | Description | Example
/api/ | API Entry Point | /fusionApi/resources
/api/:coll/ | Top Level Collection :coll | /fusionApi/resources/department
/api/:coll/:id | Resource ID inside Collection | /fusionApi/resources/department/10
/api/:coll/:id/:subcoll | Sub-collection | /fusionApi/resources/department/10/employees
/api/:coll/:id/:subcoll/:subid | Sub Resource ID | /fusionApi/resources/department/10/employees/1001

 

Invoking Secured RestFul Service using Service Oriented Architecture (SOA)

SOA 12c supports the REST Adapter, and it can be configured as a service binding component in a SOA composite application. For more information, please refer to this link. In order to invoke a secured RESTful service, Fusion security requirements must be met. These are the requirements:

Fusion Applications Security

All external URLs in the Oracle Fusion Cloud for RESTful services are secured using Oracle Web Services Manager (OWSM). The server policy is “oracle/http_jwt_token_client_policy”, which allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager(OAM) Token-service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

JSON Web Token (JWT) is a lightweight implementation for web services authentication. A client with a valid JWT token is allowed to call the REST service until the token expires. The existing OWSM policy “oracle/wss11_saml_or_username_token_with_message_protection_service_policy” has the JWT over SSL assertion. For more information, please refer to this link.

The client must provide one of the above policies in the security headers of the invocation call for authentication. In SOA, a client policy may be attached from Enterprise Manager (EM) to decouple it from design time.
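To illustrate what the HTTP Basic Authentication over SSL option amounts to on the wire, independently of SOA and OWSM, here is a minimal Java sketch; the host, resource path (borrowed from the example table above) and credentials are placeholders only.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class FusionRestBasicAuthCall {
    public static void main(String[] args) throws Exception {
        // Placeholder host and resource path (the path follows the example table above)
        URL url = new URL("https://fusionhost.example.com/fusionApi/resources/department/10");
        String credentials = "username:password"; // placeholder Fusion user credentials
        String basicAuth = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection(); // HTTPS is mandated by the policy
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", basicAuth);
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}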

Fusion Security Roles

The user must have the appropriate Fusion roles, including the respective data security roles, to view or change resources in Fusion Cloud. Each product pillar has its own roles. For example, in HCM, a user must have a role that inherits the following roles:

  • HCM REST Services Duty – Example: “Human Capital Management Integration Specialist”
  • Data security Roles that inherit “Person Management Duty” – Example: “Human Resource Specialist – View All”

 

Design SOA Code using JDeveloper

In your SOA composite editor, right-click the Exposed Services swimlane and select Insert > REST. This action adds REST support as a service binding component to interact with the appropriate service component.

This is a sample SOA composite with the REST Adapter using a Mediator component (you can also use BPEL):

rest_composite

The following screens show how to configure the REST Adapter as an external reference:

REST Adapter Binding

rest_adapter_config_1

REST Operation Binding

rest_adapter_config_2

The REST Adapter converts the JSON response to XML using the Native Format Builder (NXSD). For more information on configuring NXSD from JSON to XML, please refer to this link.

generic_json_to_xml_nxd

Attaching Oracle Web Service Manager (OWSM) Policy

Once the SOA composite is deployed to your SOA server, the HTTP Basic Authentication OWSM policy is attached as follows:

Navigate to your composite from EM and click on the Policies tab as follows:

 

rest_wsm_policy_from_EM_2

 

Identity Propagation

Once the OWSM policy is attached to your REST reference, the HTTP token can be passed using the Credential Store. Please create the credential store entry as follows:

1. Right-Click on  SOA Domain and select Security/Credentials.

rest_credential_1

2. Please see the following screen to create a key under oracle.wsm.security map:

 

rest_credential_2

Note: If oracle.wsm.security map is missing, then create this map before creating a key.

 

By default, the OWSM policy uses the basic.credentials key. To use the newly created key from above, the default key is overridden using the following instructions:

1. Navigate to REST reference binding as follows:

rest_wsm_overridepolicyconfig

rest_wsm_overridepolicyconfig_2

Replace basic.credentials with your new key value.

 

Secure Socket Layer (SSL) Configuration

In Oracle Fusion Applications, the OWSM policy mandates the HTTPS protocol. For an introduction to SSL and detailed configuration, please refer to this link.

The cloud server certificate must be imported in two locations as follows:

1. keytool -import -alias slc08ykt -file /media/sf_C_DRIVE/JDeveloper/mywork/MyRestProject/facert.cer -keystore /oracle/xehome/app/soa12c/wlserver/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

This is the output:

Owner: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Issuer: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Serial number: 7
Valid from: Mon Apr 25 09:08:55 PDT 2011 until: Thu Apr 22 09:08:55 PDT 2021
Certificate fingerprints:
MD5: 30:0E:B4:91:F3:A4:A7:EE:67:6F:73:D3:E1:1B:A6:82
SHA1: 67:93:15:14:3E:64:74:27:32:32:26:43:FF:B8:B9:E6:05:A8:DE:49
SHA256: 01:0E:2A:8A:D3:A9:3B:A4:AE:58:4F:AD:2C:E7:BD:45:B7:97:6F:A0:C4:FA:96:A5:29:DD:77:85:3A:05:B1:B8
Signature algorithm name: MD5withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

2. keytool -import -alias <name> -file /media/sf_C_DRIVE/JDeveloper/mywork/MyRestPorject/facert.cer -trustcacerts -keystore /oracle/xehome/app/jdk1.7.0_55/jre/lib/security/cacerts

This is the output:

Enter keystore password:
Owner: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Issuer: CN=*.us.mycompany.com, DC=us, DC=oracle, DC=com
Serial number: 7
Valid from: Mon Apr 25 09:08:55 PDT 2011 until: Thu Apr 22 09:08:55 PDT 2021
Certificate fingerprints:
MD5: 30:0E:B4:91:F3:A4:A7:EE:67:6F:73:D3:E1:1B:A6:82
SHA1: 67:93:15:14:3E:64:74:27:32:32:26:43:FF:B8:B9:E6:05:A8:DE:49
SHA256: 01:0E:2A:8A:D3:A9:3B:A4:AE:58:4F:AD:2C:E7:BD:45:B7:97:6F:A0:C4:FA:96:A5:29:DD:77:85:3A:05:B1:B8
Signature algorithm name: MD5withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

You must restart Admin and SOA Servers.

 

Testing

Deploy the above composite to your SOA server. The SOA composite can be invoked from EM or using tools like SoapUI. Please see the following link to test the REST adapter using HTTP Analyzer.

Conclusion

This blog demonstrates how to invoke secured REST services from the Fusion Applications cloud using SOA. It provides detailed configuration steps for importing cloud keystores and attaching OWSM policies. This sample supports multiple patterns such as cloud-to-cloud, cloud-to-on-premise, cloud-to-BPO, etc.


Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

This article describes how to integrate Oracle Service Cloud (RightNow) with Business Intelligence Cloud Service (BICS), illustrating how to programmatically load Service Cloud (RightNow) data into BICS, making it readily available to model and display on BICS dashboards.

Two artifacts are created in the process:

1)    BICS Database Table

The table is created within the BICS database to load / store the Service Cloud (RightNow) data.

2)    Stored Procedure – within BICS Database

A stored procedure is created within the BICS Schema Service Database. The stored procedure uses the apex_web_service.make_request function to call an external SOAP web service that runs a ROQL query, returning the results from Service Cloud (RightNow) in XML format. The apex_web_service.parse_xml function is used to parse the XML output into a delimited list. The results are then separated into columns and inserted into the BICS Database Table created in step 1.

The ten steps below are covered in detail to assist in replicating and testing the integration solution. Those already very familiar with “Cloud Service (RightNow) web services”, “ROQL”, and the “Apex API”, may prefer to jump straight to step five. That said, steps one to four are very useful for constructing, comprehending and debugging the apex_web_service.make_request and apex_web_service.parse_xml functions referenced in the stored procedure.

1)    Obtain the Standard WSDL

2)    Construct the ROQL Query

3)    Build the Soap Envelope

4)    Formulate the XPath

5)    Create the BICS Database Table

6)    Create the Stored Procedure

7)    Execute the Stored Procedure

8)    Refresh the Results

9)    Display the Results

10)  Schedule

 

Main Article

Step One – Obtain the Standard WSDL

1)    Collect and make note of the “host name” and the “interface name” from the Oracle Service Cloud (RightNow) environment.

2)    Open the web browser and enter the following two URLs, testing the below standard WSDLs, replacing the host name and interface values.

First Test

URL: https://<host_name>/cgi-bin/<interface>.cfg/services/soap?wsdl

Sample URL: https://integration-test.rightnowdemo.com/cgi-bin/integration_test.cfg/services/soap?wsdl

Second Test

URL: https://<host_name>/cgi-bin/<interface>.cfg/services/soap?wsdl=typed

Sample URL: http://integration-test.rightnowdemo.com/cgi-bin/integration_test.cfg/services/soap?wsdl=typed

Where host_name is integration-test.rightnowdemo.com and interface is integration_test

For both tests the following should be returned to the browser. If instead an error is displayed, the Oracle Service Cloud (RightNow) WSDL is incorrect and must be rectified before continuing.

Snap1

Step Two – Construct the ROQL Query

CXDev Toolbox can be used to build and test ROQL.

It can be downloaded from: http://toolbox.cxdeveloper.com

Launch CXDev, then enter the connection information below to login.

1)    Soap URL (obtained from previous steps) – without the wsdl:

URL: https://<host_name>/cgi-bin/<interface>.cfg/services/soap

URL Sample: https://integration-test.rightnowdemo.com/cgi-bin/integration_test.cfg/services/soap

2)    Username

3)    Password – Note the password is in a strange white box that sits above the Soap URL.

4)    Click Login

Snap6

5)    Click on the ROQL Tester icon at the bottom of the screen.

Snap7

6)    Type in the ROQL

For Example:

Select  ID, Name.First,Contact.Address.City
From Contact
Where ID Between 1 and 100;

7)    Click Run Query. Results will be returned in a tabular format. Ensure the query returns results and runs error free before continuing.

Snap8

Step Three – Build the Soap Envelope

1)    Use SoapUI http://www.soapui.org to test the envelope.

Create a New SOAP Project. Give it any name. The Initial WSDL is what was tested in the previous step (remember to add ?wsdl=typed at the end).

URL: https://<host_name>/cgi-bin/<interface>.cfg/services/soap?wsdl=typed

For example: http://integration-test.rightnowdemo.com/cgi-bin/integration_test.cfg/services/soap?wsdl=typed

Snap2

2)    Double Click on QueryCSV -> Request

Snap3

3)    Delete everything from the request and replace with the following.

Click Here for text version of Soap Envelope.

Replace Username and Password with the Oracle Service Cloud (RightNow) Username / Password.

Replace the SQL if the ROQL Query has been customized.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="urn:messages.ws.rightnow.com/v1_2">
<soapenv:Header>
<v1:ClientInfoHeader>
<v1:AppID>Basic Create</v1:AppID>
</v1:ClientInfoHeader>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" mustUnderstand="1">
<wsse:UsernameToken>
<wsse:Username>Username</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Password</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<v1:QueryCSV>
<v1:Query>Select ID, Name.First,Contact.Address.City from Contact where ID between 1 and 100</v1:Query>
<v1:PageSize>100</v1:PageSize>
<v1:Delimiter>~</v1:Delimiter>
</v1:QueryCSV>
</soapenv:Body>
</soapenv:Envelope>

4)    Submit the request. It should return the ID, First Name and City details in XML. For example

Click Here for text version of XML Output

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header/>
<soapenv:Body>
<n0:QueryCSVResponse xmlns:n0="urn:messages.ws.rightnow.com/v1_2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<n0:CSVTableSet>
<n0:CSVTables>
<n0:CSVTable>
<n0:Name>Contact</n0:Name>
<n0:Columns>ID~First~City</n0:Columns>
<n0:Rows>
<n0:Row>1~Cliente~</n0:Row>
<n0:Row>2~Thiago~</n0:Row>
<n0:Row>3~Contato~</n0:Row>
<n0:Row>4~Teste~</n0:Row>
<n0:Row>5~Thiago~</n0:Row>
<n0:Row>6~Marcelle~Rio de Janeiro</n0:Row>
<n0:Row>7~Novo CRM~Rio de Janeiro</n0:Row>
<n0:Row>9~OI~</n0:Row>
<n0:Row>10~Teste~</n0:Row>
<n0:Row>11~Contato do Cliente~</n0:Row>
<n0:Row>12~Contato do Cliente 2~</n0:Row>
<n0:Row>37~TEste~</n0:Row>
<n0:Row>38~John~</n0:Row>
<n0:Row>39~John1~</n0:Row>
<n0:Row>40~John2~</n0:Row>
<n0:Row>41~rrrrrr~</n0:Row>
<n0:Row>58~Jane601~Redwood Shores</n0:Row>
<n0:Row>59~Jane602~Redwood Shores</n0:Row>
<n0:Row>60~Jane603~Redwood Shores</n0:Row>
<n0:Row>61~Jane604~Redwood Shores</n0:Row>
<n0:Row>62~Jane605~Redwood Shores</n0:Row>
<n0:Row>63~Jane606~Redwood Shores</n0:Row>
<n0:Row>64~Jane607~Redwood Shores</n0:Row>
<n0:Row>65~Jane608~Redwood Shores</n0:Row>
<n0:Row>66~Jane609~Redwood Shores</n0:Row>
<n0:Row>67~Jane610~Redwood Shores</n0:Row>
<n0:Row>68~Jane701~Redwood Shores</n0:Row>
<n0:Row>69~Jane611~Redwood Shores</n0:Row>
<n0:Row>70~Jane612~Redwood Shores</n0:Row>
<n0:Row>71~Jane702~Redwood Shores</n0:Row>
<n0:Row>72~Jane613~Redwood Shores</n0:Row>
<n0:Row>73~Jane703~Redwood Shores</n0:Row>
<n0:Row>74~Jane614~Redwood Shores</n0:Row>
<n0:Row>75~Jane704~Redwood Shores</n0:Row>
<n0:Row>76~Jane615~Redwood Shores</n0:Row>
<n0:Row>77~Jane705~Redwood Shores</n0:Row>
<n0:Row>78~Jane616~Redwood Shores</n0:Row>
<n0:Row>79~Jane706~Redwood Shores</n0:Row>
<n0:Row>80~Jane617~Redwood Shores</n0:Row>
<n0:Row>81~Jane707~Redwood Shores</n0:Row>
<n0:Row>82~Jane618~Redwood Shores</n0:Row>
<n0:Row>83~Jane708~Redwood Shores</n0:Row>
<n0:Row>84~Jane619~Redwood Shores</n0:Row>
<n0:Row>85~Jane709~Redwood Shores</n0:Row>
<n0:Row>86~Jane620~Redwood Shores</n0:Row>
<n0:Row>87~Jane710~Redwood Shores</n0:Row>
<n0:Row>88~Jane711~Redwood Shores</n0:Row>
<n0:Row>89~Jane621~Redwood Shores</n0:Row>
<n0:Row>90~Jane712~Redwood Shores</n0:Row>
<n0:Row>91~Jane622~Redwood Shores</n0:Row>
<n0:Row>92~Jane713~Redwood Shores</n0:Row>
<n0:Row>93~Jane623~Redwood Shores</n0:Row>
<n0:Row>94~Jane714~Redwood Shores</n0:Row>
<n0:Row>95~Jane624~Redwood Shores</n0:Row>
<n0:Row>96~Jane715~Redwood Shores</n0:Row>
<n0:Row>97~Jane625~Redwood Shores</n0:Row>
<n0:Row>98~Jane716~Redwood Shores</n0:Row>
<n0:Row>99~Jane626~Redwood Shores</n0:Row>
<n0:Row>100~Jane717~Redwood Shores</n0:Row>
</n0:Rows>
</n0:CSVTable>
</n0:CSVTables>
</n0:CSVTableSet>
</n0:QueryCSVResponse>
</soapenv:Body>
</soapenv:Envelope>

Step Four – Formulate the XPath

1)    Free Formatter XPath Tester can be used to build and test the XPath. This is particularly helpful for debugging.

http://www.freeformatter.com/xpath-tester.html

Copy the XML output returned by SOAPUI into the XML Input box.

Click Here for text version of XML Output

Copy the following XPath into the XPath expression box.

Click Here for the XPaths in text format.

//n0:QueryCSVResponse/n0:CSVTableSet/n0:CSVTables/n0:CSVTable/n0:Rows/n0:Row/text()

2)    Click: TEST XPATH

The following error may be encountered: "Namespace with prefix 'n0' has not been declared".


3)    To resolve this error, add the n0 namespace definition to the XML input (take care that the quotes copy and paste correctly):

xmlns:n0="urn:messages.ws.rightnow.com/v1_2"

4)    Place it after:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"

5)    The opening element should then look like this (confirm the quotes pasted correctly and are in the correct positions to avoid errors):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:n0="urn:messages.ws.rightnow.com/v1_2">

6)    Re-Click TEST XPATH

The results will be returned in a delimited list. If an error has occurred, the XPath may need correcting.
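For the sample response above, the first few nodes returned by the XPath look like this (and so on for the remaining rows):

1~Cliente~
2~Thiago~
3~Contato~
4~Teste~
5~Thiago~
6~Marcelle~Rio de Janeiro
7~Novo CRM~Rio de Janeiro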


Step Five – Create the BICS Database Table

1)    Open SQL Workshop from Oracle Application Express

2)    Launch SQL Commands

3)    Run the CREATE_TABLE SQL Command to create the RIGHTNOW table in the BICS database

To view SQL in plain text click here: CREATE_TABLE

CREATE TABLE "RIGHTNOW"
("ID" NUMBER,
"FIRST_NAME" VARCHAR2(100 BYTE),
"CITY" VARCHAR2(100 BYTE)
);


Step Six – Create the Stored Procedure

Create and run the BICS_RN_INTEGRATION stored procedure that calls the web service to pull the Oracle Service Cloud (RightNow) data and loads it into the RIGHTNOW table.

The stored procedure leverages the APEX_WEB_SERVICE API (using apex_web_service.make_request). This example invokes a SOAP-style web service; however, the APEX_WEB_SERVICE API can also be used to invoke RESTful-style web services. The web service returns the results in XML, and apex_web_service.parse_xml is used to extract the data set. Each row is returned as a delimited list separated by "~". The results are then split into columns and loaded into the database table.

Replace the following values:

1)    l_envelope – If changed in "Step Three – Build the Soap Envelope", replace with the custom envelope.

2)    Username

3)    Password

4)    ROQL Query – The ROQL Query constructed in "Step Two – Construct the ROQL Query".

5)    p_url – The Standard WSDL obtained in "Step One – Obtain the Standard WSDL", excluding the endpoint. It should end in /soap.

6)    p_xpath – If changed in "Step Four – Formulate the XPath", replace with the custom XPath.

7)    Table Name – The name of the table created in "Step Five – Create the BICS Database Table".

8)    Insert Values – Duplicate for each column, increasing the occurrence (fourth) parameter of regexp_substr by 1 for each additional column, as illustrated in the example below.
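For example, the following query (an illustration only, run against DUAL) shows how regexp_substr extracts the individual columns from one of the sample "~" delimited rows returned earlier:

select
regexp_substr('6~Marcelle~Rio de Janeiro', '[^~]+', 1, 1) as id,
regexp_substr('6~Marcelle~Rio de Janeiro', '[^~]+', 1, 2) as first_name,
regexp_substr('6~Marcelle~Rio de Janeiro', '[^~]+', 1, 3) as city
from dual;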


To view SQL in plain text click here: BICS_RN_INTEGRATION

CREATE OR REPLACE PROCEDURE "BICS_RN_INTEGRATION"
IS
l_envelope CLOB;
l_xml XMLTYPE;
f_index number;
l_str varchar2(1000);
BEGIN
l_envelope := '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="urn:messages.ws.rightnow.com/v1_2">
<soapenv:Header>
<v1:ClientInfoHeader>
<v1:AppID>Basic Create</v1:AppID>
</v1:ClientInfoHeader>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" mustUnderstand="1">
<wsse:UsernameToken>
<wsse:Username>Username</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Password</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<v1:QueryCSV>
<v1:Query>Select ID, Name.First, Contact.Address.City From Contact Where ID Between 1 and 100</v1:Query>
<v1:PageSize>100</v1:PageSize>
<v1:Delimiter>~</v1:Delimiter>
</v1:QueryCSV>
</soapenv:Body>
</soapenv:Envelope>';
l_xml := apex_web_service.make_request(
p_url => 'https://integration-test.rightnowdemo.com/cgibin/integration_test.cfg/services/soap'
,p_envelope => l_envelope
,p_proxy_override => 'www-proxy.us.oracle.com'
);
--dbms_output.put_line ( l_xml.getCLOBVal() );
delete from rightnow;
f_index := 0;
LOOP
f_index := f_index + 1;
l_str := apex_web_service.parse_xml(
p_xml => l_xml,
p_xpath => '//n0:QueryCSVResponse/n0:CSVTableSet/n0:CSVTables/n0:CSVTable/n0:Rows/n0:Row['||to_char(f_index)||']/text()',
p_ns => 'xmlns:n0="urn:messages.ws.rightnow.com/v1_2"' );
-- dbms_output.put_line(dbms_lob.substr(l_str,24000,1));
-- exit before inserting so an empty row is not written once all rows have been processed
exit WHEN l_str is null;
insert into rightnow
values (
regexp_substr(l_str, '[^~]+', 1, 1),
regexp_substr(l_str, '[^~]+', 1, 2),
regexp_substr(l_str, '[^~]+', 1, 3)
);
end loop;
commit;
END;
/


Step Seven – Execute the Stored Procedure

1)    Execute the Stored Procedure in Oracle Apex

begin
BICS_RN_INTEGRATION();
end;
/


2)    Confirm data loaded correctly.
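For example, a quick check in SQL Commands confirms the row count and a sample of the loaded data:

select count(*) from rightnow;

select * from rightnow where rownum <= 10;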


Step Eight – Refresh the Results

Once the data is loaded, the cache must be manually cleared in the Data Modeler. Cache purging via the BICS REST API is planned to be made available in July 2015; until then, manually clear the cache from the Data Modeler by clicking the cog to the right of the table and selecting "Clear Cached Data".


Step Nine – Display the Results

The loaded data can now be displayed in BICS analyses and dashboards.

Step Ten – Schedule

To schedule data loading use the Cloud Scheduler.

Create Job:

BEGIN
cloud_scheduler.create_job(
job_name => 'LOAD_RIGHTNOW',
job_type => 'STORED_PROCEDURE',
job_action => 'BICS_RN_INTEGRATION',
start_date => '01-JUN-30 07.00.00.000000 AM -05:00',
repeat_interval => 'FREQ=DAILY',
enabled => TRUE,
comments => 'Loads RIGHTNOW');
END;

View Jobs:

SELECT * FROM user_scheduler_jobs;

Delete Job:

BEGIN
cloud_scheduler.drop_job(
job_name => 'LOAD_RIGHTNOW');
END;

Run Job Now:

BEGIN
cloud_scheduler.run_job('LOAD_RIGHTNOW');
END;
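To check the outcome of previous runs, the standard scheduler run-details view can be queried (assuming it is exposed in your schema):

SELECT job_name, status, actual_start_date, run_duration
FROM user_scheduler_job_run_details
ORDER BY actual_start_date DESC;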

Further Reading

Click here for more information on using the APEX_WEB_SERVICE API.

Click here for more information on Service Cloud (RightNow) Web Services.

Summary

This article has provided a set of example artifacts that leverage the APEX_WEB_SERVICE API to integrate Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) using SOAP-style web services. It may also be a good starting point for those wanting to perform similar integrations using RESTful-style web services. This method could also be used to integrate BICS with other Oracle and third-party applications.

Accelerating a Manual IDM Upgrade as Part of a Fusion Applications Upgrade


Introduction

When you upgrade your Oracle Fusion Applications (FA) environment, Oracle provides a tool set called the Upgrade Orchestrator that reduces the complexity of, and the need for manual intervention in, the upgrade by calling most tasks automatically. As you likely know by now, the Upgrade Orchestrator has what are known as pause points so that you can manually run specific tasks that cannot be called automatically by the tool. One of the most manually intensive and time consuming pause points is upgrading the Identity Management (IDM) components of your FA environment to the next release.

If your environment was originally provisioned on Release 7 or later, you may be able to take advantage of an automated IDM upgrade process provided by the Upgrade Orchestrator, but often a manual upgrade is still called for. Specifically, if your original Identity Management (IDM) installation was configured to use anything other than the SINGLE, 3-NODE, or 4-NODE configurations, you will be required to "manually" upgrade IDM to the next release. In this blog we will walk you through "automating" the "manual" IDM upgrade by making use of the patching framework for IDM artifacts, details of which can be found in the Oracle® Fusion Applications Patching Guide, chapter 6: Patching Oracle Identity Management Artifacts. Later releases also have a similar chapter outlining the same concepts discussed here. We will illustrate the usage of the IDM patching framework for a Release 8 to Release 9 upgrade. The goal is to show you how to "automate" the manual upgrade of the IDM components during a Fusion Applications upgrade, providing a significant time saving over purely manual patching methods.

 Upgrade Flow

Figure 1 – High level view of Fusion Applications upgrade flow

Automatic or Manual IDM Upgrade

The Upgrade Orchestrator supports an automated IDM upgrade if your deployment is on a Linux 64-bit platform, is running IDM provisioned using the IDM provisioning wizard (a.k.a. idmlcm), and was provisioned on Release 7 or later.

IDM can be provisioned using multiple topology configurations, yet at this time only three topologies are supported for automated IDM upgrade through the orchestrator. This blog will show you how to automate the MANUAL Option shown below.

When setting up the properties files for the Upgrade Orchestrator, you specify which topology was provisioned by updating the IDM_SETUP_TYPE property in the IDM.properties file (see the example after the list below) using one of the following values:

  • SINGLE – All IDM Nodes (IDM, IAM, OHS) and the Database are installed on a single host
  • 3-NODE – The IDM, IAM and OHS Nodes are installed on independent hosts and the Database is installed on the IDM node
  • 4-NODE – The Database, IDM, IAM and OHS Nodes are installed on independent hosts.
  • MANUAL – If your environment was provisioned with a different topology configuration, or was originally installed with Release 6, use the value MANUAL to indicate to the orchestrator that the IDM upgrade is to be performed manually.
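For the MANUAL case covered in this blog, the relevant entry in IDM.properties is simply the following (the property name is as referenced above; all other properties remain as configured for your environment):

IDM_SETUP_TYPE=MANUAL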

Introduction to Patching Oracle Identity Management Artifacts

Alongside the introduction of the Upgrade Orchestrator in Release 7 of Fusion Applications, an often overlooked but very useful toolset was also introduced: the idmpatchmanager.sh and idmpatch.sh tools. Let's first talk about what each of these tools does, and then we will walk you through how to use them effectively.

  • idmpatchmanager.sh – The Oracle Identity Management Patch Manager is a tool that generates the patch plan and controls the patch session. It is fully described beginning in section 6.2.1 in the Oracle® Fusion Applications Patching Guide, but basically it does the following:
    • Patch plan generation
  • idmpatch.sh – The Oracle Identity Management Patcher is a per-host patch execution engine. It uses the patch plan generated by the patch manager script and executes in the following order:
    • Patch Apply Prerequisite Phase (all services will be up)
    • Patch Pre-Apply Phase (all services will be down)
    • Patch Apply Phase (limited services will be available)
      • This stage also takes care of any post-patch artifact changes that may need to be made

 

Installing the Oracle Identity Management Lifecycle Management Tools

Starting with Release 7, a version of the patching framework is automatically installed as part of the IDM Provisioning Wizard when your environment is originally provisioned. A newer version of the idmlcm tools is also included in the upgrade media you downloaded for the upgrade you are performing; it can be found here:

$REPOSITORY_LOCATION/installers/idmlcm/idmlcm/Disk1

Note: the environment variable $REPOSITORY_LOCATION is explained below

The complete instructions for installing the patching framework are located in the Installing the Identity Management Provisioning Tools chapter of the Oracle Fusion Applications Installation Guide.

As mentioned before, each release of Oracle Fusion Applications includes an updated version of the idmlcm tools, and each release upgrade uses the tool in a slightly different way.

When upgrading from R7 to R8 you should use the R7 version of the idmlcm tools installed during provisioning, while upgrading from R8 to R9 you must install and use the new R9 version of the idmlcm tools provided with the R9 media pack.

TIP: You may wish to rename the original IDMLCM directory that was installed at provisioning time so that the installation of the new version of the tools does not wipe it out.
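For example (the path is illustrative; use the actual location of your existing idmlcm installation):

mv /u01/app/idmlcm /u01/app/idmlcm_r8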

If you are running the R7 or R8 version of the idmlcm tools, take a copy of env.properties.template as env.properties and make the updates mentioned below. If you are running the R9 version of the tools, the file is called common.properties.

Validate and edit the following environment variables in the $IDM_LCM_TOP/common/config/common.properties file to match your environment, similar to:

JAVA_HOME=/u01/app/shared_location/yourRelease/Repository/jdk6
IDM_TOP=/u01/app/idm
LCM_CONFIG=/u01/app/idm/config/lcmconfig

Note: the environment variable $IDM_LCM_TOP is explained below

 

Using the idmpatchmanager.sh Script to Set up Your Patch Plan

The first thing we need to do is define several environment variables that are used during the process. Some of these values (such as the shared location path and the idmlcm directory) can be named as you wish; the others are the typical default values associated with most FA installations. Feel free to adjust the values to match your environment.

export SHARED_LOCATION=/u01/app/shared_location
# yourRelease is the release you are upgrading to
export REPOSITORY_LOCATION=${SHARED_LOCATION}/yourRelease/Repository
# x varies per environment (e.g. the new idmlcm directory installed for this upgrade)
export IDM_LCM_TOP=/u01/app/idmlcmx
export IDM_TOP=/u01/app/idm
export LCM_CONFIG=${IDM_TOP}/config/lcmconfig
export JAVA_HOME=${REPOSITORY_LOCATION}/jdk6
export PATH=${JAVA_HOME}/bin:${PATH}

Create the Patch Plan and Apply the Patches

You create the patch plan by running the patch manager script with the apply option, as described in the Oracle Fusion Applications Patching Guide:

cd ${IDM_LCM_TOP}/patch/bin
./idmpatchmgr.sh apply -patchtop ${REPOSITORY_LOCATION}/installers

This will generate the patch plan.

After the patch plan is created it is time to apply the patches. Running the idmpatch.sh script with the run option does this:

./idmpatch.sh run

TIP: If you want to know how long the operation takes to run, you can prefix the above command with the Linux time command.
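For example:

time ./idmpatch.sh run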

You can monitor the progress of the patching session by looking at the log located here:

$LCM_CONFIG/patch/status/currentSessionID/hosts/hostname/log/idmpatch-session.log
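For example, to follow the log live while the patcher runs (substituting the current session ID and host name in the path above):

tail -f $LCM_CONFIG/patch/status/currentSessionID/hosts/hostname/log/idmpatch-session.log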

TIP: Complete instructions on how to monitor the patch application can be found in the Oracle Fusion Applications Patching guide

 

Conclusion

Utilizing the Oracle Identity Management patching framework to accelerate the manual IDM upgrade component of your Fusion Applications upgrade can save significant time during your overall upgrade process. In our internal tests we have seen run times of 2.5 hours or less when using the framework, whereas performing the same tasks in a completely manual way has the potential to take considerably longer.

Extracting Data from BICS / Apex via RESTful Webservice


Introduction

BI Cloud Service (BICS) has an API to load data into the underlying database (more details here), but reports created in BICS cannot be exposed and called as web services to extract the data. This article documents a method by which RESTful web services can be created in Apex to replicate a BICS report and provide access to the underlying BICS Schema Service Database.

 

Main Article

RESTful services can be created from the Service Console of the Schema Service Database.  Select the 'SQL Workshop':

Oracle_Application_Express

and then the 'RESTful Services' option:

SQL_Workshop

Create a new RESTful Service.

For this example, use the URL Prefix 'bicsdemo' and create a 'GET' handler which will respond by selecting all the records from the demo_orders table based on the SQL statement 'select * from demo_orders', as shown below.

Create_RESTful_Services_Module

Once created, edit the 'GET' handler and select the 'Test' button beneath the source to confirm that the JSON-formatted data is returned as expected.

Resource_Handler

 

The RESTful webservice is now ready to be called.  This can be tested using cURL.

The following syntax will call the 'orders' RESTful webservice, and return the data in JSON format (changing the username / password and BICS instance details as appropriate).

Be sure to append a '/' character at the end of the string.

curl -u testuser@oracle.com:Welcome1! -k https://businessintelltrialXXXXdb-usoracletrialABCDE.db.us2.oraclecloudapps.com/apex/bicsdemo/orders/

 

This simple example showed how to extract data from a specific table, but it may be necessary to use some more complex SQL and include filters to limit the data returned.

In the future it should be possible to call a BICS report as a web-service.  Until that functionality is available, an alternative option is to replicate the report by re-creating the SQL statement that BICS generates.

The dashboard below is from the Sample App Dashboard within BICS.

The section highlighted uses the filters shown on the left hand side of the screen for Year, Region and Product, and returns data on Product Revenue Performance.

 

Dashboard

 

With a BICS user with Administrator rights, the dashboard can be edited and that report can be analyzed to get a better idea of the logic.

Using the 'Administration' option, and then 'Manage Sessions', the SQL that the BI Server runs can be located and examined.

 


 

With a little work, and some testing within the SQL Workshop, the SQL statement can be recreated.  In this case the following SQL statement was reverse engineered to mimic the BICS report.

select
extract(year from c.time_bill_dt) as year,
sum(c.revenue) as revenue,
sum (c.cost_fixed + c.cost_variable) as cost,
sum(1) as orders,
p.product,
g.region,
o.channel_name
from cloud_f_cost c
join cloud_d_products p on p.prod_item_key=c.prod_item_key
join cloud_d_geography g on g.addr_key = c.addr_key
join cloud_d_orders o on o.order_key = c.order_key
group by
extract(year from c.time_bill_dt),
p.product,
g.region,
o.channel_name

 

This SQL can then be used to create a new RESTful ‘GET’ method.

This time a parameter for 'region' will be created and included as a filter in the select statement, as shown in the sketch after the screenshot below.

Resource_Handler
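A sketch of the parameterized query is shown below. It assumes the handler's URI template is bicsdemo/orders/{region} so that the path value is available as the :region bind variable; adjust the bind name to match your own template.

select
extract(year from c.time_bill_dt) as year,
sum(c.revenue) as revenue,
sum (c.cost_fixed + c.cost_variable) as cost,
sum(1) as orders,
p.product,
g.region,
o.channel_name
from cloud_f_cost c
join cloud_d_products p on p.prod_item_key=c.prod_item_key
join cloud_d_geography g on g.addr_key = c.addr_key
join cloud_d_orders o on o.order_key = c.order_key
where g.region = :region
group by
extract(year from c.time_bill_dt),
p.product,
g.region,
o.channel_name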

This parameter can be passed in the call of the RESTful service.  From cURL, the syntax to filter the result set to the 'APAC' region would look like this.

curl -u testuser@oracle.com:Welcome1! -k https://businessintelltrialXXXXdb-usoracletrialABCDE.db.us2.oraclecloudapps.com/apex/bicsdemo/orders/APAC

Multiple parameters can be created in this manner to filter the dataset further and minimize the data returned.

 

Summary

This article demonstrated how the Apex RESTful services can be used to replicate BICS reports and allow data to be extracted from the BICS Schema Service Database via a web-service call.

 

Further Reading

Using RESTful Services:

https://docs.oracle.com/cd/E37097_01/doc.42/e35128/restful_svc.htm#AEUTL445

Cloud Specific Documentation:

http://docs.oracle.com/cloud/latest/dbcs_schema/CSDBU/GUID-FA6FC371-064A-467C-A10D-EC995A2A4DB0.htm#CSDBU195

White Paper:

https://cloud.oracle.com/_downloads/WhitePaper_Database_7/RESTful+Web+Services+for+the+Oracle+Database+Cloud+Service.pdf

Sample:

All Schema Service instances come with a sample RESTful service that you can review (called oracle.example.hr).

Creating a Mobile-Optimized REST API Using Oracle Mobile Cloud Service – Part 2


Introduction

To build functional and performant mobile apps, the back-end data services need to be optimized for mobile consumption. RESTful web services using JSON as the payload format are widely considered the best architectural choice for integration between mobile apps and back-end systems. At the same time, many existing enterprise back-end systems provide a SOAP-based web service application programming interface (API). In this article series we will discuss how Oracle Mobile Cloud Service (MCS) can be used to transform these enterprise system interfaces into a mobile-optimized REST-JSON API. This architecture layer is sometimes referred to as Mobile Backend as a Service (MBaaS). A-Team has been working on a number of projects using MCS to build this architecture layer. We will explain step-by-step how to build an MBaaS, and we will share tips, lessons learned and best practices we discovered along the way. No prior knowledge of MCS is assumed. In part 1 we discussed the design of the REST API; in this second part we discuss the implementation of the "read" (GET) RESTful services in MCS by transforming ADF BC SOAP service methods.

Main Article

This article is divided into the following sections:

  • Understanding the JavaScript Scaffold
  • Implementing Dynamic Mock Data
  • Defining and Testing the HR SOAP connector
  • Implementing the GET Resources

Understanding the JavaScript Scaffold

MCS uses Node.js and Express.js as the transformation and shaping engine for your custom APIs. It is beyond the scope of this article to explain Node.js in detail; to follow the remainder of this article it is sufficient to know that Node.js is a high-performance platform that allows you to run JavaScript on the server using a so-called asynchronous event model. Express.js is a web application framework that works on top of Node.js which, among others, makes it easy to implement RESTful services. To jumpstart the implementation of our REST API using Node and Express we can download a so-called JavaScript scaffold. First click on the Manage link on our Human Resources API overview page.

ManageImpl

This brings us to the API implementation page which allows us to download a zip file with a skeleton of our Node.js project that we will use to implement the various REST resources that we defined in part 1 of this article series.

DownloadScaffold

Save the zip file hr_1.0.zip to your file system and unzip it. It creates a directory named hr, with 4 files:

ScaffoldFiles

Here is a short explanation of each file:

  • package.json: this file holds various metadata relevant to the project. It is used to give information to the Node Package Manager (npm) that allows it to identify the project as well as handle the project’s dependencies. Later on we will edit this file to add a dependency to the SOAP connector that we are going to use.
  • hr.js: this is the "main" JavaScript file. If you open it, you will see skeleton methods for each REST resource that we defined in part 1. This is the file that MCS will use to handle the REST calls to your API.
  • samples.txt: this is a file with various code snippets that come in handy when implementing a custom API, as we will see later on
  • readMe.md: readme file that explains the purpose of the files included in the scaffold.

You can use your favorite JavaScript editor to inspect and edit the content of these files.

If you are new to JavaScript development and you don’t have a strong preference for a JavaScript editor yet, we recommend you to start using the free, open source IDE NetBeans. Traditionally known as a Java IDE, NetBeans now also provides excellent support for JavaScript development with code completion and support for Node.js and many other popular JavaScript tools and frameworks.

The hr.js looks like this in Netbeans:

NetbeansSkeleton

Note the function called on the service object, which reflects the HTTP method we defined for the resource in MCS. The order in which the functions are generated into the skeleton is random, so it is a good practice to re-order them and have all methods on the same resource grouped together as we already did in the above screen shot.

The req and res parameters passed in each function are Express objects and provide access to the HTTP request and response.

To learn which properties and functions are available on the request and response objects it is useful to check out the Express API documentation.

Implementing Dynamic Mock Data

As discussed in part 1, MCS provides static mock data out of the box by defining sample payloads as part of the API design specification that is stored in the RAML document. Making the mock data dynamic, taking into account the value of query and path parameters, is easy and quick, and can be useful for the mobile developer to start with the development of the mobile app before the actual API implementation is in place. We therefore briefly explain how to implement dynamic mock data before we start implementing the “real” API by transforming the XML output from the ADF BC SOAP service to REST-JSON.

When implementing the /departments GET resource, we need to return different payloads based on the value of the expandDetails query parameter. It is easiest to create separate JSON files for each return payload. We create a file named DepartmentsSummary.json that holds the return payload, a list of department id and name, when expandDetails query parameter is false or not set, and store that file in the sampledata sub directory under our hr root folder. Likewise, we create a file DepartmentsDetails.json in the same directory that holds the list of departments with all department attributes and nested employees data which we return when expandDetails is set to true.

NetBeansMock

To return the correct payload based on this query parameter, we implement the corresponding method as follows:

    service.get('/mobile/custom/hr/departments', function (req, res) {
        if (req.query.expandDetails === "true") {
            var result = require("./sampledata/DepartmentsDetails");
            res.send(200, result);

        } else {
            var result = require("./sampledata/DepartmentsSummary");
            res.send(200, result);
        }
    });

As you can see we check the value of the query parameter using the expression req.query.expandDetails. Based on the value we read one of the two sample payloads and return it as response with HTTP status code 200.

Likewise, we can make /departments/:id GET resource somewhat dynamic by setting the department id in the return payload to the value of the department id path parameter:

    service.get('/mobile/custom/hr/departments/:id', function (req, res) {
        var department = require("./sampledata/Department");
        department.id = req.params.id;
        res.send(200, department);
    });

Here we check the value of the path parameter using the expression req.params.id.  Obviously, we could make the return payload more dynamic by creating multiple sample JSON files for each department id, named Department10.json, Department20.json, etc, and return the correct payload using the following code:

    service.get('/mobile/custom/hr/departments/:id', function (req, res) {
        var department = require("./sampledata/Department"+req.params.id);
        res.send(200, department);
    });

To test our dynamic mock implementation, we simply zip up the hr directory that was created by unzipping the scaffold zip file, and then upload this zip file to MCS again.

UploadImpl

After uploading the zip file you can switch to the Postman tab in your browser and test the two GET resources for its dynamic behavior.

PostmanCollection

The Postman REST client allows you to save resources with its request headers and request payload in so-called collections. We recommend to create a collection for each MCS API you are developing so you can quickly retest resources after uploading a new implementation zip file. Postman also allows you to export and import collections so you can easily share your test collection with other developers.

You can also test the resource using the Test tab inside MCS but this is slower as you have to navigate back and forth between the test page and upload page and there is no option to save request parameters and request payloads.
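If you prefer the command line, the resource can also be called with cURL. The host name, mobile backend ID, and credentials below are placeholders for illustration only; substitute the values for your own MCS instance and mobile backend:

curl -u mobile.user:Password1 -H "Oracle-Mobile-Backend-Id: YOUR_BACKEND_ID" "https://yourmcsinstance.example.com/mobile/custom/hr/departments?expandDetails=true"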

Defining and Testing the HR SOAP connector

The input data for the HR REST API primarily comes from an ADF Business Components application where we exposed some of the view objects as SOAP web services using the application module service interface wizards. MCS eases the consumption of SOAP web services through so-called connectors. A SOAP connector hides the complexity of XML-based SOAP messages and allows you to invoke the SOAP services using a simple JSON payload without bothering about namespaces.

So, before we can define our SOAP connector, we need to have our ADF BC SDO SOAP service up and running. We can test the service using SOAP UI, or using the HTTP Analyzer inside JDeveloper. As shown in screen shot below, the findDepartments method returns a list of departments, including a child list of employees for each department.

HttpAnalyzer

There is also a getDepartments method that takes a departmentId as a parameter; this method returns one department and its employees. We will use these two methods as the data providers for the two RESTful resources that we are going to implement in this article: the /departments GET resource and the /departments/{id} GET resource.

We put the WSDL URL on the clipboard by copying it from the HTTP Analyzer window in JDeveloper, and then we click on the Development tab in MCS and click the Connectors icon.

ConnectorMenuItem

We click on the green New Connector button and from the drop down options, we choose SOAP connector.

CreateConnectorMenuItem

In the New SOAP Connector API dialog that pops up, we set the API name, display name, and a short description as below, and we paste the WSDL URL from the clipboard.

CreateSOAPConnectorDialog

We also changed the host IP address from 127.0.0.1 (or localhost) to the actual IP address of the machine that is running the SOAP web service so MCS can actually reach it. After clicking the Create button we see the connector configuration wizard where we can change the port details, set up security policies (we will look into security later in this article series), and test the connector. We click the Test tab, and then we click on the green bar with the findDepartments method so we can start testing this method.

findDepartmentsTest

The first thing that might surprise you is the default value for the Content-Type request header parameter: it is set to application/json, and this content type matches the sample JSON payload displayed in the HTTP Body field. You might wonder, is this wrong, and do you need to change this to XML? The answer is no; MCS automatically converts the JSON payload that you enter to the XML format required for the actual SOAP service invocation. You probably noticed that it took a while before the spinning wheel disappeared and the sample request payload was displayed. This is because MCS parses the WSDL and associated XML schemas to figure out the XML request payload structure and then converts this to JSON. This is a pretty neat feature because you can continue to work with easy-to-use JSON object structures to call SOAP services rather than the more complex XML payloads where you also have to deal with various namespace declarations.

Now, to actually test the findDepartments method we can greatly simplify the sample payload that is pre-displayed in the HTTP Body field. ADF BC SOAP services support a very sophisticated mechanism to filter the results using (nested) filter objects consisting of one or more attributes, with their value and operator. We don’t need all that as we simply want all departments returned by the web service, so we change the HTTP Body as follows:

findDepartmentsBody

We set the Mobile Backend field to HR1 (or any other mobile backend; the value of this field doesn't matter when testing a connector), and click on Test Endpoint. The response with status 200 should now be displayed:

findDepartmentsResponse

Again, MCS already converted the XML response payload to JSON which is very convenient later on, when we transform this payload into our mobile-optimized format using JavaScript.

If for whatever reason you do not want MCS to perform this auto-conversion from XML to JSON for you, you can click the Add HTTP Header button, add a parameter named Accept, and set the value to application/xml. This will return the raw XML payload from the SOAP service. Likewise, if you want to specify the request body as an XML envelope rather than JSON, you should change the value of the Content-Type parameter to application/xml as well.

 Implementing the GET Resources

With the HR SOAP connector in place we can now add real implementation code to our JavaScript scaffold, where we use the SOAP connector to call our web service and then transform the payload to the format as we specified in our API design in part 1.

You might be inclined to start adding implementation code to the main hr.js script like we did for the dynamic mock data; however, we recommend doing this in a separate file.

For readability and easier maintenance, it is better to treat the main JavaScript file (hr.js in our example) file as the “interface” or “contract” of your API design, and keep it as clean as possible. Add the resource implementation functions in a separate file (module) and call these functions from the main file.

To implement the above guideline, we first create a JavaScript file named hrimpl.js in a subdirectory named lib with two functions that we will implement later:

exports.getDepartments = function (req, res) {
        var result = {};
        res.send(200, result);
};

exports.getDepartmentById = function (req, res) {
        var result = {};
        res.send(200, result);
};

In the main hr.js file, we get access to the implementation file by adding the following line at the top:

var hr = require("./lib/hrimpl");

When using a sophisticated JavaScript editor like NetBeans, we get code insight to pick the right function from our implementation file for each resource implementation:

NetbeansCodeInsight

To be able to use the HR SOAP connector we created in the previous section in our implementation code, we need to define a dependency on it in package.json:

ConnectorDependency

We can copy the dependency path we need to enter here from the connector General page:

ConnectorPath

We can now start adding implementation code to our hrimpl.js file. Let’s start with code that simply passes on the JSON as returned by the SOAP findDepartments method:

exports.getDepartments = function (req, res) {
  var handler = function (error, response, body) {
    var responseMessage = body;
    if (error) {
      responseMessage = error.message;
    } 
    res.status(response.statusCode).send(responseMessage);
    res.end();
  };

  var optionsList = {uri: '/mobile/connector/hrsoap1/findDepartments'};
  optionsList.headers = {'content-type': 'application/json;charset=UTF-8'};
  var outgoingMessage = {Header: null, Body: {"findDepartments": null}};
  optionsList.body = JSON.stringify(outgoingMessage);
  var r = req.oracleMobile.rest.post(optionsList, handler);
};

Let’s analyze each line of this code, starting in the middle:

  • line 11: here we set  up the optionsList object that we use to pass in the connector URI of the SOAP method we want to invoke. The URI path can be copied from the connector test page.
  • line 12: we set the Content-Type header parameter, just like we did in the connector test page
  • line 13: we set the request body object just like we did in the connector test page
  • line 14: we convert the request body to a string
  • line 15: we invoke the SOAP service through the connector, passing the optionsList object and a so-called callback handler. This is needed because of the asynchronous programming model we briefly mentioned when introducing Node: the SOAP method is called asynchronously and the Node engine in MCS calls this callback handler function when the SOAP service returns a response.
  • line 2: the declaration of the callback handler function with the expected signature.
  • line 3: we default the response message we will return to the body of the SOAP service.
  • line 4 and 5: if an error occurred and the SOAP call did not succeed, we set the response message to the error message
  • line 7: we set the HTTP status code of our REST call (/departments in this case) to the same status as returned by the SOAP call, and we set the response body of this REST call to the response message we set before.
  • line 8: Calling the end() function on the REST response tells Node/Express that we are done and that the response can be returned to the caller.

Note that in the samples.txt file included in the scaffold zip file you can find similar sample code to call a SOAP (or REST) connector and to declare a callback handler function.

We can now zip up the hr directory and upload this new implementation and go to Postman to test whether the /departments resource indeed returns the full SOAP response body in JSON format. Once this works as expected we know we have set up the SOAP connector call correctly and we can move on to transform the SOAP response payload based on the value of the expandDetails query parameter.

To transform JSON arrays, the JavaScript map function comes in very handy. This function creates a new array with the results of calling a provided function on every element of the array on which the method is called. So, we need to specify two transformation functions for the department, and use one or the other in our map() function call based on the value of the query parameter.
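As a minimal illustration of map (the attribute names follow the SOAP payload used in this article; the values are made up):

var soapDepartments = [{DepartmentId: 10, DepartmentName: "Administration"}];
var restDepartments = soapDepartments.map(function (dep) {
    return {id: dep.DepartmentId, name: dep.DepartmentName};
});
// restDepartments is now [{id: 10, name: "Administration"}]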

We recommend to define transformation functions in a separate file to increase maintainability and reusability. If you have prior experience with a product like Oracle Service Bus you can make the analogy with the reusable XQuery or XSLT definitions, they serve the same purpose as these transformation functions although the implementation language is different.

To follow the above guidelines, we create a new Javascript file named transformations.js, store it in the lib subfolder and add the transformations we need for the getDepartments function:

exports.departmentSummarySOAP2REST = function (dep) {
  var depRest = {id: dep.DepartmentId, name: dep.DepartmentName};
  return depRest;
};

exports.departmentSOAP2REST = function (dep) {
  var emps = dep.EmployeesView ? dep.EmployeesView.map(employeeSOAP2REST) : [];
  var depRest = {id: dep.DepartmentId, name: dep.DepartmentName, 
    managerId: dep.ManagerId, locationId: dep.LocationId, employees: emps};
  return depRest;
};

function employeeSOAP2REST(emp) {
  var empRest = {id: emp.EmployeeId, firstName: emp.FirstName, 
    lastName: emp.LastName, email: emp.Email, phoneNumber: emp.PhoneNumber, 
    jobId: emp.JobId, salary: emp.Salary, commission: emp.CommissionPct, 
    managerId: emp.ManagerId, departmentId: emp.DepartmentId};
  return empRest;
};

The first function on line 1 will be used when the expandDetails query parameter is not set or set to false: we only extract the department id and name attributes from the SOAP response department object. The second function is used when this query parameter is set to true; we need to transform to a department object which includes all department attributes as well as a nested array of employees that work in the department. To transform the nested employees array we use the map function just like we are going to do for the top-level transformation of the department array. Since some departments might not have employees, we check in line 7 whether the EmployeesView attribute in the department SOAP object exists. If this attribute is not present, we set the employees attribute to an empty array.

We get access to these transformation functions in the hrimpl.js file by using the require statement:

var transform = require("./transformations");

Here is the completed implementation of our getDepartments function using the transformation functions we just defined:

exports.getDepartments = function (req, res) {
  var handler = function (error, response, body) {
    var responseMessage = body;
    if (error) {
      responseMessage = error.message;
    } else if (parseInt(response.statusCode) === 200) {
      var json = JSON.parse(body);
      var expandDetails = req.query.expandDetails;
      var resultArray = json.Body.findDepartmentsResponse.result;
      removeNullAttrs(resultArray);
      var transformFunction = expandDetails === 'true' ? 
        transform.departmentSOAP2REST : transform.departmentSummarySOAP2REST;
      var departments = resultArray.map(transformFunction);
      responseMessage = JSON.stringify(departments);
    }
    res.status(response.statusCode).send(responseMessage);
    res.end();
  };

  var optionsList = {uri: '/mobile/connector/hrsoap1/findDepartments'};
  optionsList.headers = {'content-type': 'application/json;charset=UTF-8'};
  var outgoingMessage = {Header: null, Body: {"findDepartments": null}};
  optionsList.body = JSON.stringify(outgoingMessage);
  var r = req.oracleMobile.rest.post(optionsList, handler);
};

Compared to the previous "pass-through" implementation, we have now added an else if branch in the callback handler where we first check the HTTP status code of the SOAP call. If the SOAP call has been successful with status code 200 (line 6), we obtain the value of the expandDetails query parameter (line 8). We traverse the response body in line 9 and store the actual array of departments in the variable resultArray. Based on the value of the query parameter, we either use the summary transformation function or the "canonical" transformation function (lines 11 and 12), and then we pass the chosen transformation function to the map() function call on the array of SOAP department objects (line 13). The method call on line 10 is explained in the section "Handling Null Values in SOAP Responses" below.

Building on the concepts we have learned so far, you should be able to understand the code below which implements the getDepartmentById function used for the /departments/{id} GET resource:

exports.getDepartmentById = function (req, res) {

  var handler = function (error, response, body) {
    var responseMessage = body;
    var statusCode = response.statusCode;
    if (error) {
      responseMessage = error.message;
    } else if (parseInt(response.statusCode) === 200) {
      var json = JSON.parse(body);
      if (json.Body.getDepartmentsResponse) {
        var dep = json.Body.getDepartmentsResponse.result;
        removeNullAttrs(dep);
        var depResponse = transform.departmentSOAP2REST(dep);
        responseMessage = JSON.stringify(depResponse);
      } else {
        responseMessage = "Invalid department ID " + req.params.id;
        statusCode = 400;
      }
    }
    res.status(statusCode).send(responseMessage);
    res.end();
  };

  var optionsList = {uri: '/mobile/connector/hrsoap1/getDepartments'};
  optionsList.headers = {'content-type': 'application/json;charset=UTF-8'};
  var outgoingMessage = {Body: {"getDepartments": {"departmentId": req.params.id}}};
  optionsList.body = JSON.stringify(outgoingMessage);
  var r = req.oracleMobile.rest.post(optionsList, handler);
};

A few observations on the above method implementation:

  • We check for the HTTP status code, if the 200 code is not returned, we assume the department id was invalid, and we return a status code of 400 Bad Request.
  • We reuse one of the transformation functions that we created for the /departments resource
  • In the request body to call the SOAP getDepartments method, we pass in the departmentId path parameter. We obtain the value of this parameter using the expression req.params.id.

Handling Null Values in SOAP Responses

If your XML SOAP response includes a null value, like this CommissionPct attribute:

SoapNullValue

then the auto-converted JSON body from the SOAP response will include the CommissionPct attribute like this:

"Salary": 11000,
"CommissionPct": {"@nil": "true"},
"ManagerId": 100,

It is common practice in JSON payloads to leave out attributes that are null. As a matter of fact, in JavaScript, when an attribute is deleted (or set to undefined) and the object is converted to a string, the attribute is left out automatically. To ensure we do not pass on this "nil" object, we apply the following removeNullAttrs function to the SOAP result array before performing the transformations:

function removeNullAttrs(obj) {
  for (var k in obj)
  {
    var value = obj[k];
    if (typeof value === "object" && value['@nil'] === 'true') {
      delete obj[k];
    }
    // recursive call if an object
    else if (typeof value === "object") {
      removeNullAttrs(value);
    }
  }
}

Conclusion

We have provided you with detailed step-by-step instructions on how to create two RESTFul resources that follow the design we have described in part 1 of this article series. If you are used to the more visual and declarative approach used in products like Oracle Service Bus, this code-centric approach using JavaScript and Node.js might feel somewhat strange in the beginning. However, it is our own experience that you very quickly adapt to the new programming model, and the more you will use MCS for transformations like this, the more you will like it. Once you understand the core concepts explained in this article series, you will notice how easy and fast MCS is for creating or modifying transformations.

In the next parts of this article series we will dive into more advanced concepts. We will implement the PUT and POST resources, discuss troubleshooting techniques, add error handling and security, and we will take a look at caching using the MCS storage facility, and we will look into techniques for sequencing multiple API calls in a row without ending up in the infamous “callback hell”. Stay tuned!

OAM Federation 11.1.2.3: Example Message Processing Plugin


SAML is an extensible protocol. Since it is based on XML, through the use of XML namespaces, custom elements and attributes can be inserted into the SAML messages at the appropriate places. Sometimes third party or custom SAML implementations will require particular custom elements or attributes to function.

In this example, we will suppose an IdP requires a custom <CompanyInfo> element included in the SAML extensions to provide the name of the company issuing the SAML request:

<samlp:Extensions xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
<CompanyInfo xmlns="http://example.com/samlext/1.0"
       CompanyName="Example Corporation" />
</samlp:Extensions>

(This case is based on a real customer scenario; I’ve changed the XML to simplify it and to respect that customer’s confidentiality.)

In 11.1.2.3 this is possible using OIFMessageProcessingPlugin. Note that only one plugin is allowed in your OAM environment, but since the plugin is passed information on each message (e.g. name of the partner it is for, whether it is incoming or outgoing, etc), you can use conditional logic in your plugin to do different things for different messages.

This tutorial assumes you have set up OAM Federation loopback test (LoopbackIDP/LoopbackSP) per the previous post. This enables us to initially test the plugin without requiring the third party server expecting the custom elements to be available. (Of course, once you believe you have it working without the third party server, you’d want to test it with them.)

DISCLAIMER: This sample code is just a start for your own development process, it is not production quality. This is a sample only, not officially supported by Oracle, use at your own risk.

Build the example plugin

We need our Java code for our plugin. Create a directory anywhere, let us refer to it as $PLUGINDEV. Then create the directory tree $PLUGINDEV/src/oracle/ateam/msgprocplugin and in that directory place the file SampleMsgProcPlugin.java with the following content:

package oracle.ateam.msgprocplugin;

import java.io.*;
import java.util.*;
import javax.xml.parsers.*;
import oracle.security.am.plugin.*;
import oracle.security.fed.plugins.fed.msgprocessing.*;
import org.w3c.dom.*;
import org.w3c.dom.ls.*;
import org.xml.sax.*;
import static java.lang.System.err;

public class SampleMsgProcPlugin extends OIFMessageProcessingPlugin {

        private boolean monitoringStatus;

        public ExecutionStatus process(MessageContext messageCtx) throws MessageProcessingException {
                try {
                        String msg = "";
                        msg += "************************************\n";
                        msg += "* SAMPLE MESSAGE PROCESSING PLUGIN *\n";
                        msg += "************************************\n";
                        msg += "Partner Name: " + messageCtx.getPartnerName() + "\n";
                        msg += "Message Type: " + messageCtx.getMessageType() + "\n";
                        msg += "Message Version: " + messageCtx.getMessageVersion() + "\n";
                        msg += "User DN: " + messageCtx.getUserDN() + "\n";
                        msg += "User ID: " + messageCtx.getUserID() + "\n";
                        msg += "User ID Store: " + messageCtx.getUserIDStore() + "\n";

                        // Determine if this message meets our criteria for modification
                        boolean matches =
                                "LoopbackIDP".equals("" + messageCtx.getPartnerName()) &&
                                "SSO_AUTHN_REQUEST_OUTGOING".equals("" + messageCtx.getMessageType()) &&
                                "SAML2.0".equals("" + messageCtx.getMessageVersion());

                        if (!matches)
                                msg += "@@@@@@ CRITERIA NOT MET - SKIPPING THIS MESSAGE @@@@@@\n";
                        else {
                                msg += "@@@@@@ CRITERIA MET - TRANSFORMING THIS MESSAGE @@@@@@\n";
                                Element root = parseXML(messageCtx.getMessage());
                                String pretty = unparseXML(root);

                                msg += "---------- ORIGINAL XML -----------------\n";
                                msg += "\n" + pretty + "\n";

                                // Now to modify the XML message
                                Element ext = getExtensionsXML();
                                root.appendChild(root.getOwnerDocument().importNode(ext,true));

                                // DOM tree modified - unparse the DOM back into string
                                messageCtx.setModifiedMessage(unparseXML(root));

                                msg += "---------- MODIFIED XML -----------------\n";
                                msg += "\n" + messageCtx.getModifiedMessage() + "\n";
                        }

                        msg += "=================ENDS===================\n";
                        err.println(msg);
                        return ExecutionStatus.SUCCESS;
                } catch (Exception e) {
                        e.printStackTrace();
                        throw handle(e);
                }
        }

        @Override
        public String getDescription(){
                return "Sample Message Processing Plugin";
        }

        @Override
        public Map<String, MonitoringData> getMonitoringData(){
                return null;
        }

        @Override
        public boolean getMonitoringStatus(){
                return monitoringStatus;
        }

        @Override
        public String getPluginName(){
                return "SampleMsgProcPlugin";
        }

        @Override
        public int getRevision() {
                return 123;
        }

        @Override
        public void setMonitoringStatus(boolean status){
                this.monitoringStatus = status;
        }

        private RuntimeException handle(Exception e) {
                return e instanceof RuntimeException ? (RuntimeException)e : new RuntimeException(e);
        }

        private DocumentBuilderFactory createDBF() {
                try {
                        Class<?> c = Class.forName("com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl",true,ClassLoader.getSystemClassLoader());
                        return (DocumentBuilderFactory) c.newInstance();
                } catch (Exception e) {
                        throw handle(e);
                }
        }
        
        private Element parseXML(String xml) {
                try {
                        DocumentBuilderFactory dbf = createDBF();
                        dbf.setNamespaceAware(true);
                        DocumentBuilder db = dbf.newDocumentBuilder();
                        StringReader sr = new StringReader(xml);
                        InputSource is = new InputSource(sr);
                        Document doc = db.parse(is);
                        return doc.getDocumentElement();
                } catch (Exception e) {
                        throw handle(e);
                }
        }

        private DOMImplementationSource getDOMImpl() {
                try {
                        Class<?> c = Class.forName("com.sun.org.apache.xerces.internal.dom.DOMXSImplementationSourceImpl",true,ClassLoader.getSystemClassLoader());
                        return (DOMImplementationSource) c.newInstance();
                } catch (Exception e) {
                        throw handle(e);
                }
        }

        private String unparseXML(Node node) {
                try {
                        Document doc = node.getOwnerDocument();
                        DOMImplementationSource reg = getDOMImpl();
                        DOMImplementationLS ls = (DOMImplementationLS) reg.getDOMImplementation("LS");
                        LSSerializer w = ls.createLSSerializer();
                        w.getDomConfig().setParameter("format-pretty-print", Boolean.TRUE);
                        w.getDomConfig().setParameter("xml-declaration", Boolean.TRUE);
                        String ppxml = w.writeToString(doc);
                        return ppxml.replace("<?xml version=\"1.0\" encoding=\"UTF-16\"?>","<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
                } catch (Exception e) {
                        throw handle(e);
                }
        }

        private Element getExtensionsXML() {
                String extensionsXML = "";
                extensionsXML += "<samlp:Extensions xmlns:samlp=\"urn:oasis:names:tc:SAML:2.0:protocol\">\n";
                extensionsXML += "        <CompanyInfo xmlns=\"http://example.com/samlext/1.0\"\n";
                extensionsXML += "             CompanyName=\"Example Corporation\" />n";
                extensionsXML += "</samlp:Extensions>\n";
                return parseXML(extensionsXML);
        }

        private Element getChildElem(Node node,String nsuri, String name) {
                try {
                        NodeList kids = node.getChildNodes();
                        for (int i = 0; i < kids.getLength(); i++) {
                                Node k = kids.item(i);
                                if (k.getNodeType() != org.w3c.dom.Node.ELEMENT_NODE) continue;
                                if (!Objects.equals(k.getNamespaceURI(),nsuri)) continue;
                                if (!Objects.equals(k.getLocalName(),name)) continue;
                                return (Element)k;
                        }
                        return null;
                } catch (Exception e) {
                        throw handle(e);
                }
        }
}

Question: Why do I get the DOM factory objects directly from the System Class Loader using Class.forName?

Answer: I tried doing it the normal way, but OAM overrides the standard JDK XML classes with some Oracle-specific ones that don’t implement DOM 3 Load and Save. So by doing it this way I make sure I get an XML implementation that has the features I need.

Next we need to create the XML plugin manifest $PLUGINDEV/SampleMsgProcPlugin.xml:

<?xml version="1.0"?>
<Plugin type="Message Processing">
  <author>Oracle A-Team</author>
  <email>donotreply@oracle.com</email>
  <creationDate>2015-04-16 12:53:37</creationDate>
  <description>Sample Message Processing Plugin</description>
  <configuration>
  </configuration>
</Plugin>

Here we could, if we wished, define configuration settings for our plugin that could then be modified through the OAM Console. However, in this simple example, we have not done that.

Next we need to create the $PLUGINDEV/MANIFEST.MF file:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: SampleMsgProcPlugin
Bundle-SymbolicName: SampleMsgProcPlugin
Bundle-Version: 1
Bundle-Activator: oracle.ateam.msgprocplugin.SampleMsgProcPlugin
Import-Package: javax.xml.parsers,oracle.security.am.plugin,oracle.security.fed.plugins.fed.msgprocessing,org.osgi.framework;version="1.3.0",org.w3c.dom,org.w3c.dom.ls,org.xml.sax
Bundle-RequiredExecutionEnvironment: JavaSE-1.6

This is the OSGi bundle metadata. Notably, the Import-Package entry lists the Java packages our plugin requires. (Note that Import-Package is all on one line.)
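
Since the manifest names oracle.ateam.msgprocplugin.SampleMsgProcPlugin as the Bundle-Activator, that class also has to satisfy the OSGi BundleActivator contract (directly or via a base class it extends), in addition to its message processing logic. As a rough illustration of what that contract looks like on its own, a bare-bones activator is sketched below; the class name and empty method bodies are purely illustrative and are not part of the sample plugin.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class IllustrativeActivator implements BundleActivator {
        // Invoked when the plugin bundle is started (e.g. when OAM activates it)
        public void start(BundleContext context) throws Exception {
        }

        // Invoked when the plugin bundle is stopped
        public void stop(BundleContext context) throws Exception {
        }
}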

Next we need to create the $PLUGINDEV/compile.sh file:

#!/bin/bash
DOMAIN_HOME=/idmtop/config/domains/IAMAccessDomain
SERVER_NAME=wls_oam1

JARS="$(find $DOMAIN_HOME/servers/$SERVER_NAME/tmp/_WL_user/oam_server_11.1.2.0.0/ -name fed.jar -o -name oam-plugin.jar -o -name felix.jar | tr '\n' ':' | sed -e 's/:$//')"
SRCS="$(find src -name '*.java')"
rm -rf build
mkdir build
javac -d build -classpath $JARS $SRCS
cp SampleMsgProcPlugin.xml build
mkdir build/META-INF
cp MANIFEST.MF build/META-INF
cd build
jar cvmf META-INF/MANIFEST.MF ../SampleMsgProcPlugin.jar *

This shell script compiles the plugin for us. (Of course, one should use something like Ant or Maven instead, but for our simple example a shell script will do.) Note that DOMAIN_HOME and SERVER_NAME may need to be changed for your environment. Also note that the JARS= assignment is supposed to be all on one line (in case your web browser wraps it).

Finally we run compile.sh to create SampleMsgProcPlugin.jar.

Deploy the example plugin

Log in to the OAM Console.

Go to the “Application Security” tab.

In the “Plug-ins” section, select “Authentication Plug-ins”.

Note: Even though the label says “Authentication Plug-ins”, the same screen works for non-authentication plugin types, such as the message processing plugin in this case.

Click “Import Plugin”.

Upload the “SampleMsgProcPlugin.jar” you built in the earlier step and click “Import”.

Refresh the table and search for the new plugin.

Click “Distribute Selected”, then click the Refresh icon to see that the status has changed to “Distributed”.

Now click “Activate Selected”, then click the Refresh icon to see that the status has changed to “Activated”.

Enabling the message processing plugin

The plugin has now been installed. Now we need to tell OAM Federation to use it. Edit the $DOMAIN_HOME/config/fmwconfig/oam-config.xml file and look for the “fedserverconfig” section.

In that section, look for a setting called “messageprocessingeplugin”.

Change its value to the name of your plugin (SampleMsgProcPlugin in this example).

Also, in the same section, look for a setting called “messageprocessingenabled”.

Change that from “false” to “true”.

Finally, near the top of the file, look for the version number.

Increment that number by 1 and save. (It doesn’t actually matter by how much you increment it, so long as the new version number is higher than the old one.)

Now check the oam-config.ref file in the same directory.

When the version number in that file has increased to the new number in oam-config.xml, you know the new configuration has been loaded.

Testing the new plugin

Go to the SP Test Page (http://OAMHOST:OAMPORT/oamfed/user/testspsso); this assumes you have enabled it as described in my previous post. Test using LoopbackIDP. As you test, watch the SAML messages using, for example, the SAML Tracer plugin for Firefox [https://addons.mozilla.org/en-us/firefox/addon/saml-tracer/]. You should see the custom XML extension in the SAML AuthnRequest in SAML Tracer.

Also note that the custom plugin writes to the wls_oam1 console log every time it runs.

You would probably not want to do this in production, but it is helpful in development, and you would want it to log through java.util.logging rather than System.err.println. Remember that this sample code is just a starting point for your own development process; it is not meant to be production quality.
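
As a minimal sketch of that logging change (the class name and logger name here are just assumptions; use whatever fits your own conventions), the System.err.println calls could be replaced with something like this:

import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingSketch {
        // Hypothetical logger name for the sample plugin
        private static final Logger LOGGER =
                        Logger.getLogger("oracle.ateam.msgprocplugin.SampleMsgProcPlugin");

        // Replacement for the System.err.println(...) calls in the sample
        void debug(String msg) {
                // FINE messages only appear if the server's logging configuration enables them
                LOGGER.log(Level.FINE, msg);
        }
}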
