
RESTful Invoke BPM Process Using Apache CXF


Apache CXF is a services framework that is the open source evolution of IONA Celtix and Codehaus XFire, hence the name CXF. CXF has extensive support for Web Service standards (WS-*), the JAX-WS and JAX-RS APIs, and more, but the focus of this article is on the CXF web service client proxy factory and running it in a WebLogic web app that implements a REST service. As a demonstration we will build a service using JDeveloper that invokes an Oracle BPM process with a message start service interface. REST clients are lightweight and simpler to implement than SOAP web service clients, so a REST-to-SOAP converter for BPM processes makes them more accessible.

For the impatient, the two key enablers are determining the proper subset of CXF library jars and resolving any conflicts with WebLogic libraries. The CXF distribution has 149 jars and you don’t want to simply add them all to your project. The following list is based on the dynamic client sample in the CXF distribution from the 3.1.0 release.

cxf-rt-frontend-jaxws-3.1.0.jar
cxf-rt-transports-http-3.1.0.jar
cxf-rt-transports-http-jetty-3.1.0.jar
jaxb-xjc-2.2.11.jar
aopalliance-1.0.jar
asm-5.0.3.jar
cxf-core-3.1.0.jar
cxf-rt-bindings-soap-3.1.0.jar
cxf-rt-bindings-xml-3.1.0.jar
cxf-rt-databinding-jaxb-3.1.0.jar
cxf-rt-frontend-simple-3.1.0.jar
cxf-rt-ws-addr-3.1.0.jar
cxf-rt-ws-policy-3.1.0.jar
cxf-rt-wsdl-3.1.0.jar
javax.servlet-api-3.1.0.jar
jaxb-core-2.2.11.jar
jaxb-impl-2.2.11.jar
jcl-over-slf4j-1.7.9.jar
jetty-continuation-9.2.9.v20150224.jar
jetty-http-9.2.9.v20150224.jar
jetty-io-9.2.9.v20150224.jar
jetty-security-9.2.9.v20150224.jar
jetty-server-9.2.9.v20150224.jar
jetty-util-9.2.9.v20150224.jar
neethi-3.0.3.jar
slf4j-api-1.7.9.jar
slf4j-jdk14-1.7.9.jar
spring-aop-4.1.1.RELEASE.jar
spring-beans-4.1.1.RELEASE.jar
spring-context-4.1.1.RELEASE.jar
spring-core-4.1.1.RELEASE.jar
spring-expression-4.1.1.RELEASE.jar
stax2-api-3.1.4.jar
woodstox-core-asl-4.4.1.jar
wsdl4j-1.6.3.jar
xml-resolver-1.2.jar
xmlschema-core-2.2.1.jar

The only significant conflict with WebLogic comes from the WS-Policy Neethi library and is fixed by adding the following application preference to weblogic.xml:

  <container-descriptor>
    <prefer-application-packages>
      <package-name>org.apache.neethi.*</package-name>
    </prefer-application-packages>
  </container-descriptor>

 

The Demo Service

The demo is a RESTful service that invokes a message start BPM process via the usual SOAP web service call based on the published WSDL for the process. The service will be deployed and run on WebLogic. The process can have any number of parameters, which we will assume to be all of type string to keep things simple. It would be straightforward to handle arbitrary types since we introspect the generated proxy class, but I’ll leave that as an exercise for the reader. The most common BPM process invoke is asynchronous with no callback: go do your work and don’t ever bother me about it. That is the call mechanism implemented in the demo.

BPM Process

A sample BPM process is needed to test the REST service. A representative process will have a message Start with End type set to None since we won’t be listening for a callback.

Sample BPM Process

The defined interface with four sample string arguments would look like

Message Start Arguments

The published WSDL excerpt shows the start method with four arguments. The service accesses this WSDL via CXF to generate the proxy.

<wsdl:definitions ... targetNamespace="http://xmlns.oracle.com/bpmn/bpmnProcess/SampleProcess">
    <wsdl:types>
        <xsd:schema targetNamespace="http://xmlns.oracle.com/bpmn/bpmnProcess/SampleProcess">
            <xsd:element name="start">
                <xsd:complexType>
                    <xsd:sequence>
                        <xsd:element name="sampleArg1" type="xsd:string"/>
                        <xsd:element name="sampleArg2" type="xsd:string"/>
                        <xsd:element name="sampleArg3" type="xsd:string"/>
                        <xsd:element name="sampleArg4" type="xsd:string"/>
                    </xsd:sequence>
                </xsd:complexType>
            </xsd:element>
        </xsd:schema>
    </wsdl:types>
    <wsdl:message name="start">
        <wsdl:part name="parameters" element="tns:start"/>
    </wsdl:message>
    <wsdl:portType name="SampleProcessPortType">
        <wsdl:operation name="start">
            <wsdl:input message="tns:start"/>
        </wsdl:operation>
    </wsdl:portType>
    <wsdl:binding name="SampleProcessBinding" type="tns:SampleProcessPortType">
        <wsdlsoap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
        <wsdl:operation name="start">
            <wsdlsoap:operation style="document" soapAction="start"/>
            <wsdl:input>
                <wsdlsoap:body use="literal"/>
            </wsdl:input>
        </wsdl:operation>
    </wsdl:binding>
    <wsdl:service name="SampleProcess.service">
        <wsdl:port name="SampleProcessPort" binding="tns:SampleProcessBinding">
            <wsdlsoap:address location="http://oramint:7101/soa-infra/services/default/BPMsample%211.0*soa_b0779884-0f0b-4ba5-afaf-0a3191ddb105/SampleProcess.service"/>
        </wsdl:port>
    </wsdl:service>
</wsdl:definitions>

RESTful Frontend

Start with a new class named InvokeBPM and a service method named invokeBPM (it feels a bit odd that we’re not doing a ‘State Transfer’ with the REST call, but credit REST for being more than originally envisioned). Pass the incoming info for the process call via query parameters. We need the server host and port, the composite ID, the process name and finally a comma-separated list of arguments to the process.

public Response invokeBPM(@QueryParam("serverAndPort") String serverAndPort,
                          @QueryParam("compositeId") String compositeId,
                          @QueryParam("processName") String processName,
                          @QueryParam("argumentList") String argumentList)

Proxy

The CXF proxy generator needs the WSDL URL and the SOAP service reference in the form of a qualified name, so we must construct the service path string and the XML target namespace as well as the ?WSDL URL.

BPM processes in the default partition have service URLs in the form

http://<server and port>/soa-infra/services/default/<composite ID>/<process name>.service

Construct the service URL string using the incoming data:

String serviceStr =
  "http://" + serverAndPort + "/soa-infra/services/default/" + compositeId + "/" + processName + ".service";

The WSDL URL just needs “?WSDL” appended to the service path string.

URL wsdlURL = new URL(serviceStr + "?WSDL");

The qualified name for the service combines the target namespace seen in the WSDL above with the process name plus the “.service” suffix.

QName qualServiceName =
     new QName("http://xmlns.oracle.com/bpmn/bpmnProcess/" + processName, processName + ".service");

Everything is set for the main event, using the CXF factory to build the proxy. The factory creates a class at runtime based on the WSDL read from the WSDL URL. The class generation is reported in the standard output log of the service with a message like “INFO: Created classes: com.oracle.xmlns.bpmn.bpmnprocess.sampleprocess.Start”

JaxWsDynamicClientFactory factory = JaxWsDynamicClientFactory.newInstance();
Client client = factory.createClient(wsdlURL.toExternalForm(), qualServiceName);
ClientImpl clientImpl = (ClientImpl) client;

(Note that CXF also has the JaxWsClientFactoryBean class which is similar to JaxWsDynamicClientFactory except it does not handle complex objects in the WSDL)

Making the Call

Once the proxy is built we need to instantiate an XML document object message, fill in the argument values and make the SOAP call via the proxy. This is a document-literal SOAP call; the arguments are mapped to fields in the message object. The CXF factory does essentially the same thing clientgen does at development time, just at runtime. In this simple example the class has four string fields. In the general case, the message document could be arbitrarily complex.

To get the message part class, use the following sequence: get the endpoint from the clientImpl, the service info from the endpoint, the SOAP binding from the service info, the operation from the binding and finally the message part from the operation.

First the endpoint

Endpoint endpoint = clientImpl.getEndpoint();

then the service info – there is only one in our case

ServiceInfo serviceInfo = endpoint.getService().getServiceInfos().get(0);

then the SOAP binding – also only one

BindingInfo binding = serviceInfo.getBindings().iterator().next();

then the operation – only one, the start operation, we need the qualified name

QName opName = binding.getOperations().iterator().next().getName();

then the input message

BindingOperationInfo boi = binding.getOperation(opName);
BindingMessageInfo inputMessageInfo = boi.getInput();

finally, the list of message parts

List<MessagePartInfo> parts = inputMessageInfo.getMessageParts();

get the class for the input message part – there is only one and it’s called “parameters”

Class<?> partClass = parts.get(0).getTypeClass();

Now we have the class that was created by the CXF factory, partClass. The declared fields of this class are the arguments we are looking for. Declare inputFields to hold the array of fields so we can get the field names later; we’ll also want to know the field count.

Field[] inputFields = partClass.getDeclaredFields();

make an input object based on the proxy class

Object inputObject = partClass.newInstance();

All that’s left to do now is write the field values. The values were passed in the comma-separated string argumentList, which is easier to use once split into an array.

List<String> argItems = Arrays.asList(argumentList.split("\\s*,\\s*"));

In a perfect world the arrays argItems and inputFields will have the same length. Most of the time they will, but to protect against an index-out-of-bounds error let’s use the minimum of the two.

int lastArgPosition = Math.min(inputFields.length, argItems.size());

We’ll need to instantiate a property descriptor for each field in order to use its getWriteMethod() to load the value from argItems, looping over the fields.

PropertyDescriptor fieldPropDesc = null;
for (int i = 0; i < lastArgPosition; i++) {
   fieldPropDesc = new PropertyDescriptor(inputFields[i].getName(), partClass);
   fieldPropDesc.getWriteMethod().invoke(inputObject, argItems.get(i));
}

The input object is now populated; the proxy will use it to create the XML document for the SOAP call. As mentioned in the beginning, we want an asynchronous call, which means we need to use the client invoke signature with a client callback argument. We don’t expect a callback so we won’t bother setting up the callback service.

client.invoke(new ClientCallback(), boi, inputObject);

Build, Run and Test

The CXF binary and source distributions are available at http://cxf.apache.org/download.html. Unless you are feeling adventurous and want to build from source, just take the binaries and unpack them anywhere you can access the /lib folder from JDeveloper. The steps for building the demo are:

  • create a custom type application with REST Web Service feature
  • create the RESTful service with InvokeBPM class
  • add the CXF library jars to the project classpath
  • add the code described above for InvokeBPM
  • add the prefer application override for Neethi to weblogic.xml
  • create the war deployment profile
  • finally generate the war

In JDeveloper, select “New Application …” and then select “Custom Application”

JDeveloper New Custom Application

For project features select “REST Web Services” (and not SOAP Web Services). “Java” will be auto-selected. Name the project “CallBPMService”.

REST Project Feature

Enter a package path in the next dialog and finish the new application wizard.

Create the service class using the JDeveloper wizard which sets up the class with the appropriate REST annotations and adds WEB-INF/web.xml and WEB-INF/weblogic.xml to the project.

REST Service Menu

Name the class InvokeBPM and use HTTP GET call semantics. Also select the application/json and application/xml media types as a convention. As mentioned before, we’re not doing any “State Transfer” so nothing is produced by the service.

Create REST dialog

The wizard creates the .java file for the service, which you can code as discussed or paste in the completed code available in the InvokeBPM.java file listed at the end of the article.

Add the CXF jars listed in the beginning to the project classpath

CXF jars classpath

WEB-INF/weblogic.xml needs to be edited. It will have a library reference which should be deleted and the “prefer-application-packages” setting for Neethi added. The original looks like

weblogic.xml with library ref

remove the library-ref and add the following

weblogic.xml prefer application

Create a WAR file deployment profile

WAR deployment profile

give the desired context root

Context Root

select the CXF jars that were added to the project classpath to go in the WAR; the JAX-RS Jersey 2.x library is in WebLogic 12c so that doesn’t need to go in the WAR

CXF lib jars

set the default platform to Weblogic 12

WLS platform setting

That completes the WAR file deployment profile. Use it to generate a WAR file and deploy it on your WebLogic server.

After the service is deployed, make a REST call to it using an HTTP URL with the query parameters. You can also use a tool that reads the WADL and helps build the URL and parameters. You should see audit history for the sample BPM process with payload data that originated from the REST call.
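For a quick scripted smoke test outside of any tooling, a plain Java client can build the same URL and issue the GET. This is a minimal sketch under assumptions: the host, port, context root (CallBPMService) and the /resources prefix must match your own web.xml and deployment, and the composite ID placeholder must be replaced with the real composite ID from the process WSDL.

import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class InvokeBPMSmokeTest {
    public static void main(String[] args) throws Exception {
        String enc = "UTF-8";
        // Assumed deployment URL: context root and /resources prefix depend on your web.xml
        String base = "http://localhost:7101/CallBPMService/resources/bpm/invoke";
        String compositeId = "BPMsample!1.0*soa_...";  // placeholder - copy the real composite ID from the WSDL address

        String query = "serverAndPort=" + URLEncoder.encode("oramint:7101", enc)
                     + "&compositeId=" + URLEncoder.encode(compositeId, enc)
                     + "&processName=" + URLEncoder.encode("SampleProcess", enc)
                     + "&argumentList=" + URLEncoder.encode("ArgOne,ArgTwo,ArgThree,ArgFour", enc);

        HttpURLConnection conn = (HttpURLConnection) new URL(base + "?" + query).openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}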

Using the HTTP Analyzer in JDeveloper, load the application WADL

REST call from http analyzer

Enter the query parameters for server and port etc.; here I’ve entered “ArgOne, ArgTwo, ArgThree, ArgFour” as the argument list parameter. Click the “Send Request” button and the BPM process will be invoked with the four string arguments in the payload.

BPM completed processes

open the audit history to check the payload

process audit history

The payload shows the four string arguments from the CSV string in the REST call

payload data

The full text of the payload looks like

<auditQueryPayload auditId="35" ciKey="10" xmlns="http://xmlns.oracle.com/bpmn/engine/audit">
   <serviceOutput>
      <element name="sampleArg4" isBusinessIndicator="false">
         <value>
            <![CDATA[<sampleArg4>ArgFour</sampleArg4>]]>
         </value>
      </element>
      <element name="sampleArg3" isBusinessIndicator="false">
         <value>
            <![CDATA[<sampleArg3>ArgThree</sampleArg3>]]>
         </value>
      </element>
      <element name="sampleArg2" isBusinessIndicator="false">
         <value>
            <![CDATA[<sampleArg2>ArgTwo</sampleArg2>]]>
         </value>
      </element>
      <element name="sampleArg1" isBusinessIndicator="false">
         <value>
            <![CDATA[<sampleArg1>ArgOne</sampleArg1>]]>
         </value>
      </element>
   </serviceOutput>
</auditQueryPayload>

Summary

The Apache CXF dynamic web service proxy client factory provides a convenient mechanism for invoking message start BPM processes. If you have BPM processes that share the same message start payload, you can statically generate the proxy client class and use the JAX-WS API to set the binding at runtime, with no need for the CXF runtime class factory.
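As a rough sketch of that static alternative (not part of the demo itself), the plain JAX-WS Dispatch API can send the start payload directly once the message shape is fixed. The endpoint URL and payload below are assumptions based on the sample WSDL shown earlier; substitute your own values.

import java.io.StringReader;
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class StaticStartClient {
    public static void main(String[] args) throws Exception {
        String ns = "http://xmlns.oracle.com/bpmn/bpmnProcess/SampleProcess";
        // Simplified placeholder URL - the composite ID segment must be the full ID from the WSDL address
        URL wsdlUrl = new URL("http://oramint:7101/soa-infra/services/default/BPMsample/SampleProcess.service?WSDL");

        Service service = Service.create(wsdlUrl, new QName(ns, "SampleProcess.service"));
        // Payload mode: we hand JAX-WS only the SOAP body content
        Dispatch<Source> dispatch = service.createDispatch(
                new QName(ns, "SampleProcessPort"), Source.class, Service.Mode.PAYLOAD);

        String start = "<start xmlns=\"" + ns + "\">"
                + "<sampleArg1>ArgOne</sampleArg1><sampleArg2>ArgTwo</sampleArg2>"
                + "<sampleArg3>ArgThree</sampleArg3><sampleArg4>ArgFour</sampleArg4></start>";

        // One-way invoke: the message start operation has no response
        dispatch.invokeOneWay(new StreamSource(new StringReader(start)));
    }
}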

Things to Try Next

Some suggestions for things to experiment with using CXF: create and use a callback service, create a callback service at runtime using the CXF service builder, and use complex types in the BPM message start interface.

Complete Java Source

package oracle.ateam;

import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;

import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;

import java.net.MalformedURLException;
import java.net.URL;

import java.util.Arrays;
import java.util.List;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

import javax.xml.namespace.QName;

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.endpoint.ClientCallback;
import org.apache.cxf.endpoint.ClientImpl;
import org.apache.cxf.endpoint.Endpoint;
import org.apache.cxf.jaxws.endpoint.dynamic.JaxWsDynamicClientFactory;
import org.apache.cxf.service.model.BindingInfo;
import org.apache.cxf.service.model.BindingMessageInfo;
import org.apache.cxf.service.model.BindingOperationInfo;
import org.apache.cxf.service.model.MessagePartInfo;
import org.apache.cxf.service.model.ServiceInfo;

@Path("bpm")
@Consumes(value = { "application/json", "application/xml" })
@Produces(value = { "application/json", "application/xml" })
public class InvokeBPM {

    @GET
    @Produces(value = { "application/json", "application/xml" })
    @Path("/invoke")
    public Response invokeBPM(@QueryParam("serverAndPort") String serverAndPort,
                              @QueryParam("compositeId") String compositeId,
                              @QueryParam("processName") String processName,
                              @QueryParam("argumentList") String argumentList) throws MalformedURLException,
                                                                                      InstantiationException,
                                                                                      IllegalAccessException,
                                                                                      IntrospectionException,
                                                                                      InvocationTargetException,
                                                                                      Exception {

        // build the service URL string with BPM server, composite and process info
        String serviceStr =
            "http://" + serverAndPort + "/soa-infra/services/default/" + compositeId + "/" + processName + ".service";

        URL wsdlURL = new URL(serviceStr + "?WSDL");

        // the target namespace should always be the same for BPM 12c and 11g, as well as the .service suffix
        QName qualServiceName =
            new QName("http://xmlns.oracle.com/bpmn/bpmnProcess/" + processName, processName + ".service");

        // run the CXF factory to create the proxy class
        JaxWsDynamicClientFactory factory = JaxWsDynamicClientFactory.newInstance();
        Client client = factory.createClient(wsdlURL.toExternalForm(), qualServiceName);
        ClientImpl clientImpl = (ClientImpl) client;

        // traverse Endpoint->ServiceInfo->SOAPBinding->operation->Input MessagePart to get the proxy class reference
        Endpoint endpoint = clientImpl.getEndpoint();
        ServiceInfo serviceInfo = endpoint.getService().getServiceInfos().get(0);
        BindingInfo binding = serviceInfo.getBindings().iterator().next();

        QName opName = binding.getOperations().iterator().next().getName();
        BindingOperationInfo boi = binding.getOperation(opName);
        BindingMessageInfo inputMessageInfo = boi.getInput();

        List<MessagePartInfo> parts = inputMessageInfo.getMessageParts();
        Class<?> partClass = parts.get(0).getTypeClass();

        // fields are all string for the demo but can be complex types in general
        Field[] inputFields = partClass.getDeclaredFields();

        // create a proxy object using the hard won class type
        Object inputObject = partClass.newInstance();

        // split the comma separated argument list string into a List
        List<String> argItems = Arrays.asList(argumentList.split("\\s*,\\s*"));

        // only write the minimum arguments if there is a mis-match, nominally will have same number of each
        int lastArgPosition = Math.min(inputFields.length, argItems.size());

        // write the field values
        PropertyDescriptor fieldPropDesc = null;
        for (int i = 0; i < lastArgPosition; i++) {
            fieldPropDesc = new PropertyDescriptor(inputFields[i].getName(), partClass);
            fieldPropDesc.getWriteMethod().invoke(inputObject, argItems.get(i));
        }

        // and make the call, ClientCallback makes it asynchronous but not expecting a callback so no service setup
        client.invoke(new ClientCallback(), boi, inputObject);

        return Response.ok().build();
    }
}

Accelerating Fusion Applications (FA) Bundle Patching for Human Capital Management (HCM)


Introduction

Applying functional bundle patches to your Fusion Applications (FA) environment is a normal part of the FA life cycle. Typically these bundle patches are released on a monthly basis, and although they are cumulative, it is still considered a best practice to apply these bundles as often as possible within the constraints of your operational environment. In this article we will discuss a method for doing this that should greatly enhance your operational efficiency. To illustrate how this can be done, we will use a bundle patch from the Human Capital Management (HCM) family. However, the techniques applied here are applicable to other bundle patches as well.

Note: This document does nothing more than provide a workflow for the patching process; it does not remove any requirements from the process. There may still be manual steps that need to be followed but are not explicitly documented, processes that need to be stopped/started, etc. – please ensure you have read and understood all of the individual readme files for the patches being applied.

Step 1 – Download the Latest HCM Bundle

Download the latest HCM bundle patch. To do this you will need to go to My Oracle Support, and once there choose the Patches and Updates tab.

patch dl - fig 1

Figure 1 – My Oracle Support Home Page

Next, choose the Product or Family (Advanced) tab and fill in the information for your FA family and Release, as shown below and then hit the Search button.

patch dl fig 2

Figure 2 – Search for a patch

You will be presented with a list of available patches from which you can download the latest HCM bundle patch. For complete instructions, see Patching Fusion Applications – What Types of Patches Exist and How Often Should They Be Applied.

Step 2 – Open the ReadMe for the HCM Bundle and Download the Patches

In the typical HCM bundle there are a few major components:

  • The prerequisites for the patch
    • Note: It is important to note that many of these prerequisites may have prerequisites of their own; the Business Intelligence (BI) patch that is often part of the HCM bundle prerequisites is a prime example of this.
  • The bundle itself
  • Post implementation patches for the bundle

You now need to download all of the associated patches as well as their prerequisites. By way of example, I am going to use patch 20724469 – Patch bundle 4 for HCM Release 11.1.9.1.2. But all of them are pretty similar, so the approach defined here is not patch specific.

How you download and store the patches and prerequisites is important, so let’s start from the top of the readme for patch 20724469. Besides the P4FA patch – which we will assume has already been applied – you will usually see a BI patch (in this case 20372185) first in the list. Open up the readme for the BI patch and you will see that it has several prerequisite patches. Follow the detailed instructions to determine which of these needs to be applied and store them in a directory all by themselves.

Use an approach similar to:
> cd tmp
> mkdir biprereq

Download the BI prerequisites required for your environment to the biprereq directory you just created; these patches should be the only things in this directory. Please note that it is possible for any of these prerequisite patches to have prerequisites themselves; be certain to download these as well.

Next download the BI patch itself, in this case patch 20372185, to the directory of your choosing. For example, you could use: /tmp/patches

Now it is time to download the remainder of the HCM bundle’s prerequisites; these are usually listed under the heading “Fusion Applications Patches”. These should go in a directory by themselves as well, something like /tmp/hcmbundle. Now download the HCM patch itself, patch 20724469 in this case, and place it in the same /tmp/hcmbundle directory.

Lastly, download the post install patches for the HCM bundle and place them in the same directory as the HCM prerequisite patches (/tmp/hcmbundle). The directory /tmp/hcmbundle should now hold the HCM bundle prerequisites (excluding the BI patch), the HCM bundle itself,  and the HCM post install patches.

To recap, you should now have three directories containing the following items:

  • /tmp/biprereq
    • This directory holds all the required prerequisites for the BI patch
  • /tmp/patches
    • This directory holds the BI patch
  • /tmp/hcmbundle
    • This directory holds
      • All the prerequisites for the HCM bundle except those associated with BI
      • The HCM bundle itself
      • The post install patches for the HCM bundle
    • If you have language packs installed then you will have a fourth directory to hold the language patches for the HCM bundle, something like /tmp/hcmlang. You will require one directory per language.

Step 3 – Perform the Patching

We are going to use a few patching methods here, but we will work through them one at a time.

Step 3a – Apply the BI Prerequisite Patches

Ensure you have read all of the readme files for the patches you are about to apply, and that you are using the proper version of OPatch. Verify that any manual prerequisite steps (shutting down services, etc.) have been followed. Should any of the readme files indicate that a patch should be applied using the Fusion Applications patch manager, please apply those patches in that manner.

The BI prerequisite patches can be applied using the opatch napply option. Set your Oracle home according to the instructions in the BI prerequisites patch readme file. Navigate to the patch directory where you stored the BI prerequisite patches and run opatch using the napply option.

After the BI prerequisite patches have been applied, please review the readme files for the patches you just applied for any manual post-patching steps that may be required.

Step 3b – Apply the BI Patch

The BI patch has its own orchestration, so use the application instructions in the BI patch to apply it. Don’t forget to perform any post-patching steps for BI.

Step 3c – Apply the HCM Prerequisites, HCM Bundle, and HCM Post Install Patches

To perform this step you will first need to develop a patch plan. The complete instructions on how to do this are found in the Fusion Applications Patching Guide in Chapter 3.7. Apply these patches using the instructions associated with the process defined in the patching guide. The directory you will point to for the patching framework to use will be the /tmp/hcmbundle directory where you are storing the patches required for this step.

This approach should significantly reduce the amount of time it takes to apply the HCM patch bundle, since the patch plan allows the patching framework to minimize the number of environment bounces required to successfully apply the bundle along with its prerequisite and post install patches.

Step 3d – (Optional) Apply HCM language patches

The completion of this step will follow the same path as step 3c above. You will utilize the patching framework described in the patching guide to build a patch plan for each language you have installed and apply the patches. The directory used for the patching plan this time will be one of the /tmp/hcmlang directories.

Summary

Using the patching frameworks provided as part of the Fusion Applications toolset can save you a tremendous amount of time when performing periodic maintenance for your environment.

 

Fusion Apps P2T Identity Synchronization Flow


Introduction:

In this blog we will take a closer look at how the Fusion Apps (FA) P2T tool synchronizes identities between the source and target systems and internally within FA. Understanding the logic behind the synchronization helps ensure a successful run of the P2T tool. We will also explore this logic further by working through a troubleshooting scenario.

The FA P2T tool documentation is available as part of the Oracle® Fusion Applications – “Cloning and Content Movement Administrator’s Guide”. 

FA P2T User and Role Identities Replication:

The diagram below shows the main flows of user and role identities in a typical P2T tool run.

FAP2TIDMSyncFlowsVN2

As shown in the diagram above, there are two main flows of user and role identity information from the source system, which is typically production, to the target system, which is typically test.

Flow 1: IDM User and Role Identities Replication

The user and role identity information stored in Oracle Internet Directory (OID) is transferred during the IDM phase of P2T – specifically when ‘bmIDM.sh oid’ is run. This IDM identity replication compares entities under specific subtrees between the source and target LDAP directories and applies the changes to the target OID directory server. The data is moved on an on-demand basis, and connectivity between the source and target environments is needed for this replication. The replication is primarily performed using an OID tool called ‘idmcprec’. More information on this tool can be found in Fusion Middleware User Reference for Oracle Identity Management. It is also useful to note that it excludes three attributes of each source identity that are not relevant on the target, namely userpassword, creatorsname and modifiersname. Customers can also use this method on an ongoing basis, since it only applies the incremental changes to the target environment. This also helps Oracle Identity Manager (OIM), which then only has to process the incremental change notifications to FA-HCM.

Flow 2: FA User and Role Identities Transferred with FA DB Copy

The second step in the flow is the FA database duplication that brings a snapshot of the production database to the test environment. Along with this data, the user and role identity information stored inside the application itself is brought to the target environment.

The FA-IDM stack contains many applications that rely on the identity information stored in OID and the FA database. Let’s take a quick look at the general flow of user and role identity information inside an FA system as a whole, to further understand the impact of the two flows from source to target we just discussed. Please refer to the diagram below, which shows the key identity and security components inside an FA system:

FAP2TIDMFlowVN

As highlighted in the diagram above, FA, OID and OIM are the key identity storage containers in the combined FA and IDM stacks. User or role identities can originate in any of the three containers shown in blue boxes, and they are all kept synchronized by the processes noted on the connections between them. OIM primarily plays a mediator role in the flow between FA and OID, so the P2T tool focuses on replicating the user and role identity information contained in the FA database and OID, and lets the synchronization jobs take care of replicating the information to OIM. Hence, as you will notice in the P2T documentation, the OIM synchronization jobs are disabled before OID replication and re-enabled after it completes.

In some cases, at the end of the P2T run, it is beneficial to run the following OIM reconciliation tasks manually once: “LDAP User Create and Update Reconciliation”, “LDAP Role Create and Update Reconciliation”, “LDAP Role Membership Reconciliation”. For more information on managing OIM Scheduled Tasks, please refer to Fusion Middleware System Administrator’s Guide for Oracle Identity Manager. Also, the blog “Keeping in Sync – LDAP Reconciliation“, though a bit dated, has good information on OIM Scheduled Tasks.

Troubleshooting:

Occasionally, the P2T tool user may see an error such as the following while working on the steps under “Reconcile Identity Management Data” – Section 3.5.

“User count in source OIM DB does not match User count in LDAP”

Similarly, it can also be an error related to the role count. Let’s discuss the logic behind this message and how to resolve the error.

The validation in the IDM phase performs a set of calculations on the source (production) system to check the integrity of the user and role data present there. The tool detects any mismatch between the three components noted in the flow diagram above – FA, OIM and OID – and reports it for corrective action before performing the IDM synchronization.

If you are faced with the above error, the following steps can help you find out where the discrepancy originates and resolve the issue:

Step 1:

Connect to the OID LDAP server and search for users with the filter "objectclass=person". This result includes the system users known as 'AppIDUsers', so make another call to OID to get the count for "objectclass=orclAppIDUser" and subtract it – the equation is shown below. There is then a small adjustment for a few other known system users, and the final user count is determined. An example of the details is shown here:

Obtain count of Person Users with 'ldapsearch' query: ldapsearch -h localhost -p 3060 -D cn=orcladmin -w Welcome2 -b "cn=users,dc=mycompany,dc=com" "objectclass=person" > ./personobjusers

Then, LDAP users = Person Users - AppIDUsers - 2

Step 2:

Perform a check against the OIM store with SQL queries similar to the following against the OIM DB:

SELECT COUNT(*) FROM USR WHERE USR_LOGIN NOT LIKE '%deleted%' AND USR_LOGIN NOT LIKE '%DEL%' AND USR_LDAP_GUID IS NOT NULL AND USR_STATUS != 'Deleted';

SELECT * FROM USR WHERE USR_LOGIN NOT LIKE '%deleted%' AND USR_LOGIN NOT LIKE '%DEL%' AND USR_LDAP_GUID IS NOT NULL AND USR_STATUS != 'Deleted';

SELECT USR_LOGIN FROM USR WHERE USR_LOGIN NOT LIKE '%deleted%' AND USR_LOGIN NOT LIKE '%DEL%' AND USR_STATUS != 'Deleted';

Now the OID and OIM user counts (or role counts, depending on which one you are looking at) need to be compared to find the delta between OIM and LDAP. Using the above as a base example, you can perform similar operations and calculate the numbers for users and roles. Since you have obtained the specific names of the users or roles that fall out as discrepancies, it becomes easy to resolve them in the application using application-specific tools. Once the discrepancies are resolved, the P2T tool can successfully proceed to complete the phase.
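To make the comparison concrete, here is a minimal, hypothetical Java sketch that diffs two plain-text lists of user names – one extracted from the OID ldapsearch output and one spooled from the OIM query. The file names and one-name-per-line format are assumptions for illustration; the P2T tool itself does not ship such a utility.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class UserDelta {
    public static void main(String[] args) throws Exception {
        // Assumed inputs: one user name per line, extracted from the LDAP and OIM results
        Set<String> ldapUsers = new HashSet<>(Files.readAllLines(Paths.get("oid_users.txt")));
        Set<String> oimUsers = new HashSet<>(Files.readAllLines(Paths.get("oim_users.txt")));

        Set<String> onlyInLdap = new HashSet<>(ldapUsers);
        onlyInLdap.removeAll(oimUsers);

        Set<String> onlyInOim = new HashSet<>(oimUsers);
        onlyInOim.removeAll(ldapUsers);

        System.out.println("In OID but not in OIM: " + onlyInLdap);
        System.out.println("In OIM but not in OID: " + onlyInOim);
    }
}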

Summary:

This article explained the FA P2T identity synchronization flows and walked through an example troubleshooting scenario to help the P2T tool user complete the P2T identity replication.

Architecture considerations for integrating mobile applications with cloud and on-premise applications


Introduction

A mobile application requires easy-to-use, flexible, secure and fast integration with various backends to access the data it needs. Ideally such an infrastructure should also allow an existing IDM stack to be reused by the mobile application.

This article presents architectural considerations for integrating a modern, attractive mobile application with various backends using the new Oracle Cloud mobile/integration offerings.

Tip: Oracle Fusion Apps, Siebel, SAP, Salesforce, Oracle Sales Cloud or any application providing SOAP/REST services can be used as a backend. This article will only use Oracle Sales Cloud and Oracle Core Banking to illustrate the principles.

 

Main Mobile/Integration Cloud Offerings

In the examples we will use a set of new Oracle Cloud offerings including:

  • MCS (Oracle Mobile Cloud Service)
  • ICS (Oracle Integration Cloud Service)
  • SOA Suite on JCS (Oracle Java Cloud Service)

You can get more information about these services at: Oracle Cloud Offerings

Main Article

The examples used here are part of a fictitious mobile banking application.

A Simple example

First let us look at the data requirements of such a mobile application, which shows a customer’s address data after he has successfully authenticated via a login page. So the first thing is to check what data should be presented on the screen:

{
 profile: {
    name: "John Doe",
    username: "johnDoe",
    addressLine1: "Abbey Road1",
    addressLine2: "London",
    postcode: "ECA1",
    country: "UK"
 }
}

You can get the relevant data from Oracle Sales Cloud, but the retrieved data is not necessarily in the expected format. Here is the result of a sample call:

{
 profile: {
    name: "John Doe",
    username: "johnDoe",
    address1: "Abbey Road1",
    city: "London",
    code: "ECA1",
    country: "UK"
 }
}

It’s obvious that the attribute name addressLine1 used in the application does not map to the attribute name address1 coming from Oracle Sales Cloud. The same applies to addressLine2 versus city. The line with the attribute name postcode needs to be mapped too, as we get the field name code from the backend. These mappings need to be done before the result can be used in the mobile application, as sketched below.
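Conceptually the mapping is just a renaming of attributes. The sketch below illustrates it with a hypothetical Java helper; in the architecture discussed next this mapping would normally be configured declaratively in ICS (or coded in MCS), not hand-written like this.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class ProfileMapper {
    // Backend attribute name -> attribute name expected by the mobile application
    private static final Map<String, String> RENAMES = new HashMap<>();
    static {
        RENAMES.put("address1", "addressLine1");
        RENAMES.put("city", "addressLine2");
        RENAMES.put("code", "postcode");
    }

    public static Map<String, Object> toMobileProfile(Map<String, Object> salesCloudProfile) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : salesCloudProfile.entrySet()) {
            // Keep attributes that need no renaming, rename the rest
            result.put(RENAMES.getOrDefault(e.getKey(), e.getKey()), e.getValue());
        }
        return result;
    }
}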

Let us compare some options to accomplish the above. We also should keep in mind that we need a solution which is fast, reliable and easy to change.

Option 1

Using MCS only for the solution. Here is a short list of pros and cons:

  • no additional component involved
  • does not scale very well for integration – it has only limited resources available for backend connections and cannot parallelize the backend work
  • everything needs to be developed manually using JavaScript (you need very experienced people, higher effort in case of changes compared to integration engines)

Option 2

Using MCS as the central component for the connection from the mobile devices, and adding the ICS cloud option for backend integration purposes. Here is a list of pros and cons for this approach:

  • additional component involved (but only a single web service call is needed between MCS and ICS, which is easy to create and will never change)
  • ICS scales very well for integration purposes
  • built for integration, so simple and easy to use, with defined UIs for backend connectivity and data mapping
  • replacement of a backend solution is much easier as the mobile/MCS side is not affected (for example, use of Oracle Sales Cloud instead of Salesforce)

Comparing these pros and cons for the available options would lead us to use option 2 as the best approach for this integration.

Here is a graphical representation of option 2 :

Overview1_new

Explanation

1) Mobile Devices call MCS via REST call
2) MCS calls ICS via REST call
3) ICS calls Sales Cloud via sync SOAP (this can be any SaaS or on-premise application that offers web services)

Tip: In most cases you have the choice if you want to do the sync calls using SOAP or REST calls – normally that depends on which adapter is supported by your application.

A More Complex Example

But what happens if your mobile application needs to work with more complex data and multiple backends? Let us discuss that again using a more complex example from the same mobile application.

Your customer now wants to see the balance of his banking accounts, as you can see in the payload presented here.

{
   profile: {
      name: "John Doe",
      username: "johnDoe",
      addressLine1: "address1",
      addressLine2: "address2",
      postcode: "code",
      country: "UK"
   },
   accounts: [
      {
         id: "1234-5678-1234-8654",
         name: "Classic Checking",
         balance: "7120.36",
         currency: "USD",
         type: "checking",
         actions: [{action:"deposit"},{action:"transfer"},{action:"billPay"}]
      },
      {
         id: "1234-5678-1234-5432",
         name: "Regular Savings",
         balance: "5874.54",
         currency: "USD",
         type: "savings",
         actions: [{action:"deposit"},{action:"transfer"}]
      }
   ]
}

We now have two parts in our payload. The profile part again comes from Oracle Sales Cloud as mentioned above. The accounts data in our example is stored in Oracle Core Banking. So we will need to retrieve information from more than one source, do the mapping for every source, and combine the results for consumption in the mobile application. We need to check what options we have to do that and again compare the pros and cons. We will build on the decision made for the simple integration, so using MCS alone is no longer a viable option. As ICS does not yet offer a way to combine the results of multiple sources, we will have to add SOA Suite (based on Oracle Java Cloud Service) to the architecture.

Option 1

MCS calls SOA directly to get the data back from there. SOA in turn calls the backends, does the mapping, aggregates the data and sends this to MCS.

  • Mapping is not as much work as in MCS but needs to be done more manually compared to ICS
  • Any change in a backend (replaced backend or API changes) will trigger a re-development and re-deployment in SOA

Option 2

MCS calls ICS to get the data. ICS then calls SOA and SOA will directly call the backends. SOA does the mapping, aggregates the data and returns it to ICS. ICS then sends the data to MCS in the same format.

  • Will fit in the existing architecture
  • Reduces the complexity on MCS side as it only has to call ICS for all cases
  • Mapping is not as much work as in MCS but needs to be done more manually compared to ICS
  • Any change in backend (replaced backend or API changes) will trigger a re-development and re-deployment in SOA

Option 3

Similar to option 2, but this time SOA will not call the backends directly. It will call ICS to get the needed data and combine it into the correct format.

  • Will fit in the existing architecture
  • Reduces the complexity on MCS side as it only has to call ICS for all cases
  • Offers more backend adapters and more flexibility for the mapping – no redeployment needed for changes
  • Even replacing backends will be much easier as this can be done in ICS – no need to change anything in SOA – so more flexible

In our experience, the endpoint virtualization provided here by the additional layer in ICS is worth the investment – which leads us to choose option 3.

The graphical representation of the architecture looks like this:

Overview4_new

Explanation

1) Mobile Devices call MCS via sync REST
2) MCS calls ICS via REST call
3) ICS calls Java Cloud Service (SOA 12c) via REST call (this gives the results back to ICS after the calls in steps 4 and 5 have succeeded and SOA has built the payload for the answer)
4) ICS calls Oracle Sales Cloud via sync SOAP to get customer data
5) ICS calls Oracle Core Banking App via sync SOAP to get banking account data

Tip: In most cases you have the choice if you want to do the sync calls using SOAP or REST calls – normally that depends on which adapter is supported by your application.

Identity Management / Security Aspects

MCS is designed with enterprise-grade security baked in. Security begins at the level of the mobile backend. MCS security takes care of:

  • Authentication – the user logs in.
  • Authorization – the user only gets access to the information he should see and can only perform the actions he should be able to.

MCS uses the OAuth protocol for authentication from the mobile devices. An OAuth token is created during login and is used in subsequent interactions to identify the user. This token is also used to authorize the user’s access to information and actions on MCS. As we only use supplied functions, using this security framework does not create additional work. The sketch below shows the general calling pattern.
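As a rough illustration of that pattern – not the MCS SDK itself – a mobile or test client simply carries the token obtained at login on every call. The URL here is a placeholder, and a real MCS backend will require its own specific headers as documented for the service.

import java.net.HttpURLConnection;
import java.net.URL;

public class AuthenticatedCall {
    public static int callBackend(String accessToken) throws Exception {
        // Placeholder endpoint - in practice this is a custom API exposed by the mobile backend
        URL url = new URL("https://mobile.example.com/mobile/custom/banking/accounts");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // The OAuth token obtained during login identifies and authorizes the user
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Accept", "application/json");
        int status = conn.getResponseCode();
        conn.disconnect();
        return status;
    }
}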

Summary

This article provides an overview of the options you have to integrate a mobile application with your existing or future backend environments, leveraging Oracle’s PaaS offerings for mobile and integration cloud services. It should provide some guidelines on how to architect the backend integration depending on your requirements.

A Universal Cloud Applications Adapter for ODI


Cloud computing is dramatically accelerating the number of systems and technologies with which integration solutions have to interact. As new Cloud applications seem to appear every day, defining the specifics for each one of these applications one at a time quickly becomes a daunting task. To address this new challenge, we have created a universal approach that allows us to better leverage ODI with all of these emerging technologies, from the latest fad to the most established ones.

1. Background: Generating the best possible code

To be able to generate the most efficient SQL code for each and every database, ODI embeds a complete definition of their characteristics. This includes the definition of the native SQL supported by the databases, the definition of their native data types and how they convert to other technologies, and all the details that allow ODI to generate the best possible code for each technology. This covers DDL and DML generation, truly setting ODI in a category of its own.

In a Cloud environment, and with Cloud applications in particular, the requirements are somewhat different. Similarly to databases, JDBC drivers can be used to access Cloud applications (see the DataDirect Cloud JDBC drivers for instance). But Cloud applications often impose limitations on what code can be generated: there is usually no support for DDL execution (objects are typically created from the UI, not with a SQL statement), and DML is typically limited to the most basic statements (no analytic functions, for instance).

2. Defining a Universal Technology

Creating a generic technology that supports all Cloud applications makes a lot of sense from an ODI perspective. Since applications impose a lot of limitations on what is possible with the generated code, the definition of all these technologies in ODI ends up being very similar – with the exception of data types, which remain application specific. But as long as we are using JDBC, we can rely on standard JDBC data types and thus use the exact same data types for all Cloud applications. The number one limitation with this approach is that DDL generation is no longer possible: this is not an issue here as Cloud applications rarely allow it anyway.
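To see what relying on standard JDBC data types means in practice, the sketch below uses plain JDBC metadata calls to list a cloud object's columns with their generic java.sql.Types codes – conceptually what the RKM described later depends on. The driver URL and credentials are placeholders for whichever cloud JDBC driver you use; this is illustrative code, not part of ODI.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.JDBCType;
import java.sql.ResultSet;

public class GenericTypeProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - substitute your cloud application's JDBC driver URL and credentials
        try (Connection conn = DriverManager.getConnection("jdbc:yourCloudDriver://...", "user", "password")) {
            DatabaseMetaData md = conn.getMetaData();
            // DATA_TYPE is a generic java.sql.Types code, independent of the cloud application
            try (ResultSet cols = md.getColumns(null, null, "ORDERS%", null)) {
                while (cols.next()) {
                    System.out.printf("%s.%s : %s%n",
                            cols.getString("TABLE_NAME"),
                            cols.getString("COLUMN_NAME"),
                            JDBCType.valueOf(cols.getInt("DATA_TYPE")));
                }
            }
        }
    }
}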

3. Making a Universal Cloud Applications Adaptor for ODI

From an ODI perspective, we need two elements to properly implement a Universal Cloud Applications technology:

  • We need to define a new Technology in ODI Topology. This defines what SQL code can be generated, how to generate it and what data types are supported.
  • By default, when ODI reverse-engineers objects from a system, it uses native datatypes for the definition of the columns. Because we want to use generic data types instead of the native ones, we need to have a Reverse-engineering Knowledge Module (RKM) that returns the generic data types.

Once these elements are in place, generic Knowledge Modules can be used with this technology. The ODI A-Team is working with customers on alterations to some of these KMs to further improve performance related to Cloud connectivity, but the existing KMs definitely work. The team is also working on some Integration KMs that are more application specific, but these KMs still take advantage of the Cloud Applications technology described here.

Let’s now look into the specifics of the technology and the RKM.

3.1 The technology

We have created the necessary technology, called Cloud Applications. It can be downloaded from java.net here. This technology can be used to create data servers that contain the necessary parameters to connect to Cloud applications used as sources or targets in ETL mappings.

The Cloud Applications technology is used the same way as any other ODI technology: define your data server and enter the appropriate parameters for your JDBC driver, then create the necessary physical and logical schemas.

If you are using JDBC drivers that are not provided out of the box with ODI, you have to copy them into the following locations:

  • For the ODI Studio on a windows system: \Users\<yourUserName>\AppData\Roaming\odi\oracledi\userlib
  • For the ODI Studio on a unix system: $ODI_HOME/.odi/oracledi/userlib
  • For a standalone agent: <YourODIAgentHome>\odi\agent\lib
  • For a JEE agent, you can create an agent template to deploy the driver. This is described here in the Oracle documentation: Creating a Server Template for the Java EE Agent

Remember to restart your Studio and Agents after installing the drivers to make sure that these new drivers are properly loaded.

One great benefit is that under this Cloud Applications technology, each data server defines how to connect to one Cloud application. Different applications can be listed under this technology as Data Servers, each using its own JDBC driver and connection parameters. Below is an example of what the technology and associated datatypes look like in ODI Topology, with sample data servers defined to connect to Eloqua, Oracle Service Cloud and SalesForce:

Techno

One element to keep in mind is that, when using this technology, you have to use the RKM described below to reverse engineer cloud objects.

The mappings from these data types to most supported technologies are already part of the Cloud Applications technology definition. At this point, only the Oracle technology has the necessary mappings TO this technology. If you are using the Cloud Applications technology with other technologies than Oracle, you have to define the data type mappings from these technologies into the appropriate Cloud application data types. This must be done whether the Cloud application is used as a source or a target. To do this, edit each data type in the other technology, and select the matching type in the Cloud application. The example below shows how to do this matching in ODI Topology for the VARCHAR data type of MySQL.

DataTypeMapping

Note that not all data types have to be mapped, only the ones that are used in the mappings.

3.2 The Reverse-Engineering Knowledge Module

The RKM is used like any other RKM: import the RKM as a Global KM or as a local KM into the project of your choice (please note that a Global RKM would be a better choice as an RKM should really not be project specific).

Once the KM has been imported, create a model that points to the logical schema created for the Cloud application that you want to use. In the Reverse Engineer tab of the model, select the Customized option, and then select the RKM Cloud Application in the Knowledge Module drop down, as shown below:

CloudApplicationRKM

Keep in mind that with an RKM you do not have access to the Selective Reverse Engineering tab: if you want to filter or restrict the list of objects returned by the KM, use the Mask option. For instance, set the mask to ORDERS% to retrieve only table names that start with ORDERS.

Retrieving primary keys and foreign keys is disabled by default in the RKM options because not all technologies support referential integrity. As long as the Cloud application supports them, and as long as the associated JDBC driver supports them too, you can enable the options to import these as well.

The RKM can be downloaded from java.net here. Obviously you can easily extend its use and capabilities if need be by modifying the content and behavior.

4. Using the Cloud Applications technology and the associated RKM

If you reverse engineer the Cloud objects for this technology without the RKM Cloud Application, be aware that the behavior can be misleading. At first it looks like the reverse engineering process worked: tables and columns are properly listed. But if you look at the details of the columns, you can see that very few (if any) of these columns have any data types. As you build your mappings, these columns without data types are ignored by ODI when it generates the code (there is no way to convert data to and from unknown data types) and the resulting SQL code will be missing a large number of mapped columns.

So remember, when using this technology, you do have to use the RKM Cloud Application (same RKM for all Cloud applications). Once this is done, you can use these tables either as source or targets for your mappings as with any other tables.

5. Beyond basics

As always in ODI, it is possible to expand the use of the Cloud Applications technology with additional Knowledge Modules that can be application or vendor specific. As Cloud applications mature over time, it will always be possible to derive a new, more advanced technology on a case by case basis, and to support more advanced features for the more advanced of these applications.

Conclusion

By combining the Cloud Applications technology, the associated RKM and a JDBC driver that gives access to the applications, ODI can integrate Cloud data the same way it integrates all other data. And the Cloud Applications technology is ready to support the JDBC drivers of tomorrow’s Cloud applications.

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

Acknowledgements

Special thanks to Sumit Sarkar from DataDirect and Shankar Malayya from Forrester Research for their help and support in putting this solution together.

Integrating JSON responses with Oracle Business Intelligence Cloud Service (BICS)


Introduction

This article outlines how to automate the loading of a JSON response into Oracle BI Cloud Service (BICS) via Oracle Application Express (Oracle Apex).

The solution is PL/SQL based and can also be used for non-BICS database environments such as Oracle DbaaS and Oracle Database Schema Service.

The main components of the PL/SQL code are as follows:

1)    Read in the JSON response as a CLOB using the apex_web_service.make_rest_request function.

2)    Convert the CLOB to a JSON_LIST data type.

3)    Loop through rows of the JSON_LIST.

4)    Use the JSON_EXT.GET_STRING function to parse out the JSON attributes.

5)    Insert the JSON attributes into a relational table located in the BICS schema.

The rationale for the article is to overcome the following constraints:

1)    Currently the Oracle Business Intelligence Cloud Service REST API cannot read the JSON structure depicted in this article. This makes it difficult to achieve automated cloud-to-cloud data transfers without additional file conversions.

2)    The APEX_JSON package is only available in Oracle Apex 5.0 and higher. At the time this article was written, BICS shipped with Oracle Apex v4.2.6.00.03. In the future, should the BICS Apex version be upgraded, it may be preferable to use the functionality available in the APEX_JSON package rather than the PL/JSON package suggested in this article.

This article is intended for developers looking for a simple straightforward method to read JSON into BICS using PL/SQL. The primary requirements are:

1)    That the JSON response is accessible via a URL.

2)    The JSON response can be interpreted by the JSON_LIST data type installed with the PL/JSON package.

The JSON response may originate from a source such as (but not limited to) a mobile device, a website or another cloud application.

The sample code supplied parses the JSON data into a relational database table – making it readily available to model in BICS. Oracle Database 12c has built-in JSON functions that allow JSON data to be stored in its original format. This article does not cover these features; however, depending on how the data will be consumed, this may be something worth considering.

It is recommended to follow the steps below in chronological order. This will facilitate successful replication and assist with debugging.

Step One – Build the PL/SQL to return the JSON Response

Step Two – Load the PL/JSON Package

Step Three – Create the table to store the data

Step Four – Build the PL/SQL to parse the JSON and insert into the database

Step Five – Merge the code from Step One and Step Four

Main Article

Step One – Build the PL/SQL to return the JSON Response.

# 1-3: Explains how to build the PL/SQL

# 4-5: Describes how to run the PL/SQL – through Oracle Apex SQL Workshop

# 6-7: Illustrates a sample JSON response


 

1)    Replace the following values (that are highlighted in yellow in the below code sample box).

a)    l_ws_url – replace with JSON request URL

b)    g_request_headers – replace with JSON request headers

2)    Confirm that the PL/SQL returns the JSON response successfully.

3)    For a text version of this SQL code click here

DECLARE
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'https://YourURL';
BEGIN
apex_web_service.g_request_headers(1).name := 'Accept';
apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8';
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_http_method => 'GET'
);
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,24001));
END;

4)    From Oracle APEX -> SQL Workshop

Snap14

5)    Copy the SQL into the SQL Command window and Run -> Confirm that the Results return the JSON response.

Snap15

6) Below is an example JSON response.

[{"avatarDefinitionId":"13","attributes":{"Name":"HP Jet Pro Wireless","Status":"1","Manufacturer":"HP","Model":"P1102W","Type":"LaserJet"},"lastUpdatedTime":"2015-07-28T12:01:55.919Z"},{"avatarDefinitionId":"13","attributes":{"Name":"Epson Stylus Personal Printer","Status":"1","Manufacturer":"Epson","Model":"C88","Type":"Inkjet"},"lastUpdatedTime":"2015-07-28T12:01:00.000Z"},{"avatarDefinitionId":"13","attributes":{"Name":"Canon PIXMA","Status":"1","Manufacturer":"Cannon","Model":"MG3520","Type":"Wireless All-in-One "},"lastUpdatedTime":"2015-07-28T12:00:21.923Z"},{"avatarDefinitionId":"13","attributes":{"Name":"Konica Minolta Bizhub","Status":"4","Manufacturer":"Konica","Model":"C360","Type":"Color Copier"},"lastUpdatedTime":"2015-07-28T11:51:57.993Z"}]

7) For a text version of the JSON response click here

Step Two – Load the PL/JSON Package

Step Two describes how to load the artifacts from the PL/JSON Package into the BICS Schema.

The BICS schema name is visible from Oracle Apex SQL Workshop (in the Schema drop-down – top right).

It can also be gathered by running the SQL below.

The scripts should be run by a user with the “Database Administrator” role.

Click here for more information on BICS roles.

SELECT
sys_context('USERENV', 'CURRENT_SCHEMA') CURRENT_SCHEMA
FROM dual;

 Snap19


1)    Download the PL/JSON Package from here

2)    Click Download Zip

3)    Extract pljson-master.zip

4)   The extracted file contains the following:

Snap1

5)    Either run install.sql or run each of the below scripts individually in the order they appear in install.sql

json_value.typ
json_list.typ
json.typ
json_parser.sql
json_printer.sql
json_value_body.typ
json_ext.sql
json_body.typ
json_list_body.typ

6)    If loading the scripts through APEX, use the SQL Scripts loader.

       SQL Workshop -> SQL Scripts -> Upload -> Run

7)    If running the install.sql through SQL Developer, change the “select default path to look for scripts” setting to match where the zip file was extracted.

Tools -> Preferences -> Database -> Worksheet

Snap4

8)    Confirm all objects have a status of VALID

Snap2

Step Three – Create the table to store the data

CREATE TABLE PRINTER_INFO
(
RECORD_NUM number,
NAME varchar(100),
STATUS  varchar(100),
MANUFACTURER  varchar(100),
MODEL varchar(100)
);

Step Four – Build the PL/SQL to parse the JSON and insert into the database

1)    Save the JSON output from Step One to a local text file.

Download it from here

Rename the file json.txt

Save it to C:\Temp

Snap22

Snap6

 

2)    From Apex go to SQL Workshop -> RESTful Services

Snap7

 

3)    Create a RESTful Service Module.

Name: Printer.info

URI Prefix: Printer/

Snap8

4)    Add a URI Template

URI Template: PostStatus/

Snap9

5)    Add a Resource Handler

Method: POST

Source Type: PL/SQL

Source: Copy SQL from below (#7)

Snap10

6)    Modify the highlighted SQL to match the JSON file and table column names.

Add additional columns and change attribute names where needed.

7)    For a text version of this SQL code click here

declare
l_clob clob;
l_warning varchar2(32767);
l_list json_list;
l_dest_offset   integer;
l_src_offset    integer;
l_lang_context  integer;
l_col1 VARCHAR2(100);
l_col2 VARCHAR2(100);
l_col3 VARCHAR2(100);
l_col4 VARCHAR2(100);
begin
l_dest_offset := 1;
l_src_offset := 1;
l_lang_context := dbms_lob.default_lang_ctx;
DBMS_LOB.createtemporary(l_clob, FALSE);
DBMS_LOB.CONVERTTOCLOB(
dest_lob       => l_clob,
src_blob       => :body,
amount         => DBMS_LOB.LOBMAXSIZE,
dest_offset    => l_dest_offset,
src_offset     => l_src_offset,
blob_csid      => dbms_lob.default_csid ,
lang_context   => l_lang_context,
warning        => l_warning
);
l_list := json_list(l_clob);
for i in 1..l_list.count LOOP
l_col1   := json_ext.get_string(json(l_list.get(i)),'attributes.Name');
l_col2   := json_ext.get_string(json(l_list.get(i)),'attributes.Status');
l_col3   := json_ext.get_string(json(l_list.get(i)),'attributes.Manufacturer');
l_col4   := json_ext.get_string(json(l_list.get(i)),'attributes.Model');
INSERT INTO PRINTER_INFO(RECORD_NUM,Name,Status,Manufacturer,Model) VALUES (i,l_col1,l_col2,l_col3,l_col4);
end loop;
end;

8)    *** Apply Changes ***

Snap11

 

 

9)    Run through curl to test. Download curl from here.

Replace the following highlighted items:

username:password

-k with the Apex URL

-d with the path to the JSON file saved locally (as defined in Step Four – #1)

10) For a text version of the curl command click here

curl -u username:password -X POST -v -k "https://mytrial123-us123.db.usa.oracle.com/apex/Printer/PostStatus/" -H "Content-Type: application/json" -d@C:\temp\json.txt

11)    Confirm data loaded correctly

select * from PRINTER_INFO;

Snap20

Step Five – Merge the code from Step One and Step Four

1)    Edit the Resource Handler Source. If preferred, this may also be run in SQL Workshop.

To get to the Resource Handler Source go to:

SQL Workshop -> RESTful Services -> Module -> URI Template -> POST SOURCE

Snap7

Snap17

2)    Combine the PL/SQL developed in steps one and four. Replace the l_ws_url, g_request_headers, JSON attributes, and insert table columns.

3)    For a text version of the PL/SQL click here

declare
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'YourURL';
l_list json_list;
l_col1 VARCHAR2(100);
l_col2 VARCHAR2(100);
l_col3 VARCHAR2(100);
l_col4 VARCHAR2(100);
begin
--get JSON
apex_web_service.g_request_headers(1).name := 'Accept';
apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8';
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'GET'
);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,24001));
--convert clob to json_list
l_list := json_list(l_ws_response_clob);
for i in 1..l_list.count LOOP
l_col1   := json_ext.get_string(json(l_list.get(i)),'attributes.Name');
l_col2   := json_ext.get_string(json(l_list.get(i)),'attributes.Status');
l_col3   := json_ext.get_string(json(l_list.get(i)),'attributes.Manufacturer');
l_col4   := json_ext.get_string(json(l_list.get(i)),'attributes.Model');
INSERT INTO PRINTER_INFO(RECORD_NUM,Name,Status,Manufacturer,Model) VALUES (i,l_col1,l_col2,l_col3,l_col4);
end loop;
end;

4)    Apply Changes

Snap11

5)   Delete the records that were previously loaded

Delete from PRINTER_INFO;

Snap21

6)    Re-run through curl, this time omitting the -H and -d flags.

For a text version of the curl command click here

curl -u username:password -X POST -v -k "https://mytrial123-us123.db.usa.oracle.com/apex/Printer/PostStatus/"

7)    Confirm data loaded successfully

select * from PRINTER_INFO;

Snap20

Further Reading

Click here for the Application Express API Reference Guide –  MAKE_REST_REQUEST Function

Click here for the PL/JSON Reference Guide (version 1.0.4)

Click here for a Guide to “Introducing JSON”

Summary

This article describes how to integrate JSON with BICS. The solution requires the installation of the referenced PL/JSON package, and the code is accessed through the APEX RESTful Services. That said, the APEX RESTful Services are not mandatory, and other technologies more appropriate to the individual use case can be used. For illustration purposes, curl has been used to transfer the data; in a production environment, alternative data transfer methods may be preferred. This article was written primarily for a BICS audience. Non-BICS users may also find it beneficial, in particular those consuming JSON through cloud or mobile technologies who want to load the data into Oracle DBaaS or Oracle Database Schema Service (via Oracle APEX).

Using CDN with WebCenter Sites 11gR1


1.    Introduction

 

WebCenter Sites has multiple layers of cache. Closest to the database server is the Result Set Cache, then Asset Cache, then WebCenter Sites Page Cache, and finally WebCenter Sites Co-Resident/Remote Satellite Server Cache. Many times Remote Satellite Server is projected as an edge cache. However, there are many limitations in using Remote Satellite Server as an edge cache. For details refer to A-Team Chronicles article “Should Remote Satellite Server be used for Edge Caching”.[1]

Many of our customers use CDN to improve the performance of their web site. Typically WebCenter Sites customers use CDN to cache static images and web pages rendered by WebCenter Sites. Usually CDN caches images and web pages based on the URL. Let’s look at some of the use cases regarding what artifacts can be cached on CDN when using it for a WebCenter Sites rendered web site.

 

2.    Caching Static Content on CDN

 

Many clients use CDN just to cache static content. In an image-rich web site a significant portion of the web page size is due to static content like images, CSS, fonts, JS, etc., while the dynamic html itself may be quite small. Caching the static content in CDN reduces the time required to download it, and thereby improves the page response time.

To cache static content on CDN, it’s best to put the static content (images, css, js) on WebServers/HTTP Server. This static content can directly be cached by CDN.

To implement a robust cache strategy for static content it is important to determine the duration for which static content should be cached. You should consider how frequently the static content is updated, how long you can tolerate the stale static content, and how you inform CDN that the content has been updated.

The WebServer/HTTP server should stamp relevant cache instructions in the HTTP headers of static files such as JS, images, CSS and fonts. Using these instructions, the CDN determines how long to cache the content.
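For example, with Oracle HTTP Server or Apache HTTP Server this is typically done with mod_expires/mod_headers directives along the lines of the sketch below (the file extensions and the seven-day lifetime are illustrative assumptions only):

<IfModule mod_expires.c>
  <FilesMatch "\.(png|jpe?g|gif|css|js|woff)$">
    # Let the CDN/browser cache static assets for 7 days (illustrative value)
    ExpiresActive On
    ExpiresDefault "access plus 7 days"
    Header set Cache-Control "public"
  </FilesMatch>
</IfModule>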

 

3.    Caching Dynamic Content on CDN

 

The dynamic content usually refers to HTML that is generated in response to a client’s request to view a web page. Generating html requires some logic to execute on the application servers and data/content to be read from database. The time required to generate the html can be quite large and a significant part of the response time.

WebCenter Sites uses its “Page Caching” to cache the generated html. The cache maintains a list of all the parameters required to generate the html for the dynamic web page. Thus, when the next request with the same parameters comes to view the same page the html does not have to be generated. WebCenter Sites retrieves the html from its ‘Page Cache’.

Additionally CDN can be used to cache the dynamic web pages, as discussed below.

 

3.1 Caching WebCenter Sites Rendered Page on CDN

 

To cache WebCenter Sites rendered web pages, the web site should be coded to utilize the friendly URL capability. This makes it simpler to cache the web pages against the given URLs. For the purpose of caching WebCenter Sites rendered dynamic web pages in CDN, two cases are important – fully cached web pages, and partially cached or un-cached pages.

WebCenter Sites breaks the page into many pagelets when it renders a web page. Each of these individual pagelets can be ‘page cached’ on WebCenter Sites and Satellite Servers. ‘Fully Cached’ web page refers to the case when all the pagelets that comprise the page are cached. In this case the web page is ‘fully cached’ and does not have any component that is not ‘page cached’ on WebCenter Sites. ‘Partially Cached’ web page refers to those web pages where some of the pagelets are not ‘page cached’. Thus this web page is only ‘partially cached’ on WebCenter Sites.

 

3.1.1 Caching ‘Fully Cached’ WebCenter Sites Rendered Page on CDN

 

When WebCenter Sites generates and renders a ‘fully cached’ web page it sets the Last-Modified time in the http header to the time when the web page was generated. When WebCenter Sites gets subsequent requests for the page, it does not have to generate the html again and retrieves the page from its ‘Page Cache’. In such a case WebCenter Sites does not change the Last-Modified time. It’s only after the page cache expires or is flushed that WebCenter Sites needs to regenerate the page. When it generates and caches the page again, it sets the Last-Modified time accordingly. Thus the Last-Modified time refers to the time the web page was last generated by WebCenter Sites.

CDN can use the Last-Modified time to determine if its cache is up-to-date or needs to be refreshed.
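For example, a CDN (or you, while testing) can issue a conditional request and inspect the status code; the URL and date below are placeholders, and this assumes the origin honours If-Modified-Since:

# Expect 304 if the page has not been regenerated since the given date, 200 otherwise
curl -s -o /dev/null -w "%{http_code}\n" -H "If-Modified-Since: Tue, 04 Aug 2015 10:15:00 GMT" "http://www.example.com/products/index.html"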

 

3.1.2 Caching ‘Partially Cached’ or ‘Un-Cached’ WebCenter Sites Rendered Page on CDN

 

When WebCenter Sites generates and renders a ‘partially cached’ or ‘un-cached’ web page, it sets the Last-Modified time in the http header to the current time, i.e. the time when the web page is generated. When WebCenter Sites gets subsequent requests for the page, some templates/cs_elements need to be executed again, so WebCenter Sites again sets the Last-Modified time in the http header to the current time. Thus the Last-Modified time always refers to the time the request for the web page is received.

In this case Last-Modified time is not very useful in determining whether to refresh the CDN cache or not. For such pages usually customers specify a time out value for the duration for which the web page should be cached. After the time out, CDN makes another request to WebCenter Sites to get a fresh copy of the web page.

Just because it is technically feasible does not mean that you should always cache ‘un-cached’ pages on CDN. You need to carefully consider the reasons why these web pages are not cached on WebCenter Sites. If those reasons are still valid for CDN, you need to be careful before you decide to cache them on CDN. For very heavy traffic sites, caching web pages even for a short duration, say 5 or 10 minutes, can make a noticeable difference in performance.

 

3.2 Caching Blob Server Images on CDN

Your site may have some images that are managed by content editors using WebCenter Sites. These images are rendered using the blob server. The blob server URL for the images can be quite unfriendly and CDN may or may not be able to cache them. Furthermore, the blob server URL changes after the image asset is modified and published. Some of the web pages that are cached on CDN, however, may continue to use the old URL. In this case those web pages will continue to get the old image or may get a missing image. If the old image with the old URL is present in the CDN cache, the web page will get the old image. If the old image is not present in the CDN cache and the CDN makes a request to WebCenter Sites for the old URL, WebCenter Sites will return an error and the web page will get a missing image.

 

4.    What Not to Cache on CDN

 

It’s important to remember that not all content can be cached on CDN. For example, any ‘forms’ or pages that show any personal information should not be cached. Similarly, unless you have taken great care and your CDN allows it, one should not cache any ‘authenticated’ pages.

 

Do not cache any forms on CDN. Even the blank form pages should not be cached or should be cached with great care. Usually the URL for a blank form and filled form submit is the same. In this case you should not cache the blank form. In my opinion, it’s best to avoid caching any type of forms.

You should not cache any web page that has personal information. The visibility of a page that has personal information is very limited. It is not worthwhile to cache a page on CDN that has very limited visibility.

You need to be very careful in caching authenticated web pages that are behind a login. For this to work the CDN should have a way to find out if the visitor has logged in or not. Some CDNs have mechanisms for doing this. You should also consider how much visibility the authenticated pages have. Usually the authenticated pages are not cached using CDN.

5.    Conclusion

 

In short, you can use a CDN with WebCenter Sites very effectively to cache both static and dynamic content. You should be careful about what you cache and how you invalidate and refresh the cache. You should not cache any form pages or pages that contain personal information.

 

 

[1] http://www.ateam-oracle.com/should-remote-satellite-server-be-used-for-edge-caching/

 

Managing multiple applications in Mobile Cloud Services


Introduction

As the IT landscape changes from an on-premise architecture to a cloud-based architecture, and from desktop-based workstations to mobile and tablet devices, some questions may arise from an enterprise architecture perspective. One of those questions will probably be: “How do we manage multiple applications?”
When you have to design multi-channel applications you have a set of back-end services, security providers, policies and so on. How do you manage these assets and how do you make them available to different applications?

This article will guide you through some concepts used in Mobile Cloud Services so you will have a better understanding of how to manage multiple APIs and expose them to different applications based on their needs.

Main Article

Bottom-Up approach

First of all we need to rethink the way we design APIs.

Currently, many enterprises are using a service-oriented architecture, which advocates a top-down approach. All services are created independently of who or what consumes them. This top-down approach provides services that can be used across the entire enterprise, and if a new application requires any of these services, the application needs to consume them as-is.

The difference with the bottom-up approach is that we tailor the services, or APIs, to the needs of the mobile application. At first this might seem to contradict everything a proper enterprise architecture stands for; however, when we start to think about it, it makes more sense.

Current SOA-enabled services are consumed by applications deployed on heavy hardware. They have plenty of processing power and super-fast network connections to the back-end services.
If we look at the mobile landscape, all the processing power comes from a single device. Your mobile phone or tablet needs to parse huge amounts of data coming from the server. Because of this we need to rethink the design of the services that are consumed by these devices.

In general, an application deployed on a server will call multiple back-end services to build the data model for a single screen. While the server can call these services in parallel and build the screen in an asynchronous way, a mobile device lacks that capability. That is why it is better to tailor these services for mobile consumption. These services also need to be optimized for mobile usage so that only the data actually used on the mobile device is included in the response, which results in the best performance for your mobile application.
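As a simple illustration (the field names below are purely hypothetical), a mobile-tailored API might aggregate several back-end calls on the server and return only the fields that a single screen actually renders:

{
  "customerName": "Jane Doe",
  "accounts": [
    { "label": "Savings",  "balance": 1250.40 },
    { "label": "Checking", "balance": 310.15 }
  ],
  "lastLogin": "2015-08-01T09:30:00Z"
}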

More information about designing mobile-optimized APIs can be found in the Creating a Mobile-Optimized REST API article.

Mobile Back-ends, gateway to your application

If you want a mobile application to connect to your resources in MCS you have to create a mobile backend (MBE). An MBE is like a gateway for your application into your mobile assets: a server-side grouping of resources that can be shared by a group of related mobile applications.

 

mobilebackend_mcs_zoom

Even though an MBE is a grouping of resources, this does not mean these resources can only be consumed by a single MBE. Below is a table of the resources listed in the above image with a short description and whether you can reuse them in multiple applications and MBEs.

 

Resource Type    | Description                                                                                                          | Can be shared between applications
API              | Collection of REST endpoints                                                                                         | Yes
Storage          | Storage to store basic files like images, JSON files and so on                                                      | Yes
Connectors       | Connections to REST or SOAP based services, which can be your own on-premise services or other cloud based services | Yes
Users/Roles      | Authentication and Authorization configuration                                                                      | Yes
Analytics/Events | Analytical information about the usage or custom events happening in your application                               | No
Notifications    | Notification service to provide push notifications to the Google and Apple notification services                    | No

As you can see from the above table, most resources can be shared between different Mobile Backends.

Only the analytics/events and the notification service are uniquely bound to an MBE. This can lead to some design considerations.
For example, if you want to create advanced analytical events you have to make sure all your mobile applications share the same MBE, otherwise you won't be able to link them together.

Let's say you have a banking application for customers so they can access their accounts. The banking representatives have a different application that provides them with a 360 view of their customers. Both applications are different and will require different roles and users; however, they need to be exposed through the same MBE, otherwise you won't be able to share analytical data between the customer application and the representative application.

The above example shows that there is no one-to-one mapping between an MBE and a mobile application. The MBE is a grouping of related applications that share resources like APIs, storage, analytics and so on.

Managing users and roles for multiple applications

If you are building multiple applications on your MCS instance you also want to make sure you organize and manage users in a proper way.

The following image shows a diagram of how the different security elements in MCS work together:

MCS Security

Users are organized in User Realms. Each realm is a container for users and a schema to define the user attributes.

An MBE can only be linked to a single user realm so only the users in the configured realm will be able to access the application. This feature can be used to create a barrier between your applications and acts as a security boundary so you have full control over who can access your application.

Roles, on the other hand, are independent from user realms. If you have a “Customer” or “Admin” role, you only have to define it once and it will be reusable across all MBEs and user realms.
The purpose of a role is to restrict access to APIs and storage.

Each API can have a global rule, but you can also assign specific roles to each resource and method combination within your API.
This also means that security on your API is independent of an MBE. You cannot restrict a specific user's access to an API from one MBE while granting access to that same API through another MBE. For that use case, you have to implement additional security checks in the mobile application itself or through custom code.

Conclusion

When creating multiple applications on MCS it is important to think ahead and consider how to manage the different assets in MCS.
It is important to understand the relationship between a mobile back-end, APIs, storage, user realms and roles, and how these assets can be shared and separated.

A Mobile Backend is the key gateway for your mobile application. It defines which users can access it through the user realm assigned to it, and it specifies the collection of APIs and storage that can be accessed through that MBE. In addition, events and analytical information are grouped within a mobile back-end and cannot be shared between multiple MBEs.


Understanding listen ports and addresses to effectively troubleshoot Fusion Applications


Introduction:

To communicate with any process, you need to know at least 3 things about it – IP address, port number and protocol. Fusion Applications comprises many running processes. End users communicate directly with only a handful of them, but all processes communicate with other processes to provide necessary services. Understanding various IP addresses and listen ports is very important to effectively troubleshoot communication between various components and identifying where the problem lies.

Main Article:

We will start by describing some of the key concepts used in this post. We will then demonstrate their relationships and a way to get details about them so that you can use them while troubleshooting.

IP address – an identifier for a device on a TCP/IP network. Servers usually have one or more Network Interface Cards (NICs). In the simplest configuration, each NIC has one IP address. This is usually referred to as the physical IP address and is also the IP address mapped to the network hostname in the Domain Name System (DNS).

Virtual IP (VIP) – an IP address that does not correspond to a physical network interface. You can assign multiple IP addresses to a single NIC card. These virtual interfaces show up as eth0:1, eth0:2 etc. The reason to use VIP instead of physical IP is easier portability. In case of hardware failure, you can assign the VIP to a different server and bring your application there.

Host Name – the name assigned to a server. This name is usually defined in corporate and optionally in public DNS and maps to the physical IP address of the server. In FA, we refer to two different types of hostnames – physical and abstract.
Physical hostname is the one defined in DNS and recognized across the network.
Abstract hostname is like a nickname or an alias – you can assign multiple nicknames to the same IP address. For example, a person may officially be recognized as Richard, but his friends may call him Rick, Dick or even Richie. In FA, abstract hostname to IP address mapping is defined in the hosts file instead of DNS so that the alias to IP address mapping can be kept private to only a particular instance of Fusion Application.
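For example, an abstract hostname entry in the hosts file could look like the line below (the IP address and names are illustrative, borrowed from the examples later in this post):

# /etc/hosts: map an abstract hostname (alias) to the server's IP address
10.228.136.10   fusionhost.mycompany.com   fusionhost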

Listen Port – serves as an endpoint in an operating system for many types of communication. It is not a hardware device, but a logical construct that identifies a service or process. As an example, an HTTP server listens on port 80. A port number is unique to a process on a device. No two processes can share the same port number at the same time.

Ephemeral Port – a short-lived port for communications allocated automatically from a predefined range by the IP software. When a process makes a connection to another process on its listen port, the originating process is assigned a temporary port. It is also called the foreign port.

Think of a telephone communication. If you have to reach an individual and you do not wish to use your personal number since you want it to be available for incoming calls, you can use any available public telephone to make the call. This public phone also has a telephone number. It remains busy so long as you are on the call. As soon as you hang up, it becomes available for use by any other individual. Ephemeral ports are the equivalent of public telephones in this example.

Listen Address – the combination of IP address and listen port. A process can listen on various IP addresses, but usually on only one port. E.g., the DB listener listens on port 1521 by default. In the most common configuration, a process either listens on a single IP address, or on all available IP addresses.

Let us see how to use a useful and simple command “netstat” to identify and understand the addresses and their relationships.

In the simplest usage, netstat prints the following columns:

 

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name

 

The important ones for this post are the following:

Proto: Protocol used for this port
Local Address: The listen address. The first part is the IP address; second is the port number
Foreign Address: The address of the remote end point of this socket. Think of it as the phone number of the public telephone you used to call your contact. Similar to the listen address, the first part is the IP address; second is the port number
State: State can have various values. The important ones are:

LISTEN: The socket is listening for incoming connections. Foreign address is not relevant for this line
ESTABLISHED: The socket has an established connection. Foreign address is the address of the remote end point of the socket.
CLOSE_WAIT: The remote end has shut down, waiting for the socket to close.

Let us now look at some example outputs of the netstat command. In the first example below, let us look at some listen sockets

 

     Proto Recv-Q Send-Q Local Address          Foreign Address State   PID/Program name
1.   tcp        0      0 :::10214               :::*            LISTEN  4229/java
2.   tcp        0      0 0.0.0.0:10206          0.0.0.0:*       LISTEN  4230/nqsserver
3.   tcp        0      0 10.228.136.10:10622    0.0.0.0:*       LISTEN  3501/httpd.worker

 

All 3 of the lines above show that the respective processes are listening for an incoming request on a particular address. This is evident from the value under the “State” column.

In lines 1 and 2, the processes are listening on all available IP addresses. This shows up in netstat as either “0.0.0.0” or a string of empty delimiters – “:::”. The listen port is 10214 and 10206 for java and nqsserver respectively. This means you can connect to these programs using any IP address that is bound to any interface on that server, so long as you use the right port number

In line 3, the process is listening on port 10622 only on IP address 10.228.136.10. This means that any request coming on a different IP address will not reach the process, even if the IP address is of the same host.

This is particularly important when you use VIP for enabling dynamic failover of various components, such as SOA Server. In this case, telnet to physical_ip:port_number will fail, but to virtual_ip:port_number will succeed.
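A quick way to observe this, assuming a hypothetical managed server that listens only on a VIP (10.228.136.20) on port 8001:

telnet 10.228.136.10 8001    # physical IP of the host: connection refused
telnet 10.228.136.20 8001    # VIP the server listens on: connection succeeds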

Let’s now look at another example of some established connections:

 

     Proto Recv-Q Send-Q Local Address              Foreign Address          State       PID/Program name
1.   tcp        0      0 10.228.136.10:10212        10.228.136.10:39709      ESTABLISHED 4232/nqsclustercont
2.   tcp        0      0 10.228.136.10:12927        10.233.24.88:5575        ESTABLISHED 3524/httpd.worker
3.   tcp        0      0 ::ffff:10.228.136.10:29024 ::ffff:10.233.24.88:1521 ESTABLISHED 19757/java

 

 

Established connections show details of both ends of the socket. Based on our analogy above, “Local Address” and “Foreign Address” are 2 ends of a telephone connection. However, just by looking at a single line of netstat output for a socket, it cannot be determined which one is the listen port and which one is the connection requestor.

Please note that “Foreign address” doesn’t have to be a different host, or even a different IP address. It is simply the other end of socket connection and hence is a different process, which may be running on the same host or a different one. The local address always refers to an IP address on the same host.

To determine which one is the listen port and which one is ephemeral port, you need to determine if there is a LISTEN socket for that particular IP address. To do that, make sure you are on the host that the IP address is tied to, and run the following command:

netstat -anp | grep <local_address_port_number> | grep LISTEN

If it returns a LISTEN socket, then that is the process to which a client is connected. The client information can be found by running a similar command on the host referred to in “Foreign Address.”

If a LISTEN socket is not returned, the Foreign Address refers to the LISTEN socket and this line of netstat output refers to a client connecting to a remote process.

 

Troubleshooting scenario:

Let’s now try to apply this understanding to troubleshoot a simple problem – unable to access a web page.

You are trying to access the WebLogic administration console of CommonDomain and are unable to do so. To access the page, you type a URL similar to:

http://common-internal.mycompany.com:80/console

Note: The values used in this scenario are for demonstration purpose. You should substitute the appropriate values based on your environment

The first step is to identify which component has a problem. For that you need to understand the components involved. In a typical enterprise deployment of FA, a Load Balancer (LBR) sits in front of the HTTP Server, which in turn communicates with the WebLogic servers. Since the console application is deployed on the AdminServer, HTTP Server in turn talks to AdminServer of CommonDomain.

Here is a graphical representation of this flow:

 

01_ListenAddress

As you can see, the request flows through the LBR and the HTTP Server before reaching the AdminServer.

When you are unable to access a web page, the following are some of the common reasons:

1. Problem with name resolution
2. Problem with network layer preventing communication between 2 components
3. One or more components are down or unresponsive

We can use some basic tools to identify where the problem is. These tools are ping, telnet, netstat, lsof and ps. Once we have identified which component has a problem, we can figure out what is causing it. In this post, we will keep our focus on finding which component has a problem.

So let us walk through the request flow:

1. Check the name resolution to hostname/VIP in the URL (in our example common-internal.mycompany.com). Use the ping utility to determine if you can resolve the name, and contact the IP address. Since your browser is trying to contact the server, you will run ping utility on the desktop or device which is accessing this URL

ping common-internal.mycompany.com

Ping may not work if ICMP is disabled, but it will return the IP address that the name resolves to. Make sure this is the correct IP address. If it is not, the problem is with name resolution. If it returns the correct IP address, please move to the next step.

2. Make sure the port you are trying to reach on the hostname/VIP is reachable. Similar to #1 above, you will run telnet utility on the device you are trying to access the URL from.

telnet common-internal.mycompany.com 80

If telnet is unsuccessful, it means LBR cannot be reached on the HTTP port. This could be due to 2 reasons:

a. LBR is not listening or is down
b. Firewall or network issues are stopping this communication

If telnet is successful, please move to the next step.
3. The next step is to make sure LBR is configured properly and is routing requests to the HTTP Server(s) on the right port. Since LBR is a component outside of the FA stack and is usually managed by the network/security team, troubleshooting it is outside of the scope of this document. Make sure the team managing LBR confirms the configuration as well as reachability of the web server.

 

4. Another way to eliminate LBR and figure out if the problem is within FA stack and/or network communication between the components of the FA stack only is to directly access the URL from the web server. The initial URL we used resolved to LBR. We now need to figure out how to directly access this URL from the HTTP server. This can be done by changing the hostname in the URL to the hostname of one of the HTTP servers. Also change the port number in the URL to that of the listen address of the appropriate Virtual Host of the HTTP Server.

The virtual host configuration is stored in one of the files under $INSTANCE_HOME/config/OHS/<component_name>/moduleconf directory. These files are named FusionVirtualHost_<domain_short_form>.conf, where domain_short_form is fs for CommonDomain, hcm for HCMDomain, and so on.
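A quick way to list the listen ports for a given domain, assuming the directory layout described above (the component name ohs1 is only an example):

cd $INSTANCE_HOME/config/OHS/ohs1/moduleconf
grep -i "^Listen" FusionVirtualHost_fs.conf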

Since common-internal is used for CommonDomain, we will look at FusionVirtualHost_fs.conf. The first few lines in this file specify the listen addresses (one for HTTP requests and one for HTTPS):

## Fusion Applications Virtual Host Configuration
Listen fusionhost.mycompany.com:10614
Listen fusionhost.mycompany.com:10613

 

 

The <VirtualHost> section specifies the VirtualHost configuration and this can be used to identify the mapping between LBR port and HTTP server port:

#Internal virtual host for fs
<VirtualHost fusionhost.mycompany.com:10613 >
ServerName http://common-internal.mycompany.com:80

 

So common-internal.mycompany.com maps to fusionhost.mycompany.com:10613

Now we can change the URL and try to access it directly. The new URL will be http://fusionhost.mycompany.com:10613/console. Please note that your organization may block direct access to servers on ports other than SSH from the desktops. In this case, you can access this URL from a browser running on the HTTP server itself.
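If no graphical browser is available on the HTTP server, a simple command-line check from that host works as well, for example:

curl -I http://fusionhost.mycompany.com:10613/console    # any HTTP status line (e.g. 200 or 302) confirms OHS is serving the URL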

If this URL works, the problem is with components before the HTTP server, namely, LBR and Desktop and the network between them.

If this URL also doesn’t work, please move to the next step.

5. Now we need to check whether Oracle HTTP Server (OHS) is working or not.

a. Check if it is running. Use “opmnctl status -l”
b. Check if it is listening on the port of interest – in this case, port 10613.

-bash-3.2$ netstat -anp | grep 10613 | grep LISTEN
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp   0   0   10.228.136.10:10613   0.0.0.0:*   LISTEN   3501/httpd.worker

If the above commands are unsuccessful, we need to troubleshoot why OHS is not running.

If the above commands are successful, please move to the next step.

6. Now let's turn our attention to the final component of the flow – the WebLogic AdminServer. The first step is to identify the listen address. By default, the FA provisioning engine configures the AdminServer of a particular domain to listen on the IP address tied to the hostname (physical or abstract) specified in the response file created using the provisioning wizard. The Enterprise Deployment Guide for FA recommends changing this listen address to a VIP so that the AdminServer can be manually failed over in a highly available environment. Similar steps are recommended for automatic migration of the SOA Servers of each domain and the BI Server. Once the change to the listen address is made, the HTTP Server configuration needs to be edited to point to the new address.

So let’s determine if HTTP Server is configured properly and can communicate with the WebLogic Server – in this case AdminServer of the CommonDomain.

First, determine where the Location “/console” is configured to route requests. For CommonDomain, open FusionVirtualHost_fs.conf and look for “/console” under the internal virtual host section.

## Context roots for application consoleapp
<Location /console >
    SetHandler weblogic-handler
    WebLogicCluster fusionhost.mycompany.com:7001
</Location>

 

In our example, we have configured HTTP Server to direct incoming requests for “/console” to WLS running on host fusionhost.mycompany.com and port 7001.

First, we will verify connectivity to AdminServer.

Please make sure that this hostname maps to the right IP address.

Now use telnet to connect with AdminServer

telnet fusionhost.mycompany.com 7001

If telnet succeeds, make sure port 7001 is not accidentally in use by a different process. To do so, login to the host where AdminServer is supposed to run and issue the following command:

netstat -anp | grep 7001 | grep LISTEN

Example Output:

-bash-3.2$ netstat -anp | grep 7001 | grep LISTEN
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp   0   0   ::ffff:10.228.136.10:7001   :::*   LISTEN   14853/java

 

Make sure the process returned by the above command is actually the AdminServer.
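One way to verify this, using the PID returned by netstat in the example output above:

ps -p 14853 -o pid,user,args    # the command line should identify the AdminServer Java process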

If telnet fails, then it could point to the following:

a. HTTP Server is not configured to point to the correct listen address. Look at AdminServer configuration and make sure the Listen Address matches with HTTP server configuration.
b. AdminServer is either not running or not responding. Look at the AdminServer logs to determine whether it is healthy.
c. Network issues or firewall is blocking the communication.

 

Conclusion:

This concludes our troubleshooting of a failed web page request. As you can see, understanding listen addresses and rudimentary network tools is important in troubleshooting communication issues in Fusion Applications. This knowledge can be applied to non-HTTP requests and to products other than FA – even non-Oracle products.

Integrating Documents Cloud Public Links to Sales Cloud – Sales Collateral use case


Introduction

Oracle Documents Cloud and Oracle Sales Cloud working together are a perfect match to increase productivity, and even better if they are integrated in a single UI.


This post will show you how to integrate Oracle Documents Cloud with Oracle Sales Cloud using a simple “Sales Collateral” use case, which consists of a new menu item that shows documents from an embedded Documents Cloud Public Link.


This sample focuses on using Public Links from Documents Cloud and the Desktop UI from Sales Cloud, to cover Sales Cloud R9.

Sales Collateral

Main Article

To complete this sample you will need:

  • Oracle Documents Cloud account
  • ADF customization asset
  • Oracle Sales Cloud account with Application Composer and Topology Manager access

Due to the integration limitations of Sales Cloud R9 regarding outbound REST call support, the choice was to use Documents Cloud Public Links without security.
This sample covers Oracle Sales Cloud R9. It also works with Oracle Sales Cloud R10, but with R10 it is possible to do the integration without ADF customization, using only the Page Composer. It is also possible to create a new Simplified UI (Fuse) button. This subject is covered in another blog post.

Step-by-Step of the integration

For this integration, a new Sales Object will be created and its page content will be replaced by the ADF customization, which consumes the Topology object created beforehand. All of this needs to be done in a sandbox before publishing to all users.

Creating a Sandbox

Before starting, it is mandatory to create a sandbox where the customization will reside; it can be published later.

In the top right, open the menu below the user name and click “Manage Sandboxes”:

Manage Customizations Menu

* If you cannot see this menu, your user does not have customization rights; log out and log in with a user that has customization rights.

Manage Sandboxes list empty

Create a new Sandbox with any desired name:

CreateSandbox

Sandbox Created

The sandbox is created. Make sure to “Set as Active” your sandbox.

Creating a Topology Object

The topology object will store the Public Link address.

Go to “Setup and Maintenance” to create the Topology.

Setup and Maintenance

In the “Setup and Maintenance” screen, select “Manage Third Party Applications” and then Create, to create a new Topology entry.

Manage Third Party applications

Enter the Application Name, something like “SalesCollateral_PublicLink” or something simpler like “Sales Collateral”. Note that this name will be used by the customization.

Enter the Full URL, which contains the public link.

Then, enter the Partner Name: Documents Cloud

Topology Created

Your Topology object should look like this:

Topology Apps List

Creating Sales Collateral Object

Now we will create the Sales Collateral Object, which will be the placeholder for the customization.

Go back to the Navigator and enter the Application Composer:

Sales Collateral Object

* If you cannot see the Customization or Application Composer in the menu, the user probably does not have customization rights.

 

In the Application Composer, select “Sales” as the application and then click the New icon next to “Custom Objects”:

Application Composer

 

Enter the name “Sales Collateral” in the Display name and accept the default values:

App Composer

 

Select the icon (Sales Cloud R9 does not have an icon specific to Documents Cloud; Sales Cloud R10 comes with a specific “similar” icon):

App Composer

 

In the Sales Collateral object, go to pages:

App Composer

 

Go to “Desktop Pages” and click “Create Work Area”:

App Composer

 

In the “Menu Category”, select “Sales” and enter the “Menu Item Display Label”:

App Composer

 

Uncheck “Enable” for Regional Search, then click Next and Next again:

App Composer

 

Uncheck the options and click Next:

App Composer

 

Save and close:

App Composer

 

Go back to the Navigator; you should see a new “Sales Collateral” option under Sales:

Menu

 

Inside the “Sales Collateral” page, select “Customize Sales Collateral Pages” under the Administration menu:

Customize

 

Then choose “Site” to edit:

Customize Sales Collateral

 

Click “Manage Customizations”:

Customize Sales Collateral

 

Click the “Upload” option in the “All Layers” column for the “Page:”.

Upload Customization

 

Upload the file provided:

Upload Customization File

 

You can also create the customization file by creating an XML document with the following content:

<?xml version='1.0' encoding='UTF-8'?>
<mds:customization version="11.1.1.67.89" xmlns:mds="http://xmlns.oracle.com/mds" motype_local_name="root" motype_nsuri="http://java.sun.com/JSP/Page">
   <mds:insert parent="(xmlns(jsp=http://java.sun.com/JSP/Page))/jsp:root" position="last">
      <af:panelStretchLayout xmlns:af="http://xmlns.oracle.com/adf/faces/rich" id="ExtGen_psl11" inlineStyle="width:99.9%;" startWidth="0px" endWidth="0px" topHeight="0px" bottomHeight="0px" styleClass="AFStretchWidth" xmlns:f="http://java.sun.com/jsf/core" xmlns:c="http://java.sun.com/jsp/jstl/core" xmlns:fnd="http://xmlns.oracle.com/apps/fnd/applcore">
         <f:facet xmlns:f="http://java.sun.com/jsf/core" name="bottom"/>
         <f:facet xmlns:f="http://java.sun.com/jsf/core" name="center">
           <af:inlineFrame xmlns:af="http://xmlns.oracle.com/adf/faces/rich" id="DoCS_SalesLibrary" source="#{EndPointProvider.externalEndpointByModuleShortName['SalesCollateral_PublicLink']}/lyt=grid" inlineStyle="" styleClass="inlinestlye" sizing="none" shortDesc="Documents Cloud"/>
         </f:facet>
         <f:facet xmlns:f="http://java.sun.com/jsf/core" name="start"/>
         <f:facet xmlns:f="http://java.sun.com/jsf/core" name="end"/>
         <f:facet xmlns:f="http://java.sun.com/jsf/core" name="top"/>
      </af:panelStretchLayout>
   </mds:insert>
</mds:customization>

 

Updated code here:

Note that the EL expression in the XML (line 7) contains the name of the Topology object created previously:

#{EndPointProvider.externalEndpointByModuleShortName['SalesCollateral_PublicLink']}/lyt=grid

Also note the “/lyt=grid” at the end; this forces the layout to be grid.

 

Close the customization screen by hitting “Close” in the Top Right of the page.

 

At this point, you will be able to see the integration:

Sales Collateral working

 

* Remember that the Sales Cloud R10 has a different look and feel.

Conclusion

This article covers a simple integration between Oracle Documents Cloud and Oracle Sales Cloud using public links. It can also be replicated for use cases other than Sales Collateral and for different modules such as HCM.

 

A similar integration using Sales Cloud R10 will be posted soon and will be referenced here.

 

Reference

Oracle Documents Cloud Service info: http://cloud.oracle.com/documents

Oracle Sales Cloud info: http://cloud.oracle.com/sales-cloud

Embedding DoCS web user interface info: http://docs.oracle.com/cloud/latest/documentcs_welcome/WCCCD/GUID-3AB30A35-F0E4-4967-92C8-159FC5AA3844.htm#WCCCD4015

Implementing Upsert for Oracle Service Cloud APIs


Introduction

Oracle Service Cloud provides a powerful, highly scalable SOAP-based batch API supporting all the usual CRUD-style operations. We recently worked with a customer who wants to leverage this API at large scale but requires ‘upsert’ logic, i.e. the ability to either create or update data in Oracle Service Cloud (OSvC) depending on whether an object already exists in OSvC or not. At this time the OSvC API does not provide native support for upsert, but this article shows an approach to accomplish the same leveraging Oracle SOA Suite. It also provides data points regarding the overhead and the scalability in the context of high-volume interfaces into OSvC.

Main Article

Why Upsert?

One might ask why one would need upsert logic in the first place. Aside from the fact that this is common practice in some well-established applications such as Siebel, there are situations where upsert capabilities come in very handy. For example, if the source system feeding data into a target application cannot tell whether some data has been provided earlier or not, it's useful to be able to determine this on the target side and take the right action, i.e. create a new record or object in the target or update an existing record/object with new data. Clearly, creating duplicate information in the target is the one thing to be avoided most.

Maintaining Cross-References

In order to determine whether a particular source record has already been loaded into OSvC, cross-reference information must be maintained somewhere. There are different approaches to this; depending on system capabilities it could be maintained in the source system, the integration middleware, or the target system. There are specific advantages to each approach, but this is outside the scope of this article. In this case we want to leverage OSvC extensibility capabilities to provide additional attributes that can hold the references to the source record in the source system. A common practice is to use a pair of attributes such as (SourceSystem, SourceSystemId) for this purpose. With the OSvC object designer it's a straightforward task to do this, e.g. as shown for the Contact object below:

Custom Cross-Reference Attributes

Performance and scalability really matter in this scenario, so we have to make sure that the queries used to determine if a record already exists will perform well. We will ultimately construct ROQL queries that translate to point lookup queries in the OSvC database in order to verify whether a set of (SourceSystem, SourceSystemId) pairs exists in the OSvC database. Therefore, having a custom index on these two custom attributes will allow the OSvC database to execute such queries in a performant way, avoiding full table scans. In the OSvC object designer, defining a custom index is straightforward:

Custom Index

With that in place (after deploying the updated object to the system) we have all we need to store, maintain, and query the cross-references to the source record in OSvC. In the next section we will discuss how this can be leveraged in a SOA implementation to realise the upsert logic.
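To illustrate the idea, a lookup for a single source record is a simple point query like the sketch below (the batched variant actually used by the integration is constructed later in this article):

select c.ID
from Contact c
where c.CustomFields.CO.SourceSystem = 'LegacyApp1'
and c.CustomFields.CO.SourceSystemId = '15964985'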

SOA Implementation

As we are looking at a batch-style interface here with the need to process large volumes of records, it certainly does not make sense to query OSvC for each record to determine whether we need to execute a Create or Update operation. Instead, as we want to process a bulk of, say, 100 objects in one service invocation against OSvC, we design it in the following way to keep round trips to a minimum:

Step 1: SOA composite receives a bulk of 100 records.

Step 2: BPEL process constructs a single ROQL query to determine for all records in one go whether they already exist in OSvC or not. This ROQL will be executed via the queryCSV API method. Running individual object queries would not scale very well for this scenario.

Step 3: BPEL constructs the bulk API payload for OSvC by combining Create and Update operations.

Step 4: BPEL invokes the OSvC batch API and processes the response e.g. for a reply to the source system.

In other words, we have two interactions with OSvC. The first one is to retrieve the cross-referencing information held in custom attributes and the second one does the actual data processing taking the cross-referencing into account.

Upsert BPEL Process

As stated previously, in Step 2 above we need to construct a single ROQL query that takes care of looking up any cross-references for the list of records currently processed by the BPEL process. This is accomplished by string concatenation, adding a criterion to the ROQL where clause per record. The condition ensures that the core structure of the query ‘SELECT … FROM … WHERE’ is set for the first record, while for each subsequent record it just adds another OR clause.

  <xsl:variable name="whereclause">
    <xsl:for-each select="/ns0:LoadDataCollection/ns0:LoadData">
      <xsl:choose>
        <xsl:when test="position() = 1">
          <xsl:value-of select="concat (&quot;select c.id, c.CustomFields.CO.SourceSystemId from Contact c where (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='&quot;, ns0:cdiId, &quot;')&quot; )"/>
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of select="concat (&quot; or (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='&quot;, ns0:cdiId, &quot;')&quot; )"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:for-each>
  </xsl:variable>
  <xsl:template match="/">
    <tns:QueryCSV>
      <tns:Query>
        <xsl:value-of select="$whereclause"/>
      </tns:Query>
      <tns:PageSize>10000</tns:PageSize>
      <tns:Delimiter>,</tns:Delimiter>
      <tns:ReturnRawResult>false</tns:ReturnRawResult>
      <tns:DisableMTOM>true</tns:DisableMTOM>
    </tns:QueryCSV>
  </xsl:template>

This results in a ROQL query in the following structure:

select c.id, c.CustomFields.CO.SourceSystemId 
from Contact c 
where (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='15964985') 
or (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='15964986') 
or (c.CustomFields.CO.SourceSystem='LegacyApp1' and c.CustomFields.CO.SourceSystemId='15964987')
etc.

The corresponding result from running this ROQL against OSvC using the QueryCSV operation provides entries for all source records that already exist based on the SourceSystemId criteria. Conversely, for non-existing references in OSvC there isn't a result record in the queryCSV response:

         <n0:QueryCSVResponse xmlns:n0="urn:messages.ws.rightnow.com/v1_2">
            <n0:CSVTableSet>
               <n0:CSVTables>
                  <n0:CSVTable>
                     <n0:Name>Contact</n0:Name>
                     <n0:Columns>ID,SourceSystemId</n0:Columns>
                     <n0:Rows>
                        <n0:Row>12466359,15964985</n0:Row>
                        <n0:Row>12466369,15964987</n0:Row>
                        <n0:Row>12466379,15964989</n0:Row>
                        <n0:Row>12466387,15965933</n0:Row>
                        <n0:Row>12466396,15965935</n0:Row>
                        <n0:Row>12466404,15965937</n0:Row>
                     </n0:Rows>
                  </n0:CSVTable>
               </n0:CSVTables>
            </n0:CSVTableSet>
         </n0:QueryCSVResponse>

So in the case of the example we can conclude that for the record referencing 15964985, it would have to be an update, while it would be a create for reference 15964986.

In Step 3 this result needs to be merged with the actual data to construct the payload for the OSvC Batch API. We conditionally construct either a CreateMsg or an UpdateMsg structure depending on whether the previous ROQL query retrieved the source application key or not. If it's an update, it's essential to include the OSvC object identifier in the RNObjects structure so that the API is pointed to the right object in OSvC for the update.

  <xsl:template match="/">
    <ns1:Batch>
      <xsl:for-each select="/ns0:LoadDataCollection/ns0:LoadData">
        <xsl:variable name="appKey" select="ns0:appKey"/>
        <xsl:choose>
          <xsl:when test="count($InvokeLookup_QueryCSV_OutputVariable.parameters/ns1:QueryCSVResponse/ns1:CSVTableSet/ns1:CSVTables/ns1:CSVTable/ns1:Rows/ns1:Row[contains(text(),$appKey)]) = 0 ">
            <ns1:BatchRequestItem>
              <ns1:CreateMsg>
                <ns1:RNObjects xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="v13:Contact">
                  <v13:CustomFields xmlns:n3="urn:generic.ws.rightnow.com/v1_2">
                    <n3:GenericFields name="CO" dataType="OBJECT">
                      <n3:DataValue>
                        <n3:ObjectValue xsi:type="n3:GenericObject">
                          <n3:ObjectType>
                            <n3:Namespace/>
                            <n3:TypeName>ContactCustomFieldsCO</n3:TypeName>
                          </n3:ObjectType>
                          <n3:GenericFields name="SourceSystem" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>LegacyApp1</n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                          <n3:GenericFields name="SourceSystemId" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>
                                <xsl:value-of select="$appKey"/>
                              </n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                        </n3:ObjectValue>
                      </n3:DataValue>
                    </n3:GenericFields>
                  </v13:CustomFields>
                  <v13:Name>
                    <v13:First>
                      <xsl:value-of select="ns0:firstName"/>
                    </v13:First>
                    <v13:Last>
                      <xsl:value-of select="ns0:lastName"/>
                    </v13:Last>
                  </v13:Name>
                </ns1:RNObjects>
                <ns1:ProcessingOptions>
                  <ns1:SuppressExternalEvents>true</ns1:SuppressExternalEvents>
                  <ns1:SuppressRules>true</ns1:SuppressRules>
                </ns1:ProcessingOptions>
              </ns1:CreateMsg>
            </ns1:BatchRequestItem>
          </xsl:when>
          <xsl:otherwise>
            <ns1:BatchRequestItem>
              <ns1:UpdateMsg>
                <ns1:RNObjects xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="v13:Contact">
                  <ID xmlns="urn:base.ws.rightnow.com/v1_2">
                    <xsl:attribute name="id">
                      <xsl:value-of select="substring-before($InvokeLookup_QueryCSV_OutputVariable.parameters/ns1:QueryCSVResponse/ns1:CSVTableSet/ns1:CSVTables/ns1:CSVTable/ns1:Rows/ns1:Row[contains(text(),$appKey)]/text(),',')"/>
                    </xsl:attribute>
                  </ID>
                  <v13:CustomFields xmlns:n3="urn:generic.ws.rightnow.com/v1_2">
                    <n3:GenericFields name="CO" dataType="OBJECT">
                      <n3:DataValue>
                        <n3:ObjectValue xsi:type="n3:GenericObject">
                          <n3:ObjectType>
                            <n3:Namespace/>
                            <n3:TypeName>ContactCustomFieldsCO</n3:TypeName>
                          </n3:ObjectType>
                          <n3:GenericFields name="SourceSystem" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>LegacyApp1</n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                          <n3:GenericFields name="SourceSystemId" dataType="STRING">
                            <n3:DataValue>
                              <n3:StringValue>
                                <xsl:value-of select="$appKey"/>
                              </n3:StringValue>
                            </n3:DataValue>
                          </n3:GenericFields>
                        </n3:ObjectValue>
                      </n3:DataValue>
                    </n3:GenericFields>
                  </v13:CustomFields>
                  <v13:Name>
                    <v13:First>
                      <xsl:value-of select="ns0:firstName"/>
                    </v13:First>
                    <v13:Last>
                      <xsl:value-of select="ns0:lastName"/>
                    </v13:Last>
                  </v13:Name>
                </ns1:RNObjects>                  
                <ns1:ProcessingOptions>
                  <ns1:SuppressExternalEvents>true</ns1:SuppressExternalEvents>
                  <ns1:SuppressRules>true</ns1:SuppressRules>
                </ns1:ProcessingOptions>
              </ns1:UpdateMsg>
            </ns1:BatchRequestItem>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:for-each>
    </ns1:Batch>
  </xsl:template>

The outcome of this transformation is a list of UpdateMsg and CreateMsg elements that are passed to the OSvC API in a single invocation.

From the perspective of pushing data into the SOA layer, this now transparently provides upsert logic and cross-referencing is maintained in OSvC. The next question one might ask is what is the performance overhead of the above? Or in other words: how does the extra round trip impact the overall throughput of the interface?

Performance Analysis

In order to understand the performance implications we have tested a batch interface into OSvC to synchronise Contact information with and without this upsert logic. The below diagram illustrates the throughput, i.e. number of processed Contact objects per time period for a different set of scenarios. We were executing batches of 1 million records each time with a concurrency of 50 parallel client threads.

Upsert performance results

The first, blue bar represents the throughput when the upsert logic is not in place, i.e. there is no extra round trip and all 1M records translate to Create operations in OSvC. The second bar also represents 1M create operations, but this time with the upsert logic in place. It turns out that the overhead of the extra round trip is negligible in such a scenario, as all the heavy lifting is done during the actual data processing. The fast lookup queries (<1s for a batch of 100 records) are practically irrelevant for this specific use case.

We have conducted further tests with a growing proportion of update operations as opposed to create operations. The throughput keeps increasing as there are more updates and fewer creates. The simple reason is that the updates in our test case were rather light (updating 5 attributes of the object) compared to the creation of the full object, which has a much higher number of standard and custom attributes.

Conclusion

This article has provided an approach for implementing upsert capabilities for Oracle Service Cloud APIs. We have chosen to maintain cross-referencing information in OSvC and to use Oracle SOA Suite as the integration technology. We have also provided test results indicating the performance impact of the proposed design in high-volume scenarios.

OAM Federation 11.1.2.3: Performing a loopback test with WS-Federation


In a previous post I gave steps for performing a loopback test with SAML. This is where we configure OAM Federation to talk to itself, to act as both IdP and SP. This is useful in development and test environments to confirm OAM Federation is working without requiring an external server to talk to at the other end. So in this post, I want to do the same for WS-Federation (WS-Fed).

SAML vs WS-Fed

Support for WS-Federation (WS-Fed), and specifically the WS-Fed Passive Requestor Profile (WS-Fed PRP), was introduced in OAM Federation 11.1.2.3. Support for WS-Fed PRP in OAM 11.1.2.3 is similar to the support for WS-Federation in the 11.1.1.x Oracle Identity Federation (OIF) standalone product. OAM Federation 11.1.2.3 does not support the WS-Fed Active Requestor Profile.

WS-Fed is a competing federation protocol to SAML: the capabilities each provides are largely similar, but the underlying protocols are different and each uses distinct terminology. Most vendors, including Oracle, have focused on SAML for identity federation; WS-Fed has seen limited adoption outside of Microsoft-centric environments. Given that, if you have a choice, I would strongly recommend using SAML over WS-Fed.

There are two primary cases in which the need for WS-Fed may be encountered:

  • Active Directory Federation Services (ADFS): since this supports both SAML and WS-Fed, either protocol could be used for integration with OAM Federation. But the SAML-based integration should be preferred, since that is what most customers with this use case successfully use.
  • ASP.NET and WCF applications: the Windows Identity Foundation (WIF) component shipped with .NET 4.5 and later supports WS-Fed protection of ASP.NET and WCF applications simply through XML configuration changes (i.e. no code required). By contrast, while it does include some .NET classes for validating SAML tokens, it does not contain a complete implementation of the SAML protocol, so ASP.NET/WCF applications cannot be federated with SAML unless you either write some custom code or use third-party components. Because of this, when you have existing ASP.NET/WCF apps, migrating them from WS-Fed to SAML can be time-consuming.

Given this situation, the main use case for using WS-Fed with OAM Federation is ASP.NET or WCF applications.

Configuring OAM for the WS-Fed loopback test

We assume you have OAM 11.1.2.3 up and running, and that you have enabled the Federation Service in the OAM console. Also, note that if you did the SAML loopback test from my previous blog post, the SAML and WS-Fed loopback test configurations cannot co-exist at the same time. This is because OAM Federation only supports a single protocol per provider URL – either SAML or WS-Fed. In practice, this should not be an issue: if a given provider URL supports multiple protocols, it makes sense for OAM to use only one of those protocols to communicate with it.

In order to configure a WS-Federation partner, you need to use WLST commands. The OAM Console web application cannot create or edit WS-Federation partners, although it can be used to delete them.

We need to export the certificate stscertalias from $DOMAIN_HOME/config/fmwconfig/.oamkeystore.
Use this command:

$JAVA_HOME/bin/keytool -exportcert -keystore $DOMAIN_HOME/config/fmwconfig/.oamkeystore -storetype JCEKS -alias stscertalias -file /tmp/stscertalias.cer -v

When it asks you for the password, you can just press ENTER and ignore the WARNING. (You don’t actually need to know the password to export the certificate.)

Next we run the following commands using $OAM_ORACLE_HOME/common/bin/wlst.sh:

connect("weblogic","welcome1","t3://ADMINSERVERHOST:ADMINSERVERPORT")
domainRuntime()
configureTestSPEngine("true")
deleteFederationPartner(partnerName="LoopbackIDP", partnerType="idp")
deleteFederationPartner(partnerName="LoopbackSP", partnerType="sp")
addWSFed11IdPFederationPartner(partnerName="LoopbackIDP", providerID="http://OAMHOST:OAMPORT/oam/fed",ssoURL="http://OAMHOST:OAMPORT/oamfed/idp/wsfed11")
addWSFed11SPFederationPartner(partnerName="LoopbackSP", realm="http://OAMHOST:OAMPORT/oam/fed", ssoURL="http://OAMHOST:OAMPORT/oamfed/sp/wsfed11")
setFederationPartnerSigningCert(partnerName="LoopbackIDP",partnerType="idp",certFile="/tmp/stscertalias.cer")

Of course, you will replace ADMINSERVERHOST, ADMINSERVERPORT, OAMHOST and OAMPORT with appropriate values for your environment.

Note that we are calling configureTestSPEngine() to enable the OAM Federation test page – we will use that in our testing below. Note also that we are calling deleteFederationPartner() to delete the federation partners first. The first time you run this script, these two commands will print an error. However, by including them, we make it possible to run the script multiple times and have it delete any existing provider by that name before creating the new one.

Performing the loopback test

Open a new private browsing window and go to the URL http://OAMHOST:OAMPORT/oamfed/user/testspsso; you should see the following displayed:
WSFedBlogPost-001
(Note: if you find the screenshots hard to read, click on them to load higher resolution versions)

Selecting “LoopbackIDP” and clicking on “Start SSO”, we should get the OAM login page:
WSFedBlogPost-002
I try logging in as “weblogic”, and then I get this error:
WSFedBlogPost-003
What went wrong here? Out-of-the-box, the “weblogic” user included in the Embedded WebLogic LDAP does not have an email attribute, which is what we use by default as our NameID format. To fix this, I can set an email attribute on that user. Go to WLS console > Security Realm > myrealm > Users and Groups > Users > weblogic > Attributes. Find the “mail” attribute (it should be on the second page), click on its Value table cell, and enter an email address (e.g. weblogic@example.com):
WSFedBlogPost-004
Then press ENTER to save. Now if we try the test again using a new private browsing window, it works:
WSFedBlogPost-005
Note: if you don’t use a new private browsing window, and try the test again with the original window, you will get the same error again. This is because OAM Federation does not pick up the change in email address until the user logs out and logs in again.

Creating a Mobile-Optimized REST API Using Oracle Mobile Cloud Service – Part 3


Introduction

To build functional and performant mobile apps, the back-end data services need to be optimized for mobile consumption. RESTful web services using JSON as the payload format are widely considered the best architectural choice for integration between mobile apps and back-end systems. At the same time, many existing enterprise back-end systems provide a SOAP-based web service application programming interface (API). In this article series we will discuss how Oracle Mobile Cloud Service (MCS) can be used to transform these enterprise system interfaces into a mobile-optimized REST-JSON API. This architecture layer is sometimes referred to as Mobile Backend as a Service (MBaaS). A-Team has been working on a number of projects using MCS to build this architecture layer. We will explain step-by-step how to build an MBaaS, and we will share tips, lessons learned and best practices we discovered along the way. No prior knowledge of MCS is assumed. In part 1 we discussed the design of the REST API, in part 2 we covered the implementation of the “read” (GET) resources, and in this part we will move on to the “write” resources (POST, PUT and DELETE).

Main Article

This article is divided in the following sections:

  • Inserting and Updating a Department
  • Deleting a Department
  • Handling (Unexpected) Errors and Business Rule Violations

Inserting and Updating a Department

Our HR SOAP web service includes a method mergeDepartments which we can use for both inserting and updating departments. This is convenient: we need to create only one mergeDepartment function in our hrimpl.js file, which we call from hr.js for both the POST and PUT methods:

  service.post('/mobile/custom/hr/departments', function (req, res) {
    hr.mergeDepartment(req, res);
  });

  service.put('/mobile/custom/hr/departments', function (req, res) {
    hr.mergeDepartment(req, res);
  });

To find out the request payload structure expected by the SOAP method, we navigate to the Tester page of our HR SOAP connector and click on the mergeDepartments method:

MergeDepReqPayload

As explained in part 2, the SOAP connector in MCS by default converts the XML request and response payloads to JSON, which makes it much easier to invoke the connector from our custom API code. So, we need to transform the request payload we receive in our POST and PUT resources to the format expected by the SOAP method. In part 1 of this article series we designed the request payload structure for inserting and updating a department as follows:

{
    "id": 80,
    "name": "Sales",
    "managerId": 145,
    "locationId": 2500
}

With this information in place, we can now implement the mergeDepartment function in the hrimpl.js file:

exports.mergeDepartment = function (req, res) {
  var handler = function (error, response, body) {
    var responseMessage = body;
    if (error) {
      res.status(500).send(error.message);
    } else if (parseInt(response.statusCode) === 200) {
      var depResponse = transform.departmentSOAP2REST(JSON.parse(body).Body.mergeDepartmentsResponse.result);
      responseMessage = JSON.stringify(depResponse);
      res.status(200).send(responseMessage);
    }
    res.end();
  };

  var optionsList = {uri: '/mobile/connector/hrsoap1/mergeDepartments'};
  optionsList.headers = {'content-type': 'application/json;charset=UTF-8'};
  var soapDep = transform.departmentREST2SOAP(req.body);
  var outgoingMessage = {"Header": null, "Body": {"mergeDepartments": {"departments": soapDep}}};
  optionsList.body = JSON.stringify(outgoingMessage);
  var r = req.oracleMobile.rest.post(optionsList, handler);
};

Based on what you learned in part 2 you should now be able to understand this code as no new concepts are introduced here:

  • In lines 14-19 we set up the call to the SOAP mergeDepartments method. We use a new transformation function departmentREST2SOAP to transform the payload we receive into the format required by the SOAP method.
  • In lines 2-12 we set up the handler used to process the response from the SOAP method call. This handler is very similar to the handler we used in the getDepartmentById method we discussed in part 2. It uses the same transformation function to return the department object in the same format as it was sent. This follows the best practice we discussed in part 1: REST resources performing an insert or update of some resource should return this same resource, including any new or updated server-side attribute values. This way the client does not need to make an additional call to find out which attributes have been updated server-side.

For completeness, here is the code of the new departmentREST2SOAP function in transformations.js:

exports.departmentREST2SOAP = function (dep) {
var depSoap = {DepartmentId: dep.id, DepartmentName: dep.name, ManagerId: dep.managerId, LocationId: dep.locationId};
return depSoap;
};
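
For example, applying this transformation to the sample request payload shown above yields the attribute names expected by the SOAP service (transform is the alias under which transformations.js is required in hrimpl.js):

// Input: the REST payload designed in part 1; output: the ADF BC SOAP attribute names.
var depSoap = transform.departmentREST2SOAP({id: 80, name: "Sales", managerId: 145, locationId: 2500});
// depSoap is now {DepartmentId: 80, DepartmentName: "Sales", ManagerId: 145, LocationId: 2500}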

Deleting a Department

The ADF BC SOAP method payload to delete a department has a format similar to that of the mergeDepartments method:

{
   "Header": null,
   "Body": {
      "deleteDepartments": {
         "departments": {
            "DepartmentId": 10,
            "DepartmentName": "Administration",
            "ManagerId": 200,
            "LocationId": 1700,
        }
      }
   }
}

This is a bit odd since the only information used here is the primary key attribute DepartmentId. As a matter of fact, we only need to include the DepartmentId attribute and can leave out the other attributes for the delete to work. This is quite convenient since our REST API design in part 1 did not specify a department payload for the DELETE method; the ID of the department that should be deleted is simply passed in as a path parameter. This leads to the following implementation for deleting a department:

In hr.js:

  service.delete('/mobile/custom/hr/departments/:id', function (req, res) {
    hr.deleteDepartment(req, res);
  });

In hrimpl.js:

exports.deleteDepartment = function (req, res) {
  var handler = function (error, response, body) {
    var responseMessage = body;
    if (error) {
      res.status(500).send(error.message);
    } else if (parseInt(response.statusCode) === 200) {
      responseMessage = JSON.stringify({Message: "Department " + req.params.id + " has been deleted successfully."});
      res.status(200).send(responseMessage);
    }
    res.end();
  };

  var optionsList = {uri: '/mobile/connector/hrsoap1/deleteDepartments'};
  optionsList.headers = {'content-type': 'application/json;charset=UTF-8'};
  var outgoingMessage = {"Header": null, "Body": {"deleteDepartments": {"departments": {"DepartmentId" : req.params.id}}}};
  optionsList.body = JSON.stringify(outgoingMessage);
  var r = req.oracleMobile.rest.post(optionsList, handler);
};

As you can see, we only set the DepartmentId attribute in the department JSON object in the body of the SOAP request message, and we take the value from the path parameter. If the deletion was successful (HTTP status code 200), we return a success message.

Handling (Unexpected) Errors and Business Rule Violations

So far we have assumed that both the MCS API requests and the calls to the various SOAP methods are valid and return the expected status code of 200. In reality, there are some typical error situations that you should check for in your API implementation:

  • The SOAP service might not be running because the server is down or the SOAP service is unavailable
  • The REST call to MCS might contain invalid data, for example a non-existing department ID is in the resource path when trying to delete a department
  • The SOAP service method call might reach the service but return a status other than 200 because of some business rule violation. For example, when inserting a department some required attributes might be missing, or when deleting a department dependent employees might exist, preventing deletion of the department.

All these situations should be taken care of in the handler function that is used to process the result of our SOAP method calls. So far, we only checked for status 200 before we processed the SOAP response; otherwise we directly returned the SOAP response as the JSON response for the MCS API call. You might think we need to check for these error situations within the if (error) branch of the handler function, but this is not true. The error object is only passed in when the MCS HTTP server is not reached and no response is available, which is a very rare situation that only happens if you made a programming error in your custom code, such as using an invalid URI for your API call. All other error situations, including all of the examples given above, do return a response and should be handled by inspecting the HTTP response code in combination with the response body.

Let’s first see what is returned by the SOAP connector when the server hosting the ADF BC SOAP service is down:

{
    "type": "http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1",
    "status": 500,
    "title": "Timeout",
    "detail": "An HTTP read or connection timeout occurred. The timeout values used for the connection are \"HTTP Connection Timeout: 20,000 ms, HTTP Read Timeout: 20,000 ms\". Check the service and if it's valid, then increase the timeouts.",
    "o:ecid": "52f681a0d2bc84f2:4d22bd64:14dbe2a8ef8:-8000-0000000000498832, 0:5:1:1",
    "o:errorCode": "MOBILE-16005",
    "o:errorPath": "/mobile/custom/hr/departments"
}

When the server is up and running but the ADF BC SOAP service is not deployed or otherwise unavailable, the SOAP connector returns the following response body:

{
    "type": "http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.1",
    "status": 500,
    "title": "Problem calling SOAP service",
    "detail": "We encountered a problem when calling the SOAP service (Service Name: {/oracle/ateam/hr/soapdemo/model/common/}HRServiceService, Port: {/oracle/ateam/hr/soapdemo/model/common/}HRServiceServiceSoapHttpPort, Operation: findDepartments, Endpoint URL: http://10.175.210.106:7101/hrsoap/HRServiceService). Reason: Bad response: 404 Not Found from url http://10.175.210.106:7101/hrsoap/HRServiceService. Check the validity of the SOAP Connector configuration. ",
    "o:ecid": "52f681a0d2bc84f2:4d22bd64:14dbe2a8ef8:-8000-00000000004989d3, 0:2:1:1",
    "o:errorCode": "MOBILE-16006",
    "o:errorPath": "/mobile/custom/hr/departments"
}

Both error messages contain detailed technical information that you typically do not want to disclose to the users of your REST API. Since these are standard MCS error messages, we can write a generic error handler function. Here is a basic version that catches the above errors and replaces them with a user-friendly error message (note that the response object res is passed in as an argument so the handler can be reused across our resource implementations):

function handleSOAPInternalServerError (res, soapResponseBody) {
  if (soapResponseBody.title) {
    var title = soapResponseBody.title;
    if (title==='Timeout' || title==='Problem calling SOAP service') {
      res.status(503).send(JSON.stringify({Message: "Service is not available, please try again later"}));
    }
  } else {
    res.status(500).send(JSON.stringify(soapResponseBody));  
  }
}

Note the HTTP status code we return in this case, which is 503 Service Unavailable. A call to this error handler function should be included in each SOAP callback handler we have defined so far:

  var handler = function (error, response, body) {
    var responseMessage = body;
    if (error) {
      responseMessage = error.message;
      res.status(500).send(responseMessage);
    } else if (parseInt(response.statusCode) === 200) {
      responseMessage = JSON.stringify({Message: "Department " + req.params.id + " has been deleted successfully."});
      res.status(200).send(responseMessage);
    } else if (parseInt(response.statusCode) === 500) {
      handleSOAPInternalServerError(res, JSON.parse(body));
    } 
    res.end();
  };

 

Use the standard list of HTTP status codes to return the code that best matches the error situation you want to convey to the user of your API.

If we now try to access our MCS REST API when the SOAP service is not available, the response will look like this:

ServiceNotAvailable

Now let’s see what kind of response we get when an exception occurs within the ADF BC implementation of our SOAP service. Here is the response when we try to delete a non-existent department:

{
  "Header" : null,
  "Body" : {
    "Fault" : {
      "faultcode" : "env:Server",
      "faultstring" : "JBO-25020: View row with key oracle.jbo.Key[888 ] is not found in Departments.",
      "detail" : {
        "ServiceErrorMessage" : {
          "code" : "25020",
          "message" : "JBO-25020: View row with key oracle.jbo.Key[888 ] is not found in Departments.",
          "severity" : "SEVERITY_ERROR",
          "sdoObject" : {
            "@type" : "ns1:DepartmentsViewSDO",
            "DepartmentId" : "888"
          }
        }
      }
    }
  }
}

As you can see there is a lot of redundant information, and all we really need to include in our REST response is the actual error message. Since all error responses from our ADF BC SOAP service will have this same structure, it is quite easy to extend our generic error handler function to show the ADF BC error message:

function handleSOAPInternalServerError (res, soapResponseBody) {
  if (soapResponseBody.title) {
    var title = soapResponseBody.title;
    if (title==='Timeout' || title==='Problem calling SOAP service') {
      res.status(503).send(JSON.stringify({Message: "Service is not available, please try again later"}));
    }
  } else if (soapResponseBody.Body.Fault) {
      res.status(400).send(JSON.stringify({Message: soapResponseBody.Body.Fault.faultstring}));
  } else {
    res.status(500).send(JSON.stringify(soapResponseBody));  
  }
}

In Postman, the same department delete error will now look like this:

DeleteDepartmentError

Admittedly, this is still a somewhat technical message. It would be nicer to have a message like “Department ID 300 does not exist”. We will leave it as an exercise to you to extend this generic error handler to check for specific JBO error codes and replace them with more user-friendly messages.
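
As a starting point, such an extension might look like the hedged sketch below; the JBO codes and friendly messages in the mapping are illustrative assumptions, not an exhaustive list.

// Hypothetical mapping of ADF BC (JBO) error codes to user-friendly messages.
var friendlyMessages = {
  '25020': 'The requested department does not exist.',
  '27014': 'A required attribute is missing.'  // illustrative code/message
};

function toFriendlyMessage(serviceErrorMessage) {
  // serviceErrorMessage is the Body.Fault.detail.ServiceErrorMessage object shown earlier.
  var friendly = friendlyMessages[serviceErrorMessage.code];
  return friendly ? friendly : serviceErrorMessage.message; // fall back to the raw JBO message
}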

Conclusion

Building on the concepts we introduced in part 1 and part 2 we have shown you how to implement PUT, POST and DELETE resources including error handling. If you have followed along with all the coding samples, you probably noticed how simple it is to implement a fully functional mobile-optimized CRUD-style API using Oracle MCS. This is really the beauty of the programming model in MCS: a limited set of powerful core concepts that are easy to learn and can be applied very quickly. In the next article of this series we will discuss the one remaining core concept you are likely to use in almost all your API implementations: sequencing multiple connector and/or platform API calls in a row (or in parallel) without ending up in the infamous “callback hell”.

Leverage Oracle PaaS for Oracle SaaS


These days I spend a lot of my time helping Oracle SaaS customers unlock the potential that Oracle PaaS has to offer. In other words, leveraging PaaS to augment SaaS functions, to open up SaaS to new channels of data, and to integrate SaaS with other cloud/on-premises solutions. So, here is a little more about it in this post.

A quick look at https://cloud.oracle.com would reveal that Oracle offers over 18 PaaS services spanning an expansive breadth of technologies, allowing you to architect highly distributed and scalable solutions. Depending on your use case, these PaaS services can unlock new possibilities for your SaaS, such as:

  • Allowing Oracle Service Cloud users to initiate predictive maintenance for your IoT-enabled devices
  • Enabling collaboration with co-workers, partners, and customers through Oracle Social Network and Oracle Document Cloud Service
  • Acquiring actionable intelligence from large sets of enterprise and social data using Oracle Social Engagement and Oracle Big Data Services
  • and many more..

Oracle PaaS services all offer rich sets of APIs that allow you to design all-Oracle SaaS-PaaS solutions, or solutions that include other cloud/on-premises systems that you may have. Should you wish to leverage Oracle PaaS for your SaaS, Oracle makes this easier by providing features such as pre-built integrations and unified security.

I created this infographic to depict some possibilities when using Oracle PaaS services in conjunction with Oracle Customer Experience SaaS services. I’ve also illustrated the APIs and other mechanisms that connect these PaaS and SaaS services.

 

PaaSforSaaS_Final_Infographic_1

Fusion HCM Cloud – Bulk Integration Automation Using Managed File Transfer (MFT) and Node.js


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the bulk integration to load and extract data to/from the cloud.

The inbound tool is the File-Based Loader (FBL), which is evolving into HCM Data Loader (HDL). HDL is a powerful tool for bulk-loading data from any source into Oracle Fusion Human Capital Management (Oracle Fusion HCM). HDL supports both one-time data migration and incremental loads to support co-existence with Oracle applications such as E-Business Suite (EBS) and PeopleSoft (PSFT).

HCM Extracts is an outbound integration tool that lets you select HCM data, gather it from the HCM database and archive it as XML. This raw XML archive can then be converted into the desired format and delivered to supported channels and recipients.

HCM cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns.

Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal systems and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The extensive built-in reporting capabilities allow you to quickly check the status of a file transfer and resubmit it as required.

Node.js is a platform for running JavaScript code on the server. It enables real-time, two-way connections in web applications with push capability, using a non-blocking, event-driven I/O model. Node.js is built on an event-driven, asynchronous model: incoming requests do not block, and each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.

This post focuses on how to automate HCM Cloud batch integration using MFT (Managed File Transfer) and Node.js. MFT can receive files, decrypt/encrypt files and invoke Service Oriented Architecture (SOA) composites for various HCM integration patterns.

 

Main Article

Managed File Transfer (MFT)

Oracle Managed File Transfer (MFT) is a high performance, standards-based, end-to-end managed file gateway. It features design, deployment, and monitoring of file transfers using a lightweight web-based design-time console that includes file encryption, scheduling, and embedded FTP and sFTP servers.

Oracle MFT provides built-in compression, decompression, encryption and decryption actions for transfer pre-processing and post-processing. You can create new pre-processing and post-processing actions, which are called callouts.

The callouts can be associated with either the source or the target. The sequence of processing action execution during a transfer is as follows:

  1. Source pre-processing actions
  2. Target pre-processing actions
  3. Payload delivery
  4. Target post-processing actions
Source Pre-Processing

Source pre-processing is triggered right after a file has been received and has identified a matching Transfer. This is the best place to do file validation, compression/decompression, encryption/decryption and/or extend MFT.

Target Pre-Processing

Target pre-processing is triggered just before the file is delivered to the Target by the Transfer. This is the best place to send files to external locations and protocols not supported in MFT.

Target Post-Processing

Post-processing occurs after the file is delivered. This is the best place for notifications, analytic/reporting or maybe remote endpoint file rename.

For more information, please refer to the Oracle MFT documentation.

 

HCM Inbound Flow

This is a typical Inbound FBL/HDL process flow:

inbound_mft

The FBL/HDL process for HCM is a two-phase web services process as follows:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke “LoaderIntegrationService” or “HCMDataLoader” to initiate the loading process.
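
As a rough illustration only, the hedged Node.js sketch below posts a SOAP envelope to each of the two services in sequence using Node's built-in https module. The host name, credentials, endpoint paths and envelope contents are placeholders (assumptions); the actual GenericSoapPort and LoaderIntegrationService/HCMDataLoader payloads are defined by the WebCenter Content and HCM Data Loader documentation.

// Minimal sketch of the two-phase FBL/HDL call; URLs, credentials and envelopes are placeholders.
var https = require('https');

function postSoap(host, path, envelope, callback) {
  // Generic helper: POST a SOAP envelope and collect the raw response body.
  var options = {
    host: host,
    path: path,
    method: 'POST',
    auth: 'integration.user:password',                      // placeholder credentials
    headers: {'Content-Type': 'text/xml; charset=UTF-8'}
  };
  var request = https.request(options, function (response) {
    var body = '';
    response.on('data', function (chunk) { body += chunk; });
    response.on('end', function () { callback(null, body); });
  });
  request.on('error', callback);
  request.write(envelope);
  request.end();
}

// Phase 1: upload the data file to WebCenter Content (UCM) via the GenericSoapPort web service.
var uploadEnvelope = '...GenericRequest envelope carrying the zipped data file...';   // placeholder
postSoap('hcm-host.example.com', '/idcws/GenericSoapPort', uploadEnvelope, function (err, uploadResult) {
  if (err) { return console.error(err); }
  // Phase 2: invoke the loader service to initiate the load of the uploaded content item.
  var loadEnvelope = '...envelope referencing the UCM content id from uploadResult...'; // placeholder
  postSoap('hcm-host.example.com', '/hcmCommonBatchLoader/LoaderIntegrationService', loadEnvelope,
    function (err2, loadResult) {
      if (err2) { return console.error(err2); }
      console.log('Load initiated:', loadResult);
    });
});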

The following diagram illustrates the MFT steps with respect to “Integration” for FBL/HDL:

inbound_mft_2

HCM Outbound Flow

This is a typical outbound batch Integration flow using HCM Extracts:

extractflow

 

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM either by user or through Enterprise Scheduler Service (ESS) – this report is stored in WCC under the hcm/dataloader/export account.
  • MFT scheduler can pull files from WCC
  • The data file(s) are either uploaded to the customer’s sFTP server as pass through or to Integration tools such as Service Oriented Architecture (SOA) for orchestrating and processing data to target applications in cloud or on-premise.

The following diagram illustrates the MFT orchestration steps in “Integration” for Extract:

 

outbound_mft

 

The extracted file can be delivered to the WebCenter Content server. HCM Extract has the ability to generate an encrypted output file. In the Extract delivery options, ensure the following are correctly configured:

  • Set the HCM Delivery Type to “HCM Connect”
  • Select one of the four supported encryption types as the Encryption Mode, or select None
  • Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:
Author: FUSION_APPSHCM_ESS_APPID
Security Group: FAFusionImportExport
Account: hcm/dataloader/export
Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Fusion Applications Security

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application; for example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export for inbound and outbound integrations, respectively.
The FBL/HDL web services are secured through Oracle Web Service Manager (OWSM) using the following policy: oracle/wss11_saml_or_username_token_with_message_protection_service_policy.

The client must satisfy the message protection policy to ensure that the payload is encrypted or sent over the SSL transport layer.

A client policy that can be used to meet this requirement is: “oracle/wss11_username_token_with_message_protection_client_policy”

To use this policy, the message must be encrypted using a public key provided by the server. When the message reaches the server it can be decrypted by the server’s private key. A KeyStore is used to import the certificate and it is referenced in the subsequent client code.

The public key can be obtained from the certificate provided in the service WSDL file.

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion HCM supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP Encryption capability, a customer must exchange encryption keys with Fusion for the following:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion

MFT Callout using Node.js

 

Prerequisites

To automate HCM batch integration patterns, the following components must be installed and configured:

 

Node.js Utility

A simple Node.js utility, “mft2hcm”, has been developed to be invoked from an MFT server callout: it uploads or downloads files to/from the Oracle WebCenter Content server and initiates the HCM SaaS loader service. It utilizes the node “mft-upload” package and provides SOAP substitution templates for WebCenter Content (UCM) and the Oracle HCM loader service.

Please refer to the “mft2hcm” node package for installation and configuration.

RunScript

The RunScript callout, configured here as “Run Script Pre 01”, can be injected into MFT pre- or post-processing. This callout always sends the following default parameters to the script:

  • Filename
  • Directory
  • ECID
  • Filesize
  • Targetname (not for source callouts)
  • Sourcename
  • Createtime

Please refer to “PreRunScript” for more information on installation and configuration.

MFT Design

MFT Console enables the following tasks depending on your user roles:

Designer: Use this page to create, modify, delete, rename, and deploy sources, targets, and transfers.

Monitoring: Use this page to monitor transfer statistics, progress, and errors. You can also use this page to disable, enable, and undeploy transfer deployments and to pause, resume, and resubmit instances.

Administration: Use this page to manage the Oracle Managed File Transfer configuration, including embedded server configuration.

Please refer to the MFT Users Guide for more information.

 

HCM FBL/HDL MFT Transfer

This is a typical MFT transfer design and configuration for FBL/HDL:

MFT_FBL_Transfer

The transfer could be designed for additional steps such as compress file and/or encrypt/decrypt files using PGP, depending on the use cases.

 

HCM FBL/HDL (HCM-MFT) Target

The MFT server receives files from any Source protocol such as SFTP, SOAP, local file system or a back end integration process. The file can be decrypted, uncompressed or validated before a Source or Target pre-processing callout uploads it to UCM then notifies HCM to initiate the batch load. Finally the original file is backed up into the local file system, remote SFTP server or a cloud based storage service. An optional notification can also be delivered to the caller using a Target post-processing callout upon successful completion.

This is a typical target configuration in the MFT-HCM transfer:

Click on target Pre-Processing Action and select “Run Script Pre 01”:

MFT_RunScriptPre01

 

Enter “scriptLocation” where node package “mft2hcm” is installed. For example, <Node.js-Home>/hcm/node_modules/mft2hcm/mft2hcm.js

MFTPreScriptUpload

 

Do not check “UseFileFromScript”. This property replaces the MFT inbound (source) file with the file returned by the target execution; in FBL/HDL, the response from the target execution does not contain a file.

 

HCM Extract (HCM-MFT) Transfer

An external event or scheduler triggers the MFT server to search for a file in WCC using a search query. Once a document id is identified, it is retrieved using a “Source Pre-Processing” callout, which injects the retrieved file into the MFT transfer. The file can then be decrypted, validated or decompressed before being sent to an MFT Target of any protocol such as SFTP, file system, SOAP web service or a back-end integration process. Finally, the original file is backed up into the local file system, a remote SFTP server or a cloud-based storage service. An optional notification can also be delivered to the caller using a Target post-processing callout upon successful completion. The MFT server can live either on premises or in a cloud-hosted iPaaS environment.

This is a typical configuration of HCM-MFT Extract Transfer:

MFT_Extract_Transfer

 

In the Source definition, add “Run Script Pre 01” processing action and enter the location of the script:

MFTPreScriptDownload

 

The “UseFileFromScript” option must be checked because the source scheduler is triggered with the mft2hcm payload (UCM-PAYLOAD-SEARCH) to initiate the search and get operations against WCC. Once the file is retrieved from WCC, this flag tells the MFT engine to substitute the inbound file with the one downloaded from WCC.

 

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using MFT and Node.js. The Node.js package could be replaced with WebCenter Content native APIs and SOA for orchestration. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).


Oracle Mobile Cloud Service: Designed for Performance, Scalability and Productivity


Introduction

Oracle Mobile Cloud Service (MCS for short) is Oracle’s Mobile Backend as a Service (MBaaS) offering. MCS enables companies to create and deploy scalable, robust, and secure mobile applications quickly and easily, and empowers developers to leverage enterprise IT systems—without having to get IT involved.

Mobile Cloud Service not only provides a rich UI to manage and develop your mobile APIs, it also provides a well-designed architecture that delivers a highly available, scalable and productive environment.
This article goes into the details of the MCS environment and explains what to expect from its architecture.

Main Article

The article is divided into the following sections:

  • MCS Architecture
  • Scalability
  • Productivity

MCS Architecture

One of the key architectural components in MCS is Node.JS.

Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

Node.JS in the marketplace

Walmart used Node.JS to handle all of their mobile users during Black Friday. They received over 200 million users on that day while the CPU never exceeded 1%!
Additionally, when Groupon re-implemented their system in Node.JS they saw a 50% reduction in response times.
A last example that shows the power of Node.JS is LinkedIn. When they changed their mobile back-end from Rails to Node.JS they could cut the number of servers from 30 to only 3, while the new system performed 20 times faster!

For some more examples, you can read the following article: Why Node.JS is becoming the go-to technology in the enterprise

The reason Node.JS is so good at handling huge amounts of traffic lies mainly in its unique architecture.

Node.JS Architecture

Node.JS is an event-based, non-blocking I/O framework. What this means is that Node.JS has an event loop that delegates requests to the appropriate code. Whenever a request is sent to MCS, it is put onto the event loop of the Node.JS container, which calls the appropriate code from your implementation asynchronously. The asynchronous nature of the event loop is very important for performance in Node.JS: it won’t wait for your custom code to complete before processing other requests. Each request put onto the event loop is handled separately and asynchronously.

The same goes for your own custom code. Every time you use a callback function, you hook into the event loop of Node.JS and make use of the asynchronous nature of the container. Nothing can block another request; the only request that can be blocked is your own.
This is in contrast to thread-based request handling, where you have a fixed number of threads. If all the threads are busy, no resources are available to handle further requests, so they are put on hold until a thread becomes available.
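
The minimal, generic Node.js sketch below illustrates this non-blocking behaviour; it uses only the built-in fs module and no MCS-specific APIs.

// Minimal sketch: the slow I/O call does not block the event loop.
var fs = require('fs');

console.log('request received');

// Asynchronous read: Node registers a callback and immediately returns to the event loop.
fs.readFile('/tmp/large-file.dat', function (err, data) {
  if (err) { return console.error(err); }
  console.log('finished reading ' + data.length + ' bytes');
});

// This line runs before the file has been read: other requests can be served in the meantime.
console.log('event loop is free to handle the next request');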

NodeJS

Although there is only a single thread to process events, it is more than enough because the thread’s only responsibility is to delegate incoming requests to asynchronous handlers. Those handlers hand the I/O off to non-blocking operations, so they don’t block other requests.

Although we say it is a non-blocking architecture, there are cases where you could starve the environment of resources, and that is mainly limited to the CPU. Because there is only a single thread, if the CPU has no available cycles then the event loop cannot process further events and has to wait until the CPU is available.
This can be solved by spawning multiple workers spread out over the CPUs, so you can make use of a multi-core environment.
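
For illustration, the sketch below shows roughly how multiple Node.js workers can be spawned with the built-in cluster module to use all CPU cores. This is a generic Node.js example, not MCS-specific code; as described in the Scalability section below, MCS performs this kind of orchestration for you.

// Minimal sketch: fork one worker per CPU core using Node's built-in cluster module.
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // Master process: fork a worker per core so CPU-bound work is spread out.
  os.cpus().forEach(function () { cluster.fork(); });
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, forking a replacement');
    cluster.fork();
  });
} else {
  // Worker process: each worker runs its own event loop and shares the listening socket.
  http.createServer(function (req, res) {
    res.end('handled by worker ' + process.pid + '\n');
  }).listen(8080);
}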

So how does this architecture fit in with MCS?

MCS and Node.JS

Because MCS acts like a service bus, it has to deal with a high volume of requests; for this reason, Node.JS is a perfect fit for MCS. Much of the time, requests will call into your back-end services, whether other cloud services or on-premises services, and MCS has to wait for the response to come back. It is therefore important to have an infrastructure that does not block other requests when a back-end call takes a long time. While a request is waiting for those back-end connectors to respond, it does not take up resources, so this architecture makes maximum request throughput possible.

Scalability

The drawback of a single-threaded event loop is that it doesn’t fully exploit multi-core systems. It can also cause throughput limitations when you have CPU-intensive calculations. However, because MCS acts as a service bus, the CPU-intensive work is done by the back-end; most likely you will only need to aggregate data from different connector calls, which isn’t that CPU intensive unless you deal with a huge amount of data from a single connector call.

Fortunately, these limitations can be mitigated by spawning multiple Node.JS containers as concurrency gets higher, and that is where MCS helps you: you aren’t limited to a single Node.JS container, because MCS handles the orchestration of multiple Node.JS containers.

This means that if your throughput increases, MCS will spin up additional Node.JS containers to handle the higher volume of requests. If the throughput goes down, those additional containers are taken offline, freeing resources for other use.
All this orchestration is handled seamlessly in the background.

Productivity

Node.JS is JavaScript based, which means that developers who build APIs on MCS do not need to learn a new, complex language; they can reuse skills they already have.
Because of this, the silo between front-end and back-end developers vanishes: front-end and API developers can talk to and help each other, as their skills apply in both areas. In most cases you probably won’t have some people assigned to the API and others assigned to the front-end; they will be the same team, share the same tasks, and work together as a single unit, which increases efficiency.

If a front-end developer is missing an API resource, or a change needs to be made to the API, he or she can simply do it, because the API is written in the same JavaScript the front-end already uses.

Short learning curve

Because the main programming language in MCS is JavaScript, the learning curve is short. This is mainly because most developers who have to build assets on MCS will have at least a basic understanding of JavaScript; in addition, JavaScript is an easy language to pick up.

There is also no additional tooling to learn because you can write JavaScript in your favorite IDE or even in an advanced text editor like Notepad++ or Sublime.

We noticed this first hand when we were first introduced to MCS. Most people on the team had extensive experience with server-side Java but little to no experience with JavaScript; however, it only took us a day or so of going through a few JavaScript tutorials before we were productive on MCS.

Productivity through community

Node.JS has a huge community backing it. Many people and companies contribute modules to the global Node.JS repository. All the modules available in the Node Package Manager (NPM) repository can be freely used by anybody else, which means you don’t have to reinvent the wheel. The current NPM repository has over 173,000 modules available, with over 90 million downloads per day.

If you need to build something complex, there is a good chance there are modules available in the repository that can help you. So instead of spending two weeks writing complex code that might be error-prone, you simply install a module from the repository and benefit from its development and experience.

For example, a common task in MCS is aggregating JSON payloads from multiple connector calls. This involves traversing complex JSON objects and merging them into a single one.
If you look on the NPM repository, there are hundreds of modules available that can help make this task easier.
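
As a hypothetical sketch of this pattern, the async module (listed below) can run two connector calls in parallel and merge the resulting payloads; the getDepartment and getEmployees task functions and the payload shapes are placeholders standing in for real req.oracleMobile.rest connector calls.

// Hypothetical sketch: run two connector calls in parallel and merge their JSON payloads.
var async = require('async');

function aggregate(getDepartment, getEmployees, done) {
  // Each task is a function(callback) that yields (error, parsedJsonPayload).
  async.parallel([getDepartment, getEmployees], function (err, results) {
    if (err) { return done(err); }
    var department = results[0];
    var employees = results[1];
    // Merge the two payloads into the single JSON object returned to the mobile client.
    department.employees = employees;
    done(null, department);
  });
}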

Below you can find a list of modules that are useful in MCS:

  • async: gives you a lot of control over multiple asynchronous events. It allows you to chain calls and pass values from one asynchronous call to another.
  • xml2js: although MCS allows you to work natively with JSON, sometimes you will have to work with XML, and this module allows you to easily convert XML to JSON objects.
  • xml-parser: a module similar to xml2js.

Implementing OAuth 2 with Oracle Access Manager OAuth Services (Part I)


Introduction

This post will explain the basics of OAuth 2.0 and how it can be used to protect resources by implementing some of the most common OAuth use cases.

OAM provides out of the box OAuth Services, which allows a Client Application to access protected resources that belong to an end-user (that is, the Resource Owner).

Before going further in this post, check out the OAuth 2.0 specification to understand the basic OAuth concepts, as they won’t be covered here.

This post will be divided into 5 parts:

Part I – explains the proposed architecture and how to enable and configure OAM OAuth Services.

Part II – describes a Business to Business use-case (2-legged flow);

Part III  – explains the Customer to Business use-case (3-legged flow), when the client application is running in the application server;

Part IV – describes the Customer to Business use-case (3-legged flow), when the client application runs on the browser, embedded as Javascript code;

Part V  – discuss Access Token and Refresh Token usage/strategy and provides the source code for the use-case examples.

Architecture

The picture below represents an overview of the different components and the interaction between them.

Architecture

 

  • Client Web Application: these applications play the role of OAuth Clients; they request access to resources owned by the end user.
  • Resource Server: these are RESTful web services that represent the protected resources owned by the end user. They require a valid OAuth token in order to serve information back to the ClientWebApp.
  • OAM/OAuth Server: Responsible for authenticating the end user and for issuing and validating OAuth Tokens.

Configuration

The configuration takes place in the OAM Mobile and Social component, in the OAuth Services page.

You need to have access to the OAM Console at http://oam_admin_server:port/oamconsole to perform the required configurations.

Before proceeding, make sure you have enabled OAuth Services by following the steps described here.

Before jumping into the OAuth flow and code examples, we need to register Clients and Resource Servers with OAM.

For the B2B use-case, presented in part II, we will need to register a ‘Business’ Resource Server and a ‘BusinessClient’ Client.

The Business Client will request access to some resources from the Business Resource Server, passing an Access Token, obtained from OAM’s OAuth Server.

For the C2B use-cases, we will register one Resource Server: ‘Customer Resource Server’. It will be used for both C2B use-cases (app running in the application server and app running as embedded Javascript code).

We will also register the following clients: ‘Customer Client’ (for the C2B use-case where the app runs in the application server) and the ‘Browser Client’ (for the C2B use-case where the app runs as embedded Javascript code).

In all cases, the Resource Servers will need to validate the Access Token, before granting access to the protected resources.

To achieve this, there are two different approaches:

1 – Use a ‘special’ client, which we call the ‘Service Token Validator’. It is just another OAuth client, with no grant-type privileges, but it is configured to retrieve additional token attributes (useful for token validation). This approach is simple and requires almost no coding, since it only requires a REST call to the OAuth Server, passing the Access Token. It is the simplest choice, but it won’t be really useful in real-world applications as it does not scale or perform well;

2 – Export the OAuth Server signing key and implementing a piece of code in the Resource Server that validates the Access Token signature and claims. This approach is a bit more complicated as it involves some 3rd party library, different keys, claims, etc.

Both approaches, along with the pros and cons of one over the other, are explained in part V of this post series.
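
As a hedged illustration of the second approach only: the sketch below assumes a Node.js resource server and the third-party jsonwebtoken npm package, with the OAM signing certificate already exported to a PEM file. These are assumptions for illustration; the actual key export steps and claim checks are covered in part V.

// Minimal sketch: validate the incoming OAuth Access Token (a JWT) locally on the Resource Server.
var fs = require('fs');
var jwt = require('jsonwebtoken');

// Assumption: the OAM OAuth signing certificate has been exported to this PEM file.
var signingCert = fs.readFileSync('oam-signing-cert.pem');

function validateAccessToken(accessToken) {
  try {
    // Verifies the signature and expiry; returns the decoded claims on success.
    var claims = jwt.verify(accessToken, signingCert);
    // Additional claim checks (issuer, audience, scope) would go here.
    return claims;
  } catch (e) {
    return null; // token is invalid or expired
  }
}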

Creating a Custom Resource Server

A Resource Server represents a 3rd party Service or Application that your clients are entitled to have access to.

The use-cases proposed here involve ‘business’ and ‘customer’ services, so two Resource Servers will be created in OAM: ‘Business’ and ‘Customer’.

The Business Resource Server will be used in the B2B use-case and the Customer Resource Server will be used in the C2B use-cases.

To create the Resource Servers, follow these steps:

1. Log in to the OAM Console; on the Launch Pad page, open the OAuth Service page.

img1

 

2. Select the Default Domain.

img2

 

3. Open the Resource Servers tab.

img3

 

4. On the Custom Resource Servers table, click ‘create’ and create two Resource Servers according to the information below.

img4

 

 

Business Resource Server

Name: Business

Authorization & Consent Service Plug-in: CoherenceAuthzUserConsentPlugin

Scope Namespace Prefix: Business

Scopes: Info

Description: Reads Business Information

 

Your Business Resource Server should look like this:

img5

 

Customer Resource Server

Name: Customer

Authorization & Consent Service Plug-in: CoherenceAuthzUserConsentPlugin

Scope Namespace Prefix: Customer

Scopes: Info

Description: Reads Customer Information

 

Your Customer Resource Server should look like this:

img6

 

 

OAuth Client Registration

In OAuth2, clients are “applications making protected resource requests on behalf of the resource owner and with its authorization.”

The Services, or Applications, that play the roles of Resource Servers in our use cases, will validate the OAuth token received from the clients when those clients request access to their protected resources.

To do so, our Services will need to be registered as ‘special’ clients with OAM, with the only purpose of validating OAuth tokens. More on that later.

Four OAuth clients will be registered, one for the Business Use Case (B2B), one for the C2B Use Case – server side application, one for the C2B Use Case – application embedded as Javascript code, and one additional client for our Resource Server applications to validate the Access Tokens.

 

Access the OAM Console,  and execute the following steps:

1. On the Launch Pad page, in the Mobile and Social section, click on the OAuth Service link. The OAuth Identity Domains tab appears.

2. Click on the Default Domain, and open the OAuth Clients tab.

img7

3. Click on the Create button and enter the appropriate data to create the clients (execute this step four times, once for each client, using the data below).

 

Business Client

The Business Client represents the business client application in the B2B use case.

This application will make calls to the Resource Server represented by a RESTful web service that will provide business information based on the client ID.

Set the following properties for the Business Client:

Client ID: businessClient

Client Secret: 3WFLudTFyBk

Bypass User Consent: checked

Allowed Scopes: Business.Info

Grant Types: Client Credentials

 

Your Business Client should look like this:

img8

 

Customer Client

The Customer Client represents the customer client application, in the C2B use case, where the application runs on the server side.

This application will make calls to the Resource Server represented by a RESTful web service that will provide information based on the end user authenticated with OAM.

Set the following properties for the Customer Client:

Client ID: customerClient

Client Secret: zRpJbl73iE8X

HTTP Redirect URIs: https://oxygen.mycompany.com:7502/ClientWebApp/CustomerClient

(This is the URI of the client application that will receive the Authorization Token, after the user successfully authenticates. See the provided source code for more details).

Allowed Scopes: Customer.Info, UserProfile.me

Grant Types: Authorization Code

 

Your Customer Client should look like this:

img9

 

Browser Client

The Browser Client represents the customer client application, in the C2B use case, where the application runs on the browser itself.

This application will make calls to the Resource Server represented by a RESTful web service that will provide information based on the end user authenticated with OAM.

Client ID: browserClient

Client Secret: g8aGglZDrPDl7e

HTTP Redirect URIs: https://oxygen.mycompany.com:7502/ClientWebApp/index.html

(This is the URI of your client application that will receive the Authorization Token, after the user successfully authenticates).

Allowed Scopes: Customer.Info, UserProfile.me

Grant Types: Authorization Code

 

Your Browser Client should look like this:

img10

 

 

Service Token Validator

The Service Token Validator is a ‘special’ client; it actually runs on the Resource Server side, and will only be used by the Resource Server to validate the incoming OAuth Access Tokens.

Client ID: ServiceTokenValidator

Allow Token Attributes Retrieval: Checked.

(This will allow our client to obtain detailed information about the Access Token along with its validation status.)

Client Secret: A3nqKocV6I0GJB

No other information is required for this client.

 

Your Service Token Validator Client should look like this:

 

img11

 

 

That’s it for now, in the following post we will cover the Business to Business Use case.

Integrating Oracle Fusion Applications – WebCenter / Universal Content Management (UCM) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article describes how to integrate Oracle Fusion Applications – WebCenter / Universal Content Management (UCM) with Oracle Business Intelligence Cloud Service (BICS). The integration pattern covered shares similarities to those addressed in the previously published A-Team blog on: “Integrating Oracle Fusion Sales Cloud with Oracle Business Intelligence Cloud Service (BICS)”. The motivation behind this article is to provide a fresh perspective on this subject, and to offer an alternative for use cases unable to use OTBI web services to extract Fusion data.

The solution uses PL/SQL and Soap Web Services to retrieve the Fusion data. It was written and tested on Fusion Sales Cloud R10. That said, it is also relevant to any other Oracle product that has access to WebCenter / UCM – provided that idcws/GenericSoapPort?wsdl is publicly available. The article is geared towards BICS installations on an Oracle Schema Service Database. However, it may also be useful for DbaaS environments.

The artifacts provided can be used as a starting point to build a custom BICS – WebCenter / UCM adapter. The examples were tested against a small test data-set, and it is anticipated that code changes will be required before applying to a Production environment.

The article is divided into four steps:

 

Step One – Create and Activate the Schedule Export Process

Describes the data staging process – which is configured through “Schedule Export” (accessed via “Setup and Maintenance”). “Schedule Export” provides a variety of export objects for each Fusion module / product family. It walks through creating, editing, scheduling, and activating the “Scheduled Export”. The results of the “Scheduled Export” are saved to a CSV file stored in WebCenter / UCM.

Step Two – Confirm data is available in WebCenter / UCM

Verifies that the user can log into Webcenter / UCM and access the CSV file that was created in Step One. The “UCM ID” associated with the CSV file is visible from the WebCenter / UCM Content Server Search. The id is then used in Step Three to programmatically search for the object.

Step Three – Test GET_SEARCH_RESULTS and GET_FILE Soap Requests

Outlines how to build the Soap requests that utilize the public idcws/GenericSoapPort?wsdl (available in Fusion R10). “GET_SEARCH_RESULTS” is used to retrieve the “dID” based on the “UCM ID” of the CSV file (gathered from Step Two). “GET_FILE” is then used to retrieve the file associated with the given “dID”. The file is returned as a SOAP attachment.

Step Four – Code the Stored Procedure

Provides PL/SQL samples that can be used as a starting point to build out the integration solution. The database artifacts are created through Apex SQL Workshop SQL Commands. The Soap requests are called using apex_web_service.make_rest_request.

Generally speaking, apex_web_service.make_rest_request is reserved for RESTful Web Services and apex_web_service.make_request for Soap Web Services. However, in this case it was not possible to use apex_web_service.make_request, as the data returned was not compatible with its mandatory XMLTYPE output. Apex_web_service.make_rest_request has been used as a workaround, as it offers additional flexibility, allowing the data to be retrieved as a CLOB.

For “GET_SEARCH_RESULTS” the non-XML components of the file are removed, and the data is saved as XMLTYPE so that the namespace can be used to retrieve the “dID”.

For “GET_FILE” the data is kept as a CLOB. The non-CSV components of the file are removed from the CLOB. Then the data is parsed to the database using “csv_util_pkg.clob_to_csv” that is installed from the Alexandria PL/SQL Utility Library.

 

Main Article

 

Step One – Create and Activate the Schedule Export Process

 

1)    Click Setup and Maintenance.

Snap0

 2)    Enter “Schedule Export” into the “Search: Tasks” search box.

Click the arrow to search.

Snap2

3)    Click Go to Task.

Snap3

 

4)    Click Create.

Snap4

 

 

 

 

 

 

5)    Type in Name and Description.

Click Next.

Snap1

 

6)    Click Actions -> Create.

Snap6

 

 

7)    Select the desired export object.

Click Done.

Snap2

 

 

 

 

 

8)    Click the arrow to expand the attributes.

Snap9

 

9)    Un-check unwanted attributes.

For this example the following five attributes have been selected:

a)    Territory Name
b)    Status Code
c)    Status
d)    Type
e)    Forecast Participation Code

Snap3 Snap4

10)   Select Schedule Type = Immediate.

Click Next.

Snap10

11)   Click Activate.

Snap11

 

12)   Click refresh icon (top right) until status shows as “Running”.

Confirm process completed successfully.

Snap12a

 

Snap6

Snap7

13)   Once the process is complete – Click on “Exported data file” CSV file link.

Confirm CSV contains expected results.

Snap8

Step Two – Confirm data is available in WebCenter / UCM

 

1)    Login to WebCenter / UCM.

https://hostname.fs.em2.oraclecloud.com/cs

2)    Search by Title.

Search by Title for the CSV file.

Snap1

 

3)    Click the ID to download the CSV file.

4)    Confirm the CSV file contains the expected data-set.

Snap2

Step Three – Test GET_SEARCH_RESULTS and GET_FILE Soap Requests

 

1)    Confirm that the idcws/GenericSoapPort?wsdl is accessible. (Note this is only public in Fusion Applications R10.)

https://hostname.fs.em2.oraclecloud.com/idcws/GenericSoapPort?wsdl

Snap15

 

2)    Launch SOAPUI.

Enter the idcws/GenericSoapPort?wsdl in the Initial WSDL box.

Click OK.

https://hostname.fs.em2.oraclecloud.com/idcws/GenericSoapPort?wsdl

Snap16

 

3)    Right Click on the Project.

Select “Show Project View”.

Snap17

4)    Add a new outgoing WSS Configuration called “Outgoing” and a new WSS Entry for “Username”

a)    Click on the “WS-Security Configurations” tab.

b)    Click on the + (plus sign) located in the top left.

c)    Type “Outgoing” in Name.

d)    Click OK.

e)    Click on the + (plus sign) located in the middle left. Select Username. Click OK.

f)    Type in the WebCenter / UCM  user name and password.

g)    In the Password Type drop down box select “PasswordText”

Snap18

 

5)    Add a new WSS Entry for Timestamp

a)    Click on the + (plus sign) again located in middle left.

b)    Select Timestamp. Put in a very large number. This is the timeout in milliseconds.

c)    Close the window.

Snap19

 

6)    Click on Request

Delete the default envelope and replace it with below:

Replace the highlighted UCM ID with that found in “Step Two – Confirm data is available in WebCenter / UCM”.

Do not remove the [``] tick marks around the UCM ID.

For a text version of this code click here.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Header>
Right Click here … then remove this text
</soapenv:Header>
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dDocName &lt;starts> `UCMFA001069`</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>

7)    Place the cursor in between the <soapenv:Header> tags (i.e. “Right Click here … then remove this text”).

Right Click -> Select Outgoing WSS -> Select Apply “Outgoing”.

Snap4

8)    The previously defined header containing the user name, password, and timeout settings should now be added to the request.

Remove the “Right Click here … then remove this text” comment.

Confirm Outgoing WSS has been applied to the correct position.

Snap5

9)    Submit the Request (by hitting the green arrow in the top left).

Snap6

10)   The Request should return XML containing the results of GET_SEARCH_RESULTS.

Ctrl F -> Find: dID

Snap7

 

 

 

 

11)   Make note of the dID. For example the dID below is “1011”.

Snap8

 

 

12)   Right Click on the Request.

Rename it GET_SEARCH_RESULTS for later use.

Snap25a

 

 

Snap25b

 

 

13)   Right Click on GenericSoapOperation -> Select New Request

Snap26

 

 

14)   Name it GET_FILE.

Snap27

 

 

15)   Delete the default request envelope and replace with below:

For a text version of the code click here.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Header>
Right Click here … then remove this text
</soapenv:Header>
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">1011</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>

16)  (a)   Repeat process of adding Outgoing WSS

Place the cursor in between the <soapenv:Header> tags (i.e. “Right Click here … then remove this text”).

Right Click -> Select Outgoing WSS -> Select Apply “Outgoing”.

The previously defined header containing the user name, password, and timeout settings should now be added to the request.

Remove the “Right Click here … then remove this text” comment.

Confirm Outgoing WSS has been applied to the correct position.

(b)   Submit the Request (green arrow top left).

An attachment should be generated.

Click on the Attachments tab (at bottom).

Double click to open the attachment.

Snap9

 

17)   Confirm results are as expected.

Snap10

Step Four – Code the Stored Procedure

1)    Test GET_SEARCH_RESULTS PL/SQL

(a)    Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(b)    Replace:

(i) Username

(ii) Password

(iii) Hostname

(iv) dDocName i.e. UCMFA001069

(c)    For a text version of the PL/SQL click here.

DECLARE
l_user_name VARCHAR2(100) := 'username';
l_password VARCHAR2(100) := 'password';
l_ws_url VARCHAR2(500) := 'https://hostname.fs.us2.oraclecloud.com/idcws/GenericSoapPort?wsdl';
l_ws_action VARCHAR2(500) := 'urn:GenericSoap/GenericSoapOperation';
l_ws_response_clob CLOB;
l_ws_response_clob_clean CLOB;
l_ws_envelope CLOB;
l_http_status VARCHAR2(100);
v_dID VARCHAR2(100);
l_ws_resp_xml XMLTYPE;
l_start_xml PLS_INTEGER;
l_end_xml PLS_INTEGER;
l_resp_len PLS_INTEGER;
l_xml_len PLS_INTEGER;
clob_l_start_xml PLS_INTEGER;
clob_l_resp_len PLS_INTEGER;
clob_l_xml_len PLS_INTEGER;
clean_clob_l_end_xml PLS_INTEGER;
clean_clob_l_resp_len PLS_INTEGER;
clean_clob_l_xml_len PLS_INTEGER;
v_cdata VARCHAR2(100);
v_length INTEGER;
BEGIN
l_ws_envelope :=
'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dDocName &lt;starts> `UCMFA001069`</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>';
apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := l_ws_action;
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'text/xml; charset=UTF-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'POST',
p_body => l_ws_envelope,
p_username => l_user_name,
p_password => l_password);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--Tested on a very small CLOB. Less than 32767. If larger may need to slice.
--dbms_output.put_line(length(l_ws_response_clob));
--Remove header as it is not XML
clob_l_start_xml := INSTR(l_ws_response_clob,'<?xml',1,1);
clob_l_resp_len := LENGTH(l_ws_response_clob);
clob_l_xml_len := clob_l_resp_len - clob_l_start_xml + 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob,clob_l_xml_len,clob_l_start_xml);
--dbms_output.put_line(l_ws_response_clob_clean);
--Remove the tail as it is not XML
clean_clob_l_end_xml := INSTR(l_ws_response_clob_clean,'------=',1,1);
clean_clob_l_resp_len := LENGTH(l_ws_response_clob_clean);
clean_clob_l_xml_len := clean_clob_l_end_xml - 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob_clean,clean_clob_l_xml_len,1);
--dbms_output.put_line(l_ws_response_clob_clean);
--Convert CLOB to XMLTYPE
l_ws_resp_xml := XMLTYPE.createXML(l_ws_response_clob_clean);
select (cdata_section)
into v_cdata
from
xmltable
(
xmlnamespaces
(
'http://schemas.xmlsoap.org/soap/envelope/' as "env",
'http://www.oracle.com/UCM' as "ns2"
),
'//env:Envelope/env:Body/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name="dID"]'
passing l_ws_resp_xml
columns
cdata_section VARCHAR2(100) path 'text()'
) dat;
dbms_output.put_line('dID:' || v_cdata);
END;

 (d)    The Results should show the corresponding dID.

Snap11

2)    Install the relevant alexandria-plsql-utils

(a)    Go to: https://github.com/mortenbra/alexandria-plsql-utils

Snap2

(b)    Click Download ZIP

Snap1

(c)    Run these three sql scripts / packages in this order:

\plsql-utils-v170\setup\types.sql

\plsql-utils-v170\ora\csv_util_pkg.pks

\plsql-utils-v170\ora\csv_util_pkg.pkb

3)    Create table in Apex to insert data into.

For a text version of the SQL click here.

CREATE TABLE TERRITORY_INFO(
Territory_Name VARCHAR(100),
Status_Code VARCHAR(100),
Status VARCHAR(100),
Type VARCHAR(100),
Forecast_Participation_Code VARCHAR(100)
);

4)    Test UCM_GET_FILE STORED PROCEDURE

(a)    Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(b)    Replace:

(i) Column names in the SQL INSERT as needed

(ii) Header name of the first column, i.e. “Territory Name”

(c)    For a text version of the PL/SQL click here.

CREATE OR REPLACE PROCEDURE UCM_GET_FILE
(
p_ws_url VARCHAR2,
p_user_name VARCHAR2,
p_password VARCHAR2,
p_dID VARCHAR2
) IS
l_ws_envelope CLOB;
l_ws_response_clob CLOB;
l_ws_response_clob_clean CLOB;
l_ws_url VARCHAR2(500) := p_ws_url;
l_user_name VARCHAR2(100) := p_user_name;
l_password VARCHAR2(100) := p_password;
l_ws_action VARCHAR2(500) := 'urn:GenericSoap/GenericSoapOperation';
l_ws_resp_xml XMLTYPE;
l_start_xml PLS_INTEGER;
l_end_xml PLS_INTEGER;
l_resp_len PLS_INTEGER;
clob_l_start_xml PLS_INTEGER;
clob_l_resp_len PLS_INTEGER;
clob_l_xml_len PLS_INTEGER;
clean_clob_l_end_xml PLS_INTEGER;
clean_clob_l_resp_len PLS_INTEGER;
clean_clob_l_xml_len PLS_INTEGER;
BEGIN
l_ws_envelope :=
'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">'|| p_dID ||'</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>
';
apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := l_ws_action;
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'text/xml; charset=UTF-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'POST',
p_body => l_ws_envelope,
p_username => l_user_name,
p_password => l_password);
--Note: This was tested with a very small result-set
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--Tested on a very small CLOB. Less than 32767. If larger may need to slice.
--dbms_output.put_line(length(l_ws_response_clob));
--Remove junk header
clob_l_start_xml := INSTR(l_ws_response_clob,'"Territory Name"',1,1);
clob_l_resp_len := LENGTH(l_ws_response_clob);
clob_l_xml_len := clob_l_resp_len - clob_l_start_xml + 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob,clob_l_xml_len,clob_l_start_xml);
--dbms_output.put_line(l_ws_response_clob_clean);
--Remove junk footer
clean_clob_l_end_xml := INSTR(l_ws_response_clob_clean,CHR(13),-3)-2;
clean_clob_l_resp_len := LENGTH(l_ws_response_clob_clean);
clean_clob_l_xml_len := clean_clob_l_end_xml;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob_clean,clean_clob_l_xml_len,1);
-- dbms_output.put_line(l_ws_response_clob_clean);
--Insert into database
DELETE FROM TERRITORY_INFO;
INSERT INTO TERRITORY_INFO (Territory_Name,Status_Code,Status,Type,Forecast_Participation_Code)
select C001,C002,C003,C004,C005 FROM table(csv_util_pkg.clob_to_csv(l_ws_response_clob_clean,',',1));
END;

(d)    To run the stored procedure – Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(e)    For a text version of the PL/SQL click here.

(f)    Replace:

(i) Username

(ii) Password

(iii) Hostname

(iv) dID i.e. 1011

BEGIN
UCM_GET_FILE('https://hostname.fs.us2.oraclecloud.com/idcws/GenericSoapPort?wsdl','username','password','dID');
END;

(g)    Confirm data was loaded successfully

SELECT * FROM TERRITORY_INFO;

Snap3

5) Combine GET_SEARCH_RESULTS and GET_FILE

(a)    Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(b)    Replace:

(i) Username

(ii) Password

(iii) Hostname

(iv) dDocName i.e. UCMFA001069

(c)    Note: The only changes from the original GET_SEARCH_RESULTS code are at the end: the dbms_output call is commented out and UCM_GET_FILE is called with the retrieved dID.

(d)    For a text version of the PL/SQL click here.

(e)    Note: This code should also be converted to a stored procedure with parameters if it is implemented in production.

DECLARE
l_user_name VARCHAR2(100) := 'username';
l_password VARCHAR2(100) := 'password';
l_ws_url VARCHAR2(500) := 'https://hostname.fs.us2.oraclecloud.com/idcws/GenericSoapPort?wsdl';
l_ws_action VARCHAR2(500) := 'urn:GenericSoap/GenericSoapOperation';
l_ws_response_clob CLOB;
l_ws_response_clob_clean CLOB;
l_ws_envelope CLOB;
l_http_status VARCHAR2(100);
v_dID VARCHAR2(100);
l_ws_resp_xml XMLTYPE;
l_start_xml PLS_INTEGER;
l_end_xml PLS_INTEGER;
l_resp_len PLS_INTEGER;
l_xml_len PLS_INTEGER;
clob_l_start_xml PLS_INTEGER;
clob_l_resp_len PLS_INTEGER;
clob_l_xml_len PLS_INTEGER;
clean_clob_l_end_xml PLS_INTEGER;
clean_clob_l_resp_len PLS_INTEGER;
clean_clob_l_xml_len PLS_INTEGER;
v_cdata VARCHAR2(100);
v_length INTEGER;
BEGIN
l_ws_envelope :=
'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dDocName &lt;starts> `UCMFA001069`</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>';
apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := l_ws_action;
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'text/xml; charset=UTF-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'POST',
p_body => l_ws_envelope,
p_username => l_user_name,
p_password => l_password);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--Tested on a very small CLOB. Less than 32767. If larger may need to slice.
--dbms_output.put_line(length(l_ws_response_clob));
--Remove header as it is not XML
clob_l_start_xml := INSTR(l_ws_response_clob,'<?xml',1,1);
clob_l_resp_len := LENGTH(l_ws_response_clob);
clob_l_xml_len := clob_l_resp_len - clob_l_start_xml + 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob,clob_l_xml_len,clob_l_start_xml);
--dbms_output.put_line(l_ws_response_clob_clean);
--Remove the tail as it is not XML
clean_clob_l_end_xml := INSTR(l_ws_response_clob_clean,'------=',1,1);
clean_clob_l_resp_len := LENGTH(l_ws_response_clob_clean);
clean_clob_l_xml_len := clean_clob_l_end_xml - 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob_clean,clean_clob_l_xml_len,1);
--dbms_output.put_line(l_ws_response_clob_clean);
--Convert CLOB to XMLTYPE
l_ws_resp_xml := XMLTYPE.createXML(l_ws_response_clob_clean);
select (cdata_section)
into v_cdata
from
xmltable
(
xmlnamespaces
(
'http://schemas.xmlsoap.org/soap/envelope/' as "env",
'http://www.oracle.com/UCM' as "ns2"
),
'//env:Envelope/env:Body/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name="dID"]'
passing l_ws_resp_xml
columns
cdata_section VARCHAR2(100) path 'text()'
) dat;
--dbms_output.put_line('dID:' || v_cdata);
UCM_GET_FILE(l_ws_url,l_user_name,l_password,v_cdata);
END;

Further Reading

Click here for the Application Express API Reference Guide –  MAKE_REST_REQUEST Function

Click here for the Alexandria-plsql-utils

Click here for related A-Team BICS blogs

Summary

This article described how to integrate Oracle Fusion Applications – WebCenter / Universal Content Management (UCM) with Oracle Business Intelligence Cloud Service (BICS). It covered the functional and technical steps necessary to automate exporting of data from WebCenter / UCM and load it into a BICS database.

In order to implement this solution the WebCenter / UCM idcws/GenericSoapPort?wsdl must be publicly available. At the time of writing this was available in Fusion Applications R10.

The solution did not cover creating the BICS Data Model or Dashboards. Information on this topic can be found on other A-Team blogs published by the same author.

The SQL scripts provided are for demonstration purposes only. They were tested on a small sample data-set. It is anticipated that code adjustments would be needed to accommodate larger Production data-sets.

Oracle Commerce Best Practices for Repository Item Cache Warming


Oracle Commerce customers often ask, “How should we warm our repository item cache for better page serving performance?” This article will address that question as well as some of the decisions that you’ll need to make along the way.

First, let’s talk about what repository item cache warming is.

Generally speaking, warming is when we want to preemptively fetch repository items from the database in anticipation of future use. In effect, we’re loading them into cache so that we needn’t fetch them while we’re servicing a consumer’s page request. We do this so that we can provide better page response time to the consumer, and so that we can mitigate database traffic for what are mostly read-only items. The prime example of this is warming of product data in the Product Catalog repository.

Warming is generally desirable at two times: when starting an instance, and after a deployment from the BCC, when caches need to be invalidated. In this article, we’ll talk about the former, which is often referred to as preloading.

When we warm a cache like this, we defer servicing consumer requests until after the cache has been warmed. Generally speaking, we would warm the cache when the page serving instance is being started, as a side effect of starting the associated repository component.

A well-tuned and warmed repository cache is essential for achieving optimal page load times for your consumers. It contributes to both good page response time as well as server scalability. But the techniques for its use need to be applied wisely. The steps involved are:

  1. Determine whether warming is really necessary
  2. Fix the root cause of latency first
  3. Try to limit warming to your active items
  4. Reduce the size of the items being warmed
  5. Avoid warming items that result in excessive startup delay
  6. Test your cache management strategy under sustained load
  7. Consider other warming techniques

We’ll talk about each of these at length.

Determine whether warming is really necessary

Most of the highly referenced repositories, like the product catalog, warm very naturally. The latency penalty for the first retrieval of a category, product, or SKU is often not materially relevant (typically in the 8-12ms range). So before you preemptively attempt to warm the products into cache, first determine whether doing so is actually necessary.

You can determine this through load testing, validating average page load time during ramp up on a cold cache versus the times achieved after the load test has reached a steady state. Or you can simply perform a trial on a running production system by flushing the repository cache on a single page serving instance and monitoring the inflection in average page load time.

At one large retailer that I have worked with, they were delaying startup by 15 minutes while warming their product cache. When we tested the actual latency incurred by page requests when not warming the cache, we found that there was about 100ms latency on first reference only; the cache naturally warmed in 3-4 seconds for the active portion of the catalog. From this, we determined that the natural warming of the cache was sufficient to their needs. This gave them an additional 15 minutes of site availability each time that they restarted their production commerce cluster.

Fix the root cause of latency first

If you find that warming is desirable because the latency incurred by natural warming is excessive, first try to determine why that is the case. Is the root cause slow database performance? Network latency?

Make sure that you’re correcting the root cause of the latency. If your database isn’t performing well or is incurring latching/locking, then warming a large number of page serving instances concurrently, such as at cluster startup, can overwhelm it. Before implementing cache warming, analyze AWR reports and make sure that the database isn’t the root cause of the page serving latency. The same can be true for network induced latency as well, such as you might see by putting a PCI compliant (layer 4) firewall between the application servers and the database manager.

Don’t mitigate initial latency by warming your instance offline if you can fix the root cause instead.

Sometimes the reason that initial page loading is slow isn’t because of repository item availability, but rather artifacts in the pages themselves. So improving item availability is only masking the problem, removing one contributing element, rather than resolving the problem. This often involves page composition. A simple way to mitigate this issue is to use the droplet cache to build pre-composed elements or fragments of the page. Then, whenever that element is needed in a page, it can be drawn directly out of droplet cache rather than re-rendering it. An example of this can be found in Appendix B of the Oracle Commerce Page Developer’s Guide, starting on page 231 in the version 11.1 documentation set.

When caching elements of a page containing product information, you can use the product id as the key to the cached item. By doing this, you can extend the deployment event listener to invalidate or recreate these fragments if a change to the associated product information is deployed.

Similar techniques may be used for other custom caches.
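
As a generic illustration of such a custom cache (this is not the Commerce Cache droplet or the deployment listener API itself, just a sketch of the shape of the technique), a fragment cache keyed by product id might look like the following, with an invalidate method that a deployment event hook could call when product information changes. All class and method names here are hypothetical.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

// Generic sketch of a custom page-fragment cache keyed by product id.
public class ProductFragmentCache {

    private final ConcurrentMap<String, String> fragmentsByProductId = new ConcurrentHashMap<>();

    // Return the cached fragment, rendering (and caching) it on first use.
    public String getFragment(String productId, Supplier<String> renderer) {
        return fragmentsByProductId.computeIfAbsent(productId, id -> renderer.get());
    }

    // Called from a deployment event hook when product data changes.
    public void invalidate(String productId) {
        fragmentsByProductId.remove(productId);
    }

    public void invalidateAll() {
        fragmentsByProductId.clear();
    }
}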

Limit warming to your active items

At this point, you’ve determined that you do need to warm your caches to attain adequate page response times, and you’ve determined that the latency incurred obtaining the data from the database is not an issue. The next thing to be determined, then, is how much of the data is needed when you start your page serving instance.

Let’s look at another example. At another retailer that I worked with, they had in excess of 1 million products in their catalog. But analysis determined that less than 500 of those products yielded over 90% of their site’s traffic and revenue. Restricting warming to only those active, high yield items improved availability while not incurring a significant delay in start-up.

This is not uncommon. It’s particularly true of “long tail” retailers that have large numbers of products that are available in low quantity and that sell infrequently.

So how do you go about this?

Most customers have an external system where they track product activity. It could be your order management system, where you track sales and returns. Or it could be the use of consumer tracking tags, which keep track of which products consumers are looking at. Or any number of analytics solutions.

However and wherever you track consumer interest in your products, you need to reduce that information into a list of your top performing products. Those products that draw the most traffic to your site. Now create a custom boolean property in the product item of your product catalog repository. Let’s call it ‘highInterest’. Periodically, run a feed into your content administration system (BCC) that sets this property to true for those products, false for all others.

Now when you start the product catalog repository, you can include an RQL query in the customCatalog.xml file to cause these products to be fetched into item cache when the repository instance is started.

<query-items item-descriptor="product" quiet="true">
    highInterest="true"
</query-items>

This will cause those product items to be warmed in the cache without your having to wait while all of the items of lesser interest are loaded.

For more information about loading queried items during repository startup, see the section titled ‘SQL Repository Caching’ in the Oracle Commerce Repository Guide, starting on page 130 for version 11.1.

It’s important that you separate consumer traffic from traffic generated by bots, such as Google or Bing search indexing. Don’t allow bot traffic to influence your analysis of consumer interest. Exclude the traffic tagging from these pages. And, by the way, having that traffic directed to a page serving instance dedicated to that purpose has the added benefit that that traffic doesn’t consume resources that would be better put to use servicing your consumers. You needn’t bother warming that bot instance’s cache either; bots don’t care much about page response times.

Reduce the size of the items being warmed, if appropriate

Another way to decrease the amount of time spent warming repository cache is to decrease the size of the objects being loaded.

If some of the items being warmed have properties containing large values, or multi-valued properties that are infrequently referenced, use lazy loading of those properties. This will reduce the amount of data being returned from the database and unmarshalled into repository cache, making warming quicker and reducing the amount of heap memory consumed by the cache. Pay particular attention to BLOB and CLOB properties.

By using lazy loading, retrieval of the values of these properties will be deferred until a getPropertyValue() call is made against an instance of the associated item. You can even group these properties, so that a getPropertyValue() call for any one of them will retrieve all of the properties in the group at the same time.

For more information about lazy loading item properties, see the section titled ‘Enabling Lazy Loading’ in the Oracle Commerce Repository Guide, starting on page 131 for version 11.1.

Note that lazy loading of those properties will be used any time that the item instance is loaded from the database, not just during repository startup. So the positive effect extends beyond warming alone.

Avoid warming items that result in excessive startup delay

With Commerce repositories, you can create user-defined properties that are underpinned by Java code. This allows you to create custom code to populate and manage the values of those properties when the item is loaded and accessed from cache.

You need to be sensitive to the runtime characteristics of this code. It is possible to write custom code that makes web service calls to external systems. You want to avoid warming items that have user-defined properties that are underpinned by web service calls; these can have a cascading effect where the delays quickly add up.

Imagine the delay that you would incur if you were to have a property of product that is pulling data in real time from an external PIM or OMS. If each call took 100 milliseconds, a very reasonable latency for a call to an OMS, and you are warming 1,000 items, you would incur 100,000ms, or almost two minutes of delay, just loading those property values. Now imagine that you hadn’t restricted yourself to just warming the items of high interest and instead loaded all one million. At 100 milliseconds per call, that would be over 27 hours of delay in starting your instance!

If you do have properties of this nature and yet still need to warm those items into cache, make sure that you lazy load those properties so that they are only loaded upon property retrieval.

Test your cache management strategy under sustained load

At this point, you should have your cache management and warming strategy well in hand. You’ve done your homework, you’ve restricted what’s being loaded to that which is most relevant to your consumers, and you’ve taken care that warming is using heap effectively.

You’re done. Right?

Wrong! Now it’s time to test your caching under sustained load.

Start by appropriately sizing your repository item caches. For each item, you need to determine the approximate number of items that you’ll want to warm into cache. Now set the cache size, as specified by the item-cache-size value in the repository definition file, to a value somewhat larger than the expected number of items. I generally add an additional 5% for items that occur in large numbers, 10% for those that occur in smaller numbers, to allow for unexpected growth.

Ratchet your load up gradually, stepwise, until you reach your peak load. Then sustain that for an extended period, so that you can ensure that your cache management strategy holds under extended periods of stress. This is a good time to test for memory leaks as well, particularly if you have introduced any custom caches.

You should be monitoring page views per second per page serving instance, scalability factors, and server resource utilization to ensure that you are achieving consistent page response time during all phases of the testing cycle. This should include during and immediately after a deployment from the BCC.

Monitor your heap usage using Java Flight Recorder and your cache residency and hit ratios in the Dynamo Administration Component Browser’s repository component display page. Tune the item cache size accordingly until you reach your cache residency objectives and a high hit ratio.

Consider other warming techniques

You may want to consider other warming techniques, such as selective warming when a user logs in or a campaign is launched, rather than warming the world. For example,

  • in a B2B environment it may be beneficial to preemptively warm the organization items for the organizations that a user is associated with when that user logs in.
  • when warming products of high interest, it might be worth determining whether you also want to warm the associated SKUs, prices in commonly used price lists, and inventory items.
  • for complex pages involving products, you might want to cache page fragments associated with those products so that you only render the fragment when the product information changes.

These warming techniques may be considered for many reasons. To improve page response time, to reduce load on back-office systems or the database, to increase server scalability, to provide a consistent as-of for information being drawn from external systems, etc.

In Conclusion

Cache warming can be an effective technique for ensuring optimal page response times for your consumers and for improving server scalability. But like all very powerful tools, it should be used wisely. While you are warming your cache, you are not servicing your consumers. So temper your use of warming with your overall objective, that of servicing consumers and making money for your company.

And remember, as Donald Knuth once stated, “premature optimization is the root of all evil.”

In a follow-on article, I’ll share with you some tips and sample code for monitoring cache usage effectiveness in your running system. I’ll give you techniques for saving snapshots of your cache usage and hit ratios in a file or database table. Then, in a later article, we’ll discuss warming of caches after a deployment from the BCC.

Implementing OAuth 2 with Oracle Access Manager OAuth Services (Part II)


Introduction

This post is part of a series of posts about OAM’s OAuth implementation.

Other posts can be found here:

Part I – explains the proposed architecture and how to enable and configure OAM OAuth Services.

Part II – describes a Business to Business use-case (2-legged flow);

Part III  – deals with the Customer to Business use-case (3-legged flow), when the client code is running in the application server;

Part IV – describes the Customer to Business use-case (3-legged flow), when the client application runs on the browser, embedded as Javascript code;

Part V  – provides the source code and additional information for the use case implementation.

The B2B (or Business to Business) use-case usually represents an application that calls another application or service, without end-user intervention.

In this example, the BusinessClient application (in OAuth spec, called a client) will make a call to a service, BusinessService (in OAuth spec, a Resource Server), and request some Business Information, passing the Access Token.

Since there is no end user intervention, the client is pre-authorized to have access to the resource, making this use case one of the simplest to implement.

In this implementation, a Java Servlet will be used to simulate the BusinessClient, so that the flow can be started at will, and a Java RESTful web service will be used to represent the Resource Server.

Use Case Flow

The following picture shows the flow between the different components.

B2B Use Case - New Page

 

Steps details:

1. BusinessClient requests an OAuth Token from OAuth Server using “Client Credentials” grant type.

The BusinessClient application must be registered with OAM OAuth Server as an OAuth client and it must send its credentials in a Base64 encoded string in the Authorization Header.

The application also declares the scope for which it is requesting the token.

The call would look like this in curl:

curl -i -H "Authorization: Basic YnVzaW5lc3NDbGllbnQ6M1dGTHVkVEZ5Qms=" -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" --request POST https://oxygen.mycompany.com:14101/ms_oauth/oauth2/endpoints/oauthservice/tokens -d 'grant_type=client_credentials&scope=Business.Info'
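
For reference, a minimal sketch of the same token request in Java (JDK classes only) might look like the following. The endpoint, client ID and client secret are the example values used in this series; the class name is hypothetical and is not part of the provided sample code.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the client_credentials token request performed by the client application.
public class TokenRequestSketch {

    public static String requestAccessToken() throws Exception {
        URL tokenEndpoint = new URL(
            "https://oxygen.mycompany.com:14101/ms_oauth/oauth2/endpoints/oauthservice/tokens");
        String clientCredentials = Base64.getEncoder()
            .encodeToString("businessClient:3WFLudTFyBk".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) tokenEndpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Basic " + clientCredentials);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8");

        String body = "grant_type=client_credentials&scope=Business.Info";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response carries expires_in, token_type and access_token.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line);
            }
            return response.toString();
        }
    }
}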

2. The OAuth Server checks the client credentials, the grant type and if it is authorized to request the scope.

Note that in Part I of this post, the BusinessClient was defined to request tokens using “Client Credentials” grant type and authorized to request “Business.Info” scopes only.

If a client tries to request tokens using a grant type or a scope it is not authorized to, it will receive an error.

 

3. The OAuth Server also checks if the user has granted permission to the client to request a token on their behalf.

In the B2B use case there is no user intervention; this client is already allowed to request tokens without representing an end-user identity. Remember that in Part I, in the BusinessClient configuration, “Bypass User Consent” is checked.

 

4. OAuth Server returns the Access Token for the BusinessClient.

The OAuth Server response for an Access Token request would look like this:

{“expires_in”:3600,”token_type”:”Bearer”,”access_token”:”eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCIsImtpZCI6Im9yYWtleSJ9.eyJzdWIiOiJidXNpbmVzc0NsaWVudCIsImlzcyI6Ind3dy5vcmFjbGUuZXhhbXBsZS5jb20iLCJvcmFjbGUub2F1dGguc3ZjX3BfbiI6Ik9BdXRoU2VydmljZVByb2ZpbGUiLCJpYXQiOjE0MzY0NzUwODgwMDAsIm9yYWNsZS5vYXV0aC5wcm4uaWRfdHlwZSI6IkNsaWVudElEIiwiZXhwIjoxNDM2NDc4Njg4MDAwLCJvcmFjbGUub2F1dGgudGtfY29udGV4dCI6InJlc291cmNlX2FjY2Vzc190ayIsInBybiI6ImJ1c2luZXNzQ2xpZW50IiwianRpIjoiNjQwNzEwOTQtOGQyNS00Y2I3LWI5NmMtNDQxZDJmNDI2MjJkIiwib3JhY2xlLm9hdXRoLmNsaWVudF9vcmlnaW5faWQiOiJidXNpbmVzc0NsaWVudCIsIm9yYWNsZS5vYXV0aC5zY29wZSI6IkJ1c2luZXNzLkluZm8iLCJ1c2VyLnRlbmFudC5uYW1lIjoiRGVmYXVsdERvbWFpbiIsIm9yYWNsZS5vYXV0aC5pZF9kX2lkIjoiMTIzNDU2NzgtMTIzNC0xMjM0LTEyMzQtMTIzNDU2Nzg5MDEyIn0.GXcQ3sjFyxcwO85x50zRfZCXf-YEUgFerudOoRVky5a334p3GmlKl147OarZs-h0uGl5okrp0Ivfx7ed2Y7jR0nbTTye6TxiSgCyHrJXjp5_blwRH6hj05KI6RxWdzJJqeC95Kgfh-m2FOqbffQOC1XhHvoWQ8KkG9ub5ZgmhPo”}

This token is not associated with an end user; it contains the OAuth client application information only.

We will see the difference between this kind of token and the token from a 3-legged OAuth flow.

When decoded, the following relevant information can be extracted from the token:

{
"sub": "businessClient",
"iss": "www.oracle.example.com",
"oracle.oauth.svc_p_n": "OAuthServiceProfile",
"iat": 1436475088000,
"oracle.oauth.prn.id_type": "ClientID",
"exp": 1436478688000,
"oracle.oauth.tk_context": "resource_access_tk",
"prn": "businessClient",
"jti": "64071094-8d25-4cb7-b96c-441d2f42622d",
"oracle.oauth.client_origin_id": "businessClient",
"oracle.oauth.scope": "Business.Info",
"user.tenant.name": "DefaultDomain",
"oracle.oauth.id_d_id": "12345678-1234-1234-1234-123456789012"
}
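
For illustration, the claims above can be recovered by base64url-decoding the middle segment of the token, as in the hypothetical sketch below. Note that decoding is not validation; the Resource Server still has to validate the token, as shown in steps 6 and 7.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Small sketch: the claims shown above are simply the base64url-decoded
// middle segment of the JWT. No signature or expiry check is done here.
public class TokenClaims {

    public static String decodePayload(String accessToken) {
        String payloadSegment = accessToken.split("\\.")[1];
        byte[] json = Base64.getUrlDecoder().decode(payloadSegment);
        return new String(json, StandardCharsets.UTF_8);
    }
}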

 

5. Now, with the Access Token, the BusinessClient makes a remote call to the ResourceServer passing the token in the POST parameters.

 

6. The ResourceServer receives the token and makes a call to the OAuth Server to validate the token.

The OAuth Server will verify that the token has not expired, has not been tampered with, and has not been revoked, and that the scope is valid.

To make this call, the Resource Server must also be registered as an OAuth client, the Service Token Validator, explained in Part I of this post.

Note below that this client has no scopes and is not allowed any grant types; it is only used to validate tokens.

The validation can be a simple one, or it can be a token introspection, which retrieves additional attributes.

Validation calls would look like this:

Simple Validation request:

curl -i -H ‘Authorization: Basic YnVzaW5lc3NDbGllbnQ6M1dGTHVkVEZ5Qms=’ –request POST http://oxygen.mycompany.com:14100/ms_oauth/oauth2/endpoints/oauthservice/tokens -d ‘grant_type=oracle-idm%3A%2Foauth%2Fgrant-type%2Fresource-access-token%2Fjwt&oracle_token_action=validate&scope=UserProfile.me&assertion=eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCIsImtpZCI6Im9yYWtleSJ9.eyJzdWIiOiJidXNpbmVzc0NsaWVudCIsImlzcyI6Ind3dy5vcmFjbGUuZXhhbXBsZS5jb20iLCJvcmFjbGUub2F1dGguc3ZjX3BfbiI6Ik9BdXRoU2VydmljZVByb2ZpbGUiLCJpYXQiOjE0MzA4MjczMzAwMDAsIm9yYWNsZS5vYXV0aC5wcm4uaWRfdHlwZSI6IkNsaWVudElEIiwiZXhwIjoxNDMwODMwOTMwMDAwLCJvcmFjbGUub2F1dGgudGtfY29udGV4dCI6InJlc291cmNlX2FjY2Vzc190ayIsInBybiI6ImJ1c2luZXNzQ2xpZW50IiwianRpIjoiNzBiZTRjOTItZWNmNi00YWM0LWEzOGEtZjUyMzUwMzUzNDJjIiwib3JhY2xlLm9hdXRoLmNsaWVudF9vcmlnaW5faWQiOiJidXNpbmVzc0NsaWVudCIsIm9yYWNsZS5vYXV0aC5zY29wZSI6IlVzZXJQcm9maWxlLm1lIiwidXNlci50ZW5hbnQubmFtZSI6IkRlZmF1bHREb21haW4iLCJvcmFjbGUub2F1dGguaWRfZF9pZCI6IjEyMzQ1Njc4LTEyMzQtMTIzNC0xMjM0LTEyMzQ1Njc4OTAxMiJ9.omYuR3KOsg5QbzYSq98aqtBUb37mpEeSKu-8U5-RVfZ16XceFcDRiDZ8upJcVoRyolpl52RFqjBdIDKRJQK6C21ZBkVhHKUvSyrJCD1AVATihq8Xm2mr0zZuRg6iqbalZEXfuIKd8hU4Q863aMwmo-W9j2Ep63ebTeXpHPkkWcI’

Token Introspection request:

curl -i -H ‘Authorization: Basic YnVzaW5lc3NDbGllbnQ6M1dGTHVkVEZ5Qms=’ –request POST http://oxygen.mycompany.com:14100/ms_oauth/oauth2/endpoints/oauthservice/tokens -d ‘grant_type=oracle-idm%3A%2Foauth%2Fgrant-type%2Fresource-access-token%2Fjwt&oracle_token_action=validate&scope=UserProfile.me&oracle_token_attrs_retrieval=iss%20aud%20exp%20prn%20jti%20exp%20iat%20oracle.oauth.scope%20oracle.oauth.client_origin_id%20oracle.oauth.user_origin_id%20oracle.oauth.user_origin_id_type%20oracle.oauth.tk_context%20oracle.oauth.id_d_id%20oracle.oauth.svc_p_n&assertion=eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCIsImtpZCI6Im9yYWtleSJ9.eyJzdWIiOiJidXNpbmVzc0NsaWVudCIsImlzcyI6Ind3dy5vcmFjbGUuZXhhbXBsZS5jb20iLCJvcmFjbGUub2F1dGguc3ZjX3BfbiI6Ik9BdXRoU2VydmljZVByb2ZpbGUiLCJpYXQiOjE0MzA4MjczMzAwMDAsIm9yYWNsZS5vYXV0aC5wcm4uaWRfdHlwZSI6IkNsaWVudElEIiwiZXhwIjoxNDMwODMwOTMwMDAwLCJvcmFjbGUub2F1dGgudGtfY29udGV4dCI6InJlc291cmNlX2FjY2Vzc190ayIsInBybiI6ImJ1c2luZXNzQ2xpZW50IiwianRpIjoiNzBiZTRjOTItZWNmNi00YWM0LWEzOGEtZjUyMzUwMzUzNDJjIiwib3JhY2xlLm9hdXRoLmNsaWVudF9vcmlnaW5faWQiOiJidXNpbmVzc0NsaWVudCIsIm9yYWNsZS5vYXV0aC5zY29wZSI6IlVzZXJQcm9maWxlLm1lIiwidXNlci50ZW5hbnQubmFtZSI6IkRlZmF1bHREb21haW4iLCJvcmFjbGUub2F1dGguaWRfZF9pZCI6IjEyMzQ1Njc4LTEyMzQtMTIzNC0xMjM0LTEyMzQ1Njc4OTAxMiJ9.omYuR3KOsg5QbzYSq98aqtBUb37mpEeSKu-8U5-RVfZ16XceFcDRiDZ8upJcVoRyolpl52RFqjBdIDKRJQK6C21ZBkVhHKUvSyrJCD1AVATihq8Xm2mr0zZuRg6iqbalZEXfuIKd8hU4Q863aMwmo-W9j2Ep63ebTeXpHPkkWcI’

 

7. The OAuth server responds with the results of the token validation, that would look like this:

The simple validation response:

{"successful":true}

The Token introspection response:

{"successful":true,"oracle_token_attrs_retrieval":{"oracle.oauth.tk_context":"resource_access_tk","exp":1430830930000,"iss":"www.oracle.example.com","prn":"businessClient","oracle.oauth.client_origin_id":"businessClient","oracle.oauth.scope":"UserProfile.me","jti":"70be4c92-ecf6-4ac4-a38a-f5235035342c","oracle.oauth.svc_p_n":"OAuthServiceProfile","iat":1430827330000,"oracle.oauth.id_d_id":"12345678-1234-1234-1234-123456789012"}}

Note that the token for a B2B use case represents just the identity of the OAuth client ("prn":"businessClient" and "oracle.oauth.client_origin_id":"businessClient"); it does not contain any reference to an end user, because the BusinessClient is pre-authorized and has not gone through user authentication or user consent.

 

8. The Resource Server is responsible for implementing the authorization logic itself; therefore it is up to the ResourceServer to decide whether the scope for which the token was issued is valid, or to apply any other criteria it considers before granting access to the requesting application.

If all checks are satisfied, the ResourceServer returns the requested data back to the client.

 

B2B Use Case Implementation

To implement the B2B use case, a standard Java Servlet will be used, BusinessClient.java, so that the use case can be triggered at will.

In a real world situation, it could well be any piece of code running on the server in a scheduled timer, without user intervention.

This servlet, will obtain an Access Token from the OAuth Server, using the Client Credentials grant type, and will pass the Access Token to the Resource Server, when making the call to the Business RESTful endpoint.

The Resource Service will be implemented in an annotated plain Java class using the Jersey framework to expose the Business Service as a RESTful web service.
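
A hypothetical sketch of such a JAX-RS resource is shown below; the real BusinessService.java is part of the sample code provided in part V, so the path, class and method names here are assumptions for illustration only.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Sketch of a Jersey/JAX-RS resource exposing business information.
@Path("/business")
public class BusinessServiceSketch {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getBusinessInfo() {
        // Token validation happens in a filter, so the resource itself stays
        // independent of the OAuth implementation.
        return "{\"businessInfo\":\"some business data\"}";
    }
}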

The Resource Server also implements a Servlet Filter, that intercepts all incoming requests to validate the incoming Access Token before giving access to the REST Endpoint.

The RESTful WS implementation, BusinessService.java is completely independent from the OAuth implementation.

In a real-world case, it is good practice to implement Filters, or some other way of intercepting the incoming request, to validate the token and make other decisions, such as checking the scope, before passing the request along to the endpoint.
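
The actual OAuthTokenValidationFilter.java ships with the sample code in part V; the sketch below only illustrates the idea under some assumptions: the token is assumed to arrive as the access_token POST parameter, the validation endpoint and Service Token Validator credentials are the example values from part I and step 6 above, and the scope checked (Business.Info) is an assumption for this use case.

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of a token-validating filter; not the sample code from part V.
public class TokenValidationFilterSketch implements Filter {

    private static final String VALIDATION_ENDPOINT =
        "http://oxygen.mycompany.com:14100/ms_oauth/oauth2/endpoints/oauthservice/tokens";
    private static final String VALIDATOR_CREDENTIALS = Base64.getEncoder()
        .encodeToString("ServiceTokenValidator:A3nqKocV6I0GJB".getBytes(StandardCharsets.UTF_8));

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String accessToken = httpRequest.getParameter("access_token"); // assumed parameter name

        if (accessToken == null || !isValid(accessToken)) {
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        chain.doFilter(request, response); // token accepted, hand off to the REST endpoint
    }

    // Simple validation call, equivalent to the curl request shown in step 6.
    private boolean isValid(String accessToken) throws IOException {
        String body = "grant_type=" + URLEncoder.encode(
                "oracle-idm:/oauth/grant-type/resource-access-token/jwt", "UTF-8")
            + "&oracle_token_action=validate"
            + "&scope=Business.Info"
            + "&assertion=" + URLEncoder.encode(accessToken, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(VALIDATION_ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Basic " + VALIDATOR_CREDENTIALS);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // A {"successful":true} response means the token passed the checks;
        // a real filter would also parse the JSON body, not just the status code.
        return conn.getResponseCode() == 200;
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}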

The complete source code is provided in Part V of this post series.

The relevant classes for this use case are:

  • BusinessClient.java
  • OAuthTokenValidationFilter.java
  • OAuthTokenValidator.java
  • BusinessService.java

In the next post we will cover the Customer to Business Use case, when the client application runs on the application server.
