
Avoiding Memory Leaks with JDK 1.7 SDP Support


Introduction

Although Exalogic is widely used as a consolidation platform due to its data-center-in-a-box concept, most customers look to it as a technical solution for improving application throughput. With a powerful 40 Gb/s InfiniBand fabric, applications running on top of Exalogic can benefit from its high-throughput, low-latency and efficient network layer while exchanging messages across different processes.

One of the main benefits of the InfiniBand technology is that it provides a way for applications to move data directly from the memory of one server to another, bypassing the operating system of both servers, which significantly reduces CPU usage and improves latency. To leverage this capability, however, applications must use a specific protocol. The most well-known protocol for this is SDP (Sockets Direct Protocol), which provides a transport-agnostic way to support stream sockets over RDMA (Remote Direct Memory Access) network fabrics.

The most common case where SDP needs to be leveraged is when applications running on Exalogic need high-performance access to Oracle databases running on Exadata. On the Exadata machine, most DBAs only need to set up a new listener based on SDP instead of TCP, and that's it. On the Exalogic machine, however, things can get a little complicated, especially if you are trying to leverage the SDP support introduced in JDK 1.7. This article discusses the issues that can happen when Java applications access SDP-enabled databases on Exadata.

It is important to state that the issues mentioned in this article do not happen when SDP connections between Exalogic and Exadata are established using the built-in stack available in Exalogic, which is essentially the EECS (Exalogic Elastic Cloud Software). Applications intended to run on top of Exalogic must leverage the EECS, and Oracle supports ISVs interested in doing this through the Exastack program. The SDP support available in JDK 1.7 is meant for ISVs interested in building applications for both Exalogic and other InfiniBand-based machines.

The Problem

As described in this article, to leverage SDP in Java applications you need to provide the JVM with an SDP configuration file. In this file, you define a set of rules describing which endpoints to connect to (outbound) or bind to (inbound). When accessing Oracle databases through SDP-enabled listeners running on the Exadata machine, you need to provide a connect rule just as shown in Listing 1. The rule assumes that the SDP listener is running on port 1522.

connect 192.168.1.102 1522

Listing 1: Connect rule for the SDP configuration file with a specific port.
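For reference, the configuration file is passed to the JVM at startup through system properties. A minimal launch sketch is shown below; the file path and application class are placeholders, and the optional com.sun.sdp.debug property prints which connections actually end up using SDP.

java -Dcom.sun.sdp.conf=/opt/myapp/sdp.conf \
     -Dcom.sun.sdp.debug \
     -Djava.net.preferIPv4Stack=true \
     com.example.MyApplication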

To avoid setting too many rules in the SDP configuration file, some people use wildcards to generalize IP ranges and/or ports. For instance, Listing 2 shows a variation of the rule shown in Listing 1, in which SDP connections can be established to any port of the given IP address.

connect 192.168.1.102 *

Listing 2: Connect rule for the SDP configuration file, using wildcards.

Using wildcards in the rules of the SDP configuration file would be fine if it weren't for the fact that multiple protocols can be active at the same time for a given IP address. For instance, imagine an Exadata machine exposing database listeners through SDP on port 1522, but also publishing FAN events over TCP on port 6200. This mix of protocols can be a big source of problems on the Exalogic machine if the SDP configuration file uses wildcards. At runtime, for every IP address set in the SDP configuration file, the JVM will try to establish socket connections through SDP, even if the target port does not support the SDP protocol. Due to the mismatch of protocols, a continuous series of connection retries will occur, eventually leading to serious memory leak issues, as observed at some customer sites.

The Solution

The easiest way to avoid the memory leak issues is to avoid using wildcards in the SDP configuration file. As much as possible, try to set specific ports for each IP address, just as shown in Listing 1. There are several tools that you can use on both Exalogic and Exadata to verify whether a specific port supports SDP, as described in the "Monitoring SDP Sockets using sdpnetstat on Oracle Linux" section of the referenced documentation.
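As a hedged example (assuming the sdpnetstat utility is installed, as it is on the Exalogic Oracle Linux image), the following command lists the active SDP sockets so you can confirm which ports are really served over SDP:

sdpnetstat -S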


Jumpstarting Development on the Cloud – Part 2 : Packaging and Build



Introduction

The Oracle Developer Cloud Service (DevCS) is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open source standards-based solution to develop, collaborate on, and deploy applications within Oracle Cloud. Some of the core features offered by DevCS include a hosted GIT repository, a Maven repository, Issue Tracking capabilities, a Hudson Continuous Integration (CI) server, Wiki collaboration, code-review, roles and responsibilities separation, and many more. The complete list of features is available here.

While the documentation of all these features is available at http://docs.oracle.com/cloud/latest/devcs_common/index.html, in this blog we'll concentrate on the Maven aspect of DevCS. Specifically, we'll describe how one can populate the DevCS Maven repository with the Weblogic 12c libraries, and then use Maven to compile, package and deploy a Java application to JCS-SX.

Developer Cloud Service lets you create a number of ‘projects’. With each project, an instance of Hudson CI Server, a GIT repository, a Maven Repository and a Wiki server are allotted. Below is a snapshot of the DevCS project home page:

The environment specific data has been redacted from the images in this blog.


[Figure: DevCS-default]

It can be seen above that there are two git repositories attached to the project. DevCS allows a number of GIT repositories, both hosted and external, to be attached to a project. This gives the project management team within your organization the flexibility to logically separate projects as needed. The GIT repositories can be managed via the project's Admin tab.

Populating the Maven Repository

In this section we will go through the steps to populate the DevCS Maven repository with the Weblogic 12.1.2 libraries, although the steps are valid for any version of Weblogic 12c.

Since the Weblogic libraries are Oracle copyrighted, care must be taken to ensure the libraries aren’t publicly accessible. Since a DevCS Maven repository is protected via HTTP Authentication, it’s OK to upload the libraries there.

The entire process can be divided into three main steps :

  a. Setting up Maven on a local machine.
  b. Installing the Weblogic binaries on the local machine.
  c. Pushing the local binaries to the remote Maven repository.

Before starting the upload, it is important to consider the network latency and upload speed to ensure the upload happens as quickly as possible.
Also, if a large number of libraries are to be uploaded, it is worth considering a VNC type session, so that session interruptions do not interfere with the upload.

a) Setting up Maven on the local Machine

  i. Download and install apache-maven-3.x.x (we used 3.0.5) on the local machine (say, a Linux VM).
  ii. Open MVN_HOME/conf/settings.xml and make the following changes:
      1) Under the 'proxies' section, specify the appropriate proxy, as given below:
          <proxies>
            <proxy>
              <active>true</active>
              <protocol>http</protocol>
              <host>PROXY_URL</host>
              <port>80</port>
              <nonProxyHosts>www.anything.com|*.somewhere.com</nonProxyHosts>
            </proxy>    
        </proxies>
        
      2) Under the 'servers' section, specify a remote Maven server called remoteRepository. This will hold the username/password details used when connecting to the 'remoteRepository', i.e. our DevCS Maven repository:
          <servers>
            <server>
              <id>remoteRepository</id>
              <username>USERNAME</username>
              <password>PASSWORD_IN_PLAINTEXT</password>
            </server>
        </servers> 
        
      The DevCS Maven repository is protected, and only users who have access to the project can view the associated repository. The authentication mechanism is HTTP Basic. The username/password above will be used for authentication before the upload (see the note on password encryption at the end of this section).

      3) Under the 'profiles' section, specify the default profile that references 'remoteRepository', as shown below:
         <profiles>
         <profile>
            <id>default</id>
            <repositories>
              <repository>
                <id>remoteRepository</id>
                <name>Remote Weblogic Repository</name>
                 <url>dav:https://developer.us2.oraclecloud.com/……/maven/</url>
               <layout>default</layout>
              </repository>
            </repositories>
          </profile>    
        </profiles>
        

        The ‘url’ above can be found on the DevCS home page, as seen in the above image. Also note the ‘dav:’ in the URL, signifying that the upload will use the WebDAV protocol.

      4) Make sure the default profile is activated:
        <activeProfiles>
          <activeProfile>default</activeProfile>
        </activeProfiles>
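A note on credentials: the password in the 'servers' section above is stored in plain text. If that is a concern, Maven's standard password encryption can be used instead. A hedged sketch (the passwords shown are placeholders):

# generate a master password and place the output in ~/.m2/settings-security.xml
mvn --encrypt-master-password '<master-password>'
# encrypt the DevCS password and use the {...} output as the <password> value in settings.xml
mvn --encrypt-password '<devcs-password>'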
        

b) Installing Weblogic binaries on the local machine

  i. Download the Weblogic 12.1.2 'Zip distribution for Mac OSX, Windows, and Linux' from Oracle Technology Downloads. We used the zip distribution only for demonstration purposes; any other type of installation works as well.
  ii. Unzip the downloaded file to any location (e.g. /home/myhome/mywls).
  iii. Set JAVA_HOME and MW_HOME, e.g.:
      $ export JAVA_HOME=/home/myhome/myjavahome
      $ export MW_HOME=/home/myhome/mywls/wls12120
  iv. Run MW_HOME/configure.sh. This unpacks all the jar files.

c) Pushing the local weblogic binaries to the remote maven repository

  i. On the local VM, ensure MVN_HOME/bin is available in the $PATH variable.
  ii. cd to '/home/myhome/mywls/wls12120/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.1.2'.
  iii. Create a file, pom.xml, with the following content:
    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.mycompany.oracle</groupId>
      <artifactId>my-app</artifactId>
      <!--The version, groupId, artifactId above can be anything. -->
      <version>1</version>
    	<build>
    		<extensions>
    			<extension>
    				<artifactId>wagon-webdav-jackrabbit</artifactId>
    				<groupId>org.apache.maven.wagon</groupId>
    				<version>1.0-beta-7</version>
    			</extension>
    		</extensions>
    	</build>  
    </project>
    

    The above pom.xml declares the Wagon WebDAV provider as a build extension in the mvn context. This library is used for uploading content to the remote repository using the WebDAV protocol.

  iv. Run the command:
     mvn deploy:deploy-file -DpomFile=oracle-maven-sync.12.1.2.pom -Dfile=oracle-maven-sync.12.1.2.jar
     -DrepositoryId=remoteRepository   -X  -Durl=dav:https://developer.us2.oraclecloud.com/……/maven/
     This populates the local Maven repository, and deploys the file oracle-maven-sync.12.1.2.jar, its associated pom, and the MD5 and SHA1 hashes of each to the remote repository specified by the -Durl parameter. The -DrepositoryId is used to pick up the authentication information from settings.xml (which we set above in section a, step 2).
  v. Run the command:
     mvn com.oracle.maven:oracle-maven-sync:push 
    -Doracle-maven-sync.oracleHome=/home/myhome/mywls/wls12120  
    -Doracle-maven-sync.serverId=remoteRepository
     This command invokes the 'Oracle Maven Synchronization Plugin', specifically the coordinate com.oracle.maven:oracle-maven-sync:push, which uploads the local Weblogic jars to the remote repository. The command recursively traverses the various dependencies and uploads them all; depending on the network connection/upload speed, this may take a couple of hours.
     Note that this command uses the pom.xml file defined in step iii above.
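Once the push completes, one way to sanity-check that the artifacts are resolvable from the DevCS repository is to ask Maven to fetch one of them explicitly. This is only a hedged sketch; replace the truncated URL with your project's actual Maven repository URL:

mvn dependency:get -DgroupId=com.oracle.weblogic -DartifactId=weblogic-server-pom \
    -Dversion=12.1.2-0-0 -Dpackaging=pom \
    -DremoteRepositories=remoteRepository::default::https://developer.us2.oraclecloud.com/……/maven/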

Use Maven to compile, package and deploy a java application to JCS-SX

We assume that the source java project is checked into the DevCS GIT repository.
The following pom.xml can be used for compiling and packaging the project. It must also be checked into the GIT repository.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.oracle.samples</groupId>
	<artifactId>simpleJavaApplication</artifactId>
	<version>1.0-SNAPSHOT</version>
	<packaging>war</packaging>
	<name>simpleJavaApplication</name>
	<dependencies>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.11</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>com.oracle.weblogic</groupId>
			<artifactId>weblogic-server-pom</artifactId>
			<version>12.1.2-0-0</version>
			<type>pom</type>
			<scope>provided</scope>
		</dependency>
	</dependencies>
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<version>3.1</version>
				<configuration>
					<source>1.7</source>
					<target>1.7</target>
				</configuration>
			</plugin>
			</plugins>
	</build>
	  <repositories>
		<repository>
		  <id>remoteRepository</id>
		  <url>https://developer.us2.oraclecloud.com/……/maven/</url>
		  <layout>default</layout>
		</repository>
	  </repositories>
	  <pluginRepositories>
		<pluginRepository>
		  <id>remoteRepository</id>
		  <url>https://developer.us2.oraclecloud.com/……/maven/</url>	
		</pluginRepository>
	  </pluginRepositories>	 
</project> 

The things to note are :

  • The com.oracle.weblogic and junit dependencies
  • Maven compiler plugin which lets us specify the -source and -target options while using javac (optional)
  • Maven Repository url that specifies the location of the Weblogic libraries (i.e. the repository we just populated)
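Before wiring this pom into a Hudson job, it can help to confirm that the project builds locally against the newly populated repository; a minimal check, assuming the settings.xml configured earlier is used:

# run the same goals the Hudson build step will run
mvn clean package -s /path/to/settings.xml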

Below are the steps for setting up the Hudson build and deployment options on DevCS:

  1) On the DevCS project home page, click on the 'Build' tab and create a new job.
  2) Name the new job. On the configuration page, set the following:
    a) The JDK version
    b) The Git repository details
    c) Create a Maven 3 build step and specify 'clean package' as the goals. Also ensure the pom.xml file is specified correctly.
    d) In the post-build step, check the 'Archive the artifacts' option, specifying the file(s) to be archived, as shown below. The archived file(s) will be used to deploy the generated artifacts to the associated JCS-SX instance.
    Save the job. Run it once to make sure the build executes successfully.
  3) Go to the 'Deploy' tab and create a new Deployment Configuration.
  The options in the image above are self-explanatory. The application 'TestConfiguration' is to be deployed to the JCS-SX instance 'jcssx02'. The artifacts generated from the specified Hudson job and build number are deployed to the 'jcssx02' instance. One can also schedule an automatic deployment (as opposed to 'On Demand') on every successful Hudson build.

Conclusion

We’ve demonstrated how to use the Developer Cloud Service to automate your Maven based development lifecycle. We saw how the Maven Repository can be populated with weblogic or other custom jars, and how the Hudson build jobs and deployments can be configured.

We’d be remiss if we didn’t mention the newly introduced Oracle Maven Repository . Steve Button explains it nicely in his blog https://blogs.oracle.com/WebLogicServer/entry/weblogic_server_and_the_oracle . Efforts are underway to make the Oracle Maven Repository work with DevCS. More on that when we have more information.

Jumpstarting Development on the Cloud – Part 1 : Standardizing a Workflow


Introduction

This article introduces you to a standardized multi-user workflow for the Oracle Developer Cloud Service. The intent is to showcase how a team starting on the Oracle Developer Cloud Service would collaborate on code to address various requirements and use cases in a typical SDLC model. We present a popular branching and merging model that lets project teams collaborate effectively while maintaining and moving forward on multiple software releases and code lines. This is, of course, not the only workflow possible, but it will hopefully introduce you to a model you can use right away, or provide ideas for formulating your own workflow that works best for your team.

Main Article

Standardizing on a workflow softens the learning curve for new members, as well as providing a common set of rules for actions that are commonly performed in a typical software development scenario, so that all developers take the same approach.

A basic knowledge of git is assumed for this article. If you are unfamiliar with basic branching and merging in git, please take a quick look at the excellent in-depth information at git-scm.com. A typical project intended for production deployment goes through a development phase where all developers collaborate on various features for the application being built, a simultaneous testing phase where code is integrated and tested, and then a release phase where the code that has passed tests is considered production ready and frozen for a release. Usually, towards the end of the first cycle, when integration testing is happening, some (or all) developers will split their time between fixing bugs that came out of the testing and working on new features for the next release. Finally, once a release has been made and deployed to production, there may be cases where a new bug is discovered in production, which will require a hotfix. The released version of the project will of course also require back-ports of upstream features, one-off patches and so on.

If you are an experienced user of git, you will notice that this workflow is similar to git-flow. You are right; it is in fact based on git-flow. For those new to git-flow, it is a standardized workflow for using git. The goal is that everyone uses a common set of rules for the branching and merging actions that are commonly performed in a development cycle like the one above. The next step in standardization is to have a common vocabulary so that people can easily find feature branches, release branches, bug fixes and so on.

With these requirements, we can set up a simple process for source control as follows (a short git command sketch illustrating the naming conventions appears after this list).

  1. There will be one master branch which tracks the current production-quality code.
    a. This branch is called 'master'.
    b. Deployments/release builds for the current release are made from master.
  2. There will be a development branch to which all feature teams contribute code, and where integration testing occurs.
    a. This branch is called 'develop'.
  3. Every feature that is part of the application gets its own branch, created off of the development branch.
    a. Each feature branch is named 'feature/featurename'. For instance, the branch for a feature named "JSONSupport" will be called 'feature/JSONSupport'.
    b. When a feature is complete, the feature branch is merged in (delivered) to the development branch.
  4. Bug fixes also get their own branches, similar to features, and can be delivered to feature branches or directly to the development branch.
    a. The naming convention is 'bugfix/bug#'. For instance, bug #2356 would be worked on in a branch named 'bugfix/2356'.
    b. If a bug is identified as part of a feature, during feature testing (before the feature is delivered to the development branch), a bug fix branch can be created off of the feature branch.
  5. When any branch is merged into the development branch, a merge request should be created. A merge request is a mechanism in DevCS where the merge can be analyzed and code reviewed before the actual merge happens.
  6. The development branch will constantly receive feature updates from feature teams, and will never be frozen.
    a. A specific commit in the development branch which has cleared the QA process will be chosen to become a release candidate. A new branch is created from this commit for the release.
    b. A merge into the development branch, in other words a feature delivery, is a planned activity that is scheduled based on release plans.
  7. The release branch will be named 'release/releasename'. For instance, the jabberwocky release will have a branch named "release/jabberwocky".
    a. This branch should contain only last-minute bug fixes that go in during final testing.
    b. Once all testing is done and the code is ready for the final build, this branch is merged into master and the build is done (see below for details on which branch a build is generated from).
    c. The release branch then becomes historical and is used for long-term maintenance of this release.
  8. For production issues in the current release, hotfix branches are created off of the master branch.
    a. If the production issue is logged against an older release, the hotfix may be created on that older release branch.
  9. For a back-port, a feature branch can be merged into the release branch. Depending on how old the release is, the merge may or may not be trivial, but the feasibility of the back-port can be evaluated using the tools before doing the actual merge.
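As referenced above, the naming conventions translate into ordinary git commands. A minimal sketch (the branch and bug numbers are the examples used in this article; the hotfix name is illustrative):

# feature work
git checkout develop
git checkout -b feature/JSONSupport

# bug-fix work
git checkout develop
git checkout -b bugfix/2356

# cut a release from a QA-approved commit on develop
git checkout -b release/jabberwocky develop

# hotfix for a current production issue
git checkout -b hotfix/5999 master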

That was quite a bit of process, so let's go through it in a visual manner to make it easier to understand what to do under various circumstances. We will assume you are working on a project that you develop for your own organization (as opposed to a product with multiple customers).

[Figure 1]

The figure above is simplified for easy explanation and to show the hierarchy. In reality, branches are just labels or pointers that point at commit objects; they have no physical manifestation other than making it easier to refer to a commit by a name. This also means that operations like branching off and merging two branches just realign or move around the labels. It is therefore common to have multiple labels like 'bug#2359' and 'develop' on the same commit. We nevertheless depict these in the figures as a hierarchical structure, and when two branches point at the same commit, we have colored them the same and given them the same commit hash to identify that they are in fact the same commit.

We have 5 branches here. The master branch is on commit '5645b2' (the short form of the SHA-1 for that commit), and since master is on this commit, we can assume that this commit has been tested and deployed to production. In other words, if we needed a copy of what is on production at this very instant, we would create a branch off of master as the next release branch.

We can also see that the release branch 'Rel2.3' was delivered to master.

At commit 'ab3456', people started diverging and creating new branches from develop. The first was bug#2359, and the second was the branch named 'OAuth Support'. These are examples of bug fix and feature branches. The developers working on these fixes or features would be committing and collaborating by contributing code to these branches. In the case of the bug fix, there were two commits, 5a23e3 and 98b2e5, which the developer(s) committed to this branch. At this point we can assume that the developers have unit tested their code and consider the fix ready to be delivered to the develop branch. They go ahead and merge their bug fix branch into the develop branch, which simply moved the develop branch ahead to that commit. This is an example of a fast-forward merge: the bug fix initially branched off from develop and was later delivered back to develop. No one else had delivered anything to develop in the meantime, which means that the changes made on the bug fix were just incremental to the code in develop. Git calls this a fast-forward merge, and it just involves moving the develop branch, or pointer, to a new commit, 98b2e5 in this case. In reality, the commit 98b2e5 now has two labels or branches pointing at it: the bug fix branch and the develop branch. In the figure above, we have simplified this by representing the commit twice in a hierarchical structure.

Similarly, at a later time, the developers who were working on 'OAuth Support' considered their feature complete at the point where they committed '34598c2' and decided to merge the feature code into the develop branch. This merge created the commit '12c3e5'.

This example was meant simply to demonstrate one of the possible workflows with git. Workflows can be simpler than this, where everyone works directly on the master branch (a useful model when a small team works on a short-lived project like a PoC or sample application, i.e. no real production use and maintenance), or could be considerably more complex, with multiple distributed repositories where multiple remotes are managed and synced with each other.

Working on a Bug-Fix

To begin with, let's assume that John is on master and he wants to start working on bug fix #5678. John switches to the develop branch and then creates a new branch for his bug fix off of the develop branch.

git checkout develop
git checkout -b bugfix#5678

This creates the bugfix#5678 branch and switches to it. John works on the bug fix. In the most simplistic case, John is the only developer working on the bug fix, and once he tests his fix and believes it is ready to be moved up to the develop branch, he can initiate a merge request, which then causes the merge to be code reviewed and delivered to the develop branch. A slightly more realistic version of this scenario is when John gets pulled in to work on a higher-priority issue/feature while he is in the middle of his bug fix.

Context switching and working on multiple tasks

Let's assume that John's bugfix #5678 was not a trivial one and requires him to refactor code in two files. He finishes working on the first one and is about to start on the second when Alice requests his help with a higher-priority task. He now needs to save his work so that he can get back to it later. John commits his work (5675ab) and pushes it to the remote. The repository looks like this now:

[Figure 2]

He could have stashed his work instead of committing it. Stashing is a core git feature that pushes your current changes onto a stack so that you have no uncommitted changes in your working copy and can safely switch branches (see the short sketch after the next figure). Alice wants his help to work on a feature to support JSON payloads in the application. Alice has already created the 'JSONSupport' branch off of develop and checked in some code. She needs John to make a few changes to support JSON in the module he's responsible for. Alice had been working on this in parallel. John sees Alice's branch, which she just pushed to the remote so that she can collaborate with John on it:

[Figure 3]
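As an aside on the stashing alternative mentioned above, a minimal sketch of that flow could look like this:

git stash              # shelve the uncommitted bug-fix changes
git checkout JSONSupport
# ... help Alice, commit and push ...
git checkout bugfix#5678
git stash pop          # restore the shelved changes and continue the bug fix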

John gets all the latest code from the remote and switches to the JSONSupport branch:
git fetch origin
git push origin bugfix#5678
git checkout JSONSupport
<start work>

When John fetches the repository from the server, John is fetching all the changes people have pushed to that server/remote including new branches people have started and pushed to the remote. This brings in the JSONSupport branch that Alice has been working on. John also pushes his bug-fix branch to the server, although this is optional. John can now switch to this branch, which changes the files in his working directory to be in the state that Alice had committed to create af4572 (the tip of Alice’s branch). At this point, John can start working on this branch, add JSON support to the files/modules he’s responsible for and commit back to the branch.

git add <my_file.js>
git commit -m "Added JSON support for the project"
git push origin JSONSupport

With John's commit, the tip of the JSONSupport branch moved forward. When John pushes his changes, the commit that he just created is pushed to the remote, and Alice can see it when she fetches from the remote. The repository now looks like this:

[Figure 4]

Merge Requests, Code reviews and Code Merges

The first thing that comes to mind when integrating code, especially code that resides on multiple branches, is to make sure that we review the changes and verify that everything is okay. Different git hosts have different approaches to this, but the Developer Cloud makes this process simple through merge requests. Merge requests are requests you open so that someone can review the changes that are about to be made, and comment on them, before the code is pulled into the target branch.

Let's see how this works for John and Alice. The JSONSupport branch (i.e. the feature) is complete, and Alice, who was the feature lead on this, goes to the Developer Cloud and opens a new merge request. This is what it looks like:

[Figure 5]

Alice picks the review branch as the JSONSupport branch, the branch she wants to merge (or deliver) from, and the target branch as the develop branch, the branch she wants to merge into or deliver to. Scott, who is one of the lead developers, is automatically picked up as a reviewer since he has been assigned as a default reviewer for any merges into the develop branch. Alice creates the merge request, which automatically sends an email to Scott telling him that he needs to review a merge.

Scott can take a look at the merge request, which shows him the branch that is being requested to be merged into develop, including all the code changes that will be introduced as part of this merge. Scott can review them and provide his comments through the merge request created in Oracle Developer Cloud.

[Figure 6]

Scott can now see the added and changed files online, along with color-coded diffs for the changes that were made. The conversation tab collects comments on the merge: if Scott is not happy with some implementation aspect, he can give that feedback, and Alice can work on it and get back to Scott. Once Scott is happy with the changes, he can approve the merge request. It should be pointed out that even though merge requests provide the fabric to facilitate collaboration and the approval process, the platform does not prevent a rogue merge, i.e. when someone decides to merge to the develop branch on their own and push those changes without creating a merge request in the first place. This is because git naturally allows for this, and provides mechanisms like server hooks to implement access restrictions on who can commit to which branch. The Developer Cloud platform does not hinder or feature-limit the git source control system.

Scott approves the changes, and the merge moves the develop branch forward with the new JSONSupport changes. In this example, there will be no conflicts when Alice merges the code to the develop branch, since the develop branch is still exactly where it was when she began working on her feature. Because there were no changes contributed to the develop branch since Alice started working, when Alice merges her feature branch the develop branch receives only the incremental changes made by Alice, which means it is essentially the same code that Alice has on the tip of her branch. Git realizes this, and this type of merge is called a fast-forward merge, since git essentially just moves a pointer.

[Figure 7]

The develop branch now has the code changes introduced as part of the JSONSupport feature, or rather, the develop branch now points to a commit that has these changes. Notice that the commits colored the same are in fact the same commit; they are duplicated in the figure to make it easier to visualize the branch structure.

Conflict Resolution

One common aspect we did not account for in the merge scenario above was conflict resolution. This is because it needs a closer discussion, and there are multiple approaches that suit different work styles. We discuss one that we feel is most efficient in terms of the control it allows and the ownership it enforces.

When there are conflicts during a merge, the Developer Cloud platform will indicate that there are conflicts and that an automatic merge cannot be done from the web UI. Conflicts are usually caused by code that changed on the develop branch since the contributor started working on the feature. At this point, the reviewer and the contributor can collaborate using the merge request facility to discuss how to resolve the conflicts. However, we feel that a more efficient way to work is for the contributor to resolve the conflicts before creating the merge request. The reviewer can still see how the contributor chose to resolve the conflicts and provide feedback if he does not agree with the changes. This approach has the advantage that it limits the back and forth during the review process and presents the reviewer with a possible conflict resolution strategy. It also moves the responsibility of delivering code that is in sync with the develop branch to the feature developer. The feature developer now needs to make sure that he accounts for the latest changes on the code line rather than being isolated in his own feature code. This also encourages feature developers to frequently pull in the latest code line changes so that the merge process is smoother and more gradual.

Let's look at this process in detail. John has now gone back to his bug fix after helping Alice with the JSONSupport feature, and finishes the bug fix. He is now ready to check in his bug fix. However, the develop branch has moved forward from when he started work on the bug fix, so he now has to merge the changes and perhaps resolve conflicts. There will be conflicts only if there were changes to the same lines of code in the two branches being merged. For our illustration, let's say John has to resolve conflicts before he can merge. Once John creates the merge request and Scott tries to review it, Scott will be alerted by the Developer Cloud Service that an automatic merge cannot be done from the Developer Cloud console, and it will indicate the files that have merge conflicts.

One possible workflow is for Scott to look at the merge conflicts and then suggest a resolution in the merge request that John can implement. Although this is a simple workflow, a better way is for John to resolve the conflicts before he creates the merge request. This way, John can resolve the conflicts, or re-work his code so that it works with the latest on develop, and then run tests to make sure that there are no regressions before he creates a merge request for Scott to approve. To resolve conflicts before creating a merge request, John can pull in the latest code from develop to check whether the code is going to have merge conflicts, and if so resolve them. This is what the repository would look like when John pulls in the develop branch. The merge (automatic or otherwise) creates a new commit that merges the develop branch into the bug-fix branch, moving the bug-fix branch forward (note that the develop branch stays where it is, since the target of the merge was the bug-fix branch).

[Figure 8]
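A hedged sketch of how John could bring the latest develop into his bug-fix branch and resolve the conflicts locally before opening the merge request:

git checkout bugfix#5678
git fetch origin
git merge origin/develop        # merge the latest develop into the bug-fix branch
# fix the conflicting files, then:
git add <conflicting-files>
git commit                      # completes the merge commit
git push origin bugfix#5678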

The tip of the bug-fix branch now contains the merged code: the latest develop branch with John's bug fix and any conflict resolutions. John can deliver his bug fix to develop, which would become another fast-forward merge since he has already resolved the conflicts.
[Figure 9]

Release and Maintenance

Major releases of the code also get their own release branch. This branch is created off of the develop branch. Depending on the release model, this branch is either created once the planned feature branches have been merged into the develop branch (effectively, all features planned for the release are delivered), or, in the case of a timed release plan, once the release window is upon the team (whatever features are delivered will make the release). Once created, this branch is used to fix issues related to the release, or to evolve the release through a release candidate process. Once this branch is ready for release, it is merged with master and the final build is created. After that, this branch can be used for long-term maintenance or feature back-ports. The merge from the release branch to master is tagged with the release number, and the master branch will always have the latest released code line. This is important when we need to fix production issues for a project (as opposed to a product, where hotfixes are best done on the release branches since multiple customers will be on multiple releases).

Let's assume that with the feature branches and bug fixes we have delivered to the develop branch so far, we are ready for a release. Scott prepares for the release by creating the release branch. When created, the release branch tip points to the same commit as the tip of the develop branch. Scott then makes a change that is exclusive to this release in the Rel2.4 branch, like changing the version number.

[Figure 10]
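A minimal sketch of how Scott might create the release branch and commit the version bump (the commands assume the branch name used in the figures):

git checkout develop
git pull origin develop
git checkout -b Rel2.4            # or release/2.4, per the naming convention above
# bump the version number in the project files, then:
git commit -am "Prepare release 2.4"
git push origin Rel2.4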

Once QA has finished testing the build and the team is satisfied with the quality of this branch, it can be released by merging it with the master or the production branch.

[Figure 11]

Where do builds come from?

Builds can obviously be created from any branch, but as a standard process, should production builds come from the master branch or from the release branch?

Production builds coming from the release branch should be identical to production builds coming from master once the release branch has been merged, since the master branch (after the merge from the release branch) should not contain code that is not present on the release branch.

The choice really depends on which branch you identify as "production", since one implication of this decision is which branch you will fix production issues on. For typical projects, i.e. custom-developed software for an enterprise, master makes sense, because there is likely only one production system, and it is easy to say that the production system always runs the code that is on the master branch. Historical or older releases are no longer running in production once the new production deployment has been made.

For product development, however, this does not hold true, because what is in production depends on the customer. You could have customers on a wide spectrum of supported releases. When a customer on release 3.4 reports a production issue, you will need to fix that in a different code branch than for a customer reporting a production issue on release 2.2.

Conclusion

We hope that gives you a solid foundation on setting up a standardized process which you can tailor to your needs and development style, while jumpstarting your development efforts on the Oracle Developer Cloud Service.

Speed up DAF Deployment of a Large Project by Disabling Deployment Data Purging


Introduction

DAF Deployment is a module included in Oracle Commerce which can be used to deploy repository and file-based assets from one server to another, usually from a development environment to a staging or production site.

The purpose of this blog article is to illustrate a tip for DAF Deployment: an option that speeds up the deployment process when it takes a long time to complete even after content has been successfully deployed to the target system.

 

Main Article

Speed up DAF Deployment of a Large Project by Disabling the DeploymentManager’s purgeDeploymentData Option

When using DAF Deployment to deploy a large project (e.g. 200,000 assets or more), you can defer the step that purges deployment data in order to speed up the overall deployment process. After DAF Deployment has successfully deployed assets to a target system, the process transitions to a clean-up stage called the Deployment Completion phase, in which deployment data is purged.  Essentially, all assets of the project have already been deployed at this point, and DAF Deployment just needs to clean up the deployment process data before finalizing the project. The transition to this phase can be observed in the Business Control Center (BCC) when the Deployment Console changes the status message from "Gathering Data" to "Deployment Complete."

One way to speed up DAF Deployment and help it finish deploying a large project is to disable the option to automatically purge deployment data. This implies that the deployment process data must be manually deleted after the process completes.  To disable the purge step, simply browse to the /atg/deployment/DeploymentManager component page through the Asset Management (Publishing) system's Dynamo Administration (dyn/admin), access the purgeDeploymentData option and save the option as false.  This will prevent the deployment process from purging deployment data during the Deployment Completion phase, allowing the process to finish and subsequent deployments in the queue to start.  Alternatively, the /atg/deployment/DeploymentManager configuration properties file can be added or updated with the following line:

 

purgeDeploymentData=false

 

to bypass the purging of deployment data during the Deployment Completion phase.  As usual, the Asset Management system needs to be restarted in order for this updated configuration to become effective.  In case the deployment process still spends a long time in the Deployment Completion phase (the progress bar stays at 64% with a status message of "Deployment Complete" after turning off the option), the enablePurge option should also be switched to false to ensure the process is not purging any deployment data.  Similarly, the enablePurge option can be disabled by adding the following line to the /atg/deployment/DeploymentManager configuration properties. Remember that this change will become effective only after restarting the Asset Management system.

 

enablePurge=false

 

The reason for the additional time spent during the Deployment Completion phase is that the Asset Management system requires some time to execute the following DELETE query to purge deployment data.

 

DELETE FROM das_rep_mark
WHERE rep_marker_id IN (
    SELECT marker_id
    FROM das_deploy_mark
    WHERE deployment_data = 'dd14005'
)

 

By setting the purgeDeploymentData property to false, the deployment process will bypass execution of the above query during the Deployment Completion phase and proceed with finalizing the deployment, exiting out of the process so the current deployment can be removed from the deployment queue and the system can move forward with subsequent deployments.

After completing all deployments, the deployment data should be manually purged. Navigate to the DeploymentManager component page and invoke the purgeDataForCompletedDeployment method to delete the data for all completed deployments. Information related to this tip can also be found in the Oracle Support article with Doc ID 1901892.1.

Performance Study – REST vs SOAP for Mobile Applications


Introduction

To build functional and performant mobile apps, the back-end data services need to be optimized for mobile consumption. RESTful web services using JSON as payload format are widely considered as the best architectural choice for integration between mobile apps and back-end systems. Nevertheless, we have seen many customers of Oracle’s Mobile Application Framework (MAF) consuming SOAP web services in their mobile apps. One reason this is happening might be the nice declarative support in MAF/JDeveloper where you can easily create a SOAP-based data control through a wizard and build your pages using drag and drop. However, this wizard is only intended for really simple SOAP services. It cannot handle all XSD types, nor can it handle more complex, nested payloads. One way to work around these limitations is to process the SOAP payload programmatically in Java, but this is not a trivial task to do. While most of the issues around consuming more complex web services can ultimately be solved, this article explains why you should really abandon SOAP and go for REST-JSON services for one simple reason: performance. The differences in performance are staggering and get worse as the mobile device gets older.

[Figure: SOAPvsRESTGraphHori]

Main Article

This article discusses the results of a test conducted by Oracle's A-Team to compare the performance of REST-JSON, REST-XML and SOAP service calls in MAF. We will first discuss the test set-up, then the test results, and we will end with a discussion of the options you have if you are currently consuming SOAP web services in your MAF application.

Test Set-Up

We have created an ADF Business Components (ADF BC) application that uses the HR schema to return a list of departments, including a nested list of employees for each department. The payload returned consists of 27 departments with 107 nested employee records. Each department row has 4 attributes; each employee row has 11 attributes.
In JSON format this payload is 26.2 KB; in XML format the payload is 77.3 KB (whitespace and carriage returns have been removed).

The more verbose structure of XML produces a payload roughly three times the size of the JSON payload.

Since the payload is relatively small and the test was conducted on a fast WiFi network, this difference in payload size did not have a measurable impact on the test results. However, if your own apps handle bigger payloads in environments with slower networks, this difference can become a significant factor in overall performance.

The ADF BC application exposes its department-employees data in three different ways:

  • Through an SDO SOAP web service using the standard “Publish as SOAP” functionality in ADF BC
  • Through a Jersey servlet using the ADF BC “writeToXML” facility and some custom code in the ADF BC application to publish the data as REST-XML.
  • Through a Jersey servlet using Google Gson library and some custom code in the ADF BC application to publish the data through REST-JSON.

The MAF 2.1 application consists of one page where we can invoke the three services through 3 buttons. After clicking a button, the department list is shown along with a popup showing the time it took to call the REST or SOAP service.

[Figure: TestApp]

We separately timed the conversion to department and employee Java objects, and UI rendering time is not included either. All service calls were made programmatically in Java:

  • The REST-JSON call was made using the RestServiceAdapter API.
  • The SOAP and REST-XML calls were made using the AdfmfJavaUtilities.invokeDataControlMethod API.

As the name suggests, this call requires the presence of a data control, so we first ran the JDeveloper Web Service Data Control Wizard (SOAP/REST).
If you are implicitly invoking the web service in your own apps because you are using drag and drop from your SOAP or REST-XML data control, then the test results are still valid for you: under the covers, MAF uses the same AdfmfJavaUtilities.invokeDataControlMethod API. The only difference is that when you build your UI directly on top of the SOAP or REST-XML data control, MAF does not convert the GenericType response objects to a list of concrete Java objects, as we do in our test. We will study the performance of deserialization into Java objects in more detail in a separate article, but our measurements revealed that this Java object conversion plays a minimal role in the overall performance comparison of SOAP versus REST.

Test Results

We have tested the web service calls on 4 different devices: an iPad 2 (iOS 8.0), an iPhone 4S (iOS 8.1), a Samsung Galaxy S4 phone (Android 4.4.2) and the iOS Simulator. On each device, we called each service 10 times. The first invocation of each service was omitted from the test results because that call generally took 1-2 seconds longer, as the relevant Java classes needed to be loaded. Here is a visual representation of the test results:

[Figure: SOAPvsRESTGraphHori2]

As you can see, the differences in performance are very significant. When we ignore the iOS Simulator which is not really relevant here, the REST-JSON calls are between 9 (Samsung Galaxy S4) and 30 (iPhone 4S) times faster.

REST-JSON is 9 to 30 times faster than SOAP in mobile applications!

If you want to know the exact average response times in milliseconds, here are the same data in table format:

Device              REST-JSON (ms)   REST-XML (ms)   SOAP (ms)   REST-JSON speed-up vs SOAP
iPad 2              187              3301            5109        27x
iPhone 4S           320              4973            9694        30x
Samsung Galaxy S4   158              1019            1465        9x
iOS Simulator       151              585             462         3x

We can conclude that calling SOAP web services on an older device like the iPhone 4S is simply unusable. With waiting times of up to 10 seconds, your end users are likely to discard your app as quickly as they installed it. On newer devices the difference is not that dramatic, but it is still significant. And don't forget that the market share of older devices is still relatively large; according to this site, the market share of the iPhone 4S (11%) is still larger than that of the iPhone 6 (10%). You might wonder where these enormous performance differences come from. There is just one simple reason: XML parsing is slow and CPU intensive. Also note that it is not just the XML payload that needs to be parsed; all the XSDs and the WSDL are parsed as well. For this simple ADF BC SOAP service, that is already 10 XML files.

I am using SOAP Web Services today, what should I do now?

Depending on your situation, there are various options to investigate:

  • Make the SOAP calls faster
  • Execute SOAP calls in a background thread
  • Rewrite your SOAP services to REST-JSON
  • Transform your SOAP Services to REST-JSON before consuming them in MAF

Let’s go over all these options.

The first one probably sounds most attractive to you, as it minimizes the impact on your existing apps. As explained above, slow and repeated XML parsing is the main cause of the poor performance. You might wonder: can't we use a third-party library to increase the speed of XML parsing? The answer is no; there is no way to plug in your own XML parser code when calling AdfmfJavaUtilities.invokeDataControlMethod or when directly using the SOAP data control. The only way to use third-party libraries is to not use the MAF APIs at all to call a SOAP service. But remember that MAF 2.1 uses the JDK 1.8 Compact 2 Profile, which does NOT include the JAX-WS APIs. So, this route would require a lot of investigation and custom code with no guarantee of a faster solution.

A somewhat more viable option, assuming you have to stick with SOAP for whatever reason, is to execute your web service calls in a background thread, allowing the user to continue to work with the app while the data loads in the background. This should be done in combination with on-device data caching using the SQLite database, so you can quickly show the end user the data previously stored in SQLite while you fetch the latest data from the server in the background. Notice that with this approach you put all your SQLite and SOAP-calling logic in a service class that you turn into a bean data control, which means you have to rewire your UI bindings from the SOAP data control to the bean data control.

The only real solution is to embrace industry best practices and move to REST-JSON services. If you have control over the development of your backend SOAP services and you don’t (yet) have other systems relying on the same SOAP API , you might be able to change them into RESTful services using JSON. The amount of effort involved depends on the way these services have been built. For example, if JAX-WS is being used, it is no rocket science to replace this with an implementation based on JAX-RS.

However, in the majority of cases you will be faced with existing backend SOAP services that cannot be changed. In this situation, you should add a layer to your server-side architecture that transforms the enterprise SOAP services to mobile-optimized REST-JSON services. Oracle offers two options to do this:

While building this transformation layer takes additional time and possibly requires learning new skills, it is our experience that building this layer is an investment that quickly pays off. A mobile-optimized REST API is much easier to consume in MAF, resulting in shorter development cycles and, above all, in much more responsive mobile apps, as we have shown in this article. As mentioned in the introduction, you might have started using SOAP services with MAF because of the nice declarative support through a wizard and data control. A-Team offers a free extension to Oracle MAF called the A-Team Mobile Persistence Accelerator (AMPA), which provides the same level of declarative support for REST-JSON services. In addition, AMPA provides you with an on-device data caching layer and data synchronization functionality, which allows you to use the mobile app in offline mode without any Java coding required on your side.

I am using the REST-XML Data Control today, what should I do now?

While not as bad as the SOAP performance, the above test results clearly show that calling RESTful services with an XML payload through a data control is considerably slower than REST-JSON calls through the RestServiceAdapter. The options discussed in the previous section for moving to REST-JSON also apply here. You can also improve performance by using the RestServiceAdapter to programmatically call your REST-XML service instead of using the REST-XML data control. We will study the performance impact of this approach in a separate article.

Single Page Mobile App with MAF and Angular Ionic Framework


Introduction

Most companies will work with a web design agency to create responsive design concepts, which are used to provide an optimal viewing experience across a wide range of devices (e.g. mobile, web, etc.). From these design concepts, the decision on how to develop the application's user interface (UI) can depend on challenges exposed by the design itself. In most cases, open source frameworks such as AngularJS and Ionic have become popular choices for the development platform. Oracle Mobile Application Framework (MAF), which is based on Apache Cordova, is a development platform for mobile. Since MAF can use the same assets (HTML, CSS, JavaScript, etc.) that were developed with these frameworks, it is very easy to develop a hybrid MAF application that can be deployed to both iOS and Android devices. This post gives an example of this concept using both AngularJS and Ionic.

Main Article

The following screenshot shows the completed example MAF application, which displays a list of items. As mentioned previously, the open source framework assets were simply added to the MAF application project and executed. Since MAF mainly adds the capability of deploying to a device, the assets could also be tested in a browser.

[Figure: 10_angularoniodemulator]

Selecting an item in the list will navigate to the details:

[Figure: 11_angularoniodemulator]

The hybrid MAF application consists of a single feature, which has a single HTML page associated with the main view. The HTML page (index.html) is implemented to display the different views, which were developed using AngularJS and Ionic. Before we get into the integration details, the following is the structure of the open source implementation. Note the location of all of the files contained in the structure. All HTML partials, which will be loaded by AngularJS, are stored in the /templates folder. In addition, the /lib folder contains all of the third-party framework static files, and the /js folder contains the custom JavaScript code.

[Figure: 3_angularfolderstructure]

OK, now it is time to detail the integration steps. The first step is to create a default MAF application. Once the defaults have been created in the MAF project structure, open the maf-feature.xml. The location of the maf-feature.xml is highlighted below:

[Figure: 4_angularcreatemafproject]
In the maf-feature.xml view, click the green + sign to create a new feature. In the Create MAF Feature dialog, complete the necessary information and then click OK.

[Figure: 5_angularcreatefeature]

Once the feature has been created, click on the Content tab. The Content tab is where you configure what type of "view" is associated with the feature. A view can be a MAF AMX page, a MAF task flow, local HTML or remote HTML. Click on the green + sign and select Local HTML. Notice the addition of the new (page) reference in the Content section, and that the URL section is now highlighted. Click the green + sign for the URL field to create a new HTML page. In the Create HTML Page dialog, name the file index.html, keep the defaults for the other items and then click OK.

6_angularcreatehtmlpage

 

The new index.html page will be created under the feature folder (if the default directory was chosen upon page creation). The JDeveloper MAF project structure should look similar to this:

7_angularmafprojectstructure

The MAF application is now ready for the integration of the AngularJS and Ionic assets. Copy the full folder structure of the framework web application into the MAF feature folder (e.g. demo.singlepageapp, where the index.html page is located). Once completed, the MAF View Controller project directory structure will look like the following:

8_angularmaffolderstructure

Since this is a single page application, all code used to display the various (partial) views is loaded and controlled from the index.html page. This approach makes designing pages easy, as there is only one file that needs to be updated. Open the index.html file and replace its code with the following:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8"/>
        <meta name="viewport" content="initial-scale=1, maximum-scale=1, user-scalable=no, width=device-width"/>
        <title></title>
        <link href="lib/ionic/css/ionic.css" rel="stylesheet"/>
        <link href="css/style.css" rel="stylesheet"/>
        <script type="text/javascript">
          if (!window.adf)
              window.adf = {
              };
          adf.wwwPath = "../../../../www/";
        </script>
        <script type="text/javascript" src="../../../../www/js/base.js"></script>
        <!-- ionic/angularjs js -->
        <script src="lib/ionic/js/ionic.bundle.js"></script>
        <!-- your app's js -->
        <script src="js/app.js"></script>
        <script src="js/controllers.js"></script>
    </head>
    <body ng-app="starter">
        <ion-nav-view></ion-nav-view>
    </body>
</html>

Notice that the code can directly use paths relative to the feature to include the framework's static files. The partial HTML views are injected into the page through the <ion-nav-view> tags. Also note that including cordova.js is not necessary, since this file is added by the MAF framework when the application is built.

The following MAF documentation shows how to work with local HTML files in detail: http://docs.oracle.com/middleware/mobile200/mobile/develop/appx-maf-javascript.htm#BABHAHAC

This application uses a RESTful HTTP connection. If a MAF application invokes calls to a (REST) server, the URL must be whitelisted. To do this, just add the connection to the MAF project. For example, within the Application Resources section in JDeveloper, right-click the Connections folder and select New Connection->URL. In the Create URL Connection dialog, specify the URL Endpoint (i.e. the URL that will be whitelisted).

Now that all of the static files have been included, the Cordova plugins (used by Ionic Framework) must also be registered in the MAF application. To ensure the plugin files are always accessible, the plugin files have been installed in a new plugins folder in the MAF project:

12_angularjsplugins

Once the files have been copied to the plugins directory, open the maf-application.xml file. In the file view, click the Plugins tab. In the Additional Plugins section, click the green + sign; the Select Plugin Directory dialog opens. In this dialog, navigate to the plugins directory, select the root folder of a plugin (e.g. com.ionic.keyboard) and click Select. This adds the plugin to the Additional Plugins section. For this example, the step was repeated for org.apache.cordova.console and org.apache.cordova.device:

13_angularjsplugins

Now the application is ready to run as a native mobile application on an Android or iOS device.

Download the full example here: MafAngularJSIonic

 

Condensed version of “Classic Rookie Mistakes with Eloqua”


Introduction

The following thread was posted on TopLiners (https://community.oracle.com/thread/3666600). I found it insightful to see what kinds of disasters people can create in the real world. For the sake of brevity, I have condensed and edited it here for you. You don't need to understand how Eloqua works to quickly see what kinds of issues newbies run up against. It makes for good reading if you want to gain some insight into Eloqua and other CX products. (It also suggests where product management could make some improvements…)

And finally, as developers/architects, if we want to integrate with Eloqua we need to consider how another application might "discover" Eloqua objects when it is clear that humans have trouble finding and managing their own content! (read below)

Classic Rookie Mistakes with Eloqua

  • 1. Not following the setup guides. You need to benchmark, develop data dictionaries, naming conventions, etc. We didn’t do this until almost a year in and really lost almost a year’s worth of true productivity.
  • 2. Running a bunch of campaigns out of the “testing area” email group.
  • 3. Placing filler content in an active campaign. Whoops – just sent an email out with “NEED SUBJECT LINE” in the subject line area.
  • 4. I see people skimming over best practices because “We need to get value out of this investment, so we need to start sending lots of emails…” Obviously, I am a proponent for seeing the highest ROI possible, but many times this means slowing down enough to think through the strategy for your naming conventions, email groups, campaign strategy and business processes.
  • 5. Not using segmenting/filtering contacts properly with the right combination of “and,” “or,” and, correct grouping.
  • 6. You’re right… that one totally plays with your mind, and then you get into include/exclude. We’ll usually have an extra person or two check our logic.
  • 7. I heard of a story where emails were being sent from reps that were no longer with the company…
  • 10. How about forgetting to set AM/PM in the wait step so the campaign gets sent out stupid o’clock in the morning!!!
  • 12. Sometimes I actually forget to put Tests in the Testing area
  • 13. Naming conventions for sure! They learned fast to use them or reporting was a disaster!
  • 14. One time I sent out an email to 30K contacts where all the links were broken. Ooops! Triple check, triple check, triple check. I’ve made several and seen more than several… My most recent was last year (yes, IMO, even after six years in the system, you can still make “rookie-like” mistakes). Last year, I modified all of our integration calls to sync on the 18 digit SFDC ID, tested and it appeared all good. However, I just discovered I forgot to remove the “use case sensitive comparison” criteria on our Company/Contact linkage (still an old school deduplication handler set). I have over 10K contacts w/ possibly suspect links to companies. DUH! Lesson? A second set of eyes on a test case matrix is always a good idea! One of the craziest I’ve ever seen? One user – at a VERY large enterprise client – deleted one of their many Hypersites. Not a landing page. Not 100 landing pages. The ENTIRE site! Of course, Eloqua was able to help them get it back online via the backup files, but it was chaotic for a while! Lesson? ALWAYS read – and re-read – the pop-up warnings, especially before deleting!
  • 16. Forgot to schedule a batch email and sent it right away to thousands of contacts!!!
  • 17. Accidentally clicking on “Send Plain Text Only” tickbox under the email setting. Proceed on to test on the HTML version and send out the email and thousands of text email were sent out instead of html…
  • 18. Three major rookie mistakes so far: 1) Being the only person trained on or working with Eloqua at our company. The implementation has been more than a challenge. With ROI pressures to get campaigns out, the biggest rookie mistake I have made is prioritising the wrong things at the wrong time, resulting in spreading myself a little thin. I feel there is still a long road ahead to solve this particular issue, however I'm getting a little better as I work out which process leads on to which, and which processes can be run in parallel. 2) Requesting a password reset for the cloud connector site (not being allowed to set it to the old password), I set it to something else. Unfortunately this was on the same day a webinar invite went out, so all those who wanted to register for the webinar got stuck in the cloud connector waiting for the correct password. I luckily noticed about 45 minutes before the webinar, but I'm sure it affected attendance. 3) The third mistake was not spending enough time on Topliners earlier in the year. It took a while to understand the possibility of getting such varied advice so easily.
  • 19. Always, always back up your landing pages before copying and pasting updated code… especially for live pages.
  • 20. Building emails with 3 dozen text boxes and wondering why we have rendering problems.
  • 21. Echoing what many others have said. There’s a lot of foundational work that needs to be done during day/week/month one (including setting up and understanding naming conventions, defining success metrics with the help of your immediate colleagues, etc.). Don’t skip the groundwork — it’s important!
  • 23. Nearly deleting our entire EMEA nurture microsite. Yikes! Luckily it didn’t happen but the scare did lead to a great feature called recovery checkpoints.
  • 24. can’t stress this enough – as much as naming conventions/folder structures can get in the way, they are a blessing.
  • 25. Believing everything will work as normal after a release. The Lucy to my Charlie Brown…
  • 26. When saving a form as a template in Eloqua you also need to save it as a form in the assets area because these are two different entities within Eloqua. Either I missed this in my class or I completely forgot about it until I had to update templates.
  • 27. Thinking that PDF download tracking will work just by creating the link.
  • 28. Setting a field as mandatory during smart start and wondering why you have a draft error stating a field is mandatory!
  • 29. Adding a filter to an active segment and forgetting to check its dependencies. And then asking the person who you poked fun at for having the same draft error the day before!
  • 31. After nearly eight years, I still get caught with time-based errors. Email send times, wait steps, even auto synchs. I would blow something up 3-4 times a year if I didn’t have some great people watching my back. I also do something wrong with the data priority order on a twice-yearly schedule. Haven’t done that in a while so I’m probably due.
  • 32. Bringing dirty data full of duplicates into Eloqua from my CRM upon the first sync. When your Smart Start consultant tells you to clean your data before it gets synced over during Smart Start, SHE MEANS IT!!!!
  • 33. Not getting Eloqua University licenses for ALL of our Eloqua users!
  • 34. Mapping out a campaign on paper before just placing it into Eloqua and pressing the Start button. It’s amazing what a little paper and pen and also a little writing out of campaign logic can do to save time.
  • 35. Implementing a change in assets (Email, landing page, or form) in last minute and hoping everything went well, trying to make stakeholder happier. Truth is, more trouble, get CHANGE REQUEST SLA established (how much time is needed to implement the change or how much delay in campaign based on the time or request) to not fall for this one.
  • 36. I never make mistakes but a “friend of mine” once enabled a campaign where all of the assets weren’t ready and put in dummy asset to be able to activate the campaign. My “friend” did this one about a year ago on a campaign. Put in some dummy place holders and intended to go back and add the real emails once they were ready. Got busy and didn’t get a chance to disable the campaign before the dummy emails were sent out.
  • 37. Rookie mistake that I make for anything that includes building something on the web: Download before you upload. “Damn, I overwrote that file AGAIN!”
  • 40. Sent out my first Eloqua email just before I went on a vacation. We had a few people check it including myself. No-one caught on that we were linking to our staging site and not our production site. Two lessons learned: 1.) Replace your logo on your staging site with something big and red (assuming your logo isn’t already), and 2.) Don’t bring your phone on vacation
  • 43. My new trick is to put filler content that would not be embarrassing if it got sent out. (where possible, and when I have to use filler)
  • 44. Forgetting to add a new rep to the signature rules is one. I think the biggest was not selecting all rows when sorting a spreadsheet so that the first name field was scrambled against the email address field – uploading it and overwriting all those first names. MEMO TO SELF – UPDATE IF BLANK! Going too fast with wanting everything. Start small, think big!
  • 46. Our rookie mistake involves the “other” system – allowing duplicate contact/lead records in that system and then attempting to connect that to Eloqua, only to realize that oops, duplicate contacts are BAD and we should not have allowed that.
  • 47. We had one person in charge of everything that was running in Eloqua for several years and when she left the company we weren’t sure what we were inheriting. It took us several months to get up to speed!
  • 49. Turning on our first big campaign with a form evaluation time of immediately (should that REALLY be the default system setting? C’mon!) and not seeing Dynamic Content request that was in it.
  • 51. Not reading the manuals!
  • 52. I’m also in my first month of using Eloqua, I’ve read this blog post so I could make sure not to repeat the same mistake… So recently, I sent out an email blast – I tested everything and it was going smoothly! For some odd reason, it felt too good to be true (especially for an Eloquean Newbie)… Turns out, I was wrong! Everything did go smoothly but the Sales team were getting the Out of Office messages instead of Marketing… I checked the settings and realized the “Reply-To” was set to the sales team email address… yikes! CHECK EVERYTHING!
  • 54. I think starting with small, less complex campaigns initially would have been better for increasing our confidence. I like the motto of crawl – walk – run when learning something new. It just helps you get a better foundation and reinforces your understanding of the nuances of a new system. ALSO, don’t be afraid to ask questions and admit you don’t know and answer! Everyone has been very helpful!
  • 55. Making mistakes is good till the time we learn something out of it. When I had initially started using Eloqua, I did not use the “Copy” and “Save as a template” feature for around a month. I used to waste a lot of my time creating similar emails from scratch again and again. Sounds stupid, but I can always use this as a benchmark to measure how far I have come improving and learning the nitty gritties of the system.
  • 56. The thing that still gets me to this day is not correctly setting the Key Field Mapping in form processing steps. It’s almost always my problem when my forms aren’t working properly. And, I almost never realize it until right after Eloqua Support starts to tell me to check the “key field…” You’d think I’d learn. The image is from a live form I’ve had running for over a year.
  • 57. Something that is easily forgotten here is going to “Manage Links” and marking “Track All”. We still have the mindset that Eloqua will automatically track all of the links for us.
  • 58. Making changes to and saving a shared segment, when I should have saved it as a copy before I made the changes! Luckily I remembered the changes made, and changed it back before any damage was done!
  • 59. Ha Ha! We’ve all done it! Rule to thumb, Shared anything make a COPY! and do no harm! Proofread, proofread, proofread… have someone else check it… let it sit again at least overnight and look at it again with fresh eyes. I often send out similar messages for up to four different firms within our organization, so it’s easy to copy an email and forget to change a minor detail (like someone’s name). As simple as it sounds, I’ve created a checklist and will sometimes ask someone else to double-check a test email before sending out my final. The checklists includes things like Are all logos correct? Are all names correct? Are all phone numbers correct? Do all links work? Are all dates correct? Am I using the correct list? Measure twice and cut once…
  • 61. Creating a check-list is a great idea. It helps in reducing agitation and prevents fatal activation accidents. My checklist includes – Double checking subject lines and sender information, testing the email in all browsers, form processing steps, date and time of activation as per the correct time zone etc. After all, prevention is better than cure.
  • 63. The one thing I would add for the emails I do is to make sure “Allow Resend to Past Participants” is turned on because I mainly send transactional emails and it is not uncommon for me to send the same email twice. It’s not the end of the world if I don’t turn this on (there are workarounds), but it does make it easier.
  • 64. We do Test Sends to the whole campaign team, so when we didn’t check “Allow Resend to Past Participants”, they didn’t get the “actual” send going to clients, and panicked that none of the clients got the emails either.
  • 66. Our group based its naming convention off one we got from Eloqua here. In our Marketing group, the people pulling reports are different than the ones building assets and configuring campaigns in Eloqua, so bad naming would mean disaster in reporting — i.e. it wouldn’t be clear to them in Insight which emails, campaigns, forms and LPs were theirs, and they’d miss some data.
  • 67. My advice for a naming convention is: Be consistent! Use the date (at least month and year) – also see #1 Call it what it is! Try to avoid crazy acronyms unless they are commonly used in your organization. (refer to #1)
  • 68. We built a Asset Name Generator right into the Eloqua API and force our global users to use the tool every time they create a new asset. The tool is just a form with some drop-downs that allows them to select all of the components of the name (Year, Quarter, Asset Type, Product Type, Geo, etc.). Once they make their selection, the tool spits out a name in the proper convention. Here’s an example of what it spits out: The format is as follows: year, quarter, geo, country, product type, offer type, language, name (and the last one, (name) is a free text field for them to add whatever they want). We did this to enable us to find assets quickly and easily – especially since we no longer have the tree-structure view like we had in E9. What’s more, we use the convention to easily create reports in insight. It’s been a life saver.
  • 69. Ugh, had a couple scares myself. Big reason I take a hard stance that if assets and decisions are complete at handover, I don’t care about timelines, I’m not taking it. They can stomp, huff and puff all they want, it’s just not worth it.
  • 73. This is a great post. I'm currently taking my first deep dive into Eloqua this week and it's great to quantify the importance of a lot of these things – in particular the naming conventions and the way they will help with reporting. One of our largest problems to overcome is multilingual campaigns covering the EU and ensuring that we maintain efficiency within the team.
  • 74. About 4 years ago, while working at a company that sold through a channel model into the wireless industry, I confused my "ands" and "ors" while building a segment and mistakenly sent a newsletter from one channel partner (Sprint) to ALL of our contacts (Verizon, AT&T, US Cellular, etc). Our channel partners were none too pleased! Let's just say, 4 years later, I still obsessively check my segments 7 or 8 times before hitting activate… and then say a little prayer as the emails start flying. That's one way to learn segmentation
  • 77. And then be sure to use the list! It’s so easy to get rushed and think you have things done. Checklists and standard processes are key.
  • 79. In all of my excitement to activate a campaign I forgot to add an essential wait step and discovered just how fast Eloqua can send contacts through a campaign flow.
  • 81. I’ve seen that happen numerous times. I think the timing of campaigns is very difficult to test; just takes checking and re-checking.
  • 84. We have a new client with a very similar need. We’re inheriting hundreds of current landing pages, no solid naming conventions. Curious about the details of your form – can you share any details around what you used to build this?
  • 85. Organization is a huge deal in Eloqua. Create naming conventions and an organizational system from the get-go!
  • 86. Naming convention saves my bacon. For what it’s worth I use the following: All assets start with the date e.g. 20140927. It doesn’t matter what it is, a form, email, landing page, shared lists, the campaign even images or documents if they are specific to a campaign. They all start with the date. The date is more about a unique number than it is about the date specifically relating to the campaign. Except perhaps for an event, I use the date of the event as the unique number. As an example, our newsletter is called Outside The Cube (You can subscribe here :-). So my naming convention is like this: Campaign: 20140909 Outside the Cube 011 Emails: 20140909 CUSTOMER Outside the Cube 011 20140909 PROSPECT Outside the Cube 011 20140909 REM CUSTOMER Outside the Cube 011 (REM is “reminder” i.e. the email they get if they don’t open the first one – that works a treat) 20140909 REM PROSPECT Outside the Cube 011 It means that I save assets etc in specific folders, but I rarely navigate through folders to find things, I just search for them in the search box, it’s easy and saves time and because of the date I know I have the right asset associated with the right campaign. This is just one option, I’m sure there are many ways. I hope this helps.
  • 87. I also make sure to include event names, subdomain names and such in names. Anything that’s a distinguisher helps.
  • 88. How did you set this great tool up? Via forms? I want to use this for our campaigns as well. I have almost the same naming convention set up as you and this would force us to use every time.
  • 89. You can use the Eloqua 10 Naming Conventions Template located here: http://topliners.eloqua.com/docs/DOC-2861. You can customize it to your team’s needs and go from there!
  • 90. This was done with a simple, 3rd party-hosted web page which was served up through the Cloud Connector panel in Eloqua. Pretty simple solution but effective! :o)
  • 91. On the subject of campaign naming conventions – we built a simple tool to make life as easy as possible for our marketers requesting campaigns from us: Twitter | Campaign Name Generator
  • 92. Not noticing the end date on my first campaign. Ended up with a Completed campaign where a lot of members were just thrown out… Won’t happen again that’s for sure…
  • 93. There are different types of naming conventions based on your company activity or size. From my perspective, I used to work with this kind of naming convention: YEAR-Region-Country-BU-Type-campaign name or form name or landing page name Example : 2014-EMEA-FR-Event-Road to Revenue Tour
  • 94. We have established naming conventions but it’s so difficult to maintain them!
  • 95. Agree. Although our naming convention was agreed upon by all parties, it does tend to have some organic growth over time… sneaks away from you.
  • 96. Naming conventions can be difficult to maintain and enforce, especially at first! One helpful solution is distributing a template to all users where they just fill in their pieces and the template auto-generates the full name for them. Beyond that, you can show users how reporting works and how a mislabeled piece of content, e.g. wrong geo or product type, won’t be pulled into their reports and will actually change others, so for the good of everyone involved it’s worth taking the extra time to double check a name.
  • 97. making sure you roll out changes to small pieces of your database, even if you’ve been doing this for years. Testing is a great start but a lot of the times the tests are only covering ideal and/or known cases. Once a solution is implemented you’ll see unexpected results. It’s much easier to fix 500 or 1,000 contacts that ran through a poorly thought out update rule than 1.5 million contacts!
  • 99. A while back I was hitting my head in the wall to understand why a contact is being created for form submissions even though the processing step of update contact was set up in a way that some particular submissions should not have been updated as contacts. And then I started testing things out to find that if you add the ‘Add to shared List’ processing step in the form, every contact will be created as contacts irrespective of the conditions specified.
  • 100. Forms can be quirky and counter-intuitive until you run through a real-world exercise like this. Can’t count how many times I’ve run into similar issues on form processing steps and now I just know to check a box or not add specific steps, etc. just from experience, nothing you would learn in a class!
  • 102. Headers with spelling mistakes – triple check!
  • 103. I have a “friend” who has done that too. It’s particularly easy to do this on webinar campaigns, when you need to wait for the webinar recording to be available to finalize the follow-up email. The next time my “friend” did a webinar, “she” set the wait step before the non-final email to 12 months and set “herself” an Outlook task to finalize the email the day after the webinar. It worked like a charm… after the email was finalized, we deactivated the campaign, re-set the wait step, and re-activated it.

Harvesting Information from IT Systems – Part 1


 

Introduction

Back in 2012, while I was researching how to make a Fusion Applications environment portable, the decisive moment came when I confirmed that I could copy an entire FA environment from one server to another and run it there, as long as I added all hostnames present in the FA configuration data to the /etc/hosts file of the operating system. By tweaking the hostname-to-IP-address mappings in the hosts file so that requests over the network would be routed to the new machines instead of the old ones, I was able to keep the duplicate FA environment almost intact, creating a perfect clone while preserving the ability to patch and upgrade. This technique later led to the creation of the Fusion Applications Cloning Tool, which the A-Team developed in 2013 under my lead.

While defining the architecture for that tool, it became clear that if we were to add all hostnames present in the FA configuration to the /etc/hosts file, we would first need to know what those hostnames were. While in a perfect world we could simply look at the server name using ping or the hostname command, this was actually much more complex since many customers have more than one hostname pointing at a single server. For example, one of our customers had two different hostnames – one local and one in DNS – for every host in their data center; they also had Virtual IPs configured on the server which use different hostnames, plus remote hostnames defined locally in the /etc/hosts file. An inspection of their FA configuration revealed that all of these hostnames were being used to point at the same server; some configuration files even used the IP address itself to reference a particular server in the topology.

So before we would be able to start the clone of the Fusion Applications environment on another host, we knew we would have to look up hostname references in Fusion Applications configuration data, which includes files, database tables, credentials stores, etc. With no tool available to do that for us at the time, we defined a procedure for finding not just hostnames but also other details such as usernames, database connections, identity management information, topology, etc. and documented it as the Discovery process in the Fusion Applications Cloning and Content Movement Administrator’s Guide.

We are currently working on a tool that automates that process and the next sections attempt to share some of the thought process that went into the Discovery process and its automated version – the Discover tool. I hope it will be of interest to anyone in need of harvesting information from IT systems like we do.

 

Know Before You Go

The need to obtain information from IT systems is nothing new. Regular system maintenance including patching, upgrading and making configuration changes often requires knowing current information such as database connection details, HTTP URLs, file locations and, of course, passwords. IT teams have traditionally relied on spreadsheets, wiki pages and cookbooks to keep track of and share that kind of information, but aside from the questionable security and reliability of these methods, how can one guarantee that the information there is current? Most of the time they simply can’t.

Going back to installation response files and other files generated at installation time sometimes works but can be misleading since they will most likely not include manual configuration changes made to the environment after the installation.

Looking at current architectural trends, modern systems rely more and more on service oriented architectures, with information coming from other, often unrelated systems using HTTP-based APIs. The decentralized nature of these architectures makes it even more difficult to maintain a centralized, up to date and reliable repository of configuration information and makes traditional methods either too costly or quickly obsolete.

So, in order to be sure about the information from an IT system, the ideal approach is to inspect that system as needed and gather information that is current and reliable. There are many different ways to go about doing that, but we have found that the following five points take care of all our information harvesting needs:

 

  • Knowing the what, where and how of the information you need
  • Gathering the information
  • Analyzing it
  • Verifying it
  • Summarizing and presenting it

Harvesting

 

In this part 1 we will be discussing the first one: Knowing the what, where and how of the information you need.

Part 2 will discuss the remaining points: Gathering, Analyzing, Verifying and Presenting the information.

 

The What, Where and How of IT Information

What information?

In our example above, we know we need all hostnames; however, hostnames can appear in various shapes (URLs, database connect strings, property values, etc.), so when looking for hostnames we are actually looking for all possible hostname shapes.

This may be true for other types of information as well, such as usernames, port numbers, file names, etc. So before you go on your quest to find the information, make sure you know all the shapes and forms in which it may appear and that you have included all of them in your list. With this in mind, our search for hostnames now includes the shapes listed below; a small extraction sketch follows the list:

 

  • Hostnames (e.g. fusionapps.mycompany.com) – found, for example, in properties files
  • HTTP URLs (e.g. http://fusionapps.mycompany.com:10614) – found in Web Service connections
  • JDBC URLs (e.g. jdbc:oracle:thin:@fusiondb.mycompany.com:1521/fusiondb) – found in database connections
  • LDAP URLs (e.g. ldap://idm.mycompany.com:3060) – found in Identity Store connections
  • Database connect strings (e.g. (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=fusiondb.mycompany.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=fusiondb)))) – found in database connections
  • Hostname:Port pairs (e.g. fusionapps.mycompany.com:10613) – found in Oracle HTTP Server configuration files
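
As an illustration of how these shapes can be processed programmatically, here is a minimal Java sketch (not part of the Discover tool) that pulls hostname candidates out of most of the shapes above with a single regular expression; the pattern and the sample strings are assumptions for demonstration purposes only.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostnameExtractor {

    // Matches a hostname that follows a URL scheme, an "@", a "HOST=" keyword,
    // or that starts a host:port pair at the beginning of a line.
    // Illustrative only; real-world sources usually need a pattern per shape.
    private static final Pattern HOSTNAME = Pattern.compile(
            "(?:(?:https?|ldap|t3s?)://|@|HOST=|^)([a-zA-Z][\\w.-]+\\.[a-zA-Z]{2,})",
            Pattern.MULTILINE);

    public static List<String> extract(String text) {
        List<String> hosts = new ArrayList<>();
        Matcher m = HOSTNAME.matcher(text);
        while (m.find()) {
            hosts.add(m.group(1));
        }
        return hosts;
    }

    public static void main(String[] args) {
        // Sample strings borrowed from the list above
        String[] samples = {
            "http://fusionapps.mycompany.com:10614",
            "jdbc:oracle:thin:@fusiondb.mycompany.com:1521/fusiondb",
            "ldap://idm.mycompany.com:3060",
            "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=fusiondb.mycompany.com)(PORT=1521)))",
            "fusionapps.mycompany.com:10613"
        };
        for (String s : samples) {
            System.out.println(s + " -> " + extract(s));
        }
    }
}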

 

Where to find it?

In a perfect world, all configuration information for a given system would be managed through a single web-based console, but we all know the IT world is far from perfect. In reality, modern systems span multiple products, often built on different technologies, with each system maintaining its own configuration, including database connections, identity system connections, API endpoint connections, etc.

In our hostname example, in order to find all hostnames (and other shapes) used in Fusion Applications, we first have to list all the possible sources that can contain hostname references. They include, among others:

 

  • Database connections
  • Identity management connections – e.g. SSO provider, identity store (LDAP), policy store/credential store (LDAP)
  • HTTP connections – e.g. web service URLs, map URLs, generic HTTP URLs
  • Frontend HTTP endpoint configuration – the hostnames defined at the HTTP server level for incoming requests
  • Search crawler connections – external connections used to retrieve search indexing data

 

The list above provides an idea of where to look, but in order to come up with the detailed list of all the specific places that can contain hostname references, a combination of the methods below is needed at least once when compiling the list of sources:

 

  • Product documentation detailing configuration points: this is often available for parts of the system but a comprehensive list is rarely available. Even if such is provided by the software vendor, configuration changes made by the customer (e.g. extensions, integrations, customizations) will often add more configuration points and, consequently, hostname references.
  • Information from domain experts: even if documentation is not available from the software vendor, engineers often create their own lists and publish them in blogs or whitepapers. While unofficial, this source of information is valuable as a way to validate the list of sources.
  • Scanning the environment: before inspecting the IT system, it may be a good idea to perform system scans to obtain a list of all places where a piece of information is found. I’ll provide details about this technique in another blog. This activity does not have to be performed every time you want to perform discovery, but should be performed every time there is a significant change in the environment being discovered, such as an upgrade or an extension.

 

How to get to it?

Now that we know all the information we need (in its various shapes and forms) and where to get it from, we need to find a way to get to it. Once again, having a web-based console to get to the information really helps when performing manual discovery. A tool like Oracle Enterprise Manager Cloud Control can aggregate information from multiple systems/consoles and make this process a lot easier.

If the information is not readily available in a central location, one can always obtain each piece manually by:

  • Invoking command line tools e.g. SQL*Plus
  • Getting it from a web-based console or web page
  • Inspecting system files

 

However, the information we are looking for is so vast and detailed that 1. doing it manually would take far too much time and 2. going directly to the source (APIs, command-line tools, LDAP directories and sometimes all the way down to database tables and configuration files) guarantees we will not miss anything. Since the information we are looking for lives in a variety of sources, we will need a variety of access methods, e.g.:

 

  • SQL / JDBC – used to access database resources, e.g. transactional data and some configuration data
  • LDAP – used to access LDAP directories for identity (user and group) information
  • HTTP – used to access web-based APIs (web services, REST, etc.)
  • JMX / T3 – used to access MBeans (for example, to access WebLogic server configuration)
  • File system – used to access files
  • Process invocation – used to capture output from command line tools

Most programming languages including Java have APIs for all of these which makes it possible to automate them.
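For example, here is a minimal sketch of the LDAP access method using the standard JNDI API; the host, credentials, search base and filter are assumptions for illustration only.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapHarvester {
    public static void main(String[] args) throws Exception {
        // Connection details are assumptions for illustration only
        String password = System.getenv("LDAP_PASSWORD"); // assumption: provided via the environment
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://idm.mycompany.com:3060");
        env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");
        env.put(Context.SECURITY_CREDENTIALS, password == null ? "" : password);

        InitialDirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "uid", "mail" });

            // List user entries under an assumed base DN
            NamingEnumeration<SearchResult> results =
                    ctx.search("cn=Users,dc=mycompany,dc=com", "(uid=*)", controls);
            while (results.hasMore()) {
                System.out.println(results.next().getAttributes());
            }
        } finally {
            ctx.close();
        }
    }
}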

Once the information has been obtained, it may have to go through some basic processing to extract the important elements from it – as in our example, extracting the hostname from an HTTP URL or from a configuration (properties) file. When performing discovery manually, this naturally becomes part of the process; when automating, however, it must be invoked as a separate task. Some of the ways this can be done are listed below; a small XPath example follows the list:

 

  • Any text – regular expressions: RegEx can be used to extract information from literally any text; however, for structured text there may be other, optimized ways to do it (see the examples below)
  • XML – XPath: XML is widely used as a format for configuration files, and XPath is certainly the best way to obtain specific information from XML data
  • JSON – JavaScript / JSONPath: JSON data can be processed directly in JavaScript or with alternatives such as JSONPath
  • CSV – ODBC, etc.: comma-separated values is a widely used format for storing tabular data and is compatible with popular tools like Excel
  • Properties files – regular expressions / Java Properties class: properties files are widely used for storing system configuration
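
To illustrate the XPath option, here is a minimal Java sketch using the standard javax.xml APIs; the XML fragment and the listen-address element name are hypothetical and only stand in for a real configuration file.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathHostnames {
    public static void main(String[] args) throws Exception {
        // Hypothetical XML fragment; a real source would be a configuration file on disk
        String xml =
            "<domain>"
          + "  <server><name>server1</name><listen-address>fusionapps.mycompany.com</listen-address></server>"
          + "  <server><name>server2</name><listen-address>fusionapps2.mycompany.com</listen-address></server>"
          + "</domain>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // Pull every listen-address value out of the document
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList nodes = (NodeList) xpath.evaluate(
                "//listen-address/text()", doc, XPathConstants.NODESET);

        for (int i = 0; i < nodes.getLength(); i++) {
            System.out.println(nodes.item(i).getNodeValue());
        }
    }
}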

 

 

Make it repeatable

Once you define the access method and processing method for each source, it is very important to ensure that the access and processing of the information can be encapsulated into a repeatable step. We wouldn't want all the thought that went into getting that information once to go to waste. This can be done through documentation and/or the creation of code units that can be run independently, normally through scripts.

In the Discover tool, we call these units “operations” and we implement them as Java code. But an operation is, in reality, an abstract concept that defines a contract (inputs and outputs, plus the action it performs) and can be implemented in a variety of ways. We will discuss more about how we are coding operations and their features in another blog post. For now, all you need to know is that they are units of documentation or code that allow you to obtain a specific set of information from a given source.

 

Definition of an operation:

Snap1

Here are a couple of examples of operations:

 

Operation 1: Get a list of all JDBC URLs used by data sources in a WebLogic Domain
Inputs: WebLogic Console URL, username and password for a given domain
Outputs: list of JDBC URLs
Procedure:
  • Go to WebLogic Console using the given URL, username and password
  • Navigate to the data sources page and click on one of the data sources
  • Click on the Connection Pool tab
  • Write down the JDBC URL used
  • Repeat for each data source

 

This is a manual operation, note that the procedure describes the steps a person must go through to perform it.

 

Operation 2: Get a list of all database links in a given database
Inputs: Hostname, port, service name, dba username and password for a given database
Outputs: list of database links
Procedure: getDbLinks.sql script 

This is an automated operation, which uses a SQL script to obtain the information needed.

Note that the contract defined in both (inputs, outputs, what it does) has the same format, even though operation 1 is manual and operation 2 is automated. This is a key aspect, since 1. the process of harvesting information may mix automated and manual steps and 2. it facilitates process design and the transition from a manual process to an automated one, so that complex steps can initially be performed manually and be automated over time.
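
The Discover tool's actual operation classes are not shown here; as a rough sketch only, the Java code below expresses the same contract (named inputs in, values out) and implements Operation 2 against Oracle's dba_db_links dictionary view, assuming an Oracle JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Contract shared by manual and automated operations: named inputs in, values out. */
interface Operation {
    List<String> execute(Map<String, String> inputs) throws Exception;
}

/** Operation 2 as an automated unit: list all database links in a given database. */
class GetDbLinksOperation implements Operation {
    @Override
    public List<String> execute(Map<String, String> inputs) throws Exception {
        String url = "jdbc:oracle:thin:@" + inputs.get("hostname") + ":"
                + inputs.get("port") + "/" + inputs.get("serviceName");
        List<String> links = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(
                     url, inputs.get("username"), inputs.get("password"));
             Statement stmt = conn.createStatement();
             // One possible query a script like getDbLinks.sql might run
             ResultSet rs = stmt.executeQuery("SELECT db_link, host FROM dba_db_links")) {
            while (rs.next()) {
                links.add(rs.getString("db_link") + " -> " + rs.getString("host"));
            }
        }
        return links;
    }
}

With a shape like this, a scheduler or a higher-level discovery process can run manual and automated operations interchangeably through the same contract, as discussed above.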

 

Stay tuned for part 2, where we will discuss how to collect the information, analyze it, verify it and present it so that it can be used by other processes.

 

 


Should Remote Satellite Server be used for Edge Caching?


 

Should Remote Satellite Server be used for Edge Caching?

Lately, I have come across a few clients wanting to use WebCenter Sites Remote Satellite Server (rSS) as an edge cache – to cache content closer to the end client. A typical pattern is that the client wants to have WebCenter Sites delivery servers in a data center in the US and serve the Asia-Pacific traffic using Remote Satellite Servers in Asia-Pacific. The client wants to know if this is a good practice.

 

Well, the short answer is no, this is not a good architecture design, and it's against best practices. Let's look at the reasons why.

 

WebCenter Sites, in addition to being a Web Experience & Content Management system, is also a caching engine. It derives its performance from a number of different layers of caching, including Page Caching[1]. Page Caching itself is done at two different layers – on the WebCenter Sites servers and then on the Remote Satellite Servers.

 

To get the best out of Page Caching, the page is broken into a few modular pagelets – like header, footer, navigation, and center body. Each pagelet is cached separately.[2],[3],[4] Usually, there is an uncached wrapper. The uncached wrapper contains logic for reading cookies, authentication, authorization, setting the user profile, and other logic that cannot be cached.[5]

 

The cached pagelets can reside both in Remote Satellite Server and in the WebCenter Sites server. When a site visitor sends an HTTP request to view a page, the request first goes to Remote Satellite Server. Remote Satellite Server checks whether the pagelets needed to construct the page are in its cache. If a pagelet is not in its cache, Remote Satellite Server sends a request to the WebCenter Sites server to get the pagelet. If there is any uncached pagelet, the code to generate HTML for that pagelet is executed on the WebCenter Sites server. Thus, to serve a single request to view a page, rSS may need to send multiple requests to the WebCenter Sites server, making the communication between rSS and the WebCenter Sites server very chatty. If rSS is physically very far from WebCenter Sites and the bandwidth between them is not very high, this adversely affects the page load time.

 

The only way to get reasonable performance if rSS and the WebCenter Sites servers are very far apart – say in different continents – is to cache the whole page. This requires that there is no uncached wrapper and that the number of pagelets is very small – say one or two. Since the whole page is cached, the request can be satisfied by rSS and no request needs to be sent from rSS to the WebCenter Sites server.

 

Although this may look like a reasonable approach, in practice it will not work for most of the clients. Caching the whole page causes the page to have a very high number of dependencies. A Navigation pagelet itself may have a few tens of asset dependencies. A list of “Top Ten Articles” may have another ten asset dependencies. A home page may have several tens or hundreds of asset dependencies. Whenever we publish an asset, WebCenter Sites flushes all the dependent cached pages.

 

Additionally, if there is any "unknown" dependency[6], it may cause an even greater number of pages to be flushed from cache. Unknown dependencies are introduced when there is no way to predict the set of assets needed to compose a web page – for example, pages that require search or that use WebCenter Sites Search State tags. If there is an unknown dependency on a page, the page is flushed every time an asset of that asset type is published.

 

This means any publish is likely to flush a large number of pages from the Remote Satellite Servers. So, in practice, using Remote Satellite Server as a remote cache and caching full pages requires that the pages:

 

* should not have any logic that cannot be cached – like authentication, use of cookies, setting user profile data, integration with back-end services, etc.

* should not use an uncached wrapper, and should cache the full page

* should limit the number of pagelets to a very small number – say one or two

* should not use any tags that introduce unknown dependencies

* should not use any functionality like WebCenter Sites Engage or Community Server

* publishing should be limited to once or twice a day, and only during off-peak hours for the Remote Satellite Servers

 

 

With so many limitations, I do not see how one can have a meaningful architecture with Remote Satellite Servers in different remote regions. An alternative architecture is to have WebCenter Sites delivery clusters in the different regions and publish to them from central WebCenter Sites Management Servers.

 

 

[1] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/dev_codingelements.htm#WBCSD1997

[2] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/dev_caching_chapter.htm#WBCSD6074

[3] http://www.ateam-oracle.com/double-buffered-caching-with-incache-in-webcenter-sites-11gr1/

[4] http://www.ateam-oracle.com/sites-page-caching-remote-satelliteserver-round-trips-and-render-tags/

[5] http://www.ateam-oracle.com/how-to-prevent-overcaching-with-wc-sites/

[6] http://www.ateam-oracle.com/importance-of-wrapper-in-webcenter-sites11gr1/

Using File Based Loader for Fusion Product Hub


Using File Based Loader for Fusion Product Hub

Introduction

File Based Loaders (FBL) offer a broad variety of options for importing batch data, either manually through user interaction or automatically via locally scheduled processes using existing APIs and Web Services. This article highlights the Fusion Product Hub specific capabilities for importing item data in a batch by using FBL. Another, more generic article about File Based Loaders can be found here.

The current FBL solution for Fusion Product Hub covers the following customer scenarios:

  • Manual (UI based) upload and import of item data to Fusion Product Cloud or Fusion Product Hub on-premise instances
  • Automated loader and import processes for item data to Fusion Product Cloud or Fusion Product Hub on-premise instances

This article describes a technical implementation that can be used with Fusion Product Hub cloud and on-premise installations in the same way. It also covers some basic and necessary functional setup aspects. Please note that item import via FBL doesn't replace the other Product Hub solutions for data import, such as item batch imports via Fusion Desktop Integration. It should rather be seen as an additional offering for item imports.

Main Article

File Based Loader for Fusion Product Hub uses standard technologies and components from the Fusion Apps technology stack on the backend – both in the cloud and on-premise. It's not necessary to install extra components or products in addition to Fusion Product Hub.

The figure below visualizes the available product data load options for manual (user interaction through the portal or Desktop Integration) and automatic (Web Services, APIs) scenarios. This blog explains how to use the various features.

Overview

Customers only need a thin technology footprint in their client environment to use the capabilities of FBL for Fusion Product Hub. The following runtime components and tools are sufficient to create a connection to FBL for uploading item data and triggering scheduling jobs:

  • Java Development Kit (JDK) 1.8.x
  • JDeveloper 12c
  • WebCenter Content Document Transfer Utility for Oracle Fusion Applications (free of charge utility available on Oracle Technology Network)

Especially for cloud customers, this footprint eliminates the need to install additional server components in their data center, while Fusion Apps on-premise or Fusion Middleware customers can leverage their existing infrastructure to run the FBL-related client programs and tools.

FBL can be seen as an additional integration point, with an option to provide item loader data in Fusion Content Server (UCM) for further import processing. These tasks can be done as manual interactions (occasional item loads) or alternatively as automated tasks via scripts and APIs. Details are explained in the following sections:

  • Common functional setup for successful item imports
  • Loading data to Fusion Content Server
  • Initiating an item load scheduled job

Note: Other item import capabilities using Desktop Integration co-exist with the current FBL offering and remain another import offering for on-premise customers.

Part I: Functional Setup for Fusion Product Hub Item Loader

This blog will not cover all aspects of the functional setup steps. Instead, we'll focus on a basic introduction to a functional setup which is generic in the sense that it is valid for any other item definition as well. Fusion Product Hub offers a set of capabilities for the definition of custom item structures as required by customer needs.

DefineAttributes

In a first step, after receiving the information describing the item structure and validations, an authorized user creates the custom attributes by running the setup task Manage Attribute Groups and Attributes, as shown in the screenshot above. This step is optional and needs to be carried out only if attributes other than those available out of the box (operational attributes) in Product Hub are required.

DefineAttributes2

Attribute Groups consist of attributes, which describe specific features of an item. Attribute values can be validated by value sets or by more complex, coded validations. All these definitions are stored in an internal metadata repository called Extensible Flexfields (EFF).

DefineAttributes3

Once these Attributes and Attribute Groups have been defined, they can be assigned to Item Classes as shown below. New items loaded via FBL will belong to dedicated item classes after import.

DefineItemClass

Before running an item import we must create a mapping between the import item structure and the equivalent item class in Fusion Product Hub. Once defined, we must save the Import Map; we will refer to it later in the loader process.

DefineItemClass2

The screenshot below shows a sample mapping in an overview page. The mapping process consists of assigning columns in the CSV structure to the attributes defined per item.

MouserImportMapDefinition2

The import structure for mapping is derived from a sample CSV file loaded to Fusion Product Hub. The first line (header) in the CSV file describes the columns in the import structure that have to be mapped to the target structure. This can be done via the UI by dragging target fields onto source fields.

MouserImportMapDefinition

Last but not least, an Item Import runs in the context of a Spoke System in Fusion Product Hub. If it does not already exist, it must be created and assigned to an Item Organization. Every import job started via FBL must refer to a spoke system.

PIMDH_SpokeSystemDefinition

The functional setup for FBL Item Import as shown above doesn't differ from any other item import, such as Desktop Integration. This is usually a one-time activity per import structure. The functional setup is complete after finishing the previous tasks.

Part II: Loading Product Data to Fusion Content Server

FBL leverages the Universal Content Management (UCM) server that comes with Fusion Product Hub for storing import files. It's usually available under the following URL:

https://<fusion_product_hub_host>:<port>/cs

Customers have a choice to either use the FBL UI for occasional data loads or to set up machine-to-machine communication instead. This chapter gives an overview of the folder structures, the basic security structures and the functionality in UCM for putting loader files into a staging area for further processing, for both variants: the manual and the automated loader tasks.

Manual Steps

The login page for Fusion Content Server is available via the URL above.

UCM_LoginPage

In the demo system we're using a Fusion Product Hub identity named PIMQA.

UCM_LoginPage2

The user PIMQA is assigned to the roles shown below. By using these roles we ensure that all required permissions are given to run File Based Loader.

PIMQA_Roles

File Based Loader requires two files to be available in UCM:

  • Data File in CSV format containing the item information
  • Manifest file defining the used import mapping (see above) and the path/name for file containing item data

Both files must exist and be accessible in Fusion Content Server before triggering the loader job. The screenshot below shows a sample of item data in CSV format.

NewStructureItemCSV

As stated above, a manifest file describes the file and import mapping information for a dedicated item load, as shown below.

NewStructureManifestCSV

The staging area for Fusion Product Hub is predefined as /Contribution Folders/PIM.

New files are uploaded into that folder via the menu New Item. The screenshot below provides further details. The field Account must be filled with the correct value for accessibility permissions – in the case of this FBL for Fusion Product Hub sample we used scm$/item$/import$. This account is a seeded value and can be used for this purpose. Users can set up their own accounts and use them instead. It's also possible to use Security Groups instead of Accounts when using the UI-based file upload. More details about the security mechanisms are explained in the next section below.

UCM_UploadPimFile_Manually

Once all files have been uploaded – either manually or automatically via scripts – the required files must reside in the UCM folder before the item load job is triggered. The screenshot below shows a sample.

UCM_PimFiles3

When double-checking the properties of the uploaded files, a typical setup looks like the one below (the meaning of Security Group and Account is explained further down in this document):

  • Folder: /Contribution Folders/PIM
  • Security Group: FAImportExport
  • Account: scm$/item$/import$

UCM_PimFiles2

As soon as these files have been uploaded and the correct data has been provisioned, the UCM part of FBL is done and we can proceed to the next step.

Optional: Checklist Security Setup in UCM (on-premise only)

Normally there is no requirement to modify or extend the UCM security setup. The security features described above (i.e. Security Group, Account, etc.) are supposed to exist already. However, for troubleshooting it might be good to have a quick checklist of the UCM security options needed by FBL for Fusion Product Hub. Full documentation can be found on the product documentation site here.

The following relationship between Users, Roles, Policies and Resources exist:

  • UCM resources like Security Groups and Accounts define access to various folders and files
  • These resources are grouped in APM Policies
  • APM Policies are assigned to Application Roles
  • Application Roles are assigned to Fusion Product Hub Users

The best option to check the security setup is to use a privileged user like FAAdmin. Obviously that won't work in Fusion Product Cloud. When using cloud services, it's recommended to submit a Service Request if there is any doubt that the security options might not be set correctly.

After logging in as a privileged user, open the Administration sub-page. A list of Admin Applets appears in the right window pane after activating Administration -> Admin Applets (see below).

UCM_AdminApplets

The applet User Admin shows the user PIMQA we're using in this sample as an external user, meaning the user is registered in Fusion Identity Management. Only a few UCM built-in users are marked as local. Usually it's neither necessary nor recommended to touch any of these entries via this applet.

UCM_UserAdminApplet

The screenshot below shows more details of our sample user.

UCM_UserAdminApplet2

Furthermore, it might be useful to show where the UCM Account scm$/item$/import$ (see its usage in the section above) is defined, as this improves the understanding of the underlying security concepts.

Entries can be found via the Authorization and Policy Management (APM) page in Fusion Product Hub via a link like this:

https://<fusion_product_hub_server>:<port>/apm

You must use privileged user credentials like FAAdmin for a successful login to APM.

Once logged in, we can search for the UCM Account details as shown below.

Search for Application Roles -> IDCCS -> Resources

APM_UCM_Account

The next step is to search for resources starting with scm in the Search Resources page as shown below.

APM_UCM_Account2

Open the detail page for scm$/item$/import$ with the results as shown below and click the button Find Policies.

APM_UCM_Account3

In the Policies overview page we find the attached policies.

APM_UCM_Account4

Opening these policies shows the details about the document permissions per resource as defined for Item Import in Fusion Content Server.

APM_UCM_Account5

Programmatic Interface for Item Upload to Fusion Content Server

As an alternative to manual uploads of item data, we can use a publicly available toolset called WebCenter Content Document Transfer Utility for Fusion Applications. It can be downloaded from OTN as shown below.

WebCenterTransferUtility_Download

This toolset provides some Java programs to be used from a command line interface. Such an interface is useful when running periodic jobs in customer environments to upload new or changed item data without human interaction.

A processing pipeline could look like this:

  • Extract item data from a local system and transform them into a CSV format as expected by Fusion Product Hub
  • Put the file to a staging area
  • Create a manifest file or reuse an existing manifest file in case file names remain the same on Fusion Content Server
  • Run the command line utility to upload file(s)
  • Initiate further processing to load the Item Data by calling a Web Service that runs an Import Job.

Recently some related articles have been published on Fusion Apps Developer Relations Blog like this post. Please refer to those sources if you want to learn more about the tool usage.

In this section we will cover the tool usage as required for Fusion Product Hub item load.

The transfer utility provides two different interfaces to connect to Fusion Content Server:

  • The RIDC-based transfer utility: a feature-set Java library that encapsulates the proprietary RIDC protocol to Fusion Content Server over HTTPS.
  • A generic SOAP-based transfer utility that uses the Oracle JRF supporting libraries for JAX-WS over HTTPS to communicate with Fusion Content Server.

After download and extraction of transfer utility two sub-directories will exist: ridc and generic. Details about the specific command line parameters and connection information can be found below.

In addition to these two sub-directories, a file WebCenter Content Document Transfer Utility Readme.html will be extracted with comprehensive documentation about tool usage, troubleshooting and additional options.

Upload via the RIDC Java library

Using a RIDC connection might be the preferred option for those customers who have no FMW products in place. The Java library oracle.ucm.fa_client_11.1.1.jar (found in the sub-directory ridc after extraction) can be used standalone and doesn’t require any other libraries in addition to a JDK with a minimum release of 1.7 (1.6 for JRockit).

Connection information can be located in a configuration file connection.properties with content like this:

url=https://<fusion_product_hub_server>:<port>/cs/idcplg
username=<IntegrationUser>
password=<Password>
policy=oracle/wss_username_token_client_policy

In production environments it’s strongly recommended to avoid saving passwords in clear text in configuration files like this. Putting them into wallets and reading values from there would be the preferred choice.

A command line running the document upload via RIDC would look like this (“\” is used to continue the command on the next line where it is too long):

${JAVA_HOME}/bin/java \
-jar ./oracle.ucm.fa_client_11.1.1.jar UploadTool \
--propertiesFile=./connection.properties \
--primaryFile=ItemManifest.csv \
--dDocTitle="ItemManifest.csv" --k0=dCollectionPath \
--v0="/Contribution Folders/PIM/" \
-dDocAccount="/scm$/item$/import$"

A successful execution will result in an output like this:

Oracle WebCenter Content Document Transfer Utility
Oracle Fusion Applications
Copyright (c) 2013-2014, Oracle. All rights reserved.
* Custom metdata set: "dCollectionPath"="/Contribution Folders/PIM/".
Performing upload (CHECKIN_UNIVERSAL) ...
Upload successful.
[dID=76 | dDocName=UCMFA000076]

The uploaded document from the example above resides in the Fusion Content Server with a global document id 76 and an internal document name UCMFA000076. For further processing we’d rather locate it by its logical file information /Contribution Folders/PIM/ItemManifest.csv.

Using an RIDC connection is apparently the first choice for cloud customers who are not using any Oracle Fusion Middleware runtime environment. However, Fusion Product Hub on-premise customers can use this connection type too.
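
For the automated processing pipeline outlined earlier, the same upload can also be triggered from a scheduled Java program instead of a shell script. The snippet below is only a minimal sketch that wraps the RIDC command line shown above; it assumes the jar, connection.properties and the manifest file sit in the working directory, and all names are taken from the example.

import java.io.IOException;

// Minimal sketch: wraps the RIDC UploadTool command line shown above so that it can be
// called from a scheduled Java job. All file names are taken from the example and would
// normally come from configuration.
public class RidcUploadRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                System.getProperty("java.home") + "/bin/java",
                "-jar", "./oracle.ucm.fa_client_11.1.1.jar", "UploadTool",
                "--propertiesFile=./connection.properties",
                "--primaryFile=ItemManifest.csv",
                "--dDocTitle=ItemManifest.csv",
                "--k0=dCollectionPath",
                "--v0=/Contribution Folders/PIM/",
                "-dDocAccount=/scm$/item$/import$");
        pb.inheritIO();                       // show the tool output on the console
        int exitCode = pb.start().waitFor();  // block until the upload has finished
        if (exitCode != 0) {
            throw new IllegalStateException("Upload failed with exit code " + exitCode);
        }
    }
}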

Upload via the generic Java library

The generic approach connects to a WebService in Fusion Content Server to perform the file upload. After extraction, a Java library oracle.ucm.fa_genericclient_11.1.1.jar can be found in the folder generic. For this type of connection the connection.properties will point to a different URL as shown below:

url=https://<fusion_product_hub_server>:<port>/idcws
username=<IntegrationUser>
password=<Password> 
policy=oracle/wss_username_token_client_policy

It’s important to mention that this tool can’t run standalone, as we must add an additional library from a WebLogic Server runtime directory: jrf-client.jar. It can be found in the WLS directory oracle_common/modules/oracle.jrf_11.1.1. No other libraries need to be added to the classpath, as the remaining Oracle JRF Web Service libraries are referenced from jrf-client.jar.

The command line using the generic Java library would look like this:

${JAVA_HOME}/bin/java -classpath \
<WLS_HOME>/oracle_common/modules/oracle.jrf_11.1.1/jrf-client.jar:./oracle.ucm.fa_genericclient_11.1.1.jar \
oracle.ucm.idcws.client.UploadTool \
-propertiesFile=./connection.properties \
--primaryFile=/home/oracle/CASE_1-CsvMap.csv \
--dDocTitle="Product Item Import 0001" \
--k0=dCollectionPath --v0="/Contribution Folders/PIM/" \
-dDocAccount="/scm$/item$/import$"

The output looks identical to the RIDC version:

Oracle WebCenter Content Document Transfer Utility
Oracle Fusion Applications
Copyright (c) 2013-2014, Oracle. All rights reserved.
* Custom metdata set: "dCollectionPath"="/Contribution Folders/PIM/".
Performing upload (CHECKIN_UNIVERSAL) ...
Upload successful.
[dID=77 | dDocName=UCMFA000077]

As mentioned above, using this connection type requires a fully installed WebLogic runtime environment and uses a standard WebService interface.

Logging option

In order to adjust the level of logging information, the log level can be controlled through a properties file such as logging.properties, which can be passed to the Java call via the option

-Djava.util.logging.config.file=./logging.properties

The content of this file could look like this:

handlers=java.util.logging.ConsoleHandler
.level=FINEST
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=FINEST
oracle.j2ee.level=FINEST

As this is a standard Java feature the full list of values looks as follows:

  • SEVERE (highest value – least logging)
  • WARNING
  • INFO
  • CONFIG
  • FINE
  • FINER
  • FINEST (lowest value – most logging)

Using the logging features might help in cases where the content transfer utility runs into issues with the connections and/or the upload of files.
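
If the transfer utility classes are embedded in another Java program rather than launched from the command line, the same logging configuration could also be loaded programmatically instead of passing the -D option. This is a minimal sketch using only standard JDK logging APIs and assuming the logging.properties file shown above sits in the working directory:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.logging.LogManager;
import java.util.logging.Logger;

// Minimal sketch: loads the logging.properties shown above programmatically, which is
// useful when the transfer utility classes are embedded in another Java program.
public class LoggingSetup {
    public static void main(String[] args) throws IOException {
        try (FileInputStream config = new FileInputStream("./logging.properties")) {
            LogManager.getLogManager().readConfiguration(config);
        }
        Logger.getLogger(LoggingSetup.class.getName()).fine("FINE logging is now active");
    }
}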

Optional: Managing SSL Self-Signed Certificates

It’s strongly recommended to use the HTTPS protocol when connecting to Fusion Content Server, despite the fact that plain HTTP connections would technically work as well. In scenarios using Fusion Product Cloud the server certificates are signed by well-known authorities (trust centers), whose root certificates are normally part of JDK or browser distributions, so no special certificate handling is required.

When using Fusion Product Hub on-premise there might be situations where self-signed certificates are used for SSL. When running Java programs these certificates must be imported into the client’s certificate store. Here is a short explanation of how to manage this:

The connection to a server with a self-signed certificate will produce a warning in web browsers. It’s possible to take a closer look at the certificate details as shown in the Firefox screenshots below:

  • A warning page appears stating that the connection can’t be trusted
  • Click on “I understand the risks”
  • Click on “Add exception …”
  • Click on “View” and the certificate details appear as shown below

DownloadSSLSelfSignedCert2

Usually unknown certificates shouldn’t be trusted, but in this special case we are the issuers and make an exception.

We can download the certificate via the following access path in Firefox:

  • Click on tab “Details” as shown in screenshot above
  • Click on “Export … ” as shown in screenshot below
  • In File Save dialog choose “X.509 Certificate (DER)”
  • Save the file in a folder

DownloadSSLSelfSignedCert

Once saved, we must import this certificate into the certificate store of the Java runtime that we use to run the content transfer utility. The command line looks like this:

${JAVA_HOME}/bin/keytool -importcert \
-alias <name_referring_to_ssl_server> \
-keystore ${JAVA_HOME}/jre/lib/security/cacerts \
-file <path_to_der_certificate>

When asked for a password, if it has never been changed, the default value is “changeit”.
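
To verify that the import worked, the certificate can be read back from the cacerts store with a few lines of standard JDK code. The sketch below is only an illustration; the alias name is an assumption and must match the -alias value used with keytool, and the cacerts path may differ between JDK and JRE installations.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;

// Minimal sketch: reads the certificate back from the cacerts store to verify that
// the keytool import above worked. The alias is an assumption and must match the
// -alias value used during the import.
public class CheckImportedCert {
    public static void main(String[] args) throws Exception {
        String cacerts = System.getProperty("java.home") + "/lib/security/cacerts";
        String alias = "my_fusion_server";   // assumption: alias used with keytool
        KeyStore store = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(cacerts)) {
            store.load(in, "changeit".toCharArray());   // default cacerts password
        }
        X509Certificate cert = (X509Certificate) store.getCertificate(alias);
        if (cert == null) {
            System.out.println("Alias not found - the import did not work as expected");
        } else {
            System.out.println("Subject: " + cert.getSubjectDN());
            System.out.println("Valid until: " + cert.getNotAfter());
        }
    }
}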

Part III: Initiating the Item Data Load

In the previous section the provisioning of files to Fusion Content Server has been explained. The final step to import these items into Fusion Product Hub is running the loader and import job. This job runs seamlessly and includes the following steps:

  • Transfer item data from the file in Fusion Content Server to Item Interface Tables
  • Run a batch import from Item Interface Tables to Item tables

Step 2 above is identical to the Item Batch Import that has existed in Fusion Product Hub for a while, including exception handling and job status reporting.

Customers have the option to initiate the scheduled job via the UI (occasional triggering) or to wrap it in scripts for automated, periodical runs.

Manual Item Load via Fusion Product Hub UI

Initiating a manual item load is pretty straightforward, as users just have to follow the standard dialog to trigger a job. For this purpose use the Scheduled Processes menu entry in Fusion Product Hub Navigator.

ScheduleProcessesNavigator

Search for a job called Schedule Product Upload Job as shown below.

NewScheduleJob

Provide parameters as required:

  • Manifest File Path: file location of the manifest file as uploaded to Fusion Content Server previously
  • Assigned Spoke System: Spoke system as defined in the functional setup previously (see section I)

Once the job parameters have been provided the job execution can be submitted for immediate execution or scheduled for a later time.

NewScheduleJobParameters

The execution status of jobs can be monitored via the same UI as shown below. Once finished, the items are supposed to be transferred from the CSV file into the system and can be found in the standard Fusion Product Hub UI for further processing.

NewScheduleJobProgress

Programmatic execution of the loader job from command line

Similar to uploading files into Fusion Content Server, the triggering of loader jobs can be initiated by Java programs. For this purpose it’s recommended to use Oracle JDeveloper 12c, which can be downloaded from OTN as shown below.

JDev_Download

It’s not necessary to download more technology products to run the programmatic interface for job scheduling.

Create WebService Client to initiate a scheduling job

Technically the job scheduler can be accessed via an existing WebService interface. Oracle JDeveloper 12c provides a wizard to generate the WebService client code.

The series of screenshots below will document the step-by-step procedure to generate the Java code. Once done, we have a skeleton of Java code and configuration files that require some minor extensions in order to execute the web service.

As a very first step create a Java application with a custom project. Then choose Create Web Service Client and Proxy via “New …” and “Gallery …”.

JDeveloper2

As shown in the screenshots below, we must provide the information for the WebService we intend to invoke in our code. For the Item Loader it has the following format:

https://<fusion_product_hub_server>:<port>/finFunShared/FinancialUtilService?wsdl

JDeveloper3

Once provided, click Next and the wizard will start determining the web service configuration by introspecting the provided web service WSDL.

JDeveloper4

As shown in the screenshot below, there is a choice to enter a custom root package for the generated WebService client code. The default code will use a package name like this:
com.oracle.xmlns.apps.financials.commonmodules.shared.financialutilservice

In most cases customers want to reflect their own package naming conventions, and this screen is where to configure it.

JDeveloper5

In the next step, as shown in the screenshot below, it is not necessary to change any information and the user can click Next.

JDeveloper6

The next dialog gives users a choice to configure the client using synchronous or asynchronous methods. Scheduling a job is a synchronous activity and therefore it’s not required to generate asynchronous Java methods.

JDeveloper7

After reading and analyzing the web service, a WSM policy oracle/wss11_username_token_with_message_protection_server_policy has been found on the server side. The code generator uses the corresponding client policy oracle/wss11_username_token_with_message_protection_client_policy to fulfill the server requirements. This value must be accepted, as the communication between client and server will fail otherwise.

JDeveloper8

On the next dialog screen no changes are required and the user can press Next.

JDeveloper9

The last screenshot of this code generation dialog shows a summary of the methods being generated, matching the methods found in the web service WSDL. After clicking Finish the code generation starts and might take up to one or two minutes.

JDeveloper10

The development environment after finishing code generation will look like the screenshot below.

JDeveloper

This code generation saves a tremendous amount of time compared to programming manually. It’s worth mentioning that some parts of the generated code are under the control of JDeveloper and might be overwritten when configuration changes happen. Developers must be careful to add their own code only in the sections foreseen and indicated in the code.

The generated code doesn’t provide any details about:

  • Authentication by providing credentials
  • Message encryption as required by the WSM policy
  • Web service operations to be initiated by this Java code – here the Java method submitESSJobRequest() for the web service operation submitESSJobRequest
  • Parameters to be passed to these operation calls

All the additions above are manual tasks to be performed by programmers.

Below is a piece of Java code that shows a working example, kept simple for better readability. For production use we recommend the following improvements:

  • Put details for the keystore etc. in configuration files
  • The same for the username
  • Store passwords in a wallet
  • Important: as mentioned, the generated code is under the control of a code generator. To avoid unintentional code changes it’s strongly recommended to create your own class by copying the generated class

Generated and modified Java File FinancialUtilServiceSoapHttpPortClient.java

package wsclient.mycompany.com;

import com.sun.xml.ws.developer.WSBindingProvider;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import oracle.webservices.ClientConstants;

import weblogic.wsee.jws.jaxws.owsm.SecurityPoliciesFeature;

// This source file is generated by Oracle tools.
// Contents may be subject to change.
// For reporting problems, use the following:
// Generated by Oracle JDeveloper 12c 12.1.3.0.0.1008
public class FinancialUtilServiceSoapHttpPortClient {
  public static void main(String[] args) {
      FinancialUtilService_Service financialUtilService_Service = 
            new FinancialUtilService_Service();

      // Configure security feature
      SecurityPoliciesFeature securityFeatures = 
            new SecurityPoliciesFeature(new String[] {
"oracle/wss11_username_token_with_message_protection_client_policy"
            });
      FinancialUtilService financialUtilService =
      financialUtilService_Service.getFinancialUtilServiceSoapHttpPort(
                                      securityFeatures);
      // Add your code to call the desired methods.
      WSBindingProvider wsbp = (WSBindingProvider) financialUtilService;
      Map<String, Object> reqCon = wsbp.getRequestContext();

      reqCon.put(WSBindingProvider.USERNAME_PROPERTY, "IntegrationUser");
      reqCon.put(WSBindingProvider.PASSWORD_PROPERTY, "Password");

      reqCon.put(ClientConstants.WSSEC_KEYSTORE_TYPE, "JKS");
      reqCon.put(ClientConstants.WSSEC_KEYSTORE_LOCATION, 
                                "/home/oracle/FusionClient.jks");
      reqCon.put(ClientConstants.WSSEC_KEYSTORE_PASSWORD, "Welcome1");
      reqCon.put(ClientConstants.WSSEC_ENC_KEY_ALIAS, "mykey");
      reqCon.put(ClientConstants.WSSEC_ENC_KEY_PASSWORD, "Welcome1");
      reqCon.put(ClientConstants.WSSEC_RECIPIENT_KEY_ALIAS, "mykeys");

      Long jobID = startEssJob(financialUtilService);

      System.out.println("Item Data Import Job started with ID: " +
                     jobID.toString());
    }

  private static Long startEssJob(FinancialUtilService fus) {
      Long essRequestId = new Long(-1);

      try {
          List<String> paramList = new ArrayList<String>();
          // UCM folder and file name
          paramList.add("/Contribution Folders/PIM/ProductLoad.csv"); 
          // Spoke System Code
          paramList.add("PIMDH"); 
          // Product Upload - static value here
          paramList.add("true");
          // Product Hub Portal Flow
          paramList.add("false");
          essRequestId = fus.submitESSJobRequest( 
             "/oracle/apps/ess/scm/productHub/itemImport/",
             "ExtProductUploadSchedulingJobDef", paramList);
        } 
      catch (ServiceException e) {
            e.printStackTrace();
            System.exit(1);
        }

      return essRequestId;
  }
}

Running this Java program from command line doesn’t require any additional libraries except those coming with a JDeveloper installation and a standard JDK.

It’s recommended to package all files in the project into a JAR file via a Deployment Profile. Once done, a sample call for this WebService client would look as follows:

clientJar=<project_dir>/deploy/FinancialUtilService-Client.jar
jdevDir=<jdev12c_install_dir>
modulesDir=${jdevDir}/oracle_common/modules

${JAVA_HOME}/bin/java \
-server \
-Djava.endorsed.dirs=${jdevDir}/oracle_common/modules/endorsed \
-classpath ${clientJar}:\
${jdevDir}/wlserver/server/lib/weblogic.jar:\
${modulesDir}/oracle.jrf_12.1.3/jrf.jar:\
${modulesDir}/oracle.toplink_12.1.3/eclipselink.jar:\
${modulesDir}/oracle.toplink_12.1.3/org.eclipse.persistence.nosql.jar:\
${modulesDir}/oracle.toplink_12.1.3/org.eclipse.persistence.oracle.nosql.jar:\
${jdevDir}/wlserver/modules/com.bea.core.antlr.runtime_2.0.0.0_3-2.jar:\
${modulesDir}/javax.persistence_2.0.jar:\
${modulesDir}/com.oracle.webservices.fmw.wsclient-impl_12.1.3.jar:\
${modulesDir}/com.oracle.webservices.fmw.jrf-ws-api_12.1.3.jar \
wsclient.mycompany.com.FinancialUtilServiceSoapHttpPortClient

Output of this call will be the Job ID of the scheduled item loader job. The monitoring of job progress can be done via the application UI. There are other web services that can be used for checking job status but their explanation is subject to a future blog post.

Managing WS Security

As mentioned earlier in this blog, the web service policy used to run a scheduler job is oracle/wss11_username_token_with_message_protection_server_policy. For our Java client this means that two core requirements must be satisfied, as shown in the code sample above:

  • Passing username/password (here: IntegrationUser/Password)
  • Encrypt the message content

For encryption we must use the public key of the web service as provided inside the WSDL. An example can be seen in the screenshot below.

WSDLX509Cert

The following steps are required to create an entry in the client side Java Key Store for message encryption:

  • Save the certificate from the WSDL in a certificate file in .pem or .der format
  • Import the certificate to an existing key store or create a new key store by importing the certificate
  • Refer to the certificate entries for message encryption as shown above in the sample class.

Open the WSDL in a web browser and search for the XML tag dsig:X509Certificate. The content must be copied and pasted into a text file between the lines BEGIN CERTIFICATE and END CERTIFICATE as shown below (sample data as copied from our test case):

-----BEGIN CERTIFICATE-----
MIIB+zCCAWSgAwIBAgIEUzRQ0zANBgkqhkiG9w0BAQUFADBCMRMwEQYKCZImiZPyLGQBGRYDY29tMRkwFwYKCZImiZPyLGQBGRYJbXljb21wYW55MRAwDgYDVQQDEwdzZXJ2aWNlMB4XDTE0MDMyNzE2MjQ1MVoXDTE3MDMyNzE2MjQ1MVowQjE
TMBEGCgmSJomT8ixkARkWA2NvbTEZMBcGCgmSJomT8ixkARkWCW15Y29tcGFueTEQMA4GA1UEAxMHc2VydmljZTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA1eGZzKgK5ZvSzfDVJ06oDYR0Zn79JQXNpddopXKLTWy87w95hfVv2UFSmK
0+3yjHR/OCpxHERwtBk3Q4jjLVv3nINwKmt/ELnMAm+pa4pAK3wXEzopoxM5phQPp2Mn/iLLNp1OfRI8yzRGowi9K71JcuDhlWJCGRETLxyDgxy3ECAwEAATANBgkqhkiG9w0BAQUFAAOBgQBdLYGuZSbCBpG9uSBiPG+Dz+BMl4KuqSjPjv4Uu
nCWLFobNpb9avSw79nEp4BS42XGaOSrfXA2j+/9mY9k9fUxVV+yP7AeKDKwDMoLQ33Yoi0B2t/0LkUDkYEa3xlluLAavrFvJfxSZH87WanJ2HbNwQWpbfRq1iG1aiji/2g9Tw==
-----END CERTIFICATE-----

Save the file under a name like <fusion_product_hub_server>.der and create an entry in an existing keystore, or a new Java keystore file, as follows:

${JAVA_HOME}/bin/keytool -importcert \
-alias <alias_as_referred_in_Java_code> \
-keystore <my_local_client_trust_store> \
-file <path_to_der_certificate_above>

If the keystore doesn’t exist it will be created with a new password (Welcome1 in the Java code sample above). If the file already exists we must provide the keystore password in order to be able to create the entry.
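
As an alternative to keytool, the same trust store entry could be created programmatically with standard JDK APIs. The sketch below is only an illustration; the file names, alias and password are sample assumptions matching the Java client shown earlier.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Minimal sketch: programmatic alternative to the keytool call above. It reads the
// certificate file saved from the WSDL and stores it under the alias referenced by
// the Java client. File names, alias and password are sample assumptions.
public class ImportWsdlCertificate {
    public static void main(String[] args) throws Exception {
        String certFile = "fusion_product_hub_server.der"; // certificate exported from the WSDL
        String storeFile = "FusionClient.jks";              // trust store used by the WS client
        char[] storePassword = "Welcome1".toCharArray();    // sample password from the code above
        String alias = "mykeys";                            // alias referenced in the client code

        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream(certFile)) {
            cert = (X509Certificate) factory.generateCertificate(in);
        }

        KeyStore store = KeyStore.getInstance("JKS");
        File existing = new File(storeFile);
        if (existing.exists()) {
            try (FileInputStream in = new FileInputStream(existing)) {
                store.load(in, storePassword);   // open the existing trust store
            }
        } else {
            store.load(null, storePassword);     // create a new, empty trust store
        }
        store.setCertificateEntry(alias, cert);
        try (FileOutputStream out = new FileOutputStream(storeFile)) {
            store.store(out, storePassword);
        }
        System.out.println("Stored " + cert.getSubjectDN() + " under alias " + alias);
    }
}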

Once created, the key store will contain an entry like this:

$ ${JAVA_HOME}/bin/keytool -v -list -keystore <my_local_client_trust_store>
Enter keystore password:  

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: mykeys
Creation date: Feb 24, 2015
Entry type: trustedCertEntry

Owner: CN=fahost2.mycompany.com, OU=defaultOrganizationUnit, O=defaultOrganization, C=US
Issuer: CN=fahost2.mycompany.com, OU=defaultOrganizationUnit, O=defaultOrganization, C=US
Serial number: 5397498d
Valid from: Tue Jun 10 20:08:13 CEST 2014 until: Sat Jun 10 20:08:13 CEST 2017
Certificate fingerprints:
	 MD5:  6D:BA:94:CE:84:E6:C0:A3:CA:A3:F1:8A:39:1E:E9:2E
	 SHA1: C7:3D:62:42:D8:E7:A0:DB:57:93:40:32:A8:54:E0:57:60:F0:8B:FD
	 SHA256: 40:D0:C3:81:CF:5D:6B:61:95:23:27:24:83:8D:1A:34:9F:31:C7:E5:15:BE:49:44:81:E6:D9:34:0A:69:FA:06
	 Signature algorithm name: SHA1withRSA
	 Version: 3


*******************************************
*******************************************

Summary

In this article we provided a 360° view of the tasks and activities for automating the use of File Based Loader for Fusion Product Hub. Everything discussed in this article applies equally to cloud and on-premise deployments of Fusion Product Hub.


Purging and partitioned schemas


SOA Suite 11g and 12c both require regular database maintenance for optimal performance. A key task in managing your SOA Suite database is a regular purging strategy. You should be doing this, so read the Oracle SOA Suite database growth management strategy if you haven’t already: http://www.oracle.com/technetwork/middleware/bpm/learnmore/soa11gstrategy-1508335.pdf

One of the best practices for managing large SOA Suite applications is to use Oracle Database partitioning. In 11g this is usually a fairly ad-hoc setup, though the whitepaper has everything you need to know about setting it up; in 12c, the “LARGE” RCU profile is partitioned (with monthly partitions).

Purging a partitioned schema usually involves running the check and move scripts, to ensure your partitions don’t contain “LIVE” data (based on your retention policy), followed by dropping the “OLD” partitions and rebuilding the indexes.

However, there are times where you may want to run a purge to clean up data that doesn’t neatly align with the partitions, for example in a load testing environment. The purge scripts, by default, won’t touch any table that is partitioned. If your favourite table isn’t mentioned in the purge debug log output (example below), then it is probably because it is partitioned.

To force the purge scripts to consider partitioned tables, you need to pass the “purge_partitioned_component” flag to the “delete_instances” purge function (see below). The purge script will then also purge partitioned tables.

Obviously, this is not intended for regular production use and it should never be used there.

An example invocation with the flag set:

soa.delete_instances(max_runtime => 300, min_creation_date => to_timestamp('2000-01-01','YYYY-MM-DD'), max_creation_date => to_timestamp('2000-12-31','YYYY-MM-DD'), purge_partitioned_component => true);
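
If the cleanup is scripted as part of a load-test teardown, the same call could also be issued from Java via JDBC. The following is only a minimal sketch; the connection URL and credentials are placeholders, and the parameters mirror the example invocation above.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Timestamp;

// Minimal sketch: calls soa.delete_instances with purge_partitioned_component enabled
// via JDBC. The URL and credentials are placeholders; the parameters mirror the
// example invocation above.
public class PurgePartitionedTables {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/soadb";   // placeholder connect string
        try (Connection conn = DriverManager.getConnection(url, "DEV_SOAINFRA", "password");
             CallableStatement call = conn.prepareCall(
                 "begin soa.delete_instances(" +
                 "  max_runtime => ?," +
                 "  min_creation_date => ?," +
                 "  max_creation_date => ?," +
                 "  purge_partitioned_component => true); end;")) {
            call.setInt(1, 300);
            call.setTimestamp(2, Timestamp.valueOf("2000-01-01 00:00:00"));
            call.setTimestamp(3, Timestamp.valueOf("2000-12-31 00:00:00"));
            call.execute();
        }
    }
}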

The example output below is from a soa.delete_instances run against a schema that has a partition on composite_instance. Note that there is no mention of composite_instance in the output.

There are several tables which can be partitioned, as well as whole units (such as BPEL). The purge script will skip any that have a partition. (If you are interested, you can search the PLSQL packages in a SOAINFRA schema for ‘is_table_partitioned’ to see which tables are checked and which columns it considers for partitioning).

01-JAN-2000 12:00:00 : procedure delete_instances
01-JAN-2000 12:00:00 : time check
01-JAN-2000 12:00:00 : sysdate = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : stoptime = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : checking for partitions
01-JAN-2000 12:00:00 : done checking for partitions
01-JAN-2000 12:00:00 : composite_dn =
01-JAN-2000 12:00:00 : loop count = 1
01-JAN-2000 12:00:00 : deleting non-orphaned instances
01-JAN-2000 12:00:00 Number of rows in table ecid_purge Inserted = 1
01-JAN-2000 12:00:00 : calling soa_orabpel.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table temp_cube_instance Inserted = 1
01-JAN-2000 12:00:00 Number of rows in table temp_document_ci_ref Inserted = 1
01-JAN-2000 12:00:00 Number of rows in table temp_document_dlv_msg_ref Inserted = 1
01-JAN-2000 12:00:00 Number of rows in table HEADERS_PROPERTIES purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AG_INSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table TEST_DETAILS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_SCOPE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AUDIT_COUNTER purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AUDIT_TRAIL purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AUDIT_DETAILS purged is : 1
01-JAN-2000 12:00:00 Number of rows in table CI_INDEXES purged is : 0
01-JAN-2000 12:00:00 Number of rows in table WORK_ITEM purged is : 1
01-JAN-2000 12:00:00 Number of rows in table WI_FAULT purged is : 1
01-JAN-2000 12:00:00 Number of rows in table XML_DOCUMENT purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_DLV_MSG_REF purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_CI_REF purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_MESSAGE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_SUBSCRIPTION purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_AGGREGATION purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_INSTANCE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table BPM_AUDIT_QUERY purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTIONS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTION_EXCEPS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_AUDITINSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_TASKPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_PROCESSPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 : completed soa_orabpel.deleteComponentInstances
01-JAN-2000 12:00:00 : calling workflow.deleteComponentInstances
01-JAN-2000 12:00:00 : workflow.deleteComponentInstance begins
01-JAN-2000 12:00:00 : workflow.truncate_temp_tables
01-JAN-2000 12:00:00 Number of rows in table temp_wftask_purge workflow.deleteComponentInstance Inserted = 0
01-JAN-2000 12:00:00 : workflow.delete_workflow_instances begins
01-JAN-2000 12:00:00 : Purging WFTask_TL
01-JAN-2000 12:00:00 Number of rows in table WFTask_TL Purge WFTask_TL0
01-JAN-2000 12:00:00 : Purging WFTaskHistory
01-JAN-2000 12:00:00 Number of rows in table WFTaskHistory Purge WFTaskHistory0
01-JAN-2000 12:00:00 : Purging WFTaskHistory_TL
01-JAN-2000 12:00:00 Number of rows in table WFTaskHistory_TL Purge WFTaskHistory_TL0
01-JAN-2000 12:00:00 : Purging WFComments
01-JAN-2000 12:00:00 Number of rows in table WFComments Purge WFComments0
01-JAN-2000 12:00:00 : Purging WFMessageAttribute
01-JAN-2000 12:00:00 Number of rows in table WFMessageAttribute Purge WFMessageAttribute0
01-JAN-2000 12:00:00 : Purging WFAttachment
01-JAN-2000 12:00:00 Number of rows in table WFAttachment Purge WFAttachment0
01-JAN-2000 12:00:00 : Purging WFAssignee
01-JAN-2000 12:00:00 Number of rows in table WFAssignee Purge WFAssignee0
01-JAN-2000 12:00:00 : Purging WFReviewer
01-JAN-2000 12:00:00 Number of rows in table WFReviewer Purge WFReviewer0
01-JAN-2000 12:00:00 : Purging WFCollectionTarget
01-JAN-2000 12:00:00 Number of rows in table WFCollectionTarget Purge WFCollectionTarget0
01-JAN-2000 12:00:00 : Purging WFRoutingSlip
01-JAN-2000 12:00:00 Number of rows in table WFRoutingSlip Purge WFRoutingSlip0
01-JAN-2000 12:00:00 : Purging WFNotification
01-JAN-2000 12:00:00 Number of rows in table WFNotification Purge WFNotification0
01-JAN-2000 12:00:00 : Purging WFTaskTimer
01-JAN-2000 12:00:00 Number of rows in table WFTaskTimer Purge WFTaskTimer0
01-JAN-2000 12:00:00 : Purging WFTaskError
01-JAN-2000 12:00:00 Number of rows in table WFTaskError Purge WFTaskError0
01-JAN-2000 12:00:00 : Purging WFHeaderProps
01-JAN-2000 12:00:00 Number of rows in table WFHeaderProps Purge WFHeaderProps0
01-JAN-2000 12:00:00 : Purging WFEvidence
01-JAN-2000 12:00:00 Number of rows in table WFEvidence Purge WFEvidence0
01-JAN-2000 12:00:00 : Purging WFTaskAssignmentStatistic
01-JAN-2000 12:00:00 Number of rows in table WFTaskAssignmentStatistic Purge WFTaskAssignmentStatistic0
01-JAN-2000 12:00:00 : Purging WFTaskAggregation
01-JAN-2000 12:00:00 Number of rows in table WFTaskAggregation Purge WFTaskAggregation0
01-JAN-2000 12:00:00 : Purging WFTask
01-JAN-2000 12:00:00 Number of rows in table WFTask Purge WFTask0
01-JAN-2000 12:00:00 : workflow.delete_workflow_instances ends
01-JAN-2000 12:00:00 : workflow.deleteComponentInstance ends
01-JAN-2000 12:00:00 : completed workflow.deleteComponentInstances
01-JAN-2000 12:00:00 : calling mediator.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table mediator_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_deferred_message purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_case_detail purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_case_instance purged is : 0
01-JAN-2000 12:00:00 Number of rows in table mediator_instance purged is : 0
01-JAN-2000 12:00:00 : completed mediator.deleteComponentInstances
01-JAN-2000 12:00:00 : calling decision.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table temp_brdecision_instance Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table BRDecisionFault purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BRDecisionUnitOfWork purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BRDecisonInstance purged is : 0
01-JAN-2000 12:00:00 : completed decision.deleteComponentInstances
01-JAN-2000 12:00:00 : calling fabric.deleteComponentInstances
01-JAN-2000 12:00:00 Number of rows in table reference_instance_purge inserted = 1
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 1
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 1
01-JAN-2000 12:00:00 Number of rows in table reference_instance purged is : 1
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table rejected_msg_native_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_instance_fault purged is : 1
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 1
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 1
01-JAN-2000 12:00:00 Number of rows in table composite_sensor_value purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_instance_assoc purged is : 1
01-JAN-2000 12:00:00 Number of rows in table component_instance_purge inserted = 0
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table component_instance purged is : 0
01-JAN-2000 12:00:00 Number of rows in table attachment purged is : 0
01-JAN-2000 12:00:00 Number of rows in table attachment_ref purged is : 0
01-JAN-2000 12:00:00 : completed fabric.deleteComponentInstances
01-JAN-2000 12:00:00 : time check
01-JAN-2000 12:00:00 : sysdate = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : stoptime = 01/JAN/2000:12/00
01-JAN-2000 12:00:00 : loop count = 2
01-JAN-2000 12:00:00 : deleting orphaned instances
01-JAN-2000 12:00:00 : calling soa_orabpel.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table temp_document_dlv_msg_ref Inserted no cikey 1
01-JAN-2000 12:00:00 Number of rows in table temp_cube_instance Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table temp_document_ci_ref Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table temp_document_dlv_msg_ref Inserted = 0
01-JAN-2000 12:00:00 Number of rows in table HEADERS_PROPERTIES purged is : 1
01-JAN-2000 12:00:00 Number of rows in table AG_INSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table TEST_DETAILS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_SCOPE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table AUDIT_COUNTER purged is : 0
01-JAN-2000 12:00:00 Number of rows in table AUDIT_TRAIL purged is : 0
01-JAN-2000 12:00:00 Number of rows in table AUDIT_DETAILS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CI_INDEXES purged is : 0
01-JAN-2000 12:00:00 Number of rows in table WORK_ITEM purged is : 0
01-JAN-2000 12:00:00 Number of rows in table WI_FAULT purged is : 0
01-JAN-2000 12:00:00 Number of rows in table XML_DOCUMENT purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_DLV_MSG_REF purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DOCUMENT_CI_REF purged is : 0
01-JAN-2000 12:00:00 Number of rows in table DLV_MESSAGE purged is : 1
01-JAN-2000 12:00:00 Number of rows in table DLV_SUBSCRIPTION purged is : 0
01-JAN-2000 12:00:00 Number of rows in table DLV_AGGREGATION purged is : 0
01-JAN-2000 12:00:00 Number of rows in table CUBE_INSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_AUDIT_QUERY purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTIONS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_MEASUREMENT_ACTION_EXCEPS purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_AUDITINSTANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_TASKPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 Number of rows in table BPM_CUBE_PROCESSPERFORMANCE purged is : 0
01-JAN-2000 12:00:00 : completed soa_orabpel.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling workflow.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : workflow.deleteNoCompositeIdInstances begins
01-JAN-2000 12:00:00 : workflow.truncate_temp_tables
01-JAN-2000 12:00:00 : workflow.deleteNoCompositeIdInstances populates temp_wftaks_purge using createdDate between
min_date=01-JAN-00 12.00.00.000000 AMand max_date=31-JAN-00 12.00.00.000000 AM
01-JAN-2000 12:00:00 : workflow.deleteNoCompositeIdInstances done. No WFTask instances were found
01-JAN-2000 12:00:00 : completed workflow.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling mediator.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 : No Mediator instances found with composite instance id as null or zero
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 : No Mediator instances found from mediator_resequencer_message
01-JAN-2000 12:00:00 Number of rows in table temp_mediator_instance Inserted = 0
01-JAN-2000 12:00:00 : No Mediator instances found in mediator_deferred_message
01-JAN-2000 12:00:00 : completed mediator.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling decision.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table temp_brdecision_instance Inserted = 0
01-JAN-2000 12:00:00 : No Decision instances found with null composite instance ids
01-JAN-2000 12:00:00 : completed decision.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 : calling fabric.deleteNoCompositeIdInstances
01-JAN-2000 12:00:00 Number of rows in table reference_instance_purge inserted = 0
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table reference_instance purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_fault_purge inserted = 0
01-JAN-2000 12:00:00 Number of rows in table xml_document purged is : 0
01-JAN-2000 12:00:00 Number of rows in table rejected_msg_native_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table instance_payload purged is : 0
01-JAN-2000 12:00:00 Number of rows in table composite_instance_fault purged is : 0
01-JAN-2000 12:00:00 Number of rows in table component_instance purged is : 0
01-JAN-2000 12:00:00 : completed fabric.deleteNoCompositeIdInstances

Protecting users and their emails after FA-P2T on Cloud environments


Introduction

The P2T – Production to Test – procedure is a very popular feature that FA customers utilize. It allows them to have their production data copied to another environment. Nowadays, P2T is a very common cloud SaaS and on-premise procedure. An important aspect that is not discussed frequently is the post-processing after P2T. This is very important to avoid security issues, such as production passwords and emails being available in a different environment.

Main Article

This article will cover the post-processing of FA-P2T, changing information that comes from production and is unwanted in the test environment. For this blog, our specific test case is the email attribute. Note: There isn’t a requirement to change this specific attribute, but if it is not changed you may inadvertently send notifications from a non-production environment to valid production email addresses. This may cause confusion and frustration for users trying to identify which environment these e-mails are coming from. Hence, this article provides step-by-step instructions to accomplish the e-mail change, making sure your end users’ emails will not be available in lower environments.

NOTE: ALL THESE STEPS HAVE TO BE DONE ONLY IN THE TEST ENVIRONMENT

Step 1: FROM OID SIDE
1.1) Do a backup first:

/u01/oid/oid_home/ldap/bin/ldifwrite connect=oiddb basedn="cn=users,dc=us,dc=oracle,dc=com" thread=3 verbose=true ldiffile=/tmp/backup-ENVXXX-[DATE].dat

1.2) Change the e-mail address (and avoid special-character issues):

/u01/oid/oid_home/ldap/bin/bulkmodify connect=oiddb attribute="mail" value="Global-Fusion.alerts@oracle.com" replace="TRUE" basedn="cn=users,dc=us,dc=oracle,dc=com" threads=4 size=1000 verbose=true

1.3) NOTE: If you want to protect admin e-mails or change them to something else, add those values into an ldif file and run the ldapmodify command below (instead of 1.2):

ldapmodify -p 3060 -D cn=orcladmin -w **** -a -f return-SYSADMINEMAILS.ldif (file attached)

Step 2: FROM OIM SIDE:
2.1) UPDATE or CREATE the system property in OIM (Web UI) to allow duplicate e-mails: set ‘OIM.EmailUniqueCheck’ to FALSE

OIM11G-UniqueEmailProperty

2.2) Run this SQL:

 update usr set usr_email='Global-Fusion.alerts@oracle.com' where (usr_login not in('XELSYSADM', 'OAMADMINUSER','FUSION_APPS_HCM_SOA_SPML_APPID','WEBCHATADMIN','XELOPERATOR','WEBLOGIC','OIMINTERNAL','POLICYROUSER','POLICYRWUSER',
'OBLIXANONYMOUS','WEBLOGIC_IDM','IDROUSER','IDRWUSER','OAMSOFTWAREUSER','FAADMIN','OIM_ADMIN','OAMADMINUSER','OCLOUD9_OSN_APPID','OSN_LDAP_BIND_USER','HCM.USER','OIMADMINUSER'))

Step 3: FROM FUSION SIDE
3.1) Updating the per_email_addresses table

 
UPDATE fusion.per_email_addresses pea set pea.email_address = 'Global-Fusion.alerts@oracle.com' where pea.email_address like '%oracle.com'

3.2) Updating the hz_parties table

 
update fusion.hz_parties hp set hp.email_address = 'Global-Fusion.alerts@oracle.com' where hp.email_address like '%oracle.com'

3.3) Updating the hz_contact_points table

update fusion.hz_contact_points hcp set hcp.email_address = 'Global-Fusion.alerts@oracle.com' where hcp.email_address like '%oracle.com'
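
If these updates are run after every P2T refresh, they could also be wrapped in a small JDBC program so that all three statements succeed or fail together. The sketch below is only an illustration; the connection details are placeholders and the SQL statements are exactly the ones shown above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Minimal sketch: runs the three FUSION schema updates from Step 3 in one transaction.
// The connection details are placeholders; the SQL statements are the ones shown above.
public class MaskFusionEmails {
    private static final String[] UPDATES = {
        "update fusion.per_email_addresses pea set pea.email_address = 'Global-Fusion.alerts@oracle.com' where pea.email_address like '%oracle.com'",
        "update fusion.hz_parties hp set hp.email_address = 'Global-Fusion.alerts@oracle.com' where hp.email_address like '%oracle.com'",
        "update fusion.hz_contact_points hcp set hcp.email_address = 'Global-Fusion.alerts@oracle.com' where hcp.email_address like '%oracle.com'"
    };

    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//testdbhost:1521/fusiondb";  // placeholder connect string
        try (Connection conn = DriverManager.getConnection(url, "FUSION", "password")) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                for (String sql : UPDATES) {
                    System.out.println(stmt.executeUpdate(sql) + " rows updated");
                }
                conn.commit();      // apply all three updates together
            } catch (Exception e) {
                conn.rollback();    // undo partial changes before re-throwing
                throw e;
            }
        }
    }
}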

Step 4: RUN ESS JOB to update FA from OID data:
4.1) Log in to FA (Navigator -> Setup & Maintenance -> search for Schedule Person Keyword Crawler, named ‘Update Person Search Keywords’ -> Submit).

Conclusion

Protecting email addresses for an organization is a task that should be approached carefully, and a full environment backup must be taken before it starts. Proper planning and an understanding of the various dimensions of this solution and its concepts allow an organization to decide how it handles email data. It also highlights to what extent the enterprise is willing to protect end-user data in copied environments, and how best to apply this protection in an integrated and effective manner.

Configuring your Oracle Database for Kerberos authentication


Introduction

I have two goals with this post: to show how to set up Kerberos authentication for the Oracle Database, and to demonstrate that the use and configuration of Kerberos is pretty straightforward, at least with the versions and OS I have used for this setup.

The Kerberos functionality is provided by the Advanced Security Option of the DB and the Oracle client, so it is important that this option has been selected while creating the DB and while installing any Oracle Database clients.

There are a lot of articles about Kerberos and how the ticket exchanges work in detail, so I’ll not describe this in this blog post. Just doing a quick Google search for “Kerberos flow” will give you a lot of hits and links to some good reading.

Main article:

Expected goal:

An end-user using Windows 7 is able to log in to an Oracle Database without providing any username/password to the DB.
A Kerberos ticket will be used as a trusted way of providing the user identity to the database. I’ll use SQLplus to test the setup, but tools like SQL Developer or Toad etc. will also be able to use this feature if they use the OCI (thin/thick) client.
sqlplus

Note: The user still exists locally in the database and authorization is still handled by the individual database. This is not like Enterprise User Security, we just change the authentication mechanism from Username/Password to External(Kerberos) for specific users.

 

The components:

This setup consists of the following components:

  • Active Directory 2012 R2
  • Oracle Linux OS (6.4)
  • Oracle Database 11g R2 (11.2.0.3.0)
  • Windows 7
  • Oracle Database client 11g (11.2.0.1.0)

Some prerequisites:

  • Your Windows 7 client has joined the AD Domain and a domain user is used to login to Windows.
  • An Oracle DB instance has been created and the Advanced security option selected during creation
  • The Oracle Client software has been installed on the Windows 7 client incl. the Advanced Security option
  • Linux has the required Kerberos package. E.g. kinit is available

Solution flow:

The below diagram should give an idea about the typical flow.

flow

Flow Steps:

  • 1. This is the typical Windows client login to the domain. On Linux this step would be the okinit call
  • 2. The client will get a Ticket Granting Ticket back from the KDC
  • 3. Based on the DB we want to access, SQLplus will request a service ticket
  • 4. The KDC will provide a service ticket
  • 5. SQLplus will present this service ticket to the DB
  • 6. The DB will check the validity of the ticket with the KDC
  • 7. The DB will allow access

Setting up things:

These are the high-level steps we need to do.

  • 1. Setup the required test users and service accounts in the AD and the DB
  • 2. Get our Linux OS to talk Kerberos with the KDC using kinit
  • 3. Get Oracle to talk Kerberos with the KDC using okinit
  • 4. Create a Keytab file and move it to the DB
  • 5. Test the setup locally on the DB server using okinit and SQLplus
  • 6. Setup the Oracle Client to use Kerberos on the Windows Workstation

The details:

Here is what I had to setup in my environment to make this work.

1. Setup the required test users and service accounts in the AD and the DB
We need the following user accounts in the AD:

  • A Domain test User that will be used during login to Windows 7: tester@mydomain.test
  • A Service Account used by the DB during Kerberos communication: oracle11g@mydomain.test

We need the following account in the DB:

    • A test user that will be used by SQLplus: TESTER@MYDOMAIN.TEST
    • Note this user needs to be created like this:
      1. create user "TESTER@MYDOMAIN.TEST" identified externally;
      or
      2. create user tester identified externally as "tester@MYDOMAIN.TEST";

Notice: The database username needs to be all capital letters if you go with option 1. In option 2 the case of tester@MYDOMAIN.TEST must match what’s in the AD.

2. Get our Linux OS to talk Kerberos with the KDC and test it using Kinit
There is a default kerberos configuration file we need to update. The typical location is: /etc/krb5.conf
You can configure a lot of different things in this file but I’ll try to keep it to the bare minimum I had to change to get this scenario working.

The default file looks like this:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

In the [libdefaults] section I changed:

default_realm = EXAMPLE.COM to default_realm = MYDOMAIN.TEST

Now the default_realm matches the Windows Domain name

In the [realms] section I changed:
EXAMPLE.COM = {
kdc = kerberos.example.com
admin_server = kerberos.example.com
}

to

MYDOMAIN.TEST = {
kdc = ad2012r2.mydomain.test
admin_server = ad2012r2.mydomain.test
}

The kdc and admin_server both point to my AD Server. While testing I found that just having the kdc entry was enough.

Finally in the [domain_realm] section I changed:

.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM

to

tester = MYDOMAIN.TEST
.tester = MYDOMAIN.TEST

This part is important and sometimes confuses people.
In the domain_realm section we set up a mapping between the domain name of the DB server and the Kerberos realm to use.
In my case the FQDN of the DB server was idm.tester while the AD domain was MYDOMAIN.TEST

To test the Kerberos configuration we will use the kinit OS command.

I suggest testing both of the Domain users created earlier to ensure they both work.

Here is the output for TESTER@MYDOMAIN.TEST

[oracle@idm]$ kinit tester
Password for tester@MYDOMAIN.TEST: 
[oracle@idm]$ klist
Ticket cache: FILE:/tmp/krb5cc_54321
Default principal: tester@MYDOMAIN.TEST

Valid starting     Expires            Service principal
11/04/14 11:15:16  11/04/14 21:15:19  krbtgt/MYDOMAIN.TEST@MYDOMAIN.TEST
	renew until 11/11/14 11:15:16

The above result shows that we received a valid TGT from the KDC.

3. Get the Oracle client(on the server) to talk Kerberos with the KDC using OKinit

The okinit command is the Oracle equivalent of kinit but it uses configuration from the Oracle home. So we use this command to verify that the configuration from an Oracle point of view is correct.
We need to add a few lines to the sqlnet.ora used by the Database.

SQLNET.AUTHENTICATION_SERVICES = (BEQ,KERBEROS5)   # Allow Kerberos5 as authentication method
SQLNET.KERBEROS5_CONF = /etc/krb5.conf         # Location of the OS level kerberos configuration file
SQLNET.KERBEROS5_CONF_MIT=TRUE                 # Parse the krb5.conf file based on the MIT Kerberos configuration format.

Now the okinit command can be tested. Ensure the ORACLE_HOME has been set.

[oracle@idm bin]$ ./okinit -e 23 tester
Kerberos Utilities for Linux: Version 11.2.0.1.0 - Production on 25-FEB-2015 12:06:19
Copyright (c) 1996, 2009 Oracle. All rights reserved.
Password for tester@MYDOMAIN.TEST: 
[oracle@idm bin]$ 

[oracle@idm bin]$ ./oklist 
Kerberos Utilities for Linux: Version 11.2.0.1.0 - Production on 25-FEB-2015 12:07:19
Copyright (c) 1996, 2009 Oracle. All rights reserved.
Ticket cache: /tmp/krb5cc_54321
Default principal: tester@MYDOMAIN.TEST
 Valid Starting Expires Principal
25-Feb-2015 12:06:21 25-Feb-2015 20:06:19 krbtgt/MYDOMAIN.TEST@MYDOMAIN.TEST

Based on the above we can see that a TGT has been placed in the default Kerberos cache.

4. Create a Keytab file and move it to the Database Server

The Database Service will use the keytab file instead of using username/password when interacting with the KDC. (Use Google to get more details on the contents of the keytab file)

On the Domain Controller you need to use the ktpass tool to generate a keytab file and also associate an SPN (Service Principal Name) entry with the oracle11g AD user.

C:\Users\Administrator>ktpass -princ orcl/idm.tester@MYDOMAIN.TEST -ptype KRB5_NT_PRINCIPAL -crypto RC4-HMAC-NT -pass somepassword -mapuser oracle11g@mydomain.test -out c:\keytab_11g
Targeting domain controller: AD2012R2.mydomain.test
Using legacy password setting method
Successfully mapped orcl/idm.tester to oracle11g.
Key created.
Output keytab to c:\keytab_11g:
Keytab version: 0x502
keysize 64 orcl/idm.tester@MYDOMAIN.TEST ptype 1 (KRB5_NT_PRINCIPAL) vno 3 etype
 0x17 (RC4-HMAC) keylength 16 (0xf58d4301ca8034c16ef1843dc53bd113)

The -princ value is case sensitive and constructed in the following way: <Service name>/<FQDN of the DB Server>@<Kerberos realm>. The Service name is defined in the SQLNET.AUTHENTICATION_KERBEROS5_SERVICE parameter in the sqlnet.ora file on the Database Server. The Kerberos realm is provided in the krb5.conf file. You can search in AD for the oracle11g user and validate that the “User logon name” field on the Account tab contains orcl/idm.tester. This means the SPN has been created as expected. A keytab file keytab_11g should also be created and placed at c:\. Copy this file to the Database server.

The keytab file can be validated using the oklist tool.

[oracle@idm bin]$ ./oklist -k /tmp/keytab_11g
Kerberos Utilities for Linux: Version 11.2.0.1.0 - Production on 25-FEB-2015 13:41:09
Copyright (c) 1996, 2009 Oracle. All rights reserved.

Service Key Table: /tmp/keytab_11g

Ver Timestamp Principal
 3 01-Jan-1970 01:00:00 orcl/idm.tester@MYDOMAIN.TEST

This shows that our principal has been created correctly in the keytab file.

The sqlnet.ora file needs to be updated with two more parameters:

SQLNET.AUTHENTICATION_KERBEROS5_SERVICE=orcl  #A unique service name. It does not need to correspond to the DB Service name
SQLNET.KERBEROS5_KEYTAB=/tmp/keytab_11g   # The absolute location of the keytab file.

 

5. Test the setup locally on the DB server using okinit and sqlplus
Everything should be set up now and ready for the first test.

In case there is an old Kerberos ticket in the cache, it can be cleared by running okdstry

[oracle@idm bin]$ ./okdstry

Kerberos Utilities for Linux: Version 11.2.0.1.0 - Production on 25-FEB-2015 13:48:44

Now we can request a new TGT using okinit as shown in step 3.
Use oklist to validate that the ticket is in the Kerberos cache.

Then  we can call sqlplus like this:

[oracle@idm bin]$ ./sqlplus /@orcl
SQL> show user
USER is "TESTER@MYDOMAIN.TEST"
SQL>

The result should be that you logged in to the Database as the user TESTER@MYDOMAIN.TEST.

 

6. Setup the Oracle Client on the Windows 7 workstation to use Kerberos
The next step is to set up the Oracle Database client on the Windows workstation to support Kerberos as the authentication method.
The main difference on Windows is that we do not need to use the okinit command to ask for a Kerberos ticket, as Windows takes care of this and places the ticket in a cache ready for use.

Note: Tools like kerbtray.exe will provide an easy way to look at the kerberos tickets available in the OS cache.

Log in to your workstation using the “tester” user.

Please ensure the Oracle Client has been setup and can connect to the Database using username/password as credentials.

The following parameters must be added to the sqlnet.ora file:

SQLNET.AUTHENTICATION_SERVICES = (NTS,KERBEROS5)
SQLNET.KERBEROS5_CONF = c:/tmp/krb5.conf
SQLNET.KERBEROS5_CONF_MIT=TRUE
SQLNET.KERBEROS5_CC_NAME=OSMSFT://    # Based on this  sqlnet will look for the kerberos ticket in the Windows OS Credentials Cache.

Then we need to create a krb5.conf file with the following contents:

[realms]
 MYDOMAIN.TEST = {
 kdc = ad2012r2.mydomain.test
 admin_server = ad2012r2.mydomain.test
}
[domain_realm]
tester = MYDOMAIN.TEST
.tester = MYDOMAIN.TEST
.mydomain.test = MYDOMAIN.TEST
 mydomain.test = MYDOMAIN.TEST

Place the file at c:\tmp so it matches the path used in the sqlnet.ora file.

Now we can use sqlplus and test access using a Kerberos ticket.

C:\Users\tester>sqlplus /@orcl
SQL*Plus: Release 11.2.0.1.0 Production on Fri Mar 13 11:26:27 2015
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning option

SQL> show user
USER is "TESTER@MYDOMAIN.TEST"
SQL>
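
Tools built on the OCI client can reuse the same Windows ticket. As an illustration, a minimal JDBC sketch is shown below; it assumes the Oracle client and sqlnet.ora are configured as described above, that the OCI JDBC driver (ojdbc jar plus the native client libraries) is available, and that the externally authenticated URL form jdbc:oracle:oci:/@orcl is accepted.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: opens a JDBC-OCI connection that relies on external (Kerberos)
// authentication, so no username or password is passed. "/@orcl" mirrors the
// "sqlplus /@orcl" call used above.
public class KerberosJdbcTest {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:oracle:oci:/@orcl");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select user from dual")) {
            if (rs.next()) {
                System.out.println("Connected as: " + rs.getString(1)); // expect TESTER@MYDOMAIN.TEST
            }
        }
    }
}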

 

Support notes and references:

For more info have a look at this Oracle Support note: Configuring ASO Kerberos Authentication with a Microsoft Windows 2008 R2 Active Directory KDC (Doc ID 1304004.1)

Also the official Advanced Security Administrator’s Guide has more details.

Let me know if you have any comments or corrections.

Introduction to the Oracle Stream Explorer White Paper


During this week, Oracle’s real-time streaming analytics platform known as OEP (Oracle Event Processing) had a strategic rebrand and is now called Oracle Stream Explorer. It still includes the complete, highly scalable, high performance event-driven runtime from OEP, but now with a new collection of visually impactful web tooling features.

The purpose of this rebranding is to elevate and accelerate the adoption of event processing by the “masses”, enabling the enterprise to immerse itself in the next generation of real-time solutions, with a time to market of minutes rather than days or weeks.

In order to help first-time users work with Oracle Stream Explorer, I wrote this white paper, which provides the basic information necessary to start building applications with Oracle Stream Explorer and also shows step-by-step how to develop a sample application based on the famous “Mall Scene” from the Minority Report film.

http://www.oracle.com/technetwork/middleware/complex-event-processing/overview/introsxwp-otn-2470237.pdf

Fusion HCM Cloud Bulk Integration Automation


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integrations to cover various scenarios using modern and efficient technologies. One of the patterns is bulk integration to load and extract data to/from the cloud. The inbound tool is the File Based Loader (FBL), evolving into HCM Data Loader (HDL). HDL supports data migration for full HR, and incremental loads to support co-existence with Oracle Applications such as E-Business Suite (EBS) and PeopleSoft (PSFT). It also provides the ability to bulk load into configured flexfields. HCM Extracts is an outbound integration tool that lets you choose data, then gathers and archives it. This archived raw data is converted into a desired format and delivered to supported channels and recipients.

HCM Cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns. This post focuses on how to automate data file transfer with WebCenter Content to initiate the loader. The same APIs are used to download data files delivered by WebCenter Content through the extract process.

WebCenter Content replaces SSH File Transfer Protocol (SFTP) server in the cloud as a content repository in Fusion HCM starting with Release 7+. There are several ways of importing and exporting content to and from Fusion Applications such as:

  • Upload using “File Import and Export” UI from home page navigation: Navigator > Tools
  • Upload using WebCenter Content Document Transfer Utility
  • Upload programmatically via Java Code or Web Service API

This post provides an introduction, with working sample code, on how to programmatically export content from Fusion Applications to automate the outbound integration process to other applications in the cloud or on-premise. A Service Oriented Architecture (SOA) composite is implemented to demonstrate the concept.

Main Article

Fusion Applications Security in WebCenter Content

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export for the inbound and outbound integrations, respectively.

Let’s review the inbound and outbound batch integration flows.

Inbound Flow

This is a typical Inbound FBL process flow:

 

HDL_loader_process

The data file is uploaded to the WebCenter Content Server, either using the Fusion HCM UI or programmatically, into the /hcm/dataloader/import account. This uploaded file is registered by invoking the Loader Integration Service – http://{Host}/hcmCommonBatchLoader/LoaderIntegrationService.

You must specify the following in the payload:

  • Content id of the file to be loaded
  • Business objects that you are loading
  • Batch name
  • Load type (FBL)
  • Imported file to be loaded automatically

Fusion Applications UI also allows the end user to register and initiate the data load process.

 

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP encryption capability, a customer must exchange encryption keys with Fusion so that:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion

Steps to Implement PGP

  1. Provide your PGP Public Key.
  2. Oracle’s Cloud Operations team provides you with the Fusion PGP Public Key.

Steps to Implement PGP X.509

  1. Self-signed Fusion key pair (default option):
    • You provide the public X.509 certificate
  2. Fusion key pair provided by you:
    • Public X.509 certificate uploaded via Oracle Support Service Request (SR)
    • Fusion key pair for Fusion’s X.509 certificate in a keystore with keystore password

Steps for Certificate Authority (CA) signed Fusion certificate

  1. Obtain a Certificate Authority (CA) signed Fusion certificate
  2. Public X.509 certificate uploaded via SR
  3. Oracle’s Cloud Operations exports the Fusion public X.509 CSR certificate and uploads it to the SR
  4. Using the Fusion public X.509 CSR certificate, the customer provides the signed CA certificate and uploads it to the SR
  5. Oracle’s Cloud Operations provides the Fusion PGP Public Certificate to you via an SR

 

Modification to Loader Integration Service Payload to support PGP

The loaderIntegrationService has a new method called “submitEncryptedBatch” which has an additional parameter named “encryptType”. The valid values to pass in the “encryptType” parameter are taken from the ORA_HRC_FILE_ENCRYPT_TYPE lookup:

  • NONE
  • PGPSIGNED
  • PGPUNSIGNED
  • PGPX509SIGNED
  • PGPX509UNSIGNED

Sample Payload

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body>
<ns1:submitEncryptedBatch
xmlns:ns1="http://xmlns.oracle.com/apps/hcm/common/batchLoader/core/loaderIntegrationService/types/">
<ns1:ZipFileName>LOCATIONTEST622.ZIP</ns1:ZipFileName>
<ns1:BusinessObjectList>Location</ns1:BusinessObjectList>
<ns1:BatchName>LOCATIONTEST622.ZIP</ns1:BatchName>
<ns1:LoadType>FBL</ns1:LoadType>
<ns1:AutoLoad>Y</ns1:AutoLoad>
<ns1:encryptType>PGPX509SIGNED</ns1:encryptType>
</ns1:submitEncryptedBatch>
</soap:Body>
</soap:Envelope>

 

Outbound Flow

This is a typical Outbound batch Integration flow using HCM Extracts:

extractflow

The extracted file could be delivered to the WebCenter Content server. HCM Extracts has the ability to generate an encrypted output file. In the Extract delivery options, ensure the following options are correctly configured:

  1. Set the HCM Delivery Type to “HCM Connect”
  2. Select an Encryption Mode from one of the four supported encryption types, or select None
  3. Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:

  • Author: FUSION_APPSHCM_ESS_APPID
  • Security Group: FAFusionImportExport
  • Account: hcm/dataloader/export
  • Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Programmatic Approach to export/import files from/to WebCenter Content

In Fusion Applications, the WebCenter Content Managed server is installed in the Common domain Weblogic Server. The WebCenter Content server provides two types of web services:

Generic JAX-WS based web service

This is a generic web service for general access to the Content Server. The context root for this service is “/idcws”. For details of the format, see the published WSDL at https://<hostname>:<port>/idcws/GenericSoapPort?WSDL. This service is protected through Oracle Web Services Security Manager (OWSM). As a result of allowing WS-Security policies to be applied to this service, streaming Message Transmission Optimization Mechanism (MTOM) is not available for use with this service. Very large files (greater than the memory of the client or the server) cannot be uploaded or downloaded.

Native SOAP based web service

This is the general WebCenter Content service. Essentially, it is a normal socket request to Content Server, wrapped in a SOAP request. Requests are sent to the Content Server using streaming Message Transmission Optimization Mechanism (MTOM) in order to support large files. The context root for this service is “/idcnativews”. The main web service is IdcWebRequestPort and it requires JSESSIONID, which can be retrieved from IdcWebLoginPort service.

The Remote Intradoc Client (RIDC) uses the native web services. Oracle recommends that you do not develop a custom client against these services.

For more information, please refer “Developing with WebCenter Content Web Services for Integration.”

Generic Web Service Implementation

This post provides a sample of implementing generic web service /idcws/GenericSoapPort. In order to implement this web service, it is critical to review the following definitions to generate the request message and parse the response message:

IdcService:

IdcService is an attribute of the predefined <service> node that specifies the service to be executed, for example, CHECKIN_UNIVERSAL, GET_SEARCH_RESULTS, GET_FILE, CHECKOUT_BY_NAME, etc.

User

User is a subnode within a <service> and contains all user information.

Document

Document is a collection of all the content-item information and is the parent node of all the data.

ResultSet

ResultSet is a typical row/column-based schema. The name attribute specifies the name of the ResultSet. It contains a set of <row> subnodes.

Row

Row is a typical row within a ResultSet, which can have multiple <row> subnodes. It contains sets of Field objects.

Field

Field is a subnode of either <document> or <row>. It represents document or user metadata such as content Id, Name, Version, etc.

File

File is a file object that is either being uploaded or downloaded.

For more information, please refer Configuring Web Services with WSDL, SOAP, and the WSDL Generator.

Web Service Security

The genericSoapPort web service is protected by Oracle Web Services Manager (OWSM). In Oracle Fusion Applications cloud, the OWSM policy is: “oracle/wss11_saml_or_username_token_with_message_protection_service_policy”.

In your SOAP envelope, you will need the appropriate WS-Security (“wsse”) headers. This is a sample:

<soapenv:Header>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1">
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" MajorVersion="1" MinorVersion="1" AssertionID="SAML-iiYLE6rlHjI2j9AUZXrXmg22" IssueInstant="2014-10-20T13:52:25Z" Issuer="www.oracle.com">
<saml:Conditions NotBefore="2014-10-20T13:52:25Z" NotOnOrAfter="2015-11-22T13:57:25Z"/>
<saml:AuthenticationStatement AuthenticationInstant="2014-10-20T14:52:25Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
<saml:Subject>
<saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">FAAdmin</saml:NameIdentifier>
<saml:SubjectConfirmation>
<saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:sender-vouches</saml:ConfirmationMethod>
</saml:SubjectConfirmation>
</saml:Subject>
</saml:AuthenticationStatement>
</saml:Assertion>
</wsse:Security>
</soapenv:Header>

Sample SOA Composite

The SOA code provides a sample on how to search for a document in WebCenter Content, extract a file name from the search result, and get the file and save it in your local directory. The file could be processed immediately based on your requirements. Since this is a generic web service with a generic request message, you can use the same interface to invoke various IdcServices, such as GET_FILE, GET_SEARCH_RESULTS, etc.

In the SOA composite sample, two external services are created: GenericSoapPort and FileAdapter. If the service is GET_FILE, then it will save a copy of the retrieved file in your local machine.

Export File

The GET_FILE service returns a specific rendition of a content item, the latest revision, or the latest released revision. A copy of the file is retrieved without performing a check out. It requires either dID (content item revision ID) for the revision, or dDocName (content item name) along with a RevisionSelectionMethod parameter. The RevisionSelectionMethod could be either “Latest” (latest revision of the content) or “LatestReleased” (latest released revision of the content). For example, to retrieve file:

<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">401</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>

Search File

The dID of the content could be retrieved using the service GET_SEARCH_RESULTS. It uses a QueryText attribute in <Field> node. The QueryText attribute defines the query and must be XML encoded. You can append values for title, content Id, and so on, in the QueryText, to refine the search. The syntax for QueryText could be challenging, but once you understand the special characters formats, it is straight forward. For example, to search content by its original name:

<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dOriginalName &lt;starts&gt; `Test`</ucm:Field>
</ucm:Document>
</ucm:Service>

In plain text, the query is dOriginalName <starts> `Test`. Operators such as <starts> or <substring> must be enclosed in angle brackets, and the value in backquotes. You can further refine the query by adding more parameters.
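For example, a hedged sketch of a refined query, assuming the standard Universal Query Format operators such as <AND> and <matches>, that also restricts results to the security group mentioned earlier in this post might look like this:

<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dSecurityGroup &lt;matches&gt; `FAFusionImportExport` &lt;AND&gt; dOriginalName &lt;starts&gt; `Test`</ucm:Field>
</ucm:Document>
</ucm:Service>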

This is a sample SOA composite with two external references, genericSoapPort and FileAdapter.

ucmComposite

This is a sample BPEL process flow that demonstrates how to retrieve the file and save a copy to a local directory using the File Adapter. If the idcService is GET_SEARCH_RESULTS, the file is not saved. In a real scenario, you would search, check out, and start processing the file.

 

ucmBPEL1

The original file name is preserved when copying it to a local directory by passing the header property to the FileAdapter. For example, create a variable fileName and use assign as follows:

1. Get the file name from the response message in your <assign> activity as follows:

<from expression="bpws:getVariableData('InvokeGenericSoapPort_GenericSoapOperation_OutputVariable','GenericResponse','/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name=&quot;dOriginalName&quot;]')"/>
<to variable="fileName"/>

Please make note of the XPath expression as this will assist you to retrieve other metadata.

2. Pass this fileName variable to the <invoke> of the FileAdapter as follows:

<bpelx:inputProperty name="jca.file.FileName" variable="fileName"/>

Please add the following property manually to the ../CommonDomain/ucm/cs/config/config.cfg file to enable this QueryText syntax: AllowNativeQueryFormat=true
Then restart the managed server.
Without this property, the typical error is: StatusMessage: "Unable to retrieve search results. Parsing error at character xx in query...."

Testing SOA Composite:

After the composite is deployed in your SOA server, you can test it either from Enterprise Manager (EM) or using SoapUI. These are the sample request messages for GET_SEARCH_RESULTS and GET_FILE.

The following screens show the SOA composites for “GET_SEARCH_RESULTS” and “GET_FILE”:

searchfile

getfile

Get_File Response snippet with critical objects:

<ns2:GenericResponse xmlns:ns2="http://www.oracle.com/UCM">
<ns2:Service IdcService="GET_FILE">
<ns2:Document>
<ns2:Field name="dID">401</ns2:Field>
<ns2:Field name="IdcService">GET_FILE</ns2:Field>
....
<ns2:ResultSet name="FILE_DOC_INFO">
<ns2:Row>
<ns2:Field name="dID">401</ns2:Field>
<ns2:Field name="dDocName">UCMFA000401</ns2:Field>
<ns2:Field name="dDocType">Document</ns2:Field>
<ns2:Field name="dDocTitle">JRD Test</ns2:Field>
<ns2:Field name="dDocAuthor">FAAdmin</ns2:Field>
<ns2:Field name="dRevClassID">401</ns2:Field>
<ns2:Field name="dOriginalName">Readme.html</ns2:Field>
</ns2:Row>
</ns2:ResultSet>
</ns2:ResultSet>
<ns2:File name="" href="/u01/app/fa/config/domains/fusionhost.mycompany.com/CommonDomain/ucm/cs/vault/document/bwzh/mdaw/401.html">
<ns2:Contents>
<xop:Include href="cid:7405676a-11f8-442d-b13c-f8f6c2b682e4" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
</ns2:Contents>
</ns2:File>
</ns2:Document>
</ns2:Service>
</ns2:GenericResponse>

Import (Upload) File for HDL

The above sample can also be used to import files into the WebCenter Content repository for inbound integration or other use cases. The service name is CHECKIN_UNIVERSAL.
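As an illustration only, a minimal check-in request could follow the same GenericRequest structure. This is a sketch, not a verified payload: the primaryFile element name, the title value, and the inline Contents placeholder are assumptions, the account value comes from the security section above, and in practice the file content is typically attached via MTOM on the native service:

<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="CHECKIN_UNIVERSAL">
<ucm:Document>
<ucm:Field name="dDocTitle">Location Bulk Load</ucm:Field>
<ucm:Field name="dDocType">Document</ucm:Field>
<ucm:Field name="dSecurityGroup">FAFusionImportExport</ucm:Field>
<ucm:Field name="dDocAccount">hcm/dataloader/import</ucm:Field>
<ucm:File name="primaryFile" href="LOCATIONTEST622.ZIP">
<ucm:Contents><!-- file content goes here, for example as a base64 or MTOM attachment --></ucm:Contents>
</ucm:File>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>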

Summary

This post demonstrates how to secure and automate the export and import of data files in WebCenter Content server implemented by Fusion HCM Cloud. It further demonstrates how integration tools like SOA can be implemented to automate, extend and orchestrate integration between HCM in the cloud and Oracle or non-Oracle applications, either in Cloud or on-premise sites.

The SOA sample code is here.


Building Oracle Sales Cloud Reports with ‘Deep Link’ Capabilities


Introduction

‘Deep Links’ or ‘Direct Page Links’ allow users to directly open a specific tab or sub-tab in the Oracle Sales Cloud Simplified User Interface.  A Salesperson, for instance, could view a report that lists all of their active opportunities with a link to open the opportunity tab directly in Sales Cloud to edit it, or perhaps a link to open the specific Customer Contact they are working with.

This article presents a detailed methodology to develop a report with embedded deep links to the main Account tab and one of its Sub-tabs. Furthermore, it details the steps for building the report syntax for the Opportunities and Activities screens and constructing similar links for Contacts, Leads, Households and Custom Objects.

 

Activities_new Accounts_new Contacts_new Opportunities_new

 

Additionally, the article explains how to present the reports with the same look and feel as the existing canned reports that come with Sales Cloud.

 

Main Article

Creating the reports requires a user with Report Author privileges. Reports are created by following the ‘BI Analytics (Answers)’ link on the Business Intelligence page of the Cloud Home.

Oracle_Cloud

Account Tab

This example demonstrates how to create a report that drills directly to the main Account Tab of a Customer, and also to the Opportunity Sub-tab.

The general pattern for the URL to be used is:

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM&TF_subTabName=Overview&TF_AccountPartyId=123456

 This URL can be broken into 4 parts.

1. The host name for the cloud environment (the port is typically not required).

https://hostname:port

2.  The next part of the URL specifies the Customer Simplified UI container, the CRM ‘Card’ and the CRM ‘Tab’.

/application/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM

3.  The next section defines the name of the Sub-tab to Open.  In this case the ‘Overview’ Sub-tab.  Valid options for this argument are ‘Overview’, ‘Profile’, ‘SalesAccountTeam’, ‘Contacts’, ‘Assets’, ‘Opportunities’, ‘Quotes’, ‘Leads’, ‘Relationships’, ‘Recommendations’, ‘Notes’,  ‘Activities’,  ‘Conversations’.

&TF_subTabName=Overview

4. The final part is the Account ID that will be opened.  This will be built dynamically in the report so that each row in the report will open the specific account id for that report row.

&TF_AccountPartyId=123456

 

Create the Account Analysis

1. In the Analytics Portal, create a new Analysis

New_Analysis

 

2. Select the ‘Sales – CRM Customer Overview’ Subject Area.

 

Account_Subject_Area

3. Add ‘Level 1 Account Name’ from the Customer folder

4. Add ‘Level 1 Account ID’ from the ‘Customer’ folder

5. Add ‘Level 1 Account ID’ (again) from the ‘Customer’ folder

6. Add a filter for ‘Account Status = ‘Active’

Note: The two instances of Account ID will be later edited to create the two deep drill links.

 

Oracle_BI_Answers

 

Apply ‘Look and Feel’

To retain the ‘look and feel’ of an existing Sales Cloud reports;

1. From the ‘Results’ tab select the option to import formatting from another analysis

 

Oracle_BI_Answers

 

2. Select the formatting from an existing sales cloud report.  In this example the formatting from the ‘Top Customers’ report in the /Shared Folders/Sales/Analytic Library/Customer folder was selected.

 

Select_previous_formatting

 

Add Link to Main Account Tab

1. Return to the ‘Criteria’ tab, and select ‘Edit formula’ for the first ‘Level 1 Account ID’ in the report.  This will become the URL that opens the main Account tab.

 

Edit_Formula

2. To build the HTTP link, begin with the following text, starting with a ‘ character to let BI Answers know that what follows is text and not a field from the subject area.

'<A HREF=https://

3. Then insert the fully qualified hostname for your Sales Cloud environment – for instance:

adc2-fapXXXX-crm.oracledemos.com

4. Append the following text, which includes the code to open the ‘Customers’ Card and Tab, and the ‘Overview’ Sub tab, and also to open the AccountPartyId for an, as yet, undefined account.  Note the Sub-tab argument (&TF_subTabName=Overview) is not required when opening the main Customers tab, as the Overview tab is opened by default, but including it makes it easier to subsequently edit to open specific Sub-tabs.

/customer/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM&TF_subTabName=Overview&TF_AccountPartyId='

5. The next section is the dynamic code to build the URL with the specific Account ID.  This is done using the pipe characters to concatenate the account ID from the report, with code that tells the browser to open the link in a new target frame, and what text to display in the link itself:

' || "Customer"."Level 1 Account ID" || ' target= _new>Drill to Acct Ovrw</A>'

6. The first quote closes out the original string and lets BI Answers know that what follows is now dynamic and coming from a field in the Subject Area, followed by the pipe characters.  The “Customer”.”Level 1 Account ID” is the value from the BI Presentation catalog, and then the pipe characters.  Then reopening a quote ‘ for the final piece of the URL.  Note, there is a space between the quote and the word target.  The text ‘Drill to Acct Ovrv’ is what will be displayed in the report for the user to click and can be changed to suit.

The final code should look similar to this:

'<A HREF=https://adc2-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM&TF_subTabName=Overview&TF_AccountPartyId=' || "Customer"."Level 1 Account ID" || ' target= _new>Drill to Acct Ovrw</A>'

 

7. Enter this in the Formula window, change the Folder and Column Headings, and select ‘Contains HTML Markup’ as shown:

 

Account_Overview_Code

It would have been possible to make the actual link text displayed in the report dynamic as well. For instance, this version of the formula would display the Account Level 1 Name as the text for the URL that would then open the Account Tab:

'<A HREF=https://adc-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM&TF_AccountPartyId=' || "Customer"."Level 1 Account ID" || ' target= _new>' || "Customer"."Level 1 Account Name" || '</A>'

 

Add Link to Opportunities Sub-Tab

Since this report will have two different URLs, one to the main Account Tab, and one to a subsequent Sub-tab, we will use the version that displays the static text to make it clearer what the link text will open.

1. Edit the formula for the second ‘Level 1 Account ID’ field.

2. The code below will directly open the ‘Opportunities’ Sub-tab as highlighted.

'<A HREF=https://adc2-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_CUSTOMERS_CRM&TF_subTabName=Opportunities&TF_AccountPartyId=' || "Customer"."Level 1 Account ID" || ' target= _new>Drill to Oppty</A>'

 

3. Change the Folder and Column Headings, and select ‘Contains HTML Markup’.

 

Account_Opportunity_Code2

 

Set Column Properties

The final step is to tell BI Answers that the two calculated column data fields should be displayed in HTML format.

1. For each of the new URL fields, select ‘Column Properties’

Column_Properties

 

2. In the ‘Data Format’ tab, select to Override the Default Data Format and select ‘HTML’.

HTML_Format

 

Test and Save Report

1. Run the report and test the two drill links and make sure they open the correct Customer Account screen, both in the Overview section and also direct to the Opportunity sub-tab.

Account_Report_Output

2. Save the report into a sub-folder within /Shared Reports/Custom Reports/

3. Make a note of the name.  This will be needed to find the reports from the Sales Cloud Interface.

Oracle_BI_Catalog

 

Run Report in Sales Cloud

1. Return to the Sales Cloud Simplified Interface and select the ‘Analytics’ tab

2. Search for the report, either by entering the full name or partial text.

3. Once located, make it a favorite by selecting the ‘star’ symbol.  This allows the report to be easily found in the favorite bar for future use.

Fusion_Applications 

 

4. Run the report, and select one of the ‘Account Opportunity’ entries to demonstrate how the deep drill will directly open the Opportunity Sub-tab of the main Account Tab.

 

Fusion_Applications

 

Notice how the Opportunity Sub-tab is opened directly.

Fusion_Applications

Opportunity Tab

The format for building the URL to open the Opportunity Tab is:

https://hostname:port/customer/faces/CrmFusionHome?tabToOpen=MOO_OPPTYMGMTOPPORTUNITIES_CRM&cardToOpen=MOO_OPPTYMGMTOPPORTUNITIES_CRM_CARD&TF_subTabName=Quotes&TF_skipToEditOptyId=123456

This direct page link opens the Quotes subtab on the Edit Opportunity simplified page.

The allowable values for Sub-tab for Opportunities are ‘Summary’, ‘Quotes’, ‘Contact’, ‘OpptyTeam’, ‘OpptyPartner’, ‘Activities’, ‘Notes’, ‘Assessments’, ‘Leads’, ‘Conversations’

This is an example of the completed formula to drill directly to the main Opportunity Tab.

'<A HREF=https://adc2-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=MOO_OPPTYMGMTOPPORTUNITIES_CRM_CARD&tabToOpen=MOO_OPPTYMGMTOPPORTUNITIES_CRM&TF_skipToEditOptyId=' || "Opportunity"."Opportunity ID" || ' target= _new>Drill to Oppty</A>'

This example includes the additional subTabName code to drill to the Opportunity Team Sub-tab.

'<A HREF=https://adc2-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=MOO_OPPTYMGMTOPPORTUNITIES_CRM_CARD&tabToOpen=MOO_OPPTYMGMTOPPORTUNITIES_CRM&TF_subTabName=OpptyTeam&TF_skipToEditOptyId=' || "Opportunity"."Opportunity ID" || ' target= _new>Drill to Oppty Team</A>'

 

Activity Tab

The general syntax to open the Activity Tab for a specific account is as follows.

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=ZMM_ACTIVITIES_CRM_CARD&tabToOpen=ZMM_ACTIVITIES_ACTIVITIES_CRM&TF_ActivityId=123456

There is the option to link directly to specific types of activities by changing the value for the parameter

tabToOpen=

To open a list of all tasks use the pattern:

&tabToOpen=ZMM_ACTIVITIES_TASKS_CRM

To link to appointments (calendar view) use the pattern:

&tabToOpen=ZMM_ACTIVITIES_APPOINTMENTS_CRM

An example of the formula code for a BI Answers report is shown below.

'<A HREF=https://adc2-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=ZMM_ACTIVITIES_CRM_CARD&tabToOpen=ZMM_ACTIVITIES_ACTIVITIES_CRM&TF_ActivityId=' || CAST( "Activity"."Activity Id" AS char(40)) || ' target= _new>Drill to Activity</A>'

Notice that the “Activity”.”Activity Id” is cast to a char(40).  This field is defined as numeric in the RPD and thus cannot be concatenated until it is changed to a string value.  Without this the report will fail with this error:

[nQSError: 22020] Function Concat does not support non-text types. (HY000)

Windows7_x86

 

Contact Tab

The general syntax to open the Contact Tab for a specific account is as follows.

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=HZ_FOUNDATIONPARTIES_CONTACTS_CRM_CARD&tabToOpen=HZ_FOUNDATIONPARTIES_CONTACTS_CRM&TF_subTabName=Overview&TF_ContactPartyId=123456

As in previous examples, the &TF_subTabName=Overview code is not required if opening the main Tab, but for any Sub-tab the syntax is required, with the following being allowed values: ‘Overview’, ‘Profile’, ‘Team’, ‘Opportunities’, ‘Leads’, ‘Assets’, ‘Relationships’, ‘Recommendations’, ‘Notes’, ‘Activities’, ‘Conversations’.
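As an illustration, a contact deep link formula can follow the same pattern as the Account examples above. The subject area column name used here ("Contact"."Contact ID") is an assumption; substitute the contact party ID column exposed in your subject area. The Household and Lead tabs described in the following sections can be handled the same way using TF_HouseholdPartyId and TF_LeadId respectively.

'<A HREF=https://adc2-fapXXXX-crm.oracledemos.com/customer/faces/CrmFusionHome?cardToOpen=HZ_FOUNDATIONPARTIES_CONTACTS_CRM_CARD&tabToOpen=HZ_FOUNDATIONPARTIES_CONTACTS_CRM&TF_subTabName=Overview&TF_ContactPartyId=' || "Contact"."Contact ID" || ' target= _new>Drill to Contact</A>'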

Household Tab

The general syntax to open the Household Tab for a specific account is as follows.

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=ZCM_CUSTOMERCTRINFRA360_GROUPS_CRM_CARD&tabToOpen=ZCM_CUSTOMERCTRINFRA360_GROUPS_CRM&TF_subTabName=Overview&TF_HouseholdPartyId=123456

Allowed values for the Sub-tab element are ‘Overview’, ‘Profile’, ‘SalesAccountTeam’, ‘Contacts’, ‘Opportunities’,
‘Assets’, ‘Leads’, ‘Relationships’, ‘Notes’, ‘Activities’, ‘Conversations’

Lead Tab

The general syntax to open the Lead Tab for a specific account is as follows.

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=MKL_LEADS_CARD&tabToOpen=MKL_LEADS&TF_subTabName=Summary&TF_LeadId=123456

Allowed values for the Sub-tab element are ‘Summary’, ‘Contacts’, ‘Qualification’, ‘Team’, ‘Activities’, ‘Notes’, ‘Opportunities’, ‘Conversations’, ‘Analytics1’, ‘Analytics2’, ‘Analytics3’

Custom Objects

The general syntax to open the Custom Objects is as follows.

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=CRM_CUSTOM_CARD_XXXX&tabToOpen=CRM_CUSTOM_TAB_XXXX&TF_ObjectId=123456

Replace XXXX with the custom object’s API name using all upper case letters. For example, TROUBLETICKET_C. Obtain the API name from the object’s overview page (click the object’s node in the Custom Objects tree in Application Composer).
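For example, using the TROUBLETICKET_C object mentioned above, the URL pattern becomes (the object ID value is illustrative):

https://hostname:port/application/faces/CrmFusionHome?cardToOpen=CRM_CUSTOM_CARD_TROUBLETICKET_C&tabToOpen=CRM_CUSTOM_TAB_TROUBLETICKET_C&TF_ObjectId=123456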

Note: Direct page links to custom object sub-tabs are not supported.

 

Summary

This article explained the concept of ‘Deep Link’ (direct page link) capabilities within Oracle Sales Cloud, and how to create new reports that let users quickly open relevant content within the Simplified UI.

Node.js – Invoking Secured REST Services in Fusion Cloud – Part 1


Introduction

This post focuses on invoking secured Fusion Cloud RESTful services using Node.js. Part 1 is explicitly focused on the “GET” method. The assumption is that the reader has some basic knowledge of Node.js. Please refer to this link to download and install Node.js in your environment.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, using a non-blocking, event-driven I/O paradigm. It runs on a single-threaded event loop and leverages asynchronous calls for various operations such as I/O. This is an evolution from the traditional stateless web based on the stateless request-response paradigm. For example, when a request is sent to invoke a service such as REST or a database query, Node.js will continue serving new requests. When a response comes back, it jumps back to the respective requestor. Node.js is lightweight and provides a high level of concurrency. However, it is not suitable for CPU-intensive operations as it is single threaded.

Node.js is built on an event-driven, asynchronous model. The in-coming requests are non-blocking. Each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.

 

Main Article

The internet media type commonly used by RESTful services is JavaScript Object Notation (JSON). JSON is a lightweight data-interchange format and a standard way to exchange data with RESTful services. It is not only human readable, but also easy for machines to parse and generate. For more information on JSON, please refer to this link.

JSON Samples:

Simple Data Structure
var Employee = {
"name" : "Joe Smith",
"ID" : "1234",
"email" : "joe.smith@oracle.com"
};
Data in Arrays
var emailGroups = [{
"email" : "email1@myCompany.com",
"name" : "Joe Smith"
},
{
"email" : " email2@myCompany.com ",
"name" : "Don Smith"
}];

 

Security

The RESTful services in Oracle Fusion Cloud are protected with Oracle Web Services Manager (OWSM). The server policy allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager (OAM) token service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

The client must provide one of the above policies in the security headers of the invocation call for authentication. The sample in this post is using HTTP Basic Authentication over SSL policy.

 

Node.js HTTP Get Request

In general, there are two functions available to invoke HTTP/S secured REST services with the GET method:

  • http.get() – This is a native HTTP/S API and supports only the GET method.
  • http.request() – This function is designed to make HTTP/S calls and supports multiple request methods such as GET, POST, PUT, DELETE, etc.

HTTP GET Module

This native API implicitly calls http.request with the method set to GET and calls request.end() automatically. Two parameters for this method define what REST service is being invoked and how. The following snippet demonstrates the construct for an HTTP/S invocation:

var client = require('https')
request = client.get(options, function(response) {…}

The above snippet uses the HTTPS protocol. The “require” statement selects either the HTTP or HTTPS module.

Options

The “options” parameter is an object or string that includes the following information:

  • host – A domain name or IP address of the server to issue the request to.
  • port – Port of the remote server.
  • path – Uniform Resource Identifier (URI)
  • HTTP headers – HTTP headers that must be sent with the request, such as authorization
  • Certificates – certificates such as the CA cert for SSL

For more information on the “options” object, refer to the following link.

This is a typical example of constructing ‘options’:

var options = {
ca: fs.readFileSync('myCert'),
host: 'hostname.mycompany.com',
port: 443,
path: '/hcmCoreApi/atomservlet/employee/newhire',
headers: {
'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
}
};

 

function(response)

This is a callback parameter that acts as a one-time listener for the response event, which is emitted when a response to this request is received.

To get the response, add a listener for ‘response’ to the request object. The ‘response’ will be emitted from the request object when the response headers have been received. The ‘response’ event is executed with one argument which is an instance of http.IncomingMessage.

During the ‘response’ event, one can add listeners to the response object; particularly to listen for the ‘data’ event.

If no ‘response’ handler is added, then the response will be entirely discarded. However, if you add a ‘response’ event handler, then you must consume the data from the response object, either by calling response.read() whenever there is a ‘readable’ event, or by adding a ‘data’ handler, or by calling the .resume() method. Until the data is consumed, the ‘end’ event will not fire. Also, until the data is read it will consume memory that can eventually lead to a ‘process out of memory’ error.

Note: Node does not check whether Content-Length and the length of the body which has been transmitted are equal or not.

The following sample has implemented three events:

  1. Data – an event to get the data.
  2. End – an event to know the response is completed.
  3. Error – an event to capture the error and trigger your error handling logic.
request = http.get(options, function(res){
var body = "";
res.on('data', function(chunk) {
body += chunk;
});
res.on('end', function() {
console.log(body);
})
res.on('error', function(e) {
console.log("Got error: " + e.message);
});
});

 

HTTP Request Module

The request module is designed to make various HTTP/s calls such as GET, POST, PUT, DELETE, etc. The http.get() implicitly calls http.request set to GET and calls request.end() automatically. The code for GET method is identical to http.get() except for the following:

  • The http.get() is replaced with “http.request()”
  • The “option” object has a method property that defines the HTTP operation (default is GET)
  • The request.end() is explicitly called to signify the end of the request
Sample Snippet:
var options = {
ca: fs.readFileSync('myCert'),
host: 'hostname.mycompany.com',
port: 443,
method: 'GET',
path: '/<fusion_apps_api>/employee',
headers: {
'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
}
};
var request = http.request(options, function(res){…});
….
request.end();

 

Response Status Codes and Headers

The HTTP status code is available from the response event function, function(res). For example:

res.statusCode

 

The HTTP response headers are available from the same response event function as follows:

res.headers

 

Parsing JSON Response

The JSON response message can be parsed using the JSON object. JSON.parse() parses the string returned from the RESTful service. For example:

JSON.parse(responseString)
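As a minimal sketch, the status code check and JSON parsing are typically combined in the 'end' handler; the attribute accessed on the parsed object (result.items) is an illustrative assumption, not a documented Fusion response field:

res.on('end', function() {
    // Only parse the payload when the call succeeded
    if (res.statusCode === 200) {
        var result = JSON.parse(responseString);
        console.log(result);          // the full parsed object
        // console.log(result.items); // e.g. a collection attribute, if the service returns one
    } else {
        console.log('Request failed with HTTP status ' + res.statusCode);
    }
});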

 

Sample Code for http.get()

var uname = 'username';
var pword = 'password';
var http = require('https'),
fs = require('fs');
var options = {
ca: fs.readFileSync('MyCert'),
host: 'host.mycompany.com',
port: 10620,
path: '/<fusion_apps_api>/employee',
headers: {
'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
}
};
request = http.get(options, function(res){
var responseString = "";
res.on('data', function(data) {
responseString += data;
});
res.on('end', function() {
console.log(responseString);
})
res.on('error', function(e) {
console.log("Got error: " + e.message);
});
});

Sample Code for http.request()

var uname = 'username';
var pword = 'password';
var http = require('https'),
    fs = require('fs');

var options = {
    ca: fs.readFileSync('hcmcert1'),
    host: 'host.mycompany.com',
    port: 10620,
    path: '/<fusion_apps_api>/employee',
    method: 'GET',
    headers: {
     'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
   }         
};

var    request = http.request(options, function(res){
    console.log(res.headers);
    var responseString = '';
    res.on('data', function(chunk) {
         console.log(chunk);
         responseString += chunk;
    });
    res.on('end', function() {
        console.log(responseString);
    })
    res.on('error', function(e) {
        console.log("Got error: " + e.message);
    });
});

request.end();

Conclusion

This post demonstrates how to invoke secured Fusion Cloud REST services using Node.js. It also provides a basic introduction to the JSON format and how to parse a JSON response in Node.js. The sample code is a prototype and must be further modularized for re-usability.

 

How to resolve “DSP Taglib did not match expected version” error


In Oracle Commerce when an upgrade or patch is applied that updates either a TLD or jar associated with a tag library you might see an error in the logs like this:

The version of the DSP Taglib from the web-app "CRS" found within <path to jar>/dspjspTagLib1_0.jar did not match the expected version. Please update all copies of the DSP Taglib with the version in $DYNAMO_HOME/DAS/taglib/dspjspTaglib /1.0/lib/ and re-assemble your application. Enable loggingDebug on this component and restart to see which resource differs.

This happens because the version of the tag library (taglib) in your running web-app does not match the version of the updated taglib in DAS.

When you upgraded or ran the patch, the installation changed the taglib files in the ATG installation directory only. Your application still has the old version of the tag libraries. You need to update the taglibs in your application so they match the version of the patched or updated libraries.
In J2EE the container looks for Tag files in four locations: 

1. Directly inside WEB-INF/tags directory

2. Inside a sub-directory of WEB-INF/tags

3. Inside the META-INF/tags directory which is inside a JAR file that is inside WEB-INF/lib

4. Inside a sub-directory of META-INF/tags which is inside a JAR file that’s inside WEB-INF/lib

taglibs

To fix this problem you need to update your project’s tag libraries, i.e. copy the DAS tag libraries into your application.

To find all copies of the tag library descriptors, you could run this at a Linux command prompt:

$> find . -name \*.tld

Screenshot-1

Now that you know which projects have a tag library, the next step is to update their contents. You could create a script as in the example below to update all tag libraries in your web-app.

This script copies tag libraries to the web-app (project). Put this content in a file called refreshJ2EE.sh. The following was written for a Linux environment. If you have another OS then change the content accordingly.

 

#!/bin/bash
# ==================================================================================================
#  refreshJ2EE.sh : Refresh the ATG OOTB assets copied into the projects.
# ==================================================================================================

# After a fix pack or hotfix is applied to the ATG installation, this script will refresh the
# copies of OOTB assets that may have been altered.
# J2EE standard

function refreshJ2EE {
    PROJECT=$1
    WAR=$2
    echo ' '
    echo Updating ${PROJECT} J2EE project ...
    echo ' '

    # dsp and JSTL Tag Library Definition files
    cp -v ${ATG_HOME}/DAS/taglib/jstl/1.1/tld/c.tld                       ${PROJECT}/j2ee/${WAR}/WEB-INF/tags/c.tld
    cp -v ${ATG_HOME}/DAS/taglib/dspjspTaglib/1.0/tld/dspjspTaglib1_0.tld ${PROJECT}/j2ee/${WAR}/WEB-INF/tags/dspjspTaglib1_0.tld
    cp -v ${ATG_HOME}/DAS/taglib/jstl/1.1/tld/fmt.tld                     ${PROJECT}/j2ee/${WAR}/WEB-INF/tags/fmt.tld
    cp -v ${ATG_HOME}/DAS/taglib/jstl/1.1/tld/fn.tld                      ${PROJECT}/j2ee/${WAR}/WEB-INF/tags/fn.tld

    # dsp and JSTL tag classes
    cp -v ${ATG_HOME}/DAS/taglib/dspjspTaglib/1.0/lib/dspjspTaglib1_0.jar ${PROJECT}/j2ee/${WAR}/WEB-INF/lib/dspjspTaglib1_0.jar
    cp -v ${ATG_HOME}/DAS/taglib/jstl/1.1/lib/standard.jar                ${PROJECT}/j2ee/${WAR}/WEB-INF/lib/standard.jar
    cp -v ${ATG_HOME}/DAS/taglib/jstl/1.1/lib/jstl.jar                    ${PROJECT}/j2ee/${WAR}/WEB-INF/lib/jstl.jar
    cp -v ${ATG_HOME}/DAS/taglib/json/0.4/lib/json-taglib-0.4.jar         ${PROJECT}/j2ee/${WAR}/WEB-INF/lib/json-taglib-0.4.jar
}

# CA Asset Manager UI
function refreshCA {
    PROJECT=$1
    WAR=$2
    echo ' '
    echo Updating ${PROJECT} CA project ...
    echo ' '

    cp -v ${ATG_HOME}/AssetUI/taglibs/asset-ui/tld/asset-ui-1_0.tld       ${PROJECT}/j2ee/${WAR}/WEB-INF/tags/asset-ui-1_0.tld
    cp -v ${ATG_HOME}/AssetUI/taglibs/asset-ui/lib/asset-ui-1_0.jar       ${PROJECT}/j2ee/${WAR}/WEB-INF/lib/asset-ui-1_0.jar

    refreshJ2EE ${PROJECT} ${WAR}
}

Then source the script and invoke the function, replacing the project and war with your project name and war file name:

$> source refreshJ2EE.sh
$> refreshJ2EE <project> <war>

Some organizations are very concerned with tight timelines and may feel the temptation to avoid this update. They are trying to reduce the number of moving parts and therefore the scope of regression testing. Some may argue that these errors don’t produce major malfunctions, and that therefore they can operate temporarily with these errors in the logs. However, it is very important to maintain an updated and supported environment. There are many side effects that could occur as a result of running an environment with mismatching versions of the tag libraries. A couple of examples: JSP pages throwing errors that should not occur, or value corruption in data transferred between components and JSTL. Not to mention the expense and time consumed in troubleshooting. Therefore it is highly recommended that tag library versions always match.

Integrating WebCenter Sites Community-Gadget with Social Networking


 

You can download this paper by clicking here

 

Introduction

 

Recently I have had a few inquiries regarding integrating WebCenter Sites 11gR1 with social sites. There are several questions about whether there is an out-of-the-box integration or not, and if there is an out-of-the-box integration, what exactly does it provide.

One reason for the confusion is that the WebCenter Sites 11gR1 Community-Gadget provides integration with social sites, but the integration is limited to the Community-Gadget only; it is not for the WebCenter Sites delivery web-site in general. The integration is used only by the Community-Gadget to allow a site visitor to comment or provide feedback on existing articles and share the comment or feedback with social sites. This authentication logs a visitor in to the Community-Gadget, not to the WebCenter Sites visitor site. If a web-site’s functionality provides personalization and requires a site visitor to log in on the delivery web-site, that login is separate from the login to the Community-Gadget.

WebCenter Sites 11gR1 Community-Gadget provides three ways of integrating with social sites, and this is well documented in the WebCenter Sites Developer’s Guide. I am writing this blog to detail what integration is provided and to give links to the detailed step-by-step procedures for configuring it. This blog applies to the WebCenter Sites 11gR1 release.

 

Integrating Community-Gadget with Social Networking

WebCenter Sites Community-Gadget provides integration with social sites. This integration allows a WebCenter Sites visitor to authenticate using a social login – Facebook, Twitter, Janrain, Google, or Oracle Mobile and Social Access Service (OMSAS)[1]. This is useful for visitors with existing social profiles who want to use their social profile to comment on an article, provide feedback, or rate an article, and share the same comment or feedback with their social network.

 

To authenticate a visitor for WebCenter Sites’ Community-Gadget, the developers have the following options:

* Use the built-in support for Facebook, Twitter, and Google, which is available ready-to-use with the Community-Gadgets web application.

* Use the authentication hub Janrain to enable access to the most prevalent providers available online. Janrain is a SaaS solution that provides integration services with a number of online authentication providers such as Facebook, Twitter, Google, and several others.

* Use the authentication hub, Oracle Mobile and Social Access Service (OMSAS), to enable access to prevalent providers available online. OMSAS is also a solution that provides integration services with a number of online authentication providers such as Facebook, Twitter, Google, and others.

 

Integration with Facebook:

Integration with Facebook requires that developers first set up a Facebook application and specify the Community Servers’ production URL. This is done on the Facebook site at https://developers.facebook.com/apps.

Once this is done, the developers need to configure Facebook Application Authentication properties on the Community-Gadgets Web Application on WebCenter Sites delivery system.

The detailed step-by-step instructions for these are given in the WebCenter Sites developers’ guide chapter: Integrating with Facebook at URL: http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD427    [2] 

 

Integration with Twitter:

Integration with Twitter is similar to integration with Facebook. First the developers set up a Twitter application and specify the Community Servers’ production URL. This is done on the Twitter site at https://dev.twitter.com/

Once this is done, the developers need to configure Twitter Application Authentication properties on the Community-Gadgets Web Application on WebCenter Sites delivery system.

The detailed step-by-step instructions for these are given in the WebCenter Sites developers’ guide chapter: Integrating with Twitter at

URL: http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD435   [3]

 

 

Integration with Google:

Integration with Google is also similar to integration with Facebook and Twitter. First the developers set up a Google application and specify the Community Servers’ production URL. This is done on the Google site at https://code.google.com/apis/console

Once this is done, the developers need to configure Google Application Authentication properties on the Community-Gadgets Web Application on WebCenter Sites delivery system.

The detailed step-by-step instructions for these are given in the WebCenter Sites developers’ guide chapter: Integrating with Google at

URL:   http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD6386   [4]

 

Integration with Janrain:

Janrain is a service provider that integrates with a number of social sites and third-party services. The client must subscribe to the Janrain service and choose which third-party services to integrate with. WebCenter Sites fully integrates with the Janrain service. To use Janrain, the client must have an account with Janrain, then configure a Janrain application, and finally configure the Janrain Application Authentication properties on the Community-Gadgets Web Application on the WebCenter Sites delivery system.

The detailed step-by-step instructions for integration with Janrain are given in the WebCenter Sites developers’ guide chapter: Integrating with Janrain at

URL:   http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD440   [5]

 

Integration with Oracle Internet Access Service and Oracle Access Manager

To integrate WebCenter Sites with Oracle Access Manager, the developers need to:

* Configure WebLogic Server to support Mobile and Social Identity Provider

* Enable Mobile and Social (M&S) Service on OAM

* Define Internet Identity Providers for OMSAS

* Create a new Mobile & Social Application Profile

* Enable integration of OMSAS with WebCenter Sites Community-Gadget

The detailed information for these steps is given in the WebCenter Sites Developer Guide in the Chapter: Integrating with Oracle Internet Access Service at

URL:   http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD6397   .[6]

 

Summary

To summarize, WebCenter Sites 11gR1 Community-Gadget provides three ways to integrate with social sites. However, this integration is only for use with Community-Gadget, and is not for authenticating a visitor for WebCenter Sites live delivery web-site.

 

 

 

[1] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD425

[2] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD427

[3] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD435

[4] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD6386

[5] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD440

[6] http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/configcommunity.htm#WBCSD6397

 

You can download this paper by clicking here

 

Development Patterns in Oracle Sales Cloud Application Composer, Part 2


Introduction

In Part 1 of this post (http://www.ateam-oracle.com/development-patterns-in-oracle-sales-cloud-application-composer-part-1), we used the experiences of three tradespersons – a plumber, a carpenter, and an electrician – to make the case that planning is critical before building or remodeling a house. During their brief hypothetical conversation while working on a job site, they convinced each other that formal planning, normally executed in building construction projects by drafting blueprints, ensures that all of the individual sub-systems work together efficiently in the completed house. We used the gist of their conversation to reinforce the necessity of planning in software development, especially when mapping out how all of the individual components of a software development project will work together, as much as possible optimizing the relationships among the various components. This kind of planning is as much a fundamental requirement for successful software development projects as blueprints are for building construction.

Laying out the structural framework for more complex software development projects greatly increases the odds of successful outcomes. This should come as no surprise, but nonetheless planning is often given short shrift or even ignored entirely.  Also generally accepted, but also occasionally ignored, is the practice of not re-inventing the wheel with every new project. With both building construction and software development, it would be redundant to start the planning and design stages of every new project from scratch. Normally, proven and optimized patterns are available to jumpstart planning and design, and they should be utilized whenever possible. In fact, the main goal of Part 1 was to suggest a framework for how global functions and trigger functions could interact in an Oracle Sales Cloud (OSC) Application Composer extensibility project, using the Oracle Documents Cloud Service (DOCS) as a representative integration target.

The plan for Part 2 (even blog posts benefit from plans!) is to continue exploring the relationships between the global function library we started to build and other extensibility artifacts, adding new features to the extensibility project with an additional trigger function, again using the Oracle Documents Cloud Service (now in version 1.1 of the REST API) as the integration target. If they are designed correctly, the global functions should be able to support the addition of new features without major refactoring.

Availability of New REST Resource in DOCS: Folder Sharing

Think of the global function library as similar to the building’s foundation. With the foundation in place, the fundamental means of interacting with Oracle Documents Cloud Service REST services from Sales Cloud is ready to support the superstructure, which in the case of Application Composer usually takes the form of object trigger functions. With working trigger functions (covered in Part 1 of the post) that support three outbound calls to the Oracle Documents Cloud Service – (1) creating a dedicated document folder when a new opportunity is created, (2) renaming the folder if and when the opportunity is renamed, and (3) deleting the folder with its content if the opportunity is deleted — the extensions are functional/usable and can at least be tested end-to-end even if they are not quite ready for a production rollout.

One set of new features added to version 1.1 of the DOCS REST API allows for programmatic sharing of folders and documents. To round out the integration with Sales Cloud, we would like to take advantage of the new feature by writing a trigger function that adds or removes contributors from the DOCS folder set up for the opportunity whenever a new team member is added to or removed from the opportunity. Adding this new trigger function will be an additional test of how well the global functions are designed. If we can implement the trigger function with minimal effort, it is a good sign that the global functions have been built correctly.

Implementing the New Feature

As a refresher, below is the sequence of how the global functions work together when making a generic REST call from an object function or object trigger function:

  1. Prepare request payload if required.  Required format: Map<String, Object>
  2. Call global function: callRest( String restHttpMethod, String urlExtension, Map requestPayload)
    1. Inside callRest function: use map2Json function to convert requestPayload Map to JSON format.
  3. General error checking after function returns (e.g. check that response is returned in the expected format)
  4. Process response: json2Map(responsePayload) converts JSON response to Map
  5. Detailed error checking based on response content
  6. Process converted Map content returned in response as needed (function-specific)
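As a minimal Groovy sketch of this sequence, reusing the share call that is built later in this post (it assumes the Part 1 global functions are exposed to trigger code as adf.util.callRest and adf.util.json2Map, that userGuid and docFolderGuid are already in scope, and that the errorCode attribute checked below exists in the DOCS response, which is an assumption):

// 1. Prepare the request payload as a Map<String, Object>
def reqPayload = [userID: userGuid, role: 'Contributor', message: 'adding you to Opportunity folder']
// 2. Call the global REST wrapper; callRest converts the Map to JSON internally via map2Json
def respPayload = adf.util.callRest('POST', '/shares/' + docFolderGuid, reqPayload)
// 3. General error checking: make sure a response came back at all
if (respPayload) {
  // 4. Convert the JSON response into a Map for easier handling
  def respMap = adf.util.json2Map(respPayload)
  // 5 and 6. Detailed, function-specific checking of the converted content
  if (respMap?.errorCode && respMap.errorCode != '0') {
    println 'Share call returned error code: ' + respMap.errorCode
  }
}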

In Sales Cloud Application Composer, the steps necessary for adding folder sharing support can be used as a prototype for adding virtually anything exposed in a REST API. Being able to address multiple sets of requirements is part of the advantage of having well-crafted global functions.

Below are the steps for designing and incorporating the new feature, including details and discussion of options:

  1. Identify the affected Sales Cloud object in the transaction. In this case we know we are working with an Opportunity or something related to an Opportunity (either a parent, a child, or another object with a defined relationship).
  2. Decide on the best-fit trigger event. Normally all but a few of the sixteen or so triggers available can be eliminated.
  3. Create the trigger function and write the code. This step may require a few iterations.
  4. Add error handling code. How this step is implemented is going to depend on whether or not any supporting global functions exist for error handling.

Identify the affected Sales Cloud object in the transaction that will trigger the outbound web service call. Typically, it is easiest to work from the top down to identify the object for which to write the trigger function. For example, in the folder sharing case, the known top-level object is Opportunity. In OSC Release 8/9 there are eleven candidate objects, consisting of four child objects and seven objects with defined one-to-many relationships. (The lists of child and related objects are shown on the overview page for the top level object.) It is obvious, as it will be in the vast majority of cases, that the object of interest is Opportunity Team Member. The trigger will be fired whenever a team member is added or removed from the parent object. In the small number of cases where it may be impossible to isolate just one object, opening up the candidate child or related objects and examining the fields should lead to identifying (or eliminating) the object as the candidate for a trigger function.

Decide on the best-fit trigger event. For Release 8 and Release 9, refer to Table 1 for the available object triggers.

Table 1: Object Trigger Events for Groovy Function Code*

Event              Fires When?
After Create       New instance of an object is created
Before Modify      Field is initially modified in an existing row
Before Invalidate  Field is initially modified in an existing object, or when a child row is created, removed, or modified
Before Remove      Attempt is made to delete an object
Before Insert      Before a new object is inserted
After Insert       After a new object is created
Before Update      Before an existing object is modified in the database
After Update       After an existing object is modified in the database
Before Delete      Before an existing object is deleted in the database
After Delete       After an existing object is deleted in the database
Before Commit      Before pending changes are committed
After Commit       After pending changes are committed
Before Rollback    Before pending changes are rolled back
After Rollback     After pending changes are rolled back
After Changes      After all changes have been posted but before the transaction is committed

*NOTE: Not all trigger events are exposed for every object. Minor variations exist across application containers and object types.

There are a number of behavioral caveats around the use of trigger events. The primary consideration is whether or not the trigger function code makes data updates; this largely dictates the correct event to use. For example, if a function is updating a field value it does not make any sense at all to do that in any of the “After…” events, as the main database transaction will have taken place already. From a performance perspective, this forces another set of transactions to the database, which is bad enough, but to add insult to injury, all validation code will run another time needlessly. In the worst case, triggers could be called repeatedly, resulting in an endless loop (if App Composer did not have safeguards in place to prevent this from happening).
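
As a minimal illustration (the custom field name below is hypothetical and not part of this example's data model), a default value would be assigned in a "Before Insert" trigger so that it is persisted as part of the same database transaction, rather than in an "After..." event:

// Hypothetical "Before Insert" trigger on Opportunity (sketch only)
if (ApprovalStatus_c == null) {   // ApprovalStatus_c is an assumed custom field
  ApprovalStatus_c = 'PENDING'    // assignment is saved with the same transaction
}
// Doing the same assignment in an "After Update" trigger would force a second
// database round trip and re-run validation logic needlessly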

In some cases business logic will help make an informed choice of the best trigger event. For example, it makes little sense to add an opportunity team member to a DOCS folder as a contributor unless it is certain that the database transaction which adds or deletes the team member completes successfully. Since the team member trigger function is not making any data updates, it is not only safe, but also logical, to use one of the “After…” events.

Create the trigger function and write the code. Obviously this step will probably take up the bulk of the effort. To ease the amount of work required, look at what the global function(s) require for input parameters as well as what the global functions return. Structurally and functionally, that discovery process, in conjunction with the business need, will dictate a large part of what the trigger function needs to accomplish.

Below is the code for the new trigger function that adds a folder contributor:

println 'Entering AddFolderContributorTrigger'
def docFolderGuid = nvl(Opportunity?.DocFolderGuid_c, '')
if (docFolderGuid) {
  def restParamsMap = adf.util.getDocCloudParameters()
  // prepare URL extension
  def urlExt = '/shares/' + docFolderGuid
  // prepare request payload  
  def userGUID = adf.util.getDocCloudServiceUserGUID(Name)
  def reqPayload = [userID:(userGUID), role:'Contributor', message: 'adding you to Opportunity folder']
  // make REST call (this is POST method) and save response payload 
  def respPayload = adf.util.callRest('POST', urlExt, reqPayload) 
  // convert JSON to Map for ease of handling individual attributes 
  def respMap = adf.util.json2Map(respPayload) 
  //TODO: better error checking required here 
  def errorCode = respMap.errorCode 
  if (errorCode != 0) { 
    // error occurred 
  } else { 
    println 'Team member successfully added as contributor'
  } 
} else { 
  println 'Opportunity folder has not been created for ' + Opportunity?.Name 
} 
println 'Exiting AddFolderContributorTrigger'

By leveraging the global functions, the object trigger script to add a contributor to a DOCS folder when a new opportunity team member is added requires only a small amount of code. After checking that a DOCS folder exists for the opportunity, the script builds a URL extension string, obtains a DOCS user GUID by querying the service with a REST call, and populates a Map of required key:value pairs; both the URL extension and the Map are then fed to the global callRest function. The response from the function is converted from JSON to a Map and rudimentary error checking is performed.

Below is the code for the new trigger function that removes an existing folder contributor:

println 'Entering RemoveFolderContributorTrigger'
def docFolderGuid = nvl(Opportunity?.DocFolderGuid_c, '')
if (docFolderGuid) {
  def restParamsMap = adf.util.getDocCloudParameters()
  // prepare URL extension
  def urlExt = '/shares/' + docFolderGuid + '/user'
  // prepare request payload
  def userGUID = adf.util.getDocCloudServiceUserGUID(Name)
  def reqPayload = [userID:(userGUID), role:'Contributor', message: 'removing you from Opportunity folder']
  // make REST call (this is DELETE method) and save response payload 
  def respPayload = adf.util.callRest('DELETE', urlExt, reqPayload) 
  // convert JSON to Map for ease of handling individual attributes 
  def respMap = adf.util.json2Map(respPayload) 
  //TODO: better error checking required here 
  def errorCode = respMap.errorCode 
  if (errorCode != 0) { 
    // error occurred 
  } else { 
    println 'Team member successfully removed from DOCS folder'
  }
} else { 
    println 'Opportunity folder has not been created for ' + Opportunity?.Name 
} 
println 'Exiting RemoveFolderContributorTrigger'

The script to remove a folder contributor is similarly compact, and relies upon the global functions in the same way as the add-contributor script. Obviously, the REST DELETE method is specified instead of a POST, as per the DOCS REST specifications.

One additional function to obtain a specific user GUID, or unique id, from DOCS is needed.  This function takes a search string, representing a user name, as input, and after making a REST call into the DOCS users resource, returns the user GUID.  Below is the code for the function:

println 'Entering getDocCloudServiceUserGUID'
def returnGUID = ''
// prepare URL extension
def urlExt = '/users/items?info=' + searchString
// no request payload
def reqPayload = [:]
// make REST call (this is GET method) and save response payload
def respPayload = adf.util.callRest('GET', urlExt, reqPayload)
// convert JSON to Map for ease of handling individual attributes
def respMap = adf.util.json2Map(respPayload)
//TODO: better error checking required here
def errorCode = respMap.errorCode
if (errorCode != 0) {
   // error occurred
   println 'DocCloudService error; errorCode ' + errorCode
} else {
   // get user GUID
   returnGUID = respMap.items[0].get('id')
}
println 'Exiting getDocCloudServiceUserGUID'
return returnGUID

It may make the most sense to create this as an object function under the Opportunity object, or perhaps as a global function.  The differences are minor, and function location is a matter of developer preference.

Add error handling code. Given the simple integration architecture set up for this example — a SaaS Sales Cloud instance making REST calls into a PaaS Documents Cloud Service instance — admittedly there are not many options available, other than reporting that something bad happened, when unexpected errors occur at runtime. In an environment where user actions – for example saving a new opportunity – trigger synchronous outbound web service calls, interrupting the user experience by blocking the database transaction may not be optimal.

The error handling options are few: (1) continue with the Sales Cloud transaction, in this case completing the create or edit of an Opportunity object, (2) back out of the Sales Cloud transaction if any failures are detected in the web service calls, or (3) take a hybrid approach and give the user a certain degree of control over what to do after an error. Due to the non-critical nature of the transactions between Sales Cloud and DOCS in this example, reporting the error and moving on suffices. If there is a need to create a DOCS folder for an Opportunity after the fact, it would be possible to create an Action button that calls into the same global functions with the same logic as the object trigger functions.
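
For completeness, below is a sketch of option (1) applied to the add-contributor trigger shown earlier; the try/catch wrapper and log messages are illustrative additions, not part of the original scripts:

// "Report and move on" sketch, assuming the same global functions and
// variables (urlExt, reqPayload) used in the trigger scripts above
try {
  def respPayload = adf.util.callRest('POST', urlExt, reqPayload)
  def respMap = adf.util.json2Map(respPayload)
  if (respMap?.errorCode != 0) {
    // log the failure but allow the Sales Cloud transaction to complete
    println 'DOCS folder sharing failed; errorCode ' + respMap?.errorCode
  }
} catch (Exception e) {
  // unexpected failure (connectivity, parsing, etc.): report and continue
  println 'Unexpected error calling DOCS: ' + e.message
}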

Summary

Planning out what work is done in global functions and what gets done in object trigger scripts, if done correctly, can lead to major efficiencies when adding new features to an existing extensibility project. This example used existing global functions that make REST calls from Sales Cloud to Documents Cloud Service to implement support for maintaining a group of DOCS folder contributors as team members are added or removed from the Opportunity team. Due to prior planning and following guidelines laid out in Part 1 of this post, object trigger functions were extremely lightweight and were added to the extensibility project with minimal effort.
