
Debugging CAS with the Endeca RecordStore Inspector


Introduction

When it comes to debugging Endeca Content Acquisition System (CAS) related issues, there are few tools that Endeca developers have at their disposal to aid them in their troubleshooting. Most know to review the CAS service logs, but occasionally an issue arises where a peek inside a record store can be very revealing. If you’re a savvy Endeca developer, you already know that you can export record store content using the “recordstore-cmd” command. Combined with a good text editor, this command can be very useful, but CLIs can be a bit tedious to work with at times, so when I recently ran into such an issue, I decided to write my own visual tool for inspecting Endeca Record Stores, which I aptly named the Endeca RecordStore Inspector. In this article, I introduce the Endeca RecordStore Inspector utility and show how it can be used to debug Endeca CAS-related issues.

Main Article

I was recently assisting with a CAS issue where the CAS Console was reporting failed records for an incremental crawl of a Record Store Merger where one of the data sources contained deleted records. The origin record store was reporting the deleted records correctly, but when the last-mile-crawl merger was run it would report the deleted records as failed records.

For each failed record, I could see the following messages in cas-service.log:

WARN [cas] [cas-B2BDemo-last-mile-crawl-worker-1] com.endeca.itl.executor.ErrorChannelImpl.[B2BDemo-last-mile-crawl]: Record failed in MDEX Output with message: missing record spec record.id

DEBUG [cas] [cas-B2BDemo-last-mile-crawl-worker-1] com.endeca.itl.executor.ErrorChannelImpl.[B2BDemo-last-mile-crawl]: Record failed in MDEX Output (MdexOutputSink-826115787) with Record Content: [Endeca.Action=DELETE]

The messages were not intuitive at first, but I could tell that a different record spec identifier was being used, so to see what was going into the record stores I decided to create a tool for visualizing the contents of a record store in a familiar tabular format. Using this tool, I could see that the records contained both an “Endeca.Id” property as well as the “record.id” property:

However, when one of the source files was removed and the acquisition re-run, the new generation contained the delete records with only the “Endeca.Id” property:

So when the last mile record store merger was run, it didn’t know how to merge the delete records because the record spec identifier (as well as the Record ID property on the data sources) had been changed to “record.id”, thereby producing the above warning message (“missing record spec record.id”) for the DELETE action entries.

Of course, the same diagnosis could have been made using recordstore-cmd and a text editor, but some things are easier done in a GUI. For example, sorting records by a specific column. The RecordStore Inspector allows you to sort on any column, as well as filter which columns are visible using regular expression syntax. You can open two different generations of a record store and compare them side-by-side. You can even export the contents of a record store (with or without filters applied) to a comma-separated value (CSV) text file, Microsoft Excel file, or Endeca Record XML file. These sorts of operations are more difficult to do when using just recordstore-cmd and a text editor, and my goal in creating this tool was to make the Endeca community more productive in their ability to diagnose CAS-related issues on their own.

About the Endeca RecordStore Inspector

The Endeca RecordStore Inspector utility was written using JavaFX 8 and Endeca 11.1, and runs on Windows, Linux, or any environment that supports the Java 8 runtime. Below I’ve provided download links for two versions: a portable version optimized for Windows, and a self-contained Java jar file for all other environments. To run the Windows version, simply extract the contents of the attached archive and double-click on the rs_inspector.exe file. This version includes a Java 8 runtime environment, so there is no need to install Java or Endeca to run it. To run the self-contained jar file, you will need a Java 8 Runtime Environment installed. If one is already present, just copy the attached file below and run the command: “java -jar rs_inspector-1.0-all.jar” to launch the application.

When the application has started you can press CTRL-R to select the record store and generation that you want to view. If your CAS Server runs on a host/port other than localhost:8500, then you can use ALT-S to change the default settings.

Once the record store loads, you can further refine the view using Java regular expression syntax in the column and value fields. This will restrict the columns and rows visible to only records matching the regular expression syntax specified. For example, to view only the columns for “product.id” and “product.short_desc” you can specify a Column Text filter of “product\.id|product\.short_desc” and click the Apply Filter button. To further refine the view to show only products with an id value greater than 3000000, you can use a Value Text filter of “[3-9][0-9]{6}[0-9]*|[1-9][0-9]{7}[0-9]*”.

It is important to point out that this tool currently loads all rows of the selected record store into the table view, so if your JVM doesn’t have sufficient memory to hold all the data, you will receive an OutOfMemoryError. If your environment has enough memory to hold the entire record store, you can increase the JVM heap settings (-Xmx) to match your record store; if your record store is larger than 2GB, you should run the RecordStore Inspector in a 64-bit JVM. If the memory in your environment is limited, you may not be able to load your record store using this tool. Perhaps later versions will offer the ability to incrementally load data into the view. If this is important to you, then please let me know.
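For example, assuming the self-contained jar from the download links below and a 64-bit JVM, you could launch it with a larger heap like this (the 4GB value is just an illustration; size it to your record store):

java -Xmx4g -jar rs_inspector-1.0-all.jar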

Summary

Endeca content acquisition can be somewhat of a black box. To provide some transparency to this process, I’ve created the Endeca RecordStore Inspector. The Endeca RecordStore Inspector is a visual tool intended to aid in the debugging of issues pertaining to Endeca CAS data ingestion. In this article we’ve seen one example of how this tool was used to make sense of a seemingly enigmatic error message, but the applications of this tool are much broader in scope, not only as a debugging aid, but as a medium for understanding Endeca CAS in general.

Below are links to download the Endeca RecordStore Inspector. Please note that this tool is provided “as-is”, without guarantee or warranty of any kind. It is not part of the Endeca or Oracle Commerce product suite, and therefore not supported by Oracle. However, a link to the complete source code is provided below, and you are free to fix any issues or enhance the tool in any way you like.

Download the portable Windows version

Download the self-contained jar version

The source code and latest version of this utility are maintained on GitHub:
https://github.com/dprantzalos/Endeca-RecordStore-Inspector

If you find this tool useful, please let me know.


Building iBeacon based apps with Oracle MAF


Introduction

Bluetooth Low Energy (BLE)-based proximity beacons were first under the spotlight when Apple announced the iBeacon standard. Since then, a lot has changed and there are now multiple beacon vendors and offerings on the market. Enterprises are also looking to add proximity-based intelligence to their mobile offerings. In this article, we introduce you to beacon technology and show you how you can build support for beacon-based interactions in your MAF application. We will also take a look at Oracle’s cloud-based offerings that can enhance this experience and provide valuable 360º insight into the data.

Main Article

In this article, we will quickly review what iBeacons are, the various standards and solutions on the market today, and how you can build standards based support for beacon technology in your MAF applications. We will also show how you can leverage local notifications in MAF (available since MAF v2.1.1) to trigger based on the proximity to beacons even when an application is not running. Although proximity detection and beacon interaction are essentially client oriented activities, Oracle’s cloud offerings play a vital role in transforming these client side interactions into two-way, context-aware, customer conversations for real world applications. We will see how a real world application is built with the Oracle technology that provides context and intelligence support for proximity based applications.

image1

A beacon triggered Local Notification from a MAF application on iOS. The application was in the background and the device locked when the device detected its proximity with a beacon and triggered the alert.

Screenshot_2015-03-28-02-03-55

A beacon triggered Local Notification from a MAF application on iOS. The application was in the background when the device detected its proximity with a beacon and triggered the alert. The device was not locked for the screenshot, but the behavior is the same.

 

iBeacon Basics

iBeacons are, at their core, simple Bluetooth transmitters with a power source that broadcast a signature. This broadcast signal can be picked up by any Bluetooth-capable receiver, like a smartphone, when it is within the beacon’s broadcast range, and the smartphone or tablet can then perform actions based on this proximity information. The iBeacon standard defines that an advertisement packet emitted by a Bluetooth device should include the following fields:

  • UUID (16 bytes): An identifier chosen by the application development team to identify beacons at a high level based on their own use-cases.
  • Major (2 bytes): An identifier to narrow down the beacon to a particular region or subset.
  • Minor (2 bytes): An identifier to further qualify the beacon. The combination of UUID + Major + Minor should uniquely identify the beacon (and therefore the region the beacon is representing).

As an example, Acme Supplies, a nationwide retailer, would choose one UUID across all their stores to uniquely identify their beacons from other beacons that might be nearby. The major number may signify the store number, so the combination of the UUID and the major number now identifies a particular Acme Supplies store. The minor number may be used to identify fine-grained regions within a given store, like sporting goods or home furniture.

Apple Inc., which created the iBeacon specification, describes it as “a new class of low-powered, low-cost transmitters that can notify nearby iOS 7 or 8 devices of their presence.” As such, iOS provides native support and APIs for interacting with transmitters that adhere to the iBeacon specification, which lays down guidelines that Bluetooth device manufacturers have to meet in order to certify their devices as iBeacon compatible.

Android does not have native support for iBeacons like iOS does, but since iBeacons are simple Bluetooth transmitters, apps can be built on Android that directly use the Bluetooth stack provided by Android to interact with Bluetooth devices of any kind – including iBeacons. The closest standard that offers interoperability is the AltBeacon proposal by Radius Networks Inc. This article will show you how to use the AltBeacon Android library and integrate it with a MAF application for iBeacon support on Android as well.

How applications use beacons – Beacon regions

Applications can use beacons in a variety of ways, including providing context awareness and proximity-based cues for people to find nearby locations or interact with their environment. To simplify application development, the concept of a beacon region is used. A beacon region is an abstraction for identifying one or more physical regions based on their beacon attributes, like UUID, Major and Minor. A region can be defined in three ways:

  • With only a UUID: this defines an abstract region that may correspond to multiple physical locations. For example, with the Acme Supplies example above, a region with just the UUID (common for all beacons deployed by Acme in all their stores) may be called “Acme Store Vicinity” and matches the proximity of any Acme store.
  • With UUID and Major: matches all beacons using a specific combination of UUID and Major. For example, the Acme Store located in Redwood Shores, CA; this matches all beacons deployed in that one store.
  • With UUID, Major and Minor: matches a single beacon. For example, the Sporting Goods department in the Redwood Shores location of Acme Store.

Applications keep track of beacon regions and primarily use two techniques to do so: Ranging and Monitoring.

Ranging

Ranging is accomplished by a set of APIs that provide a list of all beacons that are visible to the device (smartphone/tablet) in a given region, together with an estimated proximity indicator and signal strength from the device to each beacon. Ranging gives the application very fine-grained detail on proximity, and iOS also applies some filters to further qualify the proximity as follows:

  • Immediate: Very close to or touching the beacon. Highest confidence level in the proximity measurement.
  • Near: Strong signal reception, indicative of a clear line of sight. Usually 1-3 meters in distance, but obstructions can cause signal attenuation, which may cause the device to report a different proximity range based on signal strength.
  • Far: Lower signal strength. This may not always be due to the signal originating from afar, but may be due to signal attenuation. It indicates that the confidence in accuracy is too low to classify the proximity as Near or Immediate.
  • Unknown: The proximity could not be determined, possibly because there were too few measurement samples to report a proximity measurement.

Ranging is supported when the application is running in the foreground, and has a higher power consumption impact.

Monitoring

Monitoring is a more passive way of getting proximity-based alerts; it works even when the application is in the background, or even when the application process is not running at all. With region monitoring, an application may track up to 20 regions (remember, however, that a region may correspond to multiple physical locations) and can be alerted when the device enters or exits a region. Monitoring does not require the app to run in the foreground, and can be combined with MAF support for local notifications to enable the user to launch the app based on proximity to a region. Monitoring has a lower impact on power consumption, but does not include active signal strength measurements or proximity filters. An application registers the list of regions it wants to be notified about with the underlying APIs, and whenever the OS detects that the device is in range of one of the beacon regions that the application has expressed interest in, it notifies that application. This notification is usually an entry event, which signals that the device just came into range of a region, or an exit event, which signifies the opposite. The OS takes on the responsibility for notifying the interested applications.

General Architecture for Beacon enabled applications

The iBeacon standard relies on an iOS device for intelligent communication, since the beacon itself is a disconnected “lighthouse” that keeps broadcasting the same bits of information (like a signature) repeatedly. Most beacon manufacturers, however, differentiate their products on the market by implementing features outside the iBeacon standard’s specifications. These typically range from sensor-based information, like temperature or altitude at the beacon’s current position, to enterprise-grade cloud platforms to manage the fleet of beacons. Needless to say, these custom extensions are made available through vendor-specific APIs which, when used, will lock the application down to a single beacon manufacturer. The sample that we present here does not leverage any vendor-specific API (and therefore vendor-specific functionality) and sticks to the iBeacon specifications.

This paves the way for a general architecture for implementing basic functionality for applications that interact with beacons. With most applications, there are two fundamental use-cases to achieve: the beacon-capable app needs to discover and keep up to date the beacon signatures it needs to listen to, and the enterprise needs to configure how the application reacts to specific beacons in the context of a business use-case. In other words, the app needs to be decoupled from the beacons themselves while having the ability to discover the beacon signatures it should be listening and reacting to, and the enterprise needs to be able to change how the application reacts to a beacon in order to achieve a business goal.

Implementing iBeacon support for Oracle MAF applications

Since MAF has been designed with easy extensibility and the ability to interface with the native layer using Cordova, developing support for beacons becomes a simple exercise. MAF uses the Cordova layer to facilitate communication bridges from the WebView component, within which a MAF application runs, to the lower-level native APIs. In this exercise, we will use a pre-built Cordova plugin to demonstrate how MAF’s support for Cordova immediately makes your application development faster, and we will also develop extensions to this Cordova plugin to examine how the plugin works and how to interface with the native platform APIs. Since MAF applications should ideally support both iOS and Android, we use a common Cordova interface to expose the different underlying native APIs as a consistent, unified API that the MAF application can use regardless of the platform to which the application is deployed.

In an application designed to support beacons, a lot of use-cases revolve around the ability to provide context-sensitive information to the user inside the app (like walking up to a museum exhibit or a kiosk). However, the true power is harnessed when the application can intelligently inform the user without the user’s active participation; in other words, when the phone is locked and in the user’s pocket or hand bag. MAF, since v2.1.1, has added support for local notifications, a feature that helps you do this. In our example we will cover this scenario and examine how a Cordova plugin, in conjunction with MAF local notifications, can create an application that can react to proximity-based events even when the user does not have the phone in hand.

Therefore, we can split the solution into the mobile app and a service that can manage the business use-case across the fleet of deployed beacons. The mobile app accomplishes the beacon monitoring and ranging, and leverages device sensors and functionality to interact with the user in a context-aware manner. The service application lets business users control how the application reacts when it is within the vicinity of a beacon, including managing the fleet of beacons. For example, a department store might wish to roll out a special offer on, say, home furniture in the northeastern stores alone, based on the user’s loyalty profile. This would translate to identifying the beacons in the “home furnishing” department (minor number) in a set of stores (a set of major numbers) for that store chain/retail brand (UUID). Now the service can attach an offer to these beacons and provide this offer to users based on their loyalty profile.

The general architecture of the application takes the form of:

  • Invoke a REST service on application startup (from a background thread so as not to hinder the start-up performance)
  • The REST service delivers a JSON array containing the beacon identifiers we have deployed and a decision tree to use to deliver notifications.
    • For example, the JSON array might include something simple like a beacon UUID and a String message, which the application can interpret as “when within the vicinity of <UUID> then show a notification with <message>” (a hypothetical sample payload is sketched after this list).
  • The decision tree can be updated at regular intervals to keep it current.
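As an illustration, a minimal version of such a payload might look like the following (the field names and values are hypothetical; the actual shape depends on how you design your REST service):

[
  {
    "uuid"    : "11111111-2222-3333-4444-555555555555",
    "major"   : 1021,
    "minor"   : 7,
    "message" : "Welcome to Sporting Goods - ski gloves are 20% off today"
  }
]

The app can cache this array locally and, when the OS reports an entry event for a matching region, look up the message and raise a local notification.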

The general architecture of the service would be:

  • Provide a REST service that returns information based on the user identity (and further attributes of the identity like the loyalty profile)
  • The REST service delivers a JSON array containing the beacon identifiers we have deployed and a decision tree to use to deliver notifications.
    • For example, the JSON array might include the beacon UUID, Major, and Minor, which identify the home furniture department in a store, and a message about a special offer, which the application can interpret as “when within the vicinity of <UUID> then show a notification with <message>”
  • The decision tree can be updated at regular intervals to keep it current.
  • The application uses Local Notification to let the user know about the event.

iOS approach

Implementing the Cordova plugin for iOS is relatively simple, since iOS natively supports both ranging and monitoring of beacons with its native APIs. The approach here is to simply build the Cordova layer on top of the native support to expose the native APIs to the JavaScript layer. The native API works as follows:

  • The application asks for the user’s permission to use location services.
  • The application invokes a REST service that returns information based on the user identity (and further attributes of the identity like the loyalty profile)
  • The application can now use the Cordova abstraction of the native APIs to signal to the iOS platform the beacon identifier information the app is interested in, and to inform the application when the device is in range of one of those beacons.
    • CLLocationManager – Keeps track of the beacon regions and notifies the app when a beacon that it is interested in is within range.
    • CLBeaconRegion – Looks for devices whose identifying information matches the information you provide. When that device comes in range, the region triggers the delivery of an appropriate notification.
    • CLBeacon – This class represents a beacon that was encountered during region monitoring.
  • The application starts monitoring for beacons providing the API with a list of BeaconRegion objects.
  • Whenever the device is within the vicinity of a beacon, the platform identifies that the application had expressed interest and notifies the application.
    • The application is woken up momentarily to receive the message if the application was in the background or killed.
  • The application receives the beacon information in this message and can react to the beacon in any way. (In our case, we will use a local notification to alert the user)

The Cordova plugin tries to mimic these classes to keep the programming model familiar and aligned with the Native APIs.

Android approach

Implementing beacon functionality on Android is slightly more involved, mainly because iBeacon is an Apple-specific feature for which the iOS platform provides native support in the form of services and APIs. Android unfortunately does not have the same level of deep platform support for the iBeacon standard published by Apple. We do, however, have access to the Bluetooth stack on the Android device through Cordova. This means that we need to implement some of the API features that iOS provides out of the box as part of our plugin. Fortunately, there is an alternative open standard proposed by Radius Networks, called AltBeacon, which promises interoperability. We will leverage this library in conjunction with the Bluetooth stack on the device to identify Bluetooth signal broadcasts from devices identifying themselves as beacons (as opposed to a wireless headset, speaker, or a fitness tracker) and accomplish both monitoring and ranging functionality. Since this is the same application developed in MAF, just using a different code path when running on Android, the architecture and the JS API we use at the MAF layer will be the same. The complexities of the Android implementation due to the lack of platform support are encapsulated in the Cordova plugin implementation. The sequence of events at runtime would be:

  • The application asks for the user’s permission to use location and Bluetooth functionality at the time of installation.
  • When the application starts up, we start an Android service to monitor for beacons based on the advertisement PDU in the Bluetooth signal broadcast.
    • This is how we can distinguish between beacons and other bluetooth signals.
    • The service is packaged as part of the application itself.
  • Expose an API similar to iOS with abstractions for Monitoring and Ranging.
  • The application level logic remains the same – The app will invoke the same REST service that returns information based on the user identity (and further attributes of the identity like the loyalty profile)
  • When the application moves into range of a beacon, the bluetooth service (and not the platform, like in iOS) will let our app know about that event.

 

Getting Groovy with Oracle Data Integrator: Automating Changes after Upgrading ODI or Migrating from Oracle Warehouse Builder


Introduction

Oracle now provides a migration utility to convert your existing Oracle Warehouse Builder (OWB) mappings to Oracle Data Integrator (ODI) mappings. As customers started to go through this migration effort, we realized that some post migration scripts could help update the migrated repository. Once we had the scripts, we realized they could also be used by all ODI users, whether they had migrated from OWB, or upgraded from an earlier version of ODI, or simply wanted to automate massive changes to their repositories.

We are describing here a number of such scripts that we have co-developed with our customers, and we provide a link to the source code of these scripts so that you can adapt them to your personal needs.

The Groovy Scripts

The scripts are written in Groovy and leverage the ODI SDK. They all leverage SwingBuilder to create a graphical interface used to prompt the user for the necessary parameters: the SDK retrieves the necessary objects from the repository and lets the user select them as needed. We started from examples provided by David Allan in this blog and quickly had enough for a practical interface. One interesting aspect was the ability to dynamically build lists based on existing selections: only list the relevant Folders or Knowledge Modules for the selected Project for instance.
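To give an idea of how little code such an interface requires, here is a minimal, self-contained SwingBuilder sketch (not taken from the actual scripts; in the real scripts the list of values is fetched from the repository through the SDK):

import groovy.swing.SwingBuilder

def swing = new SwingBuilder()
// In the real scripts this list is built from repository objects retrieved with the ODI SDK
def projects = ['PROJECT_A', 'PROJECT_B']

def dlg = swing.dialog(title: 'Select a project', modal: true) {
    vbox {
        label('Project code:')
        comboBox(id: 'projectCombo', items: projects)
        button('OK', actionPerformed: { swing.projectCombo.topLevelAncestor.dispose() })
    }
}
dlg.pack()
dlg.visible = true   // modal: blocks here until the dialog is dismissed
println('Selected project: ' + swing.projectCombo.selectedItem)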

The scripts that we are presenting here are the following:

List of groovy scripts

We have a conservative approach in these scripts: we usually limit the scope of changes to a single folder within a given project. It would not take much to alter the scripts to perform the same operations for all folders in a project, or for the entire repository if needed.

Even if all these scripts were originally put together as part of migration or upgrade efforts, they can all be used in any ODI repository where similar changes need to be automated. For instance, a global change of the Knowledge Modules used in the mappings can be performed independently of any migration or upgrade effort.

 

Caution: all these scripts modify the content of your repository. Before trying any one of these, make sure that you have a backup of your repository, that you can locate that backup and that you know how to restore the backup. Undoing the operations performed by these scripts could be quite painful otherwise…

 

Using the Groovy scripts in ODI

To make sure that we do not have to write usernames and passwords in the scripts, all scripts must be launched from an ODI Studio that is already connected to the repository: the Groovy code retrieves the Studio connection to the repository. So please make sure that you are indeed connected to the repository when you run these scripts.
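As a minimal illustration of that pattern (project and object names are placeholders), a script can obtain SDK finders directly from the pre-bound odiInstance variable that ODI Studio exposes to Groovy scripts:

import oracle.odi.domain.project.OdiFolder
import oracle.odi.domain.project.finder.IOdiFolderFinder

// odiInstance is pre-bound by ODI Studio and already points at the connected repository
def folderFinder = (IOdiFolderFinder) odiInstance.getTransactionalEntityManager().getFinder(OdiFolder.class)
folderFinder.findByProject('MY_PROJECT').each { folder ->
    println('Folder: ' + folder.getName())
}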

Once you have downloaded the Groovy scripts, go to the ODI menu Tools/Groovy/Open Script and select the script that you want to import into the ODI Studio.

open_groovy_script

To run a script, make sure that the mouse cursor is in the script and click the ODI Run button in the Studio toolbar:

groovy_run

If you create your own scripts, be aware that by default ODI saves the scripts in <user home directory>/odi/oracledi. On Windows 7, for instance, that is: C:\Users\<username>\AppData\Roaming\odi\oracledi.

All scripts take advantage of the ODI Studio console to output details about the objects that are parsed in the repository and to keep you informed about the changes that are made to these objects (any print or println command will be displayed in the console). The example below shows the output for the script that renames mappings. In this run, we replace ‘Custs’ with ‘Customers’ for all mappings that contain the string ’LOAD_’:

groovy_console_display

 

Merging ODI Models

When would you use this script?

When the migration utility converts an OWB model to ODI, it creates a new ODI model for each OWB project. If you have different projects that use the same tables, this results in unnecessary clutter in your models, with the same tables appearing in multiple models. If you were to try and merge the models manually, you would have to open all the mappings and change the source or target datastores to point to one single model. This can be quite a daunting task. Using this script, you can select an original model name, a new model name, and the script updates the mappings for you.

 

Caution: the script only updates the mappings; it does not modify the models. As a consequence, you have to make sure that all necessary datastores are available in the new model as the script will not create them if they are missing (but this could easily be added with a little more code).

 

An example of the result of running this script is illustrated in the screenshots below. The first screenshot shows that, in the original mapping, the target table comes from the model COPY_OF_ORA_SALES.

original_mapping_model

When we run the script we ask that the mapping now use the table from the ORA_SALES model instead of the original COPY_OF_ORA_SALES model (make sure that you close your mappings as you run the script):

groovy_change_model

When we re-open the mapping, it does indeed point to the ORA_SALES model:

groovy_changed_model

Using the script

The source code for this script is available here.

When you run the script, simply select the project code, the original model code, as well as the code of the new model that you want to use. If you only want to impact a subset of your mappings, you can enter a string that helps the script identify the mappings (no wildcards in the current release, though that would be possible with regular expressions).

groovy_change_model_run

This script will parse all datastores used in the mappings that match the search criteria, as well as the ‘Begin Command’ and ‘End Command’ of these mappings if the corresponding options are selected. All matching datastores will be replaced with the datastores of the model selected for the consolidation.

Code highlight

The replacement of the datastore in the old model with the datastore in the new model is quite trivial: once we have identified the datastore, the bindTo() method of the IMapComponent object takes care of everything for us.

if (originalModel.getCode()==originalModelCode){
    // Find matching datastore in new model
    newDataStore=datastoref.findByName(datastoreResourceName,newModelCode)
    if (newDataStore==null){
        println("WARNING **** Datastore : "+datastoreResourceName+ " missing in new model "+newModelCode)
    }else{
        //replace the component with new binding?
        println("--- replacing with: "+datastoreResourceName+" from "+newModelCode)
        component.bindTo(newDataStore, false)
    }
}

 

Renaming Mappings

When could you use this script?

It is not rare for mature projects to see an evolution in naming conventions, but oftentimes old objects keep using the old convention. A migration or upgrade can be a great opportunity to review mapping names and make the necessary corrections. This script will allow you to add a prefix and/or a postfix to the mapping names, in addition to replacing string patterns in the mapping names.

 

Using the script

The code for this script is available here.

In the example below, we are looking for mappings that contain the string ‘Load ’ (including the trailing space) and we change the mapping names to insert ‘Initial ’ at the beginning of the mapping name: ‘Load xxx’ becomes ‘Initial Load xxx’.

groovy_rename_mapping

Note that in this case, a mapping named ‘Final Load’ would not be renamed since we included a trailing space in the string used to identify the mappings.

Code highlight

Once we have identified a mapping that we want to rename, the new name is set with the Mapping.setName() method:

// Make sure that the mapping matches the names we are looking for
if (map.getName().contains(MappingNamePattern)){
    MappingName=(map.getName()=~ReplaceString).replaceAll(ReplaceWith)
    MappingName=Prefix+MappingName+Postfix
    println("New name: "+ MappingName)
    map.setName(MappingName)
    bNameChanged=true
}

 

Replacing Knowledge Modules

When would you use this script?

Replacing a KM in a number of mappings is a task that can be needed after a migration from OWB to ODI, as well as independently of any migration effort: you may want to change from a project-specific KM to a global KM, or you may want to change from an original ODI 11g KM to a new ODI 12c component KM.

In the case of the migrations from OWB, the migration tool will select the default KM for the type of operation that needs to be performed (insert, update, SCD). In one instance of an OWB to ODI migration, we needed a KM that would delete all records that matched the select statement created on the source. We had a custom KM for this, but short of a script, we would have had to manually edit all mappings.

Even if changing a KM requires only a few clicks, the task can become tedious if you have to do this for hundreds of mappings. This script will automate the operation.

Using the script

The code for this script is available here.

This script is pretty straightforward. All project and global KMs are available for selection in the graphical interface: pick the KM that you want replaced as well as the new KM, and click OK to perform the substitution. The console lists the mappings where the KMs are substituted.

groovy_replace_km

Note that in the example above, any mapping with a name that includes the substring _INCR_ has its IKM replaced (if the KM matches the other search criteria of course).

 

Code highlight

We parse the physical nodes in search of KMs. IKM nodes can include both IKMs and CKMs. The nodes are part of a MapPhysicalDesign object.

if (map.getName().contains(MappingNamePattern)){
    // get the list of physical designs:
    PhysicalDesignList=map.getExistingPhysicalDesigns()
    for (pd in PhysicalDesignList){
        // get the list of physical nodes:
        PhysicalNodesList=pd.getPhysicalNodes()
        for (pn in PhysicalNodesList){
            if (pn.isLKMNode()){
                // LKM only
                CurrentLKMName=pn.getLKMName()
                if ((CurrentLKMName!=null) && (KMType=='LKM') && (CurrentLKMName==OriginalKMName)){
                    LKMs = LKMf.findByName(NewKMName);
                    myLKM = LKMs.iterator().next();
                    pn.setLKM(myLKM)
                    bKMChanged=true
                }
            }else if (pn.isIKMNode()){
               (…)

 

Replacing Optimization Context

The optimization context is something that you do not usually want to modify as a rule, as it impacts the code generation. However, as you migrate existing projects from OWB, you may have to change this value.

When would you use this script?

The use for this script is relatively limited. The importance of the optimization context is that if you have a different topology layout between your development and production contexts, you can force ODI to behave as if all environments were the same (typically the same as the production context). For more on optimization contexts, you can look at this earlier post.

In ODI 12c, the optimization context is part of the parameters of the Deployment Specifications, available in the Physical tab of the mappings:

 

groovy_replace_optimization_context

Using the script

The code for this script is available here.

Again, a pretty straightforward script: simply select the original Optimization Context and the new one to use as a replacement.

groovy_replace_optimization_context_ui

 

Code highlight

This sample is pretty basic: it is only a matter of comparing the existing Optimization Context to the one we are searching for, and performing the replacement when we find a match. This operation is done at the MapPhysicalDesign level:

for (pd in PhysicalDesignList){
    currentOptimizationContext=pd.getOptimizationContext()
    print("OptimizationContext: "+currentOptimizationContext.getName())
    if (currentOptimizationContext.getName()==OriginalOptimizationContext){
        pd.setOptimizationContextByName(NewOptimizationContext)
        println(" replaced with "+NewOptimizationContext)
    }else
        println(" not replaced")
}// Physical Design

 

Conclusion

Groovy is a very convenient way to automate massive changes that would be extremely tedious if we were to perform them manually. By leveraging the ODI SDK, we also ensure that the changes that we are making in the ODI repository are valid from an application perspective and from a referential integrity perspective.

 

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

 

References

The OWB-ODI migration documentation can be found here: http://docs.oracle.com/middleware/1212/odi/ODIMG/toc.htm. Note that the utility comes as patches that you have to apply to OWB and ODI. Details for the patch numbers are available on the Oracle Support web site. After logging in, look for doc ID 1503877.1: How To Migrate From OWB To ODI

The documentation for the ODI SDK (version 12.1.3) can be found here: http://docs.oracle.com/middleware/1213/odi/reference-java-api/toc.html

The documentation for Swingbuilder (used for the user interface) can be found here:

http://groovy.codehaus.org/Swing+Builder (scroll to the bottom of the article for a list of available widgets)

David Allan’s introduction on Groovy Swingbuilder can be found here:

https://blogs.oracle.com/dataintegration/entry/odi_accelerator_launchpad_getting_groovy

David Allan’s reference blog on the SDK for ODI 12c (one you really want to bookmark!):

https://blogs.oracle.com/dataintegration/entry/odi_12c_mapping_sdk_the

Oracle VM Storage Repository Replication for On-Premise Fusion Applications Disaster Recovery


Introduction

Introducing Disaster Recovery into a Fusion Applications environment can increase the management complexity – Oracle VM offers reliable solutions to simplify ongoing maintenance and switchover/failover processes. The solution discussed here is an example deployment for Fusion Applications as an active-passive environment across two datacenters. It is an addition to the previously published article Disaster Recovery for On-Premise Fusion Applications.

This deployment is applicable to all Oracle VM deployments that use NFS- or SAN-based storage repositories. The underlying storage array needs to provide storage replication functionality. The performance and latency requirements for this part of the DR solution are a lot less strict, as Fusion Applications mostly relies on the database and, to a certain extent, on shared storage to store application data.

 

OVM-Replication

Ongoing Maintenance

An integral prerequisite to a working Disaster Recovery solution for Fusion Applications is the proper maintenance of servers/VMs and their corresponding operating systems. If, for example, a kernel patch gets installed on the Primary Site, this patch also needs to be installed on the Secondary Site to make sure switchover/failover works properly. This requirement includes user management, configuration changes, patching, and more, to satisfy the symmetrical deployment requirement and minimize risks during switchover/failover.

Storage Repository Replication

Oracle VM offers an effective solution, in conjunction with the underlying storage technology, to cut down on this type of maintenance in virtualized environments. This approach basically replicates the entire disks of all the VMs across to the Secondary Site. In case of a disaster at the Primary Site, all VMs can simply be started at the Secondary Site and Fusion Applications can resume service without having to reconfigure the environment. Even though the switchover/failover process takes longer compared to an active-active Oracle VM approach (where the VMs on the standby side are already started, but FA is not), the achieved cost and maintenance reduction makes this solution ideal for environments with a less critical recovery time objective.

Switchover Process

The Switchover Process as well as the Failover Process is very similar to the Standard Fusion Applications Disaster Recovery Process. The only addition is basically the steps to start and/or stop Oracle VM Manager and the Oracle VMs at each of the sites. This diagram describes the flow of the switchover operation.

 

image2

Failover

Assuming the Primary Site is offline after the disaster, the failover process will need to be executed. This process is basically a subset of the switchover process. As soon as the primary site is available again, the storage replication can be re-enabled to allow mirroring of the storage repository back to the primary site. This part of the procedure is described in the storage vendor’s documentation.

image3

The environment at the secondary site operates completely autonomously from the primary site after the disaster has occurred. It is important to make sure that the primary site does not restart when it comes back online; this can be achieved with additional tools like Oracle Site Guard.

OVM-Replication-failed

Oracle VM Implementation Considerations

In order to implement this Disaster Recovery solution with minimal problems, it is recommended to have multiple independent Oracle VM Managers with the same UUID, operated in an active-passive way. To retrieve the UUID of the Oracle VM Manager, simply log in and click on Help – About… Alternatively, the UUID is stored in /u01/app/oracle/ovm-manager-3/.config

image5
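A quick way to check the UUID from the command line (assuming the default installation path mentioned above) is:

grep -i uuid /u01/app/oracle/ovm-manager-3/.config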

This can also easily be achieved by following the steps described in the documentation ‘3.7. Running Oracle VM Manager as a Virtual Machine’ – just create a second VM at the secondary site, using the same steps as at the primary site, to deploy the second Oracle VM Manager. Make sure not to run both Oracle VM Managers at the same time, to avoid conflicts should you not have a separate management network per site.

The server pools at the primary and secondary site are independent of each other. For simplified management you can discover and register both pools (the one at the primary site and the one at the secondary site) in both Oracle VM Managers. Make sure that the correct storage repositories are presented to the correct server pool. The easiest way to achieve this is to present each repository only to the relevant server pool.

image6

Should clustering at the server pool level be enabled, please note that the pool file system does not need to be replicated. Only the actual storage repositories need to be replicated across datacenters. It is also required that the WWIDs of the LUNs be identical across both datacenters.

For additional simplification, the repository of the Oracle VM Manager can be deployed using Data Guard across both datacenters, removing additional rediscovery work after a switchover.

 

 

Further Reading

 

Disaster Recovery for On-Premise Fusion Applications (http://www.ateam-oracle.com/disaster-recovery-for-on-premise-fusion-applications/)


Whitepaper: Oracle VM 3: Overview of Disaster Recovery Solutions  (http://www.oracle.com/technetwork/server-storage/vm/ovm3-disaster-recovery-1872591.pdf )

Avoiding LibOVD Connection Leaks When Using OPSS User and Role API


The OPSS User and Role API (oracle.security.idm) provides an application with access to identity data (users and roles), without the application having to know anything about the underlying identity store (such as LDAP connection details).

For new development, we no longer recommend the use of the OPSS User and Role API – use the Identity Governance Framework (IGF) APIs instead. However, if you already have code which uses the User and Role API, that code is still supported for the time being.

The OPSS User and Role API supports both LDAP and non-LDAP backends (such as XML files); however, for production use, LDAP backends are the most common, and are strongly recommended. When using an LDAP backend, the OPSS User and Role API retrieves the LDAP connection details from the security providers defined in the WebLogic Security Realm.

Now there are two different modes in which the OPSS User and Role API may operate – virtualize=true, in which all LDAP access is via LibOVD; and virtualize=false, in which LibOVD is not used and the User and Role API’s own LDAP access code is used instead. The difference is that without LibOVD, the User and Role API can only access a single LDAP server (the first defined in the WebLogic security realm); whereas, with LibOVD enabled, the User and Role API uses LibOVD to join the users and roles of two or more LDAP servers into one. A common use case is where you have different LDAPs for different user communities (for example, customers might have user accounts in OUD, while employees might instead be stored in Active Directory).

When using the OPSS User and Role API with LibOVD, you need to be careful not to cause connection leaks. The most common symptom of a connection leak is this error: “NamingException: No LDAP connection available to process request for DN”

Connection leaks can be caused through the use of any of the methods of oracle.security.idm.IdentityStore which return an oracle.security.idm.SearchResponse object, for example the search(oracle.security.idm.SearchParameters) method. When any of these methods is called, the SearchResponse object that is created is associated with one or more backend LDAP connections in the LibOVD connection pools. These connections will not be returned to the pool until one of the following occurs:

  • you iterate until the end of the response – i.e. call hasNext() until it returns false
  • you call the close() method on the response

If you do not do either of the above, you will leak LibOVD backend LDAP connections. The JavaDoc for the close() method points this out:

Closes this response object and frees all resources associated with it. After close() is called the response object becomes invalid. Any subsequent invocation of its methods will yield undefined results. Close() should be invoked when discarding a response object so that resources are freed. If a response proceeds to its natural end (i.e. when hasNext() returns false) then close() will be called internally. There is no need to call close() explicitly in such case.
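In practice, then, a defensive pattern is to close the response explicitly whenever the iteration might not reach its natural end. Below is a minimal sketch; idStore and params are assumed to come from your existing code (for example, from the identity store service configured in the WebLogic security realm), and the flag avoids a redundant close() when hasNext() has already returned false:

import oracle.security.idm.Identity
import oracle.security.idm.IdentityStore
import oracle.security.idm.SearchParameters
import oracle.security.idm.SearchResponse

void searchWithoutLeaking(IdentityStore idStore, SearchParameters params) {
    boolean exhausted = false
    SearchResponse response = idStore.search(params)
    try {
        while (response.hasNext()) {
            Identity identity = response.next()
            // ... work with the identity here ...
        }
        // hasNext() returned false, so the response has closed itself internally (see the JavaDoc above)
        exhausted = true
    } finally {
        if (!exhausted) {
            // An exception interrupted the iteration: close explicitly so the
            // backend LDAP connection is returned to the LibOVD pool.
            response.close()
        }
    }
}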

 

Some people decide to use netstat, or packet capture tools such as Wireshark or tcpdump, to try to diagnose this issue. Now, in itself, that is not a bad line of thought; however, you may not be able to detect this type of leak with those tools. That is because, after some time (depending on your configuration), either the LDAP server or some middle box such as a load balancer may shut down the underlying network socket; however, the in-memory structure which holds the network socket will still be checked out of the connection pool. There is a cleaner thread which runs inside LibOVD that clears closed network sockets out of the pool; however, if a connection is marked as in use, the cleaner thread will not remove it from the pool, even though the underlying network socket is closed. In this case, you will not see any build-up of actual network connections, even as your LibOVD connection pools are becoming exhausted. One symptom you may see, however, is a lack of reuse of network connections.

Eventually, when the pool is completely filled with leaked connections, firstly all clients will hang for the maximum pool wait time, then “NamingException: No LDAP connection available to process request for DN” will occur. At this point, LibOVD may (depending on the version) decide to shutdown and restart the connection pool, thus recovering from the problem. However, the connection leak will continue, and the hang followed by errors will occur again in the future.

Oracle GoldenGate Best Practices: Replication between Cloud and On-Premise Environments with Oracle GoldenGate Version 12c


Introduction

Oracle GoldenGate Extract, Replicat, and associated utilities enable you to create, load, and refresh one database from another database. This white paper uses examples of Oracle-to-Oracle homogeneous replication, but that does not preclude the source and target environments being heterogeneous databases. These examples also show data being replicated between a cloud and an on-premise environment. The example environments can easily be reversed for replication between an on-premise and a cloud environment; it depends on your specific requirements.

Main Article

 

This document will address different approaches to extracting data from a Cloud environment and replicating it to an On-premise environment.

Cloud to On-Premise using a Mid-Tier:
In this approach, a mid-tier is utilized to eliminate the requirement for the cloud or on-premise environment to have a direct connection into the other environment. Only the mid-tier allows a direct network connection, to the specific ports defined by the PORT and DYNAMICPORTLIST parameters in the manager parameter file.

Cloud to On-Premise utilizing a Passive Extract Pump:
In this approach, the cloud environment is allowing a connection to very specific ports from the trusted on-premise environment.  The data is extracted from the cloud database with a standard extract, but the data is pushed to the on-premise environment using a Passive extract pump.  The passive extract pump is stopped and started from the on-premise environment.  The connection to the passive extract pump is initiated by the on-premise environment to specific ports defined by the PORT and DYNAMICPORTLIST parameters in the manager parameter file.

Cloud to On-Premise utilizing a Passive Extract:
In this approach, the cloud environment allows a connection to very specific ports from the trusted on-premise environment. The data is extracted from the cloud database with a passive extract. The passive extract is stopped and started from the on-premise environment. The connection to the passive extract is initiated by the on-premise environment to specific ports defined by the PORT and DYNAMICPORTLIST parameters in the manager parameter file. The caveat with this approach is that if the network connection goes down, the extract process will abend. As a result, data will not be extracted until the connection is restored.
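For illustration, the Manager parameter file on the side that accepts the incoming connections might restrict the listening ports like this (the port numbers are examples only; use whatever your firewall rules allow):

PORT 7809
DYNAMICPORTLIST 7840-7850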

Extract, Extract pump, and Replicat work together to keep the databases in sync in near real-time via incremental transaction replication. In all examples this function is accomplished by:
1)  Starting the Manager program in all OGG installed systems.
2)  Adding supplemental transaction log data for update operations on the source system.
3)  Running the real-time Extract to retrieve and store the incremental changed data from the Oracle tables into trail files on the target Unix system.
In the first 2 approaches, running the real-time Passive Extract pump to send incremental changed data from the cloud environment or the mid-tier to the target on-premise system.

After initial synchronization,
4)  Start the real-time Replicat to replicate extracted data.

Once Extract and Replicat are running, changes are replicated perpetually.

 

Summary

Hopefully, this white paper has provided a quick overview of the options you can utilize to replicate data between cloud and on-premise environments. Undoubtedly, you will eventually fine-tune this process in your own environment.
Reference the Oracle Database 12.1 Documentation for additional information on the Oracle 12.1 RDBMS.
Reference the Oracle GoldenGate 12c Reference Guide and the Oracle GoldenGate 12c  Administration Guide for additional information on:

Extract Parameters for Windows and Unix
Replicat Parameters for Windows and Unix
Passive Extract for Windows and Unix
Extract Management Considerations
Replicat Management Considerations

 

Download:

Oracle Support Document 1996653.1 – Oracle GoldenGate Best Practices: Replication between Cloud and On-Premise Environments with Oracle GoldenGate

 

A-Team Mobile Persistence Accelerator Release 12.1.3.2 Now Available!


Introduction

A new release 12.1.3.2 of the open source A-Team Mobile Persistence Accelerator (AMPA) has been made available on GitHub. AMPA is a productivity toolkit that works on top of Oracle’s Mobile Application Framework (MAF) and auto-implements a range of best practices in MAF development, with a focus on mobile persistence. In this article we will discuss the main new features of release 12.1.3.2. If you are new to AMPA, you might first want to read the introductory “Getting Started” article.

Main Article

RAML Support

RESTful API Modeling Language (RAML) is an emerging standard for describing and documenting RESTful APIs. RAML is easy to read for humans, and at the same time structured enough for computers to parse the API definitions. The existing AMPA wizard that generates the MAF persistence layer and data objects based on sample resources has been extended to support RAML. You can now choose to specify a sample resource or sample resource payload as before, or you can specify a RAML document, and the wizard will then suggest the data objects, parent-child relations and CRUD resources based on the content of the RAML document.

RAMLWizardPage

For this to work, the RAML document must contain payload specifications, either in the form of a JSON schema reference, a sample payload included in the RAML document, or a referenced RAML payload template file using the !include syntax.
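As a purely hypothetical illustration (resource and file names are made up), a fragment of such a RAML 0.8 document could look like this:

#%RAML 0.8
title: HR API
baseUri: http://example.com/hr/api
/departments:
  get:
    responses:
      200:
        body:
          application/json:
            example: !include department-list-sample.json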

Oracle’s Mobile Cloud Services (MCS) product, to be released later this year, also uses RAML. APIs specified within MCS can be consumed seamlessly by the combination of Oracle MAF and AMPA, without the need for any Java coding.

Support for Logging REST Calls

The area where you typically have the most problems during mobile app development is the invocation of your RESTful services. If request payloads or request headers are not specified correctly, the service call will fail. If the response payload has a different structure than expected, processing of the return payload will fail. Troubleshooting these issues around your RESTful services can be cumbersome. Adding logging statements and/or debugging the RestPersistenceManager.invokeRestService method line-by-line is typically the best way to fix these issues, but can be time-consuming. This new AMPA release will make your life much easier when it comes to debugging REST calls. There is a new property logWebServiceCalls, which by default is set to true when you create the persistence-mapping.xml using the REST wizard.

LogWebServiceCallsProperty

When this property is set to true, AMPA logs the details of all REST calls into a database table in the SQLite database. AMPA also includes a reusable feature archive that you can add to your application to easily view the details of the REST calls at runtime. This includes the request headers, request and response payloads, any invocation error messages, and last but not least the duration of the REST call in milliseconds. To add this reusable feature to your application, you simply add the feature archive WebServiceCallsFeature.jar as an application library:

LogFAR

This jar file can be found under jdev_home/jdev/extensions/oracle.ateam.mobile.persistence. After you have added the feature archive file, you can add the feature to your maf-application.xml file:

LogFeature

An even easier way is to use the MAF User Interface Generator that comes with AMPA. If you generate a default CRUD interface for your bean data controls, the web service call feature will be automatically added to your application. You can then start your app, test the CRUD functionality of your own features, and switch to the “WS Calls” feature to see all the REST calls that have been made, including any errors that might have occurred during invocation.

LogPages

In the list view you can use the quick search to filter on a specific HTTP method or on a specific resource. Do not forget to hit the search button again when you switch back and forth between your own features and this logging feature: new REST calls will not appear automatically after you have launched the logging feature. If you prefer to build your own logging feature, you can simply create a bean data control from the class oracle.ateam.sample.mobile.logging.WebServiceCallService as shown below.

LogBeanDC

Obviously, you will typically only use this kind of logging during development, so we recommend setting this property to false when deploying your app into production. Having said that, you might want to temporarily switch on this feature at runtime to help debug end user problems. You can do so by setting the application-scoped variable logWebServiceCalls (#{applicationScope.logWebServiceCalls}) to true, as shown in the sketch below.
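For illustration, here is a minimal sketch of how such a runtime toggle could be implemented from a managed bean or application lifecycle listener, assuming the standard MAF EL utilities (oracle.adfmf.framework.api.AdfmfJavaUtilities) are available; the class and method names are illustrative only, not part of AMPA:

import javax.el.ValueExpression;
import oracle.adfmf.framework.api.AdfmfJavaUtilities;

public class WsLoggingToggle {

    // Switch REST-call logging on at runtime, for example from a hidden "enable diagnostics" action.
    public void enableWebServiceCallLogging() {
        ValueExpression ve =
            AdfmfJavaUtilities.getValueExpression("#{applicationScope.logWebServiceCalls}", Boolean.class);
        ve.setValue(AdfmfJavaUtilities.getAdfELContext(), Boolean.TRUE);
    }
}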

Editing of Persistence Mappings

A new visual editor is provided that allows you to more easily edit existing data object mappings. The editor can be invoked by right-mouse-clicking on a project node in the application navigator of your MAF application and then choosing Edit Persistence Mapping from the popup menu.

EditPMPopupAndPage

This editor is basically a simplified version of the REST wizard. You now only need to run the REST wizard if you want to add new data objects based on sample resources or a RAML document to your application. The Resource Details page is an enhanced version of the old Resource Parameters page.

ResourceDetails

In addition to the resource parameters, there are new fields to configure the structure of the response payload (in case of GET resources), or the request payload (in case of non-GET resources). Previously this configuration could only be done by directly editing the persistence-mapping.xml file.

Automatic Invocation of Canonical Resource

The Canonical Resource is the RESTful service to get all details about a specific data object instance. To reduce the size of payloads as much as possible, it is good practice to have two separate RESTful resources: one to return a list of, say, departments, which only includes the “summary” attributes needed to populate the list view page. Then, when the user selects one department and navigates to a detail page, a second REST resource is invoked which returns all additional “detail” attributes of the department, and possibly some child collections like the list of employees in the department that are shown on the detail page. This second “detail” resource is called the canonical resource in AMPA.

We have now added the ability to have AMPA automatically invoke the canonical resource by specifying a triggering attribute in the REST wizard and the Edit Persistence Mapping dialog.

Canonical Trigger Attribute

If the triggering attribute is specified, AMPA will generate code in the getter method of this attribute to invoke the getCanonical method. This way UI developers no longer have to worry about invoking the getCanonical method when navigating from a list view to a more detailed form view; the method call is now handled transparently within the model layer. By default, the canonical method is executed in the background and the UI is refreshed once the REST response has been processed. The sketch below illustrates the pattern.
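To make the pattern concrete, here is a self-contained sketch of the kind of lazy, background canonical invocation described above. It deliberately uses plain Java with illustrative names (Department, invokeCanonicalResource) rather than the actual AMPA base classes, so treat it as an outline of the generated behavior, not the generated code itself:

public class Department {

    private String departmentName;          // "summary" attribute, populated by the list resource
    private String description;             // "detail" attribute, populated by the canonical resource
    private volatile boolean canonicalRequested = false;

    // Triggering attribute: the first read kicks off the canonical (detail) REST call in the background.
    public String getDescription() {
        if (!canonicalRequested) {
            canonicalRequested = true;
            new Thread(new Runnable() {
                public void run() {
                    invokeCanonicalResource();
                }
            }).start();
        }
        return description;
    }

    private void invokeCanonicalResource() {
        // Placeholder for the REST call that would populate the detail attributes
        // and refresh the UI once the response has been processed.
        this.description = "loaded from canonical resource";
    }

    public String getDepartmentName() {
        return departmentName;
    }

    public void setDepartmentName(String departmentName) {
        this.departmentName = departmentName;
    }
}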

Support for Custom Resources in REST Wizard

In addition to CRUD resources, you can now also specify custom resources in the REST wizard or Edit Persistence Mapping dialog. AMPA will generate a corresponding method in your service class for each custom resource and will take care of the low-level code required to invoke the REST resource. The Resource Details page (see above for screen shot) now includes an iconic plus button which launches a dialog to specify the custom resource:

NewCustomResource

You can then drag and drop the custom method as a button onto your page to invoke the custom resource. By using this custom resource facility, your custom REST calls will be included in the standard AMPA data synchronization cycle: if the REST call fails because the device is offline or some other service invocation error occurs, the custom resource invocation is marked as a pending data sync action and will be replayed the next time the user tries to perform a remote write action.

See the release notes for a complete list of all enhancements and bug fixes in release 12.1.3.2.

If you have any questions about the use of AMPA, then please use the Oracle MAF Discussion forum and mention the versions of MAF and AMPA you are using.

Using OAAM Risk Evaluation in OAM Authorization Policies


We recently encountered an interesting requirement about making a decision within an OAM Authorization policy based on the risk evaluation performed by OAAM during the Authentication flow. Considering the interesting nature of the requirement / use-case, I thought I would share details about the implementation approach through this blog post.

Before I go into details about the implementation approach, let me explain the requirement / use-case as an example with a few bullet points:

  • The requirement was to allow users from only selected countries/states to access selected resources after authentication.
  • The user should be authenticated only once.
  • Only certain applications / resources should be restricted, based on the geo-location identified by the IP address.

Using the above-mentioned use case, the idea for this blog post is to provide details about how the risk evaluation performed by OAAM during the Authentication flow can be useful when making a decision as part of an Authorization policy in OAM. The approach would be useful for other use-cases of a similar nature as well.

The implementation approach (using the feature known as Identity Context) that we are going to discuss is based on the assumptions mentioned below:

  • IAM Suite version 11.1.2.2.0 or later
  • Advanced integration between OAM and OAAM using TAP (TapTokenVersion v2.1)

Identity Context (shared between OAM and OAAM) allows context-aware policy management and authorization capabilities built into the Oracle Access Management platform. The Identity Context feature is helpful in securing access to resources using traditional security controls (such as roles and groups) as well as dynamic data established during authentication and authorization (such as authentication strength, risk levels, device trust and the like).

The solution approach is as follows….

  • The OAAM policy will set a risk score (in the Post Authentication Checkpoint) based on the client’s geo location, i.e. if the client’s geo location matches the policy/rule configuration, OAAM will set the risk level/score to a specific number.
  • OAAM will then pass the score (generated from the Post Authentication Checkpoint) as Identity Context to OAM.
  • OAM will read the Identity Context (set by OAAM) and set an authentication response session attribute according to it, i.e. as an Authentication Response in OAM, values from the Identity Context will be stored in session attributes. These session attributes can be checked by OAM Authorization policies.
  • The OAM Authorization policy will take a decision (allow or deny) based on the rule/condition configured to check the risk score available from the session attributes (set by the OAM Authentication response).

Out-of-the-box policies in OAAM generate risk scores like 100, 200, 300, 500, and the OAM Rule/Condition in an Authorization policy allows string comparison on a session attribute with operators such as “equals”, “starts-with”, “contains” and “ends-with”. Our new policy in OAAM can generate a score of 1 for client IP addresses from ABC Countries and 0 for client IP addresses from XYZ Countries. So with the new OAAM policy in place as a Post Authentication Checkpoint, risk scores will be like 101, 201, 301, … for client IP addresses from ABC Countries. This allows us to configure an OAM Rule/Condition in the Authorization policy to check the risk score with the “ends-with” operator.

The risk score generated by OAAM Checkpoint execution will be available from the Identity Context. So, OAAM policies can be configured to generate risk scores in a specific way and, accordingly, the OAM Rule/Condition in the Authorization policy can be configured to take the appropriate action based on the risk score. The above-mentioned logic of adding 1 or 0 to the risk score from the OAAM policies is just one example; a tiny illustration of the resulting “ends-with” check follows below.
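As a plain-Java illustration of that arithmetic and the “ends-with” comparison (this is not OAM or OAAM code, just the logic the condition expresses):

public class RiskScoreEndsWithExample {
    public static void main(String[] args) {
        int baseScore = 300;       // an out-of-the-box OAAM checkpoint score
        int countryRuleScore = 1;  // 1 = request from ABC Countries, 0 = request from XYZ Countries
        String finalScore = String.valueOf(baseScore + countryRuleScore); // "301"
        boolean fromAbcCountries = finalScore.endsWith("1");
        System.out.println(finalScore + " -> from ABC Countries? " + fromAbcCountries);
    }
}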

Following are the steps/changes for the above-mentioned solution approach for the example use case. These steps are only meant to give a basic understanding of the configuration needed to meet the requirements. The important steps/changes to note here, from the perspective of Identity Context, concern the way the Authentication Response configuration is used to save values in session variables and how those values are then used during Authorization.

A)

Configure OAM-OAAM integration using TAPScheme (documentation)

B)

Configure OAM and OAAM for Identity Context (documentation)

  • Change TapTokenVersion to v2.1 in OAM (oam-config.xml). If the third-party TAP Partner is yet to be registered in OAM, the parameter tapTokenVersion="v2.1" can be used with the WLST command "registerThirdPartyTAPPartner" to avoid manual changes in oam-config.xml.
  • Update / add the configuration parameter oaam.uio.oam.dap_token.version=v2.1 in OAAM (Properties section of the OAAM Admin Console).

C)

In OAAM, update Groups/Policies configuration to generate required risk score

  • Create a policy in OAAM with two rules to generate risk score 0 (for requests from XYZ Countries, or in other words, NOT from ABC Countries) or 1 (for requests from ABC Countries)
    • Create a group in OAAM to contain countries ABC. This group will be used in Rules we will add to new OAAM policy.
    • Add ABC Countries to the group.
    • Create a group in OAAM to contain countries XYZ. This group will be used in Rules we will add to new OAAM policy.
    • Add XYZ Countries to the group.
    • Create a new OAAM policy for checkpoint “Post authentication”.
    • Associate the policy with “All Users” so that the policy is executed for each post authentication checkpoint execution
    • Add a new rule “Request from ABC Countries” to policy “TEST: Policy to return specific risk score” for checking request from ABC Countries.
    • Add a condition “Location: In Country group” to rule “Request from ABC Countries”
    • Update the condition with the correct values for the parameters "Is in group" and "Country in country group". Select the country group which was created in the previous steps.
      Rule - Request from ABC
    • Update result for the rule “Request from ABC Countries” to generate score 1
    • Add a new rule “Request from XYZ Countries” (in other words, NOT from ABC Countries) to policy “TEST: Policy to return specific risk score” for checking request from XYZ Countries (in other words, NOT from ABC Countries).
    • Add a condition “Location: In Country group” to rule “Request from XYZ Countries” (in other words, NOT from ABC Countries)
    • Update the condition with the correct values for the parameters "Is in group" and "Country in country group". Select the country group which was created in the previous steps.
      Rule - Request from XYZ
      • or
        Rule - Request from XYZ (Not from ABC)
    • Update result for the rule “Request from XYZ Countries” (in other words, NOT from ABC Countries) to generate score 0
    • Policy with two rules and policy linked to all users to be executed for each post authentication checkpoint execution…
      Policy with two rules..

D)

In OAM, Update Authentication Policy to store OAAM risk score/level in session attribute

  • Add an Identity Context Response in the OAM Authentication Policy
    • Navigate to the Application Domain of the required application.
    • From the Authentication Policies tab select the respective protected policy.
    • From the Responses tab create a new response as follows (name the response accordingly and write it down because it will be used in the authorization policy). The variable $session.attr.risk.level of the Identity Context will contain the risk score set by OAAM.
      OAM AuthN Policy

E)

In OAM, Update Authorization Policy to Allow or Deny based on value for session attribute created in Authentication Policy

  • Update Authorization Policy to take decision based on the session attribute created in Authentication Policy
    • Navigate to the Application Domain of the required application.
    • Navigate to the Authorization Policies tab and select the Protected policy specific for the application needing geo location authorization.
    • Select the Conditions tab and create a new condition. Name the condition as needed to relate to the ABC Countries (risk score ending with 1) requirement. Normally OAAM returns risk scores like 100, 200, 300, 500, so note the Operator is set to “Ends with”: say OAAM returns a risk score of 300, then if the user is coming from the ABC Countries the custom rule adds “1” and the final score would be 301. Therefore, using the Operator “Ends With” and an Attribute Value of “1”, the score 301 is seen as a match, determining that the user is coming from the geo location of the ABC Countries.
      OAM AuthZ Policy B
    • You can repeat the Condition creation for another Condition if you want to create a rule for XYZ Countries (risk score ending with 0).
      OAM AuthZ Policy A
    • You should have the needed Conditions created now (note that your condition names may be different). If you need an XYZ Countries Condition, you can repeat creating a Condition and set the value to zero, or “Ends With” zero. This Condition could be used to Deny or Allow access based on NOT being in the ABC Countries.
    • Select the Rules tab in the Authorization Policy and configure the selected Conditions as needed. For example, you could add the ABC Countries Condition to the Allow Rule as a Selected Condition. The effect would be to ALLOW a user IF they connected from the ABC Countries, based on the OAAM risk score asserted as Identity Context to OAM.
      OAM AuthZ Policy Rules

REST Adapter and JSON Translator in SOA/OSB 12.1.3


If you are using the REST adapter in SOA/OSB 12.1.3, you will probably encounter requirements where you need to respond with a JSON array format which has no object name or name/value pairs, and which must be valid according to the RFC4627 specification. For example:

["USA","Canada","Brazil","Australia","China","India"]

In SOA/OSB 12.1.3, the REST adapter requires you to design an XML schema in order to generate the proper JSON format you require. If you want to generate the above JSON format, you need to understand how the JSON translator works in 12.1.3.

JSON and XML, although different, have some similarities. Hence, JSON constructs can be mapped to XML and vice-versa. The inherent differences between these two formats are handled by following some pre-defined conventions. The convention used in SOA 12.1.3 is based on the BadgerFish convention. Here are some of the rules:

 

XML:      <AccountName>Peter</AccountName>
JSON:     { "AccountName" : "Peter" }
Comments: XML elements are mapped to JSON object properties.

XML:      <AccountName isOpen="true">Peter</AccountName>
JSON:     { "AccountName" : { "@isOpen" : true, "$" : "Peter" } }
Comments: XML attributes are mapped to JSON object properties, with property names starting with the @ symbol. When elements have attributes defined in the XML schema, text nodes are mapped to an object property with the property name $. This is true even if at runtime the attributes do not occur.

XML:      <Address><City>San Francisco</City></Address>
JSON:     { "Address" : { "City" : "San Francisco" } }
Comments: Nested elements become nested objects.

XML:      <Name>Peter</Name><Name>John</Name>
JSON:     { "Name" : [ "Peter", "John" ] }
Comments: Elements with maxOccurs > 1 in their schemas (repeating elements) become JSON arrays.

XML:      <RootElement><Country>USA</Country></RootElement>
JSON:     { "Country" : "USA" }
Comments: XML root elements are dropped when converting to JSON. In the reverse direction, a root element is added when converting JSON to XML; in such cases the name of the root element is obtained from the schema. This is done because JSON can have multiple top-level object properties, which would result in multiple root elements, which is not valid in XML.

XML:      <Integer>10</Integer><String>string-value</String><Boolean>true</Boolean>
JSON:     { "Integer" : 10, "String" : "string-value", "Boolean" : true }
Comments: The JSON data types boolean, string and number are supported. When converting XML to JSON, the appropriate JSON type is generated based on the type defined in the XML schema.

XML:      <RootElement xmlns="http://xmlns.oracle.com/country"><Country>USA</Country></RootElement>
JSON:     { "Country" : "USA" }
Comments: The RootElement and all namespace information (namespace declarations and prefixes) are dropped when converting XML to JSON. On converting the JSON back to XML, the namespace information (obtained from the schema) is added back to the XML.

XML:      <customers><customer>Peter</customer><customer>John</customer></customers>
JSON:     [ "Peter", "John" ]
Comments: Top-level arrays – the nxsd annotation nxsd:jsonTopLevelArray="true" can be set in the schema to indicate that the JSON will have a top-level array.

The following scenarios are not handled by the JSON translator:

  • A choice group with child elements belonging to different namespaces having the same (local) name, or a sequence group with child elements having duplicate local names. This is because all namespace information is dropped when converting XML to JSON, so either case translates to a JSON object with duplicate keys, which is not a valid format according to the RFC4627 specification. For example,
    <productList>
    	<products>
    		<product>
    			<productCode>1</productCode>
    			<productDesc>product 1</productDesc>
    		</product>
    		<product>
    			<productCode>2</productCode>
    			<productDesc>product 2</productDesc>
    		</product>
    	</products>
    </productList>
  • Arrays within arrays, for example: [ [ "Harry", "Potter"] , ["Peter", "Pan"]]
  • Mixed arrays, for example: [ [ "Harry", "Potter" ], "", { "Peter" : "Pan" } ]
  • Handling JSON null
  • XML Schema Instance (xsi) attributes are not supported.

In order to generate the required JSON array format ["USA","Canada","Brazil","Australia","China","India"], you need an XML schema similar to the example shown below:

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://TargetNamespace.com/ListOfValues_countries_response"
            targetNamespace="http://TargetNamespace.com/ListOfValues_countries_response" 
            elementFormDefault="qualified"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" 
            nxsd:version="JSON" nxsd:jsonTopLevelArray="true"
            nxsd:encoding="UTF-8">
    <xsd:element name="Root-Element">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element name="topLevelArray" maxOccurs="unbounded" type="xsd:string"/>
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>
</xsd:schema>

When the JSON translator converts the XML to JSON format, the XML root element is dropped. In the reverse direction, a root element is added when converting JSON to XML; in such cases the name of the root element is obtained from the schema. This is because JSON can have multiple top-level object properties, which would result in multiple root elements, which is not valid in XML.  The nxsd annotation nxsd:jsonTopLevelArray="true" is set in the schema to indicate that the JSON will have a top-level array.

One of the options to generate the required XML schema is to use the Native Format Builder.  Read this blog about using the Native Format Builder: http://www.ateam-oracle.com/introduction-to-fmw-12c-rest-adapter/, and the Oracle documentation: http://docs.oracle.com/middleware/1213/soasuite/develop-soa/soa-rest-integration.htm#SOASE88861

JCS (Java Cloud Service) – An Alternative to the SSH Tunnel


The recommended way to use WLST with JCS is to use an SSH tunnel. While this isn’t that difficult to set up and use, it is an extra step that some may feel is unnecessary.  It also changes the way you have been accustomed to using WLST against your on-premise WebLogic instances. This blog provides a few configuration steps that negate the need for an SSH tunnel, so that you can deploy to JCS the same way you have been accustomed to with your on-premise WebLogic domains. There are 3 main actions that need to be performed to set this up properly:

  1. Create a New Network Channel
  2. Open up Port in the Compute Console
  3. SSL Setup

Step 1: Create a New Network Channel in the WebLogic Server Console for Your Admin Server

  a. Navigate to Servers->AdminServer->Protocols->Channels
  b. Click Lock & Edit
  c. Click New
  d. Create a T3 channel with the name of ExternalChannelT3 and click Next


    newt3channel1

    External T3 Channel

  e. Here we are choosing a new Listen Port to use.  As you can see, I have set both the Listen Port and External Listen Port to be the same.  The most important part here is the External Listen Address.  This MUST be the EXTERNAL IP Address/Host Name that you are accessing the WLS Console over.  It should be the same as what you currently see in your Address Bar. Click Next to continue.


    newt3channel2
  f. Enable HTTP Tunneling.
  g. Ensure HTTP is also enabled for this channel.
  h. Finally, click Finish and Activate Changes.


    newt3channel3




Step 2: Open a new network port in the Compute Console

See http://docs.oracle.com/cloud/latest/dbcs_dbaas/CSDBI/GUID-95C7A0BD-208C-4D8E-A1DF-BBC1EDBA7755.htm

While the link above describes opening a port for SQL*Net, the instructions are the same. Just change port 2484 to 7003.

  a. Sign into the My Services application by clicking the link in your Welcome e-mail or by going to http://cloud.oracle.com, clicking Sign In, and selecting Public Cloud Services as your My Services Data Center. The Platform Services Dashboard is displayed.
  b. In the entry for Oracle Compute Cloud Service, click Open Service Console. The Oracle Compute Cloud Service console’s Overview page is displayed with the Instances tile selected.
  c. Click the Network button. The Oracle Compute Cloud Service console’s Network page is displayed.
  d. Click the Protocols tile on the left of the page, and then click Create Protocol. In the Create Protocol dialog, enter the following information.

    • Name: Any name to identify the new port, for example, ExternalT3Port
    • Port Type: tcp
    • Port Range Start: 7003
    • Port Range End: 7003
    • Description: Any description of your choice.
  e. Click Create.
  f. Click the Access Rules tile on the left side of the page, and then click Create Access Rule. In the Create Access Rule dialog, enter the following information.

    • Name: Any name to identify the access rule.
    • Status: Enabled
    • Protocol: Select the name of the protocol you created in the steps above, for example ExternalT3Port.
    • Source: Select IP Lists, and then select public-internet from the list.
    • Destination: Select the name of the network group to use as the target for this access rule. Since we are connecting to the Admin Server, we need to choose the associated network group. By default, this is ora_admin
    • Description: Any description of your choice.
  g. Click Create.

Step 3: SSL Setup

If you want to use SSL as well, follow the same steps above except create a T3S channel instead of a T3 channel. Choose a different port such as 7004.

Now this is where you must pay attention, since your typical SSL client will not work out of the box. JCS has set the default minimum protocol to TLS version 1.2. If you take a look at the start-up arguments for your WebLogic Server, you will see the following system property:

weblogic.security.SSL.minimumProtocolVersion=TLSv1.2

This sets the minimum protocol version to accept as TLS 1.2. The reason this causes problems is JSSE does not enable TLS 1.2 by default. WebLogic provides the means to enable this via a system property. Please see the following documentation:

https://docs.oracle.com/middleware/1213/wls/SECMG/ssl_version.htm

The solution seems simple enough right? Let’s just add the following system property to your client:

-Dweblogic.security.SSL.protocolVersion=TLS1

That enables all TLS 1.x versions. But there is a gotcha! This only works if you are NOT using the Thin T3 client. There is an open bug (20758863) to enable the protocolVersion system property for the Thin T3 client as well, but for now you will need to use the full client or weblogic.jar. A minimal client sketch follows below.
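As a minimal sketch of what such a standalone client could look like when using the full client (weblogic.jar on the classpath), assuming the channels created in the steps above; the host name, port and credentials are placeholders for your own JCS environment:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JcsT3Client {
    public static void main(String[] args) throws NamingException {
        // Equivalent to passing -Dweblogic.security.SSL.protocolVersion=TLS1 on the command line;
        // only needed when connecting over the t3s/SSL channel from Step 3.
        System.setProperty("weblogic.security.SSL.protocolVersion", "TLS1");

        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3s://your-jcs-public-host:7004"); // or t3://your-jcs-public-host:7003 for the non-SSL channel
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "your-password");

        Context ctx = new InitialContext(env);
        System.out.println("Connected to " + env.get(Context.PROVIDER_URL));
        ctx.close();
    }
}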

Thanks for reading!

Configuring the Data Sync Tool for BI Cloud Service (BICS)


Introduction

The Data Sync Tool provides the ability to synchronize data from on-premise sources to the BI Cloud Service (BICS).  The tool can source from both relational data, and flat files, and it can be used for one-off loads, or in an incremental fashion to keep BICS ‘in-sync’ with on-premise data.

This article will cover the installation and configuration steps, and then walk through the creation of data loads including load strategies, scheduling, and monitoring jobs.

 

Main Article

This article can be read in its entirety, or used as a reference guide for a specific function.  The following topics will be covered, and the links below can be used to go directly to that topic.

 

Table of Contents

Downloading and Installing

Starting the Tool

Configuration

System Preferences
E-Mail Configuration

Concepts for the Data Sync Tool

Connections

BICS and File Source Connections
New RDBMS Source

Projects

Data from Table
SQL Source
File Source

Load Strategies

Target Table Options
Rolling Delete

Running and Scheduling Jobs

Monitoring and Trouble Shooting Jobs

Reloading Data

Downloading and Installing

The Data Sync tool is available on OTN, download it here.

It is a Java application and requires a Java 1.7 JDK and JDBC drivers.  A Java Runtime Environment (JRE) is not sufficient.

Once downloaded and unzipped, locate the ‘config.bat’ or ‘config.sh’ depending on the operating system.  Open this in a text editor, and set the path for the Java home for the machine where the application will be run.

set JAVA_HOME="C:\Program Files\Java\jdk1.7.0_71"

export JAVA_HOME=/u01/app/java/jdk

The Data Sync tool uses JDBC to connect to source databases.  If you intend to source from a relational database to load BICS, you need to make sure the JDBC drivers are the correct versions for the on-premise source database.  The tool ships with Oracle JDBC version 11.2.x.  If the source database is different, obtain the correct JDBC drivers for that version of the database, copy them into the relevant \lib directory of the Data Sync tool, and overwrite the version that is there.  The following table shows the files that need to be obtained and copied for each database vendor.

 

Vendor                 JDBC Driver Name(s)
Oracle                 ojdbc6.jar
MySQL                  mysql-connector-java*.jar
Microsoft SQL Server   sqljdbc.jar
DB2                    db2java.zip
TimesTen               ttjdbc6.jar, orai18n.jar, timestenjmsxla.jar, jms.jar, javax.jms.jar
Teradata               terajdbc4.jar, log4j.jar, teradata.jar, tdgssjava.jar, tdgssconfig.jar

Starting the Tool

To start the Data Sync tool, run the datasync.bat command on Windows, or datasync.sh on Linux.  These commands can be found in the root of the unzipped directory.

The first time the tool is run a configuration wizard will take you through the setup. In subsequent sessions of the application the tool will recognize that the wizard has been run before and open the existing configuration.

 

Configuration

System Preferences

A System Preference panel can be opened from under the ‘Views’ menu.

Within the preferences are a number of options, including the default folder where flat files will be searched for, and the number of log files that will be kept before they are purged.

An explanation for each option is available in the ‘Comments’ column.

Windows7_x64

E-Mail Configuration

The Data Sync tool can be set up to send e-mail notifications about jobs.  This is done under the Tools / E-Mail menu.

1. Select the ‘Recipients’ sub-menu and add one or more e-mail addresses.

2. Under the Setup menu, add the details of the e-mail server and connection details.

Email_config

3. Hit the ‘Send Test Email’ button to confirm the settings.  The test e-mail will be sent to the recipients defined in the first step.

Concepts for the Data Sync Tool

The tool has 3 main sections.

Connections – this is where the source and target connections are defined.

Project – a workspace where the source files and targets are defined, as well as their attributes.  This is where the load strategy for each table is also defined.

Jobs – a unit of work that can be used to upload the data from one or more sources defined in the project.  A project can have one or more jobs.  This is also where the status of data loads can be found and examined.

Each section can be reached by selecting the corresponding button in the main menu bar:

Windows7_x64

Connections

BICS and File Source Connections

There are two default connections.  ‘TARGET’ which is the BICS cloud instance that will be loaded, and ‘File Source’ which is the connection for file sources.

Do not delete or rename either entry, they are referenced internally by the tool.

The ‘TARGET’ connection must be edited to point to the BICS Cloud instance.  Under Connections, select ‘TARGET’ and edit the details.  The Connection Type should be left as ‘Oracle (BICS)’.  A username and password for a user with the ‘Data Loader’ application role should be entered.  The URL will be the same as the entry to the BICS /analytics portal, but without the “/analytics” suffix.  For instance:

Windows7_x64

If this URL is not known, it can be located on the Oracle Business Intelligence Cloud Service page within My Services, as shown below.  Copy that URL, and remove the “/analytics” suffix.

Windows7_x64

Select ‘Test Connection’ to confirm connectivity to the BICS target.

2

New RDBMS Source

To add a new RDBMS as a source, select ‘New’, then name it.  Select the correct connection type for the database vendor, and enter the connection details.  This is an example of an Oracle DB source:

Windows7_x64

Test the connection to confirm the settings.

Projects

Once the source and target connections have been set up, the next step is to define the actual tables, SQL, or files to be used as the sources, and the table(s) that will be loaded in BICS.  This is done under the ‘Project’ section.

Data from Table

To source from a table in the on-premise RDBMS created in the previous section, select the ‘Data From Table’ option under the Project / Relational Data section.

 

Windows7_x64

Firstly select the Data Source where the table is located (1), then ‘Search Tables’ to bring up a list of the tables available (2).  Next select the Table or Tables to be imported by checking the ‘Import’ checkbox (3), and finally the ‘Import Tables’ button (4).

Windows7_x64

SQL Source

Instead of using a table, it may be preferable to use a SQL statement against the source database.  It could be that the data is coming from multiple tables, or that the data in the source tables is at a more detailed level so aggregate functions need to be used to reduce the data set and increase performance in BICS.

Select the ‘Data from SQL’ option. The query requires a logical name, and the results can be persisted either in a new table or in an existing table.  If ‘Create a table’ is selected, the column definitions are inferred from the SQL structure. If an existing table is selected, any new columns from the SQL will be added to the list of columns.

 

Windows7_x64

File Source 

A similar process is used for setting up a file source.  Select the ‘New’ button under the Project / File Data selections.

 

Windows7_x64

Select the details of the source file location, and then choose the Target Table option.  Again, there is an option to load an existing table, or to create a new one.  Select the delimiter for the file, the timestamp format, and the number of lines the tool will sample to estimate field sizes and types.  If the first row contains headers, select that here.  If not, the column names will need to be entered in the following screen.   If the file contains duplicates, there is an option to remove these if required.

 

Windows7_x64

The next screen provides the options to choose the fields to be imported from the flat file, as well as some basic Data Transformations for changing the case of a field, converting to a number, or trimming spaces.  The tool makes its best ‘guess’ for each data type based on sampling up to the first 10,000 rows of the file, but the user should go through and make sure each has been classified correctly.   Special characters will be removed from the column headers, which are converted to upper case.  If a column name is longer than 30 characters, it will be truncated; a plain-Java illustration of this normalization follows below.
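The following helper is not Data Sync source code, just the stated column-name rules (strip special characters, upper-case, truncate to 30 characters) expressed as a small, self-contained Java sketch:

public class ColumnNameNormalizer {

    // Apply the documented rules: remove special characters, convert to upper case,
    // and truncate names longer than 30 characters.
    public static String normalize(String header) {
        String cleaned = header.replaceAll("[^A-Za-z0-9_]", "").toUpperCase();
        return cleaned.length() > 30 ? cleaned.substring(0, 30) : cleaned;
    }

    public static void main(String[] args) {
        System.out.println(normalize("Last Update Date/Time (UTC)"));                // LASTUPDATEDATETIMEUTC
        System.out.println(normalize("A Very Long Descriptive Column Header Name")); // truncated to 30 characters
    }
}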

If the ‘Remove Duplicates’ option was selected in the previous screen, then the ‘Update Rows Match’ column would be used to select what makes the records unique.

Windows7_x64

If everything is set correctly and there are no data errors, a ‘Success’ notification will be displayed.  If there is a problem, the error message will provide details of the problem.

When errors are encountered, a ‘bad’ file is created in the log directory with the convention of <logical name>.bad.

The bad file contains information about which line contains the bad record, the record itself, and what the problems were when parsing that file.

Once the bad records have been identified, these must be fixed in the original file and the process re-run.  The file will only be loaded once there are no errors.

Load Strategies

There are multiple load strategies that can be used with the Data Sync Tool.

For Data from Table, or Data from SQL the load strategy can be found and changed within the ‘Edit’ properties of the Connection under the Project Section of the tool:

Windows7_x64

For a File source it can be found under the ‘File Targets’ tab by selecting the ‘Load Strategy’ option:

Windows7_x64

The load strategies available are:

Replace Data in Table: this will truncate the target table each run and then reload the table.  If the target table has indexes, these will be dropped prior to the load and are then recreated afterwards.

Append Data in Table: data will be appended to the target table.  No checks will be done to see if the data already exists in the target.

Update Table: the first time this is run the target table, if it exists, will be truncated and indexes removed and recreated post load.  There are 2 options, both of which can be run independently or concurrently.

1. Add new records – the target will be checked to see if the key exists – if it does not, then the row will be added

2. Update Existing Records - the target will be checked to see if the key exists and an update date in the source system is more recent than the date the job was last run.  If that is the case, the record in the target will be updated.

For the Update Table strategy the tool requires 2 fields: a unique key, and an update date.

In this example, the COUNTRY_ID is the unique key, and the M_LAST_UPDATE_DTTM is the field that the source system uses to specify when that record was last updated.

Windows7_x64

There is also an option to ‘Never Delete Data’ – this prevents the ‘Truncate’ and removal of indexes from running for the ‘Replace Data in Table’ option, and also the first run of the ‘Update table’ load strategies.

Target Table Options

Check the ‘Target Tables’ section.  This will have a row for each file or RDBMS table source.  Select a table, and then, within the options below, the table name, table columns and properties, and indexes can be edited if necessary.

If the target tables were created by another process – perhaps from a manual load through the File Data Loader, or through the Apex tool – there may be issues for the Data Sync tool when it tries to truncate and remove and recreate indexes, or with existing constraints on the table in BICS that do not match the constraints set up in the data loader tool.

To fix this, have the tool drop and recreate the target table itself first.  To do this, under the ‘Target Tables’ section, right click on the table name, and then select the ‘Drop/Create/Alter Tables’ option:

 

Windows7_x64

Several options will be presented.  Select ‘Create New’ and also ‘Drop Tables’.  This will drop the table created previously by the other process and recreate it with the tool.

 

Windows7_x64

Rolling Delete Days

If the data set in BICS is large, performance can be improved by purging data that has reached a certain age.  This age is calculated based on the date used in the filter for the Incremental Update.  In this example any rows updated in the source table more than 30 days ago will be deleted.

 

Windows7_x64

Running And Scheduling Jobs

Once the source tables and files, and the load strategies have been defined, the job can be run by selecting the ‘Run Job’ button in the toolbar.

Jobs can also be scheduled to run from the ‘Job Schedules’ sub tab.

Within the properties of the job the number of retries can be set up.  By default this is set to null, or 0.

Windows7_x64

Monitoring and Trouble Shooting Jobs

To monitor select the ‘Jobs’ tab.

Windows7_x64

While the job is running, it can be viewed on the ‘Current Jobs’ tab.  The ‘Auto Refresh’ can be set to automatically refresh the status.

Once a job has finished – either successfully, or with an error – it is moved to the ‘History’ tab.

 

Windows7_x64

The record on the top shows the summary of the job. The tasks tab below lists one line item per data flow as defined by the user. The details tab elaborates on the line items for the data flow.  In this example below a job is highlighted in the top section, and the tasks for that job are listed in the bottom section with details of the rows successfully loaded.

 

Windows7_x64

Logs are also saved to the file system under the ‘log’ sub-directory of the BICSDataSync folder.

If a task fails, it may be necessary to re-run the whole job.  Before running it again, right click on the failed Job name, and select ‘Mark As Completed’.

Windows7_x64

Reloading Data

Should it be necessary to do a full load of BICS, this can be achieved in 2 ways.

To reload all tables, go to ‘Tools / Re-extract and load all data’.  A confirmation box will open.

Windows7_x64

To reload a single table, go to Connections, then the ‘Refresh Dates’ tab.  Select the table to be refreshed, and select the ‘Re-Extract Data’ option.

Windows7_x64

A further option box will be presented, where ‘all data’ can be reloaded, or all ‘data from a specified date’.  Make the appropriate choice and then select ‘ok’.  The subsequent run will extract all the data from the source table, and reload the table.

 

Summary

This article walked through the steps to download, configure, and set up the Data Sync tool, and to set up and monitor data loads in BICS.  For further information, see the documentation on OTN.  That documentation can be found here.

OAAM_SAMPLE with different integration and/or deployment options


Multiple times in the past, I have encountered questions/issues about OAAM_SAMPLE. So I thought I would write a small post explaining how it can be used/configured to test (try out) the different native integration options for OAAM.

The OAAM Sample application is for demonstration purposes to familiarize yourself with OAAM APIs. It is not intended to be used as production code since it only provides basic elements of API usage. If you are implementing a native integration, you can develop your application using the OAAM Sample application as a reference.

For understanding the flow and the OAAM APIs, it is very helpful to play around with the OAAM Sample Application. Following are a few use cases where the OAAM Sample Application will be useful in performing proofs of concept around native integration of the OAAM APIs with a protected business application.

  • To analyze behavior of OAAM APIs
  • To analyze and develop Custom Fingerprinting
  • To analyze and develop Custom Challenge Flow
  • To analyze and develop flows around Virtual Authentication Devices, Knowledge-Based Authentication, and One-Time Password
  • To analyze and develop integration of Client Applications with OAAM for Transactions/Entities

Through this post, I intend to provide details about the following integration and/or deployment scenarios. Considering their popularity and usage, the Java-based native integration/deployment options are the ones covered in this post.

  • Integration Scenarios/Options
    • Native SOAP Integration
    • Native In-Proc (static linking) Integration
  • Deployment Scenarios/Options
    • Native SOAP or In-Proc Integration using OAAM Shared Library on WebLogic
    • Native SOAP or In-Proc Integration on Non-WebLogic containers
    • Native Integration for Transactions / Entities

As an example of native integration, the OAAM Sample Application can be considered the protected application on which OAAM-based risk evaluation and prevention is required.

Native In-Proc (static linking) Integration:

OAAM can be natively integrated with application(s) to provide extremely high performance and highly customizable security (risk evaluation and prevention). This native integration embeds OAAM in-process inside the protected applications. The application invokes the OAAM APIs directly to access risk and challenge flows. This integration involves the inclusion of the OAAM core libraries (JAR files) and properties files into the protected application. As OAAM is embedded in the application(s) for this integration, it also requires OAAM database access from the protected applications.

Considering performance (in comparison to SOAP integration), In-Proc integration is the better option, but from the perspective of maintainability/patching, In-Proc integration is a bit more complicated.

Native SOAP Integration:

Customers who have advanced requirements similar to native integration but who prefer to use SOAP web services instead of Java API integration directly can choose this option.

This is the most widely used integration option. It involves the inclusion of the OAAM SOAP Client Library (a Java/.Net wrapper to invoke the SOAP APIs) to communicate with the OAAM Server for device fingerprinting, risk evaluation and KBA-related activities.

The OAAM_SAMPLE package is available as follows:

Brief instructions for setting up the OAAM Sample application are available in the Developer’s Guide for OAAM. The most recent OAAM Sample Application that illustrates Java API integration can be downloaded from MOS. MOS Doc ID 1542025.1 has a reference to the OAAM Sample package for 11.1.2.x, and the same was used while writing this post.

Native SOAP Integration

Following are a few important aspects / configuration steps to understand and keep in mind for SOAP-based native integration.

  • OAAM Server (default name of managed server: oaam_server_server1 and the name of Enterprise Application oaam_server.ear) exposes WebServices for SOAP based integration.
  • WSDL for the exposed WebServices can be found from http://<OAAM Server Host>:<Port>/oaam_server/services?wsdl
  • As OAAM provides a client library for native integration, it is highly advisable to use it and to avoid calling the WebServices directly (i.e. without the client library provided by OAAM).
  • When the OAAM Shared Library is not used, it is advisable to replace oaam_soap_client.jar and the bharosa_properties folder with the files available from <IAM_ORACLE_HOME>/oaam/oaam_libs/jar/oaam_soap_client.jar and <IAM_ORACLE_HOME>/oaam/oaam_libs/war/oaam_native_lib.war/WEB-INF/classes/bharosa_properties.
  • All changes to configurable properties should be included in oaam_custom.properties. By default, there are a lot of properties already set in oaam_custom.properties and some of them need to be corrected/deleted based on the integration and deployment scenario.
  • Based on the following configurable property, the OAAM SOAP Client will use the WebLogic-based SOAP implementation. To use a different SOAP implementation (such as AXIS), a customized implementation of VCryptSOAP (like VCryptSOAPGenericImpl) needs to be prepared and configured.
    • vcrypt.common.util.vcryptsoap.impl.classname=com.bharosa.vcrypt.common.impl.VCryptSOAPGenericImpl
  • The same OAAM Client Library can be used for SOAP or In-Proc integration. To let the OAAM Client Library know to use SOAP-based integration, the following properties must be set correctly in oaam_custom.properties:
    • vcrypt.tracker.soap.useSOAPServer=true
    • vcrypt.soap.disable=false
    • vcrypt.tracker.impl.classname=com.bharosa.vcrypt.tracker.impl.VCryptTrackerSOAPImpl
    • vcrypt.tracker.soap.url=http://host-name:port/oaam_server/services
  • The OAAM Client library can read configuration parameters from the Database and from Properties files. In most deployment scenarios, the OAAM Sample will not have access to the Database, so to let the OAAM Client know to read configurable parameters only from Properties files, the following properties must be set in oaam_custom.properties:
    • bharosa.config.impl.classname=com.bharosa.common.util.BharosaConfigPropsImpl
    • bharosa.config.load.impl.classname=com.bharosa.common.util.BharosaConfigLoadPropsImpl
  • The default SOAP implementation (VCryptSOAPGenericImpl) is designed to use BASIC (username and password) HTTP Authentication. The following configurable properties must be set in oaam_custom.properties to enable Authentication for the SOAP communication. Keystore-related details are available in the standard documentation: Setting Up Client Side Keystore to Secure the SOAP User Password.
    • vcrypt.soap.auth=true
    • vcrypt.soap.auth.keystorePassword=<Java-keystore-password>
    • vcrypt.soap.auth.aliasPassword=<Keystore-alias-password>
    • vcrypt.soap.auth.username=<SOAP-User-name>
    • vcrypt.soap.auth.keystoreFile=<Keystore File name which should be available from classpath. i.e. WEB-INF/classes>
  • For configuring Authorization on the OAAM Server side (which exposes the WebServices), details are available in the post: How to secure Web Services exposed by OAAM Server.
  • OAAM uses encryption/decryption for configurable properties and some Database tables/columns. The algorithm and implementation for this encryption/decryption are controlled by a few configurable properties which are read first during initialization of the OAAM Client.
    • By default, the required configuration for this encryption/decryption is read from the CSF (which would have been initialized during the first startup of OAAM Server/Admin). So, if the OAAM Sample application is deployed on the same WebLogic Server domain where the OAAM Server is running, then there should not be any "bharosa.cipher.encryption.algorithm.enum*" properties set in oaam_custom.properties.
    • But if the OAAM Sample application is deployed on a non-WebLogic server or on a non-Identity Access Management WebLogic Server domain, then the following properties should be set in oaam_custom.properties:
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.keystoreFile=<Keystore File name which should be available from classpath. i.e. WEB-INF/classes>
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.keystorePassword=<Java-keystore-password>
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.aliasPassword=<Keystore-alias-password>
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.keystoreFile=<Keystore File name which should be available from classpath. i.e. WEB-INF/classes>
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.keystorePassword=<Java-keystore-password>
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.aliasPassword=<Keystore-alias-password>
  • Based on the preference, EAR or WAR should be prepared including updated properties and library files.
  • Based on the deployment scenario, there would be few more properties required to be updated. Such changes are mentioned in the sections below.

Considering and understanding the above-mentioned points, along with the documentation, should make the deployment of the OAAM Sample easier.

Native In-Proc (static linking) Integration

Following are a few important aspects / configuration steps to understand and keep in mind for In-Proc (static linking) native integration.

  • The same OAAM Sample package is capable of working with either SOAP or In-Proc integration. Changes are required in configurable properties and libraries to use a specific integration (SOAP or In-Proc) with the OAAM Sample.
  • With In-Proc integration, the OAAM Client library directly invokes the Java APIs (without the SOAP Client), and that requires Database access from the OAAM Client, i.e. the OAAM Client needs access to the OAAM Core Libraries and the Data Source. Considering such access, it is advisable to use the OAAM Shared Library and a Data Source on WebLogic. If In-Proc integration is used with a non-WebLogic container, the OAAM Sample deployment needs to include changes in the persistence (database access) layer and the OAAM core libraries.
  • All changes to configurable properties should be included in oaam_custom.properties. By default, there are a lot of properties already set in oaam_custom.properties and some of them need to be corrected/deleted based on the integration and deployment scenario.
  • The same OAAM Client Library can be used for SOAP or In-Proc integration. To let the OAAM Client Library know to use In-Proc integration, the following properties must be set correctly in oaam_custom.properties:
    • vcrypt.tracker.soap.useSOAPServer=false
    • vcrypt.soap.disable=true
  • The OAAM Client library can read configuration parameters from the Database and from Properties files. In In-Proc integration, the OAAM Sample has access to both the Database and the Properties files. So, the following properties should not be set in oaam_custom.properties:
    • bharosa.config.impl.classname
    • bharosa.config.load.impl.classname
  • OAAM uses encryption/decryption for configurable properties and some Database tables/columns. The algorithm and implementation for this encryption/decryption are controlled by a few configurable properties which are read first during initialization of the OAAM Client.
    • By default, the required configuration for this encryption/decryption is read from the CSF (which would have been initialized during the first startup of OAAM Server/Admin). So, if the OAAM Sample application is deployed on the same WebLogic Server domain where the OAAM Server is running, then there should not be any "bharosa.cipher.encryption.algorithm.enum*" properties set in oaam_custom.properties.
    • But if the OAAM Sample application is deployed on a non-WebLogic server or on a non-Identity Access Management WebLogic Server domain, then the following properties should be set in oaam_custom.properties:
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.keystoreFile=<Keystore File name which should be available from classpath. i.e. WEB-INF/classes>
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.keystorePassword=<Java-keystore-password>
      • bharosa.cipher.encryption.algorithm.enum.DESede_config.aliasPassword=<Keystore-alias-password>
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.keystoreFile=<Keystore File name which should be available from classpath. i.e. WEB-INF/classes>
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.keystorePassword=<Java-keystore-password>
      • bharosa.cipher.encryption.algorithm.enum.DESede_db.aliasPassword=<Keystore-alias-password>
  • Based on the preference, EAR or WAR should be prepared including updated properties and library files.
  • Based on the deployment scenario, there would be few more properties required to be updated. Such changes are mentioned in the sections below.

Considering and understanding the above-mentioned points, along with the documentation, should make the deployment of the OAAM Sample easier.

Native SOAP or In-Proc Integration using OAAM Shared Library on WebLogic

Apart from the details mentioned above about SOAP or In-Proc integration, following are a few additional important aspects / steps for deploying the OAAM Sample on WebLogic and using a specific integration for invocation of the OAAM APIs.

  • There are two versions (EAR and WAR) of the OAAM Shared Library (oaam_native_lib), available from <IAM_ORACLE_HOME>/oaam/oaam_libs/ear or <IAM_ORACLE_HOME>/oaam/oaam_libs/war. The target for the Shared Library should be updated correctly to make sure that the OAAM Sample Application is able to access it.
    • If the OAAM Sample is packaged (and to be deployed) as a WAR, the WAR version of oaam_native_lib (<IAM_ORACLE_HOME>/oaam/oaam_libs/war/oaam_native_lib.war) should be deployed as a shared library in WebLogic.
      • To use the WAR version of the Shared Library from the OAAM Sample Web Application, you must refer to the shared library by adding the following entry to your WebLogic deployment descriptor file, weblogic.xml:
        • <library-ref>
                 <library-name>oracle.oaam.libs</library-name>
          </library-ref>
    • If the OAAM Sample is packaged (and to be deployed) as an EAR, the EAR version of oaam_native_lib (<IAM_ORACLE_HOME>/oaam/oaam_libs/ear/oaam_native_lib.ear) should be deployed as a shared library in WebLogic.
      • To use the EAR version of the Shared Library from the OAAM Sample Enterprise Application, you must refer to the shared library by adding the following entry to your WebLogic deployment descriptor file, weblogic-application.xml:
        • <library-ref>
                 <library-name>oracle.oaam.libs</library-name>
          </library-ref>
  • As the Shared Library includes the OAAM libraries/jars and properties, the Sample application does NOT need to include any OAAM-specific libraries/jars or properties files. From the OAAM Sample integration perspective, the Sample application just needs to include oaam_custom.properties in WEB-INF.
  • In case of In-Proc integration, the target for the Data Source jdbc/OAAM_SERVER_DB_DS should also be updated correctly to make sure that the OAAM Sample Application is able to access it. If the OAAM Sample is to be deployed on a non-Identity Access Management WebLogic Server domain, a Data Source (with the name jdbc/OAAM_SERVER_DB_DS) needs to be created to provide database access for the In-Proc integration.
  • If the OAAM Sample is to be deployed on a non-Identity Access Management WebLogic Server domain, the values for the following keys in the CSF (of the WebLogic domain for the OAAM Sample) should exactly match the values in the CSF of the Identity Access Management WebLogic Server domain.
  • If the OAAM Sample application is to be deployed on the same domain as OAAM Server/Admin, it is advisable to deploy the OAAM Sample as an EAR. The default installation of OAAM includes deployment of the OAAM Shared Library (oracle.oaam.libs), and that Shared Library is deployed as an EAR. If the OAAM Sample is deployed as a WAR on the same domain, it would not be able to access the EAR Shared Library.
  • Following are the properties which are updated in oaam_custom.properties to make the OAAM Sample work with SOAP-based integration.
    • 2c2
      < vcrypt.tracker.soap.url=http://${hostName}:${webapp_port}/oaam_server/services
      ---
      > vcrypt.tracker.soap.url=http://myhostedlinux:14300/oaam_server/services
      4c4
      < vcrypt.soap.auth=false
      ---
      > vcrypt.soap.auth=true
      9c9
      < vcrypt.soap.auth.username=dove
      ---
      > vcrypt.soap.auth.username=oaamsoap
      30c30
      < bharosa.image.dirlist=<Weblogic Folder>/Oracle_IDM1/oaam/oaam_images/
      ---
      > bharosa.image.dirlist=/scratch/orafmw/r2ps2/products/access/iam/oaam/oaam_images/
  • The following properties are updated in oaam_custom.properties to make the OAAM Sample work with In-Proc integration (again shown as a diff).
    • 2c2
      < vcrypt.tracker.soap.url=http://${hostName}:${webapp_port}/oaam_server/services
      ---
      > #vcrypt.tracker.soap.url=http://iam11g.local:14300/oaam_server/services
      7,10c7,10
      < vcrypt.soap.auth.keystorePassword=ZG92ZTEyMzQ=
      < vcrypt.soap.auth.aliasPassword=ZG92ZTEyMw==
      < vcrypt.soap.auth.username=dove
      < vcrypt.soap.auth.keystoreFile=system_soap.keystore
      ---
      > #vcrypt.soap.auth.keystorePassword=ZG92ZTEyMzQ=
      > #vcrypt.soap.auth.aliasPassword=ZG92ZTEyMw==
      > #vcrypt.soap.auth.username=oaamsoap
      > #vcrypt.soap.auth.keystoreFile=system_soap.keystore
      13c13
      < vcrypt.common.util.vcryptsoap.impl.classname=com.bharosa.vcrypt.common.impl.VCryptSOAPGenericImpl
      ---
      > #vcrypt.common.util.vcryptsoap.impl.classname=com.bharosa.vcrypt.common.impl.VCryptSOAPGenericImpl
      16,19c16,19
      < vcrypt.soap.disable=false
      < vcrypt.tracker.soap.useSOAPServer=true
      < vcrypt.tracker.impl.classname=com.bharosa.vcrypt.tracker.impl.VCryptTrackerSOAPImpl
      < vcrypt.soap.call.timeout=10000
      ---
      > vcrypt.soap.disable=true
      > vcrypt.tracker.soap.useSOAPServer=false
      > #vcrypt.tracker.impl.classname=com.bharosa.vcrypt.tracker.impl.VCryptTrackerSOAPImpl
      > #vcrypt.soap.call.timeout=10000
      21,22c21,22
      < bharosa.config.impl.classname=com.bharosa.common.util.BharosaConfigPropsImpl
      < bharosa.config.load.impl.classname=com.bharosa.common.util.BharosaConfigLoadPropsImpl
      ---
      > #bharosa.config.impl.classname=com.bharosa.common.util.BharosaConfigPropsImpl
      > #bharosa.config.load.impl.classname=com.bharosa.common.util.BharosaConfigLoadPropsImpl
      30c30
      < bharosa.image.dirlist=<Weblogic Folder>/Oracle_IDM1/oaam/oaam_images/
      ---
      > bharosa.image.dirlist=/scratch/orafmw/r2ps2/products/access/iam/oaam/oaam_images/
      33,41c33,41
      < bharosa.cipher.encryption.algorithm.enum.DESede_config.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      < bharosa.cipher.encryption.algorithm.enum.DESede_config.keystoreFile=system_config.keystore
      < bharosa.cipher.encryption.algorithm.enum.DESede_config.keystorePassword=ZG92ZTEyMzQ=
      < bharosa.cipher.encryption.algorithm.enum.DESede_config.aliasPassword=ZG92ZTEyMw==
      <
      < bharosa.cipher.encryption.algorithm.enum.DESede_db.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      < bharosa.cipher.encryption.algorithm.enum.DESede_db.keystoreFile=system_db.keystore
      < bharosa.cipher.encryption.algorithm.enum.DESede_db.keystorePassword=ZG92ZTEyMzQ=
      < bharosa.cipher.encryption.algorithm.enum.DESede_db.aliasPassword=ZG92ZTEyMw==
      ---
      > #bharosa.cipher.encryption.algorithm.enum.DESede_config.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      > #bharosa.cipher.encryption.algorithm.enum.DESede_config.keystoreFile=system_config.keystore
      > #bharosa.cipher.encryption.algorithm.enum.DESede_config.keystorePassword=ZG92ZTEyMzQ=
      > #bharosa.cipher.encryption.algorithm.enum.DESede_config.aliasPassword=ZG92ZTEyMw==
      > #
      > #bharosa.cipher.encryption.algorithm.enum.DESede_db.keyRetrieval.classname=com.bharosa.common.util.cipher.KeystoreKeyRetrieval
      > #bharosa.cipher.encryption.algorithm.enum.DESede_db.keystoreFile=system_db.keystore
      > #bharosa.cipher.encryption.algorithm.enum.DESede_db.keystorePassword=ZG92ZTEyMzQ=
      > #bharosa.cipher.encryption.algorithm.enum.DESede_db.aliasPassword=ZG92ZTEyMw==

Native SOAP or In-Proc Integration using OAAM Shared Library on Non-WebLogic containers

In addition to the details above about SOAP and In-Proc integration, the following aspects and steps are important when deploying the OAAM Sample on non-WebLogic containers such as Apache Tomcat and using a specific integration for invoking the OAAM APIs.

  • On non-WebLogic containers there is no OAAM shared library available, so all required OAAM client libraries (JAR files) and properties files must be included in the OAAM Sample application.
  • For In-Proc integration, the OAAM core libraries (JAR files) and properties files must also be included in the OAAM Sample application.
  • For In-Proc integration, changes are required in the libraries, properties files and persistence.xml to make the database accessible from the OAAM Sample. The changes required vary based on the application container. As an example, the following are the changes required in the OAAM Sample to make it work with In-Proc integration on a non-WebLogic container such as Apache Tomcat:
    • Additional Changes in Properties (oaam_custom.properties):
      • oaam.db.toplink.useCredentialsFromCSF=false
      • oaam.csf.useMBeans=false
    • Changes in META-INF/persistence.xml:
      •             <property name="eclipselink.jdbc.url" value="jdbc:oracle:thin:@myhostedlinux:1521:iam111220"/>
      •             <property name="eclipselink.jdbc.user" value="AM_OAAM"/>
      •             <property name="eclipselink.jdbc.password" value="oracle123"/>
    • Inclusion of additional libraries (Jar files) in Classpath:
      • com.bea.core.antlr_2.7.7.jar, com.bea.core.apache.commons.collections_3.2.0.jar  , jps-api.jar, jps-manifest.jar, jps-az-management.jar, jps-platform.jar, jps-az-rt.jar, jps-se.jar, jacc-spi.jar, jps-ee.jar, jps-internal.jar, jps-common.jar, com.oracle.toplink_1.1.0.0_11-1-1-6-0.jar, eclipselink.jar, javax.persistence_1.1.0.0_2-0.jar, ojdbc6.jar, commons-codec-1.2.jar, drools-base-2.0-beta-21.jar, drools-core-2.0-beta-21.jar, drools-io-2.0-beta-21.jar, drools-java-2.0-beta-21.jar, drools-jsr94-2.0-beta-21.jar, drools-smf-2.0-beta-21.jar, groovy-all-1.6.3.jar, janino-2.0.16.jar, jsr94.jar, oaam_core.jar, oaam_native_wrapper.jar, oaam_uio.jar, etc…
    • Inclusion of additional configuration files in Classpath:
      • oaam_toplink_orm.xml and META-INF/persistence.xml

 

Native Integration for Transactions / Entities

The OAAM Sample application makes it very easy to experiment with the OAAM APIs for entities and transactions. By changing configurable properties, the OAAM Sample can render custom input forms for entities and transactions. The links available on the left side (after logging in successfully) are rendered based on these configurable properties.

The values of the following enumerations in oaam_custom.properties are responsible for rendering the input pages for transactions and entities.

  • tracker.entity.type.enum
  • tracker.entity.form.enum.<xyz>.form_fields
  • tracker.entity.form_field.<xyz>.enum
  • tracker.transaction.type.enum
  • tracker.transaction.form.enum.<xyz>.form_fields
  • tracker.transaction.form_field.<xyz>.enum

 

Diagnosing performance issues front to back-end in WebLogic Server applications with Java Flight Recorder


Introduction

Java Mission Control and Java Flight Recorder are relatively new tools that have greatly extended the diagnostic capabilities of the Java platform. They allow collecting an impressive amount of detailed runtime information about the JVM, with minimum performance impact, in a way that would have been hard to imagine a few years ago.

Java Mission Control is basically a set of tools that enables efficient and detailed analysis of the extensive data provided by Java Flight Recorder, which is the component that lives in the JVM collecting a wide variety of runtime information. Java Flight Recorder used to be tightly integrated with the JRockit JVM, and has been bundled with the HotSpot JVM since the Java 7 Update 40 release.

There is plenty of information out there about how to use both JMC and JFR in the form of blogs, videos and technical documentation, so I won't cover that in much detail. The purpose of this article is to give developers unfamiliar with JFR a hint on how to diagnose performance issues associated with an application flow triggered from the front-end.

 

Main Article

One of the most common scenarios that engineers working on applications deployed in WebLogic servers need to deal with is diagnosing a web application with poor performance.

Often, users complain about sluggishness after they click on a specific link of the application or as part of a specific operation. Also, it is common that these performance issues are not constant and happen rather randomly or intermittently.

Normally, getting a full picture of what could have gone wrong, from the front-end to the middle or back-end layers, requires a thorough analysis of all involved components. Depending on the logging capabilities or integrated diagnostic frameworks used by the application, the difficulty of debugging this way may vary, but in general it becomes time-consuming at the least.

If only regular logging is used, the person debugging needs to correlate evidence of events and their timestamps in the log files from different components in order to get an idea of any potential bottlenecks.

Also, capturing fairly detailed performance information about an application is usually expensive, and typically requires enabling logging capabilities or using profiling tools based on the JVMPI/JVMTI interfaces, which may have a negative impact on performance as well.

Fortunately, Java Mission Control and Java Flight Recorder have made things much easier for everybody and have become the Holy Grail of Java application profiling, making it feasible to profile applications with virtually no performance degradation, which wasn’t possible a few years ago.

 

Capturing WebLogic event data with Java Flight Recorder

It is possible to integrate WebLogic and Java Flight Recorder to collect event data from WebLogic containers through the WebLogic Diagnostic Framework (WLDF). The overhead of enabling JFR and configuring WLDF to generate WebLogic Server diagnostics for capture by JFR is minimal, which makes it suitable for full-time use, especially in production environments where it adds the greatest value.

Java Flight Recorder works with the concept of events: an event represents a piece of data related to something that happened at a specific point in time.

When WebLogic is configured with the Oracle HotSpot JDK, JFR is disabled by default and it is necessary to enable it in one of the following ways:

 

1. By adding the following options in the java command that starts the WebLogic JVM:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder

Since JFR is a commercial feature, you must specify '-XX:+UnlockCommercialFeatures' before '-XX:+FlightRecorder' in order to enable JFR.

In the case of WebLogic, these options may be added to the JAVA_OPTIONS variable in the startWebLogic.sh script that starts the Managed Server, or to the EXTRA_JAVA_OPTIONS variable in the script $DOMAIN_HOME/bin/setDomainEnv.sh.

For instance, if you wanted to start the server with a continuous recording, you can add the following parameters:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=name=MyRecording,filename=myrecording.jfr

Continuous recordings have no specific duration and will keep recording until the JVM shuts down or until the recording is explicitly dumped through the jcmd 'JFR.dump' command. They are ideal for production systems, although it is also possible to create a fixed-duration recording by adding the 'duration' parameter, as in the following example:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=60s,name=MyRecording,filename=myrecording.jfr

Important: If you’re not familiar with editing these files, create a backup copy before making any changes.

 

2. By using the jcmd tool:

jcmd is pretty flexible and has plenty of options, but the following command should suffice for starting a recording in the same fashion as the previous example:

$ ./jcmd <Managed_server_pid> JFR.start name=MyRecording filename=myrecording.jfr settings=default

To get the PID of your Managed Server, you can list the running Java processes by running the jcmd command alone:

$ ./jcmd

The Managed Server processes are easy to identify in the list by the 'weblogic.Server' name.

 

3. By starting a recording from the Java Mission Control GUI:
It is also possible to use the Java Mission Control GUI itself, although for this alternative to work, you must start the Managed Server to be profiled with the '-XX:+UnlockCommercialFeatures' and '-XX:+FlightRecorder' parameters, as outlined in alternative #1. This method is as flexible as #2 in that it allows starting and stopping a recording on demand, in this case through the GUI.
One of the components available in the GUI is the JVM Browser, which you will see at the left side of the window when you open it:

JVM Browser

You can start a recording by following these steps:

- Start the application to be profiled with the following arguments to enable JFR:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder

- Start Java Mission Control by running $JAVA_HOME/bin/jmc.

- You will see the JVM Browser at the left of the window. The JVM Browser is a component that shows a list of the JVMs discovered on the host where the tool is running.
You can select the JVM you'd like to start JFR on and right-click on it, then select 'Start Flight Recording'.
- In the next window, select 'Timed Fixed Recording' or 'Continuous Recording' depending on your needs.
If you need to create a duration-based recording, you can specify the recording time; otherwise, maximum age and maximum size options are available for continuous recordings.
- In the next couple of screens you can optionally configure advanced event capturing settings like 'Method sampling' (which defines how often JFR samples your methods), Thread Dump periodicity (every 1, 10 or 60s.), and Exceptions (errors only, or all errors and exceptions).

- Click on ‘Finish’.

- The recording will be created and once it’s done, it will be displayed in Mission Control.

Of course, there is much more to starting JFR in terms of the options and capabilities it supports, but you can refer to the Java Flight Recorder Runtime Guide in the Java SE documentation and to the many tutorials available on the Internet.
It is possible to configure JFR to generate the recording after the JVM exits or after a certain duration, after which a recording file will be generated as specified in the filename parameter.
It is important to mention that the amount of WebLogic Server event data captured by JFR can be adjusted for each event by configuring diagnostic volumes. Normally, the diagnostic volume control is set to "Low", which has minimal performance impact on the server. It is possible to gather more detailed information by setting the volume to "Medium" or "High", although the performance impact should be assessed (especially with "High"). For more information, refer to "Configuring WLDF Diagnostic Volume" in the WLDF documentation.

 

The WebLogic Execution Context ID (ECID)

In Fusion Middleware, an Execution Context ID (ECID) is a unique identifier that is injected as a request enters the system and that is associated with an execution flow as it goes from front-end components like Oracle HTTP Server through others such as Oracle WebCache and multiple WebLogic servers, down to the Oracle Database. It can be used to correlate events that occurred in different layers and that are associated with the execution flow of the same request.

The ECID will be tracked, used, and displayed by tools like JMC and JFR as long as the WebLogic Diagnostic Framework is enabled in any of the following ways:

- By configuring WLDF settings in the Administration Console.
- By writing and running WLST scripts.
- Programmatically, by using JMX and WLDF configuration MBeans.
- By editing the XML configuration files directly.

For the purposes of this article, one recommendation would be enabling a Diagnostic Context for the WebLogic servers you want to profile, or defining a Diagnostic Module that targets those servers. Both alternatives, along with many other advanced WLDF options, can be configured easily in the Administration Console.

The documentation on WLDF is very extensive and detailed so I won’t go into that in detail, but you can refer to the “Configure the WebLogic Diagnostic Framework” chapter in the Oracle WebLogic Server Administration Console Online Help for more information.

Once WLDF is enabled, an ECID will be created and initialized when a request enters the system (for example, when a client makes an HTTP request). The ECID will remain associated to the request, even as it crosses the JVM boundaries to other products (OHS, SOA Server, etc.).

We will use the ECID to correlate the events captured by JFR related to WebLogic.

 

Viewing WebLogic diagnostic data with Java Mission Control

We have referred to the Java Mission Control (JMC), which is the tool for viewing and analyzing the data collected by JFR.
JMC can be launched with the command $JAVA_HOME/bin/jmc, which starts a GUI application that you can use to open and view JFR recording files (that normally have .jfr extension).
However, in order to be able to see the data captured from WebLogic, we will need to install the WebLogic plug-in for JMC.
You can do so by following the next steps:

1. Open JMC by running $JAVA_HOME/bin/jmc
2. Once the GUI has been launched, go to the ‘Help’ menu and click on the option ‘Install new software’.
3. In the upcoming window, expand the ‘Flight Recorder plug-ins’ option from the list and select the ‘WebLogic Tab Pack’ checkbox.
4. Click ‘Next’ and follow the remaining instructions to install the plug-in.
5. Restart Java Mission Control.

Once these steps are completed, you should be able to see an additional ‘WebLogic’ tab in the left pane when you open a JFR recording with WebLogic event data, as follows:

 

JMC WebLogic tab 2

As you may have noticed, there are plenty of places in the recording where we can look for issues. Depending on the context we have of the problem, we may want to look at a specific sub-tab if we are suspicious about a particular type of problem (like the Database sub-tab).
The possible scenarios and use cases are countless, of course, so I’ll just focus on a couple of them that may give an idea of how to use the extensive information provided through the JMC GUI.

Scenario #1: Find expensive events from Servlet invocations

One possible scenario is that we want to drill down from a particular URL invocation to the underlying layers where the performance issue could be. Application users may have told us about a particularly slow URL. In this case, the WebLogic plug-in lets us focus on the events associated with one or more URI invocations.

You can follow the next steps:

1. Click on the WebLogic tab in the left pane of the JMC window.

2. Go to the ‘Servlets’ sub-tab. You will see the Servlets window containing different sections, including Servlet Operations, Servlet Invocations by URI and a chart at the top showing the peaks of occurring events.

Servlets subtab

3. Take a look at the 'Servlet Invocations by URI' panel at the bottom. It will contain a list of URIs invoked during the recording, along with useful attributes like their sample count and average duration. If your application serves client HTTP requests through Servlets (as is commonly the case), this is the place to search for the URI related to the particular action that you identified as poorly performing:

Servlets by URI

If the performance issue is actually located in your Java application and it really happens within the execution flow of the URL you suspect, you will see its corresponding URI showing up in the list with a high average duration.

4. Add the corresponding URI from the list to an Operative Set.

Operative Sets are a feature of Java Mission Control that allows grouping a set of events that share specific properties. The user may pick events of interest from different tabs (like the WebLogic tab) and add them to the Operative Set, which can then be reached and used from almost every tab in the JMC user interface, especially the Events tab, where the events can be managed, analyzed or removed.

In order to create the Operative Set with the events related to the execution of a given URI invocation, follow the next steps:

4.1. Select the corresponding URI from the list and right-click to get the context menu.
4.2. Select ‘Operative Set’ -> ‘Add Related Events’

OperativeSet submenu 1

4.3. In the upcoming submenu, select ‘With ECID=<very long hexadecimal id>.. (First in Selection)’.

OperativeSet submenu 2

This will add to the Operative Set the events associated with the first ECID found with an invocation of the URI in question (with the very long hexadecimal ID value).

It is also possible to add all the events related to all the invocations of the indicated URI (even with different ECIDs) by selecting 'With any ECID in Selection' in this step. The same would work if the option 'With URI=<URI_being_selected>' is selected.

After this, the events added to the Operative Set can be viewed in the Events tab.

5. Go to the ‘Events’ tab in the left vertical pane and visualize the events data. You will see multiple sub-tabs for different interesting aspects of the Event data.

Events tab

 

 By default, the Events tab will show data for all the events in the recording. However, that can be limited in each tab to the data in our Operative Set by checking the ‘Show only Operative Set’ checkbox.

6. Go to the 'Event types' tab, which appears at the left side of the JMC window, and select the 'WebLogic Server' checkbox. This will include WebLogic events in the Events tab as well:

Event types WebLogic checkbox

7. Click on the 'Log' sub-tab. This sub-tab displays a table of the available events. When you select one of the events, all the information (attribute values) in the event is shown in the lower half of the tab. Again, in this tab you can opt to see only what is in the Operative Set by checking the same box:

Events Log subtab

8. Play with the data shown.

As you may notice, the event log table has several attributes of interest by default:

- Event type
- Start time
- End time
- Duration
- Thread
- Java Thread Id
- OS Thread Id

Depending on what you need, you may sort the table by any of these columns. For instance, you can sort by duration and check the event type that took longer to execute.

The following example shows a sequence of events sorted by duration. Since we are only showing events associated with the same ECID, it is possible that the events on top of the list are aggregating time from underlying events.

Events Log by Duration

Once you identify an event type that is particularly time consuming, you can select it and its corresponding attributes will show at the bottom (in the ‘Event attributes’ pane). Among them, you can see the executing class name, the method name, and the thread stack of the event with the corresponding line number of the running code.

Events Log details 1

In the previous example, I've selected an event of type 'JDBC Statement execute' with a duration of 645 microseconds. Whilst in this case such a duration is insignificant, durations of several milliseconds may be too high for production systems running those statements concurrently thousands of times for hundreds of users. Fortunately, JFR provides enough detail based on the event type, such as the SQL statement and the invoking JDBC class (weblogic.jdbc.wrapper.PreparedStatement here), to spot choices that may have negative performance repercussions.

We can also see the sequence of events that occurred before and after the event in question by keeping it selected, and then sorting by ‘Start time’. That will give you a clear picture of the flow of events and their duration.

Event Log details 2

Tip: You may choose different combinations depending on your specific use case. For instance, in the case of application components running concurrently with multiple users, you may want to check whether all the executions had a similar issue. In that case, you could add all the events related to the invocations of a given URI by selecting 'With URI=<your_URI>' in step 4.3.

 

Scenario #2: Find expensive executions by their ECID’s

If we are not sure which URL/URI invocation resulted in a performance issue but have an estimate of when the issue happened, we can also search for long-running executions that occurred around that time and then find their associated events, with their respective durations and URIs.

Follow these steps:

1. Click on the WebLogic tab in the left pane of the JMC window.
2. Click on the ‘ECIDs’ sub-tab.

WebLogic ECID subtab

3. You will see three main sections in the window (Events chart, ECIDs and ECID log).

4. In the ECIDs table, you may sort by ‘Start time’ and then locate the ones close to the time where you suspect the issue occurred. An alternative is to sort by the ‘Duration’ column and find the executions with the longest duration.

Events Log ECID by Duration

5. Select the ECID you’d like to see more about.

6. Look at the ECID log table at the bottom. It will show the sequence of events for that particular ECID, including their start and end time, their duration and their URI (if the URI column is not visible, you can enable it by right clicking on any event, then selecting ‘Visible Columns’ -> URI).

Event Log ECID details

Once you identify an event that looks suspicious, you can go to the 'Events' tab -> Log sub-tab and locate it, getting additional information about it like the event thread stack, just as explained in Scenario #1.

 

Conclusion

What I’ve shown here is the teeny tiny tip of a huge iceberg of possibilities brought by the Java Flight Recorder and Java Mission Control. They put together things that, in the past, needed to be looked up in different places from different tools, and make it incredibly easy to access key information about your system.

Nowadays, as application containers like WebLogic have become the cornerstone of modern critical business applications, being able to troubleshoot efficiently against the clock can be the difference between success and failure for enterprise applications.

As platforms and languages evolve, advanced diagnostic features play an increasingly central role. We can only imagine what is coming in the years ahead, but for now there is still a lot to explore with these two great Java features.

Happy diagnosing!

How to plan for “Modern Marketing” architectures


The days of the company website being the most important online marketing asset are long over. No matter how sexy/compelling/brilliant a website is, it now represents just a small fraction of a customer's journey. While still important, the website is just one piece of the online puzzle that needs to be addressed. And this brings us to the Oracle stack. The fact is, the various touchpoints on a customer's journey need to be addressed by entirely different software tools. As such, to deliver an end-to-end managed journey we need multiple specialized tools deployed in a side-by-side manner. One tool cannot possibly address all the needs of managing a customer's journey across all the various online and offline properties. Consequently, it is important for clients, sales reps, partners, and consultants to accept that deploying a stack of SaaS and on-prem apps is currently the best practice so that all marketing bases are covered.

The standard way of describing a customer's journey is as a "sales funnel" where thousands or even millions of potential customers are filtered down, nurtured, and finally guided towards consummating a sale. In an online retail environment this maps quite well to the following stack of Oracle tools:

  • Oracle Marketing Cloud
  • Oracle Content Marketing
  • WebCenter Sites
  • Oracle Commerce

The above stack can be best shown in relation to the sales funnel in the following diagram:

sales funnel

Also available as a pptx file for inclusion into your presentations:

sales funnel

Note that being able to manage videos and placed ads (as well as analyzing their effectiveness) is a key part of the overall marketing funnel these days. Integrating these into Oracle Marketing Cloud is a key deliverable and is typically done with 3rd party plugins. The other key thing is making all content appropriate for each persona and locale. The combination of Oracle Content Marketing with WebCenter Sites addresses this requirement nicely and provides a single source of all such content, which can then be repurposed for both OMC and Commerce.

You will notice that “page layout” is not discussed in this blog post. The reason is that while still important, having all the touchpoints managed end-to-end is MORE important these days. That is what “Modern Marketing” is all about.

Using Oracle Documents Cloud REST API with Python Requests


Python Requests is a library that simplifies consuming RESTful resources from the client side. The Oracle Documents Cloud Service (DOCS) REST API fits well with Python Requests, allowing a service call to get a folder or file to be done in a few lines of code. Since Python is naturally friendly to JSON responses, parsing the Response object can be done using standard syntax.

The Requests library must be installed into Python using the pip utility (or easy_install on Windows). The example below also uses the "getpass" module to handle user entry of a password.

Python Requests link: http://docs.python-requests.org/en/latest/

Python getpass module link: https://docs.python.org/2/library/getpass.html
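
Before running the samples, it can be worth a quick check that the library is importable; a minimal sketch (assuming Requests was installed with pip as described above):

# Quick sanity check that the Requests library is available
import requests
print(requests.__version__)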

 

A first look at using Requests with DOCS REST services is to get a user's Personal Workspace. Notice that importing the Requests library allows calling an HTTP GET on a REST URL. All of the HTTP work is done by Python Requests; the client code need only pass in a URL, username, and password.

import requests
import getpass

docsurl='https://mydocsinstance/documents/api/1.1/folders/items'
username='peter.flies@oracle.com'

# Get password from user entry. This is using a module called "getpass"
pw = getpass.getpass("Enter password:")
response = requests.get(docsurl, auth=(username, pw))

# View the status code - should be 200. Error handling can be done with the status code. 
print (response.status_code)
# Header data is available from the Response object, which is part of Python Requests 
print (response.headers['content-type'])

# Get the JSON data
j = response.json()

# Navigate the JSON data using standard Python
print('count=' + j['count'])
print('errorCode=' + j['errorCode'])
print('totalResults=' + j['totalResults'])

print('Items:')
for item in j['items']:
	print ('type=' + item['type'] + ' name=' + item['name'] + ' owner=' + item['ownedBy']['displayName'])
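
As the comment in the example notes, error handling can key off the status code. A minimal sketch, reusing the same response object and the fields shown above, might look like this:

# Minimal error-handling sketch, reusing the 'response' object from the example above
if response.status_code != 200:
	print('Request failed: ' + str(response.status_code) + ' ' + response.reason)
else:
	items = response.json()['items']
	print('Items returned: ' + str(len(items)))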

 

An upload example requires a multipart HTTP POST request to send the file payload and a JSON payload. This example also shows the use of a Session in Python Requests. The Session can have the Authorization header set once and be re-used for all subsequent REST calls to the Oracle Documents Cloud Service. The example below uploads all files in a directory to a user's DOCS account. A folder GUID is needed for the upload to add the new files into the target folder. Some additional lines of code are added here to print the number of milliseconds that each upload takes. The upload request differs from the previous example in that the POST request needs a multipart payload to succeed. Notice that a data part called "jsonInputParameters" and a file part called "primaryFile" are both added to the request. Python's os module can handle opening the file and placing it into the multipart. A loop can grab each file from the directory and submit an upload request.

import os
import getpass
import time
import requests

docsurl='https://mydocsinstance/documents/api/1.1'
path = 'C:/TEMP/upload'
username = 'peter.flies@oracle.com'
uploadfolderid = 'F26415F66B0BE6EE53314461T0000DEFAULT00000000'

# Get user input to set the password. Set the REST client authorization header by directly passing in the username and password 
pw = getpass.getpass("Enter password:")
 
files = (file for file in os.listdir(path) 
	if os.path.isfile(os.path.join(path, file)))

# Requests has a reusable Session object. Init it with the Authorization header
s = requests.Session()
s.auth = (username, pw)

print('Uploading files from path ' + path + ' to Documents Cloud Service ' + docsurl + '.\n')
for file in files: 
	startMillis = int(round(time.time() * 1000))
	print('Uploading ' + file + '...')
	print(path + '/' + file)
	resourcePath = 'files/data'
	fullpath = path + '/' + file
	# Use a separate name for the multipart file payload so it does not shadow the 'files' generator above
	filepart = {
		'primaryFile': open(fullpath, 'rb')
	}
	jsondata='{\"parentID\": \"' + uploadfolderid + '\" }'
	data = {
		'jsonInputParameters': jsondata
	}
	response = s.post(docsurl + '/' + resourcePath, files=filepart, data=data)
	endMillis = int(round(time.time() * 1000))
	if(response.status_code == 200 or response.status_code == 201):
		j = response.json()
		print('Upload successful. ' + str(endMillis - startMillis) + ' ms. File id: ' + j['id'] + ', version number: ' + j['version'] + '\n')
	else:
		print('ERROR: Upload unsuccessful for file: ' + file)
		print(str(response.status_code) + ' ' + response.reason )
		if(response.json()):
			j = response.json()
			print('Error details: ' + j['errorCode'] + ' ' + j['errorMessage'])			
		else:
			print('Dump of response text:')
			print(response.text + '\n')



A file download is an HTTP GET but requires saving the file to a location on disk. Again, Python has utilities to simplify the downloading and saving of Oracle Documents Cloud files.

import getpass
import requests
import logging

docsurl='https://documents.us.oracle.com/documents/api/1.1'
savePath = 'C:/TEMP/download'
username = 'peter.flies@oracle.com'
fileid = 'DCA4BBA0908AE2F497832BC2T0000DEFAULT00000000'
resourcePath = 'files/{fileid}/data'
resourcePath = resourcePath.replace('{fileid}', fileid)

# Get user input to set the password. Set the REST client authorization header by directly passing in the username and password 
pw = getpass.getpass("Enter password:")
 
# Requests has a reusable Session object. Init it with the Authorization header
s = requests.Session()
s.auth = (username, pw)

with open(savePath, 'wb') as fhandle:
	response = s.get(docsurl + '/' + resourcePath, stream=True)

	if not response.ok:
		print("Error!") #What do you want to do in case of error? 

	for filePiece in response.iter_content(1024):
		if not filePiece:
			break
		# Write each chunk inside the loop so the whole file is saved, not just the last chunk
		fhandle.write(filePiece)
#no JSON in response, but check it for status code, headers, etc. 
print(response.status_code)

 

For services that you may want to call frequently, a Python class can be created that wraps the Requests library. The commonly used service calls and associated parameters can have a method signature that sets the default parameters, but can also be overridden as needed. In the example class file below, the “DOCSClient” class has a constructor that initializes the Session. Once a handle to a DOCSClient is created, a method called “itemPersonalWorkspace” can be called with parameters to set the sort order, limit, and offset. This sample class has methods for only a few of the Documents Cloud Service REST calls, but the example can be applied to any DOCS REST API.

 

import requests
import logging

class DOCSClient:
	def __init__(self, docsurl, username, password):
		self.docsurl = docsurl
		self.version = '1.1'
		self.restBaseUrl = docsurl + '/api/' + self.version + '/'
		self.s = requests.Session()
		self.s.auth = (username, password)
		self.appLinkRoles = ('viewer', 'downloader', 'contributor')
		self.roles = ('viewer', 'downloader', 'contributor', 'manager')
	
	# Sample Item Resource REST method
	def itemPersonalWorkspace(self, orderby='name:asc', limit=50, offset=0):
		resourcePath = 'folders/items' + '?orderby=' + orderby + '&limit=' + str(limit) + '&offset=' + str(offset)
		return self.s.get(self.restBaseUrl + resourcePath)		
		
	# Sample Folder Resource REST methods
	def folderQuery(self, folderid):
		resourcePath = 'folders/{folderid}'
		resourcePath = resourcePath.replace('{folderid}', folderid)
		return self.s.get(self.restBaseUrl + resourcePath)		

	def folderCreate(self, folderid, name, description=''):
		resourcePath = 'folders/{folderid}'
		resourcePath = resourcePath.replace('{folderid}', folderid)
		resourcePath = resourcePath + '?name=' + name + '&description=' + description
		return self.s.post(self.restBaseUrl + resourcePath)		

	# Sample Upload Resource REST method
	def fileUpload(self, parentID, filepath):
		resourcePath = 'files/data'
		files = {
			'primaryFile': open(filepath, 'rb')
		}
		
		jsondata='{\"parentID\": \"' + parentID + '\" }'
		data = {
			'jsonInputParameters'  : jsondata
		}
		response = self.s.post(self.restBaseUrl + resourcePath, files=files, data=data)		
		return response

 

 

Lastly, the newly created class can be used via an import statement. The class in this case was stored in a file called "oracledocsrequests.py". That file must be on the PYTHONPATH to be found when the code is run; once it is, a single import statement makes the script aware of the DOCSClient class. Once a client object is created, with the URL, username, and password passed in, any of the methods can be called in a single line. A folder creation example is shown below using one of the class methods. Note that the DOCSClient class defines the description parameter as empty by default, but the example overrides the empty string with "created from Python" as the folder description.

import getpass
from oracledocsrequests import DOCSClient

docsurl='https://documents.us.oracle.com/documents'
username='peter.flies@oracle.com'
folderid = 'F26415F66B0BE6EE53314461T0000DEFAULT00000000'

# Get user input to set the password. Set the REST client authorization header by directly passing in the username and password 
pw = getpass.getpass("Enter password:")

client = DOCSClient(docsurl, username, pw)

print('\n******* folderCreate *******')
response = client.folderCreate(folderid, 'My python folder', 'created from Python')
j = response.json()
print(response.status_code)
print('name=' + j['name'])
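
The other methods in the sample class can be called in the same one-line fashion. The sketch below is only illustrative: it reuses the same client object and folder GUID from the example above, and the local file path is hypothetical.

# Hypothetical follow-on calls reusing the same DOCSClient instance and folder GUID
upload_response = client.fileUpload(folderid, 'C:/TEMP/upload/example.docx')  # example path, adjust to a real file
print(upload_response.status_code)

workspace_response = client.itemPersonalWorkspace(orderby='name:asc', limit=10)
for item in workspace_response.json()['items']:
	print(item['type'] + ' ' + item['name'])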

 

The Python Requests library slogan is “HTTP for Humans”. The powerful Requests library makes using the Oracle Documents Cloud Service REST API simple enough to write your own utilities to interact with a DOCS workspace.

Addition by Subtraction – A Quick Win in Performance


Recently, I worked with a customer where we would notice severe performance degradation over time within JCS (Java Cloud Service).

The product used was Oracle Service Bus (OSB) where it would take around 2 seconds to process a request and send a response.  Over time, we would see the same request/response take over 1 minute!  If we restarted the JVM and tested again, the response time was back to 2 seconds but would again degrade over time.

In an effort to troubleshoot this, I started out by taking JFRs (Java Flight Recordings) to compare when things were somewhat fast versus when they were horrendously slow. We noticed that the XML transformations became very slow. What we did not know was whether it was only the transformations that became slow or whether everything became slow.

In an effort to determine whether it was only OSB transformations, I created a simple WebApp that would stress the CPU. When invoked, it recursively calculates the 40th Fibonacci number.
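
The stack trace further below shows the JSP's fib method calling itself recursively; a rough Python sketch of that kind of naive, CPU-bound recursion (not the actual test code, which was a JSP doing the equivalent in Java) looks like this:

# Rough sketch of a naive recursive Fibonacci used purely as a CPU stress test
def fib(n):
	if n < 2:
		return n
	return fib(n - 1) + fib(n - 2)

print(fib(40))  # deliberately expensive: exponential number of recursive calls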

After server start this would typically finish in about 5 seconds. After the server gets into the bad state, the browser would time out and we would start to see STUCK threads on WebLogic where the stack would look like:

"[STUCK] ExecuteThread: '60' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=10 tid=0x00007f3980066800 nid=0x1abc runnable [0x00007f39c9ce5000]
   java.lang.Thread.State: RUNNABLE
        at jsp_servlet.__index.fib(__index.java:61)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)
        at jsp_servlet.__index.fib(__index.java:69)

...

This would consume a full CPU until it completed. Eventually, it did complete, but when things were really bad it would take over 1 hour (you read that correctly) to complete.

It almost seemed like the CPU was being throttled severely, especially since the thread dump only showed the expected recursion in my Fibonacci JSP code.

At this point, I wanted to see whether a native stack trace showed me anything different. Here is what a pstack showed for one of the threads stuck in the Fibonacci recursion:

Thread 22 (Thread 0x7f39cacf8700 (LWP 6828)):
#0  0x00007f3a320d6473 in CodeHeap::largest_free_block() const () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#1  0x00007f3a31f678af in CodeCache::largest_free_block() () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#2  0x00007f3a31f928b5 in CompileBroker::compile_method(methodHandle, int, int, methodHandle, int, char const*, Thread*) () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#3  0x00007f3a31ddedbd in AdvancedThresholdPolicy::submit_compile(methodHandle, int, CompLevel, JavaThread*) () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#4  0x00007f3a32418180 in SimpleThresholdPolicy::event(methodHandle, methodHandle, int, int, CompLevel, nmethod*, JavaThread*) () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#5  0x00007f3a3213b637 in InterpreterRuntime::frequency_counter_overflow_inner(JavaThread*, unsigned char*) () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#6  0x00007f3a3213f5c6 in InterpreterRuntime::frequency_counter_overflow(JavaThread*, unsigned char*) () from /u01/jdk/jre/lib/amd64/server/libjvm.so
#7  0x00007f3a260108fb in ?? ()
#8  0x0000000000000007 in ?? ()
#9  0x00007f3a260108c7 in ?? ()
#10 0x00007f39cacf55e0 in ?? ()

What we can see here is that the method is being compiled over and over. With help from a colleague, it was determined that this was a known bug with a specific JVM flag: -XX:+TieredCompilation.

This flag is set by JCS itself when provisioning the instances. You can see this in the Server Start properties for the managed server:

While the journey was long (I didn’t cover many of the troubleshooting/diagnostics that were done in between), the fix was simple.

Remove the -XX:+TieredCompilation flag.

So, what were the results after making this change? With my Fibonacci program, it would now finish at 500 milliseconds! That is even a huge improvement over our baseline of 5 seconds after server start! And that result remained consistent over time too!
And what about with OSB? Instead of our baseline of 2 seconds that we obtained right after server start, we were now at 700 milliseconds! That also remained consistent over time! So, not only did removing this flag help with the severe performance degradation over time, it significantly improved on our fastest times!

This bug has since been fixed in the 1.8.0.0 JVM but will not be fixed in the 1.7 JVM, and thus the flag should never be used with versions earlier than 1.8.x.

SPECIAL NOTE: In terms of JCS itself, this flag has now been removed for the upcoming release (15.2.3 version). After this release, any NEW JCS instances that are provisioned will NOT have this flag. For any existing JCS instances that you have, please remove this flag.

Thanks for reading!

Integrating Oracle Fusion Sales Cloud with Oracle Business Intelligence Cloud Service (BICS)


Introduction

This blog describes how to integrate Oracle Fusion Sales Cloud with Business Intelligence Cloud Service (BICS). The blog outlines how to programmatically load Sales Cloud data into BICS, making it readily available to model and display on BICS dashboards.

Three artifacts are created in the process:

1)    OTBI Answers / Analysis Request

Sales Cloud data is retrieved via an Oracle Transactional Business Intelligence (OTBI) Answers / Analysis request created within Sales Cloud Analytics.

2)    BICS Database Table

The table is created within the BICS database to load / store the Sales Cloud data.

3)    Stored Procedure – within BICS Database

A dynamic stored procedure is created within the BICS database (either Database Schema Service or Database as a Service). The stored procedure calls an external SOAP web service that runs and returns the results from the OTBI Answers / Analysis request. These results are then inserted into the BICS Database Table created in step 2.

The steps below detail how to create these three artifacts and call the data load integration process.

Main Article

Creating the OTBI Answers / Analysis Request

1) Create the OTBI Answers / Analysis report in Sales Cloud Analytics

1

2) Depending on the design of the OTBI report, you may need to disable the cache to force it to refresh each time.

From the Advanced tab

a) Check “Bypass Oracle BI Presentation Services Cache”

b) Choose Partial Update = Entire Report

c) Apply XML

10

 

d) In Prefix: SET VARIABLE   DISABLE_CACHE_HIT=1;

e) Click Apply SQL

f) Click Save

g) Close report and re-open

h) Go to Advance tab to confirm cache changes have stuck

11

3) Confirm the request returns the desired results

2

Create the BICS Database Table

(This example uses Oracle Database Schema Service)

1) Open SQL Workshop from Oracle Application Express

3

 

2)    Launch SQL Commands

4

3)    Run the CREATE_TABLE SQL Command to create the SALES_CLOUD_PIPELINE table in the BICS database

To view SQL in plain text click here: CREATE_TABLE

CREATE TABLE "SALES_CLOUD_PIPELINE"
   (    "OPPORTUNITY_ID" NUMBER,
    "OPPORTUNITY_NAME" VARCHAR2(100 BYTE),
    "OPTY_DATE" DATE,
    "CUSTOMER_NAME" VARCHAR2(100 BYTE),
    "CUSTOMER_STATE" VARCHAR2(50 BYTE),
    "PRODUCT_NAME" VARCHAR2(100 BYTE),
    "CATALOG" VARCHAR2(50 BYTE),
    "PRODUCT_FAMILY" VARCHAR2(50 BYTE),
    "QTY" NUMBER,
    "OPPORTUNITY_REVENUE" NUMBER,
    "OWNER_LAST_NAME" VARCHAR2(10 BYTE),
    "TERRITORY_OWNER" VARCHAR2(50 BYTE),
    "TERRITORY" VARCHAR2(50 BYTE),
    "LOAD_DTTM" DATE,
    "LOADED_BY_USER" VARCHAR2(32 BYTE)
   );

5

Creating the Stored Procedure – within BICS Database

Create and run the bics_scs_integration stored procedure that calls the web service to pull the Sales Cloud data and load the Pipeline data into the SALES_CLOUD_PIPELINE table.

The stored procedure leverages the APEX_WEB_SERVICE API. This enables you to integrate Sales Cloud with the BICS Database Schema Service, allowing you to interact with web services via PL/SQL. In this example we invoke a SOAP-style web service; however, the APEX_WEB_SERVICE API can also be used to invoke RESTful-style web services. The web service returns the results in XML, which are then parsed and inserted into separate columns of the database table.


To view SQL in plain text click here: bics_scs_integration

create or replace procedure bics_scs_integration
    (
         p_session_url                 varchar2
         ,p_data_url                   varchar2
         ,p_report_folder              varchar2
         ,p_au                         varchar2
         ,p_ap                         varchar2
         ,p_target_table               varchar2
         ,p_col_def                    varchar2
         ,p_proxy                      varchar2  default 'www-proxy.us.oracle.com'
    ) is
  l_envelope CLOB;
  l_xml XMLTYPE;
  v_session_id varchar2(128) ;
  v_xml xmltype ;
   v_clob clob ;
  v_cdata clob ;
      p_pos number ;
      p_prev_pos number ;
      f_more_available boolean ;
      f_index number ;
      f_max_index number ;
      v_col_spec varchar2(128) ;
      v_col_name varchar(64);
      v_col_type varchar2(64) ;
      v_comma varchar2(2) := '' ;
      v_sel_list varchar2(4000) ;
      v_cdef_list varchar2(4000) ;
      v_curr_db_user varchar2(32) ;
BEGIN
   select user into v_curr_db_user from dual ;
  l_envelope := '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v7="urn://oracle.bi.webservices/v7">
<soapenv:Header/>
<soapenv:Body>
<v7:logon>
<v7:name>'||p_au||'</v7:name>
<v7:password>'||p_ap||'</v7:password>
</v7:logon>
</soapenv:Body>
</soapenv:Envelope>';
  l_xml      := apex_web_service.make_request(
    p_url => p_session_url
    ,p_envelope => l_envelope
    ,p_proxy_override => p_proxy
    );
     select sess.session_id
          into v_session_id
     from
     xmltable
     (
          xmlnamespaces
          (
               'urn://oracle.bi.webservices/v7' as "sawsoap"
               --'http://www.ms.com/xml'
               --'http://schemas.xmlsoap.org/ws/2004/08/addressing' as "wsa",
               --'http://www.w3.org/2003/05/soap-envelope' as "soap"
          ),
          '//sawsoap:logonResult'
          passing l_xml
          columns
          session_id               varchar2(50)     path     '//sawsoap:sessionID'
     ) sess;
     dbms_output.put_line('Session ID = ' || v_session_id);
       l_envelope := '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Header/>
   <soapenv:Body>
      <v7:executeXMLQuery>
         <v7:report>
            <v7:reportPath>'||p_report_folder||'</v7:reportPath>
            <v7:reportXml></v7:reportXml>
         </v7:report>
         <v7:outputFormat></v7:outputFormat>
         <v7:executionOptions>
            <v7:async></v7:async>
            <v7:maxRowsPerPage></v7:maxRowsPerPage>
            <v7:refresh></v7:refresh>
            <v7:presentationInfo></v7:presentationInfo>
            <v7:type></v7:type>
         </v7:executionOptions>
         <v7:reportParams>
            <!--Zero or more repetitions:-->
            <v7:filterExpressions></v7:filterExpressions>
            <!--Zero or more repetitions:-->
            <v7:variables>
               <v7:name></v7:name>
               <v7:value></v7:value>
            </v7:variables>
            <!--Zero or more repetitions:-->
            <v7:nameValues>
               <v7:name></v7:name>
               <v7:value></v7:value>
            </v7:nameValues>
            <!--Zero or more repetitions:-->
            <v7:templateInfos>
               <v7:templateForEach></v7:templateForEach>
               <v7:templateIterator></v7:templateIterator>
               <!--Zero or more repetitions:-->
               <v7:instance>
                  <v7:instanceName></v7:instanceName>
                  <!--Zero or more repetitions:-->
                  <v7:nameValues>
                     <v7:name></v7:name>
                     <v7:value></v7:value>
                  </v7:nameValues>
               </v7:instance>
            </v7:templateInfos>
            <!--Optional:-->
            <v7:viewName></v7:viewName>
         </v7:reportParams>
         <v7:sessionID>'
         || v_session_id ||
         '</v7:sessionID>
      </v7:executeXMLQuery>
   </soapenv:Body>
</soapenv:Envelope>';
  l_xml      := apex_web_service.make_request(
    p_url => p_data_url
    ,p_envelope => l_envelope
    ,p_proxy_override => p_proxy
    );
             select (cdata_section)
             into v_cdata
     from
     xmltable
     (
           xmlnamespaces
          (
                   'urn://oracle.bi.webservices/v7' as "sawsoap"
                    --'http://www.ms.com/xml'
               --'http://schemas.xmlsoap.org/ws/2004/08/addressing' as "wsa",
               --'http://www.w3.org/2003/05/soap-envelope' as "soap"
         ),
          '//sawsoap:executeXMLQueryResult/sawsoap:return'
          passing l_xml
          columns
          cdata_section               clob     path     '//sawsoap:rowset/text()'
     ) dat
     where rownum = 1;
    f_more_available := true ;
    p_prev_pos := 0 ;
    f_index := 0 ;
    f_max_index := 3 ;
    while f_more_available
    loop
       f_index := f_index + 1 ;
       p_prev_pos := p_prev_pos + 1 ;
       p_pos :=  instr(p_col_def, '|', p_prev_pos, 1) ;
       if ( p_pos = 0 )
       then
          -- this is the only column or is the last
          -- dbms_output.put_line('pos :'||p_pos) ;
         -- dbms_output.put_line('prev_pos :'||p_prev_pos) ;
          v_col_spec := substr(p_col_def, p_prev_pos) ;
         --  dbms_output.put_line('pos :'||p_pos) ;
          f_more_available  := false ;
       else
           v_col_spec := substr(p_col_def, p_prev_pos, p_pos - p_prev_pos) ;
           p_prev_pos := p_pos;
     end if ;
     p_pos := instr(v_col_spec, ':') ;
     v_col_name := substr(v_col_spec, 1, p_pos - 1) ;
     v_col_type := substr(v_col_spec, p_pos +1) ;
      dbms_output.put_line('Col spec :'||v_col_spec) ;
      dbms_output.put_line('Col name :'||v_col_name) ;
      dbms_output.put_line('Col spec :'||v_col_type) ;
      v_sel_list := v_sel_list || v_comma || v_col_name ;
      v_cdef_list := v_cdef_list || v_comma || v_col_name ||' '||v_col_type ||' PATH '''||v_col_name||'''';
      v_comma := ',';
    end loop;
    dbms_output.put_line('sel list: '||v_sel_list) ;
    dbms_output.put_line('cdef list: '||v_cdef_list) ;
      execute immediate 'delete ' || p_target_table ;
      execute immediate 'insert into '||p_target_table||' '||
       'SELECT  '||v_sel_list  ||', sysdate, '''||v_curr_db_user||'''
         FROM XMLTABLE(
                XMLNamespaces(default ''urn:schemas-microsoft-com:xml-analysis:rowset''),
                ''$aaa/rowset/Row'' PASSING XMLTYPE(:cdata) AS "aaa"
                   COLUMNS '||v_cdef_list||'
                           )' using v_cdata;
commit ;
END;
/

6

Execute stored procedure BICS_SCS_INTEGRATION

Replace the parameters below to correspond to your own BICS and Sales Cloud environments:

1)    Sales Cloud Analytics Server Name

2)    Path and name of (OTBI) Answers / Analysis Request

3)    User with permissions to run OTBI Answers / Analysis Request

4)    Password of user running the request

5)    Column number and column data type of each field returned by the OTBI report

6)    Name of BICS Database table loading data into

To view SQL in plain text click here: run_bics_scs_integration

declare
v_row_count number ;
begin
select count(*) into v_row_count from SALES_CLOUD_PIPELINE;
dbms_output.put_line('Original count: '||v_row_count) ;
delete SALES_CLOUD_PIPELINE;
commit ;
select count(*) into v_row_count from SALES_CLOUD_PIPELINE;
dbms_output.put_line('Deleted count: '||v_row_count) ;
bics_scs_integration
(
'https://servername.oraclecloud.com/analytics-ws/saw.dll?SoapImpl=nQSessionService'
,'https://servername.oraclecloud.com/analytics-ws/saw.dll?SoapImpl=xmlViewService'
,'/shared/Custom/Sales/Pipeline/Sales_Cloud_Extract'
,'username'
,'password'
,'SALES_CLOUD_PIPELINE'
,'Column0:NUMBER|Column1:VARCHAR2(100)|Column2:DATE|Column3:VARCHAR2(100)|Column4:VARCHAR2(50)|Column5:VARCHAR2(100)|Column6:VARCHAR2(50)|Column7:VARCHAR2(50)|Column8:NUMBER|Column9:NUMBER|Column10:VARCHAR2(10)|Column11:VARCHAR2(50)|Column12:VARCHAR2(50)'
);
select count(*) into v_row_count from SALES_CLOUD_PIPELINE;
dbms_output.put_line('Refreshed count: '||v_row_count) ;
end;
/

Confirm data loaded correctly

7

 

Confirm Sales Cloud Data is now available in BICS

8

Cache

Once the data is loaded, the cache must be manually cleared in the Data Modeler. In June 2015, cache purging via the BICS REST API will be made available; until then, manually clear the cache from the Data Modeler by clicking the cog to the right of the table and selecting "Clear Cached Data".

9

Further Reading

Click here for more information on using the APEX_WEB_SERVICE API.

Summary

This blog has provided a set of sample artifacts that leverage the APEX_WEB_SERVICE API to integrate Oracle Fusion Sales Cloud with Oracle Business Intelligence Cloud Service (BICS). The example uses SOAP style web services. That said, the blog may also be a good starting point for those wanting to perform similar integrations using RESTful style web services. Likewise, the blog focuses on integration with Fusion Sales Cloud and BICS. However, this methodology could also be used to integrate BICS with other Oracle and third-party applications.

Consuming RESTful services in Oracle Database Cloud Service from AngularJS


Oracle Cloud Database Schema Service (DbCS Schema) allows developers to expose RESTful (Representational State Transfer) services in JSON (JavaScript Object Notation) format from the database. This nice feature offers an open and easy way for applications constructed using different technologies to access data in DbCS Schema.

AngularJS is an open source JavaScript framework. It provides declarative tags along with JavaScript libraries to simplify the development of an HTML5 user interface. AngularJS also provides a $http service that facilitates the integration of Web UI pages with backend services via a RESTful interface.

In this post, I will walk you through the steps of

  1. creating a simple table in the DbCS Schema service; this table will be used to expose the RESTful services
  2. exposing RESTful/JSON services (GET, POST and PUT methods in this post) in Oracle DbCS Schema
  3. building an AngularJS sample page to consume these RESTful services.

The AngularJS UI pages will be deployed on Java Cloud Service – SaaS Extension.

 

Create a table in DbCS Schema

In order to demonstrate how to expose the RESTful services from DbCS Schema, we will first create a simple table with pre-populated sample data. The table creation and data insertion are provided in the following SQL script, which we will call person_ddl.sql.

 CREATE TABLE PERSON
 (
     LASTNAME VARCHAR2(20) NOT NULL, 
     FIRSTNAME VARCHAR2(20) NOT NULL, 
     EMAIL VARCHAR2(40), 
     PHONE_NUMBER VARCHAR2(20)
 );
 INSERT INTO PERSON(LASTNAME,FIRSTNAME,EMAIL,PHONE_NUMBER) values('Smith','Mike','mike.smith@mycompany.com','111 222 3333');

  • Open the DbCS Schema console and click on “SQL Scripts” under “SQL Workshop”

Screen Shot 2015-04-29 at 9.29.18 am

  • Upload and Run the person_ddl.sql

Once the table is created, we are ready to expose the RESTful services from DbCS Schema.

Expose RESTful services from DbCS Schema

Create a RESTful Service Module

  • Go to SQL Workshop->RESTful Services, create a module sample.person as follows:

Screen Shot 2015-04-24 at 11.35.31 am

If you intend to deploy the AngularJS application in a domain other than Oracle Cloud, for example on an on-premise HTTP server, you must enable Cross-Origin Resource Sharing (CORS) by entering the domain name in the “Origins Allowed” field. In this post, we simply put “*”, which allows JavaScript applications from any domain to invoke the services in DbCS Schema.

Expose the GET method from DbCS Schema

  • Under the module “sample.person”, create a template “persons/”.

Screen Shot 2015-04-17 at 2.43.49 pm

  • Then create a GET Resource Handler under the newly created template “persons/”.

Screen Shot 2015-04-17 at 2.42.31 pm

  • Click the “Test” button to test the GET method; you should be able to see the pre-populated record.
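
For reference, the GET handler in this example can be backed by nothing more than a query over the PERSON table created earlier. A minimal sketch, assuming the handler's source type is “Query” (so the result set is returned as JSON automatically):

 SELECT LASTNAME, FIRSTNAME, EMAIL, PHONE_NUMBER
   FROM PERSON
  ORDER BY LASTNAME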

Expose the POST method from DbCS Schema

  • Under the template “persons/”, create a POST Resource Handler.

Screen Shot 2015-04-17 at 2.46.41 pm
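
The POST handler's source is typically a small PL/SQL block that inserts a row using the attributes of the incoming JSON body as bind variables. A minimal sketch, assuming the payload provides lastname, firstname, email and phone_number (matching the form fields used later in the AngularJS code):

 BEGIN
   -- Bind variables are assumed to be populated from the JSON body attributes
   INSERT INTO PERSON (LASTNAME, FIRSTNAME, EMAIL, PHONE_NUMBER)
   VALUES (:lastname, :firstname, :email, :phone_number);
 END;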

Expose the PUT method from DbCS Schema

Under the module “sample.person”, create a new template as follows. The {lastname} is used as the identifier to indicate which record needs to be updated in DbCS Schema.

Screen Shot 2015-04-17 at 2.52.33 pm

  • Under the newly created template “person/{lastname}”, create a PUT Resource Handler.

Screen Shot 2015-04-17 at 2.59.08 pm
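
Similarly, the PUT handler's source can update the matching row, using the {lastname} from the URI template together with the attributes of the JSON body. A minimal sketch under the same bind-variable assumption:

 BEGIN
   -- :lastname comes from the URI template, the rest from the JSON body
   UPDATE PERSON
      SET FIRSTNAME    = :firstname,
          EMAIL        = :email,
          PHONE_NUMBER = :phone_number
    WHERE LASTNAME = :lastname;
 END;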

Now we have completed exposing the RESTful services as shown in the following figure.

Screen Shot 2015-04-20 at 9.52.36 am

DbCS Schema doesn’t provide an embedded Test feature for POST and PUT methods. You are free to use your favourite REST Client such as Chrome Postman or Firefox Poster to test these two methods.

Create a HTML Page

Create an HTML page with AngularJS, say DemoGet.html, in your favorite IDE to get the list of persons. Invoking the RESTful services in DbCS Schema is straightforward using the AngularJS $http service with the DbCS Schema URL. Deploy the web application to JCS-SX.

The snippet of AngularJS code for the GET method:

  <script>
    angular.module("myapp", []).controller("SampleController", function($scope, $http) {
     $http.get('https://<your-dbcs-host>/apex/person/persons/')
        .success(function(data, status, headers, config) {
            $scope.persons = data; 
     });
    });
  </script>

The snippet of AngularJS code for the POST method:

      <script>
         angular.module("myapp", []).controller("SampleController", function($scope, $http) {
            $scope.personform = {};
            $scope.personform.submitPerson = function(item, event) {
                var dataObject = {
                    "lastname": $scope.personform.lastname,
                    "firstname": $scope.personform.firstname,
                    "email": $scope.personform.email,
                    "phone_number": $scope.personform.phone_number
                };
                $http.post('https://<your-dbcs-host>/apex/person/persons/',dataObject,{})
                    .success(function(data, status, headers, config) {
                     alert($scope.personform.lastname+"'s form submitted!");
                });
            };
         });    
      </script>

The code for the PUT method is very similar to the one for POST. You need to pass the lastname of the person as the identifier in the URL as follows:

$http.put('https://<your-dbcs-host>/apex/person/person/smith',dataObject,{})

Conclusion

This post demonstrates how to expose RESTful services in JSON format from DbCS Schema. This feature enables easy access to DbCS Schema from a broader set of technologies in a heterogeneous environment. AngularJS was chosen as an example to demonstrate access to the exposed RESTful services.

WebLogic Server: Saving Disk Space in /tmp


Introduction

Many WebLogic Server (WLS) implementations use JRockit 28 as the JVM. JRockit 28 comes with the very useful JRockit Flight Recorder, which helps in many troubleshooting situations.

Problem

In high-volume WLS implementations with many domains and many managed servers, the Flight Recorder can fill up the disk holding the temporary file storage (often located at /tmp). This works fine for a while, but then suddenly results in failures when some servers are restarted. It often helps to remove old and outdated data from the temporary file storage.

Solution

Monitoring the disk space requires a lot of discipline, or a decent tool like Oracle Enterprise Manager Cloud Control 12c, and may be forgotten during peak hours. A simple solution is to use a file store with much higher disk capacity. This can be done by setting the JRockit Flight Recorder output repository to the new location, passing the following parameter to the JVM: -XX:FlightRecorderOptions=disk=true,maxsize=1GB,repository=/u01/app/jfrtmpfiles. Here is how (pick only one of these options):

At the command line:

export JAVA_OPTIONS=-XX:FlightRecorderOptions=disk=true,maxsize=1GB,repository=/u01/app/jfrtmpfiles
./startWeblogicServer.sh

In DOMAIN_HOME/bin/setDomainEnv.sh, add it right after the export of WL_HOME:

WL_HOME="C:/oracle/wls/1036/wlserver_10.3"
export WL_HOME
JAVA_OPTIONS=-XX:FlightRecorderOptions=disk=true,maxsize=1GB,repository=/u01/app/jfrtmpfiles
export JAVA_OPTIONS

For Fusion Applications only, update the file DOMAIN_HOME/config/fusionapps_start_params.properties in each Fusion Applications domain:

fusion.default.default.sysprops=-XX:FlightRecorderOptions=disk=true,maxsize=1GB,repository=/u01/app/jfrtmpfiles -Dapplication.top...

 

“Lift and Shift” On-Premise RPD to BI Cloud Service (BICS)


Introduction

“Lift and Shift” is the method by which an On-Premise RPD, created in the BI Admin tool, can be uploaded to a BI Cloud Service (BICS) instance and made available to report authors.

This cuts down on development time, allowing a single model to be used both on-premise and in the cloud.  It also allows multiple subject areas to be created for a BICS reporting environment, accessible through the Visual Analyzer and Answers reporting tools in BICS.

The process is simple, but has some minimum requirements:

1. A Database as a Service (DBaaS) Instance is required to house the data.  In future releases of BICS it should be possible to use the DB Schema Service that comes with BICS, but this is not currently supported.

2. The version of the on-premise BI Admin tool used to create the RPD needs to be 11.1.1.6 or later

3. The RPD must pass the consistency check within the BI Admin tool

 

Main Article

For demonstration purposes, this article will assume that an RPD has already been created against an on-premise data source.

The following steps should be followed to ‘Lift and Shift’ the RPD to the BICS instance.

1. Replicate the data from the on-premise data sources to the DBaaS Database

For the RPD to work in BICS, all data that is referenced within the RPD must be copied to the DBaaS instance.  This can be accomplished through SQL Developer, many ETL tools, or the freely available Data Sync tool (covered in this article).

In this example, 3 tables in an on-premise Oracle database:

Windows7_x86

were added to create a simple RPD.

Windows7_x64

Those tables were then created in the DBaaS database, and the data replicated.
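
Before repointing the RPD, it is worth sanity-checking that the row counts in the DBaaS schema match the on-premise source. A minimal sketch, run against the DBaaS schema; the table names below are hypothetical placeholders for the three replicated tables:

 -- Compare these counts with the corresponding on-premise source tables
 SELECT 'SAMPLE_FACT'  AS table_name, COUNT(*) AS row_count FROM SAMPLE_FACT
 UNION ALL
 SELECT 'SAMPLE_DIM_1', COUNT(*) FROM SAMPLE_DIM_1
 UNION ALL
 SELECT 'SAMPLE_DIM_2', COUNT(*) FROM SAMPLE_DIM_2;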

 

2. Repoint the RPD to the DBaaS Database

Once the data has been copied, the RPD needs to be repointed to the DBaaS database.  Each Connection pool within the RPD should be updated.

From within the Oracle DBaaS Cloud console, make a note of the IP address, port and SID for the DBaaS database:

Oracle_Database_Cloud_Service

Within each connection pool in the RPD, change the host, the SID, and the port if necessary. Amend the user name and password for the schema user in the DBaaS instance.

Windows7_x64

Since each Connection Pool needs to be updated independently, it would be possible to point the RPD to multiple DBaaS instances, or to different schemas within a single DBaaS.  In theory there is no limit to the complexity of the RPD that can be used.  The only requirement is that all the data that is referenced must be replicated in the DBaaS instance.  Best practice, however, suggests that subject areas and objects not needed in BICS should be removed from the RPD to improve performance.

DBaaS acts just like any other database within the BI Admin tool, so an administrator could select ‘Import Metadata’ to check the connection, and add more tables to the RPD if needed.

Windows7_x64

3. Check Global Consistency

Once the RPD has been updated, check for Global Consistency.  The RPD must pass this test for it to work correctly in BICS. Resolve any issues, then check again.

Windows7_x64

 

4. Uploading RPD to BICS

The RPD is loaded into BICS through the Business Intelligence Cloud Service console.

Before uploading, it is good practice to take a Snapshot of the current System State so that the system can be returned to the ‘pre-RPD state’ should there be a reason to do this.  From within ‘Snapshots’, select ‘New Snapshot’.

Windows7_x64

Once a snapshot has been created, select ‘Upload Snapshot’, then choose the local RPD file and enter the RPD password.

Windows7_x64

The RPD contains the connections to the DBaaS database, so the Database connection in the Service Console does not need to be changed.

Windows7_x64

However, if the DBaaS Instance is to be the permanent location for all BICS data, then the option to ‘Use a different cloud database’ should be selected, and details of the DBaaS database entered.

Once an RPD is published to BICS, the Data Modeling tool cannot be used.  The RPD is essentially a ‘read-only’ model in BICS.  If model changes are required, these should be made to the local copy of the RPD in the BI Admin tool, and then uploaded again through the Service Console.

‘Lift and Shift’ is a one-way move.  The RPD cannot be moved back to on-premise from BICS.  Reports created against the on-premise RPD cannot be migrated to the BICS environment, and will instead need to be re-created in BICS.  There are plans to add this functionality in the future.

Should it be necessary to revert to using the Data Modeling Tool, restore the snapshot created in the above step.

 

5. Creating New Analyses and Visualizations

Once the RPD has successfully been published, the subject areas are available as separate models for reporting in Visual Analyzer and BI Answers.

 

Windows7_x64

Summary

This article walked through the steps to publish an on-premise RPD to BICS using the ‘Lift and Shift’ method.
