
TCP/IP Tuning


Introduction

This article is meant to provide an overview of TCP tuning.

It is important to understand that there is no single set of optimal TCP parameters. The optimal tuning will depend on your specific operating system, application(s), network setup, and traffic patterns.

The content presented here is a guide to common parameters that can be tuned, and how to check common TCP problems.

It is recommended that you consult the documentation for your specific operating system and applications for guidance on recommended TCP settings. It is also highly recommended that you test any changes thoroughly before implementing them on a production system.

 

TCP auto tuning

 

Depending on your specific operating system/version and configuration, your network settings may be autotuned.

To check whether autotuning is enabled on many Linux-based systems:

cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf

or

sysctl -a | grep tcp_moderate_rcvbuf

 

If tcp_moderate_rcvbuf is set to 1, autotuning is active and buffer size is adjusted dynamically.

While TCP autotuning provides adequate performance for some applications, there are times when manual tuning will yield a performance increase.
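If you decide to tune buffer sizes manually, receive-buffer autotuning can be turned off first. A minimal sketch (the change takes effect immediately but is not persistent across reboots):

# disable receive-buffer autotuning so manually configured values are used as-is
sysctl -w net.ipv4.tcp_moderate_rcvbuf=0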

 

Common TCP Parameters

This table shows some commonly tuned Linux TCP parameters and what they are for. You can look up the equivalent parameter names for other operating systems.

net.core.rmem_default – Default memory size of receive (RX) buffers used by sockets for all protocols. Value is in bytes.
net.core.rmem_max – Maximum memory size of receive (RX) buffers used by sockets for all protocols. Value is in bytes.
net.core.wmem_default – Default memory size of transmit (TX) buffers used by sockets. Value is in bytes.
net.core.wmem_max – Maximum memory size of transmit (TX) buffers used by sockets. Value is in bytes.
net.ipv4.tcp_rmem – TCP-specific setting for receive buffer sizes. This is a vector of three integers: [min, default, max]. The max value can’t be larger than net.core.rmem_max. Values are in bytes.
net.ipv4.tcp_wmem – TCP-specific setting for transmit buffer sizes. This is a vector of three integers: [min, default, max]. The max value can’t be larger than net.core.wmem_max. Values are in bytes.
net.core.netdev_max_backlog – Maximum number of incoming packets queued when an interface receives packets faster than the kernel can process them. Once this number is exceeded, the kernel starts to drop packets.
File limits – While not strictly a TCP parameter, this is important for TCP to function correctly. ulimit on Linux shows the limits imposed on the current user and system. You must have hard and soft limits high enough for the number of TCP sockets your system will open. These can be set in /etc/security/limits.d/:

soft        nofile    XXXXX

hard       nofile    XXXXX
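As a hedged example (the file name and the 65536 value are placeholders; size them for your own workload), a drop-in file under /etc/security/limits.d/ might look like the following, and ulimit -n shows the soft limit currently in effect:

# /etc/security/limits.d/90-nofile.conf  (example values only)
*       soft    nofile    65536
*       hard    nofile    65536

# check the current soft limit for open files
ulimit -n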

 

The running value of these parameters can be checked on most Linux-based operating systems using sysctl.

To see all of your currently configured parameters, use:

sysctl -a

If you want to search for a specific parameter or set of parameters, you can use grep. Example:

sysctl -a | grep rmem

 

The values you set for these depend on your specific usage and traffic patterns. Larger buffers don’t necessarily equate to more speed. If the buffers are too small, you’ll likely see overflow as applications can’t service received data quickly enough. If buffers are too large, you’re placing an unnecessary burden on the kernel to find and allocate memory which can lead to packet loss.

Key factors that will impact your buffer needs are the speed of your network (100 Mbit, 1 Gbit, 10 Gbit) and your round-trip time (RTT).

RTT is the measure of time it takes a packet to travel from the host, to a destination, and back to the host again. A common tool to measure RTT is ping.
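For example, to take a quick RTT sample (the destination host name is a placeholder):

ping -c 10 remote.example.com
# use the avg value from the rtt min/avg/max summary line as the RTT input for the calculation below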

It is important to note that just because a server has a 10 Gbit network interface, that does not mean it will receive 10 Gbit of traffic. The entire infrastructure determines the maximum bandwidth of your network.

A common way to calculate your buffer needs is as follows:

Bandwidth (in bits per second) * round-trip latency (in seconds) = TCP window size in bits; divide by 8 to get the TCP window size in bytes.

Example, using 50ms as our RTT:

NIC speed is 1000 Mbit (1 Gbit), which equals 1,000,000,000 bits per second.

RTT is 50ms, which equals .05 seconds.

Bandwidth-delay product (BDP) in bits: 1,000,000,000 * 0.05 = 50,000,000

Convert BDP to bytes: 50,000,000 / 8 = 6,250,000 bytes, or 6.25 MB

Many products and network appliances recommend doubling or even tripling your BDP value to determine your maximum buffer size.
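The calculation above is easy to script. Below is a minimal bash sketch; the NIC speed and RTT values are illustrative, so substitute your own measurements:

# bdp.sh - rough bandwidth-delay product estimate (example inputs)
NIC_MBIT=1000     # NIC speed in Mbit/s
RTT_MS=50         # measured round-trip time in milliseconds

BDP_BITS=$(( NIC_MBIT * 1000000 * RTT_MS / 1000 ))
BDP_BYTES=$(( BDP_BITS / 8 ))
echo "BDP: ${BDP_BYTES} bytes"
# many vendors suggest 2-3x the BDP as the maximum buffer size
echo "Suggested max buffer: $(( BDP_BYTES * 2 )) to $(( BDP_BYTES * 3 )) bytes"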

Table with sample buffer sizes based on NIC speed:

 

NIC Speed (Mbit)   RTT (ms)   NIC speed (bits)   BDP (bytes)   BDP (MB)   net.core.rmem_max   net.ipv4.tcp_rmem
100                100        100000000          1250000       1.25       2097152             4096 65536 2097152
1000               100        1000000000         12500000      12.5       16777216            4096 1048576 16777216
10000              100        10000000000        125000000     125        134217728           4096 1048576 33554432

 

Notice that for the 10 Gbit NIC the net.core.rmem_max value is greater than the net.ipv4.tcp_rmem max value. This is an example of splitting the size across multiple data streams. Depending on what your server is being used for, you may have several streams running at one time. For example, a multistream FTP client can establish several streams for a single file transfer.

Note that for net.ipv4.tcp_{r,w}mem, the max value can’t be larger than the equivalent net.core.{r,w}mem_max.

 

net.core.netdev_max_backlog should be set based on your system load and traffic patterns. Some common values used are 32768 or 65536.

 

Jumbo frames

For Ethernet networks, enabling jumbo frames (a larger Maximum Transmission Unit, or MTU) on all systems (hosts and switches) can provide a significant performance improvement, especially when the application uses large payload sizes. Enabling jumbo frames on some hosts and not others can cause bottlenecks, so it is best to enable jumbo frames on all hosts in the configuration or on none of them.

The default 802.3 Ethernet frame size is 1518 bytes. The Ethernet header consumes 18 bytes of this, leaving an effective maximum payload of 1500 bytes. Jumbo frames increase the payload from 1500 to 9000 bytes. Ethernet frames use a fixed-size header that contains no user data and is pure overhead, so transmitting a larger frame is more efficient because the overhead-to-data ratio is improved.
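As a hedged illustration (the interface name eth0 is a placeholder, and every device in the path must support the larger MTU), the MTU can be inspected and raised on Linux as follows:

# show the current MTU
ip link show eth0

# raise the MTU to 9000 bytes (not persistent; add it to your network configuration to survive a reboot)
ip link set dev eth0 mtu 9000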

 

Setting TCP parameters

The following is a list of methods for setting TCP parameters on various operating systems. This is not an all-inclusive list; consult your operating system documentation for more details.

If you make changes to any kernel parameters, it is strongly recommended that you test them before applying them in a production environment.

It is also suggested that you consult product documentation for recommended settings for specific products. Many products will provide minimum required settings and tuning guidance to achieve optimal performance for their product.

Windows

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
MaxUserPort = dword:0000fffe

Solaris

ndd -set /dev/tcp tcp_max_buf 4194304

AIX

/usr/sbin/no -o tcp_sendspace=4194304

 

Linux

sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
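Note that sysctl -w only changes the running kernel. To make Linux settings persistent across reboots, they are typically placed in /etc/sysctl.conf or a file under /etc/sysctl.d/ and then reloaded; a minimal sketch with illustrative values sized for the 1 Gbit example above:

# /etc/sysctl.d/90-tcp-tuning.conf  (example values only)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 1048576 16777216
net.ipv4.tcp_wmem = 4096 1048576 16777216
net.core.netdev_max_backlog = 32768

# load the file without rebooting
sysctl -p /etc/sysctl.d/90-tcp-tuning.conf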

HP-UX

ndd -set /dev/tcp tcp_ip_abort_cinterval 20000

 

Common TCP parameters by operating system

The following is a list of commonly tuned parameters for various operating systems. Consult the documentation for your specific operating system and/or product for more details on what parameters are available, their recommended settings, and how to change their values.

Solaris

  • tcp_time_wait_interval
  • tcp_keepalive_interval
  • tcp_fin_wait_2_flush_interval
  • tcp_conn_req_max_q
  • tcp_conn_req_max_q0
  • tcp_xmit_hiwat
  • tcp_recv_hiwat
  • tcp_cwnd_max
  • tcp_ip_abort_interval
  • tcp_rexmit_interval_initial
  • tcp_rexmit_interval_max
  • tcp_rexmit_interval_min
  • tcp_max_buf

 

AIX

  • tcp_sendspace
  • tcp_recvspace
  • udp_sendspace
  • udp_recvspace
  • somaxconn
  • tcp_nodelayack
  • tcp_keepinit
  • tcp_keepintvl

Linux

  • net.ipv4.tcp_timestamps
  • net.ipv4.tcp_tw_reuse
  • net.ipv4.tcp_tw_recycle
  • net.ipv4.tcp_fin_timeout
  • net.ipv4.tcp_keepalive_time
  • net.ipv4.tcp_rmem
  • net.ipv4.tcp_wmem
  • net.ipv4.tcp_max_syn_backlog
  • net.core.rmem_default
  • net.core.rmem_max
  • net.core.wmem_default
  • net.core.wmem_max
  • net.core.netdev_max_backlog

 

HP-UX

  • tcp_conn_req_max
  • tcp_ip_abort_interval
  • tcp_rexmit_interval_initial
  • tcp_keepalive_interval
  • tcp_recv_hiwater_def
  • tcp_recv_hiwater_max
  • tcp_xmit_hiwater_def
  • tcp_xmit_hiwater_max

Checking TCP performance

 

The following are some useful commands and statistics you can examine to help determine the performance of TCP on your system.

ifconfig

ifconfig -a, or ifconfig <specific_interface>

Sample output:

eth1 Link encap:Ethernet HWaddr 00:00:27:6F:64:F2

inet addr:192.168.56.102 Bcast:192.168.56.255 Mask:255.255.255.0

inet6 addr: fe80::a00:27ff:fe64:6af9/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:5334443 errors:35566 dropped:0 overruns:0 frame:0

TX packets:23434553 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:15158 (14.8 KiB) TX bytes:5214 (5.0 KiB)

 

Examine the RX and TX packets lines of the output.

errors – Packet errors. Can be caused by numerous issues, such as transmission aborts, carrier errors, and window errors.
dropped – How many packets were dropped and not processed, possibly because of low memory.
overruns – Overruns often occur when data comes in faster than the kernel can process it.
frame – Frame errors, often caused by a bad cable or bad hardware.
collisions – Usually caused by network congestion.
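On newer Linux distributions where ifconfig may not be installed, the same counters are available through the iproute2 tools, for example:

ip -s link show eth1
# the RX and TX blocks list bytes, packets, errors, dropped, overrun, and carrier counters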

 

netstat -s

netstat -s will display statistics for various protocols.

Output will vary by operating system. In general, you are looking for anything related to packets being “dropped”, “pruned”, and “overrun”.
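For example, to narrow the output to the usual warning signs (the exact message wording varies by kernel version):

netstat -s | grep -iE "drop|prune|overrun|collapse"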

 

Below is sample TCPExt output.

Depending on your specific system, a given line will only be displayed if its value is non-zero.

XXXXXX packets pruned from receive queue because of socket buffer overrun – Receive buffer possibly too small.
XXXXXX packets collapsed in receive queue due to low socket buffer – Receive buffer possibly too small.
XXXXXX packets directly received from backlog – Packets are being placed in the backlog because they could not be processed fast enough. Check whether you are dropping packets. Use of the backlog does not necessarily mean something bad is happening; it depends on the volume of packets in the backlog and whether or not they are being dropped.

 

Further reading

 

The following additional reading provides the RFC for TCP extensions, as well as recommended tuning for various applications.

RFC 1323

RFC 1323 defines TCP Extensions for High Performance

https://www.ietf.org/rfc/rfc1323.txt

 

Oracle Database 12c

https://docs.oracle.com/database/121/LTDQI/toc.htm#BHCCADGD

 

Oracle Coherence 12.1.2

https://docs.oracle.com/middleware/1212/coherence/COHAG/tune_perftune.htm#COHAG219

JBoss 5 clustering

https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/Clustering_Tuning.html

 

Websphere on System z

https://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaag/wp64bit/l0wpbt00_ds_linux_kernel_settings.htm

 

Tuning for Web Serving on the Red Hat Enterprise Linux 6.4 KVM Hypervisor

ftp://public.dhe.ibm.com/linux/pdfs/Tuning_for_Web_Serving_on_RHEL_64_KVM.pdf

 

Oracle Glassfish server 3.1.2

https://docs.oracle.com/cd/E26576_01/doc.312/e24936/tuning-os.htm#GSPTG00007

 

Solaris 11 tunable parameters

https://docs.oracle.com/cd/E26502_01/html/E29022/appendixa-28.html

 

AIX 7 TCP tuning

http://www.ibm.com/developerworks/aix/library/au-aix7networkoptimize3/

 

Redhat 6 Tuning

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Performance_Tuning_Guide/index.html#main-network


Integration Cloud Service (ICS) On-Premise Agent Installation


The Oracle On-Premises Agent (also known as the Connectivity Agent) is necessary for Oracle ICS to communicate with on-premise resources without the need for firewall configurations or VPN. Additional details about the Agent can be found under New Agent Simplifies Cloud to On-premises Integration. The purpose of this A-Team blog is to give a consolidated and simplified flow of what is needed to install the agent and provide a foundation for other blogs (e.g., E-Business Suite Integration with Integration Cloud Service and DB Adapter). For the detailed online documentation for the On-Premises Agent, see Managing Agent Groups and the On-Premises Agent.

On-Premises Agent Installation

The high-level steps for getting the On-Premises Agent installed on your production POD consist of two activities: 1. Create an Agent Group in the ICS console, and 2. Run the On-Premises Agent installer. Step 2 will be done on an on-premise Linux machine and the end result will be a lightweight WebLogic server instance that will be running on port 7001.

Create an Agent Group

1. Login to the production ICS console and view landing page.
2. Verify that the ICS version is 15.4.5 or greater.
3. Scroll down on ICS Home page and select Create Agents. Notice this brings you to the Agents page of the Designer section.
4. On the Agents page click on Create New Agent Group.
5. Provide a name for your agent group (e.g., AGENT_GROUP).
6. Review the Agent page containing the new group.

Run the On-Premises Agent Installer

1. Click on the Download Agent Installer drop-down on the Agent page, select Connectivity Agent, and save the file to the on-premise Linux machine where the agent will be installed and run.
2. Extract the contents of the zip file to obtain cloud-connectivity-agent-installer.bsx.  This .bsx is the installation script that will be executed on the on-premise machine where the agent will reside.  A .bsx is a self-extracting Linux bash script.
3. Make sure the cloud-connectivity-agent-installer.bsx file is executable (e.g., chmod +x cloud-connectivity-agent-installer.bsx) and execute the shell script.  NOTE: It is important to specify the SSL port (443) as part of the host URL.  For example:
./cloud-connectivity-agent-installer.bsx -h=https://<ICS_HOST>:443 -u=[username] -p=[password] -ad=AGENT_GROUP
4. Return to the ICS console and the Agents configuration page.
5. Review the Agent Group.
6. Click on Monitoring and select the Agents icon on the left-hand side.
7. Review the Agent monitoring landing page.
8. Review the directory structure for the agent installation.
As you can see this is a standard WLS installation.  The agent server is a single-server configuration where everything is targeted to the Admin server and is listening on port 7001.  Simply use the scripts in the ${agent_domain}/bin directory to start and stop the server.
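For example, assuming a default domain location (the paths and log file name are illustrative), the standard WebLogic domain scripts can be used:

# start the agent server in the background and capture its output
cd ${agent_domain}/bin
nohup ./startWebLogic.sh > agent_server.log 2>&1 &

# stop the agent server
./stopWebLogic.sh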

We are now ready to leverage the agent for things like the Database or EBS Cloud Adapter.

E-Business Suite Integration with Integration Cloud Service and DB Adapter


Introduction

Integration Cloud Service (ICS) is an Oracle Platform-as-a-Service (PaaS) offering for implementing message-driven integration scenarios. This article introduces the use of ICS for integrating an on-premise E-Business Suite (EBS) instance via the Database Adapter. While EBS in recent releases offers a broad set of integration features like SOAP and REST support (e.g., via the Integrated SOA Gateway), these interfaces are not available in older versions like 11.5.x. In the past it has been a proven approach to use Oracle Fusion Middleware integration products (SOA, OSB, etc.) running on-premise in a customer data center to connect to an EBS database via the DB Adapter. As we discuss in this article, this capability will shortly also be available in a cloud-based integration solution.

While we focus on EBS integration here, the DB Adapter in ICS works similarly against any other custom database. The main reason to use an EBS context is the business case shown below, where ICS is connected to Mobile Cloud Service (MCS) to provide a mobile device solution.

Business Case and Architecture

It is not hard to imagine that Oracle customers running EBS 11.5.x might want to add a mobile channel for their end users. One option could be an upgrade to a recent release of EBS. As this is in most cases a bigger project, an alternative could be the creation of a custom mobile solution via Oracle JET and MCS, as shown below. MCS is a PaaS offering and requires access to an underlying database via REST/JSON. This is where ICS appears in this architecture.

Figure: Architecture

In the absence of native SOAP or REST capabilities in the EBS 11.5.x tech stack, integration via ICS closes that gap. Any database access activities (retrieving data, CRUD operations, etc.) can run via an ICS/DB Adapter connection to an EBS on-premise database. ICS itself provides a REST/JSON interface for the external interaction with EBS. This external interface is generic and not restricted to MCS as the caller. However, in our business case ICS with the DB Adapter fulfills the role of a data access layer for a mobile solution.

As shown in the architecture figure above the following components are involved in this end-to-end mobile solution:

  • The DB Adapter uses a local component named the ICS Agent, installed on-premise in the EBS data center. This agent communicates with the database via JCA and moves data between the DB Adapter in ICS and the database
  • Communication between the ICS Agent and the DB Adapter is set up via Oracle Messaging Service, tunneled through HTTPS
  • DB Adapter provides a standard SQL interface for database access
  • Part of the embedded features in ICS are data mapping and transformation capabilities
  • The external REST endpoint in ICS will be made public through REST Adapter in ICS

The ICS configuration and communication in the architecture figure represent a generic approach. In this sample the mobile solution for EBS 11.5.x makes use of the described data access capabilities as follows (mobile components and JET are not in scope of this document, as we focus on the ICS part here):

  • MCS connects to ICS via a connector or generic REST interface
  • EBS data will be processed and cached in MCS
  • Mobile devices communicate with MCS via REST to render the EBS data for visualization and user interaction

In the remainder of this article we focus purely on the ICS and DB Adapter integration and leave the mobile features out of scope. The technical details of the ICS and DB Adapter implementation itself are not covered here either, as they are the main content of another blog. Instead we show how the implementation features can be used from an Application Integration Developer’s perspective.

ICS Configuration Overview

At the beginning of an ICS-based integration there are some configuration activities to be done, such as the creation of connections. This is a one-time, or rather first-time, task to make ICS ready for the creation of integration flows. It is usually not an Application Developer’s activity; in most cases a dedicated ICS Administrator will perform the following actions.

At least two connections must be set up for this EBS integration via database communication:

  • Database Adapter pointing to the EBS database – the database connection parameters will be used by the ICS Agent running in-house in the customer’s data center
  • REST Adapter to provide a REST interface for external communication

The screenshot below shows a sample configuration page for the DB Adapter connected to an EBS instance. The main parameters describe a local connection from the ICS Agent to the database: hostname, port, SID.

On this configuration page, a local ICS Agent must also be assigned to this DB Adapter.


In most cases it will make sense to use the EBS database user APPS for this connection, as this credential provides the most universal and context-sensitive access to the EBS data model.


The other connection to set up is a REST interface (referred to as ICS LocalRest in this article) used for inbound requests and outbound responses. As shown in the screenshot below, this is a quite straightforward task without extensive configuration in our case. Variations are possible – especially for the security policy, username, etc.:

  • Connection Type: REST API Base URL
  • Connection URL: https://<hostname>:<port>/ics
  • Security Policy: Basic Authentication
  • Username: <Weblogic_User>
  • Password: <password>
  • Confirm Password: <password>


After setting up these two connections we are ready to create an integration between the EBS database and any other system connected via REST.

DB Adapter based Integration with EBS

During our work we established some good practices that are worth sharing here. In general we had good experience with a top-down approach to creating an integration flow, which looks as follows:

  • Identify the parameter in REST call to become part of the JSON payload (functionality of this integration point) for the external interface
  • Identify the EBS database objects being involved (tables, views, packages, procedures etc)
  • Create a JSON sample message for inbound and another one for outbound
  • Design the data mapping between inbound/outbound parameters and SQL statement or PLSQL call
  • Create an EBS DB integration endpoint and enter the SQL statement or the call to the PL/SQL procedure/function dedicated to performing the database activity
  • Create a local REST integration endpoint to manage the external communication
  • Assign the previously created inbound and outbound sample JSON messages to the request and response action
  • Create a message mapping for inbound parameters to SQL/PLSQL parameters
  • Do the same for outbound parameters
  • Add a tracking activity, save the integration and activate it for an external usage

The DB adapter is able to handle complex database types and map them to record and array structures in JSON. This means there are no obvious limitations on passing nested data structures to PL/SQL packages via JSON.

Here is a sample. In PL/SQL we define a data type as follows:

TYPE timeCard IS RECORD (
startTime VARCHAR2(20),
stopTime VARCHAR2(20),
tcComment VARCHAR2(100),
tcCategoryID VARCHAR2(40));
TYPE timeCardRec IS VARRAY(20) OF timeCard;

The parameter list of the procedure embeds this datatype in addition to plain type parameters:

procedure createTimecard(
userName   in varchar2,
tcRecord   in timeCardRec,
timecardID out NUMBER,
status     out varchar2,
message     out varchar2 );

The JSON sample payload for the IN parameters would look like this:

{
"EBSTimecardCreationCollection": {
   "EBSTimecardCreationInput": {
       "userName": "GEVANS",
       "timeEntries" : [
           {
             "startTime": "2015-08-17 07:30:00",
             "stopTime": "2015-08-17 16:00:00",
             "timecardComment": "Regular work",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-18 09:00:00",
             "stopTime": "2015-08-18 17:30:00",
             "timecardComment": "",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-19 08:00:00",
             "stopTime": "2015-08-19 16:00:00",
             "timecardComment": "Product Bugs Fixing",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-20 08:30:00",
             "stopTime": "2015-08-20 17:30:00",
             "timecardComment": "Customers Demo Preparation",
             "timecardCategoryID": "31"
           },{
             "startTime": "2015-08-21 09:00:00",
             "stopTime": "2015-08-21 17:00:00",
             "timecardComment": "Holiday taken",
             "timecardCategoryID": "33"
           }
           ] }
     }
}

The JSON sample below carries the output information from the PL/SQL package back inside the response message:

{
   "EBSTimecardCreationOutput":
   {
       "timecardID": "6232",
       "status": "Success",
       "message": "Timecard with ID 6232 created for User GEVANS”
   }
}

As shown, we can use complex types in the EBS database and create a corresponding JSON structure that can be mapped 1:1 for request and response parameters.

Creating an EBS Integration

To start creating an EBS integration, an Application Developer must log in to the assigned Integration Cloud Service instance with the username and password as provided.


The entry screen after login shows the available activities:

  • Connections
  • Integrations
  • Dashboard

As an Application Developer we will choose Integrations to create, modify, or activate integration flows. Connection handling has been shown earlier in this article, and the Dashboard is usually used to monitor runtime information.

To create a new integration flow choose Create New Integration and Map My Data. This will create an empty integration where you have the opportunity to connect to adapters/endpoints and to create data mappings.

Enter the following information:

  • Integration Name : Visible Integration name, can be changed
  • Identifier : Internal Identifier, not changeable once created
  • Version :  Version number to start with
  • Package Name (optional) : Enter name if integration belongs to a package
  • Description (optional) : Additional explanatory information about integration

The screenshot below shows an integration that is 100% complete and ready for activation. When creating a new integration, both the source and target sides will be empty. The suggestion is to start by creating the source, as marked on the left side in the figure below.


As mentioned before, it is good practice to follow a top-down approach. In this case the payload for the REST service is already defined and exists in the form of a JSON sample.

The following information will be requested when running the Oracle REST Endpoint configuration wizard:

  • Name of the endpoint (what do you want to call your endpoint?)
  • Description of this endpoint
  • Relative path of this endpoint like /employee/timecard/create in our sample
  • Action for this endpoint like GET, POST, PUT, DELETE
  • Options to be configured like
    • Add and review parameters for this endpoint
    • Configuration of a request payload
    • Configure this endpoint to receive the response

The sample screenshot below shows a configuration where a POST operation, including the request and response, is handled by this REST endpoint.

The next dialog window configures the request parameter, and the JSON sample is taken as the payload file. The payload content will appear later in the mapping dialog as the input structure.

The response payload is configured similarly to the request payload. As mentioned, the input/output parameters are supposed to be defined in a top-down approach for this endpoint. In the response payload dialog we assign the sample JSON payload structure defined as the output payload for this REST service.

Finally the summary dialog window appears and we can confirm and close this configuration wizard.

The next action is a similar configuration for the target – in our sample, the DB adapter connected to the EBS database.

The DB adapter configuration wizard starts with a Basic Information page where the name of this endpoint is requested and a general decision has to be made whether the service will use a SQL statement or make a PL/SQL procedure/function call.

As shown in the screenshot below, the subsequent dialog for PL/SQL-based database access basically starts by choosing the schema, package, and procedure/function to be used. For EBS databases the schema name for PL/SQL packages and procedures is usually APPS.

After making this choice the configuration is done. Any in/out parameters and return values of a specific function become part of the request/response payload and appear in the message mapping dialog later.

If the endpoint will run a plain SQL statement, just choose Run a SQL Statement in the Basic Information dialog window.

A different dialog window appears, which allows entering a SQL statement that may be a query or even a DML operation. Parameters must be passed in JCA notation with a preceding hash mark (#).

After entering the SQL statement it must be validated by clicking the Validate SQL Query button. Any validation error messages that appear must be corrected before this configuration step can be finalized. Once the statement has been successfully validated, a schema file is generated.

By clicking on the schema file URL, a dialog window shows the generated structure as shown below. The elements of this structure have to be mapped in the transformation component later, once the endpoint configuration is finished.

The newly created integration contains two transformations after the endpoint configuration has been finished – one for request/inbound and another one for response/outbound mappings.

The mapping component itself follows the same principles as the comparable XSLT mapping tools in Fusion Middleware’s integration products. As shown in the screenshot below, the mapped fields are marked with a green check mark. The sample shows an input structure with a single field (here: userName) and a collection of records.

The sample below shows the outbound message mapping. In the corresponding PL/SQL procedure three parameters are marked as type OUT and carry the return information in the JSON output message.

Once finished with the message mappings, the final step for integration flow completion is the addition of at least one tracking field (see the link at the top of the page). This means one field in the message payload has to be identified for monitoring purposes. The completion level will change to 100% afterwards. The integration must be saved, and the Application Developer can return to the integration overview page.

The last step is the activation of the integration flow – supposed to be a straightforward task. Once the completion level of 100% has been reached, the integration flow is ready to be activated.

After clicking the Activate button, a confirmation dialog appears asking whether this flow should be traced or not.

Once activated, the REST endpoint for this integration is enabled and ready for invocation.


Entering the following URL in a browser window will test the REST interface and return a sample:

  • https://<hostname>:<port>/integration/flowapi/rest/<Integration Identifier>/v<version>/metadata

Testing the REST integration workflow requires a tool like SoapUI to post a JSON message to the REST service. In this case the URL above changes by adding the integration access path as configured in the REST endpoint configuration wizard:

  • https://<hostname>:<port>/integration/flowapi/rest/<Integration Identifier>/v<version>/employee/timecard/create
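As a hedged alternative to SoapUI (the credentials and the payload file name are placeholders), the same endpoint can also be invoked with curl:

curl -u "<username>:<password>" \
     -H "Content-Type: application/json" \
     -X POST \
     -d @timecard_request.json \
     "https://<hostname>:<port>/integration/flowapi/rest/<Integration Identifier>/v<version>/employee/timecard/create"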

Security Considerations

Earlier in this document we discussed the creation of a DB endpoint in EBS and the authentication as the APPS user. In general it is possible to use other DB users instead. Using a higher-privileged user like SYSTEM is probably not required and is also not recommended, due to the impact if this connection were compromised.

Multiple factors influence the security setup tasks to be done:

  • What are the security requirements in terms of accessed data via this connection?
    • Gathering of non-sensitive information vs running business-critical transactions
    • Common data access like reading various EBS configuration information vs user specific and classified data
  • Does this connection have to provide access to all EBS objects in database (packages, views across all modules) or can it be restricted to a minimum of objects being accessed?
  • Is the session running in a specific user context or is it sufficient to load data as a feeder user into interface tables?

Depending on the integration purpose identified above, the security requirements may range from extremely high to moderate. To restrict user access as much as possible, it would be possible to create a user with access limited to only a few objects, like APPLSYSPUB. Access to PL/SQL packages would be granted on demand.

If database access is required to run in a specific context, the existing EBS DB features to put a session into a dedicated user or business org context – FND_GLOBAL.APPS_INITIALIZE or MO_GLOBAL.INIT (R12 onward) – must be used. That will probably have an impact on the choice between running a plain SQL statement and a PL/SQL procedure. With the requirement to perform a preceding call to FND_GLOBAL, even a SELECT statement has to run inside a procedure this way, and the result values must be declared as OUT parameters as shown previously.

In general, the requirement to perform user authentication is outside the scope of this (EBS) database adapter. In practice, the upper layer on top of ICS must ensure that no unsolicited user access is given. While connection encryption via SSL is supposed to be the standard, there could obviously be a need to create full logical session management for end-user access, including user identification, authentication, and session expiration.

Such a deep-dive security discussion was out-of-scope for this blog and should be handled in another article.

For non-EBS databases similar considerations will obviously apply.

Contribution and Conclusion

This blog post was dedicated to giving an overview of the quite new DB adapter in ICS. While recent EBS releases can benefit from integrating via the EBS adapter or built-in tools, older versions probably cannot. Using the DB adapter will possibly be the preferred method to create cloud-based access to a legacy on-premise EBS database.

At this point I’d like to thank my teammate Greg Mally for his great contribution! We have worked, and still work, closely together in our efforts to provide good practices for ICS adoption by our customers. Greg has recently published a great blog of his own giving a deeper technical look behind the scenes of ICS and the DB adapter, so it is well worth reading too!

Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services


Introduction

 

This article presents an overview of how to integrate Oracle Data Integrator (ODI) on-premise with cloud services.  Cloud computing is now a service or a utility in high demand.  Enterprises have a mix of on-premise data sources and cloud services.  Oracle Data Integrator (ODI) on-premise can enable the integration of both on-premise data sources and cloud services.

This article discusses how to integrate ODI on-premise with three types of cloud services:  Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).  The first part of this article discusses the components required to integrate ODI on-premise with SaaS applications.  The ODI on-premise integration with PaaS is illustrated with three Oracle PaaS products:  Oracle Database Cloud Service (DBCS), Oracle Database Exadata Cloud Service (ExaCS), and Oracle Business Intelligence Cloud Service (BICS).  The last part of this article discusses the integration of ODI on-premise with Oracle Storage Cloud Service (SCS), an IaaS product.

 

Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services

 

This article defines ODI on-premise as having ODI installed and running on computers on the premises of a company.  More specifically, ODI on-premise refers to having the ODI agent and the ODI repository installed on computers and databases on the premises of a company.  Additionally, on-premise data sources are data servers running on computers on the premises of a company.  Figure 1 below shows the integration of ODI on-premise with several cloud services:

 

Figure 1: ODI On-Premise Integration with Cloud Services


 

Figure 1 above illustrates the ODI on-premise integration with both Oracle and non-Oracle cloud services.  ODI on-premise can integrate with Oracle cloud services such as Database Cloud Service, Business Intelligence Cloud Service, Oracle Storage Cloud Service, and Oracle Sales Cloud.  Likewise, ODI on-premise can integrate with non-Oracle cloud services such as Salesforce, Google Analytics, Success Factors, and Workday.

The Oracle cloud services illustrated above are part of the Oracle Public Cloud, which can be organized into three categories:  Oracle Software as a Service (SaaS), Oracle Platform as a Service (PaaS), and Oracle Infrastructure as a Service (IaaS).  Oracle SaaS includes a variety of software services that can be categorized in several areas:  Supply Chain, Human Resources, Analytics, Social, CRM, Financials, and many others.  Oracle PaaS includes database and middleware, as well as development, management, security and integration.  Oracle IaaS offers a set of core infrastructure capabilities like elastic compute and storage to provide customers the ability to run any workload in the cloud.

The next sections of this article provide an overview of how to integrate ODI on-premise with these three types of cloud services.

 

ODI On-Premise Integration with Software as a Service (SaaS)

 

ODI on-premise can integrate with Software as a Service (SaaS) applications.  Figure 2 below shows a list of both Oracle SaaS applications and non-Oracle SaaS applications:

 

Figure 2: ODI On-Premise to Software as a Service (SaaS)


 

In order to integrate ODI on-premise with SaaS applications, at a minimum, three components are required:  a JDBC driver, an ODI Technology, and a set of ODI Knowledge Modules (KMs).

The JDBC driver is required to establish the connection between the ODI on-premise and the SaaS application. Some of these drivers are supplied by third-party vendors and Oracle partners such as Progress Data Direct.   These JDBC drivers can connect to Oracle SaaS applications such as Oracle Marketing Cloud, Oracle Service Cloud, and Oracle Database Cloud.  In addition to Oracle SaaS applications, JDBC drivers from Progress Data Direct can connect to other non-Oracle SaaS applications such as Salesforce Cloud.   For a list of cloud data sources supported by Progress Data Direct, go to “Progress Data Direct Connectors.”

The ODI Technology is a connection object under the ODI Topology that has been customized to enable connectivity between the client (ODI) and the SaaS application.  This ODI technology must be configured to correctly map the data types of the JDBC driver with those found in the ODI technology.

The ODI Knowledge Modules are required for two purposes:  to reverse-engineer the attributes or objects found in the SaaS application, and to perform the integration task of pulling and sending data – respectively – from and to the SaaS application.  Bristlecone, an Oracle partner, offers the following types of ODI knowledge modules for SaaS applications:  Reverse-Engineering Knowledge Modules (RKMs), Loading Knowledge Modules (LKMs), and Integration Knowledge Modules (IKMs).   For a list of knowledge modules and SaaS applications supported by Bristlecone, go to “Bristlecone ODI Cloud Integration Pack.”

Although JDBC drivers for SaaS applications offer a great deal of functionality, it can be tedious to create and maintain ODI technologies for each SaaS application.  Alternatively, a generic approach can be followed by implementing universal or generic ODI technologies, so they can be used for more than one SaaS application.  An example of how to implement universal ODI technologies for more than one SaaS application can be found in the following Oracle article: “A Universal Cloud Applications Adapter for ODI.”  An ODI webcast demonstration on how to use universal ODI technologies can be found at the following location: “Oracle Data Integrator Special Topic:  Salesforce.com & Universal Cloud.”  This ODI webcast illustrates step-by-step how to use ODI universal technologies with JDBC drivers to extract data from Salesforce.  The JDBC drivers illustrated in this ODI webcast are from Progress Data Direct.

For a complete list of JDBC drivers and ODI knowledge modules available for SaaS applications, see the following Oracle article:  “Need to Integrate your Cloud Applications with On-Premise Systems… What about ODI?”

 

ODI On-Premise Integration with Platform as a Service (PaaS)

 

Oracle Platform as a Service (PaaS) includes a variety of cloud products.  These cloud products are divided into several categories.  Two of these categories include Data Management, and Business Analytics.  Data Management includes products such as Database Cloud Service (DBCS), Exadata Cloud Service (ExaCS), and Big Data Cloud Service (BDCS).  Business Analytics includes cloud products such as Business Intelligence (BICS), Big Data Preparation (BDP), and Big Data Discovery (BDD).

The following sections focus on how to integrate ODI on-premise with three Oracle PaaS products:  Database Cloud Service (DBCS), Exadata Cloud Service (ExaCS), and Business Intelligence Cloud Service (BICS).  For a complete list of Oracle PaaS products and categories, go to “Oracle Cloud.”

 

ODI On-Premise Integration with Oracle Database Cloud Service (DBCS)

 

Oracle Database Cloud Service (DBCS) offers database features and database capabilities.  DBCS offers three products:  Database as a Service (DBaaS), Database Schema Service (DBSS), and Database Exadata Cloud Service (ExaCS).

Each DBCS product offers a predefined set of features and options.  For instance, DBaaS offers a dedicated virtual machine for running the Oracle database instance, full Oracle Net access, and full administrative OS and SYSDBA access.  The DBSS offers RESTful web services, which enables web applications to access data in the database service.  ExaCS offers the Oracle database on Exadata, Oracle Enterprise Manager, Exadata performance optimizations, and database rack options.  For additional information on Oracle Database Cloud Service offerings, go to “Oracle Database Cloud Service Offerings.”

ODI on-premise can integrate with these three DBCS products.  Figure 3 below shows how ODI on-premise can integrate with Database as a Service (DBaaS).  Three methods are illustrated:

 

Figure 3:  ODI On-Premise to Database as a Service (DBaaS)


The first method uses a JDBC driver to access the Oracle Database located in the cloud.  Using the ODI Topology, a JDBC connection is configured under the Oracle technology.  The steps to configure this connection are similar to configuring an Oracle database on-premise.  The JDBC cloud connection can be secured via the Secure Shell (SSH) protocol; thus, ODI can send data over a secure channel to the Oracle database in the cloud.
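A hedged sketch of such a secured connection is an SSH tunnel that forwards the database listener port to the ODI machine (host, user, key file, and port are placeholders); the JDBC URL in the ODI Topology can then point at localhost:1521:

# forward local port 1521 to the listener on the DBaaS node
ssh -i ~/.ssh/dbcs_private_key -N -L 1521:localhost:1521 opc@<dbaas_host>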

The second method uses the Oracle Datapump technology.  ODI on-premise can use this technology to extract data from either an Oracle database on-premise or an Oracle Database as a Service.  Using a Secure Copy Protocol (SCP) tool, datapump files can be copied from an Oracle database on-premise into an Oracle Database as a Service, or vice versa.  Datapump files can be uploaded into an Oracle database – on premise or as a service – by executing a mapping from ODI on-premise.  Datapump technology is an Oracle technology; thus, when using this method to extract data from a database on-premise, the database on-premise must be Oracle.  For additional information on how to use Oracle Datapump with ODI, go to “Using Oracle Data Pump in Oracle Data Integrator (ODI).”

The third method uses text files.  ODI on-premise can extract data from either a SQL database on-premise or an Oracle Database as a Service, and convert data into text files.  Using a Secure Copy Protocol (SCP) tool, text files can be copied from a SQL database on-premise into an Oracle Database as a Service, or vice versa.  Text files can be uploaded into a SQL database – on premise or as a service – by executing a mapping from ODI on-premise.  If the target database is Oracle, ODI on-premise can use the Oracle External Table technology to load the text files.  An example of this method can be found in the following Oracle blog:  “ODI 12c and DBCS in the Oracle Public Cloud.”  For additional information on how to work with text files in ODI, go to “Working with Files in Oracle Data Integrator (ODI).”
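As a simple illustration of the SCP file movement used by the second and third methods (host, key file, and paths are placeholders):

# push a datapump export (or a text file) to the cloud database node
scp -i ~/.ssh/dbcs_private_key /u01/exports/emp_data.dmp oracle@<dbaas_host>:/u01/imports/

# pull a file back on-premise
scp -i ~/.ssh/dbcs_private_key oracle@<dbaas_host>:/u01/exports/emp_data.dmp /u01/imports/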

 

 

ODI On-Premise Integration with Oracle Database Exadata Cloud Service (ExaCS)

 

The Oracle Database Exadata Cloud Service (ExaCS) provides the highest-performing and most-available platform for running Oracle databases in the cloud.  This cloud service, based on the Oracle Exadata Database Machine, includes customized combinations of hardware, software, and storage that are highly tuned for maximum performance.

ODI on-premise integration strategies with DBaaS can be extended to ExaCS as well.  ODI on-premise can access and transform data in ExaCS using three methods:  JDBC, datapump (if both the source and the target data servers are Oracle databases), and text files.  Secured connections can be accomplished via a Secure Socket Shell (SSH) protocol, and files can be transferred from the source data server on-premise to the ExaCS – and vice versa – using a Secure Copy Protocol (SCP) tool.

Figure 4 below shows how ODI on-premise can upload and transform data in ExaCS:

 

Figure 4:  ODI On-Premise to Exadata Cloud Service


Figure 4, above, also illustrates a fourth method for accessing and transforming data in the cloud:  DBLINK.  This fourth method uses Oracle database links to orchestrate data transfers between DBCS and ExaCS.  For instance, a database link can be created in ExaCS to access data from DBCS.  In ODI on-premise, this database link can be used in an ODI mapping to select data from the DBCS and insert it into ExaCS.

Additionally, Oracle database links can be used in conjunction with Oracle file transfer utilities to transfer data between Oracle database servers in the cloud.  Thus, ODI on-premise can also be used to orchestrate file transfers – such as datapump files or text files – between DBCS and ExaCS.

For additional information on how to load data into an Oracle Database in an Exadata Cloud Service instance, go to “Loading Data into the Oracle Database in an Exadata Cloud Service Instance.”  For additional information on how to configure network access to an Exadata Cloud Service instance, go to “Managing Network Access to an Exadata Cloud Service Instance.”


ODI On-Premise Integration with Oracle Business Intelligence Cloud Service (BICS)

 

Oracle Business Intelligence Cloud Service (BICS) offers agile analytics for customers interested in analyzing data from a variety of sources, including on-premise sources and other services in the cloud.

In order to store and manage the data that users analyze in BICS, a database cloud service is needed.  BICS can integrate with Oracle Database Cloud using one of two options:  Database Schema Service (DBSS) or Database as a Service (DBaaS).  BICS comes integrated with DBSS, so there is no additional configuration required if users want to use this database schema service.

When using BICS with DBSS, data from on-premise sources can be loaded into BICS using the Oracle BI Cloud Service (BICS) REST API.  The BICS REST API is based on the DBSS RESTful web services.  ODI on-premise can load data from on-premise data sources into BICS using the BICS REST API.  This strategy is illustrated in Figure 5 below, method 1:

 

Figure 5:  ODI On-Premise to BI Cloud Service


ODI users and developers can use BICS REST API to programmatically create, manage, and load schemas, tables, and data into BICS.  ODI Knowledge Modules can be used to invoke the BICS REST API and mappings can be designed to load data from on-premise data sources into BICS.  An example of how to use ODI Knowledge Modules to invoke the BICS REST API can be found in the following article:  “ODI Knowledge Modules for BI Cloud Service (BICS).”

When BICS is integrated with Database as a Service (DBaaS), data from on-premise sources can be loaded directly into DBaaS using various methods such as JDBC, datapump, and text files; thus, the BICS REST API is not required.  This strategy is also illustrated in Figure 5 above, methods 2, 3, and 4.

The BICS REST API does not include methods for extracting data from the underlying BICS database tables.  However, if BICS has been integrated with DBaaS, ODI on-premise can select data from DBaaS and export it as datapump files or text files.   ODI on-premise can transfer these files from the cloud and load them into an on-premise data server using a Secure Copy Protocol (SCP) tool.

The BICS REST API does not include methods for extracting data from BICS reports.  Alternatively, Oracle Application Express (APEX) can be used to create RESTful web services to extract data from BICS reports.  An example of how to use APEX to extract data from BICS reports can be found in the following Oracle article:  “Extracting Data from BICS via APEX RESTful Web Services.”

 

ODI On-Premise Integration with Infrastructure as a Service (IaaS)

 

Oracle Infrastructure as a Service (IaaS) offers three products:  Oracle Compute Cloud Service, Oracle Network Cloud Service, and Oracle Storage Cloud Service.  The Oracle Compute Cloud Service provides virtual compute environments, lifecycle management, dynamic firewalls, and secure access.  The Oracle Network Cloud Service provides connectivity services – such as VPN and FastConnect – between on-premise networks and the Oracle Public Cloud.  The Oracle Storage Cloud Service (SCS) provides storage for files and unstructured data.

The following section illustrates how to integrate ODI on-premise with Oracle Storage Cloud Service (SCS), an Oracle IaaS product.

 

ODI On-Premise Integration with Oracle Storage Cloud Service (SCS)

 

The Oracle Storage Cloud Service can be used by applications that require long-term data retention or as a staging area for data integration tasks in the cloud.  Other Oracle cloud services such as BI Cloud Service (BICS) can use SCS as a staging area for data consumption.

Users can programmatically store and retrieve content from SCS using the SCS REST API.  Additionally, SCS offers Java libraries that wrap the SCS REST API.  In ODI, tools and packages can be designed to invoke the SCS REST API, and ODI can upload files from an on-premise data server into SCS.  Likewise, using the SCS REST API, ODI on-premise can download files from SCS into an on-premise data server.  This strategy is illustrated in Figure 6 below, methods 1 and 2:

 

Figure 6: ODI on Premise to Storage Cloud Service


 

Examples of how to use the ODI Open Tools framework to invoke the SCS REST API can be found in the following Oracle article:  “ODI Integration with Oracle Storage Cloud Service.”  Rittman Mead, an Oracle partner in the data integration space, has an example of how to use ODI Open Tools to copy files from an on-premise data server into SCS:  “Oracle Data Integrator to Load Oracle BICS and Oracle Storage Cloud Service.”  This article also discusses how to load data into Business Intelligence Cloud Service (BICS) using the BICS REST API.  For additional information on how to develop and use ODI Open Tools, go to “Oracle Data Integrator Tools Reference.”
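At the REST level the upload itself is a plain HTTP PUT. The sketch below is a rough illustration only, assuming the Swift-style endpoints of the classic Storage Cloud Service; the identity domain, container, and file names are placeholders, so check the SCS REST API reference for the exact URLs that apply to your account:

# 1. authenticate and read X-Auth-Token and X-Storage-Url from the response headers
curl -sS -i \
     -H "X-Storage-User: Storage-<identity_domain>:<username>" \
     -H "X-Storage-Pass: <password>" \
     "https://<identity_domain>.storage.oraclecloud.com/auth/v1.0"

# 2. upload a file into a container using the values returned in step 1
curl -X PUT \
     -H "X-Auth-Token: <token_from_step_1>" \
     -T /u01/exports/emp_data.csv \
     "<storage_url_from_step_1>/<container>/emp_data.csv"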

 

Conclusion

 

Cloud computing is now a service or utility in high demand.  Enterprises have a mix of on-premise environments and cloud computing services.  Oracle Data Integrator (ODI) on-premise can enable the integration of both on-premise data sources and cloud services.

This article presented an overview of how to integrate ODI with on-premise data sources and cloud services.  The article covered ODI on-premise integration with the following cloud services:  SaaS, PaaS, and IaaS.

For more Oracle Data Integrator best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for Oracle Data Integrator (ODI).

 

Other ODI Cloud Articles

A Universal Cloud Applications Adapter for ODI

Webcast: Oracle Data Integrator Special Topic:  Salesforce.com & Universal Cloud

Need to Integrate your Cloud Applications with On-Premise Systems… What about ODI?

ODI 12c and DBCS in the Oracle Public Cloud

ODI Knowledge Modules for BI Cloud Service (BICS)

ODI Integration with Oracle Storage Cloud Service

Oracle Data Integrator to Load Oracle BICS and Oracle Storage Cloud Service

 

Oracle PaaS Resources

Oracle Platform as a Service (PaaS)

Oracle Database Cloud Service Offerings

Oracle Database Cloud Service (DBCS)

Using Oracle Database Schema Cloud Service

Using RESTful Web Services in Oracle Schema Service

Oracle Exadata Cloud Service (ExaCS)

Loading Data into the Oracle Database in an Exadata Cloud Service Instance

Managing Network Access to an Exadata Cloud Service Instance

Oracle Business Intelligence Cloud Service (BICS)

Preparing Data in Oracle Business Intelligence Cloud Service

REST API Reference for Oracle Business Intelligence Cloud (BICS)

Extracting Data from BICS via APEX RESTful Web Services

Oracle Application Express (APEX) RESTful APIs

Oracle Application Express

Oracle IaaS Resources

Infrastructure as a Service (IaaS)

Oracle Storage Cloud Service (SCS)

Oracle Storage Cloud Service REST API

Oracle SaaS Resources

Applications as a Service (SaaS)

 

Other ODI Related Articles

Using Oracle Data Pump in Oracle Data Integrator (ODI)

Working with Files in Oracle Data Integrator (ODI)

Oracle Data Integrator (ODI) Tools Reference

 

Other Resources

Progress DataDirect Connectors

Bristlecone ODI Cloud Integration Pack

Oracle External Table Technology

 

 

Implementing an SFDC Upsert Operation in ICS


Introduction

When designing SOA services, especially those that represent operations around a business object, a common implementation pattern is upsert. Upsert is a contraction of “update” and “insert”. The idea behind it is to have a single operation that decides which action to take – either update the existing record or insert a new one – based on information available in the message. Having one operation instead of two makes the SOA service interface definition clearer and simpler.

Some SaaS applications offer upsert capabilities in their exposed services, and leveraging these capabilities can considerably decrease the effort required to design SOA services in an integration platform such as ICS. For instance, if you need to develop an upsert operation and the SaaS application does not have this functionality, you will have to implement that logic using some sort of conditional routing (see Content-Based Router in ICS) or via multiple update and insert operations.


Figure 1: Implementing upsert using CBR in ICS.

Salesforce.com (or SFDC for short) is one of those SaaS applications that offers built-in support for the upsert operation. This post will show how to leverage this support with ICS.

Setting up External Identifiers in SFDC

Every business object in SFDC can have custom fields. This allows business objects in SFDC to be customized to meet specific customer requirements regarding data models. As part of this feature, SFDC allows any custom field to act as a record identifier for systems outside of SFDC. These systems can identify any record through this custom field instead of using the SFDC internal primary key, which for security reasons is not exposed. Therefore, if you need to perform transactions against business objects in SFDC from ICS, you need to make sure that the business object carries a custom field with the External ID attribute set. This is a requirement if you want to make the upsert operation work in SFDC.

In order to create a custom field with the External ID attribute, access your SFDC account and click the Setup link in the upper right corner of the screen. Once there, navigate to the left side menu and look for the Build section, which is below the Administer section. Within that section, expand the Customize option and SFDC will list all the business objects that can be customized. Locate the business object on which you want to perform the upsert operation. This blog will use the Contact business object as an example.

Go ahead and expand the business object. From the options listed, choose Fields. That will bring you to the page that allows field personalization for the selected business object. Navigate to the bottom of this page to access the section in which you can create custom fields, as shown in figure 2.

creating_custom_field_in_sfdc_1

Figure 2: Creating custom fields for the contact business object in SFDC.

To create a new custom field, click the New button. This will invoke the custom field creation wizard. The first step of the wizard will ask which field type you want to use. In this example we are going to use Text. Figure 3 shows the wizard’s step one. After setting the field type, click next.

creating_custom_field_in_sfdc_2

Figure 3: Creating a custom field in SFDC, step one.

The second step is entering the field details. In this step you will need to define the field label, name, length, and any special attributes it will have. Set the field name to “ICS_Ext_Field”. The most important attribute is External ID; make sure that this option is selected. Also select Required and Unique since this is a record identifier. Figure 4 shows the wizard’s step two. Click next twice and then save the changes.

creating_custom_field_in_sfdc_3

Figure 4: Creating a custom field in SFDC, step two.

After the custom field creation, the next step is generating the SFDC Enterprise WSDL. This is the WSDL that must be used in ICS to connect to SFDC. The generated WSDL will include the information about the new custom field and ICS will be able to rely on that information to perform the upsert operation.

Creating a REST-Enabled Upsert Integration

In this section, we are going to develop an ICS REST-enabled source endpoint that will perform inserts and updates on the target Contact business object, leveraging the upsert operation available in SFDC. Make sure to have two connections configured in ICS: one for the integration source, which is REST-based, and another for the integration target, which should be SFDC-based. You must have an SFDC account to properly set up the connection in ICS.

Create a new integration and select the Map My Data pattern. From the connections palette, drag the REST-based connection onto the source icon. This will bring up the new REST endpoint wizard. Fill in the fields as shown in figure 5 and click next.

source_wizard_1

Figure 5: New REST endpoint wizard, step one.

Step two of the wizard will ask for the request payload file. Choose JSON Sample and upload a JSON file that contains the following payload:

request_payload_sample

Figure 6: Sample JSON payload for the request.

Click next. Step three of the wizard will ask for the response payload file. Again, choose JSON Sample and upload a JSON file that contains the following payload:

response_payload_sample

Figure 7: Sample JSON payload for the response.
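The exact payloads used are the ones shown in the figures above. Purely as a rough illustration, a minimal pair of JSON samples could look like the following; the identifier and result fields match the ones referenced in the mappings later in this post, while the remaining field names are assumptions for illustration only:

{
  "identifier" : "CUST-0001",
  "firstName"  : "John",
  "lastName"   : "Doe",
  "email"      : "john.doe@example.com"
}

{
  "result" : "true"
}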

Click next. The wizard will wrap up the chosen options and display them for confirmation. Click on the Done button to finish the wizard.

source_wizard_4

Figure 8: New REST endpoint wizard, final step.

Moving on, from the connections palette, drag the SFDC-based connection onto the target icon. That will bring up the new SFDC endpoint wizard. Fill in the fields as shown in figure 9 and click next.

target_wizard_1

Figure 9: New Salesforce endpoint wizard, step one.

Step two of the wizard will ask which operation should be performed in SFDC. You need to choose the upsert operation. To accomplish that, first select the option Core in the operation type field and then select the Upsert option in the list of operations field. Finally, select the business object on which you would like to perform upserts, as shown in figure 10.

target_wizard_2

Figure 10: New Salesforce endpoint wizard, step two.

Click next twice; the wizard will then wrap up the chosen options and display them for confirmation. Click on the Done button to finish the wizard.

target_wizard_4

Figure 11: New Salesforce endpoint wizard, final step.

Save all the changes made so far in the integration. With the source and target properly configured, we can now start the mapping phase, in which we will configure how the integration will handle the request and response payloads. Figure 12 shows what we have done so far.

integration_before_mapping

Figure 12: Integration before the mapping implementation.

Create a new request mapping in the integration. This will bring up the mapping editor, in which you will implement the upsert mapping. Figure 13 shows how this mapping should be implemented.

request_mapping

Figure 13: Request mapping implementation.

Let’s go over the mapping implementation details. The first thing to do is set the externalIDFieldName element to the name of the business object field that will be used to identify the record. You must use a valid custom field that has the External ID attribute set; any other field will not work here. To set the value into the field, click the field link to open the expression editor.

setting_external_field_value

Figure 14: Setting the “externalIDFieldName” using the expression editor.

The best way to set the value is using the concat() XSLT function. Set the first parameter of the concat() function to the custom field name and the second parameter to an empty string.

Keep in mind that the field name in ICS can be different from what you set in SFDC. When the SFDC Enterprise WSDL is generated, a suffix is appended to the custom field names to make them unique. In most cases this suffix is “__c”, but a better way to figure this out is to review the WSDL for the field.
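Putting those two points together, the expression set on externalIDFieldName in this example would look something like the sketch below (assuming the custom field created earlier and the usual “__c” suffix; verify the exact name in your generated WSDL):

concat('ICS_Ext_Field__c', '')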

The next step is making sure that the custom field cited in the externalIDFieldName field has a value set. This is necessary because SFDC uses the value in that field to decide which action to take. If SFDC cannot find an existing record matching that value, it will create a new record for that business object. Otherwise, SFDC will locate the matching record and update it with the data set in the other fields. In this example, we will populate the custom field with the identifier value from the request payload, as shown in figure 13. Map the remaining fields accordingly. Once you finish the mapping, save the changes and click the exit mapper button to go back to the integration.

Now create a new response mapping in the integration. This will bring up the mapping editor, in which you will implement the mapping for the response. Figure 15 shows how this mapping should be implemented.

response_mapping

Figure 15: Response mapping implementation.

Simply map the success field from the source to the result field in the target. According to the SFDC documentation, the success field is set to true if the operation is successfully performed on the record, and it is set to false if any issues happen during the operation. Once you finish the mapping, save the changes and click the exit mapper button to go back to the integration. Figure 16 shows the integration after the mapping.

integration_after_mapping

Figure 16: Integration after the mapping implementation.

Finish the integration implementation by setting the tracking information and optionally mapping any faults from the SFDC connection. Save all the changes, then activate the integration in ICS. Once activated, you should be able to get the information about the REST endpoint exposed by ICS. Just access the integrations page and click the exclamation link in the upper right corner of the integration entry.

checking_endpoint_details

Figure 17: Getting the information from REST endpoint exposed by ICS.

Before testing the endpoint, keep in mind that the URL of the REST endpoint does not contain the “metadata” suffix present at the end of the URL shown in figure 17. Remove that suffix before using the URL to avoid any HTTP 403 errors.

Conclusion

The upsert operation is a very handy way to handle insert and update operations within a single API, and it is a feature present in most SaaS applications that expose services for external consumption. SFDC is one of those applications. This blog showed how to leverage the upsert support found in SFDC and the steps required to invoke the upsert operation using the externalIDFieldName element from ICS.

Oracle Unified Directory 11gR2PS3 Very Large Static Groups


This post is about OUD and extremely large static groups where membership numbers exceed hundreds of thousands or even millions; yes, I said millions.  I have been using Directory Services for over 15 years, and the response I typically have for a customer that wants to use very large static groups is: don’t do it.  Then I steer them toward dynamic groups or even suggest leveraging attributes from user entries.  In fact, OUD has a great feature unique to itself called Virtual Static Groups, a kind of hybrid between dynamic and static groups, which has proved successful for past customers wanting very large groups while still getting great performance.  That said, in this post I am going to break all the rules and say you can have static groups with even millions of members because of the new static group performance improvements that have come with OUD 11gR2 PS3 (11.1.2.3.0).

OUD PS3 Static Group Improvements

Oracle has worked hard at fixing problems seen with large static groups in OUD 11gPS2 or older.  As of OUD 11gR2PS3 (11.1.2.3.0), improvements related to static groups include group cache improvements, a redesign of where members of static groups are stored and how the attributes for groups are referenced, entry cache improvements specific to groups with more than 100,000 members, and improvements in importing large groups.  Some new tuning properties have also been added that help with larger static groups, including member-lookthrough-limit, returned-attribute-value-limit, and import-big-entries-memory-percent.  All these changes have had a significant impact on performance.

I have personally loaded an OUD instance with groups as large as 2 million members, and performed LDAP searches, adds, modifies, and deletes at around 20 operations a second with staggeringly good results.  For example, I saw searches with the slowest response at 87ms and an average of 1ms, and even modifications with a maximum time of 212ms and an average of 1ms.  Oracle itself has tested OUD PS3 with groups of up to 50 million members!  This certainly breaks all the rules I have ever learned about large static groups in Directory Services, so this is quite exciting.

However, before you jump into loading up OUD with these massive groups, I want to pass on some lessons learned from the considerable time I spent with OUD 11gPS3 and large static groups.  Even though normal Directory tasks like searching a group, or adding or removing a member, work and perform fine, there are things you will want to know about importing and exporting large groups.  Read on to learn more about the tips on importing and exporting large groups into OUD.

 

BULK IMPORTING VERY LARGE GROUPS

There is a hard limit when using ldapmodify to import large groups in excess of 100,000 unique members per group. This is also documented in the latest OUD 11.1.2.3.0 documentation here: https://docs.oracle.com/cd/E52734_01/oud/OUDAG/tuning_performance.htm#BBADGEFF. If you try to use ldapmodify to import a group with more than 100,000 members, you will get the following error.

Connection reset

The good news is there is a way to bulk import groups with very large memberships using the import-ldif command utility.  One nice thing about using import-ldif is that it not only imports large groups, but also updates the indexes as needed toward the end of the import, so there is nothing to do once the import completes except restart the OUD instance.  And the speed at which it bulk imports is extremely fast.  For example, I bulk imported a group with 2 million members in under 2 minutes.  A big reason is that when OUD is offline and import-ldif is used, it has access to insert the entries directly into the database.  Below are the simple steps you will need to import a group.  Note that the LDIF you import should look similar to the example below, and you can have more than one group in the LDIF if needed.

 

Option #1 using import-ldif command:

1.  cd <ORACLE_INSTANCE>/oud1/OUD/bin

2.  ./stop-ds

3.  ./import-ldif -b dc=example,dc=com -a -i "*" -l /home/oracle/import_group.ldif

4.  ./start-ds

 

Where:

-b – Base DN of a branch to include in the LDIF import
-a – Append to an existing database rather than overwriting it
-i – Attribute to include in the LDIF import
-l – Path to the LDIF file to be imported
optional –
-r – Replace existing entries when appending to the database

 

Example LDIF:

dn: cn=Rewards,cn=Groups,dc=example,dc=com
objectclass: top
objectclass: groupofuniquenames
cn: Rewards
uniquemember: uid=josh.l.davis,cn=Users,dc=example,dc=com
uniquemember: uid=drew.w.walton,cn=Users,dc=example,dc=com
uniquemember: uid=arnault.o.gunter,cn=Users,dc=example,dc=com

 

Out of Memory Error using ldapmodify

Another possible error can occur when using ldapmodify to import a group with more than 100,000 members.  For example, increasing the member-lookthrough-limit value from the default of 100,000 to something much larger may seem like it would let you import beyond the default limit using ldapmodify, but I have found that it produces the following error.

 

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.opends.server.util.StaticUtils.toLowerCase(StaticUtils.java:2825)
    at org.opends.server.util.LDIFReader.readAttribute(LDIFReader.java:1202)
    at org.opends.server.util.LDIFReader.parseAddChangeRecordEntry(LDIFReader.java:1986)
    at org.opends.server.util.LDIFReader.readChangeRecord(LDIFReader.java:750)
    at org.opends.server.tools.LDAPModify.readAndExecute(LDAPModify.java:344)
    at org.opends.server.tools.LDAPModify.mainModifyNoLogger(LDAPModify.java:1423)
    at org.opends.server.tools.LDAPModify.mainModify(LDAPModify.java:757)
    at org.opends.server.tools.LDAPModify.main(LDAPModify.java:707)

 

This is related to bug 19602418.  It is possible you could also increase the OUD JVM heap size setting to overcome this.  In one example, even though I had to bring OUD offline, I was able to import a group that had 2 million members in under 2 minutes.  Considering you can schedule something like that during an off-peak time, that is a very fast way of adding the group without worrying about something happening during an import using ldapmodify.  Honestly, I would stick with using the import-ldif utility since it is fast and also updates the indexes, which are very important.

Merging Large Bulk Members with an Existing Group

One last group import use case I want to cover is adding over 100,000 members to an existing group in bulk.  For example, a company acquiring or merging with another company could have a case where it needs to add hundreds of thousands of members to an existing group. import-ldif has an option “-a” which means append, but this is a bit misleading.  The append actually adds a group with its members to the existing database; it does not mean appending attributes.  So, for example, if you have a group with 1 million existing members and then use import-ldif with the option -a to import another 250,000 new members into that existing group because of some acquisition, the existing group will be overwritten.  What you will have is the same group, but with only the 250,000 members you just imported; the existing 1 million members will be wiped out.  There are two ways to solve this problem.

1. Export the existing group using some of the options I point out in the next section, merge the new members into a new LDIF, and re-import the group using import-ldif with the option “-a” as mentioned above.  Just to reiterate, this is only if you are trying to bulk import new members that exceed 100,000 members.

2. Break the new members into chunks of 100,000 and use ldapmodify.  For example, if you have 250,000 new members you want to add, you can split them into chunks of 100,000 to make 3 LDIF files, as shown in the sketch below.  Then use ldapmodify to import the new members, which will then merge with the existing group.

Either of the two options will work fine; you will have to decide which one makes sense for you.
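For option 2, each chunk is just a standard LDIF modify file applied with ldapmodify. A minimal sketch is shown below; the DN and member values are illustrative, and the connection options mirror the ones used elsewhere in this post:

dn: cn=Rewards,cn=Groups,dc=example,dc=com
changetype: modify
add: uniquemember
uniquemember: uid=new.member.0001,cn=Users,dc=example,dc=com
uniquemember: uid=new.member.0002,cn=Users,dc=example,dc=com

./ldapmodify -h oud1.melander.us -p 1389 -D "cn=Directory Manager" -j passwd.txt -f chunk1.ldif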

BULK EXPORTING VERY LARGE GROUPS

There may be cases where you or some application want to return all the members of a group. When it comes to groups that could potentially have millions of members, this is a lot of data for any Directory to deal with, but with OUD PS3 it is possible with the correct tuning. By default, if you try to return a group with more than 100,000 members using ldapsearch, the following error happens.

Cannot decode the provided ASN.1 sequence as an LDAP message because the sequence was null
Result Code: 2 (Protocol Error)

The good news is there are a couple of options for returning very large groups, using either ldapsearch or export-ldif, but only after making some tuning property changes.

Option #1 tune OUD using the dsconfig command:

If you want to use ldapsearch to return groups with memberships in excess of 100,000, you will need to use the dsconfig command under <OUD_INSTANCE>/OUD/bin and increase the returned-attribute-value-limit and member-lookthrough-limit properties. However, first you may want to see what the current values are by running the following command.

./dsconfig get-global-configuration-prop \
-h oud1.melander.us \
-p 4444 \
-D "cn=Directory Manager" \
-j passwd.txt \
--advanced \
--property returned-attribute-value-limit \
--property member-lookthrough-limit

 

Output:

If these properties are still at their default values, the output should look like the following.

Property : Value(s)
——————————-:———
member-lookthrough-limit       : 100000
returned-attribute-value-limit : 100000

 

Where:

member-lookthrough-limit – Specifies the maximum number of members that OUD should look through in processing an operation on a static group. Setting the value to 0 (zero) says there is no limit enforced.

returned-attribute-value-limit – Specifies the maximum number of values for an attribute that OUD will return per entry while processing a search. Setting the value to 0 (zero) says there is no limit enforced.

Use the dsconfig command to adjust the returned-attribute-value-limit. For example, if you have a group with 2 million users and want to return all the members using ldapsearch, you need to change the returned-attribute-value-limit value to 2000000 or greater, depending on how fast the group grows. Setting the value to 0 (zero) tells OUD there is no limit.  The following command adjusts the value as needed; this can be done while OUD is live, and it does not need to be restarted.

./dsconfig set-global-configuration-prop \
-h oud1.melander.us \
-p 4444 \
-D "cn=Directory Manager" \
-j passwd.txt \
--advanced \
--set returned-attribute-value-limit:2000000 \
-n
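
If member-lookthrough-limit is also still at its default, the same command form can be used to raise it as well; the value below is purely illustrative and should be sized to your largest group:

./dsconfig set-global-configuration-prop \
-h oud1.melander.us \
-p 4444 \
-D "cn=Directory Manager" \
-j passwd.txt \
--advanced \
--set member-lookthrough-limit:2000000 \
-n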

 

Option #2 using the export-ldif command:

Alternatively, if you will not be using ldapsearch to return large groups, then the export-ldif command found under <ORACLE_INSTANCE>/oud1/OUD/bin will export any size group.  The following commands will do the job, but if you want to see more options simply run ./export-ldif -? for all the options.

1.  cd <ORACLE_INSTANCE>/oud1/OUD/bin

2.  ./stop-ds

3.  ./export-ldif -b "cn=Rewards,cn=Groups,dc=example,dc=com" -n userRoot -l /home/oracle/export_group.ldif

4.  ./start-ds

Which export option is fastest?

I have run some tests using both ldapsearch and export-ldif, and ldapsearch seems to be the fastest by a large margin. However, this is only true if you increase the returned-attribute-value-limit to a value larger than the membership of the static group being returned, or set it to 0 (zero). If you don’t increase the returned-attribute-value-limit enough, it will produce an error as mentioned earlier.  The following table shows some examples of results I got trying to return a group with 2 million members.

 

Command OUD Status Results
./ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w ******  -b "cn=Bronze,ou=Groups,dc=oracle,dc=com" objectclass=* >export_group.ldif ONLINE real 0m18.563s
user 0m5.820s
sys 0m10.080s
./export-ldif -b "cn=Bronze,ou=Groups,dc=oracle,dc=com" -n userRoot -l /scratch/oracle/ldif/groups/members.ldif ONLINE real 10m29.260s
user 11m56.572s
sys 1m49.673s
./export-ldif -b "cn=Bronze,ou=Groups,dc=oracle,dc=com" -n userRoot -l /scratch/oracle/ldif/groups/members.ldif OFFLINE real 17m7.991s
user 18m25.231s
sys 0m38.981s

 

Summary

Upgrading to OUD PS3 has a lot of positives besides just major improvements with static groups. That said, if you are a customer who does use static groups, and the groups are very large or you expect them to grow to such large numbers, then there is no question OUD PS3 is the version for you.

What is SCIM?


SCIM is a standard protocol for accessing identity information (users, roles, etc.), including querying, retrieval, creation, update and deletion. The latest version of SCIM, SCIM 2.0, is defined in a series of RFCs: RFC 7642, RFC 7643 and RFC 7644.

What does SCIM stand for?

Originally it was an acronym for “Simplified Cloud Identity Management”. When SCIM moved to the IETF during the development of SCIM 2.0, the acronym was kept but the expansion was changed to “System for Cross-domain Identity Management”. The reason was that, while SCIM was originally developed for use with Cloud services, it is not in any way exclusive to the Cloud, and can be used just as well with purely on-premise scenarios. The new name reflects that.

Why SCIM?

For many years, the primary protocol used for accessing identity data has been LDAP. LDAP started out (back in 1993) as a simplification of the X.500 Directory Access Protocol (DAP). DAP was originally part of the OSI protocol suite; it is a complex binary protocol, described using ASN.1. In simplifying it, LDAP removed some of the more complex binary encodings from DAP and replaced them with plain text strings; nevertheless, it is still fundamentally a binary ASN.1-based protocol.

Over 20 years later, the industry has largely moved on from ASN.1 binary protocols. Some of them are still around and still used – the most notable being LDAP and SNMP – but few people designing a new protocol today would choose ASN.1 as a base. The most popular approach for network communication nowadays is RESTful web services using JSON. The key advantages of this approach:

  • It runs over HTTP(S), which is the protocol with the broadest support on the public Internet. Attempts to use other protocols are in many cases blocked by firewalls which will let HTTP(S) through (whether directly, or via a web proxy)
  • Middle boxes such as firewalls, load balancers, API gateways (such as Oracle API Gateway), etc., have excellent support for handling HTTP(S)-based protocols, whereas the support for non-HTTP protocols (such as LDAP) is much more patchy
  • Developers have ample experience with calling REST/JSON APIs, and the ability to call them is increasingly included with popular development platforms. To use binary protocols such as LDAP, you generally need to use an LDAP client library. While SCIM client libraries exist, it is feasible for a developer to call a SCIM service directly without using any such client; that is far less feasible for LDAP.

What about DSML and SPML?

SCIM is not the first attempt to create a more modern successor protocol to LDAP. DSML (Directory Service Markup Language) was an earlier attempt, based on XML and SOAP. While DSML has seen some adoption over the years, XML-based protocols have lost favour due to their complexity compared to JSON-based protocols. SPML (Service Provisioning Markup Language), which extends DSML with richer provisioning operations, is another; like DSML, it is based on XML and SOAP web services. SCIM can be used to replace the functionality of DSML and SPML.

Summary of the SCIM 2.0 RFCs

SCIM 2.0 is defined in 3 RFCs. I provide below a brief summary of the contents of each one:

  • RFC 7642: This RFC describes some of the chief use cases the SCIM protocol was designed to solve, in particular related to its use with Cloud services. (However, although these use cases provided the original impetus for designing the SCIM protocol, SCIM is by no means limited to these use cases only.)
  • RFC 7643: This defines SCIM schemas. SCIM schemas are conceptually similar to LDAP schemas, however they are expressed in a JSON syntax. Instead of “object classes”, SCIM uses resource types (e.g. “User”) and named schemas. Schemas are named by URIs, generally starting with urn:ietf:params:scim:schemas:.... As well as defining the basic infrastructure of SCIM schemas, this RFC defines two standard resource types (“User” and “Group”) and standard schemas providing a basic set of attributes for those resource types.
  • RFC 7644: This RFC defines the basic operations that must be supported by a SCIM server. As a REST-based protocol, these operations are expressed by standard HTTP verbs such as GET, POST, PUT and DELETE. In addition to those more commonly used HTTP verbs, SCIM also uses the HTTP PATCH verb to perform modification operations.

Note that in addition to RFCs, it is important to read the documentation of the implementing products. Most SCIM vendors (Oracle included) add extensions to the standard to support specific functions in their products. These extensions include additional resource types, and additional schemas and attributes.

HTTP verbs used in SCIM

This table shows the HTTP verbs used in SCIM and how SCIM uses them:

HTTP Method Usage in SCIM
GET Retrieves the attributes of a SCIM resource as a JSON document
POST Purpose varies depending on the endpoint. Used to create new resources (e.g. create new users), perform bulk operations, and perform searches.
PUT Modifies an existing resource (e.g. a User) by replacing all of its attributes with new values. This requires the SCIM client to first retrieve the entire resource with GET, then modifying that JSON document by adding/updating/removing the attributes it desires, then PUTting the modified JSON document back to the same resource endpoint.
PATCH Also used to modify an existing resource, but supports finer-grained modification than PUT. Whereas with PUT you have to first retrieve all the attributes, and then send all the attributes (with your changes) back to the server, with PATCH you specify the specific attributes you wish to modify, and only send those attributes to the SCIM server. As a result, PATCH is more efficient than PUT (but is more complex to implement).
DELETE Deletes a specified resource (e.g. a User)

(Acknowledgement: The above table is based on RFC7644 section 3.2.)
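
To illustrate the finer-grained PATCH style described in the table above, a hedged sketch of a SCIM 2.0 PATCH request (following RFC 7644; the resource path, attribute and value are purely illustrative) might look like this:

PATCH /Users/<id>
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    { "op": "replace", "path": "name.familyName", "value": "Smith" }
  ]
}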

HTTP Endpoints used in SCIM

As a REST-based protocol, every SCIM resource has a URL. This URL has three components:

  1. The SCIM server base URL; this includes the protocol scheme (in production environments this should be https://), the hostname (and port if necessary), and the base path of the SCIM endpoint (this will vary from product to product; for the OIM SCIM server, it is /idaas/im/scim/v1)
  2. The resource type, for example Users
  3. An identifier of the specific resource. This is usually an opaque string such as a number or a UUID; SCIM clients should not make any assumptions about the structure of SCIM resource identifiers.

To give an example, an individual user resource in the OIM SCIM server might have the URL: https://myoimserver.example.com/idaas/im/scim/v1/Users/136971
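
Continuing that example, retrieving the resource with GET would return a JSON document along the lines of the sketch below. The attribute values are illustrative; the schemas, id, userName and name attributes come from the core User schema defined in RFC 7643:

GET /idaas/im/scim/v1/Users/136971

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "id": "136971",
  "userName": "jdoe",
  "name": { "givenName": "John", "familyName": "Doe" }
}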

The following table shows the SCIM endpoints defined in RFC7644. Note that since implementations are allowed to define new resource types (and indeed, usually will do so), that entails additional endpoints for those additional resource types. The URLs in the table below are relative to the SCIM server base URL.

Resource Endpoint HTTP Verbs Description
User /Users GET, POST, PUT, PATCH, DELETE Retrieve, add, modify and delete users
Group /Groups GET, POST, PUT, PATCH, DELETE Retrieve, add, modify and delete groups
Self /Me GET, POST, PUT, PATCH, DELETE This is an alias for the user the client has authenticated as. It can be used to perform self-service operations, e.g. updating the attributes of your own user account.
Service provider configuration /ServiceProviderConfig GET This resource enables a SCIM client to discover which specific SCIM protocol features a SCIM server supports.
Resource type /ResourceTypes GET Retrieves supported resource types
Schemas /Schemas GET Enables SCIM client to retrieve the schemas this server supports
Bulk /Bulk POST Supports bulk updates to multiple resources (e.g. bulk update many user resources in a single SCIM call)
Search /.search POST Used to perform a search against a specific resource type endpoint, or against all resource types

(Acknowledgement: The above table is based on RFC7644 section 3.2.)

SCIM support in Oracle products

One of the new features introduced in Oracle Identity Manager 11.1.2.3 is a SCIM server. The documentation is here. If you were at OpenWorld 2015 (or saw the replays), you may have seen the announcement of our new Identity Cloud Service (IDCS). At the time of writing, this has not yet been released, but it will heavily use SCIM.

Using JSP on Oracle Compute and Oracle DBaaS – End to End Example


Introduction

Many customers ask for a quick demo of how they would deploy their custom Java application in the Oracle Cloud. A great way to do this is the Oracle Compute Service, which can easily be combined with the Oracle Database as a Service offering. In this example two VMs will be deployed: one for the application server – GlassFish – and a second DBaaS VM to hold the database. A simple JSP will be created to display data from the database in the client browser, as shown below.

Drawing1

 

Deploying Oracle DBaaS

For this example a simple database will be deployed in the Cloud. To achieve this, first log in to “My Services” from cloud.oracle.com.

image1

Enter your Identity Domain

image2

And provide your username and password.

image3

Scroll down until you find the Oracle Database Cloud Service and click on “Service Console”

image4

For this example select “Oracle Database Cloud Service” and based on your billing preference choose Monthly or Hourly. Finalise your selection by clicking “Next”.

image5

For this example we will use the Pluggable Database Demos and hence will select “Oracle Database 12c Release 1”

image6

 

For this example any edition can be selected.

image7

 

In this screen select a service name, e.g. “clouddb” and fill the other information as per screenshot below. Make sure to select the checkbox “Include “Demos” PDB”. Once done click the Edit Button next to SSH Public Key.

image8

Download PuTTY Key Generator (puttygen.exe) from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

After the download, start puttygen.exe and click Generate with SSH-2 RSA and 1024 selected. Randomly move your mouse over the blank area to generate the key.

The procedure for other Operating Systems can be found here: http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/javaservice/JCS/JCS_SSH/create_sshkey.html

image9

Once generated, save the private key – it is recommended to use a passphrase in most cases – and copy and paste the public key into a new text file to be saved with the private key.  Save the keys in a secure location and take a backup, as there will be no access to the VMs if the keys are lost. Note that using the Save public key option will not save the public key in the required format.

image10

Keep the public key in your clipboard, paste it into the “Public key input for VM access” field, and finish by clicking “Enter”.

image11

Review and Confirm all details in the next screen and click “Create” to start the build of the database.

image12

You will see in the Service Console when the database has been provisioned. Open the detail page and note the public IP address of the database for later use.

image13a

 

Compute Provisioning

Scroll down in the Main Menu to “Oracle Compute Cloud Service” and open the Service Console.

image14

Create a new Compute Instance and give it a meaningful name. For this example the Oracle Linux 6.6 Image with oc1m shape is sufficient. Click Next.

image15

Select the DNS Hostname Prefix carefully as this will be the hostname of the VM that gets provisioned. Select a Persistent Public IP Reservation or choose Auto Generated to have the first available Public IP assigned to the VM.

image16

You can add additional storage; however, this example won’t require any.

image17

At the SSH Public Keys step, copy and paste the public key generated above, give it a name and click Next. You can create a separate key for this VM if you prefer, by following the steps outlined earlier.

image18

Review the selection and press “Create” to start the VM provisioning.

 

image19

Once done, the instance will show up in the Service Console. Write down the public IP; it will be required shortly.

image20

 

Connect to the VMs

In this example Putty is used to connect to the VMs. Putty can be downloaded here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Start Putty and enter the Public IP of the Database VM in the Hostname field.

image21

Navigate to Connection – SSH – Auth and click browse to select your previously created Private Key.

image22

Head back to the Session Menu – Enter a name for this session – Click Save followed by Open.

image21

For the first connection you will receive a Security Alert – this is expected. Answer with Yes.

image23

Next the Terminal opens and you can simply enter the username opc.

image24

Security Configuration

Validate that the listener is running by executing “lsnrctl status” as the oracle user on the Database VM, as shown below (click on the picture to enlarge). You should see a separate service for the demos PDB.

image25
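
For reference, the check on the Database VM looks roughly like the sketch below. The service name shown in the comment is an assumption based on this example; the exact name depends on your deployment and will match the one used later in the JDBC URL:

sudo su - oracle
lsnrctl status
# Expect an entry for the demos PDB similar to:
# Service "demos.rdb.oraclecloud.internal" has 1 instance(s).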

Open the Compute Service Console and select the Network Tab. Click on Create Security IP List to start.

image26

Enter the IPs of all hosts that you would like to give similar network security settings. For this example it’s sufficient to use the Compute VM. Enter a description that describes the list.

image27

Switch to the Security Rules tab and hit the “Create Security Rule” button.

image28

The application in this example will use port 1521 for communication with the database; a Security Application named ora_dblistener is provisioned for each instance as part of the database provisioning. Select the Security IP List created above as the source and select the Database VM as the destination for this Security Rule.

image29

Creating this list will enable the communication between the VMs on Port 1521.

Testing communication between the compute and Database VM.

A great and simple way to test communication is telnet. Please note you might have to install it on your Compute VM with “yum install telnet” as shown here:

image30

Once installed the connection can be tested with telnet as shown below. If you receive the “Escape character is …” message the connection is working.

image31
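
A minimal sketch of the test from the Compute VM, assuming the database VM’s public IP is substituted for the placeholder:

sudo yum install telnet
telnet <DB-Public-IP> 1521
# A successful connection prints something like:
# Connected to <DB-Public-IP>.
# Escape character is '^]'.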

To verify that there are no superseding rules overriding the rule just created, it is useful to disable the rule and test whether communication is still possible.

Disable the rule via the Security Rules screen from the Network section. Click on the context symbol next to the rule and select update from the context menu.

image32

In the Update Security Rule screen, set the status to Disabled and submit the change by clicking the Update button.

image33

Communication between the Compute VM and the Database VM should now be blocked. Try to telnet again – if the rule is set up correctly, the connection will time out.

image34

Reopen the Update Security Rule dialog and set the status to Enabled.

image35

Using telnet, verify that communication is working again; if the rule is correct, it should be. This confirms that the rule is working as expected and that there are no superseding rules.

image31

Allow access to Compute VM to the Public Internet

In this example the application will be exposed to the public internet. Consider this carefully when using your own data in the backend.

Create a Security Application from the Network tab. This example uses ports 4848 and 8080. Port 4848 is the administration port for the GlassFish server – this rule should be disabled after the configuration is finished.

image36

Create a Security List with the Inbound Policy of Deny to block all traffic except the explicitly allowed traffic. You can allow packets to travel outbound from the Cloud VM by selecting Permit in the Outbound Policy.

image37

Once the Security List is created, you will need to add the Compute VM to the list. Do this by opening the Service Console for the instance and clicking “Add to Security List”.

image38

This opens a drop-down list where the newly created Security List needs to be selected. Attach the Security List to the VM.

image39

Combine the Security List, Security Application and Security IP List by creating a new Security Rule. Make sure to select the predefined Security IP List “public-internet” as the source to grant access to every host. The destination has to be the Security List created above to allow access to the Compute VM. Ensure the correct Security Application is selected to allow access on port 4848 for this example.

image40

Repeat the process for Port 8080 to allow application access.

image41

Create the corresponding Security Rule:

image42

Application Deployment

Download the latest JDK from here: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Download the latest Glassfish release here:

https://glassfish.java.net/download.html

Download the latest Oracle JDBC Driver here (ojdbc7.jar only)

http://www.oracle.com/technetwork/database/features/jdbc/jdbc-drivers-12c-download-1958347.html

 

Extract both archives to /u01.

sudo su - oracle
tar xvzpf jdk-8u71-linux-x64.tar.gz
unzip glassfish-4.1.1.zip
export PATH=/u01/jdk1.8.0_71/bin:${PATH}
glassfish4/bin/asadmin create-domain clouddomain
glassfish4/bin/asadmin start-domain clouddomain
glassfish4/bin/asadmin --host localhost --port 4848 enable-secure-admin
cp ojdbc7.jar glassfish4/glassfish/domains/clouddomain/lib
glassfish4/bin/asadmin restart-domain clouddomain
image43

You can now login to the Admin Console from your local browser:

image44

Create JDBC Connection

Using the asadmin tool the JDBC Connection is created quickly:

glassfish4/bin/asadmin create-jdbc-connection-pool --restype javax.sql.DataSource --datasourceclassname oracle.jdbc.pool.OracleDataSource --property "user=hr:password=hr:url=jdbc\\:oracle\\:thin\\:@<your-cloud-ip>\\:1521\\/demos.rdb.oraclecloud.internal" CloudDB-Pool
glassfish4/bin/asadmin ping-connection-pool CloudDB-Pool
glassfish4/bin/asadmin create-jdbc-resource --connectionpoolid CloudDB-Pool jdbc/CloudDB

This application is based on this example only modified to connect to the HR Schema in the Demo PDB: https://docs.oracle.com/cd/E17952_01/connector-j-en/connector-j-usagenotes-glassfish-config-jsp.html

Create a folder with the following directory structure:

index.jsp
WEB-INF
   |
   - web.xml
   - sun-web.xml

The code for sun-web.xml is:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sun-web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 8.1 Servlet 2.4//EN" "http://www.sun.com/software/appserver/dtds/sun-web-app_2_4-1.dtd">
<sun-web-app>
  <context-root>HelloWebApp</context-root>
  <resource-ref>
    <res-ref-name>jdbc/CloudDB</res-ref-name>
    <jndi-name>jdbc/CloudDB</jndi-name>  
  </resource-ref> 
</sun-web-app>

The code for web.xml is:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
  <display-name>HelloWebApp</display-name>  
  <distributable/>
  <resource-ref>
    <res-ref-name>jdbc/CloudDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>                
  </resource-ref>
</web-app>

The index.jsp contains:

<%@ page import="java.sql.*, javax.sql.*, java.io.*, javax.naming.*" %>
<html>
<head><title>Data from the cloud with JSP</title></head>
<body>
<%
  InitialContext ctx;
  DataSource ds;
  Connection conn;
  Statement stmt;
  ResultSet rs;

  try {
    ctx = new InitialContext();
        ds = (DataSource) ctx.lookup("jdbc/CloudDB");
    conn = ds.getConnection();
    stmt = conn.createStatement();
    rs = stmt.executeQuery("SELECT * FROM DEPARTMENTS");

    while(rs.next()) {
%>
    <h3>Department Name: <%= rs.getString("DEPARTMENT_NAME") %></h3>
    <h3>Department ID: <%= rs.getString("DEPARTMENT_ID") %></h3>
<%    
    }
  }
  catch (SQLException se) {
%>
    <%= se.getMessage() %>
<%      
  }
  catch (NamingException ne) {
%>  
    <%= ne.getMessage() %>
<%
  }
%>
</body>
</html>

Zip up the entire folder (see the sketch below) and log in to the GlassFish Admin Console.
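
A minimal sketch of packaging the folder from the command line, assuming the zip utility is available (the archive name is arbitrary):

# index.jsp and WEB-INF must sit at the root of the archive
cd HelloWebApp
zip -r ../HelloWebApp.zip index.jsp WEB-INF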

image44

Select the option Applications in the tree on the left hand side and click on deploy.

image45

Select the previously created zip file and set “Web Application” as Type.

image47

Clicking OK will take you to the Deployed Applications screen from where you can press “Launch” to open a new browser window. The JSP will show the rows of the department table from the Cloud DB proving end-to-end communication.

image48

This concludes this example. It should illustrate how simple it is to deploy a custom application with an Oracle Cloud Database in the backend.

 

Further Reading

 

Oracle Cloud Documentation

https://docs.oracle.com/cloud/latest/

Glassfish

https://glassfish.java.net/


OAM 11g Webgate Tuning


INTRODUCTION

This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

People are typically introduced to Webgate tuning in one of two ways: either forced into it because of a crisis, or actively preparing an environment for some aggressive load testing.  Hopefully you are in the latter group.  Unfortunately, there is still a lot of mystery behind tuning some of these Webgate parameters.  Creating a comprehensive article to cover all aspects of tuning is a real challenge.  That said, this article will focus on what I feel are the most important tuning parameters: 1) Max Connections, including the relationship between Max Connections and Max Number of Connections, 2) the Failover Threshold, and 3) the AAA Timeout Threshold.  If you can grasp the concepts around these few key parameters, your chances of getting better performance and stability out of the Webgates and Access Servers will greatly increase.

Quick Overview

Knowledge in this article is based on extensive experience in the field, discussions with Oracle Webgate developers, and of course invaluable peers.  As mentioned in the introduction, I will break the Webgate tuning out into three areas to make it a little easier to digest.   The three parameters are not necessarily related to or dependent on each other, so you are free to jump to the section you are interested in.  However, I highly advise that you spend time reading the entire article before making any major tuning changes.  Below is a screenshot of an 11gR2PS3 (OAM 11.1.2.3.0) Webgate definition that highlights the parameters I will cover plus any associated fields; all settings are at their R2PS3 default values.

 

img1_webgate_def

 

Max Connections — Not so Literal

The Max Connections parameter can reap some big improvements in performance, but beware — increasing the value does not necessarily equate to more performance and can in fact have a negative impact. The official Oracle OAM 11g 11.1.2 Administration Guide says, “Max Connections is the maximum number of connections that a Webgate can establish with the Access Server.” This statement is a bit confusing and could lead you to believe that applying a Max Connections value of X will result in only X connections to the Access Server, but that is completely false.

Before jumping into the Max Connections parameter, first things first: we need to understand how connections work with web servers and how they relate to the Webgate module. Since the majority of the audience uses OHS (Oracle HTTP Server) or Apache, I will focus on OHS to keep things simple, since it is basically Apache at a fundamental level. So what I explain with OHS going forward will also apply to Apache. If you use a different web server supported by the 11g Webgate, how connections work can be different, so please extrapolate this information and apply it to your environment.

Worker or Pre-Fork Mode

OHS will run in one of two modes, “Worker” or “Pre-Fork”. The default in OHS is Worker mode; with Apache it can depend on how it is compiled, though typical implementations use Worker mode. Be sure to verify what mode you are running in. Worker mode uses multiple child processes with several threads for each process. Each thread handles one connection at a time.

Now, the thing that is important to understand here is that the Webgate module is actually instantiated by the child processes directly, rather than by the OHS parent process. Again focusing purely on the multi-threaded “Worker” mode, a number of directives within the web server configuration file control exactly how many child processes will be spawned based on the number of incoming requests. From a Webgate point of view, we must bear in mind that each of these child processes will open its own pool of connections to the Access Servers, as defined by the Max Connections setting in the Webgate profile.

As a working example, if we specify Max Connections as “12” and our web server is configured to spawn up to 20 child processes, the total number of connections from the web server as a whole to the OAM servers will be 12 Max Connections times 20 child processes, for a total of 240 connections (12 x 20 = 240). We should always keep this multiplicative effect in mind when defining “Max Connections”, since we don’t want to end up opening too many connections and risk overloading the Access Servers. In the sections that follow, this multiplicative effect will not be explicitly called out, but please remember that it still applies in every case. So let’s apply another example so we fully understand the ramifications of both the Max Connections and OHS configuration settings and how they relate.

Take for example the default mpm_worker_module section from an OHS httpd.conf file, shown below. We see ThreadsPerChild is set to 25, MaxClients is 150, and StartServers equals 2. The MaxClients value limits the maximum number of threads that can be opened by OHS, while StartServers says to start 2 child processes at start up. That means at start up we will immediately get 2 children times 25 threads for a total of 50 threads. We know that each child has X Webgate connections, where X is defined by the Max Connections setting in the Webgate profile.  So if our Max Connections is 12, we will immediately have a total of 24 connections (2 StartServers x 12 Max Connections = 24 Webgate connections).  As traffic increases, OHS/Apache will spawn more children and therefore more Webgate connections until the MaxClients limit is reached.  With MaxClients set to 150 and ThreadsPerChild set to 25, we can expect somewhere between 6-8 children max (the extra are due to the spare threads portion of the algorithm).  With 12 connections per child this means a maximum of somewhere between 72 and 96 connections for our example OHS/Apache server.

 

<IfModule mpm_worker_module>
     StartServers         2
     MaxClients         150
     MinSpareThreads     25
     MaxSpareThreads     75
     ThreadsPerChild     25
     MaxRequestsPerChild  0
     AcceptMutex fcntl
     LockFile
</IfModule>

 

If Max Connections is changed to 24, the startup connection count doubles to 48 (2 StartServers x 24 Max Connections = 48 Webgate connections). As the web server accepts greater load it spawns additional child processes as needed, and each new child process opens its own pool of 24 Webgate connections along with its own set of 25 threads (ThreadsPerChild). We can easily see how the Webgate connections can multiply to become hundreds or even thousands of connections from one OHS server to each Access Server. The only throttle is MaxClients, which limits the total number of threads OHS will open and therefore the number of child processes. And keep in mind a production environment will have several OHS servers, so the load on the Access Servers can grow quite fast. It is important to monitor CPU and memory utilization, plus the TCP connections on each Access Server, as you tweak Webgate Max Connections or even the OHS ThreadsPerChild and MaxClients values. It is also important to understand that the specific number of threads per process is governed by the setting “ThreadsPerChild”.  The take-away for this lesson is that a few Max Connections can go a long way, but too much of a good thing can be bad. Remember, Mom always knew best when she said everything in moderation.
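
To summarize the arithmetic for worker mode under the assumptions above (a rough rule of thumb only; spare-thread settings mean a few extra children may be spawned):

max child processes     ~ MaxClients / ThreadsPerChild              (150 / 25 = 6)
connections per child   = Max Connections                           (12)
max Webgate connections ~ max child processes x Max Connections     (6 x 12 = 72)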

 

img2_http_threads

 

Now if your web server is configured for Pre-Fork mode, be especially careful because each request to the web server is handled by a dedicated (i.e. single-threaded) child process.  It follows that the maximum number of child processes – and hence the total number of Access Server connections – can quickly grow to a very large number.  I am sure you are asking: so what is a good value for Max Connections?  As for a magical recommended number, besides calculating the total sum based on the Max Number of Connections from each primary Access Server (more on that in the next section), unfortunately there is no sweet spot.  The value needs to be determined by experimenting with load tests and recording the results so they can be compared to see which values reap the best performance.  No two implementations are alike; across the many deployments I have seen, I have seen just as many different values.   Now, before you decide on the Max Connections value, you need to read the next section.

 

Making the Connection to Max Connections

There is no pun in the connection between Max Connections and Max Number of Connections. In a nutshell, the value for the Max Connections parameter should be the sum of all the Max Number of Connections from each Primary Server. Take the following diagram as an example.

maxconn_01

The value for Max Connections in the diagram is 12. If you add up the Max Number of Connections from each of the three Primary Servers it totals 12 (4+4+4=12).

Let’s take another example, but this time change OAM 3 primary Access Server to a secondary server, and also update the Max Number of Connections value for each OAM Server from 4 to 6.

maxconn_02

The first thing I want to point out is that the secondary Access Server will not get requests from the Webgate until connections to any primary Access Server fall below the Failover Threshold; more on that later. Since we have two primary OAM servers with Max Number of Connections values of 6 each, the total Max Connections value for the Webgate would be 12 (6+6=12); it is pretty simple. Now that we understand how to get the value for the Max Connections parameter, you may be wondering what value to use for Max Number of Connections: 4, 6, 20, 100? Good question, and fortunately Chris Johnson wrote a great article on this very subject, “How many connections do I need from the WebGate to the OAM Server?”. Again, it must be called out that the number you define in the Webgate profile will be multiplied by the number of web server child processes to determine the actual number of connections – so a little can often go a long way!

 

Does each Max Number of Connections need to be Symmetrical?

So far in my examples I have made each OAM server Max Number of Connections the same or symmetrical, but you don’t necessarily have to do that. You can optionally add more connections to different primary servers if you want more requests to go to any specific server. This strategy is basically a type of load balancing using the Webgate Max Number of Connections configuration value instead of using an actual physical load balancer appliance; take the following diagram as an example.

maxconn_03

Notice that the OAM 1 primary server has 8 Max Number of Connections while the OAM 2 and OAM 3 primary servers have 4 each. So the total Max Connections value would be 16 (8+4+4=16). In this particular configuration the OAM 1 server would get double the number of connections from the Webgates compared to the other two primary OAM servers. One reason to do this would be that OAM 1 is a much larger server, with more memory, etc., and can handle more traffic, or maybe OAM 1 is physically closer to the Webgate so it can process requests much faster. In reality, even though this is an option, I have never really seen it in practice because normally all the servers have equivalently sized hardware and are in the same network, and therefore there is no need to distribute more requests to any one server. That said, I did want to at least bring this up so you understand that there are options if you decide it makes sense for your situation.

 

The Skinny on Failover Threshold

The latest (at the time of this post) official 11g Access Manager documentation, in Table 16-3 Elements on Expanded 11g and 10g WebGate/Access Client Registration Pages, says the Failover Threshold parameter is a “Number representing the point when this Webgate opens connections to a Secondary OAM Server.” It also gives an example: if 30 were used as a value and the number of connections to primary servers drops to 29, connections begin to open up to the secondary Access Server; the default value is 1. This description gives an idea of what is happening, but offers no recommendations, and some find it confusing. So I wanted to add some of my experience, with recommendations.

 

1. First, the word “Failover” in the parameter name means exactly that. As connections are lost from each primary OAM server, the Webgate will try to make up for those connections by connecting to a secondary OAM server; hence the word “Failover”.  So a big note here: this setting only works if there is at least one secondary OAM server defined in the Webgate profile. The Failover Threshold parameter will do nothing if no secondary OAM server is defined.

failover_02

2. Second, the word “Threshold” in the parameter name refers to the point at which connections begin to go over to the secondary OAM server(s).   Based on the official documentation, which is correct, if the Failover Threshold is set to 6 and the Max Number of Connections is also set to 6, then as soon as the number of connections from the Webgate to the OAM server drops below the Failover Threshold of 6, connections will start to be sent to the secondary OAM server(s).   If there are two secondary OAM servers, the first in the list will be the one getting all the connections. As soon as the first secondary OAM server fills up its Max Number of Connections, the second secondary OAM server will start getting connections. Are you following?

So the big question is what is the best setting? My recommendation is twofold.

1. If you DO have Secondary OAM Servers configured:
Set the Failover Threshold value equal to the Max Number of Connections only if you have at least one secondary OAM server. Take my examples above, if the OAM server Max Number of Connections is 4, then set the Failover Threshold to 4. The reason for this is that you engage all the processing power needed as connections drop from any one primary OAM server since the secondary OAM server will start picking up the slack. As soon as the primary server having connection problems corrects itself, the Webgate will start failing back to the primary OAM server and slowly drop the connections from the secondary server until all the Max Number of Connections are met.

2. If you DO NOT have Secondary OAM Servers configured:
If you decide not to configure any secondary OAM server, you can leave the Failover Threshold value at the default of 1 because it will never be used. Remember, Failover Threshold requires a secondary OAM server to be configured. In practice, most clients like to see all their hardware provide some value, which means keeping it all working to get their money’s worth. So I will typically see all OAM servers configured as primary servers; there is nothing wrong with this. That said, I have also seen various configurations with a mix of primary and secondary servers in a criss-cross fashion that is a bit more complicated, but certainly has merits too depending on the situation.

If you follow either of the points above you should have a solid configuration.

 

AAA Timeout Threshold

The AAA Timeout Threshold parameter setting determines how long the Webgate will wait on a connection response before it gives up and attempts to request a new connection. For example let’s say the Webgate has a connection opened, and a request comes through to validate some credentials. This process normally should take a fraction of a second, but there could be all sorts of variables to make this request take much longer. If the wait for the response is longer than the AAA Timeout Threshold, it will abandon the connection for that request, toss it back in the pool, and open a new connection to try again.

For most of OAM’s life (prior to R2 PS3), the default value for AAA Timeout Threshold was “-1” (minus one). The -1 is a special value that tells the Webgate to use the operating system’s TCP timeout, which could easily be 2 minutes or even more! I have seen actual cases in practice where something goes awry with an Access Server, and while the Webgate tries to connect to it or get some response from it, it keeps trying for a long time because the AAA Timeout Threshold was set to the default -1. As each connection tries for a very long time, the Webgate begins to get into a state that gives the impression it is down, when in reality the Webgate is doing what it was told, which was to wait a long time before retrying. When all the connections start doing this we have an OAM zombie apocalypse problem. Zombies are bad, but we can try to avoid this behavior by shortening that wait time.

The recommended value is anywhere from 5 to 10 seconds. For example, if you set the AAA Timeout Threshold to 5, the Webgate will open its connection, send its request, and expect to get a response back within 5 seconds. If not, it opens a new connection and tries again while the old connection is simply freed up and tossed back into the pool. If the value is set too short, say 1 second, an authentication or authorization request that legitimately takes longer (for example because the Access Server is waiting for a long LDAP search to return) would never complete, sending you into a whirling tail spin of retries because there is not enough time allotted for the request. So we have found that a value of 5 – 10 seconds is a fair and balanced approach. In R2 PS3 the default is now 5 seconds, which is reasonable.

 

User-Defined Webgate Parameters

One worthy parameter to mention that many may not know about is “client_request_retry_attempts”. A description of this parameter can be found in the latest (at the time of this article) official Oracle online document https://docs.oracle.com/cd/E40329_01/admin.1112/e27239/register.htm#AIAAG5856. The official description says: “WebGate-to-OAM Server timeout threshold specifies how long (in seconds) the WebGate waits for the OAM Server before it considers it unreachable and attempts the request on a new connection.” This at first seems similar to the AAA Timeout Threshold, but the difference is that this parameter is more about how many times the WebGate will retry its request before attempting the secondary server.

So if the AAA Timeout Threshold is set to 5 seconds, the Webgate will time out that connection after 5 seconds if there is no response, but client_request_retry_attempts tells the Webgate how many times it will attempt to retry that connection. If the value is set to 2, the Webgate will wait 5 seconds (assuming the AAA Timeout Threshold is set to 5), and if it times out it will try up to 2 times before timing out the connection; in the worst case that is roughly 10 seconds before the connection is given up. This configuration may be useful if you think the network connectivity between the Webgates and the Access Servers is not stable and you want the Webgate to at least try more than once before closing its connection.

 

Summary

I realize there are a lot of details in this blog, but it is all very useful and you may need to read each section carefully to absorb the data.  I can say that tuning the Webgate profile is a very important part of an OAM deployment and can save you lots of late nights worrying about performance or outages.  Good luck and be sure to load test your configurations before going live.

Integration Cloud Service – Promote Integrations from Test to Production (T2P)


The purpose of this blog is to provide simple steps to move Oracle Integration Cloud Service (ICS) integrations between different ICS environments. Oracle ICS provides export and import utilities to achieve integration promotion.

A typical use-case is to promote tested integrations from Test ICS Environment to Production ICS Environment, in preparation for a project go-live. Usually the Connection endpoints used by the integrations will be different on Test and Production Environments.

The main steps involved in code promotion for this typical use-case are as follows

  • Export an integration from Test ICS
  • Import the integration archive on Prod ICS
  • Update Connection details and activate the integration on Prod ICS Environment

Export an integration from Test ICS

  • Login to Test ICS
  • Search and locate the integration on Test ICS
  • Select ‘Export’ and save the integration archive to the file system.

Step2-BrowseAndExport-Integration-TestICS

 

The integration is saved with a “.iar” extension.

Step3-Save-IAR_new

During export, basic information about the connections, such as the identifier and connection type, is persisted.

 

Import the integration archive on Prod ICS Environment

  • Login to Prod ICS
  • Navigate to ‘Integrations’
  • Select ‘Import Integration’ and choose the integration archive file that was saved in the previous step.

Step4-Import-Saved-IAR-ProdICS

 

Since connection properties and security credentials are not part of the archive, the imported integration is typically not ready for activation.
An attempt to activate it will error out, and the error message indicates the connection(s) with missing information.

Step6-Incomplete-Connections-Warning

Note that if the connections used by the archive are already present and complete in Prod ICS, the imported integration is ready for activation.

 

Update Connection details and activate the integration on Prod ICS Environment

After importing the archive, the user needs to update any incomplete connections before activating the flow.
Navigate to “Connections” and locate the connection to be updated

Step7-Find_incompleConn_Edit

 

Select ‘Edit’ and update the connection properties, security credentials and other required fields, as required in the Prod ICS Environment.
‘Test’ Connection and ensure that connection status shows 100%

Step8-Review-And-Complete-Conn-ICSProd

Note that the connection identifier and connection type were preserved during import and cannot be changed.

 

Once the connection is complete, the imported integration is ready to be activated and used on the Prod ICS environment.

Intgn-Ready

 

We have seen above the steps for promoting a completed integration for the T2P use-case.
Note that even incomplete integrations can be moved between ICS environments using the same steps outlined above. This could be useful during development to move integration code reliably between environments.

Also, multiple integrations can be moved between environments using the ‘package’ export and import. This requires the integrations to be organized within ICS packages.

Export-Import-Packages
Finally, Oracle ICS provides a rich REST API which can be used to automate code promotion between ICS environments.
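As a rough sketch of what such automation could look like, the export and import calls can be scripted with a tool like curl. Note that the endpoint paths, host names, and integration identifier below are placeholders rather than values taken from the ICS documentation; consult the ICS REST API reference for the exact resources before using anything like this:

# Export from the Test instance (placeholder endpoint and identifier)
curl -u "$ICS_USER:$ICS_PASSWORD" -X GET \
  "https://test-ics.example.com/icsapis/v2/integrations/MY_INTEGRATION%7C01.00.0000/export" \
  -o MY_INTEGRATION_01.00.0000.iar

# Import into the Production instance (placeholder endpoint)
curl -u "$ICS_USER:$ICS_PASSWORD" -X POST \
  "https://prod-ics.example.com/icsapis/v2/integrations/import" \
  -F "file=@MY_INTEGRATION_01.00.0000.iar"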

 

Tips for ODI in the Cloud: ODI On-Premise with DBCS


As described in the article Integrating Oracle Data Integrator (ODI) On-Premise with Cloud Services, if you are considering connecting to the Cloud and using Oracle DBCS – Oracle Database Cloud Service – the good news is that you can use ODI on-premise to do the work. David Allan has published a very interesting article (ODI 12c and DBaaS in the Oracle Public Cloud) about connecting ODI to DBCS using a custom driver which performs the SSL tunneling. In my investigations on how to use ODI in the Cloud I have followed David’s blog, and my goal here is to share some tips that may be useful when trying to do that.

Connect to DBCS using SSL tunneling driver

When connecting to Oracle Database Cloud Service (DBCS) from ODI it is possible to use the “default” JDBC driver but, in that case, an SSH tunnel must be created manually between the machine where the ODI Agent is running and the DBCS Cloud service.

To avoid performing those manual steps, David Allan has written a Blog on how to use a driver which performs the SSL tunneling.

Here are the steps I have done to make it work, using my ODI 12.1.3 on-premise Agent.

Get ready!

1- Create an OpenSSH Key

When you created your Database Cloud Service instance you had to provide a private key. You can use either PuTTYgen or the OpenSSH tools to convert it into an OpenSSH key, which is the only format supported by the tunneling driver.

2- Download the driver (odi_ssl.jar) from java.net here and save it in any temporary folder.

Install the Driver

1- Stop all ODI processes.

2- Copy odi_ssl.jar into the appropriate directory:

— For ODI Studio (Local, No Agent), place the files into the “userlib” directory

On UNIX/Linux operating systems, go to the following directory

$HOME/.odi/oracledi/userlib

On Windows operating systems, go to the following

%APPDATA%\odi\oracledi\userlib

%APPDATA% is the Windows Application Data directory for the user (usually C:\Documents and Settings\user\Application Data)

— For ODI standalone Agent, place the files into the “drivers” directory:

For ODI 12c: $ODI_HOME/odi/agent/lib

For ODI 11g: $ODI_HOME/oracledi/agent/drivers

— For ODI J2EE Agent, and ODI 12c colocated Agent:

The JDBC driver files must be placed into the Domain Classpath.
For details refer to documentation: http://docs.oracle.com/middleware/1212/wls/JDBCA/third_party_drivers.htm#JDBCA706

Use the Driver

1- Create the properties file with text below and save it (for example c:\dbcs\dbcs.properties)

You need to check the ip of your DBCS instance, from the DBCS console:

sslUser=oracle (DBCS user)
sslHost=<your_dbcs_ip_address>
sslRHost=<your_dbcs_ip_address>
sslRPort=1521 (DBCS Listener Port)
sslPassword=your_private_key_password
sslPrivateKey= <url to OpenSSHKey> (ex: D:\\DBCS\\myOpenSSHKey.ppk)
sslLPort=5656 (Local port used in the JDBC url)

It is possible to define two hosts – one that you SSH to (sslHost) and one where the Oracle listener resides (sslRHost). The DBCS infrastructure today uses the same host for both, but the driver supports different hosts (for example if there is a firewall that you SSH to and then the Oracle listener is on another machine).

2- Create a new Data Server under the Oracle Technology

JDBC driver: oracle.odi.example.SSLDriver
JDBC url: jdbc:odi_ssl_oracle:thin:@localhost:5656/YourPDB.YourIdentityDomain

(ex: jdbc:odi_ssl_oracle:thin:@localhost:5656/ODIPDB1.usoraclexxx.oraclecloud.internal)

Property: PropertiesFile = c:\dbcs\dbcs.properties

You can refer to David’s Blog for more details: ODI 12c and DBaaS in the Oracle Public Cloud

Note that this connection can only be used when there is a direct JDBC connection. This means that if you are planning to use the SQL*Loader utility from ODI, this tunneling driver cannot be used.

Connect to DBCS using Native JDBC driver

The beauty of the previous method is that no extra step is needed in order to connect to DBCS as the tunneling Driver is doing the job for you.

The only limitation is that, if you wish to use a loader utility such as SQL*Loader, the tunnel must be created BEFORE using the native JDBC connection. In that case the connection is not made through the tunneling driver but directly through SQL*Net.

Define a tunnel

Refer to Creating an SSH Tunnel to a Port in the Virtual Machine but with following changes:

— the ip you need is the DBCS one

— the tunnel will be between Local Host 5656 and Remote Host 1521 (or the one defined in your organization as SQL*Net port).

Open the tunnel and you are ready to connect to DBCS securely through SSH.
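For reference, such a tunnel can be opened from the machine running the ODI Agent with a standard ssh command. This is just a sketch: the key path and IP address are placeholders, the oracle OS user matches the sslUser from the properties file above, and local port 5656 is the one used throughout this post:

# -N keeps the session open for port forwarding only; -L forwards local port 5656 to port 1521 on the DBCS host
ssh -i /path/to/myOpenSSHKey -N -L 5656:localhost:1521 oracle@<your_dbcs_ip_address>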

Use the native Driver

Once you have created your tunnel, ODI Agent can connect to DBCS as any other Oracle database. As the tunnel is set you can connect directly to localhost:5656.

This step is not mandatory if you only plan to use LKM File to Oracle (SQLLDR), but since the tunnel now exists it is easy to use it in ODI as well.
Now, let me share some tips as well on how to use that KM in a DBCS environment.

Steps to use LKM File to Oracle (SQLLDR)

Apply Patch

If you are in ODI 12.1.3 then apply Patch 18330647: ODI JOBS FAILS CALLING SQLLDR ON WINDOWS 7, WINDOWS 2008.

Note: if you are already using LKM File to Oracle (SQLLDR) a “copy of” will be created by the Patch. The LKM build must be 45.1 or higher.

This issue is fixed in ODI 12.2.1

Define the tnsnames.ora entry for DBCS

When using LKM File to Oracle (SQLLDR), the connection to DBCS is made directly through the tnsnames.ora and not the ODI Topology.
So, in order to use SQL*Loader, an entry for the DBCS connection must be added to the tnsnames.ora file used by the ODI Agent, for example:

MyDBAAS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 5656))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = YourPDB.YourIdentityDomain)
)
)

As we are going through the tunnel the Host=localhost and the Port=5656 (the local port defined for the tunnel).

Do not forget to set “MyDBAAS” in your ODI Topology as the Instance Name of your Oracle Data Server.
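Before running a mapping, it may be worth checking that the new entry resolves and that a connection can be opened through the tunnel; the schema name and password below are placeholders:

tnsping MyDBAAS
sqlplus odi_staging/your_password@MyDBAAS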

It is then possible to use SQL Loader utility, through LKM File to Oracle (SQLLDR).

Note that, as the connection to DBCS is done through SSH, the performance is not equivalent to an internal network.

Conclusion

Using these methods, it is pretty easy to connect to Oracle Database Cloud Service to load or extract data in the Cloud.

For more ODI best practices, tips, tricks, and guidance that the A-Team members gain from real-world experiences working with customers and partners, visit Oracle A-Team Chronicles for ODI.

Acknowledgements

Special thanks to David Allan, Oracle Data Integration Architect, for his help and support.

OGG Replicat Abend with Error ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE


Introduction

Both Classic Replicat and Integrated Replicat can abend on this error.  This blog will discuss the causes of this abend and how you can resolve it.

Main Article

This error message, Error ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE, will appear in your replicat report file.  To some DBAs, a Domain Index is not something they see or use on a regular basis.  This type of error is usually due to one of two scenarios: 1) the index is invalid and needs to be rebuilt, or 2) the index was created just prior to the problematic DML statement and the database has not completely recognized the index for DML operations yet.

Scenario 1:

This is fairly easy to quickly identify by checking the DBA_INDEXES view.   You can’t just look at the STATUS of the index.  You must query the DBA_INDEXES view and look at the domain index columns (DOMIDX_STATUS, DOMIDX_OPSTATUS).

select index_name,index_type,status,domidx_status,domidx_opstatus from user_indexes where
index_type like '%DOMAIN%' and (domidx_status <> 'VALID' or domidx_opstatus <> 'VALID') and TABLE_NAME like '%<table_name>%' ;

If either domidx_status or domidx_opstatus is not VALID, the index must be rebuilt. Rebuild the index with the following SQL:

ALTER INDEX <index_name> REBUILD;

In some cases, the DDL for the index may need to be pulled from the source system to be executed on the target to build the index.  Any partitioned domain index can end up in this state intermittently due to normal failures. When a domain index operation fails on one partition of a partitioned domain index, it is expected that the user will execute a DDL to rebuild the failed partition.
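For a partitioned domain index, the rebuild is issued against the affected partition rather than the whole index; the index and partition names below are placeholders:

ALTER INDEX <index_name> REBUILD PARTITION <partition_name>;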

Once the index is rebuilt, the replicat will restart without issue.

 

 Scenario 2:

In this case, usually the DDL for the creation of the new Domain Index is executed just prior to the DML statement which is loading data into the table for which the new Domain Index was created. Both DDL and DML replication are enabled in this case. This is fairly easy to see in the Replicat report file.

2015-08-29 08:11:14 INFO OGG-00487 DDL operation included [include all],
optype [CREATE], objtype [INDEX], objowner [TEST23], objname [TEST23_TAB04].

2015-08-29 08:11:14 INFO OGG-01407 Setting current schema for DDL
operation to [TEST23].

2015-08-29 08:11:14 INFO OGG-00484 Executing DDL operation.

In some cases, the DML statement that immediately follows the index creation DDL may not recognize the new index on its initial attempt. When the replicat autorestarts, that is usually enough time for the index to be recognized. If this scenario happens often in your environment, then to avoid replicat abends we suggest adding the following to your replicat parameter file to force the replicat to retry the DML 10 times before abending:

REPERROR 29861 retryop maxretries 10

 

Summary

Hopefully, this post will address most of the scenarios where this ORA-29861 error will occur.  If your situation is not matching the scenarios described above, then you may need to contact Oracle Support for further investigation.

OGG Extract entry in DBA_APPLY view


Introduction

DBAs are often reviewing what items exist in their DBA views. I was recently asked by a DBA if they could drop the Apply process listed in the DBA_APPLY view since they had no OGG Apply processes running on this particular system; they only had OGG Capture processes on this server. The short answer to that question is ‘No’, unless you can absolutely verify that the entry you wish to remove has nothing to do with an OGG Extract process.

Main Article

If you query the DBA_CAPTURE and the DBA_APPLY views, you will see that for every entry in the DBA_CAPTURE view there is a corresponding entry in the DBA_APPLY view.   Below is an example:

SQL> select apply_name from dba_apply ;

APPLY_NAME
——————————————————————————–
OGG$EXT_CP

SQL> select capture_name from dba_capture ;

CAPTURE_NAME
——————————————————————————–
OGG$CAP_EXT_CP

If you dig a little deeper into the DBA_APPLY view, you will see a column named ‘PURPOSE’. This is the column you should look at to see what exactly that process is being used for. In our case we can see it is for our capture process.

 

SQL> select apply_name, purpose from dba_apply;

APPLY_NAME      PURPOSE
————— ——————-
OGG$EXT_CP      GoldenGate Capture

 

Summary

It is a good practice to try to keep your environments as clean as possible. But in this case, dropping the process in the DBA_APPLY view would have had serious consequences for the capture process. While the apply process is not actually started, the configuration info (for example, queue subscriber info for the apply queue) is all necessary for Integrated Extract. If the DBA had dropped this apply process, they would have been forced to rebuild the replication environment.

Using Oracle BI Answers to Extract Data from HCM via Web Services


Introduction

Oracle BI Answers, also known as ‘Analyses’ or ‘Analysis Editor’, is a reporting tool that is part of the Oracle Transactional Business Intelligence (OTBI), and available within the Oracle Human Capital Management (HCM) product suite.

This article will outline an approach in which a BI Answers report is used to extract data from HCM via web services. This provides an alternative method to the file-based loader process (details of which can be found here).

This can be used for both Cloud and On-Premise versions of Oracle Fusion HCM.

Main Article

During regular product updates to Oracle HCM, underlying data objects may be changed.  As part of the upgrade process, these changes will automatically be updated in the pre-packaged reports that come with Oracle HCM, and also in the OTBI ‘Subject Areas’ – a semantic layer used to aid report writers by removing the need to write SQL directly against the underlying database.

As a result it is highly recommended to use either a pre-packaged report, or to create a new report based on one of the many OTBI Subject Areas, to prevent extracts subsequently breaking due to the changing data structures.

Pre-Packaged Reports

Pre-packaged reports can be found by selecting ‘Catalog’, expanding ‘Shared Folders’ and looking in the ‘Human Capital Management’ sub-folder.  If a pre-packaged report is used, make a note of the full path of the report shown in the ‘Location’ box below.  This path, and the report name, will be required for the WSDL.

Windows7_x64

Ad-Hoc Reports

To create an Ad-Hoc report, a user login with the minimum of BI Author rights is required.

a. Select ‘New’ and then ‘Analysis’

Windows7_x64

b. Select the appropriate HCM Subject Area to create a report.

Windows7_x64

c. Expand the folders and drag the required elements into the report.

d. Save the report into a shared location.  In this example this is being called ‘Answers_demo_report’ and saved into this location.

/Shared Folders/Custom/demo

This path will be referenced later in the WSDL.

Edit_Post_‹_ATeam_Chronicles_—_WordPress

Building Web Service Request

To create and test the Web Service, this post will use the opensource tool SoapUI.  This is free and can be downloaded here:

https://www.soapui.org

Within SoapUI, create a new SOAP project.  For the Initial WSDL address, use the Cloud or On-Premise URL, appending ‘/analytics-ws/saw.dll/wsdl/v7’

For example:

https://cloudlocation.oracle.com/analytics-ws/saw.dll/wsdl/v7

or

https://my-on-premise-server.com/analytics-ws/saw.dll/wsdl/v7

This will list the available WSDLs

 

Calling the BI Answers report is a 2 step process

1. Within SoapUI, expand out the ‘SAWSessionService’ and then ‘logon’.  Make a copy of the example ‘Request’ WSDL, then update it to add the username and password for a user with credentials to run the BI Answers report.

Run that WSDL and a sessionID is returned:

SoapUI_4_6_4

2. In SoapUI expand ‘XmlViewService’ / ‘executeXMLQuery’.  Make a copy of the example ‘Request’ WSDL.  Edit that, insert the BI Answers report name and path into the <v7:reportPath> variable, and the SessionID from the first step into the <v7:sessionID> variable.

Note that while in the GUI the top level in the path was called ‘Shared Folders’, in the WSDL that is replaced with ‘shared’.  The rest of the path will match the format from the GUI.

You will notice a number of other options available.  For this example we are going to ignore those.

You can then execute the web service request.  The report returns the data as an XML stream, which can then be parsed by your code.

3
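For reference, the two requests could look roughly like the following. The element structure is the one SoapUI generates from the v7 WSDL and may vary slightly between versions; the credentials, session ID, and report path (taken from the ad-hoc report saved earlier) are placeholders:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Body>
      <v7:logon>
         <v7:name>integration_user</v7:name>
         <v7:password>integration_password</v7:password>
      </v7:logon>
   </soapenv:Body>
</soapenv:Envelope>

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v7="urn://oracle.bi.webservices/v7">
   <soapenv:Body>
      <v7:executeXMLQuery>
         <v7:report>
            <v7:reportPath>/shared/Custom/demo/Answers_demo_report</v7:reportPath>
            <v7:reportXml></v7:reportXml>
         </v7:report>
         <v7:outputFormat></v7:outputFormat>
         <v7:executionOptions>
            <v7:async>false</v7:async>
            <v7:maxRowsPerPage>-1</v7:maxRowsPerPage>
            <v7:refresh>true</v7:refresh>
            <v7:presentationInfo>false</v7:presentationInfo>
            <v7:type></v7:type>
         </v7:executionOptions>
         <v7:sessionID>sessionID-returned-by-logon</v7:sessionID>
      </v7:executeXMLQuery>
   </soapenv:Body>
</soapenv:Envelope>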

Summary

This post demonstrated a simple method to leverage BI Answers and the underlying OTBI Subject Areas within Oracle HCM, to create and call a report via web service to extract data for a down stream process.

Using Oracle BI Publisher to Extract Data From Oracle Sales and ERP Clouds


Introduction

Many Cloud products such as Oracle Sales Cloud, and Oracle ERP Cloud come packaged with Oracle Transaction Business Intelligence (OTBI). OTBI allows users to create and run reports against both Transactional and Warehouse databases.  At times there may be a need to extract that data to an external system. On-Premise customers can create ETL jobs to run against the Database, but would need to figure out how everything joins together – the RPD does a great job in obfuscating that for the end user.  For Cloud customers, it’s going to be even more complicated getting access to the source databases.

This post will cover a method for calling a BI Publisher report (a component of OTBI), via SOAP web services, and returning an XML file with the report data (embedded as a Base64 encoded content).

This approach can be used as an alternative to using the standard SOAP APIs documented in https://fusionappsoer.oracle.com/oer/  and as a basis to extract data automatically from your OTBI, albeit in the Cloud or On-Premise.

Main Article

For this demonstration, the assumption is being made that the reader has access to a user with the minimum of BI Author rights to create and run BI Publisher reports.

Create a simple BI Publisher report, name it ‘BIP_demo_report’, and save it into the shared location:

e.g. /Shared Folders/Custom/demo – make note of this path as it will be referenced later in the WSDL.

Untitled

 

Go to XMLPServer

Within the XMLPServer, edit the BI Publisher report.

In the upper right, select ‘View a List’

Add_New_Post_‹_ATeam_Chronicles_—_WordPress

And then in the ‘Output Formats’ drop down – deselect everything except for ‘XML’:

Add_New_Post_‹_ATeam_Chronicles_—_WordPress

Make sure you ‘Save’ the changes.

 

Building Web Service Request

To create and test the Web Service, this post will use the opensource tool SoapUI.  This is free and can be downloaded from https://www.soapui.org

 

In SoapUI create a new Soap Project. The WSDL you should use is the Cloud, or On-Premise URL with the BIP suffix “xmlpserver/services/ExternalReportWSSService?wsdl” appended

 

For example:

https://cloudlocation.oracle.com/xmlpserver/services/ExternalReportWSSService?wsdl

or

https://my-on-premise-server.com/xmlpserver/services/ExternalReportWSSService?wsdl

This will generate a SoapUI project and list the methods available from the WSDL.

Take a copy of the ‘runReport’ command and edit it.  There are a number of parameters available, but in this example we will remove the majority of those and just leave the basic attributes, as shown below:

Add_New_Post_‹_ATeam_Chronicles_—_WordPress

This includes the path of the report.  There is an assumption that the report is in the /Shared folder, so that’s not included in the path.  The suffix ‘.xdo’ is also required.

This service follows Oracle’s OWSM policy.  Within SoapUI you’ll also need to enter the user authentication details in the request properties:

Add_New_Post_‹_ATeam_Chronicles_—_WordPress

Run the WSDL and the report is returned as an XML file, encoded in Base64 format, within the <reportBytes> tag:

Add_New_Post_‹_ATeam_Chronicles_—_WordPress

The Base64 payload can easily be parsed by most programming languages; for example, in Java you could use the Apache Commons Codec Base64 class: Base64.decodeBase64(base64String).
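As a minimal, self-contained sketch using the JDK's built-in decoder instead (it assumes the <reportBytes> value has already been extracted from the SOAP response and saved to a file named reportBytes.b64):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class DecodeReportBytes {
    public static void main(String[] args) throws Exception {
        // Read the Base64 string that was copied out of the <reportBytes> element
        String reportBytesBase64 = new String(
                Files.readAllBytes(Paths.get("reportBytes.b64")), StandardCharsets.UTF_8).trim();

        // Decode it back into the raw XML report output and write it to disk
        // (the MIME decoder tolerates any embedded line breaks)
        byte[] decoded = Base64.getMimeDecoder().decode(reportBytesBase64);
        Files.write(Paths.get("report_output.xml"), decoded);
    }
}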

Summary

This post demonstrated a method which can be used to leverage BI Publisher within an Oracle Application that is packaged with OTBI – be it cloud or on-premise – and extract the data through a web service for a downstream process.


12c BPM and BAM Multi Domain Setup


Introduction

In this blog, I will show you how to setup BPM and BAM in 2 separate WebLogic domains, so the out-of-the-box process analytics will work as it is on the same domain.

In the multi domain setup, you will have a BPM only domain and a BAM/BPM domain. The BPM only domain will contain both BPM and BAM managed servers; after you have finished the multi domain configuration, you must shut down the BAM managed server in the BPM only domain. The BAM/BPM domain will also contain both BPM and BAM managed servers, but you will need to start up both the BPM and BAM managed servers in order for this multi domain configuration to work. The diagram below illustrates the high level BAM and BPM server setup in 2 separate WebLogic domains:

BPM and BAM Multi Domain - Overall

If you configure BPM and BAM in the same domain, you will notice 2 foreign JNDI providers, BAMForeignJndiProvider and BPMForeignJndiProvider.  The BAMForeignJndiProvider has the provider URL configured for localhost (127.0.0.1) BAM server and targeted to SOA cluster, the BPMForeignJndiProvider has the provider URL configured for localhost (127.0.0.1) SOA server and targeted to BAM cluster. If you are planning to setup a different domain for BPM and BAM, you will need to reconfigure the foreign JNDI providers and enable WebLogic global domain trust.

Step 1 – Install and configure 2 domains

In my test environment, I have 2 virtual machines setup (soa1 and soa12csrv1), BPM and BAM have been installed on both virtual machines.  When you are configuring both domains, the managed server names on both machines must be different.  This is required in order for the inter server distributed transactions to work properly. When you have configured the BPM and BAM domain, you can then start up all managed servers in both domains.

Server Name: soa1 (BPM Only Domain)

IP Address: 192.168.1.101

BPM and BAM Multi Domain - BPM Only

Server Name: soa12csrv1  (BAM and BPM Domain)

IP Address: 192.168.1.110

BPM and BAM Multi Domain - BAM and BPM

Step 2 – Enable Process Analytics

By default, process metrics are disabled, so for the process analytics to work you will need to enable the process metrics in the MBean using Enterprise Manager Fusion Middleware Control. You will need to enable process analytics for both domains.

1. Log in to the Fusion Middleware Control console. (http://hostname:port/em)
2. In the Target Navigation pane, expand the WebLogic Domain node.
3. Select the domain in which the Oracle BAM server is installed. For example, the domain might be soainfra or base_domain (AdminServer and right click!).
4. Right-click on the domain and select System MBean Browser. The System MBean Browser page appears.
5. In the System MBean Browser, expand the Application Defined MBeans node.
6. Under Application Defined MBeans, expand the oracle.as.soainfra.config node.
7. Under oracle.as.soainfra.config, expand the Server: server_name node.
8. Under Server: server_name, expand the AnalyticsConfig node.
9. Under AnalyticsConfig, click analytics.
10. The analytics attributes are listed. Change the value of the DisableProcessMetrics attribute to false.
11. Click Apply.
12. If no BAM server is detected, an error message appears and the DisableProcessMetrics attribute value is not changed.

BPM and BAM Multi Domain -EM

Step 3 – Enable Global Domain Trust

For the cross domain setup to work, you will also need to enable global domain trust, so that the identity is passed between the BPM only and the BAM/BPM domains over an RMI connection without requiring authentication in the second domain. You will need to configure global trust for both domains using the same credential:

1. In WebLogic Administration Console (http://host:port/console), click Lock & Edit.
2. In the left pane, click the name of the domain.
3. Select Security > General, then scroll down and click Advanced.
4. Enter a password for the domain in the Credential text field. Choose the password carefully; Oracle recommends using a combination of upper and lower case letters and numbers.
5. Click Save.
6. To activate these changes, in the Change Center of the Administration Console, click Activate.

BPM and BAM Multi Domain - GlobalTrust

Step 4 – Reconfigure Foreign JNDI

By default, the foreign JNDI provider URL is configured to point to the managed server in the same domain, the JNDI URLs need to be reconfigured to point to the managed server residing in the second domain.

For BPM only Domain, the BAMForeignJNDIProvider JNDI provider URL needs to be changed to the URL for the BAM/BPM domain:
1. Login to WebLogic console. Click on “Foreign JNDI providers” link in home page.
2. Click on BAMForeignJNDIProvider -> Configuration tab -> General tab
3. Enter Provider URL: URL for the BAM Server. e.g. t3://192.168.1.110:7004
4. Enter username: BAM server username. e.g. WebLogic
5. Enter password: BAM Server password.
6. Click on Save.

BPM and BAM Multi Domain - jndi1

BPM and BAM Multi Domain - jndi2

For BAM/BPM Domain, the BPMForeignJndiProvider JNDI provider URL needs to be changed to the URL for the BPM only domain:

1. Login to WebLogic console. Click on “Foreign JNDI providers” link in home page.
2. Click on BPMForeignJndiProvider -> Configuration tab -> General tab
3. Enter Provider URL: URL for the BPM Server. e.g. t3://192.168.1.101:7004
4. Enter username: BPM server username. e.g. WebLogic
5. Enter password: BPM Server password.
6. Click on Save.

BPM and BAM Multi Domain - jndi3 BPM and BAM Multi Domain - jndi4

Step 5 – Servers Restart

Shut down all managed and Admin servers on both nodes. This is required; otherwise, the foreign JNDI will not work and you will run into a stack overflow error.

For BPM only Domain, start BPM/SOA and Admin Server

BPM and BAM Multi Domain -Restart

For BPM/BAM Domain, Start Admin Server, SOA/BPM server and BAM server

BPM and BAM Multi Domain -Restart2

Step 6 – Deploy BPM application

Although a BPM server exists in both domains, in order for this topology to work you will need to deploy your BPM composite to the BPM only domain (do not deploy your BPM composite to the BAM/BPM domain). After deployment, you can verify your deployment using Enterprise Manager in the BPM only domain and BAM Composer in the BAM/BPM domain.

In a browser window, open the Fusion Middleware Control console. (http://hostname:port/em) in the BPM only Domain –> SOA-INFRA, you should be able to see the BPM process you have just deployed:

BPM and BAM Multi Domain - Deploy1

In a different browser window, open the BAM composer (http://host:port/bam/composer) in the BPM-BAM Domain, then click on the Open Project link, you should be able to see the project you have just deployed:

BPM and BAM Multi Domain - Deploy2

Step 7 – Testing

In a browser window, open the BPM Workspace (http://host:port/bpm/workspace/) in the BPM Only Domain and initiate a process.

BPM and BAM Multi Domain - Test1

In a different web browser window, open the BAM Composer (http://host:port/bam/composer) on the BPM/BAM Domain ->Open the Process Analytics Dashboards->Process Summary, you should be able to view the process data being populated and shown in the process summary dashboard.

BPM and BAM Multi Domain - Test2

Cloud Security: Federated SSO for Fusion-based SaaS


Introduction

To get you easily started with Oracle Cloud offerings, they come with their own user management. You can create users, assign roles, change passwords, etc.

However, real world enterprises already have existing Identity Management solutions and want to avoid maintaining the same information in many places. To avoid duplicate identities and the related security risks, like out-of-sync passwords, outdated user information, or rogue or locked user accounts, single sign-on solutions are mandatory.

This post explains how to setup Federated Single Sign-on with Oracle SaaS to enable users present in existing Identity Management solutions to work with the Oracle SaaS offerings without additional user setup. After a quick introduction to Federated Single Sign-on based on SAML, we explain the requirements and the setup of Oracle SaaS for Federated Single Sign-on.

Federated Single Sign-on

Federated Single Sign-on or Federated SSO based on SAML Web Browser Single Sign-on is a widely-used standard in many enterprises world-wide.

The SAML specification defines three roles: the principal (typically a user), the Identity Provider, and the Service Provider. The Identity Provider and the Service Provider form a Circle of Trust and work together to provide a seamless authentication experience for the principal.

SAML Login Flows

The most commonly used SAML login flows are Service Provider Initiated Login and Identity Provider Initiated Login, as shown below.

Service Provider Initiated Login

The Service Provider Initiated Login is the most common login flow and will be used by users without explicitly starting it. Pointing the browser to an application page is usually all that is needed.

Here the principal requests a service from the Service Provider. The Service Provider requests and obtains an identity assertion from the Identity Provider and decides whether to grant access to the service.

SAML_IdP_Initiated_Login_0

Identity Provider Initiated Login

SAML allows multiple Identity Providers to be configured for the same Service Provider. Deciding which of these Identity Providers is the right one for the principal is possible but not always easy to set up. The Identity Provider Initiated Login allows the principal to help here by picking the correct Identity Provider as a starting point. The Identity Provider creates the identity assertion and redirects to the Service Provider, which is now able to decide whether to grant access to the service.

SAML_IdP_Initiated_Login_0

Oracle SaaS and Federated SSO

Here Oracle SaaS acts as the Service Provider and builds a Circle of Trust with a third-party, on-premise Identity Provider. This setup applies to all Fusion Applications based SaaS offerings (like Oracle Sales Cloud, Oracle HCM Cloud, or Oracle ERP Cloud) and looks like this.

SaaS_SP_OnPrem_IDP
The setup requires a joint effort of the customer and Oracle Cloud Support.

Scenario Components

The components of this scenario are:

  • Oracle SaaS Cloud (based on Fusion Applications, for example, Oracle Sales Cloud, Oracle HCM Cloud, Oracle ERP Cloud)
  • Any supported SAML 2.0 Identity Provider, for example:
    • Oracle Identity Federation 11g+
    • Oracle Access Management 11gR2 PS3+
    • AD FS 2.0+
    • Shibboleth 2.4+
    • Okta 6.0+
    • Ping Federate 6.0+
    • Ping One
    • etc.

The list of the supported SAML 2.0 Identity Providers for Oracle SaaS is updated regularly, and is available as part of the Fusion Applications Technology: Master Note on Fusion Federation (Support Doc ID 1484345.1).

Supported SAML 2.0 Identity Providers

The Setup Process

To set up this scenario, Oracle Cloud Support and the customer work together to create an operational configuration.

Setup of the On-Premise Identity Provider

To start the setup, the on-premise Identity Provider must be configured to fulfill these requirements:

  • It must implement the SAML 2.0 federation protocol.
  • The SAML 2.0 browser artifact SSO profile has been configured.
  • The SAML 2.0 Assertion NameID element must contain one of the following:
    • The user’s email address with the NameID Format being Email Address
    • The user’s Fusion uid with the NameID Format being Unspecified
  • All Federation Identity Provider endpoints must use SSL.

Setup of the Oracle SaaS Cloud Service Provider

Once the on-premise Identity Provider has been configured successfully, the following steps outline the process to request the setup of Oracle SaaS as Service Provider for Federated SSO with the customer’s on-premise Identity Provider (each step is performed by the party indicated):

1. Customer: Files a Service Request to enable the required Oracle SaaS instance as Service Provider. The Service Request must follow the documented requirements (see Support Doc ID 1477245.1 or 1534683.1 for details).
2. Oracle: Approves the Service Request.
3. Customer: Receives a document that describes how to configure the on-premise Identity Provider for the Service Provider.
4. Customer: When the conformance check has been done successfully, uploads the Identity Provider metadata to the Service Request as an XML file.
5. Oracle: Configures the Service Provider in a non-production SaaS environment. When this is completed, the Service Provider metadata will be attached to the Service Request as an XML file for the customer. This file includes all the required information to add the Service Provider as a trusted partner to the Identity Provider.
6. Customer: Downloads the Service Provider metadata file and imports it into the Identity Provider.
7. Oracle: Adds the provided Identity Provider metadata to the Service Provider setup.
8. Oracle: After the completion of the Service Provider setup, publishes a verification link in the Service Request.
9. Customer: Uses the verification link to test the features of Federated SSO. Note: no other operations are allowed during this verification.
10. Customer: When the verification has been completed, updates the SR to confirm the verification.
11. Oracle: Finalizes the configuration procedures.
12. Customer: Is solely responsible for authenticating users.

When Federated SSO has been enabled, only those users whose identities have been synchronized between the on-premise Identity Provider and Oracle Cloud will be able to log in. To support this, Identity Synchronization must be configured (see below).

Identity Synchronization

Federated SSO only works correctly when users of the on-premise Identity Store and of the Oracle SaaS identity store are synchronized. The following sections outline the steps in general. The detailed steps will be covered in a later post.

Users are First Provisioned in Oracle SaaS

The general process works as follows:

1. Set up the extraction process in Oracle SaaS.
2. Download the extracted user data.
3. Convert the data into the format of the on-premise identity store.
4. Import the data into the on-premise identity store.

Users are First Provisioned in On-Premise Environment

It is very common that users already exist in on-premise environments. To allow these users to work with Oracle SaaS, they have to be synchronized into Oracle SaaS. The general process works as follows:

1. Extract the user data from the on-premise identity store.
2. Convert the data into a supported file format.
3. Load the user data into Oracle SaaS using the supported loading methods.

References

Setting up Oracle’s Database as a Service (DBaaS) Pluggable Databases (PDBs) to Allow Connections via SID


Introduction

With the release of Oracle 12c Database the concept of Pluggable Databases (PDBs) was introduced.  Within a Container Database (CDB) one or many of these PDBs can exist.  Each PDB is a self-contained database, with its own system, sysaux and user tablespaces.  Each database has its own unique service name, and essentially functions as a stand-alone database.

Oracle’s Database as a Service (DBaaS) is based on 12c, and uses the same concepts.

Some applications – BI Cloud Service (BICS) as an example – require database connections to be defined using an Oracle SID, and not by a Service Name.  By default, the SID is not externally available for these PDBs, which causes connection issues for these applications.

This article will outline a simple method by which the listener within a PDB in an Oracle DBaaS environment can be set to use the Service Name as a SID option, and thus allow these applications to connect.

More information can be found in My Oracle Support note 1644355.1.

 

Main Article

The prerequisites for this approach are:

  • A copy of the private key created when the DBaaS environment was set up, and the passphrase used.  The administrator who created the DBaaS instance should have both of these.
  • Port 1521 should be opened through the Compute node for the IPs of the servers or computers that need to connect to the PDB.
  • An SSH tool capable of connecting with a Private Key file.  In this article Putty will be used, which is a free tool available for download from here

 

Steps

a. From within the DBaaS console, identify the PDB database and its IP address.

Oracle_Database_Cloud_Service

b. Confirm that a connection can be made to the PDB using the service name.  If the connection cannot be made, see these instructions on how to resolve this within the Compute Node.

Windows7_x64

c. Open Putty and Set Up a Connection using the IP of the PDB obtained in step (a) and port 22.

Windows7_x64

d. Expand the ‘Connection’ / ‘SSH’ / ‘Auth’ menu item.  Browse in the ‘Private key file for authentication’ section to the key that the DBaaS administrator provided, and then click ‘Open’ in Putty to initiate the SSH session.

Windows7_x64

e. Login as the user ‘opc’ and enter the passphrase that the DBaaS administrator provided when prompted.

f. Use the following commands to change the user to ‘oracle’, and set the environmental variables:

sudo su - oracle

. oraenv

The correct Oracle SID should be displayed so you can just hit <enter> when prompted.  Only change this if it does not match the SID displayed in the DBaaS console in step (a).

Windows7_x64

g. The next set of commands will change the working directory to the Oracle DB home, take a copy of the existing Listener.ora file, and then stop the listener:

cd $ORACLE_HOME/network/admin

cp listener.ora listener.oraBKP

lsnrctl stop

Windows7_x64

h. The next commands will append the line ‘USE_SID_AS_SERVICE_LISTENER=on’ to the listener.ora file, and then re-start the Listener.

echo USE_SID_AS_SERVICE_LISTENER=on >> listener.ora

lsnrctl start

 

Windows7_x64

i. The final set of commands register the database to the listener.

sqlplus / as sysdba

alter system register;

exit

 

Windows7_x64

j. The Service Name can now be used as a SID for applications that can only connect with a SID.  Use SQL Developer to confirm that  a connection can be made using the Service Name from before – but this time in the SID field:

Windows7_x64
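For example, a JDBC thin connection that previously had to use the service name syntax can now also be made using the SID syntax; the IP address and PDB name below are placeholders:

Service name syntax (always valid for a PDB):
jdbc:oracle:thin:@//<dbaas_ip>:1521/YourPDB.YourIdentityDomain.oraclecloud.internal

SID syntax (valid once USE_SID_AS_SERVICE_LISTENER=on is in place):
jdbc:oracle:thin:@<dbaas_ip>:1521:YourPDB.YourIdentityDomain.oraclecloud.internal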

k. If Apex is used, it may be necessary to make a change within the Apex listener to reference the service name.  Before making the change, test to see if Apex is available by logging in.  If it works, then no change is required.  To make the change, follow steps (d) – (f) from above, and then type the following commands to locate the directory of the Apex listener configuration:

cd $ORACLE_HOME

cd ../../apex_listener/apex/conf/

Make a copy of the apex.xml file, then edit it and change the <entry key="db.sid"> key to be the service name.  Finally, go to the GlassFish Administration from the DBaaS Cloud Service Console (requires port 4848 to be accessible – this can be made available in the Compute Cloud console – see step (b) above):

Oracle_Database_Cloud_Service

Within the ‘Applications’ option, select ‘Reload’ under Apex.

 

Summary

This article walked through the steps to configure the listener in a DBaaS PDB database to allow connections based on the Service Name as a SID option.

Using BICS Data Sync to Extract Data from Oracle OTBI, either Cloud or On-Premise


Introduction

Last year I wrote about configuring the BICS Data Sync Tool to extract data from ‘On-Premise’ data sources and to load those into BI Cloud Services (BICS).  That article can be found here.

In March 2016 the Data Sync Tool added the ability to connect and extract data from Oracle Transactional Business Intelligence (OTBI).  OTBI is available on many Oracle Cloud Services, and also On-Premise products.

This approach opens up many more sources that can now be easily loaded into BICS.  It also allows the Data Sync tool to leverage the Metadata Repository in OTBI (the RPD) to make the creation of an extract a simple process.

 

Main Article

This article will walk through the steps to download and configure the Data Sync tool to extract data from OTBI.  These steps are the same for an on-premise or cloud source, with the only difference being the URL used for the connection.

The Data Sync tool provides 3 approaches for extracting data from OTBI.  The ‘pros’ and ‘cons’ of each will be discussed, and then each method explained in detail.  The 3 approaches are:

1. An extract based on a report created in Analyses (also known as BI Answers)

2. An extract based on the SQL from the ‘Advanced’ tab of an Analyses Report

3. An extract based on a Folder from a Subject Area from the /analytics portal

 

Table of Contents

Which Approach Should You Use ?

Downloading Latest Version of Data Sync Tool

Setup Source Connection for OTBI Environment

Create Data Source Based on BI Analyses Report

Create Data Source Based On Logical SQL

Create Data Source Based on Subject Area Folder

Which Approach Should You Use ?

It is the opinion of this author that the second method, the extract based on SQL, is going to be the most useful approach for regular data updates.  Once created, there is no reliance on a saved analysis, and the true incremental update capability reduces the volume of data needing to be extracted from OTBI, improving performance and load times.

It is also not restricted by the 65,000 maximum row limit that many Cloud and On-Premise OTBI environments impose on reports.

Below is a quick summary of the differences of the 3 approaches.

 

Extract based on a report created in Analyses (also known as BI Answers)

In this approach, a report is created in OTBI and saved.  The Data Sync tool is then configured to run that report and extract the data.

  • Fully leverages the OTBI reporting front-end, allowing complex queries to be created without the need to understand the underlying table structures or joins, including filters, calculated fields, aggregates, etc
  • The select logic / filters in the Analyses report may be changed later, with no need to make changes in the Data Sync tool.  As long as the data fields being returned by the report remain the same, the changes will be transparent to the Data Sync tool. This would allow, for example, a monthly change to a report – perhaps changing the date range to be extracted.
  • For Cloud environments, this approach is limited to 65,000 rows, so should only be used for smaller data sets
  • It is not possible to restrict the data extract programmatically from the Data Sync tool, so true incremental updates are not possible.  For this functionality, one of the next two approaches should be used.

Extract based on the SQL from the ‘Advanced’ tab of an Analyses Report

This approach is very similar to the previous one, but instead of using a saved report, the logical SQL generated by OTBI is used directly.

  • Fully leverages the OTBI reporting front-end, allowing complex queries to be created without the need to understand the underlying table structures or joins, including filters, calculated fields, aggregates, etc
  • Allows for true incremental updates, with an Incremental SQL option that will reduce the amount of data being pulled from OTBI, improving performance and reducing load times
  • Once created, there is no reliance on a saved OTBI analyses report
  • Is NOT limited to 65,000 rows being returned, so can be used for both small, and larger data sets

Extract based on a Folder from a Subject Area from the /analytics portal

This approach bases the extract on a folder within a Subject Area within OTBI.  It allows the creation of many such mappings to be created at once.

  • No need to create an OTBI report or logical SQL, or even to log into OTBI.  The extract is set up purely within the Data Sync tool.
  • Allows mappings for multiple Subject Area folders to be created in one step, saving time if many mappings are needed and the Subject Area folders are structured in a meaningful way for BICS tables
  • Only allows Subject Area ‘folders’ to be imported, with no additional joins to other folders.  This approach will be most useful when the Subject Area folders closely mimic the desired data structures in BICS
  • Allows for true incremental updates, with a Filter option that will reduce the amount of data being pulled from OTBI, improving performance and reducing load times
  • Is NOT limited to 65,000 rows being returned

Downloading Latest Version of Data Sync Tool

Versions of the Data Sync Tool prior to 2.0, released in February 2016, do not include this functionality.   The latest version can be obtained from OTN through this link.

For further instructions on configuring Data Sync, see this article.  If a previous version of Data Sync is being upgraded, use the documentation on OTN.

Setup Source Connection for OTBI Environment

No matter which of the 3 approaches is used, an initial Source Connection to OTBI needs to be configured.  If multiple OTBI environments are to be sourced from, then a Source Connection for each should be set up.

The Data Sync tool connects via web-services that many instances of OTBI expose.  This applies to both Cloud and On-Premise versions of OTBI.

To confirm whether your version of OTBI exposes these, take the regular ‘/analytics’ URL that is used to connect to the reporting portal – as demonstrated in this image:

Oracle_BIEE_Home_and_Edit_Post_‹_ATeam_Chronicles_—_WordPress_and_Evernote_Premium

and in a browser try and open the page with this syntax:

https://yourURL.com/analytics-ws/saw.dll/wsdl/v9

If the web-services are available, a page similar to the following will be displayed:

https___casf-test_crm_us1_oraclecloud_com_analytics-ws_saw_dll_wsdl_v9

 

If this does not display, try repeating but with this syntax (using ‘analytics’ instead of ‘analytics-ws’)

https://yourURL.com/analytics/saw.dll/wsdl/v9

If neither of these options displays the XML page, then unfortunately web-services are not available in your environment, or the version of web-services available is earlier than the ‘v9’ that the Data Sync tool requires.

Speak with your BI Administrator to see if the environment can be upgraded, or the web-services exposed.
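As a quick command-line alternative to the browser check, the same test can be run with curl (the -k flag skips certificate validation; the URL is a placeholder). If the web-services are exposed, the start of the WSDL document will be printed:

curl -k -s https://yourURL.com/analytics-ws/saw.dll/wsdl/v9 | head -20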

 

Defining Connection in the Data Sync Tool

a. In the Data Sync tool, select ‘Connections’, ‘Sources/Targets’, and ‘New’

Windows7_x64

b. Give the connection an appropriate name, and select ‘Oracle BI Connector’ as the connection type.

c. Enter the credentials for a user that has Consumer rights in the OTBI environment required to run a report.

d. For the URL, first test with the syntax

https://yourURL.com/analytics-ws

This has fewer layers for the Data Sync tool to traverse, and so may offer slightly improved performance.

If the ‘Test Connection’ option fails – then try the following syntax:

https://yourURL.com/analytics

 

In this case, using the ‘analytics-ws’ version of the syntax, the configuration would look like this:

Windows7_x64

Save the connection, and then use the ‘Test Connection’ button to confirm details are correct.

Windows7_x64

e. For an On-Premise connection, the process would be identical.  Use the URL that is used to connect to the BI Analytics portal, and edit the URL to either use the ‘/analytics-ws’ or ‘/analytics’ path as described above.  In this example screenshot an internal URL is used, with the port number.  Once again, test the connection to confirm it connects.

Windows7_x64

 

Create Data Source Based on BI Analyses Report

a. Log into the Analytics portal for OTBI.

1

b. Create a new Analysis.  In this example, a simple report is created using Customer data from the ‘Marketing – CRM Leads’ Subject Area.  While not used here, a more complex query, with filters, calculated fields, aggregations, and fields from multiple subject area folders, could easily be created.

Oracle_BI_Answers

c. Save the Analysis.  In this example the report was named ‘DataSyncTest’ and saved in /Shared Folders/Customer.  This path will be used in subsequent steps, although the path format will be slightly different.

3

d. Within the Data Sync tool, create a new ‘Manual Entry’ within the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy:

1

e. Give the extract a Logical Name, used as the Source Name within the Data Sync Tool.  The ‘Target Name’ should be the name of the table you want to load into BICS. If the table doesn’t exist, the Data Sync Tool will create it.

Windows7_x64

f. A message provides some additional guidance on best practice.  Select ‘Report’ as the source at the bottom of the message box as shown below, and then ‘OK’ to continue.

Windows7_x64

g. In the next screen, enter the path for the BI Answers analysis from step c.  Notice that the syntax for the Shared Folder is ‘/shared’, which differs from how it’s displayed in the OTBI Portal as ‘Shared Folders’.  In this example the folder path is:

/shared/Custom/DataSyncTest

 

[Screenshot: the catalog path entered for the Report source]

h. The Data Sync tool can identify the field types, but it does not know the correct length to assign in the target for VARCHAR fields.  By default these will be set to a length of 200.  These should be manually checked afterwards.  A message will inform you which of these need to be looked at.  Click ‘OK’ to continue.

[Screenshot: message listing the VARCHAR fields defaulted to a length of 200]

i. The target table defined in step e. will be created.  Go to ‘Target Tables / Data Sets’, select the Target table that was just created, choose the ‘Table Columns’ option, and adjust the VARCHAR lengths as necessary.

[Screenshot: adjusting VARCHAR lengths under ‘Table Columns’]

j. As long as the Source Report has a unique ID field and a date field that shows when the record was last updated, the Load Strategy can be changed so that only new or updated data is loaded into the target table in BICS.  This can be changed in the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy as shown below:

 

[Screenshot: changing the Load Strategy for the Pluggable Source Data entry]

k. In this case the ‘Customer Row ID’ would be selected as the unique key for ‘User Key’ and the ‘Last Update Date’ for the ‘Filter’.

 

It is important to realize that while only changed or new data is being loaded into BICS, the full set of data needs to be extracted from OTBI each time.  The next two methods also provide the ability to filter the data being extracted from OTBI, thus improving performance.

There is also a restriction within some Cloud OTBI environments where the result set is limited to 65,000 rows or fewer.  If the extract is going to be larger than this, the other two methods should be considered.

You now have a Source and Target defined in BICS and can run the Job to extract data from an OTBI Analysis and load the data into BICS.

Create Data Source Based on SQL

a. While editing the Analysis within OTBI, select the ‘Advanced’ tab and scroll down. The SQL used by the report is displayed.  The example below is the same report created earlier.

[Screenshot: the ‘Advanced’ tab showing the SQL used by the report]

b. Cut and paste the SQL, and remove the ‘ORDER BY’ clause and subsequent SQL.  Also remove the first row of the select statement (‘0 s_0,’).  Both of these are highlighted in the green boxes in the image above.

In this case, the edited SQL would look like this:

SELECT
"Marketing - CRM Leads"."Customer"."City" s_1,
"Marketing - CRM Leads"."Customer"."Country" s_2,
"Marketing - CRM Leads"."Customer"."Customer Row ID" s_3,
"Marketing - CRM Leads"."Customer"."Customer Unique Name" s_4,
"Marketing - CRM Leads"."Customer"."Last Update Date" s_5,
DESCRIPTOR_IDOF("Marketing - CRM Leads"."Customer"."Country") s_6
FROM "Marketing - CRM Leads"

c. In the Data Sync tool, create a ‘Manual Entry’ for the SQL Data Source under the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy.  As before, the Logical Name is used as the Source, and the Target Name should be either the existing table in BICS that will be loaded, or the name of the new table that the Data Sync Tool is to create.

[Screenshot: Logical Name and Target Name for the SQL source]

d. Select ‘SQL‘ as the source type

[Screenshot: selecting ‘SQL’ as the source type]

e. In the ‘Initial SQL’ value, paste the SQL edited from the ‘Advanced’ tab of the Analysis

[Screenshot: the edited SQL pasted into the ‘Initial SQL’ value]

f. As before, a message is displayed reminding you to check the target table and adjust the size of VARCHAR fields as necessary:

[Screenshot: reminder message about VARCHAR field lengths]

g. Edit the newly created target and adjust the lengths of the VARCHAR fields as necessary.

[Screenshot: adjusting VARCHAR lengths on the target table]

h. This approach allows for true Incremental Updates of the target data, where only new or updated records from the Source OTBI environment are extracted.  To set up Incremental Updates, go back to ‘Project’, ‘Pluggable Source Data’ and select the source created in the previous step.  In the bottom section under the ‘Edit’ tab, select the ‘Incremental SQL’ box.

[Screenshot: the ‘Incremental SQL’ box under the ‘Edit’ tab]

The Data Sync tool has two variables that can be added to the Logical SQL as an override to reduce the data set extracted from the source.

Those 2 variables are:

‘%LAST_REPLICATION_DATE%’ – which captures the date that the Data Sync job was last run

‘%LAST_REPLICATION_DATETIME%’ – which captures the timestamp that the Data Sync job was last run

As long as there is a suitable DATE or TIMESTAMP field in the source data that can be used to filter records, these variables can be used to reduce the data set pulled from OTBI to just the data that has changed since the last extract was run.

This is an example of the Incremental SQL using the %LAST_REPLICATION_DATE% variable.  The SQL is identical to the ‘Initial SQL’, just with the additional WHERE clause appended to the end.

[Screenshot: Incremental SQL using the %LAST_REPLICATION_DATE% variable]

And this is an example of the Incremental SQL using the %LAST_REPLICATION_DATETIME% variable:

[Screenshot: Incremental SQL using the %LAST_REPLICATION_DATETIME% variable]
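
Since the exact SQL depends on your own report, the following is only a minimal sketch of what the Incremental SQL might look like for the example report above, assuming the ‘Last Update Date’ column is the one used to detect changed rows:

-- Illustrative sketch only: identical to the Initial SQL, with a WHERE clause appended.
-- For the %LAST_REPLICATION_DATETIME% variant, substitute that variable in the same place.
-- The date/timestamp comparison format may need adjusting for your environment.
SELECT
"Marketing - CRM Leads"."Customer"."City" s_1,
"Marketing - CRM Leads"."Customer"."Country" s_2,
"Marketing - CRM Leads"."Customer"."Customer Row ID" s_3,
"Marketing - CRM Leads"."Customer"."Customer Unique Name" s_4,
"Marketing - CRM Leads"."Customer"."Last Update Date" s_5,
DESCRIPTOR_IDOF("Marketing - CRM Leads"."Customer"."Country") s_6
FROM "Marketing - CRM Leads"
WHERE "Marketing - CRM Leads"."Customer"."Last Update Date" > '%LAST_REPLICATION_DATE%'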

i. To utilize the Incremental approach, the Load Strategy should also be set to ‘Update Table’.  From ‘Project’ / ‘Pluggable Source Data’, select the Source based on the Logical SQL, and under the ‘Edit’ tab change the Load Strategy:

[Screenshot: setting the Load Strategy to ‘Update Table’]

Set the ‘User Key’ to a column, or columns, that make the row unique.

[Screenshot: selecting the ‘User Key’ column(s)]

For the Filter, use a field that identifies when the record was last updated.

[Screenshot: selecting the ‘Filter’ column]

Data Sync may suggest an index to help load performance.  Click ‘OK’ to accept the recommendation and to create the Index.

[Screenshot: index recommendation message]

You now have a Source and Target defined in BICS and can run the Job to extract data from the SQL created from an OTBI Analysis and load the data into BICS.

Create Data Source Based on Subject Area Folder

a. In the Data Sync Tool under the ‘Project’ / ‘Pluggable Source Data’ menu hierarchy, select ‘Data from Object’

b. To save time, use the Filter and enter the full or partial name of the Subject Area within OTBI that is to be used.  Be sure to use the ‘*’ wildcard at the end.  If nothing is entered in the search field to reduce the objects returned, an error will be thrown.

c. Select ‘Search’.  Depending on the Filter used, this could take several minutes to return.

[Screenshots: the Subject Area search filter and the returned Subject Areas]

d. Select the Subject Area folder(s) to be used.  If more than one is selected, a separate source and target will be defined for each within the Data Sync tool.

[Screenshot: selecting the Subject Area folder(s) to import]

e. Some best practices for this method are displayed.  Click ‘OK’.

[Screenshot: best-practice message for this method]

f. As in the other methods, a list of the fields cast as VARCHAR with a default length of 200 is shown.  Click ‘OK’.

[Screenshot: list of VARCHAR fields defaulted to a length of 200]

g. After a few moments a ‘Success’ notification should be received:

[Screenshot: ‘Success’ notification]

h. As before, update the lengths of the VARCHAR fields as needed: under ‘Target Tables / Data Sets’, select the Target table created in the previous step, and in the bottom section under ‘Table Columns’ the lengths can be altered:

[Screenshot: adjusting VARCHAR lengths under ‘Table Columns’]

i. The Load Strategy can be updated as needed in the same way, and the true ‘Incremental’ option is available if there is a suitable date field in the source.

[Screenshots: updating the Load Strategy and the ‘Incremental’ option settings]

NOTE – for this method, if a date is selected as part of the Update Strategy, it is automatically used to restrict the data extracted from the source.  No further action is required to implement true incremental updates.

There is the option to add a filter to further restrict the data.  The example below shows how the “Contact” folder within the “Marketing – CRM Leads” Subject Area could be restricted to pull back only contacts from California.  The value is in the form of a ‘Where Clause’, using fully qualified names.

[Screenshot: filter restricting the ‘Contact’ folder to contacts from California]
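
As a purely hypothetical sketch (the ‘State’ column name and the value format are assumptions, not taken from the actual Subject Area), the filter value might look something like this:

-- Hypothetical example of a filter 'Where Clause' using fully qualified names
"Marketing - CRM Leads"."Contact"."State" = 'California'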

You now have a Source and Target defined in BICS and can run the Job to extract data from one or more Subject Areas (a separate mapping for each) and load that data into BICS.

 

There is another method for creating an extract based on a single Subject Area.  This may be preferable if an incremental update approach is required, although it only allows a single mapping to be set up at a time.

a. Under ‘Project’, ‘Pluggable Source Data’, select ‘Manual Entry’.  Select the OTBI DB connection and enter a suitable name for the logical source and target:

[Screenshot: ‘Manual Entry’ with the OTBI connection and source/target names]

b. Select ‘Subject.Area.Table’ as the data source, and then click ‘OK’.

[Screenshot: selecting ‘Subject.Area.Table’ as the data source]

c. In the Properties, enter the fully qualified name for the Subject Area and Folder (‘Table’), and also the filter with the same format as step (i) above.  Be sure to follow the steps to update the Load Strategy if incremental updates are required.

[Screenshot: Properties with the fully qualified Subject Area/Folder name and filter]
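
For illustration only (this reuses the hypothetical ‘State’ filter from earlier, and the exact quoting expected by the tool may differ in your environment), the values entered might look like:

-- Fully qualified Subject Area and Folder ('Table')
"Marketing - CRM Leads"."Contact"
-- Optional filter, in the same 'Where Clause' format as before
"Marketing - CRM Leads"."Contact"."State" = 'California'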

Summary
This article walked through the steps to configure the Data Sync tool to connect to, and extract data from, a cloud or on-premise OTBI environment.  The second method – ‘Create Data Source Based on SQL’ – is the recommended approach for most use cases.

For further information on the Data Sync Tool, and also for steps on how to upgrade a previous version of the tool, see the documentation on OTN.  That documentation can be found here.

Using the 12c Coherence Adapter to access a non-transactional local cache


Executive Overview

New in Oracle Fusion Middleware (FMW) 12c is the Oracle JCA Adapter for Coherence. Prior to FMW 12c, users wanting to take advantage of Coherence directly from, say, BPEL would have had to utilise a Java call-out for that purpose.

Oracle Coherence is a JCache-compliant in-memory caching and data management solution for clustered J2EE applications and application servers. Naturally, being in-memory, it performs extremely well, and the performance of distributed caches is constrained only by the network infrastructure. In any case, one would expect Coherence to outperform a database or any other file-based data access mechanism.

In its first release, the Adapter is shipped with three pre-configured outbound connection pool definitions. Two of these relate to transactional caches and the other is merely a placeholder for a remote cache. The full documentation for the Adapter (in general) and configuration of a remote cache (in particular) can be found here.

Therefore, this release does not provide a pre-configured set-up for a local, non-transactional cache.

This article comes about as a result of a specific customer requirement and discusses the steps needed to achieve this. One further requirement was that any cache entry would have a time-to-live (TTL). In other words, after a specific period (measured in milliseconds) the cache entry would be invalidated.

Solution

 

Solution detail

The first step is to configure a new entry for the Coherence Adapter’s Connection Factory using a JNDI name of your choice. In this example, it is eis/Coherence/localNonXATTL

Configure it like this:-

[Screenshot: outbound connection pool properties for eis/Coherence/localNonXATTL]

Note that the CacheConfigLocation has to be available to all / any SOA servers using the Adapter (shared storage).

 

Now let’s configure the cache configuration file:-

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>local-AK</cache-name>
      <scheme-name>no-trans-ttl</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <local-scheme>
      <scheme-name>no-trans-ttl</scheme-name>
      <service-name>LocalCache</service-name>
      <autostart>true</autostart>
    </local-scheme>
  </caching-schemes>
</cache-config>

The cache-name can be any name of your choosing. The scheme-name in the cache-mapping section has to be the same value as scheme-name in the local-scheme section. All other values must be as shown here.

The cache-name and the JNDI entry name will be needed when configuring the Coherence Adapter in JDeveloper (JCA Configuration).

So let’s configure the Adapter to carry out a PUT to the cache and ask Coherence to invalidate the object after 1 minute.

1. In JDeveloper, drag the Coherence Adapter from the Technology pane into the right-hand swim-lane of your Composite

2. Name it what you will

3. Specify the JNDI name you used earlier (e.g. eis/Coherence/localNonXATTL)

4. Select Put operation

5. Enter the cache name that you specified in the cache configuration file (e.g. local-AK)

6. [ You can ask Coherence to generate a key for you in which case the generated key is available as a response value from the Put operation. If you want to specify your own key, you will need to un-check the Auto-generate key option ]

7. Select Custom from the Time To Live drop-down and enter a value of 60000 (ms)

8. You will have defined a data structure (XSD) for the object being cached so select that next

9. Finish

If you inspect the appropriate JCA file, you should see this:-

<adapter-config name="CoherencePut" adapter="coherence" wsdlLocation="../WSDLs/CoherencePut.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/Coherence/localNonXATTL"/>
  <endpoint-interaction portType="Put_ptt" operation="Put" UICacheItemType="XML">
    <interaction-spec className="oracle.tip.adapter.coherence.jca.CoherenceInteractionSpec">
      <property name="TimeToLive" value="60000"/>
      <property name="KeyType" value="java.lang.String"/>
      <property name="CacheOperation" value="put"/>
      <property name="CacheName" value="local-AK"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>

Finally

This mechanism has been tested and shown to work in FMW 12.2.1. However, there is no reason that I know of why this wouldn’t also work in 12.1.3.

Here is a Sample that demonstrates this mechanism.

As a pre-requisite for this JDeveloper Project, you will need to:-

1) Create the Coherence configuration file as described earlier and save it in a location accessible to any/all SOA servers that you’ll be using for the test

2) Target the Coherence adapter (it is not targeted by default)

3) If it was already targeted, update the deployment after adding the JNDI entry

4) Build and deploy the project

To test the project from FMW Control, invoke the web service with the value true in the doPut element (i.e. you are going to put a value into the cache). Enter a string of your choice into the keyOrValue element. Click Test Web Service.

In the response, you will see the value that you asked to be put in the cache and a key (this example uses the auto key generation option of the Adapter).

Copy the key value and click the Request tab. Enter false in the doPut element and copy the returned key into the keyOrValue element. Click Test Web Service. If you execute this second part of the test within one minute (remember the TTL value from earlier?), the value that you asked to be put in the cache will be returned. If you continue to execute this second phase of the test, you will eventually see a returned value of “Not found” – i.e. the TTL has expired and the cache entry has been invalidated.

Other posts of interest

…may be found here and here
