Channel: ATeam Chronicles

Poller Transport Based Service Management Scripts


The polling transports (Email, File, FTP) in Service Bus poll on only one managed server for a given service, and that server is defined in the service configuration. However, if the polling managed server is not running, polling does not resume automatically on another managed server in the cluster.

The zip file below contains a collection of WLST scripts meant to aid in the re-targeting of polling services from one managed server to another, such as during a managed server outage.

To redirect pollers to a new managed server, use the scripts to take the following steps:
1) List all the services currently polling on a given managed server and save this information to a file.
2) Modify all the services in the list to poll from a different managed server.
3) Once the original managed server is back online, use the saved list to modify those same services a second time and move the poller back to that managed server.

Additional usage information can be found in the enclosed readme.txt file.

OSBChangePollingMS


Tips on Documents Cloud URL Parameters: Part II


In the first post on this topic we covered some of the ways you can use and understand the Oracle Documents Cloud URL structure. Since that post, an update to DOCS has added some great new features, especially for embedding a folder or file. More granular control over the user interface is possible through URL parameters. In addition, the REST API now has public link capabilities, which enriches the possibilities for client-side apps interacting with DOCS. The structure of the public link URLs is straightforward, and we will show how to use a public link to do a direct download of a file (rather than open the DOCS web interface).

 

Public Links

The DOCS public links REST API is available now. One of the first questions fielded on this API was, "Where is the public link in the JSON response?" Notice that upon creation of a public link, the response does not contain the full URL.

{
    "assignedUsers": "@everybody",
    "createdTime": "2015-06-10T16:01:44Z",
    "errorCode": "0",
    "expirationTime": "2016-01-01T00:00:01Z",
    "id": "FBE1117A270E747BB1D95024T0000000000100000001",
    "lastModifiedTime": "2015-06-10T16:01:44Z",
    "linkID": "LF906748A021ACD714CABC82T0000000000100000001",
    "linkName": "MyLinkOne",
    "ownedBy": {
        "displayName": "User AA",
        "id": "U0EAA20910FAF3052ACB79E4T00000000001",
        "type": "user"
    },
    "role": "contributor",
    "type": "publiclink"
}

But this "missing" URL actually reduces the size of the response and lets the calling client use the data in whatever manner fits the context. Typically, the public link will be used to open the DOCS web interface. So what is the URL format for the web public link? Here are the formats for folder, file, and file direct download.

Public link to a folder

Creating a public link to a folder has been around since the inception of DOCS, so this is likely the most familiar public link. The host name of DOCS is already known (since you’ve successfully made a REST call to DOCS to create the public link) and thus putting together the URL only requires two pieces of the JSON response: linkID, and id. Note that id can represent a folder or a file, but it’s easy to tell them apart because a folder id starts with ‘F’ and a file id starts with a ‘D’ (as in document).

The format for all public links only requires these two parts of the JSON response.
https://mydocshost/documents/link/{linkID}/folder/{id}

Example: /documents/link/LFA634B6996F78EB758C14E915BE5DEA38AC513C7430/folder/F0AF7C0FFA4D23F7509C524715BE5DEA38AC513C7430

Public link to a file

A public link to a file is almost identical to the folder link, except that the word "folder" is changed to "fileview". This will open the file in the DOCS file viewer.
https://mydocshost/documents/link/{linkID}/fileview/{id}

Example: /documents/link/LD3DB1208053E644D517B25315BE5DEA38AC513C7430/fileview/D59AD1F507B1B167ABF5D59315BE5DEA38AC513C7430

 

Public link to a file – direct download

Another question that has come up from customers is, "How can I automatically prompt the user to download a file instead of viewing it?" This requires just a small tweak of the URL, changing "fileview" to "file". The URL below will prompt the user to open or save the file. This can be useful when the recipient of the public link URL doesn't need to see the DOCS interface at all, but rather is expected to download and open the file in its native application, such as a Word file, or a zip file which isn't viewable. Often delivering a direct download link matches the usage better than the file viewer.

https://mydocshost/documents/link/{linkID}/file/{id}

Example: /documents/link/LD3DB1208053E644D517B25315BE5DEA38AC513C7430/file/D59AD1F507B1B167ABF5D59315BE5DEA38AC513C7430
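
As a quick illustration, below is a minimal Python sketch that assembles these URLs from the JSON returned when a public link is created. The host name and the IDs shown are placeholders; only linkID and id come from the REST response, and the folder/file decision uses the 'F' vs. 'D' prefix convention described above.

import json

# Abridged public link creation response -- only the two fields needed for the URL.
response = json.loads('{"linkID": "LF906748A021ACD714CABC82T0000000000100000001",'
                      ' "id": "FBE1117A270E747BB1D95024T0000000000100000001"}')

host = "https://mydocshost"          # placeholder DOCS host
link_id, item_id = response["linkID"], response["id"]

if item_id.startswith("F"):
    # Folder public link
    url = "{0}/documents/link/{1}/folder/{2}".format(host, link_id, item_id)
else:
    # File public link: use "fileview" for the viewer or "file" for direct download
    url = "{0}/documents/link/{1}/fileview/{2}".format(host, link_id, item_id)

print(url)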

 

Embedded URL parameters

 

Two powerful parameters have been added to DOCS that allow you to control what is displayed in the interface. The parameters are cfg, for folder browsing configuration, and vcf, for the file viewer configuration. These parameters contain many toggle switches for showing and hiding options for the user. Note that these parameters are used in embedded mode, meaning for public links, file picker, or AppLink integrations. See this link for reminders on how to embed the DOCS interface. Here's a super short primer on embedding a DOCS folder: the word 'embed' just needs to be added to the URL.

Not embedded: https://mydocshost/documents/home/nameasc
Embedded: https://mydocshost/documents/embed/home/nameasc

And surely you recall from the first post on this topic that “nameasc” means “sort by name, ascending”.

 

Browse Configuration: cfg

The cfg parameter can take multiple values in a comma-separated list. For example, if you want to hide the upload and the breakout button for a public link set with contributor rights, you can set cfg=nup,hbr.

So if we have a public link that is set to contributor rights but need to deliver it to a user who should be able to delete files but not upload them, we would add cfg=nup. If we also want to disable the breakout icon, the hbr value can be added as well.

/documents/embed/link/{LinkID}/folder/{id}/cfg=hbr,nup

The full list of values and the usage are shown in the table below.

 

Value Description
sfo Show Folders Only
sdo Show Documents Only. Allow selection of documents (files) only. When in this mode, the folders are shown only for the purposes of navigating into a folder so that you can find a file.
ssl Allow single selection only. (No, it’s not Secure Sockets Layer!)
evw Embed file viewer. When not specified, the file view will appear in a top-level window.
hbr Hide the Breakout button
hhd Hide the Header
hdc Hide the Open Oracle Documents Cloud button
hnw Hide the New Button
nca No Context Actions. When present, no context actions will appear in the toolbar or menu when an item is selected.
nvw No View Action. If present, documents cannot be opened in the viewer.
nop No Open Action. If present, folders cannot be opened.
nup No upload. If present, there will be no upload capability. The upload buttons and links are hidden.
nmv No Move Action
ncp No Copy Action
nsh No Share. If present, the share button is hidden.
npr No Properties
ndl No Delete Action
ndw No Download Action. Removes the ability to download in all areas of the product including the Viewer even if the Viewer Configuration (vcf) option ndl has not been set.

 

Viewer Configuration: vcf

Like the cfg parameter, the vcf parameter can take multiple values. It can hide or show functionality in the DOCS viewer. Various controls may need to be hidden depending on the user context in which the embedded interface is presented.

An example of using a public link to a file is below. The vcf parameter has ndl set to disallow downloads of the file, and hbc to hide the breadcrumbs.

/documents/embed/link/{LinkID}/fileview/{id}/vcf=ndl,hbc

The full list of values and the usage are shown in the table below.

Value Description
hfp Hide fit-to-page control.
hfw Hide fit-to-width control.
hzc Hide zoom controls. This includes the zoom out, zoom in, and zoom slider controls.
hzs Hide the zoom slider. If hzc is specified, this setting is redundant.
nup No upload button will be shown.
ndl No download button will be shown.
htn Hide the thumbnails of each page in the file.
hfv Hide the favorite indicator button.
hnm Hide the name of the file.
hbc Hide the breadcrumbs.
sfp Start in fit-to-page mode. This is the default setting. Only one of sfp, sfw, or sos should be specified.
sfw Start in fit-to-width mode. Only one of sfp, sfw, or sos should be specified.
sos Start in original size mode. Only one of sfp, sfw, or sos should be specified.
stc Start with the thumbnail pane collapsed.
vhc Video controls should be hidden.
vap Video should auto play when the page loads
vlp Video should loop when it finishes playing.
vmt Video should be muted.

 

 

These URL configuration parameters expand the integration capabilities of public links and AppLinks. The context of the embedded URL usage can be controlled with more focus on the user experience. The addition of the public link REST API and the URL parameters allows Oracle customers to incorporate DOCS into more places with a refined interface.

 

Customizing Fusion Application Expenses with Descriptive Flex Fields


Introduction

This article describes how to customize the Oracle Fusion Applications (FA) Expenses application using a Descriptive Flexfield (DFF). We will look at a use case that requires capturing more detail about an expense item and walk through how to implement it. The use case is meant to showcase a capability of the application that can be developed further for specific needs.

Use Case

Let's take a use case where a company has decided that employees must get pre-approval before claiming internet expenses and must enter an approval code each time an internet expense is claimed.

The purpose of this use case is to look at how to capture more detail about an expense item when it is claimed. This can be achieved with a context-sensitive Descriptive Flexfield (DFF) that must be entered when a certain type of expense is selected and claimed. The additional data is stored in the system along with the expense report for future reporting and analysis.

Scope

This blog captures the details of the solution and its setup. Please treat it as a reference that gives you a sufficient idea of the setup, not the sole guide for your complete implementation. Functional setup of the FA Finance module is a prerequisite and is therefore not discussed in this blog. Because the setup can change from environment to environment depending on your functional configuration, the steps documented here may need to change as well. If you have previously set up DFFs and contexts, please refer to the product documents (links are given in the reference section below) and adjust as necessary. Prior experience with the Oracle Fusion Finance and Expenses modules is desirable to follow the content discussed in this article.

Implementation of Use Case

As this use case is about capturing the approval code when the user is entering an internet expense, we will first look at how to enable the user to enter the approval code upon selecting 'Internet' as the expense type. We assume the approval code has been obtained outside of the expense system, prior to creating the expense report.

Let's first briefly discuss flexfields in FA and then look at the implementation steps. Generally speaking, flexfields offer a way to add custom fields specific to a customer need and to extend the Fusion application for customer-specific use cases. There are three types of flexfields available in FA: Key, Descriptive, and Extensible flexfields. Key flexfields are configurable, multipart intelligent keys that can represent values such as part numbers and account numbers. Descriptive and Extensible flexfields allow capturing additional information declaratively, without custom development. Please refer to the Oracle documentation at the link here for more details on each of the flexfield types.

We will be using a Descriptive Flexfield (DFF) for this use case, as it provides simple building blocks that are sufficient for our needs. Generally, flexfields are considered part of the base Fusion Apps product, so the benefit of using the 'sandbox' feature is limited for the use case discussed here. However, a 'sandbox' is recommended for heavier customizations where page component properties are used.

Descriptive Flex Field Setup

Log in to Fusion Applications as a user with implementation privileges, such as an Application Implementation Consultant, and go to Setup and Maintenance, as indicated by the green arrow in the picture below.

ExpB1Nav

In the 'Setup and Maintenance' screen shown below, under 'Search Tasks', search for 'expense' to list the expense-related tasks. We will be using two of the tasks in the search result – 'Manage Expenses System Options' and 'Manage Descriptive FlexFields for Expense Reports'. First, let's quickly check whether Descriptive Flexfields are enabled in the system by selecting the 'Go to Task' icon for 'Manage Expenses System Options'.

ExpB1SM2

This launches a window as shown below, where you need to make sure the highlighted 'Enable Descriptive Flexfields' option is set to 'Yes'.

ExpB1SO3

Once the field is confirmed to have been set correctly, ‘Save and Close’ the window to move to the next step.

Back in the list of 'expense' tasks, select the 'Go to Task' icon for 'Manage Descriptive FlexFields for Expense Reports'. In the 'Manage Descriptive FlexFields for Expense Reports' screen, select 'Expenses' in the Module pull-down and search as shown below. This returns two lines, and you can edit the 'Expense' line as highlighted below.

ExpB1SM4

Once you select the pencil icon to edit, the following screen appears, where you can manage contexts and various other expense-related fields.

ExpB1SM5

As shown in the 'Edit Descriptive Flexfield: Expenses' screen, select the 'Manage Context' button highlighted at the top, create a context named 'Internet' for this use case, and add the context-sensitive segments that will hold the value of the approval code.

ExpB1SM6

The screen below shows the 'Context Sensitive Segments' and the options entered under 'Create Segment'.

ExpB1SM7

Once done entering values, 'Save and Close' the 'Create Segment' screen and then 'Save and Close' the 'Create Context' screen.

‘Save and Close’ the ‘Edit Descriptive Flexfield: Expense’ window and select ‘Deploy Flexfield’. The progress is as shown below:

ExpB1SM8

This completes the setup of DFF and the next step is to create an expense report and test the DFF field.

Now, we log in to FA as a user with the necessary privileges to launch 'Expenses' and select the 'Expenses' application icon under 'My Information'. The user then sees the options to 'Create Report' and 'Create Expense Item' to fill in the details of the expense, as shown below:

ExpB1SM9

Here we see that selecting 'Internet' under 'Expense Type' makes the 'ApprovalCodeForInternet' field appear as a mandatory field. The value entered here is stored in the expense database in column 'ATTRIBUTE_CHAR1', as we set up in the segment.

With what we have done so far, we have used the DFF to capture a specific expense attribute in the expense tables, where it can be used for runtime validations and future audits and analysis. The approval code value can be accessed in the BPM Worklist as 'ExpenseItem.attributeChar1' and can also be accessed in reports.

Summary

In this article, we looked at how to customize Fusion Apps Expenses using a DFF to capture additional data for an expense item, depending on the expense type/context the user selects when making a claim.

References

Web links and Oracle Documents:

  • Information Center: Oracle Internet Expenses (Doc ID 1381228.2)
  • Troubleshooting Assistant: Fusion Expenses (EXM) (Doc ID 1574035.2) is a nice menu driven tool.
  • Oracle Cloud Expenses Co-existence and Integration Options (Doc ID 2046956.1)
  • How to Assign Fusion Expenses to A User (Doc ID 1571884.1)

Integration with Fusion Application Expenses over Web Services


Introduction

This blog is about how to integrate Fusion Application (FA) Expenses with external applications over web services. We will provide an outline of the steps for integration along with sample payloads that can be adapted for specific needs.

Use Case

In this use case, a customer has an application, such as Customer Relationship Management (CRM) or a custom application, that limits an expense amount according to local company policy defined in that application. As an example, suppose the policy evaluation has determined that the entertainment expense limit is $40 for a particular event, and this information needs to be posted to an expense report along with the event details. After incurring the expense, the user can go to the FA Expenses module and pick up the report created for this event with the set limits, which is waiting for the user in 'Saved' mode.

This integration of posting the information from the customer application to FA Expenses can be achieved by invoking the web services offered by FA Expenses module. Let’s first look at the components of an expense report and then look at how to integrate with it.

Scope

Please consider this article a reference guide that gives you an idea of the integration steps, not the sole guide for your integration work. Functional setup of the FA Finance module is a prerequisite and is not covered in this blog. Because the setup can change from environment to environment depending on your functional configuration, the steps documented here may need to change as well. Prior experience with consuming SOAP/WSDL web services and with the Oracle Fusion Finance and FA Expenses modules is desirable to follow the content discussed in this article.

Components of an FA Expense Report

An expense report is a container for expense items and other data points that are useful to the approver for applying policies and rules, as well as for auditing and reporting. An expense report can have one or more expense items. The report total is the sum of the expense items claimed, and the 'purpose' gives a meaningful message to the approver and others involved in processing the expense report, including the filer. The expense report shown in the picture below is created for the purpose 'Taking Govt customer for XYZ event EventCode:ABC123':

ExpWSBl-1

Per our use case, this event information has originated in CRM and an expense has been pre-approved there with a Per Diem for entertainment according to local laws or policy.

Our use case requires this information to be populated in an expense item as shown in the picture below:

ExpWSBl-2

So, our plan is to create an expense report in the FA Expenses system, saved with these details for the user to review, which will look like the following when the user logs in:

ExpWSBl-3

Now, let’s look at the steps to perform this integration and transfer of information.

Preparing for Integration

To perform this integration over web services, we first need to identify the SOAP WSDL endpoints to invoke. This information is listed at https://fusionappsoer.oracle.com (accessible with an Oracle Technology Network login). This is your starting point to discover services exposed for external customer consumption. The web page allows you to search for the module you are looking for within a product family and the version you are interested in. The following picture shows such a query for the expenses-related web services. We are looking to use the service 'Expense Item and Expense Report', as selected here:

ExpWSBl-4

The 'Detail' tab lists all the methods offered by this service, and towards the end of the 'Detail' tab content you'll see these URLs:

Service Path: https://<fin_server:PortNumber>/finExmSharedCommon/ExpenseService?WSDL

Abstract WSDL URL: rep://R10_FUSIONAPPS_HOSTEDREP/oracle/apps/financials/expenses/shared/common/expenseExternalService/ExpenseService.wsdl

With the help of these, you can map to your own WSDL service URL, for example:

https://fin-external.mycompany.com:12604/finExmSharedCommon/ExpenseService?wsdl

The ‘Documentation’ tab has further details on the methods and service XSDs.

Integration with FA Expense Web Services

Now that we have identified the WSDL endpoint to integrate with FA Expenses, let's look at invoking the methods to create an expense report and an expense item. We will use the SoapUI tool to demonstrate the client invocation. First, note down the WSDL endpoint of the Expenses application in your Fusion Applications environment, as discussed above. Let's take the following URL for our discussion:

https://fin-external.mycompany.com:12604/finExmSharedCommon/ExpenseService?wsdl

Again, this is only a sample URL; the host and port entries need to be updated according to your FA setup. Now we will use the SoapUI tool to create a SOAP client project, as shown in the picture below, to perform the operations. You may use your tool of choice.

ExpWSBl-5

Now, let's look at creating an expense report over a web service call. In the SoapUI project that we just created, locate 'createExpenseReport' under 'ExpenseServiceSoapHttp'. Double-click 'Request 1' and the window for running the operation opens up, as shown below:

ExpWSBl-6.1

Set the necessary credentials, then edit the request body to the desired input values. A sample payload is given below.

Payload Sample to Create Expense Report

<soapenv:Envelope xmlns:com="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/companykff/" xmlns:cos="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:dff="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/dff/" xmlns:exp="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/" xmlns:pjc="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/pjcdff/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
   <soapenv:Header><wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><wsu:Timestamp wsu:Id="TS-16"><wsu:Created>2015-11-11T23:30:25.998Z</wsu:Created><wsu:Expires>2015-11-11T23:40:25.998Z</wsu:Expires></wsu:Timestamp><wsse:UsernameToken wsu:Id="UsernameToken-15"><wsse:Username>finuser1</wsse:Username><wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Welcome1</wsse:Password><wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">zmye0rbowzoQFQGtL1H/jg==</wsse:Nonce><wsu:Created>2015-11-11T23:30:25.998Z</wsu:Created></wsse:UsernameToken></wsse:Security></soapenv:Header>
   <soapenv:Body>
      <typ:createExpenseReport>
         <typ:expenseReport>
             
          <!--  <exp:PersonId>?</exp:PersonId>
           -->
		<exp:PersonId>300100051411339</exp:PersonId>
		<exp:AssignmentId>300100051411351</exp:AssignmentId>
		<exp:ExpenseReportNumber>0099140154</exp:ExpenseReportNumber>
           <exp:Purpose>Taking Govt customer for XYZ event EventCode:ABC123 
</exp:Purpose>
             
            <exp:ReimbursementCurrencyCode>USD</exp:ReimbursementCurrencyCode>

            <exp:ExpenseStatusCode>SAVED</exp:ExpenseStatusCode>
             
            <exp:OrgId>204</exp:OrgId>
   
         </typ:expenseReport> 
          
      </typ:createExpenseReport>
   </soapenv:Body>
</soapenv:Envelope>

This will create an expense report in ‘Saved’ mode, as set by the property ‘ExpenseStatusCode’ in the payload above. Then, when the user logs in to the expense system, he/she will see that there is a report waiting in ‘Saved’ mode.

A successful createExpenseReport call returns the following response from the server.

Response from Server to Create Expense Report Call

<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://www.w3.org/2005/08/addressing" xmlns:typ="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
   <env:Header>
      <wsa:Action>http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService//ExpenseService/createExpenseReportResponse</wsa:Action>
      <wsa:MessageID>urn:uuid:fa9adc34-90e6-4fcf-b358-bce24fa3f7d8</wsa:MessageID>
      <wsse:Security env:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
         <wsu:Timestamp wsu:Id="Timestamp-Lh548KZH8MY1FHz1M1OQMQ22" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
            <wsu:Created>2015-11-11T23:30:55.998Z</wsu:Created>
            <wsu:Expires>2015-11-11T23:35:55.998Z</wsu:Expires>
         </wsu:Timestamp>
      </wsse:Security>
   </env:Header>
   <env:Body>
      <ns0:createExpenseReportResponse xmlns:ns0="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
         <ns1:result xsi:type="ns4:ExpenseReport" xmlns:ns1="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/" xmlns:ns4="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/" xmlns:ns0="http://xmlns.oracle.com/adf/svc/types/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <ns4:ExpenseReportId>300100051444933</ns4:ExpenseReportId>
            <ns4:ParentExpenseReportId xsi:nil="true"/>
            <ns4:PreparerId xsi:nil="true"/>
            <ns4:PersonId>300100051411339</ns4:PersonId>
            <ns4:AssignmentId>300100051411351</ns4:AssignmentId>
            <ns4:ExpenseReportDate xsi:nil="true"/>
            <ns4:ExpenseReportTotal xsi:nil="true"/>
            <ns4:ExpenseReportNumber>0099140154</ns4:ExpenseReportNumber>
            <ns4:Purpose>Taking Govt customer for XYZ event EventCode:ABC123
</ns4:Purpose>
            <ns4:ReimbursementCurrencyCode>USD</ns4:ReimbursementCurrencyCode>
            <ns4:ExchangeRateType xsi:nil="true"/>
            <ns4:OverrideApproverId xsi:nil="true"/>
            <ns4:CurrentApproverId xsi:nil="true"/>
            <ns4:ReportSubmitDate xsi:nil="true"/>
            <ns4:ExpenseStatusCode>SAVED</ns4:ExpenseStatusCode>
            <ns4:ExpenseStatusDate xsi:nil="true"/>
            <ns4:ExpReportProcessingId xsi:nil="true"/>
            <ns4:ReceiptsStatusCode>NOT_REQUIRED</ns4:ReceiptsStatusCode>
            <ns4:ReceiptsReceivedDate xsi:nil="true"/>
            <ns4:BothpayFlag xsi:nil="true"/>
            <ns4:ExportRejectCode xsi:nil="true"/>
            <ns4:ExportRequestId xsi:nil="true"/>
            <ns4:OrgId>204</ns4:OrgId>
            <ns4:ObjectVersionNumber>1</ns4:ObjectVersionNumber>
            <ns4:PaymentMethodCode>CHECK</ns4:PaymentMethodCode>
            <ns4:CashExpensePaidDate xsi:nil="true"/>
            <ns4:FinalApprovalDate xsi:nil="true"/>
         </ns1:result>
      </ns0:createExpenseReportResponse>
   </env:Body>
</env:Envelope>

Now we take the newly created report to the next step: creating the expense item that adds the specific 'Entertainment' Per Diem. For this we need to parse the response of 'createExpenseReport' and extract a set of values to build the payload for the create expense item call, 'createExpense', as shown below. As an example, <ns4:ExpenseReportId>300100051444933</ns4:ExpenseReportId> is parsed from the 'createExpenseReport' response above and used as the value of 'ExpenseReportId' in the next payload. A scripted sketch of these two calls follows the payload samples below.

Payload Sample to Create Expense Item

<soapenv:Envelope xmlns:com="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/companykff/" xmlns:cos="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:dff="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/dff/" xmlns:exp="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/" xmlns:pjc="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/pjcdff/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
  <soapenv:Header><wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><wsu:Timestamp wsu:Id="TS-43DC3896F929810A1D14429866338154"><wsu:Created>2015-11-11T23:35:55.998Z</wsu:Created><wsu:Expires>2015-11-11T23:45:55.998Z</wsu:Expires></wsu:Timestamp><wsse:UsernameToken wsu:Id="UsernameToken-43DC3896F929810A1D14429866338143"><wsse:Username>finuser1</wsse:Username><wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Welcome1</wsse:Password><wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">JUlms0V81jjx5yCY04TCcA==</wsse:Nonce><wsu:Created>2015-11-11T23:35:55.998Z</wsu:Created></wsse:UsernameToken></wsse:Security></soapenv:Header>
  <soapenv:Body>
     <typ:createExpense>
        <typ:expense>
           <exp:ExpenseReportId>300100051444933</exp:ExpenseReportId>
           <!--Optional:-->
           <exp:ReimbursableAmount currencyCode="USD">40</exp:ReimbursableAmount>
           <!--Optional:-->
           <exp:Description>Entertainment Per Diem - do not change.</exp:Description>
           <!--Optional:-->
           <exp:StartDate>2015-08-11</exp:StartDate>
           <!--Optional:-->
           <exp:EndDate>2015-08-11</exp:EndDate>
           <!--Optional:-->
           <exp:ReceiptCurrencyCode>USD</exp:ReceiptCurrencyCode>
           <!--Optional:-->
           <exp:ExchangeRate>1</exp:ExchangeRate>
           <!--Optional:-->
           <exp:ReceiptAmount currencyCode="USD">0</exp:ReceiptAmount>
           <!--Optional:-->
           <exp:ExpenseSource>CASH</exp:ExpenseSource>
           <!--Optional:-->
           <exp:ExpenseCategoryCode>BUSINESS</exp:ExpenseCategoryCode>
           <!--Optional:-->
           <exp:ExpenseTypeCategoryCode>Entertainment</exp:ExpenseTypeCategoryCode>
           <!--Optional:-->
           <exp:FuncCurrencyAmount currencyCode="USD">0</exp:FuncCurrencyAmount>
           <exp:LocationId>300100001957889</exp:LocationId>
           <!--Optional:-->
           <exp:ReceiptRequiredFlag>false</exp:ReceiptRequiredFlag>
           <exp:OrgId>204</exp:OrgId>
           <exp:Location>Abbeville, Dodge, Georgia, United States</exp:Location>
           <!--Optional:-->
           <exp:ReimbursementCurrencyCode>USD</exp:ReimbursementCurrencyCode>
           <exp:ExpenseTypeId>100000010094106</exp:ExpenseTypeId>
           <exp:ExpenseTemplateId>10024</exp:ExpenseTemplateId>
               <exp:ExpenseDistribution xmlns:com="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:cos="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/companykff/">
                 <exp:CodeCombinationId>13799</exp:CodeCombinationId>
                 <exp:ReimbursableAmount xmlns:tns="http://xmlns.oracle.com/adf/svc/errors/">40</exp:ReimbursableAmount>
                 <exp:CostCenter>740</exp:CostCenter>
                 <exp:Segment1>01</exp:Segment1>
                 <exp:Segment2>740</exp:Segment2>
<exp:CompanyKFF xmlns:type="com:CompanyKffOPERATIONS_5FACCOUNTING_5FFLEX">
                 <com:ExpenseDistId xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">300100051444944</com:ExpenseDistId>
                 <com:_FLEX_StructureInstanceCode xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">OPERATIONS_ACCOUNTING_FLEX</com:_FLEX_StructureInstanceCode>
                 <com:_FLEX_StructureInstanceId xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">246</com:_FLEX_StructureInstanceId>
                 <com:_GL_5FGL_23_StructureInstanceNumber xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">101</com:_GL_5FGL_23_StructureInstanceNumber>
                 <com:FND_ACFF_ConcatenatedStorage xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:nil="true"/>
                 <com:FND_ACFF_Delimiter xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:nil="true"/>
                 <com:_Company xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">01</com:_Company>
              </exp:CompanyKFF>
              <exp:CostCenterKFF xmlns:type="cos:CostCenterKffOPERATIONS_5FACCOUNTING_5FFLEX">
                 <cos:ExpenseDistId>300100051444944</cos:ExpenseDistId>
                 <cos:_FLEX_StructureInstanceCode>OPERATIONS_ACCOUNTING_FLEX</cos:_FLEX_StructureInstanceCode>
                 <cos:_FLEX_StructureInstanceId>246</cos:_FLEX_StructureInstanceId>
                 <cos:_GL_5FGL_23_StructureInstanceNumber>101</cos:_GL_5FGL_23_StructureInstanceNumber>
                 <cos:FND_ACFF_ConcatenatedStorage xmlns:nil="true"/>
                 <cos:FND_ACFF_Delimiter xmlns:nil="true"/>
                 <cos:_Department>520</cos:_Department>
              </exp:CostCenterKFF>
              <exp:ProjectDFF xmlns:type="pjc:PJCDFFEXM_5FExpense_5FReport_5FLine">
                 <pjc:ExpenseDistId>300100051444944</pjc:ExpenseDistId>
                 <pjc:__FLEX_Context>EXM_Expense_Report_Line</pjc:__FLEX_Context>
                 <pjc:__FLEX_Context_DisplayValue>EXM: Expense Report Line</pjc:__FLEX_Context_DisplayValue>
                 <pjc:_FLEX_NumOfSegments>10</pjc:_FLEX_NumOfSegments>
                 <pjc:FLEX_PARAM_BusinessUnit>204</pjc:FLEX_PARAM_BusinessUnit>
              </exp:ProjectDFF>
           </exp:ExpenseDistribution>
           <!--Optional:-->
           <exp:ExpenseDFF xmlns:type="dff:Internet">
           </exp:ExpenseDFF>
        </typ:expense>
     </typ:createExpense>
  </soapenv:Body>
</soapenv:Envelope>
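
While SoapUI was used above to exercise the service interactively, the same two calls can easily be scripted. Below is a minimal Python sketch under the assumption that the envelopes are saved to local files shaped like the samples above; the file names, host, and the {{EXPENSE_REPORT_ID}} placeholder in the second template are illustrative, not part of the product.

import requests
import xml.etree.ElementTree as ET

# Placeholder host:port -- replace with your FA Expenses service path (without ?WSDL)
ENDPOINT = "https://fin-external.mycompany.com:12604/finExmSharedCommon/ExpenseService"
EXP_NS = "http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/"
HEADERS = {"Content-Type": "text/xml; charset=utf-8"}

# 1. Post the createExpenseReport envelope (WS-Security credentials travel in the SOAP header).
with open("createExpenseReport.xml") as f:
    report_envelope = f.read()
resp = requests.post(ENDPOINT, data=report_envelope, headers=HEADERS)
resp.raise_for_status()

# 2. Parse ExpenseReportId out of the createExpenseReportResponse.
root = ET.fromstring(resp.content)
report_id = root.find(".//{%s}ExpenseReportId" % EXP_NS).text
print("Created expense report:", report_id)

# 3. Substitute the id into the createExpense template and post it the same way.
with open("createExpense_template.xml") as f:
    expense_envelope = f.read().replace("{{EXPENSE_REPORT_ID}}", report_id)
resp2 = requests.post(ENDPOINT, data=expense_envelope, headers=HEADERS)
resp2.raise_for_status()
print("createExpense returned HTTP", resp2.status_code)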

As a result of the steps carried out so far, the user should be able to see the report below in his/her expense application:

ExpWSBl-7

Now the user can add any other expenses incurred, review the details and submit for further processing.

Summary

We looked at the steps to integrate with the Fusion Applications Expenses module over web services, including how to locate the WSDL URL, along with sample payloads that can be adapted for specific needs.

References

Web links and Oracle Documents:

• http://www.oracle.com/us/products/applications/fusion/financial-management/financials/expenses/resources/index.html

• Information Center: Oracle Internet Expenses (Doc ID 1381228.2)

• Troubleshooting Assistant: Fusion Expenses (EXM) (Doc ID 1574035.2) is a nice menu driven tool.

• Oracle Cloud Expenses Co-existence and Integration Options (Doc ID 2046956.1)

• How to Assign Fusion Expenses to A User (Doc ID 1571884.1)

Pipelined Table Functions in Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article outlines how to use a pipelined table function in Oracle Business Intelligence Cloud Service (BICS).


Using a pipelined table function makes it possible to display data in BICS without having to load the data to a physical table.


Two possible use cases for pipelined table functions in BICS are:

1)    Drill-to-detail scenarios where data is only required momentarily and temporarily.

2)    Situations where corporate security may restrict sensitive data being physically saved to the cloud.


Pipelined table functions are best suited to small data-sets. Latency issues may occur on large data volumes.


The code snippets provided use Oracle Social Data from the Oracle Social Data and Insight Cloud Service as the data source. Data is retrieved using the Social Data and Insight REST APIs. That said, any data source accessible through either SOAP or REST web services may be referenced in the pipelined table function.


Since the current version of BICS does not support opaque views, it is not possible to reference variables in the model view. This means any parameters that need to be passed to the pipelined table function must be table driven. Therefore, in order to pass the selected value of a prompt to the pipelined table function, it must first be saved to a physical table. Unfortunately, this adds extra complexity and overhead to the solution when using prompts.


The article covers the four steps required to create, configure, and execute pipelined table functions from BICS:


1)    Create Pipelined Table Function

2)    Display Pipelined Data in BICS

3)    Pass Dashboard Prompt

4)    Consume Dashboard


Due to the sensitive nature of the Social Data and Insight Cloud Service data, limited screenshots of the output are available.

In the code snippet examples, a DUNS number is used in the Dashboard Prompt to retrieve the Social Data. The Data Universal Numbering System (DUNS) number is assigned and maintained by Dun & Bradstreet (D&B). A DUNS number is a unique nine-character number used to identify each physical location of a business. Additionally, the US federal government uses this number to track how federal money is allocated.


Main Article

 

Step 1 – Create Pipelined Table Function

 

Step 1 outlines how to create the SQL artifacts needed for the pipelined table function to run in Oracle Application Express SQL Workshop.


Five artifacts are created:


Object Type: TYPE DUNS_OT

Table Type: DUNS_TYPE

Table: SELECTED_DUNS

Function: FUNC_DUNS_PIPELINE

View: DUNS_VIEW


Steps:


From Oracle Application Express -> SQL Workshop -> SQL Commands


For a text file containing ALL the code snippets click here.

 

a)    Create an Object Type – specifying all columns to return with relevant data types.

CREATE TYPE DUNS_OT AS OBJECT
(
FIRST_NAME VARCHAR(500),
LAST_NAME VARCHAR(500),
TITLE VARCHAR(500)
);

b)    Create Table Type DUNS_TYPE based on DUNS_OT.

CREATE TYPE DUNS_TYPE AS TABLE OF DUNS_OT;

c)    Create table to store values selected from Dashboard Prompts. In this case the Prompt is a DUNS Number.

CREATE TABLE SELECTED_DUNS(
COMPANY_DUNS_NUMBER VARCHAR2(9)
);

d)    Insert a sample DUNS Number into the table (to test with).

INSERT INTO SELECTED_DUNS(COMPANY_DUNS_NUMBER)
VALUES('123456789');

e)    Create Pipelined Table Function.

Code Snippet Breakdown:

Gold: Retrieves Social Data via Rest API. For more info see: Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS).

Green: Pipelines the results from the Rest API and parses them into DUNS_OT [created in step a]. It may help to compare the syntax used in this snippet to that used in Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS) which inserts into a physical table vs. the virtual select shown here.

Aqua: Defines RETURN type DUNS_TYPE [created in step b].

Purple: Selects DUNS Number from table driven prompt [created in step c and populated in step d].

create or replace FUNCTION FUNC_DUNS_PIPELINE
RETURN DUNS_TYPE PIPELINED AS
l_ws_response_clob CLOB;
l_num_contacts NUMBER;
l_selected_duns VARCHAR2(9);
l_pad_duns VARCHAR2(9);
l_body CLOB;
BEGIN
SELECT MAX(COMPANY_DUNS_NUMBER) INTO l_selected_duns FROM SELECTED_DUNS;
l_pad_duns := LPAD(l_selected_duns,9,'0');
l_body := '{"objectType":"People","limit":"10","filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.management_level","value":"0"},{"name":"person.department","value":"3"}],"returnFields":["person.first_name","person.last_name","person.title"]}';
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
apex_web_service.g_request_headers(2).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(2).value := 'TenantName';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://SocialDataURL/data/api/v2/search',
p_username => 'User',
p_password => 'Pwd',
p_body => l_body,
p_http_method => 'POST'
);
--parse the clob as JSON
apex_json.parse(l_ws_response_clob);
--get total hits
l_num_contacts := CAST(apex_json.get_varchar2(p_path => 'totalHits') AS NUMBER);
--loop through total hits and pipe the JSON data back as rows
IF l_num_contacts > 0 THEN
for i in 1..l_num_contacts LOOP
PIPE ROW ( DUNS_OT (apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[1].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[2].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[3].value')
));
end loop;
END IF;
RETURN;
END;

f)    Create the DUNS_VIEW view and test the function from SQL Workshop.

CREATE VIEW DUNS_VIEW AS
SELECT * FROM TABLE(FUNC_DUNS_PIPELINE);

SELECT * FROM DUNS_VIEW;

 

Step 2 – Display Pipelined Data in BICS


Step 2 outlines how to display the data retrieved from the pipelined table function in BICS.


This is a two part process:


1)   Reference the pipelined table function in a data modeler view.

2)   Build an Analysis based on the view.


Steps:


a)    Lock to Edit the Model.

b)    Click on the cog / wheel next to “Database” then “Create View” to create a new View.

Untitled

c)    Click on SQL Query

Use the example below as a starting point for the syntax needed to call the pipelined table function from the view.

Confirm that your data is returned in the data tab.

SELECT
FIRST_NAME,
LAST_NAME,
TITLE
FROM
(
SELECT
*
FROM TABLE(FUNC_DUNS_PIPELINE)
)

Snap4

d)    Create an Analysis based on the custom view. Confirm that the Analysis returns results.

Snap5

e)    Place the Analysis on a Dashboard. Confirm results are viable.


Step 3 – Pass Dashboard Prompt


Step 3 outlines how to use a dashboard prompt on a dashboard to pass a user selected value from BICS to the pipelined table function in the database.

Currently BICS does not support opaque views; therefore, it is not possible to pass variables to the pipelined table function. At this stage the best approach is to use a physical table to store the prompt selection, and drive the pipelined table function from that table.

Remember the pipelined table function has no parameters. It is driven from the values contained in the SELECTED_DUNS table.

i.e. SELECT MAX(COMPANY_DUNS_NUMBER) INTO l_selected_duns FROM SELECTED_DUNS

The diagram below shows the steps required to save the selected dashboard prompt to the SELECTED_DUNS table. (Detailed steps follow.)

BICS_Prompts

Steps:


a)    Create procedure SP_LOAD_SELECTED_DUNS

create or replace PROCEDURE SP_LOAD_SELECTED_DUNS(p_selected_duns VARCHAR2)
IS
BEGIN
DELETE FROM SELECTED_DUNS;
INSERT INTO SELECTED_DUNS(COMPANY_DUNS_NUMBER)
VALUES(p_selected_duns);
END;

b)    Create function FUNC_EXEC_SELECTED_DUNS

create or replace FUNCTION FUNC_EXEC_SELECTED_DUNS
(p_selected_duns VARCHAR2) RETURN INTEGER
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
SP_LOAD_SELECTED_DUNS(p_selected_duns);
COMMIT;
RETURN 1;
END;

c)    Test Function

SELECT FUNC_EXEC_SELECTED_DUNS('123456789') from DUAL;

SELECT * FROM SELECTED_DUNS;

d)    Create a dummy table that will be used in the trigger analysis. Insert any descriptive text into the table.

This should be a one-row-only table, as the trigger will run for every record in the table.

CREATE TABLE DUMMY_REFRESH(REFRESH_TEXT VARCHAR2(100));

INSERT INTO DUMMY_REFRESH(REFRESH_TEXT)
VALUES ('Hit Return once complete');

e)    In the Modeler create session variable r_selected_duns.

Snap7

f)     Add the DUMMY_REFRESH table in the Modeler as a Dimension table and join it to a fact table.

Add a Column Expression called UPDATE_DUNS.

This Expression calls the FUNC_EXEC_SELECTED_DUNS and passes the r_selected_duns request variable to it.

Snap8

EVALUATE('FUNC_EXEC_SELECTED_DUNS(%1)',VALUEOF(NQ_SESSION."r_selected_duns"))

Snap9

g)   Create an Analysis based on the DUMMY_REFRESH table, containing the REFRESH_TEXT field and UPDATE_DUNS.

Snap10

h)    Add an Action Link to trigger the Analysis from the Dashboard (through Navigate to BI Content).

Set Action Options. Run Confirmation may be useful.

Snap11

Snap12 Snap13

i)     Add a Prompt to the Dashboard for DUNS Number.

Either manually add choice list DUNS Numbers or drive your Prompt from a table.

Set a Request Variable on the Prompt. This will be used to pass the selected value to the pipelined table function.

Because BICS does not support VALUELISTOF, "Enable user to select multiple values" has been unchecked.

A workaround for passing multiple values to session variables has been previously discussed in Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS).

Snap6

Step 4 – Consume Dashboard


Step 4 outlines how to use an action link to trigger a data refresh that reflects the selection made on the dashboard prompt.


Steps: 


The final dashboard will contain three components: a Dashboard Prompt, the Action Link, and an Analysis Request to display the results.

Snap14
Snap15

In order to prevent accidental data refreshes, the Dashboard Consumer must confirm the action.

To refresh results, it may be necessary to clear the BI Server cache (through the Modeler) or hit Refresh on the Dashboard (to clear the Presentation cache).

Snap16

 

 

Further Reading


Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the SRM Developer Platform API Guide.

Click here for more A-Team BICS Blogs.

Summary


This article described how to create a pipelined table function in BICS. Additionally, it described how to display the results and pass a dashboard prompt to the pipelined function.

Pipelined table functions may be useful in situations where it is not feasible or desirable to physically save data to the BICS database.

The data-source example provided was for Social Data and Insight Cloud Service data. However, this article can easily be adapted to any other data source accessible via REST or SOAP web service APIs.

How to find purgeable instances in SOA/BPM 12c


If you are familiar with SOA/BPM 11g purging, you will find after upgrading to or implementing SOA/BPM 12c that most of the 11g SQL for determining purgeable instances no longer works. This is because SOA/BPM 12c no longer uses the composite_instance table for composite instance tracking.

In SOA/BPM 12c, a common component is used to track the state associated with a business flow and to report audit information. This design reduces the instance tracking data generated and stored in the database, and improves purge performance by minimizing the number of tables that need to be accessed. Component instance state is no longer stored in individual tables for instance tracking purposes; instead, the overall flow state is stored in the SCA_FLOW_INSTANCE table.

In the SCA_FLOW_INSTANCE table, the "active_component_instances" column keeps track of how many component instances are still in a running/active state. These are the instances in one of the following states:

  • RUNNING
  • SUSPENDED
  • MIGRATING
  • WAITING_ON_HUMAN_INTERVENTION

When the "active_component_instances" value reaches 0, the Flow is no longer executing. There is another column called "recoverable_faults", which keeps track of how many faults can be recovered. This information, together with "active_component_instances", is used to determine whether the Flow can be purged or not.

The SCA_FLOW_ASSOC table records the association between the original Flow that creates the BPEL component instance and the correlated Flow. It is used by the purge logic to ensure that all correlated Flows are purged together when none of the Flows is in an active state.

Another important thing to note: if you create a SOAINFRA schema with the LARGE database profile, all transactional tables are created range-partitioned. If you run SOA purging with the purge script, either manually by running the stored procedure or by using the auto-purge function that can be configured in Oracle Enterprise Manager Fusion Middleware Control, you will need to set purge_partitioned_component => true (the default is false); otherwise the purge logic skips all partitioned tables when the purge script runs and no flow instances are purged. You can find all the partitioned tables in your SOAINFRA schema with the following SQL:

select table_name from user_tables where partitioned = 'YES';
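
When you are ready to actually run the purge, the snippet below is a minimal, hedged sketch of calling the purge procedure with the partitioned-table option enabled. It assumes the 12c purge scripts have installed the soa.delete_instances procedure in your SOAINFRA schema and that a Python cx_Oracle client is available; the connection details are placeholders, and the parameter list should be verified against the purge scripts shipped with your release.

# Sketch only: run the 12c purge with purge_partitioned_component enabled.
# Connection details are placeholders; verify soa.delete_instances and its
# parameter names against the purge scripts installed in your SOAINFRA schema.
import cx_Oracle

conn = cx_Oracle.connect("DEV_SOAINFRA", "password", "dbhost:1521/soadb")
cur = conn.cursor()
cur.execute("""
    begin
      soa.delete_instances(
        min_creation_date           => to_timestamp('2015-12-01','YYYY-MM-DD'),
        max_creation_date           => to_timestamp('2015-12-27','YYYY-MM-DD'),
        batch_size                  => 100000,
        purge_partitioned_component => true);  -- do not skip partitioned tables
    end;""")
conn.commit()
conn.close()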

You can use the following sample PL/SQL to determine whether the SCA_FLOW_INSTANCE has been partitioned and the number of purgeable flow instances in your SOAINFRA schema.

set serveroutput on;
DECLARE
  MAX_CREATION_DATE TIMESTAMP;
  MIN_CREATION_DATE TIMESTAMP;
  batch_size        INTEGER;
  retention_period  TIMESTAMP;
  purgeable_instance INTEGER;
  table_partitioned INTEGER;
BEGIN
  MAX_CREATION_DATE := to_timestamp('2015-12-27','YYYY-MM-DD');
  MIN_CREATION_DATE := to_timestamp('2015-12-01','YYYY-MM-DD');
  retention_period  := to_timestamp('2015-12-27','YYYY-MM-DD');
  batch_size        := 100000;
 
  if retention_period < max_creation_date then
    retention_period := max_creation_date;  
  end if;
 
  select count(table_name) into table_partitioned from user_tables where partitioned = 'YES' and table_name='SCA_FLOW_INSTANCE';
 
  if table_partitioned > 0 then
   DBMS_OUTPUT.PUT_LINE ('SCA_FLOW_INSTANCE is partitioned ');
  else
   DBMS_OUTPUT.PUT_LINE ('SCA_FLOW_INSTANCE is not partitioned ');
  end if;
 
  SELECT Count(s.flow_id) into purgeable_instance
  FROM sca_flow_instance s
  WHERE s.created_time            >= MIN_CREATION_DATE
  AND s.created_time              <= MAX_CREATION_DATE
  AND s.updated_time              <= retention_period
  AND s.active_component_instances = 0
  AND s.flow_id NOT IN  (SELECT r.flow_id FROM temp_prune_running_insts r)
  AND s.flow_id IN
    (SELECT c.flow_id FROM sca_flow_to_cpst c, sca_entity e, sca_partition p WHERE c.composite_sca_entity_id = e.id)
  AND rownum <= batch_size;
   DBMS_OUTPUT.PUT_LINE ('Total purgeable flow instance: ' ||  purgeable_instance);
END;
/


Tuning Asynchronous Web Services in Fusion Applications Part II


Introduction

In a series of earlier blogs we covered in detail several aspects of asynchronous web services in Fusion Applications, including the general implementation, how to monitor them, and tuning considerations and options. This article discusses an additional tuning option for how Fusion Applications consumes waiting messages from the underlying queues.

Main Article

In a recent engagement tuning a high-volume environment, we discovered a situation where a large Fusion Applications cluster can lead to contention on the database when a high number of application threads concurrently attempt to dequeue messages from the Advanced Queues (AQ) used for the asynchronous web service implementation. We were lucky to have the database experts from the Real World Performance group with us supporting the database tuning.

A detailed analysis revealed that the default MDB implementation (detailed discussion) leverages the AQjmsConsumer.receive operation, which can result in database-side contention if the number of MDB threads, and hence the corresponding database sessions, is increased in order to achieve higher parallelism on the application tier. AQjmsConsumer.receive delegates the active polling for new messages to the database tier: the database session repeatedly tries to lock a message and deliver it back to the waiting application tier thread. With a significant number of such database sessions competing for messages, a high portion of these dequeue attempts cannot succeed, as only a single session can lock a given record. This logic can cause noticeable overhead on the database tier, observed through a high number of 'select for update skip locked' invocations in the usual database performance reports, while only a small portion of the invocations actually deliver a record.

To address this situation, there is a new patch available for the WebLogic MDB implementation which allows leveraging the non-blocking operation AQjmsConsumer.receiveNoWait instead of AQjmsConsumer.receive. With that, control is passed back from the database to the application tier immediately if the process cannot get hold of a message (i.e. no looping happens on the database tier), and the application thread is put to sleep for a defined duration before attempting the next dequeue. Overall this results in a significant reduction of database contention and unsuccessful row locking attempts without compromising overall throughput.

To activate the non-blocking dequeue logic, the corresponding patch 21561271, delivered through P4FA patch bundles, must be available on the system. Secondly, the new logic is turned on through a new Java system property by setting weblogic.ejb.container.AQMDBReceiveNoWait=true. By default this switch is set to false for backward compatibility reasons, i.e. it needs to be explicitly turned on to use the non-blocking implementation.

Finally, in this analysis there was a parallel effort to optimise the database-side implementation as well, resulting in patch 16739157, which is recommended if a high number of row lock attempts is observed on Advanced Queues.

Conclusion

This article presented an additional tuning option for decreasing database side contention for deployments with a larger number of application tier processes dequeuing from the same Advanced Queue.


SOA Cloud Service – Quick and Simple Setup of an SSH Tunnel for On-Premises Database Connectivity


Executive Overview

With the current release of SOA Cloud Service (SOACS), a commonly requested requirement is to connect to an on-premises database from the cloud SOACS instance. This article outlines a quick and simple method to establish connectivity between a single-node SOACS instance and the on-premises database server using an SSH tunnel.

Solution Approach

Overview

The overall solution is described in the diagram shown below.

sshDBsimple (1)

An SSH tunnel is established between the SOACS instance running in the Oracle Public Cloud (OPC) and the on-premises database server residing inside the corporate firewall. The tunnel is defined with the TNS listener port of the database server as the local port and, as the remote port, any suitable port on the SOACS instance that is available and can be used for port forwarding. After the tunnel is established, all SOACS instance traffic directed to the newly opened port is automatically forwarded to the database TNS listener. Thus, JDBC connections to that port within the SOACS instance automatically connect to the on-premises database server.

The following sections walk through the detailed setup and show a composite deployed to the SOACS instance invoking a database adapter that retrieves data from a table within the on-premises database server.

Summary of Steps

  • Copy the private key for SOACS instance into the on-premises database server.
  • Establish a SSH tunnel session from the DB server to the SOACS VM’s public IP. The tunnel will specify a local port and a remote port. The local port will be the TNS listener port of the database and the remote port can be any port that is available within the SOACS instance.
  • Define a JDBC datasource using the onPrem database parameters with the exception of replacing the TNS listener port with the remote port defined for SSH tunnel in the last step.
  • Test the JDBC datasource connectivity.
  • Define a new JNDI entry within Database Adapter that uses the newly created JDBC datasource.
  • Use the JNDI in a SOA composite containing a Database Adapter to run a simple query from a database table.
  • Deploy and test the new composite for runtime verification of the database query operation.

Task and Activity Details

The following sections will walk through the details of individual steps. The environment consists of the following 2 machines:

  • SOACS instance with a single managed server and all the dependent cloud services within OPC.
  • Linux machine inside the corporate firewall, used for hosting the On-Premises Database (myOnPremDBServer)

I. Copy the private key of SOACS instance to the database server

When a SOACS instance is created, a public key file is uploaded for establishing SSH sessions. The corresponding private key has to be copied to the database server. The private key can then be used to start the SSH tunnel from the database server to the SOACS instance.

Alternatively, a private/public key can be generated in the database server and the public key can be copied into the authorized_keys file of the SOACS instance. In the example here, the private key for the SOACS instance has been copied to the database server. A transcript of a typical session is shown below.

slahiri@slahiri-lnx:~/stage/cloud$ ls -l shubsoa_key*
-rw------- 1 slahiri slahiri 1679 Dec 29 18:05 shubsoa_key
-rw-r--r-- 1 slahiri slahiri  397 Dec 29 18:05 shubsoa_key.pub
slahiri@slahiri-lnx:~/stage/cloud$ scp shubsoa_key myOnPremDBServer:/home/slahiri/.ssh
slahiri@myOnPremDBServer's password:
shubsoa_key                                                                                100% 1679        1.6KB/s     00:00
slahiri@slahiri-lnx:~/stage/cloud$

On the database server, login and confirm that the private key for SOACS instance has been copied in the $HOME/.ssh directory

[slahiri@myOnPremDBServer ~/.ssh]$ pwd
/home/slahiri/.ssh
[slahiri@myOnPremDBServer ~/.ssh]$ ls -l shubsoa_key
-rw-------+ 1 slahiri g900 1679 Jan  9 06:39 shubsoa_key
[slahiri@myOnPremDBServer ~/.ssh]$

II. Create an SSH Tunnel from the On-Premises Database Server to the SOACS instance VM’s public IP

Using the private key from Step I, start an SSH session from the on-premises Database server host, specifying the local and remote ports. As mentioned earlier, the local port is the TNS listener port for the database, e.g. 1521. The remote port is any suitable port that is available in the SOACS instance. The syntax of the ssh command used is shown here.

ssh -R <remote-port>:<host>:<local port> -i <private keyfile> opc@<SOACS VM IP>

The session transcript is shown below.

[slahiri@myOnPremDBServer ~/.ssh]$ ssh -R 1621:localhost:1521 -i ./shubsoa_key opc@shubsoa
[opc@shubsoacs-jcs-wls-1 ~]$ netstat -an | grep 1621
tcp        0      0 127.0.0.1:1621              0.0.0.0:*                   LISTEN
tcp        0      0 ::1:1621                         :::*                            LISTEN
[opc@shubsoacs-jcs-wls-1 ~]$

After establishing the SSH tunnel, the netstat utility can confirm that the remote port 1621 is now in listening mode within the SOACS VM. This remote port (1621) on localhost, along with the other on-premises database parameters, can now be used to define a datasource in the WebLogic Server (WLS) Administration Console.

III. Define a JDBC DataSource for the On-Premises Database

The remote port from Step II is used as the port in the JDBC datasource definition. Note that the host specified is localhost instead of the actual IP address of the database server, since port forwarding with the SSH tunnel is enabled locally within the SOACS VM as set up in Step II.

We will define a datasource corresponding to the standard SCOTT schema of the sample database. From here onwards, the process to define a datasource is straightforward and follows the standard methodology. So, only the primary field values that are used in the datasource definition process are provided below.

  • Name: OnPremDataSource
  • JNDI Name: jdbc/onPremDataSource
  • Driver: Thin XA for instance connections (default)
  • Database Name: orcl
  • Host Name: localhost
  • Port: 1621
  • Database User: scott
  • Database Password: Enter the password as set in the sample database
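For reference, the resulting thin-driver connection URL points at the tunnel endpoint rather than at the on-premises host directly. A sketch, assuming orcl is the database SID (use the service-name form jdbc:oracle:thin:@//host:port/service if your database registers a service name instead):

jdbc:oracle:thin:@localhost:1621:orcl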

IV. Test the JDBC DataSource Connectivity

Near the end of the JDBC datasource creation wizard, the Test Configuration button can confirm whether the datasource has been created successfully.

After the completion of the JDBC datasource creation process, another quick check of netstat for port 1621 will show two additional entries indicating an established session corresponding to the newly created datasource. The connections are terminated if the datasource is shut down from the WLS console.

[opc@shubsoacs-jcs-wls-1 ~]$ netstat -an | grep 1621
tcp        0      0 127.0.0.1:1621              0.0.0.0:*                   LISTEN
tcp       0      0 127.0.0.1:15148            127.0.0.1:1621               ESTABLISHED                 
tcp       0      0 127.0.0.1:1621              127.0.0.1:15148             ESTABLISHED                 
tcp        0      0 ::1:1621                             :::*                        LISTEN
[opc@shubsoacs-jcs-wls-1 ~]$

 

V. Define a new JNDI entry within Database Adapter that uses the newly created JDBC datasource

From WLS console, under Deployments, update DbAdapter by creating a new JNDI entry (e.g. eis/DB/DemoDB) and associate it with the JDBC datasource created in Step IV. Since this is a standard configuration task for Database Adapter using a new JNDI entry, the details are not included here.

Instead, only the key field values used in the DbAdapter configuration are provided below.

  • New JNDI under Outbound Connection Pools: eis/DB/DemoDB
  • XADataSourceName: jdbc/onPremDataSource

 

VI. Use the newly created JNDI to develop a SOA composite containing a Database Adapter to run a simple database table query

The JNDI entry created in Step V is used in the Database Adapter wizard within JDeveloper to run a query against the DEPT table within the SCOTT schema of the sample database. The composite SAR used for deployment to SOACS (sca_DBQuery_rev1.0.jar) is available for download here.

VII. Deploy and test the composite

After deployment, the composite can be tested by entering a dummy string as input. Rows from the DEPT table should be visible in the response of the database adapter invoke.

 

Summary

The test case described here is a quick way to establish connectivity to an on-premises database with SSH tunnels. It is typically suitable for development and functional testing purposes, and there are limitations to this approach. An improved solution with better stability and applicability to clusters will be covered in another blog within this space. So stay tuned, and for further details please contact SOACS Product Management or the SOACS group within the A-Team.

Acknowledgements

SOACS Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to the customers without their valuable contribution. Finally, a big thanks to my team-mate, Christian Weeks, whose technical help always works wonders.

Integrating Oracle Document Cloud and Oracle Sales Cloud, maintaining data level business object security


Introduction

When customers see the rich functionality available in Oracle Documents Cloud, they often ask if they can use it within their Oracle Fusion SaaS Applications to store and share documents. At first the integration appears to be quite straightforward: use the Documents Cloud Web API, embed an iFrame that points to the relevant opportunity's folder in Oracle Documents Cloud, and all should be good. However, the implementer will soon realise that this approach on its own does not respect the Oracle Fusion Applications data security model (i.e. who is authorized to see the documents) and therefore is not production worthy.

The remainder of this blog article describes the challenges faced when integrating Oracle Documents Cloud with Oracle Sales Cloud and proposes a design pattern which solves many of the issues encountered, whilst ensuring the pattern is flexible enough to be used by multiple SaaS applications.

Once Oracle Documents Cloud is embedded within Oracle Sales Cloud, the user will have many additional features available to them, such as:

  • Hierarchical folders
  • In Place viewing of documents of different types (Word, Excel, Powerpoint etc)
  • Drag and Drop interface for adding new documents
  • And more

For more information on Oracle Documents Cloud features please see Oracle Documents Cloud webpage

What are the challenges embedding Oracle Documents Cloud with Oracle Sales Cloud?

There are a number of considerations which need to be taken into account when embedding Oracle Documents cloud into Oracle Sales Cloud, namely:

Screenshot of Documents Cloud embedded within Sales Cloud

  • How do you embed the Oracle Documents Cloud user interface in Oracle Sales Cloud?
  • Security: how are you going to ensure documents are secured based on SaaS data security policies? These normally do not map to PaaS security roles
  • How do you ensure the integration is easily maintainable?

It is possible that you will have multiple Oracle SaaS applications installed (Sales, HCM, ERP, Service etc.) and ideally you would want to use the same integration pattern for all of your SaaS integrations. This, however, raises its own set of challenges, given that different SaaS products offer different extensibility tools (Application Composer, .NET, PHP etc.) and differing levels of extensibility. Therefore, to ensure the highest level of reuse, you will need to use the lowest common denominator extensibility option and move common logic to a common location, such as a middle tier.

Architecture


High Level Architecture

Oracle Java Cloud – SaaS Extensions (JCS-SX) is a specific variant of the Oracle Java Cloud service designed specifically to integrate with Oracle SaaS Applications. JCS-SX's main differentiators are identity association with Fusion Applications and low cost due to its multi-tenanted architecture. For more information please see the Oracle Java Cloud – SaaS Extensions (JCS-SX) website.

The implementation of this pattern uses the Oracle Java Cloud – SaaS Extensions (JCS-SX) service to host some java code, in the form of a JEE Servlet. The functionality of this servlet includes :

  • Ensuring only Fusion Applications Authenticated Users can access a document folder
  • If an object doesn't already have a folder, the servlet creates the folder in Oracle Documents Cloud and then stores the folder GUID back in Oracle Sales Cloud. This is done to make subsequent calls more efficient.
  • Only folders that the user can "see" from the SaaS application can be accessed, thus ensuring the solution respects SaaS visibility rules
  • Return a generated html page which displays the Oracle Documents Cloud folder for the Sales Cloud object requested

Historically, a commonly used pattern involved Oracle Sales Cloud groovy code calling a SOAP façade hosted on JCS-SX, which in turn communicated with Oracle Documents Cloud using its REST API to create folders. This pattern is not used here for a number of reasons:

  • This would require additional customizations/groovy code on the SaaS side to call our SOAP-REST façade, and one of the goals of this integration pattern is to reduce the amount of SaaS-side customization as much as possible so that it can support as many SaaS products as possible with a single code base.
  • This would make the JCS-SX code more complex than it really needs to be, we would need multiple entry points (createFolder, ShareFolder, generate an AppLink etc)
  • Finally although Oracle Sales Cloud has great outbound SOAP service support it is often best practice to encapsulate multiple, potentially complex, SOAP calls in a middle tier layer, in our case a servlet hosted on JCS-SX.

An added benefit is that, given that the bulk of the business logic is in our Java layer, our integration can easily be used for other SaaS applications, like HCM, ERP, Service Cloud, and even non-Oracle SaaS products. These offer different levels of customization and different languages, but fundamentally they all support the concept of embedding an iFrame and passing parameters.

The Integration Flow

As the saying goes, "A picture is worth a thousand words", so at this point let's look at the integration flow implemented by the pattern. We will do this by means of a flow diagram, going through each step to explain the functionality, highlight the design considerations the developer should be aware of, and offer some limited code samples.


SalesCloud to DocCloud Integration Flow

Step 1 is the action of calling the integration from within an Oracle Sales Cloud tab. Here we are embedding the integration servlet within Sales Cloud using the Application Composer framework and some groovy scripting. The groovy script below calls the JCS-SX servlet, passing data (the object type, the object number, a JWT security token and, optionally, an Oracle Documents Cloud folder GUID stored in a custom field), and the response of the Java servlet is an HTML page which redirects to the specific Oracle Documents Cloud folder.

def jwt = new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken();
def docCloudIntegrationURL = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("DocSalesCloudIntegration");
def url = docCloudIntegrationURL+"?objectnumber="+OptyNumber+"&objecttype=OPPORTUNITY&jwt="+jwt+"&folderGUID="+folderGUID;
return url;

TIP : In the above example we have used a feature of Oracle Fusion Applications called "Topology Manager" to store the endpoint of our JCS-SX hosted servlet. This is good practice as it provides a level of indirection: the developer can store the physical hostname/URL of the servlet in one place and reuse it in many places, e.g. a tab for Opportunities, a tab for Accounts, and so on.

For more information please refer to the Oracle Documentation : Working with user Tokens & Topology Manager

For specific steps on how to create a tab in Sales Cloud simplified UI please see this documentation link or this short YouTube video by our fusion Applications Developer Relations group.

Step 2 is now being executed from within our servlet running on JCS-SX. This is where we check that the user who has requested to see the object's documents actually has permission to do so. To accomplish this we issue a REST query back to Oracle Sales Cloud asking if this user (identified by the JWT security token passed) can query this specific object using the object number.

To check the object is accessible we issue the following REST call in our Java code

GET <SalesCloudServerURL>/salesApi/resources/latest/opportunities/123456?fields=Name&onlyData=true
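A minimal sketch of this check inside the servlet (the variable names are hypothetical; it assumes the Sales Cloud REST API accepts the Fusion Applications JWT as a Bearer token, which is how the token received from the groovy script is relayed here):

// Sketch only: uses java.net.URL and java.net.HttpURLConnection, and assumes the
// surrounding method declares "throws IOException".
URL url = new URL(salesCloudServerURL
        + "/salesApi/resources/latest/opportunities/" + objectNumber
        + "?fields=Name&onlyData=true");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
conn.setRequestProperty("Authorization", "Bearer " + jwt);
// HTTP 200 with a single row means the user can see the object; 404 means no access.
boolean userHasAccess = (conn.getResponseCode() == HttpURLConnection.HTTP_OK);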


TIP : For efficiency purposes, when querying data from Oracle Fusion Applications using the REST or SOAP API, ensure that you only return the data you require. For the REST API, simply add the parameter "fields" with a comma separated list of field names. For the SOAP API, you can add <findAttribute> tags with the fields you wish to return.

If the query to Oracle Sales Cloud returns data (i.e. a single row) then we can assume the user has valid access to the object, at that time, and thus can proceed. If we get a “no data found” (i.e. a 404 error) then we simply return a security exception to the user interface as the user has no access. In practice the user should never receive this error as the URL call is generated based on them navigating to a screen with access but for security reasons this check is necessary.

The principal advantage of this approach is that we are using SaaS application functionality to determine whether a record is visible to a user rather than trying to determine it from the Java servlet. We assume that if you can query the object from the SaaS API then you have access. Additionally, this technique will work for any SaaS application regardless of how it "secures" data visibility; e.g. Oracle Sales Cloud uses territory management to determine visibility whereas HCM Cloud uses job roles. For non Fusion Applications we are assuming that the SaaS application's API respects its user interface's visibility rules in the same way as Oracle Fusion Applications.

Step 3 is concerned with checking for, or finding, the folder in Oracle Documents Cloud where the object's documents are stored (e.g. files for an opportunity). This integration pattern stores the object's documents in a hierarchy within Oracle Documents Cloud. The application root directory contains a collection of folders, one for each object type (e.g. Opportunity, Account etc.), and then within that a sub folder (ObjectNumber-ObjectName) for the object we're looking for.

If a folder GUID is passed in as a parameter to the Java servlet, then we simply need to check, using the GUID, that the folder exists in Documents Cloud and then move on to Step 4 of the process. If we are not passed a folder GUID, then we need to perform an in-order traversal of the hierarchy and find the folder, whose name would be <ObjectNumber>-<ObjectName>. This second method is not very efficient, as we could have a scenario with thousands of folders to trawl through. Thankfully it should only occur the first time a folder is accessed; subsequent requests are quicker because we store the folder GUID in Sales Cloud in a custom field.
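As a sketch of the kind of Documents Cloud REST calls involved (the paths follow the same style as the folder-creation call shown later; verify them against the REST API version of your DOCS instance, and the GUID placeholders are illustrative):

GET <DocCloudServerURL>/folders/<folderGUID>  (verify that a known folder GUID still exists)
GET <DocCloudServerURL>/folders/<objectTypeFolderGUID>/items  (list children when searching for the <ObjectNumber>-<ObjectName> folder by name)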

This second approach, however, does have a big advantage: it can be used for SaaS applications where it is not possible to store a custom field in the SaaS application and pass it in the context of the user interface. So although less efficient, it will continue to work and gives us more options. In this scenario I would strongly recommend implementing one of the following strategies to reduce the folder count to only a subset of all documents:

  • A data aging mechanism where objects relating to inactive/old Accounts/Opportunities etc. are archived off
  • Or the use of a database store, like Database Cloud Service, to hold a mapping table of DocCloud GUIDs to Oracle Sales Cloud objects, indexed by the Oracle Sales Cloud ObjectID/Type.

Example object type hierarchy stored within Documents Cloud

Step 4 is only executed if the folder does not exist. The lack of a folder implies that no documents have been stored for this object, and the next step is to create the folder. This would normally happen the first time a user opens the tab in Oracle Sales Cloud. It is also worth noting that the folder gets created in Oracle Documents Cloud when, and only when, the Documents Cloud tab in Oracle Sales Cloud is selected; this way we don't get empty folders in Oracle Documents Cloud.

If you had instead used groovy scripts in Oracle Sales Cloud to create a folder on the creation event of an Oracle Sales Cloud object (e.g. creation of a new opportunity), you would end up with a number of empty folders and require more customizations in Oracle Sales Cloud.

The Documents Cloud REST call for creating a folder is as follows :

POST <DocCloudServerURL>/folders/FF4729683CD68C1CCCCC87DT001111000100000001
{
    name : "1234-Vision-Website",
    description : "Folder for Vision Software Opportunity 1234 Website Deal"
}

The long HEX number is the parent folder GUID, which is either passed in from SalesCloud or discovered by an inorder traversal of the documents cloud folder hierarchy.

Step 5: In Step 3 we walked through a fixed hierarchy to find the folder using a combination of object number and object name (e.g. "1234-Vision-Website"). As mentioned earlier, this approach isn't efficient, so at this stage we store the discovered, or newly generated, folder GUID in Oracle Sales Cloud for use in subsequent requests.

TIP : For more information on how to create custom fields in Sale Cloud checkout this youtube viewlet from Oracle Fusion Application Developer Relations.

Other Oracle Fusion Applications products also allow the storage of custom fields in the SaaS instance via a technology called "FlexFields"; if you're interested, check out this A-Team Chronicles blog article on Descriptive FlexFields.

Step 6 is only reached when we've found the folder in Oracle Documents Cloud and have checked that the user has access to it. Now all we need to do is generate an HTML page which will show the documents. Specifically, we want to show the user the folder with the documents BUT, importantly, not allow them to navigate in or out of the folder. This can be achieved by using a feature of Oracle Documents Cloud called "AppLinks". An AppLink is a generated URL which gives temporary access to an Oracle Documents Cloud folder or item. In this step the Java servlet generates some HTML and JavaScript code which is sent back to the browser iFrame; the iFrame executes it and subsequently redirects to the AppLink URL previously generated for Oracle Documents Cloud.

The REST call which generates a Folder AppLink is shown below. In this example we are using the role of "contributor", as we want to give our users the ability to add and remove files.

POST <serverURL>/folders/FF4729683CD68C1CCCCC87DT001111000100000001
{
   "assignedUser": "Any User",
    "role : "contributor"
}

For more information on AppLinks please see this article on Oracle Documents Cloud and AppLinks on the A-Team Chronicles website.

Step 7 is the final phase of the servlet, which is to return an HTML page back to the iFrame embedded within Oracle Sales Cloud.

Step 8 is the page being rendered in the iFrame in Oracle Sales Cloud; at this point the HTML/JavaScript is executed and redirects the iFrame to the appropriate page in Oracle Documents Cloud.

 

Conclusion

From the above pattern you can see that it is perfectly possible to integrate Oracle Sales Cloud and Oracle Documents Cloud in a manner that is not only functional and efficient but, importantly, maintains the complex security model enjoyed by SaaS applications like Oracle Sales Cloud. This pattern also builds on a number of micro patterns used to create this integration, which are covered in other blog entries.

 

A complete downloadable asset containing the sample code above, ready to be deployed into Oracle JCS-SX, will be made available via the Oracle website in the near future.

 

 

 

FMW 12c Coherence Adapter – Using a non-transactional local cache


Executive Overview

New in Fusion Middleware 12c is the Coherence Adapter. Out of the box, however, there is no facility to work with a local, non-transactional cache.

Solution

Note:-

By default, the CoherenceAdapter (WebLogic Deployment) is not targeted. Follow this document for instructions on how to do this.

Solution detail

The first step is to configure a new entry for the Coherence Adapter’s Connection Factory using a JNDI name of your choice. In this example, it is eis/Coherence/localNonXATTL

Configure it like this:-

(Screenshot: the connection factory properties for eis/Coherence/localNonXATTL, including the CacheConfigLocation property that points to the cache configuration file described below.)

Note that the CacheConfigLocation has to be available to all / any SOA servers using the Adapter (shared storage).

 

Now let’s configure the cache configuration file:-

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>local-AK</cache-name>
<scheme-name>no-trans-ttl</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<local-scheme>
<scheme-name>no-trans-ttl</scheme-name>
<service-name>LocalCache</service-name>
<autostart>true</autostart>
</local-scheme>
</caching-schemes>
</cache-config>

The cache-name can be any name of your choosing. The scheme-name in the cache-mapping section has to be the same value as scheme-name in the local-scheme section. All other values must be as shown here.

The cache-name and the JNDI entry name will be needed when configuring the Coherence Adapter in JDeveloper (JCA Configuration).

So let’s configure the Adapter to carry out a PUT to the cache and ask Coherence to invalidate the object after 15 seconds.

1. In JDeveloper, drag the Coherence Adapter from the Technology pane into the right-hand swim-lane of your Composite

2. Name it what you will

3. Specify the JNDI name you used earlier (e.g. eis/Coherence/localNonXATTL)

4. Select Put operation

5. Enter the cache name that you specified in the cache configuration file (e.g. local-AK)

6. [ You can ask Coherence to generate a key for you in which case the generated key is available as a response value from the Put operation. If you want to specify your own key, you will need to un-check the Auto-generate key option ]

7. Select Custom from the Time To Live drop-down and enter a value of 15000 (ms)

8. You will have defined a data structure (XSD) for the object being cached so select that next

9. Finish

If you inspect the appropriate JCA file, you should see this:-

<adapter-config name="CoherencePut" adapter="coherence" wsdlLocation="../WSDLs/CoherencePut.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
<connection-factory location="eis/Coherence/localNonXATTL"/>
<endpoint-interaction portType="Put_ptt" operation="Put" UICacheItemType="XML">
<interaction-spec className="oracle.tip.adapter.coherence.jca.CoherenceInteractionSpec">
<property name="TimeToLive" value="15000"/>
<property name="KeyType" value="java.lang.String"/>
<property name="CacheOperation" value="put"/>
<property name="CacheName" value="local-AK"/>
</interaction-spec>
</endpoint-interaction>
</adapter-config>

Finally

This mechanism has been tested and shown to work in FMW 12.2.1. However, there is no reason that I know of why this wouldn’t also work in 12.1.3. I’m sure you’ll let me know if it doesn’t

Setting up SSH tunnels for cloud to on-premise with SOA Cloud Service clusters


Executive Overview

With the current release of SOA Cloud Service (SOACS), a common requirement is to connect to an on-premises database from the cloud SOACS instance. SSH tunnels can be used to establish cloud to on-premises communication, allowing SOA Cloud Service to access resources from on-premises applications.

Companion post : Single host SSH tunneling

My colleague Shub Lahiri has written an excellent companion article discussing the simpler configuration where there isn't a cluster of managed servers in the cloud. That setup is easier to put in place and more suited to a development environment, but it cannot work with a cluster in the cloud.
http://www.ateam-oracle.com/soa-cloud-service-quick-and-simple-setup-of-an-ssh-tunnel-for-on-premises-database-connectivity/

Overview

This post expands on the concept of SSH tunneling, using a more advanced setup to allow connection of a SOA Cloud cluster to an on-premises database. In principle this setup could be configured to access any TCP-based service on-premises.

Motivation

Every managed server requires access to the on-premises database (or other resource) for composite flows using that resource to function, since work is almost universally load-balanced between the managed server nodes. That means we either need multiple on-premises SSH connections to the cloud, or the shared solution described here. Multiple connections would require every managed server to have a unique public IPv4 address; since IPv4 addresses are a scarce resource, SOACS does not provision one for every managed server node.

Network topology

For this example, we will be tunneling database traffic, allowing a Database Adapter deployed in the cloud to access an on-premise Oracle Database. The SOA Suite cluster will be running on 2 compute nodes (a 2 node SOA cluster) with the standard SOA CS setup – an LBR node as the front end gateway, and a Database Cloud Service node for SOA Suite persistence.

The diagram shows the basic idea of the network topology. SSH is used from the on-premises database server to connect to the database node of the SOA cluster in the cloud. The specific choice of the database hosts is technically incidental; this approach will bridge any two hosts on-premises and in the cloud, but using the database hosts seems the most natural fit for a tunneled database connection.

The DB host on-premise runs a reverse SSH tunnel to the DB host in the cloud. Traffic for the on-premise database flows (green lines) from the managed servers, via the SSH tunnel to the DB on-premise. The apparent connectivity is to the DB host in the cloud, but in reality SSH is back-hauling the traffic through the tunnel to on-premise.

Setting up

Unlike the single managed server usecase, we need to tweak some components of the cloud setup to allow the shared SSH tunnel to work.

First, we need to clarify some terminology:
  • The SSH server host – the endpoint in the cloud to which ssh connectivity is established. In the diagram above, it is the “DB” node in the cloud.
  • The SSH client – the endpoint on-premises from which ssh connectivity is established. In the diagram above, it is the “OnPremise DB” node.
  • The managed servers – the hosts in the cloud which require access to the SSH tunnel to communicate with on-premises systems. In the diagram above, they are identified as MS1 and MS2.

High level activity summary

At a high level, we need to do the following.

  1. Tweak the ssh configuration to turn on “GatewayPorts”.
  2. Add a security application to the compute cloud console to allow the tunneled SSH connection in the SOA Cloud.
  3. Add a security rule using the new security application, so traffic can pass.
  4. Use a special format of the ssh -R command to open a tunnel that allows LAN level access to the remote port.

Detailed walkthrough

  1. The SSH server host needs a small tweak to its ssh configuration. Be careful with this step: make sure the ssh daemon is running correctly after you edit the file, as losing ssh connectivity to your cloud host would be bad! The file /etc/ssh/sshd_config needs to have “GatewayPorts yes” inserted, and the ssh daemon restarted. This configuration step allows the ssh server daemon to open ports on interfaces other than the localhost/loopback interface when requested by an ssh client. Without this step, nothing else will work, as it would be impossible for the other hosts in the cluster to communicate through the ssh tunnel.
    Setup SSH daemon
    1. ssh opc@<ssh server host> – connect to server as opc user
    2. sudo su – we need to be root to edit the ssh config
    3. vi /etc/ssh/sshd_config – edit the sshd_config file and add the line GatewayPorts yes if it doesn’t already exist.
    4. service sshd restart – restart the ssh daemon
    5. logout of the ssh session (so your connection can pick up the new configuration)
  2. Oracle Cloud uses a whitelist for all intra-node communication (as well as all external communication) – nothing travels between nodes without a whitelist entry. We will need to whitelist the communication between the managed servers and the SSH server host, on the port that we will be forwarding back to on-premise. Shub’s single node SSH discusses ports and I refer you to that for more on choices. Here I will assume we will be forwarding port 1591 to on-premise.
    Setting up the security application

    To create new whitelist entries we will need to use the Oracle Cloud My Services console web application.

    1. Login to your cloud console
    2. Scroll down and locate the “Oracle Compute Cloud Service” entry (it is near the bottom) and open the service console.
    3. This is the Compute Cloud Service console, and we will need to visit the network configuration to add additional information to the network security rules.
      Click Network to view the network settings.
    4. First we need to create a Security Application – a definition of a network protocol (tcp or udp) and a port number.
      Select the Security Applications list from the left.
    5. Click the “Create Security Application” to create an in-browser dialog where we can configure the security application.
    6. For this example, name it “c2g_sshtunnel”, the port type is TCP, and the port range start and end are both 1591.
    7. Our new security application should now show in the list.
  3. Continuing on in the compute cloud console, we need to establish the rule that allows the traffic to flow, based on the security application definition we just created.
    Setting up the security rule
    1. Back on the Security Rules page, we can now create our new Security rule.
    2. Name it C2GSSHDB.
    3. The Security Application is the one we created previously: c2g_sshtunnel.
    4. The Source (where the connections will originate) will be the managed servers. We have a security list already defined for that: it’ll be named <name of soacs install>/wls/ora_ms, select it.
      The Destination (where the connections will terminate within the cloud) is, for the purposes of this document the database, again, we have a defined list: <name of soacs install>/db_1/ora_db, select it.
      Make sure it is enabled, and create the rule.
    5. Our security rule is now visible on the list.
  4. At this point, we have created everything that is necessary to route and tunnel traffic, we just need to actually ssh from on-premise into the cloud, and establish the remote port forwarding. Once that is done, the managed servers in the cloud will be able to see and access the configured resources on-premise.
    Setting up the ssh tunnel
    1. On the on-premises ssh client machine, we simply need to ssh to the server in the cloud designated as the ssh server. We use remote TCP port forwarding, but we add an additional element indicating that we wish the remote forwarded port to be bound on all network interfaces.
      ssh -i <identityfile> -R :1591:<addressofonpremservice>:1521 opc@<cloudserveraddress>

      The vital additional element is the colon : before the rest of the -R content – that indicates to ssh to attempt to bind all network interfaces for the 1591 port at the remote machine. This will only work if the remote server has GatewayPorts enabled, as completed in step 1.

    2. At this point, we should have a functional ssh tunnel exactly as in Shub’s discussion, however it is able to service requests from both managed servers in the cloud, forwarding to the on-premise system.
    3. To configure a database connection in this example, we simply connect to the cloud database with the forwarded port number instead of the standard port number. SSH will tunnel that traffic back through to the on-premise environment, exactly as in Shub’s example.
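As a sketch (assuming the on-premises database uses the SID orcl and that the managed servers reach the cloud DB host by its internal hostname), the datasource connection URL in this clustered setup would look something like:

jdbc:oracle:thin:@<cloud DB host>:1591:orcl

The managed servers believe they are talking to the cloud database host on port 1591, while SSH transparently forwards that traffic to port 1521 on the on-premises listener.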

How to Recover BPM Process Instances from a MDS 00054 Error in Oracle BPM 12.1.3 (Part 1)


Introduction

There is an issue in the Oracle SOA Suite Fabric layer in versions 11.1.x and 12.1.3. The issue is documented in Bug# 20517579: “Composites Lost Completely after redeployment and server restart”. This bug is fixed in version 12.2.1. A few customers have run into this bug. Once it is encountered, the BPM server usually shows the error “MDS 00054: The file to be loaded oramds:/deployed-composites/CompositeName_rev1.0/composite.xml does not exist” during server startup. The composite is no longer visible in EM, and all User Tasks from this composite are not visible in Workspace either. So the composite appears to be lost.

One work-around for this issue is deploying the same composite as a higher version. Alternatively, customers can manually un-deploy the problematic version of the composite using a WLST script and then deploy the same composite again. In either case, customers will lose all running instances of this composite. If this outcome is not desirable, we need to find a way to recover all running instances.

This multi-part blog will present one way to manually recover those instances. To make this process more understandable to readers, a very simple BPM composite is used to take you through the processes of deployment/undeployment, reproduction of the bug# 20517579 and MDS 00054 errors, and finally steps to recover instances. Along the way, we will look at changes in MDS and SOAINFRA tables due to normal life cycles of the sample composite and bug# 20517579.

Even though this bug is fixed in 12.2.1, I think the information documented in this blog will provide a valuable resource for understanding MDS and SOAINFRA schema.

 

What happens in MDS when a composite is deployed?

To be continued

 

 

 

 

Common WSDL Issues in ICS and How to Solve Them


Introduction

When using SOAP Web Services, WSDL documents play a very important role; therefore, it is important to know how to handle them. While working with Oracle ICS (Integration Cloud Service), a good handle on WSDL basics and troubleshooting will be very helpful. The fundamental reason is that most of the built-in SaaS adapters available in Oracle ICS connect with those applications using SOAP. Salesforce.com, Oracle CPQ and Oracle Sales Cloud are good examples, not to mention the generic SOAP adapter. Thus, most of these adapters require as their first setup step a WSDL document that describes the structural elements necessary to perform SOAP-based message exchanges, such as message types, port types and bindings.

Properly parsing a WSDL in ICS is a critical step for three reasons.

1) It describes how ICS will connect with the application, leveraging the bindings and SOAP addresses within it.

2) It allows ICS to discover the business objects and operations, which are eventually used in the mapping phase.

3) For adapters that provide automatic mapping recommendations, it is imperative that the adapter correctly parses all complex types available in the types section of the WSDL document.

Failing to parse the WSDL document of an application pretty much invalidates any further work in ICS. This blog will present common issues found while handling WSDL documents, and what can be done to solve those issues.

Rule Of Thumb: Correctly Inspect the WSDL

Regardless of which issue you are having with WSDL documents, one of the best practices is to always inspect the WSDL content. Most people wrongly assume that if the WSDL is accessible via its URL, then it will be valid. The verification process is basically entering the URL in the browser and checking whether any content is displayed. If content is displayed, it means that the WSDL is accessible and no network restrictions are in place. However, the content shown in the browser can differ significantly from the raw content returned by the server, so what you see is not necessarily what you get.

Tip: From the ICS perspective, the raw content of the WSDL is what the adapters rely on to generate and build the runtime artifacts. Keep this in mind if you are working with any SOAP-based adapter.

This happens because most modern browsers have built-in formatting features that are applied to the content received by the servers. These features present the content in a much better view for end users, such as removing empty lines, coloring the text or breaking down structured contents (such as XML derived documents) into a tree view. For instance, figure 1 shows a WSDL document opened in Google Chrome, where formatting took place while accessing the content.


Figure 1: Formatted WSDL content being shown in Google Chrome.

Do not rely on what the browser displays; this is a huge mistake, since the browser may obfuscate some issues in the WSDL. A better way to inspect the WSDL is to get access to its raw content. This can be accomplished using several techniques; from the browser, you can use the "View Page Source" option, which displays the content in raw format and allows you to copy and paste it into a text editor. Save the content as-is in a file with a .wsdl extension. That file must be your starting point for troubleshooting any WSDL issue.
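Alternatively, fetching the WSDL from the command line avoids browser formatting altogether. A sketch (the URL is illustrative, and you may need proxy options depending on your network):

curl -o service.wsdl "https://myhost.example.com/soap/MyService?wsdl"

The saved service.wsdl file then contains exactly the bytes returned by the server.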

#1 Common Issue: Bad Generated WSDL Documents

Although SOAP-based Web Services are regulated by a W3C specification, which technology to use and how the Web Services are implemented is entirely up to the developer. Thus, there are thousands of ways to implement them, and their WSDLs can also be created using different approaches. A common practice is having the WSDL automatically generated on demand, meaning that the WSDL is created when its URL is invoked. While this is good practice, since it ensures that the WSDL is always up-to-date with the Web Service implementation, it can also introduce issues on the consumer side.

For example, there are cases where the WSDL is generated with empty lines at the beginning of the document. Issues like this generate parsing errors because, according to the W3C specification, nothing can appear before the XML declaration (i.e.: <?xml). If that happens, you have to make sure that those empty lines are removed from the WSDL before using it in the ICS connections page. Figure 2 shows an example of a badly generated WSDL document.


Figure 2: Bad generated WSDL document, with illegal empty lines before the XML declaration.

While being in ICS’s connection page, if you use the WSDL shown in figure 2 and try to hit the “Test” button, ICS will throw an error related to parsing. This pretty much invalidates the connection because in order to be used in integrations, a connection needs to be 100% complete. Figure 3 shows the error thrown by ICS.


Figure 3: Error thrown by ICS after testing the connection.

To solve this issue, make sure that the generated WSDL has no empty lines before the XML declaration. While this is really simple, it can sometimes be hard to accomplish if the people responsible for the Web Service have no control over the WSDL generation. It is not an uncommon scenario that the exposed Web Service is part of a product that cannot be easily changed. If that happens, an alternative is hosting a modified version of the WSDL on an HTTP Web Server and having ICS point to that server instead. As long as the port types don't have their SOAP addresses changed, this will work. The downside of this approach is that it introduces additional overhead, with another layer to implement, patch and monitor.

#2 Common Issue: Non Well formed WSDL Documents

Whether the WSDL is automatically generated or statically defined, it is the responsibility of the service provider to make sure that the WSDL document is well formed. If the WSDL is not well formed, then the ICS parser will not be able to validate the document and an error will be thrown. Just like the first common issue, this invalidates the connection, which then cannot be used when building integrations.

This site provides a set of rules that XML documents must adhere to in order to be considered well formed. It also contains a validator tool that you can leverage to make sure a WSDL document is valid.

#3 Common Issue: Types in Separated Documents

Some Web Services that have their WSDL automatically generated define the types used within the WSDL in a separate document. This means that the WSDL only mentions the types by their names, but the types are defined someplace else. Typically, these types are defined in an XML schema document that the WSDL points to using the import clause. This practice improves the reusability of the element types and allows them to be used in more than one web service definition.

While this practice is great from the service provider point of view, it might cause some issues for the service consumer. If for some reason ICS is not able to completely retrieve the types used in the WSDL, then it will not be able to create the business objects and operations for the integration. This might happen if ICS is not able to access the URL mentioned in the import, due to network connectivity issues such as firewalls, proxies, etc. Figure 4 shows an example of a WSDL that accesses its types using the import clause.


Figure 4: WSDL document using the import clause for the types.
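For reference, a minimal sketch of what such an import typically looks like inside the types section of a WSDL (the namespace and schemaLocation values are purely illustrative):

<wsdl:types>
  <xsd:schema>
    <xsd:import namespace="http://www.example.com/service/types"
                schemaLocation="http://myhost.example.com/soap/MyService?xsd=1"/>
  </xsd:schema>
</wsdl:types>

If ICS cannot resolve the schemaLocation URL, the types cannot be retrieved even though the WSDL itself was downloaded successfully.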

This situation can be tricky to foresee because any error related to this practice will only occur when you start building the integration. The connections page will report that the connection is "complete" because, for the sake of the test performed there, it does not establish any physical connection; it only checks whether the WSDL document is valid. But when you start building your integration, an error might be thrown when the wizard tries to retrieve the business objects for a given operation. If that happens, make sure that any URL used in the import clause is reachable. If the error still persists, you will have no choice but to include all types directly in the WSDL, manually.

Conclusion

Most built-in adapters found in ICS connect natively with SaaS applications using SOAP Web Services technology. Because of this, being familiar with WSDL documents is essential to obtain the maximum benefit. This blog explored a few issues discovered while using WSDL documents in ICS and presented how to solve them.

Automating Data Loads from Taleo Cloud Service to BI Cloud Service (BICS)


Introduction

This article will outline a method for extracting data from Taleo Cloud Service, and automatically loading that data into BI Cloud Service (BICS).  Two tools will be used, the Taleo Connect Client, and the BICS Data Sync Utility.   The Taleo Connect Client will be configured to extract data in CSV format from Taleo, and save that in a local directory.  The Data Sync tool will monitor that local directory, and once the file is available, it will load the data into BICS using an incremental load strategy.  This process can be scheduled to run, or run on-demand.

 

Main Article

This article will be broken into 3 sections.

1. Set-up and configuration of the Taleo Connect Client,

2. Set-up and configuration of the Data Sync Tool,

3. The scheduling and configuration required so that the process can be run automatically and seamlessly.

 

1. Taleo Connect

The Taleo Connect Tool communicates with the Taleo backend via web services and provides an easy to use interface for creating data exports and loads.

Downloading and Installing

Taleo Connect tool can be downloaded from Oracle Software Delivery Cloud.

a. Search for ‘Oracle Taleo Platform Cloud Service – Connect’, and then select the Platform.  The tool is available for Microsoft Windows and Linux.


 

b. Click through the agreements and then select the ‘Download All’ option.

 


c. Extract the 5 zip files to a single directory.

d. Run the ‘Taleo Connect Client Application Installer’


e. If specific Encryption is required, enter that in the Encryption Key Server Configuration screen, or select ‘Next’ to use default encryption.

f. When prompted for the Product Packs directory, select the ‘Taleo Connect Client Application Data Model’ folder that was downloaded and unzipped in the previous step, and then select the path for the application to be installed into.

 

Configuring Taleo Connect

a. Run the Taleo Connect Client.  By default in windows, it is installed in the “C:\Taleo Connect Client” directory.  The first time the tool is run, a connection needs to be defined.  Subsequent times this connection will be used by default.

b. Enter details of the Taleo environment and credentials.  Important – the user must have the ‘Integration Role’ to be able to use the Connect Client.

c. Select the Product and correct version for the Taleo environment.  In this example ‘Recruiting 14A’.

d. Select ‘Ping’ to confirm the connection details are correct.


 

Creating Extracts from Taleo

Exporting data with Taleo Connect tool requires an export definition as well as an export configuration.  These are saved as XML files, and can then be run from a command line to execute the extract.

This article will walk through very specific instructions for this use case.  More details on the Connect Client can be found in this article.

1. Create The Export Definition

a. Under the ‘File’ menu, select ‘New Export Wizard’


b. Select the Product and Model, and then the object that you wish to export.  In this case ‘Department’ is selected.


c. To select the fields to be included in the extract, choose the ‘Projections’ workspace tab, as shown below, and then drag the fields from the Entity Structure into that space.  In this example the whole ‘Department’ tree is dragged into the Projections section, which brings all the fields in automatically.

 


d. There are options to Filter and Sort the data, as well as Advanced Options, which include using sub-queries, grouping, joining, and more advanced filtering.  For more information on these, see the Taleo Product Documentation.  In the case of a large transaction table, it may be worth considering building a filter that only extracts the last X period of data, using the LastModifiedDate field, to limit the size of the file created and processed each time.  In this example, the Dataset is small, so a full extract will be run each time.

 


e. Check the ‘CSV header present’ option.  This adds the column names as the first row of the file, which makes it easier to set up the source in the Data Sync tool.


f. Once complete, save the Export Definition with the disk icon, or under the ‘File’ menu.

 

2. Create The Export Configuration

a. Create the Export Configuration, by selecting ‘File’ and the ‘New Configuration Wizard’.


b. Base the export specification on the Export Definition created in the last step.


c. Select the Default Endpoint, and then ‘Finish’.


d. By default the name of the Response, or output file, is generated using an identifier, with the Identity name – in this case Department – and a timestamp.  While the Data Sync tool can handle this type of file name with a wildcard, in this example the ‘Pre-defined value’ is selected so that the export creates the same file each time – called ‘Department.csv’.


e. Save the Export Configuration.  This needs to be done before the schedule and command line syntax can be generated.

f. To generate the operating system dependent syntax to run the extract from a command line, check the ‘Enable Schedule Monitoring’ on the General tab, then ‘Click here to configure schedule’.

g. Select the operating system, and interval, and then ‘Build Command Line’.

h. The resulting code can be Copied to the clipboard.  Save this.  It will be used in the final section of the article to configure the command line used by the scheduler to run the Taleo extract process.


i.  Manually execute the job by selecting the ‘gear’ icon

 


 

j. Follow the status in the monitoring window to the right hand side of the screen.

In this example, the Department.csv file was created in 26 seconds.  This will be used in the next step with the Data Sync tool.


 

2. Data Sync Tool

The Data Sync Tool can be downloaded from OTN through this link.

For more information on installing and configuring the tool, see this post that I wrote last year.  Use this to configure the Data Sync tool, and to set up the TARGET connection for the BICS environment where the Taleo data will be loaded.

 

Configuring the Taleo Data Load

a. Under “Project” and “File Data”, create a new source file for the ‘Department.csv’ file created by the Taleo Connect tool.


b. Under ‘Import Options’, manually enter the following string for the Timestamp format.

yyyy-MM-dd'T'HH:mm:ssX

This is the format that the Taleo Extract uses, and this needs to be defined within the Data Sync tool so that the CSV file can be parsed correctly.


c. Enter the name of the Target table in BICS.  In this example, a new table called ‘TALEO_DEPARTMENT’ will be created.


d. The Data Sync tool samples the data and makes a determination of the correct file format for each column.  Confirm these are correct and change if necessary.


e. If a new table is being created in BICS as part of this process, it is often a better idea to let the Data Sync tool create that table so it has the permissions it requires to load data and create any necessary indexes.  Under ‘Project’ / ‘Target Tables’ right click on the Target table name, and select ‘Drop/Create/Alter Tables’


f. In the resulting screen, select ‘Create New’ and hit OK.  The Data Sync tool will connect to the BICS Target environment and execute the SQL required to create the TALEO_DEPARTMENT target table


g. If an incremental load strategy is required, select the ‘Update table’ option as shown below


h. Select the unique key on the table – in this case ‘Number’


i. Select the ‘LastModifiedDate’ for the ‘Filters’ section.  Data Sync will use this to identify which records have changed since the last load.


In this example, the Data Sync tool suggests a new Index on the target table in BICS.  Click ‘OK’ to let it generate that on the Target BICS database.


 

Create Data Sync Job

Under ‘Jobs’, select ‘New’ and name the job.  Make a note of the Job name, as this will be used later in the scheduling and automation of this process

 


 

Run Data Sync Job

a. Execute the newly created Job by selecting the ‘Run Job’ button


b. Monitor the progress under the ‘Current Jobs’ tab.


c. Once the job completes, go to the ‘History’ tab, select the job, and then in the bottom section of the screen select the ‘Tasks’ tab to confirm everything ran successfully.  In this case the ‘Status Description’ confirms the job ‘Successfully completed’ and that 1164 rows were loaded into BICS, with 0 Failed Rows.  Investigate any errors and make changes before continuing.


 

3. Configuring and Scheduling Process

As an overview of the process, a ‘.bat’ file will be created and scheduled to run.  This ‘bat’ file will execute the extract from Taleo, with that CSV file being saved to the local file system.  The second step in the ‘.bat’ file will create a ‘touch file’.  The Data Sync Tool will monitor for the ‘touch file’, and once found, will start the load process.  As part of this, the ‘touch file’ will automatically be deleted by the Data Sync tool, so that the process is not started again until a new CSV file from Taleo is generated.

a. In a text editor, create a ‘.bat’ file.  In this case the file is called ‘Taleo_Department.bat’.

b. Use the syntax generated in step ‘2 h’ in the section where the ‘Taleo Export Configuration’ was created.

c. Use the ‘call’ command before this command.  Failure to do this will result in the extract being completed, but the next command in the ‘.bat’ file not being run.

d. Create the ‘touch file’ using an ‘echo’ command.  In this example a file called ‘DS_Department_Trigger.txt’ will be created, as sketched below.
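A minimal sketch of the resulting ‘.bat’ file (the first command is the one generated by the Connect Client in step ‘2 h’ and is shown here only as a placeholder; the touch file path matches the one used later in on_demand_job.xml):

rem Taleo_Department.bat
call <command line generated by the Taleo Connect Client in step 2h>
echo done > C:\Users\oracle\Documents\DS_Department_Trigger.txt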


e. Save the ‘bat’ file.

f. Configure the Data Sync tool to look for the Touch File created in step d, by editing the ‘on_demand_job.xml’, which can be found in the ‘conf-shared’ directory within the Data Sync main directory structure.


g. At the bottom of the file in the ‘OnDemandMonitors’ section, change the ‘pollingIntervalInMinutes’ to be an appropriate value. In this case Data Sync will be set to check for the Touch file every minute.

h. Add a line within the <OnDemandMonitors> section to define the Data Sync job that will be Executed once the Touch file is found, and the name and path of the Touch file to be monitored.


In this example, the syntax looks like this

<TriggerFile job="Taleo_Load" file="C:\Users\oracle\Documents\DS_Department_Trigger.txt"/>

 

The Data Sync tool can be configured to monitor for multiple touch files, each of which would trigger a different job.  A separate line item would be required for each.

i. The final step is to schedule the ‘.bat’ file to run at a suitable interval.  Within Windows, the ‘Task Scheduler’ can be found beneath the ‘Accessories’ / ‘System Tools’ section under the ‘All Programs’ menu.  In Linux, use the ‘crontab’ command.
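As an illustration only (the task name, schedule, and path are assumptions), a daily 5:00 AM run could be registered from the Windows command line with schtasks:

schtasks /create /tn "Taleo_Department_Extract" /tr "C:\Users\oracle\Documents\Taleo_Department.bat" /sc daily /st 05:00

On Linux, an equivalent crontab entry would invoke a shell script that performs the same extract and touch file steps.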

 

Summary

This article walked through the steps for configuring the Taleo Connect Client to download data from Taleo and save it to a location where it is automatically consumed by the Data Sync tool and loaded into BICS.

 

Further Reading

Taleo Product Documentation

Getting Started with Taleo Connect Client

Configuring the Data Sync Tool for BI Cloud Services


How to Recover BPM Process Instances from a MDS 00054 Error in Oracle BPM 12.1.3 (Part 2)


In Part 1, we looked into MDS and SOAINFRA tables to gain an understanding of the data structures related to composite deployment and instance creation. To summarize, the following are the key data pieces created in MDS and SOAINFRA as a result of deploying and redeploying our test composite and creating two composite instances.

MDS:

— SCA label: soa_8ac4edfb-ae15-4e93-a6e2-f62852460064, created for redeployment

— Composite source under default folder

SOAINFRA tables

— Two SCA labels:

— soa_3663225e-43a8-4ded-bd58-7d3a95915c91, created for the first composite deployment, with a value of MIGRATED in MIGRATIONSTATUS column of the BPM_CUBE_PROCESS table

— soa_8ac4edfb-ae15-4e93-a6e2-f62852460064, created for the redeployment, with a value of LATEST in MIGRATIONSTATUS column of the BPM_CUBE_PROCESS table

— Two sets of data in the runtime tables for the two BPM instances created. Data set for instance 1 has the first SCA label and that for instance 2 has the second label.

With this background knowledge, we will try to trigger Bug# 20517579 and thus the MDS 00054 error. We will then check for any changes in MDS and SOAINFRA tables.

Triggering Bug# 20517579 and MDS-00054

Bug# 20517579 exists in the SOAINFRA layer of the Oracle SOA Suite products. It is triggered in 12.1.3 (or an earlier version) by a composite deployment failure. Let’s say the current active version of your composite is 1.0. For some reason (a bug fix, for example), you have to redeploy the same composite with the same version 1.0 (rather than 1.1). If the redeployment fails, Bug# 20517579 causes all MDS artifacts related to version 1.0 to be deleted. As a result, after you restart the server, you will get an MDS 00054 error and the previously active composite version 1.0 disappears from the server along with all of its human tasks.

There are many ways to cause the redeployment failure. In BPM, we have seen this failure caused a few times by incompatible changes made to BPM processes. It is quite easy to introduce this error when you modify a BPM process, so we will make an incompatible change in our test process to trigger the bug.

To do that, we will change the implementation of our user task from Human Task 1 to Human Task 2, as shown below:

UserTaskChangeBefore

UserTaskChangeAfter

Now let’s redeploy the composite. This time the deployment fails with the expected error:

IncompatibleError

Restart the server and you will get the MDS 00054 error:

MDS00054

 

What is in MDS now?

By looking into the MDS data using the MDS Explorer, you will see that the SCA label is still in the deployed-composites.xml file. As a result, the composite is still deployed from the SOAINFRA point of view. When SOAINFRA tries to start up the composite, it needs to access the source files of the composite (composite.xml being one of them). But if you check the folder “default”, you will see that the source folder of the composite has disappeared, as it was mistakenly deleted due to the bug.

mds-data-mds00054

At this point, you won’t be able to see our test composite in EM or its related human tasks in Workspace.

What is in SOAINFRA tables?

If you check the tables mentioned in Part 1, you will see that nothing has changed in any of them. This indicates that even though we can no longer access our running instances in EM and Workspace, they are still intact in the database.

In Part 3 (last part) of this blog, we will look at a way to restore the composite and its existing instances.

 

 

How to Recover BPM Process Instances from a MDS 00054 Error in Oracle BPM 12.1.3 (Part 3)


In Part 2, we produced the MDS-00054 error. In this final part of the blog, we will find a way to fix the error and recover all running instances from previous deployments.

Based on the knowledge from Part 1, we can divide the procedure of fixing our problem into four steps.

— Deploy the composite

— Fix running instances

— Fix human workflow

— Fix instance audit trails

Deploy the composite

Because of the MDS-00054 error, any attempt to redeploy the composite over the same version 1.0 will fail. We cannot undeploy the composite either, whether in EM (we cannot see the composite anyway) or with a WLST script, because doing so would mark all running instances as aborted. The good news is that, as shown in Part 1, we have back-door access to the MDS “deployed-composites.xml” file. If we remove the entry for our test composite from this file, SOAINFRA will treat our composite as if it never existed. Then we can deploy our composite application as a brand new composite with version 1.0.
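For orientation, the entry to be removed typically looks something like the sketch below; the element and attribute names are reproduced from memory and may vary slightly between SOA Suite versions (the scaEntityId may also appear as an attribute here). The label shown is our Bad-Label.

<composite-series name="default/TestBpmProject4" default="default/TestBpmProject4!1.0">
  <composite-revision dn="default/TestBpmProject4!1.0" state="on" mode="active" location="dc/soa_8ac4edfb-ae15-4e93-a6e2-f62852460064">
    <composite dn="default/TestBpmProject4!1.0*soa_8ac4edfb-ae15-4e93-a6e2-f62852460064"/>
  </composite-revision>
</composite-series>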

Here are the steps:

— Change the user task implementation in our test composite back to Human Task 1 in JDeveloper, so that the composite is compatible with the running instances.

— Download deployed-composites.xml file via MDS Explorer.

— Record the SCA label as the bad label and the scaEntityId as the bad SCA entity ID from the deployed-composites.xml file. We will need them later in our steps. In our case:

Bad-Label = soa_8ac4edfb-ae15-4e93-a6e2-f62852460064

Bad-SCA-Entity-ID = 6

— Remove the entry for our composite from the deployed-composites.xml file. In our case, it will become an empty xml file.

— Delete the deployed-composites.xml file in MDS via MDS Explorer.

— Upload the modified deployed-composites.xml file from our local drive to MDS. Your MDS should look like this:

emptyMDS2

— Restart the server

The server should start up without the MDS-00054 error.

— If your BPM project does not use Protected Flex Fields, you can skip this step. Otherwise, you need to remove all rows related to your composite from the WFATTRIBUTELABELUSAGE table.

— Deploy our test composite as version 1.0. This time, the deployment should be successful.

— Check the deployed-composites.xml file again. There should be a new entry with a new SCA label in this file. Record this label as the good label and the new scaEntityId as the good SCA entity ID. They will be used in later steps. In our case:

    Good-Label = soa_be66f4a7-0142-477a-a653-b1c1c4f19bb6

Good-SCA-Entity-ID = 10001

mds-afterDeployment3

— Check in EM and you will be able to see our composite. But searching for instances of this composite yields no result.

— Check in BPM Workspace and you will see that there are no human tasks displayed either.

Now that we have our composite successfully deployed again, let’s see what we can do about our running instances.

Fix running BPM instances

From Part 1, we know that instance data is stored in the SOAINFRA schema while the deployed composite metadata is stored in MDS. The only link between these two stores is the SCA label. In our situation, our instances are associated with an old SCA label (the Bad-Label) and our newly deployed composite has a new SCA label (the Good-Label). So it seems obvious that if we manage to change the Bad-Label in SOAINFRA into the Good-Label, we should be able to link the running instances to our newly deployed composite. The following are the tables that we need to change.

– BPM_CUBE_PROCESS table

This is the only table relevant to our concern that has changed due to our new deployment. It should look like this:

table-bpm_cube_process3

As you can see, there is a new row added to this table with the Good-Label from the latest deployment. It has a MIGRATIONSTATUS of LATEST. There is also a row with LATEST as its MIGRATIONSTATUS but with the Bad-Label. This is the row that is associated with our running instances (probably via the PROCESSID column). We need to replace its Bad-Label with our Good-Label so that it is associated with our latest deployment. To do that, we can execute the following SQL statements:

— delete from BPM_CUBE_PROCESS where SCALABEL = 'Good-Label'

— update bpm_cube_process set SCALABEL = 'Good-Label', compositedn = 'default/TestBpmProject4!1.0*Good-Label' where scalabel = 'Bad-Label'

– Other instance tables

Now we need to fix a few other instance related tables in SOAINFRA.

— update sca_flow_instance set composite_sca_entity_id = Good-SCA-Entity-ID where composite_sca_entity_id = Bad-SCA-Entity-ID

— update SCA_FLOW_TO_CPST set composite_sca_entity_id = Good-SCA-Entity-ID where composite_sca_entity_id = Bad-SCA-Entity-ID

— update cube_instance set composite_label = 'Good-Label' where composite_label = 'Bad-Label'

Now restart the server and you should be able to see your composite and the two instances. To complete our repair, we need to perform the following steps.

Fix Human Workflow and audit trail tables

– Human Workflow table

— update wftask set compositedn = 'default/TestBpmProject4!1.0*Good-Label' where compositedn = 'default/TestBpmProject4!1.0*Bad-Label'

– Audit trail table

— update bpm_audit_query set composite_dn = 'default/TestBpmProject4!1.0*Good-Label' where composite_dn like '%Bad-Label'

– Other table

The MDS_ATTRIBUTES table also contains the Bad-Label. I am not certain at this point how this table is used. Even if we leave the Bad-Label as is, it does not seem to have any impact on our runtime. But for completeness, we will make the following changes as well.

— update mds_attributes set att_value = 'dc/Good-Label' where att_value = 'dc/Bad-Label'

— update mds_attributes set att_value = 'default/TestBpmProject4!1.0*Good-Label' where att_value = 'default/TestBpmProject4!1.0*Bad-Label'
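After applying these updates, a quick sanity check (illustrative only) can confirm that no runtime rows still reference the Bad-Label; both counts should be zero.

select count(*) from cube_instance where composite_label = 'Bad-Label';
select count(*) from wftask where compositedn like '%Bad-Label';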

 

Conclusion

At this point, all running instances should have been restored. They are accessible in EM and in BPM Workspace.

 

EDI Processing with B2B in hybrid SOA Cloud Cluster integrating On-Premises Endpoints


Executive Overview

SOA Cloud Service (SOACS) can be used to support the B2B commerce requirements of many large corporations. This article discusses a common use case of EDI processing with Oracle B2B within SOA Cloud Service in a hybrid cloud architecture. The documents are received from and sent to on-premises endpoints using SFTP channels configured over SSH tunnels.

Solution Approach

Overview

The overall solution is described in the diagram shown here.

B2BCloudFlow

An XML file with PurchaseOrder content is sent to a SOACS instance running in Oracle Public Cloud (OPC) from an on-premise SFTP server.

The XML file is received by an FTP Adapter in a simple composite for hand-off to B2B. The B2B engine within SOACS then generates the actual EDI file and transmits it over an SFTP delivery channel back to an on-premise endpoint.

In reality, the endpoint can be any endpoint inside or outside the corporate firewall. Communication with an external endpoint is trivial and hence left out of the discussion here. Using the technique of SSH tunnels, the objective here is to demonstrate the ease with which any on-premises endpoint can be seamlessly integrated into the SOA Cloud Service hybrid solution architecture.

Our environment involves a SOACS domain on OPC with 2 managed servers. Hence, the communication with an on-premise endpoint is configured using SSH tunnels as described in my team-mate, Christian Weeks’ blog on SSH tunnel for on-premises connectivity in SOA Cloud clusters[1].

If the SOACS domain contains only a single SOACS node, then a simpler approach can also be used to establish the on-premise connectivity via SSH tunneling, as described in my blog on simple SSH tunnel connectivity for on-premises databases from SOA Cloud instance[2].

The following sections walk through the details of setting up the flow for a PurchaseOrder XML document from an on-premise back-end application, like eBusiness Suite to the 850 X12 EDI generated for transmission to an external trading partner.

Summary of Steps

  • Copy the private key of SOACS instance to the on-premise SFTP server
  • Update the whitelist for SOACS compute nodes to allow traffic flow between the SOACS compute nodes and the on-premise endpoints via the intermediate gateway compute node, referred to as CloudGatewayforOnPremTunnel in the rest of this post. This topic has also been extensively discussed in Christian’s blog[1].
  • Establish an SSH tunnel from the on-premise SFTP server (OnPremSFTPServer) to the Cloud Gateway listener host identified within the SOA Cloud Service compute nodes (CloudGatewayforOnPremTunnel). The role of this host in establishing the SSH tunnel for a cluster has been extensively discussed in Christian’s blog[1]. This SSH tunnel, as described, will specify a local port and a remote port. The local port will be the listening port of the SFTP server (default 22) and the remote port can be any port that is available within the SOACS instance (e.g. 2522).
  • Update FTP Adapter’s outbound connection pool configuration to include the new endpoint and redeploy. Since we have a cluster within the SOA Cloud service, the standard JNDI entries for eis/ftp/HAFtpAdapter should be used.
  • Define a new B2B delivery channel for the OnPremise SFTP server using the redirected ports for SFTP transmission.
  • Develop a simple SOA composite to receive the XML  payload via FTP adapter and hand-off to B2B using B2B Adapter.
  • Deploy the B2B agreement and the SOA composite.
  • Test the entire round-trip flow for generation of an 850 X12 EDI from a PurchaseOrder XML file.

sftpTunnel

Task and Activity Details

The following sections will walk through the details of individual steps. The environment consists of the following key machines:

  • SOACS cluster with 2 managed servers and all the dependent cloud services within OPC.
  • A compute node within SOACS instance is identified to be the gateway listener for the SSH tunnel from on-premise hosts (CloudGatewayforOnPremTunnel)
  • Linux machine inside the corporate firewall, used for hosting the On-Premise SFTP Server (myOnPremSFTPServer)

I. Copy the private key of SOACS instance to the on-premise SFTP server

When a SOACS instance is created, a public key file is uploaded for establishing SSH sessions. The corresponding private key has to be copied to the SFTP server. The private key can then be used to start the SSH tunnel from the SFTP server to the SOACS instance.

Alternatively, a private/public key can be generated in the SFTP server and the public key can be copied into the authorized_keys file of the SOACS instance. In the example here, the private key for the SOACS instance has been copied to the SFTP server. A transcript of a typical session is shown below.

slahiri@slahiri-lnx:~/stage/cloud$ ls -l shubsoa_key*
-rw------- 1 slahiri slahiri 1679 Dec 29 18:05 shubsoa_key
-rw-r--r-- 1 slahiri slahiri 397 Dec 29 18:05 shubsoa_key.pub
slahiri@slahiri-lnx:~/stage/cloud$ scp shubsoa_key myOnPremSFTPServer:/home/slahiri/.ssh
slahiri@myOnPremSFTPServer's password:
shubsoa_key                                                                                100% 1679        1.6KB/s     00:00
slahiri@slahiri-lnx:~/stage/cloud$

On the on-premise SFTP server, login and confirm that the private key for SOACS instance has been copied in the $HOME/.ssh directory.

[slahiri@myOnPremSFTPServer ~/.ssh]$ pwd
/home/slahiri/.ssh
[slahiri@myOnPremSFTPServer ~/.ssh]$ ls -l shubsoa_key
-rw-------+ 1 slahiri g900 1679 Jan  9 06:39 shubsoa_key
[slahiri@myOnPremSFTPServer ~/.ssh]$

II. Create whitelist entries to allow communications between different SOACS compute nodes and on-premise SFTP server

The details about the creation of a new security application and rule have been discussed extensively in Christian’s blog[1]. For the sake of brevity, just the relevant parameters for the definitions are shown here. These entries are created from the Compute Node Service Console under the Network tab.

Security Application
  • Name: OnPremSFTPServer_sshtunnel_sftp
  • Port Type: tcp
  • Port Range Start: 2522
  • Port Range End: 2522
  • Description: SSH Tunnel for On-Premises SFTP Server
Security Rule
  • Name: OnPremSFTPServer_ssh_sftp
  • Status: Enabled
  • Security Application: OnPremSFTPServer_sshtunnel_sftp (as created in last step)
  • Source: Security Lists – ShubSOACS-jcs/wls/ora-ms (select entry that refers to all the managed servers in the cluster)
  • Destination: ShubSOACS-jcs/lb/ora_otd (select the host designated to be CloudGatewayforOnPremTunnel, which could be either the DB or LBR VM)
  • Description: ssh tunnel for On-Premises SFTP Server

III. Create an SSH Tunnel from On-Premise SFTP Server to the CloudGatewayforOnPremTunnel VM’s public IP

Using the private key from Step I, start an SSH session from the on-premise SFTP server host to the CloudGatewayforOnPremTunnel, specifying the local and remote ports. As mentioned earlier, the local port is the standard port for SFTP daemon, e.g. 22. The remote port is any suitable port that is available in the SOACS instance. The syntax of the ssh command used is shown here.

ssh -R :<remote-port>:<host>:<local port> -i <private keyfile> opc@<CloudGatewayforOnPremTunnel VM IP>

The session transcript is shown below.

[slahiri@myOnPremSFTPServer ~/.ssh]$ ssh -v -R :2522:localhost:22 -i ./shubsoa_key opc@CloudGatewayforOnPremTunnel
[opc@CloudGatewayforOnPremTunnel ~]$ netstat -an | grep 2522
tcp        0      0 127.0.0.1:2522              0.0.0.0:*                   LISTEN
tcp        0      0 ::1:2522                         :::*                            LISTEN
[opc@CloudGatewayforOnPremTunnel ~]$

After establishing the SSH tunnel, the netstat utility can confirm that the remote port 2522 is enabled in listening mode within the Cloud Gateway VM. This remote port (2522) on localhost, along with the other on-premises SFTP parameters, can now be used to define an endpoint in the FTP Adapter’s outbound connection pool in the WebLogic Adminserver (WLS) console.

IV. Define a new JNDI entry for FTP Adapter that uses the on-premise SFTP server via the SSH tunnel

From the WLS console, under Deployments, update the FtpAdapter application by defining parameters for the outbound connection pool JNDI entry for clusters, i.e. eis/Ftp/HAFtpAdapter.

The remote port from Step III is used in defining the port within the JNDI entry for the FTP Adapter. It should be noted that the host specified will be CloudGatewayforOnPremTunnel instead of the actual on-premise hostname or address of the SFTP server, since the port forwarding with the SSH tunnel is now enabled locally within the SOACS instance in Step III.

It should be noted that SOA Cloud instances do not use any shared storage, so the deployment plan must be copied to the file system of each node before deploying the FtpAdapter application.
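For example, assuming the plan file sits under the domain home (both paths and the second host name below are placeholders), it could be copied from the first managed server node to the second with scp before the update is applied:

scp /u01/data/domains/soacs_domain/soa/FtpAdapterPlan.xml opc@shubsoacs-jcs-wls-2:/u01/data/domains/soacs_domain/soa/FtpAdapterPlan.xml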

The process to update the FtpAdapter deployment is fairly straightforward and follows the standard methodology, so only the primary field values used in the JNDI definition are provided below.

  • JNDI under Outbound Connection Pools: eis/Ftp/HAFtpAdapter
  • Host: CloudGatewayforOnPremTunnel
  • Username: <SFTP User>
  • Password: <SFTP User Password>
  • Port: 2522
  • UseSftp: true

V. Configure B2B Metadata

Standard B2B configuration will be required to set up the trading partners, document definitions and agreements. The unique configuration pertaining to this test case involves setting up the SFTP delivery channel to send the EDI document to SFTP server residing on premises inside the corporate firewall. Again, the remote port from Step III is used in defining the port for the delivery channel. The screen-shot for channel definition is shown below.

edicloud6

After definition of the metadata, the agreement for the outbound 850 EDI is deployed for runtime processing.

VI. Verification of SFTP connectivity

After the deployment of the FTP Adapter, another quick check of netstat for port 2522 may show additional entries indicating an established session corresponding to the newly created FTP Adapter. The connections are established and disconnected based on the polling interval of the FTP Adapter. Another way to verify the SFTP connectivity is to manually launch an SFTP session from the command line, as shown here.

[opc@shubsoacs-jcs-wls-1 ~]$ sftp -oPort=2522 slahiri@CloudGatewayforOnPremTunnel
Connecting to CloudGatewayforOnPremTunnel…
The authenticity of host '[cloudgatewayforonpremtunnel]:2522 ([10.196.240.130]:2522)' can't be established.
RSA key fingerprint is 93:c3:5c:8f:61:c6:60:ac:12:31:06:13:58:00:50:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[cloudgatewayforonpremtunnel]:2522' (RSA) to the list of known hosts.
slahiri@cloudgatewayforonpremtunnel's password:
sftp> quit
[opc@shubsoacs-jcs-wls-1 ~]$

While this SFTP session is connected, a quick netstat check on the CloudGatewayforOnPremTunnel host will confirm the established session for port 2522 from the SOACS compute node.

[opc@CloudGatewayforOnPremTunnel ~]$ netstat -an | grep 2522
tcp        0       0 0.0.0.0:2522                       0.0.0.0:*                               LISTEN
tcp        0      0 10.196.240.130:2522         10.196.246.186:14059        ESTABLISHED
tcp        0       0 :::2522                                 :::*                                       LISTEN
[opc@CloudGatewayforOnPremTunnel ~]$

VII. Use the newly created JNDI to develop a SOA composite containing FTP Adapter and B2B Adapter to hand-off the XML payload from SFTP Server to B2B engine

The simple SOA composite diagram built in JDeveloper for this test case is shown below.

The JNDI entry created in step IV (eis/ftp/HAFtpAdapter) is used in the FTP Adapter Wizard session within JDeveloper to set up a receiving endpoint from the on-premises SFTP server. A simple BPEL process is included to transfer the input XML payload to B2B. The B2B Adapter then hands-off the XML payload to the B2B engine for generation of the X12 EDI in native format.

edicloud4

Deploy the composite via EM console to complete the design-time activities. We are now ready for testing.

VIII. Test the end-to-end EDI processing flow

After deployment, the entire flow can be tested by copying a PurchaseOrder XML file in the polling directory for incoming files within the on-premise SFTP server. An excerpt from the sample XML file used as input file to trigger the process, is shown below.

[slahiri@myOnPremSFTPServer cloud]$ more po_850.xml
<Transaction-850 xmlns="http://www.edifecs.com/xdata/200" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" XDataVersion="1.0" Standard="X12" Version="V4010" CreatedDate="2007-04-10T17:16:24" CreatedBy="ECXEngine_837">
     <Segment-ST>
           <Element-143>850</Element-143>
           <Element-329>16950001</Element-329>
      </Segment-ST>
      <Segment-BEG>
           <Element-353>00</Element-353>
           <Element-92>SA</Element-92>
           <Element-324>815455</Element-324>
           <Element-328 xsi:nil="true"/>
           <Element-373>20041216</Element-373>
        </Segment-BEG>
–More–(7%)

The FTP Adapter of the SOA composite from SOACS instance will pick up the XML file via the SSH tunnel and process it in Oracle Public Cloud within Oracle B2B engine to generate the EDI. The EDI file will then be transmitted back to the on-premise SFTP server via the same SSH tunnel.

Results from the completed composite instance should be visible in the Enterprise Manager, as shown below.

edicloud2

Content of the EDI file along with the SFTP URL used to transmit the file can be seen in the B2B console, under Wire Message Reports section.

edicloud1

Summary

The test case described here is a quick way to demonstrate that SOA Cloud Service can easily be used in a hybrid architecture to model common B2B use cases that require access to on-premise endpoints. The EDI generation process and all the business-layer orchestration can be done in Oracle Public Cloud (OPC) with SOA Suite. Most importantly, integration with on-premise server endpoints can be enabled as needed via SSH tunnels to provide a hybrid cloud solution.

Acknowledgements

The SOACS Product Management and Engineering teams have been actively involved in the development of this solution for many months. It would not have been possible to deliver such a solution to customers without their valuable contributions.

References

  1. Setting up SSH tunnels for cloud to on-premise with SOA Cloud Service clusters – Christian Weeks, A-Team
  2. SOA Cloud Service – Quick and Simple Setup of an SSH Tunnel for On-Premises Database Connectivity – Shub Lahiri, A-Team

Integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) – Part 2


Introduction

 

This article provides a fresh approach on the subject of integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS).


Integrating Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) – Part 1 showcased how to use Oracle Transactional Business Intelligence (OTBI) to extract data from Sales Cloud and load it into BICS.


This article tackles the reverse data movement pattern – loading data from a BICS dashboard into Sales Cloud.


Data is inserted into Sales Cloud using the REST API for Oracle Sales Cloud. This is the more conventional part of the solution, using similar concepts covered in past BICS integration blogs such as:


1)    PL/SQL is used for the ETL.


2)    A Database Stored Procedure is triggered by a Database Function.


3)    The Database Function is referenced in the Modeler using EVALUATE.


4)    The data-load is triggered from a Dashboard using an Action Link.


5)    Dashboard Prompts are used to pass selected values from the Dashboard to the Stored Procedure using Request and Session Variables.

sales_cloud_blog

The more ambitious component of this article is replicating the user experience of scraping data from a dynamically filtered Dashboard Analysis Request. Write-back is emulated by replicating what the user views on the Dashboard in a stored procedure SQL SELECT.


1)    The Dashboard Consumer refines the results on the Dashboard with a Prompt that represents a unique record identifier.


2)    The Dashboard Prompt selections are passed to the Stored Procedure SELECT and used in a WHERE CLAUSE to replicate the refinement that the Dashboard Consumer makes on the Dashboard.

Snap16


Note: When replicating Dashboard SQL in the stored procedure, be cautious of data that has had row level security applied in the Modeler. To avoid erroneous results, all customized permissions must be manually enforced through the stored procedure SQL WHERE clause.


The following steps walk-through the creation of the necessary BICS and PL/SQL artifacts needed to load data from a BICS Dashboard into Sales Cloud. The example provided interprets the contact information from the Dashboard and creates a new matching contact in Sales Cloud. This example could be easily modified to support other REST API methods.


Part A – Configure BICS Dashboard


1)    Create BICS Table

2)    Insert Records into BICS Table

3)    Create Analysis Request

4)    Create Dashboard


Part B – Configure PL/SQL


5)    Review REST API Documentation

6)    Test POST Method

7)    Create Stored Procedure

8)    Test Stored Procedure

9)    Create Function

10)  Test Function


PART C – Configure Action Link Trigger


11)  Create DUMMY_PUSH Table

12)  Create Variable

13)  Reference Variable

14)  Create Model Expression

15)  Create DUMMY_PUSH Analysis Request

16)  Create Action Link

17)  Execute Update

 

Main Article


Part A – Configure BICS Dashboard


Step 1 – Create BICS Table


Create a simple “contacts” table in Oracle Application Express (Apex) -> SQL Workshop -> SQL Commands.

Where CONTACT_KEY is the unique record identifier that will be used to refine the data on the Dashboard. This must be something that the Dashboard Consumer can easily recognize and decipher.

CREATE TABLE BICS_CONTACTS(
FIRST_NAME VARCHAR2(500),
LAST_NAME VARCHAR2(500),
ADDRESS1 VARCHAR2(500),
CITY VARCHAR2(500),
COUNTRY VARCHAR2(500),
STATE VARCHAR2(500),
CONTACT_KEY VARCHAR2(500));


Step 2 – Insert Records into BICS Table


Insert a selection of sample contact records into the contacts table.

For a text version of both SQL snippets, click here.

INSERT INTO BICS_CONTACTS(FIRST_NAME,LAST_NAME,ADDRESS1,CITY,COUNTRY,STATE,CONTACT_KEY)
VALUES ('Jay','Pearson','7604 Technology Way','Denver','US','CO','Pearson-Jay');

Snap17


Step 3 – Create Analysis Request


Add the BICS_CONTACTS table to the Model and join it to another table.

Create an Analysis Request based on the BICS_CONTACTS table.

Add a filter on CONTACT_KEY where Operator = “is prompted”.

Snap5

 Snap2

Snap3

Step 4 – Create Dashboard


Create a Dashboard. Add the BICS_Contacts Analysis and a Prompt on CONTACT_KEY. To keep the example simple, a List Box Prompt has been used. Additionally, “Include All Column Values” and “Enable user to select multiple values” are disabled. It is possible to use both these options, with extra manual SQL in the stored procedure. A workaround for passing multiple values to session variables has been previously discussed in Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS).

Snap7

Part B – Configure PL/SQL


Step 5 – Review REST API Documentation


Begin by reviewing the REST API for Oracle Sales Cloud documentation. This article only covers using:

Task: Create a contact
Request: POST
URI: crmCommonApi/resources/<version>/contacts

That said, there are many other tasks / requests available in the API that may be useful for various integration scenarios.

Step 6 – Test POST Method


From Postman:

Snap14

Snap15

Snap13

From Curl:

For a text version of the curl click here.

curl -u user:pwd -X POST -v -k -H "Content-Type: application/vnd.oracle.adf.resourceitem+json" -H "Cache-Control: no-cache" -d@C:\temp\contact.json https://abcd-fap1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts

Where C:\temp\contact.json is:

For a text version of the JSON click here.

{
"FirstName": "John Barry",
"LastName": "Smith",
"Address": [
{
"Address1": "100 Oracle Parkway",
"City": "Redwood Shores",
"Country": "US",
"State": "CA"
}
]
}

Confirm Contact was created in Sales Cloud.

Snap9 Snap10

Snap11

Snap12

Step 7 – Create Stored Procedure


Replace Sales Cloud Server Name, Username, and Password.

For a text version of the code snippet click here.

create or replace PROCEDURE PUSH_TO_SALES_CLOUD(p_selected_records varchar2, o_status OUT varchar2) IS
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'https://abc1-fap1234-crm.oracledemos.com:443/crmCommonApi/resources/11.1.10/contacts';
l_body CLOB;
v_array apex_application_global.vc_arr2;
v_first_name VARCHAR2(100);
v_last_name VARCHAR2(100);
v_address1 VARCHAR2(100);
v_city VARCHAR2(100);
v_country VARCHAR2(100);
v_state VARCHAR2(100);
v_status VARCHAR2(100);
BEGIN
v_array := apex_util.string_to_table(p_selected_records, ',');
FOR j in 1..v_array.count LOOP
SELECT FIRST_NAME,LAST_NAME,ADDRESS1,CITY,COUNTRY,STATE
INTO v_first_name, v_last_name, v_address1, v_city, v_country, v_state
FROM BICS_CONTACTS
WHERE CONTACT_KEY = v_array(j);
l_body := '{
"FirstName": "' || v_first_name ||
'","LastName": "' || v_last_name ||
'","Address": [{"Address1": "' || v_address1 ||
'","City": "' || v_city ||
'","Country": "' || v_country ||
'","State": "' || v_state ||
'"}]}';
--dbms_output.put_line('Body:' || dbms_lob.substr(l_body));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/vnd.oracle.adf.resourceitem+json';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_body => l_body,
p_username => 'User',
p_password => 'Pwd',
p_http_method => 'POST'
);
v_status := apex_web_service.g_status_code;
--dbms_output.put_line('Status:' || v_status);
END LOOP;
o_status := v_status;
COMMIT;
END;

Step 8 – Test Stored Procedure


For a text version of the code snippet click here.

declare
o_status varchar2(100);
begin
PUSH_TO_SALES_CLOUD('Pearson-Jay',o_status);
dbms_output.put_line(o_status);
end;

RETURNS: 201 – indicating successful creation of contact

Step 9 – Create Function


For a text version of the code snippet click here.

create or replace FUNCTION FUNC_PUSH_TO_SALES_CLOUD
(
p_selected_records IN VARCHAR2
) RETURN VARCHAR
IS PRAGMA AUTONOMOUS_TRANSACTION;
o_status VARCHAR2(100);
BEGIN
PUSH_TO_SALES_CLOUD(p_selected_records,o_status);
COMMIT;
RETURN o_status;
END;

Step 10 – Test Function


For a text version of the SQL click here.

select FUNC_PUSH_TO_SALES_CLOUD('Pearson-Jay')
from dual;

RETURNS: 201 – indicating successful creation of contact


Part C – Configure Action Link Trigger


Quick Re-Cap:


It may be useful to revisit the diagram provided in the intro to give some context to where we are at.

“Part A” covered building the Dashboard shown in #4.

“Part B” covered building items #1 & #2.

“Part C” will now cover the remaining artifacts shown in #3 and #4.


Step 11 – Create DUMMY_PUSH Table


This table’s main purpose is to trigger the database function. It must have a minimum of one column and a maximum of one row. It is important that this table has only one row, as the function will be triggered for every row in the table.

For a text version of the SQL click here.

CREATE TABLE DUMMY_PUSH (REFRESH_TEXT VARCHAR2(255));

INSERT INTO DUMMY_PUSH(REFRESH_TEXT) VALUES ('Status:');

 

Step 12 – Create Variable


From the Modeler create a variable called “r_selected_records”. Provide a starting Value and define the SQL Query.

Snap19
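The starting value and query are not critical, since the Dashboard Prompt (Request Variable) will override the variable at run time. Something as simple as the following initialization query, which is an illustrative assumption only, is sufficient:

SELECT MAX(CONTACT_KEY) FROM BICS_CONTACTS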

Step 13 – Reference the Variable


On the Dashboard Prompt (created in Part A – Step 4) set a “Request Variable” matching the name of the Variable (created in Part C – Step 12), i.e. r_selected_records.

 Snap32

 

Step 14 – Create Model Expression


Add the DUMMY_PUSH table to the Model. Join it to another table.

Add an Expression Column called PUSH_TO_SALES_CLOUD.

Use EVALUATE to call the database function “FUNC_PUSH_TO_SALES_CLOUD” passing through the variable “r_selected_records”.

Snap20

For a text version of the EVALUATE statement click here.

EVALUATE('FUNC_PUSH_TO_SALES_CLOUD(%1)',VALUEOF(NQ_SESSION."r_selected_records"))

Snap18


Step 15 – Create DUMMY_PUSH Analysis Request


Add both fields to the Analysis – hiding column headings if desired.

Snap21

Snap30

Snap31

Step 16 – Create Action Link


Add an Action Link to the Dashboard. Choose Navigate to BI Content.

Snap24

Check “Run Confirmation” and customize the message if needed.

Snap26

Customize the Link Text and Caption (if desired).

Snap27

Step 17 – Execute Update


Select the user to insert into Sales Cloud.

*** Important ***

CLICK APPLY

Apply must be hit to set the variable on the prompt!

Snap28

Click the “Push to Sales Cloud” Action Link.

Confirm the Action

Snap29

Status 201 is returned indicating the successful creation of the contact.

Snap35

Confirm contact was created in Sales Cloud.

Snap34

Further Reading


Click here for the REST API for Oracle Sales Cloud guide.

Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for more A-Team BICS Blogs.


Summary


This article provided a set of examples that leverage the APEX_WEB_SERVICE API to integrate Oracle Sales Cloud with Oracle Business Intelligence Cloud Service (BICS) using the REST API for Oracle Sales Cloud.

The use case shown was for BICS and Oracle Sales Cloud integration. However, many of the techniques referenced could be used to integrate Oracle Sales Cloud with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST example could be easily modified to integrate BICS or standalone Oracle Apex with any other REST web services.
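For instance, a minimal sketch of a GET call (the URL below is a placeholder) follows the same pattern used in the stored procedure above:

declare
l_response CLOB;
begin
l_response := apex_web_service.make_rest_request(
p_url => 'https://host:443/someApi/resources/latest/items',
p_http_method => 'GET',
p_username => 'User',
p_password => 'Pwd');
dbms_output.put_line(dbms_lob.substr(l_response,2000,1));
end;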

Techniques referenced in this blog could be useful for those building BICS REST ETL connectors and plug-ins.

Integration Cloud Service (ICS) On-Premise Agent Installation


The On-Premises Agent (aka Connectivity Agent) is necessary for ICS to communicate with on-premise resources without the need for firewall configurations or VPN. Additional details about the Agent can be found at New Agent Simplifies Cloud to On-premises Integration. The purpose of this A-Team blog is to give a consolidated and simplified flow of what is needed to install the agent and provide a foundation for other blogs (e.g., Using The Cloud Database Adapter for On-Premise Database). For the detailed online documentation for the On-Premises Agent, see Managing Agent Groups and the On-Premises Agent.

On-Premises Agent Installation

The high-level steps for getting the On-Premises Agent installed on your production POD consist of two activities: 1. Create an Agent Group, and 2. Run the On-Premises Agent installer. These steps will be done on an on-premise Linux machine, and the end result will be a lightweight WebLogic server instance running on port 7001.

Create an Agent Group

1. Login to the production ICS console and view landing page.
 ICSConnectivityAgent_001
2. Verify that the ICS version is 15.4.5 or greater.
ICSConnectivityAgent_002
ICSConnectivityAgent_003
3. Scroll down on ICS Home page and select Create Agents. Notice this brings you to the Agents page of the Designer section.
ICSConnectivityAgent_004
ICSConnectivityAgent_005
4. On the Agents page click on Create New Agent Group.
5. Provide a name for your agent group (e.g., AGENT_GROUP).
ICSConnectivityAgent_006
6. Review the Agent page containing new group.
ICSConnectivityAgent_007

Run the On-Premises Agent Installer

1. Click on the Download Agent Installer drop down on the Agent page, select Connectivity Agent, and save file.
ICSConnectivityAgent_008
ICSConnectivityAgent_009
2. Extract the contents of the zip file for the cloud-connectivity-agent-installer.bsx, which is a self extracting linux bash script:
ICSConnectivityAgent_010
3. Make sure the cloud-connectivity-agent-installer.bsx file is executable (e.g., chmod +x cloud-connectivity-agent-installer.bsx) and execute the shell script.  NOTE: It is important to specify the SSL port (443) as part of the host URL.  For example:
./cloud-connectivity-agent-installer.bsx -h=https://<ICS_HOST>:443 -u=[username] -p=[password] -ad=AGENT_GROUP
ICSConnectivityAgent_011
4. Return to the ICS console and the Agents configuration page.
ICSConnectivityAgent_012
5. Review the Agent Group.
ICSConnectivityAgent_013
ICSConnectivityAgent_014
6. Click on Monitoring and select the Agents icon on the left-hand side.
ICSConnectivityAgent_0151
7. Review the Agent monitoring landing page.
ICSConnectivityAgent_016
8. Review the directory structure for the agent installation.
ICSConnectivityAgent_017
As you can see, this is a standard WLS installation.  The agent server is a single-server configuration where everything is targeted to the Admin server and is listening on port 7001.  Simply use the scripts in the ${agent_domain}/bin directory to start and stop the server.
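For example, assuming the default domain scripts (the nohup usage is just one option), the agent server can be started and stopped as follows:

cd ${agent_domain}/bin
nohup ./startWebLogic.sh &     # starts the agent (Admin) server listening on port 7001
./stopWebLogic.sh              # shuts the agent server down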

We are now ready to leverage the agent for things like the Cloud Database Adapter.
