
Connecting ICS and Apache Kafka via REST Proxy API


Introduction

Apache Kafka (Kafka for short) is a proven and well-known technology for a variety of reasons. First, it is very scalable: it can handle hundreds of thousands of messages per second without expensive hardware and with close to zero fine-tuning, as you can read here. Another reason is its client API capabilities. Kafka allows connections from different platforms by offering a number of client APIs that make it easy for developers to connect to and transact with Kafka. Being easy to connect to is a major requirement for open-source projects.

In a nutshell, Kafka client APIs fall into three categories:

* Native Clients: This is the preferred way to develop client applications that must connect to Kafka. These APIs allow high-performance connectivity and leverage most of the features found in Kafka’s clustering protocol. When using this API, developers are responsible for writing code to handle aspects such as fault tolerance and offset management. An example is the Oracle Service Bus Transport for Kafka, which has been built using the native clients and can be found here.

* Connect API: An SDK that allows the creation of reusable clients, which run on top of a pre-built connector infrastructure that takes care of details such as fault tolerance, execution runtime and offset management. The Oracle GoldenGate adapter has been built on top of this SDK, as you can read here.

* REST Proxy API: Applications that for some reason can use neither the native clients nor the Connect API have the option of connecting to Kafka through the REST Proxy API. This is an open-source project maintained by Confluent, the company founded by Kafka’s creators, that allows REST-based calls against Kafka to perform transactions and administrative tasks. You can read more about this project here.

The objective of this blog is to detail how Kafka’s REST Proxy API can be used to allow connectivity from Oracle ICS (Integration Cloud Service). By leveraging the native REST adapter from ICS, it is possible to implement integration scenarios in which messages can be sent to Kafka. This blog is going to show the technical details about the REST Proxy API infrastructure and how to implement a use case on top of it.


Figure 1: Use case where a request is made using SOAP and ICS delivers it to Kafka.

The use case leverages ICS transformation capabilities to allow applications limited to the SOAP protocol to send messages to Kafka. There may be applications out there that have no REST support and can only interact with SOAP-based endpoints. In this pattern, ICS adapts and transforms the message so it can be properly delivered to Kafka. SOAP is just an example; it could be any other protocol/technology supported by ICS. Plus, any Oracle SaaS application that has built-in connectivity with ICS can also benefit from this pattern.

Getting Started with the REST Proxy API

As mentioned before, the REST Proxy API is an open-source project maintained by Confluent. Its source code can be found on GitHub, here. Be aware that the REST Proxy API is not part of any Kafka deployment by default. That means that if you download and install a community version of Kafka, the bits for the REST Proxy API will not be there. You need to explicitly build the project and integrate it with your Kafka installation. This can be a little tricky, since the REST Proxy API project depends on other projects such as commons, rest-utils and the schema-registry.

Luckily, the Confluent folks provide an open-source version of their product, which has everything pre-integrated, including the REST Proxy API and the other dependencies. This distribution is called Confluent Open Source and can be downloaded here. It is strongly recommended to start with this distribution, so you can be sure that you won’t face errors resulting from bad compilation/building/packaging. Oracle’s own distribution of Kafka, called Event Hub Cloud Service, could be used as well.

Once you have a Kafka installation that includes the REST Proxy API, you will be good to go. Everything works out-of-the-box through easy-to-use scripts. The only thing you have to keep in mind is the dependencies between the services. In a typical Kafka deployment, the brokers depend on the Zookeeper service, which has to be continuously up and running. Zookeeper is required to keep metadata about the brokers, partitions and topics in a highly available fashion. Zookeeper’s default port is 2181.
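
As a quick sanity check before starting the other services, you can ask Zookeeper whether it is healthy. This is a minimal sketch, assuming the nc utility is available and Zookeeper’s four-letter-word commands are enabled (they usually are in the versions this article targets):

echo ruok | nc localhost 2181

If Zookeeper is up and answering on port 2181, it replies with "imok"; otherwise, fix Zookeeper before moving on.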

The services from the REST Proxy API also depend on Zookeeper. A REST Proxy API deployment requires a service called the REST Server, which depends on Zookeeper and also on another service called the Schema Registry – which in turn depends on Zookeeper as well. Figure 2 summarizes the dependency relationship between the services.


Figure 2: Dependency relationship between the REST Proxy API services.

Although it may look that way, none of these services needs to become a SPOF (Single Point of Failure) or SPOB (Single Point of Bottleneck) in Kafka’s architecture. All of them were designed from scratch to be idempotent and stateless. Therefore, you can have multiple copies of each service running behind a load balancer to meet your performance and availability goals. In order to start a Kafka deployment with the REST Proxy API, you need to execute the following scripts, in the order shown in listing 1.

/bin/zookeeper-server-start /etc/kafka/zookeeper.properties &

/bin/kafka-server-start /etc/kafka/server.properties &

/bin/schema-registry-start /etc/schema-registry/schema-registry.properties &

/bin/kafka-rest-start /etc/kafka-rest/kafka-rest.properties &

Listing 1: Starting a Kafka deployment with the REST Proxy API.
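
Once the four services are up, a quick way to confirm that the REST Server is responding is to list the existing topics. This is a hedged check that assumes the REST Proxy is listening on its default port 8082 on the same box:

curl http://localhost:8082/topics

The call should return a JSON array with the names of the topics known to the cluster.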

As you can see in listing 1, every script references a properties configuration file. These files are used to customize the behavior of a given service. Most properties in these files have been preset to suit a variety of workloads, so unless you are trying to fine-tune a given service, you most likely won’t need to change them.

There is an exception, though. For most production environments you will run these services on different boxes for high availability purposes. However, if you choose to run them within the same box, you might need to adjust some ports to avoid conflicts. That can easily be accomplished by editing the respective properties file and adjusting the corresponding property. If you are unsure about which property to change, consult the configuration properties documentation here.
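
For illustration only, the snippets below show the kind of port-related properties involved. The exact property names vary between Confluent versions, so treat these as assumptions to be verified against the configuration documentation linked above rather than definitive settings:

# zookeeper.properties
clientPort=2181

# server.properties (Kafka broker)
listeners=PLAINTEXT://:9092
zookeeper.connect=localhost:2181

# schema-registry.properties
listeners=http://0.0.0.0:8081
kafkastore.connection.url=localhost:2181

# kafka-rest.properties (REST Server)
listeners=http://0.0.0.0:8082
schema.registry.url=http://localhost:8081
zookeeper.connect=localhost:2181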

Setting Up a Public Load Balancer

This section may be considered optional depending on the situation. In order for ICS to connect to the REST Proxy API, it needs network access to the endpoints exposed by the REST Server. This is because ICS runs on the OPC (Oracle Public Cloud) and can only access endpoints that are publicly available on the internet (or endpoints exposed through the connectivity agent). Therefore, you may need to set up a load balancer in front of your REST Servers to allow this connection. This should be considered a best practice because otherwise you would need to set up firewall rules to allow public internet access to the boxes that hold your REST Servers. Moreover, running without a load balancer would make it difficult to transparently change your infrastructure if you need to scale your REST Servers up or down. This blog will show how to set up OTD (Oracle Traffic Director) in front of the REST Servers, but any other load balancer that supports TCP/HTTP would also suit the need.

In OTD, the first step is creating a server pool that contains all the exposed REST Server endpoints. In the setup built for this blog, I had a REST Server running on port 6666. Figure 3 shows an example of a server pool named rest-proxy-pool.


Figure 3: Creating a server pool that references the REST Server services.

The second step is the creation of a route under your virtual server configuration that will forward any request matching a certain pattern to the server pool created above. In the REST Proxy API, any request that intends to perform a transaction (which could be either producing or consuming messages) goes through a URI pattern that starts with /topics/*. Therefore, create a route that uses this pattern, as shown in figure 4.


Figure 4: Creating a route to forward requests to the server pool.

Finally, you need to make sure that a functional HTTP listener is associated with your virtual server. This HTTP listener will be used by ICS when it sends messages out. In the setup built for this blog, I used an HTTP listener on port 8080 for non-SSL requests. Figure 5 depicts this.


Figure 5: HTTP listener created to allow external communication.

Before moving on to the following sections, it is a good idea to validate the setup built so far, since there are a lot of moving parts that can fail. The best way to validate it is by sending a message to a topic using the REST Proxy API and checking whether that message is received by Kafka’s console consumer. Thus, start a new console consumer instance to listen for messages sent to the topic orders, as shown in listing 2.

/bin/kafka-console-consumer --bootstrap-server <BROKER_ADDRESS>:9092 --topic orders

Listing 2: Starting a new console consumer that listens for messages.

Then, send a message out using the REST Proxy API exposed by your load balancer. Remember that the request should pass through the HTTP listener configured on OTD. Listing 3 shows a cURL example that sends a simple message to the topic using the infrastructure built so far.

curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" --data '{"records":[{"key":"12345", "value":{"message":"Hello World"}}]}' "http://<OTD_ADDRESS>:8080/topics/orders"

Listing 3: HTTP POST to send a message to the topic using the REST Proxy API.
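
If the request succeeds, the REST Proxy API also returns a small JSON document describing where each record landed. The shape below is roughly what the v1 produce API returns; the exact fields may differ between REST Proxy versions, so take it as an illustration rather than a contract:

{"offsets":[{"partition":0,"offset":0,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":null}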

If everything was set up correctly, you should see the JSON payload in the output of the console consumer started in listing 2. There are some interesting things to comment on about the example shown in listing 3. Firstly, you may have noticed that the actual payload sent has a strictly defined structure. It is a JSON payload with only one root element called “records”. This element’s value is an array with multiple entries of type key/value. This means that you can send multiple records at once in a single request to maximize throughput, since you avoid performing multiple network calls.

Secondly, the “key” field is not mandatory. If you send a record containing only the value, that will work as well. However, it is highly recommended to provide a key every time you send a message. That gives you more control over how messages are grouped into partitions in Kafka, which in turn influences how the partitions are persisted and replicated across the cluster.

Thirdly, you may also have noticed the content type header used in the cURL command. Instead of a plain application/json, as most applications would use, we used application/vnd.kafka.json.v1+json. This is a requirement for the REST Proxy API to work. Keep this in mind while developing flows in ICS.
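
Putting these three observations together, the example below sends two records in a single request, one of them intentionally without a key. The keys and message contents are illustrative; the endpoint is the same OTD listener used in listing 3:

curl -X POST -H "Content-Type: application/vnd.kafka.json.v1+json" --data '{"records":[{"key":"PO000898", "value":{"message":"First record"}}, {"value":{"message":"Record without a key"}}]}' "http://<OTD_ADDRESS>:8080/topics/orders"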

Message Design for REST Proxy API

Now it is time to start thinking about how we are going to map the SOAP messages sent to ICS into the JSON payload that needs to be sent to the REST Proxy API. This exercise is important because once you start using ICS to build the flow, it will ask for payload samples and message schemas that you may not have at hand. Therefore, this section focuses on generating these artifacts.

Let’s start by designing the SOAP messages. In this use case we are going to have ICS receiving order confirmation requests. Each order confirmation request will contain the details of an order made by a certain customer. Listing 4 shows an example of this SOAP message.

<soapenv:Envelope xmlns:blog="http://cloud.oracle.com/paas/ics/blogs"
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <blog:confirmOrder>
         <order>
            <orderId>PO000897</orderId>
            <custId>C00803</custId>
            <dateTime>2017-02-09:11:06:35</dateTime>
            <amount>89.90</amount>
         </order>
      </blog:confirmOrder>
   </soapenv:Body>
</soapenv:Envelope>

Listing 4: SOAP message containing the order confirmation request.

In order to build the SOAP message shown in listing 4, it is necessary to have the corresponding message schemas, typically found in a WSDL document. You can download the WSDL used to build this blog here. It will be necessary when we set up the connection in ICS later.

The message that we really want to send to Kafka is in JSON format. It has essentially all the fields shown in listing 4, except for the “orderId” field. Listing 5 shows the JSON message we need to send.

{
   "custId":"C00803",
   "dateTime":"2017-02-09:11:06:35",
   "amount":89.90
}

Listing 5: JSON message containing the order confirmation request.

The “orderId” field was omitted on purpose. We are going to use this field as the key of the record that will be sent to Kafka. With this design we provide a way to track orders by their identifiers. If you recall the JSON payload shown in listing 3, you will see that the JSON payload shown in listing 5 will be the portion used in the “value” field. Listing 6 shows the concrete payload that needs to be built so the REST Proxy API can properly process it.

{
   "records":[
      {
         "key":"PO000897",
         "value": {
            "custId":"C00803",
            "dateTime":"2017-02-09:11:06:35",
            "amount":89.90
         }
      }
   ]
}

Listing 6: JSON payload used to process messages using the REST Proxy API.

Keep in mind that although the REST Proxy API receives the payload shown in listing 6, what the topic consumers effectively receive is only the record containing the key and the value. When a consumer reads the “value” field of the record, it has access to the actual payload containing the order confirmation request. Figure 6 shows the mapping that needs to be implemented in ICS.


Figure 6: Message mapping to be implemented in ICS.
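
To see both parts of the record from a consumer’s perspective, you can ask Kafka’s console consumer to print the key alongside the value. This is a small sketch using standard consumer formatter properties:

/bin/kafka-console-consumer --bootstrap-server <BROKER_ADDRESS>:9092 --topic orders --property print.key=true --property key.separator=" | "

Each consumed record is then printed as the key, the separator, and the JSON value from listing 5.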

Developing the Integration Flow

Now that we have covered the configuration necessary to establish communication with the REST Proxy API, we can start the development of the integration flow in ICS. Let’s start with the configuration of the connections.

Create a SOAP-based connection as shown in figure 7. Since this connection will be used for inbound requests, you can skip the security configuration. Go ahead and attach the WSDL that contains the schemas to this newly created connection.


Figure 7: SOAP-based connection used for inbound processing.

Next, create a REST-based connection as shown in figure 8. This is the connection that will be used to send messages out to Kafka. Therefore, make sure the “REST API Base URL” field is set to the correct endpoint, which should point to your load balancer. Also make sure to append the /topics resource after the port.


Figure 8: REST-based connection used for outbound processing.

With the inbound and outbound connections properly created, go ahead and create a new integration. For this use case we are going to use Basic Map Data as the integration style/pattern, although you could also leverage the outbound connection to the REST Proxy API in orchestration-based integrations.


Figure 9: Using Basic Map Data as the integration style for the use case.

Name the integration OrderService and provide a description, as shown in figure 10. Once the integration flow is created, go ahead and drag the SOAP connection to the source area of the flow. That will trigger the SOAP endpoint creation wizard. Go through the wizard pages until you reach the last one, accepting all the default values. Then, click the “Done” button to finish it.


Figure 10: Setting up the details for the newly created integration.

ICS will create the source mapping according to the information gathered from the wizard, along with the information from the WSDL attached to the connection, as shown in figure 11. At this point, we can drag the REST connection to the target area of the flow. That will trigger the REST endpoint creation wizard.


Figure 11: Integration flow with the inbound mapping built.

Unlike the SOAP endpoint creation wizard, the REST endpoint creation wizard requires some changes to the options it shows. The first one is setting the Kafka topic name in the “Relative Source URI” field. This is important because ICS will use this information to build the final URI that will be sent to the REST Proxy API. Therefore, make sure to set the appropriate topic name. For this use case, we are using a topic named orders, as shown in figure 12. Also, select the option “Configure Request Payload” before clicking next.


Figure 12: Setting up details about the REST endpoint behavior.

On the next page, you will need to associate the schema that will be used to parse the request payload. Select “JSON Sample” and upload a JSON sample file that contains a payload like the one shown in listing 6. Make sure to provide a JSON sample that has at least two sample values in the array section. ICS validates whether the samples provided have enough information to generate the internal schemas. If a JSON sample has an array construct, ICS asks for at least two values within the array to make sure it is dealing with a list of values instead of a single value. You can grab a copy of a valid JSON sample for this use case here.
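
If that downloadable sample is not at hand, the sketch below shows the kind of two-entry sample that satisfies the array validation; the second record is purely illustrative:

{
   "records":[
      {
         "key":"PO000897",
         "value": {
            "custId":"C00803",
            "dateTime":"2017-02-09:11:06:35",
            "amount":89.90
         }
      },
      {
         "key":"PO000898",
         "value": {
            "custId":"C00804",
            "dateTime":"2017-02-09:11:07:12",
            "amount":120.50
         }
      }
   ]
}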


Figure 13: Setting up details about schemas and media types.

In the “Type of Payload” section, make sure to select the “Other Media Type” option to allow the use of custom media types. Then, set application/vnd.kafka.json.v1+json as the value, as shown in figure 13. Click next and review the options set. If everything looks like figure 14, you can click the “Done” button to finish the wizard.


Figure 14: Summary page of the REST endpoint creation wizard.

ICS will bring together the request and response mappings and expects you to set them up. Thus, go ahead and create the mappings for both request and response. For the request mapping, simply associate the fields as shown in figure 15. Remember that this field mapping should mimic what was shown before in figure 6, including the use of the “orderId” field as the record key.


Figure 15: Request mapping configuration.

The response mapping is much simpler: the only thing you have to do is associate the “orderId” field with the “confirmationId” field. The idea is to give the caller a way to know whether the transaction was successful. Returning the same order identifier accomplishes this because, if any failure happens during message transmission, the REST Proxy API propagates a fault back to the caller, which in turn forces ICS to catch that fault and propagate it back as well. Figure 16 shows the response mapping.


Figure 16: Response mapping configuration.

Now set up some tracking fields (for this use case, the “orderId” field is a good choice) and finish the integration flow as shown in figure 17. You are now ready to activate and test the integration to check the end-to-end behavior of the use case.


Figure 17: Integration flow 100% complete in ICS.

You can download a copy of this use case here. Once the integration is active, you can validate whether it is working correctly by starting a console consumer as shown in listing 2. Then, open your favorite SOAP client utility and import the WSDL from the integration. You can easily access the integration’s WSDL in the UI by clicking the information icon of the integration, as shown in figure 18.


Figure 18: Retrieving the integration’s WSDL from the UI.

Once the WSDL is properly imported into your SOAP client utility, send a payload request like the one shown in listing 4 to validate the integration. If everything was set up correctly, you should see the JSON payload sent to the topic in the output of the console consumer started in listing 2.
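
If you prefer the command line to a SOAP client utility, a hedged alternative is to post the envelope from listing 4 directly with cURL. The credentials, SOAPAction value and endpoint URL below are placeholders you would take from your ICS account and the integration’s WSDL, not actual values from this setup:

curl -u <ICS_USER>:<ICS_PASSWORD> -X POST -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction: \"confirmOrder\"" --data @confirm-order-request.xml "https://<ICS_HOST>/<INTEGRATION_ENDPOINT_PATH>"

Here confirm-order-request.xml would contain the SOAP envelope from listing 4.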

Conclusion

This blog has shown in detail how to configure ICS to send messages to Kafka. Since ICS has no built-in adapter for Kafka, we used the REST Proxy API project, which is part of the Kafka ecosystem.


GoldenGate Cloud Service (GGCS): Replication from On-Premises to Oracle Public Cloud (OPC)


Introduction

This document walks you through how to configure Oracle GoldenGate (OGG) replication between an on-premises Oracle Database and an Oracle Database Cloud Service (DBCS) on the Oracle Public Cloud (OPC) via GoldenGate Cloud Service (GGCS).

Installation of Oracle GoldenGate for Oracle Database on-premises and the provisioning of GGCS and DBCS are not discussed in this article; it is assumed that the Oracle GoldenGate software has been installed on the on-premises server and that instances of GGCS and DBCS already exist.

The scripts and information provided in this article are for educational purposes only. They are not supported by Oracle Development or Support, and come with no guarantee or warranty of functionality in any environment other than the test system used to prepare this article.

For details on OGG installation and provisioning of DBCS and GGCS, please check the Oracle documentation links listed in the Additional Resources section at the end of this article.

GoldenGate Cloud Service (GGCS)

The GoldenGate Cloud Service (GGCS) is a cloud-based real-time data integration and replication service, which provides seamless and easy data movement from various on-premises relational databases to databases in the cloud with sub-second latency, while maintaining data consistency and offering fault tolerance and resiliency.

Figure 1: GoldenGate Cloud Service (GGCS) Architecture Diagram


OGG Replication between On-Premises and OPC via GGCS

The high-level steps for OGG replication between the on-premises (source) database and the DBaaS/DBCS (target) database in the Oracle Public Cloud (OPC) are as follows:

  • Configure and start the GGCS Oracle GoldenGate Manager on the OPC side
  • Configure and start the SSH proxy server process on the on-premises side
  • Configure and start the on-premises OGG Extract process
  • Configure and start the on-premises OGG Extract Data Pump process
  • Configure and start the GGCS Replicat process on the OPC side to deliver data into the target DBaaS/DBCS

GGCS Oracle GoldenGate Manager

To start configuring Oracle GoldenGate on the GGCS instance, the manager process must be running. Manager is the controller process that instantiates the other Oracle GoldenGate processes such as Extract, Extract Data Pump, Collector and Replicat processes.

Connect to GGCS Instance through ssh and start the Manager process via the GoldenGate Software Command Interface (GGSCI).

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@129.145.1.180

[opc@bics-gg-ggcs-1 ~]$ sudo su - oracle
[oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

Note: By default, the “opc” user is the only one allowed to ssh into the GGCS instance. We need to switch to the “oracle” user via the “su” command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and points to the directory where GoldenGate was installed.

[oracle@bics-gg-ggcs-1 gghome]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

GGSCI (bics-gg-ggcs-1) 1> start mgr

Manager started.

GGSCI (bics-gg-ggcs-1) 2> info mgr

Manager is running (IP port bics-gg-ggcs-1.7777, Process ID 25272).

Note: By default, GoldenGate processes don’t accept any remote connections. To enable connections from other hosts via the SSH proxy, we need to add an ACCESSRULE to the Manager parameter file (MGR.prm) to allow connectivity through the public IP address of the GGCS instance.

Here’s the MGR.prm file used in this example:

--###############################################################
--## MGR.prm
--## Manager Parameter Template
-- Manager port number
-- PORT <port number>
PORT 7777
-- For allocate dynamicportlist. Here the range is starting from
-- port n1 through n2.
Dynamicportlist 7740-7760
-- Enable secrule for collector
ACCESSRULE, PROG COLLECTOR, IPADDR 129.145.1.180, ALLOW
-- Purge extract trail files
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24
-- Start one or more Extract and Replicat processes automatically
-- after they fail. AUTORESTART provides fault tolerance when
-- something temporary interferes with a process, such as
-- intermittent network outages or programs that interrupt access
-- to transaction logs.
-- AUTORESTART ER *, RETRIES <x>, WAITMINUTES <y>, RESETMINUTES <z>
-- This is to specify a lag threshold that is considered
-- critical, and to force a warning message to the error log.
-- Lagreport parameter specifies the interval at which manager
-- checks for extract / replicat lag.
--LAGREPORTMINUTES <x>
--LAGCRITICALMINUTES <y>
--Reports down processes
--DOWNREPORTMINUTES <n>
--DOWNCRITICAL
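
After editing MGR.prm, the Manager typically needs to be bounced for the new ACCESS RULE to take effect. A minimal GGSCI sketch (the “!” simply suppresses the confirmation prompt; prompt numbers are illustrative):

GGSCI (bics-gg-ggcs-1) 3> stop mgr!
GGSCI (bics-gg-ggcs-1) 4> start mgr
GGSCI (bics-gg-ggcs-1) 5> info mgr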

Start SSH Proxy Server on the On-Premises

By default, the only access allowed to the GGCS instance is via ssh, so to allow communication of GoldenGate processes between the on-premises side and the GGCS instance we need to run an SSH proxy server on the on-premises side that forwards traffic to the GoldenGate processes on the GGCS instance.

Start the SSH proxy server process via the following ssh command (all in one line):

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key -v -N -f -D 127.0.0.1:8888 opc@129.145.1.180 > ./dirrpt/socks.log 2>&1

Command Syntax: ssh -i {private_key_file} -v -N -f -D {listening_ip_address:listening_tcp_port_address} {user}@{GGCS_Instance_IP_address} > {output_file} 2>&1

SSH Command Options Explained:

  1. -i = Private key file
  2. -v = Verbose mode
  3. -N = Do not execute remote commands; mainly used for port forwarding
  4. -f = Run the ssh process in the background
  5. -D = Run as local dynamic application-level forwarding; act as a SOCKS proxy server on the specified interface and port
  6. listening_ip_address = Host name or IP address where this SOCKS proxy will listen (127.0.0.1 is the loopback address)
  7. listening_tcp_port_address = TCP/IP port number to listen on
  8. 2>&1 = Redirect stdout and stderr to the output file

Then verify that the SSH SOCKS proxy server process has started successfully: check the socks proxy output file via the “cat” utility and look for the messages “Local connections to … forwarded” and “Local forwarding listening on port …”. Make sure it is connected to the GGCS instance and listening on the right IP address and port:

[oracle@ogg-wkshp db_1]$ cat ./dirrpt/socks.log

OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 129.145.1.180 [129.145.1.180] port 22.
debug1: Connection established.
debug1: identity file keys/mp_opc_ssh_key type 1
debug1: loaded 1 keys
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '129.145.1.180' is known and matches the RSA host key.

debug1: Authentication succeeded (publickey).
debug1: Local connections to 127.0.0.1:8888 forwarded to remote address socks:0
debug1: Local forwarding listening on 127.0.0.1 port 8888.
debug1: channel 0: new [port listener]
debug1: Entering interactive session.
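
As an additional, optional check (a small sketch, assuming the netstat utility is available on the on-premises server), you can confirm that something is actually listening on the SOCKS port:

[oracle@ogg-wkshp db_1]$ netstat -an | grep 8888

The output should show a line in LISTEN state for 127.0.0.1:8888.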

Configure On-Premises Oracle GoldenGate

For our test, we shall use the following tables for source and target database:

CREATE TABLE ACCTN
(
ACCOUNT_NO NUMBER (10,0) NOT NULL
, BALANCE NUMBER (8,2) NULL
, PREVIOUS_BAL NUMBER (8,2) NULL
, LAST_CREDIT_AMT NUMBER (8,2) NULL
, LAST_DEBIT_AMT NUMBER (8,2) NULL
, LAST_CREDIT_TS TIMESTAMP NULL
, LAST_DEBIT_TS TIMESTAMP NULL
, ACCOUNT_BRANCH NUMBER (10,0) NULL
, CONSTRAINT PK_ACCTN
PRIMARY KEY
(
ACCOUNT_NO
)
USING INDEX
)
;
CREATE TABLE ACCTS
(
ACCOUNT_NO NUMBER (10,0) NOT NULL
, FIRST_NAME VARCHAR2 (25) NULL
, LAST_NAME VARCHAR2 (25) NULL
, ADDRESS_1 VARCHAR2 (25) NULL
, ADDRESS_2 VARCHAR2 (25) NULL
, CITY VARCHAR2 (20) NULL
, STATE VARCHAR2 (2) NULL
, ZIP_CODE NUMBER (10,0) NULL
, CUSTOMER_SINCE DATE NULL
, COMMENTS VARCHAR2 (30) NULL
, CONSTRAINT PK_ACCTS
PRIMARY KEY
(
ACCOUNT_NO
)
USING INDEX
)
;
CREATE TABLE BRANCH
(
BRANCH_NO NUMBER (10,0) NOT NULL
, OPENING_BALANCE NUMBER (8,2) NULL
, CURRENT_BALANCE NUMBER (8,2) NULL
, CREDITS NUMBER (8,2) NULL
, DEBITS NUMBER (8,2) NULL
, TOTAL_ACCTS NUMBER (10,0) NULL
, ADDRESS_1 VARCHAR2 (25) NULL
, ADDRESS_2 VARCHAR2 (25) NULL
, CITY VARCHAR2 (20) NULL
, STATE VARCHAR2 (2) NULL
, ZIP_CODE NUMBER (10,0) NULL
, CONSTRAINT PK_BRANCH
PRIMARY KEY
(
BRANCH_NO
)
USING INDEX
)
;
CREATE TABLE TELLER
(
TELLER_NO NUMBER (10,0) NOT NULL
, BRANCH_NO NUMBER (10,0) NOT NULL
, OPENING_BALANCE NUMBER (8,2) NULL
, CURRENT_BALANCE NUMBER (8,2) NULL
, CREDITS NUMBER (8,2) NULL
, DEBITS NUMBER (8,2) NULL
, CONSTRAINT PK_TELLER
PRIMARY KEY
(
TELLER_NO
)
USING INDEX
)
;

Start On-Premises Oracle GoldenGate Manager

[oracle@ogg-wkshp db_1]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.10 21604177 23004694_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Apr 29 2016 01:06:03
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (ogg-wkshp.us.oracle.com) 1> start mgr

Manager started.

GGSCI (ogg-wkshp.us.oracle.com) 2> info mgr

Manager is running (IP port ogg-wkshp.us.oracle.com.7809, Process ID 7526).

Configure and Start Oracle GoldenGate Extract Online Change Capture process 

Before we can configure the Oracle GoldenGate Extract online change capture process, we need to enable supplemental logging for the schema/tables we want to capture on the source database via the GGSCI utility.

Enable Table Supplemental Logging via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> add schematrandata tpcadb

2017-02-22 10:38:01 INFO OGG-01788 SCHEMATRANDATA has been added on schema tpcadb.
2017-02-22 10:38:01 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema tpcadb.

Note: The GGSCI “dblogin” command logs the GGSCI session into the database. Your GGSCI session needs to be connected to the database before you can execute the “add schematrandata” command.

Create an Online Change Data Capture Extract Group via Integrated Extract process

For this test, we will name our Online Change Data Capture group process ETPCADB.

-> Register the Extract group with the database via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

Note: When creating/adding/managing an Extract group as an Integrated Extract process, your GGSCI session needs to be connected to the database via the “dblogin” command.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> register extract etpcadb database

Extract ETPCADB successfully registered with database at SCN 2373172.

-> Create/Add the Extract Group in GoldenGate via GGSCI:

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 3> add extract etpcadb, integrated, tranlog, begin now

EXTRACT added.

Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (ogg-wkshp.us.oracle.com) 1> edit param etpcadb

Here’s the Online Change Capture Parameter (etpcadb.prm) file used in this example:

EXTRACT ETPCADB
userid tpcadb, password tpcadb
EXTTRAIL ./dirdat/ea
discardfile ./dirrpt/etpcadb.dsc, append
TABLE TPCADB.ACCTN;
TABLE TPCADB.ACCTS;
TABLE TPCADB.BRANCH;
TABLE TPCADB.TELLER;

Add a local extract trail to the Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 1> add exttrail ./dirdat/ea, extract etpcadb

EXTTRAIL added.

Start the Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 2> start extract etpcadb

Sending START request to MANAGER …
EXTRACT ETPCADB starting

Check the Status of Online Change Data Capture  Extract Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com) 4> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 5> info extract etpcadb detail

EXTRACT ETPCADB Last Started 2017-02-22 10:46 Status RUNNING
Checkpoint Lag 00:00:10 (updated 00:00:09 ago)
Process ID 10705
Log Read Checkpoint Oracle Integrated Redo Logs
2017-02-22 10:59:17
SCN 0.2394754 (2394754)
Target Extract Trails:
Trail Name Seqno RBA Max MB Trail Type
./dirdat/ea 0 1450 100 EXTTRAIL
Integrated Extract outbound server first scn: 0.2373172 (2373172)
Integrated Extract outbound server filtering start scn: 0.2373172 (2373172)
Extract Source Begin End
Not Available 2017-02-22 10:44 2017-02-22 10:59
Not Available * Initialized * 2017-02-22 10:44
Not Available * Initialized * 2017-02-22 10:44
Current directory /u01/app/oracle/product/12cOGG/db_1
Report file /u01/app/oracle/product/12cOGG/db_1/dirrpt/ETPCADB.rpt
Parameter file /u01/app/oracle/product/12cOGG/db_1/dirprm/etpcadb.prm
Checkpoint file /u01/app/oracle/product/12cOGG/db_1/dirchk/ETPCADB.cpe
Process file /u01/app/oracle/product/12cOGG/db_1/dirpcs/ETPCADB.pce
Error log /u01/app/oracle/product/12cOGG/db_1/ggserr.log

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 6> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ETPCADB 00:00:09 00:00:08

Configure and Start Oracle GoldenGate Extract Data Pump process 

For this test, we will name our GoldenGate Extract Data Pump group process PTPCADB.

Create the Extract Data Pump Group (Process) via GGSCI

The Extract Data Pump group process will read the trail created by the Online Change Data Capture Extract (ETPCADB) process and sends the data to the GoldenGate process running on the GGCS instance via the SSH Socks Proxy server.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 7> add extract ptpcadb, exttrailsource ./dirdat/ea

EXTRACT added.

Note: To edit/create the Extract Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 8> edit param ptpcadb

Here’s the Extract Data Pump Parameter (ptpcadb.prm) file used in this example:

EXTRACT PTPCADB
RMTHOST 129.145.1.180, MGRPORT 7777, SOCKSPROXY 127.0.0.1:8888
discardfile ./dirrpt/ptpcadb.dsc, append
rmttrail ./dirdat/pa
passthru
table TPCADB.ACCTN;
table TPCADB.ACCTS;
table TPCADB.BRANCH;
table TPCADB.TELLER;

Add the remote trail to the Extract Data Pump Group via GGSCI

The remote trail is the output file on the remote side (the GGCS instance) that the Extract Data Pump writes to; it is read by the Replicat delivery process, which applies the data to the target database on the Oracle Database Cloud Service (DBCS) instance.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 9> add rmttrail ./dirdat/pa, extract ptpcadb

RMTTRAIL added.

Start the Extract Data Pump Group via GGSCI

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 10> start extract ptpcadb

Sending START request to MANAGER …
EXTRACT PTPCADB starting

Check the Status of Extract Data Pump Group via GGSCI 

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 11> info extract ptpcadb detail

EXTRACT PTPCADB Last Started 2017-02-22 11:12 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Process ID 15281
Log Read Checkpoint File ./dirdat/ea000000
First Record RBA 0
Target Extract Trails:
Trail Name Seqno RBA Max MB Trail Type
./dirdat/pa 0 0 100 RMTTRAIL
Extract Source Begin End
./dirdat/ea000000 * Initialized * First Record
./dirdat/ea000000 * Initialized * First Record
Current directory /u01/app/oracle/product/12cOGG/db_1
Report file /u01/app/oracle/product/12cOGG/db_1/dirrpt/PTPCADB.rpt
Parameter file /u01/app/oracle/product/12cOGG/db_1/dirprm/ptpcadb.prm
Checkpoint file /u01/app/oracle/product/12cOGG/db_1/dirchk/PTPCADB.cpe
Process file /u01/app/oracle/product/12cOGG/db_1/dirpcs/PTPCADB.pce
Error log /u01/app/oracle/product/12cOGG/db_1/ggserr.log

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 13> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING ETPCADB 00:00:10 00:00:06
EXTRACT RUNNING PTPCADB 00:00:00 00:00:00

Configure and Start GGCS Oracle GoldenGate Delivery Process

Connect to the GGCS instance through ssh and use the GoldenGate Software Command Interface (GGSCI) utility to configure the GoldenGate delivery process.

[oracle@ogg-wkshp db_1]$ ssh -i mp_opc_ssh_key opc@129.145.1.180

[opc@bics-gg-ggcs-1 ~]$ sudo su - oracle
[oracle@bics-gg-ggcs-1 ~]$ cd $GGHOME

Note: By default, the “opc” user is the only one allowed to ssh into the GGCS instance. We need to switch to the “oracle” user via the “su” command to manage the GoldenGate processes. The environment variable $GGHOME is pre-defined in the GGCS instance and points to the directory where GoldenGate was installed.

[oracle@bics-gg-ggcs-1 gghome]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.160517 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_160711.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 12 2016 02:21:38
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2016, Oracle and/or its affiliates. All rights reserved.

Configure GGCS Oracle GoldenGate Replicat Online Delivery group via Integrated process

Configure the Replicat online delivery group that reads the trail file the Data Pump writes to and delivers the changes into the BICS DBCS.

Before configuring the delivery group as an Integrated delivery process, make sure that the GGSCI session is connected to the database via the GGSCI “dblogin” command.

GGSCI (bics-gg-ggcs-1) 1> dblogin useridalias ggcsuser_alias

Successfully logged into database BICSPDB1.

Create/add the Replicat delivery group as an Integrated process; in this example we will name our Replicat delivery group RTPCADB.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 2> add replicat rtpcadb, integrated, exttrail ./dirdat/pa

REPLICAT (Integrated) added.

Note: To edit/create the Replicat Delivery Configuration/Parameter file, you need to execute “edit param <group_name>” via the GGSCI utility.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> edit param rtpcadb

Here’s the GGCS Replicat Online Delivery Parameter (rtpcadb.prm) file used in this example:

REPLICAT RTPCADB
useridalias ggcsuser_alias
--Integrated parameter
DBOPTIONS INTEGRATEDPARAMS (parallelism 2)
DISCARDFILE ./dirrpt/rtpcadb.dsc, APPEND Megabytes 25
ASSUMETARGETDEFS
MAP TPCADB.ACCTN, TARGET GGCSBICS.ACCTN;
MAP TPCADB.ACCTS, TARGET GGCSBICS.ACCTS;
MAP TPCADB.BRANCH, TARGET GGCSBICS.BRANCH;
MAP TPCADB.TELLER, TARGET GGCSBICS.TELLER;

Start the GGCS Replicat Online Delivery process via GGSCI

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 3> start replicat rtpcadb

Sending START request to MANAGER …
REPLICAT RTPCADB starting

Check the Status of GGCS Replicat Online Delivery process via GGSCI 

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 4> info replicat rtpcadb detail

REPLICAT RTPCADB Last Started 2017-02-22 14:23 Status RUNNING
INTEGRATED
Checkpoint Lag 00:00:00 (updated 00:00:06 ago)
Process ID 25601
Log Read Checkpoint File ./dirdat/pa000000
2017-02-22 14:23:38.468569 RBA 0
INTEGRATED Replicat
DBLOGIN Provided, inbound server name is OGG$RTPCADB in ATTACHED state
Current Log BSN value: (no data)
Integrated Replicat low watermark: (no data)
(All source transactions prior to this scn have been applied)
Integrated Replicat high watermark: (no data)
(Some source transactions between this scn and the low watermark may have been applied)
Extract Source Begin End
./dirdat/pa000000 * Initialized * 2017-02-22 14:23
./dirdat/pa000000000 * Initialized * First Record
./dirdat/pa000000000 * Initialized * First Record
Current directory /u02/data/gghome
Report file /u02/data/gghome/dirrpt/RTPCADB.rpt
Parameter file /u02/data/gghome/dirprm/rtpcadb.prm
Checkpoint file /u02/data/gghome/dirchk/RTPCADB.cpr
Process file /u02/data/gghome/dirpcs/RTPCADB.pcr
Error log /u02/data/gghome/ggserr.log

At this point, we have a complete OGG replication pipeline between the source Oracle database on-premises and the target Oracle database on the OPC via GGCS.

Run Test Transactions

Now we are ready to run some transactions on the on-premises source database and have them replicated by GGCS to the target database running on the DBCS instance in the OPC.

In this example, we start with empty tables on both source and target.

Check of Source Tables (On-Premises)

[oracle@ogg-wkshp db_1]$ sqlplus tpcadb/tpcadb <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 12:56:14 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 12:49:42 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Check of Target Tables from GGCS Instance

[oracle@bics-gg-ggcs-1 ~]$ sqlplus ggcsbics@target/ggcsbics <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 16:02:23 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 16:01:10 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL>
COUNT(*)
———-
0
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

Note: When the GGCS instance is provisioned, a default TNS net service name gets created in the tnsnames.ora of the GGCS instance, and that is the “target” net service. This net service name contains the connection information for the database that was associated with the GGCS instance when it was provisioned. The tnsnames.ora file can be found under the /u01/app/oracle/oci/network/admin directory.

Here’s a sample of the tnsnames.ora file that gets generated after the GGCS instance has been provisioned:

#GGCS generated file
target =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = BICS-DB)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = BICSPDB1.usoracle55293.oraclecloud.internal)
)
)
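
Before relying on the Replicat to apply data, you can optionally confirm that the “target” net service resolves and that the database is reachable from the GGCS instance. This is a hedged sketch, assuming the Oracle client utilities (tnsping, sqlplus) are on the PATH of the “oracle” user:

[oracle@bics-gg-ggcs-1 ~]$ tnsping target
[oracle@bics-gg-ggcs-1 ~]$ sqlplus ggcsbics@target/ggcsbics <<EOF
select 1 from dual;
EOF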

Run Test Transactions on the Source Tables (On-Premises) via SQLPLUS

Let’s start with insert transactions into the tables – inserting 2 records into each table, for a total of 8 insert operations since we have 4 tables.

[oracle@ogg-wkshp dirsql]$ sqlplus tpcadb/tpcadb <<EOF
INSERT INTO ACCTN (ACCOUNT_NO, BALANCE, PREVIOUS_BAL, LAST_CREDIT_AMT, LAST_CREDIT_TS, ACCOUNT_BRANCH) VALUES ( 83915, 1000, 0, 1000, TO_TIMESTAMP ('2005-08-18:15:11:37.123456', 'YYYY-MM-DD:HH24:MI:SS.FF'), 82);
INSERT INTO ACCTN (ACCOUNT_NO, BALANCE, PREVIOUS_BAL, LAST_CREDIT_AMT, LAST_CREDIT_TS, ACCOUNT_BRANCH) VALUES ( 83916, 1000, 0, 1000, TO_TIMESTAMP ('2005-08-18:15:11:37.123456', 'YYYY-MM-DD:HH24:MI:SS.FF'), 82);
COMMIT WORK;
INSERT INTO ACCTS (ACCOUNT_NO, FIRST_NAME, LAST_NAME, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE, CUSTOMER_SINCE) VALUES ( 83915, 'Margarete', 'Smith', '222 8th Ave', ' ', 'San Diego', 'CA', 97827, to_date ('1992-08-18', 'YYYY-MM-DD'));
INSERT INTO ACCTS (ACCOUNT_NO, FIRST_NAME, LAST_NAME, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE, CUSTOMER_SINCE) VALUES ( 83916, 'Margarete', 'Howsler', '1615 Ramona Ave', ' ', 'Fresno', 'CA', 91111, to_date ('1985-08-18', 'YYYY-MM-DD'));
COMMIT WORK;
INSERT INTO TELLER (TELLER_NO, BRANCH_NO, OPENING_BALANCE) VALUES ( 9815, 82, 10000 );
INSERT INTO TELLER (TELLER_NO, BRANCH_NO, OPENING_BALANCE) VALUES ( 9816, 83, 10000 );
COMMIT WORK;
INSERT INTO BRANCH (BRANCH_NO, OPENING_BALANCE, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE) VALUES ( 82, 100000, '7 Market St', ' ', 'Los Angeles', 'CA', 90001);
INSERT INTO BRANCH (BRANCH_NO, OPENING_BALANCE, ADDRESS_1, ADDRESS_2, CITY, STATE, ZIP_CODE) VALUES ( 83, 100000, '222 8th Ave', ' ', 'Salinas', 'CA', 95899);
COMMIT WORK;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 18:26:29 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 18:25:24 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL>
1 row created.
SQL>
1 row created.
SQL>
Commit complete.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Now, we will run update transactions against the tables – updating 2 records in each table, for a total of 8 update operations since we have 4 tables.

[oracle@ogg-wkshp dirsql]$ sqlplus tpcadb/tpcadb <<EOF
UPDATE ACCTN SET BALANCE=25000, PREVIOUS_BAL=1000 WHERE ACCOUNT_NO=83915;
UPDATE ACCTN SET BALANCE=55789, PREVIOUS_BAL=1000 WHERE ACCOUNT_NO=83916;
COMMIT WORK;
UPDATE ACCTS SET FIRST_NAME = 'Margie' WHERE ACCOUNT_NO=83915;
UPDATE ACCTS SET FIRST_NAME = 'Mandela' WHERE ACCOUNT_NO=83916;
COMMIT WORK;
UPDATE TELLER SET OPENING_BALANCE=99900 WHERE TELLER_NO=9815;
UPDATE TELLER SET OPENING_BALANCE=77777 WHERE TELLER_NO=9816;
COMMIT WORK;
UPDATE BRANCH SET TOTAL_ACCTS = 25000 WHERE BRANCH_NO = 82;
UPDATE BRANCH SET TOTAL_ACCTS = 55789 WHERE BRANCH_NO = 83;
COMMIT WORK;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 18:37:13 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 18:26:29 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL>
1 row updated.
SQL>
1 row updated.
SQL>
Commit complete.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Now, we will run delete transactions against the tables – deleting 1 record from each table, for a total of 4 delete operations since we have 4 tables.

[oracle@ogg-wkshp dirsql]$ sqlplus tpcadb/tpcadb <<EOF
DELETE FROM ACCTN WHERE ACCOUNT_NO = 83916;
DELETE FROM ACCTS WHERE ACCOUNT_NO = 83916;
DELETE FROM TELLER WHERE TELLER_NO = 9816;
DELETE FROM BRANCH where BRANCH_NO = 83;
COMMIT WORK;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 18:43:34 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 18:37:13 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
1 row deleted.
SQL>
1 row deleted.
SQL>
1 row deleted.
SQL>
1 row deleted.
SQL>
Commit complete.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Now, let’s just do a simple count via sqlplus of the final total number of records in our source database.

[oracle@ogg-wkshp db_1]$ sqlplus tpcadb/tpcadb <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 22 21:40:28 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 21:18:00 -08:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
COUNT(*)
———-
1
SQL>
COUNT(*)
———-
1
SQL>
COUNT(*)
———-
1
SQL>
COUNT(*)
———-
1
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

At this point, we have executed the following operations:

Table Name      Insert  Update  Delete  Total Operations  Final # of Rows/Records
TPCADB.ACCTN    2       2       1       5                 1
TPCADB.ACCTS    2       2       1       5                 1
TPCADB.TELLER   2       2       1       5                 1
TPCADB.BRANCH   2       2       1       5                 1

A total of 8 inserts, 8 updates, and 4 deletes.

Check Online Change Data Capture Extract process ETPCADB Statistics (On-Premises)

Now, let’s check the statistics for our Extract process ETPCADB via the GGSCI “STATS” command; the output should reflect the operations we have just executed on the source tables.

GGSCI (ogg-wkshp.us.oracle.com) 1> dblogin userid tpcadb password tpcadb

Successfully logged into database.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 2> stats extract etpcadb, total, table *.*

Sending STATS request to EXTRACT ETPCADB …
Start of Statistics at 2017-02-22 21:26:13.
DDL replication statistics (for all trails):
*** Total statistics since extract started ***
Operations 21.00
Output to ./dirdat/ea:

Extracting from TPCADB.ACCTN to TPCADB.ACCTN:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.ACCTS to TPCADB.ACCTS:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.TELLER to TPCADB.TELLER:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.BRANCH to TPCADB.BRANCH:
*** Total statistics since 2017-02-22 18:49:44 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00
End of Statistics.

Check Extract Datapump process PTPCADB Statistics (On-Premises)

Now, let’s check the statistics for our Extract Data Pump process PTPCADB via the same GGSCI “STATS” command; the output should also reflect the same number of operations we have just executed on the source tables.

GGSCI (ogg-wkshp.us.oracle.com as tpcadb@oracle) 3> stats extract ptpcadb, total, table *.*

Sending STATS request to EXTRACT PTPCADB …
Start of Statistics at 2017-02-22 21:48:44.
Output to ./dirdat/pa:

Extracting from TPCADB.ACCTN to TPCADB.ACCTN:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.ACCTS to TPCADB.ACCTS:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.TELLER to TPCADB.TELLER:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00

Extracting from TPCADB.BRANCH to TPCADB.BRANCH:
*** Total statistics since 2017-02-22 18:49:45 ***
Total inserts                                 2.00
Total updates                                 2.00
Total deletes                                 1.00
Total discards                                0.00
Total operations                              5.00
End of Statistics.

Check Online Change Delivery Replicat process RTPCADB Statistics (GGCS Instance on the OPC)

Now, let’s check the statistics for our online delivery Replicat process RTPCADB via the same GGSCI “STATS” command we used for our Extract processes. The output should also reflect the same number of operations we executed on the source tables, captured by the Extract (ETPCADB) process and sent over by the Extract Data Pump (PTPCADB) process.

GGSCI (bics-gg-ggcs-1) 1> dblogin useridalias ggcsuser_alias

Successfully logged into database BICSPDB1.

GGSCI (bics-gg-ggcs-1 as c##ggadmin@BICS/BICSPDB1) 2> stats replicat rtpcadb, total, table *.*

Sending STATS request to REPLICAT RTPCADB …
Start of Statistics at 2017-02-23 01:03:12.
Integrated Replicat Statistics:
Total transactions                             9.00
Redirected                                     0.00
DDL operations                                 0.00
Stored procedures                              0.00
Datatype functionality                         0.00
Event actions                                  0.00
Direct transactions ratio                      0.00%

Replicating from TPCADB.ACCTN to BICSPDB1.GGCSBICS.ACCTN:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00

Replicating from TPCADB.ACCTS to BICSPDB1.GGCSBICS.ACCTS:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00

Replicating from TPCADB.TELLER to BICSPDB1.GGCSBICS.TELLER:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00

Replicating from TPCADB.BRANCH to BICSPDB1.GGCSBICS.BRANCH:
*** Total statistics since 2017-02-23 00:59:41 ***
Total inserts                                  2.00
Total updates                                  2.00
Total deletes                                  1.00
Total discards                                 0.00
Total operations                               5.00
End of Statistics.

Now, for the final step, let’s do a simple count via SQL*Plus of the total number of records in our target database and make sure that the result matches the total number of records in our source database.

[oracle@bics-gg-ggcs-1 ~]$ sqlplus ggcsbics@target/ggcsbics <<EOF
select count(*) from ACCTN;
select count(*) from ACCTS;
select count(*) from BRANCH;
select count(*) from TELLER;
EOF

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 23 01:12:24 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Wed Feb 22 2017 16:02:34 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL>
COUNT(*)
----------
1
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Summary

This article walked through the steps to configure Oracle GoldenGate (OGG) replication from an on-premises source Oracle database to a target Oracle database running on the Database Cloud Service (DBCS) in the Oracle Public Cloud (OPC), using GoldenGate Cloud Service (GGCS).

Additional Resources:

Oracle Database Cloud Service (DBCS) 

Oracle GoldenGate Cloud Service (GGCS)

GGCS User Guide Documentation

GGCS Tutorial Section

Integrating with Taleo Enterprise Edition using Integration Cloud Service (ICS)


Introduction

Oracle Taleo provides talent management functions as Software as a Service (SaaS). Taleo often needs to be integrated with other human resource systems. In this post, let’s look at a few integration patterns for Taleo and at implementing a recommended pattern using Integration Cloud Service (ICS), a cloud-based integration platform (iPaaS).

Main Article

Oracle Taleo is offered in Enterprise and Business editions. Both are SaaS applications that often need to be integrated with other enterprise systems, on-premises or in the cloud. Here are the integration capabilities of the two editions:

  • Taleo Business Edition offers integration via SOAP and REST interfaces.
  • Taleo Enterprise Edition offers integration via SOAP services, a SOAP-based Bulk API, and Taleo Connect Client (TCC), which leverages the Bulk API.

Integrating with Taleo Business Edition can be achieved with SOAP or REST adapters in ICS, using a simple “Basic Map Data” pattern. Integrating with Taleo Enterprise Edition, however, deserves a closer look and consideration of alternative patterns. Taleo Enterprise provides three ways to integrate, each with its own merits.

Integration using Taleo Connect Client (TCC) is recommended. We’ll also address the other two approaches for the sake of completeness. To jump to a sub-section directly, click one of the links below.


Taleo SOAP web services
Taleo Bulk API
Taleo Connect Client (TCC)
Integrating Taleo with EBS using ICS and TCC
Launching TCC client through a SOAP interface


Taleo SOAP web services

Taleo SOAP web services provide synchronous integration; web service calls update the system immediately. However, there are restrictive metered limits on the number of invocations and the number of records per invocation, in order to minimize the impact on the live application. These limits might necessitate several web service invocations to finish a job that the other alternatives could complete in a single execution. Figure 1 shows a logical view of such an integration using ICS.

Figure 1

ICS integration could be implemented using “Basic Map Data” for each distinct flow or using “Orchestration” for more complex use cases.


Taleo Bulk API

Bulk APIs exchange data with Taleo Enterprise Edition asynchronously. They are SOAP-based and require submitting a job, then polling to observe the job’s status and, for read operations, optionally a final invocation to fetch the data. Bulk APIs are less restrictive than the SOAP web services in terms of the volume of records exchanged.

Bulk API invocations could include T-XML queries, CSV or XML content. T-XML queries can be easily generated from Taleo Connect Client (TCC)’s editor. Figure 2 shows a logical view of integration using the Bulk API.

Figure 2

As seen above, the integration logic is complex, with multiple calls to complete the integration and one or more polling calls to find the status of the request. Moreover, because TCC is needed to author the T-XML import/export queries, any change to a query requires re-authoring it in TCC and redeploying the integration with the modified T-XML to ICS. In addition, Bulk API requests that exceed a certain size limit require the data to be sent as MTOM attachments. A link to the Bulk API guide is provided in the References section.
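
To make the submit-and-poll interaction pattern concrete, here is a minimal Java sketch of such a client. It illustrates only the general shape of the flow: the endpoint URL, the SOAP payloads and the response parsing are hypothetical placeholders rather than the actual Taleo Bulk API contract, which is defined in the Bulk API guide referenced below.

package com.test.demo;

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class BulkApiPollingSketch {

    // Hypothetical endpoint; the real URL and message formats come from the Taleo Bulk API guide.
    private static final String BULK_ENDPOINT = "https://example-zone.taleo.net/bulk/service";

    public static void main(String[] args) throws Exception {
        // 1. Submit the job (the payload would be a T-XML export query or an import document).
        String jobId = extractJobId(post(BULK_ENDPOINT, "<soapenv:Envelope>...submit job...</soapenv:Envelope>"));

        // 2. Poll until the job reports completion, waiting between attempts.
        String status;
        do {
            Thread.sleep(30_000L);
            status = extractStatus(post(BULK_ENDPOINT, "<soapenv:Envelope>...status of " + jobId + "...</soapenv:Envelope>"));
        } while (!"COMPLETED".equals(status));

        // 3. For an export, fetch the result set produced by the job.
        String result = post(BULK_ENDPOINT, "<soapenv:Envelope>...fetch results of " + jobId + "...</soapenv:Envelope>");
        System.out.println(result);
    }

    // Posts a SOAP payload and returns the raw response body.
    private static String post(String endpoint, String payload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            return scanner.hasNext() ? scanner.next() : "";
        }
    }

    // Placeholder parsers; a real client would read these values from the SOAP responses.
    private static String extractJobId(String response) { return "job-id-from-response"; }

    private static String extractStatus(String response) { return "COMPLETED"; }
}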


Taleo Connect Client (TCC)

As stated previously, TCC provides the best way to integrate with Taleo Enterprise. TCC has a design editor to author export and import definitions and their run configurations, and it can also be run from the command line to execute the import or export jobs. TCC leverages the Bulk API to execute the imports and exports, while abstracting the complex steps involved in using the Bulk API directly. A link to another post introducing TCC is provided in the References section.

Figure 3

Figure 3 shows a logical view of a solution using TCC and ICS. In this case, ICS orchestrates the flow by interacting with HCM and Taleo. TCC is launched remotely through a SOAP service. TCC, the SOAP launcher service and a staging file system are deployed to an IaaS compute node running Linux.


Integrating Taleo with EBS using ICS and TCC

Let’s look at a solution to integrate Taleo and the EBS Human Resources module, using ICS as the central point for scheduling and orchestration. This solution is suitable for ongoing scheduled updates involving a few hundred records for each run. Figure 4 represents the solution.

Figure 4

TCC is deployed to a host accessible from ICS. The same host runs a Java EE container, such as WebLogic or Tomcat. The launcher web service deployed to the container launches the TCC client upon a request from ICS. The TCC client, depending on the type of job, either writes a file to a staging folder or reads a file from the folder. The staging folder could be local or on a shared file system, accessible to ICS via SFTP. Here are the steps performed by the ICS orchestration:

  • Invoke launcher service to run a TCC export configuration. Wait for completion of the export.
  • Initiate SFTP connection to retrieve the export file.
  • Loop through contents of the file. For each row, transform the data and invoke EBS REST adapter to add the record. Stage the response from EBS locally.
  • Write the staged responses from EBS to a file and transfer via SFTP to folder accessible to TCC.
  • Invoke launcher to run a TCC import configuration. Wait for completion of the import.
  • At this point, bi-directional integration between Taleo and EBS is complete.

This solution demonstrates the capability of ICS to seamlessly integrate SaaS applications and on-premises systems. ICS triggers the job and orchestrates export and import activities in a single flow. When the orchestration completes, both Taleo and EBS are updated. Without ICS, the solution would consist of a disjointed set of jobs that could be managed by different teams and might require lengthy triage to resolve issues.


Launching TCC client through a SOAP interface

Taleo Connect Client can be run from the command line to execute a configuration that exports or imports data. A cron job or Enterprise Scheduling Service (ESS) could launch the client. However, enabling the client to be launched through a web service allows a more cohesive flow in the integration tier and eliminates redundant scheduled jobs.

Here is sample Java code to launch a command-line program. The code launches the TCC client, waits for its completion and captures the command output. Note that the code should be tailored to specific needs with suitable error handling, and tested for function and performance.

package com.test.demo;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

public class tccClient {

    // Launches the TCC command-line client for the given job configuration and waits
    // for it to finish, draining stdout and stderr so the process does not block
    // on a full output buffer.
    public boolean runTCCJob(String strJobLocation) {
        Process p = null;
        try {
            System.out.println("Launching Taleo client. Path:" + strJobLocation);
            String cmd = "/home/runuser/tcc/scripts/client.sh " + strJobLocation;
            p = Runtime.getRuntime().exec(cmd);

            // Read both the output and the error streams on separate threads.
            Thread stdout = streamReader("stdout", p.getInputStream());
            Thread stderr = streamReader("stderr", p.getErrorStream());
            stdout.start();
            stderr.start();

            p.waitFor();
            return true;
        } catch (Exception e) {
            // Log and notify as appropriate.
            e.printStackTrace();
            return false;
        } finally {
            if (p != null) {
                p.destroy();
            }
        }
    }

    // Simple stream gobbler: echoes each line of process output, tagged with its origin.
    private Thread streamReader(final String name, final InputStream in) {
        return new Thread(new Runnable() {
            public void run() {
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println("[" + name + "] " + line);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
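
For a quick standalone check outside the web service, the launcher can be exercised directly from a main method. This is only a sketch; the configuration path is the same sample export definition used in the SOAP request later in this post, and should be adjusted for your environment.

package com.test.demo;

public class tccClientTest {
    public static void main(String[] args) {
        // Sample TCC export configuration path; adjust for your environment.
        boolean ok = new tccClient().runTCCJob("/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml");
        System.out.println("TCC job finished successfully: " + ok);
    }
}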

Here is a sample launcher service using JAX-WS and SOAP.

package com.oracle.demo;

import com.test.demo.tccClient;

import javax.jws.WebService;
import javax.jws.WebMethod;
import javax.jws.WebParam;

@WebService(serviceName = "tccJobService")
public class tccJobService {

    // Exposes the TCC launcher as a SOAP operation so that ICS (or any SOAP client)
    // can trigger an export or import job by passing the job configuration path.
    @WebMethod(operationName = "runTCCJob")
    public String runTCCJob(@WebParam(name = "JobPath") String JobPath) {
        try {
            boolean success = new tccClient().runTCCJob(JobPath);
            return String.valueOf(success);
        } catch (Exception ex) {
            // Log and notify as appropriate.
            ex.printStackTrace();
            return ex.getMessage();
        }
    }
}

Finally, this is a SOAP request that could be sent from an ICS orchestration to launch the TCC client.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:demo="http://demo.oracle.com/">
   <soapenv:Header/>
   <soapenv:Body>
      <demo:runTCCJob>
         <!--Optional:-->
         <JobPath>/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml</JobPath>
      </demo:runTCCJob>
   </soapenv:Body>
</soapenv:Envelope>
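
Before wiring the service into an ICS orchestration, it can be useful to post the same request from a small standalone client. Below is a minimal sketch using the standard SAAJ API; the endpoint URL is a placeholder for wherever tccJobService is actually deployed.

package com.oracle.demo;

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPMessage;

public class tccJobServiceTestClient {

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; replace with the actual URL of the deployed tccJobService.
        String endpoint = "http://localhost:7001/tccJobService/tccJobServicePort";

        // Build the runTCCJob request, mirroring the SOAP envelope shown above.
        MessageFactory messageFactory = MessageFactory.newInstance();
        SOAPMessage request = messageFactory.createMessage();
        SOAPBodyElement operation = request.getSOAPBody()
                .addBodyElement(new QName("http://demo.oracle.com/", "runTCCJob", "demo"));
        operation.addChildElement("JobPath")
                .addTextNode("/home/runuser/tcc/exportdef/TCC-Candidate-export_cfg.xml");
        request.saveChanges();

        // Send the request and print the raw SOAP response.
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        try {
            SOAPMessage response = connection.call(request, endpoint);
            response.writeTo(System.out);
        } finally {
            connection.close();
        }
    }
}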

Summary

This post addressed alternative patterns for integrating with Taleo Enterprise Edition, along with the pros and cons of each pattern. It explained a demo solution based on the recommended pattern using TCC and provided code snippets and steps to launch the TCC client via a web service. At the time of this post’s publication, ICS does not offer a Taleo-specific adapter. A link to the current list of supported adapters is provided in the References section.

 

References

  • Getting started with Taleo Connect Client (TCC) – ATeam Chronicles
  • Taleo Business Edition REST API guide
  • Taleo Enterprise Edition Bulk API guide
  • Latest documentation for Integration Cloud Service
  • Currently available ICS adapters


Automated unit tests with Node.JS and Developer Cloud Services

Introduction Oracle’s Developer Cloud Service (DevCS) is a great tool for teams of developers. It provides tools for continuous delivery, continuous integration, team collaboration, scrum boards, code repositories and so on. When using these features, you can leverage the best practices in an application lifecycle to deliver high-quality and manageable code. One of […]

Identity Cloud Service: Configuring SAML

Introduction As we begin to deliver our Identity Cloud Service (IDCS) to the world(https://www.oracle.com/middleware/identity-management/index.html), we on the A-Team have been working to provide patterns and how-to posts to implement some of the common use cases we see in the field.  One of the more common use cases is integrating with third party Service Providers (SP) […]

Loading Data into Oracle BI Cloud Service using BI Publisher Reports and SOAP Web Services

Introduction This post details a method of loading data that has been extracted from Oracle Business Intelligence Publisher (BIP) into the Oracle Business Intelligence Cloud Service (BICS). The BIP instance may either be Cloud-Based or On-Premise. It builds upon the A-Team post Using Oracle BI Publisher to Extract Data from Oracle Sales and ERP Clouds. […]

IDCS OAuth 2.0 and REST API

Introduction This article is to help expand on topics of integration with Oracle’s Cloud Identity Management service called Identity Cloud Service (IDCS).  IDCS delivers core essentials around identity and access management through a multi-tenant Cloud platform.  One of the more exciting features of IDCS is that you can interact with it using a REST API.  […]

IDCS Audit Event REST API

Introduction This article is to help expand on topics of integration with Oracle’s Cloud Identity Management service called Identity Cloud Service (IDCS). IDCS delivers core essentials around identity and access management through a multi-tenant Cloud platform. As part of the IDCS framework, it collects audit events that capture all significant events, changes, and actions which […]

Auto-mounting disk on Oracle Public Cloud Compute nodes

In the process of getting your Oracle Public Cloud Compute nodes ready for some real work you often need to attach additional disks to enlarge your disk capacity. The blog shows how you can easily format and mount these disks with a script that is activated through Linux’ SystemD. After you have orchestrated your nodes […]

Getting Started with Chatbots

Introduction At Oracle Open World 2016, Larry Ellison demoed the upcoming Oracle Intelligent Bots Cloud Service (IBCS), if you haven’t seen the demo, you can watch the recording on youtube. Chatbots employ a conversational interface that is both lean and smart, and if designed properly is even charming. Chat helps people find the things they want […]

Oracle GoldenGate: Working With Tokens and Environment Variables

Introduction Oracle GoldenGate contains advanced functionality that exposes a wealth of information users may leverage. In this article we shall discuss three of these, TOKENS; which is user defined data written to Oracle GoldenGate Trails, the Column Conversion Function @TOKEN; which is used to retrieve the token data from the Oracle GoldenGate Trail, and the […]

Loading Data into Oracle BI Cloud Service using BI Publisher Reports and REST Web Services

Introduction This post details a method of loading data that has been extracted from Oracle Business Intelligence Publisher (BIP) into the Oracle Business Intelligence Cloud Service (BICS). The BIP instance may either be Cloud-Based or On-Premise. It builds upon the A-Team post Extracting Data from Oracle Business Intelligence 12c Using the BI Publisher REST API. […]

Best Practices – Data movement between Oracle Storage Cloud Service and HDFS

Introduction Oracle Storage Cloud Service should be the central place for persisting raw data produced from another PaaS services and also the entry point for data that is uploaded from the customer’s data center. Big Data Cloud Service ( BDCS ) supports data transfers between Oracle Storage Cloud Service and HDFS. Both Hadoop and Oracle […]

Loading Identity Data Into Oracle IDCS: A Broad High-level Survey

Introduction Oracle Identity Cloud Service (IDCS) – Oracle’s comprehensive Identity and Access Management platform for the cloud – was released recently. Populating identity data – such as user identities, groups and group memberships – is one of most important tasks that is typically needed initially and on an on-going basis in any identity management system. […]

Publishing business events from Supply Chain Cloud’s Order Management through Integration Cloud Service

Introduction In Supply Chain Cloud (SCM) Order Management, as a sales order’s state changes or it becomes ready for fulfillment, events could be generated for external systems. Integration Cloud Service offers Pub/Sub capabilities that could be used to reliably integrate SaaS applications. In this post, let’s take close look at these capabilities in order to […]

Bulk import of sales transactions into Oracle Sales Cloud Incentive Compensation using Integration Cloud Service

Introduction Sales Cloud Incentive Compensation application provides API to import sales transactions in bulk. These could be sales transactions exported out of an ERP system. Integration Cloud Service (ICS) offers extensive data transformation and secure file transfer capabilities that could be used to orchestrate, administer and monitor file transfer jobs. In this post, let’s look […]

Uploading a file to Oracle storage cloud service using REST API

Introduction This is the second part of a two part article which demonstrates how to upload data in near-real time from an on-premise oracle database to Oracle Storage Cloud Service. In the previous article of this series, we demonstrated Oracle GoldenGate functionality to write to a flat file using Apache Flume File Roll Sink. If you would […]

Integrating Big Data Preparation (BDP) Cloud Service with Business Intelligence Cloud Service (BICS)

Introduction This article presents an overview of how to integrate Big Data Preparation Cloud Service (BDP) with Business Intelligence Cloud Service (BICS).  BDP is a big data cloud service designed for customers interested on cleansing, enriching, and transforming their structured and unstructured business data.  BICS is a business intelligence cloud service designed for customers interested […]

Automated Deployment to SOA Cloud Services using Developer Cloud Services

Introduction The process of deploying SOA Projects to Oracle SOA Cloud Services (SOACS) can be significantly simplified and streamlined using the Oracle Developer Cloud Services (DevCS) by facilitating the inbuilt Maven and GIT repositories. This article is a walk-through on how create a SOA Project in JDeveloper and get it deployed on SOACS using DevCS. […]