
WebCenter Content and Multiple Identity Providers: The Virtualization Issue

A common scenario with the WebCenter Content suite of products is one where an external LDAP directory, such as Oracle Internet Directory (OID) or Active Directory (AD), is used along with the embedded WebLogic LDAP 'DefaultAuthenticator'. By default, only users and groups from the primary authenticator (the first authenticator in the WLS provider list) are available to Oracle Platform Security Services (OPSS). Enabling the virtualization setting works around this limitation by merging the roles from the external LDAP and the embedded LDAP into a single identity store.

To check whether the setting is enabled, use Enterprise Manager:

1. Log in to Fusion Middleware Control and navigate to Domain > Security > Security Provider Configuration to display the Security Provider Configuration page.
2. Expand, if necessary, the Identity Store Provider area and click the "Configure" button to display the Identity Store Configuration page.
3. In the Custom Properties section, the virtualize setting will be displayed if it is turned on.

The side effect of virtualization is added complexity in the lookup of user groups, since additional work is then needed to authorize users. Given that most customers only use the DefaultAuthenticator for a single user (weblogic), turning on virtualize for the sake of one user is not recommended. The impact of using virtualize can be significant, depending on the complexity of the external LDAP directory. The better approach is to avoid using the DefaultAuthenticator at all, and to use the external LDAP directory for all user roles, including the WebLogic Console administrator.

Turning off the DefaultAuthenticator can be simple with Oracle Internet Directory, but only if the OID administrator will create a user named "weblogic" (or whatever admin user name is chosen) and add that user to a group called "Administrators" in OID. With Active Directory this is not as simple. In Active Directory, the "Administrators" group has special meaning, just as it does in WebLogic: there is a naming collision. AD administrators are reluctant to add any user to the Administrators group, since doing so opens up the domain to that user. A "weblogic" user would then have full access to AD, and no AD administrator is likely to approve that request.

The solution to this naming collision is to use a different LDAP group, such as FMWAdministrators, so that Active Directory can keep its Administrators group protected from application users while WLS and application users still get full administrative access. Once an FMWAdministrators group exists in Active Directory, the XACML (eXtensible Access Control Markup Language) settings in WebLogic can be updated to use FMWAdministrators instead of Administrators for granting access to the WLS console. Of course, the "weblogic" administration user needs to be a member of the FMWAdministrators group for this to work properly.

The primary reason to remove the DefaultAuthenticator is to improve performance. The virtualize=true setting is easy to turn on, but it adds complexity to the user authorization process. In development and test environments this setting may not show any performance degradation, but in production it can lead to unwanted side effects in your applications as the LDAP structure becomes increasingly complex.
The best scenario in production is to use your external enterprise LDAP directory, such as OID or Active Directory, and turn off the DefaultAuthenticator.

WebCenter Content Prerequisite Steps

Removing the DefaultAuthenticator for use with WebCenter Content and WebCenter Content: Imaging requires a set of steps that must be performed in order to maintain proper access to content. Like WebLogic and Active Directory, WebCenter Content also uses the Administrators group for assigning elevated access. Before removing the WebLogic DefaultAuthenticator, create a Credential Map in WebCenter Content that maps FMWAdministrators to both the Administrators and admin roles.

Example Credential Map name: FMWAdministrators
Example Credential Map contents:
|#all|, %%
FMWAdministrators, Administrators
FMWAdministrators, admin
This mapping allows the content server to use the FMWAdministrators group for admin users. The map must be set for use in the JpsUserProvider, otherwise it will not take effect. To do this, the provider.hda file must be updated. For the JpsUserProvider, the provider.hda file is located at:

<domain>/ucm/cs/data/providers/jpsuserprovider

Keep in mind that if WebCenter Content is clustered, this path will be on the shared file system, not on local disk. Edit the file in a text editor and add the following line anywhere in the "@Properties LocalData" section:

ProviderCredentialsMap=FMWAdministrators

Upon restart, the JpsUserProvider in WebCenter Content will begin mapping the FMWAdministrators role. An important point: add this Credential Map prior to removing the DefaultAuthenticator, otherwise you will have to add the map manually on the file system.

The other requirement for WebCenter Content when removing the DefaultAuthenticator is that the Admin Server link in WCC will not work unless the admin user (e.g. weblogic) is a member of either the Administrators or sysmanager group; WebCenter Content blocks access to the Admin Server otherwise. With Active Directory you will likely not be using Administrators, which is why the FMWAdministrators group is needed in the first place. The remaining option is therefore to create a group called "sysmanager" in the external LDAP directory (AD or OID) and assign the desired WebCenter Content admin user to that group, whether it is weblogic or another user.
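As a reference for the provider.hda edit described above, the relevant section would then look something like the excerpt below. This is a hypothetical sketch: only the ProviderCredentialsMap line comes from this article, and the existing entries in your file should be left untouched.

@Properties LocalData
... (existing provider properties, unchanged) ...
ProviderCredentialsMap=FMWAdministrators
@end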

Removing the Default Authenticator

Many of the steps below are similar to those in the Oracle Business Intelligence documentation on "Using Alternative Authentication Providers", although some of the BI-specific users and groups are not needed. The official documentation used by BI customers is available at the link below.

http://docs.oracle.com/cd/E23943_01/bi.1111/e10543/privileges.htm#BABFHAIC

Back Up the config.xml File

Back up the system before deleting the DefaultAuthenticator. To do so, make a copy of the <domain-home>/config directory so that it may be restored if needed.

Create the Active Directory or OID Authenticator

This has likely already been done if you are considering removal of the DefaultAuthenticator. If not, create this provider, move it to the top of the provider list in the WebLogic security realm, and restart the services.

Identify or Create Essential Users in the External LDAP

Create the following essential users in Active Directory or OID:
  • weblogic - This username may be different if you have defined another admin username in the embedded WebLogic LDAP directory.
  • OracleSystemUser - This user is needed for Oracle Web Services Manager (OWSM).

Create Essential Groups in the External LDAP

To remove the DefaultAuthenticator, certain groups must exist in the external LDAP directory:
  • FMWAdministrators - This group name can be anything you choose.
  • AdminChannelUsers
  • AppTesters
  • CrossDomainConnectors
  • Deployers
  • Monitors
  • Operators
  • OracleSystemGroup
  • sysmanager - The group needed for WebCenter Content access to the Admin Server.

After creating the users and groups in Active Directory, the default weblogic user (or the username you have chosen) should be made a member of the FMWAdministrators group, and OracleSystemUser should be a member of OracleSystemGroup.

Note: The WebLogic security realm can export an ldif file of all users and groups defined in the embedded LDAP directory, which provides a way to export all users and groups at once. However, the ldif produced will need modification before it can be used to create users and groups in Active Directory or OID. To get an ldif file for the DefaultAuthenticator, in the WebLogic Console click Security Realms -> myrealm -> Providers -> DefaultAuthenticator -> Migration -> Export.

Update WebLogic to Use FMWAdministrators for the "Admin" Role

1. Log in to the WebLogic Admin Server console.
2. Click Security Realms -> myrealm -> Roles and Policies -> Global Roles.
3. Expand "Roles". On the row with the role "Admin", click "View Role Conditions".
4. Click the "Add Condition" button.
5. Select "Group" from the Predicate List drop down and click "Next".
6. In the "Group Argument Name" text field, enter "FMWAdministrators".
7. Click the "Add" button and then click "Finish".

At this point the "Admin" role for the WLS console will be assigned to users who are members of either Administrators or FMWAdministrators. Once the DefaultAuthenticator has been removed, the Administrators condition should also be removed, unless you want to allow your external LDAP Administrators admin access to the WebLogic Console.

Imaging Solution Accelerators and SOA Roles

If using the Imaging Accounts Payable Solution Accelerator, steps must be taken to update the SOA application roles in Enterprise Manager. Various SOA roles rely on the Administrators group, so FMWAdministrators must also be added for SOA to function as expected when the Administrators group is no longer in use. Out of the box, SOA has all of the following application roles set to "Administrators"; add FMWAdministrators to each of them:
SOAAdmin
SOAOperator
SOAMonitor
SOAAuditAdmin
SOAAuditViewer
SOADesigner

This can be done through Enterprise Manager or WLST.

Updating the SOA roles using Enterprise Manager:

1. Log in to Enterprise Manager (e.g. http://hostname:7001/em).
2. Click on soa-infra.
3. Click on the SOA Infrastructure dropdown and navigate to Security -> Application Roles.
4. Click the Search button in the middle of the screen (this will display the SOA application roles).
5. Add the group to each of the roles listed above.

Updating the SOA roles using WLST:

The WebLogic Scripting Tool can also be used for this step. Change the paths to the Middleware home as needed.
export ORACLE_HOME=/opt/middleware/Oracle_SOA1
cd $ORACLE_HOME/common/bin
./wlst.sh

#connect to the SOA server
connect('weblogic','welcome1', 't3://hostname:7001')

#Add the new external group name to SOAAdmin role
grantAppRole(appStripe='soa-infra',appRoleName='SOAAdmin',principalClass='weblogic.security.principal.WLSGroupImpl',principalName='FMWAdministrators')
grantAppRole(appStripe='soa-infra',appRoleName='SOAOperator',principalClass='weblogic.security.principal.WLSGroupImpl',principalName='FMWAdministrators')
grantAppRole(appStripe='soa-infra',appRoleName='SOAMonitor',principalClass='weblogic.security.principal.WLSGroupImpl',principalName='FMWAdministrators')
grantAppRole(appStripe='soa-infra',appRoleName='SOAAuditAdmin',principalClass='weblogic.security.principal.WLSGroupImpl',principalName='FMWAdministrators')
grantAppRole(appStripe='soa-infra',appRoleName='SOAAuditViewer',principalClass='weblogic.security.principal.WLSGroupImpl',principalName='FMWAdministrators')
grantAppRole(appStripe='soa-infra',appRoleName='SOADesigner',principalClass='weblogic.security.principal.WLSGroupImpl',principalName='FMWAdministrators')
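Optionally, you can verify each grant from the same WLST session. listAppRoleMembers is a standard OPSS WLST command, though the exact output format varies by version:

#Verify that the group now shows up among the role members
listAppRoleMembers(appStripe='soa-infra',appRoleName='SOAAdmin')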
Updating Imaging Security

At this point you will also want to run the MBean operation "refreshIPMSecurity" to make sure everything is updated in the Imaging managed server.

1. Log in to Enterprise Manager.
2. Navigate down to the Imaging server under the WebLogic Domain folder.
3. Once the right-hand pane refreshes, click on the 'WebLogic Server' drop down menu and select 'System MBean Browser'.
4. In the MBean Browser tree, go to Application Defined MBeans -> oracle.imaging -> Server: IPM_server1 -> cmd -> cmd.
5. Click on the 'refreshIPMSecurity' link in the right-hand pane.
6. Press the Invoke button.

In WebCenter Content: Imaging (IPM), all existing security references to DefaultAuthenticator users/groups that have not been duplicated in the external LDAP will need to be replaced with external LDAP users/groups by walking through System Security, Definition Security, and Application Document Security in the IPM UI. This must be performed before the DefaultAuthenticator is removed or virtualization is turned off.

Delete the DefaultAuthenticator

Before deleting the DefaultAuthenticator, verify that the Active Directory or OID users and groups show up in the WLS security realm. Click on the "Users and Groups" tab of the realm page to verify that the LDAP provider is finding external users and groups. Also log out of the WebLogic Console and attempt to log in as the new administrative user you have created in the external LDAP provider. In addition, don't forget to turn off virtualize=true in Enterprise Manager. This step should be done before removing the DefaultAuthenticator.

1. Log in to Fusion Middleware Control and navigate to Domain > Security > Security Provider Configuration to display the Security Provider Configuration page.
2. Expand, if necessary, the Identity Store Provider area and click the "Configure" button to display the Identity Store Configuration page.
3. In the Custom Properties section, the virtualize setting will be displayed if it is turned on. Remove the setting.

Once that is verified, go to the Providers tab, check the box next to DefaultAuthenticator, and click the "Delete" button. Restart the WebLogic Admin and Managed Servers and verify that you can log in to the WLS console as weblogic, or whatever user you set up as the WLS admin.

Note: If users or passwords are changed for the admin users referenced in credentials for web services, csf-key values will need to be updated. Credentials are used in keys for calling web services, so if a username or password change is made in AD, the credential stores need to be updated. Imaging's AXF integration point can be used with web services that communicate with SOA, E-Business Suite, and PeopleSoft; if the username or password changes, the keys must be updated to match.
  • Imaging: MA_CSF_KEY and basic.credential
  • SOA: basic.credential
  • EBS: SOAP user used by AXF: execute fnd_vault.put('AXF','AXF_SOAP_USER','SOAP_PASSWORD');
  • PSFT: Integration Broker -> Node Configuration

If using a different user or password from the original weblogic user, the boot.properties file will need to be manually updated. Back this file up and replace the encrypted user and/or password with the new values. Once the services have been restarted, this information will automatically be encrypted.
Additional warning: If you are using Active Directory, the weblogic user is restricted to logging on from certain workstations/hosts, and WebLogic is running on Linux (or another non-Windows platform), the WebLogic servers may fail to start because of a bind failure. The weblogic user must not be restricted to specific "Logon Workstations", otherwise the resulting error is difficult to locate. Run a manual ldapbind to test that the weblogic user can bind to AD. If it cannot bind, the issue may be an LDAP data 531 error, meaning the user is restricted to logging on only from certain machines. Example output from a failing ldapbind command is shown below:

ldap_bind: Invalid credentials
ldap_bind: additional info: 80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 531, v1db1

Verify Imaging and Content Server Logins Work as Expected

Log in to Imaging to ensure that all applications show up as they did before the DefaultAuthenticator was removed. Log in to the Content Server as the weblogic user and click on the username in the upper right. The FMWAdministrators role should appear, and the user should also have the "admin" role in the list. Verify that the sysmanager role appears as well. To test that the Admin Server is opening correctly, go into the Administration menu and click "Admin Server".

Conclusion

The virtualize=true option is a powerful feature, but it is often not needed and adds an additional layer of complexity to security. If you have multiple external LDAP providers, you may require the virtualization setting. Even in that situation, however, it is often best to remove the DefaultAuthenticator: instead of virtualizing two sets of external LDAP directory roles plus the embedded WLS LDAP directory, only the external directories need to be merged. The WLS embedded LDAP directory can be removed safely, giving a performance boost to user interactions not only within the Imaging and WebCenter Content instances but throughout the entire ECM product stack.

Lastly, performance between Imaging and WebCenter Content can be further improved by changing the user caching settings on WebCenter Content. By default, WCC has a UserCacheTimeout of 1 minute (UserCacheTimeout=60000). Increasing this value makes the link between Imaging and the WCC repository faster, since authorization only needs to be re-checked at a longer interval. To increase the UserCacheTimeout on the Content Server, add an entry to the config.cfg file using milliseconds as the value. To set the timeout to 1 hour, use the following:

UserCacheTimeout=3600000

This will make the Content Server return requests to Imaging faster, as well as reduce the load on the LDAP directory servers.

If virtualization is still needed with AD...

Customers that require multiple security providers will still need to set the virtualize=true flag. When this is the case, the DefaultAuthenticator should still be removed to reduce the workload of resolving groups in the OPSS layer. This is especially important if users have many group associations (e.g. 50 assigned groups). When virtualize=true is enabled, a parameter called max.search.filter.length is hard-coded at 500 bytes. When making nested group membership calls, the number of (uniquemember=group_dn_1 OR uniquemember=group_dn_2 OR ...) filter values is limited by this maximum filter length. For a user with around 100 roles assigned, this results in around 10-15 LDAP search calls for a single user.
Each LDAP search call may only take 50 milliseconds, but because the calls are made serially, and for each individual security provider, performance problems arise. Most LDAP providers, however, can support much longer filters. Increasing max.search.filter.length to a bigger value can reduce the number of nested search calls significantly. To benefit from this setting, a patch needs to be applied to the oracle_common home. Once the patch is applied, the setting needs to be added to the jps-config.xml file. To add this setting in Enterprise Manager, use the following steps. Note that this setting only has value if virtualize=true is already enabled.

1. Log in to Fusion Middleware Control and navigate to Domain > Security > Security Provider Configuration to display the Security Provider Configuration page.
2. Expand, if necessary, the Identity Store Provider area and click the "Configure" button to display the Identity Store Configuration page.
3. In the Custom Properties section, verify that the virtualize=true flag exists. If not, then max.search.filter.length is not required.
4. Add the parameter max.search.filter.length=20000.

The base bug for the max.search.filter.length setting is below. This bug is fixed in 11.1.1.9 and 11.1.1.5. Backports to 11.1.1.6 and 11.1.1.7 have been requested at this time.

Bug 17302469 - MAX.SEARCH.FILTER.LENGTH IS NOT CONFIGURABLE FOR LIBOVD PROVIDER
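If you edit jps-config.xml by hand instead of through Enterprise Manager, the custom properties end up on the identity store service instance. The excerpt below is a hypothetical sketch: the serviceInstance name and provider attributes depend on your domain configuration, and only the two property lines come from this article.

<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
  <!-- existing identity store properties left unchanged -->
  <property name="virtualize" value="true"/>
  <property name="max.search.filter.length" value="20000"/>
</serviceInstance>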

Going Mobile with ADF – Understanding the Options


Introduction

With over 90% of internet traffic now coming from mobile devices, there is huge pressure on companies to make their customer-facing applications suitable for rendering on smart phones and tablets. The A-Team is involved with a number of customers who are looking for ways to adapt their existing Oracle ADF applications and make them "mobile-enabled". This article is the first in a series that describes what we learned from the experiences with these customers, from both a technical and a non-technical perspective. This first article provides insight into the technology choices you need to make and the implications of those choices. It also contains many links to additional information. The next articles will provide in-depth technical guidance to optimize rendering of ADF Faces applications on mobile devices, as well as techniques to support working in offline mode using ADF Mobile.

Main Article

This article is divided into three sections. The first section, "Understanding the Mobile User Interface", discusses whether mobile-enabling existing ADF (Faces) applications is desirable at all. The second section, "Choosing the Right Technology", discusses the technology options and the choices you need to make. The third section, "Conclusions and Recommendations", provides Oracle's opinion on the preferred approach to ADF-based mobile development.

Understanding the Mobile User Interface

Ask any software developer whether they understand the differences between a desktop user interface and a mobile user interface and most likely they will say "Yes, of course". The three obvious differences are:
  • Touchscreen instead of mouse
  • No external keyboard
  • Smaller screen size
So, to mobile-enable an existing desktop web application, we need to make sure that actions typically performed with the mouse can be done with a touch gesture, and we need to make sure that the web page is still usable on a smaller screen, where part of the screen real estate might be consumed by the touch keyboard. With the aid of responsive web design techniques that shouldn't be too hard, right? And by the way, the screen size and screen resolution on a "traditional" large tablet like the iPad is such that we hardly need to change anything in the page layout.

Well, this is the typical view of a person focused on the technical aspects. It completely ignores the fundamental differences in how mobile devices are used, and the type of user interface that makes good mobile apps so appealing and productive. Oracle's User Experience (UX) group has created some excellent material which we consider mandatory reading before embarking on a project to mobile-enable your existing web applications:
  • Introduction to mobile design guidelines: a set of 10 design principles that reflect key considerations in mobile application design.
  • Oracle Applications UX Tablet Guide (iBook or PDF version): excellent and very comprehensive guide that makes you truly understand mobile user interfaces, including the huge differences between smart phones and tablets.
  • ADF Mobile Design Wiki: contains lots of mobile design patterns and a list of UI components available in ADF Mobile.
As said, mandatory reading, but we will summarize two key take-aways here:
  • The nature of tasks performed on mobile devices can be very different from tasks performed on desktop computers
  • The natural user interface (NUI) on mobile devices is fundamentally different from the graphical user interface (GUI) on desktop computers.
So, from a usability perspective the conclusion is clear: it is undesirable to take an existing desktop web application and make it suitable for mobile rendering with as few changes as possible. However, as with any software development project, usability and end-user friendliness is just one of many aspects to take into account. Budget constraints, time-to-market, available technical skills, ease of maintenance, deployment models, supported device types, and reuse of existing code are just a few of the other aspects that organizations weigh when "going mobile". The next section discusses the technology options available when all of the above aspects are taken into account.

Choosing the Right Technology

There are three Oracle technologies to consider for mobile applications:
  • ADF Mobile: Java and HTML5-based framework that allows you to develop iOS and Android applications that run on the mobile device using a single code base. ADF Mobile supports access to native device services and enables offline applications. Applications built with ADF Mobile can be distributed through Google Play or the Apple App Store.
  • ADF Mobile Browser: Refers to server-side web applications that use Apache MyFaces Trinidad, a set of JSF components optimized to render on mobile devices using mobile-specific Cascading Style Sheets (CSS).
  • ADF Faces Rich Client: Refers to server-side web applications that use ADF Faces Rich Client, a set of over 150 Ajax-enabled JSF components that let you build a very rich web user interface for Java EE applications. ADF Faces RC uses specific component renderers and CSS files to optimize rendering on mobile devices using HTML5.
Here is a list of aspects and considerations that should help you make a choice between these technologies.

Offline Usage

If you need to support offline usage of your mobile application, you have to use ADF Mobile. ADF Mobile Browser and ADF Faces RC are server-side technologies that do not support offline usage; the new HTML5 feature for running web applications offline is NOT supported by either ADF Mobile Browser or ADF Faces RC.

Device Features Integration

ADF Mobile fully supports integration with mobile device features. It is possible to update contacts, send SMS or e-mail, take or show a picture, display or store a file, or use the GPS location of the device, for example to show nearby places on a geographical map. ADF Faces RC and ADF Mobile Browser support a more basic integration with email, telephony and Google Maps using specific URL prefixes like "mailto:" and "tel:", as sketched below. See the chapter "Extending ADF Mobile Browser Applications" in the Mobile Browser Developer's Guide for ADF for more information. Note that while this is the guide for ADF Mobile Browser, the techniques described in this chapter can be used with ADF Faces RC as well.
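For example, a tap-to-call or tap-to-email link can be built with plain goLink components. This is a minimal sketch; the phone number and email address are of course placeholders:

<af:goLink id="gl1" text="Call support" destination="tel:+15551234567"/>
<af:goLink id="gl2" text="Email support" destination="mailto:support@example.com"/>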

HTML5 Support

ADF Mobile is fully based on HTML5. ADF Faces RC automatically uses HTML5 for rendering advanced components when it has detected that the browser supports HTML5. It is also possible to use the new HTML5 input types with ADF Faces RC as will be illustrated in the next article in this series. The ADF Mobile Browser option has not changed for years and does not support HTML5.

Mobile User Interface

ADF Mobile is the best choice for creating a natural user interface that has a device-native look and feel and fully integrates with mobile device features. ADF Faces RC provides by far the richest set of user interface components, including very advanced data visualization components. The recently added Skyros skin is a "skinny" skin that minimizes the usage of images, making it the best performing choice for mobile rendering. The user interface that can be created with the Trinidad components of ADF Mobile Browser is more sober. We have seen customers complain about the way ADF Mobile Browser renders the calendar popup window and popup windows created using the Trinidad dialog framework: both types of popup windows render as a new tab in the mobile browser, which visually "disconnects" them from the current application. These rendering issues do not occur with ADF Faces RC, which fully supports inline dialogs on mobile devices.

Mobile Browser Support

Browser support is not applicable to ADF Mobile: it runs in the native WebView of iOS and Android devices, so it does not matter which browser is installed on the device. ADF Mobile Browser supports a wide range of mobile browsers. ADF Faces RC supports a more limited set of mobile browsers; see the 11g and 12c certification matrices. Note: look under the heading "Browsers", and do not get confused by the heading "Mobile Browsers" in these two certification matrices; that heading only applies to browsers supported by ADF Mobile Browser, not to the mobile browsers supported by ADF Faces RC.

Reuse of Business Logic

Reuse of business logic coded in Java for existing desktop (web) applications can be achieved with all three technologies. The better the existing application adheres to the architectural Model-View-Controller pattern, the easier it will be to reuse the logic coded in the model layer. For ADF Mobile Browser and ADF Faces RC this reuse is trivial, as the web application can run on the same Java EE server that contains the model-layer business logic. For ADF Mobile applications, the functionality of the business logic layer needs to be exposed through SOAP or RESTful web services.

Reuse of Web Pages - Responsive or Adaptive Web Design

The concept of designing and building one web page that automatically adapts its layout and behavior to the device and browser used is very attractive to many customers. It maximizes reuse of web pages, enables a fast time-to-market for mobile applications, and reduces maintenance cost, as only one code base for the View and Controller layers needs to be maintained. This concept has become very popular under the terms responsive web design and adaptive web design. There is some debate about the difference between these two terms, but in the context of ADF the distinction made by the Wikipedia definitions makes a lot of sense: responsive design is implemented client-side in the browser, adaptive design is implemented server-side. An important technique used in client-side responsive design is CSS media queries, and this technique can be used with ADF Faces as well. While not tested, it should be possible to use the same technique with the Apache MyFaces Trinidad components. However, as JavaServer Faces (JSF) generates the HTML that is sent to the browser on the server, it might make more sense to go for adaptive design and tweak the generated HTML server-side using the browser agent info; otherwise, chunks of HTML might be sent over the wire that end up not being used for rendering because of some CSS media query. ADF Faces RC has a number of adaptive design techniques built in that optimize mobile rendering:
  • Pages that have a stretching layout on a desktop browser will automatically use a flowing layout on a mobile browser, allowing the user to scroll using a swipe-down action if needed.
  • ADF data visualization components (DVT's) that use Flash to render on older desktop browsers will automatically use HTML5 to render on mobile browsers.
  • Tables with a scrollbar on a desktop browser will automatically render with Google-style pagination controls on mobile browsers.
  • Touch and tap gestures are supported to perform actions typically done with mouse clicks and mouse movements on a desktop browser.
ADF product manager Shay Shmeltzer wrote two articles that highlight these behaviors: "Adaptive Layout for ADF Faces on Tablets" and "Tablet Support in ADF Faces with 12c". ADF product management has stated there are plans to further enhance the responsive design support in ADF Faces over the next couple of releases. One issue that must be dealt with, though, is that not all ADF Faces certified browsers support CSS3, so there has to be a story for older browsers as well. A practical approach to page design is needed, because you can easily encounter very complex design and layout issues: you are effectively building more than one page layout per page, and this can easily escalate the complexity. In the meantime, additional adaptive design behavior can be built in using custom code, as illustrated in the article "Running ADF Faces on Mobile Phones and Tablets" in this series. Another example of this approach can be found in the article "Creating an adaptive layout in ADF for desktop vs. tablet" by Andrew Robinson, a member of the ADF Faces development team.

Mixing Technologies

ADF Mobile allows you to run hybrid applications, where part of the application runs locally on the device and part is supplied through web pages delivered by a remote server. This means you can use ADF Mobile together with the other two technologies, creating a hybrid application where the end user can be ignorant as to which pages are served locally and which pages are served remotely. Note that even if you do not need the unique features of ADF Mobile, it still might be worth using an ADF Mobile application as a "UI shell" for your remote web pages. This has two benefits:
  • the application is easily accessible through an icon on the mobile device; there is no need to open the mobile browser and enter the application URL.
  • the web page is shown without the browser chrome, which makes it look more native AND leaves more screen real estate available for the web page itself.

Mixing ADF Faces RC and Apache MyFaces Trinidad components in one web page is NOT supported. We have seen customers who developed some ADF task flows with page fragments using the Trinidad components and tried embedding those task flows as regions in pages built using ADF Faces RC, or vice versa. This will not work correctly, as the JavaScript libraries coming with these two JSF component sets are not compatible with each other.

Conclusions and Recommendations

ADF Mobile Browser has been around for a number of years and used to be the obvious choice for bringing web applications to mobile devices, including old devices with limited HTML capabilities. ADF Faces RC and ADF Mobile, on the other hand, now offer a better solution with a more modern interface for developing mobile web applications for modern devices. In recent releases of ADF Faces RC, important new functionality has been added to optimize mobile rendering, and this will continue in future releases, increasing support for state-of-the-art technologies like HTML5 and CSS3. Given this trend, the A-Team and ADF product management strongly recommend using ADF Faces RC as the primary choice for mobile-enablement of ADF applications. ADF Mobile brings unique features to the table that can be used to build a truly natural user interface, enhancing user experience and productivity. We recommend customers build hybrid applications using ADF Mobile: initially, the ADF Mobile app can largely act as a UI shell for remote (ADF Faces RC) web pages, and over time the remote pages can be replaced incrementally with local on-device pages that provide a more native look and feel and fully leverage the unique capabilities of mobile devices.

Going Mobile with ADF – Running ADF Faces on Mobile Phones and Tablets


Introduction

With over 90% of internet traffic now coming from mobile devices, there is huge pressure on companies to make their customer-facing applications suitable for rendering on smart phones and tablets. The A-Team is involved with a number of customers who are looking for ways to adapt their existing Oracle ADF applications and make them "mobile-enabled". This article is the second in a series that describes what we learned from the experiences with these customers, from both a technical and a non-technical perspective. The first article, "Understanding the Options", provides insight into the technology choices you need to make and the implications of those choices, and contains many links to additional information. This second article provides tips and techniques to optimize rendering of ADF Faces applications on mobile devices.

Main Article

We will take an existing ADF application (version 12.1.2 or 11.1.1.7) that runs fine on desktop browsers, and then run the same application both on an iPad and on an Android smart phone. We will discuss the changes we need to make to run the same application with the same web pages successfully on a desktop browser, a tablet, and a smart phone. Please refer to the first article in this series for a discussion of the desirability of this "adaptive design" approach. In this article, we assume you have valid reasons to choose this approach and are looking for technical guidance in getting the job done.

Testing the Sample Application on Mobile Devices

The application is built using the UIShell approach: it consists of a single .jsf page (or .jspx page when using JDeveloper 11.1.1.x) and uses a dynamic ADF region to switch the page content based on the menu tab that is selected. For details on this best-practice implementation, see the article "Core ADF11: UIShell with Menu Driving a Dynamic Region". We use the Skyros skin introduced in JDeveloper 11.1.1.7, as this skin minimizes the usage of images in favor of CSS3 shadows and gradients, making it perform better than older skins, in particular on mobile devices on a low-bandwidth network. If you want to use another skin in your desktop browser, you can make the skin dynamic as well by using an EL expression in trinidad-config.xml, as sketched below. [Screenshots: EmpTableDesktop, EmpEditDesktop - the sample application in a desktop browser]
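A hypothetical trinidad-config.xml that picks a skin per device type might look like this; the skin-family element accepts an EL expression, and the skin names and the agentInfo bean (introduced later in this article) are illustrative only:

<?xml version="1.0" encoding="UTF-8"?>
<trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
  <!-- lightweight Skyros skin on smart phones, another skin elsewhere -->
  <skin-family>#{agentInfo.smartPhone ? 'skyros' : 'fusion'}</skin-family>
</trinidad-config>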

Testing on iPad

If we run the same application on an iPad, the employees table page looks like this: [Screenshot: EmpTableIpad] As you can see, the vertical and horizontal scrollbars are gone, and you can swipe to the left to see the remaining columns. Also note that the stretching layout has been replaced with a flowing layout. The number of records shown depends on the table fetch size. This becomes clearer when we switch the orientation to portrait: while there is room for more records, only 25 records are shown. This implies we need to make the fetch size dynamic based on the form factor of the mobile device. The pagination controls to scroll through the table rows are not visible; this is because the autoHeightRows property is not yet set on the table. This property must be set to get proper pagination controls, as we will see later on. Note that in JDeveloper 11.1.1.7 the behavior is slightly different: you will see pagination controls even when the autoHeightRows property is not set, and the paging size is taken from the iterator binding range size as defined in the page definition. When we navigate to the edit employee page, it looks pretty much the same as in the desktop browser. [Screenshot: EmpEditIpad] The calendar popup you get on the date field is nice, but looks different from the standard iOS date picker. If you tap into the email field, a default keyboard pops up without the @ character. Likewise, it would be nice if the numeric keyboard showed up when tapping in a number field like Salary or CommissionPct. These small annoyances with email, number and date fields can be solved by using HTML5 input types, as we will discuss later on.

Testing on Smart Phone

When we run the same application on an Android phone (HTC Desire X), this is what we get: [Screenshot: EmpTablePhonePortraitUnscaled] It is hardly readable. If we switch to landscape orientation, it is slightly better, but still unusable. [Screenshot: EmpTablePhoneLandscapeUnscaled] Before the user can even read what is on the screen, he needs to reverse-pinch with two fingers to enlarge the font. We will explain how to fix this initial rendering in the next section.

Setting the Initial Scaling

By default, mobile web browsers will try to fit the whole page within the available width, which means the browser will zoom out until the page fits. To fix this we need to add the viewport meta tag; see the articles "Don't Forget the Viewport Meta Tag" and "Using the viewport meta tag to control layout on mobile browsers" for more information. To add this meta tag in our sample application, we add the following code snippet to our UIShell page:
<f:view >
  <af:document title="ADF Faces Mobile Demo" id="d1">
    <f:facet name="metaContainer">
      <af:group id="g1">
        <meta name="viewport" content="width=device-width, initial-scale=1"/> 
      </af:group>
    </f:facet>
If your application consists of multiple .jspx or .jsf pages, you obviously need to add the meta tag to every page. Now, let's refresh the application in our smart phone browser. [Screenshots in portrait and landscape orientation: EmpTablePhonePortrait, EmpEditPhone, EmpTablePhoneLandscape, EmpEditPhoneLandscape] Not surprisingly, even though the application is now scaled properly, you can see that this page layout is not going to work on such a small screen. A lot of precious screen real estate is taken by the page header, footer and menu, and the table and form layouts are too wide, forcing the user to swipe to the left all the time. The "View" and "Detach" options of the panelCollection surrounding the table (shown in Dutch in the screen shot) are also quite useless on a mobile phone, and they consume valuable space as well. In the section "Adapting the Layout to Enhance Mobile Rendering" we will discuss a technique that allows you to radically change the page layouts and make them suitable for display on smart phones without making changes to individual pages. But first we need to collect as much information as possible about the browser agent used on the mobile device, a prerequisite for determining how the layout should be adapted.

Obtaining Browser Agent Info

The ADF Faces API provides various methods to get information about the browser agent. Two agent properties are useful in the context of this article; all other documented properties either return the same value for all devices or return the string "unknown". The table below shows the two properties and their values for the three test devices used for this article.
Property Name | Expression                                           | Desktop Value | iPad Value | HTC Value
Platform Name | #{requestContext.agent.platformName}                 | windows       | iphone     | android
Touch Screen  | #{requestContext.agent.capabilities['touchScreen']} | none          | multiple   | single
To get the same information in Java, you can use code like this:
RequestContext context = RequestContext.getCurrentInstance();
Agent agent = context.getAgent();
String platformName = agent.getPlatformName();
String touchScreen = (String) agent.getCapabilities().get("touchScreen");
While this information helps, it does not tell us which kind of device is used. Both an iPad and an iPhone will return "iphone" as the platform name, and both will return "multiple" as the value of the touchScreen capability. Since there are so many different sizes of mobile phones and tablets, we really need the actual browser window width and height to be able to adapt the layout properly. This can be done using a combination of JavaScript and an ADF Faces clientListener and serverListener, which together ensure the browser dimensions are stored in a session-scoped "AgentInfo" bean. This is the code that needs to be added to the UIShell page:
<af:resource type="javascript">
  function setWindowSize(){
  var h = document.documentElement.clientHeight, w = document.documentElement.clientWidth;
  var comp = AdfPage.PAGE.findComponentByAbsoluteId('d1');
  AdfCustomEvent.queue(comp, "setScreenSize",{'screenWidth':w,'screenHeight':h}, true);}
</af:resource>
<af:clientListener method="setWindowSize" type="load"/>
<af:serverListener type="setScreenSize"  method="#{agentInfo.setWindowSize}"/>
Note that in a real application, you would store this JavaScript method in a separate library. The serverListener ensures that the setWindowSize method in the AgentInfoBean is called, which is implemented as follows:
public void setWindowSize(ClientEvent clientEvent)
{
  if (getScreenWidth()==0)
  {
    Map<String, Object> map = clientEvent.getParameters();
    Double width = (Double) map.get("screenWidth");
    Double height = (Double) map.get("screenHeight");
    setScreenHeight(height);
    setScreenWidth(width);      
    JsfUtils.redirectToSelf();
  }
}
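The setWindowSize method ends with a call to a JsfUtils.redirectToSelf() helper; JsfUtils is the author's utility class and is not shown in the post. A minimal hypothetical version of the three helpers used in this article might look like the sketch below; the "/faces" URL mapping is an assumption that depends on your web.xml:

import java.io.IOException;
import javax.faces.FacesException;
import javax.faces.component.UIViewRoot;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;

public class JsfUtils
{
  // read a request parameter by name, returns null when absent
  public static Object getRequestParam(String key)
  {
    return FacesContext.getCurrentInstance().getExternalContext()
                       .getRequestParameterMap().get(key);
  }

  public static UIViewRoot getViewRoot()
  {
    return FacesContext.getCurrentInstance().getViewRoot();
  }

  // redirect to the current view so the stored screen size is used on the next render
  public static void redirectToSelf()
  {
    FacesContext fc = FacesContext.getCurrentInstance();
    ExternalContext ec = fc.getExternalContext();
    try
    {
      ec.redirect(ec.getRequestContextPath() + "/faces" + fc.getViewRoot().getViewId());
    }
    catch (IOException e)
    {
      throw new FacesException(e);
    }
  }
}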
The redirect-to-self call at the end is needed because the code above executes AFTER the page is rendered, which means the screen size info could not be used on the initial page rendering. To prevent this costly redirect-to-self, you can have the end user start the application by specifying just the web context root. In the index.html that is then launched automatically, you can forward to the UIShell page, passing in the screen width and height as request parameters. Here is an example index.html page using this technique:
<!DOCTYPE HTML>
<html>
  <head>
    <script>
        var height = document.documentElement.clientHeight; // window.innerHeight does not work on IE8 and FF
        var width = document.documentElement.clientWidth;   // window.innerWidth does not work on IE8 and FF
        window.location.href = "faces/UIShell?screenHeight=" + height + "&screenWidth=" + width;
    </script>
  </head>
  <body></body>
</html>
In the AgentInfoBean, a method annotated with @PostConstruct then reads the request parameters and stores them in member variables inside the bean:
/**
 * Check if the request contains screenWidth and screenHeight parameters.
 * If so, store the values in the corresponding properties.
 */
@PostConstruct
public void init()
{
  String width = (String) JsfUtils.getRequestParam(SCREEN_WIDTH_KEY);
  String height = (String) JsfUtils.getRequestParam(SCREEN_HEIGHT_KEY);
  if (width!=null)
  {
    screenWidth = new Double(width);
  }
  if (height!=null)
  {
    screenHeight = new Double(height);
  }    
}
With this code in place, the user gets "rewarded" with faster initial loading when accessing the application through index.html. The setWindowSize method should stay in place as a fallback for when the end user launches the application using a URL that includes the UIShell target. With the two agent properties and the screen width and height available, we can add a number of convenience methods to our AgentInfoBean that we can use in EL expressions to dynamically adapt the layout based on the mobile device:
public boolean isIOS()
{
  return "iphone".equals(getAgent().getPlatformName());
}

public boolean isAndroid()
{
  return "android".equals(getAgent().getPlatformName());
}

public boolean isLandscape()
{
  return getScreenWidth() > getScreenHeight();
}

public boolean isTouchScreen()
{
  return !"none".equals(getAgent().getCapabilities().get("touchScreen"));
}

/**
 * Return true when device has touch screen and either screen width or height is < 400px.
 * We check either screen width or height depending on the device orientation.
 * 400px is a pretty arbitrary number, change it as desired
 */
public boolean isSmartPhone()
{ 
 double size = isLandscape() ? getScreenHeight() : getScreenWidth();
 return isTouchScreen() && getScreenWidth() != 0 && (size <= 400);
}

public boolean isTablet()
{
 return isTouchScreen() && !isSmartPhone();
}
Note that when the user changes the device orientation while using the ADF Faces application, this does not affect the values returned by the above methods. If you want to support a change in device orientation while running the application, additional JavaScript is needed to detect the orientation change and call some resize method on the AgentInfoBean with the new width and height values, as sketched below. If you want to detect whether an iOS device has a Retina screen, you can extend the above code and also pass in the value of window.devicePixelRatio; when this value is 2, it is a Retina device. See the article on devicePixelRatio for more information.
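A hypothetical sketch of such orientation handling, following the same pattern as the setWindowSize snippet above, is shown below. The "resizeScreen" event type and the agentInfo.resizeWindow method are assumptions for illustration, not part of the original sample:

<af:resource type="javascript">
  function onOrientationChange(){
    // small delay: some browsers still report the old dimensions when the event fires
    window.setTimeout(function(){
      var h = document.documentElement.clientHeight, w = document.documentElement.clientWidth;
      var comp = AdfPage.PAGE.findComponentByAbsoluteId('d1');
      AdfCustomEvent.queue(comp, "resizeScreen",{'screenWidth':w,'screenHeight':h}, true);
    }, 300);
  }
  window.addEventListener("orientationchange", onOrientationChange);
</af:resource>
<af:serverListener type="resizeScreen" method="#{agentInfo.resizeWindow}"/>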

Adapting the Layout to Enhance Mobile Rendering

With the information about device orientation and screen width and height available, you can now start to determine how you want to adapt the layout based on the viewport size of the mobile device. As you probably want to support a wider range of mobile phones and tablets than you have at your disposal for actual testing, an overview of viewport sizes for mobiles and tablets might come in handy. Now, let's see how we can adapt the layout of our application to make it more usable for smart phones and (small) tablets.

Using Dynamic Page Templates

The first thing we want to do is use a different page template when rendering on smaller devices. We will use the isSmartPhone convenience method from the AgentInfoBean to dynamically set the page template:
<af:pageTemplate viewId="#{agentInfo.smartPhone ? '/common/pageTemplates/PhonePageTemplate.jspx' : '/common/pageTemplates/TabsMenuPageTemplate.jspx'}" id="pt">
  <f:facet name="pageContent">
    <af:region value="#{bindings.mainRegion.regionModel}" id="mr"/>
  </f:facet>
  <f:attribute name="menuModel" value="#{menuModel}"/>
</af:pageTemplate>
The PhonePageTemplate.jspx is a basic page template with a simple header that holds the application title and displays the menu navigation pane as a drop down list instead of tabs:
<af:panelStretchLayout id="pt_pgl1" topHeight="32px">
  <f:facet name="top">
    <af:panelGridLayout id="pt_gPbl" styleClass="AFBrandingBar">
      <af:gridRow id="pt_rh1" height="auto" marginTop="4px"
                  marginBottom="4px">
        <af:gridCell id="pt_bt" width="auto" valign="middle"
                     marginStart="4px">
          <af:outputText value="#{attrs.brandingTitle}"
                         styleClass="AFBrandingBarTitle" id="pt_ot1"/>
        </af:gridCell>
        <af:gridCell id="pt_flexSpaceHead" width="100%"/>
        <af:gridCell id="mgie" width="auto" valign="middle"
                     marginStart="4px">
          <af:navigationPane id="Menu1" var="menuItem"
                             partialTriggers="Item1"
                             value="#{attrs.menuModel}" hint="choice">
            <f:facet name="nodeStamp">
              <af:commandNavigationItem id="Item1"
                                        textAndAccessKey="#{menuItem.label}"
                                        actionListener="#{pageFlowScope.pendingChangesBean.handle}"
                                        action="#{menuItem.doAction}"
                                        rendered="#{menuItem.rendered}"/>
            </f:facet>
          </af:navigationPane>
        </af:gridCell>
      </af:gridRow>
    </af:panelGridLayout>
  </f:facet>
  <f:facet name="center">
    <af:facetRef facetName="pageContent"/>
  </f:facet>
</af:panelStretchLayout>

You might wonder why we still use a panelStretchLayout in this template, as stretch layouts are automatically converted to flowing layouts on touch devices. Well, this automatic conversion is exactly why we need to preserve the stretching in the page template: it lets ADF Faces correctly compute the fixed table height based on the available vertical space. If we enclosed the pageContent facetRef in a panelGroupLayout and then navigated to the employees table page, only about 8 rows would be visible, while there is room for about 15 rows.

The application now looks like this on an HTC phone: [Screenshots: EmpTablePhoneWrongFetchSize, EmpEditPhoneTemplate] While the phone-specific page template is a huge improvement in terms of efficient real estate management, the display of the table and form layout is still far from optimal. In the next sub-sections we will address these display issues. We will first introduce a generic technique that allows us to change the properties of UIComponent elements in the JSF component tree at runtime, and then we will see how we can use that technique to optimize various UI components for mobile rendering.

Traversing and Changing the UIComponent Tree at Runtime

To traverse and change the UI component tree at runtime, we will use a JSF phase listener in combination with the VisitTree callback API. In JSF 1.2 (JDeveloper 11.1.1.x) the VisitTree API was included in the Trinidad library, which is used as the foundation for ADF Faces RC. In JSF 2.0 and beyond (JDeveloper 11.1.2.x and 12c) the VisitTree implementation from Trinidad has become part of the JSF standard. So, depending on the JDeveloper version you are using, your imports will change, but the code can largely stay the same, as you can see in the two sample applications you can download at the bottom of this post. We start with a Java class that implements the VisitCallback interface:
public class MobileRenderingVisitCallback implements VisitCallback {

    public VisitContext createVisitContext() {
        return VisitContext.createVisitContext(FacesContext.getCurrentInstance());
    }

    public VisitResult visit(VisitContext context, UIComponent target) {
        AgentInfoBean agentInfo = AgentInfoBean.getInstance();
        if (!agentInfo.isTouchScreen()) {
            // No touch device, do nothing and stop tree traversal
            return VisitResult.COMPLETE;
        }
        // TODO: some mobile rendering enhancements
        return VisitResult.ACCEPT;
    }
}
For more information on using the VisitTree API, see the article "Efficient component tree traversal in JSF". Next, we create a JSF phase listener that uses this class just before the render response phase:
public class MobilePhaseListener implements PhaseListener {
    public MobilePhaseListener() {
        super();
    }

    public void afterPhase(PhaseEvent phaseEvent) {
    }

    public void beforePhase(PhaseEvent phaseEvent) {
        if (phaseEvent.getPhaseId() == PhaseId.RENDER_RESPONSE) {
            MobileRenderingVisitCallback cb =
                new MobileRenderingVisitCallback();
            UIXComponent.visitTree(cb.createVisitContext(),
                                   JsfUtils.getViewRoot(), cb);
        }
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
And finally, we register this class as a managed bean in adfc-config.xml:
<managed-bean id="__18">
  <managed-bean-name id="__19">MobilePhaseListener</managed-bean-name>
  <managed-bean-class id="__21">oracle.ateam.demo.ui.application.MobilePhaseListener</managed-bean-class>
  <managed-bean-scope id="__20">request</managed-bean-scope>
</managed-bean>
and reference the bean in the beforePhase property of the f:view element of the UIShell page:
<f:view beforePhase="#{MobilePhaseListener.beforePhase}">
  <af:document title="ADF Faces Mobile Demo" id="d1">

You might be tempted to register the JSF phase listener in faces-config.xml, as this is faster and does not require a managed bean definition. However, that will not work in this case, because you cannot access the UI component tree on initial page loading when the phase listener is defined in faces-config.xml!

With this generic code in place, we can now start adding code to our MobileRenderingVisitCallback class to fix various rendering issues.

Changing the PanelFormLayout to One Column

To avoid horizontal swiping in multi-column form layouts, we need to set the maxColumns property of the panelFormLayout components to 1 in all our pages and page fragments. With the above generic UI component tree traversal in place, this is pretty straightforward. We add the following method to the MobileRenderingVisitCallback class:
private void processPanelFormLayout(UIComponent target) {
    if (target instanceof RichPanelFormLayout) {
        RichPanelFormLayout pfl = (RichPanelFormLayout)target;
        pfl.setMaxColumns(1);
    }
}
and call this method from the visit method:
public VisitResult visit(VisitContext context, UIComponent target) {
    AgentInfoBean agentInfo = AgentInfoBean.getInstance();
    if (!agentInfo.isTouchScreen()) {
        return VisitResult.COMPLETE;
    }
    if (agentInfo.isSmartPhone()) {
        processPanelFormLayout(target);            
    }
    return VisitResult.ACCEPT;
}
With this code in place, the form layouts start to look much better on smart phones. [Screenshot: OneColumn] You could further extend this sample code, for example to shorten the width of large input fields and turn them into multi-row input fields, as sketched below.
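A hypothetical extension along those lines, not part of the original sample, could be called from visit() alongside processPanelFormLayout; the 20-column threshold and the three-row height are arbitrary values to tune for your own pages:

private void processInputText(UIComponent target) {
    if (target instanceof RichInputText) {
        RichInputText input = (RichInputText) target;
        // narrow wide fields and let long values wrap over multiple rows instead
        if (input.getColumns() > 20) {
            input.setColumns(20);
            input.setRows(3);
        }
    }
}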

Fixing Table Pagination

In JDeveloper 11.1.1.7 the pagination size is controlled by the table fetchSize property; in JDeveloper 12.1.2 it is controlled by the autoHeightRows property, and if this property is not set you will not see pagination controls. You should make sure that the number of rows visible on the screen matches the value of the fetchSize and autoHeightRows properties. If the pagination size is larger than the number of rows visible, it can cause a confusing user experience. For example, when fetchSize/autoHeightRows is set to 25 and only 18 rows are visible, navigating to the next "table page" will start at row 26, while rows 19 to 25 remain invisible. The end user could have reached those rows with a swipe-up action, but this is counter-intuitive when pagination controls are provided to navigate through the rows. Similar to changing the number of columns in a panelFormLayout, we can change the table fetchSize and autoHeightRows properties in the MobileRenderingVisitCallback class:
public VisitResult visit(VisitContext context, UIComponent target) {
    AgentInfoBean agentInfo = AgentInfoBean.getInstance();
    if (!agentInfo.isTouchScreen()) {
        return VisitResult.COMPLETE;
    }
    if (agentInfo.isSmartPhone()) {
        processPanelFormLayout(target);
    }
    processTable(target, agentInfo);
    return VisitResult.ACCEPT;
}
The processTable method looks like this:
private void processTable(UIComponent target, AgentInfoBean agentInfo) {
    if (target instanceof RichTable) {
        RichTable table = (RichTable) target;
        // set fetch size and autoHeightRows so we get proper pagination controls
        int tabletRows = agentInfo.isLandscape() ? 21 : 35;
        int rows = agentInfo.isSmartPhone() ? 16 : tabletRows;
        table.setFetchSize(rows);
        table.setAutoHeightRows(rows);
    }
}
With this code in place, the pagination controls on the employees table show up nicely on the iPad. [Image: TablePaginationIpad] The tricky thing here is to determine the number of rows that will be visible. Based on your test devices, you should come up with a formula that computes the number of rows from the screen height and the other elements on the page that consume real estate (a sketch follows below). If this doesn't work out, you need to specify the autoHeightRows property for each af:table directly in the page source. Also note that with table pagination turned on, you typically do not want to fill the last page with rows, but rather just show the last remaining rows. This can be done by going to the view object General tab, Tuning section, and unchecking the checkbox shown below. [Image: FillLastPage]
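As an illustration only, such a formula might look like the sketch below; the getScreenHeight method on AgentInfoBean and the pixel constants are assumptions, not part of the original sample:
// rough, device-dependent estimates - tune these against your test devices
private static final int ROW_HEIGHT = 32;     // approximate pixel height of one table row
private static final int CHROME_HEIGHT = 220; // header, toolbar and pagination controls

private int computeVisibleRows(AgentInfoBean agentInfo) {
    // getScreenHeight() is a hypothetical accessor returning the height reported by the browser
    int available = agentInfo.getScreenHeight() - CHROME_HEIGHT;
    return Math.max(1, available / ROW_HEIGHT);
}
The computed value would then replace the hard-coded row counts in the processTable method.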

Hiding the Surrounding PanelCollection

Many of the standard features in a panelCollection are less useful on a mobile phone. For example, the ability to show/hide columns, attach/detach the table, or freeze columns isn't something we expect a mobile phone user to do. So, we can save some real estate and display more rows in the table by hiding the panelCollection chrome. This is easily done by adding the following code inside the processTable method:
if (agentInfo.isSmartPhone() && table.getParent() instanceof RichPanelCollection) {
    RichPanelCollection pc =
        (RichPanelCollection)table.getParent();
    HashSet<String> featuresOff = new HashSet<String>();
    featuresOff.add("viewMenu");
    featuresOff.add("formatMenu");
    featuresOff.add("wrap");
    featuresOff.add("detach");
    featuresOff.add("freeze");
    featuresOff.add("statusBar");
    pc.setFeaturesOff(featuresOff);
}
With this code in place, we can increase the fetch size to 19, resulting in a page like this: [Image: EmpTableNoPC]

Handling Wide Tables

Most tables initially designed for desktop browsers will be wider than the available width on mobile phones. In the table shown above, a user can swipe to the left with two fingers simultaneously to see the remaining columns. While this works, it is not an intuitive touch gesture. There are a couple of ways to address this issue of "overflow" columns:
  • Simply hide the additional columns, and show that information in a detail screen after the user selects a row.
  • Use the concept of "detail disclosure" by defining a detailStamp facet of the af:table component and moving the remaining items inside a panelFormLayout in the detailStamp facet.
  • Replace the table component with a listView component, which is more suited for mobile devices and tablets.
The first two solutions can easily be implemented in the MobileRenderingVisitCallback class. Here is example code for implementing the detail disclosure:
private void moveTableItemsToDetailStamp(RichTable table) {
    if (table.getFacets().get("detailStamp") != null) {
        return;
    }
    RichPanelGroupLayout form = new RichPanelGroupLayout();
    RichSpacer spacer = new RichSpacer();
    spacer.setWidth("5px");
    spacer.setHeight("5px");
    form.getFacets().put("separator", spacer);
    table.getFacets().put("detailStamp", form);
    List<UIComponent> children = table.getChildren();
    int counter = 0;
    for (UIComponent kid : children) {
        if (kid instanceof RichColumn) {
            counter++;
            RichColumn column = (RichColumn)kid;
            if (counter > 3) {
                column.getChildren().get(0).setParent(form);
                form.getChildren().add(column.getChildren().get(0));
                column.setRendered(false);
            }
        }
    }
}
With this code in place, the employees table has an additional detail disclosure icon at the beginning of each row that can be used to expand the row. [Image: DetailDisclosure] If you want a more structured layout of the items in the detail disclosure area, you can change the code to use a panelFormLayout and take the item label values from the column headers, as sketched below. While these improvements make the table component somewhat usable on a smart phone, you might want to consider replacing the table component with a listView (javadoc and demo) component, which provides a more native look and feel.
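A sketch of that panelFormLayout variant is shown below; it assumes the column headers are plain headerText strings, and uses a panelLabelAndMessage (oracle.adf.view.rich.component.rich.layout.RichPanelLabelAndMessage) to carry the header text as the item label:
private void moveTableItemsToDetailStampForm(RichTable table) {
    if (table.getFacets().get("detailStamp") != null) {
        return;
    }
    RichPanelFormLayout form = new RichPanelFormLayout();
    form.setMaxColumns(1);
    table.getFacets().put("detailStamp", form);
    int counter = 0;
    for (UIComponent kid : table.getChildren()) {
        if (kid instanceof RichColumn) {
            counter++;
            RichColumn column = (RichColumn)kid;
            if (counter > 3) {
                // wrap the column content in a panelLabelAndMessage that reuses the header text
                RichPanelLabelAndMessage item = new RichPanelLabelAndMessage();
                item.setLabel(column.getHeaderText());
                // adding the child to a new parent also detaches it from the column
                item.getChildren().add(column.getChildren().get(0));
                form.getChildren().add(item);
                column.setRendered(false);
            }
        }
    }
}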

Using HTML5 Input Types

A number of new input types have been added to HTML 5. Using these input types has the following advantages:
  • Device-native rendering of the input element. For example, a date input type will render with the iOS or Android native date picker.
  • Context-sensitive virtual keyboard. For example, an email input type will show the @ character, and a number input type will only show numeric keys on the virtual keyboard.
  • Declarative validation. For example, a number input type will raise an error when a non-numeric value is entered.
HTML5 input types can be used in ADF Faces by specifying the usage property on the af:inputText component. The property help and code insight in the JDeveloper page visual editor will only show Auto, text or search as allowable values, but you can enter the new HTML5 input types as well. To prevent the red underlining of the property, you can enclose the literal input type value inside an EL expression. Here is an example of using the HTML5 date type in your page:
<af:inputText label="#{bindings.HireDate.hints.label}" required="#{bindings.HireDate.hints.mandatory}"
      usage="#{'date'}" value="#{bindings.HireDate.inputValue}" id="it9">
  <af:convertDateTime pattern="yyyy-MM-dd" id="cdt22"  type="date" />
</af:inputText>
Note the pattern used to convert the date: this format is the 'wire' format for the date input type as prescribed by the HTML5 standard. If you don't add an af:convertDateTime tag with this wire format as the value of the pattern attribute, the field will not show any data. The way the date input field is rendered is browser-specific. On the desktop, only Google Chrome currently displays a nice calendar widget to pick a date using the user's locale; Internet Explorer and Firefox display a simple input field, using the wire format as the display format. Chrome on Android devices and Safari on iOS nicely display a native date picker. Since most desktop browsers do not show a calendar widget, you probably want to stick with the standard ADF Faces calendar that you get for free when using an af:inputDate component on desktop browsers. So, to leverage the native date picker on iOS and Android, you need to conditionally render either the af:inputDate component or the af:inputText component with the usage property set to 'date'. Here is a code snippet that implements this approach:
<af:inputDate value="#{bindings.HireDate.inputValue}" label="#{bindings.HireDate.hints.label}"
              required="#{bindings.HireDate.hints.mandatory}" columns="#{bindings.HireDate.hints.displayWidth}"
              shortDesc="#{bindings.HireDate.hints.tooltip}" rendered="#{!agentInfo.touchScreen}" id="id1">
  <f:validator binding="#{bindings.HireDate.validator}"/>
  <af:convertDateTime pattern="#{bindings.HireDate.format}"/>
</af:inputDate>
<af:inputText label="#{bindings.HireDate.hints.label}" required="#{bindings.HireDate.hints.mandatory}"
      value="#{bindings.HireDate.inputValue}" usage="#{'date'}" rendered="#{agentInfo.touchScreen}"   id="it9">
  <f:validator binding="#{bindings.HireDate.validator}"/>
  <af:convertDateTime pattern="yyyy-MM-dd" id="cdt22" type="date"/>
</af:inputText>
To avoid the tedious work of adding this af:inputText to each and every page with a date item, we can add the HTML5 date support in a generic fashion using the JSF phase listener and VisitTree API as explained in the section 'Traversing and Changing the UIComponent Tree at Runtime'. Here is a sample method that can be added to the MobileRenderingVisitCallback class to achieve this:
private void processDate(UIComponent target) {
    if (target instanceof RichInputDate) {
        RichInputDate date = (RichInputDate) target;
        int index = date.getParent().getChildren().indexOf(date);
        if (!date.isRendered()) {
            // html5 date already added or not needed
            return;
        }
        RichInputText html5Date = new RichInputText();
        html5Date.setUsage("date");
        html5Date.setValueExpression("rendered", date.getValueExpression("rendered"));
        html5Date.setValueExpression("required", date.getValueExpression("required"));
        html5Date.setValueExpression("disabled", date.getValueExpression("disabled"));
        html5Date.setValueExpression("readOnly", date.getValueExpression("readOnly"));
        html5Date.setValueExpression("label", date.getValueExpression("label"));
        html5Date.setValueExpression("value", date.getValueExpression("value"));
        // we need to use the converter from original date item, creating new DateTimeConverter instance
        // causes java.lang.ClassCastException for some unknown reason:
        // Value "2005-06-24" is not of type java.util.Date, it is class oracle.jbo.domain.Date
        DateTimeConverter conv = (DateTimeConverter) date.getConverter();
        date.setConverter(null);
        conv.setPattern("yyyy-MM-dd");
        html5Date.setConverter(conv);
        date.getParent().getChildren().add(index + 1, html5Date);
        date.setRendered(false);
    }
}
Using the HTML5 date input type on an iPad shows the iOS-native date picker (Dutch language). [Image: html5date] On the HTC Android device, the date picker looks like this: [Image: html5dateAndroid] In a similar way, you can add the usage property to af:inputText components that represent an email or number item. You only need a way to identify these item types generically. For number items, this can be done by checking whether the af:inputText has an af:convertNumber converter specified:
private void processNumberField(UIComponent target) {
    if (target instanceof RichInputText) {
        RichInputText field = (RichInputText) target;
        Converter converter = field.getConverter();
        if (converter instanceof NumberConverter) {
            field.setUsage("number");
        }
    }
}
This code will cause a numeric keyboard to show up on the iPad when tapping in the ManagerId field. [Image: html5number]
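For completeness, once all of the process methods above are in place, the visit method of the MobileRenderingVisitCallback class might end up looking like this (a sketch; the order of the calls is not significant):
public VisitResult visit(VisitContext context, UIComponent target) {
    AgentInfoBean agentInfo = AgentInfoBean.getInstance();
    if (!agentInfo.isTouchScreen()) {
        return VisitResult.COMPLETE;
    }
    if (agentInfo.isSmartPhone()) {
        processPanelFormLayout(target);
    }
    processTable(target, agentInfo);
    processDate(target);
    processNumberField(target);
    return VisitResult.ACCEPT;
}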

Using Client-Side Responsive Design Techniques

This article described server-side adaptive design techniques. You can combine these with client-side responsive design techniques using CSS media queries as desired; several good articles on using CSS media queries in ADF Faces are available online. Note that while CSS media queries are a very popular technique to optimize mobile rendering, you should be aware that all changes are made on the client, which means that the full JSF UI component tree is still sent to the mobile browser. If you want to hide significant parts of your desktop user interface to optimize mobile rendering, it is more performant to do this server-side.

Conclusion

With some simple JavaScript we can gather additional screen sizing information from the browser agent. This information can then be used to implement server-side adaptive design techniques to enhance mobile rendering without modifying each and every page or page fragment. This is a fast and cost-effective way to go mobile with ADF. However, as explained in the first article of this series Understanding the Options, you might want to consider more significant redesign, as well as the use of ADF Mobile to truly optimize the mobile user experience. You can download the sample application for both JDeveloper 11.1.1.7 and JDeveloper 12.1.2.

OAAM Admin Console Dashboard Update Frequency

There are three sections in the dashboard in OAAM Admin Console. [Image: oaam-dashboard] The refresh time of sections 1 and 2 shown above can be configured by selecting the appropriate duration from the dropdown. There is no provision to select the update frequency of the items in section 3. This is controlled by different monitors that run in the background. These monitors keep track of various user sessions and generate the statistics at well-defined intervals. Examples of such monitors are oneMinuteMonitor, fiveMinuteMonitor and oneHourMonitor, which update the data every minute, every five minutes or every hour, respectively. These monitors are defined by the following properties in oaam_server.war/WEB-INF/classes/bharosa_properties/oaam_core.properties:

monitors.enum.oneHourMonitor=5
monitors.enum.oneHourMonitor.name=oneHourMonitor
monitors.enum.oneHourMonitor.description=Aggregates Monitor Data every one hour.
monitors.enum.oneHourMonitor.enabledByDefault=true
monitors.enum.oneHourMonitor.sleepDurationInSeconds=3600
monitors.enum.oneHourMonitor.destinations=com.bharosa.common.monitoring.Log4JMonitorDataDestination,com.bharosa.common.monitoring.DatabaseMonitorDataDestination

This monitor can be associated with a dashboard item using the following list of properties:

monitor.interceptors.enum.Runtime=18
monitor.interceptors.enum.Runtime.name=Runtime
monitor.interceptors.enum.Runtime.description=Monitors risk score being set.
monitor.interceptors.enum.Runtime.monitor=oneHourMonitor
monitor.interceptors.enum.Runtime.enabledByDefault=true

In the above list, we are associating oneHourMonitor with the "Performance" dashboard for "Checkpoints" (checkpoints are referred to as runtimes internally). You can override the value of any of the above properties in oaam_extension.war/WEB-INF/classes/bharosa_properties/oaam_custom.properties and redeploy the oracle.oaam.extensions shared library. More information about the OAAM Extensions Shared Library can be found in the Oracle Fusion Middleware Developer's Guide for Oracle Adaptive Access Manager. For example, I can define a new monitor called halfMinuteMonitor in oaam_custom.properties in oaam_extension.war as follows:

monitors.enum.halfMinuteMonitor=10
monitors.enum.halfMinuteMonitor.name=halfMinuteMonitor
monitors.enum.halfMinuteMonitor.description=Aggregates Monitor Data every 30 seconds.
monitors.enum.halfMinuteMonitor.enabledByDefault=true
monitors.enum.halfMinuteMonitor.sleepDurationInSeconds=30
monitors.enum.halfMinuteMonitor.destinations=com.bharosa.common.monitoring.Log4JMonitorDataDestination,com.bharosa.common.monitoring.DatabaseMonitorDataDestination

Then associate it with "Checkpoints" as follows:

monitor.interceptors.enum.Runtime.monitor=halfMinuteMonitor

Oracle GoldenGate: Capture and Apply of Microsoft SQL Server DDL Operations


Introduction

Oracle GoldenGate (OGG) supports DDL capture and apply for the Oracle and Teradata databases only. Because of this, maintenance tasks for other databases require a lot of coordination and typically an extended OGG outage. In this article we shall discuss one option for automating the capture, delivery, and execution of DDL operations using OGG for Microsoft SQL Server 2008. The techniques discussed may also be used for other databases supported by OGG.

The scripts and information provided in this article are for educational purposes only. They are not supported by Oracle Development or Support, and come with no guarantee or warranty of functionality in any environment other than the test system used to prepare this article.

Main Article

Oracle GoldenGate (OGG) provides data capture from transaction logs and delivery for homogeneous and heterogeneous databases and messaging systems. Built-in functionality allows OGG capture and apply processes to perform non-database actions based upon database transactional data. Using this knowledge, we can use OGG as the intermediary mechanism to execute DDL statements on the source and target databases. Download the complete document: MSS_DDL_Capture_Apply    

Oracle GoldenGate in a distributed file system with file locking


Introduction

This write-up illustrates Oracle GoldenGate's behavior in a distributed file system environment that supports file locking. Oracle GoldenGate (OGG) is very commonly used in an Oracle RAC environment. When OGG is installed on a distributed file system in a RAC environment, the OGG processes run on a single node of the cluster at any given time. Distributed file locking is essential to prevent accidental startup from another node while the OGG processes are already running elsewhere in the cluster.

Main Article

The ACFS file system supports distributed file locking from Oracle 11.2.0.3 onwards. To illustrate the scenario, let's take an example. Say we have a 2-node (NodeA and NodeB) RAC cluster running Oracle 11.2.0.3. OGG is installed on an ACFS file system which is shared between the nodes, and the OGG processes are running on node A. If we now go to node B and check the status of the OGG processes (info all), it will show that the processes are running. This might confuse some users into thinking that OGG is simultaneously running on both nodes. This is not true, and can be verified by simply checking at the OS level ("ps -ef | grep <ogg process name>"). The "info all" command actually checks the status by querying the shared "pcs" file. Since in a distributed environment the OGG binaries are shared, the "info all" command will always show a consistent view no matter which node it was issued from (active or passive). In the above scenario, while OGG is running on node A, if the user logs into node B and issues a "start/stop ER *", the command will get the IP address and port of the manager from the shared "pcs" file. Since in this case the manager is running on node A, the TCP message to start/stop will go to node A. In order to start the processes on node B, the OGG manager first needs to be stopped on node A and then restarted on node B.

Summary

In a distributed file system environment where file locking is supported (e.g. ACFS), the OGG processes can be managed from any node, but all the OGG processes will always run on the same node as the OGG manager. To relocate the OGG processes to a different node of the cluster, the manager should first be stopped on the active node and then restarted on the new node. (Please note that typically in a RAC environment, Oracle Clusterware is used to manage OGG for high availability purposes.)

Exalogic Virtual Tea Break Snippets – Modifying the Default LVM Guest Template


With the latest release of the Exalogic Virtual Environment (2.0.6.0.0) a number of modifications have been implemented, and one of these is the introduction of an LVM-based Guest Template. LVM was a much-requested feature for the base template, but its introduction means the information provided in my blog entry Exalogic Virtual Tea Break Snippets - Modifying the Default Shipped (Base) Template is no longer appropriate, because modifyjeos does not work with an LVM-based System.img.

To resize the new Base Template we need to work with LVM directly. Working with colleagues, I have put together a simple script that will allow you to increase the size of the default root volume and swap volume and thus generate a new template.

This script is not part of the Exalogic product and is provided as an example only; as such, support tickets cannot be opened against the script.

This script is best run on the Exalogic Control VM, but it can also be run on one of the Exalogic compute nodes at the OVS level. If you choose to run the script on a compute node, you will need to modify the listing filter within the /etc/lvm/lvm.conf file on the chosen compute node. This can be done by editing the filter line as follows:

# By default for OVS every block device is restricted: filter = [ "r/.*/" ]
# Change the filter line to:
filter = [ "a/.*/" ]

Example

In the following example we will modify the Template to increase the root volume by 20 GB and the swap by 1 GB.

Step 1 - Mount Images Directory

If you have installed the Base Template, mount /export/common/images from the ZFS storage and you will find el_base_linux_guest_vm_template_2.0.6.0.0_64.tgz there. Alternatively, it can be downloaded from the Oracle Software Delivery Cloud (Part Number: V39450-01).

Step 2 - Extract Template

Once the Template el_base_linux_guest_vm_template_2.0.6.0.0_64.tgz is available, extract it as follows:
tar xvzf ./el_base_linux_guest_vm_template_2.0.6.0.0_64.tgz 

This will create a directory called BASE which will contain the System.img (VServer image) and vm.cfg (VServer Config information). Change directory into the newly created BASE directory.

Step 3 - Modify System.img

Execute the script as follows:

ModifyLVMImg.sh -if System.img -er 20 -es 1

This will result in the root file system being extended by 20 GB to 25 GB and the swap from 0.5 GB to 1.5 GB.

Step 4 - Re-Package Base Template

After running the script, tar the modified System.img into an appropriate .tgz so that it can be uploaded to EMOC.

Repackage the Base Template with the following command:

tar -pczvf ./el_base_linux_guest_vm_template_2.0.6.0.0_64_expanded.tgz ./BASE

Step 5 - Test Template

Creating a vServer from the new template and executing df -kh will display the following:

[root@test25Gb ~]# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       25G  3.3G   20G  15% /
/dev/xvda1             99M   23M   71M  25% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm
[root@test25Gb ~]#

As we can see the root volume has been successfully increased in size.

ModifyLVMImg.sh Script

Usage

[root@exalogic-iaas-vbox BASE]# ModifyLVMImg.sh -help



##########################################################################################
##                                                                                      ##
##                                     DISCLAIMER                                       ##
##                                                                                      ##
##  THIS SCRIPT IS PROVIDED ON AN AS IS BASIS, WITHOUT WARRANTY OF ANY KIND,            ##
##  EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT         ##
##  THE COVERED SCRIPT IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR           ##
##  PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE        ##
##  OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE              ##
##  DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER               ##
##  CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION.      ##
##                                                                                      ##
##  THIS SCRIPT IS NOT PART OF THE BASE PRODUCT SET AND AS SUCH, ALTHOUGH FUNCTIONAL,   ##
##  IS PROVIDED ONLY AS AN EXAMPLE OF WHAT CAN BE DONE.                                 ##
##                                                                                      ##
##########################################################################################




usage: /root/bin/ModifyLVMImg.sh [-if <System Image File>] [-er <Root Extend Size in Gb>] [-es <Swap Extend Size in Gb>] [ -bs <Block Size bytes>] [-vg <Volume Group Name>] [-rlv <Root Logical Volume Name>] [-slv <Swap Logical Volume Name>] [-nvg <New Volume Group Name>] [-nrlv <New Root Logical Volume Name>] [-nslv <New Swap Logical Volume Name>]

          -if <Image File> : This specifies the image file to be processed and will default to System.img
          -er <Root Extend Size> : This is the additional amount of space that will be added to the root Logical Volume (in GB). Defaults to 0
          -es <Swap Extend Size> : This is the additional amount of space that will be added to the swap Logical Volume (in GB). Defaults to 0
          -bs <Block Size> : Block size (in bytes) to be used whilst extending. Defaults to 1024 bytes
          -vg <Volume Group Name> : Name of the current Volume Group. Default VolGroup00
          -rlv <Root Logical Volume Name> : Name of the current Root Logical Volume. Default LogVol00
          -slv <Swap Logical Volume Name> : Name of the current Swap Logical Volume. Default LogVol01
          -nvg <New Volume Group Name> : Name of the new Volume Group. If not specified then the name will not be changed
          -nrlv <New Root Logical Volume Name> : Name of the new Root Logical Volume. If not specified then the name will not be changed
          -nslv <New Swap Logical Volume Name> : Name of the new Swap Logical Volume. If not specified then the name will not be changed

Script

#!/bin/bash

################################################################################
#
# Exalogic Virtual (Linux x86-64) Simplified CLI
# HEADER START
#
# THIS SCRIPT IS PROVIDED ON AN AS IS BASIS, WITHOUT WARRANTY OF ANY KIND,
# EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT
# THE COVERED SCRIPT IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR
# PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE
# OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE
# DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER
# CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION.
# NO USE OF ANY COVERED SOFTWARE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS
# DISCLAIMER.
#
# When distributing this Code, include this HEADER in each file.
# If applicable, add the following below this this HEADER, with the fields
# enclosed by brackets "[]" replaced with your own identifying information:
# Portions Copyright [yyyy] [name of copyright owner]
#
# HEADER END
#
#
# Copyright 2013 Andrew Hopkinson, Oracle Corporation UK Ltd.
#
################################################################################


function extendImg() {

echo "***********************************************"
echo "*** Extending $SYSTEMIMG"
echo "***********************************************"
dd if=/dev/zero of=$SYSTEMIMG bs=$BLOCKSIZE count=0 seek=$ADDBLOCKS

export LOOP=`losetup -f`
echo "***********************************************"
echo "*** Using loop $LOOP"
echo "***********************************************"

losetup $LOOP $SYSTEMIMG

# n = New Partition
# p = Primary
# 3 = Partition 3
# = Default first cylinder
# = Default last cylinder
# t = Change partition system id
# 3 = Partition Id
# 8e = Linux LVM
# w = Write & Exit

fdisk $LOOP <<EOF
n
p
3


t
3
8e
w
EOF

#echo "***********************************************"
#echo "*** Pausing $SLEEP seconds for previous command"
#echo "***********************************************"
#sleep $SLEEP

echo "***********************************************"
echo "*** Mounting $LOOP"
echo "***********************************************"
msgArray=( $(kpartx -av $LOOP | sed 's/ /|/g') )
echo "==============================================="
for msg in "${msgArray[@]}"
do
echo "$msg"
loopId=${msg%%|*}
msg=${msg#*|}
loopId=${msg%%|*}
msg=${msg#*|}
loopId=${msg%%|*}
msg=${msg#*|}
done
echo "Using loopId: $loopId"
echo "==============================================="

echo "***********************************************"
echo "*** Scanning Volume Groups"
echo "***********************************************"
vgscan
#pvdisplay
#vgdisplay
#lvdisplay

#echo "***********************************************"
#echo "*** Pausing $SLEEP seconds for previous command"
#echo "***********************************************"
#sleep $SLEEP

echo "***********************************************"
echo "*** Extending $VGNAME"
echo "***********************************************"
#msg=$(vgextend VolGroup00 /dev/mapper/`basename $LOOP`p3)
msg=$(vgextend $VGNAME /dev/mapper/$loopId 2>&1)
echo "==============================================="
echo "$msg"
echo "==============================================="
if [[ "$msg" == *"$VGNAME"* && "$msg" == *"successfully extended"* ]]
then

vgdisplay

echo "***********************************************"
echo "***** Extending Root $ROOTLVNAME"
echo "***********************************************"
lvextend -L+`echo $EXTROOT`G /dev/$VGNAME/$ROOTLVNAME

echo "***********************************************"
echo "***** Extending Swap $SWAPLVNAME"
echo "***********************************************"
lvextend -L+`echo $EXTSWAP`G /dev/$VGNAME/$SWAPLVNAME
#lvdisplay

echo "***********************************************"
echo "*** Changing $VGNAME"
echo "***********************************************"
vgchange -ay $VGNAME

echo "***********************************************"
echo "***** Checking Root $ROOTLVNAME"
echo "***********************************************"
e2fsck -f /dev/mapper/$VGNAME-$ROOTLVNAME

echo "***********************************************"
echo "***** Resizing Root $ROOTLVNAME"
echo "***********************************************"
resize2fs /dev/mapper/$VGNAME-$ROOTLVNAME

echo "***********************************************"
echo "***** Setting Swap $SWAPLVNAME"
echo "***********************************************"
mkswap /dev/mapper/$VGNAME-$SWAPLVNAME

vgchange -an $VGNAME
lvdisplay

else
echo ""
echo ""
echo "***********************************************"
echo "************** ERROR ********************"
echo "***********************************************"
echo ""
echo " Validate that the filter entry for OVS has been "
echo " changed from :"
echo " filter = [ "r/.*/" ]"
echo " to :"
echo " filter = [ "a/.*/" ]"
echo ""
echo " /etc/lvm/lvm.conf"
echo ""
echo "***********************************************"
echo ""
echo ""
fi

echo "***********************************************"
echo "*** Unmounting $LOOP"
echo "***********************************************"
kpartx -dv $LOOP
losetup -d $LOOP
}

function usage() {
echo ""
echo >&2 "usage: $0 [-if <System Image File>] [-er <Root Extend Size in Gb>] [-es <Swap Extend Size in Gb>] [ -bs <Block Size bytes>]"
echo >&2 " "

exit 1
}


export BASEIMAGESIZE=6
export SYSTEMIMG=System.img
export EXTROOT=0
export EXTSWAP=0
export BLOCKSIZE=1024
export GIGABYTE=`expr 1024 \* 1024 \* 1024`
export SLEEP=10
export VGNAME=VolGroup00
export ROOTLVNAME=LogVol00
export SWAPLVNAME=LogVol01

while [ $# -gt 0 ]
do
case "$1" in
-if) SYSTEMIMG="$2"; shift;;
-er) EXTROOT="$2"; shift;;
-es) EXTSWAP="$2"; shift;;
-bs) BLOCKSIZE="$2"; shift;;
-vg) VGNAME="$2"; shift;;
-rlv) ROOTLVNAME="$2"; shift;;
-slv) SWAPLVNAME="$2"; shift;;
*) usage;;
esac
shift
done

#rpm -qid lvm2

echo "***********************************************"
echo "*** Resizing $SYSTEMIMG"
echo "***********************************************"


# Calculate sizing
# Spare blocks to get around extent issue, i.e. missing 1 extent
SPAREBLOCKS=$((($GIGABYTE / $BLOCKSIZE) / 2))
BASEIMAGESIZE=$(stat -c%s "$SYSTEMIMG")
BASEIMGBLOCKS=$(($BASEIMAGESIZE / $BLOCKSIZE))
ROOTADD=$(($EXTROOT * $GIGABYTE / $BLOCKSIZE))
SWAPADD=$(($EXTSWAP * $GIGABYTE / $BLOCKSIZE))
ADDBLOCKS=$(($ROOTADD + $SWAPADD + $BASEIMGBLOCKS + $SPAREBLOCKS))

echo "Block size: $BLOCKSIZE"
echo "Base Image Size $BASEIMAGESIZE"
echo "Base Image $BASEIMGBLOCKS blocks"
echo "Adding $ROOTADD blocks to root file system"
echo "Adding $SWAPADD blocks to swap file system"
echo "Adding $SPAREBLOCKS spare blocks"
echo "Resizing image file to $ADDBLOCKS"

ls -lh

extendImg

ls -lh

echo "***********************************************"
echo "*** $SYSTEMIMG Resized"
echo "***********************************************"

Going Mobile with ADF – Implementing Data Caching and Syncing for Working Offline


Introduction

With over 90% of internet traffic now coming from mobile devices, there is huge pressure on companies to make their customer-facing applications suitable for rendering on smart phones and tablets. The A-Team is involved with a number of customers who are looking for ways to adapt their existing Oracle ADF applications and make them "mobile-enabled". This article is the third in a series that describes what we learned from these customer experiences, both from a technical and a non-technical perspective. Previous articles in this series are Understanding the Options and Running ADF Faces on Mobile Phones and Tablets. This article introduces very powerful and generic sample code developed by the A-Team that you can reuse to quickly and easily implement data caching on the mobile device in a secure way using the SQLite database, with only a few lines of Java coding. The sample code also helps you implement data synchronization with a remote server in case your mobile application should support offline transactions.

Main Article

As discussed in the first article in this series, Understanding the Options, ADF Faces web applications cannot be used for working offline on a mobile device. So the technology choice is simple in this case: we need to use ADF Mobile to create an on-device application that supports working in offline mode. Most likely, the mobile application needs to communicate with an existing back-end (ADF) application to get data ("read actions") and to send transactions based on changes made to the data by the mobile user ("write actions"). There are various degrees in which we can support read and write actions in offline mode, as illustrated by the following picture: [Image: DataCachingStrategies]

The simplest strategy, 1, is to not support working in offline mode; if that strategy is acceptable for you, you can stop reading this article. Strategy 2 supports reading/viewing the data in offline mode, but the user needs to be connected to modify data. This strategy implies you need to cache the data on the device, and is straightforward to implement when using the sample persistence code provided by the A-Team, as we will see later. Strategies 3 and 4 are a different ballgame: here you need to keep track of pending transactions that are not yet sent to the server. The sample code provided by us will help you with registering and (re-)sending pending transactions; however, the complex issue of data synchronization conflicts and transactions based on stale data needs to be resolved by you, based on the specifics of both your mobile application and the back-end application that serves the data.

This article will first discuss how you can disclose the business logic and data from your back-end and consume this in your mobile application. Then we will cover strategy 2: implementing data caching using the SQLite database. Finally, we will discuss the issues around data synchronization that come with strategies 3 and 4 in more depth.

Disclosing Back-End (ADF) Applications Using Web Services

ADF Mobile can access back-end systems solely through web services. We will not go into the details of creating and consuming these web services, as this topic has already been covered in various articles and videos on both creating and consuming web services. When disclosing your back-end system, you need to choose between SOAP and RESTful web services, and within RESTful web services between XML and JSON payloads. A-Team recommends using REST-JSON web services. RESTful services are simply easier to create and use, and the JSON data exchange format is much more efficient than the more verbose XML, reducing the size of the data packages that are sent over the wire and improving overall performance. In addition, with its origins in JavaScript, JSON integrates nicely with many client-side web frameworks, which might be useful when you want to build some non-ADF web applications on top of your back-end data. You might notice that most of the above links show SOAP-based or REST-XML web services. This might be related to the fact that declarative support for SOAP-based web services in JDeveloper and ADF Mobile is currently better. To consume a RESTful web service you always need to do Java coding. However, the declarative SOAP-based data control does not implement data caching, so you still need to do the same amount of Java coding when you go for strategies 2, 3 or 4. Furthermore, you will see a strong focus on enhancing support for REST-JSON in upcoming JDeveloper versions. A new REST data control is planned, as well as full declarative support to create REST-JSON web services on top of ADF Business Components. For a sneak preview of how that functionality will look, you can take a look at this presentation.

Implementing Strategy 2: On-Device Caching Using SQLite Database

To cache data in a secure way on the mobile device, we recommend using the embedded SQLite database. This self-contained database can be created on-the-fly by your mobile application and does not require any installation steps beforehand. Your data can be stored securely by using one of the supported data encryption algorithms, and can be stored and retrieved using plain JDBC statements. For more information on using SQLite, check out this video by ADF product manager Frederic Desbiens.

You typically create so-called service object classes that contain logic for calling the web services and for reading data from and writing data to the local database. The service object also contains the CRUD operations that are exposed through an ADF data control to allow you to build your ADF Mobile AMX pages using drag and drop actions. To read data, the service object includes getter methods that return an array of so-called data objects. A data object (also known as an entity object or domain object) typically maps 1:1 to an underlying database table, and each instance of a data object represents a row of data. The data object contains attribute getter/setter methods that map to columns in the underlying database table. A data object can also contain methods that return a collection of other data objects to implement master-detail-like object structures. This data caching architecture is illustrated by the picture below. [Image: ServiceAndDataObject]

As explained in the links included in the previous section, ADF Mobile provides utility methods to call REST and SOAP web services programmatically. The RestServiceAdapter class can be used to call REST web services. The technique for calling a SOAP-based web service might be a bit confusing: you first create a data control for the web service, and then you programmatically invoke a method on the data control, for example to retrieve the data. So, you do not use the web service data control to create the user interface; it is only created so you can easily invoke the SOAP web service. The data control you create for the service object is the one you will use to build the user interface. With this architecture in place, the main steps to implement on-device data caching include:
  • Create a set of database tables that match the structure of your web service payload.
  • Create a set of data objects that map to your database tables and web service payload (see the sketch after this list)
  • Create service objects (with helper classes to better organize your logic) that include functionality to call the web service, and store the payload returned in an array of Java data objects and persist the same data using JDBC in your on-device database so the data will be preserved when the user closes the application.
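To make the data object concept concrete, here is an illustrative sketch of what such a class might look like for a DEPARTMENTS table; the attribute set is deliberately minimal, and the real classes are generated by the wizard described below:
package oracle.demo.hrcrud.mobile.model;

// Illustrative sketch of a data object; each instance represents one row
// of the DEPARTMENTS table, and each getter/setter pair maps to a column.
public class Department {
    private Integer departmentId;
    private String departmentName;

    public Integer getDepartmentId() {
        return departmentId;
    }

    public void setDepartmentId(Integer departmentId) {
        this.departmentId = departmentId;
    }

    public String getDepartmentName() {
        return departmentName;
    }

    public void setDepartmentName(String departmentName) {
        this.departmentName = departmentName;
    }
}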
Writing the code to perform the above steps is not difficult. However, it is tedious work and coding low-level JDBC statements might feel like we are going back 10 years in Java programming. It would be nice to have some lightweight object relational mapping tool that saves us most of this repetitive Java programming, right? Well, this is exactly what the A-team developed as part of a proof of concept for some of their customers.

The A-Team ADF Mobile Persistence Sample

The ADF Mobile Persistence Sample is a lightweight persistence and data synchronization framework developed by the A-Team. It is powerful and sophisticated sample code that you can (partially) reuse, extend or copy as desired. Note, however, that it has not been extensively tested, nor has it been used in production applications. You use it at your own risk and there is no support channel available, but we are always interested in feedback, which you can provide by commenting on this article.

The sample code contains generic Java code that invokes the web services and performs CRUD operations against the SQLite database. You only need to create a service object with a few lines of Java code that extends the EntityCRUDService class from the sample runtime code. This superclass contains methods to insert, find, update, delete and merge data objects. The implementation of these methods depends on so-called persistence manager classes, also included in the sample runtime code, that you can plug into your service object. By default, a DBPersistenceManager is configured which implements the service object CRUD operations by constructing SQL SELECT, INSERT, UPDATE and DELETE statements. You can also configure a remote persistence provider which implements the service object CRUD operations by making the appropriate web service calls. These generic persistence provider classes are able to generate the correct SQL statements and do the conversion from the XML or JSON payload to the data objects and vice versa, by reading a persistence mapping XML file that describes how tables map to data objects, and how columns map to data object attributes and payload attributes. The overall structure of your ADF Model layer looks like this when using the sample code to provide CRUD operations for departments: [Image: PersistenceArchitecture]

You might wonder how the data objects and mapping file are created, if you only need to create the service object yourself. Well, we leverage an existing wizard in JDeveloper, the "Java Objects from Tables" wizard. This wizard creates a TopLink-based mapping XML file as well as your data objects. Note that apart from this XML file and the generated Java domain classes, no TopLink libraries are used in the persistence sample code. The generated data object classes are slightly modified using a utility included with the sample to remove TopLink API-specific references and replace them with references to the generic ADF Mobile Persistence classes. The overall development process for the ADF Model layer can be summarized as follows when using the ADF Mobile Persistence Sample code:
  • Create a DDL file that creates a database structure that can store the payload of the web services you are going to use. This DDL file is used to create the database on the mobile device.
  • Create the same database structure on a database you can access from your JDeveloper machine
  • Run the "Java Objects from Tables" wizard and select the tables you just created in the previous step
  • Run the "Enable for Mobile Persistence" utility to remove TopLink API references in the generated data objects
  • Create the service object that extends the EntityCRUDService
  • Configure the remote persistence provider in the service object
  • Create a data control for the service object
You are then ready to create the user interface using drag and drop actions from the data control palette, as illustrated by the pictures below. [Images: BuildUserInterface, BuildUserInterface2, BuildUserInterface3] The only Java code you need to write to get this CRUD application for departments that caches all data in the on-device database is in the DepartmentService class, which looks like this:
package oracle.demo.hrcrud.mobile.model.service;

import oracle.ateam.sample.mobile.persistence.manager.RestJSONPersistenceManager;
import oracle.ateam.sample.mobile.persistence.service.EntityCRUDService;
import oracle.demo.hrcrud.mobile.model.Department;

public class DepartmentService extends EntityCRUDService {
    private static final String WRITE_REQUEST_URI = "/hrdemorest/rest/json/WriteDeps";
    private static final String WRITE_PARAM_NAME = "json";
    private static final String READ_REQUEST_URI = "/hrdemorest/rest/json/ReadDeps";
    private static final String REMOVE_REQUEST_URI = "/hrdemorest/rest/json/RemoveDeps";
    private static final String ROOT_ELEMENT = "DepartmentsView";

    public DepartmentService() {
        RestJSONPersistenceManager remotePersistenceManager =
            new RestJSONPersistenceManager(READ_REQUEST_URI, WRITE_REQUEST_URI, WRITE_PARAM_NAME, REMOVE_REQUEST_URI,
                                           ROOT_ELEMENT);
        setRemotePersistenceManager(remotePersistenceManager);
        setDoRemoteReadInBackground(true);
        setDoRemoteWriteInBackground(true);
        super.findAll();
    }

    protected Class getEntityClass() {
        return Department.class;
    }

    protected String getEntityListName() {
        return "departments";
    }

    public Department[] getDepartments() {
        Department[] departments = (Department[])getEntityList().toArray(new Department[getEntityList().size()]);
        return departments;
    }

    public void addDepartment(int index, Department department) {
        addEntity(index, department);
    }

    public void removeDepartment(Department department) {
        removeEntity(department);
    }
}
In the constructor we configure the remote persistence provider, which is a REST-JSON web service in this example. The getEntityClass and getEntityListName methods are abstract methods in the superclass that must be implemented. The getDepartments method converts the list of departments into an array of departments, because JDK 1.4 is used on the mobile device, which does not support typed collections. The addDepartment and removeDepartment methods are automatically called by the ADF Mobile framework when using the standard Create and Delete operations from the data control palette, so we can keep the list of departments up to date.

Implementing Strategies 3 and 4: Data Synchronization

As illustrated above, data caching strategy 2, with offline reads and online writes, is straightforward to implement with the help of the A-Team sample code. Implementing strategies 3 and 4 is much more complex. When you allow offline writes, you have to register pending transactions on the device and send them to the remote server later. This introduces many issues you have to think about and deal with, for example:
  • The pending transaction might be based on stale data: another user might have changed the same data in between the moment the mobile user created the transaction offline and the moment he sends it to the remote server when he is online again.
  • The pending transaction might be based on data that has been deleted from the server in the meantime.
  • The pending transaction might be refused by the server because some server-side business rules are violated.
  • The order of pending transactions might be important. So, if synchronization of one transaction fails, should you continue trying to synchronize the next transaction, possibly changing the transaction order when this next transaction succeeds?
In all these situations, the mobile user needs to be notified somehow about the failed transactions. You can choose to automatically remove failed transactions, or have the mobile user explicitly remove them from the list of pending data actions. This all depends on the type of application and how you want it to be used. To detect a transaction that is based on stale data, you might need to introduce a version number attribute or last-modified timestamp which is incremented/updated with each transaction. This version number or last-modified timestamp should be part of the payload of the cached transaction so that the server can check whether any other transactions have taken place on the same data in the meantime (a sketch of such a check appears at the end of this section).

The ADF Mobile persistence sample code helps you to a certain extent with the implementation of strategies 3 and 4. If the device is offline, or the server refuses the transaction for some reason, then the transaction will be registered as a pending transaction by the EntityCRUDService superclass and its helper classes. When the user later performs another transaction while the device is online again, a new attempt will be made to synchronize the pending transactions, in order of creation. When the user closes the mobile application, the pending transactions are persisted to the device (in an XML file, but this could easily be changed to save them in the database) and restored when the application is started again. So, the housekeeping of pending transactions is taken care of when using the sample code.

The sample code also includes a reusable ADF Mobile feature archive that you can plug into your application to make the pending transactions visible to the mobile user. You can add this feature from the archive to your application, and then add a button that invokes the feature. You can make this button render conditionally so it is only visible when there are pending changes:
<amx:commandButton text="Sync"  id="cb1" rendered="#{bindings.hasDataSynchActions.inputValue}"
                  actionListener="#{GoToFeature.goToDataSynchFeature}">
  <amx:setPropertyListener id="spl3" from="oracle.demo.hrcrud.mobile.model.Department"
                           to="#{applicationScope.dataSynchEntity}"/>
</amx:commandButton>
[Image: DataSynch] This is just one way you could present the pending transactions. Feel free to use this reusable feature, or just build your own mechanism to view and handle pending transactions. You can then still use the sample code for the internal housekeeping of pending transactions.
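Coming back to the staleness detection mentioned earlier, a server-side version check might look like the following sketch; the method and attribute names are hypothetical, and the real check belongs in the back-end service that receives the synchronized transaction:
// Hypothetical server-side staleness check; assumes each row carries a
// version number that is incremented with every committed transaction.
public void applyUpdate(Department incoming, Department current) {
    if (!incoming.getVersionNumber().equals(current.getVersionNumber())) {
        // another user changed this row after the mobile client cached it;
        // refuse the transaction so the client can report it as failed
        throw new IllegalStateException("Department " + incoming.getDepartmentId() +
                                        " was modified by another user; refresh and retry.");
    }
    // ...apply the changes and increment the version number...
}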

Getting Started with the Mobile Persistence Sample

The mobile persistence sample is packaged as a JDeveloper extension and comes with a comprehensive tutorial. So, the easiest way to get started is to install the extension and follow the tutorial. Here are the files you need for this:
  • adf-mobile-persistence-sample-install.zip (build 11.1.2.4.33): This zip file is the JDeveloper extension. Download it, start JDeveloper 11.1.2.4, go to the Help menu, choose "Check for Updates" and then choose the option "Install from local file". Select this zip file, and the mobile persistence sample code will be installed.
  • MobilePersistenceSampleTutorial.pdf: Step-by-step instructions for creating a mobile application from scratch that uses the generic persistence and data synchronization functionality of the sample.
  • TutorialFiles.zip: Supporting files needed for the tutorial.
  • HrCrudMobile: This zip contains the finished tutorial application (open HrCrudMobile.jws in JDeveloper).
  • HRDemoRest.zip: This is a "bigADF" web application that exposes ADF Business Components CRUD functionality as RESTful web services. See the tutorial (section 4) for instructions on running this app and using it as remote persistence provider.
  • adf-mobile-persistence-sample-source.zip: All source code of the mobile persistence sample, including the JDeveloper extension project and data synchronization feature
Note that the sample code requires the latest ADF Mobile extension (build 11.1.2.4.39.64.51) that became available in October 2013. You can also take a look at this presentation, which includes some more screenshots to illustrate the development process when using the ADF Mobile Persistence Sample code.

Webgate Reverse Proxy Farm

Introduction

Some of our larger deployments are seeing the benefits of centralizing their Webgate deployments onto a server farm. This post discusses the architecture and some recommendations for deploying it.

Main Article

First, what is a Webgate farm or Webgate Reverse Proxy farm? A Webgate farm is:
  • A series of web servers that are clustered on the basis of their protected applications.
  • These servers protect the same set of applications. It is not unheard of to create multiple farms for different sets of applications, say internal/external applications.
  • This architecture acts as a reverse proxy to back end applications.
[Image: WebProxy4] Some things to consider when deploying this architecture:
  • Which team manages the proxy farm? Will it be the Security team or the infrastructure team? Working knowledge of the farm infrastructure as well as proxy configuration is crucial.
  • Web server type. Any web server that Webgate supports will work. Many of our customers use Oracle HTTP Server (OHS) or Apache, which have good support for virtual hosting and reverse proxy configuration.
Some benefits in deploying a centralized Webgate farm:
  • Applying patches and upgrades is easier when the servers are in a known, central location, as opposed to being owned by application teams whose servers the Security team may not have access to.
  • Adding more servers and/or upgrading servers is easier in a clustered environment.
  • Aside from Webgates protecting applications, the web server reverse proxy configuration defines which applications to expose. This provides additional security by preventing inadvertent access.
Configuration considerations:
  • Use a single Webgate profile for all servers in the farm; otherwise you may see cookie decryption errors if the load balancer is not configured correctly.
  • Use 'SERVER_NAME' for the preferred host value when using Apache/OHS, and 'HTTP_HOST_HEADER' for all other web servers. This is used for virtual hosting. For more info on Host IDs, take a look at this post.

Use Case: Internal/External Webgate Scenario

Let's say you have an internal-facing application where both internal and external users are allowed access. The easiest implementation is to only allow access through the DMZ. This means that internal users must go back out to the DMZ to gain access. Done.

Now let's say the requirement is that internal users must access the site internally and external users through the DMZ. In 10g it was relatively easy to set this up (see diagram below). All that was needed was two sets of Webgates: one within the DMZ and a second internally. Once authenticated, the ObSSOCookie is easily consumed by any of the Webgates shown below, as long as the same OAM infrastructure supported both internal and external users/applications. [Image: WebProxy2]

In 11g, the cookie model is quite different, as you can read here. Supporting the above use case will not work out-of-the-box, nor is it recommended. However, it can be done with some caveats:
  • Set a user defined parameter in the Webgate configuration, filterOAMAuthnCookie=false
  • The Webgate profile for all Webgates depicted must be the same.
By default, the OAMAuthnCookie will not be passed along in the HTTP payload for security reasons; setting the parameter 'filterOAMAuthnCookie' to 'false' removes this condition. Also keep in mind that the Webgate profile must be the same for both sets of Webgates in order for Single Sign-On (SSO) to work. This may not be feasible depending on your topology/requirements. The recommended approach in 11g is to create a new end-point for the application. [Image: WebProxy3-1]

Notice the distinction: we again decouple the Webgate from the application and create another Webgate proxy farm (internally). Why? In 10g it was feasible for the Webgate plug-in to reside on the same web server as the application; it was also possible to have a single transaction going through two Webgates. In 11g this is not recommended; a single transaction should never pass through two Webgates.

Manual Recovery Mechanisms in SOA Suite and AIA


Introduction

Integration flows can fail at run-time with a variety of errors, caused by either business errors or system errors. When synchronous integration flows fail, they are restarted from the beginning. Asynchronous integration flows, on the other hand, can potentially be resubmitted/recovered from designated, pre-configured milestones within the flow when they error. These milestones could be persistence points like queues, topics or database tables, where the state of the flow was last persisted. Recovery is a mechanism whereby a faulted asynchronous flow can be rerun from such a persistence milestone.

The SOA Suite 11g and AIA products provide various automated and manual recovery mechanisms to recover from asynchronous fault scenarios. They differ based on the SOA component that encounters the error. For instance, recovering from a BPEL fault may be quite different from recovering from a Resequencer fault. In this blog, we look at the various manual recovery mechanisms and options available to an end user. Manual recovery mechanisms require an admin user to take appropriate action on the faulted instance from the Oracle Enterprise Manager Fusion Middleware Control [EM FMWC Console].

The intention of this blog is to provide a quick reference for manual recovery of faults within the SOA and AIA contexts. It aims to present in one place some of the valuable information regarding manual recovery that is currently spread across many sources, such as the SOA Developer's Guide, SOA Admin Guide, AIA FP Developer's Guide and AIA FP Infrastructure and Utilities Guide. Next we look at the various manual recovery mechanisms available in SOA Suite 11g and AIA, starting with BPEL Message Recovery.

BPEL Message Recovery

To understand BPEL Message Recovery, let us briefly look at how the BPEL Service Engine performs asynchronous processing. Asynchronous BPEL processes use an intermediate Delivery Store in the SOA Infrastructure Database to store the incoming request. The message is then picked up and further BPEL processing happens in an invoke thread, one of the free threads from the 'Invoke Thread Pool' configured for the BPEL Service Engine. The processing of the message from the Delivery Store onwards, until the next dehydration point in the BPEL process or the next commit point in the flow, constitutes a transaction. The figure below shows, at a high level, the asynchronous request handling by a BPEL invoke thread.

Any unhandled errors during this processing cause the message to roll back to the Delivery Store, which acts as a safe milestone for any errors that cause asynchronous BPEL processing to roll back. In such scenarios, the messages sitting in the Delivery Store can be resubmitted for processing using BPEL Message Recovery. The case of callback messages arriving for in-flight BPEL process instances is quite similar: the callback messages are persisted in the Delivery Store, and a free thread from the Engine Thread Pool performs correlation and asynchronously processes the callback activities. Callback messages from faulted activities remain available in the Delivery Store for recovery.

Refer to the section of the FMW Administrator's Guide at http://docs.oracle.com/cd/E28280_01/admin.1111/e10226/bp_config.htm#CEGFJJIF for details on configuring the BPEL Service Engine thread pools.

Recovery of these invoke/callback messages can be performed from the Oracle Enterprise Manager Fusion Middleware Control [EM FMWC Console] [SOA->Service Engine->BPEL->Recovery]. The admin user can search for recoverable messages by filtering on the criteria available on this page. The figure below shows the BPEL Engine Recovery page, where the messages eligible for recovery are searched based on message type and state.

During recovery of these messages, the end user cannot make any modifications to the original payload. Messages marked recoverable can either be recovered or aborted; in the former case, the original message is simply redelivered for processing again. The BPEL configuration property 'MaxRecoverAttempt' determines the number of times a message can be recovered manually or automatically. Messages go to the exhausted state after reaching MaxRecoverAttempt; they can be selected and 'Reset' to make them available for manual/automatic recovery again.

In addition to invoke and callback messages, the BPEL Recovery console can also be used to recover activities that have an expiration time associated with them, like the Wait activity. Expired activities can be searched for and recovered; they are then rescheduled for execution. The BPEL Service Engine can also be configured to automatically recover failed messages, either on server startup or during scheduled time periods. Refer to the section of the FMW Administrator's Guide at http://docs.oracle.com/cd/E28280_01/admin.1111/e10226/bp_config.htm#CDEHIIFG for details on setting up auto recovery.

SOA Fault Recovery

The fault handling for invocations from SOA components can be enhanced, customized, and externalized by using the Fault Management Framework (FMF). We will not go into the details of the Fault Management Framework here; refer to the A-Team blog post at http://www.ateam-oracle.com/fault-management-framework-by-example for insights into FMF.

In short, FMF allows a fault policy with configurable actions to be bound to a SOA component. Policies can be attached at the composite, component, or reference level, and the configured actions are executed when the invocation fails. The available actions include retry, abort, human intervention, custom Java callout, and so on. When the action applied is human intervention, the faults become available for manual recovery from the Oracle Enterprise Manager Fusion Middleware Control [EM FMWC Console]. They show up as recoverable instances in the Faults tab of 'SOA->Faults and Rejected Messages', as shown in the figure below.


During recovery, the Admin user can opt for one of Retry, Replay, Abort, Rethrow, or Continue as the recovery option. For the Retry option, the EM user has access to the payload; the payload can be changed and resubmitted during recovery. While this is a useful feature, it could pose an audit/security issue from an administrative perspective if it is not properly controlled using users/roles. The retry action can also chain execution into a custom Java callout to do additional processing after a successful retry; this is selected from the 'After Successful Retry' option during retry. The custom Java callout must be configured in the fault policy file attached to the composite.
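For reference, here is a minimal sketch of a fault-policies.xml that routes remote faults to the human intervention action, making them recoverable from the EM console. The policy id and the fault name chosen here are illustrative; adapt them to your composite:
<?xml version="1.0" encoding="UTF-8"?>
<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.1" id="RecoverableFaultPolicy">
    <Conditions>
      <!-- Route remote (connectivity) faults to manual recovery in EM -->
      <faultName xmlns:bpelx="http://schemas.oracle.com/bpel/extension"
                 name="bpelx:remoteFault">
        <condition>
          <action ref="ora-human-intervention"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <Action id="ora-human-intervention">
        <humanIntervention/>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>
A companion fault-bindings.xml then associates this policy with the composite, component, or reference, as described in the FMF blog post referenced above.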

Resequencer Recovery

Mediator Resequencer groups that end up in Errored or Timed Out states can be recovered from the EM console by an Admin user. In fact, Resequencer faults have no other automated recovery mechanism and rely solely on manual recovery by the admin for their remediation. Mediator Resequencer faults can be searched and filtered from the Faults page of the Mediator component; the figure below shows a search of faults by resequencing group.

An Errored group can be recovered by choosing either Retry or Abort. Retry reprocesses the current failed message belonging to the faulted group. With Abort, the current failed message is marked as failed and processing resumes from the next available in-sequence message for the faulted group. In both cases the group itself is unlocked and set to ready so that it can process further messages. As can be seen in the figure below, the Admin user can modify the request payload during this recovery.

In the case of the Standard Resequencer, groups can end up Timed Out when the next in-sequence message does not arrive within the timeout period. Such groups can be recovered by skipping the missing message, as shown in the figure below; processing of the group then continues from the next message rather than waiting for the missing sequence id.

AIA Message Resubmission

This section deals with integrations built using the AIA Foundation Pack. Refer to the AIA Concepts and Technologies Guide at http://docs.oracle.com/cd/E28280_01/doc.1111/e17363/toc.htm to become familiar with AIA concepts. Let us look at a common design pattern employed in AIA integrations. The figure below, taken from the AIA Foundation Pack Developer's Guide, shows an architecture used for guaranteed message delivery between source and target applications with no intermediate persistence points. The blocks shown are SOA Suite composites. The source and target milestones are persistence points such as queues, topics, or database tables. The same design can also be enhanced to have multiple intermediate milestones for more complex flows. Such flows are commonly seen in AIA Pre-Built Integrations, which use asynchronous flows to integrate systems, e.g. the Communications O2C Integration Pack for Siebel, BRM, and OSM. Refer to the Pre-Built Integrations documentation here: http://www.oracle.com/technetwork/apps-tech/aia/documentation/index.html The salient points of this design are:
  • A single transaction performs the message consumption from the source milestone, the processing, and the message delivery to the target milestone
  • All blocks are implemented using SOA Suite composites
  • Any errors/faults in processing roll back the message all the way to the source milestone
  • Milestone Queues and Topics are configured with Error Destinations to hold the rolled back messages for resubmission.
  • An enhanced Fault message (AIAFault) is raised and stored in the AIA Error Topic. This Fault has sufficient information to resubmit the message from the Source milestone.
The faults can be recovered using the AIA Error Resubmission Utility. In AIA Foundation Pack 11.1.1.7.0 the AIA Error Resubmission Utility is a GUI utility that can be used for single or bulk fault recoveries. It can be accessed from AIA Home -> Resubmission Utility in the AIA application, as shown in the figure below. Earlier versions of AIA Foundation Pack 11g only have a command-line utility for error resubmission, available at <AIA_HOME>/util/AIAMessageResubmissionUtil. 11-2

Any fault within the flow rolls back to the previous milestone or recovery point and enables resubmission from that point. The milestones can be queues, topics, or AQ destinations. The queues and topics designed to be milestones are associated with corresponding error destinations, which is where the faulted messages reside. For a queue or topic, the AIA Resubmission Utility simply redelivers the messages from the error destination back to the milestone destination for reprocessing. In the case of Resequencer errors, the Resequencer is the recovery point and holds the message for resubmission. Note that the Resequencer is not typically designed as a milestone in the flow but acts as a recovery point for Resequencer errors; for such errors, the AIA Resubmission Utility recovers the failed Resequencer message and also unlocks the faulted Resequencer group for further processing.

It is important to note that the AIA Error Handling and Resubmission mechanism is a designed solution: it relies on the integration implementing the principles and guidelines of the AIA Foundation Pack and the AIA Guaranteed Message Delivery pattern for its accurate functioning. Refer to the AIA Foundation Pack Infrastructure and Utilities Guide at http://docs.oracle.com/cd/E28280_01/doc.1111/e17366/toc.htm for details of the AIA Error Handling Framework and AIA Resubmission Utility. Refer to the AIA Foundation Pack Developer's Guide at http://docs.oracle.com/cd/E28280_01/doc.1111/e17364/toc.htm for implementing AIA error handling and recovery for the Guaranteed Message Delivery pattern.

Use Case: Message Resubmission with Topic Consumers

Let us next look at a use case from one of our customer engagements: a custom integration developed using the AIA Guaranteed Message Delivery pattern and employing the AIA Resubmission Utility for recovery. We can see how the recovery mechanisms above offer different options when designing a typical integration flow. Without going deep into the details, the figure below shows at a high level the design used for delivering messages to three end systems using a topic and three topic consumers. The BPEL ABCS components consume the canonical message, convert it to the respective application-specific formats, and deliver it to the end systems. The requirement was to guarantee delivery of the message to each of the three systems within the flow, which the design achieves under normal circumstances. AIA3

However, issues were observed at run-time in failure cases. When message delivery fails for one of the systems, e.g. System B, the design causes a rollback of the message to the previous milestone, which in this case is the topic. The rolled-back message residing in the error destination is then redelivered to the topic, where it is picked up for processing again by all three topic consumers, causing duplicate messages to be delivered to Systems A and C.

This issue can be addressed in a few ways:

1) Introduce an intermediate milestone after the message is consumed off the topic. For instance, we could introduce a queue to hold the converted messages (indicated by point 1 in the figure).

2) Use separate queues instead of a topic to hold the canonical messages.

In case of failure, only the message in the failed branch would then have to be recovered using AIA Message Resubmission, as seen in the section above.

However, both of these options introduce additional queues that need to be maintained by the operations team. Also, if an additional end system were introduced in the future, it would necessitate adding a new queue in addition to a new JMS consumer and ABCS component.

3) Introduce a transaction boundary: This can be done by changing the BPEL ABCS component to use an asynchronous one-way delivery policy. In this case, any failure causes the message to roll back not to the topic but to the internal BPEL Delivery Store (indicated by point 2 in the figure). These messages can then be recovered manually using BPEL Message Recovery, as we saw in the first section above. The recovery is limited to the faulted branch of the integration.
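As a sketch of option 3, the one-way delivery policy is set as a BPEL component property in composite.xml; the component name below is illustrative:
<!-- In composite.xml: roll failed messages back to the BPEL Delivery Store -->
<component name="SystemB_ABCS">
  <implementation.bpel src="SystemB_ABCS.bpel"/>
  <property name="bpel.config.oneWayDeliveryPolicy">async.persist</property>
</component>
With async.persist, the incoming message is persisted in the Delivery Store before processing, so a downstream failure leaves it recoverable from the BPEL Recovery console rather than rolling all the way back to the topic.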

4) Another option is to employ fault policies. We can attach a fault policy to the BPEL ABCS component that invokes the human intervention action for faults encountered during the end-system invoke. The message can then be manually recovered from the EM FMWC Console, as seen in the SOA Fault Recovery section above. This applies only to the faulted branches and hence avoids duplicate message delivery to the other end systems.

Another issue seen was that the end systems would lose messages that arrived while the consumers were offline. This problem can be addressed by configuring durable subscriptions for the topic consumers. In the absence of durable subscriptions, a topic discards a message once it has been delivered to all active subscribers. With durable subscribers, the message is retained until it has been delivered to all registered durable subscribers, hence guaranteeing message delivery. Refer to the Adapters User's Guide at http://docs.oracle.com/cd/E28280_01/integration.1111/e10231/adptr_jms.htm for details on configuring durable subscriptions for topic consumers.
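For illustration, a durable subscription is enabled through the DurableSubscriber property in the JMS Adapter's inbound .jca file. This is a minimal sketch; the adapter config, connection factory, destination, and subscriber names are hypothetical:
<!-- Inbound JMS Adapter .jca file for one topic consumer -->
<adapter-config name="ConsumeCanonicalMsg" adapter="JMS Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/jms/CanonicalTopicCF"/>
  <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DestinationName" value="jms/CanonicalTopic"/>
      <!-- Registers this consumer as a durable subscriber on the topic -->
      <property name="DurableSubscriber" value="SystemB_Subscriber"/>
      <property name="PayloadType" value="TextMessage"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
Each of the three consumers would register its own subscriber name, so that messages are retained for any consumer that is temporarily offline.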

Summary

The table below is a ready reckoner summarizing the different manual recovery options we have seen and their main characteristics.

In this blog, we have seen the various manual fault recovery mechanisms provided by SOA Suite and AIA 11g, the role the Admin user plays in performing recovery from the EM console, and the variety of options available to handle faults. This knowledge should enable us to design and administer robust integrations that have definite points of recovery. -Shreeni

Customizing Session Time Out Pop Ups


Introduction

In past posts, ATEAM has detailed how to customize the default behavior of the out-of-the-box (OOTB) session time out popups: tips-on-dealing-with-the-session-time-out-popup how-to-create-a-custom-session-timeout-page-for-webcenter-spaces An item that ATEAM has frequently been asked about is how to customize the default strings in the OOTB session time out popups. This blog post details the steps to create custom resource strings that override the default values.

Main Article

By default, the session time out popups appear as follows: timeout_resbundle_1 timeout_resbundle_2 The good news is that all of the text-based resource strings (e.g. Expiration Warning, etc.) shown here are customizable.  The unfortunate news is that nowhere in the documentation is it described how to modify them. These resource strings are actually contained in a bundle (RichBundle.java), which is declared in the skin (CSS).  So in order to override the defaults, a custom resource bundle must be created and then declared in a custom skin.

The custom skin declaration in trinidad-skins.xml should also extend the default skin, so that the other default resource strings are picked up.

For example, here is a trinidad-skins.xml that declares the custom resource bundle:
<?xml version="1.0" encoding="windows-1252" ?>
<skins xmlns="http://myfaces.apache.org/trinidad/skin">
  <skin>
    <id>myskin.custom.desktop</id>
    <family>myskindemo</family>
    <render-kit-id>org.apache.myfaces.trinidad.desktop</render-kit-id>
    <style-sheet-name>css/myskindemo.css</style-sheet-name>
    <extends>fusionFx-v1.desktop</extends>
    <bundle-name>com.ateam.view.MyBundleOverride</bundle-name>
  </skin>
</skins>
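The skin family is then selected in trinidad-config.xml; a minimal sketch, assuming the family name declared above:
<?xml version="1.0" encoding="windows-1252" ?>
<trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
  <!-- Selects the custom skin declared in trinidad-skins.xml -->
  <skin-family>myskindemo</skin-family>
</trinidad-config>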
Once the skin is declared in trinidad-skins.xml and selected as shown above, the last step is to create the resource bundle file that will hold the custom strings.  For this example, I have created a Java-based version for declaring the strings in the resource bundle. There are also other ways to declare the strings (e.g. .properties and .xliff).  Please refer to the documentation on how to create the different variations and on how to ensure support for both internationalization and localization. Now that the configuration is done to override the default bundle, the next step is to declare the strings themselves. OOTB the six default resource strings used, which are not declared in the documentation, are as follows:
af_document.PRE_SESSION_TIMEOUT_CONFIRM_TITLE = Expiration Warning

af_document.PRE_SESSION_TIMEOUT_MSG = This page will expire unless a response is received within {0} minutes. Click OK to prevent expiration.

af_document.PRE_SESSION_TIMEOUT_MSG_SECOND = This page will expire unless a response is received within {0} seconds. Click OK to prevent expiration.

af_document.POST_SESSION_TIMEOUT_MSG = The page has expired

af_document.POST_SESSION_TIMEOUT_MSG_CONTINUE = Click OK to continue.

af_document.POST_SESSION_TIMEOUT_ALERT_TITLE = Page Expired
Here is an example of the MyBundleOverride.java, which contains the custom resource strings:
package com.ateam.view;

import java.util.ListResourceBundle;

public class MyBundleOverride extends ListResourceBundle {
    @Override
    public Object[][] getContents() {
        return _CONTENTS;
    }

    static private final Object[][] _CONTENTS =
    { 
      { "af_document.PRE_SESSION_TIMEOUT_CONFIRM_TITLE",
        "af_document.PRE_SESSION_TIMEOUT_CONFIRM_TITLE : Custom Expiry Warning" },
      { "af_document.PRE_SESSION_TIMEOUT_MSG",
        "PRE_SESSION_TIMEOUT_MSG : Custom: within {0} minutes. Click OK to prevent expiration." },
      { "af_document.PRE_SESSION_TIMEOUT_MSG_SECOND",
        "PRE_SESSION_TIMEOUT_MSG_SECOND : Custom: This page will expire unless a response is received within {0} seconds. Click OK to prevent expiration." },
      { "af_document.POST_SESSION_TIMEOUT_MSG",
        "POST_SESSION_TIMEOUT_MSG : This is a custom message from MyOverride Bundle" },
      { "af_document.POST_SESSION_TIMEOUT_MSG_CONTINUE",
        "POST_SESSION_TIMEOUT_MSG_CONTINUE : Custom continue Message" },
      { "af_document.POST_SESSION_TIMEOUT_ALERT_TITLE",
        "POST_SESSION_TIMEOUT_ALERT_TITLE : Custom Page Expired Title" }
      };
}
This will produce the following results: timeout_resbundle_5 timeout_resbundle_4

The browser cache may need to be cleared in order for the new strings to display correctly.

Exceptions Handling and Notifications in ODI


Introduction

ODI processes are typically at the mercy of the health of the entire IT infrastructure: if source or target systems become unavailable or network incidents occur, ODI processes will be impacted. When this is the case, it is important to make sure that ODI can notify the proper individuals. This post reviews the techniques available in ODI to guarantee that such notifications occur reliably.

Exceptions Handling and Notifications in ODI

When you create an ODI package, one of the ODI Tools available to you for notifications is OdiSendMail: OdiSendMail Figure 1: OdiSendMail Specify the SMTP server, the to and from parameters, and the subject and message for your email, and you can alert operators, administrators, or developers when errors occur in your packages.

One challenge, though, is that if you want an email to be sent for any possible error in your package, each and every step of the package must be linked to this tool in case of error. This can quickly become overwhelming, and there is no guarantee that a step will not be forgotten by a developer along the way. Error Processing in Packages Figure 2: Sending an email for every step that ends up in error

Another caveat with sending a notification email from the package itself is that the final status of the package will be successful as long as sending the email is successful. If you want the package to fail, you now have to raise an exception after sending the email to force the package to end in an error state. If we look back at the original objective, all we really want is to send a notification no matter what error occurs. So ideally we want the package to actually fail, and only after it has failed should we send the notification email. This is exactly what we can do with Load Plan Exceptions.
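As a reference, an OdiSendMail step in a package (as in Figure 2) might look like the following sketch. The SMTP host and addresses are illustrative, and the odiRef.getPrevStepLog() substitution calls, which retrieve the name and error message of the preceding failed step, assume the tool is linked after that step:
OdiSendMail "-MAILHOST=smtp.mycompany.com" "-FROM=odi@mycompany.com" "-TO=operators@mycompany.com" "-SUBJECT=ODI error in <%=odiRef.getSession("SESS_NAME")%>"
Step <%=odiRef.getPrevStepLog("STEP_NAME")%> failed with error:
<%=odiRef.getPrevStepLog("MESSAGE")%>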

Load Plans and Exceptions

If we handle the exceptions at the load plan level, there is no need to send the notification emails from the original package. What we must do now is create two separate packages: one to process the data, and another to handle the exceptions.   Package without notifications Figure 3: package without notification emails   Notification Package Figure 4: package dedicated to notification emails   There are two main aspects to a load plan:
  • Steps: a set of scenarios that must be orchestrated, whether executions are serialized or parallelized
  • Exceptions: a series of scenarios that can be used in case one of the steps fails to execute properly.
LP Steps Figure 5: Load plan steps For instance, in Figure 6 below, we have created an exception called Email Notification. This exception runs a dedicated scenario whose sole purpose is to send a notification email; the scenario here is generated from the package shown earlier in Figure 4.   Exception Scenario Figure 6: Load plan exception When you edit the properties of the steps of the load plan, you can choose whether or not they will trigger the execution of an exception scenario. For each step, including the root step, you can define the behavior in case of error:
  • Choose the exception scenario to run (or leave blank)
  • Run the exception scenario (if one is selected) and stop the execution of the load plan at this point
  • Run the exception scenario (if one is selected) and proceed with the rest of the load plan.
Select Exceptions for Steps Figure 7: selection of an exception scenario If you select an exception at the root level, it becomes the default exception for the entire load plan unless you override it with a different selection in the properties of individual branches or steps of the load plan. This guarantees that no matter what fails in the scenarios used in the load plan, the notification email is always sent. This includes corner cases where the scenarios would not even start, for instance in case of errors in Topology definitions such as missing physical schema definitions. In Figure 8 below we can see steps of the load plan where the executed scenario fails, raising an exception:   Failed Execution with exception handling notification Figure 8: scenario step raising an exception We can also look at the scenario sessions to see the original scenario failure and the execution of the notification scenario:   Sessions view of executions Figure 9: scenario sessions for the failed scenario and the notification scenario.

Conclusion

Leveraging exception handling in Load Plans is a very straightforward and efficient way to ensure that no matter what fails in your ODI processes, notifications are sent reliably and consistently.

Using ODI with a Development Topology that Doesn’t Match Production Topology


Introduction

The cost of reproducing a production environment for the purpose of developing and testing data integration solutions can be prohibitive. A common approach is to downscale the development environment by grouping databases and applications. The purpose of this post is to explain how to define the ODI Topology when the physical implementation differs between the development and production environments.

Using ODI with a Development Topology that Doesn’t Match Production Topology

Let’s consider a setup where the production environment uses three distinct database installations to host data that needs to be aggregated. By reducing the volume of data, it is possible to group all three databases in one single installation that developers can leverage as they write their data integration code. This simplifies the infrastructure maintenance and greatly reduces the overall cost. We have represented such a setup in figure 1 below. development doesn't match production Figure 1: Simplified architecture to reduce the cost of development The challenge with such a setup will be to make sure that the code designed in the development environment will run properly in the production environment. As a point of reference, an ODI Topology that would match the above infrastructure is represented in figure 2. ODI view of different development / Production Figure 2: Different Topology layout for development and production environments  

ODI Code Generation

The SQL code that ODI generates is optimized for the infrastructure where the code runs. If we look at the development environment represented above, best practices recommend that we create a dedicated login for the database, and use that single login both to access data from the two source schemas and to write to the target schema. From an ODI Topology perspective, this means that there is one single data server under which we define three physical schemas. Based on this Topology declaration, ODI optimizes the data movement when you create your mappings, making sure that data flows as quickly as possible from the source schemas to the target schema. In particular, ODI does not use any LKM for the source schemas: all data is already in the database, so there is no need to stage the data in a C$ table (or C$ view or C$ external table). Sources and target in the same database Figure 3: Source and target schemas in the same database   Conversely, if the data happens to be physically located on separate servers, then ODI automatically introduces LKMs to bring the data into the target server.   Source and Target in Separate Databases Figure 4: Separate databases for the two sources and the target schema

The challenge with topology discrepancies

If the development environment matches the architecture shown in Figure 3, and the production environment matches the architecture shown in Figure 4, the scenarios generated in the development environment cannot run in the production environment as is. But the last thing we want to do is redesign the developed code as we promote it from environment to environment. With the challenge now clearly stated, let's see what ODI provides to solve this conundrum.

Option 1: Optimization contexts

When you are building mappings, you can ask ODI to generate code for a specific execution environment, as long as all environments are defined in the same Master Repository. In this case, ODI contexts represent the different environments. If we look at the use case provided here as an example, we can ask ODI to generate the code explicitly for the production environment even though the development environment is much simpler. In other words, we can force the use of LKMs to better represent the reality of production even if they are not needed to process the data in the development environment. If you click on the Physical tab of your mappings in ODI 12c, you can see an option called Optimization Context (in earlier versions of ODI, the same option is available in the Overview tab of interfaces). If there is a discrepancy between the different environments, setting this option properly guarantees that the code always matches the layout of the environment of your choice.   Optimization Context Selection Figure 5: Optimization context selection One challenge with this approach, though, is that you will have to remember to select the proper optimization context for every single mapping that is created. Unless, of course, you use the production context as your default context for Designer in the User Preferences: select the Tools menu, then Preferences…; under ODI/System you can set the Default Context for Designer. Set Default Context Figure 6: ODI preferences to set the default context for Designer

For users of older releases of ODI, this parameter can be found under the ODI/User Parameter menu.

 

Option 2: Design the development Topology to match the production Topology in ODI

Another approach is to ignore the specifics of the simplified development environment and design everything so that it matches the production environment. The ODI best practice of declaring all schemas in the same database under a single data server in Topology has only one purpose: making sure that ODI generates the most optimized code for the environment. But if the production environment does not have all schemas in the same database, then we can create different data servers in the development environment so that ODI believes that there are different databases. In a more extreme example, we have combined all our source and target tables in the same database schema for our development environment. This does not prevent us from creating three separate database and schema definitions. We can then have three separate models that each contain only the relevant source or target tables in order to match the production environment: Development Simulates Production Figure 7: Comparing real data organization vs. simulated data organization If we use a Topology organization that always matches the production environment, then we never have to worry about setting the optimization contexts in any of the mappings.

Conclusion

It is possible for ODI to always generate code that matches your production environment infrastructure even if it differs from your development environment. Just make sure that you are aware of these discrepancies as you lay out your Topology environment so that you can select the approach that best fits the specifics of your projects.

Validating the Fusion Applications Security Components During Installations and Upgrades


Introduction

When installing or upgrading Fusion Applications, it is necessary to validate the security components to ensure that they are functioning correctly. This article provides a list of tasks that can be performed to accomplish this. The order of tasks below follows the dependencies that the components have on each other, so that if a fault is found the problematic component can be more easily identified. Prior to beginning validation, the components should be started in the following order:
  1. Database Listener
  2. Database
  3. Oracle Internet Directory Server (OID)
  4. Oracle Virtual Directory Server (OVD)
  5. Node Manager
  6. WebLogic Server (WLS)
  7. WLS Managed Servers (Oracle Directory Services Manager, Oracle Access Manager, Oracle Identity Manager, Oracle Service Oriented Architecture)
  8. Oracle HTTP Server (OHS)
 

Database

 

1. Check Database Listener

Check that the listener process is up:

[oracle@tester bin]$ ps -ef | grep LISTENER
oracle    5211     1  0 09:17 ?        00:00:00 /u01/app/oracle/idmdb/dbhome_1/bin/tnslsnr LISTENER -inherit
oracle    5238  5118  0 09:19 pts/1    00:00:00 grep LISTENER

Confirm that the listener is listening on the expected TCP port for the database:

[oracle@tester bin]$ netstat -an | grep 1521
tcp        0      0 :::1521                     :::*                        LISTEN
unix  2      [ ACC ]     STREAM     LISTENING     19247  /var/tmp/.oracle/sEXTPROC1521

2. Check Database Processes

Check that the database processes are up:

[oracle@tester bin]$ ps -ef | grep idmdb
oracle    5211     1  0 09:17 ?        00:00:00 /u01/app/oracle/idmdb/dbhome_1/bin/tnslsnr LISTENER -inherit
oracle    5389     1  0 09:23 ?        00:00:00 ora_pmon_idmdb
oracle    5391     1  0 09:23 ?        00:00:00 ora_psp0_idmdb
oracle    5394     1  0 09:23 ?        00:00:00 ora_vktm_idmdb
oracle    5398     1  0 09:23 ?        00:00:00 ora_gen0_idmdb
oracle    5400     1  0 09:23 ?        00:00:00 ora_diag_idmdb
oracle    5402     1  0 09:23 ?        00:00:00 ora_dbrm_idmdb
oracle    5404     1  0 09:23 ?        00:00:00 ora_dia0_idmdb
oracle    5406     1  9 09:23 ?        00:00:10 ora_mman_idmdb
oracle    5408     1  0 09:23 ?        00:00:00 ora_dbw0_idmdb
oracle    5410     1  0 09:23 ?        00:00:00 ora_lgwr_idmdb
oracle    5412     1  0 09:23 ?        00:00:00 ora_ckpt_idmdb
oracle    5414     1  0 09:23 ?        00:00:00 ora_smon_idmdb
oracle    5416     1  0 09:23 ?        00:00:00 ora_reco_idmdb
oracle    5418     1  0 09:23 ?        00:00:00 ora_mmon_idmdb
oracle    5420     1  0 09:23 ?        00:00:00 ora_mmnl_idmdb
oracle    5422     1  0 09:23 ?        00:00:00 ora_d000_idmdb
oracle    5424     1  0 09:23 ?        00:00:00 ora_s000_idmdb
oracle    5538     1  0 09:24 ?        00:00:00 ora_qmnc_idmdb
oracle    5553     1  0 09:24 ?        00:00:00 ora_cjq0_idmdb
oracle    5598     1  0 09:24 ?        00:00:00 ora_q000_idmdb
oracle    5602     1  0 09:24 ?        00:00:00 ora_q001_idmdb
oracle    5625     1  0 09:25 ?        00:00:00 ora_j000_idmdb
oracle    5627     1  0 09:25 ?        00:00:00 ora_j001_idmdb
oracle    5629     1  0 09:25 ?        00:00:00 ora_j002_idmdb
oracle    5635  5118  0 09:25 pts/1    00:00:00 grep idmdb

3. Perform tnsping on Database from Database and OID Servers

On the database server:

[oracle@tester bin]$ ./tnsping idmdb

TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 21-OCT-2013 09:28:21

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = tester.mycompany.com)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = idmdb)))
OK (30 msec)

On the OID server:

[oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
[oracle@tester config]$ $ORACLE_HOME/bin/tnsping //tester.mycompany.com:1521/idmdb

TNS Ping Utility for Linux: Version 11.1.0.7.0 - Production on 21-OCT-2013 09:38:18

Copyright (c) 1997, 2008, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=idmdb))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.217.142)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.217.142)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.217.142)(PORT=1521)))
OK (10 msec)

If OAM, OIM and SOA are on different servers, it is recommended that a similar check be made for them as well.  

Oracle Internet Directory (OID)

 

1. Check that LDAP/LDAPS Listeners and Processes are Up

For OID, use opmnctl and netstat to check the ports:

[oracle@tester bin]$ ./opmnctl status -l

Processes in Instance: oid1
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
oid1                             | oidldapd           |    6135 | Alive    |  345332946 |   846788 |   0:00:26 | N/A
oid1                             | oidldapd           |    6131 | Alive    |  345332945 |   846916 |   0:00:26 | N/A
oid1                             | oidldapd           |    6127 | Alive    |  345332944 |   909764 |   0:00:26 | N/A
oid1                             | oidldapd           |    6115 | Alive    |  345332943 |   845864 |   0:00:27 | N/A
oid1                             | oidldapd           |    6105 | Alive    |  345332942 |   325448 |   0:00:30 | N/A
oid1                             | oidmon             |    6074 | Alive    |  345332941 |   380332 |   0:00:34 | LDAPS:3131,LDAP:3060
EMAGENT                          | EMAGENT            |    6075 | Alive    |  345332940 |    63848 |   0:00:33 | N/A

[oracle@tester bin]$ netstat -an | grep 3060
tcp        0      0 :::3060                     :::*                        LISTEN
[oracle@tester bin]$ netstat -an | grep 3131
tcp        0      0 :::3131                     :::*                        LISTEN

2. Perform ldapbind over LDAP/LDAPS Ports

[oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
[oracle@tester config]$ cd $ORACLE_HOME/bin/
[oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 3060
Please enter bind password:
bind successful
[oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 3131 -U 1
Please enter bind password:
bind successful

3. Perform ldapsearch over LDAP/LDAPS Ports

[oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
[oracle@tester config]$ cd $ORACLE_HOME/bin/
[oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 3060 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
Please enter bind password:
cn=oaamadmin,cn=Users,dc=mycompany,dc=com
obpasswordexpirydate=2033-01-19T15:23:41Z
objectclass=top
objectclass=person
objectclass=organizationalPerson
objectclass=inetorgperson
objectclass=orcluser
objectclass=orcluserV2
objectclass=orclIDXPerson
objectclass=oblixPersonPwdPolicy
objectclass=oblixOrgPerson
objectclass=OIMPersonPwdPolicy
userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
orclpassword={x-orcldbpwd}1.0:8778E460077C8CAF
authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
authpassword;orclcommonpwd={X-ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
authpassword;orclcommonpwd={X-ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
authpassword;orclcommonpwd={X-ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
authpassword;orclcommonpwd={X-ORCLNTV}CF3A5525EE9414229E66279623ED5C58
orclsamaccountname=oaamadmin
mail=oaamadmin@company.com
orclisenabled=ENABLED
uid=oaamadmin
givenname=oaamadmin
sn=oaamadmin
cn=oaamadmin

[oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 3131 -U 1 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
Please enter bind password:
cn=oaamadmin,cn=Users,dc=mycompany,dc=com
obpasswordexpirydate=2033-01-19T15:23:41Z
objectclass=top
objectclass=person
objectclass=organizationalPerson
objectclass=inetorgperson
objectclass=orcluser
objectclass=orcluserV2
objectclass=orclIDXPerson
objectclass=oblixPersonPwdPolicy
objectclass=oblixOrgPerson
objectclass=OIMPersonPwdPolicy
userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
orclpassword={x-orcldbpwd}1.0:8778E460077C8CAF
authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
authpassword;orclcommonpwd={X-ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
authpassword;orclcommonpwd={X-ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
authpassword;orclcommonpwd={X-ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
authpassword;orclcommonpwd={X-ORCLNTV}CF3A5525EE9414229E66279623ED5C58
orclsamaccountname=oaamadmin
mail=oaamadmin@company.com
orclisenabled=ENABLED
uid=oaamadmin
givenname=oaamadmin
sn=oaamadmin
cn=oaamadmin

 

Oracle Virtual Directory (OVD)

 

1. Check that LDAP/LDAPS/Admin Listeners and Processes are Up

For OVD, use opmnctl and netstat to check the ports. Note that OVD also has an Admin port for ODSM connections to OVD:

[oracle@tester bin]$ cd /u01/app/oracle/admin/ovd1/bin/
[oracle@tester bin]$ ./opmnctl startall
opmnctl startall: starting opmn and all managed processes...
[oracle@tester bin]$ ./opmnctl status -l

Processes in Instance: ovd1
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ovd1                             | OVD                |   14828 | Alive    |  391195326 |   662832 |   0:00:30 | https:8899,ldap:6501,ldaps:7501
EMAGENT                          | EMAGENT            |   14829 | Alive    |  391195325 |    63848 |   0:00:30 | N/A

[oracle@tester bin]$ netstat -an | grep 6501
tcp        0      0 ::ffff:192.168.217.142:6501 :::*                        LISTEN
[oracle@tester bin]$ netstat -an | grep 7501
tcp        0      0 ::ffff:192.168.217.142:7501 :::*                        LISTEN
[oracle@tester bin]$ netstat -an | grep 8899
tcp        0      0 ::ffff:192.168.217.142:8899 :::*                        LISTEN

2. Perform ldapbind over LDAP/LDAPS Ports

[oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
[oracle@tester config]$ cd $ORACLE_HOME/bin/
[oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 6501
Please enter bind password:
bind successful
[oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 7501 -U 1
Please enter bind password:
bind successful

3. Perform ldapsearch over LDAP/LDAPS Ports

[oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 6501 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
Please enter bind password:
cn=oaamadmin,cn=Users,dc=mycompany,dc=com
authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
authpassword;orclcommonpwd={X-ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
authpassword;orclcommonpwd={X-ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
authpassword;orclcommonpwd={X-ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
authpassword;orclcommonpwd={X-ORCLNTV}CF3A5525EE9414229E66279623ED5C58
orclisenabled=ENABLED
orclsamaccountname=oaamadmin
sn=oaamadmin
mail=oaamadmin@company.com
userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
givenname=oaamadmin
uid=oaamadmin
authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
orclpassword={x-orcldbpwd}1.0:8778E460077C8CAF
obpasswordexpirydate=2033-01-19T15:23:41Z
cn=oaamadmin
objectclass=top
objectclass=person
objectclass=organizationalPerson
objectclass=inetorgperson
objectclass=orcluser
objectclass=orcluserV2
objectclass=orclIDXPerson
objectclass=oblixPersonPwdPolicy
objectclass=oblixOrgPerson
objectclass=OIMPersonPwdPolicy

[oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 7501 -U 1 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
Please enter bind password:
cn=oaamadmin,cn=Users,dc=mycompany,dc=com
authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
authpassword;orclcommonpwd={X-ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
authpassword;orclcommonpwd={X-ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
authpassword;orclcommonpwd={X-ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
authpassword;orclcommonpwd={X-ORCLNTV}CF3A5525EE9414229E66279623ED5C58
orclisenabled=ENABLED
orclsamaccountname=oaamadmin
sn=oaamadmin
mail=oaamadmin@company.com
userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
givenname=oaamadmin
uid=oaamadmin
authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
orclpassword={x-orcldbpwd}1.0:8778E460077C8CAF
obpasswordexpirydate=2033-01-19T15:23:41Z
cn=oaamadmin
objectclass=top
objectclass=person
objectclass=organizationalPerson
objectclass=inetorgperson
objectclass=orcluser
objectclass=orcluserV2
objectclass=orclIDXPerson
objectclass=oblixPersonPwdPolicy
objectclass=oblixOrgPerson
objectclass=OIMPersonPwdPolicy

 

Node Manager

 

1. Check that Listener and Process are Up

[oracle@tester oracle]$ ps -ef | grep nodemanager
oracle   16666     1  4 10:30 pts/4    00:00:05 /u01/app/oracle/product/fmw/jdk6/jre/bin/java -classpath /u01/app/oracle/product/fmw/jdk6/jre/lib/rt.jar:/u01/app/oracle/product/fmw/jdk6/jre/lib/i18n.jar:/u01/app/oracle/product/fmw/patch_wls1036/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/patch_ocp371/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/jdk6/lib/tools.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic_sp.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.jar:/u01/app/oracle/product/fmw/modules/features/weblogic.server.modules_10.3.6.0.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/webservices.jar:/u01/app/oracle/product/fmw/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/u01/app/oracle/product/fmw/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:/u01/app/oracle/product/fmw/utils/config/10.3/config-launch.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbynet.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbyclient.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbytools.jar -DListenAddress=ADMINVHN.mycompany.com -DNodeManagerHome=/u01/app/oracle/product/fmw/wlserver_10.3/common/nodemanager -DQuitEnabled=true -DListenPort=5556 weblogic.NodeManager -v
oracle   16895 16543  0 10:32 pts/4    00:00:00 grep nodemanager

[oracle@tester oracle]$ netstat -an | grep 5556
tcp        0      0 ::ffff:192.168.217.142:5556 :::*                        LISTEN

2. Perform nmConnect via WLST

[oracle@tester bin]$ export MW_HOME=/u01/app/oracle/product/fmw
[oracle@tester bin]$ cd $MW_HOME/oracle_common/common/bin
[oracle@tester bin]$ ./wlst.sh
CLASSPATH=/u01/app/oracle/product/fmw/patch_wls1036/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/patch_ocp371/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/jdk6/lib/tools.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic_sp.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.jar:/u01/app/oracle/product/fmw/modules/features/weblogic.server.modules_10.3.6.0.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/webservices.jar:/u01/app/oracle/product/fmw/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/u01/app/oracle/product/fmw/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar::/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrf-wlstman.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/lib/adfscripting.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/lib/adf-share-mbeans-wlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/lib/mdswlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/auditwlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/igfwlsthelp.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/jps-wlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/jrf-wlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/oamap_help.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/oamAuthnProvider.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/ossoiap_help.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/ossoiap.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/ovdwlsthelp.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/sslconfigwlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/wsm-wlst.jar:/u01/app/oracle/product/fmw/utils/config/10.3/config-launch.jar::/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbynet.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbyclient.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbytools.jar::

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> nmConnect('nmAdmin','Welcome1','tester.mycompany.com','5556','IDMDomain','/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain')
Connecting to Node Manager ...
Successfully Connected to Node Manager.
wls:/nm/IDMDomain> nmStart('AdminServer')
Starting server AdminServer ...
Successfully started server AdminServer ...
wls:/nm/IDMDomain> nmKill('AdminServer')
Killing server AdminServer ...
Successfully killed server AdminServer ...
wls:/nm/IDMDomain> exit()
Exiting WebLogic Scripting Tool.
[oracle@tester bin]$

 

WebLogic Server (WLS)

 

1. Check AdminServer Listener and Process

[oracle@tester ~]$ ps -ef | grep AdminServer
oracle   18303 18249 23 10:47 ?        00:02:25 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=AdminServer -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/AdminServer/security/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/AdminServer -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/AdminServer/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=AdminServer -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans... -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=true -Dwlw.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=/u01/app/oracle/product/fmw/patch_wls1036/profiles/default/syse
oracle   19595 19540  0 10:58 pts/5    00:00:00 grep AdminServer

[oracle@tester ~]$ netstat -an | grep 7001
tcp        0      0 ::ffff:192.168.217.142:7001 :::*                        LISTEN

2. Log in to WLS Console via AdminServer Port

validation_blog001

3. Check that All Servers are Up

Navigate to Summary of Servers and ensure that all managed servers have started: validation_blog002

4. Check that Users/Groups are Visible

Navigate to Security realms > myrealm and ensure that users and groups from both the Default and OVD Authenticators are visible:

validation_blog003

5. Log in to FMW Control via AdminServer Port

  validation_blog004

6. Check that All Components are Up

  validation_blog005  

Oracle Access Manager Console (OAM Console)

 

1. Log in to OAM Console via AdminServer Port

  validation_blog006   validation_blog007 Navigate to some sample configuration screens to ensure that they are properly displayed: validation_blog008   validation_blog009  

Oracle Directory Services Manager (ODSM)

 

1. Check ODSM Listener and Process

[oracle@tester ~]$ ps -ef | grep wls_ods1
oracle   19093 19039  8 10:54 ?        00:05:44 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_ods1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_ods1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_ods1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_ods1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_ods1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans... -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweblogic.management.server=http://ADMINVHN.mycompany.com:700
oracle   25148 19540  0 11:58 pts/5    00:00:00 grep wls_ods1

[oracle@tester ~]$ netstat -an | grep 7006
tcp        0      0 ::ffff:192.168.217.142:7006 :::*                        LISTEN

2. Connect to OID via ODSM Port

[Screenshots: validation_blog010, validation_blog011]

3. Browse OID Directory Tree

Ensure that users and groups are populated and visible: [Screenshot: validation_blog012]

4. Connect to OVD via ODSM Port

[Screenshot: validation_blog013]

5. Browse OVD Directory Tree

[Screenshots: validation_blog014, validation_blog015]

Oracle Access Manager (OAM)

 

1. Check OAM Server Listener and Process

[oracle@tester ~]$ ps -ef | grep wls_oam1 oracle   18766 18712  7 10:51 ?        00:06:32 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_oam1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oam1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_oam1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oam1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_oam1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans... 
-Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweblogic.management.server=http://ADMINVHN.mycompany.com:700 oracle   26309 19540  0 12:13 pts/5    00:00:00 grep wls_oam1

[oracle@tester ~]$ netstat -an | grep 14100
tcp        0      0 ::ffff:192.168.217.14:14100 :::*                        LISTEN

2. Check that /oam/server via OAM Server Port Responds

Note that the error below is expected behavior. This test is meant to ensure only that the server responds to the HTTP request.

[oracle@tester ~]$ wget http://tester.mycompany.com:14100/oam/server
--2013-10-21 12:15:35--  http://tester.mycompany.com:14100/oam/server
Resolving tester.mycompany.com... 192.168.217.142
Connecting to tester.mycompany.com|192.168.217.142|:14100... connected.
HTTP request sent, awaiting response... 404 Not Found
2013-10-21 12:15:36 ERROR 404: Not Found.

 

Oracle Identity Manager (OIM)

 

1. Check OIM Listener and Process

[oracle@tester ~]$ ps -ef | grep wls_oim1 oracle   19390 19336 10 10:56 ?        00:09:04 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_oim1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oim1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Djps.subject.cache.key=5 -Djps.subject.cache.ttl=600000 -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_oim1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oim1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_oim1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans... 
-Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweb oracle   26875 19540  0 12:20 pts/5    00:00:00 grep wls_oim1

[oracle@tester ~]$ netstat -an | grep 14000
tcp        0      0 ::ffff:192.168.217.14:14000 :::*                        LISTEN

2. Log in to OIM Admin Console

[Screenshots: validation_blog016, validation_blog017]

3. Look Up User

Navigate to the Administration console and search for a sample user: [Screenshot: validation_blog018]

4. Test Role Grant/Revocation

Assign a role to the sample user: [Screenshots: validation_blog019, validation_blog020, validation_blog021]
Confirm via ODSM that the user has been added to the associated group in OID: [Screenshot: validation_blog022]
Revoke the role from the sample user: [Screenshots: validation_blog023, validation_blog024]
Confirm the revocation via ODSM:

[Screenshot: validation_blog025]

5. Test a Reconciliation Process

Search for a Fusion Applications reconciliation scheduled job and run it: [Screenshots: validation_blog026 through validation_blog030]
Confirm that the reconciliation was successful: [Screenshot: validation_blog031]

Oracle Service Oriented Architecture Suite (SOA)

 

1. Check SOA Listener and Process

[oracle@tester ~]$ ps -ef | grep wls_soa1 oracle   20160 20106 11 11:03 ?        00:10:46 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_soa1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_soa1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Djps.subject.cache.key=5 -Djps.subject.cache.ttl=600000 -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_soa1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_soa1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_soa1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans... 
-Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweb oracle   28045 19540  0 12:35 pts/5    00:00:00 grep wls_soa1

[oracle@tester ~]$ netstat -an | grep 8001
tcp        0      0 ::ffff:192.168.217.142:8001 :::*                        LISTEN

2. Log in to SOA Diagnostic Page

Note that the login credentials to be used here are the WebLogic admin credentials: [Screenshots: validation_blog032, validation_blog033]

Oracle HTTP Server (OHS)

 

1. Check OHS Listener and Process

[oracle@tester bin]$ ./opmnctl status -l
Processes in Instance: web1
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ohs1                             | OHS                |   22727 | Alive    |  342892205 |   736376 |   1:56:59 | https:9999,https:4443,http:7777

[oracle@tester bin]$ ps -ef | grep ohs
oracle   22727 22705  0 11:38 ?        00:00:01 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   22735 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/odl_rotatelogs -l /u01/app/oracle/admin/web1/diagnostics/logs/OHS/ohs1/ohs1-%Y%m%d%H%M%S.log 10M 70M
oracle   22736 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/odl_rotatelogs /u01/app/oracle/admin/web1/diagnostics/logs/OHS/ohs1/access_log 43200
oracle   22737 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/odl_rotatelogs -l -h:/u01/app/oracle/admin/web1/config/OHS/ohs1/component_events.xml_ohs1 /u01/app/oracle/admin/web1/auditlogs/OHS/ohs1/audit-pid22727-%Y%m%d%H%M%S.log 1M 4M
oracle   22738 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   22741 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   22743 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   22855 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   22911 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   22912 22727  0 11:38 ?        00:00:01 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
oracle   32727  5753  0 13:36 pts/2    00:00:00 grep ohs

[oracle@tester bin]$ netstat -an | grep 7777
tcp        0      0 :::7777                     :::*                        LISTEN
[oracle@tester bin]$ netstat -an | grep 4443
tcp        0      0 :::4443                     :::*                        LISTEN

2. Log in to OAM Console via SSO

[Screenshots: validation_blog034, validation_blog035, validation_blog036]
Be sure to test that the Sign Out link returns the user to the login page: [Screenshot: validation_blog037]

A Hidden Gem of ADF Faces 12c: The <af:target> Tag


Introduction

In JDeveloper 12c, the <af:target> tag has been added: a very powerful new ADF Faces tag that can make your life much easier and more productive when building ADF Faces pages. This post discusses how to use this new tag, and explains how specific functional requirements that used to be mind-boggling to implement are now easily and quickly implemented with it.

Main Article

To really appreciate the power of this new tag, it is important to understand the complex implementations that were often required for seemingly simple use cases before this tag existed. In this article, we will first discuss such use cases, then introduce the <af:target> tag, and finally we will see how the same use cases can be implemented smoothly using this tag.

Problematic Use Cases Before <af:target> Tag Existed

The development problems that we will discuss are all related to the ADF and JSF lifecycle. So, it helps to have an understanding of the ADF/JSF lifecycle and the phases that are executed when you submit a JSF page. It is beyond the scope of this article to explain these phases, so if you don't yet feel comfortable with the ADF/JSF lifecycle, first take a look at the related presentation and ADF Insider session. They not only explain the JSF/ADF lifecycle, but also discuss in great detail the development problems, and their (often complex) solutions, that you might encounter when using an older ADF Faces version. In this article, we will take three of those use cases that were difficult (or impossible) to implement before this tag existed, and illustrate how easily they can be implemented using the new tag.

Use Case 1: Avoiding premature validation when clicking on a button

This use case is illustrated by the following screenshot. [Screenshot: Usecase1] When clicking the Suggest button, a suggested greeting should be displayed in the greeting field based on the name entered. The name that is entered must have at least 4 characters. Here is the source code of the page:
<af:panelFormLayout id="pfl1">
  <af:inputText required="true" label="Name" id="it1" value="#{viewScope.HelloBeanSu.name}">
    <f:validateLength minimum="4"></f:validateLength>
  </af:inputText>
  <af:panelLabelAndMessage showRequired="true" label="Greeting" id="plam2">
    <af:inputText required="true" label="Greeting" id="it2" value="#{viewScope.HelloBeanSu.greeting}"
                  simple="true"/>
    <f:facet name="end">
      <af:button text="Suggest" id="cb3"
                        actionListener="#{viewScope.HelloBeanSu.suggestPreferredGreeting}"></af:button>
    </f:facet>
  </af:panelLabelAndMessage>
  <af:inputDate label="Date" required="true" id="id1" value="#{viewScope.HelloBeanSu.date}"/>
  <f:facet name="footer">
    <af:panelGroupLayout id="pgl3" layout="horizontal">
      <af:button text="Say Hello" id="cb1" action="#{viewScope.HelloBeanSu.sayHello}"/>
      <af:button text="Reset" id="cb2" immediate="true" actionListener="#{viewScope.HelloBeanSu.reset}"></af:button>
      <f:facet name="separator">
        <af:spacer width="10" height="10" id="s3"/>
      </f:facet>
    </af:panelGroupLayout>
  </f:facet>
</af:panelFormLayout>
Note: the code snippet uses the new 12c <af:button> tag, which is the renamed version of the now-deprecated <af:commandButton> tag in ADF Faces 12c. And this is the Java code behind the suggestPreferredGreeting method:
public void suggestPreferredGreeting(ActionEvent event) {
  String greeting = getPreferredGreeting(getName());
  setGreeting(greeting);
}

private String getPreferredGreeting(String name)
{
  if ("Steven".equalsIgnoreCase(name)) {
    return "Goedendag";
  }
  else if ("Angela".equalsIgnoreCase(name)) {
    return "Gutentag";
  }
  else if ("Nathalie".equalsIgnoreCase(name)) {
    return "Bonjour";
  }
  else if ("Barack".equalsIgnoreCase(name)) {
    return "Hi";
  }
  else {
    return "Hello";
  }    
}
There are quite a few issues you will encounter when implementing this use case in JDeveloper 11.1.1.x or 11.1.2.x:
  • When clicking the suggest button, we get validation errors on the other fields: greeting and date
  • To avoid these validation errors, we can set the immediate property of the Suggest button to true. However, this will also skip the required validation of the name field which should have at least 4 characters.
  • Furthermore, by setting immediate=true, the setName method in the managed bean will no longer be called because the Update Model phase is also skipped. So, the suggestPreferredGreeting method can no longer use the getName method to get hold of the name just entered by the user.
  • We cannot use a valueChangeListener to set the name value in the managed bean, because a valueChangeListener is executed in the Process Validations phase which is also skipped when immediate is set to true on a command component. And if we set immediate=true on the name field to execute the valueChangeListener in the Apply Request Values phase, we hit the issues described below in use case 3....
  • And because of immediate=true, even if the greeting had been set correctly in the managed bean, it would not be displayed, because the greeting inputText component does not check its underlying managed bean value when an immediate request has been sent to the server
To avoid all these issues, you do need to set immediate=true on the button, and write Java code to programmatically execute the validation on the name field, obtain the entered value for the name field directly from the inputText component using the binding property, and programmatically refresh (re-render) the greeting field. See the aforementioned presentation for implementation details. Well, you have to admit, that is a lot of work for a seemingly simple requirement, right?
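To make the amount of plumbing concrete, here is a minimal sketch of what such a pre-12c workaround might look like. It is an illustration only: the component bindings nameInput and greetingField are hypothetical names, and the validation logic is re-implemented by hand rather than taken verbatim from the presentation.
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
import javax.faces.event.ActionEvent;
import oracle.adf.view.rich.component.rich.input.RichInputText;
import oracle.adf.view.rich.context.AdfFacesContext;

// Managed bean members; the fields are wired via the binding property of the
// name and greeting inputText components (hypothetical bindings).
private RichInputText nameInput;
private RichInputText greetingField;

public void suggestPreferredGreeting(ActionEvent event) {
  FacesContext context = FacesContext.getCurrentInstance();
  // The button is immediate, so Process Validations and Update Model were
  // skipped; read the raw submitted value straight from the component.
  String name = (String) nameInput.getSubmittedValue();
  // Re-implement the minimum-length validation programmatically.
  if (name == null || name.length() < 4) {
    context.addMessage(nameInput.getClientId(context),
        new FacesMessage(FacesMessage.SEVERITY_ERROR,
                         "Name must have at least 4 characters.", null));
    return;
  }
  setGreeting(getPreferredGreeting(name));
  // Immediate requests do not refresh other components, so re-render the
  // greeting field explicitly.
  AdfFacesContext.getCurrentInstance().addPartialTarget(greetingField);
}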

Use Case 2: Refreshing dependent fields when tabbing out an item

This use case is a variation of the first one: rather than clicking a button to suggest a preferred greeting, the suggested greeting is displayed automatically when the user tabs out of the name field. [Screenshot: Usecase2] The relevant page code snippet looks like this:
<af:inputText required="true" label="Name" id="it1"
              value="#{viewScope.HelloBeanAs.name}" autoSubmit="true"
              valueChangeListener="#{viewScope.HelloBeanAs.nameChanged}">
</af:inputText>
<af:inputText required="true" label="Greeting" id="it2"
              value="#{viewScope.HelloBeanAs.greeting}"/>
The Java code of the valueChangeListener method looks like this:
public void nameChanged(ValueChangeEvent valueChangeEvent) {
  String name = (String) valueChangeEvent.getNewValue();
  String greeting = getPreferredGreeting(name);      
  setGreeting(greeting);
}
What are the issues with this use case? Well, we do not (yet) have issues with premature validation because when auto-submitting an inputText component, the ADF optimized lifecycle kicks in and ensures that the JSF lifecycle is only executed on the component itself. The problem here is that we do not see the suggested value of the greeting field in the user interface because the ADF optimized lifecycle, by default, only re-renders the component that is auto-submitted. You might think the obvious solution is to add a partialTriggers property to the greeting field that references the name field. However, this will not work because of so-called Cross-Component-Refresh: the ADF optimized lifecycle will now also run the JSF lifecycle phases against the greeting field causing again premature validation leading to a "field is required" error on the greeting field when tabbing out the name field. The solution here is less elaborate than use case 1 but still requires Java coding: you need to programmatically refresh the greeting field using the AdfFacesContext.addPartialTarget API.  
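A minimal sketch of that programmatic refresh, assuming the greeting field id it2 from the snippet above (the component lookup shown is one of several possible approaches):
import javax.faces.component.UIComponent;
import javax.faces.event.ValueChangeEvent;
import oracle.adf.view.rich.context.AdfFacesContext;

public void nameChanged(ValueChangeEvent valueChangeEvent) {
  String name = (String) valueChangeEvent.getNewValue();
  setGreeting(getPreferredGreeting(name));
  // The optimized lifecycle only re-renders the auto-submitted component, so
  // the dependent greeting field (id "it2") must be re-rendered explicitly.
  UIComponent greeting = valueChangeEvent.getComponent().findComponent("it2");
  if (greeting != null) {
    AdfFacesContext.getCurrentInstance().addPartialTarget(greeting);
  }
}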

Use Case 3: Implementing a cancel or reset button

If you want to abandon or reset a page that misses required data, or contains invalid data, you typically do this by adding a Cancel button with its immediate property set to true. However, this only works when there are no input elements that also have the immediate property set to true. If you click a button with immediate set to true, and there are input elements that also have immediate set to true, then validation of these elements is executed in the Apply Request Values phase and might cause the cancel or reset action to fail, as illustrated by the screenshot below. [Screenshot: Usecase3] Source code of this page fragment:
<af:panelFormLayout id="pfl1">
  <af:inputText required="true" label="Name" id="it1" value="#{viewScope.HelloBeanAs.name}" immediate="true"
                autoSubmit="true" valueChangeListener="#{viewScope.HelloBeanAs.nameChanged}">
    <f:validateLength minimum="4"></f:validateLength>
  </af:inputText>
  <af:inputText required="true" label="Greeting" id="it2" value="#{viewScope.HelloBeanAs.greeting}"/>
  <af:inputDate label="Date" required="true" id="id1" value="#{viewScope.HelloBeanAs.date}"/>
  <f:facet name="footer">
    <af:panelGroupLayout id="pgl3" layout="horizontal">
      <af:button text="Say Hello" id="cb1" action="#{viewScope.HelloBeanAs.sayHello}"/>
      <af:button text="Reset" id="cb2" immediate="true" actionListener="#{viewScope.HelloBeanAs.reset}">
       <af:resetActionListener/>
      </af:button>
      <f:facet name="separator">
        <af:spacer width="10" height="10" id="s3"/>
      </f:facet>
    </af:panelGroupLayout>
  </f:facet>
</af:panelFormLayout>
And the reset method looks like this:
public void reset(ActionEvent event) {
  setName(null);
  setDate(null);
  setHelloMessage(null);
}
Prior to ADF Faces 12c, there was no solution for this problem other than simply not setting immediate to true on input elements.

Introducing the <af:target> tag

The tag documentation states the following about the <af:target> tag: it provides a declarative way to allow a component to specify the list of targets it wants executed and rendered when an event (among the list of events) is fired by the component. You might wonder: is that all, what is the big deal here? Well, the great power of this tag lies in two things:
  • You have complete control over which components the JSF lifecycle is executed on, and which components are re-rendered (refreshed).
  • The list of components on which the JSF lifecycle will be executed can be defined separately from the list of components that need to be re-rendered.
Before this tag existed, there was little control over which components were executed in the JSF lifecycle. Either the standard JSF lifecycle kicked in, which processed all the components on a page, or the ADF optimized lifecycle kicked in, which has its own algorithm for determining the boundary component on which the lifecycle is run. This list of components (the boundary component and its children) can be extended, but cannot be reduced. Clicking a button inside an <af:region> will always run the JSF lifecycle on at least all components within the region, possibly causing premature validation issues as discussed in use case 1. Furthermore, with the optimized ADF lifecycle, the components that are executed are also re-rendered. Adding a partialTriggers property to add a component to the re-render list also means that the component is added to the execute-lifecycle list, causing the issues mentioned in use case 2. The <af:target> tag has the following attributes:
  • events: List of event names for which the target rules apply. The space-delimited legal values are: @all, action, calendar, calendarActivity, calendarActivityDurationChange, calendarDisplayChange, carouselSpin, contextInfo, dialog, disclosure, focus, item, launch, launchPopup, poll, popupCanceled, popupFetch, query, queryOperation, rangeChange, regionNavigation, return, returnPopupData, returnPopup, rowDisclosure, selection, sort, and valueChange. The default value is @all.
  • execute: Set of components that will be executed when one of the specified events is raised. If a literal is specified, it must be a space-delimited String of component identifiers and/or one of the keywords. Supported keywords are @this, @all and @default. The @default keyword can be used to fall back to the ADF default behavior, which is usually the behavior of the ADF optimized lifecycle. The default value is @default.
  • render: Set of components that will be re-rendered when one of the specified events is raised. If not specified, the default behavior applies. Supported keywords are @this, @all and @default. The @default keyword can be used to fall back to the ADF default behavior, which is usually the behavior of the ADF optimized lifecycle. The default value is @default.
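For example, a target rule can be limited to a single event type. The following sketch (component ids hypothetical) applies its execute and render lists only when a valueChange event fires; all other events fall back to the default behavior:
<af:inputText label="Name" id="it1" value="#{viewScope.HelloBean.name}" autoSubmit="true">
  <!-- only react to valueChange events raised by this field -->
  <af:target events="valueChange" execute="@this" render="it2"/>
</af:inputText>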
 

The <af:target> tag in action

In the first use case, avoiding premature validation when clicking a button, the added value of this tag is most visible. All issues with this use case as discussed above can be avoided by using the <af:target> tag as follows:

<af:inputText required="true" label="Name" id="it1" value="#{viewScope.HelloBeanSu.name}">
  <f:validateLength minimum="4"></f:validateLength>
</af:inputText>
<af:panelLabelAndMessage showRequired="true" label="Greeting" id="plam2">
  <af:inputText required="true" label="Greeting" id="it2" value="#{viewScope.HelloBeanSu.greeting}"
                simple="true"/>
  <f:facet name="end">
    <af:button text="Suggest" id="cb3"
                      actionListener="#{viewScope.HelloBeanSu.suggestPreferredGreeting}">
       <af:target execute="@this it1" render="it2"/>                
    </af:button>
  </f:facet>
</af:panelLabelAndMessage>
The execute attribute specifies that only the button itself and the name field should be executed as part of the lifecycle. This ensures that the setName method in the managed bean is called in the Update Model phase, and the suggestPreferredGreeting method in the Invoke Application phase. No other components are processed, so we don't get premature validation errors. We only need to re-render the greeting field, so the render attribute specifies the id of this field. The solution for the second use case, refreshing dependent items when tabbing out an item, is very similar. The main difference is that we now add the <af:target> tag to the name inputText component:
<af:inputText required="true" label="Name" id="it1" value="#{viewScope.HelloBeanAs.name}"
              autoSubmit="true" valueChangeListener="#{viewScope.HelloBeanAs.nameChanged}">
  <f:validateLength minimum="4"></f:validateLength>
  <af:target execute="@this" render="it2"/>
</af:inputText>
<af:inputText required="true" label="Greeting" id="it2" value="#{viewScope.HelloBeanAs.greeting}"/>
The solution in the third use case, implementing a cancel or reset button, is to no longer use the immediate property on the button, and use the <af:target> tag as follows:
<af:button text="Reset" id="cb2" actionListener="#{viewScope.HelloBeanAs.reset}">
  <af:target execute="@this" render="@all"/>
</af:button>
You can argue how useful this is, as the <af:target> tag removes the need for the immediate attribute on input components altogether, as shown in the second use case. But at least it shows that we can completely avoid the use of the immediate property if desired. It also removes the need for the <af:resetActionListener> tag, a poorly understood tag that many developers just add because they are told to, without really understanding what it does.

Final Observations

The render attribute of the <af:target> tag can be seen as the inverse of the partialTriggers property, with one important difference: a component listed in the render attribute will NOT be executed as part of the JSF lifecycle. It also moves the responsibility for refreshing dependent components to the triggering component, which is not necessarily a good thing, as the triggering component now needs to be aware of all its dependent components. So, the partialTriggers property still has its place and can be used when there are no issues with the component also being added to the JSF lifecycle execute list. You can also use them in combination, but then the partialTriggers property "wins". Let me explain what I mean by that with an example based on the second use case:
<af:inputText required="true" label="Name" id="it1" value="#{viewScope.HelloBeanAs.name}" autoSubmit="true"
              valueChangeListener="#{viewScope.HelloBeanAs.nameChanged}">
  <f:validateLength minimum="4"></f:validateLength>
  <af:target execute="@this" render="@this"/>
</af:inputText>
<af:inputText required="true" label="Greeting" id="it2" value="#{viewScope.HelloBeanAs.greeting}"
              partialTriggers="it1"/>
The render attribute no longer specifies the greeting field id; instead, the greeting field "subscribes" itself to the name field's value-change event so it will be re-rendered. However, this will not work in this use case: while the <af:target> tag only specifies the name field itself to be executed in the JSF lifecycle, the greeting component will also be executed because of the partialTriggers property pointing to the name field. In our experience, the use of the immediate attribute, both on command components and input components, is an endless source of confusion, misunderstanding and utter frustration, resulting in loss of productivity and buggy applications. With the availability of the <af:target> tag, the immediate attribute has become redundant, and you might want to consider no longer using it in favor of the <af:target> tag. Developers will get a better understanding of the ADF/JSF lifecycle when using this tag, and it will probably lead to more robust applications that are built faster. More information: section 8.3, "Using the Target Tag To Execute PPR", in the Fusion Middleware documentation "Developing Web User Interfaces with Oracle ADF".

ReentrantLock Affecting Content Presenter Performance


Introduction

During a performance tuning engagement, the A-Team noticed an increase in page response times when the number of concurrent users grew beyond 1000. Further investigation revealed that the bottleneck was coming from the usage of Content Presenter templates.

Main Article

Using a JRockit Mission Control Flight Recording (JFR), the A-Team was able to determine the cause: threads were being parked, each waiting to resume. So what was causing all of the parked threads? Digging deeper into the stack traces provided by the JFR, the root cause was discovered: java.util.concurrent.locks.ReentrantLock.lock(), which is called by UCMBridge.getAccessLevel(IdcContext, DataObject, ITrace). The Mission Control Event Graph view shows how this issue (identified by the grey blocks) significantly affected the page response times (identified by the lime-green blocks). [Screenshot: Reentrantlock_1] Stack trace view: [Screenshot: Reentrantlock_2] Although this has been identified as a bug (17330713), a patch is now available for 11.1.1.7.1.

Oracle Mainframe Re-hosting Solutions – Part One


Introduction

This paper provides an overview of Oracle Mainframe Re-hosting solutions from a technical point of view. For readers who have little mainframe background, it adds an introduction to some basic mainframe concepts. It is targeted at technical people who are interested in mainframe modernization technology and want to understand Oracle's solution in this area. This paper is split into two parts:
  • Part 1: Background for some basic mainframe concepts
  • Part 2: Oracle solution overview (to be published later in another blog post)

What is Mainframe Re-Hosting?

Re-hosting is one of the options for modernization of mainframe legacy systems. According to some common definitions, there are three technical options for modernizing a mainframe legacy system:

Re-engineering: A technique to rebuild legacy applications in a new technology or platform, with the same or enhanced functionality, from scratch – usually by adopting Service Oriented Architecture (SOA). From a technical point of view, this is the most efficient and agile way of transforming legacy applications. But this approach also carries more risk in terms of project management, timing, and cost.

Package Implementation: Replacement of legacy applications, in whole or in part, with commercial off-the-shelf software (COTS) such as ERP, CRM, SCM, or billing software. It is a good solution when such a packaged application meets the business requirements well.

Re-hosting: Migrating, with some automation tooling, the legacy application source code (mainly COBOL) and database, and running the legacy applications, with no functional changes, on an open-system platform. This is often used as an intermediate step to eliminate legacy and expensive hardware. The most common examples are mainframe applications being re-hosted on UNIX or "Wintel" platforms.

For a typical corporate mainframe customer, the three options above are not mutually exclusive. In reality, a complete modernization plan is often a combination of these options, based on analysis of the existing software assets and the business requirements. Deciding which options to use to modernize a legacy system is not the subject of this paper; for that, please refer to the available modernization studies. This paper will give an overview of Oracle's Mainframe Re-hosting solution.

Mainframe Legacy Systems

IBM mainframes are large computer systems produced by IBM from 1952 to the present. During the 1960s and 1970s, the term Mainframe was almost synonymous with IBM products due to their market share. Current mainframes in IBM's line of business computers are developments of the basic design of the IBM System/360.

Operating Systems

The core OS is based on IBM proprietary hardware and software, evolving from the old DOS, VSE, MVS, and MVS/XA to z/OS since the 1990s. On top of z/OS, IBM has added USS (UNIX System Services) and also z/Linux. But these newer operating systems have not had large success; very few mainframe customers have adopted USS or z/Linux for their production systems, and the majority still run MVS as the core OS.

Middleware

Mainframes have two major middleware products:
  • IMS (Information Management System): IMS is a message-based Transaction Processing Management system (TPM) on the IBM mainframe, used by very large customers. However, since 1990 it has attracted very few new customers. IMS is a heavy and rigid TPM that takes a lot of effort to manage, but it has a solid reputation as a robust TPM.
  • CICS (Customer Information Control System): CICS is another transaction processing management system on the IBM mainframe; it is middleware designed to support rapid, high-volume online transaction processing. More than 90% of IBM mainframe customers use CICS as middleware. Compared to IMS, CICS is a more lightweight TPM; initially it was aimed at small and mid-size customers, but today even large customers use CICS instead of IMS.

Network

The network component manages, configures, and monitors all network hardware and software. Today it has two major sub-components: SNA and TCP/IP.
  • SNA (Systems Network Architecture) is IBM's proprietary network technology; it is based on the concepts of the PU (Physical Unit), the LU (Logical Unit), and VTAM (Virtual Telecommunications Access Method).
  • TCP/IP: from the 1990s, IBM introduced TCP/IP networking on the mainframe platform, and it has become the standard network for the IBM mainframe.

Database

The Mainframe has several database products:
  • VSAM (Virtual Storage Access Method) is a disk file storage access method. VSAM comprises four file organizations (see below); application programs use the different VSAM organizations to manage their data.
    • KSDS (Key Sequenced Data Set)
    • RRDS (Relative Record Data Set)
    • ESDS (Entry Sequenced Data Set)
    • LDS (Linear Data Set)
  • IMS DB is the IMS database component, which stores data using a hierarchical model. The first IMS DB product dates from 1968. Today, there are three types of IMS DB, used in different business scenarios depending on requirements.
    • Full function databases
    • Fast path databases
    • High Availability Large Databases (HALDB)
  • DB2 is a major component on the IBM mainframe platform. The name DB2 was first given to IBM's Database Management System (DBMS) in 1983, when IBM released DB2 on its MVS mainframe platform. It is IBM's fully relational database product. Today, IBM has brought DB2 to other platforms, including OS/2, UNIX and MS Windows servers, then Linux (including Linux on zSeries) and PDAs.
  • Some third-party products, including
    • Natural
    • SUPER Base
    • IDMS2

Batch Execution Environment

During the 1970s, OLTP technology was not mature enough to manage data integrity well for On-Line Transaction Processing (OLTP), especially concerning concurrent DB access. The best practice was to have the on-line transaction store all DB updates in a temporary file, and then use a batch process to update the master DB after the on-line transaction closed. To support this practice, the IBM mainframe developed a batch execution infrastructure. The mainframe has a strong batch execution environment, including two major components:
  • JCL (Job Control Language) is a script language used to implement batch processes on the mainframe. Application developers use it to define their batch processes: processing logic, dependencies between multiple steps, error handling, result reporting, etc.
  • JES (Job Entry Subsystem) is one of the major components of the MVS system. It receives jobs into the computer system, schedules them for processing, and controls their output processing. There are three job entry subsystems in MVS: Master, JES2, and JES3.

Development Environment and Language

Mainframes generally have a 3270 terminal-based development environment.
  • TSO/E (Time Sharing Option/Extensions) allows users to create an interactive session with the MVS system. TSO provides a single-user logon capability and a basic command-prompt interface to MVS, much like a Unix login session.
  • ISPF (Interactive System Productivity Facility), most users work with TSO through ISPF menu-driven interface. This collection of menus and panels offers a wide range of functions to assist users in working with data files on the system.  ISPF users include system programmers, application programmers, administrators, and others who access MVS. In general, TSO and ISPF make it easier for people with varying levels of experience to interact with the MVS system.
  • COBOL is the dominant language for application development; more than 95% of existing mainframe applications were developed in COBOL.
  • PL/I is another application programming language on mainframe. It is a block-structured language, consisting of packages, procedures, statements, expressions, and built-in functions.
  • C and C++: since the 1990s, some customers have been using C as an application programming language.
  • 4GL languages like FOCUS and NOMAD
  • Assembly language is used to develop some technical modules and exit routines, for performance reasons
  • Script languages: JCL, REXX, CLIST for Batch JOB, administration, monitoring, etc.

Charging Mode

One particularly interesting feature of the IBM mainframe is its software charging mode: MIPS-based (Millions of Instructions Per Second) charging. Most IBM mainframe software is charged in this mode, which means users are charged for the CPU power they consume. On open systems, most software is charged by hardware capacity.

Current Mainframe Customer Challenges

Some challenges for almost all mainframe customers are:
  • Very high and growing costs
    • Up to 60-80% of IT budget is spent on mainframe legacy systems
    • $2-$5M/year for a 1000 MIPS mainframe
  • Constrained IT Responsiveness
    • All applications have a strong dependency on obsolete technologies
    • High cost of integration and maintenance due to proprietary technologies like SNA network, IMS DB, etc.
    • Difficult to meet new business requirements on time and with allocated budgets
  • Graying mainframe skill base
    • Due to the closed, proprietary technology, very few public schools give education on these technologies, so the industry is losing qualified engineers and critical knowledge
  • Rigid IT Infrastructure
    • Complex software layers constrain choices
    • Vendor-driven forced upgrades, which cost a lot of resources and budget

Market Trend

According to some market studies:
  • Almost all mainframe customers are thinking about, planning, or performing mainframe modernization.
  • A major obstacle is finding a solution that meets mainframe RASP (Robustness, Availability, Scalability, Performance) requirements.
  • Another challenge is ensuring the feasibility of such a project.

When and Why Re-Hosting

As mainframe customers evaluate their mainframe modernization strategy, they should take the following points into consideration:
  • Protect investment: if the target software is quite stable and meets current business requirements, the re-hosting option is a good way to protect the investment.
  • Minimize risk: technically, re-hosting is the solution with minimum risk, because it keeps the business logic, data model, and user interface almost unchanged.
  • ROI (Return On Investment): among all mainframe modernization solutions, re-hosting very often has a good ROI relative to the whole cost.
  • Progressive migration: re-hosting solutions can make the whole mainframe modernization process progressive. During the project period, the existing mainframe applications, re-hosted applications, and open-system applications can co-exist and integrate with one another.
 

Effect of Queue and JCA Settings on Message Retry by JMS Adapter


Introduction

This blog is intended to share some knowledge about the effects of queue-level redelivery settings and adapter-level retry settings on message processing by the JMS Adapter. It also provides some useful insights that help in designing retry mechanisms into an integration system. Specifically, this blog illustrates the retry behavior of the JMS Adapter and how it is impacted by these settings.

Detail

Consider an integration system that uses the JMS Adapter to consume messages from a queue and deliver them to an end system after performing BPEL processing. The figure below depicts such a simple system. Note that the source queue is configured with an error queue to hold the failed messages. [Figure: JMSSystem]

Adapter level Retry Settings

First, let’s look at the Adapter level Retry Settings that are available to configure on the JMS Adapter Consumer service. The settings below are typically used to configure the retry behavior of inbound JMS Adapter consumers:
  1. Jca.retry.count --> example value 3
  2. Jca.retry.interval --> example value 2
  3. Jca.retry.backoff --> example value 2
Assume that the end system is down. During such an error condition, when the message cannot be successfully processed and delivered to the end system, the JMS Adapter retries the processing of the failed message using the above retry settings. For the example values above, the adapter retries a failed message after 2, 6, and 14 seconds from the time of the first failure: the first retry comes after the 2-second interval, and the backoff factor then doubles each subsequent interval (2 + 4 = 6, 6 + 8 = 14). Now, assume the end system is still down after the 3 retries. The expectation in most integration flows is that the message rolls back to the source queue and ends up in the error destination. This helps in manually recovering the failed message after the error condition is resolved.
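As a hedged sketch, these endpoint properties are typically set on the inbound service binding in the SOA composite's composite.xml; the service name, WSDL interface, and .jca file name below are hypothetical:
<!-- composite.xml fragment: inbound JMS Adapter consumer service (hypothetical names) -->
<service name="ConsumeMessageService">
  <interface.wsdl interface="http://example.com/Consume#wsdl.interface(Consume_ptt)"/>
  <binding.jca config="ConsumeMessage_jms.jca">
    <!-- retry a failed inbound message 3 times before giving up -->
    <property name="jca.retry.count" type="xs:int" many="false">3</property>
    <!-- wait 2 seconds before the first retry -->
    <property name="jca.retry.interval" type="xs:int" many="false">2</property>
    <!-- double the interval on every subsequent retry: 2, 4, 8 seconds -->
    <property name="jca.retry.backoff" type="xs:int" many="false">2</property>
  </binding.jca>
</service>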
However, under certain conditions, the JMS Adapter can reject a failed message after exhausting the configured number of retries. When this happens, the message is no longer available at the source queue for recovery. The rejected messages are handled by the adapter rejection handler; refer to the Adapters documentation for details on rejection handlers.

Queue Level Redelivery Settings

At this point, let us look at queue-level redelivery settings. When redelivery is set at the queue level, any message that fails processing and is rolled back to the queue will be redelivered for processing. If the number of failures for the message exceeds the redelivery count, the message is redirected to the error destination.
All messaging providers support some form of queue redelivery settings. For instance, WebLogic JMS has the Redelivery Limit setting, and AQ JMS provides the same via the max_retries setting of a queue.

Messaging Provider | Redelivery settings | Other related settings
Weblogic JMS       | Redelivery Limit    | Expiration Policy=Redirect, Redelivery Delay Override
AQ JMS             | Max_retries         | Retry_delay

Note that Weblogic JMS can be configured to discard or log the failed messages instead of redirecting to an error destination.

Failure to Rollback

Under what conditions does the JMS Adapter reject messages that are submitted to it by the queue for reprocessing? When the queue redelivers a message to the adapter more times than the adapter can retry it, the adapter rejects the message. Hence, the condition below ensures that the message properly rolls back to the source queue rather than being rejected by the adapter.

Number of Redeliveries by the Queue <=  Retry Count of Adapter Service

Note that when Jca.retry.count is not set at the adapter service level, the GlobalInboundJcaRetryCount setting takes effect. The default value of GlobalInboundJcaRetryCount is -1, which implies an infinite number of retries. Refer to the Adapter's Guide for more information on setting the retry properties.
The table below lists some sample values of the settings and the behavior observed after repeated failures:

Jca.retry.count | GlobalInboundJcaRetryCount | Queue Redelivery | Behavior after repeated failure
3               | -1                         | 5                | Message rejected by adapter
6               | -1                         | 5                | Message rolled back to error queue
0               | -1                         | 0                | Message rolled back to error queue
Not set         | 5                          | 5                | Message rolled back to error queue
Not set         | 0                          | 2                | Message rejected by adapter

 

Summary

Incorrect settings on the queues and adapters can lead to undesired behavior in the recovery of messages during failure conditions. We have seen a few such situations in this blog. With proper settings, we can design integration systems that exhibit consistent error handling and recovery behavior.


-Shreeni

Index of Oracle Event Processor articles

Improving Performance via Parallelism in Oracle Event Processing Pipelines with High-Availability



  This posting explains how to use parallelism to improve performance of Oracle Event Processing (OEP) applications with active-active high-availability (HA) deployments. Parallelism is exploited for performance gain in each one of the server instances of an HA configuration. This is achieved by identifying sections of an application’s processing pipeline that can operate in parallel, and, therefore, can be mapped to separate processing threads. Both pipeline and independent query parallelism are described.  

Pipeline Parallelism

A pipeline architecture has inherent concurrency because each of its stages works in parallel on different data elements flowing through it. For example, in the pipeline in figure 1, if each stage is assigned its own processing thread, the following actions can occur concurrently: the input JMS adapter reads event #3 from a JMS topic, the CQL query processor handles event #2, and the output JMS adapter writes event #1 to a queue.
[Figure 1. OEP Pipeline with three concurrent stages]

Although OEP HA pipelines are limited to one thread per stage, significant performance gains can be achieved by running each stage in a separate thread, compared to running all stages on one thread or on fewer threads than the number of pipeline stages.

A key constraint in OEP when using active-active HA (see Oracle Fusion Middleware Developer's Guide for Oracle Event Processing 11g Release 1 (11.1.1.7) for Eclipse, section 24) is that it requires the input streams to both the primary and the secondary instances to be identical and to maintain the same event ordering as events flow through the OEP Event Processing Network (EPN). This constraint limits the EPN topology to be either a linear pipeline, starting from an input adapter and ending with an output adapter, or a tree where each node with downstream branching replicates every event to each of its branches. The event ordering requirement also limits to one the number of threads assigned to each stage of the EPN. Having more than one thread in one stage, for example in an input JMS adapter, would fail to assure that the order of events entering the following stage, such as the input channel, is the same in both the primary and secondary instances. The reason event order cannot be assured when using multiple threads on a pipeline stage is that a pipeline stage operates as a queue with multiple worker threads serving it. Since the execution times for serving each event and the thread scheduling order cannot be kept in complete alignment across the primary and secondary server instances, the order of events passed to the following pipeline stages could end up out of order across HA instances. The mechanism recommended in OEP best practices for assuring that the primary and secondary instances of an HA configuration receive identical input streams is a JMS topic.

Since the processing speeds of consecutive stages can vary, buffers are used to couple stages and hold the output of one stage until the following stage can consume it. In OEP these inter-stage buffers are the EPN channels. In addition to operating as buffers, EPN channels are also the configuration mechanism used to specify the number of processing threads assigned to the stage following the channel. A channel's buffer length and the number of threads assigned to its following stage are defined within the corresponding channel element in the EPN's META-INF/wlevs/config.xml file by assigning values to the max-size and max-threads parameters. For example, the inputChannel element in the pipeline in figure 1 is configured as follows:
<channel>
    <name>inputChannel</name>
    <max-size>1000</max-size>
    <max-threads>1</max-threads>
</channel>
For input adapters, which don't have a preceding channel, thread assignment is done by setting the concurrentConsumers property to one in the corresponding JMS input adapter element in the META-INF/spring/MonitoracaoTransacao.xml file:
<wlevs:adapter id="jmsInputAdapter" provider="jms-inbound">
    <wlevs:listener ref="inputChannel" />
    <wlevs:instance-property name="converterBean" ref="jmsMessageConverter" />
    <wlevs:instance-property name="concurrentConsumers" value="1" />	
    <wlevs:instance-property name="sessionTransacted" value="false" />
</wlevs:adapter>

Query Parallelism

Query parallelism refers to processing stages where there are multiple independent queries applied simultaneously to each event of the input stream. This is achieved by having a channel with multiple downstream elements, where each event flowing through the channel is broadcast to all of the channel’s downstream elements. This is illustrated in figure 2, where the input channel has five downstream processors, each one running a concurrent query. As explained above, the topology resulting from this type of scenario is a pipeline tree, as opposed to a linear pipeline topology.  
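The fan-out itself is declared in the EPN assembly file by registering each downstream processor as a listener of the channel. A minimal sketch, with hypothetical component ids and event type, could look like this:
<!-- EPN assembly sketch (hypothetical ids): inputChannel broadcasts every
     event to five independent CQL processors -->
<wlevs:channel id="inputChannel" event-type="TransactionEvent">
    <wlevs:listener ref="queryProcessor1"/>
    <wlevs:listener ref="queryProcessor2"/>
    <wlevs:listener ref="queryProcessor3"/>
    <wlevs:listener ref="queryProcessor4"/>
    <wlevs:listener ref="queryProcessor5"/>
</wlevs:channel>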
[Figure 2. Query parallelism]

Each of the five concurrent queries in figure 2 is configured to be independent of the others, because each one consumes a separate copy of every event that flows out of the input channel. This configuration forks the single pipeline of the JMS input adapter followed by the input channel into five independent pipelines, each comprising a CQL query processor followed by an output channel and an HA and JMS output adapter pair. To increase performance, each of these forked pipelines can be treated as an independent linear pipeline whose stages can be parallelized. In the example in figure 2, on each of the branch pipelines, the CQL query processor is assigned one thread, and the HA and JMS output adapter pair is also assigned one thread. Thread assignment for the CQL query processor stage is defined in the input channel configuration element in META-INF/wlevs/config.xml by setting the max-threads property to 5 as follows:
<channel>
    <name>inputChannel</name>
    <max-size>1000</max-size>
    <max-threads>5</max-threads>
</channel>
max-threads should not be larger than the number of processors fanning out from the input channel. This configures a pool of threads capable of handling one event simultaneously on each of the forked pipelines. The remaining stages in each of the pipelines are assigned one thread, as in the single linear pipeline case. In summary, even though HA OEP configurations have strong event-ordering requirements that prevent parallelism within each stage, there is still end-to-end pipeline concurrency that can be effectively exploited by assigning at most one thread to each element on each of the linear pipelines in an OEP EPN.