When it comes to persisting service and subsystem data (e.g., JMS messages), WebLogic Server offers customers a choice: the filesystem or a relational database accessed via JDBC. The choice of persistence store has implications for WLS performance and systems management. In this post, I will provide an unofficial JMS performance comparison using different persistence stores.
PLEASE NOTE: This is not an official Oracle performance benchmark. It is just intended as an example to give readers an idea about possible performance differences when considering different persistent stores.
The WebLogic persistence store provides a high-performance, transactional store for WebLogic services and subsystems that need to persist their data. As an example, WebLogic JMS uses the persistence store for storing persistent messages and for the transmission of messages between Store-and-Forward (SAF) agents. The persistence store plays an important role in the performance and high availability of WebLogic applications and topologies, and there are also considerations for managing the store itself.
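On the application side, the store only comes into play when a producer sends messages with the PERSISTENT delivery mode. As a rough illustration (the provider URL and the jms/TestQueue JNDI name below are placeholders, not values from my test environment), a sender opts into persistence like this:

import java.util.Hashtable;
import javax.jms.DeliveryMode;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class PersistentSender {
    public static void main(String[] args) throws Exception {
        // JNDI lookup against a local WLS instance; the URL and queue name are examples only.
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        InitialContext ctx = new InitialContext(env);

        // weblogic.jms.ConnectionFactory is the default WLS connection factory JNDI name.
        QueueConnectionFactory cf =
                (QueueConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/TestQueue");   // placeholder destination

        QueueConnection con = cf.createQueueConnection();
        QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueSender sender = session.createSender(queue);

        // PERSISTENT delivery is what causes each message to be written to the
        // configured store (file or JDBC) before the send completes.
        sender.setDeliveryMode(DeliveryMode.PERSISTENT);
        sender.send(session.createTextMessage("hello"));

        con.close();
        ctx.close();
    }
}

With NON_PERSISTENT delivery, messages are kept in memory and the store is not involved, which is why the store choice matters for workloads like the one measured here.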
In this blog, I will be sharing metrics from the following scenarios:
1) WebLogic Server JMS performance using the default persistence store (filesystem)
2) WebLogic Server JMS performance using a local DB persistence store
3) WebLogic Server JMS performance using a remote DB persistence store
In all cases, WLS version 12.1.2 was running in a Windows 7 (64-bit) environment on a Dell Precision M6600 with an Intel Core i7-2920XM CPU @ 2.5 GHz and 16 GB of RAM. I used the bundled 64-bit HotSpot JVM with the default heap settings: -Xms256m -Xmx512m.
In the second scenario, I used Oracle Database EE 11.2.0.1 in the same Windows 7 environment as WLS, again with all default settings.
In the third scenario, I used a remote Oracle Database EE 12.1.0 on a 64-bit machine running OEL v5 under OVM. The average ping time between the Windows machine and the Linux server was 45 ms.
For the testing, I used a standalone JMS client application to send 1K, 4K, 16K, and 64K persistent messages to WLS. In each run, the client instantiated 4 concurrent threads to send a total of 1000 messages (250 messages per thread). The client ran local to the WLS server, and between each test, all the messages in the JMS queue were cleared. Here is a sample output from one of the tests:
All the times above are in milliseconds. It is also important to note that the JMS servers were not running on the WLS AdminServer.
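I have not published the exact client, but for readers who want to put together a similar test, the sketch below shows the general structure: four sender threads, 250 PERSISTENT BytesMessages each, timed with System.currentTimeMillis(). It is a simplified reconstruction, not the client I used, and it reuses the placeholder provider URL and queue JNDI name from the earlier snippet.

import java.util.Hashtable;
import javax.jms.BytesMessage;
import javax.jms.DeliveryMode;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JmsStoreLoadTest {
    static final int THREADS = 4;
    static final int MESSAGES_PER_THREAD = 250;
    static final int[] PAYLOAD_SIZES = {1024, 4 * 1024, 16 * 1024, 64 * 1024};

    public static void main(String[] args) throws Exception {
        for (int size : PAYLOAD_SIZES) {
            final byte[] payload = new byte[size];

            // One sender thread per worker, as in the test description above.
            Thread[] workers = new Thread[THREADS];
            for (int t = 0; t < THREADS; t++) {
                workers[t] = new Thread(new Runnable() {
                    public void run() {
                        try {
                            sendBatch(payload, MESSAGES_PER_THREAD);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }

            long start = System.currentTimeMillis();
            for (Thread w : workers) {
                w.start();
            }
            for (Thread w : workers) {
                w.join();
            }
            long elapsed = System.currentTimeMillis() - start;

            System.out.println(THREADS * MESSAGES_PER_THREAD + " persistent "
                    + (size / 1024) + "K messages sent in " + elapsed + " ms");
        }
    }

    // Each worker opens its own connection and session and sends 'count'
    // PERSISTENT BytesMessages built from 'payload'.
    static void sendBatch(byte[] payload, int count) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");   // placeholder URL
        InitialContext ctx = new InitialContext(env);

        QueueConnectionFactory cf =
                (QueueConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/TestQueue");       // placeholder JNDI name

        QueueConnection con = cf.createQueueConnection();
        QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueSender sender = session.createSender(queue);
        sender.setDeliveryMode(DeliveryMode.PERSISTENT);

        for (int i = 0; i < count; i++) {
            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(payload);
            sender.send(msg);
        }

        sender.close();
        session.close();
        con.close();
        ctx.close();
    }
}

Each worker opens its own connection and session so that a single session does not serialize the sends across threads.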
Here is a summary of average message throughput for each scenario:
So, what is the key take-away from the stats above?
When using a local database, as the message size grows, you may see performance degradation of up to 123.02% for a 64K message compared with the default file store; in other words, the local-DB run took roughly 2.2 times as long. When you compare this with the remote database, it took more than 3600 times longer to commit 1000 64K messages to a JMS persistence store backed by the remote DB.
So, what is the right persistence store strategy for you?
There are a lot of factors that go into this decision. Please take a look at Using the WebLogic Persistence Store for the full set of considerations and limitations. Also, as I said, I used vanilla, out-of-the-box parameters for this test; you should consult Tuning the WebLogic Persistence Store for configuration guidance and best practices.