Tips and Tricks guide to configuring Portal systems for long run testing on WebSphere Portlet Factory reference applications
Added by Rob Flynn | Edited by Alice Smith009 on April 7, 2018 | Version 3
This tips and tricks document has been written to help customers configure their systems for long run testing of WebSphere Portlet Factory sample applications, and discusses configurations used by WebSphere Portlet Factory test teams.
Both tuning and capacity are affected by many factors, including the workload scenario and the performance measurement environment. For tuning, the objective of this paper is not to recommend that you use the values we used when measuring our scenarios, but to make you aware of those parameters used in our configuration. When tuning your individual systems, it is important to begin with a baseline, monitor the performance metrics to determine if any parameters should be changed and, when a change is made, monitor the performance metrics to determine the effectiveness of the change.
Deployment Configurations used by the WebSphere Portlet Factory System Verification Testing (SVT) team
Tuning guide for WebSphere Portlet Factory 6.1.5
During the course of our five-day reliability runs, we determined the following JVM heap, Web container thread pool, and DB connection pool values, which gave SVT five successful days of load testing at a constant transaction rate:
- JVM initial and maximum heap size
- Web container thread pool size (minimum/maximum values)
- DB connection pool size
Steps to configure these values can be found in the Tuning Guide -> http://www-01.ibm.com/support/docview.wss?rs=688&uid=swg27013972
How to set these tuning parameters:
for JVM INITIAL AND MAXIMUM HEAP SIZE,
In the WebSphere Administrative Console: Servers -> Application Servers -> WebSphere Portal -> Server Infrastructure: Java and Process Management -> Process Definition -> Java Virtual Machine
- Initial Heap Size
- Maximum Heap Size
In the WebSphere Administrative Console: Servers -> Application Servers -> WebSphere Portal -> Server Infrastructure: Java and Process Management -> Process Definition -> Servant -> Java Virtual Machine
- Initial Heap Size
- Maximum Heap Size
for WEB CONTAINER THREAD POOL SIZE
In the WebSphere Administrative Console: Servers -> Application Servers -> WebSphere Portal -> Additional Properties: Thread Pools -> Web Container -> Thread Pool
- Minimum size threads
- Maximum size threads
for DB CONNECTION POOL
In the WebSphere Administrative Console: Resources -> JDBC -> Data sources -> <your data source> -> Additional Properties -> Connection Pool Properties
- Minimum connections
- Maximum connections
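The console steps above can also be scripted with wsadmin (Jython). The sketch below is illustrative only: `AdminConfig` is provided by the wsadmin environment, the cell/node/server names are assumptions, and the numeric values are placeholders rather than the SVT values -- substitute your own.

```python
# Run with: wsadmin -lang jython -f tune_portal.py
# myCell/myNode and the numeric values below are placeholders -- adjust for your cell.
server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:WebSphere_Portal/')

# JVM initial and maximum heap size
jvm = AdminConfig.list('JavaVirtualMachine', server)
AdminConfig.modify(jvm, [['initialHeapSize', '1024'], ['maximumHeapSize', '1024']])

# Web container thread pool minimum/maximum size
for pool in AdminConfig.list('ThreadPool', server).splitlines():
    if AdminConfig.showAttribute(pool, 'name') == 'WebContainer':
        AdminConfig.modify(pool, [['minimumSize', '50'], ['maximumSize', '50']])

AdminConfig.save()
```

After saving, restart the WebSphere_Portal server for the JVM heap changes to take effect.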
Application / backend specific tuning parameters
This table discusses some of the known limitations we have found with specific application types.
After deploying a Domino application, update svt_domino_server.properties in /WEB-INF/config/domino_config/ with your Domino ServerName, UserName, and Password.
By default, the Domino Data Access builder now distributes Domino sessions over a configurable number of cached CORBA ORBs (Object Request Brokers). This reduces network resources because all sessions created with a given ORB share one TCP/IP connection. This is controlled using the following properties.
Specifies the number of CORBA ORBs that are created and shared among all Domino sessions. If this property is not set, the default is 5. To help spread the load over the entire set of ORBs, the next sequential ORB is retrieved from the cache to handle each Domino transaction. Take care not to overload the network connection, and adjust the number of ORBs depending on your application's performance requirements. Also, long-running Domino transactions can block other sessions using the same ORB, so take care to keep Domino transactions fast.
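The ORB cache described above is controlled through the Domino builder's properties. As a sketch, such an entry might look like the line below; the key name `bowstreet.domino.orbs` is a hypothetical placeholder, so check the Domino Data Access builder help for the exact property name in your version:

```properties
# Hypothetical key name -- verify against your Portlet Factory version's documentation.
# Number of cached CORBA ORBs shared among all Domino sessions (default 5).
bowstreet.domino.orbs=5
```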
After deploying a WDF-based application, update bowstreet.properties in /WEB-INF/config/ with the following lines:
# Set this property to specify the persistence manager used by the Annotation Engine. The Annotation Engine supports two types of persistence manager: XML file and database.
# By default, if no persistence manager is selected, the Annotation Engine uses the XML-file persistence manager.
# If the database persistence manager is selected, use the following line to specify the datasource name you created in WAS.
# If the database persistence manager is selected, use the following line to specify the schema of the table in the database.
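A sketch of what the corresponding bowstreet.properties entries could look like; the property names and values below are hypothetical placeholders, so verify the exact keys against the comments shipped in your bowstreet.properties:

```properties
# Hypothetical keys and values -- check your bowstreet.properties template for the real names.
# Persistence manager for the Annotation Engine: XML file (default) or database.
bowstreet.annotations.persistenceManager=database
# Only used with the database persistence manager:
bowstreet.annotations.datasource=jdbc/annotationDS
bowstreet.annotations.schema=ANNOT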
LWM applications are only supported on the following platforms: Windows, AIX, and Linux.
After deploying an SAP application, update override.properties in /WEB-INF/config/ with the following lines:
Set bowstreet.sap.session.pool.maxConnections to 100
Set bowstreet.sap.session.pool.maxPoolSize to 160
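As properties-file lines, the two SAP session pool settings above would appear in /WEB-INF/config/override.properties as:

```properties
# SAP session pool settings used for SVT long run testing
bowstreet.sap.session.pool.maxConnections=100
bowstreet.sap.session.pool.maxPoolSize=160
```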
Some other general tips and hints:
Some additional points to help you debug out-of-memory (OOM) and heap issues:
- Enable verbose garbage collection - Enabling verbose garbage collection (verbosegc) logging is often required when tuning and debugging many issues. http://publib.boulder.ibm.com/infocenter/wasinfo/v4r0/index.jsp?topic=/com.ibm.support.was40.doc/html/Java_SDK/swg21114927.html
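On the IBM J9 JVM, verbosegc can be enabled with generic JVM arguments like the following (set under the same Java Virtual Machine console panel used for the heap sizes); the log file name and rotation counts below are illustrative choices, not required values:

```
-verbose:gc
-Xverbosegclog:verbosegc.%pid.log,5,10000
```

The second option rotates the verbosegc output across 5 files of 10000 GC cycles each, so a multi-day run does not produce a single unbounded log.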
- Disable PMI - PMI statistics are enabled by default for an Application Server, and when enabled, a gradual degradation in performance might be observed, so we suggest disabling PMI prior to workload execution.
- Increase log file size and number of historical files - To collect full logs from a workload run, in the WebSphere Administrative Console go to: Servers -> Application Servers -> WebSphere_Portal -> Troubleshooting: Diagnostic Trace Service, and increase:
- Maximum File Size
- Maximum Number of Historical Files
- Ensure that you are using the latest version of the IBM JVM. Check the version you are currently using against what is available and update if necessary; older IBM JVM versions had memory leaks. You can get the latest IBM JVM version and bug-fix list here -> http://www.ibm.com/developerworks/java/jdk/
- Check for a WAS memory leak - SVT have seen a TechNote (http://www-01.ibm.com/support/docview.wss?uid=swg21368248) which says WAS may leak memory if ThreadLocals are in use (and there is a good chance they are) and the minimum Web container thread pool size is set below the maximum, such that WAS constantly shrinks and grows the pool. We therefore suggest setting MIN = MAX for the WAS Web container threads, at least to rule out that known WAS issue.
- Set ALLOCATION_THRESHOLD to 3MB in the WebSphere Administrative Console - When you debug Java™ heap fragmentation problems, follow the steps in the below link to find the stack traces of the threads that make large allocation requests.
- Set inactivity timeouts on WAS and DB2 - If you encounter com.ibm.websphere.ce.cm.StaleConnectionException during an application run, it typically means the database timed out a connection, but the application server tried to use the connection after it was timed out. You need to set the pool's connection inactivity timeout in the WebSphere Administrative Console to be less than the inactivity timeout in the database; otherwise you will hit this exception at random times.
- Leave Reap Time at the default of 180 seconds (3 minutes).
- Set Unused Timeout to be 240 seconds (4 minutes).
- If the problem persists, you can try setting Aged Timeout to be 3600 seconds (one hour).
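The three pool timeouts above can also be applied with a wsadmin (Jython) sketch; `MyDataSource` is an assumed data source name, and `AdminConfig` is provided by the wsadmin environment:

```python
# Run with: wsadmin -lang jython -f pool_timeouts.py
# 'MyDataSource' is a placeholder -- substitute your data source's name.
ds = AdminConfig.getid('/DataSource:MyDataSource/')
pool = AdminConfig.showAttribute(ds, 'connectionPool')

# Reap Time 180s (default), Unused Timeout 240s, Aged Timeout 3600s
AdminConfig.modify(pool, [['reapTime', '180'],
                          ['unusedTimeout', '240'],
                          ['agedTimeout', '3600']])
AdminConfig.save()
```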
Other Useful links:
Techniques to enhance WebSphere Portlet Factory application performance
Performance best practices