March 06, 2004

JProfiler 3.0 still a star

JProfiler 3.0 continues to impress. I just upgraded my copy from 2.3 and it is as painless to use as the old one, and the issues with the IBM JDK seem to have gone (heap snapshots fell over before, but no more). I'm testing it with 6.0 and 5.1 builds and it just works. Recommended.

Using it with WAS is pretty simple. Generate a script to start the server using:

startServer server1 -script profserver1.bat

Then edit profserver1.bat and add the following to the java launch command:

-Xrunjprofiler:port=8849 "-Xbootclasspath/a:C:\jprofiler3\bin\agent.jar"
where C:\jprofiler3 is the directory where you installed JProfiler.

Also update the PATH in that file to:

@REM Environment Settings
SET PATH=%WAS_PATH%;C:\jprofiler3\bin\windows
And presto, it works. Just launch your server using profserver1.bat; it then waits for you to attach to it using a generic application server attach from JProfiler.

Check it out at http://www.jprofiler.com. At 500 bucks a copy, or 600 with upgrade protection, it's a bargain in a space where tools can cost kilobucks. I have no connection with ej-technologies, personally or financially; I just like it. It works very well with WAS, and if you're looking for a quality profiler, you can't go wrong with this.

March 6, 2004 in WebSphere Performance | Permalink | Comments (1)

November 07, 2003

Async Beans Introduction

Async Beans is a feature of WebSphere 5.0 Enterprise. It allows J2EE applications to take advantage of threading, both short-lived (pooled) threads and long-lived (daemon) threads. Today I'm just going to discuss how to use the threading in applications.

It all centers around the com.ibm.websphere.asynchbeans.WorkManager interface. A web app or EJB can declare a resource-ref to one or more of these. These are bound to a physical WorkManager at deploy time. The administrator can create one or more WorkManagers, each of which is basically a thread pool. An application can be deployed to use multiple thread pools or a single one; it's just like a resource-ref to a DataSource.

The application component looks it up in JNDI using the name corresponding to the resource-ref. This can be done many times, but the same WorkManager is returned each time. If two applications are bound to the same physical WorkManager then they share it.

The main methods on WorkManager are startWork and join, and the main interfaces to know about are Work, WorkItem and WorkManager. Work is simply a Runnable with an additional release method. So, make an instance of an inner class or a JavaBean that implements the Work interface and pass it to WorkManager.startWork. You will be returned a WorkItem. These can be put in an ArrayList and then provided to the WorkManager.join method. The join method lets the caller block waiting for one or all of the Works to complete; a timeout can also be specified.
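
As a rough sketch, the pattern looks like the code below. The resource-ref name wm/myWorkManager and the PriceQuery class are made-up names for illustration, and the exact join signature and JOIN_AND constant are from memory, so check the javadocs before copying this.

import java.util.ArrayList;
import javax.naming.InitialContext;
import com.ibm.websphere.asynchbeans.Work;
import com.ibm.websphere.asynchbeans.WorkManager;

// A Work is just a Runnable with an extra release() method.
public class PriceQuery implements Work {
    public void run() {
        // do the real work here; keep it short, it runs on a pooled thread
    }
    public void release() {
        // a hint from the container to stop; a short pooled Work can usually ignore it
    }
}

// In the calling servlet or EJB (exception handling omitted for brevity):
InitialContext ctx = new InitialContext();
WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/myWorkManager");

ArrayList items = new ArrayList();
items.add(wm.startWork(new PriceQuery()));
items.add(wm.startWork(new PriceQuery()));

// Block for up to five seconds until all of the Works complete
// (JOIN_AND waits for all of them, JOIN_OR for any one).
wm.join(items, WorkManager.JOIN_AND, 5000);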

The javadocs are here.

When a Work is dispatched on a thread, the JNDI context of the component that called startWork is used. The Work cannot start global transactions itself, but it can safely invoke an EJB that it looks up through an ejb-local-ref or ejb-ref and use that to do any transactional work. If security is on, then the credential of the caller of startWork is also present on the dispatching thread, so the Work runs under the same identity as the caller of startWork.
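
Continuing the sketch above, the run method of the Work could do something like this. OrderLocalHome, OrderLocal and processBatch are made-up names for illustration; the real point is that the transaction begins and ends inside the EJB method as usual.

public void run() {
    try {
        InitialContext ctx = new InitialContext();
        // ejb/OrderLocal is an ejb-local-ref declared by the calling component
        OrderLocalHome home = (OrderLocalHome) ctx.lookup("java:comp/env/ejb/OrderLocal");
        OrderLocal order = home.create();
        order.processBatch();   // any transactional work happens inside the EJB
    } catch (Exception e) {
        // log and give up; there is no caller to propagate this to
    }
}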

Some variations of the startWork method take a WorkListener instance, which allows events to be fired back as the Work is executed and when it finishes. Some also take a boolean 'isDaemon' which, if true, means it runs as a daemon thread. In that case we don't allocate it from a pool; we just spin up a dedicated thread for it.
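
A listener might look like the sketch below. I'm writing the WorkListener method names from memory, so verify them against the javadocs before relying on this.

import com.ibm.websphere.asynchbeans.WorkEvent;
import com.ibm.websphere.asynchbeans.WorkListener;

// Logs the lifecycle events fired back as a Work moves through the WorkManager.
public class LoggingListener implements WorkListener {
    public void workAccepted(WorkEvent event)  { System.out.println("work accepted"); }
    public void workRejected(WorkEvent event)  { System.out.println("work rejected"); }
    public void workStarted(WorkEvent event)   { System.out.println("work started"); }
    public void workCompleted(WorkEvent event) { System.out.println("work completed"); }
}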

The release method is there so that when the application is stopped, we iterate over the daemon Works and call release to give them a hint that they should stop.
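
A long-lived Work typically loops on a flag that release() clears, something like this sketch (the boolean isDaemon overload of startWork is as I remember it, so double-check the javadocs):

import com.ibm.websphere.asynchbeans.Work;

// A daemon-style Work: loops until the container calls release().
public class QueuePoller implements Work {
    private volatile boolean stopped = false;

    public void run() {
        while (!stopped) {
            // poll a queue, do periodic housekeeping, etc.
        }
    }

    public void release() {
        stopped = true;   // called when the application is stopped
    }
}

// Started with the daemon flag so it gets its own thread rather than a pooled one:
// wm.startWork(new QueuePoller(), true);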

There really isn't much to using it. I have another blog entry on this site showing how to use JMX to tune the thread pools associated with a WorkManager while it's running.

Once you get the hang of it, you can build pretty advanced J2EE applications. The most common uses I see are dynamic messaging, advanced threading models for processing messages, and running tasks such as database queries and calculations in parallel.

Startup beans are basically stateless session beans (SLSBs) that use a specific remote and remote home interface. If WebSphere sees one of these in an EJB module, then when the module is started we invoke its start method. The start method can be used to spawn threads, warm CMP caches, and so on. If the application stops normally then the stop method is called. There can be more than one startup bean in an application, and they can be ordered with respect to each other.
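
As a rough sketch (the startup service interface names below are as I remember them, so treat them as assumptions and check the docs), the implementation class looks like a normal stateless session bean with start and stop business methods:

import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// Hypothetical startup bean implementation. Its remote and remote home interfaces would be
// the WebSphere startup service ones (something like com.ibm.websphere.startupservice.AppStartUp
// and AppStartUpHome).
public class WarmCacheStartupBean implements SessionBean {
    private SessionContext ctx;

    // Called by WebSphere when the module starts; returning false aborts the start.
    public boolean start() {
        // spawn daemon Works, warm CMP caches, etc.
        return true;
    }

    // Called when the application is stopped normally.
    public void stop() {
        // clean up whatever start() set in motion
    }

    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void setSessionContext(SessionContext sc) { this.ctx = sc; }
}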

I'll try to get permission to publish a messaging application which demonstrates how to do some useful things using async beans and JMS or Tibco.

November 7, 2003 in WebSphere Performance | Permalink | Comments (13)

September 16, 2003

GC and server tuning

Server tuning is still a black art to a degree. The best hints come from looking at what we do in benchmarks like ECPerf or SPECjAppServer. There, you'll see our experts do whatever they can to go faster.

If you're writing a response-time-critical application like a trading system, then GC can be a problem. You really need GC to happen rarely and, when it does, to hold up processing for the minimum possible time.

I'm finding that the following works well for me in terms of JVM tuning on IBM JDKs:

"-Xgcpolicy:optavgpause" "-Xgcthreads2"
This slightly slows response time because the JDK does more work on each allocation, but the GC pauses, when they come, can be half the size. The gcthreads parameter should be set to the number of CPUs assigned to your application; this speeds up each GC operation by parallelizing it.

Heap Size
The JVM heap should also be only as big as you really need. You may be tempted to set your heap to 3GB, but while this results in less frequent GC cycles, the pauses will be longer: there is simply more memory to collect, so each collection takes longer. If you are sensitive to GC pauses then this is, in my opinion, the wrong thing to do. A small heap may result in more frequent GCs, but the pauses should be shorter, so when they occur the hit to response time is minimal. With the big heap, yes, fewer transactions are impacted, but when the GC comes it increases response time at that moment significantly. So if what you want is consistent response times, a smaller heap is better.
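
For example, a small fixed-size heap combined with the flags above might look like this (512m is just an illustrative number; size it from your own verbose GC data):

"-Xms512m" "-Xmx512m" "-Xgcpolicy:optavgpause" "-Xgcthreads2"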

Thick versus Thin JDBC Drivers
Check whether the thick JDBC driver generates less garbage than the thin driver. Remember, the thick driver is written in C with a JNI wrapper, so there are simply fewer Java objects getting created. It may be slightly slower than a thin one (because the thin driver doesn't cross JNI), but it may generate significantly less garbage, and hence the GC pauses may be shorter when they come. Again, it's consistency versus flat-out performance over a short interval.

Thread pool size
Only use as many threads as you need to saturate the box. If, in testing, you notice that 6 threads push your 4-way to 100% CPU running your transaction mix, then there is no point in having more than 6 threads in the pools. More threads will SLOW IT DOWN because of context switching. So if you can test your application and find how many threads it takes to hit 100% CPU, set the pool maximums to that and it will perform better.

September 16, 2003 in WebSphere Performance | Permalink | Comments (5)