
October 25, 2005

Design: ObjectGrid on Z/OS

Given my previous discussion on how the server code must be written for Z/OS:

  • everything that listens on sockets needs to be in the CR (control region)
  • application code must run only in the SR (servant region); it is not allowed to run inside the CR.

This kind of complicates things for a client/server ObjectGrid. The ObjectGrid in WebSphere XD v6 just has asynchronous peer-to-peer invalidation or push of transactions. This works fine on Z/OS as the servant can host the ObjectGrid, and the ObjectGrid connects to an external JMS provider to send and receive messages. Notice that it isn't opening any sockets of its own, so no rules are broken.
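For the curious, here's a rough sketch of the kind of JMS traffic that style of invalidation involves. The JNDI names, topic, and message properties below are made up for illustration; they aren't ObjectGrid's actual configuration or wire format.

    // Illustrative only: the JNDI names and message properties are invented,
    // not ObjectGrid's actual configuration or wire format.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class InvalidationPublisher {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Administered objects for the external JMS provider (hypothetical names).
            TopicConnectionFactory tcf =
                (TopicConnectionFactory) ctx.lookup("jms/CacheTCF");
            Topic topic = (Topic) ctx.lookup("jms/CacheInvalidation");

            TopicConnection conn = tcf.createTopicConnection();
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = session.createPublisher(topic);

            // Tell the peers this entry is stale; each peer's listener evicts it locally.
            TextMessage msg = session.createTextMessage();
            msg.setStringProperty("map", "Customer");
            msg.setStringProperty("key", "12345");
            publisher.publish(msg);

            conn.close();
        }
    }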

WebSphere XD 6.0.1 will have a lot more function. Let's imagine an application using a huge cache, one larger than the maximum heap size, say 200GB. We can store this data in 200 1GB JVMs using partitioning. We could even replicate it, which needs either 200 JVMs with 2GB heaps or 400 JVMs: 200 replication groups, each consisting of two 1GB JVMs, a primary and a replica. 64-bit JVMs will allow much larger heaps and should allow the total number of JVMs to be reduced accordingly.
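To make that arithmetic concrete, here's a back-of-the-envelope sketch of hash partitioning across that many JVMs. This isn't ObjectGrid's actual placement algorithm, just an illustration of how the numbers add up.

    // Back-of-the-envelope sketch; not ObjectGrid's placement algorithm.
    public class PartitionMath {
        static final int PARTITIONS = 200; // 200 x 1GB JVMs ~ 200GB of primary cache
        static final int REPLICAS = 1;     // one replica per primary

        // Map a key to the partition (JVM) that owns its primary copy.
        static int primaryFor(Object key) {
            return Math.abs(key.hashCode() % PARTITIONS);
        }

        public static void main(String[] args) {
            // 200 replication groups of two 1GB JVMs each = 400 JVMs.
            System.out.println("1GB JVMs needed: " + PARTITIONS * (1 + REPLICAS));
            System.out.println("Customer:12345 -> partition " + primaryFor("Customer:12345"));
        }
    }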

The ObjectGrid in the server will communicate with the remote ObjectGrid servers using TCP/IP. But this is a problem for the ObjectGrid server on Z/OS, as we can't run a server socket in a servant. So we could put it in the control region, right? Wrong. We're caching application objects. The evictor is an application object. We can't deploy/run application code inside the CR.

So, this makes life complex. We could try splitting the ObjectGrid code into two pieces, separating it all out into system and application code, but I'd like to ship it before I retire. So the ObjectGrid server cannot run in a servant and also cannot run in a control region. What to do?

We could run a standalone ObjectGrid server in its own JVM on Z/OS. That can listen on sockets just like on a conventional operating system. The servants can then just embed the ObjectGrid client and connect to the servers they require.
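To show why this sidesteps the CR/SR rules, here's a generic sketch of a standalone JVM listening on a socket. This is plain java.net code, not the ObjectGrid server; the port and protocol are made up.

    // Generic standalone listener; plain java.net code, not the ObjectGrid server.
    import java.io.*;
    import java.net.*;

    public class StandaloneCacheListener {
        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(7654); // port chosen arbitrarily
            System.out.println("Listening on " + server.getLocalPort());
            while (true) {
                // Servants connect as plain TCP clients via the embedded client library.
                final Socket client = server.accept();
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                            String line;
                            while ((line = in.readLine()) != null) {
                                out.println("ECHO " + line); // placeholder for get/put handling
                            }
                            client.close();
                        } catch (IOException e) {
                            // ignore for this sketch
                        }
                    }
                }).start();
            }
        }
    }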

We could also run the ObjectGrid server in a zLinux LPAR, or on a Windows box, a Linux box, or a pSeries box; that would work too. The clients don't care what platform is hosting the servers.

A customer could do any of the above and it would work. Running the cache servers on a conventional server or blade farm would let the Z/OS application leverage the price/performance of a large cache running on blades whilst keeping the application processing on Z/OS. Memory is a lot cheaper on a blade than on a 390. If you need a lot of memory for a cache and memory cost is a big factor, then hosting the cache on the blades can make sense. There are a bunch of ways this technology could be deployed by customers.


Comments

Why do you need so many JVMs for your cache distribution? Can't you have fewer and passivate stuff to disk instead of feeding the heap with the cache data?
I thought ObjectGrid had evictors.

Posted by: Alex | Oct 26, 2005 7:28:41 AM

It does have evictors. You can plug in your own to evict using any criteria you want. You can also plug in a Loader to offload to disk, as you say, if that serves your needs. There are also scenarios where a file-based offload isn't the best answer (HTTP session replication, for example) and ObjectGrid will support those scenarios as well.
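For illustration, a hypothetical sketch of what those plug points might look like. The interface names and signatures here are invented for this comment and aren't the actual ObjectGrid SPI.

    // Hypothetical plug-point sketch; NOT the actual ObjectGrid SPI.
    // An evictor decides which cached entries to throw away and when.
    interface CacheEvictor {
        boolean shouldEvict(Object key, Object value, long lastAccessMillis);
    }

    // A loader moves entries between the in-memory map and a backing store (disk, DB, ...).
    interface CacheLoader {
        Object load(Object key);
        void store(Object key, Object value);
    }

    // Example: evict anything untouched for ten minutes.
    class IdleTimeEvictor implements CacheEvictor {
        private static final long MAX_IDLE_MILLIS = 10 * 60 * 1000;
        public boolean shouldEvict(Object key, Object value, long lastAccessMillis) {
            return System.currentTimeMillis() - lastAccessMillis > MAX_IDLE_MILLIS;
        }
    }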

Posted by: Billy | Oct 26, 2005 8:44:02 AM

Billy -

The Z customers we've worked with mainly have to have the ability to evict or update data in the data grid. In other words, it's usually a blade-based application, and the Z is the original "system of record" (historical reasons). So the blades are serving up web pages, or modeling the data, but they need to know if/when that data is changing so that they can cache it more aggressively.

Anyhow, my $.02.

Peace.

Posted by: Cameron Purdy | Oct 26, 2005 11:51:51 AM
