December 15, 2007
Great time at the Spring Experience
I just returned from the Spring Experience, where I'd been invited to speak. I did a session on the new data center support for ObjectGrid as well as an overview of an XTP application built using J2SE, OpenJPA, ObjectGrid and Spring. The conference was very well attended, I think with around 600-700 attendees, which is great. I like nothing better than making a single trip and being able to talk with lots of customers in one go :)
I met a lot of customers at the sessions and got a very good response on the new Spring integration features in ObjectGrid V6.1 ifix 3 which should be public any day now.
I had a good chat with Rod and I think we'll be doing more to make ObjectGrid work even better with Spring moving forward.
The batch programming sessions were very full, which shows that even though batch is viewed as old tech, it's still very popular. I had people ask about ObjectGrid with Spring WebFlow (works fine), ObjectGrid with Spring native transactions (works in ifix 3) and questions on building ObjectGrid applications that use Spring services and configuration capabilities. I'm going to try to do more blog entries with examples/specific use cases and I'll work on putting them into the official wiki also.
I bumped into Floyd from InfoQ and recorded a video interview on virtualization, so that should be interesting for some, I hope. Floyd and I go way back and we'll probably work together on some content around multicore for InfoQ in the near future.
Bob Lozano was there and fun to hang around with as always. He works at Appistry, which makes a very cool product for organizing grids and running workflow-style tasks with existing code on those grids. ObjectGrid would work very nicely with it. I had customers at one session ask me if ObjectGrid competed with Appistry and, of course, nothing could be further from the truth; they are very, very complementary technologies. Appistry would be a great solution to provision ObjectGrid applications and submit jobs to them, or use them as part of other jobs.
I met some old friends from my consulting days and that was great also. The hotel and conference rooms were very nice. The event seemed well run and they provided lots of cool prizes to attendees: iPods, iPhones, etc.
Unfortunately, I couldn't stay for the scalability expert panel which was on Friday at 9:30pm. Nati from GigaSpaces was there, as well as Patrick from Oracle Coherence, so that would have been fun to attend, but I had a personal appointment on Saturday and had to fly out right after my session on Friday.
I'm very much looking forward to doing more of these in the future.
November 29, 2007
Demo of creating JavaEE applications hosting ObjectGrid
ObjectGrid is designed to work in J2SE environments as well as run within WebSphere Application Server. WebSphere clusters can easily be used to host ObjectGrids, and clients can then access the data stored within that grid. Clients can be applications within the same cluster using servlets or EJBs, J2SE clients, or J2EE clients in different clusters or cells. This allows large datasets to be stored in the total memory of all cluster member JVMs. This can be used for XTP-type applications, network attached caches or to host HTTP sessions. Several clusters can also be used as a single grid even if the clusters are in different cells. This allows advanced topologies that we can go over in future blog entries.
This is a set of four screen casts. I recorded them at 1280x1024, so hopefully they fit on most screens. The intent is to show how easy it is to create a JavaEE EAR file and then add a couple of xml files to it, which will cause ObjectGrid to start a container within the application server JVM the EAR is eventually deployed to. The end result shows how ObjectGrid can be embedded in WebSphere to provide a complete environment for hosting an ObjectGrid within a WebSphere cluster.
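For the curious, the "couple of xml files" are the grid descriptor and a deployment policy, packaged in the EAR (typically under META-INF). The sketch below is illustrative only; the element names and namespaces are from memory of the product documentation, so check the docs for your exact level before copying anything:

```xml
<!-- META-INF/objectGrid.xml: names the grid and its maps -->
<objectGridConfig xmlns="http://ibm.com/ws/objectgrid/config">
  <objectGrids>
    <objectGrid name="Grid">
      <backingMap name="Map1"/>
    </objectGrid>
  </objectGrids>
</objectGridConfig>

<!-- META-INF/objectGridDeployment.xml: partitioning and replication -->
<deploymentPolicy xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
  <objectgridDeployment objectgridName="Grid">
    <mapSet name="mapSet" numberOfPartitions="13"
            minSyncReplicas="0" maxSyncReplicas="1">
      <map ref="Map1"/>
    </mapSet>
  </objectgridDeployment>
</deploymentPolicy>
```

The container started inside each cluster member JVM reads these at startup; the deployment policy is what controls partition and replica counts across the cluster.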
The demo uses a WebSphere ND application server that has XD DataGrid (ObjectGrid) installed on top of it. The first video shows how to create the EAR in Rational Application Developer V6.0 and then export the EAR to the file system.
The next video shows how to start an ND cell on my laptop and define a single node. The cell topology on my laptop will be a single dmgr, a node agent and two cluster member JVMs. I'm using a 1GB RAM virtual machine running Windows XP to host the demo.
Next, we will show how to create the cluster and add two cluster members. We will then deploy the EAR file that we exported in the first step to the newly created cluster.
Finally, we will start the cluster members on the command line and show how to look at the SystemOut.log file from the cluster member to look for ObjectGrid messages that show that the grid is working and fails over correctly.
November 11, 2007
Network attached cache/memory for lower consolidation/virtualization costs
Many companies are consolidating their servers today. Rather than watch servers go underutilized whilst running a single application, they are consolidating multiple applications on to fewer boxes to save money and increase the average utilization of the servers.
This increases the CPU usage but memory can be a problem. Memory is not cheap. Even on blades, maxing out the memory capacity can be very expensive. Running multiple Java applications in their own JVMs on a single box uses up memory fast. Each one can take a GB or more of memory. If an application uses 5% of CPU on average then consolidating can push up the CPU usage, but you may not be able to achieve 40-50% CPU usage because you may run out of memory first. 10 JVMs can easily take 5-10GB of memory when running a biggish application. If the applications use caching then the memory usage may be considerable and result in either very expensive memory costs or fewer applications per server, which works against a consolidation strategy.
Network attached caches such as ObjectGrid can help reduce memory costs considerably and thereby allow a successful consolidation strategy without incurring unreasonable memory costs. A cluster of ObjectGrid servers can run on a set of machines and provide a large network attached memory-based cache for applications. This means that if an application needs a GB of cache to run well, then rather than have a GB of memory per JVM, we can have a GB of memory in total across the cluster, which can result in a massive saving. If you have 20 machines then you may be able to run 20 ObjectGrid servers, use 500MB per box and still have 10GB of network attached cache available for applications to use. This allows you to run more applications per server because the per-application memory load is significantly lower.
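To make that arithmetic concrete, here's a toy back-of-envelope calculation using the numbers from this post (illustrative only, not measurements):

```java
// Back-of-envelope comparison of cache memory footprints.
public class CacheMemoryMath {

    // Per-JVM caching: every JVM carries its own full copy of the cache.
    static int perJvmTotalMb(int jvms, int cacheMbPerJvm) {
        return jvms * cacheMbPerJvm;
    }

    // Network attached cache: each server contributes a slice to one shared grid.
    static int gridTotalMb(int servers, int containerMbPerServer) {
        return servers * containerMbPerServer;
    }

    public static void main(String[] args) {
        int inProcess = perJvmTotalMb(20, 1024); // 20 JVMs x 1GB duplicated cache = 20480 MB
        int shared = gridTotalMb(20, 500);       // 20 servers x 500MB = 10000 MB, one logical copy
        System.out.println("in-process: " + inProcess + " MB, grid: " + shared + " MB");
    }
}
```

Half the memory, and every client sees one logical cache instead of 20 independent copies.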
ObjectGrid can easily pay for itself quickly when used in this manner. It cuts memory costs massively and allows more consolidation of applications on fewer boxes.
November 08, 2007
Caching read/write data using a network attached cache
This audio podcast discusses why traditional caching approaches fail when customers attempt to cache read/write data, and how newer caching technologies like ObjectGrid, through the use of network attached cache architectures, allow read/write data to be effectively cached by a cluster of applications even at high update rates.
You can easily subscribe to all audio podcasts using iTunes by clicking on this link.
Network attached caches or memory - Audio Podcast
This is an audio podcast discussing using ObjectGrid as a network attached cache rather than the traditional in-memory cache. A network attached cache has numerous advantages over an in-memory cache in that it's populated by all clients concurrently, and what has been cached by one client is immediately available to service cache requests from its peer clients.
There is also no stale data, in that data in a network cache is stored in one location and all clients see the same record as there is only one copy in the network.
A near or local cache can be specified to filter cache requests to the network cache, improving performance when some smaller subset of the data makes sense to keep close to a particular client. Staleness can be handled using a local evictor, or using optimistic locking or row versioning to detect whether the locally cached data was changed by a peer client while it was cached. The client can then invalidate the local entries and retry the transaction.
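As a sketch of that pattern, here's some plain Java (deliberately not the ObjectGrid API; all class and method names are made up for illustration) showing a near cache with per-entry versions, where an optimistic update detects that a peer changed the record, invalidates the local copy and lets the caller retry:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative near cache over a shared map that stands in for the network cache.
public class NearCache {
    static final class Entry {
        final String value; final long version;
        Entry(String value, long version) { this.value = value; this.version = version; }
    }

    final Map<String, Entry> shared;                     // stands in for the network cache
    final Map<String, Entry> near = new ConcurrentHashMap<>();

    NearCache(Map<String, Entry> shared) { this.shared = shared; }

    // Reads prefer the near cache; misses fall through to the shared cache.
    String get(String key) {
        Entry e = near.computeIfAbsent(key, shared::get);
        return e == null ? null : e.value;
    }

    // Optimistic write-through: succeeds only if no peer changed the entry since we saw it.
    boolean update(String key, String newValue) {
        Entry seen = near.get(key);
        if (seen == null) seen = shared.get(key);
        if (seen == null) return false;
        Entry next = new Entry(newValue, seen.version + 1);
        if (shared.replace(key, seen, next)) {           // version/identity check
            near.put(key, next);
            return true;
        }
        near.remove(key);                                // stale: invalidate so caller can retry
        return false;
    }
}
```

A failed `update` means a peer won the race; the caller re-reads (now fetching the fresh copy into the near cache) and retries, which is exactly the invalidate-and-retry flow described above.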
If you want to subscribe to audio podcasts using iTunes then click on this link.
October 01, 2007
Event Stream processing with ObjectGrid V6.1
ObjectGrid V6.1 shipped with a substantial upgrade in the streaming event space. We added the capability to expose changes to a Map as a stream of insert/update events and then create derivative streams based on these fundamental streams. The derivative streams can do aggregation, filtering and processing of events.
The streaming capability allows events from a single partition to be combined and processed. Layers of event processing can then further aggregate these per-partition results, using the DataGrid APIs to aggregate the resulting derivative streams in parallel.
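To illustrate the idea (this is plain Java, not the actual ObjectGrid streaming API; all names below are invented for the sketch), here's a minimal derivative-stream example: a filtered stream and a running aggregate built on top of a fundamental stream of events:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.ToLongFunction;

// Minimal event stream supporting derivative streams (filter) and aggregation.
public class EventStream<T> {
    interface Listener<T> { void onEvent(T event); }

    private final List<Listener<T>> listeners = new ArrayList<>();

    void subscribe(Listener<T> l) { listeners.add(l); }

    void publish(T event) {
        for (Listener<T> l : listeners) l.onEvent(event);
    }

    // Derivative stream carrying only events that match a predicate.
    EventStream<T> filter(Predicate<T> p) {
        EventStream<T> derived = new EventStream<>();
        subscribe(e -> { if (p.test(e)) derived.publish(e); });
        return derived;
    }

    // Derivative running aggregate (here a sum) over the stream.
    long[] runningSum(ToLongFunction<T> f) {
        long[] total = new long[1];
        subscribe(e -> total[0] += f.applyAsLong(e));
        return total;
    }
}
```

In the real product, the fundamental stream would be fed by insert/update events on a Map within one partition, and the resulting per-partition aggregates would then be combined in parallel with the DataGrid APIs.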
If you would like more information then head over to our wiki to read about it.
September 25, 2007
Spring WebFlow and ObjectGrid
Spring Webflow looks like a very nice Apache-licensed alternative to the LGPL competitors in the same space, and it looks to be extensible into a non-web-specific lightweight conversation manager also, which is cool.
I had a conversation with Keith Donald yesterday about it, and it looks like it should work with ObjectGrid backing the flows out of the box, since it usually stores the flow state as an attribute in the user's HTTP session. ObjectGrid already provides an HTTP Session Manager servlet filter that attaches to web applications and persists HTTP sessions to an ObjectGrid. If the web application is using flows then those flows will be persisted to the HTTP session (and therefore the grid) along with the other application user state. So, there is nothing special to do for Webflow users when running in an ObjectGrid backed session environment.
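For those who think in web.xml terms, the wiring looks roughly like the sketch below. The filter class name here is a placeholder, not the real one; ObjectGrid ships the actual filter (and tooling to splice the declaration in), so use the documentation for your level rather than copying this verbatim:

```xml
<!-- Sketch only: filter-class is a placeholder, not the shipped class name. -->
<filter>
  <filter-name>ObjectGridSessionFilter</filter-name>
  <filter-class>com.example.placeholder.ObjectGridHttpSessionFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>ObjectGridSessionFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

Because the filter wraps the HTTP session transparently, anything Webflow puts in the session, flow state included, ends up in the grid with no application changes.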
I guess implementing an independent ObjectGrid-backed flow repository would be interesting also, so that users could store conversations that happen outside a web context and pick them up the next time they log in, without needing to write the flows to a database or similar non-scalable persistent backend.
Anyway, Webflow 1.0 looks pretty cool and Webflow 2.0 shows a lot of promise also.
September 21, 2007
Podcast available on iTunes now
I'm going to try doing an audio podcast as well as some video ones showing us discussing various scenarios. You can subscribe to the podcast using iTunes here.
September 20, 2007
Virtual machine specifications, the new BIOS
Way back when IBM first launched the PC, PCs had a BIOS which was necessary to run operating systems on the box. Clean-room copying of the BIOS allowed the clone PC business to take off. Last week I saw deals for Xen and VMware to ship their hypervisors with server hardware from vendors. This is interesting because it may be the beginning of a new BIOS. This new BIOS may well be the virtual hardware architecture of these virtual machines. The virtual machines define a spec for a virtual computer. There are virtual network chips, video cards and disk controllers. This combination of virtual hardware is important because when an operating system is installed, it configures itself and installs drivers for that specific collection of emulated hardware.
This is a really important point for customers using these hypervisors as it allows them to create virtual images that can be moved from one server to another with confidence that the image will work on the new server. This is the case because the virtual machine doesn't depend on the hardware definition of the physical server it's running on; it uses virtualized hardware rather than the actual hardware on the server hosting the virtual image. This is also great for vendors as it tightly couples the virtual machine image to the hypervisor that created it: the image will only run on a hypervisor that emulates that exact virtual server.
There are programs to convert a Microsoft Virtual PC image to a VMware image by switching the device drivers in the image from the ones defined by vendor A to those defined by vendor B, but this doesn't always work, especially with virtual machines running Windows, as the hardware change is detected and then another activation key is required from Microsoft.
So, I'm wondering if these vendors are protecting the definitions of their virtual machines. If they were protected then effectively they are locking customers creating virtual images, and vendors creating virtual appliances, in to their hypervisor. VMware's early lead, and the fact that Xen and other competitors are using different virtual hardware specifications, means that VMware has a lot of appliances etc. available on its hypervisor, and they won't run easily on competitive hypervisors, especially if they are Windows-based.
So, are we seeing virtual machine vendors using their virtual machine specifications as a form of legally protected IP? I don't know the answer to that, but it's easy to see why a forward-thinking vendor would try to get protection around a virtual machine specification. If any vendor can use VMware's virtual machine specification then customers will be able to move away from VMware. If not, then switching hypervisor vendors won't be easy and will involve a significant amount of work. This means that once a customer starts to use a particular hypervisor, they may well be locked in to that hypervisor.
We need an open source virtual machine specification as an industry standard.
What customers need, but vendors may not pursue without a lot of pressure, is an open source or industry standard virtual machine definition that's implemented by all vendors. That would of course prevent lock-in and so is unlikely to be vigorously pursued by the dominant vendors, but it would clearly benefit customers and allow competition to keep prices reasonable as the technology matures, because customers could switch vendors based on price or other factors without incurring a huge migration cost.
So, who will be the first to clone the new BIOS and create the new clone business, or is the new BIOS/virtual machine specification protected? That's the question today.
September 13, 2007
Tuning for multicore, now the hard part, processors are going NUMA
The easy stuff I've already covered: critical sections, and planning for dramatically lower core speeds over the next 3-5 years whilst trying to have more and more threads to fully utilize all cores on the processors. This is actually comparatively easy compared to the next stuff. The next thing is hard. These processors will be organized in a NUMA fashion. What's NUMA? It means that unlike today's processors, all memory is not equal. The processors will arrange cores into M blocks of N cores apiece, giving M×N cores per processor. A block of cores will have a common cache and its own memory controller.
You can see this with AMD's current designs pretty easily. They have already used HyperTransport links to connect two separate processors into a multiprocessor complex. If a core on processor A needs memory that's attached to processor B then it asks B for that data over the link, and vice versa. This happens transparently from a programming point of view: memory is just memory. But it's far from transparent from a performance point of view. Imagine processor A executing code, too big to fit in its cache, that is stored in the memory attached to B. Pretty painful. Now imagine this scaled up on a single chip a few times.
This is likely what's coming. This means the address space is partitioned across core blocks. Your core is no longer directly attached to all the memory as it is now. There is now a hidden network of sorts between core blocks, allowing a core in one block to request memory from another core block. This adds more latency when accessing the 'non-local' memory than if the core wanted memory from its local memory controller.
The processor is becoming like a distributed system. We have cache on the core blocks, which is highest speed; we have local memory, which is next; and then we have foreign memory, which is memory attached to a different core block. Anyone who has seen NUMA before will recognize this and the tuning issues that come along with it. Clearly, for maximum performance, we want a core to only access its local memory. All memory is equal from a programming model point of view but obviously, fetching non-local memory to a core will cost you.
Java programmers, unlike C/C++ programmers, have long forgotten about this type of stuff: it's the heap, simple and uniform. This uniformity is the enemy of performance here, because the memory simply isn't uniform any more. The JVMs are going to have to jump through hoops, and the middleware on top will need to cooperate, in order to make sure that when a thread runs, it runs on a core that has most of the data the thread needs in memory attached to that core. Clearly, JVM vendors will want to try to hide this but that's not going to be easy. Just as we really need partitioned architectures to scale linearly on grids of servers, the same is coming to software within a JVM so that we can exploit these architectures to their fullest.
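As a tiny illustration of that partitioned style (plain Java, nothing NUMA-specific; actual memory placement would be up to the JVM and OS), here each worker thread owns one slice of the data, so a thread only ever walks its own partition and just a small merge crosses partitions at the end:

```java
// Partitioned parallel sum: one thread per slice, no shared hot data.
public class PartitionedSum {
    static long sumPartitioned(long[][] slices) {
        long[] partials = new long[slices.length];
        Thread[] workers = new Thread[slices.length];
        for (int i = 0; i < slices.length; i++) {
            final int part = i;
            workers[part] = new Thread(() -> {
                long s = 0;
                for (long v : slices[part]) s += v;  // each worker touches only its own slice
                partials[part] = s;
            });
            workers[part].start();
        }
        try {
            for (Thread t : workers) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        long total = 0;
        for (long p : partials) total += p;          // small cross-partition merge at the end
        return total;
    }
}
```

The same shape, data owned by the thread (and ideally the core block) that processes it, is what grid partitioning already does across JVMs; NUMA just pushes it down inside the chip.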
Clearly, code will still run if applications don't do this, but it's also clear that it will not run quickly on the hardware. We have a lot of work and innovation ahead of us in JVM technologies, as well as in the middleware on top, to exploit the new features coming in JVMs and allow high performing applications to be written on these new architectures.