
March 03, 2007

Will 64-bit require in-memory databases?

As readers will know, I'm doing some 64-bit testing on ObjectGrid 6.1. We're testing it with large heaps (around 30GB) for now, looking for issues. Generational GC turns out to be easily fooled by a Java application that does database-like things, as ObjectGrid does. So, we're tuning it now to 'work' better with generational GC. We will end up with different code/tuning for different garbage collectors.
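To make the problem concrete, here's a minimal, self-contained illustration (plain Java, not ObjectGrid code; class and constant names are mine) of the kind of workload that fools a generational collector. Cache entries survive long enough to be promoted to the old generation and only become garbage when they're evicted, so the cheap nursery collections reclaim almost nothing:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ChurningCache {
        private static final int MAX_ENTRIES = 1000000;

        // LRU map: once full, every insert evicts the least recently
        // used entry. By eviction time an entry has typically survived
        // many minor GCs and been tenured, so it dies in the old
        // generation, not the nursery.
        private final Map<Integer, byte[]> cache =
            new LinkedHashMap<Integer, byte[]>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, byte[]> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

        public static void main(String[] args) {
            ChurningCache c = new ChurningCache();
            // Steady churn: the generational hypothesis ("most objects
            // die young") is exactly backwards here, so the collector
            // ends up doing expensive old-generation work instead of
            // cheap nursery collections. Run with a multi-GB heap.
            for (int i = 0; i < 20000000; i++) {
                c.cache.put(Integer.valueOf(i), new byte[1024]);
            }
            System.out.println("final size: " + c.cache.size());
        }
    }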

But this brings up another question. Managing data structures that scale when you have 10, 30 or even 100GB of data is non-trivial. Searching that much data efficiently is hard. As applications move to 64-bit address spaces, do we want developers trying to solve these problems, or solving the actual business problem that they have? I think the latter is what's important, and this may give an opportunity for products like ObjectGrid with 64-bit JVMs.
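Even something as basic as a secondary index, which a database gives you for free, becomes the application's problem at this scale. A toy sketch (the Order class and names are hypothetical) of the plumbing developers end up hand-rolling:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class HandRolledIndex {
        static class Order {
            final long id;
            final String customer;
            Order(long id, String customer) { this.id = id; this.customer = customer; }
        }

        public static void main(String[] args) {
            List<Order> orders = new ArrayList<Order>();
            Map<String, List<Order>> byCustomer = new HashMap<String, List<Order>>();

            // Every write has to maintain the index as well; add
            // concurrency, eviction and range queries and the
            // bookkeeping snowballs, none of it business logic.
            for (long i = 0; i < 1000000; i++) {
                Order o = new Order(i, "cust" + (i % 1000));
                orders.add(o);
                List<Order> bucket = byCustomer.get(o.customer);
                if (bucket == null) {
                    bucket = new ArrayList<Order>();
                    byCustomer.put(o.customer, bucket);
                }
                bucket.add(o);
            }

            // O(1) hash lookup instead of an O(n) scan over the heap.
            System.out.println(byCustomer.get("cust42").size());
        }
    }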

We are tuning ObjectGrid to work well with large address spaces. ObjectGrid has full query with indexing, advanced locking, transactions, deadlock detection, etc. It's being tuned to work well with generational garbage collection. Anyone who says garbage collection is transparent is living under a rock. It's a good goal, but in practice that's all it is: a goal. Garbage collection works well for a set of scenarios, but there will always be edge cases, and working with large, changing volumes of data is just about the worst-case scenario for a generational garbage collector.

An application that needs large address space support should be able to use ObjectGrid to manage its state in the JVM and leave the non-business-logic issues to ObjectGrid to solve.
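Roughly, I mean something like this. It's only a sketch against the local ObjectGrid programming model (the grid and map names are made up, exact packages and signatures may differ by release, and exception handling is omitted), but it shows the shape of it: the application touches transactional maps and the grid worries about indexing, locking and eviction:

    import com.ibm.websphere.objectgrid.ObjectGrid;
    import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
    import com.ibm.websphere.objectgrid.ObjectMap;
    import com.ibm.websphere.objectgrid.Session;

    public class GridSketch {
        public static void main(String[] args) throws Exception {
            // Create a local grid and define a map to hold state.
            ObjectGrid grid = ObjectGridManagerFactory.getObjectGridManager()
                    .createObjectGrid("bigHeapGrid");
            grid.defineMap("orders");
            grid.initialize();

            Session session = grid.getSession();
            ObjectMap orders = session.getMap("orders");

            // All access is transactional; locking and deadlock
            // detection happen inside the grid, not in application code.
            session.begin();
            orders.insert(Long.valueOf(42), "order-42 payload");
            session.commit();
        }
    }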
