January 19, 2007
The future of frameworks and multi-core, a return to fibers
Today, we have a wealth of frameworks that make us all more productive. People typically use layers upon layers of frameworks to make the job of writing software easier. We pay a path-length price for these layers, but so far Moore's law has saved us: ever-faster single-threaded performance has made up for the path-length expansion caused by the richer programming tools we all use.
Once highly multi-core chips arrive, that's going to end. Routines that take 0.2 seconds to run today may take significantly longer due to the lower clock speeds of massively multi-core processors. What kind of impact will this have on developers? Well, let's look at the frameworks they use. Obviously, we're being told, we need to write micro/multi-threaded code now to get the best performance from these MMC processors.
Trouble is, that's not going to happen overnight; it's going to take time. Will each framework do it differently? Will each framework now include small thread pools for parallelizing simple tasks that must be threaded to perform acceptably? If the application uses N frameworks, each framework has a different pool, and the application itself is running in a container, how will all these thread pools interact, get sized, and get managed? How much parallelization of once single-threaded code will we tolerate at the expense of taking threads away from pushing concurrent requests through the processor?
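To make the worry concrete, here's a minimal Python sketch (the framework names and workloads are invented for illustration) of two frameworks that each create a private pool. An application stacking both now owns worker threads it never asked for, with no single place to size or manage them:

```python
from concurrent.futures import ThreadPoolExecutor

class FrameworkA:
    """Hypothetical framework that parallelizes its own rendering step."""
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=4)  # private pool #1
    def render(self, items):
        return list(self.pool.map(str.upper, items))

class FrameworkB:
    """Another hypothetical framework with its own, separate pool."""
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=4)  # private pool #2
    def total(self, numbers):
        return sum(self.pool.map(lambda n: n * n, numbers))

# The application stacks both frameworks and silently acquires
# 8 worker threads that nothing coordinates or sizes globally.
app_a, app_b = FrameworkA(), FrameworkB()
print(app_a.render(["x", "y"]))   # ['X', 'Y']
print(app_b.total([1, 2, 3]))     # 14
```

Multiply this by N frameworks plus the container's own pools and the sizing question in the paragraph above becomes real.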
Once frameworks do this, how will that code perform on 'normal' high clock speed cores? Will we need two code bases (obviously we hope not!).
But what's clear is that today, things are comparatively simple. Threads are typically used for servicing requests concurrently, and most application frameworks don't use threads for inline logic. Lower clock speeds will force parallelization at finer levels to reduce the overall path length for individual requests, and those pools will need to be managed, coordinated, and sized automatically; otherwise the TCO for solutions using MMC processors goes up, which clearly nobody wants either.
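A rough sketch of that finer-grained parallelization, assuming a small shared pool: work that one request would have done sequentially inline is fanned out across workers to shorten the request's wall-clock path.

```python
from concurrent.futures import ThreadPoolExecutor

# A single shared pool, sized once for the whole application
# (the opposite of every framework owning its own).
pool = ThreadPoolExecutor(max_workers=4)

def handle_request(values):
    """One request: split its once-inline work into 4 chunks,
    process the chunks in parallel, then combine the results."""
    chunks = [values[i::4] for i in range(4)]
    partials = pool.map(sum, chunks)
    return sum(partials)

print(handle_request(list(range(10))))  # 45
```

On a fast single core the fan-out is pure overhead; on a slow-clocked many-core part it's what keeps the individual request's latency acceptable — which is exactly the two-code-bases tension mentioned earlier.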
I guess, in a way, we're going back to fibers. Threads seem too heavy for this finer level of parallelism. In a way, this simplifies things: we won't need massively threaded processes, which are very difficult to write. Instead, we will have processes with comparatively the same number of threads as today, but each thread will use multiple fibers to parallelize the stuff we don't bother parallelizing today. This is good news, because writing something that scales to 1,000 threads is mega hard. But 100 threads, each with fibers associated with it, is more doable. Of course, this means we won't be using all of the cores unless the workload uses enough fibers (fiber = core), but that may be good enough.
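For readers who haven't met fibers, here's a minimal cooperative scheduler sketched with Python generators (the FiberScheduler and worker names are invented for this sketch): each fiber runs until it yields, and a single thread round-robins the lot — far cheaper per unit of concurrency than an OS thread apiece.

```python
from collections import deque

class FiberScheduler:
    """Round-robin scheduler for cooperative fibers on one thread."""
    def __init__(self):
        self.ready = deque()

    def spawn(self, fiber):
        self.ready.append(fiber)

    def run(self):
        while self.ready:
            fiber = self.ready.popleft()
            try:
                next(fiber)               # run fiber until it yields
                self.ready.append(fiber)  # still alive: requeue it
            except StopIteration:
                pass                      # fiber finished

trace = []

def worker(name, steps):
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield  # cooperatively hand control back to the scheduler

sched = FiberScheduler()
sched.spawn(worker("a", 2))
sched.spawn(worker("b", 2))
sched.run()
print(trace)  # ['a0', 'b0', 'a1', 'b1']
```

The interleaved trace shows the key property: many concurrent activities, one thread, no locks — and a real runtime would pin one such scheduler per thread (fiber = core, as above).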
So, if you're a framework developer, we all have interesting days ahead.
Readers may like a reference to fibers: http://en.wikipedia.org/wiki/Multithreading#Processes.2C_threads.2C_and_fibers
Posted by: Glyn Normington | Jan 23, 2007 5:24:17 AM