September 25, 2006
What to do with an 8-core blade? Virtualize it!
Looks like we'll have 8-core blades by year end. Intel just announced its new 4-core server chip, which means blades will soon have 8 cores each. Blades are now turning into what SMP boxes were before, but they are a lot cheaper.
It's likely these blades will use a high-speed bus like 10Gb ethernet or infiniband as their connection to the world. Blades have always had a problem with expansion, but something like infiniband could act as a better PCI express bus, letting a blade attach at local speeds to remote disks or even a sidecar blade full of PCI slots.
Given the need to keep data center costs reasonable, I can see someone buying a full rack of these, with 14 8-core blades per 7U of rack space. That's an incredible amount of power, and it's going to be virtualized. Not many applications need 8 cores all to themselves. I see a big market coming for products such as WebSphere XD, VMware, the Linux Xen stuff, or Windows Virtual PC. It's just crazy not to virtualize these boxes.
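To put that density in numbers, here's a rough sketch. The 42U rack size is my assumption, not something stated above:

```python
# Rough core-density math for a rack of 8-core blades.
# Assumption (mine): a standard 42U rack filled with 7U blade chassis.

CORES_PER_BLADE = 8
BLADES_PER_CHASSIS = 14
CHASSIS_HEIGHT_U = 7
RACK_HEIGHT_U = 42  # assumed standard rack height

cores_per_chassis = CORES_PER_BLADE * BLADES_PER_CHASSIS  # 112 cores per 7U
chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U      # 6 chassis
cores_per_rack = cores_per_chassis * chassis_per_rack     # 672 cores

print(f"{cores_per_chassis} cores per chassis, {cores_per_rack} per rack")
```

Over 600 cores in one rack is exactly why single applications can't keep these boxes busy on their own.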
I can see iSCSI finally taking off, as these blades need access to fast disks at a reasonable cost. Fibre is too expensive, as is infiniband probably, but 10Gb ethernet would be awesome for iSCSI and provide enough throughput for most customers.
A big problem is likely to be backplanes. 14 8-core blades, each with a 10Gb ethernet connection: that's going to take quite a backplane to keep up. Now hook multiple chassis together, and the bandwidth needed boggles the mind. Switches between chassis are not going to be cheap, not at all. Today's blades with gigabit ethernet have 4 Gbit between chassis, but that is going to be nowhere near fast enough for what's coming in a couple of months.
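A quick back-of-envelope check of those backplane numbers. This assumes the worst case where every blade saturates its link and all traffic leaves the chassis, which real workloads won't hit, but it shows the scale of the gap:

```python
# Worst-case backplane bandwidth for a 14-blade chassis.
# Assumption (mine): every blade drives its 10Gb link at full rate
# and all of that traffic must cross the inter-chassis uplink.

BLADES_PER_CHASSIS = 14
LINK_GBPS = 10           # 10Gb ethernet per blade
TODAYS_UPLINK_GBPS = 4   # current 4 Gbit between gigabit-ethernet chassis

aggregate_gbps = BLADES_PER_CHASSIS * LINK_GBPS    # 140 Gb/s per chassis
oversubscription = aggregate_gbps / TODAYS_UPLINK_GBPS

print(f"Aggregate per chassis: {aggregate_gbps} Gb/s")
print(f"Shortfall vs today's uplink: {oversubscription:.0f}x")
```

Even heavily discounted for realistic traffic, a 140 Gb/s aggregate against a 4 Gbit uplink is why the switching between chassis gets expensive.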
Anyway, I heard an IBMer predict this a couple of years ago, and he was right. Another interesting issue is going to be battery backup and simply supplying power to these things. They will draw serious power and as a result require some pretty hefty UPS devices to carry them if the power fails.
Interesting times. Of course, give it 6 months and we'll have 16 core blades.
September 25, 2006 | Permalink
Interesting analysis. This is where the PCI-SIG IOV (I/O Virtualization) Working Group comes in - which IBM co-chairs, by the way.
The idea is to share I/O modules (like 10 GbE) among the blades in the chassis, through a mid-plane PCIe interconnect. Read more at: http://www.pcisig.com/specifications/iov/review_zone/
Also note that, even before the PCI-SIG ratifies a standard there, some companies already offer a valid solution today (e.g. www.nextio.com).
Posted by: Phil | Sep 27, 2006 1:30:15 PM
Infiniband is actually significantly cheaper than both fibre and 10GbE.
Posted by: Brendan | Oct 9, 2006 7:41:08 PM