
"Gordon" - Flash-Based Supercomputer (SSDs)




When fully configured and deployed in 2011, Gordon will feature 245 teraflops of total compute power (one teraflop or TF equals a trillion calculations per second), 64 terabytes (TB) of DRAM (dynamic random access memory), 256 TB of flash memory, and four petabytes of disk storage (one petabyte or PB equals one quadrillion bytes of data).

The new supercomputer's key feature will be 32 "supernodes" based on the latest Intel Xeon Processors available in 2011 combined with several state-of-the-art technological innovations to include virtual shared-memory software. Using virtual shared-memory software, each of the system's 32 supernodes has the potential of 7.7 TF of power and 10 TB of memory (2 TB of DRAM and 8 TB of flash memory). The supernodes will be interconnected via an InfiniBand network, capable of 16 gigabits per second of bi-directional bandwidth -- that's eight times faster than some of the most powerful national supercomputers to come on-line in recent months.
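The quoted per-supernode figures follow directly from dividing the system totals across the 32 supernodes; a quick sanity check:

```python
# Sanity check of the per-supernode numbers quoted above, derived by
# dividing Gordon's system totals across its 32 supernodes.
TOTAL_TFLOPS = 245
TOTAL_DRAM_TB = 64
TOTAL_FLASH_TB = 256
SUPERNODES = 32

per_node_tflops = TOTAL_TFLOPS / SUPERNODES   # ~7.66 TF, matching the quoted 7.7 TF
per_node_dram = TOTAL_DRAM_TB / SUPERNODES    # 2 TB DRAM per supernode
per_node_flash = TOTAL_FLASH_TB / SUPERNODES  # 8 TB flash per supernode

print(per_node_tflops, per_node_dram, per_node_flash)
```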

Very interesting stuff here. This is one of the first next-generation supercomputer architectures to address the shared-memory and file-I/O bottleneck, and it does so in a big way. You can really see the power of SSDs coming alive.

We could also see the emergence of 'specialized' nodes. Currently, all of the nodes of a supercomputer are generally identical (same amount of memory, processors, interconnects, cores, etc.). In a few years we could see architectures that house node clusters that excel at certain tasks: for example, a cluster of nodes with heavy GPU acceleration emphasizing visual computing / vector processing, another cluster specializing in scalar processing of tasks that aren't easily parallelized, and another cluster like Gordon's that specializes in moving data in and out of the storage array and to the other parts of the system at a much faster rate.
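To make the specialized-pools idea concrete, here is a minimal dispatcher sketch. All pool names and task categories here are invented for illustration; nothing like this has been described by SDSC:

```python
# Hypothetical sketch of routing jobs to specialized node pools in a
# heterogeneous supercomputer. Pool names and task types are made up.
NODE_POOLS = {
    "gpu": "vector / visual computing",
    "scalar": "serial work that isn't easily parallelized",
    "io": "flash-backed data staging (Gordon-style supernodes)",
}

def dispatch(task_type: str) -> str:
    """Map a task category to the pool best suited for it."""
    routing = {
        "render": "gpu",
        "simulate": "gpu",
        "parse": "scalar",
        "stage_data": "io",
    }
    # Fall back to the general-purpose scalar pool for unknown work.
    return routing.get(task_type, "scalar")

print(dispatch("stage_data"))  # io
```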

Your thoughts?


SC09 is going on and is where full details of this project will be released. Anyone have the details?

SDSC, using Appro clusters, is just extending what they've already done on a smaller scale with Intel hardware: http://www.hpcwire.com/blogs/SSDs-Make-Entrance-into-HPC-Finally-57056287.html#28712424

Sounds like an interesting theoretical concept for $20 million, especially for throughput and shared access. It builds on common tiered proprietary servers where NAND flash is used as an extensive cache behind RAM but in front of disk. Multi-channel random reads/writes are much faster and have higher throughput this way, although they look to focus on maximizing large-file read ops. 245 TF is nothing special performance-wise for a mid-2011 supercomputer, though. That's below what ORNL's Jaguar with Barcelona was doing in 2008. The current reigning leader is ORNL's Jaguar at 2.35 PF, a figure said to double in 2010, and an IBM/US government collaboration slated for early 2012 is projected at 20 PF. So this future development is on a much smaller scale at 245 TF.
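The tiered arrangement described above (DRAM in front of flash, flash in front of disk) can be sketched as a simple read path with promotion on miss. This is a toy illustration of the caching idea, not anything from Gordon's actual design:

```python
# Minimal sketch of a tiered read path: DRAM cache -> NAND flash cache
# -> disk backing store, promoting data up the hierarchy on each miss.
ram: dict = {}
flash: dict = {}
disk = {"block42": b"payload"}  # pretend disk-resident data

def read(key: str) -> bytes:
    if key in ram:              # fastest tier: DRAM
        return ram[key]
    if key in flash:            # middle tier: NAND flash
        ram[key] = flash[key]   # promote hot data into DRAM
        return ram[key]
    data = disk[key]            # slowest tier: spinning disk
    flash[key] = data           # populate the flash cache on a miss
    ram[key] = data
    return data

read("block42")  # first read comes from disk, warming both cache tiers
print("block42" in ram, "block42" in flash)
```

The win is that a second read of the same block never touches the disk tier, which is the bottleneck the Gordon design is attacking.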

Not much to comment on either. Let's first see some setup diagrams and hardware specs. There are almost no details, just vague performance projections. "Next-generation 2011 hardware" is extremely vague. I wonder how they know the performance and infrastructure characteristics of Intel's 2011 CPUs, SSDs, and DRAM; those aren't even finalized on paper yet.

SDSC and Appro are the first high-end site committing to Intel's flash drives, let alone in HPC. No major enterprise vendor is backing or supplying them, and the only one that did take them up has already dropped them, citing continued major problems and shortcomings. I'm not sure whether these guys have really made the right choice of flash vendor, or whether the choice was simply cost-constrained, since Intel is offering SSDs far cheaper per GB than the well-known enterprise leaders.