
Supercomputers coming soon to an office near you

Supercomputers are to ordinary servers as race cars are to street vehicles. Burst buffering and cognitive applications are examples of tech that will be in the data centers or desktops of tomorrow.


Supercomputer manufacturers and their clients are thinking more than ever about ways to make high-performance technology innovations useful for ordinary corporate servers.

Just as specialty race car technology evolves downstream to the practical cars in your local showroom (consider anti-lock brakes, paddle shifters, and safety cages), so too are high-performance computing (HPC) features such as burst memory, cognitive applications, and graphics processing units gradually becoming part of mainstream enterprise data centers.

"I definitely see HPC being a harbinger or on the bleeding edge of where enterprises want to be," said Dean Hildebrand, a technical advisor at Google and chair of the International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems. The workshop met last week at the Supercomputing '17 conference in Denver.

"Google does not use supercomputers. Google is a supercomputer in itself, but it doesn't buy IBM or Cray, everything is designed in-house," Hildebrand explained. Burst buffering is among the methods used in their virtual supercomputer, in order to load an application into very fast storage such as non-volatile memory or flash, and then immediately move it back onto conventional hard drives when it's no longer needed.

"No one wants to run their applications directly out of spinning disks anymore. They want the latency and the IOPS that SSD or NVME can provide, but they don't want to pay for it. Supercomputing has been struggling with this for almost a decade," Hildebrand said. Regular enterprise servers have small amounts of burst capability but generally not enough to run an entire data set, and that will probably change in the next few years, he noted.


But there's also an important lesson supercomputer developers can learn from their enterprise cousins: ease-of-use. An ordinary corporate server can be bootstrapped in a day, while a typical supercomputer can take months or even years to hum along, Hildebrand said. "A lot of times the supercomputer you're using is the biggest system that's ever been deployed. Nobody's ever tested it at that scale because there's nothing to test it on."

IBM supercomputers may not yet be plug-and-play either, but they're definitely being built with a focus on real-world applications, said Dave Turek, IBM's vice president of technical computing. His staff at Big Blue works on a US Department of Energy project called CORAL, named for the collaboration of the Oak Ridge, Argonne, and Lawrence Livermore national laboratories, which, in conjunction with the National Nuclear Security Administration, seeks to produce an exascale computer by the year 2021. The fire-breathing monster would be capable of crunching 10^18 calculations per second (that's a billion billion, for the mathematically disinclined).
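To put that figure in perspective, here's a back-of-the-envelope comparison; the 100-gigaflop laptop number is an illustrative assumption, not a benchmark:

```python
EXASCALE_OPS = 10**18   # one exaflop: a billion billion operations per second
LAPTOP_OPS = 10**11     # assumed ~100 gigaflops for a decent laptop

speedup = EXASCALE_OPS // LAPTOP_OPS
print(f"speedup: {speedup:,}x")        # 10,000,000x

# Work the exascale machine finishes in one second would keep
# that laptop busy for roughly four months:
print(f"{speedup / 86_400:.0f} days")  # ~116 days
```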

To say that's fast is an understatement, but Turek insisted Big Blue isn't in it for the numbers. He gave a resounding "No" when asked whether his group has a particular ranking in mind for the industry's twice-yearly TOP500 list comparing raw processing power. "TOP500 is based on a particular benchmark. That benchmark will teach you nothing about cognitive computing. It may not teach you anything about your own applications," he said. "The CORAL system was the first major contract that ignored as a goal getting to a particular number."

Instead, "CORAL was initially inspired by, 'Let's build the fastest supercomputer in the world and solve problems that can't be solved in ordinary circumstances,' and to do that we had to invent technology," Turek said. For example, a cognitive program running on a server a decade from now could determine corporate policies, identify unauthorized building visitors, and perform e-discovery for your legal department. It could do all that while continuously evaluating data for changes and tweaking its results on-the-fly, as opposed to relying on constant human approval, Turek said.

(IBM's current Watson system makes similar, and often criticized, claims; an exascale computer, however, would attack such problems with many times more brainpower. E-discovery is one example of an existing corporate, as opposed to purely scientific, machine learning application: it is still viewed with skepticism by all but the most technophilic attorneys, yet it's gradually gaining acceptance in federal courts.)

"The technical foundations are based on a lot of algorithmic work coming out of HPC," Turek noted. High-speed file collaboration and auditing among geographically diverse workers is another supercomputer application that will trickle down to your average data center rack, added IBM's Alex Chen, an expert in software-defined storage.


It's not just servers benefiting from supercomputer research. The technology also finds its way into high-end workstations, also known as personal supercomputers, which use largely the same data-processing designs originally made for expensive gaming rigs. These machines now follow full-scale supercomputers to the same types of customers: universities, scientific-minded corporations, and three-letter government agencies. "And front companies for three-letter agencies," added Psychsoftpc founder Tim Lynch, who said he's not at liberty to specify any of them. His systems start at $8,500, practically free compared to the $430 million invested in exascale research so far.


Image: According to the TOP500, IBM's Sequoia is currently #6 on the list of the world's fastest supercomputers. (Photo: Lawrence Livermore National Laboratory)


