SC13: Top 500 shows growing inequality in supercomputing power

The top supercomputers are getting faster while development of midlevel systems is starting to stagnate

Jack Dongarra, creator of the Linpack benchmark and one of the organizers of the Top500 ranking of supercomputers.

Supercomputing power is being concentrated in a smaller number of machines, according to the latest Top 500 list of high-performance computers. Keepers of the list are uncertain how to parse that trend.

The first 17 entrants in the latest supercomputer ranking produce half of all the supercomputing power on the list, which totaled over 250 petaflop/s (quadrillions of calculations per second), noted Erich Strohmaier, an organizer of the twice-yearly Top500 ranking of the world's most powerful supercomputers, speaking at a Tuesday evening panel at the SC13 supercomputing conference.

The first-place entrant alone, the Chinese Tianhe-2 system, accounted for 33.86 petaflop/s.
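As a rough illustration of how that kind of concentration claim is tallied, the following Python sketch counts how many of the largest systems it takes to reach half of a list's combined Linpack performance. It is not the Top500 organizers' own tooling, and the petaflop/s figures in it are placeholders rather than the real November 2013 list.

def systems_for_half(pflops):
    # Count how many of the largest systems are needed to reach 50%
    # of the combined Linpack performance.
    ranked = sorted(pflops, reverse=True)
    target = sum(ranked) / 2.0
    running = 0.0
    for count, perf in enumerate(ranked, start=1):
        running += perf
        if running >= target:
            return count
    return len(ranked)

# Illustrative figures only: a few very large systems and a long, flat tail.
toy_scores = [33.86, 17.59, 17.17] + [1.0] * 497
print(systems_for_half(toy_scores))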

"The list has become very top heavy in the last couple of years," Strohmaier said. "In the last five years, we have seen a drastic concentration of performance capabilities in large centers."

The organizers of the Top500, however, are unsure whether the trend bodes ill for supercomputing in general. Does it signal an overall decline, or a concentration of supercomputing's investigative power among fewer government agencies and large companies?

"We don't know what it actually means," said Horst Simon of Lawrence Berkeley National Laboratory, one of the organizers of the Top 500. "But it is important to exhibit the trend and have a discussion."

To characterize the depth of this "anomaly," as Strohmaier called the trend, he used a measure of statistical dispersion called the Gini coefficient, which quantifies how unevenly a resource is distributed. The Gini coefficient, often used to measure the wealth distribution of nations, ranges from 0, where the resource is spread evenly among all holders, to 1, where a single party holds everything.

The list scored a Gini coefficient of 0.6, which is quite high, Simon noted. By way of comparison, if the Top500 supercomputers were a nation, its computing power would be more unequally distributed than the wealth of all but a few countries today. Simon jokingly called it "the rich-getting-richer phenomenon of supercomputing."
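For readers who want to see how such a figure is computed, here is a minimal Python sketch using the standard weighted-rank formula for the Gini coefficient. It is an illustration only, with placeholder performance values, not the calculation Strohmaier ran on the actual list.

def gini(values):
    # Gini coefficient via the weighted-rank formula:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted ascending.
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# A deliberately top-heavy toy distribution: a handful of very large systems
# and a long tail of small ones (placeholder petaflop/s values).
toy_list = [33.86, 17.59, 17.17, 10.51, 8.59] + [0.1] * 495
print(round(gini(toy_list), 2))  # the closer to 1, the more concentrated the distribution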

Drilling further into the numbers, Strohmaier found no major differences between the buying habits of governments and industry: both are purchasing fewer midsized systems and concentrating their spending on a smaller number of very large machines.

The trend could be problematic because, with fewer but larger systems, the pool of administrators and engineers skilled in running high-performance computers might shrink over time. On the other hand, it may be benign, since most of the largest systems are shared among many users, such as researchers from across a nation's universities.

One member of the panel's audience, Alfred Uhlherr, a research scientist at Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO), pointed to another possible factor.

A number of organizations he knows of, both governmental and industrial, decline to participate in the Top500, knowing that their systems would not rank high on the list. Nations such as China, or companies such as IBM, can generate positive publicity by being positioned near the top; for entrants that would land in the bottom reaches, the benefits of appearing on the list may not be worth the effort.

Not helping in this regard is the sometimes laborious Linpack benchmark that supercomputers are required to run to be considered for the voluntary Top 500.

For instance, the Sequoia machine at the US Department of Energy's Lawrence Livermore National Laboratory, which ranked third on the current list at 17 petaflop/s, had to run Linpack for more than 23 hours to produce its result, noted Jack Dongarra, another of the list's curators and a co-creator of Linpack.

That night, Dongarra suggested that Linpack, created in the 1970s, is no longer the best metric for estimating supercomputer performance. He championed a new metric he also helped create, the High Performance Conjugate Gradients (HPCG) benchmark.

"In the 1990s, Linpack performance was more correlated with the kinds of applications that were run on the high performance systems. But over time that correlation has changed. There is mismatch now between what the benchmark is reporting and what we are seeing from applications," Dongarra said.

Nonetheless, many conference attendees still find the Linpack-driven Top500 useful. CSIRO's Uhlherr said his organization studies the list closely, not so much for the Linpack ratings as to observe which industries, such as energy companies, are using supercomputers, as a way of ensuring Australia stays competitive in those fields.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson.
