High-performance computing: Muscle in the middle
21 October, 2004 09:20
It's easy to get the impression that the midrange Unix server is like a boxer near the end of a career -- still packing a wallop, just no longer throwing knockout left hooks. But the truth is, new processor architectures, multi-core systems, and the rise of x86 chips are giving these systems extra punch, and users plenty to think about as they plan their purchasing for the next few years.
Server makers have been rethinking the way they can improve computer performance, and re-evaluating the role played by commodity systems in environments that require high-speed computation. And while clusters of inexpensive Linux machines may be one of the most talked-about technologies in scientific computing, traditional sellers of "midrange" systems, with anywhere from eight to 32 processors, are incorporating new designs and adding products at a pace that shows they have no intention of abandoning the high-performance market -- especially the life sciences market.
Perhaps the most significant change in computer design is the industrywide switch to dual-core processors. After years of boosting processing power by cramming more transistors onto chips, cranking up processor clock speeds, or cleverly tinkering with the machine language instructions used by microprocessors, chip makers are now adding a second core, or processing unit, to their microprocessor designs.
IBM, the leading vendor in the midrange space, according to figures from the Gartner research firm, was the first vendor to bring dual-core designs to the midrange with the 2002 launch of its eServer p670, which was based on the Power4 processor. Since then, Big Blue has delivered two Power upgrades, and is now shipping a pair of midrange systems based on its state-of-the-art Power5 microchip, introduced in July: the eServer p5 570, which can contain as many as 16 Power5 processors, and a slimmed-down version of the 570, called the p5 570 Express, which ships with as many as eight processors.
Hewlett-Packard has exited the business of designing microprocessors, having pinned its midrange hopes on Intel's Itanium chip. This August, HP shipped the last-ever update to its Alpha line of processors, once one of the crown jewels of Digital Equipment Corp. and later Compaq, and the company has said it will stop development of its PA-RISC line of processors in 2005, thus abandoning its traditional RISC architecture for Itanium.
HP currently ships two HP 9000 systems, based on its latest generation of PA-RISC, the dual-core PA-8800. They are the rp8420, which can house as many as 32 processors, and the 16-processor rp7420. The company also sells eight-way and 16-way systems based on the older PA-8700 processor.
The last major iteration of the PA-RISC architecture, the PA-8900, is due out in early 2005, around the same time that HP is expected to upgrade its Itanium-based Integrity product line with new servers based on the dual-core Itanium processor, code-named Montecito. HP currently ships two midrange Itanium systems, the 32-way Integrity rx8620 and the 16-way rx7620.
Silicon Graphics too has adopted the Itanium chip. The SGI Altix 350, which supports up to 16 Itanium 2 processors, is designed for technical databases and cluster applications, putting it squarely in the life science HPC environment. The system can run 64-bit Linux programs, and SGI says there are about 100 technical applications tailored to the Altix architecture. When the system was introduced earlier this year, IDC research vice president Christopher Willard cited its price as a competitive advantage in the departmental server market: a four-CPU configuration sells for about US$21,000, or roughly US$5,000 per processor.
Sun Microsystems has been shipping dual-core systems since early 2004. And while Sun has balked at supporting Intel's Itanium, the company has overcome its aversion to commodity processors and begun shipping low-end systems based on Intel's Xeon and Advanced Micro Devices Inc.'s Opteron processors. In fact, Sun has become one of Opteron's strongest supporters, and is currently building workstations, blade systems, and two- and four-processor systems around the chip. The company has an eight-way Opteron server under development as well.
Last spring, Sun abruptly scrapped plans for its UltraSparc V microprocessor and decided instead to base the bulk of its server product line on an upcoming version of Fujitsu's Sparc64 microprocessor, code-named Olympus. Unlike HP, however, Sun did not exit the processor design business altogether. It is working on a number of radical new chip designs for what it calls "throughput computing" systems that, while unproven, could form the basis of a whole new class of servers, quite different from the machines Sun sells today.
The first of these new systems, which will be based on a processor code-named Niagara, are expected in 2006. Niagara systems will be designed for network-intensive tasks such as Web serving or traffic encrypting.
The processor most likely to have an effect on midrange servers is code-named Rock. When Rock systems begin to emerge in 2008, Sun says, they will feature an extremely large number of processor cores, and one Rock chip will be able to do the work of many of the company's current UltraSparcs.
"The number of processors that are required in the system drops dramatically," says Andy Ingram, vice president of marketing for Sun's Scalable Systems Group. A 6U (10.5 inches high) Rock system could theoretically replace Sun's current top-of-the-line, 72-processor Sun Fire E25K, which stands 75 inches tall today, Ingram says.
But analysts say it will be five or six years before Rock-based systems have any impact on the marketplace. In the meantime, the big question is: Will Unix variants such as Sun's Solaris maintain their hold on the midrange market, or cede it to low-cost commodity systems based on 64-bit x86 processors from Intel and AMD?
"I do see an interesting battle looming," says Vernon Turner, group vice president for global enterprise server solutions at research firm IDC. "Can the existing incumbents in 64-bit architectures hold on to the independent software vendors? ... If they don't do that, there could be a migration to a platform that has a considerable cost advantage."
In the Laboratory
The low price of x86 servers is not the only cost that users need to consider, says David Fenstermacher, director of biomedical informatics at the Abramson Cancer Center at the University of Pennsylvania. "We look not just at upfront costs, but also recurring costs," he says. "The amount of administration that it takes to keep a cluster (of x86 systems) up rather than an SMP (symmetric multiprocessing) server is much more," he adds.
Large Unix systems are also superior for handling I/O-intensive tasks -- for example, when hundreds or even thousands of users want to search a particular database using different algorithms, Fenstermacher says.
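Fenstermacher's shared-memory point can be sketched in a few lines. The snippet below is a hypothetical Python illustration (the dataset, user names, patterns, and thread count are invented for the example, not drawn from the article): every query thread scans the same in-memory data directly, where a cluster would first have to partition or copy that data across nodes.

```python
import threading

# One dataset held once in shared memory; every thread sees it directly.
records = [f"seq-{i}" for i in range(10_000)]

results = {}
lock = threading.Lock()  # serialize writes to the shared results dict

def search(user, pattern):
    # Each "user" runs its own query against the same shared data;
    # nothing is copied or shipped over a network.
    hits = [r for r in records if pattern in r]
    with lock:
        results[user] = len(hits)

users = [("alice", "-42"), ("bob", "-999"), ("carol", "-1")]
threads = [threading.Thread(target=search, args=u) for u in users]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

On an SMP box, each of those threads can run on its own processor while reading one copy of the database, which is why this model suits the many-users, many-algorithms workload Fenstermacher describes.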
The Abramson Center uses Sun Fire V1280 and V880 systems to run a number of databases, as well as laboratory and information management systems. These machines can be dynamically reconfigured to add or remove processors from specific applications, and many of the other bioinformatics systems with which Abramson must exchange information run on the same Unix architectures, which makes Unix a logical choice for Fenstermacher.
Many computer users in the life sciences are simply more comfortable and better trained in Unix, which has been the platform of choice for years, Fenstermacher says. "The people who do this are Unix-based, and they want Unix systems."
"It might not be the best reason to still be in the domain, especially when it comes to initial cost," he says, "but I get long-term value out of it."
Another reason for Unix's strength in the high-performance environment is the relationship that Unix vendors have with commercial database suppliers. "People just generally have a lot of comfort in the 22-plus-year history of the relationships that Sun and Oracle have, and they're not willing to move the family jewels" from Oracle and Solaris, says Loralyn Mears, global market development manager for life sciences with Sun.
And though midrange x86 systems are not nearly as prevalent as two-way and four-way boxes, commodity systems are starting to make an impact on the high-performance market. IBM and Unisys both ship 32-processor Xeon systems, and Gartner estimates that 8,000 x86 systems with between eight and 24 processors shipped during the second quarter of 2004, up from 6,500 during the same period in 2003.
Gartner reckons that just over 18,300 RISC systems in this category shipped in the second quarter of 2004. Only 550 Itanium midrange systems in this class shipped during the same period, the research firm estimates.
But the biggest threat to Unix midrange systems in the life sciences is not another less expensive version of a midrange server built on commodity parts. It's the move to clustered computing. "I'm not sure that SMP servers are really going to be viable much longer," Fenstermacher says. "Linux and Intel had a lot of problems early on, but they've solved a lot of the reliability problems with their servers."
With Oracle now committed to supporting its database on clustered Intel boxes, and with Linux constantly improving its technical credentials, the move to Intel-based high-performance systems is gaining steam, says Michael Swenson, research manager with Life Science Insights. "The momentum is shifting to Linux and commodity platforms. Even applications that not that long ago were the domain of a single platform, many of those have now ported to Linux."
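The cluster model Swenson and Fenstermacher describe works the other way around. Below is a hypothetical Python sketch (the shard count and dataset are invented for the example): the data is split into shards, each of which would live on a separate node in a real cluster, and per-node partial results are merged at the end.

```python
# The same kind of dataset, now partitioned instead of shared.
records = [f"seq-{i}" for i in range(10_000)]

# Split the data into four shards; on a real cluster each shard would
# sit on its own node rather than in one process's memory.
shards = [records[i::4] for i in range(4)]

# "Map" step: each node counts matches in its own shard only.
partial = [sum(1 for r in shard if "-42" in r) for shard in shards]

# "Reduce" step: combine the per-node counts into one answer.
total = sum(partial)
```

The shape of the sketch mirrors the trade-off the article's sources weigh: clusters scale out on cheap commodity nodes, but every job now involves partitioning data and merging results across machines, part of why I/O-heavy, many-user database work has stayed on SMP systems.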
Fenstermacher agrees: "Clusters are the way of the future, both for enterprises and parallel computing."