Networking's greatest debates in the Data Center

All-time classic debates include Mac vs. PC, tape storage vs. disk storage and AMD vs. Intel

NAS vs. SAN

The argument over humble, file-serving network-attached storage and the data-intensive storage-area network is a popular topic for the less-than-storage-savvy industry pundit. But a battle between NAS and SAN was never meant to be. NAS grew out of the NetWare and Microsoft file servers used in the 1980s to provide network clients with access to files. Network Appliance first commercialized the concept of a NAS appliance, which would serve up files and be based on a stripped-down network operating system.

NAS has been adopted by legions of Network Appliance users and bunches of Windows users for hosting Microsoft Exchange and SQL Server. It is used by millions of others for storing their file-oriented data.

SANs were adopted to partition storage traffic from the rest of the LAN and, by doing so, speed up transaction-intensive databases, ERP and CRM systems.

"Separating ownership of a server from its storage and placing all the storage devices directly on a Fibre Channel network allows a many-to-many connection from servers to storage, and from storage to other storage devices. This approach grants the benefits of traditional networking to storage devices, such as increased scalability, availability and performance," consultant Barb Goldworm once wrote in a Network World newsletter.

SAN backers say the technology is best when performance is paramount for business-critical applications such as databases, ERP or CRM systems. NAS, they say, is hindered by performance concerns. Talk to some NAS users, though, and they will say that using NAS to host business-critical applications has never been a problem.
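
Underlying the debate is a simple architectural difference: a NAS appliance serves files over the LAN, while a SAN presents raw blocks that a host treats like local disk. The sketch below is not from the article and uses hypothetical paths ("/mnt/nas_share/orders.csv", "/dev/sdb"); it only illustrates how that difference looks from an application's point of view.

    import os

    # NAS: the appliance speaks a file protocol (NFS/CIFS); the client sees a
    # mounted filesystem and uses ordinary file semantics.
    with open("/mnt/nas_share/orders.csv", "r") as f:
        header = f.readline()

    # SAN: the array presents a LUN over Fibre Channel (or iSCSI); the host sees
    # a block device, and a local filesystem or database manages the blocks.
    fd = os.open("/dev/sdb", os.O_RDONLY)
    first_block = os.read(fd, 512)   # read the first 512-byte block directly
    os.close(fd)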

A recent test of a BlueArc NAS system showed more than 192,900 operations per second. -Deni Connor

Fibre Channel vs. iSCSI

"Fibre Channel is dead." That was the controversial conclusion of one participant in a heated debate at an industry conference in 2000. Industry vendors were investigating a new protocol -- storage over IP -- that they said would replace the then-dominant Fibre Channel. That newfangled transport protocol, which allowed storage traffic to flow across the Gigabit Ethernet network, would become iSCSI -- and it would be implemented by people familiar with Ethernet networking, but not with the more complicated and expensive Fibre Channel.

Spin forward seven years and the battle between Fibre Channel and iSCSI is now passe. Fibre Channel isn't dead -- it's still the dominant storage protocol -- and iSCSI is being implemented at an increasing rate. According to IDC, iSCSI commands just 3 percent of the market for external disk storage systems (with Fibre Channel accounting for the rest), but the research firm expects that share to increase to 21 percent by 2010. Now the two technologies even coexist in the same network.

Fibre Channel is being used in enterprises to host transaction- and data-intensive operations because of its performance and its assured delivery of data. iSCSI, an inexpensive technology that runs on top of Gigabit Ethernet, is being used by organizations that lack dedicated, storage-savvy IT personnel -- small and midsize businesses and departments within the enterprise -- to host midrange business-critical applications that don't require Fibre Channel's blazing performance.
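
Part of iSCSI's appeal to Ethernet-savvy shops is that the transport is ordinary TCP/IP: an initiator reaches a target over the well-known iSCSI port (3260) on the existing LAN, with no Fibre Channel host bus adapter or fabric required. Here is a minimal sketch, not from the article, with a hypothetical target address; a real initiator would follow the connection with an iSCSI login exchange.

    import socket

    # Hypothetical iSCSI target on the Gigabit Ethernet LAN; 3260 is the
    # IANA-registered iSCSI port.
    TARGET = ("192.168.1.50", 3260)

    with socket.create_connection(TARGET, timeout=5) as sock:
        # A real initiator would now send an iSCSI Login Request PDU and
        # negotiate a session; SCSI commands then travel as iSCSI PDUs over
        # this same TCP stream.
        print("Reached iSCSI target port at", sock.getpeername())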

Today, the industry is vetting iSCSI to run on 10Gbps Ethernet, where it can take advantage of TCP offload, remote DMA and I/O virtualization capabilities. Research firm Dell'Oro Group sized the 10Gbps Ethernet switch market at 100,000 port shipments in the fourth quarter of 2006 with revenue of US$1 billion. As 10Gbps Ethernet continues to grow, there may be no way to stop iSCSI's market momentum.

Fibre Channel, on the other hand, may at some time be replaced by the proposed Fibre Channel over Ethernet (FCoE), a technology that relies on the lossless, enhanced Ethernet specification. This technology, which layers Fibre Channel over Ethernet, will be attractive to companies that want to operate storage and networking on a converged network. FCoE products are expected to be available from Cisco, Brocade, Network Appliance, Nuova Systems, Emulex and QLogic sometime in 2009. -Deni Connor


Macs vs. PCs

Macs belong in design firms, art departments, schools and the like, but never in enterprise data centers; PCs belong in the corporate enterprise. Enough said?

No, probably not, especially if you talk to those who use Macs. When the Macintosh was introduced in 1984, Apple brought the graphical user interface to the mass market. The IBM PC, which debuted in 1981, used an interface more familiar to users of the time -- one based on ASCII text.

At the Mac's introduction during the 1984 Super Bowl, Apple ran a commercial directed by Ridley Scott in which a hammer-throwing heroine smashed the screen of a Big Brother-like figure -- an attempt to inspire legions of people to switch from PCs to what Apple positioned as the more user-friendly Mac. After the drama of its introduction, sales of the US$2,500 Macintosh trickled in. By September 1985, some 20 months later, only 500,000 Macs had been sold.

Some said that Apple's focus was on the wrong area -- its Macintosh, which shipped with MacWrite and MacPaint to show off its GUI, wasn't a magnet for application developers. Applications for the Mac needed to be completely rewritten, and except for a handful of independent software vendors (ISVs), the porting didn't happen.

"When Apple brought in a Macintosh to show it to us, I asked: Where are the business applications such as VisiCalc and a database?" says Jim Bagley, formerly vice president of marketing for Radix in Salt Lake City. Creative types - advertising agencies and video professionals -- remained the low hanging fruit for Apple, inspired by programs such as PageMaker, PhotoShop and Macromedia's Director and FreeHand, which first worked on the Mac.

The IBM PC, on the other hand, enticed ISVs -- companies wrote business applications for it. One of the first was the Lotus 1-2-3 spreadsheet in 1983. The program didn't become available on the Mac until 1991.

By 2006, even Apple had thrown in the towel on competing with the PC's hardware -- it adopted Intel CPUs and made computers that could run Windows. -Deni Connor

Micro Channel vs. EISA and PCI

It was the best of times, it was the worst of times for IBM, the developer of the proprietary, yet oh-so-capable Micro Channel (MCA) bus. Created in 1987 for use in its PS/2 personal computers, the 16- or 32-bit bus was designed to overcome the limitations of the ISA bus, which suffered from slow speed, limited interrupts and a lack of bus-mastering support. IBM had the misfortune of butting heads with a bus developed in 1989 by its competitors -- the EISA bus, which was backward-compatible with older ISA expansion cards and also offered bus-mastering support.

The IBM competitors, the so-called Gang of Nine -- AST Research, Compaq Computer, Epson, HP, NEC, Olivetti, Tandy, Wyse and Zenith Data Systems -- balked at IBM's proprietary architecture and refused to license it for use in their servers. The gang prevailed: its EISA design, used in clone PCs, soon won out.

Walt Thirion, formerly CTO for Level One Communications of Sacramento, remembers the Micro Channel/EISA bus wars and does not want them repeated. As president and CEO of Thomas-Conrad, Thirion had to manufacture network interface adapters for both EISA and Micro Channel computers. Ask him whether building adapters to two different bus specifications was a burden, and Thirion, like the CEOs of Standard Microsystems and 3Com, would have said, "Hell, yes, it was a pain for little return."

The Gang of Nine wasn't the only group that riled IBM. The Music Corporation of America, then a powerhouse in the music and entertainment business, filed suit claiming rights to the MCA acronym. IBM uncharacteristically withdrew its use of the acronym -- going forward, the Micro Channel bus would be known as just that.

In 1996, IBM caved in to the EISA bus backers when it introduced computers that used the technology. In spite of the spat between IBM and the rest of the PC industry, PCs continued to use the EISA bus until the introduction of PCI. Today, neither servers nor desktop PCs use MCA (whoops -- Micro Channel) or EISA; they all use PCI and its successor, PCI Express. -Deni Connor


DEC vs. IBM

During the 1980s you'd be hard-pressed to find a better rivalry than DEC vs. IBM. The companies tried to wear each other out in what was then called the midrange server market. The DEC VAX, rolled out in 1977, was a legend, but the IBM System/36s and /38s were no slouches. Big Blue morphed those successful servers into its VAX killer, the AS/400, in 1988, and by 1994, 250,000 of them had been sold. DEC eventually tried to counter with its Alpha chip-based line of advanced servers, but by the early 1990s Ken Olsen's engineering company was in trouble.

Meanwhile, the companies competed with their network technologies as well -- DEC's DECnet and IBM's Systems Network Architecture, both introduced in the mid-1970s. While the general perception at the time was that DECnet held sway with the techie folks and IBM's SNA went after the business side of the house, that distinction was ultimately lost amid the onrushing industry clamor for less-proprietary technology, namely TCP/IP.

The combination of bad results and industry acceptance of IP ultimately knocked DEC out, and it was sold to Compaq in 1998 for US$9.6 billion. IBM fared only a little better in the network arena, and SNA ultimately wilted. IBM sold good portions of its networking business in 1999 to a newer, more ravenous rival: Cisco. -Michael Cooney

Distributed vs. centralized

As projects go, running a data center ranks right up there with the big kahunas. So it may come as no surprise that how to run one -- or many -- most cost-effectively raises an argument or two.

These days the move is on to consolidate data centers. Going from a decentralized approach, in which each data center stood on its own, to a structure that allows consolidation of resources, simplified disaster recovery and centralized management has been a big affair.

Many IT departments have struggled with the decentralized approach -- shuttling data from one location to another, managing diverse backups and applications, sustaining the costs of operating multiple data centers instead of one. They've deployed Wide Area File Services (WAFS) technology to speed data shuffling between remote offices and the data center.

They've deployed WAN acceleration products. And they've consolidated application servers from several locations into one; in doing so, they've freed countless IT administrators from server management and set them loose on other IT investments.

Some might argue that these tools -- WAFS and WAN acceleration -- do nothing for consolidation; they simply allow a decentralized infrastructure to persist. But the tide is turning. Companies are starting to consolidate far-flung data center operations. HP, for one, has been the darling of data center consolidation in the past year. The company hopes to reduce the number of servers it uses by 30% and increase its processing capabilities by 80%.

IBM, too, recently announced it would consolidate nearly 4,000 small servers in six locations onto about 30 refrigerator-sized mainframes running Linux, saving US$250 million in the process.

Other companies, such as ENglobal in Houston, have improved their disaster recovery and business continuity. In ENglobal's case, consolidating six data centers to two -- a primary data center and a backup -- has simplified disaster recovery: the company now has to replicate data between only two locations.

And there are other factors: high-density computing systems that use server and storage virtualization, along with energy-efficient power and cooling systems, make the case for consolidated data centers -- and the job of managing them -- easier. -Deni Connor


Tape storage vs. disk storage

Many prognosticators saw the emergence of fast disk-based storage systems in the early 2000s as the beginning of the end for methodical tape-based storage, but tape celebrated its 50th birthday in 2003 and still looks to be in relatively good health. It might not be the first choice for primary storage at many businesses, but tape is typically the final resting place for loads of archived data because its cost is relatively low and it can be used to store data offsite.

"Most secondary and all tertiary storage functions and utilities, such as disk backup, transporting of large data databases and data archiving, are ... best performed on tape," says a recent study from Freeman Reports. "Accordingly, tape subsystems usually accompany secondary disk subsystems to provide an optimum solution."

Not that tape is exactly thriving even in that role. Freeman estimated that tape library revenue dropped more than 15% from 2005 to 2006.

Disk's ability to back up and recover data faster, at only a slight premium over tape, has made it the preferable way to protect data. "The cost per megabyte of magnetic disk storage continues to fall, resulting in a perception that disk storage is closing the price gap with tape storage," says Freeman Reports. Some have even raised the question of whether tape really is less expensive than disk.

A recent report from the Enterprise Strategy Group shows that 21% of respondents backed up data to disk only, 51% backed up to disk and then to tape, and 29% backed up to tape only.

Adding insult to injury for tape suppliers, newfangled virtual tape systems are actually disk-based systems that emulate robotic tape libraries, enabling customers to stick with a consistent data management scheme while taking advantage of disk's speed. -Deni Connor

AMD vs. Intel

Advanced Micro Devices deployed Darth Vader and a platoon of Storm Troopers to greet visitors to a Barcelona launch event at Lucasfilm in September, but it was Intel that was assigned the role of the "Evil Empire."

AMD, long the oppressed rebel force in the chip industry, managed to launch an attack on the Intel Death Star with the introduction of its 64-bit Opteron processors in 2003. Opteron ran 64-bit applications and legacy 32-bit applications without the performance drag that 32-bit code suffered on Intel's Itanium processors. AMD upped the ante further in 2005 with the introduction of its first dual-core Opteron processors, which roughly doubled the performance of single-core Opterons.

The first chink in Intel's armor appeared in the second quarter of that year when, as Mercury Research reported, Intel's market share slipped to 82.5%, from 82.8% in the year-ago quarter, while AMD's inched up to 15.7% from 15.6%.

AMD further provoked Intel by running a newspaper ad challenging Intel to a processor duel, using the image of an AMD chip in a boxing ring. AMD's share rose to 25.3% in the fourth quarter of 2006, while Intel's fell to 74.4%. Intel, while perhaps surprised, didn't take long to retaliate. Intel (2006 revenue: US$35 billion) financed a price war with AMD (US$5.6 billion) that pushed AMD into a pool of red ink; the smaller company lost US$2.1 billion over the past four quarters.

AMD also fought back with a gavel, suing Intel in 2005 in U.S. District Court on antitrust grounds, a suit that's still pending.

But Intel also matched AMD on the product side, introducing a dual-core Xeon processor in 2005, and it regained the upper hand with its first quad-core Xeon in early 2007. AMD hastened to point out that all Intel did to make a quad-core was squeeze two dual-core chips into a single package. AMD introduced its "native" quad-core Barcelona at that Lucasfilm event September 10.

On the eve of the Barcelona launch, Bruce Shaw, AMD's director of server and workstation product marketing, said AMD may be battle-weary but is still in the fight: "If you look at the market as a whole it's hard not to wax poetic about [how] we've brought competition to the market just by being here." -Robert Mullins


VMware vs. Xen vs. Microsoft

VMware owns the lion's share of the virtualization market in terms of both revenue (the company's revenue doubled in 2006 to US$709 million) and mindshare (its annual user show in San Francisco this year drew 10,000 attendees). To top it off, the company also had a huge IPO this summer.

Yet open source Xen, which doesn't even reach single-digit share in any analyst's projections, and XenSource, its commercial backer, may still have a chance in the corporate market. And don't overlook Microsoft's Windows Server Virtualization offering either.

"Both have been talking up their plans and efforts for months, as well as their proposed superiority over competing solutions - namely VMware. In 2007, customers will finally be able to tell for themselves," says Charles King, analyst at Pund-IT Research.

VMware has been around longer than Xen. VMware Workstation was first released in 1999; Xen was developed in 2003 and made available as an open source project. The products are similar in that both are backed by large, successful companies -- EMC, which acquired VMware in 2003, and Citrix, which acquired XenSource.

VMware has been so successful that nearly 80 independent software and hardware companies have partnered with it and developed products that work with its software. But XenSource isn't all that far behind, having signed 63 partners in its short existence.

"Xen won't have the maturity of VMware in 2007, but it might be a cheaper alternative, if that's a major consideration," Gordon Haff, an analyst at Illuminata, told Network World earlier this year.

Not to be counted out in this virtualization battle is Microsoft. Its Windows Server Virtualization -- a.k.a. Viridian -- technology is set to ship with Windows Server 2008. Although Microsoft's current virtualization product commands only 7% of the market now, when the new technology ships you can expect companies to add it to their arsenals quickly because, well, Microsoft is Microsoft. -Deni Connor
