- What is a server?
- Serving up some history
- Client-side computing
- Client-server to the fore
- Server hardware on the market
- Server technical specifications
This buying guide provides a thorough overview of server hardware, with a focus on CPU-based, entry-level/workgroup devices from Intel and AMD. It is structured for first-time buyers with IT experience as well as for business managers, but experienced server buyers will also benefit from its coverage of the latest technologies on which these products are built.
What is a server?
A server is a networked computer, or an application on a networked computer, that is used to distribute information, programs or resources from a central point to other computers and resources on a network.
Server hardware is a basic component of business-quality IT infrastructure. Servers are dedicated, high-reliability computers that are generally used to support multi-user business applications (software).
These multi-user business applications are often referred to as server software.
For example, an application such as the Oracle Database runs on a [hardware] server. The combination is referred to as a database server. Another example is Microsoft Exchange: running on server hardware, this is referred to as an e-mail server.
Software server applications share one common attribute: they run specific functions on a network-accessed machine remote to the user, and serve the results up to the user across a network.
It's important to note that software server applications do not have to run on server hardware. However, the requirements of high availability and high reliability generally ensure that they do.
Serving up some history
Years ago most IT infrastructures were centred on the mainframe. These physically massive machines provided multi-user access to operators. The machines themselves ran multi-user mainframe operating systems, with user access provided through terminals (devices providing screen and keyboard access, generally via a direct connection to the mainframe).
While there were many advantages to mainframes - and many businesses still use them today - they had their drawbacks. The main problem was cost: as the use of IT in organizations grew, gaining adequate access to mainframe computing resources became a challenge, despite their multi-user capabilities.
This issue led to the rise of the minicomputer. The minicomputer could perform most of the functions of a mainframe at a lower cost. Minicomputers were still expensive; however, businesses could buy several of them instead of - or in addition to - mainframe infrastructure.
Minicomputers, like mainframes, ran multi-user operating systems and were accessed through terminals.
Mainframes and minicomputers needed trained operators and programmers, and most of the applications written for these systems were specific to each business's needs. The introduction of personal computers - particularly the IBM PC and Apple systems - brought business applications that could be used by a wide range of users without programming skills.
One of the key applications was the spreadsheet. Managers with no programming skills could use spreadsheets for ad hoc business modelling and reporting.
While PCs were expensive and unreliable when compared to the machines available today, they provided managers with dedicated access to computing resources. This led to the rapid adoption of PCs in business.
This proliferation of PCs drove the introduction of local area networks (LANs) across organizations. The driving force was business managers' need to share files. Networking also allowed some costs to be centralised: printers, which were still expensive at the time, could be shared amongst many users.
The common configuration at the time was to store user files on centralised, Intel-based server hardware - generally running Novell NetWare - rather than on client PCs. Printers and tape backup hardware were attached to these centralised servers.
At first, this centralised, Intel-based server hardware was just a PC with extra memory. However, over time this hardware took on features that added to reliability.
Client-server to the fore
PCs quickly gained more and more processing power, which led to the rise of client/server computing. This model is the essence of most recent computing infrastructure and has driven the market for server hardware.
File serving - as described earlier - is perhaps the simplest form of client/server computing. An example of this is a user running a spreadsheet on their client machine, but with the file stored on a network server. In this example of client/server computing, the client does most of the computation while the server essentially provides disk access and the computing resources to move the data off the server's disk and onto the network.
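The division of labour described above can be sketched in code. The following is a minimal, illustrative example - not any particular product's protocol - in which a server process does nothing but read a file from disk and push its bytes over the network, while the client performs the computation (here, totalling a one-column "spreadsheet"). The port handling, filename and CSV format are all invented for the demonstration.

```python
# Minimal file-serving sketch: the server only does disk and network I/O;
# the client does the computation. All names here are illustrative.
import os
import socket
import tempfile
import threading

def serve_file(directory, host="127.0.0.1", port=0):
    """Serve a single request: read the named file and send its bytes."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        with conn:
            name = conn.recv(1024).decode().strip()
            with open(os.path.join(directory, name), "rb") as f:
                conn.sendall(f.read())  # the server's job ends here
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]  # actual port chosen by the OS

def fetch_and_total(port, name):
    """Client side: fetch the 'spreadsheet', then compute locally."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(name.encode() + b"\n")
        data = b""
        while chunk := cli.recv(4096):
            data += chunk
    # The computation (summing the values) happens on the client.
    return sum(int(v) for v in data.decode().split(","))

# Demo: a one-column "spreadsheet" stored server-side.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "sales.csv"), "w") as f:
    f.write("10,20,30")
port = serve_file(workdir)
total = fetch_and_total(port, "sales.csv")
print(total)  # 60
```

Note that the server never parses the file: as in the file-serving model described above, it simply moves bytes from its disk onto the network.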
Software application servers generally place the emphasis on the server carrying out most of the computation. For example, a client-side application may request some information from a database server. The database server will perform the calculations required and provide the results back to the client application. The client application can then manipulate and present the results to the user, using the computational power of the client system.
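By contrast, a database server keeps the heavy computation on the server side. The sketch below uses Python's built-in sqlite3 module as a stand-in for a remote database server; the table, column names and query are invented for illustration. The "server" scans and aggregates the rows, and only the small computed result travels back to the "client", which then formats it for the user.

```python
# Sketch of the database-server pattern: the client sends a query, the
# server performs the computation, and only the result travels back.
# The 'orders' table and its data are invented for this illustration.
import sqlite3

def database_server(query):
    """Stands in for a remote database server: heavy work happens here."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (region TEXT, amount INTEGER)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [("east", 100), ("east", 250), ("west", 75)])
    # The server scans and aggregates; the client never sees raw rows.
    return db.execute(query).fetchall()

# Client side: request a computed result, then present it to the user.
rows = database_server(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region")
for region, total in rows:
    print(f"{region}: {total}")
```

Here the roles are reversed from file serving: the server's CPU does the aggregation, and the client spends its computational power only on presentation.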
Another common configuration is a server running the Windows Server operating system and Windows Terminal Server. Application software runs on the server, and clients access it through Terminal Server client software: the server transmits screen output to the client, and the client sends keyboard and mouse input back. In this configuration the server hardware runs the entire application for multiple users.