When we tested firewall performance as part of our UTM firewall test, we focused on how well the products would push inspected packets with other UTM features, specifically intrusion-prevention systems and antivirus, turned on. However, many enterprise managers will use these devices primarily as firewalls, and might be curious about how fast they'd run without UTM slowing them down.
Our initial test bed had been tuned for 1Gbps throughput, and eight of the 13 firewalls we tested blew right past the 1Gbps mark without UTM turned on. So, with the help of David Newman from Network Test, we outfitted the test bed with 2.8Gbps of capacity and re-ran the firewalls at that higher speed.
This second round of testing employed the same product configurations used for the 1Gbps UTM test, with two exceptions. WatchGuard and Secure Computing have long offered proxy-based firewalls, claiming higher security than simple packet filters, albeit at a cost in performance. WatchGuard's Firebox and Secure Computing's Sidewinder have the flexibility to handle HTTP traffic with either a simple packet filter, a generic proxy or an HTTP-specific proxy. Since our tests used HTTP traffic, we tested all three scenarios and report all three numbers for each product.
Tracking high-speed firewall performance
Follow-up firewall tests showed that when pushed to speeds faster than 2Gbps, the top raw performers are Crossbeam Systems and IBM. When cost is factored in, however, Juniper Networks' lower-end box and WatchGuard Technologies' Firebox Peak provide the best firewall price/performance punch.
Overall, we found that if you don't want to turn on any of the UTM features, you can get outstanding performance: more than half of the boxes we tested ran at better than gigabit speeds. Even better news is that some of those high-performance boxes (namely the Juniper SSG-520M and WatchGuard's Firebox Peak X8500e) are offered at what we'd almost call a great price.
The interesting twist is that the top performers in this test don't match one-to-one with the top performers on our slower test bed. For example, the top-scoring device in our UTM test was the Juniper ISG-1000, yet on a price-per-megabit-of-throughput basis from this second round of testing, the ISG-1000 falls only into the middle of the pack. Instead, IT outfits looking for raw bandwidth to handle a gigabit link with power to spare will want to look at either the WatchGuard Firebox Peak X8500e (which costs just more than US$20,000 and yields 1340Mbps throughput) or the Juniper SSG-520M (which costs US$24,600 and yields 1420Mbps throughput), either of which is one-fourth as expensive as the ISG-1000 on a price-for-bandwidth basis.
Two of the firewalls, from IBM and Crossbeam, were still faster than our test bed could go (that is, 2800Mbps). But those are also among the most expensive offerings we tested, coming in at just less than US$70,000 and US$100,000, respectively.
In some cases, our numbers came out below the advertised specifications for the firewalls we tested. This can happen for a number of reasons. For example, we discussed the FortiGate 3600A (which costs US$121,790 and yielded 1240Mbps throughput) with the company's engineers because its performance was much lower than the advertised specifications. They helped us tune the firewall, and explained that their specifications are based on streams of UDP packets running over a single connection at maximum packet size -- a test that will definitely give the highest performance number for a firewall.
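To illustrate the price-per-megabit comparison running through these results, here's a minimal sketch using the prices and throughput figures quoted above. Figures marked approximate are quoted in the text only as "just more than" or "just less than" a round number, and the 2800Mbps entries reflect our test bed's ceiling, not the devices' maximums; this is our own illustration of the arithmetic, not the reviewers' formal methodology.

```python
# Price-per-megabit sketch using figures quoted in the article.
# "~" prices are approximate; 2800Mbps entries hit the test-bed ceiling.
firewalls = {
    "WatchGuard Firebox Peak X8500e": (20_000, 1340),   # ~US$20,000
    "Juniper SSG-520M":               (24_600, 1420),
    "Fortinet FortiGate 3600A":       (121_790, 1240),
    "IBM":                            (70_000, 2800),   # ~US$70,000, test-bed limit
    "Crossbeam":                      (100_000, 2800),  # ~US$100,000, test-bed limit
}

# Sort from best (cheapest) to worst price per megabit of throughput.
for name, (price_usd, mbps) in sorted(firewalls.items(),
                                      key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: ${price_usd / mbps:.2f} per Mbps")
```

On these numbers the Firebox Peak X8500e comes out around US$15 per Mbps and the SSG-520M around US$17, while the FortiGate 3600A lands near US$98 per Mbps, which is why the cheaper boxes win on price/performance even though the big boxes win on raw speed.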
There's nothing wrong with using those kinds of tests, but this practice (common among security product specification sheets) means that you need to be extra-careful when using these products in your own network. Because firewalls (and IPSes and anti-virus scanners) are very sensitive to the type of traffic you send through them, normal specifications you find on a two-page glossy brochure won't tell you very much about how the product will behave in your own network. A key strategy for high-speed products such as these is to test them using your own traffic to find out what performance you're going to get. Most vendors also have other performance tests that they can furnish, usually under non-disclosure agreements, which show a greater spectrum of types of tests and traffic loads.
Another reason our test results might be lower than published specifications is that we still had a number of "additional" features turned on for each device. Our tests were done using high-availability pairs, usually in an active/passive configuration, and high availability has overhead of its own. We also had network address translation, dynamic routing and logging turned on. Having logging enabled is an abrupt about-face from our test methodology of 10 years ago; now, it's reasonable to assume that any enterprise firewall is sending logs off to a security information manager or log server of some sort, for forensics and compliance reasons.