Antivirus software is frequently tested for performance, so picking a top product should be straightforward: Select the number-one vendor whose software kills off all of the evil things circulating on the Internet. You're good to go then, right? Not necessarily.
The increasing complexity of security software has vendors griping that current evaluations do not adequately test the other protective technologies built into their products.
Relations between vendors and testing organisations are generally cordial but occasionally tense when a product fails a test. Representatives in both camps agree that the testing regimes need to be overhauled to give consumers a more accurate view of how different products compare.
"I don't think anyone believes the tests as they are run now ... are an accurate reflection of how one product relates to the other," an antivirus engineer with Symantec, Mark Kennedy, said.
Representatives of Symantec, F-Secure and Panda Software agreed last month at the International Antivirus Testing Workshop in Reykjavik, Iceland, to design a new testing plan that would better reflect the capabilities of competing products.
They hoped all security vendors would agree on a new test that could be applied industrywide, Kennedy said. A preliminary plan should be drawn up by September.
One of the most common tests involves running a set of malicious software samples through a product's antivirus engine. The antivirus engine contains indicators, called signatures, that enable it to identify harmful software.
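As a toy illustration of the idea (the signature names and byte patterns below are invented for the example, not real definitions), signature-based detection amounts to searching a file's contents for known byte patterns:

```python
# Toy sketch of a signature-based scanner: the "engine" is just a table of
# known byte patterns, and a file is flagged if its contents contain any of
# them. All names and patterns here are invented for illustration.

SIGNATURES = {
    "Eicar-Test-File": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    "Example.Worm.A": b"\xde\xad\xbe\xef\x13\x37",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```

Real engines use far more sophisticated matching (wildcards, emulation, unpacking), but the core limitation is the same: a sample with no matching entry in the table goes undetected.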
But antivirus products have changed over the past couple of years, and many now include other ways of detecting and blocking malware, security lead system engineer for McAfee, Toralv Dirron, said.
Signature-based detection is important, but an explosion in the number of unique malicious software programs created by hackers is threatening its effectiveness. As a result, vendors have added overlapping defenses to catch malware.
Vendors are employing behavioral detection technology, which may identify a malicious program if it undertakes a suspicious action on a machine. A user may unwittingly download a malicious software program that is not detected through signatures. But if the program starts sending spam, the activity can be identified and halted.
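A minimal sketch of that spam-detection scenario, assuming one hypothetical heuristic (the port, window, and threshold values are invented for the example): a process that opens many outbound mail connections in a short window gets flagged even though no signature ever matched it.

```python
# Hypothetical behavioral heuristic: count a process's outbound SMTP
# connections in a sliding time window and flag it as a likely spammer
# once a threshold is exceeded. Threshold and window are invented values.

from collections import deque

SMTP_PORT = 25
WINDOW_SECONDS = 60
MAX_CONNECTIONS = 20  # invented threshold for the example

class SpamHeuristic:
    def __init__(self) -> None:
        self.events: deque[float] = deque()  # timestamps of recent SMTP connections

    def observe(self, timestamp: float, dest_port: int) -> bool:
        """Record one outbound connection; return True if the process
        now looks suspicious enough to halt."""
        if dest_port != SMTP_PORT:
            return False
        self.events.append(timestamp)
        # Drop connections that fell out of the sliding window.
        while self.events and timestamp - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        return len(self.events) > MAX_CONNECTIONS
```

The point of the sketch is the contrast with signature matching: the verdict depends on what the program does at runtime, not on what its bytes look like.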
A program can also be halted if it tries to exploit a buffer overflow vulnerability, in which a memory-handling error can be abused to run malicious code. Host-based intrusion-prevention systems, which can employ firewalls and packet-inspection techniques, can also stop attacks.
The ways in which a computer can be infected also make comprehensive testing complex. For example, users may infect their computers by opening malicious email attachments or visiting harmful websites designed to exploit known vulnerabilities in a Web browser.
The different modes of attack call for different defenses, all of which would need to be tested to arrive at an accurate ranking, analysts said.
By contrast, signature-based tests can take as little as five minutes. "This is a very basic test," said Andreas Marx of AV-Test.org, who wrote his master's degree thesis on antivirus testing. "It's easy, and it's cheap."
Other concerns remain over the sample sets of malicious software used: the age of the samples and the relative threat they pose on the Internet as they grow older. Security vendors also think tests should check how well security applications remove bad programs, a process that can affect a computer's performance.
For vendors, a failed test can be embarrassing, since the testing companies often issue news releases highlighting the latest results.
Testing companies make money in various ways. AV-Test.org is often commissioned by technology magazines such as PC World (a magazine owned by IDG). Virus Bulletin licenses its logo to companies for use in promotional material and publishes a monthly online magazine.
Earlier this month, Virus Bulletin announced that its latest round of testing produced some big-name failures including products from Kaspersky Lab and Grisoft SRO.
The company's VB100 tests antivirus engines against malware samples collected by the Wildlist Organisation International, a group of security researchers who collect and study malware. To pass the VB100, products must detect all samples.
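The pass criterion described above is all-or-nothing, which as a sketch (sample names invented) is simply a subset check:

```python
# The VB100 criterion as described: a product passes only if it detects
# every sample in the test set. Sample names are invented for illustration.

def vb100_pass(test_set: set[str], detected: set[str]) -> bool:
    """True only if every sample in the test set was detected."""
    return test_set <= detected  # subset check
```

An all-or-nothing criterion is why a single missing signature, as in the Kaspersky case below, is enough to fail the whole test.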
Kaspersky had briefly removed a signature for a worm from its product for "optimisation" purposes on the day of the test, senior research engineer for Kaspersky, Roel Schouwenberg, said in an email. The signature has since been put back, he said.
"Obviously, we would have rather passed than failed," Schouwenberg wrote. "Had the test been conducted a day earlier or a day later, we would have passed."
Similarly, F-Secure initially failed its test because of a technicality, but the failing rating was later reversed. All vendors are told after testing which samples they failed to detect, so most end up adding signatures to their products.
So what should a user do? A technical consultant for Virus Bulletin, John Hawes, cautioned that the signature-based tests were not enormously representative of the way things are in the real world.
But Hawes also noted that signature-based tests could indicate the reliability and consistency of a vendor's software. Virus Bulletin also writes reviews of AV suites that take into account aspects such as usability, which may be just as important as detection for consumers. The company is developing more advanced tests that will cover newer security technologies.
AV-Test.org was already performing more comprehensive tests, although these used between 30 and 50 malware samples, a far smaller set than the more than 600,000 samples used in signature-based tests, Marx said. Those tests might give a better indication of how a security software suite performs.
At a bare minimum, though, users should install some security software, as computers without it could face high risks, Marx said. Several free suites were available that might be fine for light Internet use, he said.
Ironically, Marx doesn't use any antivirus software. That's because AV-Test.org collects malware for its testing, most of which comes through email from other researchers.
"I'm getting about 1000 viruses a day," he said. "It [antivirus software] would be counterproductive."