If you've watched "The Lord of the Rings" or many other visual effects-driven Hollywood films, you have seen Massive Software's 3-D animation software at work.
The New Zealand company's technology is the reason why the hordes of orcs and warriors who cross swords in the screen versions of J.R.R. Tolkien's classic tales do so in such a remarkably lifelike, non-uniform way.
The software allows developers and designers to give 3-D characters -- dubbed "agents" in Massive's parlance -- the ability to react to their surroundings based on factors including sight, touch and hearing. When scaled into a crowd, the agents interact with each other, creating a more realistic result.
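The sense-driven behavior described above can be sketched in a few lines. This is an illustrative toy, not Massive's actual software: an agent with a simple "sight" sense steers away from the nearest visible neighbor, and when many such agents run together the interactions produce non-uniform motion. All names and parameters here are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    heading: float = 0.0  # radians

    def can_see(self, other, fov=math.pi / 2, max_dist=10.0):
        # "Sight": the other agent must be within range and field of view.
        dx, dy = other.x - self.x, other.y - self.y
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > max_dist:
            return False
        angle = math.atan2(dy, dx)
        diff = abs((angle - self.heading + math.pi) % (2 * math.pi) - math.pi)
        return diff <= fov / 2

    def step(self, crowd):
        # React to the nearest visible neighbor by steering away slightly;
        # scaled to a crowd, each agent's path depends on everyone else's.
        visible = [a for a in crowd if a is not self and self.can_see(a)]
        if visible:
            nearest = min(visible,
                          key=lambda a: math.hypot(a.x - self.x, a.y - self.y))
            away = math.atan2(self.y - nearest.y, self.x - nearest.x)
            self.heading += 0.1 * math.sin(away - self.heading)
        self.x += 0.5 * math.cos(self.heading)
        self.y += 0.5 * math.sin(self.heading)
```

Touch and hearing could be modeled the same way, as additional predicates feeding the same steering step.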
The company is showcasing its software at Cebit in Hanover, Germany, this week. The appearance at Cebit represents the company's "soft launch" into new potential areas of business, which for now include engineering, architecture and robotics, according to CEO Diane Holland.
Massive sees the software being used for the design of safer buildings, disaster scenarios, traffic and municipal planning, and possibly for scientific research into the behavior of species.
At heart, the software is an "AI authoring system," Holland said. "There is no actual, wired-in AI that comes in Massive. It's a blank canvas, because we wanted to use it for any kind of character and didn't want it to be constrained whatsoever by some predetermined AI."
A number of the program's tools center on 3-D design tasks such as the characters' appearance, but the agents' AI (artificial intelligence) characteristics are fine-tuned through a node-based interface called a "Brain Editor."
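A node-based brain of this kind can be approximated in a few lines. This sketch is an assumption about the general idea, not Massive's Brain Editor: each node takes named sensory inputs or the outputs of other nodes, and the wiring of nodes, rather than hard-coded rules, determines behavior.

```python
# Each node applies a function to its inputs, which are either sense names
# (looked up in a dict) or other nodes (evaluated recursively).
class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def evaluate(self, senses):
        args = [n.evaluate(senses) if isinstance(n, Node) else senses[n]
                for n in self.inputs]
        return self.fn(*args)

# Wire a tiny brain: flee only if a threat is both seen and heard clearly.
seen = Node(lambda v: v > 0.5, "sight")
heard = Node(lambda v: v > 0.3, "hearing")
should_flee = Node(lambda a, b: a and b, seen, heard)
```

Because the graph is data rather than code, a designer could rewire it without touching the simulator, which is the appeal of a "blank canvas" authoring system.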
Massive also employs fuzzy logic to make characters' reactions more natural "than the on/off robotic results of binary logic," the company's Web site notes, as well as "rigid body dynamics," which the site describes as "a physics-based approach to facilitating realistic stunt motion such as falling, animation of accessories, and projectiles."
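The contrast with "on/off robotic" binary logic is easy to show. In this minimal sketch (an illustration of fuzzy logic in general, not Massive's implementation; the thresholds are assumptions), an agent's response to a threat scales smoothly with distance instead of switching abruptly.

```python
def fuzzy_near(distance, near=2.0, far=10.0):
    """Degree of membership in the fuzzy set "near": 1.0 when very close,
    0.0 when far away, linearly interpolated in between."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)

def flee_speed(distance, max_speed=3.0):
    # The agent's speed scales with its degree of "nearness to threat" --
    # a graded response rather than a binary near/far switch.
    return max_speed * fuzzy_near(distance)
```

With binary logic the agent would stand still at 10.1 meters and sprint at 9.9; with the fuzzy rule its speed ramps up gradually as the threat approaches.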
Massive claims it can run simulations using hundreds of thousands of simple agents. "Large numbers of more complex agents, such as typical humanoid agents, can be done in multiple passes, by simulating agents in groups of about 10,000 to 20,000, with each subsequent group able to see and react to the previously simulated groups," its site states.
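The multi-pass scheme the site describes can be sketched as a batching loop. This is an assumed structure, not Massive's code: agents are simulated in fixed-size groups, and each group is given read access to the trajectories recorded by earlier groups.

```python
def simulate_in_passes(agents, batch_size=10_000, steps=100):
    finished = []  # trajectories from earlier passes, visible to later ones
    for start in range(0, len(agents), batch_size):
        batch = agents[start:start + batch_size]
        trajectories = simulate_batch(batch, context=finished, steps=steps)
        finished.extend(trajectories)
    return finished

def simulate_batch(batch, context, steps):
    # Placeholder: a real simulator would step each agent through time,
    # letting it react to agents in `context` as pre-recorded motion.
    return [{"agent": a, "path": [a]} for a in batch]
```

The trade-off is that earlier groups cannot react to later ones, which is why this works for very large crowds but is described as a fallback for complex agents rather than the default.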
Anticipating its push into new verticals, the company added a "memory" capability to its software, Holland said. Agents running within a short film or TV clip don't need to recollect much, she said, "but when we're talking about a 45-minute fire-evacuation scenario, you've got to have them be able to remember 'what was the exit I came in, where did I come from, what sign did I see,' and that sort of thing."
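A memory capability like the one Holland describes could be as simple as an ordered event log per agent. This is a hypothetical design for illustration, not Massive's API: the agent records what it observes and can later recall the most recent observation of a given kind, such as the last exit sign it saw.

```python
class AgentMemory:
    def __init__(self):
        self.events = []  # (time, kind, detail), in order of observation

    def remember(self, time, kind, detail):
        self.events.append((time, kind, detail))

    def recall_last(self, kind):
        # Most recent memory of a given kind, e.g. the last sign seen.
        for time, k, detail in reversed(self.events):
            if k == kind:
                return detail
        return None
```

In a long evacuation scenario, an agent could consult `recall_last("entrance")` to head back toward the door it came in through.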
While Holland stressed the extensibility of the Massive platform, the company sees a need for some prebuilt agents. It will have a "pedestrian" model ready for market within a year, she said.
"We realized that architects don't have the time to build all these characters and put them in," she said. "They can certainly modify it, but we've got some basic behaviors that are already built in."
There's no telling how many markets Massive's technology could ultimately serve, according to Holland. "If you can accurately simulate what we as human beings think and do, [the possibilities are] absolutely endless."
For example, someone planning a grocery store could build a simulation using the behavior and motion patterns of retail customers to determine how best to place certain products on the shelves, she said.
In the meantime, the software is helping power a robot, Zeno, now in development as a consumer product.