The Research School of Information Sciences and Engineering at the Australian National University is working on a project called "Where's the Weet-Bix?" to help people efficiently locate items in a supermarket by means of visual image retrieval.
"We took about 3000 pictures of shelves in a Coles supermarket in Canberra, and the idea is you query this database to find whatever it is that you are looking for, wherever it is in that supermarket," said Professor Hartley.
A real-world implementation of visual image retrieval in the supermarket is still some way off, but the significance of the project lies in the proof of concept that features of images themselves can form the basis of search queries.
Hartley said that when searching for images, search engines such as Google look for surrounding key words associated with the imagery, rather than features within the image itself.
"Our ultimate goal is to do this in a purely image-based way. We all know how Google works by searching for various key words in documents or Web pages and extracting those keywords out of the documents. But what about images? What are the key words of an image?"
Hartley explained that salient regions of an image that are prototypical (easy to define and to recognize) can be used as 'keywords' to define that image.
An image of a particular shelf in the supermarket is split into separate parts, each described by a Scale-Invariant Feature Transform (SIFT) descriptor, which represents local gradient direction and strength.
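In stripped-down form, the "gradient direction and strength" idea behind a SIFT descriptor can be sketched as an orientation histogram over an image patch. This is a minimal illustration of the principle, not the full SIFT algorithm (which also handles scale selection, keypoint detection and spatial binning); the function name and the toy patch are invented for the example:

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Histogram of gradient directions, weighted by gradient strength,
    over one image patch -- the core idea behind a SIFT descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)               # gradient strength
    angle = np.arctan2(gy, gx) % (2 * np.pi)   # gradient direction in [0, 2*pi)
    hist, _ = np.histogram(angle, bins=bins, range=(0, 2 * np.pi),
                           weights=magnitude)
    # Normalise so the descriptor is robust to overall contrast changes
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A toy 8x8 patch with a horizontal intensity ramp: all gradient energy
# falls into the bin for the "rightward" direction.
patch = np.tile(np.arange(8, dtype=float), (8, 1))
hist = orientation_histogram(patch)
```

Because the histogram is built from gradients rather than raw pixel values, two photos of the same cereal box under different lighting produce similar descriptors.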
In the case of the supermarket shelf photos, each 2272 by 1704 pixels, every image was broken up into about 20,000 individually identifiable features; across the database of 3000 images this translates to 52 million features.
"So you take all of these features and you put them into a database. Then the idea is you cluster these into classes, each called a visual keyword. And a visual dictionary is a collection of all these classes of visual keywords," Hartley explained.
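The clustering step Hartley describes is typically done with an algorithm like k-means: every cluster centre becomes one visual keyword, and the set of centres is the visual dictionary. A minimal sketch under that assumption (the function name, the naive initialisation and the toy 2-D "descriptors" are illustrative, not the ANU implementation):

```python
import numpy as np

def build_visual_dictionary(descriptors, k, iters=20):
    """Cluster feature descriptors into k classes ('visual keywords')
    with plain k-means; the k centres together form the dictionary."""
    # Naive initialisation: k evenly spaced descriptors as starting centres
    step = max(1, len(descriptors) // k)
    centres = descriptors[::step][:k].astype(float)
    for _ in range(iters):
        # Assign every descriptor to its nearest centre ...
        dists = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # ... then move each centre to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres, labels

# Toy data: two well-separated blobs of 2-D "descriptors"
rng = np.random.default_rng(1)
descs = np.vstack([rng.normal(0, 0.1, (50, 2)),
                   rng.normal(5, 0.1, (50, 2))])
dictionary, labels = build_visual_dictionary(descs, k=2)
```

Real SIFT descriptors are 128-dimensional and a dictionary built from 52 million of them needs a far more scalable clustering scheme, but the principle is the same.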
Each image is then described by which visual keywords it contains and how often each one occurs, giving every particular image its own characteristic fingerprint.
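That per-image fingerprint is, in effect, a histogram: assign each of the image's descriptors to its nearest dictionary entry and count the occurrences of each keyword. A small sketch of that counting step, with an invented 3-word dictionary for illustration:

```python
import numpy as np

def keyword_histogram(descriptors, dictionary):
    """Describe one image by how often each visual keyword occurs:
    map every descriptor to its nearest dictionary entry and count."""
    dists = np.linalg.norm(descriptors[:, None, :] - dictionary[None, :, :],
                           axis=2)
    nearest = dists.argmin(axis=1)
    return np.bincount(nearest, minlength=len(dictionary))

# A 3-word dictionary and an image whose 5 features land on words 0, 0, 1, 2, 2
dictionary = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
descs = np.array([[0.1, 0.2], [-0.1, 0.0], [9.8, 0.3], [0.2, 9.9], [0.1, 10.2]])
hist = keyword_histogram(descs, dictionary)
# hist counts the keyword occurrences -- the image's "fingerprint"
```

This mirrors how text search engines describe a document by its word counts, which is exactly the analogy Hartley draws.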
So, say you have just eaten the last Weet-Bix but don't know where to find more: you can take a photo of the empty box, query the 52-million-feature database in a matter of milliseconds, and be directed to the location of your favourite cereal.
The database presents the top 20 images matching the source query.
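That top-20 ranking step can be sketched by comparing keyword histograms, for instance with cosine similarity, a common choice for bag-of-words retrieval. The function and the toy three-image database below are illustrative assumptions, not the ANU system:

```python
import numpy as np

def top_matches(query_hist, db_hists, top_k=20):
    """Rank database images against the query by cosine similarity of
    their visual-keyword histograms; return the best top_k indices."""
    q = query_hist / np.linalg.norm(query_hist)
    db = db_hists / np.linalg.norm(db_hists, axis=1, keepdims=True)
    scores = db @ q                      # cosine similarity per image
    return np.argsort(scores)[::-1][:top_k]

# Three database "images" (keyword histograms) and a query that
# nearly matches image 1
db = np.array([[5.0, 1.0, 0.0],
               [1.0, 6.0, 2.0],
               [0.0, 0.0, 7.0]])
query = np.array([1.0, 5.0, 2.0])
ranking = top_matches(query, db, top_k=3)
# ranking[0] is image 1: the closest shelf photo comes back first
```

With 3000 images the whole similarity pass stays small, which is consistent with the millisecond query times the project reports.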
Professor Hartley explained that recognition of location is another area where this kind of image retrieval technology could be applied.
"If you are in a foreign city, I was just in Tokyo for example, and I was unable to read the street signs."
"It would be very nice if I could take out my mobile phone, take a picture of the street sign, and it would be able to tell me where I am and how to get where I want to go. The technology is not quite there yet but we're working towards it," he said.
Despite the promise the project holds, Yuhang Zhang noted in his paper that fewer than half of the first retrieved images were correct, indicating that "more work on the verification of matches and the consideration of feature dependency needs to be explored to further boost the retrieval accuracy".
(Andrew Hendry was a guest at the ANU's Research School of Information Sciences and Engineering)