Visual Search

The technology of visual search is still in its infancy. How well a system scales with database size must be balanced against both the computational cost of the search, including indexing time, and the relevance of the results.

One generation of visual search technology was spun out from our group in 2008; Cortexica Vision Systems provides cloud- and GPU-based visual search technology to a range of vertical markets, and has deployed its search technology internationally.

Toni Creswell has also had great success using adversarial networks (in this case, pairs of convolutional networks) that are trained by competing with each other. A "side-effect" of this training is a good generative model that can be applied to a range of tasks. One of these is retrieval, which turns out to be particularly useful when labels are limited and the visual structure of the images differs from that of large labelled datasets (e.g. ImageNet). Potential applications include retrieval of sketches and of cases within medical image data.
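The adversarial set-up can be illustrated in miniature. The sketch below is a deliberately simplified toy, not the method used in the work above: it uses NumPy with a one-dimensional linear generator and a logistic-regression discriminator (rather than pairs of convolutional networks), trained against each other so that the generator learns to mimic samples drawn from N(3, 0.5):

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples from a Gaussian with mean 3.0
    return rng.normal(3.0, 0.5, n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: x = a*z + b with z ~ N(0, 1); starts far from the real data
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.02
for step in range(4000):
    # --- discriminator update: minimise -log D(real) - log(1 - D(fake)) ---
    xr = real_batch(64)
    z = rng.standard_normal(64)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    gw = np.mean(-(1 - dr) * xr + df * xf)   # manual gradient w.r.t. w
    gc = np.mean(-(1 - dr) + df)             # manual gradient w.r.t. c
    w -= lr * gw
    c -= lr * gc

    # --- generator update: minimise -log D(fake) (non-saturating loss) ---
    z = rng.standard_normal(64)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    gx = -(1 - df) * w                        # gradient w.r.t. each fake sample
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# After training, generated samples should centre near the real mean (3.0)
samples = a * rng.standard_normal(1000) + b
```

The "side-effect" mentioned above is visible here: the training signal is only "real vs. fake", yet the generator ends up modelling the data distribution; in the convolutional case, the discriminator's intermediate features can then serve as descriptors for retrieval.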

Meanwhile, the BICV Group continues to look at visual search in several contexts, including the "classic" Pascal VOC retrieval tests and ImageNet. We have also introduced the SHORT database, which poses its own particular challenges:

  • images vary dramatically in quality between database and query;
  • query images come from hand-held cameras (camera phones) and hand-held objects;
  • we include data that is representative of searches used in assistive contexts.
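Whatever features are used, hand-crafted or learned, benchmark retrieval of this kind typically reduces to ranking database descriptors by similarity to a query descriptor. A minimal sketch with NumPy, using random vectors as stand-ins for real image descriptors (the function name and dimensions are illustrative, not from the datasets above):

```python
import numpy as np

def retrieve(query, database, k=3):
    """Rank database descriptors by cosine similarity to the query.

    Returns the indices of the top-k matches and their scores.
    """
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity of each row to q
    order = np.argsort(-sims)[:k]      # highest similarity first
    return order, sims[order]

rng = np.random.default_rng(1)
db = rng.standard_normal((100, 64))                 # 100 stand-in descriptors
query = db[42] + 0.05 * rng.standard_normal(64)     # noisy copy of item 42
idx, scores = retrieve(query, db)
```

A query that is a mildly corrupted copy of a database item (mimicking the database/query quality gap noted above) should still rank that item first.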

For more information, see the SHORT Dataset page.