Our paper “Appearance-based indoor localization: A comparison of patch descriptor performance” is currently in press. We will post the open-access final version soon, but in the meantime you can download the author accepted manuscript from arxiv.org: http://arxiv.org/pdf/1503.03514v1.pdf
As part of our commitment to both open source and reproducible research, BICV has started utilising Docker to create development environments for research into machine learning and computer vision. Each environment bundles an operating system, libraries and application software, with OS and library dependencies already resolved. From a practical perspective, this reduces the compilation headaches that many users struggle with daily. From a scientific perspective, it allows code to be run in reproducible environments.
A Dockerfile automates the building of a Docker image. By making our Dockerfiles publicly available on our Bitbucket repository, we not only make image creation transparent, but also allow further Docker images to be built on top of existing ones.
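To illustrate, a minimal Dockerfile for a computer-vision environment might look like the following. Note that the base image and package list here are purely illustrative assumptions, not BICV's actual configuration:

```dockerfile
# Illustrative sketch only: the base image and packages below are
# assumptions for a generic computer-vision stack, not the actual
# BICV environment.
FROM ubuntu:14.04

# Install OS-level build tools and vision/numerics libraries,
# pinning dependency resolution inside the image
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    git \
    libopencv-dev \
    python-numpy

WORKDIR /workspace
CMD ["/bin/bash"]
```

Building this with, say, `docker build -t myimage .` produces an image in which the OS and library versions are fixed, so the same code runs identically on any host; a further Dockerfile can then start `FROM myimage` to layer project-specific dependencies on top.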
This code implements dense descriptors as described in the ECCV paper “Spatio-chromatic Opponent Features” by Alexiou and Bharath. The code is written in Matlab, and this release includes support for CUDA acceleration on an appropriate GPU. A pre-print of the paper is available for download from the PDF link below:
and the code is available in the Zip package:
or from BitBucket.
We are pleased to announce that our paper “Associating locations from wearable cameras” by Jose Rivera-Rubio et al. has been accepted for BMVC 2014, Nottingham. BMVC is the fourth highest ranked conference in computer vision, with a rejection rate of 70%. The publication is closely related to the RSM dataset project, so you can read more here.
We are pleased to announce that our paper “Spatio-Chromatic Opponent Features” by Ioannis Alexiou and Anil Bharath has been accepted for ECCV 2014, Zurich. ECCV is the third highest ranked conference in computer vision, with a rejection rate of 75%. The code is now available on BitBucket, and a PDF preprint is available below:
As promised, the Small Hand-held Object Recognition Test has been expanded from 30 to 100 categories, including a wider range of groceries, toiletries and other widely available products. This expansion will allow a better assessment of how well the evaluated algorithms generalise.
The expanded dataset can be browsed here, under the TRAINING-100 directory.
More uploads will follow over the summer: the evaluation code will finally be released, and a single download link will be created for the dataset.
UPDATE: The dataset page can be found here.
Congratulations to Jose on starting his internship at The MathWorks in Cambridge, UK! He’ll be back in London in October to finish off his PhD.
The paper “A dataset for hand-held object recognitio