- Congratulations to Toni Creswell on her paper “Adversarial Training for Sketch Retrieval”, which will appear at an ECCV Workshop; a preprint is available on arXiv.
- Congratulations to Christoforos Charalambous on his paper “A data augmentation methodology for training machine/deep learning gait recognition algorithms”, which will appear at the British Machine Vision Conference.
- Also this month: “An assistive haptic interface for appearance-based indoor navigation” is available here.
Congratulations to Kai Arulkumaran and Nat Dilokthanakul on getting their paper accepted for the 2016 IJCAI Workshop on Deep Reinforcement Learning: Frontiers and Challenges. The paper integrates deep reinforcement learning and hierarchical reinforcement learning by using multiple deep Q-network heads to represent different option policies, with shared convolutional layers to learn common statistical relationships from raw visual inputs. The paper can be viewed on arXiv.
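The architecture described above can be illustrated with a minimal sketch. All sizes, names and the random-projection stand-in for the convolutional trunk are assumptions for illustration only; the paper's actual network, training procedure and hyperparameters differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- placeholders, not the paper's actual architecture.
FEATURE_DIM = 64   # output of the shared convolutional trunk
N_ACTIONS = 4      # primitive actions available to each option
N_OPTIONS = 3      # number of option policies, one Q-head each

# A fixed random projection stands in for the shared convolutional
# layers that map raw visual input to a common feature vector.
PROJ = rng.standard_normal((FEATURE_DIM, 32 * 32))

def shared_features(frame):
    """Common features reused by every head (shared-trunk stand-in)."""
    return np.tanh(PROJ @ frame.ravel())

# One linear Q-head per option, all reading the same shared features.
heads = [rng.standard_normal((N_ACTIONS, FEATURE_DIM))
         for _ in range(N_OPTIONS)]

def q_values(frame, option):
    """Q-values over primitive actions under the chosen option policy."""
    return heads[option] @ shared_features(frame)

# A higher-level controller would pick the option; each head then
# selects primitive actions greedily from its own Q-values.
frame = rng.standard_normal((32, 32))
for opt in range(N_OPTIONS):
    action = int(np.argmax(q_values(frame, opt)))
    print(f"option {opt}: greedy action {action}")
```

The point of the sketch is the sharing pattern: the expensive visual feature extraction is computed once, while each option keeps its own cheap Q-head on top.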
Congratulations to Jose and team on having their paper accepted for the CVIU Special Issue on computer vision for assistive devices. The paper examines the components of a prototype assistive system for navigation. We look at the accuracy of spatial localization using computer vision, comparing it with the ability of users to sense location through cues delivered by a haptic tablet.
Congratulations to Christoforos Charalambous (BICV Group), who won the prize for “Best Poster Presentation” at the 6th IET Conference on Imaging for Crime Detection and Prevention, held in London, 15-17 July 2015. This study – at the boundary between biomechanics and biometrics – investigates how viewing angle affects the performance of identity recognition using a person’s gait, particularly through the use of joint kinematics. Chris visualised how the accuracy of identity recognition varies with the spatial relationship between a security camera and the trajectory of a person captured using that camera. In addition, he showed how the accuracy of model-based gait recognition depends on the elevation angle of the camera, information which is likely to be useful for companies installing surveillance equipment. Chris’s prize was sponsored by the IEEE.
Our paper “Appearance-based indoor localization: A comparison of patch descriptor performance” is currently in press. We will post the open-access final version soon, but in the meantime you can download the author-accepted manuscript from arxiv.org: http://arxiv.org/pdf/1503.03514v1.pdf
This code implements dense descriptors as described in the ECCV paper “Spatio-chromatic Opponent Features” by Alexiou and Bharath. The code is written in MATLAB, but this release includes support for CUDA acceleration with an appropriate GPU. A preprint of the paper is available for download from the PDF link below: [wpdm_package id=’1277′]
and the code is available in the Zip package: [wpdm_package id=’1278′]
or from BitBucket.
We are pleased to announce that our paper “Associating locations from wearable cameras” by Jose Rivera-Rubio et al. has been accepted for BMVC 2014, Nottingham. BMVC is the fourth-highest-ranked conference in computer vision, with a rejection rate of 70%. The publication is closely related to the RSM dataset project, about which you can read more here.
We are pleased to announce that our paper “Spatio-Chromatic Opponent Features” by Ioannis Alexiou and Anil Bharath has been accepted for ECCV 2014, Zurich. ECCV is the third-highest-ranked conference in computer vision, with a rejection rate of 75%. The code is now available on BitBucket, and a PDF preprint is available below:
As promised, the Small Hand-held Object Recognition Test has been expanded from 30 to 100 categories, including a wider range of groceries, toiletries and other widely available products. This expansion will allow a better assessment of how well the evaluated algorithms generalise.
The expanded dataset can be browsed here, under the TRAINING-100 directory.
More uploads will follow over the summer: the evaluation code will finally be released, and a single download link will be created for the dataset.
UPDATE: The dataset page can be found here.