Generative adversarial training involves training a competing pair of networks: a generator and a discriminator. The discriminator is trained to distinguish real samples, drawn from the training data, from samples produced (or “imagined”) by the generator. One objective, then, is to train the discriminator to correctly classify each sample as coming from the real world or from the generator. The generator has the opposing objective: to produce samples so similar to the training data that the discriminator misclassifies them as real. A full explanation of adversarial training can be found here.
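The opposing objectives described above can be sketched as two loss functions. This is a minimal illustration, assuming a discriminator with sigmoid outputs and the standard binary cross-entropy formulation; the function names are our own, not from any particular library:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy with real samples labelled 1 and
    # generated samples labelled 0: the discriminator wants
    # d_real -> 1 and d_fake -> 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The (non-saturating) generator objective: push the
    # discriminator's score on generated samples, d_fake, towards 1,
    # i.e. fool it into classifying fakes as real.
    return -np.mean(np.log(d_fake))
```

Note that the two losses pull `d_fake` in opposite directions: a confident discriminator (low `d_fake`) means a large generator loss, and vice versa, which is the source of the competition.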
We are developing methods that use adversarial training to learn representations of unlabelled data, and we are applying these to image retrieval, investigating ways to improve adversarial training for retrieval tasks. We recently published work on applying adversarial training to the retrieval of sketches [Creswell et al., 2016].
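Once a representation has been learned (for example, from an intermediate layer of the discriminator), retrieval typically reduces to nearest-neighbour search in that feature space. The following is a hypothetical sketch of that final step, assuming feature vectors are already extracted; it is not the method of the cited paper:

```python
import numpy as np

def retrieve(query_feat, gallery_feats, k=3):
    # Rank gallery items by cosine similarity to the query embedding
    # and return the indices of the k closest matches.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)[:k]
```

Cosine similarity is a common choice here because it discards vector magnitude, which in learned embeddings often reflects factors other than content.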
We are also looking at applying adversarial training to sparse yet complex datasets in order to generate novel data samples. For instance, using the Omniglot dataset of hand-written characters, we can train a GAN to imagine new characters.