14 papers with code • 2 benchmarks • 4 datasets
We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences and extra images to surpass state-of-the-art performance on core computer vision tasks.
One focuses on second-order spatial information to increase the performance of image descriptors, both local and global.
In this paper, we propose to learn a high-performance descriptor in Euclidean space via a Convolutional Neural Network (CNN).
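Descriptors learned "in Euclidean space" are typically trained with a metric-learning objective so that matching patches end up close and non-matching patches far apart. A minimal numpy sketch of one common such objective, a triplet margin loss, is shown below; this is illustrative only, and the exact loss used in the paper may differ.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss that pulls matching descriptor pairs together and
    pushes non-matching pairs at least `margin` apart in Euclidean space.

    Each argument is an (N, D) batch of descriptors.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)   # anchor-positive distances
    d_neg = np.linalg.norm(anchor - negative, axis=1)   # anchor-negative distances
    return np.maximum(0.0, margin + d_pos - d_neg).mean()

# Toy batch: anchors coincide with positives, negatives are far away,
# so the margin is already satisfied and the loss is zero.
a = np.zeros((2, 4))
p = np.zeros((2, 4))
n = np.ones((2, 4))
print(triplet_loss(a, p, n))  # → 0.0
```

In training, the loss gradient would be back-propagated through the CNN that produces the descriptors; here the descriptors are just fixed arrays.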
Instead of supervising the network with ground truth sketches, we first perform patch matching in feature space between the input photo and photos in a small reference set of photo-sketch pairs.
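The patch matching step described above amounts to a nearest-neighbour search in feature space. A hedged numpy sketch of the brute-force version (function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def match_patches(query_feats, ref_feats):
    """For each query patch feature, return the index of the nearest
    reference patch feature under Euclidean distance (brute force).

    query_feats: (Nq, D) features of patches from the input photo.
    ref_feats:   (Nr, D) features of patches from the reference photos.
    """
    # Broadcast (Nq, 1, D) against (1, Nr, D) -> (Nq, Nr) distance matrix.
    d = np.linalg.norm(query_feats[:, None, :] - ref_feats[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Toy example: each query matches the closest reference feature.
q = np.array([[0.0, 0.0], [1.0, 1.0]])
r = np.array([[1.1, 1.0], [0.1, 0.0]])
print(match_patches(q, r))  # → [1 0]
```

The matched indices would then be used to transfer the corresponding sketch patches from the reference pairs, which is how supervision is obtained without ground-truth sketches for the input photo.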
Ranked #1 on Face Sketch Synthesis on CUHK
Recent works show that local descriptor learning benefits from the use of L2 normalisation; however, an in-depth analysis of this effect is lacking in the literature.
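A small numpy sketch of what L2 normalisation does: it projects each descriptor onto the unit hypersphere, after which squared Euclidean distance and cosine similarity induce the same ranking (||a − b||² = 2 − 2·a·b for unit vectors). This is a generic illustration, not code from any of the listed papers.

```python
import numpy as np

def l2_normalize(desc, eps=1e-8):
    """Scale each descriptor row to unit L2 norm (eps avoids div-by-zero)."""
    return desc / (np.linalg.norm(desc, axis=1, keepdims=True) + eps)

rng = np.random.default_rng(0)
a = l2_normalize(rng.normal(size=(5, 128)))
b = l2_normalize(rng.normal(size=(5, 128)))

# For unit-norm vectors, squared Euclidean distance is an affine
# function of cosine similarity, so the two give identical rankings.
sq_dist = np.sum((a - b) ** 2, axis=1)
cos_sim = np.sum(a * b, axis=1)
print(np.allclose(sq_dist, 2 - 2 * cos_sim))  # → True
```

This equivalence is one reason L2-normalised descriptors pair naturally with Euclidean nearest-neighbour matching.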
We address the person re-identification problem by effectively exploiting a globally discriminative feature representation from a sequence of tracked human regions/patches.