Multi-view metric learning for multi-instance image classification

21 Oct 2016 · Dewei Li, Yingjie Tian

Image classification is an important task that supports image retrieval and recognition, object detection, and related applications. In this paper, contributions are made on three fronts to accomplish the task. First, visual features are extracted with a bag-of-words representation, rather than a single vector, to characterize each image. To improve performance, the idea of multi-view learning is adopted and three kinds of features are provided, each corresponding to a single view; the information from the three views is complementary and can be unified. Second, a new distance function is designed for bags by computing a weighted sum of the distances between instances. Metric learning is employed to construct a data-dependent distance metric that measures the relationships between instances, and hence between bags and images, more accurately. Third, a novel approach, called MVML, is proposed, which maximizes the joint probability that every image is similar to its nearest image. MVML learns multiple distance metrics, one per view, to unify the information from multiple views. The model is solved by iterative alternating optimization, using gradient ascent and positive semi-definite projection in each iteration. Distance comparisons verify that the new bag distance function is superior to previous functions. In model evaluation, numerical experiments show that MVML with multiple views outperforms the single-view setting, demonstrating that the model assembles complementary information efficiently and measures the distance between images more precisely. Experiments on the influence of parameters and of the instance number validate the consistency of the method.
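
The abstract describes these components only at a high level, so the following Python sketch is a minimal illustration of the pieces it names, not the authors' implementation. The softmin weighting inside bag_distance, the summed combination of per-view distances, the learning rate, and all function names are assumptions made for illustration; the gradient of the paper's joint-probability objective is not reproduced.

import numpy as np

def mahalanobis_sq(x, y, M):
    # Squared Mahalanobis distance (x - y)^T M (x - y) between two instances.
    d = x - y
    return float(d @ M @ d)

def bag_distance(bag_a, bag_b, M):
    # Weighted sum of pairwise instance distances between two bags.
    # Hypothetical weighting: a softmin, so nearer instance pairs
    # dominate; the paper's exact weighting scheme may differ.
    dists = np.array([[mahalanobis_sq(x, y, M) for y in bag_b] for x in bag_a])
    weights = np.exp(-dists)
    weights /= weights.sum()
    return float((weights * dists).sum())

def multi_view_bag_distance(bags_a, bags_b, metrics):
    # One metric per view; per-view bag distances are summed here,
    # though the paper may combine views differently.
    return sum(bag_distance(a, b, M) for a, b, M in zip(bags_a, bags_b, metrics))

def project_psd(M):
    # Project a symmetric matrix onto the positive semi-definite cone
    # by clipping its negative eigenvalues to zero.
    M = (M + M.T) / 2.0
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs @ np.diag(np.clip(eigvals, 0.0, None)) @ eigvecs.T

def ascent_step(M, grad, lr=0.01):
    # One projected gradient-ascent iteration: step along the objective
    # gradient, then project back onto the PSD cone, matching the
    # "gradient ascent and positive semi-definite projection" loop.
    return project_psd(M + lr * grad)

# Toy usage: two images, each represented in three views as bags of instances.
rng = np.random.default_rng(0)
views_a = [rng.normal(size=(4, 5)) for _ in range(3)]   # 4 instances per view, dim 5
views_b = [rng.normal(size=(6, 5)) for _ in range(3)]   # 6 instances per view, dim 5
metrics = [np.eye(5) for _ in range(3)]                 # start each view from the identity metric
print(multi_view_bag_distance(views_a, views_b, metrics))

In the alternating scheme the abstract outlines, one would cycle over the per-view metrics, applying ascent_step to each in turn while holding the others fixed, until the joint-probability objective converges.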
