The effect of changing training data on a fixed deep learning detection model

Given the lack of accurate data for some computer vision applications, researchers often augment their training sets with images collected from other sources. To quantify the effect of these added data, we compare detection results on a customized object dataset, using the same detection model while changing the training data fed to the network. In our work, we run detection on images captured by a Microsoft Kinect sensor after training the network on different combinations of training data. The first part of the training data is captured by the Kinect itself; the second is gathered from several internet sources and is referred to as collected images. We then vary how these images are distributed between the training and validation sets before feeding them to the fixed training model. The results show that this distribution of data can considerably affect training and detection performance under the same model parameters, and that mixing the captured images with collected ones can improve these results.
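A minimal sketch of the kind of experiment described above: two image pools (Kinect-captured and internet-collected) are distributed between training and validation at different mixing ratios, while the detector and its hyperparameters stay fixed. The directory names, file extensions, and the `train_fixed_detector` placeholder are assumptions for illustration, not the authors' actual pipeline.

```python
import random
from pathlib import Path


def build_split(kinect_dir, collected_dir, collected_train_fraction, seed=0):
    """Build train/val image lists from two sources.

    Hypothetical helper: all Kinect-captured images go to training, and
    `collected_train_fraction` controls how many collected images are added
    to training versus held out for validation. Varying this fraction is the
    experimental variable; the detection model itself never changes.
    """
    rng = random.Random(seed)
    kinect = sorted(Path(kinect_dir).glob("*.png"))      # assumed extension
    collected = sorted(Path(collected_dir).glob("*.jpg"))  # assumed extension
    rng.shuffle(collected)

    n_train = int(collected_train_fraction * len(collected))
    train = list(kinect) + collected[:n_train]
    val = collected[n_train:]
    rng.shuffle(train)
    return train, val


# Example: train the same fixed detector on three different data mixes.
for frac in (0.0, 0.5, 1.0):
    train_imgs, val_imgs = build_split("data/kinect", "data/collected", frac)
    # detector = train_fixed_detector(train_imgs, val_imgs)  # placeholder call
    print(f"collected fraction in training = {frac}: "
          f"{len(train_imgs)} train / {len(val_imgs)} val")
```

Each resulting detector would then be evaluated on the same Kinect-captured test images, so any difference in detection results can be attributed to the data distribution rather than to the model.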
