Automatic Defect Segmentation on Leather with Deep Learning

28 Mar 2019  ·  Sze-Teng Liong, Y. S. Gan, Yen-Chang Huang, Chang-Ann Yuan, Hsiu-Chi Chang

Leather is a natural and durable material created by tanning animal hides and skins. The price of leather is highly sensitive to its quality and the condition of surface defects. In the literature, very few works investigate defect detection on leather using automatic image processing techniques. Manual defect inspection is essential in the leather production industry to control the quality of the finished products; however, it is tedious, as it is labour intensive, time consuming, causes eye fatigue and is prone to human error. In this paper, a fully automatic defect detection and marking system for calf leather is proposed. The proposed system consists of a piece of leather, an LED light, a high-resolution camera and a robot arm. Succinctly, a machine vision method is presented to identify the position of defects on the leather using a deep learning architecture. A series of processes is then conducted to predict the defect instances, including acquisition of the leather images with the robot arm, training and testing on these images with a deep learning architecture, and determination of the defect boundaries through mathematical derivation of the geometry. None of these processes involves human intervention, except for the construction of the defect ground truths. The proposed algorithm achieves 91.5% segmentation accuracy on the training data and 70.35% on the test data. We also report the confusion matrix, F1-score, precision, specificity and sensitivity to further verify the effectiveness of the proposed approach.
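The paper summarises segmentation quality through a confusion matrix and the metrics derived from it (accuracy, F1-score, precision, specificity, sensitivity). As a minimal illustrative sketch, not the authors' code, and assuming pixel-wise binary defect masks, these metrics follow directly from the confusion-matrix counts:

```python
import numpy as np

def segmentation_metrics(pred_mask, gt_mask):
    """Pixel-wise metrics for a binary defect segmentation.

    pred_mask, gt_mask: boolean arrays of identical shape, where True
    marks a pixel predicted / annotated as defective (hypothetical inputs).
    """
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)

    tp = np.sum(pred & gt)      # defect pixels correctly detected
    tn = np.sum(~pred & ~gt)    # background pixels correctly rejected
    fp = np.sum(pred & ~gt)     # background wrongly flagged as defect
    fn = np.sum(~pred & gt)     # defect pixels missed

    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # recall
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)

    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f1": f1}
```

The reported 91.5% / 70.35% figures correspond to the accuracy term above computed over the training and test sets respectively, under the assumption of pixel-level evaluation; the paper's exact evaluation protocol should be consulted for details.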
