OBJECT DETECTION USING DEEP LEARNING ON METAL CHIPS IN MANUFACTURING

When designing cutting tools for the turning industry, providing optimal cutting parameters is important both for the client and for the company's own research. By examining the metal chips that form during the turning process, operators can recommend optimal cutting parameters. Rather than classifying these chips manually, an automated approach to detecting and classifying them is preferred. This thesis evaluates whether such an approach is possible using either a Convolutional Neural Network (CNN) on its own or CNN feature extraction coupled with a machine learning (ML) classifier.

The work started with a research phase reviewing existing state-of-the-art CNNs, image processing techniques and ML algorithms. Based on this research, we implemented our own object detection algorithm and chose to implement two established CNNs, AlexNet and VGG16. A third CNN was designed and implemented with our specific task in mind. The three models were tested against each other, both as standalone image classifiers and as feature extractors coupled with an ML algorithm. Because the chips form inside a machine, different camera angles and lighting setups had to be tested to determine which configuration produced the best images for classification. A top view of the cutting area proved optimal, with light directed both below the cutting area and into the chip disposal tray.

The smaller proposed CNN, with three convolutional layers, three pooling layers and two dense layers, was found to rival both AlexNet and VGG16, both as a standalone classifier and as a feature extractor. Because it was designed with resource-limited systems in mind, it is better suited to such systems while still achieving high accuracy. As a standalone classifier, the proposed model reached a classification accuracy of 92.03%, compared with 92.20% for AlexNet and 91.88% for VGG16. When used as feature extractors, all three models paired best with the Random Forest algorithm, and the differences between the feature extractors were small: the proposed feature extractor combined with Random Forest reached 82.56% accuracy, compared with 81.93% for AlexNet and 79.14% for VGG16.

=== DIGICOGS
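The abstract specifies the proposed network only at the level of layer counts (three convolutional layers, three pooling layers, two dense layers). As a rough illustration of what such a model can look like, here is a minimal Keras sketch; the input resolution, filter counts, kernel sizes and number of chip classes are assumptions made for illustration, not values taken from the thesis.

    # Sketch of a small CNN in the spirit of the proposed model:
    # three convolutional layers, three pooling layers, two dense layers.
    # INPUT_SHAPE, filter counts, kernel sizes and NUM_CLASSES are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4              # hypothetical number of chip classes
    INPUT_SHAPE = (128, 128, 3)  # hypothetical input resolution

    def build_small_cnn(num_classes=NUM_CLASSES, input_shape=INPUT_SHAPE):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),              # first dense layer
            layers.Dense(num_classes, activation="softmax"),   # second dense layer (classifier head)
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        # Prints the 3-conv / 3-pool / 2-dense architecture.
        build_small_cnn().summary()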

Bibliographic Details
Main Authors: Andersson Dickfors, Robin; Grannas, Nick
Format: Others (bachelor's thesis, PDF, open access)
Language: English
Published: Mälardalens högskola, Akademin för innovation, design och teknik, 2021
Subjects: Object detection; deep learning; machine learning; computer vision; classification; manufacturing; small object detection; Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55068
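The abstract above also describes using the CNNs as fixed feature extractors paired with a Random Forest classifier. A minimal sketch of that pipeline follows, assuming a trained Keras model such as the one sketched earlier and an image array X with integer labels y; these names, the train/test split and the forest size are placeholders, not details from the thesis.

    # Sketch: use a trained CNN as a fixed feature extractor and train a
    # Random Forest on the extracted features, as described in the abstract.
    # `trained_model`, X (N, H, W, C image array) and y (integer labels) are
    # assumed to exist; they are placeholders, not artifacts from the thesis.
    import tensorflow as tf
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def extract_features(trained_model, images, feature_layer_index=-2):
        """Run images through the CNN up to its penultimate (dense) layer."""
        feature_model = tf.keras.Model(
            inputs=trained_model.inputs,
            outputs=trained_model.layers[feature_layer_index].output,
        )
        return feature_model.predict(images, verbose=0)

    def train_rf_on_features(trained_model, X, y):
        # Extract CNN features, then fit and evaluate a Random Forest on them.
        feats = extract_features(trained_model, X)
        X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.2, random_state=0)
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X_tr, y_tr)
        return rf, accuracy_score(y_te, rf.predict(X_te))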