Interpretable representation learning for visual intelligence
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.

This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 131-140).

Recent progress in deep neural networks for computer vision and machine learning has enabled transformative applications across robotics, healthcare, and security. Despite their strong performance, however, it remains challenging to understand the inner workings of deep neural networks and to explain their output predictions. This thesis investigates several novel approaches for opening up the "black box" of neural networks used in visual recognition tasks and for understanding their inner working mechanisms. I first show that objects and other meaningful concepts emerge as a consequence of recognizing scenes. A network dissection approach is then introduced to automatically identify internal units that act as emergent concept detectors and to quantify their interpretability. Next, I describe an approach that efficiently explains the output prediction for any given image, shedding light on the decision-making process of the network and on why its predictions succeed or fail. Finally, I present ongoing efforts toward learning efficient and interpretable deep representations for video event understanding, along with directions for future work. Illustrative sketches of the dissection and prediction-explanation ideas follow the record fields below.

by Bolei Zhou. Ph.D.
Main Author: | Zhou, Bolei |
---|---|
Other Authors: | Antonio Torralba |
Format: | Thesis |
Language: | English |
Published: | Massachusetts Institute of Technology, 2018 |
Subjects: | Electrical Engineering and Computer Science |
Physical Description: | 140 pages, application/pdf |
Rights: | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. (http://dspace.mit.edu/handle/1721.1/7582) |
Online Access: | http://hdl.handle.net/1721.1/117837 |
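The abstract mentions a network dissection approach that identifies internal units acting as emergent concept detectors and quantifies their interpretability. The thesis text is not part of this record, so the following Python snippet is only a minimal illustrative sketch of the published network dissection idea: a unit's activation maps are thresholded at a high quantile of their own distribution and compared against ground-truth concept segmentations with an intersection-over-union (IoU) score. The function name, array shapes, and fixed quantile are assumptions for illustration; in the actual method, activation maps are also upsampled to the segmentation resolution before comparison.

```python
import numpy as np

def dissect_unit(activations, concept_masks, quantile=0.995):
    """Score how well one convolutional unit detects a visual concept.

    activations:   (N, H, W) activation maps of a single unit over a
                   probe dataset (hypothetical input for illustration).
    concept_masks: (N, H, W) binary ground-truth segmentation masks for
                   one concept (e.g. "lamp") on the same images, assumed
                   here to already match the activation resolution.
    """
    # Threshold chosen so that only the top ~0.5% of this unit's
    # activations across the probe set are treated as "detections".
    threshold = np.quantile(activations, quantile)
    unit_masks = activations > threshold

    # IoU between the unit's binarized activation maps and the concept masks.
    intersection = np.logical_and(unit_masks, concept_masks).sum()
    union = np.logical_or(unit_masks, concept_masks).sum()
    return float(intersection) / float(union) if union > 0 else 0.0
```

Units whose best IoU against any concept exceeds a chosen cutoff would be counted as interpretable concept detectors; ranking concepts by this score is what "quantifying interpretability" refers to in the abstract.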
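The abstract also mentions an approach that efficiently explains the output prediction for any given image. This reading is consistent with class activation mapping (CAM), which the author introduced; the sketch below assumes that interpretation and a network whose last convolutional layer is followed by global average pooling and a single linear classifier. Function and argument names are hypothetical.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Minimal class activation mapping (CAM) sketch.

    feature_maps: (C, H, W) activations of the last conv layer for one image.
    fc_weights:   (num_classes, C) weights of the linear layer that follows
                  global average pooling.
    class_idx:    index of the predicted class to explain.

    The heatmap is a weighted sum of the channel activation maps, using the
    weights connecting each channel to the chosen class; high values mark
    the image regions that contributed most to that prediction.
    """
    weights = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                           # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

In use, the resulting (H, W) map would be upsampled to the input image size and overlaid as a heatmap, which is one way to see why a prediction succeeded or failed, as described in the abstract.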