Understanding what a captioning network doesn't know

Bibliographic Details
Main Author: Yip, Richard B., M.Eng., Massachusetts Institute of Technology.
Other Authors: Antonio Torralba.
Format: Others
Language: English
Published: Massachusetts Institute of Technology, 2019
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/122996
id ndltd-MIT-oai-dspace.mit.edu-1721.1-122996
record_format oai_dc
spelling ndltd-MIT-oai-dspace.mit.edu-1721.1-122996 2019-11-23T03:51:15Z
Understanding what a captioning network doesn't know
Yip, Richard B., M.Eng., Massachusetts Institute of Technology. Antonio Torralba. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. Electrical Engineering and Computer Science.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (page 29). Abstract as given in the description field below. by Richard B. Yip. M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
2019-11-22T00:00:51Z 2019 Thesis https://hdl.handle.net/1721.1/122996 1127292891 eng
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
29 pages, application/pdf
Massachusetts Institute of Technology
collection NDLTD
language English
format Others
sources NDLTD
topic Electrical Engineering and Computer Science.
description This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. === Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 === Cataloged from student-submitted PDF version of thesis. === Includes bibliographical references (page 29). ===
While recent years have seen significant advances in the capabilities of image recognition and classification neural networks, we still know little about the relationship between the activation of hidden layers and human-understandable concepts. Recent work in network interpretability has provided a framework for analyzing hidden nodes and layers, showing that in many convolutional architectures there exists a significant correlation between groups of nodes and human-understandable concepts. We use this framework to investigate the encoding of images produced by standard image classification networks. We do this in the context of encoder-decoder image captioning networks, which provide a natural way to observe the effect that perturbing node activations has on the image encoding: the generated captions are inherently understandable by humans and are thus convenient and informative to use. We also generate and analyze captions of images modified by inserting small sub-images of single, human-interpretable concepts. These modifications and the resulting captions show the existence of training-triggered correlations between semantically dissimilar words. (Both probing procedures are sketched after the record below.) ===
by Richard B. Yip. === M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
author2 Antonio Torralba.
author Yip, Richard B., M.Eng., Massachusetts Institute of Technology.
title Understanding what a captioning network doesn't know
publisher Massachusetts Institute of Technology
publishDate 2019
url https://hdl.handle.net/1721.1/122996
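
The first probing procedure the abstract describes, suppressing individual hidden units in the encoder and reading off how the generated caption changes, can be sketched roughly as follows. This is a minimal illustration only: it assumes a PyTorch-style model exposing a convolutional encoder and a generate_caption method, and neither these names nor the hook-based mechanism are taken from the thesis itself.

    import torch

    def caption_with_unit_suppressed(model, image, layer, channel, scale=0.0):
        # Rescale one channel of the chosen encoder layer, then regenerate the caption.
        # model.encoder and model.generate_caption are assumed, hypothetical interfaces.
        def hook(module, inputs, output):
            out = output.clone()
            out[:, channel] *= scale          # scale=0.0 silences the unit entirely
            return out                        # returned tensor replaces the layer output

        handle = layer.register_forward_hook(hook)
        try:
            with torch.no_grad():
                features = model.encoder(image.unsqueeze(0))   # add batch dimension
                caption = model.generate_caption(features)
        finally:
            handle.remove()                   # restore the unmodified network
        return caption

Comparing the caption returned at scale=0.0 with the caption of the unmodified network, unit by unit, is one way to surface which interpretable units the wording of the caption actually depends on.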
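
The second experiment, inserting a small sub-image of a single human-interpretable concept into a scene and captioning the result, can be sketched in the same spirit. The patch source, position, and the captioning call below are placeholders for illustration, not details taken from the thesis.

    from PIL import Image

    def insert_concept_patch(scene_path, patch_path, position=(0, 0), size=(64, 64)):
        # Paste a small image of one concept (e.g. a dog) over a region of the scene.
        scene = Image.open(scene_path).convert("RGB")
        patch = Image.open(patch_path).convert("RGB").resize(size)
        scene.paste(patch, position)
        return scene

    # Hypothetical usage: caption the original and the modified scene and compare
    # which words appear only after the concept patch has been inserted.
    # original_caption = caption_model(Image.open("scene.jpg"))
    # modified_caption = caption_model(insert_concept_patch("scene.jpg", "dog.jpg", (10, 10)))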