Building blocks for the mind
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June 2017. Cataloged from PDF version of thesis. "February 2015." Thesis pagination reflects the way it was delivered to the Institute Archives. Includes bibliographical references (pages 93-102).
Main Author: | Raiman, Jonathan (Jonathan Raphael) |
---|---|
Other Authors: | Brian C. Williams |
Format: | Others |
Language: | English |
Published: | Massachusetts Institute of Technology, 2017 |
Subjects: | Aeronautics and Astronautics |
Online Access: | http://hdl.handle.net/1721.1/112425 |
id |
ndltd-MIT-oai-dspace.mit.edu-1721.1-112425 |
record_format |
oai_dc |
spelling |
ndltd-MIT-oai-dspace.mit.edu-1721.1-112425 2019-05-02T16:10:07Z
Building blocks for the mind. Raiman, Jonathan (Jonathan Raphael). Brian C. Williams. Massachusetts Institute of Technology. Department of Aeronautics and Astronautics. Aeronautics and Astronautics.
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June 2017. Cataloged from PDF version of thesis. "February 2015." Thesis pagination reflects the way it was delivered to the Institute Archives. Includes bibliographical references (pages 93-102). Abstract as given in the description field below. by Jonathan Raiman. S.M.
2017-12-05T19:12:15Z 2017-12-05T19:12:15Z 2015 2017 Thesis http://hdl.handle.net/1721.1/112425 1008753943 eng
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
102 pages application/pdf Massachusetts Institute of Technology |
collection |
NDLTD |
language |
English |
format |
Others |
sources |
NDLTD |
topic |
Aeronautics and Astronautics. |
description |
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, June 2017. Cataloged from PDF version of thesis. "February 2015." Thesis pagination reflects the way it was delivered to the Institute Archives. Includes bibliographical references (pages 93-102).

Successful man-machine interaction requires justification and transparency for the behavior of the machine. Artificial agents now perform a variety of high-risk jobs alongside humans: the need for justification is apparent when we consider the millions of dollars that can be lost by robotic traders in the stock market over misreading online news [9], or the hundreds of lives that could be saved if the behavior of plane autopilots were better understood [1]. Current state-of-the-art approaches to man-machine interaction within a dialog, which use sentiment analysis, recommender systems, or information retrieval algorithms, fail to provide a rationale for their predictions or their internal behavior.

In this thesis, I claim that making the machine selective in the elements considered in its final computation, by enforcing sparsity at the machine learning stage, reveals the machine's behavior and provides justification to the user. My second claim is that selectivity in the machine's inputs acts as Occam's razor: rather than hindering performance, enforcing sparsity allows the trained machine learning model to generalize better to unseen data.

I support my first claim concerning transparency and justification through two separate experiments, each fundamental to man-machine interaction:

- Recommender system: interactive plan resolution using Uhura and user profiles represented by ontologies.
- Sentiment analysis: text climax as support for predictions.

In the first experiment, I find that the trained system's recommendations agree better with human decisions than several existing baselines that rely on state-of-the-art topic modelling methods which do not enforce sparsity in the input data. In the second experiment, I obtain a new state-of-the-art result on sentiment analysis and show that the trained system can provide justification by pinpointing climactic moments in the original text that influence its sentiment, unlike competing approaches. My second claim, about sparsity's regularization benefits, is supported by another set of experiments in which I demonstrate significant improvement over non-sparse baselines on three challenging machine learning tasks.

by Jonathan Raiman. S.M. |
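The abstract states the claim but not the mechanism, and the thesis's actual models are not reproduced here. As a minimal, purely illustrative sketch of how "enforcing sparsity at the machine learning stage" can expose which inputs drive a prediction, the snippet below fits an L1-penalized logistic regression (an assumption made for illustration, not the method used in the thesis) on synthetic data and lists the handful of features left with nonzero weight.

```python
# Illustrative only: an L1 penalty drives most weights exactly to zero,
# so the model is forced to be selective about which inputs it relies on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 200 examples with 20 features, of which only
# features 0, 1, and 2 actually determine the label.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 2 * X[:, 1] - X[:, 2] > 0).astype(int)

# The L1 penalty (penalty="l1") enforces sparsity in the learned weights.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)

# The few surviving nonzero weights name the inputs the model actually
# used, giving a human-inspectable rationale for its predictions.
used = np.flatnonzero(clf.coef_[0])
print("features with nonzero weight:", used.tolist())
print("their weights:", np.round(clf.coef_[0][used], 2).tolist())
```

Pointing at those few surviving inputs is the sense in which selectivity doubles as a justification; the same penalty also acts as a standard regularizer, which is the spirit of the abstract's second claim about generalization.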
author2 |
Brian C. Williams. |
author |
Raiman, Jonathan (Jonathan Raphael) |
title |
Building blocks for the mind |
publisher |
Massachusetts Institute of Technology |
publishDate |
2017 |
url |
http://hdl.handle.net/1721.1/112425 |