On the computational power of RNNs

Bibliographic Details
Main Author: Korsky, Samuel A.
Other Authors: Robert C. Berwick (thesis supervisor)
Format: Thesis (M. Eng.)
Language: English
Published: Massachusetts Institute of Technology, 2020
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/127704
Description

This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF of the thesis. Includes bibliographical references (page 27). 27 pages. MIT theses may be protected by copyright; please reuse MIT thesis content according to the MIT Libraries Permissions Policy: http://dspace.mit.edu/handle/1721.1/7582

Abstract

Recent neural network architectures such as the basic recurrent neural network (RNN) and the Gated Recurrent Unit (GRU) have gained prominence as end-to-end learning architectures for natural language processing tasks. But what is the computational power of such systems? We prove that finite precision RNNs with one hidden layer and ReLU activation, as well as finite precision GRUs, are exactly as computationally powerful as deterministic finite automata. Allowing arbitrary precision, we prove that RNNs with one hidden layer and ReLU activation are at least as computationally powerful as pushdown automata. If we also allow infinite precision, infinite edge weights, and nonlinear output activation functions, we prove that GRUs are at least as computationally powerful as pushdown automata. All results are shown constructively.
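To make the first claim concrete, here is a minimal sketch of one direction of it: a single-layer ReLU RNN that simulates a DFA by one-hot encoding (current state, last input symbol) pairs in its hidden state. The example automaton (even number of 1s), the pair encoding, and every name below are illustrative assumptions, not the thesis's construction.

import numpy as np

# Example DFA over {0, 1}: accept strings containing an even number of 1s.
n_states, n_symbols = 2, 2
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # transition function
q0, accepting = 0, {0}

H = n_states * n_symbols             # one hidden unit per (state, symbol) pair
idx = lambda q, a: q * n_symbols + a

# Weights are 0/1 and the bias is -1, so ReLU(W h + U x - 1) acts as a logical
# AND: unit (q', b) fires iff the fired unit (q, a) satisfies delta(q, b) = q'
# and the current input symbol is b.
W = np.zeros((H, H))
U = np.zeros((H, n_symbols))
for q_next in range(n_states):
    for b in range(n_symbols):
        U[idx(q_next, b), b] = 1.0
        for q in range(n_states):
            if delta[(q, b)] == q_next:
                for a in range(n_symbols):
                    W[idx(q_next, b), idx(q, a)] = 1.0

relu = lambda v: np.maximum(v, 0.0)

def rnn_accepts(string):
    """Run the ReLU RNN; h stays one-hot over (state, symbol) pairs."""
    h = np.eye(H)[idx(q0, 0)]        # start in q0; the symbol slot is arbitrary
                                     # because W ignores the 'a' of its input pair
    for ch in string:
        x = np.eye(n_symbols)[int(ch)]
        h = relu(W @ h + U @ x - 1.0)
    return int(np.argmax(h)) // n_symbols in accepting   # decode current state

for s in ["", "0", "1", "11", "101", "1101"]:
    assert rnn_accepts(s) == (s.count("1") % 2 == 0)

Every value in this computation is 0 or 1, which is why finite precision suffices for the regular-language direction; a linear readout summing the units of accepting states would replace the argmax decoding.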
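The arbitrary-precision claims can be illustrated in the same spirit. Below is a minimal sketch, assuming the standard "stack in a scalar" trick (stack contents stored as a base-4 fraction), of how a pushdown stack's push and pop become affine-plus-ReLU updates; Fraction stands in for the unbounded precision the result assumes, the Dyck-1 example and all names are illustrative rather than the thesis's construction, and the Python branch on the input symbol abbreviates the one-hot input gating a real RNN cell would use.

from fractions import Fraction

relu = lambda v: max(v, Fraction(0))

def recognizes_dyck1(string):
    """Accept iff `string` over {'(', ')'} is balanced, using one stack cell."""
    s = Fraction(0)     # stack as a base-4 fraction: 0 encodes the empty stack,
                        # and any nonempty stack satisfies s >= 1/4
    err = Fraction(0)   # extra ReLU unit; becomes positive on stack underflow
    for ch in string:
        if ch == "(":
            s = (s + 1) / 4              # push: affine map (shift digits, write 1)
        else:
            err = err + relu(1 - 4 * s)  # fires (with value 1) only when s = 0
            s = relu(4 * s - 1)          # pop: affine map + ReLU (erase top digit)
    return s == 0 and err == 0           # accept iff stack empty and no underflow

assert recognizes_dyck1("(()())")
assert not recognizes_dyck1("(()")
assert not recognizes_dyck1("())(")

With native floating point, the encoded stack loses digits after a few dozen pushes, which is exactly why the pushdown-automaton results in the abstract require arbitrary (or infinite) precision.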