Computational applications of invariance principles

This thesis focuses on applications of classical tools from probability theory and convex analysis, such as limit theorems, to problems in theoretical computer science, specifically pseudorandomness and learning theory. At first glance, limit theorems, pseudorandomness, and learning theory appear to be disparate subjects. As has become apparent, however, there is a strong connection between them through a third, more abstract question: what do random objects look like? This connection is best illustrated by the study of the spectrum of Boolean functions, which has directly or indirectly played an important role in a plethora of results in complexity theory. This thesis takes the program further by drawing on a variety of fundamental tools, both classical and new, from probability theory and analytic geometry. Our contributions fall broadly into three categories.

Probability theory: The central limit theorem is one of the most important results in probability and a richly studied topic. Motivated by questions in pseudorandomness and learning theory, we obtain two new limit theorems, or invariance principles. The proofs of these results, of interest in their own right, have a computer science flavor and fall under the niche category of techniques from theoretical computer science with applications in pure mathematics.

Pseudorandomness: Derandomizing natural complexity classes is a fundamental problem in complexity theory, with several applications outside it. Our work addresses such derandomization questions for natural and basic geometric concept classes: halfspaces, polynomial threshold functions (PTFs), and polytopes. We develop a fairly generic framework for obtaining pseudorandom generators (PRGs) from invariance principles, and apply it to old and new invariance principles to obtain the best known PRGs for these classes.

Learning theory: Learning theory aims to understand which functions can be learned efficiently from examples. As developed in the seminal work of Linial, Mansour, and Nisan (1994) and strengthened by several follow-up works, there are strong connections between learning a class of functions and how sensitive to noise the functions are, as quantified by average sensitivity and noise sensitivity. Beyond learning, bounds on average and noise sensitivity have applications in hardness of approximation, voting theory, quantum computing, and more. We address the question of bounding the sensitivity of polynomial threshold functions and intersections of halfspaces, and obtain the best known results for these concept classes.
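The invariance-principle viewpoint above can be illustrated numerically: for a halfspace with "regular" weights (no single coordinate dominates), replacing uniform ±1 inputs by standard Gaussian inputs barely changes the acceptance probability. The Monte Carlo sketch below is our own toy illustration of this phenomenon, not a construction from the thesis; all names and parameter choices are ours.

```python
import random

random.seed(0)

def halfspace(ws, xs, theta):
    # A halfspace: accepts (returns 1) if the weighted sum clears the threshold.
    return 1 if sum(w * x for w, x in zip(ws, xs)) > theta else 0

n, trials, theta = 201, 20000, 14.0
ws = [1.0] * n  # regular weights: the regime where invariance principles apply

# Acceptance probability under uniform +/-1 inputs vs. standard Gaussian inputs.
p_pm1 = sum(halfspace(ws, [random.choice([-1, 1]) for _ in range(n)], theta)
            for _ in range(trials)) / trials
p_gauss = sum(halfspace(ws, [random.gauss(0.0, 1.0) for _ in range(n)], theta)
              for _ in range(trials)) / trials

# The two probabilities agree up to a small error: the halfspace cannot
# distinguish the discrete cube from Gaussian space.
gap = abs(p_pm1 - p_gauss)
```

This "looks the same to the test function" property is exactly what a PRG must arrange against an adversarial halfspace, which is why invariance principles are a natural starting point for the PRG framework described in the abstract.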
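Noise sensitivity, the quantity bounded in the learning-theory results, also has a short empirical rendering. The sketch below estimates NS_eps(f) = Pr[f(x) != f(y)], where y is x with each bit flipped independently with probability eps, for the majority function; it is a minimal illustration of the definition, with all names our own.

```python
import random

random.seed(1)

def majority(xs):
    # Majority on an odd number of +/-1 bits.
    return 1 if sum(xs) > 0 else -1

def noise_sensitivity(f, n, eps, trials=20000):
    # NS_eps(f) = Pr[f(x) != f(y)], x uniform on {-1,1}^n and y obtained
    # from x by flipping each bit independently with probability eps.
    disagree = 0
    for _ in range(trials):
        x = [random.choice([-1, 1]) for _ in range(n)]
        y = [-b if random.random() < eps else b for b in x]
        disagree += f(x) != f(y)
    return disagree / trials

ns = noise_sensitivity(majority, 101, 0.1)
# For majority, NS_eps grows like sqrt(eps): even 10% noise flips the
# outcome a noticeable fraction of the time.
```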
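The Linial-Mansour-Nisan connection mentioned in the learning-theory paragraph can be sketched in miniature: low noise sensitivity means the Fourier weight concentrates on low degrees, so estimating just the low-degree Fourier coefficients from random examples already yields a good predictor. The toy example below (our own, under the assumption that majority is the target concept) estimates the degree-1 coefficients of majority and predicts with the sign of the degree-1 part.

```python
import random

random.seed(2)

def majority(xs):
    return 1 if sum(xs) > 0 else -1

n, samples = 7, 5000
train = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(samples)]
labels = [majority(x) for x in train]

# Low-degree (LMN-style) step: estimate the degree-1 Fourier coefficients
# f_hat({i}) = E[f(x) * x_i] from random labeled examples.
coeffs = [sum(lab * x[i] for x, lab in zip(train, labels)) / samples
          for i in range(n)]

def predict(x):
    # Predict with the sign of the degree-1 part of the Fourier expansion.
    return 1 if sum(c * b for c, b in zip(coeffs, x)) > 0 else -1

test_pts = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(2000)]
agreement = sum(predict(x) == majority(x) for x in test_pts) / len(test_pts)
```

For majority the degree-1 approximation recovers the function almost exactly; for richer classes such as PTFs and intersections of halfspaces, the sensitivity bounds in the thesis control how much Fourier weight can hide at high degrees.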


Bibliographic Details
Main Author: Meka, Raghu Vardhan Reddy
Other Authors: Zuckerman, David I.
Format: Thesis (application/pdf)
Published: 2015 (thesis dated August 2011)
Subjects: Invariance principles; Limit theorems; Pseudorandomness; PTFs; Learning theory; Noise sensitivity; Polytopes; Halfspaces
Online Access: http://hdl.handle.net/2152/30359