Machine learning on a budget
Main Author: | Trapeznikov, Kirill |
---|---|
Language: | en_US |
Published: | Boston University, 2015 |
Online Access: | https://hdl.handle.net/2144/11067 |
id | ndltd-bu.edu-oai-open.bu.edu-2144-11067 |
---|---|
record_format | oai_dc |
collection | NDLTD |
language | en_US |
sources | NDLTD |
description |
Thesis (Ph.D.)--Boston University

In a typical discriminative learning setting, a set of labeled training examples is given, and the goal is to learn a decision rule that accurately classifies (or labels) unseen test examples. Much of machine learning research has focused on improving accuracy, but more recently the costs of learning and of decision making have become increasingly important. Such costs arise both during training and during testing. Labeling data for training is often an expensive process. During testing, acquiring or processing measurements for every decision is also costly. This work deals with two problems: how to reduce the amount of labeled data needed during training, and how to minimize measurement costs when making decisions during testing, while maintaining system accuracy.
The first part falls into an area known as active learning. It deals with the problem of selecting a small subset of examples to label, from a pool of unlabeled data, for training a good classifier. This problem is relevant in many applications where a large collection of unlabeled data is readily available but labeling an instance requires an expensive expert (e.g., a radiologist annotating a medical image). We study active learning in the boosting framework. We develop a practical algorithm that labels examples so as to maximally reduce the space of feasible classifiers. We show that, under certain assumptions, our strategy achieves the generalization error performance of a system trained on the entire data set while selecting only logarithmically many samples to label.
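To make the pool-based setting concrete, here is a minimal sketch of an active learning loop built around an off-the-shelf boosted classifier. It uses a minimum-margin query heuristic as a rough stand-in for shrinking the set of feasible classifiers; it is not the thesis algorithm, and the names `X_pool`, `y_pool`, and `active_learning_loop`, as well as the NumPy/scikit-learn dependencies, are assumptions for this illustration.

```python
# Illustrative pool-based active learning sketch (assumed setup, not the thesis method).
# Assumes X_pool, y_pool are NumPy arrays for a binary problem and that the random
# seed set happens to contain both classes.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def active_learning_loop(X_pool, y_pool, n_queries=20, n_init=5, seed=0):
    rng = np.random.default_rng(seed)
    labeled = [int(i) for i in rng.choice(len(X_pool), size=n_init, replace=False)]
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    for _ in range(n_queries):
        clf = AdaBoostClassifier(n_estimators=50)
        clf.fit(X_pool[labeled], y_pool[labeled])

        # Ensemble margin on the unlabeled pool; a small absolute margin is used
        # here as a rough proxy for points on which feasible classifiers still disagree.
        margins = np.abs(clf.decision_function(X_pool[unlabeled]))
        query = unlabeled[int(np.argmin(margins))]

        labeled.append(query)      # "ask the expert" for y_pool[query]
        unlabeled.remove(query)

    # Final classifier trained on the actively selected labeled set.
    return AdaBoostClassifier(n_estimators=50).fit(X_pool[labeled], y_pool[labeled])
```

Minimum-margin querying is a standard uncertainty-sampling baseline; the thesis instead analyzes reduction of the feasible classifier space directly, which is what underlies the logarithmic label-complexity guarantee stated above.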
In the second part, we study sequential classifiers under budget constraints. In many systems, such as medical diagnosis and homeland security, sensors have varying acquisition costs, and these costs may reflect delay, throughput, or monetary value. While some decisions require all measurements, it is often unnecessary to use every modality to classify every example. The problem is therefore to learn a system that, for every decision, sequentially selects sensors to meet a measurement budget while minimizing classification error. Initially, we study the case where the order in which sensor measurements are acquired is given. For every instance, our system must decide whether to seek more measurements from the next sensor or to terminate by classifying based on the information already available. We use a Bayesian analysis of this problem to construct a novel multi-stage empirical risk objective and directly learn sequential decision functions from training data. We provide practical algorithms for binary and multi-class settings and derive generalization error guarantees. We compare our approach to alternative strategies on real-world data. In the last section, we explore a decision system in which the order of sensors is no longer fixed. We investigate how to combine ideas from reinforcement and imitation learning with empirical risk minimization to learn a dynamic sensor selection policy.
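As a rough illustration of the fixed-order, stop-or-continue setting, the sketch below chains per-stage classifiers over growing feature prefixes and substitutes a per-stage confidence threshold for the learned rejection rule. The class name `SequentialBudgetClassifier` and the parameters `stage_dims`, `thresholds`, and `costs` are assumptions for this example; it is a simplification, not the multi-stage empirical risk method described above.

```python
# Illustrative fixed-order "stop or acquire more" classifier (assumed setup,
# not the thesis's multi-stage empirical risk formulation).
import numpy as np
from sklearn.linear_model import LogisticRegression

class SequentialBudgetClassifier:
    def __init__(self, stage_dims, thresholds, costs):
        # stage_dims[k]: number of features available once sensor k has been acquired
        # thresholds[k]: terminate at stage k if max class probability reaches this value
        # costs[k]: acquisition cost charged when sensor k is used
        self.stage_dims, self.thresholds, self.costs = stage_dims, thresholds, costs
        self.stages = [LogisticRegression(max_iter=1000) for _ in stage_dims]

    def fit(self, X, y):
        # Stage k trains only on the feature prefix its sensors provide.
        for clf, d in zip(self.stages, self.stage_dims):
            clf.fit(X[:, :d], y)
        return self

    def predict_one(self, x):
        spent = 0.0
        for k, (clf, d) in enumerate(zip(self.stages, self.stage_dims)):
            spent += self.costs[k]
            proba = clf.predict_proba(x[:d].reshape(1, -1))[0]
            confident = proba.max() >= self.thresholds[k]
            last_stage = (k == len(self.stages) - 1)
            if confident or last_stage:
                # Terminate: classify with what has been acquired so far.
                return clf.classes_[int(np.argmax(proba))], spent
```

In this toy version the thresholds would have to be tuned to meet a measurement budget; the thesis instead learns the stage-wise continue/terminate decision functions jointly from training data.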
author | Trapeznikov, Kirill |
title | Machine learning on a budget |
publisher | Boston University |
publishDate | 2015 |
url | https://hdl.handle.net/2144/11067 |