Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.

Includes bibliographical references (leaves 125-128).

Traditionally, information retrieval systems aim to maximize the number of relevant documents returned to a user within some window of the top. For that goal, the Probability Ranking Principle, which ranks documents in decreasing order of probability of relevance, is provably optimal. However, there are many scenarios in which that ranking does not optimize for the user's information need. One example is when the user would be satisfied with some limited number of relevant documents rather than needing all of them. We show that in such a scenario, an attempt to return many relevant documents can actually reduce the chances of finding any relevant documents.

In this thesis, we introduce the Expected Metric Principle, which generalizes the Probability Ranking Principle in a way that intimately connects the evaluation metric and the retrieval model. We observe that, given a probabilistic model of relevance, it is appropriate to rank so as to directly optimize these metrics in expectation. We consider a number of metrics from the literature, such as the rank of the first relevant result, the %no metric that penalizes a system only for retrieving no relevant results near the top, and the diversity of retrieved results when queries have multiple interpretations, and we also introduce new metrics of our own. While direct optimization of a metric's expected value may be computationally intractable, we explore heuristic search approaches and show that a simple approximate greedy optimization algorithm produces rankings for TREC queries that outperform the standard approach based on the Probability Ranking Principle.

by Harr Chen. S.M.
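The greedy optimization the abstract describes can be sketched in a few lines. The code below is an illustrative sketch, not the thesis's implementation: it assumes independent per-document relevance probabilities (the thesis studies richer, dependent models, where greedy orderings can genuinely diverge from the Probability Ranking Principle) and uses the expected reciprocal rank of the first relevant result as the metric to optimize. The function names `expected_mrr` and `greedy_rank` are invented for this example.

```python
def expected_mrr(probs):
    """Expected reciprocal rank of the first relevant document,
    assuming each document i is relevant independently with
    probability probs[i] (a simplifying assumption)."""
    none_before = 1.0  # probability that no earlier document was relevant
    total = 0.0
    for rank, p in enumerate(probs, start=1):
        total += (1.0 / rank) * p * none_before
        none_before *= 1.0 - p
    return total

def greedy_rank(probs, metric=expected_mrr):
    """Approximate greedy optimization: at each position, append the
    remaining document whose inclusion maximizes the expected metric
    of the prefix ranking built so far."""
    remaining = list(range(len(probs)))
    order = []
    while remaining:
        best = max(remaining,
                   key=lambda d: metric([probs[i] for i in order + [d]]))
        order.append(best)
        remaining.remove(best)
    return order

# Under independence this reduces to sorting by descending probability:
# greedy_rank([0.2, 0.9, 0.5]) yields [1, 2, 0].
```

With a dependent relevance model, the `metric` callback would condition each document's probability on the (non-)relevance of the documents already placed, and the greedy ordering can then differ from the probability-sorted one.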