Bayesian Logic Programs for plan recognition and machine reading
Main Author: | Vijaya Raghavan, Sindhu |
---|---|
Format: | Others |
Language: | en_US |
Published: | 2013 |
Subjects: | Bayesian Logic Programs; Statistical relational learning; Plan recognition; Abductive reasoning; Machine reading; Rule learning; Information extraction; BLPs; SRL; IE; BALPs; Bayesian Abductive Logic Programs |
Online Access: | http://hdl.handle.net/2152/19544 |
Description:
Several real-world tasks involve data that is both uncertain and relational in nature. Traditional approaches such as first-order logic and probabilistic models handle either structured data or uncertainty, but not both. To address this limitation, statistical relational learning (SRL), an area of machine learning that integrates first-order logic with probabilistic graphical models, has emerged in recent years. The advantage of SRL models is that they can handle both uncertainty and structured/relational data; as a result, they are widely used in domains such as social network analysis, biological data analysis, and natural language processing. Bayesian Logic Programs (BLPs), which integrate first-order logic and Bayesian networks, are a powerful SRL formalism. In this dissertation, we develop approaches that use BLPs to solve two real-world tasks: plan recognition and machine reading.
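To make the formalism concrete, the following is a minimal sketch, assuming a toy domain and hypothetical names rather than the dissertation's actual implementation, of a Bayesian clause: a first-order Horn clause paired with a conditional probability table (CPT), whose groundings supply the nodes and parents of a Bayesian network.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# An atom is a predicate with an argument tuple, e.g. ("alarm", ("X",)).
Atom = Tuple[str, Tuple[str, ...]]

@dataclass
class BayesianClause:
    """A first-order Horn clause paired with a CPT giving P(head | body)."""
    head: Atom
    body: Tuple[Atom, ...]
    cpt: Dict[Tuple[bool, ...], float]  # body truth assignment -> P(head = True)

def substitute(atom: Atom, binding: Dict[str, str]) -> Atom:
    """Replace logical variables with constants according to the binding."""
    pred, args = atom
    return pred, tuple(binding.get(a, a) for a in args)

def ground_clause(clause: BayesianClause, binding: Dict[str, str]):
    """Grounding yields one Bayesian-network node (the ground head) whose
    parents are the ground body atoms, annotated with the clause's CPT."""
    node = substitute(clause.head, binding)
    parents = tuple(substitute(b, binding) for b in clause.body)
    return node, parents, clause.cpt

# Toy clause: an alarm at a house probabilistically indicates a burglary.
burglary_clause = BayesianClause(
    head=("burglary", ("X",)),
    body=(("alarm", ("X",)),),
    cpt={(True,): 0.7, (False,): 0.01},
)

print(ground_clause(burglary_clause, {"X": "house1"}))
# (('burglary', ('house1',)), (('alarm', ('house1',)),), {(True,): 0.7, (False,): 0.01})
```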
Plan recognition is the task of predicting an agent’s top-level plans based on its observed actions. It is an abductive reasoning task that involves inferring cause from effect. In the first part of the dissertation, we develop an approach to abductive plan recognition using BLPs. Since BLPs employ logical deduction to construct the networks, they cannot be used effectively for abductive plan recognition as is. Therefore, we extend BLPs to use logical abduction to construct Bayesian networks and call the resulting model Bayesian Abductive Logic Programs (BALPs).
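As a rough illustration of why abduction rather than deduction fits this task, the sketch below assumes candidate top-level plans that could have produced the observed actions; the rules, predicates, and the simple count of explained observations are assumptions for illustration only. BALPs instead build a Bayesian network over the abductive proofs and rank competing explanations by probabilistic inference.

```python
from typing import Dict, List, Tuple

# Rules read "observed action <- hidden plan": if an agent is pursuing a
# top-level plan (the cause), it performs the listed action (the effect).
Rule = Tuple[str, str]  # (action template, plan template); variable "X" is the agent

RULES: List[Rule] = [
    ("go(X, store)",   "shopping(X)"),
    ("go(X, airport)", "travelling(X)"),
    ("get(X, ticket)", "travelling(X)"),
]

def abduce(observations: List[str]) -> Dict[str, int]:
    """Assume every plan whose rule could have produced an observed action,
    counting how many observations each assumed plan explains."""
    assumptions: Dict[str, int] = {}
    for obs in observations:
        agent = obs.split("(")[1].split(",")[0]  # constant bound to X
        for action, plan in RULES:
            if action.replace("X", agent) == obs:
                key = plan.replace("X", agent)
                assumptions[key] = assumptions.get(key, 0) + 1
    return assumptions

print(abduce(["go(john, airport)", "get(john, ticket)"]))
# {'travelling(john)': 2} -- a single assumed plan explains both observations
```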
In the second part of the dissertation, we apply BLPs to the task of machine reading, which involves automatic extraction of knowledge from natural language text. Most information extraction (IE) systems identify facts that are explicitly stated in text. However, much of the information conveyed in text must be inferred from what is explicitly stated since easily inferable facts are rarely mentioned. Human readers naturally use common sense knowledge and “read between the lines” to infer such implicit information from the explicitly stated facts. Since IE systems do not have access to common sense knowledge, they cannot perform deeper reasoning to infer implicitly stated facts. Here, we first develop an approach using BLPs to infer implicitly stated facts from natural language text. It involves learning uncertain common sense knowledge in the form of probabilistic first-order rules by mining a large corpus of automatically extracted facts using an existing rule learner. These rules are then used to derive additional facts from extracted information using BLP inference. We then develop an online rule learner that handles the concise, incomplete nature of natural-language text and learns first-order rules from noisy IE extractions. Finally, we develop a novel approach to calculate the weights of the rules using a curated lexical ontology like WordNet.
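The following is a hypothetical sketch of the general idea rather than the dissertation's pipeline: one weighted first-order rule is applied to two extracted facts to infer an implicit fact, scored here with a naive confidence product. BLP inference instead constructs a Bayesian network from such rule applications and computes marginal probabilities over the inferred facts, with rule weights learned from data or derived from a lexical ontology such as WordNet.

```python
from typing import Dict, Tuple

Fact = Tuple[str, str, str]  # (predicate, arg1, arg2)

# Facts produced by an IE system, with hypothetical extractor confidences.
extractions: Dict[Fact, float] = {
    ("employs", "ibm", "alice"): 0.9,
    ("hasHeadquarters", "ibm", "armonk"): 0.8,
}

# A learned probabilistic rule, weight roughly P(head | body):
#   employs(C, P) ^ hasHeadquarters(C, L) => livesIn(P, L)
RULE_WEIGHT = 0.6

def derive_lives_in(facts: Dict[Fact, float]) -> Dict[Fact, float]:
    """Join the two body predicates on the shared company argument and score
    each inferred fact with a naive product: rule weight * premise confidences."""
    inferred: Dict[Fact, float] = {}
    for (p1, company1, person), conf1 in facts.items():
        for (p2, company2, city), conf2 in facts.items():
            if p1 == "employs" and p2 == "hasHeadquarters" and company1 == company2:
                inferred[("livesIn", person, city)] = RULE_WEIGHT * conf1 * conf2
    return inferred

print(derive_lives_in(extractions))
# {('livesIn', 'alice', 'armonk'): 0.432}
```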
Both tasks involve inference and learning from partially observed or incomplete data. In plan recognition, the underlying cause, i.e. the top-level plan that produced the observed actions, is never observed. Further, only a subset of the executed actions can be observed by the plan recognition system, resulting in partially observed data. Similarly, in machine reading, information that is only implicitly stated is rarely observed directly in the data. In this dissertation, we demonstrate the efficacy of BLPs for inference and learning from incomplete data. Experimental comparisons on several benchmark data sets for both tasks demonstrate the superior performance of BLPs over state-of-the-art methods.