Summary: | Discovering efficient algorithms is central to computer science. In this thesis, we aim to discover efficient programs (algorithms) using machine learning. Specifically, we claim that we can efficiently learn programs (Claim 1) and learn efficient programs (Claim 2). In contrast to universal induction methods, which learn programs from examples alone, we introduce program induction techniques which additionally use background knowledge to improve learning efficiency. We focus on inductive logic programming (ILP), a form of program induction which uses logic programming to represent examples, background knowledge, and learned programs. In the first part of this thesis, we support Claim 1 by using appropriate background knowledge to efficiently learn programs. Specifically, we use logical minimisation techniques to reduce the inductive bias of an ILP learner. In addition, we use higher-order background knowledge to extend ILP from learning first-order programs to learning higher-order programs, including support for higher-order predicate invention. Both contributions reduce learning times and improve predictive accuracies. In the second part of this thesis, we support Claim 2 by introducing techniques to learn minimal cost logic programs. Specifically, we introduce Metaopt, an ILP system which, given sufficient training examples, is guaranteed to find minimal cost programs. We show that Metaopt can learn minimal cost robot strategies and minimal time complexity logic programs, such as quicksort, including non-deterministic programs. Overall, the techniques introduced in this thesis open new avenues of research in computer science and raise the potential for algorithm designers to discover novel efficient algorithms, for software engineers to automate the building of efficient software, and for AI researchers to machine learn efficient robot strategies.
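
As a minimal illustrative sketch, not taken from the thesis itself, the ILP setting described above can be written directly in Prolog: examples, background knowledge, and the learned program are all logic programs. The last/2 task and the predicate names below are assumptions chosen purely for illustration.

  % Positive examples of a hypothetical target predicate last/2.
  pos(last([a,b,c], c)).
  pos(last([x,y], y)).

  % Background knowledge supplied to the learner.
  head([H|_], H).
  tail([_|T], T).

  % A program an ILP system could learn from the examples and background knowledge above.
  last(A, B) :- tail(A, []), head(A, B).
  last(A, B) :- tail(A, T), last(T, B).

Under this representation, learning amounts to searching for a set of clauses which, together with the background knowledge, entails the positive examples; a cost-sensitive learner such as Metaopt additionally prefers hypotheses that minimise a given cost function over programs, such as one reflecting running time.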