On the Foundations of Inductive Reasoning (FIR)
Description
Humans and many other intelligent systems (have to) learn from experience, build models of their environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning. The problem of how we (should) do inductive inference is of utmost importance in science and beyond. There are many apparently open problems regarding induction: the confirmation problem (black raven paradox), the zero p(oste)rior problem, reparametrization invariance, and the old-evidence and updating problems, to mention just a few. Solomonoff's theory of universal induction, which combines Occam's and Epicurus' principles, Bayesian probability theory, and Turing's universal machine [Hut05], presents a theoretical solution [Hut07].
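
To give a concrete feel for the proposed solution: Solomonoff's universal prior assigns a binary sequence x the probability M(x) = sum over all programs p whose output starts with x of 2^(-length(p)), i.e., a Bayesian mixture that weights hypotheses by simplicity (Occam) while ruling out none (Epicurus). Below is a minimal sketch of this idea in Python. The true mixture over all programs is incomputable, so the tiny Bernoulli hypothesis class and its hand-picked "description lengths" are illustrative assumptions, not Hutter's actual construction.

    # Minimal sketch of Bayesian sequence prediction with an Occam-style prior.
    # The hypothesis class and complexities are hypothetical illustrations.

    # Hypotheses: Bernoulli coins, with "description lengths" (in bits) chosen
    # so that the simpler hypothesis gets the larger prior weight.
    hypotheses = {
        "fair":       (0.5, 1),   # (P(next bit = 1), complexity in bits)
        "biased-0.9": (0.9, 3),
        "biased-0.1": (0.1, 3),
    }

    # Occam prior: w(H) proportional to 2^(-complexity(H)), then normalized.
    prior = {h: 2.0 ** -k for h, (_, k) in hypotheses.items()}
    total = sum(prior.values())
    posterior = {h: w / total for h, w in prior.items()}

    def update(posterior, bit):
        """Bayes' rule: P(H | data) is proportional to P(data | H) * P(H)."""
        post = {}
        for h, p in posterior.items():
            theta = hypotheses[h][0]
            post[h] = p * (theta if bit == 1 else 1.0 - theta)
        total = sum(post.values())
        return {h: p / total for h, p in post.items()}

    def predict_one(posterior):
        """Mixture prediction: P(next = 1) = sum_H P(1 | H) * P(H | data)."""
        return sum(p * hypotheses[h][0] for h, p in posterior.items())

    # Observe a mostly-ones sequence; posterior mass migrates to "biased-0.9".
    for bit in [1, 1, 1, 0, 1, 1, 1, 1]:
        posterior = update(posterior, bit)
    print(posterior, predict_one(posterior))

Running the sketch shows posterior mass shifting to the hypothesis that best trades off data fit against description length; this trade-off, scaled up to all computable hypotheses, is the mechanism behind the confirmation results analyzed in [Hut07].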
Goals
- Elaborate on some of the solutions presented in [Hut07].
- Present them in a generally accessible form, illustrated with many examples.
- Address other open induction problems.
Requirements
- ability to think sharply
- reasonable to good math skills
- good writing skills
- interest in the philosophical foundations of induction
Background Literature
- [Hut05] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005.
- [Hut07] M. Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384(1):33-48, 2007.
Gain
- getting acquainted with Bayesian reasoning and algorithmic information theory
- interdisciplinary work between philosophy, computer science, and statistics
- writing a small paper (e.g. for a philosophy journal)