Machine Learning & Scientific Computing Series
Humans and many other intelligent systems (have to) learn from experience, build
models of the environment from the acquired knowledge, and use these models for
prediction. In philosophy this is called inductive inference, in statistics it is called
estimation and prediction, and in computer science it is addressed by machine
learning.
I will first review unsuccessful attempts at, and unsuitable approaches to, a
general theory of induction, including Popper's falsificationism and denial of
confirmation, frequentist statistics and much of statistical learning theory, subjective
Bayesianism, Carnap's confirmation theory, the data paradigm, eliminative induction,
and deductive approaches. I will also debunk some other misguided views, such as
the no-free-lunch myth and pluralism.
I will then turn to Solomonoff's formal, general, complete, and essentially unique
theory of universal induction and prediction, rooted in algorithmic information theory
and based on the philosophical and technical ideas of Ockham, Epicurus, Bayes,
Turing, and Kolmogorov.
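
For orientation, the central object admits a compact statement (the notation below is the standard one from algorithmic information theory, not taken from the talk itself): the universal a priori probability that a binary sequence begins with x is

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},

where U is a universal monotone Turing machine, the sum ranges over programs p whose output starts with x, and \ell(p) is the length of p in bits. Every program consistent with the data contributes (Epicurus), shorter programs carry exponentially more weight (Ockham), and prediction proceeds via the conditionals M(x_{t+1} | x_1 ... x_t) (Bayes), with the machine U and program length supplied by the work of Turing and Kolmogorov.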
This theory provably addresses most issues that have plagued other inductive
approaches, and essentially constitutes a conceptual solution to the induction
problem. Some theoretical guarantees, extensions to (re)active learning, practical
approximations, applications, and experimental results are mentioned in passing, but
they are not the focus of this talk.
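
One representative guarantee, sketched here in standard notation rather than quoted from the talk: if the observed sequence is sampled from any computable measure \mu, then

    \sum_{t=1}^{\infty} \mathbf{E}\big[ (M(1 \mid x_{<t}) - \mu(1 \mid x_{<t}))^2 \big] \;\le\; \tfrac{\ln 2}{2}\, K(\mu),

i.e. the cumulative expected squared prediction error is bounded in terms of the Kolmogorov complexity K(\mu) of the true environment, so M's predictions converge rapidly to the true conditional probabilities.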
I will conclude with some general advice to philosophers and scientists interested in
the foundations of induction.