Using Modern Information Theory to Develop a

Quantitative Philosophy of Science

 

Douglas S. Robertson

 

Cooperative Institute for Research in Environmental Sciences

University of Colorado

Boulder, CO 80309

 e-mail: Douglas.Robertson(at-sign)Colorado.edu

 

The modern quantitative theory of information provides us with a set of powerful mathematical tools that allow us to develop a novel quantitative approach to the philosophy of science. After all, science is fundamentally a set of effective methods for dealing with information. A quantitative information-theoretic analysis of those methods can provide the key to developing an interesting, useful, and quantitative philosophy of science.
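What it means to treat information quantitatively can be previewed with a minimal sketch (an illustration of my own, not part of the formal argument): Shannon's entropy assigns a message a definite number of bits per symbol, based on the frequency of its symbols.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy, in bits per symbol, of a message's character distribution."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four equally likely symbols carry 2 bits per symbol;
# a message of one repeated symbol carries 0 bits per symbol.
print(shannon_entropy("abcd"))
print(shannon_entropy("aaaa"))
```

The point of the sketch is simply that "how much information?" is a question with a numerical answer, which is what makes the quantitative program described here possible.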

 

Quantitative information theory was developed in the twentieth century by theorists including Leo Szilard, Norbert Wiener, Claude Shannon, Andrei Kolmogorov, and Gregory Chaitin. Philosophers who tried to devise a philosophy of science before this theory existed were in a position somewhat analogous to that of early physicists, from Thales and Aristotle through Galileo and Huygens: those physicists lacked the critical mathematical tools of Newton's calculus that were needed to make physics truly quantitative for the first time. Similarly, philosophers working before the development of quantitative information theory lacked the critical mathematical tools that were needed to make philosophy quantitative for the first time. Information theory plays a role in the philosophy of science analogous to the role of calculus in physics.

 

The centerpiece of an information-based philosophy of science is the fundamental insight from information theory that information is a conserved quantity, like energy and momentum. And just as one cannot understand physics without knowing that energy is a conserved quantity, one similarly cannot understand philosophy without knowing that information is a conserved quantity. It is impossible to overstate the importance of these conservation laws.

 

Classical philosophy of science and epistemology deal with questions of the form: What do we know and how do we know it? A quantitative information-theoretic approach allows us instead to frame questions of the form: How much do we know? How much can we know? How much do we need to know?

 

Unlike the classical questions, these new questions actually have answers, quantitative answers, although admittedly the answers may be coarse at first: often we can determine only whether a certain quantity of information is finite or infinite. Framing epistemological questions in this quantitative fashion leads to important insights into the nature of science itself. One of the important questions that we will examine is whether a “Theory of Everything” in physics can be created with a finite quantity of information.

 

We will show how a single error by the logical positivists on this particular question led to absurdities such as relativism and post-modernism, and how easily that error can be corrected. A quantitative philosophy of science will also provide something that Thomas Kuhn conspicuously lacked in his famous discussion of “paradigm shifts”: Kuhn was unable to give a precise quantitative definition of a paradigm because he lacked the information-theoretic tools that can provide one. Information theory also makes famously difficult topics such as Goedel’s celebrated incompleteness theorem much simpler, much easier to understand, and, as Chaitin put it, “almost obvious.” One quantitative implication of Goedel’s theorem is that a “Theory of Everything” for mathematics cannot be created with any finite quantity of information; therefore every mathematical system based on a finite set of axioms must be incomplete.
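Chaitin's central idea, which the first page below develops in detail, can be previewed with a small sketch of my own (illustrative only): algorithmic information theory measures the information content of an object by the length of the shortest program that generates it. That shortest length (the Kolmogorov complexity) is uncomputable, but an ordinary general-purpose compressor gives a computable upper bound on it.

```python
import os
import zlib

# A patterned string is generated by a very short program
# ("print 'ab' 5000 times"), so its algorithmic information content is tiny.
patterned = b"ab" * 5000

# Random bytes admit no program much shorter than the data itself.
random_data = os.urandom(10000)

# A compressor bounds these information contents from above:
print(len(zlib.compress(patterned)))    # far fewer than 10000 bytes
print(len(zlib.compress(random_data)))  # close to 10000 bytes
```

The contrast between the two compressed lengths is the quantitative distinction between "lawful" and "random" data that underlies the AIT-based definitions of scientific method discussed in the pages below.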

 

The following pages show how to develop this new quantitative, information-theoretic approach to the philosophy of science. They can be read in any order, but I recommend the order listed, because later pages often depend on information in earlier ones, particularly the first three. Those first three pages cover the conceptual background needed to define a quantitative, information-theoretic philosophy of science; the remaining pages explore various applications of these ideas.

 

-- Introduction to Gregory Chaitin's Algorithmic Information Theory (AIT)

-- Discussion of Compression of Information

-- Philosophy of Science: AIT-based Definitions of Scientific Methods

-- Goedel's Incompleteness Theorem Made Simple (and Almost Obvious) Using AIT

-- Theories of Everything (TOEs)

-- Phase Changes and Paradigm Shifts

-- Uncomputable Numbers

-- Free Will and the Turing Test

-- Information Theory and the Evolution of Consciousness and Free Will

-- Information Theoretic Analysis of the Development of Human Civilization