Understanding Beliefs
I became interested in the subject of beliefs through my research in artificial intelligence (AI) and robotics. Many AI systems represent their knowledge as declarative statements in special computer-understandable languages. AI researchers often call these statements “beliefs.” For example, Watson, the IBM computer system that competed on and won the quiz program Jeopardy!, had access to millions of declarative statements drawn from huge databases of textual information. And the autonomous automobiles being developed by Google have access to the California vehicle code, among other things, to help them drive safely and avoid infractions. All of this textual data can be thought of as sets of beliefs.
I think we humans are pretty much in the same boat as these kinds of computer systems. We have lots and lots of beliefs, and we use them for many purposes: to guide our actions, to make predictions, to understand and explain things, to inspire confidence (or trepidation), and even to entertain. In my new book, Understanding Beliefs, recently published by MIT Press, I describe these various roles of beliefs, how we acquire them, and how we can evaluate them. I also examine the topics of “truth” and “reality” from the points of view adopted in the book. What follows is a summary of some of these ideas.
Many of our beliefs come with degrees of confidence, or strengths. We believe some propositions very, very strongly. We are even tempted to say that those beliefs are “true.” Others are less firmly held, and we might think of them as only probable, not certain. We often use probabilities as measures of belief strength. For example, following a prediction by a weather forecaster, we might say that there is a 70% chance of rain tomorrow. Or we might quote two-to-one odds that a particular football team will make it to the Super Bowl. Watson, too, used probabilities as measures of its certainty when it competed on Jeopardy!. For example, it had a confidence of 85% when it answered “Sir Christopher Wren” as the designer of Pembroke College.
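To make the arithmetic concrete, here is a minimal sketch, in Python, of how odds and probabilities express the same belief strength. The function names are my own invention; the numbers come from the examples above.

```python
# A minimal sketch of the odds/probability conversions mentioned above.
# The function names are illustrative, not from the book.

def odds_to_probability(a, b):
    """Convert 'a-to-b' odds in favor into a probability."""
    return a / (a + b)

def probability_to_odds(p):
    """Convert a probability into an odds ratio in favor, p : (1 - p)."""
    return p / (1 - p)

print(odds_to_probability(2, 1))  # two-to-one on the team -> about 0.67
print(probability_to_odds(0.70))  # a 70% chance of rain -> about 2.33-to-1
```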
It is important to examine and evaluate our beliefs because acting on them can have profound effects, both good and bad. One way to evaluate a belief is to consider the opinions of experts. For example, to evaluate whether the world’s climate is changing, we might consider what the Intergovernmental Panel on Climate Change (IPCC, http://www.ipcc.ch/) has concluded.
Experts typically consider two things when they evaluate a proposition. First, they consider its consequences. For example, if the climate is changing, what are the likely consequences? Are the polar ice caps melting? Are deciduous trees budding earlier and shedding leaves later in the world’s temperate zones? If those and similar propositions have high probability, based on evidence, they help raise the probability of climate change itself – giving it extra “points,” as it were.
Second, they consider its likely causes and explanations. If the climate is changing, what might be causing it? Is atmospheric carbon dioxide, a “greenhouse gas,” increasing? If the causes and explanations for climate change themselves have high probability, based on evidence, they too help raise the probability of climate change. So, in addition to taking into account the beliefs of experts, we can become our own experts by carefully considering consequences and explanations.
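Here is a minimal sketch of the updating at work when a predicted consequence is actually observed. The numbers are invented purely for illustration; they are not drawn from the IPCC or any other source.

```python
# A minimal sketch of Bayesian updating: observing a predicted consequence
# (say, earlier budding) raises the probability of the hypothesis (climate
# change). All numbers here are invented for illustration.

prior = 0.5                 # initial strength of belief in the hypothesis
p_evidence_if_true = 0.9    # P(consequence observed | hypothesis true)
p_evidence_if_false = 0.3   # P(consequence observed | hypothesis false)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
posterior = p_evidence_if_true * prior / p_evidence

print(posterior)  # 0.75 -- the evidence gave the belief extra "points"
```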
Beliefs, their consequences, and their explanations can all be linked together in a network of connections. In AI, such structures are called Bayesian belief networks, and the probability of each “belief node” in such a network affects the probabilities of all of the other belief nodes. Belief networks have many applications, from genomics to medical diagnosis. Whether in humans or in machines, the strength of one belief affects the strengths of others, especially the strengths of closely related beliefs.
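As a toy illustration of that propagation, here is a hand-rolled three-node network. The structure and probabilities are my own inventions, and a real application would use a dedicated library rather than enumeration, but the sketch shows how evidence at one node shifts the strengths of the others.

```python
# A toy three-node belief network, hand-rolled for illustration.
# Invented structure: ClimateChange -> IceMelt, ClimateChange -> EarlyBudding.

from itertools import product

P_CC = {True: 0.5, False: 0.5}   # prior on ClimateChange
P_ICE = {True: 0.8, False: 0.2}  # P(IceMelt=yes | ClimateChange)
P_BUD = {True: 0.7, False: 0.4}  # P(EarlyBudding=yes | ClimateChange)

def joint(cc, ice, bud):
    """Joint probability of one full assignment to the three nodes."""
    p_ice = P_ICE[cc] if ice else 1 - P_ICE[cc]
    p_bud = P_BUD[cc] if bud else 1 - P_BUD[cc]
    return P_CC[cc] * p_ice * p_bud

# Observing IceMelt=yes changes the strengths of BOTH other nodes:
evidence = sum(joint(cc, True, bud)
               for cc, bud in product([True, False], repeat=2))
p_cc = sum(joint(True, True, bud) for bud in [True, False]) / evidence
p_bud = sum(joint(cc, True, True) for cc in [True, False]) / evidence

print(p_cc)   # belief in ClimateChange rises from 0.50 to 0.80
print(p_bud)  # belief in EarlyBudding rises from 0.55 to 0.64, via the shared cause
```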
As I mentioned earlier, when we hold a belief with very, very high strength, we say it is “true.” I think that saying that something is true is just a way of saying we hold it with high strength. I don’t think there is any way to define truth as being something “out there,” independent of belief. Philosophical “realists” claim (among other things) that a statement is “true” when its referents in “reality” hold. For example, according to the correspondence theory of truth, the statement “Coal is black” is true if the substance in reality denoted by the word “coal” has the property in reality denoted by the word “black.” But objects, substances, and properties are themselves just words, invented by us to help us carve up the world. As a slang expression would have it, reality “doesn’t know from” coal and black; it just is. We cannot apprehend reality directly; we can only attempt to describe it with theories and beliefs.
People who have read and commented on my book continue to ask questions like, “But how do we decide whether or not something is really true?” Even to ask such a question betrays a realist philosophical orientation. Instead, they should ask, “How can we best evaluate the strength of a belief?” Some people even complain that my book has not given arguments against philosophical realism. My attitude is: I don’t have to. The philosophical position espoused in my book, that there is no such thing as Truth (capital “T”), is a very conservative one, a null hypothesis, so to speak. I think it’s up to those who want to argue against a null hypothesis to give convincing arguments for their less conservative positions. So far, all the arguments I have heard are circular, depending, as they do, on tacitly accepting a realist position.
If you read my book, I’d be very happy to receive comments, criticisms, and suggestions! I may respond to some of them on this blog.
Nils Nilsson
Home page: http://ai.stanford.edu/~nilsson/
Email: nilsson@cs.stanford.edu