
Telling Tales
An elementary introduction to the Philosophy of Science

By David Gardiner

This article may be reproduced in whole or in part for any non-commercial purpose provided that authorship is acknowledged and credited. The copyright remains the property of the author.

Have you ever wondered why human beings tell stories? Has there ever been a human culture that didn’t?

There is something compulsive about this “narrative drive” in human beings. We can no more resist it than we can suppress the impulse to breathe or to walk on two legs. We are story-telling animals in the same way that wolves are pack animals. Not only are we story-telling animals, it is our story-telling skills that have (to paraphrase Reginald Perrin’s boss CJ) got us where we are today.

Suppose for a moment that we did not tell any stories - that we constructed no narrative to accompany our experience. What would we see when we looked out into the world? All that we would “see” (or more accurately, experience) would be raw data. A meaningless flux of light and dark, colour and shape, movement and stillness. It’s only when we start to interpret, to tell a story about the raw data, that we can perceive the world at all. That undulating mass of greenish blue is water. Those purplish unmoving shapes are mountains. The green and brown splodges are trees, and so on. We recognise pattern and regularity, cause and effect, and construct from it a meaningful world. Sentience, consciousness itself, is based on pattern recognition. Some of it is carried out for us by the organs of perception themselves. Our eyes interpret the world as containing objects: we have to learn the names for them and how to distinguish one from another and how they are related causally, but we cannot reject or escape from that basic level of interpretation that makes us duck to avoid something thrown towards us or change direction so as to avoid walking into a tree. We are “condemned to meaning” by our very biology. This level of interpretation and narrative is something that we obviously share with the animal kingdom.

But we human beings take our story-telling several stages further. We are so very good at pattern recognition that we can see patterns that are not immediately obvious. Not only can we do this but we can use language, a system of symbols, to describe to others of our species the patterns that we have discovered. The result is what we call a culture; a human group with a shared symbol system and a shared set of stories about the world. We could call these shared stories “theories” or “beliefs”, or if we think they are particularly useful or if we want to flatter them we might call them something like “knowledge” or even “wisdom”. For example, the simplest hunter/gatherer societies have beliefs (stories that everyone accepts) about the progression of the seasons, the connection between sexual activity and pregnancy and the tendency of old people to become frail and die. They believe such basic information to be highly reliable, and so, by and large, do we. They have used their experience, no doubt over more than one lifetime, and their language (their shared symbol system) to identify clear patterns and regularities in the world around them. When they eventually invent a written version of their symbol system, their ability to develop a culture spanning many generations is enormously enhanced. Data recorded over years and centuries can be used to reveal patterns that would be invisible to short-lived members of an oral culture. Slow changes in the climate, declines in particular animal populations, astronomical events like eclipses, passages of comets and regular cycles of droughts and famines all become much more understandable and predictable. Even more importantly, a written culture lays the foundation for a critical tradition, a community in which theories are tested and discussed and either accepted or rejected on the basis of shared rational standards. 
In addition to all this, once a society has invented numbers it discovers that manipulation of these numbers can reveal patterns much too complex or subtle for human beings to discern using their unaided senses. Once you can apply numbers to weight and to time you have the basic tools for understanding Newton’s laws of motion. When you start counting incidents and events you have the basis for statistical theories that allow you to predict likely outcomes. And so on.

But human beings have evolved such a powerful narrative drive that it can on occasion take them over. The problem is that as a species we are almost incapable of saying: “I don’t know”. The need to tell a story of some kind is an irresistible human compulsion. How often have you asked someone for directions and received a load of complex instructions which later experience reveals to be total nonsense? It takes a very sophisticated and confident person to be able to admit to areas of ignorance. Thomas Henry Huxley said that the most reliable measure of a man’s scientific attitude was his willingness to admit how much he didn’t know. Unfortunately this is an attitude that comes very late in the history of human development, and even then only to a small minority.

Not only individuals but whole cultures feel a compulsion to offer answers when they don’t have any relevant facts. These stories, which are made up to cover gaps in our knowledge, are called myths. Probably the most all-pervasive myths are what are called creation myths, because (even today) nobody can really explain how the universe began, why there should be something rather than nothing, and yet it seems such a fundamental question that surely we must have SOME story to tell about it. The Bantu of Central Africa tell how in the beginning there was only darkness, water, and the great god Bumba. One day Bumba, in pain from a stomach ache, vomited up the sun. The sun dried up some of the water, leaving the land. Still in pain, Bumba vomited up the moon, the stars, and then some animals: the leopard, the crocodile, the turtle, and, finally, some men. The Ekoi of Southern Nigeria tell us that in the beginning there were two gods, Obassi Osaw and Obassi Nsi. The two gods created everything together. Then Obassi Osaw decided to live in the sky and Obassi Nsi decided to live on the earth. The god in the sky gives light and moisture, but also brings drought and storms. The god of the earth nurtures, and takes the people back to him when they die. One day long ago Obassi Osaw made a man and a woman, and placed them upon the earth. They knew nothing so Obassi Nsi taught them about planting and hunting to get food. I think most of you will already know the Christian genesis myth so I won’t restate it.

What all these myths have in common is that they make claims to knowledge that they cannot justify. Myths can never really coexist with a critical tradition because they are not at root evidence-based. They are knowledge claims rooted in faith and priestly authority. When evidence is uncovered that bears on them, generally speaking it is the myth that has to give way. Thus, for example, the Bible-based claims that the world is less than 10,000 years old are no longer seriously advanced even by fundamentalist Christians. Neither is the proposition that the sun goes around the earth, although about 360 years ago its denial almost cost Galileo his life at the hands of the Inquisition. Darwin’s theory of evolution is a borderline case where the fundamentalists still feel there might be some hope of retaining the myth, but the straw to which they cling is rapidly pulling away from the soil.

I want to argue therefore that there are at least two fundamentally different kinds of stories about the world that we human beings tell one another: those that are based on evidence and open to disproof (notice I have said disproof and not proof), and those that are based on faith or non-rational belief of one kind or another and are not. There are also of course consciously created stories of the kind that you find on writers’ sites like this one, but these (generally speaking) make no claim whatever to literal truth. They can and do reveal truths of a different sort that are unique to art. These are usually “insights”, perspectives of one kind or another on the human condition. There is no ambiguity about the status of fiction, but there is considerable ambiguity about that of the other two kinds of stories.

The problem is that it is only quite late in the development of a human society that people begin to understand the distinction between traditional beliefs and beliefs that are arrived at by experiment, research and the rigorous application of reason. The name that we have for this latter category of beliefs is “science”.

I want to make it very clear that the distinction between science and non-science is not that of truth and falsity. That is an entirely separate issue. The distinction lies in the way that the two belief systems arrive at their conclusions.

There have been, in my estimation, two truly great philosophers of science in human history. The first was Aristotle (384-322 B.C.); the second was Karl Popper (1902-1994 A.D.). I had the privilege of meeting and talking to Sir Karl on several occasions when he was Professor of Logic and Scientific Method at the London School of Economics and I was a struggling student trying to complete a Philosophy Ph.D., which I never did. What these two men did was attempt to spell out for us this essential difference between the stories that make up the scientific view of the world and the stories that make up other belief systems and traditions.

Both philosophers agreed that science had to be evidence based, and experiments were needed to test theories and distinguish between competing theories. Both agreed that scientific knowledge was that knowledge which could be constructed out of the results of these rigorous experiments, in the light of public debate and peer criticism of all kinds (Plato and his pupil Aristotle could be said to have invented the notion of academic freedom). Popper accepted the system of logic invented by Aristotle, based on the syllogism (all Xes are Y, Z is an X, therefore Z is Y). Where they disagreed was in the actual process by which experimental evidence might be used to arrive at scientific (rational) conclusions. Aristotle believed that it was possible to arrive at correct scientific theories by what he called a process of “induction”. What this meant in the simplest case was that you looked at a very large number of instances in which the theory yielded the predicted result. Each new observation constituted a “verification” of the theory. When you had collected enough of these instances then you could be confident in declaring the theory true or proven.

Taking a simple example, suppose that I espouse the theory that every object thrown into the air will eventually come down again. I observe an enormous number of instances of balls, stones and other objects thrown into the air and even bullets fired into the air by guns, and on every single occasion that I make an observation the thrown object does indeed return to earth. By the law of induction I am entitled to say that my theory has been proven or “verified”. But what exactly is this “law of induction”?

In fact there is an enormous problem in stating this “law” without recourse to completely arbitrary conditions. Are a thousand instances enough or do we need two thousand? The more we think about it the more the “law of induction” is revealed as a psychological entity: it depends entirely on how difficult we are to convince, on the standards of evidence and proof we choose to apply.

For two thousand years the “law of induction” was accepted because there was nothing more rigorous available. If we keep trying something and we always get the same result then the theory it’s based on must be correct. It seems like common sense. Remaining with our example, the day arrives when a super-gun is invented that fires an object into the air at more than 7 miles per second (“escape velocity”) and it doesn’t come down at all. Oh dear. Where did we go wrong? Was there something wrong with the law of induction? Has common sense failed us? Enter Karl Popper!
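For readers who want to check that figure of 7 miles per second, escape velocity follows from the standard formula v = √(2GM/R). Here is a minimal sketch in Python, using textbook values for the gravitational constant and the Earth’s mass and radius (none of which appear in the essay itself):

```python
import math

# Textbook values (assumed, not taken from the essay):
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # radius of the Earth, m

# Escape velocity: v = sqrt(2GM/R)
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # metres per second
v_escape_miles = v_escape / 1609.344             # miles per second

print(round(v_escape), "m/s")            # about 11186 m/s
print(round(v_escape_miles, 2), "mi/s")  # about 6.95 mi/s
```

The result, roughly 11.2 kilometres per second, is indeed just under 7 miles per second.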

Karl Popper’s profoundest contribution to our understanding of the scientific method was so simple, as he said himself, that it was almost inconceivable that nobody had thought of it before. The way to arrive at reliable theories isn’t to go on and on repeating experiments whose outcome we already know and which will “verify” our beliefs. It is to design experiments which will FALSIFY our beliefs should they yield a particular answer. We know that objects return to earth if thrown into the air over a particular range of velocities: let’s try velocities outside that range. Perhaps if the velocity is high enough the object will not return. Perhaps the theory can be FALSIFIED.

Popper’s point, as I have intimated, is at heart childishly simple. No number of “verifications”, however large, can “prove” a theory. But JUST ONE falsification can disprove it. Suddenly the perceived structure of scientific knowledge is turned on its head. Science does NOT consist of a body of “established facts” or “verified theories”: there are NO scientific facts at all, NO verified theories. Scientific knowledge is permanently provisional. Whatever theory we hold dear today might be overturned tomorrow by the very next experiment that somebody designs to test it. There are absolutely no scientific certainties. No scientific theory is greater than the sum of the evidence supporting it, and a single counter example is all that is needed to show it to be false, or true only within a particular range of instances (velocities lower than escape velocity, perhaps).

This is such an important insight that it deserves restating. All scientific knowledge is permanently provisional. The science of a given moment is a freeze-frame picture of those theories which have not up to that time been disproved and replaced by theories of greater generality. Science is an enterprise, not a body of knowledge. Arguably the greatest enterprise in which human beings engage.

Now at last we can see the difference between the stories that make up what we call science and the stories that make up other parts of human culture that we do not call science.

Has Popper’s account gone unchallenged? Most certainly not. The most powerful challenge came from the defenders of Marxism and Marxist social science, because Popper’s account of science denied their whole world view the scientific validity that they claimed for it. The claims of Marxism in his view were not of a kind that admitted of falsification. They were metaphysical (un-falsifiable, non-evidence-based) claims, which did not mean they were false but did mean they were non-scientific, and Marxists wanted that scientific status for their claims very badly. In defence of their views they (Thomas Kuhn most prominent among them) came up with an alternative account of science using the notion of “scientific paradigms” and “paradigm shift”, and also of “normal science” which was what practitioners of science did day-to-day, leaving these paradigms unchallenged.

The Kuhnian introduction of the “paradigm” notion was in fact an attempt to deny any fundamental difference between the scientific account of the world and other accounts of the world which Popper ruled out as non-scientific or ideological. In essence, Kuhn’s claim was that science adopted different “paradigms”, or models of what counted as a scientific explanation, and that these models with the passage of time became replaced by other models which better suited the convenience and prejudices of theorists but had no more (or less) validity than the ones they replaced. Thus one generation, under the spell of Newton, sought out explanations in terms of colliding particles and transfers of momentum; another under the spell of Niels Bohr and the early quantum theorists looked for explanations in terms of probability and quantum levels; while elsewhere social scientists under the spell of Freud looked for explanations in terms of unconscious motivations and suppressed drives and ego defence mechanisms. Science for Kuhn was an activity administered by powerful interest groups who stipulated what the current paradigm should be and laid down the rules for the “normal science” practised by the lower orders of the hierarchy. It was, in fact, a culture-bound and socially relative activity.

Kuhn and his followers had to contend with the logical weakness of any form of relativism (if one theory is as good as another why should I accept yours?) but the Popper/Kuhn debate lives on among their followers to the present day, having outlived to a large extent the crude ideological Marxism from which it sprang. Many attempts have been made to reconcile the two views and there is an enormous literature out there for anybody who wants to pursue the issue. But speaking for myself, I am satisfied that the difference between the stories that scientists tell us and the stories that we are told by other people is a profound one. I think that the key to the nature of scientific thought is exactly what Popper said it was: humility in the face of evidence and acknowledgement of the limitations and fallibility of all our beliefs. A willingness to preface every statement not with: “Once upon a time…” but with: “Unless I am mistaken…”.

This attitude of mind did not begin with Popper of course. In 1727, the year of his death, Sir Isaac Newton wrote: “I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.”

There is a more modern example of a scientist expressing the essence of the scientific attitude of mind that I cannot resist quoting. Prof. Freeman Dyson, speaking of the theory of quantum electrodynamics, which he had helped to construct, some two decades after the event: "It was just patched together out of bits and pieces, in order to explain experiments. We didn't expect it to last. Every time there was a new experiment, we all expected that the theory would be proved wrong in some interesting way. Instead each experiment still agrees with the theory. That's sort of a disappointment."

