The Language of Science and Skepticism
By Peter Ellerton / 08.25.2012
Lecturer in Critical Thinking
The University of Queensland
As scientists, one of our responsibilities should be to promote clarity. A lot of problems are caused by an incorrect or incomplete understanding of terms we regularly, and even lovingly, use.
When I use the word “evidence”, what I think I mean is a function of many things, not least my education in science and philosophy.
It’s also the product of many discussions with people about science, superstition, psychology, pseudoscience and subjectivity.
These discussions have added nuance to my understanding of the nature of evidence. They’ve also alerted me to the fact this nature changes in certain circumstances and through certain worldviews. In other words, what I intend to say is sometimes heard as something else entirely.
This type of miscommunication can be bad enough when dealing with someone who isn’t using the terms in a scientific way, but it’s particularly frustrating when it happens when talking to teachers and communicators of science.
I’d like to take a shot, then, at defining some key terms in the name of clarity.
People might think scientific law is about the highest sort of truth you can get; they might think something “proven” scientifically has the status of certainty, which is to say it’s always true: nature will always behave so as to be in accord with this law.
While in some way accurate, that interpretation is fundamentally flawed. It conflates (or worse, ignores) important concepts and creates a brittleness in the public conception of science that erodes confidence and trust.
First and foremost, laws in science are seldom proven: they are demonstrated, and they are demonstrated because they are demonstrable, which is to say they are descriptive.
Newton’s inverse square law of gravity outlines how the force of gravity between two massive objects varies with distance. Basically, if you double the distance, the force is reduced by a factor of four. Triple it and the force reduces by a factor of nine, and so on.
The same relationship with distance holds for the intensity of omnidirectional radiation. What’s significant about a law like this is that while it describes the effect, it does not really explain it.
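The inverse square relationship can be checked numerically. A minimal sketch (the masses and base distance below are arbitrary illustrative values, not data from the article):

```python
# Newton's inverse square law: F = G * m1 * m2 / d**2
# Doubling the distance cuts the force to a quarter; tripling it, to a ninth.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, d):
    """Magnitude of the gravitational attraction between two masses at distance d."""
    return G * m1 * m2 / d ** 2

m1, m2, d = 5.0e24, 7.0e22, 4.0e8  # illustrative masses (kg) and distance (m)

f1 = gravitational_force(m1, m2, d)
f2 = gravitational_force(m1, m2, 2 * d)
f3 = gravitational_force(m1, m2, 3 * d)

print(f1 / f2)  # ratio is 4: double the distance, a quarter of the force
print(f1 / f3)  # ratio is 9: triple the distance, a ninth of the force
```

Note that the code, like the law itself, only describes how the force varies; nothing in it says why.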
Newton himself was famously silent on the question of what gravity was and why it would behave this way. To get an explanation of what gravity is, we needed Einstein. And we needed a theory.
General relativity explains the phenomena associated with gravity by postulating that the presence of mass warps, and hence affects movement through, space-time. This theory – or model – of how the universe works, when “run” through the process of mathematical calculation, produces outcomes that correspond to possible states of the world.
These states are checked against reality to test their veracity. The more times the model produces results that agree with observation, the more confidence we have in the model as an accurate representation of how the world works.
The example above shows nicely the difference between a model and a law: the former is a representation of reality, the latter a descriptive account.
It’s worth noting, of course, that “model” can be both a noun and a verb (and sometimes both at once). We can build a model of the solar system, or we can model weather on a computer. Either way, the terminology holds.
To put this another way, a law describes what happens and to what degree, but if we want to find out why it happens we need a theory – a model that represents reality.
A model can give us a more satisfying insight into the possible mechanisms of the universe – it’s an analogy (for rarely is it completely accurate) that betters our comprehension, as analogies are designed to do.
Both theories and laws have predictive power and are subject to being refined, falsified or confirmed, although in the case of laws, refinement is best done in the light of theoretical change (i.e. explaining the law through the theory or model).
Observing the law
We generalise to laws through observation, and support our generalisations with theoretical understanding. But it can be very tricky to determine that something is true in all cases (we can’t test the potential law in all possible places and at all possible times) or just happens to be true every time we check.
When stating something is universally true (even if parameters need to be defined), we must be very careful to determine whether we mean it’s true because it must be that way, or simply because it happens to be that way.
It may be a necessary condition of the universe that all like charges repel each other. But what about a generalisation such as “all posters are held up by drawing pins”?
The posters in my room and all those in my building are held up by drawing pins, but this hardly seems a necessary condition of posters: surely something else would do the job just as well. These are extreme examples, but many “laws” of nature may not be necessary laws – which seems to suggest they really shouldn’t be called laws in the first place.
Calling something a law certainly does not mean it is unchallengeable.
Laws do not develop from theories. To put it another way, theories do not become laws. I have thrown out science textbooks from several schools because they outline an unrealistic progression: from hypothesis to theory to law.
These three concepts are different creatures, and one does not morph into the other. One of the most significant misunderstandings in science exists because of this type of thinking.
In as much as science can make us sure of anything, we are sure evolution occurred in the manner generally accepted by evolutionary biologists; it is a fact about the world.
Darwin, as is generally known, developed a theory – a model – to explain evolution. This model is natural selection. It’s unfortunate that the lovely phrase “the theory of evolution by natural selection” has been truncated into the misleading, inaccurate, confusing and very wrong phrase “the theory of evolution” – including on this very website.
The “theory of evolution” is wrong for two reasons (when scientists use it they know of what they speak, but this is not my point). First, evolution is not the model – natural selection is. So we immediately conflate two very different ideas – that of evolution and the model of natural selection.
When this is added to the mistaken belief that theories become laws, adherents of young Earth creationism (for there are really no other serious opponents of evolution) can paint evolution as a tentative conclusion, a vague, hand-waving notion: a mindset that culminated in Ronald Reagan’s famous dismissal of evolution as “only a theory”.
The consequences for both the teaching of evolution and the credibility of science are enormous. And yet I have never seen a defender of science articulate this misunderstanding.
Just as a theory is a model, and a law is a generalisation, a hypothesis is a statement about the world that could be true or false.
Moreover, the statement must be testable, which means it must be falsifiable, or inherently disprovable.
Phrased like this, hypotheses seem to have more in common with laws than they do with theories, considering that Newton could easily have hypothesised the inverse square law of gravity without going through any theoretical modelling of gravity.
But, of course, the creative act of devising a model of the universe, or a part of it, is to hypothesise that the world is really like that, and the hypothesis becomes that the model is an accurate representation.
Hypotheses, then, are ways of talking about building theories and laws, but not in the common way of theories being intermediate between hypotheses and laws.
While hypotheses can stand alone or inform both theories and laws, the interplay in practice between various hypotheses, theories and laws is web-like and complex and exists at nearly every level of operation from the experiment of the day to the paradigm of the century.
The idea of a hypothesis-to-theory-to-law progression is seriously flawed, and this needs to be articulated as the root cause of much misunderstanding.
“Prove” comes from the Latin probare, meaning “to test”. It’s also the origin of the word “probe”.
An older term – “proving ground” – for a testing area or trial shows we have not entirely lost that interpretation. But in the everyday use of the term, “proof” has come to indicate certitude.
What remains poorly understood is that “proof”, as such, is a deductive creature that really does not sit comfortably in science (at least not in an affirming sense). In mathematics a proof conveys that, within the bounds of the axioms in use, there is a truth to be discovered or a certainty to be expressed.
For its theoretical claims, and indeed for its laws, inductive science can only boast confirming instances.
Headlines that (routinely) claim “Einstein proved right” would, we know from his own words, make the great man turn in his grave.
He often spoke of the exquisite sensitivity of his theories to falsification, saying that it would not matter how many times experiment agreed with him; it had only to disagree once to prove him wrong (granted, of course, the validity of the experiment, as recent neutrino-based dramas have shown).
The simple fact that we can never test his theories under all conditions in all places at all times creates conclusions that are tentative, even though the level of confidence may be very high.
We may “prove” facts about the world, such as Earth being more or less spherical, but this does not extend to our laws and theories to the extent we might like to think.
So proof works best in science to falsify, not to affirm, though this is the opposite of common belief.
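This asymmetry between confirmation and falsification can be made concrete. A toy sketch, using an invented universal claim (“every observed value is positive”) purely for illustration: no finite run of passing checks proves such a claim, but a single failure disproves it.

```python
# A universal claim ("for all x, P(x)") can never be proven by finitely many
# checks, but a single counterexample falsifies it outright.

def claim(x):
    """Toy universal hypothesis: every observed value is positive."""
    return x > 0

def falsified(claim, observations):
    """True as soon as any observation contradicts the claim."""
    return any(not claim(x) for x in observations)

confirming = [3, 1, 7, 42, 5]            # agreement, however repeated, proves nothing
print(falsified(claim, confirming))       # False: the claim survives, tentatively

with_counterexample = confirming + [-2]   # one disagreement is decisive
print(falsified(claim, with_counterexample))  # True: the claim is refuted
```

The surviving claim is not thereby proven: extending the list of confirming observations never changes its status, whereas one counterexample settles the matter permanently.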
If we are clear on the above, we have a better appreciation of what makes an idea scientific, as opposed to pseudo-scientific.
We know that the best scientific hypotheses and theories are those with great explanatory power and high sensitivity to falsification, and that these are often the results of highly creative thinking, as are the experimental attempts to confirm or falsify them.
This is a very beautiful idea, but one that can’t be appreciated unless you know science does not spend its time stamping into place dry facts about the world, but grows as a vigorous and exhilarating human enterprise showcasing the best of collective human achievement.
Clarifying these ideas will, I hold, go a very long way toward increasing people’s understanding of science and their confidence in scientific findings.
Originally published by The Conversation under the terms of a Creative Commons Attribution/No derivatives license.