Do You Believe In Electrons? If So, Why?
People talk about electrons all the time. Electrons, we are told, are everywhere - every electrical appliance depends on a flow of electrons to power it, as does just about every chemical reaction that has ever taken place. Children are even shown diagrams of what they look like in secondary school. No-one, however, has ever actually seen an electron; the diagrams are based on an approximate model of how physicists believe electrons to be constructed. In fact, because electrons are far smaller than the wavelength of visible light, we simply could not see their structure, even with the most advanced optical microscope, or even if our eyes evolved to millions of times their current precision. So are we being irresponsible in assuring students of the existence of these unseen - unseeable - entities, when we are forced to admit we will never be able to observe them in the way we observe, say, a table?
The question of whether such entities exist has fascinated philosophers, and the discussion intensified during the last century with the advent of Logical Positivism, which held that ideas are meaningful only insofar as they might observably affect the world. Positivism, however, is highly controversial and has been largely abandoned; yet the debate its proponents reignited - whether we should be Scientific Realists and believe in the literal existence of unobservable entities - is still very much being fought, and by no means falls with the fall of Positivism.
I shall argue that there is, in fact, little difference between how we observe electrons and how we observe a table, and that to suggest that electrons do not exist is simply to take a sceptical position, little different from forms of Cartesian scepticism which doubt that we can trust anything we observe with our five senses. I hope to show, therefore, that we should be sceptics about all or nothing: since there is no significant difference between believing that electrons exist on the basis of experimental data and believing that a table exists on the basis of our sense data, we cannot reject one and hold on to the other.
Possibly the most popular anti-realist position is Instrumentalism, which holds that Science does not aim to find out the truth, but simply to enable prediction. On this view, any two competing theories which always predict the same empirical results are logically equivalent, and in fact do not compete at all. Many philosophers, however, do not hold that prediction is good enough, and argue that there is still more to be said once we have established rules about what things might do in the future and what the results of particular inputs might be - namely, what is in fact going on. As with the prisoners in Plato's cave, it does not seem to satisfy us to blindly praise our ability to predict what will happen next, never asking what is actually going on behind the scenes.
Another response might be that we want to know the truth of the matter because only through understanding may we reach a stage where we can predict with confidence; if we know about electrons, rather than merely observed patterns, we will be able to explain the patterns themselves and will be in a better position to know whether such patterns will hold in the future. The fascination amongst many philosophers and scientists with induction seems to stem from the hope that by understanding the process of induction we will be able to explain it, and decide whether it is rational to hold that past events are a good guide to the future.
But why, we must ask, should the truth be the best way to get correct predictions? Some Realists argue that it simply must be so, since the truth must be the best explanation of phenomena; yet unless these people are closet Instrumentalists who believe this is so by definition (i.e. that what we call the truth is whatever best predicts), I cannot see how the claim might be justified. Surely a table's actually being a solid continuous mass would be the best explanation of why we cannot pass a hand through it - better than the claim that there are tiny particles exerting forces which repel us from pushing another 'solid' object through it, or, more extreme still, that there is no such thing as 'solidity' at all - the theory now commonly held to be the truth? It might be objected that this is an unfair simplification, since the reason we explain a table's solidity in these terms is that other empirical facts are better explained by a discontinuous particles-and-forces account; yet some philosophers would argue that we need not align these rarer pieces of evidence with the simple case. Nancy Cartwright, for example, would argue that many scientific predictions work in the laboratory precisely because the experiments are done in a laboratory, and that outside these controlled conditions the same rules cannot be applied (at least not with the rigour that scientists might demand of us). A table, Cartwright might argue, could plausibly be entirely continuous and solid regardless of any observations scientists might make in a laboratory. I am not convinced by such an argument - we would at least hope that laws found in controlled conditions can be extrapolated and used for prediction on a larger scale - but it shows that we should not lazily presume that the truth will be the best explanation; we should try to show why this should be the case.
Van Fraassen argues that we should not even attempt to justify why our particular theories are good at prediction (and thus that we cannot use their predictive power as proof of their truth), because, he claims, the development of our theories is like a Darwinian evolutionary process: we keep those theories which have served us profitably in the past for prediction, and throw out those whose predictions have proved inaccurate. Since we are left with a pre-selected set of theories that produce good predictions, van Fraassen argues, it makes no sense to argue from the fact that our preferred theories predict well. Lipton's response, however, is that van Fraassen mistakes explaining the set for explaining its elements. The example he uses is a red-head convention: the fact that it is a red-head convention explains why everyone in the room has red hair, but it does not explain why each person in the room has red hair (which, he suggests, is a matter for a geneticist, not a conference organiser). Likewise, explaining as an evolutionary process how we came to end up with a set of theories with good predictive value tells us nothing about how each theory gains its predictive power. Even within evolutionary biology, we frequently ask why a particular trait was selected, since it is interesting to see how it might be advantageous, for example, to have two eyes rather than one.
Lipton further believes that Inference to the Best Explanation will give a successful account of entities such as electrons, and points out that van Fraassen mistakenly presumes that a supporter of this process claims already to have considered the right theories, having weeded out the bad ones by a process of 'natural selection' - which of course they do not claim. Instead the principle is useful for choosing between different theories, including ones that have not yet been considered; and, Lipton believes, given all the evidence, it will indeed be the theory that best explains the data which is true. This claim is even more reasonable if we believe that there will in the end be only a single consistent set of theories that explains the data - though such a claim is by no means obviously true.
As to prediction and truth, therefore, I would agree that there is some correlation between the two, and perhaps even that truth is the best route to prediction. What I have trouble accepting is that no other method comes close - and this becomes the crucial issue if the claim is that we should seek truth rather than mere prediction because truth is the only way to obtain properly accurate predictions. I would argue that it is perfectly possible to make very accurate predictions without ever considering truths, and that one certainly cannot prove logically that predictions made without the truth cannot come incredibly close to perfection, even if it might be arguable that they could never be totally perfect.
A more useful distinction than that between prediction and truth, therefore, is the distinction between seeing and observing. Any anti-realist account must claim that there is a significant difference between seeing something and observing its effects - that seeing a table is very different from observing an electron - but I would argue that such a distinction is unfounded, and that the two stand or fall together. The problem, I would claim, is that even seeing an object, in good light, using no external equipment, and with no deception being played on you, does not count as directly experiencing the object. What in fact occurs is that you use a particular set of apparatus (your eyes, optic nerve, and so on) for the purpose it best serves: getting information - in this case visual information - about an object.
Hacking describes an experiment which similarly involves a piece of apparatus created for a specific purpose, namely getting information about electrons. PEGGY II is a purpose-built piece of equipment used to measure how electrons deflect. It tests the hypothesis that there is 'parity violation in a weak neutral current interaction', which will be confirmed if altering the spin of electrons affects their deflections - more precisely, if electrons with left-handed spin deflect differently from those with right-handed spin. From a philosophical point of view we need not dwell on the specifics of the case to understand Hacking's suggestion about what changes entities from merely being posited by a theory to things we hold actually to exist. What Hacking suggests is that we do not believe entities exist simply because they would make a nice explanation of the data produced by an experiment, but because we use those entities to investigate something else - that we consider entities to exist not when we can fairly extrapolate to them, but when we begin to extrapolate from them. I would agree with Hacking: this does appear to be what we do, since we cannot easily deny that things exist when we depend on them in other experiments - at least not without seeming hypocritical.
Overall, I would suggest that observing does correlate with seeing: we take objects to exist, as Hacking suggests, when they are necessary to discuss other things, not simply when we first believe we have observed them. A table can be an optical illusion, or a projection; but when we eat off it we would have great trouble denying it was there - especially if we consider that eating off something might be as good a reason as any to call that thing a table. Likewise, we can say with confidence that electrons exist not simply because they might be a nice answer to questions raised by chemistry, but because they can valuably be used to address a whole host of other questions posed by science. If we are to reject electrons, therefore, it must be with the same degree of scepticism with which we reject all those things our senses have always led us to believe exist.