Attention! Double-slit!

Recently, Dean Radin and others published an article that purports to study the effects of attention on a double-slit experiment.

Originally I wanted to write just a rebuttal to that, but then found it necessary to also review the entire background. The simple rebuttal spiraled out of control into a three-part series. My old math teacher was right: once you add the imaginary, things get complex, for reals. And not only for them.

A Word of Caution

People often ask for evidence when they are faced with something they find unlikely. The more skeptical will also ask for evidence for something they consider credible, at least sometimes. For the academically educated, evidence means articles published in reputable, peer-reviewed scientific journals.
For example, all the articles I cite as evidence in the first part, where I look at mainstream quantum physics, are from such journals.
So here comes the warning. Not all journals that call themselves peer-reviewed are reputable. For example, there is a peer-reviewed journal dedicated to creationist ideas. And I probably don’t need to tell you what scientists on the whole think of creationism.

The journals that published the articles discussed in this series are not reputable. Mainstream science does not take note of them. Physics Essays, where the most recent article appeared, may well be the closest to the mainstream, and still it is mostly ignored.
It is largely an outlet for people who believe that Einstein was wrong. We’re not talking about scientists looking for the next big thing; we’re talking about people who are to Einstein’s theory what creationists are to evolution.
This is not meant as an argument against these ideas; I just don’t want to mislead anyone into believing that there is a legitimate scientific debate going on here.

That’s not to say that science ignores fringe ideas. For example, Stanley Jeffers, who appears in the second part of this series, is a mainstream physicist who decided to follow up on some of those.
He just didn’t find that there was anything there. It was a dead end.
James Alcock has a few words on that in his editorial Give the Null Hypothesis a Chance.

There are many cranks out there. These are people who hold on to some theory in the face of contrary evidence. They will not go away, but they will, almost invariably, accuse mainstream science of being dogmatic. Eventually, there is nothing to be done but ignore them.

On to the Review

The first part gives a brief overview of the quantum physics background to the experiment. Dean Radin gets this completely wrong. And I fear the misunderstandings he propagates will pop up in many places.

Part 1: A Quantum Understanding

In the next part we will look at the experiment in question. Let’s call it the parapsychological double-slit experiment. We will learn who came up with the idea and what he found out and also what a positive result should look like and what it might mean.

Part 2: A Physicist Investigates

The third and, for now, last part looks at the two articles authored by Dean Radin, presenting seven replications of the original design.

Part 3: Radin for a Rerun

Further studies are being conducted so more parts are likely to follow at some point.

A Quantum Understanding

This is the first part in my series on parapsychological double-slit experiments. This post, however, contains just straightforward mainstream science: the minimum about quantum physics that you need to know to make sense of the parapsychological adaptation.

In a double-slit experiment, particles are shot at a barrier with two slits or holes in it. The particles are usually electrons or photons, but large molecules, such as C70 buckyballs, have also been used.
Behind the barrier, there is a detector. In the case of photons, aka light, you can simply put a piece of cardboard there and see a characteristic pattern projected onto it.
The interference pattern indicates that a wave is passing through the two slits.

Interference

In a nutshell, when a wave crest encounters another wave crest, their heights will add. When a crest encounters a trough, they will cancel out. That is called interference. This can be seen when throwing two stones into a pond. It will look like this:

Double-slit

The simple and obvious explanation for what can be seen in the double-slit experiment is that there are waves coming from either slit and interfering, as can be seen in this diagram:

This is curious since we thought we were shooting particles at the slits. Particles shouldn’t show interference. Think of kicking a ball at two windows. Have you ever noticed an interference pattern in such a situation? Or in any?

So how can it be that a particle behaves like a wave? The answer is given by quantum mechanics.
Quantum means ‘amount’ and there’s quite a story behind ‘amount mechanics’ and how it was discovered and why it is so named. It involves Einstein but this is not the place to tell it.

For one, in order to get an interference pattern, the size and spacing of the slits need to be comparable to the wavelength of the particles. Just in case you were wondering why kicking a ball at two windows does not produce interference.
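To see why the scale matters, here is a back-of-the-envelope calculation of my own (a small Python sketch, not part of the original argument). The de Broglie relation says that a moving object has a wavelength equal to Planck’s constant divided by its momentum:

```python
# A back-of-the-envelope sketch of my own (not part of the original post):
# the de Broglie relation, wavelength = h / p, gives the wavelength of a moving object.
h = 6.626e-34  # Planck's constant in joule-seconds

# An electron moving at about 1% of the speed of light:
m_electron, v_electron = 9.109e-31, 3.0e6      # mass in kg, speed in m/s
print(h / (m_electron * v_electron))           # ~2.4e-10 m, roughly the size of an atom

# A 0.4 kg ball kicked at 25 m/s:
m_ball, v_ball = 0.4, 25.0                     # mass in kg, speed in m/s
print(h / (m_ball * v_ball))                   # ~6.6e-35 m, absurdly small
```

An atom-sized wavelength can be matched by suitably tiny slits; a wavelength of 10^-35 metres cannot be matched by anything, least of all a window.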

Oversimplification

We think of particles as having a definite state: they have a location and a momentum.
However, it turns out that location and momentum, and a couple of other properties, can’t be known with arbitrary precision. Not because of some technological limitation but because of the very laws of physics.
Going even further, the very state of a particle, or even of many particles together, can only be known in terms of probabilities. A quantum mechanical description of something is known as a wavefunction; it allows you to predict with what probability you will observe a certain outcome.
As the name wavefunction implies, these probabilities behave mathematically like waves.

When you shoot a particle at the double slits, it might go through either slit. There is a probability of finding it at or behind either slit. The probabilities of finding it in some place spread from both slits like waves. The probability waves from both slits interfering is what causes the interference pattern.
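Since this is the crux of the whole matter, here is a tiny numerical sketch of my own (Python with NumPy; the phases are invented purely for illustration). Adding the two probability waves first and squaring afterwards is not the same as squaring first and then adding:

```python
# A tiny numerical sketch of my own (made-up phases, purely illustrative).
import numpy as np

phase = np.linspace(0, np.pi, 5)               # phase difference between the two paths
psi1 = np.exp(1j * phase / 2) / np.sqrt(2)     # wave arriving from slit 1
psi2 = np.exp(-1j * phase / 2) / np.sqrt(2)    # wave arriving from slit 2

waves_then_square = np.abs(psi1 + psi2) ** 2              # quantum: amplitudes add first
square_then_add = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # classical: probabilities add

print(waves_then_square.round(2))   # runs from 2 down to 0: bright and dark fringes
print(square_then_add.round(2))     # flat at 1 everywhere: no fringes
```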

Note that this pattern only emerges when you have a sufficient number of particles. One particle leaves behind only one blip on the screen behind the double slits. But even when you shoot one particle at a time, the pattern will eventually emerge. Where it is more likely to find a particle, more particles will be found.
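If you like, you can watch this happen in a toy simulation (again my own sketch with made-up numbers, not the actual experiment): draw particles one at a time from an interference-shaped distribution and see the pattern build up blip by blip.

```python
# A minimal simulation of my own (idealized numbers, not the real experiment).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 400)            # possible positions on the screen
prob = np.cos(5 * x) ** 2              # idealized two-slit probability shape
prob /= prob.sum()                     # normalize so the probabilities sum to 1

for n_particles in (1, 10, 100, 10_000):
    hits = rng.choice(x, size=n_particles, p=prob)       # each hit is one blip
    counts, _ = np.histogram(hits, bins=20, range=(-1, 1))
    print(n_particles, counts)         # the fringes only show up once n is large
```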

Now we have the basics down. This is how the double-slit experiment works.

Information

Now we need to ask, what happens if we try to determine through which slit the particle went?
The simple answer is that if the particle is known to have gone through one particular slit, then the probability wave spreads from that slit alone. And that means no interference.

Whenever such ‘which-way information’ exists, there is no interference pattern. In fact, how much contrast there is in the pattern depends on how much which-way information there is (see Wootters and Zurek, 1979).
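As a rough picture of that tradeoff (my own toy numbers, not taken from the cited paper), you can mix the perfect-interference pattern with the washed-out one and watch the fringe contrast, the ‘visibility’, shrink as more which-way information leaks out:

```python
# A toy illustration of my own (numbers invented, not from the cited paper).
import numpy as np

x = np.linspace(-1, 1, 400)
fringes = np.cos(10 * x) ** 2            # no which-way information: full contrast
flat = np.full_like(x, 0.5)              # complete which-way information: no fringes

for w in (0.0, 0.5, 0.9, 1.0):           # w = fraction of particles that gave themselves away
    pattern = (1 - w) * fringes + w * flat
    visibility = (pattern.max() - pattern.min()) / (pattern.max() + pattern.min())
    print(w, round(visibility, 2))       # contrast drops from 1.0 down to 0.0
```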

In an experiment with C70 buckyballs, these molecules were heated as they went through the slits. They were made so hot that they glowed, giving off thermal photons. These photons carried which-way information from the buckyballs to the environment. The hotter the molecules were, the more photons they gave off and the less pronounced the interference pattern was.
Here you can see the interference patterns obtained for different heating powers (given in watts).

Collisions with air molecules play the same role. The higher the pressure is, the less pronounced the interference pattern.

Warning! Philosophy ahead!

Now we must head into the somewhat murkier waters of philosophy.

In quantum mechanics, everything is about probabilities. And as the interference pattern shows, in some way these probabilities are real. If they weren’t real, the probability waves could hardly interfere.
And yet, on the screen we get a single blip for each particle. Not a ‘maybe here, maybe there’.
At first, you have a wavefunction that goes through both slits, and then, on the detector, you have a single definite location (within the limits of the uncertainty principle).
That is known as the measurement problem.
Obviously, interactions with the environment play some role in explaining this. We have seen that in the experiment with the buckyballs. Such processes lead to so-called decoherence, which suppresses interference phenomena in everyday situations.

However, this, most say, cannot be the entire solution. Even if there is no interference, the particle is still described only in terms of probabilities, as a wavefunction. So why do we perceive a seemingly definite outcome?

One answer to this is to take the math at face value. The wavefunction is all there is and all that matters. In that view, all possibilities are equally real. You perceiving the blip on one end of the screen or on the other: both happen. In a way, the universe splits up into different versions for each outcome. Of course, there is really just one wavefunction describing it all, so the splitting is more how things appear to us than how they really are.

This view is known as the Many Worlds Interpretation (MWI).

Another answer is that the wavefunction collapses on measurement. That means that when we perceive a definite outcome, then this outcome is really all there is. Something happened to reduce the probabilities down to one actuality.
That means that another physical process must be assumed. And it is absolutely unknown when or why or how it happens. Not to mention if.

That view is known as the Copenhagen Interpretation.

So far this is just philosophy. Both these views are interpretations of the same experiments and the same theories. The math and the predictions stay the same, whether you think that all possibilities are true or just the one you experience. There are many variants of these views, especially of the Copenhagen Interpretation.

One particular variant of the Copenhagen Interpretation holds that the collapse takes place when consciousness gets involved. This view was held by some big names like von Neumann and Wigner but is decidedly a minority opinion now. In my opinion, that’s because no one has figured out how a conscious observer should be different from any old measurement device. But I’ll just leave it at pointing out that there is no evidence for this view, just as there is no evidence for or against collapse being real in the first place.

It has also been argued that this view is incompatible with experimental evidence, though I think most physicists would rather regard it as unfalsifiable philosophy. After all, every experiment we know of eventually came to the awareness of at least one conscious being (that’s you, my dear reader, not humble me).

Put that way, ‘consciousness causes collapse’ is awfully close to solipsism.

[sound of sighing]

The May issue of The Psychologist carries an article by Stuart Ritchie, Richard Wiseman and Chris French titled Replication, Replication, Replication, plus some reactions to it. The Psychologist is the official monthly publication of The British Psychological Society. And the article is, of course, about the problems the three skeptics had in getting their failed replications published.

Yes, replication is important

That the importance of replication receives attention is good, of course. Repositories for failed experiments are important and have the potential to aid the scientific enterprise.

What is sad, however, is that the importance of proper methodology is largely overlooked. Even the three skeptics, who should know all about the dangers of data dredging, cavalierly dismiss the issue with these words:

While many of these methodological problems are worrying, we don’t think any of them completely undermine what appears to be an impressive dataset.

But replication is still not the answer

I have written before about how replication cannot be the whole answer. In a nutshell, by cunning abuse of statistical methods it is possible to give any mundane and boring result the appearance of some amazing, unheard-of effect. That takes hardly any extra work, but experimentally debunking the supposed effect is a huge effort. It takes more searching to be sure that something is not there than to simply find it. For statistical reasons, an experiment needs more subjects to “prove” the absence of an effect than to find it with the same confidence (a rough calculation follows below).
There is also the possibility that some difference between the original experiment and the replication explains the lack of an effect. In this case it was claimed that maybe the three skeptics failed because they did not believe in the effect. It takes just seconds to make such a claim. Disproving it requires finding a “believer” who will again run an experiment with more subjects than the original.
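To put rough numbers on the statistical point above, here is a back-of-the-envelope power calculation (my own sketch, assuming a simple two-sample t-test and the usual normal approximation; nothing in it comes from the papers under discussion):

```python
# A back-of-the-envelope power calculation of my own (two-sample t-test framing,
# normal approximation; nothing here comes from the papers under discussion).
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects per group for a two-sided test of the given effect size."""
    z_alpha = norm.ppf(1 - alpha / 2)    # critical value for the significance level
    z_power = norm.ppf(power)            # quantile matching the desired power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# Detecting a smallish effect (Cohen's d = 0.25) with 80% power:
print(round(n_per_group(0.25)))    # roughly 250 subjects per group

# Convincingly ruling out anything larger than d = 0.1 takes far more:
print(round(n_per_group(0.10)))    # roughly 1600 subjects per group
```

The smaller the effect you need to rule out, the more subjects you need, which is why debunking is so much more expensive than “discovering”.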

Quoth the three skeptics:

Most obviously, we have only attempted to replicate one of Bem’s nine experiments; much work is yet to be done.

It should be blindingly obvious that science just can’t work like that.

There are a few voices that take a more sensible approach. Daniel Bor writes a little about how neuroimaging, which has, or had, extreme problems with useless statistics, might improve by fostering greater expertise among its practitioners. Neuroimaging seems to have made methodological improvements. What social psychology needs is a drink from the same cup.

The difficulty of publishing and the crying of rivers

On the whole, I find the article by the three skeptics to be little more than a whine about how difficult it is to get published, hardly an unusual experience. The first journal refused because it does not publish replications.
Top journals are supposed to make sure that the results they publish are worthwhile. Showing that people can see into the future is amazing; not being able to show that is not. Back in the day, there was simply a limited number of pages that could be stuffed into an issue; these days, with online publishing, there is still the limited attention of readers.
The second journal refused to publish because one of the peer reviewers, who happened to be Daryl Bem, requested further experiments. That is a perfectly normal thing, and it is also normal for researchers to be annoyed by what they see as a frivolous request.
In this case, one more experiment should have made sure that the failure to replicate was not due to the beliefs of the experimenters. The original results published by Bem were almost certainly not due to chance. Looking for a reason for the different results is good science.

I’ve given a simple explanation for the obvious reason here. If the three skeptics are unwilling or unable to actually give such an explanation, they are hardly in a position to complain.

Beware the literature

As a general rule, failed experiments have a harder time getting published than successful ones. That is something of a problem because it means that information about what doesn’t work is lost to the larger community. When there is an interesting result in the older literature that seems not to have been followed up on, it is probably the case that it didn’t work after all: the original report was a fluke and the “debunking” was never widely published. Of course, one can’t be sure that the result wasn’t simply overlooked, which is a problem.
One must be aware that the scientific literature is not a complete record of all available scientific information. Failures will mostly live on in the memory of professors and will still be available to their ‘apprentices’, but it would be much more desirable if the information could be made available to all. With the internet, this possibility now exists, and the discussion about such means is probably the most valuable result of the Bem affair so far.

Is Replication the Answer?

One question that is forced on us by the publication of papers like Daryl Bem’s Feeling the Future is what went wrong and how it can be fixed.

One demand that often arises is for replication. It is one of the standard demands made by interested skeptics in forums and such places. I can understand why calling for replication is seductive.
It is shrewd and skeptical. It says: not so fast, let’s be sure first, while at the same time offering a highly technical criticism. Replication is technical jargon, don’t you know? On the other hand, it’s also nice and open-minded. It says: this is totally serious science, and some people who aren’t me should spend a lot of time on it.
And perhaps most important of all, it requires not a moment’s thought.

Cynicism aside, replication really is important. As long as a result is not replicated, it is very likely wrong. If you don’t replicate, you’re not really generating knowledge. Not only can you not rely on the results, you also lose the ability to determine whether you are using good methods or applying them correctly, which, I’d speculate, will decrease reliability still further over time.

Replication is essential but is replication really all that is needed?

Put yourself in the shoes of a scientist. You have just run an experiment and found absolutely no evidence that people can see the future.  That’s going to be tough to publish.
Journals are sometimes criticized for being biased against negative results, but the simple fact is that they are biased against uninteresting results. Attention is a limited quantity; there’s only so much time in a day that can be spent reading. Most ideas don’t work out, and so it is hardly news when an idea fails in an experiment. Think, for example, of all the chemicals that are not drugs of any kind.

Before computers and the information age, it probably wouldn’t even have been possible to handle all the information about failed ideas. Things have changed now, but the scientific community is still struggling to incorporate these new possibilities. However, one still can’t expect real-life humans to pay attention to evidence of the completely expected.

Now you could try a new idea and hope that you have more luck with that.
Or you could do what Bem did and work some statistical magic on the data. And by magic I mean sleight of hand. The additional work required is much less, and it is almost certain to work.
The question is simply whether you want to advance science and humanity, or your career and yourself.

If you go the second route, the Bem route, your result will almost certainly fail to replicate.

So you might say that replication, if it is attempted, solves the problem. But until then you have a public confused by premature press reports, perhaps bad policy decisions, and certainly a lot of time wasted trying to replicate the effect. Establishing that an effect is not there always takes more effort than just demonstrating it.

To this one might say that the nature of science is just so, tentative and self-correcting. Meanwhile the original data magician, our Bem-alike, has produced a publication in a respectable journal, which indicates quality work, and received numerous citations (in the form of failed replications), which indicates that the paper was fruitful and stimulated further research. These factors (number of publications, reputation of the journal, and number of citations) are usually used to judge the quality of a scientist’s work in some objective way.

In the end, if replication is all the answer needed, one should expect science to devolve into producing seemingly amazing results that are then slowly disproven by subsequent failed replications. The progress we have come to expect would be merely an accidental byproduct.

The problem might be said to lie rather in judging scientists in such a way. Maybe we should include the replicability of results in such judgments. But now we’re no longer talking about replication as the sole answer. We’re now talking about penalizing bad research.

And that’s the point. Science only works if people play by the rules. Those who won’t or can’t must be dealt with somehow. In the extreme case that means labeling them crackpots and ostracizing them.
But there are less extreme examples.

The case of the faster than light neutrinos

You probably have heard that some scientists recently announced that they had measured neutrinos to go faster than light. This turned out to be due to a faulty cable.

This story is currently a favorite of skeptics, who point out that few physicists took the result seriously, despite the fact that it was originally claimed that all technical issues had been ruled out. It makes a good cautionary tale about how implausible results should be handled and why. Human error is always possible and plausible.

There’s another chapter to this story, one that I fear will not get much attention.

The leaders of the experiment were forced to resign as a consequence of the affair.

There were very many scientists involved in the experiment, owing to the sheer size of the experimental apparatus. Among them, there was much discontent about how the results were handled. Some said that they should have run more tests, including the test that found the fault, before publishing. Which means, of course, that they shouldn’t have published at all.

It is easy to see how a publish-or-perish environment that puts a premium on exciting results encourages not looking too closely for faults. But what’s the alternative? No incentive to publish equals no incentive to work. No incentive for exciting results just cements the status quo and hinders progress.

A Pigasus for Daryl Bem

Every year on April Fools’ Day, James Randi hands out the Pigasus Awards. Here is the announcement for the 2011 awards, delivered on April 1, 2012.

One award went to Daryl Bem for “his shoddy research that has been discredited on many accounts by prominent critics, such as Drs. Richard Weisman, Steven Novella, and Chris French.”

I’ve called this well deserved, but there is certainly much that can be quibbled about. For example, these critics are hardly the ones who delivered the hardest-hitting critiques. Far more deserving of honorable mention are Wagenmakers, Francis and Simmons (and their respective co-authors) for their contribution of peer-reviewed papers that tackle the problem.

A point actually concerning the award is whether it is fair to single out Bem for a type of misconduct that may be very widespread in psychological research. Let’s be clear on this: his methods are not just “strange” or “shoddy”, as Randi kindly puts it; they border on the fraudulent. Someone else, in a different field, might have found themselves in serious trouble with a paper like this. Though I think it would be very hard to get such a paper past peer review in a more math-savvy discipline.
But even if you think it is just a highly visible example of normal bad practice, surely it is appropriate to use the high visibility to bring attention to it. Numerous people have done exactly that, either using it to argue for different statistical techniques or to draw attention to the lack of necessary replication in psychology.

I doubt that Randi calling this out will do much good, since I doubt that many psychologists will even notice. And even if they do, I doubt that it will cause them to rethink their current (mal)practice. There’s a good chance that Bem will be awarded an Ig Nobel Prize later this year. That would probably get more attention, but even so…


The reactions from believers have been completely predictable. They have so far ignored the criticisms of the methods, and so they ignore that Randi explicitly justifies the award with the “strange methods”. They simply pretend that any doubt or criticism is the result of utter dogmatism.

Sadly, some skeptical individuals have also voiced disappointment, for example Stuart Ritchie on his Twitter feed. Should I ever come across a justification for such reactions I will report and comment.