Radin for a Rerun

This is the 3rd and currently last part in my series on parapsychological double-slit experiments. It discusses the pilot study by Dean Radin and the six following experiments by him and others.

The Less Said…

In 2008, Dean Radin published an article in Explore, The Journal of Science & Healing, a journal dedicated to alternative medicine. That’s certainly a good way to hide it from anyone who knows or cares about physics. Or ethics.
Dean Radin is currently coeditor-in-chief.

Now hold tight because this paper is bad.

Obviously, the original design and justification for the experiment is taken from Jeffers, though Radin fails to give him the appropriate credit. However, the implementation is different.

Radin used a Michelson interferometer for the experiment. This is physically equivalent to Jeffers’ set-up in all relevant aspects. However, because it is extremely sensitive to environmental factors like temperature or vibrations, it seems a less than ideal choice.
Another difference is the outcome measure. That’s where things really go south. He uses a CCD chip to capture the interference pattern. Unfortunately, he then completely disregards it.
Instead he computes the total intensity of the light reaching the chip.
With that, the experiment becomes a test of whether the subjects can cast a shadow by concentrating on a spot.
Radin seems completely oblivious to that simple fact.
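To see why total intensity is the wrong outcome measure, here is a minimal sketch (my own illustration, not anything from the paper): two patterns with wildly different fringe contrast carry essentially the same total light. Summing pixel intensities therefore cannot detect a change in the interference pattern, only a change in overall brightness, i.e. a shadow.

```python
import numpy as np

# Two idealized fringe patterns: same mean brightness, very different
# contrast. (Illustrative numbers; nothing here is from Radin's set-up.)
x = np.linspace(0, 4 * np.pi, 1000)
high_contrast = 1 + 1.0 * np.cos(x)   # visibility ~ 1.0
low_contrast = 1 + 0.2 * np.cos(x)    # visibility ~ 0.2

def visibility(pattern):
    return (pattern.max() - pattern.min()) / (pattern.max() + pattern.min())

# The cosine term averages out over whole fringes, so the total
# intensity is nearly identical even though the patterns differ greatly.
print(visibility(high_contrast), visibility(low_contrast))
print(high_contrast.sum(), low_contrast.sum())
```

The visibilities differ by a factor of five while the totals agree to within a fraction of a percent.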

In all likelihood, Radin simply did not get a positive result and then, instead of accepting it, went fishing. How that works (or rather doesn’t) is explained simply here.

The title of the paper, Testing Nonlocal Observation as a Source of Intuitive Knowledge, tells us that Radin sees this as relevant to intuition. Intuition, as he points out, is regarded as an important source of artistic inspiration and scientific insight. It’s a bit hard to see what the connection between that and casting shadows should be. But remember that Radin doesn’t know what he is doing. He believes this is like Jeffers’ original design.
His first error is thinking that the original design could show that someone gains knowledge about the photons’ paths. Learn why not in this post. This is a simple misunderstanding and easy to follow.
His second error is weirder and more typically parapsychological. He thinks that, if people can gain knowledge about the paths of a few photons in some unknown way, then it’s reasonable to assume that they can gain any knowledge via the same unknown mechanism.
We detect photons all the time with our eyes, and much more effectively. If that doesn’t tell us anything about intuition, then why should this?

The bottom line is that evidently the experiment was botched. And even if it hadn’t been, the conclusions he tries to draw just wouldn’t follow. The connections he sees are just not there, at least as far as the evidence goes.

Six More Experiments

Finally, we get to the current paper by Dean Radin, Leena Michel, Karla Galdamez, Paul Wendland, Robert Rickenbach, and Arnaud Delorme. I think Radin wrote most of the paper: it repeats many of the errors of the previous one, and the statistics are in his style.

The paper is miles better. It follows Jeffers’ original set-up quite closely. For one, they use a standard double-slit set-up. They don’t measure the contrast the way Jeffers did but something that should work just as well. At least, the justification seems solid to me, though I don’t know enough to actually vouch for it.
What’s more, they say that the ups and downs of the measure were given as feedback to the subjects. This again follows Jeffers’ lead and, importantly, gives me some confidence that the measure was not chosen after the fact. But again, Jeffers is not properly credited with coming up with the original design of the experiment.

Unfortunately this paper again does not report the actual physical outcome. They don’t report how much the subjects were able to influence the pattern. What seems clear is that none of them was able to alter the pattern in a way that stood out above the noise.
It would have been interesting to know whether the measurements were any more precise than those conducted by Jeffers and Ibison. If their apparatus is better and the effect still couldn’t be clearly measured, that would further suggest that Ibison’s result was just chance.

They calculate that, on average, the pattern in periods where subjects paid attention was slightly different from the pattern when they did not pay attention. That is valid in principle because you can get a good measurement with a bad apparatus by simply repeating the process a lot of times.
However, any physicist or engineer would try their utmost to improve that apparatus rather than rely on repetitions.
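The trade-off can be sketched in a few lines (made-up numbers, purely for illustration): averaging N noisy readings shrinks the uncertainty of the mean roughly as 1/√N, so enough repetitions can dig a signal out of the noise, but the number of repetitions needed grows quadratically as the apparatus gets worse.

```python
import numpy as np

# A noisy instrument reads a true value of 0.991 with standard
# deviation 0.01 per reading (illustrative values only).
rng = np.random.default_rng(0)
true_value, sigma = 0.991, 0.01

for n in (1, 100, 10_000):
    # 2000 simulated sessions, each averaging n readings
    session_means = rng.normal(true_value, sigma, size=(2000, n)).mean(axis=1)
    # The spread of the session means shrinks like sigma / sqrt(n)
    print(n, session_means.std())
```

Going from 1 to 10,000 readings buys a hundredfold improvement, which is exactly why a better apparatus is preferable: the same gain would come from a hundredfold reduction in per-reading noise.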

On the whole, this is not handled like a physics experiment but more like one in social science.
Most importantly, the effect size reported is not about any physically measurable change to the interference pattern. It is simply a measure of how much the result deviated from “chance expectation”. That’s not exactly what should be of top interest.

The paper reports six experiments. To give credit where credit should not be due, some would have pooled the data and pretended to have run fewer but larger experiments to make the results seem more impressive. That’s not acceptable, of course, but still done (e.g., by Daryl Bem).

Results and Methods

The first four experiments presented all failed to reach a significant result, even by the loose standards common in social science. However, they all pointed somewhat in the right direction which might be considered encouraging enough to continue.

Among these experiments there were two ill-conceived attempts to identify other factors that might influence the result.
The first idea is that somewhat regular ups and downs in the outcome measure could have coincided with periods of attention and no attention. I can’t stress enough that this would have been better addressed by trying to get the apparatus to behave.
Instead, Radin performs a plainly bizarre statistical analysis. I’m sure this was thought up by him rather than a co-author because it is just his style.
Basically, he takes out all the big ups and downs. So far, so good. This should indeed remove any spurious ups and downs coming from within the apparatus. But wait: it should also remove any real effect!
Radin, however, is satisfied with still getting a positive result even when there is nothing left that could cause a true positive result. The “positive” result Radin gets is obviously a meaningless coincidence that almost certainly would not repeat in any of the other experiments. And indeed, he reports the analysis only for this one experiment.
Make no mistake here, once a method for such an analysis has been implemented on the computer for one set of data, it takes only seconds to perform it on any other set of data.
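The problem can be illustrated with a toy high-pass filter (my sketch; the paper’s actual procedure may differ in detail): subtracting a moving average removes slow apparatus drift, but a genuine effect living at the same slow timescale is removed right along with it.

```python
import numpy as np

def remove_slow_trends(signal, window):
    """Subtract a moving average, i.e. take out the 'big ups and downs'."""
    trend = np.convolve(signal, np.ones(window) / window, mode="same")
    return signal - trend

t = np.linspace(0, 100, 2000)
drift = 0.5 * np.sin(2 * np.pi * t / 50)   # spurious slow apparatus drift
effect = 0.5 * np.sin(2 * np.pi * t / 40)  # hypothetical slow real effect

filtered = remove_slow_trends(drift + effect, window=200)

core = slice(200, -200)  # ignore filter edge artifacts
# Both the drift AND the effect are suppressed; almost nothing survives.
print(filtered[core].std(), (drift + effect)[core].std())
```

Whatever comes out the other end of such a filter cannot, by construction, contain much of a slow real effect; a “positive” result on the residue is just noise passing a test.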

The second attempt concerns the possibility that warmth might have affected the result. A good way to test this is probably to introduce heat sources into the room and see how that affects the apparatus.
What is done is quite different. Four thermometers are placed in the room while an experiment is conducted. The idea seems to have been that if the room gets warmer, this indicates that warmth may have been responsible. Unfortunately, since you don’t know if you are measuring in the right places, you can’t conclude that warmth is not responsible if you don’t find any warming. Besides, it might not be a steady increase you are looking for. In short, you don’t know if your four thermometers could pick up anything relevant, or how to recognize it if they did.
Conversely, even if the room got warmer with someone in it, this would not necessarily affect the measurement adversely.
In any case, temperature indeed seemed to increase slightly. Why the same temperature measurements were not conducted in the other experiments, or why the possible temperature influence was not investigated further, is unclear to me. They believe this should work, so why don’t they continue with it?

The last two experiments were somewhat more elaborate. They were larger, comprising about 50 subjects rather than about 30, and took an EEG of subjects. The fifth experiment is the one success in the lot insofar that it reports a significant result.


If you have read the first part of this series then you have encountered a mainstream physics article that studied how the thermal emission of photons affects the interference pattern. What that paper shares with this one is that both are interested in how a certain process affects the interference pattern.

And yet the papers could hardly be more different. The mainstream paper contains extensive theoretical calculations that place the results in the context of known physics. The fringe paper has no such calculations and relies mainly on pop science accounts of quantum physics.

The mainstream paper presents a clear and unambiguous change in the interference pattern. Let’s look at it again.

The dots are the particles and the lines mark the theoretically expected interference patterns fitted to the actual results. As you can see the dots don’t exactly follow the lines. That’s just unavoidable random variation due to any number of reasons. And yet the change in the pattern can be clearly seen.

From what is reported in Radin’s paper we can deduce that the change associated with attention was not even remotely as clean. In fact, the patterns should be virtually identical the whole time.
That means, that if there is a real effect in Radin’s paper, it is tiny. So tiny that it can’t be properly seen with the equipment they used.

That is hardly a surprising result. If paying attention to something were able to change its quantum behavior in a noticeable way, then this should have been noticed long ago. Careful experiments should be plagued by inexplicable noise, depending on what the experimenters are thinking about.

The “positive” result that he reports suffers from the same problem as virtually all positive results in parapsychology, and also many in certain recognized scientific disciplines. It may simply be due to kinks in the social-science methodology employed.
Some of the weirdness in the paper, not all of which I mentioned, leaves me with no confidence that there is more than “flexible methods” going on here.

Poor Quantum Physics

Radin believes that a positive result supports “consciousness causes collapse”.  He bemoans a lack of experimental tests of that idea and attributes it, quite without justification, to a “taboo” against including consciousness in physics.
Thousands upon thousands of physicists, and many times more students, are supposed to have simply refused, out of some desire to conform, to do a simple and obvious experiment. I think it says a lot about Radin and the company he keeps that he has no problem believing that.
I don’t know about you, my dear readers, but in such a situation I would have thought differently. Either all those people who should know more about the subject than me have their heads up their behinds, or maybe it is just me. I would have wondered if there was something I was missing. And I would have found out what it was and avoided making an ass of myself. Then again, I would also have avoided (and have avoided) book deals, the adoration of many fans, and the like, all of which Radin secured for himself.
So who’s to say that reasonable thinking is actually the same as sensible thinking?

But back to the physics. As is obvious when one manages to find the relevant literature, conscious awareness of any information is not necessary to affect an interference pattern. Moreover, wave function collapse is not necessary to explain this. Both of these points should be plain from the mainstream paper mentioned here.


My advice to anyone who thinks that there’s something about this is to try to build a more sensitive apparatus and/or to calibrate it better. If the effect still doesn’t rise over the noise, it probably still wasn’t there in the first place. If it does, however, future research becomes much easier.
For example, if tiny magnetic fields influence this, as Radin suggests, that could be learned in a few days.

Unfortunately, it does not appear that this is the way Dean Radin and his colleagues are going about it, but I’ll refrain from comment until I have more solid information.
But at least they are continuing the line of investigation. They deserve some praise for that. It is all too often the case that parapsychologists present a supposedly awesome, earth-shattering result and then move on to do something completely different.



I omitted to comment on a lot of details in the second paper to keep things halfway brief. In doing so I overlooked one curiosity that really should be mentioned.

The fourth experiment is “retrocausal”. That means, in this case, that the double-slit part of the experiment was run and recorded three months before the humans viewed this record, and tried to influence it. The retrocausality in itself is not really such an issue. Time is a curious thing in modern physics and not at all like we intuit.

The curious thing is that it implies that the entire recording was in a state of quantum superposition for a whole three months. Getting macroscopic objects into such states, and keeping them there, is enormously difficult. It certainly does not just happen on its own. What they claim to have done there is simply impossible as far as mainstream quantum physics is concerned. Not just in theory; it can’t be done in practice, despite physicists trying really hard.

A Physicist Investigates

This is the second part in my series on the parapsychological double-slit experiment.

The original double-slit experiment was performed in 1803 by Thomas Young.
The parapsychological version was thought up by one of his intellectual heirs, almost 200 years later.

In the Beginning

Stanley Jeffers, a professor of physics at York University in Toronto, Canada, came across the results from PEAR lab during a sabbatical in 1992. The PEAR project claimed to have shown that people could influence Random Number Generators (RNGs) by their intention alone. There were also earlier experiments which claimed that people could influence radioactive decay.
Jeffers became interested enough to follow up on that. He came up with a more direct way to test this idea by having people try to influence the diffraction of photons (unsuccessfully). (Jeffers and Sloan 1992)

Afterwards, he refined that experiment by using a double-slit set-up. This makes for an exquisite test of whether humans have some unknown means of affecting the world.
If there is some way of making information available about which slit the particles went through, then this should affect the interference pattern. Moreover, the change in the pattern would allow one to estimate just how much information was made available.
It’s not necessary for anyone to become consciously aware of this information; it’s enough that it exists. For example, if people can somehow influence what happens at one slit but not the other, then that could “tag” the particle in such a way that it becomes possible to distinguish which slit it went through. Such a thing would be sufficient to affect the interference pattern.
More directly, if people can clairvoyantly observe one slit (aka remote view it), then this too should make which-way information available.
There’s a catch, though. That’s only true as far as we understand the laws of nature. However, the paranormal, by definition, is not bound by that.

There’s another thing that could lead to a positive result. The experimental apparatus needs to be carefully calibrated. If the right parts are bent out of shape by just a few micrometers, or if the laser is somehow affected, then this could also lead to a change in the pattern. It would be quite tricky to mimic the change expected from available information, but it could still lead to a positive result under the right (or wrong) circumstances: either if the subconscious psychokinetic ability is also ingenious, or if the pattern is not scrutinized properly.
Jeffers seems not to have considered this possibility but, of course, outside of parapsychology such ideas, i.e. psychokinesis, tend to be regarded as philosophical rather than actual possibilities. Nevertheless, parapsychologists have little hesitation in jumping to any conclusion, and it would be quite in line with what is often claimed.

All in all, Jeffers’ double-slit set-up promises to be a very sensitive means of detecting even the smallest influence, whatever its nature.

A Double-Slit Diffraction Experiment to Investigate Claims of Consciousness-Related Anomalies

Stanley Jeffers ran 74 sessions, each consisting of many attempts to influence the pattern. Then he gave up and presented his findings at a conference. After that, the people of the PEAR lab borrowed the device and used it to run 20 sessions.
I do not have the original conference report by Jeffers available to me but was able to find first hand information by Jeffers in the book Psi Wars: Getting to Grips with the Paranormal.
The two experiments were reported together in The Journal of Scientific Exploration, which is dedicated to publishing stuff that is, put bluntly, too stupid or crazy for normal peer-reviewed journals. (direct link to paper)
The article is authored by Ibison and Jeffers but appears to have been written by Ibison.

While Jeffers completely failed to find anything in his 74 sessions, Ibison of the PEAR lab reported a “significant” result with only 20. Some will say that the latter experiment had a positive result.

I fear I have to go into a little detail on that. This is an issue that will be relevant for all the following experiments. The interesting thing here is the contrast of the interference pattern. But you can’t measure it with arbitrary precision.
The pattern itself will be influenced by tiny disturbances of the apparatus caused by changes in ambient heat or vibrations. The sensor which picks up the brightness, basically a predecessor of the CCD chip in current digital cameras, is not perfect either. And then there are more esoteric things, too.
The upshot is that every measurement will have a slightly different result.
According to Jeffers in Psi Wars, repeated calibrations yield typical values for the contrast of 0.991 with a standard deviation of 0.001, while Ibison talks of “around 5% of the peak value”. I don’t quite know what Ibison means by that since it’s not how measurement uncertainty is usually reported. However, it does seem to imply more uncertainty than Jeffers’ figure. Maybe there’s a typo in one of the sources or maybe Ibison at the PEAR lab was not able to calibrate the device as well as Jeffers who was, after all, a pro at this.
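For reference, the contrast figures quoted here are presumably the standard optics definition of fringe visibility, computed from the brightest and darkest points of the pattern:

$$ V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} $$

A perfect pattern with fully dark minima gives $V = 1$; Jeffers’ 0.991 means the minima retained a small residue of light, roughly half a percent of the maxima.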

What is clear is that any influence by the subjects must have been so small that it never rose above the noise, that is, the fluctuations due to imperfect measurement. Unfortunately, none of the sources I have at hand provides information on what kind of an upper bound this puts on the subjects’ maximal influence.

It is possible to measure something even if it hides in the noise. To do so, one must simply repeat the measurement over and over again and form an average. In the long run the errors in either direction will balance out. The more often one repeats the measurement, the more reliable the average. Using statistics, one can compute how reliable a result is, which is then expressed with error bars.
Basically, Ibison found that the contrast was slightly altered and computed that there was only a 1 in 20 chance for that to happen merely from noise. That means that if you repeated the experiment many times, you’d get such a result or a better one 1 time in 20, even if you’re only gazing at noise. That’s, of course, not very impressive, even before taking into account that such figures tend to be exaggerated. There are a couple of things that can make a result appear more unlikely than it really is. Physicists tend to ask for much more clear-cut results.


Fundamentally, this should be a discouraging result for parapsychology. Even though a device was constructed that was able to detect influences far smaller than what most parapsychological experiments require, the supposed effect was still hidden in the noise.

Someone who believes that there really was an effect there has two basic options. The preferred one should be to build a better apparatus that has less noise and so will let the true effect stand out.
The other one is, as mentioned, to run this experiment a lot of times but that will not be nearly as useful, nor as convincing.

Further Thoughts

I mentioned that one way of achieving a positive result in this test is by exerting some physical force on the apparatus. A physicist or engineer would test this directly by having people try to exert that force on a force sensor of some kind.
Obviously, parapsychologists don’t do it that way. It fails to show what they know to be true.
One answer to that is abandoning careful experiments and simply arguing that some feat can’t possibly have been a trick because they surely would have noticed. Parapsychologists apparently do not suffer from the same limitations of perception as the rest of us. Or perhaps they suffer from a few extra.
The other answer is to conduct experiments involving randomness, like the RNGs of PEAR, or radioactive decay, or something as simple as dice. These experiments seem much less silly than any argument for the genuineness of something that looks like a magic trick.
But are they really? Why not measure the effect directly?
Why is there always randomness involved?
An obvious answer is that the “effect” is simply the result of data mining. My take on that here.

But, since I aim to deliver the whole truth, I must also tell you that parapsychologists have ideas on the issue. One idea that Ibison mentions in the article is that maybe experimenters can see into the future and somehow know, subconsciously, when to start an experimental run so that purely random processes will deliver a favorable result. Ibison is quite open about the fact that the apparatus was basically just generating random numbers and not necessarily measuring anything.


Stanley Jeffers published his conclusions on the PEAR lab results in Skeptical Inquirer in 2006 and 2007, and also wrote an essay which was published in the book Psi Wars: Getting to Grips with the Paranormal. (Review by Caroline Watts)

Attention! Double-slit!

Recently Dean Radin and others published an article that purports to study the effects of attention on a double slit experiment.

Originally I wanted to do just a rebuttal to that but then found it necessary to also review the entire background. The simple rebuttal spiraled out of control into a 3-part series. My old math teacher was right: once you add the imaginary, things get complex. For reals. And not only for them.

A Word of Caution

People often ask for evidence when they are faced with something they find unlikely. The more skeptical will also ask for evidence for something they consider credible, at least sometimes. For the academically educated evidence means articles published in peer-reviewed, reputable, scientific journals.
For example, all the articles I cite as evidence in the first part, where I look at mainstream quantum physics, are from such journals.
So here comes the warning. Not all journals that call themselves peer-reviewed are reputable. For example, there is a peer-reviewed journal dedicated to creationist ideas. And I probably don’t need to tell you what scientists on the whole think of creationism.

The journals that published the articles discussed in this series are not reputable. Mainstream science does not take note of them. Physics Essays, where the most recent article appeared, may very well be the closest to the mainstream, and still it is mostly ignored.
It is largely an outlet for people who believe that Einstein was wrong. We’re not talking about scientists looking for the next big thing, we’re talking about people who are to Einstein’s theory what creationists are to evolution.
This is not meant as an argument against these ideas, I just don’t want to mislead anyone into believing that there is a legitimate scientific debate going on here.

That’s not to say that science ignores fringe ideas. For example, Stanley Jeffers who appears in the second part of this series is a mainstream physicist who decided to follow up on some of those.
He just didn’t find that there was anything there. It was a dead end.
James Alcock has a few words on that in his editorial Give the Null Hypothesis a Chance.

There are many cranks out there. These are people who hold onto some theory in the face of contrary evidence. They will not go away, but they will, almost invariably, accuse mainstream science of being dogmatic. In the end, there is nothing to be done but ignore them.

On to the Review

The first part gives a brief overview of the quantum physics background to the experiment. Dean Radin gets this completely wrong. And I fear the misunderstandings he propagates will pop up in many places.

Part 1: A Quantum Understanding

In the next part we will look at the experiment in question. Let’s call it the parapsychological double-slit experiment. We will learn who came up with the idea and what he found out and also what a positive result should look like and what it might mean.

Part 2: A Physicist Investigates

The 3rd and last part, for now, looks at the two articles authored by Dean Radin, presenting seven replications of the original design.

Part 3: Radin for a Rerun

Further studies are being conducted so more parts are likely to follow at some point.

A Quantum Understanding

This is the first part in my series on parapsychological double-slit experiments. However, this post contains just straightforward mainstream science. This tells you the minimum about quantum physics that you need to know to make sense of the parapsychological adaptation.

In a double-slit experiment, particles are shot at a barrier with two slits or holes in it. The particles are usually electrons or photons but large molecules, such as C70 bucky balls, have also been used.
Behind the barrier, there is a detector. In the case of photons, aka light, you can simply put a piece of cardboard there and see a special pattern projected onto it.
The interference pattern indicates that a wave is passing through the double-slits.


In a nutshell, when a wave crest encounters another wave crest, their heights will add. When a crest encounters a trough, they will cancel out. That is called interference. This can be seen when throwing two stones into a pond. It will look like this:


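The crest-meets-crest and crest-meets-trough cases can be written down directly (a generic illustration of superposition, nothing specific to the experiments discussed):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 500)
wave = np.sin(x)

in_phase = wave + np.sin(x)              # crest meets crest: heights add
out_of_phase = wave + np.sin(x + np.pi)  # crest meets trough: cancellation

print(np.abs(in_phase).max())      # roughly 2: double the amplitude
print(np.abs(out_of_phase).max())  # roughly 0: complete cancellation
```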
The simple and obvious explanation for what can be seen in the double-slit experiment is that there are waves coming from either slit and interfering, as can be seen in this diagram:

This is curious since we thought we were shooting particles at the slits. Particles shouldn’t show interference. Think of kicking a ball at two windows. Have you ever noticed an interference pattern in such a situation? Or in any?

So how can it be that a particle behaves like a wave? The answer is given by quantum mechanics.
Quantum means ‘amount’ and there’s quite a story behind ‘amount mechanics’ and how it was discovered and why it is so named. It involves Einstein but this is not the place to tell it.

For one, in order to get an interference pattern, the size of the slits needs to match the wavelength of the particles. Just in case you were wondering why kicking a ball at two windows does not produce interference.


We think of particles as having a definite state. They have a location and a momentum.
However, it turns out that location and momentum, and a couple other properties, can’t be known with arbitrary precision. Not because of some technological limitation but because of the very laws of physics.
Going even further, it is so that the very state of a particle, or even many particles together, can only be known in terms of probabilities. A quantum mechanical description of something is known as a wavefunction and allows you to predict with what probability you will observe a certain outcome.
As the name wavefunction implies, these probabilities behave mathematically like waves.

When you shoot a particle at the double slits, it might go through either slit. There is a probability of finding it at or behind either slit. The probabilities of finding it in some place spread from both slits like waves. The probability waves from both slits interfering is what causes the interference pattern.
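A toy calculation makes this concrete (a far-field sketch with made-up dimensions, not any particular experiment): add the two slit amplitudes and square the magnitude to get the fringe pattern; use only one amplitude and the fringes vanish.

```python
import numpy as np

x = np.linspace(-0.01, 0.01, 2000)                  # screen positions (m)
wavelength, slit_sep, distance = 633e-9, 1e-4, 1.0  # illustrative values

# Phase difference between the paths from the two slits to each point
phase = 2 * np.pi * slit_sep * x / (wavelength * distance)

amp_slit1 = np.exp(1j * phase / 2)
amp_slit2 = np.exp(-1j * phase / 2)

both = np.abs(amp_slit1 + amp_slit2) ** 2  # two slits: interference fringes
one = np.abs(amp_slit1) ** 2               # one slit alone: flat

print(both.max(), both.min())  # fringes swing between ~4 and ~0
print(one.max(), one.min())    # constant 1, no pattern
```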

Note that this pattern only emerges when you have a sufficient number of particles. One particle leaves behind only one blip on the screen behind the double slits. But even when you shoot one particle at a time, the pattern will eventually emerge. Where it is more likely to find a particle, more particles will be found.

Now we have the basics down. This is how the double-slit experiment works.


Now we need to ask, what happens if we try to determine through which slit the particle went?
The simple answer is that if the particle has to have come from either slit, then the probability wave only spreads from that one slit. And that means no interference.

Whenever full ‘which-way information’ exists, there is no interference pattern at all. In fact, how much contrast there is in the pattern depends on how much information there is. (see Wootters and Zurek 1979)
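The quantitative version of this trade-off is often written as a duality relation between the fringe visibility $V$ and the which-way distinguishability $D$ (this compact form is usually attributed to Greenberger and Yasin and to Englert, building on the Wootters and Zurek result cited above):

$$ V^2 + D^2 \le 1 $$

Full which-way information ($D = 1$) forces $V = 0$, i.e. no fringes, while partial information permits correspondingly reduced contrast.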

In an experiment with C70 bucky balls, these molecules were heated as they went through the slits. They were made so hot that they glowed, they gave off thermal photons. These photons carried which-way information from the bucky balls to the environment. The hotter they were, the more photons they gave off and the less pronounced the interference pattern was.
Here you can see the interference patterns obtained when different intensities of heating were applied (given in Watts).

Collisions with air molecules play the same role. The higher the pressure is, the less pronounced the interference pattern.

Warning! Philosophy ahead!

Now we must head into the somewhat murkier waters of philosophy.

In quantum mechanics everything is all about probabilities. And as the interference pattern shows, in some way these probabilities are real. If they weren’t real, the probability waves could hardly interfere.
And yet, on the screen we get a single blip for each particle. Not a ‘maybe here, maybe there’.
At first, you have a wavefunction that goes through both slits, and then, on the detector, you have a single definite location (within the limits of the uncertainty principle).
That is known as the measurement problem.
Obviously, interactions with the environment play some role to explain this. We have seen that in the experiment with the bucky balls. Such processes cause so-called decoherence which causes interference phenomena to be suppressed in everyday situations.

However, this, most say, cannot be the entire solution. Even if there is no interference, the particle is still described only in terms of probabilities, as a wavefunction. So why do we perceive a seemingly definite outcome?

One answer to this is taking the math at face value. The wavefunction is all there is and all that matters. In that view, all possibilities are equally real. You perceiving the blip on one end of the screen or on the other, both happens. In a way, the universe splits up into different versions for each outcome. Of course, there is really just one wavefunction describing it all, so that’s more what it appears to us than reality.

This view is known as the Many Worlds Interpretation (MWI).

Another answer is that the wavefunction collapses on measurement. That means that when we perceive a definite outcome, then this outcome is really all there is. Something happened to reduce the probabilities down to one actuality.
That means that another physical process must be assumed. And it is absolutely unknown when or why or how it happens. Not to mention if.

That view is known as the Copenhagen Interpretation.

So far this is just philosophy. Both these views are interpretations of the same experiments and the same theories. The math and the predictions stay the same, whether you think that all possibilities are true or just the one you experience. There are many variants of these views, especially of the Copenhagen Interpretation.

One particular variant of the Copenhagen Interpretation holds that the collapse takes place when consciousness gets involved. This view was held by some big names like Von Neumann or Wigner but is decidedly a minority opinion now. In my opinion that’s because no one has figured out a way in which a conscious observer should be different from any old measurement device. But I’ll just leave it at pointing out that there is no evidence for this view, just like there is no evidence that collapse is real or not.

It has also been argued that this view is incompatible with experimental evidence. Though I think most physicists would rather regard the view as unfalsifiable philosophy. After all, every experiment we know of eventually came into the awareness of at least one conscious being. (That’s you, my dear reader. Not humble me.)

Put that way, consciousness causes collapse is awfully close to solipsism.