Radin for a Rerun

This is the third and, for now, last part in my series on parapsychological double-slit experiments. It discusses the pilot study by Dean Radin and the six subsequent experiments by him and others.

The Less Said…

In 2008, Dean Radin published an article in Explore, The Journal of Science & Healing, a journal dedicated to alternative medicine. That’s certainly a good way to hide it from anyone who knows or cares about physics. Or ethics.
Dean Radin is currently coeditor-in-chief.

Now hold tight because this paper is bad.

Obviously, the original design and justification for the experiment are taken from Jeffers, though Radin fails to give him the appropriate credit. However, the implementation is different.

Radin used a Michelson interferometer for the experiment. This is physically equivalent to Jeffers’ set-up in all relevant respects. However, since it is extremely sensitive to environmental factors like temperature or vibrations, it seems a less than ideal choice.
Another difference is the outcome measure, and that’s where things really go south. He uses a CCD chip to capture the interference pattern. Unfortunately, he then completely disregards it.
Instead he computes the total intensity of the light reaching the chip.
With that, the experiment becomes a test of whether the subjects can throw a shadow by concentrating on a spot.
Radin seems completely oblivious to that simple fact.
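
To spell out the difference (standard optics, nothing specific to Radin’s set-up): the quantity that carries the interference information is the fringe visibility

$$V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},$$

while the total power reaching the chip,

$$P = \int I(x)\,\mathrm{d}x,$$

is blind to any redistribution of light between bright and dark fringes. $P$ only drops if light is absorbed or blocked on the way, that is, if something casts a shadow.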

In all likelihood, Radin simply did not get a positive result and then, instead of accepting it, went fishing. How that works (or rather, doesn’t) is explained simply here.

The title of the paper, Testing Nonlocal Observation as a Source of Intuitive Knowledge, tells us that Radin sees this as relevant to intuition. Intuition, as he points out, is regarded as an important source of artistic inspiration and scientific insight. It’s a bit hard to see what the connection between that and casting shadows should be. But remember that Radin doesn’t know what he is doing. He believes this is like Jeffers’ original design.
His first error is thinking that the original design could show that someone gains knowledge about the photons’ paths. Learn why not in this post. This is a simple misunderstanding and easy to follow.
His second error is stranger and more typically parapsychological. He thinks that, if people can gain knowledge about the paths of a few photons in some unknown way, then it’s reasonable to assume that they can gain any knowledge via the same unknown mechanism.
We detect photons all the time with our eyes, and much more effectively. If that doesn’t tell us anything about intuition, then why should this?

The bottom line is that the experiment was evidently botched. And even if it hadn’t been, the conclusions he tries to draw just wouldn’t follow. The connections he sees are simply not there, at least as far as the evidence goes.

Six More Experiments

Finally, we get to the current paper by Dean Radin, Leena Michel, Karla Galdamez, Paul Wendland, Robert Rickenbach, and Arnaud Delorme. I think Radin wrote most of the paper: it repeats many of the errors of the previous one, and the statistics are in his style.

Still, the paper is miles better. It follows Jeffers’ original set-up quite closely. For one, they use a standard double-slit set-up. They don’t measure the contrast in the way Jeffers did, but something that should work just as well. At least, the justification seems solid to me, though I don’t know enough to actually vouch for it.
What’s more, they say that the ups and downs of the measure were given as feedback to the subjects. This again follows Jeffers’ lead and, importantly, gives me some confidence that the measure was not chosen after the fact. But again Jeffers is not properly credited with coming up with the original design of the experiment.

Unfortunately, this paper again does not report the actual physical outcome. They don’t report how much the subjects were able to influence the pattern. What seems clear is that none of them was able to alter the pattern in a way that stood out above the noise.
It would have been interesting to know if the measurements were any more precise than those conducted by Jeffers and Ibison. If their apparatus is better, and the effect still couldn’t be clearly measured, then that would go some way toward confirming that Ibison’s result was just chance.

They calculate that, on average, the pattern in periods where subjects paid attention was slightly different from the pattern when they did not pay attention. That is valid in principle, because you can get a good measurement from a bad apparatus by simply repeating the process many times.
However, any physicist or engineer would try their utmost to improve that apparatus rather than rely on repetitions.
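
For the record, the statistics behind that trade-off: if a single run measures the outcome with noise of standard deviation $\sigma$, the mean of $N$ independent runs has standard error

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}},$$

so $N = 10{,}000$ repetitions buy a hundredfold reduction in noise. It works, but it is a brute-force substitute for a better instrument.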

On the whole, this is not handled like a physics experiment but more like one in social science.
Most importantly, the effect size reported is not about any physically measurable change to the interference pattern. It is simply a measure of how much the result deviated from “chance expectation”. That is not what should be of primary interest.
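
To make the distinction concrete, here is a toy sketch with invented numbers (not Radin’s data, and not his analysis): a per-session statistic with chance expectation zero can yield an impressive z-score while the underlying physical shift stays minuscule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-session values of some normalized interference
# statistic: chance expectation 0, unit noise, and a tiny real shift.
sessions = rng.normal(loc=0.05, scale=1.0, size=10_000)

mean_shift = sessions.mean()                     # the physical size of the effect
std_err = sessions.std(ddof=1) / np.sqrt(len(sessions))
z = mean_shift / std_err                         # deviation from "chance expectation"

print(f"mean shift: {mean_shift:.3f}  (tiny on the scale of the noise)")
print(f"z-score:    {z:.1f}    (looks impressive in isolation)")
```

The z-score grows with the number of sessions no matter how small the shift is, so quoting it as the “effect size” says nothing about whether the pattern visibly changed.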

The paper reports six experiments. To give credit where credit should not even be necessary: some would have pooled the data and pretended to have run fewer but larger experiments to make the results seem more impressive. That’s not acceptable, of course, but it is still done (e.g., by Daryl Bem).

Results and Methods

The first four experiments all failed to reach a significant result, even by the loose standards common in social science. However, they all pointed somewhat in the right direction, which might be considered encouraging enough to continue.

Among these experiments there were two ill-conceived attempts to identify other factors that might influence the result.
The first idea is that somewhat regular ups and downs in the outcome measure could have coincided with periods of attention and no attention. I can’t stress enough that this would have been better addressed by trying to get the apparatus to behave.
Instead, Radin performs a plainly bizarre statistical analysis. I’m sure this was thought up by him rather than a co-author because it is just his style.
Basically, he takes out all the big ups and downs. So far, so good. This should indeed remove any spurious ups and downs coming from within the apparatus. But wait, it should also remove any real effect!
Radin, however, is satisfied with still getting a positive result even when there is nothing left that could cause a true positive result. The “positive” result Radin gets is obviously a meaningless coincidence that would almost certainly not repeat in any of the other experiments. And indeed, he reports the analysis only for this one experiment.
Make no mistake here: once a method for such an analysis has been implemented on the computer for one set of data, it takes only seconds to perform it on any other set of data.
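
Here is a toy version of the problem, with invented numbers and a crude frequency-domain filter standing in for whatever procedure Radin actually used: plant a genuine attention effect on the same slow timescale as the instrumental swings, take out “all the big ups and downs”, and watch the real effect vanish along with them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
t = np.arange(n)

# Alternating attention / no-attention blocks, 250 samples each.
attention = (t // 250) % 2 == 0
effect = 0.10 * attention                    # a hypothetical real effect
drift = 0.5 * np.sin(2 * np.pi * t / 500)    # spurious slow swings of the apparatus
signal = effect + drift + rng.normal(0.0, 0.05, n)

def att_minus_rest(x):
    """Mean during attention blocks minus mean during rest blocks."""
    return x[attention].mean() - x[~attention].mean()

# "Take out all the big ups and downs": zero the dominant spectral
# components and transform back.
spec = np.fft.rfft(signal)
spec[np.abs(spec) > 5 * np.median(np.abs(spec))] = 0
filtered = np.fft.irfft(spec, n)

print("planted effect:      0.100")
print(f"raw difference:      {att_minus_rest(signal):.3f}  (effect plus drift leakage)")
print(f"filtered difference: {att_minus_rest(filtered):.3f}  (next to nothing left)")
```

Because the attention schedule and the instrumental swings occupy the same slow frequencies, any filter strong enough to remove the latter also removes the former; whatever “significant” residue survives is just noise.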

The second attempt concerns the possibility that warmth might have affected the result. A good way to test this would be to introduce heat sources into the room and see how that affects the apparatus.
What was done is quite different. Four thermometers were placed in the room while an experiment was conducted. The idea seems to have been that if the room gets warmer, this indicates that warmth may have been responsible. Unfortunately, since you don’t know whether you are measuring in the right places, you can’t conclude that warmth is not responsible just because you find no increase. Besides, it might not be a steady increase you are looking for. In short, you don’t know whether your four thermometers could pick up anything relevant, or how to recognize it if they did.
Conversely, even if the room got warmer with someone in it, this would not necessarily affect the measurement adversely.
In any case, the temperature did indeed seem to increase slightly. Why the same temperature measurements were not conducted in the other experiments, or why the possible temperature influence was not investigated further, is unclear to me. They apparently believe the method works, so why not continue with it?

The last two experiments were somewhat more elaborate. They were larger, comprising about 50 subjects rather than about 30, and recorded the subjects’ EEG. The fifth experiment is the one success in the lot, insofar as it reports a significant result.

Conclusion

If you have read the first part of this series, then you have encountered a mainstream physics article that studied how the thermal emission of photons affects the interference pattern. What that paper shares with this one is that both are interested in how a certain process affects the interference pattern.

And yet the papers could hardly be more different. The mainstream paper contains extensive theoretical calculations that place the results in the context of known physics. The fringe paper has no such calculations and relies mainly on pop science accounts of quantum physics.

The mainstream paper presents a clear and unambiguous change in the interference pattern. Let’s look at it again.

The dots are the particles and the lines mark the theoretically expected interference patterns fitted to the actual results. As you can see the dots don’t exactly follow the lines. That’s just unavoidable random variation due to any number of reasons. And yet the change in the pattern can be clearly seen.

From what is reported in Radin’s paper, we can deduce that the change associated with attention was not even remotely as clean. In fact, the patterns must have been virtually identical the whole time.
That means that if there is a real effect in Radin’s paper, it is tiny. So tiny that it can’t be properly seen with the equipment they used.

That is hardly a surprising result. If paying attention to something were able to change its quantum behavior in a noticeable way, then this should have been noticed long ago. Careful experiments would be plagued by inexplicable noise, depending on what the experimenters were thinking about.

The “positive” result that he reports suffers from the same problem as virtually all positive results in parapsychology, and also many in certain recognized scientific disciplines. It may simply be due to kinks in the social science methodology employed.
Some of the weirdness in the paper, not all of which I have mentioned, leaves me with no confidence that there is more than “flexible methods” going on here.

Poor Quantum Physics

Radin believes that a positive result supports “consciousness causes collapse”. He bemoans a lack of experimental tests of that idea and attributes it, quite without justification, to a “taboo” against including consciousness in physics.
So thousands upon thousands of physicists, and many times more students, have supposedly refused, out of some desire to conform, to do a simple and obvious experiment. I think it says a lot about Radin and the company he keeps that he has no problem believing that.
I don’t know about you, my dear readers, but in such a situation I would have thought differently. Either all those people who should know more about the subject than me had their heads up their behinds, or maybe it was just me. I would have wondered whether there was something I was missing. And I would have found out what it was and avoided making an ass of myself. Then again, I would have (and have) also missed out on book deals and the adoration of many fans and the like, all of which Radin secured for himself.
So who’s to say that reasonable thinking is actually the same as sensible thinking?

But back to the physics. As is obvious to anyone who manages to find the relevant literature, conscious awareness of any information is not necessary to affect an interference pattern. Moreover, wave function collapse is not necessary to explain this. Both of these points should be plain from the mainstream paper mentioned here.

Outlook

My advice to anyone who thinks that there’s something to this is to try to build a more sensitive apparatus and/or to calibrate it better. If the effect still doesn’t rise above the noise, it probably wasn’t there in the first place. If it does, however, future research becomes much easier.
For example, if tiny magnetic fields influence this, as Radin suggests, that could be learned in a few days.

Unfortunately, it does not appear that this is the way Dean Radin and his colleagues are going about it, but I’ll refrain from comment until I have more solid information.
But at least they are continuing the line of investigation. They deserve some praise for that. It is all too often the case that parapsychologists present a supposedly awesome, earth-shattering result and then move on to do something completely different.


Update

I omitted to comment on a lot of details in the second paper to keep things halfway brief. In doing so I overlooked one curiosity that really should be mentioned.

The fourth experiment is “retrocausal”. That means, in this case, that the double-slit part of the experiment was run and recorded three months before the humans viewed this record, and tried to influence it. The retrocausality in itself is not really such an issue. Time is a curious thing in modern physics and not at all like we intuit.

The curious thing is that this implies that the entire recording was in a state of quantum superposition for a whole three months. Getting macroscopic objects into such states, and keeping them there, is enormously difficult. It certainly does not just happen on its own. What they claim to have done is simply impossible as far as mainstream quantum physics is concerned. Not just in theory; it can’t be done in practice despite physicists trying really hard.
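
For a sense of scale (a textbook decoherence estimate, not specific to this experiment): interaction with the environment suppresses the superposition terms of a system’s density matrix roughly as

$$\rho_{ij}(t) \approx \rho_{ij}(0)\,e^{-t/\tau_D},$$

and estimates in the style of Joos and Zeh put the decoherence time $\tau_D$ for even a dust-grain-sized object in ordinary air at an unimaginably small fraction of a second. The claim here would require $\tau_D$ to exceed three months, roughly $8 \times 10^6$ seconds, for an entire optical recording.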

11 Comments

  1. Arnaud Delorme said,

    June 11, 2012 at 7:33 pm

    I did not find this comment objective. The author claims that Radin used “kinks in the social science methodology” but does not provide any details on how his methodology is flawed. Does it mean that all social science experiments are flawed?

    • June 12, 2012 at 10:07 am

      For details on what these kinks may be see the linked paper by Simmons et al.
      It’s in “Psychological Science” and titled “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant”

I do not know how widespread such practices are in the social sciences as a whole, but for social psychology in particular see:
      “Measuring the prevalence of questionable research practices with incentives for truth-telling.” by Leslie John and others.

      • Chris said,

        August 2, 2013 at 10:05 pm

It’s fair enough to point out that “undisclosed flexibility in data collection and analysis” can produce an increase in the number of marginally significant results. But given that the p-value in Radin’s fifth experiment was around 4 millionths, in that case “undisclosed flexibility” is wholly inadequate as an explanation.

        Surely a result like this can only be explained by one of the following?
        (1) A genuine effect
        (2) A major flaw in experimental design
        (3) Outright fraud

  2. ty said,

    July 3, 2012 at 3:25 pm

    “The journals that published the articles discussed in this series are not reputable…Physics Essays, where the most recent article appeared may very well be the closest to the mainstream and still it is mostly ignored.”

Where’s your support for the assertion that Physics Essays is “ignored” by mainstream scientists? Ignored by whom? You and….?

    “It is largely an outlet for people who believe that Einstein was wrong. We’re not talking about scientists looking for the next big thing, we’re talking about people who are to Einstein’s theory what creationists are to evolution.”

    Yeah…Einstein was wrong when it came to his perspective on quantum physics. Countless “mainstream” experiments demonstrating hypotheses in quantum physics have shown this (think: Bell’s Theorem, Alain Aspect, Anton Zeilinger, etc.) And the analogy you provide in the last sentence of the above quote is egregiously wrong.

    Speaking of Jeffers, the “physicist who investigates”, you conveniently forgot:

    Freedman, M., Jeffers, S., Saeger, K., Binns, M., & Black, S. (2003). Effects of Frontal Lobe Lesions on Intentionality and Random Physical Phenomena. Journal of Scientific Exploration, Vol. 17, No. 4, pp. 651–668.

Oops. Maybe you better start challenging Jeffers’ credentials now – you know, lump him together with Radin and all those other “charlatans” doing scientific research and obtaining results that don’t accord with your a priori assumptions.

Your goals here are not as modest as simply pointing out the flaws in a “crackpot” paper. There is no singular flaw plaguing Radin’s study. Rather, you are attacking what you believe to be the flaws in the methodology that underpins all of the social and behavioral sciences (this of course includes cognitive neuroscience).

And yes, as the first commenter observed, this entry reeks of bias. It’s disheartening how scathing, biased, pseudo-skeptical pulp like this masquerades as good science.

    Why not attempt to publish this entry as a formal response-critique of Radin’s paper, the social and behavioral sciences, modern physics, in Physics Essays or another scientific journal?

    • July 7, 2012 at 10:47 am

      Re: Physics Essays
      There are institutions that track how often scientific works are cited. -> http://en.wikipedia.org/wiki/Impact_factor
      I was not able to find a current impact factor which is a bad sign. The old impact factors were very low which means that the papers are cited very little.

      Re: Publishing this in a journal
      These blog posts are not in a format that is suitable for scientific publication. I deal at great length with issues that should be known to a scientific audience. I don’t think there would be enough left for publication after taking all that out.
      In any case, I don’t think there is a point in a rebuttal since I doubt anyone will take this paper seriously.

      Re:Accusations of bias
I am not able to respond to these accusations as you don’t provide any justification. If you believe that there are errors in these posts then please point them out. If you do not know of any errors then honesty and integrity would demand that you retract your accusation.

      • ty said,

        July 7, 2012 at 3:38 pm

        Justification of bias:

For starters: In your post, you heralded physicist Stanley Jeffers for his skepticism and research into psychokinesis/intentionality/whatever. The work by Jeffers that you alluded to did, indeed, fail to obtain significant results. However, you failed to mention a more recent study (I cited it above) conducted by Jeffers that DID find a significant effect. Furthermore, Jeffers was able to replicate this finding. It should be transparent why the convenient omission of this study leads me to conclude bias, and not ignorance. The fact that Stanley Jeffers, a mainstream scientist and staunch skeptic, found a significant effect when he (like a good scientist) actually investigated the phenomena does not accord with your de facto “psi is impossible” assumption. More importantly, this has significant implications for the whole psi debate (i.e., mainstream skeptics can and do obtain positive empirical evidence for psi – all they have to do is investigate…).

        Here’s another overt example of bias: You state that psi-related research does not generally get published in mainstream scientific journals because such research is not conducted rigorously, employs flawed methodology, etc. In other words, you imply psi research does not meet the strict inclusion criteria that mainstream scientific journals demand. However, this is false. Certain (if not all) mainstream journals specifically refuse to even consider any submission that includes psi-related words/terms in the text, if the article presents positive evidence for psi.

        Also, I believe your portrayal of the state of modern physics reveals a bias. I gave an example in the previous post.

        Publishing in a journal:

        Obviously, blog posts such as these are not suitable for publication. I suppose I should have specified: If you believe that your criticisms here are sound, you should refurbish and publish them in a formal rebuttal/response to Physics Essays.

As it happens, I don’t find your general criticisms sound/cogent: they are mostly a rehashing of the trite, pathologically-skeptical rhetoric that obfuscates genuine scientific inquiry. I don’t believe that you have identified any crippling methodological flaw that is unique to Radin’s paper, either. Rather (as I said previously), your only valid points of contention concern the methodology underlying ALL of the social and behavioral sciences – specifically concerning the use of classical statistics. In the relevant (mainstream) scientific community, these points of contention are being vigorously disputed.

{As you know, Daryl Bem’s recent “Feeling the Future” study was the impetus for this debate (as skeptics Wiseman/French/Ritchie state in “Replication…” regarding Bem’s paper: “we don’t think any of them [methodological/design flaws] completely undermine what appears to be an impressive dataset.” Thus, others – like Wagenmakers – have asserted it must therefore be a flaw in the classical-statistical methodology employed. Nevertheless, see Rouder and Morey’s paper on the issue.)}

  3. July 8, 2012 at 7:02 pm

    Re: Freedman et al
    The study you mention has no bearing at all on the topic, which is double-slit experiments. I do not understand why you think I should have included it.
    I see no evidence for your contention that Jeffers conducted the study. He obviously was not the principal investigator and the methodology is not what you’d expect from him.

    Re: Psi research in mainstream journals.
    There are still reputable journals that are, in principle, open to such research.
Some journals will now refuse parapsychological papers without review. That was not always so, but it has been well and truly earned.

    Re: Statistics
    You appear to be conflating different issues. I raised the likelihood of data fishing. This is widely recognized as a questionable practice that leads to false positives. There is no debate on that.
    What you’re thinking of is the long-standing debate over Null-Hypothesis Significance Testing. In the wake of the Bem debacle a certain alternative was discussed. That is an entirely separate issue.
    I am surprised that you think that my strongest point lies in criticizing the statistics. I would have thought pointing out that the effect is clearly within the noise would be the one fatal blow.

    Re:Physics
    If you think that I have made a mistake in my presentation of the background to the double-slit experiment then please point it out.

    I have no interest in publishing in Physics Essays.

    • ty said,

      July 9, 2012 at 12:58 pm

      Re: Re: Freedman et al

      Both the double slit paradigm discussed herein and the paradigm employed in Freedman et al. sought to examine the hypothesis of whether or not intentionality/attention can have a direct effect on physical systems. And since you were, indeed, discussing Jeffers’ research into such a hypothesis…well, it’s not difficult to make the connection.

      “I see no evidence for your contention that Jeffers conducted the study. He obviously was not the principal investigator…”

      You’re quibbling now.

True, Jeffers was not the “principal” investigator. But so what? All of the investigators contribute immensely to a study (Jeffers’ name was second). It’s obvious from reading the article that Jeffers had a very significant role in the implementation of this study, especially regarding the application of proper controls and due diligence.

      Re: Re: Statistics

      “On the whole, this is not handled like a physics experiment but more like one in social science.
      Most importantly, the effect size reported is not about any physically measurable change to the interference pattern. It is simply a measure for how much the result deviated from ‘chance expectation'”

      “In all likelihood, Radin simply did not get a positive result and then, instead of accepting it, went fishing.”

      “The ‘positive’ result that he reports suffer from the same problem as virtually all positive results in parapsychology and also many in certain recognized scientific disciplines. It may simply be due to kinks in the social science methodology employed.”

      The above are criticisms founded on potential problems in social/behavioral science methodology.

      But then, you say:

      “…the effect is clearly within the noise…”

      Well, which is it? How are we evaluating the study?

      If we are evaluating the study as if it were “handled…more like one in social science” (as you said, which it was, and which is irrelevant to the internal validity of the study), then the former criticisms are applicable, and not the latter.

      Your recourse to accusing Radin of “fishing” is, in the context of social/behavioral science, equivalent to accusing someone of outright fraud. You might as well just stubbornly accuse Radin of simply fabricating the data. To me, your accusation of “fishing” is a cop out. If one disagrees with the results of any study proffered from the social/behavioral sciences, one can always fall back on this accusation, regardless of the quality of the study’s methodology.

      Re: Re:Physics

      “If you think that I have made a mistake in my presentation of the background to the double-slit experiment then please point it out.”

      Never said that. (Straw man?)

      “I have no interest in publishing in Physics Essays.”

      Of course you don’t.

      • July 9, 2012 at 5:55 pm

        Re:Unrelated experiments
        The experiments are very different and I see no reason to suppose that any effects in both should be related. Neither do I see evidence that this effect is causally related to intention.
        In parapsychology it is normal to jump to such conclusions. That’s one of its problems but that’s a different matter.
        There is no shortage of experiments claimed to investigate the effect of intention. This post is not about them.
        This series is about double-slit experiments and that’s that.
BTW, misrepresenting someone’s contribution to an experiment is not a small thing. Correcting that is not quibbling.

        Re:The effect in the noise
        I don’t think I understand what you are trying to say here.
Radin tried to conduct a physics experiment of a common kind. Apparently the predicted effect was not seen. His analysis can mask that fact but not change it.
The way he handles it turns it into an ordinary random-number experiment, which is quite simply not what he presents it as.
I think I wrote all that in my series. What confuses you?

        Re: “Fishing”
        You are much too optimistic about standards of conduct in science. I advise you to read:
        “Measuring the prevalence of questionable research practices with incentives for truth-telling.” by Leslie John and others.
        Radin was found to have used such methods in the past by fellow parapsychologists. It was in a paper about the GCP and 9/11, the title of which escapes me.

        Re:Physics
        Then you retract your accusations of a biased presentation?

  4. Jodey said,

    April 23, 2013 at 5:35 am

    Response to Bearnormality’s “Attention! Double Slit!” article on the research of Dr. Radin

    The following is a response to criticisms laid out in the article “Attention! Double Slit!”, which can be found here:

    Attention! Double-slit!

    Introduction

    I will only cover the introduction of this article briefly, as it appears to rely on ad-hominem attacks, as well as priming the reader to believe that the conclusions of Dr. Radin are not to be believed, before even analyzing the research itself.

The author begins by asserting “The journals that published the articles discussed in this series are not reputable.” Whether this is true or not, it says nothing of the procedure or controls utilized in Dr. Radin’s experiment. The author then goes on to compare the journal with “creationism”, which has nothing to do with Dr. Radin’s hypothesis, nor his research. Further claims include that discussion of psi research does not qualify as a “legitimate scientific debate”, despite researchers in various areas having gone back and forth about the legitimacy and implications of psi research for decades now. The rest is an example of “poisoning the well”, without much logical merit.

    Part 1

    To his credit, in the first section, the author gives a comprehensive overview of the physics involved with the “double-slit experiment”, even going through the trouble of acknowledging the various competing theories as to what the results of the double-slit experiment might imply. Of note is the relevant “Copenhagen Interpretation”, which the author explains here:

    “This view was held by some big names like Von Neumann or Wigner but is decidedly a minority opinion now. In my opinion that’s because no one has figured out a way in which a conscious observer should be different from any old measurement device. But I’ll just leave it at pointing out that there is no evidence for this view, just like there is no evidence that collapse is real or not.“

There are a number of problems with this paragraph. The first implication is that science is somehow a “popularity contest”, in which the most popular opinion must be true… which is obviously illogical. Secondly, the research by Dr. Radin makes no claim as to -how- a conscious observer is different from a measurement device… and his results indicate that conscious meditators directing their attention at the double slit produce results identical to those of a measurement device placed at the slit and tuned down to a low level. Finally, there is the assertion that “there is no evidence for this view”, despite the fact that this is exactly what Dr. Radin’s research shows to be true.

    Part 2

This section seems to largely deal with results produced by the separate PEAR lab, and not Dr. Radin’s research… therefore, it is confusing why the author spent so much time analyzing research that was not put forth by Dr. Radin, while attempting to criticize Dr. Radin’s research.

One of the key differences between such research, which the author failed to acknowledge, is the use of experienced meditators vs. those inexperienced with meditative practices. Dr. Radin’s research utilized participants who were experienced with meditative practices, producing a p-value of 9.4×10^-6, far beyond what would be expected by chance. By comparison, non-meditators, and the control condition of having no one focus on the double slit, produced merely chance results.

    This rules out the criticism of improper tuning of the equipment, posed by the author. Furthermore, the result is significant enough that it cannot be attributed to noise, as the author claims.

    Part 2, Conclusion

    In his conclusion, and furthermore the rest of his analysis, the author barely touches on Dr. Radin’s actual experiment. Rather, he concludes that experiments which did not follow Dr. Radin’s experimental setup are discouraging to parapsychology, while failing to address any hypothetical problems in Dr. Radin’s setup, or conclusions.

The author makes further assumptions about the experiment. The first is a blatant falsehood, in claiming “the supposed effect was still hidden in the noise.” Dr. Radin’s results are too statistically significant to be reasonably attributed to random chance, or noise. Secondly, although there can be no harm done in creating an apparatus that is less prone to noise… as stated, the results cannot be attributed to noise. Last, the author claims that additional experiments would be no more convincing (read: would provide no greater statistical evidence). Further runs of the experiment (if they in turn produce statistically significant results) could only further support the initial hypothesis. This is the exact opposite of being “useless” and/or “unconvincing”, as the author states.

    Part 2, Further Thoughts

Although the author fails to raise legitimate counterpoints in these arguments, they will be dealt with for the sake of completeness.

    The author’s first point is as follows,

    “A physicist or engineer would test this directly by having people try to exert that force on a force sensor of some kind. Obviously, parapsychologists don’t do it that way. It fails to show what they know to be true.”

This is beyond the scope of Dr. Radin’s experiment, and therefore a moot point. This experiment provided evidence that the “measurement problem” involved with quantum mechanics is far more complex than most tend to assume, in terms of how consciousness differs from physical measurement. The effect on other measuring devices is a subject for a different experiment.

    “One answer to that is abandoning careful experiments and simply argue that some feat can’t possibly have been a trick because they surely would have noticed.”

The assertion here is that Dr. Radin’s experiment was improperly controlled, and that the result was therefore due to some factor other than the conscious meditators. If such a criticism is to hold any weight, the author would have to identify which factor was improperly controlled, or provide an alternative explanation for the results, beyond the conscious influence. Since the author fails to identify any such flaws, this is not a legitimate argument.

    “The other answer is to conduct experiments involving randomness, like the RNGs of PEAR or radioactive decay or something as simple as die. These experiments seem much less silly than any argument for the genuineness of something that looks like a magic trick. But are they really? Why not measure the effect directly? Why is there always randomness involved? An obvious answer is that the “effect” is simply the result of data mining. My take on that here.”

The experiment in question was an investigation of conscious influence on the uncertainty principle, in terms of wave/particle duality in the double-slit experiment. Die rolls do not appear to be subject to wave/particle duality, so this is comparing apples to oranges. It is not certain whether conscious influence is able to affect the outcome of die rolls, but this was completely beyond the scope of Dr. Radin’s experiment.

Furthermore, an obvious explanation as to why “there is always randomness involved” is twofold. The most obvious one is that we are not dealing with deterministic systems. In stochastic systems, by definition, there is always a certain amount of randomness involved, by their very nature. Secondly, as evidenced by the experimental setup involving those with “experience with meditation”, we are dealing fundamentally with a developed skill that is related to consciousness. Similar to how certain baseball players are able to hit home runs with a certain efficiency, it appears as though conscious meditators are able to break down quantum superposition with a certain efficiency, depending on meditative skill. Therefore, the “meditative skill” of the participant in question, although difficult to quantify, explains the differences in effect sizes (most significantly the lack of results with those completely unskilled in meditative practices, much like those completely unskilled in swinging a bat).

    Part 3

    The author begins with another argument about problems with tuning, which doesn’t hold much weight, considering the control group still produced chance results.

The next point asserts that Dr. Radin went “fishing for data”, despite the fact that the initial hypothesis distinguished those who were reportedly “experienced with meditation” from those who were not. This was not a post-hoc analysis placing groups into “experienced” vs. “inexperienced” based merely on positive results, as the author claims.

    “The title of the paper, Testing Nonlocal Observation as a Source of Intuitive Knowledge, tells us that Radin sees this as relevant to intuition. Intuition, as he points out, is regarded as an important source of artistic inspiration or for scientific insights.”

This is a fair criticism as to the factors which make one participant better at “decohering” a quantum particle than another, but it says nothing of the experimental results themselves. It is quite possible that there is no causal connection between experience with meditation and “intuition”, but that does nothing to disprove the experimental results. The rest of the section is addressed with this point.

    “We detect photons all the time with our eyes, and much more effectively. If that doesn’t tell us anything about intuition, then why should this?”

For starters, what is being measured is not photons, but rather electrons, which are not the same. Furthermore, the effects of quantum decoherence can only be observed in highly controlled experimental designs. While this point addresses the overall question of “What does the double-slit experiment mean in terms of everyday experience?”, this is not something easily addressed by any rendition of the double-slit experiment, which is why so many competing interpretations currently exist in the first place, in mainstream physics. In terms of “Why aren’t we constantly collapsing quantum states/entanglement/etc. by viewing reality?”, this is a question for the field of quantum mechanics at large, to which there doesn’t seem to be a clear answer, even in “mainstream” physics.

    Part 3, Six More Experiments

    “Finally, we get to the current paper by Dean Radin, Leena Michel, Karla Galdamez, Paul Wendland, Robert Rickenbach, and Arnaud Delorme. I think Radin wrote most of the paper. It repeats so many errors of the previous one and the statistics are his style.”

It is unclear what “so many errors” the author is referring to here. His previous criticisms involve improperly tuned apparatuses, which, given the control group of chance expectation, does not appear to be a valid criticism.

    “Unfortunately this paper again does not report the actual physical outcome. They don’t report how much the subjects were able to influence the pattern. What seems clear is that none of them was able to alter the pattern in a way that stood out above the noise.”

As the author provides no link for this paper, I can’t speak to whether or not the paper reports the physical outcome, or the participants’ influence. However, the last sentence does not follow from the previous ones. If it’s true that the paper somehow did not report on the results of the experiment, it is impossible to tell whether the participants were able to alter the pattern or not, which does not support the author’s assertion that they were not able to influence the pattern above noise.

    “It would have been interesting to know if the measurements were any more precise than those conducted by Jeffers and Ibison. If their apparatus is better, and the effect still couldn’t be clearly measured, then that would suggest confirmation that Ibison’s result was just chance.”

Although true to the idea, the author still incorrectly assumes that the device was improperly tuned to begin with, which was discounted by the control group. If a more finely tuned apparatus were used, it would be interesting to repeat the experiment again… however, this still does nothing to discount the research performed by Dr. Radin, whose results were far above any expected noise. The author’s next paragraph merely repeats this point.

    “Instead, Radin performs a plainly bizarre statistical analysis. I’m sure this was thought up by him rather than a co-author because it is just his style. Basically, he takes out all the big ups and downs. So far so fine. This should indeed remove any spurious ups and downs coming from within the apparatus. But wait, it should also remove any real effect!”

This criticism is rather confusing, as it seems to deny the scientific validity of statistical analysis, utilized by so many scientific fields of exploration. If “statistical analysis” is bizarre, then most fields of science could similarly be considered “woo” by this criterion. Similarly, by discounting trials that were subject to large amounts of noise (whether in support of or against the hypothesis), if the effect was due to noise, this should remove the effect. The experiment shows clearly that the effect was not due to random noise, since it remained after trials containing a severe amount of noise, which would have skewed the overall results one way or the other, were taken out.

    “The second attempt concerns the possibility that warmth might have affected the result. A good way to test this is probably to introduce heat sources into the room and see how that affects the apparatus. What is done is quite different. Four thermometers are placed in the room while an experiment is conducted. The idea seems to have been that if the room gets warmer this indicates that warmth may have been responsible. Unfortunately, since you don’t know if you are measuring at the right places, you can’t conclude that warmth is not responsible if you don’t find any. Besides it might not be a steady increase you are looking for. In short, you don’t know if your four thermometers could pick up anything relevant or how to recognize it, if they did. Conversely, even if the room got warmer with someone in it, this would not necessarily affect the measurement adversely. In any case, temperature indeed seemed to increase slightly. Why the same temperature measurements were not conducted in the other experiments, or why the possible temperature influence was not investigated further, is unclear to me. They believe this should work, so why don’t they continue with it?”

This paragraph seems primarily to be an argument against itself. Even outside the control group, which shows that thermal fluctuations could not account for above-chance results, Dr. Radin painstakingly measured the temperature at multiple points in the room, in order to further discount the influence of thermal fluctuations on the results. This appears to be an argument of “subjectively not significant enough for me.” One thermometer would be ideal for detecting thermal fluctuations in a room due to the participant. Four could reasonably be considered overkill… the argument that “I won’t believe these results until they put a hundred thermometers in the room” is rather ridiculous by any standard. The results cannot be reasonably attributed to thermal fluctuations, which were accounted for in the experimental setup.

    The next part of the criticism revolves around,

    “That means, that if there is a real effect in Radin’s paper, it is tiny. So tiny that it can’t be properly seen with the equipment they used… That is hardly a surprising result. If paying attention on something was able to change its quantum behavior in a noticeable way, then this should have been noticed long ago. Careful experiments should be plagued by inexplicable noise, depending on what the experimenters are thinking about.”

The argument in Dr. Radin’s paper is not about how mind-blowingly large the result is… the argument is how statistically significant the results are, which, as he found, is far beyond chance expectation. As pointed out regarding previous experiments, this is not an effect that would be noticed in unrelated quantum double-slit experiments. Those who are inexperienced with meditative practices do not produce statistically significant results, and it’s quite a leap to assume that all previous double-slit experiments were conducted by researchers who were experienced with meditative practices and who were focusing on the double slit during their experiments… as opposed to the normal setup, which simply involves a mechanical measuring device, with little to no influence on the part of the researcher.

    “Radin believes that a positive result supports “consciousness causes collapse”.  He bemoans a lack of experimental tests of that idea and attributes it, quite without justification, to a “taboo” against including consciousness in physics. Thousands upon thousands of physicists and many times more students have out of some desire to conform simply refused to do some simple and obvious experiment. I think it says a lot about Radin and the company he keeps that he has no problem believing that.”

His results do support the idea that “consciousness causes collapse”, based on his experimental evidence. Perhaps citing a “taboo” against this sort of research is a stretch, but the fact remains that most researchers are uninterested in replicating these sorts of experiments, both due to the lack of funding for replications and due to the very slight (though real) effect these experiments produce, which admittedly lacks any practical utilization because of its small size. The bottom line, however, is the continued assumption that science is somehow a “popularity contest” and, by association, that experimental evidence is invalid on the basis that the results have not gained more attention.

    “Moreover, wave function collapse is not necessary to explain this. Both of this should be plain from the mainstream paper mentioned here.”

    I have skipped over various ad-hominem attacks to this point. Wave function collapse still holds its ground in discussion of quantum mechanics… and at the very least, Dr. Radin’s research shows evidence that conscious meditators are able to act similarly to how physical measurement devices are observed to work in the double-slit experiment, regardless of the physical interpretation.

    In the “mainstream”, it is generally assumed that consciousness is not able to cause “collapse” or “decoherence”. However, Dr. Radin’s results imply otherwise, despite pre-existing assumptions, which sheds new light on the “measurement problem”.

    Part 3, Outlook

    “My advice to anyone who thinks that there’s something about this is to try to build a more sensitive apparatus and/or to calibrate it better. If the effect still doesn’t rise over the noise, it probably still wasn’t there in the first place. If it does, however, future research becomes much easier.
    For example, if tiny magnetic fields influence this, as Radin suggests, that could be learned in a few days.”

As explained ad nauseam, the results cannot be explained away by improper calibration, as evidenced by the control group. There is no harm in utilizing an even more accurate apparatus, but this does not explain away Dr. Radin’s results, which were far beyond noise/chance expectation. Furthermore, electromagnetism was historically proposed as a mechanism for psi, but other experiments (ganzfeld) indicate that this isn’t the case. The body produces an incredibly weak electromagnetic field, which becomes indecipherable from background noise more than a few inches away from the body.

    Part 4, Update

    This section is largely an aside, but still worth analyzing.

    “The fourth experiment is “retro causal”. That means, in this case, that the double-slit part of the experiment was run and recorded three months before the humans viewed this record, and tried to influence it. The retro causality in itself is not really such an issue. Time is a curious thing in modern physics and not at all like we intuit. The curious thing is that it implies that the entire recording was in a state of quantum superposition for a whole three months. Getting macroscopic objects into and keeping them in such states is enormously difficult. It certainly does not just happen on its own. What they claim to have done there is simply impossible as far as mainstream quantum physics is concerned. Not just in theory, but it can’t be done in practice despite physicists trying really hard.”

As explained earlier, quantum entanglement/superposition can only be detected using a highly specific setup. Furthermore, it does not appear as though this superposition breaks down over time. Taking the example of a subatomic particle decaying into two separate particles, there is little (no) evidence to indicate that, after leaving these alone for any amount of time, we will somehow observe that both particles have a positive/clockwise spin state, or both a negative/counterclockwise one. Although it can be difficult to cause certain particles to become entangled or superposed, and to detect this… it does not appear overly difficult to keep them in such a state, so long as no measurements are made on them and they remain isolated.

Entanglement, arguably, does not happen on its own… but this experiment does not rely on such. It merely requires the particles to be in a naturally superposed state before measurement, which according to mainstream theories should remain as such until measured. If entangled particles become “less entangled” over time, while isolated and without measurement, the author should provide some peer-reviewed evidence to support this point and thereby discount the retro-causal experiments.

    • April 28, 2013 at 11:35 am

      Thank you for your comment. It is important for me to learn in what way my posts can be misunderstood so that I can do better the next time around.

      First of all, you confuse ad hominem argument with argument from authority. I know that some of Radin’s fans mistakenly believed that Physics Essays was a reputable, mainstream physics publication. I considered it important to correct this.

      Second, Radin’s experiment and what he measures is not at all like any proper physics experiment attempting to measure the same. I am not quite sure how I could be clearer on that. I am not saying that his equipment is mistuned, at all. Though see this paper by Pallikari.

      Radin simply did not measure the change in diffraction pattern as is done in physics. Apparently he data-mined the noise for significance. For more information, see this paper by Simmons et al.

I hope that this takes care of the misapprehensions regarding statistics.

You also complain a lot about me providing background that is not immediately relevant to Radin’s experiments. The reason I do so is that I try not to take any scientific or statistical knowledge as a given.

      You misunderstand what I say about the bizarre statistical analysis, of course. It is the method that Radin uses which is bizarre, not statistics as such. If you are interested I can give a more detailed account of that.

Next up, thermometers: It does not matter if you put 4 or 4,000 thermometers in the room. The method, as practiced, is incapable of serving as a control.
And a word of advice: Whenever you think a skeptic makes an argument of “subjectively not significant enough for me”, you have probably misunderstood it.

      The ‘Six Experiments’ article was linked indirectly in the opening post. Find it here via Radin’s blog.

      I think that covers all your points.

