Modern Mediumship Research: Kelly and Arcangel

This paper is almost hot off the presses, having been published only this year.

Emily Kelly and Dianne Arcangel are firm believers in mediumship. Their stated goal is to find a medium who can perform under laboratory conditions, as supposedly existed back in the old days.
To do this, they administer a test to volunteer mediums. A skeptic would see this as simply a test of mediumship, of someone’s paranormal ability. For them, it is only a test of whether mediums can do something paranormal under certain conditions. The possibility that perhaps one or another medium may not be able to do anything paranormal at all never enters the article.
I wonder why that is. Are they somehow convinced of the abilities of their research mediums? Do they want to spare their feelings? Are they afraid of the hostile reaction that skepticism invariably provokes from the paranormal scene?

Despite being firmly in the believer camp, they find the same problems in previous modern research on mediumship (i.e. by Schwartz, Beischel and Robertson & Roy) as yours truly. Naturally, I find this quite endearing. Indeed, making allowances for the authors’ convictions, there is nothing about the paper that I could call wrong. Sure, it’s not evidence for mediumship, but neither is it claimed to be.

The Experiments

Their first experiment involved 4 mediums and 12 sitters (that is, people who receive a reading). Each medium read 3 sitters, for a total of 12 readings. The readings were conducted over the telephone, with Emily Kelly standing in for the actual sitter. She knew only the first name and birthday (but not the year) of the deceased person who was to be contacted. Care was taken that only Dianne Arcangel had contact with the sitters before the reading.
That way it was ensured that no information was leaked to the mediums.

The sitters were given their own reading as well as 3 randomly selected others. Mentions of names and birthdays had been deleted. They then rated the readings for accuracy and had to pick their own.
The results were completely and unambiguously negative.

Therefore they decided to make a few changes in their next experiment. Mediums would now receive a photograph of the deceased on top of the other information. The experiment would also be conducted in a sloppier manner: in the previous experiment, it was ensured that the stand-in for the sitter knew nothing besides name and birthday. This good practice was abandoned.
This introduces the possibility that the medium received clues before or during the reading. It also makes one wonder if any other corners were cut.

This second experiment involved 40 sitters and 9 mediums. 14 of the 40 chose their reading as the most applicable which is significantly more than you would expect if they had been guessing. The question, of course, is why.

The Conclusion

The failure of the first experiment is in itself interesting. The mediums, we are told, had thought that they could perform under the conditions. This tells us that they overestimated their abilities.
It also seems inconsistent with a study by Gary Schwartz and Julie Beischel that is widely touted as evidence for mediumship (discussed below, under Triple Blind!). That study found significant results under very similar conditions.
Why the difference? There are a number of possibilities. For one, it could have been a fluke. But it must also be said that, while similar, the task in this experiment was actually more difficult. Ultimately, there is too little data to engage in fruitful speculation.

The success of the second experiment, in contrast with the first, may seem to imply that either the loosened protocol or the photograph played a key role, but one shouldn’t forget that the mediums were different, too. There is ample room for a normal, rather than paranormal, explanation, and indeed the differences between the outcomes of the various experiments can be seen as pointing to one.
Some believers have argued that some of the “hits” quoted in the paper are too specific to be explained normally. They forget that some of those who picked the wrong reading also found specific information in it, seemingly just for them. Illusory correlation goes a long way toward explaining this.

Further Research

If Kelly and Arcangel have found one or more real mediums then they should be able to present results from well designed experiments within the next few years. Of course, this has never happened in any similar situation in the past, so don’t hold your breath.

Nevertheless, that doesn’t mean that we shouldn’t expect more research. For one, they express the belief that it may be necessary to “prime the pump” by feeding information to mediums. The mediums can then supposedly come up with even more information. This is effectively what they have done in this study, and I’d expect them to do more of the same in the future. What I don’t expect is for them, or anyone else, to address the contradiction between this claim and the claims of others that mediums don’t need any previous information at all, not even to contact the deceased (see e.g. Julie Beischel).

Source:
An Investigation of Mediums Who Claim to Give Information About Deceased Persons by Kelly and Arcangel

Modern Mediumship Research: Robertson & Roy

This post will examine the relatively extensive experiments conducted by Tricia Robertson and Archie Roy. In some ways, these are the best I know of. But I’m getting ahead of myself, for they were off to a rather shaky start.

Archie Roy is Professor Emeritus of Astronomy at Glasgow University (emeritus means that he is retired). He has had an interest in the paranormal for several decades and was president of the Society for Psychical Research (SPR) from 1993 to 1995. Tricia Robertson is a past president of the Scottish SPR (SSPR), but I don’t know that she has mainstream qualifications.

The First Article

In 2001 they published the first paper that will interest us here. Interestingly, the results are presented not as evidence of mediumship but as a test of “the sceptical hypothesis that the statements made by mediums to recipients are so general that they could as readily be accepted by non-recipients”.

To do so they conducted a “two-year study involving 10 mediums, 44 recipients and 407 non-recipients”. That’s really awesome in many ways. They really spent time and effort on this, and they don’t even claim proof of mediumship. Compare that to the junk that Gary Schwartz has produced!

Unfortunately, despite all that the study is scientifically worthless. A complete waste of time and effort.

The first problem is obviously that no halfway knowledgeable skeptic actually claims that. The second problem is that the experiment was carried out so badly that no conclusions are possible. It is indeed a maximally shaky start.

What they did

They assembled an audience and then had a medium perform a reading for one of the people in that audience. The reading was then broken down into individual items. All subjects (the entire audience), including the intended recipient of the reading, then rated these items as applicable to themselves or not.

This then allows them to determine whether the recipient endorsed more items as applicable than the non-recipients did, which would mean that they found the reading as a whole more applicable. It also allows them to deduce how specific an item was: if many people say that an item applies to them, it is quite general; if few, or just one, say so, then it is specific.

They develop a fairly complicated method to statistically evaluate the readings based on both the number and the specificity of the accepted statements but we needn’t concern ourselves with the details now.

The First Problem

Now we face the first problem. What they test is if the entire reading is so general that it could be accepted by anyone. Yet, if we actually look at the literature on how to fake such readings we learn that this is only part of the art. An accomplished “faker” (actually, many mentalists, like magicians, are quite upfront about their use of trickery) will use any means at his or her disposal to tailor his or her statements specifically to the client.

But all right. Just because no one actually believes a certain hypothesis to be true, does not mean that it shouldn’t be tested. It just means that falsifying it is as exciting as finding that water is not dry.

This gets us to problem number two.

The Second Problem

Now it’s time to ask ourselves what results we would expect from the experiment, depending on whether the hypothesis is true or not.

If the medium makes statements that are true only for the recipient and for few or no others then we would find that the intended recipients accept more statements than non-recipients. Obvious.

But what if the medium only makes statements that are true for most or every one of the subjects?

We would expect exactly the same result. This is counter-intuitive but supported by ample psychological research. Such phenomena are called Barnum effect, illusory correlation, confirmation bias and a number of other names. It’s not necessary to go into detail now. Suffice it to say that Robertson and Roy will be able to confirm this expectation in a later experiment.
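The logic of that prediction can be illustrated with a toy simulation (all numbers are made up for illustration, not taken from Robertson and Roy): even if every statement is generic, a recipient who believes a reading is “theirs” will endorse more items than the non-recipients do.

```python
import random

random.seed(0)

N_ITEMS = 50      # statements per reading (made-up figure)
P_GENERIC = 0.6   # chance that a generic statement is accepted on its merits
BIAS = 0.2        # extra acceptance from believing "this reading is for me"
TRIALS = 1000     # simulated subjects per group

def accepted(p):
    """Number of the N_ITEMS statements a subject endorses as applicable."""
    return sum(random.random() < p for _ in range(N_ITEMS))

# Recipients endorse with the bias added; non-recipients without it.
recipient = [accepted(P_GENERIC + BIAS) for _ in range(TRIALS)]
others = [accepted(P_GENERIC) for _ in range(TRIALS)]

print(sum(recipient) / TRIALS)  # ~40 of 50 items accepted
print(sum(others) / TRIALS)     # ~30 of 50 items accepted
```

Genuine specificity and mere belief-driven bias produce the same “recipient accepts more” signature, which is exactly why the experiment cannot distinguish between them.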

This simply means that the experiment did not test the hypothesis it was supposed to test. In fact, I don’t think any conclusion can be drawn from the data so collected. 2 years of research, 451 participants and all for nothing.

The Second Article

Others would have dug in, insisted on their worthless research and ranted about closed-minded skeptics. Robertson and Roy set out to make amends. A few months later they published a protocol for a new experiment that was supposed to demonstrate mediumship under conditions eliminating all ordinary, skeptical explanations.

Publishing the protocol in advance enabled them to take criticism into account before carrying out the experiments. Indeed, the experimental design found in their 3rd and last paper was improved over this one.

Basically, it is quite similar to the one in the first article, except that the audience doesn’t learn for whom the reading is intended, and the medium performs in a separate room so that he or she does not get any usable clues.

I won’t bore you, dear readers, with the minutiae but as far as I can tell, in this final, rigorous protocol, skeptics would not expect the intended recipient to accept more statements in a reading than anyone else.

The Third Article

The third and final article in the series was published in 2004.  It presents data collected in 13 sessions that took place over 2 and a half years. They involved some 300 participants and 10 mediums giving 73 readings.

However, few of the experiments conducted actually followed the rigorous protocol they took so much care designing. Ostensibly, the reason for this is to assess other factors that may influence subjects’ acceptance or rejection of statements, but there is not much in the way of analysis of such factors. There’s not even a complete table of results to allow readers to perform their own analyses. But I’m getting ahead of myself.

Skeptics vindicated

One thing that they did find was that misleading the subjects about who the intended recipient is really does affect the results. As I’ve said previously, psychology leads us to expect that someone who believes that a reading is intended for him or her will rate it as more applicable than someone who believes it is for someone else. This expectation was borne out, confirming that the experiment presented in the first article was indeed a waste of time.

Despite the ample evidence that psychology has amassed that leads us to expect this, it is good to have it confirmed. The situation is not exactly the same as in the standard experiments demonstrating the relevant cognitive biases, which left a slight possibility that these biases were not operating under the conditions in which Robertson and Roy studied mediumship.

Rigorous Results

Now, what about the results of the rigorous protocol? That’s, after all, the supposed main point of the exercise. I’m not sure how to put this but… they aren’t there. Yes. Seriously.

We never learn how mediums did under conditions precluding normal explanations.

All right, this needs some more explanation.

There is a lot of analysis presented in the article but it’s almost all completely pointless filler. The results come from experiments that differed in crucial design aspects. In principle, this allows one to determine if a variable factor correlates with greater mediumistic success. However, such analyses are almost completely absent with the notable exception mentioned.

Instead, the results from these different experimental conditions are pooled, without any reason being given or even inferable. Such pooled results are also compared with other pooled results, even though there is nothing to be learned from that. It’s just weird and bizarre. I can only conclude that they have nothing they are willing to report, and so they stuffed the paper with filler.

The reviewers seriously failed in not demanding that the analyses be cleaned up and all the results reported.

An Abstract Untruth

So where does the supposed main result come from that they report in the abstract?

Due to the design of the experiments the results cannot be due to normal factors such as body language and verbal response. The probability that the results are due to chance is one in a million.

In context, one would be led to believe that these results are actually the results of the rigorous protocol that was so much talked about. One would be wrong.

Indeed, these results come from pooling data from several different experimental conditions. That includes the rigorous protocol, but also data from experiments where normal factors operate. For one, there is the bias introduced by whether the subjects believe the reading to be for them or not. There are also other factors which might conceivably have influenced the result, but one is enough.

To make this clear: if I have data indicating that something is going on and I combine it with other data, then the combination will also indicate that something is going on, even if there is nothing in the added data.

In a very strict sense the quoted statement is true. The mentioned factors do not operate, different factors do. It is merely completely misleading in context.

Conclusion

Clearly Robertson and Roy confirmed one skeptical explanation despite their protestations.  Moreover, all results that they report have ordinary, conventional explanations. Speculating about paranormal effects is not justified.

I personally believe that the results of the rigorous protocol must have been negative or else they would surely have been reported.

As such, I consider Robertson and Roy’s work the best evidence against mediumship obtained in modern times.

I also think that the misleading nature of their last article was not intentional. I think they just couldn’t face up to the results. Although, the fact that Tricia Robertson has threatened to sue people who claim that they fixed or rigged the results does not speak of a clean conscience. Not that I see why anyone would claim that, given that the available results are so ordinary.

Sources:

Robertson, T. J. and Roy, A. E. (2001) A preliminary study of the acceptance by non-recipients of mediums’ statements to recipients. JSPR Vol 65.2

Roy, A. E. and Robertson, T. J. (2001) A double-blind procedure for assessing the relevance of a medium’s statements to a recipient. JSPR 65.3

Robertson, T. J. and Roy, A. E. (2004) Results of the application of the Robertson-Roy Protocol to a series of experiments with mediums and participants. JSPR 68.1

JSPR, Journal of the Society for Psychical Research, can be accessed via Lexscien. Skeptical minds, interested in double-checking my work, can do so for free by signing up for a free trial.

Tricia Robertson on critics: “I am aware that critics will say the tests were somehow rigged. But, rest assured, we could not have been more scientific in the way this was carried out. If anyone claims it is fixed or rigged, we would sue.”

Modern Mediumship Research: The Afterlife Experiments

The Afterlife Experiments is a book by Gary Schwartz, Ph.D. It is a chronicle of his mediumship experiments and his personal journey. These experiments were also published in various disreputable journals devoted to such ideas. This article deals primarily with these experiments.
Schwartz’ work has been extensively torn to bits. I recommend the review by Ray Hyman and if you have time this one is also nice but more limited.

The true challenge lies not in refuting Schwartz but in bringing home just how horrible his work was. Ahh, but how to explain the vastness of the ocean to one who has never seen it. I know I must fail and yet I will try.

Schwartzian Theory

Let’s begin as he begins his book, let’s look at his “theory”. He invokes systems theory and quantum physics as foundations for it. In truth, his theory contains as much science as the technobabble in a TV show like Stargate.
Light travels forever between the stars; we are still receiving photons from the Big Bang, he observes, and that much is true. Some photons coming from our bodies will also travel through space forever, assuming they don’t bump into anything on their way out. To him this somehow means that we live forever.
By that kind of logic, taking a picture is stealing someone’s soul.
He says that according to quantum mechanics, matter is mostly empty space. Yes, matter is mostly empty space, in a sense, but no, that was established by the gold foil experiment, with no quantum physics as such involved. Schwartz somehow concludes that you could take away the matter and something would still go on. Kind of like you still have power when you take the batteries out, I guess. The analogy is fitting because it is the electric force that keeps electrons and protons together in an atom and that gives solidity to what is otherwise ’empty space’.
It was pretty painful getting through all that nonsense. It’s not just wrong or mistaken; it betrays a deeply irrational mind. I will be honest with you: when I read that, I was reminded of the ravings of a schizophrenic, not at all of the ruminations of a scientist.

But nevermind that. Eventually it’s all about the data. Right?

The Data Speaks
But which data should we let speak first? How about some data from the late 1940s.
A psychologist called Forer gave a personality test to his students. Then he played a little prank on them by giving everyone the same text as a phony result. Asked to rate the accuracy of the personality assessment on a scale of 1 to 5 (with 5 being best), they gave it an average of 4.26.
What this data tells us, or rather shouts in our faces, is that we need to be really careful when relying on human judgement.
This effect has become known as the Forer or Barnum Effect. It is very easy to elicit. So easy, in fact, that not eliciting it is usually the challenge for researchers.
Since then, more has been learned. For example, such Barnum statements (at least some of them) are viewed as more applicable to oneself than to others. Some statements are rated more accurate than others, with people typically preferring favorable statements. It also makes a difference for the accuracy ratings whether the receivers believe in the credibility of the method: statements presented as resulting from astrology, for example, will go over well with believers, not so well with skeptics.

Psychology knows a number of similar effects. I will mention only one more to emphasize the point. It is called illusory correlation and was described around 1970. The experimental subjects were shown drawings and were told that the people who drew them had certain psychological problems. In reality, the psychological problems had been assigned randomly to the pictures.
Still, the subjects found ample signs of these non-existent problems in the drawings.
This was connected to the uncovering of serious problems with diagnostic methods in psychiatry but since then the effect has been demonstrated in a variety of different guises. It is also thought responsible for the persistence of racial and gender stereotypes.

Mediumship

So now that we have listened to what some data tells us, we can think about what that means for mediumship.
For one, we cannot simply ask someone to rate the accuracy of a reading. A human being simply is unable to give an objective measure of that.
This isn’t just a problem for proving mediumship, one that need only concern skeptics.
Say, you want to find an especially good medium. You have clients rate the accuracy of the reading. Mediums that consistently get higher ratings are surely better at something. But are they better at generating readings that are accurate or just at generating readings that are perceived as accurate?
With that method you might end up selecting fakes over real mediums!
Even worse: think about how budding mediums usually learn their trade. They practice and refine their skill based on feedback from their sitters. Are they really practicing a paranormal skill there? And if not, maybe they are ruining any such talent they might have?

Schwartz’ folly

Let’s get back to The Afterlife Experiments.
Schwartz holds a PhD in psychology; it is virtually unthinkable that he was not aware of these pitfalls. Even if he was that incompetent, he sought out the advice of experts like Ray Hyman, who told him about them.
The closest Schwartz comes to acknowledging such problems is hand-waving dismissal. He finds it ‘implausible’ that sitters might misrate their reading, without a word about all the evidence to the contrary.

Schwartz inadvertently offers an example of how these psychological effects work in practice.
First what the medium said according to the transcript in the book.

[The medium just said that the dead grandmother was at the client’s wedding.]
And she’s talking about … some kind of flower connection. And what’s weird is she’s showing me flowers that I wouldn’t think about being at a wedding, and these are daisies. Um. they’re showing me daisies.
So I don’t know what the reference is to daisies, but they’re showing me daisies.

The medium could have equally well said:
Come up with a connection between your wedding and daisies!
The connection was eventually made between the wedding of the sitter’s mother and daisies. The grandmother had brought daisies for that wedding.
What Schwartz says is that the medium knew this, even though the medium states quite clearly otherwise.

Pretend Science

But while Schwartz completely fails to deal with this problem he still pretends to. One way is by letting other people besides the recipient rate the reading. Of course, when people know that a reading is not meant for them, this lowers the perceived accuracy.

Schwartz also made a stupid and amateurish attempt at using a control group. He wanted to know if anyone could guess information like a medium. So he had psychology undergraduates answer some questions.
Now you might think: But psychology undergraduates are not just anyone but a select group! Or you might think: So what if mediums can guess better than these guys? They are pros and should be expected to know the statistics better. That doesn’t tell us anything about afterlife communication.

Both true but neither captures the depth of Schwartz’ incompetence. Here’s an example.
Medium says:

I think that this is her mother, she is definitely a pistol, she must have had false teeth because she is taking them in and out, in and out. And she’s not supposed to do that in front of everybody.

The control group was asked:

Who had false teeth?
What did she do with her teeth?

A significant proportion of people have false teeth. And they do a lot more with them than just taking them in and out. The actual task of the “control group” was to guess not facts, but what the medium said and how it was interpreted.

Finally, I should say something about Schwartz’ attempts to calculate probabilities. In short, Schwartz clearly has no understanding of probability theory. A detailed explanation of where the errors lie would require as much space again as I have spent on his ignorance of known psychological effects, and it would moreover be quite “mathy”, which to most people reads “boring”.
The necessary math should have been learned at school. If it has been forgotten or was never properly understood then there is probably no interest in catching up now.

Experiments deserving special attention

There are two experiments, the first and the last in the book, that deserve a closer look. They avoid the problems that make most of Schwartz’ work scientifically worthless.

In the first, there were two mediums. Medium one was given four deceased persons to contact. She then made one drawing for each, based on information supposedly from that person.
Medium two then had the task of matching the names to the drawings.
This is not ideal as one might speculate that the drawings might contain hints about the names, put there either consciously or unconsciously. Still, it’s hard to imagine a high degree of accuracy unless the mediums are in collusion and have an agreed upon code.
In some ways, this is one of the best conceived experiments that Schwartz ever reported. But there’s a catch.

You’d think that medium two would simply match the names and the drawings in the absence of medium one, to avoid being influenced. Indeed, there was a session with the 3 experimenters and medium two. Some vague impressions of colors and shapes were recorded; the medium was unable to receive any clear descriptions of the drawings.
But we are not told of any attempt at matching the drawings at that point, which seems incredible!
Then things proceeded in the presence of medium one. The three experimenters attempted to match pictures and persons but without success. We are also told that medium two had no success. She was unable to make good contact, we are told, and she couldn’t recall much of the information that came through in the previous session.
But never fear. When they went back to the impressions recorded without medium one, suddenly everyone was able to guess correctly.

We can reasonably infer that after the first round of guessing, medium one must have revealed at least part of the answer. If she had not told them that they had no success then they would not have kept guessing, right?
I think, the most plausible scenario is that medium one revealed the full answer upon which everyone attempted to fit medium two’s vague utterings to the drawings in an exercise of illusory correlation.
Of course, that is just a possibility but I’m thinking, if it had been possible, even easy, to match correctly with the information first received, why didn’t they do it?

Basically, it looks like the experiment failed and Schwartz just couldn’t face it.

The Last Experiment

The last experiment in the book was apparently added at the last minute. It is described in only a page and a half, without any additional information in the appendix.
In the experiment, the medium made the reading without having any information on the sitter. That means she could not tailor her readings to specific clients based on appearance or feedback.
Also every sitter received two readings to rate, his or her own and one other. Sitters had to choose which of the two was meant for them personally.
In most previous experiments the expected outcome was the same regardless of whether the mediums were real or not. As such they did not provide any evidence for the reality of mediumship.
In this experiment there was finally a difference. Conventionally, one would expect every sitter to have a 50/50 chance of choosing their own reading. If mediumship is real, the chance should be much higher.

The outcome was that 4 of 6 sitters chose their own reading. Assuming that everyone had a 50/50 chance, getting that many right (i.e. 4 or more) has a probability of 34%. By normal scientific standards, that means there is no reason to assume that anything noteworthy happened.
For comparison, if one assumes a 95% chance of picking the right reading, then the chance of getting that few (i.e. 4 or fewer) is only 3%.
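These percentages are simple binomial tail sums and are easy to verify; here is a quick sketch using only the Python standard library:

```python
from math import comb

def binom_tail_ge(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_le(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 4 or more correct picks out of 6, at a 50/50 chance each
print(binom_tail_ge(6, 4, 0.5))   # ~0.344, the 34% above

# 4 or fewer correct picks out of 6, at a 95% chance each
print(binom_tail_le(6, 4, 0.95))  # ~0.033, the 3% above
```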

In Schwartz’ mind this somehow morphs into a ‘breathtaking’ finding.
He points out that one of those sitters rated the intended reading as very accurate and the non-intended reading as 0% accurate. That sitter is supposedly especially talented.
The whole argument would be more convincing, or rather the least bit convincing at all, if it had been tested in a specially designed experiment rather than being based on some quirk noticed after the fact.

He also says that if the experiment had been much larger, with 25 sitters, and again two-thirds had picked the right reading, then that would have been ‘statistically significant’ (i.e. fairly unlikely according to the conventional 50/50 expectation). If wishes were horses…
It’s now almost ten years later and Schwartz hasn’t done any experiment of that size, so don’t hold your breath.
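For what it’s worth, the hypothetical can be checked with the same kind of binomial sum. Since two-thirds of 25 is not a whole number, I’ll read it as 17 correct picks out of 25 (my assumption):

```python
from math import comb

# P(17 or more correct out of 25, if each pick is a 50/50 guess)
p = sum(comb(25, k) for k in range(17, 26)) / 2**25
print(p)  # ~0.054
```

That comes out at roughly 5.4%, just above the conventional 0.05 cutoff, so even the wished-for result would have been marginal.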

Triple Blind!

In science, some participant in an experiment is ‘blind’ if he doesn’t know relevant facts which might bias him.
Single-blind means that the experimental subjects don’t know the relevant facts, double-blind means that the experimenters in contact with the subjects don’t know them either. Terms such as triple-blind or quadruple-blind are sometimes encountered but don’t have a truly standardized meaning.
This is doubly true for mediumship research, which doesn’t have any standard experimental designs. Still, Schwartz and his troupe employ the terms most haphazardly. ‘Blind’ seems to be more of a marketing term for them than anything with scientific sense. As a consequence, they have begun touting blindness much like razor makers tout the number of blades. They are now at quintuple-blind, something that in mainstream science is encountered only in the punchline of jokes.

A few years after the book came out, Schwartz, together with Julie Beischel, published a paper with a triple-blind medium test. After what Schwartz produced during The Afterlife Experiments era, this experiment completely blew me away with its good design. I think Julie Beischel is a good influence on the project. Objectively, that only shows how low my expectations have become.

There were 8 sitters and 8 mediums. Each medium gave 2 readings and each sitter rated 2 for a total of 16.
The sitters were paired up: one sitter who had lost a parent together with one who had lost a peer. Moreover, the pairings were done so that the deceased each one hoped to contact would be maximally different. Both sitters in a pair received both readings and had to pick theirs, again while being ‘blind’.
The pairing with maximization of differences was to ensure that picking the right reading would be especially easy. That’s a neat idea but that it was found necessary has some interesting implications.
Years earlier, Schwartz entertained us with ‘breathtaking’ findings and ‘breakthrough evidence’. Now things appear to have become a little more difficult. Could it be that mediumship is harder to demonstrate when known psychological effects are taken into account rather than merely being dismissed?

Now things get a little tricky.
You could look at the mediums. Each medium is faced with a pair of sitters and needs to “guess” who has the deceased parent and who the peer. That would mean that each medium has a 50/50 chance of getting her readings right. In total, that’s eight 50/50 chances.
Or you could look at the sitters. Each sitter is faced with the task of picking the right reading out of a pair, which is a 50/50 chance. But each sitter has to do that twice which makes a total of sixteen 50/50 chances.
Which of these views is correct? I don’t know. That depends on what the mediums did. The choice was up to them.

The reason why this matters lies in the probabilities. For example, if at least 6 of 8 pairs are correct, the likelihood of that happening by chance is 14%, fairly high. If you say that at least 12 out of 16 readings were correctly picked, there is only a 4% chance. It is like getting at least 6 heads in 8 coin tosses versus at least 12 in 16.

Unfortunately, Schwartz and Beischel fail to notice this problem and adopt the second view. In fact, the sitters picked 13 out of 16 readings correctly, which has a chance likelihood of only about 1%. Given that the number of correct picks is odd, the first view cannot be entirely correct, but different mediums may have made different choices, so we shouldn’t assume it is entirely incorrect either. The likelihood of getting that many readings correct may well exceed 10%.
That’s the difference between interesting and boring.
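These tail probabilities are easy to check for oneself. A minimal sketch in plain Python (the function name is mine; the only inputs from the paper are the counts quoted above) treats each pick as a fair coin toss and computes the chance of at least that many hits under pure guessing:

```python
# One-sided binomial tail: probability of at least `successes` hits
# in `trials` fair 50/50 guesses -- i.e. how surprising a result at
# least this good would be if nothing but chance were at work.
from math import comb

def tail_probability(successes: int, trials: int) -> float:
    """P(at least `successes` hits in `trials` independent 50/50 guesses)."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

print(f"6 of 8 pairs:   {tail_probability(6, 8):.1%}")    # ~14.5%
print(f"12 of 16 picks: {tail_probability(12, 16):.1%}")  # ~3.8%
print(f"13 of 16 picks: {tail_probability(13, 16):.1%}")  # ~1.1%
```

The one-sided tail is the relevant quantity here: the question is not the chance of exactly that score, but of a score at least that good arising by guessing.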

Of course, even a 1-in-100 result can still be chance, and even a 1-in-10 result may still have been mediumship.

Related to that problem is the problem of small sample size. Compared to Schwartz’ previous experiments, 16 readings is a lot. But compared to the huge amount of work that appears to be going on at Beischel’s “Windbridge Institute”, it is only a drop in the bucket.
So if you say that such results would arise by chance only once in so many experiments, well, it is distinctly possible that that many experiments have actually been run.
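To put a rough number on that worry: if a single experiment produces a result this extreme by chance with probability p, then across k independent experiments the chance of at least one such fluke is 1 − (1 − p)^k. The experiment counts below are purely illustrative assumptions, not figures from the paper:

```python
# Chance of at least one "1 in 100" fluke among k independent experiments.
# The values of k are illustrative, not a claim about how many studies exist.
p = 0.01  # per-experiment chance probability

for k in (1, 10, 50):
    print(f"{k:2d} experiments: {1 - (1 - p) ** k:.1%} chance of at least one fluke")
```

Already at a few dozen unpublished or unremarked experiments, a single 1-in-100 result stops being remarkable.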

The biggest problem however lies in the interpretation. Each medium received the first name of the “discarnate” they were to contact. But names contain information about a person.
Think of Gertrude vs. Britney. Who is the dead parent, who the dead peer? Think of Tyrone vs. Cletus, who has the higher skin cancer risk?

So if anything was going on in the experiment what was it? A display of mediumship or of statistical knowledge?
We know that one of those is possible. We don’t know that the other is.

On the whole, I find the flimsy results coming from a research program spanning well over a decade quite telling.

Modern Mediumship Research

Mediums impress a lot of people. Believers will swear that their medium must have known something about them or their dead relatives. Something that they could only have learned by communicating with the dead.
The sheer number of testimonials is probably enough to warrant an investigation, but everyone knows there are two sides to the story: believers and skeptics.

On the believer side there are testimonials and biographical accounts from mediums; in short, flaky New Age babble. But there are also purported scientific investigations. I will focus on these and examine them in detail.
On the skeptical side, the situation is more complicated. There are a number of skeptical books that will attempt, with greater or lesser success, to give a conventional scientific explanation. Conventional is a bit of a misnomer here, as the idea that one can really talk to the dead is much older than specific attempts to otherwise explain such experiences.
These explanations draw from two sources: magic and psychology. There is specialized literature for magicians of all sorts, including for so-called mentalists. Mentalists specialize in mind reading, clairvoyance and other such tricks. Like other magicians they write up their tricks and sell them.
Psychology, meanwhile, offers rich insight into the mental biases and pitfalls, such as ‘the fallacy of personal validation’, that these tricks use and abuse.
Skeptical books provide a valuable service by collecting together these varied bits of information and making them easily accessible but they are not in themselves science.
From time to time, skeptical tests of mediums happen. These aim not at establishing how the medium does his or her business but only at testing whether the medium can perform the claimed paranormal feat at all. Since mediums routinely fail, these tests aren’t much of an investigation into mediumship. They just fail to find anything to investigate.

The Great Asymmetry

There is a great asymmetry between these two sides. Believers undertake extensive investigations of mediumship while skeptics fail to find anything they could investigate in the first place. Much research into how people fool themselves has been, and is, carried out in psychology, but under completely different labels.
This may fool a novice investigator into believing that believers have studied the issue more thoroughly and have more to say. In fact, believer investigations are more extensive because they ignore the already existing literature. They make error after error, all too often correcting them only after having had their noses rubbed in it.
That pattern is not unique to mediumship but found across parapsychology. Still, there are notable exceptions but that’s for another time.

Now, enough bickering. Let’s get to the facts.

As this is a very extensive topic, I’m breaking it down into multiple parts. Links will appear here as they become available.

Modern Mediumship Research: The Afterlife Experiments

Modern Mediumship Research: Robertson & Roy

Modern Mediumship Research: Kelly and Arcangel

Modern Mediumship Research: Negative

Modern Mediumship Research: Summary

Randi’s Prize: Answering Chapter 3

Chapter Three:  Communicators

With this chapter, things get at once more and less interesting. It is less interesting because we no longer deal with movie-style magic. No more psychokinetic kids and ectoplasmic ghosts. Instead we get something even juicier: laboratory experiments.
Alright, that’s not quite as exciting. But if you want to get to the bottom of things it is very, very interesting.

The key to the past
It is said (especially by geologists) that ‘the present is the key to the past’. It is assumed by the historical sciences that the laws of nature that we observe today are the same that operated in the past. This assumption seems to hold up well, judging from what we learn from distant (and thus old) starlight or from the convergence of different radiometric dating methods.
The assumption essentially follows from Occam’s razor but that it works is really the important point.
Much has been written on this issue, especially in response and opposition to the efforts of creationists to inject religious dogma into scientific inquiry. I’m not going there now.

There is also another reason for the assumption, one based on the constraints imposed by the scientific method. Science can only study the testable. It is an essential characteristic of science that claims are judged by experiment.
If mediumship (or other psychic powers) is shown to be active in the present, then science will assume it also operated in the past. Arguments that explain certain recorded events in terms of these “new” laws of nature will be credited. The reverse will never stand up.

Back to the book
What this chapter deals with are mental mediums. Unlike the physical mediums of the previous chapter, these don’t produce effects straight out of a Ghostbusters movie but only purport to communicate with the spirit world. They only talk.

The first example we are given, at great length, is Leonora Piper, a so-called trance medium from the second half of the 19th century. McLuhan has succeeded in rousing my curiosity as to Piper’s accomplishments, but a historical case will never be acceptable scientific proof.
He cites example after example of her amazing feats, then more examples from Osborne Leonard, and then the Edgar Vandy case.

He frequently points to a piece of transcript and argues that it doesn’t look like cold reading. Yet all his bibliography offers on cold reading is a single chapter in The Elusive Quarry by Ray Hyman. For the curious, it includes this article.
That’s awfully little to speak with authority on the subject.
By the way, my personal recommendation would have been Ian Rowland’s The Full Facts Book of Cold Reading, for its dry wit. It’s a much more entertaining read than Hyman’s academic prose.

He points out that these mediums were never caught in fraud, which seems a rather curious argument. A sleight-of-hand conjurer can easily be caught in his trick. A cold reader only talks; she can only be shown to have made statements that look like cold reading. McLuhan points out that this is true for the mediums he mentions but apparently doesn’t consider it significant.
Of course, it is reasonable to focus on the more interesting instances. A true psychic should still be expected to use some cold reading (maybe unconsciously), just like any hot reader will make use of these techniques.
But how do we know that these tidbits were gained paranormally rather than conventionally?
For one, the mediums were sometimes trailed by detectives to catch them in any information gathering. Without success. Yet, it should be immediately obvious that catching someone gathering information is vastly more difficult than catching someone using a conjuring trick right before one’s eyes.
The possibility that they picked up information incidentally, in chats and such, is discounted on the grounds that the researchers were well respected and surely would not have made the mistake of enabling this.
As I am quite familiar with the antics of contemporary mediumship researchers, I find this appeal to authority more than doubtful. That aside, even if psi exists, it must be rare to encounter it as strongly as in these mediums. Is that a counterargument against Piper and the others having been real?
If improbability is not an argument against paranormality, then why is it an argument against normality?

The Fundamental Error
McLuhan shows one proposition to be improbable and then concludes that another, also improbable, proposition must be true. That has been his fundamental mistake from the start, and I have pointed it out from many angles. It is one thing to show an idea to be unlikely and quite another to show that it is less likely than another. Besides, who says that the right idea isn’t one that no one has thought of?

The consistent application of his erroneous reasoning leads McLuhan to the only possible conclusion: it is proven that these mediums were real.
By McLuhan’s logic, only travelling back in time to find a satisfying, conclusive explanation for each of the mediums’ apparent paranormal deeds can undo this proof.

Conversely, bringing science, even outspoken skeptics, around to McLuhan’s opinion would be much easier: just show mediumship to work today.

Modern Mediumship Research
This modern research consists of the work done by Gary Schwartz. Not mentioned in the book is the quite extensive work by Robertson and Roy, perhaps because it was not available to McLuhan at the time of printing.
I will need to write a few posts on the current state of mediumship research at some point.

For now we are focused on Randi’s Prize. McLuhan gives an outline of Schwartz’ experiments and also of the criticisms. The whole thing is related in a he-said/she-said style that remains sterile. One does not get the sense that he actually engaged with the material on a deeper level.
Eventually he does give some credence to the critics but still finds that these experiments give a “strong suggestion” of psi. Given how abysmally horrible Schwartz’ research was, I find such a conclusion mind-boggling. What went wrong there? I don’t know.

I will not go into more detail here, dear reader. Please await the upcoming series on modern mediumship research.

Scientific Skepticism

You’ve probably had some physics or chemistry at some point. If so, you probably remember being shown experiments. If not, or if you can’t remember, click here. In my memory, these were deadly boring, but at least you didn’t need to pay so much attention.
Have you ever wondered why that plays such an important part in the science curriculum?

Doing experiments yourself teaches you some skills and, hopefully, is more engaging than just reading in a book. But what’s the point of having the teacher perform experiments? Surely it would be much cheaper and easier to just have the pupils read about them.

In my experience, the answer is never spelled out in class. That answer is that you’re not supposed to believe things just because it says so in a book. You’re supposed to know, for yourself, that what you’re taught is right, that it works.

The Battle Cry of the Revolution

Nullius in verba is Latin for ‘take nobody’s word for it’. In the 17th century it became the motto of the Royal Society and a battle cry of the scientific revolution. The men of the Royal Society resolved not to believe things merely because they were written in some ancient book or said by some respected personage. They would experiment.
Think about how radical this was in a time of absolute monarchy, when the Bible was regarded as the word of God.
If these men had realized that the same attitude would eventually challenge even monarchy and church, most of them would probably have been horrified and abandoned their quest.

Of course, especially in our modern times, one cannot personally repeat every single experiment. And when one gets a different answer on one experiment, well, maybe it is not everyone else that has made a mistake. Skepticism is one thing, solipsism another.

Science is Social

Science is a social enterprise. Just like society as a whole, science relies on people doing their jobs and following certain rules. Both have fail-safes to deal with instances where individuals either make mistakes or cheat. In science this fail-safe is largely the skepticism of peers, who are relied on to find and weed out erroneous results. Unfortunately, science has little ability to deal with actual fraud. Only occasionally, when someone has gone completely overboard in making stuff up, are frauds identified as malicious rather than as simple mistakes.
Scientific skepticism ensures that fraudulent results are weeded out just like innocent errors. Scientific fraud can still harm society by causing bad decisions to be made. It also harms science by causing people to waste time and money trying to confirm the unconfirmable, or trying to build on a rotten foundation.
The only way a false result can escape correction is if it is so unimportant as to be completely ignored. That is, ironically enough, the best case.

I hope I made clear why skepticism (in this particular version) is so fundamental to science. It should also be clear why fabricating or altering data is such a heinous crime against science. What if everyone did it?
Science would become a meaningless sham.

There are two more things I want to say. They’re not so much about scientific skepticism, but they fit here.

Reputation
In science, you’re supposed to judge a claim on its merits, not on the merits of the person making that claim. Still, reputation plays an important part when judging a claim.
The reason is that whenever someone publishes scientific results they are not just offering the fruits of their labor; they are also asking others to do work. The result must be vetted and replicated before it can be used.
This request doesn’t go out to some faceless, abstract entity called science. It goes out to individuals. And each of these individuals will ask themselves: If I take this seriously, am I wasting my time?
That’s where reputation (and a whole number of other soft factors) comes into play. It’s not logically valid to judge a claim by the reputation of the person making it. But judging the likelihood of wasting your time by these soft factors certainly works.

The Bem Exploration Method
Yes, this again. This is where it gets a bit ranty, so feel free to stop reading here.
Maybe you’ve read my previous article and wondered why this seemed so serious to me. Perhaps this article gave an answer.
Part of the BEM is data dredging, and such abuses of statistics produce false results. The more widespread this technique becomes, the more waste there is in science. It takes away resources that could be used gainfully.
Even worse is the omission of negative results, which turns replication itself into a sham. It is much more work-intensive and easier to detect than fabricating data, but the effect on the scientific enterprise is the same.
Seeing how Bem’s advice is used to teach students makes me seriously wonder about the integrity of (social) psychological science.

This blog post was the most shocking piece on Feeling the Future that I’ve read. And I’ve read a lot.
Lots of stupid and credulous things were written about the paper, but its author had the good sense to see the signs of questionable research practices in Bem’s article. His post is very good in that department.
And still he calls it a good paper. He also calls the publishing standards too lax, so the only thing I can really blame him for is not being sufficiently outraged.