A LABORATORY, bubbling chemicals, glassware and, most probably, gaslight – the scene is set for the physician-scientist to drink his transforming potion. This is our stock image of self-experimentation in clinical research, and apart from the bubbling and the gaslight it is probably not too far from the truth. Researchers have always self-experimented, both in fact and fiction. In fiction, these experiments have usually resulted in mayhem; in fact, they have on occasion earned scientists the Nobel Prize, and on others led to their deaths.
Dr Jekyll sipped his concoction to discover the beast of Mr Hyde within, while Dr Brundle’s dabbling with teleportation in The Fly led to a self-experiment with distinctly unfortunate consequences when he inadvertently shared his pod with a housefly. Comic books abound with villains – such as The Lizard, one of Spider-Man’s enemies – who are often the product of self-experimentation by mad geniuses or by initially well-meaning, but ultimately evil, scientists. But what of the reality of self-experimentation? Does it happen, and if so, why? Here are two examples from the 20th century.
In 1984, Barry Marshall and Robin Warren postulated that Helicobacter pylori was the causative agent of peptic ulcer disease. The implications of this for medicine and surgery would be enormous, as Marshall thought the disease might be treated using antibiotics rather than surgery, which had been the only treatment for decades. However, his proposals were laughable to many, and he needed to gather some convincing proof quickly.
Increasingly frustrated by criticism of his work and by his failed attempts to produce an animal model for Helicobacter infection, Marshall decided to self-experiment. He successfully infected himself without discussing his plans with an ethics committee or, “more significantly” he noted later, with his wife. “This was one of those occasions,” he added, “when it would be easier to get forgiveness than permission.”
One of his colleagues mentioned the self-experiment to a journalist and the story was instantly sensationalised under the headline: “Guinea-pig doctor discovers new cure for ulcers ... and the cause”. It would, however, take 10 years before his evidence was finally accepted, and another 10 before he and Warren would be awarded the Nobel Prize.
Unravelling another infection was the task given in 1900 to Major Walter Reed, a US Army doctor who led a team to investigate the transmission of yellow fever in Cuba. They tested a number of theories, the most crackpot of which had been proposed 20 years earlier: that the disease was spread by mosquitoes.
Concerned about the ethics of such investigations, Reed and his team decided that before they sought volunteers they should experiment on themselves. First, they “loaded” mosquitoes by allowing them to feed on yellow fever patients. The mosquitoes were then allowed to feed on a pair of junior doctors, Carroll and Lazear, all while Reed was safely in Washington DC. Both doctors developed the disease and, while Carroll recovered, Lazear died aged 34.
This self-experimentation in Cuba became famous and, for many, one of the hallmarks of ethical research. Indeed, when the 10 principles of bioethical research were pronounced at the conclusion of the Nuremberg Doctors’ Trial in 1947, principle 5 stated:
No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.
This caveat may have been included to counter one line of defence put forward by the counsel for the Nazi defendants, i.e. that previous US military research had knowingly put the lives of research subjects at risk. In particular, the lawyers cited the example of Reed and the yellow fever experiments. The fact that Reed himself was never a subject of the experiments was conveniently overlooked.
Thus, self-experimentation has been used partly as an expedient, as in the case of Marshall, and partly as a justification, as in the case of Reed. Expediency, however, may lead to a loss of the objectivity that is so vital to good research, and justification may be challenged. Even if an investigator is willing to be the first subject in a potentially life-threatening experiment, does this really make it ethical to put others at risk?
Self-experimentation, it has been argued, is not research at all but simply “self indulgence” or even “self abuse”. However, one eminent researcher, Thomas Chalmers, summed up his view thus: “…you shouldn’t be involved in a trial unless you would be willing to be randomised yourself”.
Today, self-experimentation continues but is increasingly controversial, raising uncomfortable ethical and scientific questions. The quality of the work, the safety and the motivations are all debatable but, for the self-experimenter, perhaps no more so than for the same work done on volunteers. We certainly need rules for clinical research, but whether we need different rules for researchers who wish to take that first step themselves is far from clear.
Dr Allan Gaw is a writer and educator in Glasgow