Show me the data

With so much scientific research being published, how can trainee GPs critically appraise the evidence available? Dr Allan Gaw offers some practical tips

Date: 24 March 2016

AS DOCTORS, we are constantly reminded of the importance of practising evidence-based medicine, and rightly so, for what is the alternative? But in order to practise the best medicine, we need the best evidence — and we need to understand it. The latest evidence in the medical sciences is presented as a weekly diet of articles in an ever-increasing number of journals. This scientific literature presents the practitioner with two challenges: first its volume, and then its quality.

We have to learn to read efficiently and selectively if we are ever to keep up, but we must also hone our critical faculties and, like all good scientists, take nothing at face value. How then should we read a paper critically?

All scientific papers set out to answer questions. When tackling a paper, your first task is to identify exactly what the authors have set out to do and why. Next, you should find out how they tried to answer their question. In other words, what sort of experiment did they perform? Depending on the topic you are reading about, this might be an experiment done in test tubes on a laboratory bench, a study in mice or rats, or perhaps a clinical trial.

Whatever was done, you should now be asking yourself if the approach was appropriate to the research question. For example, while an animal study might provide invaluable pre-clinical evidence for the effectiveness of a new treatment, it is never going to provide the sole evidence to support the use of a novel drug in your patients.

The design of the study is crucial to its usefulness. The size of the study and its duration will affect how you view it, and what about the study participants — do you recognise them? Are they the same patients you see in terms of age, comorbidity and concomitant therapies? Or are they a cherry-picked group that does not mirror the population you are used to? If so, you may question how generalisable the study findings are to your practice.

Another key consideration in study design is whether the experiment is controlled. If not, the results (whatever they may be) are meaningless. If there is no control group, we cannot ascribe any change or treatment effect we might have observed to the intervention under study — what we have observed might simply be background noise.

In the paper, the authors will present their findings and from these draw conclusions, but are the conclusions plausible? If a paper reported that smoking does in fact increase your life expectancy, you should pause. Given everything that we already know about this lifestyle risk factor, such a finding would be hard to swallow. Similarly, if the authors of a relatively small, short study claimed that the drug they had studied should now be prescribed to everyone over the age of 35, you should be sceptical. Extraordinary claims require extraordinary levels of evidence. And while consistency between multiple lines of evidence will make the conclusions more credible, disagreement should make us stop and think.

A study may be well designed and executed, and the data credible, but the evidence may not support the authors’ conclusions. There may be an extrapolation that requires you to take a leap of faith. There may also simply be other explanations for the data. For example, mere associations are often presented as cause and effect, when it is rarely that simple.

Which brings us to our next point — overall, just how good is the evidence? Publication alone does not mean that the evidence is of high quality. The better the journal and the higher its impact factor, the more likely the paper has been subjected to rigorous peer review. This means that poorly designed and underpowered studies should have been filtered out during the review process, but sometimes poor studies slip through the net, even at the better journals. When it comes to lower-tier journals, their nets have bigger holes and you might have to work a bit harder to evaluate the quality of the evidence, because nothing can be taken for granted.

In summary, there are seven questions you should ask of any clinical research paper:

  1. What is the research question?
  2. How did they answer it?
  3. Was their approach appropriate?
  4. Was the study controlled?
  5. Do you recognise the study population as your patients?
  6. Is the answer plausible?
  7. Does the evidence support the conclusion?

To answer these, you will have to focus on different parts of the paper, and you will also have to do some thinking. The study may be published, but that doesn't necessarily mean it's valid or useful, especially to you and your practice. Critical evaluation is about gathering the facts, putting them in context, reflecting upon them and making decisions — decisions that will ultimately guide your practice.

Dr Allan Gaw is a writer and educator in Glasgow

