Michael A. Überall, MD, Nuremberg, Germany, took a closer look at the pitfalls of reading study publications and the significance of statistical data in a series of seminars. He recommended trusting one's common sense and not being misled by numbers: many studies are deceptive, and often the abstract alone reveals that a closer examination of the content is not worthwhile.
Every year, countless studies are published on the effectiveness – or ineffectiveness – of therapies. A central criterion in the evaluation of efficacy is the p-value: if statistical significance is missed with a value of >0.05, a study is generally considered to have failed. “The p-value, however, says nothing about right or wrong, but merely indicates a probability,” explained Überall. A p-value ≤0.05 means that, if there were no true difference, a result at least this extreme would occur by chance in no more than 5% of cases. A significant result can therefore still arise purely by chance. Conversely, with a p-value of 0.06 the probability of a chance finding rises only to 6% – the difference from p≤0.05 is minimal. “Does a p-value >0.05 therefore really mean that the therapy is not working? We should question whether this convention is correct,” he said. After all, a p-value of 0.06 could mean that a drug urgently needed by many patients does not receive approval.
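The point that a “significant” result can be reached purely by chance is easy to demonstrate in a small simulation. The sketch below (an illustration, not from the seminar) repeatedly compares two groups drawn from the *same* distribution – i.e., a situation with no real treatment effect – and counts how often a two-sided test nevertheless reports p ≤ 0.05; by construction, this happens in roughly 5% of trials:

```python
import math
import random

def p_value_two_sample(x, y):
    """Two-sided p-value from a z-test on the difference of means
    (normal approximation; adequate for the sample sizes used here)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # two-sided tail probability of the standard normal
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
n_trials = 2000
false_positives = 0
for _ in range(n_trials):
    # "drug" and "placebo" groups come from the SAME distribution,
    # so any significant result is purely chance
    drug = [random.gauss(0, 1) for _ in range(100)]
    placebo = [random.gauss(0, 1) for _ in range(100)]
    if p_value_two_sample(drug, placebo) <= 0.05:
        false_positives += 1

print(f"false-positive rate: {false_positives / n_trials:.3f}")  # close to 0.05
```

Roughly 1 in 20 of these null comparisons comes out “significant”, which is exactly what a 5% significance threshold permits.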
The certainty that a difference between two groups is not due to chance can be increased by demanding p-values <0.01 or <0.001. A p-value <0.001 is considered highly significant. Yet even an error probability of 0.1% is not negligible: applied to everyday life, it would mean that 445,000 prescriptions were filled incorrectly every year, or that one of the airplanes taking off or landing in Frankfurt crashed every other day, Überall pointed out. Thus even a p-value <0.001 does not provide 100% certainty. In his view, one of the most important values in medicine is the confidence interval, usually given at the 95% level. If the interval of a ratio measure such as a relative risk encloses the value 1, or if the confidence intervals of two therapies overlap, the documented differences in efficacy are statistically irrelevant. “Then you can safely put the study aside,” Überall affirmed.
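The “does the interval enclose 1?” check can be made concrete with a standard log-scale confidence interval for a risk ratio. The numbers below are a hypothetical trial invented for illustration (30/100 responders on drug vs. 20/100 on placebo); the method is the common Katz log approximation, not something specific to the seminar:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a 95% confidence interval (Katz log method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# hypothetical data: 30/100 responders on drug vs 20/100 on placebo
rr, lo, hi = risk_ratio_ci(30, 100, 20, 100)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
if lo <= 1 <= hi:
    print("CI encloses 1: the difference is not statistically significant")
```

Here the point estimate suggests a 50% higher response rate, yet the interval (roughly 0.92 to 2.46) encloses 1 – exactly the situation in which, per Überall, the study can safely be put aside.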
If the study does not deliver what it promises
As far as reading studies in general is concerned, a glance at the abstract is often enough to see whether further reading is worthwhile. Überall illustrated this with the example of a study on the treatment of neuropathic pain in patients with multiple sclerosis. The investigational drug was no more effective than placebo in terms of pain relief – but caused side effects twice as often. Nevertheless, the authors claim in the title of the publication that the drug is a safe option for the long-term therapy of neuropathic pain. “Therefore, read with common sense and check whether the content of a publication delivers what the title promises,” Überall appealed to the congress participants.
Wrong conclusions from incorrect evaluations
The extent to which study data can be misleading is shown by an evaluation that played into the hands of COVID-19 vaccine skeptics in 2021. According to data from the Bavarian Ministry of Health, vaccinated people had a higher mortality rate (4.3%) than unvaccinated people (3.4%). However, older people were greatly overrepresented among the vaccinated – and after adjusting for age, the picture reversed: the vaccinated had a lower risk of death than the unvaccinated. What do we learn from this? Reading studies also needs to be learned.
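This reversal is a textbook instance of Simpson's paradox, and it can be reproduced with a few lines of arithmetic. The counts below are hypothetical, chosen only to mimic the effect described above – they are not the Bavarian figures:

```python
# Hypothetical counts: vaccination uptake is much higher among the elderly,
# who also have a much higher baseline mortality.
strata = {
    # age group: (vaccinated deaths, vaccinated n, unvaccinated deaths, unvaccinated n)
    "under 60": (10, 10_000, 40, 20_000),
    "60 plus":  (850, 20_000, 340, 4_000),
}

# crude (pooled) mortality: the vaccinated look WORSE ...
vd = sum(s[0] for s in strata.values())
vn = sum(s[1] for s in strata.values())
ud = sum(s[2] for s in strata.values())
un = sum(s[3] for s in strata.values())
print(f"crude: vaccinated {vd / vn:.1%} vs unvaccinated {ud / un:.1%}")

# ... yet within EVERY age group the vaccinated do better
for name, (d1, n1, d2, n2) in strata.items():
    print(f"{name}: vaccinated {d1 / n1:.1%} vs unvaccinated {d2 / n2:.1%}")
```

Because age is associated with both vaccination status and mortality, the pooled comparison is confounded; stratifying (or weighting) by age recovers the true direction of the effect.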
Source: Überall M: Statistics in pain medicine. German Pain and Palliative Day 2023
InFo NEUROLOGY & PSYCHIATRY 2023; 21(2): 29.