


Seven Rules for Reporting Polls and Research Results
Steven S. Ross, February 11, 2008

Editor's note: For 19 years, Steve Ross was a professor at Columbia University's Graduate School of Journalism, where he pioneered the teaching of computer-assisted reporting and taught statistics (disclosure: I was one of his students in 1997/1998). He is a widely published technology writer (currently Editor-in-Chief of Broadband Properties) and has won numerous technical, professional, and journalistic awards. I asked Ross to summarize his "rules" for reporting on studies, polls, and statistics for STATS.

1. In general, effects are small. So you need a lot of statistical power
That means you need a large sample size and information on possible confounders – things that can change the results being reported if they are not taken into account. Example: The number of new cancer cases in the US is increasing. But when the aging of our expanding population is taken into account, the chance that any specific individual will get cancer is declining. Another example: A poll of 1,000 may yield statistically useful results, but the story will be too short unless subsamples are discussed as well. Thus we hear about the opinion of “black females,” who may make up only 50 people in the sample.
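
As a rough illustration, here is a minimal sketch (using the standard normal approximation and the worst-case proportion p = 0.5; the sample sizes mirror the hypothetical poll above) of how the 95% margin of error balloons when a 1,000-person poll is sliced down to a 50-person subsample:

```python
# Minimal sketch: 95% margin of error for a sample proportion,
# at the worst case p = 0.5, for a full sample vs. a subsample.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("full sample", 1000), ("subsample", 50)]:
    print(f"{label} (n={n}): +/- {margin_of_error(n):.1%}")

# full sample (n=1000): +/- 3.1%
# subsample (n=50): +/- 13.9%
```

A finding reported for the 50-person subsample carries more than four times the uncertainty of the headline number.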

2. You have to watch for spurious clustering
Imagine a chess or checkerboard occupying the bottom of a large cardboard box. Toss in exactly 64 grains of rice. The grains will bounce around and finally come to rest on the 64 squares of the game board. The average incidence is one grain per square. But you’re not likely to ever see that in your lifetime. Some squares will have many grains – all by chance. Likewise, some communities will report much larger-than-average incidences of certain diseases, all by chance.
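
You can watch this happen in a short simulation. The sketch below assumes nothing beyond a standard random number generator: each grain lands on one of the 64 squares at random.

```python
# Minimal sketch of the rice-on-a-game-board thought experiment:
# drop 64 grains onto 64 squares at random and examine the spread.
import random
from collections import Counter

random.seed(1)  # arbitrary seed, for reproducibility

counts = Counter(random.randrange(64) for _ in range(64))

print("empty squares:", 64 - len(counts))
print("most grains on any one square:", max(counts.values()))
# A typical run leaves roughly 23 squares empty and piles 3 or 4
# grains on at least one square: clustering caused by chance alone.
```

On average about 23 of the 64 squares come up empty (64 × (63/64)^64 ≈ 23.4), so the grains that would have landed there pile up elsewhere.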

3. Spurious studies, by definition, create news
Large, well-designed studies are very expensive, so persuasive studies of health issues are rare... and spurious studies, by definition, create “news” because results are unexpected.

4. Be skeptical of meta-analysis
The mathematical definition of a meta-analysis is the combining of raw data from many studies to gain the statistical power of a large sample, which is then analyzed as if all the data came from one place. It is a powerful tool that has been used in many environmental studies. But medical journals, including the most prestigious ones, have misappropriated the term. Thus, in medicine, a meta-analysis is often – in fact, almost always – BS-squared. Privacy issues and other barriers often make it impossible to improve statistical power (item 1 again) by combining the raw data, despite the seeming gloss of a large sample.

Unless you know otherwise, treat a “medical meta-analysis” as nothing more than a literature review. Multiple small studies analyzed separately retain the low statistical power each has individually. And while it is comforting to report that multiple researchers got the same results at different times and places, with perhaps slightly different methodology, it isn’t really news. In fact, the comfort may be illusory, because researchers cherry-pick the studies to meta-analyze.
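
To see what pooling raw data actually buys, consider a minimal sketch with hypothetical numbers: the standard error of a mean shrinks with the combined sample size, and that shrinkage is the extra power a true meta-analysis delivers and a literature review does not.

```python
# Minimal sketch, hypothetical numbers: pooling raw data from three
# small studies shrinks the standard error of the estimated mean.
import math

sd = 10.0                   # assumed common standard deviation
study_sizes = [50, 60, 40]  # three hypothetical small studies

for n in study_sizes:
    print(f"single study, n={n}: SE = {sd / math.sqrt(n):.2f}")

pooled_n = sum(study_sizes)
print(f"pooled raw data, n={pooled_n}: SE = {sd / math.sqrt(pooled_n):.2f}")
# SE drops from roughly 1.3-1.6 for each study alone to about 0.8
# for the pooled sample: a markedly more precise estimate than any
# one small study provides.
```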

5. Look for mechanisms when the results are unexpected
When you have unexpectedly high responses to seemingly low doses, the case is significantly bolstered by identifying a mechanism rather than looking only at statistical correlations or regressions (e.g., prions, mercury in the infant cerebellum, etc.).

6. With polls, keep an eye on demographics
When it comes to polling, yes, we can take an imperfectly drawn New Hampshire sample of 850 and split it six ways (young-old, rich-poor, male-female, minority-white...), implying that each slice has the statistical power of the overall sample – which isn't that great in the first place. And we can do it all without once mentioning that New Hampshire's demographics and ground truths have changed a lot since the last hugely contested primary there in 2000, or that younger voters, who often have only cell phones, are hard to find – and thus hard to poll. And why? Because bringing up any of this screws up the story!
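
To see why the cell-phone point matters, here is a minimal sketch of coverage bias. The cell-only share and the support rates are made up for illustration, not drawn from any real poll:

```python
# Minimal sketch of coverage bias, with made-up numbers: suppose 25%
# of voters are cell-only and support a candidate at 65%, while
# landline-reachable voters support her at 45%. A landline-only poll
# never sees the first group at all.
import random

random.seed(0)  # arbitrary seed, for reproducibility

CELL_ONLY_SHARE = 0.25   # hypothetical share of cell-only voters
SUPPORT_CELL = 0.65      # hypothetical support among cell-only voters
SUPPORT_LANDLINE = 0.45  # hypothetical support among everyone else

def poll(n, reaches_cell_only):
    """Simulate n completed interviews from the given sampling frame."""
    supporters = completed = 0
    while completed < n:
        cell_only = random.random() < CELL_ONLY_SHARE
        if cell_only and not reaches_cell_only:
            continue  # the voter exists, but this poll never reaches them
        rate = SUPPORT_CELL if cell_only else SUPPORT_LANDLINE
        supporters += random.random() < rate
        completed += 1
    return supporters / n

print(f"full-frame poll:    {poll(850, True):.1%}")
print(f"landline-only poll: {poll(850, False):.1%}")
# True support is 50% (0.25 * 0.65 + 0.75 * 0.45). The landline-only
# poll hovers near 45%: a five-point miss, larger than the +/- 3.4%
# margin of error the poll would report for n = 850.
```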

7. PR plays on laziness – your laziness
Thinking is such hard work. That's the secret of PR. Odds are, journalists will reprint the press release on the new study or poll results rather than thinking about what could go wrong.

Do you have any rules you think journalists should follow when reporting science, statistics and polling? Let us know and we'll add the best ones here.

