Health, Science, and Environment Reporting
These fields offer incredibly interesting and universally appealing topics to cover, and there is never a shortage of good ideas or pitches vying for attention. But with this bonanza of opportunity comes unique responsibility. A misleading or incorrect story could lead someone to make unwise, harmful choices. It’s impossible to be too careful, yet there’s no need to fear these topics, either. The skills that make for good science or medical writing are the same as those for good reporting on any topic.
Chief among them is critical thinking: to question rather than accept a claim, to check what’s already been written about it, to ask for evidence to evaluate the claim, and to seek the help of experts to put it into context. Is it the first treatment for a disease or the fifth? The biggest rise in average temperature in a decade or the ninth? Does it change what scientists have long understood? Anyone can write a recap of results. The journalist’s job is to help readers understand what they mean.
News stories are aimed at the public, not at scientists, doctors, investors, or policymakers. They must be useful to general readers and written in clear language free of jargon.
In health and medicine, topics that affect a lot of people appeal to a larger audience than those that affect a few (heart disease versus a rare disease) unless there’s a great human-interest story or major advance that makes it newsworthy.
In science, advances that help further understanding of our world (or distant worlds) in ways that can be made relevant or interesting to general readers can be worth covering. Also worthy of coverage are stories that hold some other special appeal because of the subject or scientific approach.
News stories should have value for readers, not just give credit to a researcher, university, or hospital. A good test: whether it would be talked about at a dinner table. When initially evaluating potential stories, here are things to look for, and to look out for:
- Important science usually is made public in journals or at scientific meetings where a researcher’s methods and results have been deemed acceptable by outside experts (a process called peer review). That gives their findings more credibility, though rigorous checking by reporters is still essential. Reputable scientists rarely announce their findings solely in a press release or at a press conference.
- Consider the source. Is it from an established research institution? A large, reputable journal, or one without peer review? Off-the-beaten-path places and minor journals can have good science, but reporting on it requires even more care than usual.
- Studies conducted by advocacy and industry groups, even though many employ accomplished scientists, should be treated with caution. That research is generally designed to make a point, not to look for an answer objectively.
- Beware of “breakthroughs” because few things truly are. Exaggeration makes readers and viewers distrust the media and science.
- Be skeptical of “first” and “only,” and use these words only when you’re sure they’re true. It’s often difficult or impossible to verify that something is a first. Or it may be a first for a certain research institution or hospital yet has been done elsewhere, or it’s an incremental step that’s technically a first but in reality is of small import.
- Be skeptical of health claims based on testimonials or anecdotes. They are not science. Many people ascribe benefits to something when it is really a placebo effect—when a person responds to a dummy treatment. Conversely, many people also ascribe their health problems to something without any way to know if that is truly the cause. Readers are not well served by one person’s belief, experience, or understanding of the effectiveness or safety of a treatment, a medical device, a diet, or a chemical.
- Keep in mind that scientists are not infallible. They are human, with their own biases, and can make mistakes even when they’re very careful.
Scientific Journals, Meetings, and Embargoes
Science and medical journals are written for scientists, researchers, and doctors, offering professionals in the field findings that further research can build on. Sometimes these findings are important or interesting enough for general readers. These are usually published in major journals and provided several days in advance under embargo to registered journalists to allow time to prepare stories and consult experts.
Understand, however, that institutions use embargoes to drum up interest. Science happens continuously, not just when journals or universities announce findings. Avoid embargoes that do not allow you to speak to sources outside of the institution that is promoting the work.
Never use information obtained under embargo for any purpose besides journalism. Do not discuss it with family, friends, or others while the information is under embargo. Drug company stocks rise and fall on such news, and journalists can be liable like anyone else if insider trading happens because of improper or premature disclosure of results.
There are thousands of journals, some with more rigorous standards than others. Research that has not been peer-reviewed, including articles posted on preprint servers, should be reported with extreme care. Even studies published in respected peer-reviewed journals require journalistic vetting with experts unconnected with the research because the process of peer review is not foolproof.
Research sometimes is presented first at scientific or medical conferences and meetings. Coverage of this research requires extra care because often only partial results are released and they haven’t been subjected to full peer review. If reporting findings presented at meetings, it’s best to be there in person so you can consult outside experts who have seen the presentation.
Types of Stories
How a study was done helps determine how reliable its results are. Here are things to consider about some common study designs:
- Experiments: The strongest studies are experiments in which researchers can directly test a hypothesis while considering alternative explanations. In medicine, the best ones randomly assign a group of people to get the treatment being tested, a fake version of it, or a current standard of care. Neither the participants nor their doctors know who got what until the study ends (which is why they are called “double blind”). Sometimes experiments, especially early-stage ones, lack a comparison group, and that limits what can be known from the results.
- Observational Studies: Observational studies examine a group of people or compare groups based on such factors as how much they weigh or what medicines they take. A common version is a study that looks for links between lifestyle habits or exposures to chemicals and diseases or conditions. Some things can be learned only from observational studies. For example, it would be unethical to assign one group to a medical test involving radiation every year and watch for 20 years to see how many develop cancer, and compare the results to a group that didn’t get the annual test. Drawing conclusions from observational studies is hard because many other things can affect results besides the factor being examined. This is called confounding. It is a particular problem with food, nutrition, and diet studies because it is almost impossible to account for all the other things a person does over long periods. But sometimes, especially when multiple observational studies reach the same conclusion, the evidence is strong enough to establish a cause-and-effect relationship. This is how scientists originally showed that smoking causes lung cancer. Prospective (forward-looking) observational studies, in which the study population is carefully tracked over time, are more reliable than retrospective studies, in which researchers must take a “best guess” approach by relying on questionnaires and looking back at records.
- Meta-Analyses: Another type of study is a meta-analysis, when researchers compile results of many related studies that individually are not big or strong enough to establish a point but that might suffice collectively. Reporting on these requires great care; much depends on which studies researchers include or leave out. Big numbers give statistical power to observe an effect, but combining studies also magnifies the flaws of each one.
- Models: In a modeling study, scientists use computer simulations to play out thousands of scenarios to see what could happen when changes are made to complex systems, like ecosystems, climate, or cosmic events. The mathematics of the model should be checked with outside experts, along with the assumptions that go into the calculations. Researchers should show that they have rigorously tested the model by running it over and over—and, if possible, by applying it retroactively and seeing if the results match real-world observations from the past (such as air temperature or sea level, for climate models).
- Research Stages: Drug studies usually are conducted in three phases in humans. In phase 1, small numbers of people are given an experimental treatment to see if it’s safe. In phase 2, more are treated to further test safety and determine appropriate dosages. Phase 3 studies are large tests of safety and effectiveness. It’s often best to wait to report until then because many things that look good early on fail at this stage. They’re also what regulators usually rely on for approval decisions. Note: While it’s important for the reporter to understand this, there’s usually no need to bog down news stories with these terms. Instead of phase 1, for example, call it early research.
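The retroactive-testing idea in the Models entry above can be illustrated with a toy sketch. Everything here is synthetic and hypothetical, invented purely for illustration: a simple linear “model” is run backward over past years (a hindcast) and its output compared against known observations.

```python
# Toy hindcast check: run a model retroactively and compare its output
# to past observations. All numbers are synthetic, for illustration only.

def toy_model(year, base=14.0, trend=0.02):
    """Hypothetical model: average temperature rises linearly from 1950."""
    return base + trend * (year - 1950)

# Synthetic "observed" average temperatures for past years.
observed = {1960: 14.2, 1980: 14.6, 2000: 15.0}

# Hindcast: apply the model to past years and measure the error.
errors = {year: abs(toy_model(year) - temp) for year, temp in observed.items()}
max_error = max(errors.values())
print(max_error)  # a tiny error means the model reproduces the past
```

A real climate or ecosystem model is vastly more complex, but the reporting question is the same one this sketch poses: when the model is run against the past, how far off is it?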
Here are some things to consider when evaluating a study:
- Is it in people or animals? Results in animals frequently don’t extend to humans.
- How large is it? Bigger is almost always better, but it depends on the type of study. An experiment that tests drug X in 1,000 people and includes a comparison group is much more definitive than a 5,000-person observational study that notes just who took drug X and how they fared. In rare situations, studies of these sizes are impossible, and small studies are just as valid statistically.
- Is there something else that might explain or influence the results, such as age, other medical conditions, genetic differences, or where people live? If so, did researchers adjust results to consider these in a way outside experts think is valid?
- Is it consistent with prior studies? Studies with results that contradict earlier, well-regarded research should be treated with more skepticism. That doesn’t mean you should ignore studies with outlier results, but you should vet them even more thoroughly than usual.
- Does the effect increase with the dose? In an environmental study, for instance, does the occurrence or severity of a health problem increase as the amount of toxic exposure increases? There may be a special reason if it doesn’t, but it may be a red flag.
- Is there a plausible biological explanation? A study result has more credibility if researchers can point to how it occurred. For example, a study finding that left-handed people are more likely to suffer from athlete’s foot should be regarded with great skepticism if researchers cannot suggest a credible explanation. On the other hand, a researcher who found that adding fluoride to a city’s water supply reduced tooth decay can credibly point to experiments showing that fluoride bonds with the outer layer of tooth enamel and strengthens it.
- Does it add to knowledge, or is it a “marketing study” designed to encourage doctors to use a company’s drug in the guise of a clinical trial? Similarly, be wary of awareness campaigns that may push readers to be tested for conditions they shouldn’t be, to create more “worried well” and sell more drugs or boost donations to advocacy groups.
Numbers can help you tell health, science, and environment stories—and help you tell if there is a story to be told. First, we want to know whether the results of a study are statistically significant, which means that the risk they are just a fluke is acceptably low. Then we need to carefully show readers just how much difference in outcome the study found. Patients “did better” tells us little; “60 out of 100 patients who took the drug survived, compared with just 30 of 100 who did not” is much more informative.
Here are some statistics you will encounter in studies. You will probably never tell your audience about p-values or confidence intervals, but they help you evaluate whether a study result is worth reporting. You probably will include risks and percentages because they can give your audience important context—as long as you handle them properly.
- Relative Risk: Relative risk is the risk of something happening to one group compared with the risk of it happening to another. This is often expressed in a fraction or ratio in scientific studies. If there is no difference, the ratio is 1. For example, if a study finds that the relative risk of a group of smokers getting a disease is 1.5 compared with a group of nonsmokers, it means the smokers are 1.5 times—or 50%—more likely to develop the disease. But it doesn’t say how likely it is that either group gets the disease. For that, you need absolute risk.
- Absolute Risk: Absolute risk is the risk of something happening at all. For example, the nonsmoking group in the above example may have had a 4 in 100 chance of getting the disease, while the smokers had a 6 in 100 chance of getting the disease. Another example: A drug that extends life by 50% (a relative risk) sounds impressive, but that might mean living six months on average on a treatment versus four months without. Readers deserve both views of the results.
- P-Value: A p-value (or probability value) is a measure that scientists use to gauge whether a result reflects a real, reliable difference or is just a fluke. Generally, a p-value of less than 0.05 suggests the result is reliable. So if a study reports that a drug lowered cholesterol and the result has a p-value of 0.04, it has met the test. This number is nearly always included in medical studies and in some science studies if relevant. Keep in mind, however, that such statistics can be gamed, in what is known as “p-hacking,” or cherry-picking experimental conditions until there is a statistically significant result. Ask researchers whether they tested more variables than they report.
- Confidence Interval: In addition to a single result (often expressed as a relative risk), many medical, environmental, and science studies include a confidence interval that encompasses the range of likely results. If a confidence interval ranges from below 1 to over 1, it means the result is not statistically significant. For example, if a drug lowered the risk of a heart attack but the confidence interval was 0.85 to 1.25, it fails the test; there is a meaningful chance that the drug did not actually lower the risk of heart attack. Another warning sign is if the upper and lower bounds of a confidence interval are far apart (1.2 to 15.3, for example). In this sense, you can think of confidence intervals like margins of error in an opinion poll: the tighter the range, the better.
- Significant: Significant when referring to statistics means the results have passed statistical tests. But that does not mean the results are automatically noteworthy—or significant—for readers. Even though a study finds a statistically significant benefit from a treatment, that doesn’t necessarily mean it makes people feel noticeably better or is worth using. Study results that are not statistically significant should be treated with great skepticism, though there may occasionally be circumstances where a result is newsworthy even if it falls short of statistical significance.
- Percentages: There is a big difference between percent and percentage points, so be careful when using these terms to report results (see percent, percentage, percentage points in The AP Stylebook 2019). If a drug changes the number of people in a group who have high blood pressure from 80% to 40%, that’s a 50% decline but a difference of 40 percentage points.
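The arithmetic behind these statistics is simple enough to check by hand, or with a few lines of code. A minimal sketch in Python, reusing the hypothetical numbers from the examples above (the smokers and nonsmokers, the 0.85 to 1.25 interval, and the blood pressure figures):

```python
# Quick arithmetic behind the statistics above, using the hypothetical
# numbers from the examples in this section (not real data).

group_size = 100
cases_smokers = 6        # 6 in 100 smokers got the disease
cases_nonsmokers = 4     # 4 in 100 nonsmokers did

# Absolute risk: the chance of something happening at all.
risk_smokers = cases_smokers / group_size        # 0.06
risk_nonsmokers = cases_nonsmokers / group_size  # 0.04

# Relative risk: the ratio of the two absolute risks. With equal group
# sizes this is just the ratio of case counts: 6/4 = 1.5, meaning
# smokers are 1.5 times (50%) more likely to get the disease.
relative_risk = cases_smokers / cases_nonsmokers

# Confidence interval check: an interval spanning 1 (e.g., 0.85 to 1.25)
# means the result is not statistically significant.
ci_low, ci_high = 0.85, 1.25
is_significant = not (ci_low <= 1 <= ci_high)    # False here

# Percent vs. percentage points: high blood pressure falls from 80%
# of a group to 40% of the group.
before_pct, after_pct = 80, 40
percent_decline = (before_pct - after_pct) / before_pct * 100  # 50.0%
point_difference = before_pct - after_pct                      # 40 points
```

The last two lines show why the distinction matters in print: the same change is both “a 50% decline” and “a drop of 40 percentage points,” and swapping the two misstates the result.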
Reporting Health and Science
Among the most critical parts of reporting a story that involves health, science or the environment is getting comments from outside experts who know the subject well. When reporting a study, you should find experts who had no role in the work. Ideally consult more than one about the methods, the results, and the conclusions being made by the study’s authors. This helps reveal whether the study is worth reporting and if so, why, even though the study has already been reviewed by peers.
To find independent experts, check a study’s footnotes and references for who has previously researched the topic. PubMed and Google Scholar can also point to experts and previous studies for context. Ask them: Do you believe the conclusions? Does the evidence strongly support them? What are the problems with the study or conclusions? What other factors could be at play? Is this a big deal and why? How does this fit with what we knew before?
Don’t rely solely on a press release about a study; always read the actual study. Sometimes press releases hype or exaggerate claims and conclusions beyond what a study really showed. Other important reporting tips and considerations include:
- Financial Disclosures: Find out who paid for the research, and report it when it is relevant, which is almost always the case with studies of treatments. Much if not most medical research is paid for by private companies, or advocacy or special interest groups; governments increasingly fund a smaller share. This doesn’t mean the work is bad or wrong, but stories need to report what role the sponsor played (supplied the drug? compiled the results?). Include what ties, if any, the researchers had to the sponsor or its competitors, and whether the researchers might profit through patents or royalties. Sometimes sponsorship is a reason not to write, such as a health claim for a food based on an industry-funded study.
- Caveats: Science is rarely definitive. There often are other potential explanations for a phenomenon or competing interpretations of what a fossil tells us about the past. As long as those alternatives also have a sound scientific underpinning, they should be noted.
- Side Effects: There always are risks or side effects to treatments. If a news release or a meeting abstract doesn’t mention that, it’s a red flag—and you need to find out. Any story reporting a treatment’s benefits also should include its risks and any serious side effects.
- Costs: Always try to include the cost of a treatment. If it’s experimental and the cost hasn’t been set, often you can discuss context, such as the cost of other similar treatments. Also try to find out if insurers are likely to cover it, or note that it is still unclear whether they will and that insurance policies and out-of-pocket costs vary. Often, that’s what patients need to know.
- Time to Market: Remember that it can take many years for a drug to move from testing to government approval and commercial use. Many drugs never make the transition because of concerns about profitability or effectiveness. Readers often assume that they will be able to use an experimental drug immediately after it has been tested. Avoid giving them false hope. Instead, explain the steps that lie ahead and how long they are likely to take.
- False Balance: Do not give a platform to unqualified claims or sources in the guise of balancing a story by including all views. This perpetuates denialism. For example, coverage of a study describing effects of climate change should not seek “other side” comment that humans have no influence on the climate; in reporting about lung cancer deaths, do not pursue comment that smoking does not cause cancer.
On the other hand: Recognize when statements are false but also newsworthy and necessary to report. Examples include a key policymaker rejecting mainstream climate science, or parents lobbying Congress with the argument that vaccines cause autism. Such statements or actions need to be reported—but such stories must prominently include fact-checking material making clear that science shows the statements are wrong.
- Patients and Families: When writing about the medical conditions of patients or public figures, it’s important to verify information supplied by the patient, friends, or family with medical records or their doctor. Without confirmation, attribute the information to its source. This is not to say that patients are not worth talking to; quite the opposite. Patients’ experiences and their struggle to understand and navigate their condition and how it is being treated are often interesting and newsworthy. But quote patients on their lives and their feelings; quote scientists on science. Use care when featuring patients in anecdotes to make sure they are truly representative or typical of what you’re writing about. Be wary of patients suggested by drug or device makers to speak with journalists because they may have been compensated or coached.
- Nonexperts: Don’t report medical advice from celebrities or sports figures. They’re often paid by companies or advocacy groups to pitch products or a point of view, such as the need for certain cancer screenings or a diet or “wellness” product, and they are not scientific or medical experts.
- Toxic Chemicals, Radiation, Carcinogens: Living down the street from a dump doesn’t necessarily raise someone’s risk for a disease. Working with a known cancer-causing substance for decades might. Reporting on these topics requires consulting toxicologists, public health researchers, and other specialists. As with general reporting, avoid reporting claims that are based on advocacy rather than science, whether they are made by lawyers or by people who claim to be affected. When news requires coverage, for example when a major lawsuit is being filed, be sure to note if the claims have not yet been proven. Treat all claims made by sources—whether they are made by polluters or by those who claim to be affected—with the same level of scrutiny. Making sure to include absolute risk (discussed above) when reporting on these subjects is critical to provide readers an accurate understanding of danger.
- Cancer Clusters: Most suspected clusters, after investigation, turn out to be either baseless or unverifiable. That’s especially true if they involve many types of cancer, which have many different causes. Other factors such as a family history of the disease, genes that predispose people to it, and habits such as smoking all affect cancer risk. It requires public health expertise and training to sort out these risks, so always seek out qualified independent experts when reporting on alleged clusters.
- Cures: Avoid calling a disease cured, especially cancer. An infection or temporary condition can be cured, but doctors can’t be sure that a cancer won’t recur, so they say “remission” for that reason.
We are translators between people who speak the language of science and ordinary readers who don’t. If you don’t understand a term, or don’t know for certain that you can replace a technical term with a more reader-friendly one, ask an expert for help. You can write high blood pressure instead of hypertension, for example, but you cannot refer to a cardiac arrest as a heart attack, because they are two different things.
As in all writing, avoid jargon and clichés—even in quotes. Some common science and medical jargon that can be said more simply includes: clinician (use doctor instead), efficacy (just say how well it works or not), literature (other studies), pathogen (germs), proportion (share), prevalence (say how common something is), trials or clinical trials (studies, research), underlying condition (other conditions, other medical conditions).
Some common clichés in science writing include: cutting edge, holy grail, game changing, low-hanging fruit, outside of the box, paradigm shift, perfect storm, sci-fi, sea change, silver bullet, smoking gun, tip of the iceberg, wake-up call.
Individual Terms. Specific health and science terms are in The AP Stylebook’s alphabetical section.