How to Distinguish Good Science from Bad Science

Science is a powerful tool. It can change the world, improve our understanding of our universe, and help us find new and innovative ways to solve problems. But science is only as good as the data it uses, and bad science can lead us astray.

Bad Science

Over the last few years, I've written several articles and recorded a similar number of podcasts on ethics related to innovation. Although this one is titled Bad Science, we could just as easily have titled it Bad Innovation.

In this episode, we explore how to determine whether the science you are reading is accurate or not.

The inspiration for this episode came from an infographic created by Compound Interest (compoundchem.com). I've taken the list of ways to spot bad science and created my descriptions with examples — but all credit goes to @compoundchem.

12 Ways To Spot Bad Science

There are many ways to identify bad science studies and articles that publish the results, but here are twelve of the most common. You can protect yourself from being misled by being aware of these red flags.

1)     Sensationalised Headlines

Sensationalized headlines can be incredibly misleading. They often over-simplify the findings or, worse, misrepresent them entirely. Misinterpretation can lead to bad decision-making on the reader's part and, ultimately, real harm.

It's essential to be discerning when reading science articles and always to consider the source of information. Reputable sources always aim to present accurate information, while less reputable sources may sensationalize information to get more readers/viewers. In the long run, this can muddy the waters and make it more difficult for people to discern what is true.

An example of a misleading sensationalized headline is “A New Drug Can Cure Alcoholism,” published by The Sun. The report claims that a new drug called Selincro can “cure” alcoholism, but this is not the case. Selincro is a treatment that helps reduce drinking in people with alcohol dependence; it does not cure addiction.

2)     Misinterpreted Results

Misinterpreted results can often lead to bad science and bad innovation. Research reported in the media can be sensationalized or simplified in a way that distorts the actual findings, and that distortion can lead to poor decisions based on inaccurate information. Therefore, it is essential to read the original research to understand what was actually studied. Only then can informed decisions be made about whether the findings apply to your work.

One oft-cited example of misinterpreted results is the study claiming that eating chocolate can help you lose weight. The study was later found to be flawed, and the author had to retract his findings.

3)     Conflict of Interest

Science is often thought of as a purely objective pursuit, unaffected by the biases and motivations of the people involved. However, scientists are people, and their interests and agendas can influence their work. When those agendas collide with the research itself, we have a conflict of interest.

A conflict of interest can distort scientific research and lead to poor decisions. For example, scientists might be more likely to publish results that support their theory or to downplay negative results.

Conflicts of interest can also hurt innovation. Innovators seeking patents or commercial opportunities are less likely to share their findings with others. Lack of information sharing can stifle innovation and prevent the development of new ideas.

Ultimately, it is crucial to recognize that conflicts of interest exist, and we must consider them when evaluating discoveries. It is also essential to have transparent and accountable systems to manage conflicts of interest.

A recent example of a conflict of interest that impacted innovation is the Volkswagen emissions scandal. In 2015, reports surfaced that Volkswagen had been cheating on emissions tests for its diesel cars. The cheating was possible because Volkswagen had developed software that detected when a car was being tested and enabled full emissions controls only then. The cars passed the emissions tests, but on the road they emitted far more pollutants than allowed.

This scandal highlighted the importance of managing conflicts of interest and showed how bad decisions could happen when scientists are not impartial.

4)     Correlation and Causation

Science can be misused and abused by exploiting people's confusion between correlation and causation.

Correlation is when two things happen together more often than would be expected by chance. For example, there is a correlation between ice cream sales and murders — when ice cream sales go up, so do murders. But that doesn't mean that eating ice cream causes people to murder others. There could be any number of other factors at work.

Causation, on the other hand, means that one thing causes another. When we say that A causes B, we mean that A comes before B and that changing A will change B. For example, we know that smoking causes cancer: smokers are far more likely to get cancer than non-smokers, and reducing smoking reduces the incidence of cancer.

When two things appear correlated, it's important not to jump to conclusions and assume that one thing is causing the other. Without doing proper research and testing, bad science can result. So next time you hear about some scientific study that seems too good to be true, be skeptical!
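The ice cream example can be simulated in a few lines of Python. This is a hypothetical sketch with made-up numbers, assuming a hidden confounder (daily temperature) drives both quantities:

```python
import random

random.seed(42)

# Hypothetical data: temperature is the hidden confounder that
# drives both ice cream sales and some unrelated incident count.
temps = [random.uniform(0, 35) for _ in range(365)]
ice_cream = [3.0 * t + random.gauss(0, 5) for t in temps]   # sales rise with heat
incidents = [0.5 * t + random.gauss(0, 2) for t in temps]   # so does the other series

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, incidents)
print(f"correlation between the two series: r = {r:.2f}")
# The correlation is strong, yet neither series causes the other:
# removing the shared driver (temperature) would remove it.
```

The two series never interact, yet the shared driver produces a strong correlation; conditioning on temperature would make it vanish.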

Using scientific studies to sell products is one commercial example where correlation and causation can mislead the public. A study that shows a correlation between a product and a positive outcome can persuade people to buy the product. But without knowing the full details of the study, it's hard to tell if the product caused the correlation.

For example, numerous studies show a correlation between eating breakfast and being thinner. So, many companies have started selling breakfast foods to help people lose weight. But does eating breakfast make you thinner? It's hard to say because there are many other factors at work. Maybe people who eat breakfast are more likely to be thinner overall or more likely to exercise in the morning. It's difficult to say for sure what's causing the correlation.

5)     Unsupported Conclusions

Bad science can often come from unsupported conclusions. When a study jumps to a conclusion without proper evidence, it can be misleading and spawn further bad science, because people may take away the wrong idea and try to build on that misconception. In some cases, decisions based on such bad information can even lead to injury and lawsuits.

Therefore, studies must be very clear on what their evidence shows and what conclusions are still speculative. Clear studies allow people to understand the research better and prevent bad science from spreading further.

One recent example of bad science based on unsupported conclusions is the case of Theranos. The company claimed to have developed a way to run blood tests using far less blood than traditional methods. However, investigations eventually revealed that the technology didn't work, leading to massive financial losses for investors and harm to patients who trusted the company.

Others attempting to follow in their footsteps found themselves back at the drawing board, wasting time and resources on something that wasn't possible. This is just one example of how bad science can have far-reaching consequences.

6)     Problem with Sample Size

Small sample sizes can lead to bad science for several reasons.

First and foremost, when the sample size is small, it's more likely that the data will not represent the population. This means that any conclusions drawn from that data may be inaccurate.

Additionally, small samples have less statistical power, meaning they're less likely to detect real differences between groups. This can lead to bad science in two ways: researchers may incorrectly conclude that there is no difference between groups (a false negative), or they may falsely deem a chance result statistically significant (a false positive).

Finally, small sample sizes increase the chance of type II errors (false negatives), in which a true effect goes undetected and a misleading null result gets published. Together, these issues mean that small samples can lead to faulty conclusions and bad science.
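The statistical-power point can be demonstrated with a short simulation. This is an illustrative sketch (not from any study in this article): a real effect of half a standard deviation exists, and we count how often a simple significance test detects it at different sample sizes:

```python
import random

random.seed(0)

def detects_effect(n, effect=0.5):
    """One simulated study: two groups, a true difference of `effect`
    standard deviations, and a simple z-style test at the 5% level."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    se = (2.0 / n) ** 0.5  # standard error of the difference (known unit variance)
    return abs(sum(b) / n - sum(a) / n) / se > 1.96

def power(n, trials=2000):
    """Fraction of simulated studies that detect the (real) effect."""
    return sum(detects_effect(n) for _ in range(trials)) / trials

p_small, p_large = power(10), power(100)
print(f"power with n=10 per group:  {p_small:.0%}")   # roughly 20%: most small studies miss it
print(f"power with n=100 per group: {p_large:.0%}")   # roughly 90%+: most large studies find it
```

The effect is identical in both cases; only the sample size changes, and with it the chance of a false negative.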

One example of a research study whose small sample size led to bad science is a study on the effect of fluoride on children's intelligence. Because the sample was so small, the authors could not detect any significant difference between the fluoride and placebo groups, and they incorrectly concluded that fluoride has no impact on children's intelligence.

7)     Unrepresentative Samples Used

Unrepresentative samples are often used in bad science experiments, leading to faulty conclusions.

Using a non-representative sample makes it much easier to obtain the results you're looking for, because the data is guaranteed to be biased. When this happens, bad science perpetuates itself and erodes trust in scientific findings generally. For example, a study claiming that salt is terrible for your health might draw on a sample of people who already have health problems. The study would give the impression that salt is bad for everyone when it might only harm people with certain conditions.

If we rely on such studies to make decisions about our health, we could be doing ourselves a disservice. It's therefore important to always look at the methodology of a study before accepting its conclusions as fact. Only by doing so can we avoid being misled by bad science.
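Here is a hypothetical sketch of the salt example in Python. In this made-up population, the harm only affects the 20% of people with a pre-existing condition, so sampling only clinic patients wildly overstates the average effect:

```python
import random
from statistics import mean

random.seed(7)

def observed_harm(has_condition):
    """Made-up effect: salt harms only people with the condition."""
    return random.gauss(2.0 if has_condition else 0.0, 1.0)

# 20% of the general population has the condition.
population = [random.random() < 0.2 for _ in range(10_000)]

# Biased sample: everyone recruited at a clinic already has the condition.
clinic_sample = [observed_harm(True) for _ in range(200)]
# Representative sample: drawn at random from the whole population.
random_sample = [observed_harm(c) for c in random.sample(population, 200)]

harm_clinic = mean(clinic_sample)
harm_random = mean(random_sample)
print(f"harm estimate from clinic sample:         {harm_clinic:.2f}")
print(f"harm estimate from representative sample: {harm_random:.2f}")
# The clinic sample suggests salt harms everyone; the representative
# sample shows a much smaller average effect.
```

Same measurement, same population; only the recruitment method differs, and it alone determines the headline.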

8)     No Control Group Used

The lack of a control group can cast doubt on the results of an experiment. In clinical trials, it's critical to compare outcomes from participants who received the tested substance against a control group that didn't receive it. This comparison allows researchers to see whether the drug made any difference.

Random allocation to groups is also crucial to minimize bias. More generally, experiments need controls so that researchers can hold other variables constant and isolate the effect of a single variable.

Perhaps the most famous example of harm from inadequate testing is the case of thalidomide. The drug was marketed as a sedative for pregnant women in the 1950s and 1960s before it emerged that it caused severe congenital disabilities in thousands of children.

9)     No Blind Testing Used

By not blinding the test, researchers can introduce bias into the study. This can happen in several ways, including researcher bias, subject bias, and observer bias.

Researcher bias happens when the researcher has a preconceived notion about the study's outcome and influences how it is conducted or analyzed.

Subject bias is when the subject knows which group they are in and alters their behavior. For example, someone who knows they are receiving a new drug treatment rather than the placebo may report feeling better even when the drug didn't work.

Observer bias is when the person assessing or recording outcomes knows which group a participant is in and measures or interprets their behavior differently based on that knowledge.

These biases can lead to inaccurate findings and conclusions in scientific studies, with far-reaching consequences when the resulting bad science is used to make recommendations or decisions about treatments, policies, and more. It's therefore important that scientists use blinded tests whenever possible.

One example of a study whose flawed and unethical design caused lasting damage is the Tuskegee syphilis study, in which 399 Black men with syphilis were left untreated so researchers could study the progression of the disease. Even after it was discovered that penicillin could cure the disease, the study continued for decades. The study's deception and lack of proper controls produced biased results and a deep, lasting distrust of medical research.

10)     Selective Reporting of Data

When it comes to scientific research, the data collected should be unbiased and interpreted fairly. However, bad science sometimes arises from researchers selectively reporting data: they highlight the data that supports their conclusion and ignore any that does not. This can lead to incorrect judgments and assertions.

One way to avoid this issue is always to present all the data collected, regardless of whether it supports your findings. This will help ensure that other researchers can interpret and analyze the data and reach their conclusions. It is essential to be open and transparent about your research methods and results so that others can evaluate them for themselves.
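Selective reporting can be illustrated with a small simulation (an illustrative sketch, not from the article): run twenty experiments on pure noise, and a few will look "significant" by chance. Reporting only those manufactures a finding:

```python
import random

random.seed(1)

def looks_significant(n=30):
    """One experiment on pure noise: two groups with NO real difference,
    judged by a simple z-style test at the 5% level."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    se = (2.0 / n) ** 0.5
    return abs(sum(b) / n - sum(a) / n) / se > 1.96

results = [looks_significant() for _ in range(20)]
hits = sum(results)
print(f"'significant' results out of 20 null experiments: {hits}")
# Around 1 in 20 null experiments clears the 5% bar by chance alone.
# Publishing the hits and shelving the misses turns noise into a 'discovery'.
```

This is why reporting the full set of experiments, not just the favorable ones, matters: the denominator is the whole story.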

A frequently cited example of selective data reporting is “The Mismeasure of Man,” a book by Stephen Jay Gould in which he argues that intelligence tests are biased against certain groups of people. However, later researchers have contended that Gould himself selectively reported data to support his conclusions, for example by ignoring evidence that showed no significant difference in test scores between men and women.

11)     Unreproducible Results

When research is not reproducible, it becomes difficult to verify the findings, which can create doubt about the entire study. This can lead to bad science, as scientists may accept bad data as fact. In short, reproducible research is essential for good science, and when research is not reproducible, it can lead to a variety of problems.

One recent example of a retracted study that could not be reproduced is the infamous “South Korean stem cell study.” The researchers claimed to have created stem cells using a new method, but other scientists could not reproduce the results, and the journal that originally published the study retracted it.

12)     Non-Peer-Reviewed Material

The importance of using peer-reviewed studies cannot be overstated. Using these studies, researchers can be sure that the information they are getting is accurate and reliable. Studies that have not been peer-reviewed may be flawed and thus unreliable. This can lead to bad science and inaccurate information being spread. Peer review is a critical step in the scientific process and helps to ensure that only the best, most accurate information is published.

There has been a recent rash of studies that passed peer review and were later retracted. Peer review is not a perfect system, but it is the best we have. To avoid bad science, researchers should always look for peer-reviewed studies.

One infamous example of a peer-reviewed study that was later retracted is the 1998 Lancet paper by Andrew Wakefield, which purported to show a link between the MMR vaccine and autism. The paper was fully retracted in 2010 after investigations revealed ethical violations and manipulated data.

Retractions like these call other peer-reviewed studies into question. It is hard to know which studies to trust when bad science like this makes its way through the peer-review process.

Good Science versus Bad Science

Science is a process of exploration and discovery. When bad science occurs, it can cast doubt on all the findings of that study and the entire scientific process. However, we can avoid being misled by these studies by being aware of the signs of bad science. We can also help to ensure that good science is not tainted by bad data.

It is important to remember that science is an ever-evolving process. The retracted “South Korean stem cell study,” for example, was deeply flawed, but exposing it taught the field hard lessons about verification and oversight. In this way, even bad science can be valuable in helping us learn more about the world.

We should not give up on science just because of bad science; instead, we should use bad science as a learning experience and continue to explore and discover new truths about the world around us.

Let's work together to recognize and reward good science while calling out bad science, so that we can ensure we have the best information with which to make informed decisions.

