Science Has a Nasty Photoshopping Problem

By Elisabeth Bik

Dr. Bik is a microbiologist who has worked at Stanford University and for the Dutch National Institute for Health.

One evening in January 2014, I sat at my computer at home, sifting through scientific papers. For a microbiologist, that wasn’t unusual, although I certainly didn’t expect to find what I did that night.

These particular papers were write-ups of medical research, with many including photographs of biological samples, like tissue. One picture caught my eye. Was there something familiar about it? Curious, I quickly scrolled back through other papers by the same authors, checking their images against each other.

There it was. A section of the same photo being used in two different papers to represent results from three entirely different experiments.

What’s more, the authors seemed to be deliberately covering their tracks. Although the photos were of the same sample, one appeared to have been flipped back-to-front, while the other appeared to have been stretched and cropped differently.

[Figure: Two papers, three experiments, one image. These figures show western blots, which are used to detect the presence of a specific protein in tissues or bodily fluids. Panels: the original image from the first paper; the same image, stretched, in the second paper (first repetition); and the same image, flipped and tilted, in the second paper (second repetition).]

Sources: “REDOX regulation of IL-13 signaling in intestinal epithelial cells: usage of alternate pathways mediates distinct gene expression patterns,” by Debasmita Mandal, Pingfu Fu and Alan D. Levine (first paper); “Elevated IL-13Rα2 in intestinal epithelial cells from ulcerative colitis or colorectal cancer initiates MAPK pathway,” by Debasmita Mandal and Alan D. Levine (second paper).

Although this was eight years ago, I distinctly recall how angry it made me. This was cheating, pure and simple. By editing an image to produce a desired result, a scientist can manufacture proof for a favored hypothesis, or create a signal out of noise. Scientists must rely on and build on one another’s work. Cheating is a transgression against everything that science should be. If scientific papers contain errors or — much worse — fraudulent data and fabricated imagery, other researchers are likely to waste time and grant money chasing theories based on made-up results.

But was this duplication just an isolated case? With little idea of how big the problem might be, I began searching for suspicious figures in biomedical journals.

Manipulated imagery in scientific papers can look ordinary at first glance. Consider this figure from a study arguing that a chemical called d-Limonene has potential for fighting cancerous tumors.

On closer examination, this image contains regions that appear to have been copied and pasted, as well as duplicated and flipped.

A cell culture in this image from a paper on gastric cancer appears to have been rotated and reused. This kind of behavior suggests an intention to mislead.

In some cases, like this photograph of bacteria under a microscope, there are so many repeated areas that it’s unclear whether any of the pixels in the image reflect the results of an actual experiment.

All of these images have been taken from papers that were retracted after I reported concerns about image manipulation.

Since childhood, I’ve been “blessed” with what I’m told is a better-than-average ability to spot repeating patterns. It’s a questionable blessing when you’re more focused on the floor tiles than on the person you’re supposed to be talking to. But this ability, combined with a personality some might call obsessive, has helped me hunt for duplications in scientific images by eye.

By day I went to my job in a lab at Stanford University, but I was soon spending every evening and most weekends looking for suspicious images. In 2016, I published an analysis of 20,621 peer-reviewed papers, discovering problematic images in no fewer than one in 25. Half of these appeared to have been manipulated deliberately — rotated, flipped, stretched or otherwise Photoshopped. With a sense of unease about how much bad science might be in journals, I quit my full-time job in 2019 so that I could devote myself to finding and reporting more cases of scientific fraud.

Using my pattern-matching eyes and lots of caffeine, I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I’ve reported 2,500 of these to their journals’ editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public.

While some of this research may be relatively unimportant, not all of it is. Earlier this year, Science magazine asked me to comment on apparently manipulated photos appearing in influential Alzheimer’s disease research conducted at the University of Minnesota. The paper claimed to demonstrate a unique piece of evidence about the underlying cause of Alzheimer’s.

Matthew Schrag, a Vanderbilt University neuroscientist and physician, had already found dozens of suspicious images in papers authored by one of the researchers, Sylvain Lesné. Checking his findings, I agreed and found even more. (A representative from the University of Minnesota said that the university is reviewing questions about his work.)

Other researchers have been unable to reproduce the University of Minnesota’s famous study. Now that images in these papers have shown signs of deliberate manipulation, an entire line of research is in question, potentially meaning millions of dollars of wasted grant money and years of false hope for patients. All may not be entirely lost, though; the pharmaceutical companies Biogen and Eisai recently said that an anti-amyloid drug they are developing for Alzheimer’s disease is showing promise.

In 2018, Harvard Medical School and Brigham and Women’s Hospital in Boston accused a former employee, Dr. Piero Anversa, and his laboratory of having falsified or fabricated data and imagery in 31 scientific papers over nearly two decades. Dr. Anversa’s lab had pioneered the theory that stem cells taken from bone marrow could regenerate the human heart by being injected into it. According to a Reuters analysis, the National Institutes of Health spent at least $588 million to pursue this line of research. Other scientists were never able to replicate his astounding results. Dr. Anversa placed the blame on a colleague and said that he had not been aware that cheating was occurring in his lab.

Just last month, the Nobel Prize-winning geneticist Gregg Semenza had to retract four of his papers from the prestigious Proceedings of the National Academy of Sciences after they were found to contain images that appear to have been manipulated or duplicated. The discovery was made by Clare Francis, a pseudonymous “science detective” like me.

[Photo: Elisabeth Bik in her home office. Amy Osborne/Agence France-Presse, via Getty Images]

Most of my fellow detectives remain anonymous, operating under pseudonyms such as Smut Clyde or Cheshire. Criticizing other scientists’ work is often not well received, and concerns about negative career consequences can prevent scientists from speaking out. Image problems I have reported under my full name have resulted in hateful messages, angry videos on social media sites and two lawsuit threats.

The Times attempted to contact the lead scientists of the retracted papers with images reprinted in this essay. Only one responded; the others did not write back or declined to comment. The author who responded, Thomas J. Webster, said publishing the images had been an honest mistake.

Of course, the images themselves don’t directly reveal how they came into being or which authors were involved in making them. Although some duplicated images appear to be the result of intentional editing, others may have resulted from sloppy lab work, accidental mislabeling or miscommunication between colleagues.

Several things could lead researchers to cheat. For a start, most scientists feel pressure to publish. Publications are essential to a scientist’s career and crucial to earning academic tenure. Employers might demand a quota of published articles, pay bonuses or promote staff members upon publication. In general, studies reporting successful outcomes have a higher chance of being published than those failing to confirm a hypothesis. So when a scientist’s research shows a negative result, cheating can be tempting. Or perhaps a scientist received praise and attention for a notable discovery in the past but has since entered a fallow stretch of research; in that case, the temptation is to “adjust” the findings to make them look more compelling. And some labs are run by overly demanding, even bullying, professors, whose young researchers may become desperate to please them in order to secure the letter of recommendation that will let them escape to a new position.

Before scientific papers are published, they undergo peer review, a process in which two or three independent scientists judge an article for scientific rigor and correct analysis. But peer review is unpaid and undervalued, and the system is based on a trusting, non-adversarial relationship. Peer review is not set up to detect fraud.

Often, problems with data — tables, statistical tests, charts and photos — are not caught until after publication, when a much wider audience reads the study. Minor errors can be addressed with a correction. But a paper should be retracted if critics can demonstrate scientific misconduct such as Photoshopping or faked data. After retraction, it will still be available to read or download but will be marked as untrustworthy.

Unfortunately, many scientific journals and academic institutions are slow to respond to evidence of image manipulation — if they take action at all. So far, my work has resulted in 956 corrections and 923 retractions, but a majority of the papers I have reported to the journals remain unaddressed.

Scientific publishers care enormously about their reputations, and research institutions might be embarrassed to admit that misconduct occurred within their walls. When confronted with duplicated images, they often conclude that “errors” occurred. The institutions may fire some junior researchers, but the laboratory leaders usually stay firmly in place.

Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.
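Automation can also work in science’s favor on the detection side, even without sophisticated A.I. As a rough illustration only (this is not my actual workflow, the file names are hypothetical, and it relies on classical image processing rather than machine learning), the short Python sketch below uses the OpenCV library to check whether a cropped region from one figure reappears elsewhere, including as a mirrored or rotated copy, much like the duplications shown earlier.

```python
# A minimal sketch, not my actual workflow: flag whether a cropped patch from
# one figure reappears in another figure, including as a flipped or rotated copy.
# Assumes OpenCV is installed (pip install opencv-python); file names are hypothetical.
import cv2


def find_duplicated_patch(figure_path, patch_path, threshold=0.95):
    """Return (variant name, similarity score, location) for strong matches."""
    figure = cv2.imread(figure_path, cv2.IMREAD_GRAYSCALE)
    patch = cv2.imread(patch_path, cv2.IMREAD_GRAYSCALE)
    if figure is None or patch is None:
        raise FileNotFoundError("Could not read one of the image files")

    # Transformed variants a manipulator might use: mirror images and rotations.
    variants = {
        "original": patch,
        "flipped horizontally": cv2.flip(patch, 1),
        "flipped vertically": cv2.flip(patch, 0),
        "rotated 90 degrees": cv2.rotate(patch, cv2.ROTATE_90_CLOCKWISE),
        "rotated 180 degrees": cv2.rotate(patch, cv2.ROTATE_180),
    }

    matches = []
    for name, variant in variants.items():
        # Normalized cross-correlation: a score near 1.0 means a near-exact copy
        # of this variant appears somewhere in the figure.
        scores = cv2.matchTemplate(figure, variant, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_location = cv2.minMaxLoc(scores)
        if best_score >= threshold:
            matches.append((name, best_score, best_location))
    return matches


if __name__ == "__main__":
    # Hypothetical example: a western blot band from one paper, checked against
    # a figure from a second paper by the same authors.
    for name, score, location in find_duplicated_patch("paper2_figure3.png", "paper1_band.png"):
        print(f"Possible reuse ({name}) at {location}, similarity {score:.2f}")
```

Even then, a high score only flags a candidate for human review; stretched, recolored or more heavily edited copies, like some of those described above, could still slip past a simple check like this.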

Science needs to get serious about research fraud. Journals should be much faster at retracting papers containing Photoshopped images or manipulated data — and should not publish them in the first place. Scientists who find flaws in published results should not be threatened with lawsuits in an attempt to silence criticism.

Here is a list of things I believe must change:

Journals must carry out better quality control. Publishers should hire image analysts and statistical experts to screen accepted papers before publication.

Journals need to act much faster — for example, within six months — when evidence of image manipulation arises.

We need national and international science integrity organizations that can independently investigate suspected cases of fraud and have some ability to punish the guilty.

Legitimate criticism of scientific research should receive legal protection.

Journals should pay the data detectives who find fatal errors or misconduct in published papers, similar to how tech companies pay bounties to computer security experts who find bugs in software.

As it becomes harder to distinguish between fake and real data, science might need to move toward a model based on reproduction, where Ph.D. students earn credit for replicating published studies, while the researchers whose work is reproduced get credit as well.

Despite all these problems, I believe in science. Firmly. We need trustworthy science to help us deal with consequential issues like climate change and pandemics. But science needs to be quicker and better at correcting itself.