The Hypocrisy of Medical Disinformation: A Report from Hungary

Nóra Falyuna and Péter Krekó

Looking at the business behind pseudoscientific disinformation, we see that disseminators of online medical disinformation are hypocritical anti-capitalists. Despite regularly attacking the “pharmaceutical mafia” and Big Pharma, they obtain tremendous profits both directly (through website advertisements and clicks) and indirectly (through promotion of pseudoscientific medical products). Thus, the editors of medical disinformation websites are neither victims of deception nor “useful idiots.” They are instead motivated by financial interests, making disinformation the best marketing tool possible. Fake news sells best.

Many clickbait and disinformation websites that spread pseudoscientific content and medical disinformation generate much of their revenue through advertisements. Previous studies have revealed how profitable the dissemination of disinformation might be. For example, despite promises to filter anti-vaccine content, tech companies generate up to $1 billion a year in advertising and other revenue through their support of the anti-vax industry (CCDH 2020).

According to the Business of Misinformation project focusing on Central and Eastern Europe (Bosnia and Herzegovina, Hungary, Moldova, Romania, Serbia, and Slovakia), ad placement is one of the primary sources of profit for most misinformation websites. This method relies heavily on Google’s advertising system (Szakács 2020). We examined this business model used by medical pseudoscientific and disinformation websites in a study published by a Budapest-based think tank, Political Capital Institute, last year (Falyuna et al. 2022).

Measuring the Profitability of Disinformation

In the past few years, the pandemic has been a huge boon to conspiracy theories, pseudoscience, and the disinformation industry. Emotions can be critical drivers of conspiracy theories (Grzesiak-Feldman 2013). It is far easier to influence people when they are emotionally affected and possibly frightened, and the role of emotions is not limited to conspiracy theories: “increased emotionality is associated with increased belief in fake news” (Martel et al. 2020).

The coronavirus pandemic significantly amplified the online disinformation industry (Grimes 2021; CCDH 2021). The pseudoscientific disinformation ecosystem became more vibrant, organized, and extensive, and the crisis provided new topics for clickbait websites to entertain—or scare—their readers. The deafening information noise and emotional upheaval created a perfect environment for disseminating misleading content during the pandemic that could easily be exploited for business purposes. When people are afraid of losing their lives, they may be willing to purchase products or services that promise to treat or prevent an illness or to accept misleading information that offers an explanation, answer, or reassurance. The pandemic also brought a heightened focus on health, increasing the profitability of both the traditional and the alternative medicine markets.

Our study, as far as we know, is the first to provide a quantitative estimate of the profit that advertisements generate for websites that systematically spread disinformation about the coronavirus, related treatments, and health in general. These topics often tie into others, such as political conspiracy theories. The revenues are significant: based on our Google Ads estimate, the ninety-three Hungarian clickbait websites we evaluated that promote disinformation about COVID-19, advertise alternative medicine, or spread anti-vaccine information generate up to 3.7 billion HUF (approximately $10 million) in annual advertising revenue for their creators.1 In 2022, the profitability of such websites increased significantly as a result of both an intense political campaign and the pandemic, which has increased not only the amount of time users spend online but also the volume of online commerce.

Political campaigns can contribute substantially to the revenues of the disinformation industry. Google’s advertising helped spread disinformation during the 2016 U.S. presidential election, as Rosie Graham’s study of Google AdSense and Facebook describes (Graham 2017). The analysis by Campaign for Accountability reached the same conclusion (Campaign for Accountability 2017). We conducted our research in the first months of 2022; the Hungarian parliamentary elections were held that April, with enormous amounts of money spent in the online space (K-Monitor 2022).

For the most part, the authors and creators of medical disinformation websites remain anonymous; the pages do not include a statement of ownership. It is thus challenging to determine precisely where their profits end up.

To obtain a quantitative estimate of advertising revenue, we used Google Ads. As the basis of our study, we relied on a list of 105 Hungarian websites, ninety-three of which we identified as using the advertising platform Google AdSense, which embeds banner advertisements on websites and thereby generates revenue for the website operator. In Google AdSense’s system, advertisers can target selected websites directly; in these cases, the ads appear only on the chosen websites or pages, or on the YouTube channels associated with them. The media set we studied may also earn revenue from other sources (directly or through advertisements sold on other advertising networks), and our analysis examined only a sample, so the theoretical maximum revenue of the entire media set from the online advertising market must be significantly higher than our estimate.
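As a rough illustration of what a “theoretical maximum advertising revenue” estimate involves, here is a minimal sketch that multiplies estimated ad impressions by an assumed CPM (revenue per thousand impressions). All figures and the `max_annual_revenue` helper are hypothetical; this is not the study’s actual model or data, and Google Ads’ planning figures are far more granular.

```python
# Illustrative sketch only: estimating a theoretical maximum annual ad
# revenue for a set of websites from hypothetical monthly impression
# counts and assumed average CPM rates (in HUF). All numbers are made up.

def max_annual_revenue(sites):
    """Sum the theoretical maximum yearly ad revenue across sites.

    Each site dict holds estimated monthly ad impressions and an
    assumed average CPM (revenue per 1,000 impressions) in HUF.
    """
    total = 0.0
    for site in sites:
        monthly = site["monthly_impressions"] / 1000 * site["cpm_huf"]
        total += monthly * 12  # annualize
    return total

# A hypothetical three-site sample.
sample = [
    {"monthly_impressions": 2_000_000, "cpm_huf": 400},
    {"monthly_impressions": 500_000, "cpm_huf": 350},
    {"monthly_impressions": 1_200_000, "cpm_huf": 500},
]

print(f"{max_annual_revenue(sample):,.0f} HUF/year")  # → 18,900,000 HUF/year
```

Because the calculation assumes every available impression is sold at the assumed CPM, it yields a ceiling, which is why the study reports a theoretical maximum rather than realized revenue.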

We found that major corporations have unwittingly supported the profiteering of the disinformation websites we studied by advertising on these sites, with several bizarre examples of this phenomenon. One is an advertisement for Volvo luxury automobiles appearing on a website that blends medical disinformation with political conspiracy theories, directly beneath articles that attempt to attract clicks by discussing Hitler at age 128(!), the supposed world dominance of the Rothschilds, and a supposed cure for aging.

An advertisement for Volvo luxury automobiles appears on a website that blends medical disinformation with political conspiracy theories.


Another example is the website of Doctor Gődény, the leading Hungarian promoter of COVID-19 and vaccine disinformation, which features an excessive number of advertisements, up to nine per article. On this site, one can find advertisements for PCR coronavirus tests from a Hungarian laboratory, Triton Labs. Ironically, articles on the same website repeatedly claim that COVID-19 is fake and that PCR tests are tools used for malicious purposes, including the insertion of a microchip into the brain.

The website of Doctor Gődény, the leading Hungarian promoter of COVID-19 and vaccine disinformation, features an excessive number of advertisements, up to nine per article.


The Six Commandments of Medical Pseudoscience

By analyzing the content of pseudoscientific pages and the claims of anti-vaxxer celebrities, we identified several false principles they use to promote alternative or pseudoscientific medicine, including in connection with the coronavirus and vaccines. These principles are treated as fundamental truths, axioms requiring no further justification. We found the following six “Commandments” in this universe:

1) Anything natural is good; anything artificial is bad.

2) Every illness has mental origins, so it is primarily fear that needs to be cured, not the illness itself.

3) Traditional medicine is always more effective than modern pharmaceutical products.

4) Certain miracle medicines (e.g., vitamin C, baking soda, ginger) can cure any illness, but the profit-driven pharmaceutical industry wants to cover this up.

5) Anything that causes pain or suffering, or that is unpleasant or inconvenient (e.g., vaccines, masks), cannot play a positive role in healing.

6) Physical exercise can protect the body against any illness by strengthening the immune system.

The medical disinformation websites we examined in our report play a crucial role in the whole disinformation industry and beyond. When the Russian-Ukrainian war broke out, websites and celebrities throughout the Western world that had promoted anti-vaccination, pseudoscientific, or anti-scientific disinformation and conspiracy theories became major promoters of conspiracy theories and disinformation about the war. This could be observed in both the United States and Europe, with conspiracy celebrities such as the French rapper Booba switching from coronavirus denialism to the promotion of war-related conspiracy theories, such as the alleged existence of U.S. biolabs in Ukraine. American websites and podcasts that had promoted coronavirus denialism and anti-vax disinformation immediately jumped on the biolab theory as well. According to our attitude research, people who believe coronavirus conspiracy theories (e.g., that the virus was deliberately created by sinister forces) also tend to accept war-related conspiracy theories (e.g., that a secret genocide was being waged against Russians in the Donbas). COVID-19, therefore, served not only as a booster for medical pseudoscience but also as a booster for the whole disinformation industry.


The Background Powers Strike Back: Possible Solutions

As our examples above and study in Hungary indicate, corporations become—unwittingly—the main financial supporters of the disinformation ecosystem, thus contributing to the promotion of often life-threatening pseudoscientific tips and tricks. Thus, these companies—and the companies organizing their advertising campaigns—should play an important role in cutting the financial resources of this ecosystem, primarily by taking conscious steps to ensure they do not place ads as part of their online advertising campaigns on medical disinformation websites.

In the fight against disinformation, several actors share responsibility for the solution. Users and consumers should be critical of online information, but governments also need to develop and enforce legal solutions. Companies unwittingly funding the disinformation and pseudoscience industry can stop doing so by being more careful and vigilant, and technology firms could do more to automatically detect disinformation and to operate their advertising systems more transparently, thereby helping demonetize disinformation websites. Efficient demonetization stands on four legs: 1) transparency of websites and ad systems; 2) conscious and ethical advertising practices by companies; 3) public pressure on the companies financing disinformation and pseudoscience; and 4) pressure (including legal action) on tech companies to take further steps toward demonetization.

Transparency is one of the key principles. According to Braun and Eklund (2019, 25), “transparency—total clarity for advertisers concerning the placement of their ads and the specific intermediaries involved—would be the holy grail of accountability in the programmatic advertising industry.” Advertisers need to know whether they are advertising on a fake news website, but “because of the number of intermediaries handling each transaction—there’s nowhere near the transparency necessary in the programmatic ecosystem to make those sorts of guarantees” (Braun and Eklund 2019, 26).

Several international initiatives, such as konspiratori.sk in Slovakia, have succeeded in demonetization efforts through greater transparency. The Slovakian case is exemplary: a group of independent media and disinformation experts created a list of unreliable websites spreading dangerous disinformation (e.g., anti-vax content) and approached big companies to persuade them to avoid advertising there—with some success. Konspiratori.sk also raises the important point that these websites can be dangerous for the marketers themselves, risking damage to their reputations.

Public pressure is needed to encourage companies to take these reputational risks seriously. Just as products produced under unethical conditions are often boycotted by society, or as companies pay attention to environmental protection or racist, homophobic, sexist, or other socially reprehensible phenomena as part of their social responsibility mission, they should also include in their corporate social responsibility activities the avoidance of platforms that disseminate harmful disinformation, fake news, or pseudoscientific products.

The Global Disinformation Index (GDI, a nonprofit organization that focuses “on a neutral, independent, transparent index of a website’s risk of disinforming readers”) has, again in line with the principle of transparency, taken some steps in this direction, drawing attention to the brands, advertising firms, and tech companies (ad tech, e-commerce, e-payment, etc.) that provide financial backing for disinformation websites so that efforts can be made to defund them. Its main goal is to “demonetize disinformation actors, stories, and sites by ensuring policies targeted at online advertising, e-payment, e-commerce, and other monetization platforms.”

Alliances of responsible media platforms and companies can also raise awareness that advertising on shady conspiracy websites can hurt companies’ self-interest through reputational risks. For example, the Global Alliance for Responsible Media (GARM), a cross-industry initiative established by the World Federation of Advertisers, aims to address harmful content on the internet and digital media platforms and to combat its monetization through advertising. Its members are advertisers, media agencies, media platforms, and industry associations, and they also cooperate with civil society and online-safety NGOs focused on safe uses of media and technology. GARM, for example, publishes reports assessing the safety of platforms for advertisers.

One solution often suggested for advertising companies is to blacklist known disinformation websites (we compiled such a blacklist as part of our study). However, this tool should be used with caution: disinformation websites routinely evade legislation and blacklists by reappearing in a new form, with a new name and a new URL (Szakács 2020). The blacklist method should not be discarded, but it requires constant monitoring and updating of the lists.
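The caveat above can be made concrete with a minimal sketch of a domain blacklist check, assuming a hand-maintained list of flagged domains. The domains and the `is_blacklisted` helper are hypothetical: normalizing the hostname catches trivial variants (scheme, “www.”, subdomains), but a site that reappears under an entirely new domain still slips through until a human monitor updates the list.

```python
# Minimal sketch of a blacklist check for ad placement. The listed
# domains are made up for illustration; no real site is referenced.
from urllib.parse import urlparse

BLACKLIST = {"example-disinfo.hu", "fakehealthnews.example"}  # hypothetical entries

def is_blacklisted(url: str) -> bool:
    """Return True if the URL's host matches, or is a subdomain of, a listed domain."""
    host = (urlparse(url).hostname or "").lower().lstrip(".")
    if host.startswith("www."):
        host = host[4:]
    return any(host == d or host.endswith("." + d) for d in BLACKLIST)

print(is_blacklisted("https://www.example-disinfo.hu/article"))  # True
print(is_blacklisted("https://news.fakehealthnews.example/"))    # True
print(is_blacklisted("https://renamed-clone.example/"))          # False: a relaunched clone evades the list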

Furthermore, because social media platforms also help disinformation websites to make a profit, for example by maintaining pages on the platform whose URL has changed, social media platforms such as Facebook also have to make more efforts—in line, for example, with the Code of Practice on Disinformation, an initiative of the European Union that social media companies have also subscribed to—to monitor disinformation websites. Social media and tech platforms should also place greater emphasis on content moderation and screening pages to see where the clicks and advertisement money is going. It is fairly common for several openly anti-vaccination pages to be alive and well on Hungarian social media websites (especially Facebook) even though they run counter to Facebook’s own social media guidelines. These guidelines would make it possible to remove anti-vaccination pages.

Of course, it would be even better if the organizations that are selling the ads—such as Google Ads—would flag these websites for the marketers, create a category for them that they can choose or not while placing the ads, or even remove these websites from their ad portfolio. Google’s Trusted Flagger program, which enables particular users to flag dangerous content on websites, could help identify these pages, and if the program did not work on a voluntary basis, Google could provide financial incentives for organizations and individuals that participate in the Trusted Flagger system.

The analysis of Campaign for Accountability (Campaign for Accountability 2017) also raises the issue that by not listing disinformation websites, those selling advertising space can avoid responsibility and put all the blame and responsibility on advertisers:

Under Google’s system, it is incumbent upon advertisers to identify and blacklist specific domains that they find objectionable. But Google doesn’t make this easy: its ad platforms don’t allow advertisers to block fake news sites as a category. … Even if advertisers could identify specific extreme websites, Google offers these publishers a way to circumvent advertiser exclusions by making their sites anonymous. (Campaign for Accountability 2017, 7)

There have been some good steps in the direction of regulation in the European Union: demonetization as a main principle in the self-regulatory Code of Practice developed by representatives of online platforms, leading social networks, advertisers, and the advertising industry. Compliance with the Code is monitored and evaluated by the European Commission. Of course, legal pressure is also needed to enforce the online platforms to cut these channels of revenue. This is the goal of the Digital Services Act (European Commission 2022) that has been recently approved by the Council of Ministers in the EU. The European Union has vast experience with putting legal and political pressure on tech companies while cooperating with them. Time will tell how efficient these efforts are.

Note

1. Our method (finalized in February 2022) enabled us to estimate the theoretical maximum advertising revenue of the media set. In reality, this would translate into actual revenues only in the case of a fully saturated advertising market. How much of the theoretical maximum earnings determined are realized remains up to the advertisers.

References

Braun, J.A., and J.L. Eklund. 2019. Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism 7(1): 1–21. Online at https://doi.org/10.1080/21670811.2018.1556314.

Campaign for Accountability. 2017. How Google makes millions off of fake news. Online at https://campaignforaccountability.org/wp-content/uploads/2017/10/Google-Fake-News-10-18-17.pdf.

CCDH. 2020. The anti-vaxx industry. How big tech powers and profits from vaccine misinformation. The Center for Countering Digital Hate. Online at https://counterhate.com/wp-content/uploads/2022/05/200112-The-Anti-Vaxx-Industry.pdf.

———. 2021. The pandemic profiteers. The business of anti-vaxx. Online at https://counterhate.com/research/pandemic-profiteers/.

European Commission. 2022. The Digital Services Act package. Online at https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.

Falyuna, N., P. Krekó, and R. Berkes. 2022. Hypocritical anticapitalists. For-profit pseudo-scientific disinformation on the internet. Budapest, Hungary: Political Capital Institute. Online at https://www.politicalcapital.hu/pc-admin/source/documents/PC_Tanulmany_AlszentAntikapitalistak_2022_02_EN.pdf.

Graham, R. 2017. Google, and advertising: Digital capitalism in the context of post-Fordism, the reification of language, and the rise of fake news. Palgrave Communications 3: 45. Online at https://doi.org/10.1057/s41599-017-0021-4.

Grimes, D.R. 2021. Medical disinformation and the unviable nature of COVID-19 conspiracy theories. PLoS One 16(3): e0245900.

Grzesiak-Feldman, M. 2013. The effect of high-anxiety situations on conspiracy thinking. Current Psychology 32: 100–118.

K-Monitor. 2022. Campaign spending in Hungary. Online at https://k.blog.hu/2022/03/31/campaign_spending_hungary_billboards.

Martel, C., G. Pennycook, and D.G. Rand. 2020. Reliance on emotion promotes belief in fake news. Cognitive Research: Principles and Implications 5(47). Online at https://doi.org/10.1186/s41235-020-00252-3.

Szakács, J. 2020. The business of misinformation. CMDS. Online at https://cmds.ceu.edu/business-misinformation.

Nóra Falyuna and Péter Krekó

Nóra Falyuna, PhD, is a linguist, assistant professor at the Department of Social Communication at University of Public Service (UPS), Budapest, Hungary, and head of Rector’s Cabinet at UPS. In addition to her academic work, she also works as a science communication expert and adviser. Her main research topics are communication aspects of pseudoscience and anti-science as well as science communication. She can be reached at falyuna.nora@uni-nke.hu.   Péter Krekó, PhD, is a social psychologist, associate professor at the Department of Social Psychology at Eötvös Loránd University of Sciences in Budapest, Hungary, and director of Political Capital Institute—a think-tank that is leading the Hungarian Digital Media Observatory, a consortium supported by the European Commission to counter disinformation. He is also a PopBack fellow at the University of Cambridge. His main research interest is the psychological background of false beliefs: fake news, pseudoscience, and conspiracy theories. He can be reached at kreko.peter@ppk.elte.hu or on Twitter @peterkreko.