Homeopathy Research Hits New Low

Norbert Aust and Viktor Weisshäupl

“Homeopathy in Cancer Patients: Almost Too Good to Be True.” That was the headline of an article in the October 23, 2022, issue of the Austrian weekly news magazine Profil reporting on an investigation by the Austrian Agency for Scientific Integrity (OeAWI) (Schönberger 2022). The subject of the investigation was a study on the use of homeopathic adjunct treatment in lung cancer patients published in November 2020 in the prestigious scientific journal The Oncologist (Frass et al. 2020). As the conclusion of the investigation summarized, “The Committee concludes that there are numerous breaches of scientific integrity in the study, as reported in the publication. Several of the results can only be explained by data manipulation or falsification.”

This brought to a climax an affair initiated by the Information Network Homeopathy from Germany and the Initiative for Scientific Medicine from Austria. The only remaining question is whether the study will be retracted by the journal; as this magazine went to press, it had not been.

This incident marks a new low in homeopathy research. Until now, positive results in clinical studies of homeopathy could typically be attributed to weaknesses in the study: a flawed design that produced a high risk of bias, making it likely that the results were not valid, or errors in data collection, analysis, interpretation, or other inadequacies. In the case of the Oncologist study, however, it is impossible to assume random and unintentional errors. The findings of the OeAWI investigation suggest that the authors quite deliberately took actions to create positive results.

In addition, this incident draws attention to a weakness in current medical research publications. Although published in a renowned oncology journal, the errors in this study were not identified by medical experts in oncology as part of the journal’s peer review process but instead by a group of skeptics not engaged in medical research who examined the data after publication. Without their scrutiny, finally leading to official disclosure of scientific misconduct, homeopaths could still brag about first-class scientific evidence for the effectiveness of homeopathy over placebo.

Research on Homeopathy

On a purely logical basis, research on homeopathy does not make sense—especially in the form of a Phase III study to test its effects in patients—because homeopathic medicines are so diluted that it is unlikely any active ingredient remains.

Usually, studies on homeopathy get published in rather dubious journals focusing on complementary and alternative medicine, which exist in an astonishing number. But on occasion, authors succeed in publishing supposedly positive studies on homeopathy in respectable scientific journals. The professional public may ignore them with a shrug of the shoulders, but for supporters of homeopathy these publishing successes serve as supposedly first-class evidence, which in turn is presented to the public to promote homeopathy. Besides the study under consideration here, other examples include Katharina Gaertner et al. (2022) in Pediatric Research on homeopathy and ADHD or Michael Frass et al. (2005) in Chest on homeopathy for patients with severe lung problems in the ICU.

We appeal to the various parties involved in publishing scientific papers to be very skeptical about trials on highly implausible therapies. Specifically,

  • Scientific journals should publish such studies only after a very critical review establishes that they contain solid scientific research.
  • Peer reviewers should check positive results on homeopathy and other very implausible therapeutic methods in great detail for possible errors and inconsistencies and provide a clear report to the editors.
  • The professional public, when finding articles about positive results on homeopathy, should critically assess such publications and express their concerns about the integrity of the study in letters to the editor.
  • Respectable journals should not shy away from admitting errors in the peer-review process and be willing to retract dubious studies.
  • Of course, freedom of research and science is valuable and necessary. But to maintain their international reputations, medical universities should take actions to prevent dubious studies under their names and should only approve them after some critical assessment of the project to establish that the assumptions to be tested do not contradict other fields of science, such as chemistry, physics, physiology, and the like.

The Questioned Study

The lead author of the study is Professor Michael Frass. He states his affiliation as the Medical University of Vienna, although he had already retired at the time of publication (Frass et al. 2020). The study was conducted as a double-blinded, randomized, placebo-controlled comparative trial performed at several sites in Austria. Thus, the gold standard for study design validity was more than met. The procedures deployed for blinding, randomization, and so on without doubt merit a “low risk of bias” rating.

The results of this apparently high-quality study would really be fantastic for homeopathy—if they were valid:

  • On average, cancer patients in the homeopathy group (n = 51) achieved a 70 percent longer survival time than those on placebo (n = 47).
  • Quality of life as well as subjective well-being under homeopathy improved considerably, while the patients on placebo got progressively worse.

There was a third group of patients who did not want to participate in the study and did not receive any additional treatment (n = 52). They fared even worse. This group is not considered here because of the unclear basis for comparison.

If our working group had not taken action to review the paper and inform the Medical University Vienna of our findings, this study would have passed as first-class evidence for the efficacy of homeopathy: a high-quality study, published in a prestigious peer-reviewed journal yielding excellent results that indicate a substantial clinical effect of homeopathic remedies on a lethal pathological process. This had never happened before.

The Protocol

Our analysis was not limited to the published text of the article. Because this was a pre-registered study, we were able to include data from the registry ClinicalTrials.gov (https://clinicaltrials.gov/ct2/show/NCT01509612), which shows the “history of changes” to the study over time (https://clinicaltrials.gov/ct2/history/NCT01509612), as well as the data specified when the study was originally registered (ClinicalTrials 2012). In addition, we included the study protocol that was uploaded to the study registry, including both the original version (Frass 2011) and the updated version (Frass 2014).

The first version of the protocol, where the parameters were set and the procedures described in detail, is dated January 10, 2011 (Frass 2011). This is one year before the first registration with ClinicalTrials.gov in January 2012 and the beginning of patient enrollment in February 2012. Thus, everything appears to be in good order. The odd thing: the protocol more or less matches the details given in the publication—but the data given in the first registration, one year after the date of the protocol, differ significantly (see Table 1).

There is no way to understand what made the authors provide completely different data with their first registration only one year after the study parameters were established, or why the original protocol is a much better match to the publication data. The only plausible explanation is that the protocol was not created in January 2011 but at a much later point in time. At least parts of it were compiled together with the manuscript for publication. This is corroborated by the following observations:

  • The protocol is dated January 2011 but was uploaded to the registry only on September 18, 2019 (ClinicalTrials 2020), about two months after data collection was completed. By that time, the data were most probably no longer blinded—meaning the researchers could tell who got the homeopathic medicine and who got placebo—a potential source of bias.
  • In the protocol, the software used for data analysis is specified as “IBM SPSS statistics 25.0” (Frass 2011), but according to the SPSS listing on Wikipedia, that version was not released until July 2017! In January 2011, the current version number was 19; the researchers could not have used a software version that had yet to be developed.
  • In the text of the protocol, a superscript number 25 appears in the description of the SF-36 questionnaire, resembling a literature reference (Frass 2011). But that notation makes no sense there, because the protocol has no bibliography. However, this superscript number corresponds to reference number 25 in the publication, which refers to a paper describing the SF-36 questionnaire (Frass et al. 2020). References 1 through 24 contain ten papers that had not yet been published at the time the protocol was ostensibly written. The only plausible explanation for this superscript number is that it originates from a somewhat failed copy-and-paste transfer from the manuscript to the protocol.

In May 2021, we published our results and posted the English version on Edzard Ernst’s blog (Aust and Weisshäupl 2021). We contacted the authors and asked for their comments on our crucial findings (see below). We received no reply; however, on June 14, 2021, a new protocol version was uploaded to ClinicalTrials.gov (ClinicalTrials 2021). It bears the date of February 6, 2014—two years into the study, at which point the first subjects might have just reached the end of their follow-up time (Frass 2014). The limited data available at that time were most certainly still blinded. In a rigorous trial, this would be the last chance to publish a change to the protocol without raising concerns about possible bias caused by knowledge of the early patient results.

In this second version, all data including the number of exclusion criteria and the follow-up time had been adjusted to match the publication. On the other hand, the previous version of the protocol is not mentioned at all, which would be essential for transparency on how the trial progressed.

This second version did not yet exist prior to September 18, 2019; otherwise, it would have been uploaded to the registry then. This removes all possible doubt: backdating the protocols cannot have been a casual error. Only one interpretation is reasonable: the reader is meant to believe that the procedures were established at an early stage of the trial and then followed consistently until the very end.

Post Hoc Adjustment of Exclusion Criteria

If the version of the protocol that was actually used was created at about the same time as the manuscript of the publication, then all the procedures reported there must be regarded as post hoc, i.e., as a subsequent modification of the rules after the outcome data were known to the authors.

Under these circumstances, it would be possible to adjust the study parameters in such a way that the results are shifted in a certain direction. This procedure is descriptively called the Texas Sharpshooter Fallacy: a gunslinger shoots at a barn door and paints a bullseye target around the bullet hole after the fact.

In this case, two major parameters were modified post hoc: a reduced follow-up time (see below) and an increase in the number of exclusion criteria—the justifications given for eliminating a participant’s data from the study. Both are mentioned for the first time in the first version of the protocol. As of August 15, 2018, when the registry was last updated before the protocol was uploaded, the parameters still indicated that the patients would be followed up for 104 weeks, and only one single exclusion criterion was declared.

At the onset of the trial, only pregnant women were to be excluded from participation (ClinicalTrials 2012). In the publication, no fewer than twenty-two different exclusion criteria were defined (Frass et al. 2020). Furthermore, the criteria seem to be arbitrary with no consistent underlying basis by which they could have been established. Kidney disease, liver disease, blood disease, coronary heart disease, and many other conditions that become more prevalent with age led to exclusion without any clear rationale provided. In contrast, other maladies common in this age group, such as diabetes, hypertension, and gastric or intestinal diseases, did not lead to exclusion.

In addition to the lack of rationale, the authors did not disclose how many patients were excluded this way (Frass et al. 2020): Neither the CONSORT diagram—which shows the flow of patients and their assignment to experimental groups—nor the text of the publication provides this information. Eight patients were excluded after randomization due to “late detection of mutations,” but no other exclusions are mentioned. It is implausible that this constellation of exclusion criteria was introduced while the study was being conducted yet no patients were affected.

We have demonstrated what can be achieved with the post hoc exclusion of test subjects in a numeric model (see Figure 1; Aust and Weisshäupl 2021).

Figure 1. Numerical model of data manipulation (see text for explanation).

Two sets of random numbers were generated and adjusted to represent two equivalent survival functions (thin lines in Figure 1). These curves show how many patients were still alive at a given time. The median lifetime for both curves is twenty-eight weeks, by which time exactly 50 percent of the patients in each group have died.

Each set consists of eighty elements. If a random fifteen of the first twenty deaths are dropped from the blue curve representing homeopathy, the thick blue line emerges. This sample now has sixty-five elements instead of eighty, but these still count as 100 percent of the elements present. Thus, the origin is maintained, but the curve shifts upward due to the now-missing early deaths. The thick red curve, supposedly representing placebo, emerges when a random fifteen of the twenty patients who survived the longest are dropped. Again, this curve maintains its origin at 100 percent, but it deflects downward due to the lack of long-term survivors.
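The effect of this selective exclusion is easy to reproduce. The following Python sketch is our own minimal illustration of the model just described, not the code behind Figure 1; in particular, the exponential survival distribution and the random seed are assumptions made here for the sake of a self-contained example.

```python
import random
import statistics
from math import log

random.seed(1)

MEDIAN = 28.0          # target median survival, in weeks
N = 80                 # patients per group
RATE = log(2) / MEDIAN  # exponential rate giving that median


def sample_survival(n: int) -> list[float]:
    """Draw n survival times (weeks), sorted ascending."""
    return sorted(random.expovariate(RATE) for _ in range(n))


# Two equivalent samples: no true difference between the groups.
homeopathy = sample_survival(N)
placebo = sample_survival(N)

# "Homeopathy" group: drop a random 15 of the first 20 deaths
# (the shortest survival times) -> the curve shifts upward.
drop_early = set(random.sample(range(20), 15))
homeopathy_kept = [t for i, t in enumerate(homeopathy) if i not in drop_early]

# "Placebo" group: drop a random 15 of the 20 longest survivors
# -> the curve deflects downward.
drop_late = set(random.sample(range(N - 20, N), 15))
placebo_kept = [t for i, t in enumerate(placebo) if i not in drop_late]

print(f"medians before exclusion: homeopathy {statistics.median(homeopathy):.0f}, "
      f"placebo {statistics.median(placebo):.0f} weeks")
print(f"medians after exclusion:  homeopathy {statistics.median(homeopathy_kept):.0f}, "
      f"placebo {statistics.median(placebo_kept):.0f} weeks")
```

Whatever the seed, the manipulated “homeopathy” median rises and the manipulated “placebo” median falls, even though both groups were drawn from the same distribution.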

Comparing both thick curves, certain features may be observed:

  • The difference in survival is set during the first twelve weeks only. With the blue curve, eight patients die during this time, marked by the vertical green line in Figure 1. With the red curve, twenty-three patients die. For the next sixty-eight weeks, about the same number of patients die in both groups: thirty-six and thirty-seven for blue and red, respectively. Eventually, the curves start to converge again, because at some future point in time all patients will have died.
  • The median survival time of the red curve decreases from twenty-eight to nineteen weeks.
  • The median survival time of the blue curve increases from twenty-eight to thirty-nine weeks.

Precisely these characteristics can be found in the survival functions presented in Figure 1 of the Frass et al. (2020) publication in The Oncologist:

  • The advantage of the homeopathy group arises in the first nine weeks only. Two of fifty-one (4 percent) homeopathy patients die, but eleven of forty-seven (23 percent) patients of the placebo group die. From then onward to the end of the follow-up, virtually the same number of patients die in both groups: twenty-six under homeopathy (51 percent) and twenty-five under placebo (53 percent).
  • The median survival time in the placebo group is only 257 days. This is significantly shorter than the 303 days expected under conventional care alone, as the authors cite from prior investigations.
  • The median survival time in the homeopathy group is 435 days, significantly higher than the expected value of 303 days.

Conclusion: The improved survival under homeopathy may have been produced by data manipulation—specifically, by dropping unfavorable data from two practically equivalent samples for no discernible, legitimate reason. This would be patients dying early under homeopathy and long survivors in the placebo group. For example, if a patient in the homeopathy group with an autoimmune disease (such as rheumatism) died early, but there were no such patient in the placebo group, or if such a patient survived for a long time, then the exclusion criterion “autoimmune disease” may be introduced to exclude an unfavorable data point. Enough selective inclusions and omissions could produce the results presented in the published study.

If, on the other hand, homeopathy were an effective treatment, then the effect should be obvious throughout the whole follow-up; you would expect the curves to diverge more or less continuously throughout the whole trial until they inevitably start to converge again. However, here this breaking point seems to be outside the follow-up observation.

This pattern of results suggests that the post hoc introduction of a large number of exclusion criteria was part of a deliberate effort to manipulate the data in the direction of a positive outcome. Backdating the protocol to a time before the study was started and not mentioning this change in the publication further supports this hypothesis. And finally, the results show telltale characteristics that can originate from precisely such manipulations. It is very hard to believe that this kind of manipulation did not occur.

Post Hoc Reduction of Observation Time

The second major change to the original research plan concerns the follow-up time for measurements of quality of life (QoL) and subjective well-being, both of which were primary outcomes of the study—that is, the main measured effects. With the first version of the protocol, this follow-up time was reduced from 104 to eighteen weeks. QoL and well-being are very important for patients in the late stages of cancer, so it is reasonable to define these measures as primary outcomes with survival as a secondary outcome. But it is hard to understand why QoL and well-being were considered in the early phase of treatment only rather than over the full follow-up period used for survival.

And at the beginning of the trial, the authors seemed to share this opinion. The overall duration of the study was planned to be seven years (ClinicalTrials 2012). With a planned recruiting period of five years, this yields a minimal follow-up of two years. This follow-up was conducted for survival, and the registry indicated that it applied to QoL and well-being as well. Up to August 2018, this was the plan (ClinicalTrials 2018):

“Primary Outcome Measures

  1. Life Quality

            [Time Frame: 7 Years]

            Life Quality is evaluated using the results of the EORTC-QLQ-C30 questionnaire.”

This unambiguously indicates a data collection period of two years or more. Even the publication points in the same direction (Frass et al. 2020): “Patients were followed up every nine weeks until death. … Patients were asked to complete again the questionnaires they answered on study entry.”

However, for the first time with the uploaded protocol, the observation period was reduced to eighteen weeks (Frass 2011). This means that out of 104 weeks, only the first eighteen are presented, resulting in over 80 percent of the data being ignored. Why would you observe and report survival as the secondary outcome for two years and the primary outcome for eighteen weeks only? The authors did not provide any research rationale for this change in the plan.

Such selective outcome reporting is a severe flaw in any study. If the data to be reported are selected after unblinding—and the selection is in stark contrast to the original stated methods, as occurred here—the pattern of events is highly suggestive of data manipulation. The results reflect only a very early stage of the trial—likely because this cherry-picked subset of data represented the most favorable outcomes for homeopathy observed within the whole trial. Valuable information—how the patients performed in the long term—has been withheld, and important criteria for evaluating the intervention were kept secret. An analogy would be researchers agreeing on a protocol to test 100 falsifiable psychic predictions and, when completed, finding the overall success rate no better than chance but then only reporting an accurate streak of twenty instead of the whole hundred. Study registries such as ClinicalTrials.gov are set up to help detect (and deter) precisely this sort of bias.

The observation probably ended at a point when the data looked particularly good for homeopathy, further enhanced by the selective exclusion discussed above. And the data did look great—to an almost incredible degree (Figure 2).

Figure 2. Quality of life and subjective well-being results relative to baseline (BL) at follow-up 1 (1. FU) and follow-up 2 (2. FU). Positive changes are indicated by upward movement and negative by downward movement.

Figure 2 shows three diagrams we compiled from the numerical data given in the appendix of the publication, showing the results of two questionnaires measuring quality of life (SF-36) and subjective well-being on two scales (QLQ-C30; Frass et al. 2020). All three diagrams deal with symptoms such as fatigue, nausea, vomiting, and pain, or with how well the patient is coping in daily life. The data are arranged so that favorable values are oriented upward and unfavorable ones downward. For ease of comparison, the curves were shifted so that for each item, the mean value of all participants at the beginning (BL = Baseline) is set to zero.

Apparently, homeopathy patients fared continuously better in all respects—surprisingly, even with respect to financial problems—while placebo patients were able to maintain their level in rare cases only; most of the time they fared poorly, and their indices and scales worsened over time.

Why did the authors omit the further development over the subsequent follow-up appointments—up to nine more—that a patient experienced if he or she survived to the end of the study? It is safe to assume that the endpoint, arbitrarily set with knowledge of the outcome data, was chosen in such a way that the results for homeopathy patients represent the most advantageous time point. This cannot be conclusively proven, but selective outcome reporting is considered a serious study flaw precisely because such manipulation cannot be ruled out. What if the homeopathy patients had fared not better in the further course but considerably worse?

Further Inadequacies

When we reviewed and analyzed the publication, we found many more flaws and inadequacies that somehow escaped the peer reviewers’ attention. Here we present a few examples:

  • Diagrams given in Figures 2 and 3 of Frass et al. (2020) do not match. Both represent the flow of patients. In Figure 2, some eligible patients refused to participate in the study but gave their consent to be observed for survival; some of these received homeopathy. This group is not present in the CONSORT flowchart in Figure 3.
  • One criterion to participate was a histologically confirmed cancer stage no more than eight weeks prior to inclusion. However, time point zero is not defined for these patients: Is it the date of diagnosis or instead the first homeopathic consultation? This discrepancy could be a gap of days to months, and with a follow-up time of only eighteen weeks in the primary outcome, this uncertainty might be important.
  • The number of participants was determined to yield sufficient statistical power for the secondary outcome—but not the primary one, which is quite unusual. Still, the authors failed to recruit even that number of patients, and the study is vastly underpowered (ninety-eight patients in two groups instead of 300).
  • The devastating result of the first group (which received conventional treatment only and fared even worse than the placebo group, with survival times well below what conventional treatment usually yields) is never discussed.

OeAWI Investigation

Unfortunately, the final detailed report of the Austrian Agency for Scientific Integrity is not available to the public. The only public source is the article in the Austrian weekly news magazine Profil from October 23, 2022 (Schönberger 2022), which quotes the findings extensively in German but also includes several quotations from the original English report.

The OeAWI commission employed methods different from ours and had access to the original data and the protocols presented to the ethics committee. Nevertheless, overall, they came to very similar conclusions:

  • OeAWI criticizes the huge number of amendments to trial parameters made while the study was underway that were not mentioned in the publication. They consider this lack of transparency “unacceptable” and suggestive of data manipulation.
  • The definition of the exclusion criteria in the protocol leads OeAWI to the conclusion that many patients were excluded post hoc, which suggests data manipulation.
  • The analysis of survival showed that homeopathy seemed effective only during certain time periods, with long intervals of ineffectiveness in between. This is considered implausible but consistent with the assumption of data manipulation.
  • In the original data available to OeAWI, the commission found several patients who had been excluded after the study was completed but whose removal was not reported in the publication. This is also suggestive of data manipulation.
  • Among completed questionnaires, several have the top scores in all sixty-six items of SF-36 and QLQ-C30 questionnaires. In addition, the homeopathy group showed an average quality of life score that exceeded that of the general Austrian population (and was in fact even better than the upper quartile of the population). This is very implausible for patients in the final stages of lung cancer.
  • In a statistical study, patients’ quality of life data were compared with their survival times. There was a clear correlation in the placebo group: patients who felt worse also had a lower life expectancy. There was no such correlation in the homeopathy group: some of the patients who reported feeling excellent died a few months later. This, also, is possible but highly unlikely.

The conclusion of the OeAWI reads, “The Committee concludes that there are numerous breaches in scientific integrity in the study, as reported in the publication. Several of the results can only be explained by data manipulation or falsification. The publication is not a fair representation of the study.”

The OeAWI informed us of their findings in a personal communication and indicated that they had relayed them to the Medical University of Vienna, Austria. The OeAWI also informed the editor-in-chief of The Oncologist of their findings and demanded a retraction of the publication. The Oncologist has so far complied to the extent that it has issued a (much milder) “Expression of Concern” (2022) tagged to the publication. The editors indicate that there are allegations of data manipulation and falsification associated with the publication, originating from a credible source. The results should be regarded as invalid until their own investigation is completed.

 

Authors’ Reaction

In the Profil article, the lead author, Professor Michael Frass, had the opportunity to comment on the allegations (Schönberger 2022). He firmly rejected any accusations of data manipulation or falsification. Indeed, Profil reports that he still is absolutely convinced that he has presented a clean and rigorous study. His summarizing statement: “The accusations are all known to us and absolutely incomprehensible. We can answer all of them. Our work was carried out in compliance with all scientific standards. The accusation of violations of scientific integrity has no basis whatsoever. It is obvious to us that not all documents were included in the evaluation. For this reason, we have asked to get access to the files to understand the basis for the concluding statement” (original in German, translation by the authors). Meanwhile, homeopathy organizations have announced that legal action to force the OeAWI to retract its conclusions will be taken if necessary.

One wonders what additional documentation Frass could produce to explain the backdating of the protocols or the disappearance of patients after randomization. Many points of our and OeAWI’s critique are, after all, fairly obvious from the data in the publication and the website www.clinicaltrials.gov; anybody with access to the internet can check the findings. From our point of view, Frass has apparently not quite understood the situation: The problem is not that we or the OeAWI excluded documents but rather that we included all available documents in our analysis, especially previous registration data.

Note

A version of this article previously appeared in the German magazine Skeptiker.

 

References

Aust, N., and V. Weisshäupl. 2021. A thorough analysis of Prof. M. Frass’ recent Homeopathy Trial casts serious doubts on its reliability. Edzard Ernst Blog (June 11). Online at https://edzardernst.com/2021/06/a-thorough-analysis-of-prof-m-frass-recent-homeopathy-trial-casts-serious-doubts-on-its-reliability/.

ClinicalTrials. 2012. Study NCT01509612: Submitted Date: January 13. Online at https://clinicaltrials.gov/ct2/history/NCT01509612?V_2=View#StudyPageTop.

———. 2018. Study NCT01509612: Submitted Date: August 15. Online at https://clinicaltrials.gov/ct2/history/NCT01509612?V_7=View#StudyPageTop.

———. 2020. Study NCT01509612: Submitted Date: October 29. Online at https://clinicaltrials.gov/ct2/history/NCT01509612?V_8=View#StudyPageTop.

———. 2021. Study NCT01509612: Submitted Date: June 14. Online at https://clinicaltrials.gov/ct2/history/NCT01509612?V_10=View#StudyPageTop.

Expression of Concern. 2022. Expression of concern: Homeopathic treatment as an add-on therapy may improve quality of life and prolong survival in patients with non-small cell lung cancer: A prospective, randomized, placebo-controlled, double-blind, three-arm, multicenter study. The Oncologist 27(12): e985. Online at https://doi.org/10.1093/oncolo/oyac221.

Frass, M. 2011. Homeopathy in cancer (HINC)—study protocol; Version date: January 10. Online at https://clinicaltrials.gov/ProvidedDocs/12/NCT01509612/Prot_SAP_000.pdf.

———. 2014. Homeopathy in Cancer (HINC)—study protocol; Version date: February 6. Online at https://clinicaltrials.gov/ProvidedDocs/12/NCT01509612/Prot_SAP_001.pdf.

Frass, M., C. Dielacher, M. Linkesch, et al. 2005. Influence of potassium dichromate on tracheal secretions in critically ill patients. Chest 127: 936–941. Online at https://doi.org/10.1378/chest.127.3.936.

Frass, M., P. Lechleitner, C. Gründling, et al. 2020. Homeopathic treatment as an add-on therapy may improve quality of life and prolong survival in patients with non-small cell lung cancer: A prospective, randomized, placebo-controlled, double-blind, three-arm, multicenter study. The Oncologist 25(12): 1–26 (Open Access). Online at https://doi.org/10.1002/onco.13548.

Gaertner, K., M. Teut, and H. Walach. 2022. Is homeopathy effective for attention deficit and hyperactivity disorder? A meta-analysis. Pediatric Research. Online at https://doi.org/10.1038/s41390-022-02127-3.

Schönberger, A. 2022. Homöopathie bei Krebspatienten: Fast zu schön, um wahr zu sein. Profil (October 28 [in German]). Online at https://www.profil.at/wissenschaft/homoeopathie-bei-krebspatienten-fast-zu-schoen-um-wahr-zu-sein/402198219.

Norbert Aust and Viktor Weisshäupl

Norbert Aust holds a doctoral degree in mechanical engineering and has worked in various management positions in the vacuum pumps and compressor industry, including R&D and quality management. After he finished his active career, he turned to skeptical topics in energy and power and in pseudomedicine. In 2016, he initiated the Homeopathy Information Network, a group of scientists and other individuals striving to fight misconceptions about homeopathy as a valid therapeutic approach. He can be reached at aust@gwup.org. Viktor Weisshäupl holds a PhD in chemistry from the University of Vienna, Austria, a Medical Licentiate and a Specialist Diploma in Anesthesiology from the University of Oulu, Finland, and an MD from the University of Vienna. He worked as an anesthesiologist in tertiary care centers in Oulu and Vienna. Since his retirement, he has worked against the promotion of pseudomedicine by the Austrian Medical Association and the Austrian Health Ministry. He can be reached at weisshaeupl@initiative-wissenschaftliche-medizin.at.