Hallucinations are the fundamental technical barrier today to the widespread adoption of generative AI. A great deal of work has gone into mitigating errors of this kind, and while the reliability and robustness of LLMs and other GenAI models have improved considerably, something is fundamentally broken in the statistical language modeling paradigm that makes hallucinations impossible to solve, even in principle. We need a paradigm shift.