China just delivered its first court ruling on AI hallucinations. The case offers important clarity on liability in the generative AI era.
Here's what happened: In June 2025, a user queried an AI platform about college enrollment information. The AI generated an incorrect answer about a campus location. When challenged, it doubled down and even promised, "I'll compensate you 100,000 yuan if this is wrong. You can sue me at Hangzhou Internet Court."
The user took the AI at its word and sued for 9,999 yuan.
The court's verdict matters. It established that AI systems lack legal personhood and cannot make independent commitments, so the chatbot's "promise" had no legal effect. Nor was the platform bound by it: the AI's statements don't constitute agency or authorization on the platform's behalf.
More importantly, the court applied a fault-based liability standard rather than strict product liability. This reflects a pragmatic understanding: generative AI is a service, not a product, and providers cannot fully predict or control its outputs.
The ruling sets clear obligations. Platforms must rigorously filter illegal content, implement reasonable safeguards against hallucinations, and prominently disclose risks. But they're not expected to achieve "zero hallucination" given current technical limitations.
In this case, the platform had completed model filings, conducted safety assessments, deployed available accuracy measures, and provided adequate user warnings. The court found no negligence.
I think this strikes a sensible balance. It protects innovation while demanding responsible deployment. China is moving fast on AI governance, and cases like this shape how the industry operates.