Read a hallucination paper this morning with a useful product lesson.
Hallucination = the model stating something made up as confidently as a fact.
The researchers found you can often spot the risk by checking the probability the model assigns to the first real word or number in its answer, skipping filler like "The answer is".
If that probability is low, i.e. the model hesitates right at the first substantive token, the answer may need extra checking.
That cheap, single-pass signal beat a much more expensive method, often called self-consistency, where you ask the model the same question many times and check whether the answers agree.
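Here's a minimal sketch of the cheap signal using a Hugging Face causal LM. The gpt2 stand-in model, the filler list, and the 0.5 cutoff are my illustrative assumptions, not values from the paper.

```python
# Sketch: probability of the first "real" (non-filler) generated token.
# Assumptions: gpt2 as a stand-in model, a toy filler list, 0.5 threshold.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper's models may differ
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

FILLER = {"the", "a", "an", "sure", "answer", "is", ":", ",", "."}

def first_content_token_prob(prompt: str, max_new_tokens: int = 20) -> float:
    """Generate greedily, then return the probability the model assigned
    to the first non-filler word or number it produced."""
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        output_scores=True,          # keep per-step logits
        return_dict_in_generate=True,
    )
    new_tokens = out.sequences[0, inputs.input_ids.shape[1]:]
    for step, token_id in enumerate(new_tokens):
        text = tok.decode(token_id).strip().lower()
        if text and text not in FILLER:
            probs = torch.softmax(out.scores[step][0], dim=-1)
            return probs[token_id].item()
    return 0.0  # generated nothing but filler

prob = first_content_token_prob("Q: What year did Apollo 11 land? A:")
print(f"first-content-token prob = {prob:.2f}, escalate = {prob < 0.5}")
```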
Plain English: don't verify every answer the expensive way. Detect shaky answers early, then escalate only those.
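And the routing pattern in code. Everything here (the stubbed model call, the threshold, the five-sample escalation, the majority vote) is a hypothetical sketch of the pattern, not the paper's pipeline.

```python
# Sketch: cheap signal first, expensive consistency check only when shaky.
import random
from collections import Counter

THRESHOLD = 0.5  # assumed cutoff; tune on your own traffic

def ask_model(question: str) -> tuple[str, float]:
    """Stub standing in for one model call that also returns the
    first-content-token probability from the sketch above."""
    return random.choice(["1969", "1969", "1968"]), random.uniform(0.2, 0.9)

def answer_with_escalation(question: str, k: int = 5) -> str:
    answer, confidence = ask_model(question)  # one cheap call
    if confidence >= THRESHOLD:
        return answer                         # confident: return as-is
    # Shaky: escalate to the expensive check -- re-ask k times
    # and keep the answer only if the samples mostly agree.
    samples = [ask_model(question)[0] for _ in range(k)]
    top_answer, votes = Counter(samples).most_common(1)[0]
    if votes > k // 2:                        # majority agreement
        return top_answer
    return f"[flag for review] {answer}"

print(answer_with_escalation("What year did Apollo 11 land on the Moon?"))
```

The point of the design: the expensive path runs only on the small slice of answers the cheap signal flags, so verification cost scales with risk, not with traffic.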