That is why every AI expert who defines strategy, use cases, or gives any technical advisory should actively use and implement AI themselves.
The theory sounds great.
But the reality is: my OpenClaw used to create 10 duplicate tasks at once, cancel client meetings, send emails to the wrong "Michael", and ignore my instruction that all communication be in English, replying to me in Russian instead (it still does, despite explicit prohibition).
And it managed to cancel the OpenClaw Workshop by AI realist scheduled for the 26th of May 🙄
I restored it. It is happening. Nothing is cancelled.
If you use these tools yourself, you see how many bugs and errors they produce. But you also learn to safeguard them: define pipelines through deterministic tools, solve problems with code rather than by asking an LLM, and treat every output with suspicion.
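A minimal sketch of what "deterministic tools instead of asking an LLM" can mean in practice: rather than letting the agent guess which "Michael" to email, resolve the name against an explicit contact list in plain code and refuse ambiguous matches. All names, addresses, and the function itself are hypothetical illustrations, not part of any real product.

```python
# Hypothetical allowlist of contacts the agent is permitted to email.
CONTACTS = {
    "Michael Berg": "m.berg@example.com",
    "Michael Stone": "m.stone@example.com",
    "Anna Keller": "a.keller@example.com",
}

def resolve_recipient(name: str) -> str:
    """Return exactly one address, or raise instead of guessing."""
    matches = [addr for full, addr in CONTACTS.items()
               if name.lower() in full.lower()]
    if len(matches) != 1:
        # An LLM would happily pick one "Michael"; deterministic code refuses.
        raise ValueError(
            f"Ambiguous or unknown recipient {name!r} "
            f"({len(matches)} matches); refusing to send."
        )
    return matches[0]
```

The point is not the five lines of code but the failure mode: the deterministic check fails loudly on ambiguity, while the LLM fails silently by sending the email to the wrong person.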
Do you know how far away we are from full automation with AI agents? Galaxies.
These tools cannot be left unsupervised. If they run by themselves, you end up with dropped databases, calendar chaos, and emails sent to everyone except the right person.
And guard-railing everything is not a solution either, because then you cripple the tools. You need to babysit them: check, and fix.
Everywhere: from email writing by OpenClaw to the kill chain by Claude in Maven.