As always these past few months, there are more questions than answers, but in this week’s intro I wonder why interactions with the agents sometimes go well and other times don’t.
I suspect it comes down to knowledge gaps — implicit ones, where you assume knowledge exists in the codebase or in the training data when it isn’t there.
And, interestingly enough, that idea harmonizes with the linked posts by John Regehr and Mary Rose Cook.