
One reader posed a thoughtful question in the comments: "This study was conducted in 2024. Are those results still relevant in 2026?" My answer:

Someone on LinkedIn made a similar critique. It’s certainly fair to question whether results from a few years ago are still valid today, and unfortunately academic publishing can take a year or more (I’ve had papers published in JAVMA >2 years after they were accepted 😱).

However, my counterargument would be: What other options do we have? These companies are by and large not publishing equivalent data themselves. The few studies that have been put out usually have the company providing funding with several of their founders/employees as co-authors, which introduces huge bias. Furthermore, they often use very fuzzy benchmarks for “ground truth” that are favorable to the AI, rather than hard metrics like “we found the obstruction at surgery.”

Beyond that, I'd suggest the core problems these tools face have not changed. Machine learning in all its forms (and the computer-vision convolutional neural networks behind these systems are a subset of ML) is acutely sensitive to outliers. The rock in that image is far more radiopaque than soft tissue, so none of these systems should have confused it for spleen if they could generalize the underlying principles, but they cannot. Anything they weren't explicitly trained on is highly likely to be missed. We both know all of the crazy stuff dogs eat, from corncobs to rocks to underwear or weirder objects; that's a big ask!

As I mention in the article, it's even worse for pathology tools. InVue is flying off the shelves despite zero published evidence that it can accurately evaluate anything; the citation in marketing materials is always a footnote reading "internal data on file" (which, of course, they won't share with you if you ask). It took years to see studies on SediVue, and most were IDEXX-sponsored. The few truly independent ones found significant limitations for serious conditions like casts and infections. Zoetis has been employing AI for CBCs for several years and will imminently be deploying algorithms for cytology. Still, there are no published studies that I'm aware of (happy to be corrected if you know of any).

There is a simple answer to this for the companies: Put up or shut up. Partner with independent researchers and let them validate their tools, with permission to publish whatever they find, good, bad, or ugly. In the absence of regulations similar to those in human medicine, that is truly the only way vets can feel confident these tools are appropriate for routine clinical use.

Mar 27 at 3:30 PM
