Radiology, Ultrasound, and Derm: AI Moves Into the Veterinary Imaging Suite

AI in Veterinary Diagnostics · Part 4 of 6

As a pathologist, I have seen many cases where the imaging picture and the histopathology told different stories. A lesion that looked aggressive on a radiograph turned out to be reactive on biopsy. A mass that appeared non-infiltrative on ultrasound came back as a high-grade sarcoma. These aren't failures of the imaging — they're reminders that each diagnostic tool sees a different dimension of the same problem.

Imaging captures structure. Histopathology captures biology. Neither is complete without the other, and the best diagnostic outcomes come from using both — thoughtfully, in sequence, with an understanding of what each one can and can't tell you. That same principle applies when AI enters the imaging suite. The question is never whether a tool produces output. It's whether that output tells you something reliable — and what it's missing.

Why imaging data suits AI well

Imaging data has properties that make it a natural fit for AI. It's inherently digital, produced in large volumes, and involves the kind of spatial pattern recognition that machine learning handles well. This is why medical imaging was one of the first clinical areas where AI showed real promise — and why veterinary imaging is following a similar path, with a lag that reflects the smaller scale of the veterinary market rather than any fundamental barrier.

The tools available today range from genuinely validated to aspirationally marketed. That gap isn't always obvious from product descriptions, which is why a clear-eyed look at what has actually been demonstrated — in which species, for which conditions — is worth doing before any adoption decision.

Radiograph analysis: the most developed application

Thoracic and orthopedic radiograph analysis has the most mature AI tool landscape in veterinary medicine. The targets — lung patterns, pleural effusion, heart size, bone lesions — are visually distinct, common enough to generate training data, and have well-established criteria that lend themselves to model training.

Several commercial platforms now offer AI-assisted radiograph triage for companion animals — flagging abnormal studies for prioritized review rather than generating standalone diagnoses. That positioning reflects appropriate humility: these tools perform best on clear-cut cases and weakest on the ambiguous presentations that represent the bulk of real clinical uncertainty.

Context from human medicine

AI-assisted chest radiograph analysis has regulatory clearance in several countries for flagging critical findings as a triage tool. The clinical benefit is real but specific: faster identification of urgent findings in high-volume settings. Lower-volume settings with different case mixes have produced more variable results. Veterinary practices face additional complexity: species diversity means canine-trained models don't automatically transfer to feline, equine, or exotic patients.

Species generalization deserves emphasis here. Normal chest anatomy differs meaningfully between dogs and cats — heart shape, lung lobe configuration, tracheal position, diaphragm contour. A model trained predominantly on canine radiographs applied to feline patients without revalidation is not a validated tool. Practices with significant non-canine caseloads should ask specifically which species a radiograph AI was validated in.

Ultrasound: a harder problem

Ultrasound is a more difficult target for AI than radiography. Image quality is highly operator-dependent — the same lesion imaged by two different people on the same machine can look substantially different. A static image captures only a slice of what a skilled sonographer sees in real time. And the interpretive task — integrating echogenicity, architecture, vascularity, and spatial relationships — involves dynamic reasoning that current AI tools aren't well suited to replicate.

Where AI shows more promise in ultrasound is in well-defined subtasks: measurement standardization, structural flagging in echocardiography, consistency checks. These are assistance functions, not interpretive ones — and they still require a skilled sonographer at the transducer end.

The training data problem: Building an AI ultrasound model requires expert-labeled images at scale. Veterinary ultrasound data is less systematically archived than radiograph data, more variable in quality, and rarely labeled in the structured format needed for model training. This is solvable over time — but it's a real constraint on what's possible right now.

Dermatology AI: promising, with caveats

AI-assisted skin lesion classification is one of the more active development areas in veterinary medicine. Skin conditions are common, the data — clinical photographs — is relatively easy to collect, and dermatology is genuinely challenging in general practice: a large differential space, time pressure, and limited immediate access to specialist input.

Tools that classify skin lesions from clinical photographs are in various stages of development and commercial release. The evidence base is early-stage. Published studies have generally tested models on curated image datasets rather than real clinical populations — and performance on curated data tends to overestimate how a tool will perform in practice, where lighting, coat interference, and image angle all introduce variability.

There is also a deeper limitation: a photograph captures surface appearance, not tissue. An AI tool that classifies a lesion as "likely inflammatory" from an image cannot account for the cases where surface appearance and histopathology diverge — which happens with meaningful frequency in dermatology. AI-assisted triage is a reasonable concept. Replacing histopathologic follow-up for lesions of uncertain character is a different proposition.

Questions worth asking any imaging AI vendor: Which species was the model trained and validated in? On prospective clinical data or curated research datasets? What's the case mix, and how does it compare to your practice? How does the model handle cases outside its training — does it flag uncertainty, or produce confident output regardless? These are reasonable questions. Any reputable vendor should be able to answer them.

The multimodal future

The most significant near-term development in veterinary imaging AI isn't any single tool — it's the movement toward combining imaging findings with lab data, clinical history, and signalment to produce more complete assessments. Multimodal models have outperformed single-modality tools on complex diagnostic tasks in human medicine, and the same principle will apply in veterinary medicine as the data infrastructure matures.

For now, the practical takeaway is straightforward: imaging AI tools that have been transparently validated in the relevant species, positioned honestly as decision support rather than standalone diagnosis, and integrated into a workflow that preserves clinical judgment are worth serious consideration. Tools that can't answer basic questions about their validation are worth approaching with caution — regardless of how good the interface looks.


Next in this series: The AI-connected clinic — what a fully integrated veterinary diagnostic ecosystem would look like, what it would take to build, and how close we actually are.

Written by Eric Snook, DVM, PhD, DACVP · Vetopathy LLC
