Computational Pathology: What AI Sees Under the Microscope — and What It Still Gets Wrong

AI in Veterinary Diagnostics · Part 3 of 6

One of the uncomfortable truths in veterinary pathology is that tumor grading is more subjective than most clinicians realize. Not because pathologists are careless — but because grading schemes ask us to make judgment calls on features that exist on a spectrum. How many mitotic figures cross the threshold from low-grade to high-grade? How much nuclear variation is "marked" versus "moderate"? Two experienced pathologists looking at the same slide can reach different conclusions, and both can be defensible.

That variability has real consequences. A grade can determine whether a dog receives chemotherapy. It can shape the prognosis a clinician communicates to an owner. It can influence the decision to pursue aggressive surgery. The stakes of getting it wrong — or of getting inconsistent answers from different labs — are not abstract.

This is what makes computational pathology so compelling to me. Not as a replacement for the pathologist, but as a tool that can bring more objectivity and consistency to the measurements that feed into grading. A mitotic count that is reproducible across labs and observers is a more reliable number than one that varies with who counted and how long they spent looking. That reproducibility matters — to the pathologist, to the clinician, and most of all to the patient.

What computational pathology actually means

Computational pathology is the application of digital image analysis and machine learning to tissue and cytology samples. It covers a wide range of tasks that differ in technical complexity and in how thoroughly they have been validated.

At the simpler end are quantitative tasks — counting mitotic figures, measuring nuclear size, calculating ratios. These are well-defined, benchmarkable, and well-suited to AI. At the complex end are interpretive tasks — classifying a tumor type, grading a neoplasm, distinguishing inflammation from neoplasia. These require integrating many features across a tissue section, often alongside clinical context. They're what pathologists spend years learning to do reliably — and where AI performance is most variable and most in need of oversight.
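To make the distinction concrete, here is a minimal sketch of a quantitative task in code: scaling raw mitotic figure detections to a standardized tissue area (2.37 mm², the area commonly used to approximate 10 high-power fields at 400x). The function name and inputs are illustrative assumptions, not the interface of any particular platform.

    # Minimal sketch: standardize a raw mitotic figure count to a fixed area.
    # Function name and inputs are illustrative, not from any specific tool.
    STANDARD_AREA_MM2 = 2.37  # area commonly used to approximate 10 HPF at 400x

    def mitotic_count_per_standard_area(num_detections: int,
                                        scanned_area_mm2: float) -> float:
        """Scale a raw detection count to the standard reporting area."""
        if scanned_area_mm2 <= 0:
            raise ValueError("scanned area must be positive")
        return num_detections * STANDARD_AREA_MM2 / scanned_area_mm2

    # 31 detections over 9.5 mm2 of scanned tumor ~= 7.7 per 2.37 mm2
    print(round(mitotic_count_per_standard_area(31, 9.5), 1))  # 7.7

The interpretive tasks resist this kind of reduction; there is no ten-line function for distinguishing inflammation from neoplasia.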

Context from human medicine

Computational pathology in human medicine has advanced furthest where datasets are large and well-labeled — prostate cancer grading, breast cancer subtyping, colorectal polyp classification. In those areas, AI systems have received regulatory clearance as decision-support tools. The common thread: the AI works alongside a pathologist who reviews and validates the output. Autonomy is not the goal. Consistency and efficiency are.

Mitotic counting: the clearest use case right now

AI-assisted mitotic figure detection has the strongest evidence base of any computational pathology application in veterinary medicine. Mitotic count feeds into grading schemes for canine mast cell tumors, soft tissue sarcomas, and mammary tumors — and it's a measurement known to vary between observers even among experienced pathologists.

Mitotic figures have recognizable features that make them good targets for image classifiers — condensed, darkly staining chromatin in characteristic patterns. Research has shown that AI-assisted mitotic counting can match experienced pathologist performance in controlled settings.
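As an illustration of what such a classifier looks like, the sketch below defines a small convolutional network that labels fixed-size image patches as mitotic figure or not. This is a toy PyTorch example under assumed inputs (64x64 RGB patches), not the architecture of any deployed product; real systems are trained on large annotated datasets and wrapped in candidate detection, calibration, and quality-control steps.

    import torch
    import torch.nn as nn

    class MitosisPatchClassifier(nn.Module):
        """Toy CNN that scores 64x64 RGB patches as mitosis vs. not."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 2)  # logits: [not mitosis, mitosis]

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = MitosisPatchClassifier()
    patches = torch.randn(8, 3, 64, 64)           # a batch of candidate patches
    scores = model(patches).softmax(dim=1)[:, 1]  # per-patch mitosis probability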

The caveats are real. Mitotic figures can be mimicked by apoptotic cells, pyknotic nuclei, and staining artifacts. Performance drops with variation in slide preparation and staining. And a mitotic count without interpretive context is just a number. The number matters — but the pathologist's interpretation of what that number means in this tumor, in this species, in this clinical context, matters more.

Why this matters clinically: In canine mast cell tumor grading, a difference of just a few mitotic figures can shift a case from low-grade to high-grade — changing the treatment recommendation and the prognosis communicated to the owner. AI assistance that reduces that variability, when properly validated, has direct clinical value.
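To see how a one-figure difference can flip a recommendation, consider the mitotic-count criterion of the widely used two-tier (Kiupel) scheme, where 7 or more mitotic figures per 10 high-power fields supports a high-grade call. The snippet below illustrates that single criterion only; the full scheme also weighs karyomegaly, multinucleation, and bizarre nuclei, and the final grade remains the pathologist's integrated judgment.

    # One criterion of the two-tier canine mast cell tumor scheme, shown for
    # illustration only; the real scheme integrates several nuclear features.
    HIGH_GRADE_MITOTIC_THRESHOLD = 7  # per 10 HPF (~2.37 mm2)

    def mitotic_criterion(count: int) -> str:
        if count >= HIGH_GRADE_MITOTIC_THRESHOLD:
            return "supports high grade"
        return "consistent with low grade"

    for count in (5, 6, 7, 8):
        print(count, "->", mitotic_criterion(count))
    # 6 -> consistent with low grade; 7 -> supports high grade.
    # A single mitotic figure changes the answer, which is exactly why
    # reproducible counting matters.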

Nuclear measurements: making the subjective objective

Beyond mitotic counting, AI tools that measure nuclear features — size, shape, chromatin texture, nucleolar prominence — are beginning to move from research tools toward more accessible platforms. Pathologists have always assessed these features, but qualitatively: "marked anisokaryosis," "prominent nucleoli." AI makes it possible to measure them objectively, consistently, and across an entire slide.
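As a sketch of how a qualitative impression like "marked anisokaryosis" becomes a number, the code below takes a labeled nuclear segmentation mask (each nucleus assigned a distinct integer, an assumed input format) and computes mean nuclear area plus its coefficient of variation, a simple proxy for variation in nuclear size. The metric choice is illustrative, not a validated grading feature.

    import numpy as np

    def nuclear_area_stats(label_mask: np.ndarray, um2_per_pixel: float):
        """Mean nuclear area and its coefficient of variation.

        label_mask: 2D integer array; 0 is background, 1..N are nuclei.
        """
        _, counts = np.unique(label_mask[label_mask > 0], return_counts=True)
        areas_um2 = counts * um2_per_pixel    # pixel counts -> physical area
        mean_area = areas_um2.mean()
        cv = areas_um2.std() / mean_area      # higher CV = more size variation
        return mean_area, cv

    # Toy mask with two nuclei at 0.25 um^2 per pixel
    mask = np.zeros((8, 8), dtype=int)
    mask[1:3, 1:3] = 1   # 4-pixel nucleus
    mask[4:8, 4:8] = 2   # 16-pixel nucleus
    print(nuclear_area_stats(mask, 0.25))  # (2.5, 0.6)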

Whether those measurements improve clinical outcomes depends on validation in veterinary species. That work is underway at several research centers and in early-stage development at Vetopathy, where the goal is to build tools that assist in objective grading. This remains a research-stage effort, not a current clinical offering.

Where AI in histopathology still falls short

The limitations of AI at the microscope are not edge cases. They reflect real constraints on what the technology can currently do.

Species diversity is the biggest veterinary-specific challenge. A model trained on canine tissue doesn't automatically work on feline, equine, or exotic species. Morphologic norms, staining patterns, and diagnostic features all vary. An AI tool validated in dogs and applied across a mixed-species caseload without revalidation is not a validated tool; it's a dog model being asked to read a gecko.

Interpretive integration is a second major gap. Histopathologic interpretation isn't just image recognition. It integrates clinical history, gross findings, location, signalment, and ancillary results. AI tools working from image data alone are working with a fraction of the information a pathologist uses — and that fraction doesn't always contain what matters most.

The label quality problem: AI models are only as good as the data they're trained on. In veterinary pathology, diagnostic labels are often institution-specific, reflect evolving classification schemes, and rarely include long-term outcome data. Training on imperfect labels produces models with imperfect performance — often in ways that aren't visible in aggregate accuracy numbers.

What this means for the practicing clinician

For the veterinarian receiving a histopathology report, the question isn't whether AI was involved — it's whether the interpretation is reliable and whether the pathologist behind it has the expertise to stand behind their findings.

AI used responsibly in computational pathology can improve the consistency of grading measurements and surface findings that might otherwise be missed. It doesn't change the need for expert oversight. A pathology report assisted by AI without a qualified veterinary pathologist in the loop is not a more efficient version of a traditional report. It's a riskier one.

The tools worth watching are those built with that principle at their center — AI as an instrument in the hands of a specialist, not a substitute for one.


Next in this series: Radiology, ultrasound, and derm AI — how artificial intelligence is moving into the veterinary imaging suite, what tools already exist, and where the evidence actually is.

Written by Eric Snook, DVM, PhD, DACVP · Vetopathy LLC
