AI and the Diagnostic Sample: From Cytology Reads to Smarter Biopsy Selection
AI in Veterinary Diagnostics · Part 2 of 6
The diagnostic sample is where a clinical impression becomes something testable. It is also where a substantial amount of diagnostic information is lost — through sampling error, poor technique, inadequate fixation, or the decision not to submit at all. These are not failures of laboratory science. They are failures that happen upstream, in the clinic, before the sample is ever handed off.
Artificial intelligence is beginning to exert influence at this upstream stage, and the implications are worth thinking through carefully. Some of what is emerging is genuinely useful. Some of it is being oversold. The challenge, as with all AI tools in veterinary medicine, is distinguishing between the two.
The cytology question: what AI can and cannot read
Cytology occupies an interesting position in veterinary diagnostics. It is fast, inexpensive, and widely accessible — and it is also highly operator-dependent, both in sample acquisition and in interpretation. The same fine needle aspirate can yield dramatically different diagnostic information depending on how it was prepared and who reads it.
AI-assisted cytology interpretation has received meaningful attention in human medicine, particularly in fields like cervical cytology and hematopathology, where large standardized datasets have made model training feasible. The results in those narrowly defined contexts have been encouraging. The translation to veterinary cytology, however, is considerably more complicated.
Context from human medicine
AI-assisted cervical cytology screening — automated detection of abnormal cells in Pap smear preparations — has been in clinical use in human medicine for decades and has been validated in large prospective studies. The key enabling factors were standardized sample preparation, high case volumes, and a relatively limited morphologic vocabulary. Veterinary cytology involves far greater species diversity, less standardized preparation, and smaller training datasets — all of which raise the bar for meaningful AI validation.
Veterinary cytology spans dozens of species, a wide range of tissue types, and enormous variability in sample preparation quality. Training an AI model to reliably distinguish a well-differentiated mast cell tumor from a reactive mast cell population in a canine lymph node aspirate requires not just labeled data, but correctly labeled, high-quality data at scale. That kind of dataset does not yet exist in veterinary medicine in any systematic form.
What does exist, and what is showing early promise, is AI assistance at the pattern-recognition level — identifying cellular features such as nuclear-to-cytoplasmic ratio, chromatin pattern, and mitotic figures in a prepared cytologic sample. These are tasks that map well to image classification models, and early research in veterinary species has produced results that warrant continued investigation. They do not yet warrant clinical deployment without expert oversight.
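To make the shape of that task concrete, the sketch below shows the sort of transfer-learning image classifier this line of research typically builds on. It is purely illustrative: the feature labels, the ResNet-18 backbone, and the labeled_cytology_tiles/ folder are hypothetical stand-ins, assuming a set of expert-labeled image tiles, and describe no specific published veterinary model.

```python
# Illustrative sketch only: a transfer-learning classifier for cytologic
# features of the kind described above. All labels and paths are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Hypothetical feature classes a pattern-recognition model might target.
CLASSES = ["high_nc_ratio", "coarse_chromatin", "mitotic_figure", "unremarkable"]

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with a new classification head for our labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

# Expert-labeled tiles would live in class-named folders (hypothetical path).
dataset = ImageFolder("labeled_cytology_tiles/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The sketch exposes the real dependency: everything downstream of `ImageFolder` is routine engineering, while the expert-labeled, high-quality dataset it reads from is exactly the resource the preceding paragraphs note veterinary medicine does not yet have at scale.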
Commercial cytology AI tools: what already exists
It is worth acknowledging that AI-assisted cytology is not purely a future prospect in veterinary medicine — commercial tools already exist and are in active use at some practices. These tools generally address a defined subset of cytologic tasks: automated differential counting of blood cells and common fluid analysis parameters are the most established applications, with some platforms beginning to expand into mass aspirate characterization for common tumor types in companion animal species.
The scope of what these tools have been validated to do matters, and it is narrower than their marketing sometimes implies. Automated cell counting in well-prepared blood films is a different technical problem from classifying a poorly cellular aspirate from a subcutaneous mass in an unusual species. Practices evaluating commercial cytology AI tools should ask specifically which cell types and sample preparations the tool was validated on, in which species, and using what reference standard. Performance on a curated validation dataset does not always translate directly to the variable sample quality encountered in routine clinical practice.
The oversight principle applies here too: AI cytology tools trained without veterinary pathologist input carry the same risk as AI differential tools applied without species-specific logic — confident output that does not account for what the model was never trained to know. The responsible path is AI assistance reviewed by a qualified interpreter, not AI interpretation as a standalone product. When a result matters clinically, a cytology AI output is a starting point, not a conclusion.
Biopsy site selection: where AI could matter most
One of the least discussed but potentially highest-impact applications of AI in veterinary diagnostics is biopsy site selection. The decision of where exactly to sample a lesion — and whether the selected site is likely to yield diagnostic tissue — has an enormous effect on whether a histopathologic result is interpretable.
Necrotic centers, peripheral reactive zones, and areas of secondary inflammation are all capable of producing a biopsy result that reads as non-diagnostic or misleading. An experienced clinician learns over time to read a lesion's gross morphology and choose accordingly. A less experienced one may not — and no amount of pathology expertise at the other end of the pipeline can recover information that was never in the sample.
AI tools capable of analyzing gross lesion images and flagging likely viable versus non-viable tissue zones are in early development. The concept is sound: the same image analysis capabilities used to characterize lesion morphology can, in principle, be applied to guide sampling strategy. The evidence base for this specific application in veterinary species remains limited, and it represents a direction of active investigation rather than a current clinical standard.
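As a purely illustrative sketch of the underlying idea, suppose a hypothetical image model has already produced a per-region "likely viable tissue" score for a gross lesion photograph. Turning those scores into a suggested sampling site is then a simple search. The function name, grid, and scores below are invented for illustration.

```python
# Illustrative sketch only: turning per-region "likely viable tissue" scores
# from a hypothetical image model into a suggested sampling site.
import numpy as np

def suggest_biopsy_site(viability_map: np.ndarray, window: int = 3):
    """Return the (row, col) origin of the window with the highest mean
    viability score. `viability_map` is a hypothetical per-tile score grid
    with values in [0, 1]."""
    rows, cols = viability_map.shape
    best_score, best_site = -1.0, (0, 0)
    for r in range(rows - window + 1):
        for c in range(cols - window + 1):
            score = viability_map[r:r + window, c:c + window].mean()
            if score > best_score:
                best_score, best_site = score, (r, c)
    return best_site, best_score

# Toy example: a lesion with a necrotic (low-score) center, which the
# search steers sampling away from.
grid = np.array([
    [0.8, 0.7, 0.6, 0.7],
    [0.7, 0.1, 0.1, 0.6],
    [0.6, 0.1, 0.1, 0.7],
    [0.7, 0.6, 0.7, 0.8],
])
site, score = suggest_biopsy_site(grid, window=2)
print(f"Suggested window origin: {site}, mean viability {score:.2f}")
```

The hard problem, of course, is producing a trustworthy viability map in the first place; that is the part still under active investigation, and nothing in this sketch should be read as implying it is solved.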
The submission decision: AI as a prompt to act
Perhaps the most underappreciated role AI can play at this stage of the diagnostic pipeline is simply prompting the submission decision in the first place. A substantial proportion of diagnostically actionable lesions in veterinary practice are never sampled — not because the clinician lacks the skill to sample them, but because the clinical picture did not rise to the threshold that triggers submission.
An AI tool that generates a morphology-informed differential list at the point of gross lesion evaluation does more than enumerate possibilities. It makes the stakes of the clinical decision visible. When a tool surfaces a differential list that includes malignant possibilities for a lesion that the clinician might otherwise have elected to monitor, it creates an explicit prompt to consider submission — documented, transparent, and tied to clinical reasoning rather than intuition alone.
This is not a guarantee of better outcomes. It is a shift in how the decision is made, and in many cases that shift will improve the quality of what reaches the diagnostic laboratory. It also creates a record of the clinical reasoning that existed at the time the decision was made — which has implications for continuity of care and, in some contexts, medical record quality.
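A minimal sketch of what that prompt-and-document step could look like follows, assuming a differential tool that flags malignant potential per differential. The data structures, field names, and flags are hypothetical; the point is that the prompt and its reasoning become part of the record.

```python
# Illustrative sketch only: flagging a differential list that warrants an
# explicit submission prompt, and recording the reasoning. The differential
# structure and malignancy flags are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Differential:
    name: str
    malignant_potential: bool  # hypothetical flag from the differential tool

def submission_prompt(lesion_id: str, differentials: list[Differential]) -> dict:
    """Return a record documenting whether submission was prompted and why."""
    flagged = [d.name for d in differentials if d.malignant_potential]
    return {
        "lesion_id": lesion_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "differentials": [d.name for d in differentials],
        "submission_prompted": bool(flagged),
        "reason": f"Malignant possibilities listed: {', '.join(flagged)}"
                  if flagged else "No malignant differentials surfaced",
    }

record = submission_prompt("lesion-001", [
    Differential("lipoma", malignant_potential=False),
    Differential("mast cell tumor", malignant_potential=True),
])
print(record["submission_prompted"], "-", record["reason"])
```

Whether the clinician submits or elects to monitor, the record preserves what was known and considered at the time, which is the continuity-of-care benefit described above.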
What AI cannot fix: Sample quality remains a function of technique, fixation, and submission handling — none of which AI currently influences. A well-reasoned submission decision followed by a poorly prepared sample still produces a non-diagnostic result. AI tools at the front end of the pipeline work best in combination with strong foundational practice in sample acquisition and submission.
Where the field is moving
The trajectory in this space is toward integration — AI tools that connect the point-of-care decision to the submission workflow, carrying clinical context forward rather than letting it dissipate between the exam room and the laboratory. The technical infrastructure for this kind of closed-loop system is beginning to exist. The validated, species-specific content that should power it is still being built.
What veterinary practitioners can reasonably expect from AI in the near term is assistance — with pattern recognition, with differential generation, with prompting submission decisions that might otherwise not be made. What they should not expect is a substitute for the interpretive expertise that has always sat at the center of veterinary diagnostics. That expertise remains essential, and AI tools built with it at their core will be the ones worth using.
Next in this series: Computational pathology — what AI sees under the microscope, where it is already performing, and what it still gets wrong.

