AVMA podcast highlights AI risks and rules in scientific writing: full analysis

AI-assisted writing is moving from novelty to routine in scholarly publishing, and AVMA’s Veterinary Vertex podcast is framing that shift as both an opportunity and a responsibility for veterinary medicine. In its March 28, 2026, episode, the podcast warns that generative AI can produce references that look credible but do not exist, creating a direct threat to the integrity of manuscripts, peer review, grant review, and the literature veterinarians depend on. The guests make a basic but important point: verifiable citations are part of the scientific “building blocks” that connect new findings to prior research, so when references cannot be confirmed, the trustworthiness of the manuscript itself is called into question. (buzzsprout.com)

The discussion lands in a broader moment of rapid AI adoption across the profession. A recent ACVIM task force perspective described AI as increasingly relevant across veterinary research, education, clinical practice, and hospital management, while also warning that veterinary medicine lacks uniform regulatory oversight for AI use. That same paper noted that many veterinarians still report limited knowledge of AI, even as interest and use grow. In other words, the tools are arriving faster than shared standards for how to use them well. (academic.oup.com)

What makes the AVMA episode especially practical is its focus on how problems show up in real editorial workflows—and why they happen in the first place. Conway explains that large language models are not factual databases like PubMed or Google Scholar; they are language-processing systems trained to predict the next likely word or concept. That means they can assemble citations that look highly specific and convincing without ever pulling from a verified source. According to the episode notes and discussion, fabricated references may be uncovered when a reviewer recognizes an implausible claim, or when editorial staff cannot verify a DOI, PMID, volume, page range, or Crossref record. The guests describe these references as particularly dangerous because they can be polished and persuasive enough to slip past a casual read. (buzzsprout.com)
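The verification step the episode describes can, in part, be automated. As a minimal sketch, the following Python snippet checks whether a DOI has a record in Crossref's public REST API; the endpoint and its 200/404 behavior are as Crossref documents them, while the function names and the example DOI are illustrative, not drawn from the episode:

```python
import urllib.error
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def crossref_url(doi: str) -> str:
    """Build the Crossref works-API URL for a DOI string."""
    return CROSSREF_API + urllib.parse.quote(doi.strip())

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a metadata record for this DOI.

    A 404 from the works endpoint means Crossref has no record --
    a red flag worth manual follow-up, though not proof of
    fabrication, since some valid DOIs are registered with other
    agencies (e.g., DataCite).
    """
    try:
        with urllib.request.urlopen(crossref_url(doi), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

A check like this only confirms that a DOI exists; a human still has to confirm that the resolved record matches the authors, title, and journal the citation claims, since a hallucinated reference can borrow a real DOI from an unrelated paper.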

The episode also suggests the problem is no longer confined to manuscripts. AVMA Editor-in-Chief Lisa Fortier notes that hallucinated references are now being found in grant review, where at least one review panel reportedly chose not to score applications containing unverifiable citations. She described that response as controversial but consistent with the “hard line” AVMA journals have taken. The editorial takeaway is straightforward: disclosure of AI use is expected, and every AI-generated output has to be checked by a human before submission. (buzzsprout.com)

That position is consistent with the direction of travel in veterinary and medical publishing more broadly. An AJVR editorial published in 2025, “Toward responsible use of artificial intelligence in our journals,” signals that AVMA journals are formalizing their approach to AI in publishing. Across recent JAVMA and AJVR articles, disclosure statements such as “No AI-assisted technologies were used” are already appearing routinely, suggesting that AI-use declarations are becoming normalized in manuscript publishing. (pubmed.ncbi.nlm.nih.gov)

Outside AVMA, expert commentary has been converging on similar concerns. A 2023 opinion piece in Frontiers in Veterinary Science warned that large language models could accelerate the production of low-quality or misleading scientific text, and raised concerns about AI-enabled plagiarism and weak disclosure. More recently, veterinary thought leaders writing in Journal of Veterinary Internal Medicine argued that the profession needs stronger AI literacy so clinicians and researchers can validate outputs rather than overtrust them. Together, those perspectives support the idea that AI is not just a writing aid issue, but a professional standards issue. (frontiersin.org)

Why it matters: For veterinary professionals, the immediate implication is operational. Anyone submitting a manuscript, reviewing one, applying for funding, or applying published evidence in practice may need to scrutinize references, methods descriptions, and disclosure statements more closely than before. AI can help organize ideas, improve readability, and reduce drafting time, but it does not remove the author’s obligation to verify sources, protect confidentiality, or ensure that conclusions reflect actual evidence. The podcast’s explanation is a useful reminder that these systems generate plausible language, not guaranteed facts. In a profession where pet parents and producers ultimately depend on trustworthy science, even a small number of fabricated citations can erode confidence quickly. (buzzsprout.com)

What to watch: The next phase will likely include clearer author instructions, more explicit reviewer guidance, and stronger prepublication checks for citation validity and AI disclosure at veterinary journals; over time, those policies may also shape expectations in grant review, residency programs, research training, and continuing education. That’s an inference based on the AVMA episode, the 2025 AJVR editorial, and the wider push for AI literacy and responsible-use standards across veterinary publishing. (buzzsprout.com)
