AVMA podcast highlights AI risks and rules in scientific writing
AVMA’s Veterinary Vertex podcast is putting a spotlight on how generative AI is reshaping scientific writing in veterinary medicine, with a clear message: these tools can help, but they also create real risks for research integrity. In the March 28, 2026, episode “AI in Scientific Writing: Opportunity, Risk, and Responsibility,” Scholarly Journal Consultant Morna Conway, PhD, and JAVMA/AJVR copy editor Vic Schultz discussed how AI systems can generate polished but fabricated citations because large language models predict plausible language rather than retrieve verified facts. The guests stressed that verifiable references are a basic building block of science: when citations cannot be confirmed, the credibility of the manuscript itself becomes questionable. Reviewers and editors are catching these errors through missing DOIs, broken Crossref links, absent PubMed records, and inconsistent bibliographic details, and the episode notes that similar hallucinated references are now surfacing in grant review. AVMA editors described a hardening stance, including disclosure expectations around AI use and the possibility that manuscripts will be rejected, or grants left unscored, if fabricated references are found. (buzzsprout.com)
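The DOI-based screening the episode describes can be sketched in a few lines of Python. This is an illustrative sketch only, not AVMA's actual workflow: the DOI in the example is made up, and the regex is a simplified syntactic check, with the real verification step being the Crossref REST API lookup noted in the comments.

```python
import re
from urllib.parse import quote

# Simplified pattern for modern DOIs: a "10." prefix with a registrant code,
# a slash, then a non-empty suffix. A syntactic check like this only filters
# obvious garbage; it cannot prove a DOI resolves to a real article.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Cheap format check to run before any network lookup."""
    return bool(DOI_RE.match(doi.strip()))

def crossref_lookup_url(doi: str) -> str:
    """Crossref REST API URL for a DOI; an HTTP 404 from this endpoint
    means Crossref has no record of the DOI, a red flag for fabrication."""
    return "https://api.crossref.org/works/" + quote(doi.strip(), safe="")

# Hypothetical citation from a manuscript under review (not a real DOI):
doi = "10.1234/example.2026.001"
if looks_like_doi(doi):
    print(crossref_lookup_url(doi))
    # A real screening script would now fetch this URL, e.g. with
    # urllib.request.urlopen(...), and flag the citation for manual
    # review if the request returns 404.
else:
    print("malformed DOI:", doi)
```

In practice editors combine checks like this with PubMed lookups and a comparison of the returned bibliographic metadata (authors, title, journal, year) against what the manuscript claims, since a DOI can resolve yet belong to an entirely different paper.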
Why it matters: For veterinary professionals who write, review, or rely on published research, this is less about technology adoption than about accountability. AI can speed drafting and editing, and may be especially useful for non-native English speakers, but recent veterinary and medical publishing commentary has emphasized that authors remain responsible for accuracy, authorship integrity, and transparent disclosure of AI assistance. The podcast’s core point is that AI is a language-processing tool, not a fact-checking database, so users cannot assume that authoritative-sounding citations or claims are real. That matters in a field where AI uptake is rising, AI literacy is still uneven, and regulatory oversight remains limited, putting more responsibility on individual users, reviewers, and journals to set guardrails. (buzzsprout.com)
What to watch: Expect more explicit journal policies, disclosure language, and manuscript screening workflows across veterinary publishing as editors try to balance AI’s efficiency gains with the risk of fabricated or misleading content. The same scrutiny may increasingly extend beyond journals to grant panels and other scientific review settings as institutions draw firmer lines around unverifiable citations. (buzzsprout.com)