Veterinary CAT database review shows global reach over 10 years

A new 10-year review of BestBETs for Vets offers a rare look at how one of veterinary medicine’s better-known evidence summary tools has actually been used. According to the study, the database contained 96 critically appraised topics (CATs) across 27 topic areas over the review period, with canine medicine and reproduction most represented. Users came from more than 190 countries, and most visits were direct, pointing to a resource that clinicians appear to seek out deliberately rather than stumble across. (ovid.com)

That matters because BestBETs for Vets was built to solve a persistent problem in practice: vets often need fast, defensible answers to clinical questions, but don’t have time to search and critically appraise the primary literature from scratch. The resource was developed by the Centre for Evidence-based Veterinary Medicine at the University of Nottingham and has been positioned as a way to provide accessible, up-to-date evidence summaries for clinical decision-making. Nottingham also uses BestBETs in undergraduate teaching, and says the summaries can support journal clubs and clinical guideline development, underscoring that the platform sits at the intersection of practice, education, and professional development. (ovid.com)

The broader backdrop is that evidence-based veterinary medicine has matured as a concept, but not always as a routine habit in clinics. Recent commentary marking roughly 20 years of EBVM said the profession still struggles with limited training, variable research quality, and a tendency for busy clinicians to rely on less rigorous information sources when time is short. Another recent review argued that easily accessible tools are essential if EBVM is going to move from theory into everyday case management.

That challenge also mirrors a wider pattern across healthcare decision support: a recent scoping review in Drug Safety identified 18 case-level causality assessment tools developed or updated between 2008 and 2023, with most falling into the algorithmic category and several built for specific outcomes or settings, including drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, and vaccines. The authors concluded that future tools may need to go further by incorporating biomarkers and practical considerations such as drug quality, medication error, and adherence to risk-minimization measures: essentially the same argument, in another domain, for evidence tools that are usable in the real context of care rather than only on paper. In that context, a 10-year usage analysis of BestBETs is really a check on whether one piece of that infrastructure is reaching the people it was meant to help. (pmc.ncbi.nlm.nih.gov)

The Nottingham group’s own updates suggest the resource has continued to expand beyond the study window. In an October 11, 2024 newsletter, the Centre for Evidence-based Veterinary Medicine said BestBETs for Vets had reached its 100th published BestBET earlier that year, describing that milestone as the product of a relatively small team over the prior decade. The Centre said the platform started in 2014 and framed the work as producing short evidence summaries that may look simple, but require substantial effort in searching, appraisal, and drafting to stay objective and useful for veterinary professionals. (nottingham.ac.uk)

There doesn’t appear to be a broad wave of outside commentary tied specifically to this 10-year review, but the wider EBVM community has consistently made the case for resources like this. The Evidence-Based Veterinary Medicine Association highlights BestBETs among the profession’s evidence tools, while Nottingham showcases practice-facing testimonials from clinicians who say the summaries have helped settle real-world questions, including surgical approach decisions. That’s not the same as formal outcomes data, but it does suggest the database has practical credibility among at least some users on the ground. (ebvma.org)

Why it matters: For veterinary professionals, the most useful takeaway may be where the review points indirectly rather than explicitly. If a CAT database attracts users from more than 190 countries and much of its traffic is direct, that suggests demand for concise, clinically usable evidence summaries remains strong. It also highlights a continuing workforce and education issue: clinicians want evidence, but they need it in formats that fit real practice constraints. For teams trying to support consistent care, train early-career associates, or build internal protocols, CATs can serve as a middle layer between raw papers and formal guidelines. They won’t replace judgment, and Nottingham explicitly says they’re not meant to be prescriptive, but they can shorten the distance between published evidence and the exam room. The broader lesson from the causality-assessment literature is similar: even outside veterinary medicine, tools are increasingly being designed for specific populations, settings, and adverse-event types rather than as one-size-fits-all checklists. That trend may be relevant for veterinary evidence services too, especially as practices look for summaries that reflect species, case type, and workflow realities. (nottingham.ac.uk)

There’s also a more strategic signal here for veterinary education and knowledge services. If BestBETs is being used in undergraduate training and continues to expand, that supports the argument that evidence appraisal shouldn’t sit only in research tracks or postgraduate programs. It can be embedded in how future clinicians learn to frame questions, search efficiently, and weigh study quality. That aligns with recent EBVM commentary calling for stronger curricular attention and more usable evidence infrastructure across the profession. And if other health fields are already debating how to improve decision tools with biomarkers and context-specific design, veterinary medicine may face similar questions as its own evidence resources mature. (nottingham.ac.uk)

What to watch: The next milestones are likely to be less about raw publication counts and more about impact, including whether BestBETs usage broadens across species and topic areas, whether more clinics integrate CATs into protocol development and team training, and whether future research can link these evidence tools to measurable changes in clinical decision-making or care quality. A related question is whether veterinary evidence resources become more specialized over time, mirroring the broader move toward tools tailored to particular clinical contexts rather than generic appraisal aids. (nottingham.ac.uk)

