10-year review highlights reach and limits of BestBETs for Vets
A new 10-year review of BestBETs for Vets offers a snapshot of how one of veterinary medicine’s longstanding evidence tools is actually being used. The analysis found 96 critically appraised topics (CATs) published across 27 topic areas, with canine and reproduction topics leading the mix, while website analytics showed users from more than 190 countries were accessing the resource, most often through direct traffic. The study centers on BestBETs for Vets, a free database created by the University of Nottingham’s Centre for Evidence-based Veterinary Medicine to help clinicians answer focused clinical questions using structured evidence summaries. (nottingham.ac.uk)
That matters because CATs occupy a practical middle ground between original studies and full systematic reviews. They are designed to answer narrow clinical questions quickly, using a standardized process of literature searching, critical appraisal, and a concise “bottom line.” Previous reviews of veterinary CATs have described them as a way to close the gap between research and clinical decision-making, especially when practitioners need something more usable than a stack of individual papers but faster to produce than a full systematic review. (nottingham.ac.uk)
BestBETs for Vets has been part of that landscape for more than a decade. The Nottingham center describes the database as a freely accessible collection for vets in practice, adapted from the original BestBETs model used in human emergency medicine. Its own methodology notes that reviews are systematic and repeatable, but also acknowledges limits, including searches in only two literature databases and no attempt to capture unpublished evidence. A 2020 review of CATs in veterinary medicine also listed BestBETs for Vets among the major free CAT collections available to clinicians, alongside resources such as Veterinary Evidence Knowledge Summaries. (nottingham.ac.uk)
The new paper’s findings suggest two parallel realities. First, there is clear demand: users are coming from a genuinely international audience, which supports the idea that concise evidence syntheses meet a real need across practice settings. Second, the database’s output appears modest relative to the breadth of veterinary medicine, with fewer than 100 CATs over 10 years and topic concentration in selected areas. That may reflect the labor-intensive nature of producing CATs, the uneven strength of the underlying evidence base, or the fact that veterinary evidence resources remain dependent on a relatively small pool of trained contributors. This is an inference from the study findings and the broader literature on CAT production, rather than a direct statement from the authors. (nottingham.ac.uk)
Broader commentary in evidence-based veterinary medicine supports that interpretation. A recent commentary marking 20 years of evidence-based veterinary medicine argued that resources such as BETs and Knowledge Summaries help reduce the burden on veterinarians to appraise primary evidence themselves. RCVS Knowledge makes a similar case for Knowledge Summaries, positioning them as a practical way for busy clinicians to stay current and apply evidence in care decisions. At the same time, the veterinary CAT literature has emphasized that these resources are also educational tools, used to teach searching and appraisal skills in both undergraduate and continuing professional development settings. Seen in a wider evidence-tools context, that pattern is not unique to veterinary medicine: a recent scoping review of case-level causality assessment tools in pharmacovigilance identified 18 tools developed or updated between 2008 and 2023, drawing on 48 articles and seven grey-literature sources. Most were algorithmic (12 of 18), with smaller numbers using hybrid, global introspection, or probabilistic approaches. Many were built for specific outcomes or settings, such as drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, or vaccine adverse events. (veterinaryevidence.org)
That comparison is useful because it underscores a broader design principle: structured decision-support tools tend to become more clinically useful as they become more context-specific. In pharmacovigilance, the scoping review noted that future causality tools may benefit from incorporating biomarkers in areas such as drug-induced liver injury, severe cutaneous adverse reactions, or immune checkpoint inhibitor toxicity, while also accounting for factors like drug quality, medication error, and adherence to risk-minimization measures. BestBETs for Vets is solving a different problem, but the same tension applies: general-purpose resources are valuable, yet clinicians often need answers tailored to a species, condition, and practice setting. That helps explain why strong international use can coexist with pressure for broader coverage and more frequent updating.
Why it matters: For veterinary teams, this study is really about information infrastructure. If a free, globally accessed CAT database is drawing users from more than 190 countries, that signals ongoing demand for short, clinically usable evidence summaries. But the limited volume and uneven topic spread also highlight a familiar bottleneck: evidence-based practice depends not just on research generation, but on translation, curation, and maintenance. For clinicians advising pet parents, especially in areas where published evidence is thin or conflicting, the availability of current CATs can shape how confidently teams discuss diagnostics, treatments, and uncertainty. For educators and employers, the findings also reinforce that evidence appraisal remains a workforce skill, not just an academic exercise. More broadly, the experience of other health fields suggests that decision-support tools often evolve toward narrower, higher-context applications rather than purely generic formats. (pmc.ncbi.nlm.nih.gov)
There’s also a strategic implication for the profession. Veterinary medicine now has multiple evidence-synthesis formats, including CATs, Knowledge Summaries, and systematic review databases, but they’re spread across organizations and updated on different timelines. The BestBETs for Vets review may therefore be read as a progress report on one resource, and as a reminder that discoverability, topic prioritization, and update cadence matter almost as much as methodological rigor if these tools are going to influence real-world care. That’s especially true in a profession where time pressure often pushes clinicians toward informal information sources instead of peer-reviewed evidence. The wider literature on structured assessment tools points in the same direction: usefulness depends not just on having a method, but on choosing or designing one that fits the clinical question and context. (pmc.ncbi.nlm.nih.gov)
What to watch: The next signal will be whether this review leads to more CAT production, broader subject coverage beyond the current concentration in canine and reproduction topics, or tighter coordination between evidence platforms so clinicians can move more easily from clinical question to usable answer. Longer term, it will also be worth watching whether veterinary evidence tools follow the same path seen elsewhere in medicine toward more specialized frameworks, potentially with added data inputs or markers that improve confidence in specific use cases. (nottingham.ac.uk)