BestBETs for Vets review shows global reach, narrow topic spread

A new 10-year review of BestBETs for Vets offers a snapshot of how one of veterinary medicine's better-known critically appraised topic (CAT) databases has been used, and what that says about evidence needs in practice. The study, published in Veterinary Record Open, examined 96 CATs across 27 topic areas and found that canine medicine and reproduction dominated the database's content. It also reported global reach, with users from more than 190 countries accessing the site, most often by going there directly. (nottingham.ac.uk)

That matters because BestBETs for Vets was built to solve a practical problem: clinicians often need a usable answer before there is time, or capacity, for a full systematic review. The resource, developed by the University of Nottingham's Centre for Evidence-based Veterinary Medicine (CEVM), publishes focused reviews of specific clinical questions and is intended to support evidence-based decision-making in practice. Earlier guidance from the CAT literature has framed these reviews as a pragmatic middle ground: faster and more clinically usable than a full evidence synthesis, but less exhaustive by design. (nottingham.ac.uk)

The broader evidence-based veterinary medicine literature helps explain why this database still matters. A 2020 review of CATs in veterinary medicine described BestBETs for Vets as one of a relatively small number of openly accessible veterinary CAT collections, noted that it has used a multi-author review model, and said updates were intended on a roughly two-year cycle at that time. That same review also outlined the tradeoff at the center of the CAT model: speed and clinical relevance versus comprehensiveness. CATs can be highly useful for answering focused questions, teaching appraisal skills, and identifying research gaps, but they’re not substitutes for systematic reviews or guidelines. (frontiersin.org)

That tradeoff is not unique to veterinary medicine. A recent scoping review in Drug Safety identified 18 case-level causality assessment tools developed or updated between 2008 and 2023 across human pharmacovigilance, spanning global introspection, algorithmic, hybrid, and probabilistic approaches. Most were algorithmic, and several were built for very specific contexts, including drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, and vaccine adverse events. The authors argued that future tools may become more useful by incorporating biomarkers and by accounting more explicitly for factors such as drug quality, medication error, and adherence to risk-minimization measures. While those tools address adverse-event causality rather than frontline veterinary clinical questions, the parallel is useful: CAT-style resources tend to work best when they are designed for a clear use case, with transparent limits, rather than treated as universal answers.

The Nottingham team’s own supporting materials reinforce that point. BestBETs searches rely on CAB Abstracts and MEDLINE rather than every possible database, a choice the group says is meant to balance rigor with feasibility. The site also stresses that a CAT’s “clinical bottom line” should not be treated as prescriptive, because each patient, practice setting, and pet parent context is different. That framing is important for frontline teams trying to use evidence without turning it into a rigid protocol. (nottingham.ac.uk)

Expert commentary around evidence-based veterinary medicine suggests this is also an education and workforce story. A recent overview of EBVM described BestBETs for Vets and RCVS Knowledge Summaries as practical tools that help clinicians find and apply evidence, while also noting ongoing efforts by the Evidence-Based Veterinary Medicine Association to expand journal clubs, roundtables, and other training opportunities. CEVM has also recently linked EBVM more explicitly with quality improvement, arguing in a 2025 research commentary that the two approaches work best together when teams are trying to improve patient care. (pmc.ncbi.nlm.nih.gov)

Why it matters: For veterinary professionals, the review is less about web traffic than about where the profession still needs support. If a free CAT database with fewer than 100 reviews can attract users from more than 190 countries, that suggests persistent demand for concise, practical evidence summaries. It also hints at gaps: species imbalance, topic concentration, the labor involved in keeping reviews current, and the ongoing need to train clinicians to interpret evidence in context rather than treat summaries as rules. The wider CAT literature adds another lesson: useful tools are often the ones built for specific decisions, settings, or patient groups, not just broader collections with more content. In busy primary care settings especially, CATs may remain one of the few realistic ways to bring published evidence into case discussions, protocol development, and team learning.

What to watch: The next question is whether findings from the 10-year review lead to a broader refresh of the CAT model, including more frequent updates, wider species coverage, and tighter integration with continuing education and quality improvement work. CEVM’s recent newsletter shows the group is still actively publishing BestBETs and positioning EBVM as part of practice improvement, which suggests this review may be used as a roadmap for how these resources evolve next. The broader CAT literature also points toward another possibility: more targeted, context-specific review formats, and eventually the use of better predictors or decision supports where the evidence base allows. (nottingham.ac.uk)

