BestBETs for Vets review shows global reach, uneven topic spread


A 10-year review of BestBETs for Vets suggests the veterinary profession is still building the infrastructure it needs to make evidence usable at the point of care. The paper analyzed the content of the critically appraised topic (CAT) database and how clinicians interacted with it over a decade, finding 96 published CATs across 27 topic areas, with canine medicine and reproduction the most represented. The authors also reported global reach, with users from more than 190 countries accessing the site, underlining that even a niche veterinary evidence resource can attract an international audience. (ovid.com)

That matters because BestBETs for Vets was created to solve a familiar practice problem: clinicians often need an answer faster than a full systematic review can provide. The University of Nottingham’s Centre for Evidence-based Veterinary Medicine describes BestBETs as a quick, achievable way to bring evidence into practice by starting with a tightly framed clinical question, searching the literature, critically appraising relevant papers, and producing a clinical bottom line. The resource launched in September 2013, drawing on the broader BestBETs model first used in human emergency medicine. (nottingham.ac.uk)

The broader context is that evidence-based veterinary medicine has long faced structural barriers. Nottingham notes that practitioners may struggle to access full-text literature outside university settings, which makes open-access summaries and secondary evidence products especially useful. RCVS Knowledge likewise positions CATs as part of the profession’s evidence toolkit, alongside systematic reviews and knowledge summaries, while CEVM says BestBETs for Vets is used not just in practice but also in student training and practice-based learning. (nottingham.ac.uk)

The review’s findings appear consistent with how the platform has been positioned over time. CEVM says BETs are designed to inform, not dictate, care, and can support practice meetings, journal clubs, and clinical guideline development. It also acknowledges methodological limits: BestBETs searches use systematic and repeatable methods, but only two literature databases are searched and unpublished evidence is not included, meaning some relevant studies may be missed. That’s an important caveat for clinicians who may be tempted to treat a CAT as definitive rather than as a structured summary of the best readily retrievable evidence. (nottingham.ac.uk)

That caution also fits with a wider pattern seen in other causality and appraisal tools. A recent scoping review in Drug Safety identified 18 case-level causality assessment tools developed or updated between 2008 and 2023, spanning global introspection, algorithmic, hybrid, and probabilistic approaches. Most were algorithmic, and many were built for specific outcomes or settings rather than broad general use, including tools for drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, and vaccine adverse events. The review concluded that no single format solves every use case and suggested future improvements may include biomarkers, especially in areas such as biologics and immune-mediated adverse events, as well as closer attention to drug quality, medication error, and adherence to risk-minimization measures. While that paper focused on pharmacovigilance rather than veterinary CAT databases, it reinforces the same practical point: structured tools are useful, but context matters, and design choices shape what questions they can answer well.

Industry and professional reaction around these resources has generally framed them as a practical bridge between research and day-to-day care. Veterinary Evidence describes BestBETs for Vets as a freely accessible database for vets in practice, and BVA previously highlighted the development of BestBETs and related evidence resources as part of a wider push toward evidence-based veterinary medicine. In a Webinar Vet session, CEVM representatives said the platform was built after studying what worked in human medicine and adapting it for veterinary use, while also noting that topic requests and online discussion have helped shape engagement. (veterinaryevidence.org)

Why it matters: For veterinary professionals, this is less a story about website traffic than about clinical workflow. When teams are under pressure, the difference between “evidence exists” and “evidence is usable” is huge. A database of CATs can help clinicians quickly frame a question, explain uncertainty to pet owners, and avoid overconfidence when the literature is thin. At the same time, the uneven spread of topics in the 10-year review suggests the profession still needs more coverage across species, disciplines, and common primary care questions. If canine and reproduction topics dominate, other everyday decision areas may still lack concise evidence summaries, which could reinforce variation in care; that inference follows from the reported topic distribution and from CEVM’s own invitation for clinicians to submit unanswered questions. The broader causality-tool literature points in the same direction: tools tend to work best when they are matched to the clinical context they were designed for, not treated as one-size-fits-all solutions. (nottingham.ac.uk)

The global usage signal is also notable. Earlier CEVM reporting said the BestBETs for Vets website had visitors from 173 countries across six continents during an earlier measurement period, with the UK, the US, and Canada among the largest user bases. Combined with the new paper’s report of users from more than 190 countries, that suggests sustained international reach as awareness of veterinary evidence resources has grown. For a workforce and education audience, that may strengthen the case for investing in open, practical evidence tools that can support clinicians regardless of geography or institutional affiliation. (nottingham.ac.uk)

What to watch: The next question is whether this review leads to active commissioning of CATs in underrepresented topic areas, closer integration with teaching and practice protocols, or broader collaboration with organizations such as RCVS Knowledge that are already curating evidence-based veterinary medicine resources. It will also be worth watching whether future veterinary evidence tools become more tailored to specific clinical contexts, as seen in newer human causality-assessment tools, and whether emerging methods such as biomarker-supported assessment influence how adverse-event and treatment-decision evidence is summarized. (knowledge.rcvs.org.uk)
