10-year review highlights global use of BestBETs for Vets

A new 10-year review of BestBETs for Vets points to the growing, if still specialized, role of evidence summaries in veterinary medicine. The study, published in Veterinary Record Open, analyzed the content of the BestBETs for Vets database and how users interacted with it over a decade, finding 96 critically appraised topics (CATs) across 27 subject areas, with canine medicine and reproduction the most represented. The database drew users from more than 190 countries, with most traffic arriving directly, suggesting repeat or intentional use rather than casual discovery. (ovid.com)

That matters because BestBETs for Vets was built to solve a familiar problem in practice: clinicians often have focused clinical questions, but limited time to search, appraise, and synthesize primary literature on the fly. The University of Nottingham’s Centre for Evidence-based Veterinary Medicine launched the resource in 2014 as a veterinary adaptation of the human BestBETs model, aiming to provide structured, practical reviews with a “clinical bottom line” that can support decision-making without pretending to replace clinical judgment. In 2024, the group said the site had reached its 100th published BestBET, marking a decade of steady development by a relatively small team. (exchange.nottingham.ac.uk)

The broader context is a profession still working to embed evidence-based veterinary medicine into everyday care. A widely cited review on CATs in veterinary medicine describes these summaries as a way to close the gap between research and clinical decision-making, while also serving undergraduate teaching, post-registration education, journal clubs, policy work, and identification of research gaps. RCVS Knowledge similarly frames evidence-based veterinary medicine as the integration of clinical expertise with the best available scientific evidence, tailored to the patient and the pet parent's circumstances.

It is also worth noting that "CAT" can mean something different in adjacent evidence fields: a recent scoping review in Drug Safety looked at case-level causality assessment tools used to judge whether a medicine caused an adverse event in an individual patient, identifying 18 tools developed or updated between 2008 and 2023. Most were algorithmic, with others using hybrid, probabilistic, or global-introspection approaches, and several were designed for specific contexts such as drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, and vaccine adverse events following immunization. (pmc.ncbi.nlm.nih.gov)

BestBETs sits within a wider ecosystem of secondary evidence tools, including RCVS Knowledge Summaries and Banfield’s evidence outputs. Training materials from RCVS Knowledge note that these kinds of syntheses are especially valuable because the volume of published veterinary literature makes it unrealistic for many busy clinicians to rely mainly on primary papers. Those materials also point to BestBETs for Vets as one of the key freely accessible evidence-summary resources available to the profession. The pharmacovigilance review offers a useful parallel: as evidence tools mature, they often become more tailored to particular decisions and settings rather than trying to serve every use case with one generic framework. (knowledge.rcvs.org.uk)

While the abstracted findings from the new paper are descriptive rather than practice-changing, they offer useful signals about demand. The concentration of CATs in canine and reproduction topics may reflect where clinical questions have been most actively submitted or where evidence synthesis capacity has been focused. The international reach suggests that even a UK-based resource can fill global needs when it is open access and designed around practical clinical questions. The fact that users often accessed the site directly may also indicate habitual use by clinicians, educators, or students who already know the platform. That last point is an inference from the traffic pattern, not a stated conclusion of the sources. A similar lesson emerges from the causality-assessment literature: tool design tends to follow the realities of the problem being solved, which is why newer adverse-event causality tools are often disease-, population-, or setting-specific rather than universally applicable. (nottingham.ac.uk)

Expert and institutional commentary around CATs has been consistent: they’re useful precisely because they are pragmatic. Nottingham’s guidance says BestBETs are intended to help vets make informed decisions, but are not prescriptive rules, and can be used in best-practice discussions, journal clubs, and guideline development. The CAT review literature makes a similar point, arguing that these summaries are particularly well suited to translating evidence into clinical settings where time is limited and the evidence base may be incomplete. In pharmacovigilance, the same pragmatism shows up in a different form: the recent scoping review concluded that future causality tools may need to incorporate biomarkers for some outcomes, especially drug-induced liver injury, severe cutaneous adverse reactions, and certain biologics such as immune checkpoint inhibitors, while also accounting for drug quality, medication error, and adherence to risk-minimization measures. (nottingham.ac.uk)

Why it matters: For veterinary professionals, this review reinforces that evidence infrastructure deserves attention alongside clinical research itself. A database of roughly 100 CATs will not solve the profession's evidence gaps, but it can reduce friction between a clinical question and a usable answer. In workforce and education terms, that has implications for how practices train early-career veterinarians, how teams standardize decisions, and how clinicians discuss options with pet parents when evidence is thin or evolving. It also highlights a persistent challenge: if certain species, disciplines, or question types dominate CAT production, others may remain underserved unless institutions invest in broader topic development. The wider CAT landscape adds another reminder: structured tools are most useful when they are fit for purpose, transparent about their limits, and updated as the science changes. (pmc.ncbi.nlm.nih.gov)

What to watch: The next phase will likely be less about whether CAT databases are useful and more about whether they can scale, stay current, and integrate with the places clinicians already work, learn, and search for answers. Watch for follow-up work on topic gaps, update frequency, contributor models, and whether evidence-summary tools become more formally embedded in veterinary curricula, continuing education, and practice protocols. In parallel, watch for more specialized causality-assessment methods in drug safety that move beyond generic checklists toward context-specific tools and, potentially, biomarker-informed approaches. (nottingham.ac.uk)

