BestBETs for Vets review shows global demand for CATs

A new 10-year review of BestBETs for Vets offers a snapshot of how one of veterinary medicine’s better-known critically appraised topic (CAT) databases has been used, and where its gaps remain. The study analyzed 96 CATs across 27 topic areas and paired that content review with website analytics, finding the resource reached users in more than 190 countries. Canine medicine and reproduction were the most represented subjects, and most traffic came through direct access, pointing to habitual use by people who already know the platform. (veterinaryevidence.org)

That matters because BestBETs for Vets was built to solve a familiar clinical problem: busy teams often need an evidence summary fast, not a long-form review weeks later. The database was launched by the University of Nottingham’s Centre for Evidence-based Veterinary Medicine as a free, open-access resource modeled on the broader BestBETs approach, which was originally developed in human medicine. In veterinary medicine, CATs have been positioned as a practical bridge between primary research and clinical decision-making, especially for focused case questions that don’t yet have guideline-level answers. (exchange.nottingham.ac.uk)

The broader literature helps explain why this review is relevant now. A 2020 overview in Frontiers in Veterinary Science described CATs as useful not only in practice, but also in undergraduate and post-registration education, research gap identification, grant preparation, and policy support. That paper also stressed the tradeoff built into the format: CATs are intentionally rapid and narrowly framed, which makes them practical, but also more vulnerable to missing evidence and becoming obsolete if they aren’t updated. (pmc.ncbi.nlm.nih.gov)

A separate recent scoping review from human pharmacovigilance adds some useful context on how structured assessment tools evolve over time. Reviewing tools developed or updated between 2008 and 2023, the authors identified 18 causality assessment tools spanning expert judgment, algorithmic, hybrid, and probabilistic approaches. Most were algorithm-based, and several were tailored to specific outcomes or settings, including drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, and vaccine adverse events. The review’s main takeaway was not that one model had won out, but that tool design often needs to match the clinical context, and may increasingly need to incorporate things like biomarkers, medication-error considerations, product quality, or adherence to risk-minimization measures. While that paper focused on adverse drug reaction assessment rather than veterinary BestBETs specifically, it reinforces a broader point relevant here: structured evidence tools are valuable because they standardize thinking, but they also work best when they are updated and adapted for the questions users actually face.

BestBETs for Vets sits within a wider evidence-based veterinary medicine ecosystem that also includes systematic review databases and journal-based knowledge summaries. RCVS Knowledge’s Veterinary Evidence describes BestBETs for Vets as one route to concise evidence summaries, alongside other CAT-style resources and systematic review repositories. The University of Nottingham’s own materials say the BestBETs process relies on structured literature searching and critical appraisal to produce a clinical bottom line, reinforcing that the resource is designed for applied use rather than exhaustive synthesis. (veterinaryevidence.org)

Little independent commentary has appeared on this new 10-year review specifically, but the surrounding expert literature is fairly consistent about the role CATs play. Brennan and colleagues wrote that CATs are “fundamental” to evidence-based veterinary medicine despite their limitations, particularly where rapid answers are needed and higher-order syntheses don’t exist. Older commentary on evidence-based veterinary medicine has made a similar point: CATs are less rigorous than systematic reviews, but far more achievable for clinicians and trainees who need a usable summary in real time. The newer pharmacovigilance review lands in a similar place from a different angle: even highly structured tools have strengths and weaknesses tied to purpose, population, and setting. (pmc.ncbi.nlm.nih.gov)

Why it matters: For veterinary professionals, this is really a workforce and education story. A resource that attracts repeat global use over a decade suggests ongoing demand for practical evidence translation tools, especially in clinics where time for literature review is scarce. It also highlights where the profession may still be thin on support: if only 96 CATs were published over 10 years, coverage is necessarily selective, and the concentration in canine and reproduction topics may leave gaps for other species, settings, and question types. For educators, the review reinforces CATs’ value as a teachable format for building literature-searching and critical appraisal skills. For clinicians, it’s a reminder that CATs can support decisions, but shouldn’t be treated as a substitute for updated systematic reviews, guidelines, or patient-specific judgment. And the wider CAT literature suggests another practical lesson: structured tools are often strongest when they are built for a defined use case, rather than assumed to transfer cleanly across every species, clinical problem, or care setting. (veterinaryevidence.org)

The paper also lands at a time when veterinary teams are under pressure to make decisions efficiently while pet parents increasingly arrive with online information in hand. In that environment, trusted, openly accessible summaries can help teams have better-informed conversations and anchor recommendations in published evidence. But the value of that model depends on maintenance. A CAT database is only as useful as its recency, topic breadth, and discoverability. That makes the analytics piece important: direct traffic suggests loyalty, but it may also imply the resource depends heavily on existing awareness rather than broad search visibility. That last point is an inference based on the reported traffic pattern, not a direct claim from the study. (veterinaryevidence.org)

What to watch: The next phase will be whether BestBETs for Vets continues to scale after reaching its 100th published BET in 2024, whether older topics are updated systematically, and whether future development broadens species and subject coverage to better match the diversity of questions seen in practice. More broadly, the evolution of CAT-style tools in other fields suggests future usefulness may depend not just on adding more entries, but on refining formats for specific contexts and incorporating newer forms of evidence where appropriate. (nottingham.ac.uk)

