BestBETs review highlights global reach and topic gaps
A new 10-year review of BestBETs for Vets puts numbers behind something many veterinary professionals already sense: evidence-based point-of-care resources are valued, but their coverage remains uneven. The study, published in Veterinary Record Open, analyzed the content of the BestBETs for Vets database and user interactions over a decade, finding 96 critically appraised topics (CATs) across 27 topic areas, with canine medicine and reproduction the most represented subjects. Users came from more than 190 countries, suggesting the resource’s reach now extends well beyond its UK academic base. (nottingham.ac.uk)
That matters because BestBETs for Vets was built to solve a familiar clinical problem: practitioners need usable evidence quickly, and they rarely have time to run and appraise a full literature search between appointments. The resource was launched by the University of Nottingham’s Centre for Evidence-based Veterinary Medicine, which says its mission is to help veterinary professionals bring scientific evidence, clinical expertise, and patient and caregiver circumstances together in decision-making. In 2024, the center reported that BestBETs for Vets had reached its 100th published CAT, describing that milestone as the product of 10 years of work by a small, dedicated team. (nottingham.ac.uk)
The broader background is that CATs occupy a middle ground in the evidence hierarchy. They’re faster and narrower than systematic reviews, but more structured than informal expert opinion. A 2020 review led by several of the same evidence-based veterinary medicine researchers described CATs as rapid evidence syntheses that can support clinical decision-making, undergraduate and postgraduate education, literature scoping, grant development, and policy work. That review also laid out the tradeoffs: CATs can be highly practical, but their narrow framing, rapid methods, and limited updating can introduce selection bias or leave summaries outdated as new studies emerge. (public-pages-files-2025.frontiersin.org)
That tension between practicality and precision is not unique to veterinary medicine. A recent scoping review in Drug Safety identified 18 case-level causality assessment tools (which share the CAT acronym but serve a different purpose) developed or updated between 2008 and 2023 for judging whether a medicine caused an adverse event in an individual patient. Most were algorithmic, with others using hybrid, probabilistic, or expert-judgment approaches, and several were built for specific contexts such as drug-induced liver injury, severe cutaneous adverse reactions, pediatrics, neonatal intensive care, or vaccine adverse events. The review’s takeaway was that these tools become more useful when they are tailored to the clinical setting and, increasingly, when they can incorporate newer signals such as biomarkers. While that paper addressed pharmacovigilance rather than veterinary point-of-care summaries, the parallel is instructive: across health care, rapid assessment tools are evolving toward specialized, context-dependent formats rather than one-size-fits-all evidence products.
BestBETs for Vets is designed to address some of these tradeoffs by using formal literature searches and standardized appraisal methods. The University of Nottingham says relevant studies are critically appraised by the BestBETs for Vets team, with the appraisal tailored to study design and used to produce a final “clinical bottom line.” That structure helps explain why the database has become a recognizable evidence resource in veterinary education and practice circles, even if total output remains relatively small compared with the breadth of questions seen in companion animal, equine, and food animal practice. (nottingham.ac.uk)
Industry reaction to this specific paper appears limited so far, but the surrounding evidence-based veterinary medicine community has consistently framed CATs as a practical bridge between research and practice. The Centre for Evidence-based Veterinary Medicine has said evidence-based approaches have become more prominent across the profession over the past decade, and external educational resources continue to point clinicians toward BestBETs for Vets as a secondary source for quick evidence summaries. That’s not a substitute for widespread commentary on the new paper itself, but it does suggest the review lands in a field that already sees CATs as useful infrastructure rather than an academic side project. (nottingham.ac.uk)
Why it matters: For veterinary professionals, this review is really about capacity. If a globally used CAT database contains fewer than 100 entries after 10 years, that’s a sign both of the labor involved in producing high-quality summaries and of the evidence gaps still facing clinicians. The concentration in canine and reproduction topics may reflect where contributors, literature volume, or user demand have been strongest, but it also implies thinner support in other species and disciplines. The broader literature on rapid assessment tools points in the same direction: in other areas of medicine, newer tools are often built for narrowly defined outcomes, patient populations, or care settings, because general tools may miss clinically important details. For veterinary medicine, that raises a practical question about whether future CAT development should not only expand in volume, but also become more targeted by species, discipline, or use case. For clinicians, educators, and practice leaders, the takeaway is that CAT databases can help answer focused questions quickly, but they work best as one part of a wider evidence toolkit that also includes systematic reviews, guidelines, and primary literature. (nottingham.ac.uk)
The review also lands at a time when veterinary teams are under pressure to make faster decisions, train newer staff efficiently, and communicate clearly with pet owners who increasingly expect recommendations to be evidence-based and transparent. In that setting, a well-maintained CAT database can support consistency in care and teaching. But the study’s findings also point to a familiar operational challenge: if the profession wants broader, more current, species-diverse evidence summaries, someone has to fund the editorial, appraisal, and updating work needed to produce them. That’s as much a workforce and infrastructure story as it is a publishing one. And if CATs are going to become more specialized over time, as rapid assessment tools have in areas like adverse-event causality assessment, that will likely increase, not reduce, the need for sustained editorial support. (nottingham.ac.uk)
What to watch: The next phase will likely center on whether BestBETs for Vets can accelerate output, diversify topic coverage, and act on the user feedback gathered in 2024 to shape future updates, while maintaining the appraisal standards that give CATs their value in the first place. Another question is whether the resource evolves toward more context-specific summaries for particular species, clinical problems, or practice settings, reflecting a broader trend in rapid evidence tools across health care. (nottingham.ac.uk)