BestBETs for Vets review shows global reach over 10 years

A new 10-year review of BestBETs for Vets offers a snapshot of how one of veterinary medicine’s long-running evidence-synthesis tools has been built and used. The paper, published in Veterinary Record Open, analyzed the content of the database and user interactions over a decade, reporting 96 critically appraised topics across 27 topic areas. Canine medicine and reproduction were the most represented subjects, and the site drew users from more than 190 countries, with most visits coming through direct access. (nottingham.ac.uk)

That matters because BestBETs for Vets was created to solve a familiar clinical problem: veterinarians often need a fast, structured answer to a narrow clinical question, but the underlying evidence may be sparse, mixed, or buried across the literature. The University of Nottingham’s Centre for Evidence-based Veterinary Medicine describes the platform as a repository of Best Evidence Topics, a form of critically appraised topic that uses standardized literature searching and appraisal methods to produce a practical “clinical bottom line.” The model sits between expert opinion and full systematic review, aiming to be rigorous enough to guide practice while still being usable in real time. (nottingham.ac.uk)

The broader background is that veterinary evidence-based practice has long faced structural constraints. Compared with human medicine, many veterinary questions have smaller studies, fewer randomized trials, and more species-specific variation. That has helped drive interest in concise evidence summaries such as CATs, knowledge summaries, and Best Evidence Topics. RCVS Knowledge’s Veterinary Evidence, for example, points clinicians to BestBETs for Vets alongside other evidence resources, underscoring how these tools now function as part of a wider evidence-access network rather than as standalone projects. (veterinaryevidence.org)

That wider appraisal landscape extends beyond veterinary medicine. A recent scoping review in Drug Safety identified 18 case-level causality assessment tools developed or updated between 2008 and 2023 for judging whether a medicine caused an adverse event in an individual patient. Most were algorithmic tools, with smaller numbers using hybrid, probabilistic, or global-introspection approaches. Some were built for specific outcomes such as drug-induced liver injury or severe cutaneous adverse reactions, while others were tailored to settings or populations including pediatrics, neonatal intensive care, and vaccine safety. Although that review focused on human pharmacovigilance rather than veterinary clinical decision support, it reinforces the same general point: evidence-appraisal tools are increasingly being shaped around particular use cases rather than treated as one-size-fits-all methods.

The review’s findings suggest that clinicians are using the database internationally, but they may also hint at how discovery happens in practice. If most users arrive directly, that can be read as a sign of repeat use and brand recognition among people who already know the platform. It may also imply a discoverability challenge: evidence resources can be highly valued by regular users without being fully embedded into broader clinical workflows, search habits, or teaching environments. That interpretation is an inference from the traffic pattern, not a stated conclusion of the source, but it fits the longstanding challenge of moving evidence tools from niche use into routine practice. (nottingham.ac.uk)

On the content side, the concentration in canine and reproduction topics is notable. It likely reflects both the clinical questions submitted and the shape of the available literature base. But it also raises a practical question for educators, editors, and evidence-synthesis teams: which areas are still underserved? If some species or practice settings are less represented, the issue may not be lack of demand alone, but also lack of publishable evidence, limited author capacity, or weak incentives for clinicians to produce CATs. Nottingham’s description of the BestBETs process emphasizes standardized critical appraisal tailored to study type, which is useful for quality control, but also resource-intensive. That tension between standardization and specificity shows up in other appraisal fields too: the pharmacovigilance scoping review found that newer causality tools often target particular syndromes or populations, and suggested future refinements may include biomarkers, especially for areas such as drug-induced liver injury, severe cutaneous adverse reactions, and some biologic therapies where immune-mediated toxicity is a concern.

Direct expert reaction to this specific paper appears limited in publicly accessible sources, but the surrounding commentary in the evidence-based veterinary medicine field has been consistent: short-form evidence syntheses are valuable because they help bridge the gap between research and practice, especially when systematic reviews are unavailable or too slow for frontline use. Veterinary Evidence describes CATs as concise appraisals that package the literature into a clinical bottom line, while other EBVM commentaries have framed them as a pragmatic way to support decisions and expose research gaps. The same practical logic appears in adjacent evidence-assessment fields. In the pharmacovigilance review, the authors noted that causality judgments may also need to account for factors beyond the event itself, including product quality, medication error, and adherence to risk-minimization measures—another reminder that structured tools are helpful, but only within the context they were designed for. (veterinaryevidence.org)

Why it matters: For veterinary professionals, this paper is really a report card on evidence-delivery infrastructure. It shows that there is durable, global demand for concise, practice-oriented evidence summaries, and it reinforces the role these tools can play in supporting conversations with pet parents when evidence is incomplete. It also has workforce implications. If CAT databases are being used regularly, they can support newer clinicians who are still building confidence in literature appraisal, and they can help educators identify which clinical questions repeatedly surface in practice. At the same time, uneven topic coverage is a reminder that evidence access is only as strong as the pipeline of contributors, editorial support, and underlying published studies. The broader appraisal literature adds a useful caution: different clinical questions may need different frameworks, and future tool development may move toward more specialized, context-aware approaches rather than broad, generic formats. (nottingham.ac.uk)

What to watch: The next step is whether this kind of retrospective analysis leads to operational changes, such as prioritizing underrepresented species and disciplines, improving referral traffic and search visibility, or linking CAT production more closely with veterinary curricula and continuing education. If that happens, the impact of BestBETs for Vets may shift from being a useful library to being more deeply embedded in how clinicians learn, search, and make decisions. It will also be worth watching whether veterinary evidence tools become more tailored to specific clinical contexts, echoing trends in other appraisal areas where specialized methods have been developed for particular adverse events, patient groups, and product classes. (veterinaryevidence.org)
