Systematic reviews and other evidence synthesis projects

What is a Rapid Review?

Rapid reviews seek to balance time-sensitive information needs with methodological rigor. They are typically conducted with substantial stakeholder input, and produce a review geared toward relevance to decision-makers in situations that require faster action than is possible with a full systematic review.

They are faster than a systematic review, but streamlining the systematic review process may introduce bias in ways that are still being researched. It is therefore critical that the methods be transparent and reproducible.

A rapid review is not necessarily easier than a systematic review.

Guidance

Similarities and differences summary

Based on the Cochrane Rapid Reviews Methods Group guidance on conducting rapid reviews

Some similarities between systematic reviews and rapid reviews:

  • A protocol should still be published. PROSPERO accepts protocols for rapid reviews; the Open Science Framework is also an option. Any updates to the protocol should be tracked at the registry location.
  • Use of the standard PRISMA statement is recommended until a rapid-review-specific version is available.
  • The process includes a Risk of Bias assessment.
  • The methods used must be transparent and reproducible.

Some differences between systematic reviews and rapid reviews:

  • Since RRs are often conducted to inform policy, key stakeholders are involved in setting and refining the review question, eligibility criteria, and the outcomes of interest.
  • Eligibility criteria may be more restrictive, including limiting inclusion to higher-quality study designs.
  • Searching
    • Database searching of PubMed/MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials (CENTRAL), plus optionally up to two specialized databases
    • Peer review of search strategy is recommended
    • Extant systematic or scoping reviews are often included
    • Limited grey lit/supplemental searching
  • Screening
    • Use of review support software is recommended, and may further speed the process.
    • Use a standardized screening form during both phases
    • Title/abstract
      • Run a pilot exercise using the same 30-50 abstracts to get everyone on the team calibrated
      • Then dual-screen at least 20% of abstracts, with conflict resolution
      • Then single-screen the remaining abstracts, with all excluded abstracts checked by a second reviewer and conflicts resolved
    • Full text
      • Run a pilot exercise using the same 5-10 full-text articles to get the team calibrated
      • Single-screen all full-text articles, with all excluded articles checked by a second reviewer and conflicts resolved
  • Data extraction
    • Single reviewer rather than dual, with a second reviewer checking the extracted data for correctness and completeness
    • Data extraction limited to a minimal set of required data items
    • Can use data from existing SRs to reduce time spent on data extraction
  • Bias assessment
    • Single reviewer rather than dual, with verification by second reviewer
    • Limit risk of bias ratings to the most important outcomes, with a focus on those most important for decision-making
  • Synthesis
    • Narrative synthesis organized around the PICO question
    • Meta-analysis only if studies are similar enough to pool
  • Grading of evidence is done by a single reviewer rather than dual, with verification by a second reviewer
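As a rough illustration of the title/abstract screening allocation described above, the sketch below estimates reviewer workload. The pilot size and 20% dual-screening fraction come from the guidance summarized here; the total record count and the assumption that the pilot set is separate from the main pool are hypothetical for this example.

```python
def screening_workload(n_abstracts, pilot=40, dual_fraction=0.20):
    """Estimate minimum reviewer reads for the rapid-review screening
    scheme: a shared pilot set read by both reviewers, dual screening
    of a fraction of records, then single screening of the remainder.

    Second-reviewer checks of excluded records add further reads that
    depend on the exclusion rate, so this is a lower bound.
    """
    dual = round(n_abstracts * dual_fraction)   # records read by two reviewers
    single = n_abstracts - dual                 # records read by one reviewer
    minimum_reads = 2 * pilot + 2 * dual + single
    return {"pilot": pilot, "dual": dual, "single": single,
            "minimum_reads": minimum_reads}

print(screening_workload(2000))
```

For 2,000 abstracts, dual-screening 20% means 400 records are read twice and 1,600 once, so the main saving over a fully dual-screened review comes from that single-screened remainder.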

Background reading