KT Canada

Theme 1: Knowledge distillation

1.1 Improving the Uptake of Reporting Guidelines by Health Care Journals (Moher et al.)

Investigators:
Moher, Grimshaw, Straus
Duration:
0-24 months
Targets:
Journal Editors
MRC Phase:
0-2
Collaborators:
Doug Altman (UK); Ken Schulz (USA); John Hoey (Queen's University, Canada)

Background

Reporting guidelines propose standards to improve the reporting of different types of studies published in health care journals. For example, the CONSORT statement was developed to improve the reporting of two-group parallel randomized controlled trials {Moher, 2001; Moher, 1996} and has been extended to other trial designs (Piaggio; Campbell). CONSORT has been endorsed by the International Committee of Medical Journal Editors and the World Association of Medical Editors, and has been translated into at least 10 languages. Use of CONSORT is associated with improved reporting {Plint, 2006}. Despite this, a 2003 survey of 167 high-impact medical journals found that only 36 (22%) referred to CONSORT in their advice to authors {Altman, 2005}. The EQUATOR Network {Anonymous, 2007} has identified more than 50 reporting guidelines for different study designs. Given the accumulation of reporting guidelines, early data indicating their positive impact on the quality of reporting, and their apparently limited uptake, increasing the use of reporting guidelines across health care journals remains a KT challenge.

Objectives

Using CONSORT as the exemplar, the proposed project will: i. identify barriers and facilitators to the adoption of reporting guidelines by health care journals; ii. design a KT strategy to improve the uptake of reporting guidelines; and iii. undertake a controlled before-and-after feasibility study to determine the potential benefits of the strategy.

Methods

We will conduct semi-structured telephone interviews with a judgment sample of 8-10 medical editors from major general, specialty and subspecialty journals to identify potential barriers and facilitators to uptake of CONSORT and other reporting guidelines. Data will be transcribed and thematically analysed to identify key barriers and facilitators. The results will be used to design a web-based survey of editors of health care journals that publish randomized trials. Eligible journals will be identified by applying the Cochrane search filter for randomized trials (Glanville) to all new PubMed entries during an index calendar month. Based on the experiences of Antman and Chan, we anticipate identifying approximately 300 eligible journals. Information about eligible journals, including the contact details of editors, the scope of the journal (major general, major specialty or subspecialty) and the Instructions to Authors, will be identified by a web search on the journal title. Editors will be invited by e-mail to respond to the survey, which will be conducted using Survey Monkey software {Anonymous, 2007}. We will search the Instructions to Authors of each journal to identify whether they recommend use of CONSORT or other reporting guidelines. Data will initially be analysed descriptively to determine the prevalence of potential barriers and facilitators; we will then explore whether there are any differences between different types of journals.

Based upon the results of the survey, we will develop a KT strategy for journals to increase the uptake and implementation of CONSORT. This will likely involve standardized methods of approaching journals to inform them about CONSORT, provision of tools to support use of the CONSORT Statement, and training courses for journal staff. We will assess the acceptability and usability of the KT strategy using formal usability studies with 6-8 subjects per iteration. We will conduct a pilot study of the strategy using a controlled before-and-after design (Grimshaw; Shadish, Cook and Campbell). We will identify 10 intervention and 10 control journals (journals that publish randomized controlled trials but do not endorse CONSORT). Our primary outcome will be change in journal editors' intention to endorse CONSORT, measured with standard instruments {Francis, 2004} via a Survey Monkey survey. Over the course of KT Canada we will also follow up these journals to determine whether CONSORT has been added to their Instructions to Authors. We will use the results of this pilot study to develop a definitive randomized trial of the KT strategy.
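As a hedged illustration of the journal-identification step (not part of the protocol), the sketch below runs an abbreviated RCT filter against one calendar month of new PubMed entries using Biopython's Entrez module and tallies the journal names. The filter string, date window and e-mail address are placeholders; in practice the full published Cochrane/Glanville filter would be substituted.

```python
from collections import Counter

from Bio import Entrez  # Biopython

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

# Abbreviated, illustrative RCT filter; substitute the full published filter.
RCT_FILTER = (
    "(randomized controlled trial[pt] OR controlled clinical trial[pt] "
    "OR randomized[tiab] OR placebo[tiab] OR randomly[tiab] OR trial[ti]) "
    "NOT (animals[mh] NOT humans[mh])"
)

# New PubMed entries during one index calendar month (dates are placeholders).
handle = Entrez.esearch(db="pubmed", term=RCT_FILTER, datetype="edat",
                        mindate="2008/01/01", maxdate="2008/01/31", retmax=10000)
pmids = Entrez.read(handle)["IdList"]
handle.close()

# Fetch record summaries in batches and tally the journals they appear in.
journals = Counter()
for start in range(0, len(pmids), 200):
    batch = pmids[start:start + 200]
    handle = Entrez.esummary(db="pubmed", id=",".join(batch))
    for doc in Entrez.read(handle):
        journals[doc["FullJournalName"]] += 1
    handle.close()

print(f"{len(journals)} candidate journals publishing randomized trials")
```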

Significance

This study will identify key barriers and facilitators to uptake of reporting guidelines and pilot a KT intervention that will inform KT approaches to improving the uptake of reporting guidelines.

1.2 Efficient Updating of Evidence-Based Resources (Haynes and Moher et al.)

Investigators:
Haynes, Moher, Wilczynski, McKibbon, Grimshaw, Shojania
Duration:
0-24 months
Targets:
Review Authors, Evidence-based resource authors/publishers
MRC Phase:
0-2
Collaborators:
Chantelle Garritty, Alex Tsertsvadze (CHEO/Ottawa)

There has been considerable global investment in knowledge syntheses {Grimshaw, 2006}, health technology assessments, clinical practice guidelines {Thomason, 2000} and other evidence-based resources over the last 15 years. However, these resources are valuable for decision makers only if they are kept up to date with advances in research evidence {Shojania, 2007}. Updating is resource- and time-intensive, and optimal models of updating have not yet been developed {Moher, 2007}. Updating policies across different agencies and groups vary considerably. We propose two complementary projects:

  1. Assessing barriers and facilitators to updating systematic reviews

    We observed that only 18% of all reviews (38% of Cochrane reviews and 2% of non-Cochrane reviews) published in an index month of Medline were updated versions of previously published reviews (Moher). There remain significant barriers to updating systematic reviews, and uncertainty about the roles and responsibilities of different stakeholder groups (including authors, funders, and journal editors) for updating.

    Objectives: i. to describe current policies and approaches to updating by key stakeholders; ii. to identify barriers and facilitators to updating of evidence-based resources; iii. to identify potential strategies to overcome barriers to updating.

    Design (objectives i and ii): We will conduct web-based surveys with key stakeholder groups (including authors, funders, policy-makers and journal editors) to identify their current policies and attitudes to updating systematic reviews, and barriers and facilitators to updating. We will conduct semi-structured telephone interviews with a convenience sample of 4-5 members of each stakeholder group to identify key issues to be addressed within the surveys. Data will be transcribed and thematically analysed to identify key barriers and facilitators. The results will be used to design a series of web-based surveys targeting the different stakeholder groups; the surveys will include common questions asked of all stakeholder groups and stakeholder-specific questions. Separate sampling frames will be developed for each stakeholder group from a cohort of systematic reviews indexed in Medline between July and December 2007, identified using the systematic review hedge {Anonymous, 2007}. Based on our previous experience, we anticipate identifying approximately 1250 individual reviews published in 200-250 journals (Moher). We will randomly sample one review from each identified journal (the modules of different Cochrane review groups will be considered individual publications; see the sampling sketch after this list) and identify the corresponding author, journal editor and a representative of the funder of each review. The sample will be invited by e-mail to respond to the survey, which will be conducted using Survey Monkey software {Anonymous, 2007}. Funders will only be asked to complete one survey even if they have funded multiple reviews. Data will be analysed descriptively to determine the prevalence of potential barriers and facilitators and whether these differ across stakeholder groups.

    Design (objective iii): We will convene a series of focus groups with key representatives of the different stakeholder groups to present the results of the survey and identify potential strategies to improve updating. For each stakeholder group we will convene a Canadian and an international focus group, linked to key Canadian and international meetings where feasible. Data will be transcribed and analysed inductively to identify potential strategies to overcome barriers to updating.

  2. Validating methodologies for efficient updating of evidence-based resources (reviews, guidelines, evidence-based textbooks)

    Traditional bibliographic approaches to updating evidence-based resources focus on comprehensive searches undertaken by individual research teams. The Health Information Research Unit at McMaster University has pioneered the development of efficient search strategies ("hedges") to retrieve clinically relevant and valid studies from MEDLINE {Anonymous, 2007} and other bibliographic databases (adopted as Clinical Queries by the National Library of Medicine, Ovid and other services), and since 2003 has operated a "health knowledge refinery" (HKR {Anonymous, 2007}) to identify the studies and reviews that are of both high quality and high relevance for evidence-based clinical practice across a wide range of medical, nursing and rehabilitation disciplines. The HKR represents a centralised resource that could significantly improve the efficiency of updating evidence-based resources if the HKR, supplemented by additional searches using hedges, proved more efficient than traditional bibliographic approaches.

    Objective: To determine whether the HKR (with or without supplementary searches using hedges) efficiently identifies key studies that would make a difference to an existing review, guideline or text for most health problems, compared with traditional bibliographic approaches.

    Design: We will identify a sample of evidence-based resources published since 2003, including systematic reviews with meta-analyses (identified from the index search of Medline, see above), clinical practice guidelines (identified from the National Guideline Clearinghouse) and evidence-based textbooks (for example UpToDate, Harrison's Practice). We will appraise the systematic reviews (with the Oxman and Guyatt index or AMSTAR) and the clinical practice guidelines (using the rigour of development domain of the AGREE instrument (Ref)) to identify high quality resources. We will identify cited references published after 2003 from each evidence-based resource. We will independently search the HKR for the same topic areas to identify potentially relevant studies, and independently perform Medline searches using hedge filters. We will compare the references cited by the evidence-based resources with the results of HKR alone and HKR plus additional hedge searches (see the comparison sketch after this list). If HKR +/- hedges miss studies included in the evidence-based resources, we will estimate the potential impact by recreating the analyses within the evidence-based resources excluding the studies missed by HKR +/- hedges, to observe whether this would likely lead to a change in recommendations.
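As referenced in project 1 above, the following is a minimal sketch of the per-journal sampling step, assuming the indexed reviews are available as (journal, review identifier) pairs; the function name and data layout are illustrative only.

```python
import random
from collections import defaultdict

def sample_one_review_per_journal(reviews, seed=2007):
    """Randomly keep one review per journal.

    `reviews` is an iterable of (journal, review_id) pairs; Cochrane review
    group modules would simply appear as distinct 'journal' values.
    """
    random.seed(seed)
    by_journal = defaultdict(list)
    for journal, review_id in reviews:
        by_journal[journal].append(review_id)
    return {journal: random.choice(ids) for journal, ids in by_journal.items()}

# Example: three reviews across two journals -> one sampled review per journal.
print(sample_one_review_per_journal(
    [("Journal A", "rev1"), ("Journal A", "rev2"), ("Journal B", "rev3")]
))
```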
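As referenced in project 2 above, the following is a hedged sketch of the comparison step, assuming the cited references and the HKR and hedge retrievals have each been reduced to sets of PubMed identifiers; the function and field names are illustrative only.

```python
def compare_retrieval(resource_refs, hkr_refs, hedge_refs):
    """Compare an evidence-based resource's post-2003 cited references with
    the records retrieved from the HKR alone and HKR plus hedge searches.

    All arguments are sets of PubMed identifiers; returns the missed studies
    and the proportion of cited references each approach recovered.
    """
    hkr_plus_hedges = hkr_refs | hedge_refs
    total = len(resource_refs) or 1  # avoid division by zero for empty input
    return {
        "missed_by_hkr": resource_refs - hkr_refs,
        "missed_by_hkr_plus_hedges": resource_refs - hkr_plus_hedges,
        "recall_hkr": len(resource_refs & hkr_refs) / total,
        "recall_hkr_plus_hedges": len(resource_refs & hkr_plus_hedges) / total,
    }

# Example with toy PubMed IDs.
print(compare_retrieval({"101", "102", "103"}, {"101"}, {"102"}))
```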

Significance

If HKR performs well, then it will provide an opportunity to narrow the gap between the amount of published original evidence and the world’s current production of systematic reviews, guidelines and evidence-based textbooks, as well as assisting with their periodic renewal. Prospectively, HKR could also be used to determine when a review needs to be updated.

1.3 Assessing and increasing the implementability of clinical practice guidelines (Bhattacharyya et al.)

Investigators:
Bhattacharyya, Zwarenstein, Laupacis, Grimshaw, Straus, Blumer
Community Partner:
Canadian Diabetes Association
Duration:
0-60 months
Targets:
Guideline Developers, HCP, Decision makers
MRC Phase:
0, 1

Background

Most guidelines are complex and contain large numbers of recommendations with varying evidential support, health impact and feasibility of implementation in practice.[1,2] Modifying guidelines so that they are more realistic in what they require of providers, and so that they focus attention on the interventions with the greatest benefit for patients, could make them more effective. This requires assessing and enhancing their implementability, a set of characteristics that predict the relative ease of implementation of guideline recommendations. Existing instruments such as the Appraisal of Guidelines Research and Evaluation (AGREE)[3] and the GuideLine Implementability Appraisal (GLIA)[4] focus on some of these elements, but no tool assesses the overall ease with which a guideline can be implemented. An instrument for assessing implementability could help guideline developers with the choice of recommendations within a guideline and help guideline users select guidelines that are easier to implement.

Objective

To develop and test an instrument to assess implementability of guidelines.

Methods

Phase I: Instrument development (3 years)

Item generation:

Individual items for an implementability assessment tool will be derived from a systematic review. A thematic analysis of the literature will be done to develop a conceptual framework for implementability which will be divided into relevant dimensions. This framework will be revised by an expert group of guideline developers and implementers using a modified Delphi process.

Validation:

The revised framework will be sent to a larger group of guideline development experts, implementers, and users, who will rate each item for relevance and clarity. The instrument will be applied to the 2008 Canadian Diabetes Association Clinical Practice Guideline[5] by several members of the expert group, and inter-rater reliability will be assessed. An "implementability-enhanced" version of the guideline will then be developed, and the two versions of the guideline will be rated by a second, independent expert group using the implementability instrument. This group will provide feedback on the usability of the instrument, and their scores will be compared to assess inter-rater reliability.
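The protocol does not prescribe an agreement statistic; as one hedged illustration, a weighted kappa could be computed over item-level ratings from two appraisers, as in the sketch below (scikit-learn; the ratings shown are hypothetical).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level scores (1-4) from two appraisers applying the
# implementability instrument to the same guideline.
rater_a = [3, 4, 2, 4, 1, 3, 2, 4, 3, 2]
rater_b = [3, 4, 3, 4, 1, 2, 2, 4, 3, 3]

# Quadratic-weighted kappa is a common choice for ordinal rating scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```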

Phase II: Instrument testing (2 years)

We will assess the applicability of the instrument for guidelines concerning different conditions and for multiple guidelines addressing the same condition using guidelines that have been reviewed by the Ontario Ministry of Health Guideline Advisory Committee. The implementability of the guidelines will be scored and compared across conditions. Preliminary content validation of the tool will be done. The underlying complexity of the treatment of diabetes will be addressed by comparing the implementability of guidelines from different countries, involving members of the Guidelines-International-Network Diabetes group. This sample will include a larger number of guidelines, allowing for factor analysis of the dimensions of the instrument.
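As a hedged illustration of the planned factor analysis, the sketch below fits a small latent-factor model to a hypothetical guideline-by-item score matrix using scikit-learn; the number of factors and the data are placeholders, not part of the protocol.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical score matrix: rows = guidelines, columns = instrument items
# (1-7 scores); replace with the real ratings once collected.
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(40, 12)).astype(float)

# Standardize the items and extract a small number of latent dimensions.
items = StandardScaler().fit_transform(scores)
fa = FactorAnalysis(n_components=3, random_state=0).fit(items)

# Loadings show which items cluster onto which candidate dimension.
print(np.round(fa.components_.T, 2))
```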

Significance

This study will produce a validated instrument for assessing the overall implementability of guidelines. The tool will not only provide information on the ease with which a guideline can be implemented, but will also identify areas for improvement in a sample of Canadian guidelines, providing information for their developers.

1.4 Improving the usability of systematic reviews: Development study (Straus et al.)

Investigators:
Straus, McKibbon, Perrier, Shepperd, Chignell, Hebert, Armson, Khandwala, Grimshaw
Duration:
0-24 months
Targets:
HCP, publishers, researchers
MRC Phase:
0-2
Community Partner:
CMAJ

Background

While much attention has been paid to enhancing the quality of systematic reviews, relatively little attention has been paid to the format for presentation of the review. Because reporting tends to focus on methodological rigour more than clinical context, reviews often do not provide crucial information for clinicians.[1,2] There is little data available to show the impact of the presentation of a systematic review on clinicians' understanding of the evidence or their ability to apply it to patients.[3]

Objectives

  1. To develop and test different formats for systematic reviews including case-based and evidence-expertise versions through iterative usability testing involving generalist clinicians; and,
  2. To determine the feasibility of completing an internet-based trial of the impact of these formats on the ability of generalist physicians to understand and apply evidence to patients.

Methods

Phase 1: (Year 1)

The prototype for these reviews will be tested by 5 to 8 generalist clinicians from the Calgary Health Region, who will review the format and a relevant clinical scenario during individual testing sessions. They will be asked if and how they would apply the evidence from the systematic review to the scenario. Sessions will be facilitated by a research associate with human factors expertise using 'think aloud' methodology.[4] These sessions will be audio-recorded and the results transcribed and analysed.[5] Studies have concluded that only a modest number of subjects is required for usability testing (e.g. 8-9 subjects), and that 4 to 5 subjects are often sufficient to identify 80% of the usability problems.[6] Cycles of design, development and testing will be completed until no further major revisions are suggested.

Selection of three systematic reviews will be completed by a panel of 10 generalist clinicians, including those with an interest in evidence-based health care. Systematic reviews of interventions published in the CMAJ, Lancet, BMJ, Annals of Internal Medicine, or the Cochrane Library in 2007 will be identified, and the clinicians will be asked to rate the articles they believe would be important to the practising generalist clinician using a Likert scale from 1 to 7, where 1 indicates the article is definitely not relevant and 7 indicates it is directly and highly relevant.[7]
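As a hedged illustration of how the panel ratings could be aggregated to select the three reviews, the sketch below ranks hypothetical candidate reviews by their mean 1-7 relevance rating; the review names and ratings are invented for illustration.

```python
from statistics import mean

# Hypothetical 1-7 relevance ratings from the 10-clinician panel,
# keyed by candidate systematic review (names invented).
ratings = {
    "Review A": [6, 7, 5, 6, 7, 6, 5, 7, 6, 6],
    "Review B": [4, 3, 5, 4, 4, 3, 5, 4, 4, 3],
    "Review C": [7, 6, 7, 7, 6, 7, 6, 7, 7, 6],
    "Review D": [5, 5, 6, 5, 4, 5, 6, 5, 5, 4],
    "Review E": [3, 2, 4, 3, 3, 2, 4, 3, 3, 2],
}

# Rank candidates by mean panel rating and keep the three highest rated.
top_three = sorted(ratings, key=lambda r: mean(ratings[r]), reverse=True)[:3]
print(top_three)  # ['Review C', 'Review A', 'Review D']
```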

Phase 2: (Year 2)

We will use the CMAJ website to conduct a pilot study to determine the feasibility of an internet-based study with the CMA membership. Feasibility will be established by assessing the time to recruit, developing a sample size estimate for a randomized trial (including an estimate of loss to follow-up), testing the online allocation and data collection procedures, and assessing the outcome measures. We will select one systematic review from the 3 used in the usability testing and prepare 2 formats. Participants will be alternately allocated to one of the 2 versions, each of which will include a relevant clinical scenario and 3 questions to answer.

Outcomes:

The primary outcome is the proportion of clinicians who appropriately apply the evidence from each systematic review format to the patient in the scenario, as measured by agreement with the expert panel's recommendation. Expert answers to these questions will be provided by a panel of 3 clinicians with expertise in evidence-based medicine; their answers will define appropriate application of the evidence.

Sample size:

It is anticipated that 60% of clinicians will appropriately apply evidence from a traditional systematic review to an individual patient and that an increase of 15% would be important. Setting the α error at 0.05 (two-sided) and the β error at 0.15, 56 physicians will be required in each group. Allowing for dropouts, 70 physicians will be required in each arm.
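As a hedged illustration, the sketch below shows how a two-proportion sample size calculation with these inputs is commonly performed (here with statsmodels); different approximations and corrections give somewhat different numbers, so it is illustrative rather than a reproduction of the figures above.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumptions stated above: 60% of clinicians apply the evidence appropriately
# with the traditional format, and a 15 percentage-point increase matters.
p_control, p_new_format = 0.60, 0.75

# Cohen's h effect size for two independent proportions.
h = proportion_effectsize(p_new_format, p_control)

# Two-sided alpha = 0.05 and power = 0.85 (beta = 0.15).
n_per_group = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.85, alternative="two-sided"
)
print(round(n_per_group), "participants per group before allowing for dropouts")
```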

Significance

The formats will be evaluated subsequently in a randomized trial that will look at the impact of the presentation of evidence from systematic reviews on clinicians’ abilities to understand and apply it to individual patients. It is anticipated that the results of the trial will be implemented by the CMAJ upon completion of the study. We will also explore the feasibility of replicating this study with nurses, under the leadership of Benzies (Calgary).

References

  1. Dawes M, Sampson U. Knowledge management in clinical practice: a systematic review of information seeking behaviour in physicians. Int J Med Inform 2003;71:9-15.
  2. Glasziou P, Shepperd S. Ability to apply evidence from systematic reviews. Abstract presented, Society for Academic Primary Care, July 5, 2007, UK.
  3. McDonald J, Mahon J, Zarnke K et al. A randomised survey of the preference of gastroenterologists for a Cochrane review versus a traditional narrative review. Can J Gastroenterol 2002;16:17-21.
  4. Kushniruk A, Patel V, Fleiszer D. Analysis of medical decision making: a cognitive perspective on medical informatics. Proc Annu Symp Comput Appl Med Care 1995;193-7.
  5. Mays N, Pope C (eds). Qualitative research in health care. London: BMJ Books, 1999.
  6. Lewis JR. Sample sizes for usability studies: additional considerations. Hum Factors 1994;36:368-78.
  7. www.bmjupdates.com, accessed Sept 28, 2007