SWiM Frequently Asked Questions*

*These FAQs are based on questions asked at the SWiM webinar.

General:

Yes and no!  The term “narrative synthesis” is often used when authors do not use meta-analysis to synthesise their data, because the synthesis relies heavily on narrative or text.  However, there is no clear definition of what narrative synthesis is, or of what it involves as a method, and the term can therefore be confusing and opaque.  Indeed, we conducted work which found that the methods of narrative synthesis are rarely reported (Campbell et al 2019); this is contrary to the scientific principles of transparency that underpin the systematic review approach.

The SWiM work was initiated to address this lack of transparency in narrative synthesis, and was originally called “Improving the CONduct and reporting of Narrative Synthesis of Quantitative data” (ICONS-Quant).  Over the duration of the project we tried to develop a clear definition of narrative synthesis, but after many lengthy discussions this seemed to be an impossible task.  This is in part because the term narrative synthesis may be applicable in many, or even most, reviews: in most reviews the narrative or textual description of the studies and the synthesis findings plays a key role in presenting the interpretation of the included data, and can be a valuable way to explore and explain relationships across the included studies, in particular when the review includes diverse sources of data.  In addition, the term narrative synthesis is used for a wide range of methods and approaches, and can include synthesis of different data types, both quantitative and qualitative.

As indicated above, the focus of this work was to promote transparency in the synthesis of quantitative intervention effect data when meta-analysis is not used; this is only one of the situations in which the term narrative synthesis has been used.  The term SWiM therefore relates specifically to improving the transparency of, and pointing authors to conduct guidance on, the synthesis of quantitative intervention effect data when meta-analysis of standardised effect sizes is not used.  This is only one aspect of what has been implied within the wide banner of narrative synthesis.


Have a look at this opinion piece on SWiM for some further thoughts.

No.  The acronym SWiM is used for brevity in place of the lengthy phrase “Synthesis Without Meta-analysis”.  SWiM is an umbrella term which describes a broad approach, and may include one or more of a range of different synthesis methods, e.g. synthesis based on effect direction, or on summary statistics such as the median.  Currently, we are using SWiM to relate specifically to synthesis of quantitative intervention effect data which does not use meta-analysis of standardised effect sizes.  But this is just the start, and the terminology may evolve further; for example, there may be scope for extensions to SWiM, and there is definitely potential for more discussion on the use of terminology such as “narrative synthesis”.

There are many possible explanations for the poor reporting in reviews where the synthesis does not use meta-analysis of standardised effect sizes.  One reason may be the limited guidance on how to conduct and report synthesis of effect data when meta-analysis is not used, and the limited guidance on appropriate management of heterogeneity in study characteristics, for example studies which report similar, but not identical, outcomes at the same timepoint.  Another, related reason may be that many authors expect and hope that they will conduct a meta-analysis; indeed, some authors think that a systematic review needs to include a meta-analysis.  This is not the case: around 16% of Cochrane reviews do not include any meta-analysis, and only half of Cochrane reviews rely exclusively on meta-analysis to synthesise effect data.

For more guidance on managing heterogeneous data in a systematic review, have a look at Cochrane Handbook Chapter 9, “Preparing for Synthesis”.

The term “vote counting” has been associated with bad practice in systematic reviews.  The main reasons vote counting has been considered bad practice are that it often relies on inappropriate interpretation of statistical significance, ignores the risk of bias of included studies, and/or the approach used is not reported transparently.  However, many reviews which do not conduct meta-analysis of standardised effect sizes resort to vote counting, even though they may avoid using the term because of its negative associations.  The new Cochrane Handbook chapter “Synthesis using other methods” outlines the different approaches to vote counting, and cautions against approaches which rely on counting studies on the basis of statistical significance.  Currently the most common form of vote counting uses effect direction as the standardised metric.  While vote counting is not as useful or robust a method as meta-analysis, a reliable method of vote counting can allow authors to make the best of the available evidence, and to avoid concluding that little or nothing is known about the effect of an intervention when there are data on effect direction.
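
As a rough, hypothetical illustration of vote counting based on effect direction (not an example taken from the SWiM guideline itself), the following Python sketch counts how many studies report an effect favouring the intervention and applies a two-sided sign (binomial) test; the study names and effect directions are invented.

from scipy.stats import binomtest  # requires scipy >= 1.7

# +1 = effect direction favours the intervention, -1 = favours the comparator
# (studies reporting exactly no difference are conventionally left out of the test)
effect_directions = {
    "Study A": +1,
    "Study B": +1,
    "Study C": -1,
    "Study D": +1,
    "Study E": +1,
}

favourable = sum(1 for d in effect_directions.values() if d > 0)
total = sum(1 for d in effect_directions.values() if d != 0)

# Probability of seeing this split if a favourable direction were as likely as not
result = binomtest(favourable, total, p=0.5)
print(f"{favourable}/{total} studies favour the intervention, p = {result.pvalue:.3f}")

Note that the resulting p-value only addresses whether there is evidence of an effect in a consistent direction; it says nothing about the size of the effect.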

Conduct guidance for and implementation of SWiM:

The SWiM reporting guideline does not extend to providing guidance on the conduct of alternative synthesis methods.  However, within the SWiM reporting guideline we refer readers to key sources on conduct.  For guidance on alternative methods of synthesising intervention effects when meta-analysis is not used, have a look at the recently published Cochrane Handbook Chapter 12, “Synthesis using other methods”.  While many of the methods suggested in this chapter are not new, the clear description of how to implement these alternative approaches represents a significant improvement in the clarity of methods when meta-analysis of effect sizes is not used.  This is, therefore, a relatively new topic and there is potential for much discussion and development.

Yes. We have provided example text in the SWiM guidance reported in the BMJ paper. For each item there is a mix of clinical and non-clinical topics. It was difficult to find good examples, so some of the examples were developed or adapted by the SWiM team based on a published review.

No.  There is no specific guidance in the SWiM reporting guideline about what should be reported in the abstract, plain language summary, or Summary of Findings table(s).  When reporting a SWiM synthesis in these summaries it is important to state clearly what standardised metric was used in the synthesis, and to frame the question being addressed in relation to this.  For example, if the standardised metric used is effect direction, then the question addressed by the review is whether the evidence indicates an overall positive or negative impact on the outcome of interest, rather than the size of the effect.

If only one study reports a particular outcome then it will not be possible to synthesise the outcome data; it will only be possible to summarise the findings of that study.  In some reviews, studies are separated because one or more elements of the PICOC (Population, Intervention, Comparison, Outcome, Context) differ.  This can mean that no synthesis is conducted.  In these circumstances, it may be useful to consider grouping studies on a broader characteristic, to allow synthesis of similar, even if not identical, studies.  For example, different measures of mental health may be grouped together to allow synthesis under a mental health domain; a hypothetical sketch of this kind of grouping is shown below.  Decisions about appropriate grouping or “lumping” of data should be informed by the volume of available data, what is conceptually appropriate, and what will be useful to end users of the synthesis.
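
As a purely illustrative sketch (the measure names, domain labels and mapping below are invented, not taken from the SWiM guideline), grouping outcome measures under a broader domain can turn several single-study outcomes into a synthesisable set:

from collections import Counter

# Hypothetical mapping from specific outcome measures to broader outcome domains
outcome_domains = {
    "GHQ-12": "mental health",
    "WEMWBS": "mental health",
    "PHQ-9": "mental health",
    "SF-36 physical": "physical function",
}

# Hypothetical included studies, each reporting a single outcome measure
studies = {
    "Study A": "GHQ-12",
    "Study B": "WEMWBS",
    "Study C": "PHQ-9",
    "Study D": "SF-36 physical",
}

domain_counts = Counter(outcome_domains[measure] for measure in studies.values())
print(domain_counts)  # Counter({'mental health': 3, 'physical function': 1})

Here no single measure is reported by more than one study, but three studies can be synthesised together under the mental health domain.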

For more guidance on appropriate grouping in synthesis have a look at Cochrane Handbook Chapter 9, “Preparing for Synthesis”.

Scope of SWiM:

Heterogeneity is a common reason for not conducting meta-analysis, so for reviews of quantitative effect data a SWiM approach may still allow the data to be synthesised.  Deciding not to meta-analyse because of heterogeneity is an issue across most review questions and topic areas.  However, there are different views about the level of heterogeneity at which a meta-analysis becomes inappropriate or not meaningful.

It is helpful to be clear about what is meant by “heterogeneity” (sometimes also referred to as diversity), as there are different sources of heterogeneity.

Statistical heterogeneity: This is where the effect of an intervention varies across studies for the same outcome.  It is quantified by the I² statistic, which describes the percentage of the variability in effect estimates that is due to heterogeneity rather than to sampling error.  Variability in effect estimates can also be informally assessed in a forest plot; for example, where the confidence intervals of the effect sizes appear on both sides of the line of no effect this indicates high variability.
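
As a rough numerical illustration (with invented effect estimates and variances, and assuming simple inverse-variance fixed-effect weights), I² can be derived from Cochran’s Q as follows:

import numpy as np

effects = np.array([0.20, 0.35, -0.10, 0.50])   # hypothetical study effect estimates
variances = np.array([0.04, 0.05, 0.03, 0.06])  # hypothetical within-study variances

weights = 1.0 / variances                        # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

Q = np.sum(weights * (effects - pooled) ** 2)    # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100                # % of variability due to heterogeneity

print(f"Q = {Q:.2f}, I-squared = {I2:.1f}%")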

Methodological heterogeneity: This is where there is methodological variation across the included studies, e.g. inclusion of different study designs, or variation in the metric used to report the same outcome (e.g. odds ratio versus mean difference).  Methodological heterogeneity will be more common in reviews which address questions where there is limited evidence from randomised controlled trials, for example in public health.

Clinical or conceptual* heterogeneity: This is where there is variation in one or more aspects of the PICOC (Population, Intervention, Comparison, Outcome, Context).  For example, the intervention of interest may vary across the included studies in its components, intensity, methods of implementation, etc.

*The term “conceptual” is used here to cover the many reviews which do not address clinical questions.

A common response to high levels of heterogeneity is that no study is considered similar enough to another to allow synthesis; this is usually due to methodological and/or clinical heterogeneity.  It can lead to a review in which each study is considered separately, and no synthesis is conducted.  The conclusions of such reviews are therefore typically limited to drawing attention to a lack of evidence and to uncertainty, which may not be useful to review users, and may not represent the most appropriate use of the available evidence.  For more guidance on how to manage heterogeneous data in a review, and on whether and how to combine data to facilitate synthesis at some level, have a look at Cochrane Handbook Chapter 9, “Summarizing study characteristics and preparing for synthesis”.

Also see this paper:

Ioannidis JPA, Patsopoulos NA, Rothstein HR: Reasons or excuses for avoiding meta-analysis in forest plots. BMJ 2008, 336(7658):1413-1415.

There is a lot of confusion around terminology, which we have outlined below; we recommend that authors report clearly and simply what they have done.  The SWiM reporting guideline is for synthesis of quantitative effect data when meta-analysis of effect sizes is not used.

The term “mixed-method synthesis” could mean different things:

Any of the above options could incorporate SWiM if there are quantitative data which are synthesised using a SWiM approach rather than meta-analysis of standardised effect sizes.

The term “qualitative evidence synthesis” is also potentially confusing.  It could mean:

  • synthesis of qualitative data: this is not within the scope of SWiM

  • non-statistical synthesis of quantitative or qualitative data: this is within the scope of SWiM where the synthesis is of quantitative effect data but does not involve meta-analysis of standardised effect sizes for some or all outcomes.

No, the SWiM reporting guideline does not provide detailed guidance on the conduct of SWiM methods or on the application of GRADE.  However, in the elaboration of the SWiM items we refer readers to key sources of conduct guidance.  For the application of GRADE to reviews using a SWiM approach see SWiM item 6; we also recommend having a look at Cochrane Handbook Chapter 14, “Completing ‘Summary of findings’ tables and grading the certainty of the evidence”.  The following may also be useful:

Murad MH, Mustafa RA, Schünemann HJ, Sultan S, Santesso N: Rating the certainty in evidence in the absence of a single estimate of effect. Evid Based Med 2017, 22(3):85-87.

Ryan R, Santesso N, Hill S. Preparing Summary of Findings (SoF) tables. Cochrane Consumers and Communication Group, http://cccrg.cochrane.org/author-resources. La Trobe University, Melbourne. Published December 1st 2016. Version 2.0. Approved (S. Hill) December 1st 2016. (accessed 26th February 2020).

Santesso N. GRADEpro GDT 7 Presenting narrative outcomes in a ‘Summary of Findings’ table. YouTube, May 2016 (accessed 26th February 2020).