10 common mistakes that could ruin your enrichment analysis
Anusuiya Bora1,2, Matthew McKenzie1 & Mark Ziemann1,2*
1 Deakin University, School of Life and Environmental Sciences, Geelong, Australia.
2 Burnet Institute, Melbourne, Australia.
(*) Correspondence: mark.ziemann@burnet.edu.au
| Author | ORCID |
|---|---|
| Anusuiya Bora | 0009-0006-2908-1352 |
| Matthew McKenzie | 0000-0001-7508-1800 |
| Mark Ziemann | 0000-0002-7688-6974 |
Target journal: Nature Methods
Abstract
Functional enrichment analysis (FEA) is a powerful way to summarise complex genomics data into information about the regulation of biological pathways, including cellular metabolism, signalling and immune responses. About 10,000 scientific articles describe using FEA each year, making it one of the most widely used techniques in bioinformatics. While FEA has become a routine part of workflows via myriad software packages and easy-to-use websites, mistakes can easily creep in due to poor tool design and users’ unawareness of pitfalls. Here we outline 10 mistakes that undermine the effectiveness of FEA, which we commonly see in research articles, and provide practical advice on their mitigation.
Background
PubMed searches indicate keywords like “pathway analysis” and “enrichment analysis” appear in titles or abstracts of approximately 10,000 articles per year, and that the number of articles matching these keywords increased by a factor of 5.4 between 2014 and 2024. The purpose of FEA is to determine whether gene categories are collectively differentially represented in the molecular profile at hand, and it involves querying hundreds or thousands of functional categories representing gene pathways or ontologies. There are a variety of methods for FEA, but the two main approaches are over-representation analysis (ORA) and functional class scoring (FCS)1. ORA involves selecting genes based on a hard cut-off, followed by a test of enrichment (e.g. Fisher’s exact test) against a background list2, as sketched below. Popular ORA tools include websites like DAVID3 and software packages like clusterProfiler4. FCS involves ranking all detected genes, followed by a test to assess whether the distribution of scores in a gene set deviates towards the up- or down-regulated direction. GSEA5 is a stand-alone FCS program with a graphical user interface, and there are several command-line implementations such as fgsea6.
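To make the ORA procedure concrete, the following minimal R sketch builds the 2×2 contingency table underlying a Fisher’s exact test; the counts are hypothetical and chosen only for illustration.

```r
# Hypothetical counts: of 1000 selected genes, 40 belong to the pathway;
# of the 12,000 other detected genes, 200 belong to the pathway.
tab <- matrix(c(40, 960, 200, 11800), nrow = 2, byrow = TRUE,
              dimnames = list(c("selected", "not_selected"),
                              c("in_pathway", "not_in_pathway")))
fisher.test(tab)  # a low p-value indicates over-representation
```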
Recommendations on correct application of pathway enrichment have been previously published7–10, yet we and others continue to observe blatant mistakes and methodological deficiencies appearing in peer-reviewed publications at an alarming rate11,12. Such methodological and interpretation errors are known to “snowball”: a process whereby an initial flawed research study creates cascading problems for scientific reliability13.
The purpose of this opinion article is to share what our group has learned about successful FEA over the past 15 years, having authored several articles using the method and critically examined hundreds of published articles that apply it.
Using an example RNA-seq dataset and simulation analysis, we provide evidence to show just how impactful these mistakes are. Details of this analysis are provided in the Methods and Supplementary Material.
1. Using uncorrected p-values for statistical significance
Enrichment tests generate p-values (probability values) between 0 and 1. The p-value estimates the probability of an observed result occurring by random chance. A low p-value (e.g. p<0.05) indicates the observed result would be unlikely from random data, consistent with a real effect. However, as gene set libraries can contain thousands of categories, we expect about 5% of gene sets to meet the p<0.05 threshold even with random data. We therefore almost always get many “significant” results just by chance. This is the multiple testing problem, whereby uncorrected p-values lead to high rates of false positives.
In the example dataset (see Methods below), we randomly selected 1000 genes that met the detection threshold and submitted these to ORA with gene sets from Reactome. Of the 1840 Reactome gene sets with five or more detected members, we obtained a mean of 51.6 hits (p<0.05; 100 repeats) with these random genes (Figure 1A), demonstrating that reporting raw p-values is bound to yield many false positives, in this case at a rate of 2.8%. There are a variety of p-value correction methods to reduce the risk of false positives7, including approaches from Sidak14, Holm-Bonferroni15 and Benjamini-Hochberg16. The Benjamini-Hochberg method, also called the false discovery rate (FDR) method, appears to be the most widely used in genomics to adjust p-values, but it has been critiqued as being overly conservative when a large fraction of tests are not null17. After applying FDR adjustment to the results of the random gene lists (Figure 1A), the mean number of significant hits (FDR-adjusted p<0.05) drops to 0.16, effectively eliminating false positives.
In the example dataset (AML3 cells with and without azacitidine treatment), omitting FDR correction leads to the identification of 696 Reactome gene sets with p<0.05, of which 345 are likely false positives (~49.6%) (Figure 1B). Omitting p-value correction can therefore render half of the enrichment results false, which is why it is number one on our list. Most enrichment analysis packages provide adjusted p-values; if the tool you’re using doesn’t, then it is time for a change.
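As a minimal sketch of this safeguard in R (assuming `detected` is a character vector of detected gene symbols; the Reactome GMT file name is hypothetical):

```r
library(fgsea)  # provides gmtPathways() and the fora() ORA test

reactome <- gmtPathways("ReactomePathways.gmt")  # hypothetical file name
fg <- sample(detected, 1000)                     # random foreground, as in Figure 1A
res <- fora(pathways = reactome, genes = fg, universe = detected, minSize = 5)

# fora() already reports BH-adjusted p-values in 'padj'; if your tool only
# reports raw p-values, adjust them yourself:
res$padj_manual <- p.adjust(res$pval, method = "BH")
sum(res$pval < 0.05)  # dozens of nominal 'hits' expected even from random genes
sum(res$padj < 0.05)  # close to zero after FDR correction
```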
Figure 1A. FDR control reduces false positives.
Figure 1B. Impact of FDR correction of p-values on the number of ‘significant’ gene sets.
2. Not using a custom background gene list
Every omics assay has its limitations. In the world of gene expression, a microarray can only measure the genes it was designed to assay, and in RNA-seq certain genes are poorly detected as a result of sequence similarity or GC bias. Biological differences also play a big part in what is detected11. Of the 78,691 human genes annotated in Ensembl’s latest release (v115), typically only 12,000-20,000 are expressed at detectable levels with RNA-seq in any one tissue. In the example dataset, based on an earlier Ensembl release (v90) with 58,302 annotated genes, most genes are silent: 45,134 recorded a mean of <10 reads per sample across the six samples, with only 13,168 genes meeting this detection threshold. Although 77.4% of genes are discarded at this step, they account for a minuscule 0.2% of reads. Filtering poorly detected genes gives a slight boost to differential gene identification18, but it is most crucial in defining the background gene list for subsequent enrichment analysis. Genes with very low expression are unlikely to be biologically relevant in the context being studied and should be excluded from the background.
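A minimal sketch of implementing this detection threshold in R, assuming `counts` is a gene-by-sample matrix of read counts:

```r
# Keep genes with a mean of at least 10 reads per sample; these form both
# the pool for differential expression and the ORA background.
keep <- rowMeans(counts) >= 10
detected <- rownames(counts)[keep]
length(detected)  # roughly 13,000 genes in the example dataset
```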
Using a simulation approach, we can demonstrate the consequence of omitting a background gene list in an RNA-seq experiment. Drawing a random set of 1000 genes from the AML3 example dataset with expression above 10 reads per sample on average, and applying an enrichment test (hypergeometric) with the whole genome annotation as the background, yields on average 444 gene sets reaching the FDR<0.05 significance level (Figure 2A). Alternatively, if we use a custom background gene list composed of genes meeting the 10 reads per sample threshold, we expect 0.16 gene sets with FDR<0.05, practically eliminating false positives. These false positives are highly reproducible (Figure 2B), and as many of them are cancer-relevant pathways like cell cycle and immune signalling, they have the potential to mislead readers.
Performing ORA on azacitidine-responsive genes with the whole genome background gave a much larger number of “significant” gene sets (1269) than the custom background analysis (351) (Figure 2C). The overlap between them was just 347, giving a Jaccard statistic of 0.27. In other words, without the right background gene list, up to 73% of results could be false positives.
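Reusing `reactome`, `fg` and `detected` from the sketches above, the two backgrounds can be compared directly; `all_genes` (the whole genome annotation) is an assumed character vector:

```r
bad  <- fora(reactome, genes = fg, universe = all_genes, minSize = 5)  # wrong background
good <- fora(reactome, genes = fg, universe = detected,  minSize = 5)  # matched background
sum(bad$padj < 0.05)   # hundreds of spurious hits even for a random foreground
sum(good$padj < 0.05)  # near zero with the matched background
```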
Figure 2A. Impact of background list on the number of significant gene sets. 100 simulations.
Figure 2B. Gene sets appearing as false positives in 100/100 simulations include those related to cancer.
Figure 2C. Impact of background list on the number of significant gene sets in the example dataset. The correct background based on the detection threshold is labelled ‘bg’, while the incorrect whole genome background is labelled ‘bg*’.
3. Using a tool that doesn’t report enrichment scores
FDR values can tell us whether something is statistically significant, but they don’t directly indicate whether there will be any biological impact19,20. For that, we need some measure of effect size, and in enrichment analysis we can use an enrichment score as a proxy. For rank-based tools like GSEA, the enrichment score varies from -1 to +1, denoting the distribution of a gene set’s members relative to all other genes5. For a gene set composed of 15 genes, a score of 1.0 would mean that these 15 genes are the top 15 upregulated, while a score close to 0 means the genes are distributed much as expected by random chance. For over-representation methods like DAVID, the fold enrichment score is often quoted, which is the relative over-representation of a gene set’s members in the foreground list as compared to the background7. Unfortunately, DAVID doesn’t provide fold enrichment scores on the main results page; they are only available in the downloadable table. Many other common tools don’t provide enrichment scores at all (for example clusterProfiler), which leaves researchers with no information about their effect sizes. Tools that do provide enrichment scores include ShinyGO (web)21, GSEA5 and fgsea (fora)6.
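For tools that omit an effect size, a fold enrichment score can be derived from the four counts of an ORA test; a minimal sketch using one common definition (the ratio of foreground to background proportions):

```r
# k: foreground genes in the set; n: foreground size
# K: background genes in the set; N: background size
fold_enrichment <- function(k, n, K, N) (k / n) / (K / N)
fold_enrichment(k = 40, n = 1000, K = 200, N = 13000)  # ~2.6-fold enrichment
```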
4. Prioritising results solely by p-value
Pathway enrichment analysis can return hundreds of significant results, which can be difficult to interpret. Many tools by default sort the results by significance, but this can lead users to prioritise pathways that are very large but where each gene is only slightly dysregulated. To demonstrate this, compare the results of a typical pathway enrichment analysis with p-value prioritisation (Table 1) and with enrichment score (ES) prioritisation after removal of non-significant pathways (Table 2).
Table 1. Top 10 gene sets prioritised by p-value.
| Gene set | Set Size | p-value | FDR | ES | Mean Log2FC |
|---|---|---|---|---|---|
| Cell Cycle | 589 | 1.4e-47 | 2.6e-44 | 0.35 | 0.069 |
| Metabolism of RNA | 725 | 4.4e-46 | 4.0e-43 | 0.31 | 0.079 |
| Cell Cycle, Mitotic | 479 | 1.5e-39 | 9.1e-37 | 0.35 | 0.066 |
| Cell Cycle Checkpoints | 246 | 2.1e-27 | 9.6e-25 | 0.40 | 0.081 |
| M Phase | 336 | 9.5e-27 | 2.9e-24 | 0.34 | 0.065 |
| Mitotic Prometaphase | 189 | 4.6e-25 | 1.1e-22 | 0.44 | 0.120 |
| Mitotic Metaphase and Anaphase | 208 | 8.9e-24 | 1.8e-21 | 0.40 | 0.110 |
| Mitotic Anaphase | 207 | 2.3e-23 | 4.3e-21 | 0.40 | 0.110 |
| Processing of Capped Intron-Containing Pre-mRNA | 273 | 6.6e-23 | 1.1e-20 | 0.35 | 0.084 |
| Resolution of Sister Chromatid Cohesion | 114 | 1.0e-20 | 1.6e-18 | 0.51 | 0.140 |
Table 2. Top 10 significant gene sets prioritised by enrichment score.
| Gene set | Set Size | p-value | FDR | ES | Mean Log2FC |
|---|---|---|---|---|---|
| Activation of NOXA and translocation to mitochondria | 5 | 1.1e-03 | 6.9e-03 | 0.84 | 0.26 |
| Condensation of Prometaphase Chromosomes | 11 | 2.5e-06 | 3.5e-05 | 0.82 | 0.26 |
| Postmitotic nuclear pore complex (NPC) reformation | 27 | 1.8e-11 | 6.4e-10 | 0.75 | 0.21 |
| Phosphorylation of Emi1 | 6 | 1.6e-03 | 8.8e-03 | 0.75 | 0.24 |
| Interactions of Rev with host cellular proteins | 37 | 6.8e-15 | 5.2e-13 | 0.74 | 0.20 |
| Nuclear import of Rev protein | 34 | 1.7e-13 | 9.9e-12 | 0.73 | 0.20 |
| Rev-mediated nuclear export of HIV RNA | 35 | 1.0e-13 | 6.3e-12 | 0.73 | 0.20 |
| Transport of Ribonucleoproteins into the Host Nucleus | 32 | 2.1e-12 | 1.0e-10 | 0.72 | 0.20 |
| Export of Viral Ribonucleoproteins from Nucleus | 32 | 2.9e-12 | 1.3e-10 | 0.71 | 0.19 |
| NEP/NS2 Interacts with the Cellular Export Machinery | 32 | 2.9e-12 | 1.3e-10 | 0.71 | 0.19 |
P-value prioritisation (Table 1) emphasises generic functions with large gene sets and moderate fold changes, while enrichment score prioritisation (Table 2) highlights smaller gene sets with highly specific functions, whose member genes have bigger fold changes. These more specific gene sets are generally better candidates for downstream validation due to their explanatory power. The scatterplot in Figure 3 shows how focusing only on statistical significance could overlook interesting results with larger effect sizes. End users should therefore use both prioritisation approaches to interpret their data, as sketched below.
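A sketch of dual prioritisation on an FCS-style results table `res` (columns `padj` and `ES` are assumed, as in fgsea output):

```r
sig <- res[res$padj < 0.05, ]     # apply the significance filter first
head(sig[order(sig$padj), ])      # p-value prioritisation (Table 1 style)
head(sig[order(-abs(sig$ES)), ])  # enrichment score prioritisation (Table 2 style)
```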
Figure 3. Scatterplot showing absolute enrichment scores (x-axis) and log-transformed significance values (y-axis) for each detected pathway. Gene sets with FDR<0.05 are highlighted in red.
5. Using gene lists that are too large or too small for ORA
It’s a common misconception that only differentially expressed genes that meet the FDR threshold should be submitted to an enrichment test. Tarca and colleagues suggest a heuristic that selects the top 1% of genes if there are none that meet the standard significance cut-off22. If proper FDR control of enrichment results is applied (see #1 above), then gene selection criteria can be flexible. The caveat is that enrichment tests (like the hypergeometric method) work best within certain input gene list sizes. We tested different input gene list sizes in ORA and found that lists of 2500 genes yielded the most significant sets (455 with FDR<0.05), while lists of 200 and fewer yielded very few (Figure 4). In the range of 300-1000 genes, there is a steep increase in the number of significant pathways, with the gradient flattening beyond 1000 genes. This result suggests that a larger gene list of up to 2500 genes would be better, but the main downside is that the additional results are for gene sets with smaller enrichment scores. For example, for significantly downregulated pathways, fold enrichment scores averaged 16.9 at an input list size of 100, but only 2.5 at a size of 2500. Users may wish to prespecify a fold enrichment score that they consider biologically relevant (3-5 seems reasonable), and then tune their analysis to capture the most statistically significant enrichments above this score. In the example dataset, the number of gene sets meeting this minimum fold enrichment score peaked at an input list size of 700 for the upregulated genes and 800 for the downregulated genes (Figure 4), corresponding to 5-6% of all detected genes.
This suggests that a gene list size of 700-800 genes, or 5-6% of all those detected, is reasonable for a differential expression study; a sketch of this tuning procedure is shown below. Nevertheless, some users may want to avoid setting seemingly arbitrary thresholds, in which case an FCS method like GSEA, which calculates enrichment from all detected genes, would be recommended.
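A sketch of the tuning procedure, assuming `deg` is a differential expression table ordered by p-value with gene symbols as row names, and reusing `reactome` and `detected` from earlier sketches:

```r
sizes <- c(100, 250, 500, 1000, 2500, 5000)
n_useful <- sapply(sizes, function(N) {
  fg <- head(rownames(deg), N)  # top N genes by significance
  res <- fora(reactome, genes = fg, universe = detected, minSize = 5)
  fe <- (res$overlap / N) / (res$size / length(detected))  # fold enrichment
  sum(res$padj < 0.05 & fe >= 3)  # significant sets above the prespecified FE
})
```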
Figure 4. Effect of gene list size on number of significant pathways. Up-regulated in red, down-regulated in blue.
6. Combining up and down-regulated genes in the same ORA test
In some articles, we noticed that authors didn’t conduct separate ORA tests for up- and down-regulated gene lists, instead opting to submit the combined list for ORA. This isn’t necessarily an error, as it tests the hypothesis that some pathways are “dysregulated”: containing a mix of up- and down-regulated genes that appear at an elevated rate. However, this combined approach can miss many enriched gene sets compared to the separate approach. In our example analysis, the separate approach identified 351 pathways, whereas the combined approach found only 166 (53% fewer; Figure 5). The combined approach does uniquely identify some pathways, but relatively few; in the example dataset, only 2.8% of results were identified exclusively with the combined test.
The reason lies in co-regulation: genes in the same pathway are typically correlated with each other23. Consider cell cycle genes, or genes responding to pathogens, which are activated in unison to coordinate a complex biological process. In a typical differential expression experiment after a stimulus, this results in pathways that are predominantly up- or down-regulated, but rarely a mix of both. Due to this phenomenon, the up and down lists each show relatively strong enrichments, which are diluted when the lists are combined24. Failing to report data using the separate approach (sketched below) could therefore result in 53% fewer identified gene sets, and an incomplete picture of what’s happening at the molecular level.
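A sketch of the separate approach, assuming `deg` is a DESeq2-style results table with `padj` and `log2FoldChange` columns:

```r
sig  <- deg[which(deg$padj < 0.05), ]
up   <- rownames(sig)[sig$log2FoldChange > 0]
down <- rownames(sig)[sig$log2FoldChange < 0]
res_up   <- fora(reactome, genes = up,   universe = detected, minSize = 5)
res_down <- fora(reactome, genes = down, universe = detected, minSize = 5)
```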
Figure 5. Combining up and downregulated genes into one ORA test yields far fewer results.
7. Using shallow gene annotations
One of the most important decisions for pathway enrichment analysis is selecting the database to query. There are many options to consider, both proprietary and open source. When choosing, users should first consider whether the database contains the gene sets they a priori suspect will be altered. Second, consider the breadth and depth of the pathway library; this is where unexpected discoveries may occur, and it pays to use a comprehensive library to capture as many aspects of the dataset as possible. To demonstrate this, note how KEGG legacy and KEGG Medicus seem tiny when compared to Reactome, which is itself dwarfed by Gene Ontology Biological Process (GOBP; Table 3). Consequently, the results obtained for the example dataset are substantially richer for Reactome and GOBP than for the KEGG libraries.
Table 3. Size of selected gene set libraries, and the number of significant up- and down-regulated gene sets they yielded for the example dataset.
| Library | No. gene sets | Total no. annotations | Median gene set size | No. genes with ≥1 annotation | Up-regulated | Down-regulated |
|---|---|---|---|---|---|---|
| KEGG | 186 | 12800 | 52.5 | 5245 | 11 | 51 |
| KEGGM | 658 | 9662 | 11.5 | 2788 | 20 | 18 |
| Reactome | 1787 | 97544 | 23.0 | 11369 | 165 | 117 |
| GOBP | 7583 | 616242 | 20.0 | 18000 | 340 | 1416 |
| MSigDB | 35134 | 4089406 | 47.0 | 43351 | 3214 | 7217 |
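The breadth and depth statistics shown in Table 3 can be computed for any library distributed in GMT format; a minimal sketch (file name hypothetical):

```r
library(fgsea)
gs <- gmtPathways("c2.cp.reactome.v2025.1.Hs.symbols.gmt")  # hypothetical file
length(gs)                  # number of gene sets
sum(lengths(gs))            # total annotations
median(lengths(gs))         # median gene set size
length(unique(unlist(gs)))  # genes with at least one annotation
```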
8. Using outdated gene identifiers and gene sets
Data repositories like GEO25 contain thousands of previously published datasets that can be reanalysed with new pathway databases and software tools to gain further insights. However, when the data is several years old, it should be used with caution, as many gene names may have changed. For example, Illumina’s EPIC DNA methylation microarray was released in 2016, and in the following eight years, 3,253 of the 22,588 gene names on the chip changed (14.4%)26; these genes would not be recognised by pathway enrichment software. To update defunct gene symbols, the HGNChelper R package can help27; it also fixes gene symbols corrupted by Excel autocorrect, which are unfortunately common in GEO28. Persistent gene identifiers like Ensembl (e.g. ENSG00000180096) and HGNC (e.g. HGNC:2879) are less likely to change over time and are therefore preferable to gene symbols (e.g. SEPTIN1) for FEA.
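A minimal sketch of updating symbols with HGNChelper (the suggested symbols depend on the package’s bundled symbol map, so the expected output is indicative only):

```r
library(HGNChelper)
old <- c("SEPT1", "MARCH1", "FAM46A")  # hypothetical outdated symbols
fixed <- checkGeneSymbols(old)         # also flags Excel-corrupted entries
fixed$Suggested.Symbol                 # expected: "SEPTIN1" "MARCHF1" "TENT5A"
```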
The depth of pathway databases increases every year as annotation consortia continue assimilating functional information from the literature, and this impacts the quality of results and the conclusions that can be derived29. These databases undergo large updates, as shown for Reactome below (Figure 6). Therefore it’s always best to download and use the newest available version of the gene sets.
Figure 6. The number of Reactome gene sets grows over time. Gene sets were downloaded from the MSigDB website, except the last bar, which represents the latest gene sets downloaded directly from Reactome, not yet incorporated into MSigDB.
9. Bad presentation
When discussing FEA with colleagues, it is often said that the method is used to generate “filler”: data and charts used to “pad out” articles that would otherwise be too short or lack the usual number of figures. For FEA, a general rule of thumb is to only show the charts and data that are relevant to assessing aims and hypotheses, and which contribute to the conclusions. While others have recommended using multiple FEA tools9, we’d suggest limiting the amount of data shown in an article to just one FEA method and one or two gene set databases (e.g. Reactome, transcription factor target genes). Excessive use of tools and databases can make the results difficult to interpret, as in30.
We’ve also noticed cases of confusing, incomplete or incorrect data presentation choices that should be avoided:
The number of selected genes in a category is often shown as evidence of enrichment31–34, but this can be misleading because it is only one of the four numbers that go into calculating a fold enrichment score.
Similarly, the proportion of selected genes that belong to a gene category is sometimes shown35–38, but this does not directly reflect the fold enrichment score.
Presenting enrichment results as a pie chart30,32,37,39 isn’t recommended because it isn’t possible to show enrichment scores and significance values in this form.
Sometimes a network of genes or pathways is shown, but the significance of nodes and edges isn’t described40.
Figures missing key elements such as axis labels32,41–44.
FEA mentioned in the abstract but no data shown in the main article or supplement45,46.
Confusion around which tool was used for each figure and panel (e.g. 47).
Such misinterpretation and data presentation problems can also occur when a tool is used without understanding the statistical basis of inference13, so it is crucial that users take the time to familiarise themselves with the tool’s documentation and recommendations.
10. Neglecting methods reproducibility
According to Goodman and colleagues48, methods reproducibility is:
“the provision of enough detail about study procedures and data so the same procedures could, in theory or in actuality, be exactly repeated.”
There are several crucial pieces of information required to enable reproducibility of enrichment analysis, including:
how genes were selected or scored - especially whether up or down-regulated genes were considered separately or combined,
the tool used, and its version,
the options or parameters applied,
the statistical test used,
the gene set/pathway database(s) queried, and their versions,
for ORA, how a background list was defined, and,
how p-value correction was done9,49.
A systematic literature analysis published in 2022 found insufficient background description in 95% of articles describing ORA tests, and p-value correction was insufficiently described in 57% of articles, suggesting that enrichment analysis generally suffers from a lack of methods reproducibility12. Indeed, this literature analysis identified some inadequate methodological descriptions including:
“GO and KEGG enrichment analysis were performed on the DEGs by perl scripts in house.”50.
“Gene ontology (GO) and pathway analysis were performed in the standard enrichment computation method.”51.
We’ve also noted some cases where FEA wasn’t described in the methods at all, despite the results being considered important enough to mention in the abstract52–54. Moreover, we’ve identified cases where the tool mentioned in the methods section is inconsistent with what’s shown in the results55,56.
In addition to including the methodological details mentioned above, authors could also provide gene profile data and/or gene lists used for enrichment analysis as supplementary files, or better still, provide full reproducibility and transparency with the five pillars framework57.
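Much of this detail can be captured automatically in R; a minimal sketch:

```r
packageVersion("fgsea")  # record each tool's version for the methods section
sessionInfo()            # full environment listing, suitable for a supplement
# Record gene set provenance alongside the GMT file (values are examples):
writeLines("Reactome GMT downloaded 02/09/2025", "geneset_provenance.txt")
```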
Other issues
There are several more subtle issues not covered in depth here, but worth mentioning as they have been flagged as potential problems. First, the length of genes is known to impact the ease with which they are detected, so correction for gene length has been suggested to improve enrichment results58,59. Second, many FEA tests use genes as the sampling unit and do not take into consideration (or model) biological variation, which could yield unrealistic significance values60. Third, the size of gene sets, even when they represent similar biology, can disproportionately impact significance scores and complicate interpretation61,62. Fourth, tight correlation between each gene’s expression within a pathway could exacerbate false positive rates63,64. Fifth, slight differences in the implementation of ORA tests can impact results in some circumstances65. Lastly, some web-based FEA tools lack longevity. For example, DAVID 6.83,66 has been used for over 10,000 publications but has been offline since 2022, leaving those articles irreproducible. As web-based tools appear to be the most popular option for FEA12,67, we should advocate preservation/archiving of tools as Docker images (e.g. 21), to enable reproducibility into the future68.
Conclusion
So why are problems so pervasive in enrichment analysis? It’s likely a combination of poor researcher training, supervision and peer-review scrutiny. The design of tools and the (often low) quality of tool documentation might also play a role. We also know that inadequate methods hold an advantage over more rigorous ones, due to researcher preferences for presenting “significant” findings69 and reliance upon default settings even when they are incorrect12,62. Problems 2, 3, 5 and 6 appear to be specific to ORA-based tools, and can be avoided entirely by switching to FCS tools like GSEA, which have the added benefit of enhanced accuracy in terms of precision and recall65,70. Although learning and running FCS tools is more difficult and time-consuming, the benefits to the quality of results are substantial.
The deluge of manuscripts in genomics and other fields places an increasing burden on a limited pool of competent, voluntary peer-reviewers, to the point where editors are struggling to maintain high-quality, consistent peer review71. As studies become ever more multi-disciplinary, it is increasingly difficult for editors to cover all the technical aspects thoroughly, allowing mistakes like the above to become widespread. Ultimately, responsibility for using best practices in enrichment analysis lies with authors. Avoiding these mistakes will have many benefits, including avoiding wasted resources on unreliable research directions.
Methods
The example RNA-seq dataset involves AML3 cells with and without azacitidine treatment72. This dataset was selected because it represents a typical transcriptomic study, with three experimental replicates and over 1000 differentially expressed genes. Read counts were obtained from the DEE2 database with the getDEE2 R package, using SRA project accession SRP038101 as the query73. Genes with average expression above 10 counts per sample were considered detected, and genes not meeting this criterion were removed from downstream analysis (unless stated otherwise). Differential expression was conducted using DESeq2 v1.48.2 (ref. 74). Human gene symbols were updated to Ensembl 115 (ref. 75). Genes with FDR<0.05 were considered significantly differentially expressed. Reactome gene sets were downloaded in GMT format on 02/09/2025 (ref. 76), and these gene sets were used for all subsequent enrichment tests unless otherwise stated. All ORA tests were conducted using the fora function of the fgsea package v1.34.2 (ref. 6). Minimum gene set size was set to 5 for all enrichment tests. Gene sets with FDR<0.05 were considered significantly enriched.
To demonstrate the importance of FDR control (Mistake 1), random sampling of 1000 detected genes was followed with ORA with and without FDR control. This was repeated 100 times. For comparison of the nominal and FDR corrected results, the up (1672) and down (1926) regulated genes underwent ORA using a background consisting of the 13,068 detected genes.
To demonstrate the importance of a suitable background gene list (Mistake 2), random samples of 1000 detected genes were used as the foreground for ORA with either the whole genome background or the background consisting of detected genes. This process was repeated 1000 times.
To demonstrate the importance of using enrichment scores for
interpretation of FEA results, (Mistake 3 and 4) the mitch
package v1.20.0 was used for FCS analysis of the differential expression
data70.
To investigate how foreground gene set size impacts the number of ORA results (Mistake 5), the top N significant up- and down-regulated genes were selected for ORA compared to the background consisting of detected genes, where N was varied between 50 and 7000.
To demonstrate the effect of combining up- and down-regulated genes into a single test (Mistake 6), all unique genes with FDR<0.05 (n=3950) were used as the foreground for an ORA test, and compared with results from the separate approach (n=1667 and 1923 up- and down-regulated genes, respectively).
To show the effect of using shallow gene annotations (Mistake 7), various gene set libraries were extracted from the MSigDB Collection (msigdb.v2025.1.Hs.symbols.gmt)77. These gene set libraries were each used for ORA tests.
To show the effect of using old gene sets (Mistake 8), the Reactome gene sets were extracted from archived MSigDB Collections going back to 2010. These were compared to the most recent Reactome release (02/09/2025).
Instances of poor presentation (Mistake 9) and methods reproducibility (Mistake 10) were drawn from unpublished notes made during a previous systematic analysis of the literature12.
Analysis was conducted in R 4.5.1 using a Bioconductor Docker image corresponding to Bioconductor release 3.21. Analysis code is available from GitHub (https://github.com/markziemann/10mistakes) and the Docker image is available from DockerHub (https://hub.docker.com/repository/docker/mziemann/10mistakes/general). These will be archived to Zenodo upon acceptance.