10 common mistakes that could ruin your enrichment analysis

Anusuiya Bora1,2, Matthew McKenzie1,3 & Mark Ziemann1,2*

  1. Deakin University, School of Life and Environmental Sciences, Geelong, Australia.

  2. Burnet Institute, Melbourne, Australia.

  3. Institute for Physical Activity and Nutrition, Deakin University, Australia

(*) Correspondence:

Author ORCID
Anusuiya Bora 0009-0006-2908-1352
Matthew McKenzie 0000-0001-7508-1800
Mark Ziemann 0000-0002-7688-6974

Abstract

Functional enrichment analysis (FEA) is a powerful way to summarise complex genomics data into information about the regulation of biological pathways, including cellular metabolism, signalling and immune responses. About 10,000 scientific articles describe using FEA each year, making it among the most widely used techniques in bioinformatics. While FEA has become a routine part of workflows via myriad software packages and easy-to-use websites, mistakes can easily creep in due to poor tool design and users' unawareness of pitfalls. Here we outline ten mistakes that we commonly see in research articles and that undermine the effectiveness of FEA, and we provide practical advice on their mitigation.

Background

PubMed searches indicate that keywords like “pathway analysis” and “enrichment analysis” appear in the titles or abstracts of approximately 10,000 articles per year, and that number increased by a factor of 5.4 between 2014 and 2024. The purpose of FEA is to understand whether gene categories are collectively differentially represented in the molecular profile at hand, and it involves querying hundreds or thousands of functional categories representing gene pathways or ontologies. The versatile nature of FEA means it can be applied to different types of profiling data, including proteomics, transcriptomics, genomic variant searches, and chromatin/epigenomics analyses [1,2].

There are a variety of methods for FEA, but the two main approaches are over-representation analysis (ORA) and functional class scoring (FCS) [3,4]. ORA involves selecting genes based on a hard cut-off, followed by a test of enrichment (eg: Fisher’s exact test) as compared to a background list [5]. Popular ORA web tools include DAVID [6], g:Profiler [7] and Enrichr [8], while clusterProfiler [9] is popular for R-based analysis. FCS involves ranking all detected genes, followed by a test to assess whether the distribution of scores deviates towards the up- or down-regulated direction. GSEA [10] is stand-alone FCS software with a graphical user interface, and there are several command-line implementations such as fgsea [11].
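
To make the distinction concrete, below is a minimal FCS sketch using the fgsea R package and the example data bundled with it:

```r
library(fgsea)
data(examplePathways)   # list of gene sets bundled with fgsea
data(exampleRanks)      # named vector of gene-level rank statistics

# Test whether each set's genes are skewed towards the top or bottom
# of the ranked gene list
res <- fgsea(pathways = examplePathways, stats = exampleRanks,
             minSize = 15, maxSize = 500)
head(res[order(res$padj), ])   # columns include pval, padj, ES and NES
```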

Recommendations on the correct application of pathway enrichment have been published previously [1,4,12–14], yet we and others continue to observe mistakes and methodological deficiencies in peer-reviewed publications at an alarming rate [15,16]. The purpose of this education article is to share what our group has learned about successful FEA over the past 15 years, having authored several articles using it and critically examined hundreds of published articles describing its use.

Using an example RNA-seq dataset and simulation analysis, we provide evidence to show just how impactful these mistakes are. Details of this analysis are provided in the Supplementary Material.

1. Using uncorrected p-values for statistical significance

Enrichment tests generate p-values (probability values) between 0 and 1. The p-value estimates the probability of an observed enrichment occurring by random chance. A low p-value (eg: p<0.05) indicates the observed result would be unlikely from random data, suggesting a real effect. However, as gene set libraries can contain thousands of categories, we can expect 5% of gene sets to meet the p<0.05 threshold with random data. Therefore, we almost always get many “significant” results just by chance [4]. Our previous literature study showed that this problem was present in 43% of pathway enrichment articles [16].
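
A one-line simulation illustrates the scale of the problem, assuming 1,000 gene sets and no true signal (under the null hypothesis, p-values are uniformly distributed):

```r
set.seed(42)
null_p <- runif(1000)   # simulated p-values from 1,000 tests of random data
sum(null_p < 0.05)      # roughly 50 gene sets appear "significant" by chance
```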

There are a variety of p-value correction methods to reduce the risk of false positives [12], including approaches from Sidak [17], Holm-Bonferroni [18] and Benjamini-Hochberg [19]. The Benjamini-Hochberg method, also called the false discovery rate (FDR) method, appears to be the most widely used in genomics to adjust p-values, but it has been critiqued as being overly conservative when a large fraction of tests are non-null [20].

  • Our simulation analysis identified a mean of 51.6 pathways as significant from randomly selected genes when p-value correction wasn’t implemented; this was reduced to 0.16 after FDR correction (Figure 1A).

  • Our example analysis indicates that ~50.6% of pathway enrichment results could be false if correction of p-values isn’t conducted (Figure 1B).

To avoid unacceptable false positives, use a tool that provides adjusted significance values like the FDR. P-value adjustment can also be done separately with other tools like Stata, SPSS, GraphPad and R. A stricter FDR significance threshold, such as 0.01, has been shown to be effective in reducing false positives [21].
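
In R, this correction is a single call to p.adjust; a minimal sketch with illustrative raw p-values:

```r
# Illustrative raw p-values from enrichment tests of several gene sets
raw_p <- c(0.0001, 0.003, 0.04, 0.20, 0.65)
fdr   <- p.adjust(raw_p, method = "BH")    # Benjamini-Hochberg FDR
holm  <- p.adjust(raw_p, method = "holm")  # Holm-Bonferroni
sum(fdr < 0.05)   # number of gene sets that remain significant
```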

2. Wrong background gene list

Defining a background list (a.k.a. universe or reference) is crucial because genes that have no chance of being part of the foreground, i.e. undetected genes, should not contribute to enrichment calculations as they will inflate significance values [4,12].

Every type of omics analysis has its limitations. Microarrays can only measure the genes they are designed to assay. In RNA-seq, certain genes are poorly detected as a result of sequence similarity or GC bias. Biological differences also play a big part in what is detected [15]. Of the 78k human genes annotated in Ensembl’s latest release (v115), typically only 12-20k are expressed at detectable levels with RNA-seq in any one tissue.

The severity of this problem is contentious, but our previous analysis of seven RNA-seq studies suggested that using the wrong background could lead to false positive rates of 60-80% [16].

  • Our simulations show that using all annotated genes as the background leads to ~444 Reactome pathways meeting statistical significance when using a foreground of random genes (Figure 2A).

  • The example analysis with a background of detected genes identified 310 significant pathways, but a background of all annotated genes led to an additional 737 false positive pathways (Figure 2C).

Recommendations for selecting a detection threshold and defining a background list are given in Box 1.

Box 1. Recommendations for setting a detection threshold to define the background list from various omics data.
Proteomics: Missing values are common. Consider keeping proteins detected in ≥50% of samples.

RNA-seq, scRNA-seq, ChIP-seq and ATAC-seq: Various valid approaches:

  • Mean read count of 10 across all samples.

  • Mean reads per million threshold of 1.0 across all samples.

  • Read counts of 10 or more in ≥50% of samples.

  • Mean reads per million threshold of 1.0 in ≥50% of samples.

Microarray gene expression and DNA methylation: Discard known problematic probes. Include all genes with probes that pass quality control filtering.

Genomics (eg: variant searches): All annotated genes could be used unless there are reasons to believe that some are not detected (eg: due to extreme GC content). This can be checked using sequence depth tools.
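
As a sketch of the first RNA-seq approach in Box 1, assuming a hypothetical raw count matrix named counts with genes as rows:

```r
# Keep genes with a mean read count of at least 10 across all samples;
# genes passing the filter form both the detected set and the ORA background
keep <- rowMeans(counts) >= 10
background <- rownames(counts)[keep]
length(background)   # typically 12-20k genes for a single tissue
```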

3. Using a tool that does not report enrichment scores

FDR values can tell us whether an observation is statistically significant, but they do not tell us whether it is biologically impactful [22,23]. For that, we need some measure of effect size. In FEA, we can use an enrichment score as a proxy measure of effect size. For rank-based tools like GSEA, the enrichment score varies from -1 to +1, denoting the distribution of genes in a gene set relative to all other genes [10]. For a gene set composed of 15 genes, a score of 1.0 would mean that these 15 genes are the top 15 upregulated, while a value close to 0 means the distribution of genes is close to what you might get by random chance. For over-representation methods like DAVID, the fold enrichment score is often quoted, which is the odds ratio of genes of a gene set in the foreground list as compared to the background [12]. Unfortunately, many common tools (for example clusterProfiler and g:Profiler) don’t provide enrichment scores, which leaves researchers with no information about effect sizes. Tools that do provide enrichment scores include ShinyGO (web) [24], GSEA [10], and fgsea [11].
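
Where a tool reports only p-values, a fold enrichment estimate can be recovered from the underlying 2x2 table; a minimal sketch with illustrative numbers:

```r
# Illustrative counts: foreground of 500 genes (40 in the pathway) drawn
# from a background of 15,000 genes (300 in the pathway)
tab <- matrix(c(40, 460, 300, 14700), nrow = 2,
              dimnames = list(c("in_set", "not_in_set"),
                              c("foreground", "background")))
fisher.test(tab)$estimate      # odds ratio reported by the exact test
(40 / 500) / (300 / 15000)     # simple fold enrichment: 4-fold
```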

4. Prioritising results solely by p-value

FEA can return hundreds of significant results, which can be confusing to interpret. Many tools will sort the results by significance by default, but this can result in missing the most interesting findings. As p-value prioritisation emphasises generic functions with large gene sets and moderate fold changes, there is a risk of overlooking smaller gene sets with larger fold changes (contrast Tables 1 and 2). Smaller and more specific gene sets with larger magnitude enrichment scores are generally better candidates for downstream validation due to their explanatory power.

To avoid this problem, end users should also prioritise by enrichment score: first remove pathways above the FDR threshold (eg: 0.05 or 0.01), then sort by enrichment score magnitude.
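
A sketch of this two-step prioritisation, assuming a hypothetical results data frame res with padj and ES columns:

```r
sig <- subset(res, padj < 0.05)       # step 1: keep significant pathways
sig <- sig[order(-abs(sig$ES)), ]     # step 2: sort by |enrichment score|
head(sig)                             # strongest effects now appear first
```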

5. Foreground lists that are too large or too small for ORA

It is a common misconception that only differentially expressed genes that meet the FDR threshold should be submitted to an enrichment test. Tarca and colleagues suggest a heuristic that selects the top 1% of genes if none meet the standard significance cut-off [25]. If proper FDR control of enrichment results is applied (see #1 above), then gene selection criteria can be flexible. The caveat is that enrichment tests (like the hypergeometric method) work best within a certain range of input list sizes. If the number of foreground genes is too large, the enrichment scores won’t be as large or interesting, but if the foreground is too small, the overlaps with pathways will be small and fail to reach statistical significance.

Our testing suggests that a gene list size of 700-800 genes, or 5-6% of all those detected, would be optimal for a differential expression study (Figure 4). To achieve this number, thresholds for significance or fold-change filtering can be fine-tuned, as in the sketch below. Nevertheless, some users may want to avoid setting seemingly arbitrary thresholds; in that case, using an FCS method like GSEA, which calculates enrichment from all detected genes, would be recommended.
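
A sketch of tuning the fold-change threshold to reach a target list size, assuming a hypothetical differential expression table de with a logFC column:

```r
# Tighten the absolute fold-change cut-off until ~750 genes (about 5-6%
# of detected genes) remain in the foreground
target  <- 750
lfc_cut <- sort(abs(de$logFC), decreasing = TRUE)[target]
foreground <- rownames(de)[abs(de$logFC) >= lfc_cut]
length(foreground)   # approximately 750 genes (ties may add a few)
```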

6. Not running ORA separately on up and down-regulated genes

In some articles, we noticed that authors did not conduct separate ORA tests for up- and down-regulated gene lists, instead opting to submit the combined list for ORA. This is not necessarily an error, as it tests the hypothesis that some pathways are “dysregulated”, containing a mix of up- and down-regulated genes that appear at an elevated rate. However, the results from the “combined” and “separate” approaches are very different.

The example dataset shows the combined approach identified 82% fewer results as compared to the separate approach (Figure 5). There were no enriched pathways specific to the combined test.

The reason relates to co-regulation. Genes in the same pathway are typically correlated with each other [26]; consider cell cycle genes, or genes responding to pathogens, which are activated in unison to coordinate a complex biological process. In a typical differential expression experiment after a stimulus, this results in pathways that are predominantly up- or down-regulated, but rarely a mix of up and down. Due to this phenomenon, the up and down lists each have relatively strong enrichments, but these are diluted when combined [27]. Based on this, ORA users should use both the combined and separate approaches whenever directional information is available (some omics types lack it).
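
A sketch of preparing the separate foreground lists, assuming a hypothetical table de of detected genes with logFC and FDR columns:

```r
sig  <- de[de$FDR < 0.05, ]              # significant genes only
up   <- rownames(sig)[sig$logFC > 0]     # up-regulated foreground
down <- rownames(sig)[sig$logFC < 0]     # down-regulated foreground
# Submit 'up' and 'down' to ORA separately, as well as the combined list,
# always against the detected-gene background
```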

7. Using shallow gene annotations

One of the most important decisions for FEA is selecting the pathway or ontology database to query. There are many options to consider, both proprietary and open source. When choosing, users should first consider whether the database contains the gene sets that they a priori suspect will be altered. Secondly, consider the breadth and depth of the pathway library; this is where unexpected discoveries may occur, and it pays to use a comprehensive library to capture as many aspects of the dataset as possible.

The example analysis showed that using a larger pathway database like Reactome or Gene Ontology Biological Process yields richer results as compared to smaller databases like KEGG (Table 3).

Using a smaller database like KEGG may be justified based on a priori hypotheses, but in most cases where the goal is discovery of novel themes, a larger pathway database would be recommended. Users should be aware that these larger databases have some degree of redundancy which can be confusing to interpret [28].
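
On the practical side, most gene set libraries can be downloaded in GMT format and loaded with fgsea's gmtPathways function, which makes it easy to inspect their breadth and depth; the file name here is hypothetical:

```r
library(fgsea)
# eg: a GMT file downloaded from the pathway database's website
reactome <- gmtPathways("ReactomePathways.gmt")   # named list of gene sets
length(reactome)            # breadth: the number of gene sets
summary(lengths(reactome))  # depth: distribution of genes per set
```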

8. Using outdated gene identifiers and gene sets

Data repositories like GEO [29] contain thousands of previously published datasets that can be reanalysed with new pathway databases and software tools to gain further insights. However, when the data are several years old, they should be used with caution, as many gene names may have changed. For example, Illumina’s EPIC DNA methylation microarray was released in 2016, and in the following eight years, 3,253 of 22,588 gene names on the chip changed (14.4%) [30]; these genes would not be recognised by pathway enrichment software. To update defunct gene symbols, the HGNChelper R package can help [31], and it has the added benefit of fixing gene symbols corrupted by Excel autocorrect, which are unfortunately common in GEO [32]. Persistent gene identifiers like Ensembl (eg: ENSG00000180096) and HGNC (eg: HGNC:2879) are less likely to change over time and are therefore preferable over gene symbols (eg: SEPTIN1) for FEA.
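
A sketch of symbol repair with HGNChelper's checkGeneSymbols function; the input symbols are illustrative and the suggested replacements depend on the package's bundled symbol map:

```r
library(HGNChelper)
# A withdrawn symbol, an Excel-mangled symbol and a current symbol
symbols <- c("SEPT1", "2-Mar", "TP53")
checkGeneSymbols(symbols)
# Returns each input, whether it is an approved symbol, and a suggested
# replacement (eg: SEPT1 -> SEPTIN1)
```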

FEA users should also understand how well maintained their preferred pathway databases are. Actively updated databases like Reactome [34] constantly increase in size as annotation consortia continue assimilating functional information from the literature (Figure 6), whereas the popular KEGG database hasn’t grown since 2010. Because the quality and depth of annotations strongly influence the findings [33], a regularly updated database is likely to lead to richer and more relevant FEA results, and it is generally best to download and use the newest available version of the gene sets.

9. Bad presentation

Bad presentation of data is not exclusive to pathway enrichment, but there are a few key mistakes that should be avoided:

  1. The number or proportion of selected genes in a category is sometimes shown as evidence of enrichment, but this can be misleading because it does not take into consideration the frequency of these genes in the background list. Enrichment scores and adjusted p-values are better for this purpose.

  2. Presenting enrichment results as a pie chart is not recommended because it is not possible to show enrichment scores and significance values in this form. Bubble or bar charts are better alternatives (see the sketch at the end of this section).

  3. Sometimes a network of genes or pathways is shown, but the significance of nodes and edges is not described.

  4. Figures missing key elements such as axis labels.

  5. FEA mentioned in the abstract but no data shown in the main article or supplement.

  6. Confusion around which tool was used for each figure and panel.

Such misinterpretation and data presentation problems can also occur when a tool is used without understanding the statistical basis of inference [35], so it is crucial that users take the time to familiarise themselves with the tool’s documentation and recommendations.
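
As an illustration of point 2 above, a minimal bubble chart sketch with ggplot2, using a hypothetical results data frame:

```r
library(ggplot2)
# Hypothetical enrichment results: enrichment score (ES), BH-adjusted
# p-value (padj) and gene set size (setSize)
res <- data.frame(
  pathway = c("Cell cycle", "Interferon signalling", "Apoptosis"),
  ES      = c(0.72, 0.55, -0.48),
  padj    = c(0.001, 0.01, 0.03),
  setSize = c(120, 45, 80)
)
ggplot(res, aes(x = ES, y = reorder(pathway, ES),
                size = setSize, colour = -log10(padj))) +
  geom_point() +
  labs(x = "Enrichment score", y = NULL,
       size = "Set size", colour = "-log10(FDR)")
```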

10. Neglecting methods reproducibility

According to Goodman and colleagues [36], methods reproducibility is:

“the provision of enough detail about study procedures and data so the same procedures could, in theory or in actuality, be exactly repeated.”

There are several crucial pieces of information required to enable reproducibility of enrichment analysis, including:

  • how genes were selected or scored - especially whether up or down-regulated genes were considered separately or combined,

  • the tool used, and its version,

  • the options or parameters applied,

  • the statistical test used,

  • the gene set/pathway database(s) queried, and their versions,

  • for ORA, how a background list was defined, and,

  • how p-value correction was done [14,37].

A systematic literature analysis published in 2022 found insufficient background list description in 95% of articles describing ORA tests, and p-value correction was insufficiently described in 57% of articles, suggesting that FEA generally suffers from a lack of methods reproducibility [16].

Examples of poor and good methods reproducibility are provided in the Supplement, together with an AI prompt that users could use to assess their Methods sections.

In addition to including the methodological details mentioned above, authors could also provide gene profile data and/or gene lists used for FEA as supplementary files, or better still, provide full reproducibility and transparency with the five pillars framework [38].
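
As a small practical step, R users can capture the exact software environment for the Methods section with base R commands, as in this sketch:

```r
sessionInfo()              # R version, platform and all loaded packages
packageVersion("fgsea")    # eg: report the version of each tool used
citation("fgsea")          # retrieve the reference to cite for a package
```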

Other issues

There are several more subtle issues not covered in depth here, but they are worth mentioning as they have been flagged as potential problems. First, the length of genes is known to impact the ease with which they are detected, so correction for gene length has been suggested to improve enrichment results [39,40]. Second, many FEA tests use genes as the sampling unit and do not take into consideration (or model) biological variation, which could yield unrealistic significance values [41]. Third, the size of gene sets, even when they represent similar biology, can disproportionately impact significance scores and complicate interpretation [42,43]. Fourth, tight correlation between genes’ expression within a pathway can exacerbate false positive rates [44,45]. Fifth, slight differences in the implementation of ORA tests can impact results in some circumstances [46]. Lastly, some web-based FEA tools lack longevity. For example, DAVID version 6.8 [6,47] was used in over 10,000 publications, but it was taken offline in 2022, leaving these articles irreproducible. As web-based tools appear to be the most popular option for FEA [16,48], tools that expressly allow preservation/archiving as a Docker image [eg: 24] are recommended to enable future reproducibility and transparency [49].

Conclusion

Methodological problems in FEA likely stem from a combination of poor researcher training, supervision and peer-review scrutiny. The design of tools and the (low) quality of tool documentation might also play a role. We also know that inadequate methods hold a selective advantage over more rigorous ones due to researcher preferences for presenting “significant” findings [50] and reliance upon default settings even when they are inappropriate [16,43]. Problems 2, 3, 5 and 6 appear to be specific to ORA-based tools and can be avoided entirely by switching to FCS tools like GSEA, which have the added benefit of enhanced accuracy in terms of precision and recall [21,46]. Although learning and running FCS tools is more difficult and time-consuming, the benefits to the quality of results are substantial. A related issue is the overinterpretation (and indeed misinterpretation) of omics data. Researchers should be mindful of the specific biological context of their study, as this directly impacts the interpretation of the results obtained. FEA excels at generating hypotheses, but requires separate validation to draw definitive conclusions.

Bibliography

1.
Zhao K, Rhee SY. Interpreting omics data with pathway enrichment analysis. Trends Genet. 2023;39: 308–319.
2.
Chicco D, Jurman G. A brief survey of tools for genomic regions enrichment analysis. Front Bioinform. 2022;2: 968327.
3.
Khatri P, Sirota M, Butte AJ. Ten years of pathway analysis: Current approaches and outstanding challenges. PLoS Comput Biol. 2012;8: e1002375.
4.
Reimand J, Isserlin R, Voisin V, Kucera M, Tannus-Lopes C, Rostamianfar A, et al. Pathway enrichment analysis and visualization of omics data using g:Profiler, GSEA, Cytoscape and EnrichmentMap. Nat Protoc. 2019;14: 482–517.
5.
Tavazoie S, Hughes JD, Campbell MJ, Cho RJ, Church GM. Systematic determination of genetic network architecture. Nat Genet. 1999;22: 281–285.
6.
Sherman BT, Hao M, Qiu J, Jiao X, Baseler MW, Lane HC, et al. DAVID: A web server for functional enrichment analysis and functional annotation of gene lists (2021 update). Nucleic Acids Res. 2022;50: W216–W221.
7.
Kolberg L, Raudvere U, Kuzmin I, Adler P, Vilo J, Peterson H. g:Profiler-interoperable web service for functional enrichment analysis and gene identifier mapping (2023 update). Nucleic Acids Res. 2023;51: W207–W212.
8.
Kuleshov MV, Jones MR, Rouillard AD, Fernandez NF, Duan Q, Wang Z, et al. Enrichr: A comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 2016;44: W90–W97.
9.
Wu T, Hu E, Xu S, Chen M, Guo P, Dai Z, et al. clusterProfiler 4.0: A universal enrichment tool for interpreting omics data. Innovation (Camb). 2021;2: 100141.
10.
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, et al. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102: 15545–15550.
11.
Korotkevich G, Sukhov V, Budin N, Shpak B, Artyomov MN, Sergushichev A. Fast gene set enrichment analysis. bioRxiv. 2021. doi:10.1101/060012
12.
Tilford CA, Siemers NO. Gene set enrichment analysis. Methods Mol Biol. 2009;563: 99–121.
13.
Tipney H, Hunter L. An introduction to effective use of enrichment analysis software. Hum Genomics. 2010;4: 202–206.
14.
Chicco D, Agapito G. Nine quick tips for pathway enrichment analysis. PLoS Comput Biol. 2022;18: e1010348.
15.
Timmons JA, Szkop KJ, Gallagher IJ. Multiple sources of bias confound functional enrichment analysis of global -omics data. Genome Biol. 2015;16: 186.
16.
Wijesooriya K, Jadaan SA, Perera KL, Kaur T, Ziemann M. Urgent need for consistent standards in functional enrichment analysis. PLoS Comput Biol. 2022;18: e1009935.
17.
Ury HK. Comparison of four procedures for multiple comparisons among means (pairwise contrasts) for arbitrary sample sizes. Technometrics. 1976;18: 89–97. doi:10.1080/00401706.1976.10489405
18.
Holm S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics. 1979;6: 65–70.
19.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological). 1995;57: 289–300.
20.
Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proc Natl Acad Sci U S A. 2003;100: 9440–9445.
21.
Kaspi A, Ziemann M. mitch: Multi-contrast pathway enrichment for multi-omics and single-cell profiling data. BMC Genomics. 2020;21: 447.
22.
Sullivan GM, Feinn R. Using effect size—or why the p value is not enough. Journal of Graduate Medical Education. 2012;4: 279–282.
23.
Schober P, Bossers SM, Schwarte LA. Statistical significance versus clinical importance of observed effect sizes: What do p values and confidence intervals really represent? Anesthesia & Analgesia. 2018;126: 1068–1072.
24.
Ge SX, Jung D, Yao R. ShinyGO: A graphical gene-set enrichment tool for animals and plants. Bioinformatics. 2020;36: 2628–2629.
25.
Tarca AL, Bhatti G, Romero R. A comparison of gene set analysis methods in terms of sensitivity, prioritization and specificity. PLoS One. 2013;8: e79217.
26.
Gatti DM, Barry WT, Nobel AB, Rusyn I, Wright FA. Heading down the wrong pathway: On the influence of correlation within gene sets. BMC Genomics. 2010;11: 574.
27.
Hong G, Zhang W, Li H, Shen X, Guo Z. Separate enrichment analysis of pathways for up- and downregulated genes. J R Soc Interface. 2014;11: 20130950.
28.
Gillis J, Pavlidis P. Assessing identity, redundancy and confounds in Gene Ontology annotations over time. Bioinformatics. 2013;29: 476–482.
29.
Clough E, Barrett T, Wilhite SE, Ledoux P, Evangelista C, Kim IF, et al. NCBI GEO: Archive for gene expression and epigenomics data sets: 23-year update. Nucleic Acids Res. 2024;52: D138–D144.
30.
Ziemann M, Abeysooriya M, Bora A, Lamon S, Kasu MS, Norris MW, et al. Direction-aware functional class scoring enrichment analysis of Infinium DNA methylation data. Epigenetics. 2024;19: 2375022.
31.
Oh S, Abdelnabi J, Al-Dulaimi R, Aggarwal A, Ramos M, Davis S, et al. HGNChelper: Identification and correction of invalid gene symbols for human and mouse. F1000Res. 2020;9: 1493.
32.
Ziemann M, Eren Y, El-Osta A. Gene name errors are widespread in the scientific literature. Genome Biol. 2016;17: 177.
33.
Wadi L, Meyer M, Weiser J, Stein LD, Reimand J. Impact of outdated gene annotations on pathway enrichment analysis. Nat Methods. 2016;13: 705–706.
34.
Ragueneau E, Gong C, Sinquin P, Sevilla C, Beavers D, Grentner A, et al. The Reactome knowledgebase 2026. Nucleic Acids Res. 2026;54: D673–D681.
35.
Liu L, Zhu R, Wu D. Misuse of reporter score in microbial enrichment analysis. iMeta. 2023;2: e95.
36.
Goodman SN, Fanelli D, Ioannidis JP. What does research reproducibility mean? Sci Transl Med. 2016;8: 341ps12.
37.
Wijesooriya K, Jadaan SA, Perera KL, Kaur T, Ziemann M. Guidelines for reliable and reproducible functional enrichment analysis. bioRxiv. 2021.
38.
Ziemann M, Poulain P, Bora A. The five pillars of computational reproducibility: Bioinformatics and beyond. Brief Bioinform. 2023;24: bbad375.
39.
Mi G, Di Y, Emerson S, Cumbie JS, Chang JH. Length bias correction in Gene Ontology enrichment analysis using logistic regression. PLoS One. 2012;7: e46128.
40.
Mandelboum S, Manber Z, Elroy-Stein O, Elkon R. Recurrent functional misinterpretation of RNA-seq data caused by sample-specific gene length bias. PLoS Biol. 2019;17: e3000481.
41.
Goeman JJ, Bühlmann P. Analyzing gene expression data in terms of gene sets: Methodological issues. Bioinformatics. 2007;23: 980–987.
42.
Karp PD, Midford PE, Caspi R, Khodursky A. Pathway size matters: The influence of pathway granularity on over-representation (enrichment analysis) statistics. BMC Genomics. 2021;22: 191.
43.
Mubeen S, Tom Kodamullil A, Hofmann-Apitius M, Domingo-Fernandez D. On the influence of several factors on pathway enrichment analysis. Brief Bioinform. 2022;23: bbac143.
44.
Gatti DM, Barry WT, Nobel AB, Rusyn I, Wright FA. Heading down the wrong pathway: On the influence of correlation within gene sets. BMC Genomics. 2010;11: 574.
45.
Wu D, Smyth GK. Camera: A competitive gene set test accounting for inter-gene correlation. Nucleic Acids Res. 2012;40: e133.
46.
Ziemann M, Schroeter B, Bora A. Two subtle problems with overrepresentation analysis. Bioinform Adv. 2024;4: vbae159.
47.
Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4: 44–57.
48.
Xie C, Jauhari S, Mora A. Popularity and performance of bioinformatics software: The case of gene set analysis. BMC Bioinformatics. 2021;22: 191.
49.
Perkel JM. Challenge to scientists: Does your ten-year-old code still run? Nature. 2020;584: 656–659.
50.
Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3: 160384.