10 common mistakes that could ruin your enrichment analysis

Anusuiya Bora1,2 & Mark Ziemann1,2*

  1. Deakin University, School of Life and Environmental Sciences, Geelong, Australia.

  2. Burnet Institute, Melbourne, Australia

(*) Correspondence:

Author ORCID
Anusuiya Bora 0009-0006-2908-1352
Mark Ziemann 0000-0002-7688-6974

Target journal: F1000 Research

Abstract

Functional enrichment analysis (FEA) is an incredibly powerful way to summarise complex genomics data into information about the regulation of biological pathways including cellular metabolism, signalling and immune responses. About 10,000 scientific articles describe using FEA each year, making it among the most used techniques in bioinformatics. While FEA has become a routine part of workflows via myriad software packages and easy-to-use websites, mistakes can easily creep in due to poor tool design and users’ unawareness of common pitfalls. Here we outline 10 mistakes that undermine the effectiveness of FEA, which we commonly see in research articles, and provide practical advice on their mitigation.

Background

PubMed searches indicate keywords like “pathway analysis” and “enrichment analysis” appear in the titles or abstracts of approximately 10,000 articles per year, and that the number of articles matching these keywords increased by a factor of 5.4 between 2014 and 2024. The purpose of FEA is to understand whether gene categories are collectively differentially represented in the molecular profile at hand, and it involves querying hundreds or thousands of functional categories representing gene pathways or ontologies. There are a variety of methods for FEA, but the main two are over-representation analysis (ORA) and functional class scoring (FCS) (1). ORA involves selecting genes based on a hard cut-off followed by a test of enrichment (e.g.: Fisher’s exact test) as compared to a background list (2). Popular ORA tools include websites like DAVID (3) and software packages like clusterProfiler (4). FCS involves ranking all detected genes followed by a test to assess whether the distribution of scores deviates towards the up- or down-regulated direction. GSEA (5) is a stand-alone FCS software with a graphical user interface, and there are several command-line implementations such as fgsea (6).

Recommendations on correct application of pathway enrichment have been previously published (7–10), yet we and others continue to observe blatant mistakes and methodological deficiencies appearing in peer-reviewed publications at an alarming rate (11,12). The purpose of this opinion article is to share what our group has learned about successful FEA over the past 15 years having authored dozens of articles using it and critically examining hundreds of published articles using the method.

Using an example RNA-seq dataset and simulation analysis, we provide evidence to show just how impactful these mistakes are. Details of the analysis are provided in the Supplementary Material.

1. Using uncorrected p-values for statistical significance

Enrichment tests generate p-values (probability values) between 0 and 1. The p-value estimates the probability of observing a result at least as extreme by random chance alone. A low p-value (e.g.: p<0.05) indicates the observed result would be unlikely from random data, suggesting a real effect. However, as gene set libraries can contain thousands of categories, we could expect 5% of the gene sets to meet the p<0.05 threshold with random data. In the example dataset, we randomly selected 1000 genes that met the detection threshold and submitted these to ORA with Reactome. Of the 1840 Reactome gene sets with five or more members detected, we obtained a mean of 51.6 hits (p<0.05) with these random genes (Figure 1A), demonstrating that reporting raw p-values is bound to yield many false positives, in this case at a rate of 2.8%.

There are a variety of p-value correction methods to reduce the risk of false positives (7), including approaches from Sidak (13), Holm-Bonferroni (14) and Benjamini-Hochberg (15). The Benjamini-Hochberg method, also called the false discovery rate (FDR) method, appears to be the most widely used in genomics to adjust p-values, but it has been critiqued as being overly conservative when a larger fraction of tests are not null (16). After applying FDR adjustment to the results of the random gene sets (Figure 1A), we see that the mean number of significant hits (FDR adjusted p<0.05) is 0.16, effectively eliminating false positives.
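The Benjamini-Hochberg procedure is simple enough to sketch in a few lines. The following is a minimal Python illustration (the raw p-values are made up, not from the example analysis), mirroring what R's p.adjust(..., method="BH") computes:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, returned in input order."""
    m = len(pvals)
    # Walk from the largest p-value to the smallest, enforcing monotonicity
    order = sorted(range(m), key=lambda i: pvals[i], reverse=True)
    adjusted = [0.0] * m
    running_min = 1.0
    for pos, i in enumerate(order):
        rank = m - pos  # rank 1 = smallest p-value
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Eight illustrative raw p-values, five of which are below 0.05
raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
adj = bh_adjust(raw)
print([round(p, 3) for p in adj])
# [0.008, 0.032, 0.067, 0.067, 0.067, 0.08, 0.085, 0.205]
```

Note that only two of the five raw p-values below 0.05 survive adjustment, illustrating how FDR control trims likely false positives.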

In the example dataset (AML3 cells treated with azacitidine), omitting FDR correction leads to the identification of 696 Reactome gene sets with p<0.05, of which 345 are likely false positives (~49.6%) (Figure 1B). Omitting p-value correction could therefore lead to half of the enrichment results being false, which is why it is number one on our list. Most enrichment analysis packages provide adjusted p-values, so if the tool you’re using doesn’t, then it is time for a change.

Figure 1A. FDR control reduces false positives.

Figure 1B. Impact of FDR correction of p-values on the number of 'significant' gene sets.

2. Not using a custom background gene list

Every omics analysis has its limitations. In the world of gene expression, a microarray can only measure the genes it was designed to assay. RNA-seq has certain genes that are poorly detected as a result of sequence similarity or GC bias. Biological differences also play a big part in what is detected (11). Of the 78,691 human genes annotated in GENCODE’s latest release, typically only 12-20k are expressed at detectable levels in any one tissue. The example dataset is based on an earlier Ensembl release (v90) with 58,302 annotated genes, most of which are silent: 45,134 recorded a mean of less than 10 reads per sample across the six samples, and only 13,168 genes met this detection threshold. Although 77.4% of genes are discarded at this step, they account for a minuscule 0.2% of reads, and these genes are expressed at such low levels that they never had a realistic chance of being called differentially expressed. Filtering poorly detected genes gives a slight boost to differential gene identification (17), but it is most crucial in defining the background gene list for subsequent enrichment analysis.

Using a simulation approach, we can demonstrate the consequence of omitting a background in an RNA-seq experiment. We drew a random set of 1000 genes from those with average expression above 10 reads per sample in the AML3 example dataset and applied a hypergeometric enrichment test with the whole genome annotation as the background; on average, 444 gene sets reached the FDR<0.05 significance level (Figure 2A). Meanwhile, if we use a custom background gene list composed of genes meeting the 10 reads per sample threshold, we expect on average only 0.16 gene sets with FDR<0.05, practically eliminating false positives. Many of these false positives are highly reproducible (Figure 2B), and as many of them relate to relevant cell functions like cell cycle, immune function and signalling, they have the potential to mislead readers.

Returning to the example dataset, we performed ORA with a whole genome background and separately with the custom background of detected genes. The whole genome background gave a much larger number of significant gene sets (1279) in contrast to the custom background analysis (351) (Figure 2C). The overlap between them was just 347, giving a Jaccard statistic of 0.27. In other words, without the right background gene list, up to 72% of results could be false positives.
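To see why the background matters, consider a single hypothetical gene set described in the example dataset’s terms: 10 of its 40 detected members appear in a list of 1000 selected genes. A minimal hypergeometric ORA sketch in Python (the counts are illustrative, not from the real analysis) shows how swapping the 13,168-gene detected background for the 58,302-gene whole-genome annotation inflates the apparent significance:

```python
from math import comb
from fractions import Fraction

def ora_pvalue(k, K, n, N):
    """Upper-tail hypergeometric P(X >= k): k hits from a K-gene set,
    in a list of n selected genes drawn from a background of N genes."""
    numer = sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1))
    return float(Fraction(numer, comb(N, n)))  # exact rational, then float

hits, set_size, list_size = 10, 40, 1000  # hypothetical counts
p_detected = ora_pvalue(hits, set_size, list_size, 13168)  # detected-gene background
p_genome = ora_pvalue(hits, set_size, list_size, 58302)    # whole-genome background
print(f"detected background:     p = {p_detected:.2e}")
print(f"whole-genome background: p = {p_genome:.2e}")
```

The same overlap looks far more significant against the whole-genome background, because the expected overlap drops from about 3 genes to fewer than 1; the extra significance is an artefact of counting genes that were never detectable.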

Figure 2A. Impact of background list on the number of significant gene sets. 100 simulations.

Figure 2B. Gene sets consistently appearing as false positives include those related to cancer.

Figure 2C. Impact of background list on the number of significant gene sets. Example dataset. The correct background based on the implementation of detection threshold is labelled 'bg', while the incorrect whole genome background is labelled 'bg*'.

3. Using a tool that doesn’t report enrichment scores

FDR values can tell us whether something is statistically significant, but they don’t directly indicate whether there will be any biological impact. For that, we need some measure of effect size. In enrichment analysis, we can use an enrichment score as a proxy measure of effect size. For rank-based tools like GSEA, the enrichment score varies from -1 to +1, denoting the distribution of genes in a gene set relative to all other genes. A score of 1.0 would mean that the X genes in the set are the X most upregulated, while a value close to 0 means the distribution of genes is close to what you might get by random chance. For over-representation methods like DAVID, the fold enrichment score is often quoted, which is the proportion of list genes belonging to a gene set divided by the corresponding proportion in the background. Unfortunately, DAVID doesn’t provide the fold enrichment scores on the main results page; they are only available in the table for download. Many other common tools don’t even calculate enrichment scores (looking at you, clusterProfiler), which leaves researchers in the dark about their effect sizes. Tools that do provide enrichment scores include ShinyGO (web) (18), GSEA (5) and fgsea (fora) (6).
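As a concrete illustration of the fold enrichment score, here is a sketch with hypothetical counts: 10 of 1000 list genes fall in a 40-gene set, against a background of 13,168 detected genes.

```python
def fold_enrichment(hits, list_size, set_size, background_size):
    """Ratio of the in-list proportion of a gene set to its background proportion."""
    return (hits / list_size) / (set_size / background_size)

# Hypothetical counts: 10 of 1000 list genes are in a 40-gene set,
# against a background of 13,168 detected genes.
fe = fold_enrichment(hits=10, list_size=1000, set_size=40, background_size=13168)
print(f"fold enrichment = {fe:.2f}")  # 1.0% of the list vs ~0.3% of the background
```

A fold enrichment of about 3.3 means the gene set is over three times more prevalent in the list than expected by chance, which gives readers a sense of effect size that a p-value alone cannot.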

4. Prioritising results solely by p-value

Pathway enrichment analysis can return hundreds of significant results, which can be confusing to interpret. Many tools by default will sort the results by significance, but this can lead you to prioritise pathways that are very large but where each gene is only slightly dysregulated. To demonstrate this, compare the results of a typical pathway enrichment analysis under p-value prioritisation and under enrichment score (ES) prioritisation after removal of non-significant pathways (Table 1 and Table 2).

Table 1. Top upregulated pathways when prioritised by FDR.
Gene set | Set Size | p-value | FDR | ES
Cell Cycle | 589 | 1.4e-47 | 2.6e-44 | 0.35
Metabolism of RNA | 725 | 4.4e-46 | 4.0e-43 | 0.31
Cell Cycle, Mitotic | 479 | 1.5e-39 | 9.1e-37 | 0.35
Cell Cycle Checkpoints | 246 | 2.1e-27 | 9.6e-25 | 0.40
M Phase | 336 | 9.5e-27 | 2.9e-24 | 0.34
Mitotic Prometaphase | 189 | 4.6e-25 | 1.1e-22 | 0.44
Mitotic Metaphase and Anaphase | 208 | 8.9e-24 | 1.8e-21 | 0.40
Mitotic Anaphase | 207 | 2.3e-23 | 4.3e-21 | 0.40
Processing of Capped Intron-Containing Pre-mRNA | 273 | 6.6e-23 | 1.1e-20 | 0.35
Resolution of Sister Chromatid Cohesion | 114 | 1.0e-20 | 1.6e-18 | 0.51
Table 2. Top upregulated pathways when prioritised by ES. Pathways with FDR>0.05 were excluded.
Gene set | Set Size | p-value | FDR | ES
Activation of NOXA and translocation to mitochondria | 5 | 1.1e-03 | 6.9e-03 | 0.84
Condensation of Prometaphase Chromosomes | 11 | 2.5e-06 | 3.5e-05 | 0.82
Postmitotic nuclear pore complex (NPC) reformation | 27 | 1.8e-11 | 6.4e-10 | 0.75
Phosphorylation of Emi1 | 6 | 1.6e-03 | 8.8e-03 | 0.75
Interactions of Rev with host cellular proteins | 37 | 6.8e-15 | 5.2e-13 | 0.74
Nuclear import of Rev protein | 34 | 1.7e-13 | 9.9e-12 | 0.73
Rev-mediated nuclear export of HIV RNA | 35 | 1.0e-13 | 6.3e-12 | 0.73
Transport of Ribonucleoproteins into the Host Nucleus | 32 | 2.1e-12 | 1.0e-10 | 0.72
Export of Viral Ribonucleoproteins from Nucleus | 32 | 2.9e-12 | 1.3e-10 | 0.71
NEP/NS2 Interacts with the Cellular Export Machinery | 32 | 2.9e-12 | 1.3e-10 | 0.71

P-value prioritisation emphasises generic functions with very large gene sets, while enrichment score prioritisation highlights much smaller gene sets with highly specific functions, where each member gene shows a relatively larger change in expression (Figure 3). These more specific gene sets are in general better candidates for downstream validation due to their explanatory power.
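The two prioritisation strategies can be sketched with a few rows from Tables 1 and 2 (plus one fabricated non-significant entry) to show how the ordering changes:

```python
# (gene set, FDR, ES, set size); values taken from Tables 1 and 2,
# plus one fabricated non-significant entry to exercise the filter.
results = [
    ("Cell Cycle", 2.6e-44, 0.35, 589),
    ("Metabolism of RNA", 4.0e-43, 0.31, 725),
    ("Activation of NOXA and translocation to mitochondria", 6.9e-03, 0.84, 5),
    ("Phosphorylation of Emi1", 8.8e-03, 0.75, 6),
    ("Hypothetical non-significant set", 0.20, 0.60, 12),
]

by_fdr = sorted(results, key=lambda r: r[1])  # the typical tool default
significant = [r for r in results if r[1] < 0.05]  # filter on FDR first...
by_es = sorted(significant, key=lambda r: abs(r[2]), reverse=True)  # ...then rank by |ES|

print("top by FDR:", by_fdr[0][0])  # a huge, generic set
print("top by ES :", by_es[0][0])   # a small, specific set
```

Sorting on the absolute ES matters for FCS results, where down-regulated sets carry negative scores.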

Figure 3. Scatterplot showing absolute enrichment scores (x-axis) and log-transformed significance values (y-axis). Gene sets with FDR<0.05 are highlighted in red.

5. Using gene lists that are too large or too small for ORA

It’s a common misconception that only differentially expressed genes that meet the FDR threshold should be submitted to an enrichment test, but this simply isn’t true. So long as you’re using proper FDR control of your pathways (see #1 above), you can select genes in any arbitrary way you like. The caveat here is that enrichment tests (like the hypergeometric test) have input gene list size ranges that work best. We tested different input gene list sizes in ORA and found that 2500 yielded the most significant pathways (456 with FDR<0.05), while sizes of 200 and less yielded very few (Figure 4). In the range of 300-1000 there is a steep increase in the number of significant pathways, after which the gradient reduces. This suggests a sweet spot around 1000, which in our example is 7.6% of the 13,168 genes detected. If you want to avoid arbitrary thresholds (which seem to annoy reviewers), we’d suggest instead using a method like GSEA that calculates enrichment from all detected genes.

Figure 4. Effect of gene list size on number of significant pathways. Up-regulated in red, down-regulated in blue.

6. Combining up- and down-regulated genes in the same ORA test

In some articles we’ve read, we noticed that authors didn’t conduct separate ORA tests for up- and down-regulated gene lists, instead opting to submit the combined list for ORA. This isn’t necessarily an error, as it tests the hypothesis that some pathways are “dysregulated,” containing a mix of up- and down-regulated genes at an elevated rate. However, the combined approach can miss a lot of results as compared to the separate approach. In our example analysis, the separate approach identified 355 pathways while the combined approach found only 149, that’s 58% fewer (see Figure 5). The combined approach could uniquely identify some pathways, but relatively few: in the example dataset, only 2.2% of results were identified exclusively with the combined test.

The reason behind this is two-fold. Firstly, genes in the same pathway are typically correlated with each other (19). Consider cell cycle genes, or genes responding to pathogens, which are activated in unison to coordinate a complex biological process. In a typical differential expression experiment after a stimulus, this results in pathways that are predominantly up- or down-regulated, but rarely a mix of up and down. Secondly, because of this phenomenon, the up and down lists each have relatively strong enrichments, but these are diluted when the lists are combined (20). Failing to report the results of the separate approach could therefore leave you with 58% fewer results and an incomplete picture of what’s happening at the molecular level.
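The dilution effect can be illustrated with a hypergeometric sketch (all counts hypothetical): a 40-gene pathway with 12 members in a 500-gene up-regulated list, tested separately and then again after merging with a 500-gene down-regulated list that contributes no extra hits.

```python
from math import comb
from fractions import Fraction

def ora_pvalue(k, K, n, N):
    """Upper-tail hypergeometric P(X >= k) for k hits from a K-gene set
    in an n-gene list drawn from an N-gene background."""
    numer = sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1))
    return float(Fraction(numer, comb(N, n)))

N = 13168  # detected-gene background, as in the example dataset
p_separate = ora_pvalue(12, 40, 500, N)   # 12 hits in the 500-gene up list
p_combined = ora_pvalue(12, 40, 1000, N)  # same 12 hits diluted in 1000 genes
print(f"separate: p = {p_separate:.2e}")
print(f"combined: p = {p_combined:.2e}")
```

The enrichment signal weakens in the combined test because the list doubles in size while the hit count stays the same, so a pathway that is clearly significant in the up list alone can fall towards the threshold once diluted.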

Figure 5. Combining up and downregulated genes into one ORA test yields far fewer results.

7. Using shallow gene annotations

One of the most important decisions you’ll make when doing a pathway enrichment analysis is selecting the database to query. There are many options to consider, both proprietary and open source. When choosing, consider whether the database contains the gene sets that you a priori suspect will be altered in the profile you’re looking at. Secondly, consider the breadth and depth of the pathway library; this is where the unexpected discoveries may occur, and it pays to use a comprehensive library to capture as many aspects of your data as possible. To demonstrate this, see how KEGG legacy and KEGG Medicus seem tiny when compared to Reactome, which is itself dwarfed by Gene Ontology’s Biological Process (GOBP; Table 3). Consequently, the results obtained are substantially richer for Reactome and GOBP as compared to the KEGG libraries.

Table 3. Size metrics of selected gene set libraries, and the number of differentially regulated pathways (FDR<0.05).
Library | No. gene sets | Total no. annotations | Median gene set size | No. genes with ≥1 annotation | Up-regulated | Down-regulated
KEGG | 186 | 12800 | 52.5 | 5245 | 11 | 51
KEGGM | 658 | 9662 | 11.5 | 2788 | 20 | 18
Reactome | 1787 | 97544 | 23.0 | 11369 | 165 | 117
GOBP | 7583 | 616242 | 20.0 | 18000 | 340 | 1416
MSigDB | 35134 | 4089406 | 47.0 | 43351 | 3214 | 7217

8. Using outdated gene identifiers and gene sets

One great thing about working with omics data is that databases like GEO are loaded with old datasets that we can reanalyse with new pathway databases and software tools to eke out further insights. When the data is several years old, we should use the processed data with caution, as many gene names may have since changed. For example, Illumina’s EPIC DNA methylation microarray was released in 2016, and in the following eight years, 3,253 of the 22,588 gene names on the chip changed (14.4%) (21), meaning that those genes wouldn’t be recognised by pathway enrichment software. To update defunct gene symbols, the HGNChelper R package can help (22), and it has the added benefit of fixing gene symbols ruined by Excel autocorrect, which are unfortunately common in GEO (23). Persistent gene identifiers like Ensembl (eg: ENSG00000180096) and HGNC (eg: HGNC:2879) are less likely to change over time and are therefore preferable over gene symbols (eg: SEPTIN1) for FEA.

The depth of pathway databases increases every year as annotation consortia continue assimilating functional information from the literature, and this impacts the quality of results and the conclusions that can be derived (24). Sometimes these databases undergo large updates, as shown by the chart of Reactome growth below (Figure 6). To be certain you have the best possible gene annotation for your analysis, it’s always best to download the newest version.

Figure 6. Reactome gene set growth over time. Gene sets were downloaded from the MSigDB website, except the last bar, which represents the latest gene sets downloaded directly from Reactome, not yet incorporated into MSigDB.

9. Bad presentation

When discussing FEA with colleagues, it is often said that the method is used to generate “filler”: data and charts used to “pad out” articles that would otherwise be too short or lack the normal number of figures. A general rule of thumb for FEA is to only show the charts and data that are relevant to assessing the aims and hypotheses, and that contribute to the conclusions. While others have recommended using multiple FEA tools (9), we’d suggest limiting the data shown in an article to just one FEA method and one or two gene set databases (eg: Reactome, transcription factor target genes). Excessive use of tools and databases can make the results difficult to interpret, as in (25).

We’ve also noticed cases of outright confusing, incomplete and simply wrong data presentation choices that you should avoid:

  1. The number of selected genes in a category is often shown as evidence of enrichment (26–29), but this can be misleading because it is only one of the four numbers that go into calculating a fold enrichment score.

  2. Similarly, the proportion of selected genes that belong to a gene category is sometimes shown (30–33), but this does not directly reflect the fold enrichment score.

  3. Presenting enrichment results as a pie chart (25,27,32,34) isn’t recommended because it isn’t possible to show enrichment scores and significance values in this form.

  4. Sometimes a network of genes or pathways are shown, but the significance of nodes and edges aren’t described (35).

  5. Figures missing key elements such as axis labels (27,36–39).

  6. FEA mentioned in the abstract but no data shown in the main article (40,41).

  7. Confusion around which tool was used for each figure and panel (eg: (42)).

10. Neglecting methods reproducibility

According to Goodman and colleagues (43), methods reproducibility is:

“the provision of enough detail about study procedures and data so the same procedures could, in theory or in actuality, be exactly repeated.”

There are several crucial pieces of information required to enable reproducibility of enrichment analysis, including:

  • how genes were selected or scored,

  • the tool used, and its version,

  • the options or parameters applied,

  • the statistical test used,

  • the database(s) queried, and their versions,

  • for ORA, how a background list was defined, and,

  • how p-value correction was done (9,44).

A systematic literature analysis published in 2022 found insufficient background description in 95% of articles describing ORA tests, and p-value correction was insufficiently described in 57% of articles, suggesting that enrichment analysis generally suffers from a lack of methods reproducibility (12). Indeed, this literature analysis turned up some memorably poor methodological descriptions, including:

“GO and KEGG enrichment analysis were performed on the DEGs by perl scripts in house.” (45).

“Gene ontology (GO) and pathway analysis were performed in the standard enrichment computation method.” (46).

We’ve also noted some cases where FEA wasn’t described in the methods at all, despite the results being deemed important enough to mention in the abstract (47–49). Moreover, we’ve identified cases where the tool mentioned in the methods section doesn’t actually match what’s shown in the results (50,51).

In addition to including the methodological details mentioned above, authors could also provide the gene profile data and/or gene lists used for enrichment analysis as supplementary files, or better still, provide full reproducibility and transparency with the five pillars framework (52).

Other issues

There are several more subtle issues not covered in depth here that are worth mentioning because they have been flagged as potential problems. First, the length of genes is known to impact the ease with which they are detected, so correction for gene length has been suggested to improve enrichment results (53,54). Second, many FEA tests use genes as the sampling unit and do not consider or model biological variation, which could yield unrealistic significance values (55). Third, the size of gene sets, even when they represent similar biology, can disproportionately impact significance scores and complicate interpretation (56,57). Fourth, tight correlation between genes’ expression within a pathway could exacerbate false positive rates (58,59). Fifth, slight differences in the implementation of ORA tests can impact results in some circumstances (60). Lastly, some web-based FEA tools lack longevity. For example, DAVID 6.8 (3,61) has been used in over 10,000 publications but has been offline since 2022, leaving those articles in an irreproducible state. As web-based tools appear to be the most popular option for FEA (12,62), we should advocate for tools which allow preservation/archiving as a Docker image (eg: (18)), which could enable reproducibility into the future (63).

Conclusion

So why are problems so pervasive in enrichment analysis? It’s likely a combination of poor researcher training, supervision and peer-review scrutiny. The design of tools and (poor) quality of tool documentation might also play a role. We also know that poor methods have a type of advantage compared to the more rigorous ones due to researcher preferences to present significant findings (64) and reliance upon default settings even if they are incorrect (12,57). Problems 2, 3, 5 and 6 appear to be specific to ORA-based tools, and can be avoided entirely by switching to FCS tools like GSEA; this has the added benefit of enhanced accuracy in terms of precision and recall (60,65). Although learning and running FCS tools is more difficult and time-consuming, the benefits to the quality of results are substantial.

The deluge of manuscripts in genomics and other fields places an increasing burden on a limited pool of competent, voluntary peer-reviewers, to the point where editors are struggling to maintain the peer-review system as we know it (66). As studies become ever more multi-disciplinary, it becomes more difficult for editors to cover all technical aspects thoroughly, allowing mistakes like the above to become widespread.

Bibliography

1.
Khatri P, Sirota M, Butte AJ. Ten years of pathway analysis: Current approaches and outstanding challenges. PLoS Comput Biol. 2012;8(2):e1002375.
2.
Tavazoie S, Hughes JD, Campbell MJ, Cho RJ, Church GM. Systematic determination of genetic network architecture. Nat Genet. 1999;22(3):281–5.
3.
Sherman BT, Hao M, Qiu J, Jiao X, Baseler MW, Lane HC, et al. DAVID: A web server for functional enrichment analysis and functional annotation of gene lists (2021 update). Nucleic Acids Res. 2022;50(W1):W216–21.
4.
Wu T, Hu E, Xu S, Chen M, Guo P, Dai Z, et al. clusterProfiler 4.0: A universal enrichment tool for interpreting omics data. Innovation (Camb). 2021;2(3):100141.
5.
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, et al. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102(43):15545–50.
6.
Korotkevich G, Sukhov V, Budin N, Shpak B, Artyomov MN, Sergushichev A. Fast gene set enrichment analysis. bioRxiv. 2021.
7.
Tilford CA, Siemers NO. Gene set enrichment analysis. Methods Mol Biol. 2009;563:99–121.
8.
Tipney H, Hunter L. An introduction to effective use of enrichment analysis software. Hum Genomics. 2010;4(3):202–6.
9.
Chicco D, Agapito G. Nine quick tips for pathway enrichment analysis. PLoS Comput Biol. 2022;18(8):e1010348.
10.
Zhao K, Rhee SY. Interpreting omics data with pathway enrichment analysis. Trends Genet. 2023;39(4):308–19.
11.
Timmons JA, Szkop KJ, Gallagher IJ. Multiple sources of bias confound functional enrichment analysis of global -omics data. Genome Biol. 2015;16(1):186.
12.
Wijesooriya K, Jadaan SA, Perera KL, Kaur T, Ziemann M. Urgent need for consistent standards in functional enrichment analysis. PLoS Comput Biol. 2022;18(3):e1009935.
13.
Ury HK. Comparison of four procedures for multiple comparisons among means (pairwise contrasts) for arbitrary sample sizes. Technometrics. 1976;18(1):89–97.
14.
Holm S. A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics. 1979;65–70.
15.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological). 1995;57(1):289–300.
16.
Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proceedings of the National Academy of Sciences. 2003;100(16):9440–5.
17.
Sha Y, Phan JH, Wang MD. Effect of low-expression gene filtering on detection of differentially expressed genes in RNA-seq data. In: 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE; 2015. p. 6461–4.
18.
Ge SX, Jung D, Yao R. ShinyGO: A graphical gene-set enrichment tool for animals and plants. Bioinformatics. 2020;36(8):2628–9.
19.
Gatti DM, Barry WT, Nobel AB, Rusyn I, Wright FA. Heading down the wrong pathway: On the influence of correlation within gene sets. BMC Genomics. 2010;11(1):574.
20.
Hong G, Zhang W, Li H, Shen X, Guo Z. Separate enrichment analysis of pathways for up- and downregulated genes. J R Soc Interface. 2014;11(92):20130950.
21.
Ziemann M, Abeysooriya M, Bora A, Lamon S, Kasu MS, Norris MW, et al. Direction-aware functional class scoring enrichment analysis of infinium DNA methylation data. Epigenetics. 2024;19(1):2375022.
22.
Oh S, Abdelnabi J, Al-Dulaimi R, Aggarwal A, Ramos M, Davis S, et al. HGNChelper: Identification and correction of invalid gene symbols for human and mouse. F1000Res. 2020;9:1493.
23.
Ziemann M, Eren Y, El-Osta A. Gene name errors are widespread in the scientific literature. Genome Biol. 2016;17(1).
24.
Wadi L, Meyer M, Weiser J, Stein LD, Reimand J. Impact of outdated gene annotations on pathway enrichment analysis. Nature methods. 2016;13(9):705–6.
25.
Lin YT, Wu PH, Tsai YC, Hsu YL, Wang HY, Kuo MC, et al. Indoxyl sulfate induces apoptosis through oxidative stress and mitogen-activated protein kinase signaling pathway inhibition in human astrocytes. Journal of Clinical Medicine. 2019;8(2):191.
26.
Zhao Z, Zhao Q, Zhu S, Huang B, Lv L, Chen T, et al. iTRAQ-based comparative proteomic analysis of cells infected with eimeria tenella sporozoites. Parasite. 2019;26:7.
27.
Li C, E C, Zhou Y, Yu W. Candidate genes and potential mechanisms for chemoradiotherapy sensitivity in locally advanced rectal cancer. Oncology Letters. 2019;17(5):4494–504.
28.
Sarıman M, Abacı N, Ekmekçi SS, Çakiris A, Paçal FP, Üstek D, et al. Investigation of gene expressions of myeloma cells in the bone marrow of multiple myeloma patients by transcriptome analysis. Balkan medical journal. 2019;36(1):23.
29.
Bhatia G, Sharma S, Upadhyay SK, Singh K. Long non-coding RNAs coordinate developmental transitions and other key biological processes in grapevine. Scientific Reports. 2019;9(1):3552.
30.
Wang X, Diao L, Sun D, Wang D, Zhu J, He Y, et al. OsteoporosAtlas: A human osteoporosis-related gene database. PeerJ. 2019;7:e6778.
31.
Wang XM, Tian FY, Fan LJ, Xie CB, Niu ZZ, Chen WQ. Comparison of DNA methylation profiles associated with spontaneous preterm birth in placenta and cord blood. BMC Medical Genomics. 2019;12(1):1.
32.
Hu F, Li Y, Yu K, Huang B, Ma X, Liu C, et al. ITRAQ-based quantitative proteomics reveals the proteome profiles of primary duck embryo fibroblast cells infected with duck tembusu virus. BioMed research international. 2019;2019(1):1582709.
33.
Koh SY, Moon JY, Unno T, Cho SK. Baicalein suppresses stem cell-like characteristics in radio-and chemoresistant MDA-MB-231 human breast cancer cells through up-regulation of IFIT2. Nutrients. 2019;11(3):624.
34.
Liu Y, Zhu D, Xing H, Hou Y, Sun Y. A 6-gene risk score system constructed for predicting the clinical prognosis of pancreatic adenocarcinoma patients. Oncology reports. 2019;41(3):1521–30.
35.
Bandi S, Tchaikovskaya T, Gupta S. Hepatic differentiation of human pluripotent stem cells by developmental stage-related metabolomics products. Differentiation. 2019;105:54–70.
36.
Boyko AV, Girich AS, Eliseikina MG, Maslennikov SI, Dolmatov IY. Reference assembly and gene expression analysis of apostichopus japonicus larval development. Scientific Reports. 2019;9(1):1131.
37.
Lou W, Ding B, Fan W. High expression of pseudogene PTTG3P indicates a poor prognosis in human breast cancer. Molecular Therapy-Oncolytics. 2019;14:15–26.
38.
Shi Y, Sun H, Wang X, Jin W, Chen Q, Yuan Z, et al. Physiological and transcriptomic analyses reveal the molecular networks of responses induced by exogenous trehalose in plant. PLoS One. 2019;14(5):e0217204.
39. Li M, Guo Y, Feng YM, Zhang N. Identification of triple-negative breast cancer genes and a novel high-risk breast cancer prediction model development based on PPI data and support vector machines. Frontiers in Genetics. 2019;10:180.
40. Xu L, Wang L, Zhou L, Dorfman RG, Pan Y, Tang D, et al. The SIRT2/cMYC pathway inhibits peroxidation-related apoptosis in cholangiocarcinoma through metabolic reprogramming. Neoplasia. 2019;21(5):429–41.
41. Di Gerlando R, Mastrangelo S, Sardina MT, Ragatzu M, Spaterna A, Portolano B, et al. A genome-wide detection of copy number variations using SNP genotyping arrays in Braque Français type Pyrénées dogs. Animals. 2019;9(3):77.
42. Jin L, Zhu C, Qin X. Expression profile of tRNA-derived fragments in pancreatic cancer. Oncology Letters. 2019;18(3):3104–14.
43. Goodman SN, Fanelli D, Ioannidis JP. What does research reproducibility mean? Science Translational Medicine. 2016;8(341):341ps12.
44. Wijesooriya K, Jadaan SA, Perera KL, Kaur T, Ziemann M. Guidelines for reliable and reproducible functional enrichment analysis. bioRxiv. 2021.
45. Zhou T, Luo X, Yu C, Zhang C, Zhang L, Song Y, et al. Transcriptome analyses provide insights into the expression pattern and sequence similarity of several taxol biosynthesis-related genes in three Taxus species. BMC Plant Biology. 2019;19(1):33.
46. Liu F, Wei J, Hao Y, Tang F, Jiao W, Qu S, et al. Long noncoding RNAs and messenger RNAs expression profiles potentially regulated by ZBTB7A in nasopharyngeal carcinoma. BioMed Research International. 2019;2019(1):7246491.
47. Hu N, Cheng Z, Pang Y, Zhao H, Chen L, Wang C, et al. High expression of miR-98 is a good prognostic factor in acute myeloid leukemia patients treated with chemotherapy alone. Journal of Cancer. 2019;10(1):178.
48. Zhao J, Xu J, Chen B, Cui W, Zhou Z, Song X, et al. Characterization of proteins involved in chloroplast targeting disturbed by rice stripe virus by novel protoplast–chloroplast proteomics. International Journal of Molecular Sciences. 2019;20(2):253.
49. Chen L, Chen Q, Kuang S, Zhao C, Yang L, Zhang Y, et al. USF1-induced upregulation of LINC01048 promotes cell proliferation and apoptosis in cutaneous squamous cell carcinoma by binding to TAF15 to transcriptionally activate YAP1. Cell Death & Disease. 2019;10(4):296.
50. Li M, Li A, Zhou S, Lv H, Yang W. SPAG5 upregulation contributes to enhanced c-MYC transcriptional activity via interaction with c-MYC binding protein in triple-negative breast cancer. Journal of Hematology & Oncology. 2019;12(1):14.
51. Tong Y, Song Y, Deng S. Combined analysis and validation for DNA methylation and gene expression profiles associated with prostate cancer. Cancer Cell International. 2019;19(1):50.
52. Ziemann M, Poulain P, Bora A. The five pillars of computational reproducibility: Bioinformatics and beyond. Briefings in Bioinformatics. 2023;24(6):bbad375.
53. Mi G, Di Y, Emerson S, Cumbie JS, Chang JH. Length bias correction in gene ontology enrichment analysis using logistic regression. 2012.
54. Mandelboum S, Manber Z, Elroy-Stein O, Elkon R. Recurrent functional misinterpretation of RNA-seq data caused by sample-specific gene length bias. PLoS Biology. 2019;17(11):e3000481.
55. Goeman JJ, Bühlmann P. Analyzing gene expression data in terms of gene sets: Methodological issues. Bioinformatics. 2007;23(8):980–7.
56. Karp PD, Midford PE, Caspi R, Khodursky A. Pathway size matters: The influence of pathway granularity on over-representation (enrichment analysis) statistics. BMC Genomics. 2021;22(1):191.
57. Mubeen S, Tom Kodamullil A, Hofmann-Apitius M, Domingo-Fernandez D. On the influence of several factors on pathway enrichment analysis. Briefings in Bioinformatics. 2022;23(3):bbac143.
58. Gatti DM, Barry WT, Nobel AB, Rusyn I, Wright FA. Heading down the wrong pathway: On the influence of correlation within gene sets. BMC Genomics. 2010;11(1):574.
59. Wu D, Smyth GK. Camera: A competitive gene set test accounting for inter-gene correlation. Nucleic Acids Research. 2012;40(17):e133.
60. Ziemann M, Schroeter B, Bora A. Two subtle problems with overrepresentation analysis. Bioinformatics Advances. 2024;4(1):vbae159.
61. Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nature Protocols. 2009;4(1):44–57.
62. Xie C, Jauhari S, Mora A. Popularity and performance of bioinformatics software: The case of gene set analysis. BMC Bioinformatics. 2021;22(1):191.
63. Perkel JM. Challenge to scientists: Does your ten-year-old code still run? Nature. 2020;584(7822):656–9.
64. Smaldino PE, McElreath R. The natural selection of bad science. Royal Society Open Science. 2016;3(9):160384.
65. Kaspi A, Ziemann M. Mitch: Multi-contrast pathway enrichment for multi-omics and single-cell profiling data. BMC Genomics. 2020;21(1):447.
66. Adam D. The peer-review crisis: How to fix an overloaded system. Nature. 2025;644(8075):24–7.