Transformers Enhance the Predictive Power of Network Medicine
Publication
medRxiv
January 28, 2025
Abstract
Background: The self-attention mechanisms and token embeddings behind transformers allow them to extract complex patterns from large datasets, enhancing their predictive power over traditional machine learning models. Yet, because transformers are trained to make predictions about individual cells or genes, it is not clear whether they can learn the inherent interaction patterns between genes that are ultimately responsible for their mechanism of action. We use Geneformer, a transformer pretrained on single-cell transcriptomes, to ask whether transformers implicitly capture molecular dependencies, including protein-protein interactions (PPIs), allowing us to explore the use of transformers to improve network medicine tasks such as disease gene identification and drug repurposing.
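To make the core idea concrete: when a transformer treats genes as tokens, each self-attention head produces a gene-by-gene score matrix, and the paper asks whether such matrices encode real molecular dependencies like PPIs. The following is a minimal toy sketch of that notion, not the paper's actual pipeline: a single randomly initialized attention head over a handful of hypothetical gene embeddings, showing that the output is a square gene-gene matrix whose rows are probability distributions over candidate interaction partners.

```python
import numpy as np

# Toy illustration (not Geneformer itself): one self-attention head over
# "gene tokens". The head's attention matrix is a gene-by-gene score
# matrix -- the kind of implicit interaction signal one might compare
# against a protein-protein interaction network.

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_matrix(embeddings, d_k=8):
    """Return the (n_genes x n_genes) attention matrix of one head."""
    n, d = embeddings.shape
    W_q = rng.normal(size=(d, d_k))  # query projection (random, untrained)
    W_k = rng.normal(size=(d, d_k))  # key projection (random, untrained)
    Q, K = embeddings @ W_q, embeddings @ W_k
    return softmax(Q @ K.T / np.sqrt(d_k), axis=-1)

# Five hypothetical gene tokens with 16-dimensional embeddings.
genes = ["TP53", "MDM2", "EGFR", "KRAS", "BRCA1"]
emb = rng.normal(size=(len(genes), 16))
A = attention_matrix(emb)

print(A.shape)                           # (5, 5): one row per gene
print(bool(np.allclose(A.sum(axis=1), 1.0)))  # True: rows sum to 1
```

In a trained model such as Geneformer, one would read these matrices out of the pretrained attention heads (and average over heads, layers, or cells) rather than use random projections; high attention between two gene tokens is then a candidate signal for an underlying interaction.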