David Liu
Increasingly, training machine learning models requires compressing vast amounts of data and perspectives. For instance, when learning on social networks, the interactions between people are compressed into a low-dimensional space. Effective machine learning models necessitate efficient, stable, and fair representations; this dissertation identifies challenges in, and algorithms for, achieving such representations for complex networks. First, I present work tackling the technical challenge of learning embeddings efficiently and stably. I demonstrate how we can reduce the memory footprint of graph representation learning by replacing negative sampling with more efficient alternatives based on dimension regularization. I also show that current graph embedding algorithms are unstable under perturbations to the periphery of the network, and present a meta-algorithm for mitigating such instability. Second, I show that graph representation learning broadens our approach to, and understanding of, algorithmic fairness. Graph representation learning enables us to measure group fairness without discrete class labels, and analyzing embeddings reveals mechanisms of unfairness in collaborative filtering. To conclude, I propose work on mitigating popularity bias in recommender systems. Recommender systems are known to learn better representations for popular items, creating a feedback loop of increasingly homogeneous recommendations. I propose adapting regularization techniques from degree-corrected network models, which have been shown to improve group inference in networks with heterogeneous popularity, to improve the representation of low-resource users and items.
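To make the negative-sampling trade-off concrete, here is a minimal NumPy sketch, not the dissertation's actual algorithm: the contrastive loss draws fresh negative pairs for every edge, while a hypothetical dimension-regularized variant replaces those samples with a single global penalty on the embedding dimensions (here, the squared column means, an illustrative choice) computed in one pass over the embedding matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_nodes, dim = 100, 16
Z = rng.normal(scale=0.1, size=(n_nodes, dim))              # toy node embeddings
edges = [(int(rng.integers(n_nodes)), int(rng.integers(n_nodes)))
         for _ in range(300)]                               # toy positive edges

def neg_sampling_loss(Z, edges, k=5):
    # Standard contrastive objective: pull each observed edge together and
    # push apart k freshly sampled "negative" pairs per edge.
    loss = 0.0
    for u, v in edges:
        loss -= np.log(sigmoid(Z[u] @ Z[v]))
        for w in rng.integers(0, len(Z), size=k):
            loss -= np.log(sigmoid(-Z[u] @ Z[w]))
    return loss / len(edges)

def dim_reg_loss(Z, edges, lam=1.0):
    # Hypothetical dimension-regularized alternative: keep the attractive
    # term, but replace per-edge negative samples with one global penalty on
    # the embedding dimensions (squared column means), computed without any
    # sampling or extra memory for negatives.
    attract = -np.mean([np.log(sigmoid(Z[u] @ Z[v])) for u, v in edges])
    mu = Z.mean(axis=0)
    return attract + lam * float(mu @ mu)
```

The sketch shows where the memory savings would come from: the second loss touches only the n × d embedding matrix, with no per-edge negative draws.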
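As a rough illustration of popularity-aware regularization (one plausible instantiation, not the proposed method): uniform L2 shrinkage penalizes rare items heavily relative to their few observations, whereas scaling each item's penalty by its interaction count, in the spirit of weighted-lambda regularization, keeps the penalty proportional to the evidence, so low-resource items are not over-shrunk relative to their data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, dim = 50, 8
V = rng.normal(scale=0.1, size=(n_items, dim))                  # toy item embeddings
counts = rng.integers(1, 200, size=n_items).astype(float)       # per-item interactions

def count_weighted_penalty(V, counts, lam=0.1):
    # Scale each item's L2 penalty by its interaction count, so the ratio of
    # regularization to observed data is the same for popular and rare items.
    weights = lam * counts
    return float(np.sum(weights[:, None] * V**2))

def uniform_penalty(V, lam=0.1):
    # Baseline: one penalty strength for all items, regardless of popularity.
    return float(lam * np.sum(V**2))
```

Degree-corrected network models make an analogous move, introducing per-node popularity parameters so that community structure is not confounded with degree.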