Aida Nematzadeh
11th floor
58 St Katharine's Way
London E1W 1LP, UK
Language is one of the greatest puzzles of both human and artificial intelligence (AI). Children learn language seemingly effortlessly, yet it is a complex process that we do not fully understand. Moreover, although access to more data and computation has driven recent advances in AI systems, these systems still fall short of human performance on many language tasks. How do humans learn and represent language? And how can this inform AI?
In this talk, I focus on the representation of semantic knowledge -- word meanings and their relations -- which is an important aspect of both child language learning and AI systems: it shapes how word meanings are stored in, searched for, and retrieved from memory. First, I discuss how humans learn and represent semantic knowledge. I show that, using the evolving knowledge of word relations and their contexts, we can grow a network that exhibits the properties of adult semantic knowledge, and that this can be achieved with limited computation. Next, I explain how investigating human semantic processing helps us model semantic representations more accurately. I show that recent neural models of semantics, despite being trained on huge amounts of data, fail to capture important aspects of human similarity judgements. I also show that a probabilistic topic model does not exhibit these problems, suggesting that exploring different representations may be necessary to capture different aspects of human semantic processing.
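The abstract does not spell out the growth mechanism, but the general idea of growing a semantic network from evolving word contexts can be illustrated with a minimal Python sketch. Everything here is invented for illustration -- the words, their context sets, the Jaccard overlap measure, and the 0.2 threshold -- and is not the model presented in the talk:

```python
from collections import defaultdict

# Toy contexts: each word is associated with a small set of
# contextual features (hand-picked for this example).
contexts = {
    "dog":    {"pet", "bark", "walk"},
    "cat":    {"pet", "purr", "walk"},
    "wolf":   {"bark", "wild", "pack"},
    "car":    {"drive", "road", "wheel"},
    "truck":  {"drive", "road", "cargo"},
    "kitten": {"pet", "purr", "small"},
}

def context_overlap(a, b):
    """Jaccard overlap between the context sets of two words."""
    inter = contexts[a] & contexts[b]
    union = contexts[a] | contexts[b]
    return len(inter) / len(union)

def grow_network(words, threshold=0.2):
    """Add words one at a time, linking each new word to every
    previously added word whose contextual overlap with it meets
    the threshold (an assumed, illustrative attachment rule)."""
    edges = defaultdict(set)
    added = []
    for w in words:
        for v in added:
            if context_overlap(w, v) >= threshold:
                edges[w].add(v)
                edges[v].add(w)
        added.append(w)
    return edges

network = grow_network(list(contexts))
for word, neighbours in sorted(network.items()):
    print(word, "->", sorted(neighbours))
```

Because each new word is compared only against words already in the network using cheap set operations, the per-word cost stays small, which is one rough way to read the "limited computation" point above.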