Language Learning and Representation in Humans and Computers
Visiting speaker
Aida Nematzadeh
Postdoctoral researcher, Computational Cognitive Science Lab, UC Berkeley
Past Talk
Tuesday, Feb 28, 2017
4:00 pm EST
Watch video
177 Huntington Ave.
11th floor

Language is one of the greatest puzzles of both human and artificial intelligence (AI). Children learn language seemingly effortlessly, yet it is a complex process that we do not fully understand. Moreover, although access to more data and computation has driven recent advances in AI, these systems still fall far short of human performance on many language tasks. How do humans learn and represent language? And how can this inform AI?

In this talk, I focus on the representation of semantic knowledge (word meanings and their relations), an important aspect of both child language learning and AI systems: it shapes how word meanings are stored in, searched for, and retrieved from memory.

First, I discuss how humans learn and represent semantic knowledge. I show that, using evolving knowledge of word relations and their contexts, we can grow a network that exhibits the properties of adult semantic knowledge, and that this can be achieved with limited computation.

Next, I explain how investigating human semantic processing helps us model semantic representations more accurately. I show that recent neural models of semantics, despite being trained on huge amounts of data, fail to capture important aspects of human similarity judgements. A probabilistic topic model does not exhibit these problems, suggesting that exploring different representations may be necessary to capture different aspects of human semantic processing.
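To make the network-growth idea concrete, here is a minimal sketch, not the speaker's actual model: words become nodes, words that co-occur in an utterance (a shared context) are linked, and the resulting graph can be checked for the high clustering and short path lengths characteristic of adult semantic networks. The toy utterances and the use of networkx are illustrative assumptions.

```python
# Minimal sketch of growing a semantic network from word contexts
# (illustrative only; not the model presented in the talk).
import itertools
import networkx as nx

# Toy "child-directed" utterances; real models use large corpora.
utterances = [
    ["dog", "ball", "play"],
    ["dog", "cat", "run"],
    ["ball", "throw", "play"],
    ["cat", "milk", "drink"],
    ["milk", "cup", "drink"],
]

G = nx.Graph()
for utterance in utterances:
    # Grow the network incrementally: link words that share a context.
    for w1, w2 in itertools.combinations(utterance, 2):
        G.add_edge(w1, w2)

# Adult semantic networks show high clustering and short paths (small-world).
print(nx.average_clustering(G))
print(nx.average_shortest_path_length(G))
```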
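The comparison against human similarity judgements typically works by correlating a model's word-pair similarity scores with human ratings. Below is a hedged sketch of that standard evaluation; the vectors and ratings are invented for illustration, and real studies use benchmarks of collected human judgements rather than these made-up numbers.

```python
# Sketch of evaluating a semantic model against human similarity judgements:
# score each word pair with the model, then correlate with human ratings.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical model vectors (e.g., from a neural embedding model).
vectors = {
    "cup":    np.array([0.9, 0.1, 0.3]),
    "mug":    np.array([0.8, 0.2, 0.4]),
    "coffee": np.array([0.7, 0.6, 0.1]),
    "dog":    np.array([0.1, 0.9, 0.8]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented human similarity ratings for word pairs (0-10 scale).
human = {("cup", "mug"): 9.0, ("cup", "coffee"): 6.5, ("cup", "dog"): 1.0}

model_scores = [cosine(vectors[a], vectors[b]) for a, b in human]
rho, _ = spearmanr(model_scores, list(human.values()))
print(f"Spearman correlation with human judgements: {rho:.2f}")
```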

About the speaker