I am a PhD candidate at the ILLC, University of Amsterdam, advised by Katrin Schulz and Willem Zuidema. You can also find me on the EleutherAI Discord, where I do interpretability research. Previously, I was an active participant in the BigScience initiative as part of the evaluation working group.
My research broadly focuses on understanding why and how language models exhibit social biases. I study how we can reliably measure bias in NLP and try to ground the discussion of bias in a broader societal perspective. I'm particularly interested in using interpretability tools to answer questions like "How can we reliably measure and mitigate bias?" and "How do LMs learn these biases in the first place?"
As part of the Bias Barometer research group, I also develop NLP techniques for investigating (hidden) biases in Dutch digital media, contributing to a broader multidisciplinary effort to understand the media's role in the public perception of politics, events, and social groups.
Before my PhD, I was a Master's student in Artificial Intelligence at the University of Amsterdam, where I wrote my thesis on language emergence in referential games. As an undergraduate, I studied at University College Twente, a selective Bachelor's program focusing on both technology and its role in society, where I developed an interest in computer science, ethics, and the philosophy of technology.
In my free time, I like to sketch and read. I'm also more proficient in Emacs than in Vim.