Hi, welcome to my website! My name is Jasmijn (she/her), and I’m a Research Scientist at Google DeepMind. Currently I’m interested in the following topics within Natural Language Processing (NLP) / Computational Linguistics (CL):
- Interpretability and analysis of language models
- Machine Learning for NLP
- Efficiency / Efficient models
I received my PhD from ILLC, University of Amsterdam, where I was advised by Wilker Aziz, Ivan Titov and Khalil Sima’an.
- I started a YouTube channel for BlackboxNLP. Check it out and subscribe here: youtube.com/@blackboxnlp.
- You can now find me on Mastodon.
- Dissecting Recall of Factual Associations in Auto-Regressive Language Models. Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson. Preprint.
- “Will You Find These Shortcuts?” A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification. Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova. EMNLP 2022. [blog]
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior. Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova. 2021.
See my full publication list on my Google Scholar profile.
- The Elephant in the Interpretability Room: Why Use Attention as Explanation When We Have Saliency Methods? Jasmijn Bastings, Katja Filippova. BlackboxNLP 2020.
- Interpretable Neural Predictions with Differentiable Binary Variables. Jasmijn Bastings, Wilker Aziz, Ivan Titov. ACL 2019.
- Joey NMT: A Minimalist NMT Toolkit for Novices. Julia Kreutzer, Jasmijn Bastings, Stefan Riezler. EMNLP 2019. [code]
- Graph Convolutional Encoders for Syntax-Aware Neural Machine Translation. Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, Khalil Sima’an. EMNLP 2017.
You can find a full list of my publications on my Google Scholar profile.
- The Annotated Encoder-Decoder. Explains implementing RNN-based NMT models in PyTorch.
- Interpretable Neural Predictions with Differentiable Binary Variables contains the HardKuma distribution, which produces (hybrid) binary samples, with true zeros and ones, while still letting gradients pass through.
- Joey NMT is an easy-to-use, educational, and benchmarked NMT toolkit for novices that I developed with Julia Kreutzer; it is currently maintained by Mayumi Ohta.
- FREVAL is an all-fragments parser evaluation metric that I developed with Khalil Sima’an.
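The stretch-and-rectify construction behind HardKuma can be sketched in a few lines. This is a minimal NumPy illustration based on the description in the paper, not code from the repository: the shape parameters `a`, `b` and the stretch bounds `l`, `r` are illustrative defaults. In practice the same transform is written in an autodiff framework, so gradients flow through the continuous part of the sample.

```python
import numpy as np

def sample_hardkuma(a, b, size, l=-0.1, r=1.1, rng=None):
    """Sketch of HardKuma sampling (stretch-and-rectify).

    1. Sample Kumaraswamy(a, b) via its inverse CDF (reparameterization).
    2. Stretch the support from (0, 1) to (l, r) with l < 0 < 1 < r.
    3. Rectify (clip) back to [0, 1], so the endpoints 0 and 1
       receive non-zero probability mass (true zeros and ones).
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)                        # u ~ Uniform(0, 1)
    k = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)   # Kumaraswamy inverse CDF
    t = l + (r - l) * k                               # stretch to (l, r)
    return np.clip(t, 0.0, 1.0)                       # rectify: hard 0s and 1s

samples = sample_hardkuma(a=0.5, b=0.5, size=10_000, rng=0)
```

Because the clip only flattens the tails, samples strictly inside (0, 1) keep a well-defined gradient with respect to the distribution parameters, which is what makes the binary-looking samples usable for end-to-end training.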
- BlackboxNLP 2020 (EMNLP). The Elephant in the Interpretability Room. (PDF)
- ACL 2019. Interpretable Neural Predictions with Differentiable Binary Variables (Google Slides)
- EMNLP 2017. Graph Convolutional Encoders for Syntax-Aware Neural Machine Translation (Google Slides)
- 2019-present, Research Scientist, Google, Berlin & Amsterdam.
- 2015-2020, PhD in AI, ILLC, University of Amsterdam. Defended 8 October 2020.
- 2009-2012, MSc in AI, University of Amsterdam.
- 2006-2009, BSc in AI, Utrecht University.
Reviewing / Area Chair / Committees
I was a co-organizer of:
- BlackboxNLP 2022 (co-located with EMNLP 2022)
- BlackboxNLP 2021 (co-located with EMNLP 2021)
I was area chair (AC) / action editor (AE) for the following venues:
- ACL (2021, 2022, 2023) (Interpretability and Analysis of Models for NLP)
- EMNLP (2021, 2022) (Interpretability and Analysis of Models for NLP)
- ACL Rolling Review (2021-2022)
- EACL (2021) (Machine Learning for NLP)
- NAACL (2021) (Interpretability and Analysis of Models for NLP)
I reviewed for the following conferences and workshops:
- ACL (2019, 2020)
- EMNLP (2018, 2019, 2020)
- CoNLL (2018, 2019)
- ICLR (2020)
- MT Summit (2019)
- WMT (2018, 2019)
- Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP, 2019, 2020)
- Debugging Machine Learning Models (Debug ML, ICLR Workshop, 2019)
- Workshop on Neural Generation and Translation (WNGT, 2018, 2019, 2020)
- Workshop on Representation Learning for NLP (RepL4NLP, 2020)
- Workshop on Structured Prediction for NLP (SPNLP, 2019)
- You can find me on Twitter: @jasmijnbastings.
- My code is on GitHub: github.com/bastings.