SS26: Disagreement in NLP
Seminar Description
Traditional NLP approaches resolve label disagreements into a single “gold standard,” treating disagreements as noise in the data arising from annotator inattention or mistakes, subjective bias, or insufficient annotation guidelines.
However, recent research highlights that a single gold label may not capture the ambiguity and diversity of language. For subjective tasks such as abuse detection and quality estimation, multi-perspective modelling is all the more necessary to include different viewpoints and to improve the robustness and fairness of NLP models.
This seminar explores disagreement in linguistic annotation and perspectivist approaches in NLP, focusing on learning from non-aggregated datasets and multi-perspective evaluation. We will examine the causes of annotation disagreements and strategies to address them, and discuss current research on modelling diverse viewpoints and its broader implications for AI fairness and inclusion. We will also consider how such disagreement-aware models can interact with users in practice, including connections to human-centered AI, adaptive interfaces, and cognitive models of interpretation and attention.
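As a toy illustration of what working with non-aggregated datasets can mean, the sketch below keeps each item's full annotator label distribution (a "soft label") instead of collapsing it to a majority vote. All names and data here are illustrative, not taken from the seminar materials:

```python
from collections import Counter

def soft_label(votes, labels):
    """Turn raw annotator votes into a probability distribution over labels,
    rather than aggregating them into a single 'gold' label."""
    counts = Counter(votes)
    total = len(votes)
    return {label: counts[label] / total for label in labels}

# Hypothetical example: three annotators disagree on whether a comment is abusive.
votes = ["abusive", "not_abusive", "abusive"]
dist = soft_label(votes, labels=["abusive", "not_abusive"])
# dist assigns 2/3 probability to "abusive" and 1/3 to "not_abusive",
# preserving the disagreement instead of discarding the minority view.
```

A model trained against such distributions (e.g. with a cross-entropy loss on soft labels) can then express uncertainty on genuinely ambiguous items instead of being forced toward one viewpoint.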
| taught by: | Dr. Frances Yung and Anwesha Das |
| language: | English |
| start date: | 13.04.2026 |
| time: | Monday, 12:15 - 13:45 |
| classroom: | Building C7 3 - Seminar room 1.14 |
| sign-up: | Please join the following MS Teams team if you are interested in taking this seminar: General | [162073] Disagreement in NLP | Microsoft Teams |
| credits: | 4 CP (R), 7 CP (R+H) |
| suited for: | see LSF |