SS23: Learning inherent human disagreement in annotation

Course description

NLP tasks have been driven by training and evaluating on linguistic data with human annotation. Traditionally, annotation disagreements are discussed among the trained annotators to derive a single "gold label". For annotations collected by the popular crowdsourcing approach, algorithms have been developed to remove noisy labels and aggregate all workers' judgements into a single one.
However, there is increasing evidence that human interpretations often cannot be aggregated into a single judgement, and that disagreements are not always noise but can be signal.
In this seminar, we will read about 

  • The reasons behind inherent human disagreements
  • How to train models to learn from disagreements
  • How to evaluate model predictions against human judgements with disagreements

We will focus on papers on NLP tasks, but work in other domains (e.g. vision) can also be discussed if it is of general interest to the class.

taught by: Dr. Frances Yung
start date: 13.04.2023
time: Thursday, 12:15 - 13:45
located in: Building C7 2, seminar room 1.05
sign-up: Interested students can join our MS Team
credits: 4 CP (R), 7 CP (R+H)
suited for: B.Sc. in Computational Linguistics
M.Sc. in Language Science and Technology (LST)
more details: In LSF
notice: Registration deadline for the examination is 14.07.2023