SS22: Crowdsourcing high-quality annotations and experimental data
Seminar (in English)
|taught by:||Dr. Merel Scholman|
|time:||Monday, from 10:15 to 11:45 a.m.|
|located in:||The seminar will be held online via MS Teams|
Link to MS Teams
|sign-up:||Interested students please join the group on Teams before the start of the semester. If you have any questions, feel free to send me an email.|
|suited for:||B.Sc. in Computational Linguistics, B.Sc. in Computer Science, M.Sc. in Computer Science, M.Sc. in Language Science and Technology|
|more details:||In LSF|
Crowdsourcing observations from non-experts is one of the most common approaches to collecting data and annotations in NLP, and it is becoming increasingly popular in psycholinguistics. Crowdsourcing has been applied to a plethora of tasks, from eliciting annotations of diverse phenomena ranging from discourse relations to image labelling, to obtaining experimental data such as reading times or word-recognition responses.
Although crowdsourcing has grown into a fundamental method for collecting data, its usage is largely guided by the common practices and personal experience of researchers. This seminar focuses on how methodology can shape research results. We will discuss principles and practices that have proven effective in generating high-quality data for a wide range of tasks.