Numerous scientific contributions from our department have been accepted at the International Conference on Learning Representations (ICLR) 2026. Congratulations to all authors!
- “Automata Learning and Identification of the Support of Language Models”
  By Satwik Bhattamishra, Michael Hahn, Varun Kanade
- “Grounding or Guessing? Visual Signals for Detecting Hallucinations in Sign Language Translation”
  By Yasser Hamidullah, Koel Dutta Chowdury, Yusser Al-Ghussin, Shakib Yazdani, Cennet Oguz, Josef van Genabith, Cristina España-Bonet
- “Decomposing Representation Space into Interpretable Subspaces with Unsupervised Learning”
  By Xinting Huang, Michael Hahn
- “Softmax Transformers are Turing-Complete”
  By Hongjian Jiang, Michael Hahn, Georg Zetzsche, Anthony Widjaja Lin
- “Understanding the Emergence of Seemingly Useless Features in Next-Token Predictors”
  By Mark Rofin, Jalal Naghiyev, Michael Hahn
- “Benefits and Limitations of Communication in Multi-Agent Reasoning”
  By Michael Rizvi-Martel, Satwik Bhattamishra, Neil Rathi, Guillaume Rabusseau, Michael Hahn
- “Bridging Fairness and Explainability: Can Input-Based Explanations Promote Fairness in Hate Speech Detection?”
  By Yifan Wang, Mayank Jobanputra, Ji-Ung Lee, Soyoung Oh, Isabel Valera, Vera Demberg