All current large language models rely on what is known as the Transformer architecture. It is modeled on the human ability to focus on relevant information and ignore less important details. Michael Hahn, Professor of Computational Linguistics at Saarland University, has mathematically proven that Transformers fail at tasks in which every part of the input matters for the output, so that changing even a single character can alter the correct result. From such proofs the computer scientist derives theoretical insights that make the strengths and weaknesses of large language models easier to predict.
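To illustrate the kind of task meant here, the following is a minimal sketch in Python of PARITY, the standard example in the theoretical literature of a function in which every input symbol matters; the press release itself does not name a specific task, so this example is an assumption for illustration only.

# Hypothetical illustration (not from the press release): PARITY asks
# whether a bit string contains an odd number of '1's. The correct answer
# depends on every bit, so flipping any single character flips the result.

def parity(bits: str) -> int:
    """Return 1 if the string contains an odd number of '1's, else 0."""
    return bits.count("1") % 2

x = "0110100"
print(parity(x))            # 1 (three '1's: odd)
print(parity("1" + x[1:]))  # 0 (changing one character alters the result)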
Michael Hahn recently received €1.4 million from the German Research Foundation (DFG) to establish an Emmy Noether research group for his work at the interface of machine learning and computational linguistics (see the press release of 13 November 2025). The Heinz Maier-Leibnitz Prize is regarded as one of the most prestigious awards in the German-speaking world for scientists at an early stage of their careers. It is endowed with €200,000, intended to support the laureates in pursuing their scientific careers. The prize is named after the physicist Heinz Maier-Leibnitz, a former President of the German Research Foundation, and has been awarded since 1977.
Further information:
Press release from the German Research Foundation
Information on the Heinz Maier-Leibnitz Prize
Professor Michael Hahn's personal website: https://www.mhahn.info
For further enquiries:
Prof. Dr Michael Hahn
Chair of Language, Computation and Cognition
Tel. +49-681-302-4343

