Timo Speith

timo.speith(at)uni-saarland.de
Bldg. A2 3, Room 0.24
Tel. +49 (0)681 302-3651

Research Interests

My current research focuses on machine ethics and machine explainability with regard to currently available (non-sci-fi) AI systems. I am also interested in ancient philosophy and the philosophy of science.

In my PhD thesis in philosophy (supervised by Prof. Dr. Ulrich Nortmann), I investigate the connection between machine ethics and machine explainability. I argue that machine explainability is an essential component for making machine ethics as beneficial as possible. Moreover, I hold that machine ethics without machine explainability is empty, as there are effectively no means other than explanations to detect whether a machine acted according to the moral constraints imposed on it. In my view, machine ethics and machine explainability can be linked effectively; in my thesis I propose a way to do so and evaluate its benefits and drawbacks.

Education

  • 09/2018–present: Doctoral studies in Philosophy, Saarland University, Germany.
    Tentative thesis title: From Machine Ethics to Machine Explainability and Back. Supervisor: Ulrich Nortmann 
  • 09/2016–08/2018: Master studies in Computer Science, Saarland University, Germany.
    Thesis title: Towards a Framework of Verifiable Machine Ethics and Machine Explainability. Supervisor: Holger Hermanns
  • 09/2013–09/2016: Bachelor studies in Philosophy with minor Computer Science, Saarland University, Germany.
    Thesis title: Selbstprädikation bei Platon - Eine Grundlagenuntersuchung ausgehend vom Argument des Dritten Menschen (Self-Predication in Plato - A Foundational Study Starting from the Third Man Argument). Supervisor: Ulrich Nortmann
  • 07/2013: Abitur, Gymnasium Brede, Brakel, Germany.

Employment

  • 03/2021–09/2021: Wissenschaftlicher Mitarbeiter (Research Associate), Saarland University, Department of Computer Science, Center for Perspicuous Computing (Holger Hermanns)
  • 03/2020–09/2020: Wissenschaftlicher Mitarbeiter (Graduate Assistant), Saarland University, Department of Computer Science, Chair for Dependable Systems (Holger Hermanns)
  • 08/2019–12/2020: Wissenschaftlicher Mitarbeiter (Graduate Assistant), Saarland University, Department of Computer Science, Infolab Saar (Verena Wolf)
  • 09/2018–present: Wissenschaftlicher Mitarbeiter (Research Associate), Saarland University, Department of Philosophy, Chair for Theoretical Philosophy (Ulrich Nortmann)
  • 10/2016–08/2018: Wissenschaftliche Hilfskraft (Research Assistant), Saarland University, Department of Philosophy, Chair for Theoretical Philosophy (Ulrich Nortmann)
  • 2014–2018: Studentische Hilfskraft (Student Assistant), Saarland University, Department of Philosophy.

Articles

  • »From Machine Ethics to Machine Explainability and Back« (with Kevin Baum and Holger Hermanns). ISAIM 2018 (International Symposium on Artificial Intelligence and Mathematics). Link
  • »Towards a Framework Combining Machine Ethics and Machine Explainability« (with Kevin Baum and Holger Hermanns). CREST 2018 (Workshop on formal reasoning about Causation, Responsibility, and Explanations in Science and Technology). DOI: 10.4204/EPTCS.286.4
  • »Explainability as a Non-Functional Requirement« (with Maximilian A. Köhl, Dimitri Bohlender, Kevin Baum, Markus Langer, and Daniel Oster). RE'19 (27th IEEE International Requirements Engineering Conference). DOI: 10.1109/RE.2019.00046
  • »What Do We Want From Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research« (with Markus Langer, Daniel Oster, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, and Kevin Baum). Artificial Intelligence, vol. 296 (2021). DOI: 10.1016/j.artint.2021.103473
  • »Spare me the details: How the type of information about automated interviews influences applicant reactions« (with Markus Langer, Kevin Baum, Cornelius J. König, Viviane Hähne, and Daniel Oster). International Journal of Selection and Assessment, vol. 29, no. 2 (2021). DOI: 10.1111/ijsa.12325
  • »Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives« (with Markus Langer, Kevin Baum, Kathrin Hartmann, Stefan Hessel, and Jonas Wahl). RE4ES21 (First International Workshop on Requirements Engineering for Explainable Systems co-located with the 29th IEEE International Requirements Engineering Conference). Forthcoming. Link
  • »On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness« (with Lena Kästner, Markus Langer, Veronika Lazar, Astrid Schomäcker, and Sarah Sterz). RE4ES21 (First International Workshop on Requirements Engineering for Explainable Systems co-located with the 29th IEEE International Requirements Engineering Conference). Forthcoming. Link
  • »Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue« (with Larissa Chazette and Wasja Brunotte). RE'21 (29th IEEE International Requirements Engineering Conference). Forthcoming. Link

Scientific Talks

  • »The promises of XAI: Understanding, Explanations, Discovery?« (together with Lena Kästner), invited talk for the workshop »Issues in Explainable AI 2: Understanding and Explaining in Healthcare«, May 25, 2021 in Cambridge, United Kingdom (online).
  • »Ethical Issues of Artificial Intelligence?«, Colloquium at Wichmann-Lab, May 18, 2021 in Tübingen, Germany (online).
  • Commentary on the paper »Ideal and Nonideal Justice, Discrimination and the Design of ADM Systems« by Jürgen Sirsch at a Workshop of the »FairAndGoodADM« Project, September 21, 2020 in Kaiserslautern, Germany.
  • »Fair Algorithmic Decision-Making: Unrealizable Dream or Actual Possibility?«, Fraunhofer ITWM Deep Learning Seminar, March 5, 2020 in Kaiserslautern, Germany.
  • »Why Explainable AI Matters Morally« (work with Kevin Baum and Holger Hermanns) as part of the panel »Explainable Intelligent Systems and the Trustworthiness of Artificial Experts«, 27th Annual Meeting of the European Society for Philosophy and Psychology, September 5, 2019 in Athens, Greece.
  • »Why Explainable AI Matters Morally« (work with Kevin Baum and Holger Hermanns) as part of the panel »Explainable Intelligent Systems and the Trustworthiness of Artificial Experts«, European Conference for Cognitive Science, September 3, 2019 in Bochum, Germany.
  • »Why Explainable AI Matters Morally« (work with Kevin Baum and Holger Hermanns) as part of the panel »Explainable Intelligent Systems and the Trustworthiness of Artificial Experts«, 9th International Conference on Information Law and Ethics, July 12, 2019 in Rome, Italy.
  • »Moral(?) Decision-Making of Autonomous Systems under Uncertainty«, Summer School of the 15th Conference of the International Society for Utilitarian Studies (ISUS 2018), July 7, 2018 in Karlsruhe, Germany.

Public Talks

Paper Presentations

  • »What Do We Want From Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research« (with Markus Langer, Daniel Oster, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, and Kevin Baum) at the 30th International Joint Conference on Artificial Intelligence (IJCAI 2021), August 24 and 26, 2021 in Montreal, Canada (online).
  • »Explainability as a Non-Functional Requirement« (together with Maximilian A. Köhl, work with Kevin Baum, Dimitri Bohlender, Markus Langer, and Daniel Oster) at the 27th IEEE International Requirements Engineering Conference (RE'19), September 25, 2019 in Jeju Island, South Korea.
  • »Towards a Framework Combining Machine Ethics and Machine Explainability« (together with Kevin Baum) at the 3rd Workshop on formal reasoning about Causation, Responsibility, and Explanations in Science and Technology (CREST 2018) as part of the 28th European Joint Conferences on Theory and Practice of Software (ETAPS 2018), April 20, 2018 in Thessaloniki, Greece.

Organised Workshops

Memberships

  • Algoright e.V. (Founding Member)
  • European Society for Philosophy and Psychology
  • Gesellschaft für Analytische Philosophie e.V. (German Society for Analytic Philosophy)
  • Gesellschaft für Informatik e.V. (German Informatics Society)

Committee Work

  • 10/2019–present: Elected alternate member of Saarland University's committee for the ethics of security-related research (Kommission der Ethik sicherheitsrelevanter Forschung).
  • 10/2018–09/2021: Elected member of Saarland University's research committee (Forschungsausschuss).

Teaching (Saarland University)

  • (Upcoming) B.A. Seminar: »Einführung in die Metaphysik« (Introduction to metaphysics), Winter Term 2021/22
  • B.A. Seminar: »Ausgewählte Texte der Wissenschaftstheorie« (Selected texts in philosophy of science), Summer Term 2021
  • B.A. Seminar: »Platon: Theätet« (Plato: Theaetetus), Winter Term 2020/21
  • B.A. Seminar: »Künstliche Intelligenz« (Artificial Intelligence; together with Daniel Oster), Summer Term 2020 
  • B.A. Seminar: »Die Säulen der Welt« (The pillars of the world; a seminar about ontological grounding; together with Daniel Oster), Summer Term 2020 
  • B.A. Seminar: »Einführung in die Maschinenethik und Maschinenerklärbarkeit« (Introduction to machine ethics and machine explainability), Winter Term 2019/20
  • B.A. Seminar: »Digitale Dystopien? - Philosophische Probleme einer technisierten Welt« (Digital dystopias? - Philosophical problems of a technological world), Summer Term 2019
  • B.A. Seminar: »Einführung in die Metaphysik« (Introduction to metaphysics), Winter Term 2018/19

Tutoring (Saarland University)

Programming Courses (as part of and together with Infolab Saar)


The header is a detail from Hermann Waibel's 1987 painting "Lichtfarbe". We thank Mr. Waibel for his kind permission to use his work.