Challenge
Guaranteeing the fairness and non-discriminatory behaviour of an AI system is a complex task: it requires combining ethical, legal and technical aspects, and it is highly context-dependent. What is “fair” in a given context, and how do stakeholders perceive fairness? What is the appropriate way to assess the fairness of a technical system? How can a required level of fairness be implemented technically without compromising the performance of the system?
Approach
In a transdisciplinary research project involving philosophers, social scientists and computer scientists from five European universities, we developed an AI fairness framework that connects a structured ethical analysis conducted with stakeholders to a technical implementation procedure guaranteeing maximum performance for any given level of fairness. The resulting AI systems can be certified according to the latest standards and are explainable to non-technical stakeholders.
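The core idea of trading performance against a fairness constraint can be illustrated with a minimal sketch. The example below is purely illustrative and is not the project’s actual procedure or the FairnessLab tool: it uses made-up synthetic scores and labels for two groups, and a simple grid search over group-specific decision thresholds to maximize accuracy subject to a demographic-parity constraint (selection-rate gap at most ε).

```python
# Illustrative sketch only: fairness-constrained threshold post-processing.
# All data and parameter choices here are synthetic assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores and labels for two groups.
n = 2000
group = rng.integers(0, 2, n)                      # protected attribute (0 or 1)
y = rng.integers(0, 2, n)                          # true outcome
score = np.clip(0.5 * y + 0.1 * group + rng.normal(0, 0.25, n), 0, 1)

def evaluate(t0, t1):
    """Accuracy and selection-rate gap for group-specific thresholds."""
    pred = np.where(group == 0, score >= t0, score >= t1)
    acc = (pred == y).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, gap

def best_under_fairness(eps):
    """Maximize accuracy subject to a demographic-parity gap <= eps."""
    grid = np.linspace(0, 1, 41)
    feasible = []
    for t0, t1 in itertools.product(grid, grid):
        acc, gap = evaluate(t0, t1)
        if gap <= eps:
            feasible.append((acc, gap, t0, t1))
    return max(feasible)  # highest-accuracy solution among the fair ones

acc, gap, t0, t1 = best_under_fairness(eps=0.02)
print(f"accuracy={acc:.3f}, parity gap={gap:.3f}, "
      f"thresholds=({t0:.2f}, {t1:.2f})")
```

Sweeping ε from 0 upwards traces out the accuracy/fairness trade-off curve, so a decision-maker can pick a required fairness level and obtain the best achievable performance for it.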
Outcome
The developed approach is, to our knowledge, the world’s first to integrate ethics and technology seamlessly. It is based on the current state of the art in AI philosophy and on established theories of social justice. At the same time, it integrates the current computer-science knowledge on how to handle fairness technically in AI systems. 26 peer-reviewed publications, including several best-paper awards, attest to its scientific quality.
The approach has been cast into an actionable step-by-step procedure for practitioners and tested on real-life examples. Its application is supported by an associated instruction course and the accompanying software tool “FairnessLab”.
Contact
Prof. Christoph Heitz PhD, ZHAW School of Engineering, Institute of Data Analysis and Process Design, Winterthur, christoph.heitz@zhaw.ch