Explainable AI – Transparency and Accountability in a Data-Driven Society

FREE WORKSHOP
Tuesday, April 9th, 9.30–19.00


The impressive increase in data-driven services and tools that use Artificial Intelligence, Machine Learning, Data Mining and Data Visualization has given rise to concerns about the accuracy, validity and transparency of the underlying models, assumptions and data sources. Recent research, often referred to as Explainable AI, discusses issues related to the transparency and accountability of such “smart” systems. This workshop brings together experts from different fields who study how to provide transparency, explanations and accountability for models, services and systems that create a representation of human behaviour and actions, and that increasingly impact our daily life.

The workshop will present:
(1) an overview of criticism on existing systems or models,
(2) approaches to provide explanations and discrimination-aware analysis methods, and
(3) user experience, interaction and design approaches for smart interactive systems and Big Data in general.

Programme

09.30–09.45: Opening – introduction of the workshop and overall objectives

09.45–10.30:

The Autonomous Internet of Things and Explainable AI by Enrico Costanza, University College London


10.30–11.15: 

Designing Human-Centric Explainable AI by Brian Y. Lim, National University of Singapore


11.15–11.30: Coffee break

11.30–12.15:

Explainability through Abductive Hypothesizing: Understanding and improving our models through open-ended investigation by Simon Enni, Aarhus University


12.15–13.00:

Panel with Enrico, Brian, and Simon

13.00–14.00: Lunch in the Incuba Katrinebjerg Canteen 

14.00–14.45:

Explainable Machine Learning is Often More Complex and Less Helpful Than You Might Think: Lessons from the Law, Lab and the Office by Michael Veale, University College London


14.45–15.30:

Making Conclusions Conclusive: Challenges of Interpreting Machine Learning Results for Science by Indrė Žliobaitė, University of Helsinki


15.30–15.45: Coffee break

15.45–16.30:

Machine-Assisted Decision-Making: Understanding and Accounting for Human Factors by Nina Grgić-Hlača, Max Planck Institute for Software Systems


16.30–17.15: 

Panel with Michael, Indrė, Nina, and Anne Henriksen


17.45–18.00: Coffee break

18.00–18.30:

Closing remarks

18.30–19.30: Closing Reception 

Organizers
Ira Assent, Professor in Data-Intensive Systems 
Jo Vermeulen, Assistant Professor in Ubiquitous Computing and Interaction 
Marianne Graves Petersen, Associate Professor in Ubiquitous Computing and Interaction

Audience / Target Group
The workshop is open to a general audience, and we welcome participation from researchers, students, and industry experts working with data analysis.

Prerequisites
None
 
Participants
Maximum 75 participants
 
Time and place
Tuesday, April 9th, at 9.30 in Room Kahn 119K (5126-119K), Finlandsgade 26, Katrinebjerg, 8200 Aarhus N.

Workshops: Full list of workshops

Digital Innovation Conference: Conference program
