Predict 2018 has ended

Dr. Luca Longo

Dublin Institute of Technology
Lecturer
Talk: Explainable Artificial Intelligence: What, why and how

The remarkable success of current Artificial Intelligence (AI) solutions, and of Machine Learning (ML) algorithms in particular, is due to the practical applicability of statistical learning approaches in arbitrarily high-dimensional spaces. Despite this, their effectiveness is still limited by their inability to explain their inferences in an understandable and retraceable way. Even when the underlying mathematical and computational theories are understood, it is often complicated, and sometimes impossible, to gain insight into the internal representation and functioning of the models that AI algorithms produce. In turn, it is very hard to explain how and why an inference was made or a result achieved. The key problem is that such models are regarded as black boxes: they lack an explicit declarative knowledge representation. In other words, they are not equipped with any functionality for generating the explanatory structures that underlie their outputs.

Future AI needs contextual adaptation: tools that support the construction of explanatory models for solving real-world problems. The ultimate goal of future explainable AI systems is to make results understandable and transparent, and to answer questions of how and why a result was achieved. Such systems should not exclude human expertise but augment human intelligence with artificial intelligence. If human intelligence is to be complemented, and in some cases even overruled, by artificial intelligence, then humans must be able to understand the machine's reasoning. This capacity is essential to narrowing the gap between human thinking and machine thinking.
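
To make the black-box problem concrete, here is a minimal, purely illustrative sketch (not taken from the talk) of one common post-hoc, model-agnostic explanation technique: permutation feature importance. The dataset, model, and scikit-learn calls below are assumptions chosen for illustration; the point is simply that an opaque model can be probed from the outside to recover which inputs drive its predictions.

# Illustrative sketch only: permutation feature importance, a post-hoc,
# model-agnostic explanation. The trained model is treated as a black box;
# shuffling one feature at a time on held-out data and measuring the drop
# in accuracy estimates how much each input drives the predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but opaque model: its internal decision process is not
# directly human-readable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probe the black box from the outside.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features whose permutation hurts accuracy most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

Such importance scores are only a partial answer to "how and why": they rank inputs but do not expose the model's internal reasoning, which is exactly the gap the talk addresses.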

Speaker Sessions

Tuesday, October 2

12:00pm PDT