CAISSathon 2024: AI Explainability in CSS

CAISSathon 2024 IS BACK ON!

NEW DATES: 22nd & 23rd July 2024

NEW LOCATION: Exeter University

The CAISS hub, supported by Dstl, The Alan Turing Institute, Lancaster University and the Defence AI Centre, is proud to announce the first CAISSathon.

Theme of the event: Explainability

The CAISSathon will bring together individuals from government, academia and industry to brainstorm and engineer potential solutions. Researchers will have an opportunity to put knowledge into practice and solve problems with real-life implications for Defence. By the end of the two days, a portfolio of potential solutions, research questions and collaborations will have been established, inspiring future investigation and leading to insightful developments in this fast-moving field. A prize will also be awarded to the team that prepares the most innovative solution.

The social responsibility of Artificial Intelligence (AI) has come under increased scrutiny as AI creeps into every facet of society. Understanding the potential ramifications and harms of AI is of key importance, particularly as AI technologies used for facial recognition, loans and mortgages, and job applications (amongst others) have already been shown to be biased against ethnic minority populations and, for example, people with disabilities. Within a Defence context, understanding how AI-enabled technologies can support decision making is of key interest. One argument holds that explainable and transparent AI systems can uncover bias and prevent harm. But does explainability actually solve these issues? Does understanding how an AI makes its decisions provide enough evidence to negate harm, or at least to indicate potential harm? And how understandable do such explanations need to be for expert and lay users alike?

Due to capacity limits, attendance is by invitation only. However, if you would like to express interest in participating, please email caiss@lancaster.ac.uk.