CAISSathon 2024: AI Explainability in CSS

CHANGE OF LOCATION!

Don’t panic: it’s still at the University of Exeter.

CAISSathon 2024

DATES: 22nd & 23rd July 2024

LOCATION: University of Exeter, South West Institute of Technology (SWIoT), Innovation Centre Phase 1, Rennes Drive, Exeter, EX4 4RN.

CAR PARKING: https://www.exeter.ac.uk/about/visit/streathamcampus/#map. Area A of the map shows the SWIoT building (number 93).

Visitors may only park in car park C and must have a valid “Pay By Phone” session. All other parking bays are for permit holders only from 08:00 to 18:00. Please note that parking spaces are limited, especially during term, and we encourage visitors to use public transport where possible.

It’s graduation, so we recommend getting to campus early or parking elsewhere in town and walking or catching the shuttle bus up to the Innovation Centre / SWIoT building.

DETAILS:

The CAISS hub, supported by DSTL, The Alan Turing Institute, Lancaster University and the Defence AI Centre, is proud to announce the first CAISSathon.

Theme of the event: Explainability

The CAISSathon will bring together individuals from government, academia and industry to brainstorm and engineer potential solutions. Researchers will have an opportunity to put knowledge into practice and solve problems with real-life implications within Defence. By the end of the two days, a portfolio of potential solutions, research questions and collaborations will have been established, which will inspire future investigation and lead to insightful developments in this fast-moving field. Additionally, a prize will be awarded to the team that prepares the most innovative solution.

The social responsibility of Artificial Intelligence (AI) has come under increased scrutiny as it creeps into every facet of society. Understanding the potential ramifications and harm caused by AI is of key importance, particularly as AI technologies used for facial recognition, loans and mortgages, and job applications (amongst others) have already been shown to be biased against ethnic minority populations and, for example, those with disabilities. Within a Defence context, understanding how AI-enabled technologies can facilitate decision making is of key interest. One argument for explainable and transparent AI systems is that they can uncover bias and prevent harm. However, does explainability solve these issues? Does understanding how AI makes its decisions provide enough evidence to negate the harm, or at least give indications of potential harm? Additionally, how understandable do such explanations need to be for expert and lay users?

If you would like to join this exciting event, please email caiss@lancaster.ac.uk.