CAISS goes to AI UK, London March 2023

Around 3,000 delegates attended the QE2 Centre for AI UK. One of the most popular sessions dealt with the much-hyped ChatGPT and was delivered by Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University. He began by stating that although we have many individual AI solutions (for example, GPS), there is not yet a general-purpose system that will do everything for us. ChatGPT is the most advanced and reliable system to date: it takes in massive amounts of data and has good guardrails, so it will not, for example, write an article on the benefits of eating glass! But is it the universal panacea?

Problems:

  • It will make things up and can even give references for fake information; there is an illusion that adding more information will mitigate the incorrect outputs.
  • After completing eight million chess games, it still does not understand the rules.
  • Driverless cars rely on deep learning, which is not true AI: the technology is just memorising situations and is unable to cope with unusual events. The system cannot reason in the same way that a human being does.
  • If a circumstance is not in the training set, the system will not know what to do, and for GPT-4 (the latest version) we do not yet know what that training data set is.

Positives:

  • It can help with de-bugging: it can write pieces of code that are 30% correct, which humans can then fix. This is easier than starting from scratch – the “best use case” (see the sketch after this list).
  • It can write letters, stories, songs and prose; it is fun, fluent and good with grammar.
  • Large Language Models (LLMs) can be used to write articles – they look good but contain errors. Someone who does not know the facts could still believe them. But if the piece is a story, fiction, does this matter?
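
As a concrete (and entirely hypothetical) illustration of the debugging use case above, the sketch below shows the kind of almost-correct Python an LLM might draft – plausible-looking, but with a subtle slicing bug – alongside the human-reviewed fix. The function names and the bug are invented for illustration; they are not from the talk.

```python
# Hypothetical LLM draft: compute a moving average over a list.
# It looks plausible, but the slice is off by one, so each window
# holds window - 1 elements and every result is wrong.
def moving_average_draft(values, window):
    return [sum(values[i:i + window - 1]) / window
            for i in range(len(values) - window + 1)]

# Human-reviewed fix: correct the slice bounds so each window
# contains exactly `window` elements.
def moving_average(values, window):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average_draft([1, 2, 3, 4], 2))  # [0.5, 1.0, 1.5] – wrong
print(moving_average([1, 2, 3, 4], 2))        # [1.5, 2.5, 3.5] – correct
```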

Worries and considerations:

ChatGPT is being used at scale, leading to misinformation and a possible polluting of democracy; there is an opportunity for fake information and potentially discriminatory, stereotypical or even offensive responses. The 2024 US Presidential Election could be a concern, as the technology could be used by state actors or as an advertising tool, leading to a spread of misinformation that appears plausible. It can write fictitious news reports, describe fake data and so on (e.g. Covid-19 versus vaccines), and the results will look authoritative. This could result in millions of fake tweets/posts a day output via “troll farms”. Large Language Models (LLMs) without guardrails are already being used on the dark web. ChatGPT has been used in a programme to solve CAPTCHAs – when challenged, the bot said it was a person with a visual disability! It is already being used in credit card scams and phishing attacks.

Classical AI is about facts; LLMs do not know how to fact-check. Take the claim “Elon Musk has died in a car crash” – we can check this as humans, but an LLM cannot. As this is such a wide and fast-moving area, should we be looking at LLMs in the same way that we would look at a new drug? Possibly controlled releases, with a pause in place for a “safety check”?

AI literacy is important for future generations – understanding the limits is crucial, and people still need to think critically. Is a coordinated campaign needed to fully understand and warn about the limits of such technology?

Other presentations included Professor Lynn Gladden on Integrating AI for Science and Government, Public Perceptions of AI, how we can “do better in data science and AI”, the Online Safety Bill, creating economic and societal impact, what data science can do for policy makers, and individual skills for global impact. Overall, it was a fascinating two days with many opinions and high-profile speakers, under the overarching banner of open research, collaboration and inclusion.

Link: https://ai-uk.turing.ac.uk/