Byte

Super-intelligent AI is not a thing

Panic not, says a report in Nature: LLMs will not gain the ability to match or exceed human beings on most tasks. "Scientific study to date strongly suggests most aspects of language models are indeed predictable," says computer scientist and study co-author Sanmi Koyejo. Emerging artificial "general" intelligence is no longer apparent when systems are tested in different ways. This "emergence", in which AI models appear to gain abilities in a sharp and unpredictable way, is nothing more than a mirage: systems' abilities build gradually.

So What?

Models are improving, but they are nowhere near approaching consciousness. Perhaps benchmarking needs more attention, working out how tasks fit into real-world activities. Link to article.

Byte

Why algorithms pick up on our biases

Why do algorithms pick up on our biases? It could be argued that this is due to a 95-year-old economic model that assumes people's preferences can be revealed by looking at their behaviour. However, the choices we make are not always what would be best for us. We might keep a wish list on our Netflix account that reflects our true interests, yet watch the "trashy" shows Netflix serves up because they are easier to click on. Algorithms are built on what users actually do, making predictions from revealed preferences that can be incomplete and even misleading. Should algorithms move away from revealed preferences and incorporate more behavioural science? Would this improve our welfare? Or do we just need to watch something "trashy" to de-stress at the end of the day?

SOURCE: Nature Human Behaviour

Byte

CAISS Bytes: ChatGPT Content Moderation

Anirban Ghosal, senior writer for Computerworld, discusses how OpenAI are planning to use the GPT-4 LLM for content moderation, and how this could help to eliminate bias. By automating content moderation on digital platforms, especially social media, GPT-4 could interpret the rules and nuances in long content-policy documentation, as well as adapting instantly to policy updates. The company believe AI can help to moderate online traffic and relieve the mental burden on a large number of human moderators. They posit that custom content policies could be created in hours, using datasets containing real-life examples of policy violations to label the data. Traditionally, people label such data, which is time-consuming and expensive.

Human experts then read the policy and assign labels to the same dataset without seeing GPT-4's answers. Where the two sets of labels disagree, the experts can ask GPT-4 to explain the reasoning behind its labels, probe the policy definitions, discuss the ambiguity and resolve any confusion. This iterative process takes many rounds between data scientists and engineers before the LLM can generate consistently useful results.
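The heart of that iterative process is comparing the model's labels with the blind human labels and surfacing disagreements for review. A minimal sketch of that comparison step is below; the label set, example IDs and function name are illustrative assumptions, not OpenAI's actual tooling.

```python
# Hypothetical sketch of the discrepancy-finding step in the iterative
# moderation-labelling loop. Labels and IDs are invented for illustration.

def find_discrepancies(model_labels, human_labels):
    """Return IDs of examples where model and human labels disagree."""
    return [example_id
            for example_id, label in model_labels.items()
            if human_labels.get(example_id) != label]

# Model (e.g. GPT-4) labels and blind human labels for the same dataset.
model_labels = {"post-1": "allowed", "post-2": "violation", "post-3": "allowed"}
human_labels = {"post-1": "allowed", "post-2": "allowed", "post-3": "allowed"}

# Each disagreement prompts a question back to the model: explain the label,
# refine the policy wording, then re-label and repeat until labels converge.
for example_id in find_discrepancies(model_labels, human_labels):
    print(f"{example_id}: model={model_labels[example_id]!r} "
          f"human={human_labels[example_id]!r}")
```

In practice each flagged example would feed another round of policy clarification and re-labelling, rather than a single pass.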

So What: Using this approach should lead to less inconsistent labelling and a faster feedback loop, with more consistent results. Undesired biases can creep into content moderation during training, so results and output will need to be carefully scrutinised and further refined by keeping humans in the loop; done well, bias could be reduced. Industry experts suggest that this approach has potential and could lead to a massive multi-million-dollar market for OpenAI.

Link: https://www.computerworld.com/article/3704618/openai-to-use-gpt-4-llm-for-content-moderation-warns-against-bias.html

Byte

CAISS Bytes: A.I. Literature Reviews

Nature magazine tells us how Mushtaq Bilal is using a new generation of search engines, powered by machine learning and large language models, that move beyond keyword searches to draw connections from the mass of scientific literature and speed up work. Some programs, such as Consensus, give research-backed answers to yes-or-no questions; others, such as Semantic Scholar, Elicit and Iris, act as digital assistants. Collectively, the platforms facilitate many of the early steps in the writing process. Critics note, however, that the programs remain relatively untested and run the risk of perpetuating existing biases in the academic publishing process. Can using AI free up time for innovation and discovery by drawing information from a massive body of literature and papers? Link below:

https://www.nature.com/articles/d41586-023-01907-z

Byte

CAISS Bytes: Noam Chomsky

Noam Chomsky, a professor of linguistics, has dismissed ChatGPT as "hi-tech plagiarism"; it has also been described as a "parrot" because it can only repeat what has been said before. There are concerns about how we will teach people to research, think and write: is using ChatGPT a "way of avoiding learning"? Further questions are posed: is the essay now dead? Could a machine-learning algorithm produce a better result? Are students being failed if they do not find the content interesting enough to engage with? Link: https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html

Byte

Students using AI Chatbots to learn a language

The BBC have reported that students are switching to AI to learn a language. AI has its benefits: it will not judge you if you make a mistake, and with Spanish, for example, it can offer regional variations such as Argentinian and Mexican Spanish. However, as useful as it can be for practising, it can be repetitive, corrections can be missing and words can be invented. As a supplement to other methods, AI could have a place in cementing knowledge and making practice fun. Link here: https://www.bbc.co.uk/news/business-65849104

Byte

Sir Paul McCartney has used AI to complete a Beatles song that was never finished. What are the limits of such technology?

Using machine learning, Sir Paul said, they managed to "lift" the late John Lennon's voice and complete the piece of work. By extracting elements of his voice from a "ropey little bit of a cassette", the 1978 song "Now and Then" will hopefully be released later this year. "We had John's voice and a piano and he could separate them with AI. They tell the machine, 'That's the voice. This is a guitar. Lose the guitar'." This was not a "hard day's night" and was certainly faster than "eight days a week"; it will be interesting to hear the finished result, and we may be "glad all over" that they did not "let it be".

Link here: https://www.bbc.co.uk/news/entertainment-arts-65881813