Thinking about thinking machines
Artificial intelligence (AI) sometimes seems like the world’s slowest overnight success. Leaving aside thought experiments and fantastical tales, AI was first seriously conceived of in 1956 at Dartmouth College in the US, and in the decades since, progress has been both fast and slow.
While those present at the Dartmouth conference expected AI to arrive in short order – organiser John McCarthy famously wrote “Every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it” – the reality was a little different.
A long grind ensued, with interest in, and funding for, AI research collapsing in the 1970s during a so-called ‘AI winter’. In the 1980s, when businesses attempted to launch AI-based products, they renamed them using euphemisms such as ‘intelligent systems’, because the term AI had acquired a reverse Midas touch after years of over-promising and under-delivering.
Research continued, however, and real progress was made, particularly in statistical analysis. So, while the events of 2022 and 2023 might seem like they came out of nowhere, they were the result of decades of incremental work.
Nevertheless, Barry O’Sullivan, professor at the School of Computer Science and IT at University College Cork and likely Ireland’s best-known AI researcher, told TechCentral.ie that the developments of 2022 and 2023 should not be waved away: generative AI is an impressive application of AI technology.
“It has been an impressive year,” he said. “ChatGPT and the various other generative AI systems have been technologically amazing advances, and they have certainly captured the public imagination due to being deployed so well”.
However, O’Sullivan, who was recently awarded the 2023 Distinguished Service Award by EurAI, the European Association for Artificial Intelligence, cautioned that talk around the technology, whether from companies flogging AI, from journalists, or even from governments, has tended toward hyperbole.
Today’s AIs are not, for instance, going to evolve into machines that can actually think (known as ‘artificial general intelligences’ or AGIs). How could they? Generative AIs, notably the large language model-based chatbots, do not reason. Instead, they make probabilistic assessments of which word is most likely to come next in a sentence. This is intelligence, but it is very far from human cognition, lacking many basic building blocks, including the ability to form a coherent model of reality.
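The probabilistic word prediction described above can be illustrated with a deliberately simplified sketch. This toy bigram model simply counts which word follows which in a tiny sample text; real LLMs use neural networks over billions of parameters and subword tokens, but the statistical principle, picking the likely continuation rather than reasoning about meaning, is similar:

```python
# Toy illustration of next-word prediction as a probability
# distribution learned from word-pair counts (a bigram model).
# This is NOT how production LLMs work internally; it only shows
# the "statistically likely continuation" idea in miniature.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the sample text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return the observed probability of each possible next word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# "cat" comes out as the most probable continuation of "the",
# purely because it occurred most often in the sample text.
```

A model like this has no notion of what a cat or a mat is; it only knows co-occurrence statistics, which is the limitation O’Sullivan points to, writ large.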
“These systems are still extremely limited. The primary challenges of AI still stand. While it has been a great year, it’s not a solved problem, most certainly. Far from it. These systems can’t reason, they can’t really do mathematics, and they don’t really have an understanding of the world,” O’Sullivan said.
Hallucinating success
O’Sullivan’s comments stand in stark contrast to what the AI business is selling. Right now, for example, ChatGPT creator OpenAI says it is planning for the development of AGI. Indeed, speaking recently at Salesforce’s annual Dreamforce conference, OpenAI boss Sam Altman said that developing AGI had been the goal from day one.
Back on planet Earth, meanwhile, AIs still do not think. They also get things wrong. As do their creators, it would seem.
“They hallucinate and make things up. If you’re a company with an LLM you’re going to say LLMs are the foundation of the future,” O’Sullivan said.
From O’Sullivan’s point of view, talk of AGI is not only premature, it masks the true nature of the technology behind AI and runs the risk of mystifying how AI works and, ironically, covering up the achievements in the field.
“AGI is still science fiction in my view, and it will continue to be science fiction for a long time to come. But we do have AI systems that have moved out of being able to do just one thing – play chess, for instance,” he said.
The hype often also contains inherent contradictions, O’Sullivan says. For instance, in May, Altman was among the notables who signed a statement warning that AI posed a “risk of extinction” that should be taken as seriously as the threat of nuclear war.
O’Sullivan asks: if AI really is an existential threat, why is it available to the public?
This is a real problem, he says, not because AI is a threat, but because the rhetoric ratchets up fear in society. As a result, those working in AI have a responsibility to choose their words carefully and to properly explain how the technology works.
“The communication around AGIs and existential threats is bordering on the irresponsible, because people can feel that there really is something different going on [with AI], but they don’t understand the technology well enough to understand the limitations of it,” he said.
If there is a threat today it is a more quotidian one than human extinction: companies gorging on users’ data. Asked by TechCentral.ie if this was a problem, O’Sullivan said some transparency would go a long way.
“These systems, their very food is data and interaction. Regarding data privacy, it is really hard to know as they are not publishing ethical reviews and fundamental rights assessments,” he said.