Rishi Sunak says AI has threats and risks – but outlines its potential
Artificial intelligence could help make it easier to build chemical and biological weapons, Prime Minister Rishi Sunak has warned.
In a worst-case scenario, society could lose all control over AI, preventing it from being switched off, Mr Sunak said.
While the potential for harm is disputed, we must not "put our heads in the sand" over AI risks, he argued.
In a speech aiming to present the UK as a world leader on AI, the PM said the technology was already creating jobs.
He added that development of the technology would catalyse economic growth and productivity, though admitted it would have an impact on the labour market.
The prime minister's speech on Thursday morning set out the capabilities and potential risks posed by AI – including cyber attacks, fraud and child sexual abuse – following the publication of a government report.
Mr Sunak said among the risks outlined in the report was that AI could be used by terrorist groups "to spread fear and disruption on an even greater scale".
Mitigating the risk of human extinction from AI should be a "global priority", he said.
But he added: "This is not a risk that people need to be losing sleep over right now and I don't want to be alarmist."
He said that he was generally "optimistic" about the potential of AI to transform people's lives for the better.
A threat that will be much closer to home for many is the disruption AI is already bringing to jobs.
Mr Sunak mentioned AI tools efficiently doing admin tasks like preparing contracts and helping to make decisions – roles traditionally carried out by employees.
He said he believed education was the solution to preparing people for the changing market, adding that technology had always brought changes to the way people make money.
Automation has already changed the nature of factory and warehouse work, for example, but has not entirely removed human input.
The prime minister insisted it was too simple to say artificial intelligence would "take people's jobs", instead urging the public to view the tech as a "co-pilot" in the day-to-day activities of the workplace.
Reports, including declassified material from the UK intelligence community, set out a series of warnings about the threats AI could pose within the next two years.
According to the government's "Safety and Security Risks of Generative Artificial Intelligence to 2025" report, AI could be used to:
- Enhance terrorist capabilities in propaganda, radicalisation, recruitment, funding streams, weapons development and attack planning
- Increase fraud, impersonation, ransomware, currency theft, data harvesting, voice cloning
- Increase child sexual abuse images
- Plan and carry out cyberattacks
- Erode trust in information and use 'deepfakes' to influence societal debate
- Assemble knowledge on physical attacks by non-state violent actors, including chemical, biological and radiological weapons
Experts are divided about the threat posed by AI, and previous fears about other emerging technologies have not fully materialised.
Rashik Parmar, the chief executive of the BCS, The Chartered Institute for IT, said: "AI won't grow up like The Terminator.
"If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement."
In his speech, Mr Sunak said the UK would not "rush to regulate" AI because it was "hard to regulate something you do not fully understand".
He said the UK's approach should be proportionate while also encouraging innovation.
Mr Sunak wants to position the UK as a global leader on the safety of the technology – a way of carving out a central role on a stage where it cannot really compete with huge players such as the US and China in terms of resources or homegrown tech giants.
So far, most of the West's powerful AI developers seem to be cooperating – but they are also keeping a lot of secrets about what data their tools are trained on and how they really work.
The UK will have to find a way to persuade these firms to stop, as the prime minister put it, "marking their own homework".
Prof Carissa Veliz, an associate professor in philosophy at the University of Oxford's Institute for Ethics in AI, said that unlike the EU, the UK had so far been "notoriously averse to regulating AI, so it is interesting for Sunak to say that the UK is particularly well-suited to lead the efforts of ensuring the safety of AI".
She said regulation often leads to "the most impressive and important innovations".
Labour said the government had not yet set out concrete proposals on how it would regulate the most powerful AI models.
"Rishi Sunak should back up his words with action and publish the next steps on how we can ensure the public is protected," Shadow Science, Innovation and Technology Secretary Peter Kyle said.
The UK is hosting a two-day AI safety summit at Bletchley Park in Buckinghamshire next week, with China expected to attend.
The decision to invite China at a time of tense relations between the two countries has been criticised by some. Former Prime Minister Liz Truss has written to Mr Sunak asking him to rescind China's invitation.
She believes "we should be working with our allies, not seeking to subvert freedom and democracy", citing concerns about Beijing's attitude towards the West on AI.
But, speaking earlier, Mr Sunak defended the decision, arguing there could be "no serious strategy for AI without at least trying to engage all of the world's leading AI powers".
The summit will bring together world leaders, tech firms, scientists and academics to discuss the emerging technology.
Professor Gina Neff, Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, has criticised the focus of the summit.
"The concerns that most people care about are not on the table, from building digital skills to how we work with powerful AI tools," she said.
"This brings its own risks for people, communities, and the planet."