Artificial Intelligence: What is AI, is it dangerous and what jobs are at risk?
Artificial intelligence (AI) technology is developing at high speed, and is transforming many aspects of modern life.
However, some experts fear that it could be used for malicious purposes, and may threaten jobs.
What is AI and how does it work?
AI allows a computer to act and respond almost as if it were human.
Computers can be fed huge amounts of information and trained to identify the patterns in it, in order to make predictions, solve problems, and even learn from their own mistakes.
As well as data, AI relies on algorithms – lists of rules which must be followed in the correct order to complete a task.
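In code, "training" often means fitting a statistical model to labelled examples so it can predict labels for new ones. Below is a minimal sketch in Python using the scikit-learn library; the song features, numbers and labels are invented purely for illustration.

    # Learn a pattern from labelled examples, then predict an unseen case.
    from sklearn.tree import DecisionTreeClassifier

    # Each row is a song: [tempo in beats per minute, duration in minutes]
    songs = [[120, 3.5], [128, 4.0], [60, 6.0], [70, 5.5]]
    # 1 = the listener played the song to the end, 0 = they skipped it
    played = [1, 1, 0, 0]

    model = DecisionTreeClassifier()
    model.fit(songs, played)  # identify the pattern in the data

    # Predict whether an unseen song will be played or skipped
    print(model.predict([[125, 3.8]]))  # -> [1], i.e. likely to be played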
The technology is behind the voice-controlled virtual assistants Siri and Alexa, and helps Spotify, YouTube and BBC iPlayer suggest what you might want to play next.
It also lets Amazon analyse customers' buying habits to recommend future purchases, and helps Facebook and Twitter decide which social media posts to show users.
What are ChatGPT and Snapchat's My AI?
Two powerful AI-driven applications that have gained a high profile in recent months are ChatGPT and Snapchat's My AI.
ChatGPT is an example of what is called "generative" AI.
This uses the patterns and structures it identifies in vast quantities of source data to generate new and original content which feels like it has been created by a human.
It is coupled with a computer program known as a chatbot, which "talks" to human users via text.
The programme can answer questions, tell stories and write computer code.
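ChatGPT itself is built on very large neural networks, but the core idea of generative AI – learn the patterns in source text, then produce new text that follows them – can be illustrated with a far simpler technique. The Python sketch below uses a toy Markov chain; the source sentences are invented, and this is only an illustration of the idea, not how ChatGPT works internally.

    import random
    from collections import defaultdict

    # Invented source text for the toy example
    source = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .")

    # Learn the pattern: which words follow each word in the source
    follows = defaultdict(list)
    words = source.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    # Generate new text by repeatedly sampling a plausible next word
    word = "the"
    output = [word]
    for _ in range(12):
        word = random.choice(follows[word])
        output.append(word)
    print(" ".join(output))  # e.g. "the dog sat on the mat . the cat ..."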
However, the system has only been fed information from before 2021. It sometimes generates incorrect answers for users, and can reproduce the bias contained in its source material, such as sexism or racism.
Snapchat's AI chatbot, My AI, works in a similar way to provide helpful human-like responses to instructions.
But My AI can also "hallucinate" – confidently stating untrue or misleading things as fact.
Which jobs are at risk because of AI?
AI has the potential to revolutionise the world of work, but this raises questions about which roles it might displace.
A recent report by investment bank Goldman Sachs suggested that AI could replace the equivalent of 300 million full-time jobs across the globe, as certain tasks and job functions become automated. That equates to a quarter of all the work humans currently do in the US and Europe.
But it also identified huge potential benefits for many sectors, and predicted that AI would lead to a 7% increase in global GDP.
Some areas of medicine and science are already taking advantage of AI, with doctors using the technology to help spot breast cancers, and scientists using it to develop new antibiotics.
Why do critics fear AI could be dangerous?
With few rules currently in place governing how AI is used, experts have warned that its rapid growth could be dangerous. Some have even said AI research should be halted.
In May, Geoffrey Hinton, widely considered to be the godfather of artificial intelligence, quit his job at Google, warning that AI chatbots could soon be more intelligent than humans.
Later that month, the US-based Center for AI Safety published a statement supported by dozens of leading tech specialists warning of the dangers of AI.
They argue AI could be used to generate misinformation capable of destabilising society, or allow those who control it to track and even suppress the wider population.
In the worst-case scenario, machines might become so intelligent that they take over, leading to the extinction of humanity.
But others, including tech pioneer Martha Lane Fox, say we shouldn't get what she calls "too hysterical" about AI, urging a more sensible conversation about its capabilities.
What rules are in place at the moment about AI?
Governments around the world are wrestling with how to regulate AI.
Members of the European Parliament will shortly vote on the EU's Artificial Intelligence Act. If passed, it would establish the world's first comprehensive legal framework for AI, which companies would need to follow.
The EU proposals include grading AI products depending on their impact – an email spam filter, for example, would require lighter regulation than a cancer-detection tool.
These rules would not apply in the UK, where the government set out its vision for the future of AI in March.
It ruled out setting up a dedicated AI regulator, saying instead that existing bodies would be responsible for oversight.
In April, Italy became the first western country to ban ChatGPT over privacy concerns. The ban was overturned a month later, after its maker, OpenAI, said it had addressed the issues.
US lawmakers have also expressed concern about whether the existing voluntary codes are up to the job.
Meanwhile, China intends to make companies notify users whenever an AI algorithm is being used.