The AI safety race: tech giants seek to self-regulate

The AI safety race: tech giants seek to self-regulate

Major tech companies are banding together to establish safety standards and responsible development practices for artificial intelligence systems.

According to a recent Financial Times report, Microsoft, OpenAI, Google, and Anthropic announced this week the appointment of Chris Meserole as executive director of their new industry alliance, the Frontier Model Forum. The Forum, established this summer by the four tech giants, aims to fill “a gap” in global AI regulation.

Artificial intelligence has advanced rapidly in recent years, with systems like ChatGPT demonstrating capabilities such as conversing fluently and generating original text. However, the increasing sophistication of AI has raised concerns about potential dangers.

Advanced AI systems are powered by large language models that can process massive datasets to generate nuanced outputs. However, the complexity of these models makes their inner workings opaque and risks unpredictable behaviors.

“We’re probably a little ways away from there actually being regulation,” Meserole, who is stepping down from his role as an AI director at the Washington-based think-tank, told the Financial Times. “In the meantime, we want to make sure that these systems are being built as safely as possible.”

Risks like AI’s potential to aid the design of bioweapons or generate malicious computer code

The Forum will focus specifically on risks like AI’s potential to aid the design of bioweapons or generate malicious computer code that could enable hacking.

The alliance has committed $10 million towards an AI safety fund, supported by former Google CEO Eric Schmidt, to finance academic research on techniques like “red teaming.” Red teaming refers to methods researchers use to systematically test AI systems for flaws, bugs, and dangers before real-world deployment.

Developing standards for technical risk assessment

In addition to funding research, the Forum plans to develop its own standards for technical risk assessments and evaluations of AI systems. According to Meserole, establishing global standards will provide consistency across jurisdictions as governments move to regulate the technology. The Forum’s work aims to supplement eventual government regulations.

The Forum’s announcement comes just ahead of the first global summit on AI safety next week. With the EU’s comprehensive AI Act expected to be finalized as soon as early next year and governments worldwide calling for legislation, tech companies are trying to lead in establishing ethical norms and safety practices for artificial intelligence now. The goal is to shape the coming regulatory environment and avoid potentially restrictive policies down the line.

Featured Image Credit: Google DeepMind; Pexels; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.

Add a Comment