F.T.C. Is Investigating ChatGPT Maker

The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence start-up that makes ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information on individuals.

In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data, and said it should provide the agency with documents and details.

The F.T.C. is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers,” the letter said.

The investigation was earlier reported by The Washington Post and confirmed by a person familiar with the investigation. OpenAI declined to comment.

The F.T.C. investigation poses the first major U.S. regulatory threat to OpenAI, one of the highest-profile A.I. companies, and signals that the technology may increasingly come under scrutiny as people, businesses and governments use more A.I.-powered products. The rapidly evolving technology has raised alarms because chatbots, which can generate answers in response to prompts, have the potential to replace people in their jobs and to spread disinformation.

Sam Altman, who leads OpenAI, has said that the fast-growing A.I. industry needs to be regulated. In May, he testified before Congress and invited A.I. legislation, and he has visited hundreds of lawmakers, aiming to set a policy agenda for the technology. “I think if this technology goes wrong, it can go quite wrong,” he said at the May hearing. “We want to work with the government to prevent that from happening.”

OpenAI has already come under regulatory pressure internationally. In March, Italy’s data protection authority banned ChatGPT, saying that OpenAI unlawfully collected personal data from users and did not have an age-verification system in place to prevent minors from being exposed to illicit material. OpenAI restored access to the system the following month, saying it had implemented the changes the Italian authority asked for.

The F.T.C. is acting on A.I. with notable speed, opening an investigation less than a year after OpenAI introduced ChatGPT. Lina Khan, the F.T.C. chair, has said that tech companies should be regulated while technologies are nascent, rather than only when they become mature.

In the past, the agency typically began investigations after a major public misstep by a company, as when it opened a probe into Meta’s privacy practices in 2018 after reports that the company had shared user data with Cambridge Analytica, a political consulting firm.

Ms. Khan, who testified in a hearing on Thursday over the agency’s practices, has previously said that the A.I. industry needed scrutiny.

“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” she wrote in an opinion piece in The New York Times in May. “While the technology is moving swiftly, we already can see several risks.”

Lina Khan, the chair of the Federal Trade Commission, has said that while A.I. is novel, it is “not exempt from existing rules.” (Credit: Tom Brenner for The New York Times)

The investigation could force OpenAI to reveal its methods for building ChatGPT and the data sources it uses to build its A.I. systems. While OpenAI had long been fairly open about such information, it has said little more recently about where the data for its A.I. systems comes from and how much of it is used to build ChatGPT, probably because it is wary of competitors copying its work and of lawsuits over the use of certain data sets.

Chatbots, which are also being deployed by companies like Google and Microsoft, represent a major shift in the way computer software is built and used. They are poised to reinvent internet search engines like Google Search and Bing, talking digital assistants like Alexa and Siri and email services like Gmail and Outlook.

When OpenAI first released ChatGPT in November, it instantly captured the public’s imagination with its ability to answer questions, write poetry and riff on almost any topic. But the technology can also blend fact with fiction and even make up information, a phenomenon that scientists call “hallucination.”

ChatGPT is driven by what A.I. researchers call a neural network. This is the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets. A neural network learns skills by analyzing data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
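The learning-from-examples idea in that paragraph can be illustrated with a toy sketch. This is not how ChatGPT's network works; it is a single artificial neuron, the smallest building block of such systems, learning the logical-AND pattern from labeled examples alone, using the classic perceptron update rule:

```python
# Illustrative sketch only: a single artificial neuron that learns
# a pattern from labeled examples, the basic mechanism behind the
# neural networks described above (real systems use billions of
# such units). It is never told the rule, only shown examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, adjusted whenever a prediction is wrong
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labeled data for logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other three inputs return 0: the neuron has recovered the rule purely from the examples, just as a larger network pinpoints patterns in cat photos.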

Researchers at labs like OpenAI have designed neural networks that analyze massive amounts of digital text, including Wikipedia articles, books, news stories and online chat logs. These systems, known as large language models, have learned to generate text on their own but may repeat flawed information or combine facts in ways that produce inaccurate information.
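The text-generation idea can be sketched at a miniature scale. The toy below is a bigram counter, not a large language model: it learns, from a few words of text, which word tends to follow which, then generates by repeatedly picking the most common next word. It also shows why such systems simply echo patterns in their training data, flaws included:

```python
from collections import Counter, defaultdict

# Toy sketch of the core idea behind large language models: learn
# from text which word follows which, then generate by repeatedly
# choosing a likely next word. Real models have billions of
# parameters; this just counts adjacent word pairs (bigrams).
def train_bigrams(text):
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # greedy pick
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
```

Here `generate(model, "the", 1)` yields "the cat", because "cat" follows "the" most often in the training text. The model has no notion of truth, only of what commonly comes next, which is one intuition for why larger systems can combine facts into inaccurate statements.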

In March, the Center for A.I. and Digital Policy, an advocacy group pushing for the ethical use of technology, asked the F.T.C. to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns involving bias, disinformation and security.

The organization updated the complaint less than a week ago, describing additional ways the chatbot could do harm, which it said OpenAI had also pointed out.

“The company itself has acknowledged the risks associated with the release of the product and has itself called for regulation,” said Marc Rotenberg, the president and founder of the Center for A.I. and Digital Policy. “The Federal Trade Commission needs to act.”

OpenAI has been working to refine ChatGPT and to reduce the frequency of biased, false or otherwise harmful material. As employees and other testers use the system, the company asks them to rate the usefulness and truthfulness of its responses. Then through a technique called reinforcement learning, it uses these ratings to more carefully define what the chatbot will and will not do.
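The feedback loop in that paragraph can be sketched in simplified form. The code below is not OpenAI's actual method: real reinforcement learning from human feedback trains a reward model and updates the network's weights, whereas this sketch just keeps a running score per canned response, raised or lowered by tester ratings, to show how ratings shift what the system prefers to say:

```python
# Hedged sketch of a rating-driven feedback loop, a stand-in for
# the reinforcement-learning step described above. Tester ratings
# (+1 useful/truthful, -1 harmful/false) adjust each candidate
# response's score; the system then prefers the highest-scoring one.
def update_scores(scores, response, rating, lr=0.5):
    scores[response] = scores.get(response, 0.0) + lr * rating
    return scores

def preferred(scores):
    # Pick the response with the highest accumulated score.
    return max(scores, key=scores.get)

scores = {}
update_scores(scores, "I don't know.", +1)   # rated truthful
update_scores(scores, "Made-up fact.", -1)   # rated false
update_scores(scores, "Made-up fact.", -1)   # rated false again
```

After these ratings, `preferred(scores)` returns the honest answer: repeated negative feedback has pushed the fabricated one below it, which is the shape, if not the substance, of how ratings define what a chatbot will and will not do.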

This is a developing story. Check back for updates.
