Meta Unveils a More Powerful A.I. and Isn’t Fretting Over Who Uses It
The models are systems that learn skills by analyzing enormous volumes of digital text, including Wikipedia articles, books, online forum conversations and chat logs. By pinpointing patterns in the text, these systems learn to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.
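To make the idea concrete, the sketch below shows how a pretrained language model of this kind produces text one token at a time. It is only an illustration, not Meta’s own code: it uses the Hugging Face transformers library and the small, openly available "gpt2" model as a stand-in for a larger system like LLaMA.

```python
# Illustrative sketch: generating text from a pretrained language model.
# Uses the small open "gpt2" model as a stand-in; larger models such as
# LLaMA work the same way, predicting one token at a time based on
# patterns learned from large volumes of text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The history of artificial intelligence began"
inputs = tokenizer(prompt, return_tensors="pt")

# The model extends the prompt by repeatedly sampling a likely next token.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```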
Meta executives argue that their strategy is not as risky as many believe. They say that people can already generate large amounts of disinformation and hate speech without using A.I., and that such toxic material can be tightly restricted on Meta’s social networks, such as Facebook. They maintain that releasing the technology can eventually strengthen the ability of Meta and other companies to fight back against abuses of the software.
Meta did additional “Red Team” testing of LLaMA 2 before releasing it, Mr. Al-Dahle said. That is a term for testing software for potential misuse and figuring out ways to protect against such abuse. The company will also release a responsible-use guide containing best practices and guidelines for developers who wish to build programs using the code.
But these tests and guidelines apply to only one of the models that Meta is releasing, which will be trained and fine-tuned with guardrails intended to inhibit misuse. Developers will also be able to use the code to create chatbots and programs without guardrails, a move that skeptics see as a risk.
In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed academics to download the model’s trained parameters after it had learned from vast amounts of digital text, a step scientists call “releasing the weights.”
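The “weights” are the billions of numerical parameters a model learns during training; once they are released, anyone with enough computing power can load the model, run it and fine-tune it. The sketch below illustrates that idea. It assumes access to the later, gated “meta-llama/Llama-2-7b-hf” repository on the Hugging Face Hub, used here purely for illustration; the original February release was distributed to researchers through a separate application process.

```python
# Sketch of what "releasing the weights" enables: loading the trained
# parameters locally and counting them. Assumes access has been granted
# to the gated "meta-llama/Llama-2-7b-hf" repository (illustrative only).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# The "weights" are simply the model's learned numerical parameters.
total = sum(p.numel() for p in model.parameters())
print(f"Loaded {total:,} parameters")  # roughly 7 billion for the 7B model
```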