The tech industry’s biggest companies have spent the year warning that advances in artificial intelligence have far exceeded their wildest expectations and that they need to limit who has access to the technology.
Mark Zuckerberg is taking a different approach: He’s giving it away.
Mr. Zuckerberg, Meta’s chief executive, said on Tuesday that he planned to make the code behind the company’s latest and most advanced AI technology available free to developers and software enthusiasts around the world.
The decision, similar to one Meta made in February, could help the company catch up with competitors such as Google and Microsoft. Those companies have moved more quickly to incorporate generative artificial intelligence, the technology behind OpenAI’s popular ChatGPT chatbot, into their products.
“When the software is open, more people can test it to identify and fix potential problems,” Mr. Zuckerberg said in a post on his personal Facebook page.
The latest version of Meta’s AI was built with 40 percent more data than the one the company released a few months ago and is believed to be significantly more powerful. And Meta is providing a detailed road map that shows how developers can work with the vast amount of data collected.
Meta is sticking to a long-standing belief that the best way to improve technology is to allow all kinds of programmers to tinker with it. Until recently, most AI researchers agreed. But in the past year, companies such as Google, Microsoft and OpenAI, a San Francisco start-up, have set limits on who has access to their latest technology and placed controls on what can be done with it.
The companies say they are limiting access because of security concerns, but critics say they are also trying to stifle competition. Meta argues that it is in everyone’s best interest to share what it is working on.
“Meta has historically been a big supporter of open platforms, and that’s worked really well for us as a company,” Ahmad Al-Dahle, vice president of generative AI at Meta, said in an interview.
The move makes the software “open source,” meaning the computer code can be freely copied, modified and reused. The technology, called LLaMA 2, provides everything anyone would need to build an online chatbot like ChatGPT. LLaMA 2 will be released under a commercial license, which means developers can build businesses of their own using Meta’s underlying AI, all free of charge.
By open-sourcing LLaMA 2, Meta can take advantage of improvements made by programmers outside the company, while — Meta executives hope — fostering AI experimentation.
Meta’s open-source approach is not new. Companies often open-source technologies in an effort to compete with rivals. Fifteen years ago, Google open-sourced its Android mobile operating system to better compete with Apple’s iPhone. While the iPhone took an early lead, Android eventually became the dominant software used in smartphones.
But researchers argue that anyone could deploy Meta’s AI without the safeguards that tech giants such as Google and Microsoft typically use to suppress toxic content. Newly open-sourced models like Meta’s could, for example, be used to flood the internet with even more spam, financial scams and disinformation.
LLaMA is short for Large Language Model Meta AI. Chatbots like ChatGPT and Google Bard are built with what scientists call large language models, or LLMs.
These models are systems that learn skills by analyzing vast amounts of digital text, including Wikipedia articles, books, online forum conversations and chat logs. By noticing patterns in that text, the systems learn to generate text of their own, including term papers, poetry and computer code. They can also carry on a conversation.
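The pattern-learning idea can be illustrated with a toy example. The sketch below is not Meta’s model or anything like it; it is a minimal bigram “language model” in Python (the function names are invented for illustration) that counts which word tends to follow which in a tiny corpus, then generates text by repeatedly picking the most frequent next word. Real LLMs like LLaMA 2 learn vastly richer patterns with neural networks trained on enormous datasets, but the core idea, predicting the next word from patterns in the training text, is the same.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Greedily extend `start` with the most frequent next word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no known continuation; stop
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the", length=3))  # → "the cat sat on"
```

After training, “the” is followed by “cat” twice and “mat” once, so generation starting from “the” continues with “cat”. Scaling this idea up from counting word pairs to modeling long-range patterns across trillions of words is, very roughly, what the large models do.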
Meta executives argue that their strategy is not as risky as many believe. They say people can already generate large amounts of disinformation and hate speech without using AI, and that such toxic content can be tightly restricted by Meta’s social networks, such as Facebook. They maintain that releasing the technology can ultimately strengthen the ability of Meta and other companies to fight back against abuses of the software.
Mr. Al-Dahle said Meta performed additional “red team” testing of LLaMA 2 before releasing it, a term for probing software for potential misuse and figuring out ways to protect against such abuse. The company will also release a responsible-use guide containing best practices and guidelines for developers who wish to build programs using the code.
But those tests and guidelines apply to only one of the models Meta is releasing, which will be trained and fine-tuned in a way that contains guardrails aimed at preventing misuse. Developers will also be able to use the code to create chatbots and programs without those guardrails, a move that skeptics see as a risk.
In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed academics to download LLaMA after it had been trained on vast amounts of digital text. Scientists call this process “releasing the weights.”
This was a notable step because analyzing all that digital data requires vast computing and financial resources. With the weights, anyone can build a chatbot far more cheaply and easily than by starting from scratch.
Many in the tech industry believed Meta had set a dangerous precedent, and after Meta shared its AI technology with a small group of academics in February, one of the researchers leaked the technology onto the public internet.
In a recent opinion piece in The Financial Times, Nick Clegg, Meta’s president of global public policy, argued that “keeping basic technology in the hands of only a few large corporations” was not sustainable, and that releasing open-source software has historically served companies well strategically.
“I look forward to seeing what you all create!” Mr. Zuckerberg said in his post.