Microsoft on Thursday backed a crop of regulations for artificial intelligence as the company fends off concerns from governments around the world about the risks of the fast-evolving technology.
Microsoft, which has promised to build artificial intelligence into many of its products, proposed rules that include a requirement that systems used in critical infrastructure can be fully shut down or slowed down, similar to an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to AI systems and for labels making clear when an image or video was created by a computer.
“Companies need to step up,” Brad Smith, Microsoft’s president, said in an interview about the push for regulations. “The government needs to move fast.” He laid out the proposals Thursday morning before an audience including lawmakers at an event in downtown Washington.
The call for regulations punctuates a boom in AI, with the release of the ChatGPT chatbot in November spurring a wave of interest. Companies including Microsoft and Google’s parent, Alphabet, have since raced to incorporate the technology into their products. That has raised concerns that the companies are sacrificing safety to reach the next big thing before their competitors.
Lawmakers have publicly expressed concern that such AI products, which can automatically generate text and images, will create a flood of misinformation, be used by criminals and put people out of work. Regulators in Washington have pledged to be on the lookout for scammers using AI and for instances in which the systems perpetuate discrimination or make decisions that violate the law.
In response to that scrutiny, AI developers have increasingly called for shifting some of the burden of controlling the technology onto the government. Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.
The maneuver echoes calls for new privacy or social media laws by internet companies like Google and Meta, Facebook’s parent. In the United States, lawmakers have moved slowly on such calls, with few new federal regulations on privacy or social media passed in recent years.
In the interview, Mr. Smith said that Microsoft was not trying to avoid responsibility for managing the new technology, since it was offering specific ideas and promising to carry out some of them whether or not the government took action.
“There is not an iota of abdication of responsibility,” he said.
He backed the idea, espoused by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.
“It means that when you start testing, you inform the government,” Mr. Smith said. “You have to share the results with the government. Even when it is licensed for deployment, you have a duty to continue monitoring it and report to the government if any unforeseen problems arise.”
Microsoft, which earned more than $22 billion from its cloud computing business in the first quarter, also said those high-risk systems should be allowed to operate only in “licensed AI data centers.” Mr. Smith acknowledged that the company would not be in a “bad position” to offer such services, but said many American competitors could provide them as well.
Microsoft said governments should designate some AI systems used in critical infrastructure as “high risk” and require them to have “safety brakes.” It compared that feature to the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.
In some sensitive cases, Microsoft said, companies that provide AI systems should be required to know certain information about their customers. To protect consumers from deception, the company said, content created by AI should carry a special label.
Mr. Smith said companies should bear legal “responsibility” for AI-related harms. In some cases, he said, the liable party could be the developer of an application, such as Microsoft’s Bing search engine, that uses someone else’s underlying AI technology. He added that cloud companies could be responsible for complying with security regulations and other rules.
“We don’t necessarily have the best information or the best answers, or we might not be the most reliable speakers,” Mr. Smith said. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”