The UN Security Council held its first session on the threat of artificial intelligence to international peace and stability on Tuesday, and Secretary-General Antonio Guterres called for a global watchdog to monitor a new technology that has raised at least as many fears as hopes.
Mr Guterres warned that AI could ease the way for criminals, terrorists and other actors intent on causing “death and destruction, widespread trauma and deep psychological damage on an unimaginable scale”.
ChatGPT, which launched last year and can generate text from prompts, mimic voices and produce photos, images and videos, has raised warnings about disinformation and manipulation.
On Tuesday, diplomats and leading experts in the field of AI laid out before the Security Council the scientific and social benefits, as well as the risks and dangers, of the emerging technology. They added that much remains unknown about the technology, even as its development races ahead.
“It’s like we’re building engines without understanding the science of combustion,” said Jack Clark, a co-founder of Anthropic, an AI safety research company. Private companies should not be the sole creators and regulators of AI, he said.
Mr Guterres said the UN watchdog should act as a governing body to regulate, monitor and enforce AI rules, much as other agencies oversee aviation, climate and nuclear energy.
The proposed agency would consist of experts in the field who would share their expertise with governments and administrative agencies that may lack the technical know-how to deal with AI threats.
But a legally binding proposal on how to control AI is still a long way off. Most diplomats, however, supported the notion of a global governing mechanism and a set of international rules.
“No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors,” said James Cleverly, the UK Foreign Secretary, who chaired the meeting.
Russia, diverging from the majority view of the Council, expressed doubt that enough is known about the risks of AI to single it out as a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, pushed back against the creation of a single set of global laws, saying international regulatory bodies should be flexible enough to allow countries to develop their own rules.
However, the Chinese ambassador said that his country opposes the use of AI as “a means of creating military hegemony or undermining a country’s sovereignty”.
Also mentioned was the military use of autonomous weapons for killings on the battlefield or in another country, such as the satellite-controlled AI robot that Israel reportedly sent into Iran to kill its top nuclear scientist, Mohsen Fakhrizadeh.
Mr Guterres said the United Nations should reach a legally binding agreement by 2026 to ban the use of AI in autonomous weapons of war.
Professor Rebecca Willett, director of AI at the Data Science Institute at the University of Chicago, said in an interview that in regulating technology, it is important not to lose sight of the humans behind it.
The systems are not completely autonomous, and the people who design them should be held accountable, she said.
“That’s one of the reasons the United Nations is looking at this,” Professor Willett said. “There really needs to be international influence, so that a company based in one country cannot undermine another country without violating international agreements. Real enforceable regulation can make things better and safer.”