Several executives from companies developing artificial intelligence (AI), including Sam Altman, CEO of OpenAI, joined experts and professors in the field on Tuesday to express concern about the "growing risk of human extinction due to AI", asking political decision-makers to take this threat seriously and treat it as gravely as the threat of nuclear war or a pandemic, reports Reuters.
"Reducing the risk of extinction due to AI must be a global priority, alongside other serious risks to society as a whole, such as pandemics or nuclear war," according to the 350 signatories of an open letter published by the nonprofit Center for AI Safety (CAIS).
Along with Altman, the letter was signed by the heads of the AI companies DeepMind and Anthropic, as well as executives from Microsoft and Google, writes Agerpres. Also on the list of signatories are Geoffrey Hinton and Yoshua Bengio, two of the so-called "godfathers of artificial intelligence" who received the Turing Award in 2018 for their work in the field, as well as numerous professors from universities such as Harvard and China's Tsinghua.
A press release from CAIS cites the case of Meta, where the third "godfather" of artificial intelligence, Yann LeCun, works; no representative of that company agreed to sign the open letter.
In April, Elon Musk and other experts in the field were among the first to warn about the risks this technology poses to society as a whole.
Of course, AI is a technology that could be revolutionary in fields such as medicine, but it can also lead to intrusions into people's private lives, serve as the basis for powerful disinformation campaigns, and create new problems once intelligent machines begin to think for themselves.
One of the pioneers of the field, and a signatory of the letter, Geoffrey Hinton recently told Reuters that artificial intelligence could be "a more urgent threat" to humanity than climate change.