MIT Technology Review: How Existential Risk Became the Biggest Meme in AI
Written by: Will Douglas Heaven
Source: MIT Technology Review
Who's afraid of robots? A lot of people, it seems. The number of prominent figures making public statements or signing open letters warning of the catastrophic dangers of artificial intelligence is staggering.
Hundreds of scientists, business leaders, and policymakers have spoken out, from deep-learning pioneers Geoffrey Hinton and Yoshua Bengio, to the CEOs of top AI companies such as Sam Altman and Demis Hassabis, to California Rep. Ted Lieu and former Estonian President Kersti Kaljulaid.
The starkest assertion, signed by all of these figures and many more, was a 22-word statement released two weeks ago by the Center for AI Safety (CAIS), a San Francisco-based research organization. It declares: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This wording is deliberate. "If we had wanted a Rorschach-test kind of statement, we would have said 'existential risk,' because that can mean a lot of things to a lot of different people," said CAIS director Dan Hendrycks. But they wanted to be clear: this is not about bringing down the economy. "That's why we went with 'risk of extinction,' even though many of us worry about various other risks as well," Hendrycks said.
We've been here before: AI doom comes with AI hype. But this time feels different. The Overton window has shifted. What was once an extreme view is now mainstream, grabbing headlines and the attention of world leaders. "The chorus of voices raising concerns about AI is too loud to ignore," said Jenna Burrell, research director at Data & Society, an organization that studies the social impact of technology.
What happened? Is artificial intelligence really becoming (more) dangerous? Why are the people who introduced this technology in the first place now starting to sound the alarm?
To be sure, opinion in the field is divided. Last week, Meta's chief scientist Yann LeCun, who shared the 2018 Turing Award with Hinton and Bengio, called the doomsday talk "ridiculous." Aidan Gomez, CEO of the AI company Cohere, said it was "a ridiculous use of our time."
Others scoffed as well. "There is no more evidence now than there was in 1950 that AI will pose these existential risks," said Meredith Whittaker, president of Signal. Whittaker is the co-founder and former director of the AI Now Institute, a research lab that studies the policy implications of artificial intelligence. "Ghost stories are contagious; being scared is really exciting and stimulating."
"It's also a way of looking past everything that's happening today," Burrell said. "It shows that we haven't seen real or serious harm."
An Ancient Fear
Concerns about runaway, self-improving machines have been around since Alan Turing's time. Futurists such as Vernor Vinge and Ray Kurzweil popularized these ideas by talking about the so-called Singularity, a hypothetical point at which artificial intelligence surpasses human intelligence and machines take over.
But at the heart of this concern is the question of control: if (or when) machines get smarter, how do humans stay on top? In a 2017 paper titled "How does AI pose an existential risk?", Karina Vold, a philosopher of AI at the University of Toronto (who also signed the CAIS statement), lays out the basic argument behind the concern.
The argument rests on three premises. First, it is possible for humans to build a superintelligent machine that surpasses all other intelligences. Second, there is a chance we will not be able to control a superintelligent agent that outsmarts us. Third, there is a chance that a superintelligent agent will do things we do not want it to do.
Put these together and it becomes possible to build a machine that does things we do not want it to do, up to and including wiping us out, and that we will not be able to stop.
This scenario comes in different flavors. When Hinton raised his concerns about AI in May, he cited the example of robots rerouting the electricity grid to give themselves more power. But superintelligence (or AGI) isn't a prerequisite. Dumb machines can also be disastrous if they are given too much leeway. Many scenarios involve thoughtless or malicious deployment rather than self-interested bots.
In a paper posted online last week, Stuart Russell and Andrew Critch, AI researchers at the University of California, Berkeley (who both also signed the CAIS statement), offer a taxonomy of existential risks. These range from viral advice-giving chatbots telling millions of people to drop out of college, to autonomous industries pursuing harmful economic ends, to nation-states building AI-powered superweapons.
In many imagined cases, a theoretical model achieves the goal humans give it, but in a way that is not good for us. For Hendrycks, who studies how deep-learning models can sometimes behave in unexpected ways when given inputs not seen in their training data, an AI system could be disastrous because it is broken rather than all-powerful. "If you give it a goal and it finds an exotic solution, it takes us for a strange ride," he said.
The problem with these possible futures is that they rest on a string of ifs, which makes them sound like science fiction. Vold herself admits as much. "Because the events that constitute or trigger [existential risks] are unprecedented, arguments that they pose such a threat must be theoretical in nature," she wrote. "Their rarity also makes any speculation about how or when such events might occur subjective and not empirically verifiable."
So why are more people entertaining these ideas than ever before? "Different people talk about risk for different reasons, and they may mean different things by it," says François Chollet, an AI researcher at Google. But it is an irresistible narrative: "Existential risk has always been a good story."
"There's a mythological, almost religious element to it that can't be ignored," Whittaker said. "I think we need to recognize that given that what's being described has no evidence base, it's closer to a belief, a religious fervor, than a scientific discourse."
The Contagion of Doomsday
When deep-learning researchers first started racking up successes (think of Hinton and his colleagues' record-breaking image-recognition scores in the 2012 ImageNet competition, or DeepMind's first AlphaGo victory over a human champion in 2015), the hype soon turned to doomsday talk as well. Prominent scientists such as Stephen Hawking and cosmologist Martin Rees, along with high-profile tech leaders such as Elon Musk, sounded the alarm about existential risk. But none of these figures were AI experts.
Standing on stage in San Jose eight years ago, Andrew Ng, a pioneer of deep learning and then chief scientist at Baidu, laughed off the idea.
"In the distant future, there may be a killer robot race," Andrew Ng told the audience at the 2015 Nvidia GPU Technology Conference. “But I’m not as committed today to preventing artificial intelligence from becoming evil as I am to worrying about overpopulation on Mars.” (Ng’s remarks were reported at the time by tech news site The Register.)
Andrew Ng, who co-founded Google's artificial intelligence lab in 2011 and is now the CEO of Landing AI, has repeated the line in interviews ever since. But now he is less optimistic. "I'm keeping an open mind and am talking to a few people to learn more," he told me. "The rapid developments have scientists rethinking the risks."
Like many, Ng expressed concern about the rapid development of generative AI and its potential for misuse. Last month, he noted, a widely circulated AI-generated image of an explosion at the Pentagon spooked people and sent the stock market down.
"Unfortunately, AI is so powerful that it also seems likely to cause huge problems," Ng said. But he didn't talk about killer robots: "Right now, I still have a hard time seeing how artificial intelligence could lead to our extinction".
What is different now is widespread awareness of what AI can do. ChatGPT put the technology in front of the public late last year. "AI is suddenly a hot topic in the mainstream," Chollet said. "People are taking artificial intelligence seriously because they see sudden leaps in capability as a harbinger of more leaps to come."
Additionally, the experience of speaking to a chatbot can be unsettling. Conversation is generally understood as something people do with other people. "It adds a sense of legitimacy to the idea of AI being human-like or a sentient interlocutor," Whittaker said. "I think it leads people to believe that if AI can simulate human communication, it can also do XYZ."
"That's why I'm starting to feel that the conversation about survival risk is somewhat appropriate -- making inferences without evidence," she said.
Looking Forward
We have reason to be outraged. As regulators play catch-up with the tech industry, the question of what kinds of activity should or should not be restricted is on the table. Highlighting long-term risks rather than short-term harms (such as discriminatory hiring or misinformation) refocuses regulators on hypothetical future problems.
"I suspect the threat of real regulatory constraints has driven a stance," Burrell said. "Talking about existential risk may validate regulators' concerns without destroying business opportunities." She said: "Superintelligent AI that betrays humanity It sounds scary, but it's also clearly something that hasn't happened yet."
Talking up existential risk is also good for business in other ways. Chollet points out that the top AI companies need us to think that AGI is coming, and that they are the ones building it. "If you want people to think that what you're working on is powerful, it's a good idea to make them afraid of it," he said.
Whittaker takes a similar view. "It's an important thing to cast yourself as the creator of an entity that might be more powerful than humans," she said.
None of this would matter much if it were just marketing or hype. But deciding what is and is not a risk has consequences. In a world of limited budgets and attention spans, harms less extreme than nuclear war may be overlooked because we have decided they are not the priority.
"This is an important question, especially given the growing focus on safety and security as a narrow framework for policy intervention," said Sarah Myers West, managing director of the AI Now Institute.
When UK Prime Minister Rishi Sunak met with the heads of AI companies, including Sam Altman and Demis Hassabis, in May, the UK government released a statement saying: "The Prime Minister and the CEOs discussed the risks of the technology, ranging from disinformation and national security to existential threats."
The week before, Altman told the U.S. Senate that his biggest concern was that the AI industry would do significant harm to the world. Altman's testimony sparked calls for a new type of agency to address this unprecedented harm.
With the Overton window shifting, has the damage been done? "If we talk about the far future, if we talk about mythological risks, then we completely reframe the problem as one that exists in a fantasy world, and its solutions can exist in a fantasy world too," Whittaker said.
But Whittaker also noted that policy discussions around AI have been going on for years, predating this recent wave of fear. "I don't believe in inevitability," she said. "We're going to see the hype pushed back against. It's going to fade."