
For the Love of God, Stop Making Ominous Doomsday Clocks

A Saudi-backed business school in Switzerland has launched a doomsday clock to warn the world about the dangers of “unregulated artificial general intelligence,” the kind of AI its boosters call “god-like.” Imagine if the people selling Excel spreadsheets to offices in the 1980s had told workers the program was a step toward birthing a god, and had used a Rolex clock to drive the point home, and you’ll have some idea of what we’re dealing with here.

Michael Wade, the clock’s creator, a professor of strategy and digital at IMD Business School in Lausanne, Switzerland, and director of its TONOMUS Global Center for Digital and AI Transformation (oh my god), unveiled the clock in a recent commentary for Time magazine.

A clock striking midnight is a potent metaphor borrowed from the atomic age, and by now a dated one. It is an old image, too, having just celebrated its seventy-fifth anniversary. After America dropped atomic bombs on Japan, some of the researchers and scientists who had worked on the weapons formed the Bulletin of the Atomic Scientists.

Their project was to warn the world of its impending destruction, and the Doomsday Clock is one of the ways they do it. Every year, experts from various fields – from nuclear weapons to climate change to artificial intelligence – gather to discuss how badly things are going in the world. Then they set the clock: the closer to midnight, the closer humanity is to its doom. It currently sits at 90 seconds to midnight, the closest it has ever been.

Wade and IMD have no connection to the Bulletin of the Atomic Scientists, and the Doomsday Clock is its own thing. Wade’s innovation is the AI Safety Clock. “The clock’s current reading – 29 minutes to midnight – is a measure of just how close we are to the critical tipping point where unregulated artificial general intelligence could bring about existential risks,” he said in his Time article. “While no catastrophic harm has happened yet, the breakneck speed of AI development and the complexities of regulation mean that all stakeholders must remain vigilant and engaged.”

Silicon Valley’s top AI boosters lean on the nuclear analogy. OpenAI CEO Sam Altman has compared his company’s work to the Manhattan Project. Senator Edward J. Markey (Democrat of Massachusetts) has written that America’s rush to embrace AI is akin to Oppenheimer’s pursuit of the atomic bomb. Some of this fear and anxiety may be genuine, but ultimately it is all marketing.

We are in the middle of an AI hype cycle. Companies promise unprecedented returns and savings on labor costs; they claim the machines will soon do everything for us. The truth is that AI is useful, but it mostly just shifts labor and production costs to other parts of the chain, where end users don’t see them.

The fear that AI will become advanced enough to annihilate humanity is just another kind of noise. Spinning doomsday scenarios around word processors and predictive-modeling systems is just another way to talk up the technology’s potential while masking the real harm it causes.

At a recent Tesla event, robotic waiters served drinks to the crowd – remotely piloted by humans. Large language models burn through tons of water and electricity to produce their answers, and they often depend on close, continuous attention from human “trainers” working in poor countries for meager pay. People use the technology to flood the internet with nude images of others created without their consent. These are just a few of the real-world harms already caused by Silicon Valley’s rush to adopt AI.

If you’re busy being afraid of Skynet coming to life and wiping out humanity at some point in the future, you don’t have to care about the problems in front of you right now. The Doomsday Clock may be ominous on its surface, but behind the metaphor is an army of serious minds producing work every day about the real risks of nuclear weapons and new technologies.

In September, the Bulletin featured Altman in an article debunking inflated claims about AI being used to engineer new biological weapons. “Despite all this hype, there is actually a lot of skepticism about AI’s effects on biological weapons and the broader biosecurity field,” the article stated.

It also stressed that entertaining extreme scenarios about AI lets people avoid harder conversations. “The challenge, as it has been for over two decades, is to avoid both complacency and hyperbole about the scientific and technological developments that affect biological disarmament and the effort to keep biological weapons out of war plans and the arsenals of violent actors,” the Bulletin said. “Discussions about AI capture high-level interest and public attention, and…risk a narrow focus on threats that neglects other risks and opportunities.”

Dozens of articles like this are published every year by the people who run the Doomsday Clock. The Swiss AI Safety Clock has no such scientific backing, though its FAQ claims it monitors that kind of work.

Instead, it has Saudi money. Wade’s position at the school exists thanks to funding from TONOMUS, a subsidiary of NEOM. NEOM is the much-hyped futuristic city that Saudi Arabia is attempting to build in the desert. Among other promises, NEOM’s pitches have included giant robotic dinosaurs, flying cars, and a massive artificial moon.

Forgive me if I don’t take Wade or the AI safety clock at face value.
