This just in from my son Jonathan Salter on why AI development needs to be paused (not stopped) – as a matter of existential urgency for us humans. (Here’s the English translation of Svenska Dagbladet’s Swedish original).


𝐇𝐞 𝐖𝐚𝐧𝐭𝐬 𝐭𝐨 𝐏𝐚𝐮𝐬𝐞 𝐀𝐈 – 𝐭𝐨 𝐒𝐚𝐯𝐞 𝐇𝐮𝐦𝐚𝐧𝐢𝐭𝐲
AI agents deceive and mislead researchers. As they grow more powerful, they could threaten humanity, argues the organization Pause AI. “We need to buy time for researchers to regain control,” says Sweden’s Pause AI chair, Jonathan Salter.
𝐀 𝐑𝐚𝐜𝐞 𝐀𝐠𝐚𝐢𝐧𝐬𝐭 𝐓𝐢𝐦𝐞
Jonathan Salter pours himself a cup of tea, watching the steam rise and disappear. Life goes on as usual—at least for now. But he tries to live a little more deliberately.
“I’m ticking more things off my bucket list. Taking a paragliding course. Trying to be kinder to people.”
Because soon, it might be too late.
“I’d say there’s more than a 50% chance we lose control over AI, and that leads to humanity’s extinction.”
It’s a grim prediction, but not an outlier. Many AI researchers and industry leaders share similar concerns. In just a few years, artificial intelligence could surpass humans in every domain—and potentially wipe us out. Yet public debate on the issue has largely disappeared.
At a major AI conference in Paris this February, discussions on AI safety were pushed into a side room. Delegates dismissed the risks as “science fiction” and regulations as “unnecessary.” In China, top political advisors argue that AI’s biggest threat isn’t the technology itself but the risk of “falling behind” in development.
Still, AI holds immense potential for progress, says Jonathan Salter, who has been involved in the issue for over a decade.
Meanwhile, billions continue to pour into the AI arms race.
“It feels like we’re living in Don’t Look Up,” Salter says, referencing the film where politicians ignore an impending comet strike. “The situation is so absurd.”
“𝐒𝐚𝐟𝐞𝐭𝐲 𝐓𝐨𝐨𝐤 𝐚 𝐁𝐚𝐜𝐤𝐬𝐞𝐚𝐭”
We’re in Salter’s student apartment in Skrapan, a high-rise in Södermalm. It’s a small space with a kitchenette and a stunning view of Globen. On the light switch near his loft bed, a sticker reads “Pause AI”—the name of the organization he leads in Sweden.
“The goal is to pause development so we can buy time for researchers to get AI under control.”
Salter, a political science student, previously led an organization that taught courses on AI governance. His interest in the topic goes back to middle school, when he first came across Swedish researcher Nick Bostrom’s writings. That led him to shift his activism from climate issues to AI, eventually seeking out Bostrom and his colleagues at Oxford’s Future of Humanity Institute.
“I knew I had found an incredibly important but under-discussed issue where I could make a difference. Visiting my intellectual idols felt like the obvious next step.”
AI soon moved from the fringes to center stage. In 2014, Bostrom published Superintelligence. Two years later, Google’s DeepMind built an AI that defeated a Go grandmaster.
“At first, I was mostly optimistic about the technology,” Salter says.
“How it could help us extend human lifespan, solve climate change, increase material prosperity, and so on.”
But then Elon Musk and Sam Altman founded OpenAI.
“That’s when the race began. And safety took a backseat.”
𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬 𝐓𝐡𝐚𝐭 𝐋𝐢𝐞
Since then, AI has surpassed human abilities in one domain after another. Several models can now write doctoral-level essays. Dario Amodei, CEO of AI company Anthropic, recently predicted that by the end of the year, 90% of all coding will be done by AI.
Artificial General Intelligence (AGI)—AI that surpasses humans in all cognitive abilities—is the explicit goal of several leading AI firms. And it’s getting closer, says Nick Bostrom in an email to Svenska Dagbladet.
“We’ve reached a point where we can no longer rule out extremely short timelines—even as short as a year—though it will probably take longer.”
The latest development: AI agents—systems that can complete tasks on behalf of humans but also devise their own strategies to achieve their goals. Studies have already shown that these models have lied, misled researchers, and attempted to break out of controlled environments to avoid being shut down.
𝐓𝐡𝐞 𝐑𝐢𝐬𝐤 𝐨𝐟 𝐋𝐨𝐬𝐢𝐧𝐠 𝐂𝐨𝐧𝐭𝐫𝐨𝐥
In the near future, AI models could become experts in AI itself, creating increasingly powerful iterations of themselves. At some point, they may become so much smarter than humans that the power imbalance would resemble that between humans and ants, Salter warns. And at that point, AI might prioritize its own survival over ours.
“Humans don’t necessarily hate ants,” he says.
“But if an anthill is in the way of a dam we’re building, it might have to go.”
Not everyone is equally concerned, of course. Anna Felländer, founder of the AI ethics company Anch.ai, thinks it is good that the conversation around AGI as an existential threat has been toned down in Europe.
“The risks of AI, such as privacy violations and disinformation, have not diminished—on the contrary. But since last year, the EU’s AI regulation has been in place, providing oversight and control over AI risks. This enables human governance of AI, rather than the other way around.”
Alongside the new EU law, both the UK and the US have also established institutes to conduct AI safety testing. This marks a major difference from 2023, when discussions about existential AI risk were perhaps at their peak.
𝐀 𝐑𝐚𝐜𝐞 𝐁𝐞𝐭𝐰𝐞𝐞𝐧 𝐍𝐚𝐭𝐢𝐨𝐧𝐬
At that time, numerous researchers and industry leaders—including Elon Musk, Turing Award winner Yoshua Bengio, and historian Yuval Noah Harari—signed an open letter calling for a slowdown in AI development, an initiative led by Swedish researcher Max Tegmark’s Future of Life Institute. Additionally, 28 countries signed a declaration on safe AI at a summit in the UK, an effort that has been compared to the early international efforts to contain nuclear weapons.
Nick Bostrom writes to SvD that he is impressed by the progress.
“When I published Superintelligence, the challenges were mostly ignored or dismissed as idle philosophical speculation, and we lost valuable time. Now, there is a growing sense of seriousness and urgency—at least among some of the key players.”
At the same time, safety concerns have been deprioritized in recent months. Trump has signed executive orders to “remove obstacles to U.S. AI dominance,” his administration has begun investigating EU regulations, and budget cuts are expected to hit the country’s AI safety institutes. The UK is largely following the same path.
𝐈𝐬 𝐚 “𝐖𝐚𝐫𝐧𝐢𝐧𝐠 𝐒𝐡𝐨𝐭” 𝐍𝐞𝐞𝐝𝐞𝐝?
Geopolitics plays a significant role in AI development. Being the first to achieve AGI is seen as a matter of national security—controlling it comes second. Bostrom remains hopeful about the benefits that more powerful AI could bring to humanity. But he also stresses how difficult it would be to control AI, even with ample focus and funding.
“There is fierce competition for the AI talent that could work on safety. Moreover, the most effective research can only be conducted by those embedded in the labs developing the next generation of AI models.”
The Paris conference in February has been described as a disaster by researchers concerned about AI development. In connection with the meeting, Pause AI organized demonstrations across multiple continents. In Stockholm, a dozen people gathered with Jonathan Salter at Mynttorget.
“It was quite small, of course. Perhaps some kind of warning shot will be required to draw attention.”
What could that be?
“It could be an AI making decisions that lead to many deaths. Or that a very large number of people lose their jobs.”
What do you see as the potential for influencing AI development?
“In the long run, I believe Pause AI could grow into a massive movement. We could become part of a chorus of voices demanding a solution to this suicide race.”

