If those working in the field of AI are to be believed, the world is on the precipice of rapid, disruptive change that could, in the words of one safety researcher, put the world “in peril.”
On February 9, two prominent figures in AI simultaneously seemed to ring alarm bells. First, Mrinank Sharma, safety researcher at Anthropic, posted his entire resignation letter on X.
Sharma, who worked on issues such as AI-assisted bioterrorism and AI sycophancy, wrote that the “world is in peril” for a whole host of reasons, including AI.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most and throughout broader society too.”
Although Sharma’s warning was somewhat cryptic, a long blog post by Matt Shumer, co-founder and CEO of OthersideAI, was far more detailed and specific.
“Here’s the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next,” Shumer wrote.
Shumer lays out the frightening speed at which AI is advancing and how unaware the public is. He says free, public AI tools are a year behind where AI is now, which he says is a lifetime for the technology.
But the big breakthrough, Shumer writes, is that AI can now write the code to create new versions of AI: “The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version.”
“The experience that tech workers have had over the past year, of watching AI go from ‘helpful tool’ to ‘does my job better than I do’, is the experience everyone else is about to have,” Shumer wrote.
Two days after Shumer’s detailed warning, Heineken announced plans to lay off 6,000 people. Although the company cited market conditions and the need to cut costs, outgoing CEO Dolf van den Brink told CNBC that the cuts were also “partly due to AI, or let’s say digitization.”
Shumer homed in on some specific industries that could be affected by AI, although he said the list was far from exhaustive: legal work, financial analysis, writing and content, software engineering, medical analysis, and customer service.
“A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can’t replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I’m not sure I believe it anymore,” Shumer wrote.
OpenAI engineer Hieu Pham also added to the chorus of concern with a post on X on February 10.
“Today, I finally feel the existential threat that AI is posing. When AI becomes overly good and disrupts everything, what will be left for humans to do? And it’s when, not if,” he wrote.