Will AI Safety Survive the Bubble?
There’s been a lot of discussion lately about whether we’re in an AI bubble. Their word is hardly gospel, but Altman, Zuckerberg, and a handful of other major figures have all made public statements suggesting we might be. I’m not an economist (I don’t even play one on TV), so I don’t want to get lost in arguing whether the bubble does or doesn’t exist. I’m more concerned about what might happen to the AI safety industry as a whole if the market contracts.
“AI safety” isn’t one coherent field. It’s a patchwork of different priorities, depending on who you ask. For some people, safety means data privacy and transparency when using a chatbot. For others, it means preventing the extinction of humanity. Between those poles sit interpretability research, model auditing, secure deployment, fairness testing, content moderation, red-teaming, and a dozen other subfields. It’s a wide ecosystem that stretches from PhD researchers to prompt engineers to policy wonks, and somehow they’re all talking about “AI safety.” Of course, the category is this broad because the effects of AI proliferation will be just as far-reaching.
Right now, VCs are giving out wheelbarrows of cash to AI companies. You can’t throw a GPU without hitting a startup raising a $50 million seed. OpenAI raised $40 billion in March, Anthropic raised $13 billion in September, and Cursor raised $2.3 billion in November. What’s really driving these funding rounds, though, is revenue scaling, and the numbers are impressive. Cursor, a company founded in 2023, announced that it had crossed $1 billion in annual recurring revenue this year. Hitting $1 billion in ARR in two years is absurd growth. It’s no wonder everyone wants a piece of AI: the labs, the infrastructure companies, the chip makers, venture capitalists, even the SaaS layers. The problem is that hype cycles don’t last forever. At some point capital gets expensive again and the industry contracts. That will be exceptionally painful for the AI industry in particular, given the massive capital expenditures required to hire talent and expand compute to the level needed to operate. Worse, frontier labs are still losing money on every transaction.
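To put a rough number on “absurd,” here’s a back-of-the-envelope sketch of the compound monthly growth rate implied by reaching $1 billion in ARR within 24 months. The $1 million starting ARR is an assumption for illustration, not a figure Cursor has reported.

```python
# Rough sketch: what monthly growth rate gets a startup from an assumed
# starting ARR to $1B in 24 months? The starting figure is hypothetical.
start_arr = 1_000_000        # assumed starting ARR ($1M), not a reported number
target_arr = 1_000_000_000   # $1B ARR
months = 24

# Solve start * (1 + g)^months = target for the monthly growth rate g.
monthly_growth = (target_arr / start_arr) ** (1 / months) - 1
print(f"Implied compound monthly growth: {monthly_growth:.1%}")
# -> roughly 33% month over month, sustained for two straight years
```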
Within that flood of cash, AI safety is barely a rounding error. It’s an increasingly visible space, but not a well-funded one. There’s potentially some big money coming, like the OpenAI Foundation, new donors in the effective altruism orbit, and a handful of high-profile grants, but most of the funding is concentrated on the nonprofit side. For-profit safety startups exist, and there is an AI safety venture ecosystem, but it’s a tiny portion of total AI investment. If a contraction hits, that sliver of the market will get hit hard. Ironically, a downturn might make safety an easier sell politically, since governments and enterprises will be more motivated to regulate and manage risk, but the funding environment won’t support it. Everyone will be cutting costs at exactly the moment safety work becomes most urgent.
If capital dries up, the frontier labs will feel it first. OpenAI and Anthropic, both of which run on staggering compute costs and massive burn rates, would be hit hard. Safety research, which doesn’t drive short-term revenue, will be one of the first things on the chopping block, and internal teams focused on interpretability or long-term alignment will quietly shrink. Microsoft, Google, and Meta might weather the storm better; they have the balance sheets to survive a slowdown. The question is whether they’ll keep spending. They might, if they can scoop up world-class safety talent for cheap. But more likely, they’ll refocus on efficiency and near-term returns. xAI probably sits somewhere in between: exposed to the same market pressures as OpenAI and Anthropic, but potentially insulated if Musk decides to keep pumping in money regardless of market conditions. Either way, when the market shrinks we’ll see consolidation. Only the labs that can maintain financing or reach profitability will stick around, and right now safety shows up on a balance sheet as a cost, not a benefit.
The nonprofit side is trickier to predict. Charitable giving doesn’t follow clean market logic, but it’s still tied to wealth and sentiment. If major donors’ portfolios take a hit, grants will shrink or slow down. Large, well-endowed foundations might be relatively insulated, but smaller donors, the ones who keep early research orgs and fellowships alive, will dry up fast. If endowments fall, foundations will scale back their grant-making, and AI safety nonprofits will have to fight harder for a smaller pool of money. The ones that survive will be those with diversified funding or government partnerships. Everyone else will struggle to keep the lights on.
Public perception will shift too. If the bubble pops, resentment toward AI will spike. People who lose jobs, investors who lose money, and policymakers looking for someone to blame will all point fingers. That could actually be good for regulation, because it might create the political will to impose safety requirements and transparency standards, and it could provide the slowdown in development that safety needs to catch up. On the other hand, it could also breed complacency around generative AI: if the bubble has already burst, what more is there to protect against? The argument for safety could lose visibility at exactly the worst time.
Enterprises will also be under pressure to cut costs, which might ironically drive adoption of autonomous agents and automation tools. That, in turn, could create a real market for safety tooling: systems that monitor, constrain, and verify AI behavior at scale. There’s a plausible path where contraction forces enterprises to take safety seriously, not out of ethics, but because it’s cheaper than being sued.
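To make “monitor, constrain, and verify” concrete, here’s a minimal sketch of what the cheapest tier of that tooling could look like: a policy gate that checks a proposed agent action against an allow-list and a spend limit before it runs. The `AgentAction` shape, the rules, and the limits are all hypothetical, an illustration of the pattern rather than any particular vendor’s product.

```python
# Minimal sketch of an enterprise "safety gate" around an autonomous agent.
# Everything here (action schema, allowed tools, spend limit) is hypothetical.
from dataclasses import dataclass

ALLOWED_TOOLS = {"search_docs", "draft_email", "create_ticket"}
MAX_SPEND_USD = 100.0

@dataclass
class AgentAction:
    tool: str          # which tool the agent wants to call
    spend_usd: float   # estimated cost of the action
    summary: str       # human-readable description for the audit log

def gate(action: AgentAction) -> bool:
    """Return True if the action may run; log and block it otherwise."""
    if action.tool not in ALLOWED_TOOLS:
        print(f"BLOCKED (unlisted tool): {action.summary}")
        return False
    if action.spend_usd > MAX_SPEND_USD:
        print(f"BLOCKED (over spend limit): {action.summary}")
        return False
    print(f"ALLOWED: {action.summary}")
    return True

# Example: the agent proposes two actions; one passes, one is blocked.
gate(AgentAction("create_ticket", 0.0, "File a ticket for the billing bug"))
gate(AgentAction("wire_transfer", 5000.0, "Pay invoice #1234 immediately"))
```

The point isn’t the specific rules; it’s that this kind of boring, auditable middleware is the sort of safety product a cost-cutting enterprise might actually pay for.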
Right now, the AI market is in an all-out sprint toward AGI, which is obviously not good for safety. A contraction might slow things down enough for safety to catch up to capability, or at least to be prioritized alongside it. But it could just as easily go the other way, consolidating the market around a few dominant labs and cutting off funding for everyone else. In that world, the surviving labs would have even less incentive to spend capital on safety, and the independent organizations working on oversight could disappear entirely.
People in the safety space should be wary of that dynamic. We need to spend heavily now to keep pace with capability work, to build real infrastructure, evaluation tools, and governance mechanisms, but overextension could backfire if capital becomes hard to raise. The paradox of AI safety is that it has to scale like a growth industry but endure like a moral cause. There’s no one-size-fits-all plan for a market contraction either; the right answer varies from organization to organization. For startups that believe they have product-market fit now, it can make sense to keep spending and push for revenue growth. For companies chasing more speculative markets, or that are earlier stage and still building product, tightening the belt and extending the runway is the more reasonable move. If there’s one takeaway, whether you’re a nonprofit or a for-profit, it’s this: get money in the bank now and be deliberate about the bets you’re making.
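As a back-of-the-envelope way to think about “money in the bank,” here’s a small sketch comparing months of runway under a keep-spending plan versus a belt-tightening plan. All of the dollar figures are made up for illustration.

```python
# Back-of-the-envelope runway math. Every figure here is hypothetical.
def runway_months(cash: float, monthly_burn: float) -> float:
    """Months of runway at a constant burn rate."""
    return cash / monthly_burn

cash_on_hand = 6_000_000   # assumed cash in the bank
growth_burn = 500_000      # keep spending to chase revenue
lean_burn = 250_000        # tighten the belt, extend the runway

print(f"Growth plan: {runway_months(cash_on_hand, growth_burn):.0f} months")
print(f"Lean plan:   {runway_months(cash_on_hand, lean_burn):.0f} months")
# Growth plan: 12 months; lean plan: 24 months. The right choice depends on
# whether the revenue you buy in those 12 months outruns the contraction.
```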
If the AI bubble bursts, AI safety won’t vanish, but it will change shape. The field has always lived in the margins of larger trends, growing or shrinking alongside investor sentiment rather than independent of it. A contraction would test whether AI safety is a moral commitment or just a byproduct of the boom. The next few years will determine whether safety becomes an integral part of how we build and govern technology, or whether it gets sidelined until the next crisis forces it back into view. Of course, if AI is an existential threat, we might not have the time or capacity to quickly resolve the next crisis. Either way, the lesson is the same: hype is temporary, but consequences are not. The people working in this space should plan for both abundance and austerity, because if the goal is to keep AI from going off the rails, someone has to stay standing after the music stops.

" A contraction would test whether AI safety is a moral commitment or just a byproduct of the boom"
There's a fascinating fractal aspect to this. Ablation is perhaps the most canonical construct in the AI safety algorithmic landscape. It is but only fair that the field itself passes an ablation test to assess it's utility when the hype is removed!