True Confessions of an AI Flip Flopper
My views on AI have evolved. Here's why and why they'll likely shift again at some point.
A few years ago I would have agreed with the argument that the most important AI regulatory issue is mitigating the low probability of catastrophic risks. Today, I think nearly the opposite. My primary concern is that we will fail to realize the already feasible and significant benefits of AI—benefits that stand to improve the lives of marginalized communities, individuals lacking access to health care and quality legal representation, and students suffering because their school cannot afford the necessary specialists. Most concerningly, I fear this outcome will be the result of poor policy analysis.
What changed and why do I think my own evolution matters?
Discussion of my personal path from a more “safety” oriented perspective to one that some would label as an “accelerationist” view isn’t important because I, Kevin Frazier, have altered my views. Nor is it important as a sort of examination of so-called accelerationist perspectives (or any associated variant of that view). For what it’s worth, I reject that label and resist the urge to push folks into specific camps given that there’s so much we don’t know about AI’s benefits and risks. My aim is not to develop AI for the sake of crossing someone’s definition of AGI or rebuilding society by restructuring existing institutions, but rather to make sure that its best uses—as established through rigorous research and traditional policy review—reach as many people as possible in as little time as possible. I’m not sure what you call that.
Walking through my pivot is instead valuable because it may help those unsure of how to think about these critical issues navigate a complex and, increasingly, heated debate in which the labels affixed to your view carry political and even moral connotations. By sharing my own change in thinking, I hope others will feel welcomed to do three things:
first, reject unproductive, static labels that are misaligned with a dynamic technology in a turbulent world;
second, adjust their own views in light of the wide variety of shifting variables at play when it comes to AI regulation; and,
third, get back to first principles of policy analysis. More generally, I believe that calling myself out for a so-called “flip-flop” may give others more leeway to do so without feeling as though they’ve committed some logical or ethical wrong.
This discussion also matters because everyone should have a viewpoint on AI policy. This is no longer an issue that we can leave to San Francisco house parties and whispered conversations in the quiet car of an Acela train. I know that folks are tired of all the ink spilled about AI, all the podcasts that frame new model releases as the end of the world or the beginning of a utopian future, and all the speculation about whether AI will take your job today or tomorrow. It’s exhausting and, in many cases, not productive. Yet, absent more general participation in these debates, the folks on the coasts will shape how AI is developed and adopted across the country.
You may be tired of it but you cannot opt out of knowing about AI and having a reasoned stance on its regulation. This is especially true for you for a few reasons: AI will become an ever more prevalent part of your professional and personal life, so you have a vested stake in its regulation; today’s decisions about AI will have decades-long ramifications, so if you have kids, grandkids, or otherwise care about the next generations of Americans, you owe it to them to not let a handful of tech bros and policy nerds dictate our tech policy; and, most importantly, we’ve reached a critical juncture in AI policy.
Congress recently considered a ten-year moratorium on a wide range of state AI regulation. Though the proposal was ultimately scrapped from the One Big, Beautiful Bill for political reasons, it’s likely to reappear as part of forthcoming legislation from Senator Ted Cruz (R-TX). So the stakes are set for an ongoing conversation about the nation’s medium-term approach to AI. I have come out in strong defense of a federal-first approach to AI governance, one that would prevent states from adopting the sort of AI safety measures I may have endorsed a few years back.
So what gives? Why have I flipped on the factors I weigh most when thinking through America’s AI posture?
First, I’ve learned more about the positive use cases of AI. For unsurprising reasons, media outlets that profit from sensationalistic headlines tend to focus on reports of AI bias, discrimination, and hallucinations. These stories draw clicks and align well with social media-induced techlash that’s still a driving force in technology governance conversations. Through attending Meta’s Open Source AI Summit, however, I realized that AI is already being deployed in highly sensitive and highly consequential contexts and delivering meaningful results. I learned about neurosurgeons leveraging AI tools to restore a paralyzed woman’s voice, material science researchers being able to make certain predictions 10,000 times faster thanks to AI, and conservation groups leaning on AI to improve deforestation tracking. If scaled, these sorts of use cases could positively transform society. These are the use cases that I focus on when I analyze AI policy proposals and AI policy positions.
Despite the importance of doing some form of cost-benefit analysis (an imperfect but useful framework) prior to adopting any regulation, it seems these benefits receive too little weight. Policymakers, the public, and all the stakeholders in between are awash in information about AI’s flaws. There are at least two AI incident trackers run by reputable institutions (MIT and the OECD) that provide ready access to stories and studies of AI’s shortcomings. Many others have written thoughtful analyses of how best to track and define such incidents. Where’s that research on AI’s benefits? Besides the AI labs (which, for obvious reasons, may put a rosy gloss on the impact of their models), which institutions have set about conducting an independent, rigorous tallying of dollars saved, lives extended, and students tutored thanks to AI?
The broader conversation is also flooded by imprecise, vague polls that give the impression that Americans generally oppose AI development. A recent Pew poll, for example, found that a mere 17 percent of Americans think AI will have a net positive effect on society 20 years from now. Absent from that poll and related polls is the sort of analysis that could inform more nuanced surveys. How much do respondents know about the technical underpinnings of AI? What’s the precise nature of their concerns around AI? How specifically do they use AI at home and at work? Are they just creating memes or are they savvy users who try the latest models? Do they even have access to the latest models? Have they received any real training on AI? Do they know how their doctors, educators, and pilots are already using AI to keep them healthy, informed, and safe?
Until we take quantifying both benefits and costs seriously, AI regulation will risk addressing AI’s very real harms so broadly that it forecloses or limits our ability to detect and share its positive use cases.
Second, I’ve thoroughly engaged with leading research on the importance of technological diffusion to national security and economic prosperity. In short, as outlined by Jeffrey Ding and others, the country that dominates a certain technological era is not the one that innovates first, but rather the one that spreads the technology across society first. The latter country is better able to economically, politically, and culturally adjust to the chaos introduced by massive jumps in technology. Those who insist on a negative framing of AI threaten to undermine AI adoption by the American public. If all the average American hears about is how AI will exacerbate every social ill, then they will have little reason to adopt it into their own lives, let alone encourage others to do so.
Third, I’ve spent some time questioning the historical role of lawyers in stifling progress. As noted by Ezra Klein, Derek Thompson, and others across the ideological spectrum who have embraced some version of the Abundance Agenda, lawyers erected many of the bureaucratic barriers that have prevented us from building housing, completing public transit projects, and otherwise responding to public concerns in the 21st century. Many of the safety-focused policy proposals being evaluated at the state and federal levels threaten to do the same with respect to AI—these lawyer-subsidization bills set vague “reasonableness” standards, mandate annual audits, and, more generally, increase the need for lawyers to litigate and adjudicate whether a certain model adheres to each state’s interpretation of “responsible” AI development. I do not want to contribute to yet another round of regulatory calcification that transforms a dynamic technological frontier into a static, litigation-heavy industry where compliance costs favor incumbents and innovation gets buried under layers of legal interpretation. The irony is that this approach may ultimately undermine the very safety goals it purports to serve by concentrating AI development in the hands of the few companies wealthy enough to navigate the regulatory maze, while pushing the most promising research underground or overseas where democratic oversight becomes impossible.
What’s more, a policy framework that emphasizes the identification and quantification of AI’s benefits and subsequently aims to facilitate the dissemination of models that achieve those ends carries an important positive externality: a more robust market for AI labs. The more attention we pay to labs that achieve good outcomes, and the more support they receive (via subsidies, subscriptions, or otherwise), the more dynamic and diverse the AI marketplace will be. This environment carries better odds of inducing new startups to join a scene that has so far been dominated by a few giants.
Fourth, I’ve bought into the (well-supported) theory that technological innovation is a combinatorial, complex, emergent process. In other words, innovation builds on prior advances based on a variety of factors and in often unexpected ways. The takeaway is that premature, albeit well-intentioned, regulation in an emerging space can significantly disrupt technological progress. Just as safety-oriented AI communities argue that we ought to safeguard the well-being of future generations by protecting them from risky models, I contend that we owe it to future generations to present them with the most advanced and useful technology possible.
Adherents to that safety perspective will rightly point out that I'm potentially downplaying legitimate concerns about AI risks. They might remind me that, even though they too acknowledge catastrophic scenarios have low probabilities, such scenarios nevertheless warrant substantial regulatory intervention because of the magnitude of the potential harm. This is the classic precautionary principle argument: when the potential downside is civilization-ending, shouldn't we err on the side of extreme caution?
I continue to acknowledge this concern but believe it fundamentally misunderstands both the nature of risk and the trade-offs we face. First, the “low probability, high impact” framing often obscures the fact that many proposed AI safety regulations would impose certain, immediate costs on society while addressing speculative future harms. We're not comparing a small chance of catastrophe against no cost—we're comparing it against the guaranteed opportunity costs of delayed medical breakthroughs, slowed scientific research, and reduced economic productivity. When a child dies from a disease that could have been cured with AI-accelerated drug discovery, that’s not a hypothetical cost. It's a real consequence of regulatory delay.
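To make that ledger concrete, here is a deliberately toy sketch of the comparison I have in mind. Every number in it is a hypothetical placeholder, not an estimate; the point is only that a serious analysis has to fill in both sides before concluding that precaution is free.

```python
# Toy expected-value comparison of a hypothetical AI regulation.
# All figures below are illustrative placeholders, not estimates.

p_catastrophe = 0.001      # assumed annual probability of a catastrophic AI harm
catastrophe_cost = 1e12    # assumed cost of that harm, in dollars
risk_reduction = 0.10      # assumed share of that risk the rule actually removes

forgone_benefits = 5e9     # assumed annual value of delayed cures, research, productivity

expected_harm_avoided = p_catastrophe * catastrophe_cost * risk_reduction
print(f"Expected harm avoided per year:   ${expected_harm_avoided:,.0f}")
print(f"Certain benefits forgone per year: ${forgone_benefits:,.0f}")

# The point is not which number wins under these made-up inputs; it is that
# a serious analysis has to put *both* sides of the ledger on the table.
```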
Moreover, the safety-first approach assumes that slowing AI development actually reduces risk, but this may be backwards. Rushing to regulate emerging technology often means locking in current limitations and creating regulatory capture by established players. The companies best positioned to navigate complex compliance regimes are precisely the large, well-resourced firms that safety advocates claim to be most concerned about. Meanwhile, the open-source researchers and smaller innovators who might develop more democratic, transparent, and beneficial AI systems get crowded out by regulatory barriers they cannot afford to navigate.
Perhaps most importantly, my evolution reflects a growing awareness of the international dimensions of AI development. While American policymakers debate the finer points of algorithmic auditing requirements, Chinese researchers are rapidly advancing AI capabilities with fewer regulatory constraints. This isn't merely an economic competition—it's a competition over which values and governance models will shape the global AI ecosystem.
The countries that achieve AI leadership won't just reap economic benefits; they'll export their technological standards, privacy norms, and governance approaches worldwide. If democratic societies hamstring themselves with excessive precaution while authoritarian regimes race ahead, we risk ceding control over humanity's technological future to actors with fundamentally different values about human rights, privacy, and democratic governance.
This doesn't mean abandoning all safety considerations, but it does mean we need to be strategic about which regulations truly enhance safety versus which merely provide the illusion of control while undermining our competitive position. The goal should be fostering AI development that is both rapid and responsible, not choosing between the two as if they were mutually exclusive.
My evolution from AI safety advocate to my current stance reflects not an abandonment of caution, but a more holistic understanding of where the real risks lie. The greatest threat isn't that AI will develop too quickly, but that beneficial AI will develop too slowly—or in the wrong places, under the wrong governance structures.
The path forward requires rejecting both naive techno-optimism and paralyzing risk aversion. Instead, we need policies that maximize AI's benefits while addressing its genuine risks through targeted, evidence-based interventions rather than broad regulatory frameworks that inevitably become tools for incumbent protection and innovation suppression.
Critically, we need more Americans engaged in these debates. The future of AI governance cannot be left to a small circle of coastal elites, whether they lean toward safety or acceleration. The stakes are too high, the implications too far-reaching, and the decisions too consequential for the rest of us to remain on the sidelines.
The question isn't whether AI will transform our society—it's whether that transformation will be shaped by democratic deliberation and broad participation, or by the preferences of whoever manages to navigate the regulatory maze we're currently constructing. I've chosen my position in that debate, and I hope others will make their own informed choice rather than defaulting to whatever position comes with the most comfortable political label.
* * *
As an aside, I’m going to start an AI benefits tracker. You can bet it will have a better name. You can also bet that it’s going to be a challenge—in many instances, benefits derived from AI may come at severe costs. That said, it’s a necessary part of formulating a more rigorous policy analysis of how best to address AI.
If you’d like to assist, please complete this survey and feel free to reach out: kevin.frazier@law.utexas.edu.
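For those curious what an entry in such a tracker might look like, here is one minimal sketch of a possible record format. The field names and example values are my own hypothetical placeholders, not a settled schema; the key design choice is that every claimed benefit carries a source, a verification flag, and a slot for offsetting costs.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BenefitEntry:
    """One documented, sourced instance of an AI benefit (hypothetical schema)."""
    domain: str                       # e.g., "health care", "materials science"
    description: str                  # what the AI system actually did
    metric: str                       # how the benefit is measured (dollars saved, hours, lives)
    magnitude: float                  # measured size of the benefit
    source_url: str                   # report or study backing the claim
    verified_independently: bool      # reviewed by someone other than the AI lab?
    offsetting_costs: Optional[str] = None   # known downsides tied to this same deployment
    notes: list[str] = field(default_factory=list)

# Illustrative example entry, not a real data point:
entry = BenefitEntry(
    domain="materials science",
    description="Model screens candidate materials far faster than lab trials",
    metric="simulation speedup (x)",
    magnitude=10_000,
    source_url="https://example.org/placeholder",
    verified_independently=False,
    offsetting_costs="compute and energy use of large-scale screening",
)
print(entry.domain, entry.magnitude)
```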