Reports in early December indicate that President Donald J. Trump is preparing to sign an executive order that would block state-level laws regulating artificial intelligence and replace them with a single federal standard. According to early descriptions, the order would bar states from enforcing their own AI rules on safety, transparency, data, or algorithmic accountability.
The administration’s stated goal is to prevent a “chaotic patchwork of state AI laws” that, it argues, would burden interstate commerce and undermine the nation’s competitiveness. The justification is familiar: when the states cannot agree, when new technologies cross borders too easily, only a uniform national standard can provide stability.
At one level, the reasoning is plausible. Yes, California, Colorado, Texas, Florida, Vermont, New York, and a dozen others have enacted or proposed conflicting approaches to regulating AI. Yes, this makes compliance difficult. But that does not justify a federal takeover of the entire domain of artificial intelligence, especially when the order reportedly envisions not merely preemption of state law, but the creation of a national regulatory framework enforced by federal agencies. The question that should precede any such action is the oldest one in the American political tradition: what powers does the government require in order to protect liberty — and what powers will compromise it?
I write as someone now completing a book titled “A Serious Chat with Artificial Intelligence,” an extended reflection on the human mind in dialogue with its own creation. That work has made me especially sensitive to a paradox now unfolding: at the very moment when AI is expanding the reach of our intelligence, we risk shrinking the scope of our freedom. The challenge is real, the choices difficult. But the temptation to answer complexity with centralization — the temptation that now animates the push for a single federal rule for AI — has almost always led to stagnation.
State regulation of AI is not, by itself, an ideal situation. We are indeed seeing 50 variations of concern, from the merely paternalistic to the openly fearful. Some states worry about algorithmic bias, others about deepfakes, still others about data privacy or labor displacement. Several are experimenting with rules requiring disclosure of training data, transparency of model decision-making, or permission requirements for models above certain compute thresholds. This is confusing. It is also federalism working as intended. The states are laboratories of democracy, not subordinate offices waiting for federal consolidation.
The administration’s answer — federal preemption followed by federal regulation — is not a remedy. It is a cure worse than the disease. The premise behind it is philosophically wrong: that a central authority can foresee the risks of an emergent technology better than the distributed knowledge of millions of actors operating within a free market. That premise was wrong when applied to railroads, radio, electricity, telephony, airlines, and nuclear power. It is even more disastrously wrong when applied to artificial intelligence.
Yes, government regulation is justified in cases where AI is demonstrably being used as a weapon of war or modified specifically for crime. Governments successfully regulated nuclear weapons for three-quarters of a century. But when state governments turned to regulating nuclear power to satisfy fears and doomsday fantasies, the nuclear power industry froze. Only now are we realizing that, had its growth continued, the entire (dubious) “catastrophe” of climate change might have been stillborn.
The historical analogy that looms largest is the regulation of the airwaves. For decades, federal licensing of broadcast frequencies created a rigid, centralized, and stagnant media environment dominated by three giant networks. Only when cable television emerged — outside the FCC’s jurisdiction — did that system collapse and innovation resume. The market corrected the government’s mistake. It did so not through national uniformity, but through decentralization. What was true of the airwaves is true of AI: innovation is born at the periphery, not the center.
‘Regime Uncertainty’
The deeper economic argument against a single, national AI rule is the one articulated by the historian and economist Robert Higgs, who coined the term “regime uncertainty.”
Higgs examined why private investment collapsed in the late 1930s even as the Great Depression was easing. The answer: business owners no longer trusted that the rules governing their property, contracts, and earnings would remain stable. With each new intervention, tax, or regulatory threat of the New Deal, the future became unpredictable. And when the future is unpredictable, capital retreats.
That insight perfectly describes the present moment in AI. Innovators are already facing a barrage of unpredictable interventions worldwide: the European Union’s AI Act, which classifies models by risk and bans categories of algorithmic use; China’s “Generative AI Measures” requiring adherence to state ideology; Congress floating licensing schemes for large models; the proliferation of state laws in the United States with incompatible compliance burdens; and now, a looming federal plan to centralize authority over the entire sector. To paraphrase Higgs, when government insists on being the co-author of every technological step, innovation freezes in anticipation of the next decree.
The Higgs argument is only half the story, however. Higgs tells us why government intervention discourages innovation. Hayek tells us what regulates innovation when the government does not.
Spontaneous Order
For Friedrich Hayek, the central lesson of economics was that no single decision-making body — no panel of officials, no federal agency — could ever match the distributed intelligence of a free society. The knowledge required to understand a complex market is not held by any single mind. It is dispersed in millions of judgments, preferences, price signals, reputational cues, and feedback loops operating simultaneously. Out of this decentralized coordination emerges what Hayek called spontaneous order: an evolving, self-adjusting system far more responsive than regulation.
Applied to artificial intelligence, Hayek’s argument is decisive. AI is not a static technology. It evolves weekly, often daily. It is shaped by user feedback at scale: billions of queries, millions of corrections, and innumerable signals of trust or distrust. When an AI product errs, offends, misleads, or harms, the market reacts immediately. Companies patch vulnerabilities, revise guardrails, withdraw faulty features, or lose customers. The constraint is real. It is continuous. And it is informed by vastly more data than any federal oversight board could ever obtain.
If a teenager becomes obsessed with an AI avatar and suffers emotional fallout, public outcry registers instantly across social media and news cycles. Every user of the platform casts an implicit vote — continue using it, abandon it, or demand change. The company involved must respond or perish. That is regulation — regulation by consent, not compulsion. And it is the only kind of regulation agile enough to match the speed of AI.
In other words, Hayek answers the most important question that arises from Higgs: if government does not restrain innovation, will innovation run wild? No. Because the free market restrains it — not by freezing it, but by continuous mid-course corrections.
However, the push for a single federal rule for AI often implies that only uniformity can protect the public. History suggests the opposite. Uniformity is a great danger when officials do not know what they do not know. A rigid national standard, especially one drafted at the dawn of AI’s development, will inevitably reflect the fears, preferences, and misconceptions of a small political elite. It will enshrine those views long after they have been discredited by experience. Regulators will be tempted to err on the side of caution — of prohibition, delay, or excessive reporting — because no one at a federal agency is ever criticized for being too cautious. They are only blamed when something goes wrong.
But when regulators overreact, the harms are invisible: the startup never launched, the medical breakthrough delayed, the scientific tool never invented, the small firm unable to afford compliance, the innovative business pushed offshore. These losses are real, even if they are unseen.
One of the most telling developments in recent months is that some leaders of the AI industry itself are calling for regulation. At a Senate hearing in 2023, OpenAI CEO Sam Altman openly encouraged the creation of a federal licensing regime for advanced models. Senators were delighted. But smaller competitors were alarmed. The CEO of Stability AI warned that such regulation would entrench incumbents and “crush innovation.” The founder of Hugging Face noted that requiring federal permission to train a model would be like requiring a license to write code. These concerns are not abstract. Regulation tends to protect the powerful and eliminate the weak.
Hayek would have recognized this immediately: when industries embrace regulation, it is often because regulation will serve them. It will not serve the future competitors who would challenge them. It will not serve the young minds with new ideas. It will not serve the yet-unknown innovators who cannot hire a team of lawyers and lobbyists.
Critics of AI regulation are often asked: “But without government, how do we address the real risks?” The question assumes that centralized regulation is the default condition of order, and freedom the dangerous alternative. That assumption is historically and philosophically backward.
The real alternative is not government versus chaos. It is government as a centralized overseer versus the spontaneous order of free participants responding to incentives, information, and feedback. The rigid mind versus the adaptive mind. Foreclosing the future versus learning from it.
Artificial intelligence is a domain uniquely suited to the latter. Its risks are evolving. Its errors become visible instantly. Its user base provides rich data on whether features are helpful, harmful, or somewhere in between. The companies building it are extraordinarily attentive to public trust; reputational failure is death in this field. Where harms arise — fraud, impersonation, privacy breaches — existing law already provides remedies. Where harms are novel, civil courts can adjudicate responsibility, creating precedents rooted in real cases rather than speculative fears.
Risk Is Inevitable; the Risks of Regulation Are Not
To demand comprehensive regulation now is to demand answers in advance to questions not yet understood. It is to regulate the unknown. And to regulate the unknown is almost always to prohibit it.
No one denies that AI presents risks. So did electricity, the printing press, the automobile, the telephone, aviation, antibiotics, nuclear power, and the internet. Every one of these technologies was met by prophets of doom who were certain that catastrophe was imminent. The Swiss physician and naturalist Conrad Gessner (1516-1565) feared an “overwhelming abundance of books” would corrupt the human mind. Early critics of the telephone warned that it would annihilate privacy and eliminate face-to-face interaction. President Benjamin Harrison refused to touch the electric switches installed in the White House. Radio was denounced for corrupting children. In every era, fear accompanied invention.
Sometimes the fears were reasonable. But preemptive restraints would have prevented the very learning process that revealed how to use the new technologies safely. To some, a muggy household without air conditioning must have seemed safer than a building wired with electricity; yet electricity ultimately became safer and more indispensable than anyone predicted. What mattered was not fear, but adaptation.
The difference between the fears of the past and the fears of today is that today we are tempted to freeze innovation before it teaches us anything. That temptation is strongest in the political class. Bureaucracies do not benefit from rapid change; they benefit from stability, scope, and control. Innovation threatens all three.
And now, under the proposed federal AI order, we are at risk of taking the first step toward nationalizing the evolution of intelligence itself. It would be an irony worthy of Swift if the American government — long the champion of free enterprise, experimentation, and engineering boldness — became the agent of technological paralysis.
The inevitability of a “single federal standard” is a myth. The United States does not need a national rule for AI. It needs national protection of the freedom to innovate, and national restraint upon those who would regulate prematurely. The problem of state-by-state inconsistency is real. But the proper federal role is to enforce the constitutional principle that states may not obstruct interstate commerce, not to impose a federal regulatory regime in the name of uniformity.
Federal Deregulation Can Defeat State Chaos
If the administration wishes to prevent states from impeding AI innovation, it has a simple tool: federal deregulation, not national regulation. Congress can preempt states from restricting AI deployment or training while simultaneously refusing to give federal agencies new regulatory powers over the technology. That would be the correct application of federal power: not central planning of innovation, but protection of the freedom to innovate.
In my own conversations with AI while preparing “A Serious Chat with Artificial Intelligence,” I found myself returning to a single question: who will protect us from our protectors? It is easy to imagine dangers from unrestrained technology. It is harder to imagine dangers from unrestrained regulation. Yet the latter has done far more damage throughout history. Every great technological leap has been delayed, distorted, or nearly destroyed by those who feared its consequences more than they valued its promise.
The greatest danger is not that AI will escape our control. It is that we will surrender our control of our own minds — our capacity to imagine, invent, experiment, and err — in exchange for the illusion of security. If the federal government adopts a single national AI rule, that illusion will become law.
The alternative is more demanding, but far more hopeful: trust the spontaneous order of a free society. Trust the judgment of millions of users. Trust competition to discipline excesses. Trust courts to address real harms. Trust innovation to solve the problems innovation creates. Trust, above all, the human mind — the free, adaptive, self-correcting engine of progress.
Higgs teaches us why not to freeze innovation. Hayek teaches us what will regulate innovation when we refuse to freeze it. Together, they answer the demand for regulation not with anarchy but with confidence. Confidence in liberty. Confidence in knowledge dispersed. Confidence in evolution through experience.
Artificial intelligence will not destroy us. But fear-driven regulation might. If we lose the courage that built our civilization, we will lose the future AI promises to create.