An AI Spark Worth Spreading
An AI bill pending before the Washington State Legislature is worth studying and spreading.
Before we dive into a piece of legislation in Washington that deserves your attention, let’s flag two big pieces of AI news:
Nvidia's AI Chip Manufacturing Shift to the U.S.
Nvidia has commissioned over one million square feet of manufacturing space in Arizona and Texas to build and test AI chips as part of a strategic move to shift production to the United States, with Blackwell chips already in production at TSMC's Arizona plants and additional “supercomputer” manufacturing facilities being built with Foxconn in Houston and Wistron in Dallas. This announcement follows Nvidia's narrow avoidance of export controls on its H20 chip after CEO Jensen Huang struck a domestic manufacturing deal with the Trump administration, positioning the company to potentially produce “up to half a trillion dollars of AI infrastructure” in the U.S. within the next four years. The significance of this development is underscored by Huang's statement that “the engines of the world's AI infrastructure are being built in the United States for the first time,” though challenges remain, including potential retaliatory tariffs from China, raw material supply issues, and a shortage of skilled workers.
This move represents a pivotal shift in AI manufacturing strategy with substantial geopolitical and economic implications, aligning with the current administration's “America-first, America-only” approach to technology while potentially creating numerous jobs and strengthening U.S. technological sovereignty in the critical AI sector.
NATO Acquires AI Military System from Palantir
NATO has acquired Palantir Technologies' AI-powered military system, which will be deployed within NATO's Allied Command Operations to enhance military operations. The system is designed to improve intelligence fusion and battlespace awareness and to expedite decision-making through AI applications, including large language models and machine learning, with General Markus Laubenthal, SHAPE Chief of Staff, emphasizing that it enables the Alliance to “leverage complex data and accelerate decision-making.” The procurement process was notably efficient, taking only six months from requirement definition to acquisition.
This acquisition represents a significant advancement in NATO's technological capabilities and signals the growing integration of AI into military operations within the Alliance. The rapid procurement timeline highlights NATO's commitment to technological modernization, while the selection of Palantir underscores the deepening relationship between Western defense institutions and Silicon Valley technology firms.
Now to today’s essay!
The Fulcrum originally published this essay.
In the rapidly evolving landscape of artificial intelligence, policymakers face a delicate balancing act: fostering innovation while addressing legitimate concerns about AI's potential impacts. Representative Michael Keaton’s proposed HB 1833, also known as the Spark Act, represents a refreshing approach to this challenge—one that Washington legislators would be right to pass and other states would be wise to consider.
As the AI Innovation and Law Fellow at Texas Law, I find the Spark Act particularly promising. By establishing a grant program through the Department of Commerce to promote innovative uses of AI, Washington's legislators have a chance to act on a fundamental truth: technological diffusion is essential to a dynamic economy, to widespread access to opportunity, and to inspiring future innovation.
The history of technological advancement in America reveals a consistent pattern. When new technologies remain concentrated in the hands of a few, their economic and social benefits remain similarly concentrated. On the other hand, when technological tools become widely available—as happened with personal computers in the 1980s and with internet access from the 1990s onward (though too many remain on the wrong side of the digital divide)—we witness explosive growth in unexpected innovations and broader economic participation.
HB 1833 wisely prioritizes several key elements that deserve particular commendation. The bill's emphasis on ethical AI use, risk analysis, small business participation, and statewide impact reflects a nuanced understanding of how to foster responsible innovation. By requiring applicants to share their technology with the state and demonstrate a clear public benefit, the program ensures that taxpayer investments yield broader societal returns.
The involvement of Washington's AI task force in identifying state priorities further strengthens the approach. This collaborative model between government, industry, and presumably academia creates a framework for ongoing dialogue about AI development—a far more productive approach than imposing rigid restrictions based on speculative concerns.

While regulatory frameworks for AI are necessary and inevitable, premature or excessive regulation risks several negative consequences. First, burdensome compliance costs disproportionately impact startups and smaller labs, potentially cementing the dominance of tech giants who can easily absorb these expenses. This would ironically undermine the competitive marketplace that effective regulation aims to protect.
Second, regulatory approaches that begin from a place of suspicion rather than a balanced assessment may perpetuate unfounded negative perceptions of AI. Public discourse already tends toward dystopian narratives that overshadow AI's transformative potential in healthcare, environmental protection, education, and accessibility. Policy should be informed by a complete picture—acknowledging risks while recognizing benefits.
Washington's approach appears to recognize what history has repeatedly demonstrated: innovation rarely follows predictable paths. The personal computer, the internet, and smartphones all produced applications and implications that their early developers could never have anticipated. By creating space for experimentation while establishing guardrails around ethical use and risk assessment, the Spark Act creates a framework for responsible innovation.
Other states considering AI policy would do well to study Washington's example. Rather than racing to implement restrictive regulations that may quickly become obsolete or counterproductive, states can establish programs that promote innovation while gathering the practical experience necessary to inform more targeted regulation where it is truly needed.
The technological transformation unfolding before us holds tremendous promise for addressing long-standing societal challenges—but only if we resist the urge to stifle it before it has the chance to develop. Washington's legislators deserve recognition for charting a path that neither ignores legitimate concerns nor sacrifices the potential benefits of AI advancement.
In the coming years, the states that thrive economically will likely be those that find this balance—creating frameworks that promote responsible AI innovation while ensuring its benefits are widely shared. The Spark Act represents a promising step in that direction, one that merits both our attention and our support. The Senate should follow the House's lead in passing this important piece of legislation.