Blind to Disruption – The CEOs Who Missed the Future

“How did you go bankrupt?”
“Two ways. Gradually, then suddenly.”
Ernest Hemingway, The Sun Also Rises

Every disruptive technology since fire and the wheel has forced leaders to adapt or die. This post tells the story of what happened when 4,000 companies faced a disruptive technology, and why only one survived.


In the early 20th century, the United States was home to more than 4,000 carriage and wagon manufacturers. They were the backbone of mobility and the precursors of automobiles, used for personal transportation, goods delivery, military logistics, public transit, and more. These companies employed tens of thousands of workers and formed the heart of an ecosystem of blacksmiths, wheelwrights, saddle makers, stables, and feed suppliers.

And within two decades, they were gone. Only one company out of 4,000 carriage and wagon makers pivoted to automobiles.

Today, this story feels uncannily familiar. Just as the carriage industry watched the automobile evolve from curiosity to dominance, modern companies in SaaS, media, software, logistics, defense and education are watching AI emerge from novelty into existential threat.

A Comfortable Industry Misses the Turn
In 1900, the U.S. was the global leader in building carriages. South Bend, Indiana; Flint, Michigan; and Cincinnati, Ohio were full of factories producing carriages, buggies, and wagons. At the high end, these companies made beautifully crafted vehicles, largely from wood and leather, hand-built by artisans. Others made more basic wagons for hauling goods.

When early automobiles began appearing in the 1890s – first steam-powered, then electric, then gasoline – most carriage and wagon makers dismissed them. Why wouldn’t they? The first cars were:

  • Loud and unreliable
  • Expensive and hard to repair
  • Starved for fuel in a world with no gas stations
  • Unsuitable for the dirt roads of rural America

Early autos were worse on most key dimensions that mattered to customers. Clayton Christensen’s “The Innovator’s Dilemma” described this perfectly – disruption begins with inferior products that incumbents don’t take seriously. But beneath that dismissiveness was something deeper: identity and hubris. Carriage manufacturers saw themselves not as transportation companies, but as craftsmen of elegant, horse-drawn vehicles. Cars weren’t an evolution—they were heresy. And so, they waited. And watched. And went out of business slowly, and then all of a sudden.

Early Autos Were Niche and Experimental (1890s–1905) The first cars (steam, electric, and early gas) were expensive, unreliable, and slow. They were built by 19th-century mechanical nerds. And the few that were sold were considered toys for other nerds and the rich. (Karl Benz patented the first gasoline-powered automobile in 1886. In 1893 Frank Duryea drove the first gasoline car built in the U.S.)

These early cars coexisted with a massive horse-powered economy. Horses pulled wagons, delivered goods, powered streetcars, and moved people. The first automakers used the only design they knew: the carriage. Drivers sat up high, as they had in carriages, where they needed to see over the horses.

For the first 15 years, carriage makers, teamsters, and stable owners saw no immediate threat. Like AI today, autos were powerful and new, but buggy, unreliable, and not yet mainstream.

Disruption Begins (1905–1910) Ten years after their first appearance, gasoline cars became more practical: they had better engines and rubber tires, and municipalities had begun to pave roads. From 1903 to 1908 Ford shipped nine different models of cars as they experimented with what we would today call minimum viable products. Ford (and General Motors) broke away from their carriage legacies and began designing cars from first principles, optimized for speed, safety, mass production, and modern materials. That’s the moment the car became its own species. Until then, it was still mostly a carriage with a motor. Urban elites switched from carriages to autos for status and speed, and taxis, delivery fleets, and wealthy commuters adopted cars in major cities.

Even with evidence staring them in the face, carriage companies still did not pivot, assuming cars were a fad. For carriage companies this was the “denial and drift” phase of disruption.

The Tipping Point: Ford’s Model T and Mass Production (1908–1925) The Ford Model T, introduced in 1908, was affordable ($825 at launch, as little as $260 by the 1920s), durable and easy to repair, and made using assembly-line mass production. Within 15 years tens of millions of Americans owned cars. Horse-related businesses — not only the carriage makers, but the entire ecosystem of blacksmiths, stables, and feed suppliers — began collapsing. Cities banned horses from downtown areas due to waste, disease, and congestion. This was like the arrival of Google, the iPhone or ChatGPT: a phase shift.

Collapse of the Old Ecosystem (1920s–1930s) Between 1900 and 1930 the U.S. horse population fell from 21 million to 10 million, and carriage and buggy production plummeted. New infrastructure—roads, gas stations, driver licensing, traffic laws—was built around the car, not the horse.

Early automakers borrowed heavily from carriage design (1885–1910). Cars emerged in a world dominated by horse-drawn vehicles, and they inherited their materials and mechanical designs from the coach builders.

– Leaf springs were the dominant suspension in 19th-century carriages. Early cars used the same.
– Neither carriages nor early autos had shock absorbers; both relied on leaf-spring damping, making them bouncy and unstable at speed. Why? Roads were terrible. Speeds were low. Coachbuilders understood how to make wagons survive cobblestones and dirt.
– Carriages used solid steel or wooden axles; early cars did the same.

Body Construction and Design Borrowed from Carriages
– Car bodies were wood-framed with steel or aluminum sheathing, like a carriage.
– Upholstery, leatherwork, and ornamentation were also carried over.
– Terms like roadster, phaeton, landaulet, and brougham are directly inherited from carriage types.
– High seating and narrow track: Early cars had tall wheels and high ground clearance, like buggies and carriages, since early roads were rutted and muddy.

Result: Early automobiles looked like carriages without the horse, because they were, functionally and structurally, carriages with engines bolted on.

What Changed Over Time
As speeds increased and roads improved, wood carriage design couldn’t handle the torsional stress of faster, heavier cars, and leaf-spring suspensions were too crude for speed and handling. Car builders began using pressed-steel bodies (Fisher Body’s breakthrough) and independent front suspension (introduced in the 1930s), and finally, in the 1930s–40s, integrated the car body and chassis into a single, unified structure rather than a separate body and frame.

Studebaker: From Horses to Horsepower
The one carriage maker that did not go out of business, and instead became an automobile company, was Studebaker. Founded in 1852 in South Bend, Indiana, Studebaker began by building wagons for farmers and pioneers heading west. They supplied wagons to the Union Army during the Civil War and became the largest wagon manufacturer in the world by the late 19th century. But unlike its peers, Studebaker made a series of early, strategic bets on the future.

In 1902, they began producing electric vehicles—a cautious but forward-thinking move. Two years later, in 1904, they entered the gasoline car business, at first by contracting out the engine and chassis. Eventually, they began making the entire car themselves.

Studebaker understood two things the other 4,000 carriage companies ignored:

  1. The future wouldn’t be horse-drawn.
  2. The company’s core capability wasn’t in carriages—it was in mobility.

Studebaker made the painful shift in manufacturing, retooled their factories, and retrained their workforce. By the 1910s, they were a full-fledged car company.

Studebaker survived long into the auto age—longer than most of the early automakers—and only stopped making cars in 1966.

Fisher Body: A Coach Builder for the Machine Age
While Studebaker made a direct pivot of their entire company from carriages to cars, a case can be made that Fisher Body was a spinoff. Founded in 1908 in Detroit by brothers Fred and Charles Fisher, who had worked at a carriage firm before starting their own auto-body business, the company specialized in producing car bodies, not entire cars. Their key innovation was making closed steel car bodies, a major improvement over open carriages and wood frames. By 1919, Fisher was so successful that General Motors bought a controlling stake, and in 1926 GM acquired them entirely. For decades, “Body by Fisher” was stamped into millions of GM cars.

Durant-Dort: The Origin of General Motors
While the Durant-Dort Carriage Company never made cars itself, its co-founder William C. (Billy) Durant saw what others didn’t.  See the blog posts on Durant’s adventures here and here.

Durant used the fortune he made in carriages to invest in the burgeoning auto industry. He took control of Buick in 1904 and in 1908 set up General Motors. Acting like one of Silicon Valley’s crazy entrepreneurs, he rapidly acquired Oldsmobile, Cadillac, and 11 other car companies and 10 parts/accessory companies, creating the first auto conglomerate. (In 1910 Durant would be fired by his board. Undeterred, Durant co-founded Chevrolet, took it public, and in 1916 did a hostile takeover of GM and fired the board. He got thrown out again by his new board in 1920 and died penniless managing a bowling alley.)

While his financial overreach eventually cost him control of GM, his vision reshaped American manufacturing. General Motors became the largest car company in the 20th century.

Why the Other 3,999 Carriage Makers Didn’t Make It
Most carriage makers didn’t have a William Durant, a Fisher brother, or a Studebaker in the boardroom. Here’s why they failed:

  • Technological Discontinuity
    • Carriages were made of wood, leather, and iron; cars required steel, engines, and electrical systems. The skills didn’t transfer easily.
  • Capital Requirements
    • Retooling for cars required huge investment. Most small and midsize carriage firms didn’t have the money—or couldn’t raise it in time.
  • Business Model Inertia
    • Carriage makers sold low-volume, high-margin products. The car business, especially after Ford’s Model T, was about high-volume, low-margin scale.
  • Cultural Identity
    • Carriage builders didn’t see themselves as engineers or industrialists. They were artisans. Cars were noisy, dirty machines—beneath them.
  • Managers versus visionary founders
    • In each of the three companies that survived, it was the founders, not hired CEOs, who drove the transition.
  • Underestimating the adoption curve
    • Early cars were bad. But technological S-curves bend quickly. By the 1910s, cars were clearly better. And by the 1920s, the carriage was obsolete.
  • “How did you go bankrupt?” “Two ways. Gradually, then suddenly.”

By 1925, out of the 4,000+ carriage companies in operation around 1900, nearly all were gone.

The tragedy of the carriage era and lessons for today
What does an early 20th century disruption have to do with AI and today’s companies? Plenty. The lessons are timeless and relevant for today’s CEOs and boards.

It wasn’t just that carriage companies failed to pivot. It’s that they had time and customers—and still missed it. That same pattern repeats at every disruptive transition: companies led by CEOs who simply couldn’t imagine a world different from the one they had mastered. (This happened when companies had to master the web, mobile and social media, and it is repeating today with AI.)

Carriage company presidents were tied to sales and increasing revenue. The threat to their business from cars seemed far in the future. That was true for two decades, until the bottom dropped out of their market with the rapid adoption of autos following the introduction of the Ford Model T. Today, CEO compensation is tied to quarterly earnings, not long-term reinvention. Most boards are packed with risk-averse fiduciaries, not builders or technologists. They reward share buybacks, not AI moonshots. The real problem isn’t that companies can’t see the future. It’s that they are structurally disincentivized to act on it. Meanwhile, disruption doesn’t wait for board approval.

If you’re a CEO, you’re not just managing a P&L. You are deciding whether your company will be the Studebaker—or one of the other 3,999.

Why Investors Don’t Care About Your Business

Founders with great businesses are often frustrated that they can’t raise money.
Here’s why.


I’ve been having coffee with lots of frustrated founders (my students and others) bemoaning that most VCs won’t even meet with them unless they have AI in their fundraising pitch. And the AI startups they do see are getting valuations that appear nonsensical. These conversations brought back a sense of déjà vu from the dot-com bubble (at the turn of this century), when, if you didn’t have the internet as part of your pitch, you weren’t getting funded.

I realized that most of these founders were simply confused, thinking that a good business was of interest to VCs, when in fact VCs are looking for extraordinary businesses that can generate extraordinary returns.

In the U.S., startups raising money from venture capitalists are one of the engines that have driven multiple waves of innovation – from silicon, to life sciences, to the internet, and now to AI. However, one of the most frustrating things for founders with paying customers is watching other companies with no revenue or questionable technology raise enormous sums of cash from VCs.

Why is that? The short answer is that the business model for most venture capital firms is not to build profitable companies, nor is it to build companies in the national interest. VCs’ business model and financial incentives are to invest in companies and markets that will make the most money for their investors. (If they happen to do the former, that’s a byproduct, not the goal.) At times that has them investing in companies and sectors that won’t produce useful products or may cause harm but will generate awesome returns (e.g. Juul, and some would argue social media.)

Founders looking to approach VCs for investment need to understand the four forces that influence how and where VCs invest:

1) how VCs make money, 2) the Lemming Effect, 3) the current economic climate, and 4) secondaries.

How VCs Make Money
Just a reminder of some of the basics of venture capital. Venture is just another financial asset class – with riskier investments that potentially offer much greater returns. A small number of a VC’s investments will generate a 10x to 100x return to make up for the losses or smaller returns from other companies. The key idea is that most VCs are looking for potential home runs, not small (successful?) businesses.

Venture capital firms are run by general partners who raise money from limited partners (pension funds, endowments, sovereign wealth funds, high-net-worth individuals). These limited partners expect a 3x net multiple on invested capital (MOIC) over 10 years, which translates to a 20–30% net internal rate of return (IRR). After 75 years of venture investing, VC firms still can’t pick which individual company will succeed, so they invest in a portfolio of startups.
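
To make that math concrete, here is a minimal sketch (the fund size and cash-flow schedule are hypothetical, not data from any real fund) of how a 3x net multiple translates into an annualized IRR once you account for when capital is actually called and returned:

```python
# A minimal sketch: how a 3x fund multiple maps to an annualized IRR depends
# heavily on when capital is called and returned. Flows are in $M by year;
# negative = capital calls, positive = distributions. All numbers are invented.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = -0.99, hi: float = 10.0) -> float:
    """Internal rate of return via bisection (assumes outflows precede inflows)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical $100M fund: capital called over 4 years, 3x ($300M) returned in years 6-10.
flows = [-25, -25, -25, -25, 0, 0, 100, 0, 100, 0, 100]
moic = sum(cf for cf in flows if cf > 0) / -sum(cf for cf in flows if cf < 0)
print(f"MOIC: {moic:.1f}x")        # 3.0x
print(f"IRR:  {irr(flows):.1%}")   # ~19%; earlier or larger exits push this toward 25-30%
```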

Some VCs believe a winning investment strategy is access to the hottest deals (think social media a decade ago, AI today); others believe in the skill of finding and investing in non-obvious winners (Amazon, Airbnb, SpaceX, Palantir). The ultimate goal of a VC investment is to achieve a successful “exit” – an Initial Public Offering (IPO), an acquisition, or today a secondary – where they can sell their shares at a significant profit. Therefore, the metric for their startups is to create the highest possible market cap(italization). A goal is to have a startup become a “unicorn,” with a market cap of $1 billion or more.

The Lemming Effect
VCs most often invest as a pack. Once a “brand-name” VC invests in a sector, others tend to follow. Do they somehow all see a disruptive opportunity at the same time, or is it Fear Of Missing Out (FOMO)? (It was years after my company Rocket Science Games folded that my two investors admitted they invested because they needed a multimedia game company in their portfolio.) Earlier in this century the VC play was fuel cells, climate, food delivery, scooters, social media, crypto, et al. Today, it’s defense and AI startups. Capital floods in when the sector is hot and dries up when the hype fades or a big failure occurs.

The current economic climate
In the 20th century the primary path to liquidity for a VC investment in a startup (the way they turned their stock ownership into dollars) was having the company “go public” via an initial public offering (IPO) on a U.S. stock exchange. Back then underwriters required that the company have a track record of increasing revenue and profit, and a foreseeable path to continue doing so in the next year. Having your company acquired was a tactic for a quick exit, but it was most often the last resort, at a fire-sale price, if an IPO wasn’t possible.

Beginning with the Netscape IPO in 1995 and through 2000, the public markets developed an appetite for Internet startups with no revenue or profits. These promised the next wave of disruption. The focus became eyeballs and clicks rather than revenue. Most of these companies crashed and burned in the dot-com crash and nuclear winter of 2001–2003, but VCs who sold at the IPO or shortly after made money.

For the last two decades IPO windows have opened only briefly and intermittently, yet they opened even for startups with no hope of meaningful revenue, profit, or deliverable products (fusion, quantum, and other heavy, infrastructure-scale moonshots that require decades to come to fruition). With company and investor PR, hype, and the public’s naivete about deep technology, these companies raised money, their investors sold out, and the public was left holding stock of decreasing value.

Today, the public markets are mostly closed for startup IPOs. That means that venture capital firms have money tied up in startups that are illiquid. They have to think about other ways to get their money from their startup investments.

Secondaries
Today, with the Initial Public Offering path to liquidity for VCs mostly closed, secondaries have emerged as a new way for venture firms and their limited partners to make money.

Secondaries allow existing investors (and employees) to sell stock they already own – almost always at a higher price than their purchase price. These are not new shares and don’t dilute the existing investors. (Some VC funds can sell a stake in their entire fund if they want an early exit.) Secondaries offer VC funds a way to take money off the table and reduce their exposure.

The game here is that startups and their investors need to continually hype and promote their startup to increase the company’s perceived value. The new investors – later-stage funds, growth equity firms, hedge funds, or dedicated secondary funds – now have to do the same to make money on the secondary shares they’ve purchased.
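
As a rough illustration of the mechanics (all numbers invented), here is the arithmetic of a secondary sale from the selling fund’s point of view:

```python
# Hypothetical secondary sale, from the selling fund's point of view.
shares_owned    = 2_000_000   # bought at the seed round
cost_per_share  = 1.00        # original purchase price
shares_sold     = 1_000_000   # half the position sold in the secondary
secondary_price = 12.00       # price the later-stage buyer pays per share

proceeds          = shares_sold * secondary_price     # $12,000,000 returned to the fund now
realized_multiple = secondary_price / cost_per_share  # 12x on the shares sold
remaining_shares  = shares_owned - shares_sold        # upside retained for a later exit

print(proceeds, realized_multiple, remaining_shares)
# No new shares are issued, so other shareholders are not diluted;
# the buyer now needs the company's perceived value to keep rising.
```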

What Do These Forces Mean For Founders?

  • Most VCs care passionately about the industry they invest in. And if they invest in you they will do anything to help your company succeed.
    • However, you need to remember their firm is a business.
    • While they might like you and think you are extraordinarily talented, they are giving you money to make a lot more money for themselves and their investors (their limited partners).
    • See my painful lesson here, when I learned the difference between VCs liking you and their fiduciary duty to make money.
  • The minute you take money from someone their business model becomes yours.
    • If you don’t understand the financial engineering model a VC firm is operating under, you’re going to be an ex-CEO.
    • You need to understand the time horizon, size, scale of the returns they are looking for.
  • Some companies, while great businesses, may not be venture fundable.
    • Can yours provide a 10 to 100x return? Is it in (or can it create) a large $1B market?
    • VC funds tend to look for a return in 7-10 years.
    • Is your team extraordinary and coachable?
  • VCs tend to be either followers into hot deals and sectors or are looking for undiscovered big ideas.
    • Understand which type of investor you are talking to. Some firms have a consistent strategy; in others there may be different partners with contrary opinions.
  • Storytelling matters. Not only does it matter, but it’s an integral part of the venture capital game.
    • If you cannot tell a great credible story that matches the criteria for a venture scale investment you’re not ready to be a venture funded CEO.
  • If you’re lucky enough to have an AI background, grab the golden ring. It won’t be there forever.

Lean LaunchPad at Stanford – 2025

The PowerPoints embedded in this post are best viewed on steveblank.com

We just finished the 15th annual Lean LaunchPad class at Stanford. The class had gotten so popular that in 2021 we started teaching it in both the winter and spring sessions.

During the 2025 spring quarter the eight teams spoke to 935 potential customers, beneficiaries and regulators. Most students spent 15-20 hours a week on the class, about double that of a normal class.

This Class Launched a Revolution in Teaching Entrepreneurship
This class was designed to break out of the “how to write a business plan” model as the capstone of entrepreneurial education. A business plan assumed that all startups needed to do was write a plan, raise money, and then execute the plan. We overturned that orthodoxy when we pointed out that while existing organizations execute business models, startups are searching for them, and that a startup is a temporary organization designed to search for a repeatable and scalable business model. This class was designed to teach startups how to search for a business model.
Several government-funded programs have adopted this class at scale. The first was in 2011, when we turned this syllabus into the curriculum for the National Science Foundation I-Corps. Errol Arkilic, then the head of commercialization at the National Science Foundation, adopted the class saying, “You’ve developed the scientific method for startups, using the Business Model Canvas as the laboratory notebook.”

Below are the Lessons Learned presentations from the spring 2025 Lean LaunchPad.

Team Cowmeter – early detection of cow infections through biological monitoring of milk.

If you can’t see the Team Cowmeter presentation click here

I-Corps at the National Institutes of Health
In 2013 I partnered with UCSF and the National Institutes of Health to offer the Lean LaunchPad class for Life Science and Healthcare (therapeutics, diagnostics, devices and digital health). In 2014, in conjunction with the National Institutes of Health, I took the UCSF curriculum and developed and launched the I-Corps @ NIH program.

Team NowPilot – AI copilot for enhancing focus and executive function.

If you can’t see the Team NowPilot presentation click here

I-Corps at Scale
I-Corps is now offered in 100 universities and has trained over 9,500 scientists and engineers: 7,800 participants in 2,546 teams at I-Corps at NSF (National Science Foundation), 950 participants in 317 teams at I-Corps at NIH, and 580 participants in 188 teams at Energy I-Corps (at the DOE). Fifteen universities in Japan now teach the class.

Team Godela – AI physics engine – with a first disruptive market in packaging.

If you can’t see the Team Godela presentation click here

$4 billion in Venture Capital For I-Corps Teams
1,380 of the NSF I-Corps teams launched startups raising $3.166 billion. Over 300 I-Corps at NIH teams have collectively raised $634 million. Energy I-Corps teams raised $151 million in additional funding.

Team ProspectAI – An AI sales development agent for lean sales teams.

If you can’t see the Team ProspectAI presentation click here

Mission-Driven Entrepreneurship
In 2016, I co-created both the Hacking for Defense course with Pete Newell and Joe Felter and the Hacking for Diplomacy course with Jeremy Weinstein at Stanford. In 2022, Steve Weinstein created Hacking for Climate and Sustainability. In 2024 Jennifer Carolan launched Hacking for Education at Stanford.

Team VLAB – accelerating clinical trials with AI orchestration of data.

If you can’t see the team VLAB presentation click here

Design of This Class
While the Lean LaunchPad students are experiencing what appears to them to be a fully hands-on, experiential class, it’s a carefully designed illusion. In fact, it’s highly structured. The syllabus is designed so that we offer continual implicit guidance, structure, and repetition. This is a critical distinction between our class and an open-ended experiential class.

Guidance, Direction and Structure
For example, students start the class with their own initial guidance – they believe they have an idea for a product or service (Lean LaunchPad/I-Corps) or have been given a clear real-world problem (Hacking for Defense). Coming into the class, students believe their goal is to validate their commercialization or deployment hypotheses. (The teaching team knows that over the course of the class, students will discover that most of their initial hypotheses are incorrect.)

Team Blix – IRB clinical trial compliance / A control layer for AI governance for financial services.

If you can’t see the team Blix presentation click here

The Business Model Canvas
The business model / mission model canvas offers students guidance, explicit direction, and structure. First, the canvas offers a complete, visual roadmap of all the hypotheses they will need to test over the entire class. Second, the canvas helps the students goal-seek by visualizing what an optimal endpoint would look like – finding product/market fit. Finally, the canvas provides students with a map of what they learn week-to-week through their customer discovery work. I can’t overemphasize the important role of the canvas. Unlike an incubator or accelerator with no frame, the canvas acts as the connective tissue – the frame – that students can fall back on if they get lost or confused. It allows us to teach the theory of how to turn an idea, need, or problem into commercial practice, week by week a piece at a time.

Team Plotline – A smart marketing calendar for an author’s book launch.

If you can’t see the team Plotline presentation click here

Lean LaunchPad Tools
The tools for customer discovery (videos, sample experiments, etc.) offer guidance and structure for students to work outside the classroom. The explicit goal of 10-15 customer interviews a week, along with the requirement to build a continual series of minimum viable products, provides metrics that track the team’s progress. The mandatory office hours with the instructors and support from mentors provide additional guidance and structure.

Team Eluna/Driftnet  – Data Center data aggregation and energy optimization software.

If you can’t see the team Eluna/Driftnet presentation click here

AI Embedded in the Class
This was the first year where all teams used AI to help create their business model canvas, build working MVPs in hours, generate customer questions, and analyze and summarize interviews.

It Takes A Village
While I authored this blog post, this class is a team project. The secret sauce of the success of the Lean LaunchPad at Stanford is the extraordinary group of dedicated volunteers supporting our students in so many critical ways.

The teaching team consisted of myself and:

  • Steve Weinstein, partner at America’s Frontier Fund, 30-year veteran of Silicon Valley technology companies and Hollywood media companies. Steve was CEO of MovieLabs, the joint R&D lab of all the major motion picture studios.
  • Lee Redden – CTO and co-founder of Blue River Technology (acquired by John Deere) who was a student in the first Lean LaunchPad class 14 years ago!
  • Jennifer Carolan, co-founder and partner at Reach Capital, the leading education VC, and author of the Hacking for Education class.

Our teaching assistants this year were Arthur C. Campello, Anil Yildiz, Abu B. Rogers and Tireni Ajilore.

Mentors helped the teams understand if their solutions could be a commercially successful business. Thanks to Jillian Manus, Dave Epstein, Robert Feldman, Bobby Mukherjee, Kevin Ray, Deirdre Clute, Robert Locke, Doug Biehn, and John Danner. Martin Saywell from the Distinguished Careers Institute joined the Blix team. The mentor team was led by Todd Basche.

Summary
While the Lean LaunchPad/I-Corps curriculum was a revolutionary break with the past, it’s not the end. In the last decade innumerable variants have emerged. The class we teach at Stanford has continued to evolve. Better versions from others will appear. AI is already having a major impact on customer discovery and validation, and we had each team list the AI tools they used. And one day another revolutionary break will take us to the next level.

Hacking for Defense @ Stanford 2025 – Lessons Learned Presentations

The videos and PowerPoints embedded in this post are best viewed on steveblank.com

We just finished our 10th annual Hacking for Defense class at Stanford.

What a year.

Hacking for Defense, now in 70 universities, has teams of students working to understand and help solve national security problems. At Stanford this quarter the 8 teams of 41 students collectively interviewed 1,106 beneficiaries, stakeholders, requirements writers, program managers, industry partners, etc. – while simultaneously building a series of minimum viable products and developing a path to deployment.

This year’s problems came from the U.S. Army, U.S. Navy, CENTCOM, Space Force/Defense Innovation Unit, the FBI, IQT, and the National Geospatial-Intelligence Agency.

We opened this year’s final presentations session with inspiring remarks by Joe Lonsdale on the state of defense technology innovation and a call to action for our students. During the quarter, guest speakers in the class included former National Security Advisor H.R. McMaster; former Secretary of Defense Jim Mattis; John Cogbill, Deputy Commander of the 18th Airborne Corps; Michael Sulmeyer, former Assistant Secretary of Defense for Cyber Policy; and John Gallagher, Managing Director of Cerberus Capital.

“Lessons Learned” Presentations
At the end of the quarter, each of the eight teams gave a final “Lessons Learned” presentation along with a 2-minute video to provide context about their problem. Unlike traditional demo days or Shark Tanks, which are “Here’s how smart I am, and isn’t this a great product, please give me money,” the Lessons Learned presentations tell the story of each team’s 10-week journey and hard-won learning and discovery. For all of them it’s a roller-coaster narrative describing what happened when they discovered that everything they thought they knew on day one was wrong, and how they eventually got it right.
While all the teams used the Mission Model Canvas, Customer Development and Agile Engineering to build Minimum Viable Products, each of their journeys was unique.

This year we had the teams add two new slides at the end of their presentation: 1) which AI tools they used, and 2) their estimate of progress on the Technology Readiness Level and Investment Readiness Level.

Here’s how they did it and what they delivered.

Team Omnyra – improving visibility into AI-generated bioengineering threats.

If you can’t see the team Omnyra summary video click here

If you can’t see the Omnyra presentation click here

These are “Wicked” Problems
Wicked problems are really complex problems: ones with multiple moving parts, where the solution isn’t obvious and lacks a definitive formula. The types of problems our Hacking For Defense students work on fall into this category. They are often ambiguous. They start with a problem from a sponsor, and not only is the solution unclear, but figuring out how to acquire and deploy it is also complex. Most often students find that in hindsight the problem was a symptom of a more interesting and complex problem – and that acquisition of solutions in the Department of Defense is unlike anything in the commercial world. And the stakeholders and institutions often have different relationships with each other – some are collaborative, some have pieces of the problem or solution, and others might have conflicting values and interests.
The figure shows the types of problems Hacking for Defense students encounter, with the most common ones shaded.

Team HydraStrike – bringing swarm technology to the maritime domain.

If you can’t see the HydraStrike summary video click here.


If you can’t see the HydraStrike presentation click here

Mission-Driven Entrepreneurship
This class is part of a bigger idea – Mission-Driven Entrepreneurship. Instead of students or faculty coming in with their own ideas, we ask them to work on societal problems, whether they’re problems for the State Department or the Department of Defense, non-profits/NGOs, the oceans and climate, or anything the students are passionate about. The trick is we use the same Lean LaunchPad / I-Corps curriculum – and the same class structure – experiential, hands-on – driven this time by a mission model, not a business model. (The National Science Foundation and the Common Mission Project have helped promote the expansion of the methodology worldwide.)
Mission-driven entrepreneurship is the answer to students who say, “I want to give back. I want to make my community, country or world a better place, while being challenged to solve some of the toughest problems.”

Team HyperWatch – tracking hypersonic threats.

If you can’t see the HyperWatch video click here

If you can’t see the HyperWatch presentation click here

It Started With An Idea
Hacking for Defense has its origins in the Lean LaunchPad class I first taught at Stanford in 2011. I had observed that teaching case studies and/or how to write a business plan as a capstone entrepreneurship class didn’t match the hands-on chaos of a startup. Furthermore, there was no entrepreneurship class that combined experiential learning with the Lean methodology. Our goal was to teach both theory and practice. The same year we started the class, it was adopted by the National Science Foundation to train Principal Investigators who wanted to get a federal grant for commercializing their science (an SBIR grant). The NSF observed, “The class is the scientific method for entrepreneurship. Scientists understand hypothesis testing,” and relabeled the class as the NSF I-Corps (Innovation Corps). I-Corps became the standard for science commercialization for the National Science Foundation, the National Institutes of Health and the Department of Energy, to date training 3,051 teams and launching 1,300+ startups.

Team ChipForce – Securing U.S. dominance in critical minerals.

If you can’t see the ChipForce video click here

If you can’t see the ChipForce presentation click here
Note: After briefing the Department of Commerce, the ChipForce team members were offered jobs with the department.

Origins Of Hacking For Defense
In 2016, brainstorming with Pete Newell of BMNT and Joe Felter at Stanford, we observed that students in our research universities had little connection to the problems their government was trying to solve or the larger issues civil society was grappling with. As we thought about how we could get students engaged, we realized the same Lean LaunchPad/I-Corps class would provide a framework to do so. That year we launched both Hacking for Defense and Hacking for Diplomacy (with Professor Jeremy Weinstein and the State Department) at Stanford. The Department of Defense adopted and scaled Hacking for Defense across 60 universities, while Hacking for Diplomacy has been taught at Georgetown, James Madison University, Rochester Institute of Technology, the University of Connecticut and now Indiana University, sponsored by the Department of State Bureau of Diplomatic Security (see here).

Team ArgusNet – instant geospatial data for search and rescue.

If you can’t see the ArgusNet video click here

If you can’t see the ArgusNet presentation click here

Goals for Hacking for Defense
Our primary goal for the class was to teach students Lean Innovation methods while they engaged in national public service.
In the class we saw that students could learn about the nation’s threats and security challenges while working with innovators inside the DoD and Intelligence Community. At the same time the experience would introduce the sponsors – innovators inside the Department of Defense (DOD) and Intelligence Community (IC) – to a methodology that could help them understand and better respond to rapidly evolving threats. We wanted to show that if we could get teams to rapidly discover the real problems in the field using Lean methods, and only then articulate the requirements to solve them, defense acquisition programs could operate at speed and urgency and deliver timely and needed solutions.
Finally, we wanted to familiarize students with the military as a profession and help them better understand its expertise, and its proper role in society. We hoped it would also show our sponsors in the Department of Defense and Intelligence community that civilian students can make a meaningful contribution to problem understanding and rapid prototyping of solutions to real-world problems.

Team NeoLens – AI-powered troubleshooting for military mechanics.

If you can’t see the NeoLens video click here

If you can’t see the NeoLens presentation click here

Go-to-Market/Deployment Strategies
The initial goal of the teams is to ensure they understand the problem. The next step is to see if they can find mission/solution fit (the DoD equivalent of commercial product/market fit). But most importantly, the class teaches the teams about the difficult and complex path of getting a solution into the hands of a warfighter/beneficiary. Who writes the requirement? What’s an OTA? What’s the color of money? What’s a Program Manager? Who owns the current contract? …

Team Omnicomm – improving the quality, security and resiliency of communications for special operations units.

If you can’t see the Omnicomm video click here


If you can’t see the Omnicomm presentation click here

Mission-Driven in 70 Universities and Continuing to Expand in Scope and Reach
What started as a class is now a movement.
From its beginning with our Stanford class, Hacking for Defense is now offered in over 70 universities in the U.S., as well as in the UK as Hacking for the MOD, and in Australia. In the U.S., the course is a program of record supported by Congress. H4D is sponsored by the Common Mission Project, the Defense Innovation Unit (DIU), and the Office of Naval Research (ONR). Corporate partners include Boeing, Northrop Grumman and Lockheed Martin.
Steve Weinstein started Hacking for Impact (Non-Profits) and Hacking for Local (Oakland) at U.C. Berkeley, and Hacking for Oceans at both Scripps and UC Santa Cruz, as well as Hacking for Climate and Sustainability at Stanford. Jennifer Carolan started Hacking for Education at Stanford.

Team Strom – simplified mineral value chain.

If you can’t see the Strom video click here

If you can’t see the Strom presentation click here

What’s Next For These Teams?
When they graduate, the Stanford students on these teams have their pick of jobs in startups, companies, and consulting firms. This year, seven of our teams applied to the Defense Innovation Unit accelerator – the DIU Defense Innovation Summer Fellows Program – Commercialization Pathway. All seven were accepted. This further reinforced our thinking that Hacking for Defense has turned into a pre-accelerator – preparing students to transition their learning from the classroom to deployment.

See the teams present in person here

It Takes A Village
While I authored this blog post, this class is a team project. The secret sauce of the success of Hacking for Defense at Stanford is the extraordinary group of dedicated volunteers supporting our students in so many critical ways.

The teaching team consisted of myself and:

  • Pete Newell, retired Army Colonel and ex Director of the Army’s Rapid Equipping Force, now CEO of BMNT.
  • Joe Felter, retired Army Special Forces Colonel; and former deputy assistant secretary of defense for South Asia, Southeast Asia, and Oceania; and currently the Director of the Gordian Knot Center for National Security Innovation at Stanford which we co-founded in 2021.
  • Steve Weinstein, partner at America’s Frontier Fund, 30-year veteran of Silicon Valley technology companies and Hollywood media companies. Steve was CEO of MovieLabs, the joint R&D lab of all the major motion picture studios.
  • Chris Moran, Executive Director and General Manager of Lockheed Martin Ventures; the venture capital investment arm of Lockheed Martin.
  • Jeff Decker, a Stanford researcher focusing on dual-use research. Jeff served in the U.S. Army as a special operations light infantry squad leader in Iraq and Afghanistan.

Our teaching assistants this year were Joel Johnson, Rachel Wu, Evan Twarog, Faith Zehfuss, and Ethan Hellman.

31 Sponsors, Business and National Security Mentors
The teams were assisted by the originators of their problems – the sponsors.

Sponsors gave us their toughest national security problems: Josh Pavluk, Kari Montoya, Nelson Layfield, Mark Breier, Jason Horton, Stephen J. Plunkett, Chris O’Connor, David Grande, Daniel Owins, Nathaniel Huston, Joy Shanaberger, and David Ryan.
National Security Mentors helped students, who came into the class with no knowledge of the Department of Defense or the FBI, understand the complexity, intricacies and nuances of those organizations: Katie Tobin, Doug Seich, Salvadore Badillo-Rios, Marco Romani, Matt Croce, Donnie Hasseltine, Mark McVay, David Vernal, Brad Boyd, Marquay Edmonson.
Business Mentors helped the teams understand if their solutions could be a commercially successful business: Diane Schrader, Marc Clapper, Laura Clapper, Eric Byler, Adam Walters, Jeremey Schoos, Craig Seidel, Rich “Astro” Lawson.

Thanks to all!

Teaching National Security Policy with AI

The videos embedded in this post are best viewed on steveblank.com

International Policy students will be spending their careers in an AI-enabled world. We wanted our students to be prepared for it. This is why we’ve adopted and integrated AI in our Stanford national security policy class – Technology, Innovation and Great Power Competition.

Here’s what we did, how the students used it, and what they (and we) learned.


Technology, Innovation and Great Power Competition is an international policy class at Stanford (taught by me, Eric Volmar and Joe Felter.) The course provides future policy and engineering leaders with an appreciation of the geopolitics of the U.S. strategic competition with great power rivals and the role critical technologies are playing in determining the outcome.

This course includes all that you would expect from a Stanford graduate-level class in the Masters in International Policy – comprehensive readings, guest lectures from current and former senior policy officials/experts, and deliverables in the form of written policy papers. What makes the class unique is that this is an experiential policy class. Students form small teams and embark on a quarter-long project that gets them out of the classroom to:

  • select a priority national security challenge, and then …
  • validate the problem and propose a detailed solution tested against actual stakeholders in the technology and national security ecosystem

The class combines multiple teaching tools.

  • Real world – Students work in teams on real problems from government sponsors
  • Experiential – They get out of the building to interview 50+ stakeholders
  • Perspectives – They get policy context and insights from lectures by experts
  • And this year… Using AI to Accelerate Learning

Rationale for AI
Using this quarter to introduce AI, we had three things going for us: 1) by fall 2024 AI tools were good and getting exponentially better, 2) Stanford had set up an AI Playground enabling students to use a variety of AI tools (ChatGPT, Claude, Perplexity, NotebookLM, Otter.ai, Mermaid, Beautiful.ai, etc.), and 3) many students were already using AI in classes, but it was usually ambiguous what they were allowed to do.

Policy students have to read reams of documents weekly. Our hypothesis was that our student teams could use AI to ingest and summarize content, identify key themes and concepts across the content, provide an in-depth analysis of critical content sections, and then synthesize and structure their key insights and apply them to solve their specific policy problem. They did all that, and much, much more.

While Joe Felter and I had arm-waved “we need to add AI to the class,” Eric Volmar was the real AI hero on the teaching team. As an AI power user, Eric was most often ahead of our students on AI skills. He threw down a challenge to the students to continually use AI creatively and told them that they would be graded on it. He pushed them hard on AI use in office hours throughout the quarter. The results below speak for themselves.

If you’re not familiar with these AI tools in practice, it’s worth watching these one-minute videos.

Team OSC
Team OSC was trying to understand what the appropriate level of financial risk is for the U.S. Department of Defense to take in providing loans or loan guarantees in technology industries.

The team started using AI to do what we had expected, summarizing the stack of weekly policy documents using Claude 3.5. And like all teams, their unexpected use of AI was to create new leads for their stakeholder interviews. They found that they could ask AI to give them a list of leaders who were involved in similar programs, or who were involved in their program’s initial stages of development.

See how Team OSC summarized policy papers here:

If you can’t see the video click here

Claude was also able to create a list of leaders within the Department of Energy Title 17 credit programs, EXIM, DFC, and other federal credit programs that the team should interview. In addition, it created a list of leaders within the Congressional Budget Office and the Office of Management and Budget who would be able to provide insights. See the demo here:

If you can’t see the video click here

The team also used AI to transcribe podcasts. They noticed that key leaders of the organizations their problem came from had produced podcasts and YouTube videos. They used Otter.ai to transcribe these. That provided additional context for when they did interview them and allowed the team to ask insightful new questions.

If you can’t see the video click here

Note the power of fusing AI with interviews. The interviews ground the knowledge in the team’s lived experience.

The team came up with a use case the teaching team hadn’t thought of – using AI to critique the team’s own hypotheses. The AI not only gave them criticism but supported it with links from published scholars. See the demo here:

If you can’t see the video click here

Another use the teaching team hadn’t thought of was using Mermaid AI to create graphics for their weekly presentations. See the demo here:

If you can’t see the video click here

The surprises from this team kept coming. Their last was using Beautiful.ai to generate PowerPoint presentations. See the demo here:

If you can’t see the video click here

For all teams, using AI tools was a learning/discovery process all its own. By and large, students were unfamiliar with most of the tools on day 1.

Team OSC suggested that students start using AI tools early in the quarter and experiment with tools like ChatGPT and Otter.ai. Tools that have steep learning curves, like Mermaid, should be adopted at the very start of the project to train their models.

Team OSC AI tools summary: AI tools are not perfect, so make sure to cross-check summaries, insights and transcriptions for accuracy and relevancy. Be really critical of their outputs. The biggest takeaway is that AI works best when paired with human effort.

Team FAAST
The FAAST team was trying to understand how the U.S. can improve and scale the DoE FASST program in the urgent context of great power competition.

Team FAAST started using AI to do what we had expected, summarizing the stack of weekly policy documents they were assigned to read and synthesizing interviews with stakeholders.

One of the features of ChatGPT this team appreciated, and one important for a national security class, was the temporary chat feature – data they entered would not be used to train OpenAI’s models. See the demo below.

If you can’t see the video click here

The team used AI to do a few new things we didn’t expect – to generate emails to stakeholders and to create interview questions. During the quarter the team used ChatGPT, Claude, Perplexity, and NotebookLM. By the end of the 10-week class they were using AI to do a few more things we hadn’t expected. Their use of AI expanded to include simulating interviews. They gave ChatGPT specific instructions on who they wanted it to act like, and it provided personalized and custom answers. See the example here.

If you can’t see the video click here
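
For readers who want to reproduce this kind of simulated interview programmatically rather than in the ChatGPT interface the teams used, a minimal sketch against the OpenAI API might look like this (the persona, the questions, and the model name are illustrative, not the team’s actual prompts):

```python
# A sketch of a simulated stakeholder interview: give the model a persona via a
# system message, then ask the interview questions. Persona and questions are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a program manager at a federal applied-research office. "
    "Answer interview questions from a student policy team candidly, in the first person, "
    "drawing on typical constraints of federal budgeting and program management."
)

questions = [
    "How does your office decide which projects get funded each year?",
    "What would make you skeptical of a new loan-guarantee authority?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": q},
        ],
    )
    print(f"Q: {q}\nA: {response.choices[0].message.content}\n")
```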

Learning-by-doing was a key part of this experiential course. The big idea is that students learn both the method and the subject matter together. By learning it together, you learn both better.

Finally, they used AI to map stakeholders, get advice on their next policy move, and ask ChatGPT to review their weekly slides (by screenshotting the slides, putting them into ChatGPT, and asking for feedback and advice).

The FAAST team AI tool summary: ChatGPT was especially good when working with images or screenshots, in multi-step tasks, and when we wanted to use more custom instructions, as we did for the stakeholder interviews. Claude was more conversational and human in its writing, so we used it when sending emails. Perplexity was better for research because it provides citations, so you’re able to access the web and actually get directed to the source it’s citing. NotebookLM was something we tried out, but it was not as successful. It was a cool tool that allowed us to summarize specific policy documents into a podcast, but the summaries were often pretty vague.

Team NSC Energy
Team NSC Energy was working on a National Security Council problem, “How can the United States generate sufficient energy to support compute/AI in the next 5 years?”

At the start of the class, the team began by using ChatGPT to summarize their policy papers and generate tailored interview questions, while Claude was used to synthesize research for background understanding. As ChatGPT occasionally hallucinated information, by the end of the class they were cross-validating the summaries via Perplexity Pro.

The team also used ChatGPT and Mermaid to organize their thoughts and determine who they wanted to talk to. ChatGPT was used to generate code to paste into the Mermaid flowchart organizer. Mermaid has its own language, so ChatGPT was helpful: the team didn’t have to learn all the syntax for this language.
See the video of how Team NSC Energy used ChatGPT and Mermaid here:

If you can’t see the video click here

Team Alpha Strategy
The Alpha Strategy team was trying to discover whether the U.S. could use AI to create a whole-of-government decision-making factory.

At the start of class, Team Alpha Strategy used ChatGPT-4o for policy document analysis and summary, as well as for stakeholder mapping. However, they discovered that going one by one through the countless articles was time-consuming. So the team pivoted to using NotebookLM for document search and cross analysis. See the video of how Team Alpha Strategy used NotebookLM here:

If you can’t see the video click here

The other tools the team used were custom GPTs to build stakeholder maps and diagrams and organize interview notes. There is a wide variety of specialized GPTs. One that was really helpful, they said, was a scholar GPT.
See the video of how Team Alpha Strategy used custom GPTs:

If you can’t see the video click here

Like other teams, Alpha Strategy used ChatGPT to summarize their interview notes and to create flow charts to paste into their weekly presentations.

Team Congress
The Congress team was exploring the question, “If the Department of Defense were given economic instruments of power, which tools would be most effective in the current techno-economic competition with the People’s Republic of China?”

As other teams found, Team Congress first used ChatGPT to extract key themes from hundreds of pages of readings each week and from press releases, articles, and legislation. They also used it for mapping and diagramming to identify potential relationships between stakeholders, or to creatively suggest alternate visualizations.

When Team Congress wasn’t able to reach their sponsor in the initial two weeks of the class, much like Team OSC, they used AI tools to pretend to be their sponsor, a member of the defense modernization caucus. Once they realized its utility, they continued to do mock interviews using AI role play.

The team also used customized models of ChatGPT, but in their case found that these were limited in the number of documents they could upload, because they had a lot of content. So they used retrieval-augmented generation, which takes in a user’s query, matches it with relevant sources in their knowledge base, and feeds those sources back to the model to generate the output. See the video of how Team Congress used retrieval augmented generation here:

If you can’t see the video click here
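
Conceptually, the retrieval step works like the toy sketch below (word-overlap similarity stands in for a real embedding model and vector database, and the document names and texts are invented):

```python
# A toy sketch of the retrieval step in retrieval-augmented generation (RAG):
# score each document in the knowledge base against the query, then pass the
# top matches to the language model as context. Real systems use embedding
# models and vector stores; plain word overlap is used here only for illustration.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: dict[str, str], k: int = 3) -> list[str]:
    q = vectorize(query)
    ranked = sorted(docs, key=lambda name: cosine(q, vectorize(docs[name])), reverse=True)
    return ranked[:k]

# Hypothetical knowledge base of policy documents (names and contents are made up).
docs = {
    "export_controls.txt": "semiconductor export controls and economic tools ...",
    "defense_budget.txt": "appropriations and the defense modernization caucus ...",
    "ndaa_summary.txt": "National Defense Authorization Act provisions ...",
}
top = retrieve("Which economic instruments could DoD use?", docs, k=2)
prompt = "Answer using only these sources:\n" + "\n".join(docs[d] for d in top)
# `prompt` would then be sent to the model along with the user's question.
```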

Team NavalX
The NavalX team was learning how the U.S. Navy could expand its capabilities in Intelligence, Surveillance, and Reconnaissance (ISR) operations on general maritime traffic.

Like all teams, they used ChatGPT to summarize and extract from long documents, organize their interview notes, and define technical terms associated with their project. In this video, note their use of prompting to guide ChatGPT to format their notes.

See the video of how Team NavalX used tailored prompts for formatting interview notes here:

If you can’t see the video click here

They also asked ChatGPT to role-play a critic of their argument and solution so that they could find the weaknesses. They also began uploading many interviews at once, and asked Claude to find themes or ideas in common that they might have missed on their own.

Here’s how the NavalX team used Perplexity for research.

If you can’t see the video click here
Like other teams, the NavalX team discovered you can customize ChatGPT by telling it how you want it to act.

If you can’t see the video click here

Another surprising insight from the team is that you can use ChatGPT to tell you how to write better prompts for itself.

If you can’t see the video click here
In summary, Team NavalX used Claude to translate texts from Mandarin, and found that ChatGPT was the best for writing tasks, Perplexity the best for research tasks, Claude the best for reading tasks, and NotebookLM the best for summarization.

Lessons Learned

  • Integrating AI into this class took a dedicated instructor with a mission to create a new way to teach using AI tools
  • The result was that AI vastly enhanced and accelerated learning for all teams
    • It acted as a helpful collaborator
    • Fusing AI with stakeholder interviews was especially powerful
  • At the start of the class students were familiar with a few of these AI tools
    • By the end of the class they were fluent in many more of them
    • Most teams invented creative use cases
  • All Stanford classes we now teach – Hacking for Defense, Lean Launchpad, Entrepreneurship Inside Government – have AI integrated as part of the course
  • Next year’s AI tools will be substantively better

How the United States Gave Up Being a Science Superpower

US global dominance in science was no accident, but a product of a far-seeing partnership between public and private sectors to boost innovation and economic growth.

Since 20 January, US science has been upended by severe cutbacks from the administration of US President Donald Trump. A series of dramatic reductions in grants and budgets — including the US National Institutes of Health (NIH) slashing reimbursements of indirect research costs to universities from around 50% to 15% — and deep cuts to staffing at research agencies have sent shock waves throughout the academic community.

These cutbacks put the entire US research enterprise at risk. For more than eight decades, the United States has stood unrivalled as the world’s leader in scientific discovery and technological innovation. Collectively, US universities spin off more than 1,100 science-based start-up companies each year, leading to countless products that have saved and improved millions of lives, including heart and cancer drugs, and the mRNA-based vaccines that helped to bring the world out of the COVID-19 pandemic.

These breakthroughs were made possible mostly by a robust partnership between the US government and universities. This system emerged as an expedient wartime design to fund weapons research and development (R&D) in universities. It has fuelled US innovation, national security and economic growth.

But, today, this engine is being sabotaged in the Trump administration’s attempt to purge research programmes in areas it doesn’t support, such as climate change and diversity, equity and inclusion, and to rein in campus protests. But the broader cuts are also dismantling the very infrastructure that made the United States a scientific superpower. At best, US research is at risk from friendly fire; at worst, it’s political short-sightedness.

Researchers mustn’t be complacent. They must communicate the difference between eliminating ideologically objectionable programmes and undermining the entire research ecosystem. Here’s why the US research system is uniquely valuable, and what stands to be lost.

Unique innovation model

The backbone of US innovation is a close partnership between government, universities and industry. It is a well-calibrated ecosystem: federally funded research at universities drives scientific advancement, which in turn spins off technology, patents and companies. This system emerged in the wake of the Second World War, rooted in the vision of US presidential science adviser Vannevar Bush and a far-sighted Congress, which recognized that US economic and military strength hinge on investment in science (see ‘Two systems’).

Two Systems – How US and UK science diverged

When Winston Churchill became UK prime minister in 1940, he had at his side his science adviser, physicist Frederick Lindemann. The country’s wartime technical priorities focused on defence and intelligence — such as electronics-based weapons, radar-based air defence and plans for nuclear weapons. Their code-breaking organization at Bletchley Park, UK, was reading secret German messages using the earliest computers ever built.

Under Churchill, Lindemann influenced which projects received funding and which were sidelined. His top-down, centralized approach, with weapons development primarily in government research laboratories, shaped UK innovation during the Second World War — and led to its demise post-war.

Meanwhile, in the United States, Vannevar Bush, a former dean of engineering at the Massachusetts Institute of Technology (MIT) in Cambridge, became science adviser to US president Franklin Roosevelt in June 1940. Bush told him that war would be won or lost on the basis of advanced technology. He convinced Roosevelt that, although the army and navy should keep making conventional weapons (planes, ships, tanks), scientists could develop more-advanced weapons and deliver them faster. He argued that the only way that the scientists could be productive was if they worked in a university setting in civilian-run weapons laboratories run by academics. Roosevelt agreed to it.

In 1941, Bush convinced the president that academics should also be allowed to acquire and deploy weapons, which were manufactured in volume by US corporations. To manage this, Bush created the US Office of Scientific Research and Development. Each division was run by an academic hand-picked by Bush. And they were located in universities, including MIT, Harvard University, Johns Hopkins University, the California Institute of Technology, Columbia University and the University of Chicago.

Nearly 10,000 scientists, engineers, academics and their graduate students received draft deferments to work in these university labs. Their work led to developments in a wide range of technologies, including electronics, radar, rockets, napalm and the bazooka, penicillin and cures for malaria, as well as chemical and nuclear weapons.

The inflow of government money — US$9 billion (in 2025 dollars) between 1941 and 1945 — changed US universities, and the world. Before the war, academic research was funded mostly by non-profit organizations and industry. Now, US universities were getting more money than they had ever seen. They were full partners in wartime research, not just talent pools.

Wartime Britain had different constraints. First, England was being bombed daily and blockaded by submarines, so focusing on a smaller set of projects made sense. Second, the country was teetering on bankruptcy. It couldn’t afford the big investments that the United States made. Many areas of innovation — such as early computing and nuclear research — went underfunded. And when Churchill was voted out of office in 1945, with him went Lindemann and the coordination of UK science and engineering. Post-war austerity led to cuts to all government labs and curtailed innovation.

The differing economic realities of the United States and United Kingdom also shaped their innovation systems. The United States had an enormous industrial base, abundant capital and a large domestic market, which enabled large-scale investment in research and development. In the United Kingdom, key industries were nationalized, which reduced competition and slowed technological progress.

Although UK universities such as Cambridge and Oxford remained leaders in theoretical science, they struggled to commercialize their breakthroughs. For instance, pioneering work on computing at Bletchley Park didn’t turn into a thriving UK computing industry — unlike in the United States. Without government support, UK post-war innovation never took off.

Meanwhile, US universities and companies realized that the wartime government funding for research had been an amazing accelerator for science and engineering. Everyone agreed it should continue.

In 1950, Congress set up the US National Science Foundation to fund all basic science in the United States (except for life sciences, a role that the US National Institutes of Health would assume). The US Atomic Energy Commission spun off the Manhattan Project and the military took back advanced weapons development. In 1958, the US Defense Advanced Research Projects Agency and NASA would also form as federal research agencies. And decades of economic boom followed.

It need not have been this way. Before the Second World War, the United Kingdom led the world in many scientific domains, but its focus on centralized government laboratories rather than university partnerships stifled post-war commercialization. By contrast, the United States channelled wartime research funds into universities, enabling breakthroughs that were scaled up by private industry to drive the nation’s post-war economic boom. This partnership became the foundation of Silicon Valley and the aerospace, nuclear and biotechnology industries.

The US government remains the largest source of academic R&D funding globally — with a budget of US$201.9 billion for federal R&D in the financial year 2025. Out of this pot, more than two dozen research agencies direct grants to US universities, totalling $59.7 billion in 2023, with the NIH and the US National Science Foundation (NSF) receiving the most.

The agencies do this for a reason: they want professors at universities to do research for them. In exchange, the agencies get basic research from universities that moves science forward, or applied research that creates prototypes of potential products. By partnering with universities, the agencies get more value for money and quicker innovation than if they did all the research themselves.

This is because universities can leverage their investments from the government with other funds that they draw in. For example, in 2023, US universities received $27.7 billion from charitable donations, $6.2 billion in industrial collaborations, $6.7 billion from non-profit organizations, $5.4 billion from state and local government and $3.1 billion from other sources — boosting the $59.7 billion up to $108.8 billion (see ‘US research ecosystem’). This external money goes mostly to creating research labs and buildings that, as any campus visitor has seen, are often named after their donors.

Source: US Natl Center for Science and Engineering Statistics; US Congress; US Natl Venture Capital Assoc; AUTM; Small Business Administration

Thus, federal funding for science research in the United States is decentralized. It supports mostly curiosity-driven basic science, but also prizes innovation and commercial applicability. Academic freedom is valued and competition for grants is managed through peer review. Other nations, including China and those in Europe, tend to have more-centralized and bureaucratic approaches.

But what makes the US ecosystem so powerful is what then happens to the university research: it’s the engine for creating start-ups and jobs. In 2023, US universities licensed 3,000 patents, 3,200 copyrights and 1,600 other licences to technology start-ups and existing companies. Such firms spin off more than 1,100 science-based start-ups each year, which lead to countless products.

Since the 1980 Bayh–Dole Act, US universities have been able to retain ownership of inventions that were developed using federally funded research (see go.nature.com/4cesprf). Before this law, any patents resulting from government-funded research were owned by the government, so they often went unused.

Closing the loop, these technology start-ups also get a yearly $4-billion injection in seed-funding grants from the same government research agencies. Venture capital adds a whopping $171 billion to scale those investments.

It all adds up to a virtuous circle of discovery and innovation.

Facilities costs

A crucial but under-appreciated component of this US research ecosystem is the indirect-cost reimbursement system, which allows universities to maintain the facilities and administrative support necessary for cutting-edge research. Critics often misunderstand the function of these funds, assuming that universities can spend this money on other areas, such as diversity, equity and inclusion programmes. In reality, they fund essential infrastructure: laboratory space, compliance with safety regulations, data storage and administrative support that allows principal investigators to focus on science rather than paperwork. Without this support, universities cannot sustain world-class research.

Reimbursing universities for indirect costs began during the Second World War, and it broke new ground, just as the weapons development did. Unlike in a typical fixed-price contract, the government did not set requirements for university researchers to meet or specifications to design their research to. It asked them to do research and, if the research looked like it might solve a military problem, to build a prototype they could test. In return, the government paid the researchers for their direct and indirect research costs.

Two scientists demonstrate a 1,500,000-volt Van de Graaff generator. Vannevar Bush (right) led the US Office of Scientific Research and Development during the Second World War. Credit: Bettmann/Getty

At first, the government reimbursed universities for indirect costs at a flat rate of 25% of direct costs. Unlike businesses, universities had no profit margin, so indirect-cost recovery was their only way to pay for and maintain their research infrastructure. By the end of the war, some universities had agreed on a 50% rate. The rate is applied to direct costs, so that a principal investigator will be able to spend two-thirds of a grant on direct research costs and the rest will go to the university for indirect costs. (A common misconception is that indirect-cost rates are a percentage of the total grant, for example a 50% rate meaning that half of the award goes to overheads.)
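To make that arithmetic concrete, here is a tiny worked example with made-up numbers (not any particular grant):

```python
# Illustrative only: an indirect-cost rate is applied to *direct* costs, not to the total award.
direct_costs = 200_000                          # what the principal investigator spends on the research
indirect_rate = 0.50                            # a negotiated 50% rate

indirect_costs = direct_costs * indirect_rate   # 100,000 goes to the university for facilities, admin, etc.
total_award = direct_costs + indirect_costs     # 300,000 grant total

print(direct_costs / total_award)               # ~0.67 -> two-thirds of the award funds direct research
```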

After the Second World War, the US Office of Naval Research (ONR) began negotiating indirect-cost rates with universities on the basis of actual institutional expenses. Universities had to justify their overhead costs (administration, facilities, utilities) to receive full reimbursement. The ONR formalized financial auditing processes to ensure that institutions reported indirect costs accurately. This led to the practice of negotiating indirect-cost rates, which is still used today.

Since then, the reimbursement process has been tweaked to prevent gaming the system, but has remained essentially the same. Universities negotiate their indirect-cost rates with either the US Department of Health and Human Services (HHS) or the ONR. Most research-intensive universities receive rates of 50–60% for on-campus research. Private foundations often have a lower rate (10–20%), but tend to have wider criteria for what can be considered a direct cost.

In 2017, the first Trump administration attempted to impose a 10% cap on indirect costs for NIH research. Some in the administration viewed such costs as a form of bureaucratic bloat and argued that research universities were profiting from inflated overhead rates.

Congress rejected this and later added language in the annual funding bill that essentially froze most rates at their 2017 levels. This provision is embodied in section 224 of the Consolidated Appropriations Act of 2024, which has been extended twice and is still in effect.

In February, however, the NIH slashed its indirect reimbursement rate to an arbitrary 15% (see go.nature.com/4cgsndz). That policy is currently being challenged in court.

If the policy is ultimately allowed to proceed, the consequences will be immediate. Billions of dollars of support for research universities will be gone. In anticipation, some research universities are already scaling back their budgets, halting lab expansions and reducing graduate-student funding. This will mean fewer start-ups being founded, with effects on products, services, jobs, taxes and exports.

Race for talent

The ripple effects of Trump’s cuts to US academia are spreading, and one area in which there will be immediate ramifications is the loss of scientific talent. The United States has historically been the top destination for international researchers, thanks to its well-funded universities, innovation-driven economy and opportunities for commercialization.

US-trained scientists — many of whom have historically stayed in the country to launch start-ups or contribute to corporate R&D — are being actively recruited by foreign institutions, particularly in China, which has ramped up its science investments. China has expanded its Thousand Talents Program, which offers substantial financial incentives to researchers willing to relocate. France and other European nations are beginning to design packages to attract top US researchers.

Erosion of the US scientific workforce will have long-term consequences for its ability to innovate. If the country dismantles its research infrastructure, future transformative breakthroughs — whether in quantum computing, cancer treatment, autonomy or artificial intelligence — will happen elsewhere. The United States runs the risk of becoming dependent on foreign scientific leadership for its own economic and national-security needs.

History suggests that, once a nation loses its research leadership, regaining it is difficult. The United Kingdom never reclaimed its pre-war dominance in technological innovation. If current trends continue, the same fate might await the United States.

University research is not merely an academic concern — it is an economic and strategic imperative. Policymakers must recognize that federal R&D investments are not costs but catalysts for growth, job creation and national security.

Policymakers need to reaffirm the United States’ commitment to scientific leadership. If the country fails to act now, the consequences will be felt for generations. The question is no longer whether the United States can afford to invest in research. It is whether it can afford not to.

How the U.S. Became A Science Superpower

Prior to WWII the U.S. was a distant second in science and engineering. By the time the war was over, U.S. science and engineering had blown past the British, and led the world for 85 years.


It happened because two very different people were the science advisors to their nation’s leaders. Each had radically different views on how to use their country’s resources to build advanced weapon systems. Post war, it meant Britain’s early lead was ephemeral while the U.S. built the foundation for a science and technology innovation ecosystem that led the world – until now.

The British – Military Weapons Labs
When Winston Churchill became the British prime minister in 1940, he had at his side his science advisor, Professor Frederick Lindemann, his friend for 20 years. Lindemann headed up the physics department at Oxford and was the director of the Oxford Clarendon Laboratory. Already at war with Germany, Britain’s wartime priorities focused on defense and intelligence technology projects, e.g. weapons that used electronics, radar, physics, etc. – a radar-based air defense network called Chain Home, airborne radar on night fighters, and plans for a nuclear weapons program – the MAUD Committee which started the British nuclear weapons program code-named Tube Alloys. And their codebreaking organization at Bletchley Park was starting to read secret German messages – the Enigma – using the earliest computers ever built.

As early as the mid 1930s, the British, fearing Nazi Germany, developed prototypes of these weapons using their existing military and government research labs. The Telecommunications Research Establishment built early-warning Radar, critical to Britain’s survival during the Battle of Britain, and electronic warfare to protect British bombers over Germany. The Admiralty Research Lab built Sonar and anti-submarine warfare systems. The Royal Aircraft Establishment was developing jet fighters. The labs then contracted with British companies to manufacture the weapons in volume. British government labs viewed their universities as a source of talent, but they had no role in weapons development.

Under Churchill, Professor Lindemann influenced which projects received funding and which were sidelined. Lindemann’s WWI experience as a researcher and test pilot on the staff of the Royal Aircraft Factory at Farnborough gave him confidence in the competence of British military research and development labs. His top-down, centralized approach with weapons development primarily in government research labs shaped British innovation during WW II – and led to its demise post-war.

The Americans – University Weapons Labs
Unlike Britain, the U.S. lacked a science advisor. It wasn't until June 1940 that Vannevar Bush, ex-MIT dean of engineering and president of the Carnegie Institution, told President Franklin Roosevelt that World War II would be the first war won or lost on the basis of advanced technology – electronics, radar, physics, etc.

Unlike Lindemann, Bush had a 20-year-long contentious history with the U.S. Navy and a dim view of government-led R&D. Bush contended that the government research labs were slow and second rate. He convinced the President that while the Army and Navy ought to be in charge of making conventional weapons – planes, ships, tanks, etc. — scientists from academia could develop better advanced technology weapons and deliver them faster than Army and Navy research labs. And he argued the only way the scientists could be productive was if they worked in a university setting in civilian-run weapons labs run by university professors.

To the surprise of the Army and Navy Service chiefs, Roosevelt agreed to let Bush build exactly that organization to coordinate and fund all advanced weapons research.

(While Bush had no prior relationship with the President, Roosevelt had been the Assistant Secretary of the Navy during World War I and like Bush had seen first-hand its dysfunction. Over the next four years they worked well together. Unlike Churchill, Roosevelt had little interest in science and accepted Bush’s opinions on the direction of U.S. technology programs, giving Bush sweeping authority.)

In 1941, Bush upped the game by convincing the President that in addition to research, development, acquisition and deployment of these weapons also ought to be done by professors in universities. There they would be tasked to develop military weapons systems and solve military problems to defeat Germany and Japan. (The weapons were then manufactured in volume by U.S. corporations Western Electric, GE, RCA, Dupont, Monsanto, Kodak, Zenith, Westinghouse, Remington Rand and Sylvania.) To do this Bush created the Office of Scientific Research and Development (OSR&D).

OSR&D headquarters divided the wartime work into 19 “divisions,” 5 “committees,” and 2 “panels,” each solving a unique part of the military war effort. There were no formal requirements.

Staff at OSR&D worked with their military liaisons to understand what the most important military problems were, and then each OSR&D division came up with solutions. These efforts spanned an enormous range of tasks – the development of advanced electronics, radar, rockets, sonar, new weapons like the proximity fuse, napalm and the bazooka, new drugs such as penicillin and cures for malaria, as well as chemical and nuclear weapons.

Each division was run by a professor hand-picked by Bush. And they were located in universities –  MIT, Harvard, Johns Hopkins, Caltech, Columbia and the University of Chicago all ran major weapons systems programs. Nearly 10,000 scientists and engineers, professors and their grad students received draft deferments to work in these university labs.

(Prior to World War 2, science in U.S. universities was primarily funded by companies interested in specific research projects. But funding for basic research came from two non-profits: The Rockefeller Foundation and the Carnegie Institution. In his role  as President of the Carnegie Institution Bush got to know (and fund!) every top university scientist in the U.S.  As head of Physics at Oxford, Lindemann viewed other academics as competitors.)

Americans – Unlimited Dollars
What changed U.S. universities, and the world forever, was government money. Lots of it. Prior to WWII most advanced technology research in the U.S. was done in corporate innovation labs (GE, AT&T, Dupont, RCA, Westinghouse, NCR, Monsanto, Kodak, IBM, et al.) Universities had no government funding (except for agriculture) for research. Academic research had been funded by non-profits, mostly the Rockefeller and Carnegie foundations and industry. Now, for the first time, U.S. universities were getting more money than they had ever seen. Between 1941 and 1945, OSR&D gave $9 billion (in 2025 dollars) to the top U.S. research universities. This made universities full partners in wartime research, not just talent pools for government projects as was the case in Britain.

The British – Wartime Constraints
Wartime Britain had very different constraints. First, England was under daily attack. They were being bombed by air and blockaded by submarines, so it was logical that they focused on a smaller set of high-priority projects to counter these threats. Second, the country was teetering on bankruptcy. It couldn’t afford the broad and deep investments that the U.S. made. (Illustrated by their abandonment of their nuclear weapons programs when they realized how much it would cost to turn the research into industrial scale engineering.) This meant that many other areas of innovation—such as early computing and nuclear research—were underfunded compared to their American counterparts.

Post War – Britain
Churchill was voted out of office in 1945. With him went Professor Lindemann and the coordination of British science and engineering. Britain would be without a science advisor until Churchill returned for a second term (1951–55) and brought Lindemann back with him.

The end of the war led to extreme downsizing of the British military including severe cuts to all the government labs that had developed Radar, electronics, computing, etc.

Financially exhausted, post-war Britain faced austerity that limited its ability to invest in large-scale innovation. There were no post-war plans for government follow-on investments. The differing economic realities of the U.S. and Britain also played a key role in shaping their innovation systems. The United States had an enormous industrial base, abundant capital, and a large domestic market, which enabled large-scale investment in research and development. In Britain, a socialist government came to power. Churchill's successor, Labour's Clement Attlee, dissolved the British empire and nationalized banking, power and light, transport, and iron and steel, all of which reduced competition and slowed technological progress.

While British research institutions like Cambridge and Oxford remained leaders in theoretical science, they struggled to scale and commercialize their breakthroughs. For instance, Alan Turing's and Tommy Flowers' pioneering work on computing at Bletchley Park didn't turn into a thriving British computing industry – unlike in the U.S., where companies like ERA, Univac, NCR and IBM built on their wartime work.

Without the same level of government support for dual-use technologies or commercialization, and with private capital absent for new businesses, Britain’s post-war innovation ecosystem never took off.

Post War – The U.S.
Meanwhile, in the U.S., universities and companies realized that the wartime government funding for research had been an amazing accelerator for science, engineering, and medicine. Everyone, including Congress, agreed that the U.S. government should continue to play a large role in funding it. In 1945, Vannevar Bush published a report, "Science, The Endless Frontier," advocating for government funding of basic research in universities, colleges, and research institutes. Congress argued over how best to organize federal support of science.

By the end of the war, OSR&D funding had taken technologies that had been just research papers or considered impossible to build at scale and made them commercially viable – computers, rockets, radar, Teflon, synthetic fibers, nuclear power, etc. Innovation clusters formed around universities like MIT and Harvard, which had received large amounts of OSR&D funding (MIT's Radiation Lab or "Rad Lab" employed 3,500 civilians during WWII and developed and built 100 radar systems deployed in theater), or around professors who ran one of the OSR&D divisions – like Fred Terman at Stanford.

When the war ended, the Atomic Energy Commission spun out of the Manhattan Project in 1946 and the military services took back advanced weapons development. In 1950 Congress set up the National Science Foundation to fund all basic science in the U.S. (except for Life Sciences, a role the new National Institutes of Health would assume.) Eight years later DARPA and NASA would also form as federal research agencies.

Ironically, Vannevar Bush’s influence would decline even faster than Professor Lindemann’s. When President Roosevelt died in April 1945 and Secretary of War Stimson retired in September 1945, all the knives came out from the military leadership Bush had bypassed in the war. His arguments on how to reorganize OSR&D made more enemies in Congress. By 1948 Bush had retired from government service. He would never again play a role in the U.S. government.

Divergent Legacies
Britain’s focused, centralized model using government research labs was created in a struggle for short-term survival. They achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.

The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.

A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, it also gave universities money for the researchers' facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S., causing other countries to complain of a "brain drain."

Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.

Summary
By the end of the war, the U.S. and British innovation systems had produced radically different outcomes. Both systems were shaped by the experience and personalities of their nation's science advisor.

  • Britain remained a leader in theoretical science and defense technology, but its socialist government's economic policies led to its failure to commercialize wartime innovations.
  • The U.S. emerged as the global leader in science and technology, with innovations like electronics, microwaves, computing, and nuclear power driving its post-war economic boom.
  • The university-industry-government partnership became the foundation of Silicon Valley, the aerospace sector, and the biotechnology industry.
  • Today, China’s leadership has spent the last three decades investing heavily to surpass the U.S. in science and technology.
  • In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over. Others will lead.

Quantum Computing – An Update

In March 2022 I wrote a description of the Quantum Technology Ecosystem. I thought this would be a good time to check in on the progress of building a quantum computer and explain more of the basics.

Just as a reminder, quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications, and Quantum Sensing and Metrology. If you don't know the difference between a qubit and a cue ball (I didn't), read the tutorial here.

Summary –

  • There’s been incremental technical progress in making physical qubits
  • There is no clear winner yet between the seven approaches in building qubits
  • Reminder – why build a quantum computer?
  • How many physical qubits do you need?
  • Advances in materials science will drive down error rates
  • Regional research consortiums
  • Venture capital investment FOMO and financial engineering

We talk a lot about qubits in this post. As a reminder, a qubit is short for quantum bit. It is a quantum computing element that leverages the principle of superposition (that quantum particles can exist in many possible states at the same time) to encode information via one of four methods: spin, trapped atoms and ions, photons, or superconducting circuits.

Incremental Technical Progress
As of 2024 there are seven different approaches being explored to build physical qubits for a quantum computer. The most mature currently are Superconducting, Photonics, Cold Atoms, and Trapped Ions. Other approaches include Quantum Dots, Nitrogen Vacancy in Diamond Centers, and Topological. All of these approaches have incrementally increased the number of physical qubits.

These multiple approaches are being tried because there is no consensus on the best path to building logical qubits. Each company believes that its technology approach offers the best path to scaling to a working quantum computer.

Every company currently hypes the number of physical qubits they have working. By itself this is a meaningless number to indicate progress to a working quantum computer. What matters is the number of logical qubits.

Reminder – Why Build a Quantum Computer?
One of the key misunderstandings about quantum computers is that they are faster than current classical computers on all applications. That's wrong. They are not. They are faster on a small set of specialized algorithms. These special algorithms are what make quantum computers potentially valuable. For example, running Grover's algorithm on a quantum computer can search unstructured data faster than a classical computer. Further, quantum computers are theoretically very good at minimization / optimization / simulation… think optimizing complex supply chains, energy states to form complex molecules, financial models (looking at you, hedge funds), etc.

It's possible that quantum computers will be treated as "accelerators" for overall compute workflows – much like GPUs today. In addition, several companies are betting that "algorithmic" qubits (better than "noisy" but worse than "error-corrected") may be sufficient to provide some incremental performance to workflows like simulating physical systems. This potentially opens the door for earlier cases of quantum advantage.

However, while all of these algorithms might have commercial potential one day, no one has yet come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night. It's Shor's algorithm for integer factorization – the hard problem that underlies much of today's public key cryptography.

The security of today's public key cryptography systems rests on the assumption that breaking keys with a thousand or more bits is practically impossible. It requires factoring large numbers into their prime factors (e.g., RSA) or solving discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA) – problems that can't be solved with any type of classical computer, regardless of how large. Shor's factorization algorithm can crack these codes if run on a quantum computer. This is why NIST has been encouraging the move to post-quantum / quantum-resistant codes.

How many physical qubits do you need for one logical qubit?
Thousands of logical qubits are needed to create a quantum computer that can run these specialized applications. Each logical qubit is constructed out of many physical qubits. The question is, how many physical qubits are needed? Herein lies the problem.

Unlike traditional transistors in a microprocessor, which once manufactured always work, qubits are unstable and fragile. They can pop out of a quantum state due to noise, decoherence (when a qubit interacts with the environment), crosstalk (when a qubit interacts with a physically adjacent qubit), and imperfections in the materials making up the quantum gates. When that happens, errors occur in quantum calculations. So to correct for those errors you need lots of physical qubits to make one logical qubit.

So how do you figure out how many physical qubits you need?

You start with the algorithm you intend to run.

Different quantum algorithms require different numbers of qubits. Some algorithms (e.g., Shor’s prime factoring algorithm) may need >5,000  logical qubits (the number may turn out to be smaller as researchers think of how to use fewer logical qubits to implement the algorithm.)

Other algorithms (e.g., Grover's algorithm) require fewer logical qubits for trivial demos but need thousands of logical qubits to see an advantage over linear search running on a classical computer. (See here, here and here for other quantum algorithms.)

Measure the physical qubit error rate.

Therefore, estimating the number of physical qubits you need to make a single logical qubit starts with the physical qubit error rate (gate error rates, coherence times, etc.). Different technical approaches (superconducting, photonics, cold atoms, etc.) have different error rates and causes of errors unique to the underlying technology.

Current state-of-the-art quantum qubits have error rates that are typically in the range of 1% to 0.1%. This means that on average one out of every 100 to one out of 1000 quantum gate operations will result in an error. System performance is limited by the worst 10% of the qubits.

Choose a quantum error correction code

To recover from error-prone physical qubits, quantum error correction encodes the quantum information into a larger set of physical qubits that is resilient to errors. The surface code is the most commonly proposed error correction code. A practical surface code uses hundreds of physical qubits to create one logical qubit. Quantum error correction codes get more efficient the lower the error rates of the physical qubits. When errors rise above a certain threshold, error correction fails, and the logical qubit becomes as error-prone as the physical qubits.

The Math

To factor a 2048-bit number using Shor's algorithm with a 10^-2 (1% per physical qubit) error rate:

  • Assume we need ~5,000 logical qubits
  • With an error rate of 1%, the surface error correction code requires ~500 physical qubits to encode one logical qubit. (The number of physical qubits required to encode one logical qubit using the surface code depends on the error rate.)
  • Physical qubits needed for Shor's algorithm = 500 x 5,000 = 2.5 million

If you could reduce the error rate by a factor of 10 – to 10^-3 (0.1% per physical qubit):

  • Because of the lower error rate, the surface code would only need ~100 physical qubits to encode one logical qubit
  • Physical qubits needed for Shor's algorithm = 100 x 5,000 = 500 thousand

In reality, there's another 10% or so of ancillary physical qubits needed for overhead. And no one yet knows the error rate of wiring multiple logical qubits together via optical links or other technologies.

(One caveat to the math above: it assumes that every technical approach (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.) will require hundreds of physical qubits of error correction to make each logical qubit. There is always a chance a breakthrough could create physical qubits that are inherently stable, so that the number of error correction qubits needed drops substantially. If that happens, the math changes dramatically for the better and quantum computing becomes much closer.)
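To make the back-of-the-envelope scaling explicit, here is a minimal sketch of the same estimate; the logical-qubit counts, qubits-per-logical-qubit ratios and ~10% ancilla factor are the assumptions stated above, not measured values:

```python
# Back-of-the-envelope estimate using the assumptions in the text above (illustrative only).

def physical_qubits_needed(logical_qubits, physical_per_logical, ancilla_overhead=0.10):
    """Rough count: logical qubits x surface-code overhead, plus ~10% ancillary qubits."""
    base = logical_qubits * physical_per_logical
    return int(base * (1 + ancilla_overhead))

# ~1% physical error rate -> roughly 500 physical qubits per logical qubit (per the text)
print(physical_qubits_needed(5_000, 500))   # ~2.75 million including the 10% overhead

# ~0.1% physical error rate -> roughly 100 physical qubits per logical qubit
print(physical_qubits_needed(5_000, 100))   # ~550 thousand including the 10% overhead
```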

Today, the best anyone has done is to create 1,000 physical qubits.

We have a ways to go.

Advances in materials science will drive down error rates
As seen by the math above, regardless of the technology in creating physical qubits (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.) reducing errors in qubits can have a dramatic effect on how quickly a quantum computer can be built. The lower the physical qubit error rate, the fewer physical qubits needed in each logical qubit.

The key to this is materials engineering. To make a system of hundreds of thousands of qubits work, the qubits need to be uniform and reproducible. For example, decoherence errors are caused by defects in the materials used to make the qubits. For superconducting qubits that requires uniform thickness, controlled grain size and roughness. Other technologies require low loss and uniformity. All of the approaches to building a quantum computer require engineering exotic materials at the atomic level – resonators using tantalum on silicon, Josephson junctions built out of magnesium diboride, transition-edge sensors, Superconducting Nanowire Single Photon Detectors, etc.

Materials engineering is also critical in packaging these qubits (whether it’s superconducting or conventional packaging) and to interconnect 100s of thousands of qubits, potentially with optical links. Today, most of the qubits being made are on legacy 200mm or older technology in hand-crafted processes. To produce qubits at scale, modern 300mm semiconductor technology and equipment will be required to create better defined structures, clean interfaces, and well-defined materials. There is an opportunity to engineer and build better fidelity qubits with the most advanced semiconductor fabrication systems so the path from R&D to high volume manufacturing is fast and seamless.

There are likely only a handful of companies on the planet that can fabricate these qubits at scale.

Regional research consortiums
Two U.S. states, Illinois and Colorado, are vying to be the center of advanced quantum research.

Illinois Quantum and Microelectronics Park (IQMP)
Illinois has announced the Illinois Quantum and Microelectronics Park initiative, in collaboration with DARPA’s Quantum Proving Ground (QPG) program, to establish a national hub for quantum technologies. The State approved $500M for a “Quantum Campus” and has received $140M+ from DARPA with the state of Illinois matching those dollars.

Elevate Quantum
Elevate Quantum is the quantum tech hub for Colorado, New Mexico, and Wyoming. The consortium was awarded $127m from federal and state governments – $40.5 million from the Economic Development Administration (part of the Department of Commerce), $77m from the State of Colorado, and $10m from the State of New Mexico.

(The U.S. has a National Quantum Initiative (NQI) to coordinate quantum activities across the entire government see here.)

Venture capital investment, FOMO, and financial engineering
Venture capital has poured billions of dollars into quantum computing, quantum sensors, quantum networking and quantum tools companies.

However, regardless of the amount of money raised, the corporate hype, PR spin, press releases, and public offerings, no company is remotely close to having a working quantum computer – or even close to running any commercial application substantially faster than on a classical computer.

So why all the investment in this area?

  1. FOMO – Fear Of Missing Out. Quantum is a hot topic. The U.S. government has declared quantum of national interest. If you're a deep tech investor and you don't have one of these companies in your portfolio, it looks like you're out of step.
  2. It’s confusing. The possible technical approaches to creating a quantum computer – Superconducting, Photonics, Cold Atoms, Trapped Ions, Quantum Dots, Nitrogen Vacancy in Diamond Centers, and Topological – create a swarm of confusing claims. And unless you or your staff are well versed in the area, it’s easy to fall prey to the company with the best slide deck.
  3. Financial engineering. Outsiders assume a successful venture investment means a company that generates lots of revenue and profit. That's not always true.

Often, companies in a “hot space” (like quantum) can go public and sell shares to retail investors who have almost no knowledge of the space other than the buzzword. If the stock price can stay high for 6 months the investors can sell their shares and make a pile of money regardless of what happens to the company.

The track record so far of quantum companies who have gone public is pretty dismal. Two of them are on the verge of being delisted.

Here are some simple questions to ask companies building quantum computers:

  • What are their current error rates?
  • What error correction code will they use?
  • Given their current error rates, how many physical qubits are needed to build one logical qubit?
  • How will they build and interconnect the number of physical qubits at scale?
  • How many qubits do they think are needed to run Shor's algorithm to factor a 2048-bit number?
  • How will the computer be programmed? What are the software complexities?
  • What are the physical specs – unique hardware needed (dilution cryostats, et al.), power required, connectivity, etc.?

Lessons Learned

  • Lots of companies
  • Lots of investment
  • Great engineering occurring
  • Improvements in quantum algorithms may add as much (or more) to quantum computing performance as hardware improvements
  • The winners will be the ones who master materials engineering and interconnects
  • Jury is still out on all bets

Update: the kind folks at Applied Materials pointed me to the original 2012 Surface Codes paper. They pointed out that the math should look more like:

  • To factor a 2048-bit number using Shor’s algorithm with a 0.3% error rate (Google’s current quantum processor error rate)
  • Assume we need ~ 2,000 (not 5,000) logical qubits to run Shor’s algorithm.
  • With an error rate of 0.3%, the surface error correction code requires ~10 thousand physical qubits to encode one logical qubit to achieve a 10^-10 logical qubit error rate.
  • Physical qubits needed for Shor's algorithm = 10,000 x 2,000 = 20 million

Still pretty far away from the 1,000 qubits we currently can achieve.

For those so inclined
The logical qubit error rate P_L is P_L = 0.03 (p/p_th)^((d+1)/2), where p_th ~ 0.6% is the error rate threshold for surface codes, p is the physical qubit error rate, and d is the size of the code, which is related to the number of physical qubits by N = (2d - 1)^2.

See the plot below for P_L versus N at different physical qubit error rates, for reference.
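For those who want to run the numbers themselves, here is a minimal sketch of that formula; the function names are mine, and it simply searches for the smallest (odd) code distance d that reaches a target logical error rate:

```python
# Surface-code estimate from the formula above (illustrative sketch, not a vendor's model).

def logical_error_rate(p, d, p_th=0.006):
    """P_L = 0.03 * (p / p_th)^((d+1)/2), valid when p is below the ~0.6% threshold."""
    return 0.03 * (p / p_th) ** ((d + 1) / 2)

def qubits_for_target(p, target_pl=1e-10, p_th=0.006):
    """Smallest odd code distance d that reaches target_pl, and physical qubits N = (2d-1)^2."""
    d = 3
    while logical_error_rate(p, d, p_th) > target_pl:
        d += 2                       # surface-code distances are odd
    return d, (2 * d - 1) ** 2

d, n = qubits_for_target(p=0.003)    # 0.3% physical error rate, as in the update above
print(d, n)                          # d = 57, N = 12,769 -> roughly 10 thousand physical qubits per logical qubit
```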

How Saboteurs Threaten Innovation–and What to Do About It

This article first appeared in First Round Review.

“Only the Paranoid Survive”
Andy Grove – Intel CEO 1987-1998

I just had an urgent “can we meet today?” coffee with Rohan, an ex-student. His three-year-old startup had been slapped with a notice of patent infringement from a Fortune 500 company. “My lawyers said defending this suit could cost $500,000 just for discovery, and potentially millions of dollars if it goes to trial. Do you have any ideas?”

The same day, I got a text from Jared, a friend who’s running a disruptive innovation organization inside the Department of Defense. He just learned that their incumbent R&D organization has convinced leadership they don’t need any outside help from startups or scaleups.

Sigh….

Rohan and Jared have learned three valuable lessons:

  • Only the paranoid survive (as Andy Grove put it)
  • If you’re not losing sleep over who wants to kill you, you’re going to die.
  • The best fight is the one you can avoid.

It's a reminder that innovators need to be better prepared for all the possible ways incumbents sabotage innovation.

Innovators often assume that their organizations and industry will welcome new ideas, operating concepts and new companies. Unfortunately, the world does not unfold like business school textbooks.

Whether you're a new entrant taking on an established competitor, or you're trying to stay scrappy while operating within a bigger company, here's what you need to know about how incumbents will try to stand in your way – and what you can do about it.


Entrepreneurs versus Saboteurs
Startups and scaleups outside of companies or government agencies want to take share of an existing market or displace existing vendors. Or, if they have a disruptive technology or business model, they want to create a new capability or operating concept – even create a new market.

As my student Rohan just painfully learned, the incumbent suppliers and existing contractors want to kill these new entrants. They have no intention of giving up revenue, profits and jobs. (In the government, additional saboteurs can include Congressional staffers, Congressmen and lobbyists, as these new entrants threaten campaign contributions and jobs in local districts.)

Intrapreneurs versus Saboteurs
Innovators inside of companies or government agencies want to make their existing organization better, faster, more effective, more profitable, more responsive to competitive threats or to adversaries. They might be creating or advocating for a better version of something that exists. Or perhaps they are trying to create something disruptive that never existed before.

Inside these commercial or government organizations there are people who want to kill innovation (as my friend Jared just discovered). These can be managers of existing programs, or heads of engineering or R&D organizations who are feeling threatened by potential loss of budget and authority. Most often, budgets and headcount are zero-sum games so new initiatives threaten the status quo.

Leaders of existing organizations often focus on the success of their department or program rather than the overall good of the organization. And at times there are perverse incentives as some individuals are aligned with the interests of incumbent vendors rather than the overall good of the company or government agency.

How Do Incumbents Kill Innovation?
Rohan and Jared were each dealing with one form of innovation sabotage. Incumbents use a variety of methods to sabotage and kill innovative ideas inside organizations and new companies outside them. Most of the time innovators have no idea what just hit them. And those that do – like Rohan and Jared – have no game plan in place to respond.

Here are the most common methods of sabotage that I’ve seen, followed by a few suggestions on how to prepare and defend against them.

Founders and Innovators should expect that existing organizations and companies will defend their turf – ferociously.

 

Common ways incumbents kill innovation in both commercial markets and government agencies.

  • Create career FUD (fear, uncertainty and doubt). Positioning the innovative idea, product or service as risk to the career of whoever adopts or champions it.
  • Emphasize the risk to existing legacy investments, like the cost of switching to the new product or service or highlighting the users who would object to it.
  • Claim that an existing R&D or engineering organization is already doing it (or can do it better/cheaper.)
  • Create innovation theater by starting internal innovation programs with the existing staff and processes.
  • Set up committees and advisory boards to “study” the problem. Appoint well respected members of the status quo.
  • Poison funding for internal initiatives. Claim that you'll have to kill important program x or y to pay for the new initiative. Or fund the demo of the new idea and then slow-walk the budget for scale.
  • File Lawsuits/Protests against winners of contracts.
  • Use patents as a weapon. Filing patent infringement lawsuits – whether true or not. Try to invalidate existing patents – whether true or not.
  • Claim that employees have stolen IP from their previous employer.
  • File HR Complaints against internal intrapreneurs for cutting corners or breaking rules.
  • Isolate senior leadership from the innovators inside the organization via reporting hierarchy and controlling information about alternatives.
  • Object to structures and processes for the rapid adoption of new technologies. Treat innovation and execution as the same process. Lack tolerance for failure at innovation. Do not cultivate a culture of urgency. Don't offer a structured career path for innovators.
  • Lock-up critical resources, like materials, components, people, law firms, distribution channels, partners and make them unavailable to innovation groups/startups.
  • Control industry/government standards to ensure that they are lock-ins for incumbents.
  • Acquire a startup and shut it down or bury its product
  • Poach talent from an innovation organization or company by convincing talent that the innovation effort won’t go anywhere.
  • Influence “independent” analysts, market research firms with “research” contracts to prove that the market is too small.
  • Confuse buyers and senior leadership by preannouncing products, or announcing products that never ship – vaporware.
  • Bundle products (Microsoft Office)
  • Long term lock-in contracts for commercial customers or sole-source for government programs (e.g. F-35).

How incumbents kill startups in government markets

  • File contract appeals or protests, creating delays that burn cash for new entrants.
  • File Inspector General (IG) complaints, claiming innovators are cutting corners, breaking rules or engaging in illegal hiring and spending. If possible, capture these IG offices and weaponize them against innovators.
  • Hijack the acquisition system by creating requirements written for incumbents, while setting unnecessary standards, barriers and paperwork for new entrants. Ignore requirements to investigate alternate suppliers and issue contracts to the incumbents.
  • Revolving door. The implicit promise of jobs to government program executives and managers, and to congressional staffers and congressmen.
  • Lobbying. Incumbents have dedicated staffs to shape requirements and budgets for their products, as well as dedicated staff for continual facetime in Washington. They are experts at managing the POM, PPBE, House and Senate Armed Services Committees and appropriations committees.
  • Create career risks for innovators attempting to gain support outside of official government channels, penalizing unofficial contacts with members of Congress or their staffs.
  • Create Proprietary interfaces
  • Weaponize security clearances, delaying or denying access to needed secure information, or even pulling your, or your company’s clearance.

How incumbents kill startups in commercial markets.

  • Rent Seeking via regulatory bodies (e.g. FCC, SEC, FTC, FAA, Public Utility, Taxi/Insurance Commissions, School Boards, etc.) Use government regulation to keep out new entrants who have more innovative business models (or delay them so the incumbents can catch up).
  • Rent Seeking via local, state and federal laws (e.g. occupational licensing, car dealership laws, grants, subsidies, or tariff protection). Use arguments – from public safety, to lack of quality, or loss of jobs –  to lobby against the new entrants.
  • Rent Seeking via courts to tie up and exhaust a startup’s limited financial resources.
  • Rent Seeking via proprietary interfaces (e.g. John Deere tractor interfaces…)
  • Poison startup financing sources. Tell VCs the incumbents already own the market. Tell government funders the company is out of cash.
  • Legal kickbacks, like discounts, SPIFs, Co-advertising (e.g. Intel and Microsoft for x86 processors/Windows).
  • State Attorney General complaints to tie up startup resources
  • Create fake benchmark groups or greenwash groups to prove existing solution is better or that new solution is worse.

Innovators Survival Checklist

There is no magic bullet I could have offered Rohan or Jared to defend against every possible move an incumbent might make. However, if they had realized that incumbents wouldn’t welcome them, they (and you) might have considered the suggestions below on how to prepare for innovation saboteurs.

In both government and commercial markets:

  • Map the order of battle. Understand how the money flows and who controls budget, headcount and organizational design. Understand who has political, regulatory, and leadership influence and where they operate.
  • Understand saboteurs and their motivation. Co-opt them. Turn them into advocates – (this works with skeptics). Isolate them – with facts. Get them removed from their job (preferably by promoting them to another area.)
  • Build an insurgent team. A technologist, visionary, champion, allies, proxies, etc. The insurgency grows over time.
  • Avoid publicly belittling incumbents. Do not say, “They don’t get it.” That will embarrass, infuriate and ultimately motivate them to put you out of business.
  • Avoid early slideware. Instead focus on delivering successful minimal viable products which demonstrate feasibility and a validated requirement.
  • Build evidence of your technical, managerial and operational excellence. Build Minimal Viable Products (MVPs) that illustrate that you understand a customer's or stakeholder's problem, have the resources to solve it, and have a path to deployment.
  • If possible, communicate and differentiate your innovation as incremental innovation. Point out that over time it’s disruptive.
  • Go after rapid scale with a passionate customer who values the disruption – e.g. INDOPACOM in defense, or Uber, Airbnb, and Tesla in the commercial world.
  • Ally with larger partners who see you as a way to break the incumbents' lock on the market – e.g. Palantir and the intelligence agencies versus the Army's incumbents (IBM's i2, Textron Systems' Overwatch).

In commercial markets:

  • Figure out an “under the radar” strategy that doesn’t attract incumbents’ lawsuits, regulations or laws when you have limited resources to fight back.
  • Patent strategy. Build a defensive patent portfolio and strategy. And consider an offensive one, buying patents you think incumbents may infringe.
  • Pick early markets where the rent seekers are weakest and scale. For example, pick target markets with no national or state lobbying influence. i.e. Craigslist versus newspapers, Netflix versus video rental chains, Amazon versus bookstores, etc.
  • When you get scale and raise a large financing round, take the battle to the incumbents. Strategies at this stage include hiring your own lobbyists, or working with peers in your industry to build your own influence and political action groups.

Jared is still trying to get senior leadership to understand that the clock is ticking, and internal R&D efforts and current budget allocation won’t be sufficient or timely. He’s building a larger coalition for change, but the inertia for the status quo is overwhelming.

Rohan’s company was lucky. After months of scrambling (and tens of thousands of dollars), they ended up buying a patent portfolio from a defunct startup and were able to use it to convince the Fortune 500 company to drop their lawsuit.

I hope they both succeed.

What have you found to be effective in taking on incumbents?

What Does Product Market Fit Sound Like? This.

I got a call from an ex-student asking me, "How do you know when you've found product market fit?"

There have been lots of words written about it, but no actual recordings of the moment.

I remembered I had saved this 90-second, 26-year-old audio file because this is when I knew we had found it at Epiphany.

The speaker was the Chief Financial Officer of a company called Visio, subsequently acquired by Microsoft.

I played it for her and I think it provided some clarity.