When building becomes cheaper than specifying, everything changes
How the cost of building collapsed, and what it means for product teams and companies.
In 2023, I asked an engineering team to estimate a monitoring module. Pull satellite data, run calculations, display results, send alerts when thresholds are crossed. Nothing exotic. The estimate came back: eight to ten weeks, two engineers full-time. Call it 800 engineering hours.
In early 2025, I built a functional version of the same module in nine days. Me. A product leader with zero engineering background, using AI-assisted building tools. The interface was rough. The error handling was “it crashes and you refresh.” But it pulled real data, ran real calculations, and sent real alerts. A risk analyst at a client company could log in and see which properties in their portfolio needed attention.
The engineering estimate wasn’t wrong. To build that module at production quality, with proper architecture, security, and scalability, it would take eight to ten weeks. The question is whether you need production quality to learn whether the idea works.
The answer, almost always, is no.
That cost difference is not incremental. It’s not 20% faster or 30% cheaper. It’s a different order of magnitude. And it changes the logic of how products should be created.
The economics that shaped everything
For most of the history of software, building was the expensive part. Engineers were scarce. Tools were complex. Infrastructure was costly. If you were going to commit months of engineering time, you’d better be sure of what you were building.
So we created an apparatus to de-risk the building phase. Research to make sure the problem was real. Specifications to make sure everyone agreed on the solution. Design reviews, estimation sessions, sprint planning, approval gates. All designed to prevent wasted engineering time.
Every one of these steps exists for a reasonable purpose. And every one of them adds time between the moment someone has an insight about what a user needs and the moment that user touches something real.
When building takes months, that tradeoff makes sense. Spending four weeks specifying to avoid twelve weeks of wrong construction is a good deal. The entire organizational structure of product development was an optimization for expensive construction.
What just inverted
In roughly two years, the cost of building a functional software module dropped by an order of magnitude. Not for everything. Not for complex distributed systems. But for the kind of functional prototype that answers “does this idea work when a real user touches it?” the cost collapsed.
This happened because tools emerged that allow people who understand problems to build solutions without needing to be engineers. A product manager who can articulate a problem clearly can build a working backend in an afternoon with Claude Code. A product designer who understands user flows can create a functional interface in days with Cursor. A product owner who knows the business rules can assemble a module that processes real data using Bolt or Lovable.
The output isn’t production-grade. The architecture is messy. The error handling is incomplete. But it works. A user can interact with it. Data flows through it. Business logic executes.
This is the part most people miss. They hear “AI and coding” and imagine robots replacing programmers. That’s not what’s happening. What’s happening is that the ability to build functional software is spreading beyond the engineering team. It’s becoming a horizontal skill, much as spreadsheets made financial modeling a horizontal skill in the 1980s.
The calculus flips
Consider what specification actually costs. A PM spends three to five days writing a detailed PRD. The document goes through review: stakeholder feedback, design input, engineering feasibility check. Then a designer creates screens. Another review cycle. Then estimation, sprint planning, grooming.
By the time the first line of code is written, the organization has invested 150 to 250 hours of people-time. The output is a set of documents describing a product nobody has used yet.
Compare that to a product person who spends 40 hours building a functional prototype with AI tools. The output is not a document describing a product. It’s a product. Rough, incomplete, imperfect, but real. A user can touch it. The questions that took 200 hours of specification to maybe answer correctly can be answered in a day of testing.
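The comparison can be sketched as back-of-envelope arithmetic. The line items and their hour allocations below are illustrative; only the totals (roughly 200 hours for the spec-first path, roughly a week for the build-first path) come from the figures above.

```python
# Back-of-envelope comparison of spec-first vs build-first validation.
# Individual line-item hours are illustrative allocations; the totals
# match the rough figures in the text.

spec_first_hours = {
    "PRD writing (3-5 days)": 32,          # midpoint at 8 hours/day
    "stakeholder/design/eng reviews": 60,
    "mockups + second review cycle": 70,
    "estimation, planning, grooming": 38,
}

build_first_hours = {
    "AI-assisted functional prototype": 40,
    "a day of user testing": 8,
}

spec_total = sum(spec_first_hours.values())    # hours before any user feedback
build_total = sum(build_first_hours.values())  # hours to observed user behavior

print(f"spec-first:  {spec_total} hours, output: documents")
print(f"build-first: {build_total} hours, output: a testable product")
print(f"ratio: {spec_total / build_total:.1f}x")
```

However you allocate the line items, the shape of the result is the same: the spec-first path spends several times more effort before anyone has touched anything real.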
The cost of specifying is now often higher than the cost of building a testable version.
That sentence inverts the economic logic that has governed product development for decades. When thinking was cheap and building was expensive, you maximized thinking before committing to building. When building becomes cheap and speculation-without-building becomes expensive (because it’s slow and unreliable), you build to think. The rational behavior flips.
What dies and what doesn’t
I want to be precise about what I’m claiming.
Research doesn’t die. Understanding your users, their context, and their constraints matters enormously. Talking to users before building anything is not optional. What dies is research as a gate, something that must be declared “complete” before building begins.
Specifications don’t die. Sometimes you need to document decisions and align stakeholders. What dies is using specification as the primary tool for figuring out what to build.
Engineering doesn’t die. Production-grade software requires architecture, security, performance optimization, scalability. What dies is the dependency on engineering for every validation. The product person can now test the idea. Engineering enters to scale what’s validated.
What dies is the gap. The months-long gap between “someone had an insight” and “a user touches something real.” That gap was filled with documents, meetings, and speculation. Now it can be filled with working software and observed behavior.
The implications nobody wants to discuss
If building is cheap and accessible to non-engineers, some things that were previously essential become optional. The PM whose primary value was writing thorough specifications will struggle, because specification is no longer the bottleneck. The value shifts from “can you describe what to build?” to “can you identify what’s worth building and prove it quickly?”
The designer whose primary value was creating mockups before engineering starts will struggle, because the mockup phase can be skipped. Build the function first, design the experience after you know the function works.
The engineer whose primary value was translating specs into code will struggle, because that translation step is being automated. The value shifts from “can you build what was specified?” to “can you scale, secure, and optimize what was validated?”
None of these shifts eliminate the roles. They redefine what makes each role valuable. The roles that adapt thrive. The roles that resist become expensive ways to do something that can now be done faster by other means.
What this looks like in practice
I lead a product team at an agritech company. Our clients are trading companies, cooperatives, banks, and investment funds that need to manage agricultural credit risk. Complex domain. Institutional clients with high expectations.
Before this model, a new product idea entered the engineering pipeline. It competed for sprint capacity with everything else. Getting from idea to first user feedback took months, most of it waiting for engineering availability.
Now our product team builds and validates using AI tools. Engineering’s pipeline isn’t disrupted. Ideas are tested before they touch the engineering backlog. When something reaches engineering, it arrives as a validated product with real users, real feedback, and evidence of value. Not a spec. A working thing.
The number of ideas we test per quarter went up dramatically. Most don’t survive alpha. That’s the point. We find out fast and move on.
The products that do survive reach users weeks after inception instead of months. The quality of the first version is lower. The speed of learning is incomparably faster. And because the learning is faster, the product that eventually reaches production quality is better, because it was shaped by real usage from the beginning.
What this means for innovation
There’s a consequence of the cost inversion that goes beyond product management and touches something more fundamental: how organizations innovate.
Innovation has always been constrained by the cost of experimentation. When each experiment requires months of engineering time and hundreds of thousands in resources, organizations can only afford a few bets per year. Those bets need to be big enough to justify the investment. They need executive sponsorship, business cases, ROI projections. The bar for trying something new is high, and most ideas never clear it. Not because they’re bad ideas, but because the overhead of testing them exceeds the organization’s appetite for risk.
When the cost of building a testable version drops to days and one person’s time, that bar drops to nearly zero. An idea that would never have survived a prioritization meeting can now be tested in a week without touching the engineering pipeline, without a business case, without permission. The product person builds it, puts it in front of a user, and finds out if it works. If it doesn’t, the cost is a week. If it does, the organization has a validated opportunity it would never have discovered through the traditional process.
This changes what innovation looks like inside a company. It stops being a strategic initiative managed by a committee and starts being a continuous practice embedded in the product team’s daily work. The number of ideas tested goes up by an order of magnitude. Most fail. That’s the point. The ones that survive are stronger because they were tested against reality, not evaluated in a slide deck.
I’ve watched this happen on my own team. Before this model, we tested a handful of ideas per year. Now we test closer to twenty. The hit rate hasn’t changed much: maybe one in five survives alpha. But the absolute number of surviving ideas went from one or two per year to four or five. That’s not a marginal improvement. That’s a different innovation capacity entirely.
The organizations that figure this out won’t just build products faster. They’ll discover opportunities faster. They’ll find product-market fit faster. They’ll innovate at a pace that organizations running the traditional model simply can’t match, not because they’re smarter, but because they’re running more experiments per unit of time.
Innovation was never limited by ideas. It was limited by the cost of testing them. That cost just collapsed.
The window
Every economic shift creates a window. The organizations that recognize it early and adapt gain a structural advantage. The ones that wait lose the window.
Most organizations haven’t adjusted. They’re still writing detailed specs before building. Still estimating in story points. Still separating thinking from doing. Still running a playbook optimized for expensive construction.
The gap between fast-learning organizations and process-following organizations is widening quickly. Not because the fast learners are shipping faster. Because they’re learning what to build faster. They’re killing bad ideas faster. They’re finding product-market fit faster. Every cycle compounds the advantage.
The question isn’t whether this shift is real. It’s whether you want to be on the leading side or the trailing side of it.
Felipe Fernandes leads product teams that build and validate using AI tools, shipping functional modules to real users in days. His book “The PM Who Builds” is available on Amazon Kindle.
