Modern coding agents are very capable, offering features like sub-agents and even teammates. But how do these features affect performance, token consumption, and quality? The only way to find out is to build the same project multiple times. I decided to use only a single prompt so the result is not influenced by my steering: same specification, same technology, and (almost) the same prompt.
I prepared a comprehensive specification of ~50 pages, with acceptance criteria for every feature, but without any code snippets. The spec itself was not part of the one-shot run.
Why a shop? Simply because I have been building e-commerce platforms for years. I know what a proper implementation looks like, so I can actually evaluate the output rather than just checking if it runs.
The specification covers the typical e-commerce feature set: multi-store support, product catalog, checkout flow, payments, fulfillment, and more. You can browse the full feature overview.
Technology-wise, I decided to start with a fresh Laravel project with Livewire for dynamic UIs. Laravel is a batteries-included framework that enables the agent to build the entire system without additional libraries.
Everything is published: the specification, the prompts, the full session logs, analysis, and the resulting code. If you like, you can check out the main branch and re-run the same experiments yourself, maybe with another coding agent or even another technology (you might need to check the specs for references to Laravel).
The Setup
Codebase: Fresh Laravel template with Livewire
MCP Servers: Laravel Boost, Playwright
The Builds
March 2026
#07 Claude Code Team v4 (Same Prompt, 1M Context)
Same specification, same technology, and this time the same original prompt as build #1. The idea was to check whether the increased 1M token context limit plus the 42 Claude Code releases since the first run (v2.1.39 to v2.1.81) produce a better result on their own, without any prompt engineering.
This also serves as a direct comparison to build #6 which used a very detailed prompt with instructions on HOW to build. Here, the agent gets no process instructions at all, only the specification of WHAT to build.
The result: 121 of 143 tests pass with an 88.8% weighted score, the highest across all builds. Zero SonarCloud bugs, all A-ratings, and 12.16 estimated Halstead bugs. The simple prompt with the upgraded context window outperformed the heavily engineered prompt from build #6.
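The "estimated Halstead bugs" figure comes from Halstead's classic delivered-bugs estimate, which is derived from program volume. A minimal sketch of the calculation in Python (the token counts below are illustrative, not taken from any build in this experiment):

```python
import math

def halstead_estimated_bugs(n1, n2, N1, N2):
    """Halstead metrics: n1/n2 = distinct operators/operands,
    N1/N2 = total operator/operand occurrences."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)  # program volume V
    bugs = volume / 3000                     # classic estimate B = V / 3000
    return volume, bugs

# Illustrative counts for a small module, not real analyzer output:
volume, bugs = halstead_estimated_bugs(n1=20, n2=35, N1=120, N2=90)
print(round(bugs, 2))
```

Summing this estimate over all files is how tools arrive at a codebase-wide figure like the 12.16 above; it is a size-driven heuristic, not a count of actual defects.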
Duration: 3h 39m
API Cost: $132.06
Files Created: 389
Active Agents: 34
#06 Claude Code Team v3 (Advanced Prompt, 1M Context)
Same specification, same technology, but this time with an advanced prompt combining several techniques: a thorough controller supporting the team lead, BDD and TDD, code reviews, and a dedicated QA teammate that actively tries to break the system. The prompt was designed to leverage the new 1M token context window.
The result speaks for itself: 119 of 143 E2E tests pass (83%), the highest since the original team mode run. However, this came at a steep price: about $285 in estimated API costs and almost 11 hours of runtime, with 158 active agents. The advanced prompt produced more robust code but consumed far more resources than any previous build.
Duration: 10h 59m
API Cost: $284.52
Files Created: 482
Active Agents: 158
February 2026
#01 Claude Code with Team Mode
Claude Code took a phased approach. The team lead read the full specification, broke it into 12 implementation phases with explicit dependencies, and then spawned specialized agents for each area: migrations, models, Livewire components, admin panel, seeders, and so on.
At peak, 31 agents were working simultaneously. The whole thing was done in just over an hour, producing 388 files across the full Laravel stack. Foundation first, then catalog, storefront, cart and checkout, payments, customer accounts, admin, search, analytics, and webhooks.
Duration: 1h 6m
API Cost: $73.44
Files Created: 388
Active Agents: 31
#02 Claude Code with Sub-Agents
Same specification, same prompt structure, but this time Claude Code ran with sub-agents instead of team mode. No specialized agent instructions were prepared: the prompt simply told it to use sub-agents. Claude spawned 20 sub-agents in total, 12 of which actively contributed code.
The build took about 2 hours and 13 minutes, producing 358 files. With an estimated API cost of $61.97, it sits between the team mode run ($73) and the Codex run ($8.79) in terms of cost.
Duration: 2h 13m
API Cost: $61.97
Files Created: 358
Active Agents: 12
#03 Codex with Sub-Agents
Codex launched explorer agents to analyze the specification first, synthesized their findings into a phased roadmap, then delegated implementation to worker agents. The process took about 1 hour and 44 minutes with 16 sub-agents total.
On my first try, Codex finished after just a few minutes with a surprisingly good but familiar result. That was suspicious: the shop looked exactly like the one Claude had built. It turned out Codex had found the other branch in the repo and switched to it instead of building from scratch. I had to start over with slightly adjusted instructions.
Duration: 1h 44m
API Cost: $8.79
Agents Spawned: 16
Tool Calls: 357
#04 Claude Code Team v2 (More Instructions)
Same specification, same technology, but this time with a tuned prompt and strict quality constraints. The prompt included mandatory PHPStan compliance at max level, Deptrac architectural boundary checks, Pest test coverage, QA self-verification against every acceptance criterion, and a fresh agent review cycle where a new agent instance re-evaluated the entire codebase.
The idea was to see if explicit quality instructions produce measurably better code. The result: significantly fewer estimated bugs (12.4 vs 18.1), higher maintainability (89.7 vs 79.7), zero SonarCloud bugs, and 60% fewer code smells. The quality gate still failed on duplication (7.5%) and unreviewed security hotspots, but the core metrics improved across the board. The trade-off was runtime: the agent took about 3 hours, partly because it did not terminate on its own when done.
A few features were deferred: search was moved to a roadmap item instead of being fully implemented. The storefront uses a dark theme this time.
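For readers unfamiliar with the tools named in this prompt, the Deptrac gate could look roughly like the following configuration fragment. This is a sketch: the layer names and paths are assumptions for illustration, not taken from the actual build, and exact keys vary by Deptrac version.

```yaml
# deptrac.yaml -- architectural boundary check (layer names are illustrative)
deptrac:
  paths:
    - ./app
  layers:
    - name: Domain
      collectors:
        - type: directory
          value: app/Domain/.*
    - name: Http
      collectors:
        - type: directory
          value: app/Http/.*
  ruleset:
    # Http may depend on Domain, but never the other way around
    Http:
      - Domain
    Domain: []
```

The PHPStan side of the gate is simpler: a `phpstan.neon` with `level: max` and the analyzed paths, which the agent had to keep green throughout the run.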
Duration: 3h 0m
API Cost: $73.92
Files Created: 376
Active Agents: 29
#05 Codex with Sub-Agents v2 (More Instructions)
Same specification, same technology, but this time Codex received custom instructions with two additional quality tools: PHPStan (static analysis at max level) and Deptrac (architectural boundary checks). The idea was to see if giving Codex explicit quality constraints would produce measurably better code.
The result is mixed. On the positive side: zero SonarCloud bugs, zero vulnerabilities, and all A-ratings across reliability, security, and maintainability. On the other hand, the code smells tripled (153 vs 54) and estimated Halstead bugs jumped to 31.4 (vs 4.0 in v1). The quality focus seems to have shifted the agent towards more classes and more code, but with higher internal complexity in key controllers.
Only a single product was seeded, and the storefront UI is noticeably reduced compared to other builds. The quality-focused instructions appear to have consumed attention that would otherwise go to spec coverage and demo data. The admin panel, however, is functional with OAuth-based API authentication.
Duration: 3h 27m
API Cost: $28.40
Agents Spawned: 53
Tool Calls: 898
Conclusion
Five builds. Same spec. Same baseline. Same tooling. 143 end-to-end tests. Two independent runs. One question: can an agent take a detailed spec and produce a working multi-tenant commerce platform in a single run?
Let's be clear about this, though. This is not production-ready and not a real Shopify clone. And this is not how agentic engineering should be done: it's an experiment to compare coding agent setups.
There Is a Clear Winner
Claude Code in Team Mode scored 85%. Second place: 57%. Last place: 37%. The gap between first and second is larger than between second and last. This was not a close race.
Team Mode Beats Sub-Agents
The decisive factor was not the model. It was orchestration. Sub-agents built great individual pieces but failed at the seams: variants that exist in the backend but never render, discounts defined in admin but not applied at checkout, orders created but not linked to customers. E-commerce is a chain of integrations. Sub-agents optimized locally. Team Mode optimized globally.
Simple Features Are Easy. Checkout Is Not.
All builds can render product listings and display collections. Very few can execute a full checkout with tax, shipping zones, discount logic, and inventory updates. Only the top build implemented magic card numbers for declined payments exactly as specified. Simple display features work everywhere. Transactional flows expose architectural weakness immediately.
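The "magic card numbers" pattern is simple to state but easy for an agent to skip: the test payment gateway maps specific well-known card numbers to decline outcomes. A hypothetical sketch of that logic in Python (the numbers below follow Stripe's public test-card convention as an example; the spec's actual numbers are not shown here):

```python
# Hypothetical test-gateway sketch: specific "magic" numbers always decline.
# Card numbers follow Stripe's test-card convention as an illustration only.
DECLINE_CODES = {
    "4000000000000002": "card_declined",
    "4000000000009995": "insufficient_funds",
}

def charge(card_number: str, amount_cents: int) -> dict:
    """Return a payment result; known magic numbers always decline."""
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    code = DECLINE_CODES.get(card_number)
    if code:
        return {"status": "declined", "error_code": code}
    return {"status": "succeeded", "amount": amount_cents}

print(charge("4000000000000002", 1999)["status"])
```

Implementing this correctly requires the checkout to surface the decline error to the customer and leave no half-created order behind, which is exactly where most builds fell short.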
Surprising Findings
Seed data was decisive. One build failed 30+ tests simply because it seeded one product instead of 20. The seeder is not boilerplate. It is the data contract between the spec and the system.
Speed hurt. The fastest build (1.5 hours) scored 51%. The slowest (8 hours) scored 85%. In a one-shot scenario, thoroughness beats speed.
Static analysis did not predict success. Builds with strict quality gates (PHPStan, Deptrac, fresh agent review) scored lower than their unconstrained counterparts. You can have zero static violations and a broken registration form.
Even at 85%, no build implemented order timelines, fulfillment progression, or postal code validation correctly. There is still a gap between strong autonomous generation and production-grade completeness.
Final Verdict
Claude Code with Team Mode is the only build where a customer can browse products, select variants, apply discounts, complete checkout with three payment methods, see decline errors, and access their order history. That is a full commerce journey.
Orchestration pattern matters more than model choice. Integration quality matters more than code volume. Seed data fidelity matters more than scaffolding speed. If you want agents to build real systems end to end, the architecture of the agents themselves is the decisive variable.