Fastly’s July 2025 survey finds senior developers are nearly 2.5x as likely as juniors to ship majority-AI-generated code. Learn why, how it affects speed, and what teams should do.
Overview
Fastly’s July 2025 survey of 791 professional developers reveals a major shift in how AI-generated code reaches production. Senior developers are shipping far more AI-assisted code than their junior peers: seniors are nearly 2.5x as likely as juniors to say that over half of their shipped code is AI-generated. Read the original report on the Fastly blog.

Key Findings at a Glance
- AI in production: 32% of seniors vs. 13% of juniors say over half of their shipped code is AI-generated.
- Shipping speed: 59% of seniors feel AI tools help them ship faster, vs. 49% of juniors.
- Big speed gains: 26% of seniors say AI makes them a lot faster, double the 13% of juniors.
- Editing overhead: 28% frequently edit AI output enough to offset most time savings; only 14% rarely edit.
- Reality check: A randomized controlled trial (RCT) found that experienced developers took 19% longer to complete tasks when using AI tools.
- Developer morale: Nearly 80% say AI tools make coding more enjoyable.
- Green coding: About 56% of juniors and nearly 80% of mid- and senior-level developers consider energy use; roughly two-thirds know AI tools carry a notable carbon footprint.
Why Seniors Ship More AI-Generated Code
Experience matters. Senior engineers are better at spotting when AI output looks right but is wrong. They can quickly assess risks, rewrite fragile parts, and keep work moving. That confidence helps seniors lean on tools like GitHub Copilot, Gemini, and Claude even for complex or business-critical tasks.
Interestingly, more seniors also report heavy editing of AI code. Nearly 30% say they fix AI output enough to erase most time savings, vs. 17% of juniors. Even with this overhead, most seniors still feel faster overall, which suggests they integrate AI into mature review, testing, and deployment workflows.
Speed, Perception, and the Reality Check
AI coding often feels fast. Suggestions appear instantly. Boilerplate writes itself. But early speed can hide later costs. Many teams see cycles of editing, testing, and reworking that cut into gains. This is echoed by survey comments and validated by a randomized trial in which experienced developers were 19% slower when using AI.

That gap between perceived and real speed explains why 28% of developers say editing AI code often offsets gains. Only 14% rarely need changes. Leaders should measure end-to-end cycle time, not just coding minutes, to understand AI’s true impact.
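As a minimal sketch of what end-to-end measurement could look like (all timestamps and the AI flag below are hypothetical sample data, not survey figures), a team might compare AI-assisted and unassisted changes from first commit to production deploy:

```python
from datetime import datetime

def cycle_time_hours(first_commit: str, deployed: str) -> float:
    """End-to-end cycle time in hours, from ISO-8601 timestamps."""
    start = datetime.fromisoformat(first_commit)
    end = datetime.fromisoformat(deployed)
    return (end - start).total_seconds() / 3600

# Hypothetical change records; a real pipeline would pull these from
# the VCS and deploy logs, and flag AI-assisted changes at review time.
changes = [
    {"ai": True,  "first_commit": "2025-07-10T09:00", "deployed": "2025-07-11T15:00"},
    {"ai": False, "first_commit": "2025-07-10T10:00", "deployed": "2025-07-11T10:00"},
]

for group in (True, False):
    times = [cycle_time_hours(c["first_commit"], c["deployed"])
             for c in changes if c["ai"] == group]
    print(f"ai={group}: avg cycle time {sum(times) / len(times):.1f}h")
```

Measured this way, rework and review time count against the "instant" suggestion, which is exactly the gap the trial data exposes.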
Quality, Risk, and Trust in Production
Trust drives adoption. Seniors trust themselves to catch AI mistakes. Juniors may not yet, which makes them cautious about putting AI-assisted code into production. That caution can be wise. Unchecked “vibe coding” risks hidden bugs and vulnerabilities.
Teams can raise trust without slowing down by adding low-friction guardrails:
- Automated tests that gate merges for AI-authored changes.
- Static analysis and security scanning in CI for every pull request.
- Clear review rules for high-risk code paths and dependencies.
- Prompt hygiene and style guides for consistent, safer AI output.
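The guardrails above can be sketched as a single merge-gate function. Everything here is a hypothetical illustration (check names, the "ai-assisted" label); real checks would shell out to whatever test runner, SAST scanner, and dependency auditor the team already uses:

```python
from typing import Callable, Optional

def run_gate(
    checks: dict[str, Callable[[], bool]],
    labels: list[str],
    ai_only_checks: Optional[dict[str, Callable[[], bool]]] = None,
) -> bool:
    """Run every named check; block the merge on the first failure.

    AI-assisted changes get no exemptions -- they pick up any extra
    checks the team attaches for that label.
    """
    required = dict(checks)
    if "ai-assisted" in labels and ai_only_checks:
        required.update(ai_only_checks)
    for name, check in required.items():
        if not check():
            print(f"merge blocked: {name} failed")
            return False
    print("merge allowed: all checks passed")
    return True

# Stand-in checks simulating a pull request flagged as AI-assisted.
base = {"unit-tests": lambda: True, "static-analysis": lambda: True}
extra = {"human-review-signoff": lambda: False}  # simulated failure
print(run_gate(base, labels=["ai-assisted"], ai_only_checks=extra))
```

The point of the design is that the label adds checks rather than removing them, so AI-authored code always clears at least the baseline bar.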
Sustainability: The Hidden Cost of AI Coding
Green coding awareness climbs with experience. About 56% of juniors consider energy use, compared to nearly 80% of mid- and senior-level engineers. Roughly two-thirds across all levels know that AI coding tools carry a meaningful carbon footprint, and fewer than 8% are unaware.
To cut impact without losing speed, teams can:
- Prefer smaller models for routine tasks and cache results when possible.
- Batch requests and reduce prompt churn to limit inference calls.
- Track energy and cost metrics alongside velocity.
- Refactor AI-generated boilerplate for efficiency as a standard practice.
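One way to act on the caching point above is a thin client-side prompt cache that serves repeat prompts locally. In this sketch, `call_model` is a hypothetical stand-in for whatever inference API a team uses:

```python
import hashlib

class CachedClient:
    """Wrap a model call so identical prompts hit the cache, not the API."""

    def __init__(self, call_model):
        self._call = call_model
        self._cache: dict[str, str] = {}
        self.calls = 0  # actual inference calls, for cost/energy metrics

    def complete(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self._cache:
            self.calls += 1
            self._cache[key] = self._call(prompt)
        return self._cache[key]

# Usage with a fake model: three requests, but only two unique prompts.
client = CachedClient(lambda p: p.upper())
client.complete("write a unit test")
client.complete("write a unit test")  # served from cache, no API call
client.complete("format this json")
print(client.calls)  # 2 inference calls instead of 3
```

Tracking `calls` alongside velocity gives the energy-and-cost metric the list above recommends, with no change to how developers work.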
Practical Takeaways for Engineering Teams
- Define when to use AI: Code generation, tests, docs, and refactors benefit most; use caution on critical paths.
- Measure outcomes: Track cycle time, rework, defects, change failure rate, and review duration.
- Raise the floor: Provide curated prompts, coding guidelines, and examples that juniors can trust.
- Tighten feedback loops: Pair programming and targeted reviews help juniors learn to validate AI output.
- Bake in safety: Enforce tests, SAST/DAST, and dependency checks on all AI-assisted changes.
- Mind sustainability: Include energy and cloud cost in your AI adoption scorecard.
Developer Experience Still Matters
Nearly 80% say AI tools make coding more enjoyable. Enjoyment is not the same as efficiency, but morale matters in a field grappling with burnout and backlogs. If teams combine better guardrails with coaching, they can turn that positive sentiment into real, measurable productivity.

Methodology
The survey was conducted by Fastly from July 10–14, 2025, with 791 professional developers in the United States. All respondents write or review code as a core part of their job. Findings are self-reported and subject to common survey biases.