AI coding boom deepens cognitive debt, says Thoughtworks
Thoughtworks has released volume 34 of its Technology Radar, examining how AI-assisted software development is changing engineering practice.
The report argues that wider use of AI in software engineering is increasing the volume of code being produced and, with it, making it harder for developers to fully understand the systems they are building. It identifies this as growing "cognitive debt": the widening gap between human understanding and software complexity.
That concern sits at the centre of the new Radar, which draws on Thoughtworks' work with clients across technology projects. Rather than presenting AI tools as a substitute for established engineering discipline, it argues that teams are being pushed back towards long-standing practices designed to improve control, visibility and resilience.
Back to basics
Among the practices highlighted are zero trust architecture, DORA metrics and testability. These techniques are becoming more important as software teams adopt agent-based tools that can generate code quickly but may also reduce direct human oversight.
The report also points to the risks around so-called permission-hungry agents, which are designed to access private data and external systems to complete tasks. It says these systems create tension between usefulness and security, prompting greater emphasis on sandboxed execution and defence in depth.
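As a rough illustration of what sandboxed execution can mean in practice (a hypothetical sketch, not an example from the Radar), code produced by an agent can be run in a separate process with a stripped environment, Python's isolated mode and a hard timeout. Real deployments would layer OS-level isolation such as containers or seccomp on top; this only shows the process-boundary idea:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical agent-generated snippet; in a real system this would be
# untrusted output from a coding agent.
untrusted = "print(sum(range(10)))"

# Write the snippet to a temporary file so it runs in its own process.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(untrusted)
    path = f.name

# Execute with an empty environment, Python's isolated mode (-I) and a
# timeout, so the child cannot inherit secrets or run indefinitely.
result = subprocess.run(
    [sys.executable, "-I", path],
    capture_output=True,
    text=True,
    timeout=5,
    env={},
)
os.unlink(path)

print(result.stdout.strip())
```

The design point is defence in depth: even if the generated code misbehaves, the blast radius is limited to a short-lived process that holds no credentials.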
Another theme is the rise of controls for coding agents. Teams are developing mechanisms to constrain and check automated coding systems, including spec-driven development, so-called Agent Skills and mutation testing intended to trigger self-correction before human review.
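Mutation testing, one of the checks the report mentions, deliberately alters program logic and verifies that the test suite notices. A minimal, self-contained sketch (the function and tests here are hypothetical, not taken from the report):

```python
def apply_discount(price: float, rate: float) -> float:
    """Original implementation under test."""
    return price * (1 - rate)


def apply_discount_mutant(price: float, rate: float) -> float:
    """Mutant: the subtraction is flipped to addition."""
    return price * (1 + rate)


def run_tests(fn) -> bool:
    """A toy test suite; returns True if all assertions pass."""
    try:
        assert fn(100.0, 0.2) == 80.0
        assert fn(50.0, 0.0) == 50.0
        return True
    except AssertionError:
        return False


# The original passes, and the mutant is "killed": the suite catches
# the flipped operator, which is what mutation testing checks for.
original_ok = run_tests(apply_discount)
mutant_killed = not run_tests(apply_discount_mutant)
print(original_ok, mutant_killed)
```

A surviving mutant (one the tests fail to kill) signals a coverage gap that an automated agent could be prompted to close before a human reviews the change.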
Alongside operational and security issues, Thoughtworks says the market for developer tools is becoming harder to assess. The report describes a flood of new products and projects, some created rapidly and maintained by small teams or individual contributors, leaving companies struggling to judge durability and long-term support.
Shifting terms
The Radar also warns of "semantic diffusion", where emerging concepts are named and adopted before shared definitions are settled. It argues that this is complicating decision-making for organisations trying to compare tools, methods and technical approaches in an increasingly crowded market.
Rachel Laycock, chief technology officer at Thoughtworks, said the speed of change in AI has altered the challenge facing software teams. "The capabilities of AI have been increasing at a staggering rate over the last year," she said.
Laycock said stronger human oversight remains necessary as these systems move into broader use. "However, rather than displacing humans, we've seen in recent months that there's a significant need for humans to proactively implement appropriate practices and technical harnesses to ensure these capabilities are leveraged effectively and securely. The inflection point we're at isn't so much about technology - it's about technique."
The findings reflect a broader shift in discussion around AI in software development. Earlier debate often focused on productivity gains and code generation speed, but attention has recently shifted towards reliability, governance and whether teams can maintain enough understanding of systems created with growing levels of automation.
For companies deploying AI-assisted development tools in production, the Radar suggests the issue is no longer simply whether the tools work, but whether organisations have the engineering processes needed to manage what those tools produce. That includes validating outputs, controlling access, maintaining test coverage and preserving a clear understanding of system behaviour over time.
The report presents this as a practical rather than theoretical problem. As more code is generated with AI assistance, the burden on teams shifts from writing every line manually to setting constraints, reviewing behaviour and ensuring the surrounding systems of measurement and governance are robust enough to cope.
Thoughtworks concludes that AI is not removing the need for engineering fundamentals, but making them more important as software complexity rises.