Software Testing Is at a Crossroads
A new industry survey reveals QA teams are struggling to bridge the gap between automation goals and reality, while manual testing maintains its critical role despite AI tool adoption.
AI has driven tremendous innovation in software development over the past year.
That impact, however, isn't as apparent when it comes to software testing and quality. The fourth annual Software Testing and Quality Report, conducted by test management platform TestRail, surveyed thousands of quality assurance (QA) professionals. The study's findings reveal an industry grappling with the practical challenges of scaling quality operations in increasingly complex technical environments.
Organizations are discovering that achieving meaningful quality improvements requires more than technological adoption; it demands fundamental changes in processes, skills, and organizational culture that many teams are still developing.
Key findings from the report include:
Teams target automating 63% of their tests but have reached only 40%.
47% of organizations report understaffing as a primary barrier to quality scaling initiatives.
39% of teams embed QA earlier in development cycles through shift-left practices.
86% of mature CI/CD teams report faster releases, while 71% experience fewer production defects.
54% use ChatGPT and 23% use GitHub Copilot, but fewer than one-third integrate AI into core workflows.
Manual testing remains dominant across functional, regression, end-to-end, and smoke testing categories.
"Despite how much buzz there's been around AI, DevOps, and quality at speed over the past year, many of the core challenges facing QA teams remain the same," João Crisóstomo, product marketing manager at TestRail, told ITPro Today. "Teams are still working in silos, still struggling to keep up with growing demand, and still haven't automated the majority of their test coverage."
Why Development Teams Aren't Hitting Automation Targets
The report identified a significant gap between automation goals (63%) and current reality (40%).
Numerous bottlenecks are preventing teams from reaching their automation targets.
"The test automation gap, as we call it, usually stems from three key challenges: limited skills, tooling constraints, and resource shortages," Crisóstomo said.
He noted that smaller teams often struggle because they don't have enough experienced or specialized staff to take on complex automation work. At the same time, even well-resourced teams run into limitations with their current tools, many of which can't handle the increasing complexity of modern testing needs.
"Across the board, nearly every team we surveyed cited bandwidth as a major issue," Crisóstomo said. "It's a classic catch-22: You need time to build automation so you can save time later, but competing priorities make it hard to invest that time upfront."
While there are challenges, some organizations are in fact able to close or at least narrow the test automation gap. Crisóstomo noted that the most successful organizations are closing this gap by investing in scalable tooling, upskilling their teams, and shifting left to make automation part of the development cycle earlier. They're also finding ways to reuse components, reduce test maintenance, and prioritize automation efforts that deliver the biggest return.
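As a concrete illustration of the component reuse Crisóstomo describes, shared setup code can cut test maintenance: when a login or setup flow changes, it is fixed in one place rather than in every test. This is a minimal, hypothetical sketch using Python's standard `unittest` module; `FakeApiClient` is a stand-in invented for the example, not a real library.

```python
import unittest


class FakeApiClient:
    """Stand-in for a real backend client; assumed purely for this sketch."""

    def __init__(self):
        self.logged_in = False

    def login(self, user, password):
        self.logged_in = True
        return self

    def get_profile(self):
        assert self.logged_in, "must log in first"
        return {"user": "demo", "plan": "pro"}


class ProfileTests(unittest.TestCase):
    # setUp is the reusable component: authentication logic lives in one
    # place, so a change to the login flow is fixed once, not per test.
    def setUp(self):
        self.client = FakeApiClient().login("demo", "secret")

    def test_profile_loads(self):
        self.assertEqual(self.client.get_profile()["user"], "demo")

    def test_plan_is_set(self):
        self.assertEqual(self.client.get_profile()["plan"], "pro")
```

Run with `python -m unittest <file>`; the same pattern scales to dozens of tests sharing one maintained setup path.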
AI Integration Challenges Persist
While artificial intelligence tools are becoming more prevalent in QA environments, the report reveals significant gaps between adoption and integration. Although more than half of surveyed teams use ChatGPT and nearly a quarter utilize GitHub Copilot, organizations struggle with low impact and integration complexity when attempting to incorporate AI into their core testing workflows.
As to why those challenges exist, Crisóstomo noted that the bulk of AI innovation over the past year has been concentrated on the developer side of the software lifecycle. AI-assisted coding, code reviews, and DevOps pipelines have received a lot of attention and investment.
"Meanwhile, AI-enhanced quality, particularly in testing and security, hasn't seen the same level of maturity or resources," he said. "That's starting to change, but many teams still see AI as more of a novelty than a business-critical tool for QA."
Compliance and security are also major hurdles. In regulated industries like finance, healthcare, energy, and the public sector, teams are very cautious. Crisóstomo said there's growing concern about how AI tools handle sensitive data, especially when it's unclear where that data is going or how it might be used.
"To move beyond experimentation, QA leaders should start by identifying specific pain points where AI can provide measurable value, like reducing test maintenance, accelerating regression testing, or generating smarter test cases," he said. "Just as importantly, they should push for transparency and compliance in the tools they adopt."
Shift-Left Methodology Shows Measurable Quality Improvements
The adoption of shift-left practices is gaining significant traction, with 39% of surveyed teams now embedding quality assurance activities earlier in the software development lifecycle. This represents a notable increase in proactive quality management, moving away from traditional end-stage testing approaches toward integrated quality practices throughout development.
Organizations implementing shift-left methodologies report tangible benefits that justify the strategic investment. Teams practicing early QA integration experience reduced defect leakage, meaning fewer bugs reach production environments. Additionally, these organizations demonstrate higher satisfaction with their overall quality processes, suggesting that early quality intervention improves both technical outcomes and team morale.
Crisóstomo noted that the report shows a correlation between tight-knit teams, tools, and processes with overall QA efficacy. Teams report faster release cycles, reduced defect rates, and significantly lower costs associated with late-stage bug fixes. For example, fixing a bug in production can cost up to 100x more than resolving it during the requirements or development phase.
Manual Testing Maintains Critical Role Despite Tool Advancement
Contrary to predictions that advanced automation tools would diminish manual testing's importance, the survey demonstrates that human-driven quality assurance remains indispensable.
The majority of respondents continue to conduct functional testing, regression testing, end-to-end testing, and smoke testing through manual processes, indicating that these testing categories require human judgment, creativity, and adaptability that current automation tools cannot replicate.
This persistence of manual testing reflects the complex nature of modern software applications, which often require nuanced testing approaches that consider user experience, edge cases, and contextual factors that automated scripts may miss. The continued reliance on manual testing also highlights the evolving role of QA professionals, who must balance automation implementation with hands-on testing expertise to ensure comprehensive quality coverage.
The Evolving Role of QA Testing
What's clear from the report is that the demands on QA teams are changing, even as 47% of organizations report understaffing.
"The QA role is evolving quickly, and AI is playing a major part in that transformation," Crisóstomo said. "Despite some concerns, we don't believe AI is replacing testers."
In his view, what's actually happening is a shift in expectations: Testers who embrace AI will thrive, while those who don't adapt may fall behind. As AI accelerates development and increases the volume and complexity of code, strong, strategic QA will become even more essential. With nearly half of organizations already reporting understaffing, AI is helping to close the gap by automating repetitive tasks like regression testing, test data generation, and test maintenance.
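As one small illustration of the kind of repetitive task the report says lends itself to automation, randomized test data generation can replace hand-maintained fixture files. This is a hypothetical sketch in plain Python (no specific tool from the report is implied); the record shape and `is_valid` rule are invented for the example.

```python
import random
import string


def random_user(seed=None):
    """Generate one randomized user record; seedable so failures reproduce."""
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": rng.randint(18, 90),
    }


def is_valid(user):
    # The property under test: every generated record must be valid input
    # for the system, regardless of the random values chosen.
    return "@" in user["email"] and 18 <= user["age"] <= 90


# Sweep many generated records through the validator instead of
# hand-writing and maintaining each fixture.
assert all(is_valid(random_user(seed=i)) for i in range(100))
```

Seeding each record makes a failing input reproducible, which is what makes generated data practical in regression suites.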
"Over the next two to three years, QA professionals will need to build more hybrid skill sets," Crisóstomo said. "That includes getting comfortable with AI-powered tools, developing basic scripting or automation skills, and gaining a strong understanding of security and compliance, especially as more organizations introduce AI into regulated environments."
About the Author
Contributor
Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He consults to industry and media organizations on technology issues.
https://www.linkedin.com/in/seanmkerner/