
DevSummit Boston: Humans in the Loop: Engineering Leadership in a Chaotic Industry

Jun 16, 2025 2 min read


At the InfoQ Dev Summit in Boston, Michelle Brush, engineering director of Site Reliability Engineering (SRE) at Google, delivered a keynote aimed at software leaders, addressing the broader changes underway in software engineering, systems thinking, and leading through complexity.

She opened by acknowledging the uncertainty that many practitioners feel, affirming that this was a shared experience and an expected part of navigating today’s technological landscape. Brush argued that the nature of software engineering work is shifting, not disappearing. As AI systems automate pieces of software development, engineers will face harder and more complex challenges.

Citing Bainbridge’s “ironies of automation”, she explained, “when you automate some piece of work, the job that you leave behind for humans to do is actually harder.” The result is a landscape where engineers must monitor, debug, and validate automated systems, even as their direct responsibilities evolve.

She illustrated this point with a simple analogy: “Dishwashers are great… but we didn’t get rid of all the work.” While machines may handle routine tasks, humans are left with responsibility for exception handling, quality assurance, and system maintenance. In software, this translates into higher-level abstraction work, deeper troubleshooting, and a reliance on engineering judgment. “Our brains are going to start working on higher and higher abstractions,” she said, emphasizing the cognitive shift required in modern development.

Brush explained that large language models (LLMs) today operate with a kind of “unconscious competence.” They can produce impressive results, but lack explainability and awareness of their limitations. “They don’t know what they don’t know,” she said, framing hallucinations as a natural byproduct of this architecture. By contrast, humans sit in the space of “conscious competence”—we understand what we know and can explain it, which is essential for teaching, mentoring, and validating machine outputs.

A central concept in her talk was the importance of “chunking,” or cognitive encapsulation, as engineers deal with increasing complexity. She argued that the ability to move between abstraction layers—while still being able to drill into the underlying systems—is crucial. “All abstractions leak,” she reminded the audience, “especially our hardware abstractions.”

Brush also stressed the enduring importance of foundational technical knowledge. “I have used calculus in my day job. Definitely discrete math. I’ve had the misfortune of using assembly twice,” she joked, highlighting how education in the fundamentals continues to pay off—even as tools and platforms evolve. She called this kind of knowledge essential for engineering resilience, not just in code, but in understanding systems holistically.

To this end, she advocated for systems thinking, citing Donella Meadows’ work on flows, feedback loops, and change. She recommended supporting disciplines such as control theory, cybernetics, and behavioral economics to better model and design socio-technical systems. For engineering leaders, this was a call to develop broader lenses for decision-making and risk assessment.
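Meadows' core building blocks can be made concrete in a few lines. The sketch below is not from the talk; it is a minimal, illustrative simulation of a "stock" (say, serving capacity) adjusted by a balancing feedback loop that closes the gap toward a goal, the kind of structure control theory and systems thinking both reason about. All names and numbers are hypothetical.

```python
# Minimal stock-and-flow sketch in the spirit of Meadows' systems thinking.
# A balancing (negative) feedback loop sizes the inflow proportionally to
# the remaining gap, so the stock converges to the goal without overshoot.

def simulate(goal: float, stock: float, adjustment_rate: float, steps: int) -> list[float]:
    """Return the stock's trajectory over `steps` iterations.

    Each step the flow is adjustment_rate * (goal - stock): the closer the
    stock gets to the goal, the smaller the correction -- the signature of
    a balancing feedback loop.
    """
    history = [stock]
    for _ in range(steps):
        gap = goal - stock
        stock += adjustment_rate * gap  # flow proportional to the gap
        history.append(stock)
    return history

trajectory = simulate(goal=100.0, stock=10.0, adjustment_rate=0.5, steps=10)
print(round(trajectory[-1], 2))  # converges toward 100.0
```

Swapping the sign of the feedback (flow proportional to the stock itself) turns this into a reinforcing loop, which is the structure behind runaway growth and, as the next example shows, runaway automation.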

Sharing a case study from Google, Brush detailed a 2019 outage that brought down two data centers due to runaway automation. The assumption that geographic distribution was sufficient proved wrong when a third data center also failed under the load of recovery traffic. The takeaway? “We realized we needed to be in more than just three data centers,” she said. The response involved not just more capacity, but smarter design—using latency injection testing and intent-based rollout systems to surface risks before deployment.
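The talk names latency injection testing but does not describe an implementation; the following is a hedged sketch of the general idea, with all function names and parameters invented for illustration. A dependency call is wrapped so that, with some probability, an artificial delay is added, letting teams exercise timeout and retry behavior before a real regional failure forces it.

```python
import random
import time

# Hypothetical latency-injection wrapper (illustrative only, not Google's
# tooling). With probability p_inject, the wrapped call sleeps for delay_s
# before delegating, simulating a slow downstream region.

def with_injected_latency(call, p_inject=0.1, delay_s=0.2, rng=random.random):
    """Return a callable that sometimes injects delay_s of latency
    before invoking `call`, surfacing hidden timeout assumptions."""
    def wrapped(*args, **kwargs):
        if rng() < p_inject:
            time.sleep(delay_s)  # simulate a degraded dependency
        return call(*args, **kwargs)
    return wrapped

# Force injection (p_inject=1.0) to deterministically test the slow path.
slow_fetch = with_injected_latency(lambda: "ok", p_inject=1.0, delay_s=0.05)
print(slow_fetch())
```

In practice such a wrapper would sit at an RPC or service-mesh layer rather than around individual lambdas, but the principle is the same: make the rare slow case routine in testing so capacity and retry design are validated before deployment.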

Developers looking to learn more can watch for videos from the event on infoq.com in the coming weeks.

About the Author

Andrew Hoblitzell

