The Transformative IT Team: Speed Through Quality for Tech Leaders [1]

Why are some IT organizations engines of innovation while others seem to be albatrosses? Transformative tech teams need to be high-performing tech teams, and high-performing tech teams generate speed through quality, have profound business centricity, and are culturally resilient.

In this article, I’ll dig deeper into building speed through quality, and discuss business centricity and cultural resiliency at a future date. (I’ll leave how to transmogrify albatrosses into engines for a future series, hopefully.)

Why speed through quality instead of speed at the expense of quality?

If you’ve ever been in an executive meeting where some poor IT project manager is trying to explain why a release is going to be late, you’ve likely witnessed a classic exchange. The PM will try to explain the famed Trade-off Triangle of time versus cost versus scope (including quality), and that it is simply a law of nature that the beleaguered stakeholder must pick two of these, throwing the third to the wolves. The frustrated business exec listens patiently while secretly calculating which would be more harmful to the business plan: firing IT or staying on this “road to nowhere fast.”

I’d like to point out, from my own experience as well as the experience of those smarter than I, that this trade-off dilemma is simply not true. It is not true, at least, if you care about anything beyond the very next release. For that release, the trade-off holds. Otherwise, it’s bunk.

With the big picture in view, quality is one of the many flywheels a high-performing IT organization uses to increase speed and thereby reduce cost. Quality software is easier and safer to change and has a lower Total Cost of Ownership as the system grows in scope. It also reduces the burden on IT operations and support, allowing the team to shift resources from maintaining status quo to adding new value.

How does an IT team increase speed through higher quality?

This is a path that requires some sophistication and discipline, but it is achievable by any IT team willing to adopt a few philosophical principles in their approach to building software. I’ll give you four, though I’m sure there are more to be mined by the adept engineering leader. And, while none of this thinking is very new, I have found it is not as broadly implemented in our industry as one would think.

Principle #1: Forward velocity is a byproduct of short development cycles with tight feedback loops.

Running faster in circles isn’t what we’re after. Business value grows with increased forward velocity, though this velocity is rarely in a straight line. Short cycles of development maximize progress by validating it has actually occurred, while feedback loops smooth out the direction of the progress through consistent course correction. If the cycles are short, the corrections are small.

Implement this principle at the higher levels by adopting Agile (spiritually and culturally, as well as procedurally) in how you shape your scope and organize your work. Agile is an adaptive way of building software incrementally with stakeholder feedback loops built in. Sprints are the development cycles, and Sprint Reviews are the feedback loops.

Implement this principle at the engineering level by fully embracing an automated, test-centric CI/CD pipeline as your means of building software. Test-driven development yields simpler, safer code with regression testing built in. Testable code is quality code, as the proverb goes. Automate builds, software deployments, testing, and infrastructure deployment (if possible). Continuous integration and on-demand deployments are your development cycles (the smaller the better), and automated testing with good code coverage, run as part of those integration and deployment cycles, is your feedback loop.
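To make the test-first idea concrete, here is a minimal sketch using Python’s standard unittest module. The business rule and its names (`apply_discount`) are purely illustrative, not drawn from any particular system; the point is that the tests are written alongside the code and then run automatically on every integration.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Illustrative business rule: apply a percentage discount, rejecting bad input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Written first (red), then made to pass (green); in CI these tests
    double as permanent regression coverage for the rule above."""

    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A CI pipeline would run these with something like `python -m unittest`, failing the build on any red test, which is exactly the short, automatic feedback loop described above.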

Principle #2: A low IT burden allows IT to concentrate its resources on increasing the business value it provides.

Low quality software, and software that hasn’t been optimized for operations, slows everything down and saps resources from more business-valued activities. A stable enterprise should be a given, but so often it’s a pipe dream. Addressing this was largely what inspired the DevOps movement.

Everything I’m talking about in this article can help to decrease IT burden. But I shall highlight three practices that go a long way. First, attack the root of instability like it’s a demon rising from the pits of hell. This means performing a rigorous root cause analysis (RCA) of every issue, with the goal of zero repeat issues over time. If the software defies a quick RCA, prioritize changes to the software that provide visibility into the source of errors.

Second, make sure the architecture and automation capabilities provide a very short Mean Time To Recovery (MTTR). If the system, on average, recovers completely and quickly with minimal effort, this will reduce the cost of quality issues. If the system recovers or adapts in an automated fashion (through automated Failover or elasticity, for example), fewer people need to be deployed and the chance of them doing something horribly wrong is minimized.
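The automated-recovery idea can be sketched very simply. The following Python fragment is a crude, hypothetical form of failover: the hostnames, the `request_fn` callable, and the retry parameters are all illustrative assumptions, standing in for whatever load balancer or orchestration layer would do this in production.

```python
import time

# Hypothetical replica endpoints; in production these would be real hosts
# managed by a load balancer or orchestrator.
REPLICAS = ["primary.example.internal", "secondary.example.internal"]

def call_with_failover(request_fn, replicas, retries_per_replica=2, backoff_s=0.5):
    """Try each replica in order, retrying transient failures, so routine
    recovery needs no human in the loop (a crude form of automated failover)."""
    last_error = None
    for host in replicas:
        for attempt in range(retries_per_replica):
            try:
                return request_fn(host)
            except ConnectionError as exc:
                last_error = exc
                time.sleep(backoff_s * (attempt + 1))  # simple linear backoff
    raise RuntimeError("all replicas failed") from last_error

def fake_request(host):
    """Stand-in for a real network call; simulates the primary being down."""
    if host.startswith("primary"):
        raise ConnectionError("primary down")
    return "ok from " + host

# The caller never notices the primary outage; the call falls back transparently.
result = call_with_failover(fake_request, REPLICAS, backoff_s=0)
# -> "ok from secondary.example.internal"
```

Every incident this kind of mechanism absorbs is one that never consumes engineering hours, which is precisely how MTTR improvements reduce the cost of quality issues.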

Third, automate everything you can in the management and provisioning of infrastructure (including software deployments and rollbacks). This is much easier in a public cloud context but can be done in an on-prem datacenter.

Implementing this principle can seem like a big hill to climb that yields an unquantifiable value. Don’t take my word for it. Measure what matters here: frequency and age of repeat issues, time to RCA, MTTR, hours of engineering time spent in support, FTEs in operations roles dedicated to business continuity, and the ratio of automated to manual procedures. Improve these measures and dig into the outcomes.
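Two of these measures, MTTR and repeat-issue count, fall straight out of an incident log. The sketch below assumes a simple in-memory list of incidents with illustrative timestamps and root causes; in practice these records would come from your incident-tracking system.

```python
from datetime import datetime, timedelta

# Illustrative incident log (dates and causes are made up for the example).
incidents = [
    {"opened": datetime(2023, 5, 1, 9, 0),  "recovered": datetime(2023, 5, 1, 9, 45), "root_cause": "cache stampede"},
    {"opened": datetime(2023, 5, 8, 14, 0), "recovered": datetime(2023, 5, 8, 14, 20), "root_cause": "cache stampede"},
    {"opened": datetime(2023, 6, 2, 3, 30), "recovered": datetime(2023, 6, 2, 5, 0),  "root_cause": "disk full"},
]

def mean_time_to_recovery(incidents) -> timedelta:
    """Average of (recovered - opened) across all incidents."""
    durations = [i["recovered"] - i["opened"] for i in incidents]
    return sum(durations, timedelta()) / len(durations)

def repeat_issue_count(incidents) -> int:
    """Incidents whose root cause has been seen before; the RCA discipline
    above aims to drive this number to zero over time."""
    seen, repeats = set(), 0
    for i in incidents:
        if i["root_cause"] in seen:
            repeats += 1
        seen.add(i["root_cause"])
    return repeats
```

Tracked sprint over sprint, a falling MTTR and a repeat count trending toward zero are the quantified form of "a low IT burden."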

Principle #3: Implement Clean Architecture, seriously

This is the subject of Robert Martin’s great book, “Clean Architecture: A Craftsman’s Guide to Software Structure and Design,” so I will refer you to Uncle Bob for the details and wisdom. But the thing to understand is the goal of clean architecture is not software purity, but software utility—specifically, it is to design software in a way that increases quality so change over time is fast, safe, and efficient. This is, after all, the rub for the business. In effect, quality is defined in terms of the properties of a system that enable it to retain business value as it, and the business, grow. Often, abandoning clean architecture in favor of speed for one release is the very thing that slows down the next ten.
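One of the central ideas in Martin’s book is the Dependency Rule: business rules should not depend on delivery or storage details. The Python sketch below illustrates that rule with made-up names (`PlaceOrder`, `OrderRepository`); it is a minimal example of the style, not a prescription from the book.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Abstraction the business rule depends on; no concrete database,
    framework, or driver appears anywhere near the use case."""
    @abstractmethod
    def save(self, order: dict) -> None: ...

class PlaceOrder:
    """Use case: pure business logic, easy to test and safe to change."""
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def execute(self, order: dict) -> dict:
        if not order.get("items"):
            raise ValueError("order must contain at least one item")
        order["status"] = "placed"
        self.repo.save(order)
        return order

class InMemoryOrderRepository(OrderRepository):
    """Swappable detail: fine for tests; a SQL-backed implementation could
    replace it in production without touching PlaceOrder at all."""
    def __init__(self):
        self.saved = []
    def save(self, order: dict) -> None:
        self.saved.append(order)
```

Because the use case sees only the abstraction, storage technology can change release after release while the business rule, and its tests, stay untouched. That is quality defined as the capacity to absorb change.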

But implementing clean architecture is no easy task. Often, the burden is felt more by engineers than architects. For this reason, your CI/CD pipeline, mentioned above, should include code reviews to ensure the proper design tradeoffs are being made. Additionally, your agile processes and culture should allow the teams ample time to give attention to building momentum in this part of the quality flywheel. Finally, your architects must be able and willing to mentor engineers in implementing, where appropriate, the principles Martin identifies.

Principle #4: Organize your teams to match the high-level architecture you want[2]

Team organization and software architecture can be in reinforcing harmony or crippling dissonance. That these two structures (human organization and software design) interact so fundamentally is hardly intuitive, but the friction between mismatched structures can be felt by everyone, especially the project manager. Teams naturally build software up to the boundary of the team’s responsibility—no matter what the intended architecture. If that team responsibility is too broad, or not cohesive, odd software design emerges. If a team’s responsibility or the system-level architecture changes, territorial software disputes erupt and code integration conflicts multiply. All these things undermine the simplicity and rationality of the software, which slows things down, making change harder and less safe.

Additionally, the software architecture should have an affinity with the business function it’s designed to address. It should make sense, at a high level, considering the business it enables. Since team organization tends to have a significant influence on the software architecture, IT organization changes made without consideration of their implications for the software design can warp, over time, the software’s original intent. This causes the systems to evolve away from the business rather than with it. This, in turn, makes it harder and harder for IT to enable business innovation.

So . . .

None of this is easy or quickly done. Some of the statements (or even clauses!) above are deep and the subjects of whole books. Fear not. These disciplines can each be implemented incrementally to good effect. Taken together, though, they yield better results. And there are many who have walked this road with great success.

If you do these things, your IT team will deliver better software, and more of it, over time. This will make your business more nimble and able to take advantage more fully of the Art of The Possible, which is the subject of my next post.

If you are struggling with digital transformation, contact me at Ty.Beltramo@cgsadvisors.com to explore how I can assist with the disciplines outlined above and beyond.

Until next time . . .

Best,

Ty


[1] I owe much of the thinking in this article to Robert Martin’s works on clean architecture in his books and blog posts, and Neal Ford’s work on evolutionary architecture, as well as the research done by Forsgren, Humble, and Kim in their seminal book on DevOps: Accelerate.

[2] https://medium.com/better-practices/how-to-dissolve-communication-barriers-in-your-api-development-organization-3347179b4ecc and https://en.wikipedia.org/wiki/Conway%27s_law

If you would like to explore how our team can support your Board of Directors or transformation efforts, please reach out to us at info@cgsadvisors.com.

CGS Fellow, Ty Beltramo, is a hands-on technical leader with proven expertise in connected solutions and large enterprise integration projects. He has over 20 years of experience as the senior technology leader of large firms, including Fortune 50 companies, overseeing the design and development of complex enterprise systems. His understanding of these solutions is grounded in many years of hands-on development experience building connected applications and managing deployments. Under his leadership, companies have seen dramatic turnarounds, transformations, and growth.