Measuring Software Engineering Efficiency: A CEO Guide to Understanding Key Metrics and Mechanisms
I often discuss engineering efficiency with curious leaders from non-technical backgrounds. Unlike other areas of business that have benefited from years of evolved practice in measuring performance, engineering performance remains, for many, a black box; software engineering is still in its infancy from a performance-measurement point of view. Recently, a private equity client asked me to deliver a presentation to a group of CEOs from companies they are invested in. This document breaks down that presentation and discusses some of these questions to help prepare CEOs to engage with their teams purposefully.
Clarify the distinction between efficiency and effectiveness.
Before diving deep with your teams on measuring engineering efficiency and effectiveness, let’s step back and discuss the two. The concepts of efficiency and effectiveness are frequently conflated, but they are distinct, each with its own advantages and disadvantages, and a clear understanding of them can enhance a technical team’s work. For me, these concepts were brought into full view when I read the following quote from former Apple CEO Gil Amelio, who said:
“Apple is like a ship with a hole in the bottom, leaking water, and my job is to get the ship pointed in the right direction.”
to which Steve Jobs responded,
“But what about the hole?”
Before analyzing the quote, let’s clarify some definitions. Efficiency means doing something correctly; effectiveness means doing the right thing. Amelio concentrates on executing the task correctly, which involves getting the ship headed in the right direction, but he neglects the crucial question of whether the job being done is the right one. His focus is therefore on productivity rather than on achieving the desired outcome. So how does this manifest itself with teams of software engineers?
- Efficiency: As a developer working alone or in a team, it can be easy to become too focused on increasing your productivity by streamlining your code-writing process, using patterns and practices, and optimizing for brevity instead of readability. While writing quality code is essential, it’s possible to take this too far and lose sight of the bigger picture.
- Effectiveness: Effective developers are pragmatic and understand the context of their tasks. This helps them make decisions about code quality and implementation details, such as when to compromise or step back and solve problems without writing new code. Removing features can even be necessary to achieve effectiveness in development.
Finding the right balance between efficiency and effectiveness is crucial. Here are a few practices that, once ingrained in the culture of your technical teams, can go a long way toward improving both.
I encourage you to ask your CTO open-ended questions about these topics:
- Code Readability and Maintainability: While efficiency can be significant, writing code that is readable, understandable, and maintainable is equally vital. Clear and well-documented code reduces the chances of introducing bugs, makes collaboration more straightforward, and ensures long-term maintainability.
- Code Reviews and Collaboration: Encourage regular code reviews and collaboration within development teams. This helps ensure that code meets quality standards, facilitates knowledge sharing, and allows for discussions on potential improvements or trade-offs.
- Understand the Problem Domain: Effective developers prioritize understanding the problem they are trying to solve. By gaining a deep understanding of the domain, they can make informed decisions about the appropriate level of complexity, the need for new features, or potential areas where compromise might be necessary.
- Continuous Learning and Improvement: Foster a continuous learning and improvement culture. Encourage developers to stay updated with industry trends, explore new technologies, and learn from their peers. This helps them make more informed decisions about the most effective and efficient approaches to development.
- Agile and Iterative Development: Adopting agile methodologies and an iterative development approach can promote efficiency and effectiveness. Frequent feedback loops, user testing, and incremental development allow for course corrections and ensure development efforts align with the desired outcomes.
In summary, while efficiency is important, it should not come at the expense of code quality, maintainability, or the ability to solve problems effectively. Striking a balance between efficiency and effectiveness involves considering the larger context, making informed decisions, prioritizing readability and maintainability, and fostering a culture of continuous improvement.
Seven Principles to Establish from Day One
Assessing how well your CTO and technical teams are measuring engineering efficiency can be challenging. As the CEO, you don’t want to micro-manage the team, but you are ultimately accountable for its actions. In my experience leading technical organizations, the most critical first steps are to earn trust quickly, collaborate closely and often with your technical leaders, rely on metrics and reports, and seek external expertise when necessary. By taking these steps, you can better understand your software engineering teams’ efficiency and identify opportunities for improvement.
While non-technical CEOs may not have deep expertise in software engineering, there are several principles I have found useful for setting up the first steps on your journey toward a better understanding of the efficiency of your software engineering teams:
1. Establish clear goals and metrics: Define clear goals and metrics for your company to measure engineering efficiency. Collaborate with your CTO to determine key performance indicators (KPIs) that align with these goals.
2. Hold regular performance check-ins: Schedule regular meetings with your CTO to review the tracked KPIs. During these meetings, ask for explanations of any fluctuations in the data and how they affect the team’s overall performance.
3. Get involved: Your involvement in the process is essential. Even if you’re not a technical expert, ask your CTO to explain the methods used to collect and analyze data so you can understand and contribute to the measurement process.
4. Seek external expertise: Consider engaging external consultants or advisors with software engineering expertise. They can perform audits or assessments of the software development process and provide an unbiased evaluation of efficiency and areas for improvement.
5. Use benchmarking: Compare your company’s engineering efficiency against industry standards to identify areas where you need to improve and where you are already excelling.
6. Engage in Cross-Department Collaboration: Encourage collaboration between software engineering teams and other departments, such as product management, marketing, or customer support. Observe how effectively the teams work together on solutions that meet the needs of other departments. Effective cross-department collaboration is often indicative of efficient software engineering processes.
7. Foster a Learning Culture: Encourage a culture of learning, innovation, and continuous improvement within the organization. Promote knowledge-sharing sessions, training programs, and industry conferences for the software engineering teams. By supporting ongoing learning and development, you create an environment where teams are motivated to improve efficiency and stay updated with the latest industry trends and best practices.
What are the main categories of technology governance related to efficiency?
Governance is an essential part of any CEO’s role. In the technology world, your teams should generally focus on three large governance categories to drive efficiencies: configuration management, infrastructure management, and frameworks for deployment.
1. Configuration management refers to managing and controlling the setup and interrelationships of an organization’s IT resources to optimize them according to its requirements (a minimal illustration of the core idea follows at the end of this section). Effective configuration management provides:
- Performance: Configuration management enhances data flow between different systems, increasing organizational efficiency and ensuring that all software tools operate at their maximum potential for seamless operation of your IT ecosystem.
- Compliance: Maintaining consistency between different software systems is crucial for effective configuration management. It ensures compliance with industry guidelines and regulations, protecting your company from data breaches and other potential risks.
- Cost reduction: Improving configuration management results in fewer system downtimes and less maintenance, leading to resource savings. Moreover, complying with regulations is easier with effective configuration management, reducing the possibility of costly legal penalties.
2. Infrastructure management. The IT infrastructure of your organization consists of technology components, such as servers, software, computers, and network switches, used to manage data and information; configuration management, by contrast, governs how these components are organized and how they interact with each other. Infrastructure plays a crucial role in ensuring business success in various contexts.
- Productivity: Optimizing your infrastructure can lead to smoother workflows across all departments. Cloud-based services are now commonly favored over physical servers due to their increased efficiency. Communication technology is crucial in boosting employee productivity and should be considered an integral part of your organization’s IT infrastructure.
- Maintenance: Optimizing your software infrastructure simplifies maintenance tasks. Poorly optimized infrastructure can require significant time and resources to maintain as your business expands, whether through additions or complete reconstruction.
3. Frameworks for deployment. Developers use frameworks like toolboxes to construct large, complex software systems more efficiently. Each framework offers a unique set of tools and is suited to specific types of software. For instance, one framework might provide a solid base for mobile development, while another might be better suited to web development. The choice of framework significantly impacts various business aspects, including the programming language used.
- Efficiency: Using the appropriate framework can significantly reduce the time required for development. When the development team uses a framework that matches the project requirements, they can use tools and processes that simplify software development. However, if the selected framework is not adaptable enough to support the project, it can be about as effective as using a screwdriver to drive a nail.
- Reliability: A well-defined framework helps produce a stable and cohesive final product with consistent code, improved functionality, and better security throughout the project. This, in turn, saves developers time that would otherwise have been spent on debugging.
- Scalability: The scalability potential of a framework varies; choosing the appropriate one can enable or limit your organization’s growth, so future scaling needs should be considered in the planning phase.
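Returning to configuration management (item 1 above): the core mechanism behind most configuration-management tooling is comparing a declared (“desired”) state against what is actually running and flagging drift. Here is a minimal sketch in Python; all configuration values are hypothetical.

```python
# Desired state, as it would be declared in version-controlled configuration.
desired = {
    "web_server": {"version": "2.4.57", "tls": "1.3", "max_connections": 500},
    "database": {"version": "15.3", "backups": "daily"},
}

# Actual state, as reported by the running systems (hypothetical values).
actual = {
    "web_server": {"version": "2.4.51", "tls": "1.2", "max_connections": 500},
    "database": {"version": "15.3", "backups": "daily"},
}

def find_drift(desired, actual):
    """Report every setting where a system deviates from its declared config."""
    drift = []
    for system, settings in desired.items():
        for key, want in settings.items():
            have = actual.get(system, {}).get(key)
            if have != want:
                drift.append((system, key, want, have))
    return drift

for system, key, want, have in find_drift(desired, actual):
    print(f"DRIFT {system}.{key}: expected {want!r}, found {have!r}")
```

Real-world tools such as Ansible, Puppet, or Terraform perform this comparison continuously and can remediate drift automatically; the point for a CEO is simply that the desired state is written down, versioned, and enforced.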
What are Engineering Team Performance Metrics?
When I recently wrote about what I call ‘full-stack product leadership’, the thesis for my thinking in this area stemmed from my experience as an executive in technology across many different disciplines, from marketing to engineering. While I am technically deep enough to be dangerous, my career started in product management, marketing, and business development. Yes, I worked with software engineers daily and even participated in hackathons to earn my technical stripes, but coding and shipping my own code was never my day job.
So, when I started running teams comprising product managers, product designers, and software engineers, a large case of imposter syndrome kicked in. Questions arose like: Are these folks going to take me seriously? How do I earn their trust? How do I justify increased investments in engineering efforts to my own leadership?
If you are a CEO without a technical background, you will likely be exposed to a whole new profile of employees, metrics, and systems that seem foreign to you. Most CEOs I meet today running SaaS businesses are deep in the SaaS metrics surrounding revenue, customer churn and acquisition, and customer lifetime value. But these metrics are just one piece of the equation.
Engineering leaders and CTOs are often asked, “You have so many engineers. Why are you not shipping more features?” and “Where is the cool new stuff?” As a CPO and a leader of product teams, I get these questions constantly. In response, I learned to give my stakeholders insight into typical engineering headcount utilization. In my experience, engineering teams spend most of their time in the following areas: KTLO (keep the lights on), OOF (out of office), tech debt, bugs, innovation testing, globalization/localization, and new features.
It’s easy for those who don’t run engineering teams to miss that, in many cases, engineers spend as much as 50% of their time on less visible work: keeping the lights on to ensure the service runs smoothly, or fixing bugs. When I look at the average time allocation across engineering teams I have run, it looks something like the chart below. Uplevel, a partner of Bodhi Venture Labs and one of the leading vendors in measuring software development productivity and ROI, provided this data. How easily could you build the same insight for your team? It isn’t easy, but vendors such as Uplevel have great tools that integrate with your existing engineering tools platform to provide this level of real-time insight today.
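As a rough sketch of what building that insight involves, assuming hypothetical ticket data already tagged by work category (tools like Uplevel integrate with issue trackers, source control, and calendars to produce this automatically):

```python
from collections import Counter

# Hypothetical tickets exported from an issue tracker: (category, hours logged).
tickets = [
    ("KTLO", 320), ("Bugs", 180), ("Tech Debt", 140),
    ("New Features", 260), ("Innovation testing", 40),
    ("Globalization/Localization", 30), ("OOF", 80),
]

# Sum hours by category and report each category's share of total time.
hours = Counter()
for category, h in tickets:
    hours[category] += h

total = sum(hours.values())
for category, h in hours.most_common():
    print(f"{category:<28}{h / total:.0%}")
```

The hard part in practice is not the arithmetic but getting work consistently tagged in the first place, which is exactly the gap vendors in this space fill.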
Why are Engineering Metrics Important?
It’s crucial for businesses, especially larger ones, to have a disciplined approach to evaluating the performance of their technology team and its efforts to drive continuous improvement. This is particularly important since technology plays a significant role in business operations, especially as the team grows. Managing by data is therefore critical for any sizable organization.
As a team grows, metrics become increasingly important. Improving something you’re not keeping track of is challenging. Engineering leaders must expand their understanding of metrics when managing a large team to oversee operations effectively.
It’s essential for everyone on the team, including engineers, to see the value in metrics. They’re not just for management; they can also help the team pinpoint areas for improvement. Make sure to use metrics consistently over time, and understand that different metrics may be relevant to different organizational groups.
For instance, some metrics may benefit engineering or scrum teams more, while others may be more pertinent to management. If a metric is primarily for control, such as time tracking, it’s crucial for the leadership team to clearly explain the rationale behind it and how it contributes to the team’s success. For example, time-tracking metrics help manage priorities and ensure there are enough staff members to handle the workload.
What Engineering Performance Metrics Are Not
Regrettably, engineering metrics cannot guarantee or predict the success of a product in the marketplace. Product success in the marketplace depends on more than just the process of creating it. While engineering metrics typically focus on the creation of the product, it is also crucial to measure adoption and usage. The Product Operations function within Product Management usually handles this responsibility. To ensure the product’s success, it is typical for Product, Business, and Engineering to align their objectives and incentives with the product’s performance in the marketplace.
Engineering metrics are a guiding tool for achieving engineering excellence but are not the only factor determining it. Metrics alone cannot provide a clear-cut assessment of the health and maturity of your engineering function. This is because certain architectural aspects, like service-to-service interactions in distributed microservice architectures, cannot be measured or quantified directly.
In service-to-service interactions, an anti-pattern is a common but ineffective or problematic approach to designing or implementing such interactions. These anti-patterns can lead to issues like poor performance, reduced reliability, increased complexity, or difficulties in maintaining and evolving the system. As a CEO, you don’t need to be deep in these anti-patterns, but it’s important that you form an appreciation for the efficiency trade-offs your CTO makes regularly.
Here are a few examples of anti-patterns in service-to-service interactions:
- Chatty Communication: This anti-pattern occurs when services excessively exchange small messages or make numerous requests for simple operations. It leads to increased network overhead, latency, and reduced performance. Instead, using a more coarse-grained communication approach is recommended, such as batching multiple requests into one or adopting an event-driven architecture.
- Monolithic Service: Building a monolithic service that tries to handle all functionalities and responsibilities can be an anti-pattern. It can result in an extensive, tightly coupled system that is difficult to scale, maintain, or modify. Instead, adopting a microservices architecture or breaking the functionality down into smaller, loosely coupled services allows for better scalability, independent deployment, and easier evolution.
- Lack of Resilience and Fault Handling: Neglecting to design for failure or to handle faults appropriately is an anti-pattern. Services should be designed to handle failures gracefully, for example by implementing retries, circuit breakers, or failover mechanisms (see the sketch after this list). Without these, a failure in one service can lead to cascading failures and an unreliable system.
- Data Inconsistency: Inconsistent data handling is another anti-pattern. If services have different views of the same data, it can lead to incorrect results or conflicts. Maintaining data consistency through proper coordination mechanisms like distributed transactions, event-driven architectures, or data replication strategies is essential.
- Tight Coupling and Dependency Hell: When services have strong dependencies on each other, it becomes difficult to modify or replace them independently. This anti-pattern can result in a “dependency hell” scenario where changing one service requires modifying numerous others. Designing services with loose coupling, clear boundaries, and well-defined APIs can help mitigate this issue.
- Lack of Monitoring and Observability: Failing to incorporate proper monitoring and observability mechanisms is an anti-pattern that hampers the ability to understand and diagnose issues in the system. Services should have proper logging, metrics, and tracing capabilities to enable effective debugging, performance analysis, and troubleshooting.
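To make the resilience bullet above concrete, here is a minimal sketch in Python of two of the mechanisms it mentions: a retry with exponential backoff and a simple circuit breaker. The `call_payment_service` function and its failure rate are hypothetical placeholders, not a real API.

```python
import random
import time

class CircuitBreaker:
    """Trips open after repeated failures so callers fail fast instead of
    hammering an unhealthy downstream service."""
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        # After the reset timeout, allow a trial call ("half-open" state).
        return time.monotonic() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

def call_with_retries(fn, breaker, max_attempts=3, base_delay=0.5):
    """Retry a flaky call with exponential backoff, guarded by the breaker."""
    for attempt in range(max_attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
            breaker.record_success()
            return result
        except ConnectionError:
            breaker.record_failure()
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...

# Hypothetical downstream call that fails 30% of the time.
def call_payment_service():
    if random.random() < 0.3:
        raise ConnectionError("payment service unavailable")
    return "ok"

print(call_with_retries(call_payment_service, CircuitBreaker()))
```

None of this detail needs to reach a CEO dashboard; what matters is knowing whether your teams build these safeguards in by default.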
It’s essential to be aware of these anti-patterns and strive to avoid them when designing service-to-service interactions. You can create more robust and efficient systems by adopting best practices and principles such as loose coupling, scalability, fault tolerance, and observability. Nonetheless, we can obtain insight into the other vital factors of a well-designed and successful product by concentrating on metrics that measure significant business and customer outcomes. The example above about anti-patterns in service-to-service interactions showed that metrics like uptime and failed transactions help identify underlying issues that cannot be measured directly.
Your team’s Engineering metrics should show patterns, tell a story, and provide a clear path to success. The table available here for download contains essential metrics covering different aspects of an Engineering team’s performance that your CTO should be very close to.
These metrics are, in my experience, the most broadly used. They are meant to be measured regularly by engineering leadership at the team level and used to guide ongoing improvements. While some metrics may occasionally be shared with business stakeholders for strategic purposes, they are primarily intended for internal monitoring and progress within the engineering team.
Common CEO Questions
CEOs commonly ask me two main questions. The first: how do I interpret what my engineering leader defines as ‘good’ performance? The answer varies depending on which metric is critical for the business. For example, a mission-critical application in the finance or transportation sectors may require high resiliency and less frequent releases than a consumer or social application.
The second question is about accountability: specifically, what actions to take if the CTO has yet to establish operational discipline and metrics. I recommend approaching the answer by considering both the business and technical realities. Engineering teams require relevant metrics that align with business goals and enable continuous technological advancement. These metrics should be available to stakeholders in other collaborating groups, including Product and Customer Support teams and upper management. Transparency is key.
Your CTO should understand that metrics are crucial for measuring engineering performance and progress at scale, especially for larger teams. Once you acknowledge the significance of objective performance measurement, your engineering team can establish an operational metrics program through a Project/Program Management Office (PMO).
Some organizations have found success by assigning Operational Leads from the PMO or an operational excellence function to work with a select number of scrum teams. The goal is to assist with navigating, implementing, understanding, and improving their operational capabilities. This includes the use of metrics to track progress over time.
If you’re a CEO, you must ask your CTO a few key questions to ensure your company’s technology strategy is aligned with your business goals. Here are some common questions to consider:
- Can you provide an update on the status of our technology infrastructure and identify areas with the potential for improvement?
- What upcoming technologies do you believe will significantly impact our business, and what steps can we take to stay ahead?
- What are the most significant technology risks facing our company, and what steps are we taking to mitigate them?
- How are we prioritizing technology initiatives, and what criteria are we using to make those decisions?
- What is our data security and privacy strategy, and how are we protecting our customers’ data?
- What are the key performance indicators (KPIs) that we are tracking to measure the success of our technology initiatives, and how are we performing against those metrics?
- What is our technology talent strategy, and how are we attracting and retaining top talent?
- What partnerships or collaborations are we pursuing to drive innovation and stay competitive?
- How are we integrating technology into our product development process, and what role does technology play in our customer experience?
- What investments are we making in research and development, and how are those investments aligned with our long-term business strategy?
To achieve growth, innovation, and success, it’s essential to maintain an open and continuous dialogue with your CTO. These critical questions are a good place to start when it comes to leveraging technology.
What metrics should you care most about as CEO?
How do we assess whether the engineering team is developing new product capabilities well? How do we evaluate them objectively? There are so many metrics rabbit holes to go down; where should I start? Many CEOs I talk to ask me these questions. The answer starts with setting transparent metrics that align with desired business outcomes and tracking whether those metrics improve over time. Think of engineering as a black box that needs to be opened before its performance can be evaluated correctly.
Fortunately, Google and the DORA research community have been asking themselves these same questions for a long time. In 2014, the DevOps Research and Assessment (DORA) team, now part of Google, began sharing critical metrics for measuring software delivery performance and quality. Based on extensive research across thousands of engineering teams, DORA identified four key metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service.
To improve clarity, I suggest sharing a concise list of 3–5 top-level metrics at the senior level, with additional detail for context where necessary. If a team is working on improving a specific area, it can temporarily report additional metrics until the required improvements are achieved, at which point it can stop reporting those metrics at the senior level. Beyond what is shared at the senior executive level (including with business leaders), the engineering team and CTO should monitor a set of critical metrics of their own. A list of these metrics for reference is available for download here.
Because the equation is constantly changing, it can take time for software engineering leaders to determine the most effective method of measuring performance. The DORA team, known for their book “Accelerate,” has since added a new metric, reliability, to their original list, which shows that even the DORA team’s research continues to evolve. As an industry, we have only been trying to measure engineering performance for the last ten years, and we have a lot more to figure out. While DORA has made a good start, we’re all still learning and iterating on how to measure performance.
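To make the four metrics concrete, here is a minimal sketch, assuming a handful of hypothetical deployment and incident records; in practice these numbers are derived from your CI/CD pipeline and incident management system rather than hand-entered.

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical records. Each deploy: (first_commit_time, deploy_time, caused_failure).
deploys = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 17), False),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 3, 12), True),
    (datetime(2024, 1, 4, 8), datetime(2024, 1, 4, 15), False),
    (datetime(2024, 1, 5, 11), datetime(2024, 1, 5, 13), False),
]
# Each incident: (started, restored).
incidents = [(datetime(2024, 1, 3, 12), datetime(2024, 1, 3, 16))]

days_in_window = 7

# 1. Deployment frequency: how often code reaches production.
deploy_frequency = len(deploys) / days_in_window

# 2. Lead time for changes: median hours from commit to deploy.
lead_time_h = median((d - c).total_seconds() / 3600 for c, d, _ in deploys)

# 3. Change failure rate: share of deploys that caused a production failure.
change_failure_rate = sum(failed for _, _, failed in deploys) / len(deploys)

# 4. Time to restore service: mean hours from incident start to recovery.
restore_h = mean((end - start).total_seconds() / 3600 for start, end in incidents)

print(f"Deploys/day: {deploy_frequency:.2f}")
print(f"Median lead time: {lead_time_h:.1f}h")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Time to restore: {restore_h:.1f}h")
```

Even this toy version makes the trade-offs visible: the one deploy with the longest lead time is also the one that caused a failure.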
What benchmarks should CTOs use to measure their team’s performance?
The DORA team has established benchmarks based on their research and analysis of the State of DevOps Reports. These benchmarks help organizations compare their own DevOps performance against industry averages and high-performing teams. The State of DevOps Report is the result of eight years of research and over 33,000 survey responses from industry professionals, highlighting the software development and DevOps practices that have been successful for teams and organizations. In the latest DORA report, more than 1,350 working professionals from various industries across the globe shared their experiences to help the industry understand the factors that lead to higher performance.
The latest State of DevOps Report published by Google Cloud’s DevOps Research and Assessment team provides benchmarks for the DORA metrics. The benchmarks are categorized based on performance levels:
- Low Performers: Organizations at the lowest level of performance, characterized by longer lead times, lower deployment frequencies, higher change failure rates, and longer mean time to restore.
- Medium Performers: Organizations at an intermediate level of performance, showing improvements in some areas but still with room for growth.
- High Performers: Organizations at the highest level of performance, demonstrating the best outcomes in terms of lead times, deployment frequencies, change failure rates, and mean time to restore.
Of all the respondents, 22% work for companies with more than 10,000 employees, while 7% work for companies with 5,000–9,999 employees. Additionally, 15% work for organizations with 2,000–4,999 employees. The survey also showed that 13% of respondents work for companies with 500–1,999 employees, 15% for companies with 100–499 employees, and another 15% for companies with 20–99 employees. This year, respondents were allowed to select “I don’t know” regarding their organization’s size, and 15% chose this option.
By comparing your company’s metrics against these benchmarks, you can gain insights into how your DevOps practices and performance compare to industry norms and high-performing organizations.
The State of DevOps Report, previously published by the DevOps Research and Assessment (DORA) team, is now published by Google Cloud. Google Cloud acquired DORA and integrated its research and expertise into DevOps practices and offerings.
To access the latest State of DevOps Report, visit Google Cloud’s State of DevOps page: https://cloud.google.com/devops/state-of-devops. The table below summarizes the benchmark standards from the latest DORA study.
Looking beyond pure DORA quality metrics
The most significant gap I observe in the DORA research is around the time-honored COGS (cost of goods sold) metric. COGS has been around for a long time in the manufacturing of physical goods, but in the software world it is largely irrelevant. This is where CTS (cost to serve) comes into play. In my experience, 20–40% of customers are unprofitable, so the challenge is understanding which customers are profitable and which aren’t. A cost-to-serve analysis can help with this and give you the information you need to adjust your pricing or invest in process improvements and optimization to reduce the costs of those unprofitable services.
CTS, or Cost to Serve, is a metric that calculates the cost of serving individual customers or customer segments within a business. It is a methodology that helps organizations understand the true cost and profitability of serving different customers, products, or channels.
The Cost to Serve concept recognizes that different customers have unique requirements and preferences and serving them may incur varying costs. Businesses can make more informed pricing, resource allocation, and customer segmentation decisions by analyzing and understanding these costs.
The Cost to Serve model considers various cost components, such as:
- Public and Private Cloud application hosting: Costs associated with hosting applications on the cloud, such as storage, bandwidth, and maintenance.
- Compute and other software infrastructure costs: Costs associated with the hardware and software needed to run the application.
- Customer support and account management costs: Costs associated with providing customer support and managing customer accounts.
- Data communications: Costs associated with data communication between the application and its users.
- Software license fees for products embedded in the application: Costs associated with licensing software products embedded in the application.
By understanding the Cost to Serve for different customers or segments, businesses can make more accurate pricing decisions, optimize their supply chain, identify opportunities to reduce costs, and potentially improve overall profitability. It allows organizations to focus on more profitable customer segments and products while addressing inefficiencies that may be driving up costs.
Implementing a Cost to Serve analysis typically involves gathering data from various departments or systems within the organization, including finance, sales, operations, and customer service. This data is then analyzed and allocated to specific customers or segments to calculate the overall cost of serving them.
It’s important to note that the specific methodology and calculations for Cost to Serve can vary across industries and organizations, as it depends on the unique cost structures and factors relevant to each business.
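As a minimal sketch of that allocation step, assuming made-up revenue and cost figures for two customer segments (real analyses pull these inputs from finance, operations, and support systems):

```python
# Hypothetical annual figures per customer segment, in USD.
segments = {
    "enterprise": {"revenue": 2_000_000, "hosting": 180_000,
                   "support": 220_000, "licenses": 60_000},
    "self_serve": {"revenue": 500_000, "hosting": 90_000,
                   "support": 30_000, "licenses": 15_000},
}

for name, s in segments.items():
    # Cost to serve: every cost incurred delivering the service to this segment.
    cts = s["hosting"] + s["support"] + s["licenses"]
    gross_margin = (s["revenue"] - cts) / s["revenue"]
    print(f"{name}: CTS ${cts:,} ({cts / s['revenue']:.0%} of revenue), "
          f"gross margin {gross_margin:.0%}")
```

With these made-up numbers, the enterprise segment runs at a 77% gross margin and the self-serve segment at 73%, which is exactly the kind of comparison that reveals which segments are quietly unprofitable.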
How does cost to serve (CTS) differ from cost of goods sold (COGS)? Traditional sources of COGS include labor, manufacturing, and other overheads: everything needed to produce a product. Think of cost to serve as the costs involved in serving up your SaaS application to your customer; it is the measurement of the costs of meeting your customer’s requirements.
For a CEO, gross margin is an essential indicator of how profitable and scalable the business is. For a SaaS company looking to raise venture capital funding, the gross margin criterion changes, and the same applies when justifying investments in existing engineering teams. A healthy SaaS business model should typically have a gross margin of about 80–90%, which means CTS should be close to 10–20% of total revenue.
SaaS companies provide a product that is a software-enabled service, mainly delivered from the cloud. Therefore, the items comprising CTS for this business model differ from those in traditional businesses’ COGS. Typical items included in CTS for a SaaS business, and not counted as part of operating expenses, are: public and private cloud application hosting, compute and other software infrastructure costs, customer support and account management costs, data communication expenses, software license fees for products embedded in the application, and website development and support costs.
How is cost to serve (CTS) different from total cost of ownership (TCO)? TCO has been a common term for most CIOs and CTOs from the 80s and 90s through today. When calculating expenses today, however, this framework that has been in the driver’s seat for decades needs updating.
TCO refers to how much software solutions cost in terms of their list price and any updates or maintenance they need along the way. The TCO phrase was introduced in the late 80s to help businesses with their financial management as they considered buying a product. The problem? The SaaS industry has shifted significantly since then.
SaaS products are now a service rather than something a customer owns outright. This new business model makes the total cost of ownership seem imprecise because SaaS vendors now view the cost of their product not as something that will end when the software is outdated but as something that can extend indefinitely through multiple subscription cycles.
When a SaaS business factors in the total cost to serve (CTS), it can account for price uplifts and other things that wouldn’t otherwise be considered. The total cost to serve is calculated by determining all the expenses needed to bring your product to market, including product development, ops/support, sales/marketing, and so on. Then you layer on whichever pricing strategy you feel will best turn a profit for your business. Naturally, pricing is a complex process, but whatever method you use, factor it into the equation.
Culture Eats Strategy for Breakfast
Although metrics are important, this quote from Peter Drucker is as relevant today as ever. Unfortunately, so is another one: “We have always done things this way.” That phrase has been used countless times across industries to describe an organization’s approach to challenges and opportunities. The DORA study also dives into the human elements of team performance. They found that an organization’s culture is foundational to its success and the well-being of its employees.
For engineering teams, culture is essential because it’s not just about tools and practices but about how people work together to develop and deliver software quickly, reliably, and safely. By understanding the factors that shape an organization’s culture, leadership can address culture-related challenges proactively. Organizations should therefore prioritize fostering a healthy culture. In 2022, DORA researched the health of organizational cultures using Westrum’s organizational typology.
The Westrum Typology, also known as Westrum Organizational Culture, is a model created by Ron Westrum to classify and evaluate communication patterns and culture in organizations. This model places emphasis on the level of collaboration, openness, and information flow within an organization, specifically in relation to safety and learning. The Westrum Typology categorizes organizational cultures into three main types:
- Pathological Culture: In a pathological culture, information flow is restricted and often distorted. There is a lack of trust, with communication being primarily used to assign blame and cover up mistakes. Decision-making is centralized, and there is little accountability or room for learning. This culture can hinder innovation and impede effective collaboration.
- Bureaucratic Culture: A bureaucratic culture is characterized by formal procedures, rules, and hierarchy. Information flow is controlled and limited to designated channels, often resulting in delays and inefficiencies. Decisions are made based on seniority and authority, rather than open collaboration or data-driven insights. This culture emphasizes compliance and adherence to established protocols.
- Generative Culture: A generative culture represents the most desirable type, according to Westrum’s model. It is characterized by open and transparent communication, trust, and collaboration. Information flows freely across the organization, empowering individuals to make decisions and take ownership of their work. Mistakes are viewed as learning opportunities, and the focus is on continuous improvement, innovation, and adaptability.
DORA also looked at other factors, such as team turnover, flexible work arrangements, organizational support, and burnout, to gain a better understanding of culture. Their findings show that the type of culture within an organization can greatly impact its performance and engineering efficiency. Specifically, organizations with a generative culture tend to perform better than those with a bureaucratic or pathological culture. Employees in generative cultures are more likely to be part of stable teams, produce high-quality work, and engage in meaningful tasks.
Critical Points for CEOs and Board Members
Here is a summary of the key takeaway points that I encourage all CEOs to keep as a handy reference:
- Simplify and shorten the list of Engineering performance metrics presented to executives and the Board of Directors. The list should be focused and easy to understand to ensure that the metrics have a noticeable impact and are not ignored.
- Understand the team’s mechanisms: The executive team, especially business heads, should understand the engineering team’s operational rhythm beyond the short list of performance metrics shared at their level. This includes familiarizing themselves with the team’s additional measures and areas of focus for improvement and evolution.
- Encourage regular sharing: Engineering should share product quality, stability, reliability, and performance metrics with Product Management to measure team performance and set incentives. Failure to do so creates misalignment and destructive conflict within the team, leading Product Management to focus more on features than quality while Engineering gives insufficient attention to ensuring ongoing product quality.
- Agree on common terminology: Agree on the specific metrics to use and the language to describe them. Additionally, it’s crucial to explain the importance of the metrics to all the executives, considering that not all of them may have a technical background.
- Focus on the leading indicators: Remember that metrics can either predict events before they occur (leading indicators) or provide insights after they have occurred (lagging indicators). Although lagging indicators are more common, leading indicators matter more for timely impact.
- Business outcomes matter: Metrics are only worth measuring if they are tied to a meaningful result. Therefore, it’s essential that metrics always tie firmly back to an important business outcome.
- Standardize the executive dashboard view: Engineering should regularly measure a wide range of metrics, but only a few should be reported to executives. Operational reviews of the complete list of metrics should occur periodically, usually between monthly and quarterly. However, at some companies, like during my time at Amazon, it was not uncommon to review these metrics weekly, especially for new services post-launch. Key executives must know this process and the engineering team’s discipline level.
Conclusion
I wrote this guide to help readers understand key metrics and mechanisms for measuring software engineering efficiency. The most important details to take away span the governance areas discussed above: compliance, cost reduction, productivity, maintenance, reliability, and scalability. Compliance ensures adherence to industry guidelines and regulations; strong configuration management reduces system downtime and maintenance costs; well-managed infrastructure produces smoother workflows and simpler maintenance; and a well-chosen deployment framework yields a stable, cohesive final product with consistent code, improved functionality, and better security, while its scalability can enable or limit an organization’s growth.
Efficiency and effectiveness are distinct concepts with their own advantages and disadvantages. Effective developers understand the context of their tasks and know when to compromise or step back to solve problems without writing new code. Ultimately, these engineering team performance metrics are crucial to your full-stack product leadership aspirations as you grow and scale.
To learn more about how Bodhi Venture Labs can help you with your full-stack product leadership aspirations, please shoot us an email.