Business Metrics & Total Quality Management for Data Managers

Executive Summary

This webinar addresses the critical intersection of artificial intelligence (AI) and data quality, emphasising the necessity of linking quality metrics to tangible business outcomes. Howard Diesel examines the limitations of total quality management when it is devoid of strategic direction and introduces the concept of driver trees to enhance clarity in performance metrics. Using the insurance sector as a practical case study, the webinar highlights the importance of inclusive metric trees, defining a North Star metric, and establishing leading indicators to drive organisational success.

Furthermore, Howard explores root cause analysis and continuous improvement, illustrating how AI techniques can be harmonised with quality concepts to foster evaluation-driven development. Overall, this comprehensive approach aims to enhance decision-making and operational efficiency by improving data quality and fostering strategic alignment.

Webinar Details

Title: Business Metrics & TQM for Data Managers
Date: 2025-11-13
Presenter: Howard Diesel
Meetup Group: African Data Management Community
Write-up Author: Howard Diesel

The Challenge of AI and Data Quality

AI systems are heavily reliant on the quality of data, records, and security protocols to function effectively. However, data scientists often find themselves spending as much as 70% of their time on data cleaning instead of utilising established data governance frameworks. This substantial time investment leads to a disconnect in AI initiatives, which often struggle to achieve success due to the lack of properly governed data provided from the outset.

The core challenge appears to stem from the tension between data scientists’ immediate needs and the more prolonged governance processes. Many data scientists may either be reluctant to wait for these governance frameworks or not feel confident enough to access data lakes independently. Consequently, addressing this disconnect is crucial for improving the effectiveness of AI initiatives, ensuring that they can leverage high-quality, well-governed data from the beginning.

The medallion architecture approach plays a crucial role in ensuring that data quality aligns with the specific needs of various models. By delivering appropriate data at the right stage, this method helps maintain the integrity and effectiveness of model outputs. Moreover, without implementing early warning systems to identify data drift or quality issues, organisations risk compromising their chances of achieving successful outcomes.

As models rely on consistent data inputs, any shift in the data landscape can negatively impact success metrics. This reality underscores the necessity of proactive quality monitoring to safeguard the performance of machine learning efforts. Ultimately, by prioritising data quality and early detection mechanisms, organisations can enhance their ability to adapt to changing conditions and achieve reliable results.
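The early-warning monitoring described above can be sketched as a simple mean-shift check on an incoming feature. This is a minimal illustration, not the webinar's prescribed method: the feature values, the standard-deviation scoring, and the alert threshold are all illustrative assumptions.

```python
# Minimal data-drift early-warning sketch: flag when a feature's recent
# mean shifts too far from its training baseline, measured in baseline
# standard deviations. Values and threshold are illustrative.
from statistics import mean, stdev

def drift_alert(baseline, recent, threshold):
    """Return (alerted, shift) where shift is the recent mean's
    distance from the baseline mean, in baseline standard deviations."""
    base_mean, base_sd = mean(baseline), stdev(baseline)
    shift = abs(mean(recent) - base_mean) / base_sd
    return shift > threshold, shift

baseline = [100, 102, 98, 101, 99, 103, 97, 100]   # training-time values
recent   = [110, 112, 108, 111, 109]               # production values

alerted, shift = drift_alert(baseline, recent, threshold=2.0)
```

In practice this single check would sit alongside richer drift statistics (population stability index, KS tests) and run per feature at each medallion layer.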

Figure 1 Business Metric Driver Tree Across Departments

Connecting Quality to Business Metrics

A clear North Star metric must guide data quality initiatives to ensure they serve a purpose and have direction. Without this focus, organisations may struggle to align their efforts with business objectives. Every use case should demonstrate not only commercial viability but also technical feasibility, making it easier to communicate the value of data quality initiatives.

When engaging with executives, it’s crucial to frame discussions around their key performance indicators rather than diving into technical details, such as LLMs or conversational AI. Business leaders are primarily concerned with tangible results; therefore, the conversation should emphasise how data quality can lead to a specific percentage improvement in metrics that matter to them. By asking questions like, “How do I deliver a 5% improvement on this metric, and what impact will it have?” we can align data strategies with business goals, ultimately driving better outcomes for the organisation.

A successful data quality program hinges on effective communication, rather than a focus on technical metrics such as accuracy or relevance. Understanding the existing friction and the performance indicators used by business leaders is crucial in this process. By demonstrating a significant improvement—such as a 20% increase in key metrics through targeted quality enhancements—data professionals can gain the necessary support and buy-in from stakeholders.

To bridge the gap between data professionals and business users, it is essential to prioritise the language of business over technical jargon. This shift requires data experts to articulate their insights in ways that resonate with organisational goals, rather than expecting business users to improve their data literacy. Ultimately, fostering this mutual understanding will lead to more effective collaborations and better outcomes across the organisation.

Total Quality Management Without Strategy

Organisations often initiate quality improvement programs but face significant challenges in demonstrating their tangible business impact. A primary issue is the lack of prioritisation, which leads to disconnected metrics that fail to align with overall strategic goals. Additionally, local optimisation occurs when individual departments enhance their own processes without considering the broader implications, resulting in bottlenecks and inefficiencies elsewhere in the organisation.

To effectively address these challenges, it is essential for organisations to connect data quality initiatives with key business metrics, focusing on dimensions such as reasonableness and completeness. By fostering collaboration across departments and ensuring that improvement efforts are aligned with overarching business objectives, organisations can create a more cohesive approach to quality improvement. Ultimately, this integrated strategy will facilitate meaningful progress and enhance the overall effectiveness of quality initiatives.

The challenges faced by a pension fund company during the onboarding of large organisations highlight the critical importance of data management in business processes. Staff had to enter ID numbers quickly to meet tight deadlines for invoicing, which inadvertently led to a staggering 60% duplication of data. Notably, 40% of these duplications involved multiple entries for the same employee, resulting in significant downstream data quality issues. This rush to solve immediate problems ultimately shifted the responsibility for data quality to others in the organisation.

As a consequence, executive scepticism towards Total Quality Management (TQM) can intensify when leadership perceives these initiatives as mere operational busywork rather than solutions with strategic value. This perception can diminish the impact of quality programs, causing them to evolve into compliance exercises that lack real significance. It is essential for organisations to prioritise data accuracy and quality management in order to maintain executive buy-in and uphold long-term strategic objectives.
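The pension-fund example above comes down to a measurable duplication rate on captured ID numbers. The sketch below shows one hedged way to quantify it; the ID values are fabricated for illustration.

```python
# Sketch of the duplicate check the pension-fund example calls for:
# what fraction of captured records share an ID number with another
# record? Record data is illustrative.
from collections import Counter

def duplication_rate(id_numbers):
    counts = Counter(id_numbers)
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / len(id_numbers)

ids = ["8001015009087", "8001015009087", "7505230123081",
       "7505230123081", "9102205678083"]
rate = duplication_rate(ids)  # 4 of the 5 records share an ID
```

Running a check like this at capture time, rather than downstream, is precisely the shift in responsibility the example argues for.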

Figure 2 TQM Without Strategy Lacks Direction

Introducing Driver Trees

Driver trees effectively bridge the gap between strategy and execution by providing clear causal relationships and actionable insights at every organisational level. At the top of the hierarchy is a North Star metric, such as Spotify’s “listening hours,” which guides the entire strategy. This metric is then broken down into strategic themes, such as customer experience, operational efficiency, and price/value, ensuring that every department understands its role in contributing to the broader objectives.

Each department identifies specific ways in which its efforts impact the North Star metric through its designated categories. This structured approach not only fosters alignment across the organisation but also enhances accountability by making it clear how each team’s actions contribute to overall success. Ultimately, driver trees facilitate a cohesive strategy execution, driving performance and fostering a culture of collaboration.

The policy retention rate is a fundamental metric for insurance companies, often seen as their guiding principle. A key component of the customer experience that greatly influences this rate is first-contact resolution; when claims are handled swiftly and efficiently on the first contact, customers are more inclined to renew their policies. By focusing on leading indicators, such as average first contact time, management can effectively predict and enhance customer satisfaction, as these metrics are within employees’ control.

This proactive approach fosters alignment across the organisation by prioritising measurable activities over lagging indicators that only reflect past performance. By emphasising the importance of leading indicators, management can create clarity and direction, ensuring that all employees understand their role in achieving company goals. Ultimately, this alignment strengthens the overall performance of the insurance company and contributes to higher policy retention rates.
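The insurance driver tree described above can be represented as a small nested structure, with the North Star at the root and leading indicators at the leaves. The metric names follow the example in the text; the exact shape and depth are illustrative assumptions.

```python
# Minimal driver-tree sketch: North Star (lagging) at the root,
# strategic drivers in the middle, leading indicators at the leaves.
tree = {
    "metric": "policy_retention_rate",              # North Star (lagging)
    "drivers": [
        {"metric": "first_contact_resolution",      # customer experience
         "drivers": [{"metric": "avg_first_contact_time"}]},   # leading
        {"metric": "claims_stp_rate",               # operational efficiency
         "drivers": [{"metric": "pct_digital_fnol"}]},         # leading
    ],
}

def leaf_indicators(node):
    """Return the leading indicators: the leaves of the driver tree."""
    if not node.get("drivers"):
        return [node["metric"]]
    return [m for child in node["drivers"] for m in leaf_indicators(child)]

indicators = leaf_indicators(tree)
```

Walking the tree from any leaf back to the root gives each team the causal story from its daily metric to the company's North Star.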

Figure 3 Driver Trees Create a Visual Roadmap

Figure 4 Why Driver Trees Solve the Strategy-Execution Gap

Figure 5 Real-World Example: Insurance Policy Retention

Real-World Application: Insurance Example

To enhance policy retention rates from 88% to 90%, an insurance company should utilise a driver tree to ensure alignment across departments. Without this strategic framework, marketing efforts may initiate a loyalty campaign while the claims department pursues faster settlement processes, leading to disjointed initiatives and suboptimal results. By establishing clear connections between policy retention, customer experience, and operational efficiency, the company can create a cohesive plan that addresses all key aspects of customer satisfaction.

Implementing a driver tree highlights the importance of specific operational elements, such as claim straight-through processing (STP), which relies on factors like the percentage of first notices of loss submitted through digital platforms, effective backlog management, and rapid initial response times. As departments coordinate their efforts based on this structured approach, the potential for improved customer interactions and retention increases significantly. Ultimately, a unified strategy driven by a clearly defined driver tree will enable the company to effectively meet its retention goals.

The hierarchical structure of metrics within an organisation is crucial for understanding performance and driving strategy. Leading indicators, such as average first-contact time, can be influenced on a daily basis or even per individual customer, allowing for an immediate impact on outcomes.

Conversely, lagging indicators, such as the straight-through processing (STP) rate, reflect past performance and require a longer timeframe and multiple contributors to observe significant changes. This distinction between leading and lagging indicators provides employees with clear actions to take today, illustrating how these actions align with broader organisational goals.

By utilising this structured approach to metrics, organisations empower their employees to recognise the direct connection between their individual contributions and the overall strategy. Leading indicators provide a proactive pathway for immediate adjustments, whereas lagging indicators offer insight into long-term performance trends. Understanding this dynamic is essential for fostering a culture of accountability and continuous improvement, ultimately enhancing both individual and organisational effectiveness.

Building Inclusive Metric Trees

Metric trees offer a streamlined evolution of traditional balanced scorecards by reducing complexity and enhancing departmental autonomy. Unlike balanced scorecards that necessitate extensive cross-domain mapping and centralised processes, metric trees empower departments to create their own metrics based on their specific contributions within the organisational hierarchy. This shift not only preserves the valuable insights derived from performance metrics but also alleviates the maintenance burden that often accompanies balanced scorecards.

By adopting a decentralised approach, metric trees ensure that departments can focus on what matters most to their operations and outcomes. This flexibility fosters greater accountability and relevance in performance measurement, allowing organisations to respond more swiftly to changes and challenges. Ultimately, metric trees represent a more efficient and effective method for tracking performance, driving value while minimising administrative overhead.

Effective interlevel communication is vital for organisational success, as it ensures that all team members are aware of each other’s activities and contributions. It is essential for the process to be inclusive across various departments, fostering an understanding of how individual roles and responsibilities align with overarching business goals.

While some organisations may use weighted averages to prioritise elements across levels, this approach can create confusion regarding the significance of different people and processes. Instead, organisations should emphasise how diverse elements collectively contribute to overall metrics.

Additionally, utilising a visual roadmap can serve as a powerful tool for translating an organisation’s strategy into a clear digital representation. This roadmap illustrates how each action within the organisation aligns with and influences business outcomes, providing transparency and clarity throughout the team. By establishing these connections, organisations can enhance collaboration and ensure that every team member understands their impact on achieving shared objectives.

Figure 6 Start with the North Star, Decompose into Drivers, and Embed Quality and Monitoring

Defining Your North Star Metric

The North Star metric serves as a crucial indicator of an organisation’s core value, reflecting its overall success and guiding strategic decisions. This metric should be a lagging outcome measure that aligns with customer value, much like Neom Health’s focus on wellness, rather than merely tracking hospital episodes. Additionally, it must correlate with revenue and carry significance across various functions within the organisation, making it a comprehensive measure of performance.

To effectively determine the North Star metric, teams should pose the critical question: “If we could only track one metric for the next three years, which would best indicate our success?” By focusing on this singular metric, businesses can streamline their efforts, ensuring all stakeholders are aligned and driving toward a common goal. Ultimately, a well-chosen North Star metric not only highlights success but also cultivates a shared vision throughout the entire organisation.

The emergence of data trust as a critical metric has fundamentally transformed data stewardship programs in organisations. This concept enables teams to assess their ability to make decisions effectively without disputes over data validity, thereby fostering a more collaborative environment.

Key operational metrics, such as straight-through processing, claim settlement time, first-contact resolution, and billing accuracy, are now prioritised, highlighting the importance of efficient workflows. Additionally, ensuring data completeness at the end of initial data collection is essential, particularly for features integral to forecasting and machine learning applications.

To achieve success, organisations must recognise that data stewardship requires a collective effort across multiple departments rather than isolated initiatives. The overarching goal, often referred to as the North Star, should guide these collaborative endeavours, prompting teams to work together to enhance data management practices. This unified approach not only improves operational efficiency but also builds a foundation of trust in the data that supports decision-making across the organisation.

Figure 7 Define Your North Star Metric

Figure 8 Identify Level 1 Categories

Figure 9 Define Level 2 Business Metrics

Figure 10 Characteristics of Good Business Metrics

Figure 11 Establish Leading Indicators

Figure 12 The Complete Driver Tree Hierarchy

Figure 13 TQM Application: Improving Claims STP Rate

Establishing Leading Indicators

Leading indicators play a crucial role in predicting business outcomes and facilitating continuous performance evaluation within teams. For example, in the context of improving claims straight-through processing rates, key leading indicators include the number of first notices of loss submitted through digital portals, the percentage of claims submitted with all required documentation at the first submission, and the average data quality scores of incoming claims. By monitoring these metrics regularly, whether daily or weekly, organisations can effectively forecast their performance on a quarterly basis.

Utilising leading indicators allows companies to identify areas for improvement and enhance overall efficiency. By tracking these critical metrics, teams can make data-driven decisions that streamline the claims process, ultimately leading to faster processing times and improved customer satisfaction. In doing so, businesses position themselves to respond proactively to trends and challenges, ensuring they remain competitive in the industry.
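The three leading indicators named above can be computed from a feed of incoming claims. The sketch below assumes illustrative claim fields (`digital_fnol`, `docs_complete`, `dq_score`) rather than any particular claims system's schema.

```python
# Compute the three leading indicators for claims STP from a batch of
# incoming claims. Field names and sample data are illustrative.
def stp_leading_indicators(claims):
    n = len(claims)
    return {
        "pct_digital_fnol": sum(c["digital_fnol"] for c in claims) / n,
        "pct_complete_first_submission":
            sum(c["docs_complete"] for c in claims) / n,
        "avg_dq_score": sum(c["dq_score"] for c in claims) / n,
    }

claims = [
    {"digital_fnol": True,  "docs_complete": True,  "dq_score": 0.9},
    {"digital_fnol": True,  "docs_complete": False, "dq_score": 0.7},
    {"digital_fnol": False, "docs_complete": True,  "dq_score": 0.8},
    {"digital_fnol": True,  "docs_complete": False, "dq_score": 0.6},
]
kpis = stp_leading_indicators(claims)
```

Tracked daily or weekly, these figures give an early read on the quarterly STP rate before it lands.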

The effectiveness of a performance indicator is crucial in determining its impact on team productivity and business outcomes. The “non-couch potato dashboard” test serves as a practical measure, assessing whether team managers are regularly engaged with these indicators to ensure they remain on track for quarterly results. This ongoing monitoring prompts necessary actions and responses, which are essential for continuous improvement.

Implementing the DMAIC approach—Define, Measure, Analyse, Improve, Control—can effectively link total quality initiatives to measurable business objectives. For instance, a targeted goal such as increasing the percentage of claims with complete information from 50% to 80% within a six-month period exemplifies this process. By measuring specific shortcomings, such as 45% of incomplete claims lacking photographs and 30% missing police reports, along with tracking a 3.2-day cycle time for gathering missing information, organisations can foster an environment of sustained improvement and accountability.
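The "Measure" step of that DMAIC cycle can be sketched as a breakdown of incomplete claims by missing item plus the gather cycle time. The sample data below is fabricated to mirror the shape of the figures quoted in the text, not taken from them.

```python
# DMAIC "Measure" sketch: break incomplete claims down by missing item
# and track the cycle time to gather the missing information.
# Sample data is illustrative.
def measure_incomplete_claims(claims):
    incomplete = [c for c in claims if c["missing"]]
    n = len(incomplete)
    return {
        "pct_complete": 1 - n / len(claims),
        "pct_missing_photos":
            sum("photos" in c["missing"] for c in incomplete) / n,
        "pct_missing_police_report":
            sum("police_report" in c["missing"] for c in incomplete) / n,
        "avg_gather_days":
            sum(c["gather_days"] for c in incomplete) / n,
    }

claims = [
    {"missing": [],                "gather_days": 0},
    {"missing": [],                "gather_days": 0},
    {"missing": ["photos"],        "gather_days": 3},
    {"missing": ["police_report"], "gather_days": 4},
]
report = measure_incomplete_claims(claims)
```

A report like this makes the "Improve" targets concrete: the dominant missing items and the cycle-time cost of chasing them.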

Figure 14 GOAL: Business Metric – FNOL – TQM Expectations

Figure 15 Wang-Strong Framework

Root Cause Analysis and Improvement

Quality issues in claims processing often stem from customers not understanding why specific documentation is necessary. This confusion results in inadequate photographs or missing information, further complicating efficient claims handling. Additionally, without real-time validation, errors go unnoticed until later in the process, causing delays and frustration for both customers and processing teams.

To address these issues, it is essential to redesign the first notice of loss flows by implementing dynamic checklists that are tailored to each claim type. Such improvements will provide clear guidance to customers on the required documentation, minimising misunderstandings. Ultimately, this proactive approach will enhance the claims experience by reducing errors and streamlining the overall workflow.

Agentic AI revolutionises the way conversational agents operate by enabling sophisticated, dynamic interaction flows. Unlike traditional static conversational models, where users must answer predetermined queries, agentic systems dynamically select specialised agents based on the current context or process state.

For instance, when dealing with fire or accident claims, a general chatbot can seamlessly transfer the conversation to a specialised photograph agent to ensure that the necessary types and quantities of images are accurately gathered. This adaptability allows for more relevant and efficient interactions.

To maintain the effectiveness of these dynamic conversational flows, continuous monitoring is essential. Regular reviews of conversation traces, conducted weekly, help identify instances when agents may pose inappropriate questions or diverge from the optimal conversational paths.

By assessing these interactions, organisations can refine their agentic AI systems, ensuring that they not only meet user needs but also improve over time. This ongoing effort to optimise agent interactions solidifies the role of agentic AI in providing responsive and context-aware support.

Figure 16 Total Quality Management Model

Figure 17 AI Task Categories

Figure 18 Evaluation-Driven-Development for Agentic AI

Connecting Quality Dimensions to Business Metrics

In today’s data-driven environment, an Excel framework is crucial for linking business performance with specific use cases, dimensions, and concepts. This is especially evident in claims processing, where key factors such as the correctness of answers, precision and recall for categorisation of claims into high-risk and low-risk categories, and entity resolution for identifying the correct member and policy play a vital role. Additionally, location verification and analysis of previous claims history are essential components that help streamline the process.

To ensure an efficient claims processing system, a chatbot must effectively capture all relevant information during the initial interaction with the first notice of loss. By doing so, it facilitates a more accurate and prompt handling of claims, ultimately leading to improved customer satisfaction and operational efficiency. In this way, the integration of a comprehensive framework can significantly enhance both the accuracy of data collection and the overall effectiveness of claims processing.

The importance of maintaining accurate transcript records cannot be overstated, particularly in situations where a claimant may dispute the advice provided. Organisations must have clear proof of the information exchanged during interactions, which makes it essential to validate critical data elements such as member identification, policy ID, and location accuracy. To guarantee overall accuracy, various data types and aspects should be scrutinised.

Utilising frameworks like the Wang and Strong model can help organisations assess data quality and AI performance by focusing on dimensions such as believability, reputation, relevancy, and appropriate data volume. While these dimensions serve as useful guidelines, they remain somewhat abstract; therefore, breaking them down into specific concepts enables organisations to achieve actionable insights and enhance decision-making processes.

Figure 19 Understanding Bias and Variance Tradeoff

Figure 20 AI Implementation Framework for Total Quality Management

AI Techniques and Quality Concepts

Various AI techniques necessitate distinct evaluation quality concepts tailored to their specific functions. These techniques can be categorised into several groups, including generation, regression, categorisation, classification, clustering, ranking, and decision-making. Each group serves a unique purpose: for instance, generation focuses on creating new content, while regression is concerned with predicting continuous values.

Furthermore, classification predicts discrete labels, categorisation assigns items to a taxonomy, clustering identifies groups, and ranking determines the relative importance of items. Each of these techniques also comes with its own specific error rates and evaluation metrics, which are integral to assessing their performance.

Understanding these categories and their corresponding evaluation methods is crucial for effectively applying AI in various contexts. By employing the appropriate quality concepts for each technique, practitioners can more accurately measure outcomes and refine their models.

This tailored approach ensures that the strengths and weaknesses of each AI method are adequately assessed, leading to improved performance and more reliable results in real-world applications. Ultimately, recognising the diversity among AI techniques and their evaluation needs contributes to more effective AI implementations across different fields.

The integration of large language models into various applications necessitates a comprehensive framework that addresses multiple quality aspects, such as hallucination, toxicity, and dataset bias. By mapping relevant concepts to diverse technique types—without specifying particular algorithms, such as one-class SVMs—this framework aims to enhance the overall quality management process. It encompasses various dimensions, including AI quality, data quality, cybersecurity, accessibility, and protection, thereby presenting a unified approach rather than treating each element as an isolated aspect of quality.

Every technique employed within this framework carries an inherent error rate that must be meticulously assessed. Utilising metrics such as precision, recall, and F1 scores enables a thorough evaluation of performance and effectiveness, ensuring accountability in AI outputs. Ultimately, this holistic system quality framework promotes a balanced and rigorous evaluation of technology, paving the way for improvements and innovations in the field.
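The precision, recall, and F1 metrics mentioned above can be computed directly for the high-risk/low-risk claim categorisation used earlier. The labels below are illustrative.

```python
# Precision, recall, and F1 for a binary high-risk/low-risk claim
# classifier. Labels are illustrative sample data.
def precision_recall_f1(y_true, y_pred, positive="high"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)   # of claims flagged high-risk, how many were
    recall = tp / (tp + fn)      # of truly high-risk claims, how many caught
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = ["high", "high", "low", "low", "high", "low"]
y_pred = ["high", "low",  "low", "high", "high", "low"]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

For an insurer, recall on the high-risk class is usually the metric with the direct business consequence: a missed high-risk claim is a fraud or loss exposure.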

Evaluation-Driven Development

The eval-driven approach is a comprehensive strategy that integrates evaluation metrics into observability platforms and actively traces agent actions. By incorporating human feedback and performing real-time evaluations on these traces during production, this approach ensures a continuous cycle of performance monitoring and improvement. This multifaceted process allows organisations to adapt swiftly to challenges, improving overall system reliability.

When evaluations reveal deficiencies, a range of improvement strategies can be implemented. Options such as prompt engineering to minimise hallucination, fine-tuning parameters, enhancing application topology, implementing guardrails, model routing, or retraining can all contribute to better outcomes. Addressing these issues requires attention to multiple elements, signalling that a holistic approach is essential for sustained effectiveness and innovation.
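A minimal sketch of that eval-driven loop is scoring production traces against reference answers and flagging the ones below a threshold for human review. The token-overlap scorer and the threshold here are crude placeholder assumptions standing in for a real evaluation suite.

```python
# Eval-driven development sketch: score agent traces against reference
# answers; traces below threshold are flagged for review. The scoring
# rule and threshold are illustrative placeholders.
def evaluate_traces(traces, threshold=0.8):
    flagged = []
    for trace in traces:
        # naive token-overlap score as a stand-in evaluator
        ref = set(trace["reference"].lower().split())
        got = set(trace["answer"].lower().split())
        score = len(ref & got) / len(ref)
        if score < threshold:
            flagged.append((trace["id"], score))
    return flagged

traces = [
    {"id": "t1", "reference": "claim approved pending photos",
     "answer": "claim approved pending photos"},
    {"id": "t2", "reference": "police report required",
     "answer": "no documents required"},
]
to_review = evaluate_traces(traces)
```

In a production observability platform the scorer would be an LLM judge or task-specific metric, but the loop shape — trace, score, flag, improve — is the same.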

The bias-variance trade-off highlights a fundamental challenge in model training: undertrained models exhibit high bias, failing to recognise important patterns, while overtrained models suffer from high variance, memorising noise instead of generalising from the data. The key is to find a balance point where training effectively reduces error without leading to overfitting. Achieving this balance requires careful model validation to identify the optimal point for performance.
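The trade-off can be seen on toy data (y ≈ 2x plus noise): a constant model underfits, a nearest-neighbour memoriser fits the training points perfectly yet generalises worse than a simple linear fit. The data and the three models are illustrative assumptions, not anything from the webinar.

```python
# Bias-variance sketch on toy data. The memoriser has zero training
# error but higher validation error than the linear fit; the constant
# model has high bias everywhere. Data and models are illustrative.
def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

train = {1: 2.1, 2: 3.9, 3: 6.2, 4: 7.8}       # y ≈ 2x plus noise
valid = {1.5: 3.0, 2.5: 5.1, 3.5: 7.0}

mean_y  = sum(train.values()) / len(train)                        # high bias
nearest = lambda x: train[min(train, key=lambda t: abs(t - x))]   # high variance
linear  = lambda x: 2 * x                                         # balanced

train_err_memo = mse([nearest(x) for x in train], list(train.values()))
valid_errs = {
    "mean":    mse([mean_y] * len(valid), list(valid.values())),
    "nearest": mse([nearest(x) for x in valid], list(valid.values())),
    "linear":  mse([linear(x) for x in valid], list(valid.values())),
}
```

The memoriser's zero training error paired with its worse validation error is exactly the overfitting signature that model validation is meant to catch.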

Moreover, unlike traditional BI reports, which can rely on predetermined timelines, AI models necessitate ongoing training as conditions evolve and new data emerges. Organisations must adopt a mindset that views AI as a dynamic system, requiring continuous monitoring, adjustment, and maintenance rather than a one-time project. This shift in perspective, along with the associated costs, must be incorporated into ROI calculations from the outset to ensure the long-term success of AI initiatives.

If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.

Additionally, if you would like to watch the edited video on our YouTube channel, please click here.

If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)

Don’t forget to join our exciting LinkedIn and Meetup data communities so you don’t miss out!
