Executive Summary
This webinar highlights key concepts related to enhancing organisational quality and performance through various frameworks and methodologies. Howard Diesel begins with a recap of previous sessions, emphasising the importance of Metric Driver Trees in linking business metrics to Total Quality Management. He draws on the Wang-Strong Framework, which explores the quality dimensions essential to operational excellence. Implementation strategies focus on integrating MLOps and observability to ensure quality outcomes. Additionally, the webinar covers the significance of the People Dimension, particularly the Return on Experience, and addresses the critical factors of trust, confidence, and shadow processes in business operations. Howard also examines the intersection of customer experience with AI performance metrics while addressing security risks and the principles of responsible AI to foster a safe and efficient operational environment.
Webinar Details
Title: Business Metrics & TQM for Data Professionals
Date: 2025-11-27
Presenter: Howard Diesel
Meetup Group: African Data Management Community
Write-up Author: Howard Diesel
Introduction and Session Recap
Howard Diesel opens the webinar and shares that it will offer a thorough recap of a three-week exploration of business metrics and Total Quality Management (TQM), emphasising the vital connection between AI initiatives and measurable business outcomes. Additionally, Howard shared that he would review essential concepts, including aligning AI with business metrics, identifying North Star metrics, and differentiating between leading and lagging indicators. This foundational knowledge is crucial for data professionals tasked with demonstrating the tangible value derived from their AI and data projects.
Furthermore, Howard underscores that the successful implementation of AI transcends mere technological advancements; it requires establishing a comprehensive framework that links every technical decision to business impact and organisational strategy. By illuminating these connections, Howard aims to equip participants with the insights needed to effectively drive AI initiatives that yield meaningful results for their organisations.
Figure 1 AI Strategy: Driving Business Value through People and Technology
Figure 2 AI Business Value Octagon
Figure 3 The Complete Driver Tree Hierarchy
Figure 4 FNOL TQM Expectations
Understanding Metric Driver Trees
The metric driver tree concept is an essential framework for understanding an insurance company’s performance, with the policy retention rate serving as the North Star metric. This model distinguishes it from customer lifetime value, as life insurance policies end upon a claim. The driver tree is organised into strategic themes such as customer experience, operational efficiency, and price and value, which then break down into specific metrics, including first contact resolution, straight-through processing, and billing accuracy.
A critical aspect of this framework is the distinction between business metrics, which represent outcomes, and leading indicators, which are controllable activities. For example, the percentage of complete information in chatbot interactions serves as a leading indicator that can be actively managed by departments, enabling them to improve performance rather than waiting for updates on straight-through processing rates. This proactive approach allows organisations to better navigate their operational challenges and enhance overall customer satisfaction.
Figure 5 Wang-Strong Framework
Figure 6 UC. FNOL: Data, Records, Development and Logs
Figure 7 TQM Concepts
Figure 8 AI Task Categories
Figure 9 TQM in AI: From Metric Driver Tree to MLOps Validation
Figure 10 Beyond Training: Validating TQM Compliance in Live AI Systems
Figure 11 Open Telemetry: Standardising AI Observability for TQM Validation
Figure 12 Operationalising TQM: OTEL Metrics for Validation
Figure 13 Automated Responses: Ensuring Continuous Adherence to TQM
Linking Business Metrics to Total Quality Management
The Wang-Strong framework facilitates a vital connection between business metrics and Total Quality Management (TQM) principles by shifting the focus from “data quality” to “total quality.” This shift acknowledges that quality encompasses not only data but also records, processes, and AI outputs. For example, in a use case such as First Notice of Loss (FNOL), integrating entity resolution, location data, policy information, and prior claims highlights the interconnectedness of these quality metrics.
Techniques in artificial intelligence are evaluated using accuracy, precision, and F1 score, underscoring the importance of comprehensive metrics. This holistic approach emphasises the diminishing boundaries between structured and unstructured data, promoting a unified framework for assessing quality across diverse data types. By preventing governance silos, organisations can create cohesive strategies that enhance overall quality management. Ultimately, integrating embedding databases and graph databases into this framework further enriches the understanding of quality in business contexts, leading to improved decision-making and operational efficiency.
Figure 14 AI Business Value Octagon
Figure 15 Our AI Strategy is Built on Three Pillars
Figure 16 AI Drives Value Across Eight Critical Dimensions
Figure 17 Return on Investment and Return on Experience
Figure 18 Balancing Financial Return (ROI) with Experience (ROE)
Figure 19 Empowerment and Innovation
Figure 20 Empowering Employees is the Engine of AI Adoption
The Wang-Strong Framework and Quality Dimensions
The Wang-Strong framework provides a comprehensive approach to assessing data quality across various types, highlighting the critical interplay between traditional and AI-driven metrics. In this framework, dark-coloured dimensions symbolise AI metrics while lighter ones represent conventional data dimensions.
It transitions from abstract concepts to concrete measurements, clarifying that completeness can refer to different forms, such as row, column, or table completeness. Furthermore, the instructor elucidates the relationship between accuracy and data precision, underscoring the importance of cross-dimension intersections.
In the context of First Notice of Loss (FNOL) automation, this framework contextualises the evaluation of inputs, AI development—such as chatbots, retrieval-augmented generation (RAG) models, large language models (LLMs), and claim forecasting—and the corresponding outputs. Each aspect of this process necessitates specific quality measurements, exemplified by the accurate identification of members by chatbots to avoid unnecessary processing delays and ensure precise policy references. Ultimately, this structured assessment enhances data integrity and operational efficiency in FNOL automation.
Figure 21 AI Must Enhance Customer Experience to Drive Growth
Figure 22 AI Must Enhance Customer Experience to Drive Growth
Figure 23 Proactive Risk Mitigation is Essential for Long-Term AI Sustainability
Implementing Quality Through MLOps and Observability
The implementation phase is crucial as it bridges the foundational principles of Total Quality Management (TQM) with MLOps and observability platforms. This phase utilises evaluation metrics to assess when machine learning models are ready for production and when they need to be rolled back due to inaccuracies. It addresses several real-time challenges, including adversarial attacks that may lead models to generate hallucinated or toxic responses. Key concepts such as data drift, concept drift, and operational failures are integral to understanding the complexities of model performance in diverse environments.
To effectively tackle these challenges, open telemetry emerges as a vital solution, ensuring that every AI platform follows standardised practices to gather metrics, traces, and logs. A practical example of this is GenSpark AI’s use of traces and project IDs to identify and resolve looping issues that drained system credits. By transforming quality expectations into observable metrics for alerting, organisations can automate processes such as model rollback, pipeline retraining, and triggering human intervention via high-priority tickets. This comprehensive approach enhances the reliability and effectiveness of machine learning systems.
Figure 24 Proactive Risk Mitigation is Essential for Long-Term AI Sustainability
Figure 25 Next Steps: Integrating Value, People, and AI
Figure 26 Conclusion: AI is a Strategic Transformation
Figure 27 Conclusion: AI is a Strategic Transformation
Figure 28 Question and Discussion
The People Dimension – Return on Experience
The critical role of users in the success of AI systems is often underestimated, yet it is a cornerstone of effective implementation. Howard identifies three pillars of AI success: business value, adoption measured by Return on Experience, and responsible AI practices.
Central to this discussion is the AI Business Value Octagon, which encompasses financial, strategic, human value, and risk-mitigation factors. A significant insight is that an exclusive focus on ROI can lead to the accumulation of technical debt, while prioritising user experience without financial considerations may yield projects with minimal practical outcomes. Striking a balance among these elements is crucial for sustainable AI integration.
To achieve successful AI adoption, organisations must prioritise reducing drudgery by leveraging AI to minimise tedious tasks and empower their workforce. This approach not only enhances employee satisfaction but also drives the effectiveness of AI implementations.
Measures of Return on Experience, such as Employee Net Promoter Score, customer effort score, total confidence, and zero-touch resolution rates, provide valuable insights into user experience and engagement. Ultimately, the goal is to create an environment where AI enhances productivity and innovation, ensuring long-term success and user adoption.
Figure 29 What is Responsible AI under the EU AI Act?
Figure 30 AI Literacy – Article 4 of the EU AI Act
Figure 31 Connecting AI Literacy to Responsible AI
Figure 32 Definition of Literacy
Figure 33 Literacy, Savviness, Fluency and Acumen
Figure 34 Evolution of Data and AI Maturity Model
Trust, Confidence, and Shadow Processes
The exploration of trust metrics in AI-generated stock recommendations highlights the critical relationship between user verification and inventory management. Salespeople face challenges verifying the accuracy of AI outputs, particularly when handling unusual orders that could significantly affect stock levels. Metrics such as the total confidence score are essential in determining whether users genuinely trust these recommendations or feel compelled to establish “shadow processes” for cross-checking. Additionally, the chapter underscores the importance of addressing all warnings, as they indicate real issues that must not be overlooked.
Furthermore, integrating frameworks such as SHAP and LIME provides valuable explainability for AI systems, enhancing users’ understanding and confidence in the output. The acquisition of skills, velocity, and employees’ willingness to recommend these tools to their colleagues directly influence the practical deployment of AI in the workplace. Ultimately, these metrics are vital for determining the success or failure of AI systems in real-world applications, underscoring the need to foster user trust.
Customer Experience and AI Performance Metrics
Measuring AI effectiveness from the customer perspective involves a range of critical metrics that assess interaction quality and operational efficiency. Key indicators include the zero-touch resolution rate, which tracks how often AI completes interactions without human intervention, and the customer effort score, which gauges the ease of interactions. Additionally, personalisation accuracy is vital, evaluating the system’s ability to recognise customer names, policy details, and claim history while also considering cross-channel engagement. Together, these metrics emphasise the importance of balancing speed with ease to minimise customer friction.
Operational metrics, such as average handle time and customer wait times, are essential in understanding how well AI systems perform. When these operational metrics exceed established thresholds, it signals that MLOps must intervene to maintain service quality. The NIST risk management framework further enhances this evaluation by incorporating quality dimensions like hallucination rates, human-in-the-loop correction rates, and explainability coverage. These additions ensure that the AI system provides clear and comprehensive explanations when human oversight is required, ultimately improving the overall customer experience.
Figure 35 The Complete Capability Journey
Figure 36 Business Value and ROI of Data and AI Acumen
Figure 37 Business Value and ROI of Data and AI Acumen
Figure 38 AI Progression Phases
Security Risks and Responsible AI
A critical security concern related to AI systems is prompt injection, which enables users to manipulate systems to extract sensitive documents and expose crucial policies, such as automatic approval thresholds. Additionally, adversaries can target documents behind paywalls and exploit engagement processes, leading to significant data breaches.
Other risks include data leakage through parameter manipulation, where users might ask “what if” questions to adjust claim estimates, and the rise of shadow AI use, where employees resort to unapproved tools like ChatGPT, potentially jeopardising sensitive data.
A notable concern is highlighted by an HR professional who uploaded employee salary lists to ChatGPT for analysis, showcasing the potential for data exposure. Organisations are urged to create systems that link return on investment (ROI) to total quality management (TQM) and operational observability. This ensures that AI models are deployed based on quality metrics and thorough assessments rather than hasty implementation, ultimately fostering a safer, more responsible approach to integrating AI technologies.
Figure 39 The Complete Capability Journey
- Executive Summary
- Introduction and Session Recap
- Understanding Metric Driver Trees
- Linking Business Metrics to Total Quality Management
- The Wang-Strong Framework and Quality Dimensions
- Implementing Quality Through MLOps and Observability
- The People Dimension – Return on Experience
- Trust, Confidence, and Shadow Processes
- Customer Experience and AI Performance Metrics
- Security Risks and Responsible AI
If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.
Additionally, if you would like to watch the edited video on our YouTube please click here.
If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)
Don’t forget to join our exciting LinkedIn and Meetup data communities not to miss out!