Executive Summary
This webinar highlights essential strategies for effectively integrating data and AI within organisations. Key elements include addressing the Data Activation Gap, conducting AI Readiness Assessments, and utilising a Strategic Alignment Framework to establish a Shared Business Language. Howard Diesel also shows how integrating Data Architecture with Policy as Code enhances governance, while a Unified Governance model and an Implementation Roadmap remain critical for successful execution and organisational growth.
Webinar Details
Title: Is Your Data GenAI‑Ready for Data Managers
Date: 2026-02-05
Presenter: Howard Diesel
Meetup Group: African Data Management Community
Write-up Author: Howard Diesel
The Data Activation Gap and AI Readiness Assessment
Howard Diesel opens the webinar and introduces the Modern Data Report 2026, titled “The Data Activation Gap,” which highlights sobering statistics about organisational AI readiness. He shares his experience taking the Transform Data with Intelligence assessment, which asks critical questions that expose significant gaps between executive expectations and actual data maturity.
The assessment explores crucial questions: Are we confident in the integrity of our data structures? Is our data fragmented? Can we trust our unstructured data sources? These questions become urgent as generative AI and large language models require access to virtually every form of data—structured and unstructured alike.
An attendee acknowledges the category “not ready and not knowing what readiness means” when Howard presents it. This honest assessment reveals a common organisational challenge: executives expect data to be ready for AI while data teams struggle with foundational issues. Another attendee shares that implementing AI use cases revealed that many things expected from GenAI actually require fixing data first—especially data quality issues.
If data quality is poor, GenAI output will inherit those bad inputs, leading to hallucinations. Howard emphasises the importance of using AI to support metadata collection and data quality assessment, as manual processes cannot keep pace with real-time AI requirements.
Figure 1 Is Your Data Ready?
Figure 2 Organisational Data Readiness Diagram
Figure 3 Question on Organisational Data Readiness
Figure 4 Question on Organisational Data Readiness pt.2
Figure 5 Is Your Data AI-Ready?
Figure 6 AI is the Wheel; Architecture and Governance are the Axles
Figure 7 What AI-Ready actually means
Figure 8 Business Architecture
AI Readiness Formula and Business Architecture
Howard presents his comprehensive definition of AI readiness: the combination of business alignment, appropriate architecture, data discovery capabilities, governance frameworks, and corporate memory. The Modern Data Report identified a critical discovery gap: the difficulty people and AI have in quickly finding appropriate, trustworthy data is the biggest drain on AI initiatives.
Data management serves as “the axle between the wheels” of AI models. Without robust data management, AI models in all their forms—co-pilots, agents, applications—cannot advance effectively. The discovery of the right data takes as long as the modelling process itself, representing one of the most significant barriers to AI value realisation.
An attendee introduces the critical importance of business architecture as the foundation for AI success. They reference the City Report, which shows that different regions approach AI value differently: Europe takes a risk-based approach, prioritising data quality; the US focuses on marketing and innovation; and Asia emphasises pure innovation. The attendee then demonstrates this with a diamond-sorting facility example, showing the value chain from planning through cleansing, sorting, valuation, negotiation, and sales.
Figure 9 Example of a Business Value Chain
Figure 10 Readiness Scorecard
Figure 11 Risk Analysis
Strategic Alignment and Assessment Framework
Howard introduces his AI Readiness Assessment Framework, which features dimensions that organisations should evaluate on a maturity scale from “haven’t started” through “reactive,” “operational,” to “innovating.” Key assessment dimensions include business outcomes and KPIs clarity, information architecture maturity, semantic models, data products and contracts, governance frameworks, technology platform capabilities, and knowledge management systems.
The “Expected” column is introduced and defined as executives’ assessment of the current state. This reveals the gap between perceived and actual maturity. For example, if executives believe knowledge management is mature when it’s actually immature, that gap represents significant organisational risk.
The assessment helps identify quick wins and high-risk areas. When executive expectations far exceed actual capabilities, organisations risk committing to AI initiatives they cannot deliver. This framework provides a reality check and roadmap for honest conversations about readiness.
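The expected-versus-actual comparison can be sketched in a few lines. This is an illustrative example, not the webinar’s actual scorecard: the dimension names and the four-level scale (“haven’t started” through “innovating”) come from the talk, but the scores and the ranking logic are hypothetical.

```python
# Hypothetical sketch: scoring the gap between executive expectations and
# actual maturity across the assessment dimensions Howard describes.
# Scores index into the 0-3 maturity scale; all values are invented.
SCALE = ["haven't started", "reactive", "operational", "innovating"]

assessment = {
    # dimension: (expected by executives, actual)
    "business_outcomes_and_kpis": (3, 1),
    "information_architecture":   (2, 1),
    "semantic_models":            (2, 0),
    "governance_frameworks":      (3, 2),
    "knowledge_management":       (3, 0),
}

def readiness_gaps(scores):
    """Return dimensions sorted by expectation gap (largest risk first)."""
    gaps = {dim: expected - actual for dim, (expected, actual) in scores.items()}
    return sorted(gaps.items(), key=lambda item: item[1], reverse=True)

for dim, gap in readiness_gaps(assessment):
    expected, actual = assessment[dim]
    print(f"{dim}: expected '{SCALE[expected]}', actual '{SCALE[actual]}' (gap {gap})")
```

Sorting by gap rather than by raw score surfaces the highest-risk dimensions first, which matches the framework’s purpose: the danger lies where executives believe maturity exists and it does not.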
Howard then emphasises starting with business goals rather than AI technology. Many AI innovation hubs have data scientists explore data without clear business goals. When findings are presented, business teams often respond, “That’s interesting, but how does it help me?” Without capability maps and value streams, initiatives drift and fail to demonstrate measurable value.
Figure 12 Information Architecture
Figure 13 Data Architecture
The Semantic Layer and Shared Business Language
An attendee raises an important question about what “semantic layer” actually means, noting that different products interpret the term differently. Howard points out that data professionals reuse terms, creating confusion: “semantic layer” has been used to describe BI semantic layers (e.g., Cognos), metrics layers, caching layers, and AI context layers.
Howard clarifies that he’s discussing the business language semantic layer—shared understanding before modelling. This includes business glossaries, concept models, ontologies, and taxonomies. This upstream work is foundational to data warehouses, BI, KPI definitions, and ultimately AI implementations. He also recommends breaking definitions down into simple sentences of three to four words each; simple subject-predicate-object statements help build consensus: “A customer submits a claim,” “A claim relates to a policy,” “Risk triggers an assessment.”
In addition, ontologies can help in two ways: ensuring users use the correct terminology in prompts and validating that AI responses use the agreed-upon business language. Howard notes that getting the glossary level correct, with proper classifications and structural definitions, enables total quality to flow downstream naturally.
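As a minimal sketch of those two uses, the agreed glossary and concept-model triples can be checked against text going into or coming out of an AI system. Everything here is an assumption for illustration: the terms, triples, and the near-synonym table are invented, and a real implementation would use proper NLP rather than word matching.

```python
# Illustrative sketch (not the webinar's implementation): using an agreed
# glossary and subject-predicate-object triples to flag off-glossary terms
# in prompts or AI responses. All terms and mappings are hypothetical.
glossary_terms = {"customer", "claim", "policy", "risk", "assessment"}

concept_model = [  # triples, as in "A customer submits a claim"
    ("customer", "submits", "claim"),
    ("claim", "relates to", "policy"),
    ("risk", "triggers", "assessment"),
]

def off_glossary_terms(text):
    """Map non-glossary business terms in text to their agreed equivalents."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    # A real system would detect noun phrases with NLP; this sketch only
    # checks a hand-picked table of near-synonyms teams often substitute.
    synonyms = {"client": "customer", "contract": "policy", "case": "claim"}
    return {w: synonyms[w] for w in words if w in synonyms}

print(off_glossary_terms("The client filed a case against the contract"))
# -> {'client': 'customer', 'case': 'claim', 'contract': 'policy'} (any key order)
```

The same check runs in both directions: on the user’s prompt before it reaches the model, and on the model’s response before it reaches the user, so both sides stay within the shared business language.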
Figure 14 Data Architecture pt.2
Figure 15 Technology Architecture
Data Architecture and Policy as Code
Howard outlines the role of data architecture in AI readiness, emphasising responsibility for structures, flows, and the definition of products through contracts. Howard then highlights the emerging data contract standard that includes schema definitions, data quality rules, Service Level Agreements (SLAs), and Service Level Objectives (SLOs).
An attendee then notes that bundling data and metadata in data products enables evaluation of quality expectations before use. However, Howard raises a concern: quality statements are often defined by creators rather than consumers. Since data quality is about “fit for purpose,” consumers should define quality requirements.
Howard shares his experience writing nine comprehensive policies. When he explored how much could become policy-as-code in observability platforms, the results were stunning—policies could automatically verify that contracts include completeness dimensions, privacy classifications, shareability flags, and quality thresholds.
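A policy-as-code check of this kind can be sketched simply. This is a hedged illustration, not the observability platform Howard used: the contract layout and required-key names are assumptions, chosen to mirror the elements he lists (completeness dimensions, privacy classifications, shareability flags, quality thresholds).

```python
# Minimal policy-as-code sketch: verify a data contract (nested dict,
# e.g. parsed from YAML) contains the elements a policy requires.
# The key names below are illustrative assumptions, not a real standard.
REQUIRED_KEYS = {
    "schema",
    "quality.completeness",
    "quality.thresholds",
    "privacy.classification",
    "privacy.shareable",
}

def lookup(contract, dotted_key):
    """Walk a nested dict along a dotted path; return None if any part is missing."""
    node = contract
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def validate_contract(contract):
    """Return the sorted list of policy violations (missing required elements)."""
    return sorted(k for k in REQUIRED_KEYS if lookup(contract, k) is None)

contract = {
    "schema": {"fields": [{"name": "claim_id", "type": "string"}]},
    "quality": {"completeness": 0.98},              # thresholds not yet defined
    "privacy": {"classification": "confidential"},  # shareability flag missing
}
print(validate_contract(contract))
# -> ['privacy.shareable', 'quality.thresholds']
```

Run on every contract change in a pipeline, a check like this turns the nine written policies into continuously enforced rules rather than documents people are hoped to follow.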
This is then validated by an attendee who shares their Systems and Organisation Controls (SOC) 2 compliance journey, where platforms integrate with infrastructure to automatically validate policies, saving significant time. This raises a compelling question: Why isn’t this level of automation standard on data platforms? The evolution moves from writing policies and hoping people follow them to defining policies as automated rules with continuous validation and real-time enforcement.
Figure 16 Unified Governance
Figure 17 The Wheel & Axles Model
Figure 18 Total Quality and Metadata Foundations
Figure 19 Corporate Memory and Knowledge Management
Unified Governance, Total Quality, and Implementation Roadmap
Howard argues that organisations can no longer operate governance in silos. The traditional separation creates gaps that AI systems expose. When a chatbot engages in a conversation with a customer, that conversation is a vital record that requires integrated governance by records managers, data stewards, and AI governance teams. Howard also introduces a paradigm shift from “data quality” to “total quality”, which encompasses data quality, records quality, knowledge quality, and decision quality.
An attendee notes that decision quality is linked to business operations, but there’s often a gap between the data presented and how decisions are made. Effective decisions require high-quality data, sound techniques, risk assessment, experienced teams, and accumulated knowledge. This raises questions about decision traces—whether they capture actual policies being applied rather than documented policies. Howard connects this to knowledge management and execution traces from observability platforms.
Howard emphasises making stakeholders aware of challenges through honest assessment. Organisations should identify where they are versus where executives think they are, then prioritise quick wins while building comprehensive readiness. The webinar concludes with a sobering reality: executives often overestimate data quality, creating an expectation gap that must be managed through transparent communication and realistic roadmaps.
Figure 20 Govern-by-Wire and Automated Controls
Figure 21 The AI-Ready Maturity Journey
Figure 22 Conclusion and Call to Action
Figure 23 Unified Governance Infographic
Figure 24 What AI-ready actually means pt.2
Figure 25 Information Architecture: Shared Meaning Infographic
If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.
Additionally, if you would like to watch the edited video on our YouTube channel, please click here.
If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)
Don’t forget to join our exciting LinkedIn and Meetup data communities so you don’t miss out!