Executive Summary
This webinar outlines a comprehensive framework for understanding and innovating data products, emphasising the transition from crisis to opportunity. Mario Meir-Huber defines a data product and explores its value delivery, highlighting the critical balance between business and technology dimensions.
The assessment of the financial impact is elaborated through four value dimensions and a detailed cost structure analysis, along with methods for calculating ROI and for strategic positioning. The Impact-Feasibility Matrix is introduced as a prioritisation tool, while the GAP Framework addresses governance, architecture, and personnel aspects essential for success.
Further insights into the types of data products from both system and consumer perspectives underscore the necessity for pragmatic governance. Architectural principles focus on interoperability and reliability, alongside a dedicated examination of roles and change management. Mario then concludes the webinar with a look at the data product flow, detailing the phases of data retrieval, integration, and ownership, thereby providing a holistic view of effective data product management.
Webinar Details
Title: Designing Data Products Book Launch with Mario Meir-Huber
Date: 2025-11-24
Presenter: Mario Meir-Huber
Meetup Group: DAMA SA User Group Meeting
Write-up Author: Howard Diesel
Introduction: From Crisis to Data Product Innovation
Mario Meir-Huber opens the webinar by sharing the compelling origin story of the book “Designing Data Products Volume One.” The idea came to him during his honeymoon in Seville, Spain: on the first day of the trip, he received an urgent call from his CTO regarding a broken data pipeline that jeopardised quarterly reporting for Deutsche Telekom’s Austrian entity, threatening billions in shareholder value. This critical incident revealed a significant problem: thousands of undocumented data pipelines operating without proper governance.
This experience not only highlighted the urgent need for better data management but also crystallised Mario’s vision of treating data as products rather than ad-hoc projects. Inspired by the challenges he faced, he aimed to provide a structured approach to data governance and quality in his writing. Ultimately, a work crisis Mario experienced while on his honeymoon became a powerful impetus to develop a framework that encourages organisations to prioritise the strategic management of their data assets.
Figure 1 Demystifying Data Products
Figure 2 Mario’s Honeymoon in Spain
Figure 3 Agenda
What is a Data Product? Understanding Value Delivery
The essence of a data product lies in its ability to deliver value to consumers, much like physical products that require refinement to achieve their worth. Just as a high-end Mercedes car or a simple brake pad undergoes a transformation to enhance its usability and reliability, a data product must also be crafted with a focus on quality. This emphasis on quality is crucial; without it, a data product fails to provide genuine value. Key attributes such as return on investment, reliability, usability, and interoperability are essential to ensuring a data product meets its users’ needs.
The simplicity of this definition belies its power, encapsulating the critical characteristics of a successful data product. It stresses the importance of quality assurance, financial returns, trustworthiness, ease of use, and seamless integration with other systems. By adhering to these principles, data products can not only meet but exceed consumer expectations, thereby enhancing satisfaction and loyalty. Ultimately, the ability to deliver true value is what distinguishes a data product in a competitive landscape.
Figure 4 What are Data Products?
Figure 5 Is this a product?
Figure 6 And this?
Figure 7 And this?
Figure 8 Definition of a Data Product
Figure 9 “There is no Value without Quality”
Figure 10 “There is no Value without ROI”
Figure 11 “There is no Value without Reliability”
Figure 12 “There is no Value without Usability”
Figure 13 “There is no Value without Interoperability”
Figure 14 “There is no Value without Impact”
The Business-Technology Balance: Two Critical Dimensions
Effective data products necessitate a harmonious balance between business objectives and technical foundations. As Mario observed, a prevalent conflict arises within organisations: business teams often feel that technology fails to understand their requirements, while tech teams perceive that business lacks an appreciation for the complexities of technology. This disconnect is primarily driven by the business’s focus on value delivery, which frequently overlooks crucial technical needs, while the technology side prioritises infrastructure, sometimes overlooking the business ramifications of its decisions.
To achieve success, it is essential to maintain continuous alignment between both business and technical dimensions throughout the product lifecycle. Building data products exclusively for business purposes can lead to solutions that disregard governance and ultimately rely on Excel; conversely, developing systems solely for technical reasons often results in outputs that lack tangible business value. Hence, both teams must collaborate effectively to deliver solutions that add value while adhering to the required technical standards.
Figure 15 Two Dimensions of Data Products
Figure 16 Balancing Both Dimensions
Financial Impact: The Four Value Dimensions
Data products create financial value through four primary mechanisms: cost reduction, revenue increase, new revenue streams, and efficiency gains. Cost reduction enhances operational efficiency and optimises runtime, while revenue growth is achieved through stronger customer retention, improved loyalty, and higher customer lifetime value; a prime example is Amazon, which excels at customer service that keeps buyers coming back. Additionally, companies can generate new revenue streams by developing marketable data-driven products, while efficiency gains arise from improved processes and faster operations.
When evaluating the value of data products, it is crucial to consider both strategic value and traditional financial metrics. Some initiatives may focus on achieving regulatory compliance or enhancing competitive positioning, which may not yield immediate financial returns but are nonetheless vital for long-term organisational success. Overall, understanding these mechanisms and their broader impact contributes significantly to a company’s growth and sustainability.
Figure 17 The Business Dimension
Figure 18 What “Value” Means for a Data Product
Figure 19 Four Financial Impact Dimensions
Understanding the Cost Structure of Data Products
Data product costs can be categorised into two main types: one-time expenses and continuous operational costs. One-time expenses primarily encompass development, integration, and regulatory or legal compliance. Development costs involve engineering efforts before the initial deployment, while integration costs refer to the often-overlooked task of connecting data products to existing CRM or operational systems. Additionally, multinational companies must account for compliance with various regulations such as GDPR, the AI Act, and IFRS, which can add substantial costs.
In contrast, continuous operational costs are crucial for the ongoing success of data products. These include ongoing development, as data products often require constant refinement, and operations and maintenance, which ensure that systems reliably support financial and executive reporting. Other continuous costs include technology infrastructure, such as cloud instances and platforms, as well as necessary training programs to equip teams to use them effectively. Organisations often underestimate these expenses, especially integration and operational costs, which can lead to budget shortfalls and hinder the overall success of their data initiatives.
Figure 20 One-Time Costs (CAPEX) and Continuous Costs (OPEX)
Figure 21 Breaking Down One-Time Costs
Figure 22 Continuous Cost Categories
Figure 23 Non-Financial Impact Matters Too
Figure 24 It all Starts with Value
ROI Calculation and Strategic Positioning
Understanding the complexities of calculating return on investment (ROI) is essential for accurate financial decision-making. The traditional ROI formula — (potential revenues minus investment costs) divided by investment costs — seems straightforward, but it can become complicated when estimating revenues.
Executives may introduce biases by inflating potential benefits to secure project approval or by downplaying them to avoid profit-and-loss accountability. To address these biases, Mario proposes collaborating with controlling departments, such as finance teams, which are equipped to provide neutral, data-driven assessments of proposed benefits.
In addition to financial considerations, it’s important to acknowledge the broader impacts of investment decisions. Factors such as strategic positioning, legal compliance, and organisational enablement play crucial roles in justifying investments that may not yield immediate monetary returns.
For example, mandatory standards such as IFRS 17 in the insurance industry underscore the need for compliance, while building internal capabilities can enhance market competitiveness. By considering these non-financial aspects, organisations can make more holistic investment choices that align with their long-term goals.
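The traditional ROI formula described above can be sketched as a small helper. This is a minimal illustration; the figures used below are invented for the example and do not come from the webinar.

```python
def roi(potential_revenue: float, investment_cost: float) -> float:
    """Classic ROI: (potential revenues - investment costs) / investment costs."""
    if investment_cost <= 0:
        raise ValueError("investment cost must be positive")
    return (potential_revenue - investment_cost) / investment_cost

# Illustrative figures only: a 1.5M investment expected to return 4.5M.
print(f"ROI: {roi(4_500_000, 1_500_000):.0%}")  # ROI: 200%
```

As the section notes, the formula itself is the easy part; the revenue estimate feeding it is where bias creeps in, which is why Mario suggests having controlling departments validate the inputs.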
Figure 25 Crafting a Strategic Roadmap
The Impact-Feasibility Matrix: Prioritisation Framework
Organisations facing limited budgets and competing demands require an effective prioritisation method to maximise their resources. To address this need, Mario developed a matrix that evaluates initiatives on two key criteria: business impact, which encompasses both financial and strategic value, and technical feasibility, referring to the ease of implementation. The matrix categorises projects into four distinct quadrants. “Low-hanging fruits” offer high impact and high feasibility and should be prioritised first. “Challengers” present high impact but low feasibility, necessitating infrastructure investments before they can be executed. “Playgrounds” have low impact but high feasibility and are ideal for training junior staff with minimal risk. “Underworld” represents low impact and low feasibility and should be avoided or delegated to research departments.
By employing this systematic framework, Mario successfully demonstrated how prioritising data governance initiatives could significantly enhance infrastructure improvements, transforming projects from the “Challenger” quadrant to “Low-hanging fruit” status. This approach not only helped secure necessary funding but also illustrated a clear path to unlocking tangible business value. Ultimately, Mario’s matrix serves as a vital tool for organisations looking to make informed decisions while navigating resource constraints and competing priorities.
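The quadrant logic lends itself to a simple sketch. The normalised scores and the 0.5 threshold below are assumptions for illustration, not values from the book.

```python
def quadrant(impact: float, feasibility: float, threshold: float = 0.5) -> str:
    """Place an initiative on the Impact-Feasibility Matrix.

    impact and feasibility are normalised scores in [0, 1];
    the threshold is an assumed cut-off for "high" vs "low".
    """
    high_impact = impact >= threshold
    high_feasibility = feasibility >= threshold
    if high_impact and high_feasibility:
        return "Low-hanging fruit"
    if high_impact:
        return "Challenger"
    if high_feasibility:
        return "Playground"
    return "Underworld"

print(quadrant(0.9, 0.8))  # Low-hanging fruit
print(quadrant(0.9, 0.2))  # Challenger
```

Mario's example of infrastructure investment moving a project from “Challenger” to “Low-hanging fruit” corresponds here to raising the feasibility score above the threshold while impact stays constant.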
Figure 26 Crafting a Strategic Roadmap
Figure 27 The Challenge of Prioritization
Figure 28 Impact vs. Technical Feasibility Matrix
Figure 29 Aligning Value with Feasibility
Figure 30 Challengers
Figure 31 Playgrounds
Figure 32 Underworld
Figure 33 Low-Hanging Fruits
Figure 34 “How to Measure Feasibility?”
Figure 35 Feasibility Scoring Example
Figure 36 Creating a Strategic Roadmap
Measuring Technical Feasibility: The GAP Preview
The technical feasibility assessment, as outlined in Mario’s GAP framework, is built upon three critical dimensions: Governance, Architecture, and People. Governance encompasses data quality, policies, standards, and contracts, while Architecture focuses on technical platforms, pipelines, and integrations.
The People dimension emphasises skills, ownership, and business demand. For instance, a customer service dashboard is highly feasible when it leverages well-governed source data, reusable pipelines, clear ownership, and strong business demand.
In contrast, a supply risk API is not feasible due to compliance gaps, the need for new integrations, and a lack of necessary expertise. By quantifying these feasibility scores, organisations can develop precise business cases—illustrating that an investment of $1.5 million in data governance could unlock $15 million in business value over five years. This approach not only highlights the importance of assessing technical feasibility but also underscores the tangible benefits that can result from making informed investments.
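A feasibility score across the three GAP dimensions might be computed as a weighted average. The weights, scales, and example scores below are hypothetical; the book may weight the dimensions differently.

```python
# Hypothetical weights over the three GAP dimensions (must sum to 1.0).
GAP_WEIGHTS = {"governance": 0.40, "architecture": 0.35, "people": 0.25}

def feasibility_score(scores: dict) -> float:
    """Weighted GAP feasibility score; each dimension is rated 0-10."""
    return sum(GAP_WEIGHTS[dim] * scores[dim] for dim in GAP_WEIGHTS)

# Illustrative ratings echoing the two examples in this section.
dashboard = {"governance": 8, "architecture": 9, "people": 8}   # well-governed data, reusable pipelines
supply_api = {"governance": 2, "architecture": 3, "people": 2}  # compliance gaps, new integrations

print(feasibility_score(dashboard))   # high score -> feasible now
print(feasibility_score(supply_api))  # low score -> invest first
```

Quantified scores like these are what turn the feasibility assessment into a comparable business case across candidate products.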
Figure 37 The GAP
Figure 38 Types of Data Products
The GAP Framework: Governance, Architecture, and People
The GAP framework outlines Mario’s holistic approach to developing data products, emphasising the critical balance among governance, architecture, and people. Governance focuses on implementing pragmatic data quality standards, naming conventions, and compliance measures, all integrated into the product development process to avoid bureaucratic bottlenecks. Architecture ensures that systems are interoperable, composable, secure, and reliable, and recommends centralised platforms to mitigate performance issues typically associated with cross-cloud data movement, even when adhering to data mesh principles.
In addition to governance and architecture, the people aspect is crucial, encompassing roles such as product owners, tech leads, data engineers, analysts, and change management teams. The strength of successful data products lies in the harmonious collaboration among these three dimensions, fostering an environment where innovative solutions can thrive and effectively meet organisational goals. By prioritising these elements, Mario ensures that data product development is both efficient and aligned with best practices.
Figure 39 (Don’t) Mind the Gap
Figure 40 Types of Governance
Figure 41 Data Governance for Data Products
Figure 42 The Technical Dimension of Architecture
Types of Data Products: System vs. Consumer Views
Data products can be categorised along a spectrum that reflects their proximity to source systems and their usability for end users. On one end of the spectrum are source-aligned data products, which maintain a close relationship with operational systems such as CRM, ERP, or transactional systems.
Typically built by IT departments, these products refine data while preserving the original structure of the source systems, much as iron is refined from ore. In the middle of the spectrum, aggregate data products combine information from multiple sources, offering a more comprehensive perspective.
At the opposite end are consumer-aligned data products, which prioritise end-user usability and often follow medallion architecture patterns characterised by bronze, silver, and gold layers. A significant debate arises over whether these consumer-aligned products should encompass the entire data pipeline or instead build on the foundation established by source-aligned products. The resolution to this question largely depends on an organisation’s structure and maturity, highlighting that there is no one-size-fits-all solution in the realm of data products.
Figure 43 People: Roles in a Data Product Team
Figure 44 People: Change Management
Figure 45 The Data Product Flow
Figure 46 Premise 01: Data is Always in Motion
Figure 47 Premise 02: Business Requirements Change Frequently
Figure 48 Upstream Changes Impact Downstream Data Processing
Figure 49 Data is Always in Motion and We Need GAP Everywhere
Figure 50 Data Retrieval
Why Governance Must Be Pragmatic, Not Bureaucratic
Traditional Data Governance programs often hinder progress by becoming rigid bottlenecks that delay projects and frustrate teams. As Mario observed, governance teams frequently produce elaborate PowerPoint presentations that fail to translate into actual implementation. To counter this inefficiency, a more pragmatic approach is needed—embedding governance directly into data product lifecycles.
This method promotes self-service capabilities with appropriate guardrails, shifting away from the concept of centralised gatekeepers. Key elements such as data contracts, quality checks, naming conventions, and metadata standards should be integrated into product design. By treating governance as a continuous and integrated process, much like software testing, teams can enhance quality without creating bureaucratic obstacles, ensuring that governance supports development rather than hindering it.
Figure 51 Data Retrieval pt.2
Figure 52 Data Retrieval pt.3
Figure 53 Data Integration
Architectural Principles: Interoperability and Reliability
Strong architecture is essential for seamless integration of data products, enabling enhanced interoperability and composability. By employing clear definitions through the Open Data Product Standard (ODPS) and utilising APIs, organisations can build aggregate products that align with source data.
This approach facilitates combining multiple products into innovative capabilities while prioritising trustworthiness, security, and reliability.
Systems must be designed to handle failures effectively, protect sensitive information with appropriate access controls, and consistently deliver reliable results. Moreover, Mario emphasises the importance of shared platforms over a distributed infrastructure to avoid the pitfalls of a data mesh, where products are dispersed across multiple cloud services such as Google Cloud, AWS, and Azure. Such a fragmented setup can lead to performance bottlenecks and escalating costs.
Additionally, maintaining shared models is crucial to prevent duplication of efforts; for instance, key performance indicators (KPIs) such as “net customer additions” should have a single authoritative calculation rather than multiple competing implementations. This unified approach fosters operational efficiency and aligns strategies across the organisation.
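The shared-model principle can be made concrete with a sketch: a single authoritative KPI function that every data product imports, instead of each pipeline re-deriving its own version. The function name and signature are illustrative, not from the webinar.

```python
def net_customer_additions(gross_additions: int, churned_customers: int) -> int:
    """The single authoritative definition of the 'net customer additions' KPI.

    Downstream data products call this one function rather than
    maintaining competing implementations of the same calculation.
    """
    return gross_additions - churned_customers

# Illustrative figures: 1,200 new customers, 350 churned in the period.
print(net_customer_additions(1_200, 350))  # 850
```

Centralising the definition this way means a change to the KPI's business logic is made once and propagates consistently, rather than drifting across teams.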
Figure 54 Value Extraction
Figure 55 Value Extraction and Change Management
Figure 56 Data is Always in Motion, Continuous Improvement and Continuous Adaption
The People Dimension: Roles and Change Management
Building successful data products requires integrating specific roles and effective change management strategies. Key roles include data product owners who ensure business alignment, tech leads responsible for architectural decisions, data engineers focused on pipeline development, and analytical engineers tasked with modelling and transformation. Together, these roles create a collaborative environment vital for the development of robust data products.
In addition to defined roles, managing change effectively is essential during organisational transformations. A case in point is Mario’s restructuring of his 60-person team from a traditional waterfall model to Spotify-inspired squads. By reducing the number of team leads to four chapter leads with distinct responsibilities, he facilitated a more agile environment.
To support this transition, Mario implemented daily open-door sessions for team members to voice their concerns, and he actively evaluated their feedback with senior leaders, addressing valid points while clearly justifying any rejected suggestions. Ultimately, after a month of open communication, the team achieved buy-in for the new structure. This demonstrates that giving people a voice during transformation can significantly reduce resistance and foster a smoother transition.
Figure 57 Mario Meir-Huber’s LinkedIn
The Data Product Flow: Beyond the DAMA Wheel
In today’s dynamic data environment, viewing data products as static projects with defined endpoints is no longer effective. Mario introduces the Data Product Flow, a continuous cycle that recognises the persistent motion of data within organisations. This approach underscores that operational systems generate data incessantly, business requirements shift frequently, and even minor upstream changes, such as removing a database field, can disrupt entire downstream processes.
The Data Product Flow comprises three ongoing phases: Data Retrieval, Data Integration, and Value Extraction. Each of these phases requires implementing the GAP framework, which encompasses governance, architecture, and people considerations. By embracing this continuous cycle, organisations can move away from traditional project mindsets and understand that data products are never truly finished; they are in a state of perpetual evolution, continually adapting to new requirements and challenges.
Data Retrieval Phase: Contracts and Pipelines
The retrieval phase is essential for ensuring the reliability of data acquired from source systems. During this phase, data contracts play a critical role by outlining the expected data structure, quality standards, validation rules, and error-handling procedures, helping identify and resolve issues before they propagate downstream.
These contracts specify the types of data that will be received, acceptable ranges, and quality thresholds, establishing a clear framework for data integrity. Additionally, transformation pipelines, particularly Directed Acyclic Graphs (DAGs), are utilised to process incoming data, emphasising clear dependencies and robust error handling.
Effective governance is key to this process, requiring data quality validation at the point of ingestion, metadata capture, and data lineage tracking. Training personnel is equally important, as mistakes during data entry at the source can lead to significant, ongoing complications. As Mario illustrates with his numerous data entry horror stories, the quality of data ultimately begins with humans who must understand their crucial role in maintaining accuracy and reliability in the data pipeline.
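A data contract of the kind described here can be sketched as per-field rules checked at ingestion. The contract format, field names, and thresholds below are assumptions for illustration.

```python
# A minimal data contract: expected type plus an acceptable range per field.
CONTRACT = {
    "customer_id": {"type": str},
    "monthly_spend": {"type": float, "min": 0.0, "max": 100_000.0},
}

def violations(record: dict) -> list:
    """Return contract violations so bad rows are caught before they propagate."""
    problems = []
    for field, rule in CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, rule["type"]):
            problems.append(f"{field}: expected {rule['type'].__name__}")
        elif "min" in rule and not (rule["min"] <= value <= rule["max"]):
            problems.append(f"{field}: {value} outside [{rule['min']}, {rule['max']}]")
    return problems

print(violations({"customer_id": "C-001", "monthly_spend": -5.0}))
```

In a real pipeline a non-empty result would trigger the contract's error-handling procedure, for example quarantining the row, rather than letting the bad data flow downstream.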
Data Integration Phase: Modelling and Ownership
The integration of raw data into structured formats is vital for maximising its utility across various business functions. By transforming data into models such as dimensional structures (star schemas), data vaults for finance, or one big table (OBT) when denormalisation is appropriate, organisations can facilitate better analysis and insights.
Implementing a layered architecture, often referred to as the medallion approach (bronze, silver, gold), enables progressive refinement and enhances clarity around data usage. Effective data governance is crucial in this integration process, with a clear emphasis on distributed ownership among data stewards and product owners. Mario closed the webinar by highlighting the importance of keeping governance simple, as overly complicated systems can hinder rather than help.
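The medallion flow can be illustrated with a toy bronze-to-silver-to-gold pipeline. Plain dicts stand in for tables here, and the column names are invented for the example.

```python
# Bronze layer: raw rows exactly as ingested (strings, inconsistent casing).
bronze = [
    {"id": "1", "amount": "10.5", "region": " eu "},
    {"id": "2", "amount": "bad", "region": "US"},
]

def to_silver(rows: list) -> list:
    """Cleanse and type raw rows; drop those that fail conversion."""
    silver = []
    for row in rows:
        try:
            silver.append({
                "id": int(row["id"]),
                "amount": float(row["amount"]),
                "region": row["region"].strip().upper(),
            })
        except ValueError:
            continue  # a real pipeline would quarantine the row instead
    return silver

def to_gold(rows: list) -> dict:
    """Aggregate to a consumer-ready view: total amount per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(to_gold(to_silver(bronze)))  # {'EU': 10.5}
```

Each layer refines the previous one, which is the "progressive refinement" the medallion approach is named for: bronze preserves the raw source, silver enforces types and conventions, and gold serves the consumer-aligned view.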
- Executive Summary
- Introduction: From Crisis to Data Product Innovation
- What is a Data Product? Understanding Value Delivery
- The Business-Technology Balance: Two Critical Dimensions
- Financial Impact: The Four Value Dimensions
- Understanding the Cost Structure of Data Products
- ROI Calculation and Strategic Positioning
- The Impact-Feasibility Matrix: Prioritisation Framework
- Measuring Technical Feasibility: The GAP Preview
- The GAP Framework: Governance, Architecture, and People
- Types of Data Products: System vs. Consumer Views
- Why Governance Must Be Pragmatic, Not Bureaucratic
- Architectural Principles: Interoperability and Reliability
- The People Dimension: Roles and Change Management
- The Data Product Flow: Beyond the DAMA Wheel
- Data Retrieval Phase: Contracts and Pipelines
- Data Integration Phase: Modelling and Ownership
If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.
Additionally, if you would like to watch the edited video on our YouTube channel, please click here.
If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)
Don’t forget to join our exciting LinkedIn and Meetup data communities so you don’t miss out!