Data & AI Governance Unification Recap with Mario Cantin

Executive Summary

This webinar highlights the critical intersection of framework mapping, data privacy, and the automation of insurance claims within the context of current business processes. Howard Diesel and Mario Cantin address the challenges and strategies involved in leveraging AI for operational efficiency, emphasising the need for robust governance and management frameworks. Key considerations include the impact of privacy regulations on AI solutions in India, data rights management, and personal information management in business applications. The webinar also covers data quality, normalisation, and anomaly detection, particularly in fraud detection and financial claims processing. Lastly, Howard stresses the role of effective governance strategies in AI and Data Management in supporting global initiatives, particularly for startups implementing AI-enabled systems.

Webinar Details

Title: Data & AI Governance Unification Recap with Mario Cantin
Date: 12 September 2024
Presenter: Howard Diesel & Mario Cantin
Meetup Group: African Data Management Community
Write-up Author: Howard Diesel

Contents

Framework Mapping, Data Privacy, and Insurance Claim Automation
Automating the Business Process in Insurance
Challenges and Strategies in Automating AI Use Cases in Business
AI Framework and Operating Procedures
Governance Strategies and Requirements in AI Management
AI Management and Governance in the Prodago Application
Implementing an Effective Governance Strategy in Data Management
Impact of Privacy Regulations on AI Solutions in India
Rights in Data Management Systems and Personal Information Management
Personal Information Management and Privacy in Business Applications
Importance of Data Management and AI Governance in Global Initiatives
Implementation and Usability of AI-Enabled Systems in Startups
Data Quality and AI in Fraud Detection
Data Normalisation and Anomaly Detection in Financial Claims
Impact of AI on Data Quality and Anomaly Detection

Framework Mapping, Data Privacy, and Insurance Claim Automation

Howard Diesel opened the webinar and shared that the previous webinar in the series, “Data & AI Governance Unification,” had explored framework mapping, and that some attendees sought a recap. The focus of today’s session is on Mario’s work regarding the mapping challenges associated with frameworks governing Large Language Models (LLMs) and data privacy. He noted that the session would cover two main areas: first, a series of questions to assess readiness for an LLM project, and second, the steps involved in setting up such a project. The goal of the first case study is to streamline the claims process through automation, creating an efficient system that enables automatic claim approval upon submission. Next week’s session will delve into data quality and the automation of the insurance claims process.

In South Africa, funeral policies provided by insurers play a vital role for families facing the burden of burial expenses. These policies are especially crucial for those who may not have the means to cover costs upfront, as they ensure that financial support is available during a profoundly difficult time.

The prompt payout of these policies is essential; delays can lead families to seek loans, potentially plunging them into further financial hardship. Therefore, efficient processing of insurance claims is not only beneficial but also necessary to protect the well-being of policyholders during times of need.

Figure 1 Insurance Claims Processing Automation

Figure 2 High-level Business Process

Automating the Business Process in Insurance

The procedure for managing claims starts with the initial report of loss, where a claimant, possibly through a broker, communicates a concern related to their policy. For example, in the context of a funeral policy, a claimant may seek assistance from a provider who coordinates with an insurer to obtain coverage. This process involves multiple parties and is not solely managed by one insurer.

Following the initial report, the process includes gathering detailed information, collecting photographs, validating the claim to prevent fraud, processing documentation, conducting fraud detection and risk assessments, and making decisions on the claim, ideally using digital methods with human oversight when necessary. Finally, the claim concludes with settlement, payment, and customer communication. Industry standards facilitate this process by defining specific message types, such as the First Notice of Loss (FNOL) message, for transmitting loss notifications from portals to claims systems.

The business process involves integrating a customer portal with a claims system, which necessitates the collection of detailed data to ensure smooth transactions. A UML interaction diagram illustrates the high-level automation needed for this process, highlighting the potential use of web APIs and software services. Additionally, the process aligns with the DMBoK framework, emphasising the importance of reference and master Data Management, particularly when handling the first notice of loss.

This involves entity resolution across three key levels: identifying and matching customers, policies, and insured assets. Furthermore, the process necessitates various master data lookups for entities such as claimants and insured parties, underscoring the complexity of data requirements essential for effective automation and potential chatbot implementation.
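The resolution steps above can be sketched in Python; the field names and toy lookup tables below are illustrative assumptions, not the actual FNOL message schema:

```python
from dataclasses import dataclass

@dataclass
class FNOLMessage:
    """Illustrative First Notice of Loss payload (field names are assumptions)."""
    claimant_id: str
    policy_number: str
    loss_date: str
    loss_description: str

# Toy in-memory stores standing in for master data lookups
POLICIES = {"POL-001": {"holder_id": "CUST-42", "type": "funeral"}}
CUSTOMERS = {"CUST-42": {"name": "A. Claimant"}}

def resolve_entities(msg: FNOLMessage) -> dict:
    """Entity resolution across the key levels: policy, customer, insured party."""
    policy = POLICIES.get(msg.policy_number)
    if policy is None:
        return {"status": "rejected", "reason": "unknown policy"}
    customer = CUSTOMERS.get(policy["holder_id"])
    return {"status": "matched", "policy": policy, "customer": customer}

msg = FNOLMessage("CUST-42", "POL-001", "2024-09-01", "Funeral claim")
print(resolve_entities(msg)["status"])  # matched
```

In practice the lookups would run against MDM services rather than in-memory dictionaries, but the shape of the logic stays the same: match the policy first, then resolve the customer and insured party from it.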

Figure 3 UML Interaction Diagram

Figure 4 Mapping to DMBoK Knowledge Areas

Figure 5 Reference and Master Data Management

Challenges and Strategies in Automating AI Use Cases in Business

The automation of First Notice of Loss (FNOL) and guided intake using natural language chatbots presents significant challenges related to various regulatory frameworks, including GDPR, HIPAA, and the EU AI Act, especially when handling sensitive data such as health claims or information involving children. It is crucial to consider standards like ISO 27000 and the NIST AI Risk Management Framework in AI management.

Mario’s Forces Framework for AI governance emphasises the importance of addressing operational, Data Management, risk management, cybersecurity, ethics, and the intricacies of AI systems and models. This holistic approach enables us to identify specific challenges and requirements for each use case within the AI deployment, ensuring compliance and effective risk management.

Assessing the automation of a business use case requires a comprehensive understanding of several key components. As outlined in a questionnaire by Mario at Prodago, this process involves meticulous planning aligned with the organisation’s data strategy goals. Crucial elements include data risk management, cybersecurity, ethical considerations, records management, and a robust AI framework, all of which are interconnected and can influence the success or failure of automation efforts.

Recognising these dependencies is vital for effective decision-making, as they encompass essential aspects such as reporting, risk aggregation, and monitoring machine learning operations and models, while also addressing concerns like model drift and data retention. Ultimately, a thorough grasp of these complexities can significantly enhance the likelihood of successful automation in business contexts.

The operation of integrating various frameworks in a project involves numerous critical elements, including customer submissions, risk tolerance, risk accountabilities, and robustness principles. Key frameworks to consider include the AI Adoption Framework, the National AI Index, the National Data Strategy, and others.

The challenge lies in navigating these frameworks without becoming overwhelmed. By strategically linking these frameworks using orchestrated operational processes and deliverables, we can streamline connections and automate several background tasks. Ultimately, this approach aims to clearly define risk tolerances, enabling teams to identify high-risk areas that require focused attention and management.

The National AI Index, recently released, outlines the dimensions and considerations for selecting AI use cases, ranging from level 1 to level 5, along with relevant OPs, DCAM, ISO standards, corporate regulations, and AI acts. These elements are crucial in determining the organisation’s strategy, which aims to address existing challenges and meet ambitions while identifying necessary capabilities and constraints. The focus is on effectively delivering solutions, such as those in insurance claims, without incurring excessive resource expenditure.

Figure 6 Insurance Claims and Framework Mapping

Figure 7 Insurance Automated Claim

Figure 8 Insurance Automated Claim Pt.2

Figure 9 Insurance Automated Claim Pt.3

Figure 10 Insurance Automated Claim Pt.4

Figure 11 AI Use Cases

Figure 12 KSA: National AI Index

Figure 13 Govern by Wire

AI Framework and Operating Procedures

Mario has prepared a visual representation related to the AI Forces framework, which consists of several essential elements. The framework outlines key operating procedures that focus on business alignment, emphasising the importance of conducting cost-benefit analyses and documenting risk-benefit trade-offs. In terms of data operations, it highlights aspects such as data stewardship, ownership, sources, metadata retention, and requirements.

Additionally, the framework addresses risk management considerations, including cybersecurity and ethical implications associated with AI systems. Overall, it is crucial to approach the implementation of these elements through a structured progression often described as “Crawl, Walk, Run.”

The “Crawl, Walk, Run” framework for implementing AI systems outlines a gradual approach to development and deployment, emphasising a systematic progression from initial to advanced stages. In the “Crawl” stage, essential tasks include documenting the technical and business limitations of the AI system, conducting an AI inventory, and defining security requirements along with metrics and objectives.

As organisations progress to the “Walk” stage, they should focus on additional tasks that build on the foundational work established during the Crawl phase. This structured approach enables teams to tailor their actions to their current skills and capabilities, ensuring a smoother transition through the stages of implementing an AI system.

Figure 14 Governance Operations

Figure 15 Governance Operations Pt.2

Figure 16 Governance Operations Pt.3

Figure 17 Data Governance Pt.4

Governance Strategies and Requirements in AI Management

The concept of strategy plays a crucial role in guiding organisations toward their goals by providing a structured framework for planning and execution. Integrating this strategic framework into governance is crucial for understanding its implications and evaluating its effectiveness. Ultimately, a strategic approach to governance not only enhances decision-making but also aligns organisational ambitions with actionable plans.

The governance strategy of an organisation should be aligned directly with its objectives, such as achieving claim automation or establishing a first notification of loss capability. Instead of viewing governance merely as a collection of structural elements, such as policies and committees, organisations should assess what specific governance capabilities are necessary to meet their goals.

This shifts the focus toward a more effective, lean governance model that is purpose-driven, helping to avoid the pitfalls of a complex structure that may be disconnected from desired outcomes. By adopting this approach, organisations can ensure they have the essential governance components required for success.

The shift from traditional governance to a more enabling approach is essential for organisations aiming to achieve their goals, particularly in areas such as data privacy and AI management. This highlights the fact that an excessive focus on controls can hinder progress. For instance, transforming an organisational ambition, such as developing First Notice of Loss (FNOL) capabilities, into actionable governance requirements involves critical components, including Data Management, risk assessment, cybersecurity, and ethics.

These elements form the foundation of an effective AI governance framework, ensuring that organisations can align structure, personnel, policies, and processes to harness AI’s potential successfully. Ultimately, adopting this enabling governance model empowers organisations to navigate complexities while supporting their strategic objectives.

The management capability of AI systems is closely tied to effective governance operations, which ensure that these systems perform correctly. For an organisation aiming to establish a notification of loss capability, it is crucial to assess operational risk measures within the framework of cybersecurity ethics. This involves identifying the minimum requirements for governance and capital investment to support these initiatives. Additionally, it is essential to examine the AI management and oversight capabilities needed for successful implementation. By analysing Large Language Models (LLMs) as an AI technique, organisations can delineate specific governance requirements across operational and risk management dimensions, ultimately allowing them to determine the relevant subsets tailored to their needs.

Figure 18 Forces – AI Governance Framework

Figure 19 AI Governance Model

AI Management and Governance in the Prodago Application

The Prodago application facilitates the management and governance of AI initiatives, particularly those involving Large Language Models (LLMs). Effective governance operations encompass various components essential for LLM projects, including cybersecurity measures, pre-training and post-training safety protocols, as well as requirements related to ethics and privacy rights.

This involves adhering to a comprehensive framework consisting of 566 elements that address security aspects and the ethical implications of data use. Key considerations include ensuring transparency through privacy notices and careful management of data privacy rights, all of which are critical when developing an LLM solution.

Utilising a playbook is essential for successfully navigating projects involving Large Language Models (LLMs). A playbook not only highlights key areas that demand attention but also provides a structured approach that mitigates the daunting nature of such projects. By acknowledging critical aspects and addressing them systematically, a playbook ensures a smoother path forward, ultimately enhancing the project’s success and outcomes.

Figure 20 AI Technique Playbook – LLM

Figure 21 AI Technique Playbook – LLM Pt.2

Implementing an Effective Governance Strategy in Data Management

Implementing LLM (Large Language Model) technology can follow a “Crawl, Walk, Run” approach, facilitating a gradual and manageable integration. This methodology begins by identifying a preliminary bundle of 72 essential components required to create a functional subset of LLM technology. While these foundational requirements offer valuable guidance, it is important to note that they may not be comprehensive, as certain elements, such as privacy notices and the need to filter undesirable content from a domain blocklist, can vary significantly depending on specific use cases. Ultimately, this structured approach provides a solid foundation while allowing for adaptability and customisation in diverse applications.

The necessity of a privacy notice hinges on whether an initiative utilises personal information. To evaluate this, one must consider various factors, including the intended behaviour of the model, its usage characteristics, the initiative’s context, and the data source. Instead of assessing all 72 elements for every LLM initiative, we focused on identifying the specific contexts in which the 600 applicable requirements arise, providing a clearer understanding of when privacy notices are required based on the model’s application and operational practices.

The process of determining the applicability of operating practices in a project involves identifying specific contextual elements and associated questions relevant to requirements. For instance, in an FNOL initiative, a questionnaire will help ascertain whether the project will function as a proof of concept or be fully deployed; this distinction influences which elements, such as deployed data quality metrics, are applicable. By asking targeted questions, we can automatically identify a subset of requirements that are meaningful to the project’s governance strategy. Ultimately, this enables the project team to focus on the necessary elements that will contribute to its success.
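A minimal sketch of this context-driven filtering, with invented requirement names and trigger rules standing in for Prodago’s actual questionnaire logic:

```python
# Toy mapping of questionnaire answers to applicable governance requirements.
# Requirement names and trigger rules are invented for the sketch.
REQUIREMENTS = {
    "privacy_notice": lambda ctx: ctx["uses_personal_data"],
    "deployed_dq_metrics": lambda ctx: ctx["stage"] == "production",
    "model_drift_monitoring": lambda ctx: ctx["stage"] == "production",
    "poc_risk_signoff": lambda ctx: ctx["stage"] == "poc",
}

def applicable_requirements(context: dict) -> list:
    """Return only the requirements triggered by this initiative's context."""
    return sorted(name for name, rule in REQUIREMENTS.items() if rule(context))

fnol_poc = {"uses_personal_data": True, "stage": "poc"}
print(applicable_requirements(fnol_poc))  # ['poc_risk_signoff', 'privacy_notice']
```

Each answer in the questionnaire narrows the active rule set, so the project team only ever sees the requirements its context actually triggers.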

Figure 22 AI Technique Playbook – LLM Pt.3

Figure 23 AI Governance (LLM)

Figure 24 FNOL Initiative

Impact of Privacy Regulations on AI Solutions in India

In developing an FNOL solution in India, it is crucial to consider the Digital Personal Data Protection Act (DPDPA), the country’s privacy law. This regulation mandates specific governance requirements for handling personal data, which must be integrated into the FNOL strategy.

Key components of compliance include providing clear privacy notices and obtaining consent from individuals. A comprehensive playbook has been created that outlines the necessary chapters and requirements from the DPDPA, facilitating the implementation of these privacy measures within the broader FNOL governance framework.

The transition from understanding privacy relations in India to addressing them within the context of Large Language Models (LLMs) emphasises the importance of communicating privacy notices effectively. It is essential for organisations utilising LLMs to align with governance requirements that ensure individuals are informed and provide explicit consent before participating in LLM-driven interactions, such as chatbots. This approach highlights the importance of transparency and consent, underscoring the rights of individuals in the use of emerging technologies.

When developing a Large Language Model (LLM) solution, it is essential to notify individuals and to give them a means to challenge or correct how their information is used. The governance requirements for the LLM should include specific mechanisms for rectifying personal information within the model, in line with the Digital Personal Data Protection Act (DPDPA).

This involves starting with a foundational approach within the LLM playbook and ensuring that aspects like the right to accuracy are addressed. In summary, adopting a lean methodology that incorporates these essential governance components is crucial for achieving compliance and effective Data Management.

To build a large language model (LLM), it is essential to adopt a lean approach that focuses on incorporating only necessary elements while considering additional regulatory requirements that may arise, such as DPDPA or GDPR, depending on the project’s context. This entails starting with a foundational framework and progressively adding features that ensure compliance and enhance data privacy. Furthermore, attention must be given to data quality and accuracy, emphasising the importance of a contextual dimension to validate that the data is fit for its intended purpose.

Effective data quality planning is crucial for systems that utilise Large Language Models (LLMs) to ensure compliance with regulations and enhance user experience. Specifically, in accordance with Indian regulations, users must have the right to rectify any inaccuracies in their personal information provided through chatbots.

This highlights the necessity of developing robust mechanisms that enable users to update their information easily as needed. Consequently, implementing rectification capabilities is not only a regulatory requirement but also an essential best practice for governing LLMs.

Figure 25 The Digital Personal Data Protection Act

Figure 26 The Digital Personal Data Protection Act Pt.2

Figure 27 AI Technique Playbook – LLM Pt.2

Rights in Data Management Systems and Personal Information Management

In managing personal information collected from claimants for the LLM, it is essential to provide them with the ability to access and rectify their data. This entails ensuring the accuracy and quality of the information stored within the system. Additionally, individuals have the right to request the deletion of their personal data, which emphasises the importance of implementing processes that respect both the right to accuracy and the right to be forgotten within the database of the LLM system.

To effectively implement model editing or unlearning capabilities, it is crucial to establish these functionalities at the outset of a project. By identifying these requirements early, organisations can either prepare adequately to meet them or decide against pursuing the initiative if they lack the necessary capabilities. This proactive approach helps prevent the disappointment of discovering critical missing functionalities at the deployment stage, ensuring a smoother transition to deploying the solution in production.
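A toy sketch of the rights discussed above as operations on a personal data store (the class and method names are invented for illustration):

```python
class PersonalDataStore:
    """Toy store illustrating access, rectification, and erasure rights.

    Method names are illustrative, not a specific library's API.
    """
    def __init__(self):
        self._records = {}

    def collect(self, subject_id, data):
        self._records[subject_id] = dict(data)

    def access(self, subject_id):
        """Right of access: return what is held about the subject."""
        return self._records.get(subject_id)

    def rectify(self, subject_id, field, value):
        """Right to accuracy: correct a stored field."""
        self._records[subject_id][field] = value

    def erase(self, subject_id):
        """Right to be forgotten: delete the subject's record."""
        self._records.pop(subject_id, None)

store = PersonalDataStore()
store.collect("claimant-1", {"name": "T. Nkosi", "phone": "000"})
store.rectify("claimant-1", "phone", "082-555-0000")
store.erase("claimant-1")
print(store.access("claimant-1"))  # None
```

Erasing a row from a database is straightforward; honouring the same right inside a trained model requires the model-editing or unlearning capability the text flags, which is far harder to retrofit.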

Personal Information Management and Privacy in Business Applications

In the context of implementing processes to ensure the quality of personal information collected during the First Notice of Loss (FNOL), it is essential to include a validation step within the chatbot. This step would confirm the accuracy of the information provided by the individual, thereby ensuring the quality and veracity of the data.

Validation may involve cross-referencing the information against a golden record within Master Data Management (MDM) systems, especially when extracting details relevant to policies, such as funeral policies, where beneficiaries must be accurately recorded. This not only upholds the rights of individuals but also facilitates access to transactional data linked to the client list.
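One way to sketch such a validation step is a simple fuzzy match against a toy golden record (the records, field names, and threshold below are assumptions):

```python
from difflib import SequenceMatcher

GOLDEN_RECORDS = {  # stand-in for an MDM golden record store
    "POL-001": {"beneficiary": "Thandi Nkosi", "id_number": "8001015009087"},
}

def validate_against_golden(policy_no, claimed_beneficiary, threshold=0.85):
    """Fuzzy-match the chatbot-supplied beneficiary against the golden record."""
    record = GOLDEN_RECORDS.get(policy_no)
    if record is None:
        return False
    score = SequenceMatcher(None, claimed_beneficiary.lower(),
                            record["beneficiary"].lower()).ratio()
    return score >= threshold

print(validate_against_golden("POL-001", "Thandi Nkosi"))  # True
print(validate_against_golden("POL-001", "T. Smith"))      # False
```

A production system would use a proper MDM matching engine rather than `difflib`, but the principle is the same: confirm the extracted details against the trusted record before the claim proceeds.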

The integration of automation in insurance claims processing presents both opportunities and challenges that require careful management of various regulatory frameworks. Effective claim management begins with the initial report of loss, where the claimant, often through a broker, communicates their request for assistance. This multifaceted process involves gathering detailed information, validating claims to prevent fraud, and utilising digital methods for assessments while ensuring human oversight.

The use of standards, such as the ACORD messaging standards for communication between claims systems, further enhances efficiency. However, challenges arise, especially with regulations like GDPR and HIPAA, which impose strict guidelines on handling sensitive data, including health-related claims. Consequently, a proactive approach to AI governance, as emphasised by Mario’s Forces Framework, is essential for navigating these complexities and ensuring that automation can be effectively implemented in the insurance industry.

The governance of Large Language Models (LLMs) necessitates a comprehensive approach that encompasses not only fundamental requirements but also additional obligations specific to open government, such as the publication of source code and transparency regarding operational mechanisms. While privacy and specific security requirements are essential, other factors—like data sharing agreements and open data considerations—should also be integrated into the governance strategy.

The key is to develop a lean and effective governance framework that aligns with organisational goals, explicitly identifies the necessary supports for those ambitions, and accounts for relevant constraints. This approach will result in a set of governance requirements that are minimal yet sufficient to instil trust among various stakeholders in the developed solutions.

Importance of Data Management and AI Governance in Global Initiatives

A significant concern in the Gen AI initiative is that 95% of projects yield little to no return on investment due to overlooked facets. To address this, a bottom-up approach ensures that all aspects are explicitly considered during the development process.

For instance, in Saudi Arabia, the establishment of the AI Office alongside the Data Management Office (NDMO) is part of the national AI index initiative. This involves creating a comprehensive playbook for the National AI Framework, which must include adherence to key capabilities such as data readiness, human capital, and partnerships.

To customise systems effectively across different countries and contexts, it is crucial to integrate Data Management with AI initiatives, as exemplified by Saudi Arabia’s strong foundation in Data Management through the NDMO framework. They must avoid treating AI as a separate entity to prevent reinforcing silos between Data Management and AI. Instead, Saudi Arabia should leverage its Data Management capabilities as a fundamental aspect of its AI strategy, demonstrating that effective data governance is essential for the success of AI applications. This approach will also provide a structured playbook for the National AI Index, highlighting the necessity of proper Data Management within the framework of AI governance.

Implementation and Usability of AI-Enabled Systems in Startups

The product system utilises AI-assisted technology to create playbooks and questionnaires, ensuring a streamlined and efficient experience. This approach enables the use of consistent questions that address the same objectives, thereby eliminating redundancy in inquiries. Overall, the integration of AI plays a crucial role in connecting the necessary elements to effectively determine privacy component prescriptions.

The application of a comprehensive system for evaluating AI-driven startup ideas is crucial, particularly when utilising a Large Language Model (LLM). An illustrative case involves a customer leveraging an LLM to analyse text responses from a questionnaire in conjunction with video recordings of participants, aiming to assist startups in sidestepping potential pitfalls associated with LLM technology. However, a significant concern arises regarding the accessibility and affordability of this robust system for startups, as it is primarily designed for larger organisations. Ultimately, the goal is to ensure that smaller entities can also benefit from this innovative technology without exceeding their budgets.

The Government of Canada is working to enhance the applicability of its internal policies for very small departments and agencies, especially those with low organisational maturity. To achieve this, the team has partnered with the Treasury Board of Canada to identify a minimal set of directives that these smaller entities can effectively implement.

This initiative emphasises the importance of aligning deliverables from the Digital Policy Development and Advisory (DPDA) with pertinent frameworks, ensuring that essential elements, such as data quality assessments and privacy considerations, are properly integrated. Ultimately, this approach aims to create a clearer understanding of required deliverables while allowing for the necessary flexibility across various frameworks.

Data Quality and AI in Fraud Detection

The upcoming discussion will centre around the critical topic of data quality in relation to artificial intelligence and its implications for various stakeholders, particularly focusing on the head of the FNOL (First Notice of Loss) process. This presentation will delve into the essential requirements for developing effective products, including the creation of a chatbot designed to enhance and streamline FNOL processes. A significant component of this exploration will be the establishment of robust quality control measures for our data products, ensuring they are tailored to meet the needs of end users while promoting a better understanding and management of data quality. Ultimately, this focus on data quality will lead to improved efficiency and effectiveness in meeting consumer demands.

The assurance of data quality is crucial for the effective functioning of products, necessitating the implementation of validation and anomaly detection methods to identify fraudulent claims and minimise false positives and negatives. Key steps include ensuring the right data is used, properly training models on that data, and conducting root cause analyses for failures or erroneous judgments. Frameworks such as ISO 8000 for Reference and Master Data quality, along with ISO 55000 for AI data products, provide valuable guidance in assessing and improving data quality dimensions. Ultimately, the focus is on understanding whether the data is contextual and adds value to the overall system.

In developing a fraud detection system, it is crucial to identify critical data elements and features that significantly contribute to detecting fraudulent transactions. This involves evaluating the relevance of these features and ensuring sufficient data availability for effective analysis. During the training process, it is essential to determine the appropriate allocation of data for the training, testing, and production phases, while also creating various scenarios to ensure optimal performance. Understanding which data quality dimensions pertain to the data product and the overall system is essential. Finally, as claims are assessed, it is vital to analyse their status within defined operational boundaries to enhance decision-making.
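The data-allocation step can be sketched as a seeded split of claims across training, testing, and hold-out sets (the 70/15/15 proportions are illustrative, not a figure from the webinar):

```python
import random

def split_claims(claims, train=0.7, test=0.15, seed=0):
    """Allocate claims across training, testing, and hold-out sets.

    A fixed seed keeps the split reproducible across runs.
    """
    rng = random.Random(seed)
    shuffled = claims[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_test = int(len(shuffled) * test)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

claims = list(range(100))
train_set, test_set, holdout = split_claims(claims)
print(len(train_set), len(test_set), len(holdout))  # 70 15 15
```

The hold-out set stands in for the production phase: scoring it only after training and tuning are done gives an honest estimate of how the fraud detector will behave on unseen claims.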

Figure 28 Data Quality Planning

Figure 29 Root Cause Analysis

Figure 30 Wang & Strong Framework

Figure 31 Visualise and Inspect

Data Normalisation and Anomaly Detection in Financial Claims

The detection algorithm process begins with a focus on First Notice of Loss (FNOL) and involves normalising various features to identify anomalies effectively. Given the wide range of claim amounts, durations, and ages—where claim amounts can vary from $100,000 to minimal sums, and claim durations can span from 0 to 50 days—it’s essential to standardise these features to prevent any one from dominating the analysis.

Normalisation techniques enable the adjustment of these discrepancies, allowing values to be scaled within a consistent range. This approach not only facilitates the creation of normalised data but also supports the use of multiple algorithms in an ensemble method, which helps determine how many algorithms flag each record as anomalous and identifies the most suitable algorithms for different scenarios.
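A minimal min-max normalisation sketch showing how both features end up on the same [0, 1] scale:

```python
def min_max_normalise(values):
    """Scale a feature to [0, 1] so no single feature dominates distance-based
    anomaly detection: claim amounts in the hundreds of thousands would
    otherwise swamp claim durations of 0-50 days."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

claim_amounts = [100, 5_000, 100_000]
claim_durations = [0, 10, 50]
print(min_max_normalise(claim_amounts))    # [0.0, 0.049..., 1.0]
print(min_max_normalise(claim_durations))  # [0.0, 0.2, 1.0]
```

After scaling, a one-unit difference means the same thing for every feature, which is what allows several anomaly detection algorithms to be run over the same normalised data in an ensemble.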

The upcoming week will focus on a detailed process of using a one-class Support Vector Machine (SVM) to detect anomalies in data. We will explore how this model identifies anomalies, ranks their severity, and establishes boundaries between normal and anomalous data points. The approach involves working with synthetic data, which includes splitting, labelling, and injecting anomalies to ensure accurate detection by the model. Rigorous training is essential before applying the model to real data, as failure to detect anomalies indicates potential issues within the model. Participants will learn how to interpret these results and guide individuals through the outcomes of the anomaly detection process.
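A compact sketch of that workflow using scikit-learn's `OneClassSVM`, with a synthetic cluster of normal claims and two injected anomalies (the cluster parameters and the `nu` value are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# Synthetic "normal" claims: normalised amount and duration clustered together
normal = rng.normal(loc=0.5, scale=0.05, size=(200, 2))
# Injected anomalies far from the normal cluster
anomalies = np.array([[0.95, 0.05], [0.02, 0.98]])

# Fit the boundary on normal data only; nu bounds the fraction of outliers
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)

labels = model.predict(anomalies)            # +1 = normal, -1 = anomalous
severity = model.decision_function(anomalies)  # lower = more anomalous
print(labels)  # [-1 -1]
```

`decision_function` gives the severity ranking mentioned above: the more negative the score, the further a claim sits outside the learned boundary. If the injected anomalies were not detected, that would point to a problem with the model or the training data, which is exactly the check the synthetic-data exercise is designed to perform.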

Assessing data quality at a record level presents significant challenges, especially when multiple interrelated variables, such as claim amount, claim age, and claim duration, are involved. Traditional data quality dimensions often focus on individual columns, but the complexity increases when evaluating how these variables interact across columns with undefined boundaries. Effective data quality checks usually rely on clear definitions and rules, such as validating postal codes against suburbs; however, the absence of such guidelines makes it difficult to identify meaningful patterns. Reasonableness therefore becomes the key dimension: deviations from accepted norms warrant scrutiny by analysts. Such investigations are crucial for ensuring timely and fair outcomes, particularly in sensitive contexts like claims processing.
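One way to operationalise reasonableness across interrelated columns is a multivariate distance check; the sketch below flags records whose combination of claim amount and duration deviates from the joint pattern, even though each value looks plausible on its own (Mahalanobis distance is my choice for the sketch, not a technique named in the webinar):

```python
import numpy as np

def mahalanobis_flags(X, threshold=3.0):
    """Flag records whose combination of values deviates from the joint
    pattern of the data, even when each column looks reasonable alone."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    diffs = X - mean
    # Quadratic form (x - mu)^T S^-1 (x - mu), one distance per record
    d = np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))
    return d > threshold

rng = np.random.default_rng(0)
# In normal data, claim amount grows roughly with claim duration
duration = rng.uniform(1, 50, size=300)
amount = duration * 1_000 + rng.normal(0, 2_000, size=300)
X = np.column_stack([amount, duration])

# Each value is in range, but the combination is odd: large amount, short duration
odd = np.array([[49_000.0, 2.0]])
flags = mahalanobis_flags(np.vstack([X, odd]))
print(bool(flags[-1]))  # True
```

A column-wise range check would pass this record, since 49,000 is a valid amount and 2 days a valid duration; only the joint view reveals it as unreasonable, which is the kind of pattern an analyst would then be asked to investigate.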

Figure 32 AI Governance in the Insurance Claim Process

Figure 33 Data Normalisation

Figure 34 Anomaly Detection Technique and Normalisation

Figure 35 Claim ID and Anomaly

Figure 36 Anomaly Detection Recipe

Impact of AI on Data Quality and Anomaly Detection

An attendee with extensive experience in data quality expressed enthusiasm about the advancements in AI, particularly how tools like ChatGPT and cloud services are streamlining data analysis. He highlighted his experimentation with these technologies to identify patterns and anomalies in unfamiliar data sets, which has been a revelation. He noted that by leveraging AI for anomaly detection, they are learning more about the underlying data even when they are not the subject matter experts (SMEs). He emphasised the significant benefits of visualising patterns and engaging with SMEs to understand outliers, thereby enhancing the overall analysis process.

The automation of the First Notice of Loss (FNOL) and guided intake processes using natural language chatbots presents significant challenges, particularly in relation to various regulatory frameworks such as GDPR, HIPAA, and the EU AI Act. These challenges are especially pronounced when sensitive data is involved, such as health claims or information about children, which necessitates rigorous compliance measures. Additionally, adherence to standards like ISO 27000 and the NIST AI Risk Management Framework is essential for effective AI management. Overall, navigating these complexities is crucial for the successful implementation of AI solutions in the insurance industry, ensuring both operational effectiveness and regulatory compliance.

If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.

Additionally, if you would like to watch the edited video on our YouTube channel, please click here.

If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)

Don’t forget to join our exciting LinkedIn and Meetup data communities not to miss out!
