Data & AI Governance Unification for Data Managers

Executive Summary

This webinar outlines key considerations in the intersection of insurance, Data Management, and Artificial Intelligence (AI). Howard Diesel highlights the importance of robust information governance frameworks and addresses the challenges associated with AI governance, compliance, and scalability in project operations. The integration of risk management frameworks is essential for maintaining data integrity and mitigating risks, especially in the insurance and oil and gas sectors.
The webinar emphasises the ethical implications of AI in the insurance industry and advocates for the implementation of AI risk management strategies and standards that align with corporate governance. As AI continues to evolve, understanding its potential and addressing the challenges it presents—particularly in healthcare and decision-making—will be critical for future developments and operational success.

Webinar Details

Title: Data & AI Governance Unification for Data Managers

Date: 2025-08-14
Presenter: Howard Diesel
Meetup Group: African Data Management Community
Write-up Author: Howard Diesel

Contents

Insurance Industry Frameworks and Data Management
Understanding Information Governance Frameworks
AI Governance and Data Management Challenges
Compliance, Governance, and Scalability in Project Operations
Data Management and Risk Management Frameworks
AI Governance and Ethics in the Insurance Industry
Insurance Specification and Data Management in the Oil and Gas Industry
Understanding and Implementing AI Risk Management Frameworks
Frameworks and Use Cases for Insurance
Future of AI Models
Challenges and Success Factors in AI Development
AI Standards, Regulations, and Corporate Governance
Understanding the Layered Approach to AI Governance
AI Framework and Risk Management Strategies
Operational Procedures and NIST Standards in Cybersecurity
Decision Intelligence and AI in Decision Making
The Challenges and Potential of AI in Healthcare
Challenges of Data Governance and AI

Insurance Industry Frameworks and Data Management

Howard Diesel opened the webinar and shared that it would highlight the crucial role of Data Management frameworks within the insurance industry, emphasising their significance for effective governance. He clarified a previous reference to a specific Life Insurance AI use case. He stressed that vendor frameworks such as Informatica's and Collibra's should be viewed as flexible guidance tools rather than rigid models.

Figure 1 Unification of Data, Records and AI Governance

Figure 2 Governance Unification Overview

Understanding Information Governance Frameworks

Information governance poses significant challenges for organisations, necessitating a cohesive approach that combines various governance areas, including records management. Key frameworks such as the DMBoK, DCAM, CMMI, and ISO/IEC 38505 provide essential guidance for effective Data Governance, complemented by industry-specific regulatory standards like BCBS 239 for banking and insurance. Furthermore, the involvement of national archives from countries such as Australia, the U.S., and South Africa underscores the importance of robust records management. For companies like MetLife, understanding and navigating these multiple frameworks is crucial to ensuring compliance and optimising their operations.

Figure 3 Core Framework Content

Figure 4 Core Framework Content Pt.2

AI Governance and Data Management Challenges

The complexities of AI Governance have become a significant topic of discussion among stakeholders, particularly concerning various frameworks such as the ISO/IEC 42001 series, the NIST AI Risk Management Framework, and the OECD recommendations. Howard shared that many have expressed feeling overwhelmed by the sheer volume of guidelines and frameworks available, leading to a call for feedback on whether others are focusing on specific areas or struggling with information overload. Overall, there is a pressing need for clearer direction and effective governance strategies in the rapidly evolving field of artificial intelligence.

The ongoing challenges in governance related to Data Management and AI regulations necessitate a more integrated approach, one that extends beyond the outdated DMBoK framework established in 2017. The team is actively making progress by engaging the organisation through informative sessions aimed at enhancing understanding of these complex issues. By focusing on the development of comprehensive maturity models and integrating operational procedures that collectively address privacy, Data Management, and AI governance, the team is working to eliminate confusion and ensure effective navigation of diverse regulatory requirements. This holistic approach is crucial for developing a comprehensive understanding of governance in the rapidly evolving landscape of technology and data.

Selecting appropriate governance structures and creating relevant artefacts is crucial for ensuring data privacy and compliance with regulations such as GDPR. This discussion highlights the crucial role of Data Architects in integrating Enterprise Data Models and monitoring data flows, which are essential for maintaining a registry of processing activities. Ultimately, effective governance not only safeguards data privacy but also fosters trust and accountability within organisations.

By aligning business processes with a Subject Area Model and focusing on core artefacts rather than excessive documentation for every regulation, organisations can avoid overwhelming complexity. There is also a mention of AI governance and the need for a strategic approach to AI adoption, indicating a lack of current attention to relevant standards such as ISO/IEC 42001.
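The "core artefacts rather than per-regulation documentation" idea can be made concrete with a minimal sketch: map each regulation to the artefacts it relies on, then maintain the shared intersection once, centrally. The regulation names and artefact names below are illustrative assumptions, not drawn from any specific framework:

```python
# Hypothetical mapping of regulations to the governance artefacts they rely on.
# Both the regulations and the artefact names are illustrative.
REGULATION_ARTEFACTS = {
    "GDPR": {"data_inventory", "processing_register", "subject_area_model"},
    "POPIA": {"data_inventory", "processing_register"},
    "EU_AI_Act": {"data_inventory", "model_register", "subject_area_model"},
}

def core_artefacts(regulation_map):
    """Return the artefacts shared by every regulation (the reusable core),
    plus the extras each regulation adds on top of that core."""
    core = set.intersection(*regulation_map.values())
    extras = {reg: arts - core for reg, arts in regulation_map.items()}
    return core, extras

core, extras = core_artefacts(REGULATION_ARTEFACTS)
print(sorted(core))            # artefacts to maintain once, centrally
print(sorted(extras["GDPR"]))  # regulation-specific additions
```

The point of the sketch is the design choice: the larger the shared core, the less duplicated documentation each new regulation demands.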

Figure 5 Practical Activities and Templates

Compliance, Governance, and Scalability in Project Operations

In the upcoming weeks, Howard shared that the webinar series would be focusing on the development and scoring of artefacts to ensure compliance with various assessments, such as privacy impact assessments, ethical impact assessments, and risk management readiness assessments.

Validation through scorecards will help confirm that all relevant compliance areas have been addressed. The objective is to shift the emphasis back to executing projects effectively, emphasising actionable steps rather than becoming overly entangled in different frameworks and their complexities.

Navigating data privacy compliance across multiple jurisdictions presents significant complexities that organisations must strategically address. To effectively manage these challenges, businesses can either tackle the requirements of each jurisdiction individually or develop a set of overarching principles to streamline compliance efforts.

This meticulous process not only requires incremental progress towards operational maturity but also emphasises the importance of creating tangible artefacts that demonstrate adherence to various frameworks, such as DCAM and the NIST AI Risk Management Framework. Ultimately, ensuring robust governance involves continuous tracking of processes and evaluating compliance artefacts against established maturity criteria, which helps organisations understand their impact and effectiveness in maintaining data privacy standards.

The development process leaves a compliance trail at every step while facilitating the accumulation of knowledge and learning. Review gates, exemplified by a scorecard, are used to evaluate artefacts, thereby establishing trust within the organisation. This approach not only serves as an audit mechanism but also functions as a coaching tool, enhancing trust across various teams and ensuring accountability during internal audits.
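A review gate of this kind can be sketched in a few lines: score an artefact against each compliance area and pass the gate only when every area meets a threshold. The area names, scale, and threshold below are illustrative assumptions:

```python
# Minimal review-gate sketch: score an artefact per compliance area and
# pass the gate only if every area meets a threshold.
# Area names and the 1-5 scale are assumed for illustration.
GATE_THRESHOLD = 3

def review_gate(scores, threshold=GATE_THRESHOLD):
    """Return (passed, failing_areas) for a scorecard of area -> score."""
    failing = sorted(area for area, score in scores.items() if score < threshold)
    return (not failing, failing)

scorecard = {
    "privacy_impact": 4,
    "ethical_impact": 2,
    "risk_readiness": 3,
}
passed, failing = review_gate(scorecard)
print(passed, failing)  # failing areas become coaching targets, not just audit findings
```

Returning the failing areas, rather than a bare pass/fail, is what lets the same mechanism serve as both audit trail and coaching tool.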

Customisation is essential for creating suitable operating procedures that align with the organisation’s needs. However, challenges may arise when the organisation faces technical limitations in implementing necessary actions, prompting the need to strategise on how to address these gaps effectively.

To ensure the successful delivery of strategic initiatives, it is essential to prioritise scalability and sustainability as we enhance our skills and talents. This involves integrating governance into our operations beyond mere projects, policies, and procedures. A sustainable approach must be at the forefront of our efforts, fostering growth while maintaining the integrity of our initiatives. Questions and further insights are welcome for discussion.

Figure 6 Key Takeaways and Best Practices

Data Management and Risk Management Frameworks

The workflow process begins with the identification of a use case, followed by the validation of technical and functional requirements, as well as the assessment of commercial and technical feasibility. Upon agreement, appropriate workflows are developed across various areas, including risk evaluation and unified governance. This involves aligning various frameworks, such as the Business Architecture Body of Knowledge, which addresses stakeholders, value propositions, and value streams.
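The gated intake described above can be sketched as an ordered pipeline in which a use case advances only if each validation step passes, and the first failing step is reported. The check names, predicates, and use-case fields are illustrative assumptions:

```python
# Sketch of the gated intake workflow: use case -> requirements validation
# -> feasibility assessment. Steps and fields are invented for illustration.
def intake_workflow(use_case, checks):
    """Run ordered (name, predicate) checks; stop at the first failure."""
    for name, check in checks:
        if not check(use_case):
            return ("rejected", name)
    return ("approved", None)

checks = [
    ("technical_requirements", lambda uc: uc["has_data"]),
    ("functional_requirements", lambda uc: uc["owner_identified"]),
    ("commercial_feasibility", lambda uc: uc["expected_value"] > uc["cost"]),
]

use_case = {"has_data": True, "owner_identified": True,
            "expected_value": 100, "cost": 250}
print(intake_workflow(use_case, checks))  # -> ('rejected', 'commercial_feasibility')
```

Because the result names the failing gate, the workflow records why a use case stopped, which is exactly the compliance trail the later sections describe.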

A well-structured data strategy must align with the business architecture’s goals and capabilities, while also encompassing processes, disciplines, best practices, and established standards. Additionally, it incorporates Data Management blocks and records management frameworks, ensuring comprehensive coverage of Data Management processes.

Effective Data Management and alignment with organisational risk tolerance are critical, and utilising various assessment frameworks is essential for achieving these goals. Frameworks like the Data Science Body of Knowledge (DSBoK) provide the necessary capabilities for enhancing data literacy and software engineering skills, while the Data Management Capability Assessment Model (DCAM) evaluates organisational capabilities. Additionally, maturity models such as CMMI, Gartner, and TDWI play a vital role in assessing the effectiveness of Data Management processes. By understanding the distinctions among these models, organisations can enhance their maturity levels and ensure a robust approach to Data Management.

In the realm of software engineering and Data Management, several key frameworks and models play a crucial role in guiding best practices and enhancing efficiency. Notably, the Capability Maturity Model Integration (CMMI) for software engineering and the NORA framework in Saudi Arabia provide structured approaches that align with TOGAF principles. Furthermore, the National Data Management Office (NDMO) oversees a broader scope than the Data Management Reference Model (DMROC), emphasising the importance of comprehensive Data Governance.

Various ISO standards, such as those focusing on Master Data, Data Quality, and asset management, further reinforce the standards and protocols essential for effective Data Governance. Overall, these frameworks and standards collectively promote a more robust and efficient approach to managing software and data resources.

Other notable frameworks include BCBS 239 and the National Archives of Australia, as well as the NIST AI Risk Management Framework. Industry reference models from organisations like IBM and Teradata, alongside universal data models such as those developed by Len Silverston, further illustrate the landscape of frameworks influencing Data Management practices. The mention of business architecture frameworks, including DCAM and the DMBOK lifecycle and functions, underscores the complexity and importance of navigating these various models.

Data Management and cybersecurity are critical components in today’s digital infrastructure, with various frameworks and models providing valuable guidance. Key references in this discussion include the Saudi framework, the NIST framework, and the PPDM model, all of which emphasise the need for effective strategies in safeguarding data. Ongoing collaborations with organisations such as ARAMCO further underscore the importance of identifying the most pertinent models to meet industry objectives.

Figure 7 Mapping Frameworks

Figure 8 Mapping Frameworks Pt.2

AI Governance and Ethics in the Insurance Industry

In recent research on insurance industry frameworks, Howard explored various aspects of AI governance, with a particular focus on the limited availability of up-to-date guidance. For instance, the DMBoK framework, last published in 2017, does not address generative AI. Notably, the American College Centre for Ethics has conducted significant work on ethical AI in life insurance, emphasising the need to balance innovation with accessibility. Additionally, the European Union is progressing with numerous legislative acts, while ISO advances the 42001 series. At the same time, the National Association of Insurance Commissioners highlights the importance of understanding AI models and regulations across various states.

Figure 9 Framework Catalogue

Figure 10 Framework Catalogue Pt.2

Figure 11 ‘AI, Ethics and Life Insurance: Balancing Innovation With Access’

Insurance Specification and Data Management in the Oil and Gas Industry

In the UK, a specific insurance specification focused on analytics exists, which may come as a surprise to those in the insurance industry. This specification is part of the UK insurance data capability framework, underscoring the importance of industry-specific Data Management alongside generic frameworks, such as TDWI and DCAM. For the oil and gas sector, the PPDM (Professional Petroleum Data Management) standards are crucial, and there is also a Data Management certification related to this organisation. Overall, these insights highlight the diverse capability needs across different industries.

The insurance and informatics sectors are supported by a variety of frameworks that enhance operational effectiveness and strategic decision-making. Notable frameworks include Bismarck, Life and Annuity Insurance Infotech, and various business reference architectures, with key examples being Gartner, IBM, Informatica, and the ISO 38500 series. However, it is important to note that some links, particularly for the latter, may be inaccurate. Awareness of these diverse capabilities is crucial, as users may face information overload when exploring the numerous available options, including the port reference architecture. Engaging with questions about these frameworks is encouraged to ensure a thorough understanding and facilitate clarity.

Figure 12 NAIC – Insurance Topics

Figure 13 datos – Reports

Figure 14 Datos Insurance Data Capability Maturity Model

Figure 15 ‘Life and Annuity Insurance Industry Business Reference Architecture’

Figure 16 ‘Reference Architecture Framework’

Figure 17 ISO 21637:2020

Understanding and Implementing AI Risk Management Frameworks

The examination of AI Governance frameworks, such as DAMA and various AI risk management models, reveals intricacies in their principles and terminologies that can hinder effective interpretation. Howard argues for a foundational approach, prioritising the evaluation of specific artefacts related to AI governance instead of simply adopting existing frameworks.

This method highlights the crucial importance of having clear glossaries, as the diverse meanings of terms across frameworks significantly impact understanding and successful implementation. Ultimately, achieving clarity in terminology is essential for navigating the complexities of AI governance effectively.

The importance of documentation in governance, particularly within the Risk Management Framework (RMF) under the Govern function (Govern 1.2), highlights that effective AI risk management relies on well-defined policies, processes, and procedures, which are crucial for ensuring individual and organisational accountability. Key considerations include assessing how these policies foster trust in the system, what measures have been implemented to guarantee appropriate use, and how documentation supports the system’s objectives. Additionally, it is essential to evaluate whether the model outputs align with the organisation’s core values. Therefore, every artefact created should be explicitly linked to relevant documentation to demonstrate its connection to these governance aspects.

Effective risk management requires a structured approach that navigates the complexities of the Risk Management Framework. By emphasising governance and the oversight of organisational resources, it becomes crucial to manage risk tolerances systematically. Howard noted that participants should utilise relevant data sets and documentation to inform their strategies, ensuring a thorough understanding of each component involved. Ultimately, this comprehensive approach leads to more informed decision-making and improved risk management outcomes.

The Risk Management Framework (RMF) plays a crucial role in enhancing our understanding of various risk management frameworks. As part of NIST’s guidance, it offers clear definitions that facilitate comprehension and application across different contexts.

This streamlined approach not only helps identify key areas of risk management that require attention but also encourages ongoing exploration of specific categories within the framework. Ultimately, the RMF serves as a foundational tool for enhanced risk management and informed decision-making.

Figure 18 Insurance Industry Appropriate Frameworks

Figure 19 Framework Catalogue Pt.3

Figure 20 NIST.AI.100

Figure 21 Framework Catalogue Pt.4

Frameworks and Use Cases for Insurance

When considering frameworks for insurance, it’s crucial to focus on distinct frameworks without overlaps, such as choosing between IBM Data Governance and the Gartner EIM framework based on your technology needs, like Informatica or Collibra. Starting with clear business objectives is essential, as they guide the selection of potential AI use cases that align with those objectives. Key areas to explore include customer experience personalisation, market analysis—exemplified by Spotify’s success—process automation through RPA, product and service innovation, decision intelligence, and IT enablement. It’s advisable to begin with one use case, ensuring its feasibility and potential to deliver significant business value.
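The "begin with one use case" advice amounts to a simple prioritisation exercise: score each candidate on feasibility and business value, and pilot the single best. The candidate names and scores below are invented for illustration:

```python
# Hedged sketch of use-case prioritisation: rank candidate AI use cases
# by feasibility * business value, highest first. All values are illustrative.
def rank_use_cases(candidates):
    """Sort candidates by feasibility-weighted value, best first."""
    return sorted(candidates,
                  key=lambda c: c["feasibility"] * c["value"],
                  reverse=True)

candidates = [
    {"name": "customer_personalisation", "feasibility": 4, "value": 5},
    {"name": "claims_rpa", "feasibility": 5, "value": 3},
    {"name": "decision_intelligence", "feasibility": 2, "value": 5},
]
best = rank_use_cases(candidates)[0]
print(best["name"])  # the single use case to pilot first
```

Multiplying the two scores deliberately penalises high-value but low-feasibility ideas, matching the webinar's emphasis on deliverability over ambition.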

Figure 22 Framework Categories

Figure 23 Possible Frameworks for Insurance

Figure 24 Business Architecture Frameworks/ Data Management Frameworks

Future of AI Models

Recent statistics highlight significant challenges in implementing AI initiatives. According to Gartner, only 54% of AI models make it from pilot into production, while 85% of AI projects are at risk of failure. Additionally, 70–80% of AI projects fail to meet ROI expectations, and 42% of enterprises report abandoning initiatives. Overall, approximately 80% of AI projects fail to achieve their intended outcomes, raising concerns about the foundational approaches taken at the outset. This situation underscores the pressing need for a reassessment of strategies in AI deployment.

The importance of recalibrating strategies for successful implementation is underscored by the lessons learned from the dot-com boom of the early 2000s, where many rushed to launch websites without adequate preparation and faced widespread failures. In the current landscape, a company called Prodago is leveraging generative AI to analyse various frameworks and identify the essential processes for achieving success. Although the journey may involve setbacks, the speaker is confident that through careful adjustments and a deep understanding of the circumstances, effective solutions can ultimately be developed.

Establishing a structured framework is crucial for effectively tackling challenges such as data quality, explainability, data sovereignty, and risk mitigation. Without a cohesive strategy, organisations often struggle to connect disparate elements, resulting in confusion and a significant drain on resources. Therefore, prioritising a well-defined approach can streamline efforts and enhance the overall effectiveness of Data Management initiatives.

By implementing appropriate AI frameworks, Data Management practices, cybersecurity measures, and ethical considerations, participants can better visualise and recalibrate their approach. The intention is to foster focus and clarity amidst the complexity of the issues at hand, ultimately aiding in the effective organisation and classification of information.

Collaboration among individuals with diverse expertise is crucial for navigating the challenges of procurement in the AI field. Organisations face significant hurdles when purchasing functions from AI vendors, particularly in areas such as model training, data ownership, and output accessibility. As partnerships may dissolve, it becomes essential for organisations to be vigilant about their procurement practices, especially when dealing with Large Language Models. Ultimately, a thorough understanding of these issues can help ensure the successful and sustainable implementation of AI.

The decision-making process surrounding the adoption of various Data Management tools, such as ChatGPT, is crucial for advancing the field of Data Management. This challenge mirrors the hesitation some experienced during the 2009 Big Data Movement, where a lack of understanding hindered progress for some, while others seized the opportunity to innovate and lay a solid groundwork for the industry. Ultimately, confident and forward-thinking professionals in Data Management are poised to catalyse progress and develop effective solutions tailored to their expertise.

The NIST AI Risk Management Framework provides valuable insights into effective governance, outlining the necessary actions, policies, and procedures for managing AI-related risks. Utilising tools like ChatGPT and Copilot, users can engage with the framework to gain a comprehensive understanding of its coverage across various domains, leading to increased confidence in their risk management strategies. It emphasises the importance of structured oversight, including accountability for monitoring data drift and ensuring adherence to established policies. Without leveraging such frameworks, organisations may face significant challenges and risks in their AI implementations.

The NIST and AI frameworks provide a reassuring approach to understanding the risks associated with artificial intelligence. The comprehensive nature of these frameworks addresses various concerns, alleviating anxieties by ensuring that risks have been thoughtfully considered. By encouraging others to explore these materials, Howard highlighted the extensive information available on risk oversight, security, privacy, and quality Metadata, including the FAIR principles. Overall, these resources significantly enhance confidence in effective risk management in the context of AI.

Figure 25 Business Objectives/ AI Use Cases/ Develop/ Buy AI Systems/ Business Value

Challenges and Success Factors in AI Development

The low success rates of AI projects are a significant concern, particularly due to the high number of pilot initiatives that fail to transition into full production. Paul highlighted that while Gartner reports that only 54% of AI models reach production, many projects encounter hurdles stemming from the complexities of AI model testing and training. Unlike Business Intelligence (BI), which benefits from clearly defined requirements and established testing frameworks, AI demands a more nuanced approach to cultivate trust and ensure reliable decision-making.

Factors such as overtraining of models and issues related to data quality further complicate the implementation process, emphasising the necessity of a thorough understanding of data flows and cascades in successful AI development. Ultimately, addressing these challenges is crucial for enhancing the effectiveness of AI initiatives.

The quality of data is crucial for building effective machine learning models, as poor data can lead to unreliable results. Professionals in the field often grapple with data quality issues and the challenge of instilling trust in the decisions made by these models. Recent discussions have highlighted the ongoing debate between traditional methodologies, such as the DMBoK, and the evolving landscape of artificial intelligence. Many argue that inadequate governance and management practices contribute to data quality concerns, which in turn lead to mitigation issues in model development.

In the evolving vendor landscape, a concerning trend has emerged where opportunistic companies hastily introduce AI solutions without a deep understanding of the technology or access to high-quality data. This rush to market results in a cycle of trial and error, as these vendors seek to capitalise on the hype surrounding AI to attract users and glean insights from their experiences and feedback. Ultimately, this approach not only undermines the potential of AI but also risks frustrating users who seek reliable and effective solutions.

Organisations must prioritise Data Quality and establish robust frameworks before integrating new tools, as failing to do so can lead to ineffective business intelligence (BI) dashboards that frustrate executives. While the process of preparing data might seem simple, delivering meaningful insights proves to be much more complex and challenging. Therefore, without a solid foundation in Data Management, organisations risk undermining their decision-making capabilities and overall success.

Organisations encounter significant challenges in managing customer data, particularly in their reliance on Business Intelligence (BI) solutions. While these solutions offer advanced capabilities, many vendors fail to prioritise essential factors such as data quality, processes, and available resources. This oversight creates unrealistic expectations for quick fixes to complex issues, ultimately hindering effective Data Management. Therefore, addressing these critical aspects is essential for organisations to leverage the potential of their BI strategies fully.

The ongoing trend reveals that while executives are often impressed by BI innovations—such as those related to balanced scorecards—there remains a fundamental dependency on the integrity of incoming data and effective management practices to truly drive meaningful change.

Figure 26 AI Governance Framework

AI Standards, Regulations, and Corporate Governance

International standards, particularly those set by ISO, play a critical role in shaping compliance and legal accountability across nations. When ISO publishes a standard, national standards bodies are tasked with adapting it to local contexts, ensuring relevance in various legal frameworks. Consequently, if a standard achieves global acceptance, it often becomes a default reference point in legal situations, reinforcing its significance in both compliance and accountability.

This raises the critical issue of accountability, as individuals or organisations may be questioned in court regarding their adherence to such standards, and ignorance of a standard’s existence is not a valid defence. This highlights the importance of staying informed about evolving standards and regulations, including the EU AI Act and other emerging frameworks.

In highly regulated environments such as the nuclear industry, adherence to regulations is crucial for ensuring safety and compliance. This commitment relies on the collective responsibility of all individuals involved, underscoring the importance of a strong culture of accountability. The King Committee and the King Code serve as essential frameworks in corporate governance, currently addressing the challenges posed by emerging technologies, including artificial intelligence. By adopting a broad approach, the King Code encompasses future advancements beyond AI, reflecting the ongoing uncertainty in governance practices. Ultimately, these frameworks emphasise the importance of adaptability in regulatory compliance in an ever-evolving technological landscape.

The integration of new and developing technologies within a company is crucial for effective corporate governance and management. As highlighted in the discussion, companies must prioritise proper preparation and management to navigate the complexities introduced by these technologies. Furthermore, aligning compliance with International Standards on Auditing (ISA) and corporate governance ensures that all operational aspects are cohesive, allowing organisations to adapt successfully to an evolving landscape. Ultimately, addressing these interconnected areas is crucial for achieving sustainable growth and mitigating risk in today’s technological environment.

Figure 27 AI Governance/ AI Projects

Understanding the Layered Approach to AI Governance

The implementation of AI systems requires a comprehensive framework that adheres to established standards, such as the King Code’s principles for the governance of information technology. This framework emphasises the integration of internal business objectives, principles, values, and risk tolerance, which play a crucial role in shaping enterprise architecture and guiding governance practices. By aligning policies and procedures with AI project requirements, organisations can ensure effective governance and achieve their strategic goals.

Each AI project follows a distinct life cycle that includes planning, data collection and processing, model building and validation, deployment, and ongoing monitoring. Additionally, governance evaluates the eligibility and appropriateness of AI at various stages, including procurement, design, and operations, highlighting the interconnectedness of these layers in the AI implementation process.
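The life cycle and its governance checkpoints can be illustrated with a minimal sketch: the stage names follow the description above, while the check-tracking logic is an assumed simplification for illustration:

```python
# Illustrative model of the AI project life cycle with a governance check
# expected at each stage; stage names follow the text, checks are assumed.
LIFECYCLE = ["planning", "data_collection", "model_building",
             "validation", "deployment", "monitoring"]

def governance_gap(completed_checks):
    """Return the first stage whose governance check is missing, or None
    if the whole life cycle has a complete compliance trail."""
    for stage in LIFECYCLE:
        if stage not in completed_checks:
            return stage
    return None

print(governance_gap({"planning", "data_collection"}))  # -> 'model_building'
print(governance_gap(set(LIFECYCLE)))                   # -> None
```

Scanning stages in order reflects the layered approach: a project cannot claim governance of deployment while an earlier stage's check is still missing.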

The integration of AI into business operations has evolved from initial reluctance to a more constructive and controlled approach. By allowing technology teams to test AI applications with specific data sets in a regulated environment, organisations can thoroughly assess AI capabilities while maintaining accountability and data safety for the data owners. This process not only fosters trust among stakeholders but also encourages a broader adoption of AI, as businesses recognise the dynamic nature of this technology compared to traditional applications. Ultimately, this shift signifies a new era of confidence and innovation in the use of AI in various business contexts.

To effectively implement AI projects, it’s essential to regularly monitor decision-making processes that involve human oversight and ensure that appropriate policies are in place to identify potential failures and challenges. A risk management framework can guide this process by highlighting key areas of focus. Understanding business priorities, risk tolerance, compliance contexts, and existing capabilities is crucial in determining the next steps for AI initiatives. It is essential to pay close attention to these aspects to navigate the complexities of AI implementation successfully.

A systematic approach to identifying challenges and opportunities in use cases like fraud and risk detection is vital for organisations aiming to enhance operational efficiency. This approach involves a comprehensive analysis of data operations and engagement challenges to identify critical concern areas, known as “red lights.” By evaluating the potential benefits and impacts in these areas, organisations can gain valuable insights into their risk tolerance and prioritise their responses effectively. Ultimately, this process not only helps in mitigating risks but also supports informed decision-making and improved operational strategies.
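The "red lights" screening can be sketched as comparing each concern area's assessed impact against the organisation's risk tolerance. The concern areas, scores, and tolerance value below are illustrative assumptions:

```python
# Sketch of "red light" screening: flag concern areas whose assessed impact
# exceeds the organisation's risk tolerance. All values are illustrative.
RISK_TOLERANCE = 3  # impacts above this threshold are flagged red

def red_lights(impact_scores, tolerance=RISK_TOLERANCE):
    """Return concern areas whose impact exceeds tolerance, worst first."""
    flagged = [(area, score) for area, score in impact_scores.items()
               if score > tolerance]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

impacts = {"fraud_detection_bias": 5, "data_residency": 2, "model_drift": 4}
print(red_lights(impacts))  # -> [('fraud_detection_bias', 5), ('model_drift', 4)]
```

Sorting the flagged areas worst-first gives the prioritised response list the text describes, with anything under tolerance left off the list entirely.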

Figure 28 Framework Catalogue Pt.5

Figure 29 Prodago: Where to start?

AI Framework and Risk Management Strategies

The trustworthiness taxonomy presented is derived from the NIST AI Risk Management Framework and focuses on addressing key areas essential for the responsible development of AI. It encompasses various phases, including accountability and transparency, explainability and interpretability, mitigation of harmful bias, privacy protection, and ensuring safety, security, resilience, validity, and reliability.

To effectively implement this taxonomy, important questions arise regarding legal compliance, liability considerations, and insurance adherence across different jurisdictions. These inquiries are critical for guiding decision-making in the responsible deployment of AI technologies.

The NIST framework plays a crucial role in alleviating risk-management concerns in software engineering by providing a structured, reassuring approach: it acknowledges that issues can occur while detailing mitigation strategies and the actions required. It also emphasises documenting measurement methodologies, test sets, metrics, and processes, which surfaces potential problems that may previously have been overlooked. In this way, the framework fosters a comprehensive understanding of risk management and ultimately strengthens software engineering practice.

The effective use of a risk management framework is critical for understanding accountability, robustness, reliability, and resilience. It involves navigating a complex landscape of laws, standards, and policies, ensuring that all relevant factors are accounted for while linking operations and tools to the broader framework. Integration with maturity models, such as DCAM capabilities, is essential, encompassing components like AI strategy, outcome metrics, data operations, risk management, and ethical considerations, including privacy and bias management. Organisations should also recognise and leverage the valuable insights provided by industry vendors, rather than hastily adopting technologies without a robust control and recalibration strategy.

Figure 30 AI Use Cases

Figure 31 AI Risks

Figure 32 Framework Catalogue Pt.6

Figure 33 Framework Catalogue Pt.7

Figure 34 SWEBoK Version 4

Figure 35 Laws, Standards and Policies

Figure 36 Existing Capabilities

Operational Procedures and NIST Standards in Cybersecurity

Howard shared that next week's webinar will focus on the methodology for implementing Operational Procedures (OPS), including how to ensure they are actionable while accommodating various frameworks. He will also explore how to use scorecard artefacts as evidence, ensuring they validate processes and provide the necessary assurances.

Decision Intelligence and AI in Decision Making

The upcoming sessions will focus on simulating case studies related to decision intelligence and the application of Generative AI in knowledge management for Data Governance. While the primary simulations will take place the week after next, there is a possibility of conducting a preliminary session next week to discuss these use cases. Additional topics of interest include evaluating ethics and biases in training data, as well as generating effective training datasets, which these scenarios often require. Notably, Howard shared that he would also explore simulation applications in areas such as fraud and risk detection within the framework of decision intelligence.

Concerns are growing in forums regarding the use of AI in decision-making by large corporations, particularly in the banking sector and with companies like Amazon. Many individuals have reported instances where these organisations have made incorrect decisions yet remain steadfast in their refusal to reassess these conclusions through a human lens. This reliance on AI without adequate human oversight raises significant questions about accountability and the potential consequences of erroneous judgments.

Howard emphasised that AI should never be the final decision-maker, asserting that a human must always make the ultimate decisions. He discussed the evolution from Robotic Process Automation (RPA) to agentic AI, highlighting that while RPA laid the groundwork for digital decisioning by establishing clear cause-and-effect relationships akin to simple “if-then” statements, it often relies on rules-based programming rather than advanced machine learning. Howard expressed concern for RPA engineers, noting that a significant portion of RPA work remains focused on basic coding rather than tapping into the potential of machine learning technologies.

Understanding the relationship between cause and effect is crucial for successful RPA efforts. When a clear and consistent correlation exists, ongoing monitoring becomes vital, and integrating digital decision-making tools can enhance effectiveness, provided that adequate checks and balances are in place. Conversely, in scenarios with uncertain relationships, it is essential to use exploratory analytics, which demands increased human oversight to navigate the risks arising from ambiguity. Ultimately, this highlights the importance of assessing the quality of information and the associated risks in RPA implementations.
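The distinction drawn above can be sketched in code. The following hypothetical claim-routing rule (not a system discussed in the webinar; the threshold and amounts are assumptions) applies a simple "if-then" decision only when the cause-and-effect relationship is clear, and escalates ambiguous cases to a human reviewer.

```python
# Hypothetical sketch: automate only where the cause-and-effect rule is
# clear and consistent; escalate uncertain cases to human oversight.
# Threshold values are illustrative assumptions.

def route_claim(amount: float, rule_confidence: float) -> str:
    """Apply a simple if-then rule when confidence is high; else escalate."""
    if rule_confidence >= 0.9:  # clear, consistent correlation
        return "auto-approve" if amount <= 1000 else "auto-refer"
    # Uncertain relationship: exploratory analytics plus human review
    return "human-review"

print(route_claim(500, rule_confidence=0.95))   # clear rule applies
print(route_claim(500, rule_confidence=0.6))    # ambiguous, goes to a human
```

The design choice worth noting is that the confidence check comes before the business rule: automation is a privilege granted only where the correlation has been validated, never the default path.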

The Challenges and Potential of AI in Healthcare

The societal perception of errors reveals a striking double standard between human and AI mistakes, particularly in decision-making contexts. Individuals tend to receive leniency for their missteps, whereas AI technologies, including robotic process automation (RPA), face intense criticism when they fail to perform as expected.

This heightened scrutiny is largely driven by concerns surrounding AI hallucination, a phenomenon that significantly erodes trust in generative AI systems. The result is a perception that human errors amount to ordinary incompetence, while similar mistakes by AI are simply unacceptable. Addressing this discrepancy is crucial for fostering a more balanced view of accountability in both human and technological contexts. Moreover, the use of AI in critical decision-making areas, such as healthcare, has already led to high-profile lawsuits, underscoring the serious implications of deploying AI in sensitive contexts.

In the late 1990s and early 2000s, the U.S. faced significant challenges with health maintenance organisations (HMOs), where untrained call takers made critical treatment approval decisions, leading to a surge in lawsuits that ultimately drove 99% of these organisations out of business. Currently, a similar issue is emerging as health insurers are increasingly sourcing medical supplies and equipment and compelling hospitals to use specific products and technologies. This trend creates additional pressure on the healthcare industry, forcing compliance with insurer demands regarding the use of certain devices and medications.

The integration of AI in healthcare, particularly in imaging technologies, presents both significant advancements and noteworthy challenges. While AI has demonstrated remarkable capabilities in enhancing imaging modalities such as MRI, CT, and X-rays—often detecting abnormalities like cancer that may elude human observation—there remains a level of scepticism regarding its reliability in diagnosing medical conditions. Understanding the underlying information and establishing criteria for AI eligibility is crucial in navigating this complex landscape. Ultimately, while the promise of AI in diagnostics is considerable, careful consideration of its risks and limitations is crucial for the safe and effective implementation of this technology.

Challenges of Data Governance and AI

Effective decision-making processes are crucial in combating fraud, particularly in the context of Robotic Process Automation (RPA). A participant shared their experience at Vodacom, highlighting how advanced fraudsters were able to spoof domains and manipulate contact information on registers, rendering fraudulent bank statements nearly indistinguishable from authentic ones to basic Optical Character Recognition (OCR) systems. This underscores the importance of enhancing detection methods to stay ahead of ever-evolving fraudulent tactics.

Relying exclusively on Optical Character Recognition (OCR) technology for document processing poses significant risks due to its limitations in recognising subtle variations in fonts and formatting. Howard emphasised the importance of human oversight, highlighting that even advanced AI-enhanced OCR systems often struggle to interpret context in critical documents accurately. Ultimately, integrating human judgment with technological tools is essential to ensure the accuracy and reliability of document processing.
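To make the escalation principle concrete, here is a minimal sketch (a hypothetical verification routine, not a real OCR library API) in which a field extracted by OCR is accepted only when both the recognition confidence and the formatting match expectations; anything anomalous, such as the subtle font substitutions fraudsters use, goes to a human reviewer.

```python
# Hypothetical sketch: never trust OCR output blindly. Accept a field only
# when confidence is high AND formatting matches the expected template;
# otherwise escalate to a human. Names and thresholds are illustrative.

EXPECTED_FONT = "Helvetica"

def verify_field(text: str, confidence: float, detected_font: str) -> str:
    """Accept an extracted field or route it to human review."""
    if confidence < 0.85:
        return "human-review"   # OCR is unsure about the characters
    if detected_font != EXPECTED_FONT:
        return "human-review"   # subtle font variation: possible forgery
    return "accepted"

print(verify_field("R12,345.00", confidence=0.97, detected_font="Helvetica"))
print(verify_field("R12,345.00", confidence=0.97, detected_font="He1vetica"))
```

In practice the checks would cover many more signals (layout geometry, issuer metadata, domain verification), but the structure is the same: the automated path is the narrow one, and human judgment is the default for anything that deviates.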

The integration of risk management and corporate ethics is essential when adopting new technologies, particularly Artificial Intelligence (AI). Attendees in the discussion emphasised the importance of recognising inherent risks and ethical considerations, advocating for a balance between corporate ethics and risk tolerance. A significant concern raised was the tendency of senior executives to misunderstand new technologies, which can lead to an overreliance on AI as a quick fix for complex challenges.

This reflects a broader frustration within the executive ranks, as they often seek immediate solutions reminiscent of past trends such as data lakes. This approach typically fails to tackle underlying issues effectively. Ultimately, addressing both risk and ethics is vital to ensure that technology serves as a genuine solution rather than a simplistic answer to multifaceted problems.

The frustrations with ineffective leadership in organisations often arise from a perceived lack of accountability among executives. Many leaders fail to achieve desired results, raising questions about their qualifications and effectiveness in their roles. This discontent is further exacerbated by the chaotic environments within these organisations, which seem to thrive on exclusivity and closed decision-making processes, often occurring in informal settings like golf courses and pubs. Using the analogy of transitioning from managing a motorboat to sailing, it becomes evident that leaders face increasingly complex challenges that require a deep understanding of new dynamics and strategies. Ultimately, addressing these issues is crucial for fostering a more effective and inclusive leadership culture.

Effective Metadata Management and Data Governance are increasingly essential as AI technologies evolve, as highlighted by a recent discussion among industry participants. They emphasise the importance of collaboration in navigating these challenges, noting that addressing issues individually can lead to confusion and complications. While some argue that AI could eliminate the need for Data Governance, there is a strong consensus that a structured approach remains crucial for tackling ongoing obstacles.

This necessity is likened to the difference between flying a stable Boeing aircraft and the heightened awareness required for piloting a glider, illustrating the need to remain vigilant against external factors in complex data environments. Ultimately, a proactive and unified strategy is crucial for navigating the complexities of modern data landscapes successfully.

Collaboration is crucial for the successful development of space technology. Howard argued that establishing a centre of excellence fosters teamwork and innovation, which are key to overcoming challenges. While perfection may remain elusive, effective controls and human oversight can help detect potential issues early in the process. Additionally, actively seeking feedback from external customers is crucial for refining strategies and increasing the likelihood of achieving desired outcomes. In conclusion, a collaborative approach, grounded in oversight and customer feedback, is essential for success in the complex field of space technology.

If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.

Additionally, if you would like to watch the edited video on our YouTube channel, please click here.

If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)

Don’t forget to join our exciting LinkedIn and Meetup data communities so you don’t miss out!
