Executive Summary
This webinar outlines key aspects of implementing and evaluating AI in compliance with data privacy regulations, with a particular focus on the EU GDPR and the AI Act. Rowana Urip Santoso emphasises the importance of understanding data privacy principles, the roles and responsibilities of stakeholders, and the ethical considerations in personal data processing. She addresses the implications of AI systems within regulatory frameworks, including the NIST AI Risk Management Framework, and explores the risks, legal implications, and future prospects of AI. Key points include the need for effective data management and accountability, conducting data privacy impact assessments, and navigating cross-border regulatory challenges while ensuring compliance with data storage and ethical guidelines in AI model training.
Webinar Details
Title: AI Meets Privacy: Navigating GDPR and the EU AI Act with Rowana Urip Santoso
Date: 2025-08-20
Presenter: Rowana Urip Santoso
Meetup Group: DAMA SA User Group Meeting
Write-up Author: Howard Diesel
Implementing and Evaluating AI with Rowana Urip Santoso
Rowana Urip Santoso was welcomed by Howard Diesel, who praised her reputation based on recommendations from his colleagues. Rowana introduced herself as a Data Privacy Officer and Compliance Officer in the automotive industry in the Netherlands with nearly four years of experience, following six years in a similar role within the health insurance sector. She touched on her journey from implementing data breach procedures to now focusing on AI, specifically in developing an AI framework and evaluating AI tools. Lastly, Rowana noted that she aimed to share her insights and practical tips for effectively working with AI in the session.
Figure 1 Data Protection in the AI Act
Data Privacy Regulation, AI Act, and Ethics
Rowana shared that the webinar’s agenda would cover the GDPR data privacy regulation in the EU, focusing on the definition of personal data, data processing, and the roles defined within the GDPR, along with its core principles, which are similarly addressed in the AI Act. Rowana then highlighted the origins and key points of the AI Act, particularly the associated risk levels that must be considered when working with AI systems. Additionally, a brief overview of ethical considerations in AI would be provided to help mitigate risks.
Figure 2 Agenda
Understanding the Basics of the EU GDPR
The General Data Protection Regulation (GDPR) defines personal data as any information that can identify a natural person, encompassing a wide range of identifiers, both direct (such as a surname) and indirect (like an online ID). Additionally, data processing under the GDPR is broadly defined and encompasses all activities related to personal data, including collection, storage, deletion, and sharing via email. Understanding these key concepts is essential for compliance with the GDPR.
Figure 3 Data Privacy
Figure 4 Definition of Personal Data (GDPR)
Figure 5 Definition of Data Processing (GDPR)
The Roles and Responsibilities in the GDPR
It’s crucial to understand the roles involved in the processing of personal data. If you are responsible for collecting personal data, you may act as a data controller. For instance, a startup selling products must collect postal addresses to facilitate the delivery of its products. As the data controller, the startup defines the purpose of collecting this data—for both shipping and marketing—and determines the methodology for processing it.
In the context of data processing, a data processor is typically a supplier that handles personal data on behalf of a data controller, often utilising tools such as customer relationship management (CRM) or HR software. Under GDPR, it is essential to have a legal basis for processing personal data, which can include obtaining consent from data subjects, fulfilling contract obligations, or complying with legal requirements. There are six legal grounds for data processing, with “legitimate interest” being particularly significant for businesses, as it allows the data controller’s interests to potentially outweigh those of the individual.
Figure 6 Roles in the GDPR
Figure 7 Principles Relating to Processing of Personal Data (1/3)
Understanding the Principles of Personal Data Processing
When processing personal data, it is essential to adhere to principles such as legality, fairness, and transparency. The data subject must be informed about the processing of their data, exemplified by obtaining Debbie’s name and email address for legitimate communication purposes. It is crucial to adhere to purpose limitation, processing only the necessary data for a specific and legitimate reason. The information must remain accurate, allowing individuals, such as “Debbie,” the right to correct any errors, including a mistaken last name. Maintaining these principles ensures compliance with the GDPR and respects the rights of data subjects.
Figure 8 Principles Relating to Processing of Personal Data (2/3)
Data Storage, Ethical Considerations, and Requirements under GDPR
Storage limitations for personal data are governed by legal requirements that specify the duration for which data can be retained. For instance, in the Netherlands, applicant data can only be stored for a short period, while tax-related information may be retained for a maximum of seven years. After this period, the data must be deleted. Additionally, ensuring data integrity and confidentiality is crucial for data security. The role of a Data Protection Officer (DPO) becomes essential when a company processes large volumes or sensitive data, as outlined by the GDPR, making the DPO a necessary requirement in these situations.
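The retention logic described above can be sketched as a simple check. This is a minimal illustration only: the categories and periods below are hypothetical placeholders echoing the Dutch examples, and an actual retention schedule must come from the applicable legal requirements.

```python
from datetime import date, timedelta

# Hypothetical retention schedule (in days), loosely mirroring the Dutch
# examples above; real periods must be confirmed by legal counsel.
RETENTION_DAYS = {
    "applicant_data": 28,    # applicant data kept only briefly
    "tax_records": 7 * 365,  # tax-related data up to seven years
}

def must_delete(category: str, collected_on: date, today: date) -> bool:
    """Return True once the retention period for this category has lapsed."""
    limit = RETENTION_DAYS[category]
    return today > collected_on + timedelta(days=limit)
```

A scheduled job could run such a check over a data inventory to flag records due for deletion, which also feeds the accountability documentation discussed below.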
The necessity for a DPO in an organisation often depends on its size and the volume of personal data processed, particularly in large companies. While the GDPR primarily focuses on empowering individuals with rights and control over their personal data, there is some confusion regarding the ethical aspects tied to GDPR. Unlike the AI Act, which provides clearer guidelines on ethical considerations, the GDPR does not explicitly address ethics, except in terms of fairness. Overall, the ethical principles related to data protection are now more comprehensively covered under the AI Act than within GDPR itself.
Data Accountability and Privacy Principles
Accountability is a crucial principle for data controllers, who must demonstrate their methods and timing in processing personal data. This includes maintaining a register of processing activities that outlines the use of personal data and conducting data privacy impact assessments to evaluate any high-risk data processing. Additionally, the importance of understanding and documenting third-party relationships has gained attention, particularly in light of the Digital Operational Resilience Act (DORA), which emphasises the need for a comprehensive overview of these third parties.
Under the GDPR, it is essential for companies to know whether the data they process is critical for their services and to understand their data flows. This includes maintaining data processing agreements (DPAs) with suppliers and implementing appropriate technical and organisational measures to protect personal data. Data subjects have rights regarding their personal information, including the right to object to automated decision-making and to request access to their data. Companies are required to provide this information within one month; however, if a thorough investigation is delayed due to dependencies on other departments, they must inform the data subject of the delay.
Figure 9 Principles Relating to Processing of Personal Data (3/3)
Figure 10 The EU AI Act
Figure 11 AI Act: Part of the European Commission’s Digital Decade
AI Regulations and the AI Act in the European Commission
The AI Act is a key component of the European Commission’s Digital Decade campaign, which aims to enhance digitalisation leading up to 2030. As the first law specifically addressing artificial intelligence, it adopts a risk-based approach, focusing on product regulation while striking a balance between supporting innovation and protecting fundamental rights. In addition to the AI Act, there are two other relevant laws: the AI Liability Directive, which has been withdrawn due to political pushback, particularly from the US, as it was perceived to stifle innovation by establishing a liability framework for AI systems; and a revised Product Liability Directive that expands existing rules to encompass AI and automated systems.
An attendee expressed frustration with the United States’ inability to effectively address regulatory issues, particularly in comparison to the EU’s leadership in areas such as GDPR and AI regulation. While they acknowledged disagreements with certain EU regulations, they strongly supported the framework established by the GDPR and advocated for the inclusion of AI liability. The attendee then highlighted the lengthy implementation process of GDPR and criticised the patchwork of state laws in the U.S., which complicates regulatory compliance. Despite these challenges, they aim to focus on both EU and U.S. regulations, given their work with international companies, emphasising the importance of a cohesive approach to legal frameworks.
Figure 12 AI Act: Regulation of AI in the EU
Understanding the Implications and Regulations of AI Systems
An AI system is a machine-based system that operates with varying levels of autonomy, which may include human interaction. These systems are designed to be adaptable, capable of generating outputs, and able to influence physical or virtual environments. Streaming services serve as a prime example of AI applications. The EU’s AI Act aims to ensure that AI systems are safe, uphold fundamental rights, and maintain transparency about their identity and data processing activities. The Act establishes specific regulations for high-risk AI systems and general-purpose AI models such as ChatGPT, prohibits certain practices outright, and applies to all EU member states as well as any non-EU entities engaging with the EU market.
The importance of clearly identifying AI systems cannot be overstated, as it directly relates to the specific AI system and its associated risk levels. Key principles from the AI Act mirror those established in the GDPR, underscoring the essential values of trustworthiness and transparency in AI development. Users must be informed when their data is processed by AI, and the systems designed should align with their intended purposes to prevent the production of harmful outputs. Ultimately, developers must prioritise the intended purpose throughout the design and implementation phases to ensure safety and accountability in AI practices.
Accountability for AI systems aligns with GDPR principles, emphasising the need for designated accountability despite variations in legal reflection. Companies utilising AI must ensure AI literacy among employees, as mandated since 2 February 2025, by providing training to help them recognise AI-related risks. Key principles include maintaining accuracy to achieve desired outcomes, implementing storage limitations with a focus on purpose and time, ensuring robust security against cyber threats, and upholding the rights of individuals. Even in fully automated processes, individuals should have the right to object to automated decision-making and profiling, highlighting the importance of protecting individual rights in AI implementations.
Figure 13 AI Act: Salient Points
Figure 14 AI Act: Principles
The Roles and Responsibilities in AI Implementation
The AI Act establishes distinct roles, analogous to but separate from those in the GDPR, including providers, deployers, importers, distributors, and users. Each role carries specific responsibilities, especially for those providing AI tools, who must ensure that their systems are maintained and up to date while informing customers about their AI products. When implementing an AI solution, it’s crucial to identify your role and assess the applicable AI risk level, as different obligations are enforced based on both the role in the AI value chain and the associated risk level. Understanding these elements is essential for compliance with the Act.
Figure 15 AI Act: the Roles
Understanding the Risks and Regulations of AI Systems
The AI Act categorises AI systems into four risk levels, with the first category, unacceptable risk, prohibiting certain systems entirely, such as social scoring systems seen in China that evaluate individuals based solely on their behaviour. In the European Union, such practices are prohibited. For high-risk and limited-risk AI systems, transparency obligations are crucial, as these categories encompass systems that could potentially cause harm or significantly affect individuals. Existing regulations already govern critical sectors, such as aviation and elevators, underscoring the importance of safety and accountability in the development of AI.
In the EU, the AI Act categorises certain AI systems as high risk, particularly those used in sensitive areas such as education and employment. If developing an AI system for screening job applicants, it is crucial to ensure compliance with all regulations outlined in the AI Act, as it necessitates thorough documentation of your processes. Conversely, less impactful applications, like a company chatbot providing policy information, are subject to fewer regulations, provided it is clearly indicated that users are interacting with an AI and not a human. Ultimately, the specific obligations depend on the intended use of the AI system.
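The four-tier triage described above can be illustrated with a simple lookup. The mapping below is a hypothetical simplification using the webinar’s own examples; a real classification requires legal analysis of the AI Act’s annexes, not a dictionary lookup.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk: full documentation and registration"
    LIMITED = "transparency obligations (disclose it is an AI)"
    MINIMAL = "minimal risk"

# Hypothetical, highly simplified mapping of the examples discussed above.
EXAMPLES = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "applicant_screening": RiskLevel.HIGH,  # employment is a sensitive area
    "policy_chatbot": RiskLevel.LIMITED,    # must tell users it is not human
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Look up the risk tier for a known example; default to minimal."""
    return EXAMPLES.get(use_case, RiskLevel.MINIMAL)
```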
Figure 16 AI Act: AI Risk Levels
NIST AI Risk Management Framework and EU AI Act
The NIST AI risk management framework and the EU AI Act share significant common ground in their focus on trustworthiness and risk mitigation in artificial intelligence systems. Both frameworks emphasise the importance of structured governance processes that manage, map, and monitor AI risks, thereby fostering accountability and stakeholder involvement. By prioritising thorough assessments of decision-making processes and system capabilities, they each contribute to a more comprehensive understanding of AI risks. Ultimately, these similarities highlight the necessity of effective risk assessment and privacy implementation in the evolving landscape of AI regulation.
Risks, Legal Implications, and Future of AI Systems
High-risk AI systems, particularly those involving customer data for credit scoring, are subject to strict requirements as outlined in the AI Act. The processing of credit data is deemed high risk due to its significant impact on individuals’ financial opportunities, such as loan approvals. To comply with regulations, companies must ensure comprehensive technical documentation, register their systems, and incorporate human oversight in the AI processes. This raises questions about how organisations can effectively utilise AI to streamline risk management and documentation efforts.
There is a growing interest in Artificial Intelligence (AI) initiatives in both the US and Europe, although many organisations remain cautious about fully embracing them. In the US, advancements are clearly being made, while in Europe, various organisations are exploring the potential of AI, still in the preliminary stages of assessing its feasibility and implications. Many are hesitant to implement AI on a larger scale until there is a better understanding of its fundamentals and the broader impact it may have.
While the use of AI in various sectors is not yet widespread, it’s evident that activity is increasing globally, including in the U.S. Recent discussions have highlighted the potential of AI in enhancing security measures, such as identifying cyberattacks. However, numerous ongoing lawsuits related to AI exist, particularly in the healthcare sector, where erroneous decisions made by AI systems have led to legal challenges. These cases are currently navigating the legal system, reminiscent of the regulatory landscape faced by health maintenance organisations (HMOs) in the late 1990s and early 2000s.
The involvement of non-medical, clerical personnel in treatment decision-making within health maintenance organisations (HMOs) has led to significant legal repercussions, contributing to their decline, with only a few remaining in the country. Concerns have emerged regarding whether the existing legal consequences are sufficient to prompt a reassessment of these practices. Additionally, ongoing legal challenges are arising in relation to artificial intelligence (AI), particularly regarding data ownership and the copyrightability of AI-generated artworks, which currently cannot be copyrighted in the U.S. This combination of legal issues underscores the need for a re-evaluation of both HMO practices and the role of AI in creative fields.
The complexities of intellectual property law in the context of artificial intelligence present significant legal challenges as the technology evolves. As stakeholders navigate these issues, they often overlook the negative implications associated with AI developments, epitomised by the “law of unintended consequences.” The recent implementation of the AI Act emphasises the need for accountability, urging organisations to educate their employees on responsible AI usage and data management. Additionally, past experiences with AI model releases, such as the Grok model, underscore the need for continual improvement and adaptation in this rapidly evolving field. Ultimately, addressing these legal and operational challenges is crucial for fostering a responsible and innovative AI landscape.
Figure 17 AI Act: High Risk Compliance
Ethical and Practical Implications of AI Implementation
The ethical considerations of AI are paramount in ensuring its responsible use in society. Central to these considerations is the necessity for human oversight, which maintains control over AI processes and fosters trust. Furthermore, AI systems must be technically robust and safe while adhering to stringent principles of privacy and data governance.
Transparency is essential, allowing users to recognise when they are interacting with an AI and comprehend the risks involved. Lastly, accountability remains a critical principle, and it is imperative that these guidelines are diligently followed during the implementation of AI technologies. In summary, a commitment to ethical considerations is essential for the successful integration of AI in a manner that respects human values and promotes safety.
Figure 18 Covering Ethics: AI Risks and Mitigation
Data Privacy Impact Assessment and AI Implementation under GDPR
To effectively implement data privacy in AI systems, it is essential to conduct a Data Privacy Impact Assessment (DPIA), particularly since AI often processes personal data. This involves understanding the type of data being processed, its purpose, retention periods, and ensuring its security, all while adhering to GDPR principles.
Prior to utilising AI, one must determine if the data processing falls within GDPR allowances, ensuring that any further data processing aligns with the original explicit purposes. For instance, as a startup, identifying these factors is crucial when presenting personal data usage in your initial pitch or presentation.
The use of personal data, specifically customers’ postal addresses, is essential for fulfilling product orders; without this data, sales cannot occur. However, secondary use of the same postal addresses for training an AI model raises legal concerns, particularly regarding consent. Since customers are not informed that their data will be utilised for AI purposes at the time of purchase, obtaining consent is not feasible. Additionally, using this data is not necessary for the performance of the contract, which complicates the legal grounds for such action. The only potential legal basis for this secondary use could be legitimate interest, necessitating careful assessment of this justification.
To assess the legitimacy of utilising AI in service improvement, it is essential to identify and evaluate your legitimate interest alongside the interests of your customers. This involves a balancing test to determine whether your purpose for using AI outweighs any potential impacts on the customer. Key questions to consider include: What is your specific interest? Is the use of AI necessary for achieving your goals? Does your interest prevail over any potential negative effects on the customer? If your interest is deemed more significant and does not adversely affect the customer, you may proceed with implementing AI to enhance your service.
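The three balancing-test questions above can be represented as a simple checklist. This is a hypothetical sketch only: the real legitimate-interest assessment is a documented legal judgement, not a boolean check, and the field names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class BalancingTest:
    """Hypothetical encoding of the three balancing-test questions."""
    interest_identified: bool  # What is your specific interest?
    ai_necessary: bool         # Is AI necessary to achieve the goal?
    interest_prevails: bool    # Does it outweigh the impact on the customer?

    def may_proceed(self) -> bool:
        # Only if all three answers are "yes" can the AI use go ahead.
        return all(
            [self.interest_identified, self.ai_necessary, self.interest_prevails]
        )
```

Recording each answer with its justification, rather than just the outcome, is what turns this checklist into the documented assessment the GDPR’s accountability principle expects.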
Effective AI Management: Privacy, Risk, and Compliance
When preparing for the integration of AI systems in large organisations, it’s crucial to conduct privacy impact assessments and legitimate interest assessments to identify and mitigate data-related risks while ensuring compliance with fundamental rights. A multidisciplinary approach is essential, involving IT for system integration within the existing IT landscape, as well as legal teams to address agreements, intellectual property, and fair purchasing conditions for third-party AI systems. Additionally, it’s essential to strike a balance between innovation and the business environment to ensure the responsible and accountable use of AI technologies.
Figure 19 Tips and Tricks
Similarities of GDPR and AI Act
The GDPR and the AI Act share many similarities, particularly in areas such as data protection, transparency, and accountability. Both frameworks require thorough documentation of activities related to AI and the processing of personal data, along with the imposition of significant fines for violations.
A key challenge lies in enhancing AI literacy among employees to ensure proper training and compliance with individual rights. Additionally, while GDPR emphasises data minimisation, the AI Act allows for broader data sets, necessitating extensive documentation and monitoring, which can lead to increased administrative burdens.
Figure 20 Summary
Memory and Knowledge Extraction in AI Systems and Chat Bots
The nature of AI memory is often misunderstood, particularly the belief that AI systems lack the capacity to remember past interactions. While AI tools are designed to analyse data, identify patterns, and enhance their performance, they typically do not retain original data after learning from it. This raises concerns regarding the absence of mechanisms, such as non-disclosure agreements, to manage or limit what an AI remembers from user interactions. Furthermore, generative AI, including chatbots, can extract and store tacit knowledge from users over time, prompting questions about the extent of their memory regarding individual users. Ultimately, understanding the limitations and nuances of AI memory is crucial in navigating interactions with these technologies.
Jurisdiction of AI Act and GDPR
The AI Act shares a similar jurisdictional scope with the GDPR, meaning it is not limited by geography. For instance, if a South African company engages in AI activities that involve EU citizens or residents, it must comply with the AI Act, which has a similar reach to the GDPR. This extraterritorial applicability means the regulations reach activities originating outside the EU whenever they target the EU market, highlighting the importance of compliance for entities interacting with EU citizens.
Implications of AI and GDPR Across Border Boundaries
The regulation of AI across international borders presents significant complexities, particularly in relation to the GDPR. This issue arises when considering whether AI developed in countries like South Africa, which interacts with clients in the EU, must adhere to the same data transfer restrictions that apply to personal information sent across borders, such as data stored in US data centres.
While the GDPR establishes protocols, such as standard contractual clauses for data transfer, the application of these regulations to AI usage remains ambiguous, creating a compliance grey area. Ultimately, the rules governing the AI processing of personal information differ significantly across various contexts and jurisdictions, as illustrated by the differing regulations in countries such as India.
India has numerous bots actively running on networks that continuously collect personal information. The applicability of GDPR in such cases hinges on whether personal data is involved. If bots are scraping or transferring personal data, GDPR regulations still apply; the fact that the operation is automated does not exempt it from compliance with these regulations. For example, a recruitment company that scrapes LinkedIn data is required to delete that information after a certain period, as mandated by GDPR. Thus, any processing of personal data, regardless of the means used, falls under data privacy regulations, such as the GDPR.
Impact of GDPR on AI Processing and Data Privacy
The regulation of AI processing across various jurisdictions reveals significant complexities, particularly in comparison to GDPR standards. In countries like South Africa, data processing activities face stringent restrictions that can exceed those imposed by GDPR, necessitating a valid legal basis and ensuring that personal data is protected to equivalent standards. Additionally, European regulations impose specific limitations on outsourcing customer information processing to countries with less robust privacy laws, underscoring the necessity for strict compliance in international data processing activities. Consequently, navigating these varying regulatory landscapes is essential for organisations aiming to manage data responsibly while adhering to legal requirements.
A company in the UK faced a GDPR compliance issue when it outsourced the creation of Power BI reports to a service provider in India. The UK company intended to send live data to the Indian affiliate for report building, which violated GDPR regulations since the data transfer occurred outside the legal framework.
This scenario highlights the complexities of data handling under GDPR, especially when involving international service providers, as similar concerns about compliance arise with the use of AI technologies located in regions without adequate data protection measures.
The European Union’s GDPR and the AI Act exemplify effective regulatory frameworks for data protection due to their stringent penalties, which compel companies to prioritise compliance. In contrast, many U.S. companies tend to view regulatory fines as just another cost of doing business, undermining the deterrent effect of such penalties.
This disparity highlights the seriousness with which regulations are approached, particularly in light of recent developments under privacy laws such as South Africa’s POPIA, which has introduced criminal penalties for violations, signalling a significant shift toward accountability in data protection. Ultimately, these contrasting approaches underscore the necessity for stricter compliance mechanisms to ensure the protection of personal data.
The differences between the POPIA regulation and GDPR underscore the complexities of accountability and responsibility in data protection. While the CEO holds ultimate responsibility, institutional roles, such as the DPO, play a vital part in ensuring compliance.
Non-compliance can result in severe legal repercussions for the CEO, including potential criminal prosecution. Despite these stringent measures in Europe, there is scepticism about the adoption of similar accountability frameworks in the U.S., especially in light of Congress’s historical reluctance to restrict its own benefits or terms.
Data Storage, Ethical Implications, and Impact Assessments
When considering data storage, it is important to recognise the jurisdictional boundaries of data centres and the complexities involved in data processing, particularly with AI technologies operating from satellites or other non-traditional locations. This raises questions about the movement of software across the Internet and its relationship to data storage; specifically, whether there are restrictions on processing data based on geographical location. Currently, there is no clear answer to these questions, highlighting a significant lack of transparency in our understanding of data movement and processing limitations.
Incorporating ethical considerations into data privacy impact assessments is crucial for fostering comprehensive risk management. While traditional assessments often focus on mitigating risks and ensuring data privacy, ethical impacts related to human control, inclusivity, and data diversity are frequently neglected.
Despite the availability of various impact assessment templates online, a centralised framework for ethical assessments akin to privacy impact assessments is still lacking. Hence, there is a pressing need for professionals to integrate these ethical components into their current assessment practices, ensuring a more holistic approach to data privacy and ethics.
AI Model Training and Data Deletion Concerns
A question was raised regarding the implications of GDPR when training an unsupervised AI model on customer data. If a customer requests that their information be deleted, the model must be updated to remove that data. However, proving that the data was used in the model can be challenging, as it’s not straightforward to track individual contributions. It is suggested that while it’s necessary to delete the data if requested, retraining the model may not always be required since features related to the individual typically do not directly inform the model’s operations. This raises broader questions about data validity and the ethical considerations surrounding the handling of personalised data in machine learning models.
The dynamics of AI models such as ChatGPT present fascinating insights into data handling and user interaction. While the process of retraining these models is not always immediate, it remains essential to ensure that any user-provided data they wish to exclude is effectively removed in future versions. Engaging with these models through various inquiries not only showcases their capabilities but also highlights the importance of user anonymity, as well-trained systems utilise data in a way that protects individual identities by aggregating it. Ultimately, these considerations underpin the vital balance between user engagement and privacy in the development of AI technologies.
Ethical Considerations and Challenges of Data Privacy in AI Models
The implications of using customer data in model training raise significant concerns regarding compliance with data removal requests. When models are trained on previously used customer data, even if the original model is no longer active, the potential for re-identification of individuals remains. Specific data points, such as zip codes or city names, can inadvertently facilitate the reconstruction of Personally Identifiable Information (PII), highlighting the critical importance of meticulously managing sensitive data throughout the model development process. Therefore, it is imperative for organisations to implement stringent data handling practices to safeguard against unintended breaches of privacy.
The complexities of data handling, particularly in terms of re-identification risks, pose significant challenges to maintaining privacy. Even a single data point can lead to the identification of individuals in an anonymised dataset, highlighting the urgent need for stringent data classification and privacy measures.
This issue is especially critical in sectors such as healthcare, where advancements in diagnosis and imaging must navigate strict regulations like HIPAA in the U.S. Ultimately, effective data management strategies must prioritise confidentiality while balancing the potential for valuable insights.
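The single-data-point risk described above can be quantified with a k-anonymity check: the smallest group sharing the same quasi-identifiers tells you how exposed the dataset is. The records below are hypothetical, and the helper is a minimal sketch rather than a full anonymisation toolkit.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size over the quasi-identifier columns; a value of
    1 means at least one individual is uniquely identifiable."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    return min(Counter(keys).values())

# Hypothetical records: one zip-code/city combination is unique, so the
# dataset is only 1-anonymous even though it contains no names at all.
people = [
    {"zip": "1011", "city": "Amsterdam"},
    {"zip": "1011", "city": "Amsterdam"},
    {"zip": "9712", "city": "Groningen"},
]
```

Here `k_anonymity(people, ["zip", "city"])` returns 1 because the Groningen record stands alone, illustrating how a zip code or city name alone can re-identify someone in an otherwise anonymised dataset.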
Autism is a protected characteristic in the UK, and information about autistic individuals who are publicly known can raise issues regarding privacy and consent. While some individuals openly disclose their autism, the question arises about their right to be forgotten, especially in cases where information is spread without their approval. There are legal avenues, such as requesting Google to remove specific search results, to protect privacy. Furthermore, advancements in chatbots and AI have led to more careful handling of personal information, moving away from previous practices where sensitive data was more readily shared. This evolution reflects a growing awareness of the importance of respecting privacy and personal boundaries.
Data Management, Ethics, and Bias in LLMs
The inherent biases present in current Large Language Models (LLMs) create significant inconsistencies in how personal information is handled across different systems. While some models exercise stringent safeguards and withhold sensitive details about certain individuals, others may freely disclose similar information about different people, indicating varying training objectives. This disparity underscores the pressing need for a more thorough examination of the ethical implications surrounding LLMs, as exemplified by a recent presentation in Iceland, where the speaker addressed these issues during a concise 20-minute talk. Ultimately, organising a comprehensive webinar could facilitate crucial discussions and assessments to address the ethical implications of these technologies.
The complexity of ethics and bias in data-related fields requires a multifaceted approach that incorporates diverse perspectives. Rowana highlighted how her expertise in regulatory and compliance matters has offered unique insights in their roles as product managers and executives, often surprising colleagues who may overlook these crucial aspects. By emphasising the significance of understanding regulatory requirements and auditor needs, she advocates for a more holistic discussion around data ethics, ultimately fostering a more informed and responsible approach to data management.
- Executive Summary
- Implementing and Evaluating AI with Rowana Urip Santoso
- Data Privacy Regulation, AI Act, and Ethics
- Understanding the Basics of the EU GDPR
- The Roles and Responsibilities in the GDPR
- Understanding the Principles of Personal Data Processing
- Data Storage, Ethical Considerations, and Requirements under GDPR
- Data Accountability and Privacy Principles
- AI Regulations and the AI Act in the European Commission
- Understanding the Implications and Regulations of AI Systems
- The Roles and Responsibilities in AI Implementation
- Understanding the Risks and Regulations of AI Systems
- NIST AI Risk Management Framework and EU AI Act
- Risks, Legal Implications, and Future of AI Systems
- Ethical and Practical Implications of AI Implementation
- Data Privacy Impact Assessment and AI Implementation under GDPR
- Effective AI Management: Privacy, Risk, and Compliance
- Similarities of GDPR and AI Act
- Memory and Knowledge Extraction in AI Systems and Chat Bots
- Jurisdiction of AI Act and GDPR
- Implications of AI and GDPR Across Border Boundaries
- Impact of GDPR on AI Processing and Data Privacy
- Data Storage, Ethical Implications, and Impact Assessments
- AI Model Training and Data Deletion Concerns
- Ethical Considerations and Challenges of Data Privacy in AI Models
- Data Management, Ethics, and Bias in LLMs
If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.
Additionally, if you would like to watch the edited video on our YouTube channel, please click here.
If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)
Don’t forget to join our exciting LinkedIn and Meetup data communities so you don’t miss out!