Executive Summary
This webinar examines the crucial role of human factors and strategic approaches in the development and deployment of Artificial Intelligence (AI). Tiankai Feng advocates a human-centric approach to AI strategy, using frameworks such as the 5Cs to promote AI literacy and human competence. The webinar addresses the evolving roles and skills needed in the AI era, stressing collaboration and governance in AI projects to tackle challenges like bias, disinformation, and ethical concerns. Tiankai also explores the intersection of AI with creativity, the workplace, and various sectors, including the ethical implications of AI in music and social media. Finally, the webinar considers how to maintain brand authenticity in human-to-AI communication and examines the risks and accountability associated with AI decision-making in diverse corporate settings.
Webinar Details
| Title | Humanizing AI Strategy – Book Launch with Tiankai Feng |
| Date | 07 October 2025 |
| Presenter | Tiankai Feng |
| Meetup Group | DAMA SA User Group |
| Write-up Author | Howard Diesel |
Contents
Humanising AI: the Importance of Human Factors and Strategy
The Challenges and Future of AI
The Concept of Human-Centricity in AI Strategies
Humanising AI: The 5C’s Framework and Its Application
Understanding AI Literacy
Human Competence in AI Processes
The Evolution of Roles and Skills in the AI Era
Collaboration and Governance in AI Projects
Problem-Solving and Collaboration in AI Development
AI, Purpose, and Team Collaboration
The Value of AI: The Framing of AI
Maintaining Brand Authenticity in Human-to-AI Communication
The Human and AI in Creativity
AI Principles, Accountability, and Human Risks
The Impact of AI: Bias, Disinformation, and Ethical Choices
A Satirical Take on Artificial Intelligence and Its Influence
The Ethical Implications and Use of AI in Music and Social Media
Challenges of Humanising AI in Different Corporate Settings
AI in the Workplace
The Role of Accounting and AI in the South African Gambling Market
AI in Data Stewardship and Academic Freedom in Higher Education
The Ethical Implications and Human Involvement in AI Decision Making
Humanising AI: the Importance of Human Factors and Strategy
Howard Diesel opened the webinar and introduced Tiankai Feng to the community, sharing his enthusiasm for sessions featuring authors. Tiankai began his presentation by expressing gratitude for the opportunity to explore the vital human aspect of Artificial Intelligence (AI). He highlighted that many failures in AI projects can be traced back to human-related factors, such as misalignment, poor communication, and differing interpretations, which often lead to detrimental decisions that diminish AI’s potential or cause harm. This challenge is not exclusive to AI; it has also been a persistent issue in digital transformations, including initiatives related to cloud computing and data management.
To address these concerns, Tiankai, who is the Director of Data and AI Strategy at ThoughtWorks and the Vice President of DAMA Germany, emphasised the importance of proactively managing human factors. He showcased his commitment to integrating human values into AI solutions while aiming to make complex topics more engaging and relatable. Incorporating music into the session, including original rap songs, Tiankai aimed to create an enjoyable learning environment that underscores his passion for creativity and innovation in technology, as well as digital analytics and data governance.
Figure 1 Humanising AI Strategy
Figure 2 Let me Introduce Myself
The Challenges and Future of AI
The rapid growth of AI adoption and investment highlights its potential, yet many organisations struggle to fully realise its benefits. With enterprise AI adoption in the EU rising from 8% to 13% year-on-year and tech giants pledging $320 billion towards AI development, it is evident that interest in AI is at an all-time high. However, challenges persist, as evidenced by the fact that 87% of AI projects fail to reach production, and 81% of workers do not utilise AI tools effectively. This disparity between investment and value creation raises critical questions about the underlying reasons.
To address these challenges, a human-centric AI strategy is essential, focusing on three key aspects: mindset, skill set, and behaviour. Organisations must cultivate a mindset that prioritises responsible AI use, develop the necessary skills to implement AI effectively, and foster behaviours that translate these mindsets and skills into meaningful actions. By bridging the gap between AI investment and tangible outcomes through such a strategy, organisations can harness AI’s true potential, leading to responsible and successful integration into their operations.
Figure 3 AI is Everywhere, But there are Hurdles
Figure 4 Human Factors are the Root Cause of Most Failures
Figure 5 Human-Centricity Boils Down to Three Factors
The Concept of Human-Centricity in AI Strategies
Tiankai outlined an important perspective on AI strategy, defining it as a long-term plan that encompasses the people, processes, and technologies necessary for creating, processing, and utilising AI to drive meaningful value. This definition highlights three essential elements: it encompasses the entire AI lifecycle, engaging all stakeholders—from data creators to end-users—ensuring a comprehensive and unbiased approach. Additionally, the strategy is centred around value, aligning AI initiatives with business objectives to foster sustainable growth rather than merely adopting the latest technologies.
Furthermore, the definition acknowledges the subjective nature of concepts such as meaningful, secure, and transparent, emphasising the need for a shared understanding to cultivate effective AI initiatives. By prioritising inclusivity and clarity in definitions, a well-rounded AI strategy can better guide the development and deployment of AI systems. Ultimately, this comprehensive approach is crucial for leveraging AI’s potential in a way that is both impactful and responsible.
Figure 6 What is an AI Strategy?
Humanising AI: The 5C’s Framework and Its Application
The “5C’s” framework offers a comprehensive approach to humanising AI systems, emphasising the integration of human-centric values into their development and use. This framework includes five key components: competence, collaboration, communication, creativity, and conscience. Competence focuses on fostering AI literacy tailored to specific roles, helping users understand when to trust or question AI.
Collaboration promotes the co-creation of AI initiatives through federated governance, encouraging cross-functional teamwork and aligning efforts with genuine business challenges. Communication involves translating complex AI concepts into clear, relatable narratives, ensuring transparency and authenticity, particularly in interactions with chatbots and language models.
Moreover, creativity highlights the partnership between human ingenuity and AI’s potential for refinement, advocating for a balance between incremental and disruptive innovations while viewing prompting as a creative discipline. Finally, conscience underscores the importance of embedding ethics by design, ensuring accountability, bias detection, and psychological safety, while nurturing a human moral compass that AI can emulate but never possess. Together, these elements provide a structured and intuitive approach to seamlessly integrate human values into AI development, offering practical guidance and inspiration for organisations striving to create more responsible and human-centred AI systems.
Figure 7 Let’s “Humanise” AI Strategy by Putting People in the Centre
Understanding AI Literacy
The importance of AI literacy has become increasingly relevant amidst the rapid advancements in technology, yet it is essential to recognise that not everyone needs to master every aspect of Artificial Intelligence. Instead, AI literacy should be tailored to specific roles within an organisation, focusing on the elements most pertinent to an individual’s job function.
The European Union has proposed a useful framework that categorises AI interactions into four distinct areas: engaging with AI, creating with AI, managing AI, and designing AI. For example, marketing leads may engage with AI-powered CRM tools, while customer service agents focus on creating with generative AI.
To develop effective AI literacy programs, organisations must identify the relevant interaction types for each role, allowing for targeted upskilling initiatives. By recognising that executive leadership requires a broader understanding spanning all four categories, companies can enhance their strategy, risk management, and innovation culture. This tailored approach ensures that employees focus on what is most applicable to their responsibilities, avoiding an impractical attempt to grasp the entirety of AI’s complex and rapidly evolving landscape.
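The tailoring described above can be pictured as a simple role-to-interaction mapping. The sketch below is illustrative only: the role names and category assignments are invented examples, not a definitive taxonomy from the EU framework or the talk.

```python
# Illustrative role-based AI literacy mapping; roles and assignments
# here are assumptions for the sake of example.
INTERACTION_TYPES = {"engaging", "creating", "managing", "designing"}

# Hypothetical role-to-interaction mapping inspired by the EU's four categories
ROLE_LITERACY = {
    "marketing_lead": {"engaging"},            # e.g. AI-powered CRM tools
    "customer_service_agent": {"creating"},    # e.g. generative AI drafting
    "ml_engineer": {"designing", "managing"},
    "executive": INTERACTION_TYPES,            # leadership spans all four
}

def upskilling_focus(role: str) -> set[str]:
    """Return the interaction types a role's literacy program should target."""
    focus = ROLE_LITERACY.get(role, set())
    assert focus <= INTERACTION_TYPES
    return focus
```

A program built this way targets only the categories relevant to each role, rather than attempting blanket AI training for everyone.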
Figure 8 AI Literacy Tailored to Different Personas
Human Competence in AI Processes
The “human in the loop” concept in AI and automation emphasises the importance of strategically incorporating human experts at critical junctures in automated processes. This approach prioritises identifying specific trigger points, such as high-risk transactions in fraud detection, where human intervention can enhance decision-making.
When an AI system flags a transaction, a human reviewer evaluates the context and applies their expertise to determine the appropriate course of action. In cases that exceed the reviewer’s knowledge, the issue is escalated to specialised teams, like compliance, ensuring that decisions are well-informed and accurate.
Moreover, having fallback mechanisms in place is essential for maintaining operational continuity during human reviews, allowing alternative methods to be utilised seamlessly. Traceability and documentation throughout these workflows are crucial for recording decisions and actions taken, which fosters transparency and compliance while creating valuable learning opportunities for both humans and AI systems. Ultimately, adopting a balanced and well-defined approach to human involvement not only improves decision quality but also upholds legal and ethical standards in automated processes.
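The trigger-point, escalation, and traceability flow described above can be sketched as a small routing function. This is a minimal illustration under stated assumptions: the risk threshold, role names, and audit-trail format are all invented for the example, not taken from the talk.

```python
# Minimal human-in-the-loop routing sketch for flagged transactions.
# The 0.8 threshold, reviewer roles, and trail entries are assumptions.
from dataclasses import dataclass, field

@dataclass
class ReviewOutcome:
    decision: str                               # "approve", "reject", or "escalate"
    reviewer: str                               # who made the call
    trail: list = field(default_factory=list)   # traceability record

def route_transaction(risk_score: float, reviewer_can_decide: bool) -> ReviewOutcome:
    trail = [f"ai_flagged(risk={risk_score})"]
    if risk_score < 0.8:                        # below trigger point: no human needed
        trail.append("auto_approved")
        return ReviewOutcome("approve", "system", trail)
    trail.append("human_review")                # trigger point reached
    if reviewer_can_decide:
        trail.append("reviewer_decision")
        return ReviewOutcome("reject", "fraud_reviewer", trail)
    trail.append("escalated_to_compliance")     # beyond the reviewer's expertise
    return ReviewOutcome("escalate", "compliance_team", trail)
```

The `trail` list captures the documentation aspect: every decision records how it was reached, which supports audits and gives both humans and the AI system material to learn from.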
Figure 9 Human in the Loop Means Right Person, Right Time
The Evolution of Roles and Skills in the AI Era
The notion that AI will eliminate jobs is a common misconception; rather, AI is expected to transform the workforce by automating routine tasks and creating new, specialised roles. This transition allows jobs to evolve, resulting in positions such as AI product managers, who combine product thinking with an understanding of AI technology and its human implications.
Furthermore, the demand for AI trainers is growing, as these professionals play a vital role in fine-tuning AI models by bridging subject expertise with technical capabilities. The emergence of AI ethics highlights the importance of responsible and beneficial AI development, ensuring the technology aligns with legal and ethical standards. Finally, AI translators are crucial for facilitating communication between business and technical teams, similar to data analysts or architects in the data realm.
In summary, the evolution of the job market is not about eliminating roles but rather about redefining them in the context of AI advancements. As we embrace these changes, the career landscape will increasingly emphasise collaboration between humans and technology, fostering innovative opportunities that demand a blend of creativity, ethical considerations, and technical skills. New avenues for professional growth will characterise the future of work, ultimately pushing us to become more human in our endeavours.
Figure 10 Adapting Career Paths to AI
Collaboration and Governance in AI Projects
Effective collaboration in AI projects is crucial for achieving shared goals, yet the tension between transactional and coordination approaches often hinders this process. Transactional collaboration, where one team serves another, frequently leads to complaints and a lack of genuine cooperation. In contrast, coordination involves teams informing each other while working in isolation, preventing the establishment of shared objectives.
To overcome these limitations, a co-creative mindset that recognises each team’s unique contributions—such as data, IT, business, and AI engineering—is essential. This mindset promotes formal commitments through shared objectives integrated into planning processes, fostering a collaborative environment that surpasses the constraints of traditional operating models.
Further enhancing collaboration, a federated governance model can effectively balance business autonomy with alignment across AI initiatives. Under this model, individual business domains operate their own AI teams while adhering to common standards and guidelines, all overseen by cross-functional review councils and an AI strategy board that prioritises investments.
This dual-layered governance structure not only supports autonomous innovation but also ensures compliance and oversight across diverse use cases. Ultimately, adopting a business-led federated model, combined with a co-creative mindset, is fundamental for successful AI collaboration and governance, enabling organisations to unlock scalable AI value while maintaining necessary governance and oversight.
Figure 11 Co-Creation as a Mindset for Collaboration
Figure 12 AI Governance through Proven Federated Models
Problem-Solving and Collaboration in AI Development
Collaboration and practical problem-solving are essential for realising the true value of AI in organisations. Many businesses fall into the trap of “pilot theatre,” where sophisticated AI applications are developed without a clear purpose or measurable impact, ultimately leading to their neglect. To combat this, it is vital to start by identifying specific pain points within business processes or individual tasks. By uncovering the root causes of these issues and clarifying requirements, leaders can frame the challenges in a compelling business case that highlights their urgency and relevance.
Once these pain points are well articulated, organisations can define and implement appropriate AI solutions that resonate with the needs of the teams involved. This approach not only builds advocacy for the new technology but also encourages teams to share additional pain points, creating a continuous cycle of improvement. Ultimately, the focus of AI should be on addressing real problems, fostering collaboration, and delivering tangible benefits, rather than simply generating innovative technologies that go unused.
Figure 13 From “Pilot Theater” to Solution of Actual Problems
AI, Purpose, and Team Collaboration
The successful integration of AI within an organisation hinges on aligning four critical elements: passion, business strategy, ethics, and feasibility. First, stakeholders must demonstrate genuine enthusiasm for exploring AI innovations, fostering a proactive environment. Additionally, AI initiatives should directly support the organisation’s strategic objectives, ensuring they contribute to overall goals. It is also essential that the approach taken is ethically sound, emphasising proactive responsibility over mere compliance. Lastly, the feasibility of these initiatives encompasses both the technical capabilities and the necessary skill sets, as organisations must leverage the right technologies, expertise, and use cases for effective implementation.
Achieving alignment among these elements requires collaboration among a diverse group of stakeholders, including engineers, strategy teams, legal and compliance experts, data protection officers, IT professionals, and learning and development specialists. By engaging these parties in defining what constitutes a relevant, exciting, aligned, ethical, and feasible AI purpose, organisations can foster a sense of shared commitment. This co-creation process not only increases the likelihood of successful AI implementation but also ensures that everyone is aligned with a common goal, paving the way for innovation and the responsible use of AI technologies.
Figure 14 Finding your AI Purpose
The Value of AI: The Framing of AI
The value of AI should be framed not only through business metrics but also by connecting it to personal motivations that resonate with individuals. While traditional measures often focus on the difference between baseline and target outcomes, such an approach can fail to inspire genuine enthusiasm. For instance, AI that enhances information findability does more than save time; it boosts confidence by facilitating quicker, informed conversations. Similarly, AI-driven marketing efficiency isn’t solely about reducing effort; it allows individuals to allocate more time to creative ideation and to hone their uniquely human skills, such as originality.
To effectively communicate the value of AI, it is essential to tailor the message to the audience’s personal drivers and aspirations. In the realm of talent development, AI’s benefits extend beyond mere retention metrics by helping individuals plan and advance their careers. By highlighting personal motivations, organisations can transform support for AI from a rational obligation into genuine enthusiasm. This personalised framing not only helps to align AI’s advantages with what truly inspires people but also makes the technology’s impact more relatable and engaging.
Figure 15 Framing Value on Organisational and Personal Level
Maintaining Brand Authenticity in Human-to-AI Communication
Maintaining a brand’s authentic voice in AI-driven communications, particularly with chatbots, is essential for preserving its unique identity. While AI offers significant efficiency in customer interactions, relying solely on pre-trained models can dilute a brand’s tone and diminish its authenticity. Companies should establish clear voice principles that align with their core brand values and marketing guidelines, serving as guardrails for language models. Implementing adjustable voice parameters—such as warmth, playfulness, and formality—will enable different teams to tailor communication effectively for diverse audiences and channels.
Furthermore, ensuring voice consistency is vital to prevent communication from becoming overly robotic or bland, which can undermine customer trust. To facilitate this, forming “voice steward circles” composed of cross-functional teams from branding, legal, diversity and inclusion, and customer support can be beneficial. These groups can regularly review and refine communication guidelines, helping to maintain the brand’s distinct voice across AI interactions. By preserving a genuine and human touch while leveraging the speed and efficiency of AI, companies can enhance customer resonance and loyalty.
Figure 16 Flexible Guardrails, Not Rigid Uniformity
The Human and AI in Creativity
Human creativity and AI-generated creativity serve distinct yet complementary roles in the creative process. While original and disruptive ideas mark human creativity—exemplified by artistic pioneers like Picasso—AI creativity stems from the ability to learn from existing data, refining and remixing established concepts rather than generating entirely independent thoughts. This distinction underscores the importance of human ingenuity in generating innovative ideas, which AI can then refine and efficiently implement.
The collaboration between humans and AI transforms the creative landscape by streamlining the execution of various tasks, such as producing marketing assets, translations, and content creation, which previously required significant time and effort. As AI takes on the burden of routine and complex tasks, it allows humans to dedicate more time to ideation and problem framing, ultimately fostering a more human-centred creative process. This collaboration not only amplifies human creative instincts but also alleviates the pressures of time constraints, leading to a richer and more innovative exploration of ideas.
Figure 17 Humans Disrupt, Machines Refine
Figure 18 The Idea is the New Execution – From a Time Spent POV
AI Principles, Accountability, and Human Risks
The fifth “C” of the framework, conscience, plays a crucial role in ensuring ethical AI practices across organisations. Establishing clear, organisation-wide principles grounded in both internal values, such as brand identity and human-centric approaches, and external frameworks like the EU AI Act and OECD AI principles is essential. These foundational principles not only guide AI governance but also enforce necessary guardrails and facilitate audits, ensuring the responsible use of AI technologies.
Accountability in AI is a complex and multifaceted concept that spans multiple layers, including data accuracy, model explainability, and user experience. This is often compounded by fragmented responsibilities among individuals who lack awareness or coordination, highlighting the necessity for clear accountability chains that align with decision-making authority.
Furthermore, addressing the critical risks of AI learning from biased or unethical historical data requires a proactive approach to prevent the perpetuation of harmful behaviours. Ultimately, the early establishment of culturally aligned AI principles is vital for preventing ethical issues and managing accountability effectively across all layers of AI systems.
Figure 19 Principles to Anchor Practice
Figure 20 Main (Human) Risks to Look Out for in AI
The Impact of AI: Bias, Disinformation, and Ethical Choices
The development and usage of Artificial Intelligence (AI) raise several critical considerations that must be addressed to harness its potential responsibly. One major concern is the prevalence of bias in AI systems, as seen in Amazon’s recruitment AI that favoured male candidates due to its reliance on past hiring data, illustrating how these technologies can inadvertently perpetuate existing prejudices. Additionally, the spread of disinformation through AI-generated content poses significant risks, as fake videos and images can misinform the public and incite panic.
Compounding these dangers are AI hallucinations—instances where AI generates false information that users might mistakenly accept as truth, leading to potentially detrimental decisions. Moreover, the ecological impact of AI cannot be overlooked, as the substantial energy consumption of AI models warrants their use only when necessary, with simpler alternatives often adequate for many tasks.
As the landscape of AI evolves, it is essential to ensure that these systems reflect current ethical standards rather than outdated norms, thus mitigating the risk of perpetuating harmful views. The conversation also prompts important questions regarding the future of AI: which enduring values will guide its development, what decisions should remain strictly human, and whether advancements in machine intelligence actually enhance human wisdom. Ultimately, these reflections invite us to engage thoughtfully with the implications of AI technology while also embracing the creativity it offers, such as an AI-generated parody medley of 80s songs, which highlights the fascinating paradoxes surrounding AI.
Figure 21 Some Closing Questions
A Satirical Take on Artificial Intelligence and Its Influence
Despite the impending arrival of AI regulations, Tiankai encouraged ongoing investment and development within the field. His satirical rap acknowledged that challenges exist while emphasising the potential for creativity and innovation, and attendees expressed gratitude for the creative contribution, which reflected the collaborative spirit of the AI community. Overall, the performance encapsulated both the exhilaration and the caution that accompany the rapid growth of AI technologies, delivered in a casual, lyrical manner.
The Ethical Implications and Use of AI in Music and Social Media
The controversy surrounding The Temptation, pioneers of AI-generated music videos, illustrates the complexities of incorporating Artificial Intelligence into creative endeavours. While the band faced accusations of job theft from fans, an attendee clarified that the initiative actually involved hiring more personnel than usual, and viewed the band’s use of AI as a tribute to creativity rather than a replacement for traditional artistry. This incident highlights the mixed reactions toward AI in the music industry: some embrace its experimental nature and unique visuals, while others worry about the ethical implications and authenticity of creative work.
An attendee raised ethical concerns about AI use on platforms like LinkedIn, where individuals teach methods for generating and posting content that falsely represents original ideas. They shared that they had been removed from a webinar for criticising this practice, emphasising the importance of transparency in how AI-generated content is disclosed to audiences. Although attendees acknowledged the productivity benefits AI brings, such as efficiently creating personalised presentations, they also cautioned against the growing prevalence of inauthentic AI-generated content on social media. Overall, the advocacy for responsible and transparent AI use remained a focal point, ensuring that originality and ethical standards in creativity are upheld.
Challenges of Humanising AI in Different Corporate Settings
In the context of humanising AI and data, smaller companies generally find it easier to maintain a human-centric approach compared to large corporations. This is primarily because smaller organisations naturally facilitate personal interactions among colleagues and customers, allowing for direct communication and collaboration. While this intimate environment promotes inherent human-centricity, these companies often still overlook the importance of consciously prioritising these values as they grow.
Conversely, as organisations expand, they face significant challenges in sustaining personal relationships and maintaining a human-centred approach. The increasing scale makes manual management of connections impractical, which can lead to colleagues and customers feeling reduced to mere data points. Thus, larger companies must intentionally cultivate human-centric values alongside their growth. The quest for maintaining meaningful interactions within a predominantly digital landscape underscores the critical need for businesses to prioritise human-centric practices as they scale, ensuring that the essence of individuality is not lost in the process.
AI in the Workplace
The integration of AI in the workforce presents both opportunities and challenges, as highlighted by Migros Bank’s innovative approach to addressing economic disparity in Switzerland. By offering fee-free banking to those unable to afford traditional fees and utilising AI tools like large language model chatbots, Migros Bank focused on efficiency without resorting to layoffs. Instead of displacing employees, the bank redefined the roles of call centre staff as data stewards, ensuring that AI responses remain high-quality and non-toxic while keeping employees engaged and employed.
In contrast, the mass layoffs at companies like PwC, which eliminated 11,000 positions to replace workers with AI agents, raise critical questions about workforce stability and the effectiveness of AI systems. The abrupt firing of knowledgeable employees undermines the quality of AI outputs, leading to a negative perception of AI technologies among the workforce, particularly amid fears of widespread job losses.
While financial pressures may motivate such decisions, using AI as a scapegoat detracts from its potential benefits and hinders broader acceptance of AI transformations. Ultimately, the discussion highlights the importance of considering ethical implications and implementing thoughtful redeployment strategies when integrating AI into organisations.
The Role of Accounting and AI in the South African Gambling Market
The role of accountants in South Africa’s gambling market is evolving significantly due to the integration of AI tools in processing financial and taxation data. With multiple provincial gambling boards regulating the industry, accountants are increasingly being seen not just as traditional financial processors, but as data stewards who oversee AI-managed information. The automation of tax declaration and income statement processing is becoming feasible under clear, standardised rules, allowing accountants to shift their focus towards a more supervisory role.
As a result, a “human-in-the-loop” approach is recommended, where automation manages routine tasks while accountants intervene in cases of anomalies or thresholds being exceeded, such as unusually high or low tax amounts. This evolution will require accountants to manage AI agents, ensure compliance with evolving regulations, and update AI systems in response to new government rules. In this context, accountants will no longer be merely data processors; instead, they will enhance their effectiveness by applying human judgment to complex cases and overseeing AI systems, ultimately redefining their roles within the industry.
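The threshold rule described above can be made concrete with a short routing sketch: declarations outside an expected band go to an accountant. The tolerance band below is an invented example, not a regulatory figure.

```python
# Sketch of threshold-based routing for tax declarations; the 25% tolerance
# band is an illustrative assumption, not an actual compliance rule.
def route_declaration(declared_tax: float, expected: float,
                      tolerance: float = 0.25) -> str:
    """Return 'auto_process' or 'accountant_review' for a declaration."""
    lower = expected * (1 - tolerance)
    upper = expected * (1 + tolerance)
    if lower <= declared_tax <= upper:
        return "auto_process"           # routine case: automation handles it
    return "accountant_review"          # unusually high or low: human judgment
```

When government rules change, only the expected values and tolerances need updating, which is exactly the kind of AI-system maintenance the supervisory accountant role would absorb.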
AI in Data Stewardship and Academic Freedom in Higher Education
Tiankai highlighted the crucial role of data stewards, particularly in industries like iGaming, where they should be recognised not only as subject matter experts (SMEs) but also as skilled professionals in data and AI management. These stewards, often with business backgrounds, navigate the complexities of data while collaborating with regulators and accountants who contribute their expertise for tax-related matters. In higher education, the integration of AI literacy into data literacy programs becomes vital to uphold academic freedom alongside ethics and integrity, creating a foundation for informed decision-making in an increasingly data-driven landscape.
As emphasised through Ethan Mollick’s insights from ‘Co-Intelligence’, the evolving relationship between humans and AI parallels the transformative effect of calculators on mathematical learning, highlighting the necessity for human oversight in understanding AI outputs. This calls for an ethical approach to AI governance within data governance frameworks, ensuring that the use of AI is responsible and does not infringe upon the freedoms necessary for academic growth. Ultimately, the conversation advocates for a balanced synergy where AI tools enhance human capabilities while maintaining ethical responsibility and oversight in both business and educational contexts.
The Ethical Implications and Human Involvement in AI Decision Making
Tiankai emphasised the crucial need for human oversight in the application of AI technologies, particularly in mitigating errors and addressing legal concerns associated with fully automated decision-making. This necessity is especially pronounced in sectors such as healthcare and academia, where the proliferation of AI-generated content complicates detection and raises significant ethical dilemmas.
A personal anecdote shared by Tiankai illustrated the contrast between his own thoroughly written paper and the pitfalls of relying on AI for assignments, underlining the ethical imperative to use such tools responsibly. The example of Deloitte facing penalties for submitting an unchecked AI-generated paper further emphasises the risks associated with an over-reliance on AI.
In conclusion, Tiankai advocated for viewing AI as an enhancement of human productivity rather than a substitute for human judgment and responsibility. He shared that there is optimism that key influencers in the AI field will guide organisations toward ethical and effective practices before negative consequences arise. Ultimately, the call for balanced and ethical integration of AI with human oversight is crucial for ensuring positive and responsible outcomes in society.
If you would like to join the discussion, please visit our community platform, the Data Professional Expedition.
Additionally, if you would like to watch the edited video on our YouTube channel, please click here.
If you would like to be a guest speaker on a future webinar, kindly contact Debbie (social@modelwaresystems.com)
Don’t forget to join our exciting LinkedIn and Meetup data communities so you don’t miss out!