We know how your business works, so you can put AI to work for your business

GenAI: Creating value through governance

Over a year of Generative AI (GenAI) excitement has translated into palpable momentum, with 60% of organisations seeing GenAI as an opportunity rather than a risk, and many launching GenAI-enabled capabilities, investing in GenAI skills and pursuing ambitious strategies.
The ecosystem is rapidly evolving as Big Tech and venture capitalists invest heavily in GenAI, employees integrate these tools into daily workflows, and enterprise software vendors augment their products with GenAI features. However, organisations encounter challenges such as confidential data leaks, inaccuracies or "hallucinations", deepfakes, manipulated content, and bias.

The case for GenAI governance 

  • Most business leaders see strong governance as crucial to scaling GenAI pilots and proofs of concept (POCs)
    Strong governance ensures sustainable value while avoiding high costs, legal issues, and poor outcomes. Leaders are strengthening risk management, compliance, security, and privacy functions to support governance, including vetting third-party GenAI tools and assessing their risks. This helps align processes, manage risks, and prioritise investments.
  • According to PwC's 2024 CEO Survey, 75% of CEOs whose organisations have adopted GenAI believe it will improve stakeholder trust over the next 12 months, likely owing to safe deployment practices.
    Effective AI governance also ensures proper prioritisation across investments and projects by assessing use cases appropriately and applying the necessary technology.
  • GenAI sensitivities for the telecommunications industry
    Telecommunications companies handling large volumes of consumer data are exploring and adopting responsible AI principles and tools which minimise reliance on sensitive customer data while delivering more personalised and curated experiences.

Steps in the right direction

GenAI governance does not need to be built from scratch – existing governance frameworks can be modified to address GenAI-specific risks. 

Risk functions can identify the additional responsibilities they need to adopt to support GenAI governance objectives, determine when to engage in escalations and decision-making, and coordinate leadership within a holistic AI governance programme.

GenAI's broad applicability can pose challenges for governance, such as reduced control over outputs compared with narrow AI and usage that cannot be overseen by monitoring specific teams alone. The risk management process should consider how one tool can potentially apply to many use cases with different risk profiles.

Centralised or decentralised governance?

When updating the AI governance model for GenAI, organisations should decide how much to centralise the management of governance resources, giving consideration to:

  • Is the overall enterprise operating structure highly centralised, and therefore conducive to centralised governance?
  • Which governance practices must remain centralised (e.g., risk taxonomy)?
  • Which practices should be federated (e.g., localised testing)?

 

Scaling GenAI adoption is a major organisational challenge due to the sheer volume of GenAI solutions, freely available tools, and product features. Managing enterprise-wide usage and maintaining an up-to-date system inventory is complex and requires a consistent, holistic risk taxonomy.

A standardised organisation-wide framework ensures consistency, addressing risks related to data protection and intellectual property, and enabling effective triage of systems and coordination of remediation.

Several main categories can form the basis of an enterprise-wide risk taxonomy for AI systems (a simple illustration follows the list below):

  • Model risks: Risks related to training, development, and performance.
  • Data risks: Risks related to data collection, processing, storage, management, and use during training and operation.
  • System and infrastructure risks: Risks related to acquisition, implementation, and operation in a broader technology environment.
  • Use risks: Risks related to intentional or unintentional misuse, manipulation, or attacks on AI systems.
  • Process impact risks: Unforeseen or unmitigated risks that arise from AI integration into an existing workflow.
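
To make such a taxonomy actionable across a large system inventory, it can help to encode the categories in a simple, machine-readable form so that every system is assessed against the same dimensions and escalations are triggered consistently. The sketch below is purely illustrative: the names (RiskCategory, AISystemEntry, triage) and the 1–5 scoring scale are hypothetical assumptions, not a PwC tool or a prescribed implementation.

```python
# Illustrative sketch only: encoding the five taxonomy categories to triage
# entries in a hypothetical AI system inventory. All names and the scoring
# scale are assumptions for this example.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(str, Enum):
    MODEL = "model"                     # training, development and performance risks
    DATA = "data"                       # data collection, processing, storage, management and use
    SYSTEM_INFRA = "system_infra"       # acquisition, implementation and operation risks
    USE = "use"                         # misuse, manipulation or attacks on the system
    PROCESS_IMPACT = "process_impact"   # unforeseen risks from integration into workflows


@dataclass
class AISystemEntry:
    """One record in an enterprise-wide AI system inventory (hypothetical schema)."""
    name: str
    owner: str
    # Assessed severity per category on a simple 1 (low) to 5 (high) scale.
    scores: dict[RiskCategory, int] = field(default_factory=dict)


def triage(entry: AISystemEntry, threshold: int = 4) -> list[RiskCategory]:
    """Return the categories that meet or exceed the escalation threshold."""
    return [cat for cat, score in entry.scores.items() if score >= threshold]


# Example: a GenAI customer-service assistant flagged for data and use risks.
assistant = AISystemEntry(
    name="GenAI customer-service assistant",
    owner="Customer Operations",
    scores={
        RiskCategory.MODEL: 2,
        RiskCategory.DATA: 5,
        RiskCategory.SYSTEM_INFRA: 3,
        RiskCategory.USE: 4,
        RiskCategory.PROCESS_IMPACT: 2,
    },
)
print([c.value for c in triage(assistant)])  # ['data', 'use']
```

However an organisation chooses to record this information, the point is the same: a shared set of categories and a common scoring approach allow systems of very different types to be compared, prioritised, and escalated on a like-for-like basis.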

Educating governance personnel on how these risks may manifest in GenAI use cases, and aligning on a shared view of risk and tolerance, is essential to embedding AI into existing risk processes.

Organisations should follow the steps below to design a governance strategy for AI.

1. Align AI governance to AI strategy.

The EU AI Act and NIST's AI Risk Management Framework advocate for AI governance that is proportional to risk, avoiding the friction or ineffectiveness that arises from overly restrictive or overly loose controls on AI deployment. This alignment also fast-tracks innovation by focusing on priority areas of investment where risk can be effectively managed.

2. Update enterprise perspectives on risk and firm values.

Organisations must foster a shared understanding of AI-related risks that goes beyond traditional risk management, enhancing functions such as procurement, third-party risk, security, privacy, and data management.

Implementing a risk taxonomy for AI technologies and systems supports efficient oversight, while clearly defined roles and responsibilities formalise risk control ownership. Internal codes of conduct should be updated to prevent GenAI misuse.

Organisations should also align decision-making with firm values, revisiting codes of conduct and acceptable use policies so that they address emerging ethical dilemmas and remain responsive.

3. Define roles and responsibilities.

To formalise governance, organisations can establish structures such as an AI steering committee or governance board, including members from existing governance teams. These groups guide internal policy development, support use case prioritisation, and handle escalated issues.

Roles and responsibilities may be adjusted over time or by stage of AI development, implementation, use, and monitoring, reflecting varying needs for approvals, risk management, decisions, and remediation.

4. Develop a training and change programme.

Responsible use of AI capabilities is necessary for GenAI adoption. This can be achieved through coaching on how GenAI works and how risks manifest, as well as a clear understanding of staff responsibilities.

Organisations that move forward methodically, invest in a solid foundation of governance, and unite as a cross-functional team to address difficult questions will be well positioned for AI success.

Find out more about PwC's Responsible AI approach.

 

How can PwC Gibraltar assist?

We don’t just bring tech. We bring results.

PwC can help your organisation adopt GenAI in a responsible way so that you can embrace the vast benefits whilst managing the risks. Our services include:

  • Managing GenAI risks
  • GenAI governance rules
  • GenAI training curriculum
  • GenAI tool implementation
  • Technology risk management
  • Data use in AI

Explore how to create and protect value
