Risk functions can identify the additional responsibilities needed to support GenAI governance objectives, determine when to engage in escalations and decision-making, and coordinate leadership within a holistic AI governance programme.
GenAI's broad applicability can pose challenges for governance, such as reduced control over outputs compared with narrow AI and limited visibility into how specific teams are using it. The risk management process should consider how a single tool can apply to many use cases with different risk profiles.
When updating the AI governance model for GenAI, organisations should decide how much to centralise management of governance resources, giving consideration to:
Scaling GenAI adoption is a major organisational challenge due to the sheer volume of GenAI solutions, freely available tools and features. Managing enterprise-wide usage and maintaining an up-to-date system inventory is complex and requires a consistent, holistic risk taxonomy.
A standardised organisation-wide framework ensures consistency, addressing risks related to data protection and intellectual property, and enabling effective triage of systems and coordination of remediation.
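As a minimal sketch of how such an inventory and triage framework might be structured in practice, the Python below models one inventory record that links a single tool to multiple use cases and risk categories, triaging on the worst-scoring category. All names (`AISystemEntry`, the tiers, the example entries) are illustrative assumptions, not part of any PwC framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemEntry:
    """One record in a hypothetical enterprise AI system inventory."""
    name: str
    owner: str
    use_cases: list = field(default_factory=list)       # one tool, many use cases
    risk_categories: dict = field(default_factory=dict)  # category -> RiskTier

    @property
    def overall_tier(self) -> RiskTier:
        # Triage on the highest-risk category recorded for the system.
        if not self.risk_categories:
            return RiskTier.LOW
        return max(self.risk_categories.values(), key=lambda t: t.value)

# The same chatbot tool, two use cases with different risk profiles
entry = AISystemEntry(
    name="internal-chatbot",
    owner="IT Operations",
    use_cases=["drafting emails", "summarising client contracts"],
    risk_categories={"data protection": RiskTier.HIGH,
                     "intellectual property": RiskTier.MEDIUM},
)
print(entry.overall_tier)  # RiskTier.HIGH
```

Keeping the taxonomy categories as shared keys across every record is what makes consistent, enterprise-wide triage and coordinated remediation possible.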
Several main categories can form the basis of an enterprise-wide risk taxonomy for AI systems:
Educating governance personnel on how these risks may manifest in GenAI use cases, and aligning on a shared view of risk and risk tolerance, is essential to embed AI into existing risk processes.
Organisations should follow the steps below to design a governance strategy for AI.
The EU AI Act and NIST's AI Risk Management Framework advocate for AI governance that is proportional to risk, avoiding the friction or ineffectiveness that arises from overly restrictive or overly permissive deployment controls. This alignment also fast-tracks innovation by focusing investment on priority areas where risk can be effectively managed.
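One hedged sketch of risk-proportional governance: map each risk tier to a set of controls whose weight grows with the tier. The tier names echo the EU AI Act's risk-based categories, but the control lists below are illustrative assumptions for this example, not regulatory requirements.

```python
# Hypothetical mapping of risk tiers to proportionate governance controls.
CONTROLS_BY_TIER = {
    "minimal": ["acceptable-use policy acknowledgement"],
    "limited": ["transparency notice to users"],
    "high": ["pre-deployment risk assessment", "human oversight",
             "ongoing monitoring", "steering-committee approval"],
    "unacceptable": ["prohibited - do not deploy"],
}

def required_controls(tier: str) -> list:
    """Return the governance controls proportional to a system's risk tier."""
    try:
        return CONTROLS_BY_TIER[tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(required_controls("limited"))  # ['transparency notice to users']
```

Low-risk systems clear governance quickly, which is how proportionality fast-tracks innovation while concentrating oversight where it matters.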
Organisations must foster a shared understanding of AI-related risks that goes beyond traditional risk management, considering domains such as procurement, third-party risk, security, privacy, and data.
Implementing a risk taxonomy for AI technologies and systems supports efficient oversight, while clearly defined roles and responsibilities formalise risk control ownership. Internal codes of conduct should be updated to prevent GenAI misuse.
Organisations should also align decision-making with firm values, revisiting codes of conduct and acceptable use policies so that they address emerging ethical dilemmas and remain responsive.
To formalise governance, organisations can establish structures such as an AI steering committee or governance board, including members from existing governance teams. These groups guide internal policy development, support use case prioritisation, and handle escalated issues.
Roles and responsibilities may be adjusted over time or by stage of AI development, implementation, use, and monitoring, reflecting varying needs for approvals, risk management, decisions, and remediation.
Responsible use of AI capabilities is necessary for GenAI adoption. This can be achieved by coaching staff on how GenAI works and how risks manifest, as well as ensuring a clear understanding of their responsibilities.
Organisations that move forward methodically, invest in a solid foundation of governance, and unite as a cross-functional team to address difficult questions will be well positioned for AI success.
Find out more about PwC's Responsible AI approach.
How can PwC Gibraltar assist?