Privacy professionals play a critical role in the governance of AI. As AI becomes more embedded in business operations, its risks – ranging from regulatory penalties to reputational damage – demand a coordinated governance approach. We outline why AI governance must be a cross-functional effort and argue that privacy professionals are uniquely positioned to lead this charge.
Jump to:
- The growing need for AI governance
- AI governance requires a team of experts
- Why privacy professionals are natural AI governance leaders
- Aligning with the NIST AI Risk Management Framework
- The privacy function’s unique strengths
- The role of consent and preferences in building trustworthy AI
The growing need for AI governance
With Artificial Intelligence (AI) – its business and personal benefits, as well as its ethical and practical challenges – on the tip of everyone's tongue, it is clear that AI is here to stay, and equally clear that it is a sticky problem businesses must solve. It is also clear that both AI's uses and its governance require collaboration across multiple areas of expertise. Privacy professionals are uniquely positioned to lead the latter and to play a key role in the former.
Careful and coordinated AI governance is a prerequisite for realizing those benefits. Without solid governance, a company that develops, trains, or deploys AI runs the risk of expensive fines and penalties, inaccurate and ineffective outcomes, and a sudden drop in consumer (and regulator) trust.
AI governance requires a team of experts
Without a doubt, AI governance requires collaboration among a broad and diverse set of experts. After all, AI is a magical, complicated mess – a combination of diverse technologies, data, uses, and possible outcomes.
Effective governance demands collaboration across:
- IT and InfoSec: For infrastructure, deployment, and oversight.
- Data professionals: To ensure quality, architecture, and strategic alignment.
- Legal, ethics, and compliance: To address bias, IP, and regulatory risks.
- Business leaders: To align AI use with strategic goals.
Why privacy professionals are natural AI governance leaders
If AI governance is a team sport, then like any team it needs a coach – someone to organize the moving parts and keep an eye on both the forest and the trees. Though AI governance requires multiple subject matter experts, the privacy professional is a natural fit for that coaching role.
After all, privacy professionals are already well versed in technology, including Privacy Enhancing Technologies (PETs) and related techniques. They are intimately familiar with data and data issues such as quality, origin, consent, and legality. As the operational side of privacy compliance, privacy teams partner closely with legal experts and understand the ins and outs of legal requirements, guidance, and risk balancing.
Privacy experts understand AI privacy issues and obligations, of course, including but not limited to consent, data subject rights, data limitation/minimization, automated decisions, data retention/deletion, and security. Most importantly, however, privacy professionals appreciate compliance and the governance frameworks that make compliance possible, AI included.
Aligning with the NIST AI Risk Management Framework
For example, the National Institute of Standards and Technology (NIST) created the AI Risk Management Framework, which begins with a Govern function. At a high level, the NIST Govern function includes the following (a brief illustrative sketch follows the list):
- Policies, processes, procedures, and practices related to mapping, measuring, and managing AI risks. These include understanding and addressing legal requirements, integrating trustworthy AI characteristics into organizational policies/procedures, determining/documenting/periodically evaluating/resourcing the appropriate risk management practices based on risk tolerance, and establishing procedures for phasing out AI in a way that does not increase risk or decrease trustworthiness.
- Accountability structures with empowered teams/individuals responsible for mapping, measuring, and managing AI risks. These include lines of communication, documented responsibilities and agreements, and executive leadership accountability.
- Prioritized workforce diversity, equity, inclusion, and accessibility in the risk management process. Diverse decision-making teams, along with policies/procedures that define roles and responsibilities for human oversight of AI, support this subcategory.
- Culture that promotes AI risk consideration and communication. Documented policies promoting critical thinking and a safety-first mindset, documentation of risks and AI impacts, and practices that promote AI testing/information sharing are all activities that support this subcategory.
- Processes for engagement with AI actors. These can include policies related to collecting and prioritizing/acting on feedback external to the team regarding individual and societal impacts of AI, as well as feedback loops that help regularly incorporate improvements into AI system design and deployment.
- Policies addressing third-party AI risk. Policies and procedures related to third-party risks, including IP and other rights, as well as contingency processes, support this subcategory.
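To make the accountability and documentation themes above concrete, here is a minimal sketch of how a team might keep a machine-readable AI governance register with owners, review cadences, and open risks. The `GovernanceEntry` structure and its field names are illustrative assumptions, not part of the NIST framework itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GovernanceEntry:
    """One entry in an AI governance register (illustrative structure)."""
    subcategory: str            # e.g., "Third-party AI risk"
    policy_doc: str             # hypothetical ID of the governing policy
    owner: str                  # accountable team or individual
    review_cadence_days: int    # how often the entry must be re-evaluated
    last_reviewed: date
    open_risks: list[str] = field(default_factory=list)

    def review_due(self, today: date) -> bool:
        """True if the periodic evaluation is overdue."""
        return (today - self.last_reviewed).days >= self.review_cadence_days

# Hypothetical register entry mirroring the Govern themes above
register = [
    GovernanceEntry(
        subcategory="Third-party AI risk",
        policy_doc="POL-AI-007",
        owner="Privacy & Vendor Management",
        review_cadence_days=90,
        last_reviewed=date(2024, 1, 15),
        open_risks=["Vendor model IP provenance unverified"],
    ),
]

for entry in (e for e in register if e.review_due(date.today())):
    print(f"Review overdue: {entry.subcategory} (owner: {entry.owner})")
```

A register like this supports the documented responsibilities, periodic evaluation, and clear lines of communication that the Govern function calls for.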
The privacy function’s unique strengths
From this list of AI governance activities, it is easy to see how a company’s privacy team is well positioned to serve as the primary AI governance directing and coordinating function.
- Communication: The privacy function already communicates with cross-functional stakeholders, both in supporting functions like legal and IT/IS and in core business activities like strategy, production, and operations. Privacy's expertise in translating technical requirements into operational guidelines, and vice versa, comes in handy.
- Policy development: As a compliance-oriented function, privacy appreciates the value of well-designed and well-communicated policies, procedures, and standards in promulgating the right workforce behaviors and culture.
- Accountability: One of the ingrained principles of privacy is that of accountability. The privacy function understands how to establish clear roles and responsibilities across the organization, as well as how to document and monitor behavioral expectations.
- Ethics: Any privacy team is no stranger to ethical dilemmas. Though AI's ethical conundrums go beyond privacy, the function is comfortable discussing ethics and making ethical decisions in a cross-functional, human-centric manner.
- Risk storytelling: Privacy is also accustomed to assessing, addressing, and communicating risk throughout the organization. For example, Data Protection Impact Assessments (DPIAs) are foundational activities that privacy professionals drive in most companies. These DPIAs give the organization a formal process for identifying and discussing risks, as well as a vehicle for documenting the care with which the organization made risk-based decisions – telling the 'story' of each risk, the company's handling of it, and the outcome for all relevant stakeholders (see the sketch below).
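As a hedged illustration of that risk-storytelling idea, the sketch below shows one possible shape for a DPIA risk record that keeps the risk, mitigation, decision, and accountable approver together in a single narrative. The `DPIARecord` structure and its fields are hypothetical, not a prescribed DPIA format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DPIARecord:
    """Minimal, illustrative record of one assessed risk in a DPIA."""
    processing_activity: str  # what the AI system does with personal data
    risk: str                 # identified risk to data subjects
    likelihood: str           # e.g., "low" / "medium" / "high"
    severity: str
    mitigation: str           # control adopted to reduce the risk
    residual_risk: str        # risk remaining after mitigation
    decision: str             # e.g., "proceed", "modify", "abandon"
    approved_by: str          # accountable stakeholder
    decision_date: date

    def narrative(self) -> str:
        """Render the 'story' of the risk for stakeholders."""
        return (
            f"{self.processing_activity}: identified '{self.risk}' "
            f"({self.likelihood} likelihood, {self.severity} severity); "
            f"mitigated via '{self.mitigation}', leaving {self.residual_risk} "
            f"residual risk. Decision: {self.decision}, approved by "
            f"{self.approved_by} on {self.decision_date.isoformat()}."
        )

# Hypothetical example entry
record = DPIARecord(
    processing_activity="Model training on customer support transcripts",
    risk="Re-identification of customers from free-text fields",
    likelihood="medium",
    severity="high",
    mitigation="Pseudonymization and named-entity redaction before training",
    residual_risk="low",
    decision="proceed",
    approved_by="DPO",
    decision_date=date(2024, 3, 1),
)
print(record.narrative())
```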
Summary
As more companies appreciate the risks of AI and begin to understand the necessity of AI governance, they also recognize the need for a single person or function to organize the broad set of activities that AI governance requires. The privacy function – with its long history in risk management and DPIAs, growing knowledge of AI technology and its special problems, cross-functional collaboration, ethical decision-making, accountability structures, and documented policy sets – is often taking the lead in that effort.
The role of consent and preferences in building trustworthy AI
As AI governance matures, organizations require a single source of truth for consent and data preferences. A strong consent and preference management partner ensures the following (illustrated by the brief sketch after the list):
- Centralized tracking of user permissions
- Compliance with evolving global regulations
- Seamless integration across systems and teams
- Trust-building through transparency and control
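As a rough illustration of centralized permission tracking, here is a minimal, append-only consent ledger in which the latest event per user and purpose wins, and unknown combinations default to deny. The `ConsentLedger` class and its method names are assumptions for illustration, not the API of any particular consent platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentEvent:
    """One immutable consent decision (illustrative structure)."""
    user_id: str
    purpose: str        # e.g., "model_training", "personalization"
    granted: bool
    timestamp: datetime

class ConsentLedger:
    """Append-only store; the latest event per (user, purpose) wins."""

    def __init__(self) -> None:
        self._events: list[ConsentEvent] = []

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        """Append a grant or withdrawal; history is never overwritten."""
        self._events.append(
            ConsentEvent(user_id, purpose, granted, datetime.now(timezone.utc))
        )

    def is_permitted(self, user_id: str, purpose: str) -> bool:
        """Default-deny: no recorded consent means no permission."""
        for event in reversed(self._events):
            if event.user_id == user_id and event.purpose == purpose:
                return event.granted
        return False

ledger = ConsentLedger()
ledger.record("user-42", "model_training", granted=True)
ledger.record("user-42", "model_training", granted=False)  # user withdraws
assert not ledger.is_permitted("user-42", "model_training")
```

An append-only design like this preserves an audit trail of grants and withdrawals, which supports both regulatory evidence and the transparency that builds user trust.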
In a world where AI decisions increasingly impact individuals, having a reliable partner to manage consent is a strategic imperative.