There is a saying to the effect that ‘if you have a hammer, the world looks like a nail’ – meaning that human beings are predisposed to solve every problem with the tools they are comfortable handling, even when a different tool might work better.
Carried over into the world of governance, especially Artificial Intelligence (AI) governance, the thought behind the saying still carries weight. AI is a complicated, multifaceted, and data-intensive set of technologies, and the governance required to train and run it in ways that mitigate its many associated risks is equally multifaceted and complex. In other words, AI governance requires many diverse types of tools – not just a hammer – and so it requires the perspectives of a variety of specialists who are comfortable with those different types of tools.
At a high level, the AI issues that require governance to mitigate include security, privacy, infrastructure, data quality, legal/compliance, ethics, and result accuracy/bias risks. Each area of risk requires specialized knowledge that is often best provided by professionals specific to that area.
However, specialists working alone can take a myopic view of issues and their solutions – offering their own unique “hammer” when a different tool outside their expertise might serve better. This means that AI governance done well is shared-responsibility governance: governance that covers all relevant risks and is addressed through the collaboration of multiple stakeholders.
In this blog, we’ll explore:
- Security: How to protect AI systems from unauthorized access, tampering, and adversarial threats.
- Privacy: Ensuring data protection through principles like minimization, consent, and anonymization.
- Infrastructure: The IT backbone required to support scalable, secure, and efficient AI operations.
- Data quality: Why clean, accurate, and unbiased data is foundational to trustworthy AI.
- Legal & compliance: Navigating evolving regulations and ensuring AI systems meet legal standards.
- Ethics: Embedding human values and fairness into AI decision-making processes.
- Collaboration: The importance of cross-functional teams in building responsible and effective AI.
AI security governance: Protecting models from threats and tampering
Security professionals will think of AI governance in terms of access controls, monitoring to detect unusual API activity and other suspicious behavior, guardrails that constrain AI outputs, and other technological and procedural security protections. One source suggests six AI security control categories that include different deployment and AI training strategies. All these strategies aim to mitigate the AI risks of unauthorized access and model tampering, data poisoning and integrity failures, adversarial manipulation, and regulatory non-compliance.
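To make one of these controls concrete, here is a minimal sketch of API activity monitoring: it flags a caller whose request rate jumps far above that caller's own recent baseline. The window size, threshold, and key name are illustrative assumptions, not recommendations from any particular security framework.

```python
# Minimal sketch: flag unusual API activity by comparing each caller's
# request rate to its own rolling baseline. All thresholds are illustrative.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 10        # past intervals kept per caller (assumed value)
Z_THRESHOLD = 3.0  # flag rates more than 3 standard deviations above baseline

history = defaultdict(lambda: deque(maxlen=WINDOW))

def record_and_check(api_key: str, requests_this_interval: int) -> bool:
    """Return True if this interval's request count looks anomalous."""
    past = history[api_key]
    anomalous = False
    if len(past) >= 3:  # need a few observations before judging
        baseline, spread = mean(past), stdev(past)
        if spread > 0 and (requests_this_interval - baseline) / spread > Z_THRESHOLD:
            anomalous = True
    past.append(requests_this_interval)
    return anomalous

# Example: steady traffic, then a sudden spike that gets flagged.
for count in [20, 22, 19, 21, 20, 500]:
    if record_and_check("key-123", count):
        print(f"ALERT: unusual activity ({count} requests)")
```

In production this logic would live in an API gateway or SIEM rule rather than in application code, but the principle is the same: establish a baseline, then alert on deviations.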
AI privacy controls: Safeguarding personal data and ensuring compliance
Privacy pros think of AI governance differently. Rather than protecting operational and training databases and AI operations from unauthorized tampering and access, they look for controls related to Fair Information Practice Principles (FIPPs). This means that AI governance for a privacy officer is a matter of data minimization, anonymization/pseudonymization, consent management, and auditing and transparency measures. Techniques like differential privacy and federated learning may also be top of mind.
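As a concrete illustration of one of those techniques, here is a minimal sketch of differential privacy using the Laplace mechanism: calibrated noise is added to a count query so that no individual record can be inferred from the result. The epsilon value and toy data are illustrative assumptions, not tuning guidance.

```python
# Minimal sketch of epsilon-differential privacy via the Laplace mechanism.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon: float = 0.5) -> float:
    """Differentially private count. A counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon satisfies epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: noisy count of individuals over 40 in a toy dataset.
ages = [23, 45, 31, 52, 67, 38, 41]
print(round(dp_count(ages, lambda a: a > 40, epsilon=0.5), 1))
```

Smaller epsilon values add more noise and therefore stronger privacy, at the cost of less accurate answers.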
Building scalable AI infrastructure: IT’s role in supporting AI systems
Information Technology (IT) specialists will consider the optimal combination of hardware, software, networking, and storage for AI – both for training and for deployment. As one source suggests, IT considerations for successful AI will involve identifying:
- Data storage and management architecture that allows for the large scale of AI data needs, as well as sound governance policies and version control practices that establish guardrails and provide for data lineage tracking (see the sketch after this list).
- Scalable and flexible infrastructure that can handle high data volumes and complexity without sacrificing performance.
- Effective integrations that allow data flows between systems while still providing protective structures and tracking to identify and halt unauthorized or inappropriate communications.
- Maintenance and monitoring practices across the AI infrastructure, including software updates, hardware checks, and storage optimization activities.
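To make the lineage-tracking idea concrete, here is a minimal sketch that fingerprints each data file with a content hash so that every training run can record exactly which data versions it consumed. The directory layout and manifest format are illustrative assumptions; dedicated data-versioning tools provide this more robustly.

```python
# Minimal sketch of data lineage tracking: hash each data file so a training
# run can record exactly which data it used. Paths are hypothetical.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, streamed in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict:
    """Map each data file to its hash; store this alongside each training run."""
    return {str(p): fingerprint(p) for p in sorted(root.rglob("*.csv")) if p.is_file()}

if __name__ == "__main__":
    data_dir = Path("training_data")  # hypothetical data directory
    if data_dir.is_dir():
        print(json.dumps(build_manifest(data_dir), indent=2))
```

Comparing manifests across runs makes it immediately visible when a model was trained on changed or unexpected data.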
Ensuring high-quality data for AI: Strategies for accuracy and reliability
Just as critical as security, privacy, and infrastructure considerations, good AI demands high-quality data. After all, AI outputs based on inaccurate data will be inaccurate or otherwise deeply flawed. Data quality experts training or implementing AI will be concerned with reducing the amount of incomplete, inaccurate, untimely, duplicate, inconsistent, irrelevant, and biased data. The controls data quality specialists will want to see in AI training and deployment efforts focus on data hygiene, data quality monitoring, and data source monitoring – all at the velocity and scale that AI demands. Data controls may also include keeping close tabs on data lineage.
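As a concrete illustration, here is a minimal sketch of the kind of hygiene checks a data quality team might run before records reach training: it counts incomplete, duplicate, and out-of-range entries. The schema and validity rules are illustrative assumptions.

```python
# Minimal sketch of pre-training data hygiene checks over a toy record set.
from collections import Counter

REQUIRED_FIELDS = ("id", "age", "country")  # hypothetical schema

def quality_report(records: list) -> dict:
    """Count records that are incomplete, duplicated, or out of range."""
    issues = {"incomplete": 0, "duplicate": 0, "out_of_range": 0}
    id_counts = Counter(r.get("id") for r in records)
    for r in records:
        if any(r.get(f) in (None, "") for f in REQUIRED_FIELDS):
            issues["incomplete"] += 1
        if id_counts[r.get("id")] > 1:
            issues["duplicate"] += 1
        age = r.get("age")
        if isinstance(age, (int, float)) and not 0 <= age <= 120:
            issues["out_of_range"] += 1
    return issues

rows = [
    {"id": 1, "age": 34, "country": "US"},
    {"id": 1, "age": 34, "country": "US"},  # duplicate id
    {"id": 2, "age": -5, "country": "CA"},  # implausible age
    {"id": 3, "age": 28, "country": ""},    # missing country
]
print(quality_report(rows))  # {'incomplete': 1, 'duplicate': 2, 'out_of_range': 1}
```

Real pipelines would run checks like these continuously and at scale, but the categories of defects they look for are the same.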
Navigating AI legal and compliance risks: Regulations and best practices
From the European Union AI Act to US, Canadian, and other jurisdictional requirements and regulator guidance, AI is shaping both how existing regulations are enforced against AI use cases and the promulgation of new requirements and industry practices. Legal concerns are not limited to Intellectual Property (IP); they also include consent management, transparency, spoofing and deepfakes, bias and unfair uses, and many other topics.
Compliance and legal teams will be interested in controls that address the risk of failing to comply with these laws. Controls legal and compliance professionals may suggest include Data Protection Impact Assessments (DPIAs), Privacy by Design processes, legal reviews, third-party management and contracting guidelines and agreements, and other procedural and technical processes that enforce compliance.
Additionally, given that AI law is a volatile area, legal teams will want a plan for monitoring both external regulatory changes and internal practice changes that might newly trigger existing laws.
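One way such procedural safeguards become technical controls is a simple deployment gate: a model release is blocked until the required compliance artifacts have been recorded. The artifact names below are illustrative assumptions, not requirements drawn from any specific regulation.

```python
# Minimal sketch of a deployment gate enforcing compliance sign-offs.
REQUIRED_ARTIFACTS = (
    "dpia_completed",          # hypothetical artifact names
    "legal_review_signed",
    "vendor_contract_on_file",
)

def release_allowed(compliance_record: dict) -> bool:
    """Allow release only when every required compliance step is marked done."""
    missing = [a for a in REQUIRED_ARTIFACTS if not compliance_record.get(a)]
    if missing:
        print(f"Blocked: missing {', '.join(missing)}")
        return False
    return True

# Example: an incomplete record blocks the release.
record = {"dpia_completed": True, "legal_review_signed": False}
print(release_allowed(record))  # Blocked: missing legal_review_signed, vendor_contract_on_file
```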
Ethical AI governance: Principles and processes for responsible use
Then there are ethical concerns relating to whether and how an organization uses AI. Ethics professionals will look to establish processes such as review boards or committees that gather, discuss, and arrive at decisions related to ethical AI use. Controls may also include the creation and application of guiding principles aimed at upholding human values, such as fairness, minimized harm, reliability, and safety. These considerations will also require model and system transparency, giving visibility into processes and establishing accountability at each step.
Collaborative AI governance: Integrating IT, security, privacy, and ethics
While there is some overlap across these AI roles and their considerations, it is clear that AI done well requires all of these perspectives and their associated controls. From IT/infrastructure to security, privacy to legal/compliance, and ethics to data quality, each specialty will have its own unique and necessary point of view about issues, risks, and controls. If even one of these roles is missing from the AI discussion, the company will face unaddressed risk and fail to realize the value that AI can provide.
This means that collaboration on AI projects is key. Forming a cross-functional team to define AI requirements, evaluate or build and train solutions, and implement them across the organization sets AI up for success. In this way, an organization will be able to benefit from the power of secure, privacy-sensitive, compliant, ethical, effective, accurate, efficient, and scalable AI.