A Practical Guide for Better AI Risk Assessments - privAIcy Insights Newsletter
Your compass through the maze of AI, privacy, and security compliance 🧭
Are your AI risk assessments missing the mark? Let's look at how effective AI governance can streamline your process and eliminate redundancies.
In today's rapidly evolving tech landscape, AI has become integral to many business operations, necessitating robust risk assessments. As a lawyer and AI governance lead at a SaaS company, I've observed a significant shift in the types of questionnaires we receive from clients during procurement. What once focused on security and privacy risk assessments now increasingly includes AI-related queries.
Unfortunately, many of these AI questionnaires miss crucial points, leading to inefficiencies on both sides. This article explores how to make AI risk assessments more comprehensive and effective, ensuring the right questions are asked from the start.
💡 Ask the Right Foundational Questions
Having sat on both sides of the table, responding to clients' due diligence questionnaires and performing due diligence on vendors my company wishes to procure, I've experienced the complexities involved firsthand. To make AI risk assessments effective, it's crucial to start with the right foundational questions. These questions clarify the source and risk level of the AI technology, providing a solid basis for everything that follows. Understand the forest before asking about the trees. Key questions include:
Is the AI technology directly developed by the vendor, or is it integrated with a third-party AI, such as a model provided by OpenAI? This distinction is crucial because it affects the granular components of many subsequent considerations, including accountability and transparency. Knowing the origin of the AI technology helps you correctly identify potential risks and the division of responsibilities.
What is the risk classification of the AI feature under the EU AI Act? Even if you believe the EU AI Act doesn't apply, its risk classification framework is valuable for both parties. Determining whether the AI is classified as high risk, limited risk, or minimal risk helps shape the nature and depth of the questions that follow (for example, high-risk systems trigger conformity assessment requirements while lower-risk categories do not). This framework provides a structured approach for you to evaluate risk and for both sides to align on expectations.
What independent certifications does the company have, and can they provide proof? Certifications such as ISO 27001, ISO 27701, SOC 2 Type II, and ISO 42001 are indicators of a company's commitment to security and privacy standards. These certifications provide a benchmark for assessing the reliability and robustness of the vendor's AI technologies. Ensuring that the AI features are covered under these certifications can help verify their compliance with established standards. They're not a guarantee of regulatory compliance, but they can help you figure out the volume, scope, and types of questions you need the vendor to subsequently answer.
Setting the stage with these foundational questions will help reduce needless back-and-forth requests for clarity, time-consuming calls, and redundant information gathering. And it will help ensure that you obtain the precise information needed to form a comprehensive risk assessment profile.
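To make this concrete, here is a minimal sketch of how the answers to those three foundational questions could be captured as structured data that drives the rest of the assessment. The field names and triage tiers are hypothetical illustrations, not any standard schema:

```python
from dataclasses import dataclass
from enum import Enum

class AISource(Enum):
    VENDOR_BUILT = "vendor_built"   # developed in-house by the vendor
    THIRD_PARTY = "third_party"     # e.g., built on a third-party model

class EUAIActRisk(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class FoundationalProfile:
    """Answers to the foundational questions; later questions branch off this."""
    source: AISource
    eu_ai_act_risk: EUAIActRisk
    certifications: list[str]       # e.g., ["ISO 27001", "SOC 2 Type II"]
    ai_in_cert_scope: bool          # were the AI features in scope for those audits?

def follow_up_depth(profile: FoundationalProfile) -> str:
    """Rough triage: how deep should the follow-up questionnaire go?"""
    if profile.eu_ai_act_risk in (EUAIActRisk.UNACCEPTABLE, EUAIActRisk.HIGH):
        return "full"      # conformity-assessment-level detail
    if not profile.ai_in_cert_scope:
        return "extended"  # certifications don't cover the AI features
    return "light"
```

The point isn't the code; it's that the foundational answers determine how much of the remaining questionnaire is even worth sending.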
🔄 Eliminate Redundant Questions from Siloed Teams
From my experience, one common issue is a lack of coordination between different departments within some companies. It's not uncommon to receive separate questionnaires from a company's security team, privacy team, risk team, and an isolated department focused on AI.
This siloed approach creates inefficiency and redundancy for both sides. For instance, we might answer a 500+ question security questionnaire covering a wide range of security measures on our platform, only to receive another questionnaire asking the same questions, this time framed as an AI questionnaire. That's wasteful for the vendor (answering duplicative questions) and for the customer (reviewing, assessing, and classifying duplicative answers).
In reality, these teams should operate symbiotically. Transdisciplinary work is an important component of responsible and ethical AI, and that goes for both the vendor providing AI features and the company looking to use those AI features.
Plus, the security measures applied to an underlying SaaS platform should extend to any AI tech or features layered on top. AI may be new, but secure coding and software development lifecycle practices are not.
Instead of having your security team send out 500+ security questions only for your AI or privacy team to turn around and ask the same questions, work together to leverage dynamic questionnaire capabilities. Begin with introductory questions to determine whether the security protocols covered in, say, the vendor's SOC 2 Type II report apply equally to their AI features. Better yet, ask if their AI features were in scope or descoped from that audit.
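As a rough illustration (assuming a home-grown questionnaire tool rather than any particular GRC platform), the gating logic might look something like this:

```python
def ai_follow_ups(soc2_covers_ai: bool, security_review_done: bool) -> list[str]:
    """Return only the AI follow-up questions not already covered elsewhere.

    Both parameters come from the introductory questions; the gating logic
    here is a hypothetical example, not a prescribed standard.
    """
    follow_ups = []
    if not soc2_covers_ai:
        # AI features were descoped from the audit, so ask for AI-specific controls.
        follow_ups.append(
            "What security controls apply specifically to your AI features?"
        )
    if not security_review_done:
        # No platform security questionnaire on file, so ask baseline questions.
        follow_ups.append(
            "Is customer data used by AI features encrypted in transit and at rest?"
        )
    return follow_ups

# If the SOC 2 report covers the AI features and a security review is on file,
# no duplicative questions go out:
assert ai_follow_ups(soc2_covers_ai=True, security_review_done=True) == []
```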
🎯 Laser Focus on What You Really Need to Know
As a former trial lawyer, I learned that asking open-ended questions during a deposition is a gamble: I might get a vague response, a completely off-topic response, or a response that leaves me with more questions than answers. I was taught: if you know what you need answers to, frame it as a "yes" or "no" question.
The same is true for vendor due diligence.
If you ask an open-ended question, you're going to get an open-ended response. And that triggers rounds of back-and-forth just to pin down what you actually want to know.
When writing AI questionnaires, ask yourself: what do I need to know, and why do I need to know it? For example: do you need to know whether the vendor encrypts data in transit to a specific standard (say, TLS 1.2+ with AES-256)? If so, ask exactly that. Don't broadly ask "how do you protect company data?"
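One way to enforce this discipline (a sketch with made-up field names) is to require every question to declare an answer type and a reason, so open-ended questions have to justify themselves:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    answer_type: str   # "yes_no", "multiple_choice", or "free_text"
    why_we_ask: str    # forces the author to articulate the purpose

# A closed question tied to a concrete requirement:
good = Question(
    text="Do you encrypt data in transit using TLS 1.2 or higher?",
    answer_type="yes_no",
    why_we_ask="Our security policy requires TLS 1.2+ for all vendor traffic.",
)

# The open-ended version invites a vague answer and more back-and-forth:
vague = Question(
    text="How do you protect company data?",
    answer_type="free_text",
    why_we_ask="Unclear, which is usually a sign the question needs rework.",
)
```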
👥 Remember Your Audience
Another layer of complexity arises when a company has a series of AI questions designed to be answered by their internal business owner, but the questions are too complex or technical. Those business owners forward piecemeal questions to the vendor, but the questions weren't written for the vendor, and the vendor isn't privy to all of the available answer options. This results in a lot of back-and-forth just to understand what it is the company really wants to know.
When I sit on the procurement side of the table, I advocate for the use of two different questionnaires:
Internal Questionnaire: Conducted within your organization to evaluate the risks, benefits, and compliance posture of a new technology or product. The focus is on whether there is general alignment with internal policies, guidelines, and regulatory requirements.
Think: non-technical, user-friendly questions sent to the internal business owner to help you get the lay of the land
Ask yourself: is this something an internal business owner who knows nothing about technology could answer?
Vendor Questionnaire: Sent to vendors to evaluate their security, privacy, and ethical standards. The focus is on whether the vendor's solutions and practices meet your company's requirements without introducing unacceptable risks.
Think: longer, more technical questions sent to the vendor to help you understand the technical components of the vendor's AI systems.
Ask yourself: what do I need to know to form my risk assessment, and what is the most reliable and efficient way to get that information?
Conflating these two types of assessments just leads to inefficiencies and burdens on both sides.
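If both question banks live in one tool, a simple audience tag (again, a hypothetical sketch, not any particular product's schema) keeps the two from bleeding into each other:

```python
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    text: str
    audience: str  # "internal" (business owner) or "vendor"

QUESTION_BANK = [
    AssessmentItem("What business problem will this AI tool solve?", "internal"),
    AssessmentItem("Will any personal data be fed into the tool?", "internal"),
    AssessmentItem("Is your AI developed in-house or provided by a third party?", "vendor"),
    AssessmentItem("Which independent certifications cover your AI features?", "vendor"),
]

def questionnaire_for(audience: str) -> list[str]:
    """Split the single question bank into two audience-appropriate questionnaires."""
    return [item.text for item in QUESTION_BANK if item.audience == audience]
```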
📣 Final Thoughts
Putting law into practice is always tricky at first. When privacy impact assessments first entered the scene, inefficient and burdensome questionnaires were the norm, as everyone was trying to figure out how to navigate new privacy laws. The same will be true for AI. But the lessons of that experience can help us ask the right questions and get the answers we need.
Responsible AI requirements are crucial. Finding a balanced, efficient way to address these requirements will benefit everyone involved.
What are your thoughts on improving AI risk assessments? Let's discuss!
🧠 Join the privAIcy ThinkTank
I'm starting a think tank for legal, privacy, security, and AI governance practitioners who want to brainstorm how we can take the theories espoused in regulations and actually apply them in our day-to-day ops. Interested? Let me know!
🙏 Thank you for reading!
I hope you enjoyed reading this as much as I enjoyed writing it.
-Rachel
