IAPP AIGP Exam Questions

Questions for the AIGP were updated on Nov 21, 2025.


Question 1

A company deploys an AI model for fraud detection in online transactions. During its operation, the
model begins to exhibit high rates of false positives, flagging legitimate transactions as fraudulent.
Which is the best step the company should take to address this development?

  • A. Dedicate more resources to monitor the model.
  • B. Maintain records of all false positives.
  • C. Deactivate the model until an assessment is made.
  • D. Conduct training for customer service teams to handle flagged transactions.
Answer:

C


Explanation:
When an AI system produces significant false positives, especially in a sensitive context like fraud
detection, the priority is to halt the harmful activity and perform a full assessment. Continued use
without understanding the fault may cause further customer harm and legal exposure.
From the AI Governance in Practice Report 2024:
“Incident management plans should enable identification, escalation, and system rollback to prevent
continued harm from malfunctioning AI systems.” (pp. 12, 35)


Question 2

After initially deploying a third-party AI model, you learn the developer has released a new version.
As deployer of this third-party model, what should you do?

  • A. Audit the model.
  • B. Retrain the model.
  • C. Seek input from data scientists.
  • D. Communicate necessary updates to your users.
Answer:

A


Explanation:
When a new version of a third-party model is released, the deployer must ensure it still meets safety,
performance, and compliance requirements, which calls for a formal audit.
From the AI Governance in Practice Report 2024:
“Any updates or changes to AI systems should trigger a re-evaluation to ensure continued
compliance and performance.” (p. 12)
“Post-market monitoring includes reassessing the impact of updated models or retraining.” (p. 35)


Question 3

What is the most significant risk of deploying an AI model that can create realistic images and
videos?

  • A. Copyright infringement.
  • B. Security breaches.
  • C. Downstream harms.
  • D. Output cannot be protected.
Answer:

C


Explanation:
The greatest risk from AI systems generating realistic synthetic media is downstream harm, such
as deepfakes, misinformation, reputational damage, and erosion of trust.
From the AI Governance in Practice Report 2024:
“With generative AI, downstream harms such as deception, reputational damage, misinformation,
and manipulation can emerge even if original use was lawful.” (pp. 55–56)


Question 4

A deployer discovers that a high-risk AI recruiting system has been making widespread errors,
resulting in harms to the rights of a considerable number of EU residents who are denied
consideration for jobs for improper reasons such as ethnicity, gender and age.
According to the EU AI Act, what should the company do first?

  • A. Notify the provider, the distributor, and finally the relevant market authority of the serious incident.
  • B. Identify any decisions that may have been improperly made and re-open them for human review.
  • C. Submit an incomplete report to the relevant market authority immediately and follow up with a complete report as soon as possible.
  • D. Conduct a thorough investigation of the serious incident within the 15-day timeline and present the completed report to the relevant market authority.
Answer:

A


Explanation:
Under the EU AI Act, serious incidents involving high-risk AI systems must be reported. The deployer is
required to promptly inform the provider and the relevant authorities about the issue.
From the AI Governance in Practice Report 2024:
“Serious incidents involving high-risk systems… must be reported to the provider and relevant
market surveillance authority.” (p. 35)
“Timely reporting is required when AI systems result in or may result in violations of fundamental
rights.” (p. 35)


Question 5

All of the following issues are unique to proprietary AI model deployments EXCEPT?

  • A. The acquisition of training data.
  • B. The cost of AI chips.
  • C. The potential for bias.
  • D. The necessity of performing conformity assessments.
Answer:

C


Explanation:
Bias is a common risk across both proprietary and open-source models, and is not unique to proprietary
deployments. All AI systems, regardless of origin, require evaluation for fairness, accuracy, and
representativeness.
From the AI Governance in Practice Report 2024:
“Bias, discrimination and fairness challenges are present in both open and closed models, regardless
of how the model is sourced.” (p. 41)


Question 6

A company that deploys AI but is not currently a provider or developer intends to develop and
market its own AI system.
Which obligation would then be likely to apply?

  • A. Implementing a risk management framework.
  • B. Conducting an impact assessment including a post-deployment monitoring plan.
  • C. Developing documentation on the system, the potential risks and the safeguards applied.
  • D. Developing a reporting plan for any observed algorithmic discrimination or harms to individuals’ rights and freedoms.
Answer:

C


Explanation:
Once a company moves from being a deployer to also acting as a provider or developer, it
assumes new obligations under regulations like the EU AI Act. One of the core requirements for
providers is to produce and maintain technical documentation, including descriptions of the model,
associated risks, and mitigation strategies.
From the AI Governance in Practice Report 2024:
“Providers of high-risk AI systems must draw up technical documentation demonstrating the
system’s conformity with the requirements... including potential risks and safeguards applied.” (p. 34)
“This documentation must be available before placing the system on the market.” (p. 35)


Question 7

A company developing and deploying its own AI model would perform all of the following steps to
monitor and evaluate the model's performance EXCEPT?

  • A. Publicly disclosing data with forecasts of secondary and downstream harms to stakeholders.
  • B. Setting up automated tools to regularly track the model's accuracy, precision and recall rates in real-time.
  • C. Implementing a formal incident response plan to address incidents that may occur during system operation.
  • D. Establishing a regular schedule for human evaluation of the model's performance, including qualitative assessments.
Answer:

A


Explanation:
While transparency is encouraged, publicly disclosing forecasts of secondary harms is not a required or
standard practice for internal performance evaluation. Risk assessments and reporting typically
remain internal or are shared with regulators.
From the AI Governance in Practice Report 2024:
“Organizations must assess secondary risks… but disclosure is subject to context, regulatory
requirements, and risk management discretion.” (p. 30)
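To make option B concrete, here is a minimal sketch of automated metric tracking over a sliding window of labeled outcomes. It is not from the report; the class name, window size, and alert threshold are illustrative assumptions. The alert flag shows how such monitoring could feed the incident response plan in option C.

```python
from collections import deque

class ModelMonitor:
    """Tracks accuracy, precision and recall over a sliding window (illustrative)."""

    def __init__(self, window_size=1000, recall_floor=0.90):
        self.window = deque(maxlen=window_size)  # (predicted, actual) pairs
        self.recall_floor = recall_floor         # assumed escalation threshold

    def record(self, predicted: bool, actual: bool) -> None:
        self.window.append((predicted, actual))

    def metrics(self) -> dict:
        tp = sum(1 for p, a in self.window if p and a)
        fp = sum(1 for p, a in self.window if p and not a)
        fn = sum(1 for p, a in self.window if not p and a)
        tn = sum(1 for p, a in self.window if not p and not a)
        total = len(self.window) or 1
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return {
            "accuracy": (tp + tn) / total,
            "precision": precision,
            "recall": recall,
            # True when recall drops below the floor -- a trigger for the
            # internal incident response process, not for public disclosure.
            "alert": recall < self.recall_floor,
        }
```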


Question 8

A leading software development company wants to integrate AI-powered chatbots into their
customer service platform. After researching various AI models in the market which have been
developed by third-party developers, they're considering two options:
Option A - an open-source language model trained on a vast corpus of text data and capable of being
trained to respond to natural language inputs.
Option B - a proprietary, generative AI model pre-trained on large data sets, which uses transformer-
based architectures to generate human-like responses based on multimodal user input.
Option A would be the best choice for the company because?

  • A. It is less expensive to run
  • B. It may be better suited for applications requiring customization.
  • C. It can handle voice commands and is more suitable for phone-based customer support.
  • D. It is built for large-scale, complex dialogues and would be more effective in handling high-volume customer inquiries.
Answer:

B


Explanation:
Open-source models offer more customization flexibility, allowing organizations to fine-tune or adapt
the model to fit their own workflows, branding, or compliance needs, making them preferable when
deep control is needed.
From the AI Governance in Practice Report 2024:
“Open-source AI allows organizations to review, adapt, and control model behavior in line with
organizational needs and policies.” (p. 39)
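As an illustration of why open weights enable this control, here is a minimal sketch using the Hugging Face transformers library; the model identifier is a hypothetical placeholder, and the whole setup is an assumption rather than anything the question or report prescribes.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "example-org/open-chat-model"  # hypothetical open-source model ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Because the weights are local, the company can go on to fine-tune them on
# its own support transcripts (e.g., with the Trainer API or parameter-
# efficient methods such as LoRA) -- weight-level adaptation that a closed,
# hosted model like Option B does not allow.
prompt = "Customer: My order never arrived.\nAgent:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```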


Question 9

Which model is best for efficiency and agility, and tailored for lower-resource settings?

  • A. Supervised learning model.
  • B. Multimodal model.
  • C. Small language model.
  • D. Generative language model.
Answer:

C


Explanation:
Small language models (SLMs) are lightweight, require less compute, and are better suited to low-
resource or edge environments, making them ideal for agility and efficiency.
From general AI best practices:
“SLMs can be deployed in environments with limited computing power, ensuring lower cost and
faster integration in constrained contexts.” (aligned with industry-wide AI deployment strategies)


Question 10

What is the most important factor when deciding whether or not to select a proprietary AI model?

  • A. What business purpose it will serve.
  • B. How frequently it will be updated.
  • C. Whether its training data is disclosed.
  • D. Whether its system card identifies risks.
Answer:

A


Explanation:
The primary consideration in selecting any AI system, especially a proprietary model, is its fit for
business purpose. Whether it serves the intended goals is foundational and comes before evaluating
technical or governance features.
From the AI Governance in Practice Report 2024:
“AI governance starts with defining the corporate strategy for AI… and aligning systems with business
purpose and operational context.” (p. 11)
B, C, and D are relevant for evaluation, but only after confirming business applicability.


Question 11

All of the following are potential benefits of using private over public LLMs EXCEPT?

  • A. Reduction in time taken for data validation and verification.
  • B. Confirmation of security and confidentiality.
  • C. Reduction in possibility of hallucinated information.
  • D. Application for specific use cases within the enterprise.
Answer:

A


Explanation:
Private LLMs offer advantages like customizability, reduced hallucination, confidentiality,
and alignment with enterprise-specific tasks, but they do not inherently reduce the time or
effort needed for data validation or verification, which remains an essential step regardless of model
privacy.
From the AI risk and quality sections:
“Ensuring the quality of the data… is highly contextual and must be validated regardless of the
model’s deployment environment.” (p. 17)
B, C, and D are legitimate benefits of private LLMs.
A is the exception: validation still requires time and resources.


Question 12

The best method to ensure a comprehensive identification of risks for a new AI model is?

  • A. An environmental scan.
  • B. Red teaming.
  • C. Integration testing.
  • D. An impact assessment.
Answer:

D


Explanation:
The most comprehensive way to identify a full range of risks (legal, ethical, operational, and
societal) for a new AI model is through a formal impact assessment, such as a Data Protection
Impact Assessment (DPIA) or an Algorithmic Impact Assessment.
From the AI Governance in Practice Report 2024:
“Risk-based approaches are often distilled into organizational risk management efforts, which put
impact assessments at the heart of deciding whether harm can be reduced.” (p. 29)
“DPIAs… help organizations identify, analyze and minimize data-related risks and demonstrate
accountability.” (p. 30)
A. An environmental scan is too general.
B. Red teaming is useful for adversarial risks but not broad in scope.
C. Integration testing focuses on technical and system compatibility, not overall risk.


Question 13

Why is it important that conformity requirements are satisfied before an AI system is released into
production?

  • A. To ensure the visual design is fit-for-purpose.
  • B. To ensure the AI system is easy for end-users to operate.
  • C. To guarantee interoperability of the AI system across multiple platforms and environments.
  • D. To comply with legal and regulatory standards, ensuring the AI system is safe and trustworthy.
Answer:

D


Explanation:
Conformity assessments are a core requirement under the EU AI Act for high-risk systems and serve to
confirm that the AI meets regulatory, safety, and ethical standards before it is put into production.
From the AI Governance in Practice Report 2024:
“Conformity assessments… ensure that systems comply with legal requirements, safety criteria, and
intended purpose before being placed on the market.” (p. 34)
“They are a critical step to demonstrate safety and trustworthiness in AI deployment.” (p. 35)


Question 14

In procuring an AI system from a vendor, which of the following would be important to include in a
contract to enable proper oversight and auditing of the system?

  • A. Liability for mistakes.
  • B. Ownership of data and outputs.
  • C. Responsibility for improvements.
  • D. Appropriate access to data and models.
Answer:

D


Explanation:
Ensuring oversight and auditability requires that the organization has sufficient access to the data,
documentation, and model internals or outputs necessary for evaluation.
From the AI Governance in Practice Report 2024:
“Access to technical documentation and system internals is essential to enable effective auditing,
conformity checks, and accountability mechanisms.” (pp. 11, 34)
A is about liability, not auditability.
B matters for IP rights, not oversight.
C relates to lifecycle responsibility but doesn’t guarantee audit access.


Question 15

Your organization is searching for a new way to help accurately forecast sales predictions by various
types of customers.
Which of the following is the best type of model to choose if your organization wants to customize
the model and avoid lock-in?

  • A. A free large language model.
  • B. A classic machine learning model.
  • C. A proprietary generative AI model.
  • D. A subscription-based, multimodal model.
Answer:

B


Explanation:
For customizable, interpretable models that allow organizations to retain control and avoid vendor
lock-in, classic ML models (e.g., regression, decision trees, random forests) are optimal.
From the AI Governance in Practice Report 2024:
“Organizations seeking transparency, customizability, and control often prefer classic ML models due
to their flexibility and ease of governance.” (p. 33)
A and C may have limited transparency and are often tied to specific providers.
D involves ongoing costs and limited model control.
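For a sense of what such a classic model looks like in practice, here is a minimal sketch using scikit-learn; the features and the synthetic data are illustrative assumptions, not part of the question.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative features: encoded customer type, prior-period spend, region.
X = rng.random((500, 3))
# Synthetic sales figures driven mostly by the first two features.
y = 100 * X[:, 0] + 40 * X[:, 1] + rng.normal(0, 5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# The fitted model is fully inspectable (e.g., model.feature_importances_)
# and runs wherever Python does -- no vendor API, hence no lock-in.
```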
