Questions for the AAISM were updated on: Dec 01, 2025
Which of the following AI-driven systems should have the MOST stringent recovery time objective
(RTO)?
D
Explanation:
AAISM risk guidance notes that the most stringent recovery objectives apply to industrial control
systems, as downtime can directly disrupt critical infrastructure, manufacturing, or safety operations.
Health support systems also require high availability, but industrial control often underpins safety-
critical and real-time environments where delays can result in catastrophic outcomes. Credit risk
models and navigation systems are important but less critical in terms of immediate physical and
operational impact. Thus, industrial control systems require the tightest RTO.
Reference:
AAISM Study Guide – AI Risk Management (Business Continuity in AI)
ISACA AI Security Management – RTO Priorities for AI Systems
An organization utilizes AI-enabled mapping software to plan routes for delivery drivers. A driver
following the AI route drives the wrong way down a one-way street, despite numerous signs. Which
of the following biases does this scenario demonstrate?
D
Explanation:
AAISM defines automation bias as the tendency of individuals to over-rely on AI-generated outputs
even when contradictory real-world evidence is available. In this scenario, the driver ignores traffic
signs and follows the AI’s instructions, showing blind reliance on automation. Selection bias relates
to data sampling, reporting bias refers to misrepresentation of results, and confirmation bias involves
interpreting information to fit pre-existing beliefs. The most accurate description is automation bias.
Reference:
AAISM Exam Content Outline – AI Risk Management (Bias Types in AI)
AI Security Management Study Guide – Automation Bias in AI Use
To ensure AI tools do not jeopardize ethical principles, it is MOST important to validate that:
B
Explanation:
AAISM highlights that the core ethical risk in AI is the perpetuation of bias that results in unfair or
discriminatory outcomes. Therefore, the most important validation step is ensuring that outputs of
AI systems are free from adverse biases. A responsible development policy, stakeholder approvals,
and privacy reviews all contribute to governance, but they do not directly ensure ethical outcomes.
Validation of output fairness is the critical safeguard for ensuring AI does not violate ethical
principles.
Reference:
AAISM Study Guide – AI Risk Management (Bias and Ethics Validation)
ISACA AI Security Management – Ethical AI Practices
Which of the following is the BEST reason to immediately disable an AI system?
A
Explanation:
According to AAISM lifecycle management guidance, the best justification for disabling an AI system
immediately is the detection of excessive model drift. Drift results in outputs that are no longer
reliable, accurate, or aligned with intended purpose, creating significant risks. Performance slowness
and overly detailed outputs are operational inefficiencies but not critical shutdown triggers.
Insufficient training should be addressed before deployment rather than after. The trigger for
immediate deactivation in production is excessive drift compromising reliability.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Model Drift
Management)
AI Security Management Study Guide – Disabling AI Systems
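As an illustration (not part of the AAISM material), one common way "excessive model drift" is quantified in practice is the Population Stability Index (PSI), which compares a baseline feature distribution against the current production distribution. The bin counts and the 0.25 threshold below are illustrative assumptions, not figures from the study guide.

```python
import math

def psi(baseline_counts, current_counts):
    """Population Stability Index over pre-binned counts."""
    b_total = sum(baseline_counts)
    c_total = sum(current_counts)
    total = 0.0
    for b, c in zip(baseline_counts, current_counts):
        # Small floor avoids log/division errors on empty bins.
        b_pct = max(b / b_total, 1e-6)
        c_pct = max(c / c_total, 1e-6)
        total += (c_pct - b_pct) * math.log(c_pct / b_pct)
    return total

baseline = [100, 200, 400, 200, 100]   # training-time distribution
current = [300, 250, 250, 150, 50]     # drifted production distribution

score = psi(baseline, current)
# A common rule of thumb (illustrative): PSI > 0.25 signals major drift.
if score > 0.25:
    print(f"PSI={score:.3f}: excessive drift - escalate for possible shutdown")
```

A monitoring job would run this check on a schedule and raise the shutdown decision to a human, consistent with the explanation's point that drift, not slowness, is the deactivation trigger.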
Which of the following is a key risk indicator (KRI) for an AI system used for threat detection?
D
Explanation:
AAISM materials emphasize that in operational AI systems, key risk indicators (KRIs) must reflect
risks to performance and reliability rather than technical design factors alone. In the case of threat
detection, the most relevant KRI is the frequency of system overrides by human analysts, as this
indicates a lack of trust, frequent false positives, or poor detection accuracy. Training epochs, model
depth, and training time are technical metrics but do not directly measure operational risk. Analyst
overrides represent a practical measure of system effectiveness and risk.
Reference:
AAISM Study Guide – AI Risk Management (Operational KRIs for AI Systems)
ISACA AI Security Management – Monitoring AI Effectiveness
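To make the KRI concrete (a sketch, not AAISM content): the override frequency can be computed directly from alert-handling records. The event schema and the 20% threshold below are hypothetical.

```python
def override_rate(events):
    """Fraction of AI-generated alerts that a human analyst overrode."""
    if not events:
        return 0.0
    overridden = sum(1 for e in events if e["analyst_overrode"])
    return overridden / len(events)

alerts = [
    {"id": 1, "analyst_overrode": False},
    {"id": 2, "analyst_overrode": True},
    {"id": 3, "analyst_overrode": True},
    {"id": 4, "analyst_overrode": False},
    {"id": 5, "analyst_overrode": True},
]

rate = override_rate(alerts)   # 3 of 5 alerts overridden = 0.6
KRI_THRESHOLD = 0.20           # illustrative risk threshold
if rate > KRI_THRESHOLD:
    print(f"KRI breach: {rate:.0%} of detections overridden by analysts")
```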
The PRIMARY benefit of implementing moderation controls in generative AI applications is that it
can:
D
Explanation:
AAISM materials identify the primary benefit of moderation controls in generative AI systems as
their ability to filter out harmful, offensive, or inappropriate content before it is delivered to users.
This safeguards organizational reputation, compliance, and user trust. While moderation may
indirectly support compliance with privacy requirements, its main function is ensuring that outputs
align with ethical and safety standards. Moderation does not enhance creativity or response speed.
Its primary value is in controlling the quality of generated outputs by blocking harmful content.
Reference:
AAISM Study Guide – AI Technologies and Controls (Moderation and Output Controls)
ISACA AI Security Management – Harmful Content Mitigation in Generative AI
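A deliberately simplified illustration of where an output-moderation control sits in the flow: generated text is checked against a policy before it reaches the user. Production systems use trained moderation classifiers rather than keyword deny-lists; the terms below are placeholders.

```python
DENY_LIST = {"slur_example", "threat_example"}  # hypothetical policy terms

def moderate(generated_text):
    """Return (allowed, text); withhold output that matches the deny-list."""
    lowered = generated_text.lower()
    if any(term in lowered for term in DENY_LIST):
        return False, "[response withheld by moderation policy]"
    return True, generated_text

ok, text = moderate("Here is a normal, helpful answer.")
print(ok, text)                         # allowed, passed through unchanged
blocked, text = moderate("output containing slur_example")
print(blocked, text)                    # blocked, replaced with notice
```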
Which of the following controls BEST mitigates the inherent limitations of generative AI models?
A
Explanation:
The AAISM governance framework emphasizes that the inherent limitations of generative AI—
including hallucinations, bias, and unpredictability—are best mitigated by human oversight. Human-
in-the-loop review ensures that outputs are validated before being used in sensitive or high-risk
contexts. Regulatory adoption, system classification, and reverse engineering all play supporting
roles but do not directly safeguard against the model’s inherent unpredictability. Governance best
practices highlight human oversight as the critical safeguard.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Human Oversight and
Accountability)
AI Security Management Study Guide – Mitigating Generative AI Limitations
An organization recently introduced a generative AI chatbot that can interact with users and answer
their queries. Which of the following would BEST mitigate hallucination risk identified by the risk
team?
D
Explanation:
AAISM highlights fine-tuning foundational models as one of the most effective strategies for reducing
hallucination risk. By tailoring the model with domain-specific, curated, and verified datasets,
organizations can reduce the frequency of irrelevant or fabricated outputs. Testing and validation
help evaluate risks but do not directly minimize hallucinations. Training on larger datasets may
improve generalization but does not guarantee accuracy. Developer training in AI risk supports
governance but is not a technical control against hallucinations. The best mitigation is fine-tuning to
align the chatbot with trusted, context-specific knowledge.
Reference:
AAISM Study Guide – AI Risk Management (Hallucination and Output Integrity Risks)
ISACA AI Security Management – Fine-tuning Generative Models
Which of the following is the MOST important factor to consider when selecting industry frameworks
to align organizational AI governance with business objectives?
D
Explanation:
According to AAISM governance principles, the risk appetite of the organization is the most
important factor in selecting appropriate frameworks for AI governance. Risk appetite defines the
level of risk an organization is willing to accept in pursuit of its objectives, ensuring frameworks are
aligned with strategic goals. Risk tolerance and thresholds are operational measures derived from
appetite, and the risk register is a documentation tool. The foundational consideration for framework
alignment is the organization’s risk appetite.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Risk Appetite in
Governance Alignment)
AI Security Management Study Guide – Framework Selection and Business Strategy
From a risk perspective, which of the following is the MOST important step when implementing an
adoption strategy for AI systems?
C
Explanation:
AAISM guidance states that when adopting AI, the most important step is to conduct a risk
assessment and update the enterprise risk register. This ensures AI-specific risks are identified,
documented, and integrated into the organization’s existing governance structures. Benchmarking
peers provides context but does not address internal risk. Implementing methodologies and
frameworks is important, but those activities precede or follow the assessment process. The decisive step that
connects adoption to enterprise risk governance is updating the risk register with AI-specific risks.
Reference:
AAISM Study Guide – AI Risk Management (Integration with Enterprise Risk Management)
ISACA AI Security Management – Risk Assessment and Register Updates
Which of the following factors is MOST important for preserving user confidence and trust in
generative AI systems?
C
Explanation:
AAISM risk guidance underscores that transparent disclosure and informed consent are the most
important factors in maintaining user trust in generative AI. Users must clearly understand how
outputs are created, what data sources are used, and how risks such as bias or misinformation are
managed. While bias minimization, access controls, and anonymization contribute to technical or
ethical robustness, they are not sufficient on their own to preserve user trust. Trust requires openness and
consent, which align with governance expectations for transparency and accountability.
Reference:
AAISM Exam Content Outline – AI Risk Management (Transparency and Trust)
AI Security Management Study Guide – User Confidence in Generative AI
Which area of intellectual property law presents the GREATEST challenge in determining copyright
protection for AI-generated content?
B
Explanation:
AAISM governance content highlights that the greatest intellectual property challenge in the context
of AI-generated works is determining rightful ownership. Traditional copyright law requires human
authorship, but AI-generated creations blur authorship and ownership boundaries, raising legal
uncertainty about who can claim rights. Trademark enforcement, trade secret protection, and
licensing frameworks are established areas of IP law but do not present the same fundamental
challenge as ownership attribution. For AI-generated content, the central legal dilemma is ownership
of the creation.
Reference:
AAISM Study Guide – AI Governance and Program Management (Intellectual Property and AI)
ISACA AI Security Management – Copyright and Ownership Challenges
An AI research team is developing a natural language processing model that relies on several open-
source libraries. Which of the following is the team’s BEST course of action to ensure the integrity of
the software packages used?
B
Explanation:
AAISM’s technical control guidance emphasizes that when using open-source libraries, the best
safeguard for integrity is to scan the packages for malware before installation. This ensures that
compromised or malicious code does not enter the AI system environment. Maintaining lists aids
consistency but not security. Always using the latest versions may introduce new, unvetted
vulnerabilities. Retraining models addresses functionality but not software integrity. Therefore, the
strongest protective measure is pre-installation malware scanning of open-source packages.
Reference:
AAISM Exam Content Outline – AI Technologies and Controls (Software Supply Chain Security)
AI Security Management Study Guide – Open-Source Package Risk Mitigation
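A minimal sketch of one pre-installation integrity check (an assumption about how a team might implement the control, not AAISM-prescribed): compare the SHA-256 digest of a downloaded package archive against a published, trusted digest before installing. In practice teams also run scanners such as pip-audit; the file contents and digests here are stand-ins.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_package(path, expected_digest):
    """Refuse installation when the digest does not match the trusted value."""
    return sha256_of(path) == expected_digest

# Demo with a throwaway file standing in for a downloaded package.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake package bytes")
    pkg_path = f.name

trusted = hashlib.sha256(b"fake package bytes").hexdigest()
print(verify_package(pkg_path, trusted))   # True: digest matches, safe to install
print(verify_package(pkg_path, "0" * 64))  # False: reject the package
os.unlink(pkg_path)
```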
Which of the following is the MOST effective way to mitigate the risk of deepfake attacks?
C
Explanation:
AAISM study content identifies validating the provenance of data sources as the most effective way
to counter deepfake risks. Provenance validation ensures that content is authentic, verifiable, and
traceable, preventing malicious synthetic media from being trusted as legitimate. Human oversight
helps but cannot reliably detect sophisticated fakes. Limiting tool access reduces exposure but does
not prevent external attacks. General-purpose LLMs are not optimized for fraud detection. The
strongest control is verifying the origin and authenticity of data before acceptance.
Reference:
AAISM Study Guide – AI Risk Management (Deepfake and Content Integrity Risks)
ISACA AI Security Management – Provenance Validation as a Defense
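To show the shape of provenance validation (a sketch under stated assumptions): content is accepted only if it carries a valid signature from a trusted source. Real deployments use public-key standards such as C2PA content credentials; HMAC with a shared key is used here only to keep the example self-contained, and the key and payloads are hypothetical.

```python
import hashlib
import hmac

TRUSTED_KEY = b"shared-secret-with-trusted-source"  # hypothetical key

def sign(content: bytes) -> str:
    """Signature a trusted source would attach to authentic content."""
    return hmac.new(TRUSTED_KEY, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, signature: str) -> bool:
    """Accept content only if its signature verifies (timing-safe compare)."""
    return hmac.compare_digest(sign(content), signature)

video = b"...original media bytes..."
good_sig = sign(video)
print(is_authentic(video, good_sig))        # True: provenance verified
print(is_authentic(b"deepfake", good_sig))  # False: unverified, reject
```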
A PRIMARY objective of responsibly providing AI services is to:
C
Explanation:
AAISM emphasizes that the primary objective of responsible AI is to establish and maintain trust in
AI-driven decisions and predictions. Trust is achieved through transparency, accountability, fairness,
and governance. While confidentiality and integrity are critical technical objectives, they are not the
overarching purpose of responsible AI service provision. Autonomy and learning ability are features
of AI, but without trust, adoption and compliance falter. The correct answer is that responsible AI
services must focus on building trust in AI outcomes.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Responsible AI Principles)
AI Security Management Study Guide – Trust and Ethical AI Adoption