ISACA AAISM Exam Questions

Questions for the AAISM were updated on: Dec 01, 2025

Page 1 of 6. Viewing questions 1-15 of 90

Question 1

Which of the following AI-driven systems should have the MOST stringent recovery time objective
(RTO)?

  • A. Health support system
  • B. Credit risk modeling system
  • C. Car navigation system
  • D. Industrial control system
Answer: D


Explanation:
AAISM risk guidance notes that the most stringent recovery objectives apply to industrial control
systems, as downtime can directly disrupt critical infrastructure, manufacturing, or safety operations.
Health support systems also require high availability, but industrial control often underpins safety-
critical and real-time environments where delays can result in catastrophic outcomes. Credit risk
models and navigation systems are important but less critical in terms of immediate physical and
operational impact. Thus, industrial control systems require the tightest RTO.
Reference:
AAISM Study Guide – AI Risk Management (Business Continuity in AI)
ISACA AI Security Management – RTO Priorities for AI Systems


Question 2

An organization utilizes AI-enabled mapping software to plan routes for delivery drivers. A driver
following the AI route drives the wrong way down a one-way street, despite numerous signs. Which
of the following biases does this scenario demonstrate?

  • A. Selection
  • B. Reporting
  • C. Confirmation
  • D. Automation
Answer: D


Explanation:
AAISM defines automation bias as the tendency of individuals to over-rely on AI-generated outputs
even when contradictory real-world evidence is available. In this scenario, the driver ignores traffic
signs and follows the AI’s instructions, showing blind reliance on automation. Selection bias relates
to data sampling, reporting bias refers to misrepresentation of results, and confirmation bias involves
interpreting information to fit pre-existing beliefs. The most accurate description is automation bias.
Reference:
AAISM Exam Content Outline – AI Risk Management (Bias Types in AI)
AI Security Management Study Guide – Automation Bias in AI Use


Question 3

To ensure AI tools do not jeopardize ethical principles, it is MOST important to validate that:

  • A. The organization has implemented a responsible development policy
  • B. Outputs of AI tools do not perpetuate adverse biases
  • C. Stakeholders have approved alignment with company values
  • D. AI tools are evaluated by the privacy department before implementation
Answer: B


Explanation:
AAISM highlights that the core ethical risk in AI is the perpetuation of bias that results in unfair or
discriminatory outcomes. Therefore, the most important validation step is ensuring that outputs of
AI systems are free from adverse biases. A responsible development policy, stakeholder approvals,
and privacy reviews all contribute to governance, but they do not directly ensure ethical outcomes.
Validation of output fairness is the critical safeguard for ensuring AI does not violate ethical
principles.
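As one concrete way to operationalize this validation step, the sketch below checks AI outputs for adverse impact across two groups using the common "four-fifths rule." The group data, group names, and the 0.8 threshold are illustrative assumptions, not AAISM-mandated values.

```python
# Minimal adverse-impact check on AI outputs, assuming binary
# "favorable outcome" labels (1 = favorable) grouped by a protected
# attribute. Threshold of 0.8 reflects the common four-fifths rule.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def passes_four_fifths(group_a, group_b, threshold=0.8):
    """True if the disparity between groups stays within the threshold."""
    return adverse_impact_ratio(group_a, group_b) >= threshold

# Hypothetical validation data from a model's decisions
group_x = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_y = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375
print(passes_four_fifths(group_x, group_y))  # False -> flags potential adverse bias
```

A check like this would run on model outputs during validation, with any failure triggering deeper bias investigation before release.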
Reference:
AAISM Study Guide – AI Risk Management (Bias and Ethics Validation)
ISACA AI Security Management – Ethical AI Practices


Question 4

Which of the following is the BEST reason to immediately disable an AI system?

  • A. Excessive model drift
  • B. Slow model performance
  • C. Overly detailed model outputs
  • D. Insufficient model training
Answer: A


Explanation:
According to AAISM lifecycle management guidance, the best justification for disabling an AI system
immediately is the detection of excessive model drift. Drift results in outputs that are no longer
reliable, accurate, or aligned with intended purpose, creating significant risks. Performance slowness
and overly detailed outputs are operational inefficiencies but not critical shutdown triggers.
Insufficient training should be addressed before deployment rather than after. The trigger for
immediate deactivation in production is excessive drift compromising reliability.
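One common way to quantify "excessive" drift is the population stability index (PSI) between training-time and live score distributions. The sketch below is illustrative only; the bucket edges and the 0.25 "excessive drift" threshold are widely used rules of thumb, not values prescribed by AAISM.

```python
# Illustrative drift check that could justify disabling a model:
# population stability index (PSI) over bucketed prediction scores.
import math

def psi(expected, actual,
        buckets=((0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.01))):
    """PSI between a baseline and a live score distribution."""
    def share(vals, lo, hi):
        n = sum(1 for v in vals if lo <= v < hi)
        return max(n / len(vals), 1e-6)  # floor avoids log(0)
    total = 0.0
    for lo, hi in buckets:
        e, a = share(expected, lo, hi), share(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

def should_disable(expected, actual, threshold=0.25):
    """True when drift exceeds the assumed 'excessive' threshold."""
    return psi(expected, actual) > threshold

baseline = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]   # training-time scores
live     = [0.8, 0.85, 0.9, 0.95, 0.9, 0.88, 0.92, 0.97]  # shifted production scores
print(should_disable(baseline, live))  # True -> candidate for immediate deactivation
```

In practice the breach of such a threshold would feed an incident process rather than an automatic kill switch, but the metric makes "excessive drift" measurable.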
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Model Drift
Management)
AI Security Management Study Guide – Disabling AI Systems


Question 5

Which of the following is a key risk indicator (KRI) for an AI system used for threat detection?

  • A. Number of training epochs
  • B. Training time of the model
  • C. Number of layers in the neural network
  • D. Number of system overrides by cyber analysts
Answer: D


Explanation:
AAISM materials emphasize that in operational AI systems, key risk indicators (KRIs) must reflect
risks to performance and reliability rather than technical design factors alone. In the case of threat
detection, the most relevant KRI is the frequency of system overrides by human analysts, as this
indicates a lack of trust, frequent false positives, or poor detection accuracy. Training epochs, model
depth, and training time are technical metrics but do not directly measure operational risk. Analyst
overrides represent a practical measure of system effectiveness and risk.
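A minimal sketch of how such a KRI might be computed, assuming each detection record carries a boolean `overridden` flag and that the 15% alert threshold is organization-defined. Both the record shape and the threshold are assumptions for illustration.

```python
# Override-rate KRI for an AI threat-detection system: the fraction of
# AI verdicts that human analysts reversed in a monitoring window.

def override_rate(records):
    """records: iterable of dicts with a boolean 'overridden' field."""
    records = list(records)
    return sum(1 for r in records if r["overridden"]) / len(records)

def kri_breached(records, threshold=0.15):
    """True when the override rate exceeds the assumed alert threshold."""
    return override_rate(records) > threshold

window = [
    {"alert_id": 1, "overridden": False},
    {"alert_id": 2, "overridden": True},
    {"alert_id": 3, "overridden": True},
    {"alert_id": 4, "overridden": False},
    {"alert_id": 5, "overridden": True},
]
print(f"override rate: {override_rate(window):.0%}")  # 60%
print(kri_breached(window))  # True -> escalate for model review
```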
Reference:
AAISM Study Guide – AI Risk Management (Operational KRIs for AI Systems)
ISACA AI Security Management – Monitoring AI Effectiveness


Question 6

The PRIMARY benefit of implementing moderation controls in generative AI applications is that it
can:

  • A. Increase the model’s ability to generate diverse and creative content
  • B. Optimize the model’s response time
  • C. Ensure the generated content adheres to privacy regulations
  • D. Filter out harmful or inappropriate content
Answer: D


Explanation:
AAISM materials identify the primary benefit of moderation controls in generative AI systems as
their ability to filter out harmful, offensive, or inappropriate content before it is delivered to users.
This safeguards organizational reputation, compliance, and user trust. While moderation may
indirectly support compliance with privacy requirements, its main function is ensuring that outputs
align with ethical and safety standards. Moderation does not enhance creativity or response speed.
Its primary value is in controlling the quality of generated outputs by blocking harmful content.
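A toy sketch of such an output-moderation gate follows. The blocklist and refusal string are placeholder assumptions; production systems would typically use trained safety classifiers rather than keyword matching.

```python
# Minimal output-moderation gate for a generative model: harmful content
# is replaced with a refusal before anything reaches the user.

BLOCKED_TERMS = {"make a weapon", "credit card dump"}  # illustrative only

def moderate(generated_text):
    """Return the text if it passes, or a refusal if a blocked term appears."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[content withheld by moderation policy]"
    return generated_text

print(moderate("Here is a recipe for banana bread."))  # passes unchanged
print(moderate("Step one: make a weapon from..."))     # withheld
```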
Reference:
AAISM Study Guide – AI Technologies and Controls (Moderation and Output Controls)
ISACA AI Security Management – Harmful Content Mitigation in Generative AI


Question 7

Which of the following controls BEST mitigates the inherent limitations of generative AI models?

  • A. Ensuring human oversight
  • B. Adopting AI-specific regulations
  • C. Classifying and labeling AI systems
  • D. Reverse engineering the models
Answer: A


Explanation:
The AAISM governance framework emphasizes that the inherent limitations of generative AI, including hallucinations, bias, and unpredictability, are best mitigated by human oversight. Human-
in-the-loop review ensures that outputs are validated before being used in sensitive or high-risk
contexts. Regulatory adoption, system classification, and reverse engineering all play supporting
roles but do not directly safeguard against the model’s inherent unpredictability. Governance best
practices highlight human oversight as the critical safeguard.
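A hedged sketch of how human-in-the-loop gating might be wired: outputs below a confidence score, or destined for a high-risk context, are routed to a reviewer instead of being released. The field names and the 0.9 cutoff are illustrative assumptions, not a prescribed design.

```python
# Human-in-the-loop gate: generative outputs are released automatically
# only when confidence is high AND the use context is not high-risk.

def route_output(output, confidence, high_risk, cutoff=0.9):
    """Return 'release' or 'human_review' for a generated output."""
    if high_risk or confidence < cutoff:
        return "human_review"
    return "release"

print(route_output("summary of public FAQ", 0.97, high_risk=False))  # release
print(route_output("medical triage advice", 0.97, high_risk=True))   # human_review
print(route_output("contract clause draft", 0.62, high_risk=False))  # human_review
```

The design choice here is deliberately conservative: the risk flag overrides confidence, so no high-risk output bypasses review regardless of how certain the model appears.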
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Human Oversight and
Accountability)
AI Security Management Study Guide – Mitigating Generative AI Limitations


Question 8

An organization recently introduced a generative AI chatbot that can interact with users and answer
their queries. Which of the following would BEST mitigate hallucination risk identified by the risk
team?

  • A. Performing model testing and validation
  • B. Training the foundational model on large data sets
  • C. Ensuring model developers have been trained in AI risk
  • D. Fine-tuning the foundational model
Answer: D


Explanation:
AAISM highlights fine-tuning foundational models as one of the most effective strategies for reducing
hallucination risk. By tailoring the model with domain-specific, curated, and verified datasets,
organizations can reduce the frequency of irrelevant or fabricated outputs. Testing and validation
help evaluate risks but do not directly minimize hallucinations. Training on larger datasets may
improve generalization but does not guarantee accuracy. Developer training in AI risk supports
governance but is not a technical control against hallucinations. The best mitigation is fine-tuning to
align the chatbot with trusted, context-specific knowledge.
Reference:
AAISM Study Guide – AI Risk Management (Hallucination and Output Integrity Risks)
ISACA AI Security Management – Fine-tuning Generative Models


Question 9

Which of the following is the MOST important factor to consider when selecting industry frameworks
to align organizational AI governance with business objectives?

  • A. Risk tolerance
  • B. Risk threshold
  • C. Risk register
  • D. Risk appetite
Answer: D


Explanation:
According to AAISM governance principles, the risk appetite of the organization is the most
important factor in selecting appropriate frameworks for AI governance. Risk appetite defines the
level of risk an organization is willing to accept in pursuit of its objectives, ensuring frameworks are
aligned with strategic goals. Risk tolerance and thresholds are operational measures derived from
appetite, and the risk register is a documentation tool. The foundational consideration for framework
alignment is the organization’s risk appetite.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Risk Appetite in
Governance Alignment)
AI Security Management Study Guide – Framework Selection and Business Strategy


Question 10

From a risk perspective, which of the following is the MOST important step when implementing an
adoption strategy for AI systems?

  • A. Benchmarking against peer organizations’ AI risk strategies
  • B. Implementing a robust risk analysis methodology tailored to AI-specific tasks
  • C. Conducting an AI risk assessment and updating the enterprise risk register
  • D. Establishing a comprehensive AI risk assessment framework
Answer: C


Explanation:
AAISM guidance states that when adopting AI, the most important step is to conduct a risk
assessment and update the enterprise risk register. This ensures AI-specific risks are identified,
documented, and integrated into the organization’s existing governance structures. Benchmarking
peers provides context but does not address internal risk. Implementing methodologies and
frameworks are important, but they precede or follow the assessment process. The decisive step that
connects adoption to enterprise risk governance is updating the risk register with AI-specific risks.
Reference:
AAISM Study Guide – AI Risk Management (Integration with Enterprise Risk Management)
ISACA AI Security Management – Risk Assessment and Register Updates


Question 11

Which of the following factors is MOST important for preserving user confidence and trust in
generative AI systems?

  • A. Bias minimization
  • B. Access controls and secure storage solutions
  • C. Transparent disclosure and informed consent
  • D. Data anonymization
Answer: C


Explanation:
AAISM risk guidance underscores that transparent disclosure and informed consent are the most
important factors in maintaining user trust in generative AI. Users must clearly understand how
outputs are created, what data sources are used, and how risks such as bias or misinformation are
managed. While bias minimization, access controls, and anonymization contribute to technical or
ethical robustness, they are not sufficient to preserve user trust. Trust requires openness and
consent, which align with governance expectations for transparency and accountability.
Reference:
AAISM Exam Content Outline – AI Risk Management (Transparency and Trust)
AI Security Management Study Guide – User Confidence in Generative AI


Question 12

Which area of intellectual property law presents the GREATEST challenge in determining copyright
protection for AI-generated content?

  • A. Enforcing trademark rights associated with AI systems
  • B. Determining the rightful ownership of AI-generated creations
  • C. Protecting trade secrets in AI technologies
  • D. Establishing licensing frameworks for AI-generated works
Answer: B


Explanation:
AAISM governance content highlights that the greatest intellectual property challenge in the context
of AI-generated works is determining rightful ownership. Traditional copyright law requires human
authorship, but AI-generated creations blur authorship and ownership boundaries, raising legal
uncertainty about who can claim rights. Trademark enforcement, trade secret protection, and
licensing frameworks are established areas of IP law but do not present the same fundamental
challenge as ownership attribution. For AI-generated content, the central legal dilemma is ownership
of the creation.
Reference:
AAISM Study Guide – AI Governance and Program Management (Intellectual Property and AI)
ISACA AI Security Management – Copyright and Ownership Challenges


Question 13

An AI research team is developing a natural language processing model that relies on several open-
source libraries. Which of the following is the team’s BEST course of action to ensure the integrity of
the software packages used?

  • A. Maintain a list of frequently used libraries to ensure consistent application in projects
  • B. Scan the packages and libraries for malware prior to installation
  • C. Use the latest version of all libraries from public repositories
  • D. Retrain the model regularly to handle package and library updates
Answer: B


Explanation:
AAISM’s technical control guidance emphasizes that when using open-source libraries, the best
safeguard for integrity is to scan the packages for malware before installation. This ensures that
compromised or malicious code does not enter the AI system environment. Maintaining lists aids
consistency but not security. Always using the latest versions may introduce unverified
vulnerabilities. Retraining models addresses functionality but not software integrity. Therefore, the
strongest protective measure is pre-installation malware scanning of open-source packages.
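Malware scanning itself is performed with dedicated tooling, but a scriptable complement is verifying each downloaded artifact against a pinned SHA-256 digest before installation, so that a tampered package is rejected even if it evades signature-based scanners. The file bytes and digest below are hypothetical placeholders.

```python
# Pre-installation integrity check: accept a downloaded package only if
# its SHA-256 digest matches the one recorded when the package was vetted.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of raw artifact bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_package(data: bytes, expected_digest: str) -> bool:
    """True only if the artifact matches the pinned digest exactly."""
    return sha256_of(data) == expected_digest

artifact = b"fake-package-bytes"           # stand-in for a downloaded wheel
pinned = sha256_of(b"fake-package-bytes")  # digest recorded at review time
print(verify_package(artifact, pinned))            # True -> safe to install
print(verify_package(b"tampered-bytes", pinned))   # False -> reject
```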
Reference:
AAISM Exam Content Outline – AI Technologies and Controls (Software Supply Chain Security)
AI Security Management Study Guide – Open-Source Package Risk Mitigation


Question 14

Which of the following is the MOST effective way to mitigate the risk of deepfake attacks?

  • A. Relying on human judgment for oversight
  • B. Limiting employee access to AI tools
  • C. Validating the provenance of the data source
  • D. Using a general-purpose large language model (LLM) to detect fraud
Answer: C


Explanation:
AAISM study content identifies validating the provenance of data sources as the most effective way
to counter deepfake risks. Provenance validation ensures that content is authentic, verifiable, and
traceable, preventing malicious synthetic media from being trusted as legitimate. Human oversight
helps but cannot reliably detect sophisticated fakes. Limiting tool access reduces exposure but does
not prevent external attacks. General-purpose LLMs are not optimized for fraud detection. The
strongest control is verifying the origin and authenticity of data before acceptance.
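As a minimal illustration of provenance checking, the sketch below accepts media only if its bytes verify against a keyed signature from a trusted origin. The key and payloads are placeholders, and real deployments would more likely use public-key standards such as C2PA content-credential manifests than a shared secret.

```python
# Provenance validation sketch: media is trusted only when its bytes
# carry a valid keyed signature (HMAC-SHA256) from the originating source.
import hashlib
import hmac

TRUSTED_KEY = b"shared-secret-with-trusted-source"  # placeholder key

def sign(media: bytes, key: bytes = TRUSTED_KEY) -> str:
    """Signature the trusted source would attach to its content."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def provenance_valid(media: bytes, signature: str,
                     key: bytes = TRUSTED_KEY) -> bool:
    """Constant-time check that the media matches the attached signature."""
    return hmac.compare_digest(sign(media, key), signature)

clip = b"original-video-bytes"
tag = sign(clip)                                   # issued by the source
print(provenance_valid(clip, tag))                 # True -> accept
print(provenance_valid(b"deepfaked-bytes", tag))   # False -> reject as untraceable
```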
Reference:
AAISM Study Guide – AI Risk Management (Deepfake and Content Integrity Risks)
ISACA AI Security Management – Provenance Validation as a Defense


Question 15

A PRIMARY objective of responsibly providing AI services is to:

  • A. Enable AI models to operate autonomously
  • B. Ensure the confidentiality and integrity of data processed by AI models
  • C. Build trust for decisions and predictions made by AI models
  • D. Improve the ability of AI models to learn from new data
Answer: C


Explanation:
AAISM emphasizes that the primary objective of responsible AI is to establish and maintain trust in
AI-driven decisions and predictions. Trust is achieved through transparency, accountability, fairness,
and governance. While confidentiality and integrity are critical technical objectives, they are not the
overarching purpose of responsible AI service provision. Autonomy and learning ability are features
of AI, but without trust, adoption and compliance falter. The correct answer is that responsible AI
services must focus on building trust in AI outcomes.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Responsible AI Principles)
AI Security Management Study Guide – Trust and Ethical AI Adoption
