AWS Certified DevOps Engineer - Professional (DOP-C02) exam practice questions

Questions for the DOP-C02 were updated on Dec 01, 2025

Page 1 out of 22. Viewing questions 1-15 out of 329

Question 1

A company is implementing a standardized security baseline across its AWS accounts. The accounts
are in an organization in AWS Organizations. The company must deploy consistent IAM roles and
policies across all existing and future accounts in the organization. Which solution will meet these
requirements with the MOST operational efficiency?

  • A. Enable AWS Control Tower in the management account. Configure AWS Control Tower Account Factory customization to deploy the required IAM roles and policies to all accounts.
  • B. Activate trusted access for AWS CloudFormation StackSets in Organizations. In the management account, create a stack set that has service-managed permissions to deploy the required IAM roles and policies to all accounts. Enable automatic deployment for the stack set.
  • C. In each member account, create IAM roles that have permissions to create and manage resources. In the management account, create an AWS CloudFormation stack set that has self-managed permissions to deploy the required IAM roles and policies to all accounts. Enable automatic deployment for the stack set.
  • D. In the management account, create an AWS CodePipeline pipeline. Configure the pipeline to use AWS CloudFormation to automate the deployment of the required IAM roles and policies. Set up cross-account IAM roles to allow CodePipeline to deploy resources in the member accounts.
Answer:

B
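Option B can be sketched with the CloudFormation StackSets API. This is a minimal illustration assuming boto3-style parameter names; the stack set name and template body are placeholders, and deployment targets (OUs) would be supplied separately via `create_stack_instances`:

```python
# Hypothetical sketch of the stack set that option B describes. The name and
# template body are placeholders for the real security-baseline template.
def build_stack_set_request(name, template_body):
    """Parameters for cloudformation.create_stack_set with service-managed
    permissions and automatic deployment to accounts added later."""
    return {
        "StackSetName": name,
        "TemplateBody": template_body,
        "Capabilities": ["CAPABILITY_NAMED_IAM"],  # template creates named IAM roles
        "PermissionModel": "SERVICE_MANAGED",      # uses Organizations trusted access
        "AutoDeployment": {
            "Enabled": True,                       # deploy to future member accounts
            "RetainStacksOnAccountRemoval": False,
        },
    }


request = build_stack_set_request("security-baseline-iam", "{}")
```

The `SERVICE_MANAGED` permission model is what removes the need for the per-account execution roles that option C sets up by hand.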


Question 2

A company configured an Amazon S3 event source for an AWS Lambda function. The company needs
the Lambda function to run when a new object is created or an existing object is modified in a
specific S3 bucket. The Lambda function will use the S3 bucket name and the S3 object key of the
incoming event to read the contents of the new or modified S3 object. The Lambda function will
parse the contents and save the parsed contents to an Amazon DynamoDB table.
The Lambda function's execution role has permissions to read from the S3 bucket and to write to the
DynamoDB table. During testing, a DevOps engineer discovers that the Lambda function does not run
when objects are added to the S3 bucket or when existing objects are modified.
Which solution will resolve these problems?

  • A. Create an S3 bucket policy for the S3 bucket that grants the S3 bucket permission to invoke the Lambda function.
  • B. Create a resource policy for the Lambda function to grant Amazon S3 permission to invoke the Lambda function on the S3 bucket.
  • C. Configure an Amazon Simple Queue Service (Amazon SQS) queue as an OnFailure destination for the Lambda function. Update the Lambda function to process messages from the SQS queue and the S3 event notifications.
  • D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the destination for the S3 bucket event notifications. Update the Lambda function's execution role to have permission to read from the SQS queue. Update the Lambda function to consume messages from the SQS queue.
Answer:

B
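The resource-based permission that option B describes — what Amazon S3 needs in order to invoke a Lambda function — can be sketched as a `lambda.add_permission` call. The function name, bucket ARN, and account ID below are placeholders:

```python
# Sketch of the Lambda resource-based policy statement that lets Amazon S3
# invoke the function. All identifiers are placeholders.
def build_s3_invoke_permission(function_name, bucket_arn, account_id):
    """Parameters for lambda.add_permission granting S3 invoke rights,
    scoped to one bucket in one account (confused-deputy protection)."""
    return {
        "FunctionName": function_name,
        "StatementId": "AllowS3Invoke",
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",  # the S3 service principal
        "SourceArn": bucket_arn,          # only this bucket may invoke
        "SourceAccount": account_id,      # only buckets owned by this account
    }


perm = build_s3_invoke_permission(
    "parse-objects", "arn:aws:s3:::example-bucket", "123456789012")
```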


Question 3

A company uses Amazon API Gateway and AWS Lambda functions to implement an API. The
company uses a pipeline in AWS CodePipeline to build and deploy the API. The pipeline contains a
source stage, build stage, and deployment stage.
The company deploys the API without performing smoke tests. Soon after the deployment, the
company observes multiple issues with the API. A security audit finds security vulnerabilities in the
production code.
The company wants to prevent these issues from happening in the future.
Which combination of steps will meet this requirement? (Select TWO.)

  • A. Create a smoke test script that returns an error code if the API code fails the test. Add an action in the deployment stage to run the smoke test script after deployment. Configure the deployment stage for automatic rollback.
  • B. Create a smoke test script that returns an error code if the API code fails the test. Add an action in the deployment stage to run the smoke test script after deployment. Configure the deployment stage to fail if the smoke test script returns an error code.
  • C. Add an action in the build stage that uses Amazon Inspector to scan the Lambda function code after the code is built. Configure the build stage to fail if the scan returns any security findings.
  • D. Add an action in the build stage to run an Amazon CodeGuru code scan after the code is built. Configure the build stage to fail if the scan returns any security findings.
  • E. Add an action in the deployment stage to run an Amazon CodeGuru code scan after deployment. Configure the deployment stage to fail if the scan returns any security findings.
Answer:

B, D
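A post-deployment smoke test of the kind option B describes can be as small as a script that probes the API and exits nonzero on failure; CodePipeline treats a nonzero exit code as a failed action. The endpoint URL below is a placeholder:

```python
# Minimal smoke-test sketch: probe a health endpoint, exit nonzero on failure.
import sys
import urllib.request


def smoke_test(url, timeout=5):
    """Return 0 if the endpoint answers with HTTP 200, 1 otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if resp.status == 200 else 1
    except OSError:
        # Connection refused, DNS failure, timeout, etc. all count as failure.
        return 1


if __name__ == "__main__":
    # A nonzero exit fails the pipeline action and therefore the stage.
    sys.exit(smoke_test("https://api.example.com/health"))
```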


Question 4

A company has a web application that publishes logs that contain metadata for transactions, with a
status of success or failure for each log. The logs are in JSON format. The application publishes the
logs to an Amazon CloudWatch Logs log group.
The company wants to create a dashboard that displays the number of successful transactions.
Which solution will meet this requirement with the LEAST operational overhead?

  • A. Create an Amazon OpenSearch Service cluster and an OpenSearch Service subscription filter to send the log group data to the cluster. Create a dashboard within the Dashboards feature in the OpenSearch Service cluster by using a search query for transactions that have a status of success.
  • B. Create a CloudWatch subscription filter for the log group that uses an AWS Lambda function. Configure the Lambda function to parse the JSON logs and publish a custom metric to CloudWatch for transactions that have a status of success. Create a CloudWatch dashboard by using a metric graph that displays the custom metric.
  • C. Create a CloudWatch metric filter for the log groups with a filter pattern that matches the transaction status property and a value of success. Create a CloudWatch dashboard by using a metric graph that displays the new metric.
  • D. Create an Amazon Kinesis data stream that is subscribed to the log group. Configure the data stream to filter incoming log data based on a status of success and to send the filtered logs to an AWS Lambda function. Configure the Lambda function to publish a custom metric to CloudWatch. Create a CloudWatch dashboard by using a metric graph that displays the custom metric.
Answer:

C
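The metric filter in option C can be sketched as `put_metric_filter` parameters. The log group, namespace, and metric names are placeholders; the filter pattern uses CloudWatch Logs JSON pattern syntax:

```python
# Sketch of a CloudWatch Logs metric filter that counts JSON log events
# whose "status" field equals "success". Names are placeholders.
def build_metric_filter(log_group):
    """Parameters for logs.put_metric_filter."""
    return {
        "logGroupName": log_group,
        "filterName": "successful-transactions",
        # JSON pattern syntax: match events where $.status == "success"
        "filterPattern": '{ $.status = "success" }',
        "metricTransformations": [{
            "metricName": "SuccessfulTransactions",
            "metricNamespace": "WebApp",
            "metricValue": "1",   # emit 1 per matching log event
            "defaultValue": 0.0,  # report 0 when nothing matches
        }],
    }


mf = build_metric_filter("/app/transactions")
```

The resulting metric can be graphed directly on a CloudWatch dashboard, with no Lambda, OpenSearch, or Kinesis in the path.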


Question 5

A company needs to update its order processing application to improve resilience and availability.
The application requires a stateful database and uses a single-node Amazon RDS DB instance to store
customer orders and transaction history. A DevOps engineer must make the database highly
available.
Which solution will meet this requirement?

  • A. Migrate the database to Amazon DynamoDB global tables. Configure automatic failover between AWS Regions by using Amazon Route 53 health checks.
  • B. Migrate the database to Amazon EC2 instances in multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to connect all the instances to a single EBS volume.
  • C. Use the RDS DB instance as the source instance to create read replicas in multiple Availability Zones. Deploy an Application Load Balancer to distribute read traffic across the read replicas.
  • D. Modify the RDS DB instance to be a Multi-AZ deployment. Verify automatic failover to the standby instance if the primary instance becomes unavailable.
Answer:

D
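Option D is a single API change. A sketch of the `modify_db_instance` parameters, with a placeholder instance identifier:

```python
# Sketch: convert a single-node RDS instance to a Multi-AZ deployment.
def build_multi_az_request(db_instance_id, apply_immediately=False):
    """Parameters for rds.modify_db_instance."""
    return {
        "DBInstanceIdentifier": db_instance_id,
        "MultiAZ": True,  # provision a synchronous standby in another AZ
        # Defer to the next maintenance window by default, since enabling
        # Multi-AZ can cause a brief I/O suspension.
        "ApplyImmediately": apply_immediately,
    }


req = build_multi_az_request("orders-db")
```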


Question 6

A DevOps engineer is creating a CI/CD pipeline to build container images. The engineer needs to
store container images in Amazon Elastic Container Registry (Amazon ECR) and scan the images for
common vulnerabilities. The CI/CD pipeline must be resilient to outages in upstream source
container image repositories.
Which solution will meet these requirements?

  • A. Create an ECR private repository in the private registry to store the container images and scan images when images are pushed to the repository. Configure a replication rule in the private registry to replicate images from upstream repositories.
  • B. Create an ECR public repository in the public registry to cache images from upstream source repositories. Create an ECR private repository to store images. Configure the private repository to scan images when images are pushed to the repository.
  • C. Create an ECR public repository in the public registry. Configure a pull through cache rule for the repository. Create an ECR private repository to store images. Configure the ECR private registry to perform basic scanning.
  • D. Create an ECR private repository in the private registry to store the container images. Enable basic scanning for the private registry, and create a pull through cache rule.
Answer:

D


Question 7

A company uses a pipeline in AWS CodePipeline to deploy an application. The company created an
AWS Fault Injection Service (AWS FIS) experiment template to test the resiliency of the application. A
DevOps engineer needs to integrate the experiment into the pipeline.
Which solution will meet this requirement?

  • A. Configure a new stage in the pipeline that includes an AWS FIS action. Configure the action to reference the AWS FIS experiment template. Grant the pipeline access to start the experiment.
  • B. Create an Amazon EventBridge scheduler. Grant the scheduler permission to start the AWS FIS experiment. Configure a new stage in the pipeline that includes an action to invoke the EventBridge scheduler.
  • C. Create an AWS Lambda function to start the AWS FIS experiment. Grant the Lambda function permission to start the experiment. Create a new stage in the pipeline that has a Lambda action. Set the action to invoke the Lambda function.
  • D. Export the AWS FIS experiment template to an Amazon S3 bucket. Create an AWS CodeBuild unit test project that has a buildspec that starts the AWS FIS experiment. Grant the CodeBuild project access to start the experiment. Configure a new stage in the pipeline that includes an action to run the CodeBuild unit test project.
Answer:

C
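The Lambda action from option C only needs to start the experiment. A sketch of the handler, assuming the template ID is passed via an environment variable; the client is injected so the code can be exercised without AWS credentials:

```python
# Sketch of a Lambda handler that starts an AWS FIS experiment.
# FIS_TEMPLATE_ID and the fallback value are hypothetical.
import os
import uuid


def build_start_experiment_params(template_id):
    """Parameters for fis.start_experiment; clientToken makes retries idempotent."""
    return {
        "experimentTemplateId": template_id,
        "clientToken": str(uuid.uuid4()),
    }


def handler(event, context, fis_client=None):
    params = build_start_experiment_params(
        os.environ.get("FIS_TEMPLATE_ID", "EXT-placeholder"))
    if fis_client is not None:
        return fis_client.start_experiment(**params)
    return params  # returned as-is when no client is supplied (e.g. in tests)
```

The execution role needs `fis:StartExperiment` on the template, which is the "grant the Lambda function permission" step in the option.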


Question 8

A company runs hundreds of EC2 instances with new instances launched/terminated hourly. Security
requires all running instances to have an instance profile attached. A default profile exists and must
be attached automatically to any instance missing one.
Which solution meets this requirement?

  • A. EventBridge rule for RunInstances API calls, invoke Lambda to attach default profile.
  • B. AWS Config with ec2-instance-profile-attached managed rule, automatic remediation using Systems Manager Automation runbook to attach profile.
  • C. EventBridge rule for StartInstances API calls, invoke Systems Manager Automation runbook to attach profile.
  • D. AWS Config iam-role-managed-policy-check managed rule, automatic remediation with Lambda to attach profile.
Answer:

B

Explanation:
AWS Config’s ec2-instance-profile-attached managed rule checks for attached instance profiles.
Config supports automatic remediation via Systems Manager Automation runbooks.
This provides continuous compliance with minimal operational overhead.
EventBridge and Lambda (A) require custom coding and risk missing existing instances.
StartInstances (C) does not cover RunInstances and new instances.
IAM-role managed policy check (D) does not check instance profile attachments.
Reference:
AWS Config Managed Rules
Config Automatic Remediation
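The rule-plus-runbook pairing in option B can be sketched as `put_remediation_configurations` parameters. The runbook name and role ARN are placeholders (a custom Automation runbook that calls `AssociateIamInstanceProfile` would typically sit behind `TargetId`):

```python
# Sketch of an AWS Config automatic remediation configuration for the
# ec2-instance-profile-attached managed rule. Runbook and role are placeholders.
def build_remediation(config_rule_name, runbook, role_arn):
    """Parameters for config.put_remediation_configurations."""
    return {"RemediationConfigurations": [{
        "ConfigRuleName": config_rule_name,
        "TargetType": "SSM_DOCUMENT",
        "TargetId": runbook,              # Automation runbook that attaches the profile
        "Automatic": True,                # remediate without manual approval
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "AutomationAssumeRole": {
                "StaticValue": {"Values": [role_arn]},
            },
        },
    }]}


rem = build_remediation(
    "ec2-instance-profile-attached",
    "AttachDefaultInstanceProfileRunbook",
    "arn:aws:iam::123456789012:role/remediation-role")
```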


Question 9

A DevOps engineer needs a resilient CI/CD pipeline that builds container images, stores them in ECR,
scans images for vulnerabilities, and is resilient to outages in upstream source image repositories.
Which solution meets this?

  • A. Create a private ECR repo, scan images on push, replicate images from upstream repos with a replication rule.
  • B. Create a public ECR repo to cache images from upstream repos, create a private repo to store images, scan images on push.
  • C. Create a public ECR repo, configure a pull-through cache rule, create a private repo to store images, enable basic scanning.
  • D. Create a private ECR repo, enable basic scanning, create a pull-through cache rule.
Answer:

D

Explanation:
ECR pull-through cache caches images from upstream repositories for resilience.
Private repo with basic scanning ensures vulnerability detection on pushed images.
Enabling pull-through caching on a private repo combines caching and vulnerability scanning
seamlessly.
Public repos do not support pull-through caching of upstream images.
Replication rules are for multi-Region replication, not upstream caching.
Reference:
Amazon ECR Pull Through Cache
Amazon ECR Image Scanning
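The pull-through cache rule from option D is a single private-registry API call. A sketch, with the ECR Public registry as the upstream and a placeholder prefix:

```python
# Sketch of an ECR pull-through cache rule: images pulled through the
# "ecr-public/..." prefix are cached locally in the private registry,
# insulating builds from upstream outages.
def build_pull_through_cache_rule(prefix, upstream_url):
    """Parameters for ecr.create_pull_through_cache_rule."""
    return {
        "ecrRepositoryPrefix": prefix,        # local repository namespace prefix
        "upstreamRegistryUrl": upstream_url,  # upstream registry to cache from
    }


rule = build_pull_through_cache_rule("ecr-public", "public.ecr.aws")
```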


Question 10

A company containerized its Java app and uses CodePipeline. They want to scan images in ECR for
vulnerabilities and reject images with critical vulnerabilities in a manual approval stage.
Which solution meets these?

  • A. Basic scanning with EventBridge for Inspector findings and Lambda to reject manual approval if critical vulnerabilities found.
  • B. Enhanced scanning, Lambda invokes Inspector for SBOM, exports to S3, Athena queries SBOM, rejects manual approval on critical findings.
  • C. Enhanced scanning, EventBridge listens to Detective scan findings, Lambda rejects manual approval on critical vulnerabilities.
  • D. Enhanced scanning, EventBridge listens to Inspector scan findings, Lambda rejects manual approval on critical vulnerabilities.
Answer:

D

Explanation:
Amazon ECR enhanced scanning uses Amazon Inspector for vulnerability detection.
EventBridge can capture Inspector scan findings.
Lambda can process scan findings and reject manual approval if critical vulnerabilities exist.
Options A and C use incorrect or less integrated services (basic scanning or Detective).
Option B adds unnecessary complexity with SBOM and Athena.
Reference:
Amazon ECR Image Scanning
Integrating ECR Scanning with CodePipeline
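The EventBridge side of option D can be sketched as an event pattern. The `detail` fields are an assumption based on the shape of Inspector2's "Inspector2 Finding" events:

```python
# Sketch of an EventBridge event pattern matching critical-severity
# Amazon Inspector findings (the detail field names are assumptions).
import json


def build_event_pattern():
    return {
        "source": ["aws.inspector2"],           # enhanced scanning uses Inspector
        "detail-type": ["Inspector2 Finding"],
        "detail": {"severity": ["CRITICAL"]},   # only critical vulnerabilities
    }


pattern_json = json.dumps(build_event_pattern())
```

The rule would target the Lambda function, which then calls `codepipeline.put_approval_result` to reject the pending manual approval.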


Question 11

A company manages shared libraries across development and production accounts with IAM roles
and CodePipeline/CDK. Developers must be the only ones to access latest versions. Shared packages
must be independently tested before production.
Which solution meets these requirements?

  • A. Single CodeArtifact repository in central account with IAM policies allowing only developers access. Use EventBridge to start CodeBuild testing projects before copying packages to production repo.
  • B. Separate CodeArtifact repositories in dev and prod accounts. Dev repo has repository policy allowing only developers access. EventBridge triggers pipeline to test packages before copying to prod repo.
  • C. Single S3 bucket with versioning in central account, IAM policies restricting developers. Use EventBridge to trigger CodeBuild tests before copying to production.
  • D. Separate S3 buckets with versioning in dev and prod accounts, dev bucket policy restricting developers. EventBridge triggers pipeline to test packages before copying to prod and revert if tests fail.
Answer:

B

Explanation:
Having separate CodeArtifact repositories in dev and prod accounts provides clear isolation and
control.
Repository policies can restrict dev repo access to developers.
EventBridge triggers pipelines to test and promote packages only if tests pass, ensuring safe
deployment to production.
Using S3 (C and D) is not ideal for package management.
A single repo (A) complicates access and version control across accounts.
Reference:
CodeArtifact Repository Policies
Cross-Account Package Promotion
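The development-repository policy from option B can be sketched as a document for `codeartifact.put_repository_permissions_policy`. The role ARN and the exact action list are illustrative, not a complete permission set:

```python
# Sketch of a CodeArtifact repository policy restricting read access to a
# developers role. The principal ARN is a placeholder.
def build_repo_policy(developer_role_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DevelopersOnly",
            "Effect": "Allow",
            "Principal": {"AWS": developer_role_arn},
            "Action": [
                "codeartifact:ReadFromRepository",
                "codeartifact:DescribePackageVersion",
                "codeartifact:GetPackageVersionReadme",
            ],
            "Resource": "*",  # repository policies apply to the repo they are attached to
        }],
    }


policy = build_repo_policy("arn:aws:iam::111111111111:role/developers")
```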


Question 12

A company’s web app runs on EC2 Linux instances and needs to monitor custom metrics for API
response and DB query latency across instances with least overhead.
Which solution meets this?

  • A. Install CloudWatch agent on instances, configure it to collect custom metrics, and instrument app to send metrics to agent.
  • B. Use Amazon Managed Service for Prometheus to scrape metrics, use CloudWatch agent to forward metrics to CloudWatch.
  • C. Create Lambda to poll app endpoints and DB, calculate metrics, send to CloudWatch via PutMetricData.
  • D. Implement custom logging in app; use CloudWatch Logs Insights to extract and analyze metrics.
Answer:

A

Explanation:
Installing the CloudWatch agent and instrumenting the application to push custom metrics to the
agent is the easiest and lowest overhead method.
Prometheus (B) adds operational complexity.
Lambda polling (C) introduces unnecessary complexity and latency.
Using Logs Insights (D) requires extracting metrics from logs, which is less efficient.
Reference:
Custom Metrics with CloudWatch Agent
CloudWatch Agent User Guide
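One common way to realize option A is the agent's built-in StatsD listener: the app emits latency timings to a local port and the agent forwards them as custom metrics. A sketch of the relevant `amazon-cloudwatch-agent.json` fragment; the namespace and interval values are conventional choices, not requirements:

```python
# Sketch of a CloudWatch agent config fragment enabling a local StatsD
# listener for app-emitted custom metrics (e.g. API and DB query latency).
import json

agent_config = {
    "metrics": {
        "namespace": "WebApp",
        "metrics_collected": {
            "statsd": {
                "service_address": ":8125",        # local StatsD port
                "metrics_collection_interval": 10,  # seconds between collections
                "metrics_aggregation_interval": 60, # seconds per aggregated datapoint
            }
        },
    }
}

config_json = json.dumps(agent_config, indent=2)
```

The application then sends timings such as `api.response_ms:42|ms` to `localhost:8125` instead of calling `PutMetricData` itself.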


Question 13

A company’s web app publishes JSON logs with transaction status to CloudWatch Logs. The company
wants a dashboard showing the number of successful transactions with the least operational
overhead.
Which solution meets this?

  • A. Create an OpenSearch cluster and subscription filter to send logs; create OpenSearch dashboard with queries for success.
  • B. Create a CloudWatch subscription filter with Lambda to parse logs and publish custom metrics; create CloudWatch dashboard with metric graph.
  • C. Create a CloudWatch metric filter on the log group with a pattern matching success; create CloudWatch dashboard with metric graph.
  • D. Create a Kinesis data stream subscribed to the log group; filter logs by success; send to Lambda; Lambda publishes custom metrics; dashboard uses metric graph.
Answer:

C

Explanation:
CloudWatch metric filters can parse logs directly to create metrics without additional infrastructure.
Metric filters combined with CloudWatch dashboards provide the simplest and most operationally
efficient solution.
Options A, B, and D add complexity with additional services (OpenSearch, Lambda, Kinesis).
Reference:
CloudWatch Logs Metric Filters
CloudWatch Dashboards


Question 14

A company uses Amazon RDS for Microsoft SQL Server as its primary database. They need high
availability within and across AWS Regions, with an RPO <1 min and RTO <10 min. Route 53 CNAME
is used for the DB endpoint and must redirect to standby during failover.
Which solution meets these requirements?

  • A. Deploy an Amazon RDS for SQL Server Multi-AZ DB cluster with cross-Region read replicas. Use automation to promote replica and update Route 53.
  • B. Deploy RDS Multi-AZ with snapshots copied every 5 minutes; use Lambda to restore snapshot and update Route 53 on failover.
  • C. Deploy Single-AZ RDS and use AWS DMS to continuously replicate to another Region. Use CloudWatch alarms for failover notification.
  • D. Deploy Single-AZ RDS and use AWS Backup for cross-Region backups every 30 seconds. Use automation to restore and update Route 53 during failover.
Answer:

A

Explanation:
A Multi-AZ deployment provides synchronous replication and automatic failover within a Region, while
cross-Region read replicas replicate asynchronously with typically sub-minute lag, which together
meet the RPO < 1 min and RTO < 10 min targets.
Automation can promote the replica and update the Route 53 CNAME to redirect traffic during failover.
Options B, C, and D rely on snapshots, backups, or DMS, which cannot guarantee such a low RPO/RTO
and require manual intervention or complex automation.
Reference:
Amazon RDS Multi-AZ Deployments
Amazon RDS Cross-Region Read Replicas
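The Route 53 step of the failover automation can be sketched as a change batch for `change_resource_record_sets`. The zone, record name, and replica endpoint are placeholders:

```python
# Sketch: repoint the database CNAME at the promoted replica's endpoint
# during failover. Hostnames are placeholders.
def build_failover_change(record_name, new_target, ttl=60):
    """ChangeBatch for route53.change_resource_record_sets."""
    return {
        "Comment": "Repoint DB CNAME to promoted replica",
        "Changes": [{
            "Action": "UPSERT",  # create the record or overwrite the old target
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,  # short TTL so clients re-resolve quickly after failover
                "ResourceRecords": [{"Value": new_target}],
            },
        }],
    }


change = build_failover_change(
    "db.example.com", "replica.abc123.us-west-2.rds.amazonaws.com")
```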


Question 15

A company runs an application on Amazon EKS. The company needs comprehensive logging for
control plane and nodes, analyze API requests, and monitor container performance with minimal
operational overhead.
Which solution meets these requirements?

  • A. Enable CloudTrail for control plane logging; deploy Logstash as a ReplicaSet on nodes; use OpenSearch to store and analyze logs.
  • B. Enable control plane logging for EKS and send logs to CloudWatch; use CloudWatch Container Insights for node and container logs; use CloudWatch Logs Insights to query logs.
  • C. Enable API server control plane logging and send to S3; deploy Kubernetes Event Exporter on nodes; send logs to S3; use Athena and QuickSight for analysis.
  • D. Use AWS Distro for OpenTelemetry; stream logs to Firehose; analyze data in Redshift.
Answer:

B

Explanation:
EKS supports control plane logging sent directly to CloudWatch Logs.
CloudWatch Container Insights provides metrics and logs collection for nodes and containers
automatically with minimal setup.
CloudWatch Logs Insights offers powerful querying and visualization integrated in AWS without
managing additional infrastructure.
Options A and C require deploying and managing additional components (Logstash, Event Exporter,
S3, Athena).
Option D adds complexity with OpenTelemetry and Redshift, which is more operationally heavy.
Reference:
Amazon EKS Control Plane Logging
CloudWatch Container Insights for EKS
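Enabling the five EKS control plane log types for CloudWatch delivery is one `update_cluster_config` call. A sketch, with a placeholder cluster name:

```python
# Sketch: enable all EKS control plane log types (delivered to CloudWatch Logs).
CONTROL_PLANE_LOG_TYPES = [
    "api", "audit", "authenticator", "controllerManager", "scheduler",
]


def build_logging_update(cluster_name):
    """Parameters for eks.update_cluster_config."""
    return {
        "name": cluster_name,
        "logging": {"clusterLogging": [{
            "types": CONTROL_PLANE_LOG_TYPES,
            "enabled": True,  # ship these log types to CloudWatch Logs
        }]},
    }


update = build_logging_update("prod-cluster")
```

The `audit` log type is what captures the API requests the company wants to analyze with CloudWatch Logs Insights.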
