Amazon AWS Certified Developer - Associate exam practice questions

Questions for the DVA-C02 were updated on Dec 01, 2025

Page 1 out of 25. Viewing questions 1-15 out of 368

Question 1

A healthcare company is developing a multi-tier web application to manage patient records that are
in an Amazon Aurora PostgreSQL database cluster. The company stores the application code in a Git
repository and deploys the code to Amazon EC2 instances.
The application must comply with security policies and follow the principle of least privilege. The
company must securely manage database credentials and API keys within the application code. The
company must have the ability to rotate encryption keys on demand.
Which solution will meet these requirements?

  • A. Store database credentials and API keys in AWS Secrets Manager. Use AWS managed AWS KMS keys. Set up automatic key rotation. Use the AWS SDK to retrieve secrets.
  • B. Store the database credentials and API keys in AWS Secrets Manager. Use customer managed AWS KMS keys. Set up automatic key rotation. Create a key policy in the application to retrieve secrets by using the AWS SDK.
  • C. Store the database credentials in the application code. Separate credentials by using environment-specific branches that have restricted access to the code repositories.
  • D. Store the database credentials and API keys as parameters in AWS Systems Manager Parameter Store. Encrypt the credentials and API keys with AWS managed AWS KMS keys. Use the AWS SDK to retrieve secrets.
Answer:

A


Explanation:
Requirement Summary:
Multi-tier app on EC2 + Aurora PostgreSQL
Must comply with least privilege and security policies
Need to manage credentials and API keys securely
Must support key rotation on demand
Evaluate Options:

A. Secrets Manager + AWS managed KMS keys (Correct)
Best practice for secure secret storage
Supports automatic rotation
Uses the AWS SDK to fetch secrets at runtime (secure, avoids hardcoding)
AWS managed keys are rotated automatically and are easier to manage

B. Secrets Manager + customer managed keys (Incorrect)
Also valid for secret storage, but adds key-management complexity, and a key policy belongs to the KMS key itself; it is not something you create "in the application"

C. Store secrets in code (Incorrect)
Violates all security best practices

D. SSM Parameter Store + AWS managed keys (Incorrect)
Possible, but Secrets Manager is preferred when rotation is needed
Parameter Store does not natively rotate secrets
Secrets Manager:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
Automatic key rotation:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
Best practices for secret management:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html
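A minimal sketch of the retrieval step in option A, assuming the secret is stored as a JSON string (the secret name and field names here are hypothetical):

```python
import json

def get_db_credentials(secrets_client, secret_id):
    """Fetch a secret from AWS Secrets Manager and parse its JSON body."""
    response = secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# In the application the client would come from boto3:
#   import boto3
#   creds = get_db_credentials(boto3.client("secretsmanager"),
#                              "prod/app/db-credentials")
```

Keeping the client as a parameter also makes the lookup easy to unit test without calling AWS.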


Question 2

A developer needs to use Amazon DynamoDB to store customer orders. The developer's company
requires all customer data to be encrypted at rest with a key that the company generates.
What should the developer do to meet these requirements?

  • A. Create the DynamoDB table with encryption set to None. Code the application to use the key to decrypt the data when the application reads from the table. Code the application to use the key to encrypt the data when the application writes to the table.
  • B. Store the key by using AWS KMS. Choose an AWS KMS customer managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.
  • C. Store the key by using AWS KMS. Create the DynamoDB table with default encryption. Include the kms:Encrypt parameter with the Amazon Resource Name (ARN) of the AWS KMS key when using the DynamoDB SDK.
  • D. Store the key by using AWS KMS. Choose an AWS KMS AWS managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.
Answer:

B


Explanation:
Requirement Summary:
Store customer orders in DynamoDB
Must encrypt data at rest
Company wants to use a key it generates (i.e., customer managed key)
Evaluate Options:

A. Set encryption to None, manually encrypt/decrypt in code (Incorrect)
High overhead and error-prone
Also non-compliant with AWS encryption best practices

B. Use a customer managed KMS key (Correct)
Exactly meets the requirement: the company generates and controls the key
During table creation, you can supply the ARN of a customer managed KMS key

C. Default encryption + kms:Encrypt in the SDK (Incorrect)
Misunderstanding: DynamoDB handles encryption at rest automatically
You do not call kms:Encrypt manually from the SDK

D. Use an AWS managed key (Incorrect)
Does not meet the requirement of using a company-generated key
DynamoDB encryption:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html
KMS customer managed keys:
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk
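A sketch of option B's table creation, showing where the customer managed key's ARN goes (the table and attribute names are hypothetical); the dict maps directly onto `create_table`:

```python
def build_encrypted_table_spec(table_name, kms_key_arn):
    """CreateTable parameters for a DynamoDB table encrypted at rest
    with a customer managed AWS KMS key."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "orderId", "AttributeType": "S"},
        ],
        "KeySchema": [{"AttributeName": "orderId", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",
        "SSESpecification": {
            "Enabled": True,
            "SSEType": "KMS",
            "KMSMasterKeyId": kms_key_arn,  # the customer managed key's ARN
        },
    }

# boto3.client("dynamodb").create_table(**build_encrypted_table_spec(
#     "CustomerOrders", "arn:aws:kms:us-east-1:111122223333:key/example"))
```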


Question 3

A company is concerned that a malicious user could deploy unauthorized changes to the code for an
AWS Lambda function. What can a developer do to ensure that only trusted code is deployed to
Lambda?

  • A. Turn on the trusted code option in AWS CodeDeploy. Add the CodeDeploy digital certificate to the Lambda package before deploying the package to Lambda.
  • B. Define the code signing configuration in the Lambda console. Use AWS Signer to digitally sign the Lambda package before deploying the package to Lambda.
  • C. Link Lambda to AWS KMS in the Lambda console. Use AWS KMS to digitally sign the Lambda package before deploying the package to Lambda.
  • D. Set the KmsKeyArn property of the Lambda function to the Amazon Resource Name (ARN) of a trusted key before deploying the package to Lambda.
Answer:

B


Explanation:
Requirement Summary:
Prevent unauthorized code changes in AWS Lambda
Ensure only trusted code is deployed

AWS Lambda supports Code Signing:
You can configure code signing in Lambda using AWS Signer
Packages must be digitally signed and verified against the signing profile
Rejects unauthorized/modified packages automatically
Evaluate Options:

A. Trusted code option in CodeDeploy (Incorrect)
No such feature exists for Lambda
CodeDeploy targets EC2, on-premises servers, and containers; it does not provide Lambda code signing

B. Define a code signing configuration + use AWS Signer (Correct)
This is exactly how AWS enforces trusted code deployment
Attach a code signing configuration to the Lambda function
Use AWS Signer to digitally sign deployment packages

C. Link to KMS to sign code (Incorrect)
KMS is not used to sign Lambda packages
KMS handles data encryption, not application code integrity

D. Set KmsKeyArn (Incorrect)
This configures data encryption, not code signing
Lambda code signing:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-codesigning.html
AWS Signer overview:
https://docs.aws.amazon.com/signer/latest/developerguide/what-is-signer.html
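A sketch of option B with boto3; the signing profile ARN is a placeholder, and the parameter shapes follow Lambda's CreateCodeSigningConfig API:

```python
def build_code_signing_config(signing_profile_version_arn):
    """Parameters for lambda.create_code_signing_config: only packages
    signed by the given AWS Signer profile may be deployed; anything
    else is rejected outright ("Enforce")."""
    return {
        "Description": "Trusted deployments only",
        "AllowedPublishers": {
            "SigningProfileVersionArns": [signing_profile_version_arn],
        },
        "CodeSigningPolicies": {
            "UntrustedArtifactOnDeployment": "Enforce",  # or "Warn"
        },
    }

# config = boto3.client("lambda").create_code_signing_config(
#     **build_code_signing_config(
#         "arn:aws:signer:us-east-1:111122223333:/signing-profiles/example"))
```

The resulting config's ARN is then attached to the function, after which unsigned or tampered packages fail to deploy.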


Question 4

A developer has a financial application. The application uses AWS Secrets Manager to manage an
Amazon RDS for PostgreSQL database's username and password. The developer needs to rotate the
password while maintaining the application's high availability. Which solution will meet these
requirements with LEAST development effort?

  • A. Rotate the secret by using the alternating-users rotation strategy. Update the application with an appropriate retry strategy to handle authentication failures.
  • B. Use the PostgreSQL client to create a new database username and password. Include the new secret values by performing an immediate rotation. Use the AWS CLI to update the RDS database password. Perform an immediate rotation of the Secrets Manager secrets.
  • C. Rotate the secret by using multivalue answer rotation. Update the application with an appropriate retry strategy to handle authentication failures.
  • D. Rotate the secret by using the single-user rotation strategy. Update the application with an appropriate retry strategy to handle authentication failures.
Answer:

D


Explanation:
Requirement Summary:
Secrets managed in AWS Secrets Manager
DB: Amazon RDS for PostgreSQL
Need automated password rotation
Must maintain high availability
Least development effort
Evaluate Options:

D. Single-user rotation strategy (Correct)
Simplest to implement: the secret contains the one set of credentials used by both the application and the rotation function
Supports automated rotation; AWS provides built-in Lambda rotation templates for RDS
A brief authentication failure is possible at the moment the password changes, which the application's retry strategy absorbs

A. Alternating-users strategy (Incorrect)
More complex: requires a second database user that the rotation function alternates between

B. Manual secret creation + CLI rotation (Incorrect)
Too much manual work; not scalable or reliable

C. Multivalue answer rotation (Incorrect)
Not a Secrets Manager rotation strategy (the term comes from Route 53 routing policies)
Secrets Manager rotation strategies:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
RDS PostgreSQL secret rotation:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets_strategies.html#rotating-secrets-single-user
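The retry strategy in option D can be sketched as a generic wrapper; in a real application you would catch the database driver's specific authentication error rather than `Exception`:

```python
import time

def call_with_auth_retry(connect, retries=3, delay=1.0):
    """Retry a connection attempt to ride out the brief window during
    single-user rotation when the old password has just been replaced."""
    last_error = None
    for attempt in range(retries):
        try:
            return connect()
        except Exception as error:  # narrow this to the driver's auth error
            last_error = error
            if attempt < retries - 1:
                time.sleep(delay)
    raise last_error
```

On each retry, the application re-fetches the secret from Secrets Manager so the new password is picked up.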


Question 5

A developer is working on an ecommerce application that stores data in an Amazon RDS for MySQL
cluster. The developer needs to implement a caching layer for the application to retrieve information
about the most viewed products.
Which solution will meet these requirements?

  • A. Edit the RDS for MySQL cluster by adding a cache node. Configure the cache endpoint instead of the cluster endpoint in the application.
  • B. Create an Amazon ElastiCache for Redis cluster. Configure the application to check the cache before querying the database and to populate the cache on a miss.
  • C. Create an Amazon DynamoDB Accelerator (DAX) cluster in front of the RDS for MySQL cluster. Configure the application to connect to the DAX endpoint instead of the RDS endpoint.
  • D. Configure the RDS for MySQL cluster to add a standby instance in a different Availability Zone. Configure the application to read the data from the standby instance.
Answer:

B


Explanation:
Requirement Summary:
E-commerce app using Amazon RDS for MySQL
Needs caching layer for most-viewed products
Evaluate Options:

A. Add a cache node to RDS (Incorrect)
No such feature exists in RDS for MySQL; caching must be implemented outside RDS

B. ElastiCache for Redis (Correct)
Purpose-built for caching frequently accessed data
Reduces read pressure on RDS
Fast in-memory access (microsecond latency)
Integrates cleanly into application logic

C. DynamoDB Accelerator (DAX) (Incorrect)
DAX accelerates DynamoDB, not RDS

D. RDS standby instance (Incorrect)
A Multi-AZ standby cannot serve reads; it exists for failover only, not load balancing
ElastiCache for Redis:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html
Caching with Redis for RDS:
https://aws.amazon.com/blogs/database/caching-strategies-using-amazon-elasticache-for-read-heavy-workloads-on-amazon-rds/
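The cache-aside pattern behind option B, sketched with a generic cache object (a `redis.Redis` client exposes the same `get`/`set` methods; the product ID and fetch function are hypothetical):

```python
def get_product(cache, fetch_from_db, product_id, ttl_seconds=300):
    """Cache-aside read: serve from the cache when possible, otherwise
    fall back to the database and populate the cache for next time."""
    cached = cache.get(product_id)
    if cached is not None:
        return cached
    product = fetch_from_db(product_id)
    cache.set(product_id, product, ex=ttl_seconds)  # expire stale entries
    return product
```

The TTL keeps "most viewed" data fresh without manual invalidation; hot products stay in memory while the database only sees cache misses.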


Question 6

A company has an application that uses an AWS Lambda function to process customer orders. The
company notices that the application processes some orders more than once.
A developer needs to update the application to prevent duplicate processing.
Which solution will meet this requirement with the LEAST implementation effort?

  • A. Implement a de-duplication mechanism that uses Amazon DynamoDB as the control database. Configure the Lambda function to check for the existence of a unique identifier before processing each event.
  • B. Create a custom Amazon ECS task to perform idempotency checks. Use AWS Step Functions to integrate the ECS task with the Lambda function.
  • C. Configure the Lambda function to retry failed invocations. Implement a retry mechanism that has a fixed delay between attempts to handle duplicate events.
  • D. Use Amazon Athena to query processed events to identify duplicate records. Add processing logic to the Lambda function to handle the duplication scenarios that the query identifies.
Answer:

A


Explanation:
Requirement Summary:
Orders are being processed more than once
Need to prevent duplicate processing
Looking for least implementation effort
Key Concept:
Lambda + Event-driven patterns can occasionally result in duplicate invocations (at-least-once
delivery model)
You need idempotency (i.e., prevent repeated processing of same event)
Evaluate Options:

A. Use DynamoDB for de-duplication (Correct)
Simple and widely used approach
Store a unique orderId as the primary key
Before processing, check whether the orderId already exists: if yes, skip; if no, process and store the ID
Minimal code changes required

B. ECS + Step Functions (Incorrect)
Overkill for basic de-duplication; adds significant complexity

C. Retry logic with a fixed delay (Incorrect)
Does not prevent duplication; retries can make it worse by triggering the same processing again

D. Athena to identify duplicates (Incorrect)
Reactive rather than preventative; not suitable for real-time event de-duplication
Lambda idempotency:
https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html
Using DynamoDB for idempotent design:
https://aws.amazon.com/blogs/compute/how-to-design-idempotent-APIs-on-aws/
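Option A's check-then-process step can be collapsed into a single DynamoDB conditional write, which avoids a race between the read and the write; a sketch of the PutItem parameters (table and key names are hypothetical):

```python
def build_dedup_put(table_name, order_id):
    """PutItem parameters that succeed only if this orderId has never
    been recorded: DynamoDB rejects the write atomically for duplicates
    with a ConditionalCheckFailedException, which the function can
    treat as "already processed"."""
    return {
        "TableName": table_name,
        "Item": {"orderId": {"S": order_id}},
        "ConditionExpression": "attribute_not_exists(orderId)",
    }

# dynamodb = boto3.client("dynamodb")
# try:
#     dynamodb.put_item(**build_dedup_put("ProcessedOrders", order_id))
#     process(order)   # first delivery: safe to process
# except dynamodb.exceptions.ConditionalCheckFailedException:
#     pass             # duplicate delivery: skip
```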


Question 7

A developer built an application that calls an external API to obtain data, processes the data, and
saves the result to Amazon S3. The developer built a container image with all of the necessary
dependencies to run the application as a container.
The application runs locally and requires minimal CPU and RAM resources. The developer has
created an Amazon ECS cluster. The developer needs to run the application hourly in Amazon ECS.
Which solution will meet these requirements with the LEAST amount of infrastructure management
overhead?

  • A. Add a capacity provider to manage instances.
  • B. Add an Amazon EC2 instance that runs the application.
  • C. Define a task definition with an AWS Fargate launch type.
  • D. Create an Amazon ECS cluster and add the managed node groups feature to run the application.
Answer:

C


Explanation:
Requirement Summary:
Containerized app
Runs hourly
Minimal CPU and RAM
Goal: Least infrastructure management
Evaluate Options:

A. Add a capacity provider to manage instances (Incorrect)
Capacity providers manage EC2-backed ECS clusters, which still require managing the underlying EC2 instances

B. Add an Amazon EC2 instance (Incorrect)
Involves managing infrastructure (provisioning, patching, scaling)

C. Define a task definition with the AWS Fargate launch type (Correct)
Serverless container runtime with no servers to manage
Easily scheduled with an Amazon EventBridge rule that runs the Fargate task hourly
Best fit for small, periodic workloads like this one

D. Use managed node groups (Incorrect)
Managed node groups are an Amazon EKS (Kubernetes) feature, not an ECS feature
Unnecessary overhead for this use case
Fargate:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/what-is-fargate.html
Scheduled ECS tasks:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduled_tasks.html
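Option C is typically wired up with an EventBridge rule on a rate(1 hour) schedule whose target runs the Fargate task; a sketch of the target shape passed to `events.put_targets` (the ARNs, role, and subnets are placeholders):

```python
def build_hourly_fargate_target(cluster_arn, task_definition_arn,
                                role_arn, subnets):
    """EventBridge rule target that launches an ECS task on Fargate."""
    return {
        "Id": "hourly-fargate-task",
        "Arn": cluster_arn,   # the target is the ECS cluster
        "RoleArn": role_arn,  # role EventBridge assumes to run the task
        "EcsParameters": {
            "TaskDefinitionArn": task_definition_arn,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": subnets,
                    "AssignPublicIp": "ENABLED",  # to reach the external API
                },
            },
        },
    }

# events.put_rule(Name="hourly-run", ScheduleExpression="rate(1 hour)")
# events.put_targets(Rule="hourly-run",
#                    Targets=[build_hourly_fargate_target(...)])
```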


Question 8

A company stores its data in data tables in a series of Amazon S3 buckets. The company received an
alert that customer credit card information might have been exposed in a data table on one of the
company's public applications. A developer needs to identify all potential exposures within the
application environment.
Which solution will meet these requirements?

  • A. Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
  • B. Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
  • C. Use Amazon Macie to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Personal finding type.
  • D. Use Amazon Athena to run a job on the S3 buckets that contain the affected data. Filter the findings by using the SensitiveData:S3Object/Financial finding type.
Answer:

B


Explanation:
Requirement Summary:
Customer credit card data may be exposed
Data is stored in Amazon S3
Developer must identify all exposure risks
Tool to Use:

Amazon Macie is designed to:
Automatically scan S3 for sensitive data
Detect financial information, PII, credentials, etc.
Finding Type Mapping:
Credit card data maps to: SensitiveData:S3Object/Financial
Evaluate Options:

A. Athena + Personal finding type (Incorrect)
Athena is a query engine; it does not detect or classify sensitive data

B. Macie + Financial finding type (Correct)
Macie is built for exactly this use case; credit card data surfaces as SensitiveData:S3Object/Financial findings

C. Macie + Personal finding type (Incorrect)
The Personal category covers names, addresses, and similar PII, not credit card data

D. Athena + Financial finding type (Incorrect)
Again, Athena cannot classify data; it only queries structured data
Macie Overview:
https://docs.aws.amazon.com/macie/latest/userguide/what-is-macie.html
Finding Types:
https://docs.aws.amazon.com/macie/latest/user/findings-types.html
Financial finding type: SensitiveData:S3Object/Financial
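A sketch of narrowing Macie findings to the financial category with boto3; the criterion shape follows `macie2.list_findings`, and treat the exact criterion field name as an assumption:

```python
def build_financial_findings_criteria():
    """findingCriteria for macie2.list_findings that keeps only findings
    about financial data (e.g. credit card numbers) in S3 objects."""
    return {
        "criterion": {
            "type": {"eq": ["SensitiveData:S3Object/Financial"]},
        },
    }

# macie = boto3.client("macie2")
# finding_ids = macie.list_findings(
#     findingCriteria=build_financial_findings_criteria())["findingIds"]
```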


Question 9

A real-time messaging application uses Amazon API Gateway WebSocket APIs with a backend HTTP
service. A developer needs to build a feature in the application to identify a client that keeps
connecting to and disconnecting from the WebSocket connection. The developer also needs the
ability to remove the client.
Which combination of changes should the developer make to the application to meet these
requirements? (Select TWO.)

  • A. Switch to HTTP APIs in the backend service.
  • B. Switch to REST APIs in the backend service.
  • C. Use the callback URL to disconnect the client from the backend service.
  • D. Add code to track the client status in Amazon ElastiCache in the backend service.
  • E. Implement $connect and $disconnect routes in the backend service.
Answer:

D, E


Explanation:
Requirement Summary:
WebSocket-based messaging app using API Gateway WebSocket APIs
Need to:
Identify clients repeatedly connecting/disconnecting
Be able to remove problematic clients
Evaluate Options:

A. Switch to HTTP APIs (Incorrect)
HTTP APIs do not support WebSocket connections

B. Switch to REST APIs (Incorrect)
REST APIs are not compatible with WebSockets

C. Use the callback URL to disconnect the client (Incorrect)
The callback URL is mainly used to send messages to connected clients; on its own it does not identify which client is misbehaving

D. Track client status in ElastiCache (Correct)
Store and update connection state (connected, disconnected, timestamps)
Makes repeated connect/disconnect patterns visible

E. Implement $connect and $disconnect routes (Correct)
Required to capture connection lifecycle events
These routes can log client behavior and drive the decision to remove a client
WebSocket routes in API Gateway:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-route-selection.html
Managing WebSocket connections:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api-mapping-template-reference.html
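A sketch of the $connect handler from option E, with a plain dict standing in for the ElastiCache counter from option D (the clientId query parameter and the threshold are hypothetical):

```python
def handle_connect(event, reconnect_counts, limit=5):
    """$connect route handler: count connections per client and refuse
    clients that reconnect too often. Returning a non-2xx status from
    $connect rejects the WebSocket connection."""
    client_id = (event.get("queryStringParameters") or {}).get("clientId")
    if client_id is None:
        return {"statusCode": 400}
    reconnect_counts[client_id] = reconnect_counts.get(client_id, 0) + 1
    if reconnect_counts[client_id] > limit:
        return {"statusCode": 429}  # deny the abusive client
    return {"statusCode": 200}
```

In production the counter would live in ElastiCache with a TTL, so counts reset after a quiet period instead of growing forever.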


Question 10

A developer uses AWS IAM Identity Center to interact with the AWS CLI and AWS SDKs on a local
workstation. API calls to AWS services were working when the SSO access was first configured.
However, the developer is now receiving Access Denied errors. The developer has not changed any
configuration files or scripts that were previously working on the workstation.
What is the MOST likely cause of the developer's access issue?

  • A. The access permissions to the developer's AWS CLI binary file have changed.
  • B. The permission set that is assumed by IAM Identity Center does not have the necessary permissions to complete the API call.
  • C. The credentials from the IAM Identity Center federated role have expired.
  • D. The developer is attempting to make API calls to the incorrect AWS account.
Answer:

C


Explanation:
Requirement Summary:
Developer uses AWS IAM Identity Center (SSO) with AWS CLI / SDKs
Initially working fine
Now receiving AccessDenied errors
No changes to config or scripts
Key Understanding:
IAM Identity Center credentials are temporary and time-limited. When you use SSO-based access via
the AWS CLI (aws sso login), it obtains temporary credentials stored in the local cache.
By default, these expire in 1 hour (can be extended).
Evaluate Options:

A. Permissions on the CLI binary changed (Incorrect)
Unlikely; that would cause local execution errors, not Access Denied responses from the AWS API

B. Permission set lacks required permissions (Incorrect)
The errors would then have occurred from the beginning, not only after time had passed

C. IAM Identity Center credentials expired (Correct)
Most likely cause: the developer has not refreshed credentials by running aws sso login again
After the temporary credentials expire, API calls fail until a new session is established

D. Calling the wrong AWS account (Incorrect)
Nothing changed on the workstation, so the calls still target the same account that previously worked
SSO and AWS CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
Troubleshooting SSO CLI:
https://docs.aws.amazon.com/cli/latest/userguide/sso-configure-profile-token.html


Question 11

A company needs to develop a proof of concept for a web service application. The application will
show the weather forecast for one of the company's office locations. The application will provide a
REST endpoint that clients can call. Where possible, the application should use caching features
provided by AWS to limit the number of requests to the backend service. The application backend
will receive a small amount of traffic only during testing.
Which approach should the developer take to provide the REST endpoint MOST cost-effectively?

  • A. Create a container image. Deploy the container image by using Amazon EKS. Expose the functionality by using Amazon API Gateway.
  • B. Create an AWS Lambda function by using AWS SAM. Expose the Lambda functionality by using Amazon API Gateway.
  • C. Create a container image. Deploy the container image by using Amazon ECS. Expose the functionality by using Amazon API Gateway.
  • D. Create a microservices application. Deploy the application to AWS Elastic Beanstalk. Expose the AWS Lambda functionality by using an Application Load Balancer.
Answer:

B


Explanation:
Requirement Summary:
Simple REST endpoint for weather data
Light backend usage (POC, testing)
Wants caching support to reduce backend calls
Must be cost-effective
Evaluate Options:

B. Lambda + API Gateway + SAM (Correct)
Serverless, so there are no idle costs
API Gateway supports response caching at the stage level
SAM makes deployment simple and repeatable
Ideal for low-traffic testing

A. EKS + API Gateway (Incorrect)
High operational overhead; not cost-effective for a proof of concept

C. ECS + API Gateway (Incorrect)
Like A, container orchestration is unnecessary for a lightweight REST endpoint

D. Elastic Beanstalk + ALB (Incorrect)
Overly complex, and Elastic Beanstalk does not deploy Lambda functions; it suits full applications, not a small REST function
AWS SAM:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
API Gateway caching:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
Serverless Best Practices:
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
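A minimal SAM template sketch for option B (all resource names, paths, and the runtime are hypothetical); response caching would then be enabled on the deployed API Gateway stage:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  WeatherFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler        # handler function in src/app.py
      Runtime: python3.12
      CodeUri: src/
      Events:
        Forecast:
          Type: Api               # implicit API Gateway REST endpoint
          Properties:
            Path: /forecast
            Method: get
```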


Question 12

A company operates a media streaming platform that delivers on-demand video content to users
from around the world. User requests flow through an Amazon CloudFront distribution, an Amazon
API Gateway REST API, AWS Lambda functions, and Amazon DynamoDB tables.
Some users have reported intermittent buffering issues and delays when users try to start a video
stream. The company needs to investigate the issues to discover the underlying cause.
Which solution will meet this requirement?

  • A. Enable AWS X-Ray tracing for the REST API, Lambda functions, and DynamoDB tables. Analyze the service map to identify any performance bottlenecks or errors.
  • B. Enable logging in API Gateway. Ensure that each Lambda function is configured to send logs to Amazon CloudWatch. Use CloudWatch Logs Insights to query the log data.
  • C. Use AWS Config to review details of any recent configuration changes to AWS resources in the application that could result in increased latency for users.
  • D. Use AWS CloudTrail to track AWS resources in all AWS Regions. Stream CloudTrail data to an Amazon CloudWatch Logs log group. Enable CloudTrail Insights. Set up Amazon SNS notifications if unusual API activity is detected.
Answer:

A


Explanation:
Requirement Summary:
Users experience buffering/delay when starting video stream
Architecture:
CloudFront → API Gateway → Lambda → DynamoDB
Need to identify root cause of performance issues
Evaluate Options:

A. Enable AWS X-Ray tracing (Correct)
Ideal for end-to-end distributed tracing
Visualizes latency across API Gateway, Lambda, and DynamoDB
The service map makes performance bottlenecks and errors easy to identify

B. CloudWatch Logs Insights (Incorrect)
Helpful for querying logs, but lacks the cross-service trace linkage that X-Ray provides and does not show where latency accumulates

C. AWS Config (Incorrect)
Tracks configuration changes, not runtime performance

D. CloudTrail + CloudWatch Logs (Incorrect)
More useful for auditing API activity than for tracing performance or latency issues
X-Ray overview:
https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html
Service map:
https://docs.aws.amazon.com/xray/latest/devguide/xray-console-service-map.html
Tracing API Gateway:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-xray.html


Question 13

A company uses an AWS Lambda function to perform natural language processing (NLP) tasks. The
company has attached a Lambda layer to the function. The Lambda layer contains scientific libraries
that the function uses during processing.
The company added a large, pre-trained text-classification model to the Lambda layer. The addition
increased the size of the Lambda layer to 8.7 GB. After the addition and a recent deployment, the
Lambda function returned a RequestEntityTooLargeException error.
The company needs to update the Lambda function with a high-performing and portable solution to
decrease the initialization time for the function.
Which solution will meet these requirements?

  • A. Store the large pre-trained model in an Amazon S3 bucket. Use the AWS SDK to access the model.
  • B. Create an Amazon EFS file system to store the large pre-trained model. Mount the file system to an Amazon EC2 instance. Configure the Lambda function to use the EFS file system.
  • C. Split the components of the Lambda layer into five new Lambda layers. Zip the new layers, and attach the layers to the Lambda function. Update the function code to use the new layers.
  • D. Create a Docker container that includes the scientific libraries and the pre-trained model. Update the Lambda function to use the container image.
Answer:

D


Explanation:
Requirement Summary:
NLP Lambda function with a large pre-trained model
Lambda layer became 8.7 GB → Exceeds AWS limits
Function returns RequestEntityTooLargeException
Need: High-performing, portable, low initialization time
Important AWS Limits:
Lambda Layers size limit (combined across all layers): 250 MB (unzipped)
Deployment package size (unzipped): 250 MB
Lambda container image support allows up to 10 GB image size
Evaluate Options:

A. Store the model in S3 and load it during execution (Incorrect)
Downloading the model from S3 on every cold start adds significant latency
Not suitable for real-time NLP

B. Use EFS mounted to Lambda (Incorrect)
Valid for large models, but the option mounts the file system to an EC2 instance, which does not make it available to Lambda
Even mounted correctly, EFS requires VPC setup and adds cold-start and network I/O latency
Still slower than bundling the model into the image

C. Split the layer into five Lambda layers (Incorrect)
The 250 MB unzipped limit applies to the function and all of its layers combined, so splitting does not raise it

D. Use a Docker container image (Correct)
Container images can be up to 10 GB, enough for the libraries and the model
Highly portable and performant
Avoids downloading the model at runtime
Ideal for scientific/NLP workloads
Lambda container image support:
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
Lambda limits:
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
Using large models with Lambda:
https://aws.amazon.com/blogs/machine-learning/deploying-large-machine-learning-models-on-aws-lambda-with-container-images/
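A sketch of the container image from option D (the paths and file names are hypothetical); the AWS-provided base image keeps the function compatible with the Lambda runtime API:

```dockerfile
# AWS-provided Python base image for Lambda container functions
FROM public.ecr.aws/lambda/python:3.12

# Scientific libraries baked into the image at build time
COPY requirements.txt .
RUN pip install -r requirements.txt

# Bundle the pre-trained model so nothing is downloaded at runtime
COPY model/ ${LAMBDA_TASK_ROOT}/model/
COPY app.py ${LAMBDA_TASK_ROOT}/

CMD ["app.handler"]
```

The image is pushed to Amazon ECR and the function is created from (or updated to) that image.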


Question 14

A company runs applications on Amazon EKS containers. The company sends application logs from
the containers to an Amazon CloudWatch Logs log group. The company needs to process log data in
real time based on a specific error in the application logs. Which combination of steps will meet
these requirements? (Select TWO.)

  • A. Create an Amazon SNS topic that has a subscription filter policy.
  • B. Create a subscription filter on the log group that has a filter pattern.
  • C. Set up an Amazon CloudWatch agent operator to manage the trace collection daemon in Amazon EKS.
  • D. Create an AWS Lambda function to process the logs.
  • E. Create an Amazon EventBridge rule to invoke the AWS Lambda function on a schedule.
Answer:

B, D


Explanation:
Requirement Summary:
EKS containers send logs to CloudWatch Logs
Need to process logs in real time
Trigger logic based on a specific error in logs
Evaluate Options:

A. SNS topic with a filter policy (Incorrect)
SNS filter policies act on message attributes; they do not filter CloudWatch Logs

B. Subscription filter on the log group (Correct)
Enables real-time log processing
A filter pattern can match the specific error string
Matched events are delivered to a Lambda function or a Kinesis stream

C. CloudWatch agent operator for trace collection (Incorrect)
Used for monitoring and tracing, not real-time log filtering

D. Lambda function to process the logs (Correct)
Once events match the pattern, Lambda can process and act on them (alert, store, analyze)

E. EventBridge rule on a schedule (Incorrect)
Not real time; scheduled rules are for cron-like tasks, not log stream processing

Subscription filters:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Real-time log processing with Lambda:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaExample
Logs in EKS to CloudWatch:
https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
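A sketch of the subscription filter from option B (the log group, Lambda ARN, and error token are placeholders); the dict maps onto `logs.put_subscription_filter`:

```python
def build_error_subscription_filter(log_group_name, lambda_arn,
                                    error_token="OrderProcessingError"):
    """put_subscription_filter parameters that stream only log events
    containing the given error token to a Lambda function."""
    return {
        "logGroupName": log_group_name,
        "filterName": "app-error-filter",
        "filterPattern": f'"{error_token}"',  # quoted term = literal match
        "destinationArn": lambda_arn,
    }

# boto3.client("logs").put_subscription_filter(
#     **build_error_subscription_filter(
#         "/aws/containerinsights/app/application",
#         "arn:aws:lambda:us-east-1:111122223333:function:log-handler"))
```

Lambda also needs a resource-based permission allowing logs.amazonaws.com to invoke it.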


Question 15

A company is building a serverless application on AWS. The application uses Amazon API Gateway
and AWS Lambda. The company wants to deploy the application to its development, test, and
production environments.
Which solution will meet these requirements with the LEAST development effort?

  • A. Use API Gateway stage variables and create Lambda aliases to reference environment-specific resources.
  • B. Use Amazon ECS to deploy the application to the environments.
  • C. Duplicate the code for each environment. Deploy the code to a separate API Gateway stage.
  • D. Use AWS Elastic Beanstalk to deploy the application to the environments.
Answer:

A


Explanation:
Requirement Summary:
Deploy serverless application using:
API Gateway
AWS Lambda
Need dev, test, and prod environments
Want least development effort
Evaluate Options:

A. API Gateway stage variables + Lambda aliases (Correct)
Most efficient and scalable approach
API Gateway supports stage variables (for example, an env variable set per stage)
Lambda supports aliases (for example, dev, test, and prod) that point at specific function versions
Each stage can reference a different alias of the same function, which provides versioning and isolation with little effort

B. Use Amazon ECS (Incorrect)
Overkill for a serverless setup; ECS is container-based and introduces unnecessary complexity

C. Duplicate the code for each environment (Incorrect)
High operational overhead and poor maintainability

D. Use Elastic Beanstalk (Incorrect)
Elastic Beanstalk hosts traditional applications; it is not the right fit for Lambda + API Gateway
Lambda Aliases:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
API Gateway Stage Variables:
https://docs.aws.amazon.com/apigateway/latest/developerguide/stage-variables.html
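The wiring in option A comes down to one integration URI in which the stage variable selects the Lambda alias; a sketch that builds that URI (the region, account ID, and function name are placeholders):

```python
def lambda_alias_integration_uri(region, account_id, function_name,
                                 stage_variable="env"):
    """API Gateway Lambda integration URI whose alias is resolved from a
    stage variable, so dev/test/prod stages each invoke their own alias."""
    function_arn = (f"arn:aws:lambda:{region}:{account_id}:function:"
                    f"{function_name}:${{stageVariables.{stage_variable}}}")
    return (f"arn:aws:apigateway:{region}:lambda:path/2015-03-31/"
            f"functions/{function_arn}/invocations")

# Each stage then sets env=dev, env=test, or env=prod, and the matching
# Lambda alias points at the function version for that environment.
```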
