Amazon AWS Certified Solutions Architect - Associate exam practice questions

Questions for the SAA-C03 were updated on: Dec 01, 2025

Page 1 out of 36. Viewing questions 1-15 out of 527

Question 1

A developer creates a web application that runs on Amazon EC2 instances behind an Application
Load Balancer (ALB). The instances are in an Auto Scaling group. The developer reviews the
deployment and notices some suspicious traffic to the application. The traffic is malicious and is
coming from a single public IP address. A solutions architect must block the public IP address.
Which solution will meet this requirement?

  • A. Create a security group rule to deny all inbound traffic from the suspicious IP address. Associate the security group with the ALB.
  • B. Implement Amazon Detective to monitor traffic and to block malicious activity from the internet. Configure Detective to integrate with the ALB.
  • C. Implement AWS Resource Access Manager (AWS RAM) to manage traffic rules and to block malicious activity from the internet. Associate AWS RAM with the ALB.
  • D. Add the malicious IP address to an IP set in AWS WAF. Create a web ACL. Include an IP set rule with the action set to BLOCK. Associate the web ACL with the ALB.
Answer:

D

Explanation:
When an application is fronted by an Application Load Balancer (ALB) and malicious traffic is
detected from a specific IP, the correct way to block the IP is by using AWS WAF (Web Application
Firewall).
With AWS WAF, you can create an IP Set to include the offending IP address or range.
Then create a Web ACL (Access Control List) with a rule set to BLOCK requests from that IP set.
Finally, associate the Web ACL with the ALB.
Security groups (Option A) support allow rules only, so they cannot explicitly deny traffic from a specific IP address.
Amazon Detective (Option B) is a security analysis and investigation tool; it does not block traffic.
AWS RAM (Option C) is for sharing resources across accounts, not for blocking IPs.
This approach aligns with the Security Pillar of the AWS Well-Architected Framework and is fully
managed, with minimal operational effort.
Reference:
Using AWS WAF with an Application Load Balancer
Block IPs with AWS WAF
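
For illustration, a minimal boto3 (Python) sketch of option D. The IP address, names, and ARNs below are placeholders, and the exact rule configuration would vary by environment:

```python
import boto3

# AWS WAF (wafv2) resources attached to an ALB must use Scope="REGIONAL".
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# 1. Create an IP set that contains the malicious address (placeholder value).
ip_set = wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.25/32"],
)

# 2. Create a web ACL with a rule that blocks requests matching the IP set.
web_acl = wafv2.create_web_acl(
    Name="block-malicious-ip",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-ip-set",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "BlockIpSet",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockMaliciousIp",
    },
)

# 3. Associate the web ACL with the ALB (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```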

Question 2

A large financial services company uses Amazon ElastiCache (Redis OSS) for its new application that
has a global user base. A solutions architect must develop a caching solution that will be available
across AWS Regions and include low-latency replication and failover capabilities for disaster recovery
(DR). The company's security team requires the encryption of cross-Region data transfers.
Which solution meets these requirements with the LEAST amount of operational effort?

  • A. Enable cluster mode in ElastiCache (Redis OSS). Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.
  • B. Create a global data store in ElastiCache (Redis OSS). Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.
  • C. Disable cluster mode in ElastiCache (Redis OSS). Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.
  • D. Create a snapshot of ElastiCache (Redis OSS) in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.
Answer:

B

Explanation:
The optimal solution for low-latency global caching with disaster recovery and cross-Region
replication is to use Amazon ElastiCache Global Datastore for Redis OSS.
A Global Datastore enables fully managed cross-Region replication and supports automatic failover
by promoting read replica clusters in another Region.
ElastiCache ensures encryption in-transit and at-rest, meeting compliance and security requirements.
It's a fully managed AWS-native feature, reducing operational effort compared to setting up DMS-
based or snapshot-based replication manually.
The other options (A, C, and D) require manual setup and management (for example, custom DMS pipelines or snapshot copies) and do not offer real-time replication or failover without manual intervention.
Reference:
ElastiCache Global Datastore for Redis
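
A minimal boto3 sketch of option B, assuming an existing Redis OSS replication group in the primary Region; the IDs and Regions are placeholders:

```python
import boto3

# Primary Region: the existing Redis OSS replication group becomes the primary
# member of a global datastore.
primary = boto3.client("elasticache", region_name="us-east-1")

global_ds = primary.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="global-cache",
    PrimaryReplicationGroupId="app-cache-primary",
)
global_id = global_ds["GlobalReplicationGroup"]["GlobalReplicationGroupId"]

# Secondary Region: create a replica cluster that joins the global datastore.
secondary = boto3.client("elasticache", region_name="eu-west-1")
secondary.create_replication_group(
    ReplicationGroupId="app-cache-eu",
    ReplicationGroupDescription="DR replica for the global datastore",
    GlobalReplicationGroupId=global_id,
)

# During DR, promote the secondary Region's cluster to primary.
secondary.failover_global_replication_group(
    GlobalReplicationGroupId=global_id,
    PrimaryRegion="eu-west-1",
    PrimaryReplicationGroupId="app-cache-eu",
)
```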

Question 3

A company operates a data lake in Amazon S3 that stores large datasets in multiple formats. The
company has an application that retrieves and processes subsets of data from multiple objects in the
data lake based on filtering criteria. For each data query, the application currently downloads the
entire S3 object and performs transformations. The current process requires a large amount of
transformation time.
The company wants a solution that will give the application the ability to query and filter directly on
S3 objects without downloading the objects.
Which solution will meet these requirements?

  • A. Use Amazon Athena to query and filter the objects in Amazon S3.
  • B. Use Amazon EMR to process and filter the objects.
  • C. Use Amazon API Gateway to create an API to retrieve filtered results from Amazon S3.
  • D. Use Amazon ElastiCache (Valkey) to cache the objects.
Answer:

A

Explanation:
The best solution to query and filter S3 data directly without downloading the full object is to use
Amazon Athena.
Amazon Athena is an interactive query service that lets you use SQL to analyze structured, semi-
structured, and unstructured data directly in Amazon S3, without needing to move or transform the
data.
It supports formats like CSV, JSON, ORC, Parquet, and Avro and integrates with AWS Glue Data
Catalog for schema management.
Athena is serverless, meaning there’s no infrastructure to manage, and it's billed per query, which
keeps it cost-effective.
Option B (EMR) is heavier and requires managing a cluster.
Option C (API Gateway) is not suited for querying S3 datasets.
Option D (ElastiCache) is a memory store, not a query engine.
Reference:
What is Amazon Athena?
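
A minimal boto3 sketch of option A, assuming a hypothetical Glue Data Catalog database, table, and results bucket (all names are placeholders):

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a query that filters the data in place instead of downloading whole objects.
response = athena.start_query_execution(
    QueryString="""
        SELECT customer_id, amount
        FROM transactions
        WHERE region = 'EU' AND amount > 1000
    """,
    QueryExecutionContext={"Database": "datalake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
query_id = response["QueryExecutionId"]

# Poll for completion, then fetch only the filtered result set.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```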

Question 4

A company runs a critical Amazon RDS for MySQL DB instance in a single Availability Zone. The
company must improve the availability of the DB instance.
Which solution will meet this requirement?

  • A. Configure the DB instance to use a multi-Region DB instance deployment.
  • B. Create an Amazon Simple Queue Service (Amazon SQS) queue in the AWS Region where the company hosts the DB instance to manage writes to the DB instance.
  • C. Configure the DB instance to use a Multi-AZ DB instance deployment.
  • D. Create an Amazon Simple Queue Service (Amazon SQS) queue in a different AWS Region than the Region where the company hosts the DB instance to manage writes to the DB instance.
Answer:

C

Explanation:
To improve availability and fault tolerance of an Amazon RDS instance, the recommended approach
is to configure a Multi-AZ deployment.
Multi-AZ deployments for RDS automatically replicate data to a standby instance in a different
Availability Zone (AZ).
If a failure occurs in the primary AZ (due to hardware, network, or power), RDS will automatically
failover to the standby instance with minimal downtime, without administrative intervention.
This is an AWS-managed feature and does not require application modification.
It does not provide scalability or load balancing; it's designed for high availability and resiliency.
Options A, B, and D are incorrect:
A refers to cross-Region, which is used for disaster recovery, not high availability.
B and D with SQS do not address high availability directly for the RDS instance; queues help decouple
systems but do not make a database more resilient.
Reference:
Amazon RDS Multi-AZ Deployments
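
A minimal boto3 sketch of option C for an existing instance; the DB instance identifier is a placeholder:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Convert the existing single-AZ DB instance to a Multi-AZ deployment;
# RDS provisions a synchronous standby in another Availability Zone.
rds.modify_db_instance(
    DBInstanceIdentifier="critical-mysql-db",
    MultiAZ=True,
    ApplyImmediately=True,  # apply now instead of waiting for the maintenance window
)

# Optionally wait until the modification completes.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="critical-mysql-db")
```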

Question 5

A finance company hosts a data lake in Amazon S3. The company receives financial data records over
SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2
instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by
a cron job that runs on the same instance. The SFTP server is reachable on DNS sftp.example.com
through the use of Amazon Route 53.
What should a solutions architect do to improve the reliability and scalability of the SFTP solution?

  • A. Move the EC2 instance into an Auto Scaling group. Place the EC2 instance behind an Application Load Balancer (ALB). Update the DNS record sftp.example.com in Route 53 to point to the ALB.
  • B. Migrate the SFTP server to AWS Transfer for SFTP. Update the DNS record sftp.example.com in Route 53 to point to the server endpoint hostname.
  • C. Migrate the SFTP server to a file gateway in AWS Storage Gateway. Update the DNS record sftp.example.com in Route 53 to point to the file gateway endpoint.
  • D. Place the EC2 instance behind a Network Load Balancer (NLB). Update the DNS record sftp.example.com in Route 53 to point to the NLB.
Answer:

B

Explanation:
The optimal way to improve reliability and scalability of SFTP on AWS is to use AWS Transfer Family
(for SFTP). It provides a fully managed SFTP server integrated with Amazon S3.
No EC2 instances or infrastructure management is required.
AWS Transfer Family supports custom DNS domains (e.g., sftp.example.com) and allows integration
with existing authentication mechanisms like LDAP, AD, or custom identity providers.
Files are uploaded directly to S3, eliminating the need for cron jobs to move data from EC2 to S3.
Built-in high availability and scalability removes the burden of managing infrastructure.
Other options:
A and D still require manual scaling, server maintenance, and cron jobs.
C (Storage Gateway) is used for hybrid file access, not for replacing an SFTP server.
Reference:
AWS Transfer Family for SFTP
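
A minimal boto3 sketch of option B, assuming service-managed users and placeholder names for the bucket, IAM role, and hosted zone:

```python
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")
route53 = boto3.client("route53")

# Create a fully managed SFTP endpoint backed by Amazon S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)
server_id = server["ServerId"]

# Each third party gets a user mapped to a prefix in the data lake bucket.
transfer.create_user(
    ServerId=server_id,
    UserName="partner-a",
    Role="arn:aws:iam::123456789012:role/transfer-s3-access",
    HomeDirectory="/finance-data-lake/partner-a",
)

# Point sftp.example.com at the managed server endpoint hostname,
# e.g. <server-id>.server.transfer.us-east-1.amazonaws.com.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "sftp.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": f"{server_id}.server.transfer.us-east-1.amazonaws.com"}],
            },
        }]
    },
)
```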

Question 6

A company recently migrated its application to a VPC on AWS. An AWS Site-to-Site VPN connection
connects the company's on-premises network to the VPC. The application retrieves customer data
from another system that resides on premises. The application uses an on-premises DNS server to
resolve domain records. After the migration, the application is not able to connect to the customer
data because of name resolution errors.
Which solution will give the application the ability to resolve the internal domain names?

  • A. Launch EC2 instances in the VPC. On the EC2 instances, deploy a custom DNS forwarder that forwards all DNS requests to the on-premises DNS server. Create an Amazon Route 53 private hosted zone that uses the EC2 instances for name servers.
  • B. Create an Amazon Route 53 Resolver outbound endpoint. Configure the outbound endpoint to forward DNS queries against the on-premises domain to the on-premises DNS server.
  • C. Set up two AWS Direct Connect connections between the AWS environment and the on-premises network. Set up a link aggregation group (LAG) that includes the two connections. Change the VPC resolver address to point to the on-premises DNS server.
  • D. Create an Amazon Route 53 public hosted zone for the on-premises domain. Configure the network ACLs to forward DNS requests against the on-premises domain to the Route 53 public hosted zone.
Answer:

B

Explanation:
When AWS workloads must resolve DNS names from on-premises systems over a hybrid network
(like VPN or Direct Connect), the best solution is to use Amazon Route 53 Resolver outbound
endpoints.
The outbound endpoint enables DNS queries to be forwarded from your VPC to on-premises DNS
servers.
You must also configure a Route 53 Resolver forwarding rule to define which domain names (e.g.,
corp.internal) should be forwarded to the specific on-premises DNS IPs.
This setup allows private DNS resolution from AWS to on-premises systems and is fully managed,
eliminating the need to run and maintain EC2-based DNS proxies (as in option A).
Options C and D are incorrect:
C is not DNS-specific and doesn’t solve name resolution.
D misuses a public hosted zone for a private DNS domain.
Reference:
Route 53 Resolver Outbound Endpoints
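
A minimal boto3 sketch of option B; the subnets, security group, VPC, domain name, and on-premises DNS IP are placeholders:

```python
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

# Outbound endpoint in two subnets of the application VPC.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-001",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111"},
        {"SubnetId": "subnet-0bbb2222"},
    ],
)

# Forwarding rule: send queries for the on-premises domain to the
# on-premises DNS servers over the Site-to-Site VPN.
rule = resolver.create_resolver_rule(
    CreatorRequestId="forward-corp-internal-001",
    RuleType="FORWARD",
    DomainName="corp.internal",
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    TargetIps=[{"Ip": "10.20.0.2", "Port": 53}],
)

# Associate the rule with the VPC so its resolver applies it.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```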

Question 7

A company runs its critical storage application in the AWS Cloud. The application uses Amazon S3 in
two AWS Regions. The company wants the application to send remote user data to the nearest S3
bucket with no public network congestion. The company also wants the application to fail over with
the least amount of management of Amazon S3.
Which solution will meet these requirements?

  • A. Implement an active-active design between the two Regions. Configure the application to use the regional S3 endpoints closest to the user.
  • B. Use an active-passive configuration with S3 Multi-Region Access Points. Create a global endpoint for each of the Regions.
  • C. Send user data to the regional S3 endpoints closest to the user. Configure an S3 cross-account replication rule to keep the S3 buckets synchronized.
  • D. Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication.
Answer:

D

Explanation:
To meet the requirement of low-latency global access and failover with minimal management, the
best solution is to use Amazon S3 Multi-Region Access Points (MRAP) with Cross-Region Replication
(CRR).
Multi-Region Access Points provide a global endpoint that automatically routes requests to the
nearest AWS Region using the AWS Global Accelerator infrastructure. This avoids public internet
congestion and ensures low-latency access.
When combined with S3 Cross-Region Replication, data is automatically synchronized between
buckets in different Regions, enabling active-active setup.
In case of a Regional failure, S3 Multi-Region Access Points handle failover automatically, requiring
no manual intervention.
Options A and C require manual management and configuration of endpoints per Region. Option B is
incorrect because Multi-Region Access Points expose a single global endpoint, not one global
endpoint per Region.
Reference:
S3 Multi-Region Access Points
S3 Cross-Region Replication
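
A minimal boto3 sketch of creating the Multi-Region Access Point in option D, assuming the two Regional buckets already exist (account ID and bucket names are placeholders; replication is configured separately on the buckets):

```python
import boto3

# Multi-Region Access Point control-plane calls are served from us-west-2.
s3control = boto3.client("s3control", region_name="us-west-2")

# Create a Multi-Region Access Point over buckets in two Regions. The call is
# asynchronous; S3 returns a request token to track creation progress.
response = s3control.create_multi_region_access_point(
    AccountId="123456789012",
    Details={
        "Name": "user-data-mrap",
        "Regions": [
            {"Bucket": "user-data-us-east-1"},
            {"Bucket": "user-data-eu-west-1"},
        ],
    },
)
print("Async request token:", response["RequestTokenARN"])

# The application then uses the single global MRAP alias/ARN for PUT and GET,
# and S3 routes each request to the closest Region over the AWS network.
# Cross-Region Replication between the two buckets keeps the copies in sync.
```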

Question 8

A disaster response team is using drones to collect images of recent storm damage. The response
team's laptops lack the storage and compute capacity to transfer the images and process the data.
While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage,
network connectivity is intermittent and unreliable. The images need to be processed to evaluate
the damage.
What should a solutions architect recommend?

  • A. Use AWS Snowball Edge devices to process and store the images.
  • B. Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent connectivity to EC2 instances.
  • C. Configure Amazon Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing images.
  • D. Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for Amazon S3 to process the images when connectivity becomes available.
Answer:

A

Explanation:
AWS Snowball Edge is specifically designed for use cases that involve limited or unreliable network
connectivity. It enables data transfer and local compute processing at edge locations.
It comes in two options: Snowball Edge Storage Optimized and Snowball Edge Compute Optimized.
The Compute Optimized model allows the disaster response team to both store images locally and
process data on the device using Amazon EC2-compatible compute resources.
This removes the need for constant network connectivity. After processing, the device can be
shipped back to AWS, where data is uploaded to S3.
The other options fail because:
SQS is not suitable for large binary image data (Option B).
Kinesis Data Firehose needs steady connectivity (Option C).
Storage Gateway is for hybrid cloud environments with an ongoing connection, not rugged field use (Option D).
Reference:
AWS Snowball Edge Overview
Snowball Edge Use Cases

Question 9

A company uses Amazon EC2 instances behind an Application Load Balancer (ALB) to serve content
to users. The company uses Amazon Elastic Block Store (Amazon EBS) volumes to store data.
The company needs to encrypt data in transit and at rest.
Which combination of services will meet these requirements? (Select TWO.)

  • A. Amazon GuardDuty
  • B. AWS Shield
  • C. AWS Certificate Manager (ACM)
  • D. AWS Secrets Manager
  • E. AWS Key Management Service (AWS KMS)
Answer:

C, E

Explanation:
To secure data in transit, the company should use AWS Certificate Manager (ACM) to provide SSL/TLS
certificates for the Application Load Balancer. ACM allows easy provisioning, management, and
renewal of public and private certificates, ensuring secure communication between users and
applications.
To secure data at rest, AWS Key Management Service (KMS) is used to manage encryption keys for
Amazon EBS volumes. EBS integrates with AWS KMS, allowing for server-side encryption using KMS-
managed keys (SSE-KMS), thus meeting the encryption at rest requirement.
Other options:
GuardDuty (A) is for threat detection, not encryption.
AWS Shield (B) protects against DDoS attacks, not encryption.
Secrets Manager (D) manages credentials, not general data encryption.
This solution follows the AWS Well-Architected Framework – Security Pillar.
Reference:
Encrypting EBS volumes with KMS
Using ACM with ALB
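
A minimal boto3 sketch combining options C and E; the ARNs, KMS key alias, and sizes are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# In transit: HTTPS listener on the ALB using an ACM certificate.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example-cert"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/def456",
    }],
)

# At rest: create an EBS volume encrypted with a KMS key, and optionally turn
# on EBS encryption by default for the account in this Region.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/ebs-app-key",  # placeholder KMS key alias
)
ec2.enable_ebs_encryption_by_default()
```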

Question 10

A company wants a flexible compute solution that includes Amazon EC2 instances and AWS Fargate.
The company does not want to commit to multi-year contracts.
Which purchasing option will meet these requirements MOST cost-effectively?

  • A. Purchase a 1-year EC2 Instance Savings Plan with the All Upfront option.
  • B. Purchase a 1-year Compute Savings Plan with the No Upfront option.
  • C. Purchase a 1-year Compute Savings Plan with the Partial Upfront option.
  • D. Purchase a 1-year Compute Savings Plan with the All Upfront option.
Answer:

B

Explanation:
To optimize costs for both Amazon EC2 and AWS Fargate, the best option is a Compute Savings Plan
because it offers flexibility across instance families, Regions, and compute options including EC2,
AWS Fargate, and AWS Lambda.
Unlike EC2 Instance Savings Plans, which apply only to specific instance families, Compute Savings
Plans apply across multiple services.
Since the company does not want to commit to multi-year contracts or large upfront payments, the
1-year No Upfront Compute Savings Plan provides the greatest flexibility with no upfront capital
commitment, while still offering cost savings over On-Demand pricing.
This option also aligns with cost-optimization best practices by allowing for scalability and service
mix flexibility.
Reference:
AWS Compute Savings Plans
AWS Pricing Models

Question 11

A company has a non-production application that runs on an Amazon EC2 instance. The EC2 instance
has an instance profile and an associated IAM role.
The company wants to automate patching for the EC2 instance.
Which solution will meet this requirement?

  • A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role to the EC2 instance profile. Use AWS Systems Manager to patch the instance.
  • B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure AWS Systems Manager to use the IAM user to patch the instance.
  • C. Attach the AmazonSSMManagedInstanceCore policy to the existing IAM role. Use AWS Systems Manager to patch the EC2 instance.
  • D. Attach the AmazonSSMManagedInstanceCore policy to an existing IAM user. Use EC2 Image Builder to patch the EC2 instance.
Answer:

C

Explanation:
To manage EC2 instances with AWS Systems Manager (SSM), the EC2 instance must be configured as
a managed instance by attaching an IAM role that has the AmazonSSMManagedInstanceCore
managed policy.
This policy allows:
The SSM Agent to register the instance with Systems Manager
Actions such as patching, automation, session management, and inventory collection
Access to SSM endpoints (via the internet or a VPC endpoint if needed)
Since the EC2 instance already has an IAM role, the least operational overhead option is to attach the
required policy to the existing role (Option C). No need to create new IAM roles or users, which
simplifies management and adheres to the principle of least privilege.
Patching can then be automated via SSM Patch Manager, ensuring consistency, compliance, and
operational efficiency.
Reference:
SSM Managed Instance Setup
AmazonSSMManagedInstanceCore Policy
Patching EC2 with SSM
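
A minimal boto3 sketch of option C; the role name and instance ID are placeholders:

```python
import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm", region_name="us-east-1")

# Attach the AWS managed policy to the role that is already associated with
# the instance profile.
iam.attach_role_policy(
    RoleName="existing-ec2-app-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Once the SSM Agent registers the instance as a managed node, run the
# Patch Manager document against it (Scan or Install).
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
)
```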

Question 12

A solutions architect is creating a website that will be hosted from an Amazon S3 bucket. The website
must support secure browser connections (HTTPS).
Which combination of actions must the solutions architect take to meet this requirement? (Select
TWO.)

  • A. Create an Elastic Load Balancing (ELB) load balancer. Configure the load balancer to direct traffic to the S3 bucket.
  • B. Create an Amazon CloudFront distribution. Set the S3 bucket as an origin.
  • C. Configure the Elastic Load Balancing (ELB) load balancer with an SSL/TLS certificate.
  • D. Configure the Amazon CloudFront distribution with an SSL/TLS certificate.
  • E. Configure the S3 bucket with an SSL/TLS certificate.
Answer:

B, D

Explanation:
To serve a static website hosted in Amazon S3 over HTTPS, you must use Amazon CloudFront because
S3 does not natively support HTTPS for static website endpoints.
Steps to meet the HTTPS requirement:
B. Create a CloudFront distribution and configure the S3 bucket as the origin. This enables global
edge caching and performance optimization.
D. Attach an SSL/TLS certificate (typically from AWS Certificate Manager) to the CloudFront
distribution to handle HTTPS connections.
S3 buckets used as static website hosts only support HTTP directly. While S3 supports HTTPS for REST
API access, it does not support HTTPS on static website endpoints.
This setup aligns with security best practices and supports the Security and Operational Excellence
pillars of the AWS Well-Architected Framework.
Reference:
Hosting a static website using Amazon S3 and CloudFront
CloudFront + HTTPS with ACM
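
A minimal boto3 sketch of options B and D; the domain, bucket, and certificate ARN are placeholders, and the cache policy ID refers to the AWS managed CachingOptimized policy:

```python
import boto3

# ACM certificates used with CloudFront must be issued in us-east-1.
cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "static-site-2025-01",
    "Comment": "Static website served over HTTPS",
    "Enabled": True,
    "Aliases": {"Quantity": 1, "Items": ["www.example.com"]},
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-site-origin",
        "DomainName": "my-site-bucket.s3.amazonaws.com",  # placeholder bucket
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-site-origin",
        "ViewerProtocolPolicy": "redirect-to-https",  # force HTTPS for viewers
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized
    },
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example-cert",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
})
```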

Question 13

A company runs an enterprise resource planning (ERP) system on Amazon EC2 instances in a single
AWS Region. Users connect to the ERP system by using a public API that is hosted on the EC2
instances. International users report slow API response times from their data centers.
A solutions architect needs to improve API response times for the international users.
Which solution will meet these requirements MOST cost-effectively?

  • A. Set up an AWS Direct Connect connection that has a public virtual interface (VIF) to connect each user's data center to the EC2 instances. Create a Direct Connect gateway for the ERP system API to route user API requests.
  • B. Deploy Amazon API Gateway endpoints in multiple Regions. Use Amazon Route 53 latency-based routing to route requests to the nearest endpoint. Configure a VPC peering connection between the Regions to connect to the ERP system.
  • C. Set up AWS Global Accelerator. Configure listeners for the necessary ports. Configure endpoint groups for the appropriate Regions to distribute traffic. Create an endpoint in each group for the API.
  • D. Use AWS Site-to-Site VPN to establish dedicated VPN tunnels between multiple Regions and user networks. Route traffic to the API through the VPN connections.
Answer:

C

Explanation:
AWS Global Accelerator improves the performance and availability of applications by directing user
traffic through the AWS global network of edge locations using anycast IP addresses. It reduces
latency and jitter for global users accessing applications in a single Region.
Why this works:
Global Accelerator routes user requests to the nearest AWS edge location using AWS’s high-
performance backbone network.
It then forwards traffic to the optimal endpoint — in this case, the public API hosted on EC2.
This is much more cost-effective and requires less operational complexity than deploying and
maintaining multiple API Gateway endpoints across regions (Option B), or setting up Direct Connect
links for every international location (Option A).
Option C requires no application change and is designed specifically for latency improvement and
high availability.
Reference:
AWS Global Accelerator Documentation
Use Cases for Global Accelerator
Performance Improvements for Global Users
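
A minimal boto3 sketch of option C; the names and the load balancer ARN in front of the EC2 instances are placeholders:

```python
import boto3

# Global Accelerator is a global service; its API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="erp-api-accelerator", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Endpoint group in the Region that hosts the ERP API; an ALB/NLB ARN or EC2
# instance IDs can be used as endpoints.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/erp-api/abc123",
        "Weight": 128,
    }],
)
```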

Question 14

A solutions architect needs to design a solution for a high performance computing (HPC) workload.
The solution must include multiple Amazon EC2 instances. Each EC2 instance requires 10 Gbps of
bandwidth individually for single-flow traffic. The EC2 instances require an aggregate throughput of
100 Gbps of bandwidth across all EC2 instances. Communication between the EC2 instances must
have low latency.
Which solution will meet these requirements?

  • A. Place the EC2 instances in a single subnet of a VPC. Configure a cluster placement group. Ensure that the latest Elastic Fabric Adapter (EFA) drivers are installed on the EC2 instances with a supported operating system.
  • B. Place the EC2 instances in multiple subnets in a single VPC. Configure a spread placement group. Ensure that the EC2 instances support Elastic Network Adapters (ENAs) and that the drivers are updated on each instance operating system.
  • C. Place the EC2 instances in multiple VPCs. Use AWS Transit Gateway to route traffic between the VPCs. Ensure that the latest Elastic Fabric Adapter (EFA) drivers are installed on the EC2 instances with a supported operating system.
  • D. Place the EC2 instances in multiple subnets across multiple Availability Zones. Configure a cluster placement group. Ensure that the EC2 instances support Elastic Network Adapters (ENAs) and that the drivers are updated on each instance operating system.
Answer:

A

Explanation:
HPC workloads require high-throughput, low-latency networking, especially for tightly-coupled
applications like weather modeling, genomics, or real-time rendering.
A cluster placement group places instances in the same Availability Zone and on physically connected
hardware, reducing network latency and increasing throughput.
Elastic Fabric Adapter (EFA) is a network device for EC2 instances that enables low-latency, high-
throughput networking using OS-bypass technology, ideal for tightly-coupled HPC applications.
Each instance can support single-flow 10 Gbps bandwidth using EFA, and collectively, the cluster can
achieve up to 100 Gbps aggregate throughput when properly configured.
This solution supports the Performance Efficiency and Resilience design principles and is a standard
AWS-recommended pattern for HPC.
Reference:
EC2 Placement Groups
Elastic Fabric Adapter Overview
Best Practices for HPC on AWS
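
A minimal boto3 sketch of option A; the AMI, subnet, security group, and instance count are placeholders, and the instance type must be one that supports EFA:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement group keeps the instances close together in one AZ.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch EFA-capable instances into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",          # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-0aaa1111",   # single subnet, single AZ
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
# The EFA drivers/software must be installed on a supported OS image for the
# instances to use OS-bypass networking.
```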

Question 15

A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an
Amazon EC2 Auto Scaling group with scheduled scaling actions. However, the capacity does not
always increase at the scheduled times, and instances terminate many times a day. A solutions
architect must ensure that the instances launch on time and have fewer interruptions.
Which action will meet these requirements?

  • A. Specify the capacity-optimized allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
  • B. Specify the capacity-optimized allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
  • C. Specify the lowest-price allocation strategy for Spot Instances. Add more instance types to the Auto Scaling group.
  • D. Specify the lowest-price allocation strategy for Spot Instances. Increase the size of the instances in the Auto Scaling group.
Answer:

A

Explanation:
Spot Instances can be interrupted when AWS needs the capacity back. To reduce interruptions and
improve availability, AWS provides the capacity-optimized allocation strategy.
Capacity-optimized strategy launches Spot Instances from the most available Spot capacity pools
instead of the lowest-priced ones, reducing interruption rates.
By adding multiple instance types (e.g., using Instance Type Flexibility), the Auto Scaling group can
launch instances in a broader set of pools, improving the chance that capacity is available.
Scheduled scaling actions combined with a diverse set of instances under the capacity-optimized
strategy ensure higher resilience and better timing for instance launches.
This approach directly supports the Resiliency design principle in the AWS Well-Architected
Framework.
Reference:
Best Practices for EC2 Spot Instances
Capacity-Optimized Allocation Strategy
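
A minimal boto3 sketch of option A; the launch template, subnets, sizes, and instance types are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Auto Scaling group that runs entirely on Spot, diversified across several
# instance types, using the capacity-optimized allocation strategy.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spot-app-asg",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "spot-app-template",
                "Version": "$Latest",
            },
            # More instance types means more Spot capacity pools to draw from.
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m6i.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,   # 100% Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```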
