Questions for the SAA-C03 were updated on: Dec 01, 2025
A developer creates a web application that runs on Amazon EC2 instances behind an Application
Load Balancer (ALB). The instances are in an Auto Scaling group. The developer reviews the
deployment and notices some suspicious traffic to the application. The traffic is malicious and is
coming from a single public IP address. A solutions architect must block the public IP address.
Which solution will meet this requirement?
D
Explanation:
When an application is fronted by an Application Load Balancer (ALB) and malicious traffic is
detected from a specific IP, the correct way to block the IP is by using AWS WAF (Web Application
Firewall).
With AWS WAF, you can create an IP Set to include the offending IP address or range.
Then create a Web ACL (Access Control List) with a rule set to BLOCK requests from that IP set.
Finally, associate the Web ACL with the ALB.
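For illustration only, these steps can be sketched with boto3; the IP address, Region, names, and ARNs below are placeholders, and a REGIONAL scope is assumed because the protected resource is an ALB:

```python
import boto3

# WAFv2 client in the Region where the ALB runs (placeholder Region).
waf = boto3.client("wafv2", region_name="us-east-1")

# 1. Create an IP set that contains the offending address.
ip_set = waf.create_ip_set(
    Name="blocked-ips",
    Scope="REGIONAL",                      # REGIONAL scope is used for ALBs
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],
)

# 2. Create a Web ACL with a rule that blocks requests matching the IP set.
web_acl = waf.create_web_acl(
    Name="block-malicious-ip",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "BlockBadIp",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "BlockBadIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-malicious-ip",
    },
)

# 3. Associate the Web ACL with the ALB (placeholder load balancer ARN).
waf.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)
```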
Security groups (Option A) cannot block a specific IP because they support only allow rules and have
no explicit deny capability.
Amazon Detective (Option B) is a security analysis and investigation tool; it doesn’t block traffic.
AWS RAM (Option C) is for resource sharing across accounts, not for blocking IPs.
This approach aligns with AWS’s Security Pillar of the Well-Architected Framework and is fully
managed, with minimal operational effort.
Reference:
Using AWS WAF with an Application Load Balancer
Block IPs with AWS WAF
A large financial services company uses Amazon ElastiCache (Redis OSS) for its new application that
has a global user base. A solutions architect must develop a caching solution that will be available
across AWS Regions and include low-latency replication and failover capabilities for disaster recovery
(DR). The company's security team requires the encryption of cross-Region data transfers.
Which solution meets these requirements with the LEAST amount of operational effort?
B
Explanation:
The optimal solution for low-latency global caching with disaster recovery and cross-Region
replication is to use Amazon ElastiCache Global Datastore for Redis OSS.
A Global Datastore enables fully managed cross-Region replication and supports automatic failover
by promoting read replica clusters in another Region.
Global Datastore encrypts cross-Region replication traffic in transit and supports encryption at rest, meeting compliance and security requirements.
It's a fully managed AWS-native feature, reducing operational effort compared to setting up DMS-
based or snapshot-based replication manually.
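As a rough boto3 sketch (cluster IDs and Regions are placeholders), promoting an existing Redis replication group to a Global Datastore might look like this:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Promote an existing Redis replication group (placeholder ID) to become the
# primary of a new Global Datastore; AWS prepends an auto-generated prefix
# to the suffix to form the full Global Datastore ID.
response = elasticache.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="global-cache",
    PrimaryReplicationGroupId="my-redis-cluster",
)

global_id = response["GlobalReplicationGroup"]["GlobalReplicationGroupId"]

# A secondary cluster is then created in another Region and joined to the
# Global Datastore for cross-Region replication and DR failover.
boto3.client("elasticache", region_name="eu-west-1").create_replication_group(
    ReplicationGroupId="my-redis-cluster-dr",
    ReplicationGroupDescription="Secondary cluster for disaster recovery",
    GlobalReplicationGroupId=global_id,
)
```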
Other options (A, C, D):
Require manual setup and management (e.g., custom DMS pipelines, snapshots).
Do not offer real-time replication or failover without manual intervention.
Reference:
ElastiCache Global Datastore for Redis
A company operates a data lake in Amazon S3 that stores large datasets in multiple formats. The
company has an application that retrieves and processes subsets of data from multiple objects in the
data lake based on filtering criteria. For each data query, the application currently downloads the
entire S3 object and performs
transformations. The current process requires a large amount of transformation time.
The company wants a solution that will give the application the ability to query and filter directly on
S3 objects without downloading the objects.
Which solution will meet these requirements?
A
Explanation:
The best solution to query and filter S3 data directly without downloading the full object is to use
Amazon Athena.
Amazon Athena is an interactive query service that lets you use SQL to analyze structured, semi-
structured, and unstructured data directly in Amazon S3, without needing to move or transform the
data.
It supports formats like CSV, JSON, ORC, Parquet, and Avro and integrates with AWS Glue Data
Catalog for schema management.
Athena is serverless, meaning there is no infrastructure to manage, and it is billed by the amount of
data scanned per query, which keeps it cost-effective.
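For example, a minimal boto3 sketch of a filtered query (database, table, and bucket names are placeholders) could look like the following:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Run a filtered SQL query directly against data in S3. The table would
# typically be defined in the AWS Glue Data Catalog.
query = athena.start_query_execution(
    QueryString="SELECT id, amount FROM transactions WHERE region = 'EMEA'",
    QueryExecutionContext={"Database": "datalake_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Check the query state; results are written to the output location in S3.
execution = athena.get_query_execution(QueryExecutionId=query["QueryExecutionId"])
print(execution["QueryExecution"]["Status"]["State"])
```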
Option B (EMR) is heavier and requires managing a cluster.
Option C (API Gateway) is not suited for querying S3 datasets.
Option D (ElastiCache) is a memory store, not a query engine.
Reference:
What is Amazon Athena?
A company runs a critical Amazon RDS for MySQL DB instance in a single Availability Zone. The
company must improve the availability of the DB instance.
Which solution will meet this requirement?
C
Explanation:
To improve availability and fault tolerance of an Amazon RDS instance, the recommended approach
is to configure a Multi-AZ deployment.
Multi-AZ deployments for RDS automatically replicate data to a standby instance in a different
Availability Zone (AZ).
If a failure occurs in the primary AZ (due to hardware, network, or power), RDS will automatically
failover to the standby instance with minimal downtime, without administrative intervention.
This is an AWS-managed feature and does not require application modification.
It does not provide scalability or load balancing; it's designed for high availability and resiliency.
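As a simple illustration (the DB identifier is a placeholder), enabling Multi-AZ on an existing instance is a single API call:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Convert the existing single-AZ DB instance to Multi-AZ. RDS provisions a
# synchronous standby in another Availability Zone; applying immediately can
# cause a brief performance impact while the standby is created.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```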
Options A, B, and D are incorrect:
A refers to cross-Region, which is used for disaster recovery, not high availability.
B and D with SQS do not address high availability directly for the RDS instance; queues help decouple
systems but do not make a database more resilient.
Reference:
Amazon RDS Multi-AZ Deployments
A finance company hosts a data lake in Amazon S3. The company receives financial data records over
SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2
instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by
a cron job that runs on the same instance. The SFTP server is reachable at the DNS name
sftp.example.com through Amazon Route 53.
What should a solutions architect do to improve the reliability and scalability of the SFTP solution?
B
Explanation:
The optimal way to improve reliability and scalability of SFTP on AWS is to use AWS Transfer Family
(for SFTP). It provides a fully managed SFTP server integrated with Amazon S3.
No EC2 instances or infrastructure management is required.
AWS Transfer Family supports custom DNS domains (e.g., sftp.example.com) and allows integration
with existing authentication mechanisms like LDAP, AD, or custom identity providers.
Files are uploaded directly to S3, eliminating the need for cron jobs to move data from EC2 to S3.
Built-in high availability and scalability remove the burden of managing infrastructure.
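A minimal boto3 sketch of the managed SFTP setup (the role ARN, bucket, and user name are placeholders; a service-managed identity provider is assumed):

```python
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

# Create a managed SFTP endpoint that stores files directly in Amazon S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)

# Map an SFTP user to a prefix in the data lake bucket (placeholder values).
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-a",
    Role="arn:aws:iam::111122223333:role/TransferS3AccessRole",
    HomeDirectory="/my-data-lake-bucket/incoming/partner-a",
)

# Route 53 can then point sftp.example.com at the server's endpoint so the
# existing DNS name continues to work.
```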
Other options:
A and D still require manual scaling, server maintenance, and cron jobs.
C (Storage Gateway) is used for hybrid file access, not for replacing an SFTP server.
Reference:
AWS Transfer Family for SFTP
A company recently migrated its application to a VPC on AWS. An AWS Site-to-Site VPN connection
connects the company's on-premises network to the VPC. The application retrieves customer data
from another system that resides on premises. The application uses an on-premises DNS server to
resolve domain records. After the migration, the application is not able to connect to the customer
data because of name resolution errors.
Which solution will give the application the ability to resolve the internal domain names?
B
Explanation:
When AWS workloads must resolve DNS names from on-premises systems over a hybrid network
(like VPN or Direct Connect), the best solution is to use Amazon Route 53 Resolver outbound
endpoints.
The outbound endpoint enables DNS queries to be forwarded from your VPC to on-premises DNS
servers.
You must also configure a Route 53 Resolver forwarding rule to define which domain names (e.g.,
corp.internal) should be forwarded to the specific on-premises DNS IPs.
This setup allows private DNS resolution from AWS to on-premises systems and is fully managed,
eliminating the need to run and maintain EC2-based DNS proxies (as in option A).
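A boto3 sketch of this setup (subnet, security group, and VPC IDs, the internal domain, and the on-premises DNS IP are placeholders):

```python
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

# 1. Create an outbound endpoint in the VPC so queries can leave AWS.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-1",
    Name="to-on-premises",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111bbb22222c"},
        {"SubnetId": "subnet-0ddd3333eee44444f"},
    ],
)

# 2. Forward queries for the internal domain to the on-premises DNS server.
rule = resolver.create_resolver_rule(
    CreatorRequestId="forward-corp-internal",
    Name="corp-internal",
    RuleType="FORWARD",
    DomainName="corp.internal",
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# 3. Associate the forwarding rule with the application's VPC.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```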
Options C and D are incorrect:
C is not DNS-specific and doesn’t solve name resolution.
D misuses a public hosted zone for a private DNS domain.
Reference:
Route 53 Resolver Outbound Endpoints
A company runs its critical storage application in the AWS Cloud. The application uses Amazon S3 in
two AWS Regions. The company wants the application to send remote user data to the nearest S3
bucket with no public network congestion. The company also wants the application to fail over with
the least amount of management of Amazon S3.
Which solution will meet these requirements?
D
Explanation:
To meet the requirement of low-latency global access and failover with minimal management, the
best solution is to use Amazon S3 Multi-Region Access Points (MRAP) with Cross-Region Replication
(CRR).
Multi-Region Access Points provide a global endpoint that automatically routes requests to the
nearest AWS Region using the AWS Global Accelerator infrastructure. This avoids public internet
congestion and ensures low-latency access.
When combined with S3 Cross-Region Replication, data is automatically synchronized between
buckets in different Regions, enabling active-active setup.
In case of a Regional failure, S3 Multi-Region Access Points handle failover automatically, requiring
no manual intervention.
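A rough boto3 sketch of creating the Multi-Region Access Point (account ID and bucket names are placeholders; the buckets and CRR rules are assumed to exist already, and the MRAP control-plane API is called through the us-west-2 endpoint):

```python
import boto3

s3control = boto3.client("s3control", region_name="us-west-2")

# Create a Multi-Region Access Point over existing buckets in two Regions.
s3control.create_multi_region_access_point(
    AccountId="111122223333",
    Details={
        "Name": "global-user-data",
        "Regions": [
            {"Bucket": "user-data-us-east-1"},
            {"Bucket": "user-data-eu-west-1"},
        ],
    },
)

# The application then reads and writes through the MRAP alias endpoint,
# and S3 routes each request to the nearest Region over the AWS network.
```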
Options A and C require manual management and configuration of endpoints per Region. Option B
misrepresents MRAP; it is designed for active-active access, not active-passive.
Reference:
S3 Multi-Region Access Points
S3 Cross-Region Replication
A disaster response team is using drones to collect images of recent storm damage. The response
team's laptops lack the storage and compute capacity to transfer the images and process the data.
While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage,
network connectivity is intermittent and unreliable. The images need to be processed to evaluate
the damage.
What should a solutions architect recommend?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
AWS Snowball Edge is specifically designed for use cases that involve limited or unreliable network
connectivity. It enables data transfer and local compute processing at edge locations.
It comes in two options: Snowball Edge Storage Optimized and Snowball Edge Compute Optimized.
The Compute Optimized model allows the disaster response team to both store images locally and
process data on the device using Amazon EC2-compatible compute resources.
This removes the need for constant network connectivity. After processing, the device can be
shipped back to AWS, where data is uploaded to S3.
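A hedged boto3 sketch of ordering the device (the address ID, role ARN, and bucket name are placeholders that would be created beforehand):

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# Order a Snowball Edge Compute Optimized device for an import job so the
# team can store and process images locally, then ship the device back.
snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_C",                    # Compute Optimized model
    Resources={
        "S3Resources": [{"BucketArn": "arn:aws:s3:::storm-damage-images"}],
    },
    AddressId="ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b",
    RoleARN="arn:aws:iam::111122223333:role/SnowballImportRole",
    ShippingOption="NEXT_DAY",
    Description="Drone image collection and local processing",
)
```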
Other options fail because:
SQS is not suitable for large binary image data (Option B).
Kinesis Data Firehose requires steady connectivity (Option C).
Storage Gateway is designed for hybrid cloud environments with an ongoing connection, not rugged
field use (Option D).
Reference:
AWS Snowball Edge Overview
Snowball Edge Use Cases
A company uses Amazon EC2 instances behind an Application Load Balancer (ALB) to serve content
to users. The company uses Amazon Elastic Block Store (Amazon EBS) volumes to store data.
The company needs to encrypt data in transit and at rest.
Which combination of services will meet these requirements? (Select TWO.)
C, E
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
To secure data in transit, the company should use AWS Certificate Manager (ACM) to provide SSL/TLS
certificates for the Application Load Balancer. ACM allows easy provisioning, management, and
renewal of public and private certificates, ensuring secure communication between users and
applications.
To secure data at rest, AWS Key Management Service (KMS) is used to manage encryption keys for
Amazon EBS volumes. EBS integrates with AWS KMS, allowing for server-side encryption using KMS-
managed keys (SSE-KMS), thus meeting the encryption at rest requirement.
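As a short illustration (all ARNs, the key alias, and the Availability Zone are placeholders), the two encryption requirements map to an HTTPS listener with an ACM certificate and a KMS-encrypted EBS volume:

```python
import boto3

# Encryption in transit: HTTPS listener on the ALB with an ACM certificate.
elbv2 = boto3.client("elbv2", region_name="us-east-1")
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/abc123",
    }],
)

# Encryption at rest: EBS volume encrypted with a KMS key.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/ebs-data-key",
)
```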
Other options:
GuardDuty (A) is for threat detection, not encryption.
AWS Shield (B) protects against DDoS attacks, not encryption.
Secrets Manager (D) manages credentials, not general data encryption.
This solution follows the AWS Well-Architected Framework – Security Pillar.
Reference:
Encrypting EBS volumes with KMS
Using ACM with ALB
A company wants a flexible compute solution that includes Amazon EC2 instances and AWS Fargate.
The company does not want to commit to multi-year contracts.
Which purchasing option will meet these requirements MOST cost-effectively?
B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
To optimize costs for both Amazon EC2 and AWS Fargate, the best option is a Compute Savings Plan
because it offers flexibility across instance families, Regions, and compute options including EC2,
AWS Fargate, and AWS Lambda.
Unlike EC2 Instance Savings Plans, which apply only to specific instance families, Compute Savings
Plans apply across multiple services.
Since the company does not want to commit to multi-year contracts or large upfront payments, the
1-year No Upfront Compute Savings Plan provides the greatest flexibility with no upfront capital
commitment, while still offering cost savings over On-Demand pricing.
This option also aligns with cost-optimization best practices by allowing for scalability and service
mix flexibility.
Reference:
AWS Compute Savings Plans
AWS Pricing Models
A company has a non-production application that runs on an Amazon EC2 instance. The EC2 instance
has an instance profile and an associated IAM role.
The company wants to automate patching for the EC2 instance.
Which solution will meet this requirement?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
To manage EC2 instances with AWS Systems Manager (SSM), the EC2 instance must be configured as
a managed instance by attaching an IAM role that has the AmazonSSMManagedInstanceCore
managed policy.
This policy allows:
SSM agent to register the instance with SSM
Perform actions like patching, automation, session management, inventory collection, etc.
Access to SSM endpoints (via internet or VPC endpoint if needed)
Since the EC2 instance already has an IAM role, the least operational overhead option is to attach the
required policy to the existing role (Option C). No need to create new IAM roles or users, which
simplifies management and adheres to the principle of least privilege.
Patching can then be automated via SSM Patch Manager, ensuring consistency, compliance, and
operational efficiency.
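A minimal sketch of Option C with boto3 (the role name is a placeholder for the existing role in the instance profile):

```python
import boto3

iam = boto3.client("iam")

# Attach the AWS managed policy to the instance's existing IAM role so the
# SSM Agent can register the instance as a managed node.
iam.attach_role_policy(
    RoleName="my-ec2-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Patch Manager can then patch the instance, for example on a schedule
# defined by a patch baseline and a maintenance window.
```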
Reference:
SSM Managed Instance Setup
AmazonSSMManagedInstanceCore Policy
Patching EC2 with SSM
A solutions architect is creating a website that will be hosted from an Amazon S3 bucket. The website
must support secure browser connections (HTTPS).
Which combination of actions must the solutions architect take to meet this requirement? (Select
TWO.)
B, D
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
To serve a static website hosted in Amazon S3 over HTTPS, you must use Amazon CloudFront because
S3 does not natively support HTTPS for static website endpoints.
Steps to meet HTTPS requirement:
B. Create a CloudFront distribution and configure the S3 bucket as the origin. This enables global
edge caching and performance optimization.
D. Attach an SSL/TLS certificate (typically from AWS Certificate Manager) to the CloudFront
distribution to handle HTTPS connections.
S3 buckets used as static website hosts only support HTTP directly. While S3 supports HTTPS for REST
API access, it does not support HTTPS on static website endpoints.
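A hedged boto3 sketch of the CloudFront side (bucket, alias, and certificate ARN are placeholders; the ACM certificate used by CloudFront must be issued in us-east-1, and the exact required fields of DistributionConfig may vary by use case):

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "static-site-2025-01",
        "Comment": "Static website served over HTTPS",
        "Enabled": True,
        "Aliases": {"Quantity": 1, "Items": ["www.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": "my-static-site.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",   # force HTTPS
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
        "ViewerCertificate": {
            "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example",
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        },
    }
)
```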
This setup aligns with security best practices and supports the Secure and Operational Excellence
pillars of the AWS Well-Architected Framework.
Reference:
Hosting a static website using Amazon S3 and CloudFront
CloudFront + HTTPS with ACM
A company runs an enterprise resource planning (ERP) system on Amazon EC2 instances in a single
AWS Region. Users connect to the ERP system by using a public API that is hosted on the EC2
instances. International users report slow API response times from their data centers.
A solutions architect needs to improve API response times for the international users.
Which solution will meet these requirements MOST cost-effectively?
C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
AWS Global Accelerator improves the performance and availability of applications by directing user
traffic through the AWS global network of edge locations using anycast IP addresses. It reduces
latency and jitter for global users accessing applications in a single Region.
Why this works:
Global Accelerator routes user requests to the nearest AWS edge location using AWS’s high-
performance backbone network.
It then forwards traffic to the optimal endpoint, in this case the public API hosted on the EC2 instances.
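For illustration, a boto3 sketch of the accelerator setup (the instance ID and Regions are placeholders; the Global Accelerator API is served from the us-west-2 endpoint):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# 1. Create the accelerator, which provides static anycast IP addresses.
accelerator = ga.create_accelerator(Name="erp-api", IpAddressType="IPV4", Enabled=True)

# 2. Add a listener for the API's port (TCP 443 assumed here).
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# 3. Point an endpoint group at the existing Region; the endpoint here is
# one of the EC2 instances that hosts the public API.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{"EndpointId": "i-0123456789abcdef0", "Weight": 128}],
)
```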
This is much more cost-effective and involves less operational complexity than deploying and
maintaining multiple API Gateway endpoints across Regions (Option B) or setting up Direct Connect
links for every international location (Option A).
Option C requires no application change and is designed specifically for latency improvement and
high availability.
Reference:
AWS Global Accelerator Documentation
Use Cases for Global Accelerator
Performance Improvements for Global Users
A solutions architect needs to design a solution for a high performance computing (HPC) workload.
The solution must include multiple Amazon EC2 instances. Each EC2 instance requires 10 Gbps of
bandwidth individually for single-flow traffic. The EC2 instances require an aggregate throughput of
100 Gbps of bandwidth across all EC2 instances. Communication between the EC2 instances must
have low latency.
Which solution will meet these requirements?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
HPC workloads require high-throughput, low-latency networking, especially for tightly-coupled
applications like weather modeling, genomics, or real-time rendering.
A cluster placement group places instances in the same Availability Zone and on physically connected
hardware, reducing network latency and increasing throughput.
Elastic Fabric Adapter (EFA) is a network device for EC2 instances that enables low-latency, high-
throughput networking using OS-bypass technology, ideal for tightly-coupled HPC applications.
Each instance can support single-flow 10 Gbps bandwidth using EFA, and collectively, the cluster can
achieve up to 100 Gbps aggregate throughput when properly configured.
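A boto3 sketch of this pattern (the AMI, subnet, security group, and instance type are placeholders; the instance type must support EFA):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Cluster placement group: instances are packed close together in one
# Availability Zone for low-latency, high-bandwidth networking.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# 2. Launch EFA-enabled instances into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=10,
    MaxCount=10,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",               # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
```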
This solution supports the Performance Efficiency and Resilience design principles and is a standard
AWS-recommended pattern for HPC.
Reference:
EC2 Placement Groups
Elastic Fabric Adapter Overview
Best Practices for HPC on AWS
A company has an application that runs only on Amazon EC2 Spot Instances. The instances run in an
Amazon EC2 Auto Scaling group with scheduled scaling actions. However, the capacity does not
always increase at the scheduled times, and instances terminate many times a day. A solutions
architect must ensure that the instances launch on time and have fewer interruptions.
Which action will meet these requirements?
A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract of Amazon Web Services (AWS)
Architect documents:
Spot Instances can be interrupted when AWS needs the capacity back. To reduce interruptions and
improve availability, AWS provides the capacity-optimized allocation strategy.
Capacity-optimized strategy launches Spot Instances from the most available Spot capacity pools
instead of the lowest-priced ones, reducing interruption rates.
By adding multiple instance types (e.g., using Instance Type Flexibility), the Auto Scaling group can
launch instances in a broader set of pools, improving the chance that capacity is available.
Scheduled scaling actions combined with a diverse set of instances under the capacity-optimized
strategy ensure higher resilience and better timing for instance launches.
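As a sketch with boto3 (the launch template name, subnets, sizes, and instance types are placeholders), the Auto Scaling group would combine multiple instance types with the capacity-optimized Spot allocation strategy:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spot-app-asg",
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0aaa1111bbb22222c,subnet-0ddd3333eee44444f",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "spot-app-template",
                "Version": "$Latest",
            },
            # A diverse set of instance types widens the pool of Spot capacity.
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m4.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,      # run entirely on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```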
This approach directly supports the Resiliency design principle in the AWS Well-Architected
Framework.
Reference:
Best Practices for EC2 Spot Instances
Capacity-Optimized Allocation Strategy