Amazon AWS Certified Solutions Architect - Professional (SAP-C02) exam practice questions
Questions for the SAP-C02 exam were updated on Dec 01, 2025
Page 1 out of 38. Viewing questions 1-15 out of 569
Question 1
A company runs AWS workloads that are integrated with software as a service (SaaS) applications. The company needs to analyze the SaaS applications to identify unused licenses. Which solution will meet this requirement with the LEAST operational overhead?
A. Use AWS License Manager automated discovery to retrieve audit logs from the SaaS applications. Use Amazon Athena to analyze the data and to identify unused SaaS licenses.
B. Create an AWS Lambda function to retrieve audit logs from the SaaS applications and to store the data in Amazon S3. Use Amazon EMR to analyze the data and to identify unused SaaS licenses.
C. Use AWS AppFabric to ingest audit logs from the SaaS applications into Amazon S3. Use Amazon Athena to analyze the data and to identify unused SaaS licenses.
D. Use AWS App Runner to ingest audit logs from the SaaS applications into Amazon S3. Use Amazon EMR to analyze the data and to identify unused SaaS licenses.
Answer: C
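For illustration of the Athena half of option C, here is a minimal sketch that submits a query over AppFabric's normalized audit logs in S3. The database name, table name, column names, and S3 output location are hypothetical; the real schema depends on how the AppFabric output is cataloged in AWS Glue.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical query: list users with no SaaS activity in the last 90 days.
# Database, table, and column names are placeholders for the cataloged
# AppFabric output.
query = """
SELECT user_email, MAX(event_time) AS last_activity
FROM saas_audit_logs
GROUP BY user_email
HAVING MAX(event_time) < date_add('day', -90, current_date)
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "saas_license_analysis"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```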
Question 2
A company wants to use an Amazon S3 bucket for its data scientists to store documents. The company uses AWS IAM Identity Center to authenticate users. The company created an IAM Identity Center group for the data scientists. The company wants to grant the data scientists access to only their specific folders in the S3 bucket. The company also wants to know which documents each data scientist accessed. Which combination of steps will meet these requirements? (Select TWO.)
A. Create a custom IAM Identity Center permission set to grant the data scientists access to an S3 bucket prefix that matches their username tag. Use a policy to limit access to paths with the "${aws:PrincipalTag/userName}/" condition.
B. Create an IAM Identity Center role for the data scientist group that has Amazon S3 read access and write access. Add an S3 bucket policy that allows access to the IAM Identity Center role.
C. Configure AWS CloudTrail to log S3 data events and deliver the logs to an S3 bucket. Use Amazon Athena to run queries on the CloudTrail logs in Amazon S3.
D. Configure AWS CloudTrail to log S3 management events to Amazon CloudWatch. Use the Amazon Athena CloudWatch connector to query the logs.
E. Enable S3 access logging to the EMR File System (EMRFS). Create an AWS Glue job to run queries on the access log data in EMRFS.
Answer: A, C
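As a minimal sketch of option A, the inline policy below limits each data scientist to a folder named after the userName principal tag and could be attached to the Identity Center permission set. It assumes Identity Center attribute-based access control passes the user's userName attribute as the aws:PrincipalTag/userName session tag; the bucket name and both ARNs are placeholders.

```python
import json
import boto3

sso_admin = boto3.client("sso-admin")

# Restrict each data scientist to the prefix that matches their userName tag.
# Bucket name and ARNs are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-data-science-bucket",
            "Condition": {
                "StringLike": {"s3:prefix": "${aws:PrincipalTag/userName}/*"}
            },
        },
        {
            "Sid": "ReadWriteOwnFolder",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-data-science-bucket/${aws:PrincipalTag/userName}/*",
        },
    ],
}

sso_admin.put_inline_policy_to_permission_set(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    InlinePolicy=json.dumps(policy),
)
```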
Question 3
A company has multiple applications that run on Amazon EC2 instances in private subnets in a VPC. The company has deployed multiple NAT gateways in multiple Availability Zones for internet access. The company wants to block certain websites from being accessed through the NAT gateways. The company also wants to identify the internet destinations that the EC2 instances access. The company has already created VPC flow logs for the NAT gateways' elastic network interfaces. Which solution will meet these requirements?
A. Use Amazon CloudWatch Logs Insights to query the logs and determine the internet destinations that the EC2 instances communicate with. Use AWS Network Firewall to block the websites.
B. Use Amazon CloudWatch Logs Insights to query the logs and determine the internet destinations that the EC2 instances communicate with. Use AWS WAF to block the websites.
C. Use the BytesInFromSource and BytesInFromDestination Amazon CloudWatch metrics to determine the internet destinations that the EC2 instances communicate with. Use AWS Network Firewall to block the websites.
D. Use the BytesInFromSource and BytesInFromDestination Amazon CloudWatch metrics to determine the internet destinations that the EC2 instances communicate with. Use AWS WAF to block the websites.
Answer: A
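A sketch of the analysis half of option A: a CloudWatch Logs Insights query over the existing NAT gateway flow logs that ranks destinations by bytes sent. The log group name is a placeholder, and the filter that excludes the 10.0.0.0/8 range is only an example; adjust it to the VPC's actual CIDRs.

```python
import time
import boto3

logs = boto3.client("logs")

# Rank internet destinations by bytes sent through the NAT gateway ENIs.
# The log group name is a placeholder for wherever the flow logs are delivered.
query = r"""
fields @timestamp, srcAddr, dstAddr, dstPort, bytes
| filter dstAddr not like /^10\./
| stats sum(bytes) as bytesOut by dstAddr, dstPort
| sort bytesOut desc
| limit 25
"""

start = logs.start_query(
    logGroupName="/vpc/nat-gateway-flow-logs",
    startTime=int(time.time()) - 24 * 3600,
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes, then print each result row.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(2)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```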
Question 4
A company is using AWS to develop and manage its production web application. The application includes an Amazon API Gateway HTTP API that invokes an AWS Lambda function. The Lambda function processes and then stores data in a database. The company wants to implement user authorization for the web application in an integrated way. The company already uses a third-party identity provider that issues OAuth tokens for the company's other applications. Which solution will meet these requirements?
A. Integrate the company's third-party identity provider with API Gateway. Configure an API Gateway Lambda authorizer to validate tokens from the identity provider. Require the Lambda authorizer on all API routes. Update the web application to get tokens from the identity provider and include the tokens in the Authorization header when calling the API Gateway HTTP API.
B. Integrate the company's third-party identity provider with AWS Directory Service. Configure Directory Service as an API Gateway authorizer to validate tokens from the identity provider. Require the Directory Service authorizer on all API routes. Configure AWS IAM Identity Center as a SAML 2.0 identity provider. Configure the web application as a custom SAML 2.0 application.
C. Integrate the company's third-party identity provider with AWS IAM Identity Center. Configure API Gateway to use IAM Identity Center for zero-configuration authentication and authorization. Update the web application to retrieve AWS STS tokens from IAM Identity Center and include the tokens in the Authorization header when calling the API Gateway HTTP API.
D. Integrate the company's third-party identity provider with AWS IAM Identity Center. Configure IAM users with permissions to call the API Gateway HTTP API. Update the web application to extract request parameters from the IAM users and include the parameters in the Authorization header when calling the API Gateway HTTP API.
Answer: A
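A skeletal Lambda authorizer in the spirit of option A, using the HTTP API simple response format. The issuer, audience, and the PyJWT-based JWKS validation are assumptions that would have to match the company's actual third-party identity provider.

```python
import jwt  # PyJWT, assumed to be packaged with the function
from jwt import PyJWKClient

# Hypothetical identity provider settings.
ISSUER = "https://idp.example.com/"
AUDIENCE = "company-web-app"  # hypothetical audience value
jwks_client = PyJWKClient(ISSUER + ".well-known/jwks.json")


def handler(event, context):
    """HTTP API Lambda authorizer that validates an OAuth/JWT bearer token."""
    auth_header = event.get("headers", {}).get("authorization", "")
    if not auth_header.lower().startswith("bearer "):
        return {"isAuthorized": False}
    token = auth_header.split(" ", 1)[1]

    try:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )
    except jwt.PyJWTError:
        return {"isAuthorized": False}

    # Pass selected claims to the backend through the authorizer context.
    return {"isAuthorized": True, "context": {"sub": claims.get("sub", "")}}
```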
Question 5
A company needs to migrate a 2 TB MySQL database from an on-premises data center to an Amazon Aurora cluster. The database receives hundreds of updates every minute. The on-premises database server is not accessible through the internet. The migration solution must ensure that no data is lost between the start of migration and cutover. The migration must begin as soon as possible and must minimize downtime. Which solution will meet these requirements?
A. Create an AWS Site-to-Site VPN connection between the on-premises data center and the VPC that hosts the Aurora cluster. Create a dump of the on-premises database by using mysqldump. Upload the dump to Amazon S3 by using multipart upload. Use an Amazon EC2 instance with appropriate permissions to import the dump to the Aurora cluster.
B. Create an AWS Site-to-Site VPN connection between the on-premises data center and the VPC that hosts the Aurora cluster. Specify the on-premises database as the source endpoint in AWS DMS. Specify the Aurora cluster as the target endpoint. Configure a DMS task with ongoing replication.
C. Set up an AWS Direct Connect connection between the on-premises data center and the VPC that hosts the Aurora cluster. Create a dump of the on-premises database by using mysqldump. Upload the dump to Amazon S3 by using multipart upload. Use an Amazon EC2 instance with appropriate permissions to import the dump to the Aurora cluster. Set up replication between the data center and the Aurora cluster.
D. Set up an AWS Direct Connect connection between the on-premises data center and the VPC that hosts the Aurora cluster. Specify the on-premises database as the source endpoint in AWS DMS. Specify the Aurora cluster as the target endpoint. Configure a DMS task with ongoing replication.
Answer: B
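A sketch of the DMS task from option B. It assumes the Site-to-Site VPN, the source and target endpoints, and the replication instance already exist; every ARN and the table-mapping rule are placeholders.

```python
import json
import boto3

dms = boto3.client("dms")

# Migrate the full 2 TB data set first, then keep replicating changes (CDC)
# until cutover. All ARNs are placeholders.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-migration",
    SourceEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:eu-west-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```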
Question 6
A company uses AWS CloudFormation to deploy its infrastructure. The company is concerned that data stored in Amazon RDS databases or Amazon EBS volumes might be deleted if a production CloudFormation stack is deleted. How can the company prevent users from accidentally deleting data in this way?
A. Modify the CloudFormation templates to add a DeletionPolicy attribute with a Retain deletion policy to RDS resources and EBS resources.
B. Configure a stack policy that disallows the deletion of RDS resources and EBS resources.
C. Modify IAM policies to deny the deletion of RDS resources and EBS resources that are tagged with an aws:cloudformation:stack-name tag.
D. Use AWS Config rules to prevent the deletion of RDS resources and EBS resources.
Answer: A
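To illustrate option A in the same language as the other sketches, here is a JSON-form template fragment built in Python and deployed with boto3. The resource properties are trimmed placeholders; the point is the resource-level DeletionPolicy attribute, which tells CloudFormation to keep the RDS instance and EBS volume if the stack is deleted.

```python
import json
import boto3

# Minimal template fragment with DeletionPolicy: Retain on an RDS instance
# and an EBS volume. Properties are abbreviated placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "100",
                "MasterUsername": "admin",
                "ManageMasterUserPassword": True,
            },
        },
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Retain",
            "Properties": {"Size": 500, "AvailabilityZone": "eu-west-1a"},
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="production-data-stack",
    TemplateBody=json.dumps(template),
)
```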
Question 7
A solutions architect is designing a solution to automatically provision new AWS accounts in an organization in AWS Organizations. The solutions architect has enabled AWS Control Tower for the organization. The solution must enable security controls and create resources such as billing alarms after creating new AWS accounts. The solution must be scalable. Which solution meets these requirements with the LEAST operational overhead?
A. Create a new AWS account in the organization. Deploy a blueprint to the new AWS account. Define a blueprint that creates resources such as billing alarms. Configure AWS Control Tower to apply the blueprint after creating the new AWS account
B. Create a new AWS account in the organization. Establish trusted access to the account by using an AWS CloudFormation template. Enroll the new AWS account into AWS Control Tower. Deploy a blueprint to the new AWS account by using AWS Control Tower to provision resources.
C. Use Account Factory to initiate the creation of a new AWS account by using AWS Service Catalog. Configure a lifecycle event in AWS Control Tower that invokes an AWS Lambda function. Configure the Lambda function to deploy an AWS CloudFormation template by using the AWSControlTowerExecution role.
D. Use Account Factory to initiate the creation of a new AWS account by using AWS Control Tower. Define a blueprint that creates resources such as billing alarms. Configure AWS Control Tower to apply the blueprint after creating the new AWS account.
Answer: C
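A hedged sketch of the Lambda function from option C. It is triggered by the Control Tower CreateManagedAccount lifecycle event delivered through EventBridge, assumes the documented event shape for the account ID field, and uses a placeholder stack name and template URL for the baseline resources such as billing alarms.

```python
import boto3

sts = boto3.client("sts")


def handler(event, context):
    """Runs on the AWS Control Tower CreateManagedAccount lifecycle event."""
    status = event["detail"]["serviceEventDetails"]["createManagedAccountStatus"]
    account_id = status["account"]["accountId"]

    # Assume the AWSControlTowerExecution role that Control Tower creates in
    # every enrolled account, then deploy the baseline stack there.
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/AWSControlTowerExecution",
        RoleSessionName="baseline-deployment",
    )["Credentials"]

    cfn = boto3.client(
        "cloudformation",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Hypothetical template that creates billing alarms and other baseline resources.
    cfn.create_stack(
        StackName="account-baseline",
        TemplateURL="https://example-bucket.s3.amazonaws.com/baseline.yaml",
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
```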
Question 8
A company stores a static website on Amazon S3. AWS Lambda functions retrieve content from an S3 bucket and serve the content as a website. An Application Load Balancer (ALB) directs incoming traffic to the Lambda functions. An Amazon CloudFront distribution routes requests to the ALB. The company has set up an AWS Certificate Manager (ACM) certificate on the HTTPS listener of the ALB. The company needs all users to communicate with the website through HTTPS. HTTP users must not receive an error. Which combination of steps will meet these requirements? (Select THREE.)
A. Configure the ALB with a TCP listener on port 443 for passthrough to backend systems.
B. Create an S3 bucket policy that denies access to the S3 bucket if the aws:SecureTransport request is false.
C. Configure HTTP to HTTPS redirection on the S3 bucket.
D. Set the origin protocol policy to HTTPS Only for CloudFront.
E. Set the viewer protocol policy to HTTPS Only for CloudFront.
F. Set the viewer protocol policy to Redirect HTTP to HTTPS for CloudFront.
Answer: D, E, F
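For the CloudFront side of this answer, here is the relevant fragment of a DistributionConfig showing the settings from options D and F. Everything else (IDs, the ALB domain name, cache settings, and the rest of the required fields) is a placeholder or omitted, so this is not a complete configuration.

```python
# Fragment of a CloudFront DistributionConfig illustrating options D and F.
# Only the protocol-related fields are shown; other required fields are omitted.
distribution_config_fragment = {
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "alb-origin",
                "DomainName": "internal-alb.example.com",  # placeholder ALB DNS name
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    # Option D: CloudFront connects to the ALB over HTTPS only.
                    "OriginProtocolPolicy": "https-only",
                },
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "alb-origin",
        # Option F: HTTP viewers are redirected to HTTPS instead of receiving an error.
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}
```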
Question 9
A company runs a highly available data collection application on Amazon EC2 in the eu-north-1 Region. The application collects data from end-user devices and writes records to an Amazon Kinesis data stream and a set of AWS Lambda functions that process the records. The company persists the output of the record processing to an Amazon S3 bucket in eu-north-1. The company uses the data in the S3 bucket as a data source for Amazon Athena. The company wants to increase its global presence. A solutions architect must launch the data collection capabilities in the sa-east-1 and ap-northeast-1 Regions. The solutions architect deploys the application, the Kinesis data stream, and the Lambda functions in the two new Regions. The solutions architect keeps the S3 bucket in eu-north-1 to meet a requirement to centralize the data analysis. During testing of the new setup, the solutions architect notices a significant lag on the arrival of data from the new Regions to the S3 bucket. Which solution will improve this lag time the MOST?
A. In each of the two new Regions, set up the Lambda functions to run in a VPC. Set up an S3 gateway endpoint in that VPC.
B. Turn on S3 Transfer Acceleration on the S3 bucket in eu-north-1. Change the application to use the new S3 accelerated endpoint when the application uploads data to the S3 bucket.
C. Create an S3 bucket in each of the two new Regions. Set the application in each new Region to upload to its respective S3 bucket. Set up S3 Cross-Region Replication to replicate data to the S3 bucket in eu-north-1.
D. Increase the memory requirements of the Lambda functions to ensure that they have multiple cores available. Use the multipart upload feature when the application uploads data to Amazon S3 from Lambda.
Answer: C
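A sketch of the replication rule from option C for one of the new Regional buckets. Bucket names and the IAM role ARN are placeholders, and versioning must already be enabled on both the source and destination buckets.

```python
import boto3

s3 = boto3.client("s3", region_name="sa-east-1")

# Replicate everything written to the sa-east-1 bucket into the central
# eu-north-1 bucket. Names and the role ARN are placeholders; versioning
# must be enabled on both buckets before this call succeeds.
s3.put_bucket_replication(
    Bucket="collector-data-sa-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-to-central",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::collector-data-central-eu-north-1"
                },
            }
        ],
    },
)
```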
Question 10
A company is planning to migrate to the AWS Cloud. The company hosts many applications on Windows servers and Linux servers. Some of the servers are physical, and some of the servers are virtual. The company uses several types of databases in its on-premises environment. The company does not have an accurate inventory of its on-premises servers and applications. The company wants to rightsize its resources during migration. A solutions architect needs to obtain information about the network connections and the application relationships. The solutions architect must assess the company's current environment and develop a migration plan. Which solution will provide the solutions architect with the required information to develop the migration plan?
A. Use Migration Evaluator to request an evaluation of the environment from AWS. Use the AWS Application Discovery Service Agentless Collector to import the details into a Migration Evaluator Quick Insights report.
B. Use AWS Migration Hub and install the AWS Application Discovery Agent on the servers. Deploy the Migration Hub Strategy Recommendations application data collector. Generate a report by using Migration Hub Strategy Recommendations.
C. Use AWS Migration Hub and run the AWS Application Discovery Service Agentless Collector on the servers. Group the servers and databases by using AWS Application Migration Service. Generate a report by using Migration Hub Strategy Recommendations.
D. Use the AWS Migration Hub import tool to load the details of the company's on-premises environment. Generate a report by using Migration Hub Strategy Recommendations.
Answer: B
Explanation: To develop a migration plan with accurate inventory and dependency data: AWS Migration Hub provides a single view for tracking migration tasks and resources across multiple AWS services. The AWS Application Discovery Agent (installed on servers) collects detailed data about running processes, system performance, and network connections. Migration Hub Strategy Recommendations leverages this data to automatically identify application patterns, generate recommended AWS target services, and provide a detailed migration plan. This approach ensures accurate data collection, detailed dependency mapping, and tailored recommendations—crucial for a successful and right-sized migration to AWS.
Question 11
A company runs a simple Linux application on Amazon EKS by using nodes of the M6i (general purpose) instance type. The company has an EC2 Instance Savings Plan for the M6i family that will expire soon. A solutions architect must minimize the EKS compute costs when the Savings Plan expires. Which combination of steps will meet this requirement? (Select THREE.)
A. Rebuild the application container images to support ARM64 architecture.
B. Rebuild the application container images to support containers.
C. Migrate the EKS nodes to the most recent generation of Graviton-based instances.
D. Replace the EKS nodes with the most recent generation of x86_64 instances.
E. Purchase a new EC2 Instance Savings Plan for the newly selected Graviton instance family.
F. Purchase a new EC2 Instance Savings Plan for the newly selected x86_64 instance family.
Answer: A, C, E
Explanation: To minimize EKS compute costs:
A. Rebuild the application container images to support ARM64 architecture: Graviton instances (ARM-based) require container images built for the ARM64 architecture. This ensures compatibility and allows the workload to run on Graviton-based compute.
C. Migrate the EKS nodes to the most recent generation of Graviton-based instances: AWS Graviton instances (e.g., c7g, m7g) offer significant price/performance benefits over x86_64 instances. By migrating EKS worker nodes to Graviton, the company can reduce compute costs substantially.
E. Purchase a new EC2 Instance Savings Plan for the newly selected Graviton instance family: Savings Plans provide cost savings compared to On-Demand pricing. After moving to Graviton, purchasing a new Savings Plan tailored to the new instance family ensures continued cost savings.
This approach aligns with AWS cost optimization best practices (migrating to the latest instance types and purchasing Savings Plans to match the new usage pattern) while ensuring full compatibility with EKS.
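Illustrating option C, a sketch that adds a Graviton-based managed node group to the cluster. The cluster name, subnets, node role ARN, and the specific Graviton instance type are placeholders.

```python
import boto3

eks = boto3.client("eks")

# Add an ARM64 (Graviton) managed node group; the ARM64-rebuilt images from
# option A can then be scheduled onto these nodes. Names and ARNs are placeholders.
eks.create_nodegroup(
    clusterName="example-cluster",
    nodegroupName="graviton-nodes",
    scalingConfig={"minSize": 2, "maxSize": 6, "desiredSize": 2},
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    instanceTypes=["m7g.large"],
    amiType="AL2_ARM_64",
    nodeRole="arn:aws:iam::111122223333:role/eks-node-role",
)
```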
Question 12
A company hosts a metadata API on Amazon EC2 instances behind an internet-facing Application Load Balancer (ALB). Only internal applications that run on EC2 instances in separate AWS accounts need to access the metadata API. All the internal EC2 instances use NAT gateways. A new policy requires that traffic between internal applications must not travel across the public internet. Which solution will meet this requirement?
A. Create an HTTP API in Amazon API Gateway. Configure a route for the metadata API. Configure a VPC link to the VPC that hosts the metadata API's EC2 instances. Update the API Gateway resource policy to include the account IDs of the internal applications that access the metadata API.
B. Create a REST API in Amazon API Gateway. Specify the API Gateway endpoint type as private. Associate the REST API with the metadata API's VPC. Create a gateway VPC endpoint for the REST API. Share the endpoint across accounts by using AWS Resource Access Manager (AWS RAM). Configure the internal applications to connect to the gateway VPC endpoint.
C. Create an internal ALB. Register the metadata API's EC2 instances with the internal ALB. Create an internal Network Load Balancer (NLB) that has a target group type of ALB. Register the internal ALB as the target. Configure an AWS PrivateLink endpoint service for the NLB. Grant the internal applications access to the metadata API through the PrivateLink endpoint.
D. Create an internal ALB. Register the metadata API's EC2 instances with the internal ALB. Configure an AWS PrivateLink endpoint service for the internal ALB. Grant the internal applications access to the metadata API through the PrivateLink endpoint.
Answer: D
Explanation: Creating an internal ALB and configuring it as a PrivateLink endpoint service enables private connectivity between internal applications and the metadata API, ensuring that traffic does not traverse the public internet. Internal ALB: Ensures traffic stays within the AWS network and is not exposed publicly. PrivateLink endpoint service: Provides secure, private access to the ALB from the internal EC2 instances in other AWS accounts. Traffic stays within the AWS global network, leveraging AWS security best practices and meeting the new policy requirements for no public internet exposure. This approach is secure, scalable, and minimizes management complexity compared to API Gateway solutions.
Question 13
A company has an application that uses an on-premises Oracle database. The company is migrating the database to the AWS Cloud. The database contains customer data and stored procedures. The company needs to migrate the database as quickly as possible with minimum downtime. The solution on AWS must provide high availability and must use managed services for the database. Which solution will meet these requirements?
A. Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon RDS for Oracle database. Transfer the database files to an Amazon S3 bucket. Configure the RDS database to use the S3 bucket as database storage. Set up S3 replication for high availability. Redirect the application to the RDS DB instance.
B. Create a database backup of the on-premises Oracle database. Upload the backup to an Amazon S3 bucket. Shut down the on-premises Oracle database to avoid any new transactions. Restore the backup to a new Oracle cluster that consists of Amazon EC2 instances across two Availability Zones. Redirect the application to the EC2 instances.
C. Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon DynamoDB table. Use DynamoDB Accelerator (DAX) and implement global tables for high availability. Rewrite the stored procedures in AWS Lambda. Run the stored procedures in DAX. After replication, redirect the application to the DAX cluster endpoint.
D. Use AWS DMS to replicate data from the on-premises Oracle database to a new Amazon Aurora PostgreSQL database. Use AWS SCT to convert the schema and stored procedures. Redirect the application to the Aurora DB cluster.
Answer: D
Explanation: Using AWS Database Migration Service (DMS) with an Amazon Aurora PostgreSQL target and the AWS Schema Conversion Tool (SCT) ensures a rapid migration with minimal downtime. DMS continuously replicates data, supporting near-zero downtime migration while keeping the source and target in sync. SCT automatically converts the Oracle schema and stored procedures to PostgreSQL-compatible format, minimizing manual effort. Aurora provides a managed, highly available database that scales across multiple AZs, ensuring high availability for the migrated workload. This method leverages AWS-managed services for database and migration, ensuring a secure, reliable, and low-overhead transition to the cloud.
Question 14
A company wants to migrate its website to AWS. The website uses microservices and runs on containers that are deployed in an on-premises, self-managed Kubernetes cluster. All the manifests that define the deployments for the containers in the Kubernetes deployment are in source control. All data for the website is stored in a PostgreSQL database. An open source container image repository runs alongside the on-premises environment. A solutions architect needs to determine the architecture that the company will use for the website on AWS. Which solution will meet these requirements with the LEAST effort to migrate?
A. Create an AWS App Runner service. Connect the App Runner service to the open source container image repository. Deploy the manifests from on premises to the App Runner service. Create an Amazon RDS for PostgreSQL database.
B. Create an Amazon EKS cluster that has managed node groups. Copy the application containers to a new Amazon ECR repository. Deploy the manifests from on premises to the EKS cluster. Create an Amazon Aurora PostgreSQL DB cluster.
C. Create an Amazon ECS cluster that has an Amazon EC2 capacity pool. Copy the application containers to a new Amazon ECR repository. Register each container image as a new task definition. Configure ECS services for each task definition to match the original Kubernetes deployments. Create an Amazon Aurora PostgreSQL DB cluster.
D. Rebuild the on-premises Kubernetes cluster by hosting the cluster on Amazon EC2 instances. Migrate the open source container image repository to the EC2 instances. Deploy the manifests from on premises to the new cluster on AWS. Deploy an open source PostgreSQL database on the new cluster.
Answer: B
Explanation: Migrating to an Amazon EKS cluster with managed node groups minimizes the migration effort. EKS is fully managed and offers native Kubernetes support, so the existing Kubernetes manifests can be deployed without major changes. Copying the containers to Amazon ECR provides fully managed, scalable container image storage in AWS and eliminates reliance on the on-premises container repository. Deploying the existing manifests directly to EKS reuses the existing configuration, such as service definitions, deployments, and scaling policies, which simplifies migration. Amazon Aurora PostgreSQL provides a fully managed, highly available database service that is compatible with PostgreSQL, reducing operational overhead compared to managing a self-hosted database. This approach leverages AWS managed services while preserving the existing microservices and deployment practices, ensuring minimal disruption and the fastest migration path.
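For the registry part of option B, a small sketch that creates an ECR repository and fetches a temporary registry credential; the actual image copy (for example with docker or crane) is not shown, and the repository name is a placeholder.

```python
import base64
import boto3

ecr = boto3.client("ecr")

# Create a repository for one of the microservice images (name is a placeholder).
repo = ecr.create_repository(
    repositoryName="website/catalog-service",
    imageScanningConfiguration={"scanOnPush": True},
)
print("Push images to:", repo["repository"]["repositoryUri"])

# Temporary credential for the registry; a docker or crane push (not shown)
# would use it to copy images from the on-premises repository.
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
print("Registry endpoint:", auth["proxyEndpoint"])
```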
Question 15
A company is planning to migrate its on-premises data analysis application to AWS. The application is hosted across a fleet of servers and requires consistent system time. The company has established an AWS Direct Connect connection from its on-premises data center to AWS. The company has a high-precision stratum-0 atomic clock network appliance that acts as an NTP source for all on-premises servers. After the migration to AWS is complete, the clock on all Amazon EC2 instances that host the application must be synchronized with the on-premises atomic clock network appliance. Which solution will meet these requirements with the LEAST administrative overhead?
A. Configure a DHCP options set with the on-premises NTP server address. Assign the options set to the VPC. Ensure that NTP traffic is allowed between AWS and the on-premises networks.
B. Create a custom AMI to use the Amazon Time Sync Service at 169.254.169.123. Use this AMI for the application. Use AWS Config to audit the NTP configuration.
C. Deploy a third-party time server from the AWS Marketplace. Configure the time server to synchronize with the on-premises atomic clock network appliance. Ensure that NTP traffic is allowed inbound in the network ACLs for the VPC that contains the third-party server.
D. Create an IPsec VPN tunnel from the on-premises atomic clock network appliance to the VPC to encrypt the traffic over the Direct Connect connection. Configure the VPC route tables to direct NTP traffic over the tunnel.
Answer: A
Explanation: Using a DHCP options set in AWS allows EC2 instances to automatically receive network configuration information, including the IP address of the on-premises NTP server. This ensures that all EC2 instances in the VPC consistently use the on-premises atomic clock as their authoritative time source, maintaining the same time synchronization as the existing environment. By allowing NTP traffic over the Direct Connect connection and ensuring proper security group and network ACL configurations, this approach eliminates the need to manually configure each EC2 instance’s NTP settings, significantly reducing administrative overhead and ensuring consistent, accurate time across the environment. This aligns with AWS best practices for extending on-premises time synchronization to EC2 instances in a hybrid environment.
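A minimal sketch of option A; the on-premises NTP server address and the VPC ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Point instances in the VPC at the on-premises atomic clock appliance.
# The NTP server IP and VPC ID are placeholders.
options = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "ntp-servers", "Values": ["10.10.0.5"]},
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
    ]
)

ec2.associate_dhcp_options(
    DhcpOptionsId=options["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",
)
# Instances pick up the new options when the DHCP lease renews; NTP (UDP 123)
# must also be allowed to the on-premises appliance over the Direct Connect link.
```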