Questions for the ANS-C01 were updated on: Dec 01, 2025
A company ran out of IP address space in one of the Availability Zones in an AWS Region that the
company uses. The Availability Zone that is out of space is assigned the
10.10.1.0/24 CIDR block. The company manages its networking configurations in an AWS
CloudFormation stack. The company's VPC is assigned the 10.10.0.0/16 CIDR
block and has available capacity in the 10.10.1.0/22 CIDR block.
How should a network specialist add more IP address space in the existing VPC with the LEAST
operational overhead?
A.
Update the AWS::EC2::Subnet resource for the Availability Zone in the CloudFormation
stack. Change the CidrBlock property to 10.10.1.0/22.
B.
Update the AWS::EC2::VPC resource in the CloudFormation stack. Change the CidrBlock
property to 10.10.1.0/22.
C.
Copy the CloudFormation stack. Set the AWS::EC2::VPC resource CidrBlock property to
10.10.0.0/16. Set the AWS::EC2::Subnet resource CidrBlock property to 10.10.1.0/22 for the
Availability Zone.
D.
Create a new AWS::EC2::Subnet resource for the Availability Zone in the CloudFormation
stack. Set the CidrBlock property to 10.10.2.0/24.
D
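A subnet's CIDR block cannot be changed after creation, so adding a second subnet in the affected Availability Zone is the lowest-overhead way to gain address space. The following Python fragment is a minimal sketch of the resource that answer D adds to the template; the logical ID, the VPC reference, and the Availability Zone name are placeholders.

import json

# Sketch only: a new AWS::EC2::Subnet resource to append to the template's
# Resources section. Logical ID, VPC reference, and AZ name are hypothetical.
new_subnet_fragment = {
    "AdditionalSubnetAz1": {                      # hypothetical logical ID
        "Type": "AWS::EC2::Subnet",
        "Properties": {
            "VpcId": {"Ref": "AppVpc"},           # assumes the VPC's logical ID is AppVpc
            "AvailabilityZone": "us-east-1a",     # the AZ that ran out of addresses (assumed)
            "CidrBlock": "10.10.2.0/24",          # new, non-overlapping range within the VPC CIDR
        },
    }
}
print(json.dumps(new_subnet_fragment, indent=2))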
A company has multiple firewalls and ISPs for its on-premises data center. The company has a single
AWS Site-to-Site VPN connection from the company's on-premises data center to a transit gateway. A
single ISP services the Site-to-Site VPN connection. Multiple VPCs are attached to the transit
gateway.
A customer gateway that the Site-to-Site VPN connection uses fails. Connectivity is completely lost,
but the company's network team does not receive a notification.
The network team needs to implement redundancy within a week in case a single customer gateway
fails again. The team wants to use an Amazon CloudWatch alarm to send notifications to an Amazon
Simple Notification Service (Amazon SNS) topic if any tunnel of the Site-to-Site VPN connection fails.
Which solution will meet these requirements MOST cost-effectively?
B
Explanation:
Redundancy requires a second customer gateway and ideally a second ISP to avoid a single point of
failure. AWS Site-to-Site VPN connections support two tunnels per VPN connection.
By creating a second VPN connection (to the transit gateway) with a second customer gateway and
ISP, the solution meets the redundancy requirement.
CloudWatch TunnelState alarms can be configured on each tunnel. A value of < 1 (i.e., when the
tunnel is down) will trigger the alarm.
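The alarm described above can be expressed in a few lines of boto3; this is a minimal sketch in which the alarm name, tunnel outside IP address, and SNS topic ARN are placeholders. One alarm would be created per tunnel of the VPN connection.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Fires when the TunnelState metric drops below 1 (0 = tunnel down).
cloudwatch.put_metric_alarm(
    AlarmName="site-to-site-vpn-tunnel-1-down",
    Namespace="AWS/VPN",
    MetricName="TunnelState",
    Dimensions=[{"Name": "TunnelIpAddress", "Value": "203.0.113.10"}],  # placeholder tunnel IP
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:vpn-alerts"],     # placeholder topic ARN
)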
A company operates in the us-east-1 Region and the us-west-1 Region. The company is designing a
solution to connect an on-premises data center to the company's AWS environment in us-east-1. The
solution uses two AWS Direct Connect connections.
Traffic from us-west-1 to the data center needs to traverse the Direct Connect connections. A
network engineer needs to set up active-passive functionality across the two Direct Connect
connections by using a Direct Connect gateway to influence inbound traffic from VPCs that are in us-
west-1 to the data center.
Which solution will meet these requirements?
D
Explanation:
To influence which Direct Connect connection carries traffic from AWS to the on-premises data
center, local preference BGP communities are used with a Direct Connect gateway.
AWS Direct Connect supports the following local preference communities, which are applied to the
prefixes that the on-premises routers advertise to AWS over the virtual interfaces:
7224:7300 sets a higher local preference (preferred route).
7224:7100 sets a lower local preference (less preferred route).
Advertising the data center prefixes with 7224:7300 on the primary connection and 7224:7100 on the
secondary connection produces the required active-passive routing behavior for traffic from the AWS
VPCs to the data center.
A company runs an application across multiple AWS Regions and multiple Availability Zones. The
company needs to expand to a new AWS Region. Low latency is critical to the functionality of the
application.
A network engineer needs to gather metrics for the latency between the existing Regions and the
new Region. The network engineer must gather metrics for at least the previous 30 days.
Which solution will meet these requirements?
B
Explanation:
AWS Network Manager Infrastructure Performance is specifically designed to monitor the network
performance across multiple AWS Regions, and it provides network metrics, including latency,
between AWS Regions. By setting it up and publishing the metrics to Amazon CloudWatch, the
network engineer can gather the necessary latency metrics for at least the previous 30 days. This
solution directly addresses the requirement for low latency and monitoring network performance
between the existing Regions and the new Region.
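Once the metrics are published to CloudWatch, the 30-day latency history can be retrieved with a standard metric query. The sketch below shows the shape of such a query; the namespace, metric name, and dimension names are placeholders rather than the exact names that Infrastructure Performance publishes.

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.datetime.utcnow()
# 30 days of hourly datapoints (720 points, under the per-call limit).
stats = cloudwatch.get_metric_statistics(
    Namespace="ExampleInfrastructurePerformance",   # placeholder namespace
    MetricName="InterRegionLatency",                # placeholder metric name
    Dimensions=[
        {"Name": "SourceRegion", "Value": "us-east-1"},      # placeholder dimension
        {"Name": "DestinationRegion", "Value": "eu-west-1"}, # placeholder dimension
    ],
    StartTime=end - datetime.timedelta(days=30),
    EndTime=end,
    Period=3600,
    Statistics=["Average"],
)
print(len(stats["Datapoints"]), "hourly datapoints retrieved")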
A company is establishing hybrid cloud connectivity from an on-premises environment to AWS in the
us-east-1 Region. The company is using a 10 Gbps AWS Direct Connect dedicated connection. The
company has two accounts in AWS. Account A has transit gateways in four AWS Regions. Account B
has transit gateways in three Regions. The company does not plan to expand.
To meet security requirements, the company's accounts must have separate cloud infrastructure.
Which solution will meet these requirements MOST cost-effectively?
A
Explanation:
The most cost-effective and scalable solution is to create a single Direct Connect gateway in us-east-
1, and use AWS Resource Access Manager (AWS RAM) to share the Direct Connect gateway between
Account A and Account B. This approach avoids the need for multiple Direct Connect connections
and allows both accounts to share the same connection, which is a more cost-efficient solution
compared to creating separate connections for each account.
Transit VIFs (Virtual Interfaces) will be created for both Account A and Account B, and each account's
respective transit gateways will be associated with the same Direct Connect gateway. This solution
allows both accounts to access AWS resources in the most efficient manner.
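As a rough sketch of the building blocks on the connection owner's side, the snippet below creates the Direct Connect gateway and one transit VIF on the dedicated connection; the connection ID, names, and ASNs are placeholders, and the second account's transit VIF would follow the same pattern (for example, allocated from the connection owner's account).

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Direct Connect gateway (a global resource) shared by the transit VIFs.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="shared-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Transit VIF on the 10 Gbps dedicated connection for Account A.
dx.create_transit_virtual_interface(
    connectionId="dxcon-EXAMPLE",                 # placeholder connection ID
    newTransitVirtualInterface={
        "virtualInterfaceName": "account-a-transit-vif",
        "vlan": 101,
        "asn": 65001,                             # on-premises BGP ASN (assumed)
        "mtu": 8500,
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)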
A company has two AWS Direct Connect connections between Direct Connect locations and the
company's on-premises environment in the US. The company uses the connections to communicate
with AWS workloads that run in the us-east-1 Region. The company has a transit gateway that
connects several VPCs. The Direct Connect connections terminate at a Direct Connect gateway, and
transit VIFs connect the Direct Connect gateway to the transit gateway.
The company recently acquired a smaller company that is based in Europe. The newly acquired
company has only on-premises workloads. The newly acquired company does not
expect to run workloads on AWS for the next 3 years. However, the newly acquired company requires
connectivity to the parent company's AWS resources in us-east-1 and to the
parent company's on-premises environment in the US. The parent company wants to use two new
Direct Connect connections in Europe to provide the required connectivity.
Which solution will meet these requirements with the LEAST operational overhead for the newly
acquired company?
A
Explanation:
In this scenario, the company wants to provide connectivity from the newly acquired company in
Europe to the existing AWS resources in the us-east-1 Region with minimal operational overhead.
The best approach is to use Direct Connect SiteLink, which allows direct communication between
two different Direct Connect locations (one in Europe and one in the US) via the existing Direct
Connect gateway.
By associating new transit VIFs (Virtual Interfaces) to the existing Direct Connect gateway and
configuring Direct Connect SiteLink, the company can efficiently extend the existing network
architecture with minimal additional configuration. This solution provides the required connectivity
to both AWS resources and the on-premises environment in the US, leveraging the existing
infrastructure without introducing significant complexity or the need for additional resources like
new transit gateways or VPCs.
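A sketch of the European side of this design is shown below: a new transit VIF attached to the existing Direct Connect gateway with SiteLink turned on, plus enabling SiteLink on an existing US virtual interface. All IDs are placeholders, and the enableSiteLink parameter name is an assumption about how the option is exposed in the Direct Connect API.

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# New transit VIF at the European Direct Connect location, attached to the
# existing Direct Connect gateway, with SiteLink enabled so traffic can flow
# between the European and US on-premises sites over the AWS backbone.
dx.create_transit_virtual_interface(
    connectionId="dxcon-EU-EXAMPLE",              # placeholder European connection ID
    newTransitVirtualInterface={
        "virtualInterfaceName": "europe-transit-vif",
        "vlan": 201,
        "asn": 65010,                             # acquired company's BGP ASN (assumed)
        "directConnectGatewayId": "dxgw-EXAMPLE", # existing gateway (placeholder ID)
        "enableSiteLink": True,                   # assumed flag name
    },
)

# SiteLink would also be enabled on the existing US virtual interfaces.
dx.update_virtual_interface_attributes(
    virtualInterfaceId="dxvif-US-EXAMPLE",        # placeholder VIF ID
    enableSiteLink=True,                          # assumed flag name
)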
AnyCompany deploys and manages networking resources in its AWS network account, named
C
Explanation:
The chosen solution involves setting up a cross-account Global Accelerator attachment from Account-B
(where the application is hosted) to Account-A (where the accelerator will be managed). By using
this shared attachment, the networking team in Account-A can manage the Global Accelerator with
minimal management overhead, while still allowing the ALB in Account-B to be the endpoint for the
accelerator. This approach requires fewer resources and minimizes complexity compared to other
solutions.
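The sketch below illustrates how the attachment step might look from Account-B using boto3. The account ID, ALB ARN, and the exact shapes of the Principals and Resources parameters are assumptions for illustration only.

import boto3

# Global Accelerator is a global service whose API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Created in Account-B (the ALB owner) to let Account-A's accelerator use the ALB.
ga.create_cross_account_attachment(
    Name="alb-attachment-for-network-account",
    Principals=["111122223333"],                   # Account-A (placeholder account ID)
    Resources=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:444455556666:"
                          "loadbalancer/app/example-alb/1234567890abcdef",  # placeholder ALB ARN
            "Region": "us-east-1",
        }
    ],
)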
A media company is planning to host an event that the company will live stream to users. The
company wants to use Amazon CloudFront.
A network engineer creates a primary origin and a secondary origin for CloudFront. The engineer
needs to ensure that the primary origin can fail over to the secondary origin within 15 seconds if a
disruption occurs.
Which solution will meet this requirement with the LEAST operational overhead?
B
Explanation:
The solution involves using an NLB to manage the failover between the primary and secondary
origins. The NLB automatically handles health checks for both origins and will route traffic to the
healthy origin, providing a seamless failover within the required 15 seconds. This approach requires
minimal operational overhead, as the NLB handles the routing and health checking without the need
for custom code or manual intervention.
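The sketch below shows the kind of health check tuning this relies on, assuming one NLB target group per origin: a 5-second interval with an unhealthy threshold of 2 detects a failed origin in roughly 10 seconds, inside the 15-second requirement. The names, VPC ID, and port are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Target group for the primary origin behind the NLB.
elbv2.create_target_group(
    Name="primary-origin-tg",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC ID
    TargetType="ip",
    HealthCheckProtocol="TCP",
    HealthCheckIntervalSeconds=5,      # frequent checks for fast failure detection
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
)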
A company wants to analyze TCP internet traffic. The traffic originates from Amazon EC2 instances in
the company’s VPC. The EC2 instances initiate connections through a NAT gateway.
The company wants to capture data about the traffic, including source and destination IP addresses,
ports, and the first 8 bytes of the TCP segments of the traffic. The company needs to collect, store,
and analyze all the required data points.
Which solution will meet these requirements?
A
Explanation:
This solution meets the requirements for capturing detailed TCP internet traffic, including source and
destination IP addresses, ports, and the first 8 bytes of TCP segments. By configuring the EC2
instances as traffic mirror sources and deploying a software solution on the target to forward the
captured traffic to CloudWatch Logs, you can analyze the traffic in-depth using CloudWatch Logs
Insights. VPC traffic mirroring is ideal for capturing low-level network traffic, providing the necessary
data points for analysis.
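A minimal boto3 sketch of the traffic mirroring pieces follows: a mirror target, a filter that accepts TCP traffic, and a session on the source instance's ENI. All resource IDs are placeholders, and the PacketLength value is an assumption; it must be large enough to cover the IP and TCP headers plus the first 8 bytes of each TCP segment.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mirror target: an ENI on the analysis appliance.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-target0123456789",    # placeholder ENI ID
    Description="Packet analysis appliance",
)["TrafficMirrorTarget"]

# Filter that accepts outbound TCP traffic from the source instance.
mirror_filter = ec2.create_traffic_mirror_filter(
    Description="Outbound TCP internet traffic",
)["TrafficMirrorFilter"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    TrafficDirection="egress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,                                   # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Session on the source instance's ENI; PacketLength truncates each mirrored packet.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-source0123456789",    # placeholder source ENI ID
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    SessionNumber=1,
    PacketLength=100,                             # assumed truncation length
)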
A company operates in multiple AWS Regions. The company has deployed transit gateways in each
Region. The company uses AWS Organizations to operate multiple AWS accounts in one organization.
The company needs to capture all VPC flow log data when a new VPC is created. The company needs
to send flow logs to a specific Amazon S3 bucket.
Which solution will meet these requirements with the LEAST administrative effort?
B
Explanation:
This solution uses AWS Config, which allows you to automatically monitor and evaluate the
configuration of AWS resources, including VPCs. By creating a custom AWS Config rule that checks
whether VPC Flow Logs are enabled and correctly configured, you can ensure that VPC flow logs are
captured for every new VPC. With automatic remediation, the rule can also ensure the VPC Flow
Logs configuration is applied if not already set. Additionally, applying this rule to your entire
organization will simplify the management process and reduce administrative effort.
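The remediation action itself reduces to a single EC2 API call per non-compliant VPC, for example from an SSM Automation runbook invoked by the Config rule. This is a sketch with a placeholder VPC ID and bucket ARN.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable flow logs for the non-compliant VPC, delivered to the central S3 bucket.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],                     # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-central-flow-log-bucket",  # placeholder bucket ARN
)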
A company has an AWS environment that includes multiple VPCs that are connected by a transit
gateway. The company wants to use a certificate-based AWS Site-to-Site VPN connection to establish
connectivity between an on-premises environment and the AWS environment. The company does
not have a static public IP address for the on-premises environment.
Which combination of steps should the company take to establish VPN connectivity between the
transit gateway and the on-premises environment? (Choose two.)
B, D
Explanation:
Create a private certificate in AWS Certificate Manager (ACM): This involves setting up a private
Certificate Authority (CA) within AWS ACM, which will be used to issue certificates for authenticating
your customer gateway device.
Create a customer gateway. Specify the current dynamic IP address of the customer gateway device's
external interface: Even though the on-premises environment doesn't have a static IP address, you can
still configure the customer gateway in AWS by specifying its current dynamic IP address. This setup
allows AWS to recognize and authenticate the customer gateway device during VPN connection
establishment.
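A sketch of the two steps with boto3 is shown below; the certificate ARN, BGP ASN, IP address, and transit gateway ID are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway that authenticates with an ACM private certificate instead of
# a pre-shared key tied to a fixed public IP.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,                                                      # on-premises ASN (assumed)
    Type="ipsec.1",
    CertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/example",  # placeholder ARN
    PublicIp="198.51.100.10",          # current dynamic address of the device
)["CustomerGateway"]

# Certificate-based Site-to-Site VPN connection that terminates on the transit gateway.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    TransitGatewayId="tgw-0123456789abcdef0",   # placeholder transit gateway ID
)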
A
Explanation:
This solution minimizes operational overhead by using a single transit gateway (TGW-A) for both
teams, while also leveraging resource sharing between accounts. This approach eliminates the need
to create new transit gateways, thus reducing complexity and the operational overhead of managing
multiple transit gateways.
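A sketch of sharing TGW-A through AWS RAM with boto3 follows; the transit gateway ARN and the second account ID are placeholders.

import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share the existing transit gateway (TGW-A) with the other team's account so
# that its VPCs can attach to it.
ram.create_resource_share(
    name="tgw-a-share",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"  # placeholder ARN
    ],
    principals=["444455556666"],       # placeholder account ID for the second team
)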
A company has several AWS Site-to-Site VPN connections between an on-premises customer
gateway and a transit gateway. The company's application uses IPv4 to communicate through the
VPN connections.
The company has updated the VPC to be dual stack and wants to transition to using IPv6-only for new
workloads. When the company tries to communicate through the existing VPN connections, IPv6
traffic fails.
Which solution will provide IPv6 support with the LEAST operational overhead?
A
Explanation:
IPv6 Support in VPN Connections: Existing AWS Site-to-Site VPN connections that were originally
configured for IPv4 do not automatically support IPv6 traffic. To enable IPv6 communication, a new
Site-to-Site VPN connection must be created that explicitly supports IPv6.
Least Operational Overhead: Creating a new IPv6-enabled Site-to-Site VPN connection is
straightforward and does not require extensive reconfiguration of the existing IPv4 setup. This
ensures a smooth transition to dual-stack or IPv6-only workloads with minimal disruption.
Support for Dual-Stack Workloads: The new IPv6-enabled Site-to-Site VPN connection can coexist
with the existing IPv4 connections, allowing the company to transition workloads incrementally to
IPv6.
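A minimal sketch of creating the IPv6-enabled connection with boto3 is shown below; the customer gateway and transit gateway IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# New Site-to-Site VPN connection whose tunnels carry IPv6 inside the tunnel.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",   # placeholder customer gateway ID
    TransitGatewayId="tgw-0123456789abcdef0",    # placeholder transit gateway ID
    Options={"TunnelInsideIpVersion": "ipv6"},
)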
A company uses transit gateways to route traffic between the company's VPCs. Each transit gateway
has a single route table. Each route table contains attachments and routes for the VPCs that are in
the same AWS Region as the transit gateway. The route tables in each VPC also contain routes to all
the other VPC CIDR ranges that are available through the transit gateways. Some VPCs route to local
NAT gateways.
The company plans to add many new VPCs soon. A network engineer needs a solution to add new
VPC CIDR ranges to the route tables in each VPC.
Which solution will meet these requirements in the MOST operationally efficient way?
A
Explanation:
Using a Prefix List for Route Management: A customer-managed prefix list allows you to group
multiple CIDR ranges into a single logical entity. By referencing the prefix list in VPC route tables, you
can simplify route management. This eliminates the need to manually add individual CIDR ranges to
each VPC route table.
Operational Efficiency: When a new VPC is added, its CIDR range can be added to the prefix list, and
all route tables referencing the prefix list will automatically include the new CIDR. This reduces
operational overhead compared to manually updating each route table.
Flexibility: The prefix list approach is highly scalable and supports the company’s need to add many
new VPCs in the future.
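A sketch of the prefix list approach with boto3 follows; the CIDR ranges, names, and IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer-managed prefix list holding every VPC CIDR range.
plist = ec2.create_managed_prefix_list(
    PrefixListName="all-vpc-cidrs",
    AddressFamily="IPv4",
    MaxEntries=100,
    Entries=[
        {"Cidr": "10.10.0.0/16", "Description": "vpc-prod"},   # placeholder entries
        {"Cidr": "10.20.0.0/16", "Description": "vpc-dev"},
    ],
)["PrefixList"]

# Each VPC route table references the prefix list once; adding a new VPC later
# only requires modify_managed_prefix_list, not a change to every route table.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",            # placeholder route table ID
    DestinationPrefixListId=plist["PrefixListId"],
    TransitGatewayId="tgw-0123456789abcdef0",        # placeholder transit gateway ID
)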
A company runs a workload in a single VPC on AWS. The company’s architecture contains several
interface VPC endpoints for AWS services, including Amazon CloudWatch Logs and AWS Key
Management Service (AWS KMS). The endpoints are configured to use a shared security group. The
security group is not used for any other workloads or resources.
After a security review of the environment, the company determined that the shared security group
is more permissive than necessary. The company wants to make the rules associated with the
security group more restrictive. The changes to the security group rules must not prevent the
resources in the VPC from using AWS services through interface VPC endpoints. The changes must
prevent unnecessary access.
The security group currently uses the following rules:
• Inbound - Rule 1
Protocol: TCP
Port: 443
Source: 0.0.0.0/0
• Inbound - Rule 2
Protocol: TCP
Port: 443
Source: VPC CIDR
• Outbound - Rule 1
Protocol: All
Port: All
Destination: 0.0.0.0/0
Which rule or rules should the company remove to meet these requirements?
B
Explanation:
Inbound Rule 1 (Allow TCP 443 from 0.0.0.0/0): This rule allows all sources, including the public
internet, to access the interface VPC endpoints. Since interface VPC endpoints are used within the
VPC for communication with AWS services, this rule is unnecessarily permissive. Removing this rule
enhances security while still allowing communication within the VPC using Rule 2 (TCP 443 from the
VPC CIDR).
Outbound Rule 1 (Allow All Protocols, All Ports to 0.0.0.0/0): This rule is overly permissive and
unnecessary for interface VPC endpoints, as traffic destined for AWS services through these
endpoints does not need unrestricted outbound access. Removing this rule ensures that outbound
traffic is limited to what is required for communication with the AWS services through the interface
endpoints.
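Removing the two rules can be done with two boto3 calls, sketched below with a placeholder security group ID.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
endpoint_sg = "sg-0123456789abcdef0"   # placeholder security group ID

# Remove Inbound Rule 1 (TCP 443 from 0.0.0.0/0).
ec2.revoke_security_group_ingress(
    GroupId=endpoint_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Remove Outbound Rule 1 (all traffic to 0.0.0.0/0).
ec2.revoke_security_group_egress(
    GroupId=endpoint_sg,
    IpPermissions=[{
        "IpProtocol": "-1",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)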