Questions for the CNPA were updated on Nov 21, 2025
A cloud native application needs to establish secure communication between its microservices.
Which mechanism is essential for implementing security in service-to-service communications?
B
Explanation:
Mutual TLS (mTLS) is the core mechanism for securing service-to-service communication in cloud
native environments. Option B is correct because mTLS provides encryption in transit and mutual
authentication, ensuring both the client and server verify each other’s identity. This prevents
unauthorized access, man-in-the-middle attacks, and data leakage.
Option A (API Gateway) manages ingress traffic from external clients but does not secure internal
service-to-service communication. Option C (Service Mesh) is a broader infrastructure layer (e.g.,
Istio, Linkerd) that implements mTLS, but mTLS itself is the mechanism that enforces secure
communications. Option D (Load Balancer) distributes traffic but does not handle encryption or
authentication.
mTLS is foundational to zero-trust networking inside Kubernetes clusters. Service meshes typically
provide automated certificate management and policy enforcement, ensuring seamless adoption of
mTLS without requiring developers to modify application code.
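As an illustrative sketch, enforcing strict mTLS for a namespace in Istio takes a single declarative resource (this assumes Istio is installed; the "prod" namespace name is an example):

```yaml
# Sketch: require mTLS for all workloads in the "prod" namespace.
# Istio's sidecars then handle certificate issuance and rotation
# without any application code changes.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT   # reject any plaintext (non-mTLS) traffic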
Reference:
— CNCF Service Mesh Whitepaper
— CNCF Platforms Whitepaper
— Cloud Native Platform Engineering Study Guide
In what way does an internal platform impact developers' cognitive load?
B
Explanation:
The primary role of an Internal Developer Platform (IDP) is to reduce cognitive load for developers by
abstracting away infrastructure complexity and providing simple, self-service interfaces. Option B is
correct because platforms deliver curated golden paths, service catalogs, and APIs that allow
developers to focus on application logic instead of learning every underlying infrastructure tool.
Option A is incorrect—platforms are specifically designed to reduce mental overhead. Option C
contradicts the platform engineering principle of shifting complexity away from developers. Option D
also misrepresents the intent of platforms, which aim to unify and simplify rather than complicate.
By lowering cognitive load, platforms improve productivity, enable faster onboarding, and reduce the
likelihood of errors. This aligns with the “platform as a product” model, where developers are treated
as customers and the platform is designed to optimize their experience.
Reference:
— CNCF Platforms Whitepaper
— Team Topologies (Cognitive Load Principle)
— Cloud Native Platform Engineering Study Guide
Which provisioning strategy ensures efficient resource scaling for an application on Kubernetes?
B
Explanation:
The most efficient and scalable strategy is to use a declarative approach with Infrastructure as Code
(IaC). Option B is correct because declarative definitions specify the desired state (e.g., resource
requests, limits, autoscaling policies) in code, allowing Kubernetes controllers and autoscalers to
reconcile and enforce them dynamically. This ensures that applications can scale efficiently based on
actual demand.
Option A (fixed allocation) is inefficient, leading to wasted resources during low usage or insufficient
capacity during high demand. Option C (manual provisioning) introduces delays, risk of error, and
operational overhead. Option D (imperative scripting) is not sustainable for large-scale or dynamic
workloads, as it requires constant manual intervention.
Declarative IaC aligns with GitOps workflows, enabling automated, version-controlled scaling
decisions. Combined with Kubernetes’ Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler, this
approach allows platforms to balance cost efficiency with application reliability.
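As a minimal sketch of this declarative pattern, a HorizontalPodAutoscaler stored in Git lets Kubernetes reconcile replica count against observed demand (the Deployment name and thresholds below are illustrative):

```yaml
# Sketch: desired scaling behavior declared as code; the HPA
# controller continuously reconciles replicas toward the target.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```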
Reference:
— CNCF GitOps Principles
— Kubernetes Autoscaling Documentation
— Cloud Native Platform Engineering Study Guide
What is the primary purpose of Kubernetes runtime security?
B
Explanation:
The main purpose of Kubernetes runtime security is to protect workloads during execution. Option B
is correct because runtime security focuses on monitoring active Pods, containers, and processes to
detect and prevent malicious activity such as privilege escalation, anomalous network connections,
or unauthorized file access.
Option A (etcd encryption) addresses data at rest, not runtime. Option C (image scanning) occurs pre-
deployment, not during execution. Option D (API access control) is enforced through RBAC and IAM,
not runtime security.
Runtime security solutions (e.g., Falco or Cilium Tetragon) continuously observe system calls,
network traffic, and workload behaviors to enforce policies and detect threats in real time. This
ensures compliance, strengthens defenses in zero-trust environments, and provides critical
protection for cloud native workloads in production.
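For a concrete sense of what runtime detection looks like, here is a hypothetical Falco-style rule; the `spawned_process` and `container` condition macros come from Falco's default ruleset, and the exact fields shown are a sketch, not a production rule:

```yaml
# Sketch of a Falco rule: alert when an interactive shell is
# spawned inside a running container, a common sign of compromise.
- rule: Terminal shell in container
  desc: Detect an interactive shell started inside a container
  condition: spawned_process and container and proc.name in (bash, sh)
  output: "Shell started in container (user=%user.name container=%container.name)"
  priority: WARNING
```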
Reference:
— CNCF Security TAG Guidance
— CNCF Platforms Whitepaper
— Cloud Native Platform Engineering Study Guide
Which tool is commonly used to automate environment provisioning?
D
Explanation:
OpenTofu (the open-source fork of Terraform) is one of the most widely used tools for automating
environment provisioning. Option D is correct because OpenTofu allows teams to define
infrastructure as code, supporting multiple cloud providers and services. It enables declarative,
reusable, and version-controlled provisioning workflows, ensuring consistency across environments.
Option A (Kubernetes) orchestrates containers and workloads but does not provision infrastructure
outside its cluster scope. Option B (Prometheus) is an observability tool, not an IaC tool. Option C
(Docker) manages containers but does not provision full environments or infrastructure.
By using tools like OpenTofu/Terraform, platform engineers ensure scalable, repeatable environment
provisioning integrated into CI/CD or GitOps workflows. This aligns with platform engineering’s goals
of reducing toil and enabling self-service with compliance.
Reference:
— CNCF Platforms Whitepaper
— Infrastructure as Code Best Practices
— Cloud Native Platform Engineering Study Guide
Which approach is effective for scalable Kubernetes infrastructure provisioning?
D
Explanation:
The most effective approach for scalable Kubernetes infrastructure provisioning is Crossplane
compositions. Option D is correct because compositions let platform teams define custom CRDs
(Composite Resources) that abstract infrastructure details while embedding organizational policies
and guardrails. Developers then consume these abstractions through simple Kubernetes-native APIs,
enabling self-service at scale.
Option A (Helm with values.yaml) is useful for application deployment but not for scalable
infrastructure provisioning across multiple clouds. Option B (imperative scripts) lacks scalability,
repeatability, and governance. Option C (static YAML with kubectl apply) is manual and not suited for
dynamic, multi-team environments.
Crossplane compositions allow platform teams to curate golden paths while giving developers
autonomy. This reduces complexity, ensures compliance, and supports multi-cloud provisioning—all
key aspects of platform engineering.
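As an illustrative sketch, once the platform team publishes a composition, a developer requests infrastructure through a simple claim; the `example.org` group, `Database` kind, and spec fields here are hypothetical and depend entirely on what the platform team defines:

```yaml
# Sketch: developer-facing claim against a hypothetical
# platform-defined composition. Cloud-specific details
# (instance class, networking, backups) stay hidden behind it.
apiVersion: example.org/v1alpha1
kind: Database
metadata:
  name: orders-db
spec:
  engine: postgres   # assumed abstraction exposed by the platform team
  size: small
```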
Reference:
— CNCF Crossplane Project Documentation
— CNCF Platforms Whitepaper
— Cloud Native Platform Engineering Study Guide
In a GitOps workflow using Crossplane, how is infrastructure provisioned across multiple clusters?
B
Explanation:
Crossplane integrates tightly with GitOps workflows by extending Kubernetes with infrastructure
APIs. Option B is correct because infrastructure resources (databases, networks, S3 buckets, etc.) are
defined declaratively in Git repositories. Git becomes the single source of truth, while Crossplane
controllers automatically reconcile the desired state into real infrastructure across supported cloud
providers.
Option A reflects imperative scripting, which contradicts GitOps principles. Option C (manual
provisioning) lacks automation, governance, and repeatability. Option D involves manual application
with kubectl, which bypasses GitOps reconciliation loops.
With Crossplane and GitOps, teams achieve consistent, reproducible, and auditable infrastructure
provisioning at scale. This enables full alignment with cloud native platform engineering principles of
declarative management, self-service, and extensibility.
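As a sketch of the wiring, a GitOps controller such as Argo CD can watch a repository of Crossplane manifests and apply them to a cluster; the repository URL and path below are placeholders:

```yaml
# Sketch: Argo CD Application syncing Crossplane resource
# definitions from Git. Crossplane controllers then reconcile
# those resources into real cloud infrastructure.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infra
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra   # placeholder repo
    path: crossplane
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated: {}   # auto-apply desired state from Git
```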
Reference:
— CNCF Crossplane Documentation
— CNCF GitOps Principles
— Cloud Native Platform Engineering Study Guide
A platform team wants to let developers provision cloud services like S3 buckets and databases using
Kubernetes-native APIs, without exposing cloud-specific details. Which tool is best suited for this?
B
Explanation:
Crossplane is the CNCF project designed to extend Kubernetes with the ability to provision and
manage cloud resources via Kubernetes-native APIs. Option B is correct because Crossplane lets
developers use familiar Kubernetes manifests to request resources like S3 buckets, databases, or
VPCs while abstracting provider-specific implementation details. Platform teams can define
compositions and abstractions, providing developers with golden paths that include organizational
guardrails.
Option A (Cluster API) is focused on provisioning Kubernetes clusters themselves, not cloud services.
Option C (Helm) manages Kubernetes application deployments but does not provision external
infrastructure. Option D (OpenTofu) is a Terraform fork that provides IaC but is not Kubernetes-
native.
By leveraging Crossplane, platform teams achieve infrastructure as data and full GitOps integration,
empowering developers to provision services declaratively while ensuring governance and
compliance.
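As a minimal sketch, requesting an S3 bucket through Crossplane looks like any other Kubernetes manifest; this assumes the Upbound AWS provider is installed with a default ProviderConfig:

```yaml
# Sketch: an S3 bucket declared as a Kubernetes-native resource.
# The Crossplane AWS provider reconciles it into a real bucket.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: app-assets
spec:
  forProvider:
    region: us-east-1   # assumed target region
```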
Reference:
— CNCF Crossplane Project Documentation
— CNCF Platforms Whitepaper
— Cloud Native Platform Engineering Study Guide
In a Kubernetes environment, what is the primary distinction between an Operator and a Helm
chart?
C
Explanation:
The key distinction is that Helm charts are packaging and deployment tools, while Operators extend
Kubernetes controllers to provide ongoing lifecycle management. Option C is correct because
Operators continuously reconcile the desired and actual state of custom resources, enabling
advanced behaviors like upgrades, scaling, and failover. Helm charts, by contrast, define templates
and values for deploying applications but do not actively manage them after deployment.
Option A oversimplifies; Operators do more than deploy, while Helm manages deployment
packaging. Option B is incorrect—Helm does not create CRDs by default; Operators often do. Option
D is incorrect because Operators and Helm serve different purposes, though they may complement
each other.
Operators are essential for complex workloads (e.g., databases, Kafka) that require ongoing
operational knowledge codified into Kubernetes-native controllers. Helm is best suited for standard
deployments and reproducibility. Together, they improve Kubernetes extensibility and automation.
Reference:
— CNCF Kubernetes Operator Pattern Documentation
— CNCF Platforms Whitepaper
— Cloud Native Platform Engineering Study Guide
Which of the following observability pillars provides detailed information about the path a request
takes through different services in a distributed system?
A
Explanation:
Traces provide end-to-end visibility into how a request flows through multiple services in a
distributed system. Option A is correct because tracing captures spans (individual service operations)
and stitches them together to form a complete picture of request execution, including latency,
bottlenecks, and dependencies.
Option B (logs) provides detailed event records but lacks contextual linkage across services. Option C
(events) captures discrete system occurrences, not correlated request flows. Option D (metrics)
provides aggregated numerical data like latency or throughput but cannot show request-level detail
across distributed systems.
Tracing is especially critical in microservices architectures where a single request may traverse dozens
of services. Tools like OpenTelemetry, Jaeger, and Zipkin are commonly used to implement
distributed tracing, which is essential for debugging, performance optimization, and improving
reliability.
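As a sketch of the plumbing involved, a minimal OpenTelemetry Collector configuration that receives spans over OTLP and prints them for inspection might look like this (a bare-bones example, not a production pipeline):

```yaml
# Sketch: minimal OpenTelemetry Collector trace pipeline.
# Services export spans via OTLP; the debug exporter prints them.
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  debug: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

In practice the debug exporter would be replaced with a backend exporter (e.g., one targeting Jaeger or an OTLP-compatible store).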
Reference:
— CNCF Observability Whitepaper
— OpenTelemetry CNCF Project Documentation
— Cloud Native Platform Engineering Study Guide
In a GitOps workflow, what is a secure and efficient method for managing secrets within a Git
repository?
B
Explanation:
The secure and efficient way to handle secrets in a GitOps workflow is to use a dedicated secrets
management tool (e.g., HashiCorp Vault, Sealed Secrets, or External Secrets Operator) and store only
references or encrypted placeholders in the Git repository. Option B is correct because Git should
remain the source of truth for configuration, but sensitive values should be abstracted or encrypted
to maintain security.
Option A (environment variables) can supplement secret management but lacks versioning and
auditability when used alone. Option C (encrypting secrets in Git) can work with tools like Mozilla
SOPS, but it still requires external key management, making Option B a more complete and secure
approach. Option D (plain text secrets) is highly insecure and should never be used.
By integrating secrets managers into GitOps workflows, teams achieve both security and
automation, ensuring secrets are delivered securely during reconciliation without exposing sensitive
data in Git.
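As an illustrative sketch of the reference-only pattern, an ExternalSecret commits no sensitive values to Git; this assumes the External Secrets Operator is installed and a ClusterSecretStore named `vault-backend` (a placeholder) points at Vault:

```yaml
# Sketch: only a *reference* to the secret lives in Git.
# The operator fetches the real value from Vault at reconcile time
# and materializes it as a Kubernetes Secret in-cluster.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # assumed store name
    kind: ClusterSecretStore
  target:
    name: db-credentials       # resulting Kubernetes Secret
  data:
  - secretKey: password
    remoteRef:
      key: prod/db             # assumed Vault path
      property: password
```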
Reference:
— CNCF GitOps Principles
— CNCF Supply Chain Security Whitepaper
— Cloud Native Platform Engineering Study Guide
How can an internal platform team effectively support data scientists in leveraging complex AI/ML
tools and infrastructure?
C
Explanation:
The best way for platform teams to support data scientists is by enabling easy access to specialized
AI/ML workflows, tools, and compute resources. Option C is correct because it empowers data
scientists to experiment, train, and deploy models without worrying about the complexities of
infrastructure setup. This aligns with platform engineering’s principle of self-service with guardrails.
Option A (integrating into standard CI/CD) may help, but AI/ML workflows often require specialized
tools like MLflow, Kubeflow, or TensorFlow pipelines. Option B (strict quotas) ensures stability but
does not improve usability or productivity. Option D (UI-driven execution only) restricts flexibility and
reduces the ability of data scientists to adapt workflows to evolving needs.
By offering AI/ML-specific workflows as golden paths within an Internal Developer Platform (IDP),
platform teams improve developer experience for data scientists, accelerate innovation, and ensure
compliance and governance.
Reference:
— CNCF Platforms Whitepaper
— CNCF Platform Engineering Maturity Model
— Cloud Native Platform Engineering Study Guide
Which of the following statements describes the fundamental relationship between Continuous
Integration (CI) and Continuous Delivery (CD) in modern software development?
A
Explanation:
Continuous Integration (CI) and Continuous Delivery (CD) are complementary practices. Option A is
correct: CI is a prerequisite for CD. CI focuses on automating code integration by building, testing,
and validating changes, ensuring code quality and early detection of defects. CD builds upon CI by
automating the process of releasing validated builds into staging and production environments,
making delivery repeatable and reliable.
Option B incorrectly treats them as entirely separate. Option C reverses the relationship, as CD
cannot exist without CI pipelines. Option D is inaccurate because CI and CD are not
interchangeable—they represent distinct stages in the software delivery lifecycle.
Together, CI/CD accelerates software delivery, reduces risk, and improves quality. In platform
engineering, CI/CD pipelines are critical enablers of developer productivity and efficient operations.
Reference:
— CNCF Platforms Whitepaper
— Continuous Delivery Foundation Guidance
— Cloud Native Platform Engineering Study Guide
As a Cloud Native Platform Associate, you are tasked with improving software delivery efficiency
using DORA metrics. Which of the following metrics best indicates the effectiveness of your platform
initiatives?
A
Explanation:
Lead Time for Changes is the DORA metric that best measures the efficiency and impact of platform
initiatives. Option A is correct because it tracks the time from code commit to successful production
deployment, directly reflecting how effectively a platform enables developers to deliver software.
Option B (MTTR) measures resilience and recovery speed, not delivery efficiency. Option C (Change
Failure Rate) measures deployment stability. Option D (SLAs) refers to contractual agreements, not
engineering performance metrics.
By reducing lead time, platform engineering demonstrates its ability to provide self-service,
automation, and streamlined CI/CD workflows. This makes Lead Time for Changes a critical
measurement of platform efficiency and developer experience improvements.
Reference:
— CNCF Platforms Whitepaper
— Accelerate (DORA Report)
— Cloud Native Platform Engineering Study Guide
Which Kubernetes feature allows you to control how Pods communicate with each other and
external services?
B
Explanation:
Kubernetes Network Policies are the feature that controls how Pods communicate with each other
and external services. Option B is correct because Network Policies define rules for ingress
(incoming) and egress (outgoing) traffic at the Pod level, ensuring fine-grained control over
communication pathways within the cluster.
Option A (Pod Security Standards) defines policies around Pod security contexts (e.g., privilege
escalation, root access) but does not control network traffic. Option C (Security Context) is specific to
Pod or container-level permissions, not networking. Option D (RBAC) governs access to Kubernetes
API resources, not Pod-to-Pod traffic.
Network Policies are essential for implementing a zero-trust model in Kubernetes, ensuring that only
authorized services communicate. This enhances both security and compliance, especially in multi-
tenant clusters.
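As a minimal sketch, a NetworkPolicy that allows only frontend Pods to reach backend Pods on one port might look like this (the labels and port are illustrative):

```yaml
# Sketch: default-deny ingress for "app: backend" Pods, except
# traffic from "app: frontend" Pods on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend       # policy applies to these Pods
  policyTypes: [Ingress]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only these Pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Note that NetworkPolicies require a CNI plugin that enforces them (e.g., Calico or Cilium); without one, the policy is accepted but has no effect.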
Reference:
— CNCF Kubernetes Security Best Practices
— CNCF Platforms Whitepaper
— Cloud Native Platform Engineering Study Guide