Questions for the JN0-214 were updated on: Dec 01, 2025
Regarding the third-party CNI in OpenShift, which statement is correct?
B
Explanation:
OpenShift supports third-party Container Network Interfaces (CNIs) to provide advanced networking
capabilities. However, there are specific requirements and limitations when using third-party CNIs.
Let’s analyze each statement:
A . In OpenShift, you can remove and install a third-party CNI after the cluster has been deployed.
Incorrect:
OpenShift does not allow you to change or replace the CNI plugin after the cluster has been
deployed. The CNI plugin must be specified during the initial deployment.
B . In OpenShift, you must specify the third-party CNI to be installed during the initial cluster
deployment.
Correct:
OpenShift requires you to select and configure the desired CNI plugin (e.g., Calico, Cilium) during the
initial cluster deployment. Once the cluster is deployed, changing the CNI plugin is not supported.
C . OpenShift does not support third-party CNIs.
Incorrect:
OpenShift supports third-party CNIs as alternatives to its default network plugin (OpenShift SDN in
earlier releases, OVN-Kubernetes in current ones). This flexibility allows users to choose the best
networking solution for their environment.
D . In OpenShift, you can have multiple third-party CNIs installed simultaneously.
Incorrect:
OpenShift does not support running multiple CNIs simultaneously. Only one CNI plugin can be active
at a time, whether it is the default SDN or a third-party CNI.
Why This Statement?
Initial Configuration Requirement: OpenShift enforces the selection of a CNI plugin during the initial
deployment to ensure consistent and stable networking across the cluster.
Stability and Compatibility: Changing the CNI plugin after deployment could lead to network
inconsistencies and compatibility issues, which is why it is not allowed.
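As an illustration, the cluster network plugin is selected through the networkType field of the installer's install-config.yaml before deployment. The exact value for a third-party CNI comes from that vendor's documentation, so treat this fragment as a sketch with illustrative values:

```yaml
# Sketch of an install-config.yaml fragment (addresses are illustrative).
# networkType selects the cluster network plugin at install time and
# cannot be changed after the cluster is deployed.
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  networkType: Calico   # third-party CNI; the default would be OVNKubernetes
```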
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers OpenShift networking, including the use of third-party CNIs.
Understanding the limitations and requirements for CNI plugins is essential for deploying and
managing OpenShift clusters effectively.
For example, Juniper Contrail can be integrated as a third-party CNI in OpenShift to provide advanced
networking and security features, but it must be specified during the initial deployment.
Reference:
OpenShift Documentation: Third-Party CNIs
Juniper JNCIA-Cloud Study Guide: OpenShift Networking
You must install a basic Kubernetes cluster.
Which tool would you use in this situation?
A
Explanation:
To install a basic Kubernetes cluster, you need a tool that simplifies the process of bootstrapping and
configuring the cluster. Let’s analyze each option:
A . kubeadm
Correct:
kubeadm is a command-line tool specifically designed to bootstrap a Kubernetes cluster. It
automates the process of setting up the control plane and worker nodes, making it the most suitable
choice for installing a basic Kubernetes cluster.
B . kubectl apply
Incorrect:
kubectl apply is used to deploy resources (e.g., pods, services) into an existing Kubernetes cluster by
applying YAML or JSON manifests. It does not bootstrap or install a new cluster.
C . kubectl create
Incorrect:
kubectl create is another Kubernetes CLI command used to create resources in an existing cluster.
Like kubectl apply, it does not handle cluster installation.
D . dashboard
Incorrect:
The Kubernetes dashboard is a web-based UI for managing and monitoring a Kubernetes cluster. It
requires an already-installed cluster and cannot be used to install one.
Why kubeadm?
Cluster Bootstrapping: kubeadm provides a simple and standardized way to initialize a Kubernetes
cluster, including setting up the control plane and joining worker nodes.
Flexibility: While it creates a basic cluster, it allows for customization and integration with additional
tools like CNI plugins.
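The bootstrapping flow described above can be sketched as the following command sequence; the CIDR, token, and hash are placeholders, not values from any real cluster:

```shell
# On the control-plane node: initialize the cluster.
# --pod-network-cidr should match the CNI plugin installed afterwards.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user (steps printed by kubeadm init).
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

# On each worker node: join using the token printed by kubeadm init
# (token and hash below are placeholders).
kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```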
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers Kubernetes installation methods, including kubeadm.
Understanding how to use kubeadm is essential for deploying and managing Kubernetes clusters
effectively.
For example, Juniper Contrail integrates with Kubernetes clusters created using kubeadm to provide
advanced networking and security features.
Reference:
Kubernetes Documentation: kubeadm
Juniper JNCIA-Cloud Study Guide: Kubernetes Installation
Which operating system must be used for control plane machines in Red Hat OpenShift?
C
Explanation:
Red Hat OpenShift requires specific operating systems for its control plane machines to ensure
stability, security, and compatibility. Let’s analyze each option:
A . Ubuntu
Incorrect:
While Ubuntu is a popular Linux distribution, it is not the recommended operating system for
OpenShift control plane machines. OpenShift relies on Red Hat-specific operating systems for its
infrastructure.
B . Red Hat Enterprise Linux
Incorrect:
Red Hat Enterprise Linux (RHEL) is commonly used for worker nodes in OpenShift clusters. However,
control plane machines require a more specialized operating system optimized for Kubernetes
workloads.
C . Red Hat CoreOS
Correct:
Red Hat CoreOS (more precisely, Red Hat Enterprise Linux CoreOS, or RHCOS) is the required
operating system for OpenShift control plane machines. It is a lightweight, immutable operating
system specifically designed for running containerized workloads in Kubernetes environments.
RHCOS ensures consistency, security, and automatic updates.
D . CentOS
Incorrect:
CentOS is a community-supported Linux distribution based on RHEL. While it can be used in some
Kubernetes environments, it is not supported for OpenShift control plane machines.
Why Red Hat CoreOS?
Immutable Infrastructure: CoreOS is designed to be immutable, meaning updates are applied
automatically and consistently across the cluster.
Optimized for Kubernetes: CoreOS is tailored for Kubernetes workloads, providing a secure and
reliable foundation for OpenShift control plane components.
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers OpenShift architecture, including the operating systems used for
control plane and worker nodes. Understanding the role of Red Hat CoreOS is essential for deploying
and managing OpenShift clusters effectively.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking features,
relying on CoreOS for secure and efficient operation of control plane components.
Reference:
OpenShift Documentation: Red Hat CoreOS
Juniper JNCIA-Cloud Study Guide: OpenShift Architecture
You are asked to deploy a Kubernetes application on your cluster. You want to ensure the application,
and all of its required resources, can be deployed using a single package, with all install-related
variables defined at start time.
Which tool should you use to accomplish this objective?
B
Explanation:
To deploy a Kubernetes application with all its required resources packaged together, a tool that
supports templating and variable management is needed. Let’s analyze each option:
A . A YAML manifest should be used for the application.
Incorrect:
While YAML manifests are used to define Kubernetes resources, they do not provide a mechanism to
package multiple resources or define variables at deployment time. Managing complex applications
with plain YAML files can become cumbersome.
B . A Helm chart should be used for the application.
Correct:
Helm is a package manager for Kubernetes that allows you to define, install, and upgrade
applications using charts. A Helm chart packages all the required resources (e.g., deployments,
services, config maps) into a single unit and allows you to define variables (via values.yaml) that can
be customized at deployment time.
C . An Ansible playbook should be run for the application.
Incorrect:
Ansible is an automation tool that can be used to deploy Kubernetes resources, but it is not
specifically designed for packaging and deploying Kubernetes applications. Helm is better suited for
this purpose.
D . Kubernetes imperative CLI should be used to run the application.
Incorrect:
Using imperative CLI commands (e.g., kubectl create) is not suitable for deploying complex
applications. This approach lacks the ability to package resources or define variables, making it error-
prone and difficult to manage.
Why Helm?
Packaging: Helm charts bundle all application resources into a single package, simplifying
deployment and management.
Customization: Variables defined in values.yaml allow you to customize the deployment without
modifying the underlying templates.
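As a sketch of the install-time customization described above, Helm lets you override chart variables with --set flags or a values file. The release name, chart path, and keys below are illustrative, not from a real chart:

```shell
# Install a chart, overriding individual variables at start time.
helm install myapp ./myapp-chart \
  --set image.tag=1.2.3 \
  --set replicaCount=3

# Alternatively, supply all variables from a single values file.
helm install myapp ./myapp-chart -f production-values.yaml
```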
JNCIA Cloud Reference:
The JNCIA-Cloud certification emphasizes tools for managing Kubernetes applications, including
Helm. Understanding how to use Helm charts is essential for deploying and maintaining complex
applications in Kubernetes environments.
For example, Juniper Contrail integrates with Kubernetes to provide advanced networking features,
ensuring seamless operation of applications deployed via Helm charts.
Reference:
Helm Documentation: Charts
Juniper JNCIA-Cloud Study Guide: Kubernetes Application Management
Which container runtime engine is used by default in OpenShift?
B
Explanation:
OpenShift uses a container runtime engine to manage and run containers within its Kubernetes-
based environment. Let’s analyze each option:
A . containerd
Incorrect:
While containerd is a popular container runtime used in Kubernetes environments, it is not the
default runtime for OpenShift. OpenShift uses a runtime specifically optimized for Kubernetes
workloads.
B . cri-o
Correct:
CRI-O is the default container runtime engine for OpenShift. It is a lightweight, Kubernetes-native
runtime that implements the Container Runtime Interface (CRI) and is optimized for running
containers in Kubernetes environments.
C . Docker
Incorrect:
Docker was historically used as a container runtime in earlier versions of Kubernetes and OpenShift.
However, OpenShift has transitioned to CRI-O as its default runtime, since Docker does not
implement the Kubernetes Container Runtime Interface (CRI) directly and relied on the now-removed
dockershim adapter.
D . runC
Incorrect:
runC is a low-level container runtime that executes containers. While it is used internally by higher-
level runtimes like containerd and cri-o, it is not used directly as the runtime engine in OpenShift.
Why CRI-O?
Kubernetes-Native Design: CRI-O is purpose-built for Kubernetes, ensuring compatibility and
performance.
Lightweight and Secure: CRI-O provides a minimalistic runtime that focuses on running containers
efficiently and securely.
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers container runtimes as part of its curriculum on container
orchestration platforms. Understanding the role of CRI-O in OpenShift is essential for managing
containerized workloads effectively.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking features,
leveraging CRI-O for container execution.
Reference:
OpenShift Documentation: CRI-O Runtime
Juniper JNCIA-Cloud Study Guide: Container Runtimes
When considering OpenShift and Kubernetes, what are two unique resources of OpenShift? (Choose
two.)
A, B
Explanation:
OpenShift extends Kubernetes by introducing additional resources and abstractions to simplify
application development and deployment. Let’s analyze each option:
A . routes
Correct:
Routes are unique to OpenShift and provide a way to expose services externally by mapping a
hostname to a service. They serve the same purpose as Kubernetes Ingress but offer additional
features like TLS termination and wildcard support.
B . build
Correct:
Builds are unique to OpenShift and represent the process of transforming source code into container
images. OpenShift provides build configurations and strategies (e.g., Docker, S2I) to automate this
process, which is not natively available in Kubernetes.
C . ingress
Incorrect:
Ingress is a standard Kubernetes resource used to manage external access to services. While
OpenShift supports Ingress objects (and can satisfy them through Routes), Ingress itself is not unique
to OpenShift.
D . services
Incorrect:
Services are a core Kubernetes resource used to expose applications internally within the cluster.
They are not unique to OpenShift.
Why These Resources?
Routes: Extend Kubernetes Ingress to provide advanced external access capabilities, such as custom
domain mappings and TLS termination.
Builds: Simplify the process of building container images directly within the OpenShift platform,
enabling streamlined CI/CD workflows.
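A minimal Route manifest gives a concrete picture of the OpenShift-specific resource discussed above; the name and hostname below are illustrative:

```yaml
# Sketch of an OpenShift Route (name and host are illustrative).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com
  to:
    kind: Service
    name: myapp        # the Service this Route exposes externally
  tls:
    termination: edge  # TLS is terminated at the OpenShift router
```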
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers OpenShift's unique resources as part of its curriculum on
container orchestration platforms. Understanding the differences between OpenShift and
Kubernetes resources is essential for leveraging OpenShift's full capabilities.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking features,
ensuring secure and efficient traffic routing for Routes and Builds.
Reference:
OpenShift Documentation: Routes and Builds
Juniper JNCIA-Cloud Study Guide: OpenShift vs. Kubernetes
Which two consoles are provided by the OpenShift Web UI? (Choose two.)
A, B
Explanation:
OpenShift provides a web-based user interface (Web UI) that offers two distinct consoles tailored to
different user roles. Let’s analyze each option:
A . administrator console
Correct:
The administrator console is designed for cluster administrators. It provides tools for managing
cluster resources, configuring infrastructure, monitoring performance, and enforcing security
policies.
B . developer console
Correct:
The developer console is designed for application developers. It focuses on building, deploying, and
managing applications, including creating projects, defining pipelines, and monitoring application
health.
C . operational console
Incorrect:
There is no "operational console" in OpenShift. This term does not correspond to any official
OpenShift Web UI component.
D . management console
Incorrect:
While "management console" might sound generic, OpenShift specifically refers to the administrator
console for management tasks. This term is not officially used in the OpenShift Web UI.
Why These Consoles?
Administrator Console: Provides a centralized interface for managing the cluster's infrastructure and
ensuring smooth operation.
Developer Console: Empowers developers to focus on application development without needing to
interact with low-level infrastructure details.
JNCIA Cloud Reference:
The JNCIA-Cloud certification emphasizes understanding OpenShift's Web UI and its role in cluster
management and application development. Recognizing the differences between the administrator
and developer consoles is essential for effective collaboration in OpenShift environments.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking features,
leveraging both consoles for seamless operation.
Reference:
OpenShift Documentation: Web Console Overview
Juniper JNCIA-Cloud Study Guide: OpenShift Web UI
What are two available installation methods for an OpenShift cluster? (Choose two.)
A, C
Explanation:
OpenShift provides multiple methods for installing and deploying clusters, depending on the level of
control and automation desired. Let’s analyze each option:
A . installer-provisioned infrastructure
Correct:
Installer-provisioned infrastructure (IPI) is an automated installation method where the OpenShift
installer provisions and configures the underlying infrastructure (e.g., virtual machines, networking)
using cloud provider APIs or bare-metal platforms. This method simplifies deployment by handling
most of the setup automatically.
B . kubeadm
Incorrect:
kubeadm is a tool used to bootstrap Kubernetes clusters manually. While it is widely used for
Kubernetes installations, it is not specific to OpenShift and is not an official installation method for
OpenShift clusters.
C . user-provisioned infrastructure
Correct:
User-provisioned infrastructure (UPI) is a manual installation method where users prepare and
configure the infrastructure (e.g., virtual machines, load balancers, DNS) before deploying OpenShift.
This method provides greater flexibility and control over the environment but requires more effort
from the user.
D . kubespray
Incorrect:
Kubespray is an open-source tool used to deploy Kubernetes clusters on various infrastructures. Like
kubeadm, it is not specific to OpenShift and is not an official installation method for OpenShift
clusters.
Why These Methods?
Installer-Provisioned Infrastructure (IPI): Automates the entire installation process, making it ideal for
users who want a quick and hassle-free deployment.
User-Provisioned Infrastructure (UPI): Allows advanced users to customize the infrastructure and
tailor the deployment to their specific needs.
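The two methods above correspond to different openshift-install invocations. As a sketch (the asset directory name is illustrative):

```shell
# IPI flow: the installer provisions the infrastructure itself,
# driven by an install-config.yaml in the asset directory.
openshift-install create cluster --dir ./mycluster

# UPI flow: generate ignition configs, then provision and boot the
# machines yourself before the cluster comes up.
openshift-install create ignition-configs --dir ./mycluster
```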
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers OpenShift installation methods as part of its curriculum on
container orchestration platforms. Understanding the differences between IPI and UPI is essential for
deploying OpenShift clusters effectively.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking features,
regardless of whether the cluster is deployed using IPI or UPI.
Reference:
OpenShift Documentation: Installation Methods
Juniper JNCIA-Cloud Study Guide: OpenShift Deployment
You want to view pods with their IP addresses in OpenShift.
Which command would you use to accomplish this task?
B
Explanation:
OpenShift provides various commands to view and manage pods. Let’s analyze each option:
A . oc qet pods -o vaml
Incorrect:
The command contains two typos (qet instead of get, and vaml instead of yaml). Even if corrected,
-o yaml dumps the full pod objects, which is far more verbose than needed to quickly view pod IP
addresses.
B . oc get pods -o wide
Correct:
The oc get pods -o wide command displays detailed information about pods, including their names,
statuses, and IP addresses. The -o wide flag extends the output to include additional details like pod
IPs and node assignments.
C . oc qet all
Incorrect:
The command contains a typo (qet instead of get). Even if corrected, oc get all lists all resources (e.g.,
pods, services, deployments) but does not display pod IP addresses.
D . oc get pods
Incorrect:
The oc get pods command lists pods with basic information such as name, status, and restart count.
It does not include pod IP addresses unless the -o wide flag is used.
Why oc get pods -o wide?
Detailed Output: The -o wide flag provides extended information, including pod IP addresses, which
is essential for troubleshooting and network configuration.
Ease of Use: This command is simple and effective for viewing pod details in OpenShift.
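For illustration, the wide output adds IP and NODE columns to the basic listing; the pod name, IP, and node below are made-up sample values:

```shell
oc get pods -o wide
# NAME       READY   STATUS    RESTARTS   AGE   IP            NODE       ...
# web-1abc   1/1     Running   0          5m    10.128.2.14   worker-0   ...
```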
JNCIA Cloud Reference:
The JNCIA-Cloud certification emphasizes understanding OpenShift CLI commands and their outputs.
Knowing how to retrieve detailed pod information is essential for managing and troubleshooting
OpenShift environments.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking features,
relying on accurate pod IP information for traffic routing and segmentation.
Reference:
OpenShift CLI Documentation: oc get pods Command
Juniper JNCIA-Cloud Study Guide: OpenShift Networking
Which OpenShift resource represents a Kubernetes namespace?
A
Explanation:
OpenShift is a Kubernetes-based container platform that introduces additional abstractions and
terminologies. Let’s analyze each option:
A . Project
Correct:
In OpenShift, a Project represents a Kubernetes namespace with additional capabilities. It provides a
logical grouping of resources and enables multi-tenancy by isolating resources between projects.
B . ResourceQuota
Incorrect:
A ResourceQuota is a Kubernetes object that limits the amount of resources (e.g., CPU, memory) that
can be consumed within a namespace. While it is used within a project, it is not the same as a
namespace.
C . Build
Incorrect:
A Build is an OpenShift-specific resource used to transform source code into container images. It is
unrelated to namespaces or projects.
D . Operator
Incorrect:
An Operator is a Kubernetes extension that automates the management of complex applications. It
operates within a namespace but does not represent a namespace itself.
Why Project?
Namespace Abstraction: OpenShift Projects extend Kubernetes namespaces by adding features like
user roles, quotas, and lifecycle management.
Multi-Tenancy: Projects enable organizations to isolate workloads and resources for different teams
or applications.
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers OpenShift and its integration with Kubernetes. Understanding
the relationship between Projects and namespaces is essential for managing OpenShift
environments.
For example, Juniper Contrail integrates with OpenShift to provide advanced networking and security
features for Projects, ensuring secure and efficient resource isolation.
Reference:
OpenShift Documentation: Projects
Juniper JNCIA-Cloud Study Guide: OpenShift and Kubernetes
Which two statements are correct about Kubernetes resources? (Choose two.)
A, B
Explanation:
Kubernetes resources are the building blocks of Kubernetes clusters, enabling the deployment and
management of applications. Let’s analyze each statement:
A . A ClusterIP type service can only be accessed within a Kubernetes cluster.
Correct:
A ClusterIP service is the default type of Kubernetes service. It exposes the service internally within
the cluster, assigning it a virtual IP address that is accessible only to other pods or services within the
same cluster. External access is not possible with this service type.
B . A daemonSet ensures that a replica of a pod is running on all nodes.
Correct:
A daemonSet ensures that a copy of a specific pod is running on every node in the cluster (or a
subset of nodes if specified). This is commonly used for system-level tasks like logging agents or
monitoring tools that need to run on all nodes.
C . A deploymentConfig is a Kubernetes resource.
Incorrect:
deploymentConfig is a concept specific to OpenShift, not standard Kubernetes. In Kubernetes, the
equivalent resource is called a Deployment, which manages the desired state of pods and
ReplicaSets.
D. NodePort service exposes the service externally by using a cloud provider load balancer.
Incorrect:
A NodePort service exposes the service on a static port on each node in the cluster, allowing external
access via the node's IP address and the assigned port. However, it does not use a cloud provider
load balancer. The LoadBalancer service type is the one that leverages cloud provider load balancers
for external access.
Why These Statements?
ClusterIP: Ensures internal-only communication, making it suitable for backend services that do not
need external exposure.
DaemonSet: Guarantees that a specific pod runs on all nodes, ensuring consistent functionality
across the cluster.
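The two correct behaviors can be sketched as manifests; the names, labels, and image below are illustrative:

```yaml
# Sketch: a ClusterIP Service (reachable only inside the cluster)
# and a DaemonSet (one pod replica per node).
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP        # the default Service type; no external access
  selector:
    app: backend
  ports:
  - port: 8080
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: example/log-agent:latest   # e.g., a node-level logging agent
```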
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers Kubernetes resources and their functionalities, including
services, DaemonSets, and Deployments. Understanding these concepts is essential for managing
Kubernetes clusters effectively.
For example, Juniper Contrail integrates with Kubernetes to provide advanced networking features
for services and DaemonSets, ensuring seamless operation of distributed applications.
Reference:
Kubernetes Documentation: Services, DaemonSets, and Deployments
Juniper JNCIA-Cloud Study Guide: Kubernetes Resources
Which component of Kubernetes runs on each node maintaining network rules?
B
Explanation:
Kubernetes components work together to ensure seamless communication and network
functionality within the cluster. Let’s analyze each option:
A . container runtime
Incorrect: The container runtime (e.g., containerd, cri-o) is responsible for running containers on
worker nodes. It does not maintain network rules.
B . kube-proxy
Correct: kube-proxy is a Kubernetes component that runs on each node and maintains network rules
to enable communication between services and pods. It ensures proper load balancing and routing
of traffic.
C . kubelet
Incorrect: The kubelet is responsible for managing the state of pods and containers on a node. It does
not handle network rules.
D . kube controller
Incorrect: The kube controller manages the desired state of the cluster, such as maintaining the
correct number of replicas. It does not directly manage network rules.
Why kube-proxy?
Network Rules: kube-proxy implements iptables or IPVS rules to route traffic between services and
pods, ensuring seamless communication.
Load Balancing: It provides basic load balancing for services, distributing traffic across available pods.
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers Kubernetes networking, including the role of kube-proxy.
Understanding how kube-proxy works is essential for managing network communication in
Kubernetes clusters.
For example, Juniper Contrail integrates with Kubernetes to enhance networking capabilities,
leveraging kube-proxy for service-level traffic management.
Reference:
Kubernetes Documentation: kube-proxy
Juniper JNCIA-Cloud Study Guide: Kubernetes Networking
Which key-value store is used as Kubernetes’ backend store?
A
Explanation:
Kubernetes relies on a distributed key-value store to maintain its state and configuration data. Let’s
analyze each option:
A . etcd
Correct: etcd is a distributed key-value store used as Kubernetes’ backend store. It stores all cluster
data, including configurations, states, and metadata, ensuring consistency and reliability across the
cluster.
B . firebase
Incorrect: Firebase is a Backend-as-a-Service (BaaS) platform for building mobile and web
applications. It is unrelated to Kubernetes.
C . postgres
Incorrect: PostgreSQL is a relational database management system. While it can be used for other
purposes, it is not the backend store for Kubernetes.
D . mongodb
Incorrect: MongoDB is a NoSQL database used for storing unstructured data. It is not used as
Kubernetes’ backend store.
Why etcd?
High Availability: etcd is designed for distributed systems, providing strong consistency and fault
tolerance.
Cluster State Management: Kubernetes uses etcd to store critical data such as pod states, service
definitions, and configuration details.
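As an illustration of how cluster state lives in etcd, objects are stored under the /registry prefix and can be listed with etcdctl. The endpoint and certificate paths below are typical kubeadm locations and are shown as assumptions:

```shell
# Sketch: list the keys of all pod objects stored in etcd
# (etcd API v3; endpoint and cert paths are illustrative).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/pods --prefix --keys-only
```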
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers Kubernetes architecture, including the role of etcd.
Understanding etcd’s function is essential for managing and troubleshooting Kubernetes clusters.
For example, Juniper Contrail integrates with Kubernetes to provide networking and security
features, relying on etcd for cluster state management.
Reference:
Kubernetes Documentation: etcd
Juniper JNCIA-Cloud Study Guide: Kubernetes Architecture
You have built a Kubernetes environment offering virtual machine hosting using KubeVirt.
Which type of service have you created in this scenario?
C
Explanation:
Kubernetes combined with KubeVirt enables the hosting of virtual machines (VMs) alongside
containerized workloads. This setup aligns with a specific cloud service model. Let’s analyze each
option:
A . Software as a Service (SaaS)
Incorrect: SaaS delivers fully functional applications over the internet, such as Salesforce or Google
Workspace. Hosting VMs using Kubernetes and KubeVirt does not fall under this category.
B . Platform as a Service (PaaS)
Incorrect: PaaS provides a platform for developers to build, deploy, and manage applications without
worrying about the underlying infrastructure. While Kubernetes itself can be considered a PaaS
component, hosting VMs goes beyond this model.
C . Infrastructure as a Service (IaaS)
Correct: IaaS provides virtualized computing resources such as servers, storage, and networking over
the internet. By hosting VMs using Kubernetes and KubeVirt, you are offering infrastructure-level
services, which aligns with the IaaS model.
D . Bare Metal as a Service (BMaaS)
Incorrect: BMaaS provides direct access to physical servers without virtualization. Kubernetes and
KubeVirt focus on virtualized environments, making this option incorrect.
Why IaaS?
Virtualized Resources: Hosting VMs using Kubernetes and KubeVirt provides virtualized
infrastructure, which is the hallmark of IaaS.
Scalability and Flexibility: Users can provision and manage VMs on-demand, similar to traditional
IaaS offerings like AWS EC2 or OpenStack.
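A minimal KubeVirt VirtualMachine manifest shows what "VM hosting on Kubernetes" looks like in practice; the name, sizing, and disk image are illustrative:

```yaml
# Sketch of a KubeVirt VirtualMachine (name, sizing, and image
# are illustrative).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest
```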
JNCIA Cloud Reference:
The JNCIA-Cloud certification emphasizes understanding cloud service models, including IaaS.
Recognizing how Kubernetes and KubeVirt fit into the IaaS paradigm is essential for designing hybrid
cloud solutions.
For example, Juniper Contrail integrates with Kubernetes and KubeVirt to provide advanced
networking and security features for IaaS-like environments.
Reference:
KubeVirt Documentation
Juniper JNCIA-Cloud Study Guide: Cloud Service Models
The openstack user list command uses which OpenStack service?
B
Explanation:
OpenStack provides various services to manage cloud infrastructure resources, including user
management. Let’s analyze each option:
A . Cinder
Incorrect: Cinder is the OpenStack block storage service that provides persistent storage volumes for
virtual machines. It is unrelated to managing users.
B . Keystone
Correct: Keystone is the OpenStack identity service responsible for authentication, authorization, and
user management. The openstack user list command interacts with Keystone to retrieve a list of
users in the OpenStack environment.
C . Nova
Incorrect: Nova is the OpenStack compute service that manages virtual machine instances. It does
not handle user management.
D . Neutron
Incorrect: Neutron is the OpenStack networking service that manages virtual networks, routers, and
IP addresses. It is unrelated to user management.
Why Keystone?
Identity Management: Keystone serves as the central identity provider for OpenStack, managing
users, roles, and projects.
API Integration: Commands like openstack user list rely on Keystone's APIs to query and display user
information.
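For illustration, the command and its tabular output look like the following; the IDs and names are made-up sample values:

```shell
# List users known to Keystone (output values are illustrative).
openstack user list
# +----------------------------------+-------+
# | ID                               | Name  |
# +----------------------------------+-------+
# | 4f5c8a1e...                      | admin |
# | 9b2d7c3f...                      | demo  |
# +----------------------------------+-------+
```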
JNCIA Cloud Reference:
The JNCIA-Cloud certification covers OpenStack services, including Keystone, as part of its cloud
infrastructure curriculum. Understanding Keystone’s role in user management is essential for
operating OpenStack environments.
For example, Juniper Contrail integrates with OpenStack Keystone to enforce authentication and
authorization for network resources.
Reference:
OpenStack Keystone Documentation
Juniper JNCIA-Cloud Study Guide: OpenStack Services