Questions for the PORTWORX ENTERPRISE PROFESSIONAL were updated on Nov 21, 2025.
An administrator needs to create a backup of a Portworx volume in an AWS S3 bucket and has
already configured the secrets so Portworx can connect to the AWS S3 bucket.
What command is needed to create the backup?
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
After configuring credentials for AWS S3 object storage, the administrator uses the command pxctl cloudsnap backup <volume-name> --cred-id <credentials-name> to create a cloud snapshot backup of a Portworx volume. This command instructs Portworx to take a point-in-time snapshot of the specified volume and upload it securely to the configured S3 bucket using the referenced credentials.
The command leverages Portworx’s cloud snapshot feature for disaster recovery and long-term
retention. Option B relates to creating credentials and is not the backup command. Option C creates
a local snapshot but does not back it up to the cloud. The Portworx CLI documentation highlights
pxctl cloudsnap backup as the core method to perform backups to cloud object storage, enabling
data protection strategies aligned with cloud-native architectures. (Reference: Pure Storage Portworx Cloud Snapshot Guide)
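For illustration, assuming a volume named pvc-data and a credential named aws-s3-cred (both names are hypothetical), the backup command would look like this:

pxctl cloudsnap backup pvc-data --cred-id aws-s3-cred

Backup progress can then be followed with pxctl cloudsnap status.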
A Portworx administrator wants to create a storage class that can be used to create volumes with the
following characteristics:
• Encrypted volume
• Two replicas
Which definition should the administrator use?
A.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted
provisioner: kubernetes.io/portworx-volume
parameters:
  encrypted: "true"
  repl: "2"
B.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted
provisioner: kubernetes.io/portworx-volume
parameters:
  sharedv4: "true"
  repl: "2"
C.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted
provisioner: kubernetes.io/portworx-volume
parameters:
  secure: "true"
  repl: "2"
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
To create a StorageClass in Kubernetes for Portworx volumes that are encrypted and replicated
twice, the correct parameters are encrypted: "true" to enable encryption and repl: "2" to specify two
replicas. Option A accurately sets these parameters, ensuring volumes provisioned with this
StorageClass will be encrypted at rest and maintain two replicas for data redundancy. Option B uses
sharedv4: "true", which relates to NFS-like sharing, not encryption. Option C uses secure: "true",
which is not the recognized parameter for enabling encryption in Portworx StorageClass definitions.
The official Portworx StorageClass parameter documentation confirms encrypted as the correct flag
for encryption and repl to specify replication factor, enabling administrators to enforce data security
and availability policies declaratively through Kubernetes manifests. (Reference: Pure Storage Portworx StorageClass Guide)
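As a usage sketch, a PersistentVolumeClaim that provisions an encrypted, two-replica volume through the StorageClass in option A might look like this (the claim name and size are hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-pvc
spec:
  storageClassName: px-encrypted
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi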
Which storage type does Portworx primarily rely on for storage provisioning?
Answer: C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Portworx primarily relies on Direct Attached Storage (DAS) for its storage provisioning. DAS refers to
physical disks or SSDs directly connected to the nodes running Portworx. Using DAS enables high-
performance, low-latency access to storage resources, crucial for stateful containerized applications.
Portworx aggregates and abstracts these local devices into distributed storage pools, providing
features like replication, encryption, and snapshots. While Portworx integrates with Object Storage
for cloud snapshots and disaster recovery, and can support NFS for certain use cases, the core storage
provisioning and volume management depend on DAS. The Portworx architecture documentation
clarifies that leveraging local node storage is essential for delivering performant, resilient, and
scalable persistent storage in Kubernetes environments. (Reference: Pure Storage Portworx Architecture Guide)
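As a hedged sketch of how DAS is handed to Portworx, the StorageCluster spec can enumerate the local block devices on each node (the device paths and image tag below are hypothetical):

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  image: portworx/oci-monitor:3.1.0
  storage:
    devices:
      - /dev/sdb
      - /dev/sdc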
How do you label a Kubernetes node to provide rack information to Portworx?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Labeling Kubernetes nodes with rack information is achieved using the kubectl label nodes
command. The syntax is kubectl label nodes <node-name> px/rack=<rack-identifier>. This label allows Portworx to understand the physical or logical topology of nodes,
enabling placement strategies that optimize data locality, fault tolerance, and availability based on
rack awareness. Taints and annotations serve different purposes; taints affect pod scheduling by
repelling pods, while annotations provide metadata without influencing scheduling. Portworx uses
node labels extensively for topology-aware volume placement and disaster recovery planning.
Official Portworx documentation recommends labeling nodes with topology identifiers like rack or
zone to enable advanced placement strategies and maintain application resiliency in distributed
environments. (Reference: Pure Storage Portworx Placement Guide)
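A concrete example, assuming a node named node-1 located in rack rack-1 (both names are hypothetical):

kubectl label nodes node-1 px/rack=rack-1

The label can be verified afterwards with kubectl get nodes --show-labels.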
What command can an administrator run to view Portworx alerts?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
To view current alerts raised by Portworx within the cluster, the primary command is pxctl alerts
show. This CLI command lists all active alerts with details such as severity, affected resources, and
timestamps. It helps administrators quickly identify issues impacting cluster health, storage pools,
volumes, or nodes. While Grafana is a powerful visualization tool often used alongside Prometheus
for monitoring, it requires additional setup and does not directly replace the immediate, real-time
alert query functionality of pxctl. The command pxctl cd list alerts is not valid. Portworx
documentation emphasizes pxctl alerts show as the go-to tool for alert inspection during operational
checks and troubleshooting, offering a concise and focused alert view integrated with Portworx’s
internal alerting system. (Reference: Pure Storage Portworx Alerting Guide)
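For example, to list all current alerts from any node with access to the Portworx CLI:

pxctl alerts show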
Which command shows a summary of the Portworx cluster status?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
The command pxctl cluster status provides a concise summary of the Portworx cluster’s health and
operational status. This includes node states, storage pool information, volume statuses, and quorum
information. It is the primary CLI command for administrators to quickly assess cluster health and
detect any issues affecting storage availability or performance. helm list --px is a Helm package
management command unrelated to cluster status, and kubectl get pxstatus is not a valid Kubernetes
or Portworx command. Portworx documentation recommends pxctl cluster status as an essential
monitoring command during routine operations and troubleshooting to ensure the cluster is
functioning properly and that all nodes are communicating and healthy. (Reference: Pure Storage Portworx CLI Guide)
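For example:

pxctl cluster status

In many releases, the shorter pxctl status likewise prints a cluster summary from the local node's perspective, which is useful when triaging a single host.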
What label can be used to migrate Network Policies with Asynchronous DR?
Answer: C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
When using Portworx Asynchronous Disaster Recovery (DR) to migrate workloads and storage across
clusters, network policies can sometimes interfere with seamless failover. The label
skipNetworkPolicyCheck: true can be used to instruct the DR mechanism to bypass strict network
policy checks during migration. This allows applications and volumes to migrate even if network
policies differ or are incompatible between source and destination clusters. Without this label,
migration might be blocked or fail due to network restrictions. By default, network policies are not
always migrated, and strict checks are performed unless explicitly skipped. Portworx DR
documentation details this option as a means to increase migration flexibility, reduce operational
friction, and enable faster recovery during disaster scenarios while administrators work on aligning
network configurations. (Reference: Pure Storage Portworx DR Guide)
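As a hedged sketch of where such a label would sit, assuming it is applied in the NetworkPolicy metadata as the explanation describes (the policy name and namespace are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-policy
  namespace: demo
  labels:
    skipNetworkPolicyCheck: "true"
spec:
  podSelector: {}
  policyTypes:
    - Ingress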
What is a local snapshot in the context of Portworx?
Answer: C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
A local snapshot in Portworx refers to a point-in-time copy of a volume’s data that is stored within the same storage cluster as the original volume. Local snapshots use efficient copy-on-write techniques to minimize storage overhead while preserving the volume state for backup, recovery, or rollback
operations. Unlike cloud or remote snapshots, local snapshots do not require network transfer or
object storage integration, enabling fast snapshot creation and restoration with low latency. They are
ideal for short-term data protection, testing, or recovery scenarios where immediate access to
snapshots is required. Portworx’s snapshot documentation describes local snapshots as the
foundational snapshot type, essential for operational backups and data consistency within
Kubernetes clusters using Portworx storage. (Reference: Pure Storage Portworx Snapshot Guide)
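For illustration, a local snapshot can be taken from the CLI; assuming a volume named pvc-data and a snapshot name snap-1 (both hypothetical):

pxctl volume snapshot create --name snap-1 pvc-data

Because the snapshot stays within the cluster, it can be restored or cloned immediately without any network transfer.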
An infrastructure admin wants to prevent Portworx from being installed on two nodes.
What label does the node need to have?
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Restricting Portworx installation on certain Kubernetes nodes is achieved by labeling those nodes
with px/enabled=false. This label signals the Portworx Operator or installer to exclude these nodes
from Portworx deployment. This allows admins to reserve nodes for other workloads or prevent
Portworx from running on unsupported hardware. The labels px/service=stop and px/storage-node=false are not recognized controls in the Portworx installation process. Portworx deployment
guides consistently document the use of px/enabled=false for node exclusion, providing a simple,
declarative way to control cluster topology and resource assignment during Portworx installations
and upgrades. (Reference: Pure Storage Portworx Deployment Guide)
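For example, to exclude two hypothetical nodes named node-2 and node-3 before installing Portworx:

kubectl label nodes node-2 node-3 px/enabled=false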
How would an administrator schedule automatic backups of a volume using Portworx?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Portworx provides a declarative method to schedule automatic backups by configuring schedule
policies within its Backup and DR framework. These policies specify when and how frequently
backups should occur, retention rules, and target storage locations. By applying schedule policies,
administrators enable Portworx to perform backups automatically without manual intervention or
external scripting. Using cron jobs to run pxctl snapshot create is possible but less integrated, error-prone, and not recommended for scalable environments. The command px backup volume is not a
valid Portworx CLI command. The Portworx backup documentation encourages using native schedule
policies for reliable, maintainable, and policy-driven backup automation, supporting compliance and
disaster recovery strategies. (Reference: Pure Storage Portworx Backup Guide)
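A minimal sketch of such a schedule policy, assuming the Stork SchedulePolicy resource with a hypothetical name, time, and retention count:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: daily-backup
policy:
  daily:
    time: "10:00PM"
    retain: 3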
What is the primary function of the telemetry pod added to each node when telemetry is enabled in
Portworx?
Answer: C
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
When telemetry is enabled, Portworx deploys a telemetry pod on each node whose primary function
is to collect diagnostic and performance data and securely upload it to Pure1, Pure Storage’s cloud-
based management and analytics platform. This pod gathers metrics such as resource utilization,
error rates, and configuration changes, enabling proactive monitoring and predictive analytics. The
data helps Pure1 provide customers with actionable insights, alerting, and automated support
features, improving cluster reliability and reducing operational overhead. The telemetry pod does
not directly monitor node health (which is the role of other components) nor manage network
settings; its focus is on data collection and communication with Pure1. Official Portworx telemetry
documentation highlights this pod as critical for enabling cloud-based health monitoring and
customer support enhancements. (Reference: Pure Storage Portworx Telemetry Guide)
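As a sketch, telemetry is typically enabled declaratively in the StorageCluster spec (the exact field path may vary by version, so treat this as an assumption to verify):

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster
  namespace: kube-system
spec:
  monitoring:
    telemetry:
      enabled: true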
How should a Portworx administrator enable the Alertmanager?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Enabling Alertmanager in Portworx involves creating a Kubernetes Secret containing the
Alertmanager configuration (such as alert routing rules and notification channels) and referencing
this secret in the Portworx StorageCluster manifest. This integration allows Portworx’s monitoring
stack to forward alerts to Alertmanager for centralized alert processing and notifications. Unlike
ConfigMaps, which are generally used for non-sensitive data, Secrets protect sensitive alert
configuration. Enabling Alertmanager via pxctl CLI is not supported as Portworx relies on Kubernetes
declarative configuration for monitoring components. Additionally, deploying Alertmanager
independently and integrating through webhooks requires manual setup but is not the
recommended or integrated approach. Portworx official observability documentation details the
secret-based configuration as the standard and secure method to enable and manage Alertmanager
within Portworx clusters for robust alert handling. (Reference: Pure Storage Portworx Monitoring Guide)
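A hedged example of the secret-creation step, assuming the configuration lives in a local file named alertmanager.yaml and the cluster runs in the kube-system namespace (the secret name follows the convention seen in Portworx documentation, but verify it for your version):

kubectl create secret generic alertmanager-portworx \
  --from-file=alertmanager.yaml -n kube-system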
An infrastructure admin wants to prevent Portworx from being installed on two nodes.
What label does the node need to have?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Portworx uses node labeling as a mechanism to control on which Kubernetes nodes Portworx is
installed and allowed to operate. To restrict Portworx installation on specific nodes, those nodes
should be labeled with px/enabled=false. This label tells the Portworx Operator or installation scripts
to exclude these nodes from Portworx deployment, preventing Portworx daemons from running
there. This feature is useful for reserving nodes for non-storage workloads or avoiding unsupported
hardware. Labels like px/service=stop or px/storage-node=false are not recognized by Portworx as
controls for installation exclusion. The official Portworx deployment and node labeling
documentation specify px/enabled=false as the standard method for controlling node participation in
the storage cluster, offering administrators fine-grained control over cluster topology and resource
allocation. (Reference: Pure Storage Portworx Deployment Guide)
After enabling security in Portworx, the pxctl command returns an “access denied” error.
What action must be taken to allow pxctl to gain access again?
Answer: A
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
When security is enabled in Portworx, all commands, including those issued via the pxctl CLI, require
authentication to access the cluster. If pxctl returns an “access denied” error, it means the CLI does
not have valid credentials. To regain access, administrators must provide authentication details using
the --user and --password flags or configure a context with an authentication token. The username
and password are stored securely within the Kubernetes secret px-admin-token. Using these
credentials ensures pxctl commands are authorized to perform management operations. Without
authentication, Portworx enforces strict access controls to protect sensitive storage operations and
data. While creating new contexts via pxctl context create is a valid method, initially supplying
credentials is mandatory. Failure to authenticate prevents any management activity, reinforcing
Portworx’s security posture. Official security guides outline these steps as fundamental to
transitioning from unsecured to secured cluster operation and managing authenticated access
effectively. (Reference: Pure Storage Portworx Security Guide)
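For illustration, the token could be read from the secret and used to create an authenticated pxctl context (the namespace and data key below are assumptions; check your deployment):

ADMIN_TOKEN=$(kubectl -n portworx get secret px-admin-token \
  -o jsonpath='{.data.auth-token}' | base64 --decode)
pxctl context create admin --token=$ADMIN_TOKEN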
What is the correct procedure to upgrade a Portworx cluster from version 3.0 to 3.1 using the
Portworx Operator?
Answer: B
Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Upgrading Portworx clusters managed by the Kubernetes Operator requires a declarative update to
the StorageCluster Custom Resource Definition (CRD). Specifically, the administrator must edit the
StorageCluster resource and update the .spec.image field to point to the new version image, such as
changing portworx/oci-monitor:3.0 to portworx/oci-monitor:3.1. This change instructs the Operator
to roll out the new image across the cluster nodes, performing a seamless upgrade with minimal
downtime. The pxctl CLI does not perform upgrades in Operator-managed environments; it is
primarily for direct cluster management. The Operator ensures orderly upgrade sequencing, node by
node, handling pod restarts and health checks. Automatic upgrades without manual intervention are
not currently supported to prevent unintentional disruptions. Official Portworx upgrade
documentation details this procedure, emphasizing the importance of version pinning and controlled
rollout for production stability and rollback capabilities during upgrades. (Reference: Pure Storage Portworx Upgrade Guide)
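For example, the image can be updated in place with a patch (the cluster name, namespace, and patch-level tag are hypothetical):

kubectl -n kube-system patch storagecluster px-cluster --type merge \
  -p '{"spec":{"image":"portworx/oci-monitor:3.1.0"}}'

The Operator then rolls the new version out node by node; the rollout can be watched with kubectl -n kube-system get pods -l name=portworx -w.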