Questions for the C1000-150 were updated on: Dec 01, 2025
Which type of log collector uses input and output plug-ins to collect data from multiple sources and
to distribute or send data to multiple destinations?
C
Explanation:
Fluentd is a log collector that uses input and output plug-ins to collect data from multiple sources
and to distribute or send data to multiple destinations. This allows Fluentd to collect and process
data from various sources and send it to various destinations with minimal effort.
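As an illustration, a minimal Fluentd configuration pairs an input plug-in (a source block) with an output plug-in (a match block); the file path, tag, and aggregator host below are hypothetical examples, not values from the exam material:

```conf
# Input plug-in: tail container log files (path and tag are examples)
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Output plug-in: forward matched events to a remote aggregator (host is an example)
<match kubernetes.**>
  @type forward
  <server>
    host log-aggregator.example.com
    port 24224
  </server>
</match>
```

Because inputs and outputs are separate plug-ins, additional sources or destinations can be added without changing the existing blocks.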
Reference:
[1] https://docs.fluentd.org/
[2] https://www.fluentd.org/
When dealing with OpenShift Container Platform (OCP) logs and log persistence, which component
collects all node and container logs and stores them in dedicated project indexes?
B
Explanation:
When dealing with OpenShift Container Platform (OCP) logs and log persistence, Fluentd is the
component that collects all node and container logs and stores them in dedicated project indexes.
Fluentd is an open-source data collector that can collect, process, and forward data from a variety of
sources.
Reference:
[1] https://docs.openshift.com/container-platform/4.5/logging/understanding-logging.html
[2] https://docs.fluentd.org/
Which statement is true for the Cloud Pak for Business Automation standard capabilities logging?
A
Explanation:
When configured, Cloud Pak for Business Automation standard capabilities logging collects and
forwards standard output to the specified logging destination. Unless otherwise specified, the logs
are not stored in a dedicated persistent data store and are viewable only through the OpenShift
Container Platform (OCP) web console.
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/bas/logging.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/bas/logging_setup.html
To manually scale up the Process Mining deployment in the IBM Cloud Pak for Business Automation,
which parameter section needs to be updated in the custom resource YAML file?
D
Explanation:
To manually scale up the Process Mining deployment in the IBM Cloud Pak for Business Automation,
the replicas parameter section needs to be updated in the custom resource YAML file. This parameter
allows you to specify the desired number of replicas for the deployment.
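As a sketch, the replica count is raised by editing the custom resource YAML; the exact nesting of the Process Mining section varies by release, so the field names under spec below are assumptions that should be checked against the installed CRD:

```yaml
# Hypothetical excerpt of a Cloud Pak for Business Automation custom resource;
# the section names under spec are examples -- verify against your CRD.
spec:
  pm:
    pm_configuration:
      replicas: 3   # desired number of Process Mining replicas
```

After the custom resource is updated, the operator reconciles the deployment to the requested replica count.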
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/bas/bas_install.html#manually_scale_up_the_process_mining_deployment
[2] https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/
What should be supplied as part of the custom resource prior to deployment if it is desired to use a
root CA signer certificate that is signed by a recognized certificate authority?
A
Explanation:
If a root CA signer certificate signed by a recognized certificate authority is to be used, the root CA
certificate should be supplied as part of the custom resource prior to deployment. This is necessary
for the root CA signer certificate to be validated.
Reference:
[1] https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-certificates/#running-an-https-server
[2] https://kubernetes.io/docs/concepts/cluster-administration/certificates/
When deploying License Service Reporter, the summary card additionally shows a View license usage
link. The link leads to the License Service Reporter user interface that presents the license usage of
your products within the reporting period for a multi-cluster environment.
What is that license usage?
D
Explanation:
When deploying License Service Reporter, the summary card additionally shows a View license usage
link. This link leads to the License Service Reporter user interface that presents the license usage of
your products within the reporting period for a multi-cluster environment. The license usage
presented is Average Daily Usage, which is the average number of licenses used per day in the
reporting period.
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/com.ibm.cic.agent.lmgr.user/license_usage.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/com.ibm.cic.agent.lmgr.user/view_license_usage.html
When setting up a demo environment, an identity provider may not be known. What can be used to
replace the default admin user with a simple identity provider?
A
Explanation:
When setting up a demo environment, an identity provider may not be known. In this case, htpasswd
can be used to replace the default admin user with a simple identity provider. Htpasswd is an Apache
utility for creating and updating user authentication files for the Apache web server; it stores user
names together with hashed passwords.
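A typical flow, sketched below, is to create an htpasswd file and store it as a secret that the cluster OAuth configuration can reference; the user name, password, and secret name are examples only:

```shell
# Create an htpasswd file with one user
# (-c create file, -B bcrypt hashing, -b take the password from the command line)
htpasswd -c -B -b users.htpasswd demoadmin 'S3cure-Pass'

# Store the file as a secret in the openshift-config namespace (name is an example)
oc create secret generic htpasswd-secret \
  --from-file=htpasswd=users.htpasswd -n openshift-config
```

The cluster OAuth resource is then edited to add an identity provider of type HTPasswd that references this secret, after which the htpasswd users can log in.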
Reference:
[1] https://httpd.apache.org/docs/2.4/programs/htpasswd.html
[2] https://httpd.apache.org/docs/2.4/howto/auth.html#gettingstarted
How is the Business Automation Studio web interface accessed?
A
Explanation:
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/bas/getting_started/overview.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/pam/getting_started/overview.html
What does IBM Cloud Pak foundational services monitoring require?
A
Explanation:
IBM Cloud Pak foundational services monitoring requires Role-based access control (RBAC) to
monitor APIs and data. This ensures that only authorized users have access to the data and APIs that
are being monitored. It also ensures that data is only being accessed by users with the appropriate
permissions. Kibana is used as the data source for the Cloud Pak foundational services monitoring.
Adopter customization is only necessary to query and visualize application metrics. Red Hat
OpenShift Container Platform monitoring is not required for Cloud Pak foundational services
monitoring.
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/monitoring/overview.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/monitoring/rbac.html
What is the best data to check for installation and upgrade problems?
C
Explanation:
The best data to check for installation and upgrade problems is Pod status. Pods are the smallest
deployable units in a Kubernetes cluster and contain the necessary components to run an
application. Examining the Pod status can help identify any issues that may be present with the
installation or upgrade process. The other options are not related to this process.
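For instance, pod status can be inspected with standard commands; the namespace and pod name below are placeholders:

```shell
# List pods and their status (look for CrashLoopBackOff, Pending, ImagePullBackOff, etc.)
oc get pods -n cp4ba

# Show events and container states for a failing pod
oc describe pod <pod-name> -n cp4ba

# Read the logs of a container in the pod
oc logs <pod-name> -n cp4ba
```

The events section of `oc describe` usually points directly at the cause, such as a failed image pull or an unbound persistent volume claim.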
Reference:
[1] https://kubernetes.io/docs/concepts/workloads/pods/
[2] https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster-upgrade/
Operator log files can be retrieved from where?
C
Explanation:
Operator log files can be retrieved from the Ansible directory, which is located in the home directory
at ~/.ansible/logs.
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/troubleshoot/operator_logs.html
[2] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-log-dir
Which statement is true about a Cloud Pak for Business Automation starter deployment?
C
Explanation:
A Cloud Pak for Business Automation starter deployment can be upgraded to a production
deployment if required. It is designed to provide a quick and easy way to get started with the
capabilities offered by the Cloud Pak for Business Automation. It is possible to include the
Automation Document Processing capability in a starter deployment. The starter deployment uses
the Operator Lifecycle Manager to deploy and manage the components of the Cloud Pak for Business
Automation.
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/getting_started/overview.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/administer/overview.html
Once a starter deployment of the Cloud Pak for Business Automation is installed, where can access to
the different capability services and applications be found?
B
Explanation:
Once a starter deployment of the Cloud Pak for Business Automation is installed, access to the
different capability services and applications can be found by opening a config map, which contains
the route URLs used to access the components, and a secret, which contains the credentials to use
with the different URLs.
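As a sketch, the access information can be read with commands like the following; the namespace, config map, and secret names vary by deployment and are shown here as placeholders:

```shell
# Route URLs for the installed capabilities (config map name is a placeholder)
oc get configmap <cp4ba-access-info> -n cp4ba -o yaml

# Credentials to use with those URLs (secret name is a placeholder)
oc extract secret/<cp4ba-access-secret> -n cp4ba --to=-
```

`oc extract` decodes the base64-encoded secret values, so the credentials can be read directly from the output.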
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/getting_started/accessing_components.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/getting_started/overview.html
Which parameter is required to forward audit logging?
B
Explanation:
To forward audit logging, the ENABLEAUDITLOGGINGFORWARDING parameter is required. This
parameter is used to enable the forwarding of audit logs to an external service.
Reference:
[1] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/administer/audit.html
[2] https://www.ibm.com/support/knowledgecenter/SSFTN5_2.2.2/cpd/administer/overview.html
Which component can have its certificate refreshed after install?
A
Explanation:
After installation, the certificate of the etcd component can be refreshed. etcd is a key-value store
that holds the Kubernetes cluster state; its certificates secure communication between etcd and
other Kubernetes components.
Reference:
[1] https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
[2] https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#certificate-renew