Oracle 1Z0-1084-25 Exam Questions

Questions for the 1Z0-1084-25 exam were updated on: Dec 01, 2025

Page 1 out of 7. Viewing questions 1-15 out of 100

Question 1

In the shared responsibility model, who should perform patching, upgrading, and maintaining of the
worker nodes in provisioned Oracle Container Engine for Kubernetes (OKE) clusters?

  • A. Oracle Support does it.
  • B. It is the responsibility of the customer.
  • C. It is an automated process.
Answer: B

Explanation:
In the shared responsibility model, Oracle secures the underlying cloud infrastructure and platform services, while customers are responsible for securing their own data and applications. For provisioned OKE clusters, Oracle manages the Kubernetes control plane (master nodes), while customers manage the data plane (worker nodes). Patching, upgrading, and maintaining the worker nodes is therefore the customer's responsibility. Customers can use tools such as Terraform or kubectl to automate these tasks.
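
As the explanation notes, customers can automate worker-node maintenance themselves. A minimal sketch follows, assuming the open source kubernetes Python client, a kubeconfig for the cluster, and a hypothetical node name; it cordons the node so it stops receiving new pods while the OS is patched, then uncordons it. In practice you would usually also drain the node (evict its pods) first, which is what kubectl drain automates.

    # Minimal sketch: cordon an OKE worker node before customer-driven patching, then uncordon it.
    # Assumes the open source "kubernetes" Python client and a kubeconfig for the OKE cluster.
    from kubernetes import client, config

    config.load_kube_config()          # use the local kubeconfig for the OKE cluster
    v1 = client.CoreV1Api()

    NODE = "10.0.10.5"                 # hypothetical worker node name

    # Cordon: mark the node unschedulable so no new pods land on it during patching.
    v1.patch_node(NODE, {"spec": {"unschedulable": True}})

    # ... patch or upgrade the node operating system out of band ...

    # Uncordon: make the node schedulable again once maintenance is complete.
    v1.patch_node(NODE, {"spec": {"unschedulable": False}})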

Question 2

Which technique is used for testing the entire user flow as well as the moving parts of a cloud native
app, ensuring that there are no high-level discrepancies?

  • A. Contract Testing
  • B. Integration Testing
  • C. Unit Testing
  • D. Component Testing
  • E. End-to-end Testing
Answer: E

Explanation:
End-to-end testing is a technique that involves checking the entire user flow as well as the moving parts of a cloud native app, ensuring that there are no high-level discrepancies. End-to-end testing simulates real user scenarios and validates the functionality, performance, reliability, and security of the app from start to finish. End-to-end testing has several benefits:
Comprehensive testing: You can test your app as a whole and verify that all the components work together as expected.
User-centric testing: You can test your app from the user's perspective and ensure that it meets the user's needs and expectations.
Quality assurance: You can test your app in a realistic environment and identify any issues or defects before releasing it to the users.
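
As a rough illustration of the idea (not part of the exam material), the sketch below drives a complete, hypothetical user flow over HTTP with the requests library; the base URL, endpoints, and payloads are assumptions.

    # Minimal end-to-end test sketch: exercise a whole user flow against a running app.
    # Hypothetical base URL, endpoints, and payloads; requires the "requests" package.
    import requests

    BASE = "https://app.example.com"   # assumed deployment under test

    def test_signup_login_and_order():
        session = requests.Session()

        # Step 1: sign up a new user.
        r = session.post(f"{BASE}/api/signup",
                         json={"email": "e2e@example.com", "password": "s3cret"})
        assert r.status_code == 201

        # Step 2: log in and keep the session cookie or token.
        r = session.post(f"{BASE}/api/login",
                         json={"email": "e2e@example.com", "password": "s3cret"})
        assert r.status_code == 200

        # Step 3: perform a real business action end to end.
        r = session.post(f"{BASE}/api/orders", json={"item": "sku-123", "qty": 1})
        assert r.status_code == 201

        # Step 4: verify the result the way a user would see it.
        r = session.get(f"{BASE}/api/orders")
        assert any(o["item"] == "sku-123" for o in r.json())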

Question 3

Which of these is a valid use case for OCI Queue?

  • A. Managing network traffic between services
  • B. Storing and retrieving large files
  • C. Sending real-time streaming data
  • D. Building decoupled and scalable systems
Answer: D

Explanation:
OCI Queue is a fully managed serverless service that helps decouple systems and enable asynchronous operations. Queue handles high-volume transactional data that requires independently processed messages without loss or duplication. A valid use case for OCI Queue is building decoupled and scalable systems, such as event-driven architectures or microservices-based applications. For example, you can use Queue to decouple your application and build an event-driven architecture. Decoupling ensures that individual application components can scale independently and that you can future-proof your design so that, as new application components are built, they can publish or subscribe to the queue.
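
To make the decoupling pattern concrete, here is a small conceptual sketch using an in-process queue as a stand-in for OCI Queue (it is not OCI SDK code): the producer and consumer share only the queue, so either side can be scaled or replaced independently. With OCI Queue, the same shape applies, with the put, get, and delete message operations of the managed service replacing the in-process queue.

    # Conceptual sketch of queue-based decoupling, using an in-process queue as a
    # stand-in for OCI Queue: the producer and consumer know nothing about each other.
    import queue
    import threading

    q = queue.Queue()

    def producer():
        # Publishes events without knowing who consumes them or how many consumers exist.
        for order_id in range(5):
            q.put({"event": "order_created", "order_id": order_id})
        q.put(None)  # sentinel so this simple demo consumer knows when to stop

    def consumer():
        # Processes each message independently; more consumers could be started to scale out.
        while True:
            msg = q.get()
            if msg is None:
                break
            print(f"processing order {msg['order_id']}")

    producer_thread = threading.Thread(target=producer)
    consumer_thread = threading.Thread(target=consumer)
    producer_thread.start()
    consumer_thread.start()
    producer_thread.join()
    consumer_thread.join()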

Question 4

In the DevOps lifecycle, what is the difference between continuous delivery and continuous
deployment? (Choose two.)

  • A. Continuous delivery involves automation of developer tasks, while continuous deployment involves manual operational tasks.
  • B. Continuous delivery utilizes automatic deployment to a development environment, while continuous deployment involves automatic deployment to a production environment.
  • C. Continuous delivery requires more automatic linting, while continuous deployment testing must be run manually.
  • D. Continuous delivery is a process that initiates deployment manually, while continuous deployment is based on automating the deployment process.
Answer: B, D

Explanation:
The two correct differences between continuous delivery and continuous deployment in the DevOps lifecycle are:
Continuous delivery is a process that initiates deployment manually, while continuous deployment is based on automating the deployment process. In continuous delivery, the software is ready for deployment, but the decision to deploy is made manually by a human. In continuous deployment, once the software passes all the necessary tests and quality checks, it is deployed automatically without human intervention.
Continuous delivery involves automatic deployment to a development environment, while continuous deployment involves automatic deployment to a production environment. In continuous delivery, the software is automatically deployed to a development or staging environment for further testing and validation, but the actual deployment to the production environment is performed manually. In continuous deployment, the software is automatically deployed to the production environment, eliminating the need for manual intervention in the deployment process.
These differences highlight the level of automation and human involvement in the deployment process between the two approaches.
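
To make the distinction concrete, here is a toy sketch (not real pipeline tooling; the stage functions are hypothetical placeholders): both pipelines automate build, test, and staging, and differ only in whether the production deploy waits for a manual approval.

    # Toy sketch contrasting continuous delivery with continuous deployment.
    # Both pipelines run the same automated stages; they differ only in how the
    # production deploy is triggered. All stage functions are hypothetical placeholders.

    def build_test_and_stage():
        print("build, run automated tests, deploy to a staging environment")  # automated in both

    def deploy_to_production():
        print("deploy to production")

    def continuous_delivery_pipeline():
        build_test_and_stage()
        # The release candidate is always deployable, but a human makes the final call.
        if input("Approve production release? [y/N] ").strip().lower() == "y":
            deploy_to_production()

    def continuous_deployment_pipeline():
        build_test_and_stage()
        # No manual gate: anything that passes the automated checks goes straight to production.
        deploy_to_production()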

Question 5

You have two microservices, A and B, running in production. Service A relies on APIs from service B.
You want to test changes to service A without deploying all of its dependencies, which include
service B. Which approach should you take to test service A?

  • A. Test using API mocks.
  • B. Test the APIs in private environments.
  • C. Test against production APIs.
  • D. There is no need to explicitly test APIs.
Answer: A

Explanation:
API mocking is a technique that simulates the behavior of real APIs without requiring the actual implementation or deployment of the dependent services. API mocking allows you to test changes to service A without deploying all of its dependencies, such as service B, by creating mock responses for the APIs that service A relies on. API mocking has several benefits:
Faster testing: You can test service A without waiting for service B to be ready or available, which reduces the testing time and feedback loop.
Isolated testing: You can test service A in isolation from service B, which eliminates the possibility of external factors affecting the test results or causing errors.
Controlled testing: You can test service A with different scenarios and edge cases by creating mock responses that mimic various situations, such as success, failure, timeout, etc.
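
A minimal sketch of the technique follows, assuming Python's unittest.mock; service A's function, the service B endpoint, and the payload are hypothetical.

    # Minimal API-mocking sketch: test service A's logic while service B's API is replaced
    # by a mock. The function, endpoint, and payload are hypothetical illustrations.
    from unittest import mock

    import requests

    def get_order_total(order_id):
        """Service A logic: fetch order lines from service B and total them."""
        resp = requests.get(f"http://service-b/orders/{order_id}/lines")
        resp.raise_for_status()
        return sum(line["price"] * line["qty"] for line in resp.json())

    def test_get_order_total_with_mocked_service_b():
        fake_response = mock.Mock()
        fake_response.json.return_value = [
            {"price": 10.0, "qty": 2},
            {"price": 5.0, "qty": 1},
        ]
        fake_response.raise_for_status.return_value = None

        # Patch the outbound call so no real service B deployment is needed.
        with mock.patch("requests.get", return_value=fake_response) as fake_get:
            assert get_order_total(42) == 25.0
            fake_get.assert_called_once_with("http://service-b/orders/42/lines")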

Question 6

What can you use to dynamically make Kubernetes resources discoverable to public DNS servers?
(Choose the best answer.)

  • A. kubeDNS
  • B. DynDNS
  • C. CoreDNS
  • D. ExternalDNS
Answer: D

Explanation:
To dynamically make Kubernetes resources discoverable to public DNS servers, you can use
ExternalDNS. ExternalDNS is a Kubernetes add-on that automates the management of DNS records
for your Kubernetes services and ingresses. It can be configured to monitor the changes in your
Kubernetes resources and automatically update DNS records in a supported DNS provider. By
integrating ExternalDNS with your Kubernetes cluster, you can ensure that the DNS records for your
services and ingresses are automatically created, updated, or deleted based on changes in your
Kubernetes resources. This allows your Kubernetes resources to be discoverable by external systems
through public DNS servers.
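
As a small hedged sketch (the Service name, namespace, and hostname are assumptions), this adds the external-dns.alpha.kubernetes.io/hostname annotation that ExternalDNS watches, using the open source kubernetes Python client; the same annotation is more commonly set directly in the Service manifest.

    # Sketch: annotate a Service so ExternalDNS publishes a DNS record for it.
    # Assumes the "kubernetes" Python client, a LoadBalancer Service named "web"
    # in namespace "default", and a hypothetical hostname.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    patch = {
        "metadata": {
            "annotations": {
                # ExternalDNS watches this annotation and creates or updates the record
                # in the configured DNS provider.
                "external-dns.alpha.kubernetes.io/hostname": "shop.example.com"
            }
        }
    }

    v1.patch_namespaced_service(name="web", namespace="default", body=patch)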

Question 7

A DevOps engineer is troubleshooting the Meshifyd application, which is running in an Oracle Cloud
Infrastructure (OCI) environment. The engineer has set up the OCI Logging service to store access
logs for the application but notices that the logs from the Meshifyd application are not showing up in
the logging service. The engineer suspects that there might be an issue with the logging
configuration. Which two statements are potential reasons for logs from the Meshifyd application
not showing up in the OCI Logging service?

  • A. The logconfig.json file has incorrect or missing OCID for the custom log in the logObjectId field.
  • B. The OCI Logging service is set up to store access logs by creating a log group and custom log within the same compartment.
  • C. The logconfig.json file has incorrect or missing information in the application namespace in the paths field.
  • D. The logconfig.json file has incorrect or missing information in the application namespace in the src field.
  • E. The logconfig.json file has incorrect or missing OCID for the custom log group in the logGroupObjectId field.
Answer: A, E

Explanation:
The logconfig.json file is a configuration file that specifies how the Unified Monitoring Agent collects and uploads custom logs to the OCI Logging service. The file contains an array of objects, each representing a custom log configuration. Each custom log configuration object has the following fields:
logGroupObjectId: the OCID of the log group where the custom log is stored.
logObjectId: the OCID of the custom log.
paths: an array of paths to files or directories containing the custom logs.
src: a regular expression that matches the files containing the custom logs.
parser: a parser definition that specifies how to parse the custom logs.
If the logconfig.json file has an incorrect or missing OCID for the custom log in the logObjectId field, or an incorrect or missing OCID for the custom log group in the logGroupObjectId field, the Unified Monitoring Agent cannot upload the custom logs to the OCI Logging service. These are therefore potential reasons for logs from the Meshifyd application not showing up in the OCI Logging service.
Verified Reference: Unified Monitoring Agent Configuration File
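
As a quick troubleshooting aid, here is a sketch that assumes the field names described above and the usual OCID prefixes: it loads logconfig.json and flags empty or implausible log and log group OCIDs, which are exactly the two failure modes in the correct options.

    # Sketch: sanity-check the OCIDs in a logconfig.json custom log configuration.
    # Assumes the field names described in the explanation above; the path is hypothetical.
    import json

    def check_logconfig(path="logconfig.json"):
        with open(path) as f:
            configs = json.load(f)          # array of custom log configuration objects

        for i, cfg in enumerate(configs):
            log_id = cfg.get("logObjectId", "")
            group_id = cfg.get("logGroupObjectId", "")

            # A custom log OCID normally starts with "ocid1.log.", and a log group
            # OCID with "ocid1.loggroup."; anything else is worth a closer look.
            if not log_id.startswith("ocid1.log."):
                print(f"entry {i}: logObjectId looks wrong or missing: {log_id!r}")
            if not group_id.startswith("ocid1.loggroup."):
                print(f"entry {i}: logGroupObjectId looks wrong or missing: {group_id!r}")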

Question 8

Which open source engine is used by Oracle Cloud Infrastructure (OCI) to power Oracle Functions?

  • A. Knative
  • B. Kubeless
  • C. Apache OpenWhisk
  • D. Fn Project
Answer: D

Explanation:
Fn Project is the open source engine that is used by OCI to power Oracle Functions. Fn Project is an open source, container native, serverless platform that can be run anywhere: any cloud or on-premises. Fn Project is easy to use, extensible, and performant. You can download and install the open source distribution of Fn Project, develop and test a function locally, and then use the same tooling to deploy that function to Oracle Functions.
Verified Reference: Overview of Functions
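
For context, a function deployed to Oracle Functions is ordinary code plus the FDK. The sketch below is based on the standard Python template that fn init generates; details can vary by FDK version.

    # Based on the standard Python function template generated by "fn init --runtime python";
    # the FDK import and Response signature may vary slightly between FDK versions.
    import io
    import json

    from fdk import response

    def handler(ctx, data: io.BytesIO = None):
        name = "World"
        try:
            body = json.loads(data.getvalue())
            name = body.get("name", name)
        except (Exception, ValueError):
            pass  # fall back to the default name if no JSON body was sent
        return response.Response(
            ctx,
            response_data=json.dumps({"message": f"Hello {name}"}),
            headers={"Content-Type": "application/json"},
        )

The same code can be tested locally with the Fn CLI and then deployed unchanged to Oracle Functions, which is the point the explanation makes about shared tooling.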

Question 9

Having created a Container Engine for Kubernetes (OKE) cluster, you can use Oracle Cloud
Infrastructure (OCI) Logging to view and search the logs of applications running on the worker node
compute instances in the cluster. Which task is NOT required to collect and parse application logs?
(Choose the best answer.)

  • A. Create a dynamic group with a rule that includes all worker nodes in the cluster.
  • B. Set the OCI Logging option to Enabled for the cluster.
  • C. Enable monitoring for all worker nodes in the cluster.
  • D. Configure a custom log in OCI Logging with the appropriate agent configuration.
Answer: C

Explanation:
The correct answer is: Enable monitoring for all worker nodes in the cluster. Enabling monitoring for all worker nodes is not required to collect and parse application logs using Oracle Cloud Infrastructure (OCI) Logging. Monitoring is a separate feature that allows you to collect metrics and monitor the health and performance of the worker nodes. To collect and parse application logs, you need to perform the following tasks:
Set the OCI Logging option to Enabled for the cluster: this enables the OCI Logging service for the cluster.
Create a dynamic group with a rule that includes all worker nodes in the cluster: this helps in targeting the logs generated by the worker nodes.
Configure a custom log in OCI Logging with the appropriate agent configuration: this involves specifying the log source, log path, and log format to parse and collect the application logs.
By completing these tasks, you can collect and parse the application logs generated by the applications running on the worker node compute instances in the OKE cluster.

Question 10

Assuming that your function does NOT have the --provisioned-concurrency option enabled, which
parameter is used to configure the time period during which an idle function will remain in memory
before Oracle Functions removes its container image from memory?

  • A. timeout
  • B. access-timeout
  • C. idle-timeout
  • D. None, as this time is not configurable.
Answer: C

Explanation:
idle-timeout is the parameter used to configure the time period during which an idle function will remain in memory before Oracle Functions removes its container image from memory. The idle-timeout parameter is specified in seconds and can be set when creating or updating a function. The default value for idle-timeout is 30 seconds and the maximum value is 900 seconds (15 minutes). If a function has the --provisioned-concurrency option enabled, the idle-timeout parameter is ignored and the function instances are always kept in memory.
Verified Reference: Creating Functions, Provisioned Concurrency

Question 11

Which concept in OCI Queue is responsible for hiding a message from other consumers for a
predefined amount of time after it has been delivered to a consumer?

  • A. Maximum retention period
  • B. Visibility timeout
  • C. Delivery count
  • D. Polling timeout
Answer: B

Explanation:
Visibility timeout is the concept in OCI Queue that is responsible for hiding a message from other consumers for a predefined amount of time after it has been delivered to a consumer. The visibility timeout can be set at the queue level when creating a queue, or it can be specified when consuming or updating messages. If a consumer is having difficulty successfully processing a message, it can update the message to extend its invisibility. If a message's visibility timeout is not extended and the consumer does not delete the message, the message returns to the queue.
Verified Reference: Overview of Queue
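
To illustrate the mechanics, here is a toy in-memory sketch (not OCI SDK code): after a message is delivered, it stays hidden until its visibility deadline passes; if it is not deleted by then, it becomes visible to consumers again.

    # Conceptual sketch of a visibility timeout: delivery hides a message for a fixed
    # window; only deletion removes it, otherwise it reappears after the window expires.
    import time

    class ToyQueue:
        def __init__(self, visibility_timeout=30):
            self.visibility_timeout = visibility_timeout
            self.messages = []              # list of dicts: {"body", "invisible_until"}

        def put(self, body):
            self.messages.append({"body": body, "invisible_until": 0.0})

        def get(self):
            now = time.time()
            for msg in self.messages:
                if msg["invisible_until"] <= now:      # only visible messages are delivered
                    msg["invisible_until"] = now + self.visibility_timeout
                    return msg
            return None

        def delete(self, msg):
            self.messages.remove(msg)       # successful processing removes the message

    q = ToyQueue(visibility_timeout=2)
    q.put("order-1")
    m = q.get()                              # delivered; hidden from other consumers for 2 seconds
    assert q.get() is None                   # another consumer sees nothing yet
    time.sleep(2.1)
    assert q.get()["body"] == "order-1"      # not deleted in time, so it reappears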

Question 12

Which two "Action Type" options are NOT available in an Oracle Cloud Infrastructure (OCI) Events
rule definition? (Choose two.)

  • A. Email
  • B. Streaming
  • C. Slack
  • D. Functions
  • E. Notifications
Answer: A, C

Explanation:
The two "Action Type" options that are NOT available in an Oracle Cloud Infrastructure (OCI) Events rule definition are Email and Slack. The available action types in an OCI Events rule definition are Functions, Notifications, and Streaming. Email and Slack are not directly supported as action types; instead, you can use Notifications to deliver messages to various channels, including email and Slack, through the OCI Notifications service.

Question 13

Who is responsible for patching, upgrading, and maintaining the worker nodes in Oracle Cloud
Infrastructure (OCI) Container Engine for Kubernetes (OKE)? (Choose the best answer.)

  • A. Oracle Support
  • B. It is automated
  • C. The user
  • D. Independent Software Vendors
Answer: C

Explanation:
The user is responsible for patching, upgrading, and maintaining the worker nodes in Oracle Cloud
Infrastructure (OCI) Container Engine for Kubernetes (OKE). In OKE, the user has control over the
worker nodes, which are the compute instances that run the Kubernetes worker components. As the
user, you are responsible for managing and maintaining these worker nodes, including tasks such as
patching the underlying operating system, upgrading Kubernetes versions, and performing any
necessary maintenance activities. While Oracle provides the underlying infrastructure and support
services, including managing the control plane and ensuring the availability of the OKE service, the
responsibility for managing the worker nodes lies with the user. This allows you to have control and
flexibility in managing your Kubernetes environment according to your specific needs and
requirements.

Question 14

What is the maximum execution time of Oracle Functions?

  • A. 240 seconds
  • B. 300 seconds
  • C. 60 seconds
  • D. 120 seconds
Answer: B

Explanation:
The maximum execution time of Oracle Functions is 300 seconds, which is equivalent to 5 minutes.
This means that a function running within Oracle Functions cannot exceed a runtime of 5 minutes. If
a function requires longer execution times, alternative approaches such as invoking external services
asynchronously or using long-running processes should be considered. It is important to design
functions with this execution time limitation in mind to ensure optimal performance and efficiency
within the Oracle Functions platform.

Question 15

Which kubectl command syntax is valid for implementing a rolling update deployment strategy in
Kubernetes? (Choose the best answer.)

  • A. kubectl upgrade -c <container> --image=image:v2
  • B. kubectl update <deployment-name> --image=image:v2
  • C. kubectl rolling-update <deployment-name> --image=image:v2
  • D. kubectl update -c <container> --image=image:v2
Answer: C

Explanation:
The correct syntax among the options for implementing a rolling update deployment strategy with kubectl is: kubectl rolling-update <deployment-name> --image=image:v2. This command initiates a rolling update of the specified deployment by updating the container image to image:v2. The rolling update strategy ensures that the new version of the application is gradually deployed while maintaining availability and minimizing downtime.
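
Note that on current Kubernetes releases, a Deployment is usually rolled by updating its pod template image (for example with kubectl set image) and letting the Deployment controller perform the rolling update. The sketch below does the same through the open source kubernetes Python client; the Deployment name, namespace, container name, and image tag are hypothetical.

    # Sketch: trigger a rolling update of a Deployment by patching its pod template image.
    # Assumes the "kubernetes" Python client; deployment, namespace, container name, and
    # image tag are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "web", "image": "image:v2"}   # new image for the "web" container
                    ]
                }
            }
        }
    }

    # The Deployment's rolling-update strategy replaces pods gradually, keeping the app available.
    apps.patch_namespaced_deployment(name="my-deployment", namespace="default", body=patch)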
