Oracle 1Z0-1127-25 Exam Questions

Questions for the 1Z0-1127-25 exam were updated on: Dec 01, 2025

Page 1 out of 6. Viewing questions 1-15 out of 88

Question 1

Given the following prompts used with a Large Language Model, classify each as employing the
Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:

  • A. "Calculate the total number of wheels needed for 3 cars. Cars have 4 wheels each. Then, use the total number of wheels to determine how many sets of wheels we can buy with $200 if one set (4 wheels) costs $50."
  • B. "Solve a complex math problem by first identifying the formula needed, and then solve a simpler version of the problem before tackling the full question."
  • C. "To understand the impact of greenhouse gases on climate change, let’s start by defining what greenhouse gases are. Next, we’ll explore how they trap heat in the Earth’s atmosphere."A. 1: Step- Back, 2: Chain-of-Thought, 3: Least-to-MostB. 1: Least-to-Most, 2: Chain-of-Thought, 3: Step-BackC. 1: Chain-of-Thought, 2: Step-Back, 3: Least-to-MostD. 1: Chain-of-Thought, 2: Least-to-Most, 3: Step- Back
Answer:

C


Explanation:
Prompt 1 works through explicit intermediate steps (3 cars × 4 wheels = 12 wheels; 12 wheels ÷ 4 per set = 3 sets needed; $200 ÷ $50 = 4 sets affordable), which is Chain-of-Thought.
Prompt 2 steps back to a simpler version of the problem before tackling the full question, which is Step-Back.
Prompt 3 decomposes the topic into simpler sub-questions answered in sequence (define greenhouse gases first, then explain how they trap heat), which is Least-to-Most.
This matches Option C.
Reference: OCI 2025 Generative AI documentation likely defines these under prompting strategies.


Question 2

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic
"Fine-tuning" in Large Language Model training?

  • A. PEFT involves only a few or new parameters and uses labeled, task-specific data.
  • B. PEFT modifies all parameters and is typically used when no training data exists.
  • C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.
  • D. PEFT modifies all parameters and uses unlabeled, task-agnostic data.
Answer:

A


Explanation:
PEFT (e.g., LoRA, T-Few) updates a small subset of parameters (often new ones) using labeled, task-
specific data, unlike classic fine-tuning, which updates all parameters—Option A is correct. Option B
reverses PEFT’s efficiency. Option C (no modification) fits soft prompting, not all PEFT. Option D (all
parameters) mimics classic fine-tuning. PEFT reduces resource demands.
Reference: OCI 2025 Generative AI documentation likely contrasts PEFT and fine-tuning under customization methods.
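
To make the contrast concrete, here is a minimal LoRA-style sketch in plain PyTorch (illustrative only; it is not the OCI service or any specific PEFT library, and the layer size and rank are arbitrary): the base weights are frozen and only a small pair of new low-rank matrices is trained.

# Minimal LoRA-style sketch in plain PyTorch (sizes and rank are arbitrary).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base, rank=4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze all original weights
            p.requires_grad = False
        # Small, new, trainable matrices A and B (the "few new parameters")
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.lora_a @ self.lora_b

base = nn.Linear(768, 768)                        # stands in for one frozen LLM weight matrix
peft_layer = LoRALinear(base, rank=4)

trainable = sum(p.numel() for p in peft_layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in peft_layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the adapter is trainable

Classic fine-tuning would instead leave requires_grad set to True on every base weight and update them all.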


Question 3

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

  • A. 25 unit hours
  • B. 40 unit hours
  • C. 20 unit hours
  • D. 30 unit hours
Answer:

C


Explanation:
In OCI, dedicated AI cluster usage is billed in unit hours: the hours the cluster is active multiplied by the number of units it uses. A fine-tuning dedicated AI cluster uses two units, so 10 hours of activity consumes 10 × 2 = 20 unit hours, making Option C correct. Options A, B, and D imply rates (2.5, 4, and 3 units per hour) that do not match the fine-tuning cluster configuration.
Reference: OCI 2025 Generative AI documentation likely specifies unit hour rates under dedicated AI cluster pricing.
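
A two-line sketch of that arithmetic, under the stated assumption that a fine-tuning dedicated AI cluster consumes two units per active hour:

# Assumed rate: a fine-tuning dedicated AI cluster uses 2 units per active hour.
UNITS_PER_HOUR = 2
active_hours = 10
print(active_hours * UNITS_PER_HOUR, "unit hours")  # -> 20 unit hours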


Question 4

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in
the OCI Generative AI Generation models?

  • A. "Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.
  • B. "Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.
  • C. "Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.
  • D. "Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.
Answer:

B


Explanation:
“Top k” sampling selects from the k most probable tokens, based on their ranked position, while “Top
p” (nucleus sampling) selects from tokens whose cumulative probability exceeds p, focusing on a
dynamic probability mass—Option B is correct. Option A is false—they differ in selection, not
penalties. Option C reverses definitions. Option D (frequency) is incorrect—both use probability, not
frequency. This distinction affects diversity.
Reference: OCI 2025 Generative AI documentation likely contrasts Top k and Top p under sampling methods.
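
To make the difference concrete, the following self-contained sketch (toy probabilities, not any vendor API) filters a candidate-token distribution first by rank (Top k) and then by cumulative probability mass (Top p):

# Toy next-token distribution; values are illustrative only.
probs = {"the": 0.40, "a": 0.25, "cat": 0.15, "dog": 0.10, "runs": 0.07, "zebra": 0.03}
ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

def top_k(ranked, k):
    """Keep the k highest-probability tokens, regardless of how much mass they cover."""
    return ranked[:k]

def top_p(ranked, p):
    """Keep the smallest prefix of tokens whose cumulative probability reaches p."""
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break
    return kept

print(top_k(ranked, 3))    # [('the', 0.40), ('a', 0.25), ('cat', 0.15)]
print(top_p(ranked, 0.8))  # same three tokens here: cumulative mass reaches 0.80

The model then samples only from the kept tokens after renormalizing their probabilities.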


Question 5

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI
service?

  • A. Updates the weights of the base model during the fine-tuning process
  • B. Serves as a designated point for user requests and model responses
  • C. Evaluates the performance metrics of the custom models
  • D. Hosts the training data for fine-tuning custom models
Answer:

B


Explanation:
A “model endpoint” in OCI’s inference workflow is an API or interface where users send requests and
receive responses from a deployed model—Option B is correct. Option A (weight updates) occurs
during fine-tuning, not inference. Option C (metrics) is for evaluation, not endpoints. Option D
(training data) relates to storage, not inference. Endpoints enable real-time interaction.
Reference: OCI 2025 Generative AI documentation likely describes endpoints under inference deployment.
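
Conceptually, inference is a request/response exchange with that endpoint. The sketch below uses a plain HTTP call with a hypothetical URL and payload purely to illustrate that role; it does not reflect the actual OCI SDK or REST contract.

# Illustrative only: hypothetical endpoint URL and payload, not the real OCI API shape.
import requests

ENDPOINT_URL = "https://example.invalid/generative-ai/model-endpoint"  # placeholder
payload = {"prompt": "Summarize our Q3 results in two sentences.", "max_tokens": 100}

response = requests.post(ENDPOINT_URL, json=payload, timeout=30)
print(response.json())  # the model's generated text comes back in the response body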


Question 6

Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the
information retrieved by the retrieval system?

  • A. Retriever
  • B. Encoder-Decoder
  • C. Generator
  • D. Ranker
Answer:

D


Explanation:
In RAG, the Ranker evaluates and prioritizes retrieved information (e.g., documents) based on
relevance to the query, refining what the Retriever fetches—Option D is correct. The Retriever (A)
fetches data, not ranks it. Encoder-Decoder (B) isn’t a distinct RAG component—it’s part of the LLM.
The Generator (C) produces text, not prioritizes. Ranking ensures high-quality inputs for generation.
Reference: OCI 2025 Generative AI documentation likely details the Ranker under RAG pipeline components.
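
A toy version of that re-ranking step is sketched below: documents already fetched by the retriever are scored against the query (a crude word-overlap score stands in for a real relevance model) and sorted so the generator sees the most relevant evidence first.

# Toy ranker: word-overlap relevance score stands in for a learned re-ranking model.
def rank(query, documents):
    query_words = set(query.lower().split())
    def score(doc):
        doc_words = set(doc.lower().replace(".", "").split())
        return len(query_words & doc_words)
    return sorted(documents, key=score, reverse=True)

retrieved = [
    "Company travel policy for international flights.",
    "Expense reimbursement rules for client dinners.",
    "How to submit an expense report for travel.",
]
print(rank("how do I report travel expenses", retrieved))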


Question 7

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation
models?

  • A. Controls the randomness of the model's output, affecting its creativity
  • B. Specifies a string that tells the model to stop generating more content
  • C. Assigns a penalty to tokens that have already appeared in the preceding text
  • D. Determines the maximum number of tokens the model can generate per response
Answer:

A


Explanation:
The “temperature” parameter adjusts the randomness of an LLM’s output by scaling the softmax
distribution—low values (e.g., 0.7) make it more deterministic, high values (e.g., 1.5) increase
creativity—Option A is correct. Option B (stop string) is the stop sequence. Option C (penalty) relates
to presence/frequency penalties. Option D (max tokens) is a separate parameter. Temperature
shapes output style.
Reference: OCI 2025 Generative AI documentation likely defines temperature under generation parameters.
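
The effect is easy to see on toy logits: dividing by the temperature before the softmax sharpens the distribution at low values and flattens it at high values. The numbers below are arbitrary.

import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
print(softmax_with_temperature(logits, 0.2))  # ~[0.99, 0.01, 0.00] -> near-deterministic
print(softmax_with_temperature(logits, 1.5))  # ~[0.53, 0.27, 0.20] -> flatter, more random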


Question 8

Which is NOT a built-in memory type in LangChain?

  • A. ConversationImageMemory
  • B. ConversationBufferMemory
  • C. ConversationSummaryMemory
  • D. ConversationTokenBufferMemory
Answer:

A


Explanation:
LangChain includes built-in memory types like ConversationBufferMemory (stores full history),
ConversationSummaryMemory (summarizes history), and ConversationTokenBufferMemory (limits
by token count)—Options B, C, and D are valid. ConversationImageMemory (A) isn’t a standard
type—image handling typically requires custom or multimodal extensions, not a built-in memory
class—making A correct as NOT included.
Reference: OCI 2025 Generative AI documentation likely lists memory types under LangChain memory management.
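
As a small example using the classic LangChain memory API (class locations and names may differ or be deprecated in newer LangChain releases), ConversationBufferMemory simply accumulates the raw exchange:

# Classic LangChain memory API; names may differ in newer LangChain releases.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi, I'm planning a trip to Rome."},
                    {"output": "Great! How many days will you stay?"})
memory.save_context({"input": "Three days."},
                    {"output": "Perfect, three days is enough for the highlights."})

# Returns the full stored conversation under the "history" key.
print(memory.load_memory_variables({}))

ConversationSummaryMemory and ConversationTokenBufferMemory expose the same save_context/load_memory_variables interface but compress or truncate what they store.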


Question 9

Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?

  • A. LCEL is a programming language used to write documentation for LangChain.
  • B. LCEL is a legacy method for creating chains in LangChain.
  • C. LCEL is a declarative and preferred way to compose chains together.
  • D. LCEL is an older Python library for building Large Language Models.
Answer:

C


Explanation:
LangChain Expression Language (LCEL) is a declarative syntax (e.g., using | to pipe components) for
composing chains in LangChain, combining prompts, LLMs, and other elements efficiently—Option C
is correct. Option A is false—LCEL isn’t for documentation. Option B is incorrect—it’s current, not
legacy; traditional Python classes are older. Option D is wrong—LCEL is part of LangChain, not a
standalone LLM library. LCEL simplifies chain design.
Reference: OCI 2025 Generative AI documentation likely highlights LCEL under LangChain chain composition.
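
A minimal runnable sketch of that composition, using RunnableLambda as a stand-in for a real chat model so no API key is needed (in practice, llm would be an actual model client):

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

prompt = PromptTemplate.from_template("Write one sentence about {city}.")

# Stand-in for a real LLM client so the example runs offline.
llm = RunnableLambda(lambda prompt_value: f"[model output for: {prompt_value.to_string()}]")

chain = prompt | llm            # LCEL: declaratively pipe components together
print(chain.invoke({"city": "Lisbon"}))

Swapping the stand-in for a real model client leaves the chain definition unchanged, which is the point of LCEL's declarative style.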


Question 10

Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?

  • A. "Top p" selects tokens from the "Top k" tokens sorted by probability.
  • B. "Top p" assigns penalties to frequently occurring tokens.
  • C. "Top p" limits token selection based on the sum of their probabilities.
  • D. "Top p" determines the maximum number of tokens per response.
Answer:

C


Explanation:
“Top p” (nucleus sampling) selects tokens whose cumulative probability exceeds a threshold (p),
limiting the pool to the smallest set meeting this sum, enhancing diversity—Option C is correct.
Option A confuses it with “Top k.” Option B (penalties) is unrelated. Option D (max tokens) is a
different parameter. Top p balances randomness and coherence.
Reference: OCI 2025 Generative AI documentation likely explains "Top p" under sampling methods.


Question 11

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI
Generative AI service?

  • A. Shared among multiple customers for efficiency
  • B. Stored in Object Storage encrypted by default
  • C. Stored in an unencrypted form in Object Storage
  • D. Stored in Key Management service
Answer:

B


Explanation:
In OCI, fine-tuned models are stored in Object Storage, encrypted by default, ensuring privacy and
security per cloud best practices—Option B is correct. Option A (shared) violates privacy. Option C
(unencrypted) contradicts security standards. Option D (Key Management) stores keys, not models.
Encryption protects customer data.
Reference: OCI 2025 Generative AI documentation likely details storage security under fine-tuning workflows.


Question 12

Which is NOT a category of pretrained foundational models available in the OCI Generative AI
service?

  • A. Summarization models
  • B. Generation models
  • C. Translation models
  • D. Embedding models
Answer:

C


Explanation:
OCI Generative AI typically offers pretrained models for summarization (A), generation (B), and
embeddings (D), aligning with common generative tasks. Translation models (C) are less emphasized
in generative AI services, often handled by specialized NLP platforms, making C the NOT category.
While possible, translation isn’t a core OCI generative focus based on standard offerings.
Reference: OCI 2025 Generative AI documentation likely lists model categories under pretrained options.


Question 13

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

  • A. It specifies a string that tells the model to stop generating more content.
  • B. It assigns a penalty to frequently occurring tokens to reduce repetitive text.
  • C. It determines the maximum number of tokens the model can generate per response.
  • D. It controls the randomness of the model’s output, affecting its creativity.
Answer:

A


Explanation:
The “stop sequence” parameter defines a string (e.g., “.” or “\n”) that, when generated, halts text
generation, allowing control over output length or structure—Option A is correct. Option B (penalty)
describes frequency/presence penalties. Option C (max tokens) is a separate parameter. Option D
(randomness) relates to temperature. Stop sequences ensure precise termination.
Reference: OCI 2025 Generative AI documentation likely details stop sequences under generation parameters.
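
The behavior is easy to picture: output is cut as soon as the stop string appears. A generic sketch, not tied to any particular SDK:

# Generic illustration of stop-sequence behavior (not a specific vendor SDK).
def apply_stop_sequence(generated_text, stop_sequence):
    """Return the text up to (and excluding) the first occurrence of the stop sequence."""
    index = generated_text.find(stop_sequence)
    return generated_text if index == -1 else generated_text[:index]

raw = "Step 1: gather data.\nStep 2: clean data.\n###\nStep 3: train model."
print(apply_stop_sequence(raw, "###"))  # everything from "###" onward is discarded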


Question 14

What does a higher number assigned to a token signify in the "Show Likelihoods" feature of the
language model token generation?

  • A. The token is less likely to follow the current token.
  • B. The token is more likely to follow the current token.
  • C. The token is unrelated to the current token and will not be used.
  • D. The token will be the only one considered in the next generation step.
Answer:

B


Explanation:
In “Show Likelihoods,” a higher number (probability score) indicates a token’s greater likelihood of
following the current token, reflecting the model’s prediction confidence—Option B is correct.
Option A (less likely) is the opposite. Option C (unrelated) misinterprets—likelihood ties tokens
contextually. Option D (only one) assumes greedy decoding, not the feature’s purpose. This helps
users understand model preferences.
Reference: OCI 2025 Generative AI documentation likely explains "Show Likelihoods" under token generation insights.


Question 15

Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?

  • A. PromptTemplate requires a minimum of two variables to function properly.
  • B. PromptTemplate can support only a single variable at a time.
  • C. PromptTemplate supports any number of variables, including the possibility of having none.
  • D. PromptTemplate is unable to use any variables.
Answer:

C


Explanation:
In LangChain, PromptTemplate supports any number of input_variables (zero, one, or more),
allowing flexible prompt design—Option C is correct. The example shows two, but it’s not a
requirement. Option A (minimum two) is false—no such limit exists. Option B (single variable) is too
restrictive. Option D (no variables) contradicts its purpose—variables are optional but supported.
This adaptability aids prompt engineering.
Reference: OCI 2025 Generative AI documentation likely covers PromptTemplate under LangChain prompt design.
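
A short sketch showing that the variable count is flexible, with a template analogous to the one in the question (the template strings here are made up for illustration):

from langchain_core.prompts import PromptTemplate

# Two variables, as in the question.
two_vars = PromptTemplate(
    input_variables=["human_input", "city"],
    template="User said: {human_input}. Answer with facts about {city}.",
)
print(two_vars.format(human_input="plan my weekend", city="Paris"))

# Zero variables also works: the prompt is just a fixed string.
no_vars = PromptTemplate(input_variables=[], template="Say hello politely.")
print(no_vars.format())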

To page 2