Questions for the 1Z0-1127-25 were updated on: Dec 01, 2025
Given the following prompts used with a Large Language Model, classify each as employing the
Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:
C
Explanation:
Comprehensive and Detailed In-Depth Explanation:
Prompt 1: Shows intermediate steps (3 × 4 = 12, then 12 ÷ 4 = 3 sets, $200 ÷ $50 = 4)—Chain-of-
Thought.
Prompt 2: Steps back to a simpler problem before the full one—Step-Back.
Reference: OCI 2025 Generative AI documentation likely defines these under prompting strategies.
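Since the three prompts from the question are not reproduced above, the following is a hedged, made-up illustration (Python strings with invented numbers, not the exam item's wording) of how each technique typically phrases a prompt:
# Illustrative prompts only; wording and numbers are invented for this sketch.

# Chain-of-Thought: ask the model to show its intermediate reasoning steps.
cot_prompt = (
    "A box holds 3 rows of 4 apples, and apples sell in sets of 4 for $50 per set.\n"
    "Think step by step: first find the total apples, then the number of sets, then the revenue."
)

# Least-to-Most: decompose the task into subproblems solved from simplest to hardest.
ltm_prompt = (
    "Answer these subquestions in order, then the final question:\n"
    "1) How many apples are in the box?\n"
    "2) How many sets of 4 can be formed?\n"
    "3) What is the total revenue at $50 per set?"
)

# Step-Back: first ask a more general question, then return to the specific one.
step_back_prompt = (
    "Step back: in general, how do you compute revenue from items sold in fixed-size sets?\n"
    "Now apply that reasoning to the box of apples described above."
)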
Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic
"Fine-tuning" in Large Language Model training?
A
Explanation:
Comprehensive and Detailed In-Depth Explanation:
PEFT (e.g., LoRA, T-Few) updates a small subset of parameters (often new ones) using labeled, task-
specific data, unlike classic fine-tuning, which updates all parameters—Option A is correct. Option B
reverses PEFT’s efficiency. Option C (no modification) fits soft prompting, not all PEFT. Option D (all
parameters) mimics classic fine-tuning. PEFT reduces resource demands.
Reference: OCI 2025 Generative AI documentation likely contrasts PEFT and fine-tuning under customization
methods.
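As a concrete, non-OCI sketch of the idea, the open-source Hugging Face peft library can attach small trainable LoRA adapters to a frozen base model; the model name and hyperparameters below are illustrative assumptions, not OCI's T-Few configuration:
# Minimal LoRA sketch with the Hugging Face peft library (illustrative, not OCI-specific).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM stands in here

lora_config = LoraConfig(
    r=8,                        # rank of the small adapter matrices that get trained
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's attention projection; differs per architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, lora_config)
# Only the adapter parameters are trainable; the base model's weights stay frozen.
peft_model.print_trainable_parameters()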
You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom
training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?
C
Explanation:
Comprehensive and Detailed In-Depth Explanation:
In OCI Generative AI, a fine-tuning dedicated AI cluster consumes two unit hours for every hour it is
active, so 10 hours of activity requires 10 hours × 2 units/hour = 20 unit hours—Option C is correct.
Options A, B, and D imply per-hour rates (2.5, 4, and 3) that do not match this pricing model.
Reference: OCI 2025 Generative AI documentation likely specifies unit hour rates under Dedicated AI Cluster
pricing.
Which statement describes the difference between "Top k" and "Top p" in selecting the next token in
the OCI Generative AI Generation models?
B
Explanation:
Comprehensive and Detailed In-Depth Explanation:
“Top k” sampling selects from the k most probable tokens, based on their ranked position, while “Top
p” (nucleus sampling) selects from tokens whose cumulative probability exceeds p, focusing on a
dynamic probability mass—Option B is correct. Option A is false—they differ in selection, not
penalties. Option C reverses definitions. Option D (frequency) is incorrect—both use probability, not
frequency. This distinction affects diversity.
Reference: OCI 2025 Generative AI documentation likely contrasts Top k and Top p under sampling methods.
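A minimal sketch of the ranked-position cutoff behind Top k (assuming a toy probability vector) could look like:
import numpy as np

def top_k_filter(probs, k):
    # Keep only the k highest-probability tokens, then renormalize.
    filtered = np.zeros_like(probs)
    top_idx = np.argsort(probs)[-k:]        # indices of the k most probable tokens
    filtered[top_idx] = probs[top_idx]
    return filtered / filtered.sum()

probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
print(top_k_filter(probs, k=3))             # only the 3 top-ranked tokens stay candidates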
Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI
service?
B
Explanation:
Comprehensive and Detailed In-Depth Explanation:
A “model endpoint” in OCI’s inference workflow is an API or interface where users send requests and
receive responses from a deployed model—Option B is correct. Option A (weight updates) occurs
during fine-tuning, not inference. Option C (metrics) is for evaluation, not endpoints. Option D
(training data) relates to storage, not inference. Endpoints enable real-time interaction.
Reference: OCI 2025 Generative AI documentation likely describes endpoints under inference deployment.
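Purely as an assumption-laden sketch (the URL, payload fields, and auth header below are placeholders, not the actual OCI API shape), an inference call against a deployed endpoint has this request/response pattern:
import requests

ENDPOINT_URL = "https://example-inference-host/v1/endpoints/my-model/generate"  # hypothetical URL

payload = {
    "prompt": "Summarize the benefits of dedicated AI clusters.",
    "max_tokens": 200,      # hypothetical field names, for illustration only
    "temperature": 0.7,
}

response = requests.post(
    ENDPOINT_URL,
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=30,
)
print(response.json()["generated_text"])          # hypothetical response field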
Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the
information retrieved by the retrieval system?
D
Explanation:
Comprehensive and Detailed In-Depth Explanation:
In RAG, the Ranker evaluates and prioritizes retrieved information (e.g., documents) based on
relevance to the query, refining what the Retriever fetches—Option D is correct. The Retriever (A)
fetches data, not ranks it. Encoder-Decoder (B) isn’t a distinct RAG component—it’s part of the LLM.
The Generator (C) produces text, not prioritizes. Ranking ensures high-quality inputs for generation.
Reference: OCI 2025 Generative AI documentation likely details the Ranker under RAG pipeline components.
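A toy Retriever-then-Ranker sketch (simple keyword overlap stands in for a real relevance model) makes the division of labor concrete:
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, corpus, k=4):
    # A real Retriever would use vector search; here we just take the first k documents.
    return corpus[:k]

def rank(query, documents):
    # The Ranker orders retrieved documents by relevance to the query.
    score = lambda doc: len(tokens(query) & tokens(doc))
    return sorted(documents, key=score, reverse=True)

corpus = [
    "Dedicated AI clusters host fine-tuned models.",
    "Object Storage keeps fine-tuned weights encrypted.",
    "Top p sampling controls output diversity.",
    "Model endpoints serve inference requests.",
]
query = "Which component serves inference requests?"
ranked = rank(query, retrieve(query, corpus))
print(ranked[0])  # the most relevant document is passed first to the Generator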
What is the primary function of the "temperature" parameter in the OCI Generative AI Generation
models?
A
Explanation:
Comprehensive and Detailed In-Depth Explanation:
The “temperature” parameter adjusts the randomness of an LLM’s output by scaling the softmax
distribution—low values (e.g., 0.7) make it more deterministic, high values (e.g., 1.5) increase
creativity—Option A is correct. Option B (stop string) is the stop sequence. Option C (penalty) relates
to presence/frequency penalties. Option D (max tokens) is a separate parameter. Temperature
shapes output style.
Reference: OCI 2025 Generative AI documentation likely defines temperature under generation parameters.
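A small sketch of temperature scaling the softmax (toy logits assumed) shows the effect directly:
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before softmax; lower values sharpen the distribution.
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())   # subtract the max for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])
print(softmax_with_temperature(logits, 0.5))  # peaked distribution: near-deterministic output
print(softmax_with_temperature(logits, 1.5))  # flatter distribution: more varied, creative output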
Which is NOT a built-in memory type in LangChain?
A
Explanation:
Comprehensive and Detailed In-Depth Explanation:
LangChain includes built-in memory types like ConversationBufferMemory (stores the full history),
ConversationSummaryMemory (summarizes history), and ConversationTokenBufferMemory (limits
history by token count)—Options B, C, and D are valid. ConversationImageMemory (A) isn't a standard
type—image handling typically requires custom or multimodal extensions, not a built-in memory
class—so A is the one that is NOT a built-in memory type.
Reference: OCI 2025 Generative AI documentation likely lists memory types under LangChain memory
management.
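A minimal sketch using one of the built-in classes (the import path assumes the classic langchain package and may differ across versions):
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi, I'm Ana."}, {"output": "Hello Ana!"})
memory.save_context({"input": "What's my name?"}, {"output": "You told me it's Ana."})

# ConversationBufferMemory keeps the full conversation history verbatim.
print(memory.load_memory_variables({})["history"])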
Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
C
Explanation:
Comprehensive and Detailed In-Depth Explanation:
LangChain Expression Language (LCEL) is a declarative syntax (e.g., using | to pipe components) for
composing chains in LangChain, combining prompts, LLMs, and other elements efficiently—Option C
is correct. Option A is false—LCEL isn’t for documentation. Option B is incorrect—it’s current, not
legacy; traditional Python classes are older. Option D is wrong—LCEL is part of LangChain, not a
standalone LLM library. LCEL simplifies chain design.
Reference: OCI 2025 Generative AI documentation likely highlights LCEL under LangChain chain composition.
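A runnable sketch of the pipe syntax (FakeListLLM is used here only so the example needs no API key; a real chat or completion model would sit in the same position):
from langchain_core.prompts import PromptTemplate
from langchain_community.llms import FakeListLLM

prompt = PromptTemplate.from_template("Write one sentence about {city}.")
llm = FakeListLLM(responses=["A placeholder sentence about the city."])

chain = prompt | llm                 # LCEL: declarative composition via the pipe operator
print(chain.invoke({"city": "Lisbon"}))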
Which statement is true about the "Top p" parameter of the OCI Generative AI Generation models?
C
Explanation:
Comprehensive and Detailed In-Depth Explanation:
“Top p” (nucleus sampling) selects tokens whose cumulative probability exceeds a threshold (p),
limiting the pool to the smallest set meeting this sum, enhancing diversity—Option C is correct.
Option A confuses it with “Top k.” Option B (penalties) is unrelated. Option D (max tokens) is a
different parameter. Top p balances randomness and coherence.
Reference: OCI 2025 Generative AI documentation likely explains “Top p” under sampling methods.
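A companion sketch to the Top k example above, showing the cumulative-probability cutoff (toy probabilities assumed):
import numpy as np

def top_p_filter(probs, p):
    # Keep the smallest set of tokens whose cumulative probability reaches p, then renormalize.
    order = np.argsort(probs)[::-1]               # most probable tokens first
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, p) + 1]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
print(top_p_filter(probs, p=0.75))                # keeps tokens until cumulative mass reaches 0.75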
How are fine-tuned customer models stored to enable strong data privacy and security in the OCI
Generative AI service?
B
Explanation:
Comprehensive and Detailed In-Depth Explanation:
In OCI, fine-tuned models are stored in Object Storage, encrypted by default, ensuring privacy and
security per cloud best practices—Option B is correct. Option A (shared) violates privacy. Option C
(unencrypted) contradicts security standards. Option D (Key Management) stores keys, not models.
Encryption protects customer data.
Reference: OCI 2025 Generative AI documentation likely details storage security under fine-tuning workflows.
Which is NOT a category of pretrained foundational models available in the OCI Generative AI
service?
C
Explanation:
Comprehensive and Detailed In-Depth Explanation:
OCI Generative AI offers pretrained foundational models for summarization (A), generation (B), and
embeddings (D), aligning with common generative tasks. Translation models (C) are not among its
pretrained categories—machine translation is typically handled by dedicated NLP services rather than
the Generative AI service—making C the category that is NOT offered.
Reference: OCI 2025 Generative AI documentation likely lists model categories under pretrained options.
What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?
A
Explanation:
Comprehensive and Detailed In-Depth Explanation:
The “stop sequence” parameter defines a string (e.g., “.” or “\n”) that, when generated, halts text
generation, allowing control over output length or structure—Option A is correct. Option B (penalty)
describes frequency/presence penalties. Option C (max tokens) is a separate parameter. Option D
(randomness) relates to temperature. Stop sequences ensure precise termination.
Reference: OCI 2025 Generative AI documentation likely details stop sequences under generation parameters.
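A toy illustration of the behavior (not an OCI API call): generated text is truncated at the first occurrence of the stop string:
def apply_stop_sequence(generated_text, stop):
    # Cut the output at the first occurrence of the stop string, if present.
    idx = generated_text.find(stop)
    return generated_text if idx == -1 else generated_text[:idx]

raw_output = "The order ships Monday.\nInternal note: do not include this."
print(apply_stop_sequence(raw_output, stop="\n"))  # -> "The order ships Monday."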
What does a higher number assigned to a token signify in the "Show Likelihoods" feature of the
language model token generation?
B
Explanation:
Comprehensive and Detailed In-Depth Explanation:
In “Show Likelihoods,” a higher number (probability score) indicates a token’s greater likelihood of
following the current token, reflecting the model’s prediction confidence—Option B is correct.
Option A (less likely) is the opposite. Option C (unrelated) misinterprets—likelihood ties tokens
contextually. Option D (only one) assumes greedy decoding, not the feature’s purpose. This helps
users understand model preferences.
Reference: OCI 2025 Generative AI documentation likely explains “Show Likelihoods” under token generation
insights.
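A toy sketch of per-token likelihoods for the next-token position (the candidate tokens and scores are invented), of the kind a "Show Likelihoods" view surfaces:
import numpy as np

candidates = ["blue", "cloudy", "banana", "falling"]
logits = np.array([3.2, 2.1, -1.5, 0.4])        # invented model scores for "The sky is ..."
probs = np.exp(logits) / np.exp(logits).sum()   # softmax turns scores into likelihoods

# Higher numbers mean the token is more likely to follow the current context.
for token, p in sorted(zip(candidates, probs), key=lambda pair: -pair[1]):
    print(f"{token:8s} {p:.3f}")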
Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
C
Explanation:
Comprehensive and Detailed In-Depth Explanation:
In LangChain, PromptTemplate supports any number of input_variables (zero, one, or more),
allowing flexible prompt design—Option C is correct. The example shows two, but it’s not a
requirement. Option A (minimum two) is false—no such limit exists. Option B (single variable) is too
restrictive. Option D (no variables) contradicts its purpose—variables are optional but supported.
This adaptability aids prompt engineering.
Reference: OCI 2025 Generative AI documentation likely covers PromptTemplate under LangChain prompt
design.
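A short sketch showing zero, one, and two input_variables (the import path assumes the classic langchain package and may differ across versions):
from langchain.prompts import PromptTemplate

no_vars = PromptTemplate(input_variables=[], template="Tell me a fun fact.")
one_var = PromptTemplate(input_variables=["city"], template="Describe {city} in one line.")
two_vars = PromptTemplate(
    input_variables=["human_input", "city"],
    template="User said: {human_input}. Answer with respect to {city}.",
)

print(no_vars.format())
print(one_var.format(city="Porto"))
print(two_vars.format(human_input="best month to visit", city="Porto"))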