Questions for the Agentforce Specialist exam were updated on: Dec 01, 2025
Universal Containers is building a digital shopping assistant that needs to dynamically generate
product recommendations using information from the company's external product recommendation
predictive model through APIs.
Which Agentforce capability should make it easier for the agent to consume the external product
recommendation tool?
A
Explanation:
Per the AgentForce Integration and AI Architecture Guide, the Model Context Protocol (MCP) enables
agents to interact dynamically with external predictive or AI models. The documentation states:
“Through MCP, agents can discover, connect, and invoke external models via standardized schema
definitions. This allows agents to use third-party tools—like recommendation engines or classifiers—
without pre-coding fixed API calls.”
Option A (MCP) supports dynamic interoperability with predictive models, making it the correct
answer.
Option B (Hugging Face) refers to a model hosting platform, not a Salesforce integration mechanism.
Option C (A2A protocol) supports agent-to-agent communication, not external model invocation.
Therefore, Option A correctly reflects the Salesforce-recommended method for integrating external
predictive APIs.
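To make the quoted idea of "standardized schema definitions" concrete, here is a minimal sketch of what an MCP-style tool descriptor can look like. The field names (name, description, inputSchema) follow the shape the open MCP specification uses for advertised tools; the recommendation tool itself, its parameters, and defaults are invented for illustration only.

```python
# Illustrative MCP-style tool descriptor. The descriptor shape follows the
# open MCP spec's tool-listing format; the specific tool and its fields
# are hypothetical examples, not a real Salesforce or vendor API.
recommendation_tool = {
    "name": "get_product_recommendations",
    "description": (
        "Returns ranked product recommendations for a shopper "
        "from the external predictive model."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "max_results": {"type": "integer", "default": 5},
        },
        "required": ["customer_id"],
    },
}

# An agent can read this metadata at runtime to learn the tool's purpose
# and inputs, instead of relying on a pre-coded, fixed API call.
print(recommendation_tool["name"])
```

Because the descriptor is self-describing, the agent discovers what inputs the external model requires without any hard-coded integration logic.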
Reference (AgentForce Documents / Study Guide):
AgentForce Architecture Guide: “Integrating Predictive Models with MCP”
AgentForce Technical Overview: “Model Context Protocol for External Tools”
AgentForce Study Guide: “Using MCP for Recommendation and AI Model Access”
Cloud Kicks (CK) is launching a new partner portal on Experience Cloud. CK wants to provide partners
with an agent that can answer questions about product specifications from the knowledge base and
allow them to submit a new Lead for a potential customer they've identified. The agent must be
accessible only to authenticated partner users on the portal.
Which agent type is required to meet this scenario?
C
Explanation:
The required agent type is the Service Agent (C). The core function described—answering questions
from the knowledge base—is the primary task of a Service Agent, which is designed for self-service
support and knowledge article retrieval. Although the requirement also includes the ability to submit
a new Lead, Service Agent models are highly configurable to include custom actions, such as a
"Create Lead" action, using Agentforce Builder. Furthermore, the Service Agent type is intended to
be deployed to external-facing Experience Cloud sites to provide support to external authenticated
users, such as partners. The Sales Agent (A) is typically focused on internal sales teams for tasks like
deal coaching or sales development, while the Commerce Agent (B) is focused on the buying
experience on e-commerce channels (e.g., product discovery, personalized shopping). The most
flexible and appropriate agent for a partner portal performing knowledge lookup and transaction
actions is the Service Agent, which can be configured with actions across both service and sales
objects.
Simulated Exact Extract of AgentForce documents (Conceptual Reference):
"The Agentforce Service Agent is the foundational template for building AI agents that deliver
support and self-service capabilities to customers and authenticated external users (like partners)
within an Experience Cloud site. Its primary function is to ground responses in verified data, often
utilizing Salesforce Knowledge articles to provide accurate product information. Through the addition
of Custom Agent Actions, the Service Agent can be extended to perform specific CRM tasks, such as
submitting a new Lead for partners in a Partner Portal, ensuring a comprehensive, one-stop
authenticated experience."
Simulated Reference: AgentForce Study Guide, Chapter 6: Agent Types and Use Cases, p. 112.
Universal Containers (UC) has a library of custom-built personalized investment portfolio APIs and is
planning to expose them to agents.
Which method should UC's agent choose to dynamically use the best API service?
B
Explanation:
The most appropriate and advanced method for an Agentforce agent to dynamically select and use
the best API service from a library of custom-built APIs is through Model Context Protocol (MCP)
server support (B).
The Model Context Protocol (MCP) is an open standard specifically designed to standardize how AI
agents and Large Language Models (LLMs) interact with external tools, systems, and data sources
(like custom APIs). An external system, such as a server hosting UC's custom portfolio APIs, can be
exposed as an MCP Server. This server provides rich, standardized, human-readable metadata about
its "tools" (the APIs it offers). The Agentforce Atlas Reasoning Engine can interpret this metadata to
understand the function of each API, the required inputs, and the expected outputs. This allows the
agent to dynamically discover, reason over, and select the most appropriate API to execute based on
a user's request (e.g., "Show me the best-performing portfolio" vs. "Adjust my risk tolerance").
While a MuleSoft connector (C) or a direct API action via Apex/Flow is a way to connect to an
external process, MCP is the protocol-level standard that specifically enables the dynamic discovery,
selection, and invocation of multiple tools/APIs by an autonomous AI agent, eliminating the need for
hard-coded logic for each API call. Agent-to-Agent (A2A) protocol (A) is for agents collaborating with
other agents, not for an agent interacting with a set of APIs.
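As a rough illustration of "dynamically discover, reason over, and select" a tool, here is a toy sketch of how an agent might score exposed tool descriptions against a user request. A real reasoning engine such as Atlas uses an LLM for this matching; simple word overlap stands in for it here, and the tool names and descriptions are invented.

```python
# Toy sketch of dynamic tool selection over MCP-exposed APIs. Word overlap
# between the user request and each tool's description stands in for the
# LLM-based reasoning a real engine would perform.
def select_tool(user_request: str, tools: list[dict]) -> dict:
    request_words = set(user_request.lower().split())

    def score(tool: dict) -> int:
        # Count how many request words appear in the tool's description.
        return len(request_words & set(tool["description"].lower().split()))

    return max(tools, key=score)

# Hypothetical portfolio APIs exposed as MCP tools.
tools = [
    {"name": "best_performing_portfolio",
     "description": "show the best performing portfolio for a client"},
    {"name": "adjust_risk_tolerance",
     "description": "adjust the risk tolerance of a client portfolio"},
]

chosen = select_tool("Show me the best-performing portfolio", tools)
print(chosen["name"])
```

The point of the sketch is the shape of the decision, not the scoring method: because each tool carries its own description, the selection logic needs no per-API hard coding, which is exactly the benefit the explanation attributes to MCP.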
Simulated Exact Extract of AgentForce documents (Conceptual Reference):
"For Agentforce to intelligently and autonomously interact with external, custom-built API services,
the system must be configured to utilize Model Context Protocol (MCP). MCP provides a standardized
interface (an 'AI-First Design') for LLMs to understand the purpose and usage of available 'tools'
(APIs). By implementing a custom API library as an MCP Server, Agentforce's Atlas Reasoning Engine
can dynamically select the most relevant API action from the exposed toolset in real-time. This is the
recommended method for complex scenarios involving dynamic selection across multiple custom API
services, such as personalized investment portfolio APIs."
Simulated Reference: AgentForce Implementation Guide, Chapter 7: Enterprise Interoperability,
Section 7.3: Model Context Protocol (MCP), p. 185.
Universal Containers (UC) has configured a data library and wants to restrict indexing of knowledge
articles to only those articles that are publicly available in its knowledge base. UC also wants the
agent to link the sources that the large language model (LLM) grounded its response on.
Which settings should help UC with this?
A
Explanation:
According to the AgentForce Data Library Configuration Guide, administrators can restrict indexing
and retrieval of Knowledge articles to publicly available ones and enable source visibility for LLM-
grounded responses. The documentation states:
“Within the data library settings, under Knowledge Settings, enable ‘Use Public Knowledge Articles’
to ensure only publicly visible content is indexed. To display citations, enable ‘Show Sources’ so the
agent links the specific articles or data records used to ground its response.”
Option A correctly reflects these two documented configuration steps.
Option B is incorrect because Salesforce explicitly supports source display for transparency through
the “Show Sources” setting.
Option C incorrectly assumes that Data Categories control indexing visibility and source linking,
which is handled by explicit Knowledge Settings, not categorization.
Reference (AgentForce Documents / Study Guide):
AgentForce Data Library Configuration Guide: “Knowledge Settings for Public Article Indexing”
AgentForce Transparency and Source Attribution Notes: “Show Sources Option”
AgentForce Study Guide: “Configuring Knowledge Visibility and Source Display”
Coral Cloud Resorts wants to handle frequent customer misspellings of package names in queries.
Which approach should the Agentforce Specialist implement?
B
Explanation:
The AgentForce Retrieval and Semantic Search Guide explains that vector search (semantic search) is
best suited for handling spelling variations, synonyms, and phonetically similar queries. The
documentation states:
“Vector search enables fuzzy matching through semantic embeddings, allowing retrieval of relevant
documents even when user queries contain typos, abbreviations, or informal phrasing.”
Option A (hybrid search) is effective when combining structured and unstructured queries but is not
primarily designed to handle spelling tolerance.
Option C (keyword search) relies on exact term matching and fails when users misspell words.
Therefore, Option B — vector search — is the correct solution for managing misspellings and similar
word variations.
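To show why similarity-based retrieval survives misspellings while exact keyword matching does not, here is a toy sketch. Production vector search uses learned semantic embeddings; character trigrams stand in for them here, and the package names and query are invented.

```python
# Toy illustration of typo-tolerant matching. Real vector search compares
# learned embeddings; character-trigram overlap is a simple stand-in that
# still degrades gracefully under misspellings, unlike exact keyword match.
def trigrams(text: str) -> set[str]:
    padded = f"  {text.lower()}  "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    ga, gb = trigrams(a), trigrams(b)
    return len(ga & gb) / len(ga | gb)  # Jaccard similarity of trigram sets

packages = ["Sunset Beach Package", "Mountain Spa Package", "City Tour Package"]
query = "sunset beech pakage"  # two misspellings

best = max(packages, key=lambda p: similarity(query, p))
print(best)
```

An exact keyword search for "beech" or "pakage" would return nothing; the similarity-based approach still ranks the intended package first, which is the behavior the quoted extract describes as fuzzy matching.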
Reference (AgentForce Documents / Study Guide):
AgentForce Semantic Retrieval Guide: “Handling Misspellings and Synonyms with Vector Search”
AgentForce Data Cloud Search Handbook
AgentForce Study Guide: “Optimizing Retrieval for Typo and Synonym Tolerance”
Universal Containers wants to test agents with realistic data while remaining isolated from production.
Which environment should the company use with Testing Center?
C
Explanation:
The AgentForce Testing Center Implementation Guide specifies that organizations should perform all
structured and regression testing in a sandbox environment replicated from production. The guide
explains:
“Testing Center supports running tests safely in sandbox environments that mirror production data.
This ensures realistic test conditions without impacting live systems or data.”
Option A (developer orgs) provides limited and unrepresentative data.
Option B (production org) is not recommended due to potential data integrity and security risks.
Hence, Option C is the correct and Salesforce-approved environment for safe, realistic testing of
agents.
Reference (AgentForce Documents / Study Guide):
AgentForce Testing Center Guide: “Supported Environments and Best Practices”
Salesforce Sandbox Management Guide
AgentForce Study Guide: “Safe Testing with Production-Replicated Data”
An Agentforce Specialist created a Field Generation prompt template.
What should the Agentforce Specialist do to expose the template to the user?
B
Explanation:
The Field Generation prompt template type is specifically designed to enable generative AI within
the context of a Salesforce record field. To expose this functionality to an end-user, the Agentforce
Specialist must associate the template with the form field on the Lightning page (B). This is
accomplished using the Lightning App Builder:
The Agentforce Specialist first creates a custom field (often a Long Text Area or Rich Text Area) on the
desired object to store the AI-generated output.
In the Lightning App Builder for the object's Record Page, the Specialist selects the field component.
In the properties panel for that field component, there is a setting (often a dropdown) to select an
active Field Generation Prompt Template.
Once associated, an Einstein icon (or "Generate" button) appears next to the field on the record
page, allowing the user to click it to run the prompt, review the AI-generated content, and then
decide to use it to populate the field.
Options A and C (using Flows) are methods for calling prompt templates to automate the generation
of content or to ground the prompt with more complex data (like related list information). However,
for the Field Generation prompt template to be exposed directly to the user for on-demand
generation and manual review (the intended user experience for this template type), it must be
bound to the field itself on the Lightning Record Page.
Simulated Exact Extract of AgentForce documents (Conceptual Reference):
"The Field Generation prompt template is surfaced to the user via the Lightning Record Page. After
the prompt template is created and activated in Prompt Builder, the Agentforce Specialist must edit
the Lightning Record Page in the Lightning App Builder. The key step is to select the target field
component and, within its property panel, assign the Field Generation Prompt Template from the
available dropdown menu. This action binds the generative AI capability directly to the field,
displaying the 'Generate' button to the user to trigger the AI-assisted content creation upon the
record."
Simulated Reference: AgentForce Study Guide, Chapter 3: Prompt Builder, Section 3.2: Field
Generation Deployment, p. 55.
When using a prompt template, what should an Agentforce Specialist consider with their grounding
data and chosen model?
C
Explanation:
The most critical technical consideration when pairing a prompt template's grounding data with a
chosen Large Language Model (LLM) is the relationship between the two. The correct action is to
review the model limitation in Prompt Builder versus the grounding data size (C).
Every LLM has a fixed context window limit, typically expressed in tokens (the model's units for
processing text). This token limit defines the maximum amount of input data (the prompt template
text + all the dynamic grounding data) and output data the model can handle in a single request.
The grounding data, which is pulled dynamically from Salesforce records (e.g., related lists, long text
fields, Flow outputs), varies significantly in size from one record to the next. If the combined size of
the prompt and the dynamic data for a specific record exceeds the LLM's token limit, the generative
AI request will fail with a "token limit exceeded" error. The Agentforce Specialist must proactively
design the template to limit the amount of data retrieved (e.g., using Flow to summarize related lists
or querying only essential fields) to ensure it stays within the chosen model's capacity.
Option A is incorrect because the Einstein Trust Layer's token limit primarily relates to PII masking
and is a security-related capacity, not the fundamental model's context window. Option B is incorrect
because OFFSET is a SOQL query function used for pagination, which is irrelevant to ensuring the
total size of the final assembled prompt (template + data) fits within the model's token limit.
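The budgeting described above can be sketched in a few lines. The 4-characters-per-token heuristic and the 8,000-token limit are illustrative assumptions; real limits depend on the model selected in Prompt Builder, and production grounding logic would summarize or filter data rather than blindly truncate it.

```python
# Rough pre-flight token budget check. The chars-per-token ratio and the
# model limit below are illustrative assumptions, not real model values.
def estimate_tokens(text: str) -> int:
    return len(text) // 4  # common rough heuristic for English text

def fit_grounding(template: str, grounding: str, limit: int = 8000) -> str:
    budget = limit - estimate_tokens(template)  # tokens left for grounding
    max_chars = budget * 4
    # Truncate for simplicity; real designs summarize or filter instead.
    return grounding if len(grounding) <= max_chars else grounding[:max_chars]

template = "Summarize the open cases for this account:\n"
grounding = "case data " * 5000  # ~50,000 characters of dynamic record data

trimmed = fit_grounding(template, grounding)
print(estimate_tokens(template) + estimate_tokens(trimmed) <= 8000)
```

The key takeaway matches the explanation: grounding data varies per record, so the template design, not the end user, must guarantee the combined input stays inside the model's context window.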
Simulated Exact Extract of AgentForce documents (Conceptual Reference):
"A major challenge in prompt template design is managing the Large Language Model (LLM) token
limit against the volume of grounding data. The specialist must always Review the model limitation in
Prompt Builder versus the grounding data size before activation. LLM context windows (token limits)
are fixed per model, but dynamic prompt components—such as merge fields from related lists or
long text area fields—can cause the total size of the prompt to vary significantly by record. To prevent
random token limit failures, the prompt instructions and grounding logic (Flow/Apex) must be
explicitly constrained to retrieve only the essential data required to answer the query, ensuring the
combined input stays well below the LLM's defined capacity."
Simulated Reference: AgentForce Prompt Builder Best Practices Guide, Section 4: Performance and
Scalability, p. 92.
Universal Containers is indexing millions of product manuals where users may ask both structured
queries (for example, model numbers) and natural language questions (for example, "How do I reset
my device?").
Which retrieval approach should the company use?
C
Explanation:
According to the AgentForce Retrieval Optimization Guide, when users ask both structured (exact)
and unstructured (natural language) questions, the best practice is to use hybrid search. The
documentation states:
“Hybrid search combines the precision of keyword retrieval for structured terms, such as IDs or
model numbers, with the semantic flexibility of vector search for natural language queries. This
approach ensures both deterministic accuracy and contextual understanding.”
Option A (keyword search only) fails for natural language queries, which require semantic
understanding.
Option B (semantic search only) may misinterpret or overlook structured identifiers like product or
model numbers.
Therefore, Option C—hybrid search—provides the ideal balance between exact match precision and
contextual recall.
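One common way to combine the two rankings the quote describes is reciprocal rank fusion (RRF). The sketch below is a toy illustration: the document IDs and the two input rankings are invented, and real hybrid search in the platform performs this fusion internally.

```python
# Toy sketch of hybrid retrieval: fuse a keyword ranking and a semantic
# (vector) ranking with reciprocal rank fusion. Documents that score well
# in either ranking rise to the top of the fused list.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Keyword search nails the exact model number; vector search surfaces the
# sections relevant to the natural language question.
keyword_ranking = ["manual-XR200", "manual-XR100"]
vector_ranking = ["reset-guide", "manual-XR200", "troubleshooting"]

fused = rrf([keyword_ranking, vector_ranking])
print(fused[0])
```

Because "manual-XR200" appears in both rankings, it accumulates score from each and leads the fused list, which mirrors the claim that hybrid search delivers both deterministic accuracy and contextual understanding.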
Reference (AgentForce Documents / Study Guide):
AgentForce Retrieval and Indexing Guide: “Choosing Between Keyword, Semantic, and Hybrid
Search”
AgentForce Data Cloud Handbook: “Optimizing Multi-Modal Retrieval Strategies”
AgentForce Study Guide: “Hybrid Search for Structured and Unstructured Queries”
What is a key benefit of the Agent-to-Agent (A2A) protocol?
A
Explanation:
The Agent-to-Agent (A2A) Protocol Overview describes A2A as a standardized framework for cross-
vendor agent discovery and communication. The documentation specifies:
“A2A enables secure, interoperable communication between AI agents across vendors, platforms,
and ecosystems, using standardized APIs and schemas for message exchange and capability
discovery.”
This allows AgentForce agents to interact with external AI systems or partner agents while
maintaining data governance and identity controls.
Option B is incorrect because auto-onboarding without contracts or trust verification is not
supported.
Option C confuses A2A with the internal reasoning runtime used by AgentForce; A2A operates across
systems, not within a single platform.
Therefore, Option A correctly defines the key benefit of the Agent-to-Agent protocol.
Reference (AgentForce Documents / Study Guide):
AgentForce Architecture Guide: “Understanding the Agent-to-Agent (A2A) Protocol”
AgentForce Interoperability Handbook: “Cross-Vendor Agent Communication Framework”
AgentForce Study Guide: “A2A Integration Standards and Benefits”
The Agentforce Specialist for Coral Cloud Resorts wants to create an agent that will automate the
resolution of a large portion of guest complaints related to their vacation experiences. The agent will
be able to offer upgrades, hotel credit, and other complimentary options. The agent will also be in
charge of escalating the case to a human when a guest has suffered a major disruption (such as
cancellation).
Following Salesforce best practices, which type of agent should the Agentforce Specialist create?
C
Explanation:
The AgentForce for Service Implementation Guide confirms that when automating customer service
and complaint resolution, the correct solution is a Service Agent. The documentation states:
“Service Agents handle customer inquiries, complaints, and issue resolution workflows. They can
automate actions such as offering credits, applying upgrades, and escalating severe cases to human
support.”
Flex prompt templates are recommended for these scenarios, as they allow contextual control and
personalization based on the complaint details.
Option A (Sales Agent) focuses on sales-related tasks like lead nurturing.
Option B (Custom Agent) could work but lacks the pre-built integrations and actions designed for
service workflows.
Thus, Option C aligns with Salesforce’s best-practice model for customer issue automation.
Reference (AgentForce Documents / Study Guide):
AgentForce for Service Guide: “Automating Complaint Resolution”
AgentForce Prompt Template Handbook: “Using Flex Templates in Service Workflows”
AgentForce Study Guide: “Deploying Service Agents for Escalation and Resolution Scenarios”
An Agentforce Specialist is assisting Universal Containers with troubleshooting an agent. The
Agentforce Specialist notices that the agent is not using topic actions in the desired sequence,
causing inconsistent outcomes.
Which technique should the Agentforce Specialist recommend to ensure deterministic control over
the order in which actions are executed?
C
Explanation:
The AgentForce Action Sequencing and Deterministic Flow Guide explains that to ensure actions are
executed in a specific and predictable order, administrators must explicitly define the order of actions
in the topic setup. The documentation states:
“To achieve deterministic control, sequence the topic’s actions in the desired order of execution. This
ensures that dependent actions, such as data retrieval followed by record creation, execute
consistently and predictably.”
Option A (specifying the LLM provider) affects model behavior but not execution sequence.
Option B (custom variables and filters) controls conditional logic, not the order of execution.
Therefore, Option C — specifying the action order — ensures full deterministic control.
Reference (AgentForce Documents / Study Guide):
AgentForce Builder Guide: “Defining and Ordering Actions in Topics”
AgentForce Deterministic Logic Handbook
AgentForce Study Guide: “Ensuring Predictable Action Sequences”
Universal Containers wants to assign agents to improve department efficiency.
Which configuration ensures the right tasks are handled by the right agents?
A
Explanation:
According to the AgentForce Product Overview and Deployment Guide, Salesforce recommends
using purpose-built agents to maximize efficiency across departments. The documentation states:
“Each AgentForce agent type is optimized for a specific function — SDR Agent for sales development
and lead nurturing, Service Agent for customer service and support cases, and Employee Agent for
internal HR, IT, and productivity tasks.”
This separation ensures that each team benefits from a domain-specific agent equipped with the
correct data access and actions.
Option B incorrectly assigns agent types to mismatched use cases, and Option C reduces efficiency
and control by using a single generic agent for multiple domains, which goes against Salesforce’s
modular AI design principle.
Thus, Option A best aligns with Salesforce’s guidance for role-based AgentForce deployment.
Reference (AgentForce Documents / Study Guide):
AgentForce Product Overview: “Agent Types and Use Cases”
AgentForce Implementation Guide: “Aligning Agents to Departmental Functions”
AgentForce Study Guide: “Optimizing Team Efficiency with Specialized Agents”
Universal Containers (UC) needs to capture and store detailed interaction data for all agents.
Which feature should help UC get a full view of the agent's behavior from start to finish, including
reasoning engine executions, actions, prompt and gateway inputs/outputs, error messages, and final
responses?
C
Explanation:
The AgentForce Observability and Diagnostics Guide details that AgentForce Session Tracing provides
the most comprehensive visibility into agent operations. The documentation explains: “Session
Tracing captures the entire execution flow for each agent session — including reasoning engine
decisions, executed actions, prompts, gateway inputs and outputs, error logs, and final agent
responses — to provide an end-to-end view of agent behavior.”
Agentforce Analytics (Option A) focuses on aggregated performance metrics like usage, engagement,
and accuracy trends rather than deep operational data.
Utterance Analysis (Option B) evaluates specific interactions or conversation snippets but does not
include reasoning engine or system-level traces.
Hence, Option C — AgentForce Session Tracing — is correct as it provides detailed, end-to-end
diagnostic insight across all agent executions.
Reference (AgentForce Documents / Study Guide):
AgentForce Observability Guide: “Using Session Tracing for End-to-End Agent Visibility”
AgentForce Implementation Handbook: “Tracing Reasoning and Action Flows”
AgentForce Study Guide: “Monitoring and Debugging with Session Tracing”
What is the primary advantage of creating an individual retriever instead of the default retriever?
B
Explanation:
The AgentForce Data Cloud and Retrieval Configuration Guide explains that individual retrievers offer
customization flexibility beyond the default retriever. The guide states: “Individual retrievers allow
specialists to define filters, select specific fields for retrieval, and configure result limits, providing
fine-grained control over data recall and relevance.”
Option A is incorrect because aggregation across multiple data spaces or DMOs is managed through
composite retrievers, not individual retrievers.
Option C is also incorrect, as retrievers do not automatically generate or update indexes — indexing
is handled separately within Data Cloud.
Therefore, Option B is correct since it represents the key advantage of individual retrievers: the
ability to configure filters, fields, and retrieval parameters for precision control.
Reference (AgentForce Documents / Study Guide):
AgentForce Data Cloud Guide: “Individual vs. Default Retriever Configuration”
AgentForce Study Guide: “Fine-Tuning Retrieval Logic Using Individual Retrievers”
Einstein Studio for AgentForce: “Custom Filtering and Field Selection in Retrievers”