Questions for the SPLK-1003 were updated on: Dec 01, 2025
Which of the following is an acceptable channel value when using the HTTP Event Collector indexer
acknowledgment capability?
A
Explanation:
The HTTP Event Collector (HEC) supports indexer acknowledgment to confirm event delivery. Clients send events on a channel identified by a unique GUID (Globally Unique Identifier) and then poll that channel for acknowledgment status.
The GUID-based channel ensures events are not re-indexed in the case of retries.
Incorrect Options:
B, C, D: These are not valid channel values in HEC acknowledgments.
References:
Splunk Docs: Use indexer acknowledgment with HTTP Event Collector
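For illustration, a channel is a client-generated GUID sent with each request, either as a channel query parameter or in the X-Splunk-Request-Channel header. A minimal sketch using curl, where the token and GUID are placeholder values:
curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk <hec_token>" \
  -H "X-Splunk-Request-Channel: FE0ECFAD-13D5-401B-847D-77833BD77131" \
  -d '{"event": "test event"}'
The same GUID is then used to poll /services/collector/ack for acknowledgment status.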
When enabling data integrity control, where does Splunk Enterprise store the hash files for each
bucket?
B
Explanation:
Data integrity controls in Splunk ensure that indexed data has not been tampered with.
When enabled, Splunk calculates hashes for each bucket and stores these hash files in the rawdata
directory of the corresponding bucket.
Incorrect Options:
A, C, D: These directories do not store hash files.
References:
Splunk Docs: Configure data integrity controls
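As a sketch, the feature is enabled per index in indexes.conf (the index name here is hypothetical):
[web_index]
enableDataIntegrityControl = true
With this set, Splunk writes hash files (l1Hashes and l2Hash) into the rawdata directory of each bucket, which can later be verified with the splunk check-integrity CLI command.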
An admin updates the Role to Group mapping for external authentication. How does the change
affect users that are currently logged into Splunk?
A
Explanation:
Splunk checks role-to-group mapping only during user login for external authentication (e.g., LDAP,
SAML). Users already logged in will continue using their previously assigned roles until they log out
and log back in.
The changes to role mapping do not disrupt ongoing sessions.
Incorrect Options:
B: Search is not disabled upon role updates.
C: This is incorrect since existing users are also updated upon the next login.
D: Role updates do not terminate ongoing sessions.
References:
Splunk Docs: Configure user authentication
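For context, a minimal sketch of such a mapping in authentication.conf, with a hypothetical LDAP strategy name and group names:
[roleMap_corp_ldap]
admin = Splunk Admins
user = Splunk Users
A user in the Splunk Admins LDAP group would receive the admin role, but only at their next login, per the behavior described above.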
There is a file with a vast amount of old data. Which of the following inputs.conf attributes would allow an admin to monitor the file for updates without indexing the pre-existing data?
D
Explanation:
ignoreOlderThan: This setting filters files for monitoring based on their modification time. It does not prevent indexing of old data already present in a file that is still being updated.
allowList: This setting allows specifying patterns to include files for monitoring, but it does not
control indexing of pre-existing data.
monitor: This is the default method for monitoring files but does not address indexing pre-existing
data.
followTail: This attribute, when set in inputs.conf, ensures that Splunk starts reading a file from the
end (tail) and does not index existing old data. It is ideal for scenarios with large files where only new
updates are relevant.
References:
Splunk Docs: Monitor text files
Splunk Docs: Configure followTail in inputs.conf
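A minimal sketch of such a stanza in inputs.conf, using a hypothetical file path:
[monitor:///var/log/legacy_app.log]
followTail = 1
Note that Splunk documentation cautions that followTail is intended for the initial onboarding of large, pre-existing files and should generally be removed once the file has been picked up.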
An admin oversees an environment with a 1000 GB/day license. The configuration file server.conf has strict_pool_quota = false set. The license is divided into the following three pools, with today's usage shown in the right-hand column:
Pool   License Size   Today's Usage
X      500 GB/day     100 GB
Y      350 GB/day     400 GB
Z      150 GB/day     300 GB
Given this, which pool(s) are issued warnings?
D
Explanation:
In Splunk Enterprise, setting strict_pool_quota = false in server.conf means that license pools are allowed to share the total available license quota rather than being strictly restricted to their individually allocated quotas. However, this does not prevent pools from issuing warnings if they exceed their allocated limits.
Given the environment with a 1000 GB/day license split into three pools:
Pool X: 500 GB/day license, 100 GB used
Pool Y: 350 GB/day license, 400 GB used
Pool Z: 150 GB/day license, 300 GB used
Let's analyze the usage:
Pool X is allocated 500 GB/day but has only used 100 GB, well within its limit.
Pool Y is allocated 350 GB/day but has used 400 GB, which exceeds its limit by 50 GB.
Pool Z is allocated 150 GB/day but has used 300 GB, which exceeds its limit by 150 GB.
Even with strict pool quota=false, pools Y and Z have exceeded their individual allocated quotas and
will issue warnings. Pool X has not exceeded its quota and thus will not issue any warnings.
Therefore, the pools that are issued warnings are Y and Z.
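For reference, the setting named in the question lives in the [license] stanza of server.conf on the license manager; a minimal sketch:
[license]
strict_pool_quota = false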
Which of the following is a valid method to create a Splunk user?
C
Explanation:
The Splunk REST API is a valid method to create a Splunk user. It allows administrators to create, edit, and delete users programmatically using HTTP requests. The other options are not valid methods to create a Splunk user: creating a support ticket requests assistance from Splunk Support rather than creating a user; creating a user on the host operating system creates a system user, not a Splunk user; and adding the username to users.conf merely modifies a configuration file that stores user information.
References: Configure users with the CLI; Configure users and roles
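A minimal sketch of creating a user through the REST API with curl, where the host, admin credentials, and new-user values are placeholders:
curl -k -u admin:changeme https://localhost:8089/services/authentication/users \
  -d name=jdoe \
  -d password=Str0ngPassw0rd \
  -d roles=user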
Which scenario is applicable given the stanzas in authentication.conf below?
[authentication]
externalTwoFactorAuthVendor = Duo
externalTwoFactorAuthSettings = duoMFA
[duoMFA]
integrationKey = aGFwcHliaXJ0aGRheU1pZGR5
secretKey = YXVzdHJhaWxpYW5Gb3JHcmVw
applicationKey = c3BsaW5raW5ndGhlcGx1bWJ1c3NpbmN1OTU
apiHostname = 466993018.duosecurity.com
failOpen = True
timeout = 60
D
Explanation:
The failOpen setting in the [duoMFA] stanza determines how Splunk software handles authentication
requests when it cannot connect to the Duo Security service. If failOpen is set to True, as in this
example, Splunk software allows users to log in without completing a multifactor challenge. If
failOpen is set to False, Splunk software denies all logins when it cannot connect to Duo Security. This
setting is independent of the authentication type or the secretKey protection.
References: Connect to Duo Security for multifactor authentication
Which file will be matched for the following monitor stanza in inputs.conf?
[monitor:///var/log/*/bar/*.txt]
C
Explanation:
The correct answer is C. /var/log/host_460352847/bar/file/foo.txt.
The monitor stanza in inputs.conf is used to configure Splunk to monitor files and directories for new
data. The monitor stanza has the following syntax1:
[monitor://<input path>]
The input path can be a file or a directory, and it can include wildcards (*) and regular expressions.
The wildcards match any number of characters, including none, while the regular expressions match
patterns of characters. The input path is case-sensitive and must be enclosed in double quotes if it
contains spaces1.
In this case, the input path is /var/log/*/bar/*.txt, which means Splunk will monitor any file with the .txt extension that is located under a subdirectory named bar beneath the /var/log directory. The subdirectory bar can be at any level under the /var/log directory, and the * wildcards match any characters around the bar and .txt parts1.
Therefore, the file /var/log/host_460352847/bar/file/foo.txt will be matched by the monitor stanza,
as it meets the criteria. The other files will not be matched, because:
A. /var/log/host_460352847/temp/bar/file/csv/foo.txt includes extra directory levels (temp and csv) that the pattern does not account for.
B. /var/log/host_460352847/bar/foo.txt is located directly in the bar directory, not in a subdirectory beneath it.
D. /var/log/host_460352847/temp/bar/file/foo.txt places bar under an additional temp level between /var/log and bar.
When should the Data Preview feature be used?
D
Explanation:
The Data Preview feature should be used when validating the parsing of data. The Data Preview
feature allows you to preview how Splunk software will index your data before you commit the data
to an index. You can use the Data Preview feature to check the following aspects of data parsing1:
Timestamp recognition: You can verify that Splunk software correctly identifies the timestamps of
your events and assigns them to the _time field.
Event breaking: You can verify that Splunk software correctly breaks your data stream into individual events based on the LINE_BREAKER and SHOULD_LINEMERGE settings.
Source type assignment: You can verify that Splunk software correctly assigns a source type to your
data based on the props.conf file settings. You can also manually override the source type if needed.
Field extraction: You can verify that Splunk software correctly extracts fields from your events based
on the transforms.conf file settings. You can also use the Interactive Field Extractor (IFX) to create
custom field extractions.
The Data Preview feature is available in Splunk Web under Settings > Data inputs > Data preview. You
can access the Data Preview feature when you add a new input or edit an existing input1.
The other options are incorrect because:
A. When extracting fields for ingested data. The Data Preview feature can be used to verify the field extraction for data that has not been ingested yet, but not for data that has already been indexed. To extract fields from ingested data, you can use the IFX or the rex command in the Search app2.
B. When previewing the data before searching. The Data Preview feature does not allow you to search the data, but only to view how it will be indexed. To preview the data before searching, you can use the Search app and specify a time range or a sample ratio.
C. When reviewing data on the source host. The Data Preview feature does not access the data on the source host, but only the data that has been uploaded or monitored by Splunk software. To review data on the source host, you can use the Splunk Universal Forwarder or the Splunk Add-on for Unix and Linux.
Windows can prevent a Splunk forwarder from reading open files. If files need to be read while they
are being written to, what type of input stanza needs to be created?
C
Explanation:
The correct answer is C. MonitorNoHandle.
MonitorNoHandle is a type of input stanza that allows a Splunk forwarder to read files on Windows
systems as Windows writes to them. It does this by using a kernel-mode filter driver to capture raw
data as it gets written to the file1. This input stanza is useful for files that get locked open for writing,
such as the Windows DNS server log file2.
The other options are incorrect because:
A. Tail Reader is not a valid input stanza in Splunk. It is a component of the Tailing Processor, which is responsible for monitoring files and directories for new data3.
B. Upload is a type of input stanza that allows Splunk to index a single file from a local or network file system. It is not suitable for files that are constantly being updated, as it only indexes the file once and does not monitor it for changes4.
D. Monitor is a type of input stanza that allows Splunk to monitor files and directories for new data. However, it may not work for files that Windows prevents Splunk from reading while they are open. In such cases, MonitorNoHandle is a better option2.
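A minimal sketch of such a stanza in inputs.conf on a Windows forwarder, using the commonly cited DNS server log as the path:
[MonitorNoHandle://C:\Windows\System32\dns\dns.log]
sourcetype = dns
Note that MonitorNoHandle works only on Windows and monitors individual files, not directories.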
Reference
A Splunk forwarder is a lightweight agent that can forward data to a Splunk deployment. There are
two types of forwarders: universal and heavy. A universal forwarder can only forward data, while a
heavy forwarder can also perform parsing, filtering, routing, and aggregation on the data before
forwarding it5.
An input stanza is a section in the inputs.conf configuration file that defines the settings for a specific
type of input, such as files, directories, network ports, scripts, or Windows event logs. An input
stanza starts with a square bracket, followed by the input type and the input path or name. For
example, [monitor:///var/log] is an input stanza for monitoring the /var/log directory.
References:
1: Monitor files and directories - Splunk Documentation
2: How to configure props.conf for proper line breaking … - Splunk Community
3: How Splunk Enterprise monitors files and directories - Splunk Documentation
4: Upload a file - Splunk Documentation
5: Use forwarders to get data into Splunk Enterprise - Splunk Documentation
[6]: inputs.conf - Splunk Documentation
When deploying apps on Universal Forwarders using the deployment server, what is the correct
component and location of the app before it is deployed?
C
Explanation:
The correct answer is C. On Deployment Server, $SPLUNK_HOME/etc/deployment-apps.
A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager
for any number of other instances, called “deployment clients”. A deployment client can be a
universal forwarder, a non-clustered indexer, or a search head1.
A deployment app is a directory that contains any content that you want to download to a set of
deployment clients. The content can include a Splunk Enterprise app, a set of Splunk Enterprise
configurations, or other content, such as scripts, images, and supporting files2.
You create a deployment app by creating a directory for it on the deployment server. The default
location is $SPLUNK_HOME/etc/deployment-apps, but this is configurable through the
repositoryLocation attribute in serverclass.conf. Underneath this location, each app must have its
own subdirectory. The name of the subdirectory serves as the app name in the forwarder
management interface2.
The other options are incorrect because:
A. On Universal Forwarder, $SPLUNK_HOME/etc/apps. This is the location where the deployment app resides after it is downloaded from the deployment server to the universal forwarder. It is not the location of the app before it is deployed2.
B. On Deployment Server, $SPLUNK_HOME/etc/apps. This is the location where the apps that are specific to the deployment server itself reside. It is not the location where the deployment apps for the clients are stored2.
D. On Universal Forwarder, $SPLUNK_HOME/etc/deployment-apps. This is not a valid location for any
app on a universal forwarder. The universal forwarder does not act as a deployment server and does
not store deployment apps3.
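As a sketch, a staged deployment app (with a hypothetical app name) sits under the repository location, which is itself configurable in serverclass.conf:
$SPLUNK_HOME/etc/deployment-apps/forwarder_outputs/local/outputs.conf
[global]
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps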
Search heads in a company's European offices need to be able to search data in their New York
offices. They also need to restrict access to certain indexers. What should be configured to allow this
type of action?
C
Explanation:
The correct answer is C. Distributed search is the feature that allows search heads in a company’s
European offices to search data in their New York offices. Distributed search also enables restricting
access to certain indexers by using the splunk_server field or the server.conf file1.
Distributed search is a way to scale your Splunk deployment by separating the search management
and presentation layer from the indexing and search retrieval layer. With distributed search, a Splunk
instance called a search head sends search requests to a group of indexers, or search peers, which
perform the actual searches on their indexes. The search head then merges the results back to the
user2.
Distributed search has several use cases, such as horizontal scaling, access control, and managing
geo-dispersed data. For example, users in different offices can search data across the enterprise or
only in their local area, depending on their needs and permissions2.
The other options are incorrect because:
A. Indexer clustering is a feature that replicates data across a group of indexers to ensure data availability and recovery. Indexer clustering does not directly affect distributed search, although search heads can be configured to search across an indexer cluster3.
B. LDAP control is a feature that allows Splunk to integrate with an external LDAP directory service for user authentication and role mapping. LDAP control does not affect distributed search, although it can be used to manage user access to data and searches.
D. Search head clustering is a feature that distributes the search workload across a group of search heads that share resources, configurations, and jobs. Search head clustering does not affect distributed search, although the search heads in a cluster can search across the same set of indexers.
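A minimal sketch of distsearch.conf on a European search head, with hypothetical New York indexer names:
[distributedSearch]
servers = https://nyc-idx1:8089,https://nyc-idx2:8089
In practice, peers are usually added through Splunk Web or the splunk add search-server CLI command, which also handles authentication between the instances.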
A Universal Forwarder has the following active stanza in inputs.conf:
[monitor:///var/log]
disabled = 0
host = 460352847
An event from this input has a timestamp of 10:55. What timezone will Splunk add to the event as
part of indexing?
D
Explanation:
The correct answer is D. The timezone of the forwarder will be added to the event as part of
indexing.
According to the Splunk documentation1, Splunk software determines the time zone to assign to a
timestamp using the following logic in order of precedence:
Use the time zone specified in raw event data (for example, PST, -0800), if present.
Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the
stanza specifies.
If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the
forwarder provides.
Use the time zone of the host that indexes the event.
In this case, the event does not have a time zone specified in the raw data, nor does it have a TZ
attribute set in props.conf. Therefore, the next rule applies, which is to use the time zone that the
forwarder provides. A universal forwarder is a lightweight agent that can forward data to a Splunk
deployment, and it knows its system time zone and sends that information along with the events to
the indexer2. The indexer then converts the event time to UTC and stores it in the _time field1.
The other options are incorrect because:
A. Universal Coordinated Time (UTC) is not the time zone that Splunk adds to the event as part of indexing, but rather the time zone that Splunk uses to store the event time in the _time field. Splunk software converts the event time to UTC based on the time zone that it determines from the rules above1.
B. The timezone of the search head is not relevant for indexing, as the search head is a Splunk component that handles search requests and distributes them to indexers, but it does not process incoming data3. The search head uses the user's timezone setting to determine the time range in UTC that should be searched and to display the timestamp of the results in the user's timezone2.
C. The timezone of the indexer that indexed the event is only used as a last resort, if none of the other rules apply. In this case, the forwarder provides the time zone information, so the indexer does not use its own time zone1.
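For contrast, the second rule in the precedence list would look like the following props.conf sketch, using the host from the question and a hypothetical time zone:
[host::460352847]
TZ = America/New_York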
Which pathway represents where a network input in Splunk might be found?
B
Explanation:
The correct answer is B. The network input in Splunk might be found in the
$SPLUNK_HOME/etc/apps/$appName/local/inputs.conf file.
A network input is a type of input that monitors data from TCP or UDP ports. To configure a network
input, you need to specify the port number, the connection host, the source, and the sourcetype in
the inputs.conf file. You can also set other optional settings, such as index, queue, and host_regex1.
The inputs.conf file is a configuration file that contains the settings for different types of inputs, such
as files, directories, scripts, network ports, and Windows event logs. The inputs.conf file can be
located in various directories, depending on the scope and priority of the settings. The most common
locations are:
$SPLUNK_HOME/etc/system/default: This directory contains the default settings for all inputs. You
should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/system/local: This directory contains the custom settings for all inputs that
apply to the entire Splunk instance. The settings in this directory override the default settings2.
$SPLUNK_HOME/etc/apps/$appName/default: This directory contains the default settings for all
inputs that are specific to an app. You should not modify or copy the files in this directory2.
$SPLUNK_HOME/etc/apps/$appName/local: This directory contains the custom settings for all inputs
that are specific to an app. The settings in this directory override the default and system settings2.
Therefore, the best practice is to create or edit the inputs.conf file in the
$SPLUNK_HOME/etc/apps/$appName/local directory, where $appName is the name of the app that
you want to configure the network input for. This way, you can avoid modifying the default files and
ensure that your settings are applied to the specific app.
The other options are incorrect because:
A. There is no network directory under the apps directory. The network input settings should be in the inputs.conf file, not in a separate directory.
C. There is no udp.conf file in Splunk. The network input settings should be in the inputs.conf file, not in a separate file. The system directory is not the recommended location for custom settings, as it affects the entire Splunk instance.
D. The var/lib/splunk directory is where Splunk stores the indexed data, not the input settings. The homePath setting is used to specify the location of the index data, not the input data. The inputName is not a valid variable for inputs.conf.
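A minimal sketch of a network input stanza in $SPLUNK_HOME/etc/apps/$appName/local/inputs.conf, with a hypothetical port and sourcetype:
[tcp://:9514]
connection_host = ip
sourcetype = syslog
A UDP input follows the same pattern with a [udp://<port>] stanza header.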
Which Splunk component(s) would break a stream of syslog inputs into individual events? (select all
that apply)
C, D
Explanation:
The correct answer is C and D. A heavy forwarder and an indexer are the Splunk components that can
break a stream of syslog inputs into individual events.
A universal forwarder is a lightweight agent that can forward data to a Splunk deployment, but it
does not perform any parsing or indexing on the data. A search head is a Splunk component that
handles search requests and distributes them to indexers, but it does not process incoming data.
A heavy forwarder is a Splunk component that can perform parsing, filtering, routing, and aggregation on the data before forwarding it to indexers or other destinations. A heavy forwarder can break a stream of syslog inputs into individual events based on the LINE_BREAKER and SHOULD_LINEMERGE settings in the props.conf file1.
An indexer is a Splunk component that stores and indexes data, making it searchable. An indexer can also break a stream of syslog inputs into individual events based on the props.conf file settings, such as TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD, and LINE_BREAKER2.
Reference
A Splunk component is a software process that performs a specific function in a Splunk deployment,
such as data collection, data processing, data storage, data search, or data visualization.
Syslog is a standard protocol for logging messages from network devices, such as routers, switches,
firewalls, or servers. Syslog messages are typically sent over UDP or TCP to a central syslog server or
a Splunk instance.
Breaking a stream of syslog inputs into individual events means separating the data into discrete
records that can be indexed and searched by Splunk. Each event should have a timestamp, a host, a
source, and a sourcetype, which are the default fields that Splunk assigns to the data.
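A minimal sketch of the props.conf settings mentioned above, applied on a heavy forwarder or indexer to break a syslog stream into events (values are illustrative):
[syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32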
References:
1: Configure inputs using Splunk Connect for Syslog - Splunk Documentation
2: inputs.conf - Splunk Documentation
3: How to configure props.conf for proper line breaking … - Splunk Community
4: Reliable syslog/tcp input – splunk bundle style | Splunk
5: Configure inputs using Splunk Connect for Syslog - Splunk Documentation
6: About configuration files - Splunk Documentation
[7]: Configure your OSSEC server to send data to the Splunk Add-on for OSSEC - Splunk
Documentation
[8]: Splunk components - Splunk Documentation
[9]: Syslog - Wikipedia
[10]: About default fields - Splunk Documentation