Snowflake ARA-R01 Exam Questions

Questions for the ARA-R01 were updated on: Dec 01, 2025


Question 1

A user, analyst_user, has been granted the analyst_role and is deploying a SnowSQL script to run as a
background service to extract data from Snowflake.
What steps should be taken to restrict this user's access to a specific set of IP addresses? (Select TWO).

  • A. ALTER ROLE ANALYST_ROLE SET NETWORK_POLICY='ANALYST_POLICY';
  • B. ALTER USER ANALYST_USER SET NETWORK_POLICY='ANALYST_POLICY';
  • C. ALTER USER ANALYST_USER SET NETWORK_POLICY='10.1.1.20';
  • D. USE ROLE SECURITYADMIN; CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY ALLOWED_IP_LIST = ('10.1.1.20');
  • E. USE ROLE USERADMIN; CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY ALLOWED_IP_LIST = ('10.1.1.20');
Answer:

BD

Explanation:
To ensure that an analyst_user can only access Snowflake from specific IP addresses, the following
steps are required:
Option B: This assigns the network policy directly to analyst_user. Setting a network policy at the
user level ensures that the specified network restrictions apply directly and exclusively to this user.
Option D: Before a network policy can be set or altered, the appropriate role with permission to
manage network policies must be used. SECURITYADMIN is typically the role that has privileges to
create and manage network policies in Snowflake. Creating a network policy that specifies allowed IP
addresses ensures that only requests coming from those IPs can access Snowflake under this policy.
After creation, this policy can be linked to specific users or roles as needed.
Option A incorrectly applies a network policy to a role (network policies are applied to accounts or
users, not roles), option E uses a role (USERADMIN) that does not typically manage network security
settings, and option C incorrectly supplies an IP address where a network policy name is expected,
which is not syntactically or functionally valid.
Reference: Snowflake's security management documentation covering network policies and role-
based access controls.
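For illustration, a minimal sketch of the two steps (the policy name and IP value are taken from the question; the sequence assumes SECURITYADMIN holds the CREATE NETWORK POLICY privilege):

-- Create the network policy with the allowed IP (run as SECURITYADMIN)
USE ROLE SECURITYADMIN;
CREATE OR REPLACE NETWORK POLICY ANALYST_POLICY ALLOWED_IP_LIST = ('10.1.1.20');
-- Attach the policy to the user so connections as analyst_user are limited to that IP
ALTER USER ANALYST_USER SET NETWORK_POLICY = 'ANALYST_POLICY';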

Question 2

Two queries are run on the customer_address table:
create or replace TABLE CUSTOMER_ADDRESS (
  CA_ADDRESS_SK NUMBER(38,0),
  CA_ADDRESS_ID VARCHAR(16),
  CA_STREET_NUMBER VARCHAR(10),
  CA_STREET_NAME VARCHAR(60),
  CA_STREET_TYPE VARCHAR(15),
  CA_SUITE_NUMBER VARCHAR(10),
  CA_CITY VARCHAR(60),
  CA_COUNTY VARCHAR(30),
  CA_STATE VARCHAR(2),
  CA_ZIP VARCHAR(10),
  CA_COUNTRY VARCHAR(20),
  CA_GMT_OFFSET NUMBER(5,2),
  CA_LOCATION_TYPE VARCHAR(20)
);
ALTER TABLE DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS ADD SEARCH OPTIMIZATION ON SUBSTRING(CA_ADDRESS_ID);
Which queries will benefit from the use of the search optimization service? (Select TWO).
Which queries will benefit from the use of the search optimization service? (Select TWO).

  • A. select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where substring(CA_ADDRESS_ID,1,8)= substring('AAAAAAAAPHPPLBAAASKDJHASLKDJHASKJD',1,8);
  • B. select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID= substring('AAAAAAAAPHPPLBAAASKDJHASLKDJHASKJD',1,16);
  • C. select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID LIKE '%BAAASKD%';
  • D. select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID LIKE '%PHPP%';
  • E. select * from DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS Where CA_ADDRESS_ID NOT LIKE '%AAAAAAAAPHPPL%';
Answer:

AB

Explanation:
The use of the search optimization service in Snowflake is particularly effective when queries involve
operations that match exact substrings or start from the beginning of a string. The ALTER TABLE
command adding search optimization specifically for substrings on the CA_ADDRESS_ID field allows
the service to create an optimized search path for queries using substring matches.
Option A benefits because it directly matches a substring from the start of the CA_ADDRESS_ID,
aligning with the optimization's capability to quickly locate records based on the beginning segments
of strings.
Option B also benefits: although it is written as an equality check, the SUBSTRING is applied to a
literal, so the predicate effectively compares CA_ADDRESS_ID to a constant 16-character value and
can leverage the substring index for efficient retrieval.
Options C, D, and E involve patterns that do not start from the beginning of the string or use
negations, which are not optimized by the search optimization service configured for starting
substring matches.
Reference: Snowflake's documentation on the use of search optimization for substring matching in
SQL queries.
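For illustration, the predicate pattern from option A restated as a minimal sketch (the literal is shortened; table and column names come from the question):

-- Enable search optimization for substring searches on CA_ADDRESS_ID
ALTER TABLE DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS ADD SEARCH OPTIMIZATION ON SUBSTRING(CA_ADDRESS_ID);
-- A filter on a constant derived from the column's leading characters, the pattern the answer key identifies as benefiting
SELECT *
FROM DEMO_DB.DEMO_SCH.CUSTOMER_ADDRESS
WHERE SUBSTRING(CA_ADDRESS_ID, 1, 8) = 'AAAAAAAA';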

Question 3

A user has activated primary and secondary roles for a session.
What operation is the user prohibited from using as part of SQL actions in Snowflake using the
secondary role?

  • A. Insert
  • B. Create
  • C. Delete
  • D. Truncate
Answer:

B

Explanation:
In Snowflake, when a user activates a secondary role during a session, certain privileges associated
with DDL (Data Definition Language) operations are restricted. The CREATE statement, which falls
under DDL operations, cannot be executed using a secondary role. This limitation is designed to
enforce role-based access control and ensure that schema modifications are managed carefully,
typically reserved for primary roles that have explicit permissions to modify database structures.
Reference: Snowflake's security and access control documentation specifying the limitations and
capabilities of primary versus secondary roles in session management.
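A minimal sketch of the behavior described, using hypothetical role and table names:

-- Activate all granted roles as secondary roles for the session
USE SECONDARY ROLES ALL;
-- DML such as INSERT, DELETE, or TRUNCATE can be authorized by a secondary role
INSERT INTO sales.public.orders_copy SELECT * FROM sales.public.orders;
-- CREATE statements are authorized only by the primary role, so this fails
-- unless the current primary role holds CREATE TABLE on the schema
CREATE TABLE sales.public.orders_backup AS SELECT * FROM sales.public.orders;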

Question 4

An Architect is designing a solution that will be used to process changed records in an orders table.
Newly-inserted orders must be loaded into the f_orders fact table, which will aggregate all the orders
by multiple dimensions (time, region, channel, etc.). Existing orders can be updated by the sales
department within 30 days after the order creation. In case of an order update, the solution must
perform two actions:
1. Update the order in the f_orders fact table.
2. Load the changed order data into the special table order_repairs.
This table is used by the Accounting department once a month. If the order has been changed, the
Accounting team needs to know the latest details and perform the necessary actions based on the
data in the order_repairs table.
What data processing logic design will be the MOST performant?

  • A. Use one stream and one task.
  • B. Use one stream and two tasks.
  • C. Use two streams and one task.
  • D. Use two streams and two tasks.
Answer:

B

Explanation:
The most performant design for processing changed records, considering the need to both update
records in the f_orders fact table and load changes into the order_repairs table, is to use one stream
and two tasks. The stream will monitor changes in the orders table, capturing both inserts and
updates. The first task would apply these changes to the f_orders fact table, ensuring all dimensions
are accurately represented. The second task would use the same stream to insert relevant changes
into the order_repairs table, which is critical for the Accounting department's monthly review. This
method ensures efficient processing by minimizing the overhead of managing multiple streams and
synchronizing between them, while also allowing specific tasks to optimize for their target
operations.
Reference: Snowflake's documentation on streams and tasks for handling data changes efficiently.
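For illustration, a minimal sketch of one workable one-stream/two-tasks arrangement. Object names are hypothetical, an ORDERS_DELTA staging table matching the stream output (including the METADATA$ columns) is assumed to exist, and the child task body uses a Snowflake Scripting block; the exact split of work between the two tasks can vary.

-- Single stream capturing inserts and updates on the source table
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- Root task: persist the change set once into a staging table
-- (a stream is consumed by the first committed DML statement that reads it)
CREATE OR REPLACE TASK stage_order_changes
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('orders_stream')
AS
  INSERT OVERWRITE INTO orders_delta SELECT * FROM orders_stream;

-- Child task: apply the staged changes to the fact table, then record updated orders for Accounting
CREATE OR REPLACE TASK apply_order_changes
  WAREHOUSE = etl_wh
  AFTER stage_order_changes
AS
BEGIN
  MERGE INTO f_orders f
    USING orders_delta d ON f.order_id = d.order_id
    WHEN MATCHED THEN UPDATE SET f.order_amount = d.order_amount
    WHEN NOT MATCHED THEN INSERT (order_id, order_amount) VALUES (d.order_id, d.order_amount);
  INSERT INTO order_repairs SELECT * FROM orders_delta WHERE metadata$isupdate;
END;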

Question 5

An Architect needs to improve the performance of reports that pull data from multiple Snowflake
tables, join, and then aggregate the data. Users access the reports using several dashboards. There
are performance issues on Monday mornings between 9:00am-11:00am when many users check the
sales reports. The size of the group has increased from 4 to 8 users. Waiting times to refresh the
dashboards have increased significantly. Currently this workload is being served by a virtual
warehouse with the following parameters:
AUTO_RESUME = TRUE, AUTO_SUSPEND = 60, SIZE = Medium
What is the MOST cost-effective way to increase the availability of the reports?

  • A. Use materialized views and pre-calculate the data.
  • B. Increase the warehouse to size Large and set auto_suspend = 600.
  • C. Use a multi-cluster warehouse in maximized mode with 2 size Medium clusters.
  • D. Use a multi-cluster warehouse in auto-scale mode with 1 size Medium cluster, and set min_cluster_count = 1 and max_cluster_count = 4.
Answer:

D

Explanation:
The most cost-effective way to increase the availability and performance of the reports during peak
usage times, while keeping costs under control, is to use a multi-cluster warehouse in auto-scale
mode. Option D suggests using a multi-cluster warehouse with 1 size Medium cluster and allowing it
to auto-scale between 1 and 4 clusters based on demand. This setup ensures that additional
computing resources are available when needed (e.g., during Monday morning peaks) and are scaled
down to minimize costs when the demand decreases. This approach optimizes resource utilization
and cost by adjusting the compute capacity dynamically, rather than maintaining a larger fixed size or
multiple clusters continuously.
Reference: Snowflake's official documentation on managing warehouses and using auto-scaling
features.
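For illustration, a minimal sketch of the corresponding warehouse change, assuming an Enterprise Edition account (required for multi-cluster warehouses) and a hypothetical warehouse name; the parameter values follow option D:

-- Convert the existing Medium warehouse to a multi-cluster warehouse in auto-scale mode
ALTER WAREHOUSE reporting_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
-- Additional clusters start only while queries are queuing and shut down as demand drops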

Question 6

A company is following the Data Mesh principles, including domain separation, and chose one
Snowflake account for its data platform.
An Architect created two data domains to produce two data products. The Architect needs a third
data domain that will use both of the data products to create an aggregate data product. The read
access to the data products will be granted through a separate role.
Based on the Data Mesh principles, how should the third domain be configured to create the
aggregate product if it has been granted the two read roles?

  • A. Use secondary roles for all users.
  • B. Create a hierarchy between the two read roles.
  • C. Request a technical ETL user with the sysadmin role.
  • D. Request that the two data domains share data using the Data Exchange.
Answer:

D

Explanation:
In the scenario described, where a third data domain needs access to two existing data products in a
Snowflake account structured according to Data Mesh principles, the best approach is to utilize
Snowflake’s Data Exchange functionality. Option D is correct as it facilitates the sharing and
governance of data across different domains efficiently and securely. Data Exchange allows domains
to publish and subscribe to live data products, enabling real-time data collaboration and access
management in a governed manner. This approach is in line with Data Mesh principles, which
advocate for decentralized data ownership and architecture, enhancing agility and scalability across
the organization.
Reference:
Snowflake Documentation on Data Exchange
Articles on Data Mesh Principles in Data Management

Question 7

A data share exists between a data provider account and a data consumer account. Five tables
from the provider account are being shared with the consumer account. The consumer role has been
granted the IMPORTED PRIVILEGES privilege.
What will happen to the consumer account if a new table (table_6) is added to the provider schema?

  • A. The consumer role will automatically see the new table and no additional grants are needed.
  • B. The consumer role will see the table only after this grant is given on the consumer side: grant imported privileges on database PSHARE_EDW_4TEST_DB to DEV_ROLE;
  • C. The consumer role will see the table only after this grant is given on the provider side: use role accountadmin; Grant select on table EDW.ACCOUNTING.Table_6 to share PSHARE_EDW_4TEST;
  • D. The consumer role will see the table only after this grant is given on the provider side: use role accountadmin; grant usage on database EDW to share PSHARE_EDW_4TEST ; grant usage on schema EDW.ACCOUNTING to share PSHARE_EDW_4TEST ; Grant select on table EDW.ACCOUNTING.Table_6 to database PSHARE_EDW_4TEST_DB ;
Answer:

D

Explanation:
When a new table (table_6) is added to a schema in the provider's account that is part of a data
share, the consumer will not automatically see the new table. The consumer will only be able to
access the new table once the appropriate privileges are granted by the provider. The correct
process, as outlined in option D, involves using the provider’s ACCOUNTADMIN role to grant USAGE
privileges on the database and schema, followed by SELECT privileges on the new table, specifically
to the share that includes the consumer's database. This ensures that the consumer account can
access the new table under the established data sharing setup.
Reference:
Snowflake Documentation on Managing Access Control
Snowflake Documentation on Data Sharing
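For illustration, a minimal sketch of the provider-side grants (object and share names are taken from the question; for the five existing tables the database and schema usage grants would normally already be in place):

-- Run in the provider account
USE ROLE ACCOUNTADMIN;
GRANT USAGE ON DATABASE EDW TO SHARE PSHARE_EDW_4TEST;
GRANT USAGE ON SCHEMA EDW.ACCOUNTING TO SHARE PSHARE_EDW_4TEST;
-- Add the new table to the share so the consumer account can query it
GRANT SELECT ON TABLE EDW.ACCOUNTING.TABLE_6 TO SHARE PSHARE_EDW_4TEST;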

Question 8

A company has built a data pipeline using Snowpipe to ingest files from an Amazon S3 bucket.
Snowpipe is configured to load data into staging database tables. Then a task runs to load the data
from the staging database tables into the reporting database tables.
The company is satisfied with the availability of the data in the reporting database tables, but the
reporting tables are not pruning effectively. Currently, a size 4X-Large virtual warehouse is being used
to query all of the tables in the reporting database.
What step can be taken to improve the pruning of the reporting tables?

  • A. Eliminate the use of Snowpipe and load the files into internal stages using PUT commands.
  • B. Increase the size of the virtual warehouse to a size 5X-Large.
  • C. Use an ORDER BY <cluster_key(s)> command to load the reporting tables.
  • D. Create larger files for Snowpipe to ingest and ensure the staging frequency does not exceed 1 minute.
Answer:

C

Explanation:
Effective pruning in Snowflake relies on the organization of data within micro-partitions. By using an
ORDER BY clause with clustering keys when loading data into the reporting tables, Snowflake can
better organize the data within micro-partitions. This organization allows Snowflake to skip over
irrelevant micro-partitions during a query, thus improving query performance and reducing the
amount of data scanned.
Reference:
Snowflake Documentation on micro-partitions and data clustering
Community article on recognizing unsatisfactory pruning and improving it
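A minimal sketch of the load pattern, assuming hypothetical table names and that reports typically filter on order_date and region:

-- Reload the reporting table sorted on the columns most often used in filters,
-- so rows with similar values land in the same micro-partitions and can be pruned at query time
INSERT OVERWRITE INTO reporting.sales_orders
SELECT *
FROM staging.sales_orders
ORDER BY order_date, region;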

Question 9

An Architect needs to design a data unloading strategy for Snowflake that will be used with the
COPY INTO <location> command.
Which configuration is valid?

  • A. Location of files: Snowflake internal location . File formats: CSV, XML . File encoding: UTF-8 . Encryption: 128-bit
  • B. Location of files: Amazon S3 . File formats: CSV, JSON . File encoding: Latin-1 (ISO-8859) . Encryption: 128-bit
  • C. Location of files: Google Cloud Storage . File formats: Parquet . File encoding: UTF-8 . Compression: gzip
  • D. Location of files: Azure ADLS . File formats: JSON, XML, Avro, Parquet, ORC . Compression: bzip2 . Encryption: User-supplied key
Answer:

C

Explanation:
For the configuration of data unloading in Snowflake, the valid option among the provided choices is
option C. Snowflake supports unloading data into Google Cloud Storage using the COPY INTO
<location> command, and the configurations listed in option C, such as the Parquet file format with
UTF-8 encoding and gzip compression, are all supported by Snowflake.
Notably, Parquet is a columnar storage file format, which is optimal for high-performance data
processing tasks in Snowflake. The UTF-8 file encoding and gzip compression are both standard and
widely used settings that are compatible with Snowflake’s capabilities for data unloading to cloud
storage platforms.
Reference:
Snowflake Documentation on COPY INTO command
Snowflake Documentation on Supported File Formats
Snowflake Documentation on Compression and Encoding Options
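A minimal sketch of an unload matching option C, assuming a pre-created external stage pointing to a Google Cloud Storage bucket (stage and table names are hypothetical; compression settings are omitted since Snowflake compresses Parquet output by default):

-- Unload query results as Parquet files to a Google Cloud Storage stage
COPY INTO @gcs_unload_stage/orders/
FROM (SELECT * FROM edw.sales.orders)
FILE_FORMAT = (TYPE = PARQUET);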

Question 10

Which command will create a schema without Fail-safe and will restrict object owners from passing
on access to other users?

  • A. create schema EDW.ACCOUNTING WITH MANAGED ACCESS;
  • B. create schema EDW.ACCOUNTING WITH MANAGED ACCESS DATA_RETENTION_TIME_IN_DAYS = 7;
  • C. create TRANSIENT schema EDW.ACCOUNTING WITH MANAGED ACCESS DATA_RETENTION_TIME_IN_DAYS = 1;
  • D. create TRANSIENT schema EDW.ACCOUNTING WITH MANAGED ACCESS DATA_RETENTION_TIME_IN_DAYS = 7;
Answer:

D

Explanation:
A transient schema in Snowflake is designed without a Fail-safe period, meaning it does not incur
additional storage costs once it leaves Time Travel, and it is not protected by Fail-safe in the event of
a data loss. The WITH MANAGED ACCESS option ensures that all privilege grants, including future
grants on objects within the schema, are managed by the schema owner, thus restricting object
owners from passing on access to other users.
Reference:
Snowflake Documentation on creating schemas
Snowflake Documentation on configuring access control
Snowflake Documentation on understanding and viewing Fail-safe

Question 11

A company is designing a process for importing a large amount of IoT JSON data from cloud storage
into Snowflake. New sets of IoT data get generated and uploaded approximately every 5 minutes.
Once the IoT data is in Snowflake, the company needs up-to-date information from an external
vendor to join to the data. This data is then presented to users through a dashboard that shows
different levels of aggregation. The external vendor is a Snowflake customer.
What solution will MINIMIZE complexity and MAXIMIZE performance?

  • A. 1. Create an external table over the JSON data in cloud storage. 2. Create a task that runs every 5 minutes to run a transformation procedure on new data, based on a saved timestamp. 3. Ask the vendor to expose an API so an external function can be used to generate a call to join the data back to the IoT data in the transformation procedure. 4. Give the transformed table access to the dashboard tool. 5. Perform the aggregations on the dashboard tool.
  • B. 1. Create an external table over the JSON data in cloud storage. 2. Create a task that runs every 5 minutes to run a transformation procedure on new data based on a saved timestamp. 3. Ask the vendor to create a data share with the required data that can be imported into the company's Snowflake account. 4. Join the vendor's data back to the IoT data using a transformation procedure. 5. Create views over the larger dataset to perform the aggregations required by the dashboard. 6. Give the views access to the dashboard tool.
  • C. 1. Create a Snowpipe to bring the JSON data into Snowflake. 2. Use streams and tasks to trigger a transformation procedure when new JSON data arrives. 3. Ask the vendor to expose an API so an external function call can be made to join the vendor's data back to the IoT data in a transformation procedure. 4. Create materialized views over the larger dataset to perform the aggregations required by the dashboard. 5. Give the materialized views access to the dashboard tool.
  • D. 1. Create a Snowpipe to bring the JSON data into Snowflake. 2. Use streams and tasks to trigger a transformation procedure when new JSON data arrives. 3. Ask the vendor to create a data share with the required data that is then imported into the Snowflake account. 4. Join the vendor's data back to the IoT data in a transformation procedure. 5. Create materialized views over the larger dataset to perform the aggregations required by the dashboard. 6. Give the materialized views access to the dashboard tool.
Answer:

D

Explanation:
Using Snowpipe for continuous, automated data ingestion minimizes the need for manual
intervention and ensures that data is available in Snowflake promptly after it is generated.
Leveraging Snowflake's data sharing capabilities allows for efficient and secure access to the
vendor's data without the need for complex API integrations. Materialized views provide pre-
aggregated data for fast access, which is ideal for dashboards that require high performance.
Reference:
Snowflake Documentation on Snowpipe
Snowflake Documentation on Secure Data Sharing
Best Practices for Data Ingestion with Snowflake
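For illustration, a minimal sketch of the ingestion side of option D, assuming a hypothetical external stage with event notifications configured, a landing table with a VARIANT column, and a transformation stored procedure; the vendor share and materialized views are omitted for brevity:

-- Continuous ingestion of the raw JSON files from cloud storage
CREATE OR REPLACE PIPE iot_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_iot (payload)
  FROM @iot_stage
  FILE_FORMAT = (TYPE = JSON);
-- Stream and task trigger the transformation whenever new JSON rows arrive
CREATE OR REPLACE STREAM raw_iot_stream ON TABLE raw_iot;
CREATE OR REPLACE TASK transform_iot
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('raw_iot_stream')
AS
  CALL transform_iot_proc();  -- joins the new IoT rows to the vendor's shared data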

Question 12

A retailer's enterprise data organization is exploring the use of Data Vault 2.0 to model its data lake
solution. A Snowflake Architect has been asked to provide recommendations for using Data Vault 2.0
on Snowflake.
What should the Architect tell the data organization? (Select TWO).

  • A. Change data capture can be performed using the Data Vault 2.0 HASH_DIFF concept.
  • B. Change data capture can be performed using the Data Vault 2.0 HASH_DELTA concept.
  • C. Using the multi-table insert feature in Snowflake, multiple Point-in-Time (PIT) tables can be loaded in parallel from a single join query from the data vault.
  • D. Using the multi-table insert feature, multiple Point-in-Time (PIT) tables can be loaded sequentially from a single join query from the data vault.
  • E. There are performance challenges when using Snowflake to load multiple Point-in-Time (PIT) tables in parallel from a single join query from the data vault.
Answer:

A, C

Explanation:
Data Vault 2.0 on Snowflake supports the HASH_DIFF concept for change data capture, which is a
method to detect changes in the data by comparing the hash values of the records. Additionally,
Snowflake’s multi-table insert feature allows for the loading of multiple PIT tables in parallel from a
single join query, which can significantly streamline the data loading process and improve
performance1.
Reference =

Snowflake’s documentation on multi-table inserts1

Blog post on optimizing Data Vault architecture on Snowflake2
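A minimal sketch of a multi-table insert that loads two hypothetical PIT tables from a single join over the vault (hub and satellite names are illustrative):

-- One pass over the joined satellites populates both PIT tables
INSERT ALL
  INTO pit_customer (customer_hk, snapshot_date, sat_customer_hashdiff)
    VALUES (customer_hk, snapshot_date, sat_customer_hashdiff)
  INTO pit_customer_address (customer_hk, snapshot_date, sat_address_hashdiff)
    VALUES (customer_hk, snapshot_date, sat_address_hashdiff)
SELECT h.customer_hk,
       CURRENT_DATE() AS snapshot_date,
       sc.hash_diff AS sat_customer_hashdiff,
       sa.hash_diff AS sat_address_hashdiff
FROM hub_customer h
LEFT JOIN sat_customer sc ON sc.customer_hk = h.customer_hk
LEFT JOIN sat_customer_address sa ON sa.customer_hk = h.customer_hk;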

Question 13

What is the MOST efficient way to design an environment where data retention is not considered
critical, and customization needs are to be kept to a minimum?

  • A. Use a transient database.
  • B. Use a transient schema.
  • C. Use a transient table.
  • D. Use a temporary table.
Answer:

A

Explanation:
Transient databases in Snowflake are designed for situations where data retention is not critical, and
they do not have the fail-safe period that regular databases have. This means that data in a transient
database is not recoverable after the Time Travel retention period. Using a transient database is
efficient because it minimizes storage costs while still providing most functionalities of a standard
database without the overhead of data protection features that are not needed when data retention
is not a concern.
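A minimal sketch, assuming a hypothetical database name (tables and schemas created inside a transient database are transient by default):

-- No Fail-safe period; Time Travel retention can be kept minimal to reduce storage costs
CREATE TRANSIENT DATABASE analytics_sandbox DATA_RETENTION_TIME_IN_DAYS = 1;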

Question 14

What actions are permitted when using the Snowflake SQL REST API? (Select TWO).

  • A. The use of a GET command
  • B. The use of a PUT command
  • C. The use of a ROLLBACK command
  • D. The use of a CALL command to a stored procedure which returns a table
  • E. Submitting multiple SQL statements in a single call
Answer:

AD

Explanation:
Option A: The Snowflake SQL REST API supports the use of a GET request, which can be used to
retrieve the status of a previously submitted statement or to fetch the results of a statement once it
has been executed.
Option D: The use of a CALL command to a stored procedure is supported, and the procedure can
return a result set, including a table. This allows stored procedures in Snowflake to be invoked
through the SQL REST API.

Question 15

Role A has the following permissions:
. USAGE on db1
. USAGE and CREATE VIEW on schema1 in db1
. SELECT on table1 in schema1
Role B has the following permissions:
. USAGE on db2
. USAGE and CREATE VIEW on schema2 in db2
. SELECT on table2 in schema2
A user has Role A set as the primary role and Role B as a secondary role.
What command will fail for this user?

  • A. use database db1; use schema schema1; create view v1 as select * from db2.schema2.table2;
  • B. use database db2; use schema schema2; create view v2 as select * from db1.schema1.table1;
  • C. use database db2; use schema schema2; select * from db1.schema1.table1 union select * from table2;
  • D. use database db1; use schema schema1; select * from db2.schema2.table2;
Answer:

B

Explanation:
This command will fail because object creation is authorized by the primary role only; secondary
roles extend the privileges available for queries and DML, but not for CREATE statements. Creating
view v2 in db2.schema2 requires the primary role to hold the CREATE VIEW privilege on schema2.
Here the primary role is Role A, which has no privileges in db2, so the statement fails. The SELECT
statements in options C and D succeed because query privileges can be drawn from the combination
of the primary and secondary roles, and the CREATE VIEW in option A succeeds because the primary
Role A holds CREATE VIEW on schema1.
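A minimal sketch of the failing sequence alongside a sequence that works under the stated grants (statements taken from options B and A):

-- Fails: Role A (primary) has no CREATE VIEW privilege on db2.schema2
use database db2; use schema schema2;
create view v2 as select * from db1.schema1.table1;
-- Succeeds: the primary Role A holds CREATE VIEW on schema1,
-- while the secondary Role B authorizes reading db2.schema2.table2
use database db1; use schema schema1;
create view v1 as select * from db2.schema2.table2;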
