Questions for the QREP were updated on: Nov 21, 2025
A Qlik Replicate administrator will use Parallel Load during full load. Which three ways does Qlik Replicate offer? (Select three.)
ACF
Explanation:
Qlik Replicate offers several methods for parallel load during a full load process to accelerate the
replication of large tables by splitting the table into segments and loading these segments in parallel.
The three primary ways Qlik Replicate allows parallel loading are:
Use Data Ranges:
This method involves defining segment boundaries based on data ranges within the columns. You
can select segment columns and then specify the data ranges to define how the table should be
segmented and loaded in parallel.
Use Partitions - Use all partitions - Use main/sub-partitions:
For tables that are already partitioned, you can choose to load all partitions or use main/sub-
partitions to parallelize the data load process. This method ensures that the load is divided based on
the existing partitions in the source database.
Use Partitions - Specify partitions/sub-partitions:
This method allows you to specify exactly which partitions or sub-partitions to use for the parallel
load. This provides greater control over how the data is segmented and loaded, allowing for
optimization based on the specific partitioning scheme of the source table.
These methods are designed to enhance the performance and efficiency of the full load process by leveraging the structure of the source data to enable parallel processing, as illustrated in the sketch below.
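To make the data-ranges idea concrete, the sketch below is an illustration of the concept only, not Qlik's internal implementation: a table is split into segments by boundary values on a segment column, and each segment is loaded by its own worker. The table and column names are hypothetical.

# Illustrative sketch of the "Use Data Ranges" method: the table is split into
# segments by boundary values on a segment column, and each segment is loaded
# by its own worker. Table and column names are hypothetical.
import sqlite3
from concurrent.futures import ThreadPoolExecutor

DB = "source_example.db"

def setup():
    con = sqlite3.connect(DB)
    con.execute("DROP TABLE IF EXISTS orders")
    con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)",
                    [(i, i * 1.5) for i in range(1, 1001)])
    con.commit()
    con.close()

# Segment boundaries on the segment column "id" -- (low, high] per segment,
# mirroring the boundaries you would enter in the task's Parallel Load settings.
BOUNDARIES = [(0, 250), (250, 500), (500, 750), (750, 1000)]

def load_segment(bounds):
    low, high = bounds
    con = sqlite3.connect(DB)  # one connection per worker thread
    rows = con.execute("SELECT id, amount FROM orders WHERE id > ? AND id <= ?",
                       (low, high)).fetchall()
    con.close()
    return len(rows)  # a real loader would write these rows to the target

if __name__ == "__main__":
    setup()
    with ThreadPoolExecutor(max_workers=4) as pool:
        print("rows per segment:", list(pool.map(load_segment, BOUNDARIES)))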
Which are the main hardware components needed to run a Qlik Replicate Task at a high performance level?
C
Explanation:
To run a Qlik Replicate Task at a high-performance level, the main hardware components that are
recommended include:
Cores: A higher number of cores is beneficial for handling many tasks running in parallel and for prioritizing full-load performance.
SSD (Solid State Drive): SSDs are recommended for optimal performance, especially when using a file-based target or dealing with long-running source transactions that may not fit into memory.
Network bandwidth: Adequate network bandwidth is crucial to handle the data transfer requirements, with 1 Gbps for basic systems and 10 Gbps for larger systems being recommended.
The other options do not encompass all the recommended hardware components for high-
performance levels in Qlik Replicate tasks:
A. SSD, RAM: While these are important, they do not include the network bandwidth component.
B. Cores, RAM: This option omits the SSD, which is important for disk performance.
D. RAM, Network bandwidth: This option leaves out the cores, which are essential for processing power.
For detailed hardware recommendations for different scales of Qlik Replicate systems, you can refer to the official Qlik documentation on Recommended Hardware Configuration.
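As a back-of-envelope illustration of why network bandwidth matters, the snippet below estimates transfer time for an example data volume at 1 Gbps versus 10 Gbps. The 70% efficiency factor is an assumption; real-world throughput is always below the nominal line rate.

# Rough transfer-time estimate: data volume divided by usable line rate.
# The efficiency factor is an assumption; real throughput varies.
def hours_to_transfer(terabytes: float, gbps: float, efficiency: float = 0.7) -> float:
    bits = terabytes * 8 * 10**12              # TB -> bits (decimal units)
    seconds = bits / (gbps * 10**9 * efficiency)
    return seconds / 3600

for link in (1, 10):
    print(f"{link} Gbps: ~{hours_to_transfer(12, link):.1f} h for 12 TB")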
Which logging level should be used to identify the internal command that Qlik Replicate is executing prior to an error?
C
Explanation:
To identify the internal commands that Qlik Replicate is executing prior to an error, the Trace logging
level should be used.
This level provides detailed information about the operations being performed
by Qlik Replicate, including the internal commands executed before an error occurs.
Here’s how the Trace logging level works in Qlik Replicate:
When logging is set to Trace, the log lines are identified with ]T:. This indicates that the log will include detailed trace information about the internal workings of Qlik Replicate, such as sending control records to components or waiting for termination of threads.
The Trace level is more detailed than Warnings (]W:) and Errors (]E:), which only show warning and error messages without the detailed context of the operations leading up to them.
The Trace level is also distinct from Verbose (]V:), which provides even more detailed logging information but may not be necessary for identifying the commands leading up to an error.
Therefore, the correct answer is C. Trace, as it is the appropriate logging level to use when you need to analyze the actions performed by Qlik Replicate just before an error occurs.
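As a practical illustration, the sketch below scans a Replicate task log for the last few ]T: trace lines that precede the first ]E: error line. The log file name is hypothetical, but the ]T: and ]E: markers match the level identifiers described above.

# Show the trace (]T:) lines leading up to the first error (]E:) line in a
# Replicate task log. The file name is hypothetical; the ]T:/]E: markers
# follow the logging-level identifiers described above.
from collections import deque

LOG_PATH = "reptask_Oracle_2_SS_Target1.log"   # hypothetical log file name

def trace_before_error(path: str, context: int = 5):
    recent_traces = deque(maxlen=context)      # keep only the last N trace lines
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if "]T:" in line:
                recent_traces.append(line.rstrip())
            elif "]E:" in line:
                return list(recent_traces), line.rstrip()
    return list(recent_traces), None

if __name__ == "__main__":
    traces, error = trace_before_error(LOG_PATH)
    for t in traces:
        print(t)
    print("first error:", error)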
Which components can be controlled with Qlik Enterprise Manager?
C
Explanation:
Qlik Enterprise Manager provides a centralized command center to configure, execute, and monitor
data replication and transformation tasks across the enterprise. It is specifically designed to manage
and control Qlik Replicate and Qlik Compose tasks.
Additionally, it integrates with Qlik Catalog to
automatically catalog data assets generated by Qlik Replicate directly in Qlik Catalog.
This integration allows for tracking end-to-end data lineage, which improves compliance, governance, and trust in the data assets managed within Qlik Catalog.
The documentation clearly states that Qlik Enterprise Manager is used to design, execute, and
monitor Qlik Replicate and Qlik Compose tasks, and it also mentions the integration with Qlik
Catalog for data asset management. However, there is no mention of Qlik Sense being controlled by Qlik Enterprise Manager.
Qlik Sense is a separate product for data visualization and analytics, and its
management is not within the scope of Qlik Enterprise Manager’s functionalities as described in the
available resources.
Therefore, the correct answer is C. Qlik Replicate, Qlik Compose, Qlik Catalog, as these are the
components that can be controlled with Qlik Enterprise Manager.
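Enterprise Manager also exposes a REST API for this kind of centralized control. The sketch below authenticates and lists the tasks on one managed Replicate server; the base URL, endpoint paths, and session header name are assumptions drawn from the Enterprise Manager REST API and should be verified against the documentation for your version.

# Minimal sketch against the Enterprise Manager REST API: log in, then list
# the tasks on one managed Replicate server. Host name, endpoint paths, and
# the session header name are assumptions; verify them in your API reference.
import requests

BASE = "https://aem.example.com/attunityenterprisemanager/api/v1"  # hypothetical host

def get_session(user: str, password: str) -> requests.Session:
    s = requests.Session()
    # Self-signed certificates are common on internal servers; configure
    # proper verification in production instead of verify=False.
    r = s.get(f"{BASE}/login", auth=(user, password), verify=False)
    r.raise_for_status()
    # The session token is returned in a response header (name assumed here).
    s.headers["EnterpriseManager.APISessionID"] = r.headers.get(
        "EnterpriseManager.APISessionID", "")
    return s

if __name__ == "__main__":
    session = get_session("admin", "secret")            # placeholder credentials
    tasks = session.get(f"{BASE}/servers/MyReplicateServer/tasks")
    tasks.raise_for_status()
    print(tasks.json())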
Two companies are merging. Both companies have IBM DB2 LUW running. The Qlik Replicate administrator must merge a database (12 TB of data) into an existing database (15 TB of data). The merge will be done by IBM load.
Which approach should the administrator use?
B
Explanation:
When merging databases, especially of such large sizes (12 TB and 15 TB), it is crucial to ensure data
integrity and consistency. The recommended approach is to:
Stop the Replication Task: This is important to prevent any changes from being replicated to the
target while the IBM load process is ongoing.
Perform the IBM Load: Execute the IBM load to merge the database into the existing database.
Resume the Replication Task: Once the IBM load has been successfully completed, the replication
task can be resumed.
This approach ensures that the data loaded via IBM load is not missed or duplicated in the target
database. It also allows Qlik Replicate to continue capturing changes from the point where the task
was stopped, thus maintaining the continuity of the replication process.
It’s important to note that creating a new task after the IBM load (Option D) could lead to
complexities in managing the data consistency and might require additional configuration.
Continuing to run the task (Option C) could result in conflicts or data integrity issues during the load
process. Therefore, Option B is the safest and most reliable approach to ensure a smooth merge of
the databases.
For further details and best practices, you can refer to the official Qlik Replicate documentation and support articles, which provide guidance on similar scenarios.
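The stop/load/resume sequence can be scripted end to end. The sketch below outlines it using the same assumed Enterprise Manager endpoints as the earlier example; the action parameters and the db2 LOAD command line are assumptions, so treat this as a workflow outline rather than a tested procedure.

# Workflow outline for option B: stop the task, run the IBM load, resume.
# The Enterprise Manager endpoint, the action parameters, and the db2 command
# line are all assumptions -- an outline, not a tested procedure.
import subprocess
import requests

BASE = "https://aem.example.com/attunityenterprisemanager/api/v1"  # hypothetical
TASK_URL = f"{BASE}/servers/MyReplicateServer/tasks/DB2_Merge_Task"

def run_merge(session: requests.Session) -> None:
    # 1. Stop the replication task so no changes are applied during the load.
    session.post(f"{TASK_URL}?action=stop").raise_for_status()

    # 2. Perform the IBM load outside of Replicate (command is illustrative).
    subprocess.run(["db2", "LOAD FROM merge.ixf OF IXF "
                           "INSERT INTO target_schema.big_table"], check=True)

    # 3. Resume the task so CDC continues from where it stopped.
    session.post(f"{TASK_URL}?action=run").raise_for_status()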
Which are limitations associated with Qlik Replicate stream endpoint types (e.g., Kafka or Azure Event Hubs)? (Select two.)
DE
Explanation:
For stream endpoint types like Kafka or Azure Event Hubs in Qlik Replicate, there are specific
limitations that apply to the replication options and target table preparation options:
D. The Store Changes replication option is not supported: This limitation is explicitly mentioned for Kafka and Azure Event Hubs. The Store Changes mode is not supported when using these stream endpoints, meaning that changes cannot be stored for later retrieval or reporting.
E. The DROP and CREATE table target table preparation option is not supported: This is also a known limitation for Kafka as a target endpoint. The Drop and Create table Target Table Preparation option is not supported, which affects how tables are prepared on the target side during replication.
The other options are not correct because:
A. The Apply Changes replication option is not supported: This is not listed as a limitation for Kafka or Azure Event Hubs.
B. The Full Load replication option is not supported: Full Load is supported for Kafka.
C. Associated tasks filling those endpoint types cannot be stopped: This is not mentioned as a
limitation, and tasks can typically be stopped unless otherwise specified.
For more detailed information on the limitations of using Kafka or Azure Event Hubs as target endpoints in Qlik Replicate, you can refer to the official Qlik documentation.
Which is the command to export the task named Oracle_2_SS_Target1 using REPCTL?
C
Explanation:
To export a task using REPCTL in Qlik Replicate, the correct command is repctl exportrepository
task=task_name. Here’s how you would use it for the task named Oracle_2_SS_Target1:
Open the command-line console on the machine where Qlik Replicate is installed.
Use the REPCTL utility with the exportrepository command followed by the task parameter and the
name of the task you want to export.
The correct syntax for the command is:
repctl exportrepository task=Oracle_2_SS_Target1
This command will create a JSON file containing the exported task settings.
The other options provided have either incorrect syntax or misspellings:
A has a typo in the command (repct1 instead of repctl).
B uses an incorrect command (export_task is not a valid REPCTL command).
D has a typo in the task name (0racle_2_SS_Target1 instead of Oracle_2_SS_Target1) and an incorrect
command (exporttask is not a valid REPCTL command).
Therefore, the verified answer is C, as it correctly specifies the REPCTL command to export the task
named Oracle_2_SS_Target1.
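The documented exportrepository command is easy to wrap in a script, for example to confirm that the exported JSON parses before archiving it. The output file location below is an assumption; check where repctl writes the file on your installation.

# Wrap the documented "repctl exportrepository task=<name>" command and
# sanity-check that the exported JSON parses. The output file location is
# an assumption; check where repctl writes it on your installation.
import json
import subprocess

TASK = "Oracle_2_SS_Target1"
EXPORT_FILE = f"/opt/attunity/replicate/data/imports/{TASK}.json"  # hypothetical path

subprocess.run(["repctl", "exportrepository", f"task={TASK}"], check=True)

with open(EXPORT_FILE, encoding="utf-8") as f:
    task_def = json.load(f)          # raises if the export is not valid JSON
print("exported task definition loaded:", type(task_def).__name__)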
A Qlik Replicate administrator is creating a task to replicate the data from one RDBMS to another.
After the administrator starts the task, the following error message appears: "Cannot create the
specific schema".
Which method should the administrator use to fix this issue?
C
Explanation:
When the error message “Cannot create the specific schema” appears during a Qlik Replicate task, it
indicates that the task is unable to automatically create the required schema in the target RDBMS.
The recommended method to resolve this issue is to:
Create the schema manually in the target (C): This involves accessing the target database and
manually creating the schema that Qlik Replicate is attempting to use. This ensures that the
necessary database objects are in place for the replication task to proceed.
Test the target endpoint connection (D): Although not the direct solution to the schema creation
issue, it is a good practice to test the target endpoint connection to confirm that Qlik Replicate can
connect to the target database. This can help rule out any connectivity issues that might be
contributing to the problem.
The options to drop and recreate the task (A) or reload the target (B) are less likely to resolve the
specific issue of schema creation, as they do not address the underlying problem of the missing
schema in the target database.
It’s important to note that the manual creation of the schema should match the expected structure
that Qlik Replicate is attempting to replicate to ensure compatibility and successful replication.
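Creating the schema manually is usually a one-line DDL statement on the target. A minimal sketch, assuming a PostgreSQL target and the psycopg2 driver; the connection details and schema name are placeholders that must match what the Replicate task expects.

# Manually create the target schema that Replicate could not create itself.
# Connection parameters and the schema name are placeholders.
import psycopg2

conn = psycopg2.connect(host="target-db.example.com", dbname="targetdb",
                        user="repl_admin", password="secret")
conn.autocommit = True
with conn.cursor() as cur:
    # The schema name must match the one configured in the Replicate task.
    cur.execute("CREATE SCHEMA IF NOT EXISTS replicate_target")
conn.close()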
Which three task types does Qlik Replicate support? (Select three.)
AEF
Explanation:
Qlik Replicate supports a variety of task types to accommodate different data replication needs. The
three task types supported are:
LogStream to Staging Folder (A): This task type allows Qlik Replicate to save data changes from the
source database transaction log to a staging folder.
These changes can then be applied to multiple targets.
Full load, apply, and store change (E): This is a comprehensive task type that includes a full load of
the source database, applying changes to the target, and storing changes in an audit table on the
target side.
LogStream full load (F): Similar to the LogStream to Staging Folder, this task type involves saving data
changes from the source database transaction log.
However, it also includes a full load of the data to
the target database.
The other options provided do not align with the task types supported by Qlik Replicate:
B. Store changes bidirectional: While Qlik Replicate supports bidirectional tasks, the option as stated does not accurately describe a supported task type.
C. LogStream store changes: This option is not clearly defined as a supported task type in the documentation.
D. Scheduled full loads: Although Qlik Replicate can perform full loads, “Scheduled full loads” as a specific task type is not mentioned in the documentation.
Therefore, the verified answers are A, E, and F, as they represent the task types that Qlik Replicate
supports according to the official documentation.
The Apply batched changes to multiple tables concurrently option in a Qlik Replicate task is enabled. Which information can be stored in the attrep_apply_exception Control table?
C
Explanation:
When the “Apply batched changes to multiple tables concurrently” option is enabled in a Qlik
Replicate task, the attrep_apply_exception control table stores specific information related to change
processing errors. The details stored in this table include:
TASK_NAME: The name of the Qlik Replicate task.
TABLE_NAME: The name of the table.
ERROR_TIME (in UTC): The time the exception (error) occurred.
STATEMENT: The statement that was being executed when the error occurred.
ERROR: The actual error message.
This information is crucial for troubleshooting and resolving issues that may arise during the
replication process.
The data in the attrep_apply_exception table is never deleted, ensuring a
persistent record of all exceptions.
The other options do not accurately reflect the information stored in the attrep_apply_exception
control table:
A and B mention “Warning_Time,” which is not a column in the table.
D is incorrect because the table does store information about errors.
For more detailed information on the attrep_apply_exception control table and its role in handling
change processing errors, you can refer to the official Qlik Replicate documentation.
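Because the table's columns are documented (TASK_NAME, TABLE_NAME, ERROR_TIME, STATEMENT, ERROR), querying recent exceptions is straightforward. A sketch, assuming the control table lives on a PostgreSQL target and psycopg2 is available; connection details are placeholders.

# Pull the most recent change-processing exceptions from the control table.
# Target connection details are placeholders; the column names follow the
# documented attrep_apply_exception layout.
import psycopg2

conn = psycopg2.connect(host="target-db.example.com", dbname="targetdb",
                        user="repl_admin", password="secret")
with conn.cursor() as cur:
    cur.execute("""
        SELECT task_name, table_name, error_time, statement, error
        FROM attrep_apply_exception
        ORDER BY error_time DESC
        LIMIT 20
    """)
    for row in cur.fetchall():
        print(row)
conn.close()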
When working with Qlik Enterprise Manager, which component must be installed to run Analytics
under Enterprise Manager?
C
Explanation:
To run Analytics under Qlik Enterprise Manager, it is required to have a PostgreSQL Database
installed. This is because the Analytics data for Qlik Enterprise Manager is stored in a PostgreSQL
database.
Before using the Analytics feature, you must ensure that PostgreSQL (version 12.16 or
later) is installed either on the Enterprise Manager machine or on a machine that is accessible from
Enterprise Manager.
Here are the steps and prerequisites for setting up Analytics in Qlik Enterprise Manager:
Install PostgreSQL: The setup file for PostgreSQL is included with Enterprise Manager, and it must be
installed to store the Analytics data.
Create a dedicated database and user: A dedicated database and user in PostgreSQL should be
created, which will own the tables accessed by the Enterprise Manager Analytics module.
Configure connectivity: Connectivity to the PostgreSQL repository must be configured as described in
the Repository connection settings.
Data collection and purging: Configure data collection and purging settings as described in the
Analytics - Data collection and purge settings.
Register a license: A Replication Analytics license is required to use Analytics.
If you have a license,
you can register it by following the procedure described in Registering a license.
The other options provided, such as Qlik Replicate (A), Qlik Compose (B), and both Qlik Compose and
Replicate (D), are not components that must be installed to run Analytics under Enterprise Manager.
The essential component is the PostgreSQL Database (C), which serves as the backend for storing the Analytics data.
Therefore, the verified answer is C. PostgreSQL Database, as it is the required component to run
Analytics under Qlik Enterprise Manager.
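Setting up the dedicated database and user comes down to plain PostgreSQL DDL. A minimal sketch; the role name, database name, and connection details are placeholders, and your version's documentation may call for additional grants.

# Create the dedicated PostgreSQL database and user for Enterprise Manager
# Analytics. Names and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(host="postgres.example.com", dbname="postgres",
                        user="postgres", password="secret")
conn.autocommit = True   # CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    cur.execute("CREATE ROLE aem_analytics LOGIN PASSWORD 'changeme'")
    cur.execute("CREATE DATABASE aem_analytics_db OWNER aem_analytics")
conn.close()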
A Qlik Replicate administrator needs to load a Cloud Storage Data Warehouse such as Snowflake, Synapse, Redshift, or BigQuery. Which type of storage is required for the COPY statement?
D
Explanation:
When loading data into a Cloud Storage Data Warehouse like Snowflake, Synapse, Redshift, or Big
Query, the type of storage required for the COPY statement is Object Storage such as Azure Data Lake
Storage (ADLS), Amazon S3, or Google Cloud Storage (GCS). This is because these cloud data
warehouses are designed to directly interact with object storage services, which are scalable, secure,
and optimized for large amounts of data.
For example, when using Microsoft Azure Synapse Analytics as a target endpoint in Qlik Replicate,
the COPY statement load method requires the Synapse identity to be granted “Storage Blob Data
Contributor” permission on the storage account, which is applicable when using either Blob storage
or ADLS Gen2 storage.
Similarly, for Amazon S3, the Cloud Storage connector in Qlik Application
Automation supports operations with files stored in S3 buckets.
The prerequisites for using Azure
Data Lake Storage (ADLS) Gen2 file system or Blob storage location also indicate the necessity of
these being accessible from the Qlik Replicate machine.
Therefore, the correct answer is D. Object Storage (ADLS, S3, GCS), as these services provide the
necessary infrastructure for the COPY statement to load data efficiently into cloud-based data
warehouses.
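For example, a Snowflake COPY statement reads staged files directly from object storage. A minimal sketch, assuming the snowflake-connector-python package; the account, credentials, bucket, and table names are placeholders, and the exact COPY options vary by warehouse.

# Illustrates why object storage is required: the warehouse COPY command reads
# staged files straight from a bucket. All names and credentials are
# placeholders, and exact COPY options vary by cloud warehouse.
import snowflake.connector

conn = snowflake.connector.connect(
    user="loader", password="secret", account="myorg-myaccount")
conn.cursor().execute("""
    COPY INTO sales.orders
    FROM 's3://my-bucket/replicate/orders/'
    CREDENTIALS = (AWS_KEY_ID='...' AWS_SECRET_KEY='...')
    FILE_FORMAT = (TYPE = CSV)
""")
conn.close()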
Which is the minimum role permission that should be selected for a user that needs to share status
on Tasks and Server activity?
D
Explanation:
To determine the minimum role permission required for a user to share status on Tasks and Server
activity in Qlik Replicate, we can refer to the official Qlik Replicate documentation. According to the
documentation, there are four predefined roles available: Admin, Designer, Operator, and Viewer.
Each role has its own set of permissions.
The Viewer role is the most basic role and provides the user with the ability to view task history,
which includes the status on Tasks and Server activity.
This role does not allow the user to perform
any changes but does allow them to share information regarding the status of tasks and server
activity.
Here is a breakdown of the permissions for the Viewer role:
View task history: Yes
Download a memory report: No
Download a Diagnostics Package: No
View and download log files: No
Perform runtime operations (such as start, stop, or reload targets): No
Create and design tasks: No
Edit task description in Monitor View: No
Delete tasks: No
Export tasks: No
Import tasks: No
Change logging level: No
Delete logs: No
Manage endpoint connections (add, edit, duplicate, and delete): No
Open the Manage Endpoint Connections window and view the following endpoint settings: Name,
type, description, and role: Yes
Click the Test Connection button in the Manage Endpoint Connections window: No
View all of the endpoint settings in the Manage Endpoint Connections window: No
Edit the following server settings: Notifications, scheduled jobs, and executed jobs: No
Edit the following server settings: Mail server settings, default notification recipients, license
registration, global error handling, log management, file transfer service, user permissions, and
resource control: No
Specify credentials for running operating system level post-commands on Replicate Server: No
Given this information, the Viewer role is sufficient for a user who needs to share status on Tasks and
Server activity, making it the minimum role permission required for this purpose.
Which is the possible Escalate Action for Table Errors?
D
Explanation:
When encountering table errors in Qlik Replicate, the escalation policy is set to Stop Task and cannot
be changed. This means that if the number of table errors reaches a specified threshold, the task will
automatically stop, requiring manual intervention to resolve the issue.
The escalation action for table errors is specifically designed to halt the task to prevent further errors
or data inconsistencies from occurring.
This is a safety measure to ensure that data integrity is
maintained and that any issues are addressed before replication continues.
The other options listed are not escalation actions for table errors:
A. Log Record to the Exceptions Table: While logging errors to the exceptions table is a common action, it is not an escalation action.
B. No Escalate Action: This is not a valid option as there is a specific escalation action defined for table errors.
C. Suspend Table: Suspending a table is a different action that can be taken in response to errors, but it is not the defined escalation action for table errors in Qlik Replicate.
For more information on error handling and escalation actions in Qlik Replicate, you can refer to the
official Qlik Replicate Help documentation, which provides detailed guidance on configuring error
handling policies and actions for various types of errors.
How can a source be batch-loaded automatically on a daily basis?
A
Explanation:
To batch-load a source automatically on a daily basis in Qlik Replicate, you would typically use a
server scheduler. Here’s how it can be done:
Set trigger through server scheduler (A): You can configure a scheduler on the server where Qlik
Replicate is running to trigger the batch load process at a specified time each day.
This could be done using the operating system’s built-in task scheduler, such as Windows Task Scheduler or cron jobs on Linux systems.
The scheduler would execute a command or script that starts the Qlik Replicate task configured for
batch loading.
The command would utilize Qlik Replicate’s command-line interface or API to initiate
the task.
This approach allows for precise control over the timing of the batch load and can be adjusted to
meet the specific scheduling requirements of the data replication process.
The other options provided are not typically used for setting up an automatic daily batch load:
B. Set trigger through Advanced Run options: While Advanced Run options provide various ways to run tasks, they do not include a scheduling function for daily automation.
C. Set trigger through Task Designer: Task Designer is used for designing and configuring tasks, not for scheduling them.
D. Enable task on full load and apply changes: This option would start the task immediately and is not related to scheduling the task on a daily basis.
Therefore, the verified answer is A. Set trigger through server scheduler, as it is the method that
allows for the automation of batch loading on a daily schedule.
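A minimal sketch of the scheduler approach: a cron entry fires a small script each night, and the script starts the task through an HTTP call. The Enterprise Manager endpoint is the same assumption as in the earlier sketches; a repctl or other CLI invocation could be substituted.

# Nightly batch-load trigger. A cron entry such as
#   0 2 * * * /usr/bin/python3 /opt/scripts/run_replicate_task.py
# fires this script, which starts the task via an HTTP call. The endpoint
# path and session header are assumptions; verify against your API docs.
import requests

BASE = "https://aem.example.com/attunityenterprisemanager/api/v1"  # hypothetical
session = requests.Session()
session.headers["EnterpriseManager.APISessionID"] = "<token>"      # from login

resp = session.post(
    f"{BASE}/servers/MyReplicateServer/tasks/Nightly_Batch?action=run")
resp.raise_for_status()
print("task started:", resp.status_code)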