Confluent CCDAK Exam Questions

Questions for the CCDAK were updated on Nov 21, 2025


Question 1

In Kafka, every broker... (select three)

  • A. contains all the topics and all the partitions
  • B. knows all the metadata for all topics and partitions
  • C. is a controller
  • D. knows the metadata for the topics and partitions it has on its disk
  • E. is a bootstrap broker
  • F. contains only a subset of the topics and the partitions
Answer:

B, E, F


Explanation:
Kafka topics are divided into partitions that are spread across the brokers, so each broker holds only a subset of the topics and partitions. Every broker nevertheless knows the metadata for all topics and partitions, and every broker can act as a bootstrap broker, but only one broker is elected controller.
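
As a quick illustration, a client pointed at any single broker can discover the whole cluster, including which broker is currently the controller. A minimal sketch using the Java AdminClient (the broker address is a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

Properties props = new Properties();
// Any one broker serves as the bootstrap broker; the client discovers
// the rest of the cluster from the metadata that broker returns.
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
try (AdminClient admin = AdminClient.create(props)) {
    DescribeClusterResult cluster = admin.describeCluster();
    System.out.println("Brokers: " + cluster.nodes().get());
    System.out.println("Controller: " + cluster.controller().get());
}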


Question 2

To continuously export data from Kafka into a target database, I should use

  • A. Kafka Producer
  • B. Kafka Streams
  • C. Kafka Connect Sink
  • D. Kafka Connect Source
Answer:

C


Explanation:
A Kafka Connect Sink connector exports data from Kafka into an external system such as a database; a Kafka Connect Source connector imports data from an external system into Kafka.
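
For example, a sink connector is configured with a handful of properties and submitted to the Kafka Connect REST API. A sketch of such a configuration as a Java map (the connector class is Confluent's JDBC sink; the topic and connection URL are placeholders):

import java.util.Map;

// Sketch of a JDBC sink connector configuration; in practice this map
// is serialized to JSON and POSTed to the Connect REST API.
Map<String, String> sinkConfig = Map.of(
    "connector.class", "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics", "sales",                                   // topic(s) to export from Kafka
    "connection.url", "jdbc:postgresql://db:5432/mydb",  // placeholder URL
    "auto.create", "true");                              // create the target table if missing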


Question 3

A Zookeeper configuration has tickTime of 2000, initLimit of 20 and syncLimit of 5. What's the
timeout value for followers to connect to Zookeeper?

  • A. 20 sec
  • B. 10 sec
  • C. 2000 ms
  • D. 40 sec
Answer:

D


Explanation:
tickTime is 2000 ms, and initLimit is the multiplier that governs how long followers have to connect and sync to the leader, so the timeout is 2000 * 20 = 40000 ms = 40 s. (syncLimit applies to synchronization after the connection is established, not to the initial connect.)


Question 4

In Avro, adding an element to an enum without a default is a __ schema evolution

  • A. breaking
  • B. full
  • C. backward
  • D. forward
Answer:

A


Explanation:
Since Confluent Platform 5.4.0, Avro 1.9.1 is used, which added a default value to the enum complex type. This changed schema resolution for enums. Before 1.9.1: if both are enums and the writer's symbol is not present in the reader's enum, an error is signalled. From 1.9.1: if both are enums and the writer's symbol is not present in the reader's enum, the reader's default value is used if one is defined; otherwise an error is signalled. Adding a symbol without a default therefore breaks existing readers.
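
As an illustration, an enum with a default can absorb symbols the reader does not know. A sketch using the Avro Java library (the schema name and symbols are made up):

import org.apache.avro.Schema;

// Reader schema: with Avro >= 1.9.1, the "default" symbol is substituted
// when the writer uses a symbol unknown to the reader. Without "default",
// resolution fails - adding a symbol is then a breaking change.
Schema readerEnum = new Schema.Parser().parse(
    "{\"type\": \"enum\", \"name\": \"Status\","
    + " \"symbols\": [\"ACTIVE\", \"INACTIVE\"],"
    + " \"default\": \"INACTIVE\"}");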


Question 5

There are five brokers in a cluster, a topic with 10 partitions and replication factor of 3, and a quota of
producer_bytes_rate of 1 MB/sec has been specified for the client. What is the maximum throughput
allowed for the client?

  • A. 10 MB/s
  • B. 0.33 MB/s
  • C. 1 MB/s
  • D. 5 MB/s
Answer:

D


Explanation:
The quota is enforced per broker, so the client may produce at 1 MB/s to each broker. With 5 brokers, the maximum total throughput is 5 * 1 MB/s = 5 MB/s.


Question 6

A topic "sales" is being produced to in the Americas region. You are mirroring this topic using Mirror
Maker to the European region. From there, you are only reading the topic for analytics purposes.
What kind of mirroring is this?

  • A. Passive-Passive
  • B. Active-Active
  • C. Active-Passive
Answer:

C


Explanation:
This is active-passive, as the replicated topic in the European region is used only for reads; all writes happen in the Americas region.


Question 7

What is true about replicas?

  • A. Produce requests can be done to the replicas that are followers
  • B. Produce and consume requests are load-balanced between Leader and Follower replicas
  • C. Leader replica handles all produce and consume requests
  • D. Follower replica handles all consume requests
Answer:

C


Explanation:
Follower replicas are passive: they serve neither produce nor consume requests. Both kinds of requests are sent to the broker hosting the partition's leader replica.


Question 8

If I want to send binary data through the REST proxy, it needs to be base64 encoded. Which component needs to encode the binary data into base64?

  • A. The Producer
  • B. The Kafka Broker
  • C. Zookeeper
  • D. The REST Proxy
Answer:

A


Explanation:
The REST Proxy expects the data it receives over REST to already be base64 encoded, so encoding is the responsibility of the producing client.
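
For instance, a producing client could encode its payload with java.util.Base64 before placing it in the JSON body sent to the REST Proxy (a minimal sketch; the payload is made up):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// The producer encodes the binary payload itself; the REST Proxy only
// accepts the already-encoded base64 string.
byte[] payload = "some binary data".getBytes(StandardCharsets.UTF_8);
String encoded = Base64.getEncoder().encodeToString(payload);
// 'encoded' then goes into the "value" field of the JSON request body.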


Question 9

What is true about Kafka brokers and clients from version 0.10.2 onwards?

  • A. Clients and brokers must have the exact same version to be able to communicate
  • B. A newer client can talk to a newer broker, but an older client cannot talk to a newer broker
  • C. A newer client can talk to a newer broker, and an older client can talk to a newer broker
  • D. A newer client can't talk to a newer broker, but an older client can talk to a newer broker
Answer:

C


Explanation:
Kafka's bidirectional client compatibility, introduced in 0.10.2, allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/


Question 10

How will you set the retention for the topic named “my-topic” to 1 hour?

  • A. Set the broker config log.retention.ms to 3600000
  • B. Set the consumer config retention.ms to 3600000
  • C. Set the topic config retention.ms to 3600000
  • D. Set the producer config retention.ms to 3600000
Answer:

C


Explanation:
retention.ms can be configured at the topic level, either when creating the topic or by altering it afterwards. It should not be set at the broker level (log.retention.ms), as that would change the default for every topic in the cluster, not just the one we are interested in.
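
One way to set this programmatically is through the Java AdminClient (a sketch; it assumes an already-created AdminClient named admin):

import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

// Set retention.ms=3600000 (1 hour) on the topic "my-topic" only.
ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
AlterConfigOp setRetention = new AlterConfigOp(
    new ConfigEntry("retention.ms", "3600000"), AlterConfigOp.OpType.SET);
admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();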


Question 11

Two consumers share the same group.id (consumer group id). Each consumer will

  • A. Read mutually exclusive offset blocks on all the partitions
  • B. Read all the data on mutually exclusive partitions
  • C. Read all data from all partitions
Answer:

B


Explanation:
Consumers sharing a group.id split the partitions among themselves: each partition is assigned to exactly one consumer in the group, and that consumer reads all of that partition's data.
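
A sketch of such a consumer (placeholder names; running two copies with the same group.id makes Kafka split the topic's partitions between them):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092");  // placeholder address
props.put("group.id", "analytics-group");        // shared by both consumers
props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(List.of("topic1"));
    while (true) {
        // Each consumer only ever sees records from its assigned partitions.
        consumer.poll(Duration.ofMillis(100))
                .forEach(r -> System.out.println(r.partition() + ": " + r.value()));
    }
}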


Question 12

A consumer is configured with enable.auto.commit=false. What happens when close() is called on
the consumer object?

  • A. The uncommitted offsets are committed
  • B. A rebalance in the consumer group will happen immediately
  • C. The group coordinator will discover that the consumer stopped sending heartbeats. It will cause rebalance after session.timeout.ms
Answer:

B


Explanation:
Calling close() makes the consumer leave the group cleanly, which triggers a partition rebalance immediately; the group does not wait for session.timeout.ms to expire. With enable.auto.commit=false, close() does not commit any offsets.
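
With auto-commit disabled, a common pattern is therefore to commit explicitly before closing (a sketch; consumer is assumed to exist):

// With enable.auto.commit=false, close() does not commit offsets,
// so commit the last processed offsets explicitly first.
try {
    consumer.commitSync();  // commits the offsets of the last poll()
} finally {
    consumer.close();       // leaves the group, triggering an immediate rebalance
}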


Question 13

What happens when broker.rack configuration is provided in broker configuration in Kafka cluster?

  • A. You can use the same broker.id as long as they have different broker.rack configuration
  • B. Replicas for a partition are placed in the same rack
  • C. Replicas for a partition are spread across different racks
  • D. Each rack contains all the topics and partitions, effectively making Kafka highly available
Answer:

C


Explanation:
Replicas for newly created partitions are assigned in a rack-alternating manner, which spreads each partition's replicas across different racks. This replica placement is the only behaviour broker.rack changes.


Question 14

You want to send a message of size 3 MB to a topic with default message size configuration. How
does KafkaProducer handle large messages?

  • A. KafkaProducer divides messages into sizes of max.request.size and sends them in order
  • B. KafkaProducer divides messages into sizes of message.max.bytes and sends them in order
  • C. MessageSizeTooLarge exception will be thrown, KafkaProducer will not retry and return exception immediately
  • D. MessageSizeTooLarge exception will be thrown, KafkaProducer retries until the number of retries are exhausted
Answer:

C


Explanation:
A message that is too large is not a retriable error: retrying would fail again for the same reason, so the producer returns the exception immediately instead of retrying. (In the Java client this surfaces as RecordTooLargeException.)
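
A sketch of what the producer sees (names are assumed; in the Java client the oversized record fails the returned future immediately):

import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.ProducerRecord;

ProducerRecord<String, String> tooLarge =
        new ProducerRecord<>("topic1", "key1", threeMegabyteValue);  // assumed 3 MB string
try {
    producer.send(tooLarge).get();
} catch (InterruptedException | ExecutionException e) {
    // Cause is RecordTooLargeException: not retriable, so the producer
    // fails the send without using any of its configured retries.
    e.printStackTrace();
}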


Question 15

What exceptions may be caught by the following producer? (select two)

ProducerRecord<String, String> record =
        new ProducerRecord<>("topic1", "key1", "value1");
try {
    producer.send(record);
} catch (Exception e) {
    e.printStackTrace();
}

  • A. BrokerNotAvailableException
  • B. SerializationException
  • C. InvalidPartitionsException
  • D. BufferExhaustedException
Answer:

B, D


Explanation:
These are the client-side exceptions that can be thrown synchronously by send(), before the message reaches the broker and before a future is returned: SerializationException if the key or value cannot be serialized, and BufferExhaustedException if the producer's in-memory buffer is full.
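
A sketch making the two synchronous failure modes explicit (same names as the snippet above; broker-side errors would instead surface through the returned Future or a Callback):

import org.apache.kafka.clients.producer.BufferExhaustedException;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.SerializationException;

ProducerRecord<String, String> record =
        new ProducerRecord<>("topic1", "key1", "value1");
try {
    producer.send(record);  // may throw before a future is returned
} catch (SerializationException e) {
    e.printStackTrace();    // key or value could not be serialized
} catch (BufferExhaustedException e) {
    e.printStackTrace();    // the producer's in-memory buffer is full
}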
