Queue Sink: Apache Kafka


  • Apache Kafka is a distributed, high-performance, transactional messaging platform that remains performant as the number of messages it needs to process increases and the number of events it needs to stream grows to big-data scale.

  • RavenDB can harness the advantages presented by message brokers like Kafka both as a producer (by running ETL tasks) and as a consumer (using a sink task to consume enqueued messages).

  • To use RavenDB as a consumer, define an ongoing Sink task that will read batches of enqueued JSON-formatted messages from Kafka topics, construct documents using user-defined scripts, and store the documents in RavenDB collections.


The Queue Sink Task

From RavenDB 6.0 on, users can create an ongoing Sink task that connects to a Kafka broker, retrieves enqueued messages from selected Kafka topics, runs a user-defined script that manipulates the data and constructs documents, and stores any created documents in RavenDB collections.


Connecting a Kafka broker

In the message broker architecture, RavenDB sinks take the role of data consumers.
A sink connects to a Kafka broker using a connection string and retrieves messages from the broker's topics.

Read below about adding a connection string via the API.
See the Studio documentation for adding a connection string using Studio.

Like all ongoing tasks, a sink task is operated by a responsible node.
When responsibility for the task moves from one node to another, e.g., from node A to node B as a result of node A's downtime:

  • The consumer task will maintain the same consumer group id it had on the original node.
  • Kafka brokers may stop serving the sink task for a while as the Kafka consumer group rebalances (adapting, among other changes, to one node leaving and another joining).


Retrieving enqueued messages from selected Kafka topics

When a producer sends a message to a Kafka broker, the message is appended to the end of a topic. RavenDB's sink consumes the topic's messages in order: as the preceding messages are pulled and processed, the sink's offset advances until it reaches the message and consumes it.


Running user-defined scripts

A sink task's script is a JavaScript segment. Its basic role is to retrieve selected Kafka messages or message properties, and construct documents that will then be stored in RavenDB.

The script can simply store the whole message as a document, as in this segment:

// Give the document a metadata `@collection` property to store it in
// this collection, or leave it unset to store the document in @empty.
this['@metadata']['@collection'] = 'Orders'; 
// Store the message as is, using its Id property as its RavenDB Id as well.  
put(this.Id.toString(), this)

But the script can also retrieve just part of the read message's information and construct a new document that doesn't resemble the original message.
Scripts often comprise two sections: one that creates a JSON object defining the document's structure and contents, and a second that stores the document.

E.g., for Kafka messages of this format -

{
   "Id" : 13,
   "FirstName" : "John",
   "LastName" : "Doe"
}

We can create this script -

var item = { 
    Id : this.Id, 
    FirstName : this.FirstName, 
    LastName : this.LastName, 
    FullName : this.FirstName + ' ' + this.LastName, 
    "@metadata" : {
        "@collection" : "Users"
    }
};

// Use .toString() to pass the Id as a string even if Kafka provides it as a number
put(this.Id.toString(), item)

The script can also apply various other JavaScript commands, including load, to load a RavenDB document (e.g., to construct a document that combines data from the retrieved message with complementary data from existing RavenDB documents), del, to remove existing RavenDB documents, and many others.
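As a hedged illustration of these commands, the sketch below enriches an incoming message with data loaded from an existing RavenDB document and removes a document that is no longer needed. The Profiles and PendingRegistrations collections, their properties, and the assumption that load returns null for a missing document are illustrative, not taken from RavenDB's examples:

// Load a complementary RavenDB document (assumed to exist)
var profile = load('Profiles/' + this.Id);

var item = {
    Id : this.Id,
    FullName : this.FirstName + ' ' + this.LastName,
    // Combine message data with data loaded from RavenDB
    Theme : profile ? profile.Theme : 'default',
    "@metadata" : {
        "@collection" : "Users"
    }
};

// Remove a hypothetical document made obsolete by this message
del('PendingRegistrations/' + this.Id);

put(this.Id.toString(), item)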


Storing documents in RavenDB collections

The sink task consumes batches of queued messages and stores them in RavenDB in a transactional manner, processing either the entire batch or none of it.

Exceptions to this rule

Some script processing errors are allowed; when such an error occurs, RavenDB skips the affected message, records the event in the logs, and alerts the user in Studio, but continues processing the batch.

Once a batch is consumed, the task confirms it by calling kafkaConsumer.Commit().

Note that the number of documents included in a batch is configurable.
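Conceptually, the cycle resembles the following sketch, written with the Confluent.Kafka client. It only illustrates the consume-a-batch-then-commit pattern described above; it is not RavenDB's implementation, and the group id, topic name, timeout, and batch size are assumptions:

using System;
using System.Collections.Generic;
using Confluent.Kafka;

var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "ravendb-sink",   // illustrative consumer group id
    EnableAutoCommit = false    // offsets are committed only after the batch is stored
};

using var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build();
consumer.Subscribe("orders");

var batch = new List<ConsumeResult<string, string>>();
while (batch.Count < 100)       // illustrative batch-size limit
{
    var result = consumer.Consume(TimeSpan.FromMilliseconds(250));
    if (result == null)
        break;                  // no more messages for now
    batch.Add(result);
}

// ... store the whole batch in RavenDB in a single transaction ...

consumer.Commit();              // confirm the batch only after it is stored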

Take care of duplicates

Producers may enqueue multiple instances of the same document.
If processing each message only once is important to the consumer, it is the consumer's responsibility to verify the uniqueness of each consumed message.

Note that as long as the Id property of Kafka messages is preserved (so duplicate messages share an Id), the script's put(ID, { ... }) command will overwrite any previous document with the same Id, leaving only one copy in the database.
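If the script must avoid reprocessing a message altogether, it can check for an existing document before storing. The following is a minimal sketch, assuming load returns null when no document with the given ID exists:

// Skip messages whose documents were already stored
var existing = load(this.Id.toString());
if (!existing) {
    this['@metadata']['@collection'] = 'Orders';
    put(this.Id.toString(), this);
}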

Client API

Add a Kafka Connection String

Before defining a Kafka sink task, add a Kafka connection string that the task will use to connect to the message broker's bootstrap servers.

To create the connection string:

  • Create a QueueConnectionString instance with the connection string configuration.
    Pass it to the PutConnectionStringOperation store operation to add the connection string.

    QueueConnectionString:

    // Add Kafka connection string
    var res = store.Maintenance.Send(
        new PutConnectionStringOperation<QueueConnectionString>(
            new QueueConnectionString
            {
                Name = "KafkaConStr",
                BrokerType = QueueBrokerType.Kafka,
                KafkaConnectionSettings = new KafkaConnectionSettings()
                        { BootstrapServers = "localhost:9092" }
            }));

    QueueBrokerType:

    public enum QueueBrokerType
    {
        None,
        Kafka,
        RabbitMq
    }
    Property                  Type                     Description
    Name                      string                   Connection string name
    BrokerType                QueueBrokerType          Set to QueueBrokerType.Kafka for a Kafka connection string
    KafkaConnectionSettings   KafkaConnectionSettings  Connection settings; its BootstrapServers property is a comma-separated list of host:port URLs to Kafka brokers
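
    For example, a connection string can list several bootstrap servers in a single comma-separated value; the host names below are illustrative:

    // Add a Kafka connection string that lists multiple brokers
    var clusterRes = store.Maintenance.Send(
        new PutConnectionStringOperation<QueueConnectionString>(
            new QueueConnectionString
            {
                Name = "KafkaClusterConStr",
                BrokerType = QueueBrokerType.Kafka,
                KafkaConnectionSettings = new KafkaConnectionSettings()
                        { BootstrapServers = "kafka-1:9092,kafka-2:9092,kafka-3:9092" }
            }));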

Add a Kafka Sink Task

To create the Sink task:

  • Create QueueSinkScript instances to define scripts with which the task can process retrieved messages, apply JavaScript commands, construct documents and store them in RavenDB.

    // Define a Sink script
    QueueSinkScript queueSinkScript = new QueueSinkScript
    {
        // Script name
        Name = "orders",
        // A list of Kafka topics to consume messages from
        Queues = new List<string>() { "orders" },
        // Apply this script
        Script = @"this['@metadata']['@collection'] = 'Orders'; 
                   put(this.Id.toString(), this)"
    };
  • Prepare a QueueSinkConfiguration object with the sink task configuration.

    QueueSinkConfiguration properties:

    Property              Type                   Description
    Name                  string                 The sink task name
    ConnectionStringName  string                 The registered connection string name
    BrokerType            QueueBrokerType        Set to QueueBrokerType.Kafka to define a Kafka sink task
    Scripts               List<QueueSinkScript>  A list of scripts
  • Pass this object to the AddQueueSinkOperation store operation to add the Sink task.

    QueueSinkScript properties:

    Property  Type          Description
    Name      string        Script name
    Queues    List<string>  A list of Kafka topics to consume messages from
    Script    string        The script contents

Code Sample:

// Add Kafka connection string
var res = store.Maintenance.Send(
    new PutConnectionStringOperation<QueueConnectionString>(
        new QueueConnectionString
        {
            Name = "KafkaConStr",
            BrokerType = QueueBrokerType.Kafka,
            KafkaConnectionSettings = new KafkaConnectionSettings() 
                    { BootstrapServers = "localhost:9092" }
        }));

// Define a Sink script
QueueSinkScript queueSinkScript = new QueueSinkScript
{
    // Script name
    Name = "orders",
    // A list of Kafka topics to consume messages from
    Queues = new List<string>() { "orders" },
    // Apply this script
    Script = @"this['@metadata']['@collection'] = 'Orders'; 
               put(this.Id.toString(), this)"
};

// Define a Kafka configuration
var config = new QueueSinkConfiguration()
{
    // Sink name
    Name = "KafkaSinkTaskName",
    // The connection string to connect the broker with
    ConnectionStringName = "KafkaConStr",
    // What queue broker is this task using
    BrokerType = QueueBrokerType.Kafka,
    // The list of scripts to run
    Scripts = { queueSinkScript }
};

AddQueueSinkOperationResult addQueueSinkOperationResult = 
    store.Maintenance.Send(new AddQueueSinkOperation<QueueConnectionString>(config));
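
Once added, a sink task can be paused and resumed like other ongoing tasks, e.g. with ToggleOngoingTaskStateOperation. The sketch below assumes that AddQueueSinkOperationResult exposes the new task's ID as TaskId and that OngoingTaskType includes a QueueSink member; verify both against your client version:

// Disable the sink task
store.Maintenance.Send(new ToggleOngoingTaskStateOperation(
    addQueueSinkOperationResult.TaskId, OngoingTaskType.QueueSink, disable: true));

// Re-enable it later
store.Maintenance.Send(new ToggleOngoingTaskStateOperation(
    addQueueSinkOperationResult.TaskId, OngoingTaskType.QueueSink, disable: false));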

Configuration Options

Use these configuration options to gain more control over queue sink tasks.
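
For example, a server-wide option can be set in settings.json. The QueueSink.MaxBatchSize key below is an assumption used for illustration only; consult your RavenDB version's configuration reference for the exact option names and defaults:

{
    "QueueSink.MaxBatchSize": 8192
}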