Ongoing Tasks: ETL Basics
ETL (Extract, Transform & Load) is a three-stage RavenDB process that transfers data from a RavenDB database to an external target. The data can be filtered and transformed along the way.
The external target can be:
- Another RavenDB database instance (outside of the Database Group)
- A relational database
In this page:
Why use ETL
Share relevant data
Data that needs to be shared can be sent in a well-defined format matching your requirements so that only relevant data is sent.
Protect your data - Share partial data
Limit access to sensitive data: details that should remain private can be filtered out, since you can share partial data.
Reduce system calls
Distribute data to related services in your system architecture so that they have their own copy of the data and can access it without making a cross-service call.
E.g., a product catalog can be shared among multiple stores, where each store can modify the products or add new ones.
Transform the data
- Multiple documents can be sent from a single source document.
- Data can be transformed to match the relational model used in the target destination.
Aggregate your data
Data sent from multiple locations can be aggregated in a central server.
- Send data to an already existing reporting solution.
- Point of sales systems can send sales data to a central place for calculations.
Defining ETL Tasks
The following two ETL tasks can be defined:
- RavenDB ETL - send data to another RavenDB database
- SQL ETL - send data to a SQL database
The destination URL address is set using a pre-defined named connection string, which makes deployment between environments easier.
For RavenDB ETL, multiple URLs can be configured in the connection string as the target database can reside on multiple nodes within the Database Group in the destination cluster. Thus, if one of the destination nodes is down, RavenDB will run the ETL process against another node in the Database Group topology.
See more in Connection Strings
The tasks can be defined from code or from the Studio.
ETL's three stages are:
- Extract - Extract the documents from the database
- Transform - Transform & filter the documents data according to the supplied script (optional)
- Load - Load (write) the transformed data into the target destination
Extract
The ETL process starts by retrieving the documents from the database.
You can choose which documents will be processed by the next two stages (Transform and Load).
The possible options are:
- Documents from a single collection
- Documents from multiple collections
- All documents (RavenDB ETL only)
Transform
This stage transforms and filters the extracted documents according to a provided script.
Any transformation can be done so that only relevant data is shared.
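As a minimal sketch of such a script, the example below filters and reshapes documents before loading them. The `loadToEmployees` function is stubbed so the snippet can run standalone; in a real ETL task, `loadTo<CollectionName>` is provided by RavenDB and the script body runs once per extracted document. The collection and field names are illustrative.

```javascript
// Stub of RavenDB's loadTo<CollectionName>, so this sketch runs standalone.
// In a real ETL task the server provides this function.
const sent = [];
function loadToEmployees(obj) { sent.push(obj); }

// The transform script body: filter out inactive employees (hypothetical
// Active flag) and send only the relevant, reshaped data.
function transform(doc) {
  if (!doc.Active) return;            // filtered out: nothing is loaded
  loadToEmployees({
    Name: doc.FirstName + " " + doc.LastName,
    Title: doc.Title
  });
}

transform({ FirstName: "Nancy", LastName: "Davolio", Title: "Sales Rep", Active: true });
transform({ FirstName: "Andrew", LastName: "Fuller", Title: "Manager", Active: false });
```

Only the first document is loaded to the destination; the inactive one is filtered out.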
In addition to the ECMAScript 5.1 API, RavenDB introduces the following functions and members:
| Member | Type | Description |
| ------ | ---- | ----------- |
| `this` | object | The current document (including its metadata) |
| `id(document)` | function | Returns the document ID |
| `load(id)` | function | Loads another document. This will increase the maximum number of allowed steps in a script. Note: changes made to the other, loaded document will not trigger the ETL process. |
Specific ETL functions:

| Method | Type | Description |
| ------ | ---- | ----------- |
| `loadTo<Target>(obj)` | function | Loads an object to the specified target. The target must be either a collection name (RavenDB ETL) or a table name (SQL ETL). An object will be sent to the destination only if a `loadTo` method was called. |
| `loadAttachment(name)` | function | Loads an attachment (SQL ETL only) |
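As a runnable sketch of these members, the snippet below stubs the functions RavenDB injects into ETL scripts (`load`, `id`, `loadTo<Collection>`) and enriches an order with data from a related document. The document IDs and field names are illustrative.

```javascript
// Stubs standing in for the members RavenDB injects into ETL scripts,
// so this sketch can run standalone.
const docs = { "companies/1": { Name: "Acme" } };
const sent = [];
function load(docId) { return docs[docId]; }       // load another document
function id(doc) { return doc["@id"]; }            // returns the document ID
function loadToShipments(obj) { sent.push(obj); }  // loadTo<Collection> stub

// Script body: enrich an order with its company's name. Note that changes
// to the loaded company document would NOT re-trigger the ETL process.
const order = { "@id": "orders/1", Company: "companies/1", Freight: 32.38 };
const company = load(order.Company);
loadToShipments({
  OrderId: id(order),
  CompanyName: company.Name,
  Freight: order.Freight
});
```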
Documents are extracted and transformed by the ETL process in batches.
The number of documents processed per batch depends on the following configuration options:
- ETL.ExtractAndTransformTimeoutInSec (default: 60 sec) - The time frame for the extraction and transformation stages (in seconds), after which the loading stage will start.
- ETL.MaxNumberOfExtractedDocuments - The maximum number of documents extracted in a single ETL batch.
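As a sketch, these limits could be tuned in the server's settings.json. The values shown here are illustrative assumptions, not recommendations; verify the key names and defaults against your server version's configuration reference.

```json
{
  "ETL.ExtractAndTransformTimeoutInSec": 120,
  "ETL.MaxNumberOfExtractedDocuments": 8192
}
```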
Load
Loading the results into the target destination is the last stage.
Updates are implemented by executing consecutive DELETEs and INSERTs. When a document is modified, the delete command is sent before the new data is inserted, and both are processed under the same transaction on the destination side. This applies to both ETL types.
There are two exceptions to this behavior:
- In RavenDB ETL, when documents are loaded to the same collection there is no need to send a DELETE, because the document on the other side has the same identifier and will simply be updated.
- In SQL ETL, you can configure the task to use inserts only, which is a viable option for append-only systems.
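To illustrate why the DELETE-then-INSERT scheme is needed, the sketch below shows a SQL ETL transform where one source document produces several table rows. `loadToOrderLines` is stubbed so the snippet runs standalone; in a real task RavenDB provides `loadTo<TableName>` and writes each row to the SQL table. The table and field names are illustrative.

```javascript
// Stub of RavenDB's loadTo<TableName> used by SQL ETL scripts.
const orderLinesTable = [];
function loadToOrderLines(row) { orderLinesTable.push(row); }

// One source document yields one row per order line. When the document is
// modified, all of its rows are re-sent: a DELETE followed by INSERTs,
// processed in a single transaction on the destination side.
const doc = {
  "@id": "orders/1",
  Lines: [
    { Product: "products/1", Quantity: 2 },
    { Product: "products/2", Quantity: 5 }
  ]
};
for (const line of doc.Lines) {
  loadToOrderLines({
    OrderId: doc["@id"],
    Product: line.Product,
    Quantity: line.Quantity
  });
}
```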
In contrast to Replication, ETL is a push-only process that writes data to the destination whenever documents from the relevant collections are changed. Existing entries on the target will always be overwritten.
Loading data from an encrypted database
If the source database is encrypted, then by default the ETL process must not send data over a non-encrypted channel. This means the connection to the target must be secured:
- In RavenDB ETL, the destination server URL has to use HTTPS (and the source server's certificate needs to be registered as a client certificate on the destination server).
- In SQL ETL, the connection string to the SQL database must specify an encrypted connection (specific per SQL engine).
This validation can be turned off by selecting the Allow ETL on a non-encrypted communication channel option in the Studio (or by setting AllowEtlOnNonEncryptedChannel if the task is defined using the client API).
Please note that data encrypted at rest will then not be protected in transit.
ETL errors and warnings are logged to files and displayed in the notification center panel. You will be notified if any of the following events occur:
- connection error to the target
- JS script is invalid
- transformation error
- load error
- slow SQL was detected
If the ETL process cannot proceed with the load stage (e.g. it cannot connect to the destination), it enters fallback mode.
Fallback mode suspends the process and retries it periodically. The fallback time starts at 5 seconds and is doubled on every consecutive error, according to the time passed since the last error, but it never exceeds the ETL.MaxFallbackTimeInSec configuration option (default: 900 sec).
While the process is in fallback mode, the Reconnect state is shown in the Studio.
Details and examples for type specific ETL scripts can be found in the following articles: