CAP and ACID Optimization Strategies for Database Consistency and Availability

How different read and write strategies result in different trade-offs for database consistency and availability.

In previous articles, we covered choosing the right availability strategy and optimizing hardware performance to ensure smooth database operation. In this article, we’ll discuss in what ways RavenDB offers ACID transactions that work with different combinations of database consistency and availability.

Availability or Consistency? Why Not Both!

You may know that the CAP theorem describes how a distributed system can behave. It states a system can only provide two out of three guarantees among consistency, availability, and partition tolerance. RavenDB always chooses to be partition tolerant for all transaction types, meaning operations against a single node or against the cluster will roll back on failure.

Most NoSQL products do not offer much in the way of consistency guarantees, preferring to focus solely on availability, which can lead to duplicated data or, worse, data lost in transit. RavenDB is one of the few NoSQL databases that offer transactional guarantees for distributed workloads while achieving both high availability and consistency.

ACID Compliant Document Transactions

Session operations on documents are ACID-compliant, meaning they guarantee atomic reads and writes. In terms of the CAP theorem, single-node session operations are AP: available and partition tolerant. When you call session.SaveChanges(), the session transaction ensures data is persisted to disk and rolls back if the transaction fails. Writes can then be guaranteed for that single node. They are not consistent across the cluster, since data is replicated asynchronously to other nodes after a successful write.
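
As a minimal sketch, a single-node session write might look like this (assuming an initialized DocumentStore named store and a simple User class):

using var session = store.OpenSession();

session.Store(new User { Name = "Jane" }, "users/1-A");

// SaveChanges persists the whole batch to disk on the node this
// session is talking to; it succeeds or rolls back as a unit.
session.SaveChanges();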

Eventually Consistent Index Queries

In other database technologies, a “query” can mean loading a piece of data by ID or by certain matching criteria (like WHERE clauses in SQL). In RavenDB, these two operations are distinct: loading a document by its ID (or “key”) is called a “document load,” and document loads are ACID-compliant. “Queries,” on the other hand, are used when you want to ask for documents based on some criteria. Queries are powered by background indexes, which are eventually consistent (sometimes called BASE instead of ACID).

“Eventually consistent” means that query results can be stale while new data is being indexed. Under most circumstances this is desirable, because it means RavenDB can return results incredibly quickly as long as the data is in the index. You can learn more about indexing in RavenDB compared to other databases.
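
To illustrate the distinction, here is a sketch of a document load versus an index query (assuming an initialized store and an Order class with a Company property; the WaitForNonStaleResults customization trades latency for freshness):

using var session = store.OpenSession();

// Document load by ID: ACID, always returns the latest committed data.
var order = session.Load<Order>("orders/1-A");

// Query: served by a background index, so results may be stale
// while newly written data is still being indexed.
var orders = session.Query<Order>()
  .Where(o => o.Company == "companies/5-A")
  .ToList();

// If staleness is unacceptable for a specific query, you can opt
// to wait for the index to catch up, at the cost of latency.
var freshOrders = session.Query<Order>()
  .Customize(x => x.WaitForNonStaleResults())
  .Where(o => o.Company == "companies/5-A")
  .ToList();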

Consistency Across a Distributed Cluster

When you create a new session, by default it is scoped to a single node, which gives you faster document operations. However, as mentioned above, single-node operations cannot meet the “consistency” guarantee of the CAP theorem across a distributed cluster. If you need to guarantee uniqueness or replicate data for redundancy across more than one node, you can choose higher consistency at the cost of availability (CP).

There are more advanced APIs that can act as “insurance policies” when reading and writing critical data. These operations will be more expensive to perform, requiring coordination between multiple nodes, but they can ensure your data ends up where you need it across a cluster.

Cluster-Wide Transactions

For ensuring consistency across a cluster, RavenDB allows you to opt in to cluster-wide transactions when opening a session using the session transaction mode TransactionMode.ClusterWide:

using var session = store.OpenSession(new SessionOptions {
  TransactionMode = TransactionMode.ClusterWide
});

// Now all operations done in this session will use a distributed transaction mode

Cluster-wide operations take longer to execute because they need to obtain consensus from a majority of nodes when calling SaveChanges. If too many nodes are unavailable, the transaction will fail, and all changes will be rolled back automatically.

Consistent Document Operations

Document operations like making changes, patching, and storing maintain ACID guarantees across the cluster (CP: consistent and partition tolerant).

Compare-Exchange Operations

In addition to regular document operations, you can also perform an interlocked distributed operation (or “compare-exchange”).

A compare-exchange operation ensures a piece of data (the compare-exchange value) is persisted to disk on a majority of nodes and that its key is unique across the cluster. This means compare-exchange operations can substitute for “unique constraints” in traditional RDBMS technologies, except that they are more powerful: you can customize the handling depending on the situation, such as with email address reservation.

Compare-exchange can also be performed outside a session transaction using the operations API.
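
For example, reserving an email address inside a cluster-wide session might look like the sketch below (assuming a User class with Id and Email properties; the key naming is illustrative):

using var session = store.OpenSession(new SessionOptions {
  TransactionMode = TransactionMode.ClusterWide
});

var user = new User { Email = "jane@example.com" };
session.Store(user);

// Reserve the email address cluster-wide; SaveChanges will fail
// if this key already exists, guaranteeing uniqueness.
session.Advanced.ClusterTransaction.CreateCompareExchangeValue(
  "emails/jane@example.com", user.Id);

session.SaveChanges();

// Outside a session, the operations API can do the same thing:
// store.Operations.Send(new PutCompareExchangeValueOperation<string>(
//   "emails/jane@example.com", user.Id, 0));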

Other Types of Distributed Operations

A session in cluster-wide transaction mode does not support modifying attachments, counters, or time series. However, RavenDB’s counters and time series APIs are already built using distributed operations that do not need you to enable cluster-wide transaction mode.
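
For example, incrementing a counter uses a regular single-node session, yet concurrent increments on different nodes merge without conflicts (a sketch, assuming an existing products/1-A document):

using var session = store.OpenSession();

// Increment a distributed counter; increments made concurrently
// on different nodes are merged rather than conflicting.
session.CountersFor("products/1-A").Increment("downloads", 1);

session.SaveChanges();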

Waiting For Replication

Single-node transactions are resilient to failures since they work even when other nodes in the cluster are down. For example, when updating one or more documents, the application can continue serving requests even if only one node is available.

In circumstances where you want to guarantee that data is replicated to more than one node, you can use session.Advanced.WaitForReplicationAfterSaveChanges() to perform “write assurance.” Waiting for replication waits until a certain number of cluster nodes confirm they have persisted the newly modified documents to disk. Note that this assurance applies only to internal replication within the cluster, not to external replication.

The trade-off is a slower write operation, as RavenDB has to wait for the specified number of nodes (the replication factor) to acknowledge that the data has been replicated.

Waiting for replication is not a cluster transaction, meaning the data is not rolled back if RavenDB cannot write to the specified number of nodes. It’s still only transactional for the preferred node, and the data will be replicated in the future once the nodes are healthy again.
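
Here is a sketch of write assurance; the timeout and replica count are illustrative and assume a cluster with at least three nodes:

using var session = store.OpenSession();

session.Store(new User { Name = "Jane" }, "users/1-A");

// Block until at least 2 nodes confirm they persisted the write,
// waiting up to 30 seconds before throwing. The write itself is
// still committed on the preferred node either way.
session.Advanced.WaitForReplicationAfterSaveChanges(
  timeout: TimeSpan.FromSeconds(30),
  throwOnTimeout: true,
  replicas: 2);

session.SaveChanges();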

Document Key Generation Strategies

Client-based Strategies

By default, RavenDB uses the HiLo algorithm to let application clients auto-generate document key identifiers that never conflict across a cluster. The client retrieves the latest HiLo identity metadata from the server when initializing, but after that, it can generate IDs without communicating with the server.

You can also provide custom or semantic IDs (e.g. users/1/login-history) in the application when saving documents. Since these are user-provided, they do not require any communication with the server when storing.
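
For example, a semantic ID is supplied directly by the application (a sketch; the LoginHistory class is illustrative):

using var session = store.OpenSession();

// The application picks the ID, so no server round trip is
// needed to generate it.
session.Store(new LoginHistory(), "users/1/login-history");

session.SaveChanges();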

These strategies favor availability: if a node fails and cannot be reached, IDs can still be generated without issue on all application nodes.

Server-based Strategies

Client-based document key generation strategies like HiLo or semantic IDs do not require server-side validation of the document key when storing documents which makes them resilient to network failures.

However, sometimes you may want more consistency in key generation. The GUID and server-wide sequential strategies generate a key on the server when the document is stored. These strategies are resilient to most failures, though they are less available than client-only strategies.

The most consistent strategy, the cluster-wide incremental identity, requires communication with a majority of nodes in the cluster and should therefore be avoided unless you need to guarantee that document keys increment across the cluster.
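
As a sketch, ending the document ID with a pipe character requests the next cluster-wide identity from the server:

using var session = store.OpenSession();

// The trailing "|" asks the server for the next incremental
// identity for the collection, e.g. users/1, users/2, ...
session.Store(new User { Name = "Jane" }, "users|");

session.SaveChanges();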

Balancing Reads Across a Cluster

Another application-level mitigation is having a load balancing strategy in place to ensure a single node isn’t handling the majority of read/write traffic. You can balance read traffic between nodes using the ReadBalanceBehavior convention.

No Read Balancing

ReadBalanceBehavior.None sends read requests to the first node in the database group topology, falling back to the next node in list order. This is the default strategy and may not always be optimal for your application, since the first node in the database group will almost always receive the bulk of the read/write traffic.

Fastest Node Balancing

ReadBalanceBehavior.Fastest can be used for geo-distributed clusters so that database clients use the node with the fastest connection. For example, if a RavenDB cluster were split between East US and EU regions, application servers in the EU may want to use this read balance behavior mode to talk to the closer RavenDB nodes instead of the slower East US nodes. The fastest node is determined by a speed test; writes still go to the preferred node regardless of the speed test.

Round-Robin Balancing

ReadBalanceBehavior.RoundRobin distributes read requests in a round-robin fashion across all nodes in the database group topology. This is useful if you want to spread read request load evenly across a cluster. In a distributed multi-region architecture, however, this may cause slower read requests to database nodes outside the application server’s region.
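
These conventions are configured on the document store before it is initialized; a minimal sketch (the URLs and database name are illustrative):

var store = new DocumentStore {
  Urls = new[] { "https://a.ravendb.example", "https://b.ravendb.example" },
  Database = "Northwind",
  Conventions = new DocumentConventions {
    ReadBalanceBehavior = ReadBalanceBehavior.RoundRobin
  }
};
store.Initialize();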

Replication Implications with Read Balancing

Waiting for replication may be required when using read balancing conventions, to ensure a user can read the value they just wrote on their next request. For example, if a user submits a form that updates their profile, that write may go to the user document on node A. When the user is redirected back to their profile page, the read request may go to node B, which may not have received the replicated change yet.

For user-based or multi-tenant application architectures specifically, you could also solve this problem using session context.

Balancing Writes Using Session Context

By default, RavenDB writes to the “preferred node,” which is usually the first available database in the topology. If you want finer-grained control over balancing writes, you can use the LoadBalanceBehavior.UseSessionContext API.

LoadBalanceBehavior overrides ReadBalanceBehavior and specifies that both read and write requests for a session should prefer a specific node based on a “tag” set during the session. This tag could be the current user, a tenant ID, or some other value you wish to map to a specific database node.

In a multi-tenant architecture for example, you could specify that all read and write requests for a tenant should prefer a specific database node. Replication, failover and handling of unreachable nodes is all still handled transparently. During a failure scenario, RavenDB falls back to another node when the preferred one is unavailable.
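
A sketch of session-context load balancing in a multi-tenant setup (the tenant tag and store configuration are illustrative):

var store = new DocumentStore {
  Urls = new[] { "https://a.ravendb.example" },
  Database = "Northwind",
  Conventions = new DocumentConventions {
    LoadBalanceBehavior = LoadBalanceBehavior.UseSessionContext
  }
};
store.Initialize();

using var session = store.OpenSession();

// Route this session's reads and writes by tenant; sessions that
// share the same context tag prefer the same node.
session.Advanced.SessionInfo.SetContext("tenants/42");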

Gracefully Handling Conflict Resolution

When using single-node operations, it’s possible to run into replication conflicts, such as during concurrent writes to the same document across multiple nodes, or when a document is modified before a change from another node has been replicated to it. Under most conditions, this is rare. If you’re using APIs designed for distributed operations, like compare-exchange or time series, conflicts will never occur.

If conflicts do occur, RavenDB can usually resolve them automatically, and you will not even notice. In cases where it can’t, you can write custom conflict resolvers to handle conflicts that can be resolved deterministically (such as merging nested arrays). In rare circumstances, RavenDB will alert you and fall back to manual resolution in RavenDB Studio.

Conclusion

RavenDB optimizes for availability and consistency by design, which helps mitigate application downtime in failure scenarios. It supports both ACID document transactions and eventually consistent queries. When you want to favor stronger consistency, there are advanced APIs like interlocked distributed operations and write assurance that work across a distributed cluster. You can also customize the balancing behavior of reads and writes to better suit your environment and workload. All of these features work together to help you build resilient applications that stay up when disaster strikes.
