ACID Transactions in NoSQL? RavenDB vs MongoDB


How did two NoSQL databases, RavenDB & MongoDB, become ACID at the cluster level?

You may have heard that NoSQL databases and ACID transactions don’t mix. Surely they can’t do it as well as traditional relational databases can, right? Well, it turns out people only got that impression from the way non-relational databases happened to develop – there was never any law saying a database has to be relational to be ACID. Look at two leading document databases: MongoDB introduced full ACID capabilities only in the past few years, while RavenDB has been fully ACID for over a decade. So if the stereotype was ever true, it definitely isn’t anymore.

More recently, both MongoDB and RavenDB have gone further and developed ACID capabilities not just at the level of a single server, but at the distributed cluster level. Let’s explore the paths these two databases have taken to get here, and what these powerful features allow you and your application to do. But first – a refresher on what the term ACID means, and what makes achieving it in a distributed database so difficult.

What are ACID transactions?

ACID transactions are a very important feature that most relational databases have had for decades. They enable you to combine a series of different database operations into one transaction that provides the following four guarantees: Atomicity – that the operations will all either succeed or fail as a single unit; Consistency – that they won’t violate certain constraints you defined for the data as a whole; Isolation – that each operation is hidden from view until the whole transaction is complete; and Durability – that all changes to the data are safely persisted.

Why are these guarantees important? Imagine a bank with a database that can’t perform ACID transactions:

  • If I want to transfer $100 from my bank account to yours, two separate operations are required: a withdrawal of $100 at my end, and a deposit of $100 at your end. These operations must happen Atomically! We don’t want any possibility of one of these operations succeeding while the other fails. We don’t mind if the whole transaction fails as long as it never ‘half succeeds’. Money should not appear out of thin air, nor go poof.
  • Should I be able to perform a ‘transfer’ in which $100 is withdrawn from my account, but $200 is deposited in yours? Obviously not – but it’s not the withdrawal or deposit on their own that violate any rules, it’s only this combination of operations that is illegal. In database terms, we say that this transaction makes the database enter a state that is not Consistent. What counts as “consistent” is completely up to your own business logic – in this case it’s to ensure conservation of money. But the enforcement of consistency is an important part of the transaction logic.
  • When we execute a transfer, we expect it to look like both the withdrawal and deposit happened at the “same time” – at least from our perspective. There is always a slight lag between different database operations, even if it’s just a few milliseconds, but during that lag the transaction must be Isolated – invisible to all outside observers. Isolation is also true in reverse: the transaction doesn’t see any changes that happened to the data since it started, it sees the database as though it’s frozen in time. Isolation implies that transactions always look like they happen in a particular order, even if the truth is that they overlap in time. If there are two transfers, A and B, and the database executes their steps like a sandwich: withdrawal A, withdrawal B, deposit B, deposit A – it would be meaningless to ask which one happened “first”. Nevertheless, we want all observers to see one instantaneous transfer followed by another instantaneous transfer, and everyone should agree on which one happened first.
  • When we see that a transfer has succeeded, we expect it to already be Durable. This means it’s not just stored in volatile memory – it is safely stored on a persistent medium (such as a hard drive). A transaction can’t be considered ‘complete’ if it could be rolled back by a sudden power outage.
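The atomicity requirement above can be sketched in a few lines of plain Python – a toy in-memory “bank”, not a real database, with a naive snapshot standing in for a proper undo log:

```python
class InsufficientFunds(Exception):
    pass

def transfer(accounts, src, dst, amount):
    """Withdraw from src and deposit to dst as a single all-or-nothing unit."""
    snapshot = dict(accounts)  # naive "undo log": copy the state before touching it
    try:
        if accounts[src] < amount:
            raise InsufficientFunds(src)
        accounts[src] -= amount
        accounts[dst] += amount  # a failure here would leave money half-moved...
    except Exception:
        accounts.clear()
        accounts.update(snapshot)  # ...so roll back to the snapshot on any failure
        raise

accounts = {"alice": 500, "bob": 100}
transfer(accounts, "alice", "bob", 100)       # succeeds: alice 400, bob 200
try:
    transfer(accounts, "alice", "bob", 1000)  # fails: balances stay unchanged
except InsufficientFunds:
    pass
```

Either both balances change or neither does, and the total amount of money is conserved – which is exactly what a real database has to guarantee even if the process crashes mid-transfer.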

This classic bank example is just one of many scenarios where compromising on ACIDity is a bad idea. But ACIDity can be difficult and expensive to implement. It is especially hard to guarantee across a distributed cluster.

So what’s a distributed cluster?

A cluster is formed by distributing the same database to more than one server, a.k.a. scaling the database ‘horizontally’. This lets you increase the throughput of reads and writes, as well as ensure that the database stays available even if a few of the servers go down. But what happens if there’s a network partition, meaning the different servers in the cluster can’t communicate with each other and synchronize their data?

Partitions in distributed clusters present a tradeoff commonly known as the CAP theorem: how willing are you to let the data on different servers get out of sync? You have to choose between Consistency and Availability any time you encounter a Partition.

No, that was not deja vu – “consistency” has two different definitions relevant to our discussion, because the people who coin terminology are mean.

Total Consistency means you just…wait. Your database stops serving reads and writes until the partition is healed. Total Availability means you continue serving reads and writes as normal. This is tempting, but it means your data can be changed in conflicting ways, so you need a strategy in place for sorting out the conflicts once the partition heals. In practice, there is a spectrum of options between consistency and availability: one common compromise is to keep reads available while requiring consistency for writes.

Conflicts sound scary, and they often are, but there are common scenarios where dealing with them is pretty straightforward. Suppose your data is Twitter tweets: you probably want readers to be able to like tweets regardless of any network problems. When the data is synchronized, you might end up with conflicting numbers of likes on the same tweets, but that is resolved by simply adding together the likes each side collected.
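That merge strategy can be sketched in a couple of lines of Python – a toy conflict resolver, not any database’s actual replication code. The trick is to sum the increments each replica recorded during the partition, rather than picking one replica’s total:

```python
def merge_likes(base, replica_a, replica_b):
    """Resolve a conflicting counter by summing the increments each replica
    recorded since the shared base value, instead of discarding one side."""
    return base + (replica_a - base) + (replica_b - base)

# Both replicas started at 10 likes; during a partition one side collected
# 3 new likes and the other collected 5. Merging keeps all 8.
merged = merge_likes(10, 13, 15)
```

This works because counter increments are commutative – the same idea generalizes to CRDT-style counters, where no increment is ever lost no matter which order the replicas sync in.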

The CAP tradeoff is the challenge a database needs to adapt to if it wants to scale horizontally. Dealing with that while also guaranteeing ACID is an even bigger challenge.

The 21st century ACID race

In the early 2000s, relational databases began struggling to keep up with the demands of the information age. One of many reasons was the problem we just discussed: how do you scale horizontally while maintaining ACIDity? This temporary weakness opened the door for creative new technologies to carve out niches for themselves. These new databases sacrificed some of the rigid norms of traditional databases in order to prioritize other capabilities, allowing them to outperform traditional databases in some scenarios. Many of them abandoned the relational model that had been so dominant for decades – we call these databases the “NoSQL movement”. Some, like the document database MongoDB, also abandoned the norm of ACIDity.

ACIDity is not always necessary, of course – there are many scenarios where it’s sufficient to promise a weaker set of guarantees. These are known by the amusing backronym “BASE”: Basically Available, Soft State, Eventual Consistency. While these are alternatives to ACID, the words “available” and “consistency” refer to the same properties as in the CAP theorem, which tells you these guarantees apply specifically to distributed databases. And so MongoDB gave up ACID to pursue advantages in flexibility, horizontal scalability, and the ability to handle “big data”. Evidently, it paid off.

Fine, but what happens when you really do need ACIDity? How can you use a database that doesn’t at least give you the option? Well, ACID guarantees don’t have to be implemented in the database layer. Developers always have the option to implement them in the application layer for specific cases. This can be a hassle, and many applications are probably easier to develop with an ACID database, but for others the effort was worth the advantages MongoDB offered.

Now, I’m not saying MongoDB version 1.0 had no tools at all for implementing transactions. Operations have always been at least atomic at the level of an individual document, and today all single-document modifications are fully ACID. This might not help with data located in different documents, but by designing your documents with transactions in mind you can get a whole lot done. A document can be anything from { username: "foo", email: "" } to an entire YouTube video, plus all of its comments, plus all of its metadata. You have a lot of flexibility as a developer, which is a big advantage over relational databases.
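To make that concrete, here’s a schematic sketch in plain Python – these are just dicts standing in for documents, not actual MongoDB schemas or API calls. Embedding related data in one document lets a single atomic write cover what would span several tables in a relational model:

```python
# One self-contained document: because single-document updates are atomic,
# appending a comment and bumping the counter can never diverge.
video = {
    "_id": "vid-42",
    "title": "My cat",
    "stats": {"views": 1000, "comment_count": 1},
    "comments": [{"user": "foo", "text": "nice"}],
}

def add_comment(doc, user, text):
    """Both changes land in the same document, so one atomic write covers them."""
    doc["comments"].append({"user": user, "text": text})
    doc["stats"]["comment_count"] += 1

add_comment(video, "bar", "great video")
```

In a relational schema the comment and the counter would likely live in separate tables, forcing a multi-row transaction for the same guarantee.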

This trick can’t work for every application, though. Thinking back to our bank example, you couldn’t keep all of the accounts in one document just so you could have ACID transfers between accounts. Even if you manage to stretch your document definitions so that every transaction you need touches data in a single document, it’s probably only temporary. It’s difficult to account for all your future needs and use cases when you’re first designing your application. It’s not a matter of whether you’ll need to implement your own ACID guarantees, but when. So you can understand why, in 2018, MongoDB 4.0 finally introduced multi-document transactions to great fanfare. This was no small feat – it required MongoDB to switch to a new storage engine, WiredTiger, and make many other changes throughout the 3.x versions to be able to implement this feature.

And yet, even today MongoDB warns at every opportunity that “in most cases, multi-document transaction incurs a greater performance cost”. They also implore you to remember that they “estimate 80%-90% of applications don’t need multi-document transactions at all”. There’s nothing wrong with that, but it won’t make you feel any better if your application is in the other 10%-20%. Plus, this estimate heavily depends on how you define an “application”. It would probably be more accurate to say that 80%-90% of scenarios don’t require multi-document transactions – and the more complex an application is, and the longer it’s been developed and updated, the more scenarios it has to contend with. Seen this way, it’s more accurate to say that nearly all applications will need multi-document transactions eventually.

This story could have gone very differently. First developed at around the same time as MongoDB, the document database RavenDB chose not to give up ACIDity at all. It has been capable of multi-document transactions since version 1.0. Because it was optimized with this in mind, there wasn’t even a need for a non-ACID option – any combination of database operations can be combined into an ACID transaction. As a user you never needed to implement ACID guarantees yourself, and you were free to design documents around your own requirements. And even though RavenDB had different design priorities than MongoDB, it proved equally well suited to clustering.

One example of an optimization RavenDB was designed with from the start is the way an application sends the commands for a transaction to the server. MongoDB requires a minimum of four calls from the application to the server to perform a transaction: one call to start the transaction, one call for each of the operations it performs (at least two, otherwise there’s no need for a transaction), and a final call to commit. This is how the API for transactions usually works in traditional databases, but all these round trips over the network take a lot of time and bandwidth. RavenDB was designed to make one and only one round trip to the server per transaction. RavenDB’s session object tracks a series of commands, collects them as a batch, and sends them all to the server in a single round trip when session.saveChanges() is called.
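The batching idea can be sketched as a toy Python class – the names here are invented for illustration and this is not RavenDB’s actual client code, though the real session exposes a very similar save-changes call:

```python
class Session:
    """Toy client-side session, loosely modeled on RavenDB's pattern:
    operations are only queued locally until save_changes() sends the
    whole batch to the server in one round trip."""

    def __init__(self, send_batch):
        self._send_batch = send_batch  # stand-in for one network call
        self._pending = []

    def store(self, doc_id, doc):
        self._pending.append(("PUT", doc_id, doc))

    def delete(self, doc_id):
        self._pending.append(("DELETE", doc_id, None))

    def save_changes(self):
        batch, self._pending = self._pending, []
        self._send_batch(batch)  # the single round trip for the whole transaction

round_trips = []  # record each "network call" instead of actually sending one
session = Session(send_batch=round_trips.append)
session.store("accounts/1", {"balance": 400})
session.store("accounts/2", {"balance": 200})
session.save_changes()
```

Two operations, one network call – and since the server receives the whole batch at once, it can apply it as a single ACID transaction without holding a conversation open across multiple round trips.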

So MongoDB was able to catch up to RavenDB in offering multi-document transactions (though at a higher cost), but they couldn’t afford to stop there. ACID was back in fashion for NoSQL databases, and this time the stakes were higher: it was finally time for distributed transactions. Up till then, transactions were usually committed on one server, then replicated to the rest of the cluster in an “eventually consistent” manner. This leaves the possibility of different nodes in the cluster receiving conflicting transactions at the same time, which will need to be resolved. By involving the whole cluster in the transaction and only committing once there is a consensus, these conflicts can be avoided.

MongoDB and RavenDB both use the Raft consensus algorithm to coordinate their clusters. Briefly, a Raft cluster consists of an odd number of nodes, one of which is the leader. Distributed transactions are passed to the leader, who notifies the rest of the cluster. If a majority of the cluster acknowledges this first message, the leader commits the transaction and sends the rest of the cluster the final commit message. The odd number of nodes means that if a cluster of, say, 5 nodes is partitioned into two smaller networks of 2 and 3 nodes, only one of those networks can achieve majority consensus. Two conflicting transactions can’t both be committed, even during a partition. Raft guarantees that eventually either the whole cluster will commit the transaction, or the whole cluster will roll it back.
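The majority rule that makes this safe is simple enough to state in one line of Python – a toy illustration of the quorum arithmetic, not Raft itself:

```python
def can_commit(partition_size, cluster_size):
    """A partition can commit only if it holds a strict majority
    of the full cluster's nodes."""
    return partition_size > cluster_size // 2

# A 5-node cluster split into groups of 2 and 3:
assert not can_commit(2, 5)  # minority side: must stall until the partition heals
assert can_commit(3, 5)      # majority side: can keep committing transactions

# With an even cluster of 4, a clean 2/2 split leaves NOBODY able to
# commit - one reason Raft clusters use an odd number of nodes.
assert not can_commit(2, 4)
```

Since at most one side of any partition can hold a strict majority, two conflicting commits are arithmetically impossible.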

For both document databases, distributed transactions are much slower and more expensive than single-node transactions. But the important thing to note is that, just as in the single-node case, MongoDB needs a round of consensus for every single step – one for each of the start and commit calls, and another for each operation in between. RavenDB, as in the single-node case, commits a transaction with just one round of Raft consensus.


So what have we learned? We learned that ACID is a very important, but very difficult, set of guarantees to implement – for database developers and application developers alike. We learned that by relaxing standards like ACID, it was possible for a newcomer document database like MongoDB to challenge the relational kings. This created the common misconception that ACIDity and non-relational data models are somehow incompatible. Was it worth it? Inevitably, as the NoSQL movement matured and the relational databases proved they could adapt and maintain their dominance, the demand for ACID capabilities grew. This finally forced MongoDB to backtrack on its original design priorities. Meanwhile, the less well known RavenDB stood in defiance of these trends, and has proven for over a decade that you can have the best of both worlds.

Woah, already finished? 🤯

If you found the article interesting, don’t miss a chance to try our database solution – totally for free!
