Deep Dive Into the RavenDB Client API
In this chapter, we're going to take a deep dive into how the client API works. We're going to show mostly C# code examples, but the same concepts apply to any of the RavenDB client APIs, regardless of platform, with minor changes needed to make things applicable.
There are still some concepts that we haven't gotten around to (clustering or indexing, for example), which will be covered in their own chapters. But the client API is rich and has a lot of useful functionality on its own, quite aside from the server-side behavior.
We already looked into the document store and the document session, the basic building blocks of CRUD in RavenDB. But in this chapter, we're going to look beyond the obvious and into the more advanced features.
One thing we'll not talk about in this chapter is querying. We'll cover that extensively in Chapter 9, so let's keep it there. You already know the basics of querying in RavenDB, but there's a lot more power for you to discover.
This chapter is going to contain a large number of code examples, and it will discuss the nitty-gritty details of using the client. It's divided into brief sections each dealing with a specific feature or behavior. I suggest reading this over to note the capabilities of RavenDB and coming back to it as needed in your application.
For the rest of this chapter, we'll use the classes shown in Listing 4.1 as our model, using a simplified help desk as our example.
Listing 4.1 Simplified Help Desk sample model
public class Customer
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class SupportCall
{
    public string Id { get; set; }
    public string CustomerId { get; set; }
    public DateTime Started { get; set; }
    public DateTime? Ended { get; set; }
    public string Issue { get; set; }
    public int Votes { get; set; }
    public List<string> Comments { get; set; }
}
Writing documents
Writing documents in RavenDB is easy, as we saw in "Zero to RavenDB". If we want to create a new support call, we can use the code in Listing 4.2 to do so.
Listing 4.2 Creating a new support call using the session
using (var session = store.OpenSession())
{
    var call = new SupportCall
    {
        Started = DateTime.UtcNow,
        Issue = customerIssue,
        CustomerId = customerId
    };
    session.Store(call);
    session.SaveChanges();
}
This is the basic behavior of RavenDB and how you would typically work with saving data. But there are a lot of additional things that we can do when writing data. For example, the user might have sent us some screenshots that we want to include in the support call.
Working with attachments
You can add attachments to a RavenDB document to store binary data related to that document. Let's assume the user sent us a screenshot of the problem along with the call. Listing 4.3 shows how we can store and retrieve the attachments.
Listing 4.3 Saving attachments to RavenDB as part of opening the support call
using (var session = store.OpenSession())
{
    var call = new SupportCall
    {
        Started = DateTime.UtcNow,
        Issue = customerIssue,
        CustomerId = customerId
    };
    session.Store(call);

    foreach (var file in attachedFiles)
    {
        session.Advanced.StoreAttachment(call, file.Name,
            file.OpenStream());
    }

    session.SaveChanges();
}
Note that we're using the session to store both the support call document and any attachments the user might have sent. An attachment is basically
a file name and a stream that will be sent to the server (with an optional content type). When the call to SaveChanges
is made, the RavenDB client API
will send both the new document and all of its attachments to the server in a single call, which will be treated as a transaction. Both the document and
the attachments will be saved, or both will fail.
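If you want to record the optional content type as well, StoreAttachment accepts it as an extra argument. Here's a minimal sketch; the screenshotStream variable and the "image/png" value are assumptions for this example:

using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");

    // the last argument is the optional content type; the stream and
    // the "image/png" value are assumed for this sketch
    session.Advanced.StoreAttachment(call, "screenshot.png",
        screenshotStream, "image/png");

    session.SaveChanges();
}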
That was easy enough, but now how do we retrieve the attachments? The list of attachments for a particular document is accessible via the session, as shown in Listing 4.4.
Listing 4.4 Getting the list of attachments for a support call
using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");
    var attachments = session.Advanced.GetAttachmentNames(call);

    // render the call and the attachment names
}
Calling GetAttachmentNames
is cheap; the attachments on a document are already present in the document metadata, which we loaded as part of
getting the document. There is no server-side call involved. Note that the result of GetAttachmentNames
doesn't include the content of the
attachments. To get the attachment itself and not just its name, you need to make an additional call, as shown in Listing 4.5.
Listing 4.5 Getting an attachment content
using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");
    var attachments = session.Advanced.GetAttachmentNames(call);

    using (var stream = session.Advanced.GetAttachment(call,
        attachments[0].Name))
    {
        // process the content of the attachment
    }
}
Each call to GetAttachment
will make a separate call to the server to fetch the attachment. If you have a lot of attachments, be aware that
fetching all their information can be expensive due to the number of remote calls that are involved.
Working with the document metadata
In the attachments section, we noted that attachment information is stored in the document metadata. RavenDB uses the metadata for a lot of things. Most of them you don't generally care about (etag, change vector, etc.). But the document metadata is also available to you for your own needs and use.
An actual use case for direct use of the document metadata is pretty rare. If you want to store information, you'll typically want to store it in the document itself, not throw it to the metadata sidelines. Typical use cases for storing data in the metadata are cross-cutting concerns. The preeminent one is auditing. You may want to see who edited a document, for example.
In order to demonstrate working with the metadata, we'll consider creating a support call. Handling a support call can be a complex process that has to go through several steps. In this case,
we will save the new support call document to RavenDB with a draft status in the metadata.
Typical modeling advice would be to model this
explicitly in the domain (so you'll have an IsDraft
or Status
property on your model), but for this example, we'll use the metadata. You can see
the code for setting a draft status in the metadata in Listing 4.6.
Listing 4.6 Setting a metadata flag as part of creating a new support call
using (var session = store.OpenSession())
{
    var call = new SupportCall
    {
        Started = DateTime.UtcNow,
        Issue = customerIssue,
        CustomerId = customerId
    };
    session.Store(call);

    var metadata = session.Advanced.GetMetadataFor(call);
    metadata["Status"] = "Draft";

    session.SaveChanges();
}
We can call GetMetadataFor
on any document that has been associated with the session. A document is associated with the session either by loading
it from the server or by calling Store
. After the document has been associated with the session, we can get its metadata and manipulate it.
Changes to the metadata count as changes to the document and will cause the document to be saved to the server when SaveChanges
is called.
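Reading the metadata back follows the same pattern. For example, a rough sketch of checking the draft flag we set in Listing 4.6 might look like this:

using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");
    var metadata = session.Advanced.GetMetadataFor(call);

    // TryGetValue avoids failing when the flag was never set
    if (metadata.TryGetValue("Status", out object status) &&
        "Draft".Equals(status))
    {
        // render the call as a draft
    }
}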
Change tracking and SaveChanges
The document session implements change tracking on your documents, as you can see in Listing 4.7.
Listing 4.7 Saving the changes to a document without manually tracking what changed
using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");
    call.Ended = DateTime.UtcNow;
    session.SaveChanges();
}
The session's change tracking (and identity map) means that you don't have to keep track of what changed and manually call Store
. Instead, when
you call SaveChanges
, all your changes will be sent to the server in a single request.
You have a few knobs available to tweak the process. session.Advanced.HasChanges
will let you know if calling SaveChanges
will result in a call to the
server. And session.Advanced.HasChanged(entity)
will tell you when a particular entity
has changed. You can also take it up a notch and ask RavenDB
to tell you what changed, using session.Advanced.WhatChanged()
. This will give you all the changes that happened in the session. The WhatChanged
feature can be nice if you want to highlight changes for user approval, for example, or if you just want to see what modifications were made to your model
after a certain operation.
You can also tell RavenDB not to update a particular instance by calling session.Advanced.IgnoreChangesFor(entity)
. The document will remain attached to
the session and will be part of any identity map operations, but it won't be saved to the server when SaveChanges
is called. Alternatively, you can call
session.Advanced.Evict(entity)
to make the session completely forget about a document.
These operations tend to be useful only in specific cases, but they are very powerful when utilized properly.
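To make those knobs concrete, here's a rough sketch that exercises them; it's purely illustrative, and the structure returned by WhatChanged is richer than the comments suggest:

using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");
    call.Votes++;

    if (session.Advanced.HasChanged(call))
    {
        // a dictionary keyed by document ID, describing each change
        var changes = session.Advanced.WhatChanged();
        // log or display the changes here
    }

    // suppose we decide not to persist this particular change after all
    session.Advanced.IgnoreChangesFor(call);

    // with the only modified entity ignored, there's nothing to send
    if (session.Advanced.HasChanges)
        session.SaveChanges();
}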
Optimistic concurrency
We covered optimistic concurrency in Chapter 3, but only in the most general terms. Now, let's take a look and see how we can use optimistic concurrency in practice. Listing 4.8 shows two simultaneous sessions modifying the same support call.
Listing 4.8 Concurrent modifications of a support call
using (var sessionOne = store.OpenSession())
using (var sessionTwo = store.OpenSession())
{
    var callOne = sessionOne.Load<SupportCall>("SupportCalls/238-B");
    var callTwo = sessionTwo.Load<SupportCall>("SupportCalls/238-B");

    callOne.Ended = DateTime.Today;
    callTwo.Ended = DateTime.Today.AddDays(1);

    sessionOne.SaveChanges();
    sessionTwo.SaveChanges();
}
In the case of the code in Listing 4.8, we're always going to end up with the support call end date set to tomorrow. This is because, by default, RavenDB
uses the Last Write Wins
model. You can change that by setting store.Conventions.UseOptimisticConcurrency to true, which will affect all sessions, or you can change it on a case-by-case basis by setting session.Advanced.UseOptimisticConcurrency to true on the session directly.
In either case, when this flag is set and SaveChanges
is called, we'll send the modified documents to the server alongside their change vectors that were received from the server when loading the documents. This allows the server to reject any stale writes. If the flag were set to true, the code in Listing 4.8 would result in a
ConcurrencyException
on the sessionTwo.SaveChanges()
call.
This ensures that you can't overwrite changes you didn't see, and if you set UseOptimisticConcurrency
, you need to handle this error in some manner.
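One common way to handle it is a simple retry loop: open a new session, reload the document, reapply the change and try again. A minimal sketch of that pattern (the retry count of three is an arbitrary choice):

for (int retries = 0; retries < 3; retries++)
{
    try
    {
        using (var session = store.OpenSession())
        {
            session.Advanced.UseOptimisticConcurrency = true;

            var call = session.Load<SupportCall>("SupportCalls/238-B");
            call.Ended = DateTime.UtcNow;

            session.SaveChanges();
            break; // saved successfully, no need to retry
        }
    }
    catch (ConcurrencyException)
    {
        // someone else changed the document; loop around and
        // reapply the change on top of the latest version
    }
}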
Pessimistic locking
Optimistic locking handles changes that happen behind our backs to a document we've modified. Pessimistic locking, on the other hand, prevents those changes entirely. RavenDB does not support pessimistic locking, and while you really need support from the database engine to properly implement it, we can fake it in an interesting way. The following is a recipe for approximating pessimistic locking in RavenDB. We mention it not so much because it's a good idea, but because it allows us to explore several different features and see how they work together.
Using pessimistic locking, we can lock a document for modification until we release the lock or until a certain amount of time has gone by. We can build a pessimistic
lock in RavenDB by utilizing the document metadata and optimistic concurrency. It's easier to explain with code, and you can find the Lock
and Unlock
implementations in Listing 4.9.
The locks are opt-in
In RavenDB, both the pessimistic lock explored in this section and the optimistic lock in the previous section are opt-in. That means that you have to explicitly participate in the lock. If you're using
UseOptimisticConcurrency
and another thread isn't, that thread will get the Last Write Wins behavior (and might overwrite the changes made by the thread using optimistic concurrency). In the same manner, the pessimistic lock recipe described here depends on all parties following it. If there's a thread that isn't, the lock will not be respected.
In short, when using concurrency control, make sure that you're using it across the board, or it may not hold.
Listing 4.9 Extension method to add pessimistic locking to the session
public static IDisposable Lock(
    this IDocumentSession session,
    string docToLock)
{
    var doc = session.Load<object>(docToLock);
    if (doc == null)
        throw new DocumentDoesNotExistException("The document " +
            docToLock + " does not exist and cannot be locked");

    var metadata = session.Advanced.GetMetadataFor(doc);
    if (metadata.GetBoolean("Pessimistic-Locked"))
    {
        // the document is locked and the lock is still valid
        var ticks = metadata.GetNumber("Pessimistic-Lock-Timeout");
        var lockedUntil = new DateTime(ticks);
        if (DateTime.UtcNow <= lockedUntil)
            throw new ConcurrencyException("Document " +
                docToLock + " is locked using a pessimistic lock");
    }

    metadata["Pessimistic-Locked"] = true;
    metadata["Pessimistic-Lock-Timeout"] =
        DateTime.UtcNow.AddSeconds(15).Ticks;

    // will throw if someone else took the lock in the meantime
    session.Advanced.UseOptimisticConcurrency = true;
    session.SaveChanges();

    return new DisposableAction(() =>
    {
        metadata.Remove("Pessimistic-Locked");
        metadata.Remove("Pessimistic-Lock-Timeout");
        Debug.Assert(session.Advanced.UseOptimisticConcurrency);
        session.SaveChanges();
    });
}
There's quite a bit of code in Listing 4.9, but there isn't actually a lot that gets done. We load a document and check if its metadata contains the
Pessimistic-Locked
value. If it does, we check if the lock time expired. If it isn't locked, we first update the document metadata,
then enable optimistic concurrency and finally call SaveChanges
. If no one else modified the document in the meantime, we'll successfully mark the
document as ours, and any other call to Lock
will fail.
The Lock
method returns an IDisposable
instance that handles releasing the lock. This is done by removing the metadata values and then calling
SaveChanges
again. If the lock has timed out and someone took the lock, we'll fail here with a concurrency exception as well.
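Listing 4.9 uses a small DisposableAction helper that simply runs a delegate when disposed. It isn't part of the RavenDB client API, so here's a minimal sketch of it, along with how the lock recipe might be consumed:

public class DisposableAction : IDisposable
{
    private readonly Action _action;

    public DisposableAction(Action action)
    {
        _action = action;
    }

    public void Dispose()
    {
        _action();
    }
}

// taking the lock around work that must not run concurrently
using (var session = store.OpenSession())
using (session.Lock("SupportCalls/238-B"))
{
    // we now hold the (advisory) lock for up to 15 seconds;
    // do the sensitive work here; the lock is released on Dispose
}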
Avoid your own distributed pessimistic locks
There's a reason why RavenDB does not include a pessimistic lock feature, and I strongly recommend you avoid using the recipe above. It's here to show how you'd use several different features at once to achieve a goal.
Actually handling a distributed lock is a non-trivial issue. Consider a RavenDB cluster with multiple nodes. If two lock requests go to two distinct nodes at the same time, both of them will succeed. The two nodes will quickly discover the conflicting updates and generate a conflict. But it isn't guaranteed they'll discover it inside the lock/unlock period.
Another issue is the subject of timing. If two clients have enough of a clock skew, a client might consider a lock to have expired even though it's still valid. Proper distributed locking requires a consensus protocol of some kind, and those aren't trivial to build or use. RavenDB does have a consensus protocol, but pessimistic locking is usually a bad fit for an OLTP environment, and we decided not to implement it.
A typical use for pessimistic locks is to lock a document while a user is editing it. That might sound like a good idea, but experience has shown that, in most cases, it leads to trouble. Consider, for example, version control systems. If you're reading this book, you've likely used an SCM of some kind. If you haven't, I suggest you pick up a book about source control and prioritize it over this one: learning about that is far more foundational.
Early source control systems (SourceSafe is a good example) used locks as their concurrency model, and that led to a lot of problems. Say Joe locks a file and then leaves on vacation. All of his coworkers then have to wait until he gets back before committing code to that file. This is a typical problem that arises in such cases. The same happens when you have pessimistic locks. Implementing pessimistic locks requires you to also implement forced lock release, a feature that tells you who locked the document and a whole slew of management functions around it. Typically, implementing optimistic concurrency or merging is easier and matches most users' expectations.
Offline optimistic concurrency
We looked at online optimistic concurrency in Listing 4.8, where we loaded a document into the session, modified it and then saved. In that time frame, if there was a change, we'd get a concurrency exception. But most software doesn't work like that. In a web application, you aren't going to keep the document session open for as long as the user is on the site. Instead, most likely you'll use a session-per-request model. The user will load a page with the document's content in one request and modify it in another request. There isn't a shared session in sight, so how can we implement optimistic concurrency?
All you need to do is send the change vector of the document to the user and accept it back when the user wants to save the document. Listing 4.10 shows an example using two separate sessions with concurrency handling between them.
Listing 4.10 Concurrent modifications of a support call
string changeVector;
SupportCall callOne;

using (var sessionOne = store.OpenSession())
using (var sessionTwo = store.OpenSession())
{
    callOne = sessionOne.Load<SupportCall>("SupportCalls/238-B");
    changeVector = sessionOne.Advanced.GetChangeVectorFor(callOne);

    var callTwo = sessionTwo.Load<SupportCall>("SupportCalls/238-B");
    callTwo.Ended = DateTime.Today.AddDays(1);
    sessionTwo.SaveChanges();
}

using (var sessionThree = store.OpenSession())
{
    sessionThree.Advanced.UseOptimisticConcurrency = true;

    callOne.Ended = DateTime.Today;
    sessionThree.Store(callOne, changeVector, callOne.Id);

    sessionThree.SaveChanges(); // will raise ConcurrencyException
}
The code in Listing 4.10 first loads the support call in sessionOne
. Then the code loads it again in sessionTwo
, modifies the support call and saves it to the server.
Both sessions are then closed, and we open a new session, sessionThree
. We call Store
, passing the entity instance and the change vector that we got from the first session, as
well as the document ID.
This gives the RavenDB client API enough information for us to do an optimistic concurrency check from the time we loaded callOne
in the first session. In
a web scenario, you'll typically send the change vector alongside the actual data and get it back from the client to do the check. You might also want to check out the
Changes API
, which is covered a little later in this chapter. This might help you get early change notifications when you need to implement offline
optimistic concurrency.
Patching documents and concurrent modifications
The typical workflow with RavenDB is to open a session, load a document, run some business logic and call SaveChanges
. When you follow those steps, the
session will figure out what documents have changed and send them to the server. This model is simple and easy to follow, and it's the recommended way to work.
However, there are a few scenarios in which we don't want to send the entire document back to the server. For example, if our document is very large and we want to make a small change, we can avoid that cost. Another reason to avoid the full document save is a scenario that calls for concurrent work on a document.
Let's consider the SupportCall.Votes
property. Two users may very well want to vote on the same support call at the same time. One way to handle that is to
load the support call, increment the Votes
property and call SaveChanges
. In order to handle concurrent modifications, we can utilize optimistic
concurrency and retries. But that's quite a lot of work to write, and if the document is large, there's also a lot of data going back and forth over the
network for little reason. Listing 4.11 shows how we can do much better.
Listing 4.11 Incrementing a property using the Patch API
using (var session = store.OpenSession())
{
    session.Advanced.Increment<SupportCall, int>(
        "SupportCalls/238-B", c => c.Votes, 1);
    session.SaveChanges();
}
What the code in Listing 4.11 does is generate a patch request, rather than load and save the full document. That request is stored in the session, and when
SaveChanges
is called, it will be sent to the server (alongside any other changes/operations made on the session, as usual). On the server side, we'll apply
this operation to the document. This patch request is safe to call concurrently, since there's no loss of data when executing patches.
Which is faster, patching or load/save?
The hasty answer is that patching is faster. We send a lot less data, and we need one less round trip to do so. Winning all around, right?
But the real answer is that things are a bit more complex. The patch request is actually a JavaScript function that we send. That means we need to parse and run it on the server side, potentially marshal values into the script environment and then marshal them back. Conversely, the code path for loading and saving documents in RavenDB is well trodden and has been optimized plenty. That means in many cases it might be easier to just load and modify the document directly, rather than use a patch.
Patching isn't expensive; I want to emphasize that. But at the same time, I've seen codebases where all writes had to be made using patching because of perceived performance benefits. That resulted in an extremely hard-to-understand system that was resistant to change. The general recommendation is to utilize patching only when you need to support concurrent modifications.
Note that, in most cases, concurrent modification of the same document is not the norm. A properly modeled document should have a single reason to change, but it's common for documents to carry additional data (like the Votes property or Comments) that's important to save but doesn't have any real business logic attached to it. That kind of change is fine to do using patching. If you find yourself trying to run serious business logic in patch scripts (we'll see exactly how to do this in a bit), you should move that into your own business logic.
An important consideration to take into account is that there's no guarantee as to the order in which the patches will run. But you don't need to worry about concurrency between patches on the same document, as there's no concurrent or interleaved execution of the scripts on the same document.
A slightly more complex example of the use of patching is adding a comment to a SupportCall
. Just like before, we want to support adding a comment
concurrently. But to make things a bit more interesting, we'll add a business rule: a call that has been ended cannot have additional comments added to
it. Listing 4.12 shows the obvious way to accomplish this.
Listing 4.12 Adding a comment to a support call using patch
using (var session = store.OpenSession())
{
    var call = session.Load<SupportCall>("SupportCalls/238-B");
    if (call.Ended != null)
        throw new InvalidOperationException("Cannot comment on closed call");

    session.Advanced.Patch(call, c => c.Comments,
        comments => comments.Add("This is important stuff!"));

    session.SaveChanges();
}
In Listing 4.12, you can see how we moved from using the simple Increment
call to the Patch
, which allows us to either replace a property value completely or add an item to a collection. If you look closely at the code in Listing 4.12, you'll find that there's a hidden race condition there. The business rule
is that we can't add comments to a closed call; we're able to load the call, check that its Ended
property is null and then
send the patch request to the server. However, in the meantime, another client could have closed the call, and yet we'd still add the comment.
The seriousness of this issue depends entirely on the domain and the model. It's possible that you do want to add comments during that period, or it's possible that allowing it could break important business invariants.
Patch requests are sent as part of
SaveChanges
It's probably obvious, but I wanted to spell this out explicitly. Calls to
Patch, Increment or Defer don't go to the server immediately. Instead, they're added to the list of operations the session needs to execute and will be sent to the server in a single batch (along with any modified documents) when SaveChanges is called. If you have multiple patch operations in the same session on the same document, they'll be merged into a single patch. And if there are multiple patches on different documents, they'll all be executed within the same transaction, as a single unit.
There are two ways to avoid the race condition. We can send a change vector to the server asking it to fail with a concurrency exception if the document has been
modified since we last saw it. That works, but it defeats the whole point of using patches for concurrent modification of the document. The second alternative
is to move the invariant check into the script itself. Calls to Increment
and Patch
are actually just wrappers around the Defer
call, which allows you to
add work to the session that's to be sent to the server when SaveChanges
is called.
In Listing 4.13, we're dropping down to using Defer
directly to manipulate the patch request ourselves, with no wrappers. As you can see, this is a bit involved,
but overall it's pretty straightforward.
Listing 4.13 Using a patch script to maintain call invariants
using (var session = store.OpenSession())
{
    session.Advanced.Defer(new PatchCommandData(
        id: "SupportCalls/238-B",
        changeVector: null,
        patch: new PatchRequest
        {
            Script = @"
                if (this.Ended != null)
                    throw 'Cannot add a comment to a closed call';
                this.Comments.push($comment);
            ",
            Values =
            {
                ["comment"] = "This is important stuff!!"
            }
        },
        patchIfMissing: null));

    session.SaveChanges();
}
The code in Listing 4.13 passes a PatchCommandData
, containing the relevant document ID, to Defer
. The key part is in the PatchRequest
itself. We do the check on
the document and fail if the call has already been closed. If it hasn't, we add the comment to the call. You can also see that we don't have to deal with string
concatenation here since we can pass arguments to the scripts directly. There's also the option to run a script if the document does not exist. This gives you
the option to do a "modify or create" style of operation.
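Here's a rough sketch of that "modify or create" option. The counter document and its shape are made up for this example; the point is that patchIfMissing only runs when the document doesn't exist yet:

using (var session = store.OpenSession())
{
    session.Advanced.Defer(new PatchCommandData(
        id: "callCounters/daily",     // hypothetical counter document
        changeVector: null,
        patch: new PatchRequest
        {
            Script = "this.Count += $delta;",
            Values = { ["delta"] = 1 }
        },
        patchIfMissing: new PatchRequest
        {
            // runs only if callCounters/daily doesn't exist yet
            Script = "this.Count = $delta;",
            Values = { ["delta"] = 1 }
        }));

    session.SaveChanges();
}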
Using this method, we can be sure that we'll never violate the rule about adding comments to a closed call. A cautionary word, though: this is more complex than anything we've dealt with before, and I would only recommend doing it if you really must. Running logic in this manner inside the database is a powerful technique, but it's also usually a bad idea because of its potential for abuse. Problems that come with abusing this feature include the scripts being run under a lock, which prevents the database from completing transactions quickly. If you find yourself doing something like this frequently, stop and reconsider.
Deferring commands
In the previous section, we used the Defer
method to register a PatchCommandData
on the session, to be executed when SaveChanges
is called. But Defer
is
more generic than that. It's a general mechanism for the user to register arbitrary commands that will be part of the same transaction as SaveChanges.
The RavenDB API is like an ogre. It has layers
Like the popular character Shrek, the RavenDB API is composed of layers. At the top, you have the document store and the document session. Those are built using the operations concept, which you typically won't use directly. And the operations are handled via the request executor, which allows you to generate requests directly to the server (and take advantage of RavenDB's authentication and automatic failover).
The case of
Defer
is a good example. Instead of forcing you to drop down all the way, we expose an extension point in the session so you can plug in a command of your own and piggyback on the session handling of transactions.
The available commands range from putting and deleting documents and attachments to applying patches and deleting documents by ID prefix. Aside from the patch
operation, the rest are only useful in rare cases. The most common use of Defer
beyond patch is when you need fine-grained control over the operations
that will be executed in a single transaction, to the point where you want to control the ordering of the operations.
RavenDB doesn't allow changing a document's collection; you can't just update the @collection metadata. So if you need to do this, you must first delete the old document and then create a new one with the appropriate collection. The session doesn't allow you to both delete and modify the same document. And for the purpose of this discussion, let's say that we have to do this in a single transaction, so no other client sees a point in time where the document was deleted.
To be clear, this is a strange situation, dreamt up specifically to showcase a feature that should be only used in special circumstances. This escape hatch in the API is specifically intended to prevent you from being blocked if you need something we didn't foresee, but I can't emphasize enough that this is probably a bad idea. The emergency exit is important, but you don't want to make it the front door.
Another reason to avoid using Defer
is that it sits lower in the RavenDB client API layers. Instead of dealing with high-level concepts like entities, you'll be working directly with the way RavenDB represents JSON internally, the blittable format (which we'll discuss later in this chapter). That format is meant to be high performance,
and things like developer convenience were secondary in its design.
Bulk inserting documents to RavenDB
RavenDB is fast, really fast, but it still needs to face operational realities. The fallacies of distributed computing still apply, and I/O takes a non-trivial amount of time. This means that when you want to get the best speed out of RavenDB, you need to help it achieve that.
Stacking the deck
I'm going to be talking performance numbers in this section, and I wanted to make it clear that I've intentionally chosen the worst possible situation for RavenDB and then compounded the issue by using the wrong approaches. This is so I can show the real costs in a manner that's highly visible.
I'll refer you again to the fallacies of distributed computing. I'm trying to select a scenario that would break as many of those fallacies as possible and show how RavenDB is able to handle them.
Listing 4.14 shows the absolute slowest way to write 10,000 documents into RavenDB.
Listing 4.14 Writing 10,000 documents, one at a time
var sp = Stopwatch.StartNew();

for (int i = 0; i < 10_000; i++)
{
    using (var session = store.OpenSession())
    {
        session.Store(new Customer
        {
            Name = "Customer #" + i
        });
        session.SaveChanges();
    }
}

Console.WriteLine(sp.Elapsed);
For fun, I decided to run the code in Listing 4.14 against the live test instance we have. That instance was in San Francisco, and I was testing this from Israel. The test instance was also running as a container inside an AWS t2.medium machine (two cores and 4 GB of memory, with burst-only mode). In other words, this performance test was heavily biased against RavenDB, and the results were not great. In fact, they were bad.
This is because we're running each write as an independent operation, and we have to wait for the previous operation to complete before we can start the new one. What's more, the database server handles just a single request concurrently, which means we have no way to amortize I/O costs across multiple requests. This is the absolute worst way you can write a large number of documents into RavenDB because most of the time is spent just going back and forth between the client and the server. For each request, we have to make another REST call, send a packet to the server, etc. On the other side, the server accepts a new request, processes it and commits it to disk. During the entire process, it's effectively idle, since most of the time is spent waiting for I/O. That's a big waste all around.
You can see the various times nicely when looking at the Fiddler statistics. Each request takes about 220–260 milliseconds to run. Writing the first 1,000 documents took four minutes and six seconds, and 2,000 requests took eight minutes on the dot. The full 10,000 documents would take 40 minutes or so. Granted, we're intentionally going to a remote server, but still...
What happens when we're running the writes in parallel? The code in Listing 4.15 shows how to do this.
Listing 4.15 Writing 10,000 documents, with a bit of parallelism thrown in
var sp = Stopwatch.StartNew();

Parallel.For(0, 10_000, i =>
{
    using (var session = store.OpenSession())
    {
        session.Store(new Customer
        {
            Name = "Customer #" + i
        });
        session.SaveChanges();
    }
});

Console.WriteLine(sp.Elapsed);
Using the method in Listing 4.15, I was able to write 1,000 documents in 56 seconds. We got to 2,000 in a minute and a half, 3,000 in a minute and 50 seconds, etc. The reason for the speed up is actually related to how thread pooling is handled on the client side. Since we make a lot of blocking requests, the thread pool figures out that we have plenty of blocking work and creates more threads. That means we have the chance to do more concurrent work. So as time goes by, more threads are created and we make additional concurrent requests to RavenDB.
The total time of writing 10,000 documents in this setup was two minutes and 52 seconds, roughly 14 times faster than the sequential writes. The code in Listing 4.15 is still using synchronous calls, which means the client side is spinning up threads to handle the load and we're limited by the rate of new thread creation on the client.
RavenDB also supports an async API, which is much more suitable for scale-out scenarios because we aren't holding a thread for the duration of the connection. Listing 4.16 shows how we can write all those documents in parallel, using the async API. The code is a tad complex because we want to control the number of concurrent requests we make. Spinning 10,000 concurrent requests will likely load the network and require careful attention to how they are managed, which is out of scope for this book. Instead, I limited the number of concurrent connections to 128.
Listing 4.16 Writing 10,000 documents, using async API
var sp = Stopwatch.StartNew();
var semaphore = new SemaphoreSlim(128);

async Task WriteDocument(int i)
{
    using (var session = store.OpenAsyncSession())
    {
        await session.StoreAsync(new Customer
        {
            Name = "Customer #" + i
        });
        await session.SaveChangesAsync();
    }
    semaphore.Release();
}

var tasks = new List<Task>();
for (int i = 0; i < 10_000; i++)
{
    semaphore.Wait();
    tasks.Add(WriteDocument(i));
}

Task.WaitAll(tasks.ToArray());
Console.WriteLine(sp.Elapsed);
The code in Listing 4.16 is also using a local method, which is a new C# 7.0 feature. It allows you to package a bit of behavior quite nicely, and it's very useful for small demos and internal async code. This code writes 1,000 documents in just under 10 seconds, and it completes the full 10,000 writes in under 30 seconds (29.6, on my machine). The speed difference is, again, related to the client learning our pattern of behavior and adjusting itself accordingly (creating enough buffers, threads and other resources needed, and warming up the TCP connections).
However, we really had to make an effort. We wrote explicit async code and managed it, rate-limited our behavior and jumped through several hoops to get a more reasonable level of performance. Note that we went from over 40 minutes to less than 30 seconds in the span of a few pages. Also note that we haven't actually modified what we're doing — we only changed how we're talking to the server — but it had a huge impact on performance.
You can take it as a given that RavenDB is able to process as much data as you can feed it. The typical concern in handling writes is how fast we can get the data to the server, not how fast the server can handle it.
RavenDB contains a dedicated API and behavior that makes it easier to deal with bulk loading scenarios. The bulk insert API uses a single connection to talk to the server and is able to make much better use of the network. The entire process is carefully orchestrated by both the client and the server to optimize performance. Let's look at the code in Listing 4.17 first and then discuss the details.
Listing 4.17 using bulk insert to write 100,000 documents, quickly
var sp = Stopwatch.StartNew();

using (var bulkInsert = store.BulkInsert())
{
    for (int i = 0; i < 100_000; i++)
    {
        bulkInsert.Store(new Customer
        {
            Name = "Customer #" + i
        });
    }
}

Console.WriteLine(sp.Elapsed);
The code in Listing 4.17 took two minutes and 10 seconds to run on my machine — which is interesting, because it seems slower than the async API usage sample, right? Except there's one problem. I made a typo when writing the code and wrote a hundred thousand documents instead of ten thousand. If I was writing merely 10,000 documents, it would complete in about 18 seconds. The code is fairly trivial to write, similar to our first sample in Listing 4.14, but the performance is many times faster.
To compare the costs, I ran the same code against a local machine, giving me a total time of 11 seconds to insert 100,000 documents (instead of two minutes remotely). If we wanted to compare apples with apples, then the cost of writing 10,000 documents is shown in Table 4.1.
| | Remote | Local |
|---|---|---|
| Session | 41 minutes | 20 seconds |
| Bulk Insert | 18 seconds | 6.5 seconds |

Table 4.1: Bulk insert costs locally and remotely
You can see that bulk insert is significantly faster in all cases. Being over three times faster than the session option (Listing 4.14) locally almost seems insignificant next to the more than 130 times faster it is in the remote case. The major difference, as you can imagine, is the cost of going over the network, but even on the local machine (and we're not even talking about the local network), there's a significant performance benefit for bulk insert.
Amusingly enough, using bulk insert still doesn't saturate the server. For large datasets, it's advisable to have parallel bulk insert operations going at the same time. This gives the server more work to do, and it allows us to do optimizations that increase the ingest rate of the server.
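What that might look like in practice is sketched below; the split into two fixed ranges is arbitrary and only meant to illustrate running several bulk inserts at once:

var firstHalf = Task.Run(() =>
{
    using (var bulkInsert = store.BulkInsert())
    {
        for (int i = 0; i < 50_000; i++)
            bulkInsert.Store(new Customer { Name = "Customer #" + i });
    }
});

var secondHalf = Task.Run(() =>
{
    using (var bulkInsert = store.BulkInsert())
    {
        for (int i = 50_000; i < 100_000; i++)
            bulkInsert.Store(new Customer { Name = "Customer #" + i });
    }
});

Task.WaitAll(firstHalf, secondHalf);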
The way bulk insert works is by opening a single long running request to the server and writing the raw data directly into the database. That means we don't need to go back and forth between the client and the server and can rely on a single network roundtrip to do all the work. The server, for its part, will read the data from the network and write it to disk when it's best to do so. In other words, bulk inserts are not transactional. A bulk insert is actually composed of many smaller transactions, whose size and scope are determined by the server based on its own calculations, in order to maximize performance.
When the bulk insert is completed, you can rest assured that all the data has been safely committed to disk properly. But during the process, the data is committed incrementally instead of going with a single big-bang approach.
For the most part, RavenDB performance is ruled by how many requests you can send it. The more requests, the higher the degree of parallelism and the more efficiently RavenDB can work. In our internal tests, we routinely bumped into hardware limits (the network card cannot process packets any faster, the disk I/O is saturated, etc.), not software ones.
Reading documents
We just spent a lot of time learning how we can write documents to RavenDB in all sorts of interesting ways. But for reading, how much is there really to know? We already learned how to load and query a document; we covered that in "Zero to RavenDB." We also covered Include
and how
to use it to effortlessly get referenced documents from the server. What else is there to talk about? As it turns out, quite a bit.
In most applications, reads are far more numerous than writes — often by an order of magnitude. That means RavenDB needs to be prepared to handle a lot of reads, and those applications typically have a number of ways in which they access, shape and consume the data. RavenDB needs to be able to provide an answer to all those needs.
The first feature I want to present allows you to dramatically increase your overall performance by being a little lazy.
Lazy requests
In Section 4.1.7, which dealt with bulk insert, we saw how important the role of the network is. Running the same code on the local network vs. the public internet results in speed differences of 20 seconds to 41 minutes, just because of network latencies. On the other hand, moving from many requests to a single bulk insert request is the primary reason we cut our costs by two-thirds on the local machine and over two orders of magnitude in the remote case.
I talked about this a few times already, but it's important. The latency of going to the server and making a remote call is often much higher than the cost of actually processing the request on the server. On the local machine, you'll probably not notice it much. That's normal for running in a development environment. When you go to production, your database is typically going to run on a dedicated machine, so you'll have to go over the network to get it. And that dramatically increases the cost of going to the database.
This problem is well known: it's the fallacies of distributed computing. RavenDB handles the issue in several ways. A session has a budget on the number of remote calls it can make. (This is controlled by
session.Advanced.MaxNumberOfRequestsPerSession
.) If it goes over that limit, an exception is thrown. We had this feature from the get-go, and that
led to a lot of thinking about how we can reduce the number of remote calls.
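The limit can be adjusted if needed, either globally through the store conventions or per session; the value of 50 below is just an illustrative number:

// applies to all sessions opened from this store;
// conventions must be set before store.Initialize() is called
store.Conventions.MaxNumberOfRequestsPerSession = 50;

using (var session = store.OpenSession())
{
    // or override the budget for this particular session only
    session.Advanced.MaxNumberOfRequestsPerSession = 50;
    // ...
}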
Include
is obviously one such case. Instead of going to the server multiple times, we let the server know we'll need additional information after this request and tell it to send that immediately. But we can't always do that. Let's take a look at Listing 4.18, showing two queries that we
can't optimize using Include
.
Listing 4.18 Loading a customer and the count of support calls for that customer
using (var session = store.OpenSession())
{
    var customer = session.Load<Customer>("customers/8243-C");

    var countOfCalls = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C")
        .Count();

    // show the customer and the number of calls to the user
}
A scenario like the one outlined in Listing 4.18 is incredibly common. We have many cases where we need to show the user information from multiple sources,
and that's a concern. Each of those calls turns out to be a remote call, requiring us to go over the network. There are ways to optimize this specific
scenario. We can define a MapReduce index and run a query and Include
on it. We haven't yet gone over exactly what this means,6 but this is a pretty complex solution, and it isn't relevant when you have different
types of queries. If we wanted to also load the logged-in user, for example, that wouldn't work.
RavenDB's solution for this issue is the notion of lazy requests. A lazy request isn't actually being executed when you make it. Instead, it's
stored in the session, and you get a Lazy<T>
instance back. You can make multiple lazy requests, one after another, and no network activity will occur. However,
as soon as you access the value of one of those lazy instances, all the lazy requests held up by the session will be sent to the server as a single
unit.
All those requests will be processed by the server, and all the replies will be sent as a single unit. So no matter how many lazy requests you have, there will only ever be a single network round trip to the server. You can see the code sample in Listing 4.19.
Listing 4.19 Lazily loading a customer and their count of support calls
using (var session = store.OpenSession())
{
    Lazy<Customer> lazyCustomer = session.Advanced.Lazily
        .Load<Customer>("customers/8243-C");

    Lazy<int> lazyCountOfCalls = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C")
        .CountLazily();

    // no network calls have been made so far

    // force execution of pending lazy operations explicitly
    session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();

    // if ExecuteAllPendingLazyOperations wasn't called, it
    // would be implicitly called here.
    int countOfCalls = lazyCountOfCalls.Value;
    Customer customer = lazyCustomer.Value;

    // show the customer and the number of calls to the user
}
As the code in Listing 4.19 shows, we can define multiple lazy operations. At that stage, they're pending. They're stored in the session but haven't been
sent to the server yet. We can either call ExecuteAllPendingLazyOperations
to force all pending operations to execute, or we can have that happen implicitly
by accessing the Value
property on any of the lazy instances we received.
Why do we need ExecuteAllPendingLazyOperations?
The existence of ExecuteAllPendingLazyOperations is strange. It's explicitly doing something that will happen implicitly anyway. So why is it needed? This method exists to allow users to have fine-grained control over the execution of requests. In particular, it allows you to set up a stage in your pipeline that will request all the data it's going to need. Then it will call ExecuteAllPendingLazyOperations to fetch this explicitly.
The next stage is supposed to operate on the pure in-memory data inside the session and not require any calls to the server. This is important in advanced scenarios, when you need this level of control and want to prevent the code from making unexpected remote calls in performance-critical sections of your code.
The performance gain from Lazy
is directly correlated to the number of lazy requests it's able to batch and how far away the actual server is. The more
requests that can be batched and the further away the database server, the faster this method becomes. On the local machine, it's rarely worth going to the trouble,
but once we go to production, this can get you some real benefits.
Note that, as useful as Lazy
is, it's limited to requests that you can make with the information you have on hand. If you need to make queries based on the
results of another query, you won't be able to use Lazy
for that. For most of those scenarios, you can use Include
. And of course, Lazy
and Include
can work together, so that will usually suffice.
Streaming data
When dealing with large amounts of data, the typical API we use to talk to RavenDB is not really suitable for the task. Let's consider the case of the code in Listing 4.20.
Listing 4.20 Query all support calls for a customer
using (var session = store.OpenSession())
{
    List<SupportCall> calls = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C")
        .ToList();
}
What will happen if this is a particularly troublesome customer that opened a lot of calls? If this customer had just 30 calls, it's easy to see that we'll get them all in the list. But what happens if this customer has 30,000 calls? Figure 4.1 shows how a query is processed on the server in this case.
The server will accept the query, find all matches, prepare the results to send and then send them all over the network. On the client side, we'll read the results from the network and batch them all into the list that we'll return to the application.
If there are 30 results in all, that's great, but if we have 30,000, we'll likely suffer from issues. Sending 30,000 results over the network, reading 30,000 results from the network and then populating a list of 30,000 (potentially complex) objects is going to take some time. In terms of memory usage, we'll need to hold all those results in memory, possibly for an extended period of time.
Due to the way memory management works in .NET, allocating a list with a lot of objects over a period of time (because we're reading them from the network) will likely push the list instance, and all of its contents, into a higher generation. This means that, when you're done using it, the memory will not be reclaimed without a more expensive Gen1 or even Gen2 collection.
In short, for a large number of results, the code in Listing 4.20 will take more time, consume more memory and force more expensive GC in the future. In previous versions of RavenDB, we had guards in place to prevent this scenario entirely. It's easy to start writing code like that in Listing 4.20 and over time have more and more results come in. Our logic was that, at some point, there needed to be a cutoff point where an exception is thrown before this kind of behavior poisoned your application.
As it turned out, our users really didn't like this behavior. In many cases, they would rather the application do more work (typically unnecessarily) than to have it
throw an error. This allowed them to fix a performance problem rather than a "system is down" issue. As a result of this feedback, this limitation was removed, but
we still recommend always using a Take
clause in your queries to prevent just this kind of issue.
All queries should have a
Take
clause
A query that doesn't have a Take clause can be a poison pill for your application. As the data size grows, the cost of making this query also grows until the entire thing goes down.
The RavenDB client API contains a convention setting called ThrowIfQueryPageSizeIsNotSet, which will force all queries to specify a Take clause and will error otherwise. We recommend that, during development, you set this value to true to ensure your code always generates queries that have a limit on the number of results they get.
Very large queries are bad, it seems, but that isn't actually the topic of this section. Instead, it's just the preface explaining why buffered large queries are a bad idea. RavenDB also supports the notion of streaming queries. You can see what that would look like in Figure 4.2.
Unlike the previous example, with streaming, neither client nor server need to hold the full response in memory. Instead, as soon as the server has a single result, it sends that result to the client. The client will read the result from the network, materialize the instance and hand it off to the application immediately. In this manner, the application can start processing the results of the query before the server is done sending it, and it doesn't have to wait. You can see the code for that in Listing 4.21.
Listing 4.21 Stream all support calls for a customer
using (var session = store.OpenSession())
{
    var callsQuery = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C");

    using (var stream = session.Advanced.Stream(callsQuery))
    {
        while (stream.MoveNext())
        {
            SupportCall current = stream.Current.Document;
            // do something with this instance
        }
    }
}
Instead of getting all the results in one go, the code in Listing 4.21 will pull them from the stream one at a time. This way, the server, the client API and the application can all work in parallel with one another to process the results of the query. This technique is suitable for processing a large number of results (in the millions).
The use of streaming queries requires you to keep a few things in mind:
- The results of a streaming query are not tracked by the session. Changes made to them will not be sent to the server when SaveChanges is called. This is because we expect streaming queries to have a high number of results, and we don't want to hold all the references for them in the session. If we did, we would prevent the GC from collecting them.
- Since streaming happens as a single large request, there's a limit to how long you can delay before you call MoveNext again. If you wait too long, it's possible for the server to give up on sending the rest of the request to you (since you didn't respond in time) and abort the connection. Typically, you'll be writing the results of the stream somewhere (to a file, to the network, etc.).
- If you want to modify all the results of the query, don't call session.Store on each of them as they're pulled from the stream. You'll just generate an excess of work for the session and eventually end up with a truly humongous batch to send to the server. Typically, if you want to read a lot of results and modify them, you'll use a stream and a Bulk Insert at the same time. You'll read from the stream and call Store on the bulk insert for each result, as sketched right after this list. This way, you'll have streaming for the query as you read and streaming via the bulk insert on the write.
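Here's a rough sketch of that stream-plus-bulk-insert combination; the modification applied to each call is just a placeholder:

using (var session = store.OpenSession())
using (var bulkInsert = store.BulkInsert())
{
    var callsQuery = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C");

    using (var stream = session.Advanced.Stream(callsQuery))
    {
        while (stream.MoveNext())
        {
            SupportCall call = stream.Current.Document;
            call.Votes = 0; // placeholder modification

            // write the modified document back via the bulk insert,
            // keeping the same ID so it replaces the original
            bulkInsert.Store(call, stream.Current.Id);
        }
    }
}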
When you need to select whether to use a regular (batched) query or a streaming query, consider the number of items that you expect to get from the query and what you intend to do with them. If the number is small, you'll likely want to use a query for its simple API. If you need to process many results, you should use the streaming API.
Note that the streaming API is intentionally a bit harder to use. We made sure the API exposes the streaming nature of the operation. You should strive to avoid wrapping that. Streaming should be used on the edge of your system, where you're doing something with the results and passing them directly to the outside in some manner.
Caching
An important consideration when speeding up an application is caching. In particular, we can avoid expensive operations if we cache the results from the last time we accessed them. Unfortunately, caching is hard. Phil Karlton said:
There are only two hard things in Computer Science:
cache invalidation and naming things.
Caching itself is pretty trivial to get right. The hard part is how you're going to handle cache invalidation. If you're serving stale information from the cache, the results can range from nothing much to critical, depending on what exactly you're doing.
With RavenDB, we decided early on that caching was a complex topic, so we'd better handle it properly. It's done in two parts. The server side generates an etag for all read operations. This etag is computed by the server and can be used by the client later on. The client, on the other hand, is able to cache the request from the server internally. The next time a similar read request is made, the client will send the cached etag to the server alongside the request.
When the server gets such a request with an etag, it follows a dedicated code path, optimized specifically for that, to check whether the results of the operation have changed. If they didn't, the server can return to the client immediately, letting it know that it's safe to use the cached copy. In this manner, we save computation costs on the server and network transfer costs between the server and the client.
Externally, from the API consumer point of view, there's no way to tell that caching happened. Consider the code in Listing 4.22.
Listing 4.22 Query caching in action
using (var session = store.OpenSession())
{
    var calls = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C")
        .ToList();
}

using (var session = store.OpenSession())
{
    // this query will result in the server
    // returning a "this is cached" notification
    var calls = session.Query<SupportCall>()
        .Where(c => c.CustomerId == "customers/8243-C")
        .ToList();
}
The client code doesn't need to change in any way to take advantage of this feature. This is on by default and is always there to try to speed up your requests. Caching is prevalent in RavenDB, so although the example in Listing 4.22 uses queries, loading a document will also use the cache, as will most other read operations.
The cache that's kept on the client side is the already-parsed results, so we saved not only the network round-trip time but also the parsing costs. We keep the data in unmanaged memory because it's easier to keep track of the size of the memory and avoid promoting objects into Gen2 just because they've been in the cache for a while. The cache is scoped to the document store, so all sessions from the same document store will share the cache and its benefits.
Time to skip the cache
Caching by default can be a problem with a particular set of queries — those that use the notion of the current time. Consider the case of the following query:
session.Query<SupportCall>()
    .Where(c => c.Started >= DateTime.Now.AddDays(-7) &&
                c.Ended == null);
This query asks for all the support calls that were opened in the past week and haven't yet been closed.
This kind of query is quite innocent looking, but together with the cache, it can have surprising results. Because the query uses
DateTime.Now
, on every call, it will generate a different query. That query will never match any previously cached results, so it will always have to be processed on the server side. What's worse, every instance of this query will sit in the cache, waiting to be evicted, never to be used. A much better alternative would be to use the following:
c.StartedAt >= DateTime.Today.AddDays(-7)
By using
Today
, we ensure that we can reuse the cached entry for multiple calls. Even if you need more granularity than that, just truncating the current time to a minute/hour interval can be very beneficial.
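To make that concrete, here's a small sketch (not one of the chapter's listings) of a cache-friendly version of the query above. It truncates the cutoff to the current hour, so every request made within the same hour produces exactly the same query and can reuse the cached etag.
var now = DateTime.UtcNow;
// truncate to the current hour so the generated query stays stable for an hour
var cutoff = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0, DateTimeKind.Utc)
    .AddDays(-7);

using (var session = store.OpenSession())
{
    var openCalls = session.Query<SupportCall>()
        .Where(c => c.Started >= cutoff && c.Ended == null)
        .ToList();
}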
The cache works by utilizing HTTP caching behavior. Whenever we make a GET
request, we check the cache to see if we previously made a request with the same URL and query
string parameters. If we did, we fetch the cached etag for that request and send it to the server. The server will then use that etag to check if the results have
changed. If they didn't, the server returns a 304 Not Modified
response. The client will then just use the cached response that it already has.
While most read requests are cached, there are a few that aren't. Anything that will always be different, such as stats calls, will never be cached. This is because stats and debug endpoints must return fresh information any time they're called. Attachments are also not cached because they can be very large and are typically handled differently by the application.
Aggressive caching
Caching is great, it would seem. But even when the cache is used, we still need to go back to the server. That round trip gives us the server's confirmation that the data we're about to return for the request is indeed fresh and valid. However, there are many cases where we don't care much about the freshness of the data.
On a heavily used page, showing data that might be stale for a few minutes is absolutely fine, and the performance benefits gained from not having to go to the server can be quite nice. In order to address this scenario, RavenDB supports the notion of aggressive caching. You can see an example of that in Listing 4.23.
Listing 4.23 Aggressive caching in action
using (var session = store.OpenSession())
using (session.Advanced.DocumentStore.AggressivelyCache())
{
var customer = session.Load<Customer>(
"customers/8243-C");
}
The code in Listing 4.23 uses aggressive caching. In this mode, if the request is in the cache, the RavenDB client API will never even ask the server if it's up to date. Instead, it will immediately serve the request from the cache, skipping all network traffic. This can significantly speed up operations for which you can live with a stale view of the world for a certain period.
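If you'd rather put an explicit bound on how stale the data can get, the client also lets you scope the aggressive cache to a duration. The sketch below uses AggressivelyCacheFor, which takes a TimeSpan after which the client will check with the server again; treat the exact method name as something to verify against your client version.
using (var session = store.OpenSession())
// serve from the cache without asking the server, for up to five minutes
using (store.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
{
    var customer = session.Load<Customer>("customers/8243-C");
}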
However, operating in this mode indefinitely would be pretty bad since you'd never see new results. This leads us back to the problem of cache invalidation. Aggressive caching
isn't just going to blindly cache all data at all times. Instead, the first time it encounters an instruction to aggressively cache, the client is going
to open a connection to the server and ask the server to let it know whenever something changes in the database. This is done using the Changes API
, which
is covered in the next section of this chapter.
Whenever the server lets the client know that something has changed, the client will ensure that the next request for a cached value actually hits the server, in case there's an updated result. Note that the client doesn't ask for changes to the specific items it has cached (you might be caching a lot of different queries, documents, etc.). Instead, it asks to be notified of any change on the server and then treats anything in the cache that is older than that notification as needing to be rechecked.
When this is done, the cached etag is still being sent, so if that particular response hasn't changed, the server will still respond with a 304 Not Modified
and we'll use the cached value (and update its timestamp).
The idea is that, with this behavior, you get the best of both worlds. If nothing has changed, requests are served immediately from the cache without having to go to the server. But if something might have changed, we'll check the server for the most up-to-date response. Given typical application behavior, we'll often be able to use the aggressive cache for quite some time before a write comes in and forces us to check with the server.
Why isn't aggressive caching on by default?
Aggressive caching isn't on by default because it may violate a core constraint of RavenDB: that a request will always give you the latest information. With the requirement that aggressive caching must be turned on explicitly, you're aware that there's a period of time where the response you receive might have diverged from the result on the server.
Caching behavior in a cluster
Typically, RavenDB is deployed in a cluster, and a database will reside on multiple machines. How does caching work in this context? The caching is built on the full URL of the request, and that takes into account the particular server that we'll be making the request to. That means that the cache will store a separate result for each server, even if the request is identical otherwise.
For the most part, the etags generated for HTTP requests should be identical across servers for identical data, since they're computed from the documents' change vectors. However, different servers may receive documents in a different order, which can result in differences in the actual results. That shouldn't impact the behavior of the system, but for now, we've skipped implementing cross-node caching.
Changes API
We mentioned the Changes API
in the previous section since the aggressive caching is using it. The Changes API
is a way for us to connect to the server and
ask it to let us know when a particular event has happened. Listing 4.24 shows how we can ask the server to tell us when a particular document has changed.
Listing 4.24 Getting notified when a document changes
var subscription = store.Changes()
.ForDocument("customers/8243-C")
.Subscribe(change => {
// let user know the document changed
});
// dispose to stop getting future notifications
subscription.Dispose();
Typically, we use the code in Listing 4.24 when implementing an edit page for a document. When the user starts editing the document, we register for notifications on changes to this document. If it does change, we let the user know immediately. That allows us to avoid having to wait for the save action to discover we need to redo all our work.
The Changes API
works by opening a WebSocket to the server and letting the server know exactly what kind of changes it's interested in. We can register for
a particular document, a collection, documents that start with a particular key, or even global events, such as operations or indexes.
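As a sketch of those other targets (the method names follow the C# client as I recall them; verify them against your client version), you can subscribe to an entire collection or to documents sharing an ID prefix:
var byCollection = store.Changes()
    .ForDocumentsInCollection<SupportCall>()
    .Subscribe(change =>
    {
        // change.Id and change.Type tell us which document changed and how
        Console.WriteLine($"{change.Type}: {change.Id}");
    });

var byPrefix = store.Changes()
    .ForDocumentsStartingWith("customers/")
    .Subscribe(change =>
    {
        Console.WriteLine($"Customer changed: {change.Id}");
    });

// dispose to stop getting future notifications
byCollection.Dispose();
byPrefix.Dispose();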
Changes API in a cluster
In a cluster, the
Changes API
will always connect to one node, and changes must first flow to that node before the changes will be sent to the client. The failure of the node will cause us to reconnect (possibly to a different server) and resume waiting for the changes we're interested in.
The Changes API
is meant for non-critical notifications. It's cheap to run and pretty simple, but it's possible that a failure scenario will cause you
to miss updates. If the connection has been reset, you might lose notifications that happened while you were reconnecting. It's recommended that you use
the Changes API
for enrichment purposes rather than relying on it. For example, you might use it to give the user an early warning that a document has changed, while also keeping
optimistic concurrency turned on so that any missed change will still be caught on save.
Another example of a way you might use the Changes API
is with aggressive caching. If we missed a single notification, that isn't too bad. The next notification will put us in the same state. And we'll
be fine because the user explicitly chose performance over getting the latest version, in this case. Yet another example of Changes API
use might be for monitoring. You want to know what's going on in the server, but it's fine to lose something if there's an error because you're interested in what's happening now, not the full
and complete history of actions on the server.
For critical operations — ones where you can't afford to miss even a single change — you can use Subscriptions
, which are covered in the next chapter. They're
suited for such a scenario, since they guarantee that all notifications will be properly sent and acknowledged.
Projecting data in queries
In the previous chapter, we talked a lot about modeling and how we should structure our documents to be independent, isolated and coherent. That makes for an excellent system for transaction processing (OLTP) scenarios. But there are quite a few cases where, even in a business application, we have to look at the data differently. Let's take a look at our support case example. If I'm a help desk engineer and I'm looking at the list of open support calls, I want to see all the recent support calls and the customer that opened them.
Based on what we know so far, it would be trivial to write the code in Listing 4.25.
Listing 4.25 Recent support calls and their customers
using (var session = store.OpenSession())
{
List<SupportCall> recentCalls = session.Query<SupportCall>()
.Include(c => c.CustomerId) // include customers
.Where(c => c.Ended == null) // only open calls
.OrderByDescending(c => c.Started) // get recent
.Take(25) // limit the results
.ToList();
}
The code in Listing 4.25 is doing a lot. It gets us the 25 most recently opened support calls, and it also includes the corresponding customers. We can now show this to the user quite easily, and we were able to do that in a single request to the server. Under most circumstances, this is exactly what you'll want to do.
However, in order to pull the few fields we need to show to the user in the application, we had to pull the full support calls and customers documents. If those documents are large, that can be expensive. Another way to handle this is by letting the server know exactly what we need returned and then letting it do the work on the server side.
This can be done by specifying a projection during the query. Projections allow us to control exactly what is being returned from the query. On the most basic level it allows us to decide what fields we want to get from the server, but we can actually do a lot more than that.
Let's see how we can use a query to project just the data that we are interested in. You can see the details in Listing 4.26.
Listing 4.26 Recent support calls and their customers, using projection
using (var session = store.OpenSession())
{
var recentCalls = (
from call in session.Query<SupportCall>()
where call.Ended == null // only open calls
orderby call.Started descending // get recent
// fetch customer (happens on the server side)
let customer = session.Load<Customer>(call.CustomerId)
select new
{
// project just the data that we care about
// back to the client
CustomerName = customer.Name,
call.CustomerId,
call.Started,
call.Issue,
call.Votes
}
)
.Take(25) // limit the results
.ToList();
}
There are a few things to note in Listing 4.26. First, we no longer include the Customer
in the query. We don't need that because the result of this query isn't a list
of SupportCall
documents but a list of projections that already include the CustomerName
that we want. The key part, as far as we're concerned, is the call to the
session.Load<Customer>()
method. This is translated into a server side load operation that fetches the related Customer
document and extracts just the Name
field from
it.8
The output of a query like the one in Listing 4.26 is not a document; it is a projection, and as a result, it isn't tracked by the session.
This means that changes to a projection won't be saved back to the server when SaveChanges
is called. You can call Store
on the result of a projection, but be
aware that this will create a new document, which probably isn't what you intended to happen.
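If you'd rather not use an anonymous type, you can project into a dedicated class instead. The CallSummary class below is purely illustrative and not part of the chapter's model; the result is still a projection, and it still isn't tracked by the session.
public class CallSummary
{
    public string CustomerName { get; set; }
    public string CustomerId { get; set; }
    public DateTime Started { get; set; }
    public string Issue { get; set; }
    public int Votes { get; set; }
}

// same query as Listing 4.26, projecting into the CallSummary class
using (var session = store.OpenSession())
{
    var recentCalls = (
        from call in session.Query<SupportCall>()
        where call.Ended == null
        orderby call.Started descending
        let customer = session.Load<Customer>(call.CustomerId)
        select new CallSummary
        {
            CustomerName = customer.Name,
            CustomerId = call.CustomerId,
            Started = call.Started,
            Issue = call.Issue,
            Votes = call.Votes
        })
        .Take(25)
        .ToList();
}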
Cross-cutting concerns on the client
The RavenDB client API is quite big. We already discussed the notion of layers in the design of the client API, allowing you to select at what level you want to work at any given point in time. There is quite a lot that you can configure and customize in the client behavior. The most common customization is changing the conventions of the client by providing your own logic.
Most of the decisions that the RavenDB client API makes are actually controlled by the DocumentConventions
class. This class allows you to modify all sorts
of behaviors, from how RavenDB should treat complex values in queries to selecting what property to use as the document ID in entities.
If you need fine-grained control over the serialization and deserialization of your documents, this is the place to look. The DocumentConventions
class holds important configurations,
such as the maximum number of requests to allow per session or whether we should allow queries without a Take
clause. Listing 4.27 shows an example of controlling
what collection an object will find itself in.
Listing 4.27 Customize the collection to allow polymorphism
store.Conventions.FindCollectionName = type =>
typeof(Customer).IsAssignableFrom(type)
? "customers"
: DocumentConventions.DefaultGetCollectionName(type);
In Listing 4.27, we're letting RavenDB know that Customer
or any derived type should be in the "customers" collection. That means that we can create a class
called VIPCustomer
that will have additional properties, but it will still be treated as a Customer
by anything else (indexing, queries, etc.). Such options allow
you to have absolute control over how RavenDB will work within your environment.
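As another small sketch of the conventions mentioned earlier (the property names follow the C# client, but double-check them against your version), you can tighten the session's request budget and force queries to specify a page size. Both must be set before the document store is initialized.
// fail fast if a single session makes too many server round trips;
// set these before store.Initialize() is called
store.Conventions.MaxNumberOfRequestsPerSession = 30;

// throw if a query doesn't specify Take(), instead of silently paging
store.Conventions.ThrowIfQueryPageSizeIsNotSet = true;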
Event handlers
Alongside conventions, you can use event handlers to run your own code at specific points during the execution of requests by RavenDB. These events can be subscribed to at the document store level or at the individual session level.
The following events are available:
- OnBeforeStore
- OnAfterSaveChanges
- OnBeforeDelete
- OnBeforeQueryExecuted
This allows us to register to be called whenever a particular event happens, and that in turn gives us a lot of power. Listing 4.28 shows how we can implement
auditing with the help of the OnBeforeStore
event.
Listing 4.28 Implementing auditing mechanism
store.OnBeforeStore += (sender, args) =>
{
args.DocumentMetadata["Modified-By"] =
RequestContext.Principal.Identity.GetUserId();
};
Listing 4.28 ensures that whenever the document is modified, we'll register which user has made the modification in the metadata. You can also use this event
to handle validation in a cross-cutting fashion. Throwing an error from the event will abort the SaveChanges
operation and raise the error to the caller.
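As a sketch of that validation idea (not one of the chapter's listings), the same event can reject documents that don't make sense before they ever reach the server:
store.OnBeforeStore += (sender, args) =>
{
    // reject a support call whose end time precedes its start time;
    // throwing here aborts SaveChanges and surfaces the error to the caller
    if (args.Entity is SupportCall call &&
        call.Ended != null &&
        call.Ended < call.Started)
    {
        throw new InvalidOperationException(
            $"Support call {args.DocumentId} ends before it starts");
    }
};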
Another example of using events is to ensure that we always include a particular clause in our queries, as you can see in Listing 4.29.
Listing 4.29 Never read inactive customer
store.OnBeforeQueryExecuted += (sender, args) =>
{
if (args.QueryCustomization is IDocumentQuery<Customer> query)
{
query.WhereEquals("IsActive", true);
}
};
The code in Listing 4.29 will apply to all queries operating on Customer
and will ensure that all the results returned have the IsActive
property set to true.
This can also be used in multi-tenancy situations, where you want to add the current tenant ID to all queries.
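A multi-tenant variant might look like the sketch below. The TenantId field and the currentTenantId variable are illustrative only; they aren't part of the chapter's model.
store.OnBeforeQueryExecuted += (sender, args) =>
{
    if (args.QueryCustomization is IDocumentQuery<SupportCall> query)
    {
        // scope every SupportCall query to the current tenant
        query.WhereEquals("TenantId", currentTenantId);
    }
};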
The full details of what you can do with conventions and event handlers are in RavenDB's online documentation. I encourage you to browse the documentation and consider when it might make sense to use event handlers in your application. They can be quite a time saver because they can be applied globally with ease.
Document revisions
In the previous section, we briefly mentioned auditing. In this section, we're going to take the notion of auditing and dial it up a notch, just to see what will happen. Certain classes of applications have very strict change control requirements for their data. For example, most medical, banking, payroll and insurance applications have strict "never delete since we need to be able to see all changes on the system" rules. One particular system I worked with had the requirement to keep all changes for a minimum of seven years, for example.
With RavenDB, this kind of system is much easier. That's because RavenDB has built-in support for handling revisions. Allow me to walk you through setting up such a system.
Go into the RavenDB Studio in the browser and create a new database, as we have seen in "Zero to RavenDB." Now, go into the database, and on the
left side menu, click Settings
and then Document Revisions
.
You can configure revisions globally for all collections or a single particular collection. For the purpose of this exercise, we'll define revisions
for all collections, as seen in Figure 4.3. You can enable revisions by clicking on Create default configuration
, accepting the defaults by clicking OK
and then Save
.
Now that we've enabled revisions, let's see what this means. Go ahead and create a simple customer document, as seen in Figure 4.4.
You can see that this document has a @flags
metadata property that is set to HasRevisions
. And if you look at the right-hand side, you'll see a revisions
tab that you can select to see previous revisions of this document. Play around with this document for a bit, modify it, save and see how
revisions are recorded on each change.
Document revisions in RavenDB are created whenever revisions are enabled and a document is modified. As part of saving the new document, we create a snapshot of the document (and its metadata, attachments, etc.) and store it as a revision. This allows us to go back in time and look at a previous revision of a document. In a way, this is similar to how we work with code in source control. Every change creates a new revision, and we can go back in time and compare the changes between revisions.
Revisions in a cluster
In a cluster, revisions are going to be replicated to all the nodes in the database. Conflicts cannot occur with revisions, since each revision has its own distinct signature. One of the most common usages of revisions is to store the entire document history when you have automatic conflict resolution. We'll cover this behavior in depth in Chapter 6.
As part of configuring revisions on the database, you can select how many revisions we'll retain and for what period of time. For example, you may choose to keep around 15 revisions for seven days. Under those conditions, RavenDB will delete all revisions that are both older than seven days and have more than 15 revisions after them. In other words, if you've made 50 changes to a document in the span of a week, we'll keep all of them and only delete the earliest of them when they're over seven days old.
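If you prefer to script this setup instead of using the Studio, the same retention settings can be applied from the client through a maintenance operation. The sketch below uses ConfigureRevisionsOperation and RevisionsConfiguration as I recall them from the C# client; the exact type names and the store.Maintenance entry point may differ between client versions, so verify against your own.
// these types live under Raven.Client.Documents.Operations.Revisions in recent clients
var configuration = new RevisionsConfiguration
{
    Default = new RevisionsCollectionConfiguration
    {
        // keep at least 15 revisions, and keep everything for at least 7 days
        MinimumRevisionsToKeep = 15,
        MinimumRevisionAgeToKeep = TimeSpan.FromDays(7)
    }
};
store.Maintenance.Send(new ConfigureRevisionsOperation(configuration));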
From the client side, you don't really need to consider revisions at all. This behavior happens purely on the server side and requires no involvement from the client. But you can access the revisions through the client API, as you can see in Listing 4.30.
Listing 4.30 Getting revisions of a document
List<SupportCall> revisions = session.Advanced
.Revisions.GetFor<SupportCall>("SupportCalls/238-B");
The code in Listing 4.30 will fetch the most recent revisions of the document, letting you see how it changed over time. You can also page through the revision history.
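Paging is a matter of passing a start position and page size to the same call; the parameter names below follow the C# client's GetFor signature, so check them against your version.
// fetch the next page of revisions, 25 at a time
var olderRevisions = session.Advanced.Revisions
    .GetFor<SupportCall>("SupportCalls/238-B", start: 25, pageSize: 25);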
Once a revision has been created, it cannot be changed. This is done to ensure compliance with strict regulatory requirements, since it means you can treat the data in the revision as safe. It cannot be manipulated by the client API or even by the admin.
I've found that some applications, even without regulations requiring versioning, make use of revisions just because they give the users an easy way to look at the changes on an entity over time. That was especially true in one example I can recall, where the user in question was the business expert in charge of the whole system. This feature was very valuable to her since it allowed her to see exactly what happened and who took the action. (The application used an event handler to implement auditing, as shown in Listing 4.28.) It was quite interesting to see how much value she got out of this feature.
In terms of cost, revisions obviously increase the amount of disk space required. But given today's disk sizes, that isn't usually a significant concern. Aside from disk space utilization, revisions don't actually have any impact on the system. Revisions are also quite important for ensuring that transaction boundaries are respected when replicating changes in the cluster9, handling historical subscriptions10 and ETL processes.
Revisions from an older version of your software
One thing you should note when using the revisions feature is that over time, as your software evolves, you might see revisions from previous versions of your application. As such, the revision document might be missing properties or have properties that have been removed or had their type changed.
Because revisions are immutable, it isn't possible to run migration on them, and you need to take that into account. When working with revisions, you might want to consider working with the raw document, rather than turning it into an instance of an object in your model.
How RavenDB uses JSON
The RavenDB server and the RavenDB C# client API use a dedicated binary format to represent JSON in memory. The details on this format are too low level for this book and generally shouldn't be of much interest to outside parties, but it's worth understanding a bit about how RavenDB handles JSON even at this stage. Typically, you'll work with JSON documents in their stringified form — a set of UTF8 characters with the JSON format. That is human-readable, cheap to parse and simple to work with.
But JSON parsing requires you to work in a streaming manner, which means that to pull up just a few values from a big document, you still need to parse the full document. As it turns out, once a document is inside RavenDB, there are a lot of cases where we want to just get a couple of values from it. Indexing a few fields is common, and parsing the JSON each and every time can be incredibly costly. Instead, RavenDB accepts the JSON string on write and turns it into an internal format called blittable.11
A blittable JSON document is a format that allows RavenDB random access to any piece of information in the document without having to parse the document, with a traversal cost of (amortized) O(1). Over the wire, RavenDB is sending JSON strings, but internally, it's all blittable. The C# client is also using the blittable format internally since that greatly helps with memory consumption and control. You generally won't see that in the public API, but certain low-level operations may expose you to it.
Blittable documents are immutable once created and must be disposed of once you're done with them. Since the document session will typically hold such blittable objects, the session must also be disposed of to make sure all the memory it's holding is released. An important consideration for the overall performance of RavenDB is that blittable documents always reside in native memory. This allows RavenDB fine-grained control over where and how the memory is used and reused, as well as its life cycle.
On the client side, using the blittable format means we have reduced memory consumption and reduced fragmentation. It also reduces the cost of caching significantly.
Summary
In this chapter, we've gone over a lot of the advanced features in the client API. We looked at working with attachments and understanding how we can use them to store binary data. Then we moved on to working with the document metadata in general. The document metadata is a convenient place to stash information about our documents that doesn't actually belong to the document itself. Auditing is one such example, and we saw how we can implement it using the events that the client API exposes to us.
We looked at how change tracking is managed and how we can get detailed information from the session about what exactly changed in our documents. Then we examined how we should handle concurrency in our application. We looked at optimistic concurrency in RavenDB and even implemented pessimistic locking.12 Online optimistic concurrency can be handled for us automatically by the session, or we can send the change vector value to the client and get it back on the next request, thus implementing offline optimistic concurrency.
There's another way to handle concurrency — or just to save yourself the trouble of shuffling lots of data between client and server — and that way is to use patching. The client API offers patching
at several levels. Setting a value or incrementing a number is supported by a strongly typed API, but more complex tasks can be handled using Defer
,
which also offers you the ability to write JavaScript code that will be executed on the server to mutate your documents.
We also looked at various ways to get a lot of data into RavenDB, from a sequential SaveChanges
per document, to running them in parallel, to using the
bulk insert API to efficiently push data into RavenDB. We saw that the major limiting factor was typically the cost of going to the
database and that different approaches could produce significantly different performance profiles.
After looking at the various ways we could write data to RavenDB, it was time to look at the other side: seeing how we can optimize reads
from the server. We had already gone over Include
in "Zero to RavenDB," and in this chapter we looked at lazy requests, allowing us to combine
several different requests to the server into a single round trip.
The mirror image to bulk insert is the streaming feature, suitable for handling an immense amount of data. Streaming allows us to start processing the request from the server immediately, without having to wait for the complete results. This allows us to parallelize the work between client and server and gives us the option to immediately start sending results to the user.
Following the reading and writing of documents, we looked into caching them. The client API has sophisticated caching behaviors, and we delved into exactly how that works, as well as how it reduces the time we need to provide answers to the user. Caching allows us to tell the server we already have the result of a previous call to the same URL. And it allows the server to let us know if that hasn't been modified. If that's the case, the server doesn't need to send any results on the wire, and the client can use the cached (parsed and processed) data immediately. RavenDB also supports the notion of aggressive caching, which allows us to skip going to the server entirely. This is done by asking the server to notify the client when things change, and only then go to the server to fetch those changes.
That option is also exposed in the client API, using the Changes API. The Changes API gives you the ability to ask the server to tell you when a particular document, collection or set of documents with a given prefix has changed. This lets users know that someone has changed the document they're working on, and allows you to implement features such as "this data has changed" notifications.
Next, we looked at how we can project the results of queries and document loads on the server side using projections. A projection allows you to modify the shape of the data that RavenDB returns to the client. This can be done by simply returning a subset of the data from the documents — or even by loading additional documents and merging the data from associated documents into a single result.
We looked at cross-cutting concerns on the client and how we can apply behavior throughout the client once. We can modify the client behavior
by controlling how RavenDB decides what class belongs in what collection, as well as serialization and deserialization. Event handlers allow you to attach behavior to certain actions in the client API, giving you the option to customize certain behaviors. We looked at adding auditing to our application in about three lines of code and even saw how we can limit all the queries on the client to only return active customers as a cross-cutting behavior.
Following cross-cutting behaviors, we moved to looking at the revisions feature in RavenDB and how to use it from the client side. The revisions feature asks the server to create a new revision of a document upon each change. Those revisions are immutable and create an audit trail and a change log for the documents in question. While this is primarily a server-side feature, we looked at how we can expose the revisions to the user through the client API and allow the users to view previous revisions of a document.
Our final endeavor was to cover at a high level the native serialization format that RavenDB uses, the blittable format. This format is meant to be extremely efficient in representing JSON. It's typically stored in native memory to reduce managed memory consumption and garbage collection costs down the road. You don't typically need to concern yourself with this, except to remember that it's important to dispose of sessions when you're done with them.
This chapter is a long one, but it still doesn't cover the full functionality. There are plenty of useful features, but they tend to be useful in specific, narrow circumstances or only make sense to talk about in a larger scope. We've barely discussed queries so far, but we'll do so extensively when we get to Chapter 9 and when we discuss indexing.
The next chapter is a good example of this. We'll dedicate that chapter to handling data subscriptions and all the myriad ways they make data processing tasks easier for you.
1. We'll discuss in detail how RavenDB clusters work in the next chapter.
2. And this should be a very rare thing indeed.
3. The Fiddler web proxy is a great debugging tool in general and is quite useful to peek into the communication between RavenDB's server and clients.
4. TCP slow start can be a killer on benchmarks.
5. In fact, it's likely that a database cluster will be used on a set of machines.
6. We'll cover this technique when we discuss MapReduce indexes in Chapter 11.
7. The behavior on the JVM is the same. Other client environments have different policies.
8. This is possible because we are using a synchronous session and queries. If we were using async queries, we'd need to use RavenQuery.Load<Customer> in the LINQ query.
9. See "Transaction atomicity and replication" in Chapter 6.
10. See "Versioned Subscriptions" in the next chapter.
11. I don't like the name, but we couldn't come up with anything better.
12. Although you probably shouldn't use that in real production code.