"There are only two hard things in computer science: cache invalidation and naming things." (Phil Karlton)
Performance is a make-or-break factor in the success of your application. For example, 33% of potential buyers will abandon a shopping cart if checkout takes too long to process, even when they are just one click away from finalizing their purchase.
Amazon claims that each second of added wait time costs it $1.6 billion in sales per year. Conversely, for every second Walmart shaves off its page load time, it sees conversions rise 2%.
Site speed, especially for mobile rendering, is a primary factor in SEO scores and how well an application will rank among its peers in a search query.
One of the significant challenges in achieving optimal performance is reducing the number of times an application has to go back and forth to its database to fetch information.
What if it is the same information that it has to fetch?
Want to see the score of last night's game? That information will never change.
Want to know the current price of a dozen eggs? That information will seldom change.
Want to read a page on a friend's blog? That information will change only periodically.
Caching is the act of storing data from the database in memory to spare the application an additional trip to the database to answer questions it already knows the answer to.
If 5,001 crazed sports fans rush my site at 7 AM this morning to see who was voted MVP for 2021, only the first request needs to go all the way to the database.
The answer won’t change for the other 5,000 fans, so why make additional trips to the database to find the same information?
From Crusader Fiefdoms to Current Domains
Back in medieval times, the lord of the manor would go on hunting trips. His knights would escort him and his party days out into the woods to hunt game.
To prevent a need to go back and forth to the Manor, the hunting party would bury provisions in spots throughout the hunt. These locations were called caches.
Today’s caches also exist to keep you from going in circles.
Storing data in the local server's memory lets you answer the same query with the same answer much faster.
But what about data inconsistency? The 2021 MVP won’t change, but the price of a dozen eggs will. How do you account for that?
Developers have options. They can code a caching solution themselves or they can make sure the components they install inside their applications have built-in caching solutions.
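A hand-rolled solution can be as simple as a time-to-live (TTL) cache: serve the stored answer until it is older than some expiry window, then go back to the database once. The sketch below is a minimal illustration in Python; the names (`TTLCache`, `price_of_eggs`) are hypothetical, not part of any library.

```python
import time

class TTLCache:
    """A minimal hand-rolled cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale: force a fresh database read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

# Only the first lookup (or the first one after expiry) hits the database;
# every other call within the 60-second window is answered from memory.
cache = TTLCache(ttl_seconds=60)

def price_of_eggs(fetch_from_db):
    cached = cache.get("eggs")
    if cached is not None:
        return cached
    fresh = fetch_from_db()  # the expensive trip to the database
    cache.set("eggs", fresh)
    return fresh
```

The trade-off is baked into the TTL: a short window keeps prices fresh at the cost of more database trips, a long window does the opposite, and nothing in between can tell you the instant the data actually changes.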
Built-In Caching at the Database Layer
RavenDB is a document database with built-in automatic and aggressive caching. Anyone requesting data from an application built on it has several ways to get that data much faster.
With automatic caching, when your application processes a query it has already run, it asks the database: "I already have the results for this query. They were generated two minutes ago, two days ago, or a year ago. Tell me if anything has changed since then."
Instead of returning a lengthy result set, RavenDB sends back a simple 0, for "nothing has changed," or 1, for "the current results are different."
A return of 0 means the application can serve what it already holds in memory without pulling the full result set from the server. That adds up quickly for something like a product page that can be queried hundreds of times a day, or a sports score that can be queried thousands of times a minute.
There is also aggressive caching.
With aggressive caching, the application doesn't ask RavenDB whether the results of a query have changed. Instead, the database pings the application when something changes. As long as there is no ping, the application keeps serving the in-memory results. When a ping arrives, the application queries the database once, then serves the newly cached results until the next ping.
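That push-based flow can be sketched as a cache that serves from memory until a change notification invalidates it. This is a generic illustration of the technique, with hypothetical names; it is not the RavenDB client API.

```python
# Sketch of push-based (aggressive) cache invalidation: serve from memory
# until the database signals a change, then re-query exactly once.

class AggressiveCache:
    def __init__(self, run_query):
        self.run_query = run_query  # function that actually hits the database
        self._results = None
        self._valid = False

    def get(self):
        if not self._valid:
            self._results = self.run_query()  # one trip to the database
            self._valid = True
        return self._results  # answered from memory on every other call

    def on_change_notification(self):
        # Called when the database pings: mark the cached results stale.
        self._valid = False
```

Compared with a TTL cache, nothing expires on a timer: the cached answer stays valid indefinitely until the database itself says otherwise, so unchanged data costs zero extra trips.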
Why waste time and money coding a caching solution from scratch when it can come built into your application's database? Learn more about automatic and aggressive caching by booking a live one-on-one demo of RavenDB with a developer who will answer all your questions.
Start enjoying a built-in caching solution today!