Powering the website of the world’s largest provider of medical parts, simplifying DevOps, and serving their data in 1/3000th the time of their previous database.

PartsSource is “The leading healthcare services online marketplace and the world’s largest provider of medical parts, products, and solutions.” They’re essentially the Amazon.com of medical replacement parts, operating as a one-stop shop for customers to maintain their equipment without the hassle of dealing with multiple manufacturers.

Their catalog contains ~3.5 million items – some are their own products, but most come from around 7,000 other vendors. They sell to almost every hospital in the US, handling orders and communication between vendors and customers.

The Problem

PartsSource’s database was making their website slow.

Their product information was stored in a relational database, which they needed to query against in real time. To complicate matters, they have a flexible pricing structure with over 10,000 different pricing rules to determine what a given customer will pay on a given day. All this data was highly normalized, and to get any of it they essentially had to get all of it. Their queries required complex joins across 12 different tables, which made them extremely expensive.

Retrieving pricing information for a single product could take several seconds.

PartsSource wanted to create a next-generation website, one where they could display multiple products on one page and have it load virtually instantly for their customers (basically what we’ve all come to expect from an online store). To achieve this, they needed database performance to improve by orders of magnitude.

The Solution

PartsSource decided to copy their data from their relational database to a RavenDB document database and perform live queries against that instead. They denormalized the data from the 12 tables and put each item into a single JSON document, allowing them to retrieve all the product’s information in one round trip to the database.
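
As a rough illustration of the idea (the class, field, and method names below are our own, not PartsSource’s actual model), a denormalized product document and its single-round-trip load might look something like this in RavenDB’s .NET client:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Raven.Client.Documents;

// Illustrative document shape only; these field names are assumptions,
// not PartsSource's actual model.
public class Product
{
    public string Id { get; set; }                        // e.g. "products/12345-A"
    public string Name { get; set; }
    public string Manufacturer { get; set; }
    public List<PricingRule> PricingRules { get; set; }   // pricing data denormalized into the document
    public List<string> Conditions { get; set; }          // e.g. original vs. aftermarket
}

public class PricingRule
{
    public string CustomerTier { get; set; }
    public decimal Price { get; set; }
}

public static class ProductLookup
{
    // One document id, one round trip, instead of a 12-table join.
    public static async Task<Product> GetProductAsync(IDocumentStore store, string id)
    {
        using (var session = store.OpenAsyncSession())
        {
            return await session.LoadAsync<Product>(id);
        }
    }
}
```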

Once they’d begun moving information into RavenDB and had a taste of a much faster database, they didn’t want to stop. They realized they could precompute and cache more and more information in it, pushing performance continuously higher.

They went from pricing a single item in about 3 seconds to pricing a page of 30 items in 30ms (about 1ms per item) – a 3000x increase in speed.

That completely revolutionized our website.

Steve Gaetjens, Principal Software Architect

RavenDB now handles more than just their pricing information. Each product has its own document, and all associated product info is stored in that document, including information for both original and aftermarket versions.

Why RavenDB?

The PartsSource website is their biggest RavenDB project, but it wasn’t their first.

Their first use was a listing of their products on another company’s website. For this, they needed to take their product information and publish it to a new, dedicated database. This database needed to contain all customer records, transactions, orders, products, and so on. They wanted it to be a completely standalone system to minimize overhead.

RavenDB was originally recommended for this purpose by a contractor, for its speed and ACID transactions.

As a standalone system, it would need its own search engine. They’d used Solr for search in the past, but there were some things it couldn’t handle, such as order documents, and a relational DB would have been overkill for the size and scale of the application. Instead, they took advantage of RavenDB’s built-in full-text search capabilities. It did everything Solr could do for tasks like search and faceting, and it could handle their order documents.
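
As a sketch of what that looks like in practice, a full-text product search through RavenDB’s .NET client might be as simple as the following (reusing the illustrative Product class from the sketch above; faceting would layer on top via RavenDB’s facet API and a static index):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Raven.Client.Documents;
using Raven.Client.Documents.Linq;

public static class ProductSearch
{
    // Minimal full-text search sketch; Product and its fields are the
    // illustrative ones from the earlier sketch, not PartsSource's schema.
    public static async Task<List<Product>> SearchAsync(IDocumentStore store, string terms)
    {
        using (var session = store.OpenAsyncSession())
        {
            return await session.Query<Product>()
                .Search(p => p.Name, terms)   // full-text match on the Name field
                .Take(30)
                .ToListAsync();
        }
    }
}
```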

They also built a whole administrative portal behind the scenes – all based on the data in RavenDB.

We were just really happy as a development team.

Steve Gaetjens, Principal Software Architect

The ability to work with an arbitrary data model in code and simply save it as a document made RavenDB easier to develop with than anything they had used before.
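
For example, saving a model could look something like this (the Order type here is hypothetical, just to show the shape of the workflow):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Raven.Client.Documents;

// Hypothetical order model, purely for illustration.
public class Order
{
    public string Id { get; set; }
    public string CustomerId { get; set; }
    public List<string> ProductIds { get; set; }
    public DateTime PlacedAt { get; set; }
}

public static class OrderStorage
{
    // Whatever object graph you build in code is saved as one JSON document:
    // no schema migration, no ORM mapping step.
    public static async Task SaveAsync(IDocumentStore store, Order order)
    {
        using (var session = store.OpenAsyncSession())
        {
            await session.StoreAsync(order);     // serialized to JSON as-is
            await session.SaveChangesAsync();    // one ACID transaction
        }
    }
}
```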

The database was fast too, handling their ~40k-item inventory with ease. They tried adding other tools (e.g., Redis) to speed things up, only to discover that RavenDB was already a better solution by itself. “We’ve stopped trying to help it.”

After seeing RavenDB exceed all expectations, they started using it for their main project as well – the partssource.com website.

Steve is sometimes asked why they chose RavenDB instead of a more established document database. His answer:

There may be ones you feel better about just from a name recognition perspective, but technically, I don’t think there’s anything better than Raven.

Steve Gaetjens, Principal Software Architect

PartsSource.com in Production

PartsSource’s experience in production has been very positive. “At runtime, it’s been stellar… we have no problems with Raven ever.”

They’ve been able to manage versioning and lifecycle by being smart about how they evolve their data models.

They’ve found that scaling at runtime doesn’t affect how long it takes to load a page of documents – it always stays around 30ms. Scale only matters in terms of how much data is stored and how much needs to be indexed.
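
For illustration, loading one such page is a single request, along these lines (again using the illustrative Product class, with the 30-item page size quoted above):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Raven.Client.Documents;
using Raven.Client.Documents.Linq;

public static class ProductPages
{
    // Sketch of loading one 30-item page of products in a single query.
    public static async Task<List<Product>> GetPageAsync(IDocumentStore store, int pageNumber)
    {
        using (var session = store.OpenAsyncSession())
        {
            return await session.Query<Product>()
                .OrderBy(p => p.Name)
                .Skip(pageNumber * 30)
                .Take(30)
                .ToListAsync();
        }
    }
}
```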

They have a synchronization (or publishing) process that runs 4 times a day, going through their relational database and generating JSON documents. They compute and compare hashes to determine when information has changed and the document in RavenDB needs to be updated. Since they only write documents whose data has actually changed, they cut way down on indexing work.
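
A minimal sketch of that compare-hashes-then-write idea might look like the following; the PublishedProduct class and ContentHash field are assumptions for illustration, not PartsSource’s actual design:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Raven.Client.Documents;

// Illustrative document class; storing the hash on the document itself is an
// assumption made for this sketch.
public class PublishedProduct
{
    public string Id { get; set; }
    public string ContentHash { get; set; }   // hash of the source data this document was built from
    public string Name { get; set; }
}

public static class ProductPublisher
{
    // Skip the write (and the re-indexing it would trigger) when the source data is unchanged.
    public static async Task PublishAsync(IDocumentStore store, PublishedProduct fresh, string sourcePayload)
    {
        fresh.ContentHash = Hash(sourcePayload);

        using (var session = store.OpenAsyncSession())
        {
            var existing = await session.LoadAsync<PublishedProduct>(fresh.Id);
            if (existing == null)
            {
                // new product: create the document
                await session.StoreAsync(fresh, fresh.Id);
            }
            else if (existing.ContentHash != fresh.ContentHash)
            {
                // changed: update the tracked copy; SaveChanges persists it
                existing.Name = fresh.Name;
                existing.ContentHash = fresh.ContentHash;
            }
            else
            {
                return;   // unchanged: no write, no indexing work
            }
            await session.SaveChangesAsync();
        }
    }

    private static string Hash(string payload)
    {
        using (var sha = SHA256.Create())
            return BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(payload)));
    }
}
```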

SQL ETL

PartsSource uses RavenDB’s SQL ETL to export data to a reporting database. This feature takes a connection string and transformation script, then handles the complexity of running the ongoing task without any input or maintenance from the user.
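
For readers unfamiliar with the feature, defining a SQL ETL task through RavenDB’s .NET client looks roughly like the sketch below. The connection, table, and column names here are hypothetical rather than PartsSource’s reporting schema, and exact class names can vary between RavenDB versions:

```csharp
using Raven.Client.Documents;
using Raven.Client.Documents.Operations.ConnectionStrings;
using Raven.Client.Documents.Operations.ETL;
using Raven.Client.Documents.Operations.ETL.SQL;

public static class ReportingEtl
{
    // Rough sketch of a SQL ETL setup: a connection string plus a transformation
    // script, after which RavenDB runs the task on an ongoing basis.
    public static void Define(IDocumentStore store)
    {
        store.Maintenance.Send(new PutConnectionStringOperation<SqlConnectionString>(
            new SqlConnectionString
            {
                Name = "reporting",
                FactoryName = "System.Data.SqlClient",                       // placeholder provider
                ConnectionString = "Data Source=reporting-sql;Database=Reports;..."
            }));

        store.Maintenance.Send(new AddEtlOperation<SqlConnectionString>(
            new SqlEtlConfiguration
            {
                Name = "Products to reporting",
                ConnectionStringName = "reporting",
                SqlTables = { new SqlEtlTable { TableName = "Products", DocumentIdColumn = "Id" } },
                Transforms =
                {
                    new Transformation
                    {
                        Name = "ProductsTransform",
                        Collections = { "Products" },
                        // JavaScript transformation script applied to each document
                        Script = "loadToProducts({ Name: this.Name, Manufacturer: this.Manufacturer });"
                    }
                }
            }));
    }
}
```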

Hosting

They originally self-hosted, but they’ve since moved their database onto RavenDB Cloud – our managed cloud service. Not having to worry about database hosting, management, and backups takes a lot of work off their plate. After several years in production, they say they’re happy with the service and have never run into anything that made them question using it.

Technical Information

PartsSource uses a Product Information Management system (PIM) and exports the information into their relational database (IBM’s Db2).

Their RavenDB database contains ~6.8M documents and 50 indexes, with a total database size of around 50 GB.

Their primary document type is the Product, of which they have around 2.68M.

They’re on RavenDB Cloud, using a 3-node cluster – the standard configuration for ensuring high availability.

They’re all .NET Core, all distributed microservices, and all async down to the bottom of the stack.

Onboarding

PartsSource only encountered one significant obstacle when adopting RavenDB: they weren’t sure how to balance the computational cost of indexing against its performance benefits, and wanted to make sure they weren’t over-indexing. Our CEO Oren Eini took this on personally and helped them tune things up. From that point on, they were able to handle everything themselves.
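
For context, a static RavenDB index definition looks roughly like the sketch below (reusing the illustrative Product class from earlier); every field added to an index buys query performance at the cost of additional indexing work, which is the balance they were tuning:

```csharp
using System.Linq;
using Raven.Client.Documents.Indexes;

// Illustrative index, not one of PartsSource's actual 50 indexes.
public class Products_ByNameAndManufacturer : AbstractIndexCreationTask<Product>
{
    public Products_ByNameAndManufacturer()
    {
        // Each indexed field below is maintained whenever matching documents change.
        Map = products => from p in products
                          select new
                          {
                              p.Name,
                              p.Manufacturer
                          };
    }
}
```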

What’s Next for PartsSource

PartsSource is already planning for their next big explosion of growth. They see the future of the company as decentralized, using a data warehouse and multiple specialized databases for more specific roles.

RavenDB will be the first choice for these roles.

Raven has become the fast, read-only data store for many parts of our enterprise application, and I imagine that process will continue.

Steve Gaetjens, Principal Software Architect

We look forward to supporting PartsSource as they grow and evolve.

If you’d like to see how RavenDB NoSQL Database can support your application, click here to book a free demo.