Trapped in the 60s: How our View on Data is Holding us Back
Woodstock was the marquee event of the 1960s, just like SQL was its marquee technology. They tried to revive the magic with Woodstock 50 in 2019, but the event was cancelled before a single ticket was sold. There are simply some things you just gotta let go.
If we can move on from Creedence Clearwater Revival, then we are ready to embrace the database technology of today. To do that, we need to break free from the mentality of the 60s.
Escaping the Excel Syndrome
From the time of go-go boots and lava lamps, we have been thinking of data in rows and columns. Electronic spreadsheets were invented in the 1960s, and we have been looking at data through that lens ever since.
Call it the Excel syndrome.
Here is the irony. From the invention of paper, over 2,000 years ago, until the first recording of The Beatles, data was processed in documents. All the information you needed on something was put into a single physical file, and those files were stored according to an index for quick and simple retrieval. That index could be alphabetical order, the size of the file, the geographic location of the business, or anything else.
What made it so simple was that you could look at the document and make sense of it without having to reference any other sources.
It was only during the administration of Lyndon B. Johnson that we stopped doing this. Computers matured to the point that they could take in information digitally, albeit in its simplest form. Since computers didn't have the memory to take in much at a time, you had to create multiple tables of localized information and, for each query, piece the information back together from all the tables. The data was simple, and so were the queries. You didn't have any other choice.
As the Beatles gave way to Bon Jovi, the technology advanced but never fundamentally changed. You still processed data in rows and columns. In 1987, Microsoft Excel came out, creating generations of people programmed to see data as whatever comes in and out of a spreadsheet.
Back to the Future with a Document Database
The ideal of any digital asset is to reproduce something physical in digital form, exactly as it exists naturally.
As computing power and memory expanded at the turn of this century, technology outgrew the need for rows and columns. Instead of grinding data into small bits for the computer to digest, you could take it in whole.
The document database does exactly that.
A document database lets you use data the way we have been using it forever. When you fill out a form at a doctor's office, you fill out a piece of paper, a document. When you apply for a loan, the online form is also a digital document. The document database takes it all in as one unit and lets you process it from there.
As a result, it takes less time to process because you don’t need to put together rows and columns from different places. It costs less on the cloud and is far less complex, especially in a distributed environment.
With a relational database, you have to map the data from its original document form into bits that fit in rows and columns. Then you have to put it back together to service queries. This adds complexity to your development process and can be costly to performance at runtime.
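The split-apart-then-reassemble cost is easier to see in code. Below is a toy sketch in plain Python (the patient form, field names, and helper functions are all illustrative, not RavenDB's actual API): the same intake form modeled once as a single document and once as normalized tables that must be joined back together on every query.

```python
# A hypothetical patient-intake form, modeled two ways.

# Document model: the whole form is stored and retrieved as one unit,
# just like the paper form it mirrors.
intake_doc = {
    "id": "patients/1",
    "name": "Jane Doe",
    "allergies": ["penicillin"],
    "visits": [
        {"date": "2021-03-01", "reason": "checkup"},
    ],
}

def fetch_document(doc):
    # One lookup returns everything the form contains.
    return doc

# Relational model: the same form ground into three tables,
# keyed by patient_id.
patients = {1: {"name": "Jane Doe"}}
allergies = [(1, "penicillin")]
visits = [(1, "2021-03-01", "checkup")]

def fetch_relational(patient_id):
    # Servicing the same query means reassembling the form
    # by joining the three tables back together.
    return {
        "name": patients[patient_id]["name"],
        "allergies": [a for pid, a in allergies if pid == patient_id],
        "visits": [
            {"date": d, "reason": r}
            for pid, d, r in visits
            if pid == patient_id
        ],
    }
```

Both paths recover the same form, but the relational one pays the reassembly cost on every query, and that cost grows with each table the form is scattered across.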
We have the technology to do it the way we did for thousands of years, faster and easier than ever.
What Else Can We Leave Behind with Tie-Dye Shirts?
Smart, hard-working people all fall into the same trap, namely creating really complex data structures that require their database to do a lot of work it doesn’t have to.
The RavenDB Document Database exists to mitigate the common frustrations every developer has agonized over again and again by enabling you to model your data in a way that saves you time, resources, and complexity. Living in the age of nonrelational data modelling lets you do things you could never conceptualize with a row-and-column setup.
You can store your data in its native format and at the same time enjoy all the advantages of digital records. Features like text search, aggregations, and enormous flexibility are now part of this back-to-the-future data model.