Most websites and applications store their data in a database. Reading data from and writing data to that database can significantly affect an application’s latency. It’s important to reduce latency as much as possible: users expect fast, responsive applications, and quicker websites perform better for search engine optimization (SEO).

Writing to a database adds latency because databases generally write data to a disk instead of holding it in memory. It’s also common for databases to apply compression and encryption, which adds latency to both reads and writes. To overcome these challenges, you can use an in-memory database, which stores and retrieves data from RAM instead of a disk.

This article discusses how in-memory databases work, some popular options, and some of the trade-offs versus a standard database.

What Are In-Memory Databases?

In-memory databases use RAM instead of hard disk drives (HDD) or solid-state drives (SSD) to store data, drastically reducing the latency of reading and writing data. There are two main reasons for this. First, accessing data in memory is faster than accessing it on a disk. Second, the data structures used to store data in memory are simpler than those used for disk storage, so the CPU overhead of reading and writing data is lower.

This low latency comes at a cost: data stored in memory is lost if a server fails. Unlike disk storage, memory doesn’t retain its contents when power is lost, so there’s a trade-off between resilience and speed.

In-memory databases are an excellent option for applications that require fast or real-time data, such as leaderboards or real-time analytics. They are also helpful for caching data that you usually store in a disk-based database to reduce the number of reads and writes to the disk and minimize latency.

Reducing latency is particularly important for websites. Users who find a website responsive are more likely to continue using it. Additionally, Google and other search engines use site load speed as a ranking factor, so fast websites rank better in search results, increasing the chances of users visiting your site.

In-Memory Databases Explained

Because in-memory databases store data in RAM, they experience far lower latency than databases backed by an HDD, which relies on mechanical, moving parts to seek to the correct disk location, read the data, and transfer it over the interface between the storage device and the computer. Even compared to SSDs, RAM is still up to 30 times faster thanks to its more performant memory chips and its faster interface to the CPU. Some benchmarking tests have shown that using Redis, a popular in-memory database, as a caching layer in front of MySQL can decrease query latency by up to 25% compared to a standalone MySQL database.

Benchmarks with only MySQL and with MySQL and Redis. (Image Source: DZone)

There’s a second reason why in-memory databases are fast: their data structures can be optimized for retrieval. Relational databases often use B-trees for indexes, which allow quick searches while supporting reading and writing large blocks of data to disk. In-memory databases don’t need to write data blocks to disk, so they can choose more performant data structures, further reducing latency. They also often store and use data as-is, without any transformation or parsing at the database layer, which speeds up both reads and writes.

In-memory databases have become more popular thanks to technological improvements. The price per gigabyte (GB) of RAM has fallen significantly over the last 20 years, making it more affordable to store data in memory. Improvements in in-memory database solutions and managed cloud services have also helped alleviate some of their main disadvantages.

Additionally, in-memory databases like Redis can now snapshot data from memory to disk, allowing data to be restored if a server fails. Cloud services provide geo-replication, meaning applications can stay online by failing over in the event of an issue. These cost and reliability improvements have made in-memory databases a feasible option for modern applications and websites.
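For example, with Redis you can control how often snapshots are written to disk. The following sketch assumes a local Redis server and the redis-py client; the snapshot rule is illustrative, not a recommendation, and these rules are usually set in redis.conf rather than at runtime:

```python
# Minimal sketch: configuring and triggering Redis RDB snapshots from Python.
# Assumes a Redis server on localhost:6379 and the redis-py client installed.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Snapshot to disk if at least 100 keys changed within 60 seconds (illustrative rule).
r.config_set("save", "60 100")

# Trigger an immediate background snapshot and check when the last one finished.
r.bgsave()
print("Last snapshot completed at:", r.lastsave())
```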

Advantages and Disadvantages of In-Memory Databases

The main advantages of in-memory databases are:

  • They improve performance.
  • They’re simpler to scale because of how they store data.
  • They often improve the reliability of an application.

In-memory databases usually store data in unstructured or semi-structured form rather than in complex relational models. This makes scaling the database more straightforward, because there’s no need for the network transfer overhead of joining data that lives on multiple nodes.
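As an illustration, a record that would be a row in a relational table can live under a single key. The sketch below assumes the redis-py client and a local Redis server; the key and field names are made up:

```python
# Minimal sketch: storing a semi-structured record as a Redis hash instead of a
# relational row. Assumes a local Redis server and the redis-py client.
import redis

r = redis.Redis(decode_responses=True)

# Each record lives under its own key, so data can be spread across nodes
# without the join overhead of a relational schema.
r.hset("user:42", mapping={"name": "Ada", "plan": "pro", "country": "US"})

profile = r.hgetall("user:42")  # {'name': 'Ada', 'plan': 'pro', 'country': 'US'}
print(profile)
```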

Improving the reliability of an application may seem counterintuitive given the volatility of data stored in RAM. However, when used as a caching layer, an in-memory database reduces the burden on the primary database during request peaks. A caching layer can also reduce costs: it’s often cheaper to serve frequent requests from an in-memory database and reserve the central database for longer-term storage than to scale the traditional database itself.

The main disadvantages of in-memory databases are:

  • Increase in cost if used as the sole database
  • Limited storage size
  • Fewer security features

In-memory databases generally don’t offer security features such as encryption, because everything, including encryption keys, must reside in memory. This makes encrypting data largely ineffective: any malicious entity with access to the memory can, in theory, also access the encryption key.

In-memory databases can reduce costs when used alongside traditional databases. However, they’re often more expensive when used as the sole database, especially for large amounts of data, because memory costs more than disk storage. This cost also limits the amount of data you can keep, as storing large data sets in memory becomes expensive and often requires multiple servers.

Why Aren’t All Databases In-Memory?

The main drawback preventing in-memory databases from becoming ubiquitous is cost. Although RAM prices have fallen significantly, they’re still much higher per GB than HDDs and SSDs, which makes in-memory databases too expensive for larger applications with huge data footprints.

If the price of RAM continues to fall, there could be a time when in-memory databases are the default, and disk-based databases are only used in niche circumstances.

Use Cases for In-Memory Databases

One of the most common uses for in-memory databases is caching. You can use the in-memory database as a caching layer in conjunction with a traditional database. The in-memory database stores frequently accessed data, preventing repeated and costly lookups in the disk-based database and providing a faster user experience.
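A common way to implement this is the cache-aside pattern: check the cache first, and only fall back to the disk-based database on a miss. Here’s a minimal sketch in Python, assuming the redis-py client, a local Redis server, and a hypothetical fetch_product_from_db() helper standing in for a real database query:

```python
# Minimal sketch of the cache-aside pattern with Redis as the caching layer.
# Assumes a local Redis server and the redis-py client; fetch_product_from_db()
# is a placeholder for a slow, disk-based database query.
import json
import redis

r = redis.Redis(decode_responses=True)

def fetch_product_from_db(product_id):
    # Placeholder for the real database lookup.
    return {"id": product_id, "name": "Example product", "price": 19.99}

def get_product(product_id, ttl_seconds=300):
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                      # cache hit: no database read
    product = fetch_product_from_db(product_id)        # cache miss: query the database
    r.set(key, json.dumps(product), ex=ttl_seconds)    # store for subsequent requests
    return product

print(get_product(42))
```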

In-memory databases have also become popular for ecommerce sites, forums, and high-traffic blogs with comment sections because these are highly dynamic sites. Ecommerce sites want to personalize the user experience and show real-time product availability, while blogs and forums can have hundreds or thousands of users posting and commenting simultaneously. Such a site needs to handle high write throughput and serve the latest content and comments back to users quickly. In-memory databases reduce the latency of storing user-generated content and delivering an up-to-date, personalized experience.

In-memory databases are also great candidates for gaming leaderboards. They can update and retrieve data in real time and efficiently sort it to provide a current view of the leaderboard as the game progresses.
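Redis, for example, offers sorted sets that keep members ordered by score, which maps naturally onto a leaderboard. The sketch below assumes the redis-py client and a local Redis server; the player names and scores are illustrative:

```python
# Minimal sketch of a leaderboard built on a Redis sorted set.
# Assumes a local Redis server and the redis-py client.
import redis

r = redis.Redis(decode_responses=True)

# ZADD keeps the set ordered by score as results come in.
r.zadd("leaderboard", {"alice": 3200, "bob": 2950, "carol": 4100})
r.zincrby("leaderboard", 150, "bob")  # bob scores again

# Top three players, highest score first.
top = r.zrevrange("leaderboard", 0, 2, withscores=True)
print(top)  # e.g. [('carol', 4100.0), ('alice', 3200.0), ('bob', 3100.0)]
```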

You can also use them for real-time analytics. They enable you to stream data into the database and execute queries on the most up-to-date version of the data for real-time dashboards, risk analysis, and machine learning models.
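One way to do this with Redis is its Streams data type, where producers append events as they happen and analytics jobs read them back. A brief sketch, assuming the redis-py client, a local Redis server, and an illustrative "pageviews" stream:

```python
# Minimal sketch of streaming events into Redis Streams for real-time analytics.
# Assumes a local Redis server and the redis-py client; the event fields are made up.
import redis

r = redis.Redis(decode_responses=True)

# Producers append events as they happen.
r.xadd("pageviews", {"path": "/pricing", "user": "42"})
r.xadd("pageviews", {"path": "/blog", "user": "7"})

# A dashboard or analytics job reads the latest entries.
for entry_id, fields in r.xrange("pageviews", count=10):
    print(entry_id, fields)
```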

Examples of In-Memory Databases

There are many choices when selecting an in-memory database. Some of the most popular are Redis, Memgraph, and Hazelcast. Redis is the most widely used and is available as a managed service on most cloud platforms. Memgraph provides graph computations of streaming data, all in memory, and Hazelcast offers similar functionality to Redis but with different caching patterns.

Redis is commonly used as a caching layer between websites or applications and their databases to improve performance by preventing costly database reads. This performance boost is also available to WordPress sites through the Redis add-on from Kinsta. Alongside this add-on, Kinsta provides the Kinsta APM tool to help troubleshoot any performance issues with Redis queries.

Websites running on Kinsta use caching by default. However, sites with frequent database requests will still greatly benefit from Redis. Database latency is one of the most significant factors that slow a website down, but Redis helps reduce this burden and enables the website to scale quickly.

Summary

Database latency can significantly affect a website or application’s overall latency. Reading from and writing to hard disks increases latency. In-memory databases reduce database latency because they store data in RAM. Even when using SSDs, RAM is still faster because it uses speedier memory chips and a faster interface to the CPU. Moreover, you can optimize the data structures used by in-memory databases for faster retrieval.

In-memory databases can speed up websites and applications when used as a caching layer between the website and a traditional database. This is because memory is faster to access than disk, and this reduced overhead results in faster website load times and can contribute to improved SEO.

Redis is one of the most popular in-memory database options, and you can easily add it to WordPress sites using the Kinsta add-on. Try the Redis add-on for your Kinsta-hosted site.

Salman Ravoof

Salman Ravoof is a self-taught web developer, writer, creator, and a huge admirer of Free and Open Source Software (FOSS). Besides tech, he's excited by science, philosophy, photography, arts, cats, and food. Learn more about him on his website, and connect with Salman on Twitter.