Benchmark: Shared vs. Dedicated Redis Instances

If there's one thing Redis is known for, it is speed: it is considered the most performant of modern key-value stores. Almost every aspect of Redis' design and implementation screams maximal efficiency and is geared toward top performance.
 
There are, however, aspects that may inadvertently cause a Redis server to perform far below its potential. One such aspect is the seemingly useful ability to manage multiple databases on a single Redis instance (as opposed to the dedicated, one-database-per-instance approach). This topic has been discussed in the past by Eli and kenn on Stack Overflow, by Chris Laskey on his blog, and by matteo on the Redis group, for example.

The Theory

Much like the schemas of traditional RDBMSs, Redis' databases are managed by a single instance of the Redis server but are kept logically separate. The main benefit of having a single Redis instance manage multiple databases is reduced administrative overhead. It is common practice for a single application to use several databases in conjunction (e.g., sessions, rankings, counters...), and these are needed in every environment (development, testing, staging, production...). By having a single Redis instance manage multiple databases (up to 16 by default), the overall number of instances (and possibly servers) is lowered, and so is the effort needed to manage them.
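To make the setup concrete, here is a minimal sketch of the shared-instance pattern using the redis-py client; the host, port, key names, and database-index assignments are illustrative choices, not a prescription:

import redis

# One Redis server process, three logically separated databases:
sessions = redis.Redis(host="localhost", port=6379, db=0)  # session tokens
rankings = redis.Redis(host="localhost", port=6379, db=1)  # leaderboards
counters = redis.Redis(host="localhost", port=6379, db=2)  # page counters

sessions.setex("session:abc123", 3600, "user:42")  # expires in one hour
rankings.zincrby("leaderboard", 10, "player:7")    # bump a player's score
counters.incr("hits:homepage")                     # count a page view

# Keys are invisible across databases, so this returns None:
assert counters.get("session:abc123") is None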
 
The argument for using a single instance for multiple databases (a.k.a. a shared instance) to reduce administration makes perfect sense, so application developers often adopt this approach temporarily. It is the nature of things, however, for the temporary to become permanent. The shared-instance deployment choice is thus carried forward, sometimes even making its way into production. And this is usually when performance issues surface.
 
Shared instances will eventually perform less than adequately because of Redis' architecture. Designed to be extremely fast, every Redis instance executes commands on an (almost) single thread in order to eliminate context-switching overhead, do away with blocking, and make serialization of operations trivial. Since a shared instance uses that same single thread for all of its databases, the thread may be slowed or even blocked by operations executed against one database, impacting every other database on the instance. While all but indiscernible in development and testing, this can prove catastrophic in production.
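The effect is easy to reproduce outside a benchmark. The following sketch (an illustration we put together, not the benchmark code) uses two redis-py connections to the same instance; DEBUG SLEEP on database 1 stands in for any slow command and stalls a trivial GET against database 0:

import threading
import time
import redis

db0 = redis.Redis(db=0)
db1 = redis.Redis(db=1)
db0.set("probe", "x")

def hog():
    # Occupy the server's single thread for two seconds from database 1.
    db1.execute_command("DEBUG", "SLEEP", "2")

t = threading.Thread(target=hog)
t.start()
time.sleep(0.1)  # let the slow command reach the server first

start = time.time()
db0.get("probe")  # a "fast" GET against database 0
print(f"GET took {time.time() - start:.2f}s")  # ~1.9s, not sub-millisecond
t.join()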

The Proof

We set out to demonstrate this point with a benchmark. We set up an m2.4xlarge AWS instance in us-east-1 as our client and used it to run our home-grown benchmarking tool, memtier_benchmark (available from our GitHub account). Each run consisted of executing 10,000 SET & GET operations (in a 1:1 ratio) per connection, launching 4 threads with each thread opening 25 connections. We had the tool perform 10 iterations of each run to collect meaningful aggregate averages. For those wishing to reproduce our results, here's the syntax for running the benchmark tool:

memtier_benchmark -s <host> -p <port> -P redis -t 4 -n 10000 --ratio 1:1 -c 25 -x 10 -d 100 --key-pattern S:S

We ran the benchmark against two Redis instances, twice against each. The first was a standard Redis server running on an m1.xlarge AWS instance; the second was a freshly provisioned resource from our very own Redis Cloud service. Both servers ran Ubuntu 12.04 and Redis v2.6.14 (we will be rolling v2.6.14 out to our production service within a couple of weeks).
 
The first run against each instance was intended to benchmark its performance in dedicated mode, so only the benchmarking tool used it. In the second run we simulated a shared-instance scenario: we created another database[1] and executed resource-intensive ZINTERSTORE operations against it in an endless loop while the benchmark was running.
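For the curious, the load loop looked roughly like the sketch below; the key names and set sizes are illustrative, as the exact code we ran is not reproduced here:

import random
import redis

db1 = redis.Redis(db=1)

# One-time setup: seed two large sorted sets to intersect.
for key in ("zset:a", "zset:b"):
    db1.zadd(key, {f"member:{i}": random.random() for i in range(100000)})

while True:
    # Each call walks both sets and writes their intersection; it is
    # CPU-heavy and monopolizes the instance's single execution thread.
    db1.zinterstore("zset:dest", ["zset:a", "zset:b"])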
 
The following graphs clearly show the impact our little loop had on the benchmark's results, with the shared instance's performance metrics plummeting by an order of magnitude. Throughput, measured as the average number of requests per second processed, dropped from almost 35K requests per second in the dedicated runs to little more than 3.5K in the shared run:
[Chart: Requests per Second]
 
Latency, measured as the average response time in milliseconds, also took a hit, growing from a little less than 3 milliseconds to almost 28 milliseconds:
[Chart: Average Response Time]
 
Last but not least are the response times at the 95th percentile, which represent the tail of the latency distribution. While both dedicated instances are in the same neighborhood, the shared instance is way out of their ballpark:
[Chart: 95th-Percentile Response Time]

The Inevitable Conclusion

Redis is a marvelous piece of technology with an ample set of features and impressive capabilities. Like any tool, though, it can be misused and yield undesired results. Using shared Redis instances may save a lot of overhead in the beginning, but running them in production is generally a bad idea. Despite the added administrative workload, we recommend using only dedicated Redis instances for real-world purposes. They not only avoid inter-database blocking, but also allow different data-persistence and eviction policies to be configured for each database (see the sketch below). Furthermore, Redis is likely to move away from shared instances, so it would be a good idea to stop using them regardless.
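To illustrate that last point, here is a sketch of the per-instance tuning that dedicated deployments make possible; the ports and policy choices are illustrative, and in practice these settings would live in each instance's redis.conf rather than be applied at runtime:

import redis

cache = redis.Redis(port=6380)  # dedicated instance for ephemeral cache data
store = redis.Redis(port=6381)  # dedicated instance for durable data

# The cache instance can evict aggressively and skip persistence...
cache.config_set("maxmemory", "1gb")
cache.config_set("maxmemory-policy", "allkeys-lru")
cache.config_set("appendonly", "no")

# ...while the durable instance keeps everything and logs every write.
store.config_set("maxmemory-policy", "noeviction")
store.config_set("appendonly", "yes")
store.config_set("appendfsync", "everysec")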

[1] Actually, every resource in our Redis Cloud service is a dedicated instance. Hence, the second Redis Cloud benchmark run was executed against two independent resources (databases) and demonstrated results identical to the first benchmark run against the service.