Redis configuration for production

Written by Christophe Limpalair on 08/29/2015

This post is extracted from one of my episodes in the Redis and Laravel series.

Do you know what happens when Redis runs out of memory to use? If your server crashes, what happens to your data? Both of these questions, and more, depend on how you configure Redis.

This post will walk you through a few of what I think are the most important settings to understand for a small to medium web application. On top of explaining these configs, I give my two cents on what you should change, if anything at all.

Shape Redis to your specific needs.

Saving Data
If you've read or watched any of my content on Redis before, you already know it has built-in persistence. This means that we can save all of our data from memory to disk and then back that up to the cloud in case of a stormy day.

Open up your Redis configuration.
vim /etc/redis/redis.conf

I recommend you read the whole thing, but for this section you just need to scroll down to the SNAPSHOTTING section.

The default config is:
save 900 1
save 300 10
save 60 10000

The format is save [seconds] [changes] and the default settings mean:
1) If 15 minutes (900 seconds) go by and at least 1 change was made, create a snapshot.
2) If 5 minutes (300 seconds) go by and at least 10 changes were made, create a snapshot.
3) If 1 minute (60 seconds) goes by and at least 10,000 changes were made, create a snapshot.

Don't want to save your data at all? Just comment out the lines.
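You can also tune these save points to your workload. As an illustration (the numbers here are an example, not a recommendation), you could snapshot only when the dataset changes heavily:

```conf
# Snapshot only if at least 100 keys changed in the last 5 minutes
save 300 100
```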

The next setting, stop-writes-on-bgsave-error, signals Redis to stop accepting writes if any of the above saves fail. This is to let you know of such a failure instead of silently losing snapshots.

Now scroll down to the dbfilename setting. As the name implies, this is the file name for your backups.

A few lines below, the dir setting specifies the directory where this file will be saved.

Now we know how often Redis backs up our data and where it saves it. There are a few different ways to perform backups, depending on your provider, but if you are on AWS you can simply push it up to S3.
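As a sketch, a nightly cron job could copy the dump file up to S3 with the AWS CLI. The paths and bucket name below are assumptions — match them to your dir and dbfilename settings:

```shell
# Hypothetical crontab entry: every day at 3am, push the latest RDB dump to S3
# (note the escaped % — cron treats a bare % as a newline)
0 3 * * * /usr/local/bin/aws s3 cp /var/lib/redis/dump.rdb s3://my-redis-backups/dump-$(date +\%F).rdb
```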

Backing up very important data
By default, Redis uses a backup system called RDB which asynchronously dumps the data set to disk. This means that data may not be immediately saved after one of our save points (which we just looked at) is triggered. If your instance crashes for whatever reason, you could potentially lose minutes of data. In most applications that's not a big deal, but in some applications this is a deal breaker.

Luckily, the Append Only File (AOF) comes to the rescue. AOF logs every write operation, and the log is replayed at server startup to rebuild the dataset. This is guaranteed to be more complete than RDB (the default).

The disadvantage to AOF (bet you were waiting for that one) is that AOF files are bigger than RDB files for the same dataset, and AOF can also be slower depending on how often it syncs to disk.

So which one to use? Why not both? The performance is not going to change much at all and you risk losing less data. When you turn AOF on with appendonly yes, scroll down to the appendfsync options and take a look at those.

The default is everysec (fsync once per second) and probably what you want unless you really care about data loss. Switch to always if that's the case, but be warned that this option is definitely slower.
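Putting the two together, enabling AOF alongside RDB comes down to two directives in redis.conf:

```conf
appendonly yes        # turn on the append only file
appendfsync everysec  # fsync once per second (the default; 'always' is safer but slower)
```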

Setting a Max Memory
Because of the way Redis manages memory, it is usually a good idea to set a maxmemory. Redis is coded in C and uses a memory allocation implementation that won't always free up memory to the OS as soon as a key is removed.

Without going into too much detail, you should be aware that the maxmemory should be based on your expected peak memory usage. For example if your Redis workload only uses 1GB most of the time, but you've seen it peak to 2GB under really heavy load, you need to provision at least 2GB.
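In redis.conf, that provisioning decision is a single directive. The 2gb figure below just matches the example above, not a universal recommendation:

```conf
maxmemory 2gb
```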

If you don't set a max limit, Redis could continue to store more and more data without evicting data you don't need anymore, taking down your machine because of lack of memory.

This is not the only reason you'd want to set a max memory limit. Let's say you want to store volatile data (with no persistence). Instead of removing keys with your code or setting expiration times, you can lower the max memory limit and add an eviction policy. Once Redis maxes out its allocated memory, it will get rid of keys in the order that you desire:

1) noeviction: returns an error when the memory limit is reached and the client tries to execute commands that use memory. This may be a good one to use if your application relies on Redis writes to properly function. It might stop your app from functioning, but at least you won't have unexpected behavior that could cause more problems.
2) allkeys-lru: evict keys by trying to remove the least recently used (LRU) keys first
3) volatile-lru: evict keys by trying to remove the LRU keys first, but only for keys with an expiration set
4) allkeys-random: evict random keys
5) volatile-random: evict random keys that have an expiration set
6) volatile-ttl: expire keys that have an expiration set, and evict the ones with the shortest time to live first
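For the volatile-data scenario described above, a cache-style setup could look like this (the memory value is illustrative):

```conf
maxmemory 256mb
maxmemory-policy allkeys-lru  # evict least recently used keys once the limit is hit
```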

Which one you choose totally depends on your application and what you're willing to deal with. What's perhaps more important is knowing what behavior to expect in case you do run into this situation.