Configuring Redis for Rails Cache (Ephemeral) and Resque (Persistence)

When we first built the search analytics app that became Keylime Toolbox, we knew we wanted to use Resque for background jobs. Because Resque is based on Redis, we decided to use Redis for the Rails cache as well. But as things grew we realized pretty quickly that these two uses call for very different configurations.

Cached data is ephemeral. We keep it in memory so it’s easily accessible, but if the Redis instance fails it’s OK if we lose some of the data (we can always rebuild it).

Resque worker jobs, on the other hand, are not ephemeral. When we queue a job we expect it to be run, and if the Redis instance crashes we want to be sure we can pick up where we left off.

While we continued with Redis for both, we spun up two distinct Redis instances, each with its own configuration.
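
As a rough sketch of what that split looks like on the application side, each piece points at its own instance. The hostnames and the redis-store cache adapter here are assumptions for illustration, not necessarily what we run:


# config/environments/production.rb (inside the configure block)
# The Rails cache talks to the ephemeral Redis instance.
config.cache_store = :redis_store, "redis://cache-redis.internal:6379/0"

# config/initializers/resque.rb
# Resque talks to the persisted Redis instance.
Resque.redis = Redis.new(host: "resque-redis.internal", port: 6379)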

Ephemeral Redis for Cache

For the Rails cache we wanted to configure Redis for the fastest performance and response, accepting that we could lose data. Here’s our configuration:


daemonize yes
pidfile /var/run/redis/redis.pid
port 6379
timeout 300
loglevel warning
logfile /var/log/redis/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/
slave-serve-stale-data yes
slave-read-only yes
slave-priority 100
maxmemory-policy noeviction
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

Some things to note here:

1. We kept the default save settings as they seemed to work fine.


save 900 1
save 300 10
save 60 10000

You could reduce the frequency if you want to spend less time writing to disk, or remove the save lines altogether if your cache is truly ephemeral (for example, if you are only doing view/fragment caching). In our case we cache materialized views as Ruby objects that take a long time to build, so we wanted to ensure we had a fairly recent snapshot. Honestly, we could probably drop this to saving once an hour and it would work just as well for us.
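
If we did loosen it, a sketch of an hourly snapshot (the numbers are illustrative, not what we run) would be:


# snapshot if at least one key changed in the last hour
save 3600 1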

2. We turned off stop-writes-on-bgsave-error because we want to get data cached in memory first and foremost. We also set up monitoring (we use monit) to ensure that background writes are still happening and alert us if they fail.
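
We won’t reproduce our monit configuration here, but as a sketch, a small check along these lines (the hostname and script name are hypothetical) could be run by monit or cron to alert when background saves start failing:


#!/usr/bin/env ruby
# check_redis_bgsave.rb -- exits non-zero if the last background save failed.
require "redis"

redis = Redis.new(host: "cache-redis.internal", port: 6379) # hypothetical host
status = redis.info("persistence")["rdb_last_bgsave_status"]
exit(status == "ok" ? 0 : 1)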

3. We keep all keys forever because we are caching materialized views, some of which may be stored and kept (unchanged) for years.


maxmemory-policy noeviction

If you don’t need that, you can probably manage memory much better for an ephemeral store by letting Redis evict old keys:


maxmemory-policy allkeys-lru

Note that Redis’s LRU algorithm is an approximation (it samples a handful of keys rather than tracking true recency), but it is likely better than random eviction.
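
Keep in mind that allkeys-lru only kicks in once Redis reaches a memory ceiling, so you would pair it with a maxmemory setting sized for your instance (the 2gb here is illustrative):


maxmemory 2gb
maxmemory-policy allkeys-lru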

4. We do not use the Append Only File (AOF) method because we don’t really mind if the server crashes and we lose whatever was cached since the last snapshot.


appendonly no

5. You could set slowlog-log-slower-than to -1 to disable slow-query logging if you want to squeeze out a little extra performance.
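
If you want to try that, it’s a one-line change:


slowlog-log-slower-than -1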

6. You could set activerehashing no if you don’t want the occasional 2ms delay and have extra memory to spare.
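
Likewise, that is a single line:


activerehashing no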

Persisted Redis for Resque Jobs

For Resque jobs we want to make sure that items added to the queue are very unlikely to be lost, so we adjusted that server to favor durability.


daemonize yes
pidfile /var/run/redis/redis.pid
port 6379
timeout 300
loglevel notice
logfile /var/log/redis/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/
slave-serve-stale-data yes
slave-read-only yes
slave-priority 100
appendonly yes
appendfilename appendonly.aof
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

Things to note here:

1. We kept the default save settings because they worked for us.


save 900 1
save 300 10
save 60 10000

You could tune these to snapshot more often if the extra disk I/O doesn’t affect the latency of your Redis instance.
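
For example, snapshotting more aggressively might look something like this (the thresholds are illustrative, not what we run):


# save <seconds> <changes>
save 300 1
save 60 100
save 10 10000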

2. We left stop-writes-on-bgsave-error set to yes because we definitely want errors raised when enqueuing jobs if we can’t be sure they’ll be persisted.
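
In practice that means an enqueue can fail loudly: when Redis can’t persist, it refuses writes with a MISCONF error, which redis-rb surfaces as Redis::CommandError. A rough sketch of handling that (the job class is hypothetical):


begin
  Resque.enqueue(ReportBuildJob, report_id) # ReportBuildJob is illustrative
rescue Redis::CommandError => e
  # Redis replies with "MISCONF ... unable to persist on disk" when bgsave fails.
  Rails.logger.error("Failed to enqueue job: #{e.message}")
  raise
end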

3. We enabled Append Only File (AOF) persistence so that Redis logs every write as it happens; with appendfsync everysec, at most about a second of writes is at risk in a crash, which gives us the best recovery.


appendonly yes
appendfilename appendonly.aof
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes

The specific change we made was appendonly yes, because the rest of the defaults work for us. You could tweak the auto-aof-rewrite-* parameters if you find the file is being rewritten too often.
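
For example, rewriting less often might look something like this (the values are illustrative):


# rewrite only after the AOF has grown by 200% since the last rewrite
# and is at least 256mb
auto-aof-rewrite-percentage 200
auto-aof-rewrite-min-size 256mb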
