Configuring Redis as an LRU cache

I’m trying to configure Redis to be used as an LRU cache. According to this article, we’ll need to set the following configuration options:

# Where 400mb is (just below?) the service's available memory
maxmemory 400mb

# Exact policy might be different per use case
maxmemory-policy allkeys-lfu
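
As an aside, allkeys-lfu evicts the least frequently used keys; the plain LRU behaviour from the title would be allkeys-lru. The active policy can also be checked or changed at runtime:

# See which eviction policy is currently in effect
redis-cli CONFIG GET maxmemory-policy

# Switch to straight LRU across all keys, if recency matters more than frequency
redis-cli CONFIG SET maxmemory-policy allkeys-lru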

I could fork http://github.com/render-examples/redis and change the configuration for my own needs. But I don’t want to hardcode the maxmemory value, as I want it to scale with whatever plan the service is on.

I believe Redis allows configuration to be set at runtime like this:

# Where 400mb is dynamically generated based on available memory
echo "CONFIG SET maxmemory 400mb" | redis-cli

I guess this could be part of the Dockerfile. But I’m not sure how.
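
One way to wire this up, as a rough sketch: a small entrypoint script that derives maxmemory from the container’s memory limit and hands it to redis-server, with the Dockerfile COPYing the script and pointing ENTRYPOINT at it. The cgroup paths, the ./redis.conf location, the 20% headroom and the fallback value are all assumptions here, not something Render documents:

#!/bin/sh
# docker-entrypoint.sh (name is only for illustration)
set -e

# Read the container's memory limit from the cgroup filesystem.
# cgroup v2 exposes memory.max, cgroup v1 exposes memory/memory.limit_in_bytes;
# which one exists depends on the host.
if [ -f /sys/fs/cgroup/memory.max ]; then
  LIMIT=$(cat /sys/fs/cgroup/memory.max)
else
  LIMIT=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
fi

# "max" means no limit was applied; fall back to a conservative default
# (this sketch assumes the platform normally sets a real limit).
if [ "$LIMIT" = "max" ]; then
  LIMIT=$((512 * 1024 * 1024))
fi

# Leave ~20% headroom for Redis overhead and copy-on-write during saves.
MAXMEMORY=$((LIMIT / 100 * 80))

# Command-line options override whatever is in redis.conf.
exec redis-server ./redis.conf \
  --maxmemory "$MAXMEMORY" \
  --maxmemory-policy allkeys-lfu

That way the same image should work on any plan without hand-editing maxmemory.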

Any advice on how to set this all up?

For now, I’ve come up with this:

# render.yaml
- type: pserv
  name: startupjobs-redis-cache
  env: docker
  branch: master
  dockerfilePath: ./.render/redis-cache/Dockerfile
  dockerContext: ./.render/redis-cache
  plan: starter plus
  envVars:
    - key: REDIS_MAXMEMORY
      value: 768MB
  disk:
    name: data
    mountPath: /var/lib/redis
    sizeGB: 1

# ./.render/redis-cache/Dockerfile
FROM redis:6-alpine

COPY redis.conf .

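# Shell-form ENTRYPOINT so $REDIS_MAXMEMORY is expanded when the container starts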
ENTRYPOINT echo "maxmemory $REDIS_MAXMEMORY" >> ./redis.conf && redis-server ./redis.conf

This isn’t too bad: I get to set the maxmemory value in the render.yaml file alongside the plan I use for the Redis service, and those two should be changed together anyway.

But I’m still curious to hear if there’s a way to remove some of the duplication. Perhaps Render could set some ENV vars based on the plan. RENDER_MEMORY, RENDER_CPUS, etc.

I agree with you. Render could provide environment variables with plan details, such as the CPU and memory limits. I’ve created a feature request for this; please upvote it, and we’ll notify you when the feature is available. Thank you.


I just checked in on my Redis instance and it doesn’t seem to be working properly. The hit rate is very low (under 1%), zero keys have been evicted, and RAM usage is only 200MB despite maxmemory being set to 900MB on a Starter plan (which has 1GB of RAM available). On top of that, disk usage is at 100% (4GB), which prevents FLUSHDB from working: MISCONF Errors writing to the AOF file: No space left on device.
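
(For anyone hitting something similar, these numbers can be read off the instance roughly like this, assuming shell access to the service and redis-cli available locally:)

# Evictions and memory usage as Redis reports them
redis-cli INFO stats | grep evicted_keys
redis-cli INFO memory | grep -E 'used_memory_human|maxmemory_human'

# Disk usage of the persistent disk mounted for Redis data
df -h /var/lib/redis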

Would love some documentation on setting this up properly.

Hi Marc,

Can you share the redis.conf file? It is strange that no keys are being evicted, but that would usually depend on the eviction policy and other parameters in redis.conf.

I looked at the output of df -h on your Redis cache service and saw that the /dev/sda1 filesystem is 80% full. I wouldn’t expect that to be high enough to prevent a write to the append-only file, but I’m new to Redis.

Thanks Aaron. I deleted the AOF file yesterday and flushed the Redis database so that might skew some of the data you currently see.

Here’s my Redis config: redis.conf · GitHub

It should be noted that I also dynamically update maxmemory to 900mb as part of the deployment. I’ve confirmed this works by checking the output of redis-cli info.

I’ll follow up once the cache fills up and I start running into problems again.

From what I can tell, no keys should be evicted until the maxmemory value is reached, and your Redis instance doesn’t seem to be close to that max value.

You mentioned the hit rate being very low. Based on the Cache Hit Ratio section of this article, if this value is under 0.8, that could be an indication that the disk size needs to be increased. If this happens again, can you share the hit rate value?

https://scalegrid.io/blog/6-crucial-redis-monitoring-metrics/#:~:text=If%20the%20cache%20hit%20ratio,fetching%20data%20from%20the%20disk.
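
For reference, the ratio that section describes can be computed directly from Redis’s own counters; a minimal sketch:

# hit ratio = keyspace_hits / (keyspace_hits + keyspace_misses)
redis-cli INFO stats | awk -F: '
  /keyspace_hits/   { hits = $2 }
  /keyspace_misses/ { misses = $2 }
  END { if (hits + misses > 0) print hits / (hits + misses) }
'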

I ended up flushing the Redis database and that seems to have fixed things. Maybe it was needed to get the expiration policy to work; not sure.

Either way, I’m at an 81% hit rate right now, which seems reasonable.

The AOF disk usage is growing: 3GB used of 5GB available, so it’s too soon to tell whether that issue is fixed too. Will follow up.

Memory usage is stable at 950MB, which seems good too.

The disk has filled up again, and I’m starting to run into issues like these:

Short write while writing to the AOF file: (nwritten=658, expected=21245332)

I’m getting that error a couple of times per minute, every minute.

Any idea what’s going on here? It seems like the AOF file might be getting too big?

Edit: I’ve now disabled AOF for this Redis instance.
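
If it’s useful to anyone else, AOF can be switched off on a running instance without a restart; a minimal sketch:

# Stop writing to the append-only file
redis-cli CONFIG SET appendonly no

# Optionally write the change back to the config file Redis was started with
# (here that file lives in the container image, so a redeploy would reset it)
redis-cli CONFIG REWRITE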

Hi Marc,

It appears there are some Redis concepts that need to be considered when using an AOF. I haven’t used an AOF in Redis before, but this doc addresses the disk size issue. Can you let me know if it helps get the disk size under control? I’m not sure if the “short write” error is linked to the AOF size, but getting the AOF file down to a smaller size should help.

(The relevant section is “AOF and Disk Size” in the doc linked above.)

Thanks Aaron. I did play around with those two settings (auto-aof-rewrite-percentage and auto-aof-rewrite-min-size) but couldn’t get it to work. It’s possible that the AOF file was already too large by then, preventing a new AOF file from being written. Not sure.
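
For completeness, both settings can also be changed on a live instance, and a rewrite can be forced by hand; the values below are only illustrative:

# Rewrite once the AOF has doubled in size, but never below a 64mb floor
redis-cli CONFIG SET auto-aof-rewrite-percentage 100
redis-cli CONFIG SET auto-aof-rewrite-min-size 67108864

# Trigger a background rewrite immediately
redis-cli BGREWRITEAOF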

Either way, with AOF disabled and snapshots in use instead, it seems like I’ve got everything working for now. I might refer back to the links you posted if I need to minimise data loss. (In the case of a cache database, speed is more important, though.)
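
For anyone mirroring this setup, the snapshot side can be inspected and tuned the same way; the thresholds below are only an example:

# Current snapshot thresholds, as "seconds changes" pairs
redis-cli CONFIG GET save

# Snapshot at most every 300 seconds, and only if at least 10 keys changed
redis-cli CONFIG SET save "300 10"

# Or kick off a snapshot in the background right away
redis-cli BGSAVE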

Sounds good Marc. I think the Redis docs will ultimately be more helpful for fine-tuning the Redis workflow, but let us know if you have any issues on the Render side and we’ll take a look!
