Builds spiking memory / tripping autoscaling

I was doing some rather rapid-fire change/deploy cycles and noticed in the events panel that the app autoscaled up to 3 instances (it's set to do this via blueprint settings), even though there was no actual usage of the app aside from running builds.

Just confirming whether I should expect to see behavior like this.

Hi Jason,

I'd be happy to take a look for you. Could you send me the render.yaml file you were using when you observed this behavior?

Regards,

Matt

Sure. I've masked sensitive info like the actual app name, repo, and org name:

services:

  # Prod
  - type: web
    name: App Name Prod Web
    env: docker
    repo: https://github.com/org-name/app-repo.git
    region: oregon
    plan: starter plus
    branch: main
    dockerfilePath: Dockerfile.prod
    scaling:
      minInstances: 1
      maxInstances: 3
      targetMemoryPercent: 90 # optional if targetCPUPercent is set
      targetCPUPercent: 90 # optional if targetMemoryPercent is set
    healthCheckPath: /
    autoDeploy: false
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: App Name Prod DB
          property: connectionString
      - key: RAILS_ENV
        value: production
      # Placeholders set in dashboard
      - key: RAILS_MASTER_KEY
        sync: false
      - key: AWS_SECRET_ACCESS_KEY
        sync: false
      - key: AWS_ACCESS_KEY_ID
        sync: false
      # Use Datadog env group
      - fromGroup: DD Agent Service

  # Dev
  - type: web
    name: App Name Dev Web
    env: docker
    repo: https://github.com/org-name/app-repo.git
    region: oregon
    plan: starter
    branch: dev
    dockerfilePath: Dockerfile.prod
    healthCheckPath: /
    autoDeploy: false
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: App Name Dev DB
          property: connectionString
      - key: RAILS_ENV
        value: remote_development
      # Placeholders set in dashboard
      - key: RAILS_MASTER_KEY
        sync: false
      - key: AWS_SECRET_ACCESS_KEY
        sync: false
      - key: AWS_ACCESS_KEY_ID
        sync: false

databases:

  # Prod
  - name: App Name Prod DB
    plan: starter

  # Dev
  - name: App Name Dev DB
    plan: starter

Hi Jason,

From what I can see, you may have had target memory and CPU values of 45% when the autoscaling kicked up to 3 instances. Your production service seems to average above 45% memory consumption in its steady state. I don't see any autoscaling events since you increased the targets to 90%.

The Scaling and Metrics tab in the dashboard is an excellent place to take note of average usage data and use that as a starting point to decide on auto-scaling target values.
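For example (the numbers here are purely illustrative): if the Metrics tab showed steady-state memory averaging around 50%, setting the targets comfortably above that average keeps routine usage from triggering a scale-up:

```yaml
scaling:
  minInstances: 1
  maxInstances: 3
  # Hypothetical: targets set well above a ~50% steady-state memory average,
  # so only a genuine spike triggers additional instances
  targetMemoryPercent: 90
  targetCPUPercent: 90
```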

If you still have questions, I’d be happy to discuss this in more detail if you want to open a support ticket so we can chat about your specific services and usage metrics, etc. Just let me know, and I’ll keep an eye out for it.

Regards,

Matt

OK, I'll take you up on this now; sending shortly.

For those who may come across this: it's a Rails 7 / Ruby 3.1.1 app running in Docker, using Puma as the server.

After poking around the web, it seems like we should all be using jemalloc in our Ruby-based containers, via the following configuration in the Dockerfile:

# Install libjemalloc2 in your container via your preferred method of
# installing system-level dependencies, then add:
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

# Adding this prints jemalloc info output in your build log, confirming it's working:
RUN MALLOC_CONF=stats_print:true ruby -e "exit"

This greatly reduced memory usage, from almost 2 GB to under 500 MB.
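Putting it together, a minimal Dockerfile sketch for a Debian-based Ruby image might look like this (the base image and install commands are assumptions, and the jemalloc library path varies by distribution and architecture — verify it in your own image with `dpkg -L libjemalloc2`):

```dockerfile
# Hypothetical base image; match your app's Ruby version
FROM ruby:3.1.1-slim

# Install jemalloc (Debian/Ubuntu package name)
RUN apt-get update \
    && apt-get install -y --no-install-recommends libjemalloc2 \
    && rm -rf /var/lib/apt/lists/*

# Preload jemalloc for every Ruby process in the container
# (path shown is for x86_64 Debian/Ubuntu)
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

# Print jemalloc stats during the build to confirm it's loaded
RUN MALLOC_CONF=stats_print:true ruby -e "exit"
```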

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.