I was doing some rather ‘rapid fire’ change/deploy cycles and noticed in the events panel that the app autoscaled up to 3 instances (it’s set to do this via blueprint settings), even though there was no actual usage of the app aside from running builds.
Just confirming: should I expect to see behavior like this?
From what I can see, you may have had target memory and CPU values of 45% when the autoscaling kicked up to 3 instances. Your production service seems to average above 45% memory consumption in its steady state. I don’t see any autoscaling events since you increased the target to 90%.
The Scaling and Metrics tab in the dashboard is an excellent place to review average usage data and use it as a starting point when deciding on autoscaling target values.
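For reference, autoscaling targets in a Blueprint are set per service under a scaling block. Here’s a minimal sketch, assuming a Docker web service; the service name is made up, and the key names (minInstances, maxInstances, targetCPUPercent, targetMemoryPercent, runtime) reflect my understanding of the current render.yaml spec, so double-check them against the Blueprint docs:

services:
  - type: web
    name: my-rails-app   # hypothetical service name
    runtime: docker      # assumed key; adjust to match your blueprint
    scaling:
      minInstances: 1
      maxInstances: 3
      targetCPUPercent: 90     # scale out when average CPU stays above this
      targetMemoryPercent: 90  # scale out when average memory stays above this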
If you still have questions, I’d be happy to discuss this in more detail if you want to open a support ticket so we can chat about your specific services and usage metrics, etc. Just let me know, and I’ll keep an eye out for it.
For those who may come across this later: it’s a Rails 7 / Ruby 3.1.1 app running in Docker, using Puma as the server.
After poking around the web, it seems like we should all be using jemalloc in our Ruby-based containers, via the following configuration in the Dockerfile:
# install libjemalloc2 in your container via your preferred method of installing system-level dependencies, then add
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
# adding this will print jemalloc stats output in your build log, which confirms the preload is working
RUN MALLOC_CONF=stats_print:true ruby -e "exit"
This greatly reduced memory usage, from almost 2 GB to under 500 MB.
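In case it’s useful, here’s roughly how the whole thing fits together in a Dockerfile. This is just a sketch assuming a Debian-based Ruby image (hence apt-get and the libjemalloc2 package); the image tag is only an example, and the library path may differ on other architectures:

FROM ruby:3.1.1-slim

# install jemalloc from the Debian package repo
RUN apt-get update -qq && \
    apt-get install -y --no-install-recommends libjemalloc2 && \
    rm -rf /var/lib/apt/lists/*

# preload jemalloc so Ruby uses it instead of the default glibc malloc
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

# optional sanity check: prints jemalloc stats in the build log if the preload is active
RUN MALLOC_CONF=stats_print:true ruby -e "exit"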