Rails static asset behavior

Hi,

I’m experimenting with Render, coming from Heroku. I have a basic Rails 7 app that I created two web services for: one uses the Docker environment and one uses the native Ruby environment. I’m seeing some different behavior between the two and hoping you can provide some insight.

First, on the native Ruby environment, the docs indicate you must tell Rails to serve static assets. At https://render.com/docs/deploy-rails#go-production-ready, toward the bottom of that section, it says:

Open config/environments/production.rb and enable the public file server when the RENDER environment variable is present (which always is on Render):
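
To spell that out (paraphrasing the docs from memory, so the exact snippet there may differ slightly), it’s something like:

  # config/environments/production.rb
  # Serve files from public/ whenever we're running on Render.
  config.public_file_server.enabled = ENV["RENDER"].present?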

I did not do this, but after deploying the app, assets are served without issue. I thought maybe the ENV var was being set for me, but I don’t see anything listed at https://render.com/docs/environment-variables#ruby that would cause Puma/Rails to serve static assets without me explicitly saying so. In the Docker environment, I received a 404 on my assets until I added the RAILS_SERVE_STATIC_FILES ENV var to my settings. Why does the native Ruby environment serve assets without this?
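
For reference, my production.rb still has the stock Rails 7 gate (assuming I remember the generated defaults correctly, this is exactly why the Docker service 404’d until I set that variable):

  # Default in a freshly generated Rails 7 config/environments/production.rb:
  # only serve files from public/ if the platform opts in via this env var.
  config.public_file_server.enabled = ENV["RAILS_SERVE_STATIC_FILES"].present?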

Second, I’m seeing issues during deploys with regard to static assets. When I deploy a CSS change, here is what I observe (see the sketch after this list for one way to watch it):

  1. App finishes deploying.
  2. User accesses app from browser, receives new HTML pointing to new CSS file.
  3. Browser requests the CSS file, but receives a 404 (presumably because this request was directed to the old code).
  4. User refreshes the browser, and it’s hit or miss: they might receive the old HTML pointing to the old CSS file, and when the browser requests that old CSS file, it receives a 404 (presumably because now that request was directed to the new server/code).
  5. This behavior continues for ~3 minutes: I randomly get old or new HTML, and the corresponding CSS file may or may not load. Eventually it settles down, returning both new HTML and CSS successfully.
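
For what it’s worth, here’s a rough way to watch this happen - just a sketch, with a hypothetical hostname and assuming the usual fingerprinted /assets/application-<digest>.css path. It fetches the page, pulls out the CSS path it references, and then requests that asset:

  require "net/http"
  require "uri"

  base = "https://my-app.onrender.com"  # hypothetical hostname

  loop do
    # Fetch the page, find the fingerprinted CSS it links to, then request it.
    html = Net::HTTP.get(URI("#{base}/"))
    css_path = html[%r{/assets/application-[0-9a-f]+\.css}]
    status = css_path && Net::HTTP.get_response(URI("#{base}#{css_path}")).code
    puts "#{Time.now.strftime('%T')}  css=#{css_path.inspect}  status=#{status.inspect}"
    sleep 1
  end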

This happens in both the Docker and native Ruby environments. It happens whether or not I’ve enabled a health check endpoint. It doesn’t happen every time, but it seems to happen most of the time, and it’s hard to tell what is going on.

I found this post, Rails Static Assets, which indicates this shouldn’t be happening, so any help would be appreciated.

Thanks.

As an FYI, I’ve settled on just using Rails’ built-in file server[1]. If you set the headers correctly in production.rb, then once you pop everything behind Cloudflare it seems to work perfectly:

  config.public_file_server.headers = {
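    # s-maxage is honored by shared caches (e.g. Cloudflare); max-age by browsers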
    'Cache-Control' => 'public, s-maxage=31536000, max-age=15552000',
    'Expires' => 1.year.from_now.to_formatted_s(:rfc822)
  }

Nice and simple, no races to worry about.

Worth noting that I believe these headers will also apply to your public/favicon.ico (haven’t checked), so you should be careful to use the dedicated helpers or asset_path for those to ensure they get fingerprinted.
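
For example, something like this in the layout, assuming the icon lives in app/assets/images rather than public/ so it goes through the pipeline and gets a digest:

  <%# app/views/layouts/application.html.erb %>
  <%= favicon_link_tag "favicon.ico" %>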

Nik

[1] I’m not sure the concern over using app processes for static assets really plays out in a world with threaded app servers - it’s easy/cheap enough to vertically scale the app and add more threads. At the point where that’s not practical you are (a) hopefully making enough money off your app not to care anymore and (b) able to set up nginx or similar in Docker, or migrate to a full-noise AWS setup…


I’m happy serving static assets from Puma. I was just surprised that it was working without me setting the ENV var to true. I realized I could check whether it’s set, and sure enough, on the native Ruby environment, RAILS_SERVE_STATIC_FILES is set to true by default. That answers my first question, though the docs are maybe a bit outdated.
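
(For anyone else checking, it’s just a one-liner from the service’s shell or console - a sketch:)

  # e.g. via `bin/rails runner` in the service's shell:
  puts ENV["RAILS_SERVE_STATIC_FILES"].inspect  # => "true" here on the native Ruby runtime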

For my second question around the deploying and the race conditions with assets, your solution makes complete sense.

I did some more testing and I might have misspoken originally. I found that with the Ruby native environment, when you define a health check endpoint, the app switches to the new deploy immediately, and I was unable to reproduce the race condition because all requests were being served by the new deploy. However, without a health check endpoint, both the old deploy and the new deploy serve requests for 2-3 minutes after the new deploy is “live”. I think that is where my main problem originated. I’ll make sure to set a health check going forward, but it would be nice to know why both deploys run side by side for so long when one isn’t set.
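
(In case it’s useful to anyone following along, the endpoint itself can be trivial - a minimal sketch; the path is just an example and has to match the Health Check Path setting on the service. Rails 7.1+ also ships a built-in /up route.)

  # config/routes.rb
  Rails.application.routes.draw do
    # A bare Rack endpoint is plenty for a health check.
    get "/healthz", to: ->(env) { [200, { "content-type" => "text/plain" }, ["ok"]] }
  end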


Yep - using Docker here, so no defaults apart from what I’ve set as far as I can tell :slight_smile: Seems like it’s worth a section in the docs about static files - maybe something for the Render crew to consider?

Similarly, some sort of orchestration or layer on top of services is hopefully on a backlog somewhere, which would let you be much more explicit about how builds etc should work :smiley:

I found that with the Ruby native environment, when you define a health check endpoint, the app switches to the new deploy immediately

After more testing, this isn’t always true.

From what I can tell, if I watch the logs during a deploy and hit refresh in the browser after Puma starts but before seeing the logs for the health check request, the old deploy will serve the request, and it may continue serving requests from the browser for up to 3 minutes (during which time the browser requests are randomly served by the old or new deploy, so the asset request often 404s). But if I let the deploy finish, wait for the “live” badge, and wait to see the first logs for the health check request, then when I hit refresh in the browser it is always served by the new deploy. I can refresh for 3 minutes and it’s always the new deploy. Can anyone at Render help explain what is going on? I don’t want old deploys serving requests for minutes after a new deploy is live; I thought it would switch instantly.

@nikz would love to know what kinds of orchestration you’re interested in so we can put that into our planning. It’s definitely something we think about, but hearing more about your specific use-case would be helpful.

@Adam_Solove Basically just higher-order organisation - for instance, a particular “Application” probably has some app servers, maybe a worker, and then one or more databases. It would be nice to group those such that deploys are atomic (assuming the app server and worker run off the same code, I don’t necessarily want a deploy to succeed on one and fail on the other). Similarly, stats are relevant across different layers - database load feeds up into app server delay, etc.

Definitely a “long term” request rather than something that’s immediately missing :slight_smile:


I’d like to bring this back on topic. Can someone help explain the behavior I’m seeing during deploys? I’m looking at How Deploys Work | Render Docs, but it’s not clear whether the behavior I’m seeing is expected. Specifically:

If the response is successful (say a 200 response code), we mark the deploy live and start directing user requests to the shiny new version of your app.

Does the switch to “start directing user requests” happen immediately? For me, after a deploy is marked live, some requests go to the old version of the app and some go to the new. I did not expect that; can someone verify whether this is supposed to happen this way?
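
For reference, while testing I’ve been tempted to tag every response with the commit that served it, so the old-vs-new split is visible in the browser’s network tab - a sketch, assuming Render exposes the commit SHA in an env var along the lines of RENDER_GIT_COMMIT (the middleware and header names are just mine):

  # config/initializers/deploy_marker.rb
  class DeployMarker
    def initialize(app)
      @app = app
    end

    def call(env)
      status, headers, body = @app.call(env)
      # Stamp each response with the deploy that actually served it.
      headers["x-deploy-commit"] = ENV.fetch("RENDER_GIT_COMMIT", "unknown")
      [status, headers, body]
    end
  end

  Rails.application.config.middleware.use DeployMarker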

We terminate the old version at this point by sending your app a SIGTERM signal. Most web servers automatically intercept SIGTERM and shut down gracefully. There is a grace period of 30 seconds to shut everything down. If your app is still up after 30 seconds, it is shut down via a SIGKILL signal.

This also conflicts with the behavior I see. After a successful deploy, with the new version marked as “live”, I still see the old version of the app running for around 3 minutes, and it’s still receiving new requests. It’s a basic Rails app running Puma, so if it had received a SIGTERM right after the deploy, I would expect it to finish serving the requests it was already handling, then shut down, with no new requests coming in.
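
One small thing that might help pin this down - a sketch; at_exit fires once Puma has finished its graceful shutdown, so the timestamp in the logs shows when the old process actually went away:

  # config/puma.rb (appended) - log when this Puma process finally exits,
  # to compare against the moment the new deploy is marked "live".
  at_exit { $stdout.puts "[pid #{Process.pid}] puma exiting at #{Time.now.utc}" }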

Finally, I am seeing a process named srv-{service_id}-port-detector-{random} that starts after a deploy and is live for 3 minutes. I don’t think it serves any requests, but it must be consuming database connections and other resources. What is this for?

@davekaro I’ve just merged a PR to update the Ruby environment variables docs to reflect reality when it comes to static files and, as it happens, RAILS_LOG_TO_STDOUT - thanks for calling that out.

I’m going to spend some time attempting to recreate this scenario so we can understand what’s going on here better.
