Deploy failed: Out of memory

Hi there! :wave:

I’m trying to deploy a Gatsby v4 website (static site generation) served via Express on a Starter Plus account (1GB RAM, 1 CPU), and the deploys keep failing during the Deploying... step (after successful build and upload):

Done in 976.00s.
==> Uploading build...
==> Build successful 🎉 
==> Deploying...

The failure shows up in the Events tab of the dashboard for the service like this:

Deploy failed for `08b2c7d`: Fix module names

Out of memory

January 6, 2022 at 12:01 AM

However, no spikes in memory usage larger than 1GB RAM show up on the Metrics tab.

cc @dan


The project is, however, a monorepo, and its total size is 13GB. Could the size on disk be a problem during the last stage of deployment? I may be able to experiment with removing some development dependencies…
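If I try that, the build command would do something like this (just a sketch of the idea; it assumes Yarn classic and that nothing in the workspace needs devDependencies at runtime):

```bash
# Sketch (assumption): prune devDependencies after building, so the
# size on disk at deploy time only includes runtime dependencies.

# Build with the full dependency tree available
yarn build

# Reinstall with production dependencies only (Yarn classic)
yarn install --production --frozen-lockfile
```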

Hi Karl,

The Metrics page won’t necessarily catch spikes, as it shows averages over ~3 minutes. I took a look at the Gatsby documentation, and they say that the memory footprint can vary based on the size of your site, so the project being 13GB could be an issue. Can you share the name of the service in question so I can take a closer look?
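In the meantime, a common mitigation from the Gatsby docs for build-time memory pressure is raising the Node.js heap limit, though it may not apply here since your failure happens after the build succeeds:

```bash
# Common mitigation for Gatsby build OOMs (the 1536 MB value is just an
# example); note this tunes the build process, so it may not help if the
# spike happens during the Deploying step itself.
NODE_OPTIONS="--max-old-space-size=1536" yarn build
```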

Hi @tyler, thanks for your answer!

To be clear, it’s the whole monorepo that takes up 13GB (the GitHub repo plus the installed node_modules, which contains multiple npm packages), not just our Gatsby npm package itself. It’s also not the built size of the Gatsby website, which is “only” 700MB.

The service name is srv-brua640951caka6i3sd0 (I’m assuming this is the name that you want - the one from the dashboard URL).

Ah, very interesting: I tried again this morning (after making unrelated changes to test files), and it succeeded :thinking: It also built a lot faster.

Not sure if this will resolve the issue permanently though…

Done in 771.86s.
==> Uploading build...
==> Build successful 🎉
==> Deploying...
==> Detected Node version 16.2.0
==> Starting service with 'yarn server start'
$ node build/server
Listening at http://localhost:10000

Hm, interesting, now I’m getting “Deploy Failed: Cause of failure could not be determined” (this message only appears via Slack, not in the Render Dashboard), this time during uploading:

Jan 6 04:20:13 PM  
Jan 6 04:20:13 PM  Done in 1025.61s.
Jan 6 04:20:35 PM  ==> Uploading build...  # This is when the error occurred

No other errors or warnings appear in the output of the deploy…

Ok, it appears the “Cause of failure could not be determined” failure is related to the caching I tried to introduce to speed up the build (usage of the $XDG_CACHE_HOME environment variable; see the script in the linked post below).

I commented out the cache saving (the rsync of built files to the $XDG_CACHE_HOME destination directory), and the build + deploy succeeded, but so far only once, so who knows whether it will stay stable.

Maybe a very large cache directory causes these types of “unknown” failures?

So @tyler, it would still be good if you could take a look at it, even though it succeeded once.
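For context, the disabled step looked roughly like this (a sketch; the save_render_cache name and the example paths are illustrative, mirroring the restore_render_cache function from the linked post):

```bash
# Sketch of the disabled cache-save step (the function name and example
# paths are illustrative, mirroring restore_render_cache from the linked post)
save_render_cache() {
  local source_cache_dir="$1"
  echo "Saving $source_cache_dir to cache, rsyncing…"
  mkdir -p "$XDG_CACHE_HOME/$source_cache_dir"
  # Copy the built files into the persistent cache directory
  rsync -a "$source_cache_dir/" "$XDG_CACHE_HOME/$source_cache_dir/"
}

# Example usage after the Gatsby build:
# save_render_cache .cache
# save_render_cache public
```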

Gatsby - build caching and image transformations

> Just trying this out today (although I’m experiencing unexplained build failures). But I think the cache saving and restoration is working…? Here’s my version of the script:

```bash
#!/usr/bin/env bash
# Ref: Gatsby - build caching and image transformations - #2 by Ralph
restore_render_cache() {
  local source_cache_dir="$1"
  if [[ -d "$XDG_CACHE_HOME/$source_cache_dir" ]]; then
    echo "CACHE HIT $source_cache_dir, rsyncing…"
    rsync -a "$XDG_CACHE_HOME/$source_cache_dir/…
```

[Discourse post]

Ah yeah, it failed again just now with Out of memory, again during the Deploying... step:

Jan 6 05:34:54 PM   Done in 718.80s.
Jan 6 05:35:02 PM   ==> Uploading build...
Jan 6 05:39:41 PM   ==> Build successful 🎉
Jan 6 05:39:41 PM   ==> Deploying...

@tyler did you have a chance to take a look?


I first tried reducing the size on disk by bundling all of the dependencies and then completely deleting the node_modules folder and the extra packages/* workspace folders.

But this attempt did not solve the build failures during the “Deploying…” step.
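For anyone trying something similar, the general idea, sketched with esbuild (illustrative; not my exact setup or paths):

```bash
# General idea, sketched with esbuild (illustrative, not my exact setup):

# Bundle the Express server and all of its dependencies into a single file
npx esbuild server/index.js --bundle --platform=node --outfile=build/server.js

# Then delete the now-unneeded dependency folders to shrink the size on disk
rm -rf node_modules packages/*/node_modules
```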

I ended up upgrading to Standard (2GB RAM, 1 CPU), and two subsequent deploys have now succeeded. Still not 100% sure that it is stable like this, but at least it’s a step in the right direction.

But it’s unclear to me why the “Deploying…” step on Render causes these out-of-memory issues, when the rest of my deploy script doesn’t cause noticeable RAM usage increases.

If “Deploying…” is causing the memory spike rather than my deploy script, it would be nice for Render to cover the cost of that (e.g. by adding extra RAM to the machine during this step only), unless I’m somehow causing it with what I’m doing.

Ahh, another deploy failure, also after the upgrade, with 2GB RAM :tired_face:

This time with “Cause of failure could not be determined”.

Just stopped during “Uploading build…” again:

Jan 7 05:02:41 PM   Done in 803.08s.
Jan 7 05:03:00 PM   ==> Uploading build...

cc @anurag. Does anyone have an idea about what’s going on here?

Hi @karlhorky-upleveled, apologies for the delayed response; I’m taking a look at this today.

Ok thank you, I’ll continue the discussion in DMs, as requested.