Running `rails db:seed` in the Render shell prematurely returns 'Killed'

Hi there!

It looks like Render kills my process prematurely and without warning when I try to do the following:

  1. Open a Render shell
  2. Execute a command that reads a large file
  3. Load the file into memory and parse it

Here’s an example of what I did in the Rails console inside the Render shell (browser):

irb(main):006:0> dump_path = Rails.root.join('db/data/dump.json')
irb(main):007:0> data = File.read(dump_path)
Killed
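
For what it’s worth, File.read loads the whole file into a single Ruby String, so the file size alone gives a rough lower bound on the memory the command needs (parsing the JSON afterwards would need even more on top of that). A quick sanity check in the same console:

File.size(dump_path) / 1024.0 / 1024.0   # rough file size in MB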

Is there any way around this?

There must be some config on Render that’s causing this to happen.

Appreciate any help here.

Hey there - it’s possible that your service doesn’t have enough memory or CPU to handle this request. I would recommend looking at your Metrics tab to see if there are spikes around the time you tried to run this command. You might want to try upgrading the service’s plan to see if higher memory/CPU allows you to run this.


Hey Danielle,

I am on the same team as Gabriel. I ran our import task again so I could watch the metrics for a spike while it was running.

I looked at the metrics (attached) and found that the spike is pretty minimal, as highlighted.

Could this still be an insufficient memory or CPU issue, given that there was a larger spike at around 4-5am (GMT+8)?

Currently, we are on the Starter plan, which has 512 MB RAM and a shared CPU.

Also, is there any way to get more logs about why the Render shell returns Killed?

Thank you and appreciate your help!

That’s a good point. I believe it may still be a memory/CPU issue even though the spike on the chart is small: if a memory/CPU spike happens quickly enough, or a process is killed immediately after a sudden memory spike, our metrics scraping can miss the peak of that spike.

I would recommend the following:

  • Try upgrading your service, at least long enough to test whether that makes a difference; you can downgrade the plan again when you’re done if you don’t always need that much memory/CPU
  • Test these commands with a much smaller file to see whether they still result in Killed

Especially if the second option works, it might make sense to process this data in smaller chunks.
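
If chunking does help, here’s a minimal sketch of one way to do it, assuming the dump can be exported as JSON Lines (one JSON object per line) rather than a single large array; SomeModel and the batch size of 500 are hypothetical placeholders, not anything from your app:

require 'json'

# Hypothetical path/format: a JSON Lines export of the dump.
dump_path = Rails.root.join('db/data/dump.jsonl')

# File.foreach streams one line at a time, unlike File.read,
# which loads the entire file into memory at once.
File.foreach(dump_path).each_slice(500) do |lines|
  records = lines.map { |line| JSON.parse(line) }
  SomeModel.insert_all(records) # hypothetical model / import logic
end

This keeps peak memory roughly proportional to the batch size rather than to the full file.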

Out of curiosity, what’s the end goal you’re trying to accomplish? I’m wondering if it would make sense to have a separate cron job or background worker to do this processing rather than running it in the shell.
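
If it turns out to be a recurring import, something like this could run it on a background worker instead of in the shell; a minimal ActiveJob sketch, where ImportDumpJob, SomeModel, and the path are hypothetical names rather than anything from your app:

# app/jobs/import_dump_job.rb (hypothetical)
class ImportDumpJob < ApplicationJob
  queue_as :default

  def perform(path)
    # Same streaming approach as above: parse and insert in small
    # batches so the worker's memory use stays bounded.
    File.foreach(path).each_slice(500) do |lines|
      SomeModel.insert_all(lines.map { |line| JSON.parse(line) })
    end
  end
end

# Enqueue from the shell or a scheduled job:
# ImportDumpJob.perform_later(Rails.root.join('db/data/dump.jsonl').to_s)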