Failed to execute PyTorch model with Python Flask

I have a Python 3.8 Flask web service on the Starter Plus plan, with an additional disk mounted that holds my PyTorch model.

Then I initialize the PyTorch model at the beginning of the Flask script, for example:

from flask import Flask, request

app = Flask(__name__)

model = load_model()  # reads the model weights from my additional disk

@app.post('/method')
def method():
    # Get input
    # ...
    res = model.forward(input)
    # ... return the result

Build Command: pip install -r requirements.txt
Start Command: gunicorn api:app

GUNICORN_CMD_ARGS: --preload --access-logfile - --bind=0.0.0.0:10000
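
In case it is relevant: Gunicorn kills any worker whose request runs longer than its timeout (30 seconds by default), which would also look like "terminated and booting a new one". Here is a sketch of a gunicorn.conf.py equivalent to my GUNICORN_CMD_ARGS that rules that out; the values are guesses, not something I have verified on Render:

# gunicorn.conf.py -- mirrors my GUNICORN_CMD_ARGS, plus a longer timeout
workers = 1              # one worker, so the ~600 MB model is held only once
timeout = 120            # seconds before Gunicorn kills a slow worker (default 30)
bind = "0.0.0.0:10000"
accesslog = "-"          # access log to stdout
preload_app = True       # load the model once in the master before forking

Start Command: gunicorn -c gunicorn.conf.py api:app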

After launching, the model loads successfully and RAM usage rises to around 600 MB.
But calling the /method endpoint fails: the worker (pid 101) is terminated and Gunicorn boots a new one.

I followed this post and disabled the --preload flag, but the call still fails in the same way (worker terminated, new one booted).

Another suggestion is to use shared memory, but Docker's default shared memory size is only 64 MB, which is too small for my model.
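
For completeness, this is how I keep the inference path itself as lean as I can. These are standard PyTorch calls; that they help with the worker crashes on Render is only my assumption, and input_tensor stands in for the real request parsing:

import torch

torch.set_num_threads(1)   # limit intra-op threads; thread pools created before
                           # Gunicorn forks are a known source of worker trouble

model = load_model()       # same helper as above, reading from the mounted disk
model.eval()               # switch dropout/batch-norm layers to inference mode

@app.post('/method')
def method():
    # ... build input_tensor from the request ...
    with torch.no_grad():  # skip autograd bookkeeping, lowering per-request RAM
        res = model(input_tensor)
    # ... serialize res and return it ...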

So, my questions are:

  • Is there any way to change the shm-size for Render's docker run?
  • Does anyone have another approach to deploying Flask with PyTorch on the Render service? (One idea I am considering is sketched below.)
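
The idea from the second question, as an untested sketch: drop --preload entirely and lazy-load the model inside each worker, so no PyTorch state crosses the fork. With N workers this costs N × ~600 MB of RAM:

_model = None

def get_model():
    # Load on first request inside each worker; every worker then owns a
    # private copy of the model and nothing PyTorch-related crosses the fork.
    global _model
    if _model is None:
        _model = load_model()
    return _model

@app.post('/method')
def method():
    # ...
    res = get_model().forward(input)
    # ...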

Thank you!

Hi there,

Thanks for reaching out.

It’s not currently possible to change the shm-size on Render, and unfortunately I don’t have any experience with PyTorch.

Does a plan with more RAM stop the process from being terminated?

Kind regards

Alan
