HuggingFaceEmbeddings - Worker with pid 75 was terminated due to signal 9

I'm trying to use LangChain, and this line is causing the worker to be terminated with signal 9. It's certainly a memory-hungry piece of code, but I'm not sure why things are being terminated with no clear reason given. I don't see any clear answers in similar older community questions either.

embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)

Downloading (…)5de9125/modules.json: 100%|██████████| 349/349 [00:00<00:00, 1.52MB/s]
Jun 4 04:03:29 PM [2023-06-04 23:03:29 +0000] [51] [WARNING] Worker with pid 75 was terminated due to signal 9
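In case it helps narrow this down: one quick check is how much physical memory the instance actually has free before the model loads. A Linux-only sketch using the standard library (the `SC_PHYS_PAGES`/`SC_AVPHYS_PAGES` sysconf names below are glibc extensions and aren't defined on every platform):

```python
import os

# Linux-specific sysconf names; raises on platforms that lack them.
page = os.sysconf("SC_PAGE_SIZE")
total_mb = os.sysconf("SC_PHYS_PAGES") * page / 1024**2
avail_mb = os.sysconf("SC_AVPHYS_PAGES") * page / 1024**2
print(f"RAM: {avail_mb:.0f} MB available of {total_mb:.0f} MB total")
```

If the available figure is already close to the model's size, an OOM kill during loading would be consistent with the log above.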

Hey,

If you encounter "killed 9" errors, the most common reason is that your service ran out of memory and was terminated by the kernel (OOM killed). To address this, you can try upgrading your instance type to a larger one with more memory and see whether the issue persists.
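For context, signal 9 is SIGKILL, which a process cannot catch or handle; the kernel's OOM killer sends it directly to the worker. A small stdlib sketch showing how a SIGKILL death surfaces as a negative return code, which is essentially what the Gunicorn master observes and logs:

```python
import signal
import subprocess
import sys

# Spawn a child that kills itself with SIGKILL, mimicking the kernel's
# OOM killer terminating a worker process.
child = subprocess.run(
    [sys.executable, "-c", "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]
)

# On POSIX, a process killed by a signal reports -<signal number>,
# so SIGKILL shows up as -9 -- the "signal 9" in the log line above.
print(child.returncode)
```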

If you continue to experience this problem even after upgrading, please proceed to open a support ticket via the “Contact Support” form in the dashboard and select the affected service.

See https://stackoverflow.com/questions/67637004/gunicorn-worker-terminated-with-signal-9

Jérémy.
Render Support, UTC+3

Thanks for the information. Is there any way to understand how much memory is being used and/or how far over the instance limits we are? I just see some basic graphs showing CPU under 0.06 and memory between 100 and 200 MB. I'm guessing this means the 'Starter' instance CPU of 0.05 is what is blowing this out, but I can't tell how, and I'm not finding the details in the logs, etc.

Hey Bob,

You can find the relevant information in the “Metrics” tab, which provides a general overview of the resource usage of your service. It is worth noting that some memory-related errors may be correlated with fluctuations in the graph displayed on this tab.

To determine the amount of RAM available for a specific instance type, you can visit the “Settings” > “Change Instance Type” page.
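If you want a number from inside the process rather than from the dashboard graphs, the standard-library `resource` module reports the peak resident set size, which you could log around the embeddings load. A sketch (note the unit caveat: `ru_maxrss` is kilobytes on Linux but bytes on macOS):

```python
import resource

# Allocate a buffer so the peak RSS reading is non-trivial in this demo;
# in the real service this would be the point after loading the model.
payload = bytearray(50 * 1024 * 1024)  # ~50 MB

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
peak_mb = usage.ru_maxrss / 1024
print(f"peak resident set size so far: {peak_mb:.1f} MB")
```

Comparing that logged figure against the instance's RAM limit would show how close the worker gets before it is killed.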

In general, I recommend using a Standard instance type, or higher, when running an ML model as they typically require substantial resources.

Jérémy.
Render Support, UTC+3

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.