Deploy of Docker as background worker fails with timeout

I created a background worker that deploys a Docker image. The deploy fails after a couple of minutes with a timeout: supervisord in the container receives a SIGTERM and shuts down, and there is nothing useful in the logs. Locally the container keeps running. What’s going on?

I had exactly the same issue some time ago. For reference: Deploy of Docker as private service fails with timeout

Back then the problem was that I was using a private service, so I switched to a background worker (as suggested by the Render team). I changed a few things in my code, but most of it stayed the same. It should work, but it doesn’t.

Log output:

Nov 24 11:07:09 PM  2021-11-25 00:07:09,473 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
Nov 24 11:07:09 PM  2021-11-25 00:07:09,480 INFO supervisord started with pid 1
Nov 24 11:07:10 PM  2021-11-25 00:07:10,485 INFO spawned: 'cron' with pid 8
Nov 24 11:07:11 PM  2021-11-25 00:07:11,486 INFO success: cron entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Nov 24 11:10:09 PM  2021-11-25 00:10:09,650 WARN received SIGTERM indicating exit request
Nov 24 11:10:09 PM  2021-11-25 00:10:09,676 INFO waiting for cron to die
Nov 24 11:10:09 PM  2021-11-25 00:10:09,676 INFO stopped: cron (terminated by SIGTERM)
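If it helps to narrow this down: that shutdown sequence is what supervisord prints when an external SIGTERM reaches PID 1 (locally, `docker stop` sends the same signal), not what a crash inside the container looks like. A minimal sketch of that behaviour, just to illustrate what I think is happening:

```bash
#!/bin/bash
# Minimal PID-1-style process mirroring the supervisord lines above: it runs
# until an external SIGTERM arrives (the same signal `docker stop` sends),
# then logs the exit request and stops its work loop.
running=1
trap 'echo "received SIGTERM indicating exit request"; running=0' TERM
echo "started with pid $$"
# Simulate the platform (or `docker stop`) sending SIGTERM after a moment.
( sleep 1; kill -TERM $$ ) &
while [ "$running" -eq 1 ]; do sleep 0.2; done
echo "shut down cleanly"
```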

Dockerfile:

FROM joyzoursky/python-chromedriver:3.8-selenium

ENV REFRESHED_AT 2021-11-24
################
# dependencies #
################
RUN apt-get update && \
    apt-get -y install libblas-dev liblapack-dev libatlas-base-dev gfortran supervisor cron tzdata

# set display port to avoid crash
ENV DISPLAY=:99

# Set timezone
ENV TZ=Europe/Berlin
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Copy the app 
WORKDIR /app
COPY ./src /app

# Create directory for results
RUN mkdir /app/scrapy/results
RUN chmod -R 777 /app/scrapy/results 

# COPY CREDS
COPY creds.json /app/creds.json

# Copy python run script
COPY config/runcron.sh /app/runcron.sh
COPY config/scrapecompanies.sh /app/scrapecompanies.sh
COPY config/scrapejobs.sh /app/scrapejobs.sh
RUN chmod +x /app/runcron.sh
RUN chmod +x /app/scrapecompanies.sh
RUN chmod +x /app/scrapejobs.sh

# Run docker container with: docker run -d -v "$PWD":/app diversity_scraper
# Enter docker container with: docker exec -it container_name bash

# VOLUME ["/code"]
# WORKDIR /app

# Install requirements
RUN pip install --upgrade pip
RUN pip install Cython --install-option="--no-cython-compile"
RUN pip install -r /app/requirements.txt

# Configure cron jobs, and ensure crontab-file permissions
COPY config/cronjobs /etc/crontabs/root
RUN chmod 0644 /etc/crontabs/root

# Apply cron job
RUN crontab /etc/crontabs/root

# Setup supervisord
COPY config/supervisord.conf /etc/supervisor/
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

Supervisord config:

[supervisord]
nodaemon=true
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:cron]
command = /bin/bash -c "declare -p | grep -Ev '^declare -[[:alpha:]]*r' > /run/supervisord.env && /usr/sbin/cron -f -L 15"
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes=0
user = root
autostart = true
autorestart = true
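In case anyone wonders about the `declare -p` part of the `[program:cron]` command: cron starts jobs with an almost empty environment, so the container’s environment is snapshotted to `/run/supervisord.env` at startup and each cron job script sources it back in first. A minimal sketch of that pattern (`API_TOKEN` is just an illustrative variable, and the snapshot goes to `/tmp` here for the demo):

```bash
#!/bin/bash
# Snapshot the current environment, excluding read-only variables, the same
# way the [program:cron] command above does.
export API_TOKEN="example-value"   # hypothetical variable, for illustration
declare -p | grep -Ev '^declare -[[:alpha:]]*r' > /tmp/supervisord.env

# What a cron job script would do first: restore the container environment.
unset API_TOKEN
. /tmp/supervisord.env
echo "API_TOKEN=$API_TOKEN"
```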

Thanks!

Hello!

I’m sorry to hear that you’re having issues with your service. Could you message me privately with your service name, service ID, and the email address on your account? I’ll dig further into what’s going on.

Additionally, have you considered using a cron job for this service? It looks like you’re already running cron inside the container, which makes me wonder whether it would be simpler to configure this through the cron jobs we offer.
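If you do go that route, a cron job can be defined in a blueprint. A rough sketch of what the entry might look like (the name, schedule, and command below are placeholders, so please double-check the field names against our Blueprint docs):

```yaml
# Hypothetical render.yaml entry for running the scraper as a native cron job.
services:
  - type: cron
    name: diversity-scraper
    env: docker
    schedule: "0 6 * * *"          # e.g. run daily at 06:00 UTC
    dockerCommand: /app/scrapejobs.sh
```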