I think I’ve come across a bug which may connect a few different issues people have been having.
For context, I’ve been trying to set up a Saleor instance on Render. I’m starting both a worker service and a web service from the same Docker image, but with different docker commands. Both services need access to a multiline environment variable containing a PEM key.
The documented way of doing this is to create a secret file containing the PEM key, and then export it as part of the docker command:

```
export RSA_PRIVATE_KEY=$(cat rsa-private-key) && start_script
```
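For what it’s worth, the `$(cat …)` pattern itself is sound in a plain shell. A quick sketch (with placeholder key material, not a real key) shows that the multiline value survives the assignment, which suggests the failures are on the platform side rather than in the pattern:

```shell
# Write a fake multiline PEM file (placeholder contents, not a real key).
printf '%s\n' '-----BEGIN RSA PRIVATE KEY-----' \
              'placeholder-key-material' \
              '-----END RSA PRIVATE KEY-----' > rsa-private-key

# Shell assignments do not undergo word splitting, so the newlines
# survive; command substitution only strips the trailing newline.
export RSA_PRIVATE_KEY=$(cat rsa-private-key)

printf '%s\n' "$RSA_PRIVATE_KEY"
```

Running this prints the three lines of the fake key back out intact.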
I’ve spent a day and a half trying to get this to work reliably. The infuriating thing is that sometimes it does build and run, but then on a subsequent build with exactly the same setup, it’ll fail with “exited with status 128”. I’ve also noticed that I’ll often get the shell reconnecting cycle that has been reported elsewhere.
Strangely, if I remove `export RSA_PRIVATE_KEY=$(cat rsa-private-key) &&` from the start of the docker command, the service starts reliably. The problem is that I need this environment variable to run the app in production.
More strangely, if I remove `export RSA_PRIVATE_KEY=$(cat rsa-private-key) &&` for one deploy, and then manually add it back to the service’s docker command through the web UI, it builds and runs as expected… until I change anything else, at which point it starts failing with “Exited with status 128” again.
I experimented a bit, and found that executing multiple commands separated by “&&” or “;” seems to be what triggers the issue.
My workaround for now is to add the shdotenv script to my Docker image, which lets me set environment variables from a dotenv file without chaining commands with “&&”:

```
./shdotenv --env secrets.env start_script
```
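For reference, this is roughly how I bake shdotenv into the image. The base image tag and paths here are illustrative, and I’m assuming the single-file release asset from the shdotenv GitHub releases page:

```dockerfile
# Hypothetical Dockerfile sketch -- base image tag and paths are placeholders.
FROM ghcr.io/saleor/saleor:3.x

# shdotenv ships as a single self-contained script; copy it into the
# image and make it executable so the docker command can call it.
ADD https://github.com/ko1nksm/shdotenv/releases/latest/download/shdotenv /app/shdotenv
RUN chmod +x /app/shdotenv
```

With that in place, the service’s docker command becomes the single `./shdotenv --env secrets.env start_script` invocation, with no “&&” chaining.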
This is less than ideal, as it means maintaining my own fork of the Docker image I’m deploying. Either a fix for “&&” in docker commands, or support for multiline environment variables, would make life a lot easier.