I’m trying to create a cron job that executes a BigQuery query. To do so, I put the query in a bash script (main.sh) and set up a cron job that runs it in a Docker environment, with a service account key provided as an environment file.
The build failed because the environment file doesn’t appear to be available at build time, so I moved the auth command into the bash script. The environment file is available there, but the auth process then tries to save something to a config location that is read-only.
Any suggestions on how to give service account credentials to my cron job so it can call this Google API?
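For reference, main.sh is roughly the following; the key file path and the query are placeholders:

```bash
#!/usr/bin/env bash
set -euo pipefail

# This is the step that fails: gcloud tries to save credentials to a config
# location that is read-only here. The key file path is a placeholder.
gcloud auth activate-service-account --key-file=/path/to/service_account_key.json

# Placeholder query
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM `my_project.my_dataset.my_table`'
```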
I believe your secret files should still be available during build time. At run time they’re available at the /etc/secrets/ path, but during the build they should be at the root of your repo or Docker build context. Have you tried reading the secret file from that location instead?
So I was able to get it once. Then I changed the build context directory and it is no longer working. When you say “root”, do you mean the build context directory?
Hey @ericmand, it looks like you want to inject secrets into your cron job. We support doing that at both build time and run time, but from your Dockerfile it looks like you might only need it at run time, if your main.sh can activate the service account itself. I recommend doing it at run time because it’s best practice to avoid including secrets in Docker images.
If you create a secret file named service_account_key.json in the dashboard for your service, it will be available to your cron job at /etc/secrets/service_account_key.json when it runs.
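Your main.sh can then point gcloud at that path directly, something along these lines (assuming you keep that same file name):

```bash
# Activate the service account from the secret file mounted at run time
gcloud auth activate-service-account --key-file=/etc/secrets/service_account_key.json
```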
If you do need that secret file during build time though, you’ll have to make use of Docker’s secret mount capability. We don’t have docs on that yet, but check out my post on it.
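Very roughly, it looks something like this; the secret id and base image here are placeholders, and anything the build step writes using the key should be cleaned up in the same RUN step so it doesn’t end up in a layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM google/cloud-sdk:slim

COPY main.sh /app/main.sh

# The key is mounted at /run/secrets/<id> for this RUN step only and is not
# written into any image layer. "gcp_key" is a placeholder secret id.
RUN --mount=type=secret,id=gcp_key \
    gcloud auth activate-service-account --key-file=/run/secrets/gcp_key && \
    rm -rf ~/.config/gcloud  # don't leave activated credentials in the layer
```

With plain Docker you’d pass the key in at build time with docker build --secret id=gcp_key,src=service_account_key.json .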
Thanks Adrian. Makes sense. I think run time sounds cleaner. My issue there is that the gcloud SDK apparently expects to be able to save the results of authentication in ~/.config/gcloud, which it says is read-only. I haven’t written many Dockerfiles before. I imagine there is a way to create that folder with the proper permissions at build time so that gcloud has no problem accessing it at run time, but I haven’t gotten that far.
Yeah, it sounds like ~/.config isn’t writable. I know /tmp is writable though, and you can set the Google Cloud SDK to use a different configuration directory via an environment variable.
I’ll check on the writability of ~, but in the meantime, I think you can set CLOUDSDK_CONFIG=/tmp/gcloud either in your script, in your Dockerfile, or even from the dashboard for your service.
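Putting that together with the secret file approach above, main.sh would end up looking roughly like this (the query is just a placeholder):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Send gcloud's config to a writable location instead of the read-only ~/.config/gcloud
export CLOUDSDK_CONFIG=/tmp/gcloud

# Authenticate with the secret file mounted at run time
gcloud auth activate-service-account --key-file=/etc/secrets/service_account_key.json

# Placeholder query; replace with your own
bq query --use_legacy_sql=false 'SELECT 1'
```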