Avoid creating Preview Environments for both staging and prod

I have a project with both a ‘staging’ and a ‘prod’ version of my web services, and preview environments enabled. The ‘staging’ service is tied to the ‘staging’ branch and the ‘prod’ service is tied to the ‘main’ branch. A copy of the render.yaml file is available here. For the most part it works great, and when I create a new PR the preview environment is created as expected.

The only catch is that for each PR I open, a preview environment is created for both staging and prod, which really isn’t needed and costs me a bit extra. Is there a way I can limit things so that if I’m creating a PR against ‘staging’ only a preview env of ‘staging’ is created, and if I’m creating a PR against ‘prod’ only a preview env of ‘prod’ is created? Alternatively, is there a way to have separate yaml files for each branch, or something along those lines?

+1, looking for the same solution.

This would probably be solved by this feature request: Multiple environments in a single render.yaml file | Feature Requests | Render


Thanks Stefan - Have upvoted for this feature just now.

Welcome @carleton,

The problem here is a point of confusion we often see, and something we plan on addressing. Because your render.yaml contains both your prod and staging services, when you push to prod we redeploy the entire blueprint, and because you have a staging service listed it gets deployed as well.

You’ll want to split your blueprint so it represents a single environment, and then have one blueprint deploy for your master branch (i.e. the production env) and one for staging. Each of these blueprints would deploy its own set of services.
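For illustration, a single-environment render.yaml might look something like this (the service name, runtime, and commands here are placeholders, not taken from the original poster’s file); the same file can live on every branch, with each blueprint deploy pointed at one branch:

```yaml
# render.yaml — describes a single environment; one blueprint deploy
# per branch (master -> production, staging -> staging)
services:
  - type: web
    name: strapi                  # placeholder service name
    env: node
    buildCommand: yarn install && yarn build
    startCommand: yarn start
    envVars:
      - key: NODE_ENV
        sync: false               # set per environment in the dashboard
```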

Pull requests would be made against your staging branch, which triggers the preview environment; merging staging into master would then trigger a deployment to production. NOTE: something to watch out for is that GitHub defaults pull requests to target the default branch (typically master/main), so unless you change the default branch to staging you’ll have to remember to change each PR to target staging. I’ve seen customers do things similar to this; in fact, one treated master as ‘staging’ and then maintained a separate ‘prod’ branch.

When you create a new blueprint deployment for the master branch we’d create the services. Then, when you create the blueprint deployment for staging, we’d detect that the resources already exist; this is where you’d pick ‘create new resources’ as opposed to ‘update existing resources’, and we’d then create a NEW set of services suffixed with a random string. If that isn’t obvious enough when you see the services listed in the dashboard, you can always create a new team specifically for your staging environment and deploy the blueprint there. Because it’s an empty team, the resources would be created without the suffix in the name, but the service URLs would still have a suffix appended.

However…

Because you want your staging environment to have NODE_ENV=staging, you’re going to have to do something like the following. First, set NODE_ENV in your render.yaml with:

- key: NODE_ENV
  sync: false

This will let you set it to staging for your staging env in the dashboard - we default it to production so you wouldn’t need to explicitly set that one.

For your preview environments (we don’t currently provide a way to copy these vars into them), you’re going to have to tweak the build and start commands to set the vars you need: change each into a script that eventually runs the same command already defined in your buildCommand and startCommand, but first does something like this:

#!/usr/bin/env bash
# /bin/render-build.sh
set -euo pipefail

# Default NODE_ENV to staging when running in a preview environment
if [[ -z "${NODE_ENV:-}" ]]; then
  if [[ "${IS_PULL_REQUEST:-}" == "true" ]]; then
    export NODE_ENV=staging
  else
    echo "NODE_ENV not set" >&2
    exit 1
  fi
fi

# Build command here
This will set NODE_ENV=staging if it’s not already set and the script is being executed in a preview environment. The downside is that it won’t show in the dashboard as an environment variable, nor will it be set for the shell tab.
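To wire those scripts in, the blueprint’s commands would point at the wrappers; the paths and service details below are illustrative, and each script ends by running whatever your original buildCommand/startCommand was:

```yaml
services:
  - type: web
    name: strapi
    env: node
    buildCommand: ./bin/render-build.sh   # wraps the original build command
    startCommand: ./bin/render-start.sh   # same NODE_ENV guard, then the original start command
```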

I think this will achieve what you’re after here, let me know if you have any questions.


Thanks for taking the time to respond to me. Can you provide more info about:

Can you explain a little further how to achieve this? Are you suggesting I have different render.yaml files in my master and staging branches? If so, that’s a deal breaker. I need to be able to regularly merge staging into master.

Additionally, I really want to pre-define the names of my services. I have less tech-savvy customers who will also be accessing the Render dashboard I set up for them, and I don’t want any confusion about which environment is which, so it’d be great if I could name them “strapi-prod” and “strapi-staging” ahead of time.

Thanks a lot for this guide @John_B
I feel like folks here (myself included) have been struggling with this for quite a long time, and this is the first time I’ve seen it explained like this; it finally makes sense. :slight_smile:

No, you should be able to have a single render.yaml that’s the same for both environments.

Right now, the behaviour when you create a new blueprint in the same team but for a different branch (i.e. services with the names in the render.yaml already exist) is that we append a suffix to the names of all the services; this can’t be changed.

You could, however, use a separate team for staging services to have them named the same as defined in the render.yaml.

No, you should be able to have a single render.yaml that’s the same for both environments.

How do you manage the plan in the blueprint, then? It’s common for staging to have lower resources than production, so you can’t just use the same render.yaml for both. Same question for the domains key. In CloudFormation I have some kind of parameterization, so I can change out things like domain, URL, host, resources, CPU, memory, etc. Ideally, I don’t want anyone using the dashboard to make changes once things are in the blueprint. How can I achieve this with blueprints without creating more duplication?


You can override plans for preview environments - https://render.com/docs/preview-environments#override-preview-plans
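Per that doc, the override is a per-service key in the render.yaml, so previews can run on a cheaper plan than the real service. A sketch (the service name is illustrative, and the key name is taken from my reading of those docs):

```yaml
services:
  - type: web
    name: api
    env: node
    plan: standard        # plan for the deployed service
    previewPlan: starter  # cheaper plan used only by its preview environments
```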


That is true, but in this scenario neither the staging nor the production environment is a preview environment. They are both created exactly as defined in the render.yaml file, in exactly the same way. @tansan wanted a higher plan for production than for staging, and that is not possible, afaik.

Thanks for the feedback. What I ended up doing is creating the production environments by hand, and then reducing the render.yaml file to just describe the staging environments. With Autosync and preview environments enabled on that staging only blueprint I have mostly what I’m looking for: New PRs against staging create a single preview environment and I can specify the exact names of the environments. Merges to master don’t get preview environments with this solution, but I can live with that as I don’t do that very often.

A key change I also had to make to the render.yaml file is removing the branch: lines, because when those are present the new preview environments will use those branches, not the new feature branch that I want to test.
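For reference, that change is just deleting the branch key from each service definition; a minimal sketch with illustrative names and commands:

```yaml
services:
  - type: web
    name: strapi-staging
    env: node
    # branch: staging   <- removed so preview environments build the PR's branch
    buildCommand: yarn build
    startCommand: yarn start
```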

Just to say, we’ve heard all this feedback - just watch this space :slight_smile:


As mentioned in some replies, the recommended approach described here limits production to exactly the same configuration as the staging environment. But I’ve worked around this with the following GitHub workflow:

name: Deploy (prod)
on:
  - workflow_dispatch
jobs:
  update:
    name: Deploy Production
    runs-on: ubuntu-latest
    steps:
      - name: Checkout production branch
        uses: actions/checkout@v2
        with:
          ref: production
      - name: Point production branch to main's head
        run: |
          git fetch origin
          git reset --hard origin/main
      - name: Set production specific values
        uses: mikefarah/yq@master
        with:
          cmd: |
            yq -i '(.services.[] | select(.name == "api")).plan |= "standard"' render.yaml &&
            yq -i '(.databases.[] | select(.name == "main")).plan |= "standard"' render.yaml
      - name: Check git status
        run: |
          git status
      - name: Commit to production branch
        uses: stefanzweifel/git-auto-commit-action@v4
        with:
          commit_message: Deploying to production
          branch: production
          push_options: "--force"
          skip_fetch: true
          skip_checkout: true

This action needs to be triggered manually from the GitHub interface. It assumes that there is a ‘production’ branch that needs to be deployed. When the action is triggered it will change the ‘production’ branch to point to the current ‘main’ branch.

It then uses yq to change values in the render.yaml and creates a new commit on the production branch, which is then force-pushed. This automation makes sure that the production branch (more of a tag, given this git structure) is always whatever is on ‘main’, but with some values in the render.yaml changed. You can use yq to change any value you like: plan, env variables, etc.

To complete my setup I’ve created a new team on Render that is just responsible for syncing from the production branch on GitHub. Having a dedicated team for this has the added benefit that we can separate permissions, notifications and log streams.


Ah, this approach works so well except for this drawback. I want my staging environment to be much cheaper than prod, for obvious reasons.