Best practices for staging & prod environments, like a pipeline, using blueprints

I’m using a render.yaml blueprint in a single repo containing a Rails application. I’m aiming to have staging and production environments, loosely mimicking a Heroku Pipeline. Our development process is fairly simple - a main branch into which feature branches are merged. main is automatically released to staging. Then we manually release to production (“promote”).

  • There is a single render.yaml in the repo containing the Rails app, Postgres DB, Redis & worker (sketched below)

  • In Render there is a staging blueprint instance based on this render.yaml

    • this is set up to automatically sync
    • the services in here are set to auto deploy
  • For production, I intend to do this again but…

    • choose not to automatically sync the blueprint
    • turn off auto deploy for the services
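
For context, here's a simplified sketch of roughly what such a render.yaml could look like. The service names, plans, and Puma/Sidekiq commands are illustrative assumptions, not our exact config:

databases:
  - name: myapp-db
    plan: starter

services:
  - type: redis
    name: myapp-redis
    plan: starter
    ipAllowList: []   # only reachable from other services in this account

  - type: web
    name: myapp-web
    env: ruby
    buildCommand: bundle install && bundle exec rails assets:precompile
    startCommand: bundle exec puma -C config/puma.rb
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: myapp-db
          property: connectionString
      - key: REDIS_URL
        fromService:
          type: redis
          name: myapp-redis
          property: connectionString

  - type: worker
    name: myapp-worker
    env: ruby
    buildCommand: bundle install
    startCommand: bundle exec sidekiq
    envVars:
      - key: REDIS_URL
        fromService:
          type: redis
          name: myapp-redis
          property: connectionString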

A few questions:

  1. Does this sound reasonable? We won’t have as nice a process as we currently do with Heroku Pipelines, but at least staging will be kept in sync. It will require some manual work to set up, and there is room for human error.

  2. How can we get the staging auto deploy to run after our CI has run in GitHub Actions?

  3. There is nothing environment specific in our blueprint etc. So our services are created with the same name. This can cause confusion when reviewing services and logs. I’d rather have services created by the staging blueprint instance have a “-staging” appended etc. Is the best way to achieve this to manually rename the services after they have been created by the blueprint’s sync?

  4. I’ve read a few hints that there’s work being done in this area. I suspect Heroku Pipeline-like features are often asked for. Any updates? Is the approach we’re taking here compatible with what is planned, or are there tweaks we can make now that will make migration to a future pipeline-like solution smooth?


Hi there!

There are a few approaches to managing services across environments on Render, so first let me see if I can address some of your questions.

There is nothing environment specific in our blueprint etc. So our services are created with the same name. This can cause confusion when reviewing services and logs. I’d rather have services created by the staging blueprint instance have a “-staging” appended etc. Is the best way to achieve this to manually rename the services after they have been created by the blueprint’s sync?

Since services of the same type can’t have the same name, when you create a new Blueprint instance using the same render.yaml, those services will be prepended with a short string to make them unique.

Generally the approach you’ve defined should work just fine, but as you noted, it’s not perfect. I’ll give you a couple other options to consider!

Alternative 1: One Blueprint instance, multiple services for each environment

In this solution, you’d only have one Blueprint instance, but you’d define your staging and production services all in the same render.yaml. This would allow you to override things like env vars, autodeploy, etc. wherever necessary, and you can adopt whatever naming convention works for you. This does require listing each service twice (with overrides) in your Blueprint spec, but it might be a better option since it 1) fixes the environment-specific naming issue and 2) allows you to override service configuration for staging.
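As a rough sketch of this approach (the names, plans, and RAILS_ENV values are illustrative assumptions, not required settings), the web service entries might look like:

services:
  - type: web
    name: myapp-web-staging
    env: ruby
    branch: main
    autoDeploy: true        # main is released to staging automatically
    plan: starter
    startCommand: bundle exec puma -C config/puma.rb
    envVars:
      - key: RAILS_ENV
        value: staging

  - type: web
    name: myapp-web-production
    env: ruby
    branch: main
    autoDeploy: false       # promote to production manually
    plan: standard
    startCommand: bundle exec puma -C config/puma.rb
    envVars:
      - key: RAILS_ENV
        value: production

The worker, Redis, and database entries would be duplicated the same way, with each environment’s services wired to their own database/Redis via fromDatabase/fromService.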

Alternative 2: Use Teams to differentiate staging and production services

In this solution, you’d have a team for staging and one for production. Services are allowed to have the same name when they’re separated by teams. Then in order to override environment variables, you’d have your services use env vars from a particular environment group, which would be named the same in both teams, but have different values for the variables.
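For example (the group name "app-settings" here is just a placeholder), the same render.yaml in each team could pull config from an env group of the same name, with the values differing per team:

services:
  - type: web
    name: myapp-web
    env: ruby
    startCommand: bundle exec puma -C config/puma.rb
    envVars:
      - fromGroup: app-settings   # group exists in both teams, with different values per team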

Hope that helps! As to this…

I’ve read a few hints that there’s work being done in this area. I suspect Heroku Pipeline-like features are often asked for. Any updates?

Yes! I am one of the engineers working on this feature, and what we’re developing will allow users to group related services into projects that consist of multiple environments. I can’t get too into the nitty-gritty details since it’s still in development, but I can absolutely confirm that this is something we’re actively working on. 🙂

Is the approach we’re taking here compatible with what is planned, or are there tweaks we can make now that will make migration to a future pipeline-like solution smooth?

A primary goal for introducing projects and environments is to make it easy for existing users to transfer their services into a project. After all, lots of folks have been asking for this, including myself!! 🙋‍♀️🙋‍♀️🙋‍♀️ From what you’ve described, it doesn’t sound like you’d have any trouble migrating to projects.

Hope this helps!

Annie


Great. Thank you Annie. Good to confirm that the current approach is OK. I may try alternative no.1 out too. Looking forward to the projects & environments features.


I’m just going to add a 3rd option here as well, one that we do see customers deploying and that might be of interest to the wider audience.

Alternative 3: Different repo and different render.yaml for environments

This one can be tricky to explain but I’ll try and break it down into small pieces.

A question we often get is: how can I have production, staging, and preview environments, but use different plans for production vs. staging/preview, and possibly different settings for staging/previews?

So here’s how you can achieve this.

Because a render.yaml can have explicit repo/branch attributes, you can use this to your advantage.

First up, I’m going to make some assumptions here as well as set up a scenario:

  • acme/mywidgetsite is the repo where development takes place
  • master branch is where PRs are merged into
  • render.yaml in this repo is set up WITHOUT repo/branch attributes

Given this, PRs to this repo would create preview environments; I’ll also assume this is probably what you already have set up now. This is going to become your new staging environment.

So now, what you do is create a new repo in GitHub - call it mywidgetsite-production for argument’s sake. All this repo needs is a render.yaml and probably a README.md to explain why it exists in the first place. You can use your original render.yaml as a starting point, but now any service that didn’t specify a repo/branch gets one:

previewsEnabled: false
services:
  - name: mywidgetsite-production
    repo: https://github.com/acme/mywidgetsite   # point explicitly back at the original repo
    branch: master
    autoDeploy: false
    ...
...

Now, when you create a new blueprint from the dashboard for your production services (you can also use Annie’s alternative 2 from above here as well to separate production/staging), this blueprint will ALWAYS deploy the master branch of the original repo, won’t have previews enabled, and will only deploy when you trigger a deployment yourself manually.

Once you’ve got your separate blueprint instances set up like this (you may have to move a custom domain, DB backup, etc.), you can make changes to the render.yaml in the original repo (adding previewPlans, staging config, etc.) and they would only apply to the preview environments created from what has now become your staging environment. When PRs are merged, they would be auto-deployed from the master branch to your staging services, and then when you want them in production you would have to manually deploy (or hit the deploy hook URLs for) the services created by the other blueprint.
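As a concrete example of that manual step (the secret names and workflow layout here are assumptions, not Render-provided config), a manually triggered GitHub Actions workflow that hits the production services’ deploy hooks could look like:

# .github/workflows/promote-production.yml
name: Promote to production
on:
  workflow_dispatch: {}   # run by hand from the Actions tab once staging looks good

jobs:
  promote:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger production web deploy
        run: curl --fail --silent --show-error "${{ secrets.RENDER_PROD_WEB_DEPLOY_HOOK }}"
      - name: Trigger production worker deploy
        run: curl --fail --silent --show-error "${{ secrets.RENDER_PROD_WORKER_DEPLOY_HOOK }}"

The same pattern (autoDeploy: false plus a deploy-hook call at the end of your CI workflow) is also one way to make staging deploys wait for GitHub Actions to pass, as asked in the original question.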
