Our PostgreSQL instance is stuck on “Creating”. Is there any incident on your end right now? We have already retried multiple times, but the result has been the same for about 3 hours now.
Deployment Type: Blueprint
PostgreSQL Type: Standard
Manual Sync Started at 4:02 PM GMT+8
Current Time: 5:07 PM GMT+8
Looking forward to your response.
Just testing a create in Singapore myself, I do note that you’ve created a new database in a different region to your web services. To pre-empt your next question: that’s not possible, as we don’t have cross-regional network access yet.
Also, service-specific questions might be best served by contacting us directly from the Dashboard (the “Contact Support” link at the very bottom).
DB creation in Singapore should be back to normal now.
We’ll take note of the “Contact Support” link.
I’ve verified that the DB service in Singapore is back to normal. I had created a database in a different region to test whether the issue was region-specific; we’ll be reverting that.
May we have more info on this incident? We checked the service status dashboard, but what we experienced yesterday was not reflected there.
Impact was tiny, pretty much limited to your database creation and my test one in the Singapore region at that time.
Thanks John. Are you able to tell us which cloud infrastructure provider Render’s Singapore region runs on? We’d like to colocate our hosted database instance with the same provider in the same region to minimise latency.
As a point of feedback: the reason we’re not able to use Render’s PostgreSQL offering is the relatively minuscule connection limit, even with the largest instance size. Each instance of our app has a maximum connection pool of 100, and we have half a dozen instances running at a time, so that’s a maximum of 600 connections under peak workloads. We’re running this many connections off of a meagre 16 GiB RDS instance right now.
Are you able to provide some insight into why even the “Contact Us for Custom Plan” tier caps out at 397 connections? This seems like an oddly low connection ceiling for real workloads on PostgreSQL servers with 16 GiB of RAM or more.
So we’re running in AWS ap-southeast-1.
Have you considered deploying PgBouncer to better manage your connection pooling? At these connection counts it often works wonders, and there’s really no need to have a Postgres service capable of that many direct connections.
Even if you look at services like Heroku and their massive fleets of Postgres, they’re topping out at 500 connections even on their MASSIVE 768GB plan!
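To make the PgBouncer suggestion concrete, here’s a minimal sketch of what the config might look like for your workload. The host, database name, and paths are placeholders, not your actual setup, and the pool sizes are illustrative starting points you’d want to tune:

```ini
; pgbouncer.ini -- illustrative sketch; hostnames, paths, and sizes are placeholders
[databases]
; clients connect to "appdb" on PgBouncer, which fans in to the real server
appdb = host=your-postgres-host port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets ~600 client connections share far fewer
; server connections, since a server slot is held only for the
; duration of each transaction
pool_mode = transaction
max_client_conn = 600
default_pool_size = 20
```

Your app instances would then point their connection string at port 6432 instead of Postgres directly; note that transaction pooling is incompatible with session-level features like prepared statements held across transactions, so check your driver settings first.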
I find this article https://www.enterprisedb.com/postgres-tutorials/why-you-should-use-connection-pooling-when-setting-maxconnections-postgres to be a great reference when talking about connection counts, as it really covers the topic with data. I’ll also quote the closing line here:
A connection pooler is a vital part of any high-throughput database system, as it eliminates connection overhead and reserves larger portions of memory and CPU time for a smaller set of database connections, preventing unwanted resource contention and performance degradation.
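As a rough way to reason about how small the server-side pool can actually be, the PostgreSQL wiki’s commonly cited sizing heuristic (pool ≈ cores × 2 + effective spindles) is a useful back-of-the-envelope check. This is a sketch of that heuristic, not a Render-specific recommendation, and the example hardware figures are assumptions:

```python
def suggested_pool_size(core_count: int, effective_spindle_count: int = 1) -> int:
    """Back-of-the-envelope pool size from the PostgreSQL wiki heuristic:
    (core_count * 2) + effective_spindle_count. SSD-backed instances are
    usually treated as having an effective spindle count of 1."""
    return core_count * 2 + effective_spindle_count

# Hypothetical 4-vCPU, 16 GiB instance on SSD storage:
print(suggested_pool_size(4))  # -> 9
```

The point being: even a few hundred client connections funneled through PgBouncer can often be served well by a server-side pool an order of magnitude smaller than the client count.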