[SOLVED] PostgreSQL: Running pg_restore with 300,000+ rows times out in Frankfurt region

I’m moving from Heroku to Render, and I’d like to bring my data with me :slight_smile:

I’ve tried psql with an SQL export, and pg_restore with an SQL/tar dump. Both start, run for a while, and then time out when a table with ~300,000 records is being restored. When using pg_restore I get this error:

error: error returned by PQputCopyData: SSL SYSCALL error: Operation timed out

The command I’m running is:

pg_restore --no-owner --password -h DATABASE_HOST -U USERNAME -d DATABASE_NAME dump.sql

The dump.sql is ~600 MB, and the database is on a Standard plan in the Frankfurt region. I’ve checked statement_timeout and idle_in_transaction_session_timeout; both are set to zero (i.e. disabled).
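For reference, those values can be checked from a psql session against the target database:

SHOW statement_timeout;
SHOW idle_in_transaction_session_timeout;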

Am I missing a configuration somewhere that would allow pg_restore to finish?

(edit: added Render plan and statement_timeout + idle_in_transaction_session_timeout checks)

Found a solution that, at least for my database, worked :tada:

I changed the dump format to directory (--format=d below) when running pg_dump. With the directory format, the path passed to -f becomes a directory of files rather than a single file, even though I kept the dump.sql name. My pg_dump command (run against Heroku) looks like this:

pg_dump -U DB_USER_NAME -d DB_NAME -n 'public' -f "path/to/dump.sql" --no-owner --no-acl --format=d
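As an optional sanity check before restoring, the dump’s table of contents can be listed without touching the target database:

pg_restore --list path/to/dump.sql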

Before running pg_restore on a fresh instance of PostgreSQL at Render, I tweaked a few settings:

SET work_mem TO '1GB';
SET maintenance_work_mem TO '1GB';
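A note on scope: SET only applies to the session it is run in, so for these values to be picked up by the separate connection that pg_restore opens, the per-database form may be needed instead, for example:

ALTER DATABASE DB_NAME SET work_mem TO '1GB';
ALTER DATABASE DB_NAME SET maintenance_work_mem TO '1GB';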

Then the restore itself (remember --format=d):

pg_restore --verbose --no-owner --format=d -h DB_HOST -U DB_USER_NAME -d DB_NAME dump.sql
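Since the dump is in directory format, pg_restore can also run the restore in parallel via the --jobs flag; this is optional, but can help with large tables (the 4 below is just an example worker count):

pg_restore --verbose --no-owner --format=d --jobs=4 -h DB_HOST -U DB_USER_NAME -d DB_NAME dump.sql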

Lastly, I reverted the settings to their defaults (not sure if these vary from plan to plan):

SET work_mem TO '1MB';
SET maintenance_work_mem TO '64MB';
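If the per-database overrides shown above were used instead of plain SET, they can be removed again with:

ALTER DATABASE DB_NAME RESET work_mem;
ALTER DATABASE DB_NAME RESET maintenance_work_mem;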

I was on a standard-0 plan at Heroku, and am now on a Standard plan at Render. All commands were run against instances running PostgreSQL 12.x :robot:



Marc,
Great research, and thank you for sharing your findings with the community so that everyone can benefit.

John B

I’m also experiencing this issue, and I tried the proposed solution but ran into some problems. What’s causing this? Is there anything that can be done on Render’s side to make this process easier? Importing 300k+ records into an app seems like it would be a common task.

I have created a separate post, as this one has been solved, here…