
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
149 views
Copy postgis layer from S3 to Heroku
I have a dump of a PostGIS layer (layer.dump) which I am trying to add to my Heroku database (mydatabase). The layer is stored on S3 (https://s3.amazonaws.com/layer.dump). I would like to add the layer to the Heroku database and previously used heroku pgbackups:restore DATABASE 'https://s3.amazonaws.com/layer.dump'. However, the new heroku pg:backups restore 'https://s3.amazonaws.com/layer.dump' DATABASE deletes all data from the target database before restoring the backup (https://devcenter.heroku.com/articles/heroku-postgres-backups). Is there still a way to restore only a single table and leave the remaining tables in the database untouched?
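Editor's note: one workaround that avoids the full wipe is to skip heroku pg:backups:restore and run pg_restore yourself, limited to the one table. A minimal sketch, assuming the dump is a custom-format pg_dump archive and that the layer's table is literally named layer (a placeholder):

curl -o layer.dump https://s3.amazonaws.com/layer.dump
# restore only the named table; --clean drops just that table before recreating it
pg_restore --no-owner --no-acl --clean --table=layer --dbname="$DATABASE_URL" layer.dump

The rest of the database is left alone because pg_restore only processes the objects selected by --table.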
Anne (143 rep)
Sep 2, 2015, 06:44 PM • Last activity: Jul 18, 2025, 07:06 PM
1 vote
1 answer
298 views
Reproduce Heroku's Postgres WAL metric with pg_ls_waldir()
Is it possible to reproduce Heroku's WAL usage metric by using the Postgres query SELECT sum(size) FROM pg_ls_waldir();, which returns the size of the WAL directory? I asked Heroku support, but they haven't been able to answer; they said the WAL drive is 64 GB in one ticket and 112 GB in another, and the calculation is off when using either of these values. We have a large migration and it could reach the Heroku WAL threshold. I think the Heroku metric is produced every minute or so via the logging system, and reproducing it via a query would make things easier.
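Editor's note: a slightly more filtered version of that query is sketched below. Note that pg_ls_waldir() is restricted to superusers and members of pg_monitor, which the default Heroku credential may not be, and even then the result will not necessarily match Heroku's figure, which may be measured at the volume level or include retained segments.

SELECT count(*)                  AS wal_segments,
       pg_size_pretty(sum(size)) AS wal_size
FROM pg_ls_waldir()
WHERE name ~ '^[0-9A-F]{24}$';   -- count only WAL segment files, not *.history or temp files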
Todd (205 rep)
Mar 1, 2022, 01:55 AM • Last activity: May 8, 2025, 03:08 AM
5 votes
3 answers
6186 views
How to anonymize pg_dump output before it leaves server?
For development purposes, we dump the production database to local. It's fine because the DB is small enough. The company's growing and we want to reduce risks. To that end, we'd like to anonymize the data before it leaves the database server. One solution we thought of would be to run statements prior to pg_dump, but within the same transaction, something like this:

BEGIN;
UPDATE users
   SET email = 'dev+' || id || '@example.com'
     , password_hash = '/* hash of "password" */'
     , ...;
-- launch pg_dump as usual, ensuring a ROLLBACK at the end
-- pg_dump must run with the *same* connection, obviously
-- if not already done by pg_dump
ROLLBACK;

Is there a ready-made solution for this? Our DB is hosted on Heroku, and we don't have 100% flexibility in how we dump. I searched for [postgresql anonymize data dump before download](https://duckduckgo.com/?q=postgresql+anonymize+data+dump+before+download) and variations, but I didn't see anything highly relevant.
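Editor's note: one complication with the in-transaction idea is that pg_dump opens its own session, so it would not see uncommitted UPDATEs made from another connection. A different sketch that stays entirely server-side is to materialize anonymized copies into a separate schema and dump only that schema; the schema name "masked" and the column list are placeholders:

CREATE SCHEMA IF NOT EXISTS masked;
DROP TABLE IF EXISTS masked.users;
CREATE TABLE masked.users AS
SELECT id,
       'dev+' || id || '@example.com' AS email,
       md5('password')                AS password_hash   -- stand-in value, not a real password hash
FROM public.users;
-- then run pg_dump --schema=masked ... instead of dumping public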
François Beausoleil (1473 rep)
Mar 23, 2017, 07:06 PM • Last activity: Jan 18, 2025, 01:55 AM
2 votes
1 answer
5438 views
View PostgreSQL column constraints
I applied a constraint but forgot its name. Now I want to drop it using ALTER TABLE [TABLENAME] DROP CONSTRAINT [CONSTRAINTNAME]. If there is a way to drop all constraints from a column, that would work too. I cannot use a psql command.
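Editor's note: the constraint names can be recovered with a plain query against the system catalogs, with no psql meta-command needed. A minimal sketch with a placeholder table name:

SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'my_table'::regclass;

Each conname returned can then be used in ALTER TABLE ... DROP CONSTRAINT.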
No Name (25 rep)
May 2, 2020, 09:49 AM • Last activity: Dec 9, 2024, 01:48 PM
1 vote
1 answer
407 views
Postgresql super slow on server - super fast locally?
I am hosting a PostgreSQL database on Heroku with a standard-0 plan. My database has a table called transactions, which contains ~18 million rows:
SELECT COUNT(*) FROM transactions;
  count   
----------
 17927768
(1 row)
Over the past months I've been noticing that the database is getting slower and slower. I am now at the point where I receive timeouts from my applications because even simple queries take longer than 30 seconds. While trying to find out what is going on, I noticed something weird. On the hosted server, a simple query like:
EXPLAIN ANALYZE SELECT COUNT(*) FROM transactions WHERE partner_id = 1;
---------------------------------------------------------------------------------
 Finalize Aggregate  (cost=405691.73..405691.74 rows=1 width=8) (actual time=34941.061..34961.256 rows=1 loops=1)
   ->  Gather  (cost=405691.63..405691.73 rows=1 width=8) (actual time=34940.913..34961.247 rows=2 loops=1)
         Workers Planned: 1
         Workers Launched: 1
         ->  Partial Aggregate  (cost=404691.63..404691.63 rows=1 width=8) (actual time=34924.080..34924.081 rows=1 loops=2)
               ->  Parallel Seq Scan on transactions  (cost=0.00..400083.56 rows=9216145 width=0) (actual time=77.981..34179.970 rows=7801236 loops=2)
                     Filter: (partner_id = 1)
                     Rows Removed by Filter: 1164606
 Planning Time: 0.755 ms
 JIT:
   Functions: 10
   Options: Inlining false, Optimization false, Expressions true, Deforming true
   Timing: Generation 1.912 ms, Inlining 0.000 ms, Optimization 30.606 ms, Emission 119.538 ms, Total 152.057 ms
 Execution Time: 35190.328 ms
(14 rows)
takes **up to 35 seconds**. But when I download the production dump to my machine (an older ThinkPad), the same query takes **less than a second**:
EXPLAIN ANALYZE SELECT COUNT(*) FROM transactions WHERE partner_id = 1;
---------------------------------------------------------------------------------
 Finalize Aggregate  (cost=251757.89..251757.90 rows=1 width=8) (actual time=669.234..674.362 rows=1 loops=1)
   ->  Gather  (cost=251757.67..251757.88 rows=2 width=8) (actual time=669.008..674.348 rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=250757.67..250757.68 rows=1 width=8) (actual time=638.447..638.448 rows=1 loops=3)
               ->  Parallel Index Only Scan using index_transactions_on_partner_id on transactions  (cost=0.44..234528.06 rows=6491844 width=0) (actual time=0.061..405.148 rows=5199597 loops=3)
                     Index Cond: (partner_id = 1)
                     Heap Fetches: 0
 Planning Time: 0.231 ms
 JIT:
   Functions: 11
   Options: Inlining false, Optimization false, Expressions true, Deforming true
   Timing: Generation 3.109 ms, Inlining 0.000 ms, Optimization 0.826 ms, Emission 11.406 ms, Total 15.342 ms
 Execution Time: 676.047 ms
(14 rows)
One can also see that the hosted PostgreSQL uses a parallel seq scan, while the local instance uses a parallel index-only scan. How is this possible? What do I need to do to get somewhere near this performance on the hosted server?

## Edit 1: More information regarding 'bloat'

I tried to investigate the possible bloat and I received this for the transactions table:
type   | schemaname |  object_name  | bloat |   waste    
-------+------------+--------------+-------+------------
 table | public     | transactions |   1.3 | 571 MB
And this:
schema |             table              | last_vacuum | last_autovacuum  |    rowcount    | dead_rowcount  | autovacuum_threshold | expect_autovacuum 
--------+--------------------------------+-------------+------------------+----------------+----------------+----------------------+-------------------
 public | transactions                   |             |                  |     17,949,072 |            600 |      3,589,864       |
These queries are generated by Heroku's built-in tools to analyze bloat, as described here. A dead rowcount of 600 compared to the 17 million rows looks negligible - but why is the waste so high (571 MB)? Could this be the source of the problem? It seems that a vacuum was never performed.
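Editor's note: since the planner's choice between the seq scan and the index-only scan depends on up-to-date statistics (and on the instance's cost settings), one cheap check that fits this situation is whether the table has ever been vacuumed or analyzed at all. A sketch, using the table name from the question:

SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'transactions';

If last_analyze and last_autoanalyze are both empty, a manual ANALYZE transactions; is an inexpensive first experiment.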
zarathustra (113 rep)
Feb 2, 2024, 08:53 PM • Last activity: Feb 3, 2024, 09:25 PM
3 votes
0 answers
416 views
Best method for daily incremental copy of Postgres database
I'm trying to figure out the best plan of attack for making a nightly copy of our Heroku PostgreSQL production database to our local development environment. The database is rather large and grows at roughly 250,000 rows per day. Ideally the solution would be an incremental method that only restores the previous day's changes. Our development environment is also Postgres, running on OS X. Any and all help is appreciated.
contact411 (31 rep)
Sep 15, 2015, 05:34 AM • Last activity: Jul 29, 2023, 07:16 AM
0 votes
0 answers
1313 views
Can I see the remote IP address for failed PostgreSQL logins?
We're running PostgreSQL on Heroku, with all the application logs going to Logentries, and I'm seeing around 4,000 failed login attempts per hour for a specific set of credentials:
2019-07-30T09:08:45+00:00 app postgres.3863 - - [ONYX] [6-1] sql_error_code = 28P01 FATAL: password authentication failed for user "xxxxxxxx"
2019-07-30T09:08:45+00:00 app postgres.3863 - - [ONYX] [6-2] sql_error_code = 28P01 DETAIL: Role "xxxxxxxx" does not exist.
The credential in question (redacted here to xxxxxxxx) belonged to a former employee, who I think has left some sort of monitoring or dashboard process running on a private server or AWS account somewhere (yes, this is bad. We know. That's not what I'm asking about.) There's nothing on any of our instances that's responsible for this traffic, and I'm trying to identify where it's coming from. The problem is that the Heroku logs don't include any remote IP information or anything else I could use to identify the source of these login requests, and because the credentials have been revoked, they never establish a connection that would show up in pg_stat_activity. Is there any way I can get the remote address of these requests from the PostgreSQL server stats or the Heroku logs, so I can try to work out where the rogue connection attempts are coming from and hopefully get it shut down?
Dylan Beattie (129 rep)
Jul 30, 2019, 10:22 AM • Last activity: Apr 4, 2023, 03:08 AM
13 votes
5 answers
4911 views
How can I export a subset of table data from a production database into my local testing database?
We have a relatively big production Postgres-based DB: ~20 GB. The PostgreSQL database is hosted on Heroku. I would like to copy a small subset of the table data to my local database so I can run some tests on it without having to work on production. I do not want to generate sample data myself, but rather use the data which already exists in the production environment. ~100 rows from each table in the database would be sufficient. Is there an easy way to accomplish this?
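Editor's note: for a per-table approach, one sketch is to export a sample with psql's \copy and load it locally; the table name is a placeholder, and parent tables must be loaded before children so foreign keys are satisfied:

-- against the production database
\copy (SELECT * FROM some_table ORDER BY id LIMIT 100) TO 'some_table_sample.csv' WITH (FORMAT csv, HEADER)

-- against the local database
\copy some_table FROM 'some_table_sample.csv' WITH (FORMAT csv, HEADER)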
jottr (243 rep)
Nov 26, 2012, 01:05 AM • Last activity: Mar 24, 2023, 02:01 PM
0 votes
0 answers
59 views
How is pricing calculated for the Heroku Postgres add-on
Heroku has now upgraded its plans, so I have a question about the Heroku Postgres Mini plan. I have a site which uses a Heroku Postgres database. The site is just for my practice, and I work on or use it in my free time. The Heroku upgrade notice for the Mini plan reads: "Heroku prorates costs to the second. For example, if you have 1 Mini Postgres database provisioned for 1 hour, it costs you ~$0.004." What exactly does "provisioned" mean here? Since I have a Postgres database add-on, does that mean it will run all 24 hours? Or do the hours only count when I open my website, which fetches data from the DB, or when I connect to my database? Please help!
Joy (1 rep)
Dec 4, 2022, 01:45 PM
0 votes
1 answer
99 views
Problem seeding Heroku database
I'm trying to seed my Heroku production database and got the following error message:

> ERROR: insert or update on table "Posts" violates foreign key constraint "Posts_userId_fkey"

The post migration file:
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.createTable('Posts', {
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER
      },
      userId: {
        type: Sequelize.INTEGER,
        allowNull: false,
        references: {
          model: 'Users'
        }
      },
      content: {
        type: Sequelize.TEXT,
        allowNull: false
      },
      title: {
        type: Sequelize.STRING(50),
        allowNull: false
      },
      photo: {
        type: Sequelize.TEXT,
        allowNull: false
      },
      categoryId: {
        type: Sequelize.INTEGER,
        allowNull: false,
        references: {
          model: 'Categories'
        }
      },
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE,
        defaultValue: Sequelize.fn('now')
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE,
        defaultValue: Sequelize.fn('now')
      }
    });
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.dropTable('Posts');
  }
};
user model file:
User.associate = function(models) {
    // associations can be defined here
    User.hasMany(models.Post, { foreignKey: 'userId' })
  };
my config file:
production: {
    use_env_variable: 'DATABASE_URL',
    dialect: 'postgres',
    seederStorage: 'sequelize',
    dialectOptions: {
      ssl: {
        require: true,
        rejectUnauthorized: false
      }
    }
  }
my post seeders:
return queryInterface.bulkInsert(
      "Posts",
      [
        {
          userId: 4,
          content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
          title: 'First blog',
          photo: 'Image',
          categoryId: 1
        },
        {
          userId: 4,
          content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
          title: 'Cat Ipsum',
          photo: 'Image',
          categoryId: 2
        },
        {
          userId: 4,
          content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
          title: 'Cheesecake Party',
          photo: 'https://media.istockphoto.com/photos/cheesecake-slice-with-strawberries-picture-id1205169550?k=20&m=1205169550&s=612x612&w=0&h=QqJDIpCEpGEXBFU2c-aoZKEgtU5tfFGxKxrBu1bHYww= ',
          categoryId: 2
        },
        {
          userId: 5,
          content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
          title: 'Tell Me A Joke',
          photo: 'Image',
          categoryId: 1
        },
      ],
      {}
    );
  },
Why is it producing that message?
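Editor's note: the error itself means the seeded rows reference userId values (4 and 5) that do not exist in the Users table of the production database at insert time; auto-increment ids rarely line up across environments. A quick check, assuming the table is named "Users" as in the migration:

SELECT id FROM "Users" WHERE id IN (4, 5);

If that returns fewer than two rows, the seeder needs to either create those users first or look up their ids instead of hard-coding them.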
skyeNoLimit (1 rep)
Feb 28, 2022, 11:38 AM • Last activity: Feb 28, 2022, 01:27 PM
1 vote
2 answers
445 views
Approaching row limit for hobby-dev database on Heroku app
I just got an email from Heroku about my database nearing its row limit, but I'm confused because my database has almost nothing in it. heroku pg:info returns the following:
=== DATABASE_URL
Plan:                  Hobby-dev
Status:                Available
Connections:           1/20
PG Version:            10.18
Created:               2018-08-11 04:56 UTC
Data Size:             13.7 MB/1.00 GB (In compliance)
Tables:                14
Rows:                  7290/10000 (In compliance, close to row limit)
Fork/Follow:           Unsupported
Rollback:              Unsupported
Continuous Protection: Off
Add-on:                postgresql-xxx-xxxxxx
The data size doesn't look wrong, but the rows are for some reason way over what I would expect. If I check my console and do a Model.count for everything, it doesn't even begin to approach 7290. Why is this happening, and is it possible (or even advisable) to *clear* some of these rows?
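Editor's note: Heroku's row figure is reportedly an estimate rather than an exact count, so one way to see which tables are responsible is to check the per-table live-tuple estimates. A sketch:

SELECT relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

Tables created by libraries (sessions, audit/log tables, schema_migrations-style bookkeeping) often account for rows that a Model.count over application models never touches.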
calyxofheld (111 rep)
Oct 27, 2021, 11:38 PM • Last activity: Dec 5, 2021, 04:45 PM
1 vote
1 answer
77 views
Heroku Postgres: How to replay statement logs to another server
We're using Heroku Postgres v12 for a production web app and plan to upgrade to v13. I want to be able to replay statements from the previous day's log to:

1. warm the cache on the upgraded instance (Heroku will switch its hardware)
2. simulate production load on testing environments

I tried [pgreplay](https://github.com/laurenz/pgreplay) and [pgreplay-go](https://github.com/gocardless/pgreplay-go/), but both require me to set a custom log_line_prefix, which I can't do on Heroku Postgres. What other options am I missing? Given my tasks and the nature of the web app, we can drop any data/schema modification queries and only replay SELECTs; I also don't care much about the order.
scorpp (173 rep)
Jul 29, 2021, 08:58 AM • Last activity: Sep 20, 2021, 07:38 PM
-1 votes
1 answer
56 views
Seeking advice on a DB project: hosting services
I am a new web developer who just finished a full stack boot camp. This question is to get advice and suggestions in building a free web-based database site for archaeology research. **I would appreciate suggestions for free or inexpensive hosting services (I am currently only familiar with Heroku and JawsDB).** For a side project to help a friend and build up my portfolio I want to build a web-based database for my friend’s doctoral research project. They are an archaeologist studying a Maya road and have data on the excavation site. Currently this data is in a number of different excel sheets and folders containing many photographs (images that would go in the DB). Our goal is to consolidate this for storage and to streamline querying so that they can look at patterns in the data and write analysis of those patterns. The size is about 150-200GB of data right now, but ideally we would like to build it so that it could scale - future researchers could insert data, print queried tables, for example. With a project of that relatively small size (~200GB but would scale), **what would your suggestion be to host something like this? Is there a reliable free option with this much data that we could use for hosting?** Probably no more than 10-15 people would be using it at the same time. This is my first question on Database Admins and my first solo project after bootcamp so I really appreciate any advice you all have. Just trying to grow here and make a cool research tool. EDITS: This was flagged as a duplicate because of the MySQL/NoSQL question, which has been removed. I'm wondering about hosting services. If you consider this off-topic, I would appreciate a nudge in the right direction as to where to go to ask about hosting service suggestions.
deweyD (3 rep)
Feb 11, 2021, 01:08 AM • Last activity: Feb 11, 2021, 12:49 PM
0 votes
1 answer
270 views
Detecting running pg_dump conflicting with schema changes
I use Heroku PostgreSQL and their PGBackups for my production environment. I have found, for reasons that I'm not quite clear on, that at least some schema changes do not run correctly while the backups are happening. According to these [Heroku docs](https://help.heroku.com/7U1BTYHB/how-can-i-take-a-logical-backup-of-large-heroku-postgres-databases), it seems like PGBackups uses pg_dump internally, though there may be additional things going on as well. The docs indicate that the output is essentially the same as this pg_dump command.
pg_dump -F c --no-acl --no-owner --quote-all-identifiers $DATABASE_URL
Attempting to run migrations during the backup window has repeatedly caused me some pain, as the migrations lock up and hang. To mitigate this, I'd like to detect when the backups are running and avoid the migrations by having a pre-check in the script that runs them. I can do this by checking with Heroku's CLI, and that will work for my current specifics, but I'm wondering if something less specific to Heroku might be reasonable, by watching for a running pg_dump instead. Is there a good way to detect whether a pg_dump that may block schema changes is currently running?
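Editor's note: pg_dump normally identifies itself through application_name, so a Heroku-agnostic pre-check can be a simple query against pg_stat_activity. A sketch; it will miss a dump whose application_name was overridden by the caller:

SELECT pid, usename, application_name, state, query_start
FROM pg_stat_activity
WHERE application_name = 'pg_dump';

If that returns any rows, the migration script can wait or abort.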
Ryan H (101 rep)
Jan 29, 2021, 06:54 PM • Last activity: Feb 1, 2021, 07:40 AM
15 votes
4 answers
7969 views
Migrate Heroku database to Amazon RDS with minimum downtime
I have a Heroku Postgres database and want to migrate it to Amazon RDS to save cost. What's a way to do so with minimum downtime? Usually this involves replicating the database in real time and then promoting the replica as the main DB. I know I can use a follower database to migrate within Heroku, and I can use a read replica to migrate within Amazon RDS. Is there a similar method to replicate the Heroku DB to a database that lives in my own Amazon RDS?
Byron Singh
Jan 7, 2014, 12:17 AM • Last activity: Nov 12, 2020, 05:23 PM
1 vote
1 answer
2525 views
pg_restore: [archiver] did not find magic string in file header: please check the source URL and ensure it is publicly accessible
I have been trying to push a dump file from my local PostgreSQL DB (which I uploaded to my Google Drive and made publicly accessible) into my remote Heroku DB with the following command:

heroku pg:backups:restore 'https://drive.google.com/open?id=dump_id_link_here' DATABASE_URL

I am already logged in to my Heroku app from the terminal on which I run the command, but I got the same error twice. I have been searching online and found threads such as https://dba.stackexchange.com/questions/111904/pg-restore-archiver-did-not-find-magic-string-in-file-header, but I could not see the connection between the two, since I am very new to PostgreSQL. I hope you can point me towards the issue. Very much appreciated.

Starting restore of https://drive.google.com/open?id=dump_id_link_here to postgresql-symmetrical-52186... done
Stop a running restore with heroku pg:backups:cancel.

Restoring... !
▸ An error occurred and the backup did not finish.
▸
▸ waiting for restore to complete
▸ pg_restore finished with errors
▸ waiting for download to complete
▸ download finished with errors
▸ please check the source URL and ensure it is publicly accessible
▸ Run heroku pg:backups:info r002 for more details.

=== Backup r002
Database:    BACKUP
Started at:  2019-09-14 21:14:26 +0000
Finished at: 2019-09-14 21:14:27 +0000
Status:      Failed
Type:        Manual
Backup Size: 0.00B (0% compression)

=== Backup Logs
2019-09-14 21:14:27 +0000 pg_restore: [archiver] did not find magic string in file header
2019-09-14 21:14:27 +0000 waiting for restore to complete
2019-09-14 21:14:27 +0000 pg_restore finished with errors
2019-09-14 21:14:27 +0000 waiting for download to complete
2019-09-14 21:14:27 +0000 download finished with errors
2019-09-14 21:14:27 +0000 please check the source URL and ensure it is publicly accessible
coredumped0x (121 rep)
Sep 14, 2019, 10:15 PM • Last activity: Sep 2, 2020, 11:00 PM
2 votes
0 answers
29 views
Postgres on Heroku: Is it possible to prevent users from logging in to the primary host?
I have a Postgres 12.3 server running in a primary-follower configuration on Heroku. I added a few credentials to the primary instance, which were replicated to the follower. I would like to prevent users from logging into the primary host. I understand this is generally possible with pg_hba.conf, but I do not believe I have access to this file on Heroku. Is it possible to achieve this on Heroku?
AmitA (121 rep)
Aug 18, 2020, 10:40 AM
0 votes
1 answer
412 views
MySQL id not saving in order (Laravel in production)
I'm really confused about this. So, I created a Laravel app and hosted it on Heroku. I'm using the ClearDB add-on to be able to use MySQL. The problem is: when I save a new User in my DB, it is not being saved in ID order. I got id 1, then id 11 for the second record, then id 21 for the third... Then I deleted them and tried again, and I got id 31. I think there's a pattern, huh? It's going +10, +10... But why? Look, the code I'm using to save a new record is only: DB::table('registers')->insert($registerData); In the $registerData variable, I have only the following data: name, e-mail, a URL for the picture, and birth date. Even my "migrations" table is in this order: 1, 11, 21, 31...
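Editor's note: a gap like this is usually not Laravel's doing but the MySQL server variable auto_increment_increment, which ClearDB reportedly sets to 10 to support its multi-master replication. It can be checked with:

SHOW VARIABLES LIKE 'auto_increment%';
-- auto_increment_increment = 10 would explain ids 1, 11, 21, 31, ...

On a plan where that variable cannot be changed, the ids stay sparse but remain unique, which is normally harmless.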
GabrielLearner (3 rep)
Jun 22, 2020, 11:58 PM • Last activity: Jun 23, 2020, 12:39 AM
0 votes
1 answer
149 views
Heroku Postgres SQL changes not reflected in DB
I wrote this program:

import psycopg2

try:
    # connect to db
    cursor = connection.cursor()
except Exception as e:
    raise e
else:
    print("connection Success")

cmd = ""
while(True):
    cmd = input("Enter cmd")
    if cmd == "CLOSE":
        break
    try:
        cursor.execute(cmd)
        pass
    except Exception as e:
        print("Err")
        print(e)
        pass
    else:
        record = ""
        try:
            record = cursor.fetchall()
        except Exception as e:
            print(e)
        print("Success ")
        print(record)
        pass
    finally:
        pass

cursor.close()
connection.close()
print("all close")

Weirdly, it worked fine yesterday for some time. However, changes are now not reflected in the DB (when checked through a Heroku dataclip) but can be seen through the Python app before closing the session. I am sure I haven't crossed any limit of the hobby-dev plan. Curiously enough, the primary key still increments. Output example: (screenshot omitted). This query worked a while ago: insert into info(name, nickname, ismetro) values ('abc', 'xyx', 'n')
No Name (25 rep)
May 1, 2020, 01:12 PM • Last activity: May 1, 2020, 02:14 PM
1 vote
1 answer
242 views
Vacuum increases bloat size as reported by Heroku's bloat SQL query
I am using the Heroku command as described [here](https://github.com/heroku/heroku-pg-extras/blob/master/commands/bloat.js) to detect bloat in tables. One of our tables reported a bloat size of around 7 GB, but running vacuum on it, then running the same bloat command, reports close to 21 GB of "waste". I have no idea how this can happen. I understand the various conditions under which a table can bloat, even this dramatically, but not after a vacuum; I have never seen such a case. Any ideas why this might happen? The table in question does have a few TOAST tuples, perhaps that might be it. I also can't tell if that query from Heroku accounts for TOAST waste, but my understanding is that vacuum would take care of those as well. Local tests show the "waste" column from that bloat query staying pretty stable for small sizes and decreasing for larger values, but never *an increase*, which makes this rather strange.
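Editor's note: the Heroku bloat command is an estimation query (it derives the expected row width from column statistics), so its "waste" figure can shift after VACUUM/ANALYZE refreshes those statistics even when nothing physically changed. For exact numbers, the pgstattuple extension can be used where available; a sketch with a placeholder table name:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('my_table');           -- exact dead-tuple count and free space
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'my_table';                      -- find the TOAST table so it can be inspected too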
m_w_k (33 rep)
Dec 8, 2019, 07:17 PM • Last activity: Dec 8, 2019, 07:33 PM