
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
114 views
Does setting a default value for Alter Table in Postgres make the query significantly slower?
We had an incident very recently that caused our service to be unavailable for some minutes. Here is the Liquibase script that led to the incident:
[image: change set logs]
As you can see, the changeset took quite a lot of time. Our team has concluded that the default value being set to null is to blame here. However, that somehow doesn't sit right with me. When I check Postgres documentation pages 1 and 2, I see that setting the default to null is no different from removing the default value altogether, so I don't see why setting this default would cause such problems. On a side note, there were 12 million rows in the table. As in the title, my question is: does setting a default value cause such downtime? The application startup:
Started ApplicationKt in 289.989 seconds (process running for 290.727)
Some logs before the liquibase operation:
[images: lqbase1, lqbase2]
Waiting for changelog lock
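For reference, a minimal sketch of the two equivalent forms of the DDL in question; the table and column names are placeholders, not the ones from the incident. On current Postgres versions neither form rewrites existing rows, but both need an ACCESS EXCLUSIVE lock, which is usually where the waiting comes from.
{sql}
-- Placeholder names ("orders", "status"); both statements below are
-- catalog-only changes and do not rewrite the 12 million existing rows.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT NULL;
ALTER TABLE orders ALTER COLUMN status DROP DEFAULT;   -- same net effect

-- Either statement is quick by itself, but it must first take an ACCESS
-- EXCLUSIVE lock, so anything already holding a lock on the table (e.g. a
-- long-running or idle-in-transaction session) makes it appear to hang.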
leventunver (119 rep)
Dec 5, 2024, 04:15 PM • Last activity: Dec 6, 2024, 12:04 PM
0 votes
0 answers
29 views
pg_largeobject is empty after dropping a table with BLOBs
So we have a springboot app using a postrgressql db. We used to have some data that were mapped as CLOBs and BLOBs. Since we decided to move away from large objects we migrated the database and while doing so, we created some backup tables in case something went wrong. Example from liquibase CREATE...
So we have a Spring Boot app using a PostgreSQL database. We used to have some data mapped as CLOBs and BLOBs. Since we decided to move away from large objects, we migrated the database, and while doing so we created some backup tables in case something went wrong. Example from Liquibase:
CREATE TABLE data_files_backup (LIKE data_files);
INSERT INTO data_files_backup SELECT * FROM data_files;
UPDATE data_files SET context_new = lo_get(data_files.context::BIGINT) WHERE data_files.context IS NOT NULL;
So now only the backups have large objects stored in them. Now that this stage is over, I want to drop every backup table, and there lies my confusion. The changeset looks like this:
DROP TABLE IF EXISTS data_files_backup;
..... other tables .....
The backend starts, I wanted to clean up the database, and what I got is this:
$ vacuumlo -n -U MYNAME -p 5438 CENTRALDB
Password:
Connected to database "CENTRALDB"
Test run: no large objects will be removed!
Would remove 0 large objects from database "CENTRALDB".
Upon looking at pg_largeobject in pg_catalog, I realized that it is completely empty. pg_largeobject_metadata is also empty. I thought you had to run vacuumlo for a cleanup. Have I done something wrong? When did pg_largeobject get emptied?
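As a point of reference, a quick way to see what vacuumlo has to work with is to count the catalog entries directly; this is only a sketch and assumes the same connection can read the system catalogs.
{sql}
-- Every large object has one row in pg_largeobject_metadata, and its data
-- chunks live in pg_largeobject; if both counts are zero there is nothing
-- left for vacuumlo to unlink, which matches the "Would remove 0" output.
SELECT count(*) AS large_objects FROM pg_catalog.pg_largeobject_metadata;
SELECT count(*) AS data_chunks   FROM pg_catalog.pg_largeobject;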
physicsuser (1 rep)
Nov 5, 2024, 08:29 AM
1 vote
0 answers
482 views
Liquibase on Oracle SQLcl - table and index drops not being propagated
I've been using Oracle's SQLcl, specifically the lb command for Liquibase, for the last day or so to version my development, staging and production databases, which run on Oracle's Autonomous Data Warehouse (version 19). It works fine when doing the tutorial changes, like the ones listed by Jeff Smith in his video on it, although the syntax of the commands has changed a bit. I run into a problem when I want to drop an index: if I just execute the SQL statement DROP INDEX EXAMPLE_IDX in my development environment and then run lb genschema, the index doesn't get dropped on my staging environment when I run lb update -changelog controller.xml. I think this is because lb genschema generates a fresh Liquibase changelog every time I run it, which seems to overwrite the previous changelog. I see the same behaviour when trying to drop tables, and I'm not sure how I am supposed to propagate DROP statements on tables and indexes with the lb tool. Here's what I'm doing (users/passwords etc. all changed). Open SQLcl to create the staging database's structure (v1):
{plain}
set cloudconfig wallets/dev.zip
connect user/password@dev_medium
lb genschema

set cloudconfig wallets/staging.zip
connect user/password@staging_medium
lb update -changelog controller.xml
Open SQL Developer on the dev database and delete the index.
{sql}
DROP INDEX INDEXNAME;
Open SQLcl to update the changelog and the staging database.
{plain}
set cloudconfig wallets/dev.zip
connect user/password@dev_medium
lb genschema

set cloudconfig wallets/staging.zip
connect user/password@staging_medium
lb update -changelog controller.xml
After doing these steps, the tables and indexes are still there, which means my staging database is out of sync with what it should be. If anyone has knowledge or experience of using the lb tool or SQLcl, or can let me know some best practices, that would be amazing.
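For illustration only: if lb genschema really does just snapshot the current schema (which the overwriting behaviour suggests), a drop would have to be written as an explicit changeset rather than generated. A minimal sketch in Liquibase formatted SQL follows; the file would need to be included from controller.xml, and the author, id, and index/table names are placeholders.
{sql}
--liquibase formatted sql

--changeset ash:drop-example-idx
-- Hand-written drop; EXAMPLE_IDX / EXAMPLE_TABLE / EXAMPLE_COL are placeholders.
DROP INDEX EXAMPLE_IDX;
--rollback CREATE INDEX EXAMPLE_IDX ON EXAMPLE_TABLE (EXAMPLE_COL);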
Ash Oldershaw (121 rep)
Jul 1, 2020, 01:36 PM • Last activity: Sep 26, 2023, 03:45 PM
2 votes
1 answer
17510 views
Is there any way to clear all tables using Liquibase?
I'm currently using Liquibase for creating test data in my database before each test run. I came across a problem where some of the test cases need special data, while others need the tables to be empty. So I decided to give Liquibase contexts a try and only insert the data I really need for the current test case, plus, in an abstract superclass, all the data every test case needs (user accounts and stuff). I would like to clear the tables before each run; however, I can't seem to find anything for this case except dropAll(), which seems to be overkill. I'm not quite sure if it is even possible, because of foreign keys and other constraints, but I wonder if there is a way to delete the content of ALL the tables without dropping them completely?
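For illustration, one possible direction sketched as a Liquibase formatted SQL changeset; the table names are placeholders and the TRUNCATE syntax assumes PostgreSQL. TRUNCATE empties the tables without dropping them, and CASCADE follows foreign-key references so the constraints don't get in the way.
{sql}
--liquibase formatted sql

--changeset tests:clear-all-tables context:test runAlways:true
-- Placeholder table names; CASCADE also truncates any tables that reference
-- these via foreign keys, and RESTART IDENTITY resets the sequences they own.
TRUNCATE TABLE user_account, user_role, audit_log RESTART IDENTITY CASCADE;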
gilaras (123 rep)
Apr 3, 2012, 06:36 AM • Last activity: Sep 26, 2023, 03:45 PM
0 votes
0 answers
149 views
Column ___ of relation ___ does not exist in csv migration -- is Liquibase ignoring the schema I configured?
edit: I fought this too long and went with SQL insert statements instead; since I was generating the CSV from a JS script, it was easy to translate over. I think there's a bug in this loadData command. I asked the same question in the Liquibase forums and got no responses. I recommend using Flyway instead of Liquibase; it seems to be more active. I'm trying to insert CSV data. My config looks like this:
{
            "description": "DS-5001: insert boundary data",
            "tagDatabase": {
              "tag": "v2.1.0"
            },
            "loadData": {
              "dbms": "postgresql",
              "columns": [
                {
                  "column": {
                    "name": "id",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "name",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "state_id",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "state_abbr",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "state_name",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "country_code",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "country_name",
                    "type": "STRING"
                  }
                },
                {
                  "column": {
                    "name": "geom",
                    "type": "STRING"
                  }
                }
              ],
              "commentLineStartsWith": "//",
              "encoding": "UTF-8",
              "file": "../csv/0000-geom-data.csv",
              "quoteChar": "\"",
              "schemaName": "boundaries_justincollum",
              "relativeToChangelogFile": true,
              "separator": ",",
              "tableName": "boundary_area",
              "usePreparedStatements": true
            }
          },
The table looks like:
CREATE TABLE boundaries_justincollum.boundary_area (
	id varchar NOT NULL,
	"name" varchar NOT NULL,
	state_id varchar NOT NULL,
	state_abbr varchar NOT NULL,
	state_name varchar NOT NULL,
	country_code varchar NOT NULL,
	country_name varchar NOT NULL,
	geom varchar NOT NULL,
	CONSTRAINT pk_county2 PRIMARY KEY (id)
);
But when I run the migration I get an error:
"23-09-21 17:03:15] SEVERE [liquibase.integration] Migration failed for changeset changelog/changesets/000210_add_geoms.json::000210::jcollum:",
		"son: liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block",
		"aused by: ERROR: column \"\"id\"\" of relation \"boundary_area\" does not exist",
		"ition: 48",
(The logs are a little messed up because it's running in an AWS Lambda that's running locally.) In fact it can't find any of the columns in the boundary_area table (there are many similar entries in the logs: column ___ of relation ___ does not exist). But when I select them it's fine:
SELECT id FROM boundaries_justincollum.boundary_area ma LIMIT 10;  -- returns 10 rows!
command line:
JAVA_OPTS=\"-Xms2048m -Xmx4096m\" liquibase update --url=jdbc:postgresql://[redacted]?currentSchema=boundaries_justincollum,public --default-schema-name=boundaries_justincollum --username=*** --password=*** --changelog-file=\"changelog/index.json\" --log-level=INFO"
It's acting like it's looking in the public schema, but the table does not exist in the public schema. I built a very similar migration using SQL files (we have about 20 existing migrations like that) and it worked fine. But as soon as I tried to use loadData I started encountering issues. Am I missing something, or is this a bug in Liquibase's loadData? I have another migration that inserts large amounts of data using INSERT INTO ... with a generated SQL file, and that works fine. But I'd rather keep the data in CSV; that's easier to edit.
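For anyone debugging the same thing, a small sanity-check sketch (it assumes psql access with the same URL and credentials): confirm what the session's search_path resolves to and which schema actually holds the table.
{sql}
-- With ?currentSchema=boundaries_justincollum,public in the JDBC URL, the
-- equivalent session setting should show up here.
SHOW search_path;
SELECT current_schema();

-- Which schema(s) actually contain the target table.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name = 'boundary_area';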
jcollum (229 rep)
Sep 22, 2023, 03:35 PM • Last activity: Sep 26, 2023, 03:21 PM