
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answers
560 views
MySQL column that auto generates data, longer than UUID
My friend has the problem shown below. He asked on Stack Overflow and has had no reply so far, so I'd like to try his luck here. Thanks to everyone who helps. "I would like to ask if it's possible to set up a database table that has a column that auto-generates random alphanumeric data longer than what a UUID provides. I've been using UUIDs for a while now, but I would like to have a longer string of random data in my columns, something similar to a token from an authenticator (around 300+ characters). So when I insert values into the other columns, this particular column will generate its data by itself. Thanks in advance." Quoted from Reuben Tan. Source: https://stackoverflow.com/questions/43461771/mysql-column-that-auto-generates-data-longer-than-uuid
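A minimal sketch of one way this could be done, assuming a hypothetical tokens table (not the asker's schema): MySQL 5.7+ provides RANDOM_BYTES() and TO_BASE64(), so a BEFORE INSERT trigger can fill the column with roughly 300 random characters whenever no value is supplied; on MySQL 8.0.13+ the same expression could instead be used as a column DEFAULT.

-- Hypothetical table; the column is sized for a ~300-character random token.
CREATE TABLE tokens (
  id    BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  token VARCHAR(400) NOT NULL DEFAULT ''
);

DELIMITER //
CREATE TRIGGER tokens_bi BEFORE INSERT ON tokens
FOR EACH ROW
BEGIN
  IF NEW.token IS NULL OR NEW.token = '' THEN
    -- 225 random bytes encode to exactly 300 base64 characters;
    -- TO_BASE64 wraps long output, so strip the newlines.
    SET NEW.token = REPLACE(TO_BASE64(RANDOM_BYTES(225)), '\n', '');
  END IF;
END//
DELIMITER ;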
Tan Yih Wei (1 rep)
Apr 19, 2017, 01:00 AM • Last activity: Jul 28, 2025, 11:06 PM
0 votes
1 answers
14 views
Is it possible to run Django migrations on a Cloud SQL replica without being the owner of the table?
I'm using **Google Cloud SQL for PostgreSQL** as an **external primary replica**, with data being replicated continuously from a self-managed PostgreSQL source using **Database Migration Service (DMS)** in CDC mode. I connected a Django project to this replica and tried to run a migration that renames a column and adds a new one:
uv run python manage.py migrate
However, I get the following error:
django.db.utils.ProgrammingError: must be owner of table camera_manager_invoice
This makes sense, since in PostgreSQL, ALTER TABLE requires table ownership. But in this case, the replica was created by DMS, so the actual table owner is the replication source user, not the current user.

## 🔍 The Problem:

I'm trying to apply schema changes via Django migrations on a Cloud SQL replica whose tables I do **not** own. The replication is working fine for data (CDC), but I need to apply structural changes on the replica independently.

## ✅ What I Tried:

* Changing the connected user: still not the owner, so same error.
* Running sqlmigrate to get the SQL and applying it manually: same result, permission denied.
* Changing ownership of the table via ALTER TABLE ... OWNER TO ...: failed because I'm not a superuser.
* Running the migration with --fake: this skips execution and doesn't change the schema.

## ❓ My Question:

> **Is there any way to apply schema changes via Django migrations (or manually) on a Cloud SQL replica, without being the table owner?**

I'm open to alternatives, best practices, or official GCP recommendations for this situation.
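As a diagnostic step rather than a fix, it may help to confirm from the catalogs which role actually owns the table named in the error and which roles the connecting user belongs to (in PostgreSQL, membership in the owning role is enough for ALTER TABLE). A sketch, assuming psql access to the replica; role names in the final comment are placeholders:

-- Who owns the table Django is trying to alter?
SELECT tablename, tableowner
FROM pg_tables
WHERE tablename = 'camera_manager_invoice';

-- Which roles does the current user belong to? Ownership can be exercised via membership.
SELECT r.rolname
FROM pg_roles r
JOIN pg_auth_members m ON m.roleid = r.oid
JOIN pg_roles u ON u.oid = m.member
WHERE u.rolname = current_user;

-- If the owning role can be granted to the Django user (privileges permitting),
-- membership alone allows the ALTER TABLE:
-- GRANT some_owner_role TO django_user;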
Raul Chiarella (117 rep)
Jul 22, 2025, 06:40 PM • Last activity: Jul 25, 2025, 02:53 PM
1 votes
1 answers
25 views
Why is my Cloud SQL external replica not reflecting schema changes (like new columns) after Django migrations?
I'm using **Google Cloud Database Migration Service (DMS)** to replicate data from a self-managed PostgreSQL database into a **Cloud SQL for PostgreSQL instance**, configured as an *external primary replica*. The migration job is running in **CDC mode** (Change Data Capture), using **continuous replication**. Everything seems fine for data: new rows and updates are being replicated successfully. However, after running Django's makemigrations and migrate on the source database (which added new columns and renamed others), **the schema changes are not reflected in the Cloud SQL replica**. The new columns simply don't exist in the destination.

### 🔍 What I've done:
- Source: self-managed PostgreSQL instance.
- Target: Cloud SQL for PostgreSQL set as an external replica.
- Replication user has proper privileges and is connected via mTLS.
- The job is active, with "Optimal" parallelism and healthy status.
- Data replication (INSERT/UPDATE/DELETE) works great.
- Schema changes like ALTER TABLE, ADD COLUMN, RENAME COLUMN are **not reflected** in the replica.

### ❓ Question:
**How can I configure DMS or Cloud SQL to also replicate schema changes (like ALTER TABLE or ADD COLUMN) from the source to the replica? Or is it necessary to manually apply schema changes on the target?**

> I'm fine with workarounds or official recommendations; I just need clarity on the correct approach for schema evolution in this setup.
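Logical replication pipelines like this generally do not carry DDL, so the usual workaround is to apply the same migration SQL manually on the target. As a sanity check, a sketch of a column listing to run on both source and replica and compare (the table name is purely illustrative):

SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'camera_manager_invoice'   -- illustrative table name
ORDER BY ordinal_position;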
Raul Chiarella (117 rep)
Jul 22, 2025, 06:05 PM • Last activity: Jul 25, 2025, 02:48 PM
0 votes
3 answers
179 views
PostgreSQL query stuck on LWLock:BufferMapping with high CPU and memory usage — how to debug further?
We're experiencing frequent long-running queries (>43 seconds) in our PostgreSQL production DB, and they often get stuck on:

wait_event_type = LWLock
wait_event = BufferMapping

This seems to indicate contention on shared buffers. The queries are usually simple SELECTs (e.g., on the level_asr_asrlog table), but during peak usage they slow down drastically and are sometimes auto-killed after 60 seconds (based on statement_timeout).
Instance Configuration:
PostgreSQL version: 14
RAM: 104 GB (≈95 GB usable for Postgres)
vCPUs: 16
SSD Storage: GCP auto-scaled from 10TB → 15TB over the last year
Shared Buffers: 34.8 GB
work_mem: 4 MB
maintenance_work_mem: 64 MB
autovacuum_work_mem: -1 (which I believe means it falls back to maintenance_work_mem)
temp_buffers: 8 MB
effective_cache_size: ~40 GB
max_connections: 800
Observations:
- VACUUM processes often take >10 minutes.
- Memory is almost fully utilized (≈95% in use), which correlates with memory pressure.
- The system appears to be thrashing, swapping data instead of doing useful work.
- The wait event BufferMapping implies the backend is stuck trying to associate a block with a buffer, likely due to memory contention.

I need help with the following:
- How can I further diagnose LWLock:BufferMapping contention?
- Is increasing work_mem or shared_buffers a safe direction?
- Should I implement PgBouncer to reduce the impact of max_connections on memory?
- How can I confirm whether the OS is thrashing, and if so, how do I resolve it?
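A minimal sketch of how the BufferMapping waits can be observed live, using only the standard pg_stat_activity view (PostgreSQL 14):

-- Backends currently waiting on the buffer-mapping LWLock and how long their query has run.
SELECT pid,
       now() - query_start AS query_runtime,
       wait_event_type,
       wait_event,
       state,
       left(query, 80) AS query_snippet
FROM pg_stat_activity
WHERE wait_event_type = 'LWLock'
  AND wait_event = 'BufferMapping'
ORDER BY query_runtime DESC;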
pramod (25 rep)
Mar 27, 2025, 04:20 AM • Last activity: Jul 22, 2025, 01:45 AM
1 votes
1 answers
152 views
Adding columns upped my RAM usage significantly
I have a Google Cloud Platform MySQL 2nd Gen 5.7 instance with two databases, one for testing and one for production. There are about 20 tables, and one of them accounts for 99% of the storage. I added 8 columns to that largest table (it already has ~150 columns) in the testing DB. Six of the new columns are INTEGER and two are BOOLEAN (or more accurately, TINYINT). No indexes of any kind were added. Below is a snapshot of the metrics from the migration operation; you can clearly see CPU and storage peaks for each of the 8 columns. (metrics snapshot omitted) The weird part is that RAM usage goes up and stays up. Any insight on what I may have forgotten to do? The columns have no data populated yet; all values are NULL for every row so far.
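One plausible (unconfirmed) explanation is that the ALTER TABLE rebuilds pulled the table's pages into the InnoDB buffer pool, which then stays warm. A sketch of how buffer pool usage per table could be checked on MySQL 5.7, using the standard information_schema and sys views:

-- Overall buffer pool fill level.
SELECT pool_id, pool_size, free_buffers, database_pages
FROM information_schema.INNODB_BUFFER_POOL_STATS;

-- Which tables occupy the most buffer pool pages right now.
SELECT object_schema, object_name, allocated, data
FROM sys.innodb_buffer_stats_by_table
ORDER BY allocated DESC
LIMIT 10;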
Brian Leach (225 rep)
Oct 30, 2018, 05:07 PM • Last activity: Jul 12, 2025, 12:04 AM
0 votes
1 answers
169 views
GCP Cloudsql Mysql Replica unresponsive after mass delete on Master
Directed here from S/O. We have a master/replica configuration for MySQL InnoDB (5.7.32) databases in Cloud SQL. We have a single table (let's call it the master table) partitioned on two keys, with both primary and non-clustered indexes. It's row-based replication with automatic disk increase on both instances. It's not an HA configuration, so it's not a failover replica.

*What we're trying to do:* We're trying to purge the master table back to N days. This is done for multiple reasons; let's say it's a client requirement.

*What's the issue:* Whenever we purge the master table, it stalls the replica: it deletes a certain number of rows on the replica and then just passes out. A single purge removes around 5 million rows, and the lag starts the moment the purge starts on the master. It's a totally repeatable issue; we know it's caused by the row-based, sequential replication.

*What we've tried so far:*
1. Increasing the size of the replica; we've given it 104 GB RAM but the lag doesn't go away.
2. Restarting the replica.
3. RESET SLAVE.
4. Enabling parallel replication (https://cloud.google.com/sql/docs/mysql/replication/manage-replicas#configuring-parallel-replication); every single time I tried this it failed with an 'Unknown error occurred'.
5. Switching to statement-based replication with SET binlog_format="STATEMENT", but the "root" user doesn't have the privilege and gets an 'access denied' error.

Now the question: *what am I missing in my:*
1. explanation
2. mysql configuration
3. method

Thanks
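Not an answer to the parallel-replication errors above, but a commonly suggested way to keep a row-based replica from stalling during a purge is to delete in small batches, so each transaction replicates quickly. A sketch with purely illustrative table, column and batch values (and, since the table is partitioned, dropping whole partitions is cheaper still when the partition key matches the retention window):

-- Repeat until ROW_COUNT() reports 0 rows deleted (driven by a script or scheduled event).
DELETE FROM master_table                         -- illustrative name
WHERE created_at < NOW() - INTERVAL 30 DAY       -- retention window
ORDER BY created_at
LIMIT 10000;                                     -- small batch = small replicated transaction

SELECT ROW_COUNT();                              -- 0 means the purge is finished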
Faraz Beg (11 rep)
May 6, 2021, 09:16 PM • Last activity: Jul 9, 2025, 12:05 AM
1 votes
2 answers
43 views
Can't update mysql users authentication plugin in Google Cloud SQL because root does not have SYSTEM_USER permission
I'm working on upgrading my Google Cloud SQL MySQL instance from 8.0 to 8.4. I just upgraded it from 5.7 to 8.0, and now I'm trying to convert the authentication plugin for my existing users from mysql_native_password to caching_sha2_password so that I can take the next step of upgrading from 8.0 to 8.4. When I log in as root, I'm not able to alter the various users' plugins, and it gives me the error:
ERROR: Access denied; you need (at least one of) the SYSTEM_USER privilege(s) for this operation
I can't see any way to grant my root user that permission. I've got this list of users and I don't know how to update them before I perform the 8.0 -> 8.4 upgrade. I'm worried that I'll break a bunch of stuff if I don't update them before the upgrade to 8.4.
mysql> SELECT user, host, plugin FROM mysql.user WHERE plugin = 'mysql_native_password' order by user, host;
+----------------------------+-----------+-----------------------+
| user                       | host      | plugin                |
+----------------------------+-----------+-----------------------+
| cloudiamgroup              | %         | mysql_native_password |
| cloudsqlapplier            | localhost | mysql_native_password |
| cloudsqlexport             | 127.0.0.1 | mysql_native_password |
| cloudsqlexport             | ::1       | mysql_native_password |
| cloudsqlimport             | 127.0.0.1 | mysql_native_password |
| cloudsqlimport             | ::1       | mysql_native_password |
| cloudsqlimport             | localhost | mysql_native_password |
| cloudsqlinactiveuser       | %         | mysql_native_password |
| cloudsqlobservabilityadmin | 127.0.0.1 | mysql_native_password |
| cloudsqlobservabilityadmin | ::1       | mysql_native_password |
| cloudsqlobservabilityadmin | localhost | mysql_native_password |
| cloudsqloneshot            | 127.0.0.1 | mysql_native_password |
| cloudsqloneshot            | ::1       | mysql_native_password |
| cloudsqlreadonly           | 127.0.0.1 | mysql_native_password |
| cloudsqlreadonly           | ::1       | mysql_native_password |
| cloudsqlreadonly           | localhost | mysql_native_password |
| cloudsqlreplica            | %         | mysql_native_password |
| cloudsqlsuperuser          | %         | mysql_native_password |
| mysql.sys                  | localhost | mysql_native_password |
| root                       | 127.0.0.1 | mysql_native_password |
| root                       | ::1       | mysql_native_password |
| root                       | localhost | mysql_native_password |
+----------------------------+-----------+-----------------------+
22 rows in set (0.06 sec)

mysql>
How do I update these user entries so that I can safely upgrade to mysql 8.4 in my Google Cloud SQL instance?
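For reference, the conversion itself is a standard ALTER USER; whether Cloud SQL's root may run it against a given account is exactly what the SYSTEM_USER error above is blocking. Account name and password below are illustrative:

-- Convert one application user to the newer plugin.
ALTER USER 'myappuser'@'%'
  IDENTIFIED WITH caching_sha2_password BY 'new-strong-password';

-- Re-check which accounts still use the old plugin.
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password'
ORDER BY user, host;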
Kenny Wyland (129 rep)
Jun 27, 2025, 09:58 PM • Last activity: Jun 30, 2025, 07:55 PM
1 votes
1 answers
185 views
What is the best way to migrate a multi-TB MySQL database from on-prem to gcloud Cloud SQL?
I have several multi-TB on-prem MySQL databases I need to migrate into Google Cloud's managed MySQL offering, Cloud SQL. I have migrated two of roughly 1 TB each so far with mysqldump, but this method is far too slow for the bigger databases. Ideally I would like to use Percona XtraBackup, but I don't know whether that is possible. Has anyone completed such a migration? What tools did you use? Thanks in advance.
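Whether a physical XtraBackup copy can be loaded into Cloud SQL depends on the migration path Google supports at the time, so purely as an illustrative alternative: a parallel logical dump and load with mydumper/myloader is usually far faster than single-threaded mysqldump. Hosts and credentials below are placeholders:

# Parallel dump from the on-prem source.
mydumper --host=onprem-mysql --user=backup_user --password='***' \
         --threads=8 --compress --outputdir=/backup/dump

# Parallel load into Cloud SQL (for example through the Cloud SQL Auth Proxy on 127.0.0.1).
myloader --host=127.0.0.1 --user=admin_user --password='***' \
         --threads=8 --directory=/backup/dump --overwrite-tables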
CClarke (133 rep)
Oct 26, 2022, 11:47 AM • Last activity: Jun 26, 2025, 03:03 PM
1 votes
1 answers
212 views
Google Managed PGSQL DB migration to DigitalOcean Managed PGSQL DB
We are migrating from a Google Cloud managed PostgreSQL database to a managed PostgreSQL server on DigitalOcean. The dilemma we are facing is that both the Google and DigitalOcean instances are managed, so we have no access to the servers' directories and can't dump and then restore on the host itself. How would we go about this? Is there a command that copies the data directly from Google Cloud PostgreSQL to DigitalOcean? DigitalOcean's resources assume we have access to a directory.
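pg_dump and pg_restore run on a client machine and need only network access, not filesystem access on the managed servers, so one common approach (a sketch; hosts, users and database names are placeholders, and credentials can come from ~/.pgpass) is to stream the dump straight from one instance into the other:

pg_dump --format=custom --no-owner --no-privileges \
        --host=GCP_HOST --username=gcp_user --dbname=mydb \
  | pg_restore --no-owner --no-privileges \
        --host=DO_HOST --username=do_user --dbname=mydb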
sqwale (231 rep)
Jul 7, 2020, 07:01 PM • Last activity: Jun 13, 2025, 04:04 AM
1 votes
0 answers
31 views
Cloud SQL: gcloud sql export sql does not include procedures/triggers — alternatives?
### How to export stored procedures and triggers from Cloud SQL without hitting production?

I'm using Google Cloud SQL (MySQL) and want to automate a **nightly/weekly clone** of the production database into staging. The main goal is to keep staging up to date **without impacting prod performance**. I've been using:
gcloud sql export sql my-instance gs://my-bucket/clone.sql.gz \
  --database=mydb \
  --offload \
  --async
This uses a temporary worker VM (serverless export) and avoids load on the main instance — which is great. However, I found that **stored procedures, triggers, and events are missing** in the dump. Attempts to add --routines, --triggers, or --events fail with:
ERROR: (gcloud.sql.export.sql) unrecognized arguments: --routines
Apparently, gcloud sql export sql **doesn't support exporting routines or triggers at all**, and there's no documented way to include them. Yet the export is still billed at $0.01/GB.

### Goal:
Clone a Cloud SQL instance into staging, including:
- Tables and schema
- Data
- Stored procedures / functions
- Triggers
- Events

...without putting load on production.

### Options I've found:
1. **gcloud sql export sql**
   - ✅ Offloads work (zero prod impact)
   - ❌ Skips procedures/triggers/events
   - ❌ No way to include them
2. **Direct mysqldump --routines --events --triggers on prod**
   - ✅ Complete dump
   - ❌ Impacts prod (not acceptable)
3. **Run mysqldump on a read-replica**
   - ✅ Complete + safe
   - ❌ Slightly more setup / cost

### Question:
Is using a **read-replica + mysqldump** the only way to do a full logical export (schema + routines + triggers) **without touching prod**? Any better alternatives or official GCP-supported approaches?
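For completeness, a sketch of option 3 above (a full logical dump taken from a read-replica so production sees no load); the host and credentials are placeholders and the flags are standard mysqldump options:

mysqldump --host=REPLICA_HOST --user=dump_user --password \
          --single-transaction --set-gtid-purged=OFF \
          --routines --triggers --events \
          mydb | gzip > clone.sql.gz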
scr4bble (111 rep)
Jun 2, 2025, 10:02 PM
0 votes
1 answers
293 views
How to get Google Cloud SQL Auth Proxy working on Windows (WSL)?
I am trying to run Google's Cloud SQL Auth Proxy on my Windows 11 machine under WSL (Windows Subsystem for Linux). I downloaded the Cloud SQL Auth Proxy 64-bit executable, ran it, and got these messages:
.\cloud-sql-proxy.x64.exe my-gcp-project:us-west1:my-postgres-1 --credentials-file=.\cloud-sql.json
2024/04/13 18:39:34 Authorizing with the credentials file at ".\\cloud-sql.json"
2024/04/13 18:39:35 [my-gcp-project:us-west1:my-postgres-1] Listening on 127.0.0.1:5432
2024/04/13 18:39:35 The proxy has started successfully and is ready for new connections!
So it looks good. However, when I try to connect I get a "Can't reach database server" error. In WSL I run netstat -a and do not see port 5432 open. However, when I run netstat -ano in a PowerShell terminal I do see this:
TCP    127.0.0.1:5432   0.0.0.0:0    LISTENING   11812
How do I make the local Auth Proxy endpoint accessible under WSL, and how do I verify that it is?
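WSL2 runs in its own network namespace, so a Windows process listening on 127.0.0.1 is not automatically reachable from inside WSL. One workaround is to run the Linux build of the proxy inside WSL itself; a sketch, where the download URL and version are assumptions to check against the official releases page:

# Inside WSL: fetch and run the Linux binary with the same arguments.
curl -o cloud-sql-proxy \
  https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.8.0/cloud-sql-proxy.linux.amd64
chmod +x cloud-sql-proxy
./cloud-sql-proxy my-gcp-project:us-west1:my-postgres-1 --credentials-file=./cloud-sql.json

# Verify the listener from another WSL shell.
ss -ltn | grep 5432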
rlandster (375 rep)
Apr 14, 2024, 01:51 AM • Last activity: May 20, 2025, 10:08 AM
0 votes
0 answers
35 views
Postgres (Cloudsql) shows lock wait on write transactions during an index creation (on unrelated table)
Redirected from Stack Overflow. I am using PostgreSQL on Cloud SQL (GCP), version 15, running on 4 vCPUs, 16 GB RAM, with an SSD disk currently at 500 GB. We are observing an unexpected issue during index creation. While creating an index on a largish table (~10M rows), we observe a spike of lock waits on write transactions (not on reads) on completely unrelated tables. By "unrelated", I mean that this happens even if the index is created on a table in a different database (same instance) than the tables where lock waits are observed on writes, so presumably they share only low-level Postgres resources as well as the infrastructure resources. Essentially I am interested in knowing whether this has happened to others, and under which conditions. Initially we thought that "lock wait" in the Query Insights interface was reported only for transaction locks, but this issue seems to indicate that it also includes wait time for I/O resources. I would be happy if anyone can confirm this or point me to a better hypothesis.
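One way to tell genuine lock blocking apart from I/O or LWLock waits while the index build runs is pg_blocking_pids(): true lock waiters report a non-empty blocker list, whereas I/O-bound writers show a wait event but no blockers. A sketch using standard PostgreSQL 15 views:

-- Run while CREATE INDEX is in progress.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       state,
       left(query, 60) AS query_snippet
FROM pg_stat_activity
WHERE wait_event_type IS NOT NULL
  AND pid <> pg_backend_pid();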
Pascal Delange (101 rep)
May 17, 2025, 09:10 PM
0 votes
1 answers
317 views
Google Cloud SQL (MySQL) consume too much CPU during several days
We noticed that Cloud SQL (MySQL) consumed too much CPU (~100%) for several days. At first I assumed we had many users online or background jobs running, but it was a regular workload, nothing special; Query Insights showed CPU under 20% for all users in the DB and for all DBs in the MySQL instance. Next I suspected connection or memory leaks; again, nothing special. Finally, I checked the output of SHOW FULL PROCESSLIST and noticed a process that had been running far too long (~8 days, see the Time field) with State 'statistics'. This workload was not shown in the CPU chart of Query Insights but belongs to my SQL user, which is strange to me. The 'Info' field contains a 'SELECT ...' query. I was able to kill such processes, and the CPU came back to normal levels. (screenshots omitted)

## UPDATED:

1. https://bugs.mysql.com/bug.php?id=20932
2. https://bugs.mysql.com/bug.php?id=26339
3. https://bugs.mysql.com/bug.php?id=21153
4. https://www.linuxtopia.org/online_books/database_guides/mysql_5.1_database_reference_guide/controlling-optimizer.html ?

### Description

It looks like it was a combination of two bugs (1) and (2). It seems that the current implementation of our search-by-tags query, with a dynamic (depending on the number of selected tag categories) combination of 'IN ()' clauses where more than 2 tags are selected and 'INNER JOIN' clauses, leads to performance degradation in the MySQL query optimizer.

### Solution (in progress)

1. Replace INNER JOIN with INTERSECT (MySQL 8.0.31).
2. Replace IN () with UNION DISTINCT.
3. Add suitable indexes (with the help of EXPLAIN ANALYZE).
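As a side note on the monitoring part above (spotting and killing the runaway 'statistics' process without the console), a sketch using the standard processlist view; the 600-second threshold and the id in the KILL comment are illustrative:

-- Long-running, non-sleeping statements, oldest first.
SELECT id, user, time AS seconds, state, LEFT(info, 80) AS query_snippet
FROM information_schema.processlist
WHERE command <> 'Sleep'
  AND time > 600
ORDER BY time DESC;

-- Then terminate a specific offender by its id:
-- KILL 12345;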
Anton Komarov (1 rep)
Feb 24, 2023, 05:37 AM • Last activity: May 4, 2025, 02:04 PM
0 votes
2 answers
155 views
How to disable max key length error in mysql
I use Google Cloud SQL (MySQL) as my production database. To replicate this database for testing in Docker, I use the mysql:8 image. I noticed that one of my migration scripts succeeds on the cloud DB but fails during tests. The following script causes an error:

CREATE TABLE testTable ( name varchar(1000) );
CREATE INDEX idx_device_designs_name ON testTable (name);

The error:

Specified key was too long; max key length is 3072 bytes [Failed SQL: (1071)...

I understand the reason for the error, but as our standard production DB does not produce it, I'm looking for the setting that disables this check.

**EDIT 1:**

SELECT VERSION(); -> 8.0.31-google

SHOW CREATE TABLE tableName

production:

CREATE TABLE testTable (
  name varchar(1000) COLLATE utf8mb3_unicode_ci NOT NULL,
  KEY idx_device_designs_name (name)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_unicode_ci

docker container:

CREATE TABLE testTable (
  name varchar(1000) CHARACTER SET utf8mb3 COLLATE utf8mb3_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

Production had the index applied in the past (before a MySQL update, so on 5.7).

SHOW TABLE STATUS LIKE 'testTable' (on production):

testTable,InnoDB,10,Dynamic,2,8192,16384,0,32768,0,3,2024-11-14 08:52:26,,,utf8mb3_unicode_ci,,"",""
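The SHOW CREATE TABLE output above suggests the real difference is the default character set: on production the table is utf8mb3 (3 bytes per character, so a 1000-character key is 3000 bytes and fits under 3072), while the Docker container defaults to utf8mb4 (4 bytes per character, 4000 bytes, over the limit). Rather than disabling the check, a sketch of two ways to make the test container behave like production:

-- Option 1: create the test table with the same character set as production.
CREATE TABLE testTable (
  name varchar(1000) CHARACTER SET utf8mb3 COLLATE utf8mb3_unicode_ci NOT NULL
);
CREATE INDEX idx_device_designs_name ON testTable (name);

-- Option 2: keep utf8mb4 but index only a prefix that stays within 3072 bytes.
-- CREATE INDEX idx_device_designs_name ON testTable (name(768));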
Laures (113 rep)
Jan 30, 2025, 03:39 PM • Last activity: Jan 30, 2025, 08:12 PM
2 votes
1 answers
1725 views
How to sync easily between a master database from Google Cloud SQL and a master database from Amazon RDS
I've created a master-slave (replication) setup in Google Cloud SQL. I've also created a master-slave (replication) setup in Amazon RDS. I want to sync the two MySQL masters, one in Google Cloud SQL and one in Amazon RDS. What approaches can be used?
Eldwin Eldwin (161 rep)
May 24, 2019, 04:11 AM • Last activity: Jan 19, 2025, 09:06 AM
0 votes
2 answers
1205 views
can't backup postgres 13 with pg_dump
I'm trying to backup a postgresql 13 database with the following command on MacOS:
PGPASSWORD=“my_password” /Library/PostgreSQL/13/bin/pg_dump -h xx.xx.xx.xx -p 5432 -U postgres dbname > my_backup.backup

I get the following error:

pg_dump: error: connection to database "dbname" failed: could not initiate GSSAPI security context:  The operation or option is not available: Credential for asked mech-type mech not found in the credential handle
FATAL:  password authentication failed for user "postgres"
FATAL:  password authentication failed for user "postgres"

I'm looking up this error on Google but I haven't found the solution yet. I'm providing the correct password, because I can connect to the host with this username/password combination through pgAdmin 4. It's a Google Cloud Platform SQL instance. Does anyone know the solution?
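Two details may be worth checking, offered as a hedged sketch rather than a confirmed fix: the curly quotes around “my_password” pass literal quote characters into PGPASSWORD, and the GSSAPI attempt can be skipped with the standard PGGSSENCMODE environment variable (PostgreSQL 12+ clients):

PGPASSWORD='my_password' PGGSSENCMODE=disable \
  /Library/PostgreSQL/13/bin/pg_dump -h xx.xx.xx.xx -p 5432 -U postgres dbname \
  > my_backup.backup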
Sam Leurs (141 rep)
Sep 19, 2021, 06:58 PM • Last activity: Jan 10, 2025, 07:00 PM
0 votes
1 answers
48 views
CloudSQL for Postgres S3 extension
In AWS RDS for Postgres, there is an extension called AWS_S3 that provides functions within Postgres that allow me to import data directly from a bucket into a table and export data from a table directly into a bucket. Example:
SELECT aws_s3.table_import_from_s3(
   'test_gzip', '', '(csv format)',
   'myS3Bucket', 'test-data.gz', 'us-east-2'
);
There's nothing similar in CloudSQL for Postgres. Has anyone had this type of problem? How can I solve it?
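Cloud SQL has no aws_s3-style extension, but the equivalent bucket-to-table path is the gcloud import/export commands against Cloud Storage; a sketch in which the instance, bucket, database and table names are placeholders:

# Import a CSV object from a Cloud Storage bucket into an existing table.
gcloud sql import csv my-instance gs://my-bucket/test-data.csv \
  --database=mydb --table=test_gzip

# Export a query result back to the bucket.
gcloud sql export csv my-instance gs://my-bucket/export.csv \
  --database=mydb --query="SELECT * FROM test_gzip"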
Jose Rocha (45 rep)
Mar 17, 2024, 02:27 AM • Last activity: Jan 3, 2025, 07:34 AM
0 votes
1 answers
386 views
Online schema changes for cloudsql DB
I'm new to Cloud SQL for MySQL. I have a table with a trigger, and I need to make some DDL changes on that table, but I can't take any downtime.

**What I have tried so far:** I tried pt-online-schema-change, because it works for our local databases. However, with managed Cloud SQL it wasn't working, and I opened an issue with Percona (PT-1964). Next, I checked gh-ost; however, per its documentation (https://github.com/github/gh-ost/blob/master/doc/requirements-and-limitations.md), it doesn't support tables with triggers. Making changes with the shared-lock option isn't available to me because we also have a replica.

Does anyone know any other method or suggestion for making DDL changes online on Cloud SQL for MySQL? Thanks in advance!
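Depending on the exact DDL, MySQL's native online DDL may already be enough: many operations (adding a column or a secondary index, for instance) can run INPLACE without blocking writes, and unlike pt-osc/gh-ost it has no issue with triggers. A sketch; whether a particular change qualifies has to be checked against the online DDL support matrix, and the table/column names here are illustrative:

-- Fails immediately (instead of locking) if the operation cannot satisfy these clauses.
ALTER TABLE my_table
  ADD COLUMN new_flag TINYINT NOT NULL DEFAULT 0,
  ALGORITHM=INPLACE, LOCK=NONE;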
Shiwangini (380 rep)
May 22, 2021, 07:06 AM • Last activity: Dec 30, 2024, 11:02 AM
0 votes
3 answers
115 views
Can a new transaction claim an older sequence id?
I'm using a PostgreSQL database as an event store. We used to use https://github.com/SQLStreamStore/SQLStreamStore, but it had issues under a lot of parallel transactions: essentially we suffered from a lot of 'skipped' events. A similar problem is explained here: https://github.com/eugene-khyst/postgresql-event-sourcing?tab=readme-ov-file#transactional-outbox-using-transaction-id

So together with a co-worker I decided to fork the library and implement it using [pg_current_snapshot()](https://pgpedia.info/p/pg_current_snapshot.html) . We had a few iterations of this, but in the end we got it working: https://github.com/ArneSchoonvliet/SQLStreamStore

The main idea is: if we see a gap between positions, we only trust the events with a lower transaction_id than 'xmin'. This has worked great for us and most problems are solved, but sometimes we have a weird occurrence:
Position    MessageId                               CreatedAt                       TransactionId
31170300	be7b412a-103c-5cdd-8458-57fbb0e5c39e	2024-09-29 13:23:27.733 +0200	2306832989
31170299	38b9d7d9-540c-5440-a2a0-10b91cffb2ad	2024-09-29 13:23:27.736 +0200	2306832990
Query result
Position: 31170297, Array index: 0, Transaction id: 2306832974
Position: 31170298, Array index: 1, Transaction id: 2306832976
Position: 31170300, Array index: 2, Transaction id: 2306832989
Xmin: 2306832990
In the query result you can see that 31170299 is missing, so our gap-checking code kicks in and checks whether all transaction_ids are lower than xmin. In this case they are; 31170299 simply wasn't visible yet. As a result, that event gets skipped.

**Question:** Is it expected that this can happen, a newer transaction claiming a lower sequence value? We are using a Google Cloud managed PostgreSQL DB. I don't really know how we would ever be able to detect this other than by checking every time whether transactions are still in flight, but that would hurt performance, since we would lose a lot of time on 'actual' gaps (caused by transactions that were rolled back).

People probably wonder what the insert / query SQL looks like.

INSERT: https://github.com/ArneSchoonvliet/SQLStreamStore/blob/master/src/SqlStreamStore.Postgres/PgSqlScripts/AppendToStream.sql

Important part:
INSERT INTO __schema__.messages (message_id,
                                 stream_id_internal,
                                 stream_version,
                                 created_utc,
                                 type,
                                 json_data,
                                 json_metadata,
                                 transaction_id)
SELECT m.message_id, _stream_id_internal, _current_version + (row_number()
    over ()) :: int, _created_utc, m.type, m.json_data, m.json_metadata, pg_current_xact_id()
FROM unnest(_new_stream_messages) m
ON CONFLICT DO NOTHING;
GET DIAGNOSTICS _success = ROW_COUNT;
As you can see, the position isn't set explicitly; it's an auto-incrementing column defined as: "position" int8 DEFAULT nextval('messages_seq'::regclass) NOT NULL

QUERY: https://github.com/ArneSchoonvliet/SQLStreamStore/blob/master/src/SqlStreamStore.Postgres/PgSqlScripts/ReadAll.sql

Important part:
BEGIN
  OPEN _txinfo FOR
  SELECT pg_snapshot_xmin(pg_current_snapshot());
  RETURN NEXT _txinfo;
    
  OPEN _messages FOR
  WITH messages AS (
      SELECT __schema__.streams.id_original,
             __schema__.messages.message_id,
             __schema__.messages.stream_version,
             __schema__.messages.position,
             __schema__.messages.created_utc,
             __schema__.messages.type,
             __schema__.messages.transaction_id,
             __schema__.messages.json_metadata,
             __schema__.messages.json_data,
             __schema__.streams.max_age
      FROM __schema__.messages
             INNER JOIN __schema__.streams ON __schema__.messages.stream_id_internal = __schema__.streams.id_internal
      WHERE  __schema__.messages.position >= _position
      ORDER BY __schema__.messages.position
      LIMIT _count
  )
  SELECT * FROM messages LIMIT _count;
  RETURN NEXT _messages;
END;
ErazerBrecht (101 rep)
Oct 9, 2024, 10:20 AM • Last activity: Nov 20, 2024, 11:30 AM
0 votes
0 answers
17 views
Google Cloud SQL MySQL 8.0 only accepting connections after external connection
I upgraded MySQL from 5.7 to 8.0.37 on GCP Cloud SQL. Now, when the DB is restarted (manually or for maintenance), it does not accept connections from a GCP Cloud Run Java server, failing with this error:

java.sql.SQLException: Access denied for user

Only when I log in via some external tool (IntelliJ DataGrip JDBC in my case) does everything suddenly start working, as soon as that connection is established. What could be the problem?
cputoaster (101 rep)
Oct 31, 2024, 08:16 AM • Last activity: Oct 31, 2024, 08:27 AM
Showing page 1 of 20 total questions