
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
669 views
Best way to cast a VIEW row type to the underlying TABLE row type
I have a table with an index and a row-level security policy. Due to [this problem](https://dba.stackexchange.com/q/232789/188406) (more details: [1](https://stackoverflow.com/q/48230535/1048572), [2](https://www.postgresql.org/message-id/flat/2811772.0XtDgEdalL@peanuts2), [3](https://stackoverflow.com/q/63008838/1048572), [4](https://www.postgresql.org/message-id/flat/CAGrP7a2t+JbeuxpQY+RSvNe4fr3+==UmyimwV0GCD+wcrSSb=w@mail.gmail.com)), the index is not used when the policy applies, which makes my queries unbearably slow.

The workaround I am contemplating is to create a VIEW with `security_invoker = false` and `security_barrier = false`. (If I do enable the `security_barrier`, the query again doesn't use the index.)

The problem I am facing now is that I cannot just change the queries to use `FROM my_view AS example` instead of `FROM my_table AS example`, since some of them use functions that are defined to take the `my_table` composite type. A simplified example:

```
CREATE TABLE example (
  id int,
  name text,
  is_visible boolean
);
CREATE VIEW test AS SELECT * FROM example WHERE is_visible;
CREATE FUNCTION prop(e example) RETURNS text
LANGUAGE SQL AS $$ SELECT e.id::text || ': ' || e.name; $$;

SELECT e.prop FROM example e; -- works
SELECT e.prop FROM test e;    -- ERROR: column e.prop does not exist
```

([online demo](https://dbfiddle.uk/cb0bn3NV))

Now the question is **how to cast the rows to the expected type?** There is [this question](https://dba.stackexchange.com/q/247240/188406) and I also found a way to do this using the ROW constructor, but I'm not certain how good this is:

```
SELECT e.prop FROM (SELECT (ROW(test.*)::example).* FROM test) e;
```

It's nice that I can just use it as a drop-in replacement for the table expression (without changing anything else in the query), and it does work (Postgres accepts it and does use my index when I have the respective WHERE clause), but it looks horrible.

Are there problems with my approach that I am missing? Is there a better solution?
Bergi (514 rep)
Oct 18, 2022, 11:14 AM • Last activity: Aug 6, 2025, 03:06 PM
0 votes
1 answer
178 views
select items from multiple rows and add to one
I need some help on SQL, as this kind of select is beyond my knowledge. The result of the select should be in one row, as shown in the picture (screenshot of the source table and desired one-row result omitted). Can someone provide any ideas on how to achieve this?

- if col_name1 = AA, add the col_name2 value as col_name5
- if col_name1 = BB, add the col_name3 value as col_name6
- if col_name1 = CC, add the col_name4 value of the first CC row (c31) as col_name7 and the col_name4 value of the second CC row (c31) as col_name8

The original table might not have all 4 ids.
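The conditional pivot described above can be stated precisely with a small sketch. This is a Python illustration of the intended mapping, not a SQL answer; the column names and sample values are hypothetical, taken from the rules in the question:

```python
# Sketch of the conditional pivot: rows of (col_name1..col_name4) collapse
# into one output row with col_name5..col_name8 filled per the rules above.
def pivot(rows):
    out = {"col_name5": None, "col_name6": None, "col_name7": None, "col_name8": None}
    cc_seen = 0
    for c1, c2, c3, c4 in rows:
        if c1 == "AA":
            out["col_name5"] = c2           # AA contributes col_name2
        elif c1 == "BB":
            out["col_name6"] = c3           # BB contributes col_name3
        elif c1 == "CC":
            cc_seen += 1                    # first CC -> col_name7, second -> col_name8
            if cc_seen == 1:
                out["col_name7"] = c4
            else:
                out["col_name8"] = c4
    return out

rows = [
    ("AA", "a2", None, None),
    ("BB", None, "b3", None),
    ("CC", None, None, "c41"),
    ("CC", None, None, "c42"),
]
```

In SQL this usually maps to conditional aggregation, e.g. `MAX(CASE WHEN col_name1 = 'AA' THEN col_name2 END)` per output column, with a row number to distinguish the two CC rows.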
vandit (1 rep)
Oct 5, 2017, 08:51 AM • Last activity: Jul 7, 2025, 06:04 AM
-1 votes
1 answer
140 views
MySQL 8.0.41 - 83 % Waiting for row lock on AO_319474_QUEUE
**Environment**

We run three Jira Service Management DC clusters that all share the same topology:

* MySQL 8.0.36 on Ubuntu 22.04: 32 vCPU, 100 GB RAM, buffer pool now 80 GB (only ~33 GB in use), NVMe storage.
* Each Jira node holds a 50-connection pool (3 nodes × 50 = 150 sessions per cluster).
* Workload is steady at 500 TPS.

A week ago we removed an unrelated update bottleneck and suddenly discovered a hot-row issue on a Jira table:

- 83% of total wait time is "Waiting for row lock" (Percona PMM, 6-hour window).
- The slow log is 90% `UPDATE AO_319474_QUEUE … WHERE ID = ?`.
- `SHOW ENGINE INNODB STATUS` always shows 50-70 transactions waiting on the same PK row.
- CPU, disk I/O, redo, buffer pool, latches: all look healthy.

**The hot row**

```
mysql> SELECT ID, NAME FROM AO_319474_QUEUE WHERE ID = 11459545;
+----------+---------------------------------------------+
| ID       | NAME                                        |
+----------+---------------------------------------------+
| 11459545 | servicedesk.base.internal.processing.master |
+----------+---------------------------------------------+
```

**Query details**

```
mysql> explain SELECT ID, NAME FROM AO_319474_QUEUE WHERE ID = 11459545;
+----+-------------+-----------------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
| id | select_type | table           | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra |
+----+-------------+-----------------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
|  1 | SIMPLE      | AO_319474_QUEUE | NULL       | const | PRIMARY       | PRIMARY | 8       | const |    1 |   100.00 | NULL  |
+----+-------------+-----------------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.01 sec)
```

**Update query**

```
update AO_319474_QUEUE
set CLAIMANT_TIME = 1748437310646
where AO_319474_QUEUE.ID = 11459545
  and (AO_319474_QUEUE.CLAIMANT = 'c.a.s.plugins.base.internal.events.runner.async.XXX'
       and AO_319474_QUEUE.CLAIMANT_TIME >= 1748437010646)
```
**Update query details**

```
mysql> explain update AO_319474_QUEUE
    -> set CLAIMANT_TIME = 1748437310373
    -> where AO_319474_QUEUE.ID = 11459545
    -> and (AO_319474_QUEUE.CLAIMANT = 'c.a.s.plugins.base.internal.events.runner.async.XXX'
    -> and AO_319474_QUEUE.CLAIMANT_TIME >= 1748437010373);
+----+-------------+-----------------+------------+-------+----------------------------------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table           | partitions | type  | possible_keys                          | key     | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-----------------+------------+-------+----------------------------------------+---------+---------+-------+------+----------+-------------+
|  1 | UPDATE      | AO_319474_QUEUE | NULL       | range | PRIMARY,index_ao_319474_queue_claimant | PRIMARY | 8       | const |    1 |   100.00 | Using where |
+----+-------------+-----------------+------------+-------+----------------------------------------+---------+---------+-------+------+----------+-------------+
```

**Warnings**

```
mysql> SHOW WARNINGS LIMIT 10;
Level:   Note
Code:    1003
Message: update scr_518f3268.AO_319474_QUEUE set scr_518f3268.AO_319474_QUEUE.CLAIMANT_TIME = 1748437310373
         where ((scr_518f3268.AO_319474_QUEUE.CLAIMANT = 'c.a.s.plugins.base.internal.events.runner.async.XXX')
         and (scr_518f3268.AO_319474_QUEUE.ID = 11459545)
         and (scr_518f3268.AO_319474_QUEUE.CLAIMANT_TIME >= 1748437010373))
1 row in set (0.01 sec)
```

So the DB is idle, but everybody's queuing on one hot row. Mainly we were looking at stats from `SHOW ENGINE INNODB STATUS\G`.
**Already tried / confirmed**

- Increased `innodb_buffer_pool_size` from 16 G to 80 G; the buffer pool hit rate stays 1000/1000, so I/O really is not the issue.
- No I/O or redo backlog.
- Notified Atlassian support; they're still digging on the AO side.

**Indexes**

Index summary:

```
mysql> SELECT INDEX_NAME, COLUMN_NAME, SEQ_IN_INDEX, NON_UNIQUE
    -> FROM INFORMATION_SCHEMA.STATISTICS
    -> WHERE TABLE_SCHEMA = 'scr_518f3268'
    -> AND TABLE_NAME = 'AO_319474_QUEUE'
    -> ORDER BY INDEX_NAME, SEQ_IN_INDEX;
+--------------------------------+-------------+--------------+------------+
| INDEX_NAME                     | COLUMN_NAME | SEQ_IN_INDEX | NON_UNIQUE |
+--------------------------------+-------------+--------------+------------+
| index_ao_319474_queue_claimant | CLAIMANT    | 1            | 1          |
| index_ao_319474_queue_topic    | TOPIC       | 1            | 1          |
| PRIMARY                        | ID          | 1            | 0          |
| U_AO_319474_QUEUE_NAME         | NAME        | 1            | 0          |
+--------------------------------+-------------+--------------+------------+
4 rows in set (0.00 sec)
```

Analyze table:

```
mysql> ANALYZE TABLE AO_319474_QUEUE;
+------------------------------+---------+----------+----------+
| Table                        | Op      | Msg_type | Msg_text |
+------------------------------+---------+----------+----------+
| scr_518f3268.AO_319474_QUEUE | analyze | status   | OK       |
+------------------------------+---------+----------+----------+
1 row in set (0.00 sec)
```

**Create table info (SHOW CREATE)**

```
mysql> SHOW CREATE TABLE AO_319474_QUEUE\G
*************************** 1. row ***************************
       Table: AO_319474_QUEUE
Create Table: CREATE TABLE AO_319474_QUEUE (
  CLAIMANT varchar(127) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  CLAIMANT_TIME bigint DEFAULT NULL,
  CREATED_TIME bigint NOT NULL,
  ID bigint NOT NULL AUTO_INCREMENT,
  MESSAGE_COUNT bigint NOT NULL,
  MODIFIED_TIME bigint NOT NULL,
  NAME varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  PURPOSE varchar(450) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  TOPIC varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  PRIMARY KEY (ID),
  UNIQUE KEY U_AO_319474_QUEUE_NAME (NAME),
  KEY index_ao_319474_queue_topic (TOPIC),
  KEY index_ao_319474_queue_claimant (CLAIMANT)
) ENGINE=InnoDB AUTO_INCREMENT=12485939 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
1 row in set (0.01 sec)
```

**Questions**

Mainly interested in ways to resolve or further troubleshoot this:

- Are there any known MySQL 8.0.41+ bugs that make single-row updates stall longer than expected?
- Any ideas how to mitigate the problem? We can't change the query, as it comes from the product (Atlassian Jira Service Management / Data Center).
- Any low-hanging fruit we can try for a quick fix?
- If it's a virtualisation issue, what do we need to ask for / capture to discuss with our provider in the DC?

I'm a Java engineer, not a full-time DBA; feel free to point out obvious RTFM gaps.

Attachments (screenshots omitted): Percona InnoDB Buffer Pool, InnoDB Locking, Percona Query Analytics, InnoDB Log I/O.
user340962 (1 rep)
May 27, 2025, 09:00 AM • Last activity: Jun 24, 2025, 09:07 PM
0 votes
1 answer
225 views
MySQL 8.0.20 - Master Replica scheme, errors during replication process
This thread follows a previous one: https://dba.stackexchange.com/questions/285442/mysql-8-0-20-master-replica-scheme-increasing-delay-between-source-and-replic

The replica server has been configured this way, which ensures fast replication:

```
innodb_flush_method = O_DIRECT
SET GLOBAL sync_binlog = 0;
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```

The source binlogs are in ROW format. As I understand it, we cannot change to STATEMENT while a replication process is already running, and not without restarting the source database, which is touchy. **Am I right?**

However, the replication process sometimes fails because of nonexistent records:

`Last_SQL_Error: Could not execute Update_rows event on table levelup.videos; Can't find record in 'videos', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log binlog.001822, end_log_pos 328518491`

We checked the source binlog file without finding anything that would show us the incorrect record or the source of the error. **Any help decrypting the logs would be very much appreciated. Thanks in advance.**

Command executed on the master:
```
--base64-output=decode-rows --start-position=328517915 --stop-position=328518679 binlog.001822 --verbose
```

Result:
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 156
#210217  9:32:50 server id 1  end_log_pos 125 CRC32 0x74769bdd  Start: binlog v 4, server v 8.0.22-13 created 210217  9:32:50
# at 328517915
#210217  9:41:23 server id 1  end_log_pos 328517994 CRC32 0xa92a39bf    Anonymous_GTID  last_committed=628870   sequence_number=628871  rbr_only=yes    original_committed_timestamp=1613551283843458   immediate_commit_timestamp=1613551283843458     transaction_length=607
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
# original_commit_timestamp=1613551283843458 (2021-02-17 09:41:23.843458 CET)
# immediate_commit_timestamp=1613551283843458 (2021-02-17 09:41:23.843458 CET)
/*!80001 SET @@session.original_commit_timestamp=1613551283843458*//*!*/;
/*!80014 SET @@session.original_server_version=80022*//*!*/;
/*!80014 SET @@session.immediate_server_version=80022*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 328517994
#210217  9:41:23 server id 1  end_log_pos 328518081 CRC32 0x56149f5b    Query   thread_id=219241        exec_time=0     error_code=0
SET TIMESTAMP=1613551283/*!*/;
SET @@session.pseudo_thread_id=219241/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1174405120/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=255,@@session.collation_connection=255,@@session.collation_server=255/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
/*!80011 SET @@session.default_collation_for_utf8mb4=255*//*!*/;
BEGIN
/*!*/;
# at 328518081
#210217  9:41:23 server id 1  end_log_pos 328518171 CRC32 0x40249bce    Table_map: XXX.videos mapped to number 227
# at 328518171
#210217  9:41:23 server id 1  end_log_pos 328518491 CRC32 0x8a73f2c2    Update_rows: table id 227 flags: STMT_END_F
### UPDATE XXX.videos
### WHERE
###   @1=229814401
###   @2=9
###   @3='6801549427720504325'
###   @4='6929519055464353030'
###   @5='2021:02:15'
###   @6='TAG these teammates 😭 #FamilyDay'
###   @7=35
###   @8=NULL
###   @9=NULL
###   @10=1300000
###   @11=8101
###   @12=306600
###   @13=NULL
###   @14=26100
###   @15=NULL
###   @16='0000:00:00'
### SET
###   @1=229814401
###   @2=9
###   @3='6801549427720504325'
###   @4='6929519055464353030'
###   @5='2021:02:15'
###   @6='TAG these teammates 😭 #FamilyDay'
###   @7=35
###   @8=''
###   @9=3
###   @10=1300000
###   @11=8101
###   @12=306600
###   @13=NULL
###   @14=26100
###   @15=NULL
###   @16='0000:00:00'
# at 328518491
#210217  9:41:23 server id 1  end_log_pos 328518522 CRC32 0x7b25222b    Xid = 3758209563
COMMIT/*!*/;
# at 328518522
#210217  9:41:23 server id 1  end_log_pos 328518601 CRC32 0x0342fc32    Anonymous_GTID  last_committed=628871   sequence_number=628872  rbr_only=yes    original_committed_timestamp=1613551283854427   immediate_commit_timestamp=1613551283854427     transaction_length=473
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
# original_commit_timestamp=1613551283854427 (2021-02-17 09:41:23.854427 CET)
# immediate_commit_timestamp=1613551283854427 (2021-02-17 09:41:23.854427 CET)
/*!80001 SET @@session.original_commit_timestamp=1613551283854427*//*!*/;
/*!80014 SET @@session.original_server_version=80022*//*!*/;
/*!80014 SET @@session.immediate_server_version=80022*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 328518601
#210217  9:41:23 server id 1  end_log_pos 328518679 CRC32 0xac04ba69    Query   thread_id=49970 exec_time=0     error_code=0
SET TIMESTAMP=1613551283/*!*/;
BEGIN
/*!*/;
ROLLBACK /* added by mysqlbinlog */ /*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
user1458792 (13 rep)
Feb 17, 2021, 04:07 PM • Last activity: Jun 9, 2025, 07:11 PM
0 votes
2 answers
1100 views
Replication error from Mariadb 10.1 to Mysql 5.1/5.0/5.5 when master's logging format is set to row based
While replicating from MariaDB 10.1 to lower versions of MySQL (5.0, 5.1, 5.5) or MariaDB (5.2, 5.5), if the master's `binlog_format` is set to ROW, replication fails with the following message on the slave (`show slave status \G;`):

> Last_Error: Table definition on master and slave does not match: Column 18 type mismatch - received type 19, rtmariadb10.empdetails has type 11

Here the master is MariaDB 10.1 with `binlog_format: row`; the slave is MySQL 5.1 with `binlog_format=statement/row/mixed` (any one of these). Can someone please help solve this issue?
Avijit batabyal (11 rep)
Dec 8, 2016, 12:13 PM • Last activity: Jun 7, 2025, 05:08 AM
26 votes
4 answers
39777 views
How to limit maximum number of rows in a table to just 1
I have a configuration table in my SQL Server database and this table should only ever have one row. To help future developers understand this, I'd like to prevent more than one row of data being added. I have opted to use a trigger for this, as below...

```
ALTER TRIGGER OnlyOneConfigRow
ON [dbo].[Configuration]
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @HasZeroRows BIT;

    SELECT @HasZeroRows = CASE WHEN COUNT(Id) = 0 THEN 1 ELSE 0 END
    FROM [dbo].[Configuration];

    IF EXISTS (SELECT [Id] FROM inserted) AND @HasZeroRows = 0
    BEGIN
        RAISERROR ('You should not add more than one row into the config table. ', 16, 1)
    END
END
```

This does not throw an error, but it is also not allowing the first row to go in. Also, is there a more effective / more self-explanatory way of limiting the number of rows that can be inserted into a table to just 1? Am I missing any built-in SQL Server feature?
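A declarative alternative to a trigger (not what the question's code does, but a common single-row-table pattern) is to pin the primary key to a fixed value with a CHECK constraint, so the engine itself rejects a second row. A minimal sketch using Python's bundled sqlite3 for illustration; in SQL Server the analogous idea would be a PRIMARY KEY plus `CHECK (Id = 1)`:

```python
import sqlite3

# Single-row table pattern: the key may only ever be 1, so any second row
# violates either the CHECK constraint or primary-key uniqueness.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Configuration (
        Id      INTEGER PRIMARY KEY CHECK (Id = 1),
        Setting TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO Configuration (Id, Setting) VALUES (1, 'default')")

try:
    # This insert must fail: Id = 2 breaks the CHECK constraint.
    conn.execute("INSERT INTO Configuration (Id, Setting) VALUES (2, 'extra')")
    second_row_rejected = False
except sqlite3.IntegrityError:
    second_row_rejected = True
```

The constraint documents the intent to future developers directly in the schema, with no trigger logic to maintain.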
Dib (447 rep)
Jun 21, 2016, 08:25 AM • Last activity: Jun 6, 2025, 05:11 PM
0 votes
1 answer
655 views
Postgresql: Trigger is not working at times
I have a PostgreSQL trigger that is not firing sometimes, even though the status is always shown as "enabled". My trigger code is as follows:
```
CREATE OR REPLACE FUNCTION audit_src_exhibit() RETURNS trigger AS $BODY$
BEGIN
  IF TG_OP = 'INSERT' THEN
    IF new.audit_created_date IS NULL THEN
      new.audit_created_date := current_timestamp;
      new.audit_created_by := session_user::text;
    END IF;
  ELSE
    IF new.audit_modified_date IS NULL THEN
      new.audit_modified_date := current_timestamp;
      new.audit_modified_by := session_user::text;
    END IF;
  END IF;
  RETURN NEW;
END; $BODY$ LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER audit_src_exhibit_tr
BEFORE INSERT OR UPDATE ON
FOR EACH ROW EXECUTE PROCEDURE audit_src_exhibit();
```
Is there any specific reason for this behaviour? Does my code show any signs of known issues that would result in triggers not firing? I am getting the audit columns as empty even though some inserts and updates happened today (screenshot omitted).
Arun (13 rep)
Feb 7, 2023, 08:07 AM • Last activity: Jun 15, 2024, 02:01 PM
2 votes
1 answer
8536 views
Array of strings when updating a field
Unintentionally discovered that the following query works in PostgreSQL:

```
UPDATE "sometable" SET "somefield" = ('string1', 'string2') WHERE "id" = 1;
```

I'm passing what looks like an array of strings to the field, and the result is the string '("string1","string2")', not an array as expected, at least when using the PHP pgsql extension. How does this happen, and what does ('string1', 'string2') mean in PostgreSQL? I don't think this works in other SQL RDBMSes.
happy_marmoset (539 rep)
Sep 9, 2015, 09:11 AM • Last activity: May 6, 2024, 02:01 AM
1 vote
1 answer
4762 views
The connection was recovered and rowcount in the first query is not available. Please execute another query to get a valid rowcount
I get the error message as appears in the title and seen below when running a query against one of our SQL databases. The error seems to be consistent with this specific query when trying to return the @@ROWCOUNT of an executed transaction. However, when returning @@ROWCOUNT within other queries, the error message does not appear. I've been running the query from SSMS and haven't changed any of my connection settings. The query is quite long and I was a bit reluctant to post it; however, please see it below.

Error Message:

> The connection was recovered and rowcount in the first query is not available. Please execute another query to get a valid rowcount

SQL Syntax:

```
SET NOCOUNT ON;

DECLARE @MigrationTableKey INT;
DECLARE @StartDtTime DATETIME = GETDATE();
DECLARE @EndDtTime DATETIME = GETDATE();
DECLARE @ChangeVersion INT = 0;
DECLARE @RowCount INT = 0;
DECLARE @TransferHistoryKey INT = 0;

SELECT @MigrationTableKey = MigrationTableKey
FROM Administration.MigrationTables
WHERE MigrationSchema = 'dbo' AND MigrationTable = 'DplanJobLineTask';

SELECT @ChangeVersion = ISNULL(MAX(SYS_CHANGE_VERSION), 0)
FROM Logging.ChangeTrackingHistory
WHERE MigrationTableKey = @MigrationTableKey;

INSERT INTO dbo.DplanJobLineTask (
    [SYS_CHANGE_VERSION], [SYS_CHANGE_CREATION_VERSION], [SYS_CHANGE_OPERATION], [DtTimeAdded],
    [Job No_], [Sub-Job Line No_], [Line No_], [Job Type], [Job Sub-Type], [Job Status],
    [Retailer Code], [Customer Store Code], [Bill-To Customer No_], [Supplier], [Brand],
    [Task Code], [Sub-Activity Code], [JAS Type Code], [Job Style], [Job Priority],
    [Brief Reqd], [Survey Requirements], [Parent Job From Year No_], [Parent Job From Week No_],
    [Parent Job To Year No_], [Parent Job To Week No_], [Job Description], [Job Short Name],
    [Customer Reference], [Customer Contact], [Billing Type], [Billable], [Billing Detail],
    [Preference - Sat], [Preference - Sun], [Preference - Mon], [Preference - Tue],
    [Preference - Wed], [Preference - Thu], [Preference - Fri], [Sub-Job Completed Date],
    [Region Code], [Area Code], [Tier1 Manager Code], [Tier2 Manager Code], [Tier3 Manager Code],
    [Ready to Archive], [Cancellation Reason], [Autocopy], [Copied From Job No_],
    [Copied From Job Line No_], [Reporting Deadline Day], [Shift Pattern Code], [Query Reasons],
    [Reset Flag], [Reset Reason], [Force Closed], [Previous Status], [Last Status Change Date],
    [Training Day Required], [POS required], [Stock Delivery To], [Multi-Region Code],
    [Supervisor (Store) Code], [Store Name], [Linked Job No_], [Group Job], [Captains Log Job No_],
    [Sub-Job No_], [Job Suffix Code], [Week No_ Code (JSx)], [Sub_ Activity Code (JSx)],
    [Task No_ Code (JSx)], [Job Weekly Summary Code], [Exported to D-Lite], [Job Status Changed],
    [Year No_ Code], [Sub-Job Year No_], [Sub-Job Week No_], [Week From Date], [Week To Date],
    [Fast Track], [Completed (D-Survey)], [Completed Date (D-Survey)], [Completed Time (D-Survey)],
    [Job Schedule Line No_], [Task Description], [Created Date], [Created Time],
    [Created by User ID], [Last Modified Date], [Last Modified Time], [Modified by User ID],
    [Tier1_Manager_Code_Id], [Tier2_Manager_Code_Id], [Tier3_Manager_Code_Id]
)
SELECT
    ChangeTab.SYS_CHANGE_VERSION, ChangeTab.SYS_CHANGE_CREATION_VERSION, ChangeTab.SYS_CHANGE_OPERATION,
    @StartDtTime, ChangeTab.[Job No_], ChangeTab.[Sub-Job Line No_], ChangeTab.[Line No_],
    JLine.[Job Type], JLine.[Job Sub-Type], JLine.[Job Status], JLine.[Retailer Code],
    JLine.[Customer Store Code], JLine.[Bill-To Customer No_], JLine.[Supplier], JLine.[Brand],
    JLine.[Task Code], JLine.[Sub-Activity Code], JLine.[JAS Type Code], JLine.[Job Style],
    JLine.[Job Priority], JLine.[Brief Reqd], JLine.[Survey Requirements],
    JLine.[Parent Job From Year No_], JLine.[Parent Job From Week No_], JLine.[Parent Job To Year No_],
    JLine.[Parent Job To Week No_], JLine.[Job Description], JLine.[Job Short Name],
    JLine.[Customer Reference], JLine.[Customer Contact], JLine.[Billing Type], JLine.[Billable],
    JLine.[Billing Detail], JLine.[Preference - Sat], JLine.[Preference - Sun], JLine.[Preference - Mon],
    JLine.[Preference - Tue], JLine.[Preference - Wed], JLine.[Preference - Thu], JLine.[Preference - Fri],
    JLine.[Sub-Job Completed Date], JLine.[Region Code], JLine.[Area Code], JLine.[Tier1 Manager Code],
    JLine.[Tier2 Manager Code], JLine.[Tier3 Manager Code], JLine.[Ready to Archive],
    JLine.[Cancellation Reason], JLine.[Autocopy], JLine.[Copied From Job No_],
    JLine.[Copied From Job Line No_], JLine.[Reporting Deadline Day], JLine.[Shift Pattern Code],
    JLine.[Query Reasons], JLine.[Reset Flag], JLine.[Reset Reason], JLine.[Force Closed],
    JLine.[Previous Status], JLine.[Last Status Change Date], JLine.[Training Day Required],
    JLine.[POS required], JLine.[Stock Delivery To], JLine.[Multi-Region Code],
    JLine.[Supervisor (Store) Code], JLine.[Store Name], JLine.[Linked Job No_], JLine.[Group Job],
    JLine.[Captains Log Job No_], JLine.[Sub-Job No_], JLine.[Job Suffix Code],
    JLine.[Week No_ Code (JSx)], JLine.[Sub_ Activity Code (JSx)], JLine.[Task No_ Code (JSx)],
    JLine.[Job Weekly Summary Code], JLine.[Exported to D-Lite], JLine.[Job Status Changed],
    JLine.[Year No_ Code], JLine.[Sub-Job Year No_], JLine.[Sub-Job Week No_], JLine.[Week From Date],
    JLine.[Week To Date], JLine.[Fast Track], JLine.[Completed (D-Survey)],
    JLine.[Completed Date (D-Survey)], JLine.[Completed Time (D-Survey)], JLine.[Job Schedule Line No_],
    JLine.[Task Description], JLine.[Created Date], JLine.[Created Time], JLine.[Created by User ID],
    JLine.[Last Modified Date], JLine.[Last Modified Time], JLine.[Modified by User ID],
    CASE WHEN ISNUMERIC(JLine.[Tier1 Manager Code]) = 1 THEN CAST(JLine.[Tier1 Manager Code] AS INT) ELSE NULL END [Tier1_Manager_Code_Id],
    CASE WHEN ISNUMERIC(JLine.[Tier2 Manager Code]) = 1 THEN CAST(JLine.[Tier2 Manager Code] AS INT) ELSE NULL END [Tier2_Manager_Code_Id],
    CASE WHEN ISNUMERIC(JLine.[Tier3 Manager Code]) = 1 THEN CAST(JLine.[Tier3 Manager Code] AS INT) ELSE NULL END [Tier3_Manager_Code_Id]
FROM CHANGETABLE(CHANGES [DS-LOG-TEST].[dbo].[LIVE - DEE SET LOGISTICS$D-Plan Job Line Task], 1) AS ChangeTab
LEFT JOIN [DS-LOG-TEST].[dbo].[LIVE - DEE SET LOGISTICS$D-Plan Job Line Task] JLine
       ON ChangeTab.[Job No_] = JLine.[Job No_]
      AND ChangeTab.[Line No_] = JLine.[Line No_]
      AND ChangeTab.[Sub-Job Line No_] = JLine.[Sub-Job Line No_]
LEFT JOIN dbo.DplanJobLineTask JLine1
       ON ChangeTab.[Job No_] = JLine1.[Job No_] COLLATE DATABASE_DEFAULT
      AND ChangeTab.[Line No_] = JLine1.[Line No_]
      AND ChangeTab.[Sub-Job Line No_] = JLine1.[Sub-Job Line No_]
LEFT JOIN Logging.ChangeTrackingHistory CTH
       ON ChangeTab.SYS_CHANGE_VERSION = CTH.SYS_CHANGE_VERSION
      AND ChangeTab.SYS_CHANGE_CREATION_VERSION = CTH.SYS_CHANGE_CREATION_VERSION
      AND CTH.MigrationTableKey = @MigrationTableKey
WHERE JLine1.[Job No_] IS NULL
  AND CTH.ChangeTrackingHistoryKey IS NULL
  AND ChangeTab.SYS_CHANGE_VERSION > @ChangeVersion;

INSERT INTO Logging.ChangeTrackingHistory (MigrationTableKey, SYS_CHANGE_VERSION, SYS_CHANGE_CREATION_VERSION)
SELECT @MigrationTableKey MigrationTableKey, DJL.SYS_CHANGE_VERSION, DJL.SYS_CHANGE_CREATION_VERSION
FROM dbo.DplanJobLineTask DJL
LEFT JOIN Logging.ChangeTrackingHistory CTH
       ON DJL.SYS_CHANGE_VERSION = CTH.SYS_CHANGE_VERSION
      AND DJL.SYS_CHANGE_CREATION_VERSION = CTH.SYS_CHANGE_CREATION_VERSION
      AND CTH.MigrationTableKey = @MigrationTableKey
WHERE CTH.ChangeTrackingHistoryKey IS NULL
GROUP BY DJL.SYS_CHANGE_VERSION, DJL.SYS_CHANGE_CREATION_VERSION;

SELECT @RowCount = @@ROWCOUNT, @EndDtTime = GETDATE();
```
Krishn (383 rep)
Apr 10, 2019, 03:00 PM • Last activity: Feb 22, 2024, 08:06 AM
60 votes
7 answers
178325 views
Select columns inside json_agg
I have a query like:

```
SELECT a.id, a.name, json_agg(b.*) AS "item"
FROM a
JOIN b ON b.item_id = a.id
GROUP BY a.id, a.name;
```

How can I select the columns in `b` so I don't have `b.item_id` in the JSON object? I have read about `ROW`, but it returns a JSON object like:

```
{"f1": "Foo", "f2": "Bar"}
```

I would need to remap the JSON object once it is fetched to match the proper column keys. I'd like to avoid that and keep the original column names.
Yanick Rochon (1651 rep)
Jul 3, 2014, 02:55 PM • Last activity: Feb 7, 2024, 10:49 PM
2 votes
1 answer
1298 views
IS NOT DISTINCT FROM vs row-wise equality with =
Looking at the query below:

```
SELECT null IS NOT DISTINCT FROM null AS indf,
       null = null AS eq;

 indf | eq
------+----
 t    |
```

From this we can see the result of the `IS NOT DISTINCT FROM` is `true`, while the result of `eq` is NULL (shown blank), which behaves like false in a join condition. The following then logically follows from that:

```
-- returns 1 row.
SELECT *
FROM ( VALUES (null) ) AS t(a)
JOIN ( VALUES (null) ) AS g(a)
  ON t.a IS NOT DISTINCT FROM g.a;

-- returns 0 rows.
SELECT *
FROM ( VALUES (null) ) AS t(a)
JOIN ( VALUES (null) ) AS g(a)
  ON t.a = g.a;
```

Because if the condition returns null, the join fails. But this throws me off with [row-wise comparison](https://www.postgresql.org/docs/current/static/functions-comparisons.html#ROW-WISE-COMPARISON):

```
-- returns 1 row.
SELECT *
FROM ( VALUES (null) ) AS t(a)
JOIN ( VALUES (null) ) AS g(a)
  ON (t) IS NOT DISTINCT FROM (g);

-- also returns one row.
SELECT *
FROM ( VALUES (null) ) AS t(a)
JOIN ( VALUES (null) ) AS g(a)
  ON (t) = (g);
```

Why does [row-wise comparison](https://www.postgresql.org/docs/current/static/functions-comparisons.html#ROW-WISE-COMPARISON) treat nulls differently than scalar comparison? And is there any point to IS NOT DISTINCT FROM in row-wise comparison?
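The scalar semantics in question can be sketched in miniature: SQL's `=` propagates NULL under three-valued logic, while IS NOT DISTINCT FROM is null-safe and always returns true or false. A small Python sketch of those rules (using `None` for SQL NULL; this models the semantics only, not any Postgres internals):

```python
def sql_eq(a, b):
    """Scalar '=': result is NULL (None) if either operand is NULL."""
    if a is None or b is None:
        return None
    return a == b

def is_not_distinct_from(a, b):
    """Null-safe equality: two NULLs compare as equal; never returns NULL."""
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

def join_keeps_row(cond):
    """A JOIN keeps the row only when the condition is strictly true,
    so a NULL condition behaves the same as false."""
    return cond is True
```

This is exactly why the first pair of queries differs: `t.a = g.a` yields NULL for two NULLs, and the join drops the row, while IS NOT DISTINCT FROM yields true.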
Evan Carroll (65502 rep)
Sep 11, 2018, 11:25 PM • Last activity: Dec 29, 2023, 05:44 PM
51 votes
2 answers
83039 views
Replace multiple columns with single JSON column
I am running PostgreSQL 9.3.4. I have a table with 3 columns:

| id | name | addr |
|----|------|------|
| 1  | n1   | ad1  |
| 2  | n2   | ad2  |

I need to move the data to a new table with a JSON column like:

| id | data |
|----|------|
| 1  | {"name": "n1", "addr": "ad1"} |
| 2  | {"name": "n2", "addr": "ad2"} |

I tried:

```sql
SELECT t.id, row_to_json(t) AS data
FROM (SELECT id, name, addr FROM myt) t;
```

But that includes id in the result. So row_to_json is not the solution for me. Is there a way to get only the columns I need (name & addr)?
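One approach that works on 9.3 (a sketch against the described table): keep `id` outside the row that gets serialized, and hand `row_to_json` a correlated subselect over just the wanted columns, which preserves their names as the JSON keys:

```sql
-- Only name and addr end up inside the JSON value; id stays a plain column.
SELECT m.id,
       row_to_json((SELECT d FROM (SELECT m.name, m.addr) AS d)) AS data
FROM myt m;
```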
AliBZ (1827 rep)
Feb 2, 2015, 09:30 PM • Last activity: Sep 22, 2023, 12:00 AM
2 votes
1 answers
264 views
How can I paginate when ordering by `date_bin`?
I have the following query
SELECT u.update_time, about_me
FROM users u
ORDER BY date_bin('14 days', u.update_time, '2023-04-07 23:11:56.471560Z') DESC, LENGTH(u.about_me) DESC, u.user_id;
I get the following:

| update_time | about_me |
|-------------|----------|
| 2023-04-06 19:59:56.771388 +00:00 | Hello! How are you? |
| 2023-04-02 03:31:09.833925 +00:00 | Hello!!! |
| 2023-04-06 00:36:26.822102 +00:00 | Hello! |
| 2023-04-05 19:16:20.968274 +00:00 | Hey! |

I now want to only get everything after the 3rd row. So I would do the following:
SELECT u.update_time, about_me
FROM users u
WHERE (date_bin('14 days', u.update_time, '2023-04-07 23:11:56.471560Z'), LENGTH(u.about_me)) <
      ('2023-04-07 03:05:24.990233 +00:00', 6)
ORDER BY date_bin('14 days', u.update_time, '2023-04-07 23:11:56.471560Z') DESC, LENGTH(u.about_me) DESC, u.user_id;
But the issue is that I'm still getting the exact same results; it's as if the WHERE isn't working. How can I paginate the query?
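For reference, one way this kind of keyset pagination is usually written (a sketch; `:last_bin`, `:last_len`, and `:last_id` are hypothetical placeholders for values taken from the last row of the previous page). Two things have to change: the cursor value must be the *binned* timestamp, not a raw `update_time`, and because `user_id` sorts ascending while the other two keys sort descending, a single row-value `<` cannot express all three keys:

```sql
SELECT u.update_time, u.about_me
FROM users u
WHERE (date_bin('14 days', u.update_time, '2023-04-07 23:11:56.471560Z'),
       LENGTH(u.about_me)) < (:last_bin, :last_len)
   OR ((date_bin('14 days', u.update_time, '2023-04-07 23:11:56.471560Z'),
        LENGTH(u.about_me)) = (:last_bin, :last_len)
       AND u.user_id > :last_id)
ORDER BY date_bin('14 days', u.update_time, '2023-04-07 23:11:56.471560Z') DESC,
         LENGTH(u.about_me) DESC,
         u.user_id;
```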
DanMossa (145 rep)
Apr 7, 2023, 06:31 AM • Last activity: Apr 11, 2023, 12:26 AM
0 votes
2 answers
2847 views
Postgresql: Trigger is not working at times
I have a PostgreSQL trigger that sometimes does not fire, even though its status is always shown as "enabled". My trigger code is as follows:

```sql
CREATE OR REPLACE FUNCTION audit_src_exhibit() RETURNS trigger AS
$BODY$
BEGIN
    IF TG_OP = 'INSERT' THEN
        IF new.audit_created_date IS NULL THEN
            new.audit_created_date := current_timestamp;
            new.audit_created_by   := session_user::text;
        END IF;
    ELSE
        IF new.audit_modified_date IS NULL THEN
            new.audit_modified_date := current_timestamp;
            new.audit_modified_by   := session_user::text;
        END IF;
    END IF;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER audit_src_exhibit_tr
BEFORE INSERT OR UPDATE ON
FOR EACH ROW EXECUTE PROCEDURE audit_src_exhibit();
```

Is there any specific reason for this behaviour? Does my code show any signs of known issues that would result in triggers not firing? I find the audit columns populated as null for some inserts that happened today.
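As a first diagnostic step (a sketch; `pg_trigger` is the system catalog, and the trigger name is taken from the question), it can help to confirm which table the trigger is actually attached to and in which enable mode:

```sql
-- tgenabled: 'O' = enabled in origin sessions, 'D' = disabled,
-- 'R' = replica-only, 'A' = always fires.
SELECT tgname, tgenabled, tgrelid::regclass AS table_name
FROM pg_trigger
WHERE tgname = 'audit_src_exhibit_tr';
```

If `tgenabled` is `'O'` but rows still arrive untouched, one common culprit is a session loading data with `session_replication_role = replica`, which skips origin triggers even though the trigger shows as enabled.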
Arun (13 rep)
Feb 7, 2023, 09:43 AM • Last activity: Feb 7, 2023, 10:37 PM
13 votes
3 answers
33970 views
How to insert 10000 new rows?
Presumably there is a straightforward and easy solution. I want to create 10000 new rows, numbered sequentially, with no data per row (except the sequentially numbered id). I have used:

```sql
INSERT INTO bins (id)
VALUES (1)
```

to create a single row with id '1'. How do I create 10000 rows with corresponding id numbers? Version: PostgreSQL 9.5.5
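The usual idiom for this (available long before 9.5) is a single `INSERT ... SELECT` over `generate_series`:

```sql
-- One statement, ids 1..10000.
INSERT INTO bins (id)
SELECT g
FROM generate_series(1, 10000) AS g;
```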
Andrew (131 rep)
Oct 29, 2017, 09:48 AM • Last activity: Dec 17, 2022, 04:42 PM
4 votes
5 answers
5601 views
How to update table records in reverse order?
I've a table Student
Id	Name	Mark
1	Medi	10
2	Ibra	15
3	Simo	20
and I want to update it, where I want to reverse it in descending order only Name and Mark and keep Id in its order:
Id	Name	Mark
1	Simo	20
2	Ibra	15
3	Medi	10
So Firstly, I reverse the order from top to bottom with row_number():
SELECT row_number() OVER (ORDER BY  [Id]) [Id],[Name],[Mark] 
FROM [Student] 
ORDER BY [Id] DESC
But what I need is to update, not just select. So secondly, I tried to update those two columns:
UPDATE students_ordered
set students_ordered.[Name]=students_reversed.[Name],
	students_ordered.[Mark] =students_reversed.[Mark]
from (SELECT row_number() OVER (ORDER BY  [Id]) [Id],[Name],[Mark]
  FROM [test].[dbo].[Student] students_ordered) students_ordered
inner join 
(SELECT row_number() OVER (ORDER BY  [Id]) [Id],[Name],[Mark] 
FROM [Student] 
ORDER BY [Id] DESC) students_reversed
  on students_reversed.Id=students_ordered.Id
I got an error: *"Msg 1033, Level 15, State 1: The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified."* I didn't lose hope and tried to update only one field first, then the other, but that query also failed:
UPDATE Student Student_set
set Student_set.Name = (SELECT [Name]
FROM [Student] Student_get
where Student_set.id=Student_get.id ORDER BY row_number() OVER (ORDER BY [Id]) DESC)
I can't update the Id itself (*Id is a primary key and it is incremented by one.*)
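For reference, one way around the `ORDER BY` restriction (a sketch in T-SQL, untested against the exact schema): put the `ROW_NUMBER()` calls in CTEs, which need no outer `ORDER BY`, and join the ascending numbering to the descending one so each row receives the values of its mirror row:

```sql
WITH ordered AS (
    -- Row numbers in Id order; this CTE is the update target.
    SELECT Id, Name, Mark,
           ROW_NUMBER() OVER (ORDER BY Id ASC) AS rn
    FROM Student
),
reversed AS (
    -- The same rows numbered in reverse Id order.
    SELECT Name, Mark,
           ROW_NUMBER() OVER (ORDER BY Id DESC) AS rn
    FROM Student
)
UPDATE o
SET o.Name = r.Name,
    o.Mark = r.Mark
FROM ordered o
JOIN reversed r ON r.rn = o.rn;
```

SQL Server allows updating through a CTE as long as it maps to a single base table, so `Id` stays untouched while `Name` and `Mark` are swapped into reverse order.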
TAHER El Mehdi (292 rep)
Sep 28, 2022, 08:17 PM • Last activity: Oct 20, 2022, 02:04 AM
0 votes
2 answers
3656 views
How to limit the number of rows in a mysql table
How can I set a limit of rows for a MySQL table? (I am using PHP.) POSSIBLE METHOD-1: When the user tries to SIGN-UP, I can select everything from the TABLE, get the number of ROWS in the TABLE, and check: if it is == 100, echo "Could not sign you up. Cause: Max user limit reached!"; or, if it is < 100, allow the user to SIGN-UP. But is there an easier method, like just setting a max row limit for the TABLE?
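MySQL has no enforced per-table row cap (`MAX_ROWS` is only a sizing hint for the storage engine, not a limit). A server-side alternative to the count-then-insert check in PHP (a sketch; the table name `users` and the cap of 100 are assumptions taken from the question) is a `BEFORE INSERT` trigger that raises an error once the cap is reached:

```sql
DELIMITER //
CREATE TRIGGER users_row_cap
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    -- Reject the insert once the table already holds 100 rows.
    IF (SELECT COUNT(*) FROM users) >= 100 THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Could not sign you up. Cause: Max user limit reached!';
    END IF;
END//
DELIMITER ;
```

Note this still counts rows on every insert, so it mainly moves the check out of PHP rather than making it cheaper; the application-side check remains a reasonable approach.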
user253088
Jun 6, 2022, 11:01 AM • Last activity: Oct 15, 2022, 03:04 PM
1 votes
2 answers
1032 views
Rownum equivalent in Informix SQL?
I want to explore (= look at the first 100 rows of) databases using the *Informix* SQL dialect.

- In Oracle SQL I would use SELECT * FROM table_name WHERE ROWNUM < 100
- In PostgreSQL I would use SELECT * FROM table_name LIMIT 100
- I also tried SELECT * FROM table_name first 100

None of these methods works for Informix. **What I found:** The documentation (https://www.ibm.com/docs/en/informix-servers/12.10?topic=programming-retrieve-multiple-rows) only explains how retrieving multiple rows works internally, not how to do it on the user side. This question (https://stackoverflow.com/questions/119278/row-numbers-for-a-query-in-informix) only covers adding row numbers to a table. **One side note:** the program I use will run the SQL call on multiple databases and combine the resulting tables into one table.
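For the record, Informix puts the row cap right after `SELECT` rather than at the end of the statement, so the third attempt above had the right keyword in the wrong position:

```sql
-- Informix syntax: the FIRST clause precedes the select list.
SELECT FIRST 100 *
FROM table_name;
```

Combined with `SKIP` (`SELECT SKIP 100 FIRST 100 * FROM table_name`) this also pages through later rows.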
Qaswed (121 rep)
Oct 6, 2022, 02:26 PM • Last activity: Oct 9, 2022, 12:31 PM
2 votes
1 answers
87 views
Insert rows into a table from another table row?
I've got a table **Schedule**:

| Id | unserId | Sunday | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday |
|:---|:--------|:-------|:-------|:--------|:----------|:---------|:-------|:---------|
| 0  | 10      | 0      | 1      | 1       | 1         | 1        | 0      | 0        |
| 1  | 20      | 1      | 0      | 0       | 0         | 0        | 0      | 0        |

I need to insert lines from ***one*** row in table **Schedule** into ***seven*** rows in table **Attending**, so the result would look like this:

| Id | userId | Day | Status |
|:---|:-------|:----|:-------|
| 1  | 10     | 0   | 0      |
| 2  | 10     | 1   | 1      |
| 3  | 10     | 2   | 1      |
| 4  | 10     | 3   | 1      |
| 5  | 10     | 4   | 0      |
| 6  | 10     | 6   | 0      |
| 7  | 10     | 5   | 0      |
| 8  | 20     | 0   | 1      |
| 9  | 20     | 1   | 0      |
| 10 | 20     | 2   | 0      |

To solve this, I thought of using a **Cursor**: I would use two cursors, one for each row in the table Schedule and the other for the 7 days of each row! Is this the right way of doing it, or is there a better way?
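Cursors aren't needed here; a set-based unpivot does it in one statement. A sketch in T-SQL (assuming SQL Server; column names are taken from the question, including `unserId`, and `Attending.Id` is assumed to be an identity column):

```sql
-- Each Schedule row fans out into seven (Day, Status) rows.
INSERT INTO Attending (userId, [Day], [Status])
SELECT s.unserId, d.[Day], d.[Status]
FROM Schedule s
CROSS APPLY (VALUES
    (0, s.Sunday),
    (1, s.Monday),
    (2, s.Tuesday),
    (3, s.Wednesday),
    (4, s.Thursday),
    (5, s.Friday),
    (6, s.Saturday)
) AS d([Day], [Status]);
```

The `CROSS APPLY (VALUES ...)` form keeps the day-to-column mapping explicit and processes every Schedule row in a single scan.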
TAHER El Mehdi (292 rep)
Sep 29, 2022, 06:11 PM • Last activity: Sep 29, 2022, 09:19 PM
13 votes
5 answers
78359 views
Calculate row value based on previous and actual row values
Hi everyone and thanks for your help.
I have the following situation: a table called statements that contains the fields **id** (int), **stmnt_date** (date), **debit** (double), **credit** (double) and **balance** (double). I want to calculate the balance following these rules:
The first row balance (**chronologically**) = debit - credit, and for the rest of the rows:
**current row balance = chronologically previous row balance + current row debit - current row credit**. The rows are not arranged by date, which is why I used the word chronologically twice: to emphasize the importance of the stmnt_date value. Thank you very much for your help.
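Since the balance defined above is just a running total of debit - credit in chronological order, a window function covers it where available (MySQL 8+, or any recent PostgreSQL/SQL Server/Oracle). A sketch against the described columns, with `id` as a tie-breaker for rows sharing a date:

```sql
-- Running balance in stmnt_date order, regardless of physical row order.
SELECT id, stmnt_date, debit, credit,
       SUM(debit - credit) OVER (ORDER BY stmnt_date, id) AS balance
FROM statements;
```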
Mohamed Anis Dahmani (243 rep)
Mar 6, 2015, 12:43 AM • Last activity: Sep 8, 2022, 07:30 AM
Showing page 1 of 20 total questions