Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
3
answers
166
views
How to insert/combine values from multiple tables?
I have multiple tables (12, one for each month), each with three columns ("code", "test_name", "value").
Example:
_______________________________
January
code | test_name | value
0001 | name1     | 17
0002 | name2     | 45
0005 | name5     | 12
_______________________________
February
code | test_name | value
0001 | name1     | 3
0002 | name2     | 7
0004 | name4     | 13
_______________________________
March
code | test_name | value
0001 | name1     | 6
0006 | name6     | 32
0007 | name7     | 41
_______________________________
How do I merge, insert, or combine these in SQL into one table that would look like this:
Year
code | test_name | January | February | March | ...
0001 | name1     | 17      | 3        | 6
0002 | name2     | 45      | 7        | 0
0003 | name3     | 0       | 0        | 0
0004 | name4     | 0       | 13       | 0
0005 | name5     | 12      | 0        | 0
0006 | name6     | 0       | 0        | 32
0007 | name7     | 0       | 0        | 41
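One way to approach it (a sketch only, assuming the monthly tables are literally named january, february, march, and so on, that year_summary is a hypothetical target table, and that a code missing from a month should show as 0) is to stack the tables with UNION ALL and then pivot by month with conditional aggregation:
```
-- year_summary is a hypothetical table with one column per month.
INSERT INTO year_summary (code, test_name, January, February, March)
SELECT
    code,
    test_name,
    SUM(CASE WHEN month = 'January'  THEN value ELSE 0 END) AS January,
    SUM(CASE WHEN month = 'February' THEN value ELSE 0 END) AS February,
    SUM(CASE WHEN month = 'March'    THEN value ELSE 0 END) AS March
    -- ...one expression per remaining month...
FROM (
    SELECT code, test_name, value, 'January'  AS month FROM january
    UNION ALL
    SELECT code, test_name, value, 'February' AS month FROM february
    UNION ALL
    SELECT code, test_name, value, 'March'    AS month FROM march
    -- ...UNION ALL the remaining monthly tables...
) AS all_months
GROUP BY code, test_name;
```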
Mladen G
(1 rep)
Mar 15, 2016, 07:31 PM
• Last activity: Jul 6, 2025, 02:06 PM
0
votes
1
answers
203
views
MongoDB: How to merge a BSON backup with a '.wt' back-up
I am trying to merge a MongoDB deployment with a previous backup, due to some processing that took place on the data. I have downloaded and extracted the backup to recover the '.wt' files, and I can set up a local deployment from the '.bson' files of the original database at a later point in time using mongorestore. How can I combine the two sets of data?
mongorestore doesn't seem to handle the '.wt' file type to complete the merge.
mspen_golf
(1 rep)
Apr 12, 2023, 08:33 AM
• Last activity: Jun 14, 2025, 05:08 AM
0
votes
1
answers
212
views
Merging nearly-duplicate rows from different databases in the same table
I am sure that many people before me have faced the need to merge nearly-duplicate rows from several databases. However, I have not been able to find a simple yet general solution. I provided a data sample in Excel here: https://disk.yandex.ru/d/UtuENQ4LXHmOmQ It shows in yellow the 4 rows that I have now, and in green the one row that I want to make.
The new row should have some data from only one of the 4 databases in some columns, and data merged from all (or some) databases in other columns. I will delete the original rows after the merger. I spent all morning trying to figure out what functions to use. I have INSERT INTO, VALUES, WHERE, but something else is missing. Could someone please direct me to a possible solution?
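Not knowing the real schema, here is only a sketch of the general pattern (the table and column names and the ids are hypothetical, and STRING_AGG would be GROUP_CONCAT in MySQL): build the merged row with one INSERT ... SELECT that aggregates over the near-duplicates, then delete the originals.
```
-- 1) Create the merged row from the four near-duplicates (ids from the audit).
INSERT INTO contacts (name, phone, email, notes)
SELECT
    MAX(name),                -- "take from whichever source row has it" columns
    MAX(phone),
    MAX(email),
    STRING_AGG(notes, '; ')   -- "combine from all source rows" columns
FROM contacts
WHERE id IN (101, 102, 103, 104);

-- 2) Remove the originals once the merged row has been verified.
DELETE FROM contacts
WHERE id IN (101, 102, 103, 104);
```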
YW T
(11 rep)
Jan 24, 2022, 03:36 PM
• Last activity: Jun 12, 2025, 08:07 PM
0
votes
1
answers
384
views
pg_dump -> import trajectory using "upsert" paradigm?
Well, say I have two physically separate databases and I wish to replace database B with data from A, except that I wish to keep the data in B which does not exist in A.
So basically I'd go over each entry in A and use an insert-or-replace like:
INSERT INTO table_name (name, value)
VALUES ('hello', 'world')
ON CONFLICT (name) do nothing;
I.e., the following databases would be merged into the third:
A)
name | value
A | 1
B | 2
B)
name | value
B | 10
C | 20
MERGED)
name | value
A | 1
B | 2
C | 20
If I use the -c option of pg_dump, it would just throw away the "C" value in the merged table.
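One workable route (a sketch only; staging.table_name is a hypothetical intermediate table, and it assumes a unique constraint on name): restore A's data into a staging table in B, then apply it with the same ON CONFLICT logic, so B keeps its own rows.
```
-- In database B: a staging copy of the table, loaded from A's data dump
-- (e.g. pg_dump --data-only --table=table_name, redirected at the staging table).
CREATE TABLE staging.table_name (LIKE public.table_name INCLUDING ALL);

-- Then merge the staged rows, keeping whatever already exists in B.
INSERT INTO public.table_name (name, value)
SELECT name, value
FROM staging.table_name
ON CONFLICT (name) DO NOTHING;

DROP TABLE staging.table_name;
```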
paul23
(483 rep)
Mar 16, 2021, 02:56 PM
• Last activity: Apr 24, 2025, 05:03 AM
1
votes
1
answers
1343
views
"merge" two tables -from different databases- together?
Well, say I have a table running in a database at localhost (donor database A), and a receiving table running on another server externally (database B), with table creation (on both sides) like:
CREATE TABLE public.my_table (
id integer NOT NULL,
key text NOT NULL UNIQUE,
description text,
PRIMARY KEY (id)
);
CREATE SEQUENCE public.my_table_id_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE public.my_table ALTER COLUMN id SET DEFAULT nextval('public.my_table_id_seq'::regclass);
Now I wish to merge from database A into database B. *However*, merging should only happen for keys that do not already exist in database B, and id should be recalculated (obviously, to avoid duplicates).
Basically, as if each row from database A were inserted into database B using:
INSERT INTO public.my_table (key, description)
VALUES (...)
ON CONFLICT DO NOTHING
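One way to do exactly that in a single statement (a sketch, assuming the postgres_fdw extension can be installed on database B; the server name, connection options and foreign-table name are placeholders):
```
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER donor_a FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'database_a');

CREATE USER MAPPING FOR CURRENT_USER SERVER donor_a
    OPTIONS (user 'postgres', password 'secret');

CREATE FOREIGN TABLE donor_my_table (
    id integer,
    key text,
    description text
) SERVER donor_a OPTIONS (schema_name 'public', table_name 'my_table');

-- New ids are assigned by B's own sequence because id is not listed,
-- and rows whose key already exists in B are left untouched.
INSERT INTO public.my_table (key, description)
SELECT key, description
FROM donor_my_table
ON CONFLICT (key) DO NOTHING;
```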
paul23
(483 rep)
Jan 29, 2021, 01:48 PM
• Last activity: Apr 23, 2025, 09:03 AM
24
votes
1
answers
1263
views
Why does this MERGE statement cause the session to be killed?
I have the below MERGE statement which is issued against the database:
MERGE "MySchema"."Point" AS t
USING (
SELECT "ObjectId", "PointName", z."Id" AS "LocationId", i."Id" AS "Region"
FROM @p1 AS d
JOIN "MySchema"."Region" AS i ON i."Name" = d."Region"
LEFT JOIN "MySchema"."Location" AS z ON z."Name" = d."Location" AND z."Region" = i."Id"
) AS s
ON s."ObjectId" = t."ObjectId"
WHEN NOT MATCHED BY TARGET
THEN INSERT ("ObjectId", "Name", "LocationId", "Region") VALUES (s."ObjectId", s."PointName", s."LocationId", s."Region")
WHEN MATCHED
THEN UPDATE
SET "Name" = s."PointName"
, "LocationId" = s."LocationId"
, "Region" = s."Region"
OUTPUT $action, inserted.*, deleted.*;
However, this causes the session to be terminated with the following error:
> Msg 0, Level 11, State 0, Line 67 A severe error occurred on the
> current command. The results, if any, should be discarded.
> Msg 0, Level 20, State 0, Line 67 A severe error occurred on the current
> command. The results, if any, should be discarded.
I have put a short test script together which produces the error:
USE master;
GO
IF DB_ID('TEST') IS NOT NULL
DROP DATABASE "TEST";
GO
CREATE DATABASE "TEST";
GO
USE "TEST";
GO
SET NOCOUNT ON;
IF SCHEMA_ID('MySchema') IS NULL
EXECUTE('CREATE SCHEMA "MySchema"');
GO
IF OBJECT_ID('MySchema.Region', 'U') IS NULL
CREATE TABLE "MySchema"."Region" (
"Id" TINYINT IDENTITY NOT NULL CONSTRAINT "PK_MySchema_Region" PRIMARY KEY,
"Name" VARCHAR(8) NOT NULL CONSTRAINT "UK_MySchema_Region" UNIQUE
);
GO
INSERT [MySchema].[Region] ([Name])
VALUES (N'A'), (N'B'), (N'C'), (N'D'), (N'E'), ( N'F'), (N'G');
IF OBJECT_ID('MySchema.Location', 'U') IS NULL
CREATE TABLE "MySchema"."Location" (
"Id" SMALLINT IDENTITY NOT NULL CONSTRAINT "PK_MySchema_Location" PRIMARY KEY,
"Region" TINYINT NOT NULL CONSTRAINT "FK_MySchema_Location_Region" FOREIGN KEY REFERENCES "MySchema"."Region" ("Id"),
"Name" VARCHAR(128) NOT NULL,
CONSTRAINT "UK_MySchema_Location" UNIQUE ("Region", "Name")
);
GO
IF OBJECT_ID('MySchema.Point', 'U') IS NULL
CREATE TABLE "MySchema"."Point" (
"ObjectId" BIGINT NOT NULL CONSTRAINT "PK_MySchema_Point" PRIMARY KEY,
"Name" VARCHAR(64) NOT NULL,
"LocationId" SMALLINT NULL CONSTRAINT "FK_MySchema_Point_Location" FOREIGN KEY REFERENCES "MySchema"."Location"("Id"),
"Region" TINYINT NOT NULL CONSTRAINT "FK_MySchema_Point_Region" FOREIGN KEY REFERENCES "MySchema"."Region" ("Id"),
CONSTRAINT "UK_MySchema_Point" UNIQUE ("Name", "Region", "LocationId")
);
GO
-- CONTAINS HISTORIC Point DATA
IF OBJECT_ID('MySchema.PointHistory', 'U') IS NULL
CREATE TABLE "MySchema"."PointHistory" (
"Id" BIGINT IDENTITY NOT NULL CONSTRAINT "PK_MySchema_PointHistory" PRIMARY KEY,
"ObjectId" BIGINT NOT NULL,
"Name" VARCHAR(64) NOT NULL,
"LocationId" SMALLINT NULL,
"Region" TINYINT NOT NULL
);
GO
CREATE TYPE "MySchema"."PointTable" AS TABLE (
"ObjectId" BIGINT NOT NULL PRIMARY KEY,
"PointName" VARCHAR(64) NOT NULL,
"Location" VARCHAR(16) NULL,
"Region" VARCHAR(8) NOT NULL,
UNIQUE ("PointName", "Region", "Location")
);
GO
DECLARE @p1 "MySchema"."PointTable";
insert into @p1 values(10001769996,N'ABCDEFGH',N'N/A',N'E')
MERGE "MySchema"."Point" AS t
USING (
SELECT "ObjectId", "PointName", z."Id" AS "LocationId", i."Id" AS "Region"
FROM @p1 AS d
JOIN "MySchema"."Region" AS i ON i."Name" = d."Region"
LEFT JOIN "MySchema"."Location" AS z ON z."Name" = d."Location" AND z."Region" = i."Id"
) AS s
ON s."ObjectId" = t."ObjectId"
WHEN NOT MATCHED BY TARGET
THEN INSERT ("ObjectId", "Name", "LocationId", "Region") VALUES (s."ObjectId", s."PointName", s."LocationId", s."Region")
WHEN MATCHED
THEN UPDATE
SET "Name" = s."PointName"
, "LocationId" = s."LocationId"
, "Region" = s."Region"
OUTPUT $action, inserted.*, deleted.*;
If I remove the OUTPUT clause then the error does not occur. Also, if I remove the deleted reference then the error does not occur. So I looked at the MSDN documentation for the OUTPUT clause, which states:
> DELETED cannot be used with the OUTPUT clause in the INSERT statement.
Which makes sense to me; however, the entire point of MERGE is that you may not know in advance which action will be taken.
Additionally, the below script works perfectly fine regardless of the action that is taken:
USE tempdb;
GO
CREATE TABLE dbo.Target(EmployeeID int, EmployeeName varchar(10),
CONSTRAINT Target_PK PRIMARY KEY(EmployeeID));
CREATE TABLE dbo.Source(EmployeeID int, EmployeeName varchar(10),
CONSTRAINT Source_PK PRIMARY KEY(EmployeeID));
GO
INSERT dbo.Target(EmployeeID, EmployeeName) VALUES(100, 'Mary');
INSERT dbo.Target(EmployeeID, EmployeeName) VALUES(101, 'Sara');
INSERT dbo.Target(EmployeeID, EmployeeName) VALUES(102, 'Stefano');
GO
INSERT dbo.Source(EmployeeID, EmployeeName) Values(103, 'Bob');
INSERT dbo.Source(EmployeeID, EmployeeName) Values(104, 'Steve');
GO
-- MERGE statement with the join conditions specified correctly.
USE tempdb;
GO
BEGIN TRAN;
MERGE Target AS T
USING Source AS S
ON (T.EmployeeID = S.EmployeeID)
WHEN NOT MATCHED BY TARGET AND S.EmployeeName LIKE 'S%'
THEN INSERT(EmployeeID, EmployeeName) VALUES(S.EmployeeID, S.EmployeeName)
WHEN MATCHED
THEN UPDATE SET T.EmployeeName = S.EmployeeName
WHEN NOT MATCHED BY SOURCE AND T.EmployeeName LIKE 'S%'
THEN DELETE
OUTPUT $action, inserted.*, deleted.*;
ROLLBACK TRAN;
GO
Also, I have other queries that use OUTPUT in the same fashion as the one that is throwing an error, and they work perfectly fine; the only difference between them is the tables that take part in the MERGE.
This is causing major problems in production for us. I have reproduced this error on SQL Server 2014 and 2016, on both VMs and physical machines with 128 GB RAM, 12 x 2.2 GHz cores, Windows Server 2012 R2.
The estimated execution plan generated from the query can be found here:
Estimated Execution Plan
Mr.Brownstone
(13242 rep)
Jun 9, 2016, 07:13 PM
• Last activity: Apr 13, 2025, 12:18 PM
5
votes
2
answers
796
views
Which approach to concurrency safe upserts is best for table-valued input if MERGE is forbidden?
[This classic concurrency safety article](https://michaeljswart.com/2017/07/sql-server-upsert-patterns-and-antipatterns/) is clearly designed for only upserting one row at a time. In my situation, I have a table-valued input and I want to upsert each row in a concurrency safe way. I know that this isn't always possible, but I want to get as close as possible.
MERGE seems like a natural solution to it, but I distrust it and truly am in a situation where it is [bug prone](https://michaeljswart.com/2021/08/what-to-avoid-if-you-want-to-use-merge/). The remaining two approaches in Michael J. Swart's article are:
1) _Inside a Transaction With Lock Hints (Update More Common)_
CREATE PROCEDURE s_AccountDetails_Upsert ( @Email nvarchar(4000), @Etc nvarchar(max) )
AS
SET XACT_ABORT ON;
BEGIN TRAN
UPDATE TOP (1) dbo.AccountDetails WITH (UPDLOCK, SERIALIZABLE)
SET Etc = @Etc
WHERE Email = @Email;
IF (@@ROWCOUNT = 0)
BEGIN
INSERT dbo.AccountDetails ( Email, Etc )
VALUES ( @Email, @Etc );
END
COMMIT
2) _Inside a Transaction With Lock Hints (Insert More Common)_
CREATE PROCEDURE s_AccountDetails_Upsert ( @Email nvarchar(4000), @Etc nvarchar(max) )
AS
SET XACT_ABORT ON;
BEGIN TRAN
INSERT dbo.AccountDetails ( Email, Etc )
SELECT @Email, @Etc
WHERE NOT EXISTS (
SELECT *
FROM dbo.AccountDetails WITH (UPDLOCK, SERIALIZABLE)
WHERE Email = @Email
)
IF (@@ROWCOUNT = 0)
BEGIN
UPDATE TOP (1) dbo.AccountDetails
SET Etc = @Etc
WHERE Email = @Email;
END
COMMIT
I could adapt either of these to use table variables (e.g. I suspect that the IF (@@ROWCOUNT = 0) check needs removing entirely), but **does the usage of a table-valued input make it obvious that we should prefer either the first or second solution?** If not, then on what basis should the decision be made?
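For what it's worth, here is a sketch of how pattern 1 might be adapted to a table-valued parameter (the table type, its key length, and the procedure name are hypothetical; it assumes a unique index on AccountDetails.Email), not a claim about which pattern is preferable:
```
CREATE TYPE dbo.AccountDetailsTable AS TABLE
    ( Email nvarchar(450) NOT NULL PRIMARY KEY, Etc nvarchar(max) );
GO
CREATE PROCEDURE dbo.s_AccountDetails_UpsertMany @Rows dbo.AccountDetailsTable READONLY
AS
SET XACT_ABORT ON;
BEGIN TRAN;

    -- Update branch first ("update more common"), holding key-range locks.
    UPDATE ad
       SET ad.Etc = r.Etc
      FROM dbo.AccountDetails AS ad WITH (UPDLOCK, SERIALIZABLE)
      JOIN @Rows AS r
        ON r.Email = ad.Email;

    -- Insert whatever is still missing; the IF (@@ROWCOUNT = 0) check is gone
    -- because some rows may update and others insert in the same call.
    INSERT dbo.AccountDetails ( Email, Etc )
    SELECT r.Email, r.Etc
      FROM @Rows AS r
     WHERE NOT EXISTS ( SELECT *
                          FROM dbo.AccountDetails AS ad WITH (UPDLOCK, SERIALIZABLE)
                         WHERE ad.Email = r.Email );

COMMIT;
```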
J. Mini
(1237 rep)
Apr 11, 2025, 11:15 PM
• Last activity: Apr 13, 2025, 10:35 AM
5
votes
2
answers
6646
views
Merge two records in the same table, keeping foreign key relationships intact for both
I have a table, called SITES, that has three columns; let's say it looks like this:
ID Name Path
1 Google http://www.google.com/
2 Microsoft http://www.microsoft.com/
I also have a related table called Logs that looks like this:
ID SiteID LogData
1 1 --data--
2 1 --more--
3 2 --other--
...
The SITES table gets populated both by users of the system and also occasionally some batch processes. From the batch process, I don't always have the 'name' available, so a record is created that looks like this in SITES (when searching by Path yields no result), because the batch process is mostly interested in creating data in the Log table.
ID Name Path
99 http://arstechnica.com/
The problem situation arises like this:
1) User creates new SITE, but doesn't enter the Path (doesn't know it, or in some cases it doesn't yet exist) so the SITES table looks like this:
ID Name Path
1 Google http://www.google.com/
2 Microsoft http://www.microsoft.com/
3 Yahoo
2) The batch process comes along and needs to add a log for http://yahoo.com , searches SITES by Path and does not find it, so it makes its own and the result is this:
ID Name Path
1 Google http://www.google.com/
2 Microsoft http://www.microsoft.com/
3 Yahoo
4 http://yahoo.com
My question is, how can I merge record 3 and record 4 while preserving the referential integrity of both records? Let's say, for the sake of argument, that each record has several related records in both the Log table and some other tables.
I understand that it will be a manual process of identifying records that need to be updated, so let any solution assume that I have audited the list and found all of the "duplicate" records.
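Once the duplicate pairs are identified, the usual pattern (a sketch using the IDs from the example; every child table that references SITES needs its own re-pointing UPDATE) looks like this:
```
BEGIN TRANSACTION;

-- Copy the missing Path (and any other batch-only data) onto the surviving row.
UPDATE SITES
   SET Path = (SELECT Path FROM SITES WHERE ID = 4)
 WHERE ID = 3;

-- Re-point child rows from the duplicate to the survivor.
UPDATE Logs SET SiteID = 3 WHERE SiteID = 4;
-- ...repeat for every other table that has a foreign key to SITES.ID...

-- Remove the now-orphaned duplicate.
DELETE FROM SITES WHERE ID = 4;

COMMIT;
```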
Nate
(1727 rep)
Oct 25, 2012, 07:19 PM
• Last activity: Apr 12, 2025, 12:51 PM
10
votes
2
answers
19041
views
Difference between UPSERT and MERGE?
From the PostgreSQL wiki,
> MERGE is typically used to merge two tables, and was introduced in the 2003 SQL standard. The REPLACE statement (a MySQL extension) or *UPSERT* sequence attempts an UPDATE, or on failure, INSERT. This is similar to UPDATE, then for unmatched rows, INSERT. Whether concurrent access allows modifications which could cause row loss is implementation independent.

Further, PostgreSQL's [INSERT ... ON CONFLICT DO NOTHING/UPDATE](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.5#INSERT_..._ON_CONFLICT_DO_NOTHING.2FUPDATE_.28.22UPSERT.22.29) is marketed as *UPSERT* and was added in 9.5.
What then is MERGE? And how does it fit into the mix?
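By way of illustration only, a sketch of the two forms against a hypothetical table t(id, val) with a unique key on id (ON CONFLICT exists since PostgreSQL 9.5; MERGE only arrived in PostgreSQL 15):
```
-- UPSERT form: driven by a conflict on a unique index.
INSERT INTO t (id, val)
VALUES (1, 'x')
ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;

-- MERGE form: driven by an explicit join between a source and the target,
-- with one action per WHEN branch (it can also DELETE or DO NOTHING).
MERGE INTO t
USING (VALUES (1, 'x')) AS s(id, val)
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
```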
Evan Carroll
(65502 rep)
Apr 7, 2018, 09:12 AM
• Last activity: Mar 5, 2025, 01:47 PM
1
votes
2
answers
1432
views
MySQL: Merge Same Database on Several Location into Single Location
I have a registration form, which is exactly the same for several locations. Each of them has a REG column, which is unique for each inserted row. Basically, each data input has a different REG number, which is auto-incremented.
The location's form references a location's table which has the same structure for each individual location.
However, we have one central database which pulls all the data from the several locations. This central database has the job of manipulating and populating all the locations' databases.
Let's say I have 3 locations: A, B and C. All those locations use the same database structure to save the data of each individual location. The engine is InnoDB.
What I need help with here is how I must change or configure the database table or columns so I can "restore" each individual database from the different locations into one single database at the central location. If the structure is the same, then the previous data will be replaced by the new data when it is restored/added to the database at the central location.
I have a column called CODE which stores a specific hard-coded value for each location; it is just a single character, either A, B or C, based on the location where the database is saved.
So, at a specific time, the form table from the 3 different locations will be backed up and sent to the central database. There, it will be restored or added to the database so that it holds the data from all 3 locations.
Any ideas or a little help, please? Thanks in advance.
Here is the database layout I would like to end up with:
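A sketch of one possible shape for that central table (all column names besides CODE and REG are hypothetical): making the primary key the pair (CODE, REG) lets the same REG number exist for different locations, and re-importing a location's export simply overwrites that location's earlier rows.
```
CREATE TABLE registration_central (
    CODE      CHAR(1)      NOT NULL,   -- 'A', 'B' or 'C'
    REG       INT          NOT NULL,   -- the location's auto-increment value
    form_data VARCHAR(255) NULL,       -- ...the other form columns...
    PRIMARY KEY (CODE, REG)
) ENGINE = InnoDB;

-- After loading a location's backup into a staging table:
INSERT INTO registration_central (CODE, REG, form_data)
SELECT CODE, REG, form_data
FROM   staging_location_a
ON DUPLICATE KEY UPDATE form_data = VALUES(form_data);
```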

dhicom
(11 rep)
Aug 13, 2014, 06:29 AM
• Last activity: Feb 16, 2025, 06:02 AM
25
votes
1
answers
2599
views
Why does MERGE not insert more than 277 records into a table which is configured with a temporal history table and a non-clustered index on that history table?
I have found another issue with SQL Server and the MERGE statement and need some confirmation.
I can reproduce my issue consistently on an Azure SQL Database (but not on an on-premises SQL Server 2017/2019).
Please execute the following steps (step by step, not in one command execution)!
**1) Script for Schema:**
CREATE TABLE [dbo].[ImpactValueHistory]
(
[Rn] BIGINT NOT NULL,
[ImpactId] UNIQUEIDENTIFIER NOT NULL,
[ImpactValueTypeId] INT NOT NULL,
[Date] DATE NOT NULL,
[Value] DECIMAL(38, 10) NOT NULL,
[ValidFrom] DATETIME2 NOT NULL CONSTRAINT [DF_ImpactValueHistory_ValidFrom] DEFAULT CONVERT(DATETIME2, '0001-01-01'),
[ValidTo] DATETIME2 NOT NULL CONSTRAINT [DF_ImpactValueHistory_ValidTo] DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
[ImpactPeriodId] INT NOT NULL,
[NormalizedValue] DECIMAL(38, 10) NOT NULL,
)
GO
CREATE CLUSTERED COLUMNSTORE INDEX [COLIX_ImpactValueHistory]
ON [dbo].[ImpactValueHistory];
GO
CREATE NONCLUSTERED INDEX [IX_ImpactValueHistory_ValidFrom_ValidTo_ImpactId_DimensionItemId]
ON [dbo].[ImpactValueHistory] ([ValidFrom], [ValidTo], [ImpactId], [ImpactValueTypeId], [Date]);
GO
CREATE TABLE [dbo].[ImpactValue]
(
[Rn] BIGINT NOT NULL IDENTITY(1,1),
[ImpactId] UNIQUEIDENTIFIER NOT NULL,
[ImpactValueTypeId] INT NOT NULL,
[Date] DATE NOT NULL,
[Value] DECIMAL(38, 10) NOT NULL,
[ValidFrom] DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL CONSTRAINT [DF_ImpactValue_ValidFrom] DEFAULT CONVERT(DATETIME2, '0001-01-01'),
[ValidTo] DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL CONSTRAINT [DF_ImpactValue_ValidTo] DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
[ImpactPeriodId] INT NOT NULL,
[NormalizedValue] DECIMAL(38, 10) NOT NULL,
PERIOD FOR SYSTEM_TIME ([ValidFrom], [ValidTo]),
CONSTRAINT [PK_ImpactValue] PRIMARY KEY NONCLUSTERED ([ImpactId], [ImpactValueTypeId], [Date], [ImpactPeriodId])
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[ImpactValueHistory]))
GO
CREATE UNIQUE CLUSTERED INDEX [IX_ImpactValue_Id] ON [dbo].[ImpactValue]([Rn])
GO
CREATE COLUMNSTORE INDEX [CIX_ImpactValue] ON [dbo].[ImpactValue] ([ImpactId], [ImpactValueTypeId], [Date], [Value], [NormalizedValue])
GO
**2) Script for inserting some random data**
```
DECLARE @inserted0 TABLE ([Date] DATE, [ImpactId] uniqueidentifier, [ImpactPeriodId] int, [ImpactValueTypeId] int);
MERGE [dbo].[ImpactValue] USING (
SELECT TOP 278 -- it's then working
```
2) If I delete the non-clustered index [IX_ImpactValueHistory_ValidFrom_ValidTo_ImpactId_DimensionItemId] -> it's working
3) If I use SELECT TOP (@BatchSize) in step 2) --> it's working
4) If I use only OUTPUT instead of OUTPUT INTO @inserted0 --> it's working
Without the COLUMNSTORE index on the history table it is working. By only removing the COLUMNSTORE index on the main table I see the same issue.
An actual execution plan for a case that (a) repros the issue with TOP 278 and (b) doesn't repro with TOP (@BatchSize) is available at https://1drv.ms/u/s!AsOa6e9ukBWQlIRg9_9eySDFp5hvEA?e=KBQBsP . I also added the actual execution plans for a batch size of 277. Both are working with this size!
Daniel C.
(353 rep)
Jun 20, 2022, 01:34 PM
• Last activity: Nov 16, 2024, 08:18 AM
0
votes
1
answers
326
views
Do I need a `HOLDLOCK` on a `MERGE` that's not sourcing from another table?
I need to know whether the following pattern requires me to use HOLDLOCK in a very highly concurrent environment. In particular, note that the source is not another table; it's basically parameters that have been passed to the query.
MERGE INTO table1 WITH (HOLDLOCK) AS target
USING (
SELECT
@ID AS ID,
...
) AS source
ON (target.ID = source.ID AND ...)
WHEN NOT MATCHED THEN
INSERT (ID, ...)
VALUES (source.ID, ...)
I've only put it in out of precaution, since I've been reading about the pitfalls of MERGE. In my own dev-testing though (still very highly concurrent), it doesn't seem to make a difference: I never run into a concurrency problem even without the HOLDLOCK.
I can't use the INSERT INTO table1 ... WHERE NOT EXISTS (... SELECT ... FROM table1 ...) pattern, because **that** runs into concurrency problems even with HOLDLOCK in place. So the MERGE was my solution to it; I am just not sure if I need the HOLDLOCK on it.
hyankov
(103 rep)
Sep 12, 2024, 09:05 PM
• Last activity: Sep 13, 2024, 05:34 AM
8
votes
2
answers
596
views
Blocking between two MERGE queries inserting into the same table
### Scenario
I have a large table partitioned on an INT column. When I run two different MERGE statements on two different partitions of this table, they seem to be blocking each other.
Sample code to recreate the scenario:
#### 1. Preparation. Create tables and some dummy data
SET NOCOUNT ON
GO
--
-- Create parition function and partition scheme
--
DROP FUNCTION IF EXISTS PF_Site_ID
GO
CREATE PARTITION FUNCTION PF_Site_ID (INT)
AS RANGE RIGHT FOR VALUES (
0,
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
)
GO
DROP PARTITION SCHEME PS_Site_ID
GO
CREATE PARTITION SCHEME PS_Site_ID
AS PARTITION PF_Site_ID
ALL TO ('PRIMARY')
GO
--
-- Large table partitioned on Site_ID. Two STG tables. And some dummy data
--
DROP TABLE IF EXISTS dbo.PartitionedLargeTable
GO
CREATE TABLE dbo.PartitionedLargeTable
(
ID INT NOT NULL IDENTITY(1,1)
, Site_ID INT NOT NULL
, Name VARCHAR(50)
) ON PS_Site_ID (Site_ID)
GO
ALTER TABLE dbo.PartitionedLargeTable SET (LOCK_ESCALATION = AUTO)
GO
--
-- STG tables
--
DROP TABLE IF EXISTS dbo.STG_Test1
GO
CREATE TABLE dbo.STG_Test1
(
ID INT NOT NULL IDENTITY(1,1)
, Site_ID INT NOT NULL
, Name VARCHAR(50)
) ON [PRIMARY]
GO
DROP TABLE IF EXISTS dbo.STG_Test2
GO
CREATE TABLE dbo.STG_Test2
(
ID INT NOT NULL IDENTITY(1,1)
, Site_ID INT NOT NULL
, Name VARCHAR(50)
) ON [PRIMARY]
GO
--
-- Dummy data
--
INSERT INTO dbo.PartitionedLargeTable (Site_ID, Name) SELECT 1, NEWID()
INSERT INTO dbo.PartitionedLargeTable (Site_ID, Name) SELECT 2, NEWID()
GO 10000
INSERT INTO dbo.PartitionedLargeTable (Site_ID, Name)
SELECT Site_ID, Name FROM dbo.PartitionedLargeTable
GO 5
INSERT INTO dbo.STG_Test1(Site_ID, Name) SELECT 1, NEWID()
GO 10000
INSERT INTO dbo.STG_Test2(Site_ID, Name) SELECT 2, NEWID()
GO 10000
INSERT INTO dbo.STG_Test1 (Site_ID, Name)
SELECT Site_ID, Name FROM dbo.STG_Test1
GO 7
INSERT INTO dbo.STG_Test2 (Site_ID, Name)
SELECT Site_ID, Name FROM dbo.STG_Test2
GO 7
#### 2. MERGE 1
In one SSMS window, run this MERGE statement:
MERGE dbo.PartitionedLargeTable AS TGT
USING (SELECT ID, Site_ID, Name FROM dbo.STG_Test1) AS SRC
ON SRC.Site_ID = TGT.Site_ID
AND SRC.ID = TGT.ID
WHEN MATCHED THEN
UPDATE
SET TGT.Name = SRC.Name
WHEN NOT MATCHED THEN
INSERT (Site_ID, Name)
VALUES (SRC.Site_ID, SRC.Name);
#### 3. MERGE 2
In a second SSMS window, run this MERGE statement:
MERGE dbo.PartitionedLargeTable AS TGT
USING (SELECT ID, Site_ID, Name FROM dbo.STG_Test2) AS SRC
ON SRC.Site_ID = TGT.Site_ID
AND SRC.ID = TGT.ID
WHEN MATCHED THEN
UPDATE
SET TGT.Name = SRC.Name
WHEN NOT MATCHED THEN
INSERT (Site_ID, Name)
VALUES (SRC.Site_ID, SRC.Name);
---
The two MERGE statements run on different *Site_IDs* (so two different partitions).
One of the performance benefits of partitioned tables is that we can manipulate partitions independently of each other (within reason). So, something like an INSERT or UPDATE on one partition will not block similar operations on other partitions.
Compare this to when the table is NOT partitioned: if we perform two large INSERT operations (or two large UPDATE operations), then one blocks the other once the number of rows manipulated goes over a certain threshold (something like 3k or 5k rows), because the PAGE locks are escalated to a table lock (TABLOCK). Hence INSERT blocks INSERT (or UPDATE blocks UPDATE).
To avoid such escalation to a table lock, this table was partitioned with LOCK_ESCALATION = AUTO, which limits locks to the HOBT level (and not the table). But with MERGE, the blocking still happens.
Any ideas on how to prevent this blocking? We have 10 parallel MERGE statements running on 10 different partitions of this large table (and they are blocking each other).
The image below shows the nature of the blocking. When a table is partitioned, lock escalation is supposed to only go up to the partition (not the whole table). When these MERGE statements are running, I see the HOBT IDs that each MERGE is querying (locking), and in some cases the HOBT ID does not match the partition IDs of this table.
[image: blocking between the two MERGE sessions]
The actual table I work with has a COLUMNSTORE CLUSTERED index on the partitioning scheme.
ToC
(727 rep)
Aug 14, 2024, 09:44 PM
• Last activity: Aug 16, 2024, 08:57 AM
0
votes
1
answers
75
views
Two SQL databases combined
I have an accounting firm client that recently acquired another firm. They both use IDENTICAL software products that utilize SQL Server, so the table structures should be identical or nearly identical.
Can I combine the two databases into a single one? I'd be happy to read some documentation, but the term "merge" sends me in many directions.
I'm lightly skilled; familiar with SSMS and basic queries.
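If both databases end up restored on the same instance, the basic building block (a sketch only; FirmA, FirmB, the Clients table and its columns are placeholders, and identity columns and foreign keys need per-table attention) is a cross-database INSERT that skips rows already present:
```
INSERT INTO FirmA.dbo.Clients (ClientName, TaxId, Address)
SELECT b.ClientName, b.TaxId, b.Address
FROM   FirmB.dbo.Clients AS b
WHERE  NOT EXISTS (
           SELECT 1
           FROM   FirmA.dbo.Clients AS a
           WHERE  a.TaxId = b.TaxId   -- natural key used to spot duplicates
       );
```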
Steve Jones
(1 rep)
Jun 19, 2024, 06:33 PM
• Last activity: Jun 24, 2024, 04:29 PM
1
votes
2
answers
6655
views
Is it possible to skip duplicate keys (i.e. MERGE) in a bulk INSERT in MySQL?
I have an INSERT...SELECT statement that I want to run several times from TABLE1 to TABLE2, as the data in TABLE1 will change over time.
I don't expect the past data from the select to change over time; the select will just return more new data, plus the same old data from previous queries.
But I cannot insert the old data again because of the unique key constraint on the primary keys.
What I would like to achieve is a kind of INSERT ... ON DUPLICATE KEY UPDATE, but for an INSERT...SELECT.
Skipping the duplicate keys will be enough, but I'm not sure how to do it.
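A minimal sketch with hypothetical column names: INSERT IGNORE skips any row whose primary or unique key already exists in TABLE2 (note that it also downgrades some other errors to warnings, so the warning count is worth checking); MySQL also accepts ON DUPLICATE KEY UPDATE on an INSERT ... SELECT if an update-on-clash is ever wanted instead.
```
INSERT IGNORE INTO TABLE2 (id, col1, col2)
SELECT id, col1, col2
FROM TABLE1;
```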
Glasnhost
(478 rep)
Feb 24, 2020, 10:37 AM
• Last activity: Feb 7, 2024, 08:02 AM
5
votes
1
answers
446
views
Merge Delete on joined tables is allowed but has a bug
To start off, an updatable CTE, derived table or view may be the target of an UPDATE directly, even if it has multiple base tables, as long as all columns are from the same source table.
But they cannot be the target of a DELETE, even if columns from only one table are selected.
Msg 4405 Level 16 State 1
View or function 'x' is not updatable because the modification affects multiple base tables.
To get around this restriction, I attempted to use a MERGE against a dummy table. (Obviously this simplistic example could be written using a DELETE...WHERE EXISTS or by putting one table as the merge source, but the point stands. The original used ROW_NUMBER so these weren't possible.)
WITH Joined AS (
SELECT t1.*
FROM t1
JOIN t2 ON t2.id1 = t1.id1
)
MERGE Joined
USING (VALUES(0)) v(dummy) ON 1=0
WHEN NOT MATCHED BY SOURCE THEN DELETE;
**dbfiddle**
This was actually allowed. But what I found was that the table that was modified did not depend on the columns being selected, or their order. It depended purely on the **order** that the tables were **joined**.
**This, to my mind seems completely buggy behaviour.**
Experimenting with THEN UPDATE shows far more sensible behaviour: it depends on which columns are used in the THEN UPDATE clause, in the same way as a normal UPDATE statement.
So, I think SQL Server should:
* Either continue allowing updatable CTEs to be deleted from, but ensure that only one table's columns are selected (like an UPDATE), ensuring no ambiguity.
* Or completely disallow THEN DELETE in MERGE when the source is an updatable CTE with multiple base tables.
**Do I have some misunderstanding in how updatable views work, or is there an actual bug here?**
____
**A bug report has now been filed on Azure Feedback. Please vote for it here.**
Charlieface
(17545 rep)
Jan 18, 2024, 12:04 PM
• Last activity: Jan 22, 2024, 11:26 AM
1
votes
1
answers
145
views
What are the idiomatic approaches to cross-server upserts?
This is a very common problem, faced by virtually every DBA who has responsibilities to both application and BI teams. Consider the following:
1. You have two T-SQL **servers**, Production and Reporting.
2. Every day, at 6am, you want to copy new data from Production to Reporting. There may be filters or some other ETL done in this process.
3. You don't want to copy any more data than you need to, so ideally you want a MERGE of some sort.
**What are the idiomatic solutions to this problem?**
In my own research, I have found:
- Linked Servers - They are very slow when doing upserts and are prone to sending your whole table over the network when it only needed a few rows.
- SSIS - Painfully unportable, very old, and forces you to jump through extra hoops like putting staging tables on both servers.
I hope that there is something better.
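For comparison, one common middle ground (a sketch only; the linked-server name ProdServer, the stage schema, the ModifiedDate watermark column and the table names are all hypothetical) is to pull only the rows changed since the last load across the link into a local staging table, then upsert locally where MERGE or UPDATE+INSERT is cheap:
```
-- Run in the Reporting database.
DECLARE @watermark datetime2 =
    (SELECT ISNULL(MAX(ModifiedDate), '19000101') FROM dbo.FactOrders);

-- 1) Small remote read: only rows changed since the last load cross the network.
INSERT INTO stage.FactOrders (OrderId, CustomerId, Amount, ModifiedDate)
SELECT OrderId, CustomerId, Amount, ModifiedDate
FROM   ProdServer.Production.dbo.Orders
WHERE  ModifiedDate > @watermark;

-- 2) Local upsert into the reporting table.
MERGE dbo.FactOrders AS tgt
USING stage.FactOrders AS src
   ON tgt.OrderId = src.OrderId
WHEN MATCHED THEN
    UPDATE SET CustomerId   = src.CustomerId,
               Amount       = src.Amount,
               ModifiedDate = src.ModifiedDate
WHEN NOT MATCHED THEN
    INSERT (OrderId, CustomerId, Amount, ModifiedDate)
    VALUES (src.OrderId, src.CustomerId, src.Amount, src.ModifiedDate);

TRUNCATE TABLE stage.FactOrders;
```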
J. Mini
(1237 rep)
Nov 20, 2023, 12:54 PM
• Last activity: Nov 22, 2023, 01:56 PM
0
votes
1
answers
56
views
Delete records from table that are not in the file being inserted
I have a Snaplogic pipeline that queries a web application every night and loads the data into a SQL Server table. I use the 'Update' snap to add new records or add any changes. The goal is that the database should mirror the web application (with a one day lag).
The Update seems to be working fine. The issue is, when a record is deleted in the web application, it isn't being deleted in the database. Does anyone have any ideas as to how to either delete a record in the target database when it's been deleted in the source application or flag the record in the database as 'deleted' in the source?
I thought perhaps I could write the daily data to a temp table and then do some form of merge or insert operation in the database. Or I could truncate the table in the database and rebuild it each day with the new data, but as the data grows that seems very inefficient.
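As a sketch of that staging-table idea (dbo.AppRecords, stage.AppRecords and the columns are hypothetical names for the mirror table and the nightly extract), a MERGE can do the update, the insert, and the delete-or-flag of rows that vanished from the source in one statement:
```
MERGE dbo.AppRecords AS tgt
USING stage.AppRecords AS src
   ON tgt.RecordId = src.RecordId
WHEN MATCHED THEN
    UPDATE SET tgt.Payload = src.Payload
WHEN NOT MATCHED BY TARGET THEN
    INSERT (RecordId, Payload) VALUES (src.RecordId, src.Payload)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;   -- or: UPDATE SET tgt.IsDeleted = 1  to keep a "deleted in source" flag
```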
Matt
(291 rep)
Oct 18, 2023, 10:14 PM
• Last activity: Oct 18, 2023, 11:17 PM
10
votes
1
answers
2740
views
Merge row size overflow in SQL Server - "Cannot create a row of size.."
The target table to which I'm trying to merge the data has ~660 columns.
The code for the merge:
MERGE TBL_BM_HSD_SUBJECT_AN_1 AS targetTable
USING
(
SELECT *
FROM TBL_BM_HSD_SUBJECT_AN_1_STAGING
WHERE [ibi_bulk_id] in (20150520141627106) and id in(101659113)
) AS sourceTable
ON (...)
WHEN MATCHED AND ((targetTable.[sampletime]
***Cannot create a row of size 8410 which is greater than the allowable maximum row size of 8060.***
Why did the second attempt to merge the same row, which had already been inserted, result in an error? If this row exceeded the maximum row size, I would expect it not to be possible to insert it in the first place.
- Removing "WHEN NOT MATCHED" section from merge statement
- Running an update statement with the same row I tried to merge
Why does the update via MERGE not succeed, while the insert does, and a direct update also does?
**UPDATE:**
Managed to find the actual row size - 4978. I've created a new table that has only this row, and found the row size this way:
And I still don't see anything exceeding the allowed limit.
**UPDATE(2):**
Full reproduce
Made an effort that this reproduce will not require any additional auxiliary objects, and that the data will be (somewhat) obfuscated.
Tried this on several servers, from version 2012, and one from 2008, and was able to fully reproduce in all of them.

Yosi Dahari
(517 rep)
May 21, 2015, 03:43 PM
• Last activity: Sep 27, 2023, 11:09 AM
0
votes
1
answers
34
views
Optimize import data from one table into another
Edited at mustaccio's request.
I have a simple table:
-- Table: public.objects
-- DROP TABLE IF EXISTS public.objects;
CREATE TABLE IF NOT EXISTS public.objects
(
id integer NOT NULL DEFAULT nextval('objects_id_seq'::regclass),
value text COLLATE pg_catalog."default",
key text COLLATE pg_catalog."default",
is_valid boolean DEFAULT false
) PARTITION BY LIST (is_valid);
ALTER TABLE IF EXISTS public.objects
OWNER to postgres;
-- Index: key_idx
-- DROP INDEX IF EXISTS public.key_idx;
CREATE INDEX IF NOT EXISTS key_idx
ON public.objects USING btree
(key COLLATE pg_catalog."default" ASC NULLS LAST)
;
-- Index: id_idx
-- DROP INDEX IF EXISTS public.id_idx;
CREATE INDEX IF NOT EXISTS id_idx
ON public.objects USING btree
(id ASC NULLS LAST)
;
-- Index: value_idx
-- DROP INDEX IF EXISTS public.value_idx;
CREATE INDEX IF NOT EXISTS value_idx
ON public.objects USING btree
(value COLLATE pg_catalog."default" ASC NULLS LAST)
;
-- Partitions SQL
CREATE TABLE public.objects_with_valid PARTITION OF public.objects
FOR VALUES IN (true)
PARTITION BY LIST (lower("left"(value, 1)))
TABLESPACE pg_default;
ALTER TABLE IF EXISTS public.objects_with_valid
OWNER to postgres;
CREATE TABLE public.objects_without_valid PARTITION OF public.objects
FOR VALUES IN (false)
PARTITION BY LIST (lower("left"(value, 1)))
TABLESPACE pg_default;
ALTER TABLE IF EXISTS public.objects_without_valid
OWNER to postgres;
with 92M+ rows. Selecting is quite fast: WHERE around 0.06 sec, LIKE around 0.2 sec.
Then I imported data into a temporary table (around 500K rows). I want to insert data from the temp table into the production one, but the method that I use is extremely slow.
MERGE INTO table ca
USING temptable t
ON t.value = ca.value and t.key = ca.key
WHEN NOT MATCHED THEN
INSERT (value, key, is_valid)
VALUES (t.value, t.key, true);
EXPLAIN for above:
"Merge on objects ca (cost=40500054.86..42121761.54 rows=0 width=0)"
" Merge on objects_with_0 ca_1"
" Merge on objects_with_1 ca_2"
" Merge on objects_with_2 ca_3"
" Merge on objects_with_3 ca_4"
" Merge on objects_with_4 ca_5"
" Merge on objects_with_5 ca_6"
" Merge on objects_with_6 ca_7"
" Merge on objects_with_7 ca_8"
" Merge on objects_with_8 ca_9"
" Merge on objects_with_9 ca_10"
" Merge on objects_with_a ca_11"
" Merge on objects_with_b ca_12"
" Merge on objects_with_c ca_13"
" Merge on objects_with_d ca_14"
" Merge on objects_with_e ca_15"
" Merge on objects_with_f ca_16"
" Merge on objects_with_g ca_17"
" Merge on objects_with_h ca_18"
" Merge on objects_with_i ca_19"
" Merge on objects_with_j ca_20"
" Merge on objects_with_k ca_21"
" Merge on objects_with_l ca_22"
" Merge on objects_with_m ca_23"
" Merge on objects_with_n ca_24"
" Merge on objects_with_o ca_25"
" Merge on objects_with_p ca_26"
" Merge on objects_with_q ca_27"
" Merge on objects_with_r ca_28"
" Merge on objects_with_s ca_29"
" Merge on objects_with_t ca_30"
" Merge on objects_with_u ca_31"
" Merge on objects_with_v ca_32"
" Merge on objects_with_w ca_33"
" Merge on objects_with_x ca_34"
" Merge on objects_with_y ca_35"
" Merge on objects_with_z ca_36"
" Merge on objects_with_default ca_37"
" -> Merge Left Join (cost=40500054.86..42121761.54 rows=92459232 width=31)"
" Merge Cond: ((t.value = ca.value) AND (t.key = ca.key))"
" -> Sort (cost=19611819.49..19842967.57 rows=92459232 width=21)"
" Sort Key: t.value, t.key"
" -> Seq Scan on tempemails t (cost=0.00..1689976.32 rows=92459232 width=21)"
" -> Materialize (cost=20888235.37..21349648.96 rows=92282717 width=31)"
" -> Sort (cost=20888235.37..21118942.17 rows=92282717 width=31)"
" Sort Key: ca.value, ca.key"
" -> Append (cost=0.00..2055626.75 rows=92282717 width=31)"
" -> Seq Scan on objects_with_0 ca_1 (cost=0.00..777.18 rows=45418 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_1 ca_2 (cost=0.00..2561.07 rows=151407 width=30)"
" Filter: is_valid"
" -> Seq Scan on objects_with_2 ca_3 (cost=0.00..1427.53 rows=84653 width=29)"
" Filter: is_valid"
" -> Seq Scan on objects_with_3 ca_4 (cost=0.00..937.78 rows=56378 width=27)"
" Filter: is_valid"
" -> Seq Scan on objects_with_4 ca_5 (cost=0.00..853.42 rows=51242 width=27)"
" Filter: is_valid"
" -> Seq Scan on objects_with_5 ca_6 (cost=0.00..670.16 rows=40216 width=27)"
" Filter: is_valid"
" -> Seq Scan on objects_with_6 ca_7 (cost=0.00..521.06 rows=31206 width=27)"
" Filter: is_valid"
" -> Seq Scan on objects_with_7 ca_8 (cost=0.00..590.27 rows=35327 width=27)"
" Filter: is_valid"
" -> Seq Scan on objects_with_8 ca_9 (cost=0.00..515.52 rows=30752 width=27)"
" Filter: is_valid"
" -> Seq Scan on objects_with_9 ca_10 (cost=0.00..479.70 rows=28670 width=28)"
" Filter: is_valid"
" -> Seq Scan on objects_with_a ca_11 (cost=0.00..131448.62 rows=7602062 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_b ca_12 (cost=0.00..81150.92 rows=4700792 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_c ca_13 (cost=0.00..101959.91 rows=5871091 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_d ca_14 (cost=0.00..83177.18 rows=4819218 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_e ca_15 (cost=0.00..57243.55 rows=3315255 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_f ca_16 (cost=0.00..42702.34 rows=2466034 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_g ca_17 (cost=0.00..57396.93 rows=3321793 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_h ca_18 (cost=0.00..44526.97 rows=2579897 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_i ca_19 (cost=0.00..31954.73 rows=1852873 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_j ca_20 (cost=0.00..110328.16 rows=6384516 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_k ca_21 (cost=0.00..74663.27 rows=4339927 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_l ca_22 (cost=0.00..92877.07 rows=5369207 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_m ca_23 (cost=0.00..156545.37 rows=9049237 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_n ca_24 (cost=0.00..48544.22 rows=2814722 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_o ca_25 (cost=0.00..20505.73 rows=1193873 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_p ca_26 (cost=0.00..67841.19 rows=3923819 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_q ca_27 (cost=0.00..3265.34 rows=190334 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_r ca_28 (cost=0.00..74204.71 rows=4295571 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_s ca_29 (cost=0.00..132988.34 rows=7689234 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_t ca_30 (cost=0.00..68148.51 rows=3949151 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_u ca_31 (cost=0.00..7068.79 rows=412379 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_v ca_32 (cost=0.00..30176.51 rows=1746351 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_w ca_33 (cost=0.00..27847.93 rows=1613693 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_x ca_34 (cost=0.00..10339.96 rows=595496 width=32)"
" Filter: is_valid"
" -> Seq Scan on objects_with_y ca_35 (cost=0.00..15530.14 rows=902014 width=31)"
" Filter: is_valid"
" -> Seq Scan on objects_with_z ca_36 (cost=0.00..12222.85 rows=715885 width=30)"
" Filter: is_valid"
" -> Seq Scan on objects_with_default ca_37 (cost=0.00..220.24 rows=13024 width=29)"
" Filter: is_valid"
"JIT:"
" Functions: 196"
" Options: Inlining true, Optimization true, Expressions true, Deforming true"
I use the DB inside Docker:
PostgreSQL 15.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
Host:
CPU: i7, 4 cores
RAM: 16 GB 2133
Storage: 256 GB SSD
How can I optimize the process?
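For what it's worth, a sketch of one alternative worth measuring (same single-writer semantics as the MERGE; the index name is arbitrary): index and analyze the small staging table, and express the load as an anti-join INSERT so the planner can drive it from the ~500K staging rows and probe the existing indexes on objects, instead of sorting both 92M-row sides as in the plan above.
```
CREATE INDEX temptable_value_key_idx ON temptable (value, key);
ANALYZE temptable;

INSERT INTO public.objects (value, key, is_valid)
SELECT t.value, t.key, true
FROM   temptable AS t
WHERE  NOT EXISTS (
           SELECT 1
           FROM   public.objects AS ca
           WHERE  ca.value = t.value
             AND  ca.key   = t.key
       );
```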
Solmorth
(1 rep)
Sep 7, 2023, 09:43 PM
• Last activity: Sep 8, 2023, 01:17 PM
Showing page 1 of 20 total questions