Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
234
views
Conditionally use computed column
Is it possible to set up a NOT NULL column so that it either:
a) accepts a value supplied freely by an INSERT/UPDATE query, or
b) if no value is provided on insert, defaults to the value of another column so that no NULL entry is added?
I know that with a computed column I can make the value follow another column's value, but I cannot then override this.
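For illustration, one common workaround (a minimal sketch, assuming SQL Server and hypothetical names dbo.Demo, ColA, ColB; none of these come from the question) is to leave the column nullable and backfill it with a trigger:
CREATE TABLE dbo.Demo
(
    Id   int IDENTITY PRIMARY KEY,
    ColA varchar(50) NOT NULL,
    ColB varchar(50) NULL   -- goal: free value, else fall back to ColA
);
GO
CREATE TRIGGER dbo.trg_Demo_FillColB
ON dbo.Demo
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy ColA into ColB only for affected rows that were left NULL
    UPDATE d
    SET    ColB = d.ColA
    FROM   dbo.Demo AS d
    JOIN   inserted AS i ON i.Id = d.Id
    WHERE  d.ColB IS NULL;
END;
GO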
Sami.C
(153 rep)
Mar 31, 2021, 04:05 AM
• Last activity: Jun 14, 2025, 02:09 AM
0
votes
1
answers
586
views
How to update rows in GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP without changing timestamp
On Db2 11.5 on Linux I have (simplified example) the following table:
CREATE TABLE ADMIN.TAB1 (
ID INTEGER NOT NULL,
COL_TEXT CHAR(10),
DATE_TIME TIMESTAMP NOT NULL GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP IMPLICITLY HIDDEN
);
Some data:
INSERT INTO ADMIN.TAB1 (ID, COL_TEXT) VALUES (1, 'A');
INSERT INTO ADMIN.TAB1 (ID, COL_TEXT) VALUES (2, 'A');
INSERT INTO ADMIN.TAB1 (ID, COL_TEXT) VALUES (3, 'A');
Display the data:
SELECT ID, COL_TEXT, DATE_TIME FROM ADMIN.TAB1;
Now I am required to update the table with:
UPDATE ADMIN.TAB1 SET COL_TEXT = 'B' WHERE ID = 1;
But the REQUIREMENT is that only the COL_TEXT column should change. By the "generated always" definition the DATE_TIME column is going to change as well, but there is a one-time business requirement that this column must NOT change: some application actions are scheduled to execute according to the DATE_TIME field, and if new data is found in this DATE_TIME column then some actions are performed in some other tables.
---
I was thinking of:
1. Dropping "generated always".
2. Doing the update.
3. Changing back from "generated by default" to "generated always" --> but I can't figure out how to do this.
How can I update the table without affecting the timestamp column?
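For illustration only, the three steps might look like the sketch below. Whether a given Db2 11.5 fix pack accepts DROP GENERATED and SET GENERATED ALWAYS ... AS ROW CHANGE TIMESTAMP on an existing column is an assumption to verify against the ALTER TABLE reference, and the table state should be checked afterwards:
-- Step 1 (assumed syntax): remove the generated attribute
ALTER TABLE ADMIN.TAB1 ALTER COLUMN DATE_TIME DROP GENERATED;
-- Step 2: the one-time update, with DATE_TIME left untouched
UPDATE ADMIN.TAB1 SET COL_TEXT = 'B' WHERE ID = 1;
-- Step 3 (assumed syntax): restore the original definition
ALTER TABLE ADMIN.TAB1 ALTER COLUMN DATE_TIME
    SET GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;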
folow
(523 rep)
Jun 12, 2024, 11:29 AM
• Last activity: Apr 22, 2025, 02:01 PM
0
votes
1
answers
49
views
Waterfall Results for Numerous Individual Calculations
I'm trying to take the summation of 9 individual calculations, as listed below, where the results follow a waterfall from 1-9. A given account will never have a value for each of the 9 different calculations; however, it could have a value in more than one of the calculated buckets, so we only want to render values in the waterfall order (1-9). For example, if a given account had a value for the first (--1-) calculated bucket and the last (--9-) calculated bucket, then I would only want to bring in the first (--1-) calculated value.
SELECT AccountNbr,
SUM(CurrSofrOn) AS CurrSofrOn,
SUM(CurrTlpNonStableBase) AS CurrTlpNonStableBase,
SUM(CurrTlpStableBase) AS CurrTlpStableBase,
SUM(CurrBeta) AS CurrBeta, SUM(CurrFtp) AS CurrFtp,
SUM(CurrBase) AS CurrBase, SUM(CurrStabilityAdj) AS CurrStabilityAdj,
SUM(CurrTlpAdjAge) AS CurrTlpAdjAge, SUM(CurrModeled) AS CurrModeled
FROM
(
SELECT
COALESCE(l.AcctNbr_2, l.AcctNbr_1) AS AccountNbr,
--1-
COALESCE(l.CurrAvgBook * l.SofrOnRate_2,0) AS CurrSofrOn,
--2-
COALESCE(l.CurrAvgBook * l.MovingAvgBaseRt_2,0) AS CurrBase,
--3-
COALESCE(l.CurrAvgBook * l.TlpNonStableBase_2,0) AS CurrTlpNonStableBase,
--4-
COALESCE(l.CurrAvgBook * l.TlpStableBase_2,0) AS CurrTlpStableBase,
--5-
COALESCE(l.CurrAvgBook * l.Beta_2,0) AS CurrBeta,
--6-
COALESCE(l.CurrAvgBook * l.StableAdjBal_2,0) AS CurrStabilityAdj,
--7-
COALESCE(l.CurrAvgBook * l.FtpRate_2,0) AS CurrFtp,
--8-
COALESCE(l.CurrAvgBook * l.TlpAdjAge_2,0) AS CurrTlpAdjAge,
--9-
COALESCE(l.CurrAvgBook * l.FtpRateModel_2,0) AS CurrModeled
FROM
(
SELECT
n1.AcctNbr AS AcctNbr_1, n2.AcctNbr AS AcctNbr_2,
n1.TotalBookValueEoM AS PrevCurrBook,n2.TotalBookValueEoM AS CurrBook, n1.TotalBookValueAvg AS PrevAvgBook, n2.TotalBookValueAvg AS CurrAvgBook,
n1.FtpRate_Model AS FtpRateModel_1, n2.FtpRate_Model AS FtpRateModel_2, n1.Stable_Base AS StableBase_1, n2.Stable_Base AS StableBase_2,
n1.SofrOnRate AS SofrOnRate_1, n2.SofrOnRate AS SofrOnRate_2, n1.StableAdjustment_Balance AS StableAdjBal_1,n2.StableAdjustment_Balance AS StableAdjBal_2,
n1.FtpRate_Model_Floored AS FtpRateModelFloored_1, n2.FtpRate_Model_Floored AS FtpRateModelFloored_2, n1.TlpAdjustment_Age AS TlpAdjAge_1,
n2.TlpAdjustment_Age AS TlpAdjAge_2, n1.MovingAverageBaseRate AS MovingAvgBaseRt_1, n2.MovingAverageBaseRate AS MovingAvgBaseRt_2,
n1.FtpRate_Final AS FtpRate_1, n2.FtpRate_Final AS FtpRate_2, n1.Beta AS Beta_1, n2.Beta AS Beta_2, n1.TlpTermInMonths_NonStable AS TlpTermInMnthsNS_1,
n2.TlpTermInMonths_NonStable AS TlpTermInMnthsNS_2, n1.BaseRateIndex AS BaseRateIndex_1, n2.BaseRateIndex AS BaseRateIndex_2,
n1.TLP_Stable_Base AS TlpStableBase_1, n2.TLP_Stable_Base AS TlpStableBase_2, n1.TLP_NonStable_Base AS TlpNonStableBase_1, n2.TLP_NonStable_Base AS TlpNonStableBase_2
FROM
(
SELECT DISTINCT *
FROM dbo.NonMaturityDepositResults n
WHERE n.TapeDate = '2024-11-30'
) n1
FULL JOIN
(
SELECT DISTINCT *
FROM dbo.NonMaturityDepositResults n
WHERE n.TapeDate = '2024-10-31'
) n2
ON n2.account_key = n1.account_key
and n2.Channel = n1.Channel
and n2.DepositType = n1.DepositType
) l
) a
GROUP BY a.AccountNbr
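A compact way to express the 1-9 waterfall itself is to COALESCE the buckets in priority order before summing. A sketch, assuming each rate column is NULL when its bucket does not apply (rather than coalesced to 0 up front), and assuming the joined month-over-month rows above are exposed as a hypothetical view named MonthPairs with the same column aliases:
SELECT COALESCE(AcctNbr_2, AcctNbr_1) AS AccountNbr,
       SUM(CurrAvgBook * COALESCE(
           SofrOnRate_2,        --1-
           MovingAvgBaseRt_2,   --2-
           TlpNonStableBase_2,  --3-
           TlpStableBase_2,     --4-
           Beta_2,              --5-
           StableAdjBal_2,      --6-
           FtpRate_2,           --7-
           TlpAdjAge_2,         --8-
           FtpRateModel_2       --9-
       )) AS WaterfallValue
FROM MonthPairs
GROUP BY COALESCE(AcctNbr_2, AcctNbr_1);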
Daylon Hunt
(27 rep)
Mar 5, 2025, 06:37 PM
• Last activity: Mar 5, 2025, 08:42 PM
1
votes
2
answers
1338
views
Alter table generated text column to normal text column
If I have a table with a generated text column, how do I alter the column to remove the generation expression to get a normal, not-generated text type? Here's the schema:
CREATE TABLE product_history (
product_id bigint,
name text GENERATED ALWAYS AS ( 'some name' ) STORED
);
What I'd like is a command to use generation by default but allow custom values. As of Postgres 13.3, I don't think that's possible. The default clause looks like it only applies to generated identity columns:
-- Command I want:
ALTER TABLE product_history ALTER COLUMN name SET GENERATED BY DEFAULT;
Here's the insert query I want to work:
INSERT INTO product_history (product_id, name) VALUES (1, 'foo');
-- ERROR: cannot insert into column "name"
-- Detail: Column "name" is a generated column.
I'm running Postgres 13.3.
The use case is creating a history table such that I can insert the transition table of statement-level trigger into the history table. I can't do a blind insert from a select because the transition table rows include the generated column.
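Worth noting as a sketch: if removing the generation expression entirely is acceptable, PostgreSQL 13's ALTER TABLE supports DROP EXPRESSION, which turns a stored generated column into a plain column in place, keeping the already-generated values as ordinary data:
ALTER TABLE product_history ALTER COLUMN name DROP EXPRESSION;
-- after which the insert from the question works:
INSERT INTO product_history (product_id, name) VALUES (1, 'foo');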
Joe
(179 rep)
Jul 15, 2021, 04:23 AM
• Last activity: Nov 26, 2024, 10:00 AM
0
votes
0
answers
37
views
Wrong values in generated columns in postgres 15.1
I have a problem with a generated column in one of my tables. The column value is calculated based on another column in the row (which is of type varchar). Most of the time it works fine, but sometimes the generated value is wrong; I can't find any specific steps that cause the issue. A simple non-updating update like update my_table set column = column where generated_column_value <> generated_column_expression fixes the issue, but after some time the problem occurs again.
I tried recreating the column (dropping it and creating it again), but it didn't help. Does anyone have a similar problem, or can you give me some recommendation on what I should check?
Here is the alter that I used (the table already has a column named status of type varchar(1)):
alter table my_table
add column is_active boolean generated always as (status = 'A') stored;
After some time, when I execute the following select:
select *
from my_table
where is_active
and status <> 'A'
I can receive some rows, but I expect no rows (rows which have true in the is_active column should all have the value 'A' in the status column). What can be the cause of the issue, or what should I check?
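For reference, a NULL-safe way to both find and touch the mismatched rows. This is a sketch of the "non-updating update" fix described above, assuming the same table and columns, not an explanation of the cause:
-- Find rows where the stored generated value disagrees with its expression:
SELECT * FROM my_table WHERE is_active IS DISTINCT FROM (status = 'A');

-- The no-op update that forces the generated column to be recomputed:
UPDATE my_table SET status = status
WHERE is_active IS DISTINCT FROM (status = 'A');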
Thanks a lot
trefkar
(1 rep)
Oct 1, 2024, 09:13 PM
0
votes
1
answers
508
views
Occasional incorrect values for generated columns in PostgreSQL 13
I have a PostgreSQL 13 server running in Google Cloud and one of our tables has a few generated columns and occasionally I find rows where the value saved in the generated column isn't up-to-date.
For example, if I run a query like:
SELECT * FROM [ table ]
WHERE [ column_name ] != [ generation_expression value ]
where [ generation_expression value ] is the SQL expression copy-pasted directly from information_schema.columns for that table, I would expect to always find 0 results. Unfortunately, this is not the case.
When I find a row where the generated column isn't up-to-date, I can UPDATE a column in that row and then the generated column will automatically update to the expected value. However, I don't understand why the generated column could ever have an out of date value in the first place.
I don't have any triggers set up for this database (which I confirmed by checking SELECT * FROM information_schema.triggers), and looking at the PostgreSQL docs I don't see any scenario where a generated column wouldn't have an up-to-date value.
Does anyone know any scenario that could explain why the generated columns in my PostgreSQL database might occasionally contain an incorrect value?
EDIT: Here's a little more information about the table:
Column | Type | Collation | Nullable | Default
----------------------------+--------------------------+-----------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------
id | uuid | | not null |
snooze_until | timestamp with time zone | | |
latest_message_drafted_at | timestamp with time zone | | |
latest_message_received_at | timestamp with time zone | | |
latest_message_sent_at | timestamp with time zone | | |
default_sort_date | timestamp with time zone | | | generated always as (GREATEST(snooze_until, COALESCE(latest_message_received_at, latest_message_sent_at, latest_message_drafted_at))) stored
I don't understand how it is possible for:
SELECT COUNT(*)
FROM threads
WHERE default_sort_date != GREATEST(snooze_until, COALESCE(latest_message_received_at, latest_message_sent_at, latest_message_drafted_at))
to ever return anything other than 0 results (or, if I made a mistake in setting up the generation_expression, why it wouldn't match on 100% of the rows)
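One detail worth checking in the verification query itself (a side note on the check, not the cause): with a plain != comparison, a row where either side is NULL never matches, so mismatches involving NULLs are invisible. A NULL-safe variant of the same check:
SELECT COUNT(*)
FROM threads
WHERE default_sort_date IS DISTINCT FROM
      GREATEST(snooze_until, COALESCE(latest_message_received_at,
                                      latest_message_sent_at,
                                      latest_message_drafted_at));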
Joshmaker
(173 rep)
Oct 5, 2021, 07:03 PM
• Last activity: Oct 1, 2024, 12:10 PM
8
votes
3
answers
1047
views
View doesn't recognise a change to an underlying table when an existing column is dropped and replaced with one with the same name but as computed
My understanding here is that when a view is created, the definition is stored in metadata (system tables), i.e. sys.views. Also, if you use SELECT * FROM it will store the exact column names, e.g. SELECT a, b FROM. Even if you use the CHECK OPTION, it still will not validate against the underlying tables. SCHEMABINDING, on the other hand, will validate against the underlying tables.
My issue is that when a column on a table is dropped and replaced with one with the same name, but as a computed column, something weird happens when you query a view based on that table.
Here's an example.
DROP TABLE IF EXISTS dbo.Test1;
DROP TABLE IF EXISTS dbo.Test2;
GO
CREATE TABLE dbo.Test1
(
Id INT IDENTITY(1,1) PRIMARY KEY ,
Test1Col1 VARCHAR(80) NULL ,
Test1Col2 VARCHAR(80) NULL ,
Test1Col3 VARCHAR(80) NULL
);
CREATE TABLE dbo.Test2
(
Id INT IDENTITY(1,1) PRIMARY KEY ,
Test2Col1 VARCHAR(80) NULL ,
Test2Col2 VARCHAR(80) NULL ,
Test2Col3 VARCHAR(80) NULL ,
Test1Id INT
);
GO
INSERT INTO dbo.Test1
(Test1Col1, Test1Col2, Test1Col3)
VALUES
('Test1Col1Data1', 'Test1Col2Data1', 'Test1Col3Data1') ,
('Test1Col1Data2', 'Test1Col2Data2', 'Test1Col3Data2') ,
('Test1Col1Data3', 'Test1Col2Data3', 'Test1Col3Data3') ;
GO
INSERT INTO dbo.Test2
(Test2Col1, Test2Col2, Test2Col3, Test1Id)
VALUES
('Test2Col1Data1', 'Test2Col2Data1', 'Test2Col3Data1', 1) ,
('Test2Col1Data2', 'Test2Col2Data2', 'Test2Col3Data2', 2) ,
('Test2Col1Data3', 'Test2Col2Data3', 'Test2Col3Data3', 3) ;
GO
Create a view based on tables.
CREATE OR ALTER
VIEW dbo.View1
AS
SELECT T1.*, T2.*
FROM (
SELECT TestId = T.Id
FROM dbo.Test1 T
) T1
INNER JOIN dbo.Test2 T2 ON T2.Test1Id = T1.TestId ;
GO
SELECT * FROM dbo.View1 ;
GO
You get this result set:
[screenshot: initial result set]
Now I ALTER table dbo.Test2: DROP column Test2Col3 and replace it with a computed column with the same name.
ALTER TABLE dbo.Test2
DROP COLUMN Test2Col3 ;
ALTER TABLE dbo.Test2
ADD Test2Col3 AS Test2Col1;
GO
Now when I query the view, I get the following result set.
SELECT * FROM dbo.View1 ;
GO
[screenshot: result set after the change]
The data in columns Test2Col3 and Test1Id looks to have shifted left by one: what I would expect to see in Test2Col3 is being shown in Test1Id, and vice versa.
[screenshot: Test2Col3 and Test1Id values swapped]
Why is this? I know a view refresh or ALTER VIEW will correct this, but the column names haven't changed. I've replicated this in SQL Server 2022 and Azure SQL Database.
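As the question notes, refreshing the view's stored metadata realigns the columns; for reference, the documented procedure call is:
EXEC sys.sp_refreshview N'dbo.View1';
SELECT * FROM dbo.View1;  -- columns line up with their names again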
Geezer
(513 rep)
Jun 21, 2024, 10:04 AM
• Last activity: Jun 21, 2024, 07:23 PM
7
votes
1
answers
318
views
Filter by partition number when the table is partitioned by computed column
I've created 2 identical tables partitioned by an integer column. In one of the tables the column is computed, in the other table the column is not computed.
When I query the table with the computed column, filtering a single partition and sorting with the clustered index (assuming the output data is already sorted, so no additional sort is required), it scans the entire table.
The real table has billions of rows; we're using partitioning to avoid scanning the entire table.
SQL Server version is
> Microsoft SQL Server 2019 (RTM-CU26) (KB5035123) - 15.0.4365.2 (X64)
> Mar 29 2024 23:02:47 Copyright (C) 2019 Microsoft Corporation
> Developer Edition (64-bit) on Windows Server 2022 Standard 10.0 (Build 20348: )
CREATE PARTITION FUNCTION pf_test(int) AS RANGE RIGHT FOR VALUES(1, 2, 3, 4)
GO
CREATE PARTITION SCHEME ps_test AS PARTITION pf_test ALL TO([PRIMARY])
GO
CREATE TABLE test_computed
(
ID BIGINT NOT NULL,
ID_C AS CAST(ID % 4 AS INT) PERSISTED,
PRIMARY KEY CLUSTERED (ID, ID_C) ON ps_test(ID_C)
) ON ps_test(ID_C)
GO
CREATE TABLE test_not_computed
(
ID BIGINT NOT NULL,
ID_C INT NOT NULL,
PRIMARY KEY CLUSTERED (ID, ID_C) ON ps_test(ID_C)
) ON ps_test(ID_C)
GO
INSERT INTO test_computed(ID)
SELECT TOP 1000000 ROW_NUMBER() OVER(ORDER BY GETDATE())
FROM sys.all_columns a
CROSS JOIN sys.all_columns b
GO
INSERT INTO test_not_computed(ID, ID_C)
SELECT TOP 1000000
ROW_NUMBER() OVER(ORDER BY GETDATE()),
ROW_NUMBER() OVER(ORDER BY GETDATE()) % 4
FROM sys.all_columns a
CROSS JOIN sys.all_columns b
GO
The data is identical, but the execution plans for the two queries are different.
SELECT TOP 100 *
FROM test_computed
WHERE $partition.pf_test(ID_C) = 1
ORDER BY ID DESC
SELECT TOP 100 *
FROM test_not_computed
WHERE $partition.pf_test(ID_C) = 1
ORDER BY ID DESC
[screenshot: the two execution plans]
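For comparison when reproducing this (a sketch, not a fix): under the RANGE RIGHT function above, partition 1 holds ID_C values below the first boundary, i.e. ID_C = 0 for this data, so the same partition can also be targeted by filtering the persisted column directly, to see whether the plan changes:
SELECT TOP 100 *
FROM test_computed
WHERE ID_C = 0
ORDER BY ID DESC;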
Artashes Khachatryan
(1533 rep)
Jun 7, 2024, 06:29 AM
• Last activity: Jun 7, 2024, 06:11 PM
2
votes
1
answers
673
views
What is the best choose for PK in Partitioned Table?
I have one large table that is partitioned (table name: Trans). At the moment this table is to be created on 32 partitions. This table will contain approximately 300 million records, and data older than 14 days will be deleted daily. One of the columns in this table is a reference to a table that will contain up to 5 million records (table name: Sens), and I also want that table to be partitioned. I would like to ask you about:
1. Will it be a problem that both tables will use the same partitioning function? The Sens table would then also be distributed over 32 partitions and would be saved in the same files as the Trans table. Is this a good approach?
2. The Trans table has a PK based on two columns, TranID (IDENTITY(1,1)) and PartitionID. At the moment, the FK to the smaller table (Sens) is based on only one column, SenID. The smaller table also has to be partitioned. What will be the difference in approach/efficiency/speed if the PK in the Sens table is only on the IDENTITY(1,1) column instead of the IDENTITY(1,1) column plus the partition column, i.e.
ALTER TABLE [dbo].[Sen]
ADD CONSTRAINT [PK_SenID]
PRIMARY KEY CLUSTERED ([SenID] ASC) ON [PRIMARY];
-- or
ALTER TABLE [dbo].[Sen]
ADD CONSTRAINT [PK_SenID]
PRIMARY KEY CLUSTERED (
[SenID] ASC,
[PartitionID]
) ON [psTrans]([PartitionID])
3. Have you ever tried a partition column which is computed? I am thinking about choosing the partition according to a new column which is computed based on another column in the table:
CAST(HASHBYTES('MD5', [othercolumnInTable]) AS tinyint) % 32
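For illustration, point 3 might look like the sketch below (SomeColumn is a hypothetical column name; HASHBYTES is deterministic, so the computed column can be PERSISTED, which partitioning requires):
ALTER TABLE dbo.Trans
    ADD PartitionID AS (CAST(HASHBYTES('MD5', SomeColumn) AS tinyint) % 32) PERSISTED NOT NULL;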
---
Many thanks for the comprehensive answer. The idea is that there are 32 partitions, 16 files and 8 filegroups. In other words, each filegroup is supposed to contain 2 files (i.e. a total of 4 partitions). Honestly, it's my first time designing a large database where I have to create new filegroups and use partitioning, so the above numbers are indicative. Do you have any advice on how to properly divide into files, filegroups and partitions?
Regarding the partitioning of the Trans table, the partition column will be of tinyint type. Partitioning follows business logic and breaks all data (about 300 million records) into roughly equal parts (or at least that's the assumption). Thus, partitioning will not be by date, but by a column of type tinyint.
We want to take advantage of partitioning for the Trans table because it will contain a lot of data, i.e. about 300 million records. In addition, it will have about 60 columns. Moreover, the requirement is that the database can manage 300 inserts per second on this table and, at the same time, about 250 update operations on it. So I understand that by partitioning this table, with many insert and update operations, we will be using multiple files at the same time, which should speed things up and handle the requirements. Although maybe my interpretation is wrong?
In addition to the Sens table, which I am describing here, there will also be one Events table, which will have FK references to the PK of the Trans table and will contain about 100-200 million records. To summarize, the tables I plan to partition at the moment are Trans (about 300 million records), Sens (about 5 million records) and Events (about 100-200 million records). All of them would use the same partitioning function, i.e. they would be spread across 32 partitions, 16 files and 8 filegroups. There should be 300 inserts on the Trans table and 250~290 updates per second. There should be 200-300 update operations per second on the Sens table. There should be approximately 400-500 inserts per second on the Events table. The main reason to partition them all is not to do all of these operations on one database file, but to distribute them properly. You wrote that you have experience with partitioning. Do you think partitioning will be good for these requirements?
As for data deletion: every day, data older than 14 days will be removed from the Trans and Events tables. I thought to do it in such a way that, with a DELETE operation, I delete data separately for each partition. I have no experience in this and I do not know if this is the most effective option. Moreover, the solution is to be created as part of AlwaysOn (so maybe there are some limits).
axdna
(119 rep)
Sep 8, 2020, 12:01 PM
• Last activity: Jun 1, 2024, 09:46 AM
1
votes
1
answers
100
views
MySQL Trigger updating second table based on History and computed values
**The Goal:**
Update a secondary table called **Lottery_Cycle** when an INSERT occurs on table **Lottery_History**, with some computed values and checks against previous data.
Let's say a lottery with 25 numbers of which 15 are drawn.
**What is a Cycle in this scenario?**
In this scenario, a cycle consists of recording when all lottery numbers have been drawn.
When this occurs, one cycle closes and another begins.
**Lottery_History structure**
CREATE TABLE IF NOT EXISTS Lottery_History (
    Id      SMALLINT(5) UNSIGNED NOT NULL,
    Ball_01 TINYINT(3) UNSIGNED NOT NULL,
    Ball_02 TINYINT(3) UNSIGNED NOT NULL,
    Ball_03 TINYINT(3) UNSIGNED NOT NULL,
    Ball_04 TINYINT(3) UNSIGNED NOT NULL,
    Ball_05 TINYINT(3) UNSIGNED NOT NULL,
    Ball_06 TINYINT(3) UNSIGNED NOT NULL,
    Ball_07 TINYINT(3) UNSIGNED NOT NULL,
    Ball_08 TINYINT(3) UNSIGNED NOT NULL,
    Ball_09 TINYINT(3) UNSIGNED NOT NULL,
    Ball_10 TINYINT(3) UNSIGNED NOT NULL,
    Ball_11 TINYINT(3) UNSIGNED NOT NULL,
    Ball_12 TINYINT(3) UNSIGNED NOT NULL,
    Ball_13 TINYINT(3) UNSIGNED NOT NULL,
    Ball_14 TINYINT(3) UNSIGNED NOT NULL,
    Ball_15 TINYINT(3) UNSIGNED NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=DYNAMIC;
*Id* = The contest number
*Ball_01* ~ *Ball_15* = Each of the balls drawn in the contest
**Lottery_Cycle**
CREATE TABLE IF NOT EXISTS Lottery_Cycle (
    Id              SMALLINT(5) UNSIGNED NOT NULL,
    Ball_Missing_01 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_02 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_03 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_04 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_05 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_06 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_07 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_08 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_09 TINYINT(1) UNSIGNED DEFAULT NULL,
    Ball_Missing_10 TINYINT(1) UNSIGNED DEFAULT NULL,
    Missing TINYINT(1) UNSIGNED AS (
        (Ball_Missing_01 IS NOT NULL) +
        (Ball_Missing_02 IS NOT NULL) +
        (Ball_Missing_03 IS NOT NULL) +
        (Ball_Missing_04 IS NOT NULL) +
        (Ball_Missing_05 IS NOT NULL) +
        (Ball_Missing_06 IS NOT NULL) +
        (Ball_Missing_07 IS NOT NULL) +
        (Ball_Missing_08 IS NOT NULL) +
        (Ball_Missing_09 IS NOT NULL) +
        (Ball_Missing_10 IS NOT NULL)
    ),
    Contests TINYINT(1) UNSIGNED NOT NULL DEFAULT 1,
    Cycle    SMALLINT(1) UNSIGNED NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=DYNAMIC;
*Id* = The contest number
*Ball_Missing_01* ~ *Ball_Missing_10* = Displays balls not yet drawn in the cycle
*Missing* = It is a computed column that reports the total number of balls not drawn
*Contests* = Informs the number of contests in the current cycle
*Cycle* = Displays the cycle number
**Example use case and desired behavior**
Lottery Competition 1, the following numbers were drawn and entered in the table **Lottery_History**:
01 05 06 08 09 10 13 15 16 17 18 19 20 22 25
Therefore, the Cycle Table must have the following values:
Id = 1
Ball_Missing_01 = 02
Ball_Missing_02 = 03
Ball_Missing_03 = 04
Ball_Missing_04 = 07
Ball_Missing_05 = 11
Ball_Missing_06 = 12
Ball_Missing_07 = 14
Ball_Missing_08 = 21
Ball_Missing_09 = 23
Ball_Missing_10 = 24
Quantity = Automatically computed (will be 10)
Contest = 1
Cycle = 1
Lottery Competition 2, the following numbers were drawn and entered in the table **Lottery_History**:
03 04 05 09 10 11 13 15 16 17 19 20 21 24 25
Therefore, the Cycle Table must have the following values:
Id = 2
Ball_Missing_01 = 02
Ball_Missing_02 = 07
Ball_Missing_03 = 12
Ball_Missing_04 = 14
Ball_Missing_05 = 23
Ball_Missing_06 = NULL
Ball_Missing_07 = NULL
Ball_Missing_08 = NULL
Ball_Missing_09 = NULL
Ball_Missing_10 = NULL
Quantity = Automatically computed (will be 5)
Contest = 2
Cycle = 1
Note that the Ball_Missing_01 ~ Ball_Missing_05 fields refer to the 5 numbers, out of the 25 possible, that have not yet been drawn in either contest 1 or 2.
Full drawn numbers from contest 1 and 2:
01 03 04 05 06 08 09 10 11 13 15 16 17 18 19 20 21 22 24 25
The "Contest" field informs that it is the second competition in Cycle 1 with missing numbers (not yet drawn).
Lottery Competition 3, the following numbers were drawn and entered in the table **Lottery_History**:
02 03 04 05 06 08 09 10 11 12 14 19 20 23 24
Therefore, the Cycle Table must have the following values:
Id = 3
Ball_Missing_01 = 07
Ball_Missing_02 = NULL
Ball_Missing_03 = NULL
Ball_Missing_04 = NULL
Ball_Missing_05 = NULL
Ball_Missing_06 = NULL
Ball_Missing_07 = NULL
Ball_Missing_08 = NULL
Ball_Missing_09 = NULL
Ball_Missing_10 = NULL
Quantity = Automatically computed (will be 1)
Contest = 3
Cycle = 1
Lottery Competition 4, the following numbers were drawn and entered in the table **Lottery_History**:
02 06 07 08 09 10 11 12 16 19 20 22 23 24 25
Therefore, the Cycle Table must have the following values:
Id = 4
Ball_Missing_01 = NULL
Ball_Missing_02 = NULL
Ball_Missing_03 = NULL
Ball_Missing_04 = NULL
Ball_Missing_05 = NULL
Ball_Missing_06 = NULL
Ball_Missing_07 = NULL
Ball_Missing_08 = NULL
Ball_Missing_09 = NULL
Ball_Missing_10 = NULL
Quantity = Automatically computed (will be zero)
Contest = 4
Cycle = 1
At this point, a Cycle is completed because in the 4 contests informed, all numbers from 1 to 25 were drawn, and a new cycle begins.
Lottery Competition 5, the following numbers were drawn and entered in the table **Lottery_History**:
01 02 04 05 07 08 09 10 11 12 14 16 17 24 25
Therefore, the Cycle Table must have the following values:
Id = 5
Ball_Missing_01 = 03
Ball_Missing_02 = 06
Ball_Missing_03 = 13
Ball_Missing_04 = 15
Ball_Missing_05 = 18
Ball_Missing_06 = 19
Ball_Missing_07 = 20
Ball_Missing_08 = 21
Ball_Missing_09 = 22
Ball_Missing_10 = 23
Quantity = Automatically computed (will be ten)
Contest = 1
Cycle = 2
Note that now the **Cycle** field receives the value of 2 because it is a new cycle and **Contest** field is 1 because its the first Contest from this new cycle.
And this process must continue whenever a new insertion occurs in the table **Lottery_History**.
Triggers I have made:
1 - To determine the value of the **Missing** field:
DROP TRIGGER IF EXISTS trg_BI_Lottery_Cycle_Missing;
DELIMITER $$
CREATE TRIGGER trg_BI_Lottery_Cycle
BEFORE INSERT ON Lottery_Cycle
FOR EACH ROW
BEGIN
    IF NEW.Missing > 0 THEN
        SET NEW.Missing = (SELECT MAX(Contests) + 1 FROM Lottery_Cycle);
    ELSE
        SET NEW.Missing = 1;
    END IF;
END
$$
DELIMITER ;
2 - To determine the value of the **Cycle** field:
DROP TRIGGER IF EXISTS trg_BI_Lottery_Cycle_Cycle;
DELIMITER $$
CREATE TRIGGER trg_BI_Lottery_Cycle_Cycle
BEFORE INSERT ON Lottery_Cycle
FOR EACH ROW
BEGIN
    SET NEW.Cicle = (SELECT MAX(Cycle IS NOT NULL) + 1 FROM Lottery_Cycle);
END
$$
DELIMITER ;
3 - Where I'm stuck!
DROP TRIGGER IF EXISTS trg_AI_Lottery_Cycle_Update_Lottery_Cycle;
DELIMITER $$
CREATE TRIGGER trg_AI_Lottery_Cycle_Update_Lottery_Cycle
AFTER INSERT ON Lottery_History
FOR EACH ROW
BEGIN
DECLARE Contest_Id TINYINT DEFAULT 1;
DECLARE Current_Cycle TINYINT DEFAULT 1;
DECLARE Contests_Total TINYINT DEFAULT 1; ????
DECLARE Ball_Missing_Insert_01 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_02 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_03 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_04 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_05 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_06 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_07 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_08 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_09 TINYINT DEFAULT NULL;
DECLARE Ball_Missing_Insert_10 TINYINT DEFAULT NULL;
-- Check if table is empty
IF (SELECT COUNT(*) FROM Lottery_Cycle WHERE Contests IS NULL) = 0 THEN
    SET Contest_Id = 1;
ELSE
SET Contest_Id = (SELECT MAX(Id) FROM Lottery_History);
END IF;
-- Get currently Cycle
SET Current_Cycle = (SELECT MAX(Cycle) FROM Lottery_Cycle);
-- Create an temporary table for simulate an array
CREATE TEMPORARY TABLE IF NOT EXISTS Lottery_Cycle_Insert_Temp (
Number TINYINT(1) UNSIGNED NOT NULL
);
-- Insert all numbers (1 to 25) into the temporary table
INSERT INTO Lottery_Cycle_Insert_Temp (Number)
VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10),
(11), (12), (13), (14), (15), (16), (17), (18), (19), (20),
(21), (22), (23), (24), (25);
-- Removing the 15 drawn balls from the temporary table
DELETE FROM Lottery_Cycle_Insert_Temp
WHERE Number IN (NEW.Ball_01, NEW.Ball_02, NEW.Ball_03, NEW.Ball_04, NEW.Ball_05,
NEW.Ball_06, NEW.Ball_07, NEW.Ball_08, NEW.Ball_09, NEW.Ball_10,
NEW.Ball_11, NEW.Ball_12, NEW.Ball_13, NEW.Ball_14, NEW.Ball_15);
-- Removing numbers already drawn in previous contests of the same
-- Cycle of the Lottery_Cycle table
DELETE FROM Lottery_Cycle_Insert_Temp
WHERE Number IN (
SELECT DISTINCT
IF(LC.Ball_Missing_01 IS NULL, NULL, LC.Ball_Missing_01),
IF(LC.Ball_Missing_02 IS NULL, NULL, LC.Ball_Missing_02),
IF(LC.Ball_Missing_03 IS NULL, NULL, LC.Ball_Missing_03),
IF(LC.Ball_Missing_04 IS NULL, NULL, LC.Ball_Missing_04),
IF(LC.Ball_Missing_05 IS NULL, NULL, LC.Ball_Missing_05),
IF(LC.Ball_Missing_06 IS NULL, NULL, LC.Ball_Missing_06),
IF(LC.Ball_Missing_07 IS NULL, NULL, LC.Ball_Missing_07),
IF(LC.Ball_Missing_08 IS NULL, NULL, LC.Ball_Missing_08),
IF(LC.Ball_Missing_09 IS NULL, NULL, LC.Ball_Missing_09),
IF(LC.Ball_Missing_10 IS NULL, NULL, LC.Ball_Missing_10)
FROM Lottery_Cycle LC
WHERE LC.Cycle = Current_Cycle
);
-- Storing remaining numbers in variables for Insert
SET Ball_Missing_Insert_01 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_01;
SET Ball_Missing_Insert_02 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_02;
SET Ball_Missing_Insert_03 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_03;
SET Ball_Missing_Insert_04 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_04;
SET Ball_Missing_Insert_05 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_05;
SET Ball_Missing_Insert_06 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_06;
SET Ball_Missing_Insert_07 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_07;
SET Ball_Missing_Insert_08 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_08;
SET Ball_Missing_Insert_09 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_09;
SET Ball_Missing_Insert_10 = (SELECT Number FROM Lottery_Cycle_Insert_Temp ORDER BY Number DESC LIMIT 1);
DELETE FROM Lottery_Cycle_Insert_Temp WHERE Number = Ball_Missing_Insert_10;
-- Configuring the value of the Current Cycle Competitions field
SET Contests_Total = (SELECT MAX(Contests IS NOT NULL) +1 FROM Lottery_Cycle);
-- If new Cycle Begins
IF Ball_Missing_Insert_01 IS NULL AND
Ball_Missing_Insert_02 IS NULL AND
Ball_Missing_Insert_03 IS NULL AND
Ball_Missing_Insert_04 IS NULL AND
Ball_Missing_Insert_05 IS NULL AND
Ball_Missing_Insert_06 IS NULL AND
Ball_Missing_Insert_07 IS NULL AND
Ball_Missing_Insert_08 IS NULL AND
Ball_Missing_Insert_09 IS NULL AND
Ball_Missing_Insert_10 IS NULL THEN
SET Current_Cycle = Current_Cycle + 1;
END IF;
-- Insert remaining missing numbers into Lottery_Cycle
INSERT INTO Lottery_Cycle
(
    Id,
    Ball_Missing_01,
    Ball_Missing_02,
    Ball_Missing_03,
    Ball_Missing_04,
    Ball_Missing_05,
    Ball_Missing_06,
    Ball_Missing_07,
    Ball_Missing_08,
    Ball_Missing_09,
    Ball_Missing_10,
    Contests,
    Cycle
) VALUES (
Contest_Id,
Ball_Missing_Insert_01,
Ball_Missing_Insert_02,
Ball_Missing_Insert_03,
Ball_Missing_Insert_04,
Ball_Missing_Insert_05,
Ball_Missing_Insert_06,
Ball_Missing_Insert_07,
Ball_Missing_Insert_08,
Ball_Missing_Insert_09,
Ball_Missing_Insert_10,
Contests_Total,
Current_Cycle
);
-- Drop the temporary table
DROP TEMPORARY TABLE IF EXISTS Lottery_Cycle_Insert_Temp;
END
$$
DELIMITER ;
I'm stuck in trial and error. Sometimes I get:
(conn=17034) Column 'Cycle' cannot be null
Other times I get:
(conn=17049) Operand should contain 1 column(s)
and other errors.
Can someone point out what I'm doing wrong?
Thanks in advance!
Wisdom
(11 rep)
May 13, 2024, 11:19 PM
• Last activity: May 14, 2024, 09:55 PM
0
votes
1
answers
51
views
Add Flag for Missing Related Record
| PersonID | Name | IncidentID| Role | MissingOffender|
| -- | ----- | -- | ------- | ---- |
| 1 | Tom | 20 |Victim | |
| 2 | Steve | 20 |Offender | |
| 3 | Susan | 20 |Offender | |
| 4 | John | 21 |Offender | |
| 5 | Bill | 21 |Victim | |
| 6 | Larry | 22 |Victim | Y |
| 7 | Bert | 23 |Offender | |
I have a table that contains data for various behaviors and the people involved in those incidents. Incidents can have an offender only, but if there's a victim, there needs to be an offender.
I'd like to add a column to my query (MissingOffender) that can tell me if a victim is "orphaned", meaning there is no offender associated with the same IncidentID.
Is this possible? I'm not sure what this type of thing is called to google, so apologies in advance, but thank you for any assistance.
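This is sometimes called an anti-join check; one sketch (assuming a table name of dbo.IncidentPeople, which the question doesn't state) uses NOT EXISTS inside a CASE:
SELECT p.PersonID, p.Name, p.IncidentID, p.Role,
       CASE WHEN p.Role = 'Victim'
             AND NOT EXISTS (SELECT 1
                             FROM dbo.IncidentPeople AS o
                             WHERE o.IncidentID = p.IncidentID
                               AND o.Role = 'Offender')
            THEN 'Y'
       END AS MissingOffender
FROM dbo.IncidentPeople AS p;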
theonlynewts
(3 rep)
Mar 7, 2024, 04:53 PM
• Last activity: Mar 7, 2024, 08:35 PM
0
votes
1
answers
121
views
JSON_Value problem when adding persisted Computed column
I want to add a persisted computed column and I get this error:
Msg 8116, Level 16, State 8, Line 3
Argument data type NULL is invalid for argument 2 of JSON_QUERY function.
Here is my script:
ALTER TABLE [ProviderDataSync].[Messages]
ADD [account_id1] AS (json_value(json_query([Message],'$.'+json_value([Message],'$.requestType')),'$.accountId')) PERSISTED
I understand that it originates from the JSON_QUERY, as I get the same error when running
SELECT top 2000
json_query([Message],'$.'+json_value([Message],'$.requestType'))
FROM [ProviderDataSync].[Messages]
It stops on row 1352. I tried to work out whether it's some irregular data, but I failed to see it.
Here are rows 1351 and 1352:
line 1351 : {"sourceId":39,"version":"1.2","apiId":"EA_44657857e4b5a","requestType":["cardStatusChange"],"sendingRetriesCount":0,"timestamp":1598441115,"cardAuthorization":null,"cardSettlement":null,"cardStatusChange":{"cardId":15614,"newStatus":"A","previousStatus":"N","initiatedBy":"api"},"externalPaymentSettlement":null,"directDebitMandate":null,"directDebitDue":null,"directCreditReceived":null,"financialDetailAddendum":null,"negativeAcknowledgement":null,"cardTokenization":null,"tokenUpdates":null,"epmAddressAssignCompleted":null,"epmAddressAssignFailed":null,"customNotification":null,"externalPaymentNotification":null}
line 1352:
{"sourceId":1,"version":1,"apiId":"1","tribeRequestType":["cardAuthorization"],"sendingRetriesCount":0,"cardAuthorization":{"cardId":123456,"accountId":12119,"availableBalance":99996157,"settledBalance":100000000,"transLink":"883244000000MCCA00021000021200602123493","transactionStatusCode":"A","statusCode":"00","responseCode":"00","entryModeType":0,"transactionType":2,"authorizeType":"AIM","transactionAmount":-22,"transactionCurrencyIson":"840","billingAmount":-22,"billingCurrencyIson":"840","holderAmount":-22,"holderCurrencyIson":"840","totalHolderAmount":-100,"currencyConvertionFee":0,"mti":"100","productId":1,"transactionCountryIson":"076","acquirerId":"019780","additionalAmount":{"accountType1":"00","currencyCode1":"826","creditDebitIndicator1":"D","amount1":"000000001000","accountType2":"00","currencyCode2":"826","creditDebitIndicator2":"D","amount2":"000000001000","original":"*****","raw":"*****"},"txAmountFee":0,"authCode":"883244","mccCode":"1520","merchantId":"SHP000000000017","merchantName":"mer11**AFS34373 London GBR","posDataCode":"810","posData":"1025104006000076902101234","posTerminalId":"pa1bra","posTime":"200602132821","procCode":"000000","retrievalReferenceNumber":"060200000021","traceLogId":15911045028444,"transactionTime":1598441359,"additionalData1":"T3206123456610510000371502110000012345642070103210","authorizedByStandIn":"Y","cardUsageGroup":1,"institutionCode":15907405717567,"partialApproval":1,"suspicious":0,"notifyHolder":0,"riskActions":{"markTransactionAsSuspicious":0,"notifyCardholderBySendingTaisNotification":0,"changeCardStatusToRisk":0,"changeAccountStatusToSuspended":0,"rejectTransaction":0},"securityChecks":{"cardExpirationDatePresent":0,"onlinePin":0,"offlinePin":0,"cvv2":0,"magneticStripe":1,"chipData":0,"avs":0,"phoneNumber":0},"validDateTo":"\/Date(1640988000000-0000)\/"},"cardSettlement":0,"cardStatusChange":0,"externalPaymentSettlement":0,"directDebitMandate":0,"directDebitDue":0,"directCreditReceived":0,"negativeAcknowledgement":0,"financialDetailAddendum":0,"cardTokenization":0,"tokenUpdates":0,"epmAddressAssignCompleted":0,"epmAddressAssignFailed":0,"customNotification":0}
How do I add this persisted column?
What am I doing wrong?
Is it a problem with the data or the script?
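If the failure comes from rows where the computed JSON path ends up NULL, one hedged sketch (same table as above; an illustration, not a confirmed fix for this data) is to guard the path with a CASE so JSON_QUERY never receives a NULL second argument:
ALTER TABLE [ProviderDataSync].[Messages]
ADD [account_id1] AS (
    CASE WHEN json_value([Message], '$.requestType') IS NOT NULL
         THEN json_value(
                  json_query([Message], '$.' + json_value([Message], '$.requestType')),
                  '$.accountId')
    END) PERSISTED;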
dexon
(65 rep)
Jan 30, 2024, 05:18 PM
• Last activity: Feb 11, 2024, 11:03 AM
4
votes
1
answers
403
views
Unique index on computed column when contributing columns are already unique
I have a table like this:
CREATE TABLE
dbo.DiscountBarcodeList
(
ID int NOT NULL IDENTITY
CONSTRAINT PK_DiscountBarcodeList
PRIMARY KEY CLUSTERED
, Discount int NOT NULL
CONSTRAINT FK_DiscountBarcodeList_Discount
FOREIGN KEY REFERENCES dbo.Discount (ID)
, CodeNumber int NOT NULL
, AssignedEntity int NULL
CONSTRAINT FK_DiscountBarcodeList_AssignedEntity
FOREIGN KEY REFERENCES dbo.Entity (ID)
ON DELETE SET NULL
, BarcodeID AS
CONVERT(
char(10)
, CAST(Discount AS binary(2)) + CAST(CodeNumber AS binary(3))
, 2)
, CONSTRAINT UQ_DiscountBarcodeList_DiscountCodeNumber
UNIQUE NONCLUSTERED
(
Discount ASC
, CodeNumber ASC
)
);
The table will hold CodeNumbers allocated for Discounts beforehand and assigned to Entitys on demand. Though not enforced on the database level, the values of Discount and CodeNumber will be limited to 2 bytes and 3 bytes respectively. Those limitations, together with the unique constraint on (Discount, CodeNumber), will effectively make the generated BarcodeID values unique as well.
The way this table is supposed to be used is: the application will be passing a @BarcodeID to look up an @Entity to assign to it. If the lookup is successful, then either the row's Entity will be set to @Entity, or the application will be notified that the @BarcodeID is already taken, something along these lines:
BEGIN TRANSACTION;
UPDATE dbo.DiscountBarcodeList
SET
@OldEntity = Entity
, Entity = @Entity
WHERE
BarcodeID = @BarcodeID
;
IF @OldEntity IS NULL
BEGIN
COMMIT TRANSACTION;
... /* report success */
END
ELSE
BEGIN
ROLLBACK TRANSACTION;
... /* report failure */
END;
Now I would like to make BarcodeID sargable. Since I know that the column will only have unique values, I am considering making the index unique, as I think that can make my lookup more efficient. On the other hand, I am concerned that the generated values will have to be checked for uniqueness, which is redundant here since the uniqueness is already guaranteed.
Is it possible to somehow tell whether the benefits, if any, of a unique index on a computed column are going to outweigh the probable overhead of the (unnecessary in this case) uniqueness check? Or at least is it possible to determine that for a scenario like mine? Or am I just overthinking this?
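For reference, the two indexes being weighed would look like this (index names are illustrative):
CREATE UNIQUE NONCLUSTERED INDEX UQ_DiscountBarcodeList_BarcodeID
    ON dbo.DiscountBarcodeList (BarcodeID);
-- versus
CREATE NONCLUSTERED INDEX IX_DiscountBarcodeList_BarcodeID
    ON dbo.DiscountBarcodeList (BarcodeID);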
Andriy M
(23261 rep)
Feb 9, 2024, 04:59 AM
• Last activity: Feb 9, 2024, 05:29 AM
19
votes
4
answers
25137
views
How to change definition of a Postgres GENERATED (stored) column
How to change definition of a Postgres 12 GENERATED column?
CREATE TABLE test (
id serial PRIMARY KEY,
val_a int4 NOT NULL DEFAULT 0,
val_b int4 NOT NULL DEFAULT 0,
val_sum int4 generated always AS (val_a + val_b) stored
);
I tried
ALTER TABLE test ALTER COLUMN val_sum TYPE int4 generated always AS (val_a + val_b + 1) stored;
ALTER TABLE test ALTER COLUMN val_sum SET generated always AS (val_a + val_b + 1) stored;
but both give syntax errors.
The alternative is to drop & add column, which works, but I wonder how to simply change it like any other column.
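For completeness, the drop-and-re-add route mentioned above, spelled out (note this rewrites the column's values):
ALTER TABLE test DROP COLUMN val_sum;
ALTER TABLE test
    ADD COLUMN val_sum int4 GENERATED ALWAYS AS (val_a + val_b + 1) STORED;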
Marius
(293 rep)
Oct 11, 2019, 04:16 PM
• Last activity: Jan 22, 2024, 10:30 AM
0
votes
1
answers
69
views
Help inconsistent index result on generated column after server restart
I have a table with a date column, and I need data grouped by week. I found a YouTube video from PlanetScale, Faster database indexes (straight from the docs), that used an indexed generated column.
Then I modified my table to look like this.
mysql> show create table tbl_bought_code\G;
*************************** 1. row ***************************
       Table: tbl_bought_code
Create Table: CREATE TABLE `tbl_bought_code` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int unsigned NOT NULL,
  `package_code` char(8) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci NOT NULL,
  `package_type` tinyint NOT NULL,
  `code_price` int NOT NULL,
  `bought_on` date NOT NULL,
  `bought_on_week` int GENERATED ALWAYS AS (yearweek(`bought_on`,0)) VIRTUAL, -- this column was added
  PRIMARY KEY (`id`),
  UNIQUE KEY `package_code` (`package_code`),
  KEY `user_id` (`user_id`),
  KEY `bought_on` (`bought_on`), -- this index is added
  KEY `bought_on_week_index` (`bought_on_week`), -- this index is also added
  CONSTRAINT `tbl_bought_code_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `tbl_users` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=47348 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
Then I did this explain query
explain
select sum(code_price), bought_on_week
from tbl_bought_code
where bought_on between DATE_FORMAT(DATE_ADD(NOW(), INTERVAL -6 MONTH),'%Y-%m-01') and NOW()
group by bought_on_week;
And the result is very promising.
+----+-------------+-----------------+------------+-------+--------------------------------+-----------+---------+------+------+----------+----------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------------+------------+-------+--------------------------------+-----------+---------+------+------+----------+----------------------------------------+
| 1 | SIMPLE | tbl_bought_code | NULL | range | bought_on,bought_on_week_index | bought_on | 3 | NULL | 6962 | 100.00 | Using index condition; Using temporary |
+----+-------------+-----------------+------------+-------+--------------------------------+-----------+---------+------+------+----------+----------------------------------------+
But when I **restart** the server and re-execute the explain query, I get this.
+----+-------------+-----------------+------------+-------+--------------------------------+----------------------+---------+------+-------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------------+------------+-------+--------------------------------+----------------------+---------+------+-------+----------+-------------+
| 1 | SIMPLE | tbl_bought_code | NULL | index | bought_on,bought_on_week_index | bought_on_week_index | 5 | NULL | 46274 | 15.05 | Using where |
+----+-------------+-----------------+------------+-------+--------------------------------+----------------------+---------+------+-------+----------+-------------+
Why is the explain result different after restarting the server? Why does it use the bought_on_week_index index instead of the bought_on index? The first explain query estimates only 6962 rows while the second estimates 46274; it looks like it does a full scan.
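One hedged first check when a plan flips after a restart (a diagnostic step, not a diagnosis): refresh the table's optimizer statistics and re-run the EXPLAIN to see whether the row estimates, and the chosen index, change:
ANALYZE TABLE tbl_bought_code;

EXPLAIN
SELECT SUM(code_price), bought_on_week
FROM tbl_bought_code
WHERE bought_on BETWEEN DATE_FORMAT(DATE_ADD(NOW(), INTERVAL -6 MONTH), '%Y-%m-01') AND NOW()
GROUP BY bought_on_week;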
zer09
(463 rep)
Jan 11, 2024, 02:59 AM
• Last activity: Jan 11, 2024, 10:26 PM
0
votes
1
answers
55
views
counting related records
I have 2 related models, model_A and model_B. There's a 1:n relationship between them, meaning one record in model_A can have several related records in model_B.
I'd like model_A to have a generated (and stored) column. Such a generated column should be boolean: it should be set to true if there are 1 or more related records in model_B, and false if there are 0 related records in model_B.
How do I write the expression to generate such boolean values?
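A note on feasibility, plus a sketch: in both PostgreSQL and MySQL, a generated column's expression may only reference columns of the same row, so a flag that depends on another table is usually a view (or a trigger-maintained ordinary column) instead. A view sketch with hypothetical key names id and a_id:
CREATE VIEW model_a_with_flag AS
SELECT a.*,
       EXISTS (SELECT 1 FROM model_B b WHERE b.a_id = a.id) AS has_related
FROM model_A a;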
user1632812
(117 rep)
Oct 27, 2023, 07:58 AM
• Last activity: Oct 27, 2023, 08:25 AM
4
votes
2
answers
3240
views
Is it possible to add a GENERATED COLUMN to a large table without locking/downtime on postgres?
I am trying to add a new column as a generated column to a large table (millions of rows). This generated column takes the id column, applies a custom PostgreSQL function, and returns the result. I think generated columns require a rewrite of the whole table, so it does not matter whether you create the column as NULL or NOT NULL. I have thought about creating a new nullable column with a default and then adding the GENERATED ALWAYS AS expression, but that would require dropping the whole column and recreating it as a generated column. Is there any way to do this without locking the whole table?
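For contrast, the usual low-lock pattern gives up the true generated column: add a plain column (metadata-only when it has no default), keep it current with a trigger, and backfill in batches. A sketch with hypothetical names big_table and my_custom_fn (trigger syntax is Postgres 11+):
ALTER TABLE big_table ADD COLUMN derived_val text;  -- no table rewrite

CREATE FUNCTION set_derived_val() RETURNS trigger AS $$
BEGIN
    NEW.derived_val := my_custom_fn(NEW.id);  -- the custom function from the question
    RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER big_table_derived
BEFORE INSERT OR UPDATE ON big_table
FOR EACH ROW EXECUTE FUNCTION set_derived_val();

-- Backfill existing rows in small ranges to keep locks short:
UPDATE big_table SET derived_val = my_custom_fn(id)
WHERE id >= 1 AND id < 10001;  -- repeat for each id range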
Hyperion
(41 rep)
Mar 3, 2023, 06:22 PM
• Last activity: Oct 19, 2023, 04:08 AM
1
votes
0
answers
88
views
How to add a calculated field to a Microsoft Access query that equals the minimum value of the previous column field
I have a simple query on a very complex query that reads:
SELECT Query1.Rdate, Query1.Rank, Query1.Score
FROM Query1;
It returns:
| RDate | Rank | Score |
| ----- | ---- | ----- |
|2/1/2022|5|0.9|
|2/1/2022|3|0.6|
|2/1/2022|1|0.3|
|2/1/2022|2|0.8|
|2/1/2022|4|0.2|
I need a second query that uses this query to return:
| RDate | Rank | Score | MinScore |
| ----- | ---- | ----- | -------- |
|2/1/2022|5|0.9|0.2|
|2/1/2022|3|0.6|0.2|
|2/1/2022|1|0.3|0.2|
|2/1/2022|2|0.8|0.2|
|2/1/2022|4|0.2|0.2|
where the new MinScore column holds 0.2, the minimum value of the Score column.
I know I can write a query that will return a single column of the value 0.2 and then ANOTHER QUERY to combine that column with my original query. But I have over 200 original queries. I would then end up with over 600 queries. I don't want to go down that path if I can avoid it.
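One way that avoids extra saved queries (a sketch in Access SQL; a scalar subquery works here, and the domain function DMin("Score","Query1") is another option) is to compute the minimum inline:
SELECT Query1.Rdate, Query1.Rank, Query1.Score,
       (SELECT MIN(Q2.Score) FROM Query1 AS Q2) AS MinScore
FROM Query1;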
HBHAYWOOD
(11 rep)
Mar 29, 2023, 11:23 PM
• Last activity: Oct 16, 2023, 09:35 PM
0
votes
0
answers
58
views
How to Count calculated work hours in one month for each labors
Good day.
I have some data like below:
id name in out
1 lala 2023-09-29 07:00:00 2023-09-29 17:00:00
2 lili 2023-09-29 07:30:00 2023-09-29 16:00:00
1 lala 2023-09-30 07:00:00 2023-09-30 17:30:00
2 lili 2023-09-30 07:00:00 2023-09-30 16:00:00
How do I calculate total work hours across different days from the example records?
The result I want is like below:
id name total_hours
1 lala 20.5
2 lili 17.5
I also tried to use this query:
SELECT global_id as id,
beacon_name,IFNULL(MIN(case when in_out_flag = 'IN' then received_date end),"") as in,
IFNULL(MAX(case when in_out_flag = 'OUT' then received_date end),"") as out
WHERE (date(received_date)
BETWEEN '2023-07-01' and '2023-07-05')
group by global_id ,date(received_date);
but I only get records per day; I want the total work hours for the whole month.
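A sketch of the per-month total, assuming MySQL and a hypothetical table name attendance with the columns shown in the first example (in and out are reserved words, hence the backticks):
SELECT id, name,
       ROUND(SUM(TIMESTAMPDIFF(MINUTE, `in`, `out`)) / 60, 1) AS total_hours
FROM attendance
WHERE `in` >= '2023-09-01' AND `in` < '2023-10-01'
GROUP BY id, name;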
Thank You.
Regards.
epm.007
(29 rep)
Sep 30, 2023, 04:23 AM
• Last activity: Sep 30, 2023, 09:59 AM
0
votes
1
answers
153
views
Why cannot I create a generated column with a CASE expression?
I'm trying to add a generated column to a table in V7R3 of IBM DB2:
ALTER TABLE MYLIB.MYTABLE
ADD COLUMN NEW_COL VARCHAR(255) GENERATED ALWAYS AS (
COALESCE(
CASE
WHEN ENV = 'A' THEN (SELECT GUIDE_URL FROM MYLIBA.PROGRAM WHERE CODE = PGM)
WHEN ENV = 'B' THEN (SELECT GUIDE_URL FROM MYLIBB.PROGRAM WHERE CODE = PGM)
ELSE (SELECT GUIDE_URL FROM MYLIBC.PROGRAM WHERE CODE = PGM)
END
, URL
)
)
But it's giving the following error:
> SQL State: 42601 Vendor Code: -199 Message: [SQL0199] Keyword WHEN not expected. Valid tokens: . ACCTNG USERID APPLNAME PROGRAMID WRKSTNNAME. Cause . . . . . : The keyword WHEN was not expected here. A syntax error was detected at keyword WHEN. The partial list of valid tokens is . ACCTNG USERID APPLNAME PROGRAMID WRKSTNNAME. This list assumes that the statement is correct up to the unexpected keyword. The error may be earlier in the statement but the syntax of the statement seems to be valid up to this point. Recovery . . . : Examine the SQL statement in the area of the specified keyword. A colon or SQL delimiter may be missing. SQL requires reserved words to be delimited when they are used as a name. Correct the SQL statement and try the request again.
Am I doing something wrong? Or is it just impossible to create such a generated column in 7.3? Would upgrading to 7.5 help?
I tried to find specifics on which generated-column expressions are allowed in which versions, but couldn't find anything in the docs.
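A hedged note with a sketch: generation expressions in Db2 generally may not contain subqueries (scalar fullselects), so even with adjusted syntax this expression would likely be rejected; a view is one subquery-friendly alternative (names as in the question, view name illustrative):
CREATE VIEW MYLIB.MYTABLE_V AS
SELECT T.*,
       COALESCE(
           CASE
               WHEN T.ENV = 'A' THEN (SELECT GUIDE_URL FROM MYLIBA.PROGRAM WHERE CODE = T.PGM)
               WHEN T.ENV = 'B' THEN (SELECT GUIDE_URL FROM MYLIBB.PROGRAM WHERE CODE = T.PGM)
               ELSE (SELECT GUIDE_URL FROM MYLIBC.PROGRAM WHERE CODE = T.PGM)
           END
       , T.URL) AS NEW_COL
FROM MYLIB.MYTABLE T;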
Sarov
(281 rep)
Sep 6, 2023, 08:18 PM
• Last activity: Sep 6, 2023, 08:31 PM
Showing page 1 of 20 total questions