Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
158
views
Is UPDATE FROM still not in the ANSI standard?
[This article](https://sqlserverfast.com/blog/hugo/2008/03/lets-deprecate-update-from/) fascinated me: it claims that FROM clauses in UPDATE are not part of the ANSI standard. Unfortunately, it is very old and I don't care to buy a copy of the latest ANSI standard for SQL. Is it still the case that FROM clauses in UPDATE are not part of the ANSI standard?
J. Mini
(1237 rep)
Nov 11, 2023, 09:57 PM
• Last activity: Jul 21, 2025, 11:02 AM
1
votes
1
answers
250
views
MySQL 1054 Error After Creating Trigger to Update on Insert
I am learning basic SQL and have gotten stuck with triggers. I have a database with two tables, customer and invoice. The table characteristics follow:
CREATE TABLE customer (
CUST_NUM VARCHAR(8),
CUST_LNAME VARCHAR(25),
CUST_FNAME VARCHAR(25),
CUST_BALANCE DECIMAL(9,2),
PRIMARY KEY (CUST_NUM)
);
CREATE TABLE invoice (
INV_NUM VARCHAR(10),
CUST_NUM VARCHAR(8) NOT NULL,
INV_DATE DATE,
INV_AMOUNT DECIMAL(9,2),
PRIMARY KEY (INV_NUM),
FOREIGN KEY (CUST_NUM) REFERENCES customer (CUST_NUM)
);
I am trying to create a trigger that will update the customer.CUST_BALANCE value after an insert in the invoice table.
I am testing the trigger by entering the values ('8005', '1001', '2018-04-27', '225.40') into invoice.
This script:
DELIMITER $$
CREATE TRIGGER trg_updatecustbalance
AFTER INSERT ON invoice
FOR EACH ROW
BEGIN
UPDATE customer
SET customer.CUST_BALANCE=customer.CUST_BALANCE+NEW.invoice.INV_AMOUNT
WHERE customer.CUST_NUM=NEW.invoice.CUST_NUM;
END;
$$
DELIMITER ;
Yields Error Code: 1054. Unknown column 'new.invoice.CUST_NUM'
The script:
DELIMITER $$
CREATE TRIGGER trg_updatecustbalance
AFTER INSERT
ON invoice FOR EACH ROW
BEGIN
IF invoice.CUST_NUM=customer.CUST_NUM THEN
INSERT INTO customer
VALUES (customer.CUST_BALANCE = customer.CUST_BALANCE + NEW.invoice.INV_AMOUNT);
END IF;
END$$
DELIMITER ;
Yields Error Code: 1109. Unknown table 'invoice' in field list.
I am looking for assistance in resolving this. Thanks!
**ETA:** Edited to provide script for the tables and clarify script and errors of previous attempts.
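For reference, a minimal sketch of the usual fix is shown below (not from the original post): inside a MySQL trigger, the inserted row's columns are referenced as NEW.column_name, without the table name.

```sql
-- Sketch: reference the inserted row as NEW.<column>, not NEW.invoice.<column>.
DELIMITER $$
CREATE TRIGGER trg_updatecustbalance
AFTER INSERT ON invoice
FOR EACH ROW
BEGIN
    UPDATE customer
    SET    CUST_BALANCE = CUST_BALANCE + NEW.INV_AMOUNT
    WHERE  CUST_NUM = NEW.CUST_NUM;
END$$
DELIMITER ;
```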
Kegan Smith
(11 rep)
Apr 6, 2023, 03:38 PM
• Last activity: Jun 22, 2025, 02:03 PM
0
votes
1
answers
228
views
MySQL - Fuzzy Match Text on Each Side of Newlines
I am working on an issue where I need to check a street field for exact matches at a minimum, but also, if at all possible, fuzzy matches. For example, if I have the address 1234 Some Road, it should at least direct-match to 1234 Some Road on the other side of the newline. I would also like to match 999 Somewhere Lane and 999 Somewhere Ln. Obviously, addresses won't have identical lengths, so it has to have some flexibility, and there shouldn't be any more than two rows since the website this is on caps it at Address 1 and 2.
select entity_id, parent_id, customer_address_id, quote_address_id, concat(firstname, ' ', lastname) as 'name', street, city, region, postcode from sales_order_address where (street regexp "\n" or street regexp "\r" or street regexp "\r\n");
This query found all rows with any type of newline but as stated above, I need to match the newlines as though they're unique 'rows'.
This comment - https://stackoverflow.com/a/50490998/18582667 - seems to imply this kind of matching is possible, but I haven't found a definitive answer to that anywhere.
Even if I can't fuzzy match, a direct match will be fine and will help me locate street fields with an issue. I'm also fine with a roundabout way of matching so it doesn't need to be super concise.
What are my options, if any?
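One possible starting point for the exact-match case (not from the original post, and assuming the street field holds exactly two lines separated by \n) is to split on the newline with SUBSTRING_INDEX and compare the two halves:

```sql
-- Sketch: compare the text before the first newline with the text after the
-- last newline. TRIM removes surrounding spaces; stray \r characters could be
-- stripped first with REPLACE(street, '\r', '') if needed.
SELECT entity_id, street
FROM   sales_order_address
WHERE  street LIKE '%\n%'
  AND  TRIM(SUBSTRING_INDEX(street, '\n', 1)) =
       TRIM(SUBSTRING_INDEX(street, '\n', -1));
```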
Marisa
(23 rep)
May 18, 2023, 03:32 PM
• Last activity: Jun 10, 2025, 04:08 AM
1
votes
2
answers
63
views
Sql Server: Query assistance to retrieve count in two different groupings within a dataset
We have a dataset with "visits & events" with their "frequency". A script that creates the table and sample data is shown further down.
From this dataset, we want to capture some aggregate metrics.
For example:
1. Number of times we see **ItemValue** (M) for **FacilityID** (1).
- Answer is 3
2. Number of times we see **ItemID** (1) for **FacilityID** (1).
- Answer is 6
3. Number of **UniqueVisitID**'s for a given **ItemValue** (M) & **FacilityID** (1)
- Answer is 3
4. Number of **UniqueVisitID**'s for a given **ItemID** (1) & **FacilityID** (1)
- Answer is 6
5. Number of **UniqueVisitID**'s in the current dataset
- Answer is 21
Below is the SQL query that could answer some of my questions (from 1 to 3), but for questions 4 & 5, I need some mechanism to aggregate the numbers within the same SQL query.
**My progress so far:**
Using CTE and an extra query I can get answers for bullet points 4 & 5. See the query below:
**Current Solution (multi step)**
DROP TABLE IF EXISTS dbo.EventMetrics
GO
--
-- Create table with sample dataset
--
CREATE TABLE dbo.EventMetrics
(
EventMetricsID INT NOT NULL IDENTITY(1,1)
, EventDate INT NOT NULL
, FacilityID INT NOT NULL
, UniqueVisitID VARCHAR(25) NOT NULL
, ItemID INT NOT NULL
, ItemValue VARCHAR(10) NOT NULL
, ItemFrequency INT NOT NULL
)
--
-- Insert some sample dataset
--
INSERT INTO dbo.EventMetrics (EventDate, FacilityID, UniqueVisitID, ItemID, ItemValue, ItemFrequency)
VALUES (20250101, 1, '2025-01-01_1_100', 1, 'M', 1)
, (20250101, 1, '2025-01-01_1_101', 1, 'M', 1)
, (20250101, 1, '2025-01-01_1_102', 1, 'M', 1)
, (20250101, 1, '2025-01-01_1_103', 1, 'F', 1)
, (20250101, 1, '2025-01-01_1_104', 1, 'F', 1)
, (20250101, 1, '2025-01-01_1_105', 1, 'F', 1)
, (20250101, 1, '2025-01-01_1_106', 2, 'Boston', 1)
, (20250101, 1, '2025-01-01_1_107', 2, 'Boston', 1)
, (20250101, 1, '2025-01-01_1_108', 2, 'Boston', 1)
, (20250101, 1, '2025-01-01_1_109', 2, 'Tampa', 1)
, (20250101, 1, '2025-01-01_1_110', 2, 'London', 1)
, (20250101, 2, '2025-01-01_2_111', 1, 'M', 1)
, (20250101, 2, '2025-01-01_2_113', 1, 'M', 1)
, (20250101, 2, '2025-01-01_2_114', 1, 'F', 1)
, (20250101, 2, '2025-01-01_2_115', 2, 'Boston', 1)
, (20250101, 2, '2025-01-01_2_116', 2, 'Tampa', 1)
, (20250101, 2, '2025-01-01_2_117', 2, 'Tampa', 1)
, (20250101, 2, '2025-01-01_2_118', 2, 'Miami', 1)
, (20250101, 2, '2025-01-01_2_119', 2, 'Miami', 1)
, (20250101, 2, '2025-01-01_2_120', 2, 'Miami', 1)
, (20250101, 2, '2025-01-01_2_121', 2, 'Miami', 1)
SELECT * FROM dbo.EventMetrics
Sample data looks like this:

--
-- WIP query
--
SELECT FacilityID
, ItemID
, ItemValue
, SUM(SUM(ItemFrequency)) OVER (PARTITION BY ItemID, FacilityID) AS [Total_Items_Per_Facility]
, SUM(SUM(ItemFrequency)) OVER (PARTITION BY ItemID, FacilityID, ItemValue) AS [Count_Per_ItemValue_Per_Facility]
, COUNT(DISTINCT UniqueVisitID) AS [Count_Of_UniqueVisits_Per_ItemValue_Per_Facility]
, '??' AS [Count_Of_UniqueVisits_Per_ItemID_Per_Facility]
, '??' AS [Count_Of_UniqueVisits_For_This_Dataset]
FROM dbo.EventMetrics
WHERE EventDate BETWEEN 20250101 AND 20250101
AND FacilityID IN (1, 2)
AND ItemID IN (1, 2)
GROUP BY FacilityID
, ItemID
, ItemValue
ORDER BY FacilityID
, ItemID
, ItemValue
The above TSQL gives me the below result.

--
-- Current two step Solution, with two queries.
--
DECLARE @TotalUniqueVisitIDs INT
SELECT @TotalUniqueVisitIDs = COUNT(DISTINCT UniqueVisitID)
FROM dbo.EventMetrics
WHERE EventDate BETWEEN 20250101 AND 20250101
AND FacilityID IN (1, 2)
AND ItemID IN (1, 2)
;WITH Count_UniqueVisitID_Per_Facility (FacilityID, ItemID, Count_UniqueVisitID_Per_Facility)
AS (
SELECT FacilityID, ItemID, COUNT(DISTINCT UniqueVisitID) AS [Count_UniqueVisitID_Per_Facility]
FROM dbo.EventMetrics
WHERE EventDate BETWEEN 20250101 AND 20250101
AND FacilityID IN (1, 2)
AND ItemID IN (1, 2)
GROUP BY FacilityID, ItemID
)
SELECT
E.FacilityID
, E.ItemID
, E.ItemValue
, SUM(SUM(E.ItemFrequency)) OVER (PARTITION BY E.ItemID, E.FacilityID) AS [Total_Items_Per_Facility]
, SUM(SUM(E.ItemFrequency)) OVER (PARTITION BY E.ItemID, E.FacilityID, ItemValue) AS [Count_Per_ItemValue_Per_Facility]
, COUNT(DISTINCT UniqueVisitID) AS [Count_Of_UniqueVisits_Per_ItemValue_Per_Facility]
, UVF.Count_UniqueVisitID_Per_Facility AS [Count_Of_UniqueVisits_Per_ItemID_Per_Facility]
, @TotalUniqueVisitIDs AS [Count_Of_UniqueVisits_For_This_Dataset]
FROM dbo.EventMetrics AS E
INNER JOIN Count_UniqueVisitID_Per_Facility AS UVF
ON UVF.FacilityID = E.FacilityID
AND UVF.ItemID = E.ItemID
WHERE E.EventDate BETWEEN 20250101 AND 20250101
AND E.FacilityID IN (1, 2)
AND E.ItemID IN (1, 2)
GROUP BY E.FacilityID
, E.ItemID
, E.ItemValue
, UVF.Count_UniqueVisitID_Per_Facility
ORDER BY E.FacilityID
, E.ItemID
, E.ItemValue
**Question**
How can I get one query to return metrics for all 5?
**Context:**
1. This is a data warehouse table with billions of records
2. Clustered Columnstore index exists
3. There are more metrics generated as part of this query; For clarity, only the pertinent parts are included.
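One direction worth noting (not from the original post): COUNT(DISTINCT ...) is not allowed as a window function in SQL Server, but the well-known DENSE_RANK workaround can produce the distinct-visit counts for points 4 and 5 in a single pass, provided UniqueVisitID is never NULL. A minimal sketch against the sample table:

```sql
-- Sketch: distinct count over a window via two DENSE_RANKs
-- (ascending + descending - 1); the result is constant within each partition.
SELECT DISTINCT
       FacilityID
     , ItemID
     , DENSE_RANK() OVER (PARTITION BY FacilityID, ItemID ORDER BY UniqueVisitID ASC)
     + DENSE_RANK() OVER (PARTITION BY FacilityID, ItemID ORDER BY UniqueVisitID DESC)
     - 1 AS Count_Of_UniqueVisits_Per_ItemID_Per_Facility
     , DENSE_RANK() OVER (ORDER BY UniqueVisitID ASC)
     + DENSE_RANK() OVER (ORDER BY UniqueVisitID DESC)
     - 1 AS Count_Of_UniqueVisits_For_This_Dataset
FROM dbo.EventMetrics
WHERE EventDate BETWEEN 20250101 AND 20250101
  AND FacilityID IN (1, 2)
  AND ItemID IN (1, 2);
```

This could be computed in a CTE over the base rows and joined back to the existing grouped query, so everything still comes from one statement.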
ToC
(727 rep)
May 29, 2025, 07:35 PM
• Last activity: May 30, 2025, 09:41 AM
3
votes
2
answers
137
views
Select row with max value for multiple columns
I have a table with speeds at different time intervals:
CREATE TABLE speeds (
result_id uuid NULL,
t10 float4 NULL,
t30 float4 NULL,
t60 float4 NULL,
t120 float4 NULL);
And some example values:
INSERT INTO speeds (result_id,t10,t30,t60,t120) VALUES
('9cce0257-122a-43a9-b3d7-4af6ea07361b',10.495212,10.495212,10.495212,10.495212),
('0e42f6d3-363e-464a-a7e0-a923afd55254',9.220896,9.351419,9.193572,9.494948),
('9fb35758-c9f0-4205-b1c1-3e0c8671a996',11.134026,11.134026,11.134026,9.671922),
('36cdec57-5d17-4260-a25f-c772a2c942fb',11.339522,11.339522,10.826232,10.826232),
('a7227251-82c8-4956-be44-fa54f96cab6d',11.300294,10.272741,10.272741,11.278638);
And I have a very ugly query to get the results at the maximum speed over all time intervals:
select * from (select result_id, t10 as value, 't10' as tag from speeds where t10 is not null order by t10 desc limit 1)
union select* from (select result_id, t30 as value, 't30' as tag from speeds where t30 is not null order by t30 desc limit 1)
union select* from (select result_id, t60 as value, 't60' as tag from speeds where t60 is not null order by t60 desc limit 1)
union select* from (select result_id, t120 as value, 't120' as tag from speeds where t120 is not null order by t120 desc limit 1)
And the results are:
36cdec57-5d17-4260-a25f-c772a2c942fb 11.339522 t10
a7227251-82c8-4956-be44-fa54f96cab6d 11.278638 t120
36cdec57-5d17-4260-a25f-c772a2c942fb 11.339522 t30
9fb35758-c9f0-4205-b1c1-3e0c8671a996 11.134026 t60
Is it possible to replace this terrible query with something more acceptable?
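One tidier shape (a sketch assuming PostgreSQL, given the uuid/float4 types; not from the original post) unpivots the four columns with a LATERAL VALUES list and keeps the top row per tag with DISTINCT ON:

```sql
-- Sketch: turn the four columns into (tag, value) rows, drop NULLs,
-- and keep the highest value per tag.
SELECT DISTINCT ON (v.tag)
       s.result_id, v.value, v.tag
FROM   speeds AS s
CROSS  JOIN LATERAL (VALUES ('t10',  s.t10),
                            ('t30',  s.t30),
                            ('t60',  s.t60),
                            ('t120', s.t120)) AS v(tag, value)
WHERE  v.value IS NOT NULL
ORDER  BY v.tag, v.value DESC;
```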
Noah Rivers
(31 rep)
May 13, 2025, 03:13 PM
• Last activity: May 14, 2025, 12:13 PM
0
votes
1
answers
83
views
Does the MERGE PARTITION functionality actually exist in PostgresSQL 17?
Is it possible to merge partitions in Postgres 17?
There are a lot of blog posts on the internet suggesting that partition MERGE and SPLIT are (or will be) available in PostgreSQL 17, but there's no explicit mention of this in the release notes, nor have I managed to get any of the commands to work.
I have a table that's partitioned by date range. Initially the data is split by day, then it's merged into a monthly partition.
This is the command I'm trying to run:
alter table myTable
merge partitions (myTable_202505, myTable_20250503)
into myTable_202505a;
And this is the error:
>ERROR: syntax error at or near "merge"
Have I got the command wrong, or did the SPLIT/MERGE functionality get pulled before release?
ConanTheGerbil
(1303 rep)
May 8, 2025, 10:41 AM
• Last activity: May 9, 2025, 11:44 AM
5
votes
1
answers
235
views
OUTPUT as default parameter value for stored procedure?
I've encountered peculiar behaviour by SQL Server, and I wonder why it behaves like this.
Consider the following CREATE PROCEDURE script:
CREATE PROCEDURE dbo.myProc
@parameter INT = OUTPUT
AS
BEGIN
PRINT @parameter;
SET @parameter = 10;
END
Notice the way the parameter is defined - it seems like an integer parameter with a default value of OUTPUT.
That is, of course, ridiculous.
It is also exactly how it behaves. Why?
If that was a default value, I would expect it to be wrapped in single quotes (e.g. @parameter INT = 'OUTPUT').
If it was a keyword specifying that this is an OUTPUT parameter, I would expect it to fail, since there is a redundant =.
The [documentation](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-procedure-transact-sql?view=sql-server-ver16#:~:text=Transact%2DSQL%20syntax%20for%20stored%20procedures%20in%20SQL%20Server%20and%20Azure%20SQL%20Database%3A) seems to suggest the possible formats are @parameter INT OUTPUT or @parameter INT = OUTPUT, where the former does not specify a default value, and the latter does. However, the mangled middle form does seem to "work" as well (at least in the sense of being valid syntax). I wonder if it is a bug or some obscure interaction of known features.
[Fiddle](https://dbfiddle.uk/5Fwga1Td) .
Yano_of_Queenscastle
(1998 rep)
May 6, 2025, 01:38 PM
• Last activity: May 6, 2025, 05:02 PM
0
votes
4
answers
216
views
Why is LIMIT disallowed in IN subqueries in MySQL?
---
### Please read carefully. This question is _not_ about workarounds or alternative ways to achieve LIMIT in IN subqueries!
---
It’s a well-known and well-documented that there are [a number of restrictions on subqueries](https://dev.mysql.com/doc/mysql-reslimits-excerpt/8.0/en/subquery-restrictions.html) in MySQL (and MariaDB).
Of the restrictions listed in the documentation, the reasoning behind most is either quite logical (e.g., you cannot modify and select from the same table in the same query) or explained on the page.
The only restriction for which the documentation _doesn't_ provide a reasoning is also the most common one to come up against, namely that you cannot LIMIT a subquery used with certain operators, most commonly IN. This is invalid:
SELECT * FROM a WHERE b_id IN (SELECT id FROM b ORDER BY c LIMIT 5)
There are any number of questions strewn across the Internet (including on this site) looking for alternative ways to achieve the same effect, but that’s not what I’m asking for here – I know you can just wrap it in another subquery and it works. What I’m puzzled by is the fact that I cannot find any explanation as to **why** this restriction exists to begin with.
I’m guessing it’s not part of the SQL standard, since limiting subqueries in this way is perfectly valid in other engines, like SQL Server and (as far as I can tell) PostgreSQL.
So why did the developers of MySQL (and MariaDB) decide that using LIMIT in subqueries with operators like IN should be disallowed? Has anyone from the developer team ever mentioned or explained the choice?
Janus Bahs Jacquet
(111 rep)
Apr 3, 2025, 01:50 PM
• Last activity: Apr 20, 2025, 10:29 PM
3
votes
1
answers
129
views
When use the SHOW PROCESSLIST vs SELECT * FROM information_schema.PROCESSLIST command?
In MySQL 8 server community the two following commands show the same structure:
SHOW PROCESSLIST;
SELECT * FROM information_schema.PROCESSLIST;
**Question**
* When should the SHOW PROCESSLIST command be used vs the SELECT * FROM information_schema.PROCESSLIST command?
Each command exists for a specific reason, right?
About their outputs, I only found two differences, in the State and Info columns for each command respectively:
State | Info
----------------------------
init | SHOW PROCESSLIST
executing | SELECT * FROM information_schema.PROCESSLIST
The difference in the Info column is clear because it represents the command itself that was executed, **but**:
* Why is the State column different?
The executing value is clear, but:
* What does the init value mean?
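For illustration (not from the original post), the main practical difference is that the information_schema form is an ordinary table expression, so it can be filtered, joined, and sorted:

```sql
-- Sketch: filter and sort the process list like any other query,
-- which SHOW PROCESSLIST cannot do on its own.
SELECT id, user, host, db, time, state, info
FROM   information_schema.PROCESSLIST
WHERE  command <> 'Sleep'
ORDER  BY time DESC;
```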
Manuel Jordan
(229 rep)
Apr 17, 2025, 05:24 PM
• Last activity: Apr 18, 2025, 04:35 PM
1
votes
1
answers
410
views
MySQL Row Constructor forms in INSERT VALUES vs. VALUES
In MySQL 8, based on my reading of the documentation, INSERT VALUES accepts row constructors in the form:
INSERT ... VALUES(1,2,3), (4,5,6), (7,8,9);
or
INSERT ... VALUES ROW(1,2,3), ROW(4,5,6), ROW(7,8,9);
Either of these results in the values being inserted into a table.
However, with VALUES used by itself or in a UNION for example (in a manner similar to SELECT), only the form using ROW() works.
VALUES ROW(1,2), ROW(3,4);
produces
+----------+----------+
| column_0 | column_1 |
+----------+----------+
| 1 | 2 |
| 3 | 4 |
+----------+----------+
2 rows in set (0.00 sec)
But the form with only parentheses produces a 1064 error "You have an error in your SQL syntax..." even if outer parentheses are added to group the rows.
VALUES (1,2), (3, 4);
produces
>ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(1,2), (3, 4)' at line 1
VALUES ((1,2), (3, 4));
gives a similar error.
Why is there this inconsistency in row constructor forms between the two uses of VALUES in MySQL? Is there a plan to address this in a future version?
By contrast, PostgreSQL works without the explicit ROW().
Dennis Williamson
(164 rep)
Oct 5, 2021, 11:25 PM
• Last activity: Apr 17, 2025, 12:00 PM
0
votes
1
answers
58
views
Mysql "ERROR 1045 (28000): Access denied" only when host is specified
I'm scratching my head with this issue.
I have an Azure flexible server for MySQL, running MySQL 5.7.
I've created and granted privileges to a new user with the "%" wildcard, following this syntax (don't worry, I'm using fake names for everything):
MySQL [(none)]> CREATE USER 'mynewuser'@'%' IDENTIFIED BY 'password';
MySQL [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'mynewuser'@'%';
MySQL [(none)]> FLUSH PRIVILEGES;
MySQL [(none)]> select user, host, account_locked from mysql.user where user = 'mynewuser'
-> ;
+--------------+------+----------------+
| user | host | account_locked |
+--------------+------+----------------+
| mynewuser | % | N |
+--------------+------+----------------+
1 row in set (0.01 sec)
MySQL [(none)]> show grants for mynewuser
-> ;
+------------------------------------------------------------+
| Grants for mynewuser@% |
+------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'mynewuser'@'%' |
| GRANT ALL PRIVILEGES ON *.* TO 'mynewuser'@'%' |
+------------------------------------------------------------+
But when I try a connection from a remote machine, if I try to specify the host (even localhost), I get an Access Denied error from 172.26.3.34 (the IP of the remote machine, JUMPSRV01):
JUMPSRV01> mysql -hSRVDB01.mysql.database.azure.com --ssl=true -umynewuser@SRVDB01 -p
Enter password:
ERROR 1045 (28000): Access denied for user 'mynewuser@localhost'@'172.26.3.34' (using password: YES)
JUMPSRV01> mysql -hSRVDB01.mysql.database.azure.com --ssl=true -umynewuser@localhost -p
Enter password:
ERROR 1045 (28000): Access denied for user 'mynewuser@localhost'@'172.26.3.34' (using password: YES)
JUMPSRV01> mysql -hSRVDB01.mysql.database.azure.com --ssl=true -umynewuser@JUMPSRV01 -p
Enter password:
ERROR 1045 (28000): Access denied for user 'mynewuser@localhost'@'172.26.3.34' (using password: YES)
**However**, if I don't specify the host in the username parameter, it works
JUMPSRV01> mysql -hSRVDB01.mysql.database.azure.com --ssl=true -umynewuser -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 117
Server version: 5.7.44-azure-log MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]>
What am I missing?
The problem I am facing is that I'm replicating a database, and all the services and clients connect using the "username@dbhost" syntax. I don't know how many hosts or services are trying to connect to the database like that, so modifying the clients' and services' connection strings is not possible.
vegatripy
(668 rep)
Mar 25, 2025, 12:11 PM
• Last activity: Mar 26, 2025, 08:47 AM
0
votes
2
answers
5082
views
If function not exist then create function in SQL server
If not Exists (Select * from sys.objects where name ='FNLX_getDate' and type =N'FN')
Create function [dbo].[FNLX_getDate]() returns Datetime
as
Begin
.................
End
When I execute the above script I get the error
> Incorrect syntax near the keyword 'function'.
What can be wrong?
Using Microsoft SQL Server to execute above script.
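For context, CREATE FUNCTION must be the first statement in its batch, which is why the IF guard above fails. A common pattern (a sketch with a placeholder body, not the original function) wraps the creation in dynamic SQL so it runs in its own batch:

```sql
-- Sketch: EXEC runs the string as a separate batch, so CREATE FUNCTION
-- is allowed even after the IF NOT EXISTS check.
IF NOT EXISTS (SELECT * FROM sys.objects
               WHERE name = 'FNLX_getDate' AND type = N'FN')
    EXEC('CREATE FUNCTION [dbo].[FNLX_getDate]() RETURNS DATETIME
          AS
          BEGIN
              RETURN GETDATE();  -- placeholder body
          END');
```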
Nikhil D
(111 rep)
Dec 20, 2016, 09:36 AM
• Last activity: Mar 14, 2025, 01:30 PM
0
votes
1
answers
38
views
Why does the DAX syntax require the original name for a filtered table?
I am learning DAX and reading Russo & Ferrari's Definitive Guide to Dax (2nd ed).
I understand how SUM ( Sales[Quantity] ) is really SUMX ( Sales, Sales[Quantity] ), which iterates over the rows of the Sales table and sums the values in the Quantity column. If I'm not mistaken, even though the book says it's against the standard formatting (so it's clear the column is not a measure), in SUMX you could remove the table name Sales from the argument Sales[Quantity], since SUMX already knows to look in Sales.
I took some issue with the syntax of how SUMX (and other iterators) is combined with FILTER, based around the table name being used in the column reference.
On page 58 the following example code is given:
Sales Amount Multiple Items :=
VAR
MultipleItemSales = FILTER ( Sales, Sales[Quantity] > 1 )
RETURN
SUMX (
MultipleItemSales,
Sales[Quantity] * Sales[Unit Price]
)
My initial understanding was that MultipleItemSales is an entirely new object in memory, which is a filtered version of the Sales table - therefore I thought the second argument should be MultipleItemSales[Quantity] * MultipleItemSales[Unit Price], since SUMX is iterating over MultipleItemSales and not Sales. But apparently this would give an error.
I think the question becomes *what is DAX doing internally to make this syntax valid?*
Let me give another example:
VAR
MultipleItemSales = FILTER ( Sales, Sales[Quantity] > 1 )
VAR
SingleItemSales = FILTER ( Sales, Sales[Quantity] = 1 )
VAR
SalesAmountMultipleItems =
SUMX (
MultipleItemSales,
Sales[Quantity] * Sales[Unit Price]
)
VAR
SalesAmountAllItems =
SUMX (
Sales,
Sales[Quantity] * Sales[Unit Price]
)
VAR
SalesAmountSingleItems =
SUMX (
SingleItemSales,
Sales[Quantity] * Sales[Unit Price]
)
VAR
AvgSalesMultipleItems =
AVERAGEX (
MultipleItemSales,
Sales[Quantity] * Sales[Unit Price]
)
RETURN
....
In each of the SUMX functions, a different version of the table (filtered or unfiltered) is being iterated, yet Sales is always used as the table name for the columns. So, how does it know which version of Sales to use?
I then thought that, rather than creating an entirely new object in memory, FILTER might actually filter the object which Sales points to in-place. So in SalesAmountMultipleItems, the first argument to SUMX tells the system to apply some filtering to Sales, before looking down the Sales[Quantity] and Sales[Unit Price] columns. Then, when escaping the SUMX function scope, whatever filtering was done to Sales gets undone, allowing SalesAmountAllItems to access the unfiltered version of Sales.
But then I learned about lazy evaluation. In my previous argument, when evaluating AvgSalesMultipleItems, the filtering applied by MultipleItemSales would have to be applied a second time. However, to save CPU and memory, it's my understanding that the filtering wouldn't need to happen a second time if it's already been carried out once.
So, my current understanding is more like the following:
FILTER does indeed create a new object in memory - so MultipleItemSales is a filtered version of Sales which can be referenced multiple times but need only be created once. But then, to allow the continued use of the name Sales, the effect of the first argument of SUMX being MultipleItemSales must be to **temporarily change which object in memory the name Sales is pointing at**. At the end of the scope, the name Sales reverts back to pointing to the original, unfiltered version of the table, allowing SalesAmountAllItems to access it.
Is this the correct understanding?
jonnybolton16
(101 rep)
Jan 15, 2025, 09:57 AM
• Last activity: Mar 2, 2025, 01:04 PM
-1
votes
2
answers
2772
views
Named parameters in queries
I'm trying to find the proper syntax to do the following:
SELECT a, b, c FROM foo WHERE foo.id = :id;
(the query above is written as in SQLite).
The "normal" query would be written as:
SELECT a, b, c FROM foo WHERE foo.id = ?;
The first query is written with a named parameter (:id); the second with an unnamed one (?).
Searching the internet gives me the syntax for stored procedures and/or functions, but what I am looking for is a SELECT query with a named parameter. When I write my C++ code I will be using the BindParameter() ODBC syntax.
For the first query in MS SQL Server, what syntax is correct?
---
As I understand the comment responses, the proper SQL Server syntax is:
SELECT a, b, c FROM foo WHERE foo.id = @id;
This means I will need to declare a variable named @id, bind it to the prepared query, then execute it.
The question is specifically about SQL Server. My question is independent of the client tool.
I am using a plain SELECT query, not a stored procedure or a function. What I am looking for is how to properly write such a query (not a procedure or function) for SQL Server so the DB engine will understand it.
---
My full code will look like this:
std::string query = "SELECT a, b, c FROM foo WHERE foo.id = @id;";
SQLExecute( stmt, "DECLARE @id" ... );
SQLPrepare( stmt, query.c_str()... );
SQLBindParameter();
SQLExecute();
Let me give an example of what I'm looking for. If I have a query like the following:
IF NOT EXISTS (
SELECT *
FROM
WHERE table_name = ?
AND table_schema = ?)
INSERT INTO
(table_name, table_schema, param1, param2,...)
VALUES
( ?, ?, ...);
With SQLite and PostgreSQL I can write it like this:
IF NOT EXISTS (
SELECT *
FROM
WHERE table_name = ?
AND table_schema = ?)
INSERT INTO
(table_name, table_schema, param1, param2,...)
VALUES
( ?1, ?2, ...);
Can I do something similar with SQL Server and ODBC? Or will I have to write repeating code for every parameter I'm passing to the query?
In fact, SQLite can refer to a parameter either by name or by number in the query. Not sure if either way exists in SQL Server.
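As a side note (not from the original post): at the ODBC layer only positional ? markers exist, but within T-SQL itself a named parameter for an ad-hoc statement is usually passed through sp_executesql. A minimal sketch:

```sql
-- Sketch: sp_executesql takes the statement, a parameter declaration list,
-- and the named parameter values.
EXEC sp_executesql
     N'SELECT a, b, c FROM foo WHERE foo.id = @id;',
     N'@id int',
     @id = 42;
```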
Igor
(247 rep)
Jan 5, 2022, 03:42 AM
• Last activity: Feb 12, 2025, 07:47 AM
3
votes
3
answers
12933
views
"ERROR: column "a" does not exist" when referencing column alias
I am attempting to retrieve either a null or boolean value from a query. Here is my query:
SELECT EXISTS (SELECT 1 FROM employee
where add_uuid = '7e53fa47-ade3-4e94-8efd-b25f998b09c6') AS a,
CASE WHEN a = false then null
ELSE (SELECT exists (SELECT 1 FROM employee
where add_uuid = '7e53fa47-ade3-4e94-8efd-b25f998b09c6'
and is_active = true
)
)
END
Results in:
> SQL Error : ERROR: column "a" does not exist
Despite experimenting with multiple options, I am still unable to achieve the desired outcome.
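For reference, one common fix (a sketch, not from the original post): a column alias cannot be referenced in the same SELECT list, so the EXISTS is computed in a derived table first and the alias is used in the outer query:

```sql
-- Sketch: compute "a" once in a derived table, then reference it freely.
SELECT e.a,
       CASE WHEN e.a = false THEN NULL
            ELSE EXISTS (SELECT 1 FROM employee
                         WHERE add_uuid = '7e53fa47-ade3-4e94-8efd-b25f998b09c6'
                           AND is_active = true)
       END AS active_and_exists
FROM  (SELECT EXISTS (SELECT 1 FROM employee
                      WHERE add_uuid = '7e53fa47-ade3-4e94-8efd-b25f998b09c6') AS a) AS e;
```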
Aamir
(133 rep)
Jan 18, 2023, 05:24 AM
• Last activity: Nov 21, 2024, 05:08 PM
1
votes
2
answers
240
views
Procedure with nested FOR loop to update rows
I am trying to write a Postgres procedure in pgAdmin. The procedure I've written fails with different error messages depending on where I put the cursor. It even sometimes seems to succeed, but with no procedure created.
My questions:
1. What is wrong with my code?
2. Is there a better/simpler way to achieve the same thing?
The below code tries to iterate through a table of values (outer loop), and for each row in that table, to iterate through another table that has some missing values (inner loop). Whenever a missing value is found in the inner loop table, if a value that can be used exists in the outer table, I update the inner table with that value.
~~~pgsql
CREATE OR REPLACE PROCEDURE impute()
LANGUAGE plpgsql
AS $$
DECLARE
cntry_dec_means RECORD;
table_row RECORD;
BEGIN
>
FOR cntry_dec_means IN
SELECT * FROM country_decade_means
LOOP
-- if all imputed values are null, don't do anything and move to the next row
IF ISNULL(cntry_dec_means.gdppc)
AND ISNULL(cntry_dec_means.gdp)
AND ISNULL(cntry_dec_means.ed_gdp)
AND ISNULL(cntry_dec_means.population) THEN
CONTINUE;
-- get matching rows from the actual table
>
FOR table_row IN
SELECT * FROM inls625_impute
WHERE country = cntry_dec_means.country
AND decade = cntry_dec_means.decade
LOOP
IF (table_row.gdppc IS NULL) AND (cntry_dec_means.gdppc IS NOT NULL) THEN
UPDATE inls625_impute SET gdppc = cntry_dec_means.gdppc
WHERE inls625_impute.country = cntry_dec_means.country
AND inls625_impute.decade = cntry_dec_means.decade;
END LOOP; -->
END LOOP; -->
END;
$$;
~~~
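On question 2 ("is there a better/simpler way"), a set-based alternative may be worth considering. The sketch below (not from the original post, covering only the gdppc column and assuming the column names above) replaces the nested loops with a single UPDATE ... FROM:

```sql
-- Sketch: impute gdppc in one statement; the other columns (gdp, ed_gdp,
-- population) would get analogous statements or COALESCE-based SET clauses.
UPDATE inls625_impute AS i
SET    gdppc = m.gdppc
FROM   country_decade_means AS m
WHERE  i.country = m.country
  AND  i.decade  = m.decade
  AND  i.gdppc IS NULL
  AND  m.gdppc IS NOT NULL;
```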
#### Update
Thanks to @prasad for clarifying how to use loop labels. Unfortunately, it does not seem to work for me.
Here are the errors (in psql):
~~~pgsql
CREATE OR REPLACE PROCEDURE impute()
LANGUAGE plpgsql
AS $$"
LINE 1: $$;
^
INLS-623-Labs=# table_row RECORD;
ERROR: syntax error at or near "table_row"
LINE 1: table_row RECORD;
^
INLS-623-Labs=# BEGIN
INLS-623-Labs-# >
INLS-623-Labs-# FOR cntry_dec_means IN
INLS-623-Labs-# SELECT * FROM country_decade_means
INLS-623-Labs-# LOOP
INLS-623-Labs-# -- if all imputed values are null, don't do anything and move to the next row
INLS-623-Labs-# IF ISNULL(cntry_dec_means.gdppc)
INLS-623-Labs-# AND ISNULL(cntry_dec_means.gdp)
INLS-623-Labs-# AND ISNULL(cntry_dec_means.ed_gdp)
INLS-623-Labs-# AND ISNULL(cntry_dec_means.population) THEN
INLS-623-Labs-# CONTINUE outer_loop;
ERROR: syntax error at or near ">
^
INLS-623-Labs=#
INLS-623-Labs=# -- get matching rows from the actual table
INLS-623-Labs=# >
INLS-623-Labs-# FOR table_row IN
INLS-623-Labs-# SELECT * FROM inls625_impute
INLS-623-Labs-# WHERE country = cntry_dec_means.country
INLS-623-Labs-# AND decade = cntry_dec_means.decade
INLS-623-Labs-# LOOP
INLS-623-Labs-# IF ISNULL(table_row.gdppc) AND NOT ISNULL(cntry_dec_means.gdppc) THEN
INLS-623-Labs-# UPDATE inls625_impute SET gdppc = cntry_dec_means.gdppc
INLS-623-Labs-# WHERE inls625_impute.country = cntry_dec_means.country
INLS-623-Labs-# AND inls625_impute.decade = cntry_dec_means.decade;
ERROR: syntax error at or near ">
^
INLS-623-Labs=# END LOOP inner_loop;
ERROR: syntax error at or near "LOOP"
LINE 1: END LOOP inner_loop;
^
INLS-623-Labs=# END LOOP outer_loop;
ERROR: syntax error at or near "LOOP"
LINE 1: END LOOP outer_loop;
^
INLS-623-Labs=# END;
WARNING: there is no transaction in progress
COMMIT
INLS-623-Labs=# $$;
INLS-623-Labs$#
~~~
What's more, I reproduced the example provided by @prasad and get similar errors:
~~~pgsql
CREATE OR REPLACE PROCEDURE t_proc()
LANGUAGE plpgsql
AS $$"
LINE 1: $$;
^
INLS-623-Labs=# t2_rec RECORD;
ERROR: syntax error at or near "t2_rec"
LINE 1: t2_rec RECORD;
^
INLS-623-Labs=# BEGIN
INLS-623-Labs-# >
INLS-623-Labs-# FOR t_rec IN
INLS-623-Labs-# SELECT * FROM t
INLS-623-Labs-# LOOP
INLS-623-Labs-# IF t_rec.name IS NULL AND t_rec.data IS NULL THEN
INLS-623-Labs-# CONTINUE outer_loop;
ERROR: syntax error at or near ">
^
INLS-623-Labs=# END IF;
ERROR: syntax error at or near "IF"
LINE 1: END IF;
^
INLS-623-Labs=# RAISE NOTICE 't.id value: %', t_rec.id;
ERROR: syntax error at or near "RAISE"
LINE 1: RAISE NOTICE 't.id value: %', t_rec.id;
^
INLS-623-Labs=# >
INLS-623-Labs-# FOR t2_rec IN
INLS-623-Labs-# SELECT * FROM t2 WHERE t2.id = t_rec.id
INLS-623-Labs-# LOOP
INLS-623-Labs-# RAISE NOTICE 't2.id value: %', t2_rec.id;
ERROR: syntax error at or near ">
^
INLS-623-Labs=# END LOOP inner_loop;
ERROR: syntax error at or near "LOOP"
LINE 1: END LOOP inner_loop;
^
INLS-623-Labs=# END LOOP outer_loop;
ERROR: syntax error at or near "LOOP"
LINE 1: END LOOP outer_loop;
^
INLS-623-Labs=# END;
WARNING: there is no transaction in progress
COMMIT
INLS-623-Labs=# $$;
~~~
bellysavalas
(13 rep)
Nov 12, 2024, 02:57 AM
• Last activity: Nov 12, 2024, 07:06 PM
0
votes
2
answers
918
views
Select using multiple lists from top to bottom
I need to select a value (Id) from a table based on matching items in 2 lists.
Example.
List 1 = ('Tony','Adam','Mark')
List 2 = ('Everton', 'Arsenal', 'Villa')
Table
Id,Number, Name, Team
1, 100, Mark, Villa
2 110, Tony, Everton
3 120, Adam, Arsenal
Let's say that a query was done to pull only the name and team. I need to take the results and get the number.
But I need a query that uses the lists from top to bottom: first the first items of List A AND List B, then the 2nd, 3rd, and so forth.
I need both Name and Team to match in the same row.
I hope I've explained that clearly.
Select Number, Name, Team
where Name = (list A) and Team = (list B).
---
Select Number from Teams
where name = 'Tony' and team = 'Everton'
would give 110
but I need to do it on multiple values in one go.
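One way to express this (a sketch, not from the original post, assuming an engine with a VALUES table constructor such as SQL Server or PostgreSQL, and the Teams table named in the example) is to pair the two lists positionally in a derived table and join on both columns:

```sql
-- Sketch: ord keeps the top-to-bottom order of the lists; the join requires
-- Name and Team to match in the same row.
SELECT p.ord, t.Number, t.Name, t.Team
FROM  (VALUES (1, 'Tony', 'Everton'),
              (2, 'Adam', 'Arsenal'),
              (3, 'Mark', 'Villa')) AS p(ord, Name, Team)
JOIN   Teams AS t
  ON   t.Name = p.Name
 AND   t.Team = p.Team
ORDER  BY p.ord;
```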
Mogsy Mo
(11 rep)
Apr 5, 2023, 05:42 PM
• Last activity: Nov 8, 2024, 11:00 AM
0
votes
1
answers
37
views
What is the concept used to define tables related among them in a kind of package?
Consider the following Figure available on [MySQL Workbench: Visual Database Design](https://www.mysql.com/products/workbench/design/)
**Question**:
* What is the concept used to define tables related among them in a kind of package?
I want to know that concept/term so I can learn its respective syntax. That way, if another GUI such as DBeaver is used, I would see the same structure. Of course, I am assuming the colors are defined randomly by the GUI.
Furthermore: is that syntax portable to other DB providers (PostgreSQL, H2, etc.)?
**Note** I know about the CREATE TABLE ... syntax, but how do I define each one in a custom "package"?
Manuel Jordan
(229 rep)
Sep 10, 2024, 11:38 PM
• Last activity: Sep 12, 2024, 01:04 AM
0
votes
1
answers
537
views
What is the default column name given to a TABLE() type with a single column
CREATE OR REPLACE TYPE MISSING_KEYS AS TABLE OF VARCHAR2(256);
SELECT [¿what goes here?] as missing_key
FROM TABLE (MISSING_KEYS('A','B','C','D'))
### What is the default column name that should be in [¿what goes here?] in the above example?
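For reference (not from the original post): for a nested table of a scalar type, Oracle exposes the single column under the pseudocolumn COLUMN_VALUE, so a sketch of the query would be:

```sql
-- Sketch: COLUMN_VALUE is the pseudocolumn name for scalar collection elements.
SELECT COLUMN_VALUE AS missing_key
FROM   TABLE (MISSING_KEYS('A', 'B', 'C', 'D'));
```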
user68575
Sep 19, 2019, 04:35 PM
• Last activity: Sep 1, 2024, 10:05 PM
1
votes
2
answers
239
views
How to use an expression like date_trunc() in GROUP BY?
I have a couple of joined tables with industrial data on them:
~~~pgsql
create table v2.tag (
tag_id integer generated always as identity,
tag text not null,
primary key (tag_id),
unique (tag)
);
create table v2.state (
tag_id integer not null,
"timestamp" timestamp without time zone not null,
value float not null,
primary key (tag_id, timestamp),
foreign key (tag_id) references v2.tag (tag_id)
) partition by range (timestamp);
~~~
The state table holds time series data of about 50 million rows from the last 6 months and I need to run a benchmark on it with various queries. The table is partitioned monthly.
The query I tried simply gets the number of data points per day and tag, that any actual TSDB can do without breaking a sweat on such a small dataset:
SELECT
count(*) as points,date_trunc('day', timestamp) as timestamp,tag.tag
FROM
v2.state
JOIN
v2.tag USING (tag_id)
GROUP BY
timestamp, tag
ORDER BY
timestamp ASC;
The thing is, for some reason this query makes the DB take up almost 3 GB of RAM and returns a bunch of duplicates, like this:
2024-02-01 00:00:00 | /Heave | 1
2024-02-01 00:00:00 | /Pitch | 1
2024-02-01 00:00:00 | /Roll | 1
2024-02-01 00:00:00 | /Velocity | 1
2024-02-01 00:00:00 | /Heave | 1
...
And so on, all in the same day. I could not scroll over to the next day; it just kept repeating these rows in the result instead of counting them up per tag like I expected. So instead of counting the number of data points per day/tag, it seems to just produce a duplicate for each actual row of the ~50 million rows in the database.
So something is not working in the aggregation. I would expect this query to return around 12K lines (65*30*6), but it returns millions of rows instead, causing the Jupyter notebook I am trying to load it into to get OOM-killed.
I tried to run this with EXPLAIN ANALYZE, but since I am a noob with Postgres, it doesn't really... explain anything:
~~~none
Sort (cost=700769.72..700798.22 rows=11400 width=78) (actual time=80503.260..83825.211 rows=47499969 loops=1)
Sort Key: (date_trunc('day'::text, state."timestamp"))
Sort Method: external merge Disk: 4703296kB
-> Finalize GroupAggregate (cost=697027.86..700001.55 rows=11400 width=78) (actual time=35609.801..64328.719 rows=47
499969 loops=1)
Group Key: state."timestamp", tag.tag
-> Gather Merge (cost=697027.86..699688.05 rows=22800 width=70) (actual time=35609.453..55143.276 rows=4749996
9 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Sort (cost=696027.84..696056.34 rows=11400 width=70) (actual time=34526.070..42018.956 rows=15833323
loops=3)
Sort Key: state."timestamp", tag.tag
Sort Method: external merge Disk: 1414088kB
Worker 0: Sort Method: external merge Disk: 1446832kB
Worker 1: Sort Method: external merge Disk: 1470664kB
-> Partial HashAggregate (cost=695145.67..695259.67 rows=11400 width=70) (actual time=8690.289..20
138.661 rows=15833323 loops=3)
Group Key: state."timestamp", tag.tag
Batches: 1029 Memory Usage: 8241kB Disk Usage: 1694608kB
Worker 0: Batches: 901 Memory Usage: 8241kB Disk Usage: 1727928kB
Worker 1: Batches: 773 Memory Usage: 8241kB Disk Usage: 1748528kB
-> Hash Join (cost=2.28..652834.40 rows=5641502 width=62) (actual time=138.598..4142.702 row
s=15833323 loops=3)
Hash Cond: (state.tag_id = tag.tag_id)
-> Parallel Append (cost=0.00..599769.83 rows=19794743 width=12) (actual time=138.383.
.2665.699 rows=15833323 loops=3)
-> Parallel Seq Scan on state_y2024m04 state_4 (cost=0.00..221214.31 rows=874583
1 width=12) (actual time=39.308..827.302 rows=6996457 loops=3)
-> Parallel Seq Scan on state_y2024m02 state_2 (cost=0.00..172317.34 rows=680943
4 width=12) (actual time=58.866..1102.604 rows=8171318 loops=2)
-> Parallel Seq Scan on state_y2024m03 state_3 (cost=0.00..78305.04 rows=3095204
width=12) (actual time=0.766..694.493 rows=7428501 loops=1)
-> Parallel Seq Scan on state_y2024m05 state_5 (cost=0.00..28879.42 rows=1141442
width=12) (actual time=180.418..416.467 rows=2739461 loops=1)
-> Parallel Seq Scan on state_y2024m01 state_1 (cost=0.00..20.00 rows=1000 width
=12) (actual time=0.000..0.001 rows=0 loops=1)
-> Parallel Seq Scan on state_y2024m06 state_6 (cost=0.00..20.00 rows=1000 width
=12) (actual time=0.000..0.001 rows=0 loops=1)
-> Parallel Seq Scan on state_y2024m07 state_7 (cost=0.00..20.00 rows=1000 width
=12) (actual time=0.000..0.001 rows=0 loops=1)
-> Parallel Seq Scan on state_y2024m08 state_8 (cost=0.00..20.00 rows=1000 width
=12) (actual time=0.002..0.002 rows=0 loops=1)
-> Hash (cost=1.57..1.57 rows=57 width=58) (actual time=0.149..0.268 rows=65 loops=3)
Buckets: 1024 Batches: 1 Memory Usage: 14kB
-> Seq Scan on tag (cost=0.00..1.57 rows=57 width=58) (actual time=0.031..0.036
rows=65 loops=3)
Planning Time: 2.447 ms
JIT:
Functions: 96
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 14.487 ms, Inlining 105.515 ms, Optimization 203.723 ms, Emission 143.355 ms, Total 467.081 ms
Execution Time: 86164.911 ms
~~~
So what's wrong with my query? Why is it not aggregating?
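For what it's worth, the usual explanation is that in GROUP BY an input column name takes precedence over an output alias, so GROUP BY timestamp, tag groups by the raw timestamps rather than the truncated days. A sketch of the grouped-by-expression form (not from the original post):

```sql
-- Sketch: group by the expression (or by its ordinal position, GROUP BY 2, tag),
-- so rows actually collapse to one per day and tag.
SELECT count(*)                          AS points,
       date_trunc('day', s."timestamp")  AS day,
       t.tag
FROM   v2.state AS s
JOIN   v2.tag   AS t USING (tag_id)
GROUP  BY date_trunc('day', s."timestamp"), t.tag
ORDER  BY day ASC;
```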
Megakoresh
(113 rep)
Jul 31, 2024, 05:53 PM
• Last activity: Aug 1, 2024, 12:37 PM
Showing page 1 of 20 total questions