
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
4749 views
Group by latest date with conditions
I need your help! Let's say I have this table:

| Instance | Date     | MetricID | Value |
| -------- | -------- | -------- | ----- |
| Marc     | 09/14/21 | 1        | 5     |
| Marc     | 09/14/21 | 2        | 2     |
| Marc     | 09/14/21 | 3        | 1     |
| John     | 09/14/21 | 1        | 10    |
| John     | 09/14/21 | 2        | 1     |
| John     | 09/14/21 | 3        | 1     |
| Marc     | 09/15/21 | 1        | 15    |
| Marc     | 09/15/21 | 2        | 0     |
| Marc     | 09/15/21 | 3        | 1     |
| John     | 09/15/21 | 1        | 10    |
| John     | 09/15/21 | 2        | 1     |
| John     | 09/15/21 | 3        | 0     |

And I want this:

| Instance | LatestDateMetric1 | LatestDateMetric2 | LatestDateMetric3 |
| -------- | ----------------- | ----------------- | ----------------- |
| Marc     | 09/15/21          | 09/14/21          | 09/15/21          |
| John     | 09/15/21          | 09/15/21          | 09/14/21          |

I tried this code. It looks a bit like what I want, except that it takes the value even if it is null, and the result is by row, not by column:

SELECT "Instance", "MetricID", MAX("Date") as "LatestDate"
FROM "API_Metric2"
GROUP BY "Instance", "MetricID"

This is the result I got:

| Instance | MetricID | LatestDate |
| -------- | -------- | ---------- |
| Marc     | 1        | 09/15/21   |
| Marc     | 2        | 09/15/21   |
| Marc     | 3        | 09/15/21   |
| John     | 1        | 09/15/21   |
| John     | 2        | 09/15/21   |
| John     | 3        | 09/15/21   |

And I also tried this:

SELECT "Instance",
CASE WHEN "MetricID"=1 AND "Value" NOT NULL THEN MAX("Date") ELSE 0 END) AS "LatestDateMetric1",
CASE WHEN "MetricID"=2 AND "Value" NOT NULL THEN MAX("Date") ELSE 0 END) AS "LatestDateMetric2",
CASE WHEN "MetricID"=3 AND "Value" NOT NULL THEN MAX("Date") ELSE 0 END) AS "LatestDateMetric3"
FROM "StackExemple"
GROUP BY "Instance", "Date", "MetricID"

But I get this error message:

> Parse error at line 2, column 66. Encountered: "Value"

***Edit***: I also got this code, which seems to be working, but it's not taking the null values into account. It only displays 09/15/21 as the LatestDate for all metrics.

SELECT "InstanceName",
       MAX(CASE WHEN "MetricID" = 4 THEN "Date" END) as "LatestProjectCreated",
       MAX(CASE WHEN "MetricID" = 5 THEN "Date" END) as "LatestActionCreated",
       MAX(CASE WHEN "MetricID" = 8 THEN "Date" END) as "LatestUserCreated"
FROM "API_InstanceMetric"
GROUP BY "InstanceName";

***Edit2***: The issue persists even after adding "Value" IS NOT NULL, as below:

SELECT "Instance",
       MAX(CASE WHEN "MetricID" = 1 AND "Value" IS NOT NULL THEN "Date" END) as "LatestProjectCreated",
       MAX(CASE WHEN "MetricID" = 2 AND "Value" IS NOT NULL THEN "Date" END) as "LatestActionCreated",
       MAX(CASE WHEN "MetricID" = 3 AND "Value" IS NOT NULL THEN "Date" END) as "LatestUserCreated"
FROM "StackExemple"
GROUP BY "Instance";
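In the sample data, the rows that should be skipped (e.g. Marc's MetricID 2 on 09/15) have a Value of 0 rather than NULL, which would explain why adding "Value" IS NOT NULL changes nothing. A minimal sketch of that interpretation, reusing the question's conditional-aggregation approach (the > 0 condition is an assumption):

SELECT "Instance",
       MAX(CASE WHEN "MetricID" = 1 AND "Value" > 0 THEN "Date" END) AS "LatestDateMetric1",
       MAX(CASE WHEN "MetricID" = 2 AND "Value" > 0 THEN "Date" END) AS "LatestDateMetric2",
       MAX(CASE WHEN "MetricID" = 3 AND "Value" > 0 THEN "Date" END) AS "LatestDateMetric3"
FROM "StackExemple"
GROUP BY "Instance";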
no name sry (11 rep)
Sep 15, 2021, 10:05 AM • Last activity: Aug 6, 2025, 04:09 PM
0 votes
1 answer
1705 views
MySQL cursor always exits out of loop
The cursor query and the select value query return rows if I run them in mysql, but when they are in a cursor, it always exits out of the loop. Anything wrong here? I've added "BEFORE LOOP", "EXIT" and "IN LOOP" markers so it prints where it is, but it always starts with BEFORE LOOP and then ends with EXIT.

CREATE PROCEDURE getTotal()
BEGIN
    DECLARE HOSTID INTEGER;
    DECLARE cITEMID INT;
    DECLARE finished BOOL DEFAULT FALSE;
    DECLARE Total INT;
    DECLARE cur1 CURSOR FOR SELECT itemid FROM items WHERE hostid = 10579;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET finished = TRUE;

    OPEN cur1;
    loop_1: LOOP
        FETCH cur1 INTO cITEMID;
        SELECT "BEFORE LOOP";
        IF finished THEN
            SELECT "EXIT";
            LEAVE loop_1;
        END IF;
        SELECT "IN LOOP";
        -- Test query
        SELECT value FROM history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1;
        -- Final select query will look like this:
        -- SET @Total := @Total + (SELECT value FROM history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1);
        -- SELECT @Total;
    END LOOP;
    CLOSE cur1;
END //
DELIMITER ;

Queries:

SELECT itemid FROM items WHERE hostid = 10579;

| itemid |
| 12345  |
| 12346  |
| 12347  |

SELECT value FROM history_uint WHERE itemid = 12345 ORDER BY itemid DESC LIMIT 1;

| value |
| 1     |

SELECT * FROM history_uint;

| itemid | value | clock (unixtimestamp) |
| 12345  | 13    | 4364564654654         |
| 12346  | 1     | 4364564654657         |
| 12347  | 16    | 4364564654654         |
| 12345  | 13    | 4364564654756         |
| 12346  | 2     | 4364564654753         |
| 12347  | 15    | 4364564654756         |

Note: the clock column values are just made up.
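One plausible culprit (an educated guess, not something the question confirms): MySQL resolves names inside a stored program to local variables before columns, so DECLARE HOSTID INTEGER; makes the cursor's WHERE hostid = 10579 compare the uninitialized (NULL) variable instead of the items.hostid column. The cursor is then empty, the first FETCH fires the NOT FOUND handler, and the loop prints BEFORE LOOP followed by EXIT, exactly as described. A sketch with prefixed variable names that avoids the collision:

DELIMITER //
CREATE PROCEDURE getTotal()
BEGIN
    -- v_ prefixes keep local variables from shadowing table columns
    DECLARE v_itemid INT;
    DECLARE v_finished BOOL DEFAULT FALSE;
    DECLARE cur1 CURSOR FOR SELECT itemid FROM items WHERE hostid = 10579;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_finished = TRUE;

    OPEN cur1;
    loop_1: LOOP
        FETCH cur1 INTO v_itemid;
        IF v_finished THEN
            LEAVE loop_1;
        END IF;
        SELECT value FROM history_uint WHERE itemid = v_itemid ORDER BY itemid DESC LIMIT 1;
    END LOOP;
    CLOSE cur1;
END //
DELIMITER ;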
R0bert2 (121 rep)
Apr 1, 2020, 07:06 PM • Last activity: Aug 5, 2025, 06:00 PM
0 votes
1 answer
2725 views
Left Join with only one of multiple matches in the right table
I have a database where every entry may have multiple names, therefore I made a names table. While all names will be displayed elsewhere, I need a list of all main entries with one representative name each. All names are ranked, and the one with the highest ranking position shall be used. Unranked names are "0" or "-1" and should be ignored, and since the name-ranking system is bad, the one to use is not always "1". In the case of no name being there, the main entry should still be returned.

In short: I need a left join that takes all entries of table "main" and joins them with the name that has the smallest position greater than 0, if there is one.

main:
| main_ID | val_A | val_B |
+---------+-------+-------+
|       2 | some  | stuff |
|       3 | and   | more  |
|       4 | even  | more  |
names:
| name_ID | main_ID | name           | position |
+---------+---------+----------------+----------+
|       1 |       2 | best name      |        1 |
|       2 |       2 | some name      |        0 |
|       3 |       3 | alt name       |        3 |
|       4 |       2 | cool name      |        2 |
|       5 |       3 | abandoned name |       -1 |
|       6 |       3 | awesome name   |        2 |
what I want to get:
| main_ID | val_A | val_B | name         |
+---------+-------+-------+--------------+
|       2 | some  | stuff | best name    |
|       3 | and   | more  | awesome name |
|       4 | even  | more  |              |
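A common shape for this requirement (a sketch only; the question does not state the DBMS, so this sticks to portable SQL): join each main row to the one name whose position is the smallest value greater than 0, and let the LEFT JOIN keep main rows that have no ranked name.

SELECT m.main_ID, m.val_A, m.val_B, n.name
FROM main m
LEFT JOIN names n
       ON n.main_ID = m.main_ID
      AND n.position = (SELECT MIN(n2.position)
                          FROM names n2
                         WHERE n2.main_ID = m.main_ID
                           AND n2.position > 0);

If two names could share the same position, a tiebreaker on name_ID would be needed as well.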
Benito (1 rep)
Jul 10, 2023, 10:01 PM • Last activity: Jul 24, 2025, 10:02 PM
0 votes
1 answer
217 views
Slow COUNT DISTINCT on a large table in PostgreSQL
In PostgreSQL 14, I have a table with around 10 million rows with this structure:
CREATE TABLE search_stats (
	id bigserial NOT NULL,
	search_date timestamp NOT NULL,
	user_id varchar NULL,
	search_query varchar NOT NULL
)
I want to retrieve, for each search_query, the total count and the number of unique user_id values associated with it. This is the query that I have come up with:
SELECT search_query, count(*) AS total_number_of_searches, 
	count(DISTINCT user_id) AS total_number_of_users
FROM search_stats 
WHERE search_date >= '2023-01-01' AND search_date < '2023-07-01'
GROUP BY search_query;

The EXPLAIN (ANALYZE, BUFFERS) output is:

WindowAgg  (cost=100614.65..353591.31 rows=1552612 width=68) (actual time=14207.366..14836.972 rows=1656788 loops=1)
  Buffers: shared hit=15953 read=29111, temp read=21361 written=15188
  ->  GroupAggregate  (cost=100614.65..334183.66 rows=1552612 width=60) (actual time=7578.995..13339.283 rows=1656788 loops=1)
        Group Key: search_stats.search_query
        Buffers: shared hit=15953 read=29111
        ->  Gather Merge  (cost=100614.65..305804.74 rows=1713707 width=109) (actual time=7578.971..10803.777 rows=1714043 loops=1)
              Workers Planned: 4
              Workers Launched: 4
              Buffers: shared hit=15953 read=29111
              ->  Sort  (cost=99614.59..100685.65 rows=428426 width=109) (actual time=5440.355..5521.071 rows=342809 loops=5)
                    Sort Key: search_stats.search_query
                    Sort Method: quicksort  Memory: 113434kB
                    Buffers: shared hit=15953 read=29111
                    Worker 0:  Sort Method: quicksort  Memory: 93953kB
                    Worker 1:  Sort Method: quicksort  Memory: 95874kB
                    Worker 2:  Sort Method: quicksort  Memory: 43844kB
                    Worker 3:  Sort Method: quicksort  Memory: 38459kB
                    ->  Parallel Append  (cost=0.00..59538.15 rows=428426 width=109) (actual time=12.168..142.869 rows=342809 loops=5)
                          Buffers: shared hit=15785 read=29111
                          ->  Parallel Seq Scan on mdr_querystats_part_may search_stats_5  (cost=0.00..8218.54 rows=119279 width=110) (actual time=0.018..87.771 rows=286326 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_jun search_stats_6  (cost=0.00..8212.33 rows=119198 width=110) (actual time=0.018..86.707 rows=286133 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_jul search_stats_7  (cost=0.00..8206.25 rows=1 width=109) (actual time=59.912..59.912 rows=0 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_jan search_stats_1  (cost=0.00..8198.10 rows=118983 width=109) (actual time=0.405..52.659 rows=142808 loops=2)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_mar search_stats_3  (cost=0.00..8196.36 rows=119000 width=109) (actual time=0.499..21.592 rows=57132 loops=5)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_apr search_stats_4  (cost=0.00..8187.51 rows=118877 width=109) (actual time=0.006..45.467 rows=142681 loops=2)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_feb search_stats_2  (cost=0.00..8176.93 rows=118705 width=109) (actual time=0.061..83.882 rows=284948 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_aug search_stats_8  (cost=0.00..0.00 rows=1 width=64) (actual time=0.000..0.000 rows=0 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_sep search_stats_9  (cost=0.00..0.00 rows=1 width=64) (actual time=0.001..0.001 rows=0 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_oct search_stats_10  (cost=0.00..0.00 rows=1 width=64) (actual time=0.000..0.000 rows=0 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_nov search_stats_11  (cost=0.00..0.00 rows=1 width=64) (actual time=0.000..0.001 rows=0 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
                          ->  Parallel Seq Scan on mdr_querystats_part_dec search_stats_12  (cost=0.00..0.00 rows=1 width=64) (actual time=0.002..0.002 rows=0 loops=1)
                                Filter: ((search_date >= '2023-01-01 00:00:00'::timestamp without time zone) AND (search_date < '2023-07-01 00:00:00'::timestamp without time zone))
Planning:
  Buffers: shared hit=20
Planning Time: 0.602 ms
Execution Time: 16105.124 ms
How can I improve the performance of the query?
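One technique worth trying (a sketch against the schema above, not a guaranteed win): split the count(DISTINCT ...) into two aggregation levels. The inner GROUP BY can be hash-aggregated and parallelized, whereas count(DISTINCT) forces the sort seen in the plan.

SELECT search_query,
       sum(cnt)       AS total_number_of_searches,
       count(user_id) AS total_number_of_users  -- count(user_id) skips the NULL-user group, matching count(DISTINCT user_id)
FROM (
    SELECT search_query, user_id, count(*) AS cnt
    FROM search_stats
    WHERE search_date >= '2023-01-01' AND search_date < '2023-07-01'
    GROUP BY search_query, user_id
) AS per_user
GROUP BY search_query;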
drew458 (1 rep)
Jul 18, 2023, 02:51 PM • Last activity: Jun 23, 2025, 11:04 PM
0 votes
2 answers
211 views
Selecting last value to be entered each month
I am looking to pull a closing balance from the database for each month. I have tried:

SELECT CloseBal As 'Balance', MONTHNAME(DateTime) as 'Month', DateTime
FROM Table1
WHERE MAX(DateTime)
Group By Month

I am getting the error invalid use of group function. What would be the best way to achieve this?
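One common pattern (a sketch assuming MySQL 8.0+, with the table and column names from the question): number the rows within each month, newest first, and keep only row 1.

SELECT Balance, Month, `DateTime`
FROM (
    SELECT CloseBal AS Balance,
           MONTHNAME(`DateTime`) AS Month,
           `DateTime`,
           ROW_NUMBER() OVER (PARTITION BY YEAR(`DateTime`), MONTH(`DateTime`)
                              ORDER BY `DateTime` DESC) AS rn
    FROM Table1
) t
WHERE rn = 1;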
Paulmcf1987 (43 rep)
Feb 3, 2023, 04:05 PM • Last activity: Jun 17, 2025, 03:06 PM
0 votes
1 answer
724 views
MySQL Conditional Counter Based Other Columns?
I have a query that returns purchases for all customers in a store across a date range. It works fine, but now I've been asked to modify the results so only the first purchase per customer per day is returned. I need the SELECT statement to calculate a column that means "This is the customer's Nth purchase for the day." The data is sorted by customer name and date already, so when the customer name or date changes, I want the counter variable to reset to 1.

| Customer Name | Product | PurchaseDate | PurchaseNumberForDate |
| ------------- | ------- | ------------ | --------------------- |
| Customer A    | ...     | 2019-04-01   | 1                     |
| Customer A    | ...     | 2019-04-02   | 1                     |
| Customer A    | ...     | 2019-04-03   | 1                     |
| Customer A    | ...     | 2019-04-03   | 2                     |
| Customer A    | ...     | 2019-04-03   | 3                     |
| Customer B    | ...     | 2019-04-03   | 1                     |
| Customer B    | ...     | 2019-04-03   | 2                     |
| Customer B    | ...     | 2019-04-04   | 1                     |

I have tried using MySQL variables, but I cannot figure out how to reset the counter conditionally when the customer name or purchase date change. If I could get PurchaseNumberForDate correctly calculated, I would use this as a subquery within another query that selects WHERE PurchaseNumberForDate = 1. I have found plenty of examples using COUNT() and @var := @var+1, but I haven't found one based on multiple conditions. Is this possible with MySQL?
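For reference, a window-function sketch (MySQL 8.0+; the purchases table name and the purchase_id tiebreaker column are assumptions, since the question doesn't show them):

SELECT CustomerName, Product, PurchaseDate, PurchaseNumberForDate
FROM (
    SELECT p.*,
           ROW_NUMBER() OVER (PARTITION BY CustomerName, PurchaseDate
                              ORDER BY purchase_id) AS PurchaseNumberForDate
    FROM purchases p
) numbered
WHERE PurchaseNumberForDate = 1;

On pre-8.0 MySQL, the same effect needs user variables reset whenever the (customer, date) pair changes, as the question suspects.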
jimp (163 rep)
Apr 3, 2019, 08:42 PM • Last activity: Jun 3, 2025, 04:06 AM
0 votes
1 answer
252 views
Find the youngest customer grouped by province
SELECT a.province, c.birth_date, c.name 
FROM customer c
JOIN address a ON (c.cust_id = a.cust_id) 
GROUP BY a.province 
ORDER BY birth_date DESC;
I want to find the youngest customer in each province. The query above doesn't work.
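One portable shape that usually works for this greatest-per-group problem (a sketch; it assumes "youngest" means the latest birth_date, and ties on birth_date would each return a row):

SELECT a.province, c.birth_date, c.name
FROM customer c
JOIN address a ON a.cust_id = c.cust_id
WHERE c.birth_date = (SELECT MAX(c2.birth_date)
                        FROM customer c2
                        JOIN address a2 ON a2.cust_id = c2.cust_id
                       WHERE a2.province = a.province);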
Omar Hassan (11 rep)
Mar 2, 2022, 07:16 AM • Last activity: May 23, 2025, 05:04 PM
0 votes
1 answer
307 views
Selecting a value from a row where another column is max
I have the following SQL query:
SELECT bug.id,
       Max(report.date),
       Count(report.id),
       Max(version.code),
       bug.id
FROM   bug
       LEFT OUTER JOIN stacktrace
                    ON ( stacktrace.bug_id = bug.id )
       CROSS JOIN version
                  LEFT OUTER JOIN report
                               ON ( report.stacktrace_id = stacktrace.id )
WHERE  stacktrace.version_id = version.id
GROUP  BY bug.id
ORDER  BY Max(report.date) DESC;
I now want to select version.name instead of version.code from the row where version.code is maximal. Is this possible? If so, how do I do this with a minimal amount of queries/overhead?

---

Relevant tables (stripped):

CREATE TABLE bug (
  id int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);
INSERT INTO bug VALUES();
INSERT INTO bug VALUES();

CREATE TABLE version (
  id int(11) NOT NULL AUTO_INCREMENT,
  code int(11) NOT NULL,
  name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
  PRIMARY KEY (id)
);

CREATE TABLE stacktrace (
  id int(11) NOT NULL AUTO_INCREMENT,
  bug_id int(11) NOT NULL,
  version_id int(11) NOT NULL,
  PRIMARY KEY (id),
  KEY FK_s_v (version_id),
  KEY FK_s_b (bug_id),
  CONSTRAINT FK_s_b FOREIGN KEY (bug_id) REFERENCES bug (id) ON DELETE CASCADE,
  CONSTRAINT FK_s_v FOREIGN KEY (version_id) REFERENCES version (id) ON DELETE CASCADE
);

CREATE TABLE report (
  id varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
  date datetime NOT NULL,
  stacktrace_id int(11) NOT NULL,
  PRIMARY KEY (id),
  KEY FK_r_s (stacktrace_id),
  CONSTRAINT FK_r_s FOREIGN KEY (stacktrace_id) REFERENCES stacktrace (id) ON DELETE CASCADE
);

Also available as an SQLFiddle.
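One way to sketch it with a single extra probe per bug (a correlated subquery; "maximal" is taken to mean the highest version.code among that bug's stacktraces, and the original CROSS JOIN plus WHERE is folded into a LEFT JOIN here):

SELECT bug.id,
       MAX(report.date),
       COUNT(report.id),
       (SELECT v2.name
          FROM stacktrace s2
          JOIN version v2 ON v2.id = s2.version_id
         WHERE s2.bug_id = bug.id
         ORDER BY v2.code DESC
         LIMIT 1) AS max_version_name
FROM bug
LEFT OUTER JOIN stacktrace ON stacktrace.bug_id = bug.id
LEFT OUTER JOIN version    ON version.id = stacktrace.version_id
LEFT OUTER JOIN report     ON report.stacktrace_id = stacktrace.id
GROUP BY bug.id
ORDER BY MAX(report.date) DESC;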
F43nd1r (101 rep)
Oct 20, 2020, 02:27 PM • Last activity: May 14, 2025, 05:10 PM
0 votes
1 answer
1165 views
Why a query takes too long in statistics thread state in AWS Aurora MySQL?
The following query spends too long in the statistics state and I couldn't figure out why.

DB engine - 5.7.mysql_aurora.2.07.2
DB size - db.r5.4xlarge

Sample query profile output:
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000023 |
| checking query cache for query | 0.000155 |
| checking permissions           | 0.000009 |
| checking permissions           | 0.000002 |
| checking permissions           | 0.000003 |
| checking permissions           | 0.000002 |
| checking permissions           | 0.000009 |
| Opening tables                 | 0.000035 |
| init                           | 0.000102 |
| System lock                    | 0.000035 |
| optimizing                     | 0.000004 |
| optimizing                     | 0.000003 |
| optimizing                     | 0.000011 |
| statistics                     | 0.224528 |
| preparing                      | 0.000030 |
| Sorting result                 | 0.000017 |
| statistics                     | 0.000041 |
| preparing                      | 0.000013 |
| Creating tmp table             | 0.000023 |
| optimizing                     | 0.000013 |
| statistics                     | 0.064207 |
| preparing                      | 0.000035 |
| Sorting result                 | 0.000025 |
| statistics                     | 0.000098 |
| preparing                      | 0.000018 |
| executing                      | 0.000011 |
| Sending data                   | 0.000007 |
| executing                      | 0.000003 |
| Sending data                   | 0.000251 |
| executing                      | 0.000007 |
| Sending data                   | 0.000003 |
| executing                      | 0.000002 |
| Sending data                   | 0.000526 |
| end                            | 0.000007 |
| query end                      | 0.000013 |
| removing tmp table             | 0.000007 |
| query end                      | 0.000004 |
| closing tables                 | 0.000003 |
| removing tmp table             | 0.000004 |
| closing tables                 | 0.000002 |
| removing tmp table             | 0.000005 |
| closing tables                 | 0.000002 |
| removing tmp table             | 0.000004 |
| closing tables                 | 0.000010 |
| freeing items                  | 0.000050 |
| storing result in query cache  | 0.000007 |
| cleaned up                     | 0.000004 |
| cleaning up                    | 0.000017 |
+--------------------------------+----------+
Query
select xo.ITEM, xo.VALUE
from (
         select pi.ITEM, pi.ITEM_GROUP, pi.VALUE
         from TABLE_2 pi
                  inner join (select max(ps.EXPORTED_DATE) as max_expo, ps.ITEM
                              from TABLE_2 ps
                                       inner join (
                                  select max(pp.EFFECTIVE_DATE) max_eff_TABLE_2, pp.ITEM
                                  from TABLE_2 pp
                                  where pp.EFFECTIVE_DATE < ...

Table 1

mysql> SHOW INDEX FROM T1;
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name     | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| T1    |          0 | PRIMARY      |            1 | CUSTOMER_ID    | A         |     3297549 |     NULL | NULL   |      | BTREE      |         |               |
| T1    |          0 | PRIMARY      |            2 | ITEM           | A         |   687374784 |     NULL | NULL   |      | BTREE      |         |               |
| T1    |          0 | PRIMARY      |            3 | EFFECTIVE_DATE | A         |  1314196480 |     NULL | NULL   |      | BTREE      |         |               |
| T1    |          1 | t1_ix_item   |            1 | ITEM           | A         |     2151649 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
Table 2
mysql> SHOW INDEX FROM TABLE_2;
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name              | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| T2    |          0 | PRIMARY               |            1 | ITEM           | A         |           1 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          0 | PRIMARY               |            2 | ITEM_GROUP     | A         |       14265 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          0 | PRIMARY               |            3 | EFFECTIVE_DATE | A         |    63663076 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          0 | PRIMARY               |            4 | EXPORTED_DATE  | A         |    62464764 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_expo       |            1 | ITEM           | A         |      115823 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_expo       |            2 | EXPORTED_DATE  | A         |    13766454 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_date   |            1 | ITEM           | A         |      115823 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_date   |            2 | EFFECTIVE_DATE | A         |    13766454 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_ig     |            1 | ITEM           | A         |      115823 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_ig     |            2 | EFFECTIVE_DATE | A         |    13766454 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_ig     |            3 | ITEM_GROUP     | A         |    68216912 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_idx_effective_date |            1 | EFFECTIVE_DATE | A         |       79406 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
According to this: https://dba.stackexchange.com/questions/55969/statistics-state-in-mysql-processlist I checked the innodb_buffer_pool_size.
mysql> SHOW VARIABLES LIKE "innodb_buffer_pool_size";
+-------------------------+-------------+
| Variable_name           | Value       |
+-------------------------+-------------+
| innodb_buffer_pool_size | 96223625216 |
+-------------------------+-------------+
In the EXPLAIN output the row counts are minimal (they depend on the item count in the query: with an item count of 10, the number of rows was 20). Even though the row counts are minimal, why does the query take so long in the statistics state?
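One thing worth checking (an assumption, not a confirmed diagnosis): time in the statistics state is typically the optimizer doing index dives to estimate range selectivity, which can get expensive with several multi-column indexes on large tables. Two quick probes:

SHOW VARIABLES LIKE 'eq_range_index_dive_limit';  -- IN-list size before MySQL falls back to index statistics

SELECT table_name, n_rows, clustered_index_size   -- persisted InnoDB stats; ANALYZE TABLE refreshes them
FROM mysql.innodb_table_stats
WHERE table_name IN ('T1', 'TABLE_2');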
Vithulan (101 rep)
Jun 29, 2020, 06:21 AM • Last activity: Apr 29, 2025, 04:04 PM
0 votes
1 answer
313 views
mysql Quiz leaderboard filter by points, time taken
I have a quiz report table which holds a report for every quiz a user takes. I need to create a leaderboard from this, which shows each top user's best score, ordering by points and then time taken. Here is a link to a SQL fiddle: http://sqlfiddle.com/#!2/65fbf0/1

I am really struggling, as I need to filter the results by two columns for one user. My ideal result would be:

Results for quiz id 1

| user_id | points | time_spend | start_dt    | quiz_id |
| 1       | 3      | 0.5        | May,15 2015 | 1       |
| 2       | 3      | 0.8        | May,15 2015 | 1       |
| 3       | 2      | 0.5        | May,15 2015 | 1       |

Then a separate query for all quizzes, showing the results from the last week:

Results from all quizzes

| user_id | points | time_spend | start_dt    | quiz_id |
| 1       | 3      | 0.5        | May,15 2015 | 1       |
| 2       | 3      | 0.8        | May,13 2015 | 3       |
| 3       | 2      | 0.5        | May,12 2015 | 2       |
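A sketch of the per-quiz leaderboard (MySQL 8.0+ window functions; the quiz_report table name is assumed, and the original fiddle targets an older MySQL, where a self-join on (points, time_spend) would be needed instead):

SELECT user_id, points, time_spend, start_dt, quiz_id
FROM (
    SELECT r.*,
           ROW_NUMBER() OVER (PARTITION BY user_id
                              ORDER BY points DESC, time_spend ASC) AS rn
    FROM quiz_report r
    WHERE quiz_id = 1
) best
WHERE rn = 1
ORDER BY points DESC, time_spend ASC;

The weekly variant would drop the quiz_id filter and instead add WHERE start_dt >= CURRENT_DATE - INTERVAL 7 DAY inside the subquery.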
Gismmo (101 rep)
May 26, 2015, 11:09 PM • Last activity: Apr 26, 2025, 03:04 AM
3 votes
2 answers
5917 views
Efficient query to get last row group by multiple columns
I have a table like the following:

~~~pgsql
CREATE TABLE spreads (
    spread_id serial NOT NULL,
    game_id integer NOT NULL,
    sportsbook_id integer NOT NULL,
    spread_type integer NOT NULL,
    spread_duration integer NOT NULL,
    home_line double precision,
    home_odds integer,
    away_line double precision,
    away_odds integer,
    update_time timestamp without time zone NOT NULL,
    game_update_count integer NOT NULL
);
~~~

I'm trying to get the last row inserted (max game_update_count) for each group of (sportsbook_id, spread_type, spread_duration, game_id). The following query gets me close, but I am not able to select the lines/odds without Postgres complaining.

~~~pgsql
SELECT spreads.game_id, sportsbook_id, spread_type, spread_duration,
       MAX(game_update_count) AS game_update_count
FROM spreads
LEFT JOIN schedule ON schedule.game_id = spreads.game_id
WHERE date >= '2012-01-01' AND date <= '2012-01-02'
GROUP BY spreads.game_id, sportsbook_id, spread_type, spread_duration
ORDER BY spread_duration, spread_type, sportsbook_id, spreads.game_id, game_update_count DESC;
~~~

Anyone have any thoughts on a better approach?
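This is the textbook case for Postgres' DISTINCT ON (a sketch; it keeps the question's schedule join and date filter as-is):

~~~pgsql
SELECT DISTINCT ON (sportsbook_id, spread_type, spread_duration, spreads.game_id)
       spreads.*
FROM spreads
LEFT JOIN schedule ON schedule.game_id = spreads.game_id
WHERE date >= '2012-01-01' AND date <= '2012-01-02'
ORDER BY sportsbook_id, spread_type, spread_duration, spreads.game_id,
         game_update_count DESC;
~~~

Within each (sportsbook_id, spread_type, spread_duration, game_id) group, the ORDER BY puts the row with the highest game_update_count first, and DISTINCT ON keeps only that row.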
Jeremy (33 rep)
Jan 21, 2015, 07:27 AM • Last activity: Feb 24, 2025, 10:38 PM
4 votes
2 answers
407 views
Make custom aggregate function easier to use (accept more input types without creating variants)
Recently I wrote a custom aggregate function in Postgres that returns a specific column from the row that matches the max/min aggregate on a different column. While the code itself works great, it is somewhat bothersome to create a custom data type for every possible input combination that I might need.

Here is the code I use:

CREATE TYPE agg_tuple_text AS (
    exists boolean,
    value numeric,
    text text
);

--------------------------------------------------------------------------------

CREATE FUNCTION valued_min(old_tuple agg_tuple_text, new_value numeric, new_text text) RETURNS agg_tuple_text
    LANGUAGE plpgsql
AS $$
BEGIN
    IF (old_tuple).exists = false THEN
        RETURN (true, new_value, new_text);
    ELSIF (old_tuple).value > new_value THEN
        RETURN (true, new_value, new_text);
    ELSE
        RETURN old_tuple;
    END IF;
END;
$$;

--------------------------------------------------------------------------------

CREATE FUNCTION unpack_agg_tuple_text(value agg_tuple_text) RETURNS text
    LANGUAGE plpgsql
AS $$
BEGIN
    IF (value).exists = false THEN
        RETURN NULL;
    ELSE
        RETURN (value).text;
    END IF;
END
$$;

--------------------------------------------------------------------------------

CREATE AGGREGATE valued_min(numeric, text) (
    INITCOND = '(false, 0, null)',
    STYPE = agg_tuple_text,
    SFUNC = valued_min,
    FINALFUNC = unpack_agg_tuple_text
);

--------------------------------------------------------------------------------

-- Example
SELECT min(value) as min_value,
       valued_min(value, name) as min_name,
       max..., avg...
FROM kv;

-- Output:
-- min_value | min_name           | ...
-- ----------+--------------------+----
--     11.11 | this is the lowest | ...

EDIT: My goal is drawing a min/max/avg chart for a TSDB and displaying the name of the min and the max entries.

Is there a way to achieve this without creating all of these for every possible combination? (Maybe some kind of generic parameter, like those present in Java or alike.)

* Value column types
  * Various date/time types
  * Numeric types
  * Maybe text
  * (any comparable type)
* Data column types
  * any type

It would be sufficient if I could make only the data value generic, since it isn't used in any calculation inside that code. Unfortunately the anyelement type isn't allowed in custom data types. I already considered using the json type as input, but that feels somewhat wrong, because it loses the type information (especially for date/time types).

------------------------

I use Postgres 10 without extensions, but if this is possible using Postgres 1x or using a special extension, I'm willing to try.

------------------------

I also considered joining the values, but then I get issues with performance and potential duplicates/rows that have the same value.
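For what it's worth, a built-in construct can sidestep the type explosion entirely: array_agg with an ORDER BY inside the aggregate works for any orderable value type and any payload type (a sketch; it assumes value is NOT NULL, since NULLs would sort first in the DESC case):

SELECT min(value)                               AS min_value,
       (array_agg(name ORDER BY value ASC))[1]  AS min_name,
       max(value)                               AS max_value,
       (array_agg(name ORDER BY value DESC))[1] AS max_name,
       avg(value)                               AS avg_value
FROM kv;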
ST-DDT (280 rep)
Aug 4, 2018, 05:21 PM • Last activity: Feb 20, 2025, 01:07 AM
0 votes
1 answer
970 views
How can I speed up my query that involves `max(timestamp)` in a huge table? I already added an index to every field
I have a huge table that has fields ip, mac, and timestamp. None of the three fields is unique, but the combination of all three is. The table is automatically populated, with newer records added all the time. The field timestamp refers to when a row was added. Records are never *UPDATE*d. Here's the table description:

  Column   |            Type             | Nullable | Default
-----------+-----------------------------+----------+--------
 event     | text                        | not null |
 ip        | inet                        | not null |
 mac       | macaddr8                    | not null |
 timestamp | timestamp without time zone | not null | now()
Indexes:
    "ip_idx" btree (ip)
    "mac_idx" btree (mac)
    "time_idx" btree ("timestamp")
    "timestamp_ip_event_key" UNIQUE CONSTRAINT, btree ("timestamp", ip, event)

I have this very slow query, causing the website to take a very long time to load. How can I speed it up? Is it possible to take advantage of the fact that the table is basically ordered by timestamp? I do not have access to the script that adds records.

Executed SQL:

select ip, max(timestamp) from my_table WHERE ip << '10.38.69.0/24' GROUP BY ip;

Gather Merge  (cost=291919.94..292169.16 rows=2136 width=15) (actual time=696.220..704.558 rows=429 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  ->  Sort  (cost=290919.92..290922.59 rows=1068 width=15) (actual time=679.313..679.325 rows=143 loops=3)
        Sort Key: ip
        Sort Method: quicksort  Memory: 31kB
        Worker 0:  Sort Method: quicksort  Memory: 31kB
        Worker 1:  Sort Method: quicksort  Memory: 31kB
        ->  Partial HashAggregate  (cost=290855.52..290866.20 rows=1068 width=15) (actual time=679.192..679.233 rows=143 loops=3)
              Group Key: ip
              Batches: 1  Memory Usage: 81kB
              Worker 0:  Batches: 1  Memory Usage: 81kB
              Worker 1:  Batches: 1  Memory Usage: 81kB
              ->  Parallel Bitmap Heap Scan on my_table  (cost=12023.68..289019.89 rows=367126 width=15) (actual time=67.898..580.432 rows=312819 loops=3)
                    Filter: (ip << '10.38.69.0/24'::inet)
                    ->  Bitmap Index Scan on my_table_ip_idx  (cost=0.00..11803.41 rows=881097 width=0) (actual time=62.721..62.721 rows=938457 loops=1)
                          Index Cond: ((ip > '10.38.69.0/24'::inet) AND (ip <= '10.38.69.255'::inet))
Planning Time: 1.049 ms
JIT:
  Functions: 30
  Options: Inlining false, Optimization false, Expressions true, Deforming true
  Timing: Generation 2.180 ms, Inlining 0.000 ms, Optimization 1.470 ms, Emission 29.303 ms, Total 32.952 ms
Execution Time: 726.126 ms
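The usual fix for this shape (a sketch; whether the planner uses it for an inet range has to be verified against the real data) is a composite index leading with ip, plus DISTINCT ON, so each group's newest row comes straight off the index:

CREATE INDEX ip_time_idx ON my_table (ip, "timestamp" DESC);

SELECT DISTINCT ON (ip) ip, "timestamp"
FROM my_table
WHERE ip << '10.38.69.0/24'
ORDER BY ip, "timestamp" DESC;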
Granny Aching (393 rep)
Aug 11, 2023, 01:56 AM • Last activity: Feb 13, 2025, 03:04 AM
2 votes
1 answer
83 views
WHERE A=x DISTINCT ON (B), with a composite index on (A, B, C)
I have huge table with a composite index on (A, B, C).
-- psql (13.16 (Debian 13.16-0+deb11u1), server 14.12)

\d index_a_b_c
         Index "public.index_a_b_c"
  Column  |         Type          | Key? | 
----------+-----------------------+------+
 A        | character varying(44) | yes  |
 B        | numeric(20,0)         | yes  |
 C        | numeric(20,0)         | yes  |
btree, for table "public.table_a_b_c"
#### 1) I need all distinct Bs

This query runs with an Index Only Scan, but it scans over all the A matches, which does not scale in my case, since for some As there are millions of rows. Millions of index-only-scanned rows are slow.
EXPLAIN (ANALYZE true) 
SELECT DISTINCT ON ("B") "B"
  FROM "table_a_b_c"
 WHERE "A" = 'astring'

-- Execution time: 0.172993s
-- Unique  (cost=0.83..105067.18 rows=1123 width=5) (actual time=0.037..19.468 rows=67 loops=1)
--  ->  Index Only Scan using index_a_b_c on table_a_b_c  (cost=0.83..104684.36 rows=153129 width=5) (actual time=0.036..19.209 rows=1702 loops=1)
--        Index Cond: (A = 'astring'::text)
--        Heap Fetches: 351
-- Planning Time: 0.091 ms
-- Execution Time: 19.499 ms
As you can see, it runs over 1.7k rows, filters them manually, and returns 67 rows. The 20 ms turns into tens of seconds when 1.7k grows to millions.

#### 2) I also need the biggest C for each distinct B

Same thing as in *1)*. In theory, Postgres could know the possible Bs and not need to check the whole list matched to A.
EXPLAIN (ANALYZE true)
SELECT DISTINCT ON ("B") *
  FROM "table_a_b_c"
 WHERE "A" = 'astring'
 ORDER BY "B" DESC,
          "C" DESC

-- Execution time: 0.822705s 
-- Unique  (cost=0.83..621264.51 rows=1123 width=247) (actual time=0.957..665.927 rows=67 loops=1)
--   ->  Index Scan using index_a_b_c on table_a_b_c  (cost=0.83..620881.69 rows=153130 width=247) (actual time=0.955..664.408 rows=1702 loops=1)
--         Index Cond: (a = 'astring'::text)
-- Planning Time: 0.116 ms
-- Execution Time: 665.978 ms
But for instance, this is fast:
(SELECT * FROM "table_a_b_c" WHERE "A" = 'x' AND "B" = 1 ORDER BY "C" DESC LIMIT 1)
  UNION ALL
(SELECT * FROM "table_a_b_c" WHERE "A" = 'x' AND "B" = 2 ORDER BY "C" DESC LIMIT 1)
  UNION ALL
....
for all possible Bs. It is like a loop that runs once per B.

### Questions

a) Shouldn't the index on (A, B, C) be a superset of (A, B) in theory? (A, B) would be super fast for DISTINCT.

b) Why is it hard for Postgres to find the distinct Bs?

c) How can this be handled without a new index?
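For context, the standard workaround for both cases is to emulate an index skip scan with a recursive CTE, doing one index probe per distinct B instead of scanning every matching row (a sketch against the question's names):

WITH RECURSIVE distinct_b AS (
    (SELECT "B" FROM "table_a_b_c" WHERE "A" = 'astring' ORDER BY "B" LIMIT 1)
    UNION ALL
    SELECT (SELECT t."B"
              FROM "table_a_b_c" t
             WHERE t."A" = 'astring' AND t."B" > d."B"
             ORDER BY t."B" LIMIT 1)
    FROM distinct_b d
    WHERE d."B" IS NOT NULL
)
SELECT "B" FROM distinct_b WHERE "B" IS NOT NULL;

Each recursion step fetches the next-larger B with a single descent into the (A, B, C) index, so the cost scales with the number of distinct Bs, not the number of rows.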
kadircancetin (23 rep)
Oct 10, 2024, 09:34 AM • Last activity: Oct 12, 2024, 12:22 AM
0 votes
2 answers
1027 views
MySQL Simple pivot table with MAX(), GROUP BY and only_full_group_by
I'm having trouble using GROUP BY with only_full_group_by. Here is my employee_job table:

+----+-------------+--------+------------+
| id | employee_id | job_id | created_at |
+----+-------------+--------+------------+
|  1 |           2 |     10 | 2019-01-01 |
|  2 |           3 |     20 | 2019-01-01 |
|  3 |           3 |     21 | 2019-02-01 |
|  4 |           3 |     22 | 2019-03-01 |
|  5 |           4 |     30 | 2019-01-01 |
|  6 |           4 |     35 | 2019-02-01 |
+----+-------------+--------+------------+

I would like to select only the latest line per employee_id, which I think gives me:

SELECT *, MAX(created_at) FROM employee_job GROUP BY employee_id;

The thing is, with only_full_group_by, which I can't disable, I get an error:

#1055 - Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'XXX.employee_job.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by

Well... I tried to read about that, and it seems I don't understand the error. Of course, if I add other fields to the GROUP BY, the result still contains the same id multiple times. Can someone explain to me how to group my results, and if GROUP BY isn't the best way to do it, what is, please?
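A sketch that satisfies only_full_group_by by avoiding GROUP BY altogether: anti-join each row against any newer row for the same employee (note that ties on created_at would return more than one row per employee).

SELECT ej.*
FROM employee_job ej
LEFT JOIN employee_job newer
       ON newer.employee_id = ej.employee_id
      AND newer.created_at > ej.created_at
WHERE newer.id IS NULL;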
Max13 (103 rep)
Jul 21, 2019, 01:28 AM • Last activity: Aug 29, 2024, 11:10 AM
1 vote
1 answer
370 views
How to get last N rows of each group from group by in MySQL
I have a table with isin (text), date (Date), and amount (int). I am trying to get the amount of the last 2 dates per isin. Can someone tell me the trick, please?
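A sketch (MySQL 8.0+ window functions; mytable is a placeholder name): rank the rows per isin by date, newest first, and keep ranks 1 and 2.

SELECT isin, date, amount
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY isin ORDER BY date DESC) AS rn
    FROM mytable t
) ranked
WHERE rn <= 2;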
Nicolas Rey (113 rep)
Apr 10, 2024, 08:36 PM • Last activity: Apr 27, 2024, 04:29 AM
4 votes
2 answers
17903 views
Select only the last record for multiple values
I have a table called "sessions" with say 4 columns like below
id  name s_time f_time
01  abc  10.15  10.45
02  abc  11.05  11.55
03  abc  12.18  13.46
04  abc  15.12  16.53
05  def  10.01  12.58
06  def  14.06  16.51
07  def  17.43  18.54
08  xyz  09.45  12.36
09  xyz  14.51  15.57
10  xyz  16.23  18.01
How can I get the last f_time for each name? What I need is:
name f_time
abc  16.53
def  18.54
xyz  18.01
What I am trying is this: select name, f_time from sessions where name in ('abc','def','xyz') order by id DESC LIMIT 1; but I am only getting the finish time for one name. MariaDB 10.1.37.
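Since f_time grows within each name in the sample data, a plain grouped MAX is enough here, and it runs fine on MariaDB 10.1 (no window functions needed):

SELECT name, MAX(f_time) AS f_time
FROM sessions
WHERE name IN ('abc', 'def', 'xyz')
GROUP BY name;

If "last" should instead mean the row with the highest id per name, join back on MAX(id) per name rather than taking MAX(f_time).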
R2xDV7 (43 rep)
Dec 19, 2018, 08:30 AM • Last activity: Apr 7, 2024, 08:39 AM
1 vote
3 answers
74 views
Last cell value in a grouped by query

I have a table related to game results with more than 300,000 records, where each row stores one player attack. This is a simplified table example. I keep an attack id, the date when the attack was done, a unique player id, the player name (changeable by the player), and the result (a bool value: victory or defeat).

| attack_id | date                | player_id | player_name | result |
| --------- | ------------------- | --------- | ----------- | ------ |
| 1         | 2024-03-09 00:00:00 | 1         | cat         | 1      |
| 2         | 2024-03-10 00:00:00 | 1         | panda       | 1      |
| 3         | 2024-03-11 00:00:00 | 2         | dog         | 0      |
| 4         | 2024-03-12 00:00:00 | 3         | wolf        | 1      |

I want to show the 10 best attacking players, and my query looks like this:

SELECT player_id as id,
       player_name as name,
       count(attack_id) as attacks,
       sum(result) as victory,
       ( count(attack_id) - sum(result) ) as defeat,
       ( sum(result) - ( count(attack_id) - sum(result) ) ) as difference
FROM my_table_name
GROUP BY player_id
ORDER BY difference DESC
LIMIT 10

The query calculates players' victories, defeats, and the difference between victories and defeats. The issue here is related to the player name. A player can change their name after some attacks, but this query returns the first player_name (related to the player_id), not the last (current) one.

Result of this query (according to my example, it should be panda instead of cat, because that was the name on the player's last attack):

| id | name | attacks | victory | defeat | difference |
| -- | ---- | ------- | ------- | ------ | ---------- |
| 1  | cat  | 2       | 2       | 0      | 2          |
| 3  | wolf | 1       | 1       | 0      | 1          |
| 2  | dog  | 1       | 0       | 1      | -1         |

How can I solve this issue?
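One sketch: keep the aggregation as-is and fetch each player's most recent name with a correlated subquery (legal under only_full_group_by because player_id is grouped; using attack_id as a same-day tiebreaker is an assumption):

SELECT t.player_id AS id,
       (SELECT t2.player_name
          FROM my_table_name t2
         WHERE t2.player_id = t.player_id
         ORDER BY t2.date DESC, t2.attack_id DESC
         LIMIT 1) AS name,
       COUNT(t.attack_id) AS attacks,
       SUM(t.result) AS victory,
       COUNT(t.attack_id) - SUM(t.result) AS defeat,
       SUM(t.result) - (COUNT(t.attack_id) - SUM(t.result)) AS difference
FROM my_table_name t
GROUP BY t.player_id
ORDER BY difference DESC
LIMIT 10;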
anton (117 rep)
Mar 14, 2024, 03:11 PM • Last activity: Mar 15, 2024, 02:51 PM
0 votes
1 answer
242 views
Why doesn't Postgres apply limit on groups when retrieving N results per group?
I've come across many questions/answers for Greatest/Top N per group type problems that explain _how_ to solve the problem - generally some variation of row_number() or CROSS JOIN LATERAL, but I'm struggling to understand the theory behind the _why_ for this example. The particular example I'm working with is this:
SELECT "orders".* 
FROM "orders" 
WHERE user_id IN (?, ?, ?, ?, ?)
ORDER BY "orders"."created_at" LIMIT 50
Essentially, I want to find the 50 most recent orders among a group of users. Each user may have thousands of orders. I have two indexes - (user_id) and (user_id, created_at). Only the first index is ever used with this query. I can understand that the query planner would not know ahead of time which users would have those 50 newest orders. I imagined that it would be clever enough to determine that only 50 results are needed, and that it could use the (user_id, created_at) index to get 50 orders for each user, then sort and filter those few hundred results in memory. Instead, what I'm seeing is that it gets all orders for each user using the user_id index and then sorts/filters them all in memory. Here is an example query plan:
Limit  (cost=45271.94..45272.06 rows=50 width=57) (actual time=13.221..13.234 rows=50 loops=1)
  Buffers: shared hit=12321
  ->  Sort  (cost=45271.94..45302.75 rows=12326 width=57) (actual time=13.220..13.226 rows=50 loops=1)
          Sort Key: orders.created_at
          Sort Method: top-N heapsort Memory: 36kB
        Buffers: shared hit=12321
        ->  Bitmap Heap Scan on orders orders  (cost=180.85..44862.48 rows=12326 width=57) (actual time=3.268..11.485 rows=12300 loops=1)
                Recheck Cond: (orders.user_id = ANY ('{11,1000,3000}'::bigint[]))
                Heap Blocks: exact=12300
              Buffers: shared hit=12321
              ->  Bitmap Index Scan on index_orders_on_user_id  (cost=0.00..177.77 rows=12326 width=0) (actual time=1.257..1.258 rows=12300 loops=1)
                      Index Cond: (orders.user_id = ANY ('{11,1000,3000}'::bigint[]))
                    Buffers: shared hit=21
Planning:
  Buffers: shared hit=6
Execution time: 13.263 ms
The table I'm querying has roughly 50,000,000 orders, with an even distribution of ~4000 orders per user. I have found that I can speed this up significantly using CROSS JOIN LATERAL, and it will use the composite index, but I'm struggling to understand WHY the CROSS JOIN LATERAL is needed here for it to use the index. So my question is: why doesn't Postgres use the composite index and retrieve only the minimum necessary number of rows (50 per user) for the query I posted above?

EDIT: More context. This is the lateral join query that _does_ use the index:
SELECT o.*
FROM company_users cu
CROSS JOIN LATERAL (
   SELECT *
   FROM orders
   WHERE orders.user_id = cu.user_id
   ORDER  BY created_at DESC LIMIT 50
   ) o
WHERE  cu.company_id = ? 
ORDER BY created_at DESC LIMIT 50
Doing a nested select like this doesn't use the index - even though it does a nested loop just like the lateral join does:
SELECT "orders".* 
FROM "orders" 
WHERE user_id IN (SELECT user_id FROM company_users WHERE company_id = ?)
ORDER BY "orders"."created_at" LIMIT 50
Seanvm (101 rep)
Feb 4, 2024, 06:52 AM • Last activity: Feb 7, 2024, 07:06 AM
0 votes
1 answer
1805 views
Is MySQL syntax GROUP BY ... ASC/DESC removed or not?
I would like to ask whether the MySQL syntax GROUP BY ... ASC/DESC was removed or not. According to this worklog task it should have been removed, but that does not seem to be true. There are no details on which version first applied the change. Per this db-fiddle demo it still works well on any MySQL version (but on 5.7 and 8.0, sql_mode ONLY_FULL_GROUP_BY must be disabled or ANY_VALUE() must be used to avoid the error ER_WRONG_FIELD_WITH_GROUP). I know that the goal in this demo can be rewritten using an inner join and a subquery, but in my use case that is 5 times slower on a table with 4,500,000 rows. The demo is simplified; my real use case is much more similar to this fiddle.

The MySQL docs say that results from GROUP BY, used the way I used it, are non-deterministic and I should not rely on them. But in my case the results are always identical on any MySQL version (this can be tested by switching versions in db-fiddle). There are only two requirements: the engine must be InnoDB, and I have to create an index for the columns used in GROUP BY (neither requirement is a problem - InnoDB and the index are a performance advantage). I'm using this solution because it is much faster than any other solution using inner joins, subqueries, etc.

However, whether the syntax is removed or not, there is an alternative which produces an identical result: GROUP BY col1 ASC/DESC can be rewritten as GROUP BY col1 ORDER BY col1 ASC/DESC. Which one is better to use in my case? Both are identical in results and performance. Thanks for any advice.
mikep (103 rep)
Apr 11, 2019, 08:54 AM • Last activity: Dec 19, 2023, 09:28 AM
Showing page 1 of 20 total questions