Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
61
views
SELECT with array values on WHERE using postgres
I'm using a query to update an object array inside a **jsonb** column.
Example data:
[
  {
    "_id": "68696e0a3aab2f9ff9c40679",
    "altura": 1,
    "comprimento": 1,
    "largura": 1,
    "peso": 1,
    "valor": 1
  },
  {
    "_id": "6869744b44829f42ccdbb32c",
    "altura": 2,
    "comprimento": 2,
    "largura": 2,
    "peso": 2,
    "valor": 2
  }
]
Using one ID, this works perfectly:
UPDATE
objetos o
SET
itens = o.itens - (
SELECT
i.id::int - 1
FROM
jsonb_array_elements(o.itens) WITH ORDINALITY i(v, id)
WHERE
i.v->'_id' = '6869744b44829f42ccdbb32c'
LIMIT 1
)
WHERE
_id = ${_id}
RETURNING
_id,
updated_at;
It deletes the entry with _id = 6869744b44829f42ccdbb32c.
---
I have tried to delete entries using an array of IDs, for example ['68696e0a3aab2f9ff9c40679', '6869744b44829f42ccdbb32c'], but I get:
~~~none
operator does not exist: jsonb = text
~~~
I'm trying to add this to the WHERE clause:
i.v->'_id' = ANY(ARRAY['68696e0a3aab2f9ff9c40679', '6869744b44829f42ccdbb32c'])
and also IN, but IN does not return any rows.
How can I compare i.v->'_id' to the elements of an array? Something like:
['68696e0a3aab2f9ff9c40679', '6869744b44829f42ccdbb32c'].includes(i.v->'_id')
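The mismatch is that `->` returns jsonb while the array elements are text. Extracting the value as text with `->>` makes the ANY comparison typecheck; a sketch, reusing the table and column names from the question (the ID list is illustrative):

```sql
-- Sketch: ->> yields text, so it can be compared against a text[] array.
UPDATE objetos o
SET itens = o.itens - (
    SELECT i.id::int - 1
    FROM jsonb_array_elements(o.itens) WITH ORDINALITY i(v, id)
    WHERE i.v->>'_id' = ANY(ARRAY['68696e0a3aab2f9ff9c40679',
                                  '6869744b44829f42ccdbb32c'])
    LIMIT 1
);
```

Note that `jsonb - integer` removes a single element, so this still deletes only one match per statement; deleting every matching entry in one pass would mean rebuilding the array, e.g. with jsonb_agg over the elements whose _id is not in the list.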
References:
1. https://stackoverflow.com/a/10738459/2741415
2. https://dba.stackexchange.com/a/315124/321838
3. https://stackoverflow.com/a/75053441/2741415
flourigh
(145 rep)
Jul 5, 2025, 07:34 PM
• Last activity: Jul 10, 2025, 12:46 AM
0
votes
2
answers
212
views
Selecting last value to be entered each month
I am looking to pull a closing balance from the database for each month.
I have tried
SELECT CloseBal As 'Balance', MONTHNAME(DateTime) as 'Month', DateTime
FROM Table1
WHERE MAX(DateTime)
Group By Month
I am getting the error:
invalid use of group function
What would be the best way to achieve this?
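Aggregates can't appear in WHERE, so one common shape (a sketch, assuming the Table1, CloseBal, and DateTime names from the question) is to compute each month's latest timestamp in a derived table and join it back:

```sql
-- Sketch: pick the row holding the latest DateTime of each month.
SELECT t.CloseBal AS Balance,
       MONTHNAME(t.DateTime) AS Month,
       t.DateTime
FROM Table1 t
JOIN (
    SELECT MAX(DateTime) AS max_dt
    FROM Table1
    GROUP BY YEAR(DateTime), MONTH(DateTime)
) m ON t.DateTime = m.max_dt;
```

Grouping by both year and month keeps January 2022 and January 2023 as separate closing balances.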
Paulmcf1987
(43 rep)
Feb 3, 2023, 04:05 PM
• Last activity: Jun 17, 2025, 03:06 PM
1
votes
1
answers
232
views
How to order by then start at first non null value and select a limit for number of values
I have an environmental logger and I am trying to select every 5th row starting at where
outsidetemp
is not null (I want to include the null values after the first not null value). Is there a way to offset to find the first not null value?
My query is below; everything works except that I cannot get the offset right.
SELECT * FROM (SELECT @row := @row+1 AS rownum, ttime,temperature,humidity,outsidetemp
FROM (SELECT @row :=0) r, tempHumiditydata) ranked
WHERE rownum % 5 = 1
ORDER BY rownum DESC LIMIT 2020 ??????OFFSET (WHERE outsidetemp IS NOT NULL)????????
Additional background: outsidetemp is logged once for every 60 of the other data points; the values in between are NULL.
The query above feeds a graph, so it is important to start at the first non-null value in order to capture the remaining non-null values (starting at the wrong point meant I got no data for outsidetemp).
I am skipping every 5th row to reduce the amount of data in a wider time frame graph.
Here is an example of what I need
Ttime temp humidity outsidetemp
8:00 71 50 NULL
7:00 72 49 80
6:00 73 48 NULL
5:00 73 48 NULL
4:00 73 48 NULL
3:00 69 51 76
Using only every other record starting at the first non null outsidetemp yields
7:00 72 49 80
5:00 73 48 NULL
3:00 69 51 76
I can then use the above to graph temp, humidity and outside temp versus time. I just interpolate between non null outside temp values.
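One way to avoid OFFSET entirely (a sketch, assuming MySQL and the tempHumiditydata table from the question) is to anchor the numbering at the newest non-NULL outsidetemp reading and count downwards from there. Note that assigning user variables inside an ORDER BY subquery is version-dependent behavior; on MySQL 8+, ROW_NUMBER() OVER (ORDER BY ttime DESC) is the more reliable tool:

```sql
-- Sketch: row 1 is the newest non-NULL outsidetemp reading, so keeping
-- rownum % 5 = 1 samples every 5th row going back in time from it.
SELECT *
FROM (
    SELECT @row := @row + 1 AS rownum,
           ttime, temperature, humidity, outsidetemp
    FROM (SELECT @row := 0) r, tempHumiditydata
    WHERE ttime <= (SELECT MAX(ttime)
                    FROM tempHumiditydata
                    WHERE outsidetemp IS NOT NULL)
    ORDER BY ttime DESC
) ranked
WHERE rownum % 5 = 1
LIMIT 2020;
```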
Kevin
(11 rep)
Jun 8, 2019, 01:57 AM
• Last activity: Jun 15, 2025, 12:04 PM
2
votes
1
answers
795
views
Filter by list of values pairs
I have a PostgreSQL 9.4 database with a products table. For each product there must be a list of pairs:
price; quantity for which this price is actual
The products table may contain millions or even billions of records. For this table I must provide filtering by price, by quantity, and by price+quantity. If there is only a price filter, then a product appears in the result list if it has at least one price variant that satisfies the filter. If there is a price+quantity filter, then a product appears in the result list only if it has at least one price variant whose price AND quantity both satisfy the filters.
If I create separate tables
create table prod (id integer primary key);
create table optprice (prod integer, price decimal, q integer);
then with millions of products the query takes a really long time:
select * from prod where id in (select o.prod from optprice o where price between 10 and 500 ) limit 20;
Planning time: 0.166 ms
Execution time: 867.663 ms
select count(*) from prod where id in (select o.prod from optprice o where price between 10 and 500 );
Planning time: 0.166 ms
Execution time: 867.663 ms
Even if I replace the first query with joins, the count query is still too slow:
select count(*) from prod left join optprice on id=optprice.prod where price between 10 and 500 limit 20;
Planning time: 0.149 ms
Execution time: 1478.455 ms
I decided to use PostgreSQL arrays, so each product gets a field optprice with something like:
{{112.3, 33}, {555.12, 66}, {77.8, 88}}
But I can't see how to implement the filtering described above with this layout. I can implement separate price or quantity filters, but not the combined price+quantity filter. I could write a function, but if I'm not mistaken I would then lose the ability to use indexes, and the queries would again be too slow.
Is it possible to do something like this in PostgreSQL so that it works relatively fast even on large datasets? (Also, sorry for my bad English.)
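With the two-table layout, the "at least one pair matches" semantics map naturally onto EXISTS over a composite index; a sketch, assuming the prod/optprice tables from the question (the quantity bounds are illustrative):

```sql
-- Sketch: index ordered for the correlated lookup (prod first, then the
-- filtered columns), so the pair test can be answered from the index.
CREATE INDEX optprice_prod_price_q_idx ON optprice (prod, price, q);

-- "At least one variant with matching price AND quantity" per product:
SELECT p.*
FROM prod p
WHERE EXISTS (
    SELECT 1
    FROM optprice o
    WHERE o.prod = p.id
      AND o.price BETWEEN 10 AND 500
      AND o.q BETWEEN 5 AND 50
)
LIMIT 20;
```

Unlike the IN/JOIN variants, EXISTS stops at the first matching variant per product and never duplicates product rows, which also keeps the count query honest.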
Moisizz
(21 rep)
Sep 15, 2016, 03:13 PM
• Last activity: Apr 23, 2025, 06:04 AM
0
votes
2
answers
605
views
How to select Overdue Rows with Date Frequencies?
+------------+------------+--------+
| Invoice_id | due_date   | amount |
+------------+------------+--------+
| 20         | 2020-01-18 | 1250   |
| 21         | 2020-01-15 | 1335   |
+------------+------------+--------+
Get all records whose due_date falls on a multiple of n days from today, like below.
For example, with n = 5:
SELECT * FROM invoices
WHERE due_date = DATE_ADD(CURDATE(), INTERVAL 5 DAY)
   OR due_date = DATE_ADD(CURDATE(), INTERVAL 10 DAY)
   OR due_date = DATE_ADD(CURDATE(), INTERVAL 15 DAY)
but I want to make it universal for any value of n.
> **Note:** mySQL version in my machine is 5.*
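One way to generalize (a sketch for MySQL 5.x, using the invoices table from the question; the ? stands for n) is to test the day difference with the modulo operator instead of enumerating intervals:

```sql
-- Sketch: keep invoices whose due_date is a positive multiple of n days
-- away from today. Bind the desired n to the ? placeholder.
SELECT *
FROM invoices
WHERE due_date > CURDATE()
  AND DATEDIFF(due_date, CURDATE()) % ? = 0;
```

This matches every member of the 5, 10, 15, ... series without an upper bound, which the OR list cannot do.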
Dev Matee
(101 rep)
Jan 22, 2020, 04:37 AM
• Last activity: Feb 5, 2025, 01:02 AM
1
votes
1
answers
79
views
Why do variables in the WHERE clause cause the execution time to balloon
I have a t-sql query that executes hourly, pulling in a variable amount of data depending on the current time when the query executes, i.e.: if it is executing between midnight and 2 AM local time (7 to 9 AM UTC) I want to pull in the last 120 days' worth of data; outside that window I only want to bring in 3 days' worth of data.
While creating the query I tried a few things to avoid having to repeat the query in an IF ELSE statement with hard coded values. As a baseline for testing I created the following query:
SELECT COUNT(*)
FROM dbo.Tickets t
JOIN dbo.TicketsDetails td ON t.ticketGUID = td.ticketGUID
WHERE td.dtCreated > DATEADD(dd, -1, CAST(GETUTCDATE() AS date))
With the hardcoded interval it returns a count of about 750,000 in 0.829 seconds. When I modify it to use local variables (the second or third WHERE clause below), however, execution time explodes to over 10 minutes:
DECLARE @Interval INT,
@StartDate DATE;
SELECT @Interval = CASE WHEN DATEPART(hh, GETUTCDATE()) IN (7, 8) THEN -120 ELSE -1 END
, @StartDate = DATEADD(dd, @Interval, CAST(GETUTCDATE() AS date));
SELECT COUNT(*)
FROM dbo.Tickets t
JOIN dbo.TicketsDetails td
ON t.ticketGUID = td.ticketGUID
--WHERE td.dtCreated > DATEADD(dd, -1, CAST(GETUTCDATE() AS date))
WHERE td.dtCreated > DATEADD(dd, @Interval, CAST(GETUTCDATE() AS date))
--WHERE td.dtCreated > @StartDate
My question is why this happens, and, if it is working as designed, what workarounds exist so I don't have to duplicate the code.
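This is the classic local-variable pattern: at compile time the optimizer cannot see the variable's value, so it estimates the date predicate from average density rather than the actual 1-day or 120-day range. A hedged sketch of one common workaround, reusing the query from the question, is to request a fresh plan per execution:

```sql
-- Sketch: OPTION (RECOMPILE) lets SQL Server embed the variable's
-- runtime value and build a plan sized for the actual date range.
DECLARE @Interval INT =
    CASE WHEN DATEPART(hh, GETUTCDATE()) IN (7, 8) THEN -120 ELSE -1 END;

SELECT COUNT(*)
FROM dbo.Tickets t
JOIN dbo.TicketsDetails td ON t.ticketGUID = td.ticketGUID
WHERE td.dtCreated > DATEADD(dd, @Interval, CAST(GETUTCDATE() AS date))
OPTION (RECOMPILE);
```

The trade-off is a compilation on every run, which is usually acceptable for an hourly job.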
Jim Stephenson
(11 rep)
Jan 10, 2025, 05:52 PM
• Last activity: Jan 11, 2025, 05:56 AM
0
votes
1
answers
45
views
JOIN vs. WHERE Special Case - Single Target Foreign Key
Consider:
- table A with primary key "id" and unique key "name"
- table B with foreign key "aid" referencing A."id"
- function (plpgsql) "f_name2id" taking some "name" and returning the matching "id"
Which of the following would be more efficient?
1) SELECT * FROM B WHERE "aid" = f_name2id(<name>);

2) SELECT B.* FROM B INNER JOIN A ON B."aid" = A."id" WHERE A."name" = <name>;
giladrv
(103 rep)
Oct 28, 2024, 07:13 AM
• Last activity: Oct 28, 2024, 07:35 AM
0
votes
3
answers
53
views
Select object from first table, who doesn`t have true in second
I have a table with objects:
| uuid | name |
| -------- | -------------- |
| 98665e5a-e1cb-4bf7-93fa-a8d13983f358 | object |
and another table with information about whether this object is enabled together with another object:
| uuid_from_table_upper | enabled | uuid_another_object|
| -------- | -------------- | -------------- |
| 98665e5a-e1cb-4bf7-93fa-a8d13983f358 | true |ad5e670a-adac-485b-879e-9448d68a393c|
| 98665e5a-e1cb-4bf7-93fa-a8d13983f358 | false | a55b021b-ae4b-4373-91ae-87b9f7134862|
I need an SQL query that returns the objects from the first table that don't have any true in the second table:
SELECT ft.*
FROM first_table ft
JOIN second_table st on ft.uuid = st.uuid_from_table_upper
WHERE
not st.enabled;
I guess this needs GROUP BY or a recursive WITH.
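No GROUP BY or recursion is required: "has no true row" is an anti-join, which the JOIN above cannot express (it returns objects that have at least one false row instead). A sketch, assuming the table names used in the question:

```sql
-- Sketch: keep objects for which no second_table row is enabled.
SELECT ft.*
FROM first_table ft
WHERE NOT EXISTS (
    SELECT 1
    FROM second_table st
    WHERE st.uuid_from_table_upper = ft.uuid
      AND st.enabled
);
```

Note this also returns objects with no rows at all in second_table; add a plain EXISTS check as well if such objects should be excluded.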
smdsa1337
(5 rep)
Aug 28, 2024, 07:42 AM
• Last activity: Sep 3, 2024, 08:44 AM
1
votes
1
answers
7382
views
How do I query only rows where multiple instances with the same ID exist with Postgres?
I have a table where I want to gather some statistics, but only for items that are repeated two or more times. Here is what a simplified table looks like:
CREATE TABLE test (
id bigint,
something text
);
Most rows have a unique id; some have repeated entries where something differs.
What I'm wondering is whether there is a way to read the table, but only the items that have 2 or more rows with the same id.
I know how to count the number of duplicated rows using COUNT() and GROUP BY:
SELECT id, COUNT(id) AS count FROM test GROUP BY id;
I don't really see how to use the COUNT(id) in a WHERE or FILTER clause (since aggregate functions are not allowed there).
---
Just in case, a very small data sample would be:
id | something
----|-----------
1 | first
2 | second
2 | third
In the result I want to see:
id | something
----|-----------
2 | second
2 | third
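A window function keeps every row while exposing the per-id count, so the filter can be applied one level out; a sketch against the test table from the question:

```sql
-- Sketch: COUNT() OVER counts rows per id without collapsing them,
-- and the outer WHERE may reference the computed column.
SELECT id, something
FROM (
    SELECT id, something,
           COUNT(*) OVER (PARTITION BY id) AS cnt
    FROM test
) t
WHERE cnt >= 2;
```

For the sample data this keeps both id = 2 rows ("second" and "third") and drops id = 1.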
Alexis Wilke
(135 rep)
Feb 2, 2023, 07:58 PM
• Last activity: Aug 5, 2024, 11:23 PM
1
votes
2
answers
2614
views
How to speed up counting and filtering on table with a lot of columns and filter conditions?
I have TABLE_A in my database that has a lot of columns.
Let's say there are 100 columns: COLUMN_1, COLUMN_2, COLUMN_3, .. COLUMN_100
There are so many columns not because of a denormalized design, but because each row represents an entity with a lot of properties (in this example, 99 properties; the first column is just an id).
My Application should do the following tasks:
- Receive a set of filter conditions from the user
- Execute a query in the database that filters the data on given conditions and then should count the result rows
There are no limits on the number of filter conditions:
- there may be no filter conditions
- there may be filter conditions on every column of the table
- there may be filter conditions on some columns of the table (for example on 50 columns out of all)
So below is an example query that my application executes:
SELECT
COUNT(*)
FROM
TABLE_A
WHERE
COLUMN_1 = 'test'
AND COLUMN_2 != 'q'
AND COLUMN_45 > 5
AND COLUMN_45 < 1000
TABLE_A doesn't have any indexes: each column can be used for filtering, and there is no set of queries that runs much more frequently than the rest.
I also don't use any kind of cache at any level: inserts and updates are not very frequent, but they still happen more often than two queries with the same filter conditions occur.
So every query runs as a sequential scan. That was not a problem earlier, but now the run times have become unacceptable (the number of rows in the table has grown a lot).
So..
**Is there any solution on how to speed up the execution of the task?**
Any suggestions would be appreciated. I can change anything except one thing: I must use SQL. Currently I use PostgreSQL, but I would change DBMS if it helped.
nik
(111 rep)
Feb 6, 2018, 02:51 PM
• Last activity: Jun 21, 2024, 07:02 AM
1
votes
1
answers
62
views
Which indexes create to speed up a query with where
I have a query but I do not know which index to create to speed it up.
This is the query:

select `id_my_table`
from `my_table`
where now() between created_data and ifnull(finish_data, now())
  and `stato` = 1
  and `closed` != 1
  and `closed` = 0
  and `deleted` = 0
  and `id_my_table_type` in (3, 5)
I cannot change the query; I just would like to speed it up.
If I EXPLAIN the query, I see that MySQL does not use any index.
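A multi-column index with the equality columns first and the range column last is the usual starting point; a sketch (the column order here is an assumption based on the predicates in the query):

```sql
-- Sketch: equality columns first, then the range column used by BETWEEN.
-- The condition on finish_data goes through IFNULL(), so it cannot use
-- an index directly; only created_data benefits from the range position.
CREATE INDEX idx_my_table_filter
    ON my_table (stato, closed, deleted, id_my_table_type, created_data);
```

Run EXPLAIN again after creating it; if the optimizer still estimates too many matching rows, it may prefer a full scan regardless.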
Giacomo M
(163 rep)
Mar 1, 2024, 01:32 PM
• Last activity: Mar 1, 2024, 02:03 PM
1
votes
0
answers
77
views
unique key on varchar is ignored for maria db
I have the following table:
| s3entry | CREATE TABLE `s3entry` (
  `ID` bigint(20) NOT NULL AUTO_INCREMENT,
  `S3KEY` varchar(1024) DEFAULT NULL,
  `FETCHESSINCELASTS3FETCH` int(11) DEFAULT NULL,
  `inS3Store` tinyint(4) DEFAULT 0,
  `inLocalCache` tinyint(4) DEFAULT 0,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `s3key` (`S3KEY`) USING HASH,
  KEY `inS3Store_inLocalCache_lastcachefetch` (`inS3Store`,`inLocalCache`,`LASTCACHEFETCH`),
  KEY `inLocalCache_lastfetch` (`inLocalCache`,`LASTCACHEFETCH`),
  KEY `cache_size` (`inLocalCache`,`inS3Store`,`size`)
) ENGINE=InnoDB AUTO_INCREMENT=5847320 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci |
The table has some 5 million records.
When I run the following query it ignores the s3key index:
explain SELECT * FROM s3entry force index (s3key) WHERE s3key = 'some long text of about 400 bytes that exactly matches an s3key in the db';
+------+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| 1 | SIMPLE | s3entry | ALL | s3key | NULL | NULL | NULL | 5570486 | Using where |
+------+-------------+---------+------+---------------+------+---------+------+---------+-------------+
1 row in set (0.000 sec)
Given the exact match and the simple WHERE clause, I'm expecting MariaDB to use the key.
I've also tried explicitly passing the collation in the query:
MariaDB [op]> explain SELECT * FROM s3entry WHERE s3key collate utf8mb4_unicode_ci = 'some long text of about 400 bytes that exactly matches an s3key in the db';
+------+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+---------+------+---------------+------+---------+------+---------+-------------+
| 1 | SIMPLE | s3entry | ALL | NULL | NULL | NULL | NULL | 5763585 | Using where |
+------+-------------+---------+------+---------------+------+---------+------+---------+-------------+
1 row in set (0.001 sec)
I can solve the problem by adding an index to the table on the same key, but this seems unnecessary:
s3entry | CREATE TABLE `s3entry` (
  `ID` bigint(20) NOT NULL AUTO_INCREMENT,
  `CREATEDAT` datetime(3) DEFAULT NULL,
  `guid` varchar(255) NOT NULL,
  `LASTS3FETCH` date DEFAULT NULL,
  `size` bigint(20) DEFAULT NULL,
  `LASTCACHEFETCH` date DEFAULT NULL,
  `S3KEY` varchar(1024) DEFAULT NULL,
  `FETCHESSINCELASTS3FETCH` int(11) DEFAULT NULL,
  `inS3Store` tinyint(4) DEFAULT 0,
  `inLocalCache` tinyint(4) DEFAULT 0,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `s3key` (`S3KEY`) USING HASH,
  KEY `inS3Store_inLocalCache_lastcachefetch` (`inS3Store`,`inLocalCache`,`LASTCACHEFETCH`),
  KEY `inLocalCache_lastfetch` (`inLocalCache`,`LASTCACHEFETCH`),
  KEY `cache_size` (`inLocalCache`,`inS3Store`,`size`),
  KEY `s3key2` (`S3KEY`(768))
) ENGINE=InnoDB AUTO_INCREMENT=5856082 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci |
explain SELECT * FROM s3entry WHERE s3key = 'some long text of about 400 bytes that exactly matches an s3key in the db';
+------+-------------+---------+------+---------------------+--------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+---------+------+---------------------+--------+---------+-------+------+-------------+
| 1 | SIMPLE | s3entry | ref | s3key,s3key2 | s3key2 | 3075 | const | 1 | Using where |
+------+-------------+---------+------+---------------------+--------+---------+-------+------+-------------+
1 row in set (0.000 sec)
I don't understand why the hash won't work.
Brett Sutton
(111 rep)
Jan 27, 2024, 12:11 PM
0
votes
1
answers
50
views
SELECT * FROM Table WHERE name=$variable not working
I'm trying to set up a simple SQL query to collect a user's details when they log in to a new site I'm writing, but it throws an error every time. Any ideas what I'm doing wrong?
$sql = "SELECT * FROM Users WHERE uname = '".$uname."'";
if ($conn->query($sql) === TRUE) {
echo "Returned rows are: " . $result -> num_rows;
// Free result set
$result -> free_result();
} else {
echo "Error: " . $sql . "<br>" . $conn->error;
}
The response I receive from the server is:
Error: SELECT * FROM Users WHERE uname = 'XYZ'
Simon
(103 rep)
Jan 24, 2024, 11:24 AM
• Last activity: Jan 24, 2024, 11:45 AM
0
votes
1
answers
40
views
Can conditionals be used in a WHERE condition?
I am working on something and found this SQL that I am not sure how to rewrite correctly (meaning using PHP PDO).
The SQL looks like:
$sql = 'SELECT * FROM table WHERE column ' . isset($variable) ? . '=' . $variable : '>0';
Basically what the query is telling is: if $variable
is defined (within the PHP world) then use an =
for the WHERE condition, if it is not then use the >0
.
I can clean up that a little bit on PHP and do something like:
$where = isset($variable) ? 'column = ?' : 'column > ?'; // ternary operator to build the proper WHERE condition
$sql = "SELECT * FROM table WHERE $where"; // double quotes so $where is interpolated
$db->row($sql, [$variable ?? 0]); // bind the parameter the PDO way; 0 is used when $variable is not defined
and it will work fine, I guess. Now, I wonder if I can achieve the same in plain SQL, with a condition inside the WHERE clause (as one can within the SELECT), and if so, is it optimal? Or is the programmatic approach better and faster?
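In plain SQL the same effect can be had with a single predicate that checks the bound value for NULL; a sketch (the table, column, and parameter names are illustrative):

```sql
-- Sketch: when :v is NULL, fall back to "column > 0";
-- otherwise require an exact match.
SELECT *
FROM some_table
WHERE (:v IS NULL AND some_column > 0)
   OR some_column = :v;
```

Two caveats: PDO's native prepares do not allow reusing a named placeholder, so two placeholders bound to the same value may be needed; and optimizers often handle such catch-all OR predicates poorly, so building the WHERE string programmatically is usually the faster option.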
ReynierPM
(1888 rep)
Dec 27, 2023, 02:05 PM
• Last activity: Dec 27, 2023, 02:32 PM
52
votes
4
answers
73952
views
Using column alias in a WHERE clause doesn't work
Given a table users with two fields: id and email.
select id, email as electronic_mail
from (
select id, email
from users
) t
where electronic_mail = ''
Postgres complains that:
> ERROR: column "electronic_mail" does not exist
The example is just to demonstrate the arising problem. My actual case is more complex: I iterate through an array of elements in a json column, taking a single scalar value from each. (I can share some code if that helps.)
I really don't get what the complication would be; probably I'm unaware of something. I was under the impression that aliased columns can be used in a WHERE clause without problems?
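The reason is evaluation order: WHERE runs before the SELECT list, so an alias defined in the same query level does not exist yet. Aliases defined one level down, however, are visible to the outer query; a sketch of that rearrangement:

```sql
-- Sketch: define the alias inside the subquery, filter on it outside.
SELECT *
FROM (
    SELECT id, email AS electronic_mail
    FROM users
) t
WHERE electronic_mail = '';
```

Most planners flatten the subquery away, so this usually costs nothing over filtering on the raw column.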
Victor
(657 rep)
Dec 27, 2018, 06:24 PM
• Last activity: Nov 30, 2023, 02:23 PM
0
votes
1
answers
1554
views
Which database scheme will be better for dating app performance?
I'm creating a dating app like Tinder with Spring Boot and want to make sure that even with a huge amount of data it will work with the best possible performance. My tables look like the schema diagram below. All tables and actions reference profiles.id. **Qu...

Ugabugaman
(3 rep)
Dec 6, 2022, 01:38 PM
• Last activity: Nov 2, 2023, 05:41 PM
0
votes
1
answers
89
views
Group by a column and select the group id on condition that applies to all the rows in it
I have the table below:
CREATE TABLE result(
id INTEGER NOT NULL
,cid INTEGER NOT NULL
,aid INTEGER NOT NULL
,agid INTEGER NOT NULL
,ft INTEGER NOT NULL
,ct INTEGER NOT NULL
,jid INTEGER NOT NULL
,fcn INTEGER NOT NULL
,iap INTEGER NOT NULL
,flags INTEGER NOT NULL
,afcf INTEGER NOT NULL
);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (44,2,9,6,2,1698222904,52,-1,1,65536,65539);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (44,2,9,6,2,1698222904,52,-1,2,65536,4099);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (54,2,9,6,2,1698223114,56,-1,1,65536,65539);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (54,2,9,6,2,1698223114,56,-1,2,65536,4099);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (68,2,9,6,2,1698223386,59,-1,1,65536,3);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (68,2,9,6,2,1698223386,59,-1,2,65536,4099);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (80,2,9,6,2,1698223477,60,-1,1,65536,3);
INSERT INTO result(id,cid,aid,agid,ft,ct,jid,fcn,iap,flags,afcf) VALUES (80,2,9,6,2,1698223477,60,-1,2,65536,4099);
The values look like:
|id |cid|aid|agid|ft |ct |jid|fcn|iap|flags|afcf |
|---|---|---|----|---|----------|---|---|---|-----|-----|
|44 |2 |9 |6 |2 |1698222904|52 |-1 |1 |65536|65539|
|44 |2 |9 |6 |2 |1698222904|52 |-1 |2 |65536|4099 |
|54 |2 |9 |6 |2 |1698223114|56 |-1 |1 |65536|65539|
|54 |2 |9 |6 |2 |1698223114|56 |-1 |2 |65536|4099 |
|68 |2 |9 |6 |2 |1698223386|59 |-1 |1 |65536|3 |
|68 |2 |9 |6 |2 |1698223386|59 |-1 |2 |65536|4099 |
|80 |2 |9 |6 |2 |1698223477|60 |-1 |1 |65536|3 |
|80 |2 |9 |6 |2 |1698223477|60 |-1 |2 |65536|4099 |
In the above table, iap takes the values [1, 2], giving two groups. For each iap group, I want to get that iap if afcf & 65536 = 0 holds for all of its rows; otherwise I want the value to be NULL.
So my output for the above table would contain 2, as all the rows with iap = 2 satisfy the condition afcf & 65536 = 0.
**What needs to be done**
For each group of iap, check whether all of its rows satisfy the condition afcf & 65536 = 0; if yes, print that group id iap.
**Expected Output**
Since iap = 2 satisfies the condition for all of its rows:
2
This is what I tried, which is wrong.
SELECT iap FROM result
GROUP BY iap
HAVING afcf & 65536 = 0
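The HAVING above fails because it references a bare column rather than an aggregate. The all-rows test can be phrased as "the group has zero violating rows"; a sketch over the result table from the question:

```sql
-- Sketch: a group qualifies when the count of violating rows is zero.
SELECT iap
FROM result
GROUP BY iap
HAVING SUM(CASE WHEN (afcf & 65536) <> 0 THEN 1 ELSE 0 END) = 0;
```

For the sample data this returns 2: the iap = 1 group contains afcf values of 65539 (where afcf & 65536 = 65536), so it is excluded.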
Himanshuman
(197 rep)
Nov 2, 2023, 08:26 AM
• Last activity: Nov 2, 2023, 05:13 PM
2
votes
1
answers
111
views
How to use aggregate column in where clause when using group by
I have the following table:

CREATE TABLE `books` (
  `book_id` int(11) NOT NULL AUTO_INCREMENT,
  `title` varchar(100) DEFAULT NULL,
  `author_fname` varchar(100) DEFAULT NULL,
  `author_lname` varchar(100) DEFAULT NULL,
  `released_year` int(11) DEFAULT NULL,
  `stock_quantity` int(11) DEFAULT NULL,
  `pages` int(11) DEFAULT NULL,
  PRIMARY KEY (`book_id`)
)
I'm trying to find each author's first and last book release years, but only for authors who have more than one book.
Here is the query:
SELECT author_lname,
MIN(released_year) first_release,
MAX(released_year) last_release,
COUNT(*) book_count,
MAX(pages) max_page_count
FROM books
GROUP BY author_lname
ORDER BY book_count DESC;
But I can't use book_count in the WHERE clause, so I can't do book_count > 1.
I'm looking for an explanation of why this is not possible, and then how to get the expected result.
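WHERE is evaluated before grouping, so aggregates (and aliases built on them) are not available there; HAVING runs after grouping and is the place for this filter. A sketch based on the query from the question:

```sql
-- Sketch: filter groups after aggregation with HAVING.
SELECT author_lname,
       MIN(released_year) AS first_release,
       MAX(released_year) AS last_release,
       COUNT(*) AS book_count,
       MAX(pages) AS max_page_count
FROM books
GROUP BY author_lname
HAVING COUNT(*) > 1
ORDER BY book_count DESC;
```

MySQL also accepts the alias form HAVING book_count > 1, though repeating the aggregate is the more portable spelling.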
Md. A. Apu
(123 rep)
Sep 28, 2023, 06:38 PM
• Last activity: Sep 28, 2023, 08:09 PM
0
votes
2
answers
140
views
How to use a WHERE clause with a GROUP BY
I'm learning SQL so apologies if this is too basic a question. I can't seem to figure out how to use a WHERE clause after I GROUP BY a field.
For example, how would I filter the results using a WHERE clause to only see those countries where the Order count is greater than 100?
Thank you

Burner918
(3 rep)
Aug 31, 2023, 07:38 PM
• Last activity: Aug 31, 2023, 10:04 PM
0
votes
1
answers
51
views
How to query in the given order
select * from table where options in (10,223,43,1,23,54,323,32)

But the results do not come back in the given order of the options. The options need to change frequently.
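SQL result sets have no inherent order, so the desired order must be stated explicitly. On MySQL, FIELD() sorts by position in the list (on PostgreSQL, array_position plays the same role); a sketch, assuming the table and column from the question:

```sql
-- Sketch (MySQL): FIELD() returns the 1-based position of options
-- within the list, so ORDER BY follows the list's own order.
SELECT *
FROM `table`
WHERE options IN (10, 223, 43, 1, 23, 54, 323, 32)
ORDER BY FIELD(options, 10, 223, 43, 1, 23, 54, 323, 32);
```

Since the list changes frequently, both the IN list and the FIELD() arguments have to be generated from the same source.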
sh3hz
(1 rep)
May 19, 2023, 11:13 AM
• Last activity: May 19, 2023, 01:25 PM
Showing page 1 of 20 total questions