
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
335 views
Select Sum from two joined tables
There are these structures:

CREATE TABLE invoices (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  date date NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO invoices VALUES (1,'2018-09-22');

CREATE TABLE products (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  invoice_id int(10) unsigned NOT NULL,
  amount decimal(10,2) unsigned NOT NULL,
  quantity smallint(5) unsigned NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO products VALUES (1,1,150.00,2),(2,1,60.00,3),(3,1,50.00,1);

CREATE TABLE payments (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  invoice_id int(10) unsigned NOT NULL,
  amount decimal(10,2) unsigned NOT NULL,
  date date NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

INSERT INTO payments VALUES (1,1,400.00,'2018-09-23'),(2,1,80.00,'2018-09-23');

I have this query:

select i.id,
       sum(pr.amount * pr.quantity) as productAmount,
       sum(pm.amount) as paymentAmount
from invoices as i
left join products as pr on pr.invoice_id = i.id
left join payments as pm on pm.invoice_id = i.id
group by i.id

and I get this result:

+----+---------------+---------------+
| id | productAmount | paymentAmount |
+----+---------------+---------------+
|  1 |       1060.00 |       1440.00 |
+----+---------------+---------------+
1 row in set (0,00 sec)

However, I want to get the following result:

+----+---------------+---------------+
| id | productAmount | paymentAmount |
+----+---------------+---------------+
|  1 |        530.00 |        480.00 |
+----+---------------+---------------+
1 row in set (0,00 sec)

That is, the sum of the product amounts and the sum of the payment amounts, grouped by invoices.id. What should the query be in this case?
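A hedged sketch of one way to get those numbers (not from the original post, and assuming the table definitions above): because the two LEFT JOINs multiply product rows by payment rows, aggregate each child table in its own derived table first, so each invoice joins to at most one pre-summed row per side.

select i.id,
       pr.productAmount,
       pm.paymentAmount
from invoices as i
left join (
    -- one row per invoice with the product total
    select invoice_id, sum(amount * quantity) as productAmount
    from products
    group by invoice_id
) as pr on pr.invoice_id = i.id
left join (
    -- one row per invoice with the payment total
    select invoice_id, sum(amount) as paymentAmount
    from payments
    group by invoice_id
) as pm on pm.invoice_id = i.id;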
abdulmanov.ilmir (101 rep)
Sep 24, 2018, 06:58 AM • Last activity: Jul 17, 2025, 09:08 AM
0 votes
3 answers
408 views
Column with Default value as Sum of TIMESTAMP
I have a table with two columns (JobDate, RecordTime):

--------------------------------------
JobDate    | RecordTime | SumCol
--------------------------------------
2019-07-20 | 2019-07-21 |
2019-07-19 | 2019-07-20 |

I need SumCol to have a default value of UNIX_TIMESTAMP(JobDate) + UNIX_TIMESTAMP(RecordTime).

I've tried creating a virtual column, but it gives me this error:

**Expression of generated column 'Test2' contains a disallowed function.**

This is what I tried:

ALTER TABLE jobsTemp ADD SumCol TIMESTAMP AS (UNIX_TIMESTAMP(JobDate) + UNIX_TIMESTAMP(RecordTime));
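MySQL rejects UNIX_TIMESTAMP() in generated columns because it is treated as non-deterministic (its result depends on the session time zone). A hedged workaround sketch, assuming a plain BIGINT column is acceptable instead of TIMESTAMP, is to fill the value from a BEFORE INSERT trigger:

ALTER TABLE jobsTemp ADD SumCol BIGINT;  -- the sum of two epoch values is not itself a valid TIMESTAMP

CREATE TRIGGER jobsTemp_sumcol_bi
BEFORE INSERT ON jobsTemp
FOR EACH ROW
SET NEW.SumCol = UNIX_TIMESTAMP(NEW.JobDate) + UNIX_TIMESTAMP(NEW.RecordTime);

A matching BEFORE UPDATE trigger would be needed if the two dates can change after the row is inserted.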
Amr Ahmed (11 rep)
Jul 20, 2019, 06:05 PM • Last activity: Jul 14, 2025, 04:10 AM
0 votes
1 answers
1224 views
Select SUM from another Select SUM result in mySQL
I have one SELECT SUM command that works:

# SIM Orders Pending by Customers for 3101708 IMSI SIMs with 5YY MSISDNs
SELECT accountname AS 'Account Name',
       IF((SELECT * FROM 5YYAccounts WHERE 5YYAccounts.accountid = T3101708.accountid) IS NULL, 'N', 'Y') AS '5YY MSISDN',
       LPAD(CONCAT(FORMAT(SUM(quantity), 0)), 15, ' ') AS 'Quantity'
FROM T3101708
WHERE outputfilereceived IS NULL
  AND (SELECT * FROM 5YYAccounts WHERE 5YYAccounts.accountid = T3101708.accountid) IS NOT NULL
GROUP BY accountname, accountid
ORDER BY SUM(quantity) DESC;

After running this, I get the following result:

+---------------------------------+------------+-----------------+
| Account Name                    | 5YY MSISDN | Quantity        |
+---------------------------------+------------+-----------------+
| FCA - SXM - AT&T                | Y          |          48,000 |
| Numerex - AT&T                  | Y          |          34,000 |
| Mytrex Inc. - AT&T              | Y          |          24,000 |
| Honda US - AT&T                 | Y          |          18,000 |
+---------------------------------+------------+-----------------+
3 rows in set (0.03 sec)

Based on the result above, I want to SUM the last column with the following SELECT SUM command:

SELECT SUM(
    SELECT SUM(quantity)
    FROM T3101708
    WHERE outputfilereceived IS NULL
      AND (SELECT * FROM 5YYAccounts WHERE 5YYAccounts.accountid = T3101708.accountid) IS NOT NULL
    GROUP BY accountname, accountid
);

This time I get an error in MySQL:

> ERROR 1064 (42000): You have an error in your SQL syntax; check the
> manual that corresponds to your MySQL server version for the right
> syntax to use near 'SELECT SUM(quantity)
> FROM T3101708
> WHERE outputfilereceived IS NULL AND' at line 2

What did I code wrong here? Thank you.

Bobby
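A hedged sketch of the usual fix (not from the original post): SUM() cannot take a SELECT statement as its argument, so wrap the grouped query in a derived table and sum its column:

SELECT SUM(t.account_quantity) AS total_quantity
FROM (
    SELECT SUM(quantity) AS account_quantity  -- one subtotal per account
    FROM T3101708
    WHERE outputfilereceived IS NULL
      AND (SELECT * FROM 5YYAccounts WHERE 5YYAccounts.accountid = T3101708.accountid) IS NOT NULL
    GROUP BY accountname, accountid
) AS t;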
Bobby Richardson (1 rep)
Apr 17, 2020, 02:06 AM • Last activity: Jul 5, 2025, 06:09 PM
0 votes
1 answers
255 views
Getting sum result for last year divided quarterly
I am trying to get financial data grouped by each quarter of last year and also grouped by the last three weeks. Also, if possible, I want each type as a separate column, with the type as the column header.

This is the query I am using right now, which gives the result for last month:

Select to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'MM') || '-' ||
       to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'YYYY') AS Month,
       case
         when (ap.asset = 2 or AP.ASSET_TYPE not like 'CFTD%' and AP.ASSET_TYPE not like 'CASH%' and AP.ASSET_TYPE != 'FTL') then 'AUM'
         when (AP.ASSET_TYPE like 'CFTD%') then 'Depostis'
         when (AP.ASSET_TYPE like 'CASH%') then 'Cash'
         when (AP.ASSET_TYPE = 'FTL') then 'Loan'
       end as Type,
       sum(ABS(AP.Eval_Market_Value)) as Total
from (select p.account, p.open_date as Open_Date, ac.description as RM, s.*
      FROM k$portfolio p, k$client k
      LEFT JOIN k$client_role cr ON cr.client = k.id AND cr.role = 136
      LEFT JOIN k$actors ac ON cr.actor = ac.id,
      table(gtw$reporting.Statement(p.id, 2,
            trunc(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1)),
            trunc(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1)),
            1, 1002, 1, 'USD')) s
      WHERE s.line_type = 'P'
        and k.id = p.client
        and p.id = s.portfolio
        and p.portfolio_type = 'C'
        and p.status = 1
     ) ap
group by to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'MM') || '-' ||
         to_char(add_months(last_day(TO_DATE(&DTE,'YYYYMMDD')),-1), 'YYYY'),
         case
           when (ap.asset = 2 or AP.ASSET_TYPE not like 'CFTD%' and AP.ASSET_TYPE not like 'CASH%' and AP.ASSET_TYPE != 'FTL') then 'AUM'
           when (AP.ASSET_TYPE like 'CFTD%') then 'Depostis'
           when (AP.ASSET_TYPE like 'CASH%') then 'Cash'
           when (AP.ASSET_TYPE = 'FTL') then 'Loan'
         end

The current output and the desired result were attached as images.
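A hedged sketch of just the quarter bucketing, not of the full report: it assumes the detail rows are available with a date column (a hypothetical table t with date column d and value column eval_market_value), and it leaves out the k$/gtw$reporting pipeline above. TRUNC(date, 'Q') collapses a date to the first day of its quarter:

SELECT TO_CHAR(TRUNC(t.d, 'Q'), 'YYYY-"Q"Q') AS quarter,
       SUM(ABS(t.eval_market_value))         AS total
FROM   t
WHERE  t.d >= TRUNC(ADD_MONTHS(SYSDATE, -12), 'YYYY')  -- 1 Jan of last year
AND    t.d <  TRUNC(SYSDATE, 'YYYY')                   -- 1 Jan of this year
GROUP  BY TRUNC(t.d, 'Q')
ORDER  BY TRUNC(t.d, 'Q');

Turning the Type values into separate columns on top of such a grouping would then be a pivot (CASE expressions or the PIVOT clause).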
Yousef (13 rep)
Feb 1, 2015, 07:43 AM • Last activity: May 18, 2025, 11:06 PM
0 votes
1 answers
1051 views
MS Access - How to sum matching criteria from two columns in same table
I have an MS Access database with four columns of interest that I am trying to work with. The rows represent sold jobs, and the columns of interest are "OriginalSaleAmount", "OriginalSaleDate", "RevisionSaleAmount", and "RevisionSaleDate". There is a relationship between each date column and its sale column, but of course Access is not inherently aware of this.

I would like to have either a report or a view of some kind, with a row for each month of the year, showing the combined sum of "OriginalSaleAmount" and "RevisionSaleAmount" that pertains to that month. The years themselves should be distinguished as well: we don't want data for January 2021 and January 2022 summed together; each should be its own year.

**Here is an example to help clarify:**

Let's say I have this table.

|JobNo  | OrigSaleAmnt | OrigSaleDt | RevSaleAmnt | RevSaleDt  |
|------:|-------------:|:----------:|------------:|:----------:|
| 12345 | $98765       | 01/05/2022 | $506        | 01/29/2022 |
| 12346 | $12345       | 01/24/2022 | $1028       | 02/27/2022 |
| 12347 | $13579       | 02/07/2022 | $943        | 03/12/2022 |
| 12348 | $12358       | 03/16/2022 | $729        | 03/19/2022 |
| 12349 | $17935       | 03/29/2022 | $6821       | 04/25/2022 |

I need a table, query, or report that produces this information from that table.

| DateYear| DateMonth | TotalSales | Comment                       |
| -------:|:---------:| ----------:|:----------------------------- |
| 2022    | January   | $111616    | $98765 + $12345 + $506        |
| 2022    | February  | $14607     | $13579 + $1028                |
| 2022    | March     | $31965     | $12358 + $17935 + $943 + $729 |
| 2022    | April     | $6821      | $6821                         |

You'll notice that each month sums the sale amounts according to the date associated with each sales column. I'm not a strong Access user and I'm not entirely sure where to start with this sort of data handling. If anyone can help guide me in how to go about accomplishing this, I'd appreciate it.
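A hedged sketch in Access SQL (the table name Jobs is an assumption, and Access queries have no comment syntax, so the explanation stays in prose): normalize the two date/amount pairs with a UNION ALL, then group by year and month. If Access rejects the inline derived table, save the UNION ALL part as its own query and base the totals query on that saved query instead.

SELECT Year(SaleDt) AS DateYear,
       Format(SaleDt, "mmmm") AS DateMonth,
       Sum(SaleAmnt) AS TotalSales
FROM (
    SELECT OrigSaleDt AS SaleDt, OrigSaleAmnt AS SaleAmnt FROM Jobs
    UNION ALL
    SELECT RevSaleDt, RevSaleAmnt FROM Jobs
) AS AllSales
GROUP BY Year(SaleDt), Month(SaleDt), Format(SaleDt, "mmmm")
ORDER BY Year(SaleDt), Month(SaleDt);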
JDDellGuy (1 rep)
May 20, 2022, 04:55 PM • Last activity: Feb 27, 2025, 02:01 AM
0 votes
0 answers
59 views
MySQL SUM inside LEFT JOIN returns wrong value when used with the ROLLUP modifier
Good afternoon. I have a problem with (or misunderstanding of) the ROLLUP modifier when using a LEFT JOIN: the sum in the ROLLUP row at the end is not what I expect.

Please see the sample data from my tables.

tb_no_penjualan

id_no_jual | total_bayar | created_date
----------------------------------------------
2          | 1024176.00  | 2024-10-23 20:40:10
3          | 520000.00   | 2024-10-23 20:47:39
4          | 241000.00   | 2024-10-23 20:55:26
7          | 346280.00   | 2024-10-24 08:55:22
8          | 957840.00   | 2024-10-24 08:57:49
9          | 534064.00   | 2024-10-24 09:04:32
12         | 731864.00   | 2024-10-24 09:34:06

tb_det_jual

id_det_jual | id_no_jual | jumlah_item | total_harga
-----------------------------------------------------
2           | 2          | 2           | 300000.00
3           | 2          | 2           | 50000.00
4           | 3          | 25          | 237500.00
5           | 4          | 2           | 331500.00
6           | 7          | 1           | 150000.00
7           | 7          | 5           | 252200.00
8           | 7          | 5           | 50000.00
9           | 7          | 5           | 100000.00
10          | 7          | 2           | 329800.00
11          | 8          | 3           | 494700.00
12          | 9          | 4           | 201760.00
13          | 12         | 3           | 494700.00
14          | 12         | 5           | 50000.00
15          | 12         | 7           | 140000.00

I have tried this query:

SELECT a.id_no_jual, date(a.created_date) as tanggal,
       SUM(a.jumlah) as jumlah_item,
       SUM(a.total_harga) AS total_harga,
       b.total_bayar
FROM pos.tb_det_jual a
LEFT JOIN (
    SELECT id_no_jual as id_no_jual, SUM(total_bayar) as total_bayar
    FROM pos.tb_no_penjualan
    GROUP BY date(created_date)
) as b ON (a.id_no_jual = b.id_no_jual)
GROUP BY date(a.created_date) WITH ROLLUP;

and the result of that query looks like this:

id_no_jual | tanggal    | jumlah_item  | total_harga     | total_bayar
-----------------------------------------------------------------------
2          | 2024-10-23 | 31           | 919000.00       | 1107280.00
7          | 2024-10-24 | 40           | 2263160.00      | 2620739.20
7          | Null       | 71 (correct) | 3182160.00 (ok) | 2620739.20 (wrong)

The result I want is this:

id_no_jual | tanggal    | jumlah_item  | total_harga     | total_bayar
-----------------------------------------------------------------------
2          | 2024-10-23 | 31           | 919000.00       | 1107280.00
7          | 2024-10-24 | 40           | 2263160.00      | 2620739.20
7          | Null       | 71 (correct) | 3182160.00 (ok) | 3728019.20

Is there any mistake in my query? Please correct the query if I am wrong. Regards.
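A hedged structural sketch (not from the original post): the ROLLUP total goes wrong because the payment subquery is grouped by date but joined back on id_no_jual, so most detail rows find no payment row. One option is to pre-aggregate both sides per day, join them 1:1 on the date, and roll up over that. With the sample data shown, the per-day payment totals come straight from tb_no_penjualan and do not match the figures in the question, which suggests extra logic not shown here, so treat this only as a shape to adapt; the id_no_jual column is dropped because it is arbitrary within a date group.

SELECT d.tanggal,
       SUM(d.jumlah_item) AS jumlah_item,
       SUM(d.total_harga) AS total_harga,
       SUM(p.total_bayar) AS total_bayar
FROM (
    -- item totals per day, taking the day from the header table
    SELECT DATE(h.created_date) AS tanggal,
           SUM(dj.jumlah_item)  AS jumlah_item,
           SUM(dj.total_harga)  AS total_harga
    FROM pos.tb_det_jual dj
    JOIN pos.tb_no_penjualan h ON h.id_no_jual = dj.id_no_jual
    GROUP BY DATE(h.created_date)
) AS d
LEFT JOIN (
    -- payment totals per day
    SELECT DATE(created_date) AS tanggal,
           SUM(total_bayar)   AS total_bayar
    FROM pos.tb_no_penjualan
    GROUP BY DATE(created_date)
) AS p ON p.tanggal = d.tanggal
GROUP BY d.tanggal WITH ROLLUP;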
epm.007 (29 rep)
Oct 24, 2024, 07:34 AM • Last activity: Oct 24, 2024, 11:44 AM
2 votes
2 answers
232 views
sum() of aggregate count()?
I have a query where I pull the domain portion out of the email addresses in a column and then count how many users I have from each domain. How do I get the total count (all users over all domains) in the same query?
select count(regexp_replace( email_id , '^[^@]+@','')) as count, regexp_replace( email_id , '^[^@]+@','') as domain from logins 
  where last_login between '2024-08-06' and '2024-09-06'  
  group by regexp_replace( email_id , '^[^@]+@','') 
  order by count;
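A hedged sketch (assuming a server with window functions, e.g. PostgreSQL or MySQL 8): adding SUM(...) OVER () to the grouped query repeats the grand total across all domains on every row.

select regexp_replace(email_id, '^[^@]+@', '') as domain,
       count(*)                                as domain_count,
       sum(count(*)) over ()                   as total_count   -- grand total over all domains
from logins
where last_login between '2024-08-06' and '2024-09-06'
group by regexp_replace(email_id, '^[^@]+@', '')
order by domain_count;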
tink (155 rep)
Sep 10, 2024, 08:52 PM • Last activity: Sep 12, 2024, 06:11 AM
0 votes
3 answers
2440 views
Query for getting the sum of all unique entries in a column
Currently I'm using MySQL to return the count of all rows in a column that share the same word. To do this I'm using the following query:

SELECT
    SUM(CASE WHEN status_to = 'Draft' THEN 1 END) AS draft,
    SUM(CASE WHEN status_to = 'Unpublish' THEN 1 END) AS unpublish,
    SUM(CASE WHEN status_to = 'Publish' THEN 1 END) AS publish,
    SUM(CASE WHEN status_to = 'Action' THEN 1 END) AS action,
    SUM(CASE WHEN status_to = 'Unlisted' THEN 1 END) AS unlisted,
    SUM(CASE WHEN status_to = 'Sold' THEN 1 END) AS sold,
    SUM(CASE WHEN status_to = 'Let' THEN 1 END) AS let
FROM crm_logs

This gives the correct output for all the terms I've specified, but I've now found that there are more status values in the database than the ones above. So I want the same functionality, but with the status value made dynamic. Basically, in a line like

SUM(CASE WHEN status_to = 'Draft' THEN 1 END) AS draft,

both occurrences of "Draft" should be dynamic.
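A hedged sketch of the usual alternative (not from the original post): instead of one hand-written column per status, return one row per status with GROUP BY, which automatically covers any status values added later.

SELECT status_to, COUNT(*) AS status_count
FROM crm_logs
GROUP BY status_to;

If the one-row, one-column-per-status layout is a hard requirement, the column list has to be built dynamically (for example by assembling the SUM(CASE ...) expressions from SELECT DISTINCT status_to in application code or a prepared statement), because SQL cannot generate columns at query time.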
JJM50 (103 rep)
Oct 12, 2021, 12:39 PM • Last activity: Jul 24, 2024, 06:35 AM
10 votes
3 answers
171856 views
How to Get the Sum (total) of a column in a query result set
Is there a way that I can get the sum of a column in my query result set by just clicking on the column (something like in Excel), so that I don't have to copy and paste the column into Excel to see the total sum of all the values in the column? I'm running SQL Server 2008.
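As far as I know, the SSMS results grid does not aggregate selected cells the way Excel does, so a hedged workaround sketch is to have the query return the total itself, for example with a window aggregate (the table and column names below are made up for illustration):

SELECT OrderID,
       Amount,
       SUM(Amount) OVER () AS TotalAmount  -- same grand total repeated on every row
FROM dbo.Orders;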
Juan Velez (3303 rep)
Mar 13, 2012, 01:34 PM • Last activity: May 22, 2024, 03:53 AM
0 votes
1 answers
79 views
How to efficiently (O(log n)) get the sum over a range (date/time range) of a column?
One column is an integer value; the second column is a datetime timestamp. I want to efficiently get the sum of the values in the first column over a datetime range (start time to end time). Is there some way to tell Postgres to maintain a segment tree (or similar range-query structure) over one column? Also, imagine the table had another column called userId; my queries would then look like "give me the sum total over a date range for this userId".
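PostgreSQL has no built-in segment tree, so an exact per-query sum is at best proportional to the number of rows in the range. A hedged sketch of the two usual approaches (table and column names are assumptions): a covering index so the range scan stays index-only, and a pre-aggregated bucket table maintained by the application or a scheduled job.

-- Covering index (INCLUDE needs PostgreSQL 11+): the sum can be computed from the index alone
CREATE INDEX ON samples (user_id, ts) INCLUDE (value);

SELECT SUM(value)
FROM   samples
WHERE  user_id = 42
AND    ts >= '2024-05-01' AND ts < '2024-05-21';

-- Pre-aggregated daily buckets: per-query work is proportional to the number of days in the range
CREATE TABLE samples_daily (
    user_id bigint NOT NULL,
    day     date   NOT NULL,
    total   bigint NOT NULL,
    PRIMARY KEY (user_id, day)
);

SELECT SUM(total)
FROM   samples_daily
WHERE  user_id = 42
AND    day BETWEEN '2024-05-01' AND '2024-05-20';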
Kuelf Deez (101 rep)
May 21, 2024, 08:08 AM • Last activity: May 21, 2024, 08:37 AM
1 votes
1 answers
139 views
PostgreSQL - Sum all rows that satisfy a condition into a single row
I'm trying to extract some statistics from a Postgres database and I made this query:
SELECT
  city,
  job_count,
  TO_CHAR(job_count * 100 / SUM(job_count) OVER (), '90D00%') AS job_share
FROM (
  SELECT
    localities.name as city,
    COUNT(*) AS job_count
  FROM jobads
  JOIN localities ON jobads.locality_uuid = localities.uuid
  WHERE jobads.external = true
  GROUP BY localities.name
  ORDER BY job_count DESC
) AS job_count_table;
Here's the result it returns:
city          | job_count | job_share
-----------------------+-----------+-----------
 City #1               |      1300 |  13.00%
 City #2               |       700 |   7.00%
 City #3               |       400 |   4.00%
 ...
 City #1200            |         1 |   0.01%
(1200 rows)
It returns hundreds of rows, and most of them have a really low job count. I would like to merge all rows that have a job_count of less than 100 into a single row, so the output would look something like this:
city          | job_count | job_share
-----------------------+-----------+-----------
 City #1               |      1300 |  13.00%
 City #2               |       700 |   7.00%
 City #3               |       400 |   4.00%
 Other cities          |      2000 |  20.00%
(4 rows)
Any idea how can I do that?
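A hedged sketch of one way to do this (not from the original post): wrap the existing aggregation one more time and regroup on a CASE expression that maps every low-volume city to 'Other cities'.

SELECT
  CASE WHEN job_count < 100 THEN 'Other cities' ELSE city END AS city,
  SUM(job_count) AS job_count,
  TO_CHAR(SUM(job_count) * 100.0 / SUM(SUM(job_count)) OVER (), '90D00%') AS job_share
FROM (
  SELECT
    localities.name AS city,
    COUNT(*) AS job_count
  FROM jobads
  JOIN localities ON jobads.locality_uuid = localities.uuid
  WHERE jobads.external = true
  GROUP BY localities.name
) AS job_count_table
GROUP BY CASE WHEN job_count < 100 THEN 'Other cities' ELSE city END
ORDER BY SUM(job_count) DESC;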
Mateja Maric (13 rep)
May 8, 2024, 01:34 PM • Last activity: May 8, 2024, 02:11 PM
0 votes
0 answers
130 views
How is the ! used in Microsoft Access queries?
The "bang" or exclamation point is used as an operator in Access expressions. Just to follow up on a previous question, is it problematic to use "!" in Microsoft Access queries to specify tables and columns? And why?

From Copilot:

> Certainly! Here are some additional examples illustrating the usage of exclamation marks (!) in Microsoft Access queries:
>
> Calculations with Form Controls: Suppose you have a form named frmSales with two textboxes: txtQuantity and txtUnitPrice. You want to calculate the total price based on user input. You can create a query like this:
>
> SELECT [frmSales]![txtQuantity] AS Quantity,
>        [frmSales]![txtUnitPrice] AS UnitPrice,
>        [frmSales]![txtQuantity] * [frmSales]![txtUnitPrice] AS TotalPrice
> FROM SomeTable;
>
> In this query, [frmSales]![txtQuantity] and [frmSales]![txtUnitPrice] refer to the values entered by the user in the respective textboxes.
>
> Filtering Based on Form Controls: Let's say you have a form named frmFilters with checkboxes for filtering data. You want to retrieve records based on the user's selections. You can construct a query like this:
>
> SELECT *
> FROM Orders
> WHERE [frmFilters]![chkHighPriority] = True
>   AND [frmFilters]![chkShipped] = False;
>
> Here, [frmFilters]![chkHighPriority] and [frmFilters]![chkShipped] correspond to the checkbox values set by the user.
>
> Joining Tables with Form Controls: If you have a form that allows users to select criteria for joining tables, you can use exclamation marks to reference those selections. For instance:
>
> SELECT Customers.CustomerName, Orders.OrderDate
> FROM Customers
> INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
> WHERE [frmJoinCriteria]![cboOrderStatus] = 'Shipped';
>
> In this query, [frmJoinCriteria]![cboOrderStatus] represents the user's choice from a combo box on the form.
>
> Remember that the exclamation mark simplifies referencing frontend objects within your queries, making your Access applications more dynamic and interactive.

From what I can see, the ! has a specific meaning in Access, but no meaning in standard SQL (beyond appearing in operators such as != in some dialects).
Nick (1 rep)
Apr 17, 2024, 05:51 AM • Last activity: Apr 17, 2024, 09:15 AM
0 votes
2 answers
78 views
MySQL: count two or more identical records as one and summarize
Good afternoon all,

I have two or more records with the same value in a MySQL table. How can I count a value as one when it has a pair of flag statuses ("IN" and "OUT"), and as zero when it has only one of them?

Below is an example of the data:

GLOBAL_ID   FLAG_STATUS
VTSA-C-B002 IN
VTSA-C-B002 OUT
VTSA-C-B003 IN
VTSA-C-B031 IN
VTSA-C-B031 OUT
VTSA-C-B031 OUT
VTSA-C-B002 OUT
VTSA-C-B005 OUT
ABCD-D-B004 IN
ABCD-D-B001 IN
ABCD-D-B003 IN
ABCD-D-B001 OUT
ABCD-D-B003 OUT
ABCD-D-B005 IN

The result I want from the counting looks like this:

Global ID    Count
VTSA-C-B002  1   (because it has a pair of flag statuses, "IN" and "OUT")
VTSA-C-B003  0   (because it does not have a pair of flag statuses, "IN" and "OUT")
VTSA-C-B031  1
VTSA-C-B005  0
ABCD-D-B004  0
ABCD-D-B001  1
ABCD-D-B003  1
ABCD-D-B005  0

And the final result should summarize those counts like this:

Global_ID alias   SUM of counting
VTSA              2
ABCD              2

I have tried the query below, but it does not give the result I am looking for:

SELECT global_id, mandays
FROM (
    SELECT COUNT(DISTINCT b.global_id) AS mandays,
           CASE WHEN b.global_id like "%ABCD%" THEN 'ABCD' ELSE 'VTSA' END AS global_id
    FROM she_cii.tb_gate_log b
    WHERE b.global_id REGEXP 'VTSA|ABCD'
      AND date(b.received_date) = "2024-02-03"
      AND flag_status IN ("IN","OUT")
    GROUP BY SUBSTRING(b.global_id FROM POSITION('-' in b.global_id) + 1 FOR 5)
) x

The result from that query looks like this:

global_id   mandays
VTSA        36   (counting all distinct global_id records)
ABCD        45   (counting all distinct global_id records)

Thank you. Regards.
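A hedged sketch of one approach (not from the original post; the table, column names and the date filter follow the query above): first decide per global_id whether both flags are present, then sum those 0/1 values per prefix.

SELECT CASE WHEN g.global_id LIKE '%ABCD%' THEN 'ABCD' ELSE 'VTSA' END AS global_id_alias,
       SUM(g.has_pair) AS mandays
FROM (
    -- 1 when a global_id has both IN and OUT that day, otherwise 0
    SELECT b.global_id,
           CASE WHEN COUNT(DISTINCT b.flag_status) = 2 THEN 1 ELSE 0 END AS has_pair
    FROM she_cii.tb_gate_log b
    WHERE b.global_id REGEXP 'VTSA|ABCD'
      AND DATE(b.received_date) = '2024-02-03'
      AND b.flag_status IN ('IN', 'OUT')
    GROUP BY b.global_id
) AS g
GROUP BY CASE WHEN g.global_id LIKE '%ABCD%' THEN 'ABCD' ELSE 'VTSA' END;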
epm.007 (29 rep)
Feb 20, 2024, 10:02 AM • Last activity: Feb 20, 2024, 01:47 PM
1 votes
2 answers
234 views
Comparing an aggregate function's result against a column
I have two tables like these:

invoices
id | total_amount | currency | discount

payments
id | invoice | amount

As you can guess, the payments table keeps records of the payments made against a specific invoice. I need a query to get invoices that have been partially paid:

SELECT invoices.*, SUM(payments.amount) as paid
FROM invoices
LEFT JOIN payments ON payments.invoice = invoices.id
GROUP BY invoices.id

With this I am able to get the paid amount, but I also need to do something like

WHERE paid > 0 AND paid < total_amount

But this says "Unknown column 'paid' in 'where clause'". How can I get this to work?
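A hedged sketch of the usual fix (not from the original post): the WHERE clause runs before aggregation, so the alias paid is not visible there; filter the aggregate with HAVING instead. MIN(total_amount) is used only to keep strict GROUP BY modes happy, since it is constant within each invoice group.

SELECT invoices.*, SUM(payments.amount) AS paid
FROM invoices
LEFT JOIN payments ON payments.invoice = invoices.id
GROUP BY invoices.id
HAVING paid > 0                            -- at least one payment
   AND paid < MIN(invoices.total_amount);  -- but not fully paid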
cem ekkazan (13 rep)
Jan 10, 2024, 06:41 PM • Last activity: Jan 10, 2024, 08:40 PM
0 votes
1 answers
21 views
Next Business Date Column
I have a dataset that looks like this (sample attached as an image), where business_day indicates whether the transaction_created_date is a business day or not.

I'm trying to sum the line_amount so that values that occurred over a holiday or weekend get added to the next business day, producing something like the desired output (also attached as an image).

Essentially, if I can capture the next business day for the rows where business_day = 0, then I can just do a sum over a partition.
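A hedged sketch, assuming a DBMS with window functions (the syntax below works in e.g. PostgreSQL and SQL Server) and table/column names taken from the description: roll every row forward to the next date that has business_day = 1 (a business-day row maps to its own date), then aggregate per business day. Rows dated after the last business day in the data would end up with a NULL business_date.

WITH rolled AS (
    SELECT line_amount,
           MIN(CASE WHEN business_day = 1 THEN transaction_created_date END)
               OVER (ORDER BY transaction_created_date
                     RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS business_date
    FROM transactions
)
SELECT business_date, SUM(line_amount) AS line_amount
FROM rolled
GROUP BY business_date
ORDER BY business_date;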
Lena Zheng (3 rep)
Jan 10, 2024, 12:28 AM • Last activity: Jan 10, 2024, 08:40 AM
0 votes
1 answers
100 views
MySQL Query duplicates sum for left join
I have two tables, Greeter_Detail and ZipCode_Detail. I'm trying to SUM the totals of People and Families from Greeter_Detail and join them on zip code, but I get duplicates. ZipCode_Detail has the city and county info.

Can anyone help? I've been on this for hours; it seems like it should be straightforward. TIA!

SELECT SUM(Families), SUM(People), Zipcode,
       ZipCode_Detail.Zipcode_D, ZipCode_Detail.City, ZipCode_Detail.County
FROM Greeter_Detail
LEFT JOIN ZipCode_Detail ON Greeter_Detail.Zipcode = ZipCode_Detail.Zipcode_D
WHERE $dateRange
GROUP BY Zipcode
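A hedged sketch of the usual fix (not from the original post; $dateRange is kept as the placeholder from the question): if ZipCode_Detail can contain more than one row per zip code, the join multiplies the rows being summed, so aggregate Greeter_Detail first and attach the lookup data afterwards.

SELECT g.Zipcode,
       g.Families,
       g.People,
       z.City,
       z.County
FROM (
    -- totals per zip code, computed before any join can duplicate rows
    SELECT Zipcode, SUM(Families) AS Families, SUM(People) AS People
    FROM Greeter_Detail
    WHERE $dateRange
    GROUP BY Zipcode
) AS g
LEFT JOIN ZipCode_Detail AS z ON z.Zipcode_D = g.Zipcode;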
angel12333 (1 rep)
Jan 4, 2024, 12:45 AM • Last activity: Jan 4, 2024, 01:48 AM
1 votes
1 answers
120 views
Limit results of complex query to most frequent values in given attribute
Say that you have the following tables for a multi store platform
CREATE TABLE orders (
  id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  store_id BIGINT NOT NULL,
  ordered_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX ON orders (store_id);
CREATE TABLE order_lines (
  id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  order_id BIGINT NOT NULL REFERENCES orders (id) ON DELETE CASCADE,
  item_id BIGINT NOT NULL REFERENCES items (id),
  quantity INT
);

CREATE INDEX ON order_lines (order_id);
CREATE INDEX ON order_lines (item_id);
CREATE TABLE items (
  id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  store_id BIGINT NOT NULL,
  color VARCHAR(255) NOT NULL,
  size VARCHAR(255) NOT NULL,
  category VARCHAR(255)
);
CREATE TABLE returns (
  id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
  order_line_id BIGINT NOT NULL REFERENCES order_lines (id) ON DELETE CASCADE,
  returned_at TIMESTAMPTZ NOT NULL,
  quantity INT NOT NULL,
  reason VARCHAR(255)
);

CREATE INDEX ON returns (order_line_id);
From this I would like to get a list of all orders during a date range and calculate certain metrics, like how many items were bought, how many were returned, etc. I would also like to do this for a subset of the items based on color or size, but I also want them to be "ranked". For example, I would like to show these metrics for the items whose color was among the most returned colors overall. What I have come up with so far is to do this in two queries. The first goes through all of the returns, groups them by color, and computes the return rate, like below:
select
  i.color,
  trunc(sum(r.quantity)::numeric / sum(ol.quantity)::numeric, 2) as return_rate 
from orders o
inner join
  order_lines ol on ol.order_id = o.id
inner join
  items i on i.id = ol.item_id
left outer join
  returns r on r.order_line_id = ol.id
group by i.color
order by return_rate desc nulls last 
limit 4;

    color     | return_rate 
--------------+-------------
 Black        |       0.43
 Blue         |       0.41
 White        |       0.40
 Yellow       |       0.39
Based on this query I would then do a new one that groups all orders by day (date) and sums up the total return rate and the return rate for the top returned colors during a time range. I would also like to be able to filter this by, for example, size or category on the items. It will be used for a dynamic report where people can see the total number of orders, returns, and the return rate, as well as a line chart of the average return rate and of the top returned colors over a selected time range.

Is there any better way of doing this? It feels a bit wrong to traverse and do all of the joins twice. I read a bit about window functions but wasn't able to figure out whether they are applicable here. I started a fiddle with my naive approach as well as one with a CTE to try to reuse the bulk data: https://www.db-fiddle.com/f/4jyoMCicNSZpjMt4jFYoz5/10700

It obviously performs a bit differently locally with around 100k rows; here's an EXPLAIN ANALYZE plan of the CTE query run locally.
QUERY PLAN                                                                                           
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=50730.74..50809.91 rows=2262 width=68) (actual time=350.787..351.912 rows=722 loops=1)
   Group Key: ((order_return_items.date)::date), order_return_items.color
   CTE order_return_items
     ->  HashAggregate  (cost=33884.69..39240.19 rows=226239 width=33) (actual time=211.390..258.041 rows=194309 loops=1)
           Group Key: o.ordered_at, i.color
           Planned Partitions: 4  Batches: 5  Memory Usage: 9265kB  Disk Usage: 7736kB
           ->  Hash Left Join  (cost=5740.52..14725.07 rows=226239 width=25) (actual time=44.080..158.610 rows=226902 loops=1)
                 Hash Cond: (ol.id = r.order_line_id)
                 ->  Hash Join  (cost=3754.69..9321.56 rows=226239 width=29) (actual time=31.592..107.385 rows=226239 loops=1)
                       Hash Cond: (ol.item_id = i.id)
                       ->  Hash Join  (cost=3572.53..8544.81 rows=226239 width=28) (actual time=29.966..79.287 rows=226239 loops=1)
                             Hash Cond: (ol.order_id = o.id)
                             ->  Seq Scan on order_lines ol  (cost=0.00..4378.39 rows=226239 width=28) (actual time=0.012..10.556 rows=226239 loops=1)
                             ->  Hash  (cost=2363.90..2363.90 rows=96690 width=16) (actual time=29.931..29.931 rows=96690 loops=1)
                                   Buckets: 131072  Batches: 1  Memory Usage: 5557kB
                                   ->  Seq Scan on orders o  (cost=0.00..2363.90 rows=96690 width=16) (actual time=0.005..13.289 rows=96690 loops=1)
                       ->  Hash  (cost=137.63..137.63 rows=3563 width=17) (actual time=1.620..1.621 rows=4462 loops=1)
                             Buckets: 8192 (originally 4096)  Batches: 1 (originally 1)  Memory Usage: 300kB
                             ->  Seq Scan on items i  (cost=0.00..137.63 rows=3563 width=17) (actual time=0.014..0.754 rows=4462 loops=1)
                 ->  Hash  (cost=1248.70..1248.70 rows=58970 width=12) (actual time=12.385..12.385 rows=61098 loops=1)
                       Buckets: 65536  Batches: 1  Memory Usage: 3376kB
                       ->  Seq Scan on returns r  (cost=0.00..1248.70 rows=58970 width=12) (actual time=0.007..5.392 rows=61098 loops=1)
   ->  Sort  (cost=11490.55..11496.20 rows=2262 width=52) (actual time=350.765..351.008 rows=6567 loops=1)
         Sort Key: ((order_return_items.date)::date), order_return_items.color
         Sort Method: quicksort  Memory: 626kB
         ->  Hash Join  (cost=6227.62..11364.52 rows=2262 width=52) (actual time=323.014..349.528 rows=6567 loops=1)
               Hash Cond: (order_return_items.color = most_returned_colors.color)
               ->  CTE Scan on order_return_items  (cost=0.00..4524.78 rows=226239 width=56) (actual time=211.393..223.913 rows=194309 loops=1)
               ->  Hash  (cost=6227.60..6227.60 rows=2 width=32) (actual time=111.569..111.571 rows=2 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 9kB
                     ->  Subquery Scan on most_returned_colors  (cost=6227.57..6227.60 rows=2 width=32) (actual time=111.564..111.566 rows=2 loops=1)
                           ->  Limit  (cost=6227.57..6227.58 rows=2 width=64) (actual time=111.564..111.565 rows=2 loops=1)
                                 ->  Sort  (cost=6227.57..6228.07 rows=200 width=64) (actual time=111.562..111.563 rows=2 loops=1)
                                       Sort Key: (trunc((sum(order_return_items_1.r_q) / sum(order_return_items_1.ol_q)), 3)) DESC NULLS LAST
                                       Sort Method: top-N heapsort  Memory: 25kB
                                       ->  HashAggregate  (cost=6221.57..6225.57 rows=200 width=64) (actual time=111.538..111.550 rows=44 loops=1)
                                             Group Key: order_return_items_1.color
                                             Batches: 1  Memory Usage: 48kB
                                             ->  CTE Scan on order_return_items order_return_items_1  (cost=0.00..4524.78 rows=226239 width=48) (actual time=0.000..85.098 rows=194309 loops=1)
Would also appreciate feedback on the overall structure as well as the indices if any!
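A hedged sketch of a single-pass variant (an assumption-laden sketch, not a benchmarked answer): materialize the joined rows once in a CTE, rank the colors with a window function over the per-color aggregates, and reuse the same CTE for the daily rollup. The date range and the top-4 cutoff are placeholders.

WITH line_data AS (                 -- one pass over the joins
    SELECT o.ordered_at::date       AS day,
           i.color,
           ol.quantity              AS ordered_qty,
           COALESCE(r.quantity, 0)  AS returned_qty
    FROM orders o
    JOIN order_lines ol ON ol.order_id = o.id
    JOIN items i        ON i.id = ol.item_id
    LEFT JOIN returns r ON r.order_line_id = ol.id
    WHERE o.ordered_at >= now() - interval '1 year'  -- example date range
),
color_rank AS (                     -- per-color return rate turned into a rank
    SELECT color,
           RANK() OVER (ORDER BY SUM(returned_qty)::numeric
                                 / NULLIF(SUM(ordered_qty), 0) DESC NULLS LAST) AS rnk
    FROM line_data
    GROUP BY color
)
SELECT d.day,
       SUM(d.returned_qty)::numeric / NULLIF(SUM(d.ordered_qty), 0) AS return_rate_all,
       (SUM(d.returned_qty) FILTER (WHERE c.rnk <= 4))::numeric
         / NULLIF(SUM(d.ordered_qty) FILTER (WHERE c.rnk <= 4), 0)  AS return_rate_top_colors
FROM line_data d
JOIN color_rank c USING (color)
GROUP BY d.day
ORDER BY d.day;

Filters on size or category would go into the WHERE clause of line_data. The CTE is still read twice, but the joins themselves are only performed once.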
nbon (113 rep)
Oct 30, 2023, 01:33 PM • Last activity: Nov 1, 2023, 01:40 AM
3 votes
1 answers
2161 views
Adding two columns results in concatenation instead of addition
I need help using addition; when I try, Access concatenates instead. I used the following in my Access database:

Total: [ANTI TERROR]+[Records]

Instead of returning 1 + 1 = 2, it returns 11 - i.e. it's concatenating the values.
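This usually means the two fields are Text (or one of them is Null), so + concatenates rather than adds. A hedged sketch of the common workaround, assuming the stored values are numeric strings: convert explicitly with Val(), and guard empty values with Nz(). Access expressions have no comment syntax, so the explanation stays here.

Total: Val(Nz([ANTI TERROR], "0")) + Val(Nz([Records], "0"))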
user68849 (31 rep)
Jun 18, 2015, 08:05 PM • Last activity: Oct 31, 2023, 10:45 AM
0 votes
4 answers
6916 views
New field based on sum field
I'm trying to create a new field based on another one (a sum field):

SELECT sum(price) AS old_price, old_price + 100 AS total_price FROM sales

This returns **Unknown column 'old_price' in 'field list'**.

sum is just an example; my real issue is using a SELECT inside this field. What am I doing wrong? Thanks.

--edit

I've used variables to reuse the alias; updated code:

SELECT @old_price := sum(price) AS old_price, @old_price + 100 AS total_price FROM sales

If you think this is not good practice, tell me.
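A hedged sketch of a variable-free alternative (not from the original post): a column alias cannot be referenced in the same SELECT list, and the @variable trick relies on an evaluation order MySQL does not guarantee, so compute the aggregate in a derived table and reference it by name.

SELECT t.old_price,
       t.old_price + 100 AS total_price  -- any expression exposed by the derived table can be reused here
FROM (
    SELECT SUM(price) AS old_price
    FROM sales
) AS t;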
Juliano Lima (115 rep)
Mar 5, 2014, 07:02 PM • Last activity: Oct 20, 2023, 03:18 PM
-2 votes
1 answers
64 views
Using sum and datediff from 2 tables
I am using two tables:

Members (MemberID, MemberName, Department, JoinDate, LeaveDate)
Subscriptions (ID, MemberName, DatePaid, AmountPaid)

I have combined them:

SELECT Members.MemberID, Members.MemberName, Members.Department,
       Members.JoinDate, Members.LeaveDate,
       Subscriptions.MemberName, Subscriptions.DatePaid
FROM Subscriptions
LEFT JOIN Members ON Subscriptions.MemberName = Members.MemberName

I am trying to add a column that calculates the number of weeks - DateDiff(JoinDate, CurDate()) as Weeks - and also Sum(Subscriptions.AmountPaid). I can do each individually, but I want both in the same result.

Environment: phpMyAdmin, client libmysql - mysqlnd 8.2.11, PHP 8.2.11 (mysqli extension).
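A hedged sketch of one way to combine the two (not from the original post; it assumes one summary row per member is wanted). Note that MySQL's DATEDIFF(a, b) returns a - b in days, so CURDATE() should come first.

SELECT m.MemberID,
       m.MemberName,
       m.Department,
       m.JoinDate,
       FLOOR(DATEDIFF(CURDATE(), m.JoinDate) / 7) AS Weeks,      -- whole weeks since joining
       SUM(s.AmountPaid)                          AS TotalPaid   -- NULL when no subscriptions exist
FROM Members m
LEFT JOIN Subscriptions s ON s.MemberName = m.MemberName
GROUP BY m.MemberID, m.MemberName, m.Department, m.JoinDate;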
Mark V (9 rep)
Oct 16, 2023, 07:52 AM • Last activity: Oct 16, 2023, 12:46 PM
Showing page 1 of 20 total questions