Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
162
views
Need a query for accumulated values
I need a query for accumulated values from a particular column, bucketed by a specific time interval.
time | value
--------------------------
2020-02-26 09:30:00 | 5
2020-02-26 09:31:00 | 3
2020-02-26 09:32:00 | 4
2020-02-26 09:33:00 | 5
2020-02-26 10:31:00 | 5
2020-02-26 10:32:00 | 2
2020-02-26 10:34:00 | 4
2020-02-26 10:36:00 | 6
Fetch the data from the table interval-wise (15 min, 1 hour, 1 day), calculate the max value per interval, and finally calculate the accumulated total (values 1, 2, 3 should become 1, 3, 6).
For example, with a 1-hour interval the result would be:
time | value
--------------------------
2020-02-26 09:00:00 | 5
2020-02-26 10:00:00 | 6
and the accumulated result would be:
time | value
--------------------------
2020-02-26 09:00:00 | 5
2020-02-26 10:00:00 | 11
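A minimal sketch of the usual pattern (PostgreSQL-style; the table name `readings` is an assumption, and `date_trunc` would be swapped for 15-minute bucketing as needed): truncate the timestamp to the interval, take the max per bucket, then accumulate with a running-sum window:
```sql
SELECT bucket,
       max_value,
       SUM(max_value) OVER (ORDER BY bucket) AS accumulated
FROM (
    SELECT date_trunc('hour', time) AS bucket,  -- the requested interval
           MAX(value) AS max_value
    FROM readings
    GROUP BY 1
) b
ORDER BY bucket;
```
On the sample above this yields (09:00, 5) and (10:00, 11), matching the accumulated result.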
Avz Vicky
(1 rep)
Feb 26, 2020, 11:49 AM
• Last activity: Jul 12, 2025, 05:05 PM
1
votes
1
answers
40
views
Make sequential groups of rows that surpass a minimum sum
Let's say I have a table with a timestamp (`ts`) and a value (`a`).
With some sample data:
ts, a
1, 10
2, 30
3, 10
4, 20
5, 40
6, 10
7, 20
8, 30
etc
I would like to group rows, in `ts` order, by summing `a` and outputting a row where the sum of `a` is > 50.
In this case, the output should be two timestamps (first and last row):
1, 3 -- 10 + 30 + 10
4, 5 -- 20 + 40
6, 8 -- 10 + 20 + 30
But I have no clue how to proceed. All my attempts with Copilot yield complex code that never finishes.
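One hedged approach is a recursive CTE that carries a running sum and opens a new group once the threshold is reached. A sketch in PostgreSQL syntax, assuming the table is `t(ts, a)` and the `ts` values are gapless integers as in the sample (otherwise number the rows first); note the expected output treats a sum of exactly 50 as reaching the threshold:
```sql
WITH RECURSIVE walk AS (
    SELECT ts, a, ts AS grp_start, a AS running
    FROM t
    WHERE ts = (SELECT min(ts) FROM t)
    UNION ALL
    SELECT t.ts, t.a,
           -- once the running sum has reached 50, the next row opens a new group
           CASE WHEN w.running >= 50 THEN t.ts ELSE w.grp_start END,
           CASE WHEN w.running >= 50 THEN t.a  ELSE w.running + t.a END
    FROM walk w
    JOIN t ON t.ts = w.ts + 1          -- gapless ts assumed
)
SELECT grp_start    AS first_ts,
       max(ts)      AS last_ts,
       max(running) AS group_sum
FROM walk
GROUP BY grp_start
ORDER BY first_ts;
```
On the sample this produces (1, 3), (4, 5), (6, 8) with group sums 50, 60, 60.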
Thomas
(355 rep)
Aug 12, 2024, 04:14 PM
• Last activity: Aug 14, 2024, 01:09 AM
6
votes
1
answers
9493
views
Calculate Quantity Based on First in, First Out (FIFO)
I'm trying to get quantities allocated on a FIFO basis, using the two tables below:
Table Purchase:
| PO | Date | Quantity | Item |
|-------|--------------|----------|------|
| PO001 | 01-Jan-2016 | 3 | AO21 |
| PO002 | 10-Jan-2016 | 7 | AO21 |
| PO003 | 01-Feb-2016 | 3 | AO21 |
Table Stock:
| SO | Date | Quantity | Item |
|-------|-------------|----------|------|
| SO001 | 02-Jan-2016 | 2 | AO21 |
| SO002 | 11-Feb-2016 | 8 | AO21 |
| SO003 | 12-Feb-2016 | 6 | AO23 |
I want output to be like this :
| SO | PO | Quantity |
|-------|-------|----------|
| SO001 | PO001 | 2 |
| SO002 | PO001 | 1 |
| SO002 | PO003 | 7 |
Do you have any idea of a query to produce this output, with the calculated SO and PO per row? Some more explanation:
> Where do the 2, 1, 7 come from in the wanted result?
From `Stock` and `Purchase`. The first (by date) stock value for item `A021` has 2 and the first purchase (`PO001`) needs 3, so stock sold 2 and we get this row in the result:
| SO001 | PO001 | 2 |
We still need 1 more for the purchase, and the next stock value is 8. So this purchase is completed and we get 1 (and 7 left in stock):
| SO002 | PO001 | 1 |
The next purchase (`PO002`) needs 7 and we have exactly 7 left, so the purchase is completed (and 0 stock left for that item). We get:
| SO002 | PO003 | 7 |
Purchase `PO003` needs 3 but there is no stock left, so we get no rows in the result for that purchase.
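A set-based sketch of the classic FIFO matching trick: turn both tables into running-total intervals per item and intersect them. `LEAST`/`GREATEST` assume PostgreSQL/MySQL (use CASE expressions on older SQL Server); note that, following the explanation above, the allocation of 7 lands on `PO002`:
```sql
WITH p AS (
    SELECT PO, Item,
           SUM(Quantity) OVER (PARTITION BY Item ORDER BY Date) - Quantity AS lo,
           SUM(Quantity) OVER (PARTITION BY Item ORDER BY Date)            AS hi
    FROM Purchase
), s AS (
    SELECT SO, Item,
           SUM(Quantity) OVER (PARTITION BY Item ORDER BY Date) - Quantity AS lo,
           SUM(Quantity) OVER (PARTITION BY Item ORDER BY Date)            AS hi
    FROM Stock
)
SELECT s.SO, p.PO,
       LEAST(s.hi, p.hi) - GREATEST(s.lo, p.lo) AS Quantity
FROM s
JOIN p ON p.Item = s.Item
      AND p.lo < s.hi      -- the two running-total intervals overlap
      AND s.lo < p.hi
ORDER BY s.SO, p.PO;
```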
Mei_R
(63 rep)
Nov 8, 2016, 01:14 PM
• Last activity: Dec 6, 2023, 03:02 PM
0
votes
1
answers
80
views
Calculating Running Total in SQL
I have been looking online for an answer but do not really know how to formulate what I would like to achieve, or whether it's possible; sorry if the question sounds dumb.
I have site data with a PC count and AADT groups 1, 2 and 3 for each site. The data will remain sorted by ROWID and partitioned by AADT_Group.
My existing data is shown in an image in the original post.
I want the output to have two more columns, PCsum and NAF_Driver/Vehsum: the running total of PC in PCsum up to the point where it reaches 5, and the corresponding running total of NAF_Driver/Veh in NAF_Driver/Vehsum. The running total should restart after reaching the limit of 5, and with each different AADT_Group.
The desired output is also shown as an image in the original post.
I used this SQL code:
DECLARE @point INT = 5;

;WITH x(ROWID, AADT_Group, PC, R, PCsum) AS
(
    SELECT
        ROWID, AADT_Group, PC,
        ROW_NUMBER() OVER (ORDER BY ROWID),
        SUM(PC) OVER (PARTITION BY AADT_Group ORDER BY ROWID RANGE UNBOUNDED PRECEDING)
    FROM dbo.CAUrban4LaneDiv$
)
SELECT x.ROWID, x.AADT_Group, x.PC, x.PCsum
FROM x
WHERE x.PCsum < @point
ORDER BY x.ROWID;
But it didn't give me the desired result.
TIA
Sadia Sharmin
(1 rep)
Jun 14, 2019, 03:01 PM
• Last activity: Aug 21, 2023, 04:11 AM
2
votes
2
answers
318
views
Group by running totals with reset
### Sample Data
(shown as an image in the original post)
### Expected Data
(shown as an image in the original post)
Need to append trades under an order in a record, which I was able to do using `string_agg`; now I need to split the orders with a maximum quantity of 10. That is, the quantity for a single order should not exceed 10, and I need to combine trades under a single order as much as possible, as fewer amended orders are desirable.
For example: I can combine tradeid 11 and 12, the running total is 9, but when I add tradeid 13 it becomes 12 (exceeds 10), so that needs to be in a separate order.
I am aiming to rank the trades so that I can group by orderid and rank to get the result. Using SQL Server 2016.
In reality, the maximum would be about 1000 orders at 10 trades per order. A good approximate solution should be good enough.
Script to create the sample table:
create table #order
(
orderid varchar(100),
tradeid varchar(100),
quantity int
)
insert into #order (orderid, tradeid, quantity)
values
('a', 'a1', 5),
('a', 'a2', 4),
('a', 'a3', 3),
('a', 'a4', 12),
('b', 'b1', 9),
('b', 'b2', 8),
('b', 'b3', 7)
### Second example
INSERT INTO @Test
(OrderID, TradeID, Qty)
VALUES ('a', 11, 5)
, ('a', 12, 4)
, ('a', 13, 3)
, ('a', 14, 5)
, ('a', 15, 1)
, ('b', 21, 9)
, ('b', 22, 8)
, ('b', 23, 7)
### Expected Result
(shown as an image in the original post)
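A hedged greedy sketch (SQL Server, against the `#order` table above): walk the trades per order in `tradeid` order with a recursive CTE, opening a new split whenever adding a trade would push the running quantity over 10. A single trade over the cap (like `a4` with quantity 12) simply gets its own split:
```sql
WITH t AS (
    SELECT orderid, tradeid, quantity,
           ROW_NUMBER() OVER (PARTITION BY orderid ORDER BY tradeid) AS rn
    FROM #order
), walk AS (
    SELECT orderid, tradeid, quantity, rn,
           quantity AS running, 1 AS split_no
    FROM t
    WHERE rn = 1
    UNION ALL
    SELECT t.orderid, t.tradeid, t.quantity, t.rn,
           CASE WHEN w.running + t.quantity > 10
                THEN t.quantity ELSE w.running + t.quantity END,
           CASE WHEN w.running + t.quantity > 10
                THEN w.split_no + 1 ELSE w.split_no END
    FROM walk w
    JOIN t ON t.orderid = w.orderid AND t.rn = w.rn + 1
)
SELECT orderid, split_no, tradeid, quantity, running
FROM walk
ORDER BY orderid, rn;
```
Grouping the result by `orderid, split_no` then gives one row per amended order.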
macro32
(23 rep)
Aug 18, 2023, 11:07 AM
• Last activity: Aug 20, 2023, 12:10 PM
2
votes
1
answers
284
views
Determining the overlap between payments and invoices
Given a table containing a list of invoices and a table containing a list of payments, I have the following problem:
- Payments should be allocated to invoices up to the total amount of the invoice.
- If the size of the payment is greater than the remaining amount on the invoice, the remainder should be allocated to the next invoice in line.
- If there are no more invoices in line, the total amount of the invoice can be exceeded. All remaining payments can be allocated to the final invoice.
For example, given the following tables:
| InvoicedId | InvoiceAmount |
| -------- | -------------- |
| 1 | 5.00 |
| 2 | 5.00 |
| 3 | 5.00 |
| PaymentId | PaymentAmount |
| -------- | -------------- |
| 1 | 4.00 |
| 2 | 4.00 |
| 3 | 4.00 |
| 4 | 4.00 |
The desired result would be the following:
| InvoiceId | InvoiceAmount | PaymentId | PaymentAmount |
| -------- | -------------- | -------- | -------------- |
| 1 | 5.00 | 1 | 4.00 |
| 1 | 5.00 | 2 | 1.00 |
| 2 | 5.00 | 2 | 3.00 |
| 2 | 5.00 | 3 | 2.00 |
| 3 | 5.00 | 3 | 2.00 |
| 3 | 5.00 | 4 | 4.00 |
The best solution I was able to come up with involves loops and/or cursors, but my gut tells me there's a much better way to do it that I'm just not seeing.
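The loop can usually be replaced by the running-total interval intersection trick: each invoice and each payment occupies an interval on a shared money axis, and each overlap is an allocation. A PostgreSQL-flavored sketch (`Invoices` and `Payments` are assumed table names; `LEAST`/`GREATEST` would need CASE expressions on older SQL Server), with the final invoice's upper bound left unbounded so it absorbs any excess:
```sql
WITH i AS (
    SELECT InvoiceId, InvoiceAmount,
           SUM(InvoiceAmount) OVER (ORDER BY InvoiceId) - InvoiceAmount AS lo,
           CASE WHEN InvoiceId = MAX(InvoiceId) OVER ()
                THEN NULL   -- unbounded: the last invoice takes the remainder
                ELSE SUM(InvoiceAmount) OVER (ORDER BY InvoiceId) END   AS hi
    FROM Invoices
), p AS (
    SELECT PaymentId, PaymentAmount,
           SUM(PaymentAmount) OVER (ORDER BY PaymentId) - PaymentAmount AS lo,
           SUM(PaymentAmount) OVER (ORDER BY PaymentId)                 AS hi
    FROM Payments
)
SELECT i.InvoiceId, i.InvoiceAmount, p.PaymentId,
       LEAST(p.hi, COALESCE(i.hi, p.hi)) - GREATEST(i.lo, p.lo) AS PaymentAmount
FROM i
JOIN p ON p.hi > i.lo
      AND (i.hi IS NULL OR p.lo < i.hi)
ORDER BY i.InvoiceId, p.PaymentId;
```
On the sample data this reproduces the six desired allocation rows.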
warhorus
(23 rep)
Jun 9, 2023, 01:52 PM
• Last activity: Jun 10, 2023, 01:15 PM
2
votes
2
answers
6783
views
Add cumulative sum to time-series query PostgreSQL 9.5
I wrote a query that gives me a time series over some date range and interval, showing revenue for each time interval:
SELECT
interval_date,
coalesce(campaign_revenue,0) AS campaign_revenue
FROM
-- generate_series helps fill the empty gaps in the following JOIN
generate_series(
$2::timestamp,
$3::timestamp,
$4) AS interval_date -- could be '1 day', '1 hour' or '1 minute'.
LEFT OUTER JOIN
-- This SELECT gets all timeseries rows that have data
(SELECT
date_trunc($4, s.created) AS interval,
SUM(s.revenue) campaign_revenue
FROM
sale_event AS s
WHERE
s.campaignid = $1 AND s.created BETWEEN $2 AND $3 AND s.event_type = 'session_closed'
GROUP BY
interval) results
ON
(results.interval = interval_date);
The query takes every row of the `sale_event` table, truncates the created date to some interval (aligns the `created` timestamp with the wanted time-series granularity), groups by this time interval, and sums up the `revenue` column on the rows where `event_type` is `session_closed`.
This works very well and gives me the revenue in the specified interval. The result may look like:
interval_date | campaign_revenue
------------------------------------
2018-08-05 | 0.0
2018-08-06 | 1.5
2018-08-07 | 0.0
2018-08-08 | 0.5
2018-08-09 | 1.0
When the provided range is `2018-08-05` to `2018-08-09` and `interval = '1 day'`.
I want to add to the result the sum of revenue up to each date. So if before `2018-08-05` there was a total revenue of `10.0`, the result would be:
interval_date | campaign_revenue | total_campaign_revenue
-----------------------------------------------------------------
2018-08-05 | 0.0 | 10.0
2018-08-06 | 1.5 | 11.5
2018-08-07 | 0.0 | 11.5
2018-08-08 | 0.5 | 12.0
2018-08-09 | 1.0 | 13.0
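A sketch of one way to get there: wrap the time-series query, add a running sum of the per-interval revenue, and seed it with the pre-range total from a scalar subquery (the derived-table comment stands in for the full query above):
```sql
SELECT interval_date,
       campaign_revenue,
       (SELECT coalesce(sum(revenue), 0)
        FROM sale_event
        WHERE campaignid = $1
          AND event_type = 'session_closed'
          AND created < $2)                                   -- revenue before the range
       + SUM(campaign_revenue) OVER (ORDER BY interval_date)  -- running sum inside it
         AS total_campaign_revenue
FROM ( /* the time-series query above */ ) q;
```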
Alechko
(229 rep)
Aug 10, 2018, 09:48 AM
• Last activity: Apr 22, 2023, 08:24 AM
2
votes
2
answers
131
views
SQL Sum with break-out (audits on inventory)
I'm having some difficulty getting this MySQL query to work. I have the following tables:
create table items (
item_id int not null auto_increment,
item_name varchar(25),
primary key (item_id)
);
create table types
(
type_id int not null auto_increment,
type_description varchar(20),
type_val_mod int,
primary key (type_id)
);
create table log(
log_id int not null auto_increment,
item_id int not null,
log_qty int,
log_type int not null,
primary key (log_id)
);
default values:
|type_id|type_description|type_val_mod|
|-------|----------------|------------|
|1 |INVOICE |1 |
|2 |SALE |-1 |
|3 |AUDIT |0 |
and here is the sample data:
|log_id|item_name|type_description|log_qty|qty|
|------|---------|----------------|-------|---|
|2 |WIDGET A |SALE |1 |-1 |
|3 |WIDGET A |SALE |3 |-3 |
|4 |WIDGET A |SALE |2 |-2 |
|5 |WIDGET A |INVOICE |10 |10 |
|6 |WIDGET A |AUDIT |60 |0 |
|7 |WIDGET A |SALE |5 |-5 |
|1 |WIDGET A |INVOICE |100 |100|
|11 |WIDGET B |SALE |2 |-2 |
|14 |WIDGET B |SALE |5 |-5 |
|13 |WIDGET B |INVOICE |60 |60 |
|12 |WIDGET B |AUDIT |10 |0 |
|10 |WIDGET B |SALE |1 |-1 |
|9 |WIDGET B |SALE |1 |-1 |
|8 |WIDGET B |INVOICE |10 |10 |
|15 |WIDGET C |SALE |1 |-1 |
|16 |WIDGET C |INVOICE |15 |15 |
|17 |WIDGET C |SALE |1 |-1 |
|18 |WIDGET C |SALE |2 |-2 |
|19 |WIDGET C |AUDIT |10 |0 |
|20 |WIDGET C |INVOICE |60 |60 |
|21 |WIDGET C |SALE |5 |-5 |
I want a running total column where SALE subtracts from the total and INVOICE adds to it, but AUDIT resets the value to whatever the audit quantity is. So it's a conditional sum that resets on each audit. Likewise, each item is its own group.
This is what I have so far, but I cannot get it to work and have been playing around with it for days.
with recursive rn as (
select log_id, item_name, type_description, log_qty,
log_qty*types.type_val_mod as qty,
row_number() over (partition by log.item_id order by log.log_id) as row_num,
if(types.type_description="AUDIT",log_qty,null) as reset
from log
inner join types on log.log_type=types.type_id
inner join items on log.item_id=items.item_id),
updated_log as(
select log_id, item_name, type_description,row_num, log_qty, qty, reset,
sum(if(type_description="AUDIT", 0,qty)) over (partition by item_name order by row_num rows between 0 preceding and 1 following) as new
from rn
) select * from updated_log;
which gives the following (fiddle in the original post):
|log_id|item_name|type_description|log_qty|qty|row_num|reset|running_total|
|------|---------|----------------|-------|---|-------|-----|-------------|
|1 |WIDGET A |INVOICE |100 |100|1 |null |100 |
|2 |WIDGET A |SALE |1 |-1 |2 |null |99 |
|3 |WIDGET A |SALE |3 |-3 |3 |null |96 |
|4 |WIDGET A |SALE |2 |-2 |4 |null |94 |
|5 |WIDGET A |INVOICE |10 |10 |5 |null |104 |
|6 |WIDGET A |AUDIT |60 |0 |6 |60 |104 |
|7 |WIDGET A |SALE |5 |-5 |7 |null |99 |
|8 |WIDGET B |INVOICE |10 |10 |1 |null |10 |
|9 |WIDGET B |SALE |1 |-1 |2 |null |9 |
|10 |WIDGET B |SALE |1 |-1 |3 |null |8 |
|11 |WIDGET B |SALE |2 |-2 |4 |null |6 |
|12 |WIDGET B |AUDIT |10 |0 |5 |10 |6 |
|13 |WIDGET B |INVOICE |60 |60 |6 |null |66 |
|14 |WIDGET B |SALE |5 |-5 |7 |null |61 |
|15 |WIDGET C |SALE |1 |-1 |1 |null |-1 |
|16 |WIDGET C |INVOICE |15 |15 |2 |null |14 |
|17 |WIDGET C |SALE |1 |-1 |3 |null |13 |
|18 |WIDGET C |SALE |2 |-2 |4 |null |11 |
|19 |WIDGET C |AUDIT |10 |0 |5 |10 |11 |
|20 |WIDGET C |INVOICE |60 |60 |6 |null |71 |
|21 |WIDGET C |SALE |5 |-5 |7 |null |66 |
Any thoughts would be greatly appreciated.
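A sketch (MySQL 8 window functions, matching the tables above): number an "epoch" per item that increments at every AUDIT row, then add the epoch's audit baseline to a running sum of the signed quantities within it:
```sql
WITH base AS (
    SELECT l.log_id, i.item_name, t.type_description, l.log_qty,
           l.log_qty * t.type_val_mod AS qty,
           -- audits seen so far: rows at or after an audit share its epoch
           SUM(t.type_description = 'AUDIT')
               OVER (PARTITION BY l.item_id ORDER BY l.log_id) AS epoch,
           CASE WHEN t.type_description = 'AUDIT' THEN l.log_qty ELSE 0 END AS baseline
    FROM log l
    JOIN types t ON t.type_id = l.log_type
    JOIN items i ON i.item_id = l.item_id
)
SELECT log_id, item_name, type_description, log_qty, qty,
       MAX(baseline) OVER (PARTITION BY item_name, epoch)
       + SUM(qty) OVER (PARTITION BY item_name, epoch ORDER BY log_id) AS running_total
FROM base
ORDER BY item_name, log_id;
```
On the sample data, WIDGET A's audit row comes out as 60 and the following sale as 55: the audit value replaces the computed balance.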
Jason Chvat
(23 rep)
Feb 19, 2023, 08:47 PM
• Last activity: Feb 20, 2023, 06:54 AM
14
votes
3
answers
14302
views
How to get the last not-null value in an ordered column of a huge table?
I have the following input:
id | value
----+-------
1 | 136
2 | NULL
3 | 650
4 | NULL
5 | NULL
6 | NULL
7 | 954
8 | NULL
9 | 104
10 | NULL
I expect the following result:
id | value
----+-------
1 | 136
2 | 136
3 | 650
4 | 650
5 | 650
6 | 650
7 | 954
8 | 954
9 | 104
10 | 104
The trivial solution would be to join the table with itself, picking for every row the greatest id at or below it that has a non-null value:
WITH tmp AS (
    SELECT t2.id, MAX(t1.id) AS lastKnownId
    FROM t t1, t t2
    WHERE t1.value IS NOT NULL
      AND t2.id >= t1.id
    GROUP BY t2.id
)
SELECT
    tmp.id, t.value
FROM t, tmp
WHERE t.id = tmp.lastKnownId;
However, the trivial execution of this code would internally create the square of the input table's row count (*O(n^2)*). I expected T-SQL to optimize it away: at the block/record level the task is very easy and linear, essentially a for loop (*O(n)*).
However, in my experiments, the latest MS SQL 2016 can't optimize this query, making it impossible to execute for a large input table.
Furthermore, the query has to run quickly, making a similarly easy (but very different) cursor-based solution infeasible.
Using some memory-backed temporary table could be a good compromise, but I am not sure it would run significantly quicker, considering that my example query using subqueries didn't work.
I am also thinking of digging out some windowing function from the T-SQL docs that could be tricked into doing what I want. For example, cumulative sum does something very similar, but I couldn't trick it into giving the latest non-null element rather than the sum of the elements before.
The ideal solution would be a quick query without procedural code or temporary tables. Alternatively, also a solution with temporary tables is okay, but iterating the table procedurally is not.
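For what it's worth, there is a well-known two-window trick that does this in a single ordered pass (SQL Server 2012+); a sketch, assuming the table is `t(id, value)`:
```sql
SELECT id,
       MAX(value) OVER (PARTITION BY grp) AS value
FROM (
    SELECT id, value,
           -- COUNT ignores NULLs, so the counter only advances on non-null
           -- rows; every NULL row shares the grp of the last non-null one
           COUNT(value) OVER (ORDER BY id) AS grp
    FROM t
) s
ORDER BY id;
```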
peterh
(2137 rep)
Mar 31, 2019, 05:19 PM
• Last activity: Jan 15, 2023, 03:31 PM
2
votes
1
answers
1073
views
Cumulative sum in a period of months
I have this table:
| month_rep | fruits | harvested |
| ----- | ------------- | --------- |
| 2021-09-01 | 139 | 139 |
| 2021-10-01 | 143 | 11 |
| 2021-11-01 | 152 | 14 |
| 2021-12-01 | 112 | 9 |
| 2022-01-01 | 133 | 10 |
| 2022-02-01 | 145 | 12 |
| 2022-03-01 | 123 | 5 |
| 2022-04-01 | 111 | 4 |
| 2022-05-01 | 164 | 9 |
| 2022-06-01 | 135 | 12 |
| 2022-07-01 | 124 | 14 |
| 2022-08-01 | 144 | 18 |
| 2022-09-01 | 111 | 111 |
| 2022-10-01 | 108 | 13 |
| 2022-11-01 | 123 | 7 |
| 2022-12-01 | 132 | 20 |
I want to create a new column called `sold`, based on a calculation: a running sum of `harvested` over a period of months (Sep-Jun). Every September, `sold` will always be 1 (or 100 in percent). The calculation for Oct 2021 will be `fruits / (harvested_Oct + harvested_Sep)` = 143 / (11 + 139).
The rest of the months of 2021 follow the same format: `fruits` divided by a running sum of `harvested` that starts at the current month and reaches back to the September that opened the period.
Another example, for 2022: the calculation for Mar 2022 = `fruits / (harvested_Mar + harvested_Feb_2022 + harvested_Jan_2022 + harvested_Dec_2021 + harvested_Nov_2021 + harvested_Oct_2021 + harvested_Sep_2021)` = 123 / (5+12+10+9+14+11+139).
The table should look like this:
| month_rep | fruits | harvested | sold |
| ----- | ------------- | --------- | ---- |
| 2021-09-01 | 139 | 139 | 1
| 2021-10-01 | 143 | 11 | 0.95
| 2021-11-01 | 152 | 14 | 0.93
| 2021-12-01 | 112 | 9 | 0.65
| 2022-01-01 | 133 | 10 | ..
| 2022-02-01 | 145 | 12 | ..
| 2022-03-01 | 123 | 5 | ..
| 2022-04-01 | 111 | 4 | ..
| 2022-05-01 | 164 | 9 | ..
| 2022-06-01 | 135 | 12 | ..
| 2022-07-01 | 124 | 14 | null
| 2022-08-01 | 144 | 18 | null
| 2022-09-01 | 111 | 111 | 1
| 2022-10-01 | 108 | 13 | 0.87
| 2022-11-01 | 123 | 7 | 0.94
| 2022-12-01 | 132 | 20 | ..
I tried this:
select
month_rep,
fruits,
harvested,
case when extract(month from "month_rep") in (7, 8) then null
when extract(month from "month_rep") = 9 then 1
else ROUND(fruits / sum(harvested) over (order by month_rep), 2) end sold
from my_table
This works well, but only when I have data before September 2022. I want Jul and Aug to have null `sold`, which works. After Aug, Sep 2022 should start a new period where `sold` is 1. After that, Oct 2022 should be calculated as `fruits / (harvested_Oct + harvested_Sep_2022)`, where we start a new running sum for the second period, Sep 2022 - Jun 2023.
Is there a way to group these "periods" and have the running sum over that? I might need to find a way to group the period and take `partition by` from that.
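A sketch in the same PostgreSQL style as the query above: derive a `season` key (Sep-Jun rows belong to the season that starts in that September) and partition the running sum by it. Cast `fruits`/`harvested` to numeric first if they are integer columns:
```sql
SELECT month_rep, fruits, harvested,
       CASE
           WHEN EXTRACT(month FROM month_rep) IN (7, 8) THEN NULL
           WHEN EXTRACT(month FROM month_rep) = 9 THEN 1
           ELSE ROUND(fruits / SUM(harvested) OVER (PARTITION BY season
                                                    ORDER BY month_rep), 2)
       END AS sold
FROM (
    SELECT t.*,
           -- months before September belong to the season of the prior year
           EXTRACT(year FROM month_rep)
             - CASE WHEN EXTRACT(month FROM month_rep) < 9 THEN 1 ELSE 0 END AS season
    FROM my_table t
) s
ORDER BY month_rep;
```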
Jason
(23 rep)
Aug 1, 2022, 10:45 PM
• Last activity: Aug 3, 2022, 03:37 AM
2
votes
1
answers
1279
views
Cumulative sum by day with non-negative floor with PostgreSQL
Using PostgreSQL 12, I need to calculate the cumulative sum by day while filling in missing days. I *also* need to set a minimum value of `0` if any cumulative sum goes negative.
### Background
There are two excellent answers for [cumulative sum with missing days](https://stackoverflow.com/questions/32490946/cumulative-sum-of-values-by-month-filling-in-for-missing-months?noredirect=1&lq=1) and [non-negative floor values](https://dba.stackexchange.com/questions/179367/set-non-negative-floor-for-rolling-sum-in-postgresql) . I need to combine these two solutions.
Take this table and data as an example (note, this is also contained in [this db fiddle](https://dbfiddle.uk/?rdbms=postgres_12&fiddle=e84c62c5b4e95ebdd4c027b5f241a21f))
CREATE TABLE inventory_test (
"id" serial,
"account_id" bigint NOT NULL,
"is_active" boolean NOT NULL DEFAULT 'true',
"object_timestamp" bigint NOT NULL,
"amount" numeric(12, 1) NOT NULL,
"object_id" bigint NOT NULL,
PRIMARY KEY ("id")
);
*Note that `object_timestamp` is epoch time in milliseconds.*
SELECT id, to_timestamp((object_timestamp + 0 )/1000), amount
FROM inventory_test
ORDER BY object_timestamp;
| id | to_timestamp | amount |
|----|--------------|--------|
| 1| 2022-04-08 15:13:30+01| 10000.0 |
| 2| 2022-04-09 15:13:30+01| -2000.0 |
| 3| 2022-04-09 15:13:31+01| -1000.0 |
| 4| 2022-04-10 15:13:30+01| -2000.0 |
| 5| 2022-04-11 15:13:30+01| -3000.0 |
| 6| 2022-04-12 15:13:30+01| -2500.0 |
| 7| 2022-04-15 15:13:30+01| -2500.0 |
| 8| 2022-04-16 15:13:30+01| -3000.0 |
| 9| 2022-04-17 15:13:30+01| 1000.0 |
I can generate the daily totals accounting for missing days with:
WITH cte as (
SELECT
date_trunc('day', to_timestamp((object_timestamp)/1000)) as day,
sum(amount) as day_sum
FROM inventory_test
WHERE object_id = 1 AND is_active = true
GROUP by 1
)
SELECT
day,
sum(c.day_sum) OVER (ORDER BY day) AS running_sum
FROM
(SELECT min(day) AS min_day FROM cte) init,
generate_series(init.min_day, now(), interval '1 day') day
LEFT JOIN cte c USING (day)
ORDER BY day;
|day | running_sum |
|-----------|---------------|
| 2022-04-08 00:00:00+00| 10000 |
| 2022-04-09 00:00:00+00| 7000 |
| 2022-04-10 00:00:00+00| 5000 |
| 2022-04-11 00:00:00+00| 2000 |
| 2022-04-12 00:00:00+00| -500 |
| 2022-04-13 00:00:00+00| -500 |
| 2022-04-14 00:00:00+00| -500 |
| 2022-04-15 00:00:00+00| -3000 |
| 2022-04-16 00:00:00+00| -6000 |
| 2022-04-17 00:00:00+00| -5000 |
### DB fiddle
You can see this in [this DB fiddle](https://dbfiddle.uk/?rdbms=postgres_12&fiddle=e84c62c5b4e95ebdd4c027b5f241a21f)
### Desired results
What I _want_ is a non-negative running total, where any positive change is applied to the adjusted running total, i.e. `MAX(0, running_sum)`. I'd prefer to solve this in a SQL statement rather than a custom function.
Desired output:
|day | running_sum |
|-----------|---------------|
| 2022-04-08 00:00:00+00| 10000 |
| 2022-04-09 00:00:00+00| 7000 |
| 2022-04-10 00:00:00+00| 5000 |
| 2022-04-11 00:00:00+00| 2000 |
| 2022-04-12 00:00:00+00| 0 |
| 2022-04-13 00:00:00+00| 0 |
| 2022-04-14 00:00:00+00| 0 |
| 2022-04-15 00:00:00+00| 0 |
| 2022-04-16 00:00:00+00| 0 |
| 2022-04-17 00:00:00+00| 1000 |
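Because each day's clamped value depends on the previous day's clamped value, this can't be written as a plain window aggregate; a recursive CTE is the usual pure-SQL route. A sketch where `daily_totals(day, day_sum)` is a hypothetical stand-in for the gap-filled daily series built above:
```sql
WITH RECURSIVE numbered AS (
    SELECT day, day_sum, ROW_NUMBER() OVER (ORDER BY day) AS rn
    FROM daily_totals
), walk AS (
    SELECT rn, day, GREATEST(coalesce(day_sum, 0), 0) AS running_sum
    FROM numbered
    WHERE rn = 1
    UNION ALL
    SELECT n.rn, n.day,
           GREATEST(w.running_sum + coalesce(n.day_sum, 0), 0)  -- clamp at 0 each day
    FROM walk w
    JOIN numbered n ON n.rn = w.rn + 1
)
SELECT day, running_sum FROM walk ORDER BY day;
```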
brianz
(121 rep)
Apr 18, 2022, 03:34 AM
• Last activity: Apr 21, 2022, 02:35 PM
1
votes
1
answers
949
views
Calculate running total of "consecutive days without views" for each group, that can restart multiple times within each group based on conditions
**Summary**
I'm working to create a counter metric that shows, for each client's slug, the number of consecutive days of zero views.
* The counter should increase for each consecutive day that the client's slug doesn't receive a view.
* If/once the slug gets a view, then the counter should restart.
I can figure out a regular running total, but I can't seem to figure out how to make it restart. I've tried several ways with aggregate/window functions, but I can't seem to achieve the desired results.
---
**Source data:** `dcev_zero_views_flag`
| client_name | slug | calendar_date | views | consecutive_zero_views_flag |
|--------------------|------------------------------|--------------------------|-------|-----------------------------|
| Ambition LLC | canyon_properties | 2022-02-16T00:00:00.000Z | 2 | 0 |
| Ambition LLC | canyon_properties | 2022-02-17T00:00:00.000Z | 0 | 0 |
| Ambition LLC | canyon_properties | 2022-02-18T00:00:00.000Z | 0 | 1 |
| Ambition LLC | canyon_properties | 2022-02-19T00:00:00.000Z | 0 | 1 |
| Ambition LLC | canyon_properties | 2022-02-20T00:00:00.000Z | 2 | 0 |
| Ambition LLC | canyon_properties | 2022-02-21T00:00:00.000Z | 0 | 0 |
| Ambition LLC | canyon_properties | 2022-02-22T00:00:00.000Z | 0 | 1 |
| Ambition LLC | canyon_properties | 2022-02-23T00:00:00.000Z | 1 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-16T00:00:00.000Z | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-17T00:00:00.000Z | 0 | 1 |
| Ambition LLC | city_station_apartment_homes | 2022-02-18T00:00:00.000Z | 3 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-19T00:00:00.000Z | 2 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-20T00:00:00.000Z | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-21T00:00:00.000Z | 0 | 1 |
| Ambition LLC | city_station_apartment_homes | 2022-02-22T00:00:00.000Z | 0 | 1 |
| Ambition LLC | city_station_apartment_homes | 2022-02-23T00:00:00.000Z | 2 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-16T00:00:00.000Z | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-17T00:00:00.000Z | 1 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-18T00:00:00.000Z | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-19T00:00:00.000Z | 1 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-20T00:00:00.000Z | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-21T00:00:00.000Z | 1 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-22T00:00:00.000Z | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-23T00:00:00.000Z | 0 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-16T00:00:00.000Z | 1 | 0 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-17T00:00:00.000Z | 0 | 0 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-18T00:00:00.000Z | 0 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-19T00:00:00.000Z | 0 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-20T00:00:00.000Z | 0 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-21T00:00:00.000Z | 0 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-22T00:00:00.000Z | 1 | 0 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-23T00:00:00.000Z | 0 | 0 |
For reference, the `consecutive_zero_views_flag` column returns `1` if the view count for the current and the preceding day is 0; otherwise, it returns `0`. It's derived by:
case when
dcev.views = 0
and dcev.views = lag(dcev.views) over (PARTITION by client_name, slug ORDER BY dcev.calendar_date)
then 1
else 0
end as consecutive_zero_views_flag
---
**Desired output**
I want to add a column, `consecutive_days_without_views`. It should represent a running total of the `consecutive_zero_views_flag` for each (client_name, slug) combination ordered by date. But the running total should restart any time `consecutive_zero_views_flag` changes from 1 to 0 for that client_name/slug combination, in date order.
**Example of desired output**
Note the rows in the example below showing `2` (or more) for `consecutive_days_without_views`.
| client_name | slug | calendar_date | views | consecutive_zero_views_flag | consecutive_days_without_views |
|--------------------|------------------------------|--------------------------|-------|-----------------------------|--------------------------------|
| Ambition LLC | canyon_properties | 2022-02-16T00:00:00.000Z | 2 | 0 | 0 |
| Ambition LLC | canyon_properties | 2022-02-17T00:00:00.000Z | 0 | 0 | 0 |
| Ambition LLC | canyon_properties | 2022-02-18T00:00:00.000Z | 0 | 1 | 1 |
| Ambition LLC | canyon_properties | 2022-02-19T00:00:00.000Z | 0 | 1 | 2 |
| Ambition LLC | canyon_properties | 2022-02-20T00:00:00.000Z | 2 | 0 | 0 |
| Ambition LLC | canyon_properties | 2022-02-21T00:00:00.000Z | 0 | 0 | 0 |
| Ambition LLC | canyon_properties | 2022-02-22T00:00:00.000Z | 0 | 1 | 1 |
| Ambition LLC | canyon_properties | 2022-02-23T00:00:00.000Z | 1 | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-16T00:00:00.000Z | 0 | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-17T00:00:00.000Z | 0 | 1 | 1 |
| Ambition LLC | city_station_apartment_homes | 2022-02-18T00:00:00.000Z | 3 | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-19T00:00:00.000Z | 2 | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-20T00:00:00.000Z | 0 | 0 | 0 |
| Ambition LLC | city_station_apartment_homes | 2022-02-21T00:00:00.000Z | 0 | 1 | 1 |
| Ambition LLC | city_station_apartment_homes | 2022-02-22T00:00:00.000Z | 0 | 1 | 2 |
| Ambition LLC | city_station_apartment_homes | 2022-02-23T00:00:00.000Z | 2 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-16T00:00:00.000Z | 0 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-17T00:00:00.000Z | 1 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-18T00:00:00.000Z | 0 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-19T00:00:00.000Z | 1 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-20T00:00:00.000Z | 0 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-21T00:00:00.000Z | 1 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-22T00:00:00.000Z | 0 | 0 | 0 |
| D&D Properties Inc | east_side_living_spaces | 2022-02-23T00:00:00.000Z | 0 | 1 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-16T00:00:00.000Z | 1 | 0 | 0 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-17T00:00:00.000Z | 0 | 0 | 0 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-18T00:00:00.000Z | 0 | 1 | 1 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-19T00:00:00.000Z | 0 | 1 | 2 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-20T00:00:00.000Z | 0 | 1 | 3 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-21T00:00:00.000Z | 0 | 1 | 4 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-22T00:00:00.000Z | 1 | 0 | 0 |
| D&D Properties Inc | happy_mountain_homes | 2022-02-23T00:00:00.000Z | 0 | 0 | 0 |
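A gaps-and-islands sketch over the source table: every row where the flag is 0 starts a new island per client/slug, and the desired counter is a running sum of the flag within its island:
```sql
SELECT client_name, slug, calendar_date, views, consecutive_zero_views_flag,
       SUM(consecutive_zero_views_flag) OVER (
           PARTITION BY client_name, slug, island
           ORDER BY calendar_date
       ) AS consecutive_days_without_views
FROM (
    SELECT *,
           -- each flag reset (0) starts a new island
           SUM(CASE WHEN consecutive_zero_views_flag = 0 THEN 1 ELSE 0 END) OVER (
               PARTITION BY client_name, slug
               ORDER BY calendar_date
           ) AS island
    FROM dcev_zero_views_flag
) s
ORDER BY client_name, slug, calendar_date;
```
On the sample data this reproduces the desired column, including the 4-day streak for happy_mountain_homes.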
Matthew
(21 rep)
Feb 23, 2022, 08:56 PM
• Last activity: Feb 24, 2022, 03:24 AM
0
votes
1
answers
1086
views
Is there any way to let Postgres know it can filter results and then calculate aggregations?
Given a table A with 10 million records and another table B with child items (one-to-many), I have to provide a consumable view with totals from several fields in B grouped by the FK of A.
None of the filtering or sorting I'm doing relies on the aggregated totals, but performance is poor even when I apply a simple filter on a column in table A, such as a smallint (status_id, etc.). The planner appears to perform the aggregations over the whole result set before applying my filter.
Because I have to use max(id), etc. on all of my columns from table A, even though I know they are distinct, I'm wondering if that's causing an issue because I'm technically filtering on an aggregate value. If that's the case, I would expect subqueries to perform better.
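One pattern worth trying is to filter table A first and aggregate only the matching B rows, for example with a lateral join so the aggregate runs per surviving A row. A PostgreSQL sketch with hypothetical names (`a`, `b`, `b.a_id`, `x`, `y`):
```sql
SELECT a.id, a.status_id, agg.total_x, agg.total_y
FROM a
CROSS JOIN LATERAL (
    -- aggregated per row of a that survives the filter below
    SELECT sum(b.x) AS total_x, sum(b.y) AS total_y
    FROM b
    WHERE b.a_id = a.id
) agg
WHERE a.status_id = 3;
```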
Dane
(1 rep)
Jan 27, 2022, 05:45 AM
• Last activity: Jan 27, 2022, 07:00 AM
0
votes
0
answers
81
views
Performant way to limit selected rows but make sure the sum of the rows is large enough
Here's a simplified version of the problem. You have a marketplace where users can publish how many items they have for sale and the price. You would like to buy 1 000 000 items, and you want to purchase the cheapest listings first.
Example of getting all rows:
SELECT id, price, count FROM for_sale ORDER BY price ASC
In a worst-case scenario, you might need to fetch 1 000 000 rows if each user is selling one item each, but usually, users would publish more than a single item.
Is there a performant way to get only the rows needed to fulfil the purchase? E.g., for each row a counter named `total_items` is incremented by `count`, and when `total_items` covers the wanted purchase amount, no more rows are returned.
Using `LIMIT` reduces the number of rows returned, but I can't be sure how many rows are needed, as it depends on each row's count.
## What I've tried so far
SET @total := 0;
SELECT id, count, price, @total := @total + count AS total
FROM for_sale
WHERE @total < 1000000
ORDER BY price ASC;
This query somewhat works, but the user-variable check seems to be done client-side, resulting in up to seven additional rows where the total is over 1,000,000. The performance is also terrible compared to a plain `LIMIT` (more than 300x the execution time).
I've also tried to use a window function, but I'm not sure if that's possible without reading the whole table.
Is there a way to select only the rows that cover the purchase amount, instead of every row? I've searched a lot, and the only examples I've found are user-variable ones that suffer from performance issues.
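A MySQL 8 window-function sketch: keep every row whose running total *before* the row is still under the target, which automatically includes the row that crosses it. It is deterministic and avoids user variables, though the window is still computed over the price-ordered set:
```sql
SELECT id, price, `count`
FROM (
    SELECT id, price, `count`,
           -- items available from all cheaper rows, excluding this one
           SUM(`count`) OVER (ORDER BY price, id) - `count` AS items_before
    FROM for_sale
) t
WHERE items_before < 1000000
ORDER BY price, id;
```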
Alex
(53 rep)
Jun 30, 2021, 11:19 AM
• Last activity: Jun 30, 2021, 03:54 PM
0
votes
1
answers
223
views
Analytical function: sum the cumulative of previous column
Using an analytical function, I want the column 'sum' to contain the cumulative total of the previous column, but my code gets the total sum over all encounters.
Here is both table and data for testing:
CREATE TABLE users (
user_id INT PRIMARY KEY,
name CHARACTER VARYING(50)
);
CREATE TABLE orders_catalog (
order_code INT PRIMARY KEY,
order_desc CHARACTER VARYING(50) NOT NULL,
cost REAL NOT NULL
);
CREATE TABLE encounter (
encounter_id INT PRIMARY KEY,
user_id INT NOT NULL,
encounter_type CHARACTER VARYING(50) NOT NULL,
CONSTRAINT FK_encounter FOREIGN KEY (user_id) REFERENCES users(user_id)
);
CREATE TABLE orders (
order_id INT PRIMARY KEY,
order_code INT NOT NULL,
encounter_id INT NOT NULL,
created_dt TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT FK_orders_catalog FOREIGN KEY (order_code) REFERENCES orders_catalog (order_code),
CONSTRAINT FK_orders_encounter FOREIGN KEY (encounter_id) REFERENCES encounter(encounter_id)
);
--
INSERT INTO users(user_id, name) VALUES(1, 'Peter');
INSERT INTO users(user_id, name) VALUES(2, 'Charles');
INSERT INTO users(user_id, name) VALUES(3, 'Eva') ;
INSERT INTO users(user_id, name) VALUES(4, 'John');
INSERT INTO users(user_id, name) VALUES(5, 'Helene');
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10000, 'Painting', 100.34);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10001, 'Painting', 214.11);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10002, 'Painting', 214.11);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10003, 'Spare part', 181.03);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10004, 'Sheet metal', 168.18);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10005, 'Sheet metal', 240.02);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10006, 'Sheet metal', 240.02);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10007, 'Electricity', 146.85);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10008, 'Spare part', 162.13);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10009, 'Electricity', 140.02);
INSERT INTO orders_catalog(order_code, order_desc, cost) VALUES (10010, 'Electricity', 180.02);
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(100,1,'appointment');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(101,2,'appointment');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(102,3,'appointment');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(103,4,'urgent');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(104,5,'urgent');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(105,1,'appointment');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(106,2,'appointment');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(107,3,'waiting');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(108,4,'urgent');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(109,5,'waiting');
INSERT INTO encounter(encounter_id, user_id, encounter_type) VALUES(110,1,'waiting');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1000,10000,100,'2009-06-16 09:12');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1001,10001,101,'2009-06-16 09:12');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1002,10002,102,'2009-06-16 09:12');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1003,10003,103,'2009-12-03 09:50');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1004,10004,104,'2010-02-24 12:21');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1005,10005,105,'2010-03-27 23:54');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1006,10006,106,'2010-03-22 12:43');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1007,10007,107,'2010-02-24 12:21');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1008,10008,108,'2010-03-04 08:55');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1009,10009,109,'2010-03-06 09:25');
INSERT INTO orders(order_id, order_code, encounter_id, created_dt) VALUES (1010,10010,110,'2010-03-22 11:18');
And here is the query that I tried:
SELECT
u.user_id,
name,
ROW_NUMBER() OVER (
PARTITION BY u.user_id ORDER BY o.created_dt ASC
) AS position,
SUM (c.cost) OVER (
PARTITION BY o.encounter_id
) AS cost,
SUM (cost) OVER (
PARTITION BY u.user_id
) AS sum
FROM
users u
INNER JOIN
encounter e USING (user_id)
INNER JOIN
orders o USING (encounter_id)
INNER JOIN
orders_catalog c USING (order_code)
ORDER BY name, position;
But the output is not what I expect:
user_id|name|position|cost|sum
-|-|-|-|-
2 |Charles |1 |214.11 |454.13
2 |Charles |2 |240.02 |454.13
3 |Eva |1 |214.11 |360.96002
3 |Eva |2 |146.85 |360.96002
5 |Helene |1 |168.18 |308.2
5 |Helene |2 |140.02 |308.2
4 |John |1 |181.03 |343.16
4 |John |2 |162.13 |343.16
1 |Peter |1 |100.34 |520.38
1 |Peter |2 |180.02 |520.38
1 |Peter |3 |240.02 |520.38
Since I would want something like:
user_id|name|position|cost|sum
-|-|-|-|-
...|...|...|...|...
1 |Peter |1 |100.34 |100.34
1 |Peter |2 |180.02 |280.36
1 |Peter |3 |240.02 |520.38
Fiddle
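The missing piece is an `ORDER BY` inside the sum's window: with one, `SUM` defaults to a running frame (`RANGE UNBOUNDED PRECEDING` to `CURRENT ROW`). A sketch of the corrected query; with one order per encounter, as in the sample data, `c.cost` equals the per-encounter sum (keep the original `SUM(c.cost) OVER (PARTITION BY o.encounter_id)` if encounters can hold several orders):
```sql
SELECT u.user_id,
       u.name,
       ROW_NUMBER() OVER (PARTITION BY u.user_id
                          ORDER BY o.created_dt) AS position,
       c.cost,
       SUM(c.cost) OVER (PARTITION BY u.user_id
                         ORDER BY o.created_dt, o.order_id) AS sum  -- running, not total
FROM users u
JOIN encounter e USING (user_id)
JOIN orders o USING (encounter_id)
JOIN orders_catalog c USING (order_code)
ORDER BY name, position;
```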
PakLui
(87 rep)
Jun 2, 2021, 08:20 AM
• Last activity: Jun 2, 2021, 10:33 AM
1
votes
2
answers
4317
views
Subtracting sum of previous rows
I want to create a total field in this table, but SQL Server 2008 can't use ORDER BY in the OVER clause.
I have the same PO and different INVs. I want the first total to be the Qty field minus the Quantity field, and each next row's total to be the previous row's total minus that row's Quantity.
The table structure and the desired result are shown as images in the original post.
This is my code, but it errors at the ORDER BY:
SELECT t2.BaseRef AS 'PO', t2.BaseQty AS 'qty', t1.NumAtCard AS 'INV',
       t2.Quantity AS 'Quantity',
       t2.BaseQty - SUM(t2.Quantity) OVER (ORDER BY t1.DocDate) AS 'total'
FROM OPDN t1
INNER JOIN PDN1 t2 ON t1.DocEntry = t2.DocEntry
Is there another way to sort this data?
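On SQL Server 2008, the usual workaround for a running sum is a correlated subquery; a sketch using the tables from the post, assuming `DocDate` orders the rows (add a tie-breaker column if dates repeat):
```sql
SELECT t2.BaseRef AS PO,
       t2.BaseQty AS qty,
       t1.NumAtCard AS INV,
       t2.Quantity,
       t2.BaseQty - (SELECT SUM(x2.Quantity)
                     FROM OPDN x1
                     INNER JOIN PDN1 x2 ON x1.DocEntry = x2.DocEntry
                     WHERE x2.BaseRef = t2.BaseRef
                       AND x1.DocDate <= t1.DocDate) AS total   -- sum up to this row
FROM OPDN t1
INNER JOIN PDN1 t2 ON t1.DocEntry = t2.DocEntry
ORDER BY t2.BaseRef, t1.DocDate;
```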
midtonight
(11 rep)
Mar 7, 2019, 05:41 AM
• Last activity: May 26, 2021, 01:00 PM
0
votes
0
answers
198
views
SQL YTD calculations adding periods without postings
I have a query which gives me the YTD figures per company (client), account, IC and period, for the periods in which financial postings have been performed. The table I get this from is AAGCONSO, a balance table of all the accounts per client, period and IC.
SELECT *
FROM (
    SELECT client, account, IC, period,
           SUM(amount) OVER (PARTITION BY client, account, IC, LEFT(period, 4)
                             ORDER BY client, account, IC, period) AS YTD,
           ROW_NUMBER() OVER (PARTITION BY client, account, IC, period
                              ORDER BY client, account, IC, period ASC) AS RowYTD
    FROM (
        SELECT t.client AS client,
               t.dim1 AS account,
               CASE WHEN t.dim2 IN ('10', '11', '30', '31', '32', '33', '34', '35', '36', '50', '60')
                    THEN t.dim2 ELSE '' END AS IC,
               t.dim2,
               t.period AS period,
               SUM(t.amount) AS amount,
               ROW_NUMBER() OVER (PARTITION BY t.client, t.dim1, LEFT(period, 4)
                                  ORDER BY t.client, t.dim1, LEFT(period, 4) DESC) AS Row
        FROM aagconso t
        WHERE t.period >= 201853
          AND client IN ('10', '11', '30', '31', '32', '33', '34', '35', '36', '50', '60')
          AND t.dim1 = '111110'
        GROUP BY t.client, t.dim1, t.dim2, period, t.amount
        HAVING SUM(t.amount) <> 0
    ) AS CTE2
) X
WHERE RowYTD = 1
ORDER BY 1 DESC OFFSET 0 ROWS
This gives me the YTD amounts in the following table (shown as an image in the original post).
The totals are correct; however, I want the periods in which no postings have been performed to be included as well, carrying the YTD amount of the previous period.
I have another table with all possible accounting periods
SELECT acc_period FROM dbo.PBAP_GL_PERIODS
which gives me the list of all possible periods (shown as an image in the original post).
How do I get a full YTD table, including the periods from table B for which there are no postings in table A? I probably need to use a cross apply or join, but I really do not see how.
So, for example, table A for account 111110 in client 10 gives me this:
|Client|Account|Period| Amount | RowYTD
| ---- | ---- | ----- | ---------- | --- |
|10 |111110| 202012| 1325925.000| 1
And I want to have this
|Client|Account|Period| Amount | RowYTD
| ---- | ---- | ----- | ---------- | --- |
|10 |111110| 202000| 0.000| 1
|10 |111110| 202001| 0.000| 1
|10 |111110| 202002| 0.000| 1
|10 |111110| 202003| 0.000| 1
|10 |111110| 202004| 0.000| 1
|10 |111110| 202005| 0.000| 1
|10 |111110| 202006| 0.000| 1
|10 |111110| 202007| 0.000| 1
|10 |111110| 202008| 0.000| 1
|10 |111110| 202009| 0.000| 1
|10 |111110| 202010| 0.000| 1
|10 |111110| 202011| 0.000| 1
|10 |111110| 202012| 1325925.000| 1
Thanks in advance for any help!
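A hedged sketch of the usual fix (SQL Server style, matching the query above): CROSS JOIN the client/account/IC combinations with the full period list, LEFT JOIN the posted balances, and run the YTD sum over that dense grid. It assumes `acc_period` is comparable with `period` and string-like, so `LEFT(..., 4)` extracts the year (cast otherwise):
```sql
WITH combos AS (
    SELECT DISTINCT client, dim1 AS account, dim2 AS IC
    FROM aagconso
), grid AS (
    SELECT c.client, c.account, c.IC, p.acc_period
    FROM combos c
    CROSS JOIN dbo.PBAP_GL_PERIODS p
)
SELECT g.client, g.account, g.IC, g.acc_period AS period,
       SUM(COALESCE(a.amount, 0)) OVER (
           PARTITION BY g.client, g.account, g.IC, LEFT(g.acc_period, 4)
           ORDER BY g.acc_period
       ) AS YTD
FROM grid g
LEFT JOIN (
    SELECT client, dim1, dim2, period, SUM(amount) AS amount
    FROM aagconso
    GROUP BY client, dim1, dim2, period
) a ON  a.client = g.client
    AND a.dim1   = g.account
    AND a.dim2   = g.IC
    AND a.period = g.acc_period;
```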
Katrien
(1 rep)
Jan 29, 2021, 04:35 PM
• Last activity: Jan 29, 2021, 04:42 PM
1
votes
1
answers
30
views
How do I show a running total from the previous records?
I wonder if someone can please assist.
I have the following code, which inserts nine records into a table. From this table I'm trying to show a running total of the sales amount for each seller. I've tried using the LAG function, but that does not appear to work.
Here is the code.
DROP TABLE IF EXISTS RunningTotals
CREATE TABLE Sales
(
OrderId Int NOT NULL,
Seller varchar(10) NULL,
Amount money NULL
)
INSERT INTO Sales VALUES
(1, 'Alex', 10.00),
(2, 'Sarah', 15.00),
(3, 'Tracy', 10.25),
(4, 'Pete', 10.25),
(5, 'Tracy', 11.00),
(6, 'Tracy', 10.25),
(7, 'Alex', 10.25),
(8, 'Jane', 10.25),
(9, 'Alex', 20.55)
SELECT Seller, Amount, LAG(Amount,1) OVER (PARTITION BY Seller ORDER BY Seller) AS RunningTotal FROM Sales
Essentially in this example I would expect the following output
Seller|Amount|Running Total
Alex|10.00|10.00
Alex|10.25|20.25
Alex|20.55|40.80
Jane|10.25|10.25
Pete|10.25|10.25
Sarah|15.00|15.00
Tracy|10.25|10.25
Tracy|11.00|21.25
Tracy|10.25|31.50
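`LAG` only looks back a fixed number of rows; a running total wants `SUM` with `ORDER BY` plus a `ROWS` frame. A sketch against the `Sales` table above, using `OrderId` to make the order deterministic:
```sql
SELECT Seller, Amount,
       SUM(Amount) OVER (
           PARTITION BY Seller
           ORDER BY OrderId
           ROWS UNBOUNDED PRECEDING   -- running frame, tie-safe
       ) AS RunningTotal
FROM Sales
ORDER BY Seller, OrderId;
```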
Hope someone can help and thanks in advance.
Dave
Dave
(25 rep)
Jan 10, 2021, 08:01 PM
• Last activity: Jan 10, 2021, 08:17 PM
1
votes
1
answers
725
views
Running sum total over 14 days for each date
In a database of transactions spanning about 50 customers over a period of 8 months, I need to find which customers have 0 total transactions over a consecutive 14-day period. The customer table (customer_tbl) has the following general structure:
cust_id, date, new_transactions, total_transactions
So for every row, I want a column showing the total number of transactions that the customer had over the past 14 days. I have tried to create a view to display customers who had 0 total transactions, using the code below.
create view sumoftransactions as SELECT
cust_id
, date
, REVERSE_RUNNING_SUM
FROM
(
SELECT
cust_id
, date
, SUM(new_transactions) OVER (PARTITION BY cust_id ORDER BY date DESC, VALID_FLAG ASC) REVERSE_RUNNING_SUM
, VALID_FLAG
FROM
(
SELECT cust_id
, date
, new_transactions
, 1 VALID_FLAG
FROM customer_tbl
UNION ALL
SELECT cust_id
, DATE_ADD(date, interval -14 day) date
, -1 * new_transactions
, -1 VALID_FLAG
FROM customer_tbl
) t
) tt
WHERE tt.VALID_FLAG = 1
and tt.reverse_running_sum = 0;
However, the output is skewed. I believe it has something to do with the cut-off dates: each customer has a different date for their first record. The running-sum formula includes each customer's first transaction date in the output because there are no dates before it, so the running sum is calculated as 0 even though it is not.
How do I rectify this code? The following is some sample data; to simplify it, the table below only has 3 customers and covers a period of one week. I want to generate the `cust_id` and dates whereby there have been at least 3 consecutive days of 0 transactions. (The desired output was shown as an image in the original post.)
CREATE TABLE customer_tbl
(
  cust_id text,
  date date default null,
  new_transactions text,
  total_transactions text
);
INSERT INTO customer_tbl
VALUES
('10001', '2020-02-01', '1', '1'),
('10001', '2020-02-02', '1', '2'),
('10001', '2020-02-03', '0', '2'),
('10001', '2020-02-03', '0', '2'),
('10001', '2020-02-04', '0', '2'),
('10001', '2020-02-05', '0', '2'),
('10001', '2020-02-06', '1', '3'),
('10002', '2020-02-01', '1', '1'),
('10002', '2020-02-02', '1', '2'),
('10002', '2020-02-03', '4', '6'),
('10002', '2020-02-03', '0', '6'),
('10002', '2020-02-04', '1', '7'),
('10002', '2020-02-05', '0', '7'),
('10002', '2020-02-06', '1', '8'),
('10003', '2020-02-01', '0', '0'),
('10003', '2020-02-02', '0', '0'),
('10003', '2020-02-03', '1', '1'),
('10003', '2020-02-03', '0', '1'),
('10003', '2020-02-04', '0', '1'),
('10003', '2020-02-05', '0', '1'),
('10003', '2020-02-06', '1', '2');
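A hedged sketch for the 14-day rolling sum (MySQL 8+): a `RANGE` frame over the date column counts the current day plus the preceding 13 even when days are missing, and `MIN(date) OVER` lets you discard windows that begin before the customer's first record (the cut-off problem described above). Cast `new_transactions` if it really is stored as text; for the 3-day sample, use `INTERVAL 2 DAY PRECEDING`:
```sql
SELECT cust_id, `date`,
       SUM(new_transactions) OVER w AS txns_last_14_days,
       MIN(`date`) OVER (PARTITION BY cust_id) AS first_record
FROM customer_tbl
WINDOW w AS (PARTITION BY cust_id
             ORDER BY `date`
             RANGE BETWEEN INTERVAL 13 DAY PRECEDING AND CURRENT ROW);
-- then, from a derived table, keep rows WHERE txns_last_14_days = 0
-- AND `date` >= first_record + INTERVAL 13 DAY
```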
Darrel
(11 rep)
Nov 11, 2020, 10:37 AM
• Last activity: Nov 12, 2020, 05:02 PM
1
votes
2
answers
1032
views
Sum all unique values from previous dates
Let's say, for simplicity's sake, I have the following table:
id amount p_id date
------------------------------------------------
1 5 1 2020-01-01T01:00:00
2 10 1 2020-01-01T01:10:00
3 15 2 2020-01-01T01:20:00
4 10 3 2020-01-01T03:30:00
5 10 4 2020-01-01T03:50:00
6 20 1 2020-01-01T03:40:00
Here's a sample response I want:
{
"2020-01-01T01:00:00": 25, -- this is from adding records with ids: 2 and 3
"2020-01-01T03:00:00": 55 -- this is from adding records with ids: 3,4,5 and 6
}
I want to get the total (`sum(amount)`) of all unique `p_id`'s, grouped by the hour.
The row chosen per `p_id` is the one with the latest `date`. So for example, the first value in the response above doesn't include id `1`, because the record with id `2` has the same `p_id` and the `date` on that row is later.
The one tricky thing is that I want to include the `amount` for every `p_id` whose `date` falls before the hour presented. So for example, in the second value of the response (with key "2020-01-01T03:00:00"), even though id `3` has a timestamp in a different hour, it's the latest for `p_id 2` and therefore gets included in the sum for "2020-01-01T03:00:00". But the row with id `6` overrides id `2`, which has the same `p_id 1`.
In other words: always take the latest `amount` for each `p_id` so far, and compute the sum for every distinct hour found in the table.
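A PostgreSQL-style sketch (the table is assumed to be named `t`): for each distinct hour in the data, pick each `p_id`'s latest row up to the end of that hour with `DISTINCT ON`, then sum the amounts:
```sql
WITH hours AS (
    SELECT DISTINCT date_trunc('hour', date) AS h FROM t
)
SELECT h.h AS hour_start,
       sum(latest.amount) AS total
FROM hours h
CROSS JOIN LATERAL (
    -- newest row per p_id among everything up to the end of this hour
    SELECT DISTINCT ON (p_id) amount
    FROM t
    WHERE date < h.h + interval '1 hour'
    ORDER BY p_id, date DESC
) latest
GROUP BY h.h
ORDER BY h.h;
```
On the sample this yields 25 for the 01:00 hour and 55 for the 03:00 hour, matching the desired response.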
tony
(13 rep)
Jul 7, 2020, 03:23 AM
• Last activity: Jul 8, 2020, 01:49 AM
Showing page 1 of 20 total questions