Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
798
views
Insert Timestamp on value change from other table
I have an `EvidenceTimeStamp` table that is populated by a trigger when data is inserted into the case table.

`EvidenceTimeStamp`:
- `CaseID`
- `EvidenceNum`
- `EvidenceType`
- `Evidence_Initiated_Timestamp`
- `Evidence_Returned_Timestamp`

I created a trigger to fill the first four columns whenever a new case is inserted into the case table, and it sets a timestamp in `Evidence_Initiated_Timestamp` when this happens.
However, I need to create a closing timestamp when the process is complete. I want to do this from my Outbound Ship table: every time the bit attribute `Outbound_Item_Received` is changed from false to true (or 0 to 1), I want it to set a timestamp in the `Evidence_Returned_Timestamp` column of my `EvidenceTimeStamp` table for the matching `CaseID` and `EvidenceNum`.
Note: the `EvidenceTimeStamp` table technically does not have a primary key or foreign key... the `CaseID` is the primary key in the `Case` table and `EvidenceNum` is an attribute in the `Evidence` table. This is a "trigger" table, so it really doesn't have its own specific data; it's just filled with data from other tables.
Any idea of how to do this?
Also, another quick question: how do I get bit values to show "True" or "False" instead of 1 or 0? Thank you!!
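A sketch of one possible approach (SQL Server assumed; the `OutboundShip` table name and the use of `SYSDATETIME()` are assumptions, not from the question):

```sql
-- Stamp the return time whenever the bit flips from 0 to 1, matching on
-- CaseID and EvidenceNum (assumes those columns also exist on OutboundShip).
CREATE TRIGGER trg_OutboundShip_Received
ON OutboundShip
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE e
    SET e.Evidence_Returned_Timestamp = SYSDATETIME()
    FROM EvidenceTimeStamp AS e
    JOIN inserted AS i ON i.CaseID = e.CaseID AND i.EvidenceNum = e.EvidenceNum
    JOIN deleted  AS d ON d.CaseID = i.CaseID AND d.EvidenceNum = i.EvidenceNum
    WHERE i.Outbound_Item_Received = 1   -- new value is true
      AND d.Outbound_Item_Received = 0;  -- old value was false
END;
```

As for the bit display: bits are stored as 0/1, so "True"/"False" is a formatting concern at query time, e.g. `SELECT CASE Outbound_Item_Received WHEN 1 THEN 'True' ELSE 'False' END AS Received`.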
Alee
(47 rep)
Jun 8, 2017, 03:10 PM
• Last activity: Aug 6, 2025, 07:09 AM
1
votes
1
answers
5283
views
timestamp with timezone issue with TZR vs TZH:TZM
I have a Hibernate with Oracle JDBC based application inserting/updating into the following column:

COLUMN_A TIMESTAMP WITH TIME ZONE DEFAULT systimestamp NOT NULL

While it automatically appends the time zone, it uses `TZR`, e.g. `UTC`, and the weird thing about this data type in Oracle is that it retains the time zone format used at insertion when we select it without any formatting, [explained here](https://stackoverflow.com/questions/51417100/timestamp-with-timezone-retains-timezone-format).

With this said, we are utilizing Golden Gate to replicate this data to MongoDB, but as far as I know it requires these timestamps to contain only `TZH:TZM` formatting. So I have this issue where one side is blocked by Oracle JDBC, where insertion is done with `TZR`, and the other side is blocked by Golden Gate, where data is expected with `TZH:TZM`.

Is there any way to handle this issue? Can I forbid a certain format for `TIMESTAMP WITH TIME ZONE`? Using `NLS_TIMESTAMP_TZ_FORMAT` works for `SELECT` formatting, but I was told it is not usable for Golden Gate. Also, for the `INSERT` case I was able to utilize the `default` value of the column, and using `systimestamp` does insert with `TZH:TZM`, but for the `UPDATE` case I am unable to achieve this. Totally stuck!

I have [this question](https://stackoverflow.com/questions/51412424/update-column-to-default-value-without-specifying-it-in-spring-data-jpa) focused on the application side of this issue. I am wondering if there is anything that can be done on the DB side?
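One DB-side idea to consider (an untested sketch; the table name `my_table` is hypothetical): re-express the incoming value with a numeric offset in a row trigger, so the stored region is always `TZH:TZM` style regardless of what the application sends:

```sql
-- Keep the same instant but replace a region name (e.g. UTC) with the
-- numeric offset +00:00, which formats as TZH:TZM on the way out.
CREATE OR REPLACE TRIGGER trg_my_table_tzfix
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
  :NEW.COLUMN_A := :NEW.COLUMN_A AT TIME ZONE '+00:00';
END;
/
```

Whether Golden Gate picks up the normalized value would need testing against your replication setup.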
bvrakvs
(111 rep)
Jul 19, 2018, 06:18 AM
• Last activity: Jul 19, 2025, 03:08 AM
1
votes
2
answers
929
views
Can I get the created time of a PostgreSQL DB?
I'm using PostgreSQL-13.0.
Is there a way I can get the created time or last modified time of a database?
Thanks!
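PostgreSQL does not record a database's creation time in its catalogs, but one common workaround (a sketch; it needs file-read privileges and assumes the standard data-directory layout) is to check the modification time of the database's `PG_VERSION` file:

```sql
-- PG_VERSION is written when the database is created and rarely touched
-- afterwards, so its file timestamp approximates the creation time.
SELECT datname,
       (pg_stat_file('base/' || oid || '/PG_VERSION')).modification
FROM pg_database
WHERE datname = current_database();
```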
Leon
(411 rep)
Dec 17, 2023, 04:19 PM
• Last activity: Jul 18, 2025, 12:06 PM
0
votes
3
answers
408
views
Column with Default value as Sum of TIMESTAMP
I have a table which has 2 columns (`JobDate`, `RecordTime`):

--------------------------------------
JobDate    | RecordTime | SumCol
--------------------------------------
2019-07-20 | 2019-07-21 |
2019-07-19 | 2019-07-20 |

I need `SumCol` to have a default value of `UNIX_TIMESTAMP(JobDate) + UNIX_TIMESTAMP(RecordTime)`.
I've tried creating a virtual column but it gives me this error:
**Expression of generated column 'Test2' contains a disallowed function.**
that's what I tried:
ALTER TABLE jobsTemp
ADD SumCol TIMESTAMP
AS (UNIX_TIMESTAMP(JobDate) + UNIX_TIMESTAMP(RecordTime));
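The error occurs because MySQL forbids non-deterministic functions in generated columns, and `UNIX_TIMESTAMP()` depends on the session time zone. One workaround (a sketch; the `BIGINT` type and trigger name are assumptions) is to fill a plain column from a trigger instead:

```sql
-- A generated column cannot use UNIX_TIMESTAMP, so compute the sum in a
-- BEFORE INSERT trigger (add a matching BEFORE UPDATE trigger if the
-- source columns can change after insert).
ALTER TABLE jobsTemp ADD SumCol BIGINT;

CREATE TRIGGER jobsTemp_sumcol_bi
BEFORE INSERT ON jobsTemp
FOR EACH ROW
SET NEW.SumCol = UNIX_TIMESTAMP(NEW.JobDate) + UNIX_TIMESTAMP(NEW.RecordTime);
```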
Amr Ahmed
(11 rep)
Jul 20, 2019, 06:05 PM
• Last activity: Jul 14, 2025, 04:10 AM
0
votes
1
answers
201
views
Postgresql filtering by value over rolling time period
I am a newbie to PostgreSQL and still getting my head around it. I have used window functions etc. to date, and yes, I can make some sense of them.

I have got the following data set (screenshot: https://i.sstatic.net/DgMxi.png):

I need to find those customers whose amount > 2 for 4 consecutive minutes, in a rolling fashion, considering the data for a day. By that I mean that in the given data set, customer 1 is one such customer, as it has amount > 2 at 16:01, 16:02, 16:03, 16:04 and then at 16:02, 16:03, 16:04, 16:05. If there is any such single occurrence, the customer needs to be selected. Customer 2 has no such run. Wondering what's the best approach to do this?
Any help will be much appreciated.
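One possible approach (a sketch only; the names `readings`, `customer_id`, `ts`, and `amount` are hypothetical, and it assumes one row per customer per minute): for each row, count how many of the 4 consecutive minutes starting there have `amount > 2`:

```sql
SELECT DISTINCT customer_id
FROM (
  SELECT customer_id,
         -- rows with amount > 2 inside the 4-row window starting here
         COUNT(*) FILTER (WHERE amount > 2)
           OVER (PARTITION BY customer_id
                 ORDER BY ts
                 ROWS BETWEEN CURRENT ROW AND 3 FOLLOWING) AS run_len
  FROM readings
) AS t
WHERE run_len = 4;   -- a full run of 4 qualifying minutes
```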
Regards,

app_developer
(1 rep)
Dec 17, 2018, 09:39 AM
• Last activity: Jun 21, 2025, 05:09 PM
0
votes
3
answers
239
views
Statement of total precipitation for each hour, day or month
I store the data from a rain gauge in MySQL and need help with listing the data. I save the data every 5 minutes and use the save date in the form of a timestamp.

At the end of each hour, the HH:55 row will therefore always hold the maximum hourly total, which I need to list for the day DD-MM-YYYY.

Is it possible to list, in a second SELECT, a daily or monthly total of the data stored in this way? If so, how? Thank you very much.

EDIT:
MySQL version: 5.7.32-35-log. I save the data every 5 minutes, but at the end of the hour the HH:55 row always has the highest value for the hourly total precipitation.
So I have:
id | rain | timestamp
1 | 0.0 | 2021-10-04 12:00:00
2 | 0.0 | 2021-10-04 12:05:00
3 | 0.6 | 2021-10-04 12:10:00
4 | 0.9 | 2021-10-04 12:15:00
5 | 0.9 | 2021-10-04 12:20:00
6 | 1.4 | 2021-10-04 12:25:00
........
12 | 2.5 | 2021-10-04 12:55:00 // MAX rain
13 | 0.0 | 2021-10-04 13:00:00 // new hour and clean rain to 0.0
14 | 0.0 | 2021-10-04 13:05:00
15 | 0.6 | 2021-10-04 13:10:00
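Since the counter resets at the top of each hour, the hourly total is simply `MAX(rain)` per hour, and daily or monthly totals are sums of those maxima. A sketch (the table name `rain_log` is an assumption):

```sql
-- Daily totals: take the per-hour maximum (the HH:55 value), then sum
-- those per day. For monthly totals, group the outer query by month.
SELECT d AS day,
       SUM(hourly_total) AS daily_total
FROM (
  SELECT DATE(`timestamp`) AS d,
         HOUR(`timestamp`) AS h,
         MAX(rain) AS hourly_total
  FROM rain_log
  GROUP BY DATE(`timestamp`), HOUR(`timestamp`)
) AS hourly
GROUP BY d;
```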
Novosad
(1 rep)
Oct 4, 2021, 08:47 AM
• Last activity: Jun 12, 2025, 04:04 PM
0
votes
1
answers
1012
views
Goldengate timestamp (FFF)
Working with GoldenGate 19.1.0.0.211019, replicat inbound to Oracle 19.0.0.0.0, consuming a trail file incoming from NonStop (i.e. a non-Oracle database).

The column in question from NonStop is a string containing a value of "20210723112304812", in the format "yyyymmddhhmmssfff", i.e. a timestamp.

In my replicat, I use the following colmap in the appropriate map .. target command:

COLMAP ( usedefaults,
TS_UPDATE = @DATE ( 'YYYY-MM-DD HH:MI:SS.FFF', 'YYYYMMDDHHMMSSFFF', TS_UPDATE ) );

However, after it finishes, the data stored in Oracle (column defined as TIMESTAMP(3)) is:

23-jul-2021 11:23:04.001

and other dates get either a ".001" or ".000" fractional-second value every time.

I have also tried:

@DATE ( 'YYYY-MM-DD HH:MI:SS.FFFFFF', 'YYYYMMDDHHMMSSFFFFFF', TS_UPDATE )

with the same results.

What format setting should I be using to import/include the fractional second properly from the non-Oracle DB into the Oracle DB with TIMESTAMP(3)?
Ditto
(504 rep)
Jan 19, 2022, 03:32 PM
• Last activity: Jun 5, 2025, 03:03 PM
1
votes
1
answers
381
views
Getting the latest timestamp in Oracle
I am trying to create a report that has the latest member id in the first column and the old member id in the second column, based on the timestamp in the audits table below. Member ids are stored in the master table and the members table. audit_id from the audits table is a foreign key in the members table.
I tried to write the query below after researching, but I am getting syntax errors.
SELECT
(SELECT mem.member_id
FROM members mem
JOIN audits aud ON mem.audit_id = aud.audit_id
ORDER BY aud.update_time DESC FETCH FIRST 1 ROWS ONLY) AS latest_id,
mem1.member_id AS old_id
FROM members mem1
JOIN master ma ON ma.member_id = mem1.member_id
**audits**
audit_id |update_time
656566 |2021-01-15 17:01:34
656565 |2020-05-13 17:01:34
545245 |2020-08-11 17:01:34
**members**
member_id |audit_id
5655656 |656566
653363 |656565
545454 |545245
**master**
id | member_id
545634 | 5655656
545634 | 653363
545634 | 545454
**Expected output**
latest_id | old_id
5655656 |653363
5655656 |545454
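A sketch of one way to avoid the syntax error (Oracle 12c+, assuming the schema shown above): compute the latest id once in a CTE, then pair it with every other member:

```sql
-- FETCH FIRST inside the CTE finds the member whose audit row has the
-- newest update_time; the outer query lists all other members as old_id.
WITH latest AS (
  SELECT mem.member_id
  FROM members mem
  JOIN audits aud ON mem.audit_id = aud.audit_id
  ORDER BY aud.update_time DESC
  FETCH FIRST 1 ROW ONLY
)
SELECT latest.member_id AS latest_id,
       mem1.member_id   AS old_id
FROM members mem1
CROSS JOIN latest
JOIN master ma ON ma.member_id = mem1.member_id
WHERE mem1.member_id <> latest.member_id;
```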
Megatron
(11 rep)
Feb 16, 2021, 09:35 PM
• Last activity: Apr 23, 2025, 10:06 AM
0
votes
1
answers
356
views
Transaction - Timestamp Ordering. When is aborted transaction restarted
(I don't know if this is the correct place to ask, since this is more of a theoretical question.)

I have a question about restarting aborted transactions. I don't know **when / at which point** the restarting of the transaction(s) should occur.

Example, using *Basic Timestamp Ordering*:

w11(x), r12(x), r11(z), r13(x), w13(z), r12(z), w13(x), w14(x)

Initial values are:

read_ts(x) = write_ts(x) = read_ts(z) = write_ts(z) = 10

---

As we can see, transaction T12 is aborted because it is trying to read value z, written already by T13 (older, T12 can't read it).

So in this case T12 is aborted and its timestamp is raised (to 15?). My question is: when does the aborted transaction restart?

(1) After all the non-aborted transactions are done, or (2) *as soon as possible* (i.e. it restarts right after the next operation, w13(x))?

In scenario 1, w14(x) would be done, and after that the aborted transaction 12 would start (as T15).

In scenario 2 (*as soon as possible*), transaction 12 (as T15) is restarted immediately after the next operation, w13(x). After that, w14(x) is aborted (14 < 15) and restarted as T16.

What is the correct order in this case? What does the *restart* actually mean? When do we restart aborted transaction(s)? I would like to see a practical example.

edit: changed the last operation to w14(x) instead of r14(x)
P_95
(1 rep)
Mar 5, 2017, 02:55 PM
• Last activity: Apr 20, 2025, 02:02 AM
0
votes
0
answers
10
views
Store edit-commencement time or edit-posted time
In all my past projects, for the following reasons, I have always used the **server.database.timestamp** for datetime field values, at the moment the record (row) is inserted or updated on the server.
- Consistency
- Client machine could have incorrect time
But one could argue, as an example, about whether a bill is created at the time it is started or at the time it is completed.
Are there any benefits in using the timestamp of "*start of record editing*" rather than the timestamp at "*posting the record*"?
Rohit Gupta
(2126 rep)
Apr 17, 2025, 07:00 AM
0
votes
1
answers
502
views
SQL - Split timestamp into multiple rows
I am dealing with data that needs to be looked at on a shift-to-shift basis (8:00:00 to 20:00:00 and its reciprocal are the two shifts). There are instances where a timestamp (one row) will span longer than a shift. Below is an example of what I am looking for.

------------------------------------------------
Original Timestamp Data
------------------------------------------------
START_TIME              END_TIME
2020-07-16 04:54:50     2020-07-27 06:36:14

------------------------------------------------
Updated Timestamp Data
------------------------------------------------
START_TIME              END_TIME
2020-07-16 04:54:50     2020-07-16 08:00:00
2020-07-16 08:00:00     2020-07-16 20:00:00
2020-07-16 20:00:00     2020-07-17 08:00:00
2020-07-17 08:00:00     2020-07-17 20:00:00
.                       .
.                       .
.                       .
2020-07-26 20:00:00     2020-07-27 06:36:14
Here is the code I have tried, but I am only able to split the data into two rows. Something tells me that the "Start Roll" and "End Roll" columns within #T1 are not going to work in a situation like this.
Declare @DayTurn as DATETIME, @NightTurn As DATETIME, @TodaysDate As DATETIME, @DateCheck As DATETIME, @TimeChange As Integer, @MidNight As DATETIME
Set @DayTurn = '8:00:00'
Set @NightTurn = '20:00:00'
SET @TodaysDate = GETDATE()
SET @DateCheck = CASE WHEN DATEPART( WK, @TodaysDate) >= 7 THEN DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0)
ELSE DATEADD(Week,-6,DATEADD(yy, DATEDIFF(yy, 0, GETDATE()), 0))
END;
SELECT
(Case
When cast(Activity.[START_TIME_UTC] as time) >= cast(@DayTurn as time) and cast(Activity.[END_TIME_UTC] as time) > cast(@NightTurn as time) and cast(Activity.[START_TIME_UTC] as time) < cast(@DayTurn as time) then CONVERT(DATETIME, CONVERT(CHAR(8), Activity.[START_TIME_UTC] , 112) + ' ' + CONVERT(CHAR(8), @DayTurn, 108))
else CONVERT(datetime, Activity.[START_TIME_UTC]) end) as 'Start Roll'
,(case
When cast(Activity.[START_TIME_UTC] as time) < cast(@DayTurn as time) then CONVERT(DATETIME, CONVERT(CHAR(8), Activity.[START_TIME_UTC], 112) + ' ' + CONVERT(CHAR(8), @DayTurn, 108))
else CONVERT(datetime, Activity.[END_TIME_UTC]) end ) As 'END_TIME'
,(Case
When cast(Activity.[START_TIME_UTC] as time) >= cast(@DayTurn as time) and cast(Activity.[END_TIME_UTC] as time) > cast(@NightTurn as time) and cast(Activity.[START_TIME_UTC] as time) = @DateCheck
SELECT * INTO #T2 from(
Select
temp.[START_TIME]
,temp.[END_TIME]
From #T1 as temp
UNION
Select
temp.[Start Roll]
,temp.[End Roll]
From #T1 as temp
) as temp;
SELECT
*
FROM #T2
Order By START_TIME;
Drop Table #T1
Drop Table #T2
Any and all help is greatly appreciated.
Cheers!
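A different way to sketch this (SQL Server; the `Activity` table and UTC column names follow the question, everything else is an assumption): a recursive CTE that cuts each interval at the next 08:00/20:00 boundary. Because shift boundaries are exactly 12 hours apart, once a slice ends on a boundary the next boundary is simply 12 hours later:

```sql
WITH Split AS (
    -- anchor: first slice of each interval, ending at the first boundary
    SELECT a.START_TIME_UTC AS START_TIME,
           CASE WHEN b.FirstBoundary < a.END_TIME_UTC
                THEN b.FirstBoundary ELSE a.END_TIME_UTC END AS END_TIME,
           a.END_TIME_UTC AS FINAL_END
    FROM Activity a
    CROSS APPLY (SELECT CASE
                   WHEN CAST(a.START_TIME_UTC AS time) < '08:00:00'
                     THEN DATEADD(HOUR,  8, CAST(CAST(a.START_TIME_UTC AS date) AS datetime))
                   WHEN CAST(a.START_TIME_UTC AS time) < '20:00:00'
                     THEN DATEADD(HOUR, 20, CAST(CAST(a.START_TIME_UTC AS date) AS datetime))
                   ELSE  DATEADD(HOUR, 32, CAST(CAST(a.START_TIME_UTC AS date) AS datetime))
                 END AS FirstBoundary) b
    UNION ALL
    -- recursive step: each further slice starts where the last one ended
    SELECT s.END_TIME,
           CASE WHEN DATEADD(HOUR, 12, s.END_TIME) < s.FINAL_END
                THEN DATEADD(HOUR, 12, s.END_TIME) ELSE s.FINAL_END END,
           s.FINAL_END
    FROM Split s
    WHERE s.END_TIME < s.FINAL_END
)
SELECT START_TIME, END_TIME
FROM Split
ORDER BY START_TIME
OPTION (MAXRECURSION 0);   -- intervals may span many shifts
```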
BigMac
(1 rep)
Jul 28, 2020, 01:46 PM
• Last activity: Apr 12, 2025, 08:03 AM
0
votes
2
answers
1780
views
Java PostgreSQL library seems to convert dates to local time zone automatically
PostgreSQL 14.6
Ubuntu 22.04

I am using `postgresql-42.5.4.jar`, which I downloaded from pgJDBC. I use this library to get data from a database and display it on a website running locally. The web server and database server are both running on the same machine.
The database server's time zone is `UTC`. The system's time zone is `America/Chicago`.

I have a table that contains a column of type `timestamp with time zone`.

The data is inserted into the table by a separate C++ program that uses a completely different library. In order to insert the timestamps, it uses a Unix timestamp and the `to_timestamp()` function, like this:

insert into my_table (my_time) values (to_timestamp (1654321098));
The timestamp is retrieved from the table as a string and passed back to the website as is. A comment below suggested using the java.sql.OffsetDateTime class but I don't know where that class would be used. Here is the Java code I am using:
String query = "select my_time from my_table";
ResultSet result_set = db_connection.createStatement ().executeQuery (query);
String result = result_set.getString ("my_time");
When I query this column from my command line database client, it shows me the dates in UTC, which is what I would expect because that is the time zone the server is using. This simple query would look like this:
select my_time from my_table;
While still in my command line client, if I want to display that column in my local time, I have to modify my query like this:

select my_time at time zone 'America/Chicago' as my_time from my_table;

But I started noticing that the website was displaying incorrect times. I temporarily had it print its query to the screen so I could look at it in my command line client. The result was not the same. In order to display the time in my local time on the website, I had to remove the `at time zone 'America/Chicago'` part of the query, which does not seem to make sense and does not produce the same result in the command line client; it also makes the code less portable if I were to move it to a system using a different database library.

Does the Java driver for PostgreSQL automatically convert timestamp fields to local time? If it does, is there a way to turn that feature off? If it doesn't, then what could be causing the different results I get between the JDBC library and my command line client?
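For what it's worth, the pgJDBC driver sets the session's `TimeZone` parameter to the JVM's default zone when it connects, so the server renders `timestamptz` values in that zone rather than in the server's UTC setting. A sketch of how to check and override this per session:

```sql
SHOW TimeZone;        -- on a JDBC session this likely reports the JVM zone
SET TIME ZONE 'UTC';  -- subsequent selects render timestamptz values in UTC
SELECT my_time FROM my_table;
```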
Zephyrus
(283 rep)
Feb 19, 2023, 03:53 PM
• Last activity: Apr 6, 2025, 08:13 AM
3
votes
3
answers
3082
views
Is timestamptz preferred when timezone logic is performed by the client?
I'm in the process of migrating a web application from SQL Server to PostgreSQL and I'm trying to figure out which type to replace `datetime2` with.

The general advice seems to be: always use `timestamptz`, and never use `timestamp`. The reasons given tend to be along the lines that `timestamp` and `timestamptz` are stored the same way regardless (so no performance penalty) and that `timestamptz` auto-converts to the time zone of the connection. [Ignoring timezones altogether in Rails and PostgreSQL | Stack Overflow]
Unfortunately my legacy .NET codebase is very inconsistent with datetimes, and we usually render in UTC regardless of the user's time zone. More recent code has been using NodaTime and its `Instant` class, but we rarely have to deal with times, and displaying just the date has been "close enough". My understanding of using NodaTime properly, however, is to convert an `Instant` to a `LocalDateTime` as late as possible, and not in the database.

In addition to this, I'm not entirely sure how Postgres knows the correct time zone of the "current user". I know you can set the time zone as a session parameter with `SET TIME ZONE 'UTC';`; are you expected to do this for every connection, as appropriate for the "current user"? If so, is this reset whenever the connection is retrieved from the connection pool? I also see that Npgsql has the ability to set a time zone in the connection string; presumably this isn't appropriate if it's per user?
All this leads me to think the best option is to use `timestamp` for all datetimes and use application logic to convert to local datetime. Another option is to use `timestamptz` for all datetimes, force the connection to use UTC in the connection string, and use application logic to convert to local datetime. However, I worry that Postgres will perform extra work doing a no-op conversion between UTC and UTC.

TL;DR: Is `timestamptz` still preferred if the application always inserts/reads UTC and converts to local datetime itself?
berkeleybross
(175 rep)
Apr 8, 2018, 06:11 PM
• Last activity: Mar 3, 2025, 04:03 PM
5
votes
2
answers
581
views
How to retrieve data based on year to date in Postgres?
I'd like to retrieve data from "year to date" in my below query. I was hoping for something simple in the 'INTERVAL' but could not find anything.
I have the following:
SELECT
"status_id", date_part('epoch', time) * 1000 as time,
COUNT(time) as "count"
FROM evStatus
WHERE time >= NOW() - Interval '{1}'
GROUP BY "status_id", time
I'd like to grab data from January of what ever year it is to current date.
I tested this below and it appears to work, but I want to make sure this is the correct way?
SELECT
"status_id", date_part('epoch', time) * 1000 as time,
COUNT(time) as "count"
FROM evStatus
WHERE time BETWEEN date_trunc('year', now()) AND CURRENT_TIMESTAMP
GROUP BY "status_id", time
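For reference, `date_trunc('year', now())` anchors at midnight on January 1 of the current year (in the session time zone), so the approach above is sound; a quick sketch:

```sql
SELECT date_trunc('year', now());  -- midnight, Jan 1 of the current year
-- Year-to-date filter; with no future-dated rows, the BETWEEN upper bound
-- is optional and a plain >= comparison is equivalent:
-- WHERE time >= date_trunc('year', now())
```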
muttBunch
(65 rep)
Mar 1, 2025, 09:45 PM
• Last activity: Mar 3, 2025, 03:01 PM
2
votes
2
answers
1074
views
Calculate Total Working Minutes in MySql
I have designed a table for attendance; here are the general fields:
1. AttendanceMachineLoginId
2. EmpId
3. AttendanceDateTime
Whenever a user comes to the office, he has to record his attendance.
The **first** attendance is considered the **login** and the **second** is considered the **logout**. **Each time a record is added with the time.** A user can make **multiple entries** (login/logout) in a single day, like this: EmpId 81 has logged in and out two times on the same day.
Now, my aim is to generate a per-day employee report of how many minutes he has given to the company. I just came to know that `TIMESTAMPDIFF()` can provide the minutes, but I am unable to understand how I can apply it to my table. Additionally, I want to ask: do the **table fields** suit the desired report, or do I need to change them?
**One other strategy:** I was also thinking that I should add a minutes column to the table, and whenever a user logs out, calculate the minutes and store them with the logout entry.
**Sample Data:**
INSERT INTO `attendancemachinelogin`
(`AttendanceMachineLoginId`, `EmpId`, `TimeTrackId`, `AttendanceDateTime`, `RecordAddDateTime`) VALUES
(0, 81, 315079, '2018-8-15 14:8:46', '2018-08-15 14:09:25'),
(0, 81, 315079, '2018-8-15 14:20:38', '2018-08-15 14:21:17'),
(0, 81, 315079, '2018-8-15 14:21:9', '2018-08-15 14:21:47'),
(0, 81, 315079, '2018-8-15 14:28:37', '2018-08-15 14:29:16'),
(0, 81, 315079, '2018-8-15 14:28:58', '2018-08-15 14:29:36'),
(0, 81, 315079, '2018-8-15 14:36:42', '2018-08-15 14:37:21'),
(0, 81, 315079, '2018-8-15 15:36:34', '2018-08-15 15:37:13'),
(0, 81, 315079, '2018-8-15 15:52:39', '2018-08-15 15:53:17'),
(0, 81, 315079, '2018-8-15 16:5:38', '2018-08-15 16:06:17'),
(0, 81, 315079, '2018-8-15 16:6:50', '2018-08-15 16:07:29'),
(0, 81, 315079, '2018-8-15 16:8:49', '2018-08-15 16:09:29'),
(0, 81, 315079, '2018-8-15 16:18:28', '2018-08-15 16:19:08'),
(0, 81, 315079, '2018-8-15 16:20:49', '2018-08-15 16:21:28'),
(0, 81, 315079, '2018-8-15 16:23:18', '2018-08-15 16:23:58'),
(0, 81, 315079, '2018-8-15 16:24:3', '2018-08-15 16:24:42'),
(0, 81, 315079, '2018-8-15 16:24:47', '2018-08-15 16:25:26'),
(0, 81, 315079, '2018-8-15 16:24:58', '2018-08-15 16:25:37'),
(0, 81, 315079, '2018-8-15 16:25:54', '2018-08-15 16:26:33'),
(0, 81, 315079, '2018-8-15 16:56:47', '2018-08-15 16:57:27'),
(0, 101, 417092, '2018-8-15 17:37:53', '2018-08-15 17:38:32'),
(0, 101, 417092, '2018-8-15 18:4:34', '2018-08-15 18:05:14'),
(0, 101, 417092, '2018-8-15 18:7:43', '2018-08-15 18:08:22'),
(0, 81, 315079, '2018-8-15 18:13:15', '2018-08-15 18:13:54'),
(0, 81, 315079, '2018-8-17 10:50:16', '2018-08-17 10:50:54'),
(0, 101, 417092, '2018-8-17 10:51:54', '2018-08-17 10:52:31'),
(0, 4, 413034, '2018-8-17 11:45:16', '2018-08-17 11:45:54'),
(0, 91, 916086, '2018-8-17 11:59:34', '2018-08-17 12:00:12'),
(0, 81, 315079, '2018-8-17 12:0:19', '2018-08-17 12:00:56'),
(0, 81, 315079, '2018-8-17 15:7:41', '2018-08-17 15:08:17'),
(0, 101, 417092, '2018-8-17 15:9:54', '2018-08-17 15:10:32'),
(0, 101, 417092, '2018-8-17 15:10:9', '2018-08-17 15:10:45'),
(0, 101, 417092, '2018-8-17 15:10:23', '2018-08-17 15:10:59'),
(0, 101, 417092, '2018-8-17 15:10:25', '2018-08-17 15:11:02'),
(0, 101, 417092, '2018-8-17 15:11:6', '2018-08-17 15:11:43'),
(0, 101, 417092, '2018-8-17 15:11:15', '2018-08-17 15:11:52'),
(0, 101, 417092, '2018-8-17 15:11:17', '2018-08-17 15:11:54'),
(0, 81, 315079, '2018-8-17 15:11:32', '2018-08-17 15:12:09'),
(0, 81, 315079, '2018-8-17 15:12:32', '2018-08-17 15:13:09'),
(0, 81, 315079, '2018-8-17 15:35:33', '2018-08-17 15:36:10'),
(0, 81, 315079, '2018-8-17 15:41:58', '2018-08-17 15:42:34'),
(0, 81, 315079, '2018-8-17 15:42:17', '2018-08-17 15:42:54'),
(0, 81, 315079, '2018-8-17 16:8:25', '2018-08-17 16:09:01'),
(0, 81, 315079, '2018-8-17 16:8:32', '2018-08-17 16:09:08'),
(0, 101, 417092, '2018-8-17 16:8:53', '2018-08-17 16:09:30'),
(0, 101, 417092, '2018-8-17 16:9:20', '2018-08-17 16:09:57'),
(0, 4, 413034, '2018-8-17 16:10:16', '2018-08-17 16:10:53'),
(0, 36, 413037, '2018-8-17 16:10:46', '2018-08-17 16:11:23'),
(0, 81, 315079, '2018-8-17 16:22:21', '2018-08-17 16:22:58'),
(0, 101, 417092, '2018-8-17 16:22:45', '2018-08-17 16:23:21'),
(0, 4, 413034, '2018-8-17 16:23:12', '2018-08-17 16:23:49'),
(0, 81, 315079, '2018-8-17 16:23:35', '2018-08-17 16:24:12'),
(0, 81, 315079, '2018-8-17 16:44:4', '2018-08-17 16:44:42'),
(0, 101, 417092, '2018-8-17 16:44:22', '2018-08-17 16:44:58'),
(0, 81, 315079, '2018-8-17 17:6:51', '2018-08-17 17:07:28'),
(0, 101, 417092, '2018-8-17 17:7:8', '2018-08-17 17:07:45'),
(0, 4, 413034, '2018-8-17 17:7:52', '2018-08-17 17:08:28'),
(0, 81, 315079, '2018-8-17 17:9:25', '2018-08-17 17:10:02'),
(0, 101, 417092, '2018-8-17 17:9:46', '2018-08-17 17:10:22'),
(0, 4, 413034, '2018-8-17 17:10:6', '2018-08-17 17:10:42'),
(0, 81, 315079, '2018-8-17 17:10:24', '2018-08-17 17:11:01'),
(0, 81, 315079, '2018-8-17 17:10:39', '2018-08-17 17:11:15'),
(0, 101, 417092, '2018-8-17 17:10:47', '2018-08-17 17:11:24'),
(0, 101, 417092, '2018-8-17 17:10:58', '2018-08-17 17:11:35'),
(0, 81, 315079, '2018-8-17 17:11:10', '2018-08-17 17:11:46'),
(0, 101, 417092, '2018-8-17 17:11:31', '2018-08-17 17:12:09'),
(0, 4, 413034, '2018-8-17 17:40:40', '2018-08-17 17:41:18'),
(0, 101, 417092, '2018-8-17 17:41:23', '2018-08-17 17:41:59'),
(0, 36, 413037, '2018-8-17 17:41:37', '2018-08-17 17:42:14'),
(0, 81, 315079, '2018-8-17 17:42:9', '2018-08-17 17:42:45'),
(0, 3, 213020, '2018-8-17 17:47:34', '2018-08-17 17:48:11'),
(0, 81, 315079, '2018-8-17 17:48:16', '2018-08-17 17:48:52'),
(0, 4, 413034, '2018-8-17 17:48:59', '2018-08-17 17:49:36'),
(0, 4, 413034, '2018-8-17 17:49:59', '2018-08-17 17:50:36'),
(0, 36, 413037, '2018-8-17 17:52:36', '2018-08-17 17:53:13'),
(0, 101, 417092, '2018-8-17 17:52:53', '2018-08-17 17:53:29'),
(0, 6, 213016, '2018-8-17 17:53:30', '2018-08-17 17:54:06'),
(0, 81, 315079, '2018-8-17 17:53:44', '2018-08-17 17:54:20'),
(0, 4, 413034, '2018-8-17 17:54:27', '2018-08-17 17:55:03'),
(0, 3, 213020, '2018-8-17 17:54:49', '2018-08-17 17:55:27'),
(0, 4, 413034, '2018-8-17 17:55:23', '2018-08-17 17:56:00'),
(0, 36, 413037, '2018-8-17 17:58:33', '2018-08-17 17:59:10'),
(0, 101, 417092, '2018-8-17 17:58:47', '2018-08-17 17:59:24'),
(0, 102, 517094, '2018-8-17 17:59:4', '2018-08-17 17:59:40'),
(0, 81, 315079, '2018-8-17 17:59:33', '2018-08-17 18:00:09'),
(0, 4, 413034, '2018-8-17 18:0:16', '2018-08-17 18:00:52'),
(0, 3, 213020, '2018-8-17 18:0:40', '2018-08-17 18:01:17'),
(0, 6, 213016, '2018-8-17 18:1:30', '2018-08-17 18:02:06'),
(0, 36, 413037, '2018-8-17 18:26:24', '2018-08-17 18:27:01'),
(0, 101, 417092, '2018-8-17 18:26:38', '2018-08-17 18:27:14'),
(0, 6, 213016, '2018-8-17 18:27:9', '2018-08-17 18:27:45'),
(0, 81, 315079, '2018-8-17 18:27:24', '2018-08-17 18:28:00'),
(0, 102, 517094, '2018-8-17 18:27:38', '2018-08-17 18:28:14'),
(0, 4, 413034, '2018-8-17 18:28:13', '2018-08-17 18:28:49'),
(0, 81, 315079, '2018-8-17 19:36:49', '2018-08-17 19:37:26'),
(0, 101, 417092, '2018-8-17 19:37:17', '2018-08-17 19:37:54'),
(0, 102, 517094, '2018-8-17 19:37:30', '2018-08-17 19:38:07'),
(0, 36, 413037, '2018-8-17 19:38:13', '2018-08-17 19:38:50'),
(0, 4, 413034, '2018-8-17 19:38:54', '2018-08-17 19:39:32'),
(0, 3, 213020, '2018-8-17 19:39:58', '2018-08-17 19:40:35'),
(0, 101, 417092, '2018-8-18 10:21:26', '2018-08-18 10:22:03'),
(0, 81, 315079, '2018-8-18 10:30:23', '2018-08-18 10:31:09'),
(0, 4, 413034, '2018-8-18 10:31:46', '2018-08-18 10:32:27'),
(0, 102, 517094, '2018-8-18 10:32:15', '2018-08-18 10:32:53'),
(0, 6, 213016, '2018-8-18 10:32:44', '2018-08-18 10:33:22'),
(0, 3, 213020, '2018-8-18 10:33:23', '2018-08-18 10:34:03'),
(0, 81, 315079, '2018-8-18 10:42:49', '2018-08-18 10:43:27'),
(0, 101, 417092, '2018-8-18 10:43:25', '2018-08-18 10:44:03'),
(0, 81, 315079, '2018-8-18 10:48:51', '2018-08-18 10:49:30'),
(0, 102, 517094, '2018-8-18 10:49:9', '2018-08-18 10:49:49'),
(0, 81, 315079, '2018-8-18 10:56:46', '2018-08-18 10:57:25'),
(0, 1, 1211003, '2018-8-18 10:57:0', '2018-08-18 10:57:38'),
(0, 4, 413034, '2018-8-18 10:57:51', '2018-08-18 10:58:38'),
(0, 3, 213020, '2018-8-18 10:58:43', '2018-08-18 10:59:26');
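A sketch of one way to compute per-day minutes (requires MySQL 8+ for window functions, so it may not apply to older servers; it also assumes entries strictly alternate login/logout within each day): pair each row with the next one for the same employee and day, then sum the gaps that start on a login row:

```sql
SELECT EmpId,
       day,
       SUM(CASE WHEN rn % 2 = 1   -- odd-numbered rows are logins
                THEN TIMESTAMPDIFF(MINUTE, AttendanceDateTime, next_time)
                ELSE 0 END) AS minutes_worked
FROM (
  SELECT EmpId,
         DATE(AttendanceDateTime) AS day,
         AttendanceDateTime,
         LEAD(AttendanceDateTime) OVER w AS next_time,  -- the matching logout
         ROW_NUMBER() OVER w AS rn
  FROM attendancemachinelogin
  WINDOW w AS (PARTITION BY EmpId, DATE(AttendanceDateTime)
               ORDER BY AttendanceDateTime)
) AS paired
GROUP BY EmpId, day;
```

An unmatched trailing login yields a NULL difference, which `SUM` ignores.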
Muhammad Faizan Khan
(143 rep)
Aug 18, 2018, 07:25 AM
• Last activity: Feb 5, 2025, 04:08 PM
0
votes
1
answers
976
views
localtimestamp or curent_timestamp for default value
In Oracle 12c I have some columns that I am converting from timestamp with time zone to timestamp with local time zone. Some of those same columns also currently have a default value of CURRENT_TIMESTAMP. Should I change the default values to use the LOCALTIMESTAMP function instead? The columns seem to populate fine if I just leave the CURRENT_TIMESTAMP defaults. Are there any practical reasons or gotchas for using one over the other in this case? I have been combing the docs but can't find anything.
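For comparison, the practical difference is small here: `CURRENT_TIMESTAMP` returns a `TIMESTAMP WITH TIME ZONE` in the session time zone, while `LOCALTIMESTAMP` returns a plain `TIMESTAMP`; assigned to a `TIMESTAMP WITH LOCAL TIME ZONE` column, both represent the same instant, which is why the existing defaults populate fine. A sketch:

```sql
-- Same wall-clock time, different return types; either works as a default
-- for a TIMESTAMP WITH LOCAL TIME ZONE column.
SELECT CURRENT_TIMESTAMP, LOCALTIMESTAMP FROM dual;
```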
user27643
(101 rep)
Apr 18, 2016, 04:39 PM
• Last activity: Jan 22, 2025, 01:01 AM
3
votes
2
answers
5287
views
What value should I choose as length for timestamp without timezone in PostgreSQL?
I'm currently setting up a database to populate it with big data. All the data I get comes from Valve's Steam Web API. Therefore, as all timestamps are returned in Unix timestamps, I cannot directly derive the correct timezone - which is also not of any interest as it's to fine-grained for what it i...
I'm currently setting up a database to populate it with big data. All the data I get comes from Valve's Steam Web API. As all timestamps are returned as Unix timestamps, I cannot directly derive the correct timezone - which is also not of any interest, as it's too fine-grained for what the data is intended for. However, PostgreSQL enables the "length" value field when setting up a column as timestamp without timezone, but I don't know which value is meaningful to enter here, and I also couldn't find any information about this value - neither in the official documentation nor on StackExchange so far.
As I'm someone who doesn't set up databases all the time, I'm a bit confused and would love to get some assistance. Thanks in advance for your suggestions and input.
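For what it's worth, the "length" field PostgreSQL exposes for timestamp is the fractional-seconds precision: an integer from 0 to 6 digits, defaulting to 6. Since Unix timestamps from the API carry whole seconds only, any precision works and timestamp(0) would suffice. A small Python sketch (with a hypothetical API value) shows there is no sub-second part to preserve:

```python
from datetime import datetime, timezone

# Hypothetical Unix timestamp as the Steam Web API would return it:
# an integer count of whole seconds since the epoch.
unix_ts = 1607985300

# Convert to an aware UTC datetime.
dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)

print(dt.isoformat())  # 2020-12-14T22:35:00+00:00
print(dt.microsecond)  # 0 — no fractional seconds to store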
Samaranth
(131 rep)
Dec 14, 2020, 10:15 PM
• Last activity: Nov 25, 2024, 11:38 AM
39
votes
2
answers
70646
views
How to best store a timestamp in PostgreSQL?
I'm working on a PostgreSQL DB design and I am wondering how best to store timestamps. ### Assumptions Users in different timezones will use the database for all CRUD functions. I have looked at 2 options: - `timestamp NOT NULL DEFAULT (now() AT TIME ZONE 'UTC')` - `bigint NOT NULL DEFAULT` For **`t...
I'm working on a PostgreSQL DB design and I am wondering how best to store timestamps.
### Assumptions
Users in different timezones will use the database for all CRUD functions.
I have looked at 2 options:
- `timestamp NOT NULL DEFAULT (now() AT TIME ZONE 'UTC')`
- `bigint NOT NULL DEFAULT`
For **`timestamp`** I would send a string that represents the exact (UTC) timestamp for the INSERT moment.
For **`bigint`** I would store the exact same thing, but in a number format. (Time zone issues are handled before the millis value is handed over to the server, so it is always millis in UTC.)
One main advantage of storing a `bigint` could be that it is easier to store and to retrieve, as passing a correctly formatted timestamp is more complex than a simple number (millis since the Unix Epoch).
My question is which one would allow for the most flexible design and what could be the pitfalls of each approach.
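A minimal sketch (in Python, with a hypothetical millis value) of the client-side conversion the bigint approach implies — the round trip is lossless at millisecond precision:

```python
from datetime import datetime, timezone

def millis_to_dt(ms: int) -> datetime:
    """Millis since the Unix epoch -> aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def dt_to_millis(dt: datetime) -> int:
    """Aware datetime -> millis since the Unix epoch."""
    return int(dt.timestamp() * 1000)

ms = 1437336480000  # hypothetical client-supplied value
dt = millis_to_dt(ms)
assert dt_to_millis(dt) == ms  # lossless round trip
```

The same round trip is what the `timestamp` option delegates to the driver's formatting and parsing instead of doing by hand.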
Bam
(579 rep)
Jul 19, 2015, 08:08 PM
• Last activity: Nov 21, 2024, 10:34 PM
13
votes
3
answers
7002
views
"AT TIME ZONE" with zone name PostgreSQL bug?
I was answering this stackoverflow question and found a strange result: select * from pg_timezone_names where name = 'Europe/Berlin' ; name | abbrev | utc_offset | is_dst ---------------+--------+------------+-------- Europe/Berlin | CET | 01:00:00 | f and next query select id, timestampwithtimez...
I was answering this stackoverflow question and found a strange result:
select * from pg_timezone_names where name = 'Europe/Berlin' ;
name | abbrev | utc_offset | is_dst
---------------+--------+------------+--------
Europe/Berlin | CET | 01:00:00 | f
and next query
select id,
timestampwithtimezone,
timestampwithtimezone at time zone 'Europe/Berlin' as berlin,
timestampwithtimezone at time zone 'CET' as cet
from data ;
id | timestampwithtimezone | berlin | cet
-----+------------------------+---------------------+---------------------
205 | 2012-10-28 01:30:00+02 | 2012-10-28 01:30:00 | 2012-10-28 00:30:00
204 | 2012-10-28 02:00:00+02 | 2012-10-28 02:00:00 | 2012-10-28 01:00:00
203 | 2012-10-28 02:30:00+02 | 2012-10-28 02:30:00 | 2012-10-28 01:30:00
202 | 2012-10-28 02:59:59+02 | 2012-10-28 02:59:59 | 2012-10-28 01:59:59
106 | 2012-10-28 02:00:00+01 | 2012-10-28 02:00:00 | 2012-10-28 02:00:00
I'm using PostgreSQL 9.1.2 on Ubuntu 12.04.
I just checked that on 8.2.11 the result is the same.
According to the documentation, it doesn't matter whether I use the name or the abbreviation.
Is this a bug?
Am I doing something wrong?
Can someone explain this result?
**EDIT**
In response to the comment that CET is not Europe/Berlin: I'm just selecting values from pg_timezone_names.
select * from pg_timezone_names where abbrev ='CEST';
name | abbrev | utc_offset | is_dst
------+--------+------------+--------
and
select * from pg_timezone_names where abbrev ='CET';
name | abbrev | utc_offset | is_dst
---------------------+--------+------------+--------
Africa/Tunis | CET | 01:00:00 | f
Africa/Algiers | CET | 01:00:00 | f
Africa/Ceuta | CET | 01:00:00 | f
CET | CET | 01:00:00 | f
Atlantic/Jan_Mayen | CET | 01:00:00 | f
Arctic/Longyearbyen | CET | 01:00:00 | f
Poland | CET | 01:00:00 | f
.....
During winter Europe/Berlin is +01. During summer it is +02.
**EDIT2**
On 2012-10-28 the time zone changed from summer time to winter time at 2:00.
These two records have the same value in Europe/Berlin:
204 | 2012-10-28 02:00:00+02 | 2012-10-28 02:00:00 | 2012-10-28 01:00:00
106 | 2012-10-28 02:00:00+01 | 2012-10-28 02:00:00 | 2012-10-28 02:00:00
This suggests that if I use one of the abbreviations (CET or CEST) over a large date range (spanning both summer and winter time), the result will be wrong for some of the records, while it will be correct if I use 'Europe/Berlin'.
I changed the system time to '2012-01-17' and pg_timezone_names changed as well.
select * from pg_timezone_names where name ='Europe/Berlin';
name | abbrev | utc_offset | is_dst
---------------+--------+------------+--------
Europe/Berlin | CEST | 02:00:00 | t
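For reference, this behavior is not a bug: on 2012-10-28 the clocks in Europe/Berlin fell back at 01:00 UTC, so two distinct instants share the same local wall-clock time, while (as I understand it) PostgreSQL treats the bare abbreviation 'CET' as a fixed +01 offset that can only represent the winter-time reading. A small Python sketch using zoneinfo (not PostgreSQL itself) reproduces rows 204 and 106:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

berlin = ZoneInfo("Europe/Berlin")

# The two instants behind rows 204 and 106: 00:00 and 01:00 UTC,
# i.e. just before and just after the fall-back transition.
summer_side = datetime(2012, 10, 28, 0, 0, tzinfo=timezone.utc).astimezone(berlin)
winter_side = datetime(2012, 10, 28, 1, 0, tzinfo=timezone.utc).astimezone(berlin)

print(summer_side)  # 2012-10-28 02:00:00+02:00
print(winter_side)  # 2012-10-28 02:00:00+01:00
# Same wall-clock time, different UTC offsets: the clock was set back,
# which a fixed-offset abbreviation cannot express.
```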
sufleR
(678 rep)
Dec 19, 2012, 11:50 PM
• Last activity: Nov 20, 2024, 10:18 AM
2
votes
1
answers
606
views
Generate series of time ranges
I know we can use Postgres' `generate_series()` function to produce a set of timestamps. Can we do the same to produce a set of time ranges directly without having to manually convert generated timestamps into time ranges like so? ```SQL SELECT tstzrange( time_stamp, time_stamp + '1 days' '[)' ) AS...
I know we can use Postgres'
generate_series()
function to produce a set of timestamps. Can we do the same to produce a set of time ranges directly without having to manually convert generated timestamps into time ranges like so?
SELECT tstzrange(
        time_stamp,
        time_stamp + interval '1 day',
        '[)'
    ) AS time_range
FROM generate_series(
        '2022-01-01'::TIMESTAMPTZ,
        '2022-02-01'::TIMESTAMPTZ,
        interval '1 day'
    ) AS tmp(time_stamp);
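As far as I know there is no built-in set-returning function that yields ranges directly, so wrapping generate_series() in tstzrange() as above is the usual approach. A Python sketch of the equivalent half-open daily ranges (illustrative only, not PostgreSQL):

```python
from datetime import date, timedelta

def daily_ranges(start: date, stop: date):
    """Yield half-open (day, day + 1) pairs, mirroring
    tstzrange(t, t + interval '1 day', '[)') over a series."""
    d = start
    while d < stop:
        yield (d, d + timedelta(days=1))
        d += timedelta(days=1)

ranges = list(daily_ranges(date(2022, 1, 1), date(2022, 2, 1)))
print(len(ranges))  # 31
print(ranges[0])    # (datetime.date(2022, 1, 1), datetime.date(2022, 1, 2))
```

Each pair is half-open on the right, so consecutive ranges tile the month with no gaps or overlaps — the same property the '[)' bound flag gives tstzrange.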
eliangius
(155 rep)
Jun 5, 2022, 10:14 PM
• Last activity: Nov 20, 2024, 09:53 AM
Showing page 1 of 20 total questions