Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
7
votes
2
answers
839
views
Are there any reasons to not use smallint when it fits the data?
Since PostgreSQL doesn't have a 1-byte tinyint, the next best option would be smallint. However, I've read in various posts* that it may actually be slower, because CPUs are optimized to operate on 32-bit integers, or because there may be implicit conversions to 32-bit integers.
Besides these, are there any other reasons I'm not yet aware of for not using integers smaller than 32 bits?
\* I can't find the posts now, and I could be wrong about this. What I understood from them is that smaller integers would save disk/index space: good for the database, but the web/computation server would need to convert them to 32-bit.
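As a rough illustration of the space argument only (not of PostgreSQL's on-disk layout, which also involves alignment padding, so the real saving depends on neighbouring columns), fixed-width integers differ purely in bytes stored:

```python
import struct

# A 16-bit ("smallint") value occupies 2 bytes; a 32-bit int occupies 4.
# This shows raw width only; PostgreSQL's row layout adds alignment
# padding, so the actual saving depends on column order.
smallint_bytes = struct.pack("<h", 123)
int_bytes = struct.pack("<i", 123)

print(len(smallint_bytes))  # 2
print(len(int_bytes))       # 4
```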
davidtgq
(759 rep)
Sep 30, 2016, 09:22 PM
• Last activity: Jan 6, 2025, 10:33 AM
4
votes
3
answers
1586
views
Integer number in the 700000s as the days from year 1: how can this be cast in tsql to a date and back if the oldest datetime date is 1753-01-01?
I came across an integer format for dates where I know the corresponding date, but I do not know how to get to that date in T-SQL, nor how to get back to the integer if I have the date:

```
700444 -> 1918-10-02
731573 -> 2003-12-24
739479 -> 2025-08-16
```

Those 6-digit numbers would fit as a counter of days from 0001-01-01 onwards. I checked that by taking a twentieth of the number for a date that is almost in year 2000 (roughly the days of one century) and adding that to 1900:

```
select DATEADD(dd,731573/20,'19000101')
```

Out:

```
2000-02-24 00:00:00.000
```
But I cannot run

```
select DATEADD(dd,731573/20,'10000101')
```

which throws:

> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

Microsoft Learn says that the T-SQL datetime type allows dates only from 1753-01-01 onwards, see [datetime (Transact-SQL) - Microsoft Learn](https://learn.microsoft.com/en-us/sql/t-sql/data-types/datetime-transact-sql?view=sql-server-ver16#description), thus:

```
select DATEADD(dd,731573/20,'17530101')
```

Out:

```
1853-02-24 00:00:00.000
```
I cannot add the 731573 to year 1, though. Then I found [What is the significance of 1/1/1753 in SQL Server?](https://stackoverflow.com/questions/3310569/what-is-the-significance-of-1-1-1753-in-sql-server):

*(As said in one of the answers, and at [Why should you always write "varchar" with the length in brackets behind it? Often, you get the right outcome without doing so - DBA SE](https://dba.stackexchange.com/q/338742/212659), take `varchar(length)` instead of just `varchar`.)*

```
SELECT CONVERT(VARCHAR, DATEADD(DAY,-731572,CAST('2003-12-24' AS DATETIME2)),100)
SELECT CONVERT(VARCHAR(30), DATEADD(DAY,-731572,CAST('2003-12-24' AS DATETIME2)),100)
```

Out:

```
Jan 1 0001 12:00AM
```
This proves that the number *is* the count of days from the first day of year 0001. Now I wonder whether I can get there without casting the datetime column to datetime2. My dates are all in the 20th and 21st century, so I do not need datetime2; I get the data as datetime and want to avoid a type conversion.
How can I cast this integer in the seven hundred thousands, the counter of days from year 1, to a date, and how can I get from the date back to that integer, without converting the date to datetime2?
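The mapping matches the proleptic Gregorian day count that Python's `datetime.date.fromordinal` uses (day 1 = 0001-01-01), which makes it easy to sanity-check the sample values outside of T-SQL; this is a verification sketch, not a T-SQL answer:

```python
from datetime import date

# Day 1 of the proleptic Gregorian calendar is 0001-01-01,
# so these "day counter" integers are plain date ordinals.
assert date.fromordinal(700444) == date(1918, 10, 2)
assert date.fromordinal(731573) == date(2003, 12, 24)
assert date.fromordinal(739479) == date(2025, 8, 16)

# And back again:
assert date(2003, 12, 24).toordinal() == 731573
```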
questionto42
(366 rep)
Apr 17, 2024, 11:00 PM
• Last activity: Oct 9, 2024, 08:27 PM
0
votes
1
answers
213
views
Why isn't it trivial to create a virtual integer timestamp column from a TIMESTAMP column in MySQL?
There's no way to extract an integer timestamp from a `TIMESTAMP` column in a virtual column context in MySQL because `UNIX_TIMESTAMP` is not allowed in virtual columns. Why isn't this a trivial task?
There's no way to extract an integer timestamp from a
TIMESTAMP
column in a virtual column context in MySQL because UNIX_TIMESTAMP
is not allowed in virtual columns. Why isn't this a trivial task?
Dan
(235 rep)
Jan 29, 2023, 11:24 PM
• Last activity: Jan 30, 2023, 12:48 AM
0
votes
1
answers
2000
views
How to subtract from the current timestamp the number of hours stored in a column?
I am trying to retrieve data from a column with integer values by taking the difference between the current timestamp and the values in the column. This works in DB2 by simply qualifying the value as hours:

```
TRANS_DATETIME > (CURRENT_TIMESTAMP - my_column_alias HOURS)
```

However, using the same line in PostgreSQL returns the error below. Any ideas on how to modify the query for PostgreSQL?

> SQL Error: ERROR: operator does not exist: timestamp with time zone - integer
> Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
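PostgreSQL cannot subtract a bare integer from a timestamp; the usual fix is to turn the column into an interval first, e.g. `CURRENT_TIMESTAMP - my_column_alias * INTERVAL '1 hour'` or `make_interval(hours => my_column_alias)`. The same arithmetic, sketched in Python with `timedelta` for illustration:

```python
from datetime import datetime, timedelta

# Subtracting "n hours" requires an interval/timedelta, not a bare
# integer, mirroring PostgreSQL's my_column * INTERVAL '1 hour'.
now = datetime(2022, 11, 21, 7, 43)
hours_in_column = 5

cutoff = now - timedelta(hours=hours_in_column)
print(cutoff)  # 2022-11-21 02:43:00
```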
kunleoju
(3 rep)
Nov 21, 2022, 07:43 AM
• Last activity: Nov 21, 2022, 07:00 PM
1
votes
1
answers
293
views
Check int4range includes number using B-tree index
I have a table with two important columns, `value` and `m_range`:

```
CREATE TABLE m_filter (
  value BIGINT NOT NULL,
  m_range int4range NOT NULL,
  EXCLUDE USING GIST (m_range WITH &&, value WITH =)
);
```

The usage scenario is the following:
1. It's not frequently updated, but it can contain a quite large amount of data.
2. It's queried very frequently by searching for `value`s which are in `m_range`.

So I'm considering a B-tree index on the `m_range` column instead of GiST, for better query performance:

```
CREATE INDEX i_m_filter_range ON m_filter USING BTREE (m_range);
```
But according to the docs, https://www.postgresql.org/docs/current/indexes-types.html, B-tree indexes support only the operators `<`, `<=`, `=`, `>=`, `>`.
Is it possible to select all `value`s where the `int4range` column contains some integer using a B-tree index, like I can do with ranges using the `@>` and `<@` operators?
---
Update:
In this table, `m_range` bounds are int values from 0 to 9999; each `value` has about 500 non-overlapping `m_range` records on average, and there are about 1,000 `value`s, so about 500,000 `m_range` rows in total.
PostgreSQL version is 13.5.
g4s8
(111 rep)
Jun 16, 2022, 04:16 PM
• Last activity: Jun 24, 2022, 03:15 AM
2
votes
2
answers
375
views
Compare signed integers as if they were unsigned
I am using PostgreSQL to store 64-bit IDs. The IDs are unsigned and make full use of all their bits (including the first one). To store them, I currently convert the IDs to signed integers before inserting them as `bigint` (because Pg doesn't support unsigned integers).
This works, and I am able to convert the IDs back to unsigned integers after querying them, but I ran into a problem: if I try to compare IDs in an SQL query, it fails because some of them are negative.
For example, if I store `18446744069414584320` and `25` in my DB, those IDs are stored as `-4294967296` and `25` respectively. Hence for Pg, the larger ID would be `25` (which is problematic in my case).
Is there a way to treat signed integers as unsigned when using a comparison operator in PostgreSQL?
Edit: Storing the IDs using the `numeric` data type would not be practical in my case, not only because of the increased memory consumption but also because of the way IDs are processed by the applications connected to the database.
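One way people approach this (a sketch of the idea, not a drop-in Pg answer) relies on the fact that flipping the sign bit of the stored signed value restores unsigned order under signed comparison, so in SQL something like `ORDER BY id # -9223372036854775808` sorts bigints in unsigned order. The mapping in Python:

```python
U64 = 1 << 64
SIGN = 1 << 63

def to_signed(u: int) -> int:
    """How an unsigned 64-bit ID is reinterpreted when stored in bigint."""
    return u - U64 if u >= SIGN else u

def to_unsigned(s: int) -> int:
    """Recover the original unsigned ID from the stored signed value."""
    return s % U64

ids = [18446744069414584320, 25]
stored = [to_signed(u) for u in ids]
print(stored)       # [-4294967296, 25]
print(max(stored))  # 25  (signed comparison picks the "wrong" maximum)

# Flipping the sign bit before a signed comparison restores unsigned order:
def unsigned_order_key(s: int) -> int:
    return to_signed(to_unsigned(s) ^ SIGN)

assert sorted(stored, key=unsigned_order_key) == sorted(stored, key=to_unsigned)
```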
Sadiinso
(123 rep)
Feb 18, 2022, 08:40 AM
• Last activity: Feb 22, 2022, 08:38 AM
0
votes
0
answers
45
views
Getting the max ID of integer columns in databases in Postgres 9.6/12
I am looking to get the max ID of each integer primary key column across our database servers.
The function I currently use:

```
CREATE OR REPLACE FUNCTION intpkmax() RETURNS
   TABLE(schema_name name, table_name name, column_name name, max_value integer)
   LANGUAGE plpgsql STABLE AS
$$BEGIN
   /* loop through tables with a single integer column as primary key */
   FOR schema_name, table_name, column_name IN
      SELECT sch.nspname, tab.relname, col.attname
      FROM pg_class tab
         JOIN pg_constraint con ON con.conrelid = tab.oid
         JOIN pg_attribute col ON col.attrelid = tab.oid
                              AND col.attnum = con.conkey[1]  /* match the PK column itself */
         JOIN pg_namespace sch ON sch.oid = tab.relnamespace
      WHERE con.contype = 'p'
        AND array_length(con.conkey, 1) = 1
        AND col.atttypid = 'integer'::regtype
        AND NOT col.attisdropped
   LOOP
      /* get the maximum value of the primary key column */
      EXECUTE 'SELECT max(' || quote_ident(column_name) ||
              ') FROM ' || quote_ident(schema_name) ||
              '.' || quote_ident(table_name)
         INTO max_value;
      /* return the next result */
      RETURN NEXT;
   END LOOP;
END;$$;
```
And then I insert the results of the above function into a table:

```
SELECT * INTO public.maxIDValue FROM public.intpkmax();
```

I need to generate a report once a month, or maybe on a shorter timeframe; I can schedule this via cron, but I need the data. I am currently running this in a test environment with no traffic, but wanted to ask:

- Would this function cause locking on any tables? The environments it would run on are heavily transactional.
- Is there any way of keeping the function in the postgres public schema but calling it so that it runs against each database? If so, is it possible for the results table to include the database name (datname)? I tried joining pg_database but had no joy.

The function takes some time to run against the largest database (600+ GB); is there a quicker alternative?
Any help is much appreciated.
rdbmsNoob
(459 rep)
Nov 29, 2021, 05:22 PM
0
votes
1
answers
1622
views
Calculating Average Value of JSONB array in Postgres
I have a column called "value" in my "answers" table.

| value   |
|---------|
| [1,2]   |
| [1]     |
| [1,2,3] |

The type of "value" is "jsonb".
I want to get the average value of each array in each row:

```
SELECT avg(value) AS avg_value
FROM answers
```

But this doesn't work because avg() is not a jsonb function. I've tried:

```
SELECT avg(value::integer[]) as avg_value
FROM answers
```

i.e. I tried to cast the jsonb arrays into integer arrays and then take the avg, but I get the following error: "cannot cast type jsonb to integer[]".
Any ideas?
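In Postgres this is typically done by expanding each array with `jsonb_array_elements` and averaging the elements per row; the per-row arithmetic itself is just a mean, e.g.:

```python
from statistics import mean

# Each row holds a JSON array; the goal is one average per row.
rows = [[1, 2], [1], [1, 2, 3]]

avgs = [mean(r) for r in rows]
print(avgs)  # [1.5, 1, 2]
```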
user236222
Jul 30, 2021, 09:44 AM
• Last activity: Jul 30, 2021, 12:00 PM
1
votes
2
answers
548
views
How to make MySQL 8 alert when integer overflow happens during INSERT ... ON DUPLICATE KEY?
Let's consider we have a simple table with an auto-incrementing integer `id` and some unique column:

```
CREATE TABLE `test` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `value` tinyint DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `value` (`value`)
);
```
Imagine we inserted into it so many times that `id` reached the maximum:

```
INSERT INTO test VALUES
(4294967294, 1),
(4294967295, 2);
```

Now we try to insert one more row, and if it's a duplicate then change the `value` to 3:

```
INSERT INTO test (`value`) VALUES (1) ON DUPLICATE KEY UPDATE `value` = 3;
```
**What is the expected behaviour?**

```
id         value
---------- --------
4294967294 3
4294967295 2
```

**What we get in fact:**

```
id         value
---------- --------
4294967294 1
4294967295 3
```

**How?** Because of the integer overflow, the auto-generated `id` of the new row was again 4294967295.
__So now the question: is it possible to make MySQL 8 raise an alert if the overflow happens during a query?__ In my case such a silent overflow caused huge data loss, because all newly added rows just overwrote the last row.
P.S. I'm not asking how to fix or prevent such an overflow; that's obvious. I'm asking exactly how to catch such errors.
Stalinko
(201 rep)
May 22, 2020, 01:09 PM
• Last activity: May 31, 2020, 02:21 AM
13
votes
1
answers
18744
views
Text string stored in SQLite Integer column?
I'm a database novice looking at an SQLite database which appears to be storing text in an integer column. Here's an example session at the sqlite3 command line:

```
sqlite> .schema mytable
CREATE TABLE mytable (
    id integer primary key,  /* 0 */
    mycol integer not null   /* 1 */
);
sqlite> SELECT mycol FROM mytable;
here is some text
here is more text
[...]
it's text all the way down
```

I'm confused. What gives?
igal
(345 rep)
Jul 8, 2015, 05:07 PM
• Last activity: Mar 25, 2020, 09:17 PM
5
votes
1
answers
1695
views
Will SQL Server "int" datatype reliably truncate (and not round) decimal values when they are input?
I have a user with software that sometimes sends back a UNIT's ID as an integer (ex. 1000), and sometimes with a small decimal value appended (ex. 1000.001). The software automatically generates MSSQL (2016 SP1) tables with "real" datatypes and, to top it off, makes these UNIT columns part of a primary key. This causes issues on the data input side and the reporting side.
I realize this is really a "bug" in the software, but I'm trying to be helpful on the DB side. I have proven in a sandbox region that I can change the "real" datatype to an "int", that this does not cause data loss with existing data, and every input scenario shows that the decimal part of any input will be truncated in the "int" column (not rounded).
My questions are: will SQL Server always reliably truncate decimal values when they are input into an "int" datatype (we do not have a way to dig into the packaged app and force it to ROUND)? Are there any scenarios where this will error, or cause issues? I can't find this documented anywhere.
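For what it's worth, truncation toward zero is also what C-family casts and Python's `int()` do with floats; this sketch illustrates the semantics being relied on, not SQL Server's own guarantee:

```python
# Truncation toward zero: the fractional part is dropped, never rounded.
samples = [1000.001, 1000.999, -1000.999]
print([int(x) for x in samples])  # [1000, 1000, -1000]
```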
Brett Walters
(53 rep)
Aug 29, 2019, 02:54 PM
• Last activity: Aug 30, 2019, 02:35 PM
9
votes
3
answers
18857
views
How to import blanks as nulls instead of zeros while importing txt using wizard
I'm using the Import Wizard to load a text file and need blanks in integer fields to be nulls, but only zeros are inserted. How to import it properly?
Vladimir
(91 rep)
Dec 8, 2015, 12:41 PM
• Last activity: Apr 19, 2019, 06:09 PM
2
votes
2
answers
397
views
How does Postgres handle integer data of different sizes in the same column?
If I have a `bigint` column and in one row store `1` and in another store `999999999999`, will these take up different amounts of space on disk?
Will Postgres have an easier time doing queries and calculations with the smaller data?
The motivation for my question is that I have a lot of data of this form: `1000000306429071`. I'm wondering if it would be valuable to strip the leading `1` and `0`s.
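`bigint` is a fixed-width type, so both values occupy the same 8 bytes on disk; only variable-length types such as `numeric` or `text` shrink with smaller values. The fixed-width idea, illustrated with Python's struct packing:

```python
import struct

# A 64-bit integer field is 8 bytes no matter which value it holds.
small = struct.pack("<q", 1)
large = struct.pack("<q", 999999999999)

print(len(small), len(large))  # 8 8
```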
John Bachir
(867 rep)
Jun 19, 2017, 05:56 PM
• Last activity: Jun 20, 2017, 12:37 AM
1
votes
1
answers
127
views
Expanding contractions/number ranges into separate records
I am planning the migration of a directory of image files, plus a FileMaker database containing the corresponding metadata, into an [Imagic IMS](http://www.imagic.ch/index.php?id=99&L=2) database (formerly known as ImageAccess). The vendor provides a Java "Datasheet to xml conversion tool" which creates an XML file for importing image metadata from a tab-separated file. The images' filenames are made up of their image reference numbers listed in the FileMaker database.
In the source FileMaker database the image reference numbers are often contracted, making a sample export record describing four images look something like this:

```
ref      subject                  photographer
123/3-6  Building A, Living room  Photographer A
```

For the program converting the tab-separated file to XML, these four images should be listed as:

```
ref    subject                  photographer
123/3  Building A, Living room  Photographer A
123/4  Building A, Living room  Photographer A
123/5  Building A, Living room  Photographer A
123/6  Building A, Living room  Photographer A
```

How can I best expand these contracted records into separate lines?
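A small script can do the expansion before the XML conversion step; the sketch below assumes references always look like `base/start-end` (a hypothetical format based on the sample; records without a `-` pass through unchanged):

```python
def expand(ref: str):
    """Expand '123/3-6' into ['123/3', '123/4', '123/5', '123/6']."""
    base, _, rng = ref.partition("/")
    if "-" not in rng:
        return [ref]  # not contracted, keep as-is
    start, end = rng.split("-")
    return [f"{base}/{n}" for n in range(int(start), int(end) + 1)]

print(expand("123/3-6"))  # ['123/3', '123/4', '123/5', '123/6']
print(expand("123/7"))    # ['123/7']
```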
Oliver
(19 rep)
Sep 25, 2012, 03:24 PM
• Last activity: Nov 14, 2015, 08:00 PM
0
votes
1
answers
460
views
Numeric Latitude & Longitude values are rounded up when querying SQL Server Database
Developing an application that uses Google Maps. Latitude and longitude values are stored in the table as the correct values. However, when they are pulled into a view, each value is rounded to the nearest 100th, making the place markers on Google Maps inaccurate.
Any suggestions on how to always return the true value of the number?
rwatts
(131 rep)
Jan 28, 2015, 04:41 PM
• Last activity: Oct 14, 2015, 04:40 PM
5
votes
1
answers
2518
views
Rounding issues?
Try this:

```
create table test (f float);
insert into test values (330.0);
commit;

select (8 + 330.0/60)::int; --14
select (8 + f/60)::int from test; --14
select (9 + 330.0/60)::int; --15
select (9 + f/60)::int from test; --14
```

Can someone explain why the last query returns 14 instead of 15?

PostgreSQL 9.3.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
Ubuntu 12.04.5 LTS (GNU/Linux 3.2.0-63-virtual x86_64)
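The literal `330.0/60` is `numeric`, whose cast to int rounds halves away from zero (14.5 → 15), while `float` follows IEEE round-half-to-even (14.5 → 14). Python's `round` uses the same half-to-even rule, and `decimal` can mimic the `numeric` behaviour; a sketch of the two conventions:

```python
from decimal import Decimal, ROUND_HALF_UP

x = 9 + 330.0 / 60          # exactly 14.5 as a float

# Float-style cast: round half to even ("banker's rounding").
print(round(x))             # 14

# numeric-style cast: round half away from zero.
print(int(Decimal("14.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP)))  # 15
```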
kev
(397 rep)
Sep 1, 2014, 06:05 AM
• Last activity: Jun 26, 2015, 06:32 AM
0
votes
1
answers
5148
views
Do I lose data if I change from INT to TINYINT
I have a database (engine `InnoDB`) that was set up by someone else. I noticed that one column contains data between 1 and 107 at the moment, and it is highly unlikely to increase much further.
As it is currently set up as `INT`, my idea was to change the type to `TINYINT unsigned`.
Can I just do that without losing data? What would be the most efficient way to go about it?
I am still rather new to databases, getting my hands dirty at the moment and learning all about data types, performance, etc. Looks like my predecessor hasn't done such a good job... :(
Kolja
(363 rep)
Dec 30, 2014, 07:50 PM
• Last activity: Dec 31, 2014, 04:27 PM
0
votes
1
answers
7406
views
Looping through string, adding all numbers e.g. '123' = 1+2+3, demo with working loop included
This works to output the string `123456` as:

```
1
2
3
4
5
6
```

Code:

```
declare @string varchar(20)
declare @index int
declare @len int
declare @char char(1)

set @string = '123456'
set @index = 1
set @len = LEN(@string)

WHILE @index <= @len
BEGIN
    set @char = SUBSTRING(@string, @index, 1)
    print @char
    SET @index = @index + 1
END
```
But when I try to use this code to add those values up while looping through them (1+2+3+4+5+6 = 21), I do not get a result from SQL. Please help:

```
declare @string varchar(20)
declare @index int
declare @len int
declare @char char(1)

set @string = '123456'
set @index = 1
set @len = LEN(@string)

declare @Total int

WHILE @index <= @len
BEGIN
    set @char = SUBSTRING(@string, @index, 1)
    set @Total = @Total + cast(@char as int)
    SET @index = @index + 1
END

print @Total
```
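The likely culprit is that `@Total` is never initialized, so it stays NULL, and `NULL + anything` is NULL in T-SQL (initializing it with `set @Total = 0` fixes the loop). The intended digit sum, sketched in Python:

```python
# Sum the digits of a numeric string: '123456' -> 1+2+3+4+5+6 = 21.
# Note the accumulator starts at 0; an uninitialized (NULL) accumulator
# is exactly what makes the T-SQL version print nothing.
total = 0
for ch in "123456":
    total += int(ch)

print(total)  # 21
```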
Peter PitLock
(1405 rep)
Feb 24, 2014, 01:29 PM
• Last activity: Feb 24, 2014, 01:43 PM
1
votes
1
answers
3482
views
Should I be concered by large SERIAL values?
I have a Django application that uses PostgreSQL to analyze data from tweets. The data set increases by thousands of records with each request. I am using the database primarily as a cache, so I had planned to delete all records every 24 hours to permit new requests without needlessly increasing the size of the database.
Django uses the `SERIAL` type for storing the ids; a new item is given the next value after the last item that was created, rather than the first available number. I don't use the ids for anything outside of the ORM. My concern is that I will eventually run out of key space on my 32-bit VM. What does PostgreSQL do when the next value is too large for `SERIAL`? Does it give an error? Does it roll back to one?
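`SERIAL` is backed by a 32-bit signed int, so the ceiling is 2^31 - 1 regardless of the VM's word size. A quick back-of-the-envelope on how long that lasts (the 100,000 rows/day figure is an assumed illustration, not from the question):

```python
# SERIAL is a 32-bit signed integer under the hood.
max_serial = 2**31 - 1
print(max_serial)            # 2147483647

# Assumed load for illustration: 100,000 new rows per day.
rows_per_day = 100_000
days = max_serial // rows_per_day
print(days)                  # 21474 days, roughly 58 years
```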
Sean W.
(135 rep)
Aug 5, 2013, 08:46 PM
• Last activity: Aug 6, 2013, 06:00 PM
Showing page 1 of 19 total questions