
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
1202 views
Missing "public" schema when converting a "regclass" value to text
I am writing a script in which I need to parse the name of a table (as a `regclass`). The parsing (with `parse_ident()`) works so far. However, the script fails when the table is in the `public` schema, because PostgreSQL (10.3) automatically removes the schema name. For example, if a table `tt` is in a non-public schema `ex`, the text value of the regclass is the same as the original:

    => select 'ex.tt'::regclass::text;
     text
    -------
     ex.tt

When it's in `public`, the schema name is lost:

    => select 'public.tt'::regclass::text;
     text
    ------
     tt

Is there a way to disable this behavior, or to convert to text without losing the schema name?
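One possible workaround (a sketch, not necessarily the canonical answer): `regclass::text` omits any schema that is already on the current `search_path`, so instead of casting, the qualified name can be rebuilt from the system catalogs.

```sql
-- Sketch: always produce a schema-qualified name, regardless of search_path,
-- by joining pg_class to pg_namespace instead of casting regclass to text.
SELECT format('%I.%I', n.nspname, c.relname) AS qualified_name
FROM   pg_class     c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.oid = 'public.tt'::regclass;   -- 'tt' is the example table from the question
-- qualified_name: public.tt
```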
tinlyx (3820 rep)
May 26, 2018, 12:21 AM • Last activity: Apr 6, 2022, 08:15 PM
34 votes
4 answers
53028 views
Unquoting JSON strings; print JSON strings without quotes
SELECT json_array_elements('["one", "two"]'::json) gives result | json_array_elements | | :------------------ | | "one" | | "two" | I would like to have the same but without the quotes: one two Looks like I can't use [`->>`](https://stackoverflow.com/a/40928412/1024794) here because I don't have fie...
    SELECT json_array_elements('["one", "two"]'::json)

gives this result:

| json_array_elements |
| :------------------ |
| "one"               |
| "two"               |

I would like to have the same but without the quotes:

    one
    two

Looks like I can't use [`->>`](https://stackoverflow.com/a/40928412/1024794) here because I don't have field names in the JSON. It's just an array of strings.

Postgres version: PostgreSQL 10.0 on x86_64-apple-darwin, compiled by i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00), 64-bit
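A likely fit here (sketch): `json_array_elements_text()` is the text-returning twin of `json_array_elements()`, so each element comes back as plain `text` with no JSON quoting.

```sql
-- Sketch: the _text variant returns setof text instead of setof json,
-- so the surrounding quotes disappear.
SELECT json_array_elements_text('["one", "two"]'::json);
-- one
-- two
```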
Maxim Yefremov (465 rep)
May 27, 2018, 06:24 PM • Last activity: Nov 11, 2021, 04:10 PM
30 votes
1 answer
8446 views
What date/time literal formats are LANGUAGE and DATEFORMAT safe?
It is easy to demonstrate that many date/time formats *other* than the following two are vulnerable to misinterpretation due to SET LANGUAGE, SET DATEFORMAT, or a login's default language:

    yyyyMMdd                 -- unseparated, date only
    yyyy-MM-ddThh:mm:ss.fff  -- dash-separated date, date/time separated by T

Even this format, without the T, may *look* like a valid ISO 8601 format, but it fails in several languages:

    DECLARE @d varchar(32) = '2017-03-13 23:22:21.020';
    SET LANGUAGE Deutsch;
    SELECT CONVERT(datetime, @d);
    SET LANGUAGE Français;
    SELECT CONVERT(datetime, @d);

Results:

> Die Spracheneinstellung wurde auf Deutsch geändert.
>
> Msg 242, Level 16, State 3
> Bei der Konvertierung eines varchar-Datentyps in einen datetime-Datentyp liegt der Wert außerhalb des gültigen Bereichs.
>
> Le paramètre de langue est passé à Français.
>
> Msg 242, Level 16, State 3
> La conversion d'un type de données varchar en type de données datetime a créé une valeur hors limites.

Now, these fail as if, in English, I had transposed the month and day, formulating a date component of yyyy-dd-mm:

    DECLARE @d varchar(32) = '2017-13-03 23:22:21.020';
    SET LANGUAGE us_english;
    SELECT CONVERT(datetime, @d);

Result:

> Msg 242, Level 16, State 3
> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

(This isn't Microsoft Access, which is "nice" to you and fixes the transposition for you. Also, similar errors can happen in some cases with SET DATEFORMAT ydm; it isn't *just* a language thing, that's just the more common scenario where these breakages happen. They aren't always noticed, either, because sometimes they aren't errors; it's just that August 7th became July 8th and nobody noticed.)

So, the question: **Now that I know there are a bunch of unsafe formats, are there any other formats that will be safe given any language and dateformat combination?**
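A small demonstration sketch (not an exhaustive answer), showing that the two formats named at the top of the question parse the same way even after the language and date-format settings change:

```sql
-- Sketch: both "safe" formats survive SET LANGUAGE and SET DATEFORMAT,
-- unlike the space-separated variant shown above.
DECLARE @unseparated char(8)  = '20170313';
DECLARE @iso8601     char(23) = '2017-03-13T23:22:21.020';

SET LANGUAGE Français;
SET DATEFORMAT ydm;

SELECT CONVERT(datetime, @unseparated) AS from_unseparated,
       CONVERT(datetime, @iso8601)     AS from_iso8601;   -- both parse as March 13, 2017
```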
Aaron Bertrand (182010 rep)
Mar 10, 2017, 02:47 AM • Last activity: Aug 14, 2021, 09:16 PM
10 votes
2 answers
9142 views
Why isn't to_char IMMUTABLE, and how can I work around it?
How can I index a `to_char()` of a column? I have tried:

    adam_db=> CREATE INDEX updates_hourly_idx ON updates (to_char(update_time, 'YYYY-MM-DD HH24:00'));

But got the error:

> ERROR: functions in index expression must be marked IMMUTABLE

Which seems strange, since the `to_char()` of a timestamp is reasonably immutable. Any ideas how to generate that index?
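`to_char(timestamp, text)` is marked STABLE rather than IMMUTABLE because its output can depend on session settings (locale-aware templates, for instance), so PostgreSQL refuses it in an index expression. One common workaround, shown as a sketch and assuming `update_time` is a `timestamp` *without* time zone and that hourly bucketing is the real goal, is to index `date_trunc()` instead, which is IMMUTABLE for plain timestamps.

```sql
-- Sketch: date_trunc('hour', ...) is IMMUTABLE on timestamp without time zone,
-- so it is accepted in an index expression.
CREATE INDEX updates_hourly_idx ON updates (date_trunc('hour', update_time));

-- Queries must use the same expression for the index to be considered:
SELECT count(*)
FROM   updates
WHERE  date_trunc('hour', update_time) = timestamp '2014-09-22 13:00';
```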
Adam Matan (12079 rep)
Sep 22, 2014, 01:31 PM • Last activity: Jul 14, 2018, 12:39 AM
1 vote
1 answer
1084 views
PostgreSQL function's input/output as JavaScript nested objects
I am looking for a simple/clean way that can be used consistently to automatically pass JavaScript nested objects as input to user-defined PostgreSQL functions, and also to automatically get the output back as JavaScript nested objects. Shall I use nested composite types? JSON? Other ways? How do I convert to/from these types and JavaScript nested objects automatically? Any better thoughts? Thank you.
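A minimal sketch of the JSON route, assuming a driver such as node-postgres that serializes JavaScript objects to JSON on the way in and parses `jsonb` results back into nested objects on the way out; the function name and keys below are hypothetical.

```sql
-- Hypothetical example: take a nested object as jsonb and return jsonb,
-- so the JavaScript side only ever sees plain nested objects.
CREATE FUNCTION tag_processed(doc jsonb)
RETURNS jsonb
LANGUAGE sql
AS $$
    SELECT jsonb_set(doc, '{processed}', 'true'::jsonb)
$$;

SELECT tag_processed('{"name": "Ada", "address": {"city": "London"}}'::jsonb);
-- {"name": "Ada", "address": {"city": "London"}, "processed": true}
```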
geeko (41 rep)
Jul 2, 2017, 12:16 PM • Last activity: Jul 2, 2017, 05:28 PM
14 votes
1 answer
6915 views
Why is to_char left padding with spaces?
Whenever I use `099`, indicating 3 digits zero-padded, I get spaces on the left side:

    SELECT '>' || to_char(1, '099') || '<';
     ?column?
    ----------
     > 001<
    (1 row)

Why is `to_char` left padding here? Why are there leading spaces?
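The likely explanation (sketch): numeric `to_char()` templates reserve one leading character for a minus sign, and the `FM` ("fill mode") prefix suppresses that padding.

```sql
-- Sketch: FM removes the slot that to_char reserves for the sign.
SELECT '>' || to_char(1,  'FM099') || '<';  -- >001<
SELECT '>' || to_char(-1, '099')   || '<';  -- >-001<  (the "space" is where the sign goes)
```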
Evan Carroll (65502 rep)
Jun 8, 2017, 07:50 PM • Last activity: Jun 8, 2017, 08:39 PM
0 votes
1 answer
124 views
How to get one complete column in mysql?
I have a script that fetches `Last_Error` on a MySQL slave at a particular interval and writes it to a file. To fetch `Last_Error`, I am currently using the command below in the script:

    mysql -u root -pxxxxxxxxx -e "show slave status\G;" 2>&1 | grep -v "Warning: Using a password" | grep "Last_Error"

The challenge I am facing is that `grep "Last_Error"` will only show that particular line and not the complete error. For example, one of my `Last_Error` values looks like this:

    Last_Error: Error 'TRIGGER command denied to user 'user'@'xx.xx.xx.xx' for table 'table_name'' on query. Default database: 'Db_name'. Query: 'UPDATE table SET user_status=0 WHERE contractid ='id.000.000''

and another looks like this:

    Last_Error: Error 'Duplicate entry '117583' for key 'PRIMARY'' on query. Default database: 'labs'. Query: 'insert into User (a,b,c,d) values ("Other","",now(),"")

So basically I can't use a fixed `-A`-style context switch with grep. Is it possible to get one complete column in MySQL, or is there any other workaround?
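If the server is MySQL 5.7 or later, one possible alternative (a sketch, untested against this particular setup) is to read the error straight from `performance_schema` instead of screen-scraping `SHOW SLAVE STATUS`, so the full message arrives as a single column value:

```sql
-- Sketch (MySQL 5.7+): the applier error is exposed as ordinary columns,
-- so no grep is needed and the message is never cut off at one line.
SELECT LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
FROM   performance_schema.replication_applier_status_by_worker;
```

From the script, the same query can be run with `mysql -N -e "..."` so that only the values themselves are written to the file.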
Panda (101 rep)
Apr 21, 2017, 07:01 PM • Last activity: Apr 21, 2017, 07:56 PM
2 votes
1 answer
63 views
Which databases provide string pooling?
Suppose I have many data objects that share very many identical strings (e.g. tags, URLs, "DEBUG"/"TRACE"/"INFO", etc.). Which databases (if any) pool identical strings internally to prevent them from being copied many times?
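As context for the question, conventional row stores generally do not intern identical strings across rows (column stores with dictionary encoding are the main exception), so the usual manual substitute is a dictionary table referenced by ID. A generic sketch:

```sql
-- Sketch: store each distinct string once and reference it by ID,
-- which is the hand-rolled equivalent of string pooling.
CREATE TABLE tag (
    tag_id   integer      PRIMARY KEY,
    tag_text varchar(255) NOT NULL UNIQUE
);

CREATE TABLE log_event (
    event_id integer PRIMARY KEY,
    tag_id   integer NOT NULL REFERENCES tag (tag_id)  -- shared string stored once
);
```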
user48956 (171 rep)
Feb 11, 2016, 12:23 AM • Last activity: Feb 11, 2016, 01:04 AM
4 votes
1 answer
533 views
How is encoding done in SQL database tables for pattern matching?
Are strings in table columns represented as bit patterns or Unicode? Does the database engine bring them into memory in a different character-set representation from the disk-persisted representation for optimal performance? I want to know for the purposes of index creation and optimal query writing.

Select statements pull data that matches a where clause. Here is an example query:

    select name from carTable where name = 'FordEscort97'

Here is the carTable:

    name          columnA  columnB  columnC
    FordEscort91
    FordEscort92
    FordEscort93
    FordEscort94
    FordEscort95
    FordEscort96
    FordEscort97
    FordEscort98

Would the queries perform faster if the column were designed to have the year at the left of the stored strings (assuming the year component of the string is the most unique part)?

If there is a character-by-character, left-to-right matching process, that process could be more resource-intensive than comparing bit-pattern hashes of the strings. If the pattern being matched (the anchor string in the where clause) were a substring with a wildcard, each comparison would be more CPU-intensive, because characters that start out matching the wildcard in a leftmost portion of the string may be followed by a second potential match starting late in the string. Early recognition of a non-match would obviate the need for as much evaluation. But I don't know for sure that there is a character-by-character pattern match.

There is an implication for creating indexes on unique columns. While the likelihood of uniqueness of strings in a column should be considered, the layout of the data in the column *can* be chosen arbitrarily. Could an index provide a greater benefit if the column's uniqueness occurred mostly within the leftmost characters rather than the rightmost characters?

Does it depend on which SQL database you are using (e.g., Oracle, MySQL, Postgres, etc.)? Sometimes underlying data is stored in hexadecimal format or with bit patterns.
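One concrete way to see the leftmost-character effect (a sketch; exact behavior varies by engine, and in PostgreSQL a prefix `LIKE` additionally needs a suitable operator class or collation): a B-tree index keeps values in sorted order and compares strings left to right, so a prefix predicate can range-scan the index, while a leading wildcard cannot.

```sql
-- Sketch: selectivity in the leading characters is what a B-tree can exploit.
CREATE INDEX idx_car_name ON carTable (name);

SELECT name FROM carTable WHERE name = 'FordEscort97';     -- seek on the full value
SELECT name FROM carTable WHERE name LIKE 'FordEscort9%';  -- prefix: index range scan possible
SELECT name FROM carTable WHERE name LIKE '%Escort97';     -- leading wildcard: full scan
```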
Yousef (41 rep)
May 23, 2015, 08:03 PM • Last activity: Oct 6, 2015, 03:33 AM