Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
8
votes
2
answers
16119
views
varchar(MAX) text cuts off while going more than 8000 characters inside a procedure
I have a procedure in which I have declared a variable with datatype varchar(MAX):
declare @str varchar(MAX);
set @str='select * from Table1...'
print (@str);
exec (@str);
But when the text inside @str goes above 8000 characters (e.g. 8193 characters), the string gets cut off: it prints only 8000 characters, and executing it of course gives an error.
I have tried two solutions after some searching, but neither works:
1) Using two variables of the same type, varchar(MAX), and concatenating them at execution time, but that doesn't work.
2) Casting both variables to varchar(MAX) again before concatenating them and then executing, but that also doesn't work.
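For reference, a minimal sketch of the usual cause, assuming the string is built from literals: string literals default to varchar(8000), so at least one operand must be cast to varchar(MAX) for the concatenation to stay varchar(MAX); PRINT also only shows the first 8000 characters of a varchar, which makes the variable look truncated even when it is not.

```sql
DECLARE @str varchar(MAX);

-- Cast before concatenating: without the CASTs the expression is evaluated
-- as varchar(8000) and silently truncated before it ever reaches @str.
SET @str = CAST('select name from sys.objects where name <> ''' AS varchar(MAX))
         + REPLICATE(CAST('x' AS varchar(MAX)), 9000)
         + '''';

SELECT LEN(@str) AS total_length;  -- well over 8000, so nothing was lost
EXEC (@str);                       -- executes fine; only PRINT output is capped
```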
Priyank Pahuja
(83 rep)
Aug 25, 2017, 06:29 PM
• Last activity: Aug 15, 2024, 05:59 AM
0
votes
1
answers
241
views
What is the idiomatic way to truncate a string down to 100 byte-pairs?
`nvarchar(100)` holds 100 **byte-pairs**, as [documented here](https://learn.microsoft.com/en-us/sql/t-sql/data-types/nchar-and-nvarchar-transact-sql?view=sql-server-ver16#remarks). `LEFT([Your String], 100)` truncates your string down to 100 **characters**, as [documented here](https://learn.microsoft.com/en-us/sql/t-sql/functions/left-transact-sql?view=sql-server-ver16). These are not the same.
Foolishly, I have used `LEFT([Your String], 100)` in the hope of keeping `[Your String]` within `nvarchar(100)`. What are the idiomatic approaches to solving this problem correctly? I considered using `CAST`, but I'm uncomfortable with implicitly truncating a string.
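As an aside, a minimal sketch of the `CAST`-based route mentioned above, assuming silent truncation at the type boundary is acceptable: converting through the target type cuts at 100 byte-pairs, whereas `LEFT` counts characters, which can exceed that limit when supplementary characters (surrogate pairs) are involved under an `_SC` collation.

```sql
-- Truncate to at most 100 byte-pairs by converting through the target type.
DECLARE @input nvarchar(max) = N'... a long string, possibly containing surrogate pairs ...';
DECLARE @truncated nvarchar(100) = CAST(@input AS nvarchar(100));  -- silent cut at 100 byte-pairs

SELECT @truncated, DATALENGTH(@truncated) AS bytes;  -- never more than 200 bytes
```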
J. Mini
(1237 rep)
Jan 5, 2024, 07:21 PM
• Last activity: Jan 7, 2024, 03:37 AM
13
votes
4
answers
69177
views
Find if any of the rows partially match a string
I want to see if a table contains any sub-string of a given string.
Let's say I have a string
somedomain.com
In the database I have:
blabladomain.com
testdomain.com
domain.com
I need to make a query that will return "domain.com" as it is a substring of "somedomain.com".
I don't know if this is even possible in MySQL.
CREATE TABLE site_core_antispam_banned_domain (
    domain_id int(11) NOT NULL AUTO_INCREMENT,
    domain_name varchar(255) NOT NULL,
    PRIMARY KEY (domain_id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
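For what it's worth, a minimal sketch of one way this can be done in MySQL: flip the comparison around and use each stored domain as the pattern.

```sql
-- Rows whose domain_name occurs anywhere inside the given string.
SELECT domain_name
FROM site_core_antispam_banned_domain
WHERE 'somedomain.com' LIKE CONCAT('%', domain_name, '%');

-- Or, equivalently:
-- WHERE LOCATE(domain_name, 'somedomain.com') > 0
```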
Emanuel
(263 rep)
Apr 6, 2018, 05:45 AM
• Last activity: Jun 7, 2023, 09:01 AM
1
votes
1
answers
2152
views
When does Oracle still treat empty strings as NULL?
I was looking for the difference between `VARCHAR` and `VARCHAR2` and learned, among other things, that Oracle's `VARCHAR2` treats empty strings as `NULL`, or the other way round. Also, apparently I should be using `VARCHAR2`, and `VARCHAR` is reserved for the future.
In my blissful ignorance, I have a table where I created a column using `VARCHAR`, and I certainly don't get the same:
SELECT * FROM customers WHERE state='';
SELECT * FROM customers WHERE state IS NULL; -- not the same
I’m using Version 18.0.0.0.0, so it wouldn’t surprise me to find that the future they were talking about has finally arrived. All the answers I have found mentioning this are pretty old.
Here is a fiddle: https://dbfiddle.uk/yeWEbcDF
I tested in 11g, 18c and 21c.
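For context, a quick check of the behaviour I would expect from Oracle (all versions I am aware of, 18c included): a zero-length string literal is stored as NULL for both column types, so the `= ''` predicate can never be true.

```sql
-- Empty-string literals become NULL for both VARCHAR and VARCHAR2 columns.
CREATE TABLE t (a VARCHAR(10), b VARCHAR2(10));
INSERT INTO t VALUES ('', '');

SELECT COUNT(*) FROM t WHERE a IS NULL AND b IS NULL;  -- 1
SELECT COUNT(*) FROM t WHERE a = '';                   -- 0: the comparison is never true
```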
Manngo
(3145 rep)
Mar 17, 2023, 11:07 PM
• Last activity: Mar 19, 2023, 07:12 AM
5
votes
3
answers
4343
views
Equivalent to PostgreSQL's Dollar-quoted String Constants for SQL Server
Is there an equivalent of PostgreSQL's [Dollar-quoted String Constants](https://www.postgresql.org/docs/9.2/static/sql-syntax-lexical.html) on SQL Server?
I would like to enter HTML string literals that would potentially contain single or double quotes in them.
Example:
UPDATE table_name
SET column_name = $$
Here's a string that contains "double quotes".
$$
WHERE condition = true
**Edit:**
The reason I'm asking is that I would like to update some rather big HTML "web parts" that are stored in the database (yuck!), because at work we use that kind of CMS (which I won't name). So I don't want to have to escape my single quotes every time; that's why I'm asking about that kind of feature. If I understand correctly, [QUOTENAME](https://learn.microsoft.com/en-us/sql/t-sql/functions/quotename-transact-sql) would still require me to double my single quotes.
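For comparison, a minimal sketch of what T-SQL requires today, since it has no dollar-quoting: single quotes inside the literal are doubled, while double quotes need no escaping (the filter column is hypothetical).

```sql
UPDATE table_name
SET column_name = 'Here''s a string that contains "double quotes".'
WHERE some_flag = 1;   -- hypothetical filter; T-SQL has no boolean literal TRUE
```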
Julien
(151 rep)
Nov 17, 2017, 05:18 PM
• Last activity: Feb 3, 2023, 09:22 PM
0
votes
2
answers
354
views
How to convert getdate() stored as varchar to date
I have stored 'getdate()' as text in one of the columns in a table. (It's a complicated scenario, how I saved it.)
Now when I select the field, what I get is the literal text getdate() instead of the date. How can I convert this to a datetime?
I am adding sample code to illustrate this:
declare @a varchar(20)='getdate()'
select convert(datetime,@a)
This results in a conversion error. How can I convert this to a date?
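One possible approach, assuming the column really holds the text of an expression: run that text as dynamic SQL to turn it into a value. A minimal sketch:

```sql
DECLARE @a varchar(20) = 'getdate()';
DECLARE @result datetime;
DECLARE @sql nvarchar(200) = N'SELECT @r = ' + @a + N';';

-- Run the stored expression and capture its value via an OUTPUT parameter.
EXEC sp_executesql @sql, N'@r datetime OUTPUT', @r = @result OUTPUT;

SELECT @result AS converted_value;  -- the current date and time
```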
Viz Krishna
(109 rep)
Jan 23, 2023, 10:40 AM
• Last activity: Jan 23, 2023, 11:50 AM
1
votes
0
answers
64
views
How does mysql_real_escape_string() work?
I have to send a string (in C) as a query to MySQL, so I used mysql_real_escape_string() to escape some characters like \0 or \n:
#include <mysql/mysql.h>
#include <string.h>
#include <stdio.h>

int loginQuery(char *Nickname, char *Password)
{
    char bufferutility[READBUFSIZE] = "SELECT * FROM user WHERE Nickname='";
    char bufferutility2[READBUFSIZE*2+1];

    strcat(bufferutility, Nickname);
    strcat(bufferutility, "' AND Password='");
    strcat(bufferutility, Password);
    strcat(bufferutility, "';");

    if (mysql_real_escape_string(conn, bufferutility2, bufferutility, strlen(bufferutility)) == (unsigned long)-1) {
        printf("\nEscaping error\n");
    }

    //code for mysql_real_query() here
}
But I got this error:
> You have an error in your SQL syntax; check the manual that
> corresponds to your MariaDB server version for the right syntax to use
> near '\'Hello\n\' AND Password=\'World\n\'' at line 1
What have I done wrong?
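For context, a minimal sketch of the usual pattern (buffer sizes and names are illustrative, not from the original code): each user-supplied value is escaped on its own and then spliced into the SQL text, rather than escaping the finished query, which also escapes the quotes that belong to the SQL syntax.

```c
#include <mysql/mysql.h>
#include <stdio.h>
#include <string.h>

int loginQuery(MYSQL *conn, const char *Nickname, const char *Password)
{
    char escNick[2 * 64 + 1];   /* worst case: every byte escaped, plus NUL (64 is an assumed max length) */
    char escPass[2 * 64 + 1];
    char query[512];

    /* Escape only the user-supplied values. */
    mysql_real_escape_string(conn, escNick, Nickname, strlen(Nickname));
    mysql_real_escape_string(conn, escPass, Password, strlen(Password));

    /* Splice the escaped copies into the query text. */
    snprintf(query, sizeof(query),
             "SELECT * FROM user WHERE Nickname='%s' AND Password='%s';",
             escNick, escPass);

    return mysql_real_query(conn, query, strlen(query));
}
```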
SempriGno
(11 rep)
Dec 19, 2022, 09:26 PM
2
votes
2
answers
1654
views
How to use replace text function in a column using another table as a "lookup table" reference?
Here are some examples to show what I'm after. The source table (the one whose string values will be changed):
| string |
|-----------------------|
| abc${hello}123 |
| def${yolo}321 |
| lol${hello}wow${yolo} |
And here is the lookup table, the one that will be used to replace the values:
| source | replacement |
|----------|-------------|
| ${hello} | :) |
| ${yolo} | :0 |
And this is the result I want:
| replaced |
|------------|
| abc:)123 |
| def:0321 |
| lol:)wow:0 |
I tried something like this, but it's not returning what I expect:
WITH replacements AS(
SELECT *
FROM (
VALUES
('${hello}', ':)'),
('${yolo}', ':0')
) AS sr(source,replacement)
),
strings AS(
SELECT *
FROM (
VALUES
('abc${hello}123'),
('def${yolo}321'),
('lol${hello}wow${yolo}')
) s(string)
)
SELECT replaced
FROM strings,
LATERAL (
SELECT replace(string, source, replacement) replaced
FROM replacements
) aux
;
It returns:
| replaced |
|------------------|
| abc:)123 |
| abc${hello}123 |
| def${yolo}321 |
| def:0321 |
| lol:)wow${yolo} |
| lol${hello}wow:0 |
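A minimal sketch of one way to get the desired result, assuming the replacements can be applied one after another: number the replacements, then walk them with a recursive CTE so each step feeds the next, and keep only the final step.

```sql
WITH RECURSIVE
replacements AS (
    SELECT row_number() OVER () AS rn, source, replacement
    FROM (VALUES ('${hello}', ':)'), ('${yolo}', ':0')) AS sr(source, replacement)
),
strings AS (
    SELECT *
    FROM (VALUES ('abc${hello}123'), ('def${yolo}321'), ('lol${hello}wow${yolo}')) AS s(string)
),
applied AS (
    -- step 0: the untouched strings; step n: replacement n applied to step n-1
    SELECT string AS replaced, 0::bigint AS rn FROM strings
    UNION ALL
    SELECT replace(a.replaced, r.source, r.replacement), r.rn
    FROM applied a
    JOIN replacements r ON r.rn = a.rn + 1
)
SELECT replaced
FROM applied
WHERE rn = (SELECT max(rn) FROM replacements);   -- keep only the fully replaced strings
```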
Tiago Stapenhorst
(237 rep)
Nov 4, 2022, 01:07 AM
• Last activity: Nov 5, 2022, 11:21 AM
1
votes
1
answers
1097
views
How to prevent CONCAT function adding space in Postgresql?
Postgresql-11
select tick_time, nano_secs,
concat( to_char(tick_time, 'MMDD HH24:MI:SS.US'),
to_char(nano_secs, '000') )
from ticks
order by tick_time, nano_secs limit 100;
I want to concat the two strings as above, but it always adds a space character between them.
How do I prevent it from doing that?
Thanks!
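For reference, a quick illustration of where the space actually comes from: to_char() with a numeric format reserves a leading column for the sign, and the FM modifier suppresses it; concat() itself adds nothing.

```sql
SELECT to_char(42, '000')   AS with_sign_column,  -- ' 042' (leading space reserved for the sign)
       to_char(42, 'FM000') AS without_it;        -- '042'
```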
Leon
(411 rep)
Sep 16, 2022, 05:59 AM
• Last activity: Sep 16, 2022, 06:56 AM
0
votes
1
answers
1518
views
Cannot Insert it into in my table for Special characters when ANSI_WARNINGS is Off even with nvarchar
I want to insert a data row into my table named **DSKWOR00**, but I get this error:
> String or binary data would be truncated.
So I turned ANSI_WARNINGS OFF and the row was inserted, but the result **wasn't my values**.
Here is my T-SQL:
**1** - Create my table:
CREATE TABLE [dbo].[DSKWOR00](
    [DSW_ID] [nvarchar](10) NULL,
    [DSW_YY] [float] NOT NULL,
    [DSW_MM] [float] NOT NULL,
    [DSW_LISTNO] [nvarchar](12) NULL,
    [DSW_ID1] [nvarchar](8) NULL,
    [DSW_FNAME] [nvarchar](60) NULL,
    [DSW_LNAME] [nvarchar](60) NULL,
    [DSW_DNAME] [nvarchar](60) NULL,
    [DSW_IDNO] [nvarchar](15) NULL,
    [DSW_IDPLC] [nvarchar](30) NULL,
    [DSW_IDATE] [nvarchar](8) NULL,
    [DSW_BDATE] [nvarchar](8) NULL,
    [DSW_SEX] [nvarchar](3) NULL,
    [DSW_NAT] [nvarchar](10) NULL,
    [DSW_OCP] [nvarchar](50) NULL,
    [DSW_SDATE] [nvarchar](8) NULL,
    [DSW_EDATE] [nvarchar](8) NULL,
    [DSW_DD] [float] NULL,
    [DSW_ROOZ] [float] NULL,
    [DSW_MAH] [float] NULL,
    [DSW_MAZ] [float] NULL,
    [DSW_MASH] [float] NULL,
    [DSW_TOTL] [float] NULL,
    [DSW_BIME] [float] NULL,
    [DSW_PRATE] [float] NULL,
    [DSW_JOB] [nvarchar](6) NULL,
    [PER_NATCOD] [nvarchar](10) NOT NULL,
    CONSTRAINT [PK_DSKWOR00] PRIMARY KEY CLUSTERED ([DSW_YY] ASC, [DSW_MM] ASC, [PER_NATCOD] ASC) ON [PRIMARY]
) ON [PRIMARY]
**2** - Insert a row into my table:
INSERT INTO dbo.DSKWOR00
(DSW_ID,DSW_YY,DSW_MM,DSW_LISTNO,DSW_ID1,DSW_FNAME,DSW_LNAME,DSW_DNAME,DSW_IDNO,DSW_IDPLC,DSW_IDATE,DSW_BDATE,DSW_SEX,DSW_NAT,DSW_OCP,DSW_SDATE,DSW_EDATE,DSW_DD,DSW_ROOZ,DSW_MAH,DSW_MAZ,DSW_MASH,DSW_TOTL,DSW_BIME,DSW_PRATE,DSW_JOB,PER_NATCOD)
VALUES
('6773790024',1401,1,1,'80079349','¤ç¬ üóä','ö‘þõþîž','‘®¤þóä','1011','¢¥þ','0','¢¤õ','ü÷¤þ',' ñõ‘ä ¤þ¢õ',0,0,'0','0','0','0','0','0','0','0',0,'OZ0509','4430928460')
**3** - The error I get if I insert with ANSI_WARNINGS ON:
Msg 8152, Level 16, State 30, Line 3
String or binary data would be truncated.
-- Note: I believe my insert is correct; I don't know why I'm seeing this error
**4** - So, to work around the error, I did this:
SET ANSI_WARNINGS OFF
INSERT INTO dbo.DSKWOR00
(DSW_ID,DSW_YY,DSW_MM,DSW_LISTNO,DSW_ID1,DSW_FNAME,DSW_LNAME,DSW_DNAME,DSW_IDNO,DSW_IDPLC,DSW_IDATE,DSW_BDATE,DSW_SEX,DSW_NAT,DSW_OCP,DSW_SDATE,DSW_EDATE,DSW_DD,DSW_ROOZ,DSW_MAH,DSW_MAZ,DSW_MASH,DSW_TOTL,DSW_BIME,DSW_PRATE,DSW_JOB,PER_NATCOD)
VALUES
('6773790024',1401,1,1,'80079349','¤ç¬ üóä','ö‘þõþîž','‘®¤þóä','1011','¢¥þ','0','¢¤õ','ü÷¤þ',' ñõ‘ä ¤þ¢õ',0,0,'0','0','0','0','0','0','0','0',0,'OZ0509','4430928460')
SET ANSI_WARNINGS ON
**5** - **Finally**, the row is inserted, but it does not contain the values I supplied!
---- My values in the INSERT: ('6773790034',1401,1,1,'80069349','¤ç¬ üóä','ö‘þõþîž','‘®¤þóä','1011','¢¥þ','0','¢¤õ','ü÷¤þ',' ñõ‘ä ¤þ¢õ',0,0,'0','0','0','0','0','0','0','0',0,'OZ0509','4430928444')
---- What was actually inserted: 6773790034 1401 1 1 80069349 ¤ç¬? ü?? ?‘???î? ‘®¤??? 1011 ¢¥? 0 ¢¤? ü÷? ??‘? ¤?¢? 0 0 0 0 0 0 0 0 0 0 0 OZ0509 4430928444
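A guess at part of what is going on, for illustration only: without the N prefix the literals are varchar, so the non-ASCII characters pass through the database's code page before reaching the nvarchar columns, which can both mangle them and change their length (triggering the truncation warning). A minimal Unicode-safe insert, using only the NOT NULL columns plus one name column:

```sql
INSERT INTO dbo.DSKWOR00 (DSW_YY, DSW_MM, PER_NATCOD, DSW_FNAME)
VALUES (1401, 1, N'4430928460', N'¤ç¬ üóä');   -- N'...' keeps the literals as Unicode
```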
مجتبی حکیمیان
(25 rep)
Aug 18, 2022, 05:26 PM
• Last activity: Aug 18, 2022, 06:59 PM
0
votes
1
answers
805
views
Convert from literal HEX value to INT in MariaDB
I have a situation where I need to get the integer value from a literal hex string.
Example: "ff" equals 255 in decimal, but if I use the HEX("ff") function it returns 6666.
Is it possible to get it to return 255? Like a=10, b=11... and so on.
Thank you
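For reference, a quick illustration of the distinction: HEX() encodes the bytes of the string itself ('f' is 0x66), while CONV() interprets the string as a number in a given base.

```sql
SELECT HEX('ff');                              -- '6666' (the bytes of the text)
SELECT CONV('ff', 16, 10);                     -- '255' as a string
SELECT CAST(CONV('ff', 16, 10) AS UNSIGNED);   -- 255 as an integer
```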
Adelina Andreea trandafir
(3 rep)
Jun 5, 2022, 02:01 PM
• Last activity: Jun 5, 2022, 02:57 PM
13
votes
2
answers
3995
views
Is there an efficient way to see the cause for "String or binary data would be truncated"?
This is a follow-up on this question. It is also related to this feature request from Microsoft.
However, many years have passed and several major releases have reached the market since it was reported.
**Question:** does SQL Server 2017 provide any mechanism to make it easier to find the root cause of this error? Or is it as hard to investigate as it was about 9 years ago when the issue was reported?
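As a note on what later servicing added (reportedly SQL Server 2017 CU12 onward, so not something the original question could assume): trace flag 460 replaces message 8152 with message 2628, which names the table, column and offending value, and in SQL Server 2019 the new message is the default.

```sql
DBCC TRACEON (460);   -- session-level; requires SQL Server 2017 CU12 or later
-- ... run the failing INSERT/UPDATE here to get the detailed message 2628 ...
DBCC TRACEOFF (460);
```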
Alexei
(1191 rep)
Jan 25, 2018, 01:09 PM
• Last activity: Oct 28, 2021, 08:20 PM
5
votes
1
answers
224
views
Match .csv values as INTs. If one value from a group in the .csv matches one in another group, then merge strings
Here we have two sets of numbers. The problem is that I can't figure out how to get from the input to the expected output (DDL and DML below, and also in the fiddle here).
current_data
1,2,3
1,4
1,5,7
8,9,10
10,11,15
expected_outcome
1,2,3,4,5,7
8,9,10,11,15
We are simply trying to match a group of numbers based on whether a single number matches one in any other group, and then merge all of those groups.
For instance, if we have:
('1,2,3'),
('1,4'),
('1,5,7')
we want them consolidated into a single row in PostgreSQL:
(1,2,3,4,5,7)
or (another example):
('8,9,10'),
('10,11,15')
desired output:
(8,9,10,11,15)
The query would group these numbers because they have the number 10 in common, but it wouldn't group them with the previous row (i.e. (1,2,3,4,5,7)), since they don't share a number.
Once these groups are in a table, they will only be merged if they have at least one matching number between them.
======== DDL and DML =============
create table current (current_data text not null);
create table expected_output (expected_outcome text not null);
insert into current (current_data) values ('1,2,3'),('1,4'),('1,5,7'),('8,9,10'), ('10,11,15');
insert into expected_output (expected_outcome) values ('1,2,3,4,5,7'),('8,9,10,11,15');
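A sketch of one possible approach in PostgreSQL (nothing below is from the original post): treat each row as a node, connect rows whose number arrays overlap, walk the graph with a recursive CTE to label connected components, then aggregate each component's numbers.

```sql
WITH RECURSIVE
nodes AS (                      -- one node per row, with its numbers as an int[]
    SELECT row_number() OVER () AS id,
           string_to_array(current_data, ',')::int[] AS nums
    FROM current
),
edges AS (                      -- rows are connected when they share a number
    SELECT a.id AS src, b.id AS dst
    FROM nodes a
    JOIN nodes b ON a.nums && b.nums
),
walk AS (                       -- everything reachable from each starting row
    SELECT id AS start, id AS reached FROM nodes
    UNION
    SELECT w.start, e.dst
    FROM walk w
    JOIN edges e ON e.src = w.reached
),
components AS (                 -- label each row with the smallest reachable id
    SELECT reached AS id, min(start) AS component
    FROM walk
    GROUP BY reached
)
SELECT string_agg(n::text, ',' ORDER BY n) AS merged
FROM (
    SELECT DISTINCT c.component, u.n
    FROM components c
    JOIN nodes USING (id)
    CROSS JOIN LATERAL unnest(nodes.nums) AS u(n)
) d
GROUP BY component;
```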
user3050153
(67 rep)
Aug 16, 2021, 05:08 PM
• Last activity: Aug 29, 2021, 04:46 PM
0
votes
2
answers
960
views
Indexing on a fixed-length string (10 characters - phone numbers) for a large number of entries (performance)?
I understand that a lot of similar questions have already been asked but I could not find a definitive answer for my question.
Briefly,
- I have to authenticate users using a REST API, with their phone numbers.
- Phone numbers are fixed length (10) and can be stored in any way suggested.
- In a table with thousands of phone numbers and the associated **user_id**, I wish to be able to rapidly query the database and obtain the **user_id** associated with a given phone number.
- I read that indexing on strings or random integers is bad.
How should I structure the database to query on phone numbers most effectively?
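A minimal sketch of the straightforward structure, in generic SQL with hypothetical names since no engine or schema was given: a fixed-length text column with a primary-key (or unique) index turns the lookup into a single index seek, which scales well past thousands of rows.

```sql
CREATE TABLE phone_auth (
    phone   CHAR(10) NOT NULL,   -- fixed-length number, stored as text to keep leading zeros
    user_id BIGINT   NOT NULL,
    PRIMARY KEY (phone)          -- the B-tree behind the key serves the lookup
);

-- The authentication lookup: an index seek on the primary key.
SELECT user_id FROM phone_auth WHERE phone = '5551234567';
```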
gkm
(9 rep)
Aug 14, 2021, 07:07 AM
• Last activity: Aug 26, 2021, 10:46 PM
0
votes
1
answers
1622
views
Calculating Average Value of JSONB array in Postgres
I have a column called "value" in my "answers" table. ``` | value | |---------| | [1,2] | | [1] | | [1,2,3] | ``` The type of "value" is "jsonb". I want to get the average value of each array in each row: ``` SELECT avg(value) AS avg_value FROM answers ``` But this doesn't work because avg() is not...
I have a column called "value" in my "answers" table.
| value   |
|---------|
| [1,2]   |
| [1]     |
| [1,2,3] |
The type of "value" is "jsonb".
I want to get the average value of each array in each row:
SELECT avg(value) AS avg_value
FROM answers
But this doesn't work because avg() is not a jsonb function. I've tried:
SELECT avg(value::integer[]) as avg_value
FROM answers
i.e. I tried to cast the jsonb arrays into integer arrays and then take the avg, but I get the following error: "cannot cast type jsonb to integer[]. null".
Any ideas?
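A sketch of one way to do this, with ctid standing in for a real key column (the post only shows the value column): expand each array with jsonb_array_elements() and average the elements per row. The direct jsonb-to-numeric cast needs PostgreSQL 11 or later; on older versions go through ::text first.

```sql
SELECT a.ctid, avg(e.elem::numeric) AS avg_value
FROM answers a
CROSS JOIN LATERAL jsonb_array_elements(a.value) AS e(elem)
GROUP BY a.ctid;
```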
user236222
Jul 30, 2021, 09:44 AM
• Last activity: Jul 30, 2021, 12:00 PM
7
votes
1
answers
2087
views
text column compares equal to where clause but does not select matching row
We are having trouble when querying a table in our production database. One text column will compare equal to a string we filter on in the where clause, but postgres will not select the row.
(We are on postgres 11.11)
Our table set up like this:
(PROD)=> \d names;
Table "public.names"
Column | Type | Collation | Nullable | Default
----------------------+-----------------------------+-----------+----------+---------
name | text | | not null |
processed_name | text | | not null |
name_index | integer | | not null |
when_created | timestamp without time zone | | not null |
Indexes:
"names_pkey" PRIMARY KEY, btree (name, processed_name)
"names_name_index_key" UNIQUE CONSTRAINT, btree (name_index)
"ix_names_name" btree (name)
"ix_names_processed_name" btree (processed_name)
When we process a list of names we check if they are already in the table in order to prevent double adding and violating the primary key constraint.
However, for one name, 'Сергей Иванович МЕНЯЙЛО', the query that checks whether the name is already present returns an empty set.
I would expect to get back the row with the same name. However, when we attempt to insert the row into the table we get a primary key conflict.
Here are some queries which might explain the problem better:
(PROD)=> SELECT name_index,
name,
name = 'Сергей Иванович МЕНЯЙЛО' names_compare_equal
FROM names where name_index = 75128;
name_index | name | names_compare_equal
----------------------+-------------------------+---------------------
75128 | Сергей Иванович МЕНЯЙЛО | t
(1 row)
However filtering instead on the name column selects no rows.
(PROD)=> SELECT name_index,
name,
name = 'Сергей Иванович МЕНЯЙЛО'
names_compare_equal
FROM names
WHERE name = 'Сергей Иванович МЕНЯЙЛО';
name_index | name | names_compare_equal
----------------------+------+---------------------
(0 rows)
So then if we try and insert the row we get a primary key conflict:
(PROD)>=> INSERT INTO names (name_index, name, processed_name, when_created)
VALUES (89266, 'Сергей Иванович МЕНЯЙЛО', lower('Сергей Иванович МЕНЯЙЛО'), now());
ERROR: duplicate key value violates unique constraint "names_pkey"
DETAIL: Key (name, processed_name)=(Сергей Иванович МЕНЯЙЛО, сергей иванович меняйло) already exists.
What's more, if I query based on the hash of the rows I get the correct result:
(PROD)=> SELECT name_index,
name,
name = 'Сергей Иванович МЕНЯЙЛО' names_compare_equal
FROM names
WHERE md5(name) = md5('Сергей Иванович МЕНЯЙЛО');
name_index | name | names_compare_equal
----------------------+-------------------------+---------------------
75128 | Сергей Иванович МЕНЯЙЛО | t
(1 row)
This only happens on our production database - which has the following encoding settings
List of databases
Name | Owner | Encoding | Collate | Ctype |
----------------+----------------+----------+-------------+-------------+
PROD DB | PROD DB OWNER | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
This is pretty baffling to me, so any ideas about what to check next would be helpful.
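One possible next step, not from the original post: the equality filter can use ix_names_name while the md5() filter has no matching index and forces a scan, so a corrupted or collation-inconsistent index (for example after a glibc upgrade) is a prime suspect. Comparing plans and rebuilding the index are cheap checks.

```sql
EXPLAIN SELECT * FROM names WHERE name = 'Сергей Иванович МЕНЯЙЛО';  -- does it use ix_names_name?

REINDEX INDEX ix_names_name;   -- REINDEX ... CONCURRENTLY needs PostgreSQL 12+; this server is on 11
```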
Nate
(73 rep)
May 24, 2021, 09:02 PM
• Last activity: May 26, 2021, 10:32 AM
0
votes
1
answers
558
views
What exactly is the datatype input into a DATETIME value in MySQL
I am trying to get data from a PLC directly into a MySQL 8.0 database (using MySQL Workbench). I need to know what data type the input value is for a column set to `DATETIME` so I can send the right format.
**Actual question: I assume it's a string, because when I manually input values using an `INSERT`, the value has to be in ' '. Is this a stupid assumption on my part?**
When the output of my PLC is a string and the datatype of my column is `DATETIME`, what I read from a `SELECT` in my DB is completely wrong: '1905-07-13 00:00:00' instead of '2021-04-30 08:51:30'. However, if I write it into a DB column of type `VARCHAR`, it's written correctly, but not usable for my next app. If I then change the column to `DATETIME`, everything works great, but that's super impractical, and I'd have to change the datatype of my column every time I need to add to the DB.
Another assumption I'm making here is that the input datatype is what's causing this problem, but as it is now, `DATE` and `DATETIME` are not supported by the driver I'm using, so maybe it's just wishful thinking on my part that any of this will work at all.
Note: Please do migrate this if it's off topic; I wasn't really sure if it was a best fit here or Stack Overflow...
Second note: The only way I have found to make this work is to write into the DB as a `VARCHAR` and then use `CONVERT(,DATETIME)` in my app, but it feels weird that I can't just write a date in my DB for some reason.
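For reference, a minimal sketch of what MySQL itself accepts for a DATETIME column (table and column names are hypothetical): the value arrives as a quoted string literal in 'YYYY-MM-DD HH:MM:SS' format, or via STR_TO_DATE() when the source string is formatted differently.

```sql
CREATE TABLE readings (taken_at DATETIME);   -- hypothetical table

-- Standard literal format:
INSERT INTO readings (taken_at) VALUES ('2021-04-30 08:51:30');

-- A differently formatted source string, parsed explicitly:
INSERT INTO readings (taken_at)
VALUES (STR_TO_DATE('30/04/2021 08:51:30', '%d/%m/%Y %H:%i:%s'));
```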
E.Aigle
(105 rep)
Apr 30, 2021, 09:25 AM
• Last activity: Apr 30, 2021, 04:46 PM
4
votes
2
answers
24973
views
Is there an easy way in PostgreSQL to escape newlines, carriage returns, and tabs?
Let's say I want to escape these to see a similar string
SELECT *
FROM ( VALUES
(E'\r\n\t FOO BAR \t\n\r')
) AS t(str);
How do I get the C-style escapes, `\r\n\t FOO BAR \t\n\r`, back out?
SELECT *
FROM ( VALUES
(E'foo\nfoo',E'\n\tBAR\t','baz'),
(E'quz','qaz','quz')
) AS t(x,y,z);
x | y | z
-----+------------------+-----
foo+| +| baz
foo | BAR |
quz | qaz | quz
(2 rows)
I want instead:
x | y | z
----------+------------------+-----
foo\nfoo | \n\tBAR | baz
quz | qaz | quz
(2 rows)
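A sketch of one manual route, assuming standard_conforming_strings is on (so '\n' without the E prefix is the literal two-character sequence): replace each control character with its escape sequence.

```sql
SELECT replace(
         replace(
           replace(str, E'\r', '\r'),   -- carriage return -> the two characters \r
           E'\n', '\n'),                -- newline -> \n
         E'\t', '\t') AS escaped        -- tab -> \t
FROM ( VALUES (E'\r\n\t FOO BAR \t\n\r') ) AS t(str);
-- escaped: \r\n\t FOO BAR \t\n\r
```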
Evan Carroll
(65502 rep)
May 25, 2017, 03:44 AM
• Last activity: Apr 4, 2021, 03:36 AM
2
votes
1
answers
2094
views
How to output string values in single quotes?
I'm trying to convert values stored in a json into their string representation.
The `#>>` operator works for this purpose for numbers, e.g.:
SELECT '{"a":[1,2,3],"b":["4","5","6"]}'::json#>>'{a,2}';
gives
 ?column?
----------
 3
(1 row)
But for string values, the text representation (`6` in the example below) does not have single quotes around it, so one cannot tell that the stored value was a string:
SELECT '{"a":[1,2,3],"b":["4","5","6"]}'::json#>>'{b,2}';
 ?column?
----------
 6
(1 row)
How can I use a function or expression to make PostgreSQL output `'6'` instead of `6` in the second example above (and output `3` in the first)?
(This is with PostgreSQL 12.)
*-- UPDATE --*
@McNets To clarify, I am interested in getting the string representation in quotes, so that after decomposing a jsonb into its key/value pairs, one can use them with inverse functions like `jsonb_build_object` (so to speak) to reconstruct the original jsonb. `jsonb_each_text` and the `#>>` operator apparently aren't good enough because they drop the single quotes.
tinlyx
(3820 rep)
Oct 13, 2019, 06:31 PM
• Last activity: Mar 5, 2021, 09:07 PM
1
votes
2
answers
356
views
Replace specific part of a string
I've got a table with multiple columns: `APGRPID`, `VIRTKEY`, `LFDNR` and `PARAMETERLIST`. I want to change a part of some strings in the `PARAMETERLIST` column.
In the `PARAMETERLIST` I want to replace the "value" for the `PATH=` setting.
Table:
| APGRPID | VIRTKEY | LFDNR | PARAMETERLIST |
|:----------- |:---------- | :---- | :------------ |
| 1091.000000 | 121.000000 | 1 | KF=138;SCANANDSEND=TRUE;ENDBARCODE=999999999999;PATH=K:\SSTEST;BARCODELENGTH=8 |
| 2092.000000 | 130.000000 | 8 | KF=138; PATH=s:\ |
| 2101.000000 | 114.000000 | 3 | DESIGNATION=JOB; KF=81; PATH=w:\; SENDALLDATA=TRUE |
| 2102.000000 | 116.000000 | 3 | KF=76; PATH=\\srv05\L3OERAPMFC; SENDALLDATA=TRUE |
| 2303.000000 | 114.000000 | 2 | KF=386; PATH=W:\; SENDALLDATA=TRUE |
The main problem I have is the randomness of the position of the `PATH=` parameter in the `PARAMETERLIST` column and the variability of the number of elements in the column if I try to `CROSS APPLY` a `STRING_SPLIT()` or anything.
Platform: Microsoft SQL Server Enterprise (Ver. 14.0.1000.169)
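A sketch of one way this is often handled in T-SQL, with a hypothetical table name and replacement value: find `PATH=` and the next semicolon (or the end of the string) with CHARINDEX, then swap the value out with STUFF.

```sql
DECLARE @newpath varchar(200) = 'X:\NewPath';   -- hypothetical replacement value

UPDATE t
SET PARAMETERLIST = STUFF(
        PARAMETERLIST,
        CHARINDEX('PATH=', PARAMETERLIST) + 5,                           -- first character of the old value
        ISNULL(NULLIF(CHARINDEX(';', PARAMETERLIST,
                                CHARINDEX('PATH=', PARAMETERLIST)), 0),
               LEN(PARAMETERLIST) + 1)
            - (CHARINDEX('PATH=', PARAMETERLIST) + 5),                   -- old value runs to the next ';' or the end
        @newpath)
FROM dbo.MyParameterTable AS t                                           -- hypothetical table name
WHERE PARAMETERLIST LIKE '%PATH=%';
```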
S_Koenig
(21 rep)
Jan 13, 2021, 08:42 AM
• Last activity: Jan 13, 2021, 05:58 PM