Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
5700
views
Postgres HAVING ilike any wildcard in array_agg
I need a filter where the input value is similar to any value in an aggregate, using `ilike` and wildcards. My attempt at a query so far is:
SELECT p.id, p.name, array_agg(vo.name)
FROM product p
LEFT JOIN product_variation pv
ON p.id = pv.product_id
LEFT JOIN variation_option vo
ON pv.variation_option_id = vo.id
GROUP BY p.id
HAVING $1 ilike any(array_agg('%' || vo.name || '%'));
But it doesn't work. It gets even more complicated: ideally, I'd be able to input an array of strings, where any of them are similar to the aggregate. Any tips?
I realize this kind of works, just not as I need it to. The input value should be able to be a partial match for any of the array elements, but it is the other way around: the wildcard would need to be in the input value.
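One way to flip the direction of the match (a sketch only, assuming the schema from the question; not a tested answer) is to put the wildcards around the input instead, and combine the per-row comparisons with `bool_or()`:

```sql
-- Sketch: wrap the input in wildcards and OR the per-row matches together.
SELECT p.id, p.name, array_agg(vo.name)
FROM product p
LEFT JOIN product_variation pv ON p.id = pv.product_id
LEFT JOIN variation_option vo ON pv.variation_option_id = vo.id
GROUP BY p.id
HAVING bool_or(vo.name ILIKE '%' || $1 || '%');
```

For an array of input strings, the same idea extends by testing each element, e.g. with `EXISTS` over `unnest($1::text[])` inside the `bool_or()` expression.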
koi
(23 rep)
Jun 10, 2020, 02:46 PM
• Last activity: May 3, 2025, 07:07 AM
1
votes
1
answers
507
views
Postgresql (11.9) LIKE query not using text_ops index
I've seen related questions but I think this should be working and isn't. I have a text_ops function based index on column to do LIKE searches, but it only picks up the index with an equals:
CREATE INDEX foo ON mytable USING btree (upper((street)::text), group_id, current_version, pending, storage_id);
This query is lightning fast on my 113M row table:
select * from mytable
where upper(street) = '104 LESHAWN COVE'
and group_id = 5022352
and current_version = 1
and pending = 0
The corresponding LIKE takes a couple of minutes:
select * from mytable
where upper(street) LIKE '104 LESHAWN COVE%'
and group_id = 5022352
and current_version = 1
and pending = 0
The explain plan shows one query using the index and the other not. I thought the text_ops indexed column should allow the LIKE to work, but I must be missing something here.
show lc_collate
en_US.UTF-8
I don't know that it matters, but this is on a partitioned table on the group_id column which also exists in the index for unpartitioned environments. The index is on the partition in this case.
So what am I missing? These operator classes are new to me, so I must not understand part of it.
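A commonly cited explanation (offered as a sketch, not the accepted answer): under a non-"C" collation such as en_US.UTF-8, a plain btree index cannot serve `LIKE 'prefix%'` searches; the leading index expression must be declared with the `text_pattern_ops` operator class (or `COLLATE "C"`):

```sql
-- Hypothetical index name; text_pattern_ops enables LIKE 'prefix%' lookups
-- on the leading expression under a non-"C" collation such as en_US.UTF-8.
CREATE INDEX foo_pattern ON mytable
    (upper((street)::text) text_pattern_ops, group_id, current_version, pending, storage_id);
```

The equality query works either way, which would explain why only the `=` form uses the existing index.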

nfdavenport
(121 rep)
Dec 7, 2021, 07:26 PM
• Last activity: Mar 2, 2025, 10:04 PM
21
votes
4
answers
114829
views
How to do a case-insensitive LIKE in a case-sensitive database?
My vendor requires the data warehouse database to be case sensitive, but I need to do case-insensitive queries against it.
In a case-sensitive database, how would you write this to be case-insensitive?
Where Name like '%hospitalist%'
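Two common techniques, sketched here on the assumption of SQL Server (where a per-expression collation override is available); the collation name is an example, not necessarily the one your vendor uses:

```sql
-- Option 1 (SQL Server): override the collation for this comparison only.
WHERE Name COLLATE SQL_Latin1_General_CP1_CI_AS LIKE '%hospitalist%'

-- Option 2 (portable, but unsargable): fold both sides to one case.
WHERE UPPER(Name) LIKE '%HOSPITALIST%'
```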
James
(2668 rep)
Jun 13, 2018, 01:49 PM
• Last activity: Jan 30, 2025, 04:25 PM
3
votes
1
answers
517
views
I want to search for _01
I want to search for `_01` in a column, but when I do this using `LIKE` it also finds all cells with `01` and not just those with `_01`.
### Example
Select * from customers
where code like '%_01'
### Result

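The extra matches happen because `_` is itself a LIKE wildcard (it matches any single character). A sketch of the standard fix, using an `ESCAPE` clause (standard SQL; in SQL Server the bracket form `LIKE '%[_]01'` also works):

```sql
-- Escape the underscore so it is treated as a literal character.
SELECT *
FROM customers
WHERE code LIKE '%\_01' ESCAPE '\';
```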
Michael Lechner
(31 rep)
Jan 23, 2025, 09:37 AM
• Last activity: Jan 23, 2025, 11:59 AM
0
votes
2
answers
517
views
SSRS - select where parameter and fixed value
So there is a lot of stuff about using SSRS parameters to select multiple values but nothing about using both a parameter and a fixed value using the in select statement.
So what I'm trying to do is this:
select * from table where id in (Parameter_1, 'total')
Yet when I do that it complains it can't find 'Parameter_1'.
I've gotten another report to work when it's just:
select * from table where id = Parameter_2
but this isn't good enough in this use case.
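A sketch, assuming a SQL Server dataset query (in the dataset text, SSRS report parameters are normally referenced with an `@` prefix rather than a bare name):

```sql
-- Mixing a report parameter and a fixed value in one IN list.
select * from table where id in (@Parameter_1, 'total')
```

For a multi-value parameter, SSRS expands `@Parameter_1` into a comma-separated list automatically when it appears inside `IN (...)`.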
Exostrike
(183 rep)
Aug 10, 2018, 09:58 AM
• Last activity: Jan 6, 2025, 12:00 AM
0
votes
0
answers
46
views
Purely in performance terms, how do CONTAINS([MyCol], 'foo') and [MyCol] LIKE '%foo%' compare?
Consider `CONTAINS([MyCol], 'foo')` and `[MyCol] LIKE '%foo%'`. Assume the best possible indexes on `[MyCol]`, whatever those may be (at minimum, we need a unique key and a full-text index). I know that the two expressions are not equivalent, but how do their performance characteristics compare?
Assume SQL Server 2019. I am asking because I have inherited a codebase where the first expression was frequently preferred over the second, without any documentation explaining why.
J. Mini
(1237 rep)
Mar 18, 2024, 08:10 PM
-2
votes
3
answers
87
views
Query to find values based on a like and a not like
I have a column in a table: ID - this column can have a value of '123456' but can also have value '123456x'.
I need to return all the numbers where there is an 'x' and the same number but without an 'x' doesn't exist.
So if '123456' and '123456x' both exist I don't get a result. If '345678' exists and '345678x' does not exist then I get a result.
I'm sure this must be simple using a subquery and likes but it's wrecking my brain trying to construct the query. Please help!
Thanks
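A sketch of one approach using `NOT EXISTS` (SQL Server syntax; the table name `mytable` is hypothetical):

```sql
-- Return IDs ending in 'x' whose bare-number counterpart does not exist.
SELECT t.ID
FROM mytable AS t
WHERE t.ID LIKE '%x'
  AND NOT EXISTS (
      SELECT 1
      FROM mytable AS t2
      -- strip the trailing 'x' and look for the bare number
      WHERE t2.ID = LEFT(t.ID, LEN(t.ID) - 1)
  );
```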
Daniel Blake
(1 rep)
Dec 8, 2023, 12:21 PM
• Last activity: Dec 9, 2023, 12:41 PM
-2
votes
1
answers
54
views
I want to assign a character value to a field that contains numbers without using CAST or CONVERT so as to avoid cardinality estimate warnings
I'm looking at something like
SELECT CASE WHEN MyField LIKE [0-9] THEN 'xyz'
WHEN MyField LIKE [10-99] THEN 'abc'
WHEN MyField LIKE [100-999] THEN 'def'
WHEN MyField LIKE [1000-9999] THEN 'ghi'
ELSE [pqr] END AS ACCOUNT_CATEGORY
FROM MyTable
MyField is a `smallint`, so it does not contain any values above 32,000.
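Since MyField is already numeric, the ranges can be expressed directly with `BETWEEN`, with no `LIKE`, `CAST`, or `CONVERT` involved (a sketch; the 'pqr' fallback is assumed to be a string literal like the others):

```sql
SELECT CASE WHEN MyField BETWEEN 0    AND 9    THEN 'xyz'
            WHEN MyField BETWEEN 10   AND 99   THEN 'abc'
            WHEN MyField BETWEEN 100  AND 999  THEN 'def'
            WHEN MyField BETWEEN 1000 AND 9999 THEN 'ghi'
            ELSE 'pqr'
       END AS ACCOUNT_CATEGORY
FROM MyTable;
```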
Eyespi20
(1 rep)
Oct 23, 2023, 01:23 PM
• Last activity: Oct 24, 2023, 12:45 AM
0
votes
1
answers
2743
views
Find partial match with textsearch query
I have the following query, which I put together after much searching on this site:
SELECT *
FROM renns
WHERE
(
to_tsvector('english', coalesce(numar_admin, '')) ||
to_tsvector('english', coalesce(nume_strada, '')) ||
to_tsvector('english', coalesce(nume_prenume, ''))||
to_tsvector('english', coalesce(bloc, '')) ||
to_tsvector('english', coalesce(apartament, ''))
) @@ plainto_tsquery('english', '$q')
It works if I write in the variable `$q` the full text that appears in one of the columns in the query, for example: cherry. I would like to write only: cher, and have it return all rows in which it finds this string: "cher".
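PostgreSQL full-text search supports prefix matching with the `:*` suffix, but only via `to_tsquery` (`plainto_tsquery` does not generate prefix queries). A sketch with the question's own columns:

```sql
-- Sketch: 'cher:*' matches any lexeme starting with 'cher'.
SELECT *
FROM renns
WHERE (
    to_tsvector('english', coalesce(numar_admin, '')) ||
    to_tsvector('english', coalesce(nume_strada, '')) ||
    to_tsvector('english', coalesce(nume_prenume, '')) ||
    to_tsvector('english', coalesce(bloc, '')) ||
    to_tsvector('english', coalesce(apartament, ''))
) @@ to_tsquery('english', 'cher:*');
```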
CoderCoder42
(101 rep)
Mar 31, 2021, 07:56 PM
• Last activity: Aug 1, 2023, 06:30 PM
0
votes
2
answers
177
views
Find rows with given coordinates nested in a JSON column
I have a table `public.polygon_versions` in my PostgreSQL 14.7 database:
CREATE TABLE public.polygon_versions (
id bigint NOT NULL DEFAULT nextval('polygon_versions_id_seq'::regclass),
entity_id bigint NOT NULL,
creation_transaction_id bigint NOT NULL,
obsolescence_transaction_id bigint,
geo_json json NOT NULL,
CONSTRAINT polygon_versions_pkey PRIMARY KEY (id),
CONSTRAINT polygon_versions_creation_transaction_id_foreign FOREIGN KEY (creation_transaction_id)
REFERENCES public.transactions (id),
CONSTRAINT polygon_versions_entity_id_foreign FOREIGN KEY (entity_id)
REFERENCES public.polygons (id),
CONSTRAINT polygon_versions_obsolescence_transaction_id_foreign FOREIGN KEY (obsolescence_transaction_id)
REFERENCES public.transactions (id)
);
Example `geo_json` data is:
~~~none
{"type":"Feature","geometry":{"type":"Polygon","coordinates":[[[30.3626584543099,50.43004834913466],[30.37021155489584,50.4281893337737],[30.36969657076498,50.433110097683176],[30.369009925257167,50.433984846616404],[30.3626584543099,50.43004834913466]]]},"properties":{}}
~~~
I'm trying to query all records where the field `geo_json` matches some coordinates, for example: 30.504106925781265, 50.43731985018662
I've tried the following SQL query:
SELECT *
FROM polygon_versions
WHERE geo_json LIKE '%30.504106925781265,50.43731985018662%'
It returns an error:
ERROR: operator does not exist: json ~~ unknown
LINE 1: SELECT * FROM polygon_versions WHERE geo_json LIKE '%30.5041...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
SQL state: 42883
Character: 47
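The error says it directly: no `LIKE` (`~~`) operator exists for the `json` type. A sketch of the minimal workaround is to cast the column to text first:

```sql
-- Cast json to text so LIKE applies to the serialized JSON string.
SELECT *
FROM polygon_versions
WHERE geo_json::text LIKE '%30.504106925781265,50.43731985018662%';
```

This is a plain substring match on the serialized JSON; casting to `jsonb` and using containment operators would be more robust, but the text cast is the direct fix for the error shown.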
Taras
(167 rep)
Apr 11, 2023, 02:32 PM
• Last activity: Apr 12, 2023, 09:29 AM
1
votes
1
answers
2001
views
How can I query using like on json field in PostgreSQL?
I've set up the following DB Fiddle: https://dbfiddle.uk/mzsp-4tg
How should I write a correct SQL query to use `like` on the `geo_json` field of `json` type, to get table records with a string matching the coordinates combination `%30.504106925781265,50.43731985018662%`?
Taras
(167 rep)
Apr 3, 2023, 06:16 PM
• Last activity: Apr 3, 2023, 06:38 PM
5
votes
1
answers
346
views
SQL server Cardinality Estimation for LIKE query
I have this statistics histogram vector for my non-clustered index made on LastName column of a table named AspNetUsers.
If I run a query such as `SELECT * FROM dbo.AspNetUsers WHERE LastName = 'Baker'`, it returns 6 as the estimated number of rows, because **Baker** is the **RANGE_HI_KEY** of one of the steps, so that step's EQ_ROWS value is my estimated row count. Similarly, if I run `SELECT * FROM dbo.AspNetUsers WHERE LastName = 'Bacilia'`, it returns 1 as the estimated number of rows, because **Bacilia** falls into the 'Baker' step range, so that step's **AVG_RANGE_ROWS** value is my estimated row count.
Similarly, to my understanding, if I run `SELECT * FROM dbo.AspNetUsers WHERE LastName LIKE 'Ba%'`, it matches 2 steps (**Baker** and **Batagoda**), so it should return 27 + 51 (RANGE_ROWS) + 6 + 4 (EQ_ROWS) = 88. But it returns 99 rows as the estimate.
How does cardinality estimation work with a LIKE query? Does it use a different formula for estimating the number of rows when doing a LIKE query?
Dhanuka Jayasinghe
(185 rep)
Feb 5, 2023, 10:52 AM
• Last activity: Feb 5, 2023, 06:11 PM
1
votes
1
answers
2299
views
Join tables based on variable value contained in column
I have two tables with dates as the primary column to join. But, I also need to join based on the value from one column that is partially contained in a column from the other table.
Table 1: date, audience
Table 2: date, channel
'audience' in table 1 has a value that is partially contained in 'channel' in table 2, like this:
audience = "oldpeople"
channel = "oldpeople_email"
There are hundreds of potential 'audience' values and 'channel' values, so I cannot name the values, it has to work dynamically. This is what I have tried that has not worked:
SELECT * FROM table1
left JOIN table2 ON ((table2.Channel like '%table1.audience%')
AND (table2.date = table1.date) )
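The pattern `'%table1.audience%'` is a string literal, so the column name is never substituted. A sketch of the fix is to build the pattern from the column with concatenation (`||` in standard SQL/PostgreSQL; use `CONCAT()` in MySQL or SQL Server):

```sql
SELECT *
FROM table1
LEFT JOIN table2
       ON table2.channel LIKE '%' || table1.audience || '%'
      AND table2.date = table1.date;
```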
Steve L
(13 rep)
Mar 8, 2018, 01:57 AM
• Last activity: Aug 29, 2022, 08:16 AM
5
votes
1
answers
1206
views
Use square brackets on extended events filter
I want to create an extended events session and use the `like_i_sql_unicode_string` operator to filter on the phrase `[demo]`, with the square brackets.
I've started with:
CREATE EVENT SESSION [demo] ON SERVER
ADD EVENT sqlserver.sql_batch_completed(
WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%[demo]%'))
)
ADD TARGET package0.ring_buffer
alter event session [demo] on server state=start
But this interprets `[demo]` as a character group in a regex-like syntax. So if I run this:
-- m
it will be captured by the extended event, because `[demo]` matches any single character d, e, m or o.
The closest I've got was filtering it afterwards, using `[sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%demo%')` in the filter and then:
SELECT
n.ev.value('@timestamp','datetimeoffset') as [timestamp],
n.ev.value('data[@name="batch_text"]','varchar(max)') as [batch_text]
FROM sys.dm_xe_session_targets xet
INNER JOIN sys.dm_xe_sessions xe ON xe.[address] = xet.event_session_address
cross apply (select CONVERT(XML, target_data) as xData ) as x
cross apply x.xData.nodes(N'RingBufferTarget/event') AS n(ev)
WHERE xe.name = N'demo' AND xet.target_name = N'ring_buffer'
and n.ev.value('data[@name="batch_text"]','varchar(max)') like '%\[demo\]%' escape '\'
But this still captures more events than necessary.
I've tried `'%[[]demo[]]%'`, `'%{[}demo{]}%'`, `'%\[demo\]%'`, `'%$[demo$]%'`, and none of those worked.
Jorge Bugal
(218 rep)
Aug 26, 2022, 06:47 PM
• Last activity: Aug 27, 2022, 07:06 AM
5
votes
2
answers
1832
views
Records greater than epoch timestamp using only LIKE operator
I have the following query so far and unfortunately I cannot use *regexp* or *greater than* operators; I can only use the `LIKE` keyword. The whole column is a JSON string, and I can't use `json_value` or regexp because I'm on SQL Server 2014, where `json_value` is not supported until 2016, so I'm stuck with `LIKE`.
SELECT * FROM DataTableOne
WHERE update_date LIKE '%1645290000%'
I would like to retrieve all records where the epoch Unix timestamp is greater than 1645290000 using only the SQL `LIKE` keyword (or even between 1645290000 and 9999999999 using `LIKE`). Any help will be much appreciated, since this is a very tough, unique case where I am limited to only the `LIKE` keyword.
Sample table/data below:
CREATE TABLE DataTableOne (
ID int,
DATA varchar(MAX)
);
INSERT INTO DataTableOne (ID, DATA)
VALUES (1, '{"name":"Cole", "update_date":"2855290000"}'),
(2, '{"name":"Peter", "update_date":"1222290000"}') ;
There could be a thousand rows with this sort of data and the only ones I want are the ones where the update_date is greater than 1645290000.
Running the query on the above table I gave should only return the first row since the *update_date* of 2855290000 is indeed greater than 1645290000 numerically.
Elite298
(61 rep)
Jun 24, 2022, 08:56 PM
• Last activity: Jun 26, 2022, 12:54 PM
0
votes
1
answers
2333
views
LIKE query with leading wildcards in large MySQL production database
According to insights in Amazon RDS the following query in a large MySQL production database is causing high loads ( ~ 50.000.000 entries):
SELECT * FROM entities WHERE status='ready' AND user_id='81663729' AND (primary_name LIKE '%mysearch%' OR additional_names LIKE '%mysearch%') ORDER BY id DESC LIMIT 0, 100000
Columns responsible for it:
primary_name: VARCHAR(255)
additional_names: VARCHAR(2000) (Note: This is *unfortunately* a comma-separated string of zero to hundreds of names per entry, which is horrible database design)
Not a big surprise that this query performs badly: leading wildcards make indexes more or less unusable (`primary_name` is indexed, `additional_names` is not). Some users have a whole lot of entries in this table (currently up to ~2,000,000 entries per user, and growing), so the user_id in the query will not rescue its performance.
My question is what might be the best way to solve this problem.
**Approach 1: MySQL FULLTEXT index**
Create a fulltext index:
CREATE FULLTEXT INDEX domain ON entities (primary_name,additional_names);
New Query:
SELECT * FROM entities WHERE status='ready' AND user_id='81663729' AND MATCH (primary_name,additional_names) AGAINST ('mysearch' IN NATURAL LANGUAGE MODE) ORDER BY id DESC LIMIT 0, 100000;
*Pro*: Easiest to migrate and implement?
*Contra*: Not sure if this meets the requirements? Creating that index might take a huge amount of time.
**Approach 2: MySQL JSON**
Step 1: Add an additional column with JSON
ALTER TABLE entities ADD additional_names_j JSON AFTER additional_names;
Step 2: All new entities are inserted into the new JSON column as json as well as into the old column.
Step 3: A script which might run a few days makes sure that all old entities have their entries in the JSON column.
Step 4: Application is adjusted to use the JSON column.
Step 5: Delete the old additional_names column.
Query: SELECT * FROM entities WHERE status='ready' AND user_id='81663729' AND 'mysearch' MEMBER OF(additional_names_j) ORDER BY id DESC LIMIT 0, 100000;
*Pro*: Searching the JSON would work much better and it would be possible to extract the names without comma exploding and such stuff.
*Contra*: Not sure if partial matching within JSON values is even possible? Furthermore, the effort is quite high.
**Approach 3: Create a relations table and a names table**
This would basically mean that we have a table with id and name and a m:n relationship table.
*Pro*: Would be nice and clean design.
*Contra*: Would have to exist and be populated in parallel to the running application, and in this case it would cause enormous changes in the application logic.
My question to the experts is: Which of those approaches would solve the situation best - and why?
Note: The table is growing by ~ 5 entries per second, downtime is not allowed. I am open to any other approach to make this query faster, which I am not aware of at the moment!
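On Approach 1, one detail worth hedging on: natural-language mode matches whole words only. Partial (prefix) matches require boolean mode with a trailing `*` truncation operator, for example:

```sql
-- Sketch: '*' prefix truncation is only valid in BOOLEAN MODE.
SELECT *
FROM entities
WHERE status = 'ready'
  AND user_id = '81663729'
  AND MATCH (primary_name, additional_names) AGAINST ('mysearch*' IN BOOLEAN MODE)
ORDER BY id DESC
LIMIT 0, 100000;
```

Infix matches (the `'%mysearch%'`-style match in the middle of a word) are not something a FULLTEXT index supports in any mode, so the approaches are not fully equivalent to the original query.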
Blackbam
(225 rep)
May 24, 2022, 02:01 PM
• Last activity: May 24, 2022, 03:45 PM
0
votes
1
answers
272
views
Search for the root words using root words with prefixes or suffixes
Is it possible to search the word "FILL" saved in the SQLite database using the words "REFILL" or "FILLED"? Not just these words specifically but you get the idea of searching the root word using the same word but with prefixes or suffixes.
I might get misunderstood as wanting to use the LIKE operator/command, but what I need is somewhat the reverse of that. The case is that the root words (FILL) are the ones in my database, and the words that I am going to use in the query are those with prefixes or suffixes (e.g. REFILL, FILLED).
I cannot use those word-trimming APIs because it's not going to be done in English. Code for Android/SQLite would be great, but the query command alone is already a great help. Thanks.
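Since the stored value is the root and the query term is the longer word, the `LIKE` can simply be flipped so the stored column supplies the pattern (a sketch; the table and column names `roots`/`word` are hypothetical):

```sql
-- Find stored root words that occur inside the supplied word.
SELECT word
FROM roots
WHERE 'REFILL' LIKE '%' || word || '%';
```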
Niel Lat
(1 rep)
Aug 29, 2017, 08:38 AM
• Last activity: Apr 18, 2022, 08:02 PM
1
votes
1
answers
218
views
Query hierarchical file permissions
I have a table representing some files with their paths and another table with file sharings by user and files.
Below is Table **files**, simplified. Example:
- I have a directory A (path /A) containing some files.
- Directory B is inside A and its path is /A/B and it contains some files.
- Files C (path /A/B/C) and D (path /A/B/D) are inside B.
id | path
-|-
1 | /A
2 | /A/B
3 | /A/B/C
4 | /A/B/D
Below is table **file_sharings**, simplified. If a user has a file sharing with path='/A' and readable=true, it means that he can read every file with a path starting with '/A'.
Now I have this (it's only an example; it can be more complex):
id|user_id|file_id|path|readable
-|-|-|-|-
1 | 1 | 1 | '/A' | true
2 | 1 | 2 | '/A/B' | false
3 | 1 | 4 | '/A/B/D' | true
So it means that user 1 can see everything in directory A, but he can't see directory B or file C, though he can see file D (the same way it works with Google Drive, for instance).
What I want is a query to have all files/directories that user 1 can see.
For the moment I have this request:
SELECT *
FROM files
WHERE path LIKE ANY (SELECT fs.path || '%'
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=true)
AND (path NOT LIKE ANY (SELECT fs.path || '%'
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=false)
);
but it returns only:
/A
If I do:
SELECT *
FROM files
WHERE path LIKE ANY (SELECT fs.path || '%'
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=true)
AND (path NOT IN (SELECT fs.path
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=false)
);
... it returns:
/A
/A/B/C
/A/B/D
I don't know how to get all files that user 1 can read, excluding the ones he can't read. Here I must have `/A` and `/A/B/D`, but in reality the file tree can be more complex; the principle is the same as in this simple example. It's easy in real life, but I'm stuck creating the right query :D
Here a [dbfiddle](https://dbfiddle.uk/?rdbms=postgres_10&fiddle=e37c9d15a577e9e38816af01b9950a89)
When I have no row in the DB, it means the user has no readable rights.
When I have a row with readable=true, it means the user has a readable right on the file and the children (the files in the repository).
When I have a row with readable=false, it means that the file and its children are specifically forbidden to the user.
I do it this way because I don't want to create billions of rows when I share a root directory with another user.
`file_id` is a foreign key used only to recalculate the file_sharings path when I move a file/repository into another one. So I can have a readable right on a file even though it is not explicitly in the DB.
Here is a picture of the fiddle example for user 1:

SELECT *
FROM files
WHERE path LIKE ANY (SELECT fs.path || '%'
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=true)
AND (path NOT LIKE ANY (SELECT fs.path || '%'
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=false)
);
It returns `A1`, `B2` and `C3`. I should also have `C2`.
With the second request:
SELECT *
FROM files
WHERE path LIKE ANY (SELECT fs.path || '%'
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=true)
AND (path NOT IN (SELECT fs.path
FROM file_sharings fs
WHERE fs.user_id='1'
AND fs.readable=false)
);
It returns `A1`, `B2`, `C3`, `C2` and `C1`. I should not have `C1`.
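One way to express "the most specific rule wins" (a sketch assuming PostgreSQL 10+ for `LATERAL`; not a tested answer) is to pick, for each file, the longest sharing path that prefixes it, and keep the file only when that rule is readable:

```sql
SELECT f.*
FROM files f
JOIN LATERAL (
    SELECT fs.readable
    FROM file_sharings fs
    WHERE fs.user_id = '1'
      AND f.path LIKE fs.path || '%'
    ORDER BY length(fs.path) DESC   -- the deepest matching rule is the most specific
    LIMIT 1
) best ON best.readable;
```

Files with no matching rule at all are dropped by the join, which matches the convention that no row means no readable right.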
Olivier
(13 rep)
Mar 10, 2022, 05:02 PM
• Last activity: Mar 12, 2022, 12:15 AM
9
votes
4
answers
4274
views
Matching left and right single-quotes used as apostophes
I have four columns containing names and want to search these using a `LIKE` in a Microsoft SQL Server environment.
The complication comes in that names may include left and right single-quotes / angled apostrophes (i.e. `‘` and `’`, `char(145)` and `char(146)` respectively), which should match a straight apostrophe (i.e. `'`, `char(39)`).
Doing the following is very slow:
SELECT person_id
FROM person
WHERE REPLACE(
REPLACE(
person_name,
CHAR(145),
CHAR(39)
),
CHAR(146),
CHAR(39)
) LIKE '{USER_INPUT}'
As explained in SQL replace statement too slow on Stack Overflow, this is because the use of `REPLACE` makes the statement unsargable.
Is there a way that SQL Server can handle situations like this in a better way?
One solution which has been proposed is to have the application generate a 'searchable' value which concatenates all of the fields (`person_name`, `person_surname`, `person_nickname`, etc.) and converts the problematic characters at the point of editing. This could be effectively indexed and searched. Storing this data in a separate SQL table/column would require less application rewrite than implementing a full NoSQL solution like Lucene.
The example above is a simplification: the query doesn't literally get built as I explained above and we do implement SQL injection (and other) protections.
The question is how to replace the angled apostrophes with straight ones in the table data. To clarify:
* User supplies O‘Malley - this should match both O‘Malley and O'Malley
* User supplies O'Malley - this should match both O‘Malley and O'Malley
We need to replace the SQL data, not the user's input. We can convert the user input on the way through the application so that if they input angled apostrophes we change them to simple apostrophes before passing in to SQL. It's the data in SQL we need to standardise.
Unfortunately the data must stay in the database as the correct angled bracket, but when we do the search we need to match them all against straight apostrophes.
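One sargable variant of the proposed 'searchable value' idea, kept inside SQL Server rather than the application (a sketch; the column and index names are hypothetical, and the original `person_name` stays untouched):

```sql
-- Persisted computed column that folds both angled apostrophes to CHAR(39).
ALTER TABLE person ADD person_name_search AS
    REPLACE(REPLACE(person_name, CHAR(145), CHAR(39)), CHAR(146), CHAR(39)) PERSISTED;

CREATE INDEX ix_person_name_search ON person (person_name_search);

-- Search the folded column, with the user's input folded the same way in the app.
SELECT person_id FROM person WHERE person_name_search LIKE @user_input;
```

A leading-wildcard pattern still cannot seek on the index, but the `REPLACE` calls move out of the predicate, so prefix searches become sargable and scans avoid per-row function evaluation.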
JLo
(193 rep)
Sep 28, 2016, 03:36 PM
• Last activity: Feb 18, 2022, 03:09 PM
0
votes
1
answers
3481
views
Postgresql: ILIKE query with % as search term
I have a JDBC query that performs an ILIKE operation.
select project_name from project_table where project_name ILIKE '%test%';
The above returns all the results matching (wildcarded) the term test.
However, when the project_name is `%project_name` and the user searches for just `%`, the query returns all the rows instead of just the matching row. Essentially the query becomes `select project_name from project_table where project_name ILIKE '%%%';`
I understand the concept of wildcarding in PostgreSQL, but is there a way to limit the result to just the rows matching the `%`?
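A sketch of the usual fix: escape the LIKE/ILIKE metacharacters (`%`, `_`, and the escape character itself) in the user's term before wrapping it in wildcards. Here `$1` stands for the raw user input bound from JDBC:

```sql
-- Backslash is ILIKE's default escape character in PostgreSQL.
SELECT project_name
FROM project_table
WHERE project_name ILIKE
      '%' || replace(replace(replace($1, '\', '\\'),
                             '%', '\%'),
                     '_', '\_') || '%';
```

With input `%`, the pattern becomes `%\%%`, which matches only names actually containing a literal percent sign.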
underachiever
(67 rep)
Jan 6, 2022, 02:32 PM
• Last activity: Jan 7, 2022, 02:32 AM
Showing page 1 of 20 total questions