
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1542 views
MySQL FULLTEXT Match exponentially slower with search phrase
I'm trying to track down some performance issues in my FULLTEXT MATCH() AGAINST() query, and it seems to be related (at least in part) to queries requiring an exact phrase. My table stores system process information, has roughly 2M rows, a `keywords` TEXT column, and a FULLTEXT index on said column. The `keywords` column averages ~600 characters, with several "words" in that column containing one or many special characters. The query I'm trying to run is an exact match on "net.exe": `select id from processes where id`

The MySQL documentation says:

> A phrase that is enclosed within double quote (`"`) characters matches only rows that contain the phrase literally, as it was typed. The full-text engine splits the phrase into words and performs a search in the FULLTEXT index for the words. Nonword characters need not be matched exactly: Phrase searching requires only that matches contain exactly the same words as the phrase and in the same order. For example, `"test phrase"` matches `"test, phrase"`.

...and indeed, running queries with `... AGAINST('"net exe"' ...` takes just as long. So it seems to just be searching for exact phrases in general that is slow. My latest theory is that because my table has process info (e.g. system paths and cmdline arguments, which have many special characters), the normal FULLTEXT index isn't useful for my query and MySQL is effectively re-indexing the whole table when I search for phrases like "net.exe". Some supporting evidence for this is that the original creation of the FULLTEXT index took roughly 30 minutes. However, I find it hard to believe that would be the full explanation.

Regardless, I ran EXPLAIN on my query (which itself took 30 minutes to resolve):

```
mysql> explain select id from processes where id
```

The plan reports `Ft_hints: sorted`, which seems to only be due to the lack of quotes. If I run EXPLAIN when querying for "net" it goes back to `Ft_hints: no_ranking`.
Lastly, I tried running CHECK TABLE and even making a fresh temp table with only the id and keywords columns, but the above numbers were consistent, so I don't feel this is specific to my table condition.
Eric (103 rep)
Oct 4, 2022, 11:51 PM • Last activity: Aug 5, 2025, 11:03 AM
2 votes
3 answers
1592 views
Index for searching in column where the column starts with the provided value
I have a table where one column contains a substring of a URL. It's always the leftmost part of the URL, without the scheme.

| id | domain | column1 | column2 |
|----|--------|---------|---------|
| 1 | example.com | value1 | value2 |
| 2 | example.org | value3 | value4 |
| 3 | example.net/en-US | value5 | value6 |
| 4 | example.net/en-GB | value7 | value8 |

There's never an overlap in the domains, so there won't be a row with domain example.net, nor a domain that's empty. I want to search and find the matching row for a given URL; for example, example.org/sub/sub/sub/test.html should return the row with id 2. So far I've used

```
SELECT id FROM table WHERE 'example.org/sub/sub/sub/test.html' LIKE (domain || '%')
```

which gives what I want, but it always does a seq scan of the table, even though I have an index on the domain column. I was hoping that Postgres could make some conclusions and use the index anyway. How can I improve the performance of this?
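The prefix semantics the query relies on can be sketched in Python (table data is taken from the question; `match_url` is a hypothetical helper name):

```python
# Rows from the question: (id, domain)
rows = [
    (1, "example.com"),
    (2, "example.org"),
    (3, "example.net/en-US"),
    (4, "example.net/en-GB"),
]

def match_url(url, rows):
    # Mirrors: WHERE 'url' LIKE (domain || '%') -- domain is a prefix of the URL
    return [row_id for row_id, domain in rows if url.startswith(domain)]

print(match_url("example.org/sub/sub/sub/test.html", rows))  # [2]
```

Note the column value, not the search value, sits on the left of the prefix test, which is why a plain index on `domain` cannot be probed directly.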
Zyberzero (123 rep)
Mar 17, 2023, 04:58 PM • Last activity: Jun 9, 2025, 11:39 PM
149 votes
8 answers
246879 views
Pattern matching with LIKE, SIMILAR TO or regular expressions
I had to write a simple query where I go looking for people's names that start with a B or a D:

```
SELECT s.name
FROM spelers s
WHERE s.name LIKE 'B%'
   OR s.name LIKE 'D%'
ORDER BY 1
```

I was wondering if there is a way to rewrite this to become more performant, so I can avoid the `OR` and/or the `LIKE`.
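Whether it is faster depends on the planner, but the two `LIKE` prefixes are logically a single anchored character class (e.g. `s.name ~ '^[BD]'` in Postgres). A sketch of that equivalence, with invented sample names:

```python
import re

names = ["Bert", "Dora", "Alice", "Daan", "Carl"]

# Two anchored LIKE prefixes collapse into one character class
like_filter  = [n for n in names if n.startswith("B") or n.startswith("D")]
regex_filter = [n for n in names if re.match(r"[BD]", n)]  # re.match anchors at start

print(like_filter)  # ['Bert', 'Dora', 'Daan']
```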
Lucas Kauffman (1835 rep)
Jan 15, 2012, 11:24 AM • Last activity: May 20, 2025, 01:30 AM
0 votes
2 answers
60 views
Find rows where column starts with A - P or AA - AE
I have a name column and I need to check whether it starts with letters A to P, or the first 2 letters of the name are AA to AE. I tried this:

```
select name from table
where substring(name,1,1) ~* '^[A-P]'
   or substring(name,1,2) ~* '^[AA-AE]';
```

This didn't work.
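The likely issue is that `[AA-AE]` is a character class, not a range of two-letter strings; matching the strings AA through AE needs `A[A-E]`. A sketch of the corrected pattern in Python regex terms (sample names are invented):

```python
import re

# '[AA-AE]' means the characters A and A-E, i.e. just [A-E].
# Matching two-letter *strings* AA..AE needs 'A[A-E]'.
# Also note: every name starting AA..AE already starts with A, which
# [A-P] covers, so the combined condition reduces to '^[A-P]'.
pattern = re.compile(r"^(A[A-E]|[A-P])", re.IGNORECASE)

names = ["Apple", "Quartz", "Peach", "Roger", "AZtec"]
print([n for n in names if pattern.match(n)])  # ['Apple', 'Peach', 'AZtec']
```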
skg (3 rep)
May 19, 2025, 09:07 AM • Last activity: May 20, 2025, 01:07 AM
1 vote
1 answer
5700 views
Postgres HAVING ilike any wildcard in array_agg
I need a filter where the input value is similar to any value in an aggregate, using ilike and wildcards. My attempt at a query so far is:
SELECT p.id, p.name, array_agg(vo.name)
FROM product p
LEFT JOIN product_variation pv
ON p.id = pv.product_id
LEFT JOIN variation_option vo
ON pv.variation_option_id = vo.id
GROUP BY p.id
HAVING $1 ilike any(array_agg('%' || vo.name || '%'));
But it doesn't work. It gets even more complicated: ideally, I'd be able to input an array of strings, where any of them are similar to the aggregate. Any tips? I realize the query above kind of works, just not as I need it to: the input value should be able to be a partial match for any of the array elements, but as written it is the other way around, since the wildcards end up around the element values rather than around the input.
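For the direction the question wants, the input as a partial match of an element, the wildcards belong around the input instead, e.g. something like `HAVING bool_or(vo.name ILIKE '%' || $1 || '%')` (a sketch, untested). The intended semantics in Python, with invented sample values:

```python
def input_matches_any_element(needle, elements):
    # True when the input is a case-insensitive substring of SOME element,
    # i.e. element ILIKE '%' || input || '%'
    return any(needle.lower() in e.lower() for e in elements)

options = ["Dark Blue", "Light Red"]
print(input_matches_any_element("blue", options))   # True
print(input_matches_any_element("green", options))  # False
```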
koi (23 rep)
Jun 10, 2020, 02:46 PM • Last activity: May 3, 2025, 07:07 AM
0 votes
1 answer
52 views
Pattern match with CTE
How do you pattern match using a CTE?

| id | accountids | email |
|----|------------|-------|
| a12 | 456,789,012 | jane@foo.com |
| b23 | 546,982,123 | ben@foo.com |
| c45 | 654,982,456 | mike@foo.com |

I've tried the following:

```
with cte1 as (
  select '''' || '{' || accountids || '}' || '''' as accountids
  from table_a
  where email = 'jane@foo.com'
)
select id from table_a
where accountids ^@ ANY (select accountids from cte1);
```

I would expect the result set to return a12 and c45. Is this possible?
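The expected result (a12 and c45) suggests the intent is "rows sharing at least one account id with Jane's row" rather than a whole-string prefix (`^@`) match. That intent can be sketched in Python (`rows_sharing_ids` is a hypothetical helper; data is from the question):

```python
rows = [
    ("a12", "456,789,012", "jane@foo.com"),
    ("b23", "546,982,123", "ben@foo.com"),
    ("c45", "654,982,456", "mike@foo.com"),
]

def rows_sharing_ids(rows, email):
    # Account ids of the row owned by `email`
    target = next(set(r[1].split(",")) for r in rows if r[2] == email)
    # Rows whose comma-separated list shares at least one id with it
    return [r[0] for r in rows if set(r[1].split(",")) & target]

print(rows_sharing_ids(rows, "jane@foo.com"))  # ['a12', 'c45']
```

(c45 qualifies because it shares the id 456 with Jane's row.)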
obautista (437 rep)
Mar 31, 2025, 01:07 PM • Last activity: Mar 31, 2025, 02:09 PM
0 votes
1 answer
1865 views
How to find contents of one field within another field?
How do I find where the contents of one field are within another field? I have a database table with an auto-generated sequence number in one field, and another character-based field which should contain a string describing a path to a file. The path should be based on the sequence number in the other field; sometimes it isn't, and I want to identify these rows. This is the query I've been trying (among others), but it doesn't work; *position()* is obviously designed to find a hard-coded string within a field, not another field:

```
SELECT recordId, path FROM TABLEA WHERE position(recordID IN path) = 0;
```

recordId is defined as:

```
recordId integer not null default nextval('tablea_recordid_seq'::regclass)
```

and path is:

```
character varying(80)
```

I can't help feeling that the correct solution is probably very simple, but damned if I can find it! OK, this works:

```
SELECT recordId, path FROM TABLEA WHERE position(CAST(recordID AS CHAR) IN path) = 0;
```
ConanTheGerbil (1303 rep)
Oct 4, 2019, 02:14 PM • Last activity: Jan 30, 2025, 02:19 AM
36 votes
2 answers
37766 views
Why would you index text_pattern_ops on a text column?
Today [Seven Databases in Seven Weeks](http://pragprog.com/book/rwdata/seven-databases-in-seven-weeks) introduced me to per-operator indexes.

> You can index strings for pattern matching the previous queries by creating a `text_pattern_ops` operator class index, as long as the values are indexed in lowercase.
>
>     CREATE INDEX moves_title_pattern ON movies (lower(title) text_pattern_ops);
>
> We used the `text_pattern_ops` because the title is of type text. If you need to index varchars, chars, or names, use the related ops: `varchar_pattern_ops`, `bpchar_pattern_ops`, and `name_pattern_ops`.

I find the example really confusing. Why is it useful to do this? If the column is type text, wouldn't the other types (varchar, char, name) be cast to text before being used as a search value? How does that index behave differently from one using the default operator class?

    CREATE INDEX moves_title_pattern ON movies (lower(title));
Iain Samuel McLean Elder (2408 rep)
Nov 22, 2013, 05:51 PM • Last activity: Dec 5, 2024, 09:36 AM
4 votes
2 answers
3326 views
Replace a sequential set of numbers with special character
I have a *varchar(200)* column that contains entries such as

```
ABC123124_A12312
ABC123_A1212
ABC123124_B12312
AC123124_AD12312
A12312_123
```

etc. I want to replace each sequence of numbers with a single `*` so that I can group the distinct non-numeric patterns in the table. The result for this set would be

```
ABC*_A*
ABC*_B*
AC*_AD*
A*_*
```

I have written the primitive query below; it works correctly, but takes a long time to run on a huge table. **I need help with rewriting or editing it to improve its performance. SQL Server 2014**
-- 1. replace all numeric characters with '*'
-- 2. replace multiple consecutive '*' with just a single '*'
SELECT REPLACE
        (REPLACE
             (REPLACE
                  (REPLACE
                       (REPLACE
                            (REPLACE
                                 (REPLACE
                                      (REPLACE
                                           (REPLACE
                                                (REPLACE
                                                     (REPLACE
                                                          (REPLACE
                                                               (REPLACE(SampleID, '0', '*'),
                                                                '1', '*'),
                                                           '2', '*'),
                                                      '3', '*'),
                                                 '4', '*'),
                                            '5', '*'),
                                       '6', '*'),
                                  '7', '*'),
                             '8', '*'),
                        '9', '*')
                  , '*', '~*') -- replace each occurrence of '*' with '~*' (token plus asterisk)
             , '*~', '') -- replace in the result of the previous step each occurrence of '*~' (asterisk plus token) with '' (an empty string)
        , '~*', '*') -- replace in the result of the previous step each occurrence of '~*' (token plus asterisk) with '*' (asterisk)
        AS Pattern
FROM TABLE_X
### Data The column includes letters and numbers [A-Za-z0-9] and may also include the special characters / and _. I want to replace any sequence of numbers with *, but I do not know if the entry has special characters, and if so how many special characters. I also do not know how many sequences of numbers are in the entry. All I know is that an entry must have a minimum of 1 number sequence.
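Stripped of the nested REPLACEs, the transformation is a single regex substitution ("collapse each digit run to `*`"). A sketch of the intended result using the question's sample values (SQL Server 2014 has no native regex support, so there this typically means a CLR function or the REPLACE chain above):

```python
import re

samples = ["ABC123124_A12312", "ABC123_A1212", "ABC123124_B12312",
           "AC123124_AD12312", "A12312_123"]

# Replace each run of consecutive digits with a single '*'
patterns = sorted({re.sub(r"[0-9]+", "*", s) for s in samples})
print(patterns)  # ['A*_*', 'ABC*_A*', 'ABC*_B*', 'AC*_AD*']
```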
Levi (43 rep)
Mar 26, 2020, 03:13 AM • Last activity: Jul 26, 2024, 06:13 PM
3 votes
3 answers
3638 views
How to query efficiently from Postgres to select special words?
Let's say that I have a table called words with very many records. Columns are id and name. In the words table I have for example: 'systematic', 'سلام','gear','synthesis','mysterious', etc. **NB: we have utf8 words, too.** How to query efficiently to see which words include letters 's', 'm' and 'e' (all of them)? The output would be: systematic,mysterious I have no idea how to do such a thing. It should be efficient because our server would suffer otherwise.
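The filter itself ("contains all of these letters") can be sketched as follows; in SQL it corresponds to ANDing one condition per letter, e.g. `name LIKE '%s%' AND name LIKE '%m%' AND name LIKE '%e%'` (a sketch of the semantics only; efficiency is a separate matter). Sample words are from the question:

```python
words = ["systematic", "سلام", "gear", "synthesis", "mysterious"]

# Keep words containing ALL of the required letters; works on any UTF-8 text
required = set("sme")
matches = [w for w in words if required <= set(w)]
print(matches)  # ['systematic', 'mysterious']
```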
Alireza (3676 rep)
Dec 17, 2013, 08:55 AM • Last activity: Feb 19, 2024, 09:45 PM
12 votes
2 answers
22484 views
PostgreSQL full text search on many columns
I need advice on searching for a record based on a specified string. Search strings can contain values from these columns. Values in this string don't have to be strictly identical or given in the correct order, and the values of some columns may be missing from the string. Example of a search string:

```
22 Karntner Wien
```

And I get, for example, a result with the top 5 similar records. I think I should use full text search, but I have no experience with it. Can you tell me how to proceed?
Denis Stephanov (287 rep)
Feb 11, 2017, 01:06 PM • Last activity: Dec 31, 2023, 06:17 AM
8 votes
1 answer
17776 views
Equivalent of UTF8_UNICODE_CI collation in PostgreSQL
I would like a case-insensitive column in a table inside a PostgreSQL database (I am using version 9.6). I know of the `UTF8_UNICODE_CI` collation on MySQL, so I tried:

```
CREATE TABLE thing (
    id BIGINT PRIMARY KEY,
    name VARCHAR(120) NOT NULL COLLATE "UTF8_UNICODE_CI"
);
```

but I get:

> ERROR: collation "UTF8_UNICODE_CI" for encoding "UTF8" does not exist

Reading around, I found that the pg_collation table displays collations, which showed:

```
=# SELECT * from pg_collation;
 collname | collnamespace | collowner | collencoding | collcollate | collctype
----------+---------------+-----------+--------------+-------------+-----------
 default  |            11 |        10 |           -1 |             |
 C        |            11 |        10 |           -1 | C           | C
 POSIX    |            11 |        10 |           -1 | POSIX       | POSIX
(3 rows)
```

So it's clearly not there... Is there any way to add it, or obtain the behavior I want? (I want a search `WHERE name LIKE '%lala%'` to match 'lalá', 'LÂLÄ', etc.)
Daniel Gray (699 rep)
Dec 1, 2016, 03:54 PM • Last activity: Aug 1, 2023, 06:38 PM
3 votes
3 answers
2423 views
Postgres trigram match acting strange for specific characters
I'm using pg_trgm to run a query, but I'm getting a lot of similarity-1 matches when searching with a symbol difference. I have the following query:

```
SELECT my_column, similarity('$ Hello', my_column) AS sml
FROM my_table
WHERE my_column % '$ Hello'
ORDER BY sml DESC, my_column;
```

In my_table, I have the following values:

```
- Hello
? Hello
| Hello
$ Hello
! Hello
!? Hello
```

They all return with a similarity match of 1. Do I need to escape the '$' or something similar?
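This behavior is expected rather than an escaping problem: pg_trgm extracts trigrams only from alphanumeric "words", so the leading symbols contribute nothing and every value above reduces to the trigrams of "hello". A simplified model of the extraction (an approximation of pg_trgm's documented behaviour, not its exact code):

```python
import re

def trigrams(s):
    # Rough model of pg_trgm: only alphanumeric words produce trigrams; each
    # word is lowercased and padded with two leading spaces and one trailing
    # space before 3-grams are taken. Symbols like '$' contribute nothing.
    grams = set()
    for word in re.findall(r"[a-z0-9]+", s.lower()):
        padded = "  " + word + " "
        grams |= {padded[i:i + 3] for i in range(len(padded) - 2)}
    return grams

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

print(similarity("$ Hello", "! Hello"))  # 1.0
```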
Lloyd Powell (133 rep)
Jul 8, 2019, 02:54 PM • Last activity: Aug 1, 2023, 06:33 PM
0 votes
1 answer
2743 views
Find partial match with textsearch query
I have the following query, which I put together after a lot of searching on this site:

```
SELECT * FROM renns
WHERE (
    to_tsvector('english', coalesce(numar_admin, '')) ||
    to_tsvector('english', coalesce(nume_strada, '')) ||
    to_tsvector('english', coalesce(nume_prenume, '')) ||
    to_tsvector('english', coalesce(bloc, '')) ||
    to_tsvector('english', coalesce(apartament, ''))
) @@ plainto_tsquery('english', '$q')
```

It works if I write into the variable $q the complete text that appears in one of the columns in the query, for example: cherry. I would like to write only "cher" and have it return all rows in which a field contains this string, "cher".
CoderCoder42 (101 rep)
Mar 31, 2021, 07:56 PM • Last activity: Aug 1, 2023, 06:30 PM
12 votes
1 answer
7605 views
How to search hyphenated words in PostgreSQL full text search?
I have to search for hyphenated words like 'good-morning', 'good-evening', etc. My query is:

```
select id,
       ts_headline(content, to_tsquery('english','good-morning'),
                   'HighlightAll=true MaxFragments=100 FragmentDelimiter=$')
from table
where ts_content @@ to_tsquery('english','good-morning');
```

When executing this query I also get results for *'good'* and *'morning'* separately, but I want exactly matching words and fragments. (For ts_content I used the same default config, english, to create the tsvector.) How can I search for such hyphenated words in PostgreSQL full text search?
user3098231 (131 rep)
Apr 21, 2018, 07:45 AM • Last activity: Aug 1, 2023, 06:11 PM
30 votes
2 answers
34493 views
How to create an index to speed up an aggregate LIKE query on an expression?
I may be asking the wrong question in the title. Here are the facts: my customer service folk have been complaining about slow response times when doing customer lookups on the administration interface of our Django-based site. We're using Postgres 8.4.6. I started logging slow queries, and discovered this culprit:

```
SELECT COUNT(*) FROM "auth_user"
WHERE UPPER("auth_user"."email"::text) LIKE UPPER(E'%deyk%')
```

This query is taking upwards of 32 seconds to run. Here's the query plan provided by EXPLAIN:

```
                               QUERY PLAN
Aggregate  (cost=205171.71..205171.72 rows=1 width=0)
  ->  Seq Scan on auth_user  (cost=0.00..205166.46 rows=2096 width=0)
        Filter: (upper((email)::text) ~~ '%DEYK%'::text)
```

Because this is a query generated by the Django ORM from a Django QuerySet generated by the Django Admin application, I don't have any control over the query itself. An index seems like the logical solution. I tried creating an index to speed this up, but it hasn't made a difference:

```
CREATE INDEX auth_user_email_upper ON auth_user USING btree (upper(email::text))
```

What am I doing wrong? How can I speed up this query?
David Eyk (537 rep)
Aug 9, 2011, 11:04 PM • Last activity: Aug 1, 2023, 06:09 PM
20 votes
1 answer
30590 views
Get partial match from GIN indexed TSVECTOR column
I would like to get results from this query:

~~~pgsql
SELECT *
FROM (
    SELECT id, subject
    FROM mailboxes
    WHERE tsv @@ plainto_tsquery('avail')
) AS t1
ORDER BY id DESC;
~~~

This works and returns rows with tsv containing *'Available'*. But if I use *'avai'* (truncating '...lable') it does not find anything. Do all queries have to be dictionary words? Can't we just query for a few letters? I have a table that contains e-mail bodies (content) and I would like to make queries fast as it grows rapidly. Currently, I am using:

~~~pgsql
... WHERE content ~* 'avail'
~~~
xangr (457 rep)
Dec 12, 2016, 08:18 PM • Last activity: Aug 1, 2023, 06:07 PM
21 votes
2 answers
38389 views
PostgreSQL LIKE query on ARRAY field
Is there any way to have a Postgres `LIKE` query on an ARRAY field? Currently I want something like this:

```
SELECT * FROM list WHERE lower(array_field) LIKE '1234%'
```

(The lower() is not strictly needed here.) However, it should find ONE matching field inside the ARRAY. Is that even possible? Currently I use a materialized view to generate the "list" table with a JOIN and an ARRAY_AGG(), since I JOIN a table where several rows can match on the right side, which would duplicate rows from the left table, which is not what I want.

Edit: this is how I create the view (really sluggish and ugly):

```
CREATE MATERIALIZED VIEW article_list_new AS
SELECT a.id, a.oa_nr, a.date_deleted, a.lock, a.sds_nr, a.kd_art_nr,
       a.kd_art_index, a.kd_art_extend, a.surface, a.execution,
       a.surface_area, a.cu_thickness, a.endintensity, a.drilling,
       array_agg(o.id::text) AS offer_list
FROM article_list a
LEFT JOIN task_offer o ON o.article = a.oa_nr
GROUP BY .....;
```

I also need to return the IDs of the task_offer table.
Christian Schmitt (453 rep)
Oct 12, 2015, 01:16 PM • Last activity: Aug 1, 2023, 06:06 PM
18 votes
1 answer
16685 views
Best index for similarity function
So I have this table with 6.2 million records and I have to perform search queries with similarity on one of the columns. The queries can be:

```
SELECT "lca_test".*
FROM "lca_test"
WHERE (similarity(job_title, 'sales executive') > 0.6)
  AND worksite_city = 'los angeles'
ORDER BY salary ASC
LIMIT 50 OFFSET 0
```

More conditions can be added in the WHERE clause (year = X, worksite_state = N, status = 'certified', visa_class = Z). Running some of those queries can take a really long time, over 30 seconds, sometimes more than a minute. EXPLAIN ANALYZE of the previously mentioned query gives me this:

```
Limit  (cost=0.43..42523.04 rows=50 width=254) (actual time=9070.268..33487.734 rows=2 loops=1)
  ->  Index Scan using index_lca_test_on_salary on lca_test  (cost=0.43..23922368.16 rows=28129 width=254) (actual time=9070.265..33487.727 rows=2 loops=1)
        Filter: (((worksite_city)::text = 'los angeles'::text) AND (similarity((job_title)::text, 'sales executive'::text) > 0.6::double precision))
        Rows Removed by Filter: 6330130
Total runtime: 33487.802 ms
```

I can't figure out how I should index my column to make it blazing fast.
EDIT: Here is the Postgres version:

> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit

Here is the table definition:

```
                                          Table "public.lca_test"
         Column         |       Type        |                       Modifiers                       | Storage  | Stats target | Description
------------------------+-------------------+-------------------------------------------------------+----------+--------------+-------------
 id                     | integer           | not null default nextval('lca_test_id_seq'::regclass) | plain    |              |
 raw_id                 | integer           |                                                       | plain    |              |
 year                   | integer           |                                                       | plain    |              |
 company_id             | integer           |                                                       | plain    |              |
 visa_class             | character varying |                                                       | extended |              |
 employement_start_date | character varying |                                                       | extended |              |
 employement_end_date   | character varying |                                                       | extended |              |
 employer_name          | character varying |                                                       | extended |              |
 employer_address1      | character varying |                                                       | extended |              |
 employer_address2      | character varying |                                                       | extended |              |
 employer_city          | character varying |                                                       | extended |              |
 employer_state         | character varying |                                                       | extended |              |
 employer_postal_code   | character varying |                                                       | extended |              |
 employer_phone         | character varying |                                                       | extended |              |
 employer_phone_ext     | character varying |                                                       | extended |              |
 job_title              | character varying |                                                       | extended |              |
 soc_code               | character varying |                                                       | extended |              |
 naic_code              | character varying |                                                       | extended |              |
 prevailing_wage        | character varying |                                                       | extended |              |
 pw_unit_of_pay         | character varying |                                                       | extended |              |
 wage_unit_of_pay       | character varying |                                                       | extended |              |
 worksite_city          | character varying |                                                       | extended |              |
 worksite_state         | character varying |                                                       | extended |              |
 worksite_postal_code   | character varying |                                                       | extended |              |
 total_workers          | integer           |                                                       | plain    |              |
 case_status            | character varying |                                                       | extended |              |
 case_no                | character varying |                                                       | extended |              |
 salary                 | real              |                                                       | plain    |              |
 salary_max             | real              |                                                       | plain    |              |
 prevailing_wage_second | real              |                                                       | plain    |              |
 lawyer_id              | integer           |                                                       | plain    |              |
 citizenship            | character varying |                                                       | extended |              |
 class_of_admission     | character varying |                                                       | extended |              |
Indexes:
    "lca_test_pkey" PRIMARY KEY, btree (id)
    "index_lca_test_on_id_and_salary" btree (id, salary)
    "index_lca_test_on_id_and_salary_and_year" btree (id, salary, year)
    "index_lca_test_on_id_and_salary_and_year_and_wage_unit_of_pay" btree (id, salary, year, wage_unit_of_pay)
    "index_lca_test_on_id_and_visa_class" btree (id, visa_class)
    "index_lca_test_on_id_and_worksite_state" btree (id, worksite_state)
    "index_lca_test_on_lawyer_id" btree (lawyer_id)
    "index_lca_test_on_lawyer_id_and_company_id" btree (lawyer_id, company_id)
    "index_lca_test_on_raw_id_and_visa_and_pw_second" btree (raw_id, visa_class, prevailing_wage_second)
    "index_lca_test_on_raw_id_and_visa_class" btree (raw_id, visa_class)
    "index_lca_test_on_salary" btree (salary)
    "index_lca_test_on_visa_class" btree (visa_class)
    "index_lca_test_on_wage_unit_of_pay" btree (wage_unit_of_pay)
    "index_lca_test_on_worksite_state" btree (worksite_state)
    "index_lca_test_on_year_and_company_id" btree (year, company_id)
    "index_lca_test_on_year_and_company_id_and_case_status" btree (year, company_id, case_status)
    "index_lcas_job_title_trigram" gin (job_title gin_trgm_ops)
    "lca_test_company_id" btree (company_id)
    "lca_test_employer_name" btree (employer_name)
    "lca_test_id" btree (id)
    "lca_test_on_year_and_companyid_and_wage_unit_and_salary" btree (year, company_id, wage_unit_of_pay, salary)
Foreign-key constraints:
    "fk_rails_8a90090fe0" FOREIGN KEY (lawyer_id) REFERENCES lawyers(id)
Has OIDs: no
```
bl0b (363 rep)
Jun 11, 2015, 05:50 PM • Last activity: Aug 1, 2023, 06:05 PM
4 votes
1 answer
6547 views
Index on column with data type citext not used
In PostgreSQL 9.4, with the following schema:

```
CREATE TABLE people (
    id INTEGER PRIMARY KEY,
    name TEXT,
    junk CHAR(1000)
);
INSERT INTO people(id, name)
    SELECT generate_series(1,100000), md5(random()::text);
CREATE INDEX ON people (name text_pattern_ops);
```

if I search by name, the index is used:

```
test=# explain analyze select id, name from people where name like 'a%';
                                                             QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on people  (cost=248.59..1160.92 rows=6061 width=37) (actual time=2.412..8.340 rows=6271 loops=1)
   Filter: (name ~~ 'a%'::text)
   Heap Blocks: exact=834
   ->  Bitmap Index Scan on people_name_idx  (cost=0.00..247.08 rows=6266 width=0) (actual time=2.123..2.123 rows=6271 loops=1)
         Index Cond: ((name ~>=~ 'a'::text) AND (name ~
```

Otherwise, it behaves almost exactly like text. How can I tell PostgreSQL to use the index?
José Luis (215 rep)
Jun 26, 2015, 02:37 PM • Last activity: Aug 1, 2023, 06:04 PM