
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
668 views
Best way to cast a VIEW row type to the underlying TABLE row type
I have a table with an index and a row-level security policy. Due to [this problem](https://dba.stackexchange.com/q/232789/188406) (more details: [1](https://stackoverflow.com/q/48230535/1048572), [2](https://www.postgresql.org/message-id/flat/2811772.0XtDgEdalL@peanuts2), [3](https://stackoverflow.com/q/63008838/1048572), [4](https://www.postgresql.org/message-id/flat/CAGrP7a2t+JbeuxpQY+RSvNe4fr3+==UmyimwV0GCD+wcrSSb=w@mail.gmail.com)), the index is not used when the policy applies, which makes my queries unbearably slow. The workaround I am contemplating would be to create a VIEW with `security_invoker = false` and `security_barrier = false`. (If I do enable the security_barrier, the query again doesn't use the index.) The problem I am facing now is that I cannot just change the queries to use `FROM my_view AS example` instead of `FROM my_table AS example`, since some of them use functions that are defined to take the `my_table` composite type. A simplified example:

```sql
CREATE TABLE example (
  id int,
  name text,
  is_visible boolean
);
CREATE VIEW test AS SELECT * FROM example WHERE is_visible;
CREATE FUNCTION prop(e example) RETURNS text LANGUAGE SQL AS $$
  SELECT e.id::text || ': ' || e.name;
$$;

SELECT e.prop FROM example e; -- works
SELECT e.prop FROM test e;    -- ERROR: column e.prop does not exist
```

([online demo](https://dbfiddle.uk/cb0bn3NV))

Now the question is **how to cast the rows to the expected type?** There is [this question](https://dba.stackexchange.com/q/247240/188406) and I also found a way to do this using the ROW constructor, but I'm not certain how good this is:

```sql
SELECT e.prop FROM (SELECT (ROW(test.*)::example).* FROM test) e;
```

It's nice that I can just use it as a drop-in replacement for the table expression (without changing anything else in the query), and it does work (Postgres accepts it and does use my index when I have the respective WHERE clause), but it looks horrible. Are there problems with my approach that I am missing? Is there a better solution?
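A possible alternative, as a hedged sketch only: round-trip the view row through jsonb, which rebuilds the table's composite type by column name rather than position. It assumes the view exposes exactly the table's columns; whether `e.prop` resolves the same way, and whether the planner still pushes predicates down to the index, should be verified on your version.

```sql
-- Sketch: to_jsonb(t) serializes the view row; jsonb_populate_record()
-- rebuilds it as the table's composite type, matching columns by name.
SELECT e.prop
FROM (
  SELECT (jsonb_populate_record(NULL::example, to_jsonb(t))).*
  FROM test t
) e;
```

Matching by name can be slightly safer if the view's column order ever drifts from the table's, at the cost of the jsonb conversion.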
Bergi (514 rep)
Oct 18, 2022, 11:14 AM • Last activity: Aug 6, 2025, 03:06 PM
1 vote
1 answer
68 views
Slow queries on JSON_TABLE based view on mariaDB
We are running an instance of Apache Syncope with around 130k users (ca. 33k with `active` or `pending` state). When searching for users by attributes, we see long query times, often over 50 seconds. One example query might be:
```sql
select * from user_search_attr 
where schema_id = 'familyName' 
and stringvalue = 'Bergmann'
```
`user_search_attr` is a view defined as follows:
```sql
CREATE OR REPLACE
ALGORITHM = UNDEFINED VIEW user_search_attr AS
select
    u.id AS any_id,
    attrs.schema_id AS schema_id,
    attrs.booleanvalue AS booleanvalue,
    attrs.datevalue AS datevalue,
    attrs.doublevalue AS doublevalue,
    attrs.longvalue AS longvalue,
    attrs.stringvalue AS stringvalue
from
    (SyncopeUser u
left join JSON_TABLE(coalesce(u.plainAttrs, '[]'), '$[*]' 
   COLUMNS (schema_id varchar(255) PATH '$.schema', 
   NESTED PATH '$.values[*]' 
   COLUMNS (booleanvalue int(11) PATH '$.booleanValue', 
            datevalue varchar(32) PATH '$.dateValue', 
            doublevalue double PATH '$.doubleValue', 
            longvalue bigint(20) PATH '$.longValue', 
            stringvalue varchar(255) PATH '$.stringValue'))) attrs on
    (1 = 1))
where
    attrs.schema_id is not null
    and (attrs.booleanvalue is not null
        or attrs.datevalue is not null
        or attrs.doublevalue is not null
        or attrs.longvalue is not null
        or attrs.stringvalue is not null);
```
As you can see, the data comes from the table `SyncopeUser`, which is defined as follows:
```sql
CREATE TABLE SyncopeUser (
  id varchar(36) NOT NULL,
  creationContext varchar(255) DEFAULT NULL,
  creationDate datetime(3) DEFAULT NULL,
  creator varchar(255) DEFAULT NULL,
  lastChangeContext varchar(255) DEFAULT NULL,
  lastChangeDate datetime(3) DEFAULT NULL,
  lastModifier varchar(255) DEFAULT NULL,
  status varchar(255) DEFAULT NULL,
  changePwdDate datetime(3) DEFAULT NULL,
  cipherAlgorithm varchar(20) DEFAULT NULL,
  failedLogins int(11) DEFAULT NULL,
  lastLoginDate datetime(3) DEFAULT NULL,
  mustChangePassword int(11) DEFAULT NULL,
  password varchar(255) DEFAULT NULL,
  passwordHistory text DEFAULT NULL,
  securityAnswer varchar(255) DEFAULT NULL,
  suspended int(11) DEFAULT NULL,
  token text DEFAULT NULL,
  tokenExpireTime datetime(3) DEFAULT NULL,
  username varchar(255) DEFAULT NULL,
  plainAttrs longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin
              DEFAULT NULL CHECK (json_valid(plainAttrs)),
  REALM_ID varchar(36) DEFAULT NULL,
  SECURITYQUESTION_ID varchar(36) DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY U_SYNCPSR_USERNAME (username),
  UNIQUE KEY SyncopeUser_username (username),
  KEY SECURITYQUESTION_ID (SECURITYQUESTION_ID),
  KEY SyncopeUser_realm_id (REALM_ID),
  CONSTRAINT SyncopeUser_ibfk_1 FOREIGN KEY (REALM_ID) 
           REFERENCES Realm (id),
  CONSTRAINT SyncopeUser_ibfk_2 FOREIGN KEY (SECURITYQUESTION_ID)
           REFERENCES SecurityQuestion (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_general_ci;
```
The relevant column `plainAttrs` contains the user's attributes as a JSON string. One example might be `[{"uniqueValue":{"stringValue":"123456789"},"schema":"lockSystemId"},{"values":[{"stringValue":"Bergmann"}],"schema":"familyName"}]`. The following indexes are defined on `SyncopeUser`:
```
Table      |Non_unique|Key_name            |Seq_in_index|Column_name        |Collation|Cardinality|Sub_part|Packed|Null|Index_type|Comment|Index_comment|Ignored|
-----------+----------+--------------------+------------+-------------------+---------+-----------+--------+------+----+----------+-------+-------------+-------+
SyncopeUser|         0|PRIMARY             |           1|id                 |A        |     149635|        |      |    |BTREE     |       |             |NO     |
SyncopeUser|         0|U_SYNCPSR_USERNAME  |           1|username           |A        |     149635|        |      |YES |BTREE     |       |             |NO     |
SyncopeUser|         0|SyncopeUser_username|           1|username           |A        |     149635|        |      |YES |BTREE     |       |             |NO     |
SyncopeUser|         1|SECURITYQUESTION_ID |           1|SECURITYQUESTION_ID|A        |          1|        |      |YES |BTREE     |       |             |NO     |
SyncopeUser|         1|SyncopeUser_realm_id|           1|REALM_ID           |A        |          1|        |      |YES |BTREE     |       |             |NO     |
```
As most normal users have over 15 attributes, the user_search_attr view contains over 2M rows. The servers are VMs with currently 8 GB of RAM and 6 CPUs, but this can be changed. We are currently running version 11.4.7 of MariaDB in a Galera cluster of three servers. Syncope is only connected to one of the servers, as this cluster is also used for other applications and we don't want to influence those other applications. Here are some of the (IMHO) relevant configuration options:
```ini
[mysqld]
aria_pagecache_buffer_size = 32M
basedir = /usr
bind-address = ::
binlog_format = ROW
character_set_server = utf8
collation_server = utf8_general_ci
innodb_adaptive_hash_index = OFF
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 4096M
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size = 16MB
innodb_log_file_size = 512M
key_buffer_size = 1M
log-error = /var/log/mariadb/mariadb.log
log_queries_not_using_indexes = OFF
long_query_time = 4.0
max_allowed_packet = 128M
max_binlog_size = 100M
max_connections = 400
max_heap_table_size = 256M
performance_schema = ON
query_cache_limit = 16M
query_cache_size = 0
query_cache_type = OFF
skip-external-locking
skip-name-resolve = 1
slow_query_log = 0
slow_query_log_file = /var/log/mariadb/slow.log
sort_buffer_size = 4M

table_definition_cache = 800
table_open_cache = 1000
thread_cache_size = 8
thread_stack = 256K
tmp_table_size = 256M
wsrep_on = ON
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
```
As most queries only search for active or pending users, one option that I have not tried is indexing or partitioning the table on the status column. Currently we are not in a position to change the design of the database or switch to another IDM solution. We hope to find optimization potential on the DB layer, or maybe improvements to some queries.
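One direction worth sketching (the table name, column subset, and refresh strategy below are assumptions, not part of Syncope): materialize the JSON_TABLE output into a real, indexable table so the ~130k JSON documents are not re-parsed on every search.

```sql
-- Hedged sketch: a plain table mirroring a subset of the view's columns,
-- refreshed periodically or by application hooks. The composite index
-- serves searches like schema_id = 'familyName' AND stringvalue = '...'.
CREATE TABLE user_search_attr_mat (
  any_id      varchar(36)  NOT NULL,
  schema_id   varchar(255) NOT NULL,
  stringvalue varchar(255),
  longvalue   bigint,
  KEY idx_schema_value (schema_id, stringvalue)
) ENGINE=InnoDB;

INSERT INTO user_search_attr_mat
SELECT any_id, schema_id, stringvalue, longvalue
FROM user_search_attr;
```

A scheduled event or application hook would need to refresh it; the trade-off is staleness versus per-query JSON parsing.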
Clemens Bergmann (111 rep)
Aug 3, 2025, 06:44 AM • Last activity: Aug 5, 2025, 08:07 AM
0 votes
1 answer
667 views
How can I create a view with self referencing table and a cyclic condition?
I have a self-referencing table.

```sql
ID        BIGINT NOT NULL PRIMARY KEY,
DELETED   TIMESTAMP NULL,
PARENT_ID FK
```

How can I create a view of this table in which each row's `DELETED` is `NULL` and every one of its ancestors' `DELETED` is also `NULL`? So far I tried the following, and it doesn't work as expected.

```sql
CREATE VIEW `OPERATOR_` AS
SELECT c.*
FROM OPERATOR AS c
LEFT OUTER JOIN OPERATOR AS p ON c.PARENT_ID = p.ID
WHERE c.DELETED IS NULL AND p.DELETED IS NULL;
```

## example 1

| ID | DELETED | PARENT_ID | expected |
|----|---------|-----------|--------------|
| 0 | 2017... | NULL | NOT SELECTED |
| 1 | NULL | 0 | NOT SELECTED |
| 2 | NULL | 1 | NOT SELECTED |

## example 2

| ID | DELETED | PARENT_ID | expected |
|----|---------|-----------|--------------|
| 0 | NULL | NULL | SELECTED |
| 1 | 2017... | 0 | NOT SELECTED |
| 2 | NULL | 1 | NOT SELECTED |

## example 3

| ID | DELETED | PARENT_ID | expected |
|----|---------|-----------|--------------|
| 0 | NULL | NULL | SELECTED |
| 1 | NULL | 0 | SELECTED |
| 2 | 2017... | 1 | NOT SELECTED |
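Since the requirement is that every ancestor is undeleted, a single self-join (which only inspects the immediate parent) cannot express it. A hedged sketch, assuming a DBMS that supports recursive CTEs inside view definitions (MySQL 8.0+, MariaDB 10.2+, PostgreSQL, SQL Server):

```sql
-- Walk down from undeleted roots; a row survives only if its entire
-- ancestor chain is undeleted.
CREATE VIEW `OPERATOR_` AS
WITH RECURSIVE live AS (
  SELECT * FROM OPERATOR
  WHERE PARENT_ID IS NULL AND DELETED IS NULL
  UNION ALL
  SELECT c.*
  FROM OPERATOR c
  JOIN live p ON c.PARENT_ID = p.ID
  WHERE c.DELETED IS NULL
)
SELECT * FROM live;
```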
Jin Kwon (165 rep)
Jul 13, 2016, 05:05 AM • Last activity: Jul 31, 2025, 06:04 AM
3 votes
2 answers
2331 views
Complex view becomes slow when adding ORDER BY in outer query with small LIMIT
I have a pretty large query in a view (let's call it `a_sql`) that is really fast unless I use `ORDER BY` in an outer `SELECT` with a small `LIMIT`:

```sql
SELECT customs.id AS custom_id,
       customs.custom_name AS custom_name,
       customs.slug AS slug,
       customs.use_case AS custom_use_case,
       SUM(CASE WHEN designers.id = orders.user_id AND orders.bulk = 't' THEN order_rows.quantity ELSE 0 END) AS sale_bulk,
       SUM(CASE WHEN designers.id = orders.user_id AND orders.bulk = 'f' THEN order_rows.quantity ELSE 0 END) AS sale_not_bulk,
       SUM(CASE WHEN designers.id = orders.user_id THEN order_rows.quantity ELSE 0 END) AS sale_total,
       SUM(CASE WHEN designers.id <> orders.user_id AND orders.bulk = 't' THEN order_rows.quantity ELSE 0 END) AS buy_bulk,
       SUM(CASE WHEN designers.id <> orders.user_id AND orders.bulk = 'f' THEN order_rows.quantity ELSE 0 END) AS buy_not_bulk,
       SUM(CASE WHEN designers.id <> orders.user_id THEN order_rows.quantity ELSE 0 END) AS buy_total,
       SUM(CASE orders.bulk WHEN 't' THEN order_rows.quantity ELSE 0 END) AS total_bulk,
       SUM(CASE orders.bulk WHEN 'f' THEN order_rows.quantity ELSE 0 END) AS total_not_bulk,
       COALESCE(SUM(order_rows.quantity), 0) AS total,
       MIN(shoes.id) AS shoe_id,
       MIN(shoe_models.id) AS shoe_model_id,
       MIN(shoe_models.name) AS shoe_model_name,
       MIN(shoe_models.title) AS shoe_model_title,
       MIN(model_categories.id) AS model_category_id,
       MIN(model_categories.name) AS model_category_name,
       MIN(business_orders.id) AS business_order_id,
       MIN(business_orders.state) AS business_order_state,
       MIN(business_orders.published_at) AS business_order_published_at,
       MIN(designers.id) AS designer_id,
       MIN(designers.email) AS designer_email,
       MIN(designer_details.first_name) AS designer_first_name,
       MIN(designer_details.last_name) AS designer_last_name
FROM business_orders /* 10^6 rows */
LEFT JOIN users designers ON designers.id = business_orders.user_id
     /* 10^6 rows - business_orders has 0 or 1 users, users has n business_orders */
LEFT JOIN user_details designer_details ON designers.id = designer_details.user_id
     /* 10^6 rows - users has 0 or 1 user_details, user_details has 1 users */
INNER JOIN customs ON business_orders.id = customs.business_order_id
     /* 10^6 rows - business_orders has 1 customs, customs has 1 business_order */
LEFT JOIN shoes ON shoes.product_id = customs.id AND shoes.product_type = 'Custom'
     /* 10^6 rows - customs has 1 shoes, shoes has 1 customs */
LEFT JOIN shoe_models ON shoe_models.id = shoes.shoe_model_id
     /* 10^2 rows - shoes has 1 shoe_models, shoe_models has n shoes */
LEFT JOIN model_categories ON shoe_models.model_category_id = model_categories.id
     /* 10^1 rows - shoe_models has 1 model_categories, model_categories has n models */
INNER JOIN sizes ON shoes.id = sizes.shoe_id
     /* 10^6 rows - sizes has 1 shoes, shoes has n sizes */
LEFT JOIN order_rows ON order_rows.article_id = sizes.id AND order_rows.article_type::text = 'Size'::text
     /* 10^5 rows - sizes has n order_rows, order_rows has 0 or 1 size */
LEFT JOIN orders ON orders.id = order_rows.order_id
     /* 10^4 rows - order_rows has 1 orders, orders has n order_rows */
WHERE orders.state IN ('funded', 'confirmed', 'paid', 'delivered', 'production', 'produced', 'ready_to_ship', 'shipped')
   OR orders.id IS NULL
GROUP BY business_orders.id
```

Returns around 52,000 rows.
A query of the following type executes in 12.728 ms:

```sql
SELECT * FROM A_SQL LIMIT 10
```

The related EXPLAIN output:

```
Limit (cost=3.51..145.53 rows=10 width=324) (actual time=1.545..12.468 rows=10 loops=1)
  Buffers: shared hit=1652
-> Subquery Scan on x (cost=3.51..737218.84 rows=51911 width=324) (actual time=1.543..12.462 rows=10 loops=1)
  Buffers: shared hit=1652
-> GroupAggregate (cost=3.51..736699.73 rows=51911 width=610) (actual time=1.542..12.455 rows=10 loops=1)
  Group Key: business_orders.id
  Buffers: shared hit=1652
-> Nested Loop Left Join (cost=3.51..716552.04 rows=270739 width=610) (actual time=0.090..4.073 rows=608 loops=1)
  Filter: (((orders.state)::text = ANY ('{funded,confirmed,paid,delivered,production,produced,ready_to_ship,shipped}'::text[])) OR (orders.id IS NULL))
  Rows Removed by Filter: 5
  Buffers: shared hit=1652
-> Nested Loop Left Join (cost=3.23..408595.00 rows=448022 width=609) (actual time=0.087..3.264 rows=613 loops=1)
  Buffers: shared hit=1547
-> Nested Loop (cost=2.94..264656.18 rows=448022 width=605) (actual time=0.082..1.227 rows=596 loops=1)
  Buffers: shared hit=269
-> Nested Loop Left Join (cost=2.52..130221.18 rows=52594 width=601) (actual time=0.073..0.578 rows=14 loops=1)
  Buffers: shared hit=197
-> Nested Loop Left Join (cost=2.23..104252.63 rows=51831 width=588) (actual time=0.066..0.478 rows=14 loops=1)
  Join Filter: (shoe_models.model_category_id = model_categories.id)
  Rows Removed by Join Filter: 79
  Buffers: shared hit=155
-> Nested Loop Left Join (cost=2.23..101141.72 rows=51831 width=72) (actual time=0.055..0.413 rows=14 loops=1)
  Buffers: shared hit=154
-> Nested Loop (cost=2.09..92396.06 rows=51831 width=52) (actual time=0.051..0.348 rows=14 loops=1)
  Buffers: shared hit=126
-> Nested Loop Left Join (cost=1.80..65264.56 rows=51831 width=48) (actual time=0.033..0.209 rows=14 loops=1)
  Buffers: shared hit=84
-> Merge Join (cost=1.38..21836.97 rows=51831 width=26) (actual time=0.022..0.109 rows=14 loops=1)
  Merge Cond: (business_orders.id = customs.business_order_id)
  Buffers: shared hit=28
-> Index Scan using business_orders_pkey on business_orders (cost=0.29..3688.80 rows=51911 width=22) (actual time=0.012..0.036 rows=14 loops=1)
  Buffers: shared hit=14
-> Index Scan using index_customs_on_business_order_id on customs (cost=0.41..17371.39 rows=51831 width=8) (actual time=0.005..0.029 rows=14 loops=1)
  Buffers: shared hit=14
-> Index Scan using users_pkey on users designers (cost=0.41..0.83 rows=1 width=26) (actual time=0.006..0.006 rows=1 loops=14)
  Index Cond: (id = business_orders.user_id)
  Buffers: shared hit=56
-> Index Scan using index_shoes_on_product_id_and_product_type on shoes (cost=0.29..0.51 rows=1 width=12) (actual time=0.007..0.008 rows=1 loops=14)
  Index Cond: ((product_id = customs.id) AND ((product_type)::text = 'Custom'::text))
  Buffers: shared hit=42
-> Index Scan using shoe_models_pkey on shoe_models (cost=0.14..0.16 rows=1 width=24) (actual time=0.003..0.003 rows=1 loops=14)
  Index Cond: (id = shoes.shoe_model_id)
  Buffers: shared hit=28
-> Materialize (cost=0.00..1.06 rows=4 width=520) (actual time=0.001..0.002 rows=7 loops=14)
  Buffers: shared hit=1
-> Seq Scan on model_categories (cost=0.00..1.04 rows=4 width=520) (actual time=0.004..0.005 rows=7 loops=1)
  Buffers: shared hit=1
-> Index Scan using index_user_details_on_user_id on user_details designer_details (cost=0.29..0.49 rows=1 width=17) (actual time=0.006..0.006 rows=1 loops=14)
  Index Cond: (designers.id = user_id)
  Buffers: shared hit=42
-> Index Scan using index_sizes_on_shoe_id on sizes (cost=0.42..2.00 rows=56 width=8) (actual time=0.006..0.030 rows=43 loops=14)
  Index Cond: (shoe_id = shoes.id)
  Buffers: shared hit=72
-> Index Scan using index_order_rows_on_article_id on order_rows (cost=0.29..0.31 rows=1 width=12) (actual time=0.003..0.003 rows=0 loops=596)
  Index Cond: (article_id = sizes.id)
  Filter: ((article_type)::text = 'Size'::text)
  Rows Removed by Filter: 2
  Buffers: shared hit=1278
-> Index Scan using orders_pkey on orders (cost=0.29..0.67 rows=1 width=18) (actual time=0.000..0.000 rows=0 loops=613)
  Index Cond: (id = order_rows.order_id)
  Buffers: shared hit=105
Planning time: 5.013 ms
Execution time: 12.728 ms
```

A query of the following type, instead, executes in 9090.141 ms:

```sql
SELECT * FROM a_sql ORDER BY custom_id LIMIT 10
```

The related EXPLAIN output:

```
Limit (cost=328570.62..328570.64 rows=10 width=324) (actual time=8987.928..8987.929 rows=10 loops=1)
  Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Sort (cost=328570.62..328700.40 rows=51911 width=324) (actual time=8987.926..8987.926 rows=10 loops=1)
  Sort Key: x.business_order_id
  Sort Method: top-N heapsort Memory: 27kB
  Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Subquery Scan on x (cost=306105.20..327448.84 rows=51911 width=324) (actual time=3074.397..8978.470 rows=8004 loops=1)
  Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> GroupAggregate (cost=306105.20..326929.73 rows=51911 width=610) (actual time=3074.395..8975.492 rows=8004 loops=1)
  Group Key: business_orders.id
  Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Sort (cost=306105.20..306782.04 rows=270739 width=610) (actual time=3073.679..3411.919 rows=467218 loops=1)
  Sort Key: business_orders.id
  Sort Method: external merge Disk: 56936kB
  Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Hash Right Join (cost=98065.48..133611.68 rows=270739 width=610) (actual time=1559.328..2325.275 rows=467218 loops=1)
  Hash Cond: (order_rows.article_id = sizes.id)
  Filter: (((orders.state)::text = ANY ('{funded,confirmed,paid,delivered,production,produced,ready_to_ship,shipped}'::text[])) OR (orders.id IS NULL))
  Rows Removed by Filter: 3712
  Buffers: shared hit=10412 read=12400, temp read=9442 written=9186
-> Hash Left Join (cost=813.00..1497.05 rows=7367 width=26) (actual time=9.566..22.691 rows=7367 loops=1)
  Hash Cond: (order_rows.order_id = orders.id)
  Buffers: shared hit=888
-> Seq Scan on order_rows (cost=0.00..509.08 rows=7367 width=12) (actual time=0.029..5.732 rows=7367 loops=1)
  Filter: ((article_type)::text = 'Size'::text)
  Rows Removed by Filter: 11199
  Buffers: shared hit=277
-> Hash (cost=700.78..700.78 rows=8978 width=18) (actual time=9.507..9.507 rows=8993 loops=1)
  Buckets: 1024 Batches: 1 Memory Usage: 470kB
  Buffers: shared hit=611
-> Seq Scan on orders (cost=0.00..700.78 rows=8978 width=18) (actual time=0.009..7.142 rows=8993 loops=1)
  Buffers: shared hit=611
-> Hash (cost=57087.20..57087.20 rows=448022 width=605) (actual time=1547.263..1547.263 rows=469413 loops=1)
  Buckets: 1024 Batches: 128 Memory Usage: 567kB
  Buffers: shared hit=9524 read=12400, temp read=1037 written=8932
-> Hash Left Join (cost=30955.54..57087.20 rows=448022 width=605) (actual time=496.442..1160.554 rows=469413 loops=1)
  Hash Cond: (shoes.shoe_model_id = shoe_models.id)
  Buffers: shared hit=9524 read=12400, temp read=1037 written=1035
-> Hash Join (cost=30938.67..52547.10 rows=448022 width=69) (actual time=496.300..964.720 rows=469413 loops=1)
  Hash Cond: (sizes.shoe_id = shoes.id)
  Buffers: shared hit=9509 read=12400, temp read=1037 written=1035
-> Seq Scan on sizes (cost=0.00..8642.10 rows=441710 width=8) (actual time=0.009..119.758 rows=441934 loops=1)
  Buffers: shared hit=797 read=3428
-> Hash (cost=29664.25..29664.25 rows=52594 width=65) (actual time=496.056..496.056 rows=54329 loops=1)
  Buckets: 4096 Batches: 2 Memory Usage: 2679kB
  Buffers: shared hit=8712 read=8972, temp written=294
-> Hash Left Join (cost=15725.17..29664.25 rows=52594 width=65) (actual time=162.077..460.095 rows=54329 loops=1)
  Hash Cond: (designers.id = designer_details.user_id)
  Buffers: shared hit=8712 read=8972
-> Hash Join (cost=11607.65..22688.39 rows=51831 width=52) (actual time=124.442..362.315 rows=51846 loops=1)
  Hash Cond: (customs.id = shoes.product_id)
  Buffers: shared hit=6055 read=8972
-> Hash Left Join (cost=7908.32..17952.45 rows=51831 width=48) (actual time=83.756..251.381 rows=51846 loops=1)
  Hash Cond: (business_orders.user_id = designers.id)
  Buffers: shared hit=3652 read=8972
-> Hash Join (cost=1843.00..10720.93 rows=51831 width=26) (actual time=27.942..139.640 rows=51846 loops=1)
  Hash Cond: (customs.business_order_id = business_orders.id)
  Buffers: shared hit=3079 read=4919
-> Seq Scan on customs (cost=0.00..7841.31 rows=51831 width=8) (actual time=0.009..41.084 rows=51846 loops=1)
  Buffers: shared hit=2404 read=4919
-> Hash (cost=1194.11..1194.11 rows=51911 width=22) (actual time=27.888..27.888 rows=51849 loops=1)
  Buckets: 8192 Batches: 1 Memory Usage: 2513kB
  Buffers: shared hit=675
-> Seq Scan on business_orders (cost=0.00..1194.11 rows=51911 width=22) (actual time=0.007..15.422 rows=51849 loops=1)
  Buffers: shared hit=675
-> Hash (cost=5265.70..5265.70 rows=63970 width=26) (actual time=55.788..55.788 rows=63972 loops=1)
  Buckets: 8192 Batches: 1 Memory Usage: 3679kB
  Buffers: shared hit=573 read=4053
-> Seq Scan on users designers (cost=0.00..5265.70 rows=63970 width=26) (actual time=0.003..35.227 rows=63972 loops=1)
  Buffers: shared hit=573 read=4053
-> Hash (cost=3051.16..3051.16 rows=51853 width=12) (actual time=40.654..40.654 rows=51846 loops=1)
  Buckets: 8192 Batches: 1 Memory Usage: 2154kB
  Buffers: shared hit=2403
-> Seq Scan on shoes (cost=0.00..3051.16 rows=51853 width=12) (actual time=0.009..28.311 rows=51846 loops=1)
  Filter: ((product_type)::text = 'Custom'::text)
  Buffers: shared hit=2403
-> Hash (cost=3306.12..3306.12 rows=64912 width=17) (actual time=37.610..37.610 rows=64670 loops=1)
  Buckets: 8192 Batches: 1 Memory Usage: 2748kB
  Buffers: shared hit=2657
-> Seq Scan on user_details designer_details (cost=0.00..3306.12 rows=64912 width=17) (actual time=0.007..19.790 rows=64670 loops=1)
  Buffers: shared hit=2657
-> Hash (cost=16.19..16.19 rows=54 width=540) (actual time=0.121..0.121 rows=54 loops=1)
  Buckets: 1024 Batches: 1 Memory Usage: 4kB
  Buffers: shared hit=15
-> Hash Left Join (cost=1.09..16.19 rows=54 width=540) (actual time=0.034..0.101 rows=54 loops=1)
  Hash Cond: (shoe_models.model_category_id = model_categories.id)
  Buffers: shared hit=15
-> Seq Scan on shoe_models (cost=0.00..14.54 rows=54 width=24) (actual time=0.006..0.028 rows=54 loops=1)
  Buffers: shared hit=14
-> Hash (cost=1.04..1.04 rows=4 width=520) (actual time=0.016..0.016 rows=7 loops=1)
  Buckets: 1024 Batches: 1 Memory Usage: 1kB
  Buffers: shared hit=1
-> Seq Scan on model_categories (cost=0.00..1.04 rows=4 width=520) (actual time=0.006..0.012 rows=7 loops=1)
  Buffers: shared hit=1
Planning time: 4.263 ms
Execution time: 9090.141 ms
```

Table definitions are the following.
No integrity constraints are defined on the database (it is managed through an ORM).

```sql
CREATE TABLE business_orders (
  id integer NOT NULL,
  user_id integer,
  published_at timestamp without time zone,
  CONSTRAINT business_orders_pkey PRIMARY KEY (id)
);
CREATE INDEX index_business_orders_on_user_id ON business_orders USING btree (user_id);

CREATE TABLE users (
  id serial NOT NULL,
  email character varying(255) NOT NULL DEFAULT ''::character varying,
  CONSTRAINT users_pkey PRIMARY KEY (id)
);
CREATE UNIQUE INDEX index_users_on_email ON users USING btree (email COLLATE pg_catalog."default");

CREATE TABLE user_details (
  id serial NOT NULL,
  user_id integer,
  first_name character varying(255),
  last_name character varying(255),
  CONSTRAINT user_details_pkey PRIMARY KEY (id)
);
CREATE INDEX index_user_details_on_user_id ON user_details USING btree (user_id);

CREATE TABLE customs (
  id serial NOT NULL,
  shoes_assortment_id integer,
  business_order_id integer,
  CONSTRAINT customs_pkey PRIMARY KEY (id)
);
CREATE INDEX index_customs_on_business_order_id ON customs USING btree (business_order_id);

CREATE TABLE shoes (
  id serial NOT NULL,
  product_id integer,
  product_type character varying(255),
  CONSTRAINT shoes_pkey PRIMARY KEY (id)
);
CREATE INDEX index_shoes_on_product_id_and_product_type ON shoes USING btree (product_id, product_type COLLATE pg_catalog."default");
CREATE INDEX index_shoes_on_shoe_model_id ON shoes USING btree (shoe_model_id);

CREATE TABLE shoe_models (
  id serial NOT NULL,
  name character varying(255) NOT NULL,
  title character varying(255),
  model_category_id integer,
  CONSTRAINT shoe_models_pkey PRIMARY KEY (id)
);
CREATE INDEX index_shoe_models_on_model_category_id ON shoe_models USING btree (model_category_id);
CREATE UNIQUE INDEX index_shoe_models_on_name ON shoe_models USING btree (name COLLATE pg_catalog."default");

CREATE TABLE model_categories (
  id serial NOT NULL,
  name character varying(255) NOT NULL,
  sort_order integer,
  created_at timestamp without time zone NOT NULL,
  updated_at timestamp without time zone NOT NULL,
  access_level integer,
  CONSTRAINT model_categories_pkey PRIMARY KEY (id)
);
CREATE UNIQUE INDEX index_model_categories_on_name ON model_categories USING btree (name COLLATE pg_catalog."default");

CREATE TABLE sizes (
  id serial NOT NULL,
  shoe_id integer,
  CONSTRAINT sizes_pkey PRIMARY KEY (id)
);
CREATE INDEX index_sizes_on_shoe_id ON sizes USING btree (shoe_id);

CREATE TABLE order_rows (
  id serial NOT NULL,
  order_id integer,
  quantity integer,
  article_id integer,
  article_type character varying(255),
  article_name character varying(255),
  unit_taxed_cents integer,
  CONSTRAINT order_rows_pkey PRIMARY KEY (id)
);
CREATE INDEX index_order_rows_on_article_id ON order_rows USING btree (article_id);
CREATE INDEX index_order_rows_on_article_type ON order_rows USING btree (article_type COLLATE pg_catalog."default");
CREATE INDEX index_order_rows_on_order_id ON order_rows USING btree (order_id);
CREATE INDEX index_order_rows_on_quantity ON order_rows USING btree (quantity);
CREATE INDEX index_order_rows_on_unit_taxed_cents ON order_rows USING btree (unit_taxed_cents);

CREATE TABLE orders (
  id serial NOT NULL,
  user_id integer,
  state character varying(255),
  bulk boolean DEFAULT false,
  CONSTRAINT orders_pkey PRIMARY KEY (id)
);
CREATE INDEX index_orders_on_user_id ON orders USING btree (user_id);
```

Because `a_sql` is a view, I can't put the ORDER BY clause inside the view; I have to call it as a black box.
The use cases for this query are:

- with a limit of 10, ordered by `custom_id`
- with a limit of 10, ordered by `total`
- to filter all rows that have `business_order.user_id = orders.id` and `business_orders.id = x` (usually not more than 100 rows as a result)

The graphical EXPLAIN of pgAdmin, as far as I can read it, seems to be telling me that if I run the query with no ordering, then the query uses indexes (doing nested loop joins), while if I run it with the ordering, then it doesn't (it uses hash joins). Are there any ways to increase performance?
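One pattern that might help here, sketched under the assumption that the wanted rows really are the first ten business order ids (the plan's sort key is `x.business_order_id`): apply the LIMIT to the cheap, index-ordered key first, and run the expensive view only for those keys.

```sql
-- Hedged sketch: pick the 10 ids from the small ordered index scan, then
-- evaluate the view only for those ids, keeping the nested-loop plan.
SELECT v.*
FROM (SELECT id FROM business_orders ORDER BY id LIMIT 10) AS b
JOIN a_sql v ON v.business_order_id = b.id
ORDER BY v.business_order_id;
```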
Marco Marcianesi (57 rep)
Jan 12, 2016, 11:58 AM • Last activity: Jul 19, 2025, 11:17 AM
0 votes
1 answer
39 views
Editing in MySQL Views
In the project I am working on, there is a **Booking** table.

- There are 8 types of Bookings.
- I have elected to have just one table.
- There are 12 common fields.

If I store the rest of the fields as columns and use Views to extract the data for each type of booking, am I able to use the View for editing (if I don't use joins in the View)? This would allow me to rename some of the fields to something more suitable for that type of booking.
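A hedged sketch of what this could look like; `hotel_bookings`, `booking_type`, and the `extra_field*` columns are illustrative names, not from the question. MySQL treats a single-table view without joins, aggregates, or DISTINCT as updatable:

```sql
-- One view per booking type, with type-specific column names.
CREATE VIEW hotel_bookings AS
SELECT id, booking_date, extra_field1 AS room_type, extra_field2 AS nights
FROM Booking
WHERE booking_type = 'HOTEL'
WITH CHECK OPTION;   -- rejects edits that would move a row out of the view

-- Updates through the view hit the base table:
UPDATE hotel_bookings SET nights = 3 WHERE id = 42;
```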
Rohit Gupta (2126 rep)
Jul 10, 2025, 01:11 PM • Last activity: Jul 11, 2025, 12:07 PM
0 votes
1 answer
193 views
Counting failed attempt
Here is my code:

```sql
create table failed_login_attempts(
  login_id number(5) primary key,
  date_created timestamp default systimestamp,
  email varchar2(110),
  ip varchar2(24),
  mac varchar2(18),
  attempt number(1),
  isLocked number(1)
);
```

I want to track the `attempt` count, and once it reaches 5, `isLocked` should change from 0 to 1. How can I do that with a query, a view, or a procedure?
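One hedged way to sketch this with a trigger, assuming the `attempt` counter itself is maintained by the application when it inserts or updates a row:

```sql
-- Flip isLocked once the counter reaches 5.
CREATE OR REPLACE TRIGGER trg_lock_after_5
BEFORE INSERT OR UPDATE OF attempt ON failed_login_attempts
FOR EACH ROW
BEGIN
  IF :NEW.attempt >= 5 THEN
    :NEW.isLocked := 1;
  END IF;
END;
/
```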
MikeJ (13 rep)
May 7, 2021, 01:11 AM • Last activity: Jun 29, 2025, 04:03 PM
0 votes
1 answer
48 views
Indexed Views added to database have increased Parse and Compile time dramatically
Recently, one of our development teams has added some indexed views based on some very commonly used tables. The problem we are seeing now is that queries that used to execute quickly are taking a really long time to parse and compile. These are common tables, so the performance hit is felt throughout our application; the example query I have takes 25+ seconds to parse and compile, while it executes in about 250 ms. The query references 3 core tables that are referenced in 4 of these indexed views. My experience with indexed views is limited, but I do understand the concepts, including that SQL Server may consider using an indexed view even if the view isn't referenced in the query. I'd expect a drop in performance for updates and inserts; however, I don't understand why this seems to have such a large impact on the parse and compile time. When I remove the indexes on these views one by one, the parse and compile time quickly drops and performance is restored. Add them back and it is poor again. And to clarify, the execution plan is always good: the execution time is fast and acceptable; it is just the parse and compile time that is poor. We are using SQL Server 2022 at the native compatibility level. I have played with other compatibility/CE options, and 110 (SQL Server 2012) compatibility seems to alleviate the problem, but that is not an acceptable long-term solution for us.
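One way to test whether automatic indexed-view matching is what is burning the compile time, as a hedged sketch with placeholder table names: the `EXPAND VIEWS` hint tells the optimizer to expand view references and skip indexed-view matching for a single statement, so its compile time can be compared against the default.

```sql
-- Placeholder names; the point is only the OPTION clause.
SELECT t1.col1, t2.col2
FROM dbo.CoreTable1 AS t1
JOIN dbo.CoreTable2 AS t2 ON t2.id = t1.id
OPTION (EXPAND VIEWS);
```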
user1359111 (11 rep)
May 29, 2025, 07:15 PM • Last activity: May 30, 2025, 08:34 PM
1 vote
1 answer
448 views
What's the fastest way to query multiple tables at once where each table represents one year of data?
I'm developing a web app based around a database with multiple decades of data and multiple schema changes over the years. Each year of data is released as a zip file containing CSVs named table1.csv, table2.csv, table3.csv, etc. My initial goal was to just get the data into an _intermediate_ MySQL database without making any significant changes. Since there are schema changes over the years, I couldn't get away with simply creating the table1, table2, table3 tables, so my compromise was to add a year prefix. For example, 1990_table1, 1991_table1, 1992_table1, etc. Besides this, I added some basic indexes on columns that can uniquely identify a record, which I'll get into next. One of the complications with the schema is that there's no single primary key. column1, column2, column3, column4, column5 as well as three date columns date_month, date_day, and date_year comprise the unique identifier in the early years. In the middle years, the three date columns are combined into one date column with varying formats such as mddyy and later, mddyyyy. In recent years, data has been released with a primary key that combines all of the columnX columns and the date column, for example: column1_column2_column3_column4_column5_date. To normalize the date formats, I created a separate table that has a standard date column with columns that tie back to the original tables. Then I imported the data for each year, using a few functions to parse the varying date formats. Now, to get a standard date for each record, I'd run:
```sql
-- Early years.
SELECT
    1990_table1.*,
    date_table.normalized_date
FROM 1990_table1
INNER JOIN
    date_table
ON
    1990_table1.column1 = date_table.column1 AND
    1990_table1.column2 = date_table.column2 AND
    1990_table1.column3 = date_table.column3 AND
    1990_table1.column4 = date_table.column4 AND
    1990_table1.column5 = date_table.column5 AND
    1990_table1.month   = date_table.month   AND
    1990_table1.day     = date_table.day     AND
    1990_table1.year    = date_table.year;

-- Middle years.
SELECT
    2000_table1.*,
    date_table.date
FROM 2000_table1
INNER JOIN
    date_table
ON
    2000_table1.column1 = date_table.column1 AND
    2000_table1.column2 = date_table.column2 AND
    2000_table1.column3 = date_table.column3 AND
    2000_table1.column4 = date_table.column4 AND
    2000_table1.column5 = date_table.column5 AND
    2000_table1.date    = date_table.date;

-- Recent years.
SELECT
    2020_table1.*,
    date_table.date
FROM 2020_table1
INNER JOIN
    date_table
ON
    2020_table1.combined_identifier = date_table.combined_identifier;
```
My next step was creating views for each year using the JOINs above. In addition to getting a standard date, there are two more joins pulling in data from 1-to-1 tables. These tables relate back to the original table, again, based on the columnX and date columns. After that, I created another view for testing purposes that UNION ALLs all the years with the same schema. So for example, 1990-1999. My ultimate goal is to create a "master" view that allows me to query all the years at once, but for testing purposes it's 1990-1999 currently:
```sql
SELECT * FROM 1990_to_1999_table1_view WHERE YEAR(date) = '1990' AND ZIP = '90210' LIMIT 100;
```
The problem is that this query is unacceptably slow (single-digit seconds) for use in an autosuggest search field in my web app; ideally, I'd like to get it under 250 ms. Here's what EXPLAIN shows. The Nested loop inner join block is repeated and looks more or less the same for each year in the view that UNIONs all the years together; I cut off the other Nested loop inner joins for brevity.
```
-> Limit: 100 row(s)  (cost=232.39..232.39 rows=0.1)
    -> Table scan on my_view  (cost=232.39..232.39 rows=0.1)
        -> Union all materialize  (cost=229.89..229.89 rows=0.1)
            -> Nested loop inner join  (cost=1.12 rows=0.0007)
                -> Nested loop inner join  (cost=0.86 rows=0.01)
                    -> Nested loop inner join  (cost=0.79 rows=0.1)
                        -> Index lookup on 1990_table1 using ZIP_INDEX (ZIP='90210')  (cost=0.35 rows=1)
                        -> Filter: (1990_table2.column1 = 1990_table1.column1)  (cost=0.32 rows=0.1)
                            -> Index lookup on 1990_table2 using column2_INDEX (column2=1990_table1.column2)  (cost=0.32 rows=1)
                    -> Filter: (1990_table1.column1 = 1990_table1.column1)  (cost=0.61 rows=0.1)
                        -> Index lookup on 1990_table1 using COMBINED_IDENTIFIER_KEY_INDEX (column2=1990_table1.column2, column3=1990_table1.column3, column4=1990_table1.column4)  (cost=0.61 rows=1)
                -> Filter: ((date_table.year = 1990_table1.year) and (date_table.day = 1990_table1.day) and (date_table.month = 1990_table1.month) and (date_table.column2 = 1990_table1.column2) and (date_table.column1 = 1990_table1.column1) and (date_table.date = 1990_table1.date) and (1990_table1.column4 = date_table.column4) and (year(date_table.DATE) = 1990))  (cost=17.77 rows=0.05)
                    -> Index lookup on date_table using column3_index (column3=1990_table1.column3), with index condition: (1990_table1.column3 = date_table.column3)  (cost=17.77 rows=19)
```
Is it a bad idea to create a main view (table1_view) of sub-views (1990_to_1999_table1_view, 2000_to_2009_table1_view, etc.) of sub-sub-views (1990_table1_view, 1991_table1_view, etc.)? Or can I improve the queries with different joins or indexes? Or should I bite the bullet and create brand-new tables (all in all, about 100 GB) with a unified schema into which all the year-prefixed tables can be imported?
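If the last option is chosen, here is a hedged sketch of what the unified table could look like; the column names are placeholders and the partitioning scheme is only one possibility. A single table partitioned by year lets one index serve every year and removes the view-of-views stack entirely:

```sql
-- Placeholder columns; the unified schema has to absorb the historical
-- schema differences during import.
CREATE TABLE table1_unified (
  record_year     SMALLINT NOT NULL,
  column1         VARCHAR(32) NOT NULL,
  zip             CHAR(5),
  normalized_date DATE,
  KEY idx_zip_year (zip, record_year)
)
PARTITION BY RANGE (record_year) (
  PARTITION p1990s VALUES LESS THAN (2000),
  PARTITION p2000s VALUES LESS THAN (2010),
  PARTITION p2010s VALUES LESS THAN (2020),
  PARTITION pmax   VALUES LESS THAN MAXVALUE
);

-- The autosuggest query then hits one index instead of N view branches:
SELECT * FROM table1_unified
WHERE record_year = 1990 AND zip = '90210'
LIMIT 100;
```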
Tyler (111 rep)
Apr 25, 2023, 05:09 PM • Last activity: May 27, 2025, 03:10 AM
1 vote
4 answers
316 views
Optimise View in MariaDB
I am currently involved in maintaining a web application that requires me to display a list of documents sourced from two tables. The primary objective is to present the list and enable the user to download it in an Excel format. I have created a view to facilitate this task, but it takes approximately 20 seconds to retrieve the data, which is not acceptable. My responsibility is to optimize the performance of the application by reducing the time taken to display the data. However, I am uncertain about where to begin with the optimization process. I have started by checking the indexes to see if they are appropriately configured, but did not find any issues.

```sql
SELECT
    oapp_forms.id AS oapp_form_id,
    NULL AS job_doc_id,
    properties.address1 AS address1,
    properties.uprn AS uprn,
    properties.postcode AS postcode,
    oapp_forms.issued_date AS issued_date,
    document_types.document_type_name AS form_name,
    forms.form_code AS form_code,
    jobs.job_number AS job_number,
    oapp_forms.job_id AS job_id,
    contracts.client_document_name AS client_document_name,
    jobs.order1 AS order1,
    'FORM' AS type,
    oapp_forms.updated_at AS updated_at,
    jobs.contract_id AS contract_id,
    jobs.department AS department,
    document_types.id AS document_type_id,
    NULL AS doc_name,
    NULL AS doc_number,
    oapp_forms.form_id AS form_id
FROM (((((oapp_forms
    LEFT JOIN forms ON (oapp_forms.form_id = forms.id))
    LEFT JOIN jobs ON (oapp_forms.job_id = jobs.id))
    LEFT JOIN contracts ON (jobs.contract_id = contracts.id))
    LEFT JOIN properties ON (jobs.property_id = properties.id))
    LEFT JOIN document_types ON (forms.document_type_id = document_types.id))
WHERE forms.is_form = 'Yes'
    AND oapp_forms.form_status IN ('Verified', 'Auto Verified')
    AND document_types.is_visible_on_portal = 'Yes'
    AND oapp_forms.deleted_at IS NULL
    AND forms.deleted_at IS NULL
    AND jobs.deleted_at IS NULL
    AND properties.deleted_at IS NULL
UNION ALL
SELECT
    NULL AS oapp_form_id,
    job_docs.id AS job_doc_id,
    properties.uprn AS uprn,
    properties.address1 AS address1,
    properties.postcode AS postcode,
    job_docs.doc_issue_date AS issued_date,
    document_types.document_type_name AS form_name,
    document_types.document_type_name AS form_code,
    jobs.job_number AS job_number,
    job_docs.job_id AS job_id,
    contracts.client_document_name AS client_document_name,
    jobs.order1 AS order1,
    'DOC' AS type,
    job_docs.created_at AS updated_at,
    jobs.contract_id AS contract_id,
    jobs.department AS department,
    job_docs.doc_type_id AS document_type_id,
    job_docs.doc_name AS doc_name,
    job_docs.doc_number AS doc_number,
    NULL AS form_id
FROM ((((job_docs
    LEFT JOIN document_types ON (job_docs.doc_type_id = document_types.id))
    LEFT JOIN jobs ON (job_docs.job_id = jobs.id))
    LEFT JOIN contracts ON (jobs.contract_id = contracts.id))
    LEFT JOIN properties ON (jobs.property_id = properties.id))
WHERE job_docs.doc_access = 'External'
    AND document_types.is_visible_on_portal = 'Yes'
    AND document_types.status = 'A'
    AND job_docs.deleted_at IS NULL
    AND document_types.deleted_at IS NULL
    AND jobs.deleted_at IS NULL
    AND contracts.deleted_at IS NULL
    AND properties.deleted_at IS NULL
```

> EXPLAIN SELECT * FROM jobs_all_documents

(The EXPLAIN output was attached as a screenshot.)

---

**[Edit]** I created the views from phpMyAdmin; the parentheses are added by default. I tried to optimise the query by replacing the LEFT JOINs with INNER JOINs, but that increased the execution time by about 14 seconds:

- execution time with LEFT JOINs: 29.86 sec for 1,034,006 records
- execution time with INNER JOINs: 41.65 sec for 1,034,006 records
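As a hedged starting point, composite indexes matching each branch's WHERE predicates and join keys might let MariaDB avoid full scans; the column choices below are guesses based only on the view shown:

```sql
-- Covers the first branch's filter plus its join keys:
CREATE INDEX idx_oapp_forms_status
    ON oapp_forms (form_status, deleted_at, form_id, job_id);

-- Covers the second branch's filter plus its join keys:
CREATE INDEX idx_job_docs_access
    ON job_docs (doc_access, deleted_at, doc_type_id, job_id);
```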
Zoe (11 rep)
Apr 7, 2023, 08:36 AM • Last activity: May 22, 2025, 12:09 AM
1 vote
1 answer
268 views
Insert into indexed view?
I'll give an example to show my problem. I create two tables as follows:
```sql
CREATE TABLE a
(
	id INT
)

CREATE TABLE b
(
	name NVARCHAR(10),
	id INT
)
```
Then I insert data into these tables:

```sql
INSERT INTO a VALUES(1),(2)
INSERT INTO b VALUES('Kan',1),('Michael',2)
```
Next, I create an indexed view that joins these tables via `id`:

```sql
CREATE VIEW a_b
WITH SCHEMABINDING
AS
SELECT a.id, b.name
FROM dbo.a INNER JOIN dbo.b ON a.id = b.id;  -- SCHEMABINDING requires two-part names

CREATE UNIQUE CLUSTERED INDEX ix_a_b
ON a_b(id);
```

Then `INSERT INTO a_b VALUES (3, 'Joe')` fails.
What I know about views:

- views do not store data, they are just saved queries
- but an indexed view stores data physically, like a table in the database

So why can't I insert into or delete from `a_b`? Is what I know about views right or wrong? Help me improve.
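A hedged sketch of the working path: DML that touches more than one base table cannot go through the view, but inserts into the base tables are reflected in the view's clustered index automatically.

```sql
-- Insert through the base tables; SQL Server maintains the view's unique
-- clustered index as part of these statements.
INSERT INTO a VALUES (3);
INSERT INTO b VALUES (N'Joe', 3);

-- NOEXPAND reads the materialized index directly (required for the stored
-- view data to be used on non-Enterprise editions):
SELECT id, name FROM a_b WITH (NOEXPAND);   -- now includes (3, 'Joe')
```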
Dunguyen (51 rep)
Apr 9, 2020, 02:11 PM • Last activity: May 21, 2025, 09:09 AM
0 votes
1 answer
267 views
Insufficient privilege
We are using Oracle 11g. I am replicating the production DB for a dev environment. I have two users, CST2 and CST0, where CST0 contains all tables and views and public synonyms for them. I have granted CST2 the same set of privileges in the dev environment as in production. There is a view named Product (based on product_info and product_ainfo) in CST0 which can be accessed by CST2 in production (I ran a SELECT), but doing the same in the replicated dev environment gives the error "insufficient privileges". I checked the following means by which the user can get privileges:

1. the roles assigned are the same, and the privileges for those roles are also the same
2. USER_TAB_PRIVS are the same
3. USER_SYS_PRIVS are the same

I can't find out how CST2 in prod is able to access the view Product in CST0, as it got no privilege through roles or table privileges. Am I missing any other way by which CST2 in prod could have been granted a privilege that I haven't granted in dev?
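A hedged sketch of queries (run as a DBA) that surface privilege paths the USER_* views miss, in particular roles granted to roles:

```sql
-- Which grants exist on the view and its base tables, from any grantor:
SELECT grantee, owner, table_name, privilege
  FROM dba_tab_privs
 WHERE owner = 'CST0'
   AND table_name IN ('PRODUCT', 'PRODUCT_INFO', 'PRODUCT_AINFO');

-- Roles reachable by CST2, including roles granted to roles:
SELECT grantee, granted_role
  FROM dba_role_privs
 START WITH grantee = 'CST2'
CONNECT BY PRIOR granted_role = grantee;
```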
Vishwanath gowda k (131 rep)
Dec 5, 2014, 03:41 AM • Last activity: May 20, 2025, 03:06 AM
0 votes
1 answer
286 views
Oracle 12cR2 - How To Create DDL Script Of A View With All Dependencies and Permissions
I have a view in my production database and I want to create the same view in my test database. I tried this command:

```sql
select dbms_metadata.get_ddl('VIEW','VIEW_NAME','VIEW_OWNER') from dual;
```

It generated a create script and I ran it, but it didn't copy all of the dependencies, data types, etc. How can I do this? Regards,
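A hedged sketch of how DBMS_METADATA and the dependency views can be combined to capture more than the bare CREATE VIEW; the object names are the placeholders from the question:

```sql
-- The view itself:
SELECT dbms_metadata.get_ddl('VIEW', 'VIEW_NAME', 'VIEW_OWNER') FROM dual;

-- Grants on the view (other dependent object types work the same way):
SELECT dbms_metadata.get_dependent_ddl('OBJECT_GRANT', 'VIEW_NAME', 'VIEW_OWNER') FROM dual;

-- Objects the view depends on, to script them first on the test side:
SELECT referenced_owner, referenced_name, referenced_type
  FROM dba_dependencies
 WHERE owner = 'VIEW_OWNER' AND name = 'VIEW_NAME';
```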
jrdba (55 rep)
Aug 12, 2020, 11:40 AM • Last activity: May 17, 2025, 09:05 PM
0 votes
1 answer
282 views
Anyway to view original query on top of views using postgres
For example, I have a simple query:

```sql
SELECT * FROM information_schema.columns where table_schema = 'public';
```

which turns out to be very complex: `information_schema.columns` is a complex view. Right now, to understand the query, I need to read the definitions of all the views involved. Is there any way to "expand" the query and make it easier to read side by side with the query plan output?
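Two hedged helpers for reading such a query: print each referenced view's definition, and use EXPLAIN VERBOSE so plan nodes show their output columns and can be mapped back onto the view bodies.

```sql
-- Definition of any view the query touches:
SELECT pg_get_viewdef('information_schema.columns'::regclass, true);

-- VERBOSE lists each plan node's output columns:
EXPLAIN (VERBOSE, COSTS OFF)
SELECT * FROM information_schema.columns WHERE table_schema = 'public';
```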
nvcnvn (101 rep)
Apr 25, 2021, 10:52 AM • Last activity: May 15, 2025, 04:05 AM
3 votes
1 answer
1889 views
force SQLite view column type affinity
Using SQLite 3, I have some view columns which are defined with an expression, and so have no type affinity. Is it possible to define them in a way that forces them to have a certain type affinity? E.g., if I have tables `t1`, `t2` and a view definition

```sql
create view v as select coalesce(t1.c1, t2.c1) c1 from t1, t2 where X;
```

is there some way of giving `v.c1` a type affinity? I've tried casting it,

```sql
select cast(coalesce(t1.c1, t2.c1) as INTEGER) c1
```

but that doesn't seem to work.
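A small self-contained check, as a hedged sketch: the view column never carries a declared affinity, but `typeof()` shows that the CAST does convert each produced value, which is often the behavior affinity would have given.

```sql
CREATE TABLE t1 (c1 TEXT);
CREATE TABLE t2 (c1 TEXT);
INSERT INTO t1 VALUES ('42');
INSERT INTO t2 VALUES ('7');

CREATE VIEW v AS
SELECT CAST(coalesce(t1.c1, t2.c1) AS INTEGER) AS c1 FROM t1, t2;

-- typeof() reports the representation of each produced value:
SELECT c1, typeof(c1) FROM v;   -- 42 | integer
```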
nik (223 rep)
Nov 16, 2015, 03:00 AM • Last activity: May 11, 2025, 05:04 PM
1 vote
1 answer
1868 views
Full text search with json field in SQLite
Is it possible to have full text search in SQLite alongside JSON? In SQLite, JSON is stored as TEXT ([JSON1](https://www.sqlite.org/json1.html)). Full text search requires creating a virtual table ([full text search](https://www.sqlitetutorial.net/sqlite-full-text-search/)), but I don't know how to connect those two extensions so that full text search doesn't match JSON field names.
```
WRITING (ID, FK_PERSON_ID, BOOK_NAME, TEXT_JSON)
PEOPLE (PERSON_ID, NAME)
```
Additionally, TEXT_JSON for a certain person contains the following JSON objects:
```
-- WRITING.ID = 1, WRITING.FK_PERSON_ID = 1, BOOK_NAME = "NAME_1"
{
  "chapter1": "Title1",
  "text1": "This is sample sentence with word text",
  "text2": "This is sample sentence with word text"
}
-- WRITING.ID = 2, WRITING.FK_PERSON_ID = 1, BOOK_NAME = "NAME_101"
{
  "chapter1": "Title2",
  "text1": "This is sample sentence without"
}
```
(The structure of the JSON object can differ.) How should I set up the virtual table for full text search so that it searches a single person's writings, over BOOK_NAME and the values of all TEXT_JSON attributes? Searching for the word "text" in the writings of PERSON_ID = 1 would return only WRITING.ID = 1.
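A hedged sketch using FTS5 with a contentless table: only the JSON string values (via `json_tree`) plus BOOK_NAME are indexed, keyed by WRITING.ID, so field names such as `chapter1` are never searchable. The index must be kept in sync by the application, since contentless tables don't support UPDATE.

```sql
-- Contentless FTS5 table: stores only the index, keyed by WRITING.ID.
CREATE VIRTUAL TABLE writing_fts USING fts5(book_name, body, content='');

-- Index BOOK_NAME plus every string *value* in TEXT_JSON (json_tree walks
-- nested structures; keys are never emitted as values).
INSERT INTO writing_fts(rowid, book_name, body)
SELECT w.ID,
       w.BOOK_NAME,
       (SELECT group_concat(je.value, ' ')
          FROM json_tree(w.TEXT_JSON) AS je
         WHERE je.type = 'text')
FROM WRITING w;

-- Search the word "text" in the writings of person 1:
SELECT w.ID
FROM WRITING w
WHERE w.FK_PERSON_ID = 1
  AND w.ID IN (SELECT rowid FROM writing_fts WHERE writing_fts MATCH 'text');
```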
SundayProgrammer (11 rep)
May 3, 2020, 12:27 PM • Last activity: May 10, 2025, 09:03 PM
1 vote
2 answers
1641 views
Mysql: Create a view with multiple self joins without duplicates in result
Just to clarify, I can't change the table structure, so please leave out the "you should change your tables to this and that" answers, thank you. I have a table **entities_attributes_values** where an entity has a lot of attributes and the value of each attribute; basically, imagine 3 fields:

- entity_id
- entity_attributes_id
- value

Because every entity attribute and its value is one row, getting several values at once is not so easy. I was thinking of multiple self-joins, and because this query will be very common I created a view, which is built with this query:

```sql
SELECT L1.entity_id,
       L1.value AS 'company_id',
       L2.value AS 'entity_name',
       P.value  AS 'person_name',
       L4.value AS 'establishment_id',
       L5.value AS 'department_id'
FROM entities_attributes_values L1
LEFT JOIN entities_attributes_values L2
       ON L1.entity_id = L2.entity_id AND L2.entity_attributes_id = 1
LEFT JOIN entities_attributes_values L3
       ON L1.entity_id = L3.entity_id AND L3.entity_attributes_id = 3
LEFT JOIN persons_attributes_values P
       ON L3.value = P.core_persons_id AND P.core_persons_attributes_id = 4
LEFT JOIN entities_attributes_values L4
       ON L1.entity_id = L4.entity_id AND L4.entity_attributes_id = 12
LEFT JOIN entities_attributes_values L5
       ON L1.entity_id = L5.entity_id AND L5.entity_attributes_id = 13
WHERE L1.entity_attributes_id = 2
```

So this works, but I have one problem: I get "duplicate" values. It's not really duplicates, but **the point is that in my view I want every entity to be only one row with all its attribute values**; instead I get partial rows (a screenshot of the result was attached here). The first three results are not good for me; I only need the fourth one, where I have all my data about one entity. Thank you in advance for any help!
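A hedged alternative to stacked self-joins, using the classic EAV pivot: one pass over the table, with each attribute collapsed onto the entity's single row (the `person_name` join through persons_attributes_values is left out here for brevity):

```sql
SELECT entity_id,
       MAX(CASE WHEN entity_attributes_id = 2  THEN value END) AS company_id,
       MAX(CASE WHEN entity_attributes_id = 1  THEN value END) AS entity_name,
       MAX(CASE WHEN entity_attributes_id = 12 THEN value END) AS establishment_id,
       MAX(CASE WHEN entity_attributes_id = 13 THEN value END) AS department_id
FROM entities_attributes_values
GROUP BY entity_id;
```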
Mr. Sam (111 rep)
Jun 24, 2015, 10:08 AM • Last activity: Apr 17, 2025, 09:07 PM
1 vote
1 answer
457 views
Modelling a user-defined filter set in SQL
In my database I have a table containing data that I need to filter. The exact contents of this table aren't important, so I'll just call it `data`. Now, my users also need to be able to create filter sets of alerts, which define some sort of problem with the data that they need to detect. For example this might be "select all points where `data.value` is less than quality 50", or "select a group of consecutive points that all have the same sign", which can be answered with SQL queries. The results of these queries need to always stay up to date as new `data` rows are added to the database. As far as I can tell, I could model this two ways:

1. I treat each filter as a row in a `filter_definition` table that somehow stores the query (as a string), and then also have a `filter_result` table that is periodically updated by running the `filter_definition` query (e.g. whenever a new `data` row or `filter_definition` is created). By having my filters as rows in the database it's very easy for me to create/update/delete filters using my ORM. However, this is highly inefficient to update, and `filter_result` can also become out of date if not updated recently enough.

2. I treat each filter as a view in the database, each with its own associated SELECT statement. This would be much more efficient, since nothing needs periodic updates, because the view will automatically run the query when accessed. However, allowing my users to create/update/delete a view is much trickier because it's actually deleting an entire database entity each time. In addition, querying *all* filters (to find all dangerous data points) becomes nearly impossible. I can union all the views for one user, but how do I know which view belongs to each user? Each view is a database entity, not a row, so it can't have a foreign key.

There might be some other solutions I haven't considered as well. Does anyone have any advice as to how I should solve this issue?

*Note: I'm using sqlalchemy as a DBMS-agnostic ORM, so really I want a solution that will work on any DBMS. However, Postgres, MySQL and SQLite are my 3 priorities to support.*
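A minimal sketch of option 1's definition side, assuming a `users` table exists; the stored fragment is composed into a query by the application at read time, which sidesteps the staleness problem but means the application must validate the fragments, since raw SQL is injection-prone:

```sql
CREATE TABLE filter_definition (
  id           INTEGER PRIMARY KEY,
  user_id      INTEGER NOT NULL REFERENCES users(id),
  name         VARCHAR(100) NOT NULL,
  where_clause TEXT NOT NULL            -- e.g. 'value < 50'
);
-- Evaluated on demand, so results are never stale:
--   SELECT * FROM data WHERE <where_clause>;
```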
Migwell (258 rep)
Jan 8, 2020, 03:37 AM • Last activity: Apr 12, 2025, 11:05 PM
1 vote
0 answers
63 views
Looking for a query to get a view's referenced columns in another database
Getting the referenced columns for a view that references objects in its own database is pretty straightforward with the INFORMATION_SCHEMA.VIEW_COLUMN_USAGE view, but I'm struggling to get the referenced columns and tables that are in another database. I've come to this, but it lacks the referenced columns. Do I need another join with a sys table to get this information?

```sql
SELECT o.name,
       r.referenced_server_name,
       r.referenced_database_name,
       r.referenced_entity_name
FROM sys.objects o
JOIN sys.sql_expression_dependencies r ON o.object_id = r.referencing_id
```

In the meanwhile I've found this one, but it would be handy to also have the target columns in the list; now you don't know which column in the view references which source table/column:

```sql
select distinct 'PersonDetails', cols.*
from sys.sql_expression_dependencies objs
outer apply sys.dm_sql_referenced_entities(
    OBJECT_SCHEMA_NAME(objs.referencing_id) + N'.' + object_name(objs.referencing_id),
    N'OBJECT') as cols
where objs.referencing_id = object_id('PersonDetails')
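For the column side, `sys.dm_sql_referenced_entities` does expose the referenced column names (`referenced_minor_name`), including for cross-database references; a hedged sketch against the PersonDetails view from the question:

```sql
SELECT re.referenced_server_name,
       re.referenced_database_name,
       re.referenced_schema_name,
       re.referenced_entity_name,
       re.referenced_minor_name AS referenced_column
FROM sys.dm_sql_referenced_entities(N'dbo.PersonDetails', N'OBJECT') AS re
WHERE re.referenced_minor_id > 0;  -- minor_id 0 rows are the objects themselves
```

Mapping each *view* column to its source column is a different problem, though; the dependency views track object-level references, not a per-output-column lineage.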
chittybang (151 rep)
Apr 8, 2025, 08:33 AM • Last activity: Apr 8, 2025, 02:43 PM
0 votes
0 answers
35 views
Advice needed for VIEWS with some core columns and extra fields
I am looking for advice about designing VIEWs when a variable number of columns is needed.

Context
---

The tables are consolidated into views. For instance `office_rents` and `office_electricity_bills` are consolidated into the view `office_expenses`. Views are themselves consolidated to an upper level by other views, e.g. `office_expenses`, `salaries`, and so on are consolidated into `professional_expenses`.

From the table columns `id, date, amount, description, payment_method`, the current VIEWs retain only the `date` and `amount` fields, which suffice to plot data. However, having additional fields in views (`description`, `payment_method`) would be useful in some contexts, like listing the last paid invoices or the next ones to be paid.

Approaches considered
--

**I hesitate between the following approaches:**

**A)** The VIEWs contain all the fields required for all the needs; this is close to a `SELECT *` approach:

```sql
CREATE VIEW myView AS
SELECT date, amount, description, payment_method FROM table1
UNION ALL
SELECT date, amount, description, payment_method FROM table2
```

Pros: Simplicity.
Cons: Performance, due to bloated data, especially for some plots that range over years.

**B)** The VIEWs are duplicated, with more or fewer fields to serve the different purposes:

```sql
CREATE VIEW myViewLt AS
SELECT date, amount FROM table1
UNION ALL
SELECT date, amount FROM table2

CREATE VIEW myView AS
SELECT date, amount, description, payment_method FROM table1
UNION ALL
SELECT date, amount, description, payment_method FROM table2
```

Pros: Performance.
Cons: Twice as many views to create, which complicates maintenance.

**C)** The VIEWs contain only the columns that are always required, plus an id that points back to the table records to get the extra fields when needed:

```sql
CREATE VIEW myView AS
SELECT 'table1' AS source_table, id AS tid, date, amount FROM table1
UNION ALL
SELECT 'table2' AS source_table, id AS tid, date, amount FROM table2
```

Pros:

- Performance when only the minimal `date, amount` columns are required.
- Flexibility for different purposes requiring a variable number of columns.
- Creates a link to the original data, which can later be used to manipulate them.

Cons:

- Complicates code, requiring JOIN queries when the extra columns are needed.
- Loss of performance if the extra columns are required for many records.

---

**Edit:** I went with approach **C** for practical reasons.

---

N.B. I am using MariaDB (MySQL) or SQLite depending on the configuration.
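A hedged sketch of the join-back query that approach C implies, using the names from the question: plot from the slim view, and fetch the extra fields only for the rows being displayed in detail.

```sql
-- Fetch extra fields for one source table's recent rows only.
SELECT v.date, v.amount, t.description, t.payment_method
FROM myView v
JOIN table1 t ON t.id = v.tid
WHERE v.source_table = 'table1'
ORDER BY v.date DESC
LIMIT 20;
```

With several source tables, one such branch per table (or a UNION ALL of them) covers the detail listing, while plots keep reading only `date` and `amount` from the view.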
OuzoPower (141 rep)
Feb 26, 2025, 07:36 PM • Last activity: Mar 1, 2025, 07:55 PM
0 votes
2 answers
88 views
MariaDB (MySQL) : Cannot create view as already existing after database duplication
For some project, I **duplicated an existing MariaDB (MySQL) database** and altered its records to make it a demo dataset. The database duplication was done through phpMyAdmin. The original database and its copy have **views that are used by the project**. After the copy, the views in the duplicated database are unfortunately **pointing to the tables in the original database** instead of those in its copy.

---

Edit: A **solution could be found**, detailed below in this same thread: https://dba.stackexchange.com/a/345283/193688

---

__Description of the problem before the solution was found__

For each view, the SHOW CREATE VIEW command returns something like:

```sql
CREATE ALGORITHM=UNDEFINED DEFINER=root@localhost SQL SECURITY DEFINER VIEW some_view AS
SELECT original_database.some_view.column1 AS column1,
       original_database.some_view.column2 AS column2
FROM original_database.some_view
UNION ALL (...)
```

In the copied database, I could delete each VIEW, but **when trying to create each VIEW again** so that it points to the tables of the copied database, **the following error comes**: `#1050 - Table 'some_view' already exists`. I tried adding `USE copied_database;` but this didn't change anything; it is as if the namespace of view names were shared among all databases.

```sql
USE copied_database;
DROP TABLE IF EXISTS some_view;
CREATE ALGORITHM=UNDEFINED DEFINER=root@localhost SQL SECURITY DEFINER VIEW some_view AS
SELECT copied_database.some_view.column1 AS column1,
       copied_database.some_view.column2 AS column2
FROM copied_database.some_view
UNION ALL (...)
```

**I need each VIEW to keep the same name in both the original and the copied database**, so that the code still works. What should I do?

Version of MariaDB installed on localhost: 10.4.8
Version of phpMyAdmin: 4.9.1
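A hedged sketch of the recreate step: a view has to be dropped with DROP VIEW (DROP TABLE leaves it in place, which is one way CREATE VIEW can then fail with error 1050). `some_table` below is a hypothetical base table standing in for whatever the view really selects from.

```sql
USE copied_database;
DROP VIEW IF EXISTS some_view;   -- DROP TABLE does not remove a view

CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
VIEW some_view AS
SELECT t.column1 AS column1, t.column2 AS column2
FROM copied_database.some_table t;   -- hypothetical base table name
```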
OuzoPower (141 rep)
Feb 13, 2025, 12:48 PM • Last activity: Mar 1, 2025, 06:13 PM