
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
192 views
Determine Available Blocks of Contiguous Ids
We have a SQL Server table that has an int autoincrement primary key column on it. The table's primary key space is fragmented. For example, ids 1 through 10 are in use but then ids 11 through 100,000 are not in use. Ids 1,000,000 through 1,100,000 are in use but ids 1,100,000 through 50,000,000 are not in use. I am trying to figure out any and all such available ranges of ids. Once the ranges are determined, then we may reseed the primary key column to start at the beginning of the widest range. Is there any tool, utility, or SQL script out there for determining such available ranges?
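A gaps-and-islands query is one way to list the free ranges without any external tool; a sketch, assuming a hypothetical table dbo.MyTable with the int identity column id (substitute the real names), which needs SQL Server 2012+ for LEAD():

-- each row pairs an id with the next id in use; any jump bigger than 1 is a free range
SELECT id + 1           AS range_start,
       next_id - 1      AS range_end,
       next_id - id - 1 AS range_size
FROM (
    SELECT id, LEAD(id) OVER (ORDER BY id) AS next_id
    FROM dbo.MyTable
) AS t
WHERE next_id - id > 1
ORDER BY range_size DESC;

The top row is then the widest gap, which would be the candidate for reseeding with DBCC CHECKIDENT.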
Manny Siddiqui (11 rep)
Sep 23, 2021, 06:13 PM • Last activity: Jun 22, 2025, 08:09 PM
26 votes
4 answers
39777 views
How to limit maximum number of rows in a table to just 1
I have a configuration table in my SQL Server database and this table should only ever have one row. To help future developers understand this I'd like to prevent more than one row of data being added. I have opted to use a trigger for this, as below...

ALTER TRIGGER OnlyOneConfigRow
ON [dbo].[Configuration]
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @HasZeroRows BIT;

    SELECT @HasZeroRows = CASE WHEN COUNT (Id) = 0 THEN 1 ELSE 0 END
    FROM [dbo].[Configuration];

    IF EXISTS(SELECT [Id] FROM inserted) AND @HasZeroRows = 0
    BEGIN
        RAISERROR ('You should not add more than one row into the config table. ', 16, 1)
    END
END

This does not throw an error, but it is also not allowing the first row to go in. Also, is there a more effective / more self-explanatory way of limiting the number of rows that can be inserted into a table to just 1 than this? Am I missing any built-in SQL Server feature?
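A purely declarative alternative to the trigger is to make the key itself only admit one value; a sketch of the idea, assuming the table can be (re)built (the column names other than the lock column are placeholders):

CREATE TABLE dbo.Configuration
(
    LockKey char(1) NOT NULL
        CONSTRAINT DF_Configuration_LockKey DEFAULT 'X'
        CONSTRAINT PK_Configuration PRIMARY KEY
        CONSTRAINT CK_Configuration_LockKey CHECK (LockKey = 'X'),
    SomeSetting nvarchar(100) NULL  -- ... the real configuration columns go here ...
);

With the CHECK pinning LockKey to a single value and the PRIMARY KEY forbidding duplicates, a second row can never be inserted, and the intent is visible from the table definition alone.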
Dib (447 rep)
Jun 21, 2016, 08:25 AM • Last activity: Jun 6, 2025, 05:11 PM
0 votes
1 answer
240 views
MySQL logging max_questions resource
How would I know in MySQL if a user has exceeded their max_queries per hour? I know that I can see this error on the client side, but I want to know about it on the server side before the client reports it to me; it would be nice to have it in the error_log at least.
Ilie Soltanici (101 rep)
Mar 13, 2019, 04:27 PM • Last activity: May 23, 2025, 07:10 PM
-1 votes
1 answer
811 views
Can I set a row-limit for query results that throws an ERROR if the result set exceeds that limit?
I'm looking for a way to protect my database against runaway queries issued by an application. I have control over both the application and the database. It occurs to me that I can protect against two specific things relatively easily:

1. Set a [max_statement_time](https://mariadb.com/kb/en/server-system-variables/#max_statement_time) using GRANT ... MAX_STATEMENT_TIME for the application user.
2. Set [sql_select_limit](https://mariadb.com/kb/en/server-system-variables/#sql_select_limit) for each connection.

When testing, I can see that if I set a max_statement_time of 5 (seconds) and execute a SELECT SLEEP(10), I end up with an error like this:

Query execution was interrupted (max_statement_time exceeded)

But if I perform a SELECT which should return a number of rows that exceeds some value (e.g. 1000), I get this result:

mysql> set sql_select_limit = 1000;
Query OK, 0 rows affected (0.000 sec)

mysql> SELECT * FROM largetable;
....
....
1000 rows in set (0.010 sec)

mysql> show warnings;
Empty set (0.000 sec)

So while I get the "protection" of limiting my query results to 1000 rows, I think what I really want is an error to be thrown in these cases, so we can identify them in the application and fix them. Is there anything in MariaDB that is like sql_select_limit but throws an error rather than simply limiting the size of the result set?
Christopher Schultz (327 rep)
Jul 24, 2023, 04:43 PM • Last activity: Mar 22, 2025, 10:04 AM
5 votes
2 answers
8683 views
How to deal with large offsets in select?
Table jtest with 200k rows; each row contains jsonb of the form { "id": "<uuid>", "key": <n> } (where <n> is an integer from 1 to 200k, incremented per row). There is also a btree index on data->'key'.

create extension if not exists pgcrypto;
create table jtest (data jsonb not null default '{}');
insert into jtest (data)
select json_build_object('id', gen_random_uuid(), 'key', i)::jsonb
FROM generate_series(1,200000) i;
create index jtest_key on jtest ((data->'key'));

First query (fast):

EXPLAIN ANALYZE select j.data from jtest j order by j.data->'key' limit 20;
-- "Limit  (cost=0.42..1.43 rows=20 width=74) (actual time=0.023..0.044 rows=20 loops=1)"
-- "  ->  Index Scan using jtest_key on jtest j  (cost=0.42..10150.42 rows=200000 width=74) (actual time=0.023..0.039 rows=20 loops=1)"
-- "Planning time: 0.082 ms"
-- "Execution time: 0.066 ms"

Second query with large offset (slow):

EXPLAIN ANALYZE select j.data from jtest j order by j.data->'key' offset 100000 limit 20;
-- "Limit  (cost=5075.42..5076.44 rows=20 width=74) (actual time=112.861..112.896 rows=20 loops=1)"
-- "  ->  Index Scan using jtest_key on jtest j  (cost=0.42..10150.42 rows=200000 width=74) (actual time=0.027..106.378 rows=100020 loops=1)"
-- "Planning time: 0.105 ms"
-- "Execution time: 112.931 ms"

In the PG docs I found:

> The rows skipped by an OFFSET clause still have to be computed inside the server; therefore a large OFFSET might be inefficient.

But they don't say anything about how to deal with large offsets. How can I improve this query? Also, is this behaviour (inefficiency for large offsets) common to all RDBMS or just PG (using 9.4)?
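A common way around this (in most databases, not only Postgres) is keyset pagination: instead of skipping 100000 rows with OFFSET, remember the last key value of the previous page and seek past it, so the btree index jumps straight to the right place. A rough sketch against the table above, assuming the previous page ended at key 100000:

-- same-type jsonb values (numbers here) compare in value order, so the expression index is usable
select j.data
from jtest j
where j.data->'key' > '100000'::jsonb
order by j.data->'key'
limit 20;

The trade-off is that pages can only be reached by walking forward from a known key, not by jumping to an arbitrary page number.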
user606521 (1415 rep)
Aug 23, 2015, 06:48 PM • Last activity: Mar 4, 2025, 06:09 AM
4 votes
2 answers
2477 views
PostgreSQL Size Quota on Table or Schema
How does one *limit* the size of a PostgreSQL table (or schema)? A limit in either bytes or rows would be OK. I'm starting to think that there's no easy and obvious solution. I have a lot of identical tables in multiple schemas (my take on multi-tenancy) and I would like to restrict each schema to a certain max size. I have yet to find anything in Postgres that allows me to turn on any kind of quota. So I suppose you need to build this functionality yourself. The naive 'solution' would be to do something like: insert only if select count(*) < max quota. But that does not feel right. Anyone with better solutions in mind?
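Postgres indeed has no built-in quota, so it ends up being home-grown; a common pattern is a statement-level trigger that checks pg_total_relation_size() after each write. A rough sketch, assuming a made-up tenant table named tenant_data and an 8 MB cap:

CREATE OR REPLACE FUNCTION enforce_table_quota() RETURNS trigger AS $$
BEGIN
    -- TG_RELID is the OID of the table the trigger fired on
    IF pg_total_relation_size(TG_RELID) > 8 * 1024 * 1024 THEN
        RAISE EXCEPTION 'size quota exceeded for table %', TG_TABLE_NAME;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tenant_data_quota
AFTER INSERT OR UPDATE ON tenant_data
FOR EACH STATEMENT
EXECUTE PROCEDURE enforce_table_quota();

Because the check runs after the statement, a single large INSERT can overshoot the cap once before further writes start failing, so it is a soft limit rather than a hard one.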
kirilian (41 rep)
Apr 10, 2017, 06:14 AM • Last activity: Jan 23, 2025, 01:07 PM
2 votes
2 answers
1928 views
Limiting the Number of Records Joined
How can I perform a query that contains a (one-to-many) join between two tables, which will only join up to a pre-defined number of joined records? So, let's say I have two tables, one for Matches and one for Goals, and I want to perform some analysis on the matches based on the first 3 goals that have been scored in them. The idea would be to execute a single query, and I would then have a list of comments that is only comprised of the most recent 5 comments on each post.

Matches:

+----------+
| Match ID |
+----------+
| 1        |
| 2        |
| 3        |
+----------+

Goals:

+---------+----+
| Goal ID | In |
+---------+----+
| 1       | 2  |
| 2       | 2  |
| 3       | 1  |
| 4       | 3  |
| 5       | 1  |
| 6       | 2  |
| 7       | 3  |
| 8       | 2  |
| 9       | 1  |
+---------+----+

Result (limited to the first 2 goals per match):

+---------+----+
| Goal ID | In |
+---------+----+
| 1       | 2  |
| 2       | 2  |
| 3       | 1  |
| 4       | 3  |
| 5       | 1  |
| 7       | 3  |
+---------+----+
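If the engine supports window functions (SQL Server, PostgreSQL, MySQL 8.0+, MariaDB 10.2+), the usual pattern is to rank the goals within each match and keep only the first N; a minimal sketch, assuming the columns are named goal_id and match_id (for "Goal ID" and "In"):

SELECT goal_id, match_id
FROM (
    SELECT goal_id,
           match_id,
           ROW_NUMBER() OVER (PARTITION BY match_id ORDER BY goal_id) AS rn
    FROM Goals
) AS ranked
WHERE rn <= 2;  -- first 2 goals per match; use 3 for the first-3-goals analysis

The outer query can then be joined back to Matches for whatever per-match analysis is needed.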
topherg (163 rep)
Apr 25, 2014, 09:01 AM • Last activity: Aug 31, 2024, 03:05 PM
14 votes
1 answer
14536 views
Why *not* ERROR: index row size xxxx exceeds maximum 2712 for index "foo"?
We have repeatedly seen failing attempts to index columns with values exceeding a maximum size. Postgres 10 has this kind of error message for it:

> ERROR: index row size xxxx exceeds maximum 2712 for index "foo_idx"
> HINT: Values larger than 1/3 of a buffer page cannot be indexed.
> Consider a function index of an MD5 hash of the value, or use full text indexing.

Examples:

- https://dba.stackexchange.com/questions/69161/character-varying-index-overhead-length-limit/69164
- https://dba.stackexchange.com/questions/25138/index-max-row-size-error

Etc.

Now, a_horse_with_no_name demonstrated a case with much larger text values (10000 characters) that still seems to work with a UNIQUE index in Postgres 9.6. Quoting his test case:

create table tbl (col text);
create unique index on tbl (col);
insert into tbl values (rpad(md5(random()::text), 10000, md5(random()::text)));
select length(col) from tbl; -- 10000

No error, and the column value indeed tested with a length of 10000 characters. Have there been recent changes, or how is this possible?
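For reference, the workaround the HINT points at looks like the sketch below; lookups then have to compare on the hashed expression so the index can be used (md5 collisions are theoretically possible, so strictly it enforces uniqueness of the hash, which is almost always acceptable):

create unique index tbl_col_md5_uniq on tbl (md5(col));
-- equality lookups must use the same expression:
select * from tbl where md5(col) = md5('the value to find');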
Erwin Brandstetter (185717 rep)
Sep 8, 2018, 03:56 PM • Last activity: Jul 21, 2024, 02:05 AM
0 votes
1 answer
242 views
Why doesn't Postgres apply limit on groups when retrieving N results per group?
I've come across many questions/answers for Greatest/Top N per group type problems that explain _how_ to solve the problem - generally some variation of row_number() or CROSS JOIN LATERAL, but I'm struggling to understand the theory behind the _why_ for this example. The particular example I'm working with is this:
SELECT "orders".* 
FROM "orders" 
WHERE user_id IN (?, ?, ?, ?, ?)
ORDER BY "orders"."created_at" LIMIT 50
Essentially, I want to find the 50 most recent orders amongst a group of users. Each user may have thousands of orders. I have two indexes - (user_id) and (user_id, created_at). Only the first index is ever used with this query. I can understand that the query planner would not know ahead of time which users would have those 50 newest orders. I imagined that it would be clever enough to determine that only 50 results are needed, and that it could use the (user_id, created_at) index to get 50 orders for each user. Then sort and filter those few hundred results in memory. Instead what I'm seeing is that it gets all orders for each user using the user_id index and then sorts/filters them all in memory. Here is an example query plan:
Limit  (cost=45271.94..45272.06 rows=50 width=57) (actual time=13.221..13.234 rows=50 loops=1)
  Buffers: shared hit=12321
  ->  Sort  (cost=45271.94..45302.75 rows=12326 width=57) (actual time=13.220..13.226 rows=50 loops=1)
          Sort Key: orders.created_at
          Sort Method: top-N heapsort Memory: 36kB
        Buffers: shared hit=12321
        ->  Bitmap Heap Scan on orders orders  (cost=180.85..44862.48 rows=12326 width=57) (actual time=3.268..11.485 rows=12300 loops=1)
                Recheck Cond: (orders.user_id = ANY ('{11,1000,3000}'::bigint[]))
                Heap Blocks: exact=12300
              Buffers: shared hit=12321
              ->  Bitmap Index Scan on index_orders_on_user_id  (cost=0.00..177.77 rows=12326 width=0) (actual time=1.257..1.258 rows=12300 loops=1)
                      Index Cond: (orders.user_id = ANY ('{11,1000,3000}'::bigint[]))
                    Buffers: shared hit=21
Planning:
  Buffers: shared hit=6
Execution time: 13.263 ms
The table I'm querying has roughly 50,000,000 orders, with an even distribution of ~4000 orders per user. I have found that I can speed this up significantly using CROSS JOIN LATERAL and it will use the composite index, but I'm struggling to understand WHY the CROSS JOIN LATERAL is needed here for it to use the index. So my question is: why doesn't Postgres use the composite index, and then retrieve only the minimum necessary number of rows (50 per user), using the query I posted above?

EDIT: More context

This is the lateral join query that _does_ use the index:
SELECT o.*
FROM company_users cu
CROSS JOIN LATERAL (
   SELECT *
   FROM orders o
   WHERE o.user_id = cu.user_id
   ORDER  BY created_at DESC LIMIT 50
   ) o
WHERE  cu.company_id = ? 
ORDER BY created_at DESC LIMIT 50
Doing a nested select like this doesn't use the index - even though it does a nested loop just like the lateral join does:
SELECT "orders".* 
FROM "orders" 
WHERE user_id IN (SELECT user_id FROM company_users WHERE company_id = ?)
ORDER BY "orders"."created_at" LIMIT 50
Seanvm (101 rep)
Feb 4, 2024, 06:52 AM • Last activity: Feb 7, 2024, 07:06 AM
0 votes
1 answer
65 views
Force MySQL to use specific index on ORDER BY clause with LIMIT to satisfy join
MySQL 8.0.30

I have three tables defined:

- C
- CT
- T

CT is a linking table (many-to-many between C & T). The query I'm trying to optimize is of the following kind:
SELECT C.*, T.*
FROM C JOIN CT ON C.id = CT.cid JOIN T ON CT.tid = T.id
WHERE T.col = 8
ORDER BY C.pop
LIMIT 20
There are secondary indices defined on T.col and on C.pop. The optimal query plan is to satisfy the query with the following join-order:
C (*using C.pop_idx*)
    join CT (c.id == ct.cid)
    join T (ct.tid == t.id)
    filter (T.col == 8)
There is no doubt that this is the optimal query plan, because of the ORDER BY and LIMIT clauses. What MySQL chooses instead is:
CT (table-scan)
    join T (using T.primary)
    filter T.col == 8
    join C (using C.primary)
I've tried everything I can think of:

- using FORCE INDEX
- query optimizer hints
(/* JOIN_ORDER(C, CT, T) */), /* INDEX(C, C.pop_idx) */, etc, etc
But to no avail. The frustrating thing is that I've seen query plans where MySQL is smart enough to do exactly what I want it to do, viz. use the index defined on the ORDER BY/LIMIT to "filter" the rows in the outermost table as the outermost loop of a nested loop IJ, but I simply can't induce it to do so in this case. Any ideas?

### For those interested (not necessary to answer question)

The calculations as to why said query plan would be optimal in pseudo-code:
for each row in C [using index C.pop_idx read 20 rows (or so) in order at a time]
     for each matching row in CT (CT.cid == C.id) (using CT primary)
          for each matching row in T (CT.tid == T.id) (using T primary)
              if T.col == 8:
                   emit C.*, T.*
    
              if total emitted rows == 20:
                  done
Even if the % of T rows with T.col == 8 is small (say, 1%), on average only 20 * 100 C rows would need to be read (which is way less than all the rows in CT).
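One more rewrite worth trying (a sketch only, not a guaranteed fix) is to take the join order away from the optimizer entirely with STRAIGHT_JOIN, combined with an ORDER BY index hint on C; the index name pop_idx below is assumed from the hint syntax in the question:

SELECT C.*, T.*
FROM C FORCE INDEX FOR ORDER BY (pop_idx)
STRAIGHT_JOIN CT ON C.id = CT.cid
STRAIGHT_JOIN T ON CT.tid = T.id
WHERE T.col = 8
ORDER BY C.pop
LIMIT 20;

STRAIGHT_JOIN forces the tables to be joined in the order written, so C becomes the outer table and the pop index can feed ORDER BY ... LIMIT without a sort.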
ktn (1 rep)
Jan 8, 2024, 04:52 PM • Last activity: Jan 9, 2024, 08:50 AM
0 votes
1 answer
326 views
'Limit INT OFFSET @variable' throw runtime error in mysql server
I'm trying to solve this problem on HackerRank. The problem asks for the median of a specific column. My code:

SET @VAE = (SELECT COUNT(*) FROM STATION);
SET @VAE1 = ROUND(@VAE/2)-1;
SELECT LAT_N FROM STATION ORDER BY LAT_N LIMIT @VAE1;

But I get a syntax error. When I write the query as follows it runs OK, but I don't want it that way:

SET @VAE = (SELECT COUNT(*) FROM STATION);
SET @VAE1 = ROUND(@VAE/2)-1;
SELECT LAT_N FROM STATION ORDER BY LAT_N LIMIT 5;

What is the problem?
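LIMIT in a plain statement cannot take a user variable; the usual workarounds are either running it inside a stored program (where recent MySQL versions accept a routine variable in LIMIT) or building a prepared statement, for which a LIMIT placeholder is permitted. A sketch of the prepared-statement form:

SET @VAE  = (SELECT COUNT(*) FROM STATION);
SET @VAE1 = CAST(ROUND(@VAE/2) - 1 AS UNSIGNED);  -- LIMIT needs an integer value

PREPARE stmt FROM 'SELECT LAT_N FROM STATION ORDER BY LAT_N LIMIT ?';
EXECUTE stmt USING @VAE1;
DEALLOCATE PREPARE stmt;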
ahmad alghadban (1 rep)
Jun 14, 2023, 09:59 AM • Last activity: Nov 12, 2023, 09:32 AM
0 votes
1 answer
834 views
Why does my LIMITed SELECT fetch the last record of the table, not the first?
I read that to fetch the first table record, you may use LIMIT 1. I tried to apply that knowledge to my JOIN statement and got a diametrically opposite result. It joined the *last* record of the joined table
SELECT * FROM users;
| user_id | username | [more columns] |
|--------:|---------:|---------------:|
| 1       | mickey_m | [...]          |
| [...]   | [...]    | [...]          |
| 6       | daisy_d  | [...]          |
SELECT * FROM roles;
| role_id | role  |
|--------:|------:|
| 1       | USER  |
| 2       | ADMIN |
INSERT IGNORE INTO user_role (user_id, username, role_id, role)
SELECT users.id, users.username, roles.id, roles.role
FROM users
JOIN roles ON roles.role = 'ADMIN' LIMIT 1;

SELECT * FROM user_role;
| user_id | username | role_id | role  |
|--------:|---------:|--------:|------:|
| 6       | daisy_d  | 2       | ADMIN |

**Why did it happen?** It seems unnatural for LIMIT to limit the result set to the last record as opposed to the first one. Even this
INSERT IGNORE INTO user_role (user_id, username, role_id, role)
SELECT first_row.id, first_row.username, roles.id, roles.role
FROM (SELECT id, username FROM users LIMIT 1) AS first_row
JOIN roles ON roles.role = 'ADMIN';

SELECT * FROM user_role;
produces the same. In both cases, I expected to get this instead:

| user_id | username | role_id | role  |
|--------:|---------:|--------:|------:|
| 1       | mickey_m | 2       | ADMIN |

Does SELECT select from the bottom up in some cases? It seems so because plain SELECTs like this one
SELECT * FROM users LIMIT 1;
do indeed retrieve the first row. However, if you specify columns
SELECT id, username FROM users LIMIT 1;
you observe the same issue (the last row gets retrieved instead).

Sage (a chat bot) says that when it comes to LIMIT, no order is guaranteed, so I should use ORDER BY (it does help, by the way). Is it right? Does LIMIT 1 limit the result set to a random row that satisfies the criteria? Then why is it always the last one? The likelihood of that is infinitesimally small.

> The behavior you are observing is likely due to the use of the LIMIT clause. When you use LIMIT without an explicit ORDER BY clause, the database will return an arbitrary set of rows that satisfy the query conditions, but the order of the rows is not guaranteed. In other words, the database may return the first row, the last row, or any other row that satisfies the query conditions.
>
> *–Sage*
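For what it's worth, without an ORDER BY the row picked by LIMIT 1 is simply whichever row the execution plan happens to produce first, so it is not guaranteed to be the lowest id (or any particular row). A deterministic version of the statement would be a sketch like:

INSERT IGNORE INTO user_role (user_id, username, role_id, role)
SELECT users.id, users.username, roles.id, roles.role
FROM users
JOIN roles ON roles.role = 'ADMIN'
ORDER BY users.id   -- makes "the first user" well defined
LIMIT 1;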
Sergey Zolotarev (243 rep)
Apr 18, 2023, 06:50 AM • Last activity: Apr 19, 2023, 09:40 AM
2 votes
1 answer
49 views
How to get the most recent result sets when grouping key and date are in separate tables
Good evening! I'm having some difficulty. I'm trying to obtain the most recent row from a few tables using SQL. I can do this using a loop in code, but it requires a lot of round trips to the database and results in slow performance. Would someone be able to help me out? There are a number of questions on stackoverflow that utilize the MAX method, but in those cases the id column and date column are in the same table. I'm not sure how to get that to work in my case. Here is the create code for my tables. *Note: I've renamed the columns and tables to be generic.*

### Table Creation
CREATE TABLE IF NOT EXISTS table_2 (
    table1_id varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL,
    table2_id varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL,
    needed_value_1 varchar(254) COLLATE utf8mb4_unicode_ci NOT NULL,
    needed_value_2 varchar(254) COLLATE utf8mb4_unicode_ci NOT NULL,
    comparison_int smallint(5) unsigned NOT NULL,
    needed_limiting_date datetime DEFAULT NULL,
    PRIMARY KEY (table2_id,table1_id),
    UNIQUE KEY table1_id (table1_id)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;


CREATE TABLE IF NOT EXISTS table_1 (
    owned_by_id int(10) unsigned NOT NULL,
    table1_uuid char(36) COLLATE utf8mb4_unicode_ci NOT NULL,
    table1_id varchar(50) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
    needed_value_3 varchar(254) COLLATE utf8mb4_unicode_ci NOT NULL,
    comparison_bool tinyint(1) NOT NULL,
    comparison_bool2 tinyint(1) unsigned NOT NULL,
    external_id int(10) unsigned NOT NULL,
    needed_date datetime NOT NULL,
    PRIMARY KEY (owned_by_id,table1_uuid),
    UNIQUE KEY table1_uuid (table1_uuid),
    UNIQUE KEY table1_id (table1_id)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

CREATE TABLE IF NOT EXISTS external_table (
    owned_by_id int(10) unsigned NOT NULL,
    external_id int(10) unsigned NOT NULL,
    PRIMARY KEY (owned_by_id,external_id) USING BTREE,
    KEY owned_by_id (owned_by_id),
    KEY external_id (external_id)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
### Code That Works but is not performant

So here is what I would do in pseudocode.
// create a results array
results = []

// get a list of external ids
external_ids = get_data('SELECT external_id FROM external_table WHERE owned_by_id = {id_provided_via_param}');

// loop through ids and get the most recent information per each, add to results
foreach(external_ids as external_id){
	sql_to_run = 'SELECT t1.needed_value_3, t1.external_id, t1.needed_date, t2.needed_value_1, t2.needed_value_2, t2.needed_limiting_date
                    FROM table_1 AS t1
                             INNER JOIN table_2 AS t2 ON t2.table1_id = t1.table1_id
                    WHERE t1.comparison_bool = 1
                      AND t1.comparison_bool_2 = 0
                      AND t1.external_id = {external_id_param}
                      AND t2.comparison_int = 0
                      AND t2.needed_limiting_date <= {given_date_param}
                    ORDER BY needed_limiting_date DESC
                    LIMIT 1'
					
	results.push(get_data(sql_to_run))
}

return results
That works to get what I need, the most recent information from the database for each external_id.

## Primary Question

How would I do this same thing in SQL, so that a resultset is returned consisting of the most recent information for each external id?

### Limitations

- The user has read access, the user cannot create Temporary Tables.
- The user cannot modify any schema.

Thank you for any help you can provide.
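If the server is MySQL 8.0+ or MariaDB 10.2+, one read-only, single-query option is to rank the rows per external_id with a window function and keep only rank 1; a sketch using the column names above (the two ? placeholders stand for {id_provided_via_param} and {given_date_param}):

SELECT needed_value_3, external_id, needed_date,
       needed_value_1, needed_value_2, needed_limiting_date
FROM (
    SELECT t1.needed_value_3, t1.external_id, t1.needed_date,
           t2.needed_value_1, t2.needed_value_2, t2.needed_limiting_date,
           ROW_NUMBER() OVER (PARTITION BY t1.external_id
                              ORDER BY t2.needed_limiting_date DESC) AS rn
    FROM external_table e
    JOIN table_1 t1 ON t1.external_id = e.external_id
    JOIN table_2 t2 ON t2.table1_id = t1.table1_id
    WHERE e.owned_by_id = ?
      AND t1.comparison_bool = 1
      AND t1.comparison_bool2 = 0
      AND t2.comparison_int = 0
      AND t2.needed_limiting_date <= ?
) AS ranked
WHERE rn = 1;

On older versions without window functions, the same effect can be approximated with a correlated subquery that picks MAX(needed_limiting_date) per external_id.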
jeremysawesome (121 rep)
Jan 26, 2023, 03:12 AM • Last activity: Jan 26, 2023, 11:56 PM
3 votes
1 answer
474 views
How to limit the amount of values within a postgres table which are updated
using the following sample:
CREATE TABLE tbl(
	x integer, 
	y integer
);

INSERT INTO tbl 
SELECT g AS x FROM generate_series(1, 10) AS g;
Which gives:
x  | y 
----+---
  1 |  
  2 |  
  3 |  
  4 |  
  5 |  
  6 |  
  7 |  
  8 |  
  9 |  
 10 |  
(10 rows)
I'd like to have something like (*note - I do not care about the ordering of the three values updated in y here*):
x  | y 
----+---
  1 |  1
  2 |  2
  3 |  3
  4 |  
  5 |  
  6 |  
  7 |  
  8 |  
  9 |  
 10 |  
(10 rows)
I've tried to do this using the following:
UPDATE tbl 
SET y = x 
WHERE y IS NULL 
LIMIT 3;
But I get the error ERROR: syntax error at or near "LIMIT", which I'm not sure how to fix.

# Solution

I think the following works:
update tbl 
set y = x 
where x in (select x from tbl where y is null limit 3);
I can't think of a scenario where this would fail; if there's a better approach it'd be good to know.
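The subquery on x works as long as x is unique; a variant that does not depend on any column being a key is to target the physical row identifier instead (a sketch):

UPDATE tbl
SET    y = x
WHERE  ctid IN (SELECT ctid FROM tbl WHERE y IS NULL LIMIT 3);

Under concurrent writers neither form is race-free on its own; adding FOR UPDATE SKIP LOCKED to the inner SELECT is the usual refinement.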
baxx (326 rep)
Jan 4, 2023, 01:46 AM • Last activity: Jan 4, 2023, 01:46 PM
0 votes
1 answer
408 views
MariaDB: Limit changes query plan and makes query very slow
It's counter-intuitive and weird to see how adding "LIMIT 2" changes the whole plan for MariaDB (version 10.6.7) from using one index with a total time of 2 seconds to another index with a total time of 80 seconds.
SELECT SQL_NO_CACHE id FROM items WHERE .... ORDER BY updateTime ASC;
....
118 rows in set (2.531 sec)
vs.
SELECT SQL_NO_CACHE id FROM items WHERE .... ORDER BY updateTime ASC LIMIT 2;
...
2 rows in set (1 min 18.967 sec)
Explain shown next:
explain SELECT SQL_NO_CACHE id FROM items WHERE .... ORDER BY updateTime ASC;
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: items
         type: ref
possible_keys: ....
          key: saleCompleted
      key_len: 1
          ref: const
         rows: 486318
        Extra: Using where; Using filesort
// works 2 seconds
vs.
explain SELECT SQL_NO_CACHE id FROM items WHERE .... ORDER BY updateTime ASC LIMIT 2;
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: items
         type: index
possible_keys: ....
          key: updateTime
      key_len: 9
          ref: NULL
         rows: 62
        Extra: Using where
// works 80 seconds
I do understand how to change it (force using specific index OR change indexes), but I don't understand the logic behind the MariaDB. It seems like a bug, but I might be wrong. Also, it's so weird to see how filesort on 486k records works 40 times faster than other query plan (without filesorting on approx. 62 records). Are there some settings which can tweak MariaDB plan selection ? Thanks
Dmitriy Gorbenko (1 rep)
Oct 18, 2022, 02:38 PM • Last activity: Oct 18, 2022, 07:17 PM
0 votes
2 answers
3656 views
How to limit the number of rows in a mysql table
How can I set a limit on the number of rows in a MySQL table? (I am using PHP.)

POSSIBLE METHOD 1: When the user tries to SIGN-UP, I can SELECT everything from the TABLE, get the number of ROWS in the TABLE and check whether it is == 100; if so, echo "Could not sign you up. Cause: Max user limit reached!";, or if it is < 100, allow the user to SIGN-UP.

But is there an easier method, like just setting a max row limit for the TABLE?
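MySQL has no per-table row cap setting, but the check can be moved server-side with a BEFORE INSERT trigger instead of doing it in PHP; a rough sketch, assuming the table is called users and the cap is 100 (SIGNAL needs MySQL 5.5+):

DELIMITER //
CREATE TRIGGER users_row_cap
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    -- reading the target table here is allowed; modifying it is not
    IF (SELECT COUNT(*) FROM users) >= 100 THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Could not sign you up. Cause: Max user limit reached!';
    END IF;
END//
DELIMITER ;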
user253088
Jun 6, 2022, 11:01 AM • Last activity: Oct 15, 2022, 03:04 PM
0 votes
1 answer
1106 views
How to limit CPU by Db2 instance?
On Linux/x86_64 I have several Db2 database instances, and most of them have one database per instance. Today I had one "runaway instance" that consumed most of the CPU resources of the whole hardware server. Is there any simple way to limit CPU resources per Db2 instance, like setting a max of 20% CPU usage per instance? Regards
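Two directions to look at, both sketched from memory rather than tested: an OS-level CPU quota (cgroups) on the instance owner's processes, or Db2's own workload-management dispatcher (10.1+, edition-dependent), which can cap a service class at a percentage of host CPU. The WLM route is roughly:

-- instance level: enable the WLM dispatcher (restart the instance afterwards)
UPDATE DBM CFG USING WLM_DISPATCHER YES;
UPDATE DBM CFG USING WLM_DISP_CPU_SHARES YES;

-- per database: cap the default user service class at ~20% of host CPU
ALTER SERVICE CLASS SYSDEFAULTUSERCLASS CPU LIMIT 20;

Parameter and attribute names should be checked against the documentation for the exact Db2 version in use.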
folow (523 rep)
Oct 10, 2022, 06:54 AM • Last activity: Oct 10, 2022, 07:35 PM
0 votes
0 answers
379 views
MySQL query: multiple select with different conditions and LIMIT for each, but also LIMIT whole query + paginator friendly
I would like to write an SQL query (for MySQL) that is able to SELECT

- x rows by condition1 from table1
- y rows by condition2 from table1+join1
- z rows by condition3 from table1+join2

But every page (LIMIT+OFFSET) must be full of "n" rows (so only on the last page can there be a different number of rows than "n"). There can be a different number of rows for condition1, condition2 and condition3. If there aren't enough rows for condition1, select more rows by condition2 or condition3. If there aren't enough rows for condition2, select more rows by condition1 (or condition3 if cond1 is not possible). And if there aren't enough rows for cond1+cond2, take more from condition3.

Important is:

- each page (except the last one) has the same number of rows
- each condition returns "x", "y" and "z" records if enough rows are left
- records from table1 have to be globally unique by id (each ID from table1 just once in the whole solution)
- efficient, there are lots of records and operations (WHERE, ORDER BY, ...)
- preferred to return as a single query

**Practical usecase:**

The query will be used in searching. The client will enter a "word" to search, but we don't know if the word is the code of a product, some parameter, or the name of a product. The goal is to return results for each category (code, parameter, name). For example, on each page there will be 10 products. We have to return 2 products found by code, 3 products found by parameter and 5 products found by name. If there isn't any result for searching by code, return 5 products by parameter and 5 by name. Etc. In any case, still 10 records per page. We have to distinguish whether a product was found by code, parameter or name.

**Example**:

Without paginator:

condition1 returns ID from table1: 1 2 3
condition2 returns ID from table1: 11 12 13 14 15 16 17 18 19
condition3 returns ID from table1: 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 118 117 116

Every page contains 10 rows:

- 2 rows by condition1 from table1
- 3 rows by condition2 from table1+join1
- 5 rows by condition3 from table1+join2

Results I need on "page 1":
condition1 returns: 1 2
condition2 returns: 11 12 13
condition3 returns: 101 102 103 104 105

Results I need on "page 2":
condition1 returns: 3
condition2 returns: 14 15 16 17
condition3 returns: 106 107 108 109 110

Results I need on "page 3":
condition1 returns: nothing
condition2 returns: 18 19
condition3 returns: 111 112 113 114 115 116 117 118

Results I need on "page 4":
condition1 returns: nothing
condition2 returns: nothing
condition3 returns: 119 120

If it takes some crazy SQL query, or if it isn't possible in one query, we can use a stored procedure. But the performance is important. Thank you a lot.
Nick (1 rep)
Oct 9, 2022, 10:28 PM • Last activity: Oct 10, 2022, 09:18 AM
12 votes
1 answer
5012 views
ANSI/ISO plans for LIMIT standardization?
Are there currently plans to standardize one best way of limiting the number of results returned by a query? The stack overflow question at Is there an ANSI SQL alternative to the MYSQL LIMIT keyword? lists the various ways to handle this behavior in different languages:

DB2 -- select * from table fetch first 10 rows only
Informix -- select first 10 * from table
Microsoft SQL Server and Access -- select top 10 * from table
MySQL and PostgreSQL -- select * from table limit 10
Oracle -- select * from (select * from table) where rownum <= 10

I don't play in DBs that often so I'm speaking from ignorance here, but it seems like this is a pretty important functionality - at least important enough that I scratch my head when I see it has been left up to vendors.
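For what it's worth, the standard did eventually settle on one syntax: the SQL:2008 revision (and later) defines OFFSET ... FETCH, which PostgreSQL, SQL Server 2012+, Oracle 12c+ and recent DB2 releases all accept. A minimal sketch:

select * from t
order by c
offset 0 rows
fetch first 10 rows only;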
chucksmash (545 rep)
Dec 14, 2012, 03:52 PM • Last activity: Sep 23, 2022, 10:04 AM
2 votes
3 answers
99 views
How do I block all IP addresses from connecting remotely to Cassandra?
I want to access a remote Cassandra cluster from one IP address only. How can I change cassandra.yaml to block all IP addresses except the IP that I want to connect from? Can I do this via cassandra.yaml?
RezaRastgoo (41 rep)
Aug 22, 2022, 06:53 AM • Last activity: Sep 5, 2022, 02:11 AM