
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

3 votes
2 answers
2176 views
MySQL InnoDB Reset Existing Auto-Increment Records
Using MySQL Workbench, I have a MySQL InnoDB table with a few hundred records. The records table has the following format (id is auto_increment and unique):

+------+-------+
| id   | name  |
+------+-------+
| 2500 | jo    |
| 2511 | jim   |
| 2512 | sarah |
| 2513 | jane  |
+------+-------+

Notice my first record begins at 2500. Presumably this is because I have dropped and re-imported the table data numerous times. Is there any way I can 'reset' the id field so that the records begin at 1 and not 2500? I want the table to look like this:

+----+-------+
| id | name  |
+----+-------+
|  1 | jo    |
|  2 | jim   |
|  3 | sarah |
|  4 | jane  |
+----+-------+

Any advice is appreciated.
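A hedged sketch of one way to do the reset in place, assuming no other table references records.id as a foreign key:

```sql
-- Sketch, not a definitive fix: renumber the existing rows in id order,
-- then reset the counter. InnoDB will continue from MAX(id)+1 afterwards.
SET @n := 0;
UPDATE records SET id = (@n := @n + 1) ORDER BY id;
ALTER TABLE records AUTO_INCREMENT = 1;
```

The renumbering is safe here because every new id is smaller than the current minimum, so no intermediate collisions occur.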
TheOrdinaryGeek (177 rep)
Jun 21, 2018, 09:27 AM • Last activity: Jul 28, 2025, 03:36 PM
0 votes
1 answer
166 views
SQL: How to automatically run a different SELECT query daily based on the current system date
I have 31 tables, one for each day of the month, with names like subscription_x where x is the day of the month. Each day a job should run select * from subscription_x to get data from the previous day's table. For example, if today is Dec 14th it should run select * from subscription_13; tomorrow, Dec 15th, it should run select * from subscription_14. Can someone help? I need a way to do this in SQL.
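One hedged way to do this inside MySQL itself is the event scheduler plus a prepared statement that builds yesterday's table name. The target table daily_snapshot below is hypothetical, and in the mysql client the body needs a DELIMITER change:

```sql
-- Sketch, assuming MySQL with event_scheduler=ON. daily_snapshot is a
-- hypothetical table to receive the rows (a bare SELECT inside an event
-- would discard its result set).
CREATE EVENT copy_yesterdays_subscriptions
ON SCHEDULE EVERY 1 DAY STARTS CURRENT_DATE + INTERVAL 1 DAY
DO
BEGIN
  -- build "subscription_<yesterday's day-of-month>" dynamically
  SET @sql = CONCAT('INSERT INTO daily_snapshot SELECT * FROM subscription_',
                    DAY(CURRENT_DATE - INTERVAL 1 DAY));
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END;
```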
user20795704 (1 rep)
Dec 16, 2022, 06:43 PM • Last activity: Jul 12, 2025, 12:03 PM
0 votes
1 answer
192 views
Determine Available Blocks of Contiguous Ids
We have a SQL Server table that has an int autoincrement primary key column on it. The table's primary key space is fragmented. For example, ids 1 through 10 are in use but then ids 11 through 100,000 are not in use. Ids 1,000,000 through 1,100,000 are in use but ids 1,100,000 through 50,000,000 are not in use. I am trying to figure out any and all such available range of ids. Once the ranges are determined, then we may reseed the primary key column to start at the beginning of the widest range. Any tool or utility or SQL script out there for determining such available ranges?
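A hedged sketch of such a script using the LEAD window function (SQL Server 2012+); dbo.MyTable and id stand in for the real table and key column:

```sql
-- Sketch: list every unused id range between consecutive used ids,
-- widest range first.
SELECT id + 1           AS range_start,
       next_id - 1      AS range_end,
       next_id - id - 1 AS range_width
FROM (
    SELECT id, LEAD(id) OVER (ORDER BY id) AS next_id
    FROM dbo.MyTable
) t
WHERE next_id - id > 1
ORDER BY range_width DESC;
```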
Manny Siddiqui (11 rep)
Sep 23, 2021, 06:13 PM • Last activity: Jun 22, 2025, 08:09 PM
0 votes
1 answer
320 views
Returning auto-generated primary key after SQL insertion
I am trying to submit data to a database and then have it return the value of the user_id (primary key) created by the insertion, but I can't get it to return the value. I use the same query in phpMyAdmin and it works just fine. I know this should be simple, but I just can't get it. Here is my code (I do have values to connect to the database, just didn't include them for privacy):

$dbc = @mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_SITE) OR die('Could not connect to MYSQL: ' . mysqli_connect_error());
$nameQuery = "INSERT INTO nameReunion (username, password, email, firstName, middleName, lastName, maidenName, lastUpdate)
              VALUES ('$username', '$password', '$email', '$fname', '$miname', '$lname', '$maname', CURDATE());";
$r = @mysqli_query($dbc, $nameQuery);
if (!$r) {
    echo "The database was not updated.";
} else {
    echo 'The connection was successful';
}
$getUserID = "SELECT user_id FROM nameReunion WHERE username='$username';";
$r2 = @mysqli_query($dbc, $getUserID);
echo $r2;
echo $getUserID;

Please help. I have spent wayyy too much time on something I fear should be very simple.
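A hedged pointer: MySQL exposes the generated key to the inserting connection directly, so the follow-up SELECT by username isn't needed. Sketch below, assuming the omitted columns are nullable or defaulted; in PHP, mysqli_insert_id($dbc) wraps the same mechanism:

```sql
-- Sketch: LAST_INSERT_ID() is scoped to the connection that ran the INSERT.
INSERT INTO nameReunion (username, lastUpdate) VALUES ('demo_user', CURDATE());
SELECT LAST_INSERT_ID() AS user_id;  -- the auto-generated key of the row above
```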
CR_BU (17 rep)
Apr 5, 2015, 02:54 AM • Last activity: May 13, 2025, 07:03 PM
1 vote
1 answer
35 views
Is it possible to have an auto-increment field per primary key in MySQL?
I'm sure I used to be able to do this (15+ years ago), but I don't remember how, or know if it's still possible. I have a table with a composite primary key: MainID and AutoIncrementID. The AutoIncrementID is set to auto increment. If I insert three entries with a MainID of 1, the AutoIncrementID will be 1, 2, and 3 respectively. If I then insert two entries with a MainID of 2, I would like the AutoIncrementID to be 1, then 2. If I then insert another entry with a MainID of 1, I would like the AutoIncrementID to be 4. It doesn't work this way by default in MySQL; is it possible to do, and how do I configure it? All my searches are flooded with general auto increment information. This question is for both MySQL 5.7 and MySQL 8.0. Thanks!
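What you remember most likely relied on MyISAM: with a composite key whose auto-increment column comes second, MyISAM numbers per group. A hedged sketch (note that MyISAM gives up transactions and foreign keys, and InnoDB does not support this):

```sql
-- Sketch: per-MainID numbering works only with the MyISAM engine.
CREATE TABLE grouped (
  MainID int NOT NULL,
  AutoIncrementID int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (MainID, AutoIncrementID)
) ENGINE=MyISAM;

INSERT INTO grouped (MainID) VALUES (1), (1), (1), (2), (2), (1);
-- rows: (1,1) (1,2) (1,3) (2,1) (2,2) (1,4)
```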
Ben Holness (183 rep)
May 12, 2025, 10:51 PM • Last activity: May 13, 2025, 01:43 AM
0 votes
1 answer
352 views
In Oracle SQL 19c, how can I INSERT a new record into a supertype and its related subtypes in one statement?
I'm designing a property management database that will hold - OWNER, CUSTOMER, and STAFF information. As these use many of the same attributes, I intend to create tables for PERSON, ADDRESS and CONTACT. The primary key for PERSON (personID) will be auto-incremented and used as the foreign key for ADDRESS, CONTACT, OWNER, CUSTOMER, and STAFF. When it comes to adding a new record, is there a way to INSERT into PERSON, ADDRESS, CONTACT, and OWNER/CUSTOMER/STAFF at the same time using a simple statement? Or is there another way to retrieve the generated personID and use it to insert into the other tables? I'm testing on Oracle SQL Live 19c.
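A hedged sketch in PL/SQL: a single INSERT cannot span several tables, but RETURNING INTO captures the generated key for the follow-up inserts. The column names (name, city) are assumptions for illustration:

```sql
-- Sketch: capture the generated personID once, reuse it for the subtypes.
DECLARE
  v_id PERSON.personID%TYPE;
BEGIN
  INSERT INTO PERSON (name) VALUES ('Jane Doe')
  RETURNING personID INTO v_id;

  INSERT INTO ADDRESS (personID, city) VALUES (v_id, 'Dublin');
  INSERT INTO OWNER (personID) VALUES (v_id);
  COMMIT;
END;
/
```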
Gospel77 (1 rep)
Jun 14, 2022, 03:54 PM • Last activity: Apr 21, 2025, 08:05 PM
4 votes
2 answers
415 views
Why does InnoDB lock the index till the end?
Why does a query with BETWEEN on two IDs lock the index till the end?
UPDATE elem SET c='' WHERE id BETWEEN 2 AND 5;
-- locks id>5 for some reason
The database is MySQL 8.0.33 GA. Here's the full example, taken from the book "Efficient MySQL Performance" by Daniel Nichter.
CREATE TABLE elem (
  id int unsigned NOT NULL PRIMARY KEY,
  a  char(2) NOT NULL,
  b  char(2) NOT NULL,
  c  char(2) NOT NULL,
  KEY idx_a (a)
);

insert into elem values
(2, 'Au', 'Be', 'Co'), 
(5, 'Ar', 'Br', 'C');

BEGIN;
UPDATE elem SET c='' WHERE id BETWEEN 2 AND 5;

select INDEX_NAME, LOCK_TYPE, LOCK_MODE, LOCK_DATA from performance_schema.data_locks;

+------------+-----------+---------------+------------------------+
| INDEX_NAME | LOCK_TYPE | LOCK_MODE     | LOCK_DATA              |
+------------+-----------+---------------+------------------------+
| NULL       | TABLE     | IX            | NULL                   |
| PRIMARY    | RECORD    | X,REC_NOT_GAP | 2                      |
| PRIMARY    | RECORD    | X             | supremum pseudo-record |
| PRIMARY    | RECORD    | X             | 5                      |
+------------+-----------+---------------+------------------------+
The isolation level is the default (REPEATABLE READ). I understand why the range id=2..5 is locked. But what's the point of locking id>5? Does it matter if someone inserts there? P.S. This locking happens whenever the upper BETWEEN bound is the last element in the index. It is perfectly reproducible and has been mentioned in many articles. Is there a sane reason for it (dire consequences if id>5 were not locked?), or is it an imperfection of the InnoDB locking mechanism?
John Smith (41 rep)
Apr 28, 2023, 08:03 PM • Last activity: Mar 6, 2025, 06:06 PM
0 votes
1 answer
449 views
SQLite: excluded int primary key null value replaced with next auto value
I'm trying to create generic upsert query builder with specific behavior. Table has two unique constraints: primary key _itId_ and column _slug_:
create table it (
    itId integer primary key -- an alias for rowid
  , slug text
  , text text
  
  , CONSTRAINT "itId" UNIQUE("itId")
  , CONSTRAINT "slug" UNIQUE("slug")
);

-- fill the table:
insert into it
    (itId, slug, text)
values
    (null, 'a', 'letter a')
  , (null, 'b', 'letter b')
  , (null, 'c', 'letter c')
;
| itId | slug | text     |
| ---- | ---- | -------- |
| 1    | a    | letter a |
| 2    | b    | letter b |
| 3    | c    | letter c |

Then, here's an attempt at a batch upsert of rows using different keys:
insert into it
    (itId, slug, text)
values
    (null, 'a', 'latin letter a') -- update text by slug
  , (   2, 'β', 'greek letter beta') -- update slug and text by primary key
  , (   9, 'c', 'latin letter c') -- update primary key and text by slug
on conflict (itId) do update set
    text = coalesce(excluded.text, text)
  , itId = itId -- save current value for autoincremented column
  , slug = coalesce(excluded.slug, slug)
on conflict (slug) do update set
    text = coalesce(excluded.text, text)
  , itId = coalesce(excluded.itId, itId) -- here's a trouble!
(I'm using [SQLite 3.35](https://www.sqlite.org/releaselog/3_35_4.html), which allows multiple on conflict clauses)

| itId | slug | text              |
| ---- | ---- | ----------------- |
| 2    | β    | greek letter beta |
| **4**| a    | latin letter a    |
| 9    | c    | latin letter c    |

As you can see, b changed to β -- that's okay; c has changed its _itId_ according to the upserted value, 3 -> 9. But there's a problem with a (it was matched by slug): _excluded.itId_ resolves to 4 instead of null within the second on conflict clause. It seems that the value of _itId_ for row a is replaced with the next available autoincrement value for column _itId_ (an alias of _rowid_). Expectation: itId = coalesce(excluded.itId, itId) => coalesce(null, 1) => 1:

| itId | slug | text              |
| ---- | ---- | ----------------- |
| **1**| a    | latin letter a    |
| 2    | β    | greek letter beta |
| 9    | c    | latin letter c    |

Is there a way to achieve such a result using insert ... on conflict do update for all of these cases? [Online playground](https://sqlime.org/#deta:rwfs8hlesabw)
Denis Borzenko (1 rep)
Apr 18, 2022, 05:20 PM • Last activity: Jan 12, 2025, 04:11 AM
0 votes
1 answer
1697 views
Am I breaking something if I set an autoincrement PK to 0?
I have this table:

CREATE TABLE test_table (
  id int unsigned NOT NULL AUTO_INCREMENT,
  name varchar(50) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY name_UNIQUE (name)
) ENGINE=InnoDB;

Insert some records...

INSERT INTO test_table (name) VALUES ('a');
INSERT INTO test_table (name) VALUES ('b');
INSERT INTO test_table (name) VALUES ('c');

So I get records 1, 2 and 3 with values a, b and c respectively. Now, I need to have a record with 0 as id and an empty string as name. So I did this:

INSERT INTO test_table (id, name) VALUES ('0', '');
UPDATE test_table SET id='0' WHERE id='4';
ALTER TABLE test_table AUTO_INCREMENT = 4;

Do I have a guarantee that everything will continue to work properly? Table references, auto increment sequence, etc...
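One hedged alternative for the id-0 case is MySQL's NO_AUTO_VALUE_ON_ZERO sql_mode, which mysqldump itself relies on when restoring rows with id 0:

```sql
-- Sketch: with NO_AUTO_VALUE_ON_ZERO set, inserting 0 stores a literal 0
-- instead of triggering auto-increment, so no UPDATE/ALTER dance is needed.
-- Note: this statement replaces the session's whole sql_mode list; in
-- practice you would append the flag to the existing modes.
SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
INSERT INTO test_table (id, name) VALUES (0, '');
```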
Matías Cánepa (113 rep)
Oct 19, 2020, 08:32 PM • Last activity: Dec 9, 2024, 01:02 PM
0 votes
3 answers
115 views
Can a new transaction claim an older sequence id?
I'm using a PostgreSQL database as an event store. We used to use https://github.com/SQLStreamStore/SQLStreamStore, but it had issues with a lot of parallel transactions: essentially we suffered from a lot of 'skipped' events. A similar problem is explained here: https://github.com/eugene-khyst/postgresql-event-sourcing?tab=readme-ov-file#transactional-outbox-using-transaction-id So together with a co-worker we decided to fork the library and implement it using [pg_current_snapshot()](https://pgpedia.info/p/pg_current_snapshot.html) . We had a few iterations of this, but in the end we got it working: https://github.com/ArneSchoonvliet/SQLStreamStore The main idea is: if we see a gap between positions, we only trust the events with a lower transaction_id than 'xmin'. This has worked great for us and most problems are solved. But sometimes we have a weird occurrence:
Position    MessageId                               CreatedAt                       TransactionId
31170300	be7b412a-103c-5cdd-8458-57fbb0e5c39e	2024-09-29 13:23:27.733 +0200	2306832989
31170299	38b9d7d9-540c-5440-a2a0-10b91cffb2ad	2024-09-29 13:23:27.736 +0200	2306832990
Query result
Position: 31170297, Array index: 0, Transaction id: 2306832974
Position: 31170298, Array index: 1, Transaction id: 2306832976
Position: 31170300, Array index: 2, Transaction id: 2306832989
Xmin: 2306832990
In the query result you see that 31170299 is missing, so our 'gap checking' code kicks in and checks whether all transaction_ids are lower than xmin. In this case they are... 31170299 simply wasn't visible yet, so as a result that event will be skipped. **Question** Is it expected that this can happen: a newer transaction claiming a lower sequence value? We are using a Google Cloud managed PostgreSQL database. I don't really know how we would ever be able to detect this without checking every time whether transactions are still in flight, and that would hurt performance, since we would lose a lot of time on 'actual' gaps (caused by transactions that were rolled back). For anyone wondering what the insert / query SQL looks like, the INSERT is at https://github.com/ArneSchoonvliet/SQLStreamStore/blob/master/src/SqlStreamStore.Postgres/PgSqlScripts/AppendToStream.sql . Important part:
INSERT INTO __schema__.messages (message_id,
                                 stream_id_internal,
                                 stream_version,
                                 created_utc,
                                 type,
                                 json_data,
                                 json_metadata,
                                 transaction_id)
SELECT m.message_id, _stream_id_internal, _current_version + (row_number()
    over ()) :: int, _created_utc, m.type, m.json_data, m.json_metadata, pg_current_xact_id()
FROM unnest(_new_stream_messages) m
ON CONFLICT DO NOTHING;
GET DIAGNOSTICS _success = ROW_COUNT;
As you can see the position isn't set. This is because it's an autoincrement defined like this: "position" int8 DEFAULT nextval('messages_seq'::regclass) NOT NULL QUERY: https://github.com/ArneSchoonvliet/SQLStreamStore/blob/master/src/SqlStreamStore.Postgres/PgSqlScripts/ReadAll.sql Important part:
BEGIN
  OPEN _txinfo FOR
  SELECT pg_snapshot_xmin(pg_current_snapshot());
  RETURN NEXT _txinfo;
    
  OPEN _messages FOR
  WITH messages AS (
      SELECT __schema__.streams.id_original,
             __schema__.messages.message_id,
             __schema__.messages.stream_version,
             __schema__.messages.position,
             __schema__.messages.created_utc,
             __schema__.messages.type,
             __schema__.messages.transaction_id,
             __schema__.messages.json_metadata,
             __schema__.messages.json_data,
             __schema__.streams.max_age
      FROM __schema__.messages
             INNER JOIN __schema__.streams ON __schema__.messages.stream_id_internal = __schema__.streams.id_internal
      WHERE  __schema__.messages.position >= _position
      ORDER BY __schema__.messages.position
      LIMIT _count
  )
  SELECT * FROM messages LIMIT _count;
  RETURN NEXT _messages;
END;
ErazerBrecht (101 rep)
Oct 9, 2024, 10:20 AM • Last activity: Nov 20, 2024, 11:30 AM
1 vote
2 answers
88 views
AUTO_INCREMENT value is incremented by one after a restart of MariaDB
I need your help to understand some behaviour of MariaDB/InnoDB. I have a dump file I use to initialize my database. Below is the dump file:
-- MariaDB dump 10.19  Distrib 10.11.5-MariaDB, for Win64 (AMD64)
--
-- Host: localhost    Database: capture_plan016
-- ------------------------------------------------------
-- Server version	10.11.5-MariaDB

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Table structure for table _secrol
--

DROP TABLE IF EXISTS _secrol;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE _secrol (
  SECROL_ID bigint(20) NOT NULL AUTO_INCREMENT,
  SECROL_DESC varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (SECROL_ID)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table _secrol
--

LOCK TABLES _secrol WRITE;
/*!40000 ALTER TABLE _secrol DISABLE KEYS */;
INSERT INTO _secrol VALUES
(1,'Role 1'),
(2,'Role 2');
/*!40000 ALTER TABLE _secrol ENABLE KEYS */;
UNLOCK TABLES;
As you can see, the CREATE TABLE statement specifies the AUTO_INCREMENT value. After running this dump file, I check the AUTO_INCREMENT value and it is, as expected, equal to 3. The strange behaviour (to me) occurs when I restart the MariaDB server (with a normal shutdown): after the restart the AUTO_INCREMENT value is 4, even though no INSERT or DELETE was performed. After some experimenting I managed to find a solution to this "problem" (I know it should not be a problem, but I want to rely on predictable values for test purposes): if I use two separate files (one for the table creation and one for the inserts) it works fine, with a predictable auto increment value. But I would like to understand why the auto_increment value changes when all the statements take place in one file. If somebody has an explanation, they are welcome =D
Florian Masy (11 rep)
Aug 20, 2024, 01:54 PM • Last activity: Aug 20, 2024, 06:42 PM
1 vote
2 answers
5268 views
Is it possible to set auto-increment-increment by 10 for only one table?
Is it possible to set auto-increment-increment to 10 for only one table in MySQL? I have a testing DB with a lot of tables that do not abide by an auto-increment-increment of 10, and I don't want to mess up my testing rig. The production server has an auto-increment-increment of 10.
cph (153 rep)
Feb 5, 2017, 05:02 AM • Last activity: Jun 27, 2024, 03:18 AM
0 votes
2 answers
50 views
ID increases even if the row has not been created
I was doing a simple CRUD app in NestJS with Postgres when I noticed something strange. I made an entity with a unique string column:

@Entity()
export class Model {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({unique: true})
  name: string;
}

But every time I try to add a row, even if the row is not added because of the unique restriction, the current id of the database increases anyway, and the next row jumps some ids. For example: if I add a Model named 'model 1', it gets id 1, since it's the first. But if I then try to add 'model 1' 15 more times (which fails because of the restriction) and afterwards insert 'model 2', it won't get id 2 -- it will be 16. Is this how it should work? It looks like I am wasting ids for no reason.
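Yes, this is expected for sequence-backed keys. A minimal sketch in plain SQL of the same effect (the table here mirrors the entity above):

```sql
-- Sketch (PostgreSQL): nextval() fires before the unique check, and
-- sequence values are never handed back, even when the insert fails.
CREATE TABLE model (id serial PRIMARY KEY, name text UNIQUE);
INSERT INTO model (name) VALUES ('model 1');                         -- id 1
INSERT INTO model (name) VALUES ('model 1') ON CONFLICT DO NOTHING;  -- id 2 is burned
INSERT INTO model (name) VALUES ('model 2');                         -- gets id 3
```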
Kaique Bahmad (1 rep)
Jun 24, 2024, 01:05 AM • Last activity: Jun 24, 2024, 05:04 AM
17 votes
8 answers
6755 views
Reasons for avoiding large ID values
We are working on a web application, not yet accessible to users. My boss noticed that newly created records get an ID of over 10 000, even though we only have under 100 records in the table. She assumed that the web interface for some reason creates over a 100 times more temporary records than actual ones (and deletes them) and that this can lead us to running out of range within a few months of release. I don't think she is correct about the cause of the ID inflation (the colleague who can answer this is on vacation, so we don't know for sure), but let's assume that she is. She said that she'd hate to use a bigint column, and that she'd like us to stop autoincrementing the ID column and write server-side code which chooses the first "unused" integer and uses it as an ID. I am a computer science grad student with little practical experience, filling a junior developer role. She has years of experience of managing all of our organisation's databases, and designing most of them. I *think* that she's incorrect in this case, that a bigint ID is nothing to be afraid of, and that mimicking the DBMS functionality smells of an antipattern. But I don't trust my judgment yet. What are the arguments for and against each position? What bad things can happen if we use a bigint, and what are the dangers of reinventing the wheel autoincrementing functionality? Is there a third solution which is better than either one? What could her reasons be for wanting to avoid an inflation of ID face values? I'm interested in hearing about pragmatic reasons too - maybe bigint IDs work in theory, but cause headaches in practice? The application is not expected to handle very large amounts of data. I doubt that it will reach 10 000 actual records within the next few years. If it makes any difference, we are using Microsoft SQL server. The application is written in C# and uses Linq to SQL. **Update** Thank you, I found the existing answers and comments interesting. 
But I'm afraid you misunderstood my question, so they don't contain what I wanted to know. I'm not really concerned about the real reason for the high IDs. If we can't find it on our own, I could ask a different question. **What I'm interested in is to understand the decision process in this case. For this, please assume that the application will be writing 1000 records per day, then deleting 9999 of them**. I'm almost sure this is not the case, but this is what my boss believed when she made her request. So, under these hypothetical circumstances, what would be the pros and cons of either using bigint or writing our own code which will assign IDs (in a way which reuses the IDs of already deleted records, to ensure there are no gaps)? As for the actual reason, I strongly suspect that this is because we once wrote code to import data from another database, as a proof of concept that a later migration can be done to a certain extent. I think my colleague actually created several thousand records during the import and later deleted them. I have to confirm if this was actually the case, but if it is, there is not even need for action.
rumtscho (271 rep)
Dec 23, 2014, 09:12 PM • Last activity: May 3, 2024, 08:33 PM
26 votes
3 answers
170365 views
Postgres: How to insert row with autoincrement id
There is a Table "context". There is an autoincrement id "context_id". I am using sequence to retrieve the next value. SELECT nextval('context_context_id_seq') The result is: 1, 2, 3,...20.... But there are 24780 rows in the "context" table How can I get the next value (24781)? I need to use it in the INSERT statement
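A hedged sketch of the usual pattern: repair the sequence once, then let the column default generate the key and read it back from the INSERT itself (the name column is assumed for illustration):

```sql
-- One-time repair if the sequence has fallen behind the table;
-- repeated ad-hoc SELECT nextval(...) calls are what made it drift.
SELECT setval('context_context_id_seq', (SELECT max(context_id) FROM context));

-- Then the generated context_id comes back from the INSERT in one statement:
INSERT INTO context (name) VALUES ('example')
RETURNING context_id;
```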
user3631472 (361 rep)
May 22, 2014, 09:20 AM • Last activity: Apr 10, 2024, 07:53 PM
0 votes
1 answer
313 views
Auto-increment [id] field in a table in an SQL-Server database
I have different tables containing an id field, typically defined as id (PK, int, not null). Normally, when adding an entry to a table, I add the value of the id field myself, but I'm wondering if there's an automatic way to do that. The following is not working (there already is an entry with id 1):

INSERT [dbo].[Settings] ([GroupId], [Value], [Name], [Id], [Description], [Admin])
VALUES (1, N'1', N'BLABLA', 1, N'meer blabla', 0)

The following is also not working (the id field not being filled in):

INSERT [dbo].[Settings] ([GroupId], [Value], [Name], [Description], [Admin])
VALUES (1, N'1', N'BLABLA', N'meer blabla', 0)

I know about the possibility of adding a kind of IDENTITY table, which might handle this, but I'm very reluctant to add another table to the customer's database. Does anybody know a way to get this done? If possible, I would like to have two possibilities:

- Finding the first possible id (using holes in the existing ids, if any): imagine that I have the following list of existing ids: (1, 2, 3, 4, 11, 12, 13, 14). Then I want 5 to be the next one.
- Finding the highest id plus one: imagine that I have the following list of existing ids: (1, 2, 3, 4, 11, 12, 13, 14). Then I want 15 to be the next one.

Does anybody know how to get this done in a one-liner SQL command, without altering the customer's database? For your information, I'm using Telerik OpenAccess for creating and accessing the database, and one of the items I can see is this:

configuration.HasProperty(o => o.Id).IsIdentity();

However, for that table, I don't see that in SQL Server: SQL-Server screenshot

Thanks in advance
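A hedged sketch of both requested one-liners, plus SQL Server's built-in route (an IDENTITY column, which requires rebuilding the table; dbo.Settings2 below is a hypothetical copy):

```sql
-- "First hole": the largest id always qualifies (it has no successor),
-- so with no interior gaps this degrades to MAX(Id)+1.
-- Caveat: it ignores the case where id 1 itself is free.
SELECT MIN(t.Id) + 1
FROM [dbo].[Settings] t
WHERE NOT EXISTS (SELECT 1 FROM [dbo].[Settings] x WHERE x.Id = t.Id + 1);

-- "Highest plus one":
SELECT MAX(Id) + 1 FROM [dbo].[Settings];

-- Built-in alternative: an IDENTITY column generates keys automatically.
CREATE TABLE dbo.Settings2 (
  Id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
  Name nvarchar(100) NOT NULL
);
INSERT INTO dbo.Settings2 (Name) VALUES (N'BLABLA');
SELECT SCOPE_IDENTITY();  -- the id just generated on this connection
```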
Dominique (609 rep)
Oct 26, 2023, 11:57 AM • Last activity: Nov 16, 2023, 04:56 PM
13 votes
5 answers
44058 views
MySQL: Why is auto_increment limited to just primary keys?
I know MySQL limits auto_increment columns to primary keys. Why is this? My first thought is that it's a performance restriction, since there probably is some counter table somewhere that must be locked in order to get this value. Why can't I have multiple auto_increment columns in the same table? Thanks.
Christopher Armstrong (535 rep)
Jun 23, 2011, 07:45 PM • Last activity: Sep 28, 2023, 10:49 AM
0 votes
2 answers
1665 views
Auto-increment text primary key in postgres
I would like my content to be searchable by users with a unique 6 characters id (i.e. zab4ac, cba884, ...). I was thinking of creating a id character(6) primary key column, but then I started wondering how to create ids for new rows. They can be random or sequential, it doesn't matter to me, but I was wondering what the best option would be: 1) For every INSERT, do a SELECT to get the last key, and "increment it"? 2) or create a random key and verify if it already exists? Since these seem inefficient (2 queries for every insertion), is there a automated way to do it, like serial, but for characters (000009, 00000a, 00000b, ..., 00000z, 000010, ..., or something more random)? Or should I use serial in the database and convert it to a string for users? I am looking to see if I am missing an easy and efficient approach to the problem.
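One hedged design: keep a normal bigserial internally and derive the public 6-character code from it, so uniqueness is inherited from the sequence and neither extra query is needed. The function name to_base36 is invented here:

```sql
-- Sketch (PostgreSQL): encode a sequence value as a 6-char base-36 string.
-- 36^6 ≈ 2.18 billion distinct codes before the width is exhausted.
CREATE FUNCTION to_base36(n bigint) RETURNS text
LANGUAGE plpgsql IMMUTABLE AS $$
DECLARE
  chars constant text := '0123456789abcdefghijklmnopqrstuvwxyz';
  result text := '';
BEGIN
  IF n = 0 THEN RETURN '000000'; END IF;
  WHILE n > 0 LOOP
    result := substr(chars, (n % 36)::int + 1, 1) || result;
    n := n / 36;
  END LOOP;
  RETURN lpad(result, 6, '0');
END;
$$;

-- e.g. to_base36(12345) = '0009ix'; usage with a sequence (name assumed):
-- SELECT to_base36(nextval('content_id_seq'));
```

The codes come out sequential rather than random; a random look would need an extra step (e.g. a keyed permutation of the id before encoding).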
pascal (3 rep)
Jun 29, 2023, 06:25 PM • Last activity: Jun 29, 2023, 09:34 PM
0 votes
2 answers
1818 views
How to run an insert statement as many times as a select statement returns unique values?
DB is Firebird. I have the following statement:

SELECT a.ID FROM CLIENTS a

And I want my INSERT statement into another table to use each of these values, something like:

INSERT INTO BANKS (ID, NR, MONEY)
VALUES (gen_id(gen_name, 1), "there should be a client ID from the SELECT statement above", 3000)

Can I achieve this with one statement?
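A hedged one-statement sketch: INSERT ... SELECT inserts one row per row the SELECT returns, and gen_id() is re-evaluated for each row:

```sql
-- Sketch: one BANKS row per CLIENTS row; the literal 3000 and the
-- NR mapping follow the question.
INSERT INTO BANKS (ID, NR, MONEY)
SELECT gen_id(gen_name, 1), a.ID, 3000
FROM CLIENTS a;
```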
DisplayMyName (139 rep)
May 12, 2017, 10:09 AM • Last activity: May 11, 2023, 04:17 PM
17 votes
3 answers
38458 views
Why does auto increment jump by more than the number of rows inserted?
I am very perturbed by this weird behaviour I am seeing in the auto_increment value recorded in the bidID of a Bids table after performing bulk insertion using a stored procedure:

INSERT INTO Bids (itemID, buyerID, bidPrice)
SELECT itemID, rand_id(sellerID, user_last_id), FLOOR((1 + RAND())*askPrice)
FROM Items
WHERE closing BETWEEN NOW() AND NOW() + INTERVAL 1 WEEK
ORDER BY RAND()
LIMIT total_rows;

For example, if the auto_increment bidID value is 101 at start, and I inserted 100 rows, the ending value becomes 213 instead of 201. However, the bidIDs of those inserted rows run sequentially to a maximum of 201. Having checked the following,

SHOW VARIABLES LIKE 'auto_inc%';

+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| auto_increment_increment | 1     |
| auto_increment_offset    | 1     |
+--------------------------+-------+

I have no idea why it is happening. What could be causing the jump in the auto increment value?
Question Overflow (1009 rep)
Mar 6, 2014, 10:10 AM • Last activity: May 1, 2023, 04:07 PM
Showing page 1 of 20 total questions