
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
1212 views
SELECT FOR UPDATE (NOWAIT) on auxiliary table for serializing updates?
**Background:** I have an inventory tracking application which needs to avoid the following sort of situation, where the same inventory might be used twice due to a lack of serialization. The constraint is that each item of inventory from a pool of inventory must be in use only once at any given time (much simplified example):

CREATE TABLE inventory_use (id serial, ref varchar);

Transaction 1:

BEGIN;
SELECT * FROM inventory_use;
INSERT INTO inventory_use (ref) VALUES ('T1 insert');

Transaction 2:

BEGIN;
SELECT * FROM inventory_use;
INSERT INTO inventory_use (ref) VALUES ('T2 insert');

Transaction 1:

COMMIT;

Transaction 2:

COMMIT; -- fails (as desired) at SERIALIZABLE, commits (not desired) at REPEATABLE READ

SERIALIZABLE prevents this from occurring by causing the last commit to fail (and REPEATABLE READ does not). However, even though my application does not involve many concurrent writes, I'm reluctant to go for SERIALIZABLE because my understanding of its performance characteristics is weak: I'm not yet confident I can predict, for example, what effect concurrent reads or (read or write) throughput will have on performance, perhaps especially on concurrent transactions that aren't connected directly with inventory tracking and probably are not critical to the application. Probably I should educate myself more about SERIALIZABLE, since it looks very useful, but I'd like to understand the alternatives too. So this question is about alternative approaches to SERIALIZABLE for this kind of problem, in particular using SELECT FOR UPDATE NOWAIT to serialize inventory tracking updates through row locking while running at the READ COMMITTED isolation level.

**What I've tried:** I implemented FOR UPDATE NOWAIT locking on the table that records inventory users. That seems to work fine. However, it may cause trouble because it can block some other updates to that table that need not be serialized. This may well not be critical in my application, but it led me to see whether I could use a separate table, created just for the purpose of locking, to do the FOR UPDATE locking. To implement that, before the SELECT FOR UPDATE, I would begin by checking whether an INSERT is needed to represent the pool of inventory whose use needs to be serialized -- perhaps via INSERT ... ON CONFLICT DO NOTHING -- in order to ensure that there is a suitable row that can be locked to serialize the updates.

**Question:** I've not been able to find other people using SELECT FOR UPDATE in this style, in which:

1. there is an auxiliary table whose only function is FOR UPDATE row locking, not to store any application data;
2. FOR UPDATE row locking on *existing* rows is used to serialize the creation of *new* rows.

That makes me wonder whether this is a bad way to solve this problem. I've seen other suggested approaches that are similar, for example using REPEATABLE READ with an auxiliary table and using an UPDATE to that table to implicitly acquire a lock: https://peter.grman.at/postgres-repeatable-read-vs-serializable/

Is this (the SELECT FOR UPDATE style I describe above) a sensible technique for solving a problem of this kind, and if so, how should it be done? I don't have specific "how should it be done" issues in mind other than to benefit from experience here.
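A minimal sketch of the auxiliary-lock-table idea described above, assuming PostgreSQL 9.5+ for INSERT ... ON CONFLICT; the inventory_pool_lock table, the pool_id column and the value 42 are hypothetical illustrations, not names from the application:

-- one row per inventory pool, used only for locking
CREATE TABLE inventory_pool_lock (pool_id int PRIMARY KEY);

BEGIN;
-- make sure a lockable row exists for this pool
INSERT INTO inventory_pool_lock (pool_id) VALUES (42)
ON CONFLICT (pool_id) DO NOTHING;
-- serialize all inventory_use changes for pool 42; fail immediately instead of
-- waiting if another transaction already holds the row lock
SELECT pool_id FROM inventory_pool_lock WHERE pool_id = 42 FOR UPDATE NOWAIT;
-- now check current usage and record the new use while holding the lock
INSERT INTO inventory_use (ref) VALUES ('T1 insert');
COMMIT;

Whether NOWAIT (fail fast and retry in the application) or a plain FOR UPDATE (queue behind the lock) is preferable depends on how contention should be handled.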
Croad Langshan (253 rep)
Jan 19, 2021, 05:48 PM • Last activity: Apr 22, 2025, 06:03 PM
42 votes
5 answers
37618 views
Locking issue with concurrent DELETE / INSERT in PostgreSQL
This is pretty simple, but I'm baffled by what PG does (v9.0). We start with a simple table:

CREATE TABLE test (id INT PRIMARY KEY);

and a few rows:

INSERT INTO TEST VALUES (1);
INSERT INTO TEST VALUES (2);

Using my favorite JDBC query tool (ExecuteQuery), I connect two session windows to the db where this table lives. Both of them are transactional (ie, auto-commit=false). Let's call them S1 and S2. The same bit of code for each:

1: DELETE FROM test WHERE id=1;
2: INSERT INTO test VALUES (1);
3: COMMIT;

Now, run this in slow motion, executing one at a time in the windows.

S1-1 runs (1 row deleted)
S2-1 runs (but is blocked since S1 has a write lock)
S1-2 runs (1 row inserted)
S1-3 runs, releasing the write lock
S2-1 runs, now that it can get the lock. But reports 0 rows deleted. HUH???
S2-2 runs, reports a unique key constraint violation

Now, this works fine in SQLServer. When S2 does the delete, it reports 1 row deleted. And then S2's insert works fine. I suspect that PostgreSQL is locking the index in the table where that row exists, whereas SQLServer locks the actual key value. Am I right? Can this be made to work?
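For what it's worth, on later PostgreSQL versions (9.5+, so not available on the 9.0 instance in the question) the delete-then-reinsert pair can be collapsed into a single upsert, which sidesteps this race entirely; this is a hedged sketch of that pattern, not the fix discussed in the answers:

-- re-create the key atomically; with more columns you would use
-- ON CONFLICT (id) DO UPDATE SET ... instead of DO NOTHING
INSERT INTO test (id) VALUES (1)
ON CONFLICT (id) DO NOTHING;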
DaveyBob (561 rep)
Oct 26, 2012, 04:58 PM • Last activity: Nov 10, 2024, 01:46 AM
2 votes
1 answer
1371 views
How does two-phase locking ensure serializability?
I have read many sources, and they say one of the ways to implement serializability in a database is to use two-phase locking. But I don't really understand how two-phase locking ensures serializability in this example from Jim Gray. The example is that we have two rows in the database: one has the value **white** and the other has the value **black**. I have two transactions:

- TX1 will update the value of **white** to **black**
- TX2 will update the value of **black** to **white**

If TX1 and TX2 are executed at the same time, then TX1 acquires the lock on the row with the value of **white**, and TX2 acquires the lock on the row with the value of **black**. So there are no locking conflicts, and eventually, the values are swapped.
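Written out as SQL (the marbles table below is a hypothetical illustration of Gray's example, not from the question), each UPDATE has to read every row to evaluate its WHERE predicate, so under two-phase locking each transaction must hold read locks on both rows before writing either one, and the "no conflict" interleaving above cannot happen:

-- hypothetical table for the white/black example
CREATE TABLE marbles (id int PRIMARY KEY, color text);
INSERT INTO marbles VALUES (1, 'white'), (2, 'black');

-- TX1: reads rows 1 and 2 to find the white ones, then writes row 1
UPDATE marbles SET color = 'black' WHERE color = 'white';

-- TX2: also reads rows 1 and 2, then writes row 2
UPDATE marbles SET color = 'white' WHERE color = 'black';

Under (strict) two-phase locking the shared locks taken while scanning conflict with the other transaction's exclusive lock, so one transaction waits or deadlocks and is rolled back, and the result is equivalent to some serial order. The swap described in the question only happens if each transaction locks just the row it writes and never locks the rows it read, which is exactly what 2PL forbids.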
Quốc Khánh Bùi (25 rep)
Feb 2, 2023, 03:35 PM • Last activity: Oct 8, 2024, 04:45 AM
9 votes
1 answer
26180 views
SELECT in trigger function in two tables
My table and trigger in PostgreSQL 9.4.3 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit:

CREATE TABLE measurements (
    measurement_id SERIAL PRIMARY KEY NOT NULL,
    measurement_size_in_bytes INTEGER NOT NULL
);

CREATE TABLE file_headers (
    header_id SERIAL PRIMARY KEY NOT NULL,
    measurement_id INTEGER NOT NULL,
    file_header_index_start INTEGER,
    file_header_index_end INTEGER
);

CREATE TRIGGER measurement_ids
AFTER INSERT ON measurements
FOR EACH ROW
EXECUTE PROCEDURE ins_function('SELECT measurement_id FROM measurements ORDER BY measurement_id desc limit 1;', 1, 666);

where I assumed that the datatype of the SELECT is INTEGER since the column is SERIAL, but that is apparently false, because I get this error message from the command which fires the trigger:

INSERT INTO measurements (measurement_size_in_bytes) VALUES (888);

> ERROR: invalid input syntax for integer: "SELECT measurement_id FROM measurements ORDER BY measurement_id desc limit 1;"
> CONTEXT: PL/pgSQL function ins_function() line 10 at assignment

### Edit

ins_function() and edits based on @a_horse_with_no_name and @Joishi's comments:

CREATE OR REPLACE FUNCTION ins_function()
  RETURNS TRIGGER AS
$$
--
-- Perform AFTER INSERT operation on file_header by creating rows with new.measurement_id, new.file_header_index_start and new.file_header_index_end.
--
DECLARE
    measurement_id INTEGER;
    file_header_index_start INTEGER;
    file_header_index_end INTEGER;
BEGIN
    SELECT a.measurement_id INTO measurement_id
    FROM measurements a
    ORDER BY measurement_id desc limit 1;

    file_header_index_start := TG_ARGV[0];
    file_header_index_end := TG_ARGV[1];

    IF TG_OP = 'INSERT' THEN
        INSERT INTO file_headers (measurement_id, file_header_index_start, file_header_index_end)
        VALUES (measurement_id, file_header_index_start, file_header_index_end);
        RETURN NEW;
    END IF;

    RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$ LANGUAGE plpgsql;

--
-- Function and trigger on INSERT.
--
CREATE TRIGGER measurement_ids
AFTER INSERT ON measurements
FOR EACH ROW
EXECUTE PROCEDURE ins_function(1, 666);

I now get no error, but the output is wrong: no INSERT is seen in the table file_headers, while the INSERT into the table *measurements* succeeds.

### Output of @ErwinBrandstetter's answer

So I started to think about casting from TEXT to INT, but this should be such a basic operation, since TG_ARGV[] is of datatype text. One unsuccessful try is format('SELECT $1.%I', TG_ARGV[0]). The regclass cast could work as you describe here in insaft_function():

SELECT NEW.measurement_id, TG_ARGV[0]::regclass, TG_ARGV[1]::regclass;

Why are there no successful INSERTs into the table file_headers?
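As a side note on the casting the question mentions: TG_ARGV elements are always text in PL/pgSQL; assignment to an integer variable relies on an implicit text-to-integer coercion, while an explicit cast makes the conversion visible and fails early if the argument is not a number. A minimal sketch of that pattern (demo_trigger_fn is a hypothetical name, not a fix for the question itself):

CREATE OR REPLACE FUNCTION demo_trigger_fn()
  RETURNS trigger AS
$$
DECLARE
    start_idx integer;
BEGIN
    -- TG_ARGV[0] is text; cast it explicitly before using it as an integer
    start_idx := TG_ARGV[0]::integer;
    RAISE NOTICE 'first trigger argument as integer: %', start_idx;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;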
Léo Léopold Hertz 준영 (765 rep)
Jul 14, 2015, 02:32 PM • Last activity: Apr 14, 2023, 03:07 AM
5 votes
1 answer
1047 views
What is the meaning of Order preserving Conflict Serializable schedules?
I was going through the concept of order preservation for the conflict serializable class and I came across order preserving conflict serializable (OCSR in short). The following is the definition of OCSR that I found:

> History *h* is order preserving conflict serializable if *h* is conflict equivalent to a serial history *hs* where for t, t' ∈ h: if *t* occurs completely before *t'* in *h* then the same holds in *hs*.

The following is one example of a schedule that is in OCSR:

> w3(y) c3 w1(x) r2(x) c2 w1(y) c1

But I could not get why this schedule is in OCSR, because as per my understanding the conflict graph is t3 ---> t1 ---> t2, which shows a serial schedule where t1 comes before t2. But in the original interleaved schedule t2 completely comes before t1. Then how is the given example said to be in OCSR? Can anyone please help me understand this better?
A Beginner (151 rep)
Mar 23, 2019, 07:56 PM • Last activity: Oct 1, 2022, 01:06 AM
1 vote
0 answers
28 views
Consistency in serializable concurrent transactions
My DBS textbook says that if each transaction individually maintains consistency in the system, in the case of concurrent transactions, serializability ensures consistency. I am not able to understand what this means. Can someone explain?
Kashish (11 rep)
Mar 29, 2022, 03:35 PM
0 votes
1 answer
378 views
SERIAL for multi-column unique constraints
I'm trying to build an append-only data model in Postgres, using the following schema:
CREATE TABLE IF NOT EXISTS account (
    id UUID DEFAULT uuid_generate_v4(),
    user_id INT NOT NULL,
    currency_id INT NOT NULL,
    balance INT NOT NULL,
    version INT NOT NULL DEFAULT 1,
    PRIMARY KEY(user_id, currency_id, version)
);
The idea is to have n users, owning m accounts. Each account is connected to a certain type of currency and can have a balance. Changes to the balance are persisted by adding a new row to the table (as opposed to updating the current row) and increasing the version field by 1. The latest balance of a user would always be found by SELECT balance FROM account WHERE user_id=? ORDER BY version DESC with a full history of balance changes. I should be able to solve this using multiple queries in a transaction (if exists, read version, then insert new row with version+1). But I was wondering if there was a more clever way of designing the table structure itself (something like a SERIAL that would only increment on inserts within the same unique constraint). I assume it probably won't be as straightforward as this, but I thought I'd ask.
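One common way to get the next per-account version in a single statement, sketched here against the schema above (the literal values are placeholders) and not claimed to be the best design, is to compute it inside the INSERT:

INSERT INTO account (user_id, currency_id, balance, version)
SELECT 1, 1, 500, COALESCE(MAX(version), 0) + 1
FROM account
WHERE user_id = 1 AND currency_id = 1;

Two concurrent sessions can still compute the same MAX(version); the composite primary key on (user_id, currency_id, version) then makes one of them fail with a unique violation, so the application needs a retry, which is the same trade-off as the read-then-insert transaction.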
plorres (1 rep)
Jul 4, 2021, 05:08 PM • Last activity: Jul 7, 2021, 03:43 PM
7 votes
2 answers
557 views
Does the timestamp protocol following Thomas's write rule allow non-view-serializable schedules in some cases?
I have come across the following lines in a textbook (Database System Concepts by Avi Silberschatz, Henry F. Korth, and S. Sudarshan, 6th edition, page 686):

> Thomas' write rule allows schedules that are not conflict serializable but are nevertheless correct. Those non-conflict-serializable schedules allowed satisfy the definition of view serializable schedules (see example box).

What I understood from the above lines is that every schedule generated by the timestamp protocol following Thomas's write rule is view serializable. Now let's take the following little schedule: $S: R_1(X), W_2(X), W_1(X)$. This schedule $S$ is allowed under the timestamp protocol which follows Thomas's write rule, and the serialization order is $R_1(X), W_1(X)$. But I was not able to prove that it is view serializable. Actually I think that it is non-view-serializable because:

1. Consider the serial order $T_1, T_2$. Now the final value of $X$ is written by $T_2$. So not equivalent.
2. The next alternative serial order is $T_2, T_1$. Here $R_1(X)$ will read the value of $X$ written by $T_2$, not the original value which was there before the start of both transactions. So this too is not view-equivalent.

What is going wrong here? Please help me with this one.
Vimal Patel (173 rep)
Nov 7, 2019, 02:59 PM • Last activity: Aug 21, 2020, 03:32 PM
15 votes
2 answers
5745 views
Does SQL Server's serializable isolation level lock the entire table
A colleague of mine and I discussed the implications of using the serializable isolation level. He said it locked the entire table, but I disagreed, telling him it potentially could, but that it tries to apply range locks and doesn't apply true serialization, as explained here: The Serializable Isolation Level. I can't find anything in the docs either for the "locks the entire table" claim: SET TRANSACTION ISOLATION LEVEL. The docs state a bunch of things regarding range locks, so in theory you could lock the entire table simply by having a range lock that covers the entire range of possible values in the table, but it doesn't lock the table itself. Am I completely wrong here? Does it in fact lock the entire table (or tables)?
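One way to see what SERIALIZABLE actually takes in SQL Server is to run a range query inside a serializable transaction and look at sys.dm_tran_locks for the session; a small sketch, with dbo.Orders and OrderId as hypothetical stand-ins for any table with an index on the filtered column:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;

SELECT * FROM dbo.Orders WHERE OrderId BETWEEN 1 AND 100;

-- locks held by this session: with a usable index, expect key-range locks
-- (RangeS-S) plus intent locks on the page and object, not a table lock
SELECT resource_type, request_mode, COUNT(*) AS total_locks
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;

ROLLBACK;

If there is no suitable index for the predicate, the engine has to protect the whole scanned range, and lock escalation can end up with a table-level lock, which is probably where the "locks the entire table" impression comes from.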
mslot (427 rep)
Sep 28, 2019, 06:57 AM • Last activity: Sep 29, 2019, 06:28 PM
5 votes
1 answer
4350 views
What is an Intent Lock in SQL Server
I have this query

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRAN

UPDATE c
    SET c.Score = 2147483647
FROM dbo.Comments AS c
WHERE c.Id BETWEEN 1 AND 5000;

which has these stats

+--------------+---------------+---------------+-------------+
| request_mode | locked_object | resource_type | total_locks |
+--------------+---------------+---------------+-------------+
| RangeX-X     | Comments      | KEY           | 2429        |
| IX           | Comments      | OBJECT        | 1           |
| IX           | Comments      | PAGE          | 97          |
+--------------+---------------+---------------+-------------+

I wonder about the IX, which is an Intent Lock. What does that mean, and why does one exist on the table itself? As I understand it, it is not a true lock but more something SQL Server uses (or is set by the transaction?) to indicate that a lock might occur. Is the above right?
mslot (427 rep)
Sep 28, 2019, 04:30 PM • Last activity: Sep 28, 2019, 04:55 PM
6 votes
1 answer
1121 views
PostgreSQL Serialisation failure on different ids
Serializable isolation mode can be used to avoid race conditions when upserting equal ids. So for

create table u(uid int primary key, name text);

if we run two similar transactions T1 and T2:

begin isolation level serializable;
select * from u where uid = 1;

and then continue, for both T1 and T2, with:

insert into u (uid, name) values (1, 'A');

then after commit; only the first succeeds while the other throws a serialisation failure. This is a very neat feature of the mode for handling unique key violations in complex transactions without resorting to the specific 'hack' of insert ... on conflict. **However**, even if the uids are different, say uid = 2 and uid = 3, both transactions T1 and T2 **still won't** be able to commit. How could that be possible? Supposedly they create different predicate SIReadLocks, and the select uses an index scan. Where's the trick?
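The SIRead locks each session holds can be inspected directly before committing, which usually shows where the overlap comes from; a small sketch (run from a third session, or from each transaction before COMMIT):

SELECT locktype, relation::regclass AS relation, page, tuple, mode, pid
FROM pg_locks
WHERE mode = 'SIReadLock';

One common cause of the behaviour described: on a very small table the planner often prefers a sequential scan over the index, and the predicate lock is then taken at relation or page granularity rather than per key, so transactions touching different uids still appear to conflict.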
Yury Oparin (63 rep)
Aug 24, 2019, 04:23 PM • Last activity: Aug 28, 2019, 06:38 AM
0 votes
0 answers
995 views
PostgreSQL Race condition and Serializable
Hello, I'm trying to debug a tricky race condition in my app (golang, pq, sqlx, PostgreSQL Debian 10.6-1.pgdg90+1). I have a table
CREATE TABLE j (
  id              BIGSERIAL PRIMARY KEY        NOT NULL,
  "type"          VARCHAR(10)                  NOT NULL,
  complete        BOOLEAN        DEFAULT FALSE NOT NULL,
  started_at      TIMESTAMP WITH TIME ZONE
                                 DEFAULT now() NOT NULL,
  error           TEXT           DEFAULT ''    NOT NULL
);

CREATE INDEX jtype ON j("type");
CREATE UNIQUE INDEX jtype_one_active ON j("type") WHERE complete = false;
to track some jobs. The job is created with complete=false, and later updated to complete=true. Errors are logged, so there are multiple rows per "type", but only one job should be active. First, with a naive approach, I did this:
SELECT j.*
			FROM j
			WHERE "type" = ...
			ORDER BY started_at DESC
			LIMIT 1
			

-- some in app business logic

INSERT INTO j ("type")
			VALUES ($1)
		RETURNING *
It worked OK, but running parallel tests I found a race condition on an empty table:

- Process A: SELECT ...
- Process B: SELECT ... (both select no rows)
- Process A: INSERT / UPDATE (process A inserts a new row AND proceeds to update it to complete=true, so the jtype_one_active index won't fire)
- Process B: INSERT ...

So the job is running twice, even if process A completes with no error. I wrapped CREATE + INSERT into a single Serializable transaction (and also added FOR UPDATE to CREATE), and it worked for the most part.

**The real question starts here:** But when I added more concurrency to my test I found that sometimes the job still runs twice. Here's what I managed to debug, but it might not be 100% true. The table is empty:

- Process A: begins transaction
- Process A: SELECT (no rows)
- Process A: CREATE
- Process B: begins transaction
- Process B: runs SELECT **and the SELECT is running slow due to high concurrency**
- Process A: commits transaction
- Process B: SELECT completes, no rows found
- Process B: INSERT (no serialization error is fired!!)
- Process B: commits transaction (no error is fired!!)

Does a long-running SELECT FOR UPDATE query somehow "escape" Serializable isolation? Or is there another flaw in my script?

**Update** I've managed to get the same behaviour on just 2 parallel processes and record a PostgreSQL log: https://pastebin.com/HgzwC9Wh

The log looks close to my previous guess, though I'm not sure if the SELECT in process B really takes a lot of time. For some reason, after the A transaction commits, the B SELECT doesn't see the committed data! It returns no rows. I tried to replay this from a psql shell with no success.
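One alternative sometimes used for this kind of "at most one active job per type" constraint, shown here only as a hedged sketch and not as the resolution of this question, is a transaction-scoped advisory lock keyed on the type, which serializes the SELECT + INSERT without relying on SSI ('report-job' is a placeholder type value):

BEGIN;
-- all workers for the same type queue up here for the rest of the transaction
SELECT pg_advisory_xact_lock(hashtext('report-job')::bigint);

SELECT * FROM j WHERE "type" = 'report-job' ORDER BY started_at DESC LIMIT 1;
-- application logic decides whether a new job is needed
INSERT INTO j ("type") VALUES ('report-job') RETURNING *;
COMMIT;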
dmzkrsk (101 rep)
Jul 5, 2019, 03:08 PM • Last activity: Jul 5, 2019, 10:16 PM
1 vote
0 answers
104 views
View serializable schedule
> Any view serializable schedule that is not conflict serializable must contain a blind write. (Source: [Wikipedia](https://en.wikipedia.org/wiki/Blind_write))

Why is this statement true? I have referred to some links like this one but am still not able to visualize the correctness of the above statement.
Palak Jain (111 rep)
Jan 8, 2019, 02:21 PM • Last activity: Jan 8, 2019, 09:55 PM
2 votes
2 answers
2367 views
Are MySQL InnoDB transactions serializable?
I am confused about whether or not MySQL transactions are serializable and whether or not read anomalies are guarded against. For example, what if I SELECT a value from a row, then UPDATE that row by incrementing the value? There is an opportunity for another query to UPDATE the row which is being read, leading to an improper update. However, I was under the impression that even if I wrap these two statements in a transaction, I am not guaranteed to be free of race conditions. See this StackOverflow thread. So to prevent this I used an UPDATE with COALESCE, which is a single query and thus guaranteed to be atomic. Was this assumption correct? Or would a transaction have worked here to guarantee no race conditions? I see from the MySQL 5.7 documentation on isolation levels that transactions are by default REPEATABLE READ rather than SERIALIZABLE. What if I had set the transaction level to SERIALIZABLE? I attempted to read the documentation on REPEATABLE READ but it did not increase my understanding with regard to this issue.
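In InnoDB at REPEATABLE READ, a plain SELECT inside a transaction is a non-locking consistent read, so the read-then-increment pattern can indeed lose updates; the two usual alternatives are a locking read or a single atomic statement. A hedged sketch with a hypothetical counters table:

-- option 1: locking read; other writers block on the row until COMMIT
START TRANSACTION;
SELECT value FROM counters WHERE id = 1 FOR UPDATE;
UPDATE counters SET value = value + 1 WHERE id = 1;
COMMIT;

-- option 2: single atomic statement, no explicit transaction required
UPDATE counters SET value = value + 1 WHERE id = 1;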
Adam Thompson (469 rep)
Mar 30, 2018, 04:36 AM • Last activity: Oct 24, 2018, 06:36 PM
4 votes
1 answer
1072 views
Forced serialization on inline table valued function
Apologies if this is a stupid question. We are attempting to convert our multi-statement table valued functions (MSTVFs) to inline table valued functions (ITVFs), to avoid the forced serialization of queries. I have the following ITVF (watered down), but the Execution Plan still says it has a Degree of Parallelism of 1. The actual function in question has three basic SELECT statements separated by UNION ALLs. Have I missed something? How can I avoid forced serialization here?

CREATE FUNCTION dbo.Test (@i int)
RETURNS TABLE
AS
RETURN
    SELECT @i as [i];
GO

SELECT * FROM dbo.Test (2);
GO

https://www.brentozar.com/pastetheplan/?id=Hyn8o50D7

The instance has the following settings:

- Cost Threshold for Parallelism = 5
- Max Degree of Parallelism = 0
user3399498 (595 rep)
Sep 6, 2018, 09:01 AM • Last activity: Sep 6, 2018, 07:23 PM
3 votes
3 answers
16499 views
What is meant by a blind write in a schedule?
I know that every conflict-serialisable schedule is view-serialisable, but the converse is not true. I read on a webpage that when a schedule is view-serialisable but not conflict-serialisable, there exist some blind writes. So what is a blind write?
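For a concrete illustration (the standard textbook example, not taken from the webpage mentioned above): a blind write is a write to a data item by a transaction that never read that item. In the schedule $r_1(A)\ w_2(A)\ w_1(A)\ w_3(A)$, the writes $w_2(A)$ and $w_3(A)$ are blind. The schedule is view equivalent to the serial order $T_1, T_2, T_3$ (the same initial read by $T_1$ and the same final write by $T_3$), but it is not conflict serializable, because $r_1(A)$ before $w_2(A)$ gives $T_1 \rightarrow T_2$ while $w_2(A)$ before $w_1(A)$ gives $T_2 \rightarrow T_1$, a cycle.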
I know that every conflict-serialisable schedule is view-serialisable but the converse is not true. I read in a webpage that when a schedule is view-serialisable but not conflict-serialisable then there exist some blind writes. So what is a blind write?
Saravanan (219 rep)
Sep 18, 2015, 08:20 AM • Last activity: Nov 16, 2017, 10:49 AM
1 vote
1 answer
65 views
Serializability schedules: can someone help me understand it?
I am struggling with the Databases courses I have, so I am trying to find an answer to my question. This has been asked:

> Add the appropriate locks to enforce serializability on the following schedule:
>
> T1: W(Y)
>
> T2: R(Y)

And:

> Add the appropriate locks to enforce serializability on the following schedule:
>
> T1: R(Y) R(Z) W(Y)
>
> T2: W(Y) R(Z)

For me, right now, it's hocus pocus. Could someone explain to me what exactly I need to do here... as in: **what's the structure I need to follow**? I need some structure to understand it.

> R(Y) is a read action on database object Y
>
> W(Y) is a write action on Y
>
> S(Y) sets a shared (read) lock on Y
>
> X(Y) sets an exclusive (write) lock on Y
>
> U(Y) releases any lock on Y

PS: it's not about the answer to the questions, although I would appreciate it... it's about me understanding WHY that's the answer. Hope someone can help me. I'd appreciate it a lot!
Siyah (113 rep)
May 24, 2017, 10:39 AM • Last activity: May 24, 2017, 04:22 PM
1 vote
0 answers
27 views
Do start and end points change conflict serialisability?
If we have two transactions, and the following schedule: S1 R1(X) S2 R2(X) R2(Y) E1 W2(X) W2(Y) E2 where Si and Ei indicate the start and end of the ith transaction respectively. Does this change the way you'd find whether or not this schedule is conflict serialisable? In other words, can I assume that the schedule is just: R1(X) R2(X) R2(Y) W2(X) W2(Y) when finding out whether it is serialisable or not?
WeakLearner (177 rep)
Apr 26, 2017, 11:03 AM • Last activity: Apr 26, 2017, 11:54 AM
8 votes
1 answer
2113 views
debugging PostgreSQL serialization failures
I am trying to migrate our PostgreSQL 9.4 database from transaction level READ COMMITTED to either REPEATABLE READ or SERIALIZABLE. In either case, I'm exposed to a new set of errors with the format:

(for both)
ERROR: could not serialize access due to concurrent update

(just for SERIALIZABLE)
ERROR: could not serialize access due to read/write dependencies among transactions

After reading [the wiki page on SSI](https://wiki.postgresql.org/wiki/SSI) and [the docs](http://www.postgresql.org/docs/9.4/static/transaction-iso.html), I thoroughly understand the error conditions which could cause these errors, how to handle them, and even best practices to avoid them. However, I see no way to determine the data dependency which caused them from any debug output that PostgreSQL could provide, or really any debug information at all. Is there any way to get this information from the database, either by executing additional queries at the time of the rollback or through some logging mechanism? Having this information would allow me to make application-level changes (locking, different queries, etc.) that would eliminate some of the data races to avoid an excessive number of rollbacks.
Dan (299 rep)
Apr 18, 2015, 05:03 AM • Last activity: Mar 28, 2017, 11:43 AM
0 votes
1 answer
1322 views
Storing data in multiple rows vs serialized data
I am currently designing a database table where I store the list of courses chosen by students for a session. My current design stores each course in a row, and a student can have up to 30 courses per session. This means that 300 students in a single department will consume 30*300 = 9000 rows for just one session. There are over 50 departments in the school, each department has at least 300 students per level (5 levels), and up to 30 courses are taken per session. The system is designed in such a way that this stored data will not be deleted.

**Meaning:** 50 (departments) * 300 (possible students) * 5 (levels) * 30 (rows for courses) = 2,250,000 (possible rows consumed in a session).

**Serialized Format**

Is it better to store a student's courses in a serialized format that uses just one row?

50 (departments) * 300 (possible students) * 5 (levels) * 1 (rows for courses) = 75,000 (possible rows consumed in a session).

I calculated this and I think it makes sense, but I don't know if it is a faulty design.

**Low Point**

I recognise that an update or deletion query will overwrite/delete everything in the courses column.

Any insightful help will be appreciated.

**Note:** I use MySQL for my database.
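For comparison, a sketch of the two designs in MySQL; the table and column names are hypothetical, and the serialized variant assumes MySQL 5.7+ for the JSON type (TEXT works on older versions):

-- normalized: one small row per (student, session, course); easy to index,
-- join and update per course
CREATE TABLE student_course (
    student_id INT NOT NULL,
    course_id  INT NOT NULL,
    session_id INT NOT NULL,
    PRIMARY KEY (student_id, session_id, course_id),
    KEY idx_course_session (course_id, session_id)
) ENGINE=InnoDB;

-- serialized: one row per student per session; fewer rows, but individual
-- courses cannot be indexed, joined or updated without rewriting the column
CREATE TABLE student_course_serialized (
    student_id INT NOT NULL,
    session_id INT NOT NULL,
    courses    JSON NOT NULL,
    PRIMARY KEY (student_id, session_id)
) ENGINE=InnoDB;

Row count alone is rarely the deciding factor: a few million narrow rows per session are well within what InnoDB handles comfortably with the right indexes, while the serialized column mainly saves rows at the cost of queryability.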
BlackPearl (103 rep)
Dec 26, 2016, 07:27 AM • Last activity: Dec 26, 2016, 08:27 PM