Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
148
views
Can I rely on the MSDTC to guarantee a distributed transaction was completed?
I've come across a bug using the TransactionScope from the .NET library, where we're getting a TransactionInDoubt exception. After doing a lot of research into something I don't know much about, I've figured out that this happens when there is a communication error with the database, leaving us unsure whether the transaction completed.
Trying to handle this error quickly became a pain, however; there seem to be a lot of possible problems that arise.
Basically, I want to know if there is a guarantee that a distributed transaction across multiple databases will be committed or rolled back on ALL databases, not committed on one but not another.
From my knowledge, the MSDTC controls a distributed transaction and handles all of this for me. However, reading up on it a little, it seems that if there is a communication problem while trying to confirm the commit to one of the databases, then the commit won't be performed for that database, but the others will be committed.
There are plenty of resources online that discuss this, and it's often a lot of conceptual talk. My simple question is: can I rely on the MSDTC, or do we have to come up with a separate solution to the problem?
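For background on what's happening under the hood: MSDTC runs two-phase commit, and a database that has voted "prepared" but then loses contact with the coordinator is left in doubt - it will neither commit nor roll back until the outcome is redelivered or an administrator resolves it manually. A minimal sketch of that manual-resolution side on SQL Server (the unit-of-work GUID below is a hypothetical placeholder):
-- Report the oldest active transaction in the current database; in-doubt
-- distributed transactions show up with their unit-of-work (UOW) GUID
DBCC OPENTRAN;
-- Roll back an orphaned in-doubt distributed transaction by its UOW GUID
-- (placeholder value below); resolving it as a commit instead is done from
-- the MSDTC administration console once the coordinator's outcome is known
KILL 'D5499C66-E398-45CA-BF75-467854B5C0C3';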
DubDub
(111 rep)
Dec 4, 2019, 08:54 AM
• Last activity: Jul 19, 2025, 06:09 PM
4
votes
1
answers
222
views
T-SQL: Iteration vs. Set-Based Operations When Independence of Ancillary Data is Needed
BACKGROUND
----------
I am using Microsoft SQL Server Management Studio v18.9.1 on Windows 10.
I've created a database to store test results of our company's widget (not yet being produced). I am working on a stored procedure to populate that database with random test result records. The purpose of this is to have a data set to use for query development while waiting for real widgets to start being produced and tested.
In what follows, our test suite is composed of multiple subtest types. For this post, I've created only one subtest type. The example below runs the test suite one time (with a **[test_id]** of 0).
Within a test, a given subtest may be run multiple times if the tester deems it necessary for any reason. In the example below, the one subtest I show is run five times for illustrative purposes.
The subtest I show has two parameters, the first of which can take on two discrete values, and the second of which can take on three discrete values. All subtests must be run over all parameter combinations. So, a subtest result consists of 2 * 3 = 6 partial results (one for each parameter combination).
When a parameter combination is tested, either anomalies are not observed (pass) or one or more anomalies are observed (fail). If any of the six parameter combinations produces one or more anomalies, the subtest as a whole fails.
A subtest result record contains the following columns:
- **[test_id]**
- **[subtest_run_number]**
- **[parameter_1]**
- **[parameter_2]**
- **[anomaly_id]**
- **[anomaly_magnitude]**
- **[subtest_type]**
If testing with a given parameter combination produces no anomalies, this is noted by creating a single subtest record containing the two parameter values, an **[anomaly_id]** of 0, and an **[anomaly_magnitude]** of NULL.
If testing with a given parameter combination produces N anomalies, this is noted by creating N subtest records. Each subtest record contains the parameter values that produced the anomaly. Each subtest record also contains a 1-based anomaly ID for one of the anomalies seen as well as the "magnitude" of that anomaly.
In producing this data set of random test results, I am trying to use set-based operations rather than iteration. As can be seen below, I did have to use iteration when 1) looping over the number of times the subtest would be run and when 2) looping over the (randomly-chosen) number of anomalies that were to be created for a given parameter combination.
Where I was able to use set-based operations was when creating test result data for every parameter combination. This was a natural place to use a cross join since each parameter has a table defining its valid values. However, there's a problem...
THE PROBLEM
-----------
Within a subtest run, every parameter combination has the same number of anomalies associated with it. What I want is for the anomaly count associated with each parameter combination to be able to vary independently.
AN UNDESIRABLE SOLUTION
-----------------------
The only way I can think of accomplishing this is to forego set-based operations and iterate as follows:
DECLARE @parameter_1 as INT = 1;
DECLARE @parameter_2 as INT = 1;
DECLARE @anomaly_count AS INT;
EXEC [dbo].[rand_int] @a = 0, @b = 2, @random_int = @anomaly_count OUTPUT;
WHILE @parameter_1 <= 2
BEGIN
    SET @parameter_2 = 1;
    WHILE @parameter_2 <= 3
    BEGIN
        -- Use @anomaly_count to insert this combination's pass row or anomaly
        -- rows (insert statements elided), then draw a fresh, independent
        -- count for the next parameter combination
        EXEC [dbo].[rand_int] @a = 0, @b = 2, @random_int = @anomaly_count OUTPUT;
        SET @parameter_2 = @parameter_2 + 1;
    END;
    SET @parameter_1 = @parameter_1 + 1;
END;
**SCRIPT 1**
-- Script 1
-- Create the tables and stored procedures ([dbo].[test_runs],
-- [dbo].[parameter_1_values], and [dbo].[parameter_2_values] are created
-- similarly; column types here are inferred)
CREATE TABLE [dbo].[subtest_type_1_data] (
    [test_id] VARCHAR(20) NOT NULL,
    [subtest_run_number] INT NOT NULL,
    [parameter_1] INT NOT NULL,
    [parameter_2] INT NOT NULL,
    [anomaly_id] INT NOT NULL,
    [anomaly_magnitude] FLOAT NULL,
    [subtest_type] VARCHAR(20) NOT NULL DEFAULT 'subtest_type_1',
    CHECK (
        ([anomaly_id] > 0 AND [anomaly_magnitude] IS NOT NULL)
        OR
        ([anomaly_id] = 0 AND [anomaly_magnitude] IS NULL)
    )
);
GO
-- Stored Procedure
CREATE PROC [dbo].[rand_int]
@a AS INT,
@b AS INT,
@random_int AS INT OUTPUT
AS
SET NOCOUNT ON;
SET @random_int = FLOOR(RAND() * (@b - @a + 1)) + @a;
GO
-- Stored Procedure
CREATE PROC [dbo].[generate_random_subtest_type_1_data]
@test_id AS VARCHAR(20)
AS
SET NOCOUNT ON;
DECLARE @subtest_type_1_count AS INT = 5;
DECLARE @subtest_type_1_loop AS INT = 0;
WHILE @subtest_type_1_loop < @subtest_type_1_count
BEGIN
    -- One random anomaly count is drawn per subtest run, so every parameter
    -- combination within the run ends up sharing it (the problem described above)
    DECLARE @anomaly_count AS INT;
    DECLARE @subtest_result AS VARCHAR(4);
    EXEC [dbo].[rand_int] @a = 0, @b = 2, @random_int = @anomaly_count OUTPUT;
    IF @anomaly_count > 0
        SET @subtest_result = 'fail';
    ELSE
        SET @subtest_result = 'pass';
INSERT INTO [dbo].[test_runs]
([test_id], [subtest_run_number], [subtest_type], [subtest_result])
VALUES (@test_id, @subtest_type_1_loop, 'subtest_type_1', @subtest_result);
IF @anomaly_count = 0
BEGIN
INSERT INTO [dbo].[subtest_type_1_data] ([test_id], [subtest_run_number],
[parameter_1], [parameter_2], [anomaly_id], [anomaly_magnitude])
SELECT @test_id AS [test_id],
@subtest_type_1_loop AS [subtest_run_number],
[parameter_1_value] AS [parameter_1_value],
[parameter_2_value] AS [parameter_2_value],
0 AS [anomaly_id],
NULL AS [anomaly_magnitude]
FROM [dbo].[parameter_1_values]
CROSS JOIN [dbo].[parameter_2_values];
END;
ELSE
BEGIN
DECLARE @anomaly_loop AS INT;
SET @anomaly_loop = 1;
WHILE @anomaly_loop <= @anomaly_count
BEGIN
-- Instead of RAND(), use ABS(CHECKSUM(NewId()) / 2147483647.0).
-- This is because RAND() gets invoked only once in the INSERT statement below.
--
-- By using an expression based on NewID(), every row will be assigned its
-- own random number.
INSERT INTO [dbo].[subtest_type_1_data] ([test_id], [subtest_run_number],
[parameter_1], [parameter_2], [anomaly_id], [anomaly_magnitude])
SELECT @test_id AS [test_id],
@subtest_type_1_loop AS [subtest_run_number],
[parameter_1_value] AS [parameter_1_value],
[parameter_2_value] AS [parameter_2_value],
@anomaly_loop AS [anomaly_id],
ABS(CHECKSUM(NewId()) / 2147483647.0) AS [anomaly_magnitude]
FROM [dbo].[parameter_1_values]
CROSS JOIN [dbo].[parameter_2_values];
SET @anomaly_loop = @anomaly_loop + 1;
END;
END;
SET @subtest_type_1_loop = @subtest_type_1_loop + 1;
END;
GO
**SCRIPT 2**
-- Script 2
-- Delete old data, generate a new data set of random test results, and display it
DELETE FROM [dbo].[subtest_type_1_data];
DELETE FROM [dbo].[test_runs];
GO
EXEC [dbo].[generate_random_subtest_type_1_data] @test_id = '0';
GO
SELECT * FROM [dbo].[test_runs];
SELECT * FROM [dbo].[subtest_type_1_data];
GO
EXAMPLE OUTPUT (current, incorrect)
-----------------------------------
In the example output below:
- All parameter combinations within subtest run 0 have the **same number** (0) of anomalies
- All parameter combinations within subtest run 1 have the **same number** (0) of anomalies
- All parameter combinations within subtest run 2 have the **same number** (1) of anomalies
- All parameter combinations within subtest run 3 have the **same number** (0) of anomalies
- All parameter combinations within subtest run 4 have the **same number** (2) of anomalies
I would like each parameter combination's anomaly count to be able to vary independently of any other parameter combination's anomaly count.
test_id subtest_run_number parameter_1 parameter_2 anomaly_id anomaly_magnitude subtest_type
------- ------------------ ----------- ----------- ---------- ----------------- ------------
0 0 1 1 0 NULL subtest_type_1
0 0 1 2 0 NULL subtest_type_1
0 0 1 3 0 NULL subtest_type_1
0 0 2 1 0 NULL subtest_type_1
0 0 2 2 0 NULL subtest_type_1
0 0 2 3 0 NULL subtest_type_1
0 1 1 1 0 NULL subtest_type_1
0 1 1 2 0 NULL subtest_type_1
0 1 1 3 0 NULL subtest_type_1
0 1 2 1 0 NULL subtest_type_1
0 1 2 2 0 NULL subtest_type_1
0 1 2 3 0 NULL subtest_type_1
0 2 1 1 1 0.8730268 subtest_type_1
0 2 1 2 1 0.5566615 subtest_type_1
0 2 1 3 1 0.4599889 subtest_type_1
0 2 2 1 1 0.9322677 subtest_type_1
0 2 2 2 1 0.3515796 subtest_type_1
0 2 2 3 1 0.872755 subtest_type_1
0 3 1 1 0 NULL subtest_type_1
0 3 1 2 0 NULL subtest_type_1
0 3 1 3 0 NULL subtest_type_1
0 3 2 1 0 NULL subtest_type_1
0 3 2 2 0 NULL subtest_type_1
0 3 2 3 0 NULL subtest_type_1
0 4 1 1 1 0.6965834 subtest_type_1
0 4 1 1 2 0.4588626 subtest_type_1
0 4 1 2 1 0.1284888 subtest_type_1
0 4 1 2 2 0.4331938 subtest_type_1
0 4 1 3 1 0.3083588 subtest_type_1
0 4 1 3 2 0.8907238 subtest_type_1
0 4 2 1 1 0.4016767 subtest_type_1
0 4 2 1 2 0.8041269 subtest_type_1
0 4 2 2 1 0.5932015 subtest_type_1
0 4 2 2 2 0.5389434 subtest_type_1
0 4 2 3 1 0.7058043 subtest_type_1
0 4 2 3 2 0.749176 subtest_type_1
EXAMPLE OUTPUT (desired)
------------------------
Below, I show an example of what desired output might look like. (Here, I show only rows associated with **[subtest_run_number]** = 0. Of course, what's being demonstrated here holds for all **[subtest_run_number]** values.)
In the example output below:
- The parameter combination (1, 1) has one anomaly (**[anomaly_id]** value is only 1)
- The parameter combination (1, 2) has zero anomalies (**[anomaly_id]** value is only 0)
- The parameter combination (1, 3) has two anomalies (**[anomaly_id]** values are 1, 2)
- The parameter combination (2, 1) has one anomaly (**[anomaly_id]** value is only 1)
- The parameter combination (2, 2) has zero anomalies (**[anomaly_id]** value is only 0)
- The parameter combination (2, 3) has three anomalies (**[anomaly_id]** values are 1, 2, 3)
The salient point is that the number of anomalies per parameter combination can vary within a given value of **[subtest_run_number]**. In the incorrect example output shown earlier, the number of anomalies per parameter combination could not vary within a given value of **[subtest_run_number]**.
test_id subtest_run_number parameter_1 parameter_2 anomaly_id anomaly_magnitude subtest_type
------- ------------------ ----------- ----------- ---------- ----------------- ------------
0 0 1 1 1 0.5095024 subtest_type_1
0 0 1 2 0 NULL subtest_type_1
0 0 1 3 1 0.5062660 subtest_type_1
0 0 1 3 2 0.3940517 subtest_type_1
0 0 2 1 1 0.6216237 subtest_type_1
0 0 2 2 0 NULL subtest_type_1
0 0 2 3 1 0.5802680 subtest_type_1
0 0 2 3 2 0.5673455 subtest_type_1
0 0 2 3 3 0.5517588 subtest_type_1
MY QUESTION
-----------
Is there a way I can continue to use set-based operations but have each parameter combination have its own, independent anomaly count? Or, must I resort to iteration?
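For what it's worth, one set-based shape an answer could take - a minimal sketch, not the original poster's code, reusing the tables above: draw an independent random count per parameter combination with `NEWID()` (materialized into a temp table so each draw is evaluated exactly once), then expand counts into rows with a small tally:
SELECT [parameter_1_value] AS [parameter_1],
       [parameter_2_value] AS [parameter_2],
       ABS(CHECKSUM(NEWID())) % 3 AS [anomaly_count]  -- independent 0..2 per row
INTO #combo
FROM [dbo].[parameter_1_values]
CROSS JOIN [dbo].[parameter_2_values];

SELECT c.[parameter_1],
       c.[parameter_2],
       COALESCE(t.n, 0) AS [anomaly_id],              -- 0 = clean "pass" row
       CASE WHEN t.n IS NOT NULL
            THEN ABS(CHECKSUM(NEWID()) / 2147483647.0)
       END AS [anomaly_magnitude]
FROM #combo AS c
LEFT JOIN (VALUES (1), (2)) AS t(n)                   -- tally up to the max count
       ON t.n <= c.[anomaly_count];
A combination that draws 0 keeps a single row with an **[anomaly_id]** of 0 and a NULL magnitude; one that draws k expands to k anomaly rows.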
Dave
(197 rep)
Nov 2, 2022, 01:58 PM
• Last activity: Jun 25, 2025, 03:06 PM
0
votes
1
answers
209
views
PostgreSQL CREATE TYPE using multiple relations?
I'm becoming familiar with PostgreSQL's ability to define a data type (https://www.postgresql.org/docs/current/sql-createtype.html) using `CREATE TYPE` syntax.
It appears to me that this feature does not allow composing a type out of relations/tables. In other words, I can do this:
CREATE TYPE compfoo AS (f1 int, f2 text);
but I can't do
CREATE TYPE compfoo AS (f1 relation, f2 relation);
I was wondering if my understanding is correct, and if not, how one could achieve the latter?
I want to indicate that I don't have any particular application for it. I would actually be interested in starting a discussion on an application! I am just trying to expand my understanding of this feature, which is often touted as a very powerful feature in the extensible DBMS world.
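One adjacent fact that may feed the discussion (a sketch, not an official idiom): every PostgreSQL table already doubles as a composite row type, and composite-type fields may themselves be composite types or arrays of them, which is the closest built-in approximation to composing a type out of relations:
CREATE TABLE foo (f1 int, f2 text);       -- also defines the row type "foo"
CREATE TYPE compfoo AS (a foo, b foo[]);  -- fields typed by a table's row type
SELECT ROW(ROW(1, 'x')::foo, ARRAY[ROW(2, 'y')::foo])::compfoo;
What this still isn't is a field holding "a relation" with its own open-ended cardinality; an array of row types is as close as the type system gets.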
Zeruno
(547 rep)
Jul 5, 2019, 04:48 PM
• Last activity: Jun 19, 2025, 11:01 PM
0
votes
1
answers
112
views
Is that table in BCNF?
In the book I am currently reading, this table is mentioned as an example of what we can do if a table fulfills BCNF but still has redundancies. It then mentions that arrays are a solution. But is that table even in BCNF? Aren't there functional dependencies Language -> PersNr and Programming -> PersNr? Neither attribute is a superkey, since they do not identify the rows uniquely, so they break BCNF in my view. Or am I misunderstanding something?

Marlon Brando
(141 rep)
Jan 22, 2025, 01:48 PM
• Last activity: Jan 22, 2025, 02:08 PM
1
votes
0
answers
84
views
Is there a formalism for the kinds of constraints we can implement with indexed-views? Or at least a rule-of-thumb?
SQL Server’s Indexed Views provide a performant way to implement _certain classes_ of database-scoped constraints - such as ensuring a value is unique over multiple tables - or other non-key, cross-table value constraints.
As far as I understand it: provided you can reduce your desired constraint to a single `SELECT` query using some combination of `INNER JOIN`, `CROSS JOIN`, and/or `COUNT_BIG`/`SUM`, such that invalid data causes duplicate output rows, then using an indexed view is doable.
…but how can I know (or ideally: prove) that some proposed constraint can be implemented this way? And how can I know that some other kind of constraint is impossible to implement?
Another consideration is that the limitation of only having `INNER JOIN` and `CROSS JOIN` becomes less of a restriction with the help of utility tables containing data ranges (either manually maintained or using `GENERATE_SERIES`) - this enables other classes of constraints to be implemented - but I struggle to conceptualise those classes (do they have names?)
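To make the duplicate-rows mechanism concrete, here is a minimal hedged sketch with hypothetical tables, enforcing "no Code may exist in both TableA and TableB": a violating Code joins once, the two-row helper doubles it, and the unique index refuses the duplicates:
CREATE TABLE dbo.Pair (n int NOT NULL PRIMARY KEY);  -- helper: exactly two rows
INSERT INTO dbo.Pair (n) VALUES (1), (2);
CREATE TABLE dbo.TableA (Code int NOT NULL);
CREATE TABLE dbo.TableB (Code int NOT NULL);
GO
CREATE VIEW dbo.v_CodeInBothTables
WITH SCHEMABINDING
AS
SELECT a.Code
FROM dbo.TableA AS a
INNER JOIN dbo.TableB AS b ON b.Code = a.Code
CROSS JOIN dbo.Pair AS p;        -- each violation becomes two identical rows
GO
CREATE UNIQUE CLUSTERED INDEX ux_v_CodeInBothTables
    ON dbo.v_CodeInBothTables (Code);
Any INSERT or UPDATE that would put the same Code in both tables then fails with a duplicate-key error on the view's index, which is exactly the reduction described above.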
Dai
(632 rep)
Oct 15, 2022, 01:26 AM
• Last activity: Oct 19, 2024, 09:54 PM
0
votes
1
answers
69
views
Is snapshot isolation only potentially different from serializability if there are "cycles" between transactions with reads and writes?
I'm trying to figure out a better intuition for what exactly are the sorts of consistency anomalies that snapshot isolation allows to happen. The description on Wikipedia says:
> In databases, and transaction processing (transaction management), snapshot isolation is a guarantee that all reads made in a transaction will see a consistent snapshot of the database (in practice it reads the last committed values that existed at the time it started), and the transaction itself will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot.
> When the transaction concludes, it will successfully commit only if the values updated by the transaction have not been changed externally since the snapshot was taken. Such a write–write conflict will cause the transaction to abort.
So my reasoning goes:
- If two transactions have interleaved reads, that alone cannot violate serializability, because if nobody is changing the data then the end result will be the same regardless of what order the reads occur.
- If two transactions have interleaved writes, then snapshot isolation will abort and retry at least one of them until it doesn't have interleaved writes, thus preventing that.
- So then it seems like the only way that snapshot isolation could allow serializability to be violated would be interleaving writes with reads.
So then I try to imagine a simple case where there's two transactions and three disjoint sets of records, kind of like a Venn diagram, but where there is only interleaving between tx 1's writes and tx 2's reads, not the other way around:
- Set of records A is written by tx 1
- Set of records B is written by tx 1 and read by tx 2
- Set of records C is written by tx 2
And it seems like in this situation it's still not possible for snapshot isolation to violate serializability. Because I figure:
- If tx 2 is considered to start before tx 1 commits, it will only read the versions of records in B from before tx 1 changed them, and then also tx 2 will not change any records in B because it is only reading them, so the execution will be serializable to the order [tx 2, tx 1].
- If not, then tx 2 is considered to start after tx 1 commits, in which case it seems obvious that the execution will be serializable to the order [tx 1, tx 2].
Since it seems like violating serializability with snapshot isolation in this case is still impossible, that makes it seem like snapshot isolation can only violate serializability if there exists _both_:
- Some records which are written by tx 1 and read by tx 2
- Some records which are written by tx 2 and read by tx 1
Or, alternatively, some more indirect loop, e.g. tx 1 writes A and reads B, tx 2 writes B and reads C, and tx 3 writes C and reads A.
And it seems conspicuous that the common example I see of an anomaly which snapshot isolation allows is exactly such a case:
> As a concrete example, imagine V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 ≥ 0). Both balances are currently $100. Phil initiates two transactions concurrently, T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2.
>
> If the database guaranteed serializable transactions, the simplest way of coding T1 is to deduct $200 from V1, and then verify that V1 + V2 ≥ 0 still holds, aborting if not. T2 similarly deducts $200 from V2 and then verifies V1 + V2 ≥ 0. Since the transactions must serialize, either T1 happens first, leaving V1 = −$100, V2 = $100, and preventing T2 from succeeding (since V1 + (V2 − $200) is now −$200), or T2 happens first and similarly prevents T1 from committing.
>
> If the database is under snapshot isolation(MVCC), however, T1 and T2 operate on private snapshots of the database: each deducts $200 from an account, and then verifies that the new total is zero, using the other account value that held when the snapshot was taken. Since neither update conflicts, both commit successfully, leaving V1 = V2 = −$100, and V1 + V2 = −$200.
So I wanted to ask: is my formulation and understanding of the situation correct, that snapshot isolation can only violate serializability if there are such "loops" between the sets of records that transactions write and read? Or are there other possible ways for anomalies to arise?
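For the record, your formulation matches what the literature calls *write skew*: a cycle of read-write dependencies between transactions whose write sets are disjoint. The quoted bank example, sketched as two interleaved sessions (hypothetical table, SQL Server syntax, snapshot isolation assumed enabled on the database):
-- Setup
CREATE TABLE dbo.Balances (id int PRIMARY KEY, amount int NOT NULL);
INSERT INTO dbo.Balances VALUES (1, 100), (2, 100);

-- Session T1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
-- T1 reads both rows from its snapshot: 100 + 100, so a $200 withdrawal
-- still keeps the total >= 0, and the check passes
UPDATE dbo.Balances SET amount = amount - 200 WHERE id = 1;
-- ...meanwhile session T2 performs the same check against ITS snapshot
-- (also 100 + 100) and runs:
--     UPDATE dbo.Balances SET amount = amount - 200 WHERE id = 2;
COMMIT;
-- Both commits succeed: the write sets {id 1} and {id 2} are disjoint, so no
-- write-write conflict is detected, yet the total is now -200, an outcome no
-- serial order of T1 and T2 could produce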
Phoenix
(101 rep)
Mar 28, 2024, 08:41 PM
• Last activity: Mar 29, 2024, 09:29 AM
2
votes
3
answers
5503
views
3NF Vs 2NF, which is more strict?
Which of these is true:
1. If a table is in 2nd normal-form then it must be in 3rd normal-form.
2. If a table is in 3rd normal-form then it must be in 2nd normal-form.
I know only one of them is correct. If it's the first, it makes sense; if it's the second, it doesn't make any sense to me. Maybe an example?
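A hedged worked example (mine, not from the original thread) of a table in 2NF but not 3NF - the direction that exists because the normal forms nest, with 3NF presupposing 2NF while 2NF still permits transitive dependencies:
-- Key: EmployeeID. With a single-column key there can be no partial
-- dependency, so 2NF holds trivially; but DeptName depends transitively on
-- the key via the non-key attribute DeptID (EmployeeID -> DeptID -> DeptName),
-- which violates 3NF and repeats the department name once per employee
CREATE TABLE Employee (
    EmployeeID int PRIMARY KEY,
    DeptID     int NOT NULL,
    DeptName   varchar(50) NOT NULL
);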
Eve
Feb 1, 2010, 09:48 PM
• Last activity: Dec 22, 2023, 05:09 PM
1
votes
1
answers
124
views
Doubt on 2NF decomposition
I have read that 1NF, 2NF and 3NF decompositions are lossless and dependency-preserving.
Consider this example on a relation **R(A,B,C,D)** with functional dependencies set as **FD ={ AB->CD, A->C, BC->D}**
Here when we do 2NF decomposition we get **R1(A,C)** with **FD ={A->C}** and **R2(A,B,D)** with **FD ={AB->D}**
The functional dependency **BC->D** is lost when we join, but we know that 2NF is dependency-preserving, so why is it that we are unable to preserve the original FD?
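One way to confirm the dependency really is lost is a closure computation over the decomposed set {A->C, AB->D}:
(BC)+ = {B, C}    -- neither A->C nor AB->D can fire from {B, C}
Since D is not in (BC)+, BC->D is not implied by the preserved dependencies, so the loss is genuine; the guarantee of a lossless *and* dependency-preserving result is usually stated for the 3NF synthesis algorithm, not for an arbitrary 2NF decomposition.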
Arun Madhav
(11 rep)
Jan 7, 2023, 02:00 AM
• Last activity: Dec 19, 2023, 02:22 AM
0
votes
2
answers
91
views
How do I identify the SQL server brand via SQL queries
A 3rd-party API exposes an endpoint which accepts "SQL" queries. Their examples are simple `SELECT` statements; however, I want to write more complicated queries.
They do not specify which SQL brand they are using, which means I don't know the exact syntax I should be following.
How can I identify the SQL brand (SQL Server, MySQL, Azure SQL, Postgres, etc) with one or more SQL queries (no command line access, file permissions, etc)?
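A common probing approach, sketched below: each version query is accepted by some engines and rejected by the others, so whichever ones succeed (and the version string they return, which usually names the product) identify the engine:
SELECT @@VERSION;              -- SQL Server / Azure SQL (MySQL also accepts @@version)
SELECT version();              -- PostgreSQL and MySQL
SELECT sqlite_version();       -- SQLite only
SELECT banner FROM v$version;  -- Oracle only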
yeerk
(101 rep)
Dec 12, 2023, 02:31 PM
• Last activity: Dec 12, 2023, 04:14 PM
-1
votes
1
answers
59
views
How do database snapshots work?
How do database engines take consistent snapshots when writes could be happening at any time?
(A) Does the database engine block writes one table at a time while taking snapshots?
(B) Does the database engine block writes to the entire database while taking snapshots?
(C) Or, does the database engine do something more clever?
AlfaZulu
(99 rep)
Sep 15, 2023, 07:28 PM
• Last activity: Sep 15, 2023, 07:52 PM
7
votes
1
answers
4057
views
Transaction and data consistency during a failure
When a database begins a transaction, all statements executed in that transaction are isolated and atomic (and consistent and durable). These are pretty much the definition of a transaction.
[Wikipedia](http://en.wikipedia.org/wiki/ACID#Locking_vs_multiversioning) states that there are some databases that ensure a transaction remains isolated by locking the rows and not unlocking them until the transaction has committed.
My question is: How can a database that solely relies on locking guarantee consistency? If a power outage occurs mid-transaction, there may be data partially written to the row.
Even for databases like SQL Server that use a Temporary DB to perform all the transactions, what happens if a power outage occurs as the database is committing the transactions to disk?
Does the data become corrupted or does the database handle that correctly?
Richard
(1 rep)
Aug 11, 2011, 10:32 PM
• Last activity: Jul 6, 2023, 06:21 AM
0
votes
1
answers
62
views
What does relational capabilities mean in Codd's Foundation rule?
From Codd's 12 rules (emphasis mine):
> For any system that is advertised as, or claimed to be, a relational
> data base management system, that system must be able to manage data
> bases entirely through its ***relational capabilities***.
What does *"relational capabilities"* mean here? Does it mean that you could manage the database by only using a language like SQL? Does it mean anything beyond this?
Mehdi Charife
(131 rep)
May 16, 2023, 05:10 PM
• Last activity: May 18, 2023, 11:29 PM
0
votes
1
answers
78
views
the actual writes only after the transaction enters the committed state
The text below is from Database System Concepts by Silberschatz. It says that the actual writes take place only after the transaction enters the committed state.
**My question**: When the transaction has entered the committed state, there are no more statements to be executed, so who will perform these actual writes?
> We must be cautious when dealing with observable external writes, such as writes
to a user’s screen, or sending email. Once such a write has occurred, it cannot be
erased, since it may have been seen external to the database system. Most systems
allow such writes to take place only after the transaction has entered the committed
state. One way to implement such a scheme is for the database system to store any value
associated with such external writes temporarily in a special relation in the database,
and to perform the actual writes only after the transaction enters the committed state. If
the system should fail after the transaction has entered the committed state, but before
it could complete the external writes, the database system will carry out the external
writes (using the data in non-volatile storage) when the system is restarted.
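As I read it, the answer to "who" is: not the transaction's own statements. A sketch of the scheme the passage describes, with a hypothetical queue table - the transaction merely records the intended external write, and a separate database-system component (or the restart-recovery pass) performs it after commit:
CREATE TABLE pending_external_writes (
    id     int PRIMARY KEY,
    action varchar(200) NOT NULL,   -- e.g. the email to be sent
    done   bit NOT NULL DEFAULT 0
);

BEGIN TRANSACTION;
    -- ...the transaction's ordinary reads and writes...
    INSERT INTO pending_external_writes (id, action, done)
    VALUES (1, 'send confirmation email', 0);
COMMIT;   -- the transaction enters the committed state here

-- After commit - or after a crash, when the system restarts - a separate
-- worker reads the durable queue, performs each external write, and marks it:
UPDATE pending_external_writes SET done = 1 WHERE id = 1;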
learn9909
(111 rep)
Jan 7, 2021, 06:32 PM
• Last activity: Mar 28, 2023, 09:59 AM
1
votes
2
answers
2233
views
What does <> mean in Relational Calculus?
I saw a Tuple Relational Calculus formula, and it contained the symbol `<>`.
What does it mean?

CodyBugstein
(253 rep)
Dec 29, 2013, 01:29 AM
• Last activity: Feb 20, 2023, 01:56 PM
3
votes
3
answers
1425
views
Why does data get corrupted in "large" text files - Reasons to use database
I am just starting to learn about databases and using mysql. I read that databases are more reliable than having large ascii text files and that ascii text files are more prone to corruption as they get bigger.
Can someone explain why this is the case, and how databases safeguard against this, at a high level? Thank you.
user1527227
(133 rep)
Feb 8, 2014, 07:51 AM
• Last activity: Oct 26, 2022, 08:04 AM
4
votes
3
answers
1492
views
Understanding data abstraction levels
The three-level ANSI SPARC Database Architecture suggests three data abstraction levels, namely, *external*, *conceptual*, and *internal* levels.
If I understand correctly, the *external* level represents the users' views, the *conceptual* level is the conceptual diagram (ER model), and the *internal* level is the relational model.
My problem is that I found in the literature that besides those, there is a fourth level (the lowest one), which is the *physical* level.
I want to understand what it is, concretely. Is it at this point that we define the type of index, access paths, and things related to physical access to data?
user2804064
(225 rep)
Sep 21, 2017, 06:26 PM
• Last activity: Sep 30, 2022, 05:07 PM
0
votes
3
answers
369
views
Information sources for multiple hierarchy trees in a single table
I need some quality sources - books, websites etc. - to educate myself about putting multiple hierarchy trees in a single table using SQL.
I'm looking for some good theoretical and practical information.
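While you hunt for sources, the baseline shape most of them start from may help frame the search (a sketch in PostgreSQL-style SQL, names hypothetical): an adjacency list where a tree_id column keeps several trees apart in one table:
CREATE TABLE node (
    node_id   int PRIMARY KEY,
    tree_id   int NOT NULL,                       -- which tree the node belongs to
    parent_id int NULL REFERENCES node (node_id), -- NULL marks a tree's root
    label     varchar(50) NOT NULL
);

-- Walk one whole tree with a recursive query
WITH RECURSIVE subtree AS (
    SELECT node_id, parent_id, label
    FROM node
    WHERE tree_id = 1 AND parent_id IS NULL
    UNION ALL
    SELECT n.node_id, n.parent_id, n.label
    FROM node AS n
    JOIN subtree AS s ON n.parent_id = s.node_id
)
SELECT * FROM subtree;
The standard search terms are "adjacency list", "nested sets", "materialized path", and "closure table"; Joe Celko's *Trees and Hierarchies in SQL for Smarties* treats all of them in depth.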
Wozilla
(1 rep)
Sep 22, 2012, 06:28 PM
• Last activity: Aug 15, 2022, 06:19 PM
29
votes
2
answers
26521
views
Ready-to-Use Database models example
Where can I find ready-to-use database models?
I don't need a database with data in it, but only schemas (UML diagrams). Perhaps something like the data models at this link, but much more complex and real-world.
tasmaniski
(1195 rep)
Feb 14, 2012, 10:49 AM
• Last activity: Jun 20, 2022, 12:57 PM
0
votes
1
answers
296
views
View serializable schedule or not
Could you help me understand why the schedule in the picture below is view serializable without any "blind write"? Also, should the results be the same if two schedules are view equivalent? I did create a serial version of the schedule and compare, but they do not seem...

Shogo Minamoto
(1 rep)
Mar 31, 2020, 02:18 PM
• Last activity: Jun 16, 2022, 01:56 PM
15
votes
8
answers
1126
views
What is a "database"?
There was a lot of discussion in this question: https://dba.stackexchange.com/questions/18669/what-database-technologies-do-big-search-engines-use
So much discussion that it made me confused. So... what is a database, anyway? Are only relational databases "databases"? Are object-oriented databases "databases"? Is any system that allows me to store and retrieve information (like a map, list, etc) a database?
Or does a database have to store/retrieve information and also have some administration features like Users and Privileges? Was dBase III plus a database, since it wasn't really relational?
woliveirajr
(269 rep)
Jun 1, 2012, 06:38 PM
• Last activity: Jun 8, 2022, 05:13 PM
Showing page 1 of 20 total questions