
How to have uniqueness constraints for structures across multiple tables?

1 vote
1 answer
1878 views
Say I have a schema system something like this:

```sql
CREATE TABLE objects (
  id uuid PRIMARY KEY,
  type text
);

CREATE TABLE object_properties (
  id uuid PRIMARY KEY,
  source_id uuid, -- the object which has this property
  name text,      -- the property name
  value_id uuid   -- the property value object
);

-- ...and a table like this for each primitive data type
CREATE TABLE string_properties (
  id uuid PRIMARY KEY,
  source_id uuid, -- the object which has this property
  name text,      -- the property name
  value text      -- the property value string
);
```

I then want to create this object:

```js
{
  type: 'foo',
  bar: {
    type: 'bar',
    baz: {
      type: 'baz',
      slug: 'hello-world'
    }
  }
}
```

That is:

```
-- objects
id  | type
123 | foo
234 | bar
345 | baz

-- object_properties
source_id | name | value_id
123       | bar  | 234
234       | baz  | 345

-- string_properties
source_id | name | value
345       | slug | hello-world
```

I want to create this "object tree" only if a tree ending in `slug: hello-world` doesn't already exist. How best can I do that?

I can do it easily by first running a query, checking whether the object exists, and then creating it if not. But that is one query followed by one insert: there is a chance that two processes arrive at the same time, both run the query, both find nothing, and then both perform the insert. How can I prevent that?

Note: I am currently running each independent query+insert inside a transaction, so each transaction contains the query followed by the insert. Or will the insert made inside the first transaction be visible "outside" to the second transaction? I am using PostgreSQL / CockroachDB; is this a "read uncommitted" sort of setting?
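To make the problem concrete, here is the interleaving I am worried about, sketched side by side (the IDs are the illustrative ones from above; each process runs its own transaction):

```sql
-- Process A                                  -- Process B
BEGIN;
SELECT id FROM string_properties              BEGIN;
  WHERE name = 'slug'                         SELECT id FROM string_properties
    AND value = 'hello-world';                  WHERE name = 'slug'
-- finds nothing                                  AND value = 'hello-world';
                                              -- also finds nothing
INSERT INTO string_properties                 INSERT INTO string_properties
  (id, source_id, name, value)                  (id, source_id, name, value)
VALUES                                        VALUES
  (gen_random_uuid(), '345',                    (gen_random_uuid(), '999',
   'slug', 'hello-world');                       'slug', 'hello-world');
COMMIT;                                       COMMIT;
-- Result: two copies of the "same" tree.
```

This fails even with each pair wrapped in a transaction, because under the default isolation level neither transaction can see the other's uncommitted insert.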
Asked by Lance Pollard (221 rep)
Aug 2, 2022, 05:56 AM
Last activity: Aug 9, 2022, 12:32 PM