
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
220 views
CockroachDB or MariaDB cluster for many read slaves, 1 master?
I am after a recommendation on the following setup...

1 master that has all tables.
- 1 main table (table A) that the master will write to and read from often.
- 5+ tables (tables B+) that the master will read from infrequently; these do not need the most recently updated data (within 24 hrs is fine).

4+ slaves that have all tables.
- They will only read from table A and do not need the most recently updated data (within 24 hrs is fine).
- They will read and write to tables B+ often. (Each slave will have its own table and does not need to sync to the other slaves.)

Table B rows will have a relation to table A for querying.

My plan is to have the master set up the service for all clients, with the main details in table A. Individual functions of the service will be split out to different nodes/slaves. These functions will do a lot of reads/writes to tables B (each slave to its own table). I don't want those reads/writes to overload the master database, though (or the other slaves). I do want them to sync to the master at most every 24 hrs, so the master can get the updated data when it checks (which won't be often). When it reads, if the data has not been synced live it does not matter, as it will read it again the next day. Maybe I can have a setting on the slaves that syncing to the master is not urgent, so it happens when the network is quiet? Same with syncing to other slaves: not urgent.

I am familiar with MariaDB, but I am wondering if CockroachDB may be better for such a setup, if anyone recommends it? Ease of use, setup and reliability?
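Of the two options named, MariaDB at least has per-replica replication filters that match the "each slave keeps its own B table local" idea. A minimal sketch, assuming a database named `app` and placeholder table names, and assuming a MariaDB version where these filter variables are dynamic (otherwise they go in the server config). Note this only covers the master-to-slave direction; pushing each slave's B table back up to the master would need a separate mechanism.

```sql
-- Sketch, run on one replica: replicate only table A from the master,
-- leaving that replica's own B table purely local.
STOP SLAVE;
SET GLOBAL replicate_do_table = 'app.table_a';
START SLAVE;
```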
PixelPaul (109 rep)
Nov 9, 2021, 03:12 AM • Last activity: Jun 18, 2025, 06:05 AM
1 vote
2 answers
265 views
How to generate missing dates in a date range that do not exist in the joined table in CockroachDB?
I have Table A and Table B, which have:

- Table A: id, start_date, end_date
- Table B: id, table_a_id, archive_date

I want to return the dates that are missing in Table B, the dates that are NOT in the range of table_a's start_date and end_date. How can I do this in CockroachDB?

I have tried using generate_series, but I get an unknown signature error: generate_series does not take (date, date, interval).

```sql
WITH date_series AS (
    SELECT generate_series(
        (SELECT w.export_start_date FROM table_a w WHERE w.id = 123),
        (SELECT w.export_end_date FROM table_a w WHERE w.id = 123),
        '1 day'::interval) AS missing_date
)
SELECT ds.missing_date
FROM date_series ds
LEFT JOIN table_b a ON a.table_a_id = 123
```
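Neither CockroachDB nor Postgres ships a (date, date, interval) overload of generate_series, which is exactly what the error says; both do have the timestamp overloads, so casting the bounds sidesteps it. A minimal sketch using the question's column names (the schema says start_date/end_date while the attempted query says export_start_date/export_end_date; adjust as needed):

```sql
-- Sketch: cast the DATE bounds to TIMESTAMPTZ so the
-- (timestamptz, timestamptz, interval) overload applies,
-- then cast each generated point back to DATE.
WITH date_series AS (
    SELECT generate_series(
        (SELECT w.start_date::TIMESTAMPTZ FROM table_a w WHERE w.id = 123),
        (SELECT w.end_date::TIMESTAMPTZ   FROM table_a w WHERE w.id = 123),
        '1 day'::INTERVAL)::DATE AS d
)
SELECT ds.d AS missing_date
FROM date_series ds
LEFT JOIN table_b b
       ON b.table_a_id = 123
      AND b.archive_date = ds.d
WHERE b.id IS NULL;   -- anti-join: keep only dates table_b doesn't cover
```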
Brunaldo (23 rep)
Mar 6, 2024, 05:32 PM • Last activity: Mar 12, 2024, 05:02 PM
1 vote
2 answers
73 views
Inserting a Hyphen in between a composite primary key
I have more of a conceptual question. Is there a way to model the following:

```sql
CREATE TABLE AssetPair (
    older_asset_id INT8,
    newer_asset_id INT8,
    PRIMARY KEY (older_asset_id, '-', newer_asset_id)
);
```

My issue is that `older_asset_id` and `newer_asset_id` are incrementing IDs, so the following edge cases are possible:

```
older_asset_id: 1,  newer_asset_id: 234 --> PrimaryKey: 1234
older_asset_id: 12, newer_asset_id: 34  --> PrimaryKey: 1234  // same primary keys
```

If I were able to insert a hyphen between the two columns, I'd be able to avoid this issue. Any thoughts?
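If a single hyphenated key is genuinely wanted, a stored computed column can materialize it; a minimal sketch under CockroachDB's computed-column syntax, names as in the question:

```sql
-- Sketch: materialize the hyphenated form and key on it.
CREATE TABLE AssetPair (
    older_asset_id INT8 NOT NULL,
    newer_asset_id INT8 NOT NULL,
    pair_key STRING NOT NULL
        AS (older_asset_id::STRING || '-' || newer_asset_id::STRING) STORED,
    PRIMARY KEY (pair_key)
);
```

That said, a plain composite key, `PRIMARY KEY (older_asset_id, newer_asset_id)`, already compares the two columns separately rather than concatenating them, so (1, 234) and (12, 34) are distinct keys; the 1234 collision only arises when the values are joined into a single string.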
Eric Sharma (13 rep)
Feb 15, 2024, 10:03 PM • Last activity: Feb 16, 2024, 02:01 PM
1 vote
1 answer
152 views
Moving records between tables in CockroachDB is taking too long
I have a requirement where I have to move records from table `original_table` to table `new_table` for the last N days. Below is a small sample of my basic script that does this.

```
while True:

    result = INSERT INTO new_table (
        SELECT * FROM original_table
        WHERE (crdb_internal_expiration > (current_timestamp() + INTERVAL 'X days'))
        ORDER BY (unique_id) DESC
        LIMIT batch_limit OFFSET starting_point);

    if result.rowcount < batch_limit:
        break
```

A keyset variant of the inner query pages on the last processed id instead:

```
... WHERE unique_id > last_processed_id
ORDER BY id DESC
LIMIT 100;
```

The CockroachDB version is 23.
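A likely culprit for the slowdown is the OFFSET: each batch rescans and throws away all rows before starting_point, so batches get progressively more expensive as the copy advances. Keyset pagination keeps every batch roughly constant-cost. A minimal sketch, assuming unique_id is an indexed key on original_table and reusing the question's filter; the $1 placeholder and the batch size of 1000 are illustrative:

```sql
-- Sketch: resume each batch after the last key copied, instead of OFFSET.
INSERT INTO new_table
SELECT * FROM original_table
WHERE crdb_internal_expiration > current_timestamp() + INTERVAL '30 days'  -- '30' stands in for N
  AND unique_id > $1              -- $1 = largest unique_id from the previous batch
ORDER BY unique_id ASC
LIMIT 1000
RETURNING unique_id;              -- feed max(unique_id) back in as the next $1
```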
Brunaldo (23 rep)
Feb 14, 2024, 07:45 PM • Last activity: Feb 14, 2024, 08:06 PM
0 votes
1 answer
185 views
RSUs and Requests in CockroachDB?
Based on the free tier in CockroachDB Serverless, 250 million free RSUs are available. On that note, **how many requests** can this database handle **within 250M RSUs**, approximately?
user262995
Oct 27, 2022, 09:25 AM • Last activity: Oct 27, 2022, 02:24 PM
1 vote
1 answer
1878 views
How to have uniqueness constraints for structures across multiple tables?
Say I have a schema system something like this:

```
create table objects {
  uuid id;
  string type;
}

create table object_properties {
  uuid id;
  uuid source_id; // the object which has this property
  string name;    // the property name
  uuid value_id;  // the property value object
}

// ...and tables for each primitive data type

create table string_properties {
  uuid id;
  uuid source_id; // the object which has this property
  string name;    // the property name
  string value;   // the property value string
}
```

I then want to create this object:

```
{
  type: 'foo',
  bar: {
    type: 'bar',
    baz: {
      type: 'baz',
      slug: 'hello-world'
    }
  }
}
```

That is:

```
// objects
id  | type
123 | foo
234 | bar
345 | baz

// object_properties
source_id | name | value_id
123       | bar  | 234
234       | baz  | 345

// string_properties
source_id | name | value
345       | slug | hello-world
```

I want to only create this "object tree" if the tree ending in `slug: hello-world` doesn't exist. How best can I do that?

I can do it easily by first making a query, checking the object exists, and then creating it if not. But that is one query followed by one insert. There is a chance that two processes come in at the same time, both make the query, both succeed, and then both make the insert. How can I prevent that?

Note, I am currently having each independent query+insert both happen in a transaction, so each transaction has the query followed by the insert. Or will the update inside the first transaction be readable "outside" from the second transaction? I am using PostgreSQL / CockroachDB; is this a "read uncommitted" sort of setting?
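One standard way to close that check-then-insert race, sketched under two assumptions: the slug lives in `string_properties`, and `(name, value)` can be declared unique there. The insert then is the existence check; source_id is generated up front, as in the answer quoted in the next question.

```sql
-- Sketch: a unique constraint makes the insert itself the serialization point.
CREATE UNIQUE INDEX ON string_properties (name, value);

-- Two concurrent processes can both run this; at most one row lands.
-- The loser gets zero rows back and knows to skip building the tree.
INSERT INTO string_properties (id, source_id, name, value)
VALUES (gen_random_uuid(), gen_random_uuid(), 'slug', 'hello-world')
ON CONFLICT DO NOTHING
RETURNING id;
```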
Lance Pollard (221 rep)
Aug 2, 2022, 05:56 AM • Last activity: Aug 9, 2022, 12:32 PM
1 vote
1 answer
354 views
A query, safe against concurrent writes with READ COMMITTED, also safe with SERIALIZABLE isolation level?
I have asked this question, with a wonderfully informative answer: _[How to have uniqueness constraints for structures across multiple tables?](https://dba.stackexchange.com/a/315290/37246)_

The answer from Erwin Brandstetter suggests this:

~~~pgsql
WITH ins_string_properties AS (
   INSERT INTO string_properties (source_id, name, value)
   VALUES (gen_random_uuid(), 'slug', 'hello-world')
   ON CONFLICT DO NOTHING  -- to silence unique violation errors
   RETURNING source_id
   )
, ins_objects AS (
   INSERT INTO objects (id, type)
   SELECT o.id, o.type
   FROM ins_string_properties isp  -- always 0 or 1 rows
   CROSS JOIN LATERAL (
      VALUES (isp.source_id, 'baz')
           , (gen_random_uuid(), 'foo')
           , (gen_random_uuid(), 'bar')
      ) o(id, type)
   RETURNING id, type
   )
INSERT INTO object_properties (source_id, name, value_id)
SELECT io1.id, io2.type, io2.id
FROM ins_objects io1
JOIN ins_objects io2 ON io1.type = 'foo' AND io2.type = 'bar'
                     OR io1.type = 'bar' AND io2.type = 'baz'
;
~~~

I am just learning about CTEs, but the answer says:

> It's also safe under concurrent write load with default READ COMMITTED transaction isolation.

I am going to be using this with CockroachDB, and they [seem to suggest](https://www.cockroachlabs.com/docs/stable/demo-serializable.html) avoiding READ COMMITTED and using SERIALIZABLE instead. Can I use this query with SERIALIZABLE? If not, why not, and what must be modified to make it work with SERIALIZABLE?

These transaction isolation levels are new to me; I have mostly used PostgreSQL with Ruby on Rails ORMs in the past, so I haven't dug this deep into SQL. I am just trying to use the default SERIALIZABLE transaction isolation level that CockroachDB recommends, and I am not sure if/when exactly I can/can't use it, and not sure about this case. [This](https://stackoverflow.com/a/27229277/169992) is as much as I know about READ COMMITTED vs. SERIALIZABLE.
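One practical consequence of SERIALIZABLE in CockroachDB is that conflicting transactions are aborted with a retry error (SQLSTATE 40001) rather than silently interleaved, so a statement like the one above is normally wrapped in CockroachDB's documented client-side retry protocol. A sketch; the CTE from the answer goes where the comment indicates:

```sql
-- Sketch: CockroachDB's savepoint-based client-side retry protocol.
BEGIN;
SAVEPOINT cockroach_restart;

-- ...run the WITH ... INSERT statement from the answer here...

RELEASE SAVEPOINT cockroach_restart;
COMMIT;

-- If any statement fails with a 40001 retry error, the client should run
--   ROLLBACK TO SAVEPOINT cockroach_restart;
-- and re-issue the statements, repeating until COMMIT succeeds.
```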
Lance Pollard (221 rep)
Aug 6, 2022, 04:23 AM • Last activity: Aug 7, 2022, 08:02 PM
0 votes
1 answer
306 views
What happens to the n tables you insert into during a transaction?
With a database structure sort of like [this](https://dba.stackexchange.com/questions/315123/how-to-have-uniqueness-constraints-for-structures-across-multiple-tables):

```
create table objects {
  uuid id;
  string type;
}

create table object_properties {
  uuid id;
  uuid source_id; // the object which has this property
  string name;    // the property name
  uuid value_id;  // the property value object
}

// ...and tables for each primitive data type

create table string_properties {
  uuid id;
  uuid source_id; // the object which has this property
  string name;    // the property name
  string value;   // the property value string
}
```

where objects have properties (which can be linked to other objects)... What happens when we create a transaction which inserts records into all 3 tables? What happens in terms of reads and writes of other processes if the transaction is at the [SERIALIZABLE transaction level](https://www.cockroachlabs.com/docs/stable/demo-serializable.html) (CockroachDB and/or Postgres)?

> SERIALIZABLE isolation guarantees that even though transactions may execute in parallel, the result is the same as if they had executed one at a time, without any concurrency.

I don't quite see the practical implications of that. Does it block reads until the table writes are COMMITted? What happens if there are multiple calls to such a transaction in parallel? What all happens during such an "all-table-spanning" sequence of writes in a transaction?
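For intuition, here is the write path in question as a sketch, with the observable behavior noted in comments. The UUID literals are elided placeholders, and the abort-with-retry behavior described is CockroachDB's (Postgres SSI raises a serialization_failure at a similar point):

```sql
-- Sketch: the three inserts form one atomic unit under SERIALIZABLE.
BEGIN;
INSERT INTO objects (id, type) VALUES ('...', 'baz');
INSERT INTO object_properties (id, source_id, name, value_id)
VALUES ('...', '...', 'baz', '...');
INSERT INTO string_properties (id, source_id, name, value)
VALUES ('...', '...', 'slug', 'hello-world');
COMMIT;

-- Readers never observe a partial tree: rows from all three tables become
-- visible together at COMMIT, or not at all. Reads are not blocked in the
-- meantime; they simply don't see the uncommitted rows. If two such
-- transactions conflict (e.g. touch the same keys), one is aborted with a
-- serialization/retry error and must be retried by the client.
```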
Lance Pollard (221 rep)
Aug 2, 2022, 08:14 AM • Last activity: Aug 2, 2022, 03:19 PM
3 votes
1 answer
1213 views
Update all instances of key inside nested JSON array
I'm trying to update a complex JSON document stored in a table in CockroachDB. The CockroachDB query language is mostly Postgres-compatible except for the lack of procedural code, so a Postgres answer without user functions or explicit loops will most likely work. The sample code below is also Postgres-compatible. Among the things that make the document complex: it contains nested arrays and some redundant structures. The table and document are similar to the following:
> show create table foo;
  table_name |                      create_statement
-------------+--------------------------------------------------------------
  foo        | CREATE TABLE public.foo (
                 id INT8 NOT NULL,
                  data JSONB NOT NULL
              );

> insert into foo (id, data)
values(1, '{
    "foo": {
        "bar": {
            "bar": [
                {
                    "fields": [
                        {
                            "things": [],
                            "key": "something",
                            "value": ""
                        },
                        {
                            "things": [],
                            "key": "somethingelse",
                            "value": "1"
                        },
                        {
                            "things": [],
                            "key": "color",
                            "value": "pink"
                        }
                    ]
                },
                {
                    "id": "ALL",
                    "fields": [
                          {
                            "things": [],
                            "key": "something",
                            "value": ""
                        },
                        {
                            "things": [],
                            "key": "somethingelse",
                            "value": "1"
                        },
                        {
                            "things": [],
                            "key": "color",
                            "value": "white"
                        }
                    ]
                },
                {
                    "id": "1",
                    "fields": [
                          {
                            "things": [],
                            "key": "something",
                            "value": ""
                        },
                        {
                            "things": [],
                            "key": "somethingelse",
                            "value": "1"
                        },
                        {
                            "things": [],
                            "key": "color",
                            "value": "green"
                        }
                    ]
                },
                {
                    "id": "ALL",
                    "fields": [
                        {
                            "things": [],
                            "key": "something",
                            "value": ""
                        },
                        {
                            "things": [],
                            "key": "somethingelse",
                            "value": "1"
                        },
                        {
                            "things": [],
                            "key": "color",
                            "value": "red"
                        }
                    ]
                }
            ]
        }
    }
}');

> select jsonb_pretty(data) from foo;
                     jsonb_pretty
-------------------------------------------------------
  {
      "foo": {
          "bar": {
              "bar": [
                  {
                      "fields": [
                          {
                              "key": "something",
                              "things": [],
                              "value": ""
                          },
                          {
                              "key": "somethingelse",
                              "things": [],
                              "value": "1"
                          },
                          {
                              "key": "color",
                              "things": [],
                              "value": "pink"
                          }
                      ]
                  },
                  {
                      "fields": [
                          {
                              "key": "something",
                              "things": [],
                              "value": ""
                          },
                          {
                              "key": "somethingelse",
                              "things": [],
                              "value": "1"
                          },
                          {
                              "key": "color",
                              "things": [],
                              "value": "white"
                          }
                      ],
                      "id": "ALL"
                  },
                  {
                      "fields": [
                          {
                              "key": "something",
                              "things": [],
                              "value": ""
                          },
                          {
                              "key": "somethingelse",
                              "things": [],
                              "value": "1"
                          },
                          {
                              "key": "color",
                              "things": [],
                              "value": "green"
                          }
                      ],
                      "id": "1"
                  },
                  {
                      "fields": [
                          {
                              "key": "something",
                              "things": [],
                              "value": ""
                          },
                          {
                              "key": "somethingelse",
                              "things": [],
                              "value": "1"
                          },
                          {
                              "key": "color",
                              "things": [],
                              "value": "red"
                          }
                      ],
                      "id": "ALL"
                  }
              ]
          }
      }
  }
How do I set the value for all instances of the key named `color` to an empty string (`""`)? The syntax for handling this on nested arrays eludes me.
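Since neither jsonb_set path wildcards nor procedural loops are available, the usual pattern is to explode each nested array with jsonb_array_elements, rewrite the matching objects, and re-aggregate with jsonb_agg. A sketch against the sample row; it assumes every element of `data->'foo'->'bar'->'bar'` has a `fields` array, as in the sample. If element order must be strictly preserved, add WITH ORDINALITY and an ORDER BY, since jsonb_agg's input order is not formally guaranteed.

```sql
-- Sketch: rebuild the nested arrays, blanking "value" wherever key = 'color'.
UPDATE foo
SET data = jsonb_set(
    data,
    '{foo,bar,bar}',
    (SELECT jsonb_agg(
              jsonb_set(
                elem,
                '{fields}',
                (SELECT jsonb_agg(CASE WHEN f->>'key' = 'color'
                                       THEN jsonb_set(f, '{value}', '""'::JSONB)
                                       ELSE f END)
                 FROM jsonb_array_elements(elem->'fields') AS f)))
     FROM jsonb_array_elements(data->'foo'->'bar'->'bar') AS elem))
WHERE id = 1;
```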
JRL (31 rep)
Jun 29, 2022, 09:01 PM • Last activity: Jun 30, 2022, 07:05 PM
0 votes
1 answer
166 views
CockroachDB upgrade not finalized
I am running a three-node CockroachDB Docker cluster (Debian OS) and I recently upgraded the cluster from version v21.1 to version v21.2, following the procedure outlined in the [documentation](https://www.cockroachlabs.com/docs/v21.2/upgrade-cockroach-version) using the option

```sql
SET CLUSTER SETTING cluster.preserve_downgrade_option = '21.1';
```

Then I restarted the container of each node, one at a time, and when no issues arose I ran

```sql
RESET CLUSTER SETTING cluster.preserve_downgrade_option;
```

Now all three nodes are running the same version and **cluster.preserve_downgrade_option** is set to an empty string. Nonetheless, even after hours have elapsed since then, when I try to create a table that uses a feature introduced in version 21.2

```sql
create table randdb.public.randtable
(
    name_surname varchar(30),
    update_tim TIMESTAMPTZ DEFAULT now() ON UPDATE now(),
    PRIMARY KEY (name_surname)
)
```

I get the error message

```
SQL Error [0A000]: ERROR: version 21.1-1152 must be finalized to use ON UPDATE
```

Other than this, the cluster seems to be fully functional. Any ideas what has gone wrong?
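For diagnosis, the cluster's logical version can be checked directly, and finalization can be triggered by hand; these are the documented upgrade commands, run from a SQL shell on any node:

```sql
-- Still reports a 21.1-x internal version if finalization never ran:
SHOW CLUSTER SETTING version;

-- Confirm the downgrade guard really is cleared:
SHOW CLUSTER SETTING cluster.preserve_downgrade_option;

-- Manual finalization, per the CockroachDB upgrade documentation:
SET CLUSTER SETTING version = '21.2';
```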
agrajag_42 (1 rep)
Nov 30, 2021, 11:59 AM • Last activity: Nov 30, 2021, 02:51 PM
0 votes
1 answer
229 views
Error: pq: 1 columns must reference exactly 1 columns in referenced table (found 2)
I have a few tables and I am facing this issue, most likely with the table `item`. When I remove `restaurant_id` and `category_id`, it works fine.

```sql
CREATE TABLE IF NOT EXISTS restaurant (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    title VARCHAR(50) NOT NULL UNIQUE
);

CREATE TABLE IF NOT EXISTS restaurant_gallery (
    id SERIAL,
    restaurant_id UUID NOT NULL REFERENCES restaurant,
    image VARCHAR NOT NULL,
    PRIMARY KEY (id, restaurant_id)
);

CREATE TABLE IF NOT EXISTS category (
    id SERIAL NOT NULL PRIMARY KEY,
    restaurant UUID NOT NULL,
    label VARCHAR(25) NOT NULL,
    UNIQUE (restaurant, label)
);

CREATE TABLE IF NOT EXISTS item (
    id SERIAL NOT NULL PRIMARY KEY,
    restaurant_id UUID NOT NULL REFERENCES restaurant (id),
    category_id SERIAL NOT NULL REFERENCES category (id),
    name VARCHAR(25) NOT NULL,
    UNIQUE (id, restaurant_id)
);

CREATE TABLE IF NOT EXISTS item_gallery (
    id SERIAL,
    item_id INT NOT NULL REFERENCES item,
    image VARCHAR NOT NULL,
    PRIMARY KEY (id, item_id)
);
```

> Error: pq: 1 columns must reference exactly 1 columns in referenced table (found 2)
> Failed running "sql"

Any help is highly appreciated!
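The shorthand `REFERENCES item` leaves the target columns for the database to infer, and `item` exposes both a one-column primary key and the two-column `UNIQUE (id, restaurant_id)`; the "(found 2)" in the error suggests the shorthand resolved to the two-column index. A hedged sketch of one thing worth trying, naming the referenced column explicitly:

```sql
-- Sketch: spell out the single target column instead of the table shorthand.
CREATE TABLE IF NOT EXISTS item_gallery (
    id SERIAL,
    item_id INT NOT NULL REFERENCES item (id),  -- was: REFERENCES item
    image VARCHAR NOT NULL,
    PRIMARY KEY (id, item_id)
);
```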
Taufiq Rahman (101 rep)
Aug 12, 2019, 10:51 AM • Last activity: Aug 12, 2019, 11:40 AM
1 vote
1 answer
876 views
Replicate Postgres to Cockroach
We are planning to replicate our Postgres database to Cockroach. We are looking for a relatively simple solution to achieve that. Currently, we are working on a stream-based replication (something of this sort: https://github.com/jensneuse/Postgr…); is there any better alternative than this?
Noobie (165 rep)
Feb 12, 2019, 04:02 PM • Last activity: Feb 13, 2019, 09:29 PM
-2 votes
1 answer
255 views
Is it efficient to design chat/likes/relationship database in separate table per user?
The specs:

- there would be about 1 billion users
- there would probably be 1 million topics
- each topic has subtopics stored at the same level as topics
- a user has a many-to-many relationship with topics (whether they like the topics or not)
- a user can chat with another user

The table design:

- 1 table for users
- 1 table for topics
- N tables for storing the users that like a topic (user_id, topic_id), with naming: users_of_[topic_id]
- N tables for storing the topics liked by a user (user_id, topic_id), with naming: topics_of_[user_id]
- NxM tables for chats (loweruser_id, higheruser_id), with naming: chats_[loweruser_id]_[higheruser_id]
- N+M tables for chat reference tables (storing the table names of the chats and their relationships), with naming: relations_[user_id]

So if we want to match topics between users, we can use INTERSECT between topics_of_[user1] and topics_of_[user2].

If we want to give suggested topics, we can intersect a random row from users_of_[friend's topic_id].

If we want to suggest a mutual friend, we can UNION ALL relations_[friend's user_id], then sum it.

Is this design efficient?

**EDIT** apparently this is a job for a graph database (such as [dgraph](http://dgraph.io) or [neo4j](https://neo4j.com/))
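For contrast, a minimal sketch of the conventional single-junction-table shape that avoids generating N tables; names are illustrative, and the inline INDEX clause is CockroachDB syntax (in Postgres, create the secondary index separately):

```sql
-- Sketch: one junction table replaces the N "users_of_*" / "topics_of_*" tables.
CREATE TABLE user_likes_topic (
    user_id  INT8 NOT NULL,
    topic_id INT8 NOT NULL,
    PRIMARY KEY (user_id, topic_id),  -- topics of a user
    INDEX (topic_id, user_id)         -- users of a topic (reverse lookup)
);

-- "Topics both users like" as a self-join, no INTERSECT across per-user tables:
SELECT a.topic_id
FROM user_likes_topic AS a
JOIN user_likes_topic AS b USING (topic_id)
WHERE a.user_id = 1 AND b.user_id = 2;
```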
Kokizzu (1403 rep)
Jan 20, 2019, 06:59 PM • Last activity: Jan 21, 2019, 05:19 AM