
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answers
180 views
Source of error on psql data load
Upon importing a file derived from a Rails application's database, psql is hitting a foreign key constraint error:

ERROR: insert or update on table "documents" violates foreign key constraint "fk_rails_d4abdc7f58"
Key (typ_document_id)=(7) is not present in table "typ_documents".

Yet, querying afterwards with SELECT * FROM typ_documents; returns the row:

 7 | Test Internal | 2013-07-04 08:36:16.026295 | 2013-07-04 08:36:16.026295

The only explanation I can come up with is that, when the table documents is being loaded, typ_documents has not been loaded yet and the error arises from the ordering. Yet this error does not occur consistently, and the data dump lists tables in alphabetical order of table names, so this is a weak assumption. Removing the foreign key constraint from Rails would overcome the problem, but that is a poor fix.

**Update** Data from the pg_dump file (redacted to omit the long full text):

COPY documents (id, titolo, abstract, full_text, typ_document_id, idioma_id, competitor_id, created_at, updated_at) FROM stdin;
[...]
5  Conservazione Finocchio con lavaggio  La prova è stata [...redacted...] 3 ppm\r\n15\t15\r\n0,75\r\n30 sec\t60 sec\t90 sec  7  1  \N  2013-07-08 10:49:53.393598  2013-07-11 16:07:31.540986

COPY typ_documents (id, nome, created_at, updated_at) FROM stdin;
[...]
7  Test Internal  2013-07-04 08:36:16.026295  2013-07-04 08:36:16.026295

How can this issue be debugged/overcome?
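One hedged way to work around the ordering during the load itself, assuming the dump is a plain-SQL file (dump.sql below is a placeholder name) and the restoring role has superuser rights, is to suspend foreign-key trigger checks for the session and verify afterwards that nothing orphaned slipped in:

-- Sketch only: in replica mode the FK triggers are not fired for this session,
-- so the COPY order of the tables no longer matters during the load.
BEGIN;
SET LOCAL session_replication_role = 'replica';
\i dump.sql          -- placeholder for the actual dump file
COMMIT;

-- Afterwards, check for orphaned rows:
SELECT d.id, d.typ_document_id
FROM   documents d
LEFT   JOIN typ_documents t ON t.id = d.typ_document_id
WHERE  d.typ_document_id IS NOT NULL
  AND  t.id IS NULL;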
Jerome (299 rep)
Feb 12, 2016, 11:19 AM • Last activity: Jul 1, 2025, 03:00 AM
1 votes
1 answers
261 views
Can I optimize UPDATE statements using EXPLAIN?
I have a few UPDATE calls on my production system that are running slow and I've been instructed to try to optimize them. I'm reading up on how to interpret EXPLAIN, with (ANALYZE, BUFFERS) as well to get more information, and I'm trying to figure out why this one plan node is taking so long. I have a feeling that it might be due to the number of records that are getting updated, but I wonder if someone with more experience reading EXPLAIN output might be able to shed some light on this mystery. Here is the UPDATE statement that is showing up in my metrics:
-- $1 and $2 are variables set in a Rails application
UPDATE "redacted_table_name"
   SET "some_fk_id" = $1
 WHERE "redacted_table_name"."some_fk_id" = $2
For context, this table has over **280 million** records. I wanted to try to do a full analysis on this table, so I had the database cloned and ran a "safe" EXPLAIN (ANALYZE, BUFFERS), which runs an actual update, and sure enough it took quite some time to complete:
-- Do an actual UPDATE w/o changing any data
EXPLAIN (ANALYZE, BUFFERS)
UPDATE "redacted_table_name"
   SET "some_fk_id" = 1
 WHERE "redacted_table_name"."some_fk_id" = 1
QUERY PLAN:

Update on redacted_table_name  (cost=669.64..108899.13 rows=0 width=0) (actual time=104067.491..104067.492 rows=0 loops=1)
  Buffers: shared hit=3329124 read=290028 dirtied=276605 written=906
  I/O Timings: shared read=90426.849 write=26.727
  ->  Bitmap Heap Scan on redacted_table_name  (cost=669.64..108899.13 rows=30073 width=10) (actual time=213.507..14936.244 rows=137766 loops=1)
        Recheck Cond: (some_fk_id = 1)
        Rows Removed by Index Recheck: 5904106
        Heap Blocks: exact=39787 lossy=38580
        Buffers: shared hit=411 read=78433 dirtied=10
        I/O Timings: shared read=14094.250
        ->  Bitmap Index Scan on index_redacted_table_name_on_some_fk_id  (cost=0.00..662.12 rows=30073 width=0) (actual time=206.434..206.434 rows=137766 loops=1)
              Index Cond: (some_fk_id = 1)
              Buffers: shared read=477
              I/O Timings: shared read=193.942
Planning Time: 0.074 ms
Execution Time: 104067.523 ms
As you can see, I already have an INDEX on "redacted_table_name"."some_fk_id" which PG is taking advantage of, but why is it doing a Bitmap Heap Scan after the fact? It doesn't seem to be using the INDEX that its child node is using. Why is it taking so long there? Here is a similar update for the some_fk_id value that has the most records in the table (3 million records - the worst case scenario):
Update on redacted_table_name  (cost=0.00..5344540.00 rows=0 width=0) (actual time=436965.067..436965.068 rows=0 loops=1)
  Buffers: shared hit=139949860 read=4964122 dirtied=4521138 written=1928290
  I/O Timings: shared read=176626.661 write=8909.815
  ->  Seq Scan on redacted_table_name  (cost=0.00..5344540.00 rows=3890845 width=10) (actual time=0.026..36374.777 rows=3617975 loops=1)
        Filter: (some_fk_id = 93330)
        Rows Removed by Filter: 278327308
        Buffers: shared hit=207516 read=1612708 dirtied=52 written=420751
        I/O Timings: shared read=17983.171 write=2653.138
Planning:
  Buffers: shared hit=1
Planning Time: 0.078 ms
Execution Time: 436965.106 ms
The Seq Scan on redacted_table_name indicates to me that the index wasn't even used for this UPDATE. So what gives?

**UPDATE:** Both on prod and on the cloned prod the work_mem is 4MB. On prod shared_buffers is 8047248kB, but on the cloned prod that I ran these on shared_buffers was 3983192kB. The prod clone was shut down so I wasn't able to get its effective_cache_size, but prod has 7966384kB for that setting.

**UPDATE #2:** Below is the CREATE TABLE script along with all of the constraints and indexes that are on the table:
-- This table simply has a PK and four foreign keys
-- that reference other tables in the database.
CREATE TABLE IF NOT EXISTS public.redacted_table_name
(
    id bigint NOT NULL DEFAULT nextval('redacted_table_name_id_seq'::regclass),
    fk_table_3_id bigint,

    -- to stay with example above this is the key/column
    -- that's getting updated.
    some_fk_id bigint,
    fk_table_4_id bigint,
    fk_table_2_id bigint,
    fk_table_1_id bigint,
    CONSTRAINT redacted_table_name_pkey PRIMARY KEY (id),
    CONSTRAINT fk_rails_3fc9a63917 FOREIGN KEY (fk_table_4_id)
        REFERENCES public.fk_table_4 (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT fk_rails_4e272bcb96 FOREIGN KEY (fk_table_1_id)
        REFERENCES public.fk_table_1 (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT fk_rails_9ba08049bc FOREIGN KEY (some_fk_id)
        REFERENCES public.some_fk_table (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT fk_rails_d6e3c90d4d FOREIGN KEY (fk_table_3_id)
        REFERENCES public.fk_table_3 (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT fk_rails_f79e2fb3ae FOREIGN KEY (fk_table_2_id)
        REFERENCES public.fk_table_2 (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS public.redacted_table_name
    OWNER to pg_user;

CREATE INDEX IF NOT EXISTS index_redacted_table_name_on_fk_table_4_id
    ON public.redacted_table_name USING btree
    (fk_table_4_id ASC NULLS LAST)
    TABLESPACE pg_default;

CREATE INDEX IF NOT EXISTS index_redacted_table_name_on_some_fk_id
    ON public.redacted_table_name USING btree
    (some_fk_id ASC NULLS LAST)
    TABLESPACE pg_default;

CREATE INDEX IF NOT EXISTS index_redacted_table_name_on_fk_table_1_id
    ON public.redacted_table_name USING btree
    (fk_table_1_id ASC NULLS LAST)
    TABLESPACE pg_default;

CREATE INDEX IF NOT EXISTS index_redacted_table_name_on_fk_table_2_id
    ON public.redacted_table_name USING btree
    (fk_table_2_id ASC NULLS LAST)
    TABLESPACE pg_default;

CREATE INDEX IF NOT EXISTS index_redacted_table_name_on_fk_table_3_id
    ON public.redacted_table_name USING btree
    (fk_table_3_id ASC NULLS LAST)
    TABLESPACE pg_default;

CREATE INDEX IF NOT EXISTS index_redacted_table_name_on_fk_table_3_id_and_some_fk_id
    ON public.redacted_table_name USING btree
    (fk_table_3_id ASC NULLS LAST, some_fk_id ASC NULLS LAST)
    TABLESPACE pg_default;

CREATE UNIQUE INDEX IF NOT EXISTS unique_on_fk_table_3_fk_table_4
    ON public.redacted_table_name USING btree
    (fk_table_3_id ASC NULLS LAST, some_fk_id ASC NULLS LAST, fk_table_1_id ASC NULLS LAST)
    TABLESPACE pg_default;
**TL;DR** Is this just a slow SQL statement due to the number of rows it has to update or can I optimize this table somehow? If I'm not providing enough information, please let me know what you need and I'll provide it. Thanks again for any help!
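One direction to experiment with, sketched under assumptions not stated in the question (PostgreSQL 9.5+ for SKIP LOCKED; the literal ids 1 and 2 and the 50,000-row batch size are placeholders): raise work_mem for the session so the bitmap stays exact instead of going lossy, and run the update in bounded batches so each transaction dirties fewer buffers:

-- Sketch, not a drop-in fix: names mirror the redacted examples above.
SET work_mem = '64MB';            -- larger bitmap => fewer lossy heap blocks to recheck

WITH batch AS (
    SELECT id
    FROM   redacted_table_name
    WHERE  some_fk_id = 2         -- the "old" value ($2 in the Rails query)
    LIMIT  50000
    FOR UPDATE SKIP LOCKED
)
UPDATE redacted_table_name t
SET    some_fk_id = 1             -- the "new" value ($1 in the Rails query)
FROM   batch
WHERE  t.id = batch.id;
-- repeat until the UPDATE reports 0 rows affected

Batching trades total runtime for shorter individual transactions with a smaller lock and dirty-buffer footprint, which is usually what matters on a 280-million-row table.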
aarona (113 rep)
Aug 23, 2024, 03:38 AM • Last activity: Aug 28, 2024, 04:52 PM
0 votes
1 answers
68 views
Recommended way to build database to contain users and chatgpt data
I am looking to create a Rails application to interface between users and Azure ChatGPT. I have included my ERD diagram showing a couple of options for building the database tables and table associations. I am having trouble deciding which option to go with and was wondering if anyone has any feedback on which option would be better, or whether a third option of your own would be better? Thank you. [ERD image: Database Schemas]
Chris (101 rep)
Apr 7, 2024, 11:14 PM • Last activity: Apr 10, 2024, 11:33 AM
0 votes
0 answers
37 views
Managed database or docker image with data-volume?
I've mostly used a managed database (AWS RDS) for production. I was fiddling with Docker and was wondering if it's a good idea to have a containerised Postgres database with a data volume. I feel it may not be a good idea, maybe because I'm used to the convenience of a managed database in production, but I would like to know the community's opinion on this. My docker-compose.yml looks like the following:

version: '3'
services:
  rails-api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    depends_on:
      - database
    networks:
      - vilio-network
    env_file:
      - .env/production
  redis:
    image: redis
  database:
    image: postgres
    ports:
      - "5432:5432"
    networks:
      - vilio-network
    env_file:
      - .env/production
    volumes:
      - db_data:/var/lib/postgresql/data
networks:
  vilio-network:
volumes:
  db_data:
  gem_cache:

Thank you
Indyarocks (101 rep)
Sep 21, 2023, 08:34 PM
1 votes
1 answers
1569 views
Can’t connect to local PostgreSQL server
I have the Postgres app installed and running on my Mac, and it has worked beautifully. I don't know what changed, but now I cannot connect to it from Rails, PG Commander, PG Admin or the command psql -h localhost. However, I can connect with just psql. I get the following error in the log as soon as the connection times out:

LOG: incomplete startup packet

Rails database.yml:

development:
  adapter: postgresql
  database: my_db
  host: localhost

PG Commander settings: [screenshot]

pg_hba.conf:

# "local" is for Unix domain socket connections only
local   all   all                  trust
# IPv4 local connections:
host    all   all   127.0.0.1/32   trust
# IPv6 local connections:
host    all   all   ::1/128        trust

postgresql.conf: https://gist.github.com/davbeck/49a23a48fa161b3e06fc#file-gistfile1-txt
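A quick sanity check that can be run over the still-working socket connection (a sketch; it assumes the running server was started with its usual configuration):

SHOW listen_addresses;   -- must include 'localhost' or '*' for TCP connections to succeed
SHOW port;               -- confirm the server really is on 5432, matching the clients
SHOW hba_file;           -- confirm which pg_hba.conf the running server actually loaded

If listen_addresses no longer covers localhost, socket connections keep working while every TCP client (Rails, PG Commander, psql -h localhost) times out, which matches the symptoms described.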
David Beck (111 rep)
Jan 29, 2014, 12:06 AM • Last activity: Apr 17, 2023, 03:06 AM
0 votes
0 answers
254 views
How does JDBC over LDAP connection work?
The JDBC connection I should use is defined in Oracle SQL Developer with the following syntax:

jdbc:oracle:thin:@ldap://intranet.oid-01.dama.ch:3063/EDWHPD,cn=OracleContext,dc=emea,dc=dama,dc=ch

But with Ruby on Rails, I don't use JDBC; I use the ruby-oci8 gem. I should probably get the server name or the SID from the LDAP resource, but how can I do this? Thanks for your help!
user1185081 (133 rep)
Jan 10, 2023, 02:58 PM
0 votes
0 answers
103 views
How to connect to Oracle through LDAP with Ruby on Rails 5.4.2?
My application is running on Rails 5.4.2/Ruby 2.7 and relies on an Oracle 19c database. I used to query the Oracle database through a classical OCI connection configuration:

**config/database.yml**

development:
  adapter: oci
  host: xe
  username: dqm
  password: dqm_password

But the organisation's IT architecture has changed, so connections should now go through LDAP for security reasons. The URL looks like this:

jdbc:oracle:thin:@ldap://intranet.oid-01.dama.ch:3063/EDWHPD,cn=OracleContext,dc=emea,dc=dama,dc=ch

I don't know how to handle this with Rails, and it raises several questions for me:

* How do I describe this connection in the **database.yml** file?
* Should I use the net-ldap gem, and what would it bring to help establish the connection to the Oracle database?
* Do you know of a tutorial explaining how to set up this type of database access?

Thank you for your help!
user1185081 (133 rep)
Nov 9, 2022, 10:30 AM
4 votes
2 answers
11414 views
MySQL: How to decrease the sleep process timeout?
When I run SHOW PROCESSLIST; I get many sleeping processes ([screenshot]). I set wait_timeout and interactive_timeout to 60 in my.cnf, but sleeping processes are not killed when their time reaches 60 in the processlist. I found that a sleeping process is only killed when its time reaches about 7900. What happened? How can I decrease the sleep process timeout? If you need other information, I will provide it as best I can.

**UPDATE** I use root, read_only and deploy accounts on MySQL, and I found that the root and read_only accounts behave correctly: Query -> Sleep -(60 sec)-> process released. But the deploy account, which is used to connect the Rails server to MySQL, behaves wrongly: Query -> Sleep -(7900 sec)-> released. So I think the deploy account or Rails is the problem. However, I don't have any idea how to fix this.
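One way to narrow this down, sketched without changing anything on the server: compare the global and session timeout values as the deploy account sees them, since wait_timeout is copied into each session at connect time and a client can override it there.

-- Run these while connected as the deploy account:
SHOW GLOBAL  VARIABLES LIKE '%timeout%';
SHOW SESSION VARIABLES LIKE 'wait_timeout';
-- If the session value is much larger than 60, the client (for example the Rails
-- connection settings) is overriding it at connect time rather than MySQL
-- ignoring my.cnf. Idle connections can also be removed manually:
-- KILL <process_id_from_show_processlist>;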
WitchOfCloud (41 rep)
Nov 9, 2015, 06:32 AM • Last activity: Oct 10, 2022, 08:04 AM
2 votes
3 answers
1917 views
How to store short stories for access to individual sentences?
I am designing a database for the first time ever (using PostgreSQL), and am wondering about the most efficient/logical way of storing a body of text (i.e. a story). The conflict stems from the fact that the user will access text bodies in two ways:

1) access the entire body of the story on click of the story name.
2) input a word or phrase into a search bar, which will return all **sentences** (not the whole stories) in which the word/phrase is found (meaning that it could potentially return many sentences from many stories).

There will be a great ("infinite") number of stories, and about 40 sentences per story, although it is free text so some stories will contain a few hundred sentences.

My initial DB design was to have a Story model (I'm using Ruby on Rails) with a story_id, story_title, and author_id_fk, and then to have a Storyline model with storyline_id, storyline, and story_id_fk. However, I'm now doubting myself and think that maybe the best way to do it is to put the body of the story in a fourth column on the Story model called story_text, where I would store an array of strings (the original text parsed into its corresponding sentences). The Storyline model could then either not exist at all (in which case the appropriate item from the array would be fetched when needed: less normalized, but also perhaps more efficient?), or it could be kept but contain a reference to the appropriate storyline rather than the actual text itself.

Any thoughts or suggestions would be much appreciated!
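For what it's worth, the original two-table layout maps to something like the following sketch (table and column names are illustrative; phraseto_tsquery needs PostgreSQL 9.6+, plainto_tsquery works on older releases):

CREATE TABLE stories (
    story_id    bigserial PRIMARY KEY,
    author_id   bigint NOT NULL,
    story_title text   NOT NULL
);

CREATE TABLE story_lines (
    storyline_id bigserial PRIMARY KEY,
    story_id     bigint  NOT NULL REFERENCES stories,
    position     integer NOT NULL,   -- sentence order, so the full story can be rebuilt
    storyline    text    NOT NULL
);

-- Full-text index so the search bar returns individual sentences quickly:
CREATE INDEX story_lines_fts_idx
    ON story_lines USING gin (to_tsvector('english', storyline));

-- Use case 2: every sentence containing a phrase, across all stories
SELECT story_id, storyline
FROM   story_lines
WHERE  to_tsvector('english', storyline) @@ phraseto_tsquery('english', 'search phrase');

-- Use case 1: rebuild one story in order
SELECT storyline
FROM   story_lines
WHERE  story_id = 1
ORDER  BY position;

This keeps both access patterns cheap without resorting to an array column, which is harder to index per sentence.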
michaelsking1993 (173 rep)
Jan 14, 2017, 04:13 PM • Last activity: Jun 2, 2022, 11:37 AM
1 votes
0 answers
42 views
Is creating a table entry from another table entry and copying its values good practice?
Let's say I have an inventory_receives table which contains many inventory_receive_entry records, and I also have an inventory table which contains one inventory_receive_entry. In my application, an inventory_receives instance has a method that loops over each inventory_receive_entry it has and creates an inventory by copying the attribute values from inventory_receive_entry to inventory.

Is this good practice? I'm confused whether I should copy the values from inventory_receive_entry to inventory or just reference the inventory_receive_entry from inventory. My argument for copying the values instead of referencing is practicality, since I don't need to write complex queries and such, but some people cite the single source of truth principle and the like.

My schema from Ruby on Rails (see the join sketch after the schema for what reads look like when referencing instead of copying):
create_table "inventory_receives", force: :cascade do |t|
    t.bigint "id"
    t.bigint "supplier_id"
    t.bigint "branch_id"
  end

create_table "inventory_receives_entries", force: :cascade do |t|
    t.bigint "id"
    t.bigint "inventory_receive_id", null: false
    t.bigint "product_id", null: false
    t.integer "quantity", default: 0, null: false
    t.datetime "expiry_date"
    t.string "barcode", default: "", null: false
    t.decimal "price", default: "0.0", null: false
    t.string "batch_code", default: "", null: false
    t.decimal "vat", default: "10.0", null: false
    t.decimal "discount", default: "0.0", null: false
  end

  create_table "inventories", force: :cascade do |t|
    t.bigint "id"
    t.bigint "inventory_receive_entry_id", null: false
    t.integer "quantity", default: 0, null: false
    t.decimal "selling_price", default: "0.0", null: false
  end
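As a rough illustration of the trade-off (a sketch built only on the schema above), referencing instead of copying means the receive-entry attributes come from a join at read time:

SELECT i.id,
       i.quantity,
       i.selling_price,
       e.price        AS purchase_price,
       e.expiry_date,
       e.batch_code
FROM   inventories i
JOIN   inventory_receives_entries e
       ON e.id = i.inventory_receive_entry_id;

Copying avoids that join but means later corrections to the receive entry are not reflected in inventory unless you propagate them; referencing keeps a single source of truth at the cost of slightly more involved queries.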
joyoy (13 rep)
Feb 7, 2022, 01:50 PM • Last activity: Apr 25, 2022, 10:15 AM
1 votes
0 answers
45 views
Migrations spreading to the wrong database (local environment)
I am working on one Flask project and one Ruby on Rails project, both using Postgres. When I migrate in Rails, the changes are applied both to its database and to the database for the Flask project, which of course is very undesirable. My first idea was to use a different port for the Flask project; however, I now realize that the port is set in postgresql.conf and would therefore change for everything I run locally.

Why are these databases being intertwined, and how can I stop this from happening? I am using Postgres 12, Flask 1.1.2, Alembic 1.7.6, Rails 6. For the Rails app I have one test and one development database; for the Flask app just one. When I look at my list of databases, it seems as if two have been added for the Flask one, with -0 and -1 as suffixes. Both of these also have the tables from the Rails app.

From database.yml in the Rails app (this is the correct name):
development:
  <<: *default
  database: birdspotting_development
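A quick way to confirm which cluster and database each application is really connected to (a sketch; run it from both apps' database consoles):

SELECT current_database(),
       inet_server_addr() AS server_address,   -- NULL for Unix-socket connections
       inet_server_port() AS server_port;

If both apps report the same port and database name, they are pointed at the same cluster and database, which would explain why the Rails tables show up everywhere.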
пaean (11 rep)
Apr 23, 2022, 09:39 PM • Last activity: Apr 24, 2022, 09:27 AM
1 votes
1 answers
118 views
SQL row data mismatch for different users
I am using MySQL (version 5.5.43) as my database. I have a RoR micro-service that does an update to a column on a model's Active Record class:

model.update_column(status: 0)

The next line is an API call to a different micro-service that synchronously runs a SQL query:

select * from model where status = 0;

The code runs without any errors, but the latter query does not fetch the record that was updated by the former. There are milliseconds of difference between the update and the read. Both services are connected to the same database as different users with the same access. I don't understand why this would happen: the update_column is obviously a commit to the DB, so why would the select query not fetch the updated record? What am I missing here?
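One thing worth checking, sketched here without knowing the second service's code: MySQL's default isolation level is REPEATABLE READ, so a long-lived transaction in the reading service keeps the snapshot from when it started and will not see rows committed after that point.

-- In the reading service's connection:
SELECT @@tx_isolation;    -- variable name on MySQL 5.5/5.6

-- Either commit / start a fresh transaction before the SELECT, or relax the level:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;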
Darshan (11 rep)
Nov 28, 2017, 02:36 PM • Last activity: Feb 28, 2022, 04:06 AM
1 votes
0 answers
274 views
Extremely slow updates to small Oracle table from rails app (oracle r12 and r19)
I'm not sure if this is a Rails issue or a database issue, but as this seems to be a schema-specific issue, I'm looking at it from the database side first. (Other customers with the same setup on the same app/database server are not having the same problem.) I have a Rails 4.2 app connecting to an Oracle database. From the application, a simple update to a 39-record table via primary key can take upwards of 80 seconds:
SQL (83098.4ms)  UPDATE "ENTITY" SET "ENTITY"."ADDRESS_1" = NULL, "ENTITY"."ADDRESS_2" = NULL, "ENTITY"."ADDRESS_3" = NULL, "ENTITY"."ADDRESS_4" = '' WHERE "ENTITY"."ENT_ID" = :a1  [["ent_id", 2144]]
If I run this update from sqlplus, it runs almost instantly:
SQL> UPDATE "ENTITY" SET "ENTITY"."ADDRESS_1" = NULL, "ENTITY"."ADDRESS_2" = NULL, "ENTITY"."ADDRESS_3" = NULL, "ENTITY"."ADDRESS_4" = '' WHERE "ENTITY"."ENT_ID" = 2144;

1 row updated.

Elapsed: 00:00:00.018
I'm not sure if bind variables could impact this, or if this is the same from sqlplus, but trying with bind variables also runs near instantaneously:
SQL> exec :id := 2144;

SQL> UPDATE "ENTITY" SET "ENTITY"."ADDRESS_1" = NULL, "ENTITY"."ADDRESS_2" = NULL, "ENTITY"."ADDRESS_3" = NULL, "ENTITY"."ADDRESS_4" = '' WHERE "ENTITY"."ENT_ID" = :id;

1 row updated.

Elapsed: 00:00:00.012
There is an primary key + index on this table, that is valid and rebuilds also near instantly:
SQL> select count(*) from entity;
  COUNT(*)
        39

SQL> select constraint_type, constraint_name, status from user_constraints where table_name = 'ENTITY';

CONSTRAINT_TYPE   CONSTRAINT_NAME    STATUS
P                 ENT_PK             ENABLED

SQL> select index_name, index_type, status from user_indexes where table_name = 'ENTITY';
INDEX_NAME   INDEX_TYPE   STATUS
ENT_PK       NORMAL       VALID


SQL> alter index ent_pk rebuild;

Index ENT_PK altered.

Elapsed: 00:00:00.015
There is only one trigger on this table - an INSERT trigger - so it shouldn't be involved in an update (I don't think), but disabling it has no impact either way. The explain plan for the update seems pretty straightforward - just an update via index ROWID:
---------------------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT             |        |     1 |   233 |     1   (0)| 00:00:01 |
|   1 |  UPDATE                      | ENTITY |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| ENTITY |     1 |   233 |     1   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | ENT_PK |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
The issue persists if I export the schema and import it into a different schema, and even into a different database - even going from an Oracle 12 database to an Oracle 19 database. As mentioned at the start, I'm at a loss as to how the same application server and database run this update fine on most schemas (which are perforce identical in structure), but this one particular schema runs this one update very very slowly - but only when run via OCI and not when run from SQL. Is there any way to get more information from Oracle as to what it's doing during this update?
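On the final question, one hedged way to see what Oracle is doing during the slow statement (the sid/serial#/schema values below are placeholders; access to the v$ views and DBMS_MONITOR is assumed):

-- From another session, watch what the application session is waiting on:
SELECT sid, serial#, event, wait_class, seconds_in_wait
FROM   v$session
WHERE  username = 'APP_SCHEMA'       -- placeholder schema name
AND    status   = 'ACTIVE';

-- Or capture a full SQL trace for that session, reproduce the slow update from
-- the app, then disable the trace and inspect the file with tkprof:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678, waits => TRUE, binds => TRUE);
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);

The trace shows whether the 80 seconds are spent on waits (locks, network, commits) or on SQL that only the OCI session runs, which is exactly the OCI-versus-SQL*Plus difference described above.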
Dave Smylie (145 rep)
Nov 4, 2021, 03:37 AM • Last activity: Nov 4, 2021, 04:08 AM
1 votes
1 answers
491 views
Why is the schema/structure dump so slow?
I'm using Rails 6 with the activerecord-oracle_enhanced-adapter adapter for an Oracle 11 XE database. At the beginning of the project I was working with Postgres, but due to customer requirements I had to migrate to Oracle. My question is very similar to https://stackoverflow.com/questions/592444/rails-rake-dbmigrate-very-slow-on-oracle, but that is from 11 years ago and I'm using the updated version of the adapter.

I know that after all migrations are applied to the database, rake db:migrate calls the db:schema:dump task to generate the schema.rb file from the current database schema. I only have one schema, with many tables (around 90).

The answer to the previous question was something like:

> One way to debug this issue would be to put some debug messages in the oracle_enhanced_adapter.rb file so that you could identify which method calls are taking so long.

== 20200820164111 RemoveSemesterFromProject: migrated (0.1347s) ===============

real 13m26,038s
user 0m5,494s
sys 0m0,401s

All migrations finish in a reasonable time, so I think the slowness occurs in the db:schema:dump action that follows, but I don't have an oracle_enchaced_adapter.rb or oracle_enhanced_adapter.rb file, so I don't know where to look. What can I do to improve this behaviour?

Time taken when I run db:schema:dump:
real	9m44,074s
user	0m3,701s
sys	0m0,327s
and this is the time taken when I run db:structure:dump:
real	21m40,073s
user	0m5,046s
sys	0m0,413s
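Since the structure dump is driven by data-dictionary queries, one hedged thing to try (it needs DBA-level privileges and should be verified as supported on XE) is refreshing the dictionary and fixed-object statistics so those queries plan better:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

If the dump is still slow afterwards, tracing the session that runs it would show which dictionary queries dominate the time.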
Grizz (11 rep)
Aug 25, 2020, 04:07 PM • Last activity: Aug 9, 2021, 09:02 PM
0 votes
1 answers
237 views
PostgreSQL query with slow execution time
I ran EXPLAIN ANALYZE for this query and it was giving 30ms, but with more data I get "execution expired". Using PostgreSQL 10.

For normal execution: https://explain.depesz.com/s/gSPP
For slow execution: https://explain.depesz.com/s/bQN2

SELECT inventory_histories.*, order_items.order_id AS order_id
FROM "inventory_histories"
LEFT JOIN order_items
       ON (order_items.id = inventory_histories.reference_id
           AND inventory_histories.reference_type = 4)
WHERE "inventory_histories"."inventory_id" = 1313
  AND (inventory_histories.location_id = 15)
ORDER BY inventory_histories.id DESC
LIMIT 10 OFFSET 0;

Indexes:
    "inventory_histories_pkey" PRIMARY KEY, btree (id)
    "inventory_histories_created_at_index" btree (created_at)
    "inventory_histories_inventory_id_index" btree (inventory_id)
    "inventory_histories_location_id_index" btree (location_id)
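A hedged idea to test on a copy of the data: a composite index covering both filter columns plus the ORDER BY column can let the LIMIT 10 stop after reading ten matching rows instead of fetching and sorting all of them (the index name is illustrative; re-check with EXPLAIN (ANALYZE, BUFFERS) afterwards):

CREATE INDEX inventory_histories_inv_loc_id_idx
    ON inventory_histories (inventory_id, location_id, id DESC);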
amtest (161 rep)
Nov 20, 2018, 08:13 AM • Last activity: May 24, 2021, 01:01 AM
4 votes
1 answers
1223 views
What can I do to monitor "out of shared memory" issues?
I'm new to database administration and I would really like some advice on what tools I could use, or what I could do, to monitor "out of shared memory" issues. I've been seeing these "out of shared memory" messages when running *rspec*, which runs the tests for my application. I've also seen messages in the browser when I run my Rails app saying that I "might need to increase max_locks_per_transaction." I think it would be better to find out exactly what is going on first, either at the driver level or the PostgreSQL level.
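A starting point for monitoring this from inside PostgreSQL (a sketch; superuser access is assumed, and raising the setting requires a server restart):

SHOW max_locks_per_transaction;          -- default is 64

-- Watch lock usage while the rspec suite is running:
SELECT mode, count(*) AS locks
FROM   pg_locks
GROUP  BY mode
ORDER  BY locks DESC;

-- If the lock table really is being exhausted, raise the limit and restart:
ALTER SYSTEM SET max_locks_per_transaction = 128;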
alexia (51 rep)
Jun 30, 2016, 04:28 AM • Last activity: May 1, 2021, 08:06 AM
-1 votes
2 answers
372 views
Database Design - I have a question regarding joining tables
Forgive me for my dumb question, but I am new to Ruby on Rails and database design. We all have to start somewhere, right? I want to build a database for a truck driving company. It consists of Drivers, Brokers, Invoices, Destinations, and the type of produce delivered. I have attached a screenshot so you can see the relationships between the tables.

My question is that in the invoice table, "Start_Destination" and "End_Destination" each have a relationship to "Destination_Company_Name" in the Destination table. Is this okay - two fields on one table tied to one field on another? The same goes for "Produce_type": you will see that it appears three times on the invoice side but joins only once to the Produce table. Is this okay? Any advice would be great, and again, sorry for being such a newbie on this, but I can't wrap my mind around a different way. [ERD screenshot]
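For what it's worth, two columns on one table referencing the same column of another table is a perfectly normal pattern; a minimal sketch with illustrative names:

CREATE TABLE destinations (
    id                       bigserial PRIMARY KEY,
    destination_company_name text NOT NULL
);

CREATE TABLE invoices (
    id                   bigserial PRIMARY KEY,
    start_destination_id bigint NOT NULL REFERENCES destinations (id),
    end_destination_id   bigint NOT NULL REFERENCES destinations (id)
);

The three produce columns can work the same way; an alternative, if an invoice may carry a variable number of produce types, is a separate invoice_produces join table.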
Julio Contreras (11 rep)
Dec 27, 2015, 02:15 AM • Last activity: Jan 3, 2021, 10:01 PM
1 votes
1 answers
1982 views
PostgreSQL query suddenly very slow after new column addition
I have a query that's performing terribly after adding a new column. Here's the slow version:

SELECT COUNT(*)
FROM activities
INNER JOIN "users" ON "users"."id" = "activities"."user_id"
                  AND "users"."api_user_for" IS NULL
WHERE "activities"."department_id" = 123456789
  AND "users"."api_user_for" IS NULL

The new column is the api_user_for column. It's a nullable string type. The SQL itself is generated from Ruby on Rails. We've added a "default scope", and that's why the api_user_for check appears twice in the query (on the join and in the where clause). If I remove *either one* of those api_user_for checks, the query returns to its former speed. By including them both, however, the query goes from taking less than 100ms to taking close to 10 seconds.

I've compared the query plans using PEV; here are the fast and slow plans: [fast query plan] [slow query plan]

The "fast query" in this case is the same query with one of the (unnecessarily duplicated) api_user_for checks removed. For example, both of these queries are fast:

SELECT COUNT(*)
FROM activities
INNER JOIN "users" ON "users"."id" = "activities"."user_id"
WHERE "activities"."department_id" = 123456789
  AND "users"."api_user_for" IS NULL

and

SELECT COUNT(*)
FROM activities
INNER JOIN "users" ON "users"."id" = "activities"."user_id"
                  AND "users"."api_user_for" IS NULL
WHERE "activities"."department_id" = 123456789

Obviously there's a huge disparity in the number of loops performed, but why are these loops taking place? The scan seems otherwise very similar. I would love some insight into what could be going on, as this fairly innocuous query is now bringing performance to its knees!

Postgres Version: 9.6.8
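A hedged observation on why the duplicated predicate can hurt: the planner tends to treat the two identical IS NULL conditions as independent, multiplies their selectivities, and ends up expecting far fewer join rows than it actually gets, so it picks a plan suited to a tiny result. Two things worth trying on a clone:

-- Refresh statistics for the newly added column:
ANALYZE users;

-- Compare the plan when the predicate is stated only once:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM   activities
JOIN   users ON users.id = activities.user_id
WHERE  activities.department_id = 123456789
  AND  users.api_user_for IS NULL;

The longer-term fix would be to keep Rails from emitting the condition both in the default scope and in the join.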
aardvarkk (235 rep)
Nov 27, 2018, 11:28 PM • Last activity: Nov 22, 2020, 09:04 PM
1 votes
1 answers
3155 views
PostgreSQL update table then insert on another table
I have a User table and a Log table. I need to update a field on the User table and insert a new entry on the Log table recording the update. I have written the statement for the update:

UPDATE users
SET    free_shipping_until = spu.created_at::date + interval '1 year'
FROM   shipping_priority_users AS spu
WHERE  spu.role_id = #{role.id}
  AND  users.id = spu.user_id
  AND  spu.created_at IS NOT NULL;

For each update I also need to add (probably in a transaction) an insert statement on the Log table, which has the following columns:
user_id: string,
created_at: timestamp
data: jsonb
- The data column contains a jsonb value including the free_shipping_until value from the update.
data: {"free_shipping_until": [null, "2021-07-30"]}
- user_id should match the updated record's value
- The created_at column is the current time; I'm using RoR and could interpolate the value in the expected format.

I'm using PostgreSQL 12.3
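One way to do the update and the log insert atomically is a single statement with a data-modifying CTE. The sketch below assumes the log table is named logs, keeps the #{role.id} interpolation from the question, and mirrors the [old, new] array shape shown above:

WITH updated AS (
    UPDATE users
    SET    free_shipping_until = spu.created_at::date + interval '1 year'
    FROM   shipping_priority_users AS spu
    WHERE  spu.role_id = #{role.id}
      AND  users.id = spu.user_id
      AND  spu.created_at IS NOT NULL
    RETURNING users.id, users.free_shipping_until
)
INSERT INTO logs (user_id, created_at, data)   -- "logs" is an assumed table name
SELECT id::text,
       now(),
       jsonb_build_object(
           'free_shipping_until',
           jsonb_build_array(NULL::text, to_char(free_shipping_until, 'YYYY-MM-DD'))
       )
FROM   updated;

Being a single statement, it is atomic on its own; it can also be wrapped in an explicit transaction from Rails if other work has to commit with it.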
vicocas (125 rep)
Sep 21, 2020, 08:38 AM • Last activity: Sep 21, 2020, 09:14 AM
0 votes
1 answers
51 views
Multiple Joins with Sums
Here is my basic schema

**wbs_item**

+----+-----------+-----------+
| id | name      | parent_id |
+----+-----------+-----------+
| 1  | Materials |           |
| 2  | Drywall   | 1         |
| 3  | Plumbing  | 1         |
| 4  | Labour    |           |
| 5  | Drywall   | 2         |
| 6  | Plumbing  | 2         |
+----+-----------+-----------+

The idea here is there is a hierarchy to break down the costs on the budget.

**budgets** - id

**budget_items**

+----+-----------+-------------+-------------+-------+
| id | budget_id | wbs_item_id | name        | total |
+----+-----------+-------------+-------------+-------+
| 1  | 1         | 2           | Sheet Goods | 1000  |
| 2  | 1         | 2           | Mud / Tape  | 100   |
| 3  | 1         | 5           | Main Floor  | 500   |
| 4  | 1         | 5           | Basement    | 500   |
| 5  | 1         | 3           | Rough-in    | 500   |
| 6  | 1         | 6           | Rough-in    | 1000  |
+----+-----------+-------------+-------------+-------+

Here is what I am trying to output:

+--------------------+------+
| Materials          |      |
+--------------------+------+
| Drywall            | 1100 |
| Plumbing           | 500  |
| Total (Materials): | 1600 |
+--------------------+------+
| Labour             |      |
+--------------------+------+
| Drywall            | 1000 |
| Plumbing           | 1000 |
| Total (Labour):    | 2000 |
+--------------------+------+
| Grand Total:       | 3600 |
+--------------------+------+

This is for my Rails app, so I am fine nesting these lookups, i.e. looping through the main wbs_items (those with NULL parent_id) and then looping through their children. I can't seem to figure out the JOIN and what I am sure is a nested sub-select etc. I also don't know which is more efficient - to start with budget_items or with wbs_items and then add the joins.
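A sketch of one grouped query that produces the subtotals and the grand total in a single pass (it assumes items 5 and 6 roll up under Labour, as the expected output shows, and uses GROUP BY ROLLUP, available in PostgreSQL 9.5+):

SELECT parent.name   AS category,
       child.name    AS wbs_item,
       sum(bi.total) AS total
FROM   budget_items bi
JOIN   wbs_item child  ON child.id = bi.wbs_item_id
JOIN   wbs_item parent ON parent.id = COALESCE(child.parent_id, child.id)
WHERE  bi.budget_id = 1
GROUP  BY ROLLUP (parent.name, child.name)
ORDER  BY parent.name NULLS LAST, child.name NULLS LAST;

Rows where wbs_item is NULL are the per-category totals, and the final row (both columns NULL) is the grand total. Starting from budget_items keeps the joins simple, since every cost line already knows its WBS item.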
Dan Tappin (103 rep)
Feb 16, 2020, 12:30 AM • Last activity: Feb 17, 2020, 07:37 PM
Showing page 1 of 20 total questions