
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
3151 views
AWS RDS MySQL deadlock
Everything in our production environment was working well. Yesterday, suddenly, out of nowhere, we started getting "lock wait" errors on any transaction touching one specific row. The same transaction on another row worked once the lock was removed, but editing that one row caused the entire table to lock up. Now another problem is that the DB sometimes doesn't return all rows, e.g. it returns 1 instead of 10. We are using Java + Hibernate. Like I said, there were no problems whatsoever before; this is odd.

I followed https://mysql.wisborg.dk/2017/05/30/innodb-locks-analysis-why-is-blocking-query-null-and-how-to-find-more-information-about-the-blocking-transaction/ but the blocking transaction still shows trx_query as NULL. When running

select * from information_schema.innodb_trx;

many transactions are NULL in the query column.

SHOW ENGINE INNODB STATUS;

=====================================
2019-08-02 21:24:39 2add8607e700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 57 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 3498 srv_active, 0 srv_shutdown, 61234 srv_idle
srv_master_thread log flush and writes: 64732
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 22165
OS WAIT ARRAY INFO: signal count 118693
Mutex spin waits 269941, rounds 833090, OS waits 9670
RW-shared spins 25736, rounds 372709, OS waits 6243
RW-excl spins 29187, rounds 404172, OS waits 5955
Spin rounds per wait: 3.09 mutex, 14.48 RW-shared, 13.85 RW-excl
------------------------
LATEST DETECTED DEADLOCK
------------------------
2019-08-02 16:29:28 2adf53544700
*** (1) TRANSACTION:
TRANSACTION 22187462, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 6 lock struct(s), heap size 1184, 2 row lock(s), undo log entries 2
MySQL thread id 26615, OS thread handle 0x2adecc1de700, query id 1141023 172.31.8.177 dbuser updating
/* update com.catchapp.server.persistence.user.User */ update user set cloudkit_id=null, created='2019-08-02 16:20:55', entity_status='active', unique_id='48df7cf7bffe474d85787bddf8425189', updated='2019-08-02 16:29:28', address_street_1=null, address_street_2=null, api_version='3.1', bio=null, city=null, country=null, distance_unit='mile', dob='1994-09-03', email='souzasire@gmail.com', email_sent=0, external_code='WHSPGI', facebook_id=null, facebook_token=null, first_name='Renata', gender='female', google_access_token=null, google_id=null, google_refresh_token=null, hometown=null, image_url='Image', last_name='Souza', membership='basic', needs_password_change=0, new_user=1, num_completed_events=0, password='{SSHA}4wKS9ZG7hhhpbxQRtsi6zVyERS0/bRHMCXpuLA==', phone=null, phone_normalized=null, role_flag=1, smoker=null, state=null, time
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 1380 page no 790 n bits 104 index PRIMARY of table catchappprod.user trx id 22187462 lock_mode X locks rec but not gap waiting
Record lock, heap no 36 PHYSICAL RECORD: n_fields 46; compact format; info bits 0
0: len 30; hex 326339663838333136633531353832393031366335333232356133303032; asc 2c9f88316c515829016c53225a3002; (total 32 bytes); 1: len 6; hex 000001528dbd; asc R ;; 2: len 7; hex 420000015204cb; asc B R ;; 3: SQL NULL; 4: len 5; hex 99a3c50537; asc 7;; 5: len 6; hex 616374697665; asc active;; 6: len 30; hex 343864663763663762666665343734643835373837626464663834323531; asc 48df7cf7bffe474d85787bddf84251; (total 32 bytes); 7: len 5; hex
99a3c50537; asc 7;; 8: SQL NULL; 9: SQL NULL; 10: SQL NULL; 11: SQL NULL; 12: SQL NULL; 13: len 4; hex 6d696c65; asc mile;; 14: len 3; hex 8f9523; asc #;; 15: len 19; hex 736f757a617369726540676d61696c2e636f6d; asc souzasire@gmail.com;; 16: len 6; hex 574853504749; asc WHSPGI;; 17: SQL NULL; 18: SQL NULL; 19: len 6; hex 52656e617461; asc Renata;; 20: len 6; hex 66656d616c65; asc female;; 21: SQL NULL; 22: SQL NULL; 23: SQL NULL; 24: SQL NULL; 25: len 30; hex 687474703a2f2f63617463686170702d70726f642e73332e616d617a6f6e; asc http://catchapp-prod.s3.amazon ; (total 82 bytes); 26: len 5; hex 536f757a61; asc Souza;; 27: len 4; hex 80000000; asc ;; 28: len 4; hex 80000000; asc ;; 29: len 30; hex 7b535348417d34774b53395a47376868687062785152747369367a567945; asc {SSHA}4wKS9ZG7hhhpbxQRtsi6zVyE; (total 46 bytes); 30: SQL NULL; 31: SQL NULL; 32: len 4; hex 80000001; asc ;; 33: SQL NULL; 34: SQL NULL; 35: len 15; hex 416d65726963612f4368696361676f; asc America/Chicago;; 36: len 9; hex 536f757a6173697265; asc Souzasire;; 37: len 5; hex 3437353833; asc 47583;; 38: len 4; hex 80000001; asc ;; 39: SQL NULL; 40: SQL NULL; 41: len 1; hex 31; asc 1;; 42: len 4; hex 80000000; asc ;; 43: len 1; hex 81; asc ;; 44: len 5; hex 6261736963; asc basic;; 45: len 6; hex 5055424c4943; asc PUBLIC;; *** (2) TRANSACTION: TRANSACTION 22187459, ACTIVE 0 sec starting index read mysql tables in use 1, locked 1 6 lock struct(s), heap size 1184, 2 row lock(s), undo log entries 2 MySQL thread id 26616, OS thread handle 0x2adf53544700, query id 1141027 172.31.8.177 dbuser updating /* update com.catchapp.server.persistence.user.User */ update user set cloudkit_id=null, created='2019-08-02 16:20:55', entity_status='active', unique_id='48df7cf7bffe474d85787bddf8425189', updated='2019-08-02 16:29:28', address_street_1=null, address_street_2=null, api_version='3.1', bio=null, city=null, country=null, distance_unit='mile', dob='1994-09-03', email='souzasire@gmail.com', email_sent=0, external_code='WHSPGI', facebook_id=null, facebook_token=null, first_name='Renata', gender='female', google_access_token=null, google_id=null, google_refresh_token=null, hometown=null, image_url='Image', last_name='Souza', membership='basic', needs_password_change=0, new_user=1, num_completed_events=0, password='{SSHA}4wKS9ZG7hhhpbxQRtsi6zVyERS0/bRHMCXpuLA==', phone=null, phone_normalized=null, role_flag=1, smoker=null, state=null, time *** (2) HOLDS THE LOCK(S): RECORD LOCKS space id 1380 page no 790 n bits 104 index PRIMARY of table catchappprod.user trx id 22187459 lock mode S locks rec but not gap Record lock, heap no 36 PHYSICAL RECORD: n_fields 46; compact format; info bits 0 0: len 30; hex 326339663838333136633531353832393031366335333232356133303032; asc 2c9f88316c515829016c53225a3002; (total 32 bytes); 1: len 6; hex 000001528dbd; asc R ;; 2: len 7; hex 420000015204cb; asc B R ;; 3: SQL NULL; 4: len 5; hex 99a3c50537; asc 7;; 5: len 6; hex 616374697665; asc active;; 6: len 30; hex 343864663763663762666665343734643835373837626464663834323531; asc 48df7cf7bffe474d85787bddf84251; (total 32 bytes); 7: len 5; hex 99a3c50537; asc 7;; 8: SQL NULL; 9: SQL NULL; 10: SQL NULL; 11: SQL NULL; 12: SQL NULL; 13: len 4; hex 6d696c65; asc mile;; 14: len 3; hex 8f9523; asc #;; 15: len 19; hex 736f757a617369726540676d61696c2e636f6d; asc souzasire@gmail.com;; 16: len 6; hex 574853504749; asc WHSPGI;; 17: SQL NULL; 18: SQL NULL; 19: len 6; hex 52656e617461; asc Renata;; 20: len 6; hex 66656d616c65; asc female;; 21: SQL NULL; 22: SQL NULL; 23: SQL NULL; 24: SQL NULL; 25: 
len 30; hex 687474703a2f2f63617463686170702d70726f642e73332e616d617a6f6e; asc http://catchapp-prod.s3.amazon ; (total 82 bytes); 26: len 5; hex 536f757a61; asc Souza;; 27: len 4; hex 80000000; asc ;; 28: len 4; hex 80000000; asc ;; 29: len 30; hex 7b535348417d34774b53395a47376868687062785152747369367a567945; asc {SSHA}4wKS9ZG7hhhpbxQRtsi6zVyE; (total 46 bytes); 30: SQL NULL; 31: SQL NULL; 32: len 4; hex 80000001; asc ;; 33: SQL NULL; 34: SQL NULL; 35: len 15; hex 416d65726963612f4368696361676f; asc America/Chicago;; 36: len 9; hex 536f757a6173697265; asc Souzasire;; 37: len 5; hex 3437353833; asc 47583;; 38: len 4; hex 80000001; asc ;; 39: SQL NULL; 40: SQL NULL; 41: len 1; hex 31; asc 1;; 42: len 4; hex 80000000; asc ;; 43: len 1; hex 81; asc ;; 44: len 5; hex 6261736963; asc basic;; 45: len 6; hex 5055424c4943; asc PUBLIC;; *** (2) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 1380 page no 790 n bits 104 index PRIMARY of table catchappprod.user trx id 22187459 lock_mode X locks rec but not gap waiting Record lock, heap no 36 PHYSICAL RECORD: n_fields 46; compact format; info bits 0 0: len 30; hex 326339663838333136633531353832393031366335333232356133303032; asc 2c9f88316c515829016c53225a3002; (total 32 bytes); 1: len 6; hex 000001528dbd; asc R ;; 2: len 7; hex 420000015204cb; asc B R ;; 3: SQL NULL; 4: len 5; hex 99a3c50537; asc 7;; 5: len 6; hex 616374697665; asc active;; 6: len 30; hex 343864663763663762666665343734643835373837626464663834323531; asc 48df7cf7bffe474d85787bddf84251; (total 32 bytes); 7: len 5; hex 99a3c50537; asc 7;; 8: SQL NULL; 9: SQL NULL; 10: SQL NULL; 11: SQL NULL; 12: SQL NULL; 13: len 4; hex 6d696c65; asc mile;; 14: len 3; hex 8f9523; asc #;; 15: len 19; hex 736f757a617369726540676d61696c2e636f6d; asc souzasire@gmail.com;; 16: len 6; hex 574853504749; asc WHSPGI;; 17: SQL NULL; 18: SQL NULL; 19: len 6; hex 52656e617461; asc Renata;; 20: len 6; hex 66656d616c65; asc female;; 21: SQL NULL; 22: SQL NULL; 23: SQL NULL; 24: SQL NULL; 25: len 30; hex 687474703a2f2f63617463686170702d70726f642e73332e616d617a6f6e; asc http://catchapp-prod.s3.amazon ; (total 82 bytes); 26: len 5; hex 536f757a61; asc Souza;; 27: len 4; hex 80000000; asc ;; 28: len 4; hex 80000000; asc ;; 29: len 30; hex 7b535348417d34774b53395a47376868687062785152747369367a567945; asc {SSHA}4wKS9ZG7hhhpbxQRtsi6zVyE; (total 46 bytes); 30: SQL NULL; 31: SQL NULL; 32: len 4; hex 80000001; asc ;; 33: SQL NULL; 34: SQL NULL; 35: len 15; hex 416d65726963612f4368696361676f; asc America/Chicago;; 36: len 9; hex 536f757a6173697265; asc Souzasire;; 37: len 5; hex 3437353833; asc 47583;; 38: len 4; hex 80000001; asc ;; 39: SQL NULL; 40: SQL NULL; 41: len 1; hex 31; asc 1;; 42: len 4; hex 80000000; asc ;; 43: len 1; hex 81; asc ;; 44: len 5; hex 6261736963; asc basic;; 45: len 6; hex 5055424c4943; asc PUBLIC;; *** WE ROLL BACK TRANSACTION (2) ------------ TRANSACTIONS ------------ Trx id counter 22211040 Purge done for trx's n:o = 22211040, sees = 22211039, sees = 22211038, sees = 22210954, sees = 22210909, sees = 22210502, sees = 22210002, sees = 22209497, sees = 22209481, sees = 22209430, sees = 22209378, sees = 22208805, sees = 22208534, sees = 22208451, sees = 22208366, sees = 22208236, sees = 22208141, sees = 22207536, sees = 22206743, sees = 22206623, sees = 22205264, sees = 22205261, sees = 22204497, sees = 22204482, sees = 22203903, sees < 22198567 -------- FILE I/O -------- I/O thread 0 state: waiting for i/o request (insert buffer thread) I/O thread 1 state: waiting for i/o request (log thread) 
I/O thread 2 state: waiting for i/o request (read thread) I/O thread 3 state: waiting for i/o request (read thread) I/O thread 4 state: waiting for i/o request (read thread) I/O thread 5 state: waiting for i/o request (read thread) I/O thread 6 state: waiting for i/o request (write thread) I/O thread 7 state: waiting for i/o request (write thread) I/O thread 8 state: waiting for i/o request (write thread) I/O thread 9 state: waiting for i/o request (write thread) I/O thread 10 state: waiting for i/o request (write thread) I/O thread 11 state: waiting for i/o request (write thread) I/O thread 12 state: waiting for i/o request (write thread) I/O thread 13 state: waiting for i/o request (write thread) I/O thread 14 state: waiting for i/o request (write thread) I/O thread 15 state: waiting for i/o request (write thread) I/O thread 16 state: waiting for i/o request (write thread) I/O thread 17 state: waiting for i/o request (write thread) I/O thread 18 state: waiting for i/o request (write thread) I/O thread 19 state: waiting for i/o request (write thread) I/O thread 20 state: waiting for i/o request (write thread) I/O thread 21 state: waiting for i/o request (write thread) Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 0; buffer pool: 0 29552 OS file reads, 175874 OS file writes, 98491 OS fsyncs 0.02 reads/s, 16384 avg bytes/read, 2.47 writes/s, 1.67 fsyncs/s ------------------------------------- INSERT BUFFER AND ADAPTIVE HASH INDEX ------------------------------------- Ibuf: size 1, free list len 48746, seg size 48748, 20592 merges merged operations: insert 841, delete mark 3317352, delete 0 discarded operations: insert 0, delete mark 0, delete 0 Hash table size 12119119, node heap has 442 buffer(s) 909.30 hash searches/s, 351.57 non-hash searches/s --- LOG --- Log sequence number 4441306931 Log flushed up to 4441306098 Pages flushed up to 4441306098 Last checkpoint at 4441306098 0 pending log writes, 0 pending chkp writes 18712 log i/o's done, 0.28 log i/o's/second ---------------------- BUFFER POOL AND MEMORY ---------------------- Total memory allocated 6267207680; in additional pool allocated 0 Dictionary memory allocated 822706 Buffer pool size 373760 Free buffers 252854 Database pages 120464 Old database pages 44620 Modified db pages 4 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 4184, not young 0 0.00 youngs/s, 0.00 non-youngs/s Pages read 54318, created 66146, written 348418 0.02 reads/s, 0.00 creates/s, 1.84 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 120464, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---------------------- INDIVIDUAL BUFFER POOL INFO ---------------------- ---BUFFER POOL 0 Buffer pool size 46720 Free buffers 31671 Database pages 14992 Old database pages 5552 Modified db pages 2 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 568, not young 0 0.00 youngs/s, 0.00 non-youngs/s Pages read 6678, created 8314, written 49253 0.00 reads/s, 0.00 creates/s, 0.47 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 14992, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---BUFFER POOL 1 Buffer pool size 46720 Free buffers 31761 
Database pages 14901
Old database pages 5519
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 486, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 6755, created 8146, written 42935
0.00 reads/s, 0.00 creates/s, 0.18 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
...
----------------------------
END OF INNODB MONITOR OUTPUT
============================

I added more RAM to the instance, to no avail. I'm using a custom parameter group and set the IO threads to 16 (the default was 4). This did not help.

public ChatMessage addMessage(Chat chat, User user, String message, MessageType type) throws CatchappRdbException {
    ChatMessage chatMessage = new ChatMessage();
    chatMessage.setChat(chat);
    chatMessage.setMessage(message);
    chatMessage.setUser(user);
    chatMessage.setType(type);
    chat.getMessages().add(chatMessage);
    chat.setLast_message(new Date());
    Session session = getSession();
    try {
        // session.saveOrUpdate(chat);
        session.saveOrUpdate(chatMessage);
    } catch (HibernateException he) {
        throw new CatchappRdbException(he);
    } finally {
        flushSession(session);
    }
    return chatMessage;
}

This works for any other row. If it runs against that one particular row, it seems like a few tables get locked. I found the transaction causing the block, but its trx_query is NULL. I wondered whether it's a max-open-files issue, or whether Hibernate is trying to access some file that is taking forever, but I couldn't find anything. I'm wondering if paid AWS support can help with this.
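Since trx_query is often NULL simply because the blocker is idle between statements, a query like the following can at least map the waiting transaction to the blocking client thread; this is a minimal sketch assuming MySQL 5.6/5.7, where information_schema.INNODB_LOCK_WAITS still exists:

-- Map each waiting transaction to the transaction (and connection) blocking it.
-- blocking_query is often NULL when the blocker is idle between statements;
-- blocking_thread still identifies the connection, which can then be inspected
-- via SHOW PROCESSLIST or killed.
SELECT
    r.trx_id              AS waiting_trx,
    r.trx_mysql_thread_id AS waiting_thread,
    r.trx_query           AS waiting_query,
    b.trx_id              AS blocking_trx,
    b.trx_mysql_thread_id AS blocking_thread,
    b.trx_started         AS blocking_started,
    b.trx_query           AS blocking_query   -- NULL usually means "connected but idle"
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;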
franklinexpress (111 rep)
Aug 3, 2019, 12:01 AM • Last activity: Aug 7, 2025, 12:05 PM
0 votes
0 answers
21 views
Slow DELETE due to FK trigger on partitioned child table
I'm working with PostgreSQL 15 and experimenting with table partitioning to improve performance.

**Original setup:**
I have two tables: `tasks` (parent) with ~65M rows and `records` (child) with ~200M rows.
There is a foreign key from `records.task_id` → `tasks.id`.
When executing a `DELETE` statement on `tasks`, it takes approximately 12 seconds to complete.
explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
                                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.tasks  (cost=5628.30..1402047.95 rows=0 width=0) (actual time=1323.519..1323.520 rows=0 loops=1)
   Buffers: shared hit=1160307
   ->  Bitmap Heap Scan on public.tasks  (cost=5628.30..1402047.95 rows=502675 width=6) (actual time=45.328..373.039 rows=513093 loops=1)
         Output: ctid
         Recheck Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
         Heap Blocks: exact=133678
         Buffers: shared hit=134121
         ->  Bitmap Index Scan on tasks__project_id  (cost=0.00..5502.63 rows=502675 width=0) (actual time=24.057..24.058 rows=513093 loops=1)
               Index Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
               Buffers: shared hit=443
 Query Identifier: -8361055271394889220
 Planning Time: 0.069 ms
 Trigger RI_ConstraintTrigger_a_991596545 for constraint records_task_id_dfa43a78_fk_tasks: time=11377.093 calls=513093
 Execution Time: 12742.066 ms
(14 rows)
I decided to partition the `records` table by `project_id` (LIST partitioning) as follows:

1. Create a new table `tasks_new` and copy the content of `tasks`
2. Create a new partitioned `records` table `PARTITION BY LIST (project_id)`
3. Change the PRIMARY KEY to be based on `(project_id, id)`
4. Add a foreign key constraint to `tasks_new` based on `(project_id, task_id)`
5. Generate 800 partitions and populate the data from `records`

This works as expected; however, when running and comparing the `DELETE` statement on `tasks_new`, I see it is slower than in the original (non-partitioned) setup.

explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks_new WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
                                                                    QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.tasks_new  (cost=15925.46..1400465.79 rows=0 width=0) (actual time=1352.341..1352.342 rows=0 loops=1)
   Buffers: shared hit=1162960
   ->  Bitmap Heap Scan on public.tasks_new  (cost=15925.46..1400465.79 rows=497147 width=6) (actual time=62.350..390.696 rows=513093 loops=1)
         Output: ctid
         Recheck Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
         Heap Blocks: exact=133678
         Buffers: shared hit=136774
         ->  Bitmap Index Scan on tasks_new_pkey  (cost=0.00..15801.17 rows=497147 width=0) (actual time=41.590..41.590 rows=513093 loops=1)
               Index Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
               Buffers: shared hit=3096
 Query Identifier: -3093644494149541820
 Planning Time: 0.066 ms
 Trigger RI_ConstraintTrigger_a_991528806 for constraint records_parti_task_id_dfa43a78_fk_tasks: time=28558.369 calls=513093
 Execution Time: 29965.025 ms
(14 rows)
I saw that most of the time is spent in the foreign key trigger (to surface it in the plan, I intentionally ran `set constraints all immediate;`), and it is SLOWER than in the test without partitions.

RI_ConstraintTrigger_a_991596545 fires RI_FKey_noaction_del in an AFTER DELETE trigger on table tasks_new. However, I don't know how RI_FKey_noaction_del is implemented, and I suspect it doesn't work well with partitioned tables, or perhaps it scans all partitions together (which would be inefficient). Please help me understand how it works and how I can improve it.
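As a rough illustration (not a claim about the exact internal query): for a NO ACTION foreign key, the RI trigger effectively runs one existence probe per deleted parent row, roughly like the sketch below. The table name records_new is a placeholder for the partitioned child table; the sketch mainly shows why a child-side index matching the FK columns matters:

-- Approximately what the RI check does once per deleted tasks_new row
-- (parameter values come from the deleted row; FOR KEY SHARE locks any match):
SELECT 1
FROM records_new x
WHERE x.project_id = $1
  AND x.task_id = $2
FOR KEY SHARE OF x;

-- If no index leads with (project_id, task_id), each probe may have to visit
-- many partitions; a partitioned index matching the FK columns helps
-- (PostgreSQL 11+ cascades this to every partition):
CREATE INDEX ON records_new (project_id, task_id);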
Cowabunga (145 rep)
Aug 7, 2025, 10:36 AM
0 votes
3 answers
3618 views
Failure to initialise MySQL Server 5.7 due to mysqlx_port=0.0
I'm migrating a legacy system to a more recent server and I am having issues installing MySQL Server 5.7. When I try, I get the error `unknown variable 'mysqlx_port=0.0'`. I've tried updating the `my.ini` file to comment out this variable, or to set it to another value (3306), but whenever the configuration step is run, the file is regenerated with `mysqlx_port=0.0`. Has anyone run into this issue, or does anyone have any ideas to try? My googling isn't giving me much joy. Thanks! Full init logs:
configuration step: Writing configuration file

Saving my.ini configuration file...
Saved my.ini configuration file.
Ended configuration step: Writing configuration file

Beginning configuration step: Updating Windows Firewall rules

Adding a Windows Firewall rule for MySQL57 on port 3306.
Attempting to add a Windows Firewall rule with command: netsh.exe advfirewall firewall add rule name="Port 3306" protocol=TCP localport=3306 dir=in action=allow
Ok.


Successfully added the Windows Firewall rule.
Ended configuration step: Updating Windows Firewall rules

Beginning configuration step: Adjusting Windows service

Attempting to grant the required filesystem permissions to the 'NT AUTHORITY\NetworkService' account.
Granted permissions to the data directory.
Adding new service
New service added
Ended configuration step: Adjusting Windows service

Beginning configuration step: Initializing database (may take a long time)

Deleting the data directory from a previous (failed) configuration...
Attempting to run MySQL Server with --initialize-insecure option...
Starting process for MySQL Server 5.7.44...
Starting process with command: C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqld.exe --defaults-file="C:\ProgramData\MySQL\MySQL Server 5.7\my.ini" --console --initialize-insecure=on --lower-case-table-names=1...
TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
InnoDB: New log files created, LSN=45790
InnoDB: Creating foreign key constraint system tables.
unknown variable 'mysqlx_port=0.0'
Aborting
Process for mysqld, with ID 3212, was run successfully and exited with code 1.
Failed to start process for MySQL Server 5.7.44.
Database initialization failed.
Ended configuration step: Initializing database (may take a long time)
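One workaround sketch, not a confirmed fix: mysqld treats an unknown variable carrying the loose_ prefix as a warning rather than a fatal error, which matters here because the X Plugin (and thus mysqlx_port) isn't loaded during --initialize. If the installer keeps regenerating the line, renaming it in my.ini just before initialization may let mysqld start; the section placement below is an assumption about this particular my.ini:

[mysqld]
# Regenerated by the installer as "mysqlx_port=0.0"; the loose_ prefix makes
# mysqld log a warning for an unknown variable instead of aborting startup.
loose_mysqlx_port=33060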
Harold (101 rep)
Jun 11, 2024, 12:40 PM • Last activity: Aug 7, 2025, 10:07 AM
0 votes
1 answer
144 views
Huge Transaction Log File - Full Database backup, but no Transaction log backups
I hope someone can help. Please be gentle, this type of admin is not what I usually do.

I have inherited a database which is set up to do a FULL database backup at 02:00 every day. The transaction log is huge (almost 1 TB) and has never been backed up (hence it keeps growing). The database is 172 GB. The logs and database are on different drives, as are the backups.

We would like to put a new maintenance plan in place with one FULL backup every week, 4 differential backups each day, and transaction log backups every 15 minutes.

The size of the transaction log is worrying me. We don't have a drive with enough space to back it up. Can I set the recovery model to SIMPLE, shrink or delete the transaction log, change back to FULL, then set up the new jobs to do the differential and log backups? Or will that break something? Is there a proper / better way to deal with this transaction log? I'm worried we will soon run out of space on the drive.

Very happy to provide more info if needed. I wasn't sure what would be pertinent at this point. Thanks in advance.
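For scale, a minimal sketch of the usual approach (back up the log so its space can be reused, then shrink the file), with the database and logical file names as placeholders; note that BACKUP LOG TO NUL: discards the backup and breaks the log chain, so a fresh FULL backup should follow:

-- Back up the log; this marks the inactive portion as reusable.
-- 'NUL' throws the backup away, so only do this if you accept restarting
-- the log chain with a new FULL backup afterwards.
BACKUP LOG [MyDb] TO DISK = N'NUL';

-- Then shrink the physical log file back to a sane size (e.g. ~16 GB):
USE [MyDb];
DBCC SHRINKFILE (MyDb_log, 16000);   -- logical log file name is an assumption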
Jubs666 (1 rep)
Jul 24, 2023, 02:10 PM • Last activity: Aug 7, 2025, 09:01 AM
0 votes
2 answers
2510 views
Cloning the database to another server with raw data files
I'm trying to clone the database on **server 1** to **server 2** using the raw data files. I'm using Linux 7 and Oracle 11g R2. I have an instance running with the same SID on **server 2**, but the file structure is different. The database is about 85 GB. I have the data files, control file, redo log files, spfile and init.ora. Can someone explain the way to clone it like this?
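For orientation, a minimal sketch of the manual-copy approach when the directory structure differs; it assumes a consistent (cold) copy of the files, all paths and the db_name are placeholders, and recreating the control file is only one of several valid routes (RMAN DUPLICATE is the usual alternative):

-- On server 2, after copying data files and redo logs to their new locations
-- and editing init.ora (control_files, *_dest paths):
SQL> STARTUP NOMOUNT PFILE='/u01/app/oracle/dbs/initORCL.ora';

-- Recreate the control file with the new file paths (trimmed example):
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
       LOGFILE GROUP 1 ('/u02/oradata/ORCL/redo01.log') SIZE 512M
       DATAFILE '/u02/oradata/ORCL/system01.dbf',
                '/u02/oradata/ORCL/sysaux01.dbf'
       CHARACTER SET AL32UTF8;

-- Open with RESETLOGS:
SQL> ALTER DATABASE OPEN RESETLOGS;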
MM_Julia (1 rep)
Feb 5, 2018, 04:42 AM • Last activity: Aug 7, 2025, 08:02 AM
0 votes
1 answer
590 views
Why doesn't the binlog log the CREATE TABLE statement?
Two methods compared:

1. use test; create table t (a int);
2. create table test.t1(a int);

My result is that the first one shows up in the binlog, but the second one does not.

use test; create table t(a int);

mysqlbinlog -v mysql-bin.000005 | grep create
use `test`/*!*/;
SET TIMESTAMP=1490090893/*!*/;
create table t(a int)
/*!*/;
DELIMITER ;

But when I do not first issue "use", the CREATE TABLE cannot be found in the binlog — can you tell me why?

create table test.t(a int);
mysqlbinlog -v mysql-bin.000005 | grep create
(nothing for test.t)
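To double-check whether the statement was actually written (grep can miss it because of how mysqlbinlog rewrites statements), one can list the events directly; a minimal sketch, with the binlog file name taken from the question:

-- List events in the binlog without going through mysqlbinlog/grep;
-- a logged CREATE TABLE appears as a Query event with the full statement text.
SHOW BINLOG EVENTS IN 'mysql-bin.000005';

-- Also worth confirming that logging is on and which file is current:
SHOW VARIABLES LIKE 'log_bin';
SHOW MASTER STATUS;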
LawrenceLi (321 rep)
Mar 21, 2017, 10:33 AM • Last activity: Aug 7, 2025, 07:08 AM
2 votes
1 answer
2329 views
How make Db2 database transaction log full fully robust?
Db2 v11.5.7.0 on Linux, and for some reason, a few times per year the database hits the transaction log full error. My solution so far has been to increase logsecond, logprimary or logfilsiz, and I have now reached the foolish level where day-to-day log consumption is just 0.1% — yet the database still hits the transaction log full error a few times per year. I investigated the problem in detail, and below are step-by-step details of how to reproduce it. I created three cases. The first and second case work as expected, but case 3 still runs into the transaction log full problem.

db2 "create database test1"
mkdir /home/db2inst1/db2archlog
db2 "update db cfg for test1 using logarchmeth1 DISK:/home/db2inst1/db2archlog"
db2 "backup database test1 to /dev/null"
db2 "update db cfg for test1 using logfilsiz 100"
db2 "update db cfg for test1 using logprimary 3"
db2 "update db cfg for test1 using logsecond 2"
db2 "update db cfg for test1 using num_log_span 4"
db2 "update db cfg for test1 using max_log 95"

Above, logfilsiz, logprimary and logsecond are intentionally set to very small values to quickly demonstrate transaction log full problems.

**CASE 1**

db2 connect to test1
db2 "create table admin.tab1 (col1 int)"
db2 +c "INSERT INTO admin.tab1 WITH temp1 (s1) AS ( VALUES (0) UNION ALL SELECT s1 + 1 FROM temp1 WHERE s1 < 1000000 ) SELECT s1 FROM temp1"

After a few seconds an error is returned:

SQL1224N The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated the specified request because of an error or a forced interrupt. SQLSTATE=55032

Analyzing db2diag.log:

ADM1542W Application "db2bp" with application handle "*LOCAL.db2inst1.220512112327" and application id "DB2INST1" executing under authentication id "95" will be forced off of the database for violating database configuration parameter MAX_LOG (current value ""). The unit of work will be rolled back.

The transaction was trying to consume the whole transaction log, so it hit the MAX_LOG limit of 95% and the application was forced off the database. Works as expected. Transaction log full prevented successfully.

**CASE 2**

In the first Db2 session:

db2 connect to test1
db2 +c "insert into admin.tab1 values (1)"

In the second Db2 session:

db2 connect to test1
db2 "BEGIN DECLARE MESSAGE VARCHAR(100); DECLARE STATUS INTEGER; DECLARE I INTEGER; SET i=1; WHILE i < 50000 DO INSERT INTO ADMIN.TAB1 VALUES (i); COMMIT; SET i = i + 1; END WHILE; END"

The above SQL inserts records one by one in a loop and commits each of them. This can take a minute or two. The inserts complete successfully. Now back in the first Db2 session, execute:

db2 commit

and this error is displayed:

SQL1224N The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated the specified request because of an error or a forced interrupt. SQLSTATE=55032

Analyzing db2diag.log:

ADM1541W Application "db2bp" with application handle "0-216" and application id "*LOCAL.db2inst1.220512112650" executing under authentication id "DB2INST1" has been forced off of the database for violating database configuration parameter NUM_LOG_SPAN (current value "4"). The unit of work will be rolled back.

Because the second session's row-by-row inserts and commits pushed the span of the oldest open UOW past the 4-log-file limit set as NUM_LOG_SPAN, the database forced off the oldest uncommitted transaction — the one in the first Db2 session. Transaction log full prevented successfully.
**CASE 3**

In the first Db2 session:

db2 connect to test1
db2 +c "INSERT INTO admin.tab1 WITH temp1 (s1) AS ( VALUES (0) UNION ALL SELECT s1 + 1 FROM temp1 WHERE s1 < 5000 ) SELECT s1 FROM temp1"

The insert executes successfully. Open a second Db2 session and execute the same insert; it also succeeds. Open a third Db2 session, execute the same insert, and it returns:

SQL0964C The transaction log for the database is full. SQLSTATE=57011

So with many small uncommitted transactions, none of which hits the MAX_LOG or NUM_LOG_SPAN limit, the transaction log can still get saturated in a very short period of time — under one minute in the sample above.

Analyzing db2diag.log:

MESSAGE : ZRC=0x85100009=-2062548983=SQLP_NOSPACE "Log File has reached its saturation point" DIA8309C Log file was full.

I know I can increase LOGFILSIZ, LOGPRIMARY and LOGSECOND; for one of the databases I already did this quite aggressively, so that day-to-day applications consume at most 0.1% of the transaction log — and a few times a year some combination of application activity still runs into a transaction log full situation. I suspect there is some runaway application opening several hundred connections and not committing them. The event is so rare that it is very difficult to capture; we ran tons of application tests in the test environment and the problem never appeared.

I know the only final solution will be to track down the runaway application and fix the application bug. But some new application could always develop the same problem. From the database perspective, I would like to implement some mechanism to prevent the transaction-log-full situation independently of the application layer. I know I can write a script that runs in a loop, monitors transaction log usage (e.g. via the MON_GET_TRANSACTION_LOG table function) and forces applications off the database when the log is full or nearly full. But is there any simpler way, without scripts — some Db2 parameter or combination of parameters to prevent the "case 3" log saturation?

Regards
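For reference, a minimal sketch of the scripted monitoring the question mentions, built on MON_GET_TRANSACTION_LOG; the 90% threshold and the idea of pairing it with APPLID_HOLDING_OLDEST_XACT are assumptions, not something stated in the question:

-- Percent of transaction log currently used, plus the application holding
-- the oldest open transaction (the usual candidate to force off):
SELECT DEC(100.0 * TOTAL_LOG_USED /
           (TOTAL_LOG_USED + TOTAL_LOG_AVAILABLE), 5, 2) AS pct_log_used,
       APPLID_HOLDING_OLDEST_XACT
FROM TABLE(MON_GET_TRANSACTION_LOG(-1)) AS t;

-- A loop around this could then run, e.g.:
--   db2 "force application (<agent id>)"
-- once pct_log_used crosses a threshold such as 90%.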
folow (523 rep)
May 12, 2022, 12:25 PM • Last activity: Aug 7, 2025, 06:04 AM
1 vote
1 answer
1303 views
permission denied for schema ctgov; SQL state: 42501
I am new to PostgreSQL and I would be grateful if you could please advise on how to resolve the following error. I have issued the following commands:
CREATE VIEW cancer_view AS
SELECT sub1.nct_id, sub1.condition, sub1.condition_name,
		sub2.overall_status, sub2.participants,
		e.criteria,
		f.city, f.state, f.country,
		i.intervention_type, i.name AS intervention
FROM (SELECT c.nct_id, c.name AS condition, c.downcase_name AS condition_name
		FROM ctgov.conditions AS c
-- selecting only cancer trials before joining
		WHERE c.name ILIKE '%cancer%') sub1									
JOIN (SELECT s.nct_id, s.overall_status, s.enrollment AS participants
 		FROM ctgov.studies AS s
-- selecting only completed cancer trials before joining 	
	  	WHERE s.overall_status = 'Completed') sub2							
	ON sub1.nct_id = sub2.nct_id
JOIN ctgov.eligibilities AS e
	ON sub1.nct_id = e.nct_id
JOIN ctgov.facilities AS f
	ON sub1.nct_id = f.nct_id
JOIN ctgov.interventions AS i
	ON sub1.nct_id = i.nct_id;
And the error is:

ERROR: permission denied for schema ctgov
SQL state: 42501

Could you please advise on how to set up the privileges to have access to the tables defined in the schema? Thank you in advance!!!
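For reference, the usual grants for this error; a minimal sketch assuming the connecting role is named app_user (a placeholder). USAGE on the schema is what a schema-level 42501 is asking for, and SELECT on the tables is needed on top of it:

-- Run as the owner of schema ctgov (or a superuser):
GRANT USAGE ON SCHEMA ctgov TO app_user;
GRANT SELECT ON ALL TABLES IN SCHEMA ctgov TO app_user;

-- Optionally cover tables created in ctgov later:
ALTER DEFAULT PRIVILEGES IN SCHEMA ctgov GRANT SELECT ON TABLES TO app_user;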
Lan (11 rep)
Feb 27, 2023, 05:04 AM • Last activity: Aug 7, 2025, 05:08 AM
2 votes
1 answer
573 views
Spatial Query using MySQL 8.0
I have a database in MySQL 5.7 where I created a lat/long column using POINT, then created an index on this column with the primary key as the second column. So my table is like PID, SurveyID, GeoPoint.

Basically this is a summary table, where PID and SurveyID form the primary key. We run surveys on animals, and this summary table records which animal was surveyed at a given geo location on the map. My index was on the columns GeoPoint and PID. The table stores around 400K records, summarized from a main table of 2M records for faster execution.

Now we are planning to upgrade to MySQL 8.0, and we found that it doesn't support indexing a spatial column together with a non-spatial column, so our index was not getting created. As a result, our query now takes 8 seconds instead of 0.6 seconds on the older version of MySQL. Moreover, when I create an index only on the spatial column geopoint, ST_Contains and MBRContains do not use that index.

Basically our query is as simple as letting the user draw a square on Google Maps; we then use those coordinates to find all animals in that region. I am not sure how to solve this, as I found no documentation that helps tweak the query.

Query:

SELECT PID, count(distinct SurveyID) as totalsurvey
FROM locationsummary
where st_contains(ST_Envelope(ST_GeomFromText( 'LineString(137.109375 21.47351753335, 87.890625 -22.411028521559)')), geopoint )
group by PID

Table:

CREATE TABLE locationsummary (
  PID bigint(20) NOT NULL,
  SurveyID bigint(20) NOT NULL,
  Sitelat float NOT NULL,
  sitelong float NOT NULL,
  geopoint point NOT NULL,
  PRIMARY KEY (PID,SurveyID),
  SPATIAL KEY idx_geopoint (geopoint)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=COMPACT

Edit, 12th July 2019: today I reran the queries and they now execute in 2 seconds. Still not good, but somehow better than the last run.
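One avenue worth checking (a sketch, not a verified fix for this schema): in MySQL 8.0 the optimizer only considers a SPATIAL index when the indexed column has a declared SRID, so declaring one (SRID 0 keeps the existing Cartesian coordinates working) and rebuilding the index may restore index use:

-- Drop the old index, declare the SRID on the column (0 = flat Cartesian
-- plane, matching the existing data), then rebuild the spatial index:
ALTER TABLE locationsummary DROP INDEX idx_geopoint;
ALTER TABLE locationsummary MODIFY geopoint POINT NOT NULL SRID 0;
CREATE SPATIAL INDEX idx_geopoint ON locationsummary (geopoint);

-- Bounding-box query against an explicit polygon (same box as the question):
SELECT PID, COUNT(DISTINCT SurveyID) AS totalsurvey
FROM locationsummary
WHERE MBRContains(
        ST_GeomFromText('POLYGON((87.890625 -22.411028521559,
                                  137.109375 -22.411028521559,
                                  137.109375 21.47351753335,
                                  87.890625 21.47351753335,
                                  87.890625 -22.411028521559))', 0),
        geopoint)
GROUP BY PID;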
Sumit Gupta (175 rep)
May 30, 2019, 10:57 AM • Last activity: Aug 7, 2025, 04:03 AM
0 votes
1 answer
105 views
If Read Committed Snapshot Isolation is already enabled, what is the cost of enabling Snapshot Isolation?
Suppose that I have a database with Read Committed Snapshot Isolation already enabled. Is there any reason at all not to also enable Snapshot Isolation? Intuitively, you would think that the row versions would be kept around for longer. [The documentation dismisses this](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver17#behavior-when-reading-data):

> Even though READ COMMITTED transactions using row versioning provides a transactionally consistent view of the data at a statement level, row versions generated or accessed by this type of transaction are maintained until the transaction completes.

So I am left without any ideas. Assume SQL Server 2022. SQL Server 2025 brought with it Optimized Locking, which creates just enough uncertainty in my mind that I don't want to ask about it here.
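If it helps frame the cost question: the version store overhead under either setting can be measured directly. A minimal sketch using the standard tempdb version-store DMVs (the first requires SQL Server 2016 SP2 or later):

-- Version store footprint per database:
SELECT DB_NAME(database_id) AS database_name,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;

-- Longest-running snapshot transactions (what keeps row versions alive):
SELECT TOP (5) transaction_id, transaction_sequence_num, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;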
J. Mini (1235 rep)
Aug 1, 2025, 08:05 PM • Last activity: Aug 7, 2025, 03:15 AM
1 vote
1 answer
3726 views
Oracle standby databases monitoring
I have many databases for which I have configured a Data Guard setup. For monitoring the standbys, I use OEM to check the lag, and I have enabled mail alerts for whenever the gap exceeds 50 archives. But what I recently observed is that on one of my standby databases, recovery stopped for some reason, and unfortunately the lag never went above 50, so there were no alerts. How can I ensure I am alerted when the standby's MRP recovery process itself is not active? Any solutions?
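A minimal sketch of the kind of check that can be wired into a cron job or an OEM metric extension; both views are standard on a standby, and treating "no MRP row" as the alert condition is the assumption here:

-- On the standby: is a Managed Recovery Process running at all?
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
-- zero rows => media recovery is not running => raise an alert

-- Cross-check the lag as seen by Data Guard itself:
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');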
tesla747 (1910 rep)
Jan 4, 2016, 01:16 PM • Last activity: Aug 7, 2025, 03:03 AM
1 vote
1 answer
9040 views
Linked server error- The table either does not exist or the current user does not have permissions on that table
There is a view that is called from Server A (SQL 2016), and this view needs to read data from Server B (SQL 2014). There is a linked server set up between these two servers and the connection works fine. The login that runs the view on the solution/application side gets this error:

> The OLE DB provider "SQLNCLI11" for linked server "Server B" does not contain the table ""DB1"."dbo"."TBL1"". The table either does not exist or the current user does not have permissions on that table.

- The login has DBO permissions on both DBs on Server A and Server B.
- The login is listed as a login on the linked server Security page.
- I have explicitly granted SELECT on all tables being accessed, and it did not help.
- I am able to select and read from all tables under Server Objects > Linked Servers > Server B > Table > Select To.
- When the login runs the view from SSMS, it works fine without any errors.

Can someone please point me in the right direction on why this error might be occurring and how I can resolve it?
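One way to narrow it down is to reproduce the remote call while impersonating the failing login; a minimal sketch, with the login name as a placeholder and the object names taken from the error message:

-- On Server A, impersonate the application login, then hit the remote table
-- both by four-part name and by pass-through query:
EXECUTE AS LOGIN = N'AppLogin';   -- placeholder login name

SELECT TOP (1) * FROM [Server B].[DB1].[dbo].[TBL1];                 -- four-part name
SELECT * FROM OPENQUERY([Server B], 'SELECT TOP 1 * FROM DB1.dbo.TBL1');

REVERT;
-- If OPENQUERY works but the four-part name fails, the problem is usually
-- metadata/permission resolution for the mapped remote login, not the table.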
sqllover2020 (73 rep)
Jan 20, 2022, 04:03 PM • Last activity: Aug 7, 2025, 02:02 AM
0 votes
1 answer
138 views
How can I share values in maintenance plan tasks?
A simple example would be a maintenance plan that clears a directory of old files and creates copy backups of all databases on that instance to that same directory. This runs daily.

The first cleanup step contains the path where the files are deleted. The second step actually creates the copy backups in the same directory as the first step. So any change to the path has to be made in multiple places. For a simple 2-step maintenance plan this is not really a big issue, but it could be for more complicated plans.

So is there a way that plan steps can share information? I don't necessarily mean passing information between steps, but perhaps a global space for the steps? I could not see anything in the toolbox that would work for this.

The best I have got is that I could add two new steps:

1. Create a temp table with the path (and any other constants)
2. Using that path, delete the directory contents
3. Run the copy backups
4. Delete the temp table

I had thought this question: https://dba.stackexchange.com/questions/120498/can-i-pass-data-between-stages-in-a-maintenance-task would have helped, but it was focused on sending completion information in email.
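A sketch of that workaround, assuming each step is an Execute T-SQL Statement task in the same plan; the table and path names are placeholders, and a small permanent table is used rather than a # temp table, since plan tasks may not share a session:

-- Step 1: stash the constants once per run.
IF OBJECT_ID('dbo.MaintConfig') IS NULL
    CREATE TABLE dbo.MaintConfig (KeyName sysname PRIMARY KEY, KeyValue nvarchar(4000));
MERGE dbo.MaintConfig AS t
USING (VALUES (N'BackupPath', N'D:\CopyBackups\')) AS s (KeyName, KeyValue)
ON t.KeyName = s.KeyName
WHEN MATCHED THEN UPDATE SET KeyValue = s.KeyValue
WHEN NOT MATCHED THEN INSERT (KeyName, KeyValue) VALUES (s.KeyName, s.KeyValue);

-- Later steps: read the path back wherever it is needed.
DECLARE @path nvarchar(4000) =
    (SELECT KeyValue FROM dbo.MaintConfig WHERE KeyName = N'BackupPath');
-- ...then use @path in BACKUP DATABASE ... TO DISK statements, cleanup commands, etc.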
Matt (365 rep)
Mar 17, 2017, 01:03 PM • Last activity: Aug 7, 2025, 01:01 AM
0 votes
1 answer
483 views
LSN numbers query regarding Differential backups
We have the following backup regime in place, which I am not 100% sure about:

- Full backup each night at 18:00, taking approx. 6 hours
- Transaction log backup upon completion of the full, generally about 10 mins
- Differential backups each hour, taking approx. 20 mins

When querying the msdb.backupset table, I understand that the first_lsn will only be reset by a transaction log backup, but I am slightly confused that the last_lsn entry is incremented each time the differential backup runs. This seems to imply that the differential backup also contains transaction log entries (which I don't believe it does). Is this entry in msdb.backupset just there to tell a restore which transaction log backup to use, to preserve the chain of transactions?

Finally, if we only have a transaction log backup once a night and differentials throughout the day, we are still looking at potentially a maximum of 24 hours of data loss despite hourly differentials.

Apologies if this seems like a lot of questions, but I am trying to get this clear in my head. Thanks for listening.
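For inspecting this yourself, a minimal sketch against msdb; the columns are standard, and the database name is a placeholder:

-- Compare the LSN bookkeeping across backup types for one database:
SELECT backup_start_date,
       type,                 -- D = full, I = differential, L = log
       first_lsn,
       last_lsn,
       database_backup_lsn   -- the full backup a differential is based on
FROM msdb.dbo.backupset
WHERE database_name = N'YourDb'
ORDER BY backup_start_date;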
zapcon (73 rep)
Jun 1, 2017, 11:16 AM • Last activity: Aug 7, 2025, 12:04 AM
-1 votes
1 answer
146 views
MySQL trigger to insert rows into another table by barcode
I have 3 tables:

1)

CREATE TABLE UrunPaketDetay(
  UrunPaketDetay int NOT NULL AUTO_INCREMENT,
  UrunPaketNo int NULL,
  Tarih date NULL,
  Barkod varchar(14) NOT NULL,
  Urun varchar(50) NULL,
  SunumSekli varchar(50) NULL,
  IslemeSekli varchar(50) NULL,
  PaketlemeSekli varchar(50) NULL,
  Kalibre varchar(50) NULL,
  Kilo double NULL,
  GlazeOran varchar(50) NULL,
  Uretici varchar(190) NULL,
  PaletKod varchar(50) NULL,
  PRIMARY KEY (UrunPaketDetay)
) CHARACTER SET utf8;

2)

CREATE TABLE CkisEks(
  CikId int NOT NULL AUTO_INCREMENT,
  Tarih date NULL,
  Musteri varchar(190) NULL,
  TeslimatYer varchar(50) NULL,
  CikisSaati time NULL,
  AracPlakasi varchar(18) NULL,
  AracTel varchar(16) NULL,
  KonteynirNo varchar(50) NULL,
  PaletKod varchar(12) NULL,
  Kilo double NULL,
  PRIMARY KEY (CikId)
) CHARACTER SET utf8;

3)

CREATE TABLE Ckis_Detay(
  CD_Id int NOT NULL AUTO_INCREMENT,
  CikId int NULL,
  Barkod varchar(50) NULL,
  Urun varchar(50) NULL,
  Kalibre varchar(50) NULL,
  Kilo double NULL,
  Uretici varchar(50) NULL,
  Musteri varchar(190) NULL,
  PaletKod varchar(50) NULL,
  Tarih date NULL,
  PRIMARY KEY (CD_Id)
) CHARACTER SET utf8;

**My Question:** I fill in my first table. Then, on the 2nd table, I enter a PaletKod value, and it auto-fills the Kilo column of the 2nd table from the 1st table. But when I save a PaletKod into my 2nd table, how can I select the rows with that PaletKod from the first table and fill my 3rd table's values (Uretici, Urun, Kilo, Kalibre)? With a trigger?

---

BEGIN
  INSERT INTO Ckis_Detay (PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici)
  SELECT PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici
  FROM UrunPaketDetay
  WHERE PaletKod = PaletKod;
END

I coded this trigger, but when I save to table 2, it copies ALL data from table 1 to table 3, and I only need the rows with the matching PaletKod to be copied.
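For reference, a sketch of a corrected trigger: inside a row trigger on CkisEks, the value from the inserted row is NEW.PaletKod, whereas WHERE PaletKod = PaletKod compares the column with itself and is true for every row — which matches the "copies all data" symptom. The trigger name, the AFTER INSERT timing, and the extra columns carried over are assumptions:

DELIMITER $$
CREATE TRIGGER trg_ckiseks_ai
AFTER INSERT ON CkisEks
FOR EACH ROW
BEGIN
  -- Copy only the package rows whose PaletKod matches the row just inserted;
  -- CikId, Musteri and Tarih are taken from the new CkisEks row.
  INSERT INTO Ckis_Detay (CikId, PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici, Musteri, Tarih)
  SELECT NEW.CikId, u.PaletKod, u.Barkod, u.Urun, u.Kalibre, u.Kilo, u.Uretici, NEW.Musteri, NEW.Tarih
  FROM UrunPaketDetay AS u
  WHERE u.PaletKod = NEW.PaletKod;
END$$
DELIMITER ;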
Taz Plus (1 rep)
Aug 28, 2018, 01:08 PM • Last activity: Aug 6, 2025, 11:12 PM
0 votes
2 answers
182 views
The actual cause of transaction log full under below scenario
On one of my instances, this error shows up:

> The transaction log for database 'DB' is full. To find out why space in the log cannot be reused, see the **log_reuse_wait_desc** column in **sys.databases**

The instance has at least 100 GB of free disk space, and the transaction log is allowed to grow to 2,097,152 MB (2 TB) at a growth rate of 10%. The database recovery model is FULL (at least it displays FULL in SSMS).

In this case, what can cause the transaction log full exception when the log has enough physical storage to grow and its size is not above the upper bound?
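The error message itself points to the first thing to check; a minimal sketch of the usual diagnostics (the database name is a placeholder):

-- Why can't the log space be reused? Common values: LOG_BACKUP,
-- ACTIVE_TRANSACTION, REPLICATION, AVAILABILITY_REPLICA, ...
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DB';

-- How full is the log right now, per database:
DBCC SQLPERF (LOGSPACE);

-- Oldest open transaction in that database (a frequent culprit):
DBCC OPENTRAN ('DB');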
SKLTFZ (141 rep)
Mar 28, 2019, 04:30 AM • Last activity: Aug 6, 2025, 11:04 PM
3 votes
1 answer
2626 views
Why is permission denied for the MongoDB keyfile stored in the root user home directory?
I configured /etc/mongod.conf to enforce keyfile access control: in the security section, keyFile is set to /root/dbtest.key (the absolute path of the keyfile). I already gave ownership to the mongodb user with chown, and granted 400 permissions on the dbtest.key file. But mongod keeps failing to start; checking the log, the error is Error reading file /root/dbtest.key: Permission denied. Checking the ownership and permissions on dbtest.key shows I already granted them correctly, so I don't know at which step I went wrong.
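One thing the file's own permissions don't show: every directory on the path must be traversable by the mongodb user, and /root is typically mode 700 and owned by root, which produces exactly this error even when the keyfile itself is mongodb:mongodb 400. A shell sketch to confirm, plus the usual fix (the target path is an assumption):

# Show permissions for every component of the path; the 700 on /root is the
# usual blocker.
namei -l /root/dbtest.key

# Move the keyfile somewhere the mongodb user can reach, then point
# security.keyFile in /etc/mongod.conf at the new location:
mkdir -p /etc/mongod-keyfile
install -o mongodb -g mongodb -m 400 /root/dbtest.key /etc/mongod-keyfile/dbtest.key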
necroface (139 rep)
Aug 2, 2019, 03:12 AM • Last activity: Aug 6, 2025, 10:13 PM
0 votes
2 answers
484 views
server trigger for Always On failover
I am working on a solution for a quite specific problem in environments running SQL Server Always On with two replicas and SSRS in scale-out: each node has SQL Server and SSRS installed, and when there is a failover of SQL Server, one account is removed from the SSRS scale-out. For that reason I need to apply a workaround that I already have available; the question now is finding the exact moment to apply it. I have two options:

1. Create a SQL Agent job that runs every x minutes and applies the fix
2. Find a way to capture the failover event and apply the fix

Has anyone created a SQL Server trigger (or equivalent) to capture Always On failovers?
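One common pattern for option 2 is not a DDL trigger but a SQL Agent alert on error 1480 (the availability-replica role-change message), which then runs the fix-up job; a minimal sketch with placeholder names:

-- Alert on the AG role-change error and run the remediation job.
EXEC msdb.dbo.sp_add_alert
     @name = N'AG role change (error 1480)',
     @message_id = 1480,
     @severity = 0,
     @enabled = 1,
     @job_name = N'Fix SSRS scale-out';   -- pre-existing job, placeholder name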
Carlos DBA (1 rep)
Sep 15, 2022, 09:35 PM • Last activity: Aug 6, 2025, 08:05 PM
1 vote
1 answer
2362 views
Oracle - Delete dmp files from ASM diskgroup
I want to delete my export files daily from an ASM diskgroup with a crontab job, so I want to prepare a script for it.

ASMCMD> ls
exp1.dmp
exp2.dmp
exp3.dmp
exp4.dmp
ASMCMD> pwd
+DATA/EXP

How can I prepare this script? I prepared a template for it, but I couldn't script it.
set_gridenv
asmcmd
cd +DATA/EXP
rm -rf exp*.dmp
exit
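A non-interactive variant that cron can run directly; asmcmd accepts the command as arguments, so no interactive session is needed. The grid user, ORACLE_SID and ORACLE_HOME values are assumptions for a typical Grid Infrastructure install:

#!/bin/bash
# Runs under the grid user from crontab; adjust SID/home to the environment.
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/19.0.0/grid
export PATH=$ORACLE_HOME/bin:$PATH

# asmcmd rm accepts wildcards; quote so the shell doesn't expand them.
asmcmd rm '+DATA/EXP/exp*.dmp'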
jrdba (55 rep)
Sep 21, 2020, 10:45 AM • Last activity: Aug 6, 2025, 07:08 PM
4 votes
1 answer
60 views
SQL Server Estimates don't use AVG_RANGE_ROWS for Uniqueidentifer Parameter
I'm trying to debug a very weird query row estimation. The query is very simple. I have a table `OrderItem` that contains, for each order (column `OrderId`), the items of that order.
SELECT count(*)
FROM orders.OrderItem 
WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1'
According to the statistics from IX_OrderItem_FK_OrderId (a normal unfiltered foreign key index: CREATE INDEX IX_OrderItem_FK_OrderId ON orders.OrderItem(OrderId)), the density is 1.2620972E-06 with 7423048 rows, so about ~9.3 items per order (ignoring the items with OrderId = NULL; including them, there are even fewer). The statistics were created WITH FULLSCAN and are only slightly out of date (around ~0.2% new rows since the last recompute).

| Name | Updated | Rows | Rows Sampled | Steps | Density | Average key length | String Index | Filter Expression | Unfiltered Rows | Persisted Sample Percent |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IX_OrderItem_FK_OrderId | Aug 3 2025 4:36PM | 7423048 | 7423048 | 198 | 0.1649756 | 26.443027 | "NO " | NULL | 7423048 | 100 |

| All density | Average Length | Columns |
| --- | --- | --- |
| 1.2620972E-06 | 10.443027 | OrderId |
| 1.3471555E-07 | 26.443027 | OrderId, Id |

The query plan, however, expects the query to return 205.496 items. And in reality there are 0 results, because the OrderId doesn't exist.

Detailed query plan: https://www.brentozar.com/pastetheplan/?id=hVKYNLmXSU

It probably uses the histogram to come up with the estimate. The value should fall into the following bucket, with RANGE_HI_KEY = 'a39932d8-aa2c-f011-8b3d-000d3a440098'. But that estimate should then be 6.87, according to AVG_RANGE_ROWS. It somehow looks like it uses the EQ_ROWS of the previous bucket (though 205 might also just be coincidence).

| RANGE_HI_KEY | RANGE_ROWS | EQ_ROWS | DISTINCT_RANGE_ROWS | AVG_RANGE_ROWS |
| --- | --- | --- | --- | --- |
| 9d2e2bea-aa6e-f011-8dca-000d3a3aa5e1 | 12889 | 205 | 2412 | 5.343698 |
| a39932d8-aa2c-f011-8b3d-000d3a440098 | 21923 | 107 | 3191 | 6.8702602 |

OPTION(RECOMPILE) does not help.

Can somebody explain how SQL Server (in particular, Azure SQL) comes up with that number?

- Does it really think the parameter is close enough to the bucket start and just take the EQ_ROWS value, even though AVG_RANGE_ROWS is a lot smaller?
- Does it misinterpret the parameter because it's passed as VARCHAR? If I replace it with DECLARE @OrderId UNIQUEIDENTIFIER = '5a7e...'; WHERE OrderId = @OrderId, the estimate drops to 6. But if that's the reason, where does the estimate of 205 come from?
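A small repro can make the type-conversion effect from the second bullet visible side by side; a sketch against the same table, where the string-to-GUID conversion is the assumed suspect rather than a confirmed cause:

-- 1) String literal: the VARCHAR value must be converted to UNIQUEIDENTIFIER,
--    which can change how the histogram is consulted.
SELECT COUNT(*) FROM orders.OrderItem
WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1'
OPTION (RECOMPILE);

-- 2) Explicitly typed value: same GUID, no implicit conversion; compare the
--    estimated rows on the index seek in both plans.
SELECT COUNT(*) FROM orders.OrderItem
WHERE OrderId = CONVERT(uniqueidentifier, '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1')
OPTION (RECOMPILE);

-- Optional cardinality-estimator diagnostics (undocumented trace flags;
-- details appear in the Messages tab):
-- OPTION (RECOMPILE, QUERYTRACEON 3604, QUERYTRACEON 2363);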
Jakube (141 rep)
Aug 5, 2025, 04:53 PM • Last activity: Aug 6, 2025, 06:33 PM