
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
192 views
@OneToMany mapping but only one is needed
I have to create a new table to store the version numbers of the Terms & Conditions a user has accepted and the timestamp of when they clicked accept. (This table only contains info related to the user; info about the T&C version itself is stored in another table.) That means user A could have accepted 3 versions of T&Cs that have been updated over time. The only version I really care about is the latest one the user has agreed to, but I still need to keep the history of accepted versions for integrity's sake. Scenario:

> 1. I will already have the user entry in the database before the version records get stored.
> 2. There can be users that have not agreed to any version of the T&C.
> 3. One user may have accepted several versions of the T&C over time.
> 4. I want to retrieve only the latest version the user has accepted, along with the user object, in as few steps as possible.
> 5. If possible, when I delete the user I would like ALL accepted version records for that user to be deleted as well (foreign key with cascade on delete or something).

How can I implement a clean solution for this? I have come up with two possibilities, but I'm not sure whether they are feasible.

Solution 1: a unidirectional @OneToOne relationship, with User keeping the FK reference to the latest T&C record the user has agreed to. The T&C record keeps user_id as a non-primary, non-unique FK field.

```
@Entity
@Table(name = "user")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "user_id")
    Long id;

    @Column(name = "username")
    private String username;

    .........

    @JoinColumn(name = "terms_record_id")
    @OneToOne(cascade = CascadeType.ALL)
    private TermsRecord termsRecord; // joined on terms_record_id to get only the latest version accepted
}
```

And the entity to record the versions the user has accepted:

```
@Entity
@Table(name = "terms_record")
public class TermsRecord {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "terms_record_id")
    private Long id;

    @Column(name = "agreed_on")
    private Date agreedOn;

    @Column(name = "version_no")
    private String versionNo;

    // I'm not sure if it needs to have userId in here as well?
    @Column(name = "user_id")
    private Long userId;

    ........
}
```

And in the DB:

```
create table user (
    user_id bigint PRIMARY KEY,
    ...
    terms_record_id bigint,
    FOREIGN KEY (terms_record_id) REFERENCES terms_record(terms_record_id) -- to refer to the most updated version
);

create table terms_record (
    terms_record_id bigint NOT NULL,
    user_id bigint,
    PRIMARY KEY (terms_record_id),
    FOREIGN KEY (user_id) REFERENCES user(user_id) ON DELETE CASCADE -- to keep the records of terms the user has agreed to
);
```

In the code, it will be something like:

```
User userA = getUser(user_id);
TermsRecord tr = new TermsRecord();
tr.setVersionNo("1.0.5");
tr.setAgreedOn(....);
userA.setTermsRecord(tr);
save(userA);
```

This way, if user A had accepted terms version 1.0.3 before, that entry is not deleted from the terms_record table, but a different versionNo is pulled when you retrieve userA from the database. E.g. userA.getTermsRecord().getVersionNo() == "1.0.5", and in the terms_record table:

```
terms_record_id | version_no
-----------------------------
   ..........   | .........
       3        | 1.0.3
       5        | 1.0.5
   ........     | ..........
```

Solution 2: a unidirectional @OneToMany annotation in the User class, with the foreign key user_id stored in TermsRecord and cascade on delete. Then retrieve the latest version manually.
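For Solution 2's "retrieve the latest version manually" step, here is a minimal sketch, assuming the TermsRecord mapping above with its userId and agreedOn fields (the class and method names are otherwise hypothetical):

```
import javax.persistence.EntityManager;
import java.util.Optional;

// Solution 2 retrieval sketch: the full history stays in terms_record,
// and only the newest accepted version is fetched.
public class TermsRecordDao {
    public static Optional<TermsRecord> latestAccepted(EntityManager em, Long userId) {
        return em.createQuery(
                "select tr from TermsRecord tr "
              + "where tr.userId = :userId "
              + "order by tr.agreedOn desc", TermsRecord.class)
            .setParameter("userId", userId)
            .setMaxResults(1)               // LIMIT 1: only the latest version
            .getResultList()
            .stream().findFirst();          // empty => user never accepted anything
    }
}
```

With an index on (user_id, agreed_on) this stays a single cheap lookup however long the history grows, and the ON DELETE CASCADE foreign key still covers requirement 5.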
user59290 (11 rep)
Jul 16, 2018, 04:27 PM • Last activity: Jul 18, 2025, 06:03 AM
0 votes
1 answer
148 views
Prevent failure in conditional insert in mysql database
The infrastructure of our system looks like this: an AWS Lambda function receives requests such as (accountId, .....) and creates an entry in the MySQL database using a newly generated UUID as caseId: (caseId, accountId, ....). The insert is a conditional insert operation, discussed in detail below. I am able to avoid the race condition by setting the transaction isolation to SERIALIZABLE. However, the issue is that I do not have any control over how many concurrent requests will be successfully processed. For example, consider the following concurrent requests:
request  | accountId | field1 | ...  
 1         a1          value1   ....   true     --- create a new entry with caseId Idxxx
 2         a1          value2   ....   false    --- update existing entry with caseId Idxxx 
 3         a1          value3   ....   false    --- update existing entry with caseId Idxxx 
 4         a1          value4   ....   false    --- update existing entry with caseId Idxxx
With our current implementation we are getting CannotAcquireLockException. What are the ways in which I can avoid these retry failures (CannotAcquireLockException)? The detailed table schema and conditions are described below. The database is a MySQL system with the following table schema.
Table1: case table

|caseId(PK) | accountId | status |  .....

Table2: case reopen table

|caseId(FK)| casereopenId(PK)| caseReopenTime|

Table3: Alert table

Id (incrementing id) | alertId | accountId |
The Lambda function tries to "create" a case in the database. The create wrapper generates a UUID for caseId. The goal is:

- check if an accountId already exists in the case table
- if it does, then
  - check if status is OPEN
  - get the caseId for the accountId
  - check if the caseId is present in the case reopen table
- if the above condition is false, then add an entry into the case table.

Thanks!
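One way to avoid the SERIALIZABLE-induced lock failures altogether is to let a uniqueness guarantee arbitrate the race instead of locking. A sketch under stated assumptions: an auxiliary table with a PRIMARY/UNIQUE KEY on accountId for open cases, e.g. `CREATE TABLE open_case (accountId VARCHAR(64) PRIMARY KEY, caseId CHAR(36))` — all table and column names here are hypothetical.

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Exactly one concurrent request wins the insert; the rest see 0 affected
// rows and take the update path. Works under plain READ COMMITTED.
public class CaseCreator {
    public static boolean tryCreate(Connection conn, String accountId, String caseId)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT IGNORE INTO open_case (accountId, caseId) VALUES (?, ?)")) {
            ps.setString(1, accountId);
            ps.setString(2, caseId);
            // 1 row: this request won and should insert the full case row;
            // 0 rows: another request won, so look up its caseId and update it.
            return ps.executeUpdate() == 1;
        }
    }
}
```

The unique key, not the isolation level, serializes the "create" decision, so SERIALIZABLE and its lock-acquisition failures can be dropped; the reopen check can then gate when the open_case row is deleted.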
Swagatika (101 rep)
Feb 13, 2020, 01:10 AM • Last activity: Jul 17, 2025, 08:04 PM
1 vote
0 answers
21 views
Oracle Parametrized Hibernate Queries
I am sending a parametrized named query in Java using Hibernate. Locally, the app returns in milliseconds, and so does Toad. On UAT, the same query called through the app's API takes up to 5 minutes to return results, while running it in Toad against UAT takes milliseconds.
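Two classic suspects match these symptoms (fast with literals in Toad, slow only through JDBC binds): bind peeking, where the bound statement reuses a plan built for unrepresentative values, and NVARCHAR binds, where Java strings sent as NVARCHAR against VARCHAR2 columns force an implicit conversion that disables index use. The latter is cheap to rule out; a hedged sketch (URL and credentials are placeholders):

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Diagnostic sketch: force VARCHAR2 binds so the NVARCHAR-conversion
// hypothesis can be confirmed or eliminated on UAT.
public class OracleConnect {
    public static Connection open() throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app");
        props.setProperty("password", "secret");
        props.setProperty("oracle.jdbc.defaultNChar", "false"); // bind String as VARCHAR2
        return DriverManager.getConnection("jdbc:oracle:thin:@//uat-host:1521/APPDB", props);
    }
}
```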
neameh baydoun (11 rep)
Jul 11, 2025, 09:20 AM • Last activity: Jul 11, 2025, 11:26 AM
0 votes
0 answers
16 views
Can we update an entity in @PrePersist of another entity
Currently I have two entities (Employee, Company), and Employee has a many-to-one relation to Company. Company has a field lastPersonJoined, and on each Employee insertion, lastPersonJoined in the Company entity is updated. This can be implemented in the service layer as well, but I really want to understand: can we do this in @PrePersist or @PostPersist? When I do the former I don't face any errors, but it's not committing to the DB. Correct me if I am wrong: I think since Company is not a managed entity, it's not saving to the DB.

```
@Entity
public class Employee {
    ...
    @Column(name = "name")
    public String name;

    @ManyToOne(fetch = FetchType.LAZY, optional = false,
               cascade = {CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH})
    @JoinColumn(name = "id", referencedColumnName = "id", nullable = false)
    @ToString.Exclude
    private Company company;

    @PrePersist
    public void prePersist() {
        company.setLastManJoined(name);
    }
}
```
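The suspicion is on the right track, and the JPA spec backs it up: portable applications should not access other entity instances or invoke EntityManager operations from lifecycle callbacks, so whether a change made in @PrePersist/@PostPersist ever reaches the database is undefined rather than guaranteed. A service-layer sketch (names adapted from the question; Spring's @Transactional assumed; use javax.persistence on older stacks):

```
import jakarta.persistence.EntityManager;
import org.springframework.transaction.annotation.Transactional;

// Both entities are managed by the same persistence context here, so the
// Company change is dirty-checked and flushed together with the insert.
public class EmployeeService {
    private final EntityManager em;

    public EmployeeService(EntityManager em) { this.em = em; }

    @Transactional
    public void hire(Employee employee) {
        Company company = em.find(Company.class, employee.getCompany().getId());
        company.setLastPersonJoined(employee.getName()); // managed: flushed at commit
        em.persist(employee);
    }
}
```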
gcs (1 rep)
May 14, 2025, 04:44 PM • Last activity: May 14, 2025, 04:50 PM
1 vote
1 answer
3151 views
AWS RDS mysql deadlock
Everything in our production environment was working well. Yesterday, suddenly out of nowhere, we were getting "lock wait" on any transaction related to a specific row. The same transaction on another row worked when the lock was removed, but editing that one row caused the entire table to lock up. Now, another problem is that the DB doesn't return all rows at times, e.g. it returns 1 instead of 10. We are using Java + Hibernate. Like I said, there were no problems whatsoever; this is odd. I followed this https://mysql.wisborg.dk/2017/05/30/innodb-locks-analysis-why-is-blocking-query-null-and-how-to-find-more-information-about-the-blocking-transaction/ but still, the blocking transaction shows trx_query as NULL. When running select * from information_schema.innodb_trx; many transactions are null in the query column.

SHOW ENGINE INNODB STATUS; --------------------------------------- ===================================== 2019-08-02 21:24:39 2add8607e700 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 57 seconds ----------------- BACKGROUND THREAD ----------------- srv_master_thread loops: 3498 srv_active, 0 srv_shutdown, 61234 srv_idle srv_master_thread log flush and writes: 64732 ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 22165 OS WAIT ARRAY INFO: signal count 118693 Mutex spin waits 269941, rounds 833090, OS waits 9670 RW-shared spins 25736, rounds 372709, OS waits 6243 RW-excl spins 29187, rounds 404172, OS waits 5955 Spin rounds per wait: 3.09 mutex, 14.48 RW-shared, 13.85 RW-excl ------------------------ LATEST DETECTED DEADLOCK ------------------------ 2019-08-02 16:29:28 2adf53544700 *** (1) TRANSACTION: TRANSACTION 22187462, ACTIVE 0 sec starting index read mysql tables in use 1, locked 1 LOCK WAIT 6 lock struct(s), heap size 1184, 2 row lock(s), undo log entries 2 MySQL thread id 26615, OS thread handle 0x2adecc1de700, query id 1141023 172.31.8.177 dbuser updating /* update com.catchapp.server.persistence.user.User */ update user set cloudkit_id=null, created='2019-08-02 16:20:55', entity_status='active', unique_id='48df7cf7bffe474d85787bddf8425189', updated='2019-08-02 16:29:28', address_street_1=null, address_street_2=null, api_version='3.1', bio=null, city=null, country=null, distance_unit='mile', dob='1994-09-03', email='souzasire@gmail.com', email_sent=0, external_code='WHSPGI', facebook_id=null, facebook_token=null, first_name='Renata', gender='female', google_access_token=null, google_id=null, google_refresh_token=null, hometown=null, image_url='Image', last_name='Souza', membership='basic', needs_password_change=0, new_user=1, num_completed_events=0, password='{SSHA}4wKS9ZG7hhhpbxQRtsi6zVyERS0/bRHMCXpuLA==', phone=null, phone_normalized=null, role_flag=1, smoker=null, state=null, time *** (1) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 1380 page no 790 n bits 104 index PRIMARY of table catchappprod.user trx id 22187462 lock_mode X locks rec but not gap waiting Record lock, heap no 36 PHYSICAL RECORD: n_fields 46; compact format; info bits 0 0: len 30; hex 326339663838333136633531353832393031366335333232356133303032; asc 2c9f88316c515829016c53225a3002; (total 32 bytes); 1: len 6; hex 000001528dbd; asc R ;; 2: len 7; hex 420000015204cb; asc B R ;; 3: SQL NULL; 4: len 5; hex 99a3c50537; asc 7;; 5: len 6; hex 616374697665; asc active;; 6: len 30; hex 343864663763663762666665343734643835373837626464663834323531; asc 48df7cf7bffe474d85787bddf84251; (total 32 bytes); 7: len 5; hex
99a3c50537; asc 7;; 8: SQL NULL; 9: SQL NULL; 10: SQL NULL; 11: SQL NULL; 12: SQL NULL; 13: len 4; hex 6d696c65; asc mile;; 14: len 3; hex 8f9523; asc #;; 15: len 19; hex 736f757a617369726540676d61696c2e636f6d; asc souzasire@gmail.com;; 16: len 6; hex 574853504749; asc WHSPGI;; 17: SQL NULL; 18: SQL NULL; 19: len 6; hex 52656e617461; asc Renata;; 20: len 6; hex 66656d616c65; asc female;; 21: SQL NULL; 22: SQL NULL; 23: SQL NULL; 24: SQL NULL; 25: len 30; hex 687474703a2f2f63617463686170702d70726f642e73332e616d617a6f6e; asc http://catchapp-prod.s3.amazon ; (total 82 bytes); 26: len 5; hex 536f757a61; asc Souza;; 27: len 4; hex 80000000; asc ;; 28: len 4; hex 80000000; asc ;; 29: len 30; hex 7b535348417d34774b53395a47376868687062785152747369367a567945; asc {SSHA}4wKS9ZG7hhhpbxQRtsi6zVyE; (total 46 bytes); 30: SQL NULL; 31: SQL NULL; 32: len 4; hex 80000001; asc ;; 33: SQL NULL; 34: SQL NULL; 35: len 15; hex 416d65726963612f4368696361676f; asc America/Chicago;; 36: len 9; hex 536f757a6173697265; asc Souzasire;; 37: len 5; hex 3437353833; asc 47583;; 38: len 4; hex 80000001; asc ;; 39: SQL NULL; 40: SQL NULL; 41: len 1; hex 31; asc 1;; 42: len 4; hex 80000000; asc ;; 43: len 1; hex 81; asc ;; 44: len 5; hex 6261736963; asc basic;; 45: len 6; hex 5055424c4943; asc PUBLIC;; *** (2) TRANSACTION: TRANSACTION 22187459, ACTIVE 0 sec starting index read mysql tables in use 1, locked 1 6 lock struct(s), heap size 1184, 2 row lock(s), undo log entries 2 MySQL thread id 26616, OS thread handle 0x2adf53544700, query id 1141027 172.31.8.177 dbuser updating /* update com.catchapp.server.persistence.user.User */ update user set cloudkit_id=null, created='2019-08-02 16:20:55', entity_status='active', unique_id='48df7cf7bffe474d85787bddf8425189', updated='2019-08-02 16:29:28', address_street_1=null, address_street_2=null, api_version='3.1', bio=null, city=null, country=null, distance_unit='mile', dob='1994-09-03', email='souzasire@gmail.com', email_sent=0, external_code='WHSPGI', facebook_id=null, facebook_token=null, first_name='Renata', gender='female', google_access_token=null, google_id=null, google_refresh_token=null, hometown=null, image_url='Image', last_name='Souza', membership='basic', needs_password_change=0, new_user=1, num_completed_events=0, password='{SSHA}4wKS9ZG7hhhpbxQRtsi6zVyERS0/bRHMCXpuLA==', phone=null, phone_normalized=null, role_flag=1, smoker=null, state=null, time *** (2) HOLDS THE LOCK(S): RECORD LOCKS space id 1380 page no 790 n bits 104 index PRIMARY of table catchappprod.user trx id 22187459 lock mode S locks rec but not gap Record lock, heap no 36 PHYSICAL RECORD: n_fields 46; compact format; info bits 0 0: len 30; hex 326339663838333136633531353832393031366335333232356133303032; asc 2c9f88316c515829016c53225a3002; (total 32 bytes); 1: len 6; hex 000001528dbd; asc R ;; 2: len 7; hex 420000015204cb; asc B R ;; 3: SQL NULL; 4: len 5; hex 99a3c50537; asc 7;; 5: len 6; hex 616374697665; asc active;; 6: len 30; hex 343864663763663762666665343734643835373837626464663834323531; asc 48df7cf7bffe474d85787bddf84251; (total 32 bytes); 7: len 5; hex 99a3c50537; asc 7;; 8: SQL NULL; 9: SQL NULL; 10: SQL NULL; 11: SQL NULL; 12: SQL NULL; 13: len 4; hex 6d696c65; asc mile;; 14: len 3; hex 8f9523; asc #;; 15: len 19; hex 736f757a617369726540676d61696c2e636f6d; asc souzasire@gmail.com;; 16: len 6; hex 574853504749; asc WHSPGI;; 17: SQL NULL; 18: SQL NULL; 19: len 6; hex 52656e617461; asc Renata;; 20: len 6; hex 66656d616c65; asc female;; 21: SQL NULL; 22: SQL NULL; 23: SQL NULL; 24: SQL NULL; 25: 
len 30; hex 687474703a2f2f63617463686170702d70726f642e73332e616d617a6f6e; asc http://catchapp-prod.s3.amazon ; (total 82 bytes); 26: len 5; hex 536f757a61; asc Souza;; 27: len 4; hex 80000000; asc ;; 28: len 4; hex 80000000; asc ;; 29: len 30; hex 7b535348417d34774b53395a47376868687062785152747369367a567945; asc {SSHA}4wKS9ZG7hhhpbxQRtsi6zVyE; (total 46 bytes); 30: SQL NULL; 31: SQL NULL; 32: len 4; hex 80000001; asc ;; 33: SQL NULL; 34: SQL NULL; 35: len 15; hex 416d65726963612f4368696361676f; asc America/Chicago;; 36: len 9; hex 536f757a6173697265; asc Souzasire;; 37: len 5; hex 3437353833; asc 47583;; 38: len 4; hex 80000001; asc ;; 39: SQL NULL; 40: SQL NULL; 41: len 1; hex 31; asc 1;; 42: len 4; hex 80000000; asc ;; 43: len 1; hex 81; asc ;; 44: len 5; hex 6261736963; asc basic;; 45: len 6; hex 5055424c4943; asc PUBLIC;; *** (2) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 1380 page no 790 n bits 104 index PRIMARY of table catchappprod.user trx id 22187459 lock_mode X locks rec but not gap waiting Record lock, heap no 36 PHYSICAL RECORD: n_fields 46; compact format; info bits 0 0: len 30; hex 326339663838333136633531353832393031366335333232356133303032; asc 2c9f88316c515829016c53225a3002; (total 32 bytes); 1: len 6; hex 000001528dbd; asc R ;; 2: len 7; hex 420000015204cb; asc B R ;; 3: SQL NULL; 4: len 5; hex 99a3c50537; asc 7;; 5: len 6; hex 616374697665; asc active;; 6: len 30; hex 343864663763663762666665343734643835373837626464663834323531; asc 48df7cf7bffe474d85787bddf84251; (total 32 bytes); 7: len 5; hex 99a3c50537; asc 7;; 8: SQL NULL; 9: SQL NULL; 10: SQL NULL; 11: SQL NULL; 12: SQL NULL; 13: len 4; hex 6d696c65; asc mile;; 14: len 3; hex 8f9523; asc #;; 15: len 19; hex 736f757a617369726540676d61696c2e636f6d; asc souzasire@gmail.com;; 16: len 6; hex 574853504749; asc WHSPGI;; 17: SQL NULL; 18: SQL NULL; 19: len 6; hex 52656e617461; asc Renata;; 20: len 6; hex 66656d616c65; asc female;; 21: SQL NULL; 22: SQL NULL; 23: SQL NULL; 24: SQL NULL; 25: len 30; hex 687474703a2f2f63617463686170702d70726f642e73332e616d617a6f6e; asc http://catchapp-prod.s3.amazon ; (total 82 bytes); 26: len 5; hex 536f757a61; asc Souza;; 27: len 4; hex 80000000; asc ;; 28: len 4; hex 80000000; asc ;; 29: len 30; hex 7b535348417d34774b53395a47376868687062785152747369367a567945; asc {SSHA}4wKS9ZG7hhhpbxQRtsi6zVyE; (total 46 bytes); 30: SQL NULL; 31: SQL NULL; 32: len 4; hex 80000001; asc ;; 33: SQL NULL; 34: SQL NULL; 35: len 15; hex 416d65726963612f4368696361676f; asc America/Chicago;; 36: len 9; hex 536f757a6173697265; asc Souzasire;; 37: len 5; hex 3437353833; asc 47583;; 38: len 4; hex 80000001; asc ;; 39: SQL NULL; 40: SQL NULL; 41: len 1; hex 31; asc 1;; 42: len 4; hex 80000000; asc ;; 43: len 1; hex 81; asc ;; 44: len 5; hex 6261736963; asc basic;; 45: len 6; hex 5055424c4943; asc PUBLIC;; *** WE ROLL BACK TRANSACTION (2) ------------ TRANSACTIONS ------------ Trx id counter 22211040 Purge done for trx's n:o = 22211040, sees = 22211039, sees = 22211038, sees = 22210954, sees = 22210909, sees = 22210502, sees = 22210002, sees = 22209497, sees = 22209481, sees = 22209430, sees = 22209378, sees = 22208805, sees = 22208534, sees = 22208451, sees = 22208366, sees = 22208236, sees = 22208141, sees = 22207536, sees = 22206743, sees = 22206623, sees = 22205264, sees = 22205261, sees = 22204497, sees = 22204482, sees = 22203903, sees < 22198567 -------- FILE I/O -------- I/O thread 0 state: waiting for i/o request (insert buffer thread) I/O thread 1 state: waiting for i/o request (log thread) 
I/O thread 2 state: waiting for i/o request (read thread) I/O thread 3 state: waiting for i/o request (read thread) I/O thread 4 state: waiting for i/o request (read thread) I/O thread 5 state: waiting for i/o request (read thread) I/O thread 6 state: waiting for i/o request (write thread) I/O thread 7 state: waiting for i/o request (write thread) I/O thread 8 state: waiting for i/o request (write thread) I/O thread 9 state: waiting for i/o request (write thread) I/O thread 10 state: waiting for i/o request (write thread) I/O thread 11 state: waiting for i/o request (write thread) I/O thread 12 state: waiting for i/o request (write thread) I/O thread 13 state: waiting for i/o request (write thread) I/O thread 14 state: waiting for i/o request (write thread) I/O thread 15 state: waiting for i/o request (write thread) I/O thread 16 state: waiting for i/o request (write thread) I/O thread 17 state: waiting for i/o request (write thread) I/O thread 18 state: waiting for i/o request (write thread) I/O thread 19 state: waiting for i/o request (write thread) I/O thread 20 state: waiting for i/o request (write thread) I/O thread 21 state: waiting for i/o request (write thread) Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 0; buffer pool: 0 29552 OS file reads, 175874 OS file writes, 98491 OS fsyncs 0.02 reads/s, 16384 avg bytes/read, 2.47 writes/s, 1.67 fsyncs/s ------------------------------------- INSERT BUFFER AND ADAPTIVE HASH INDEX ------------------------------------- Ibuf: size 1, free list len 48746, seg size 48748, 20592 merges merged operations: insert 841, delete mark 3317352, delete 0 discarded operations: insert 0, delete mark 0, delete 0 Hash table size 12119119, node heap has 442 buffer(s) 909.30 hash searches/s, 351.57 non-hash searches/s --- LOG --- Log sequence number 4441306931 Log flushed up to 4441306098 Pages flushed up to 4441306098 Last checkpoint at 4441306098 0 pending log writes, 0 pending chkp writes 18712 log i/o's done, 0.28 log i/o's/second ---------------------- BUFFER POOL AND MEMORY ---------------------- Total memory allocated 6267207680; in additional pool allocated 0 Dictionary memory allocated 822706 Buffer pool size 373760 Free buffers 252854 Database pages 120464 Old database pages 44620 Modified db pages 4 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 4184, not young 0 0.00 youngs/s, 0.00 non-youngs/s Pages read 54318, created 66146, written 348418 0.02 reads/s, 0.00 creates/s, 1.84 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 120464, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---------------------- INDIVIDUAL BUFFER POOL INFO ---------------------- ---BUFFER POOL 0 Buffer pool size 46720 Free buffers 31671 Database pages 14992 Old database pages 5552 Modified db pages 2 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 568, not young 0 0.00 youngs/s, 0.00 non-youngs/s Pages read 6678, created 8314, written 49253 0.00 reads/s, 0.00 creates/s, 0.47 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 14992, unzip_LRU len: 0 I/O sum:cur, unzip sum:cur ---BUFFER POOL 1 Buffer pool size 46720 Free buffers 31761 
Database pages 14901 Old database pages 5519 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 486, not young 0 0.00 youngs/s, 0.00 non-youngs/s Pages read 6755, created 8146, written 42935 0.00 reads/s, 0.00 creates/s, 0.18 writes/s Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s ... ---------------------------- END OF INNODB MONITOR OUTPUT ============================

I added more RAM to the instance to no avail. I'm using a custom parameter group, and set the I/O threads to 16 (the default was 4). This did not help.

```
public ChatMessage addMessage(Chat chat, User user, String message, MessageType type)
        throws CatchappRdbException {
    ChatMessage chatMessage = new ChatMessage();
    chatMessage.setChat(chat);
    chatMessage.setMessage(message);
    chatMessage.setUser(user);
    chatMessage.setType(type);
    chat.getMessages().add(chatMessage);
    chat.setLast_message(new Date());
    Session session = getSession();
    try {
        // session.saveOrUpdate(chat);
        session.saveOrUpdate(chatMessage);
    } catch (HibernateException he) {
        throw new CatchappRdbException(he);
    } finally {
        flushSession(session);
    }
    return chatMessage;
}
```

This works for any other row. When running it for the one particular row, it seems like a few tables get locked. I found the transaction causing the block, but its trx_query is null. I wonder if it's a max-open-files issue, or Hibernate trying to access some file that is taking forever, but I couldn't find anything. I'm wondering if paid AWS support can help with this.
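Separate from the root-cause hunt, the deadlock itself is retriable by design: InnoDB picks one victim and rolls it back ("WE ROLL BACK TRANSACTION (2)" above), so wrapping the operation in a bounded retry usually removes the user-visible failure. A generic sketch (runInNewTransaction is a hypothetical wrapper around the session/transaction handling already shown):

```
import org.hibernate.exception.LockAcquisitionException;
import java.util.function.Supplier;

// Bounded retry for deadlock victims; the whole transaction must be
// re-run, not just the last statement.
public class Retry {
    public static <T> T withDeadlockRetry(Supplier<T> runInNewTransaction) {
        int attempts = 0;
        while (true) {
            try {
                return runInNewTransaction.get();
            } catch (LockAcquisitionException e) { // Hibernate's mapping of lock/deadlock errors
                if (++attempts >= 3) throw e;
            }
        }
    }
}
```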
franklinexpress (111 rep)
Aug 3, 2019, 12:01 AM • Last activity: Apr 4, 2025, 10:04 PM
0 votes
1 answer
1372 views
IF @@trancount > 0 COMMIT TRANSACTION causing blocking
We have a Java application with Hibernate connecting to SQL Server. It looks like implicit transactions are on by default; I am not sure if this comes from Hibernate or from the JDBC driver. The implicit transactions are causing a lot of blocking. Whenever I check, the head of the blocking chain is:

set implicit_transactions off IF @@TRANCOUNT > 0 COMMIT TRAN

What can I do from the database side to avoid this problem? This is the query I got from sp_whoisactive / sp_BlitzFirst, together with an implicit-transaction warning. I am not able to capture the whole transaction.
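From the database side there is little to switch off: that set implicit_transactions off / IF @@TRANCOUNT > 0 COMMIT TRAN batch is typically what the Microsoft JDBC driver emits while toggling autocommit around each Hibernate transaction, so the lever is in the application's connection handling. A hedged sketch, assuming HikariCP as the pool and Hibernate 5.2.10+ (URL and credentials are placeholders):

```
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// If the pool hands out connections with autocommit already off, and
// Hibernate is told so, the per-transaction autocommit toggling that
// produces these batches goes away.
public class PoolSetup {
    public static HikariDataSource dataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:sqlserver://dbhost;databaseName=app"); // placeholder
        cfg.setUsername("app");
        cfg.setPassword("secret");
        cfg.setAutoCommit(false); // connections start in manual-commit mode
        return new HikariDataSource(cfg);
    }
    // plus, in the Hibernate settings:
    // hibernate.connection.provider_disables_autocommit=true
}
```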
sebeid (1415 rep)
Oct 16, 2020, 01:50 PM • Last activity: Jan 11, 2025, 01:15 PM
1 vote
1 answer
1512 views
how to call lo_unlink(lo) in batches
In **Postgres 9.5** I am trying to reproduce the Postgres tool vacuumlo to get rid of unreferenced large objects and free disk space. The references are stored in **text**-type columns by Hibernate, so using the tool as-is is not an option. Looking at the vacuumlo sources ( vacuumlo.c ) I have created a *vacuum_l* table like so:

CREATE TABLE vacuum_l AS SELECT oid AS lo, false AS deleted FROM pg_largeobject_metadata;

and proceeded to delete from the *vacuum_l* table all rows whose OIDs are referenced by user tables in my database. Now it is safe to call *lo_unlink(lo)* for all items remaining in the *vacuum_l* table. Unfortunately the table is very big and, as reported in the vacuumlo.c source code:

>We don't want to run each delete as an individual transaction, because the commit overhead would be high. However, since 9.0 the backend will acquire a lock per deleted LO, so deleting too many LOs per transaction risks running out of room in the shared-memory lock table. Accordingly, we delete up to transaction_limit LOs per transaction.

So far I have tried to unlink all OIDs with this SQL function:

```
CREATE OR REPLACE FUNCTION unlinkOrphanedLobs() returns void AS $BODY$
DECLARE
    count_variable int;
begin
    loop
        SELECT COUNT(*) INTO count_variable FROM vacuum_l WHERE deleted = false;
        EXIT WHEN count_variable ...
```

Running it fails with:

> ERROR: out of shared memory
>
> HINT: You might need to increase max_locks_per_transaction.

Why is clearly explained by the code comments quoted above: this function won't close the transaction at the end of each loop iteration. What would be the correct way to do it? Thanks for any help.
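What vacuumlo itself does is exactly the missing piece here: it keeps transaction control on the client and commits every transaction_limit unlinks, something a plpgsql function cannot do because it always runs inside a single transaction. A plain-JDBC sketch of the same loop over the vacuum_l table (the batch size of 1000 is an arbitrary choice):

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Commit every batch so no single transaction holds more than 1000
// large-object locks in the shared-memory lock table.
public class OrphanUnlinker {
    public static void unlinkAll(Connection conn) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement pick = conn.prepareStatement(
                 "SELECT lo FROM vacuum_l WHERE NOT deleted LIMIT 1000");
             PreparedStatement unlink = conn.prepareStatement("SELECT lo_unlink(?::oid)");
             PreparedStatement mark = conn.prepareStatement(
                 "UPDATE vacuum_l SET deleted = true WHERE lo = ?")) {
            while (true) {
                List<Long> oids = new ArrayList<>();
                try (ResultSet rs = pick.executeQuery()) {
                    while (rs.next()) oids.add(rs.getLong(1));
                }
                if (oids.isEmpty()) break;
                for (long oid : oids) {
                    unlink.setLong(1, oid);
                    unlink.execute();
                    mark.setLong(1, oid);
                    mark.executeUpdate();
                }
                conn.commit(); // releases this batch's LO locks
            }
        }
    }
}
```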
Luky (121 rep)
Mar 22, 2021, 04:43 PM • Last activity: Jun 2, 2024, 09:07 PM
1 vote
0 answers
2905 views
SQLGrammarException: could not extract ResultSet; Snapshot isolation transaction failed in database
I am trying to reproduce and fix the recently caught exception: [jdbc.spi.SqlExceptionHelper] ? Snapshot isolation transaction failed in database 'config_db' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation. org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:280) ~[spring-orm-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:254) ~[spring-orm-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:531) ~[spring-orm-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:61) ~[spring-tx-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.dao.support.DataAccessUtils.translateIfNecessary(DataAccessUtils.java:242) ~[spring-tx-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:154) ~[spring-tx-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:140) ~[spring-data-jpa-1.11.23.RELEASE.jar:?] [stdout] at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$ExposeRepositoryInvocationInterceptor.invoke(CrudMethodMetadataPostProcessor.java:347) ~[spring-data-jpa-1.11.23.RELEASE.jar:?] [stdout] at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) ~[spring-aop-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:57) ~[spring-data-commons-1.13.23.RELEASE.jar:?] [stdout] at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.1.20.RELEASE.jar:5.1.20.RELEASE] [stdout] at com.sun.proxy.$Proxy280.findById(Unknown Source) ~[?:?] The generated query (**note:** I removed the column names intentionally) is run within read-only transaction that uses the readonlyTransactionManager which routes the query to SQL Server replica node. The used isolation level is Isolation.DEFAULT: select * from config_db.dbo.Program programdmo0_ left outer join config_db.dbo.SupplementalSection supplement1_ on programdmo0_.Id = supplement1_.ProgramId left outer join config_db.dbo.QuestionBlock questionbl2_ on supplement1_.Id = questionbl2_.SectionId left outer join config_db_db.dbo.Question questions3_ on questionbl2_.Id = questions3_.QuestionBlockId left outer join config_db_db.dbo.QuestionDetails questionde4_ on questions3_.Id = questionde4_.QuestionId where programdmo0_.Id = ? 
The DBA confirmed that the DB isolation level is set to READ COMMITTED. I would appreciate it if anyone could help me understand why this exception is thrown when the SNAPSHOT isolation level is not enabled.
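Two hedged checks. First, if the replica node is an Always On readable secondary, that alone can explain it: SQL Server runs read workloads on readable secondaries under snapshot isolation automatically, regardless of the isolation level the session requested, so a concurrent DDL change can trigger exactly this error even with a READ COMMITTED default. Second, the database's own snapshot settings are quick to confirm:

```
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Print the snapshot-related settings of the database named in the error.
public class SnapshotCheck {
    public static void print(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT snapshot_isolation_state_desc, is_read_committed_snapshot_on "
               + "FROM sys.databases WHERE name = 'config_db'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " / " + rs.getBoolean(2));
            }
        }
    }
}
```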
Eager (111 rep)
Jul 24, 2022, 08:00 PM • Last activity: Jul 25, 2022, 12:16 AM
14 votes
2 answers
23138 views
Get TEXT value of a CLOB OID in Postgresql
I have a database table that looks like:
create table answers(
   id int not null,
   question_id int not null,
   answer text null
)
This table was originally built by Hibernate using the @Lob attribute for the "answer" column. I did not realize it at the time, but when set up that way, Hibernate stores an OID in the column instead of the actual text. Everything works fine when I use Hibernate to retrieve the values, since it automatically converts the OID to the CLOB string; however, it is becoming a performance problem and I'd like to get rid of the OID.
select * from answers
ID     QUESTION_ID     ANSWER
===============================
1       123             55123
2       234             51614
3       345             56127
should be
ID     QUESTION_ID     ANSWER
===============================
1       123             Male
2       234             203-555-1212
3       345             555 Main St. New York, NY

My desire is to add an extra column "ANSWER_VALUE TEXT" to the table and do something like the statement below to get the actual values into the table, then change Hibernate around to no longer use the @Lob designator:
update answers set ANSWER_VALUE= getValueFromOID(ANSWER)
Does that "getValueFromOID" function exist? If not, could I get some pointers on how to create one or at least how to fetch the actual value of an OID? Thanks
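A hedged sketch of that getValueFromOID step, assuming PostgreSQL 9.4 or later (for lo_get) and UTF-8 content: the built-ins lo_get and convert_from do the whole conversion in one statement.

```
import java.sql.Connection;
import java.sql.Statement;

// ANSWER holds the OID as text, hence the ::oid cast; lo_get returns the
// large object's bytea, and convert_from turns it into text.
public class ClobMigration {
    public static int materialize(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            return st.executeUpdate(
                "UPDATE answers "
              + "SET answer_value = convert_from(lo_get(answer::oid), 'UTF8')");
        }
    }
}
```

The large objects themselves still exist afterwards; a vacuumlo pass (or lo_unlink per OID) reclaims the space once nothing references them.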
John P (411 rep)
Feb 2, 2015, 09:29 PM • Last activity: Jan 25, 2022, 01:08 PM
6 votes
5 answers
11351 views
Is OPTION (RECOMPILE) used in production environments?
Is OPTION (RECOMPILE) used in production? This option seems to get a lot of bad press. Is it deserved? I have a DBA who, so far, is not a fan of the idea of OPTION (RECOMPILE) within the meat of report-ETL SSIS agent queries. These queries are executed (to the best of my knowledge) sequentially and at scheduled intervals.

**Back history:**
- SQL Server 2016
- ETL queries that cause clustered index scans when run through the SSIS agent. These queries take minutes to complete and cause heavy impact.
- The same query and parameter run via a local stored procedure executes in less than a second.

**Wait, are you certain OPTION (RECOMPILE) is the answer?**
- Unknown.
- But I need to know whether this is a really bad idea before I try.

**The risks I'm aware of:**
- There have been at least two serious bugs in the past related to OPTION (RECOMPILE): two queries run at the same time could swap result sets. Ugh!! https://support.microsoft.com/en-us/topic/fix-rare-possibility-of-incorrect-results-when-you-use-option-recompile-for-queries-inside-a-procedure-in-sql-server-2014-or-sql-server-2012-c247fbb5-4125-dd0f-7789-7b1c126f8241

So given the above: is this option actually used in the real world? Is it acceptable that I recommend (and test) this as an option for a production environment?

-----------------

I was asked to provide more details. I mentioned that I do have other posts all related to this topic. Let me give more information on that:

- The root problem is that queries coming from an application server are taking longer than 60 seconds. Normally these queries take 4 to 10 seconds. Through a lot of pain, I've determined that the timeouts line up with the ETL queries; 4 queries out of 15, to be specific.
- A contributor to the problem is found within the application servers. Specifically, the isolation level is set to serializable within the Hibernate layer, which I have learned is not optimal for high-volume production environments.

Let me share the other questions:

https://dba.stackexchange.com/questions/280998/sql-server-can-i-surgically-remove-a-bad-cached-query-plan-or-am-i-chasing-the

https://dba.stackexchange.com/questions/289037/why-is-the-query-in-etl-via-ssis-slow-but-via-a-local-stored-procedure-it-is-fas
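On the contributor the update itself identifies, a hedged Spring-style sketch of pinning the application path to a less aggressive isolation level (the class and method names are hypothetical):

```
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Per-method override of the serializable default configured in the
// Hibernate layer; only this path changes, everything else keeps its default.
public class ReportService {
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void buildReport(long reportId) {
        // ... run the report queries here ...
    }
}
```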
D-K (543 rep)
Apr 21, 2021, 03:25 PM • Last activity: Jun 10, 2021, 07:58 PM
3 votes
0 answers
1052 views
Can a MySql stored procedure return multiple resultsets each containing different columns?
I have made a stored procedure that returns multiple resultsets from multiple tables, so each resultset is made up of different columns. These resultsets are retrieved by a ProcedureOutputs object from the Hibernate framework. The first resultset is retrieved with no problems; however, Hibernate tries to map the resultsets after that using the column set from the first resultset. This results in the error

2020-04-26 21:28:55.112 ERROR 22164 --- [nio-8080-exec-1] o.h.engine.jdbc.spi.SqlExceptionHelper : Column 'firstColumnFromFirstResultSet' not found.

(firstColumnFromFirstResultSet is an example here.) My question is: is there any way to let Hibernate know that each resultset is unique, so that it shouldn't try to map the second resultset based on the columns of the first? Thank you
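If configuring Hibernate for this proves impossible, one hedged workaround is to drop to plain JDBC for this call: each result set then carries its own metadata, so differing column sets are handled naturally. The procedure name below is a placeholder:

```
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

// Canonical JDBC loop over a procedure that returns several result sets.
public class MultiResultReader {
    public static void readAll(Connection conn) throws Exception {
        try (CallableStatement cs = conn.prepareCall("{call my_multi_result_proc()}")) {
            boolean isResultSet = cs.execute();
            while (true) {
                if (isResultSet) {
                    try (ResultSet rs = cs.getResultSet()) {
                        ResultSetMetaData md = rs.getMetaData(); // columns of THIS result set
                        while (rs.next()) {
                            // map the row via md.getColumnLabel(1..md.getColumnCount())
                        }
                    }
                } else if (cs.getUpdateCount() == -1) {
                    break; // neither a result set nor an update count: done
                }
                isResultSet = cs.getMoreResults();
            }
        }
    }
}
```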
Maurice (147 rep)
Apr 26, 2020, 08:08 PM
0 votes
1 answer
1161 views
In general should we avoid using ORM's with Redshift
My team is considering utilizing Hibernate/Envers for a granular point in time rollback solution for Redshift data. I have heard that using an ORM with Redshift is not a good idea. Are there particular drawbacks related to using an ORM such as hibernate with Redshift?
mikelus (343 rep)
Apr 1, 2020, 09:20 PM • Last activity: Apr 1, 2020, 10:09 PM
0 votes
1 answer
651 views
Automatically purge dropped tables in Oracle
I am using an Oracle 18c database in a build system. One of my builds is for a JPA/Hibernate project, which is configured as "create-drop". This means it will create all tables during startup and drop all of them again at the end. The problem is: I can't configure Hibernate to generate "drop ... purge" statements. It seems that, as a result, the recycle bin has grown with each build, now exceeding the maximum allowed database size of 12 GB - and I am not even able to open the database anymore:

alter database open
*
ERROR at line 1:
ORA-12954: The request exceeds the maximum allowed database size of 12 GB.

I have now dropped the PDB and created a new one - but this will last for at most a month before I run into the same problem again. My question: Is there any option to tell Oracle to _immediately_ purge dropped tables?
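There is such an option, though per session (or system) rather than per statement: with the recycle bin disabled, DROP TABLE takes effect like DROP ... PURGE. A sketch for the build's connections, plus a one-time cleanup of what has already accumulated:

```
import java.sql.Connection;
import java.sql.Statement;

// Issued per session here; a pool "connection init" statement can apply
// the ALTER SESSION to every connection the build opens.
public class RecycleBinOff {
    public static void apply(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("ALTER SESSION SET recyclebin = OFF"); // drops skip the bin
            st.execute("PURGE RECYCLEBIN"); // reclaim what already piled up
        }
    }
}
```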
Tobias Liefke (103 rep)
Nov 19, 2019, 04:21 PM • Last activity: Nov 19, 2019, 06:02 PM
3 votes
1 answer
7159 views
What kind of deadlock the deadlock_timeout config parameter detects in Postgresql?
My overall goal is to get a 40P01/ERRCODE_T_R_DEADLOCK_DETECTED from Postgres when a deadlock is detected. (To be more precise, I would like to get a LockAcquisitionException in Hibernate, which is the exception the 40P01/ERRCODE_T_R_DEADLOCK_DETECTED error code is translated to in PostgreSQL81Dialect.java .) I use PostgreSQL 9.6. For this I thought I should set the deadlock_timeout and log_lock_waits config variables, as suggested by 19.12. Lock Management:

> deadlock_timeout (integer) This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is a deadlock condition. [...] When log_lock_waits is set, this parameter also determines the length of time to wait before a log message is issued about the lock wait. If you are trying to investigate locking delays you might want to set a shorter than normal deadlock_timeout.

I've set the following values in postgresql.conf:

log_lock_waits = on   # log lock waits >= deadlock_timeout
deadlock_timeout = 5s

Now, when I create a deadlock situation (from Java using Hibernate) I find the following in the postgresql.log:

LOG: process 17493 still waiting for ShareLock on transaction 322815 after 5000.464 ms
DETAIL: Process holding the lock: 17495. Wait queue: 17493.
CONTEXT: while updating tuple (132,35) in relation "publication"

However, no 40P01/ERRCODE_T_R_DEADLOCK_DETECTED error is generated (and sent to the JDBC driver). I dug a little into the Postgres source code and found that the deadlock detection done by setting deadlock_timeout/log_lock_waits is a different mechanism than the one generating 40P01/ERRCODE_T_R_DEADLOCK_DETECTED. The deadlock_timeout case is handled in backend/storage/lmgr/proc.c , while the 40P01/ERRCODE_T_R_DEADLOCK_DETECTED case in backend/storage/lmgr/deadlock.c . So, my questions are:

- Are these actually two different types of deadlocks that are detected?
- Is there a way to get an error when deadlock_timeout-based deadlock detection happens?
- How can an ERRCODE_T_R_DEADLOCK_DETECTED error actually be forced to happen?

**UPDATE**: the code I use to get into the deadlock situation goes like this (Spring/Java):

```
// This is in a Transaction managed by Spring
Publication p = em.find(Publication.class, id);
p.setComment("Outer " + new Date());
em.flush();

// This utility method runs the lambda expression in a new Transaction
// using Spring's TransactionTemplate and tries to update
// the same Publication that is about to be updated by the
// "outer" transaction
Utils.runInSeparateTransaction(status -> {
    Publication p2 = em.find(p.getClass(), p.getMtid());
    p2.setComment("Inner " + new Date());
    return p2;
    // Would flush/commit after the return, but will "hang" instead.
    // Here I would expect the deadlock error but only get the
    // deadlock log.
});
// Outer transaction would commit at this point but will
```
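A hedged reading of the scenario above: it is a single-waiter lock wait, not a lock cycle. The inner transaction waits on the outer transaction's row lock while the outer session sits idle (the Java thread is blocked waiting for the inner one to return), so the server's deadlock detector finds no cycle and 40P01 can never fire; deadlock_timeout merely schedules that check and the log message. To get an error instead of an indefinite wait, lock_timeout may be the closer fit:

```
import java.sql.Connection;
import java.sql.Statement;

// lock_timeout turns a long lock wait into an error (SQLSTATE 55P03,
// lock_not_available), which Hibernate can translate into a lock-related
// exception. Per-session here; it can also be set per role/database or
// in postgresql.conf.
public class LockTimeout {
    public static void apply(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("SET lock_timeout = '5s'");
        }
    }
}
```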
Bal&#225;zs E. Pataki (95 rep)
Jan 16, 2018, 10:13 AM • Last activity: Jun 29, 2018, 02:03 PM
0 votes
0 answers
34 views
Does using new ReferenceType in a hibernate query affect performance?
I have the following code in my Java DAO class:

Query query = session.createQuery("select new Question(q.qid, q.qtype) from Question q where q.qlang = :lang");

1. Does using a constructor (new ReferenceType) expression in a Hibernate query affect performance? In what way is it affected?
2. Over the whole code execution (be it the above Java code, the SQL query it gets converted into, or the assembly of the SQL result into a Question object), does it increase or decrease the overall performance of the application?
sofs1 (101 rep)
Jun 18, 2018, 05:43 AM • Last activity: Jun 18, 2018, 06:01 AM
-1 votes
1 answer
582 views
How to handle many connections to a large MySQL database
I have multiple Java Hibernate apps connecting to one single MySQL DB. Sometimes some of the apps hang on a DB select query. I think it is because the apps open many connections to the DB that remain open. So I changed the variables 'wait_timeout=3', 'interactive_timeout=3600' and 'max_connections=5000'. After that, I saw many errors/exceptions about not being able to connect to the DB or not being able to insert into the DB. Therefore I reset all 3 variables above to their default values: 'wait_timeout=28800', 'interactive_timeout=28800' and 'max_connections=151'. Now there are no errors/exceptions, but the apps hang. What should I do?
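Rather than stretching or shrinking the server-side timeouts, the usual fix is a bounded connection pool in each app, so connections are reused instead of piling up toward max_connections. A sketch with HikariCP (URL, credentials and sizes are placeholders to adapt):

```
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Bounded pool: the pool sizes of all apps together must stay below the
// server's max_connections, and idle connections are retired app-side.
public class MySqlPool {
    public static HikariDataSource dataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:mysql://dbhost:3306/app");
        cfg.setUsername("app");
        cfg.setPassword("secret");
        cfg.setMaximumPoolSize(20);
        cfg.setIdleTimeout(600_000); // close idle connections after 10 minutes
        return new HikariDataSource(cfg);
    }
}
```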
محسن عباسی (99 rep)
Dec 14, 2017, 09:56 AM • Last activity: Dec 14, 2017, 05:03 PM
15 votes
3 answers
24506 views
PostgreSQL query very slow when subquery added
I have a relatively simple query on a table with 1.5M rows: SELECT mtid FROM publication WHERE mtid IN (9762715) OR last_modifier=21321 LIMIT 5000; EXPLAIN ANALYZE output: > Limit (cost=8.84..12.86 rows=1 width=8) (actual time=0.985..0.986 rows=1 loops=1) > -> Bitmap Heap Scan on publication (cost=8.84..12.86 rows=1 width=8) (actual time=0.984..0.985 rows=1 loops=1) > Recheck Cond: ((mtid = 9762715) OR (last_modifier = 21321)) > -> BitmapOr (cost=8.84..8.84 rows=1 width=0) (actual time=0.971..0.971 rows=0 loops=1) > -> Bitmap Index Scan on publication_pkey (cost=0.00..4.42 rows=1 width=0) (actual time=0.295..0.295 rows=1 loops=1) > Index Cond: (mtid = 9762715) > -> Bitmap Index Scan on publication_last_modifier_btree (cost=0.00..4.42 rows=1 width=0) (actual time=0.674..0.674 rows=0 loops=1) > Index Cond: (last_modifier = 21321) > Total runtime: 1.027 ms So far so good, fast and uses the available indexes. Now, if I modify a query just a bit, the result will be: SELECT mtid FROM publication WHERE mtid IN (SELECT 9762715) OR last_modifier=21321 LIMIT 5000; The EXPLAIN ANALYZE output is: > Limit (cost=0.01..2347.74 rows=5000 width=8) (actual time=2735.891..2841.398 rows=1 loops=1) > -> Seq Scan on publication (cost=0.01..349652.84 rows=744661 width=8) (actual time=2735.888..2841.393 rows=1 loops=1) > Filter: ((hashed SubPlan 1) OR (last_modifier = 21321)) > SubPlan 1 > -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1) > Total runtime: 2841.442 ms Not so fast, and using seq scan... Of course, the original query run by the application is a bit more complex, and even slower, and of course the hibernate-generated original is not (SELECT 9762715), but the slowness is there even for that (SELECT 9762715)! The query is generated by hibernate, so it is quite a challenge to change them, and some features are not available (e.g. UNION is not available, which would be fast). ### The questions 1. Why cannot the index be used in the second case? How could they be used? 2. Can I improve query performance some other way? ### Additional thoughts It seems that we could use the first case by manually doing a SELECT, and then putting the resulting list into the query. Even with 5000 numbers in the IN() list it is four times faster than the second solution. However, it just seems _WRONG_ (also, it could be 100 times faster :) ). It is completely incomprehensible why the query planner uses a completely different method for these two queries, so I would like to find a nicer solution to this problem.
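One hedged workaround, given that HQL lacks UNION but Hibernate can still run native SQL: the UNION form gives the planner two separately indexable branches, much like the fast BitmapOr plan of the first query. A sketch (session and idList come from the surrounding application code):

```
import java.util.List;
import org.hibernate.Session;

// Native-SQL UNION in place of "mtid IN (subquery) OR last_modifier = ?".
public class PublicationDao {
    @SuppressWarnings("unchecked")
    public static List<Number> mtids(Session session, List<Long> idList, long lastModifier) {
        return session.createSQLQuery(
                "SELECT mtid FROM publication WHERE mtid IN (:ids) "
              + "UNION "
              + "SELECT mtid FROM publication WHERE last_modifier = :lm "
              + "LIMIT 5000")
            .setParameterList("ids", idList)
            .setParameter("lm", lastModifier)
            .list();
    }
}
```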
P.P&#233;ter (911 rep)
Sep 7, 2015, 09:05 AM • Last activity: Dec 5, 2017, 09:30 AM
3 votes
1 answer
9386 views
Can I set PostgreSQL "stringtype=unspecified" behavior as default?
I am trying to get Hibernate to save Java enum fields as enum columns in PostgreSQL 9.6. If I map the fields as @Enumerated(STRING), Hibernate naively uses the enum name as the column contents. This works splendidly for reads (it just gets the string representation, matches up and returns), but the server complains about the type mismatch when writing: expression is of type character varying. [The PostgreSQL JDBC driver](https://jdbc.postgresql.org/documentation/94/connect.html) has a parameter stringtype that will tell the server to infer the appropriate column type for string values, and using stringtype=unspecified successfully gets enum values stored. However, I will be deploying my application in an environment where the JDBC connection string is supplied externally, and it's not practical to add the stringtype property to it. Is there a way I can configure the server (such as with a SET command) to treat connections as stringtype=unspecified by default?
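A hedged server-side alternative when the URL cannot carry the property: create an implicit cast from varchar to the enum type, once per enum type ("my_enum" below is a placeholder for the actual type name).

```
import java.sql.Connection;
import java.sql.Statement;

// WITH INOUT reuses the type's text input/output functions for the cast,
// so the server accepts Hibernate's varchar binds for enum columns.
public class EnumCast {
    public static void create(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE CAST (varchar AS my_enum) WITH INOUT AS IMPLICIT");
        }
    }
}
```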
chrylis -cautiouslyoptimistic- (131 rep)
May 17, 2017, 10:28 AM • Last activity: Nov 23, 2017, 06:35 PM
1 vote
1 answer
100 views
And Java Justice For All
Recently, I thought I was getting into an Oracle database administration and development project with a _heavy_ load of PL/SQL programming in the form of stored procedures and such. I was struck when, the other day, I was told **I shouldn't rely so much on PL/SQL as it can lead to undesired behavior on production systems**, and that it would be ideal to perform all the necessary transformations on the data outside of the database and use the database more as a repository than anything else (only insert into and select should be used: just store data, don't operate on it in any other way). Not only that, but we were asked to **do it all in Java** (the face I made at that moment...). Since then I have cracked my head trying to figure out an easy way of satisfying these requirements, and the best idea I could come up with is trying to replicate a relational engine in a Java project. I know there are plenty of frameworks and libraries, like Hibernate, that would somehow help me with the first steps of this project, but as you might have assumed, I could only use these to retrieve data from the database, not to pass to the DB the queries that should be performed to make all the necessary transformations (that would only be adding extra steps to the first, forbidden, scenario). I would still need to replicate operations like joins, unions, etc. Here are my cries for help:

1. Is it true that PL/SQL can lead to undesired behavior in production systems? (Oracle database)
2. Is replicating the work of the relational engine a sensible idea?
3. Are there any tools that can really help in a situation like this?
4. Should I just slap the genius behind this whole idea back to his senses?
Feillen (191 rep)
Oct 3, 2017, 09:09 AM • Last activity: Oct 3, 2017, 09:54 AM
4 votes
1 answer
4796 views
MySQL process stuck on “cleaning up”, locking table forever
It was suggested I post here as opposed to Stack Overflow. I'm new to debugging MySQL/InnoDB issues, so here is the best I have: I'm trying to run an integration test, so these results are near identical each time; the test consistently hangs at the same spot while trying to execute the same update. I'm getting a conflicting process that has locked the table the update is trying to use. If I just let it run, I get a lock-wait-timeout-exceeded exception, and when I kill the troubling process things do start moving again, though an exception is thrown in my test: "Caused by: org.mariadb.jdbc.internal.util.dao.QueryException: Could not send query: Last stream not finished". Relevant information I can find, from SHOW ENGINE INNODB STATUS:

------------ TRANSACTIONS ------------ ---TRANSACTION 2609485, ACTIVE 36 sec starting index read mysql tables in use 1, locked 1 LOCK WAIT 9 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 2 MySQL thread id 231, OS thread handle 162200, query id 10577 localhost 127.0.0.1 root updating /* update com.ipl.inspire.imp.radio.entities.entity.circuit.Circuit */ update abstract_asset_network set lastUpdated='2016-01-14 15:34:10.2', leg=0, networkStatus_id=4, parent_id=NULL, xCoordinate=0, yCoordinate=0 where id=33 Trx read view will not see trx with id >= 2609485, sees = 2609468, sees < 2609468

There are two transactions here, and it seems that trx 2609468 has caused a lock on the row the update (trx 2609485) is trying to touch, 2 seconds afterwards. The lock seems to last forever (or until innodb_lock_wait_timeout gets hit, which I have tried increasing).

SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX
trx_id trx_state trx_started trx_requested_lock_id trx_wait_started trx_weight trx_mysql_thread_id trx_query trx_operation_state trx_tables_in_use trx_tables_locked trx_lock_structs trx_locks_memory_bytes trx_rows_locked trx_rows_modified trx_concurrency_tickets trx_isolation_level
2609485 LOCK WAIT 2016-01-14 15:34:17 2609485:89945:3:3 2016-01-14 15:34:18 11 231 update abstract_asset_network set lastUpdated='2016-01-14 15:34:10.2', leg=0, networkStatus_id=4, parent_id=NULL, xCoordinate=0, yCoordinate=0 where id=33 starting index read 1 5 9 1136 4 2 0 REPEATABLE READ
2609468 RUNNING 2016-01-14 15:34:15 13 228 0 6 10 1136 4 3 0 REPEATABLE READ

which suggests pretty much what the InnoDB status output already said, I believe, though maybe someone else can make more sense of it. The last query that was executed on process id 228 was (from the general_log):

228 Query /* select generatedAlias0 from Imp as generatedAlias0 */ select imp0_.id as id1_67_, imp0_.black as black2_67_, imp0_.current as current3_67_, imp0_.name as name4_67_, imp0_.redMessageHost as redMessa5_67_, imp0_.redMessagePort as redMessa6_67_, imp0_.site_id as site_id7_67_ from imp imp0_

and then later:

231 Query update abstract_asset_network set lastUpdated='2016-01-14 15:34:10.2', leg=0, networkStatus_id=4, parent_id=NULL, xCoordinate=0, yCoordinate=0 where id=33
. . .
8 Query kill 228
231 Query COMMIT
231 Query set autocommit=1
231 Query SELECT 1
231 Query set autocommit=0
231 Query /* select ...

But I don't think that's relevant; it's more likely to do with the process being reused for "cleaning up" something else. However, this same scenario, with the same processes/timing/lock/queries, happens the same way every single time, with only the ids changing.
user2819101 (41 rep)
Jan 14, 2016, 04:46 PM • Last activity: Jun 10, 2017, 04:57 PM