Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
192
views
@OneToMany mapping but only one is needed
I have to create a new table to store the version numbers of the Terms & Conditions a user has accepted and the timestamp of when they clicked accept. (This table only contains info related to the user; info about the T&C version itself is stored in another table.)
That means user A could have accepted 3 versions of the T&Cs as they were updated over time. The version I care most about is the latest one the user has agreed to, but I still need to keep the history of accepted versions for integrity's sake.
Scenario:
> 1. I will already have the user entry in the database before the version records get stored.
> 2. There can be users who have not agreed to any version of the T&C.
> 3. One user may have accepted several versions of the T&C over time.
> 4. I want to retrieve only the latest version the user has accepted, along with the user object, in as few steps as possible.
> 5. If possible, when I delete the user I would like ALL accepted-version records for that user to be deleted as well (foreign key with ON DELETE CASCADE or something).
How can I implement a clean solution for this? I have come up with two possibilities, but I'm not sure whether either is feasible.
Solution 1: a @OneToOne unidirectional relationship, with User keeping the FK reference to the latest T&C record the user has agreed to. The T&C record will keep user_id as a non-primary, non-unique FK field.
@Entity
@Table(name = "user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "user_id")
    private Long id;

    @Column(name = "username")
    private String username;

    .........

    @JoinColumn(name = "terms_record_id")
    @OneToOne(cascade = CascadeType.ALL)
    private TermsRecord termsRecord;
    // joined on terms_record_id to get only the latest version accepted
}
And the entity to record the versions the user has accepted:
@Entity
@Table(name = "terms_record")
public class TermsRecord {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "terms_record_id")
    private Long id;

    @Column(name = "agreed_on")
    private Date agreedOn;

    @Column(name = "version_no")
    private String versionNo;

    // I'm not sure if it needs to have userId in here as well?
    @Column(name = "user_id")
    private Long userId;

    ........
}
And in the DB
create table user (
    user_id bigint PRIMARY KEY,
    ...
    terms_record_id bigint,
    FOREIGN KEY (terms_record_id) REFERENCES terms_record(terms_record_id)
    -- to refer to the most recently accepted version
);
create table terms_record (
    terms_record_id bigint NOT NULL,
    user_id bigint,
    PRIMARY KEY (terms_record_id),
    FOREIGN KEY (user_id) REFERENCES user(user_id) ON DELETE CASCADE
    -- to keep the records of terms the user has agreed to
);
In the code, it will be something like:
User userA = getUser(user_id);
TermsRecord tr = new TermsRecord();
tr.setVersionNo("1.0.5");
tr.setAgreedOn(....);
userA.setTermsRecord(tr);
save(userA);
So this way, if user A had accepted terms version 1.0.3 before, it won't delete that entry from the terms_record table, but it will pull a different versionNo when you retrieve userA from the database.
E.g.
userA.getTermsRecord().getVersionNo() == "1.0.5"
And in the terms_record table:
terms_record_id | version_no
----------------+-----------
      ...       |    ...
       3        |   1.0.3
       5        |   1.0.5
      ...       |    ...
Solution 2
Unidirectional @OneToMany annotation in the User class, with the foreign key user_id stored in TermsRecord and cascade on delete; then retrieve the latest version manually.
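For reference, a rough sketch of what I imagine Solution 2 would look like (the orphanRemoval / @OrderBy settings and the getLatestTermsRecord() helper are just my guesses here, not tested code):
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
@Table(name = "user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "user_id")
    private Long id;

    // Unidirectional @OneToMany: the FK user_id lives in terms_record,
    // and deleting the user cascades to all of their accepted-version rows.
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinColumn(name = "user_id")
    @OrderBy("agreedOn DESC")
    private List<TermsRecord> termsRecords = new ArrayList<>();

    // "Retrieve the latest version manually": newest record first thanks to @OrderBy.
    public TermsRecord getLatestTermsRecord() {
        return termsRecords.isEmpty() ? null : termsRecords.get(0);
    }
}
With this shape, deleting the user should also remove all of that user's terms_record rows through the cascade.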
user59290
(11 rep)
Jul 16, 2018, 04:27 PM
• Last activity: Jul 18, 2025, 06:03 AM
1
votes
1
answers
164
views
Apache Cassandra node restarts randomly
I have a cluster of 11 nodes, Apache Cassandra version 3.11.4, where certain nodes are restarted at random times.
Nodes characteristics:
CPU(s): 32
Model name: Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz
RAM: 62G
Disk: 42T
Below I am attaching the system.log file of one node (a.k.a. node70) before the restart:
WARN [CompactionExecutor:47] 2024-01-07 21:07:17,332 BigTableWriter.java:211 - Writing large partition xxx/xxx:128274211 (203.247MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495387-big-Data.db
INFO [HintsDispatcher:7] 2024-01-07 21:07:26,969 HintsStore.java:133 - Deleted hint file d75b9d99-e44b-4b91-8ba1-2b60852699da-1704638444850-1.hints
INFO [HintsDispatcher:7] 2024-01-07 21:07:26,970 HintsDispatchExecutor.java:282 - Finished hinted handoff of file d75b9d99-e44b-4b91-8ba1-2b60852699da-1704638444850-1.hints to endpoint /10.147.49.69: d75b9d99-e44b-4b91-8ba1-2b60852699da
WARN [CompactionExecutor:36] 2024-01-07 21:07:33,280 BigTableWriter.java:211 - Writing large partition xxx/xxx:20755 (172.373MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495376-big-Data.db
WARN [CompactionExecutor:43] 2024-01-07 21:07:55,185 BigTableWriter.java:211 - Writing large partition xxx/xxx:128274211 (116.939MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495383-big-Data.db
...
I am leaving only the key logs so you can follow along. The whole file contains a lot of WARNs about "Writing large partition".
WARN [CompactionExecutor:47] 2024-01-07 21:17:45,303 BigTableWriter.java:211 - Writing large partition xxx/xxx:26300 (191.685MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495387-big-Data.db
WARN [CompactionExecutor:40] 2024-01-07 21:18:00,395 BigTableWriter.java:211 - Writing large partition xxx/xxx:26663181 (145.598MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495380-big-Data.db
WARN [CompactionExecutor:42] 2024-01-07 21:18:14,510 BigTableWriter.java:211 - Writing large partition xxx/xxx:39741453 (111.068MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495416-big-Data.db
INFO [ScheduledTasks:1] 2024-01-07 21:18:41,355 NoSpamLogger.java:91 - Some operations were slow, details available at debug level (debug.log)
WARN [CompactionExecutor:43] 2024-01-07 21:18:55,742 BigTableWriter.java:211 - Writing large partition xxx/xxx:332 (107.752MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495383-big-Data.db
WARN [CompactionExecutor:44] 2024-01-07 21:18:57,684 BigTableWriter.java:211 - Writing large partition xxx/xxx:770 (107.219MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495384-big-Data.db
Also here:
WARN [CompactionExecutor:44] 2024-01-07 21:22:19,816 BigTableWriter.java:211 - Writing large partition xxx/xxx:30096898 (103.169MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495384-big-Data.db
INFO [CompactionExecutor:42] 2024-01-07 21:22:26,206 NoSpamLogger.java:91 - Maximum memory usage reached (536870912), cannot allocate chunk of 1048576
WARN [CompactionExecutor:43] 2024-01-07 21:22:57,513 BigTableWriter.java:211 - Writing large partition xxx/xxx:26663181 (148.395MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495383-big-Data.db
INFO [IndexSummaryManager:1] 2024-01-07 21:22:59,163 IndexSummaryRedistribution.java:77 - Redistributing index summaries
WARN [CompactionExecutor:42] 2024-01-07 21:23:01,308 BigTableWriter.java:211 - Writing large partition xxx/xxx:20755 (350.542MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495416-big-Data.db
These kinds of logs keep repeating until **2024-01-07 22:49:00**, and then the node just goes down and comes back up:
WARN [CompactionExecutor:43] 2024-01-07 22:49:26,242 BigTableWriter.java:211 - Writing large partition xxx/xxx:30269442 (109.637MiB) to sstable /home/user/apache-cassandra-3.11.4/data/data/xxx/xxx-5d4d2290f19811e99de8956ff675a115/md-1495383-big-Data.db
INFO [main] 2024-01-07 22:55:08,817 YamlConfigurationLoader.java:89 - Configuration location: file:/home/user/apache-cassandra-3.11.4/conf/cassandra.yaml
INFO [main] 2024-01-07 22:55:09,194 Config.java:496 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; client_encryption_options=; cluster_name=LBS Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=null; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=32; concurrent_compactors=16; concurrent_counter_writes=16; concurrent_materialized_view_writes=32; concurrent_reads=16; concurrent_replicates=null; concurrent_writes=16; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@7ec7ffd3; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_materialized_views=false; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=GossipingPropertyFileSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=true; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=10.147.49.70; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=172800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=1024; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; 
native_transport_flush_in_batches_legacy=true; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=10.147.49.70; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=null; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=10.147.49.69,10.147.49.70,10.147.49.71,10.147.49.74,10.147.49.75,10.147.49.73,10.147.49.72,10.147.49.67,10.147.49.68,10.147.49.64,10.147.49.65}; server_encryption_options=; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@5b239d7d; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO [main] 2024-01-07 22:55:09,195 DatabaseDescriptor.java:373 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2024-01-07 22:55:09,196 DatabaseDescriptor.java:431 - Global memtable on-heap threshold is enabled at 2508MB
INFO [main] 2024-01-07 22:55:09,196 DatabaseDescriptor.java:435 - Global memtable off-heap threshold is enabled at 2508MB
INFO [main] 2024-01-07 22:55:09,252 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000.
INFO [main] 2024-01-07 22:55:09,252 DatabaseDescriptor.java:735 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}.
INFO [main] 2024-01-07 22:55:09,580 GossipingPropertyFileSnitch.java:64 - Loaded cassandra-topology.properties for compatibility
INFO [main] 2024-01-07 22:55:09,667 JMXServerUtils.java:246 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi
INFO [main] 2024-01-07 22:55:09,675 CassandraDaemon.java:476 - Hostname: server70
INFO [main] 2024-01-07 22:55:09,676 CassandraDaemon.java:483 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_222
INFO [main] 2024-01-07 22:55:09,677 CassandraDaemon.java:484 - Heap size: 9.800GiB/9.800GiB
INFO [main] 2024-01-07 22:55:09,678 CassandraDaemon.java:489 - Code Cache Non-heap memory: init = 2555904(2496K) used = 7550016(7373K) committed = 7602176(7424K) max = 251658240(245760K)
INFO [main] 2024-01-07 22:55:09,678 CassandraDaemon.java:489 - Metaspace Non-heap memory: init = 0(0K) used = 20864080(20375K) committed = 21495808(20992K) max = -1(-1K)
INFO [main] 2024-01-07 22:55:09,679 CassandraDaemon.java:489 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2402776(2346K) committed = 2621440(2560K) max = 1073741824(1048576K)
INFO [main] 2024-01-07 22:55:09,679 CassandraDaemon.java:489 - Par Eden Space Heap memory: init = 1718091776(1677824K) used = 481091008(469815K) committed = 1718091776(1677824K) max = 1718091776(1677824K)
INFO [main] 2024-01-07 22:55:09,679 CassandraDaemon.java:489 - Par Survivor Space Heap memory: init = 214695936(209664K) used = 0(0K) committed = 214695936(209664K) max = 214695936(209664K)
INFO [main] 2024-01-07 22:55:09,679 CassandraDaemon.java:489 - CMS Old Gen Heap memory: init = 8589934592(8388608K) used = 0(0K) committed = 8589934592(8388608K) max = 8589934592(8388608K)
**NB!**
When I try to check nodetool status on node70 it shows me: **Cassandra 3.0 and later require Java 8u40 or later**, but the other nodes can display it even though their versions are the same.
Also, for some reason snapshots do not work on this node, and data from 2020 and 2021 is still stored there, although I configured it for two years.
I am also getting files like these in the logs directory:
hs_err_pid1461756.log
hs_err_pid1502594.log
Here is my **gc.log**:
2024-01-08T11:38:48.267+0600: 3533.985: Total time for which application threads were stopped: 0.0273467 seconds, Stopping threads took: 0.0017132 seconds
{Heap before GC invocations=5251 (full 22):
par new generation total 1887488K, used 1710170K [0x0000000540000000, 0x00000005c0000000, 0x00000005c0000000)
eden space 1677824K, 100% used [0x0000000540000000, 0x00000005a6680000, 0x00000005a6680000)
from space 209664K, 15% used [0x00000005b3340000, 0x00000005b52d68f0, 0x00000005c0000000)
to space 209664K, 0% used [0x00000005a6680000, 0x00000005a6680000, 0x00000005b3340000)
concurrent mark-sweep generation total 8388608K, used 3626506K [0x00000005c0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 46509K, capacity 48029K, committed 48604K, reserved 1091584K
class space used 5645K, capacity 5969K, committed 6108K, reserved 1048576K
2024-01-08T11:38:48.753+0600: 3534.471: [GC (Allocation Failure) 2024-01-08T11:38:48.753+0600: 3534.471: [ParNew
Desired survivor size 107347968 bytes, new threshold 1 (max 1)
- age 1: 22025736 bytes, 22025736 total
: 1710170K->30151K(1887488K), 0.0262670 secs] 5336676K->3678067K(10276096K), 0.0266400 secs] [Times: user=0.46 sys=0.01, real=0.02 secs]
Heap after GC invocations=5252 (full 22):
par new generation total 1887488K, used 30151K [0x0000000540000000, 0x00000005c0000000, 0x00000005c0000000)
eden space 1677824K, 0% used [0x0000000540000000, 0x0000000540000000, 0x00000005a6680000)
from space 209664K, 14% used [0x00000005a6680000, 0x00000005a83f1f70, 0x00000005b3340000)
to space 209664K, 0% used [0x00000005b3340000, 0x00000005b3340000, 0x00000005c0000000)
concurrent mark-sweep generation total 8388608K, used 3647915K [0x00000005c0000000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 46509K, capacity 48029K, committed 48604K, reserved 1091584K
class space used 5645K, capacity 5969K, committed 6108K, reserved 1048576K
}
2024-01-08T11:38:48.780+0600: 3534.498: Total time for which application threads were stopped: 0.0325483 seconds, Stopping threads took: 0.0013643 seconds
2024-01-08 11:38:49 GC log file has reached the maximum size. Saved as /home/user/apache-cassandra-3.11.4/logs/gc.log.0
Please tell me if you need additional information. This is my first post on dba.stackexchange.com, so I may not have taken some forum rules into account.
Thank you!
Midas Salimgali
(11 rep)
Jan 8, 2024, 07:31 AM
• Last activity: Jul 16, 2025, 07:05 AM
3
votes
1
answers
71
views
Db2 - Upgrade Java JDK/JRE
After installing the latest IBM JDK via alternatives on Red Hat 8.10 (Ootpa) x86, when trying to update the Java version for Db2 with
db2 UPDATE DBM CFG USING jdk_path SDK for Java-path
so that it becomes
(...) (JDK_PATH) = /dbhome//sqllib/java/jdk64
the following error appears:
MESSAGE : ADM0508E The database manager was unable to load the Java
interpreter library "/usr/lib/jvm/jre/lib/amd64/j9vm". This error
commonly occurs because of restrictions imposed by the operating
system. Consult the IBM documentation for a solution. If this
problem persists, contact IBM support.
**Db2 version:** DB2 v11.5.9.0, special_52441
"DYN2501101324AMD64_52441"
**Java version** "21.0.6" 2025-01-21 LTS
IBM Semeru Runtime Certified Edition 21.0.6.0 (build 21.0.6+7-LTS)
Eclipse OpenJ9 VM 21.0.6.0 (build openj9-0.49.0, JRE 21 Linux amd64-64-Bit Compressed References 20250121_373 (JIT enabled, AOT enabled)
**Question 1:** Where is the path in the error message configured? It turns out not to be the one defined as JDK_PATH.
**Question 2:** Is it actually possible to update the referenced Db2 version to the latest JDK/JRE release, and does it have to be specifically an IBM JDK/J9 VM?
**Question 3:** Is there any good article or documentation on how to update/handle/work with Java UDFs in Db2?
*Correlates to:
https://stackoverflow.com/questions/69019194/how-to-upgrade-java-to-at-least-8-0-6-25-in-ibm-db2-v11-1*
matz3
(31 rep)
Jul 4, 2025, 03:24 PM
• Last activity: Jul 11, 2025, 07:42 AM
0
votes
1
answers
180
views
Is it more efficient to do SELECT and compare in Java or DELETE and INSERT
I am hitting a **REST API** to get data from a service. I transform this data and store it in a database. I will have to do this on some interval, say every 15 minutes, and then make sure this database has the latest information.
I am doing this in a Java program. I am wondering if it would be better, after I have queried all data, to do:
1. **SELECT** statements, compare against the transformed data, and do **UPDATEs** (**DELETE** all records associated with what changed and **INSERT** the new ones)
**OR**
2. **DELETE ALL** and **INSERT ALL** every time.
Option 1 has the potential for far fewer write transactions: there is a guaranteed SELECT on all records because we are comparing, but potentially not many UPDATEs, since I don't expect the data to change much. But it has the downside of comparing all records to detect a change.
I am planning on doing this using Spring Boot, a JPA layer, and possibly Postgres.
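To make option 1 a bit more concrete, here is roughly how I picture the compare-and-update step with plain JDBC against Postgres (the table and column names are made up for illustration; the real version would go through the JPA layer):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class SyncSketch {

    // Option 1: read what is already stored, then write only the rows whose
    // transformed payload actually differs from what is in the table.
    static void syncChangedOnly(Connection conn, Map<String, String> fromApi) throws SQLException {
        Map<String, String> existing = new HashMap<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, payload FROM items")) {
            while (rs.next()) {
                existing.put(rs.getString("id"), rs.getString("payload"));
            }
        }

        // Postgres upsert keeps this to one statement per changed row.
        try (PreparedStatement upsert = conn.prepareStatement(
                "INSERT INTO items (id, payload) VALUES (?, ?) " +
                "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload")) {
            for (Map.Entry<String, String> item : fromApi.entrySet()) {
                if (item.getValue().equals(existing.get(item.getKey()))) {
                    continue; // unchanged since the last 15-minute poll, skip
                }
                upsert.setString(1, item.getKey());
                upsert.setString(2, item.getValue());
                upsert.addBatch();
            }
            upsert.executeBatch();
        }
    }
}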
user137760
(1 rep)
Nov 1, 2017, 06:02 PM
• Last activity: Jul 10, 2025, 01:16 PM
1
votes
1
answers
859
views
Run MSSQL stored procedures containing SSIS packages using Windows Authentication from another computer
I am new to MSSQL. I have a question about executing MSSQL Stored Procedure.
Let me briefly talk about my development environment first:
- **PC400 (Computer/Server A)**, where SQL Server 2012 is installed
- **PC401 (Computer/Server B)**, where the Java application is running
PC400 has an SSIS project & package(s) deployed to its SSISDB. Of course, I have no problem executing the packages **LOCALLY on PC400**.
But this is not what I need. I want the Java application installed on PC401 to be able to execute those packages stored on PC400. I have no problem using an sa/temp user account with "SQL Server Authentication" to log in to the server:
But based on my research, "SQL Server Authentication" does not allow me to deploy/execute packages in SSISDB.
Then, I found the following advice:
https://dba.stackexchange.com/questions/39614/connect-to-sql-server-using-windows-authentication-from-another-pc-without-activ
Let's say the Windows user account on PC401 is "HKB\Hello123". By creating an identical Windows user under "MSSQL -> Security -> Logins" on PC400,
my Java program is able to use the following code to execute a stored procedure on PC400 using "Windows Authentication":
con = DriverManager.getConnection("jdbc:sqlserver://HKA-PC400:1433;DatabaseName=TempTest;integratedSecurity=true");
CallableStatement cs = con.prepareCall("{call SP_ETL_B}");
cs.execute();
But are there any other methods to achieve this? Besides, ***I hope my Java program could use an sa/temp user account ("SQL Server Authentication") to pretend to be a "Windows Authentication" one, OR simply connect as an existing Windows user account, and then trigger a stored proc/SQL Server Agent job to run the packages on PC400.***


garethlam
(11 rep)
May 4, 2022, 10:01 AM
• Last activity: Jul 5, 2025, 03:01 PM
3
votes
2
answers
188
views
Causing ORA errors in a program using JDBC. How can I find what's causing them?
So I'm building a Java app to interface with the company's database. Whenever it causes an ORA error, it just prints something like the following to system output:
ORA-00911: invalid character
Is there some way I can see what's causing it? Maybe some SQL statement I can use to see the last 10 errors caused on a table or something? I have SQLDeveloper if that might be useful.
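In the meantime, the only thing I can think of on the application side is to log the exact statement text together with the error code at the point where it fails, roughly like this (executeLogged is just an illustrative helper, not something already in my app):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SqlLogging {

    // Runs a statement and, on failure, prints the SQL text Oracle rejected together
    // with the ORA error code (e.g. 911), so the offending statement shows up app-side.
    static void executeLogged(Connection conn, String sql) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        } catch (SQLException e) {
            System.err.println("Failed SQL: " + sql);
            System.err.println("ORA code : " + e.getErrorCode());
            System.err.println("Message  : " + e.getMessage());
            throw e;
        }
    }
}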
Daniel Paczuski Bak
(416 rep)
Jun 12, 2015, 06:16 PM
• Last activity: Jun 26, 2025, 02:04 AM
1
votes
0
answers
12
views
GridDB How to properly delete a database with containers?
I'm working with GridDB CE in a Java application and trying to programmatically delete a database. However, I'm encountering the error below.
> 145045 JC_DATABASE_NOT_EMPTY ERROR Checking of client request failed. Number of DB containers may not be zero. Delete all containers in the DB targeted for deletion.
I want to delete an entire database (or simulate that by removing all its data). For example, before running an integration test or during cleanup, I want to ensure the DB is clean.
import java.util.Properties;
import com.toshiba.mwcloud.gs.*;

public class GridDBCleanup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("notificationAddress", "239.0.0.1");
        props.setProperty("notificationPort", "31999");
        props.setProperty("clusterName", "myCluster");
        props.setProperty("user", "admin");
        props.setProperty("password", "admin");
        GridStore store = GridStoreFactory.getInstance().getGridStore(props);
        // Attempt to drop database (simulated cleanup)
        store.dropDatabase("testDB"); // This line throws the error
    }
}
What I've tried:
- Ensured that testDB exists and is accessible.
- Verified that containers are still present using getContainerNames.
- Tried iterating and deleting containers manually before calling dropDatabase, but I want to know if there's a better/more idiomatic way to clean a DB.
What is the correct approach to fully delete a database or simulate that in GridDB?
Do I have to delete all containers manually before using dropDatabase()?
Is there a built-in way to list and delete all containers programmatically?
Any help or a working snippet would be appreciated!
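In case it helps, the manual cleanup I tried is roughly this shape; the list of container names is simply passed in here (collected beforehand), and dropContainer is the per-container call I used:
import java.util.List;

import com.toshiba.mwcloud.gs.GSException;
import com.toshiba.mwcloud.gs.GridStore;

public class GridDBContainerCleanup {

    // Drop every container by name before attempting any database-level cleanup.
    static void dropAllContainers(GridStore store, List<String> containerNames) throws GSException {
        for (String name : containerNames) {
            store.dropContainer(name);
        }
    }
}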
omar esawy
(11 rep)
Jun 8, 2025, 09:06 AM
• Last activity: Jun 8, 2025, 01:03 PM
0
votes
2
answers
50
views
How do I update Java prepared statements from Postgres 12 to Postgres 16?
I have a Java/JSP webapp that uses a Postgres database. I recently migrated the webapp from one server running Ubuntu 20.04 and Postgres 12 to another server running Ubuntu 24.04 and Postgres 16.
I've found that the webapp now generates a Postgres-related error
org.postgresql.util.PSQLException: ERROR: trailing junk after parameter at or near "$5WHERE"
with a reference to one of the Java class files in the webapp. That file generates an SQL query using Java's prepared statements to execute on the Postgres database. The exact string "$5WHERE" appears nowhere in my source code, but of course there are several instances of the SQL keyword " WHERE " (surrounded by spaces) in the file in question.
**Question 1:** Where does the "$5" come from in the error message "$5WHERE"? I can't find any reference to this online, and it might be a clue.
The closest example I can find online to the error message as a whole is this [StackExchange](https://stackoverflow.com/questions/75482094/postgresql-15-trailing-junk-after-numeric-literal-error) question from 2023. The question cites the recent transition to Postgres 15 and its [release notes](https://www.postgresql.org/docs/15/release-15.html) that states one change is
> Prevent numeric literals from having non-numeric trailing characters (Peter Eisentraut) §
> Previously, query text like 123abc would be interpreted as 123 followed by a separate token abc.
I'm not certain this is related to my error. The only numeric literal data I have in the problem code is a primary key, and I'm sure it doesn't have trailing characters.
I've pulled all of the code I can tell is relevant from the Java class cited in the error and cleaned it up as best I can:
import java.sql.Connection;
import java.sql.PreparedStatement;
public void updateUser(WorkGroup group, String password) {
PreparedStatement ps = null, ps2 = null;
Connection conn = DBManager.getConnection();
conn.setAutoCommit(false);
// Whether the user's password is being updated
boolean pass = false;
String sql = "UPDATE user_record SET join = ?, role = ?, sales_training = ?, sec_training = ?, field_training = ?";
if (StringUtils.isNotBlank(password)) {
sql += ", hashedpassword = ? ";
pass = true;
}
sql += "WHERE id = ?;";
ps = conn.prepareStatement(sql);
ps.setString(1, group.getJoinYear());
ps.setString(2, group.getRole());
ps.setBoolean(3, group.getSalesTraining());
ps.setBoolean(4, group.getSecTraining());
ps.setBoolean(5, group.getFieldTraining());
if (pass) {
String hashedPassword = BCrypt.hashpw(password, BCrypt.gensalt(12));
ps.setString(6, hashedPassword);
ps.setInt(7, group.getId());
}
else {
ps.setInt(6, group.getId());
}
ps.executeUpdate();
}
I want to stress that this code has worked for years, so there can't be anything fundamentally wrong with it. My top suspicion is that this manner of preparing the SQL statement conflicts with a change in one of the Postgres major versions between 12 and 16, but I can't tell what because the statement preparation obscures so much.
**Question 2:** Does this code need updating to remain current with Postgres, and what needs to be updated?
Borea Deitz
(151 rep)
Jun 3, 2025, 08:26 PM
• Last activity: Jun 4, 2025, 09:29 AM
0
votes
1
answers
288
views
Use quoted schema in an insert statement executed through PreparedStatement in Java
I created an unquoted table my_table under a quoted schema "Quoted_User".
When executing the statement
INSERT INTO "Quoted_User".my_table (custom_id) VALUES (DEFAULT)
using the IntelliJ Oracle browser it works, but when I'm executing the same statement through a PreparedStatement in Java, Oracle complains that the schema does not exist. Here's the prepared-statement Java line:
statement = connection.prepareStatement(insertSQL, new String[]{custom_id});
and here is insertSQL as shown in the inspector:
INSERT INTO \"Quoted_User\".my_table (custom_id) VALUES (DEFAULT)
It's as if Oracle is removing the quotes, and I can say this because I noticed the uppercase letters in the error message, which are always returned when the object name is treated as case-insensitive.
Error returned by Oracle:
java.sql.SQLSyntaxErrorException: ORA-04043: object "QUOTED_USER" does not exist
There should be a way to use quoted schema names with prepared statements I guess.
Thanks for your hints,
Mattia
Mattia Moretta
(3 rep)
Aug 24, 2019, 11:28 AM
• Last activity: May 10, 2025, 08:03 PM
0
votes
2
answers
1780
views
Java PostgreSQL library seems to convert dates to local time zone automatically
PostgreSQL 14.6
Ubuntu 22.04
I am using postgresql-42.5.4.jar which I downloaded from pgJDBC. I use this library to get data from a database and display it on a website running locally. The web server and database server are both running on the same machine.
The database server's time zone is UTC. The system's time zone is America/Chicago.
I have a table that contains a column of type timestamp with time zone.
The data is inserted into the table by a separate C++ program that uses a completely different library. In order to insert the timestamps, it uses a Unix timestamp and the to_timestamp() function, like this:
insert into my_table (my_time) values (to_timestamp(1654321098));
The timestamp is retrieved from the table as a string and passed back to the website as is. A comment below suggested using the java.time.OffsetDateTime class, but I don't know where that class would be used. Here is the Java code I am using:
String query = "select my_time from my_table";
ResultSet result_set = db_connection.createStatement ().executeQuery (query);
String result = result_set.getString ("my_time");
When I query this column from my command line database client, it shows me the dates in UTC, which is what I would expect because that is the time zone the server is using. This simple query would look like this:
select my_time from my_table;
While still in my command line client, if I want to display that column in my local time, I have to modify my query like this:
select my_time at time zone 'America/Chicago' as my_time from my_table;
But I started noticing that the website was displaying incorrect times. I temporarily had it print its query to the screen so I could look at it in my command line client. The result was not the same. In order to display the time in my local time on the website, I had to remove the at time zone 'America/Chicago' part of the query, which does not seem to make sense and does not produce the same result in the command line client, and it also makes the code less portable if I were to move it to a system using a different database library.
Does the Java driver for PostgreSQL automatically convert timestamp fields to local time? If it does, is there a way to turn that feature off? If it doesn't, then what could be causing the different results I get between the JDBC library and my command line client?
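For reference, here is where I think the suggested OffsetDateTime approach would slot into my code, based on my reading of the JDBC 4.2 getObject docs (untested on my side; the UTC rendering at the end is just an example):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.OffsetDateTime;
import java.time.ZoneId;

public class MyTimeFetch {

    // Fetch the timestamptz column as an OffsetDateTime instead of a pre-rendered string,
    // then choose the display zone explicitly rather than relying on the driver/JVM default.
    static String fetchMyTime(Connection db_connection) throws SQLException {
        String query = "select my_time from my_table";
        try (Statement st = db_connection.createStatement();
             ResultSet rs = st.executeQuery(query)) {
            if (!rs.next()) {
                return null;
            }
            OffsetDateTime my_time = rs.getObject("my_time", OffsetDateTime.class);
            return my_time.atZoneSameInstant(ZoneId.of("UTC")).toString();
        }
    }
}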
Zephyrus
(283 rep)
Feb 19, 2023, 03:53 PM
• Last activity: Apr 6, 2025, 08:13 AM
0
votes
1
answers
673
views
Queries of MySQL from Java are executed serially from multiple threads
I have a REST API developed in Spring + Java and hosted on a Tomcat server. I have a MySQL database where I store all my data. The API just makes a simple select from the DB, as described below. I failed performance tests because the queries are executed one by one. The problem is: I run 100 threads to query my API (from an outside program), and my logs in the API show that each request stops on the execution of the query and waits for its turn to ask the database. It seems like MySQL can only do one query at a time. Is that true? I'm setting connection.setReadOnly(true), so the transaction should be marked as read-only.
Body of an API request that is executed:
Long start = System.currentTimeMillis();
System.out.println("Starting thread: "+idx);
String sql = "SELECT * \n" +
"FROM EXERCISES\n" +
" WHERE 1 = 1 \n" +
" AND exercises.STARTTIME \n" +
" BETWEEN '2020-08-03 00:00:00.000' \n" +
" AND '2020-10-10 23:59:59.999'\n" +
"and exercises.SALEVISIBILITY in ('E','B') \n" +
" AND exercises.STARTTIME > current_timestamp()\n";
MysqlDataSource dataSource = new MysqlDataSource();
try {
dataSource.setUrl("jdbc:mysql://127.0.0.1:3306/sampleDB?useUnicode=yes&characterEncoding=UTF-8");
dataSource.setUser("test");
dataSource.setPassword("test");
dataSource.setLoginTimeout(1000);
dataSource.setAllowMultiQueries(true);
System.out.println("Before connection "+(System.currentTimeMillis()-start));
Connection connection = dataSource.getConnection();
System.out.println("After connection "+(System.currentTimeMillis()-start));
connection.setReadOnly(true);
System.out.println("After readOnly "+(System.currentTimeMillis()-start));
Statement stmt = connection.createStatement();
System.out.println("After stmt "+(System.currentTimeMillis()-start));
ResultSet rs = stmt.executeQuery(sql);
System.out.println("After ResultSet "+(System.currentTimeMillis()-start));
System.out.println("End task: "+idx+" time: "+(System.currentTimeMillis()-start));
if(!rs.isClosed())
{
rs.close();
}
if(!stmt.isClosed()){
stmt.close();
}
if(!connection.isClosed()){
connection.close();
}
} catch (Exception ex) {
ex.printStackTrace();
}
I've used MysqlDataSource here to make sure each request has its own connection and also to show my connection parameters (maybe something is wrong there).
And the result of the test is:
... many similar lines above
After connection 5457
After readOnly 5457
After stmt 5457
After connection 5509
After readOnly 5509
After stmt 5509
After connection 5509
After readOnly 5509
After stmt 5509
After connection 5555
After readOnly 5555
After connection 5553
After readOnly 5553
After stmt 5555
After stmt 5553
After connection 5504
After readOnly 5504
After stmt 5504
After connection 5607
After readOnly 5607
After stmt 5607
After ResultSet 7112
End task: 32 time: 7112
After ResultSet 7304
End task: 12 time: 7304
After ResultSet 6702
End task: 59 time: 6702
After ResultSet 6829
End task: 52 time: 6829
After ResultSet 7360
End task: 24 time: 7360
After ResultSet 6857
End task: 56 time: 6857
After ResultSet 7625
End task: 20 time: 7625
After ResultSet 7661
End task: 36 time: 7661
After ResultSet 7822
End task: 15 time: 7822
After ResultSet 7758
End task: 27 time: 7758
After ResultSet 7691
End task: 39 time: 7691
After ResultSet 7651
End task: 64 time: 7651
After ResultSet 8430
End task: 51 time: 8430
After ResultSet 8852
End task: 28 time: 8852
After ResultSet 8052
End task: 75 time: 8052
After ResultSet 8840
End task: 43 time: 8840
After ResultSet 7678
End task: 98 time: 7678
After ResultSet 8985
End task: 40 time: 8985
After ResultSet 8521
End task: 60 time: 8521
After ResultSet 9159
End task: 23 time: 9159
After ResultSet 8836
... and many the same lines below
All the requests execute very fast up to the line System.out.println("After stmt "+(System.currentTimeMillis()-start)); and then they all freeze and are executed one by one at the line ResultSet rs = stmt.executeQuery(sql);
My max_connections is set to 3000. I'm using MySQL 5.7.25; a single select executes in 300-350 ms. Max threads on Tomcat is set to 600.
If anyone could help me, please do! I would be very grateful. :)
First edit:
Create table code:
CREATE TABLE `exercises` (
    `REPLOCDB` INT(11) NOT NULL DEFAULT -1,
    `REPLOCID` INT(11) NOT NULL AUTO_INCREMENT,
    `NAME` VARCHAR(100) NOT NULL DEFAULT '',
    `CAPACITY` INT(11) NOT NULL DEFAULT 0,
    `RESERVED` INT(11) NOT NULL DEFAULT 0,
    `STARTTIME` TIMESTAMP NOT NULL DEFAULT current_timestamp(),
    `CLOSETIME` TIMESTAMP NOT NULL DEFAULT current_timestamp(),
    `FKZONREPLOCDB` INT(11) NOT NULL DEFAULT 0,
    `FKZONREPLOCID` INT(11) NOT NULL DEFAULT 0,
    `FKEXGREPLOCDB` INT(11) NOT NULL DEFAULT 0,
    `FKEXGREPLOCID` INT(11) NOT NULL DEFAULT 0,
    `FKHUMREPLOCDB` INT(11) NULL DEFAULT NULL,
    `FKHUMREPLOCID` INT(11) NULL DEFAULT NULL,
    `SALEFROM` TIMESTAMP NULL DEFAULT NULL,
    `ENGLISHNAME` VARCHAR(100) NOT NULL DEFAULT '',
    `ENTRYFROM` TIMESTAMP NOT NULL DEFAULT current_timestamp(),
    `ENTRYTO` TIMESTAMP NOT NULL DEFAULT current_timestamp(),
    `SALETO` TIMESTAMP NULL DEFAULT NULL,
    `STATUS` CHAR(1) NOT NULL DEFAULT 'A',
    `SALEVISIBILITY` CHAR(1) NOT NULL DEFAULT 'A',
    `DESCRIPTION` TEXT NOT NULL,
    `WAITLISTQUANTITY` INT(11) NOT NULL DEFAULT 0,
    `LOCATION` VARCHAR(160) NOT NULL DEFAULT '',
    `EVENTKEEPERTNAME` VARCHAR(160) NOT NULL DEFAULT '',
    `FKASSREPLOCDB` INT(11) NULL DEFAULT NULL,
    `FKASSREPLOCID` INT(11) NULL DEFAULT NULL,
    `ASSORTNAME` VARCHAR(160) NOT NULL DEFAULT '',
    `GROSSPRICE` DECIMAL(18,6) NOT NULL DEFAULT 0.000000,
    `FKCUSREPLOCDB` INT(11) NULL DEFAULT NULL,
    `FKCUSREPLOCID` INT(11) NULL DEFAULT NULL,
    `CUSTOMERNAME` VARCHAR(160) NOT NULL DEFAULT '',
    `CUSTOMERPHONE` VARCHAR(160) NOT NULL DEFAULT '',
    `CAPTION` TEXT NULL DEFAULT NULL,
    `EVENTMESSAGE` TEXT NULL DEFAULT NULL,
    `REMINDERMINUTESBEFORESTART` INT(11) NOT NULL DEFAULT 0,
    `FKDHUREPLOCDB` INT(11) NULL DEFAULT NULL,
    `FKDHUREPLOCID` INT(11) NULL DEFAULT NULL,
    `LNG` DECIMAL(12,9) NULL DEFAULT NULL,
    `LAT` DECIMAL(12,9) NULL DEFAULT NULL,
    PRIMARY KEY (`REPLOCID`, `REPLOCDB`),
    INDEX `FK_EXERCISES2` (`FKEXGREPLOCID`, `FKEXGREPLOCDB`),
    INDEX `FK_EXERCISES3` (`FKHUMREPLOCID`, `FKHUMREPLOCDB`),
    INDEX `FK_EXERCISES_1` (`FKZONREPLOCID`, `FKZONREPLOCDB`),
    INDEX `INDEX_RB_FKEXGREPLOCID_NAME` (`FKEXGREPLOCID`, `NAME`) USING BTREE,
    INDEX `INDEX_RB_SALEVISIBILITY` (`SALEVISIBILITY`) USING BTREE,
    INDEX `INDEX_RB_NAME` (`NAME`) USING BTREE,
    INDEX `EXERCISES_ASSORT_FK` (`FKASSREPLOCID`, `FKASSREPLOCDB`),
    INDEX `EXERCISES_CUSTOMER_FK` (`FKCUSREPLOCID`, `FKCUSREPLOCDB`),
    INDEX `FK_EXERCISES_DEPUTY_HUMAN` (`FKDHUREPLOCID`, `FKDHUREPLOCDB`),
    CONSTRAINT `EXERCISES_ASSORT_FK` FOREIGN KEY (`FKASSREPLOCID`, `FKASSREPLOCDB`) REFERENCES `assorts` (`REPLOCID`, `REPLOCDB`) ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT `EXERCISES_CUSTOMER_FK` FOREIGN KEY (`FKCUSREPLOCID`, `FKCUSREPLOCDB`) REFERENCES `customers` (`REPLOCID`, `REPLOCDB`) ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT `FK_EXERCISES2` FOREIGN KEY (`FKEXGREPLOCID`, `FKEXGREPLOCDB`) REFERENCES `exercisegroup` (`REPLOCID`, `REPLOCDB`) ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT `FK_EXERCISES3` FOREIGN KEY (`FKHUMREPLOCID`, `FKHUMREPLOCDB`) REFERENCES `humans` (`REPLOCID`, `REPLOCDB`) ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT `FK_EXERCISES_1` FOREIGN KEY (`FKZONREPLOCID`, `FKZONREPLOCDB`) REFERENCES `zones` (`REPLOCID`, `REPLOCDB`) ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT `FK_EXERCISES_DEPUTY_HUMAN` FOREIGN KEY (`FKDHUREPLOCID`, `FKDHUREPLOCDB`) REFERENCES `humans` (`REPLOCID`, `REPLOCDB`) ON UPDATE NO ACTION ON DELETE NO ACTION
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
AUTO_INCREMENT=272857;
EXPLAIN for this query:
"id" "select_type" "table" "type" "possible_keys" "key" "key_len" "ref" "rows" "Extra"
"1" "SIMPLE" "EXERCISES" "ALL" "INDEX_RB_SALEVISIBILITY" \N \N \N "117212" "Using where"
Yes, I know that it should not be blocked; I use locks in different parts of the system and they work like a charm, but I didn't expect this to happen here. I can't figure out why those selects are blocked by each other.
I don't use a connection pool here because I want to show you all my connection properties and make sure each Java thread has a separate connection. It's just for test purposes.
Dawid Bielarski
(11 rep)
Aug 3, 2020, 06:37 PM
• Last activity: Mar 17, 2025, 09:07 AM
2
votes
1
answers
9284
views
How to drop invalid Java classes from Oracle?
I tried to load some jars into my database, but I think I messed up something, because I tried as DBA and then with some oracle user... This was my first command:
call dbms_java.loadjava('-genmissing -jarasresource -v -force -grant MYSCHEMA ./some-jar-file.jar');
and then this:
call dbms_java.loadjava('-genmissing -v -force -grant MYSCHEMA ./some-jar-file.jar');
I think I also tried with and without MYSCHEMA, because my Java source always had some errors related to these jars.
My problem is that right now I have more than 1500 invalid Java classes and I don't know how to clean this mess up:
SELECT *
FROM user_objects
WHERE /*object_type like '%JAVA%'
AND */status = 'INVALID'
I tried with this trick:
SET HEADING OFF SET FEEDBACK OFF SET ECHO OFF;
SPOOL delete-java-class.sql;
select 'drop java class "'||object_name||'";' from user_objects where object_type like '%JAVA%' AND status = 'INVALID';
SPOOL OFF;
SET HEADING OFF SET FEEDBACK OFF SET ECHO OFF;
SPOOL compile-java-class.sql;
select 'alter java class "'||object_name||'" resolve;' from user_objects where object_type like '%JAVA%' AND status = 'INVALID';
SPOOL OFF;
But the drop script wasn't able to drop these classes (only approx. 5 of 1500 could be deleted); I got this error:
ora-29537: class or resource cannot be created or dropped directly
The recompile script was also not able to resolve the issue, but it was able to reduce the invalid count to 800.
Can someone give me some advice? :) Thank you.
VORiAND
(163 rep)
Sep 4, 2015, 02:21 PM
• Last activity: Mar 11, 2025, 11:05 AM
0
votes
2
answers
87
views
Handling DriverTimeoutException in DataStax Java Driver 4.13.0 Without Retrying Non-Idempotent Requests
I am using datastax-java-driver-core 4.13.0 to connect a Java application to Cassandra. When a Cassandra node goes down, all in-flight requests to that node fail with a DriverTimeoutException.
This exception is not handled by DefaultRetryPolicy, meaning no automatic retry happens.
The requests are non-idempotent, so implementing a custom retry mechanism risks duplicate writes.
**Questions:**
How can we gracefully handle DriverTimeoutException without causing duplicate writes?
Is there a recommended way to retry only safe operations in such scenarios while preventing data inconsistency?
Are there any DataStax-recommended strategies to mitigate request failures when nodes go down?
Any guidance or best practices would be greatly appreciated!
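For what it is worth, the only partial mitigation I have found so far is to mark the statements we know are safe as idempotent and to handle the timeout explicitly; a rough sketch of the idea (the keyspace, table and query are placeholders):
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.DriverTimeoutException;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class TimeoutHandlingSketch {

    // Only statements explicitly marked idempotent are retried here; everything else is
    // surfaced to the caller so the business layer can verify instead of re-sending blindly.
    static void writeWithTimeoutHandling(CqlSession session, String id, String value) {
        SimpleStatement stmt = SimpleStatement
                .newInstance("INSERT INTO ks.sample_table (id, value) VALUES (?, ?)", id, value)
                .setIdempotent(true); // this particular write is safe to apply twice

        try {
            session.execute(stmt);
        } catch (DriverTimeoutException e) {
            if (Boolean.TRUE.equals(stmt.isIdempotent())) {
                session.execute(stmt); // safe single retry for idempotent statements
            } else {
                throw e; // non-idempotent: let the caller decide (e.g. read-back check)
            }
        }
    }
}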
Ashwani Kumar
(1 rep)
Feb 5, 2025, 12:50 PM
• Last activity: Feb 6, 2025, 01:17 AM
1
votes
2
answers
120
views
How to upgrade Java version on Cassandra 4.0.5
I want to upgrade the Java version from 8 to 11 on my Cassandra cluster. What is the best way to do this? Can I upgrade the Java version node by node? For example:
- Stop 1 node
- Upgrade java
- Start 1 node
And then repeat those steps for the other nodes. Or will I need to repair the nodes after the upgrade?
Current node versions:
- Cassandra 4.0.5
- Java 8
I would be glad of your advice.
Коля Курик
(15 rep)
Nov 15, 2024, 03:01 PM
• Last activity: Jan 20, 2025, 03:05 AM
0
votes
0
answers
25
views
Help with Creating an ERD and Logical Data Model for a Medical Management Application
A social security organization wishes to develop an application to manage patients, their attending general practitioners, and specialist doctors. A person is either a beneficiary or a doctor; a doctor can also be ill and be a beneficiary. A doctor is either a general practitioner or a specialist, with a general practitioner not being able to be a specialist and vice versa. A patient may or may not choose their attending generalist. A patient consults a general practitioner on a given date. During consultations, the general practitioner may or may not prescribe one or more consultations with a specialist. This application can be used by the social security organization to register a beneficiary, assign an attending physician to a beneficiary, record medical forms, and process reimbursements.
All reimbursements follow a process and are 100% and 80% for general practitioners and specialists, respectively; these reimbursements can be made by transfer or in cash at the patient's convenience.
Additionally, any doctor can, through the application, record a medical form, prescribe medications, and schedule a consultation with a specialist.
1. From your data dictionary, come up with the corresponding MCD and MLD (conceptual and logical data models).
NGANGO YVES
(1 rep)
Jan 15, 2025, 02:16 AM
• Last activity: Jan 15, 2025, 05:31 AM
1
votes
2
answers
1274
views
Deadlock on inserts to a single table with a single key. (Mysql)
I have a simple game server.
One of the requirements I received was to handle a single request per player at a time, so in order to achieve that I decided to create a lock table:
CREATE TABLE `GameRoundLock` (
    `theKey` varchar(255) NOT NULL,
    PRIMARY KEY (`theKey`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Before I handle any request from any player I am inserting a row into that table like so:
// save and flush == acquiring the lock
GameRoundLock lock = this.gameRoundLockRepository.saveAndFlush(somePlayerKey);
/** Play busniess logic here... **/
// deleting == releasing the lock
this.gameRoundLockRepository.delete(lock);
This actually works.
In the following scenario the whole system works as expected:
**1) PlayerA plays, therefore a new lock is inserted.**
**2) handling playerA request... (takes 30 seconds)**
**3) While 2) is still executing, PlayerA plays again - this of course results in trying to create a lock for him, and the thread now waits until 2) completes.**
**4) thread of 2) completes**
**5) thread of 3) now continuing**
**6) DONE, playerA total played rounds == 2.**
****Pay attention that the id (playerKey) is identical.****
The problem arises when PlayerA sends another play request while the first one is still being handled. Here is the scenario that results in a weird deadlock:
**1) PlayerA plays, therefore a new lock is inserted.**
**2) handling playerA request... (takes 30 seconds)**
**3) While 2) is still executing, PlayerA plays again - this of course results in trying to create a lock for him, and the thread now waits until 2) completes.**
**4) While 2) is still executing and 3) is still waiting, PlayerA plays again - this of course results in trying to create a lock for him, and the thread now waits until 2)/3) complete.**
**5) thread of 2) completes**
**6) ERROR - a deadlock is thrown!!!!!**
Here is the deadlock info:
------------------------
LATEST DETECTED DEADLOCK
------------------------
2016-11-23 17:05:26 0x1d98
*** (1) TRANSACTION:
TRANSACTION 5493356, ACTIVE 8 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 5191, OS thread handle 5152, query id 15582091 localhost 127.0.0.1 root update
insert into GameRoundLock (theKey) values ('1,1,1,64')
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 4865 page no 3 n bits 72 index PRIMARY of table `game_server`.`gameroundlock` trx id 5493356 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 8; hex 312c312c312c3634; asc 1,1,1,64;;
1: len 6; hex 00000053d266; asc S f;;
2: len 7; hex 790000030903d1; asc y ;;
*** (2) TRANSACTION:
TRANSACTION 5493355, ACTIVE 12 sec inserting, thread declared inside InnoDB 1
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 5193, OS thread handle 7576, query id 15582083 localhost 127.0.0.1 root update
insert into GameRoundLock (theKey) values ('1,1,1,64')
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 4865 page no 3 n bits 72 index PRIMARY of table `game_server`.`gameroundlock` trx id 5493355 lock mode S
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 8; hex 312c312c312c3634; asc 1,1,1,64;;
1: len 6; hex 00000053d266; asc S f;;
2: len 7; hex 790000030903d1; asc y ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 4865 page no 3 n bits 72 index PRIMARY of table `game_server`.`gameroundlock` trx id 5493355 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 8; hex 312c312c312c3634; asc 1,1,1,64;;
1: len 6; hex 00000053d266; asc S f;;
2: len 7; hex 790000030903d1; asc y ;;
*** WE ROLL BACK TRANSACTION (2)
It seems like the two "close/near" inserts for the locks of the 2nd and 3rd play requests cause this. Why a deadlock, and how can I mitigate it?
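One alternative I am considering instead of the lock row (just an idea, not what the code above does) is a MySQL named lock per player key via GET_LOCK/RELEASE_LOCK, which queues waiters without going through duplicate-key insert locking; a sketch:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PlayerLockSketch {

    // Serialize requests per player with a MySQL named lock instead of inserting a
    // duplicate primary key; GET_LOCK blocks until the lock is free or the timeout hits.
    static void withPlayerLock(Connection con, String playerKey, Runnable gameRound) throws SQLException {
        try (PreparedStatement acquire = con.prepareStatement("SELECT GET_LOCK(?, 60)")) {
            acquire.setString(1, playerKey);
            try (ResultSet rs = acquire.executeQuery()) {
                if (!rs.next() || rs.getInt(1) != 1) {
                    throw new SQLException("Could not acquire play lock for " + playerKey);
                }
            }
        }
        try {
            gameRound.run(); // play business logic here
        } finally {
            try (PreparedStatement release = con.prepareStatement("SELECT RELEASE_LOCK(?)")) {
                release.setString(1, playerKey);
                release.executeQuery();
            }
        }
    }
}
The lock must be released on the same connection that acquired it, which is why the acquire and release both use the same Connection here.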
Urbanleg
(375 rep)
Nov 23, 2016, 03:48 PM
• Last activity: Dec 30, 2024, 02:04 AM
0
votes
0
answers
34
views
Metabase error when connecting to SQL Server 2016, happens on RHEL9 but not on RHEL7, why?
We are testing an upgraded instance of Metabase, and the upgraded components include
- RHEL (from v7 to v9) operating system,
- Java (from 11 to 21), and
- Metabase software (from v0.32.x to v0.50.x as the current).
And we noticed the following error in the upgraded instance when connecting to a data source of SQL Server 2016.
> "encrypt" property is set to "false" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: Certificates do not conform to algorithm constraints. ClientConnectionId:6bf62750-d5c4-49c5-8ada-4253d8b55055
We suspect the error is related to the system behavior of RHEL9, therefore we designed the following simplified test case to support our point:
Test configuration:
- Java openjdk v11.0.23
- metabase.jar v0.32.5
We start the application by calling the command java -jar metabase.jar, so it starts a fresh demonstration instance of Metabase running on local H2 for the application database.
Then, we try to add a data source of SQL Server 2016 configured with encryption not required.
Running on RHEL7, the above test succeeded. However, running on RHEL9 it failed with the following log events; notice the timestamp of 12-19 14:58:44:
12-19 14:58:09 DEBUG middleware.log :: GET /api/setup/admin_checklist 200 14 ms (10 DB calls) Jetty threads: 8/50 (3 busy, 5 idle, 0 queued) (48 total active threads)
12-19 14:58:44 INFO metabase.driver :: Initializing driver :sqlserver...
12-19 14:58:44 DEBUG plugins.classloader :: Setting current thread context classloader to shared classloader clojure.lang.DynamicClassLoader@1dc9fc0...
12-19 14:58:44 INFO plugins.classloader :: Added URL file:/data/metabase-test/v0.32.5/plugins/sqlserver.metabase-driver.jar to classpath
12-19 14:58:44 DEBUG plugins.init-steps :: Loading plugin namespace metabase.driver.sqlserver...
12-19 14:58:44 INFO metabase.driver :: Registered driver :sqlserver (parents: :sql-jdbc) 🚚
12-19 14:58:44 DEBUG plugins.jdbc-proxy :: Registering JDBC proxy driver for class com.microsoft.sqlserver.jdbc.SQLServerDriver...
Load lazy loading driver :sqlserver took 173 ms
12-19 14:58:44 DEBUG middleware.log :: POST /api/database 400 314 ms (0 DB calls) Jetty threads: 8/50 (3 busy, 4 idle, 0 queued) (45 total active threads)
{:valid false,
:dbname
"com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: \"Certificates do not conform to algorithm constraints\". ClientConnectionId:ca179b99-b3b3-4351-a0de-736b7dc8e765",
:message
"com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: \"Certificates do not conform to algorithm constraints\". ClientConnectionId:ca179b99-b3b3-4351-a0de-736b7dc8e765"}
12-19 14:58:44 DEBUG middleware.log :: GET /api/database 200 5 ms (3 DB calls) Jetty threads: 8/50 (3 busy, 4 idle, 0 queued) (45 total active threads)
12-19 15:00:00 INFO task.send-pulses :: Sending scheduled pulses...
We wonder if we missed anything with regard to the new operating system, and we highly appreciate any pointers and hints.
Mike
(57 rep)
Dec 19, 2024, 10:19 PM
• Last activity: Dec 19, 2024, 10:25 PM
1
votes
1
answers
294
views
Which version of Java should I use with Cassandra 5.x
I've installed Cassandra on my Linux machine (Ubuntu).
I had OpenJDK 11 installed, I then installed JDK 17 and set it to the default and restarted Cassandra.
I can see the new version is being used in /var/log/cassandra/system.log
Everything is working, but it is unclear to me whether using 17 is considered "experimental" or whether it is preferred over 11.


adinas
(149 rep)
Oct 28, 2024, 02:43 PM
• Last activity: Oct 29, 2024, 08:21 AM
1
votes
2
answers
349
views
Startup error "Cassandra 5.0 requires Java 11 or Java 17"
I can only find that Cassandra supports JDK17, and no Java flavor specifics. First I installed OpenJDK11 and received the same message as below.
I then found that Cassandra 5.0 only supports JDK17. Fine, they should probably change the message if so but okay. I then installed OpenJDK17 and still received the below.
$ which java
/usr/lib/jvm/java-17-openjdk-amd64/bin/java
$ java -version
openjdk version "17-ea" 2021-09-14
OpenJDK Runtime Environment (build 17-ea+11-Ubuntu-114.042)
OpenJDK 64-Bit Server VM (build 17-ea+11-Ubuntu-114.042, mixed mode, sharing)
vm1@ubuntu:/opt/apache-cassandra-5.0.0/bin$ ./cassandra
Cassandra 5.0 requires Java 11 or Java 17.
I then tried with Oracle Java 17, same. What gives?
$ which java
/usr/lib/jvm/java-17-oracle/bin/java
$ java -version
java version "17.0.6" 2023-01-17 LTS
Java(TM) SE Runtime Environment (build 17.0.6+9-LTS-190)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.6+9-LTS-190, mixed mode, sharing)
vm1@ubuntu:/opt/apache-cassandra-5.0.0/bin$ ./cassandra
Cassandra 5.0 requires Java 11 or Java 17.
patricio2626
(13 rep)
Sep 27, 2024, 02:28 AM
• Last activity: Oct 1, 2024, 05:08 AM
1
votes
1
answers
881
views
Do too many subqueries create multiple sleeping processes in MySQL?
I am using Java & MySQL to make an app.
When I check the process list, I see a lot of sleeping processes. It also reaches a point where no further connections are allowed.
I have read many answers that give the command to mass-kill the processes, or that suggest checking the code for bugs. I have checked that I'm closing the connections properly using Apache Commons DbUtils.
DbUtils.closeQuietly(rs);
DbUtils.closeQuietly(statement);
DbUtils.closeQuietly(con);
I am not able to figure out why these processes don't get killed after closing the connection.
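For completeness, I have also been testing the try-with-resources form below (just a sketch with placeholder SQL), which as far as I understand closes the ResultSet, Statement and Connection even when an exception is thrown:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class QuerySketch {

    // Resources declared in the try header are closed automatically in reverse order,
    // so nothing should be left hanging for MySQL to show as a sleeping process.
    static int countRows(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM some_table");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}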
My questions are:
1. Do too many subqueries create multiple sleeping processes in MySQL? I don't see much of a relation, though, and I didn't find anything in my search.
2. Does passing the connection object through methods create extra/multiple processes?
Regards
Shekhar Sahu
(111 rep)
Jun 30, 2017, 03:56 PM
• Last activity: Jun 20, 2024, 11:02 PM
Showing page 1 of 20 total questions