Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
5283
views
timestamp with timezone issue with TZR vs TZH:TZM
I have a Hibernate-with-Oracle-JDBC-based application inserting into/updating the following column:
COLUMN_A TIMESTAMP WITH TIME ZONE DEFAULT systimestamp NOT NULL
but while it automatically appends the time zone, it uses `TZR` (e.g. `UTC`), and the weird thing with this data type in Oracle is that it retains the time-zone format from insertion when we select it without any formatting, [explained here](https://stackoverflow.com/questions/51417100/timestamp-with-timezone-retains-timezone-format).
With this said, we are using Golden Gate to replicate this data to MongoDB, but as far as I know it requires these timestamps to contain only `TZH:TZM` formatting. So I have this issue where one side is blocked by Oracle JDBC, where insertion is done with `TZR`, and the other side is blocked by Golden Gate, where data is expected with `TZH:TZM`.
Is there any way to handle this issue? Can I forbid a certain format for `TIMESTAMP WITH TIME ZONE`? Using `NLS_TIMESTAMP_TZ_FORMAT` works for `SELECT` formatting, but I was told it is not usable for Golden Gate. For the `INSERT` case I was able to use the `default` value of the column, and `systimestamp` does insert with `TZH:TZM`, but for the `UPDATE` case I am unable to achieve this. Totally stuck!
I have [this question](https://stackoverflow.com/questions/51412424/update-column-to-default-value-without-specifying-it-in-spring-data-jpa) focused on the application side of this issue. I am wondering if there is anything that can be done on the DB side?
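One DB-side idea, offered as a hedged sketch (the table name `my_table` is a placeholder; `COLUMN_A` is from the post): a row-level trigger can normalize whatever the client binds into an offset-only value, which covers both `INSERT` and `UPDATE` regardless of what Hibernate sends. `SYS_EXTRACT_UTC` strips the named region (`TZR`) and `FROM_TZ` re-attaches a numeric `TZH:TZM` offset.

```sql
CREATE OR REPLACE TRIGGER trg_column_a_offset_tz
BEFORE INSERT OR UPDATE OF column_a ON my_table  -- table name is a placeholder
FOR EACH ROW
BEGIN
  -- e.g. '2018-07-19 06:18:00 UTC' becomes '2018-07-19 06:18:00 +00:00'
  :NEW.column_a := FROM_TZ(SYS_EXTRACT_UTC(:NEW.column_a), '+00:00');
END;
/
```

Whether Golden Gate picks up the normalized value depends on its capture configuration, so this would need testing in the replication path.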
bvrakvs
(111 rep)
Jul 19, 2018, 06:18 AM
• Last activity: Jul 19, 2025, 03:08 AM
3
votes
2
answers
188
views
Causing ORA errors in a program using JDBC. How can I find what's causing them?
So I'm building a Java app to interface with the company's database. Whenever it causes an ORA error, it just prints something like the following to standard output:
ORA-00911: invalid character
Is there some way I can see what's causing it? Maybe some SQL statement I can use to see the last 10 errors raised against a table, or something similar? I have SQL Developer if that might be useful.
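As far as I know there is no built-in "last N errors per table" view on the server, but the client can capture the failing statement itself. A minimal sketch (connection and SQL text are placeholders): wrap execution so every `SQLException` is logged together with the statement that triggered it.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class LoggedExecution {
    // Execute a statement; on failure, log the SQL alongside the ORA error.
    static void execute(Connection con, String sql) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute(sql);
        } catch (SQLException e) {
            System.err.println("Failing SQL: " + sql);
            System.err.println("Vendor code: " + e.getErrorCode()); // 911 for ORA-00911
            System.err.println("Message:     " + e.getMessage());
            throw e;
        }
    }
}
```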
Daniel Paczuski Bak
(416 rep)
Jun 12, 2015, 06:16 PM
• Last activity: Jun 26, 2025, 02:04 AM
1
votes
1
answers
515
views
Can JDBC Oracle file-stream blob inserts be sped up?
I have four different ways of inserting file data into an Oracle database using JDBC and PreparedStatements.
- **Approach A**
ps.setBytes(1, fileDataInByteArray);
- **Approach B**
Blob blob = ps.getConnection().createBlob();
blob.setBytes(1, fileDataInByteArray);
ps.setBlob(1, blob);
- **Approach C**
ps.setBinaryStream(1, fileDataAsInputStream);
- **Approach D**
ps.setBlob(1, fileDataAsInputStream);
In general, Approaches C & D seem very useful for large file data. A & B can run out of memory, so C & D are obviously more reliable.
However, when the file is smaller, the danger of OOM is not there, and I find that A & B take less time for the database insert. I suppose that's because sending the data as a byte array is much faster.
What can I do to speed up the insert when using C & D?
I have tried wrapping the input stream in a BufferedInputStream and played around with the buffer size, but that did not help.
Are we at the mercy of how fast Oracle can read the data? Can that be sped up?
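One hedged thing to try, sketched below (table name and file path are placeholders; the benefit is not guaranteed across driver versions): the length-aware `setBinaryStream` overload. When the driver knows the byte count up front, it can bind the stream directly instead of falling back to the chunked end-of-stream protocol it uses for streams of unknown length.

```java
import java.io.BufferedInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BlobInsert {
    static void insert(Connection con, Path file) throws Exception {
        long length = Files.size(file); // known up front for a file on disk
        try (InputStream in = new BufferedInputStream(Files.newInputStream(file));
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO file_store (data) VALUES (?)")) {
            ps.setBinaryStream(1, in, length); // length-aware overload
            ps.executeUpdate();
        }
    }
}
```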
Raster R
(11 rep)
Jul 6, 2023, 08:45 AM
• Last activity: May 30, 2025, 08:04 AM
0
votes
1
answers
257
views
How do I use sa to access a local SQL Server database with JDBC?
I have a new local SQL Server instance, running hypothetically on port 48000. In SQL Server Management Studio, under the name of the computer\MSSQLSERVER01 > Security > Logins, I clicked sa, then Properties, and set the password. I have created a database, mystuff. sa is not listed under Security under mystuff. I have the connection string
jdbc:sqlserver://MYDESKTOP\MSSQLSERVER01:48000;databasename=mystuff;user=sa;password=123
but I get "Login failed for user 'sa'" when I attempt to connect. What else do I need to do to setup sa? Something under mystuff?
Chris94
(101 rep)
Apr 26, 2021, 11:20 AM
• Last activity: May 26, 2025, 10:06 AM
0
votes
1
answers
451
views
Execute Permission Denied on Stored Procedure using Microsoft JDBC Driver and Windows Authentication
I have a Microsoft SQL Server database that I read/write data to through stored procedures, via an integration engine, in this case InterSystems IRIS.
Originally I was using a local server user to do the execution, but for security purposes it was requested that I start using an Active Directory/Windows Authentication user instead. At the time I could not get this to work with the Microsoft JDBC driver, since we were running the integration engine on a Unix/Linux platform (AIX at the time, now Red Hat).
I was able to use the jTDS JDBC driver to connect to MS SQL without issues as the Active Directory service account, using Windows Authentication to do the reads/writes from InterSystems.
But when moving to Red Hat we discovered an issue with the jTDS driver not reusing existing JDBC connections, causing massive numbers of connections to be made.
So it was suggested that we move back to the MS JDBC Driver. However, I am running into a problem: I am receiving an EXECUTE permission denied error when trying to execute the stored procedure.
The user and the stored procedure have the correct permissions; however, using the Microsoft JDBC driver I get this error.
I am using the following format for the connection string:
Url: jdbc:sqlserver://host:1433;authenticationScheme=NTLM;integratedSecurity=true;domain=myDomain
Does anyone have any ideas on why the stored procedure will not execute with Microsoft's JDBC driver but will with jTDS?
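A hedged diagnostic, since NTLM and jTDS's flavor of Windows authentication may map to different principals: check on the server which login the Microsoft-driver session actually arrives as, then grant execute rights to that principal. Procedure and account names below are placeholders.

```sql
-- Run over the Microsoft-driver connection to see who SQL Server thinks you are:
SELECT SUSER_SNAME() AS login_name, USER_NAME() AS database_user;

-- If that principal differs from the one jTDS used, grant it execute rights:
GRANT EXECUTE ON OBJECT::dbo.MyProcedure TO [myDomain\svcAccount];
```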
sroth79
(1 rep)
May 26, 2023, 01:34 PM
• Last activity: May 16, 2025, 07:04 AM
0
votes
1
answers
2270
views
Unable to connect to an Oracle 12c Docker container
I'm working on a project with Oracle, using an Oracle XE 11 container for testing. But recent developments have left me running up against a brick wall: I need longer table names. Support for long table names was apparently added in version 12, so I'm trying to swap out the container I'm using.
I'm setting up the container in Java using `testcontainers`:
db = new OracleContainer("epiclabs/docker-oracle-xe-11g");
When I look around on Docker Hub for Oracle 12 containers, I get [a bunch of results](https://hub.docker.com/search?q=oracle-12&source=community), but with every one I've tried so far, if I substitute its name as the image name in the code above, it spins for 10-20 minutes, then errors out with
> ORA-12514, TNS:Listener does not currently know of service requested in connect descriptor
And I have no idea what the problem is here. I have plenty of SQL experience, but next to none with Oracle specifically, and trying to get this Oracle-related project working has been a bit of a disaster so far. Does anyone know what's going wrong and how I can get connected to an Oracle 12 Docker container?
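For what it's worth, a sketch under assumptions (the image tag is a suggestion, not from the post): ORA-12514 typically means the listener is up but the requested service name isn't registered, which with community images often comes down to the image registering a different service name than the driver asks for, or the database simply not being open yet. Letting testcontainers build the JDBC URL, and giving the notoriously slow Oracle images a long startup timeout, sidesteps both.

```java
import org.testcontainers.containers.OracleContainer;

// Image tag is an assumption; adjust to whichever 12c+ image you settle on.
OracleContainer db = new OracleContainer("gvenzl/oracle-xe:21-slim")
        .withStartupTimeoutSeconds(900); // XE images can take many minutes

db.start();
String jdbcUrl = db.getJdbcUrl(); // carries the service name the image registers
```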
Mason Wheeler
(212 rep)
May 20, 2021, 10:35 PM
• Last activity: Apr 7, 2025, 08:09 AM
0
votes
2
answers
1780
views
Java PostgreSQL library seems to convert dates to local time zone automatically
PostgreSQL 14.6
Ubuntu 22.04
I am using `postgresql-42.5.4.jar`, which I downloaded from pgJDBC. I use this library to get data from a database and display it on a website running locally. The web server and database server are both running on the same machine.
The database server's time zone is UTC. The system's time zone is America/Chicago.
I have a table that contains a column of type `timestamp with time zone`.
The data is inserted into the table by a separate C++ program that uses a completely different library. In order to insert the timestamps, it uses a Unix timestamp and the `to_timestamp()` function, like this:
insert into my_table (my_time) values (to_timestamp (1654321098));
The timestamp is retrieved from the table as a string and passed back to the website as is. A comment below suggested using the `java.time.OffsetDateTime` class, but I don't know where that class would be used. Here is the Java code I am using:
String query = "select my_time from my_table";
ResultSet result_set = db_connection.createStatement ().executeQuery (query);
String result = result_set.getString ("my_time");
When I query this column from my command line database client, it shows me the dates in UTC, which is what I would expect because that is the time zone the server is using. This simple query would look like this:
select my_time from my_table;
While still in my command line client, if I want to display that column in my local time, I have to modify my query like this:
select my_time at time zone 'America/Chicago' as my_time from my_table;
But I started noticing that the website was displaying incorrect times. I temporarily had it print its query to the screen so I could look at it in my command-line client. The result was not the same. In order to display the time in my local time on the website, I had to remove the `at time zone 'America/Chicago'` part of the query, which does not seem to make sense and does not produce the same result in the command-line client; it also makes the code less portable if I were to move it to a system using a different database library.
Does the Java driver for PostgreSQL automatically convert timestamp fields to local time? If it does, is there a way to turn that feature off? If it doesn't, then what could be causing the different results I get between the JDBC library and my command line client?
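On the driver question: pgJDBC sets the session `TimeZone` to the JVM's default zone when it connects, so strings rendered by the server come back in local time, which matches the behavior described. A hedged sketch of where `OffsetDateTime` fits (building on the query above): fetching a typed value instead of a pre-rendered string makes the display zone an explicit choice.

```java
import java.sql.ResultSet;
import java.sql.Statement;
import java.time.OffsetDateTime;
import java.time.ZoneId;

// Fetch timestamptz as a typed value; pick the rendering zone deliberately.
try (Statement st = db_connection.createStatement();
     ResultSet rs = st.executeQuery("select my_time from my_table")) {
    while (rs.next()) {
        OffsetDateTime ts = rs.getObject("my_time", OffsetDateTime.class);
        System.out.println(ts.atZoneSameInstant(ZoneId.of("UTC")));             // server-style
        System.out.println(ts.atZoneSameInstant(ZoneId.of("America/Chicago"))); // local
    }
}
```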
Zephyrus
(283 rep)
Feb 19, 2023, 03:53 PM
• Last activity: Apr 6, 2025, 08:13 AM
1
votes
1
answers
537
views
JDBC can't use the current primary database by setting multiple hosts
- node1: 192.168.0.1 as primary
- node2: 192.168.0.2 as standby
The JDBC string is set like this:
jdbc:postgresql://192.168.0.1:5432,192.168.0.2:5432/mydb
`repmgr` handles the replication and automatic failover.
First, node1 is the primary:
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-------+---------+-----------+----------+----------+----------+----------+--------------------------------------------------------
1 | node1 | primary | * running | | default | 100 | 7 | host=node1 user=repmgr dbname=repmgr connect_timeout=2
2 | node2 | standby | running | node1 | default | 100 | 7 | host=node2 user=repmgr dbname=repmgr connect_timeout=2
If node1 goes down, the primary can be changed to node2:
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-------+---------+-----------+----------+----------+----------+----------+--------------------------------------------------------
1 | node1 | primary | - failed | | default | 100 | ? | host=node1 user=repmgr dbname=repmgr connect_timeout=2
2 | node2 | primary | * running | | default | 100 | 8 | host=node2 user=repmgr dbname=repmgr connect_timeout=2
The application runs fine at this point.
But if node1 is recovered manually:
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-------+---------+-----------+----------+----------+----------+----------+--------------------------------------------------------
1 | node1 | standby | running | node2 | default | 100 | 8 | host=node1 user=repmgr dbname=repmgr connect_timeout=2
2 | node2 | primary | * running | | default | 100 | 8 | host=node2 user=repmgr dbname=repmgr connect_timeout=2
It seems the application will try to connect to node1 again; since node1 is now in read-only mode, it can't insert new data. So I have to change the JDBC connection order to:
jdbc:postgresql://192.168.0.2:5432,192.168.0.1:5432/mydb
With node2 in first place and the application restarted, it works.
I even tried adding parameters to the connection string based on the original (node1, node2) order:
jdbc:postgresql://192.168.0.1:5432,192.168.0.2:5432/mydb?targetServerType=master&loginTimeout=10&connectTimeout=10&tcpKeepAlive=true
The application still appears to lose data and creates it again.
So is changing the order the only way in this case? Can't the driver choose the current primary database correctly?
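A hedged note on the parameters: later pgJDBC releases renamed `targetServerType=master` to `primary`, and the driver caches each host's primary/standby status for `hostRecheckSeconds` (10 seconds by default), so a stale cache can keep sending traffic to a demoted node for a short window. Something along these lines (values are illustrative) may be worth trying before resorting to reordering hosts:

```
jdbc:postgresql://192.168.0.1:5432,192.168.0.2:5432/mydb?targetServerType=primary&hostRecheckSeconds=2
```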
rawmain
(359 rep)
Dec 9, 2019, 02:45 AM
• Last activity: Apr 5, 2025, 06:11 PM
4
votes
1
answers
1869
views
Why would a query return oracle.sql.NUMBER@[hex value] in place of a numeric value?
I'm seeing some odd behavior in an Oracle 11gR2 database. When I select from a particular table, using SQL Developer, certain rows of the result set include non-numeric results in a `NUMBER` column (with a `NOT NULL` constraint, though I'm not sure that matters). For example:
oracle.sql.NUMBER@b25bc2f
oracle.sql.NUMBER@5a65b760
oracle.sql.NUMBER@4bea2dfd
and so on, mixed in with regular numeric values in the same column. When using SQL*Plus I just see blank spaces instead (as with `NULL`), e.g. (in the column aliased `COL`):
FYEAR CO AB DEV COL COUNT(*)
---------- ---------- --- ------ ---------- ----------
2012 7 SF 107 1
I don't know much about Oracle internals but the strings in SQL Developer look like pointers to me. We stumbled across these not as the result of any error but because a query involving some arithmetic was giving very unexpected results. So, thinking maybe I could dereference them and learn something, I did a bit of experimenting:
- `cast(col as numeric)` always gives 0
- `to_number(col)` throws ORA-01722: invalid number
- `col * 1` gives any of an integer, a decimal value, `null`, or a different `oracle.sql.NUMBER@...`—the results are consistent but *only within a transaction*
- `col * 1 * 1` gives a **different** result than `col * 1`, with no pattern that I can discern (and so for `col * 1 * 1 * 1`, etc.; but `col * 2` is consistent with `col * 1`!)
- when I start a new transaction, in SQL*Plus all the calculated expression results change; in SQL Developer, the hex values shown change between transactions but the calculated expression results do not! *Wat.*
This is, hands down, the weirdest thing I've ever seen in a database. Is this a sign of corruption in the underlying datafiles? Have I stumbled upon some quirky undefined behavior in Oracle?
Here's an example of the result set changing between transactions—there's only one person with DML rights on this table, I verified he's not working with it, and it has no triggers:
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
SQL> select col, cast(col as numeric), col * 1, col * 2, col * 1 * 1, col * 1 * 1 * 1
2 from schema.table@dblink where foo = bar;
COL CAST(COLASNUMERIC) COL*1 COL*2 COL*1*1 COL*1*1*1
---------- ------------------ ---------- ---------- ---------- ----------
0
0 848 1696 9648 1612
0 8304 9312 1612
0 1612
SQL> rollback;
Rollback complete.
SQL> select col, cast(col as numeric), col * 1, col * 2, col * 1 * 1, col * 1 * 1 * 1
2 from schema.table@dblink where foo = bar;
COL CAST(COLASNUMERIC) COL*1 COL*2 COL*1*1 COL*1*1*1
---------- ------------------ ---------- ---------- ---------- ----------
0
0 8048 6096 3648 140
0 8304 9312 140
0 140
SQL>
In SQL Developer, some of these blanks are `NULL` while others show the `oracle.sql.NUMBER@...` thing, which SQL Developer does *not* treat like a string, even on export:
REM INSERTING into EXPORT_TABLE
SET DEFINE OFF;
Insert into EXPORT_TABLE (COL,"CAST(COLASNUMERIC)","COL*1","COL*2","COL*1*1","COL*1*1*1") values (oracle.sql.NUMBER@3924c5dc,0,null,null,null,42.939572379395723793957237939572379396);
`DUMP()` shows it as a 1-byte numeric; I'm not sure what 193 means as an internal representation. Everything I'm finding so far on Google shows 193 appearing in association with other numbers (e.g., `193,1`) but never by itself; [according to this page](http://www.jlcomp.demon.co.uk/number_format.html), 193 by itself seems nonsensical? Or I'd say it signifies null, except that `DUMP()` shows nulls as `NULL`. In any case, the 193 appears consistently for all of the affected rows:
FYEAR COL DUMP(COL) COL*1
2012 oracle.sql.NUMBER@1ef46be8 Typ=2 Len=1: 193 (null)
2013 oracle.sql.NUMBER@40d85c38 Typ=2 Len=1: 193 112
2014 oracle.sql.NUMBER@3c8e81ca Typ=2 Len=1: 193 112
2015 oracle.sql.NUMBER@2feb13d5 Typ=2 Len=1: 193 2712
2016 oracle.sql.NUMBER@72631d86 Typ=2 Len=1: 193 2712
...
This issue persists if I `CREATE TABLE tbl AS ...` with a query on the affected table. I traced it back to a table that is populated via a legacy program written in C that no one on staff knows how to decipher. But my assumption is that the C program can't insert anything that violates the `NOT NULL NUMBER` column definition without Oracle kicking it back out with an error (bad assumption?).
Air
(181 rep)
Aug 16, 2016, 05:59 PM
• Last activity: Mar 3, 2025, 09:00 AM
3
votes
2
answers
1511
views
How do I run a database heavy query over jdbc without timing out (Oracle)?
For one of our applications (Pentaho Data Integration), we need to run queries through the Oracle JDBC driver. Some of these queries are computationally expensive and can take an hour or more to return any rows to the client. When I try to run one of the long-running queries with the default JDBC parameters, I inevitably get "IO Error: Socket read timed out" after ~30 minutes, even though the database is still processing the query (verified in V$SESSION and V$SQLAREA). I am using the newest ojdbc8.jar, direct from Oracle, and running Oracle Enterprise 12.1.0.2.0.
EDIT:
The output of these queries is stored in a separate database. The nature of our environment is such that we have an arbitrary number of queries that could be created/run, so using stored procedures would be cumbersome.
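A hedged sketch of the usual knob for this symptom: `oracle.jdbc.ReadTimeout` (in milliseconds) controls how long the thin driver waits on the socket. Note that something outside the driver, such as a firewall idle timeout, can also kill connections at around the 30-minute mark, so raising or disabling the driver timeout is only half the checklist. Host and credentials below are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "etl_user");
props.setProperty("password", "secret");
// 0 disables the read timeout; alternatively e.g. "7200000" for two hours.
props.setProperty("oracle.jdbc.ReadTimeout", "0");

Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
```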
Andrew Hangsleben
(41 rep)
Feb 7, 2018, 09:12 PM
• Last activity: Feb 18, 2025, 06:03 PM
1
votes
1
answers
878
views
Why does my JDBC keep setting session variables?
I'm looking at the Performance Insights for MySQL Aurora on AWS and among the top queries I see:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
These statements don't appear anywhere in my code; I couldn't find them in the HikariCP implementation, in the MySQL Connector/J driver, or in the JDBC connection string. I was able to confirm that the last query, `SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED`, occurs when a new connection is created. These queries are wasteful since they come from a reader instance. Any ideas on how to trace and stop this?
This is with HikariCP 3.4.5, MySQL Connector/J 8.0.11 and MySQL 5.7. Below is a screenshot of what Performance Insights shows. Note that setting session parameters accounts for 4 of the top 5 queries by Average Active Sessions. The wait event in green is `wait/io/aurora_respond_to_client`, so I guess it's just waiting for the client to confirm receipt.
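A hedged place to look: both layers can issue this statement. HikariCP executes `SET SESSION TRANSACTION ISOLATION LEVEL ...` when its configured `transactionIsolation` differs from what the connection reports, and Connector/J re-sends session state unless told to trust its client-side cache. A sketch of aligning the two (values are illustrative):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig cfg = new HikariConfig();
cfg.setJdbcUrl("jdbc:mysql://reader-endpoint:3306/mydb");
// Either leave isolation unset to inherit the server default,
// or pin it to the level the server is already using:
cfg.setTransactionIsolation("TRANSACTION_REPEATABLE_READ");
// Let Connector/J track session state locally instead of re-sending it:
cfg.addDataSourceProperty("useLocalSessionState", "true");
HikariDataSource ds = new HikariDataSource(cfg);
```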

Johnny
(150 rep)
Jun 9, 2022, 05:51 PM
• Last activity: Feb 16, 2025, 04:05 AM
0
votes
1
answers
11685
views
Tomcat7 JDBC connection pool -> Connection has been abandoned
I have the following configuration for the JDBC connection pool:
initialSize="2"
minIdle="10"
maxIdle="50"
maxActive="100"
maxWait="15000"
jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer"
removeAbandoned="true"
removeAbandonedTimeout="60"
logAbandoned="true"
testOnBorrow="true"
testOnReturn="true"
testWhileIdle="true"
useEquals="false"
fairQueue="false"
timeBetweenEvictionRunsMillis="300000"
minEvictableIdleTimeMillis="300000"
validationInterval="10000"
validationQuery="SELECT 1"
The application is a standard Java+Spring+Hibernate environment, and the database is PostgreSQL. I had been using this configuration for a while, and everything was fine.
Recently, the following exception started to appear:
Sep 10, 2015 5:25:57 PM org.apache.tomcat.jdbc.pool.ConnectionPool abandon
WARNING: Connection has been abandoned PooledConnection[org.postgresql.jdbc4.Jdbc4Connection@3eb948ed]:java.lang.Exception
at org.apache.tomcat.jdbc.pool.ConnectionPool.getThreadDump(ConnectionPool.java:1063)
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:780)
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:619)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:188)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:128)
at org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider.getConnection(InjectedDataSourceConnectionProvider.java:70)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:301)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:157)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:160)
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1351)
at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:59)
at org.springframework.orm.jpa.DefaultJpaDialect.beginTransaction(DefaultJpaDialect.java:67)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:110)
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:380)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:420)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:257)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:95)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:644)
10 Sep 2015 17:25:57,591 WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: 08006
10 Sep 2015 17:25:57,592 ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - An I/O error occurred while sending to the backend.
javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: could not extract ResultSet
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1387)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1310)
at org.hibernate.ejb.QueryImpl.getSingleResult(QueryImpl.java:316)
at com.mysema.query.jpa.impl.AbstractJPAQuery.getSingleResult(AbstractJPAQuery.java:219)
at com.mysema.query.jpa.impl.AbstractJPAQuery.uniqueResult(AbstractJPAQuery.java:295)
at com.mysema.query.jpa.impl.AbstractJPAQuery.uniqueResult(AbstractJPAQuery.java:281)
at com.mysema.query.support.ProjectableQuery.singleResult(ProjectableQuery.java:75)
I don't have any new query running too long, but I guess this happens because a connection is not returned to the pool properly. I tried to play around with the configuration, but the same thing happens, though in rare cases.
I'd just like to know whether the configuration is fine for an application with an average load, and whether there is any chance to improve the connection pool's lifetime.
Note: when I decrease … and … it starts to happen more often.
Any help is appreciated.
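For context, a hedged reading of the interplay (the larger value below is illustrative): a connection checked out for longer than `removeAbandonedTimeout` seconds is forcibly reclaimed by the pool, and the borrower still holding it then fails with exactly this kind of I/O error. So the timeout has to exceed the slowest legitimate call, for example:

```
removeAbandoned="true"
removeAbandonedTimeout="300"
logAbandoned="true"
```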
vtor
(109 rep)
Sep 11, 2015, 09:14 AM
• Last activity: Feb 13, 2025, 12:06 PM
0
votes
1
answers
4779
views
Unable to connect with JDBC after MySQL upgrade from 5.7 to MySQL 8.0
We use a Java application client to transfer database table data to a third-party server. With MySQL 5.7 it was working fine. After the upgrade to MySQL 8, it started to show errors. The first thing we did was change the MySQL jar file to mysql-connector-java-8.0.30.jar. Even then, we are not getting connected.
We are, however, able to connect via PHP and log in via the MySQL command line.
Server Details:
- Ubuntu 18.04
- RAM: 16 GB
- mysql Ver 8.0.30 for Linux on x86_64 (MySQL Community Server - GPL)
The following are the errors shown:
Note: Example.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Cannot connect to database server
Could not create connection to database server. Attempted reconnect 3 times. Giving up.
java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73)
at com.mysql.cj.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:898)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:823)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:448)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:241)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
at Example.main(Example.java:16)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at jdk.compiler/com.sun.tools.javac.launcher.Main.execute(Main.java:404)
at jdk.compiler/com.sun.tools.javac.launcher.Main.run(Main.java:179)
at jdk.compiler/com.sun.tools.javac.launcher.Main.main(Main.java:119)
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:89)
at com.mysql.cj.NativeSession.connect(NativeSession.java:120)
at com.mysql.cj.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:842)
... 14 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:153)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
... 16 more
Please find the MySQL my.cnf settings.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
# Here is entries for some specific programs
# The following values assume you have at least 32M ram
# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
skip-grant-tables
# Drupal Specific Settings
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
character_set_server = utf8mb4
default-storage-engine=InnoDB
skip-name-resolve=1
#default_authentication_plugin=mysql_native_password
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
# Obsolete key_buffer option renamed to key_buffer_size by maintainer script
key_buffer_size = 128M
max_allowed_packet = 64M
thread_stack = 192K
thread_cache_size = 8
# Additional Fine Tuning Settings for Drupal and Performance
performance_schema
#read_rnd_buffer_size = 8M
#myisam_sort_buffer_size = 8M
max_user_connections = 100
join_buffer_size = 8M
read_buffer_size = 4M
sort_buffer_size = 4M
thread_cache_size = 4
interactive_timeout = 300
wait_timeout = 300
connect_timeout = 100
max_connect_errors = 10000
tmp-table-size = 64M
max-heap-table-size = 64M
open_files_limit = 40000
max_execution_time=240000 # in milliseconds. 4 minutes
skip-character-set-client-handshake ## Do not check for character set
#ssl=0
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
# Obsolete myisam-recover option renamed to myisam_recover_options by maintainer script
myisam_recover_options = BACKUP
max_connections = 200
table_open_cache = 100000
table_definition_cache = 12000
#thread_concurrency = 10
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries = /var/log/mysql/mysql-slow.log
slow-query-log-file = /var/log/mysql/mysql-slow.log
long_query_time = 5
#log-queries-not-using-indexes = 1
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
#max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
[mysqldump]
quick
quote-names
max_allowed_packet = 64M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completition
[isamchk]
# Obsolete key_buffer option renamed to key_buffer_size by maintainer script
key_buffer_size = 128M
[mysqld]
# InnoDB Settings
innodb_buffer_pool_size=6G
innodb_flush_log_at_trx_commit=2
#Set the log file size to about 25% of the buffer pool size
innodb_log_file_size=1G
innodb_log_buffer_size=64M
innodb_flush_method=O_DIRECT
innodb_file_per_table=1
innodb_thread_concurrency=64
#innodb_buffer_pool_instances=8
innodb_stats_on_metadata=0
innodb_lock_wait_timeout=100
#innodb_write_io_threads=8 #To stress the double write buffer
transaction-isolation=READ-COMMITTED
# Database 4 byte UTF-8 support
innodb_file_per_table=true
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/
We have tried various suggestions found on the internet for the MySQL error **Communications link failure**. The following things we tried, but could not get it working:
- -Djava.net.preferIPv4Stack=true
- tcpKeepalive=true
- changing the JDBC URL from localhost to 127.0.0.1: jdbc:mysql://127.0.0.1:3306/mydatabase?tcpKeepalive=true
Please find the MySQL user list:
mysql> SELECT Host,User,Plugin FROM mysql.user;
+-----------+------------------+-----------------------+
| Host      | User             | Plugin                |
+-----------+------------------+-----------------------+
| 127.0.0.1 | root             | mysql_native_password |
| ::1       | root             | mysql_native_password |
| li1003-25 | root             | mysql_native_password |
| localhost | debian-sys-maint | mysql_native_password |
| localhost | mysql.infoschema | caching_sha2_password |
| localhost | mysql.session    | mysql_native_password |
| localhost | mysql.sys        | mysql_native_password |
| localhost | root             | mysql_native_password |
+-----------+------------------+-----------------------+
Please share what additional info can be provided to help identify the issues. Thanks.
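One hedged observation from the posted my.cnf: in MySQL 8.0, `skip-grant-tables` automatically enables `skip_networking`, which makes the server refuse TCP connections while socket-based clients (the `mysql` command line and a typical same-host PHP setup) keep working. That would match these symptoms exactly. After removing that option and restarting, a minimal standalone test (credentials and database name are placeholders) isolates the driver from the application:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnTest {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:3306/mydatabase"
                   + "?allowPublicKeyRetrieval=true&useSSL=false";
        try (Connection c = DriverManager.getConnection(url, "root", "secret")) {
            System.out.println("Connected to "
                    + c.getMetaData().getDatabaseProductVersion());
        }
    }
}
```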
Amit Sedai
(23 rep)
Sep 13, 2022, 05:44 AM
• Last activity: Dec 29, 2024, 12:03 AM
0
votes
0
answers
34
views
Connect to create a new Unity Catalog using an on-prem Postgres database connection
1. I have Databricks on the Azure platform with admin access.
2. I have a serverless SQL warehouse where I have imported some CSV data into a catalog.
3. Now I need to access Postgres data on an on-prem Linux box.
4. I need to connect to this DB from Databricks' "add connection" flow to create a new catalog.
5. I would like to use Databricks Genie to access the tables added from the Postgres DB in the catalog.
How do I proceed now?
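A sketch of the Lakehouse Federation route, with the caveats that host, credentials, and names are placeholders and that the on-prem box must be network-reachable from the workspace (VPN/ExpressRoute or similar) before any of this works:

```sql
-- Define the connection to the on-prem Postgres instance:
CREATE CONNECTION pg_onprem TYPE postgresql
OPTIONS (
  host 'onprem-host.example.com',
  port '5432',
  user 'readonly_user',
  password 'secret'
);

-- Surface one Postgres database as a new Unity Catalog catalog:
CREATE FOREIGN CATALOG pg_catalog
USING CONNECTION pg_onprem
OPTIONS (database 'mydb');
```

Once the foreign catalog exists, its tables can be queried from the SQL warehouse and picked up by Genie like any other catalog.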
malcolm richard
(1 rep)
Dec 19, 2024, 11:15 AM
• Last activity: Dec 19, 2024, 06:57 PM
1
votes
1
answers
107
views
Can I use MSSQL's Failover Partner parameter without configuring database mirroring?
I have 2 MSSQL databases that are **not** part of the same cluster, but they are synchronized manually (without database mirroring configured).
I want my client applications to try connecting to the first DB host; if that fails, they should connect to the second DB host.
I created a connection string (in Java) that looks like this:
String connectionString = "jdbc:sqlserver://<primaryHost>:1433;"
        + "database=<database>;"
        + "user=<user>;"
        + "password=<password>;"
        + "encrypt=true;"
        + "trustServerCertificate=true;"
        + "failoverPartner=<partnerHost>:1433;";
It connects correctly to the primary DB instance, but if I shut the primary down and the connection is redirected to the second DB instance, the connection is blocked (connecting to that instance as primary works).
Does the redirected connection rely on mirroring?
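For what it's worth, `failoverPartner` is documented as part of the driver's database-mirroring support, so without mirroring there is no partner handshake for the driver to fall back on. A hedged client-side alternative (hosts and credentials are placeholders) is a plain ordered retry:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

String[] hosts = {"dbhost1", "dbhost2"};
Connection con = null;
for (String host : hosts) {
    try {
        con = DriverManager.getConnection(
                "jdbc:sqlserver://" + host + ":1433;database=mydb;"
              + "user=app;password=secret;"
              + "encrypt=true;trustServerCertificate=true;loginTimeout=5;");
        break; // first host that accepts the login wins
    } catch (SQLException e) {
        System.err.println(host + " unavailable: " + e.getMessage());
    }
}
```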
joepol
(113 rep)
Dec 3, 2024, 12:35 PM
• Last activity: Dec 3, 2024, 05:26 PM
0
votes
0
answers
48
views
ORA-02292: integrity constraint (CITY_COUNTRY_FK) violated - child record found
I am trying to make all my table columns lowercase, but the following error appears:
SQL> UPDATE COUNTRY SET country_code = LOWER(country_code);
UPDATE COUNTRY SET country_code = LOWER(country_code)
*
ERROR at line 1:
ORA-02292: integrity constraint (OWNER.CITY_COUNTRY_FK) violated - child
record found
Here are my tables:
create table country(
country_code varchar2(2) PRIMARY KEY,
country nvarchar2(57) NOT NULL
);
create table city(
city_code varchar2(3) PRIMARY KEY,
country_code varchar2(3) CONSTRAINT city_country_fk references country(country_code) NOT NULL,
city varchar2(24) NOT NULL
);
Here is some sample data:
INSERT INTO COUNTRY VALUES ('US', 'United States');
INSERT INTO COUNTRY VALUES ('SR', 'Suriname');
INSERT INTO city(city_code,country_code,city) VALUES('BAF','US','Westfield');
INSERT INTO city(city_code,country_code,city) VALUES('BBB','US','Benson');
INSERT INTO city(city_code,country_code,city) VALUES('OEM','SR','Paloemeu');
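For reference, a hedged sketch of one way around the chicken-and-egg: the child rows still hold the upper-case codes, so the parent cannot change alone. Briefly disabling the FK, updating both tables, and re-validating works, provided no other session writes to these tables in the meantime.

```sql
ALTER TABLE city DISABLE CONSTRAINT city_country_fk;

UPDATE country SET country_code = LOWER(country_code);
UPDATE city    SET country_code = LOWER(country_code);

ALTER TABLE city ENABLE VALIDATE CONSTRAINT city_country_fk;
```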
Thanks
Raakh
(17 rep)
Oct 6, 2024, 07:53 PM
• Last activity: Oct 6, 2024, 07:58 PM
0
votes
0
answers
74
views
Not possible to connect to remote Oracle XE with public IP
I have a problem with my database: I cannot access it via its public IP (remotely).
I have developed a Java stock-control program, which uses JDBC to connect to the DB.
My database is Oracle Database Express Edition (XE), installed on Windows 10 to serve the stock-control program. The program works perfectly while running on the same PC (localhost), but recently I wanted to access it via a remote connection.
The steps I followed are these:
1. Asked my ISP to set up a static public IP for me.
2. Configured a forwarding port in my router.
3. Created an SSH tunnel on the server to route traffic from the forwarded port to the database port (1521).
Checks I have done:
- Reaching the opened port via external access: **OK**
- Testing that the SSH tunnel on the server is working (IP and port entered in a browser; the command line where the tunnel is debugging shows connection attempts): **OK**
- Testing SQL*Plus connecting to the remote server with the public IP: **ERROR (BUT THE SSH TUNNEL'S DEBUG CONSOLE SHOWS A CONNECTION ATTEMPT)**. ORA-12537: TNS: closed connection.
- Testing JDBC connecting from my application to the remote server with the public IP: **ERROR**. The network adapter cannot establish the connection (no connection attempt).
I have also configured DDNS in the router with No-IP, but the behavior is the same.
Does anyone have an idea about this? I have tried several times.
**UPDATE:**
After downloading the Oracle Call Interface, SQL*Plus connects, but my Java application does not show any sign of connecting.
Error:
java.sql.SQLRecoverableException: I/O Error: The Network Adapter could not establish the connection
Database connection error
Fran Casas
(1 rep)
Sep 19, 2024, 06:34 AM
• Last activity: Sep 22, 2024, 01:59 PM
0
votes
0
answers
52
views
SQL via JDBC fails on MSSQL Server with BEGIN TRAN
Running this SQL via JDBC on MSSQL works as expected:
UPDATE costcenter
SET updatetimestamp = Getdate(),
[de] = 'CostCenterName',
[en] = 'CostCenterName_en'
WHERE [customerid] = 'abcd'
AND [principal] = '1'
But running this SQL with a transaction fails:
BEGIN TRAN
IF EXISTS (SELECT [customerid],
[principal],
[companycode],
[costcenter]
FROM costcenter
WHERE [customerid] = 'abcd'
AND [principal] = '1')
BEGIN
UPDATE costcenter
SET updatetimestamp = Getdate(),
[de] = 'CostCenterName',
[en] = 'CostCenterName_en'
WHERE [customerid] = 'abcd'
AND [principal] = '1'
END
ELSE
BEGIN
INSERT INTO costcenter
(inserttimestamp,
updatetimestamp,
customerid,
principal,
companycode,
costcenter,
de,
en)
VALUES (Getdate(),
Getdate(),
'abcd',
'1',
'1',
'0',
'CostCenterName',
'CostCenterName_en')
END
COMMIT TRAN
The reported error is: Incorrect syntax near the keyword 'BEGIN'
Running this SQL in SQL Server Management Studio works, so I think it has to do with the JDBC connection.
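One hedged thing to rule out, purely a guess from the symptom (SQL Server itself accepts IF/ELSE batches sent over JDBC): an intermediate layer that splits scripts on semicolons or line breaks would hand the server fragments such as `BEGIN UPDATE ...` that are invalid on their own. Sending the whole batch through a single `Statement.execute()` call shows whether the driver path is really at fault (SQL abbreviated; `connection` is a placeholder):

```java
import java.sql.Statement;

String batch =
      "BEGIN TRAN "
    + "IF EXISTS (SELECT 1 FROM costcenter WHERE customerid = 'abcd' AND principal = '1') "
    + "    UPDATE costcenter SET updatetimestamp = GETDATE() "
    + "    WHERE customerid = 'abcd' AND principal = '1' "
    + "ELSE "
    + "    INSERT INTO costcenter (inserttimestamp, customerid, principal) "
    + "    VALUES (GETDATE(), 'abcd', '1') "
    + "COMMIT TRAN";

try (Statement st = connection.createStatement()) {
    st.execute(batch); // one round trip, no client-side splitting
}
```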
Tobias G.
(1 rep)
Sep 6, 2024, 10:31 AM
14
votes
2
answers
8445
views
To what extent is MariaDB Java client a perfect replacement for MySQL JDBC (Connector/J)?
**Premise**: I ship a Java application which will make a JDBC connection to a MySQL database. (At the time of this writing it could be 5.1 or 5.5, but hopefully the question and answer don't rely heavily on the version.)
**Observation**: Clearly, I can use either the MySQL JDBC driver (Connector/J) or the MariaDB JDBC driver (MariaDB Java Client) to make the connection to the MySQL server. One huge benefit from the standpoint of an ISV is that the MariaDB driver is LGPL-licensed while the MySQL driver is GPL-licensed. Performance of the MariaDB driver seems very good.
**Question**: To what extent is the MariaDB JDBC driver a perfect replacement for the MySQL JDBC driver?
I'm not looking for a debate on the relative merits of the two. But rather I'm hoping someone can answer along the lines of one of these:
- The MariaDB JDBC driver has important issues with X and Y, so you'll have troubles if you use either of these.
- Based on some article or specification or code or other evidence, the MariaDB JDBC driver fully implements [at least] all methods available in the MySQL driver. You'll be fine.
mdahlman
(357 rep)
Jun 16, 2013, 06:58 AM
• Last activity: Apr 26, 2024, 10:39 PM
1
votes
0
answers
210
views
Writing large dataset from spark dataframe
We have an Azure Databricks job that retrieves a large dataset with PySpark. The dataframe has about 11 billion rows. We are currently writing this out to a PostgreSQL DB (also in Azure), using the JDBC connector to write rows out in batches to the existing table (batch size 10,000,000). This table has a handful of indexes on it, so inserts take a while; the whole operation takes dozens of hours to complete (assuming it finishes successfully at all).
I feel like it would make more sense to use `COPY` to load the data into the database, but I don't see any well-established patterns for doing that in Databricks.
I don't have a ton of Spark or Databricks experience, so any tips are appreciated.
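A sketch of one commonly used pattern, with the caveats that host, credentials, table, and columns are placeholders, that psycopg2 must be available on the workers, and that dropping and rebuilding the indexes around the load is usually a bigger win than the protocol change itself: stream each partition into Postgres with `COPY` instead of row-by-row inserts.

```python
import csv
import io

import psycopg2


def copy_partition(rows):
    """Serialize one Spark partition to CSV in memory and COPY it in."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
    buf.seek(0)
    conn = psycopg2.connect(host="pg-host", dbname="mydb",
                            user="loader", password="secret")
    try:
        with conn, conn.cursor() as cur:
            cur.copy_expert(
                "COPY my_table (col1, col2) FROM STDIN WITH (FORMAT csv)",
                buf)
    finally:
        conn.close()


df.rdd.foreachPartition(copy_partition)
```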
Kyle Chamberlin
(13 rep)
Feb 16, 2024, 12:57 AM