
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1208 views
Delete backup directories using RMAN
Is there any way RMAN can delete empty backup directories?
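As far as I know, RMAN's DELETE commands remove backup pieces but never the directories that held them, so an OS-level sweep is the usual workaround; a minimal sketch, assuming a Linux host and a hypothetical /backup base path:

```
# Remove now-empty directories under the backup root (run after RMAN DELETE)
find /backup -mindepth 1 -type d -empty -delete
```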
Rajat Saxena (75 rep)
Jun 20, 2013, 07:32 AM • Last activity: Aug 5, 2025, 01:06 AM
0 votes
1 answer
3143 views
Oracle TDE - opening/closing an encryption wallet
I have a quick question relating to Oracle TDE. Could somebody please explain why both of the following pairs of commands appear to work when opening/closing an encryption wallet? Is the wallet password actually needed for this or not? If not, when exactly do we need to use the password? Many thanks.

administer key management set keystore close identified by "";
administer key management set keystore open identified by "";
administer key management set keystore close identified by "null";
administer key management set keystore open identified by "null";
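One way to see why any password string is accepted here is to check how the keystore is configured; with an auto-login keystore, OPEN and CLOSE succeed regardless of the password supplied, while the real password is still needed for key operations such as rekeying. A minimal check using the standard dynamic view:

```
-- WALLET_TYPE = AUTOLOGIN means no password is needed just to open/close it
SELECT wrl_type, wallet_type, status FROM v$encryption_wallet;
```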
Franco (51 rep)
Jun 30, 2021, 04:09 PM • Last activity: Aug 3, 2025, 08:04 PM
0 votes
1 answer
1305 views
Connection rejected based on ACL filtering, but the ACL is disabled?
I'm trying to connect to an Oracle Cloud database through DataGrip (with JetBrains's instructions) but I'm getting an error:
DBMS: Oracle (no ver.)
Case sensitivity: plain=mixed, delimited=exact
 ORA-12506: TNS:listener rejected connection based on service ACL filtering.
However, the ACL is clearly disabled: [screenshot: Oracle Cloud dashboard showing the access control list is disabled]. What could be causing this error, and how can I fix it?
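If this is an Autonomous Database, the listener-side ACL check is separate from the dashboard toggle, and in my experience this error often just means the client isn't connecting the way the service expects (typically mTLS with the downloaded wallet). A hedged sketch of a wallet-based JDBC URL for DataGrip — the alias and path are hypothetical, and recent Oracle JDBC drivers accept TNS_ADMIN as a URL parameter:

```
jdbc:oracle:thin:@mydb_high?TNS_ADMIN=/path/to/unzipped/wallet
```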
andyinnie (1 rep)
Nov 21, 2023, 01:57 AM • Last activity: Aug 3, 2025, 04:06 PM
4 votes
2 answers
643 views
RMAN-06169, but only when backing up via OEM 13c
When running a Level 1 or Level 0 backup via OEM 13c, the following RMAN error causes the run to fail. (The RMAN script is being run with proper SYSDBA credentials from OEM 13c.)

RMAN-06169: could not read file header for datafile 2 error reason 5

Oracle documentation suggests reason 5 indicates 'Unable to open file'. However, if I execute the exact same RMAN statement directly on the server, the backup runs fine. We removed the database from OEM and decommissioned the agent, then redeployed the agent and reinstated the database in OEM, but the error still occurs.
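Since reason 5 means the header can't be opened, one useful check is to ask the instance directly which datafile headers are unreadable in the environment the OEM job runs in (a permission difference between the agent's OS user and the interactive user is a plausible suspect); a small sketch, run as SYSDBA:

```
-- Any non-null ERROR here marks a datafile whose header cannot be read
SELECT file#, name, status, error
FROM   v$datafile_header
WHERE  error IS NOT NULL;
```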
ca_elrod (61 rep)
Aug 7, 2018, 06:18 PM • Last activity: Aug 3, 2025, 02:06 AM
0 votes
1 answer
2095 views
Some view to find out which jobs are in the queue to be executed?
An environment starts several refreshes throughout the day. [screenshot: https://i.sstatic.net/WSXue.png] If I change job_queue_processes to 20, for example, several new jobs start. Is there a view I can query to see which jobs are waiting and would be executed if I change job_queue_processes? Note: the jobs were created with dbms_job.
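For dbms_job jobs, one way to approximate the pending queue is to compare the jobs that are due against the ones already executing; a sketch:

```
-- Jobs that are due to run but not currently executing
SELECT j.job, j.what, j.next_date
FROM   dba_jobs j
WHERE  j.broken = 'N'
  AND  j.next_date <= SYSDATE
  AND  j.job NOT IN (SELECT r.job FROM dba_jobs_running r);
```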
Astora (841 rep)
Mar 17, 2021, 02:30 AM • Last activity: Aug 1, 2025, 11:05 PM
0 votes
2 answers
95 views
Cannot create MATERIALIZED VIEW with REFRESH ON COMMIT on query with multiple joins
In Oracle, I have the following tables (ugly pseudo-SQL below):

CLIENT (CLIENT_ID NUMBER NOT NULL /*PK*/, GROUP_ID NUMBER NOT NULL)
GROUP (GROUP_ID NUMBER /*PK*/, GROUP_NAME VARCHAR2)
GROUP_DATA (GROUP_ID NUMBER /*PK*/, COUNTRY_ID VARCHAR2)

and I am trying to create the following materialized view:

CREATE MATERIALIZED VIEW MV_CLIENT
REFRESH COMPLETE ON COMMIT
AS
SELECT CLIENT.CLIENT_ID, GROUP.GROUP_NAME, GROUP_DATA.COUNTRY_ID
FROM CLIENT
INNER JOIN GROUP ON GROUP.GROUP_ID = CLIENT.GROUP_ID
INNER JOIN GROUP_DATA ON GROUP_DATA.GROUP_ID = CLIENT.GROUP_ID

I am getting the following error:

ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

I do not understand the issue, as GROUP_ID is guaranteed to exist only once in each of the GROUP and GROUP_DATA tables. I feel like tracking the updates (especially for REFRESH COMPLETE) should not be a problem. Note that creating the materialized view on CLIENT+GROUP or CLIENT+GROUP_DATA works, but CLIENT+GROUP+GROUP_DATA does not. I have tried creating the MV logs and changing COMPLETE to FAST, but the error stays the same. I know that a solution would be to merge GROUP and GROUP_DATA into a single table, but that is not the solution I need. So my question is: is this an Oracle limitation, or am I missing something? And if it is a limitation, does anyone know the rationale for it? Thanks!
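A useful diagnostic here is DBMS_MVIEW.EXPLAIN_MVIEW, which reports exactly which refresh capability fails and why; a sketch (the MV_CAPABILITIES_TABLE is created by the utlxmv.sql script shipped with the database):

```
@?/rdbms/admin/utlxmv.sql
EXEC DBMS_MVIEW.EXPLAIN_MVIEW('MV_CLIENT');

SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  capability_name LIKE 'REFRESH%';
```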
vpi (21 rep)
May 5, 2025, 11:56 AM • Last activity: Aug 1, 2025, 06:28 PM
8 votes
3 answers
150 views
Changes access method for non-correlated subquery
Oracle 11g R2. Unfortunately our application has per-row security "*features*". We have a query that looks about like this:

**Bad, slow:**

SELECT someRow, someOtherRow
FROM bigTableA a
WHERE EXISTS (
  SELECT 0 FROM bigTableA_securitymapping b
  WHERE b.PrimaryKeyTableA = a.PrimaryKeyTableA
  AND b.accesscode IN (SELECT accesscode FROM accesscodeView WHERE user = :someUserID)
)

There is a unique index on bigTableA_securitymapping(PrimaryKeyTableA, accesscode). The accesscodeView could potentially return more than one accesscode for a given user, so it must be IN() and not =. The issue is that this query ignores the unique index on bigTableA_securitymapping and chooses to do a full table scan. If I change the IN() to an =, it does a UNIQUE SCAN on the unique index and is about 50 times faster.

**Good, fast but not possible:**

SELECT someRow, someOtherRow
FROM bigTableA a
WHERE EXISTS (
  SELECT 0 FROM bigTableA_securitymapping b
  WHERE b.PrimaryKeyTableA = a.PrimaryKeyTableA
  AND b.accesscode = (SELECT DISTINCT accesscode FROM accesscodeView WHERE user = :someUserID)
)

But I cannot do that because the accesscodeView may return more than one row. (There's a DISTINCT in there because the accesscodeView needs it given the =; putting the DISTINCT on the original query makes no difference.) If I hardcode the accesscodes, it also does a UNIQUE SCAN on the unique index.

**Good, fast but requires a large application change:**

SELECT someRow, someOtherRow
FROM bigTableA a
WHERE EXISTS (
  SELECT 0 FROM bigTableA_securitymapping b
  WHERE b.PrimaryKeyTableA = a.PrimaryKeyTableA
  AND b.accesscode IN (1,2,3,4)
)

Changing to a join inside doesn't really help either; it still does a full table scan.

**Bad, slow:**

SELECT someRow, someOtherRow
FROM bigTableA a
WHERE EXISTS (
  SELECT 0 FROM accesscode ac
  INNER JOIN bigTableA_securitymapping b ON ac.accesscode = b.accesscode
  WHERE b.PrimaryKeyTableA = a.PrimaryKeyTableA
  AND user = :someUserID
)

So why the difference between = and IN()? And why does a non-correlated subquery (the accesscodeView subquery) cause such a plan difference? Is there any way to rewrite it to do what I want? The difference between the 'good plan' and 'bad plan' costs here is 87 vs 37,000, plus a large amount of real runtime, for the same results.
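One rewrite worth trying (a sketch, not a guaranteed fix): materialize the small set of access codes up front so the optimizer treats it as a driving row source instead of a subquery to probe per row. The MATERIALIZE hint is a well-known but undocumented Oracle hint, and the column references mirror the question's schema as written:

```
WITH codes AS (
  SELECT /*+ MATERIALIZE */ accesscode
  FROM   accesscodeView
  WHERE  user = :someUserID
)
SELECT someRow, someOtherRow
FROM   bigTableA a
WHERE  EXISTS (SELECT 0
               FROM   bigTableA_securitymapping b
               JOIN   codes c ON c.accesscode = b.accesscode
               WHERE  b.PrimaryKeyTableA = a.PrimaryKeyTableA);
```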
rfusca (1569 rep)
Mar 11, 2014, 07:28 PM • Last activity: Aug 1, 2025, 06:13 PM
0 votes
1 answer
1577 views
Importing huge table exhausts UNDO extents in Oracle RDS (ORA-01628)
I'm attempting to do an impdp on RDS, Oracle 12c. I'm importing only one table for this particular impdp job, but every time I try, UNDO usage gets to about 50% and then the logs just say: Resumable error: ORA-01628: max # extents (32765) reached for rollback segment. Since this is RDS, I cannot manually manage undo. I created a fresh RDS instance with a new 4 TB UNDO tablespace to perform the import of just this table. I've read about creating one giant rollback segment and also about creating lots of small rollback segments to solve this problem. I've also read that I could split the import into multiple parts, but I'd rather not do that if possible. Is there anything more I can do here to stop the UNDO tablespace from running out of extents?
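Short of splitting the job, undo generation during the load itself can often be reduced by deferring index and constraint creation, since index maintenance is usually the big undo consumer on a single-table import. A hedged sketch — connection, file, and table names are hypothetical, and TRANSFORM=DISABLE_ARCHIVE_LOGGING requires 12c:

```
impdp admin@myrds DIRECTORY=DATA_PUMP_DIR DUMPFILE=bigtable.dmp \
  TABLES=MYSCHEMA.BIG_TABLE \
  EXCLUDE=INDEX,CONSTRAINT,STATISTICS \
  TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```

The indexes and constraints can then be built afterwards in separate steps.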
user3150146 (1 rep)
Nov 12, 2020, 01:35 PM • Last activity: Aug 1, 2025, 12:07 PM
0 votes
1 answer
144 views
SQL Developer MySQL to Oracle migration auto_increment and identity columns
OK, so in learning about migrating from MySQL 8 to Oracle 19 I have run into an issue. My tables in MySQL have AUTO_INCREMENT on the primary key. When I walk through the migration and get my master.sql script, which has all of my metadata for object creation, I have noticed that the DDL for the tables starts the identity column at 0 rather than at the AUTO_INCREMENT value from MySQL. This causes a problem when inserting data if the number of rows in the Oracle table is less than the identity column's base value. I used GoldenGate to insert the data, and the system-generated sequence behind the identity column only gets bumped up once per insert. How can I get the correct value when using SQL Developer to do a migration? I have a workaround in PL/SQL but would like to get the correct value from the migration process. Thanks.
CREATE TABLE my_table (
  id NUMBER(10,0) GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH 0 INCREMENT BY 1 MINVALUE 0 NOMAXVALUE ,
  created DATE NOT NULL,
  modified DATE NOT NULL,
  user_id NUMBER(10,0) NOT NULL
);
should be:

CREATE TABLE my_table (
  id NUMBER(10,0) GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH 100 INCREMENT BY 1 MINVALUE 0 NOMAXVALUE,
  created DATE NOT NULL,
  modified DATE NOT NULL,
  user_id NUMBER(10,0) NOT NULL
);
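If the tables are already created, a hedged alternative to regenerating the DDL is to reset each identity after the data load with the LIMIT VALUE clause (Oracle 12c+), which moves the underlying sequence just past the current maximum of the column:

```
ALTER TABLE my_table MODIFY id
  GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH LIMIT VALUE;
```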
cptkirkh (13 rep)
May 2, 2024, 04:04 PM • Last activity: Jul 31, 2025, 09:03 AM
1 votes
1 answer
9461 views
ORA-01109 Error (Database Not Open), But It's Open in SQL Plus - Oracle SQL Developer
I'm getting the error "ORA-01109: database not open" in Oracle SQL Developer, with Oracle Database 19c and SQL Developer 21.2.1 (Windows 64-bit). I've gone into SQL*Plus to open the database, used commands like ALTER DATABASE OPEN, and shut down/restarted the instance. The status was originally "MOUNTED", but it now shows "OPEN" when I run select instance_name, status from v$instance; (instance_name is listed as "orcl"). However, I still get the same ORA-01109 error when opening my database "CS4347" in Oracle SQL Developer; it occurs when I try to expand the database or connect to it. How can I open this database in Oracle SQL Developer?
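In 19c, a connection named after a specific database (here CS4347) is quite possibly a pluggable database, and a PDB can still be MOUNTED even though the instance itself reports OPEN; a sketch worth trying as SYSDBA in the root container:

```
SHOW PDBS
ALTER PLUGGABLE DATABASE CS4347 OPEN;
ALTER PLUGGABLE DATABASE CS4347 SAVE STATE;  -- reopen automatically on restart
```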
kike
Dec 1, 2021, 06:34 PM • Last activity: Jul 31, 2025, 05:54 AM
0 votes
1 answer
379 views
error writing input to commandERROR: Invalid username and/or password
This is the error I have been getting in my RMAN notifications for my backup jobs for the last day or so. The title is exactly how the message is formatted.

> ~~~End Step Output/Error Log~~~
> Error Log:BackupOp
> error writing input to commandERROR: Invalid username and/or password
>
> ~~~End Step Output/Error Log~~~

I only have 2 backup jobs and both are affected. Not sure how much information here is useful, but what I have checked is that the SYS account the jobs use is not locked out:

select account_status from dba_users where username = 'SYS';

ACCOUNT_STATUS
--------------------------------
OPEN

The emagent.trc log shows this pair of lines for every backup job since the first one failed:

2019-02-06 23:44:26,791 Thread-14472 ERROR command: failed to write to stdin of process 6228: (error=232, no message available)
2019-02-06 23:44:26,791 Thread-14472 ERROR command: nmejcc_feedInput: received -11 from process write

Restarting the DBConsole service has no effect and the errors persist. I can manually run full and archivelog backups just fine as SYSDBA, so I see no reason to blame RMAN itself. I think this is just an EM issue.
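Since DB Console logs in with its own stored credentials, one quick check is whether any of the accounts it relies on have an expired password — an expired password also produces "Invalid username and/or password" even while ACCOUNT_STATUS shows the account usable; a sketch:

```
SELECT username, account_status, expiry_date
FROM   dba_users
WHERE  username IN ('SYS', 'SYSTEM', 'DBSNMP', 'SYSMAN');
```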
Matt (365 rep)
Feb 7, 2019, 05:05 PM • Last activity: Jul 30, 2025, 10:04 PM
1 votes
1 answer
1121 views
Oracle SQL Developer: Copy paste tables, with 2 different instances, with different table structure
Here I have two different instances, one called DEV and one called SIT2. I created a public database link called DBLINKSIT2 (basically a bridge between DEV and SIT2), and I need to copy (back up) all the tables from DEV to SIT2, with additional filtering and a join to another table called LKUP.CTL_RWA_VERSION. Below is the script I have, running in DEV:

begin
  for r in (select DISTINCT TABLE_NAME from all_tab_columns where owner = 'DDSHIST' and COLUMN_NAME = 'SNAPSHOT_DT')
  loop
    begin
      execute immediate 'INSERT INTO ||r.table_name|| @DBLINKSIT2 select a.* from DDSHIST.||r.table_name|| a INNER JOIN LKUP.CTL_RWA_VERSION b ON a.SNAPSHOT_DT = b.SNAPSHOT_DT and a.DDS_VERSION = b.DDS_VERSION WHERE b.GOLDEN_COPY = 'N'';
    exception when others then null;
    end;
  end loop;
end;

I filter on COLUMN_NAME = 'SNAPSHOT_DT' because some of the tables do not contain this column. The join condition is that both SNAPSHOT_DT values match and both DDS_VERSION values match, filtered on the golden-copy flag in the LKUP table; the loop then inserts into each table over @DBLINKSIT2. But I can't get the script to run and I don't know where I am getting this wrong. Any help would be appreciated. Thank you.
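For what it's worth, two things in the posted block stop it from ever running: the concatenation operators sit inside the string literal, and the 'N' needs doubled quotes; the WHEN OTHERS THEN NULL handler also swallows every error, which is why nothing surfaces. A corrected sketch:

```
begin
  for r in (select distinct table_name
            from   all_tab_columns
            where  owner = 'DDSHIST'
            and    column_name = 'SNAPSHOT_DT')
  loop
    execute immediate
      'INSERT INTO ' || r.table_name || '@DBLINKSIT2'
      || ' SELECT a.* FROM DDSHIST.' || r.table_name || ' a'
      || ' INNER JOIN LKUP.CTL_RWA_VERSION b'
      || ' ON a.SNAPSHOT_DT = b.SNAPSHOT_DT'
      || ' AND a.DDS_VERSION = b.DDS_VERSION'
      || ' WHERE b.GOLDEN_COPY = ''N''';
  end loop;
end;
/
```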
Chun (11 rep)
Aug 5, 2016, 07:35 AM • Last activity: Jul 30, 2025, 11:06 AM
1 votes
1 answer
4251 views
How to reclaim unused space in SYSAUX tablespace in Oracle?
I have a 6 GB file for the SYSAUX tablespace in an 18c XE Oracle database. The SYSAUX tablespace has 61% free space:

select fs.tablespace_name as Tablespace,
       (df.totalspace - fs.freespace) as Used_MB,
       fs.freespace as Free_MB,
       df.totalspace as Total_MB,
       round(100 * (fs.freespace / df.totalspace)) as Percentage_Free
from (select tablespace_name, round(sum(bytes) / 1048576) TotalSpace
      from dba_data_files group by tablespace_name) df,
     (select tablespace_name, round(sum(bytes) / 1048576) FreeSpace
      from dba_free_space group by tablespace_name) fs
where df.tablespace_name = fs.tablespace_name
and df.tablespace_name = 'SYSAUX';

[screenshot of the query result]

So I would like to reclaim the 61% free space if possible.

Attempt 1:

ALTER DATABASE DATAFILE '...\XE\SYSAUX01.DBF' RESIZE 5000M;

Error: ORA-03297: file contains used data beyond requested RESIZE value

Attempt 2:

ALTER TABLESPACE SYSAUX SHRINK SPACE KEEP 5000M;

Error: ORA-12916: cannot shrink permanent or dictionary managed tablespace

Attempt 3:

ALTER TABLESPACE sysaux RESIZE 5000M;

Error: ORA-32773: operation not supported for smallfile tablespace SYSAUX

Does anyone know how to reclaim this space, please? Thanks.
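Before resizing, it usually pays to see what actually lives in SYSAUX, since free space is often fragmented below segments sitting at the high-water mark — AWR and the optimizer statistics history are the usual culprits; a sketch:

```
SELECT occupant_name, schema_name, space_usage_kbytes
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;
```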
user32026 (141 rep)
Sep 6, 2021, 05:16 PM • Last activity: Jul 29, 2025, 09:04 PM
1 votes
1 answer
956 views
ORA-01653 Unable to Extend Table AGILE.A_DW_TXN_LOG BY 128 in Tablespace AGILE_DATA3
I am trying to import an Oracle database from a dump file. While importing I am getting:

> ORA-01653: unable to extend table AGILE.A_DW_TXN_LOG by 128 in tablespace AGILE_DATA3

I have tried to fetch information about the tablespace with this query, but I do not understand what these columns describe:

SELECT * FROM DBA_DATA_FILES WHERE Tablespace_name = 'AGILE_DATA3';

FILE_NAME:       D:\APP\ORADATA\AG934\AGILE_DATA301AG934.ORA
FILE_ID:         11
TABLESPACE_NAME: AGILE_DATA3
BYTES:           26791116800
BLOCKS:          3270400
STATUS:          AVAILABLE
RELATIVE_FNO:    11
AUTOEXTENSIBLE:  YES
MAXBYTES:        34359721984
MAXBLOCKS:       4194302
INCREMENT_BY:    1280
USER_BYTES:      26789019648
USER_BLOCKS:     3270144
ONLINE_STATUS:   ONLINE

I looked at this link, but as I look at my table I find auto extension already enabled. What am I not doing correctly here? I am using the Data Pump utility available in SQL Developer to import the dump file. I also assumed that importing the dump file overwrites all of the information already present in the database. Is that correct?
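Reading the numbers above, the file (BYTES ≈ 25 GB) is already within about 7 GB of its autoextend ceiling (MAXBYTES ≈ 32 GB, the smallfile limit with an 8K block size), which a large import can easily exhaust. A hedged sketch of one fix, adding a second datafile to the tablespace (the path is hypothetical):

```
ALTER TABLESPACE AGILE_DATA3
  ADD DATAFILE 'D:\APP\ORADATA\AG934\AGILE_DATA302AG934.ORA'
  SIZE 1G AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;
```

On the second question: as far as I know an import does not overwrite what is already there; by default Data Pump skips or appends to existing tables depending on TABLE_EXISTS_ACTION.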
Muhammad Asim
Jun 25, 2020, 01:40 PM • Last activity: Jul 29, 2025, 12:46 PM
0 votes
4 answers
1176 views
Move new data records automatically to another database in oracle xe
What's the best way to move all the data from a table in my local Oracle database to an identical table on another PC? I have to transfer the data to the second database as soon as it's written to the local one. The local table should not contain any data unless it's not possible to insert it into the remote database: if the connection is lost, all new rows must be stored in the local table. I tried to solve this using a trigger, but it didn't work as expected. It works just fine while the remote database connection is valid, but it performs an entire rollback (including the original insert) if the connection is lost. Because of that, the data isn't even inserted into the local database. Another huge problem is that it takes about 40 seconds every time to return ORA-12170 (connection timeout). Is there any way to set a much shorter timeout, or to abort the statement if it takes that much time?

create or replace TRIGGER DATA_TO_SERVER
AFTER INSERT ON LOCAL_TABLE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  SAVEPOINT sp;
  INSERT INTO SERVER.SERVER_TABLE@SERVER_LINK SELECT * FROM LOCAL_TABLE;
  DELETE FROM LOCAL_TABLE WHERE ID IS NOT NULL;
  COMMIT;
EXCEPTION WHEN OTHERS THEN
  ROLLBACK to sp;
  RAISE;
END;
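On the timeout question: the ~40-second wait is the TCP connection attempt timing out, and it can be capped on the client side — here, the local database acting as the DB link client — in sqlnet.ora. A hedged sketch using the two standard parameters (values are illustrative):

```
# sqlnet.ora on the local database host
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 3
TCP.CONNECT_TIMEOUT = 3
```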
Gandalf The Gay (11 rep)
Feb 19, 2018, 12:38 PM • Last activity: Jul 27, 2025, 07:07 PM
0 votes
1 answer
1738 views
Operation not allowed from within a pluggable database Pre-Built Developer VMs for Oracle VM VirtualBox
I downloaded the **Pre-Built Developer VM for Oracle VM VirtualBox**, which contains Oracle Database 12c Enterprise Edition Release 12.2.0.1.0. I want to **back up the database using RMAN**, but it is in NOARCHIVELOG mode, so it is necessary to shut down and change the database to ARCHIVELOG mode. However, when I try to alter the database to ARCHIVELOG mode, I get an error saying: operation not allowed from within a pluggable database. An operation was attempted that can only be performed in the root container. [screenshot of the error]
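ARCHIVELOG mode is a property of the whole CDB, so the switch has to be done from the root container rather than from the pluggable database the VM logs you into; a sketch of the usual sequence, connected as SYSDBA:

```
ALTER SESSION SET CONTAINER = CDB$ROOT;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ALTER PLUGGABLE DATABASE ALL OPEN;
```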
Tomas A (3 rep)
Jan 25, 2023, 12:07 PM • Last activity: Jul 27, 2025, 04:05 PM
0 votes
1 answer
442 views
Can an Oracle dbca template be used across versions
If you have a dbca template to create a database for version 12.1, will it also work for creating a database on 12.2 binaries? Will it also work for creating a database on 11g binaries?
exit_1 (207 rep)
Jun 28, 2017, 01:20 PM • Last activity: Jul 27, 2025, 03:09 PM
0 votes
1 answer
2855 views
ORA-31626: job does not exist during schema export
I'm exporting a schema with the following command:

expdp system/system@xxx.xxx.xxx.xxx:1521/orcl schemas=schema-name directory=test_dir dumpfile=Schema.dmp logfile=SchemaLog.log

but it results in the following errors:

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
ORA-31626: job does not exist
ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"
ORA-06512: at "SYS.KUPV$FT", line 1009
ORA-04063: package body "SYS.DBMS_LOGREP_UTIL" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOGREP_UTIL"

I googled a lot and tried the solutions suggested for ORA-31626 (job does not exist) and ORA-04063 (package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors), but none solved the problem. Could you please help me resolve this?
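Given that the stack points at invalid SYS package bodies rather than Data Pump itself, a reasonable first step is to recompile invalid objects as SYSDBA and re-check before retrying the export; a sketch:

```
@?/rdbms/admin/utlrp.sql

SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
AND    owner = 'SYS';
```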
Alpha (153 rep)
Dec 30, 2020, 08:58 AM • Last activity: Jul 27, 2025, 12:06 PM
0 votes
1 answer
335 views
SQL query output to a file is failing with error code 9 when the data contains French and English characters with size > 32k
Database: Oracle 12c. I have a MESSAGE table with BODY VARCHAR2(32000) and METADATA VARCHAR2(32000). Initially I had a problem when both columns held 32k + 32k characters of data; I fixed it by applying the TO_CLOB function, and now I can write the output to the file without truncation. However, this works only when: 1. all characters are English (e.g., a result record of around 64k characters), or 2. the record mixes French and English characters and stays under 32k characters. When a mixed French/English record exceeds 32k, my query execution fails with error code 9. Please help me fix this issue without truncating the data. Below is my Unix script:
export PATH
export LD_LIBRARY_PATH
export ORACLE_ACCESS
#export NLS_LANG=.AL32UTF8
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

# Note: the heredoc redirection and the first part of the SELECT were
# garbled in the original post; the lines below reconstruct them from the
# surrounding description (the spool file name is a placeholder).
output=$(sqlplus -S "$ORACLE_ACCESS" <<EOF
    SPOOL output.txt
    SELECT TO_CLOB(BODY) || TO_CLOB(METADATA)
    FROM MESSAGE
    WHERE (CREATE_TS >= TO_DATE('01-Jan-2025','DD-MON-YYYY') and CREATE_TS < TO_DATE('01-Jan-2025','DD-MON-YYYY')+1) ORDER BY ROWID;
    SPOOL OFF
    exit;
EOF
)
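Since the failure only appears once multibyte French characters push a row past 32k, the usual suspects are SQL*Plus's output limits rather than the query itself; a hedged sketch of settings to place before the SPOOL (all standard SQL*Plus options, values illustrative):

```
SET LONG 2000000
SET LONGCHUNKSIZE 2000000
SET LINESIZE 32767
SET TRIMSPOOL ON
```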
Raj (1 rep)
Apr 6, 2020, 05:13 PM • Last activity: Jul 26, 2025, 10:03 PM
-1 votes
1 answer
440 views
Oracle Single table with json vs set of tables using joins
I'm building a reporting solution. The data will be stored in an Oracle database. I expect close to several billion rows, since I have to keep data for a one-year period. When designing the database schema I faced the choice of a single table versus a set of tables: 1. A single table in the format (ID, DATETIME, JSON), with all the data dumped as JSON in the JSON column; this avoids any joins and also caters for future event-format changes. 2. A set of tables (4-6) with the data divided among them; queries will use several joins and several unions, and the format will be hard to change. Which would be the better approach in terms of performance? For a large data set, how efficient is JSON compared to joins?
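For the single-table option, Oracle 12c already lets you constrain and index inside the JSON, which narrows the performance gap with the normalized design for targeted queries; a hedged sketch (table and field names hypothetical):

```
CREATE TABLE report_events (
  id       NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  event_ts DATE NOT NULL,
  payload  CLOB CONSTRAINT report_events_json CHECK (payload IS JSON)
);

-- Function-based index on one frequently filtered JSON attribute
CREATE INDEX report_events_type_ix
  ON report_events (JSON_VALUE(payload, '$.eventType'));
```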
Viraj (331 rep)
Jun 25, 2020, 11:33 AM • Last activity: Jul 26, 2025, 05:06 AM