
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1578 views
Importing huge table exhausts UNDO extents in Oracle RDS (ORA-01628)
I'm attempting to do an impdp on RDS, Oracle 12c. I'm importing only one table for this particular impdp job, but every time I try to import it, UNDO usage gets to about 50% and then the logs just say `Resumable error: ORA-01628: max # extents (32765) reached for rollback segment`. Since this is RDS I cannot manually manage undo.

I created a fresh RDS instance with a new 4TB UNDO tablespace to perform the import of just this table. I've read about creating one giant rollback segment and also about creating lots of small rollback segments to solve this problem. I've also read I can split the import into multiple parts, but I'd rather not do that if possible.

Is there anything more I can do here to maybe stop the UNDO tablespace from running out of extents?
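A hedged aside for readers hitting the same wall: one knob sometimes tried here is the documented impdp parameter `ACCESS_METHOD=DIRECT_PATH`, which asks Data Pump to prefer a direct-path load and so keep undo generation for the row data itself to a minimum; whether it helps depends on why the load is producing so much undo in the first place. The connect string, directory, and table name below are placeholders.

```
impdp admin@myrdsdb TABLES=APP.BIG_TABLE DIRECTORY=DATA_PUMP_DIR DUMPFILE=big_table.dmp LOGFILE=big_table_imp.log ACCESS_METHOD=DIRECT_PATH
```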
user3150146 (1 rep)
Nov 12, 2020, 01:35 PM • Last activity: Aug 1, 2025, 12:07 PM
1 vote
1 answer
956 views
ORA-01653 Unable to Extend Table AGILE.A_DW_TXN_LOG BY 128 in Tablespace AGILE_DATA3
I am trying to import an Oracle database from a dump file. While importing I am getting

> ORA-01653 Unable to Extend Table AGILE.A_DW_TXN_LOG BY 128 in Tablespace AGILE_DATA3

I have tried to fetch information about the tablespace with this query, and I am not able to understand what these columns hold:

SELECT * FROM DBA_DATA_FILES WHERE Tablespace_name = 'AGILE_DATA3';

|FILE_NAME|FILE_ID|TABLESPACE_NAME|BYTES|BLOCKS|STATUS|RELATIVE_FNO|AUTOEXTENSIBLE|MAXBYTES|MAXBLOCKS|INCREMENT_BY|USER_BYTES|USER_BLOCKS|ONLINE_STATUS|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|D:\APP\ORADATA\AG934\AGILE_DATA301AG934.ORA|11|AGILE_DATA3|26791116800|3270400|AVAILABLE|11|YES|34359721984|4194302|1280|26789019648|3270144|ONLINE|

I looked up at this link, but as I look into my table I find auto extension already enabled. What is it I am not doing correctly here? I am using the Data Pump utility available in SQL Developer to import the dump file. Also, I hope that when we import the dump file it actually overwrites all of the information already present in the database. Is that correct?
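A note on reading those columns, since that is what the question asks: BYTES is the datafile's current size (about 25 GB here) and MAXBYTES is the ceiling AUTOEXTEND may grow it to; 34359721984 bytes is effectively the ~32 GB maximum a smallfile datafile with an 8 KB block size can ever reach, so autoextend cannot carry the file past that point. If the import needs more room than that, one common remedy is to add another datafile to the tablespace; the file name below is a made-up example.

```
-- Add a second datafile so AGILE_DATA3 can grow beyond the first file's ceiling
ALTER TABLESPACE AGILE_DATA3
  ADD DATAFILE 'D:\APP\ORADATA\AG934\AGILE_DATA302AG934.ORA'
  SIZE 1G AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;
```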
Muhammad Asim
Jun 25, 2020, 01:40 PM • Last activity: Jul 29, 2025, 12:46 PM
0 votes
1 answer
245 views
Unable to use data pump utility in Oracle to import data to another server
I am using this command to import data from one server to another:

impdp atif/atif@192.168.67.68:1521/serv1 DIRECTORY=DATA_PUMP_DIR DUMPFILE=mltpltblbkp.dmp LOGFILE=mltpltblbkp.log

This is my data_pump_dir path: /u01/app/oracle/admin/serv1/dpdump/

I have granted these privileges to the user:

grant datapump_exp_full_database to atif;
grant datapump_imp_full_database to atif;
grant read, write on directory data_pump_dir to system;

When I run the command, I get the following errors:

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/u01/app/oracle/admin/serv1/dpdump/mltpltblbkp.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7

Can someone help here?
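Worth keeping in mind when reading that ORA-31640/ORA-27037 pair: the dump file has to exist on the filesystem of the database server that runs the import job (the 192.168.67.68 instance here), inside the path its DATA_PUMP_DIR object points to; impdp never reads the file from the client machine it is launched from. A quick sanity check against the standard dictionary view on the target:

```
-- Where does DATA_PUMP_DIR point on the database that runs the job?
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'DATA_PUMP_DIR';
```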
dbafix (64 rep)
May 10, 2023, 05:45 PM • Last activity: May 29, 2025, 10:05 AM
0 votes
1 answer
2885 views
Oracle 11 IMPDP Table Exists but gets Does Not Exist Error
I'm running into a weird problem with IMPDP and Oracle DB version 11.2.0.4.0 where I have a table in the destination schema that does indeed exist, but IMPDP thinks it doesn't. See this entry from the import log file:

> Starting "EXPORT_USER"."SYS_IMPORT_TABLE_01": export_user/******** directory=EXPORT_DIRECTORY dumpfile=SMARTCLOUDCONSOLE_OWNER_20170830.dmp TABLE_EXISTS_ACTION=APPEND DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS REMAP_SCHEMA=SMARTCLOUDCONSOLE_OWNER:MIKETESTCONSOLE_OWNER TABLES=SMARTCLOUDCONSOLE_OWNER.ACCOUNTDEPLOYMENTLINKS QUERY="WHERE account_id = (select account_id from accounts where lower(guid) = '#####')"
>
> Processing object type SCHEMA_EXPORT/TABLE/TABLE
>
> Table "MIKETESTCONSOLE_OWNER"."ACCOUNTDEPLOYMENTLINKS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
>
> Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
>
> ORA-31693: Table data object "MIKETESTCONSOLE_OWNER"."ACCOUNTDEPLOYMENTLINKS" failed to load/unload and is being skipped due to error: ORA-00942: table or view does not exist

We are using VPDs such that the _owner user owns all of the objects, so I don't think it's a problem of ownership. And it seems like it figures out that the table exists from the second line there (`Table "MIKETESTCONSOLE_OWNER"."ACCOUNTDEPLOYMENTLINKS" exists.`), but then when it proceeds it fails with `ORA-00942: table or view does not exist`. What might be wrong with my use of IMPDP?
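One hedged observation rather than a confirmed diagnosis: the subquery inside QUERY is executed by the Data Pump worker session, which runs as the job owner (EXPORT_USER here), so the unqualified `accounts` resolves against that user's schema rather than the remapped target schema. If EXPORT_USER has no `accounts` table, the data load itself raises ORA-00942 even though the target table exists, which matches the log above. A schema-qualified variant of the filter (keeping the redacted '#####' literal) would look like:

```
QUERY="WHERE account_id = (select account_id from MIKETESTCONSOLE_OWNER.accounts where lower(guid) = '#####')"
```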
Mike DeMille (101 rep)
Aug 31, 2017, 05:25 PM • Last activity: Jan 26, 2025, 07:04 AM
0 votes
1 answer
109 views
Exporting a table with LONG RAW, but the data in LONG RAW is not moving
I am trying to export a table with a LONG RAW column. The table size is 800GB, but Data Pump is not exporting the LONG RAW data. I did some research, and it seems the only way is to transfer the data to a LOB, export/import, and then transfer it back to LONG RAW in the other database. Is there another, simpler way?

I did the below. Notice the size is small; doing a sum of the table in dba_tables gives around 800GB. I know this datatype is deprecated but I cannot change that for now.

Export: Release 19.0.0.0.0 - Production on Tue Dec 17 23:53:38 2024
Version 19.11.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "db1"."SYS_EXPORT_TABLE_01": db1/********@sv1 directory=backup dumpfile=tablle1_d.dmp LOGFILE=log_tablle1_d.log TABLES=tablle1 EXCLUDE=STATISTICS,INDEX parallel=10
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "db1"."tablle1"    12.54 MB    77 rows
Master table "db1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for db1.SYS_EXPORT_TABLE_01 is:
  /backup/tablle1_d.dmp
Job "db1"."SYS_EXPORT_TABLE_01" successfully completed at Tue Dec 17 23:55:21 2024 elapsed 0 00:01:26
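For reference, the LOB staging route the post mentions is usually done with TO_LOB, which converts a LONG RAW column to a BLOB and is only allowed inside a CREATE TABLE ... AS SELECT or INSERT ... SELECT. The key and LONG RAW column names below are placeholders, since the real ones are not shown.

```
-- Hypothetical staging copy: LONG RAW -> BLOB, so Data Pump can move the data normally
CREATE TABLE tablle1_stage AS
SELECT pk_col,
       TO_LOB(long_raw_col) AS payload_blob
  FROM tablle1;
```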
Baalback (105 rep)
Dec 17, 2024, 10:23 PM • Last activity: Dec 23, 2024, 10:33 AM
0 votes
0 answers
23 views
Does transform=disable_archive_logging:y still allow point in time recovery?
I need to import some Data Pump dump files into my database. Does `transform=disable_archive_logging:y` still allow point-in-time recovery?
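Two things are easy to establish up front. First, if the database runs in FORCE LOGGING mode (common on Data Guard primaries), the transform is silently ignored and redo is generated as usual, so recoverability is unchanged; that mode is visible with the query below. Second, for objects that really are loaded with logging disabled, the loaded data is not protected by the archived redo, so a media recovery or point-in-time recovery through the import window leaves those blocks unrecoverable (the classic ORA-26040 "data block was loaded using the NOLOGGING option" symptom).

```
-- Is redo forced regardless of NOLOGGING requests?
SELECT force_logging FROM v$database;
```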
Astora (841 rep)
Nov 19, 2024, 02:47 PM
5 votes
2 answers
8371 views
dbms_datapump API exclude statistics during import
We are using dbms_datapump to copy data between different servers or to set up new schemas quickly. However, importing statistics usually takes ages and has the additional disadvantage that schema statistics are locked after the import. So I'm trying to find a way to skip importing the statistics.

According to Metalink Doc ID 1392981.1 this should be possible using:

dbms_datapump.metadata_filter(handle => l_job_handle, name => 'EXCLUDE_PATH_LIST', value => '''STATISTICS''');

However when I try that, I get an "ORA-39001: invalid argument value" error. I also tried various other formats found in different places:

dbms_datapump.metadata_filter(handle => l_job_handle, name => 'EXCLUDE_PATH_LIST', value => 'STATISTICS');
dbms_datapump.metadata_filter(handle => l_job_handle, name => 'EXCLUDE_PATH_EXPR', value => 'like ''%/TABLE/STATISTICS/TABLE_STATISTICS''');
dbms_datapump.metadata_filter(handle => l_job_handle, name => 'EXCLUDE_PATH_EXPR', value => 'like ''%STATISTICS%''');

But all of them return with an ORA-39001. The version I'm using is:

> Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

The operating system is Windows Server, but this also occurs on a Linux installation. Any ideas how I can skip the statistics during import (or export) through the DBMS_DATAPUMP API?
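Not an answer to the ORA-39001 itself, but a sketch of how to see which path names a metadata filter will accept: the *_EXPORT_OBJECTS dictionary views list the valid object paths per job mode, which helps when picking a value for EXCLUDE_PATH_EXPR.

```
-- Which statistics-related object paths exist for a schema-mode job?
SELECT object_path, comments
  FROM schema_export_objects
 WHERE object_path LIKE '%STATISTICS%'
 ORDER BY object_path;
```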
user1822
Apr 15, 2015, 10:01 AM • Last activity: Oct 16, 2024, 11:59 AM
1 vote
0 answers
6668 views
19c impdp hangs after SCHEMA_EXPORT/VIEW/VIEW with 100% CPU
I have an impdp job on Oracle 19c that always hangs with 100% CPU (the ora_dw00_orclcd process) after the messages below:
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
What seems very strange to me is the output of `select NAME, SQL_TEXT from dba_resumable;`:
SYSTEM.SYS_IMPORT_SCHEMA_01.1	(null)
SYSTEM.SYS_IMPORT_SCHEMA_01	    BEGIN :1 := sys.kupc$que_int.receive(:2); END;
It's a Data Pump export taken from an 11.2.0.1 Oracle database with COMPRESSION=ALL, and it is not a full export (SCHEMAS=MYSCHEMA1 MYSCHEMA2 MYSCHEMA3). Any hints?

EDIT2: the top output is:
PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                                                  
  6313 oracle    20   0 2669888   1,3g 744812 R  91,7 23,9  81:32.14 ora_dw00_ORCLCDB
And this query to find the running statement returns a null SQL text:
select
s.username su,
substr(sa.sql_text,1,540) txt,
s.event,
s.SID
from v$process p,
v$session s,
v$sqlarea sa
where p.addr=s.paddr
and s.username is not null
and s.sql_address=sa.address(+)
and s.sql_hash_value=sa.hash_value(+)
and spid=6313;
"SU"	"TXT"	"EVENT"	"SID"
"SYSTEM"	""	"db file sequential read"	276
Impdp status after 18 hours:
Import> status

Job: SYS_IMPORT_SCHEMA_01
  Operation: IMPORT
  Mode: SCHEMA
  State: EXECUTING
  Bytes Processed: 8,481,046,744
  Percent Done: 14
  Current Parallelism: 2
  Job Error Count: 0
  Job heartbeat: 904
  Dump File: /u01/DMP/my.dmp

Worker 1 Status:
  Instance ID: 1
  Instance name: ORCLCDB
  Host name: lnx-db
  Object start time: Wednesday, 29 April, 2020 14:48:35
  Object status at: Wednesday, 29 April, 2020 14:48:35
  Process Name: DW00
  State: EXECUTING
  Object Schema: MYSCHEMA
  Object Name: V_MYVIEW_xxx
  Object Type: SCHEMA_EXPORT/VIEW/VIEW
  Completed Objects: 80
  Worker Parallelism: 1

Worker 2 Status:
  Instance ID: 1
  Instance name: ORCLCDB
  Host name: lnx-db
  Object start time: Wednesday, 29 April, 2020 14:48:13
  Object status at: Wednesday, 29 April, 2020 14:48:31
  Process Name: DW01
  State: WORK WAITING
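An aside for anyone reproducing this: the process-to-session join above can also be written against sql_id, which is a slightly different way to tie the spinning DW00 worker (spid 6313 from the top output) to whatever it is currently executing and waiting on.

```
-- What is the busy worker process executing / waiting on right now?
SELECT s.sid, s.serial#, s.event, s.sql_id, q.sql_text
  FROM v$process p
  JOIN v$session s ON s.paddr = p.addr
  LEFT JOIN v$sql q ON q.sql_id = s.sql_id AND q.child_number = 0
 WHERE p.spid = 6313;
```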
sgargel (337 rep)
Apr 29, 2020, 10:08 AM • Last activity: Aug 8, 2024, 03:49 PM
2 votes
1 answer
702 views
Whether or Not to Use DataPump with Oracle GoldenGate
We are setting up GoldenGate to keep 3 of our Oracle databases in sync. Regarding configuration: we plan to have an instance of GoldenGate running on all 3 nodes. Each node will run an Extract on its (local) transaction/redo logs to create local trail files, then run Replicat (locally) against the other 2 databases.

(Flow diagram omitted.)

These connections use TCP/IP, so there should not be any lost packets? If a Replicat running over the network sends and fails, will GoldenGate reattempt until it succeeds, or just move on (resulting in the databases being out of sync)?

**VS**: the data pump / checkpoint process seems to be the industry standard for replication over the network, remedying the above process. What are the advantages of using the data pump / checkpoints in this process versus what I described above? Does the data pump / checkpoint process add additional syncing for redundancy?
Lance Allison
Mar 4, 2020, 06:29 PM • Last activity: Jul 12, 2024, 05:39 AM
0 votes
3 answers
17686 views
export multiple tables using expdp with query condition
I have a requirement to export multiple tables using a Data Pump job; the ultimate goal is a migration from on-prem. They have something like 6TB of AUD data and we are not required to host all of that. We are trying to find a way to filter the AUD data in the export so that we only export and migrate the AUD data from a date to be determined onwards. Every AUD table has a column change_date which can be used as the filter. Based on the above requirement I have prepared the job below, but I am getting a syntax error. Can you please help with this?
expdp 
    system/xxxxxxx 
    DIRECTORY=EXPDP_DIR 
    DUMPFILE=test.dmp 
    LOGFILE=test.log 
    CONTENT=DATA_ONLY 
    SCHEMAS=AUD 
    INCLUDE=TABLE:"IN(select table_name from dba_tables where owner ='AUD';)" 
    query=\"where change_date > to_date('31-Dec-2020','DD-MON-YYYY')\"
I am receiving the following error:

> -bash: syntax error near unexpected token `('
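For context, a hedged sketch of how this is often restructured: moving everything into a parameter file (run as `expdp system/xxxxxxx parfile=export_aud.par`, with the parfile name made up here) keeps bash from ever seeing the parentheses and quotes that trigger the error above, and because every AUD table has change_date, a schema-mode export with an untagged QUERY applies the same predicate to every table in the job, so the INCLUDE subquery is not needed at all.

```
DIRECTORY=EXPDP_DIR
DUMPFILE=test.dmp
LOGFILE=test.log
CONTENT=DATA_ONLY
SCHEMAS=AUD
QUERY="WHERE change_date > TO_DATE('31-Dec-2020','DD-MON-YYYY')"
```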
sarat (60 rep)
Jul 12, 2022, 02:08 PM • Last activity: Jun 11, 2024, 04:09 AM
2 votes
1 answer
979 views
Choosing the right block size for Oracle Database tablespaces with LOBs
We are currently switching from an older character set to Unicode using expdp/impdp on Oracle Database 19c, and we had to change several data types. We changed tables with LOBs from BasicFile to SecureFile and found that the tablespace holding these LOBs uses only an 8K block size. The tables contain several TBs of PDFs of around 2-3MB each. Does it make sense to increase the block size of the tablespace holding the LOBs from 8K to 32K?
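If it helps frame the decision, the current LOB storage settings are easy to pull from the dictionary before changing anything; the owner name below is a placeholder.

```
-- How are the LOB segments defined today?
SELECT table_name, column_name, securefile, chunk, cache, logging, tablespace_name
  FROM dba_lobs
 WHERE owner = 'APP_OWNER';
```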
r0tt (1078 rep)
Mar 2, 2021, 04:38 PM • Last activity: May 16, 2024, 04:29 PM
0 votes
1 answer
7076 views
Oracle 12c to 19c Internal Schema Export & Import
I want to export and import the APEX users from Oracle Database 12cR1 to 19c, but the Data Pump utility can't do it. I try to export using the following parfile and command:

DUMPFILE=APEX.dmp
LOGFILE=APEXEXPLOG.log
SCHEMAS=APEX_PUBLIC_USER,APEX_040200,FLOWS_FILES
DIRECTORY=APEXEXP
CLUSTER=N
EXCLUDE=STATISTICS

expdp "'/ as sysdba'" PARFILE=/apexexp/apex_06012022.par

When I try a traditional export, it also cannot do the operation:

exp \'/ as sysdba\' FILE=APEX_2.dmp LOG=APEXEXP_2log.log OWNER=APEX_PUBLIC_USER,APEX_040200,FLOWS_FILES STATISTICS=NONE

> About to export specified users ...
> EXP-00010: APEX_040200 is not a valid username
> EXP-00010: FLOWS_FILES is not a valid username

How can I get this done?
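A hedged note on why schema-mode export refuses here: from 12c onward the APEX schemas installed with the database are flagged as Oracle-maintained, and exporting Oracle-maintained schemas with Data Pump is not supported; the usual route for moving APEX content is APEX's own application/workspace export utilities after installing APEX on the target. The flag itself is visible in the dictionary:

```
-- Oracle-maintained schemas are not candidates for schema-mode Data Pump export
SELECT username, oracle_maintained
  FROM dba_users
 WHERE username IN ('APEX_PUBLIC_USER','APEX_040200','FLOWS_FILES');
```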
jrdba123 (29 rep)
Jan 6, 2022, 01:06 PM • Last activity: Mar 7, 2023, 05:07 PM
0 votes
1 answer
2986 views
Using JOINS in QUERY clause in EXPDP
I have a par file that uses a join in the QUERY clause, but it is not working:

directory=expdp
DUMPFILE=Test_SIT_Query_1table_%U.dmp
encryption_password=oracle
parallel=8
logfile=TEST_SIT_Query_table.log
compression=all
cluster=N
METRICS=Y
query=IAO.VERIFIC:"SELECT /*+Parallel(8)*/ distinct A.* FROM IAO.VERIFIC A JOIN IAO.ELIJ B ON A.RACE_ID = B.RACE_ID WHERE B.CREATE_DT > SYSDATE - 30"
tables=IAO.VERIFIC

Below is the error I am getting:

ORA-31693: Table data object "IAO"."VERIFIC" failed to load/unload and is being skipped due to error:
ORA-00942: table or view does not exist

Could someone help with what could be missing in the above par file?
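For reference, QUERY is not a standalone SELECT: Data Pump appends the value to the SELECT it generates for the table being unloaded, so it has to be written as a clause (typically a WHERE clause); subqueries against other tables are allowed inside that clause. A hedged rewrite of the same filter in that shape:

```
QUERY=IAO.VERIFIC:"WHERE RACE_ID IN (SELECT B.RACE_ID FROM IAO.ELIJ B WHERE B.CREATE_DT > SYSDATE - 30)"
```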
sabarish jackson (133 rep)
Mar 26, 2019, 02:18 PM • Last activity: Sep 19, 2022, 07:03 AM
0 votes
1 answer
2480 views
Oracle 11gR2 - Expdp Full Database With Exclude Some Table Data
We will migrate our database using Data Pump. It will be a full database export that excludes some tables' data, but we want to handle 2 tables differently: for these 2 tables (let them be named table3 and table4) we want to export only the data of the last 1 year. How can I write a query for this? The parfile I prepared is as follows; which commands should I add for these 2 tables?

DUMPFILE=dbname_datefull.dmp
LOGFILE=dbname_datefulllog.log
FULL=Y
DIRECTORY=EXPORT_DATE
EXCLUDE=STATISTICS
EXCLUDE=TABLE_DATA:"= 'table1'"
EXCLUDE=TABLE_DATA:"= 'table2'"
--queryfortable3
--queryfortable4
FLASHBACK_TIME=SYSTIMESTAMP

At the same time, does this query slow down the process, and is it recommended? Thank you. Best regards.
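A hedged sketch of the table-specific form of QUERY that would go where the two placeholder lines sit; the schema name and the date column are invented here, since the post does not name them, and each table can carry its own predicate.

```
QUERY=MYSCHEMA.TABLE3:"WHERE date_column >= ADD_MONTHS(SYSDATE, -12)"
QUERY=MYSCHEMA.TABLE4:"WHERE date_column >= ADD_MONTHS(SYSDATE, -12)"
```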
jrdba123 (29 rep)
Apr 8, 2022, 11:48 PM • Last activity: Apr 10, 2022, 01:08 AM
0 votes
0 answers
693 views
Data pump exclude include
Hello, I'm learning about Data Pump and trying to use INCLUDE to filter which tables I want to export. When I use the command below, the export job runs, but an error occurs saying that the table is being skipped because the SQL command was not properly ended. How do I resolve this issue?

expdp test/asdzxc@pdb22 directory=asdzxc SCHEMAS=TEST INCLUDE=TABLE:"LIKE'REG%'" query=regions:"region_name LIKE'Americas'" dumpfile=test12.dmp

Export: Release 19.0.0.0.0 - Production on Tue Oct 5 23:17:06 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "TEST"."SYS_EXPORT_SCHEMA_02": test/********@pdb22 directory=asdzxc SCHEMAS=TEST INCLUDE=TABLE:LIKE'REG%' query=regions:region_name LIKE'Americas' dumpfile=test12.dmp
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-31693: Table data object "TEST"."REGIONS" failed to load/unload and is being skipped due to error:
ORA-00933: SQL command not properly ended
Master table "TEST"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_SCHEMA_02 is:
  C:\USERS\MARCEL\DESKTOP\DPUMP\TEST12.DMP
Job "TEST"."SYS_EXPORT_SCHEMA_02" completed with 1 error(s) at Tue Oct 5 23:17:13 2021 elapsed 0 00:00:07
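A hedged reading of the ORA-00933: the QUERY value gets appended to the SELECT Data Pump builds for the table, so it needs to start with WHERE, and (as the echoed job banner shows) the Windows shell also stripped the quotes from both filters, which is why a parameter file is the usual workaround. A sketch of the same job in parfile form, run as `expdp test/asdzxc@pdb22 parfile=regions.par` with the parfile name made up:

```
DIRECTORY=asdzxc
DUMPFILE=test12.dmp
SCHEMAS=TEST
INCLUDE=TABLE:"LIKE 'REG%'"
QUERY=regions:"WHERE region_name LIKE 'Americas'"
```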
Silent Boots (13 rep)
Oct 5, 2021, 04:38 PM
5 votes
2 answers
15901 views
How do I kill Oracle DataPump expdp job stuck in DEFINING state?
My job failed because the dump file already existed. Now it's stuck in the DEFINING state. How do I stop/kill it?

declare
  h1 number;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH('EXPORT_TABLE_2017_1205_1402', user);
  --DBMS_DATAPUMP.STOP_JOB (h1,1,0);
  DBMS_DATAPUMP.stop_job (h1, 1);
  DBMS_DATAPUMP.detach (h1);
END;
/

declare
*
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1137
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3873
ORA-06512: at line 4

Using expdp I get a segmentation fault:

expdp stgdata@ORADB1S attach=EXPORT_TABLE_2017_1205_1402

Export: Release 11.2.0.3.0 - Production on Tue Dec 5 14:56:11 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: EXPORT_TABLE_2017_1205_1402
  Owner: STGDATA
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 5F9D23C27A291F20E05328C414AC3548
  Start Time: Tuesday, 05 December, 2017 14:02:38
  Mode: TABLE
  Instance: ORADB1S
  Max Parallelism: 0
  EXPORT Job Parameters:
Segmentation fault

Any idea how to kill them?

select sid, serial# from v$session where action like 'EXPORT_TABLE%';

520  1161  wait for unread message on broadcast channel
587   635  wait for unread message on broadcast channel

Trying to restart the same job:

ERROR at line 1:
ORA-31634: job already exists
ORA-06512: at "STGDATA.DELETE_EXPORT_TABLE", line 59
ORA-06512: at line 1
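For completeness, a sketch of the usual cleanup for an orphaned job like this, using the owner and job name shown in the status output above: the job's state lives in a master table with the same name as the job, and once no sessions are attached, dropping that table is what finally removes the job.

```
-- Is the job orphaned (stuck state, no attached sessions)?
SELECT owner_name, job_name, state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name = 'EXPORT_TABLE_2017_1205_1402';

-- If so, dropping the leftover master table (same name as the job) removes it
DROP TABLE stgdata.EXPORT_TABLE_2017_1205_1402 PURGE;
```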
olekb (163 rep)
Dec 5, 2017, 07:59 PM • Last activity: Oct 4, 2021, 11:07 AM
0 votes
0 answers
695 views
Oracle XE 11.2.0 Data Pump expdp works but impdp doesn't on Windows 10
New to Oracle DB administration here. I have a fresh localhost installation of Oracle XE 11.2.0 (the x64 installation from here) on my computer running Windows 10 64-bit. I am able to connect to the DB both from DataGrip as the SYS as SYSDBA user and with the sqlplus command-line tool. I typed an admin password during installation, but somehow I can connect with sqlplus sys/sys as sysdba (as I said, I am an Oracle DB rookie). I have to import an EXP.DMP file exported by a colleague, using the following command:
impdp.exe 'sys/sys as sysdba' FULL=Y DUMPFILE=exp.dmp REUSE_DATAFILES=Y TABLE_EXISTS_ACTION=REPLACE
Windows starts to complain (screenshot omitted), and the following appears in the command line:
c:\oraclexe\app\oracle\product\11.2.0\server\bin>impdp.exe 'sys/sys as sysdba' FULL=Y DUMPFILE=exp2.dmp REUSE_DATAFILES=Y TABLE_EXISTS_ACTION=REPLACE
Access is denied.

c:\oraclexe\app\oracle\product\11.2.0\server\bin>
**Important note:** I have tried to **export** instead to exp2.dmp in order to try another command and it works with no issue:
c:\oraclexe\app\oracle\product\11.2.0\server\bin>expdp.exe 'sys/sys as sysdba' FULL=Y DUMPFILE=exp2.dmp directory=DATA_PUMP_DIR logfile=exp.log

Export: Release 11.2.0.2.0 - Production on Tue Mar 9 14:41:51 2021

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_FULL_01":  "sys/******** AS SYSDBA" FULL=Y DUMPFILE=exp2.dmp directory=DATA_PUMP_DIR logfile=exp.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 161.2 MB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
... omitted ...
Processing object type DATABASE_EXPORT/AUDIT
. . exported "APEX_040000"."WWV_FLOW_PAGE_PLUGS"         27.16 MB   38892 rows
... omitted ...
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB       0 rows
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  C:\ORACLEXE\APP\ORACLE\ADMIN\XE\DPDUMP\EXP2.DMP
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 14:42:55


c:\oraclexe\app\oracle\product\11.2.0\server\bin>
How can I make the import successful?

**Edit:** I have noticed something very odd in the C:\oraclexe\app\oracle\product\11.2.0\server\bin folder (screenshot omitted). I installed the database engine using setup.exe and everything went smoothly. What's going on? Can I download the file anywhere for 11.2.0 XE without uninstalling and installing again, which would take another hour?
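Not a diagnosis of the OS-level "Access is denied" (that looks like Windows refusing to run the executable, which would fit the oddity spotted in the bin folder), but for comparison, an import invocation written in the same shape as the working export, with DIRECTORY and LOGFILE spelled out explicitly; whether it runs at all still depends on impdp.exe itself being usable.

```
impdp.exe 'sys/sys as sysdba' FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp.dmp LOGFILE=imp.log TABLE_EXISTS_ACTION=REPLACE
```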
Nikolas (101 rep)
Mar 9, 2021, 01:59 PM • Last activity: Mar 9, 2021, 02:47 PM
1 vote
1 answer
3298 views
Ambiguous "invalid operation" while importing with Oracle Data Pump
Let me summarize the problem first; I'll give details of the SQL I used to get where I'm at after the summary.

I'm exporting a schema from a production AWS RDS Oracle instance, using a database link to download the file to my local development database, then running an import locally on an empty database of a freshly installed Oracle in a Docker container. The export and import use Data Pump. I get a very ambiguous error message, "invalid operation", with equally ambiguous details suggesting I call DBMS_DATAPUMP.GET_STATUS to "further describe the error". When I do, I get exactly the same ambiguous "invalid operation" with a suggestion to call GET_STATUS to further describe the error. I'm at a loss as to where to even begin in diagnosing and solving this problem.

Here are the detailed steps I took. I have substituted our schema name with "MY_SCHEMA" to protect the identity of our client; if there's any mismatch in that text, I assure you it is correct in my console and is just a mistake in the substitution for this question. I used SQL Developer to run these commands.

1. On the AWS RDS Oracle instance running 19c:
DECLARE
    hdnl NUMBER;
    BEGIN
    hdnl := DBMS_DATAPUMP.OPEN( operation => 'EXPORT', job_mode => 'SCHEMA', job_name=>null, version=> '18.4.0.0.0');
    DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
    DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
    DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''MY_SCHEMA'')');
    DBMS_DATAPUMP.START_JOB(hdnl);
    END;
    /
2. Connect from my local dev database (18c) to the AWS RDS instance and download the dmp file. And yes, here I connect as the schema owner and not "master": this works for downloading the file while connecting as "master" does not, whereas dumping as the schema owner doesn't work in step one; unless you can instruct me how to do that, and whether it would solve my problem.
create database link to_rds connect to my_schema identified by password using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=my_schema.aljfjske.us-west-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))';
    
    BEGIN
    DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object       => 'DATA_PUMP_DIR',
    source_file_name              => 'my_schema.dmp',
    source_database               => 'to_rds',
    destination_directory_object  => 'DATA_PUMP_DIR',
    destination_file_name         => 'my_schema.dmp'
    );
    END;
    /
3. Start the import while logged in as "sys" with role "sysdba" on my local database (connected to the pluggable database called "my_schema").
DECLARE
    hdnl NUMBER;
    BEGIN
    hdnl := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
    DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR');
    DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''MY_SCHEMA'')');
    DBMS_DATAPUMP.START_JOB(hdnl);
    end;
    /
And I get the following error:

DECLARE hdnl NUMBER; BEGIN hdnl := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null); DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR'); DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''MY_SCHEMA'')'); DBMS_DATAPUMP.START_JOB(hdnl); end;
Error report -
ORA-39002: invalid operation
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7297
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7291
ORA-06512: at line 7
39002. 00000 - "invalid operation"
*Cause:  The current API cannot be executed because of inconsistencies between the API and the current definition of the job. Subsequent messages supplied by DBMS_DATAPUMP.GET_STATUS will further describe the error.
*Action: Modify the API call to be consistent with the current job or redefine the job in a manner that will support the specified API.

I've spent 6+ hours working on this already, reading Oracle docs and guides, trying things, printing more information to the console, and nothing. I get exactly the same error message with no more information. The dump file is on the system and I'm pretty sure it's being read properly, because I can call utl_file.fgetattr to get its size. I've also tried exporting and importing with different users. Nothing. I'm totally in the dark here. Even suggestions on what to try to diagnose this would be much appreciated.

This is a fresh install of Oracle Database 18c Express Edition using Oracle's Docker container files on their GitHub account (which is pretty slick, BTW). The production system on RDS has been up for several years, and I've exported with Data Pump dozens of times during those years and successfully imported into my local 11g Express Edition installation on Fedora Linux. (That no longer works since the production database was upgraded from 12c to 19c recently, which is what started me on this whole path.)
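Since the error text keeps pointing at DBMS_DATAPUMP.GET_STATUS, here is a hedged sketch, closely following the error-handling pattern in Oracle's DBMS_DATAPUMP documentation, of the same import block with a handler that prints the job's detailed error lines before re-raising. The file, directory, and schema names are the ones from the post; everything else is generic.

```
SET SERVEROUTPUT ON
DECLARE
  hdnl    NUMBER;
  ind     NUMBER;
  jstate  VARCHAR2(30);
  sts     ku$_Status;
  le      ku$_LogEntry;
BEGIN
  hdnl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => NULL);
  DBMS_DATAPUMP.ADD_FILE(handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(hdnl, 'SCHEMA_EXPR', 'IN (''MY_SCHEMA'')');
  DBMS_DATAPUMP.START_JOB(hdnl);
EXCEPTION
  WHEN OTHERS THEN
    -- Ask the job itself for the detailed error stack the ORA-39002 message refers to
    DBMS_DATAPUMP.GET_STATUS(hdnl, DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR, 0, jstate, sts);
    le := sts.error;
    IF le IS NOT NULL THEN
      ind := le.FIRST;
      WHILE ind IS NOT NULL LOOP
        DBMS_OUTPUT.PUT_LINE(le(ind).LogText);
        ind := le.NEXT(ind);
      END LOOP;
    END IF;
    RAISE;
END;
/
```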
Jason (113 rep)
Mar 5, 2021, 05:00 PM • Last activity: Mar 8, 2021, 09:46 PM
-1 votes
1 answer
2151 views
Oracle 11gR2 - Export - Import (expdp, impdp) How To Exclude A Table Partition And This Table's Indexes
I have a table partition (135GB) and an index partition on this table (150GB in total), and I want to export the owning schema while excluding this table partition and its indexes. What is the correct syntax for this?
jrdba (55 rep)
Aug 25, 2020, 05:48 PM • Last activity: Jan 10, 2021, 04:05 PM
2 votes
2 answers
14752 views
Import Tables without Constraints from Schema Export
I am trying to do an import using data pump, but I am struggling to come up with the right options. The export was taken as a schema export. I am trying to import just the tables, with no other database objects. I am currently using
INCLUDE=TABLE
TABLE_EXISTS_ACTION=REPLACE
REMAP_SCHEMA=
REMAP_TABLESPACE=
This is correctly importing the tables, but I am a little surprised to see that the import is also creating the constraints and triggers on the tables, or trying to at least. The triggers are failing because they include a schema name that does not exist in the database. I thought that by using INCLUDE=TABLE only the tables would be included. Apparently that is not the case. I am unable to use EXCLUDE=CONSTRAINT or EXCLUDE=TRIGGER as I am already using INCLUDE to limit the import to just tables. Any ideas on how I could structure the import to only import tables?
Patrick (698 rep)
Oct 28, 2020, 04:08 AM • Last activity: Oct 30, 2020, 06:58 AM