Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
277
views
Oracle 10g Database - SYSTEM tablespace space
I am concerned about the SYSTEM tablespace. Its space is 99% used: the whole tablespace has 1510 MB and 1507 MB is already used. The maximum size of the datafile is 32 GB, and autoextend is on with an increment of 10 MB. However, an outside company's audit pointed out to us that it is unacceptable to keep so little free space (3 MB) in the SYSTEM tablespace. As far as I can see it is autoextensible, so I am not sure whether I should take any action to resize the datafile (for example to 2 GB). I am also not sure what kinds of extents (DBA_EXTENTS) are stored in this tablespace, so at the moment I cannot check whether anything should be moved from it to SYSAUX or elsewhere. My question is: is it really necessary to resize the SYSTEM tablespace to avoid problems with the database due to its fullness? What would you do in this case? Would it perhaps be enough to change the increment size from 10 MB to something larger?
Julia Kwasniuk
(1 rep)
Mar 17, 2022, 02:07 PM
• Last activity: May 25, 2025, 01:03 AM
0
votes
1
answers
641
views
Enterprise manager failed to start in oracle 10g
I am using Oracle 10g with 4 instances on the same server, but one of them fails to start Oracle Enterprise Manager while the other three work fine.
Starting Oracle Enterprise Manager 10g Database Control
....................................................................
failed.
Logs are generated in directory /u01/app/oracle/product/10.2.0/db_1/vvs/sysman/log
Here I posted error log file -> **emagent.log**
2018-07-31 11:10:55 Thread-4124031232 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 11:10:55 Thread-4124031232 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 11:18:41 Thread-4124375296 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 11:35:28 Thread-4124748032 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 12:03:41 Thread-4124952832 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 14:36:27 Thread-4125055232 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 15:23:13 Thread-4124195072 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
2018-07-31 18:19:37 Thread-4124940544 Starting Agent 10.1.0.6.0 from /u01/app/oracle/product/10.2.0/db_1 (00701)
When I try to start the agent, it says the agent is already running; but when I check the status, it shows the agent is not running:
[oracle@vvs ~]$ emctl start agent
TZ set to Asia/Calcutta
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Agent is already running
[oracle@vvs ~]$ emctl status agent
TZ set to Asia/Calcutta
Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent is Not Running
And when I try to start OEM for this instance it fails, but then the rest of the OEM instances stop working.
So please let me know how to solve this problem.
EDIT: additional log data:
2018-08-16 16:47:50 Thread-4090489664 ERROR http: 9: Unable to initialize ssl connection with server, aborting connection attempt
2018-08-16 16:47:50 Thread-4090489664 ERROR pingManager: nmepm_pingReposURL: Cannot connect to https://vvs:5502/em/upload/ : retStatus=-1
2018-08-16 16:47:50 Thread-4090489664 ERROR pingManager: Error in updating the agent time stamp file
2018-08-16 16:47:53 Thread-4087339840 ERROR command: nmejcn: error receiving response headers from http://vvs:5502/em/upload/
Ray
(71 rep)
Jul 31, 2018, 01:06 PM
• Last activity: Apr 13, 2025, 03:01 AM
0
votes
2
answers
1213
views
java.sql.SQLException: ORA-01410: invalid ROWID
I am getting **java.sql.SQLException: ORA-01410: invalid ROWID** for some of the rows.
Here is my query.
String sqlSTGOP = "select OP.PLU_LONG PROD_ID," +
                  "OP.NEW_PRICE PRICE," +
                  "OP.REG_PRICE REGPRICE," +
                  "OP.QUANTITY_LIMIT QUANTITY_LIMIT," +
                  "OP.ON_AD_DATE AD_START_DATE," +
                  "OP.OF_AD_DATE AD_END_DATE " +
                  "from ABC OP";
try {
Class.forName(driver);
conn = DriverManager.getConnection(dbUrl,dbUser,dbPw);
stmt = conn.createStatement();
rset = stmt.executeQuery(sqlSTGOP);
// System.out.println("Entering while block");
while (rset.next()) {
String price = rset.getString("PRICE");
String reg_price = rset.getString("REGPRICE");
String prod_id = rset.getString("PROD_ID");
int quantity_limit = rset.getInt("QUANTITY_LIMIT");
Date ad_start_date = rset.getDate("AD_START_DATE");
Date ad_end_date = rset.getDate("AD_END_DATE");
}
} catch (SQLException sqe) {
logger.error("**** SQLException occurred while getting Special Price and Quantity Limit:" + sqe);
error2 = sqe.toString();
System.out.println("Problem in creating result set" + error2);
} catch (Exception e) {
logger.error("**** Exception occurred while getting Special Price and Quantity Limit:" + e);
error2 = e.toString();
System.out.println("Problem in creating result set" + error2);
}
The error happens only here, and only once in a while; when we restart the process, it goes away. The exception is thrown while we are reading the values from the ResultSet.
* sqlSTGOP is a String constant holding the above select query
* driver: oracle.jdbc.driver.OracleDriver
* jdbc:oracle:thin:@a.b.c.d:1521:db
* java version: jdk1.6
* version of jdbc: ojdbc14_g.jar
* dbUrl is the URL of the database that has this ABC table
* dbPw is the password of that database
* oracle: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
There is no PK or index on this table.
> Is someone deleting rows?
There is one process that runs before this one; it removes and reloads the data in table ABC, and this process runs only after it finishes. That is why we cannot figure out why this error occurs even though the earlier process finishes its task before this one starts.
No other process, procedure, trigger, or package uses table ABC.
Ashok
(9 rep)
Mar 11, 2018, 05:51 AM
• Last activity: Aug 7, 2023, 10:04 AM
3
votes
4
answers
16617
views
Privileges needed for Oracle Text
I'm new to Oracle Text and I am using it to support a search feature in an application I have written. Everything seems to be working correctly, except that I'm not happy with the fact that I had to store a bunch of things in the ctxsys schema in order to get around insufficient-privilege issues. (This is with Oracle 10gR2.)
Here is the SQL script I have to set up Oracle Text for my application
-- Since the CTXSYS schema was created when Oracle Text was installed
-- it does not yet have permission to select on certain tables it needs to
-- for the below procedure
grant select on dmg.table1 to CTXSYS;
grant select on dmg.table2 to CTXSYS;
grant select on dmg.table3 to CTXSYS;
grant select on dmg.table4 to CTXSYS;
grant select on dmg.table5 to CTXSYS;
create or replace procedure ctxsys.attr_indexing_procedure (
rid in rowid,
tlob in out NOCOPY clob )
is
begin
-- This procedure queries the above five tables to extract text and combine it into a single document placed into tlob
end;
/
begin
ctx_ddl.create_preference('dmg.my_datastore', 'user_datastore' );
ctx_ddl.set_attribute( 'dmg.my_datastore', 'procedure', 'CTXSYS.attr_indexing_procedure' );
end;
/
begin
ctx_ddl.create_preference( 'dmg.my_index_lexer', 'BASIC_LEXER' );
ctx_ddl.set_attribute( 'dmg.my_index_lexer', 'base_letter', 'YES');
end;
/
begin
ctx_ddl.create_preference('dmg.MY_STEM_FUZZY_PREF', 'BASIC_WORDLIST');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','STEMMER','ENGLISH');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_MATCH','ENGLISH');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_SCORE','0');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','FUZZY_NUMRESULTS','5000');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','SUBSTRING_INDEX','TRUE');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','PREFIX_INDEX','TRUE');
ctx_ddl.set_attribute('dmg.MY_STEM_FUZZY_PREF','WILDCARD_MAXTERMS','5000');
end;
/
-- I would prefer that this index be owned by dmg but it looks like that would require the create any table privilege on dmg (because oracle text
-- will create tables in the ctxsys schema when creating the index). Therefore, ctxsys will own the text index.
create index ctxsys.ix_doi_attr on table1(column1) indextype is ctxsys.context parameters( 'datastore dmg.my_datastore lexer dmg.my_index_lexer wordlist dmg.DOI_STEM_FUZZY_PREF');
-- Schedule the index to sync every 4 hours
declare
job_num number;
nlsvar varchar2(4000);
envvar raw(32);
begin
select nls_env, misc_env into nlsvar, envvar from dba_jobs where rownum < 2;
sys.dbms_ijob.submit( job=>job_num, luser=>'CTXSYS', puser=>'CTXSYS', cuser=>'CTXSYS',
what=>'ctx_ddl.sync_index(''ctxsys.ix_doi_attr'');', next_date=>sysdate+1/1440,
interval=>'SYSDATE+240/1440', broken=>false, nlsenv=>nlsvar, env=>envvar );
commit;
dbms_output.put_line('job '||job_num||' has been submitted.');
end;
/
This script is intended to be run by my personal user, which has the DBA role. All the relevant tables are owned by the "DMG" schema, which has very limited privileges.
As you can see, I am using a user_datastore to aggregate text from multiple tables into a single document. Also, I could only make this work if the indexing procedure, the index itself, and the dbms_job to sync the index periodically were all owned by ctxsys, which seems like a bad idea. Preferably, I would like everything to be owned by the DMG schema.
The biggest issue I ran into was in the create index statement. It would always fail with
SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-01031: insufficient privileges
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
29855. 00000 - "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause: Failed to successfully execute the ODCIIndexCreate routine.
*Action: Check to see if the routine has been coded correctly.
After some digging I figured out that the create index statement was attempting to create some tables under the ctxsys schema called DR$IX_DOI_ATTR$I, DR$IX_DOI_ATTR$K, DR$IX_DOI_ATTR$N, DR$IX_DOI_ATTR$P, and DR$IX_DOI_ATTR$R. I found that the DMG schema could create the index if I gave it the CREATE ANY TABLE privilege, but that is not something I want to do in production.
How can I make everything owned by the DMG schema (i.e. minimum privileges I need to grant the DMG schema)? Is there a way for me to remove the grants on the dmg tables to ctxsys?
EDIT:
I was originally incorrect in saying that giving the DMG schema the CREATE ANY TABLE privilege works. It instead changes the error message to this:
SQL Error: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-01536: space quota exceeded for tablespace 'DMG'
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 364
29855. 00000 - "error occurred in the execution of ODCIINDEXCREATE routine"
*Cause: Failed to successfully execute the ODCIIndexCreate routine.
*Action: Check to see if the routine has been coded correctly.
This is a very strange message because there are NO quotas set up anywhere on this database. I have no idea how it is hitting a quota limit.
Eddie
(131 rep)
Jan 31, 2013, 08:23 PM
• Last activity: Jul 2, 2020, 06:34 AM
1
votes
1
answers
988
views
Which java JDK is compatible with oracle instant client 10g
I'm an intern stuck trying to retrieve data from Oracle 8i (a database that is as old as I am).
**The problem:**
I need to write a Java program that pulls data from Oracle 8i into a newer server. The last client that is compatible with Oracle 8i is the Oracle 10g client (the 10g Instant Client is still available for free on Oracle's website).
Documentation is scarce about 10g client compatibility. My server currently does not have Java installed, and my question is: what version of Java is compatible with the 10g client?
Nicolasome
(123 rep)
May 15, 2019, 06:41 PM
• Last activity: May 15, 2019, 07:04 PM
16
votes
4
answers
103184
views
Calculating percentage of a row over total sum
Apologies for the bad title, I wasn't sure what would be a good title for this.
This is currently (simplified view of the) data I'm working with
Agent | Commission
---------|------------
Smith | 100
Neo | 200
Morpheus | 300
I need to calculate the percentage of the total commission, each agent is responsible for.
So, for Agent Smith, the percentage would be calculated as
(Agent Smith's commission / Sum(commission)) * 100
So, my expected data would be
Agent | Commission | % Commission
---------|---------------|---------------
Smith | 100 | 17
Neo | 200 | 33
Morpheus | 300 | 50
I have a function returning the commission for each agent, and another function returning the percentage as (Commission/Sum(Commission))*100. The problem is that Sum(commission) gets calculated for each and every row, and given that this query would be run on a data warehouse, the data set would be rather large (currently it's just under 2000 records); quite honestly, that's a bad approach (IMO).
Is there a way of having the Sum(Commission) not calculated for every row being fetched?
I was thinking of something along the lines of a two-part query: the first part would fetch the sum(commission) into a package variable/type, and the second part would refer to this pre-calculated value, but I'm not sure how I can accomplish this.
I am limited to using SQL, and I'm running on Oracle 10g R2.
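One way to avoid recomputing the total per row: Oracle 10g R2 supports analytic (window) functions, so SUM(commission) OVER () (or RATIO_TO_REPORT) computes the grand total in the same pass as the detail rows, with no package variable needed. Below is a minimal sketch of the idea, run against SQLite from Python only so that the snippet is self-contained; the table and column names are illustrative, not from the original schema.

```python
import sqlite3

# Requires SQLite >= 3.25 for window functions (bundled with Python 3.8+).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE commissions (agent TEXT, commission NUMERIC);
    INSERT INTO commissions VALUES ('Smith', 100), ('Neo', 200), ('Morpheus', 300);
""")

# SUM(...) OVER () computes the grand total for the whole result set,
# so the table is scanned once instead of once per row.
rows = conn.execute("""
    SELECT agent,
           commission,
           ROUND(100.0 * commission / SUM(commission) OVER (), 0) AS pct
    FROM commissions
    ORDER BY commission
""").fetchall()

for agent, commission, pct in rows:
    print(agent, commission, pct)
```

In Oracle the same shape would be round(100 * commission / sum(commission) over (), 0), or round(ratio_to_report(commission) over () * 100, 0), written directly in the SELECT list.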
Sathyajith Bhat
(1534 rep)
Jan 17, 2011, 02:21 PM
• Last activity: Dec 18, 2018, 03:12 PM
0
votes
2
answers
397
views
Can I sync an existing database using a .dmp file?
I have two Oracle databases (production and lab). When I created the lab database I imported a .dmp file from production to copy the data. Now that the database has been running for a while, the data is out of sync and I'd like to import again. What's the easiest way to do this?
I tried using imp again, but it shows errors now since the tables are already there. I could delete the data files and re-import, I suppose.
Edit: This is what I've tried:
$ imp user/password file=MyExportFile.dmp
following statement failed because the object already exists:
CREATE TABLE ...
...
Import terminated successfully with warnings.
Brent Sandstrom
(145 rep)
Jun 15, 2018, 02:35 PM
• Last activity: Jun 16, 2018, 07:48 AM
0
votes
1
answers
164
views
How to determine the biggest float value (in terms of used bytes) with SQL in Oracle?
I would like to know how to check out what is the biggest float value in terms of used bytes in a specific column.
Oracle DB Version is 10.2.
fetch_execute_break
(47 rep)
May 14, 2018, 08:42 AM
• Last activity: May 14, 2018, 08:51 AM
0
votes
1
answers
968
views
Large cost change in execution-plan why?
I had a query change execution plans for an unknown reason. The stats were gathered about 24 hours before the change and the rate of data increase has not changed over the period in question. Here is the original GOOD execution plan:
The GOOD plan returns about 428 rows per run:
SELECT STATEMENT Optimizer=ALL_ROWS (Cost=9)
SORT (ORDER BY) (Cost=9 Cardinality=1 Bytes=82 Time=1)
TABLE ACCESS (BY INDEX ROWID) OF DE.MY_TABLE (TABLE) (Cost=8 Cardinality=1 Bytes=82 Time=1)
INDEX (RANGE SCAN) OF DE.MY_TABLE_MY_COL_01_DATE (INDEX) (Cost=3 Cardinality=6 Time=1)
Here is the UGLY plan: a full table scan of 73 million rows, spinning the CPU near 100%, that still returns about 400 rows.
SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4911)
SORT (ORDER BY) (Cost=4911 Cardinality=32894 Bytes=2697308 Time=59)
TABLE ACCESS (FULL) OF DE.MY_TABLE (TABLE) (Cost=4278 Cardinality=32894 Bytes=2697308 Time=52)
After rebuilding the index and creating a covering index, neither of which worked, we finally added a hint to use the original index, and that worked well, restoring the previous timings. With the hint the query runs in 22 ms; the ugly plan ran at about 14 sec.
After the query hint this is the query plan:
SELECT STATEMENT Optimizer=ALL_ROWS (Cost=56571 Cardinality=13697 Bytes=1164245)
SORT (ORDER BY) (Cost=56571 Cardinality=13697 Bytes=1164245)
TABLE ACCESS (BY INDEX ROWID) OF MY_TABLE (TABLE) (Cost=56297 Cardinality=13697 Bytes=1164245)
INDEX (RANGE SCAN) OF MY_TABLE_MY_COL_01_DATE (INDEX) (Cost=521 Cardinality=62447)
Notice the **Cost and Cardinality** of the first and last plans, both of which use the same index. In the first, the cardinality is wrong (1): the query returns a fairly consistent 400 rows. In the last plan the cardinality is 13697; that's wrong too. But look at the cost of the last plan, 56571: no wonder it took a hint to force the use of the working index.
My research is pointing to a Statistics problem. I verified that the GATHER_STATS_JOB is running. I've used the dba_tab_stats_history table to verify the stats are actually updated on the table I'm interested in.
So my questions are: why would the cardinality and cost of the query change so dramatically? If it is down to how the statistics are gathered (I am using gather_database_stats(auto)), where should I start reading about which stats-gathering options to use?
If the statistics are unlikely to be the cause, where else should I look?
I'm running Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
EDIT NEW PLAN 03/21/2018
The latest status is that the system no longer considers the hinted index expensive.
Here is the latest plan detail:
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 41 | 3444 | 998 (1)| 00:00:12 |
| 1 | SORT ORDER BY | | 41 | 3444 | 998 (1)| 00:00:12 |
|* 2 | TABLE ACCESS BY INDEX ROWID| MY_TABLE | 41 | 3444 | 997 (1)| 00:00:12 |
|* 3 | INDEX RANGE SCAN | MY_TABLE_MY_COL_01_DATE| 1128 | | 12 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("FO"."STATUSID"<>1)
3 - access("FO"."MY_COL_01"=:AGENCY AND "FO"."EVENTDATE">SYSDATE@!-:TIMENUMBER AND
"FO"."EVENTDATE" IS NOT NULL)
Steve Hansen
(13 rep)
Mar 20, 2018, 09:28 PM
• Last activity: Mar 21, 2018, 02:43 PM
0
votes
1
answers
69
views
Apply job aborting with <1 granule size unable to allocate, ASMM on
All,
My team is trying to resolve an Oracle issue we're seeing sporadically that is really causing us trouble.
We have an update process that imports .dmp files via a package that utilizes an Apply job to do so. (not our choice to implement it this way...)
The problem is that this apply job aborts with an ORA-04031 "unable to allocate" error for under 100 bytes (various values, usually from 60 to 90) of memory, with the shared pool marked as the culprit. ASMM is on. The server is provisioned with ~500 MB of memory (these are old Server 2003 environments, so memory is in high demand). I am concerned about this error: it seems to me that the error should always report multiples of 4K, as that is our granule size, but perhaps I am misunderstanding the process of freeing up memory. I am also concerned that the server is not freeing up packages as needed.
I believe that the fix is to use dbms_shared_pool.keep("PACKAGE NAME") to pin the package used to run the apply job, so that it never has to allocate the memory again, but we need to test this fix before we deploy it, for various organizational reasons. We also plan to allocate more memory to the oracle database. Setting the minimum size is not a solution, as we've seen this issue when the size of the pool is above the minimum, with ASMM on. We've also tried setting the sga maximum size to a higher value, and pinning the package in question, but again, with no way to reproduce the issue, it is a waiting game to see if it ever happens again.
Our main problem is in reliably reproducing the problem - even when disabling ASMM and setting the shared pool as small as possible, we're not able to reliably reproduce the issue. We've gotten the shared pool small enough that the web interface can't even pull image tags out of the database, or query the login information from the appropriate table, yet the apply job still runs reliably.
We do not have an in-house DBA so naturally we're probably missing some obvious steps.
This issue is present sporadically on 2-3 servers in an 8-server environment. These individual servers all see similar use by different groups. I am unclear as to why only some of the servers encounter this problem, and I have not been able to identify any smoking-gun cause.
I am hoping for input in either reliably avoiding this issue entirely, or in reliably forcing the shared pool to be unable to allocate bytes to this job, so that we can confirm these fixes actually work.
Please bear in mind that I have read most of the ORA-04031 posts on SO at this point; this problem has been under investigation for weeks. It is likely NOT appropriate to mark this question as a duplicate and direct me to another question: I've read it already, and it did not answer my question. This is a cross-post from plain old Stack Overflow.
I have examined the trace files for one of the servers but it was less than enlightening - I am unsure of what to look for.
We've tried flushing the shared pool, but again, with no way to reproduce the issue it turns into an "wait and see" fix, which isn't the answer that our funder is really interested in.
Monica Apologists Get Out
(103 rep)
Dec 29, 2016, 04:17 PM
• Last activity: Dec 29, 2016, 05:17 PM
-1
votes
1
answers
71
views
Problem with SQL query
I have the following Problem: I want as output a specific document (TITLE) that contains two different terms (TERM). For example the title of a fairy tale, that contains the names of two different characters. And I have the following data structure (**bold=tables**, *italic=keys*):
**DOCUMENT** ( *DOCUMENT_ID*, TITLE )
**DOCUMENT TERM** ( *DOCUMENT ID*->DOCUMENT, *POSITION OF TOKEN IN DOCUMENT*, TERM,
TOKEN, POS ID->POS )
**POS** ( *POS ID*, LOW, HIGH, DESCRIPTION, LONG DESCRIPTION, PARENT POS->POS )
I tried the following solution, but it didn't work. I hope someone can help:
SELECT
TITLE
FROM
DOCUMENT
WHERE
DOCUMENT_ID
IN (
SELECT
DOCUMENT_ID
FROM
DOCUMENT_TERM
WHERE
(TERM = 'Term1') AND (TERM = 'Term2')
)
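Note that the subquery above can never return rows: a single row's TERM cannot equal 'Term1' and 'Term2' at the same time, so the AND makes the result set empty. One standard fix is to match either term and then keep only the documents in which both distinct terms appear (GROUP BY with HAVING). A minimal sketch of the idea, run against SQLite with made-up sample data (the terms and titles here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE document (document_id INTEGER, title TEXT);
    CREATE TABLE document_term (document_id INTEGER, term TEXT);
    INSERT INTO document VALUES (1, 'Hansel and Gretel'), (2, 'Snow White');
    INSERT INTO document_term VALUES
        (1, 'Hansel'), (1, 'Gretel'), (2, 'Snow White'), (2, 'Gretel');
""")

# Match rows containing either term, then keep only the documents
# in which BOTH distinct terms were found.
titles = conn.execute("""
    SELECT title
    FROM document
    WHERE document_id IN (
        SELECT document_id
        FROM document_term
        WHERE term IN ('Hansel', 'Gretel')
        GROUP BY document_id
        HAVING COUNT(DISTINCT term) = 2
    )
""").fetchall()

print(titles)  # only the document containing both terms
```

The same GROUP BY/HAVING pattern (sometimes called relational division) works unchanged in Oracle.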
ghillb
(113 rep)
Oct 25, 2016, 11:39 AM
• Last activity: Oct 25, 2016, 12:49 PM
1
votes
2
answers
3577
views
Setting the SMTP_OUT_PARAMETER in 10G at runtime
I am writing a procedure that will send mail from Oracle 10g. I have installed the UTL_MAIL and UTL_SMTP packages and configured the SMTP_OUT_SERVER parameter. However, the requirement is that the SMTP server address be chosen at runtime. How do I resolve this?
Subhaprasad Mukherjee
(133 rep)
Apr 10, 2014, 10:45 AM
• Last activity: Jun 20, 2015, 12:38 AM
9
votes
1
answers
2001
views
Column level security
I need a solution to hide specific columns in a table. We have people who need to build reports against this database, and specifically against some tables containing confidential information, but they aren't allowed to see items like salary or SSN. Is it possible to filter specific columns for users?
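A common approach, available in Oracle 10g, is to expose a view that selects only the non-confidential columns and grant the report users SELECT on the view rather than on the base table (later Oracle releases also offer VPD column masking via DBMS_RLS). The view idea itself is plain SQL, sketched below with SQLite; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (name TEXT, dept TEXT, salary NUMERIC, ssn TEXT);
    INSERT INTO employees VALUES ('Alice', 'HR', 90000, '123-45-6789');

    -- Expose only the non-confidential columns. In Oracle you would then
    -- GRANT SELECT ON employees_public TO report_role; and grant nothing
    -- on the base table.
    CREATE VIEW employees_public AS
        SELECT name, dept FROM employees;
""")

# The sensitive columns simply do not exist through the view.
cols = [d[0] for d in conn.execute("SELECT * FROM employees_public").description]
print(cols)  # salary and ssn are not visible
```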
Robert
(91 rep)
Sep 13, 2012, 07:00 PM
• Last activity: Mar 12, 2015, 12:35 PM
1
votes
1
answers
2491
views
Why does the optimizer not use an Index Unique Scan?
I am on Oracle 10.2.0.4.0. I have a big index-organized table (**IOT**) with 74 columns and over 345 million rows. An external tool executes a batch whose source code I cannot touch. The critical query in the batch executes a SELECT against multiple columns of the table, filtering with three equality predicates. These three predicates are passed as bind variables, and their values change during the batch. Sometimes, after a bind-variable change, the CBO chooses another execution plan, which is not good.
The predicates are these:
.. WHERE ((COSTEVENT.ACC_NUM=:PREVALUE0) AND
(COSTEVENT.EVENT_SEQ=:PREVALUE1)) AND (COSTEVENT.EVENT_REF=:PREVALUE2);
The table has a **UNIQUE INDEX** on the column EVENT_REF. There is a composite PRIMARY KEY on five columns, two of which are EVENT_SEQ and ACC_NUM (the others are EVENT_TYPE_ID, EVENT_SOURCE, and EVENT_REF).
When the query performs well, the CBO chooses the following execution plan:
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 8 (100)| |
| 1 | INDEX UNIQUE SCAN| COSTEVENT_PK | 1 | 265 | 8 (0)| 00:00:01 |
| 2 | INDEX RANGE SCAN| COSTEVENT_UK1 | 1 | | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
1 - access("COSTEVENT"."EVENT_REF"='00DE320000620E1')
filter("COSTEVENT"."EVENT_SEQ"=11 AND
"COSTEVENT"."ACC_NUM"='LA00032914')
2 - access("COSTEVENT"."EVENT_REF"='00DE320000620E1')
But during the batch, sometimes, the CBO switch to another plan, the bad plan:
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 266 | 5 (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN| COSTEVENT_PK | 1 | 266 | 5 (0)| 00:00:01 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("COSTEVENT"."ACC_NUM"='LA00032914' AND
"COSTEVENT"."EVENT_SEQ"=111 AND "COSTEVENT"."EVENT_REF"='00DE320000620E1')
filter("COSTEVENT"."EVENT_REF"='00DE320000620E1')
Instead of first accessing the EVENT_REF unique index key and then performing the limited range scan, the CBO, for some strange reason, accesses ACC_NUM first and then applies the filter; but ACC_NUM is not UNIQUE, so it needs to perform an INDEX RANGE SCAN through millions and millions of rows.
If I add the hint
/*+ index(COSTEVENT COSTEVENT_UK1) */
to the SELECT, the CBO chooses correctly for an INDEX UNIQUE SCAN, but I can't access to the source code.
Other useful info are here:
COLUMN_NAME DENSITY LOW_VALUE HIGH_VALUE NUM_DISTINCT HISTOGRAM
------------------------------ ---------- ---------------------------------------- ---------------------------------------- ------------ ---------------
ACC_NUM .001146789 30303030343037 4E3030303331343634 31011 HEIGHT BALANCED
EVENT_SEQ 1.4550E-09 C102 C20204 100 FREQUENCY
EVENT_SOURCE .000808407 30303031383034 7777772E7469656D706F2E6974 123629 HEIGHT BALANCED
EVENT_TYPE_ID 1.4550E-09 C102 C119 11 FREQUENCY
EVENT_REF 2.9101E-09 303030364645303030343534343544 303044434644303030303032303634 343634290 NONE
and here:
INDEX_NAME CLUSTERING_FACTOR NUM_ROWS
------------------------------------------------------------------------------------------ ----------------- ----------
COSTEVENT_PK 0 345257860
COSTEVENT_UK1 30346000 345738415
My questions are:
1) Why does Oracle choose an INDEX RANGE SCAN instead of an INDEX UNIQUE SCAN, even though the former is extremely slow compared to the latter?
2) Is there a way to lead Oracle to the right choice without having to apply a SQL Profile, for example (maybe by manipulating histograms, statistics, etc.)?
Thanks in advance
Shaunyl
(528 rep)
Feb 12, 2015, 02:28 PM
• Last activity: Feb 12, 2015, 04:15 PM
1
votes
1
answers
411
views
SQLPlus Padding in Query Results
I have several Oracle servers that exhibit a behavior I do not understand. I have identified a two node RAC in which one node exhibits the behavior, but the other node does not. RAC is not involved in the issue, but it dramatically reduces the differences between the servers experiencing the different behavior.
Why is the following output different?
Node 2 (expected behavior):
SQL> select 'a' from dual;
'
-
a
SQL> select 'abc' from dual;
'AB
---
abc
Node 1 (unexpected behavior):
SQL> select 'a' from dual;
'A'
---
a
SQL> select 'abc' from dual;
'ABC'
---------
abc
These nodes are connecting to different instances of the same database. I have done a "show all" on both and verified that the SQL*Plus settings are identical.
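One setting that "show all" does not cover is the client character set. A UTF8 client (for example NLS_LANG ending in .AL32UTF8) makes SQL*Plus size literal columns in bytes rather than characters, which would explain a 9-dash heading for a 3-character literal on node 1. A hedged way to compare the two nodes:

```sql
-- On each node, check the database character set...
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter LIKE '%CHARACTERSET%';
-- ...and, more importantly, compare the NLS_LANG environment variable
-- of the two SQL*Plus sessions, since the client setting is what
-- drives the display width.
```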
Leigh Riffel
(23884 rep)
Mar 11, 2014, 07:01 PM
• Last activity: Apr 11, 2014, 03:53 AM
0
votes
3
answers
10594
views
rename datafile when file does not exists
I am trying to clone our production database in a test virtual machine. Our production database uses ASM; I would like the test database to use a plain file system.
I created the database, services, pfile, etc., and restored the production control files to the test database. Now I am trying to rename the datafiles in the test database, and I get the following error:
SQL> startup mount;
.....
SQL> alter database rename file '+NEWDATA/XX/XX01.dbf' to 'M:\oracle\datafiles\XX\XX01.dbf';
alter database rename file '+NEWDATA/XX/XX01.dbf' to 'M:\oracle\datafiles\XX\XX01.dbf'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file 162 - new file
'M:\oracle\datafiles\XX\XX01.dbf' not found
ORA-01110: data file 162: '+NEWDATA/XX/XX01.dbf'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch
O/S-Error: (OS 4) The system cannot open the file.
My backup consists of normal RMAN backup pieces, not a backup taken AS COPY, so I am trying to rename files that do not exist yet. After the rename, I plan to restore the database from the RMAN backup.
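When restoring backup pieces to a new file layout, RMAN's SET NEWNAME avoids renaming nonexistent files altogether; a minimal sketch (the datafile number is taken from the error above; one SET NEWNAME clause would be needed per datafile):

```sql
RUN {
  SET NEWNAME FOR DATAFILE 162 TO 'M:\oracle\datafiles\XX\XX01.dbf';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;   -- points the control file at the new names
  RECOVER DATABASE;
}
```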
Atilla Ozgur
(1466 rep)
Dec 30, 2013, 12:31 PM
• Last activity: Dec 31, 2013, 01:31 PM
1
votes
1
answers
459
views
Does a standby slow down Oracle imp?
I have Oracle 10gR2 installed on Redhat Linux 5.4 on two servers (primary and physical standby).
When I import data into the primary server while the *standby is started*, the import takes too long to complete (I didn't wait for it to finish). When I shut down Oracle on the standby server, it completes in 15 minutes.
Is this really due to the standby? How can I speed up this slow imp? Here is the command line used:
imp userid=/ file=base.dmp ignore=yes log=log1.log
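One thing worth checking is whether redo transport to the standby is synchronous; with SYNC/AFFIRM the primary waits on the standby for every commit, which would slow a large import exactly as described. A sketch (the destination number and service name are assumptions):

```sql
SHOW PARAMETER log_archive_dest_2
-- If the destination shows SYNC, switching it to ASYNC decouples the
-- import from standby network round-trips:
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby ASYNC';
```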
Ould Abba
(157 rep)
Feb 3, 2013, 02:24 PM
• Last activity: Feb 4, 2013, 03:06 PM
2
votes
1
answers
1506
views
Oracle 10gR2 to 11gR2 upgrade - performance hit
We've recently upgraded from Oracle 10gR2 (10.2.0.4) to 11gR2 (11.2.0.3) and we are noticing a significant performance hit, although the execution plans for the offending queries are the same before and after the upgrade. Allocating more memory improved performance, but only slightly. We also tried setting the optimizer to 10.2.0.4 behaviour at session level, but the queries fared no better. The only difference we've noticed is that with autotrace enabled, physical reads on 10gR2 were 0, whereas on 11gR2 they were on the order of tens of thousands.
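For reference, pinning the pre-upgrade optimizer behaviour for a session (assuming "10204" in the question refers to this parameter) is done with:

```sql
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
```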
Including stats demonstrating the physical reads discrepancies:
10gR2 Stats
~~~~~~~~~~
Statistics
--------------------------------------------------------
0 recursive calls
0 db block gets
21189 consistent gets
0 physical reads
0 redo size
1014 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
11gR2
~~~~~~
Statistics
------------------------------------------------------
0 recursive calls
0 db block gets
11243 consistent gets
11232 physical reads
0 redo size
1049 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Any ideas/suggestions/right-directions would be greatly appreciated.
dimitrisli
(137 rep)
Sep 19, 2012, 09:57 AM
• Last activity: Dec 18, 2012, 12:31 PM
2
votes
1
answers
2577
views
How to speed up a query with a lot of subqueries
I have a query which uses a series of functions to return the status of each document for a loan.
SELECT loan_number,
borrower_name,
get_application_status(loan_number, date_application_ordered, date_application_verified),
get_appraisal_status(loan_number, date_appraisal_ordered, date_appraisal_verified),
get_endorsement_status(loan_number, date_endorsement_ordered, date_endorsement_verified),
get_waiver_status(loan_number, date_waiver_ordered, date_waiver_verified),
get_insurance_status(loan_number, date_insurance_ordered, date_insurance_verified)
FROM loans_table
WHERE loan_type IN ('A', 'B', 'C')
AND filing_date > TO_DATE('30-OCT-12')
AND filing_date <= ...
...
IF last_upload_date >= date_verified THEN
    RETURN 'Not Verified Since Last Upload';
ELSIF last_upload_date < date_verified THEN
    RETURN 'Verified';
END IF;
END get_waiver_status;
How do I get the query to return faster? I have created indexes on loan_number and filing_date. I tried creating a trigger on documents_table to store the latest upload_date of a document in a new table for the function to read, but if a document is deleted I need to select the next-latest upload_date from documents_table, which I can't do from within the trigger because I get a mutating-table error. I am using PL/SQL on Oracle 10g R2.
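One way to avoid both the per-row function calls and the trigger is to compute the latest upload date per loan once and join it in (names are assumed from the question; each remaining function would become one more CASE column against the same join):

```sql
SELECT l.loan_number,
       l.borrower_name,
       CASE
         WHEN d.last_upload_date >= l.date_waiver_verified
           THEN 'Not Verified Since Last Upload'
         ELSE 'Verified'
       END AS waiver_status
  FROM loans_table l
  LEFT JOIN (SELECT loan_number,
                    MAX(upload_date) AS last_upload_date
               FROM documents_table
              GROUP BY loan_number) d
    ON d.loan_number = l.loan_number
 WHERE l.loan_type IN ('A', 'B', 'C');
```

Because the aggregate is recomputed on every query, deleting a document needs no trigger maintenance at all.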
Boric
(123 rep)
Nov 15, 2012, 04:32 PM
• Last activity: Nov 15, 2012, 05:04 PM
2
votes
1
answers
3325
views
SSIS PL/SQL task hangs with message "Multiple-step OLE DB operation generated errors."
I've configured a Microsoft SQL Server 2008 R2 SSIS package with an Execute SQL Task to run a PL/SQL query on an Oracle 10.2 database.
The Execute SQL Task (named "SQL Update MY_STATUS flag") is using an OLE DB Connection Manager with a "Native OLE DB\Oracle Provider for OLE DB" Provider. All other task settings are default values.
The query is simple:
UPDATE MY_TABLE
SET MY_STATUS = 1
WHERE MY_STATUS = 0
(Note: This query succeeds when run in SQL Developer with a semicolon terminator. Here the terminator is omitted, because if a semicolon is included, an invalid character error is generated by SSIS.)
When I start debugging, the task begins execution and then hangs indefinitely; the package never completes execution.
The debug output window shows this message:
> **Warning: 0x0 at SQL Update MY_STATUS flag: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if
> available. No work was done.**
(The Task behaves the same with the query configured as Direct Input, or as a Variable.)
What does this "Multiple-step" warning mean in this context? What must I do to allow the Execute SQL Task to succeed?
peterk411
(957 rep)
May 31, 2012, 02:10 AM
• Last activity: May 31, 2012, 07:45 PM
Showing page 1 of 20 total questions