
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
2855 views
ORA-31626: job does not exist during schema export
I'm exporting a schema with the following command:

expdp system/system@xxx.xxx.xxx.xxx:1521/orcl schemas=schema-name directory=test_dir dumpfile=Schema.dmp logfile=SchemaLog.log

but it fails with the following errors:

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
ORA-31626: job does not exist
ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"
ORA-06512: at "SYS.KUPV$FT", line 1009
ORA-04063: package body "SYS.DBMS_LOGREP_UTIL" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_LOGREP_UTIL"

I searched extensively and tried the solutions suggested for "ORA-31626: job does not exist" and "ORA-04063: package body SYS.DBMS_INTERNAL_LOGSTDBY has errors", but none of them solved the problem. Could you please help me resolve this?
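The ORA-04063/ORA-06508 lines above point at invalid SYS packages rather than at Data Pump itself, so a hedged first diagnostic (not part of the original question; the SYSDBA connection method is an assumption) is to list and recompile invalid objects:

~~~sh
# Hedged diagnostic sketch: list invalid SYS objects, then recompile everything
# with the stock utlrp.sql script. Run as a privileged user on the database server.
sqlplus -s / as sysdba <<'EOF'
SELECT owner, object_name, object_type, status
FROM   dba_objects
WHERE  status = 'INVALID' AND owner = 'SYS';
-- utlrp.sql recompiles all invalid objects in the database
@?/rdbms/admin/utlrp.sql
EOF
~~~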
Alpha (153 rep)
Dec 30, 2020, 08:58 AM • Last activity: Jul 27, 2025, 12:06 PM
2 votes
1 answer
522 views
How to (pg_)restore selected tables with all indexes?
I would like to restore selected tables (currently listed with -t) including all of their indexes, without having to spell out each index with -I, because the indexes change over time: new ones are created and old ones are dropped. What is the best way to perform such a restore in a PostgreSQL database? I have read that it is possible to restore the whole database and then drop the unnecessary tables, but I'm afraid that process would take much longer and put too much load on the database.

-t recreates only the data; with -I I have to name each index myself, and I want that to happen automatically.
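One approach worth sketching (an illustration only, not from the question: db.dump, mytable and targetdb are invented, and it assumes a custom-format dump whose generated index names contain the table name) is to filter the archive's table of contents instead of listing indexes by hand:

~~~sh
# List the archive's table of contents, keep every entry that mentions the table,
# then restore only those entries. Whatever indexes exist in the dump come along
# automatically, with no -I needed.
pg_restore -l db.dump > full_toc.lst
grep -i 'mytable' full_toc.lst > filtered_toc.lst
pg_restore -L filtered_toc.lst -d targetdb db.dump
~~~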
Bartłomiej Kanabus
Dec 5, 2022, 09:46 AM • Last activity: Apr 11, 2025, 05:02 PM
3 votes
1 answer
641 views
investigate contents of a sqlite3 file
I have created a new, clean firefox profile from scratch and added a couple of bookmarks. These, I have learned, are stored in the places.sqlite file inside the profile directory. This file seems to be too big (1.2 MB) and I am wondering why:

$ ls -lh places.sqlite
-rw------- 1 martin martin 1.2M Sep 8 07:24 places.sqlite

When I open it with sqlite3 from the command line, I see there are a couple of tables:

$ sqlite3 places.sqlite
SQLite version 3.7.13 2012-06-11 02:05:22
sqlite> .tables
moz_anno_attributes  moz_favicons      moz_items_annos
moz_annos            moz_historyvisits moz_keywords
moz_bookmarks        moz_hosts         moz_places
moz_bookmarks_roots  moz_inputhistory

but none of them contain much data (I have tried a SELECT * FROM query on all of them). Further, if I dump the entire database into the file dump.sqlite:

sqlite> .output dump.sqlite
sqlite> .dump

the resulting file is only a couple of KB:

$ ls -lh
total 1.2M
-rw------- 1 martin martin 39K Sep 8 19:01 dump.sqlite
-rw------- 1 martin martin 1.2M Sep 8 07:24 places.sqlite

I am aware of the VACUUM command in sqlite3. I ran it, but it had no effect; the database file is still the same size. How can I investigate why the database file is so big and what it contains?
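A hedged sketch of where to look next (not part of the original question): the gap between the file size and the dump size is often free pages, which the built-in pragmas report directly.

~~~sh
# page_size * page_count is the file size; freelist_count is how many of those
# pages are currently empty and merely reserved.
sqlite3 places.sqlite 'PRAGMA page_size; PRAGMA page_count; PRAGMA freelist_count;'
# If the sqlite3_analyzer tool is installed, it prints a per-table space breakdown.
sqlite3_analyzer places.sqlite | head -n 40
~~~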
Martin Vegter (75 rep)
Sep 8, 2016, 05:05 PM • Last activity: Mar 5, 2025, 04:01 PM
0 votes
1 answer
1287 views
Restoring dump(backup): ERROR 1022 (23000) at line 44: Can't write; duplicate key in table
I'm running tests to change a table in my InnoDB database to the ARCHIVE engine, but I had a problem trying to restore a dump after the change. After several tests, I found the problem is related to the value declared in the AUTO_INCREMENT attribute of CREATE TABLE. But this script is generated automatically. How can I fix this error so that I can restore the dump quickly, without having to edit it?

...
--
-- Table structure for table tabela
--
DROP TABLE IF EXISTS tabela;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE tabela (
  cd_tabela int(11) NOT NULL AUTO_INCREMENT,
  ...
  PRIMARY KEY (tabela)
) ENGINE=ARCHIVE AUTO_INCREMENT=3081487 DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table tabela
--
LOCK TABLES tabela WRITE;
/*!40000 ALTER TABLE tabela DISABLE KEYS */;
INSERT INTO tabela VALUES (1,...), ...   (the error appears on this line)
...
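Since the question itself traces the failure to the AUTO_INCREMENT=... clause that mysqldump writes into CREATE TABLE, one hedged workaround (credentials and names below are placeholders) is to strip that clause while streaming the dump into the server, so the generated script never has to be edited by hand:

~~~sh
# Remove the AUTO_INCREMENT=<n> table option on the fly and pipe the result
# straight into mysql; the dump file on disk is left untouched.
sed 's/ AUTO_INCREMENT=[0-9]*//' dump.sql | mysql -u user -p dbname
~~~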
Matheus Sant Anna de Oliveira (131 rep)
Jan 26, 2018, 04:45 PM • Last activity: Mar 3, 2025, 06:05 PM
2 votes
1 answer
2305 views
Progress of mongodump operation jams at a random point
We are executing remote mongodumps to back up collections. For some time now, sporadically one of these dumps stops progressing and keeps logging the same number of processed documents over and over:

2016-06-27T06:35:06.428+0000 [##############.......] col 1140198/1853120 (61.5%)
2016-06-27T06:35:09.428+0000 [##############.......] col 1140198/1853120 (61.5%)
2016-06-27T06:35:12.428+0000 [##############.......] col 1140198/1853120 (61.5%)

This time the dump has been stuck at this point for about four hours and more than 5000 log lines showing 61.5% progress. I can still access the MongoDB server from the backup machine, MongoDB answers queries, and there are no anomalies in the logs. MongoDB version: 2.6.12
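A low-impact check that may help narrow this down (a hedged sketch, not from the question; the host name is a placeholder) is to ask the server what its operations are doing while the progress line is frozen:

~~~sh
# Works with the legacy mongo shell matching a 2.6 server: dump reads show up as
# getmore operations, so a frozen dump with no corresponding server-side operation
# points at a stalled network connection rather than a slow server.
mongo --host source-host.example.com --eval 'printjson(db.currentOp())'
~~~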
Björn (181 rep)
Jun 27, 2016, 06:43 AM • Last activity: Feb 8, 2025, 04:05 AM
0 votes
1 answer
380 views
Best way to run performance tests in a production PostgreSQL 13 database?
First, I'm asking this question because I'm studying to be a DBA. Recently I learned a lot about database locking, and some questions arose in my mind. Suppose I'm a DBA in charge of a huge database for an online sales website. Some users reported that the site is slow, and my boss asks me to test the most used queries to see whether they are running smoothly or not. If I understand locking correctly, just running an EXPLAIN ANALYZE on our tb_products table would probably lock it, and that could have a negative impact on sales being processed at that time (i.e. some users could not buy their products because I'm "testing" queries). My question is: how can I properly test queries on a production database? In my (probably naive) way of thinking, what I need to do is dump the database and its configuration, create a "sample" database from that, and run the tests there. But then, if I'm testing locally on my machine, even with the same configuration, the numbers will differ, because I'm testing on hardware that is not the server hardware, right? I can't picture how these tests should be done when I think about it. It is probably a silly question, but any help to clarify this would be appreciated.
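One detail worth separating out: EXPLAIN ANALYZE of a plain SELECT takes only the same weak lock the SELECT itself would, so it does not block concurrent sales; the real risk is EXPLAIN ANALYZE on INSERT/UPDATE/DELETE, which actually executes the statement. A hedged sketch of the usual safeguard (database, table and predicate are invented):

~~~sh
psql -d shopdb <<'SQL'
BEGIN;
-- EXPLAIN ANALYZE really runs the statement, so wrap data-modifying tests in a
-- transaction and discard the changes afterwards.
EXPLAIN (ANALYZE, BUFFERS) UPDATE tb_products SET price = price WHERE id = 42;
ROLLBACK;
SQL
~~~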
André Carvalho (136 rep)
Mar 9, 2022, 02:35 PM • Last activity: Jan 14, 2025, 09:00 PM
-1 votes
1 answer
57 views
How to identify the type of database file from a data dump?
I have a file that I suspect is a database, but I don't know what kind of database it is (SQLite, Postgres, etc.). How can I test whether it's a database, and if so, what kind of database it is? If it is a database, it uses a publicly available database system. I tried reading the file with a hex editor but didn't see any obvious hint in it (at least, nothing obvious to me).
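A hedged sketch of the usual first checks (the file name is a placeholder): most database files begin with a recognizable magic string, so the header alone often answers the question.

~~~sh
file mystery.db                 # libmagic reports e.g. "SQLite 3.x database" outright
head -c 16 mystery.db | xxd     # SQLite files start with the bytes "SQLite format 3\0"
strings -n 8 mystery.db | head  # vendor or version strings sometimes survive in other formats
~~~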
Franck Dernoncourt (2093 rep)
Jan 13, 2025, 12:56 AM • Last activity: Jan 13, 2025, 03:36 PM
1 vote
1 answer
1802 views
How do I import a dump for a deleted tablespace?
I'm trying to import a database dump, but I am encountering this error:

ORA-39083: Object type TABLE:"MYSCHEMA"."MINF" failed to create with error:
ORA-00959: tablespace '_$deleted$5$0' does not exist
Failing sql is:
CREATE TABLE "MYSCHEMA"."MINF" ( "P_MID" VARCHAR2(16 BYTE) NOT NULL ENABLE , "XML_ORIG_MSG" "SYS"."XMLTYPE" , "XML_MSG" "SYS"."XMLTYPE" , "P_OFFICE" CHAR(3 BYTE) NOT NULL ENABLE , "P_DEPARTMENT" CHAR(3 BYTE) DEFAULT '...' , "P_MSG_TYPE" VARCHAR2(15 BYTE) DEFAULT '...' , "P_MSG_SUB_TYPE" VARCHAR2(10 BYTE) , "P_CDT_MOP" VARCHAR2(6 BYTE) , "P_PREFERRED_CDT_MOP" VARCHAR2(6

It seems the tablespace for the MINF table was dropped before the dump was taken. Is there a workaround to still import this table?

Update: I have tried an impdp statement, with the same result:

./impdp system/****** DIRECTORY=GPPDUMP DUMPFILE=SPOCBC_SCD02.ROCKY.20120919102739.dmp.1,SPOCBC_SCD02.ROCKY.20120919102739.dmp.2,SPOCBC_SCD02.ROCKY.20120919102739.dmp.3,SPOCBC_SCD02.ROCKY.20120919102739.dmp.4 LOGFILE=baru.log REMAP_SCHEMA=SPOCBC_SCD02:OCBCGPP5 REMAP_TABLESPACE=_$deleted$5$0:P_DATA
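One thing that can silently go wrong with the command shown above is the shell expanding the dollar signs in _$deleted$5$0 before impdp ever sees them. As an illustration only (file names are copied from the question; the quoting of the deleted tablespace name is an assumption), moving the parameters into a parfile sidesteps shell expansion:

~~~sh
cat > impdp_remap.par <<'EOF'
DIRECTORY=GPPDUMP
DUMPFILE=SPOCBC_SCD02.ROCKY.20120919102739.dmp.1,SPOCBC_SCD02.ROCKY.20120919102739.dmp.2,SPOCBC_SCD02.ROCKY.20120919102739.dmp.3,SPOCBC_SCD02.ROCKY.20120919102739.dmp.4
LOGFILE=baru.log
REMAP_SCHEMA=SPOCBC_SCD02:OCBCGPP5
REMAP_TABLESPACE="_$deleted$5$0":P_DATA
EOF
./impdp system/****** PARFILE=impdp_remap.par
~~~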
Rudy (151 rep)
Sep 20, 2012, 03:50 AM • Last activity: Jan 10, 2025, 06:01 PM
0 votes
0 answers
109 views
SQLDeveloper encoding and export database
I've been asked to make a dump of an Oracle 11g EE database. However, I can't use the Oracle Data Pump utility, since I have no direct access to the host machine, nor do I think the database user has the necessary privileges. So I opted for the Database Export utility that SQL Developer provides. But I have the following doubts:

- Am I actually viewing the correct encoding through SQL Developer? I'm asking because I can see a lot of question marks inside some tables. I'm wondering whether the application that saved the data used the wrong encoding, or whether SQL Developer is using the wrong encoding and I'm not viewing it correctly.
- How do I export with the correct encoding? I'm aware that SQL Developer has a utility that lets the user export a script file with both DDL and DML instructions, but it asks me to specify the file encoding. I've run some tests myself, but since I don't completely trust the tools I've used (e.g. WinMerge does not recognise the Windows equivalent of ISO 8859-15, as far as I know), I'm wondering if anyone has been in my shoes and done better.

The internal encoding of the DB should be WE8ISO8859P15, which I think is the equivalent of ISO 8859-15. When I use the export functionality of SQL Developer, it asks me which encoding the file should use. I've used this utility twice, exporting once with a Windows equivalent of ISO 8859-15 (I can't remember exactly which one right now; I'll update this section ASAP) and once with UTF-8 encoding. Both WinMerge and Meld tell me that the only difference between the files is the encoding (and not the characters).
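Before choosing an export encoding, it helps to confirm what the database itself stores. A hedged sketch (not from the question; connection details are placeholders):

~~~sh
sqlplus -s user/password@//dbhost:1521/service <<'EOF'
-- WE8ISO8859P15 here would confirm the ISO 8859-15 assumption in the question
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
EOF
~~~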
Dont Throw Me Away (1 rep)
Jan 6, 2025, 04:11 PM
0 votes
3 answers
41035 views
Encountering "Snapshot too old" error during execution of expdp command
I am facing an issue while running an expdp command against my production database (Oracle 11g on Windows):

cmd> expdp 'sys/123@PROD as sysdba' DUMPFILE=BACKUP_02082017_BACKUP.dmp LOGFILE=BakupLog_02082017_BACKUP.log SCHEMAS=A1,B2,C3,D4.. exclude=statistics consistent=y

It was taking more than 1 day to export the database, which is about 7 GB in size. The bigger problem is that the export fails with this error:

ORA-31693: Table data object "owner"."PASSWORD_HISTORY" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 19 with name "_SYSSMU19_255734752$" too small

I have already raised undo_retention from the default 900 to 16500, but the same error still occurs. I am now planning to increase the retention to 10 hours, i.e. 36000. Is that viable? I am not sure whether my undo tablespace can handle it. Some more details:

> show parameter undo_%;

NAME              TYPE     VALUE
----------------- -------- ---------
undo_management   string   AUTO
undo_retention    integer  16500
undo_tablespace   string   UNDOTBS1

> select file_name, tablespace_name, trunc(bytes/1024/1024) mb, trunc(maxbytes/1024/1024) mm FROM dba_data_files where tablespace_name = 'UNDOTBS1';

FILE_NAME                                TABLESPACE_NAME   MB     MM
---------------------------------------- ---------------- ------ ------
C:\APP\ADMIN\ORADATA\PROD\UNDOTBS01.DBF  UNDOTBS1          5630   32767

Size of undo with current undo_retention:
Actual Undo size [MBytes]: 5630
UNDO retention [Sec]: 16500
Needed Undo Size [MBytes]: 909.433359

I am stuck with this issue. Can anyone please advise how to deal with this error? Thanks in advance.
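The "Needed Undo Size" figure quoted above is normally derived from the peak undo-block generation rate in V$UNDOSTAT. A hedged sketch of recomputing it for the planned 36000-second retention (the 8192-byte block size is an assumption; substitute your db_block_size):

~~~sh
sqlplus -s / as sysdba <<'EOF'
-- needed undo (MB) = target retention * peak undo blocks per second * block size
SELECT ROUND(36000 * MAX(undoblks / ((end_time - begin_time) * 86400)) * 8192 / 1024 / 1024)
       AS needed_undo_mb
FROM   v$undostat;
EOF
~~~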
SHS (152 rep)
Feb 8, 2017, 01:09 PM • Last activity: Dec 10, 2024, 03:35 PM
1 vote
1 answer
54 views
Prepend text to mysqldump output at dump creation time
I am suffering from slow mysqldump/mariadb-dump imports. I found that disabling autocommit, unique_checks and foreign_key_checks boosts import performance. I do that by prepending dump.sql with SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0; and by appending COMMIT;SET unique_checks=1;SET foreign_key_checks=1; to the end of it. Appending is trivial, but when prepending I have to compromise between time and space consumed. Prepending in place with sed -i '0,/^/s//SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;/' dump.sql can take a lot of time for large dumps, but it keeps a single file. Concatenating with cat into new-dump.sql is fast, but it creates another dump file and therefore uses a lot of space (even though it is only temporary). Is it possible to instruct mysqldump/mariadb-dump to prepend and append custom text to the dump while it is creating it?
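A minimal sketch of writing the prologue and epilogue at dump time instead of editing the file afterwards (credentials, options and the database name are placeholders):

~~~sh
# A brace group lets the echoes and the dump share one output stream, so there is
# no second pass over the file and no temporary copy.
{
  echo 'SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;'
  mysqldump --single-transaction -u user -p'secret' mydb
  echo 'COMMIT;SET unique_checks=1;SET foreign_key_checks=1;'
} > dump.sql
~~~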
sanjihan (285 rep)
Sep 14, 2024, 11:40 AM • Last activity: Sep 16, 2024, 08:31 AM
-1 votes
3 answers
379 views
How can I make a huge (140Gb) sql dump of a MariaDB database?
I need to make a dump of my database, which is approx. 140 GB. Can I just run the dump command against the whole database? Do you know if there are any limits on dump size? I am dumping remotely. I need this dump so I can load it into a new instance in the cloud. We do not have access to the large database yet.
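mysqldump/mariadb-dump has no hard size limit of its own; the practical limits are disk space and transfer time. A hedged sketch for a dump of this size (host, credentials and names are placeholders):

~~~sh
mysqldump --single-transaction --quick --routines --triggers \
          -h old-db.example.com -u user -p'secret' mydb \
  | gzip > mydb.sql.gz
# Later, load it into the new cloud instance:
# gunzip -c mydb.sql.gz | mysql -h new-db.example.com -u user -p mydb
~~~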
Jopsiton (1 rep)
Apr 25, 2023, 02:54 PM • Last activity: Aug 3, 2024, 07:06 AM
7 votes
1 answer
8226 views
How to change temporary directory for pg_dump?
I'm running out of space on the hard drive that holds an 18 GB PostgreSQL database. When I try pg_dump to a different drive, PostgreSQL creates temporary files on the disk it's dumping from, so I run out of space and the dump fails. This is the command I used:

~~~sh
pg_dump -U myusername 'database' --format=tar > /anotherdrive/filetodump.tar
~~~

It eats free space the same way as with gzip. I tried this from Stack Overflow and a small file is created in the new directory, but nothing else, and pg_dump still writes to the original disk. How do I change the temporary directory for pg_dump?

Note: My work_mem setting is pretty high already; I can't change that. My database version is 9.0.13.
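A hedged workaround, assuming the temporary files come from the tar archiver (which needs scratch space to know member sizes before writing them): the custom format streams straight to wherever stdout points, so nothing extra lands on the full drive.

~~~sh
pg_dump -U myusername --format=custom database > /anotherdrive/filetodump.dump
# restore later with: pg_restore -U myusername -d database /anotherdrive/filetodump.dump
~~~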
kissgyorgy (345 rep)
Oct 29, 2013, 01:33 AM • Last activity: Aug 1, 2024, 12:21 AM
0 votes
1 answer
42 views
Importing mysql file fails printing last line and I find ^@^@^@^@^@^@^@^@^@^ on that line
I am trying to import some relatively large databases from a dump file of over 100 GB. When I try to import the dump file with a command like mysql -h localhost -u username -p dbname < backup.sql, it fails after some hours and prints the last line it was processing. I cannot scroll up to see whether the start of an error message precedes it, because I am in screen and screen only shows the end of the output. What I get there is a dump line like:
,('daga88.bet',0,1,1,0,6,257078,-132553,1662996449,179113,598788,NULL,NULL,0,0,1712710000,NULL,1710150000,1652513002,NULL),('daga88.com',1,1,1,0,6,266021,0,1672675038,266021,266021,NULL,NULL,0,0,1672675098,0,1708690000,1672687391,NULL),('daga88.live',0,1,1,0,6,471459,0,1655308937,471459,471459,NULL,NULL,0,0,1707210000,0,1710150000,1655345529,NULL),('daga888.live',0,1,1,0,7,344485,0,1655736029,344485,344485,NULL,NULL,0,0,1707640000,0,1710150000,1655753000,NULL),('daga88vn.net',1,1,1,0,8,620701,0,1648227623,620701,620701,NULL,NULL,0,0,1697470000,0,1715620000,1652513003,NULL),('daga99.net',1,1,1,0,6,692132,0,1582125484,329203,746817,'not found',NULL,0,1683060000,1712150000,0,1713570000,1652513047,NULL),('daga999.com',1,1,1,0,7,382440,0,1630243622,382440,382440,NULL,NULL,0,0,1716600000,0,1717490000,1652513049,NULL),('daga999.net',1,1,1,0,7,630164,0,1581778443,630164,630164,NULL,NULL,0,0,1709620000,0,1714150000,1652513051,NULL),('dagaa.in',0,1,0,0,5,219833,-76305,1646767866,219833,330146,NULL,NULL,0,0,1701070000,0,1710150000,1652513053,NULL),('dagaa.mx',0,1,0,0,5,28433'.
Then I open the dump file with vi editor to look at that line and I see this:
('dagaa.mx',0,1,0,0,5,28433^@^@^@^@^@^@^@^@^@^@^@^@^@^
I don't see any error in the MySQL log; maybe an error was printed at export time. What could that mean? I assume my dump is corrupted? What could cause this? I remember seeing this before but cannot remember the cause or the fix. Could this be fixed by tuning some MySQL settings to process more data at once, like innodb_redo_log_capacity or other parameters? It's hard to just "try" things, because the failure happens after 10-12 hours of processing (importing the .sql file).
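A hedged sketch for confirming the suspicion (not from the question): the ^@ characters vi shows are NUL bytes, which usually means the dump was truncated or corrupted when it was written or copied, not rejected by the server.

~~~sh
grep -obUaP '\x00' backup.sql | head   # byte offsets of NUL bytes (GNU grep)
tail -c 200 backup.sql | xxd           # does the file end mid-statement?
~~~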
adrianTNT (206 rep)
Jun 29, 2024, 01:30 PM • Last activity: Jul 8, 2024, 03:13 PM
0 votes
1 answer
1374 views
Restore .dmp file into csv
We have asked our clients for certain data in a csv file format and we just got a .dmp file. I have been trying to open it using MySQL as if it were .dump but it doesn't work. I have seen other posts where they restore .dmp files with Oracle but I have 0 experience with Oracle (and very little with databases overall). Is there any alternative to get the contents of the .dmp file into a csv format?
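A hedged sketch of the usual triage (file, directory and user names are placeholders): the file header typically tells you whether the .dmp is an Oracle export, and Data Pump's SQLFILE mode lists its contents without importing anything.

~~~sh
strings -n 6 data.dmp | head    # classic exp files show an "EXPORT:" banner near the top
head -c 64 data.dmp | xxd       # Data Pump exports have a binary header instead
# If it turns out to be a Data Pump export, extract the DDL it contains without loading data:
impdp user/password DIRECTORY=dump_dir DUMPFILE=data.dmp SQLFILE=contents.sql
~~~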
User981636 (101 rep)
Sep 9, 2015, 03:36 PM • Last activity: Apr 29, 2024, 03:06 PM
6 votes
2 answers
8696 views
How to create a blank version of a PostgreSQL database
I've been using pgAdmin to create a database and test it. I'm at a point now where I need to install this db on different servers. I've been using pg_dump to do this and I've been able to muddle my way through. But I'm wondering if there's a way to "reset" all the "current" values in any sequence that I have set up? If you have any other general suggestions on how to create "empty" or "blank" databases, can you let me know?
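A minimal sketch of one way to do this (database names are placeholders): a schema-only dump carries no rows and no setval calls, so every sequence in the new database starts from its initial value.

~~~sh
pg_dump --schema-only -U postgres mydb > schema.sql
createdb -U postgres mydb_blank
psql -U postgres -d mydb_blank -f schema.sql
# An already-populated sequence can also be reset individually:
# ALTER SEQUENCE some_seq RESTART;
~~~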
dot (781 rep)
Nov 21, 2013, 10:40 PM • Last activity: Apr 23, 2024, 11:42 PM
1 vote
1 answer
153 views
How to import MySQL RAW Database files without connection error
Unfortunately, I did not create a 'dump' file with the backup (Data Export) process in MySQL Workbench 8.0.26. When I copy the old database files to the new data directory, the server fails to start with the message:
Could not connect to MySQL: Can't connect to MySQL server on 'localhost' (10061) (code 2003)
I did a clean install of MySQL Server 8.0.26 and Workbench 8.0.26 on Windows Web Server 2008 R2, and after that install the MySQL Server works fine. But when I stop the server and copy the old database files into the new data directory, the MySQL Server fails to start. I have read that this is not the usual way to back up and restore a MySQL database, but I wasn't aware of that before. Is there any way to convert the old database files into a dump file outside of MySQL Workbench? Or is there some way to import the old MySQL database files into the new instance without the MySQL Server failing to start? Thank you.
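A hedged outline of the file-copy route (POSIX-style commands standing in for the Windows equivalents; paths are placeholders): for InnoDB, the per-database folders alone are not enough, the whole data directory has to move together, ideally between matching server versions, and the error log says exactly why startup failed.

~~~sh
systemctl stop mysql                           # on Windows: stop the MySQL service instead
cp -a /backup/old-datadir/. /var/lib/mysql/    # everything, including ibdata1 and the redo logs
chown -R mysql:mysql /var/lib/mysql            # on Windows: ensure the service account has access
systemctl start mysql
tail -n 50 /var/lib/mysql/*.err                # startup errors are written to the error log
~~~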
life777eternal (13 rep)
Mar 20, 2024, 03:44 AM • Last activity: Mar 20, 2024, 02:15 PM
0 votes
3 answers
5700 views
Export, based on query, into INSERT text file
I need to export some data into a file. The data are selected with a query that will often involve multiple joins. I want the import operation to be as easy as possible. I was thinking of building an INSERT file with a method like this: Oracle PL/SQL: Dump query result into file. But I am wondering whether a tool already exists to export data based on a query. If one does, I would be just as happy with a binary file, as long as importing it afterwards is straightforward. I see there is exp, but it doesn't seem to really handle queries, only basic WHERE clauses.
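One hedged sketch of doing this with Data Pump (table, join and directory names are invented): materialize the query first, then export the resulting table, which keeps the later import as simple as a normal impdp.

~~~sh
sqlplus -s user/password <<'EOF'
CREATE TABLE export_snapshot AS
SELECT o.id, c.name, o.total
FROM   orders o JOIN customers c ON c.id = o.customer_id;
EOF
expdp user/password DIRECTORY=dump_dir TABLES=export_snapshot \
      DUMPFILE=snapshot.dmp LOGFILE=snapshot.log
~~~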
Yugo Amaryl (433 rep)
May 7, 2014, 04:02 PM • Last activity: Dec 26, 2023, 03:10 PM
6 votes
2 answers
695 views
SQL Server generating sporadic Memory Dumps
We're running a 2014 SQL Server. For the past couple of years, the server has generated memory dumps with no rhyme or reason, every 2-7 days, locking up most processes and forcing us to restart the SQL service to get it going. Here's a snippet of the error file:

Exception Address = 00007FFAD7FCBAA2 Module(combase+000000000003BAA2)
Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION
Access Violation occurred reading address 00000000D8063158
Input Buffer 74 bytes -

Anything else we should explore? Or is our only option to have Microsoft analyze the .MDMP file(s)?

Points to mention:
- We have it patched to the latest Service Pack, with the latest Cumulative & GDR updates; it hasn't made a difference: Microsoft SQL Server 2014 (SP3-CU4-GDR) (KB5014164) - 12.0.6439.10 (X64)
- There are no specific SQL statements or stored procedures that correlate with when these memory dumps occur.
- The SQL database files run on a separate hard drive from the OS. It's a 1 TB drive with 250 GB free at any given time.
- The machine is a Windows Server 2012 R2 virtual machine.
- CHKDSK does not find any errors on the OS & data drives.
- DBCC checks go through without issues.
- OPENQUERY & OPENROWSET functions are used quite frequently, using the latest ACE & ODBC drivers.
- If I query sys.dm_server_memory_dumps, here's the record count by date:

creation_time  QTY
2022-09-13     1
2022-09-09     38
2022-09-07     11
2022-09-06     21
2022-09-04     1
2022-08-25     2
2022-08-24     11
2022-08-23     7
2022-08-22     1
2022-08-21     1
2022-08-17     9
2022-08-09     7
2022-07-29     1
2022-07-27     1
2022-07-22     1
Depth of Field (233 rep)
Sep 13, 2022, 03:47 PM • Last activity: Nov 12, 2023, 07:59 PM
0 votes
1 answer
155 views
humanly readable dump of an SQL-Server database
I am looking for some data, but I don't know the name of the table or of the column that holds it. As you might know, it's possible to create a backup of a SQL Server database. Running a grep on that backup does reveal that the data I'm after exists, but it does not tell me the name of the table and/or column. Is there a way to find data inside a SQL Server database, either with some SQL or by creating a human-readable text file from the database? Thanks in advance
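A hedged starting point (server name, database name and the assumption that the value lives in a string column are mine, not the question's): ask the catalog which tables and columns could hold the value, then query those candidates directly instead of grepping a backup.

~~~sh
sqlcmd -S myserver -d mydb -E -Q "
  SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
  FROM   INFORMATION_SCHEMA.COLUMNS
  WHERE  DATA_TYPE IN ('char','nchar','varchar','nvarchar')
  ORDER  BY TABLE_NAME, COLUMN_NAME;
"
~~~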
Dominique (609 rep)
Sep 12, 2023, 09:27 AM • Last activity: Sep 12, 2023, 11:12 AM