
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
244 views
postgres 11.3 checksum validation post migration
I am in the process of migrating our AWS RDS PostgreSQL database from one account to another with zero downtime. I am going with pg_dump/pg_restore plus AWS CDC for ongoing replication. I am looking for a way to generate and compare checksums to make sure there are no differences or corruption in the data. Q1: Is it possible to generate a checksum at the database level rather than the table level? Q2: Or is doing a SELECT COUNT(*) on all the tables in source/target better? Since there is no downtime, data changes will keep arriving at the source during this process, and I am afraid this will affect the checksum or row-count comparison. Please advise.
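A minimal per-table comparison sketch in PostgreSQL, not taken from the question: the table name my_table and key column id are hypothetical, and the numbers are only comparable if both sides are read from a consistent snapshot (or a quiesced source).

```sql
-- Run the same statement on source and target and compare the two results.
-- md5 over an ordered text rendering of every row.
SELECT count(*)                                    AS row_count,
       md5(string_agg(t::text, ',' ORDER BY t.id)) AS table_checksum
FROM   my_table AS t;
```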
Falcon (101 rep)
Sep 15, 2022, 01:01 PM • Last activity: Jun 8, 2025, 09:10 AM
0 votes
1 answer
896 views
Is there a cross-platform way to compare data in one column on each side of replicated data, like a checksum or hash?
I have an Oracle 12 database with lots of tables, and I am replicating several of the tables (a subset of rows) into a SQL Server 2016 database. The subset of rows can be established with a WHERE clause on the Oracle side. I have two web services that can expose anything I want to from that data, one on each side. **Do you have a suggestion for an approach of what I can expose, then compare to find out if the data between the systems matches?**

I am currently exposing the COUNT(*) from one table, which has a few million rows; that is a no-brainer since it is very efficient. So far, so good. I'm also exposing the SUM of each of a few NUMBER(18,2) columns and comparing it to the corresponding SUM on the SQL Server side. However, this is problematic, as it has to scan the whole table in SQL Server; it is sometimes blocked, and sometimes might cause other processes to block. I imagine similar problems could occur on the Oracle side too. Also, the SUM will not tell me if the rows match, only that the totals match; if an amount was improperly added to one row and subtracted from another, I wouldn't catch it.

I've pondered whether Oracle's STANDARD_HASH might help me, but it seems cumbersome or error-prone to try to generate the exact same hash on the SQL Server side, and it also doesn't help with the inefficiency/blocking nature of the call. So is there any way to have both databases **keep track of a hash, checksum, CRC code, or other summary of a column's data, that is efficient to retrieve**, and that I can then use to compare to have some idea whether the data is the same on both sides? It need not be a perfect solution; for example, comparing SUMs is close but perhaps not quite good enough.

As a first stab, I created a "summary" indexed view on the SQL Server side, with columns derived from SUMs. This makes querying the view very fast, but incurs an additional penalty on every write to the large table underneath. Still, I think it will work, but I'd like to improve on it. Other, better ideas?
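A minimal sketch of the kind of indexed summary view described in the last paragraph, assuming hypothetical dbo.Orders, OrderDate, and Amount names; SQL Server maintains the aggregates on every write, so reading the summary stays cheap.

```sql
-- Indexed ("materialized") summary view; the engine keeps the aggregates
-- current, so the comparison query no longer scans the big table.
CREATE VIEW dbo.OrdersSummary
WITH SCHEMABINDING
AS
SELECT OrderDate,
       COUNT_BIG(*)           AS RowCnt,
       SUM(ISNULL(Amount, 0)) AS TotalAmount
FROM   dbo.Orders
GROUP BY OrderDate;
GO
CREATE UNIQUE CLUSTERED INDEX IX_OrdersSummary ON dbo.OrdersSummary (OrderDate);
GO
```

Grouping by a date (or similar) column also makes the comparison more granular than a single whole-table SUM.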
Patrick Szalapski (153 rep)
Mar 10, 2021, 05:30 PM • Last activity: Jan 7, 2025, 04:07 AM
1 vote
1 answer
407 views
PostgreSQL equivalent to checksum table
What would be the fastest equivalent of `checksum table` that returns the same value in MySQL and in Postgres when the tables hold the same dataset? It could be a function, or just an equivalent of MySQL's checksum in Postgres. I am looking for a way to verify consistency with an additional check after migrating from MySQL to Postgres.
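MySQL's CHECKSUM TABLE value is engine-specific and will not be reproduced by Postgres, so one portable sketch is to hash an identical text rendering of the rows on both sides. Table and column names (t, id, col) are hypothetical, and numeric formatting, collation, and NULL handling have to match for the hashes to agree.

```sql
-- MySQL (mind group_concat_max_len on large tables):
SELECT MD5(GROUP_CONCAT(CONCAT_WS('|', id, col) ORDER BY id SEPARATOR ',')) AS chk FROM t;

-- PostgreSQL:
SELECT md5(string_agg(concat_ws('|', id, col), ',' ORDER BY id)) AS chk FROM t;
```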
Sławomir Lenart (275 rep)
Nov 29, 2024, 07:19 PM • Last activity: Dec 1, 2024, 07:44 PM
-1 votes
1 answer
109 views
Torn page detection algorithm in the SQL Server 2005 docs: how does it work?
I was reading the SQL Server 2005 documentation where they explain torn page detection, and I did not get it. Does anybody have any idea how it works?
Ankis (3 rep)
Oct 31, 2024, 11:08 AM • Last activity: Nov 1, 2024, 04:09 PM
2 votes
1 answer
209 views
Does enabling backup compression in a Back Up Database Task enable checksums, no matter what setting you pick for them?
## Observations

[The documentation says](https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?view=sql-server-ver16#enable-or-disable-backup-checksum-for-a-backup-operation):

> To enable backup checksums in a BACKUP (Transact-SQL) statement, specify the WITH CHECKSUM option. To disable backup checksums, specify the WITH NO_CHECKSUM option. This is the default behavior, except for a compressed backup.

In other words, enabling compression enables checksums. However, in my maintenance plans (unfortunately, they're the GUI ones and not Ola's cool ones), I can see:

- "Backup Compression (On)" in the main menu
- "BackupCompressionOption" set to "On" and "Checksum" set to "False" listed in the properties menu for the Backup Database Task.
- 'Verify backup integrity' ticked in the Modify menu, 'Compress backup' selected, and 'Perform checksum' unticked.

## The Question

Does this suggest that checksums are disabled even though I've explicitly asked for backup compression? Or are the tickboxes/properties for checksums redundant if compression is enabled?

## What I've Tried

* "View T-SQL" crashes SSMS every time that I try.
* Finding the T-SQL in Query Store failed, even though I have it enabled on every database (I searched sys.query_store_query_text for anything like '%backup database%').
* msdb.dbo.backupset has 0 in its has_backup_checksums column for every row, but I don't know whether compression renders that column useless.
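One way to look at it empirically rather than through the GUI is to check what the backups actually recorded; a minimal sketch against the standard msdb.dbo.backupset columns. Whether compression makes the recorded flag unreliable is exactly the open question, but the raw values are worth comparing against a test backup taken explicitly WITH CHECKSUM.

```sql
-- Did recent backups actually carry checksums, and were they compressed?
SELECT TOP (20)
       database_name,
       backup_finish_date,
       has_backup_checksums,      -- 1 = taken WITH CHECKSUM
       backup_size,
       compressed_backup_size
FROM   msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
```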
J. Mini (1237 rep)
Jul 20, 2024, 01:01 PM • Last activity: Jul 22, 2024, 06:38 PM
0 votes
1 answer
768 views
Fix invalid checksum in database that causes pg_dump to fail
I have a Postgres 9.6 database that is corrupted. The hardware it was running on went bad, and there is at least one corrupted page in the DB. When running pg_dump I get the following error:
pg_dump: WARNING:  page verification failed, calculated checksum 24925 but expected 25309
pg_dump: Dumping the contents of table "my_table" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid page in block 250471 of relation base/16384/754450
This particular table is easy to fix even if data is missing, so what I'd like to do is zero out that page entirely and carry on with the DB. But I could not find a way to do that: how do I tell Postgres to ignore the error, delete the data in that page, and go on? pg_dump itself doesn't seem to have an option like this. I also would like to know if there are more checksum errors, but the tool I found for this only exists since Postgres 11 and not in my older version. How do I fix the bad pages by removing all the data in them, so that the database can continue to run and pg_dump can succeed again?
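A minimal salvage sketch, not from the question: PostgreSQL has two superuser-only settings that let reads proceed past damaged pages. Both can silently discard data, so they are normally used on a copy of the cluster; the table name below is hypothetical.

```sql
-- In a superuser session on the copy you are prepared to sacrifice:
SET ignore_checksum_failure = on;   -- warn instead of error on checksum mismatches
SET zero_damaged_pages = on;        -- zero out pages whose headers are damaged
-- Salvage what is readable from this same session (pg_dump opens its own
-- connection, so these SETs do not reach it):
COPY my_table TO '/tmp/my_table.copy';
```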
Fabian (115 rep)
Jul 18, 2023, 10:11 AM • Last activity: Jul 18, 2023, 11:29 AM
0 votes
0 answers
302 views
Checksum algorithm for multiple rows
I am looking for a fast checksum function to be used in Postgres and thought about Adler-32 and CRC32C. Postgres doesn't seem to have such functions implemented by default. The only way I found to implement a hash function was the following:

SELECT ('x' || encode(digest(column, 'method'), 'hex'))::bit(32)::bigint FROM table

The Postgres documentation says that all OpenSSL digest functions are supported. Unfortunately, none of the algorithms above are included. Can someone with more experience help me? My goal is to calculate one checksum over multiple rows whose only purpose is to serve as an integrity check for a subset of data. The column in question contains data of type TEXT. To clarify my question, here is an example of the table, the expected usage, and the outcome.
Example table:

```
id |  col | col_2
----------------
1  |'some'| 100
2  |'rndm'|  90
3  |'data'| 400
4  |  'in'|  50
5  |'here'| 600
.
.
.
```

Query:

```sql
SELECT COUNT(*) as amount, checksum_function(col) as checksum FROM table;
```

Expected result:

```
amount | checksum
-----------------
   20  | hw101h10
```

The datatype of the checksum is irrelevant.
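A minimal sketch of such a multi-row checksum using only built-in functions (md5 rather than Adler-32 or CRC32C, which would need an extension or a custom function); my_table, id, and col are hypothetical names.

```sql
-- One checksum over all rows of a TEXT column, order-stabilised by id.
SELECT count(*)                             AS amount,
       md5(string_agg(col, '' ORDER BY id)) AS checksum
FROM   my_table;
```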
Yuki (41 rep)
Jun 25, 2023, 06:22 PM • Last activity: Jun 26, 2023, 06:14 AM
1 vote
0 answers
314 views
MySQL Checksum differs but cannot find difference in table
I have two tables which are expected to be the same:

CHECKSUM TABLE robot, tbd_robot   # Doesn't match

All the column types (storage format) are the same, and the engine and collation are also the same. The table DDLs are identical. I tried running the intersect (as mentioned here):

SELECT 'robot' AS set, r.* FROM robot r
WHERE ROW(r.col1, r.col2, …) NOT IN (SELECT col1, col2, ... FROM tbd_robot)
UNION ALL
SELECT 'tbd_robot' AS set, t.* FROM tbd_robot t
WHERE ROW(t.col1, t.col2, …) NOT IN (SELECT col1, col2, ... FROM robot)

Again, nothing differs between the two tables. I tried running checksums column by column:

SELECT SUM(CRC32(yourColumnName)) AS anyAliasName FROM robot;   # Redo for each column of each table

Again, nothing differs between the two tables. I also looked at DATA_LENGTH:

SELECT DATA_LENGTH, INDEX_LENGTH, ENGINE, AUTO_INCREMENT, TABLE_SCHEMA, TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = 'robot' AND TABLE_NAME = 'tbd_robot';

Again, DATA_LENGTH does not differ between the two tables. Why is my checksum different if I can't find anything different between the two tables?
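One thing worth double-checking, not stated in the question: ROW(...) NOT IN (...) silently drops rows from the difference report whenever any compared column is NULL, so a real mismatch can hide behind NULLs. A NULL-safe rewrite is sketched below with hypothetical column names; separately, the checksum can legitimately differ when the physical row format differs even though the DDL looks identical.

```sql
-- NULL-safe "rows in robot with no identical row in tbd_robot"
SELECT 'robot only' AS side, r.*
FROM   robot r
WHERE  NOT EXISTS (
         SELECT 1
         FROM   tbd_robot t
         WHERE  t.col1 <=> r.col1      -- <=> is MySQL's NULL-safe equality
           AND  t.col2 <=> r.col2
       );
```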
Wistar (111 rep)
Dec 17, 2022, 03:26 PM • Last activity: Dec 17, 2022, 03:47 PM
4 votes
1 answer
1027 views
What are the performance implications of using checksums vs change tracking for SQL Server
Back in the day, an implementation with checksum_agg was made to check whether "something changed" in a SQL Server table, in order to then make updates in another database. The users want to get the changes across between the systems quickly, so the table is read every minute to calculate the checksum. Multiply this by a number of other tables like this one, and then a number of customers, and it means a lot of data has to be read in order to continually calculate these checksums. I'm thinking it would be a lot less resource intensive to use change tracking. It would potentially still mean asking the database every minute if something changed, but it seems that checking for new rows in the change table would be a much cheaper operation than calculating checksums for all the existing rows in the data tables. Is the difference in performance between using checksum calculations and change tracking worth pursuing? Of course "it depends", but on what?
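For comparison, this is roughly what the change-tracking side looks like; the database, table, key column, and version value below are hypothetical, and the per-minute poll touches only the internal change table rather than rescanning the base table.

```sql
-- Enable change tracking once:
ALTER DATABASE MyDb SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING;

-- Then, each minute, ask what changed since the last synced version:
DECLARE @last_sync bigint = 42;   -- persisted from the previous poll (hypothetical value)
SELECT ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION, ct.MyTableId
FROM   CHANGETABLE(CHANGES dbo.MyTable, @last_sync) AS ct;
```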
Ola Eldøy (143 rep)
Nov 17, 2022, 10:24 AM • Last activity: Nov 17, 2022, 01:08 PM
1 vote
1 answer
1243 views
Checksum for SQL-server tables
I'm in the process of installing applications and databases. In order to check whether a file is correct, I can simply take the checksum. Is there a way to check if a table is correct? (I realise I can drop the entire table into a file and take the checksum of that file, but maybe there's a simpler way.)
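A minimal built-in sketch for SQL Server, with a hypothetical table name: CHECKSUM_AGG over BINARY_CHECKSUM gives an order-independent table-level value, with the usual caveats that some column types are ignored and collisions are possible, so a match is evidence rather than proof.

```sql
SELECT COUNT_BIG(*)                     AS row_cnt,
       CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS table_checksum
FROM   dbo.MyTable;
```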
Dominique (609 rep)
May 12, 2022, 07:33 AM • Last activity: May 12, 2022, 12:00 PM
1 vote
1 answer
89 views
How to detect which indexes have checksums and which ones don't?
Is there a query that will show which tables/indexes need to be rebuilt so that they have checksums available for data verification? The DB was switched recently from torn-page to checksum verification, so only some newer indexes/tables have the required checksums, along with those that index maintenance has rebuilt. I need to see which remaining tables/indexes still need rebuilding to have checksums on their pages.
user2368632 (1133 rep)
May 21, 2021, 08:23 PM • Last activity: May 22, 2021, 12:23 PM
2 votes
1 answer
470 views
How to check if backups are reliable?
I generally have to restore databases to different servers after important data imports or other important and delicate database operations. I need to be sure that my backups up to that point in time or transaction mark are reliable. I don't have the time or the disk space to do a full restore followed by a DBCC CHECKDB to completely verify the backup files (including corruption). RESTORE VERIFYONLY will check:

1) that the backup set is present, and the header fields of pages
2) whether the volumes are readable
3) the checksum (if enabled; BACKUP by default does not use CHECKSUM)
4) the destination drive (space check)

I use the following query a lot:

--query by Jack Vamvas - any suspect database?
SELECT DB_NAME(database_id), [file_id], page_id,
       CASE event_type
            WHEN 1 THEN '823 or 824 or Torn Page'
            WHEN 2 THEN 'Bad Checksum'
            WHEN 3 THEN 'Torn Page'
            WHEN 4 THEN 'Restored'
            WHEN 5 THEN 'Repaired (DBCC)'
            WHEN 7 THEN 'Deallocated (DBCC)'
       END,
       error_count, last_update_date
FROM msdb..suspect_pages

Other than RESTORE VERIFYONLY with checksum, is there any other way to check whether my backups are reliable?
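For reference, the checksum-based variant under discussion looks like this; the path and database name are hypothetical. RESTORE VERIFYONLY ... WITH CHECKSUM can only recheck page checksums that the backup recorded, which is why the backup itself has to be taken WITH CHECKSUM first.

```sql
BACKUP DATABASE MyDb
    TO DISK = N'X:\Backups\MyDb.bak'
    WITH CHECKSUM;

RESTORE VERIFYONLY
    FROM DISK = N'X:\Backups\MyDb.bak'
    WITH CHECKSUM;
```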
Marcello Miorelli (17274 rep)
Jul 16, 2019, 11:11 AM • Last activity: Apr 22, 2021, 02:40 PM
1 vote
0 answers
1315 views
How to repair a block-based RocksDB SST file with a corrupt masked CRC32 checksum in this case?
# Environment

I'm using the Ethereum cryptocurrency in a way which requires something called a full Ethereum archival node, where each new block (roughly a batch of recent transactions) is added to the blockchain database along with all previous states and virtual machine execution steps. More specifically, my case requires running the full open-source [OpenEthereum](https://github.com/openethereum/openethereum) software for this task. This means the underlying NoSQL database is powered by RocksDB and the data is stored in a 6-column format with built-in Snappy compression.

The problem is that OpenEthereum suffers from heavy I/O worker-thread synchronisation issues, which means the rebuild speed of the database is capped by the single-thread performance of the CPU. And Ethereum grew so large that it would now take more than a year to complete a new full archival node sync using the currently fastest per-thread CPU (while writing several petabytes in terms of SSD endurance). This performance issue is a very complex problem to fix. So much so that it's one of the reasons the company which created the software divested from it. So despite its widespread usage, it's likely to remain unfixed, especially since the lack of manpower saw the replacement of some parallelized algorithms with serial and easier-to-maintain ones.

**Such a situation means that while the data is public for free through many websites** (using their own Ethereum archival node copy), **its access as a database is not**, and that I'm not interested in the data from the database but in using the tool which requires it.

# My Problem

My node crashed last month when attempting to read a block deep in the past because a recent transaction referred to it. **The issue lies in the RocksDB backend and seems unrelated to OpenEthereum** (except for some way to fix it, like rebuilding just the corrupt entry by hand). One of my [block-based SST files contains a masked CRC32 checksum error on a single row which prevents RocksDB from decompressing it](https://drive.google.com/file/d/1gstMO4z6OWGm5pYjL5j-BhdkXf3_Wbyq/view?usp=drivesdk). [Using RocksDB's dump tool](https://github.com/facebook/rocksdb/wiki/Administration-and-Data-Access-Tool#dumping-sst-file-blocks), I discovered that all my backups contain the error, which means the issue went unnoticed for years, but nevertheless that this is the sole corrupted entry. Of course, I failed to find a [multi-column version](https://github.com/openethereum/openethereum/issues/264) of https://github.com/facebook/rocksdb/wiki/RocksDB-Repairer.

One of the easiest solutions for rebuilding the file automatically using OpenEthereum would be to remove the damaged block or Ethereum transaction along with all the data/SSTs written after it (I mean, roughly, resetting the database to the last valid block). But being new to NoSQL, not only do I have no idea how to write a program that opens the database and uses DeleteRange(), but I also don't understand whether it would require understanding OpenEthereum's binary format for storing index references. So where do I start for stripping the database?
user2284570 (111 rep)
Feb 21, 2021, 05:13 PM • Last activity: Feb 21, 2021, 05:52 PM
6 votes
1 answer
238 views
How do pages get updated checksum when switching page verification mode from torn page to checksum?
Have an older but large DB (6 TB on the SAN) that still has its page verification option set to TORN_PAGE_DETECTION. How do you apply the new checksum to all existing pages when changing page verification to CHECKSUM mode? Is there any big overhead to be wary of when making this change to the page verification mode? Does this need to be done during a long maintenance window? SQL Server 2014, current DB compatibility level = 120.
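For reference, a sketch of the change itself with a hypothetical database name; the ALTER is a metadata-only change, and existing pages only pick up a checksum when they are next read into memory, modified, and written back (or when they are rewritten wholesale, e.g. by index rebuilds).

```sql
ALTER DATABASE MyBigDb SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;

-- Confirm the setting:
SELECT name, page_verify_option_desc
FROM   sys.databases
WHERE  name = 'MyBigDb';
```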
user2368632 (1133 rep)
Sep 18, 2020, 01:36 AM • Last activity: Sep 18, 2020, 06:10 PM
2 votes
1 answer
562 views
What is the best way to divide data for random groups?
I would like to have a persisted column which will randomly group data into 32 groups according to one varchar key column in the table. My idea was:

SELECT ABS(CAST(HASHBYTES('MD5',[keyColumnFromTable]) AS bigint) % 31)

Questions:

1. Is there any better way to do this except CHECKSUM (which gives different values under different collations) and writing my own functions?
2. Will there be any difference if I use SELECT CAST(HASHBYTES('MD5',[keyColumnFromTable]) AS tinyint) % 31? I have read that CAST to tinyint takes only the last byte of the data into consideration. Will there be any effect on randomness?
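A minimal sketch of the persisted-computed-column form of that idea, with hypothetical table and column names; HASHBYTES is deterministic, so the column can be PERSISTED, and % 32 yields buckets 0 through 31 (the % 31 in the question would give 31 buckets).

```sql
ALTER TABLE dbo.MyTable
    ADD bucket AS (ABS(CAST(HASHBYTES('MD5', keyColumn) AS bigint)) % 32) PERSISTED;
-- Note: ABS can overflow for the single value -2^63; negligible in practice.
```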
axdna (119 rep)
Sep 8, 2020, 02:26 PM • Last activity: Sep 14, 2020, 03:55 AM
0 votes
1 answer
42 views
Database system checksumming to prevent network corruption not caught by TCP checksumming
Since TCP checksumming is not sufficient to detect/prevent all corruption, do modern database systems (e.g. SQL Server, MariaDB/MySQL, Oracle, etc.) use any additional checksumming of network traffic as part of their client/server protocol? I did some searching but wasn't finding anything, which made me wonder whether perhaps they don't and just roll the dice. I got a good answer on Oracle; I would like to hear about SQL Server and MariaDB/MySQL as well.
g491 (121 rep)
Mar 14, 2020, 04:25 AM • Last activity: Mar 15, 2020, 07:00 PM
0 votes
0 answers
305 views
Hash matches with collisions
I'm trying to improve certain matches that happen often on many, many dimension columns. So my choices, as I understand them, are:

1. Match on many columns.
2. Hash the columns, store the hash, match on that.

If I use something like CHECKSUM, is there a known algorithm to handle the collisions? So the match would work like this:

1. Partition the two tables where the checksum hashes match.
2. If there's more than one row returned, filter the results by matching all the dimension columns.

Has anyone tried using CHECKSUM to make hashes faster on a large number of columns? I should say, I'm looking for a SQL/T-SQL solution. I can easily achieve this match with Python or C#.
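A minimal sketch of that hash-first, verify-second pattern in T-SQL, assuming a hypothetical persisted CHECKSUM column (row_checksum) on the target table and hypothetical column names; the equality predicates after the hash match are what make collisions harmless.

```sql
SELECT s.*
FROM   dbo.SourceDim AS s
JOIN   dbo.TargetDim AS t
       ON  t.row_checksum = CHECKSUM(s.col1, s.col2, s.col3)  -- cheap, collision-prone first pass
       AND t.col1 = s.col1                                    -- exact re-check resolves collisions
       AND t.col2 = s.col2
       AND t.col3 = s.col3;
```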
Jamie Marshall (297 rep)
Mar 11, 2020, 11:23 PM • Last activity: Mar 12, 2020, 08:00 PM
2 votes
1 answer
477 views
Impacts of a change to the Page Verify option
There's no doubt about the importance of doing regular integrity checks on production databases. I'm testing the impact of changing the maintenance plans and changing how databases are backed up. First of all, it is necessary to activate the "checksum" page verification mode; I think this change impacts performance only gradually, because the checksum is written when a page is read into memory, changed, and written back to disk.

I have started to test this change with backups. I prepared a backup T-SQL script that saves the duration of each backup command, and I ran each of the following commands 100 times:

- BACKUP DATABASE [MyDB] TO DISK = N'nul' (_ran 100 times_)
- BACKUP DATABASE [MyDB] TO DISK = N'nul' WITH CHECKSUM (_ran 100 times_)

MyDB is a 50 GB database with page verify set to "checksum". The result: backup with checksum is 20% slower.

I started some tests with a larger database (400 GB) and noticed that the first backups (with an empty cache) are generally slower. After the first backups have completed, the duration tends to stabilize. My questions are:

- Does the backup process buffer data in cache? If yes, could this cause a variation in the PLE trend or memory pressure?
- If backups with checksum are 20% slower, what happens to application queries? I don't think my tests are the absolute truth, but...
- Is there any kind of waiting time linked to the checksum process?
- Do you know if there are people online who have tested the impacts of this change? Any additional material would be useful.
Gio (107 rep)
Jan 14, 2020, 09:40 PM • Last activity: Jan 15, 2020, 02:22 PM
1 vote
0 answers
588 views
What All Is Factored Into A MySQL Checksum? (Why Could They Be Different?)
I'm currently going through the excruciating task of moving MySQL databases from one shared web hosting environment to another. This has been a multi-month battle (and I'm learning a lot). After having issues with phpMyAdmin due to large tables (and getting annoyed at always having to check it had moved the data correctly when it was, in fact, silently failing), and experiencing trouble with every other package I could find to help (since I have some large tables that have to be chunked, and some packages would not let me exclude corrupt tables properly), I've custom-coded my own solution in PHP and AJAX to migrate data from the old host to the new host. As a bonus, I can now finally have a working solution to back up the data as well.

I've overcome many challenges, and at this point I almost have things fully working. The checksum feature is super handy for knowing whether I moved the data across correctly (or there was some problem) without checking row by row (which would take a really long time on larger tables). In order to do a checksum, I run this query:

CHECKSUM TABLE ``

on both the old web host/database (accessed remotely) and the new web host/database (accessed locally). For most tables, these numbers match. My transfer process copies the table structure (SHOW CREATE TABLE) then copies the data row by row. Larger tables are chunked, moved across, then reassembled. The entire process is fully automated through AJAX calls.

However, I'm running into some problems where some of the tables have a different checksum after having been moved across. The problem is, both the structure and the data appear to be exactly intact. When I create a PHP script to check row by row, it reports that every row matches in the old and new databases. This is my script to check the tables:

$qGetTable = "SELECT * FROM ".$sTableName."";
if (!($rGetTableOld = $sqlOld->query($qGetTable))) {
    die("Failed to get table in old database! ".$sqlOld->error);
}
if (!($rGetTableNew = $sqlNew->query($qGetTable))) {
    die("Failed to get table in new database! ".$sqlNew->error);
}
while ($aGetTableOld = $rGetTableOld->fetch_assoc()) {
    if (!($aGetTableNew = $rGetTableNew->fetch_assoc())) {
        print_r($aGetTableOld);
        die("Ran out of rows in new table!");
    }
    if ($aGetTableOld != $aGetTableNew) {
        echo "Found the mismatch!\n";
        print_r($aGetTableOld);
        print_r($aGetTableNew);
    }
}

As you see, I'm even checking row by row, so even order changes should be discovered. It does not find any mismatches. SHOW CREATE TABLE gives the exact same response on both servers. (Double-checked with DiffChecker.) As per https://dev.mysql.com/doc/refman/5.7/en/checksum-table.html:

> If the checksums for two tables are different, then it is almost certain that the tables are different in some way.

However, it's quite vague about what to check to troubleshoot any difference.

> CHECKSUM TABLE reports a checksum for the contents of a table. You can use this statement to verify that the contents are the same before and after a backup, rollback, or other operation that is intended to put the data back to a known state.

I noted one thing, but I'm not using TIME, DATETIME, or TIMESTAMP; I always store time as a plain UNSIGNED INT.

> The checksum value depends on the table row format. If the row format changes, the checksum also changes. For example, the storage format for temporal types such as TIME, DATETIME, and TIMESTAMP changed in MySQL 5.6 prior to MySQL 5.6.5, so if a 5.5 table is upgraded to MySQL 5.6, the checksum value may change.

I can't figure out what is different, since my data looks fine and intact. However, the different checksum bothers me a lot, and I want to understand what could possibly be different. Only a few tables are affected; the rest seem to work fine. Does anyone have any insight into what I should be checking or looking into to get to the bottom of this? Thanks very much; I really appreciate your help.
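Given the quoted doc passage about row format, one concrete thing worth comparing (a suggestion, not something from the question) is the physical row format and create options on both hosts: SHOW CREATE TABLE does not always surface them, and CHECKSUM TABLE is sensitive to them.

```sql
-- Run on the old and the new host and diff the output.
SELECT TABLE_NAME, ENGINE, ROW_FORMAT, VERSION, TABLE_COLLATION, CREATE_OPTIONS
FROM   information_schema.TABLES
WHERE  TABLE_SCHEMA = DATABASE()
ORDER BY TABLE_NAME;
```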
I can't figure out what is different since my data looks fine and intact. However, the different checksum bothers me a lot and I want to understand what could possibly be different. Only a few tables are affected and the rest seem to work fine. Does anyone have any insight into what I should be checking or looking into to get to the bottom of this? Thanks very much and I really appreciate your help.
azoundria (113 rep)
Nov 12, 2019, 07:21 PM
2 votes
2 answers
308 views
Will I be notified that BACKUP encountered a CHECKSUM error if I use CONTINUE_AFTER_ERROR?
I am in the process of adding the WITH CHECKSUM flag on our daily SQL backups, as part of an effort to better ensure data integrity. I definitely want to *know* if a checksum error is ever encountered, but I also don't want my job to stop dead in the water in the middle of the night; I want it to finish backing up the "bad" database, then continue backing up the other databases on the server. If I use BACKUP WITH CHECKSUM, CONTINUE_AFTER_ERROR, will it still throw the appropriate error (SEV 22 or Error 825 or whatever) that will trigger my associated alert? Or does CONTINUE_AFTER_ERROR suppress this altogether, and I'd only know about the problem if I parse the job step output? I would simply test it, but I don't have a database with known CHECKSUM inconsistencies.
BradC (10023 rep)
May 10, 2019, 09:27 PM • Last activity: May 13, 2019, 09:59 PM