How to migrate Unicode UTF-8 CLOB data from Oracle to SQL Server 2017?
4 votes · 1 answer · 1215 views
I am currently assisting in an effort to migrate this application from Oracle 12c to SQL Server 2017. Initially I just performed table-to-table inserts using OPENQUERY against Oracle. I discovered that tables containing CLOB data could not be migrated with a single table-to-table insert query without ending up with dirty data. I can work around this by dynamically generating one insert statement per row and running thousands of lines of insert statements. The problem is that I've now come across a table with over 300k records containing CLOB data. The one-record-at-a-time insert is taking an extremely long time and, at this rate, may run over 24 hours, which is unacceptable.
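Roughly the shape of what I'm generating today (the linked server and all object names below are placeholders, not the real ones):

```sql
-- Rough shape of the row-at-a-time approach; ORACLE_LINK and the table/column
-- names are placeholders. One statement like this is generated per source row.
INSERT INTO dbo.MyClobTable (Id, ClobCol)
SELECT oq.ID, oq.CLOB_COL
FROM OPENQUERY(ORACLE_LINK,
    'SELECT id, clob_col FROM src_schema.my_clob_table WHERE id = 12345') AS oq;
```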
What is my best option for migrating such a large amount of CLOB data from Oracle? Should I use bulk tools such as BCP, BULK INSERT, or OPENROWSET?
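If a file-based route is the answer, this is the kind of thing I had in mind; the path, terminators, and UTF-8 code page below are assumptions I would still need to validate against 2017, not something I have working:

```sql
-- A possible bulk path: export each CLOB table from Oracle to a delimited
-- UTF-8 file, then load it in one shot. File path, terminators, and column
-- layout here are illustrative only; the terminators would have to be chosen
-- so they cannot appear inside the CLOB text (embedded newlines are a risk).
BULK INSERT dbo.MyClobTable
FROM 'C:\migration\my_clob_table.txt'
WITH (
    CODEPAGE        = '65001',   -- UTF-8 source file
    FIELDTERMINATOR = '|~|',     -- something unlikely to appear in the CLOB text
    ROWTERMINATOR   = '0x0a',
    TABLOCK
);
```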
**Edit/Update:** I have since learned that my troubles are most likely due to the UTF-8 encoding at the source Oracle system. Both methods I've used to migrate CLOB and BLOB data have resulted in mismatched or missing rows.
UTF-8 has some known issues with SQL Server. The two I'm dealing with in particular, Linked Server OPENQUERY and SSIS project deployments, are both fixed in SQL Server 2019.
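One workaround I'm considering, sketched below with placeholder names and untested on my side, is to force the conversion on the Oracle end with TO_NCLOB so the provider hands back national-character data instead of raw UTF-8:

```sql
-- Possible workaround (untested): convert to NCLOB inside the pass-through
-- query so the Oracle provider returns national-character (Unicode) data.
-- ORACLE_LINK and the object names are placeholders.
INSERT INTO dbo.MyClobTable (Id, ClobCol)
SELECT oq.ID, CAST(oq.CLOB_COL AS NVARCHAR(MAX))
FROM OPENQUERY(ORACLE_LINK,
    'SELECT id, TO_NCLOB(clob_col) AS clob_col FROM src_schema.my_clob_table') AS oq;
```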
Asked by Geoff Dawdy
(1143 rep)
May 1, 2019, 05:12 PM
Last activity: Apr 3, 2021, 07:01 PM