
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
2 answers
1778 views
How to define text qualifier in BCP in
I have several CSV files that need to be imported into a SQL Server database. The data in the files looks like the following.
productid,product,manufacturers[LF]
1001,TV,"Sony, LG, Panasonic, Toshiba"[CR][LF]
1002,Watch,Sonata[CR][LF]
1003,Computer,"Dell, HCL, Sony, HP"[CR][LF]
....
The first row contains the column names and ends with only [LF]; all of the remaining (data) rows end with [CR][LF]. I have two issues with importing these files.

The first is that I don't know how to define double quotes as the text qualifier in the bcp in command. I checked the documentation and there seems to be no parameter for a text qualifier. I do see some examples that use a format file, but my actual data has hundreds of columns, so writing a format file by hand would be extremely time consuming. Also, if you define a column's delimiter as ",\"", the entire column has to be enclosed in double quotes, whereas in my case only cells that contain a comma are quoted, so a format file cannot handle this kind of case.

The other issue is the row terminator. It seems my only choice is to define [LF] as the row terminator: if I use [CR][LF], the header line and the first data row are treated together as the first row. However, if I define [LF] as the row terminator, then in my tests (my test data does not contain the double-quote text qualifier) the first data row always gets skipped and the data is loaded starting from the second row. This is the command I used for the test:
bcp.exe "db.dbo.temp_2022" in "2022-01.csv" -c -r "\n" -S "servername,port" -t, -T
Does anyone know how to solve these two issues?
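For what it's worth, this is roughly the shape of a non-XML format file for the three-column sample above, with the quotes folded into the field terminators. It only works when the quoted column is always quoted, which (as described) is not the case in this data, and the version number and lengths are illustrative only:

14.0
3
1   SQLCHAR   0   12    ","          1   productid       ""
2   SQLCHAR   0   100   ",\""        2   product         ""
3   SQLCHAR   0   200   "\"\r\n"     3   manufacturers   ""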
DBZHOU (11 rep)
Mar 24, 2023, 01:22 PM • Last activity: Jul 25, 2025, 01:00 AM
1 vote
1 answer
128 views
Slow bcp in performance
I've inherited a process that consolidates a number of tables from remote servers to a central database once a week. It takes a list of servers/tables, bcps each table out locally and then bcps those files into the local database. All the imports go into unindexed heaps, but for some reason the imports take a very long time. For example, the bcp out of one table (100 GB / 316M rows) takes 20 minutes; the bcp in takes 12 hours. I tried pre-allocating the new database file size instead of letting it autogrow, and that seems to save around 15% of the time, but the import is still taking more than an order of magnitude longer than the remote export.

The bcp out is

bcp "" out "" -S "" -T -t "\n" -w -a 65535 -C RAW

The bcp in is

bcp "" in "" -S -T -t "\n" -w -a 65535 -b1000000 -C RAW

It's using the DNS name for the import server (which is localhost); I don't know how much that might impact things. A batch size of a million records also seems a bit unusual. The database being imported to is in SIMPLE recovery. It's SQL Server 2019, but because it's an old script the database is in 2008 compatibility mode. The process is running now, and while the pre-allocation seems to be helping some, it's not as much as I'd hoped. Taking 15x as long to import as it does to export the same data seems like quite a bit of overhead to me, and I was wondering what options might help improve that? I've also been toying with the idea of using BULK INSERT or even SELECT INTO (to avoid the file exchange), but at 12 hours a whack, randomly pushing buttons is pretty time consuming.

EDIT: I took Sergey's suggestion below and started looking at the execution characteristics. Every time I ran sp_whoisactive, the bcp process seemed to be in an ASYNC_NETWORK_IO wait. When I googled that, it sounded like SQL Server is having to wait for the next batch of bytes from bcp. I'm not sure whether that's impacted significantly by using the DNS name instead of localhost (or just leaving -S off entirely). Then I started looking at some of the other stats: CPU is at 1%, RAM is at 96%. SQL Server's memory is unbounded and it's consuming 95% of the memory on the system. Past experience is that once Windows gets above 90%, you start losing a lot of performance to swapping, so that seems like it might be a big leech on the system.

EDIT 2: I've been trying some experiments based on the suggestions here, and here's where I am so far. The machines in question are Windows Server 2019 VMs in GCP, running SQL Server 2019 with ~20 TB virtual drives. The part that does 9 bcp outs and 9 bcp ins, which prompted this post, was taking a bit over 16 hours (about 30 to 40 minutes on the outs; over 15 hours on the ins).

First experiment: make the data file allocation at database creation closer to the end result from the start. In other words, don't start too small and auto-extend all the way; all the data files ended up around 40 GB anyway, so why start them at 10 and grow? That cut the process from 16:05 to 12:09, so about a 24% saving.

Second experiment: cut the bcp in batch size down from a million to 50,000. For testing purposes, I reused the last set of bcp out files and just re-ran the bcp ins. That cut the import down to 1 hour 34 minutes 30 seconds. That appears to have been the long pole in the tent: the batch size of a million appears to have been causing the ASYNC_NETWORK_IO waits, and it caused a lot of transaction log use (even in bulk mode). There appears to be a point of diminishing returns on batch size; I've seen posts saying there was no noticeable improvement above ~10 or 20K, but a million seems to be well into the "more harm than good" range.

Third experiment: adding -h "TABLOCK" to the above. That got the loads down to 1 hour 3 minutes 36 seconds. Still an improvement, but not as huge.

EDIT 3: So I put all the changes into place and let the job run as a scheduled task - and it all sucked again. It appears that a lot of the slowdown in bcp in, and the ASYNC_NETWORK_IO waits, were related to the scheduled task running at a much lower priority (though interestingly the bcp outs didn't get nearly as hung up). I found a PowerShell script that let me set the process priority for a given scheduled task higher.
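Putting the second and third experiments together, the import command ends up looking something like this (server, table and file names are placeholders, not the real ones from the job):

bcp CentralDB.dbo.StagingTable in "D:\bcp\StagingTable.dat" -S localhost -T -w -a 65535 -b 50000 -h "TABLOCK"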
user1664043 (379 rep)
Jun 30, 2025, 11:58 AM • Last activity: Jul 9, 2025, 05:02 PM
3 votes
1 answer
885 views
Does ordinal position of columns matter for SQL Server replication
I am setting up replication on SQL Server 2017: transactional replication with push subscriptions, such that many smaller databases are replicated into a single larger database for reporting. The snapshots wouldn't initialize for some tables in some databases because of "field size too small" errors. The tables have the same "schema" in that they have all the same fields with the same data types and primary key; however, the ordinal position of the columns varies for some of the tables in question between databases. I think the snapshots wouldn't initialize because the bcp command replication uses expects the tables' columns to be in the same order. Two questions:

1. Can anyone confirm that ordinal position matters for replication as far as snapshot initialization goes? It makes sense to me that it would, but I can't find anything that says so explicitly.

2. If I initialize the subscription manually (allowing me to skip the snapshot initialization step), would replication work with the ordinal position of the columns being different for the tables, but the tables otherwise being identical? Is this supported, or do I need to reorder all the columns to make sure they are the same?
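For what it's worth, a quick way to check whether the ordinal positions actually differ between two of the source databases is to diff the catalog views (database and table names below are placeholders):

SELECT t.name AS table_name, c.name AS column_name, c.column_id
FROM SmallDb1.sys.columns AS c
JOIN SmallDb1.sys.tables AS t ON t.object_id = c.object_id
EXCEPT
SELECT t.name, c.name, c.column_id
FROM SmallDb2.sys.columns AS c
JOIN SmallDb2.sys.tables AS t ON t.object_id = c.object_id;

Any rows returned are column name/position combinations that exist in the first database but not the second.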
Booji Boy (155 rep)
Oct 25, 2019, 12:46 PM • Last activity: Apr 27, 2025, 01:06 AM
3 votes
2 answers
554 views
OPENROWSET Bulk insert Text File NULL Columns
I need to automate a weekly import of .txt log files into a SQL Server database. I have already created my table and format file and can get some of the data in using OPENROWSET BULK, but the data in the log files is not consistent. The table has 10 columns, 9 of which are populated from the log file; the last 3 columns are all descriptions, but the log file may have only a single description column populated (the max is always 3). In the screenshot I originally attached (not reproduced here), the highlighted rows work because they have all 3 description values separated by commas. Is there a way to force BCP to always populate the 3 columns, or mark the missing ones as NULL, and then move on to the next row?
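One workaround pattern for ragged rows like this (a sketch with placeholder names, not a tested solution for this exact file) is to bulk load each whole line into a single wide staging column and split it in T-SQL afterwards, so missing trailing descriptions simply end up NULL:

CREATE TABLE dbo.LogStaging (RawLine nvarchar(4000) NULL);

BULK INSERT dbo.LogStaging
FROM 'C:\Logs\weekly.txt'
WITH (ROWTERMINATOR = '\n');  -- no FIELDTERMINATOR: the whole line lands in RawLine (assuming the data contains no tabs)

-- then parse RawLine into the real columns with SUBSTRING/CHARINDEX,
-- letting absent trailing description fields come through as NULL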
Stockburn (501 rep)
Jul 28, 2020, 06:45 AM • Last activity: Jan 20, 2025, 10:02 AM
0 votes
1 answer
700 views
Bcp out - order and locking
I wanted to understand a bit more about the order in which data is exported by the bcp out command. (1) It's written in some places that it returns rows in index order, but what if there is no index on the table? (2) I understand that if we want the data in a particular order we should use an ORDER BY clause with bcp queryout, but what about bcp out? (3) If there is no particular order for bcp out, then how are batches defined while writing to the file? Descriptions of the internals say that it does a simple SELECT * on the table; if that's the case, does bcp just stream the SELECT * results and divide the data into batches purely for the sake of writing to the file? (4) Also, while doing bcp out, is the lock acquired on the whole table or just on the batch? There is a TABLOCK option, but that can be used only during bulk import. (5) Is bcp just a client doing SELECT * on the table, so that it follows the locking rules of whatever isolation level we set for the transaction? (6) Here, it is said that bcp cannot be used with snapshot isolation; can someone explain the reason more clearly?

Is there any article explaining bcp's internal architecture in more detail, covering the ordering and locking of rows? There are many articles and related questions about bcp import, but I couldn't find much on bcp export, hence asking here. Thanks
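For reference, the two forms being compared look like this (server, database and paths are placeholders); only the queryout form accepts an explicit ORDER BY:

bcp MyDb.dbo.MyTable out C:\export\MyTable.dat -S myserver -T -c
bcp "SELECT * FROM MyDb.dbo.MyTable ORDER BY SomeKey" queryout C:\export\MyTable_ordered.dat -S myserver -T -c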
toros (23 rep)
Oct 31, 2022, 02:34 PM • Last activity: Aug 9, 2023, 11:51 AM
-3 votes
1 answer
86 views
Shortcutting/Clowning/Virtualising Azure SQL Database Bulk Loads
I'm currently working on a solution to bulk load (almost) structured data into an Azure SQL Database. The long term plan is to use ADF, but the client is keen to have some static data loaded early in the process so they can start building BI stuff off it - and I'm keen to keep things agnostic so we can evaluate different bulk load methodologies based on efficiency and cost. I'm also trying to make everything as foolproof and automated as possible, but BCP into Azure SQL Database is posing some issues.

**Background**

The source data I'm working with is structured at a glance, but has some serious quality issues. My design goals are:

- 3NF data model underlying everything
- Minimal pre-load manipulation of the source data
- Dumb insert surfaces
- Trigger-based data manipulation
- Minimal staging

My on-prem prototype (SQL Server 2019 Developer) works really well and exactly as intended with some unconventional methods. For example, here's a simplified but typical simulation of the source data:

ProjectNumber | ProjectName | Contractor | State | Cost (AU$k) | Approved
-----------------------------------------------------------------------
1234, Project ABC, XYZ Construction, NSW, 1000, TRUE
1235, Project DEF, TUV Civil, ACT, na, unknown
1236, Project GHI, XYZ Construction, QLD, tba, FALSE
1237, Project JKL, , NSW, 2,000, FALSE

Without griping too much about this (paid subscription!) source data, there are obviously some problems. I've built the underlying schema with the following model:

-- Properly typed tables
CREATE TABLE _Projects (
    ProjectNumber int IDENTITY(1,1) -- PK
  , ProjectName nvarchar
  , ContractorID int -- FK to _Contractors table
  , StateID int -- FK to _States table
  , Cost int
  , IsApproved bit

-- "Exposition View"
CREATE VIEW Projects AS (
    SELECT ProjectNumber
         , ProjectName
         , _Contractors.ContractorName
         , _States.StateName
         , Cost
         , IsApproved
    FROM _Projects
    LEFT JOIN _Contractors ON [...]
    LEFT JOIN _States ON [...]

Typical, sensible stuff. Then the idea was to have INSTEAD OF INSERT triggers to catch the type violations and clean them up. But because the schema of *inserted* is determined by the target table/view, I get pinged for a data type violation for the non-numericals in the int columns. No problem though, because I can just do this:

-- View for Insert Clowning
CREATE VIEW Projects#Insert AS (
    SELECT ProjectName
         , _Contractors.ContractorName
         , _States.StateName
         , CAST(Cost AS varchar(11)) AS Cost
         , CAST(IsApproved AS varchar(5)) AS IsApproved
    FROM _Projects
    LEFT JOIN _Contractors ON [...]
    LEFT JOIN _States ON [...]

-- Retyping on insert over view
CREATE TRIGGER TR_Projects#Insert ON Projects#Insert
INSTEAD OF INSERT
AS
BEGIN
    -- Insert into FK tables first
    [...]

    -- Insert into main table
    INSERT INTO _Projects (ProjectName, ContractorID, StateID, Cost, IsApproved)
    SELECT ProjectName
         , _Contractors.ContractorID
         , _States.StateID
         , CASE WHEN Cost IN ('na', 'tba') THEN NULL
                ELSE CAST(REPLACE(Cost, ',', '') AS int) END AS Cost
         , CASE (IsApproved)
                WHEN 'TRUE' THEN 1
                WHEN 'FALSE' THEN 0
                ELSE CAST(IsApproved AS bit) END AS IsApproved
    FROM inserted
    LEFT JOIN _Contractors ON [...]
    LEFT JOIN _States ON [...]
END

I'm sure there are potential pitfalls to this, but the advantages I see are:

- The normalisation/validation/remediation are all self-contained and centrally maintainable by SQL admins.
- The insert surface is dumb, meaning it should be fairly tool agnostic. Rather than rewriting the same logic for ADF, BCP, OPENROWSET, XML, JSON, etc. processes - we can just pipe the same low quality data in with a zero-logic one-to-one mapping using whatever best fits the source and know that the target will handle it.
- Formatting for OPENROWSET and BCP is super low effort, because I can target the clowned views and get an exact schema for the insert, based on the data types and casts in the view, instead of targeting a table and then manually editing identity columns, FKs and the like.
- The staging is handled by the pseudo-table *inserted*, so there's no need for side-by-side staging or temp tables and two-step inserts.
- In theory, it can sidestep a whole lot of Azure resourcing - no need for ADF or Blob storage when you can use BCP on a remote client, right?

It works really well with my on-prem instance. I've tried BCP and OPENROWSET bulk inserts and both work perfectly. However...

**The Problem**

Azure SQL Database gives me errors because it seems to see through the clown makeup. For example, to zero in on the bit column - I get nothing but NULLs, regardless of the inserted value. To troubleshoot, I changed the insert trigger to capture the data into a debugging table:

ALTER TRIGGER TR_Projects#Insert ON Projects#Insert
INSTEAD OF INSERT
AS
BEGIN
    -- Insert into main table
    INSERT INTO _Debug (InsertValue, CastValue) -- varchar and bit respectively
    SELECT IsApproved, CAST(IsApproved AS bit)
    FROM inserted
END

When I check the data, it's all NULLs in _Debug, even though it's ostensibly inserting varchar(5) values into a varchar(5) pseudo-column, and this is completely different to the behaviour of on-prem SQL 2019. If I change the bit column in _Projects to varchar(5), it all works, because it's clearly ignoring the view definition in favour of the underlying table data type. Interestingly, this works:

INSERT INTO Projects#Insert (ProjectName, ContractorName, StateName, Cost, IsApproved)
VALUES ('Project ABC', 'XYZ Construction', 'NSW', '1000', 'TRUE')
     ,('Project DEF', 'TUV Civil', 'ACT', 'na', 'unknown')
     ,('Project GHI', 'XYZ Construction', 'QLD', 'tba', 'FALSE')
     ,('Project JKL', '', 'NSW', '2,000', 'FALSE')

Everything validates and casts as expected - so it does appear that the Azure SQL version of the BCP *server* side is getting overzealous and tripping up on a false positive, rather than it being a difference between Azure SQL (2022-ish) and on-prem SQL Server 2019. I haven't looked in depth at the integer column *Cost*, but it throws errors because it can't implicitly cast character data. So again, it's favouring the underlying table schema instead of the view schema.

I guess my questions are:

- Has anyone else ever tried this sort of thing, and have they been able to work around any restrictions from BCP?
- Is anyone aware of changes to the SQL engine since 2019 that might explain this? I've found this vague allusion to enforced validation that "might cause scripts to fail", and it seems like a high probability that this enforced validation is the reason why my otherwise valid data load method is failing.
- Failing any of that, are there any alternate methods that meet my design goals? Staging the data is an obvious choice, but I'd really like to completely rule out the possibility of inline retyping, which seems like a more efficient and elegant way of handling the complexities of poor data quality.

And to get some of the obvious clarifying questions out of the way:

- I'm using the *FIRE_TRIGGERS* hint and can confirm my triggers are triggering.
- The CHECK_CONSTRAINTS hint has no discernible effect.
- My BCP format file has the relevant fields as CharTerm along with sensible collation and max length.
- My BCP format file also has the relevant columns as SQLVARYCHAR, so I'm not doing anything silly like loading a bit value into an nvarchar and then casting it back to bit.

Thanks as always!
Vocoder (117 rep)
Jul 20, 2023, 04:32 PM • Last activity: Jul 22, 2023, 09:55 AM
-1 votes
2 answers
150 views
How to increment as 001,002 etc In SQL Server
I have created a stored procedure to export scripts I stored in a table to text files. What this SP does is: if I pass 100 as the count, then each file will contain 100 scripts from the table. I am saving the file names as objects_1, objects_2 and so on. What I need instead is: if I have 100 files, the first file should be named objects_001, the second objects_002, and the last file objects_100. Can someone suggest a way to do that, and please confirm whether this SP logic is correct? Thanks.
alter PROCEDURE USP_GenerateFile_Packets_new(@count int)
AS 
BEGIN 


DECLARE @ExportQry NVARCHAR(500)
DECLARE @sql NVARCHAR(500)
DECLARE @ExportPath NVARCHAR(100)
SET @ExportPath='C:\Upgrade\Scripts\'
DECLARE @BCPFileName NVARCHAR(50)
DECLARE @ReturnValue NVARCHAR(50)
DECLARE @K INT=1

declare @minval int=(select min(id) from vsl.ScriptTable)
--DECLARE @I INT=(SELECT COUNT(*) from [UpgradeDB].vsl.ScriptTable)
declare @count1 int=(select @count-1)
 WHILE @minval <= (select max(id) from vsl.ScriptTable)   -- loop condition assumed; this line was garbled in the original post
   BEGIN
		SET @ExportQry = 'select * from vsl.ScriptTable where id >='+convert(nvarchar(100),@minval)+' and id<='+convert(nvarchar(100),@minval)+'+'+convert(nvarchar(100),@count1)+' order by id'   -- SELECT list assumed; only the tail of this string survived

		SET @sql = 'bcp "' + @ExportQry + ' " queryout ' + @ExportPath +  'OBJECTS_'+convert(nvarchar(100),@k)+'.txt -c -t^| -T -S ' + @@SERVERNAME;

    --select @sql
		EXEC @ReturnValue =  master..xp_cmdshell @sql


   SET @minval=@minval+@COUNT
   set @k=@k+1   
   END

END

--USP_GenerateFile_Packets_new 1500
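On the zero-padded numbering specifically, one option (a sketch, not a drop-in change to the procedure above) is to left-pad the counter before building the file name:

DECLARE @k int = 7;
DECLARE @suffix varchar(3) = RIGHT('000' + CONVERT(varchar(3), @k), 3);  -- '007'
SELECT 'OBJECTS_' + @suffix + '.txt';                                    -- OBJECTS_007.txt
-- on SQL Server 2012 or later, FORMAT(@k, '000') produces the same result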
Subin Benny (77 rep)
Jan 17, 2023, 07:17 AM • Last activity: Jan 17, 2023, 02:08 PM
7 votes
3 answers
4839 views
Very large SQL Server 2016 result sets (over 75 GB) to a CSV file?
What is the best way to get very large SQL Server 2016 result sets (over 75 GB) to a CSV file? The engineers need this output to look for correlations. The bcp route for a 73.5 GB file filled up tempdb and started crashing other applications, including the ETL process. Our users want to export up to 500 GB. What process would use the least amount of resources so that other applications keep running?
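For reference, the bcp route referred to here is the queryout form, roughly like the sketch below (server, query and paths are placeholders). bcp itself streams the result set to the file; any tempdb use comes from the query plan (sorts, spills and the like) rather than from bcp:

bcp "SELECT col1, col2, col3 FROM EngData.dbo.Measurements WHERE BatchId = 42" queryout "E:\exports\batch42.csv" -S myserver -T -c -t,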
Alex (79 rep)
Oct 3, 2017, 04:50 PM • Last activity: Jan 5, 2023, 12:22 PM
1 vote
0 answers
99 views
BCP to SQL Server DB seeing excessive waits
I have a process I built in PowerShell which has been working flawlessly for years on a couple of servers, but when I migrate it to new servers I see a lot of ASYNC_NETWORK_IO waits during the BCP step, resulting in the work taking much longer.

Synopsis of the process: a scheduled task starts a PS script I call the thread manager. This script calls another script that does the actual work using Start-Job. The worker script selects a zip file of archived data, unzips it, and BCPs it into a database. There can be up to 15 threads running at a time. The task/script, the databases, and the files being worked on all exist on the same server. While multiple threads could be bcp'ing into a single database, each thread has its own table to import into.

Server 1: as stated, this has worked flawlessly for years. Physical server, regular drives for storage, SQL Server 2012.
Server 2: the new server seeing waits. VM, mount points, SQL Server 2019.

A 30 GB file should take about 15 minutes to import; I have some threads sitting for over a day.

Observations:
- If there are 10 threads going, only 2 or 3 will show any CPU use.
- Those same 2 or 3 will be the only ones passing any kind of data on the network tab of Resource Monitor.
- The BCP utility log will show no activity for an hour, then have a spurt of activity where a few hundred thousand rows get inserted 1,000 at a time.
- ASYNC_NETWORK_IO waits in SQL Server.
- No blocking.

I am a DBA, so I can more easily answer questions about what I'm seeing in SQL Server. I can log into the server and look at the built-in OS monitoring, but I have no access to the VMware console or storage, so if anything needs to be validated there I will need to pass it along. TIA!
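A lightweight way to see what the import sessions are waiting on, using only built-in DMVs (bulk loads from bcp typically show up with the BULK INSERT command), is something like:

SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.command = 'BULK INSERT';   -- bcp in requests report this command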
Bob Sacamano (11 rep)
Dec 21, 2022, 08:12 PM
1 vote
0 answers
130 views
BCP out significant throughput drop while using read committed snapshot isolation
I am running bcp to output data to a file. I cannot run bcp out under read committed because of the reasons mentioned in https://dba.stackexchange.com/questions/315798/snapshot-replication-corrupted-bcp-files . I tried using read committed snapshot isolation (RCSI) instead, but found a significant throughput drop: with RC I get about 52 MB/s, while with RCSI I get only about 35 MB/s. I also tried a plain SELECT * under RC and RCSI and got 25 and 26 MB/s respectively, which indicates there is not much impact from reading the versions out of *tempdb*. I wanted to understand what in bcp's internals makes the throughput drop so large. The question is more about the relative numbers than the absolutes, but if it helps, I am using SSD persistent disks for both read and write, and the write throughput of the disk I am writing to is rated at 400 MB/s.

**DB version**: SQL Server 2019 Standard CU16-GDR
**Data table**: ORDER_LINE from the standard TPCC database
**BCP version**: 17.10.0001.1
**BCP command used**: bcp ORDER_LINE OUT out/order_line.dat -S tcp:127.0.0.1,1434 -U {username} -P {password} -d TPCC -t "," -r "\n" -F 1 -f format/order_line-n.fmt
**Format file generated using**: bcp ORDER_LINE format nul -S tcp:127.0.0.1,1434 -U {username} -P {password} -d TPCC -n -f format/order_line-n.fmt
**Metrics measured**: network sent_bytes_count

There is a write load on the database generated by HammerDB. I ran bcp once using the read committed isolation level, then set READ_COMMITTED_SNAPSHOT to ON and ran bcp again.
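For reproducibility, the only change between the two runs described above is the database-level RCSI flag (database name taken from the bcp command in the question):

ALTER DATABASE TPCC SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
-- run the bcp out test, then revert:
ALTER DATABASE TPCC SET READ_COMMITTED_SNAPSHOT OFF WITH ROLLBACK IMMEDIATE;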
toros (23 rep)
Nov 3, 2022, 11:28 AM • Last activity: Nov 4, 2022, 04:55 AM
1 vote
1 answer
1035 views
Why is bcp out much faster than select * even when bcp uses select * internally?
I tried exporting a very large table (5B rows) to a CSV file using both SELECT * and bcp out. For the SELECT * approach, I am using the pyodbc module to query the database and write the results to a CSV file; I also tried querying the table directly from the CLI and piping the result to a CSV file, but the throughput was even slightly lower than with pyodbc. For the bcp approach, I am using the bcp out command to export the table directly to a CSV file. I read about the bcp architecture [here](http://www.yaldex.com/sql_server_tutorial_3/ch05lev1sec2.html#:~:text=The%20export%20operation%20retrieves%20data%20by%20executing%20a%20SELECT%20command%20on%20the%20target%20table%20and%20then%20writes%20the%20data%20into%20a%20file) and it says that bcp uses SELECT * internally. I was expecting the throughput of both approaches to be roughly the same, but my experiments gave a SELECT * throughput of 90 GB/hr and a bcp out throughput of 190 GB/hr. I'm curious why there is such a huge difference.
toros (23 rep)
Oct 29, 2022, 04:29 AM • Last activity: Oct 29, 2022, 02:29 PM
4 votes
0 answers
1998 views
BCP neither gives results nor outputs anything when using valid statements but it does throw errors when passing invalid parameters
I have to use the **bcp** command-line tool to export data from a *SQL Server* database to a file on a Red Hat server. I am (apparently) using valid statements, but **bcp** is not producing any kind of output/results. However, when I execute statements with missing or invalid parameters, it displays the respective error. I am looking for the reason for this issue (e.g. defective installation, bad usage of **bcp**, lack of permissions or any other known conflict) and how to fix it.

-------------------------------------------------------------------

**bcp statement:**

bcp fully_qualified_table_name out ./data.txt -c -S server -U user -P password

**bcp usage:**

usage: /opt/microsoft/bin/bcp {dbtable | query} {in | out | queryout | format} datafile
[-m maxerrors] [-f formatfile] [-e errfile]
[-F firstrow] [-L lastrow] [-b batchsize]
[-n native type] [-c character type] [-w wide character type]
[-N keep non-text native] [-q quoted identifier]
[-t field terminator] [-r row terminator]
[-a packetsize] [-K application intent]
[-S server name or DSN if -D provided] [-D treat -S as DSN]
[-U username] [-P password] [-T trusted connection]
[-v version] [-R regional enable]
[-k keep null values] [-E keep identity values]
[-h "load hints"] [-d database name]

**bcp version:**

BCP - Bulk Copy Program for Microsoft SQL Server.
Copyright (C) Microsoft Corporation. All Rights Reserved.
Version: 11.0.2270.0

**SQL Server version (SELECT @@VERSION):**

Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor)

**Distribution:** Red Hat Enterprise Linux 6.7 (KornShell).

-------------------------------------------------------------------

Invalid statements with respective error message (examples).
=======

bcp THAT_TUB_ACE.oh_nerd.table_name out ./data.txt -c -S sr._bear -U you_sr. -P pass_sword

SQLState = S1T00, NativeError = 0
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server]Login timeout expired
SQLState = 08001, NativeError = 11001
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.
SQLState = 08001, NativeError = 11001
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server]TCP Provider: Error code 0x2AF9
...

bcp fully_qualified_table_name ./data.txt -c -S valid_server -U valid_user -P bad_word
bcp fully_qualified_table_name out ./data.txt -c -S valid_server -U valid_user -P bad_word

SQLState = 28000, NativeError = 18456
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Login failed for user 'valid_user'.

SUMMARY.
=======

The objective is to generate a data file using the following syntax (or similar):

bcp fully_qualified_table_name out ./data.txt -c -S server -U user -P password

The facts are:

- When running a valid **bcp** statement there's nothing in the window at all (no output) and no data file is created.
- I cannot use option -T (trusted connection using integrated security) for **bcp**, so I have to specify the _server, user and password_.
- Tried the queryout option on a very simple small table already but still no luck.
- Credentials are valid; I successfully tested them using sqlcmd like the following: sqlcmd -S server -U user -P password -Q 'SELECT * FROM really_small_table'.
- The **bcp** statements under the _"Invalid statements with respective error message (examples)"_ section of this question are just examples of invalid statements, to show that **bcp** actually does do something, just without giving the expected results.
CamelCamelius (141 rep)
Sep 27, 2016, 01:32 PM • Last activity: Jul 5, 2022, 11:28 AM
2 votes
2 answers
787 views
Why BCP creates such big files?
I'm creating some reports via BCP to be sent via email.

create TABLE ##tempsss (
 F1 VARCHAR(6)
,F2 VARCHAR(200)
,F3 VARCHAR(22)
,F4 char(1)
,F5 varchar(10)
,F6 varchar(15)
,F7 varchar(30)
,F8 varchar(10)
,F9 VARCHAR(18)
) --it needs to be varchar to be able to use the header

insert into ##tempsss
SELECT ...fields...
FROM some table or view
go

-------------------------------------------------------------
--step 2
exec xp_cmdshell 'bcp "select ''field1'',''field2'',''field3'' UNION ALL select * from table or view" queryout "filepath\filename.xls" -U sa -P password -w -S servername'

-------------------------------------------------------------
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'dba profile or something',
    @recipients = 'recipients email',
    @subject = 'email subject',
    @file_attachments = 'path\filename.xls'

-------------------------------------------------------------
drop table ##tempsss

With ~9k rows I get a 2 MB file. If I just copy the content of this file and paste it into an empty Excel file, I end up with 160 KB. How can I use bcp but create smaller files? I'm having problems sending files over 1 MB via Database Mail, even though I've set Database Mail to allow attachments of 10 MB+, and SMTP is OK too.
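As a point of comparison (same placeholder names as above, and not presented as a tested fix): -w writes the file as UTF-16, which roughly doubles the size of plain ASCII data, while character mode (-c) writes single-byte characters:

exec xp_cmdshell 'bcp "select ''field1'',''field2'',''field3'' UNION ALL select * from table or view" queryout "filepath\filename.csv" -U sa -P password -c -t, -S servername'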
Racer SQL (7546 rep)
Oct 7, 2020, 12:03 PM • Last activity: Jun 26, 2022, 12:45 PM
2 votes
1 answer
147 views
BCP Insert through View vs Manual insert
Consider the following code

use tempdb
go
drop table if exists ImportTest1;
create table ImportTest1 (
    Column1 int null,
    Column2 int null,
    NewColumn int null
);
insert into ImportTest1 select 1,2,3 union select 4,5,6;
select * from ImportTest1;

drop table if exists DestinationTest1;
create table DestinationTest1 (
    Column1 int null,
    Column2 int null
);
select * from DestinationTest1;
GO
CREATE OR ALTER VIEW TestView1
AS
SELECT Column1 AS Column1,
       Column2 AS Column2,
       NULL AS NewColumn
FROM DestinationTest1
GO

If we run this

INSERT INTO TestView1 (Column1, Column2, NewColumn)
SELECT Column1, Column2, NewColumn
FROM ImportTest1

It fails with the error

Update or insert of view or function 'TestView1' failed because it contains a derived or constant field.

However, if I do the same thing through BCP, it works fine

BCP "SELECT Column1, Column2, NewColumn FROM tempdb..ImportTest1" QUERYOUT C:\BCPTest\test.txt -T -c -t,
BCP tempdb..TestView1 IN C:\BCPTest\test.txt -T -c -t,

What is happening here that allows BCP to import successfully through the view, when we cannot run that insert manually?
kevinnwhat (2224 rep)
Apr 8, 2022, 06:32 PM • Last activity: Apr 9, 2022, 10:08 AM
4 votes
1 answer
791 views
How to use bcp in on a table with an indexed view in SQL Server
I'm using SQL Server 2017 and want to use "bcp in" in a script to populate tables in several databases. I am unable to import data into a table that has an indexed view. The following is an MCVE that reproduces my problem: 1. Run the script at the end of this post to populate a test database with two tables, an indexed view and some data. 2. Run **bcp out** to export the test data from the table Table1 to a file:
bcp [dbo].[Table1] out .\Table1.bcp -S "localhost" -d TestDB -T -k -N
3. Delete the test data from Table1:
DELETE FROM [dbo].[Table1]
4. Attempt to import data into Table1 using **bcp in**:
bcp [dbo].[Table1] in .\Table1.bcp -S "localhost" -d TestDB -T -k -N
Result: fails with the error message INSERT failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'...

*Note: if I drop the index [ix_v1] on the view, this will succeed: the problem only occurs if the table is referenced by an **indexed** view.*

5. Attempt to import data into Table1 using **bcp in** with the -q switch:
bcp [dbo].[Table1] in .\Table1.bcp -S "localhost" -d TestDB -T -k -N -q
Result: fails with the error message Invalid object name '[dbo].[Table1]'.

6. Attempt to import data into Table1 by specifying the table name without [] delimiters, and with the -q switch:
bcp dbo.Table1 in .\Table1.bcp -S ".\SqlExpress17" -d TestDB2 -T -k -N -q
Result: the data is successfully imported. However, this does not meet my requirements, because I want a generic script that will also work with table names that require delimiters (e.g. [dbo].[My Table]).

Question: Is there a way to use bcp to import data into a table with an indexed view, while specifying a delimited, schema-qualified table name on the bcp command line?

**Script to populate an empty database TestDB**
USE [TestDB]
GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Table1](
	[Table1Id] [int] NOT NULL,
	[Table1Name] [nvarchar](50) NOT NULL,
 CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ( [Table1Id] ASC)
 )
GO
CREATE TABLE [dbo].[Table2](
	[Table2Id] [int] NOT NULL,
	[Table2Name] [nvarchar](50) NOT NULL,
	[Table1Id] [int] NULL,
CONSTRAINT [PK_Table2] PRIMARY KEY CLUSTERED (	[Table2Id] ASC)
)
GO
CREATE VIEW [dbo].[v1] WITH SCHEMABINDING
AS
    SELECT 
	T1.Table1Id, T1.Table1Name,
	T2.Table2Id, T2.Table2Name
	FROM [dbo].[Table1] T1 INNER JOIN [dbo].[Table2] T2
	ON T1.Table1Id = T2.Table1Id
GO
CREATE UNIQUE CLUSTERED INDEX [ix_v1] ON [dbo].[v1] (Table1Name, Table2Name)
GO
INSERT INTO Table1
VALUES 
 (1, 'One')
,(2,'Two')
1 . (143 rep)
Apr 5, 2022, 06:34 PM • Last activity: Apr 7, 2022, 10:09 PM
-1 votes
1 answer
574 views
cmdshell query Error "not a valid identifier"
I've included the entire code
USE [JCIHistorianADX2New]
GO
/****** Object:  StoredProcedure [dbo].[GetADX2PointsDyn]    Script Date: 10/18/2021 5:45:38 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
-- =============================================
-- Author:		FLG
-- Create date: 10/15/2021
-- Description:	Process ADX2 Points
-- =============================================
ALTER PROCEDURE [dbo].[GetADX2PointsDyn] 
	-- Add the parameters for the stored procedure here
	@dSDate datetime = '2021/01/01 00:00:00', 
	@dEDate datetime = '2021/01/01 23:45:00'
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

	Create Table #Points
	(
	EMSPointName varchar(100) NOT Null
	);


BULK INSERT #Points
FROM 'C:\Trend Point Map 101521.csv'
WITH
(
    FIRSTROW = 2, -- as 1st one is header
    FIELDTERMINATOR = ',',  --CSV field delimiter
    ROWTERMINATOR = '\n',   --Use to shift the control to next row
    TABLOCK
)
    -- Insert statements for procedure here

	Alter Table #Points
	Add ID int Identity(1,1)

	DECLARE
		@Init int= 1,
		@NumRows int
		
	SELECT @NumRows= COUNT(*) FROM #Points WHERE ID= @Init
	WHILE @Init <= @NumRows   -- loop condition assumed; this line was garbled in the original post
	BEGIN
			DECLARE @SQL NVARCHAR(4000), @cmd NVARCHAR(4000)   -- assumed declarations: @SQL and @cmd are used below but do not appear in the posted fragment

			-- dynamic query partially reconstructed; only the date filter and EXISTS clause survived from the original
			SET @SQL = 'SELECT * FROM [dbo].RawAnalog WHERE UTCDateTime >= @dSDate and UTCDateTime <= @dEDate
			AND Exists(
			Select t.EMSPointName
			from #Points T
			Where t.EMSPointName = [dbo].RawAnalog.PointName)'

			Print 'SQL'
			Print @SQL

	

			set @cmd = N'master.dbo.xp_cmdshell ''BCP ' + '"'  + @SQL +'"' + ' queryout  E:\Honeywell\GSA\Test\ADX2Test.txt -c -t, -S (local) -T '''
			--set @cmd = 'BCP + @SQL + QUERYOUT + @path + "ADX2Test.txt" -c -t, -S (local) -T'

			Print 'CMD'
			Print @cmd

			Exec @cmd
			

			SET @Init= @Init + 1
		END
	
	SELECT @dSDate, @dEDate
END
gunterl (1 rep)
Oct 17, 2021, 11:46 AM • Last activity: Oct 18, 2021, 12:47 PM
0 votes
0 answers
654 views
Unable to open BCP host data-file with AzureDB
I'm trying to load a simple file with a single column from my local machine into my database, which is hosted on Azure. Both the file and the folder are set with read/write permissions for EVERYONE, but it still fails. I'm running the following command:

bcp dbo.table in "C:\test_file.txt" -U"User" -P"Password" -S"tcp:AzureDB.database.windows.net" -d"database_db" -e"error.log"

which results in the following error:

SQLState = S1000, NativeError = 0
Error = [Microsoft][ODBC Driver 11 for SQL Server]Unable to open BCP host data-file

Since this service is running on an external network, I'm unable to give Azure permissions on this folder. Is there anything else I can do?

EDIT: Following @Tibor Karaszi's tips, I ran the DIR command and it turns out "test_file.txt" was actually being read as "test_file.txt.txt". I tried a different file format and it worked out just fine.
Wesley Costa (1 rep)
Sep 30, 2021, 08:43 PM • Last activity: Oct 8, 2021, 06:19 PM
0 votes
1 answer
259 views
BCP cannot read table created in the same transaction when executed from Control-M
How can we export a global temp table using BCP when the original process holds a lock on the table that blocks BCP? We created our own export procedure: we pass it a SELECT or a table_name, the procedure stores the data in a global temp table, and the BCP utility then exports that table. It runs fine on its own, but when we run it from BMC's Control-M system there is a schema modification lock on the table and the procedure effectively waits for itself to release it. Static tables are not a good solution for us. We have tried workarounds such as a self-referencing linked server, or an index on all columns of the global temp table. Our current proposal is to add the export to a queue, which another process would then export asynchronously.
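A minimal sketch of the pattern described (all names are placeholders): the global temp table is created and filled by one session, and the bcp call runs as a separate session via xp_cmdshell, so if the whole procedure is wrapped in an open transaction (which appears to be what happens under Control-M), bcp is blocked until that transaction commits:

CREATE TABLE ##ExportData (Col1 int, Col2 nvarchar(100));
INSERT INTO ##ExportData VALUES (1, N'example');

DECLARE @cmd varchar(4000) =
    'bcp "SELECT * FROM ##ExportData" queryout "D:\export\data.txt" -S ' + @@SERVERNAME + ' -T -c';
EXEC master..xp_cmdshell @cmd;   -- separate session: waits on the locks held by the creating transaction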
owl (310 rep)
Feb 17, 2021, 02:30 PM • Last activity: Jul 16, 2021, 09:22 AM
-1 votes
3 answers
946 views
after insert trigger not firing after inserting with bcp
Content of the CSV file:
TEST_1|sl2sysxbar301.dv.local|{'message_type': 2, 'super_evt_non_error': {'a0': 0, 'a1': 0, 'a2': 0, 'a3': 0, 'event': 5528, 'event_string': 'ENERGY_CTRL_RIGHT_PROX_DOWN', 'history_buffer_id': 3, 'history_buffer_string': 'HISTORY_BUFFER_UI_EVT', 'mid': 10, 'mid_string': 'CONSOLE1'}}|2021-06-26 05:03:20|
TEST_11|sl2sysxbar301.dv.local|{'message_type': 2, 'super_evt_non_error': {'a0': 393280, 'a1': 1, 'a2': 0, 'a3': 0, 'event': 8316, 'event_string': 'BP_FOLLOW_CHECK', 'history_buffer_id': 0, 'history_buffer_string': 'HISTORY_BUFFER_BP', 'mid': 6, 'mid_string': 'MTMR'}}|2021-06-26 05:03:20|
TEST_2|sl2sysxbar301.dv.local|{'message_type': 4, 'mmdsp_log': {'bp_start': {'bp_arg': 0.0, 'bp_code': 4097, 'bp_sn': 393300, 'mid': 6}, 'legacy_log_idx': 0}}|2021-06-26 05:03:20|
TEST_4|sl2sysxbar301.dv.local|{'message_type': 2, 'super_evt_non_error': {'a0': 267, 'a1': 0, 'a2': 0, 'a3': 0, 'event': 100001, 'event_string': 'INTER_MANIP', 'history_buffer_id': 3, 'history_buffer_string': 'HISTORY_BUFFER_UI_EVT', 'mid': 1, 'mid_string': 'AIM2'}}|2021-06-26 05:03:20|
Here is how bcp is invoked from Python:
p = subprocess.Popen(
                f'/opt/mssql-tools/bin/bcp {app.config.db_table_name} in {f.get("filename")}  -S {app.config.db_host},{app.config.db_port} -U {app.config.db_username} -P {app.config.db_password} -t  "|"  -c',
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT
            )
I have this table and trigger
CREATE TABLE [EVT_STREAM].[Event](
	[SystemName] [varchar](25) NOT NULL,
	[Router] [varchar](128) NULL,
  [Event] [nvarchar](max) NULL,
  [ReceivedAt] [datetime] NOT NULL,
  [InsertedAt] [datetime] NULL
)ON [PRIMARY]
GO

CREATE TRIGGER [EVT_STREAM].[trg_EVT_STREAM]
   ON [EVT_STREAM].[Event]
   FOR INSERT
AS
BEGIN
SET NOCOUNT ON;
insert into [EVT_STREAM].[Event](
  [InsertedAt]
)
  VALUES(
    GETUTCDATE()
    )
END
GO

ALTER TABLE [EVT_STREAM].[Event] ENABLE TRIGGER [trg_EVT_STREAM]
GO
Basically, I would like [InsertedAt] to be set at the time when BCP runs and inserts the CSV records into the [EVT_STREAM].[Event] table.
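For reference, bulk loads performed by bcp (and BULK INSERT) do not fire insert triggers unless the FIRE_TRIGGERS hint is supplied explicitly; with the command built in the Python snippet above, that would mean appending -h "FIRE_TRIGGERS", roughly like this (host, port and credentials are placeholders):

/opt/mssql-tools/bin/bcp EVT_STREAM.Event in events.csv -S host,port -U user -P password -t "|" -c -h "FIRE_TRIGGERS"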
vector8188 (109 rep)
Jun 26, 2021, 05:18 AM • Last activity: Jun 27, 2021, 09:50 PM
0 votes
1 answer
1580 views
How to use the BCP utility queryout to a network path with credentials (login required) for SQL Server
I'm having a hard time figuring out on how can I export my selected table in a network path that requires login credentials. ``` 'bcp "select ''StudentName'' UNION ALL SELECT StudentName AS StudentName from [School].[dbo].[ClassRoom]" "queryout \\IPAddress\d$\Log\StudentLog_'+@fileTimeStamp+'_01'+'....
I'm having a hard time figuring out how to export my selected table to a network path that requires login credentials.
'bcp "select ''StudentName'' UNION ALL SELECT StudentName AS StudentName from [School].[dbo].[ClassRoom]" "queryout \\IPAddress\d$\Log\StudentLog_'+@fileTimeStamp+'_01'+'.'+@fileExtension+'" -c -t, -T -S' + @@servername
Running this gives me the error:

>SQLState = S1000, NativeError = 0 Error = [Microsoft][ODBC Driver 11 for SQL Server]Unable to open BCP host data-file NULL

Where in the script should I put the authentication for my network path?
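One common pattern (sketched with placeholder credentials, not a verified fix for this setup) is to authenticate to the share first with net use, run the bcp command, and then drop the mapping, e.g. via xp_cmdshell around the statement above:

EXEC master..xp_cmdshell 'net use \\IPAddress\d$ P@ssw0rd /user:DOMAIN\ServiceAccount';
-- ... run the bcp queryout command built above ...
EXEC master..xp_cmdshell 'net use \\IPAddress\d$ /delete';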
Lawless16 (1 rep)
Jul 11, 2020, 03:41 AM • Last activity: Apr 2, 2021, 04:00 PM