Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
260
views
Changing FileTable FILESTREAM data folder fails with error "The container can not be dropped"
I want to move my database's FILESTREAM folder from drive D to drive F. I successfully transferred the data to the new folder, but when I try to remove the old FILESTREAM container I get this error:
**"The container can not be dropped because changes exist that require a log backup. Take a log backup and then retry the ALTER DATABASE operation."**
When I try to take a transaction log backup, I encounter another error saying I should enable FILESTREAM on this instance, even though FILESTREAM is already enabled.
This is my code:
ALTER DATABASE MyDatabase
REMOVE FILE MyDatabase_Files;
GO
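The error text itself names the fix: the old container still holds changes that need a log backup before the file can be removed. A hedged sketch using the names from the question (assumes the database uses FULL or BULK_LOGGED recovery, and a backup path that exists):

```sql
-- 1) Back up the tail of the log so the container's changes are captured
BACKUP LOG MyDatabase TO DISK = N'F:\Backups\MyDatabase_log.trn';
GO
-- 2) Retry dropping the old FILESTREAM container
ALTER DATABASE MyDatabase
REMOVE FILE MyDatabase_Files;
GO
```

If the log backup itself fails with the FILESTREAM error, it can help to compare `SERVERPROPERTY('FilestreamConfiguredLevel')` with `SERVERPROPERTY('FilestreamEffectiveLevel')`: a changed instance setting does not take effect until the service restarts.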
Thanks for any help.
Mohammad Ali Barati
(11 rep)
May 5, 2021, 12:32 PM
• Last activity: May 21, 2025, 06:06 PM
0
votes
2
answers
89
views
How to divide a FileTable into several FileTables on several disks in SQL Server?
I have a DMS database that uses FILESTREAM. I want to distribute the data across several disks based on year. How can I split this database across several disks in SQL Server?
I tried the partitioning feature, but as far as I can tell it doesn't support FileTables. So I created 3 FILESTREAM filegroups and 3 filegroups. When I insert data into these 3 FileTables, I get errors on the FileTable constraints for path_locator, file_type and cached_file_size. How can I partition a FileTable manually?
---
I have a document management system for my company to manage company documents (~2 TB). Given this scenario, what would be the best strategy regarding SQL maintenance (backup, restore, etc.) and use of storage space? How can we reduce the volume of the database on the RAID storage? Challenges we are facing: a full backup takes too much time; a DB restore would take very long if a disaster happened; the database and backup files are too big, and we have trouble storing them on the RAID storage.
I want to partition or divide my FileTable and store the FILESTREAM blobs across several filegroups (several directories on several disks). How do I do that, and how do I move records from the main FileTable to the new FileTables based on creation_time, storing the streams in the new directories? This would make backup and restore faster by working per filegroup.
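Since table partitioning does not apply to FileTables, one manual approach is one FILESTREAM filegroup per disk/year and one FileTable placed on each via `FILESTREAM_ON`. A hedged sketch (all database, filegroup and path names are assumptions):

```sql
-- One FILESTREAM filegroup per disk
ALTER DATABASE Dms ADD FILEGROUP FS2022 CONTAINS FILESTREAM;
ALTER DATABASE Dms ADD FILE (NAME = FS2022, FILENAME = 'D:\FS\2022')
    TO FILEGROUP FS2022;
ALTER DATABASE Dms ADD FILEGROUP FS2023 CONTAINS FILESTREAM;
ALTER DATABASE Dms ADD FILE (NAME = FS2023, FILENAME = 'E:\FS\2023')
    TO FILEGROUP FS2023;

-- One FileTable per filegroup
CREATE TABLE Docs2022 AS FileTable
    FILESTREAM_ON FS2022
    WITH (FILETABLE_DIRECTORY = 'Docs2022');
CREATE TABLE Docs2023 AS FileTable
    FILESTREAM_ON FS2023
    WITH (FILETABLE_DIRECTORY = 'Docs2023');

-- Move existing files by year; this copies the blobs, so run it in
-- batches, and delete the originals once each batch is verified.
-- path_locator and the other constrained columns are auto-generated.
INSERT INTO Docs2022 (name, file_stream)
SELECT name, file_stream
FROM Docs
WHERE YEAR(creation_time) = 2022;
```

The constraint errors mentioned in the question are consistent with trying to copy `path_locator`, `file_type` or `cached_file_size` explicitly: those columns are system-maintained, so inserts into a FileTable should supply only `name` and `file_stream` (plus an optional explicit `path_locator` for folder placement).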
Ramon
(11 rep)
Aug 25, 2024, 09:18 PM
• Last activity: Aug 28, 2024, 09:22 AM
1
votes
2
answers
276
views
Strategy for maintaining a large FILESTREAM database in SQL Server
I have a document management system for my company to manage company documents (~2 TB, 1.5 million+ rows in the File table).
The DB is SQL Server 2019. Files are saved as FILESTREAM BLOBs in SQL Server and can be downloaded/uploaded via a web frontend/API.
The DB size is roughly 2 TB, 95% of which is FILESTREAM blobs, growing at about 1% per day.
We have a problem backing up the database: it takes too much time. The other problem is that we don't have enough space on the RAID storage.
Considering this scenario, what would be the best strategy for us regarding SQL maintenance (backup, restore, etc.) and best use of storage space? How can we reduce the volume of the database on the RAID storage?
Challenges we are facing:
- A full backup takes too much time.
- A DB restore would take very long if a disaster happened.
- The database and backup files are too big, and we have trouble storing them on the RAID storage.
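One common mitigation for a database of this shape is filegroup (piecemeal) backups, so the huge FILESTREAM filegroup is backed up on its own schedule and the small relational part can be backed up and restored quickly. A hedged sketch (database, filegroup and path names are assumptions; filegroup-level restore needs the FULL recovery model):

```sql
-- Back up the small, hot relational part frequently
BACKUP DATABASE Dms FILEGROUP = 'PRIMARY'
    TO DISK = N'B:\Backups\Dms_primary.bak' WITH COMPRESSION;

-- Back up the big FILESTREAM filegroup separately, less often
BACKUP DATABASE Dms FILEGROUP = 'DocsFS'
    TO DISK = N'B:\Backups\Dms_fs.bak';

-- Log backups tie the pieces together for a piecemeal restore
BACKUP LOG Dms TO DISK = N'B:\Backups\Dms_log.trn';
```

Note that `WITH COMPRESSION` gains little on already-compressed documents (PDFs, images, video), which is part of why full backups of FILESTREAM-heavy databases are so large.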
Ramon
(11 rep)
Jun 23, 2024, 06:31 AM
• Last activity: Jun 23, 2024, 10:22 AM
3
votes
2
answers
3695
views
SQL SERVER FileTable/FILESTREAM share suddenly becomes unavailable
We had difficulty setting up a FileTable share on a SQL Server 2014 instance. After giving up on trying to name it after our instance, we left the share name as mssqlserver, and it finally came online ("online" meaning that if you right-click the FileTable in SSMS you can "Explore FileTable Directory"). However, it seems to randomly go offline again after some time, and you receive the usual canned error message:

So, after disabling/re-enabling FILESTREAM at the server level and database level via T-SQL/SSMS/Configuration Manager, it somehow comes back, but I have to fiddle with the setting in all of those places for it to come back, and there seems to be no reliable sequence of steps that does it.
Has anyone out there experienced this? Will a restart of the service/server help? If not, will a repair of SQL Server help?
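When this happens, it may help to compare the configured and effective FILESTREAM levels before changing anything, since a mismatch (for example after a settings change without a service restart) produces exactly this "works, then stops" behavior. A hedged diagnostic sketch:

```sql
-- Instance level: what SQL Server is actually running with
SELECT SERVERPROPERTY('FilestreamConfiguredLevel') AS configured_level,
       SERVERPROPERTY('FilestreamEffectiveLevel')  AS effective_level,
       SERVERPROPERTY('FilestreamShareName')       AS share_name;

-- Database level: non-transacted access must be FULL for the share to work
SELECT DB_NAME(database_id) AS db, non_transacted_access_desc, directory_name
FROM sys.database_filestream_options;
```

If `configured_level` and `effective_level` differ, a service restart is what actually brings the share back, which would explain why no particular sequence of enable/disable steps seems reliable.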
Eric Swiggum
(694 rep)
Mar 3, 2017, 02:36 PM
• Last activity: Apr 25, 2022, 04:29 PM
0
votes
0
answers
132
views
Unable to get FileStream Length with After Insert trigger
I asked this question on StackOverflow, but it mostly received comments about the use of variables and no answer. I program mainly in VB.Net, so my T-SQL knowledge is limited. The reason I'm populating a `FileProperties` table (especially for non-transactional file copies) is to build relational data that will work with EF6. I'm hopeful exposure here may shed some light on my issue.
I'm using the FileTables feature and have set up a trigger to populate a `FileProperties` table whenever a file is copied to the SQL Server file share.
I'm trying to grab the file size (`cached_file_size`), and I get a value of zero. I set up the trigger the same way a standard FileTable does, using `DATALENGTH(file_stream)`. The code below uses the inserted table's `file_stream`:
ALTER TRIGGER [dbo].[FileTable_Insert_Trigger]
ON [dbo].[Files]
AFTER INSERT
AS
BEGIN
    IF (ROWCOUNT_BIG() = 0)
        RETURN;
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM inserted)
        RETURN;
    DECLARE @s_id UNIQUEIDENTIFIER,
            @fs   VARBINARY(MAX),
            @nm   NVARCHAR(255),
            @fp   NVARCHAR(MAX),
            @cfs  BIGINT,
            @ft   NVARCHAR(255);
    SELECT @s_id = ins.stream_id FROM inserted ins;
    SELECT @fs = ins.file_stream FROM inserted ins;
    SELECT @nm = ins.name FROM inserted ins;
    SELECT @fp = ins.file_stream.GetFileNamespacePath() FROM inserted ins;
    SELECT @ft = ins.file_type FROM inserted ins;
    SELECT @cfs = DATALENGTH(@fs);
    INSERT INTO [FileProperties] (stream_id, [name], filepath, file_type,
                                  cached_file_size, DateAdded, UserID)
    VALUES (@s_id, @nm, @fp, @ft,
            @cfs, CURRENT_TIMESTAMP, 1);
END
I've also tried grabbing the computed value directly from the FileTable:
SELECT @cfs = Files_1.cached_file_size
FROM [dbo].Files AS Files_1
WHERE Files_1.stream_id = @s_id;
Still getting zero. What am I missing here? Thank you.
*Note:* I tried playing around with getting the `DATALENGTH` of other values, such as the name (`@nm`) and stream_id (`@s_id`), and I do get those values correctly.
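For what it's worth, a sketch addressing two likely culprits (schema names are taken from the question): scalar variables capture only one row of `inserted`, and, more importantly, when a file is copied over the share, SQL Server inserts the row before streaming the content in, so an AFTER INSERT trigger can legitimately see a zero-length `file_stream`; the subsequent UPDATE sees the final size.

```sql
ALTER TRIGGER [dbo].[FileTable_Insert_Trigger]
ON [dbo].[Files]
AFTER INSERT, UPDATE  -- UPDATE fires once the content has been streamed in
AS
BEGIN
    SET NOCOUNT ON;
    IF (ROWCOUNT_BIG() = 0) RETURN;

    -- Refresh the size for rows recorded earlier with a zero-length stream
    UPDATE p
    SET cached_file_size = DATALENGTH(i.file_stream)
    FROM FileProperties AS p
    JOIN inserted AS i ON i.stream_id = p.stream_id;

    -- Set-based insert: handles multi-row inserts, unlike one variable per column
    INSERT INTO FileProperties (stream_id, [name], filepath, file_type,
                                cached_file_size, DateAdded, UserID)
    SELECT i.stream_id, i.name, i.file_stream.GetFileNamespacePath(),
           i.file_type, DATALENGTH(i.file_stream), CURRENT_TIMESTAMP, 1
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM FileProperties AS p
                      WHERE p.stream_id = i.stream_id);
END
```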
merlot
(1 rep)
Oct 8, 2021, 12:30 AM
• Last activity: Oct 8, 2021, 04:47 AM
2
votes
1
answers
441
views
MS SQL attach MDF/LDF without FileTable folder
Is it possible to attach a DB that had a FileTable w/o the FileTable files?
We were hit with ransomware and the image files are corrupt but the mdf/ldf are fine. I can lose the images, but I really need to get the DB back. I have googled all around but can't seem to find a solution to this.
Mike Grove
(21 rep)
Sep 16, 2021, 02:41 PM
• Last activity: Sep 17, 2021, 10:20 AM
0
votes
1
answers
246
views
Can't open Microsoft Access Database inside a Microsoft SQL Server Filetable Folder
We recently started migrating the data on our file server to SQL Server (version 14) FileTables, and I encountered a problem with MS Access databases (.accdb, Access 2007).
When the .accdb file is located in a "normal" Windows shared folder on the server, I can open it with no problems.
When I copy the same file into the FileTable folder, I get an exception that the database is locked by an unknown user and can't be opened or locked.
All other file types I tested work fine on the FileTable share.
The users log into the file share with local accounts on the server; I've added permissions for these accounts in SQL Server.
Is there something different to take into account with MS Access databases located in FileTables regarding permissions?
Alexander Kroiss
(3 rep)
Nov 28, 2019, 08:28 AM
• Last activity: Jul 15, 2020, 05:15 AM
0
votes
1
answers
163
views
Wrong result from the FileTableRootPath function in SQL Server
I just restored a database backup, and now the result of the function [FileTableRootPath()](https://learn.microsoft.com/en-us/sql/relational-databases/system-functions/filetablerootpath-transact-sql?view=sql-server-ver15) is incorrect.
Where can I change the value that this function returns?
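For context: the value is composed of the instance's FILESTREAM share name plus the database's FILESTREAM directory name, and the database-level part can change on restore. A hedged sketch for inspecting both and correcting the database-level part (database and directory names are assumptions):

```sql
-- Instance-level share name (set in SQL Server Configuration Manager)
SELECT SERVERPROPERTY('FilestreamShareName') AS instance_share;

-- Database-level directory name
SELECT DB_NAME(database_id) AS db, directory_name
FROM sys.database_filestream_options;

-- Change the database-level part if it is wrong after the restore
ALTER DATABASE MyRestoredDb
    SET FILESTREAM (DIRECTORY_NAME = N'CorrectDirectoryName');
```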
Farzad Karimi
(103 rep)
Jan 28, 2020, 11:43 AM
• Last activity: Jan 28, 2020, 12:17 PM
3
votes
1
answers
512
views
MS SQL FileTable CreatedBy and LastModifiedBy
When using a SQL FileTable, I would like to store additional metadata for each file, such as CreatedBy and LastModifiedBy. Looking at the [SQL File Table Schema](https://learn.microsoft.com/en-us/sql/relational-databases/blob/filetable-schema?view=sql-server-2017), this information is not available by default.
I am wondering if it's possible to capture such information by, for example, adding a trigger to the table that uses [current_user](https://learn.microsoft.com/en-us/sql/t-sql/functions/current-user-transact-sql?view=sql-server-2017) to determine the current user and store it in a separate table?
I'm exploring the feasibility of SQL FileTable for a project and have never used this tech.
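The trigger idea from the question can be sketched like this (table and column names are assumptions; `ORIGINAL_LOGIN()` is used instead of `CURRENT_USER` because access through the file share arrives under the caller's Windows login):

```sql
-- Side table keyed on the FileTable's stream_id
CREATE TABLE dbo.FileAudit (
    stream_id      UNIQUEIDENTIFIER PRIMARY KEY,
    CreatedBy      SYSNAME NOT NULL,
    LastModifiedBy SYSNAME NOT NULL
);
GO
CREATE TRIGGER trg_Documents_Audit
ON dbo.Documents   -- the FileTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    MERGE dbo.FileAudit AS t
    USING (SELECT stream_id FROM inserted) AS s
        ON t.stream_id = s.stream_id
    WHEN MATCHED THEN
        UPDATE SET LastModifiedBy = ORIGINAL_LOGIN()
    WHEN NOT MATCHED THEN
        INSERT (stream_id, CreatedBy, LastModifiedBy)
        VALUES (s.stream_id, ORIGINAL_LOGIN(), ORIGINAL_LOGIN());
END
```

One caveat to test before relying on this: depending on how clients reach the share, the login the trigger sees may be a service or pooled account rather than the end user.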
Henrique Zacchi
(141 rep)
Aug 15, 2019, 11:51 AM
• Last activity: Sep 27, 2019, 10:46 AM
3
votes
0
answers
64
views
FileTable in 2016 and Transaction Log Backups larger than expected
I am testing FileTable in SQL Server 2016 to store files of different sizes (from PDF documents around 1 MB to video files around 200 MB).
When I add a file to the FileTable and then take a transaction log backup, the backup is always around 2x the size of the files added.
Ex: a 42 MB file results in an 87 MB tlog backup; a 13 MB file results in a 27 MB tlog backup.
Has anyone used FileTable and noticed transaction log backups that were much larger than the files added? Files are only being added to the share; nothing is updated or edited between adding a file and taking the log backup.

Christopher Ian Stoll
(31 rep)
Sep 17, 2019, 04:16 PM
• Last activity: Sep 17, 2019, 04:55 PM
0
votes
1
answers
273
views
Quality Control: Check file names in a folder against filenames in a table
I have a records system in an Oracle database. The records system is used to retrieve pdf, jpeg, video documents, etc.
The design is quite simple: there is a table called `infrastructure_records` that has a column with hyperlinks to files:
+------------+------------------------------------------+
| project_id | file_path |
+------------+------------------------------------------+
| Project 1 | X:\Records_System\Folder A\Project 1.pdf |
| Project 2 | X:\Records_System\Folder B\Project 2.jpg |
| Project 3 | X:\Records_System\Folder C\Project 3.mpg |
+------------+------------------------------------------+
We use a front-end that creates a clickable hyperlink out of the file paths. Using the hyperlink, users can navigate to the files.
**Question:**
I want to check the files on the network drive against the list of files in the database table.
If I could get a list of files from the network into a database table, then I could easily query for errors.
However, I'm not sure what the best way would be to create the list of files in an Oracle table.
- There are approximately 60,000 files on the network drive, nested in a variety of subfolders
- The files don't exist on the same server as the Oracle database (I'm not an I.T. guy, so I'm pretty clueless about this sort of thing). However, I do have the drive mapped in Windows Explorer on the computer that I would be running the query from.
- This QC operation would be run about once per month.
- Performance isn't a huge concern. The list of files doesn't need to be "live", although that would certainly be a plus.
- I can think of quick-and-dirty ways to do this using VBA scripting in MS Office or the like. But, it has occurred to me that there might be a more professional or out-of-box way to do this (perhaps using built-in Oracle functionality).
**How can I elegantly insert a list of files on a network drive into an Oracle table?**
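One fairly standard Oracle approach, short of installing anything: dump the directory listing to a text file on (or reachable by) the database server, then expose it as an external table. A hedged sketch; all paths and names are assumptions, and `CREATE DIRECTORY` needs DBA privileges:

```sql
-- First, outside Oracle, produce the listing once a month, e.g. from a
-- Windows scheduled task:
--   dir /s /b X:\Records_System > C:\oracle_dirs\file_list.txt

-- Then map that location and read the file as a table:
CREATE DIRECTORY file_list_dir AS 'C:\oracle_dirs';

CREATE TABLE network_files (
    file_path VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY file_list_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS (file_path CHAR(4000))   -- whole line = one path
    )
    LOCATION ('file_list.txt')
);

-- QC query: hyperlinks with no file behind them
SELECT r.project_id, r.file_path
FROM infrastructure_records r
LEFT JOIN network_files f
       ON LOWER(r.file_path) = LOWER(f.file_path)
WHERE f.file_path IS NULL;
```

The listing is only as "live" as the last run of the `dir` command, which matches the stated once-a-month QC cadence; refreshing the text file refreshes the external table automatically.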
User1974
(1527 rep)
Jan 9, 2018, 08:58 PM
• Last activity: Jan 13, 2018, 04:12 PM
3
votes
1
answers
663
views
Database for storing images
Currently I'm using MongoDB with images embedded next to other data in the same documents.
There are currently more than 14,000 images, each less than 1 MB in size. Images are read frequently and updated rarely.
I want to separate the images from the other data and store them in a different database, both to use them as a microservice and for scalability. I've read people saying that the best solution for storing images is the file system, but there are limits on the number of files per folder, and in a lot of cloud-computing services you don't have access to the file system. The official MongoDB documentation says that GridFS is faster than the file system, but my understanding is that the main use case for GridFS is files larger than 16 MB.
Do you know if there are some specific database to store files?
If not, what conventional database is more efficient to store files?
Is MongoDB a good solution to store files? If yes, what is better for the size of my images, storing as normal documents in collection or as GridFS?
Thanks
cuarti
(31 rep)
Feb 25, 2017, 05:00 AM
• Last activity: Mar 2, 2017, 03:55 PM
2
votes
1
answers
845
views
Storing large image files in a database vs. the file system
I need to store many large photos that will be uploaded daily in my application, and retrieve them by id. There is no need for any special queries.
What is the best approach for storing these files, considering that in time the number of images will pass a million and their total size will exceed the server's disk size? What is the most manageable solution?
1. Using SQL FileStream or FileTable and storing files in separate database.
2. Using file system and saving images as files.
3. Using a FTP server.
n.y
(121 rep)
Sep 29, 2016, 06:40 AM
• Last activity: Sep 29, 2016, 07:00 AM
7
votes
2
answers
773
views
Restore database with a FileTable
We have a SQL Server 2014 (build 12.0.24457.0) database with two tables in it, both FILESTREAM (FileTable) tables; the biggest has 2979466 rows and `SUM(cached_file_size) = 40564905472`.
We have tried to restore this database onto our test server, but it takes quite a long time. Our DBA gave up after 14 days.
Has anyone else tried to restore a database with FILESTREAM tables containing more than a few files?
I'm running this script to monitor the progress:
BEGIN TRY
DROP TABLE #h
END TRY
BEGIN CATCH
END CATCH
DECLARE @r INT=1, @percent_complete NUMERIC(9,2), @Extimated_Completion_Datetime DATETIME2(0)
CREATE TABLE #h ( start_time DATETIME2(0), percent_complete NUMERIC(9,2), Estimated_completion_time NUMERIC(9,2), HoursSinceStart NUMERIC(9,2), Extimated_Completion_Datetime DATETIME2(0), session_id INT, created_date datetime2(0))
DECLARE @d VARCHAR(19) , @e VARCHAR(max)
WHILE @r > 0 BEGIN
INSERT INTO #h
SELECT start_time, percent_complete, CAST(ROUND(estimated_completion_time/3600000.0,1) AS DECIMAL(9,1)) AS Estimated_completion_time
, CAST(ROUND(total_elapsed_time/3600000.0,1) AS DECIMAL(9,1)) AS HoursSinceStart
, DATEADD(HOUR, CAST(ROUND(estimated_completion_time/3600000.0,1) AS DECIMAL(9,1)), GETDATE()) AS Extimated_Completion_Datetime
, session_id
, GETDATE()
FROM
sys.dm_exec_requests AS r
WHERE
r.session_id <> @@SPID
AND r.session_id > 50
AND command LIKE 'restore database'
SELECT @r = @@ROWCOUNT
select top 1 @percent_complete=percent_complete, @Extimated_Completion_Datetime=Extimated_Completion_Datetime from #h ORDER BY created_date DESC
SET @d = CONVERT(VARCHAR(19), @Extimated_Completion_Datetime, 121)
SET @e = CONVERT(VARCHAR(19), GETDATE(), 121) + ' we are ' + LTRIM(@percent_complete) + '% complete. We estimate to finish at: ' + @d
RAISERROR('At %s ', 10, 1, @e) WITH NOWAIT
--WAITFOR DELAY '00:00:10'
WAITFOR DELAY '00:01:00'
END
So far I've this outcome:
At 2016-06-08 10:35:46 we are 00.01% complete.
At 2016-06-08 14:22:46 we are 39.26% complete. We estimate to finish at: 2016-06-08 18:22:47
At 2016-06-08 14:23:46 we are 39.27% complete. We estimate to finish at: 2016-06-08 18:23:47
At 2016-06-08 14:24:46 we are 39.28% complete. We estimate to finish at: 2016-06-08 18:24:47
...
At 2016-06-09 08:33:07 we are 44.80% complete. We estimate to finish at: 2016-06-10 09:33:08
At 2016-06-09 08:34:07 we are 44.80% complete. We estimate to finish at: 2016-06-10 09:34:08
At 2016-06-09 08:35:07 we are 44.80% complete. We estimate to finish at: 2016-06-10 09:35:08
At 2016-06-09 08:36:07 we are 44.81% complete. We estimate to finish at: 2016-06-10 09:36:08
At 2016-06-09 08:37:07 we are 44.81% complete. We estimate to finish at: 2016-06-10 09:37:08
At 2016-06-09 08:38:07 we are 44.81% complete. We estimate to finish at: 2016-06-10 09:38:08
At 2016-06-09 08:39:07 we are 44.82% complete. We estimate to finish at: 2016-06-10 09:39:08
...
At 2016-06-10 08:12:01 we are 47.86% complete. We estimate to finish at: 2016-06-12 08:12:02
At 2016-06-10 08:13:01 we are 47.86% complete. We estimate to finish at: 2016-06-12 08:13:02
Not exactly fast. It has managed to go from 42% to 45% in 13:45, so at the current speed it looks to finish at 2016-06-27 13:45, or in 18 days. For 44 GB!
SELECT TOP 10000 resource_description AS resource_description, *
FROM sys.dm_os_waiting_tasks
WHERE session_id = 64
says:
> wait_type = BACKUPTHREAD ... sometimes BACKUPIO too
So I've set up another restore to get Restore Messages:
DBCC TRACEON(3604, 3605, 3014);
RESTORE DATABASE [VDCFileStreamhespotest] FROM DISK = N'\\dkrdsvdcp19\MSSQL_Backup\Full\Misc\VDCFileStream_backup_2016_06_07_180004_7123139.bak' WITH FILE = 1,
NOUNLOAD, REPLACE, STATS = 1
, move 'VDCFileStream' to 'P:\MSSQL\DPA\System\MSSQL10_50.DPA\MSSQL\DATA\UserDBs\VDCFileStream\VDCFileStreamDBhespo.mdf'
, MOVE 'VDCFileStream_log' to 'P:\MSSQL\DPA\Log\Log02\VDCFileStream\VDCFileStreamDBhespo_log.ldf'
, MOVE 'VDCFileStreamF1' to 'G:\VDCFileStream\FileStreamDatahespo'
Restore(VDCFileStreamhespotest): RESTORE DATABASE started
Restore(VDCFileStreamhespotest): Opening the backup set
Restore(VDCFileStreamhespotest): Processing the leading metadata
Restore(VDCFileStreamhespotest): Planning begins
Backup/Restore buffer configuration parameters
Memory limit: 32767 MB
BufferCount: 6
Sets Of Buffers: 2
MaxTransferSize: 1024 KB
Min MaxTransferSize: 1024 KB
Total buffer space: 12 MB
Tabular data device count: 1
Fulltext data device count: 0
Filestream device count: 1
TXF device count: 0
Filesystem i/o alignment: 512
Media Buffer count: 6
Media Buffer size: 1024 KB
Encode Buffer count: 6
Restore(VDCFileStreamhespotest): Effective options: Checksum=1, Compression=1, Encryption=0, BufferCount=6, MaxTransferSize=1024 KB
Restore(VDCFileStreamhespotest): Planning is complete
Restore(VDCFileStreamhespotest): Beginning OFFLINE restore
Restore(VDCFileStreamhespotest): Attached database as DB_ID=48
Restore(VDCFileStreamhespotest): Preparing containers
Restore(VDCFileStreamhespotest): Containers are ready
Restore(VDCFileStreamhespotest): Restoring the backup set
Restore(VDCFileStreamhespotest): Estimated total size to transfer = 45540792320 bytes
Restore(VDCFileStreamhespotest): Transferring data
1 percent processed.
Here, 5 days later, we're now over halfway there:

Henrik Staun Poulsen
(2291 rep)
Jun 8, 2016, 09:06 AM
• Last activity: Jul 4, 2016, 09:21 AM
4
votes
2
answers
2373
views
Better way to store file receipts
I have a financial system that needs to attach PDF receipts for each record saved on my system.
These receipts will be scanned by a proper device attached to the computer and saved in PDF to be stored in my database.
Today the system saves the PDF file as `varbinary(max)`, but because of the number of rows in my table, the size of my DB is increasing too fast. The average file size is about 1 to 2 MB.
What is the best way to store these kind of files without compromising my database performance?
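One hedged option at this blob size: keep the rows in SQL Server but move the blob column to FILESTREAM storage, so the PDFs live as files in an NTFS container and are streamed rather than kept in the data file. Microsoft's general guidance is that FILESTREAM pays off for blobs of roughly 1 MB and up, which matches the stated average. A sketch (database, filegroup and path names are assumptions):

```sql
-- A FILESTREAM filegroup to hold the receipt files
ALTER DATABASE Finance ADD FILEGROUP ReceiptsFS CONTAINS FILESTREAM;
ALTER DATABASE Finance ADD FILE
    (NAME = ReceiptsFS, FILENAME = 'D:\FS\Receipts')
    TO FILEGROUP ReceiptsFS;

-- FILESTREAM requires a ROWGUIDCOL uniqueidentifier on the table
CREATE TABLE dbo.Receipts (
    ReceiptId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
        DEFAULT NEWSEQUENTIALID(),
    Pdf VARBINARY(MAX) FILESTREAM NULL
);
```

Existing `varbinary(max)` data would have to be copied into the new column; the T-SQL programming model stays the same, only the physical storage moves out of the data file.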
Junior Thurler
(43 rep)
Dec 14, 2015, 07:13 PM
• Last activity: Jan 31, 2016, 06:27 AM
5
votes
1
answers
1179
views
How can I return the full text of a document indexed in SQL Server Full-Text?
I'm using SQL Server 2014 with FileTables to store a large number of documents in different formats. The iFilters are working great, and everything is getting indexed with FTS + Semantic Search. Now I'd like to run some additional processing on the text of those documents, but don't see a reason to have the pipeline redo the work of decoding, extracting, etc. the text from the files.
It seems there should be an obvious solution ... but I've been running in circles without any luck.
So the question is:
- How can I query to return the full plaintext of a file in T-SQL?
- If that's not possible, can it be done in SSIS or SSAS after the normal FTS parser has run?
- If that's not possible, is there a way to hook into the FTS pipeline (via a trigger perhaps) so I can split the plain text into another table?
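For reference: as far as I know, SQL Server never exposes the recovered plaintext itself; the closest documented artifact is the per-document keyword list stored in the full-text index, which contains stemmed tokens rather than readable text. A sketch (the table name is an assumption):

```sql
-- Tokenized, stemmed keywords per indexed document; not the original text
SELECT display_term, document_id, occurrence_count
FROM sys.dm_fts_index_keywords_by_document(
         DB_ID(), OBJECT_ID(N'dbo.Documents'))
ORDER BY document_id, occurrence_count DESC;
```

That may be enough for n-gram statistics, but not for summarization or NER, which is consistent with the extraction-before-storage workaround discussed below.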
Alternate solutions are appreciated as well if you've got good examples for me to reference. The only immediate idea I had was to use a different network share for dropoff, have SSIS pick up the file and extract the text (no idea how to do that), and then to move the file + text to SQL server ... but that seems wonky for a lot of reasons.
[Edited to clarify "why"]
If SQL Server has already pulled out the text in order to chunk it & do the base NLP for the semantic index ... I'd rather use that than reinvent the wheel. Specific uses I'm looking into are post-processing with other NLP utilities (e.g. NLTK, GenSim, Stanford NLP NER, etc.) so that I can generate extractive document summarizations, store n-gram statistics for my corpus, and include NER for more effective faceted search.
If I have to extract the text out of files before storing them in SQL Server (either using SSIS/.NET so I can keep the iFilters OR by using a different tool altogether) there's limited usefulness in SQL Server's ability to perform that work on FileTables for anything but the most basic tasks.
Consider the number of document formats already supported - and it's a major task to recreate the feature. Similarly, having to go back to the actual file afterwards and redo that indexing work is inefficient and it'd seem sensible to disable FTS on FileTables, skip using them altogether, or scrap SQL Server for document-based FTS altogether.
iivel
(101 rep)
Jul 24, 2015, 12:50 PM
• Last activity: Aug 11, 2015, 07:26 PM
2
votes
2
answers
1580
views
Error deleting FileTable - HRESULT = '0x80070490'
When trying to delete a FileTable, the following error occurs:
> Msg 33405, Level 16, State 7, Line 1
> Error during an operation on a FileTable object while dropping the FileTable. (HRESULT = '0x80070490').
Camila Reis
(83 rep)
Oct 7, 2013, 04:34 PM
• Last activity: Jul 23, 2015, 04:49 PM
0
votes
1
answers
3351
views
Need to make sense of FileTable permissions
I'm setting up a FileTable to allow users to quickly stuff some PDF report files into a database for later searching via an intranet application (to be built later). Sorting out the necessary permissions is a bit of a challenge, and I'm hitting a few roadblocks.
First, I have confirmed that I (with sysadmin privileges) can add and remove files to the share without any issues, so the FileTable is functional at least.
I've added an end user to an AD group, created a Windows login in SQL Server for this group, mapped it to a database user, and granted that user select, insert, update, and delete permissions on the FileTable. The user has logged out of his computer and logged in again to make sure the group I added him to is in the auth token.
He can access the base FILESTREAM share for the instance (`\\servername\FileStream`), but when he tries to go deeper to the database-specific directory (`\\servername\FileStream\DatabaseFileStreamDirectoryName`), he gets a permission error ("You do not have permission to access..."). Same thing if he tries to access the FileTable path directly (`\\servername\FileStream\DatabaseFileStreamDirectoryName\PDFReports`).
So, what is the proper way to grant access so that users can browse to the share subdirectory for a specific database, see the directories in there for FileTables, and ultimately browse/access files within the FileTables? It's apparently more than just a user mapping in the database, and select/insert/update/delete permissions on the table, and I'm not finding a whole lot of clear documentation on this so far.
Here's the (slightly anonymized) code I've used for setting this up:
USE master
GO
ALTER DATABASE AppData ADD FILEGROUP FS CONTAINS FILESTREAM
ALTER DATABASE AppData ADD FILE ( NAME = N'FS', FILENAME = N'S:\Filestream\AppData\FS' ) TO FILEGROUP [FS]
ALTER DATABASE AppData SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE AppData SET FILESTREAM( DIRECTORY_NAME = N'AppData' ) WITH NO_WAIT
ALTER DATABASE AppData SET MULTI_USER
GO
USE AppData
GO
CREATE TABLE PDFReports AS FileTable WITH (FileTable_Directory = 'PDFReports')
GRANT SELECT, INSERT, UPDATE, DELETE ON PDFReports TO [MyDomain\App Operators]
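Two things worth verifying beyond the table-level GRANT (a hedged sketch; the login name comes from the script above):

```sql
-- The share is only usable when non-transacted access is enabled for the DB
SELECT DB_NAME(database_id) AS db, non_transacted_access_desc
FROM sys.database_filestream_options;   -- expect FULL for AppData

-- If it is not:
ALTER DATABASE AppData
    SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL);

-- The mapped user also needs CONNECT on the database (usually implied
-- by CREATE USER, but worth confirming for a group login):
GRANT CONNECT TO [MyDomain\App Operators];
```

File-share access to a FileTable is authorized purely through these SQL Server permissions (there are no NTFS ACLs to set on the container), so the database-level pieces are the place to look when sysadmin works and ordinary users do not.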
db2
(9708 rep)
Jul 20, 2015, 02:49 PM
• Last activity: Jul 21, 2015, 01:48 PM
1
votes
1
answers
6844
views
How to get SQL Server filetable path locator for subfolder by its literal path
I have a FileTable which has several subfolders.
For example:
FileTable
->subtable1
->->subtable1.1
->->subtable1.2
->subtable2
->subtable3
etc
How do I get the SQL Server `path_locator` for a subfolder given by a literal path like '/subtable1/subtable1.1'?
I need it to move files from the root to a subfolder. I tried this, but it does not work:
declare @parent hierarchyid;
set @parent = GetPathLocator('folder2/folder2.1')
update physical_files set path_locator = @parent.GetDescendant(NULL, NULL)
where stream_id = '494D5C8B-AC22-E411-A464-00259060BBB9';
SQL Server 2012 reports:
> Msg 33423, level 16, state 1
> Invalid FileTable path name or format.
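The error is consistent with `GetPathLocator` expecting the full FileTable UNC path rather than a relative one; `FileTableRootPath` can supply the prefix. A hedged sketch (table and folder names are taken from the question):

```sql
-- Build the full path: \\server\share\db_dir\table_dir\folder2\folder2.1
DECLARE @parent hierarchyid =
    GetPathLocator(FileTableRootPath(N'dbo.physical_files')
                   + N'\folder2\folder2.1');

UPDATE physical_files
SET path_locator = @parent.GetDescendant(NULL, NULL)
WHERE stream_id = '494D5C8B-AC22-E411-A464-00259060BBB9';
```

Note also that the path uses backslashes, not the forward slashes shown in the question.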
Vasilly.Prokopyev
(195 rep)
Aug 13, 2014, 06:34 AM
• Last activity: Aug 15, 2014, 06:39 PM
0
votes
2
answers
707
views
Where/how does innodb store tables in files?
I read somewhere that InnoDB can store all the tables in a single file or in separate files. Is this a configuration option? What is the default setting? A file for all tables or a file per table? What is the recommended setting?
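For reference, this is the `innodb_file_per_table` option. When it is OFF, all tables share the system tablespace (the ever-growing `ibdata1` file); when ON, each table gets its own `.ibd` file. It defaulted to OFF before MySQL 5.6 and to ON from 5.6 onward, and per-table files are the generally recommended setting (they allow reclaiming space when a table is dropped or rebuilt). A sketch of checking and changing it:

```sql
-- Check the current setting
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Enable it for tables created from now on (existing tables stay in the
-- system tablespace until rebuilt)
SET GLOBAL innodb_file_per_table = ON;

-- Rebuild an existing table into its own .ibd file
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```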
Jim
(781 rep)
Nov 16, 2013, 04:19 PM
• Last activity: Nov 18, 2013, 05:47 AM
Showing page 1 of 20 total questions