
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
247 views
SharePoint changing statistics options daily
SharePoint switches OFF and ON options like AUTO_UPDATE_STATISTICS, AUTO_UPDATE_STATISTICS_ASYNC, AUTO_CREATE_STATISTICS, and INCREMENTAL on SharePoint databases daily at 12 AM, which generates a lot of noise in the SQL Server error log. What action should I take (either in SQL Server or in SharePoint) to keep the error log clean?
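A sketch of how to check what these statistics options are currently set to across the instance; the ALTER DATABASE line is only an illustration of the kind of statement that produces the logged changes, with a placeholder database name:

-- Statistics-related settings for every database; SharePoint's nightly timer
-- job toggles these on its own databases.
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases;

-- The logged change is presumably issued as something like:
-- ALTER DATABASE [WSS_Content] SET AUTO_UPDATE_STATISTICS OFF;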
shilan (119 rep)
May 28, 2019, 11:11 AM • Last activity: May 15, 2024, 02:07 PM
1 vote
1 answer
203 views
Stored Procedure Performance Issue in Sharepoint Report
We have a report stored procedure that runs in about 1 minute in production; however, when executed from the SharePoint report it populates, it runs for over 20 minutes with exactly the same parameters. We've done everything to avoid parameter sniffing problems: used WITH RECOMPILE on the procedure declaration; added OPTION (OPTIMIZE FOR UNKNOWN) to the queries that use the parameters; even created local variables, assigned the parameter values to them, and used them throughout the procedure. None of these has had any effect. I know that the 1-minute baseline run time is itself a big red flag, and we're working on pre-aggregating this data into our BI platform, but that's months down the line and this report is needed for month-end close every month. Other relevant information: SharePoint 2013 on SQL Server 2012; the data source is a SQL Server 2008 R2 database; the report definition was created in Visual Studio 2008.
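For reference, a minimal sketch of the three mitigations described above, using a hypothetical procedure, parameter, and table (dbo.rpt_MonthEnd, @StartDate, dbo.Sales are placeholders, not names from the question):

-- Hypothetical report procedure illustrating the three parameter-sniffing
-- mitigations mentioned in the question.
CREATE PROCEDURE dbo.rpt_MonthEnd
    @StartDate date
WITH RECOMPILE               -- 1) recompile the whole procedure on every call
AS
BEGIN
    -- 3) copy the parameter into a local variable so the optimizer uses
    --    average density instead of the sniffed value
    DECLARE @LocalStart date = @StartDate;

    SELECT SUM(s.Amount)
    FROM dbo.Sales AS s
    WHERE s.SaleDate >= @LocalStart
    OPTION (OPTIMIZE FOR UNKNOWN);   -- 2) per-query hint, similar effect to 3)
END;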
KRISTOPHER COOK (109 rep)
Aug 28, 2014, 02:56 PM • Last activity: Dec 30, 2022, 11:05 AM
0 votes
2 answers
2532 views
What are the problems with sharing an Access database through online storage service, like Dropbox?
I have a desktop database I created in Access. I'd like to share the database with a single collaborator, who can have full privileges to the database (table access). However, my collaborator is much less tech-savvy, so to make it easier for him I have set up forms to make table updates; table access is not a must for him. With that in mind, I am willing to split up the database if this would be a better option. I had considered sharing the database with him over a SharePoint website, but I have little experience with this and would not know where to begin. I also do not think it would be worth the (relatively low) monthly price to share with a single person. I do not expect our team to grow either, so there is no need to prepare for this. I think the best option is to share the database on Dropbox so that we both have the same version to work with. I've read some forum posts on doing this, and all seem to discourage sharing Access databases this way. However, I've had a hard time understanding why, as I am a novice. There is no need for us both to be able to work on the database at the same time, but I have no idea how I can restrict access to a single user at a time. Can you please explain, in simple terms, what the problems are with sharing an Access database like this, and what workarounds there are, if any? Please don't suggest other options, as I am trying to keep this project as simple as possible and my experience with SQL is minimal. Plus, I need to keep this simple for my collaborator as well, and this is a sort of personal project I don't have a ton of time and resources for. Thank you for the help!
Mickster37 (3 rep)
Sep 11, 2014, 04:03 PM • Last activity: Dec 21, 2022, 06:49 AM
0 votes
1 answer
554 views
Clear out remnant LDF log files of Sharepoint DB not visible on SBS 2008 Domain Server?
Typical ways of dealing with LDF files that take up a lot of space recommend these processes to shrink or eliminate the occupied drive space: http://www.sharepointboris.net/2008/10/sharepoint-config-database-log-file-too-big-reduce-it/ and [Link](https://web.archive.org/web/20210121185921/http://geekswithblogs.net/RogueCoder/archive/2008/06/03/122588.aspx) . As an example of following these steps for the SBS Monitoring database, which is visible in the **Microsoft SQL Server Management Studio Express** GUI and whose path on disk is C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data (this path is not shown in the screenshot), I used the GUI: DB: SBSMonitoring > Tasks > Delete > Log files, and gained about 1-2 GB of space. On the other hand, I found the SharePoint-related DB files under the path (shown in the screenshot) C:\Windows\SYSMSI\SSEE\MSSQL.2005\MSSQL\Data, but this database is not visible in the **Microsoft SQL Server Management Studio Express** GUI. I've looked at several answers and tutorials on how to clear out the LDF via the GUI or SQL commands, but they assume the DB is visible. What steps can I take to properly clear out the LDF log files for this DB? Do I have to first add/connect to it via the SQL GUI? Or clear it out without adding it, via SQL commands? Or clear it out with a simple file delete in Explorer (not recommended from what I've read in most places)? This is what I'm reading so far, but I haven't found a way to add/access that DB via those files and then use the above-mentioned steps and guides. (Screenshot of the files, their location, and the DBs visible in the **Microsoft SQL Server Management Studio Express** GUI.) **Update:** Adding more information about the SQL instances running, based on the answer, and requesting an update to the answer on how to add/access and flush this DB instance. (Second screenshot.)
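The SSEE path suggests these databases live in the Windows Internal Database instance, which management tools usually reach only through its named pipe. A hedged sketch, assuming the commonly documented pipe name and placeholder database/log-file names (SharePoint_Config, SharePoint_Config_log):

-- Connect SSMS Express or sqlcmd to the Windows Internal Database instance via
-- its named pipe (the pipe name below is the commonly documented one for SSEE):
--   sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E
-- Then shrink the log; database and logical file names are placeholders.
USE SharePoint_Config;
ALTER DATABASE SharePoint_Config SET RECOVERY SIMPLE;   -- stop requiring log backups
DBCC SHRINKFILE (SharePoint_Config_log, 100);           -- target size in MB
-- Optionally switch back afterwards if full recovery is wanted:
-- ALTER DATABASE SharePoint_Config SET RECOVERY FULL;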
Alex S (101 rep)
Apr 16, 2016, 09:41 AM • Last activity: Jul 18, 2022, 08:00 PM
4 votes
1 answer
480 views
SQL Server High Availability and Sharepoint - log file grows when no database activity
Just started as a DBA with a company running SQL Server 2012 High Availability and SharePoint. Short description: a database with no user activity still has a growing log file. Long description: they have many, many SharePoint dbs where the log file size dwarfs the data file size. They've just been throwing extra disk space at the issue. They were in Full Recovery (mandatory for HA) but weren't taking log backups. Simply by taking a log backup and a DBCC SHRINKFILE to a manageable size I've started to bring things under control. However, on my trial database the log file is growing rapidly again. As far as I can ascertain, no user activity in the database has taken place (inserts/updates/deletes) since I did the log backup. Running select [Current LSN], Operation, Context, [Transaction ID], SPID, [Begin Time], [Transaction Name] from fn_dblog(null,null) shows hundreds of rows of GhostCleanup Task (looking for deletes), followed by a Begin Transaction and a Commit Transaction. (Screenshot of typical fn_dblog output.) Can anyone explain what's causing this? I can obviously set up maintenance plans to do log backups much more frequently to keep the size in check, but I'd like to understand why all these transactions are occurring in a 'static' database. I've read about setting trace flag 661 to disable ghost cleanup, but I don't want to make a server-wide change at this proof-of-concept stage. I'm not sure if this is an HA issue, a SharePoint issue, neither, or both! I've read lots of posts about SharePoint causing runaway logs but no real explanation. It's the primary AOAG server. There are two synchronous secondaries and an asynchronous DR secondary. dbcc opentran(EXT_Metadata_DB) reports "No active open transactions." The log file is still growing despite no user activity; there were 5 autogrowth events overnight. There is no transactional replication on this db, though HA obviously needs to write the log to the secondary servers. No blocked processes are visible. select d.name, d.log_reuse_wait_desc from sys.databases d where d.database_id = 15 gives:
EXT_Metadata_DB    LOG_BACKUP
This seems to imply that all the transactions are in the current VLF, which can't be cleared. The puzzling thing is, I haven't done any log backups since I did the SHRINKFILE. I'm using a tiny database for this investigation. The data file is only 14MB and I shrank the log file down to 4MB (13% full). A day or so later, the data file remains at 14MB, but the log file has now grown to 69MB and is 92% full. The particular SQL Server instance I'm looking at has 675GB of log files. select name, size from sys.database_files where type = 1 gives:
EXT_Metadata_DB_log	8872
before a checkpoint. Ran checkpoint but size remains at 8872. Server recovery interval is the default 0. If I run: SELECT context AS Context, [Transaction Name] as Description, COUNT(context) AS Count from fn_dblog(null,null) GROUP BY context, [Transaction Name] order by count desc I get 19 rows:
LCX_NULL	        NULL	                    12681
LCX_NULL	        GhostCleanupTask	        12326
LCX_BOOT_PAGE	    NULL	                    12185
LCX_CLUSTERED	    NULL	                    451
LCX_NULL	        UpdateQPStats	            81
LCX_FILE_HEADER	    NULL	                    35
LCX_NULL	        SetFileSize	                33
LCX_PFS	NULL	                                12
LCX_TEXT_MIX	    NULL	                    10
LCX_BOOT_PAGE_CKPT	NULL	                    10
LCX_DIFF_MAP	    NULL	                    7
LCX_NULL	        DeallocUnlinkedReorgPage	6
No user activity, but 37847 transaction entries in 24 hours. Is this to be expected? I've not modified any data. It's a static database of metadata.
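A minimal sketch of the routine implied above, using the database name from the question and a placeholder backup path:

-- Take a routine log backup (path is a placeholder), then re-check why the
-- log cannot be reused and how full it is.
BACKUP LOG EXT_Metadata_DB TO DISK = N'X:\Backups\EXT_Metadata_DB.trn';

SELECT d.name, d.log_reuse_wait_desc
FROM sys.databases AS d
WHERE d.name = N'EXT_Metadata_DB';

DBCC SQLPERF(LOGSPACE);   -- log size and percent used for every database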
DiamondBeezer (159 rep)
Aug 10, 2016, 01:21 PM • Last activity: Jun 22, 2022, 02:07 PM
13 votes
2 answers
11605 views
SELECT/INSERT Deadlock
This instance hosts the SharePoint 2007 databases (SP). We have been experiencing numerous SELECT/INSERT deadlocks against one heavily utilized table within the SP content database. I have narrowed down the resources involved: both processes require locks on the non-clustered index. The INSERT needs an IX lock on the SELECT's resource, and the SELECT needs an S lock on the INSERT's resource. The deadlock graph depicts three resources: 1) two from the SELECT (producer/consumer parallel threads), and 2) the INSERT. I have attached the deadlock graph for your review. Because this is Microsoft code and table structures, we cannot make any changes. However, I have read on the MSFT SP site that they recommend setting the instance-level MAXDOP configuration option to 1. Since this instance is shared amongst many other databases/applications, parallelism cannot be disabled instance-wide. Therefore, I decided to try to prevent these SELECT statements from going parallel. I know this is not a solution but more a temporary modification to help with troubleshooting. I increased the "Cost Threshold for Parallelism" from our standard 25 to 40. Upon doing so, even though the workload has not changed (SELECT/INSERT occurring frequently), the deadlocks have disappeared. My question is why? SPID 356 (INSERT) has an IX lock on a page belonging to the non-clustered index. SPID 690 (SELECT) Execution ID 0 has an S lock on a page belonging to the same non-clustered index. Now: SPID 356 wants an IX lock on SPID 690's resource but cannot obtain it, because SPID 356 is being blocked by SPID 690 Execution ID 0's S lock. SPID 690 Execution ID 1 wants an S lock on SPID 356's resource but cannot obtain it, because SPID 690 Execution ID 1 is being blocked by SPID 356, and now we have our deadlock. The execution plan can be found on my SkyDrive; full deadlock details can be found here. If someone can help me understand why, I would really appreciate it. EventReceivers table (column name, data type, nullable, length):
Id                    uniqueidentifier   no   16
Name                  nvarchar           no   512
SiteId                uniqueidentifier   no   16
WebId                 uniqueidentifier   no   16
HostId                uniqueidentifier   no   16
HostType              int                no   4
ItemId                int                no   4
DirName               nvarchar           no   512
LeafName              nvarchar           no   256
Type                  int                no   4
SequenceNumber        int                no   4
Assembly              nvarchar           no   512
Class                 nvarchar           no   512
Data                  nvarchar           no   512
Filter                nvarchar           no   512
SourceId              tContentTypeId     no   512
SourceType            int                no   4
Credential            int                no   4
ContextType           varbinary          no   16
ContextEventType      varbinary          no   16
ContextId             varbinary          no   16
ContextObjectId       varbinary          no   16
ContextCollectionId   varbinary          no   16
Indexes (index_name, index_description, index_keys):
EventReceivers_ByContextCollectionId   nonclustered located on PRIMARY                         SiteId, ContextCollectionId
EventReceivers_ByContextObjectId       nonclustered located on PRIMARY                         SiteId, ContextObjectId
EventReceivers_ById                    nonclustered, unique located on PRIMARY                 SiteId, Id
EventReceivers_ByTarget                clustered, unique located on PRIMARY                    SiteId, WebId, HostId, HostType, Type, ContextCollectionId, ContextObjectId, ContextId, ContextType, ContextEventType, SequenceNumber, Assembly, Class
EventReceivers_IdUnique                nonclustered, unique, unique key located on PRIMARY     Id
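A sketch of the instance-level change described above (raising the cost threshold so these cheap SELECTs stay serial), plus the per-query alternative; this assumes sysadmin-level permissions:

-- Raise the cost threshold for parallelism from 25 to 40 at the instance level.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 40;
RECONFIGURE;

-- Alternative scoped to a single statement: force a serial plan with a hint.
-- SELECT ... FROM dbo.EventReceivers WHERE ... OPTION (MAXDOP 1);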
SQLJarHead (131 rep)
Nov 26, 2012, 09:46 PM • Last activity: Mar 16, 2022, 11:46 AM
2 votes
1 answer
1134 views
Is it ok to change drive names/labels (NOT Drive Letters!)
I realize this is likely a simplistic question, but my searches only return results for drive letters, or database names if I include SQL Server. Please feel free to point me to existing articles or answers, and my apologies in advance :) I've inherited an environment with several SQL Server 2014 and 2016 servers (a few clustered as well) that support multiple SharePoint 2013 & 2016 farms. The previous SAs gave the drives fairly ambiguous names/labels, and I'd like to save myself continued confusion over what each contains by renaming the drives. Is this a bad idea? I can't seem to find anything in my environment that refers specifically to drive names; everything seems to reference the drive letter, and in some cases the drive name is shown only as informational. Am I right that the drive name is just a description and doesn't actually identify the drive itself to any system?
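A quick sanity check related to the question above, offered only as a sketch: SQL Server's own metadata records drive letters and paths, not volume labels.

-- List every database file path on the instance; physical_name stores drive
-- letters or mount-point paths, never the volume label.
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_name,
       mf.physical_name
FROM sys.master_files AS mf
ORDER BY database_name, mf.file_id;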
rebekah (31 rep)
Sep 1, 2021, 08:50 PM • Last activity: Sep 2, 2021, 12:46 AM
-1 votes
3 answers
551 views
During upgrade, database raised exception 9002, severity 17, state 1. Use the exception number to determine the cause
I am trying to update **SQL Server 2014** to **Service Pack 3** on my Windows Server machine. After the updates from the Windows Update feature, I am unable to start the SQL Server service and the SQL Server Agent service. (Screenshots of the services and the error.) I have checked the error log and it shows the two messages below.
> 1. The transaction log for database 'master' is full due to 'CHECKPOINT'.
> 2. During upgrade, database raised exception 9002, severity 17, state 1. Use the exception number to determine the cause.
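One commonly described troubleshooting path for this symptom, offered only as a hedged sketch: trace flag 902 bypasses the script-upgrade step so the instance can start, after which the space problem on master's log can be checked before retrying the Service Pack upgrade.

-- From an elevated command prompt, start the instance once while skipping the
-- upgrade scripts (default instance shown; adjust for a named instance):
--   net start MSSQLSERVER /T902
-- Then, connected to the instance, check master's log usage and growth limits:
DBCC SQLPERF(LOGSPACE);
SELECT name, size, max_size, growth
FROM master.sys.database_files;
-- After freeing disk space or removing a growth cap, restart the service
-- without the trace flag so the SP3 upgrade scripts can run to completion.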
Mr. Roshan (99 rep)
Jan 27, 2021, 10:12 AM • Last activity: Feb 3, 2021, 09:27 AM
0 votes
1 answer
246 views
Handling Reporting Services and SharePoint Integration considering deprecation
With one of our clients, we're currently faced with a dilemma. SQL Server Reporting Services Report Server (SharePoint mode) seems to have been deprecated, since the last version to support it is [SQL Server 2016](https://learn.microsoft.com/en-us/sql/reporting-services/report-server-sharepoint/reporting-services-report-server-sharepoint-mode?view=sql-server-2016) . Now our client wants SharePoint integration with Reporting Services, and SQL Server 2016 is currently supported, so there is the option to go that route. I just feel uneasy building something for a client with a technology that is essentially dead. So I've been researching what sort of options there are going forward (SQL Server 2017 onwards). An [article by John White](https://whitepages.unlimitedviz.com/2016/11/future-report-integration-sharepoint/) pointed out that this is a good thing, because it allows SSRS Native mode to be more tightly integrated into SharePoint. But his article is from the end of 2016 and points at future improvements that I'm struggling to see. Microsoft's own [SQL Stream Blog](https://learn.microsoft.com/en-za/archive/blogs/sqlrsteamblog/simplifying-our-sharepoint-integration-story) addressed this issue in November 2016, with mention of three scenarios they planned to focus on going forward, namely: embedding reports in SharePoint pages; reporting on data in SharePoint lists; and delivering reports to SharePoint libraries. What I can find from Microsoft after that date is sparse. I only found one other article on the [SQL Server Blog](https://learn.microsoft.com/en-za/archive/blogs/sqlrsteamblog/embed-paginated-reports-into-sharepoint-using-the-report-viewer-web-part) that mentions SharePoint, with the announcement of a "Report Viewer web part". My question is: has anyone faced this dilemma before? And what solution did you go with? I know we can work with PowerShell scripts and create embedded reports in SharePoint, but it doesn't seem very elegant considering what "SharePoint mode" was. *EDIT: I realise I asked a question that warrants an answer that's more of an opinion than a fact. What I am interested in is approaching Report Server - SharePoint integration from a SQL Server 2017 and newer perspective.*
Jason (153 rep)
Aug 28, 2020, 02:19 PM • Last activity: Aug 29, 2020, 08:49 AM
3 votes
1 answer
1510 views
How can INDEX rebuilds be going parallel when MAXDOP is set to 1
I am periodically running into a THREADPOOL wait issue with a SharePoint data store of hundreds of databases on a SQL Server 2008 R2 Standard Edition instance (recently migrated to an Azure VM). It is running a stored procedure named proc_DefragmentIndices in many (possibly all) of these databases at a time. The stored procedure rebuilds every index in the database unconditionally. Of course, they are head blockers (because it is Standard Edition, each ALTER INDEX command runs with ONLINE=OFF). Because there are so many of them running at a time (each in a different database), and **they are going parallel** (which ties up even more workers), everything piles up. Just for extra noise, Azure Backup is taking backups of many of the databases while all this is going on, eating up even more workers. Activity Monitor shows 106 waiting tasks, and multiple instances of the same session ID for many of the ALTER INDEX commands (which is why I say they are going parallel). What I find puzzling is that these ALTER INDEX statements are going parallel even though **MAXDOP is set to 1 on the instance**, as is recommended for SharePoint databases, and the ALTER INDEX statements executed by the stored procedure are not using a MAXDOP option to override it. Q1: How can index rebuilds be going parallel when MAXDOP is set to 1? Q2: Activity Monitor shows the ALTER INDEX commands but sp_WhoIsActive does not. Does anyone know why?
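A diagnostic sketch for Q1: confirm the instance-wide MAXDOP value and count how many tasks each active request is actually using.

-- Instance-wide MAXDOP setting.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'max degree of parallelism';

-- Tasks per active session; more than one task for a single running request
-- generally means that statement has gone parallel.
SELECT r.session_id, r.command, COUNT(*) AS task_count
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks AS t ON t.session_id = r.session_id
GROUP BY r.session_id, r.command
ORDER BY task_count DESC;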
Mark Freeman (2293 rep)
Jul 31, 2020, 06:21 PM • Last activity: Jul 31, 2020, 07:15 PM
0 votes
1 answer
54 views
Is it possible to migrate Sharepoint 2016 and all it's databases from SQL SERVER2012 to AZURE?
We are in the process of migrating our website to Azure. We have tons of SharePoint databases (see screenshot). Is it possible to migrate all the user databases plus SharePoint to Azure? I have migrated some user databases to Azure with no problem in the past (backup as a data-tier application, etc.), but I've never done this with SharePoint databases. I found some questions saying it's not possible: https://sharepoint.stackexchange.com/questions/241759/sharepoint-2013-on-azure-with-on-premise-sql-server says "You cannot move to an Azure SQL Database". https://sharepoint.stackexchange.com/questions/217477/sharepoint-migration-of-db-from-one-server-to-another-server is about an on-premises to on-premises move. https://learn.microsoft.com/pt-br/archive/blogs/sambetts/hosting-sharepoint-content-databases-in-sql-azure says "Important edit: this is currently not officially supported for SharePoint Server, even for Azure-hosted farms." With all those questions and answers, I don't know how to proceed.
Racer SQL (7546 rep)
Apr 1, 2020, 02:56 PM • Last activity: Apr 15, 2020, 07:33 PM
0 votes
1 answer
152 views
Design Sql Server physical configuration from existing VM
We have 4 virtual machines that we used to test our BI platform (PoC); the final dimensions are: SSIS (16 vCPU), SSAS (8 vCPU), SSRS (4 vCPU) and Database Engine (24 vCPU). What are the recommendations for installing these components on a physical server? For SQL Server performance, [VMware recommends](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf#page=23&zoom=100,0,0) (chap. 3.5.2) "setting the number of cores per socket to one when allocating virtual CPUs to VMs on the vSphere platform". That means a VM with 16 CPUs will have 16 vCPUs with 1 core each. Can we calculate the physical server dimensions by reverse engineering? I mean: does 16 vCPUs equal 16 physical cores? The CPU and RAM dimensions of these VMs are big, so we are wondering whether it would be more worthwhile to deploy on physical machines. We need 280 GB of RAM. VMware makes the same point here: a ratio of 1:1 or 1:2 is considered better for performance.
Benn (1 rep)
Apr 10, 2020, 02:45 PM • Last activity: Apr 12, 2020, 08:37 AM
0 votes
1 answer
87 views
Was SSRS installed in SharePoint Integrated Mode?
I've inherited an old QA database server with SQL Server 2008R2 (yes, yes, I know it's beyond EOL; there's nothing I can do about it!) and SSRS installed. It's the back end of a SP server on a different VM. How do I tell whether or not SSRS was installed in Native Mode or SP Integrated Mode?
RonJohn (694 rep)
Jan 2, 2020, 04:57 PM • Last activity: Jan 2, 2020, 05:54 PM
1 votes
1 answer
266 views
Getting DPM Alerts for Failed Recovery Point for Database that has Been Deleted
I deleted a defunct database in SharePoint and am now getting alerts from DPM saying:
The following alert occurred at 2/16/2016 12:16:54 PM.
Status: Active
Data source type:
Data source: SharePointServer\Database_I_Just_Deleted
Computer: SharePointServer
Description: Last 1 recovery points not created. For more information, open DPM Administrator Console and review the alert details in the Monitoring task area.
I cannot find where to disable monitoring of this deleted database.
M C (11 rep)
Feb 16, 2016, 08:41 PM • Last activity: Nov 19, 2019, 01:01 PM
0 votes
1 answer
5228 views
Allow access to enter data through forms, but not full database access
Using Access 2013 or SharePoint 2013, is it possible to give someone access to data entry forms that work with a back-end database, whether an Access database or SharePoint SQL? From everything I've looked at, I can't find a way to separate entering data from viewing all data. For example, in Access, even if I split the database into a back end, it still has all the tables linked and fully accessible from the "front end"; this seems to be needed to allow write access. It seems that you can have no access, read-only access, and read/write access, but no write-only access (this might be where I am wrong). I don't want the people entering data to see the back-end data (regardless of whether they are on SharePoint or in Access), and I certainly do not want them to be able to modify the database directly (they should only be able to enter data via a form that can be validated before it is entered). Now, I come from a Windows Forms and ASP.NET MVC background. I am used to hosting a SQL server and simply not writing code that pulls data from the database, and if I did, I could set parameters on what data they could access. I could also set up server-side validation based on users and groups. I can't seem to find any documentation on how to do this in either Access or SharePoint. I have read every article I could find on these topics via Google and Support.Office.com, for starters: Introduction to integrating data between Access and a SharePoint site; Ways to share an Access desktop database; SharePoint Online environment; and a multitude of other articles on both TechNet and various third-party sites. While some do touch on user access rights, they seem to grant access only to an entire site or database, not to parts of it, whereas my need is to allow a way to enter data without giving access to the data. The client I am working with is adamant about using Access and SharePoint, and before telling them I can't limit a user to data entry only, I wanted to check here first. I did read a site that stated an issue with Access 2013 and SharePoint is:
> Security setup is basic and somewhat limited. Item (or row) level security is not possible and edit access appears to allow users to edit information across all tables – not just one.
This was found here: http://blogs.perficient.com . It seems to state that when you give someone table access they have all tables, and NOT that you can't restrict table access. It does seem to state, however, that the only true options are HTML or developing something standalone in Visual Studio.
user2125348 (143 rep)
Mar 11, 2015, 06:41 PM • Last activity: Mar 22, 2019, 06:20 AM
0 votes
1 answer
258 views
Do you connect an application to the load balancer name or the cluster name?
My DBA delivered a two-server WSFC (Windows Server Failover Cluster) for me to connect my SharePoint farm to. I have a load balancer name, a Windows cluster name, and two server names. I realize I need to connect either to the cluster name or to the load balancer name, but which would be the correct choice, and why?
My DBA delivered a 2 server WSCF for me to connect my sharepoint farm to. I have a load balancer name and a windows cluster name, and 2 server names. I realize I need to connect either to the cluster name or the load balancer name but which would be the correct choice and why?
Mr T (1 rep)
Mar 21, 2019, 01:54 PM • Last activity: Mar 21, 2019, 02:18 PM
2 votes
1 answer
1501 views
Mysterious disappearing consistency errors in SharePoint DBs
We have a pre-production SharePoint SQL server which threw an 824 error yesterday evening on the content DB. As part of my investigation today, I ran DBCC CHECKDB on this DB and it came up clean. Puzzling. So I ran CHECKDB on all 23 DBs on this server, and two different DBs turned up errors: the Sync DB and the CrawlStore DB. Then things started to get really odd. For one, I ran a CHECKTABLE on the table mentioned in the output from the Sync DB, but it came up clean. A CHECKDB against the entire DB also came up clean. I attempted to do the same on the CrawlStore, but it continually presented me with errors regarding creating the database snapshot. So I cloned the DB from a backup set and, of course, CHECKDB ran clean. And, in there somewhere, I also got an assertion error. I'm not sure where to go from here. I'm worried about corruption, but I don't have any actual corruption to point to at this time. The system is: Microsoft SQL Server 2008 R2 RTM, Standard Edition, x64; a VMware VM with 2 vCPUs and 8 GB memory on NetApp SAN storage; OS: Windows 2008 R2 Standard, 64-bit. Snapshots are an Enterprise feature, so I can't create a snapshot (I actually tried) to test the CHECKDB; the closest I can get is a clone of the one that's complaining about snapshot creation. I looked at my suspect_pages earlier today. Except for a row from January (also on the content DB but a different page), I have this:
DBName                                                                     file_id   page_id   event_type   error_count
SP2010Dev_Portal_Content_DB                                                1         65543     2            2
SP2010Dev_Portal_Content_DB                                                1         22211     2            3
SP2010Dev_Sync_DB                                                          1         54755     1            2
Search_Service_Application_CrawlStoreDB_8c648a692b62438888fe7154075a7d2b   1         52958     2            2
SP2010Dev_Sync_DB                                                          1         802464    2            1
Errors:
Error: 824, Severity: 24, State: 2.

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x4d448744; actual: 0xe9b1634d). It occurred during a read of page (1:22211) in database ID 23 at offset 0x0000000ad86000 in file 'G:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Data\SP2010Dev_Portal_Content_DB.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.  

Failed:(-1073548784) Executing the query "DBCC CHECKDB(N'SP2010Dev_Sync_DB')  WITH NO_INFOMS..." failed with the following error: "Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 7998501791723421696 (type Unknown), page (57600:0). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 12716041 and -10.
Object ID 357576312, index ID 0, partition ID 72057594043301888, alloc unit ID 72057594045792256 (type In-row data): Page (1:54755) could not be processed.  See other errors for details.
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'mms_step_object_details' (object ID 357576312).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'SP2010Dev_Sync_DB'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Failed:(-1073548784) Executing the query "DBCC CHECKDB(N'Search_Service_Application_CrawlSto..." failed with the following error: "Object ID 453576654, index ID 1, partition ID 72057594055491584, alloc unit ID 72057594058899456 (type In-row data): Page (1:52958) could not be processed.  See other errors for details.
Table error: Object ID 453576654, index ID 1, partition ID 72057594055491584, alloc unit ID 72057594058899456 (type In-row data), page (1:52958). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 12716041 and -4.
Table error: Object ID 453576654, index ID 1, partition ID 72057594055491584, alloc unit ID 72057594058899456 (type In-row data). Page (1:52958) was not seen in the scan although its parent (1:288931) and previous (1:52957) refer to it. Check any previous errors.
Table error: Object ID 453576654, index ID 1, partition ID 72057594055491584, alloc unit ID 72057594058899456 (type In-row data). Page (1:52959) is missing a reference from previous page (1:52958). Possible chain linkage problem.
CHECKDB found 0 allocation errors and 4 consistency errors in table 'MSSCrawlURL' (object ID 453576654).
CHECKDB found 0 allocation errors and 4 consistency errors in database 'Search_Service_Application_CrawlStoreDB_8c648a692b62438888fe7154075a7d2b'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (Search_Service_Application_CrawlStoreDB_8c648a692b62438888fe7154075a7d2b).". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
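A sketch of the follow-up checks implied above: review the logged suspect pages and re-run the consistency check with full error output. The database name is taken from the question.

-- Pages the instance has flagged, and when they were last recorded.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Re-run the check with every error reported, then a physical-only pass.
DBCC CHECKDB (N'SP2010Dev_Sync_DB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
DBCC CHECKDB (N'SP2010Dev_Sync_DB') WITH PHYSICAL_ONLY;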
SandraV (21 rep)
Apr 23, 2012, 11:43 PM • Last activity: Feb 7, 2019, 11:16 AM
2 votes
1 answer
786 views
SQL Server 2016 Standard or Enterprise for SharePoint 2016
I know that for SharePoint 2016 we can go with either SQL Server 2016 Standard or Enterprise. But I just wanted to know: are there any disadvantages to going with SQL Server Standard Edition instead of Enterprise? My question is specifically about the SharePoint features we might be missing if we go with Standard Edition.
user57156
Jan 22, 2019, 04:21 PM • Last activity: Jan 22, 2019, 11:47 PM
2 votes
1 answer
883 views
Update xml based off datatype
I need to update a value in my xml datatype column. Currently I am able to pull specific values using Select AllUserData.tp_ColumnSet.value('nvarchar13[1]', 'varchar(max)') AS AssetId... My data in the column looks like this, because this is from SharePoint and I cannot modify the XML format in any way: 0 1995 Ford White Van 123456789 1234 623 Company Dept Section User. I have tried to piece together a statement from other questions, much like [this example](https://dba.stackexchange.com/questions/75292/simplest-way-to-edit-single-entry-in-xml-column), with no success and no working WHERE clause, as shown below: `update AllUserData set tp_ColumnSet.Modify('replace value of (nvarchar1)[1] with "1996"') where AllUserData.tp_ColumnSet.value('nvarchar13[1]', 'varchar(max)') = 623`. How can I modify the "Year" by designating which AssetID it is with a WHERE clause?
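A minimal sketch of the pattern being attempted, assuming the element names implied by the question (nvarchar1 holding the year, nvarchar13 holding the asset id); note the xml method name modify() is lower-case and the WHERE comparison is against a string:

-- Replace the year inside the XML for the row whose AssetId element is 623.
-- Element names are assumptions; adjust to the real column-set schema.
UPDATE dbo.AllUserData
SET tp_ColumnSet.modify('replace value of (/nvarchar1/text())[1] with "1996"')
WHERE tp_ColumnSet.value('(/nvarchar13/text())[1]', 'varchar(max)') = '623';

If tp_ColumnSet is SharePoint's sparse column set rather than a plain xml column, the engine may refuse to modify the XML directly; in that case the underlying sparse column (e.g. nvarchar1) would have to be updated instead.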
I'm here for Winter Hats (123 rep)
May 4, 2018, 02:25 PM • Last activity: May 7, 2018, 01:44 PM
4 votes
1 answer
3547 views
SQL Server instance running out of worker threads
I have a SQL Server 2008 R2 SP3 Standard Edition 64-bit instance on 8 cores (576 max worker threads) with 32 GB RAM (MaxMem = 28000). It is the data store for a SharePoint installation, with 218 databases. It was getting dozens of "SQL Server failed with error code 0xc0000000 to spawn a thread to process a new login or connection." errors per day, but no other errors. I found that MAXDOP = 0, which is bad for SharePoint. I gradually (over weeks) brought MAXDOP down to 1. As I did so, the frequency of those errors went down to zero on most days. But I still see them once in a while. sys.dm_os_wait_stats has this to say about THREADPOOL waits:
waiting_tasks_count   wait_time_ms   max_wait_time_ms   signal_wait_time_ms
26149                 474516         4428               9
The server was last restarted at Mar 25 2018 5:55PM and current server time is Apr 20 2018 10:07PM. sp_Blitz finds nothing interesting other than use of join and order hints and slow storage writes on the drives holding tempdb files. This is on a VM in a private cloud. Increasing the number of CPUs will be very expensive, and while it is heavily used, CPU usage doesn't seem to be a problem. In this case, would increasing the max worker threads be a reasonable thing to try, should I just leave it alone and live with the occasional 17189 error, or is there another option?
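A sketch of the checks relevant to the question: the calculated worker-thread ceiling, current worker usage per scheduler, and the THREADPOOL numbers quoted above.

-- Worker-thread ceiling the instance calculated ('max worker threads' = 0 means automatic).
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- Current workers and queued tasks per scheduler.
SELECT scheduler_id, current_tasks_count, current_workers_count,
       active_workers_count, work_queue_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';

-- Cumulative THREADPOOL waits since the last restart.
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'THREADPOOL';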
Mark Freeman (2293 rep)
Apr 20, 2018, 08:35 PM • Last activity: Apr 20, 2018, 09:09 PM
Showing page 1 of 20 total questions