
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
737 views
SQL Server 2019 Machine Learning Service Issues
We're running a SQL Server 2019 instance for our data warehouse, and our data science team uses the Machine Learning Services quite extensively. Lately we're seeing a lot of error messages like the following:

> Unable to launch runtime for 'R' script for request id: C20AF5A9-DB62-4651-8BA7-14F3306E5642. Please check the configuration of the 'R' runtime.

There doesn't seem to be any pattern to it: the same code will work one time and then report this error the next. I've increased the default concurrent users from 20 to 100, and we're still seeing the issue. I'm not sure what else could be causing it, and I'm not seeing any errors in the logs.
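Not an answer, but a diagnostic sketch for this class of intermittent launch failures: the external runtimes draw memory from Resource Governor's external pool rather than from SQL Server's buffer pool, so the pool's limits are worth checking alongside the concurrent-user count. The DSN name and the 50% figure below are placeholder assumptions, not recommendations:

    library(DBI)
    con <- dbConnect(odbc::odbc(), dsn = "MyWarehouse")  # placeholder DSN

    # Current limits on the pool that hosts the R runtime processes
    dbGetQuery(con, "SELECT name, max_cpu_percent, max_memory_percent
                     FROM sys.resource_governor_external_resource_pools;")

    # Give external scripts more headroom (illustrative value only)
    dbExecute(con, 'ALTER EXTERNAL RESOURCE POOL "default" WITH (max_memory_percent = 50);')
    dbExecute(con, "ALTER RESOURCE GOVERNOR RECONFIGURE;")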
Chris (43 rep)
Jun 9, 2021, 12:50 AM • Last activity: Mar 10, 2025, 09:16 AM
0 votes
0 answers
11 views
How can I do this $find in RStudio for a collection of a database in a free MongoDB Atlas cluster?
I created a Shiny application in RStudio, and to manage the database I linked it to MongoDB Compass. When I wanted to publish my application on shinyapps, I thought it wise to move the database to MongoDB Atlas. To link MongoDB Atlas to RStudio I put this in the code:

    # Connection to the MongoDB database
    mongo_db_user <- "wambejordan13"
    mongo_db_password <- "hPJdzfQA54fhkH6A"
    mongo_database <- "élection"
    mongo_collection <- "mycollection"
    mongo_clustername <- "cluster0.s4gje.mongodb.net"
    # the following is the connection string for an Atlas online MongoDB cluster
    url_path = sprintf("mongodb+srv://%s:%s@%s/admin", mongo_db_user, mongo_db_password, mongo_clustername)
    votants <- mongo(collection = "votant", url = url_path)
    candidats <- mongo(collection = "candidat", url = url_path, options = ssl_options(weak_cert_validation = TRUE))
    electeurs <- mongo(collection = "électeur", url = url_path, options = ssl_options(weak_cert_validation = TRUE))
    candidat_delegue <- mongo(collection = "candidatDélégué", url = url_path)
    electeurs23 <- mongo(collection = "électeurDélégué", url = url_path)

But this error appears when I run the code:

    Error in : (Unauthorized) not authorized on admin to execute command { find: "candidat", filter: { }, projection: { _id: 0 }, sort: { }, skip: 0, limit: 0, noCursorTimeout: false, $db: "admin", lsid: { id: {4 [123 30 43 183 43 234 79 157 153 244 216 50 3 40 204 106]} }, $clusterTime: { clusterTime: {1727183562 2}, signature: { hash: {0 [206 172 38 240 70 44 96 154 49 112 242 104 8 13 139 227 99 180 206 249]}, keyId: 7354479715932962816.000000 } } }

I already modified the user privileges at the MongoDB Atlas level and granted readWrite, but that did not change anything.
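A hunch worth testing, not a verified fix: the /admin path in the connection string makes admin the default database for every find (note the $db: "admin" in the error), while the readWrite grant presumably covers the élection database. mongolite lets you name the target database explicitly via its db argument:

    library(mongolite)
    # Point the query at the database the role was granted on, instead of letting
    # the "/admin" URL path route the find to admin.
    url_path <- sprintf("mongodb+srv://%s:%s@%s/%s",
                        mongo_db_user, mongo_db_password, mongo_clustername, mongo_database)
    candidats <- mongo(collection = "candidat", db = mongo_database, url = url_path)
    candidats$find('{}')  # the $find that previously failed with "not authorized"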
valere (1 rep)
Sep 26, 2024, 10:59 AM
1 vote
0 answers
175 views
Machine Learning Services fails from Secondary Node of SQL Failover Cluster Instance
There was a planned cluster failover to the secondary node of our FCI. Afterwards, sp_execute_external_script stopped working. Examples with error messages are below:

    EXEC sp_execute_external_script @script=N'print(R.version)',@language=N'R';
    GO

> Msg 39012, Level 16, State 14, Line 0
> Unable to communicate with the runtime for 'R' script for request id: 4B918DE5-B534-4F35-8723-34F818536F68. Please check the requirements of 'R' runtime.
> STDERR message(s) from external script:
> SQLSatellite Run() failed. Error code:0x80004004. Error in eval(ei, envir) : SQLSatellite Run() failed. Error code:0x80004004.
> Calls: runScriptFile -> source -> withVisible -> eval -> eval -> .Call
> Execution halted

    EXEC sp_execute_external_script @script=N'import sys;print(sys.version)',@language=N'Python'
    GO

> Msg 39012, Level 16, State 14, Line 3
> Unable to communicate with the runtime for 'Python' script for request id: D1397DA5-79FD-4C4A-891C-C01E436FA20F. Please check the requirements of 'Python' runtime.
> STDERR message(s) from external script:
> SQLSatellite Run() failed. Error code:0x80004004. SqlSatelliteCall error: SQLSatellite Run() failed. Error code:0x80004004.
> STDOUT message(s) from external script: SqlSatelliteCall function failed. Please see the console output for more information.
> Traceback (most recent call last):
>   File "C:\Program Files\Python310\lib\site-packages\revoscalepy\computecontext\RxInSqlServer.py", line 608, in rx_sql_satellite_call
>     rx_native_call("SqlSatelliteCall", params)
>   File "C:\Program Files\Python310\lib\site-packages\revoscalepy\RxSerializable.py", line 375, in rx_native_call
>     ret = px_call(functionname, params)
> RuntimeError: revoscalepy function failed.

Other notes that may or may not be relevant:

- In my mind, the setup and configuration of R and Python is the same on both cluster nodes. (But I'm fallible.)
- The logon account for the SQL Server Launchpad service is a domain account on both nodes (same account).
- Whether it's R or Python, sp_execute_external_script runs for 60 seconds before erroring out.

I reviewed the documentation in the following two articles and I'm not seeing anything that is missing or out of order:

- Install SQL Server 2022 Machine Learning Services (Python and R) on Windows
- Known issues for Python and R in SQL Server Machine Learning Services

More documentation: Create a login for SQLRUserGroup. This suggested there is supposed to be a login for SQLRUserGroup, which is a local machine group account. However, sp_execute_external_script works on the primary node, and there is no [NodeA\SQLRUserGroup] login. Nonetheless, I thought I'd try creating [NodeB\SQLRUserGroup] for the secondary node. No additional permissions were granted to the login, and it is not a member of any server roles (except [public]):

    USE [master]
    CREATE LOGIN [NodeB\SQLRUserGroup] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
    GO

**My Question**
What could be causing sp_execute_external_script to fail on the secondary node (NodeB)?
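One cheap way to test the "setup is identical on both nodes" assumption is to run the same probe in a plain R session (e.g. R_SERVICES\bin\R.exe) on each node and diff the output. This is only a diagnostic sketch, not a known fix:

    # Compare these between NodeA and NodeB; any mismatch in version, library
    # paths, installed packages, or PATH is a candidate for the NodeB-only failure.
    cat(R.version.string, "\n")
    print(.libPaths())
    print(Sys.getenv(c("PATH", "TEMP")))
    print(sort(rownames(installed.packages())))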
Dave Mason (875 rep)
Jun 15, 2023, 02:00 PM
1 vote
2 answers
1387 views
Parallel PostgreSQL Queries with R
I'm running a large query that, for various reasons, I've broken into a series of smaller queries. The example below is just to show how the query is broken up by id; the actual query is much more complex:

    UPDATE target_table SET col1_target = col1_source FROM source_table WHERE id >= 0 AND id < 10;
    UPDATE target_table SET col1_target = col1_source FROM source_table WHERE id >= 10 AND id < 20;
    ...

The queries are generated and run from R with:

    query.func <- function(num){
      num2 <- num + 10
      con <- dbConnect(PostgreSQL(), ...)
      q1 <- paste('UPDATE target_table SET col1_target = col1_source FROM source_table WHERE id >=', num, 'AND id <', num2, sep = "")
      dbSendQuery(con, q1)
      dbDisconnect(con)
    }
    lapply(num, query.func)

Rather than having this long sequence of queries run one after another, I wanted to try to take advantage of my server's multiple CPUs and run them in parallel. Using the R library parallel, I've created a cluster and sent multiple queries to the cluster simultaneously:

    no_cores <- detectCores() - 1
    cl <- makeCluster(no_cores)
    clusterExport(cl, "query.func")
    clusterExport(cl, "num")
    clusterEvalQ(cl, library(RPostgreSQL))
    parLapply(cl, num, query.func)
    stopCluster(cl)

where query.func is defined as above. When I run this R script at the terminal I receive no errors, and if I check pg_stat_activity I see a list of active queries, each incrementally operating on a separate block of data. However, when I use top to check resource management on my system, I still see only one postgres process, and when I look at CPU usage, I see only one CPU active at a time. Are these queries really running simultaneously on separate CPUs? My understanding was that, prior to 9.6, a single postgres query could not be split across multiple cores, but each connection could use a separate core (related question). Does the process I've outlined above open multiple connections and send a query to the database via each connection?

PostgreSQL 9.3 / Ubuntu 14.04 / R 3.3.2
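For what it's worth, DBI connections cannot be shared across workers, so this pattern only parallelizes because each call to query.func opens its own connection; that is what makes one backend per in-flight chunk appear in pg_stat_activity. A sketch making that explicit (credentials and the id ranges are placeholders):

    library(parallel)
    query.func <- function(num) {
      library(RPostgreSQL)
      con <- dbConnect(PostgreSQL(), dbname = "mydb", user = "me")  # placeholders
      on.exit(dbDisconnect(con))  # release the backend even if the UPDATE errors
      res <- dbSendQuery(con, sprintf(
        "UPDATE target_table SET col1_target = col1_source
         FROM source_table WHERE target_table.id >= %d AND target_table.id < %d",
        num, num + 10))
      dbClearResult(res)
    }
    cl <- makeCluster(detectCores() - 1)
    parLapply(cl, seq(0, 90, by = 10), query.func)  # one connection per worker call
    stopCluster(cl)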
Matt (291 rep)
Jan 9, 2017, 06:20 PM • Last activity: Dec 8, 2021, 08:31 AM
1 vote
1 answer
199 views
Would you recommend to use PL/R in 2021 or is it dead?
For many years there has been a PostgreSQL extension for using GNU R directly in the database: https://www.joeconway.com/plr/

Ubuntu packages are also provided and updated for the latest PG: https://ubuntuupdates.org/pm/postgresql-13-plr

But it seems to me that there is no maintenance happening on the extension itself (the changelog references PG 9.5). Is it dead? We would love to use PL/R in our PG databases, but I am afraid I would then depend on an unmaintained extension that is not meant for production use, and that it will come back to bite us in a few years...

Has anyone used the PL/R extension with PG in production in recent years? Was your experience good? Are recent versions of GNU R supported?
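For context on what is at stake, this is the sort of thing PL/R provides: the function body is ordinary R evaluated inside the backend, with SQL arguments exposed to R as arg1, arg2, and so on. A minimal sketch, sent through DBI only to keep the example self-contained (the dbname is a placeholder; normally you would run the DDL in psql):

    library(DBI)
    con <- dbConnect(RPostgres::Postgres(), dbname = "mydb")  # placeholder
    dbExecute(con, "CREATE EXTENSION IF NOT EXISTS plr;")
    dbExecute(con, "
      CREATE OR REPLACE FUNCTION r_median(double precision[])
      RETURNS double precision AS $$
        median(arg1)   # plain R: arg1 is the SQL array as an R numeric vector
      $$ LANGUAGE plr;
    ")
    dbGetQuery(con, "SELECT r_median(ARRAY[1.0, 5.0, 2.0]);")  # expect 2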
alfonx (857 rep)
Aug 29, 2021, 08:37 AM • Last activity: Aug 30, 2021, 12:18 PM
6 votes
3 answers
21402 views
SQL Server LaunchPad "The service did not respond in a timely fashion" + No Log in ExtensibilityLog
I'm trying to start the LaunchPad service but it fails to start with:

> Error 1053: "The service did not respond in a timely fashion"

I've been reading this was common in RC versions, but I'm running SQL Server 2016 SP1 + CU3. I've tried adding the *SQLRUserGroup* to the *LogOnLocally* user right assignment; the service still does not run. Also, there is no log in the folder C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log\ExtensibilityLog

I've tried changing the security of the log folder, including the NT Service\MSSQLLaunchpad and SQLRUserGroup entries.
Dan (161 rep)
Aug 2, 2017, 09:47 AM • Last activity: Aug 3, 2021, 02:30 PM
7 votes
4 answers
3624 views
Upgrade SQL Server 2016 R Service (in-database)
I am just wondering whether this is doable. I have installed R Services (in-database) together with SQL Server 2016 (patched at SP1 + CU1). I notice the R service version is 3.2.2, which you can check by running the following script:

    declare @v varchar(100);
    exec sp_execute_external_script
      @language = N'R'
    , @script = N'v <- toString(getRversion())'
    , @params = N'@v varchar(100) output'
    , @v = @v out;
    print @v; -- returns 3.2.2

But I also installed the Microsoft R Client, and notice its R engine is versioned at 3.**3**.2. So my question is: does SQL Server R Services (in-database) now support R version 3.**3**.2? If so, how can I upgrade it? If not, I guess I will wait until MS ships the update.

I read MSDN, which mentions using sqlBindR.exe to do the update, but this tool ships only with Windows R Server, which I did not install; I even doubt that, had I installed it, it would update the in-database R service.
jyao (3083 rep)
Jan 24, 2017, 09:03 PM • Last activity: Oct 25, 2020, 07:17 AM
3 votes
2 answers
6986 views
SQL Server 2016 Install - R Server and R Services Install Failed
I just installed SQL Server 2016 Developer and included R Server and R Services in the install. Everything installed correctly except R Server and R Services. The log file contained this:

    2016-07-29T17:07:39 INFO Command invoked: D:\x64\RSetup.exe /install /component SRS /version 8.0.3 /language 1033 /destdir C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES /logfile C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Log\20160729_125230\RSetup.log /instance MSSQLSERVER
    2016-07-29T17:07:39 INFO Using default cache dir: C:\Users\fs103799\AppData\Local\Temp\
    2016-07-29T17:07:39 INFO Backing up source dir: C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\ => C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES_63975911-f850-4ad1-bf7c-21adf0f9ba6d
    2016-07-29T17:07:39 ERROR Error renaming source dir: Access to the path 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\' is denied.
    2016-07-29T17:07:39 INFO Exiting with code 5

I have no idea why access to the path was denied. The path exists and there are files there, so the install process was able to store files there but then couldn't access the folder.
Randy Minder (2032 rep)
Jul 29, 2016, 06:00 PM • Last activity: Jun 5, 2020, 06:43 AM
2 votes
1 answer
341 views
Incompatibility between unixODBC and Firebird ODBC driver
I'm currently developing an application based on R code, ODBC, and Firebird. First I installed the Firebird server and unixODBC (both 64-bit, since my machine's OS is 64-bit). I tried 32-bit software as well, since Ubuntu multiarch allows it, but that caused another problem (which doesn't matter here).

I created a database using Firebird's isql tool (/opt/firebird/bin/isql) and made some fake tables with integer, float, and text columns. I can connect to it using unixODBC's isql tool (after properly configuring the DSN and driver in odbc.ini / odbcinst.ini), but there's a problem: **I can't select tables containing text columns; it gives the error std::bad_alloc.**

After some research, I discovered the *slencheck* utility from unixODBC; running it on my driver returns:

    slencheck: sizeof(SQLLEN) == 4
    slencheck: driver manager and driver differ!!!

**How can I modify unixODBC's SQLLEN so it becomes compatible with the 64-bit Firebird ODBC driver?** Is SQLLEN why I can't select text columns, or is there more to this? I think this odbc GitHub issue is related to mine.
dwenzel (63 rep)
Jan 24, 2020, 08:17 PM • Last activity: Feb 3, 2020, 12:27 PM
0 votes
1 answer
95 views
Will creating reports in R using a MySQL database update my database in MySQL?
I use MySQL to store my data. To analyze the data I want to be able to generate monthly reports in R. So far I was able to connect to MySQL using `dbConnect`, but I want to make sure that none of the tweaking I do affects the database I have in MySQL. Below is the code I used to import the database/table into R:

    install.packages("RODBC")
    library(RODBC)
    Data <- dbConnect(MySQL(), user = "...", password = "", dbname = "...")
    Table1 <- fetch(dbSendQuery(Data, "select * from Table"), -1)

Using the fetch function I am able to load the table into R, but I want to make sure that I am not changing it in MySQL at the same time. Would applying any operations, adding columns, etc., affect the original database in MySQL?
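SELECT queries only copy rows into an R data frame, so anything done to that data frame stays in R; nothing touches MySQL unless you explicitly write back (e.g. dbWriteTable or an UPDATE/INSERT statement). For a hard guarantee, connect with a read-only account. A sketch, with a hypothetical report_ro user and the RMySQL driver standing in for the setup above:

    # Run once on the MySQL side (hypothetical account):
    #   CREATE USER 'report_ro'@'%' IDENTIFIED BY '...';
    #   GRANT SELECT ON mydb.* TO 'report_ro'@'%';
    library(DBI)
    con <- dbConnect(RMySQL::MySQL(), user = "report_ro", password = "...",
                     dbname = "mydb", host = "localhost")
    Table1 <- dbGetQuery(con, "select * from Table")  # copies rows into R
    Table1$new_col <- 1   # modifies only the in-memory copy, never MySQL
    dbDisconnect(con)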
mkobayashi (3 rep)
Oct 11, 2019, 04:24 AM • Last activity: Oct 11, 2019, 06:04 AM
1 vote
0 answers
46 views
Custom export from Power BI service
I'm trying to make a custom button that exports filtered tables to a CSV and downloads them to my computer. I am aware that you can do this with the '...' button, but this functionality was requested by other users as a custom button. I was able to write an R script that achieves this in Power BI Desktop, but I need it to work in the Power BI service. The desktop version is currently implemented with a button that goes to an 'Export' page, which runs the R script that downloads the filtered table as a CSV. If anyone has any idea how I might run a script that exports and downloads tables as CSVs from the Power BI service, I would really appreciate it.
mario font (11 rep)
May 31, 2019, 05:53 PM
0 votes
0 answers
413 views
SQL Server 2016 R Statistics: can't find function
I'm trying to get R Statistics running on my dev server, using this microsoft tutorial. I have the default installation of R Services as provided by the SQL Server installation:

> C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\

And I also have Microsoft R Client 3.4.3, which contains the RX functions, installed to:

> C:\Program Files\Microsoft\R Client

We have a Tutorial database with test data, R is enabled, and I can do "hello world". Everything works up until Step 3.3:

> --Save Linear model to table
> DECLARE @model VARBINARY(MAX);
> EXEC generate_rental_R_native_model "linear", @model OUTPUT;
> INSERT INTO rental_models (model_name, native_model, lang) VALUES('linear_model', @model, 'R');

When I run it I get the following error:

> Msg 39004, Level 16, State 20, Line 59
> A 'R' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004.
> Msg 39019, Level 16, State 1, Line 59
> An external script error occurred: Error in eval(expr, envir, enclos) : could not find function "rxSerializeModel"
> Calls: source -> withVisible -> eval -> eval
> Error in ScaleR. Check the output for more information.
> Error in eval(expr, envir, enclos) : Error in ScaleR. Check the output for more information.
> Calls: source -> withVisible -> eval -> eval -> .Call
> Execution halted

I'm on SQL 2016 SP2 CU3, so I know I don't have the predict() function, but everything up until that point should work. If I run RGUI I can prove that rxSerializeModel is there. This works:

    myIris <- ...

Yet from SQL I get:

> Msg 39004, Level 16, State 20, Line 10
> A 'R' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004.
> Msg 39019, Level 16, State 1, Line 10
> An external script error occurred: Error in path.expand(new) : could not find function "rxSqlLibPaths"
> Calls: source ... .libPaths -> Sys.glob -> path.expand -> path.expand
> Error in ScaleR. Check the output for more information.
> Error in eval(expr, envir, enclos) : Error in ScaleR. Check the output for more information.
> Calls: source -> withVisible -> eval -> eval -> .Call
> Execution halted

Why can't SQL see it? Is it a path issue? How do I know where SQL is looking for libraries?
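To answer the last question empirically, you can ask the Launchpad-hosted runtime itself where it looks. A diagnostic sketch (the DSN is a placeholder) that returns the in-database runtime's library search path; rxSerializeModel must live under one of those directories to be visible from SQL:

    library(DBI)
    con <- dbConnect(odbc::odbc(), dsn = "DevServer")  # placeholder DSN
    # .libPaths() here is evaluated inside the Launchpad-hosted R, not your RGUI
    dbGetQuery(con, "
      EXEC sp_execute_external_script
        @language = N'R',
        @script   = N'OutputDataSet <- data.frame(libpath = .libPaths())';
    ")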
Sir Swears-a-lot (3253 rep)
Dec 11, 2018, 02:34 AM • Last activity: Dec 11, 2018, 08:47 PM
0 votes
1 answer
53 views
How to best manage a queue of external processes
My scenario is a web app that creates jobs that need to be processed by an R script. We currently have the web app execute a SQL stored procedure that executes the R script. However, we quickly realized this is unworkable: the first job eats up a lot of memory and then all subsequent jobs fail. We did set aside 40% of the server's memory for external processes, as described in MSFT's docs here: https://learn.microsoft.com/en-us/sql/advanced-analytics/administration/how-to-create-a-resource-pool?view=sql-server-2017

I appreciate any advice or recommendations you may have.
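One common shape for this, offered as a sketch under assumptions rather than a recommendation (dbo.RJobQueue and process_job() are hypothetical): have the web app only INSERT a queue row, and let a single long-running R worker drain the queue serially, so only one memory-hungry job runs at a time:

    library(DBI)
    con <- dbConnect(odbc::odbc(), dsn = "AppDb")  # placeholder DSN
    repeat {
      # Claim one queued job atomically; READPAST lets additional workers
      # skip rows already locked by another worker.
      job <- dbGetQuery(con, "
        UPDATE TOP (1) q SET status = 'running'
        OUTPUT inserted.job_id, inserted.payload
        FROM dbo.RJobQueue q WITH (ROWLOCK, READPAST)
        WHERE status = 'queued';")
      if (nrow(job) == 0) { Sys.sleep(10); next }
      ok <- tryCatch({ process_job(job$payload); TRUE },   # hypothetical job body
                     error = function(e) FALSE)
      dbExecute(con, sprintf("UPDATE dbo.RJobQueue SET status = '%s' WHERE job_id = %d",
                             if (ok) "done" else "failed", job$job_id))
    }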
tnk479 (143 rep)
Nov 30, 2018, 08:19 PM • Last activity: Nov 30, 2018, 10:30 PM
5 votes
1 answer
1858 views
System configuration to address poor PostgreSQL INSERT performance
##Problem Symptom##

High CPU usage by the postmaster child processes associated with client connections attempting to INSERT low-volume rows (results in rows INSERTed **25x slower** than using COPY ... FROM STDIN for identical rows).

##Background##

**Trying to identify system/database configuration to alleviate the aforementioned poor INSERT performance.**

I'm using a multi-threaded R script to process data and INSERT results back into a PostgreSQL database. I've profiled the R script to isolate the performance bottleneck to the DBI::dbBind() calls, while using top to monitor the postmaster child processes associated with the connections opened by the child R threads (see **code** below). During the INSERTs, the R child processes run mostly idle (presumably waiting for return of the DBI::dbBind() call), whereas the postmaster child processes consume 95-100% CPU on the cores on which they run for approximately 2-3 minutes.

##System / environment:##

- postgresql version 10.3 (Fedora package 10.3-5.fc27)
- uname -a: Linux localhost 4.16.6-202.fc27.x86-64 #1 SMP Wed May 2 00:09:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- /proc/cpuinfo: 16 processors (Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz)
- ulimit -a:

      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 515220
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 515220
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

- /proc/meminfo (at an arbitrary time while postmaster was processing INSERTs):

      MemTotal: 131923484 kB
      MemFree: 112894260 kB
      MemAvailable: 123181440 kB
      Buffers: 201220 kB
      Cached: 14932288 kB
      SwapCached: 0 kB
      ...
      Mlocked: 0 kB
      SwapTotal: 201326588 kB
      SwapFree: 201326588 kB
      Dirty: 3260 kB
      ...
      Shmem: 4251184 kB
      Slab: 1024344 kB
      SReclaimable: 658476 kB
      SUnreclaim: 365868 kB
      ...
      PageTables: 38436 kB
      ...
      CommitLimit: 267288328 kB
      Committed_AS: 35678744 kB
      VmallocTotal: 34359738367 kB
      VmallocUsed: 0 kB
      VmallocChunk: 0 kB
      HardwareCorrupted: 0 kB
      ...
      ShmemHugePages: 0 kB
      ShmemPmdMapped: 0 kB
      ...
      HugePages_Total: 0
      HugePages_Free: 0
      HugePages_Rsvd: 0
      HugePages_Surp: 0
      Hugepagesize: 2048 kB
      Hugetlb: 0 kB
      ...

- Database tables in testing reside on a pair of SSD drives in a RAID 10 configuration, with logging and WAL on separate drives. Since the baseline COPY ... FROM STDIN has no performance issue writing (see below; timing also includes bash sync return time) and CPUs do not generally appear to wait on IO, I do not believe disk performance presents the bottleneck.

##Postgres configuration:##

- The following are the settings that are not commented out (i.e., generally non-default settings)
- Some of these may appear a bit high in an attempt to identify a configuration change (please feel free to note any "crazy" settings given my system -- assuming the server primarily runs as a database server)

      max_connections = 20 # (change requires restart)
      shared_buffers = 32GB # min 128kB
      temp_buffers = 128MB # min 800kB
      max_prepared_transactions = 20 # Allow 'max_connections'
      work_mem = 16MB # min 64kB
      max_stack_depth = 6MB # min 100kB
      dynamic_shared_memory_type = posix # the default is the first option
      max_wal_size = 5GB
      checkpoint_flush_after = 1MB # measured in pages, 0 disables
      deadlock_timeout = 15s
      max_locks_per_transaction = 2096 # min 10

- Also (temporarily) tried the following in /lib/systemd/system/postgresql.service.d/10-postgresql-unlimited.conf to no effect (actually a small degradation of performance). [**WARNING** For casual readers, the following settings are **NOT** recommended for use]

      # DO NOT USE THIS IN PRODUCTION (or elsewise)
      [Service]
      LimitDATA=infinity
      LimitFSIZE=infinity
      LimitLOCKS=infinity
      LimitMEMLOCK=infinity
      LimitMSGQUEUE=infinity
      LimitNPROC=infinity
      LimitNOFILE=infinity
      LimitSIGPENDING=infinity

##Database profile:##

- Database cluster only has one user, and is accessed generally by multi-threaded processes (between 1 and 3 threads used for testing, but up to 16 expected for automated processes and an additional 3-4 connections for administrators).
- Testing database initialized with table definitions, but no rows. Table definitions include 28 partitioned tables (using [Declarative Partitioning](https://www.postgresql.org/docs/10/ddl-partitioning.html)) that lack trigger functions and have only limited foreign keys and constraints/indices.
- Each partitioned table has 180 partitions and between 42 and 81 columns (of double precision type). **NB**: While these are wider tables than ideal and have more partitions than recommended, I have attempted to reduce the number of columns to 12 and the number of partitions to 30 and still see comparably poor INSERT performance.

##Code (R):##

- Currently based on the PostgreSQL ODBC driver (the only one I could get working with multi-line prepared statements):

      library(odbc);
      cdb <- dbConnect(odbc(), ...);
      sth <- dbSendStatement(cdb, paste(rep("INSERT INTO ... (...) VALUES (?,...,?);", 100), collapse = ""));  # Bind up to 8,000 placeholders at a time (ref in source?) -- use 100 multi-line statements in this example
      DBI::dbBind(sth, bvallist);  # Here, 'bvallist' is a list of values to bind in the multi-line prepared statement -- have tested and checked values in database are correct after INSERT
      num_recs <- ...

- Baseline load used: ... .out | psql -c 'COPY ... FROM STDIN;'
- INSERTed 11,117 rows into each of the 28 partitioned tables in 14 sec => 22,234 rec/sec (or 4.5e-5 sec/row)
- vmstat -wt 1 information for COPY ... FROM STDIN (CPU info based on 16 processors):

      procs -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu-------- -----timestamp-----
       r b swpd free buff cache si so bi bo in cs us sy id wa st EST
       2 0 0 113563384 200600 15965816 0 0 0 696 4178 1518 14 1 85 0 0 2018-11-27 14:29:19
       2 0 0 113562280 200600 15967184 0 0 0 1483 4781 1727 14 1 84 0 0 2018-11-27 14:29:20
       3 0 0 113564520 200604 15962952 0 0 0 1123 4847 1790 15 2 83 0 0 2018-11-27 14:29:21
       3 0 0 113565664 200604 15962020 0 0 0 800 4512 1646 15 2 84 0 0 2018-11-27 14:29:22
      >>>>>>>> START: COPY ... FROM STDIN bash script (Tue Nov 27 14:29:23 EST 2018)
       3 0 0 113566992 200604 15960036 0 0 0 760 4730 1655 14 2 84 0 0 2018-11-27 14:29:23
       3 1 0 113551160 200604 15961608 0 0 0 9704 5271 5025 15 3 82 1 0 2018-11-27 14:29:24
       3 0 0 113548176 200604 15966864 0 0 0 8764 6269 6335 19 3 77 1 0 2018-11-27 14:29:25
       3 0 0 113552104 200604 15969684 0 0 0 7744 6331 5648 19 3 77 1 0 2018-11-27 14:29:26
       1 0 0 113521248 200604 15975508 0 0 0 8624 3717 5478 9 2 89 0 0 2018-11-27 14:29:27
       2 0 0 113535536 200604 15976840 0 0 0 12563 4982 8784 11 3 86 1 0 2018-11-27 14:29:28
       3 0 0 113535640 200604 15978772 0 0 0 11223 5473 6182 12 3 84 1 0 2018-11-27 14:29:29
       2 0 0 113533576 200604 15977312 0 0 0 11180 5032 6443 12 3 85 0 0 2018-11-27 14:29:30
       2 0 0 113534384 200604 15978180 0 0 0 11169 4961 6511 12 3 86 0 0 2018-11-27 14:29:31
       2 0 0 113504656 200604 16004428 0 0 0 32691 4551 13584 10 3 84 2 0 2018-11-27 14:29:32
       2 0 0 113486672 200604 16023572 0 0 0 26133 4387 8803 10 3 86 2 0 2018-11-27 14:29:33
       4 0 0 113459744 200604 16033296 0 0 0 12535 5709 8188 17 3 80 1 0 2018-11-27 14:29:34
       1 0 0 113444128 200604 16057612 0 0 0 78953 8980 10186 12 4 82 2 0 2018-11-27 14:29:35
       1 1 0 113415520 200604 16087380 0 0 0 23640 5576 8781 15 3 80 1 0 2018-11-27 14:29:36
       3 0 0 113400864 200604 16108080 0 0 0 14733 3348 10182 6 2 90 1 0 2018-11-27 14:29:37

Working on EXPLAIN ANALYZE of the SQL, but I welcome thoughts ahead of time on potential kernel settings/database configuration/SystemD settings/etc. Thank you in advance.

##Update (2018.11.28)##

- Created an SQL flat file with the INSERT statements produced by the R script (placed the statements within a BEGIN ... COMMIT transaction block) and loaded it with psql -f -- no performance improvement.
- Ran EXPLAIN ANALYZE VERBOSE on a single INSERT, with the following output:

      Insert on ... (cost=0.00..0.26 rows=1 width=1952) (actual time=0.074..0.074 rows=0 loops=1)
        ->  Result (cost=0.00..0.26 rows=1 width=1952) (actual time=0.006..0.007 rows=1 loops=1)
            Output: ...
      Planning time: 0.303 ms
      Trigger RI_ConstraintTrigger_c_274329 for constraint ... on ...: time=0.071 calls=1
      Execution time: 13.345 ms
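Since the baseline above shows COPY is roughly 25x faster, one workaround worth testing, sketched here under the assumption that RPostgres's dbWriteTable (which loads data frames via the COPY protocol; worth verifying on your version) is acceptable in place of the ODBC driver: stage each batch with COPY, then move it into the partitioned target with one set-based INSERT, paying partition routing and FK-trigger costs per batch instead of per row.

    library(DBI)
    con <- dbConnect(RPostgres::Postgres(), dbname = "mydb")  # placeholder
    # results_df stands in for one batch of rows produced by the R threads
    dbWriteTable(con, "staging_results", results_df, temporary = TRUE)  # COPY-based load
    dbExecute(con, "INSERT INTO target_partitioned SELECT * FROM staging_results;")
    dbDisconnect(con)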
Whee (189 rep)
Nov 28, 2018, 02:56 AM • Last activity: Nov 28, 2018, 07:08 PM
4 votes
2 answers
2946 views
Msg 39011, SQL Server was unable to communicate with the LaunchPad service
We have just installed SQL Server Enterprise 2016 CTP 3.3 on a server that already had SQL 2012 Standard installed. We installed SQL 2016 as a named instance and have been trying to get the R integration working, but we are having problems running the external scripts from Management Studio.

We walked through the **Install the R Packages** steps described here: https://msdn.microsoft.com/en-us/library/mt590809.aspx, and we manually ran the **Post-Installation R Configuration** steps listed here: https://msdn.microsoft.com/en-us/library/mt590536.aspx

At this point, we are able to connect to SQL from RStudio and manipulate data locally or on the server. However, we are not able to run R scripts from Management Studio. When we try to run an R script in T-SQL, we get the error message:

> Msg 39011, Level 16, State 1, Line 1
> SQL Server was unable to communicate with the LaunchPad service. Please verify the configuration of the service.

We have tried several of the sample R scripts, including the following:

1. execute sp_execute_external_script @language = N'R'
   , @script = N' mytextvariable <- c("hello", " ", "world"); OutputDataSet <- as.data.frame(mytextvariable);'
   , @input_data_1 = N' SELECT 1 as Temp1'
   WITH RESULT SETS (([col] char(20) NOT NULL));

2. execute sp_execute_external_script @language = N'R'
   , @script = N' OutputDataSet <- InputDataSet;'
   , @input_data_1 = N' SELECT * FROM MyData;'
   WITH RESULT SETS (([NewColName] int NOT NULL));

3. execute sp_execute_external_script @language = N'R'
   , @script = N' SQLOut <- SQLIn;'
   , @input_data_1 = N' SELECT 12 as Col;'
   , @input_data_1_name = N'SQLIn'
   , @output_data_1_name = N'SQLOut'
   WITH RESULT SETS (([NewColName] int NOT NULL));

All of them result in the same error message about not being able to communicate with the LaunchPad service.

Here are the configuration options we have verified seem to be configured correctly:

- We can see that "external scripts enabled" has a value of 1 when running "exec sp_configure"
- We can see the Windows user group "SQLRUserGroupSQL2016" has been created in "Local Users and Groups"
- We can see the 20 Windows users created for R processing in "Local Users and Groups"
- We can see that the database role "db_rrerole" was created correctly in the master db.
- We can see the extended stored procedures "xp_ScaleR_..." have been created correctly in the master db.
- We have verified that the SQL Server service for our instance is running, along with the SQL Agent service and the SQL LaunchPad service.
- They are all logged in as the same Windows domain user account, and that user has been added to the Administrators group for the server.
- We have tried restarting the above services many times, as well as rebooting the server.
- We have tried connecting in Management Studio both with Windows auth as a domain account in the "sysadmin" server role, and as a local SQL user account in the "sysadmin" server role
- We manually added those users to the "db_owner" and "db_rrerole" roles on the master db (just to be sure)

Don't know what else to check…
Stephen (141 rep)
Feb 26, 2016, 02:59 PM • Last activity: Sep 5, 2018, 12:30 PM
1 vote
0 answers
132 views
better format using nanodbc R on Windows
Using R 3.5 and [odbc](https://github.com/r-dbi/odbc) (which uses [nanodbc](https://github.com/nanodbc/nanodbc)), I connect via ODBC with the ODBC Driver 13 for SQL Server driver on Windows 10.

    DBI::dbGetQuery(odbccon, "select SYSDATETIMEOFFSET() as now")
    #                                      now
    # 1 2018-09-03 16:36:44 Pacific Standard Time

The same query with a native (non-ODBC) driver:

    DBI::dbGetQuery(nativecon, "select SYSDATETIMEOFFSET() as now")
    #                                  now
    # 1 2018-09-03 19:37:13.9923643 -04:00

(Both are R's character type.) This demonstrates a few things:

- the ODBC driver does not recognize an object of type DATETIMEOFFSET, which is an [issue I've raised with odbc](https://github.com/r-dbi/odbc/issues/207); this is not of itself a huge issue, but I think it indicates why the value is returned as a string (and not internally converted correctly, I think)
- loss of millisecond precision (the biggest problem)
- non-standard time zone indication (where I believe "standard" includes -07:00, -0700, +07, or even America/Los_Angeles)

Is there a way to prevent SQL Server or Windows ODBC from formatting (and losing) data this way? Is there a way to change this via a SQL Server command, an ODBC function, or something else? (The largest problem is the omission of milliseconds.)

**Edit**: for fairness, having it come back as a character instead of a native POSIXt is not a big problem; that's easy enough to fix with a call to as.POSIXct. Similarly, I can compensate for the timezone depiction using a lookup built from [this "Zone -> Tzid" table](https://www.unicode.org/cldr/charts/latest/supplemental/zone_tzid.html). It's the loss-of-data thing I really need to mitigate, since my immediate use-case uses millisecond precision, and altering all processes to explicitly cast(...) is more than I'd like to do right now (and is not a general solution).

    DBI::dbGetQuery(con, "select @@VERSION")
    # 1 Microsoft SQL Server 2016 (RTM-GDR) (KB4019088) - 13.0.1742.0 (X64) \n\tJul 5 2017 23:41:17 \n\tCopyright (c) Microsoft Corporation\n\tStandard Edition (64-bit) on Windows Server 2016 Standard 6.3 (Build 14393: )\n
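Until the type survives the driver, one R-side stopgap for the native driver's string shown above (a sketch; the regex assumes a trailing ±HH:MM offset) is to normalize the offset and parse with %OS so the fractional seconds survive:

    x <- "2018-09-03 19:37:13.9923643 -04:00"
    # strptime's %z wants "-0400", so strip the colon from the offset first
    x2 <- sub("([+-]\\d{2}):(\\d{2})$", "\\1\\2", x)
    options(digits.secs = 7)     # keep fractional seconds visible when printing
    as.POSIXct(x2, format = "%Y-%m-%d %H:%M:%OS %z", tz = "UTC")
    # same instant, expressed in UTC: 2018-09-03 23:37:13.99... UTC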
r2evans (121 rep)
Sep 4, 2018, 12:01 AM • Last activity: Sep 4, 2018, 03:04 PM
4 votes
2 answers
1216 views
What are exact benefits of using R services on Enterprise vs Standard?
I am currently using SQL Server 2016 SP Standard Edition and want to install R Services. The documentation says a standalone installation is allowed only on Enterprise (Machine Learning Server (Standalone)). Also, I found a post saying that on Standard you cannot use parallel operations and there are memory limitations.

Could anyone tell me what the exact memory/processor limitations are? I need the details because I am going to run heavy R calculations, and my tests so far show that the implementation in Standard Edition does not perform as I hoped.
gotqn (4348 rep)
Jul 25, 2018, 02:36 PM • Last activity: Jul 25, 2018, 07:26 PM
3 votes
0 answers
1165 views
View(df) in R shows function(x, df1, df2, ncp, log = FALSE)
I connected R with SQL Server (in Visual Studio 2017) and the connection succeeded.

This is Settings.R:

    settings <- as.environment(list())
    dbConnection1 <- 'Driver={SQL Server};Server=(local);Database=SaVeITDB;Trusted_Connection=yes'

This is SqlQuery.sql; executing the query below also succeeded:

    -- Place SQL query retrieving data for the R stored procedure here
    select AOIName from AreaOFInterest a, ProjectsAOI p where p.AOIId = a.AOIId;

And this is Script.R; when I execute it, View(df) shows this function in the df entry (which appears with a lock icon):

    source("Settings.R")
    library(RODBC)
    conn <- odbcDriverConnect(connection = dbConnection)
    df <- sqlQuery(conn, iconv(paste(readLines('c:/users/comnet/source/repos/rproject3/rproject3/sqlquery.sql', encoding = 'UTF-8', warn = FALSE), collapse = '\n'), from = 'UTF-8', to = 'ASCII', sub = ''))
    View(df)

The function shown:

    function(x, df1, df2, ncp, log = FALSE)
    {
        if (missing(ncp)) .Call(C_df, x, df1, df2, log)
        else .Call(C_dnf, x, df1, df2, ncp, log)
    }

**Edit**: It executes correctly without any error after running Visual Studio as admin.
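For anyone puzzled by what View(df) displayed: df is also the name of base R's F-distribution density function in the stats package, and the body shown above is exactly that function, so the viewer fell back to stats::df rather than to a query result. (Note also that the script connects with dbConnection while Settings.R defines dbConnection1.) You can verify this in any fresh session:

    stats::df          # the function View() showed, from the stats namespace
    # function (x, df1, df2, ncp, log = FALSE)
    # {
    #     if (missing(ncp)) .Call(C_df, x, df1, df2, log)
    #     else .Call(C_dnf, x, df1, df2, ncp, log)
    # }
    exists("df")       # TRUE even before any assignment, via the stats namespace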
Rand alrand (45 rep)
Mar 18, 2018, 07:01 PM • Last activity: Mar 20, 2018, 07:47 AM
1 vote
1 answer
976 views
Connect SQL Server file (mdf) with R Tools
Is it possible to connect a SQL Server file (.mdf) with R Tools in Visual Studio 2017? I have tried many times but the connection failed. From Visual Studio 2017, I want to connect R with a SQL Server database. I created an R project, then from R Tools went to Tools --> Data --> Add database connection --> Attach database file, but the connection failed.
Rawan (7 rep)
Mar 3, 2018, 08:08 PM • Last activity: Mar 5, 2018, 07:13 PM
2 votes
1 answer
909 views
How to prevent SQLite from blocking while using R?
I am running an infinite loop to keep gathering data from database **db**, rework it, and send it to another database **db1**:

    library(DBI)
    library(RSQLite)
    while(1==1){
      db = dbConnect(SQLite(), dbname="Data.sqlite", synchronous = NULL)
      db1 = dbConnect(SQLite(), dbname="Data2.sqlite", synchronous = NULL)
      Column1 = dbGetQuery(db, 'select * from Table1')
      # Here are some calculations
      Result = Column1/10
      # Here calculations are finished and ready to be sent to the new database
      dbWriteTable(db1, name="Result", value=Result, row.names=TRUE, append=TRUE)
      dbDisconnect(db1)
      dbDisconnect(db)
    }

When I run the code, it executes about 10 loops and then raises the following error:

> Error in rsqlite_send_query(conn@ptr, statement) : database is locked

Can someone please help me solve this problem?
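Two things worth trying, sketched below rather than verified: set SQLite's busy timeout so a writer waits for a lock instead of failing immediately, and open each connection once outside the loop instead of reconnecting on every iteration:

    library(DBI)
    library(RSQLite)
    db  <- dbConnect(SQLite(), dbname = "Data.sqlite")
    db1 <- dbConnect(SQLite(), dbname = "Data2.sqlite")
    dbGetQuery(db,  "PRAGMA busy_timeout = 5000")   # wait up to 5 s for a lock
    dbGetQuery(db1, "PRAGMA busy_timeout = 5000")
    repeat {
      Column1 <- dbGetQuery(db, "select * from Table1")
      Result  <- Column1 / 10
      dbWriteTable(db1, name = "Result", value = Result, row.names = TRUE, append = TRUE)
    }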
Kingindanord (141 rep)
Dec 11, 2017, 12:09 AM • Last activity: Dec 12, 2017, 09:23 AM
Showing page 1 of 20 total questions