
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1036 views
Two physical servers with SQL Server on the same network
Is there a way I can remotely access a SQL Server DB when two different servers share the same network/public IP? The first server is configured to listen on port 1433, but we make the connections using a No-IP provider. As both servers will be under the same DNS name, how can I configure my app/SQL Server to connect to the DB on the second server?
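A common approach (a sketch under assumptions, since the question doesn't describe the router setup): give each instance its own TCP port, forward both ports on the shared public IP, and name the port explicitly in the client connection string. The hostname and second port below are hypothetical:

```
# hypothetical No-IP hostname; the router forwards each port to a different server
Server=myhost.ddns.net,1433;Database=AppDb;User Id=app;Password=...;   # first server
Server=myhost.ddns.net,14333;Database=AppDb;User Id=app;Password=...;  # second server listening on 14333
```

Note that the SQL Server Browser service can't disambiguate two machines behind one public IP, since only one of them can receive the forwarded UDP 1434 traffic, so explicit ports are the usual answer.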
Pedro Cury (1 rep)
May 6, 2019, 11:52 PM • Last activity: Jul 24, 2025, 02:04 PM
0 votes
1 answer
190 views
PostgreSQL: using stale statistics instead of current ones because stats collector is not responding
We are running PostgreSQL 13 on an Azure CentOS VM and found this problem in the logs, followed by some slow COMMIT and SET statements. The slow-statement log entries lasted less than 20 seconds. System metrics are normal before and during the problem, with a small spike in I/O wait of up to 4%.

```
2024-08-15 16:21:59.048 CST,,,33328,,62b10ea0.8230,14,,2022-06-21 08:19:44 CST,1/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum launcher"
2024-08-15 16:22:09.203 CST,,,58821,,66bdbaa7.e5c5,1,,2024-08-15 16:21:59 CST,148/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum worker"
2024-08-15 16:22:09.253 CST,"user_w","user_db",53133,"10.0.0.85:58698",66bdb747.cf8d,1,"COMMIT",2024-08-15 16:07:35 CST,46/0,0,LOG,00000,"duration: 21525.916 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.14:34356","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",48595,"10.0.0.68:33334",66bdb4d3.bdd3,1,"COMMIT",2024-08-15 15:57:07 CST,15/0,0,LOG,00000,"duration: 21383.608 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36088","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",50680,"10.0.0.68:33714",66bdb5a9.c5f8,1,"COMMIT",2024-08-15 16:00:41 CST,25/0,0,LOG,00000,"duration: 20137.894 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36400","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",42490,"10.0.0.68:60644",66bdb2d6.a5fa,1,"COMMIT",2024-08-15 15:48:38 CST,63/0,0,LOG,00000,"duration: 18201.579 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36274","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",52468,"10.0.0.68:34266",66bdb6e0.ccf4,1,"COMMIT",2024-08-15 16:05:52 CST,30/0,0,LOG,00000,"duration: 20438.055 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.16:52796","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55877,"10.0.0.52:47198",66bdb8e6.da45,2,"SET",2024-08-15 16:14:30 CST,57/0,0,LOG,00000,"duration: 3843.296 ms statement: SET application_name='app - 10.0.0.4:38932';",,,,,,,,,"app - 10.0.0.4:38932","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55278,"10.0.0.70:59560",66bdb890.d7ee,1,"SET",2024-08-15 16:13:04 CST,43/0,0,LOG,00000,"duration: 20042.606 ms statement: SET application_name='app - 10.0.0.16:52848';",,,,,,,,,"app -10.0.0.16:52848","client backend"
```

From what I can check, the collector is using IPv6; IPv6 is enabled, and the stats are getting updated. We only log slow statements, and the first slow COMMIT took about 20 seconds, completing at **2024-08-15 16:22:09.253 CST**, which by calculation might have started before the first stats-collector log entry at **2024-08-15 16:21:59.048 CST**. We are unable to conclude whether the problem started with the stats collector or with the transactions, or what caused it. The issue resolves itself within 10-20 seconds.

**UPDATE** I have noticed that when the system is working normally there is no UDP socket for the postmaster if I run netstat -n -u -p. However, the files under the pg_stat_tmp directory are getting updated, and I can see the stats collector process in the process list. Why is there no visible UDP socket under the postmaster?
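As a side note for anyone debugging this: a quick way to see whether the collector is producing fresh numbers is to watch the collected counters move. A minimal sketch against the standard PostgreSQL 13 views:

```
-- run twice a few seconds apart while the database is busy; if the numbers
-- do not move, backends are reading stale collector output
SELECT datname, xact_commit, blks_read, stats_reset
FROM pg_stat_database
WHERE datname = current_database();
```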
goodfella (595 rep)
Aug 19, 2024, 04:05 AM • Last activity: Jun 27, 2025, 12:04 PM
2 votes
2 answers
235 views
Network connection from old application to SQL Server 2008 R2 lost once in a while
I have put this question on serverfault.com as well, but since the problem concerns connecting to SQL Server, I will also put it here. We are trying to migrate an Axapta 2.5 database from SQL Server 2000 to 2008 R2. We are testing performance etc. on the new server, but once in a while, these two events are thrown on the Axapta server:

> Object Server SandBOX: The database reported (session 11 (CSS)): [Microsoft][ODBC SQL Server Driver]Communication link failure. The SQL statement was: ""

and

> Object Server SandBOX: Fatal system condition - quitting for restart due to lost database connection.

after which all clients have to reconnect. The Axapta 2.5 server is a virtual server (VMware) running Windows 2003 SP2. Info from vSphere:

- VM Version: 7
- CPU: 1 vCPU
- Memory: 1024 MB
- VM Network: VMXNET 3

The SQL Server is also a virtual server (VMware) running Windows 2012 R2. Info from vSphere:

- VM Version: 8
- CPU: 4 vCPU
- Memory: 73728 MB
- VM Network: VMXNET 3

There are no traces of an error in either the event log on the SQL Server or in the SQL Server logs. When searching the internet for a solution, I found only one question with this exact problem, and it had no answer *(how to troubleshoot sql 2008 dropping connections on sqlservercentral.com)*. I mostly see suggestions around the virtual network adapter or settings on the virtual server, for more or less similar problems. I'm probably forgetting a lot of information; I'm actually a DBA and know very little about virtual machines and networking, so bear with me. Thanks for your help. Could anyone point me in a direction with this issue?
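One server-side place worth checking when the error log is silent: the connectivity ring buffer keeps recent connection-drop records in memory. A hedged sketch (works on SQL Server 2008 and later; the XML needs manual inspection):

```
-- recent connectivity events retained in memory; 'record' is XML text
SELECT [timestamp], CAST(record AS XML) AS record_xml
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY'
ORDER BY [timestamp] DESC;
```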
Carsten Hynne (111 rep)
Nov 11, 2014, 12:50 PM • Last activity: May 26, 2025, 06:09 AM
0 votes
1 answer
789 views
RMAN Duplicate PITR without connection to Target database
I would like to understand which network requirements are needed, following a least-privilege policy, in order to perform a DUPLICATE DATABASE UNTIL SCN in this scenario. My environment is Azure, and I have Oracle 12.2 and an RMAN catalog on 19c.

**Production Network (Subscription Azure Pro)**

- Server A -- Database A
- Server B -- Database B
- Server X -- Database X -- RMAN Catalog

Backups are taken on Servers A and B on a daily basis following an incremental strategy. The pieces are stored in a BLOB storage account which is presented as a local FS on each of the servers.

**QA Network (Subscription Azure QA)**

- Server C -- Database C
- Server D -- Database D

There is network connectivity at the moment between the Production subscription and the QA subscription, but it is blocked by a firewall. The idea is to request the necessary firewall rules in order to perform the RMAN DUPLICATE without connecting to the target databases (Servers A and B). I know it is possible, as the documentation states:

> If you are performing a backup based RMAN duplicate and using a recovery catalog as well, **it is not required to connect to the source database as TARGET in RMAN**. This technique is advantageous where network connections from the auxiliary host to the source database are restricted or prone to intermittent disruptions. **In duplication without a TARGET connection, the source database is unaffected by the duplication.**

If I follow this scenario, I would need at least two firewall rules between the subscriptions:

- Network connectivity between QA Server C and the RMAN catalog (ssh 22 and database port)
- Network connectivity between QA Server D and the RMAN catalog (ssh 22 and database port)

However, how are the backup pieces moved to the target servers? That is the part I don't get. If the backup pieces are stored on Servers A and B, should I also request firewall rules for the Production Servers A and B? I understand RMAN will know where the pieces are stored, but they are on Servers A and B. Either I have to move the pieces to the target servers and use the BACKUP LOCATION option, or I am missing something. I would appreciate some insights here, as I can't find anything in the documentation about how this technique actually restores the pieces. If I did not explain something right, let me know. Thank you.
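For what it's worth, in a targetless, catalog-based duplicate the auxiliary channels restore the pieces from storage the auxiliary host can read itself; nothing streams from Servers A/B over SQL*Net. So the usual options are to mount the same BLOB storage on Servers C/D at the paths recorded in the catalog, or to copy the pieces there and use BACKUP LOCATION without a catalog. A hedged sketch of the catalog variant (service names and SCN are hypothetical):

```
# no TARGET connection -- auxiliary instance and catalog only
rman AUXILIARY sys/***@DBC CATALOG rcat/***@RMANCAT

RMAN> DUPLICATE DATABASE dba TO dbc
2>   UNTIL SCN 1234567
3>   NOFILENAMECHECK;
```

Under that reading, the two firewall rules you listed would suffice, provided the backup pieces are visible on the QA servers through the mounted storage rather than over the network.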
Roberto Hernandez (143 rep)
Oct 5, 2021, 06:53 AM • Last activity: May 1, 2025, 10:04 AM
24 votes
3 answers
55424 views
SQL Server: Should we use TCP or Named Pipes or use the default?
When connecting to a SQL Server 2008 R2 from a .NET 4 client application on a different server in the same LAN, one can choose between three network protocol settings:

1. TCP
2. Named Pipes
3. Don't set anything in the connection string and use the default

What is best practice? What should we choose?

Additional information: Both TCP and Named Pipes are enabled on both the server and the client. The application is using database mirroring. Client and server communicate over a fast LAN. We are investigating this because we have rare and spurious connectivity and timeout problems. (But regardless of that, I'd like to know the best practice.) There is an article on this subject on MSDN, but it is very generic and vague. It does not advise or recommend anything useful.
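Whichever way the choice is made, the protocol a session actually negotiated is visible server-side, which makes it easy to verify what "the default" resolved to; a standard check:

```
-- typical values: 'Shared memory', 'TCP', 'Named pipe'
SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```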
usr (7390 rep)
Sep 12, 2012, 09:46 AM • Last activity: Mar 15, 2025, 12:54 AM
1 vote
1 answer
1356 views
PostgreSQL: Save roundtrips / Network performance
For my work I often have to run large SELECTs that load a lot of data. The connection uses the JDBC driver with a high fetch size. This is not a problem when working directly on the database server. However, when connecting from a different machine, the load times get terrible: it takes >10x longer to load the data. Note that the amount of data is actually small (100s of MB, 200k rows), so network bandwidth can't be the limiting factor. Therefore it's almost certainly latency, and hence round trips, that cause the bad performance. The latency theory seems confirmed by Wireshark: I see a huge amount of back-and-forth between the client and server (note: both are Windows). Similar to what was observed in this old topic. Are there any server-side or client-side (JDBC) settings that would reduce round trips and lower the latency costs?

EDIT: Remote: 20 min to load the data (ping ~120 ms). Local: 50 seconds (ping < 1 ms), where local means intranet and the DB server is in a building about 300 feet away. On the actual server it's also 50 seconds. EXPLAIN ANALYZE says 105 ms. I do understand the issue; I simply wasn't aware the impact was that big. I also wonder if it's a Windows issue, as there seem to be far too many ACKs after very little data when connecting from remote. I should probably ask about that in a network forum.
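On the pgJDBC side, the fetch size is the main lever: each fetch of N rows is one round trip, so a larger defaultRowFetchSize (or Statement.setFetchSize) means fewer trips. A hedged sketch of a connection URL, with illustrative values; note that defaultRowFetchSize only takes effect with autoCommit disabled, since it relies on a server-side cursor:

```
jdbc:postgresql://dbhost:5432/mydb?defaultRowFetchSize=50000&tcpKeepAlive=true
```

If the result set is fetched in one go anyway (autoCommit on), the remaining trips are per statement, and only batching or caching on the application side removes them.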
beginner_ (221 rep)
Feb 21, 2020, 03:40 PM • Last activity: Mar 6, 2025, 11:02 AM
0 votes
1 answer
66 views
Dealing with Oracle over high-latency links
We have stand-by application-servers in multiple datacenters -- in case we must fail-over in a hurry -- and would like to make _some_ use of them during the times of normalcy. The Oracle databases are replicated from the master-instances located in the currently-active datacenter to the slaves located in the redundant ones. The application -- provided by a vendor -- can only talk to _one_ database, so we cannot even direct read-only queries to the Oracle-replica closer to it. Thus, all of the remote application-servers have to talk to the master Oracle-instance, that's geographically far from them. The latency is substantial: tens of milliseconds instead of a fraction of one, when the app- and db-servers are on the same LAN. This slows down some of the operations _tremendously_ -- 20-minute jobs turn into hours-long ones... The vendor frowns at it and says, there is nothing to do -- they never tested their software in such a configuration, and make no effort in batching up requests, for example. The app is "chatty", talking to the DB all the time and awaiting responses before sending new ones. And, maybe, there is nothing to be done, but I'm wondering, if we can tune some settings in the tnsnames.ora, that may help ease the problem -- even if by a little bit... Can we?
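Not much in Oracle Net can hide per-round-trip latency, but a few settings reduce the number of packets per request and keep the pipe full. A hedged sketch with illustrative values (the alias and host are hypothetical):

```
# tnsnames.ora on the app server -- a larger SDU lets each round trip carry more
PROD =
  (DESCRIPTION =
    (SDU = 65535)
    (ADDRESS = (PROTOCOL = TCP)(HOST = db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prod))
  )

# sqlnet.ora -- larger socket buffers help throughput on high-latency links
DEFAULT_SDU_SIZE = 65535
RECV_BUF_SIZE = 4194304
SEND_BUF_SIZE = 4194304
```

These mostly help bulk transfers; for a chatty app the only real lever is fewer round trips, e.g. a larger client-side array fetch size, if the vendor exposes one.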
Mikhail T. (166 rep)
Mar 3, 2025, 04:18 PM • Last activity: Mar 5, 2025, 03:23 PM
1 vote
1 answer
43 views
IPv6: creating a table, loading data and retrieving it in MySQL using ip_poly
```
CREATE TABLE `ipv6_table` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `ip_poly` POLYGON NOT NULL,
  `start_network` BIGINT(20) NOT NULL DEFAULT '0',
  `end_network` BIGINT(20) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  SPATIAL KEY `idx_ip_poly` (`ip_poly`)
) ENGINE=InnoDB;

LOAD DATA LOCAL INFILE '/var/lib/mysql-files/19225.csv'
INTO TABLE ipv6_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(
  @start_network,
  @end_network
)
SET
  id = NULL,
  ip_poly = ST_GeomFromText(CONCAT(
    'POLYGON((',
    @start_network - 1, ' -1, ',
    @end_network + 1, ' -1, ',
    @end_network + 1, ' 1, ',
    @start_network - 1, ' 1, ',
    @start_network - 1, ' -1))'
  )),
  start_network = @start_network,
  end_network = @end_network;

SELECT * FROM ipv6_table
WHERE ST_Intersects(ip_poly, ST_GeomFromText(CONCAT('POINT(', 2306128953120655672, ' 0)')));
```
I have raw data in a CSV file like this:

```
2306128950956392448,2306128951224827903
2306128951224827904,2306128951241605119
2306128951241605120,2306128951493263359
2306128951493263360,2306128951510040575
2306128951510040576,2306128952030134271
2306128952030134272,2306128952046911487
```
This is how I'm trying to create a table for IPv6 data. I load the data from the CSV file into it, and then try to fetch a row using start_network, end_network, or a value in the range between them, but the query returns multiple rows. Can anyone help me with this?
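A note on the stated symptom: with this scheme, a point lookup should match exactly one row only if the loaded ranges never overlap. A quick way to test that hypothesis on the loaded table:

```
-- pairs of rows whose [start_network, end_network] ranges overlap;
-- any output here would explain multiple matches for a single value
SELECT a.id, b.id
FROM ipv6_table AS a
JOIN ipv6_table AS b
  ON a.id < b.id
 AND a.start_network <= b.end_network
 AND b.start_network <= a.end_network;
```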
Aravind (11 rep)
Feb 26, 2025, 10:17 PM • Last activity: Feb 27, 2025, 05:40 AM
1 vote
1 answer
2468 views
Why does the availability group resource go offline in WSFC?
Sometimes I find the AG cluster role in an offline state, which puts the AG into the RESOLVING state. It doesn't allow the application to access the databases during this time, and then after some time it comes back online. Why is that happening? If one of the cluster networks is down, does the cluster role go into an offline state?
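When the role next flaps, correlating the WSFC cluster log with what SQL Server itself recorded helps assign blame; a hedged sketch using the standard AG DMVs:

```
-- AG health as SQL Server sees it at this moment
SELECT ag.name,
       ags.primary_replica,
       ags.primary_recovery_health_desc,
       ags.synchronization_health_desc
FROM sys.availability_groups AS ag
JOIN sys.dm_hadr_availability_group_states AS ags
  ON ag.group_id = ags.group_id;
```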
Sree Lekha (15 rep)
Nov 7, 2020, 03:00 PM • Last activity: Feb 7, 2025, 08:10 AM
0 votes
1 answer
42 views
SQL Server Backup to Network Share Across Different Active Directory Domains Using Cross-Network Access
I have two separate Active Directory domains (FOO and BAA). A dedicated backup server running Windows exists in the FOO domain, and a SQL Server instance in the BAA domain needs to store backups on it. The BAA SQL Server reaches this machine (the NAS) via a dedicated network interface (192.168.137.1, crossover cable). The NAS has a shared folder: '\\192.168.137.1\Backup'. From Windows Explorer on the BAA SQL Server, I can create and delete files in this share. However, when I run the following backup script in SQL Server Agent:
```
DECLARE @DATA CHAR(255)
SELECT @DATA = N'\\192.168.137.1\db_backup_test.bak'
BACKUP DATABASE [test_database]
TO DISK = @DATA
WITH NOFORMAT, INIT,
NAME = N'FULL BACKUP',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
```
I get an error:

```
Executed as user: {I tried many users here}. Cannot open backup device '\\192.168.137.1\backup\Acesso.bak'. Operating system error 5 (Access is denied.).
```
I tried to grant access to users on the backup server (both in Sharing and Security), but since they are not from the same domain, the users do not appear. Since the administrator user of the SQL Server machine can create and delete files through Windows Explorer, I tried changing the logon account of the SQL Agent service to this user. I also tried assigning a drive letter to the path using net use z: \\192.168.137.1\backup (both with and without specifying users), but SQL Agent cannot find the Z: drive. Is there a way?
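One workaround commonly used across untrusted domains (a sketch; the account and password are hypothetical, and xp_cmdshell must be enabled): map the share inside the SQL Server context with explicit credentials, because a drive letter mapped in an interactive session is invisible to the service:

```
-- authenticate against the FOO-domain share, then back up over UNC
EXEC master..xp_cmdshell 'net use \\192.168.137.1\Backup /user:FOO\backupsvc P@ssw0rd';

BACKUP DATABASE [test_database]
TO DISK = N'\\192.168.137.1\Backup\db_backup_test.bak'
WITH INIT, STATS = 10;
```

The alternative without xp_cmdshell is mirrored local accounts: create the same username and password on both machines and run the SQL Server and Agent services under that account, so SMB pass-through authentication succeeds.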
LegsNotHands (1 rep)
Jan 6, 2025, 07:34 PM • Last activity: Jan 7, 2025, 01:22 PM
3 votes
1 answer
9915 views
Is there any way to pg_upgrade PostgreSQL 12 to 15 over the network?
My situation is a little bit complicated. I have an older PostgreSQL 12.9 installed on an older CentOS Stream 8 machine. There are about five custom databases created under that roof, say database1, ozssc, database5, owned by different roles to handle different business applications. As our business application has been updated recently, we plan to update the database as well. A newer machine with CentOS Stream 9 was set up, and PostgreSQL 15.2 was installed on it.

First, we tried to dump all database schemas and data by using the newer version (15.2) pg_dump from the newer machine:

```
pg_dump -h 10.0.1.105 -p 5433 -U postgres -v -n '*' -N 'pg_toast' -N 'information_schema' -N 'pg_catalog' $DB -Ft -f ${bkp_path}/${DB}_schema_bkp_${date}.tar
```

This dumps five XXX.tar files. Then I tried to restore them (using the same version, 15.2, of pg_restore) on the newer machine (CentOS Stream 9):

```
pg_restore -h 10.0.1.103 -p 5433 -U postgres -w -d $db_name $db_tar_file
```

Running those commands, the system responds with an error:

```
pg_restore: error: could not execute query: ERROR: unacceptable schema name "pg_temp_1"
DETAIL: The prefix "pg_" is reserved for system schemas.
Command was: CREATE SCHEMA pg_temp_1;
```

As a result, quite a lot of database settings are incorrect as well: pg_restore could not set the correct database owner from the dumped SQL statements. For example, the SQL statements for database ozssc are:

```
CREATE DATABASE ozssc WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE_PROVIDER = libc LOCALE = 'en_AU.UTF-8';
ALTER DATABASE ozssc OWNER TO tomcat;
```

Those statements should change the owner of database ozssc to tomcat. After restoration, I checked the owner of ozssc; it is still postgres instead of tomcat. Another significant error is that pg_dump does not seem to dump any extensions. For example, there are three extensions (citext, cube, and earthdistance) in the original ozssc database, but I find nothing about them in the dumped SQL.

As this pg_dump/pg_restore operation failed, I looked at pg_upgrade, but unfortunately pg_upgrade only supports same-host upgrades via different install directories. I tried a workaround: I logged in to the newer host (CentOS Stream 9, PostgreSQL 15.2 installed) and mounted the remote file system (CentOS Stream 8, PostgreSQL 12.9 installed) using fuse and fuse-sshfs:

```
sshfs postgres@10.0.1.105:/var/lib/pgsql/data /mntsshfs/pgsql/12/data
sshfs postgres@10.0.1.105:/usr/bin /mntsshfs/pgsql/12/bin
```

Then I ran the following command on the newer host:

```
pg_upgrade --old-datadir=/mntsshfs/pgsql/12/data --new-datadir=/usr/local/pgsql/data \
  --old-bindir=/mntsshfs/pgsql/12/bin --new-bindir=/usr/local/pgsql/bin \
  --old-options '-c config_file=/mntsshfs/pgsql/12/data/postgresql.conf' \
  --new-options '-c config_file=/usr/local/pgsql/data/postgresql.conf' --check
```

It responds with:

```
/mntsshfs/pgsql/12/bin/postgres: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
```

It seems /mntsshfs/pgsql/12/bin/postgres (the remotely mounted binary) is trying to find libssl.so.1.1 on the local machine, but no such version of libssl is installed on the newer host. I believe this is quite a common use case: a company migrating between PostgreSQL versions over the network, with the older and newer servers sitting on different hosts.
Could anyone advise how we can use pg_upgrade to migrate PostgreSQL 12 to 15 over the network instead of on the same host?
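pg_upgrade genuinely needs both sets of binaries and both data directories working on one host, so over a network the practical route stays dump-and-restore, piped so nothing lands on disk. A hedged sketch run from the new host (ports as in the question); as a side note, a schema-filtered dump with -n skips extensions and database-level attributes, which likely explains the missing pieces above:

```
# roles and tablespaces first
pg_dumpall -h 10.0.1.105 -p 5433 -U postgres --globals-only | psql -p 5433 -U postgres

# each database: -C re-creates it on the new side with its owner and settings
pg_dump -h 10.0.1.105 -p 5433 -U postgres -Fc -d ozssc \
  | pg_restore -p 5433 -U postgres -C -d postgres
```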
cidy.long (153 rep)
Mar 2, 2023, 11:13 PM • Last activity: Dec 26, 2024, 09:00 AM
3 votes
3 answers
10728 views
Connection to localhost dramatically slower than to network server
I have an Excel VBA application that connects to a SQL Server 2008 R2 database through ADODB. When I'm on the network at work, the connection is almost instantaneous. When I'm at home working with a local copy of the database, the connection takes about 5 seconds. At first I thought the performance hit had to do with the queries, and even posted a question on Stack Overflow with that incorrect assumption: https://stackoverflow.com/q/10467876/138938 The connection is just as slow locally whether I specify "localhost" or ".". It seems like my code is having a hard time resolving where exactly my localhost is. Is there something I should do with my hosts file, or some other setting I can modify, to speed up the connection on my local machine? What else should I look at to troubleshoot? Thanks!
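One classic cause of a flat ~5 second penalty is the client probing name resolution or a transport that isn't listening (often IPv6 ::1 for "localhost") before falling back. A hedged test is to bypass resolution and force a transport explicitly in the connection string (database name is hypothetical):

```
Provider=SQLNCLI10;Data Source=lpc:.;Integrated Security=SSPI;Initial Catalog=MyDb
Provider=SQLNCLI10;Data Source=tcp:127.0.0.1,1433;Integrated Security=SSPI;Initial Catalog=MyDb
```

The first forces shared memory (no name resolution at all); the second forces TCP to the IPv4 loopback, skipping the localhost lookup. If either connects instantly, the delay is resolution/protocol fallback rather than the queries.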
Head of Catering (659 rep)
May 21, 2012, 04:29 PM • Last activity: Nov 2, 2024, 01:43 PM
2 votes
1 answer
30 views
How do I store time-varying follower-following relationship?
Assume we have a small social media app containing only 4 people, whose UID = 11, 22, 33, 44 respectively. These people follow and unfollow each other over time, and say I want to take note of their follower relationships on a weekly basis. In other words, I want to store a temporal network relationship.

Week 1, their follower network looks like:

| primary serial | user own UID | followed UID |
|---|---|---|
| 1 | 11 | 22 |
| 2 | 11 | 33 |
| 3 | 22 | 33 |
| 4 | 11 | 44 |

Week 2, the change being that user 11 unfollowed 22, and user 44 followed 22:

| primary serial | user own UID | followed UID |
|---|---|---|
| 1 | 11 | 33 |
| 2 | 22 | 33 |
| 3 | 11 | 44 |
| 4 | 44 | 22 |

Finally, in Week 3, user 11 re-followed user 22:

| primary serial | user own UID | followed UID |
|---|---|---|
| 1 | 11 | 22 |
| 2 | 11 | 33 |
| 3 | 22 | 33 |
| 4 | 11 | 44 |
| 5 | 44 | 22 |

The three tables above are each for one specific week, but storing these week-tables creates redundancy, since some relations remain unchanged across time:

| primary serial | user own UID | followed UID |
|---|---|---|
| ... | 11 | 33 |
| ... | 22 | 33 |
| ... | 11 | 44 |

- So I wonder if there is a good way of keeping a record of the time-varying follower-following relationship across time, in 1 table rather than 3?
- And given such a new (less redundant) table, can I replicate the follower network structure of a given week directly (that is, get the Week N network without needing to compute the Week 1 to Week N-1 networks)?
- Should I add a column of week-specific labels? Should I add a boolean column to indicate whether or not the row's relation still exists?

Any help would be highly appreciated! Thank you so much in advance!
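One common design (a sketch, not the only option) stores each follow "episode" once with a validity interval, which answers all three bullets: one table, direct as-of queries, and no boolean flag needed:

```
-- one row per follow episode; an open interval (NULL) means still following
CREATE TABLE follows (
  follower_uid INT NOT NULL,
  followed_uid INT NOT NULL,
  from_week    INT NOT NULL,
  to_week      INT NULL,
  PRIMARY KEY (follower_uid, followed_uid, from_week)
);

-- the whole network as of week :n, with no replay of earlier weeks
SELECT follower_uid, followed_uid
FROM follows
WHERE from_week <= :n
  AND (to_week IS NULL OR to_week > :n);
```

An unfollow sets to_week; a re-follow inserts a new row. The example above then needs only six rows in total: (11,22,1,2), (11,33,1,NULL), (22,33,1,NULL), (11,44,1,NULL), (44,22,2,NULL), and (11,22,3,NULL) for the Week 3 re-follow.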
Greenhill (121 rep)
Oct 9, 2024, 08:49 PM • Last activity: Oct 10, 2024, 06:21 AM
0 votes
2 answers
83 views
Is it a constraint on database client-server deployment (Firebird, PostgreSQL, Oracle) that it should not run over WiFi (due to unreliability)?
Database client-server connectivity works like this: there is a client-side DLL that sits on the client computer (or device; it can be a mobile phone as well). For Windows OS based clients:

- fbclient.dll, gds32.dll - Firebird
- libpq.dll - PostgreSQL
- oci.dll - Oracle

(of course, these DLLs can have dependencies). The usual scenario is: the application (it may be the fat client of a client-server application) loads the DB client library (fbclient.dll, oci.dll, etc.) into its process space and calls DLL functions. The DLL, in turn, establishes a connection with the server process (e.g. the Firebird server process on port 3050, the PostgreSQL server process on port 5432, the Oracle TNS Listener on port 1521) and keeps this connection open for the duration of the DB client-server session. E.g. a user can authenticate, start a connection, and issue multiple SQL commands over that connection.

I understand that today client-server DB communication usually happens in an exclusively wired environment: either the DB server and web/application server sit on the same machine, or they are on different machines connected by a very fast and reliable wired network. It was different in the early days of IT, when client-server architecture was quite common: client machines ran fat clients of the application, used database DLLs on their machines, and those DLLs communicated with the DB services on the server machine. Such applications are still in use, and maybe there are other scenarios where client-server DB connectivity can happen even today.

So, all that was background. **My question is: can database client-server connectivity be deployed over WiFi networks** *(or over networks which have a WiFi link, e.g. an employee laptop connects to a WiFi access point, and that access point has a wired connection to the database server - a pretty common setting both in the office and in home-office situations)* **or is it common DBA understanding that no client-server connectivity can involve a WiFi link?**

My colleagues adhere to the quite strict rule that none of our deployments may involve client-server DB connectivity over WiFi. I have seen that such deployments are possible, though. I cannot believe that such a constraint can really be in force. Are WiFi networks really so unreliable, even though they are quite popular?
TomR (101 rep)
Aug 29, 2024, 06:26 AM • Last activity: Sep 8, 2024, 03:30 AM
5 votes
3 answers
15213 views
Full backup to network share failed due to unknown network error 59
I am recently seeing Unknown Network Error 59 on some of the servers that save full backups to a file share. The backups happen at different times. Ola Hallengren's scripts are being used, and the SQL Server editions in use are 2012 and 2014 Enterprise Edition. I see suggestions to create a DWORD named 'SessTimeout' with value 360 in HKLM\SYSTEM\CurrentControlSet\Services\lanmanworkstation\parameters. My question is: should we create this on the target server (the file share) or on the server where SQL Server resides?
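For what it's worth, the LanmanWorkstation parameters configure the SMB *client*, i.e. the machine that opens the connection to the share, which here is the SQL Server host writing the backup, not the file server. A hedged sketch of the registry change (value per the suggestion above; the Workstation service must be restarted, or the machine rebooted, for it to apply):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v SessTimeout /t REG_DWORD /d 360 /f
```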
SQL_NoExpert (1117 rep)
Apr 10, 2018, 01:36 PM • Last activity: Jul 26, 2024, 07:29 PM
0 votes
1 answer
201 views
Connection string for a successful SSMS connection
In SSMS v19 I can successfully connect/log in to a SQL Server that is listening on the default port 1433: Server : server.env.domain.com,1433 Authentication : SQL Auth. And I can query any of the databases hosted on this server. Does SSMS reveal the connection string it is using to make this connection? I've searched quite a bit and can't find it. Or is there an API I can call that would return the exact connection string used by SSMS? I ask because I am unable to connect via another app located on the same client machine as SSMS. I would like to know exactly how SSMS is making its connection.
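As far as I know, SSMS doesn't surface a literal connection string, but the dialog fields map one-to-one onto a standard SqlClient string; a hedged reconstruction of the connection above (credential placeholders left blank, and the encryption keys depend on the Connection Properties tab, so treat them as assumptions):

```
Server=server.env.domain.com,1433;User ID=<login>;Password=<password>;Encrypt=True;TrustServerCertificate=True
```

Server-side, `SELECT auth_scheme, net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;` run from an SSMS query window shows how that session actually connected.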
BaltoStar (99 rep)
Jun 24, 2024, 12:58 AM • Last activity: Jul 4, 2024, 02:35 PM
0 votes
0 answers
318 views
Pgbouncer: got packet 'E' from server when not linked
Does anyone know what the error below means? We are running pgbouncer 1.21 on CentOS using transaction pooling mode.

```
2024-04-29 15:00:02.378 CST [9680] WARNING S-0x197b028: db/user_a@10.0.0.1:5432 got packet 'E' from server when not linked
2024-04-29 15:00:02.379 CST [9680] LOG S-0x197b028: db/user_a@10.0.0.1:5432 closing because: server conn crashed? (age=2409s)
```

We started seeing this warning after enabling min_pool_size.
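A plausible reading (an assumption, not a diagnosis): min_pool_size makes pgbouncer open server connections that then sit unlinked to any client, and if something on the server side later terminates such an idle session (an idle timeout, a restart, an admin kill), the ErrorResponse ('E') packet arrives while the connection is not linked, which is exactly this warning. Settings that bound the lifetime of those idle connections, with illustrative values:

```
; pgbouncer.ini -- recycle pooled server connections before the server kills them
[pgbouncer]
pool_mode = transaction
min_pool_size = 10
server_idle_timeout = 600   ; close server conns idle longer than 10 minutes
server_lifetime = 3600      ; retire any server conn after an hour
```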
goodfella (595 rep)
Apr 29, 2024, 07:31 AM
0 votes
1 answer
108 views
PostgreSQL & SQLAlchemy: Minimizing Impacts of latency with chatty applications
Are there any ways to minimize the impact of latency between the application and the database when running PostgreSQL & SQLAlchemy ORM? Changing loading strategies and preloading properties has been effective - just generally reducing the number of statements between app & database. Are there any other techniques that can be used to minimize the impact of that connection overhead?

In the Grafana graphs below, I simulated 10 ms of latency between application and database using the tc traffic-control module. At the midway point, I removed this artificial latency, resulting in about 300 microseconds of latency when timing SELECT 1; between the application and database (localhost loopback interface - database inside the same container as the application).

(Grafana graph: 10 ms vs 300 μs latency between app & db)

This seems to indicate latency sensitivity in the measured process / API endpoints.
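Beyond ORM loading strategies, the wins come from collapsing N statements into one round trip wherever the access pattern allows; a generic SQL illustration (table and columns are hypothetical):

```
-- one round trip instead of three single-row INSERTs
INSERT INTO events (id, payload)
VALUES (1, 'a'), (2, 'b'), (3, 'c');

-- read-modify-write collapsed into a single statement
UPDATE counters SET hits = hits + 1 WHERE page_id = 42 RETURNING hits;
```

Both patterns are expressible from SQLAlchemy Core, and each one saved is a full network round trip at your measured 10 ms.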
Justin Lowen (68 rep)
Apr 22, 2024, 05:40 PM • Last activity: Apr 24, 2024, 05:30 AM
0 votes
1 answer
42 views
Network utilization maxed out during backup. Is it bad in itself?
We have a cluster of two Oracle DBs. A monitoring software agent is installed on both machines, and overnight, when backups are happening, we get warnings that network interface utilisation on the servers is above 90% - I'm sure it gets maxed out. Is this an issue in itself, or do you, experienced DBAs, allow DB servers to occasionally max out the network interface bandwidth? If not, what are the known issues this could cause? Thank you in advance!
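If the saturation itself causes trouble for other traffic (and assuming the overnight jobs are RMAN disk backups, which is an assumption, since the question doesn't say), RMAN can throttle its own read rate per channel so the NIC keeps headroom:

```
-- illustrative cap; size it so total backup throughput stays below the NIC limit
CONFIGURE CHANNEL DEVICE TYPE DISK RATE 80M;
```

The trade-off is a correspondingly longer backup window.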
Tony Sepia (144 rep)
Apr 6, 2024, 11:58 AM • Last activity: Apr 6, 2024, 02:16 PM
2 votes
1 answer
1031 views
Why does changing the order of network protocols not have any effect?
All I want to do is connect to SQL Server through Named Pipes **without** using an explicit prefix like *NP:ServerName*, because that approach *works*. **I want to connect through Named Pipes using just the ServerName** and give SQL Server a chance to choose the right network protocol. I have **3** VMs. What I did, step by step:

1. In **Server Configurations** of **SQL Server A** I enabled all network protocols.
2. I am also familiar with the Order column; however, changing it appears to have no effect. I swapped the order of **TCP/IP** and **Named Pipes**.
3. I tried to connect from **SQL Server B** to **SQL Server A** using the **Computer Name** (**WIN-VKHOKLJ3IJG**) and hoped that it would use the Named Pipes protocol, but unfortunately it chose TCP/IP as the connection protocol.

Could someone explain why TCP/IP was chosen instead of Named Pipes? According to the order, SQL Server should choose Named Pipes, but it preferred to use TCP/IP.
Rauf Asadov (1313 rep)
Jan 3, 2020, 06:12 PM • Last activity: Mar 13, 2024, 03:16 PM