
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
2361 views
Oracle - Delete dmp files from ASM diskgroup
I want to delete my export files from the ASM diskgroup daily with a crontab job, and I want to prepare a script for it.

```
ASMCMD> ls
exp1.dmp
exp2.dmp
exp3.dmp
exp4.dmp
ASMCMD> pwd
+DATA/EXP
```

How can I prepare this script? I prepared a template for it, but I couldn't script it:

```
set_gridenv
asmcmd
cd +DATA/EXP
rm -rf exp*.dmp
exit
```
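For a cron job, the interactive ASMCMD> session can usually be replaced by feeding asmcmd its commands on stdin. A minimal sketch, assuming the question's `set_gridenv` is a shell function made available by sourcing the grid user's profile (the profile path is an assumption; adjust for your environment):

```shell
#!/bin/sh
# Hypothetical cron cleanup script for +DATA/EXP dump files.
# Assumption: sourcing the grid user's profile defines set_gridenv,
# which sets ORACLE_HOME, PATH and the +ASM ORACLE_SID.
. /home/grid/.profile
set_gridenv

# asmcmd reads commands from stdin, so a here-document replaces the
# interactive session shown in the question:
asmcmd <<'EOF'
cd +DATA/EXP
rm -f exp*.dmp
EOF
```

A crontab entry would then just invoke this script daily, e.g. `0 2 * * * /home/grid/bin/clean_exp.sh`, with the caveat that cron's minimal environment is exactly why the profile must be sourced inside the script.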
jrdba (55 rep)
Sep 21, 2020, 10:45 AM • Last activity: Aug 6, 2025, 07:08 PM
0 votes
1 answer
25 views
Create initial Informix dbspace on a new server from script
I'm able to create a new dbspace using the semi-GUI tool onmonitor (running on a headless Linux server). But how can I do this from the command line / a script? It seems you can use `onspaces`, like:

```
onspaces -c -d $DBSname -p $DBSfile -o 0 -s 153600
```

But AFAIK it requires the IDS server to be running in one of these modes: online, quiescent, or administration - and it only seems possible to start the server if a dbspace already exists. Or am I missing something? Can you really only set up a new server using the init menu in onmonitor?
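For context, `onspaces` also expects the chunk file to exist with the right ownership before it will accept it. A hedged sketch, reusing the question's `$DBSname`/`$DBSfile` variables (the informix user/group and 660 mode are the conventional values, stated here as assumptions):

```shell
# Prepare the chunk file first -- onspaces typically refuses a file
# with the wrong ownership or permissions:
touch "$DBSfile"
chown informix:informix "$DBSfile"
chmod 660 "$DBSfile"

# With the server at least in quiescent mode, create the dbspace:
onspaces -c -d "$DBSname" -p "$DBSfile" -o 0 -s 153600
```

As for the chicken-and-egg concern: on a brand-new instance the *root* dbspace is normally created by disk initialization (`oninit -i`, driven by the ROOTPATH/ROOTSIZE settings in the ONCONFIG file), not by `onspaces`; `onspaces` is for additional dbspaces once the server is up.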
MrCalvin (203 rep)
Jul 27, 2025, 07:53 PM • Last activity: Aug 3, 2025, 07:55 PM
5 votes
1 answer
1521 views
Full database backup using xtrabackup stream
I'm new to Percona XtraBackup. I've been trying to stream a full backup from my local machine (*test server, around 600 GB*) to a remote server. I just have some questions and need guidance, and I think this is the best place. I have this command, which I executed locally:

```
innobackupex --user=user --password=password --stream=tar /which/directory/ | pigz | ssh user@10.11.12.13 "cat - > /mybackup/backup.tar.gz"
```

My questions are:

- **My log scan is not changing / increasing.**

  ```
  log scanned up to (270477048535)
  log scanned up to (270477048535)
  log scanned up to (270477048535)
  log scanned up to (270477048535)
  log scanned up to (270477048535)
  ```

  I've read a comment before where someone said the log scan will not increase if no one is using the database. (Yes, no one is using the database.)

- **It's been running for a while.** I've previously used xtrabackup against a local test server of around 1.7 TB and it finished in just a few hours. Is it slow because I'm using a stream? What is the purpose of "/which/directory/" in my command? Is it going to store the file in /which/directory/ first and then transfer it to my remote server? Why do I have to specify a directory?

- **No file is created on my local server in /which/directory/ or on my remote server in /mybackup/.** Am I doing something wrong?

Is there a much easier way to perform this? My only goal is to back up my local database to a remote server; I'm streaming because I don't have enough disk space to store the backup locally. I'm using MariaDB 5.5 and Percona XtraBackup 2.2.
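On the directory question: with `--stream=tar`, the directory argument is only a scratch location for small metadata files such as `xtrabackup_checkpoints`; the actual data goes to stdout and through the pipe. A hedged sketch of the same pipeline with that made explicit (hosts, paths and credentials are the question's own placeholders):

```shell
# /tmp/xtrabackup-scratch/ is only used for metadata files; the backup
# itself is streamed through the pipe, so no large local file appears:
innobackupex --user=user --password=password --stream=tar /tmp/xtrabackup-scratch/ \
  | pigz \
  | ssh user@10.11.12.13 "cat - > /mybackup/backup.tar.gz"

# Note for later: tar archives produced by the xtrabackup stream must be
# unpacked with tar's -i (ignore zero blocks) option on the remote side:
#   tar -xizf backup.tar.gz -C /restore/dir
```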
Thantanenorn (51 rep)
Mar 27, 2018, 01:45 AM • Last activity: Jul 31, 2025, 03:05 PM
2 votes
1 answer
5416 views
How to restore database from /var/lib/mysql
I have a database folder, called gain_wordpress, from a crashed server. I'm trying to put that database on a new server, but I'm not able to do it successfully. I don't have a .sql dump; all I have is the database folder from the crashed server, with ibdata and the two log files. I copied the gain_wordpress folder to /var/lib/mysql on the new server, but when I do a `show tables` on the database I get the following error:

```
ERROR 1018 (HY000): Can't read dir of './gain_wordpress/' (errno: 13)
```

I also changed the permissions of the folder with `chown mysql:mysql gain_wordpress -R`, and it still didn't take care of the problem. What is the best way to recover gain_wordpress? Do I need to replace any of the ibdata or log files? I have the corresponding .frm, .MYD and .MYI files for each table. Please advise.
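Since errno 13 is a plain OS-level "permission denied", and .frm/.MYD/.MYI triplets indicate MyISAM tables (which are copy-restorable without ibdata), the usual checklist before any InnoDB surgery is ownership, directory mode, and SELinux/AppArmor labels. A minimal sketch, assuming a standard /var/lib/mysql datadir:

```shell
# Ownership and mode on the copied database directory:
sudo chown -R mysql:mysql /var/lib/mysql/gain_wordpress
sudo chmod 750 /var/lib/mysql/gain_wordpress

# On SELinux systems (RHEL/CentOS), copied files keep their old label,
# which also produces errno 13 -- relabel them:
sudo restorecon -R /var/lib/mysql/gain_wordpress

# Then restart the server and retry SHOW TABLES:
sudo systemctl restart mysql
```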
unixmiah (156 rep)
Mar 27, 2015, 02:58 PM • Last activity: Jul 29, 2025, 10:10 PM
1 vote
1 answer
8731 views
postgresql: Invalid data directory | Can't open PID file /var/run/postgresql/10-main.pid (yet?) after start: No such file or directory
This error shows after running `sudo systemctl status postgresql@10-main.service`:

```
postgresql@10-main.service - PostgreSQL Cluster 10-main
   Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
   Active: failed (Result: protocol) since Tue 2020-12-01 14:21:40 UTC; 2s ago
  Process: 1603 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 10-main start (code=exited, sta

Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: Starting PostgreSQL Cluster 10-main...
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 postgresql@10-main: Error: Invalid data directory
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: postgresql@10-main.service: Can't open PID file /va
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: postgresql@10-main.service: Failed with result 'pro
Dec 01 14:21:40 ubuntu-s-1vcpu-2gb-sgp1-01 systemd: Failed to start PostgreSQL Cluster 10-main.
```

I have tried all the solutions out there. I can't even take a backup with pg_dump; I get the following error while running `pg_dump database_name > database_backup.bak`:

```
pg_dump: [archiver (db)] connection to database "database_name" failed: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
```

Here are the permissions of /var/lib/postgresql/10/main:

```
-rwx------ 1 postgres postgres    3 Apr 28  2020 PG_VERSION
drwx------ 7 postgres postgres 4096 Nov  9 23:36 base
drwx------ 2 postgres postgres 4096 Nov 29 19:28 global
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_commit_ts
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_dynshmem
drwx------ 4 postgres postgres 4096 Nov 29 20:18 pg_logical
drwx------ 4 postgres postgres 4096 Apr 28  2020 pg_multixact
drwx------ 2 postgres postgres 4096 Nov 29 19:27 pg_notify
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_replslot
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_serial
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_snapshots
drwx------ 2 postgres postgres 4096 Nov 29 20:18 pg_stat
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_stat_tmp
drwx------ 2 postgres postgres 4096 Sep  2 00:07 pg_subtrans
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_tblspc
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_twophase
drwx------ 3 postgres postgres 4096 Nov 28 01:33 pg_wal
drwx------ 2 postgres postgres 4096 Apr 28  2020 pg_xact
-rwx------ 1 postgres postgres   88 Apr 28  2020 postgresql.auto.conf
-rwx------ 1 postgres postgres   95 Nov 29 19:27 postmaster.opts
```

And here is the output after running `sudo -u postgres psql`:

```
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
```

Please assist me in fixing this. Any kind of help will be appreciated. Thanks in advance.
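On Debian/Ubuntu, the "Invalid data directory" message comes from the `pg_ctlcluster` wrapper, which checks the data directory's existence, ownership and mode before starting anything. A hedged first-steps sketch for this 10/main cluster:

```shell
# Wrapper overview of all clusters and their state/data directories:
pg_lsclusters

# pg_ctlcluster insists on postgres-owned, mode-700 data directory:
sudo chown -R postgres:postgres /var/lib/postgresql/10/main
sudo chmod 700 /var/lib/postgresql/10/main

# Start via the same wrapper systemd uses, but watch its error directly:
sudo pg_ctlcluster 10 main start
```

The psql/pg_dump "No such file or directory" errors are a consequence, not a cause: no server is running, so no socket file exists to connect to.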
Farid Chowdhury (119 rep)
Dec 1, 2020, 04:10 PM • Last activity: Jul 27, 2025, 02:04 AM
1 vote
2 answers
721 views
RHEL 7.4, heavy workload(s) and SWAP usage
This might not be the correct board for this, but since I'm a DBA I'll ask here. I have several RHEL 7.4 servers running a mix of MariaDB (10.1, 10.2) and MongoDB (3.4). The problem was happening with RHEL 7.3 as well. All of these servers have 256 GB of memory with local SSD array storage, and even under heavy workloads the highest active in-use memory footprint is less than 100 GB at any one time. I've been profiling this for quite some time: on each of these servers, even though there is always plenty of free memory available, the systems are incrementally going into swap. I've tried setting the vm.swappiness value to 1, but incremental jumps in swap usage are still happening. Is this happening to anyone else? Does anyone know whether, with a large amount of memory available, setting swappiness to 0 has ill effects? Thanks
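For reference, this is how the setting is usually applied both at runtime and persistently, plus a quick way to see which processes actually hold swap. (On RHEL 7 era kernels, swappiness=0 makes the kernel avoid swap until memory is nearly exhausted, at which point the OOM killer may fire instead of swapping - which is why 1 is the common floor for database hosts. A sketch, not a recommendation.)

```shell
# Runtime change:
sudo sysctl vm.swappiness=1

# Persistent across reboots:
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-db-swappiness.conf
sudo sysctl --system

# Which processes hold the most swap right now:
grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k2 -n | tail
```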
zstokes (11 rep)
Jan 12, 2018, 09:09 PM • Last activity: Jul 25, 2025, 02:00 AM
-2 votes
1 answer
42 views
Not able to start Mysql located into many places
How can I start MySQL on Ubuntu? I ran `whereis` and there is more than one place where mysql is located:

```
(base) myuser@mypc:~/Documents/mysql-tutorial$ whereis mysql
mysql: /usr/bin/mysql /usr/lib/mysql /etc/mysql /usr/share/mysql /home/myuser/anaconda3/bin/mysql /usr/share/man/man1/mysql.1.gz
```

Running `systemctl status mysql`, the message is:

```
○ mysql.service - MySQL Community Server
     Loaded: loaded (/usr/lib/systemd/system/mysql.service; disabled; preset: enabled)
     Active: inactive (dead)
```
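The `whereis` hits are mostly data/config directories and the Anaconda copy of the *client* binary, not separate servers; the status output just says the systemd unit is stopped and disabled. A minimal sketch of bringing it up (assuming the Ubuntu mysql-server package provides the unit shown):

```shell
sudo systemctl start mysql       # start the server now
sudo systemctl enable mysql      # start it automatically at boot
systemctl status mysql           # should now report "active (running)"
mysql -u root -p                 # connect with the client
```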
Ângelo Rigo (107 rep)
Jul 21, 2025, 11:51 PM • Last activity: Jul 23, 2025, 11:56 PM
0 votes
1 answer
35 views
Clickhouse - Oracle ODBC connection error
I am trying to create a connection between my Oracle and ClickHouse databases, so I can query Oracle through CH like this: `SELECT * FROM odbc('DSN=OracleODBC-21', 'sys', 'test')`. I have successfully installed unixODBC, Oracle Instant Client, and the Oracle ODBC client driver. I also configured my `.odbc.ini` and `odbcinst.ini`, so I can access Oracle:

```
[oracle@host ~]$ isql -v OracleODBC-21
+---------------------------------------+
| Connected!                            |
...
SQL> select * from sys.test;
+-----------------------------------------+-----------------------------------------------------------------------------------------------------+
| ID                                      | DATA                                                                                                |
+-----------------------------------------+-----------------------------------------------------------------------------------------------------+
| 0                                       | 123                                                                                                 |
+-----------------------------------------+-----------------------------------------------------------------------------------------------------+
```

The `clickhouse` user can also do this, but only with some environment variables set:

```
[oracle@host ~]$ sudo -u clickhouse bash -c "export LD_LIBRARY_PATH=/opt/oracle/instantclient_21_19; isql -v OracleODBC-21"
+---------------------------------------+
| Connected!                            |
...
```

But when I try to query Oracle in CH:

```
host :) select * from odbc('DSN=OracleODBC-21','sys','test');

SELECT *
FROM odbc('DSN=OracleODBC-21', 'sys', 'test')

Query id: d263cc54-bd51-4a97-94c0-085177149947


Elapsed: 9.529 sec.

Received exception from server (version 25.6.2):
Code: 86. DB::Exception: Received from localhost:9000. DB::HTTPException. DB::HTTPException: Received error from remote server http://127.0.0.1:9018/columns_info?use_connection_pooling=1&version=1&connection_string=DSN%3DOracleODBC-21&schema=sys&table=test&external_table_functions_use_nulls=1 . HTTP status code: 500 'Internal Server Error', body length: 267 bytes, body: 'Error getting columns from ODBC 'std::exception. Code: 1001, type: nanodbc::database_error, e.what() = contrib/nanodbc/nanodbc/nanodbc.cpp:1275: IM004: [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed  (version 25.1.5.31 (official build))'
'. (RECEIVED_ERROR_FROM_REMOTE_IO_SERVER)
```
Will be grateful for any advice.
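IM004 on SQLAllocHandle(SQL_HANDLE_HENV) usually means the driver's shared libraries could not be loaded. The `sudo -u clickhouse` test only worked because LD_LIBRARY_PATH was exported, and the odbc-bridge process spawned by the server does not inherit that. A hedged sketch of two common fixes (the unit name `clickhouse-server` and the Instant Client path are taken from the question as assumptions):

```shell
# Option A: register the Instant Client libraries system-wide, so no
# LD_LIBRARY_PATH is needed at all:
echo /opt/oracle/instantclient_21_19 | sudo tee /etc/ld.so.conf.d/oracle-instantclient.conf
sudo ldconfig

# Option B: hand the variable to the server process via a systemd drop-in:
sudo systemctl edit clickhouse-server
# In the editor, add:
#   [Service]
#   Environment="LD_LIBRARY_PATH=/opt/oracle/instantclient_21_19"

sudo systemctl restart clickhouse-server
```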
pashkin5000 (101 rep)
Jul 22, 2025, 05:58 AM • Last activity: Jul 22, 2025, 06:14 PM
0 votes
1 answer
155 views
Completeness of Patch Population
I'm performing an audit of Oracle database changes; the database is on a Linux OS. After using the command **opatch lsinventory**, we learned that this command has not been pulling the complete population of patches on the database: it is only showing the last patch applied, contrary to many online definitions, which say it lists ALL patches. What could be causing this? What's a foolproof way to pull all changes (patch, schema, table, etc.) applied? Thank you all!
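One cross-check worth doing: lsinventory reports only what is recorded in the inventory of the ORACLE_HOME it runs from, so combining the OPatch views with the database-side patch registry gives a fuller picture. A hedged sketch (the SQL view is for 12c and later; on 11g the analogous view is `dba_registry_history`):

```shell
# Inventory as OPatch sees it (lspatches requires a reasonably recent OPatch):
$ORACLE_HOME/OPatch/opatch lsinventory -detail
$ORACLE_HOME/OPatch/opatch lspatches

# What the database itself has recorded about applied patches:
sqlplus -s / as sysdba <<'EOF'
SELECT patch_id, action, status, action_time
FROM   dba_registry_sqlpatch
ORDER  BY action_time;
EOF
```

If the two disagree, the usual suspects are multiple ORACLE_HOMEs (each with its own inventory) or a wrong/overridden central inventory pointer in oraInst.loc.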
PlatosSpaghetti (1 rep)
Oct 9, 2017, 10:44 PM • Last activity: Jul 22, 2025, 04:06 AM
2 votes
1 answer
3113 views
I have 600% high CPU usage mysqld
I have insane CPU usage from mysqld, +600%. Here is my my.cnf file:

```
# Example MySQL config file for medium systems.
#
# This is for a system with little memory (32M - 64M) where MySQL plays
# an important part, or systems up to 128M where MySQL is used together with
# other programs (such as a web server)
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /opt/lampp/var/mysql) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /opt/lampp/var/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
default-character-set=utf8mb4

[mysqld]
user=mysql
port=3306
socket = /opt/lampp/var/mysql/mysql.sock
key_buffer=64M
max_allowed_packet=10M
table_open_cache=64
sort_buffer_size=2M
net_buffer_length=8K
read_buffer_size=256K
read_rnd_buffer_size=512K
myisam_sort_buffer_size=8M
query_cache_type=1
query_cache_limit=512K
query_cache_size=0
query_cache_min_res_unit=256
innodb_thread_concurrency=0
innodb_log_file_size=128M

# Custom changes
max_connections = 400
query_cache_limit = 32M
query_cache_size = 0
innodb_buffer_pool_size = 20G
innodb_buffer_pool_instances = 10
innodb_log_file_size = 1G
innodb_log_buffer_size = 5G
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 1000
innodb_io_capacity_max = 40000
innodb_flush_method = O_DIRECT
innodb_read_io_threads = 8
innodb_write_io_threads = 4
thread_cache_size = 100
innodb_lru_scan_depth = 100
innodb_read_io_threads = 64
innodb_write_io_threads = 64
open_files_limit = 4096
table_open_cache = 2048
join_buffer_size = 128K
read_buffer_size = 128K
log_warnings = 2
key_cache_age_threshold = 64800
innodb_change_buffer_max_size = 10
max_write_lock_count = 16
# End custom changes

# Where do all the plugins live
plugin_dir=/opt/lampp/lib/mysql/plugin/

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
#skip-networking

# Replication Master Server (default)
# binary logging is required for replication
# log-bin deactivated by default since XAMPP 1.4.11
#log-bin=mysql-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1

# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=, MASTER_PORT=,
#    MASTER_USER=, MASTER_PASSWORD= ;
#
#    where you replace , , by quoted strings and
#    by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the master.info file, unless you shutdown
#    the slave server, delete master.info and restart the slaver server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host =
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user =
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password =
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port =
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 4M
#bdb_max_lock = 10000

# Comment the following if you are using InnoDB tables
#skip-innodb
innodb_data_home_dir=/opt/lampp/var/mysql/
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir=/opt/lampp/var/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size=512M
# Deprecated in 5.6
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size=5M
#innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
innodb_lock_wait_timeout=50
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci

[mysqldump]
max_allowed_packet=10M

[mysql]
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer=20M
sort_buffer_size=20M
read_buffer=2M
write_buffer=2M

[myisamchk]
key_buffer=20M
sort_buffer_size=20M
read_buffer=2M
write_buffer=2M

[mysqlhotcopy]

!include /opt/lampp/mysql/my.cnf
```

I don't know if these are the right settings to use for better performance; I'm using WordPress. I'm experiencing some problems with my MySQL server: when I run the `top` command, it says the mysql process has taken 350-600% of the CPU. I'm running Ubuntu 18.04 with 32 GB of RAM.
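Before tuning further, it usually pays to find out *which* queries burn the CPU rather than adjusting buffers blindly. A hedged sketch of turning on the slow query log at runtime and summarizing it (the log path under /opt/lampp is an assumption based on the XAMPP datadir in the config; the digest step assumes Percona Toolkit is installed):

```shell
mysql -u root -p -e "
  SET GLOBAL slow_query_log = ON;
  SET GLOBAL long_query_time = 1;
  SET GLOBAL log_queries_not_using_indexes = ON;"

# Let it collect during a CPU spike, then summarize the worst offenders:
pt-query-digest /opt/lampp/var/mysql/*-slow.log | head -50
```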
Alfredo
Sep 29, 2020, 08:25 PM • Last activity: Jul 21, 2025, 04:07 PM
0 votes
1 answer
144 views
Trying to load a file into a database on virtual machine
I have set up a Postgres DB on a Linux VM and have had no issues using a GUI to connect to it. However, I am trying to load a large (32 GB) file onto it, and am skeptical of the old way I was doing it, as it takes a lot of bandwidth. I set up a Dropbox folder to sync on the VM, which it has, and tried to COPY the file, and got this error message:

```
ERROR: could not open file "~/Dropbox/0ptimus-Jaspin/nation/VoterMapping--NH--03-17-2014-HEADERS.tab" for reading: No such file or directory
```

I used the following to try to do the copy:

```
COPY nh FROM '/Dropbox/0ptimus-Jaspin/VoterMapping--NH--03-17-2014-HEADERS.tab';
```

Thanks!
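One detail worth noting: server-side `COPY` opens the file as the postgres server process, which neither expands `~` nor looks in your user's home directory, so both paths above fail. psql's `\copy` meta-command reads the file on the client side instead. A sketch, with the database name `mydb` and the absolute home path as placeholder assumptions:

```shell
# \copy streams the file from the client through the connection,
# so the server never needs filesystem access to it:
psql -d mydb -c "\copy nh FROM '/home/youruser/Dropbox/0ptimus-Jaspin/nation/VoterMapping--NH--03-17-2014-HEADERS.tab'"
```

Alternatively, a server-side `COPY` would work if given the full absolute path (not `~` or `/Dropbox/...`) and the postgres OS user has read permission on the file.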
Ron (1 rep)
Sep 20, 2014, 05:57 PM • Last activity: Jul 20, 2025, 03:09 AM
0 votes
1 answer
146 views
Will 'sleeping' connections open files other than the socket in MySQL?
Recently, in our MySQL replication setup, the replication slave crashed due to a 'too many open files' error. We brought it back up, but are trying to figure out what caused the unexpected hike in the number of open files. We habitually see many connections in 'sleep' status. Would these 'sleeping' connections open files other than the sockets they keep open? The MySQL version is 5.6.20; the OS is RHEL 6.4.

Edit: Our MySQL is started with a huge 'open-files' limit, and the OS already allows the mysql user to open that many files. I am doing an RCA and would like to know whether the sleeping connections would open files other than the network socket they hold.
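For an RCA like this it helps to measure rather than guess: a sleeping connection itself normally holds little beyond its socket, but session temporary tables that spilled to disk, plus growth of the server-wide table cache (each MyISAM table open costs multiple descriptors), are common sources of FD spikes. A hedged diagnostic sketch:

```shell
# Count descriptors actually held by mysqld, bucketed by type
# (REG = files, IPv4/unix = sockets, DIR, etc.):
lsof -nP -p "$(pidof mysqld)" | awk '{print $5}' | sort | uniq -c | sort -rn

# Compare against what MySQL thinks its limits and usage are:
mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
          SHOW GLOBAL STATUS LIKE 'Open%';"
```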
Sreeraj (171 rep)
Mar 23, 2015, 05:04 AM • Last activity: Jul 19, 2025, 11:03 PM
0 votes
1 answer
144 views
MySQL using up all RAM and SWAP in group replication settings
I am using MySQL 8.0.18 in a group replication (multi-master) infrastructure. There are currently 4 master servers connected to each other, and everything seems to be working fine, but over the course of around 2 weeks the mysqld process on every server uses up all the swap. RAM usage stays at 800 MB, as I suspect innodb_log_buffer_size is doing its job.

Server specs:

- Debian 10
- 1 GB RAM
- 3 GB swap
- 20 GB SSD

mysql.cnf:

```
max_connections=20
innodb_log_buffer_size=800M
key_buffer_size=50M
super_read_only=off
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1
loose-group_replication_unreachable_majority_timeout=80
loose-group-replication-autorejoin-tries=2015
loose-group-replication-transaction-size-limit=0
```

Related OS config:

```
vm.swappiness=5
```

How do I make MySQL stop using so much swap?
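One thing to check first: innodb_log_buffer_size=800M is a dedicated allocation *on top of* the buffer pool (default 128M), per-connection buffers, and group replication's own caches - on a 1 GB host that alone pushes the process into swap, and the "RAM stays at 800 MB" observation is consistent with the rest living in swap. A hedged sketch of confirming where the memory actually goes via the MySQL 8.0 sys schema before shrinking anything:

```shell
# Top memory consumers as instrumented by performance_schema:
mysql -e "SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;"

# Grand total the server believes it has allocated:
mysql -e "SELECT * FROM sys.memory_global_total;"
```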
Ken (21 rep)
Mar 6, 2020, 02:26 PM • Last activity: Jul 19, 2025, 01:04 PM
1 vote
1 answer
146 views
MariaDB Parallel Replication Drifting at random times
I have recently been thrown in at the deep end, managing our database servers since our DBA left. The current setup is MariaDB (mariadb Ver 15.1 Distrib 10.1.38-MariaDB, for debian-linux-gnu) with InnoDB (I turned on innodb_file_per_table, as we had some 200+ GB ibdata1 files). We use parallel (conservative) replication. Here is a bit of the config (just without the SSL certs):

```
innodb_file_per_table = On
innodb_thread_concurrency = 0
innodb_buffer_pool_size = 40G
innodb_buffer_pool_instances = 20
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
table_open_cache = 8192
thread_cache_size = 256
table_cache = 70000
open_files_limit = 100000
log_slave_updates
collation-server = utf8_general_ci
init-connect='SET NAMES utf8'
character-set-server = utf8

## Logging
log-output = FILE
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/slow-log
log-slow-slave-statements
long-query-time = 30
log_warnings = 2
```

I have noticed that sometimes our servers drift, but this usually corrects itself; I assume large statement(s) are the cause, which is one of the reasons I turned on logging. But some servers will just not correct themselves, and one of these only has a 2.6 GB ibdata1 file. I have restarted the slave a few times, but it's just very slow. I am good on the Linux and hardware side and have ensured there are no other problems with the disks. The link between the servers runs over the internet, but each server has a 1 Gbps connection and we see no latency: ~6 ms ping and stable. Some servers drift when there is no load on them, as if the slave is not requesting from the master, and replication certainly doesn't seem to use the full bandwidth. Overnight, one of our servers has gone from being in sync to 3 hours behind, and the lag seems to be increasing. I cannot see any issue on the other databases, or even any users logged in to our application. Could something be locking it and preventing replication?

Any help or suggestions would be really appreciated, as I am kind of thrown in at the deep end here; I am not a DBA, but while we do not currently have one I am the "best fit", and we need to get these servers replicating correctly for production. If anyone has suggestions on learning resources, I'd welcome those too. I think maybe this setup needs to be rebuilt from scratch - we do not use GTID, for example, and I wonder if we even use the correct replication method. Many thanks.
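When a replica falls behind and stays behind, the usual first step is to determine whether the I/O thread (fetching from the master) or the SQL thread (applying) is the bottleneck. A hedged diagnostic sketch:

```shell
mysql -e "SHOW SLAVE STATUS\G" | grep -E \
  'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Master_Log_File|Relay_Master_Log_File|Last_SQL_Error'
# If Master_Log_File is ahead of Relay_Master_Log_File, the SQL thread is
# the bottleneck (apply is too slow, or blocked by a lock); if they match
# but Seconds_Behind_Master keeps growing, suspect the I/O thread and the
# network. Long-running applied statements will also appear in
# /var/lib/mysql/slow-log thanks to log-slow-slave-statements.
```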
hexce (11 rep)
Sep 8, 2019, 05:24 PM • Last activity: Jul 19, 2025, 05:05 AM
1 vote
1 answer
494 views
SQLServer Linux: Error restoring backup of DB from Windows w/ full-text data file
I'm trying to move a SQL Server database from Windows 10 to Linux. For this, I'm following the instructions in https://learn.microsoft.com/es-es/sql/linux/sql-server-linux-migrate-restore-database?view=sql-server-linux-2017

The Linux server is freshly installed on Ubuntu 16.04.4 LTS. The Windows database backup is pre-existing (I did not run the backup myself, but it's a full backup). When I try to restore, it produces an error for the full-text catalog file (access denied), as shown (database name changed to 'mydb' for privacy):

```
sqlcmd -S localhost -U SA -Q "RESTORE DATABASE mydb FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak' WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf', MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf', MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/data/catalog.ft'"
Msg 7610, Level 16, State 1, Server irulan, Line 1
Acceso denegado a '/var/opt/mssql/data/catalog.ft' o la ruta de acceso no es válida.
Msg 3156, Level 16, State 50, Server irulan, Line 1
El archivo 'sysft_appuser_catalog3' no se puede restaurar en '/var/opt/mssql/data/catalog.ft'. Utilice WITH MOVE para identificar una ubicación válida para el archivo.
```

(In English: "Access denied to '/var/opt/mssql/data/catalog.ft' or the path is invalid." and "The file 'sysft_appuser_catalog3' cannot be restored to '/var/opt/mssql/data/catalog.ft'. Use WITH MOVE to identify a valid location for the file.")

The other two files (mdf and ldf) are created without problems in the same folder. I have tried different file names, creating the file beforehand (touch), and so on, with no success. How can I restore this database? I'd be willing to restore it without the full-text index - is there a way to do that?

This is the output of FILELISTONLY (to check the contents of the backup), reduced to the key columns for legibility (identifier/LSN columns omitted):

```
LogicalName             PhysicalName                                                                  Type  FileGroupName  Size
----------------------  ----------------------------------------------------------------------------  ----  -------------  ----------
mydb                    D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb.mdf             D     PRIMARY        3460300800
mydb_log                D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb_log.ldf         L     NULL           1540227072
sysft_appuser_catalog3  D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\appuser_catalog3   F     PRIMARY        931744

(3 rows affected)
```

Note: I have found this [Stack Overflow post](https://stackoverflow.com/questions/536182/mssql2005-restore-database-without-restoring-full-text-catalog); the poster had a different problem, but was also willing to restore the DB without the full-text data. It doesn't say how (or if!) he ever resolved his problem, so it doesn't really give me an answer.
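One pattern that sometimes helps is restoring the full-text catalog into a separate directory created beforehand with mssql ownership, since the Type-F catalog is a folder-style entry rather than a single data file. A hedged sketch (the /var/opt/mssql/fts directory name is an assumption):

```shell
sudo mkdir -p /var/opt/mssql/fts
sudo chown mssql:mssql /var/opt/mssql/fts

sqlcmd -S localhost -U SA -Q "RESTORE DATABASE mydb FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak' WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf', MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf', MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/fts/catalog'"
```

A separate caveat worth checking: the `MSSQL.1` paths in the FILELISTONLY output suggest this backup was taken on SQL Server 2005, and SQL Server 2017 (the Linux release) can only restore backups from 2008 or later - older backups need an intermediate restore/upgrade through a 2008-2016 instance first.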
user155183
Jul 12, 2018, 07:00 PM • Last activity: Jul 15, 2025, 09:01 AM
0 votes
1 answers
171 views
Restoring SQL Server database on Windows instance from Linux SQLCMD
I have a SQL Server instance running under Windows, and a Linux client. AFAIK SQLCMD requires a "local" (server-side) path to backup files. So the question is: how do I specify Windows paths (e:\backups\db_name\etc) in a Linux shell or script? Do I need to double the backslashes? Or can I use...
I have a SQL Server instance running under Windows, and a Linux client. AFAIK SQLCMD requires a "local" (server-side) path to backup files. So the question is: how do I specify Windows paths (e:\backups\db_name\etc) in a Linux shell or script? Do I need to double the backslashes? Or can I use forward slashes?
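Since the path string is handed to the server by sqlcmd rather than interpreted by the client OS, the practical problem is shell quoting. A minimal sketch, with a hypothetical backup file name (the real path layout is yours):

```shell
# A sketch: the -Q string is parsed by SQL Server, not by the Linux
# shell, so the main job is to stop the shell from eating the
# backslashes. Single quotes do that.
BACKUP_PATH='e:\backups\db_name\mydb.bak'   # hypothetical file name

# Backslashes survive single quotes untouched:
printf '%s\n' "$BACKUP_PATH"

# Build the query string; the inner single quotes are for T-SQL.
# In double quotes, bash leaves \b, \d, \m alone because they are not
# valid shell escapes, but single quotes are the safer habit.
QUERY="RESTORE DATABASE db_name FROM DISK = '$BACKUP_PATH'"
printf '%s\n' "$QUERY"
# sqlcmd -S winserver -U sa -Q "$QUERY"   # actual call needs the server
```

Forward slashes are accepted by the Windows API in many contexts, but since the backslash form is what the server reports in its own metadata, quoting the backslashes is the predictable route.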
Eldhenn (1 rep)
Feb 18, 2022, 01:58 PM • Last activity: Jul 7, 2025, 11:09 PM
0 votes
2 answers
168 views
DB replication and recovery of a previous version in mysql
I am trying to get my head around how to have a simple recovery solution for mysql databases in case of accidental deletion. Let's just say for argument's sake that you are an admin of a 800 gig database and one day you or someone accidentally deletes a table or a hundred rows. Here are two potentia...
I am trying to get my head around how to have a simple recovery solution for MySQL databases in case of accidental deletion. Let's say, for argument's sake, that you are the admin of an 800 GB database and one day you or someone else accidentally deletes a table or a hundred rows. Here are two potential solutions I thought of, and the problems I see with them. 1) Replication is enabled with a slave server, so you can restore the data by dumping the table from the slave and restoring it on the master. Problem: if you delete the table on the master, the delete gets replicated to the slave, meaning you lose the table on both, am I right? 2) You enable the binary log and somehow restore using it. Problem: my understanding of recovery with the binary log is that you first have to restore a full backup from the previous day, and then use the binary log to replay the changes up to the point just before the table or rows were deleted. I think this is a huge problem if your DB is 800 GB, because restoration could take a whole day, and that would be a whole day with the server down - not feasible. What I want is a simple and reliable solution to restore to a previous version at a specific date, and to set the servers up accordingly, so that if something like this were to happen, recovery is simple and doesn't take hours or an entire day. Is it not possible to use the binary log to roll back without doing a full backup restore? Are there any other solutions to this problem?
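For option 2, a separate recovery instance avoids the "whole day down" problem: production keeps running while the backup plus binlog replay happens elsewhere, and only the rescued table is copied back. A rough sketch, with all host names, file names, positions, and timestamps as placeholders:

```shell
# All names, positions, and timestamps below are placeholders; the start
# coordinates come from the binlog position recorded with the backup
# (e.g. by mysqldump --master-data).

# 1. Restore last night's full backup on a separate recovery instance,
#    so production keeps running while you work.
mysql --host=recovery-host < full_backup.sql

# 2. Replay binlog events from the backup position up to just before the
#    accidental DELETE/DROP.
mysqlbinlog --start-position=107 --stop-datetime="2017-08-15 14:59:00" \
    binlog.000042 binlog.000043 | mysql --host=recovery-host

# 3. Dump only the affected table from the recovery instance and load it
#    back into production.
mysqldump --host=recovery-host mydb mytable > mytable_rescued.sql
mysql --host=production-host mydb < mytable_rescued.sql
```

A delayed replica (CHANGE MASTER TO MASTER_DELAY = N, available since MySQL 5.6) is another common guard for option 1: the slave lags on purpose, giving you a window to stop it before a bad statement replays.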
Ulukai (123 rep)
Aug 15, 2017, 11:03 PM • Last activity: Jul 7, 2025, 05:07 AM
3 votes
1 answers
397 views
How to lock a MSSQL (2017 Linux) account after N unsuccessful login attempts
I have to configure account lockout after N unsuccessful login attempts on SQL Server 2017 for Linux. It is a standalone server and is not in AD. I couldn't find any useful information so far, unfortunately, perhaps because the Linux platform is still quite new for SQL Server. Thank you for any advice here.
I have to configure account lockout after N unsuccessful login attempts on SQL Server 2017 for Linux. It is a standalone server and is not in AD. I couldn't find any useful information so far, unfortunately, perhaps because the Linux platform is still quite new for SQL Server. Thank you for any advice here.
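The standard T-SQL hook for this is CHECK_POLICY on the login. On Windows the lockout threshold comes from the OS security policy; on Linux the server reportedly applies built-in defaults, so whether N itself is tunable there is unclear. A hedged sketch (login name and password are placeholders):

```shell
# Login name 'appuser' and the password are placeholders. CHECK_POLICY
# ties the SQL login to the password/lockout policy; on Windows the
# threshold comes from the OS security policy, while SQL Server on
# Linux applies built-in defaults.
sqlcmd -S localhost -U SA -Q "ALTER LOGIN appuser WITH CHECK_POLICY = ON;"

# Check whether a login is currently locked out (1 = locked):
sqlcmd -S localhost -U SA -Q "SELECT LOGINPROPERTY('appuser', 'IsLocked');"

# Unlock a locked-out login (a password clause is required with UNLOCK):
sqlcmd -S localhost -U SA -Q "ALTER LOGIN appuser WITH PASSWORD = 'S3cret!Pass' UNLOCK;"
```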
Radek Radek (31 rep)
Mar 25, 2019, 10:14 AM • Last activity: Jul 1, 2025, 09:06 AM
1 votes
1 answers
33 views
What are recycled-commitlogs?
Cassandra/ScyllaDB. I have a node with 51000+ recycled commitlogs and want to know if I can just blitz them. I am a Linux admin, not a Scylla/Cassandra one, but these files are filling up a data disk. Normal commitlogs appear to be created and used/removed, so I am assuming these are left over from some failed effort somewhere and ma...
Cassandra/ScyllaDB. I have a node with 51000+ recycled commitlogs and want to know if I can just blitz them. I am a Linux admin, not a Scylla/Cassandra one, but these files are filling up a data disk. Normal commitlogs appear to be created and used/removed, so I am assuming these are left over from some failed effort somewhere and may just be copies of existing segments, or commits that have already been applied. If these are for commits which have failed and need to be redone, would a restart of the service kick off a purge? I cannot find anything relating to recycled versions, and am hoping some useful individual out there can shed some light.
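Before blitzing anything, it is worth sizing the problem and deleting conservatively. A sketch, assuming the recycled segments carry a 'Recycled*' file-name prefix (check what yours are actually called); a throwaway demo directory stands in for the real commitlog directory (e.g. /var/lib/scylla/commitlog):

```shell
# The 'Recycled*' name pattern and the commitlog path are assumptions;
# adjust the glob to match the files on your node. A throwaway demo
# directory stands in here for the real one.
COMMITLOG_DIR=$(mktemp -d)
touch "$COMMITLOG_DIR/Recycled-CommitLog-1.log" \
      "$COMMITLOG_DIR/Recycled-CommitLog-2.log" \
      "$COMMITLOG_DIR/CommitLog-3.log"

# Count the recycled segments and total their size:
count=$(find "$COMMITLOG_DIR" -maxdepth 1 -name 'Recycled*' | wc -l)
echo "recycled segments: $count"
du -ch "$COMMITLOG_DIR"/Recycled* | tail -n 1

# A cautious first pass: delete only recycled segments untouched for a week,
# leaving anything the server might still be writing.
find "$COMMITLOG_DIR" -maxdepth 1 -name 'Recycled*' -mtime +7 -delete
```

The age filter is the safety valve: if the server recreates or reuses a segment after the pass, it was still live and the next pass skips it again.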
Hpoonis (11 rep)
Jun 24, 2025, 12:45 PM • Last activity: Jun 30, 2025, 02:35 PM
0 votes
1 answers
198 views
SQLServer Linux: Restore full database / Catalog
UPDATE: I still get the same error, but with the comments I have made some progress: - Initially, the full-text search feature was not in my installation. I installed it following https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-setup-full-text-search?view=sql-server-linux-2017 - I have ch...
UPDATE: I still get the same error, but with the comments I have made some progress:
- Initially, the full-text search feature was not in my installation. I installed it following https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-setup-full-text-search?view=sql-server-linux-2017 
- I have checked the unsupported elements in https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-release-notes?view=sql-server-linux-2017#Unsupported . Full-text search on Linux has some filters missing.
- PENDING: I have a clue to investigate: the original database uses the cp1252 charset, and it seems (not confirmed) that full-text search works better with UTF-8.

ORIGINAL QUESTION: I'm trying to move a SQL Server database from Windows 10 to Linux. For this, I'm following the instructions in https://learn.microsoft.com/es-es/sql/linux/sql-server-linux-migrate-restore-database?view=sql-server-linux-2017 The Linux database is freshly installed on Ubuntu 16.04.4 LTS. The Windows database backup already existed (I did not run the backup myself, but it is a full backup). When I try to restore, I get an access-denied error on the full-text catalog, as shown (database name changed to 'mydb' for privacy):

sqlcmd -S localhost -U SA -Q "RESTORE DATABASE mydb FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak' WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf', MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf', MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/data/catalog.ft'"

Msg 7610, Level 16, State 1, Server irulan, Line 1
Access is denied to '/var/opt/mssql/data/catalog.ft', or the path is invalid.
Msg 3156, Level 16, State 50, Server irulan, Line 1
File 'sysft_appuser_catalog3' cannot be restored to '/var/opt/mssql/data/catalog.ft'. Use WITH MOVE to identify a valid location for the file.

The other two files (mdf and ldf) are created without problems in the same folder.
I have tried different file names, creating the file beforehand (touch), and so on, with no success. This is the output of FILELISTONLY (to check the content of the backup), trimmed here to the relevant columns:

LogicalName             Type  FileGroupName  Size        PhysicalName
mydb                    D     PRIMARY        3460300800  D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb.mdf
mydb_log                L     NULL           1540227072  D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb_log.ldf
sysft_appuser_catalog3  F     PRIMARY        931744      D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\appuser_catalog3

(3 rows affected)

Also, I have found another Stack Overflow post with a similar problem, but it doesn't say the final name he/she used. https://stackoverflow.com/questions/536182/mssql2005-restore-database-without-restoring-full-text-catalog 
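One thing worth trying, since Msg 7610 is a path/permissions error and the catalog's original PhysicalName has no file extension (it looks like a directory rather than a .ft file): pre-create a directory that the mssql user owns and MOVE the catalog to an extension-less target inside it. A hedged sketch, not a confirmed fix; the directory name is a placeholder:

```shell
# Hedged sketch; '/var/opt/mssql/ftdata' is a made-up directory name.
# The idea: give SQL Server a writable, mssql-owned location and mirror
# the extension-less catalog name from the backup.
sudo mkdir -p /var/opt/mssql/ftdata
sudo chown mssql:mssql /var/opt/mssql/ftdata

sqlcmd -S localhost -U SA -Q "RESTORE DATABASE mydb \
FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak' \
WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf', \
MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf', \
MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/ftdata/appuser_catalog3'"
```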
Sourcerer
Jul 12, 2018, 04:08 PM • Last activity: Jun 27, 2025, 12:09 AM
Showing page 1 of 20 total questions