
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1425 views
How to upgrade MySQL 5.7 to 8.0 on Amazon Linux 1, EC2
I'm using MySQL 8.0.14 for my project and I want to use it on my AWS EC2 server as well. I upgraded the server from 5.5 to 5.7 by following this link: https://stackoverflow.com/questions/37027744/upgrade-mysql-to-5-6-on-ec2 but I can't find any information on upgrading to MySQL 8.0.
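The same yum-repo route used in the linked 5.6/5.7 upgrade extends to 8.0. A minimal sketch, assuming Amazon Linux 1 (which is EL6-based); the exact release-RPM version number here is an assumption, so check dev.mysql.com for the current el6 package, and take a full dump first since 8.0 upgrades cannot be downgraded:

    # back up everything first; 5.7 -> 8.0 is a one-way trip
    mysqldump -u root -p --all-databases --routines --events > all-databases.sql

    # add the MySQL 8.0 community repo (RPM version is an assumption, check dev.mysql.com)
    sudo yum localinstall https://dev.mysql.com/get/mysql80-community-release-el6-3.noarch.rpm

    # switch the enabled repo from 5.7 to 8.0 and upgrade in place
    sudo yum-config-manager --disable mysql57-community
    sudo yum-config-manager --enable mysql80-community
    sudo yum update mysql-community-server

    sudo service mysqld restart
    # on 8.0.14 the upgrade step is still a separate binary (8.0.16+ runs it on startup)
    sudo mysql_upgrade -u root -p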
An Dương
Jul 4, 2020, 10:52 AM • Last activity: Aug 5, 2025, 03:08 AM
0 votes
1 answer
296 views
Can't bring up slave from ec2-consistent-snapshot due to uncommitted prepared transaction
I'm struggling with bringing up a slave instance using a snapshot created by `ec2-consistent-snapshot`. The log describes an uncommitted prepared transaction, but isn't that exactly what ec2-consistent-snapshot is supposed to prevent? My execution statement for creating snapshots is as follows... _(forgive the ansible variable placeholders)_

    /usr/local/bin/ec2-consistent-snapshot-master/ec2-consistent-snapshot -q \
      --aws-access-key-id {{ aws.access_key }} \
      --aws-secret-access-key {{ aws.secret_key }} \
      --region {{ aws.region }} \
      --tag "Name={{ inventory_hostname }};Role={{ mysql_repl_role }}" \
      --description "Database backup snapshot - {{ inventory_hostname_short }}" \
      --freeze-filesystem /mnt/perconadata \
      --percona \
      --mysql-host localhost \
      --mysql-socket /mnt/perconadata/mysql.sock \
      --mysql-username root \
      --mysql-password {{ mysql_root_password }} \
      $VOLUME_ID

And the log resulting from the failed attempt to bring it up on the slave is as follows...

    InnoDB: Doing recovery: scanned up to log sequence number 64107621643
    InnoDB: Transaction 1057322289 was in the XA prepared state.
    InnoDB: 1 transaction(s) which must be rolled back or cleaned up
    InnoDB: in total 0 row operations to undo
    InnoDB: Trx id counter is 1057322752
    2017-01-27 14:33:44 11313 [Note] InnoDB: Starting an apply batch of log records to the database...
    InnoDB: Progress in percent: 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
    InnoDB: Apply batch completed
    InnoDB: Last MySQL binlog file position 0 33422772, file name mysql-bin.000011
    2017-01-27 14:33:46 11313 [Note] InnoDB: 128 rollback segment(s) are active.
    InnoDB: Starting in background the rollback of uncommitted transactions
    2017-01-27 14:33:46 7f3a90c75700 InnoDB: Rollback of non-prepared transactions completed
    2017-01-27 14:33:46 11313 [Note] InnoDB: Waiting for purge to start
    2017-01-27 14:33:46 11313 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.34-79.1 started; log sequence number 64107621643
    CONFIG: num_threads=8
    CONFIG: nonblocking=1(default)
    CONFIG: use_epoll=1
    CONFIG: readsize=0
    CONFIG: conn_per_thread=1024(default)
    CONFIG: for_write=0(default)
    CONFIG: plain_secret=(default)
    CONFIG: timeout=300
    CONFIG: listen_backlog=32768
    CONFIG: host=(default)
    CONFIG: port=9998
    CONFIG: sndbuf=0
    CONFIG: rcvbuf=0
    CONFIG: stack_size=1048576(default)
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    CONFIG: num_threads=1
    CONFIG: nonblocking=1(default)
    CONFIG: use_epoll=1
    CONFIG: readsize=0
    CONFIG: conn_per_thread=1024(default)
    CONFIG: for_write=1
    CONFIG: plain_secret=
    CONFIG: timeout=300
    CONFIG: listen_backlog=32768
    CONFIG: host=(default)
    CONFIG: port=9999
    CONFIG: sndbuf=0
    CONFIG: rcvbuf=0
    CONFIG: stack_size=1048576(default)
    CONFIG: wrlock_timeout=12
    CONFIG: accept_balance=0
    handlersocket: initialized
    2017-01-27 14:33:46 7f3dfe768840 InnoDB: Starting recovery for XA transactions...
    2017-01-27 14:33:46 7f3dfe768840 InnoDB: Transaction 1057322289 in prepared state after recovery
    2017-01-27 14:33:46 7f3dfe768840 InnoDB: Transaction contains changes to 1 rows
    2017-01-27 14:33:46 7f3dfe768840 InnoDB: 1 transactions in prepared state after recovery
    2017-01-27 14:33:46 11313 [Note] Found 1 prepared transaction(s) in InnoDB
    2017-01-27 14:33:46 11313 [ERROR] Found 1 prepared transactions! It means that mysqld was not shut down properly last time and critical recovery informat$
    2017-01-27 14:33:46 11313 [ERROR] Aborting

My two thoughts are that I've missed something while creating the snapshot, or I've missed something bringing up the slave from this type of snapshot, so my question is... **Am I missing some important parameters that force mysql/percona to commit transactions prior to freezing the file system?** -- OR -- **Is there a parameter I should be using to bring the slave up to force it to act as if it's recovering from a crash?**
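On the second question: mysqld does expose a crash-recovery override for exactly this state. A minimal sketch, assuming it is acceptable to heuristically resolve the single in-doubt XA transaction rather than recover it through the binlog (ROLLBACK discards it, COMMIT keeps it):

    # run once against the restored datadir; the server is expected to exit
    # after the heuristic recovery completes, then start normally
    mysqld --user=mysql --tc-heuristic-recover=ROLLBACK
    sudo service mysql start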
oucil (516 rep)
Jan 27, 2017, 08:30 PM • Last activity: May 13, 2025, 06:03 AM
1 vote
1 answer
2252 views
MySQL GTID error 1236
I am trying to reduce the time it takes for a machine to come live in production, and MySQL is the bottleneck. What I'm doing: I update the code on one machine and restore MySQL, and everything works fine. After this I make an EC2 AMI and launch a machine from it in an autoscaling group. On this instance I am not taking any live dump; after starting MySQL I set up replication, but it shows me a GTID error. This whole process takes 30 to 40 minutes. > Last_IO_Error: Got fatal error 1236 from master when > reading data from binary log: 'The slave is connecting using CHANGE > MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary > logs containing GTIDs that the slave requires. Can anybody tell me what I am doing wrong? If I take a fresh dump and restore it on the live machine, it works. MySQL version 5.6.17.
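Error 1236 here means the AMI's data is older than the master's oldest retained binlog, so auto-positioning has nothing to resume from. A sketch of re-pointing the launched instance, assuming you capture the master's GTID set at the moment the AMI is created (angle-bracket values are placeholders):

    # on the master, at AMI-creation time:
    mysql -e "SHOW MASTER STATUS\G"    # note the Executed_Gtid_Set value

    # on the instance launched from the AMI, before starting replication:
    mysql -e "STOP SLAVE;
              RESET MASTER;
              SET GLOBAL gtid_purged = '<Executed_Gtid_Set noted above>';
              CHANGE MASTER TO MASTER_HOST='<master-host>', MASTER_USER='repl',
                               MASTER_PASSWORD='<password>', MASTER_AUTO_POSITION=1;
              START SLAVE;"

The other half is retention: expire_logs_days on the master has to cover the age of the AMI plus the 30-40 minute launch window, or the purge will bite again.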
Obivan (119 rep)
Sep 22, 2016, 05:20 PM • Last activity: Apr 24, 2025, 10:06 AM
4 votes
2 answers
2715 views
Change character_set_connection to utf8mb4 in MySQL
I want to change character_set_connection to utf8mb4, but it's showing me utf8. I followed this article https://mathiasbynens.be/notes/mysql-utf8mb4 and changed

[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4

[mysqld]
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
I also ran this query:
SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci
but when I check the current status in **phpmyadmin** using
SHOW VARIABLES WHERE Variable_name LIKE 'character\_set\_%' OR Variable_name LIKE 'collation%';
The result:
character_set_client utf8
character_set_connection utf8
character_set_database latin1
character_set_filesystem binary
character_set_results utf8
character_set_server latin1
character_set_system utf8
collation_connection utf8_general_ci
collation_database latin1_swedish_ci
collation_server latin1_swedish_ci
I have also altered the database and tables:
SET NAMES utf8mb4;  
ALTER DATABASE openfire CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;  
ALTER TABLE ofOffline CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci; 
This is my connection string:
jdbc:mysql://localhost:3306/openfire?rewriteBatchedStatements=true&useUnicode=true
This is MySQL 5.6.33. I have checked on Windows with version 5.7.14 and it's fine there. Please give me any suggestions. Thanks in advance.
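Two things stand out in that output: character_set_server is still latin1, which suggests mysqld never re-read the edited file (or a different my.cnf won), and phpMyAdmin opens its own connection, so a SET NAMES issued from another session won't show up there. A sketch of checking from the shell which option files are actually read and what a fresh utf8mb4 client sees (paths are the usual Linux defaults):

    # which my.cnf files mysqld reads, in order
    mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'

    sudo service mysql restart

    # verify what a client that asks for utf8mb4 actually gets
    mysql --default-character-set=utf8mb4 \
          -e "SHOW VARIABLES LIKE 'character\_set\_%'; SHOW VARIABLES LIKE 'collation%';"

For the JDBC side, Connector/J negotiates its own connection charset; with character_set_server=utf8mb4 on the server, adding characterEncoding=UTF-8 to the URL is the documented way to end up with a utf8mb4 connection.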
Bhavin Chauhan (141 rep)
Dec 17, 2016, 10:33 AM • Last activity: Feb 15, 2025, 04:00 PM
0 votes
0 answers
22 views
Newb question: finding my hosted databases
Apologies if this is a really stupid question. I'm very new to SQL databases and I need some help. I work at a book store with a website hosted on an AWS EC2 instance. Nobody touched the website for a long time and now we're in the process of updating it. It appears the book data are stored in a PostgreSQL database hosted with a different host name than the domain owned by the bookstore. I've tried looking in the EC2 instance using
systemctl status postgresql
and there is an active SQL database that doesn't match the domain name but does match the database name, and it seems to be performing SELECT queries that look like what I'd expect for a row entry of book data. How can I confirm this is the right database, and how can I check for the database that is hosted under a different domain name to see where it is (and prepare it for backup if needed for a staging site)? I'm going through Google and ChatGPT, and I'm even currently doing a PostgreSQL Coursera course to get the info I need. But as I said, I'm very new to all this and I don't want to hurt anything on the live instance. The site wasn't touched for ten years and is poorly documented. We're prepping to bring in a software development team to finish the upgrade, but as it stands I just need to make sure our database isn't hosted by an old contractor who could delete the entire DB at any moment.
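A few read-only checks from inside the EC2 instance can settle where the data actually lives. A sketch, assuming the default postgres superuser exists; the /var/www web root is an assumption, adjust to wherever the site's code is:

    # is Postgres listening locally, and on which addresses?
    sudo ss -tlnp | grep 5432

    # list databases and sizes on the local instance (read-only)
    sudo -u postgres psql -c '\l+'

    # find where the website's code says it connects; a host other than
    # localhost/127.0.0.1 would mean the data lives elsewhere
    sudo grep -rniE 'host=|pg_connect|DATABASE_URL' /var/www 2>/dev/null

If the connection string in the code points at the local instance and the database names line up, the data is yours to dump with pg_dump for the staging site.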
scicrow (1 rep)
Oct 11, 2024, 06:42 AM
28 votes
3 answers
173683 views
Mongo: creating a user as admin for any database raises an error
I am trying to create a simple user with permission to access any database and perform any action. When I try to execute the createUser command I get this error:

    db.createUser({ user: "mongoadmin" , pwd: "mongoadmin", roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase"]})
    2015-08-20T17:09:42.300+0000 E QUERY Error: couldn't add user: No role named userAdminAnyDatabase@new_vehicles_catalog

The problem above only happens when I enable the auth configuration, and I need it. So, how do I create a user with admin permission for any database? I want it because I configured my mongo services to use authenticated connections; if I want to execute a dump of my data I have to use these authentication parameters. Please, any help? Using **mongo version 3.0.5**. The service is on **Amazon Linux AMI 2015.03 (HVM), SSD Volume Type - ami-1ecae776**.
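The @new_vehicles_catalog suffix in the error is the giveaway: the command ran against that database, but the *AnyDatabase roles only exist in admin. A sketch of creating the user there instead, with explicit role/db pairs:

    mongo --eval '
      db.getSiblingDB("admin").createUser({
        user: "mongoadmin",
        pwd:  "mongoadmin",
        roles: [
          { role: "userAdminAnyDatabase", db: "admin" },
          { role: "dbAdminAnyDatabase",   db: "admin" },
          { role: "readWriteAnyDatabase", db: "admin" }
        ]
      })'
    # afterwards, dumps authenticate against admin:
    # mongodump -u mongoadmin -p mongoadmin --authenticationDatabase admin

With auth already enabled and no users created yet, the localhost exception should still let this first createUser through when run on the server itself.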
Robert (705 rep)
Aug 20, 2015, 05:21 PM • Last activity: Oct 4, 2024, 09:13 AM
0 votes
1 answer
39 views
How to transfer a precomputed mongodb index to another machine?
I have a local mongodb database that I want to transfer to a free-tier AWS EC2 t3.micro instance. The problem is that one collection in the db is a bit larger (1.2 GB, 24M docs), so there is no way I can index it on the dinky little AWS machine. **I have the storage space but don't have the compute (RAM and CPU) to create an index.** I tried transferring the db with indexes via **mongodump** and **mongorestore**, but it doesn't look like it actually transfers precomputed indexes; mongorestore hangs the entire ssh session when restoring the particularly large index (waited for hours)
2024-06-10T23:27:39.102+0300	index: &idx.IndexDocument{Options:primitive.M{"name":"symbol_1_date_1", "v":2}, Key:primitive.D{primitive.E{Key:"symbol", Value:1}, primitive.E{Key:"date", Value:1}}, PartialFilterExpression:primitive.D(nil)} <-- hangs on this
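mongodump only stores index definitions in the collection metadata, never the index data itself, so mongorestore always rebuilds indexes from scratch; that rebuild is the step overwhelming the t3.micro. Two hedged workarounds (database/collection names and the remote path are placeholders):

    # 1) restore data only, then build the index without blocking the shell
    mongorestore --noIndexRestore dump/
    mongo mydb --eval 'db.mycoll.createIndex({ symbol: 1, date: 1 }, { background: true })'

    # 2) or copy the whole dbPath with mongod stopped on both sides; this
    #    does carry the prebuilt index files, but requires the same MongoDB
    #    version and storage engine on both machines
    sudo service mongod stop
    rsync -av /var/lib/mongodb/ ec2-user@<ec2-host>:/var/lib/mongodb/

Even a background build of a 24M-document index will be slow on a t3.micro; adding temporary swap can at least keep it from being killed for memory.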
dumb_dumb_man
Jun 11, 2024, 01:39 AM • Last activity: Aug 13, 2024, 04:53 PM
0 votes
1 answer
243 views
automating backups to S3 with SQL Server 2022
Not a DBA so please bear with me. I have an EC2 instance running SQL Server 2022 and I'm trying to automate database backups to S3 using the native S3 connector in this version of SQL Server. The underlying infrastructure is there - I have the buckets, the credentials, and the policies, and I've tested backups manually with basic 'BACKUP DATABASE' queries to the S3 endpoint. All of that works fine. However, I'm not clear on how to properly automate this from within SQL Server. It looks like the 'Maintenance Plan' feature does not support S3 endpoints - I only see options for Azure when I select 'URL' as the destination. Should I not be using maintenance plans? Do I need to create SQL Server Agent jobs and manually enter T-SQL queries for the backup operations? If I do things this way, are there additional clean-up requirements I'd need to account for which would have otherwise been taken care of by a maintenance plan?
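Maintenance plans indeed only offer Azure when 'URL' is selected; the usual route on 2022 is a SQL Server Agent job whose T-SQL step runs the same BACKUP you already tested, with retention handled by an S3 lifecycle rule rather than from SQL Server. A sketch of the two pieces (database, bucket, region, and the lifecycle JSON file are placeholders):

    # the statement to put in the Agent job's T-SQL step
    sqlcmd -S localhost -E -Q "
    BACKUP DATABASE [MyDb]
    TO URL = 's3://s3.us-east-1.amazonaws.com/my-backup-bucket/MyDb_FULL.bak'
    WITH COMPRESSION, CHECKSUM, FORMAT, STATS = 10;"

    # cleanup: expire old objects bucket-side instead of from SQL Server
    aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket \
      --lifecycle-configuration file://expire-after-30-days.json

That pairing replaces what a maintenance plan would have done (scheduled backup plus cleanup task); scripted solutions like Ola Hallengren's or dbatools can generate the jobs for you if you'd rather not hand-write the T-SQL.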
noctred (3 rep)
Aug 5, 2024, 11:13 PM • Last activity: Aug 9, 2024, 10:09 AM
6 votes
4 answers
31178 views
How to run MongoDB as a non root user in Linux?
I can start MongoDB with $sudo service mongod start. When I start it without sudo it gives me this error:

    /etc/init.d/mongod: line 58: ulimit: open files: cannot modify limit: Operation not permitted
    /etc/init.d/mongod: line 60: ulimit: max user processes: cannot modify limit: Operation not permitted
    Starting mongod: runuser: may not be used by non-root users

1) I've changed ownership to the non-root user on all mongo directories I could find, i.e. /var/lib/mongo, /var/log/mongodb, /data/db, /var/run/mongodb:

    $sudo chown -R nonRootUser:nonRootUser [directory]

2) I've deleted the mongod.lock files
3) I've run --repair too

It still gives me the same error. I also tried

    $mongod --fork --logpath /var/log/mongodb.log
    about to fork child process, waiting until server is ready for connections.
    forked process: 18566
    ERROR: child process failed, exited with error number 1

The mongod.log file says this:

    2016-03-17T15:03:49.053+0000 I NETWORK [initandlisten] waiting for connections on port 27017
    2016-03-17T15:03:54.144+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends

It's Amazon Linux. I am able to start it like this: $mongod, but I want to run it as a daemon so it runs continuously. The nonRootUser is a new user I created in addition to ec2-user. Maybe there are some config issues related to running daemon processes if you're not ec2-user? UPDATE: Changed ownership on everything to ec2-user; still getting exactly the same errors as before.
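Those first errors come from the init script itself: the ulimit calls and runuser both require root, so service mongod start needs sudo by design (mongod then drops privileges to the mongod user). To run it as a daemon under your own account instead, a sketch that bypasses the init script entirely; the paths are placeholders your user must own:

    # one-time: give your user its own dbpath/logpath
    mkdir -p ~/mongo/data ~/mongo/log

    # raise the file-descriptor limit for this shell only (works below the hard limit)
    ulimit -n 64000

    mongod --fork \
           --dbpath  ~/mongo/data \
           --logpath ~/mongo/log/mongod.log

"child process failed, exited with error number 1" usually means the dbpath or logpath isn't writable by the current user, which is also consistent with the signal-15 shutdown seen when the init-managed paths were used.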
ed-ta (463 rep)
Mar 17, 2016, 03:23 PM • Last activity: Jan 3, 2024, 07:18 PM
2 votes
1 answer
915 views
SQL Server for Linux 2019 - Windows Authentication not working
Hoping to pick the brains of those more knowledgeable than me. I've been trying to set up a SQL Server 2019 instance on Linux; specifically on AWS using AMI **amzn2-x86_64-SQL_2019_Standard-2019.11.12**. I've followed the steps to connect the instance to our domain, and am able to successfully log in using my domain credentials. sssd.conf below:
[sssd]
domains = [domain name]
config_file_version = 2
services = nss, pam

[domain/[domain name]]
ad_server = [FQDN of Domain Controller]
ad_domain = [domain name]
krb5_realm = [DOMAIN NAME]
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = ad
I've followed all the steps on the MSDN documentation on how to set up Windows Authentiation: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-active-directory-authentication?view=sql-server-linux-ver15 But whenever I try to login to the SQL Server using my domain credentials I receive the following error:
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Login failed. The login is from an untrusted domain and cannot be used with Integrated authentication..
And in the mssql-server logs (using systemctl status mssql-server -l) I see the following:
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.24 Logon       Error: 17806, Severity: 20, State: 14.
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.24 Logon       SSPI handshake failed with error code 0x80090304, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The operating system error code indicates the cause of failure.
The Local Security Authority cannot be contacted   [CLIENT: [IP ADDRESS]]
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.25 Logon       Error: 18452, Severity: 14, State: 1.
Apr 22 04:47:04 [SERVER NAME.DOMAIN NAME] sqlservr: 2020-04-22 04:47:04.25 Logon       Login failed. The login is from an untrusted domain and cannot be used with Integrated authentication. [CLIENT: [IP ADDRESS]]
The error would suggest an issue contacting the Local Security Authority, so I've double checked my krb5.conf file and don't see any obvious issues:
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

includedir /var/lib/sss/pubconf/krb5.include.d/
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
# default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}
 udp_preference_limit=0

 default_realm = [DOMAIN NAME]
[realms]
# EXAMPLE.COM = {
#  kdc = kerberos.example.com
#  admin_server = kerberos.example.com
# }

 [DOMAIN NAME] = {
 kdc = [Domain Controller 1 FQDN]:88
 kdc = [Domain Controller 2 FQDN]:88
 kdc = [Domain Controller 3 FQDN]:88
}

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
 [DOMAIN NAME] = [DOMAIN NAME]
 .[DOMAIN NAME] = [DOMAIN NAME]
And the fact that I can login using my AD credentials to me suggests there's no overarching issue contacting the domain controllers. Any tips or pointers would be greatly appreciated on where to go next with this!
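SSPI error 0x80090304 with "The Local Security Authority cannot be contacted" during the handshake usually implicates the service keytab rather than sssd (OS logins have already proven the domain join works). A few checks along the lines of the linked Microsoft doc; user, realm, and the keytab path shown are the doc's examples, substitute your own:

    # can the host obtain a Kerberos ticket at all?
    kinit [email protected] && klist

    # does the keytab contain MSSQLSvc SPNs for this host/port, with a current kvno?
    sudo klist -kte /var/opt/mssql/secrets/mssql.keytab

    # is SQL Server actually pointed at that keytab?
    sudo /opt/mssql/bin/mssql-conf set network.kerberoskeytabfile \
         /var/opt/mssql/secrets/mssql.keytab
    sudo systemctl restart mssql-server

A stale key version number (kvno) after the machine account password changes, or an SPN registered against a different account, are the classic causes of exactly this handshake failure.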
Anthony (21 rep)
Apr 22, 2020, 09:14 AM • Last activity: Dec 2, 2023, 12:11 PM
0 votes
1 answer
683 views
Connecting to AWS RDS succeeds via terminal but fails in pgAdmin 4. Why?
I can establish an SSH tunnel using

    ssh -L 2222:[RDS endpoint]:5432 ec2-user@[EC2 public DNS] -i ~/.ssh/key.pem

and then connect to my RDS using

    psql "host=localhost port=2222 user=postgres password=[password]"

These same settings in pgAdmin 4 do not seem to work:

- Host name in connection tab = [RDS endpoint]
- Tunnel host = [EC2 public DNS]
- Port in connection tab = 5432
- Port in SSH tunnel tab = 2222
- Username in connection tab = postgres
- Username in ssh tunnel tab = ec2-user
- Password in connection tab = [password]
- And identity file in ssh tunnel tab = ~/.ssh/key.pem

After trying for a while pgAdmin 4 fails with: unable to connect to server: Failed to create the SSH Tunnel. Error: Could not establish session to SSH gateway

cat ~/.pgadmin/pgadmin4.log just gives me:
ERROR	pgadmin:	Could not establish session to SSH gateway
Traceback (most recent call last):
  File "/usr/pgadmin4/web/pgadmin/utils/driver/psycopg3/server_manager.py", line 610, in create_ssh_tunnel
    self.tunnel_object.start()
  File "/usr/pgadmin4/venv/lib/python3.10/site-packages/sshtunnel.py", line 1331, in start
    self._raise(BaseSSHTunnelForwarderError,
  File "/usr/pgadmin4/venv/lib/python3.10/site-packages/sshtunnel.py", line 1174, in _raise
    raise exception(reason)
sshtunnel.BaseSSHTunnelForwarderError: Could not establish session to SSH gateway
I'm really lost on how to proceed given *it works* via the terminal, so it's clearly not some security group issue on AWS's side. I've tried deleting ~/.pgadmin and trying again, but same result. Any help appreciated, thanks. In case this helps:
lsb_release -a
No LSB modules are available.
Distributor ID:	Pop
Description:	Pop!_OS 22.04 LTS
Release:	22.04
Codename:	jammy
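pgAdmin doesn't shell out to OpenSSH; the traceback shows it using the Python sshtunnel/paramiko stack, which has historically rejected private keys saved in the newer OpenSSH format even though the ssh binary accepts them. A hedged first thing to try is converting a copy of the key to classic PEM:

    # check the header: "BEGIN OPENSSH PRIVATE KEY" = new format,
    # "BEGIN RSA PRIVATE KEY" = classic PEM that paramiko handles
    head -1 ~/.ssh/key.pem

    # convert a copy in place to PEM (keep the original untouched)
    cp ~/.ssh/key.pem ~/.ssh/key-pgadmin.pem
    ssh-keygen -p -f ~/.ssh/key-pgadmin.pem -m PEM -N ''

Then point the identity file field at the converted copy; the connection-tab host should stay the RDS endpoint, since pgAdmin rewrites it through the tunnel itself.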
Frikster (103 rep)
Sep 2, 2023, 04:38 AM • Last activity: Sep 2, 2023, 10:39 AM
1 vote
0 answers
144 views
Connecting to SQL in Docker on EC2 becoming slower
I am running PostgreSQL inside Docker on an EC2 instance, and a Node.js instance connects to it. I recently found that the connection is becoming slower: before, after restarting the instance it only needed a few seconds to establish the connection, but now it requires about 30+ seconds. I wonder what could be happening there and how I can optimize it. This never happened to me before.
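A connect delay of around 30 seconds is often name resolution (a reverse DNS lookup or a dead resolver timing out) rather than Postgres itself. A sketch for timing the pieces separately; the container name pg is a placeholder:

    # time a connection from inside the container (skips the network path)
    time docker exec -it pg psql -U postgres -c 'SELECT 1;'

    # time it from the host over TCP
    time psql "host=127.0.0.1 port=5432 user=postgres" -c 'SELECT 1;'

    # if only the TCP path is slow, suspect DNS; log_hostname triggers a
    # reverse lookup per connection and should normally be off
    docker exec -it pg psql -U postgres -c 'SHOW log_hostname;'

If the in-container connect is instant but the external one crawls, the problem is between Docker networking/DNS and the client, not in the database.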
Terry Windwalker (113 rep)
Jun 24, 2023, 05:02 PM
0 votes
3 answers
1638 views
Best Approach to Migrating Oracle to MySQL - Run on EC2 in AWS
I'm trying to migrate an Oracle schema, as well as all of the data within the DB, to a MySQL database. I want to then make an EC2 app on AWS to kick off whatever I've written for this. Are there any approaches that one might say are better than others (for instance, use Ruby/Rails instead of Perl)? Is there a tool that I might be able to wrap into an app and place on my EC2 instance? Full disclosure: I'm pretty much a complete newbie when it comes to AWS, so if there is something really basic that I'm missing (you could use some type of Linux-based, MySQL Workbench-like tool) I would definitely appreciate that. I realize this is a bit specific; thank you in advance!
user535617 (101 rep)
May 29, 2015, 10:10 PM • Last activity: Jun 12, 2023, 03:49 PM
0 votes
1 answer
148 views
Using AWS DMS for EC2 MariaDB instance to MariaDB RDS instance
After combing through AWS documentation, I am still unclear whether DMS supports migration from EC2 instances running the latest MariaDB version to an RDS MariaDB instance. I see that MySQL is supported, but am I making the wrong assumption that MariaDB is not? I attempted to set up a migration project, using two endpoints and data sources as suggested by AWS, but when I try to select the source data source, nothing is present; and if I select the target endpoint/data source, it claims the engine is not supported. Which doesn't make sense, since it allowed me to select MariaDB as a data source in the first place. Due to the size of my database, which will become an archive in RDS, the only approach that seemed less hands-on was DMS. However, it seems I will need to create a full backup (which usually takes up to 48 hours, pulling one of the primary nodes out of replication and the load balancer pool) in order to do it. If that is my only option, so be it. Any suggestions are welcome.
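If DMS keeps rejecting the MariaDB pairing, the dump-and-restore fallback can at least be streamed rather than staged, so no 48-hour backup file is ever written to disk. A sketch, assuming a replica can be used as the source and a single database is in scope (host names, user, and database are placeholders):

    # stream from a replica (keeping the primary in the pool) straight into RDS
    mysqldump -h <replica-host> -u backup_user -p \
      --single-transaction --quick --routines --triggers mydb \
    | mysql -h <archive>.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p mydb

--single-transaction gives a consistent InnoDB snapshot without locking the source, and --quick keeps memory flat on tables of any size.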
Randoneering (135 rep)
Dec 11, 2022, 04:20 PM • Last activity: May 8, 2023, 12:42 PM
0 votes
2 answers
1103 views
How to ensure backups are successfully persisted to S3 when using Ola Hallengren solution on AWS EC2 Instances
## Improving Existing Backup Approach By Using Ola Hallengren Backup Solution Deploying a new backup approach to eliminate dependency on a vendor tool (Cloudberry) that I've leveraged before. In my [microblog post](https://www.sheldonhull.com/microblog/sql-server-meets-aws-systems-manager) , I describe the process I'm taking which includes: - Creating and deploying all the maintenance solution via dbatools - Leveraging s5cmd as a sync tool to ensure backups get copied from EBS volume to S3. (performant alternative to installing aws-cli). - Using Red Gate SQL Monitor/PagerDuty integrations for monitoring and alerting on issues. ## Painpoint - Replacing Existing Tool The question is marked for AWS EC2 specifically as much of this is solved in Azure with the ability to backup to Azure Blob storage supported by SQL Server. Cloudberry is a great budget oriented tool that allows you to use a GUI to schedule backups that automatically get pushed to S3. It's a pretty good budget solution, but I've seen serious scaling issues once you deal with thousands of databases. At this point, the local sqlite database and S3 metadata queries seem to cause slow-downs and impact the server RPO. I've evaluated other tooling, but very few of them were flexible with S3 uploads as a native feature. ## An eye towards reliability As I've worked through the final steps I'm looking for points for failure and reliability concerns. One area that becomes a bit less easy to validate is the successful sync of all contents to S3. While the agent should fail with an exit code, knowing all files did get up to S3 is important. ## How to Gain Assurance of All Files Persisted To S3 Here's a few questions specifically that I'm trying to work through and would appreciate any insight on. - Can EBS Snapshots replace the need to sync to s3 with a goal of 15 min RPO on a dedicated backup volume? I'm not including an data files/log files in this, so it would purely a backup drive. - Based on some prior tests I believe a shorter interval is difficult to get on EBS. - S5cmd performs very well, but isn't as controlled as doing this with a dedicated PowerShell script. For instance, simply iterating through the S3 files to perhaps generate a diff report at the end would take 8 seconds along on s5cmd and 43 seconds getting it via AWS PowerShell tools. With this running every 15 minutes I want as quick and minimal of a performance impact as possible to the server, not run a lot of custom scripts beyond this. - Is there any approach you'd take to audit the local directory of backups against S3 can validate nothing locally is missing or is there where relying on the sync tool just has to be done. - Any usage of AWS Backup, Data Sync, or other tooling natively integrated in AWS that could solve these issues? FSx, DataSync, and others seem to add more risk and complexity to manage. ## Other Solutions - I've considered dbatools based backups in the past, as I could gain full control of running the backup, pushing to s3, and logging all of this with more control. However, after deliberation and community discussion I decided against this as I felt the use-case for dbatools was more for adhoc backups and that leveraging Ola Hallengren Backup tooling was a more robust solution for production usage. The negative to using it is that my PowerShell error handling and logging isn't going to be implemented. I look forward to any insight and appreciate the help 👍
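On the audit question specifically, a diff report can be produced cheaply by comparing whole listings instead of querying S3 metadata per file: one recursive listing call for the prefix, one local file walk, and a set difference. A sketch in shell (the same idea ports directly to PowerShell's Compare-Object on the Windows hosts; bucket and paths are placeholders):

    # local backup files, relative paths
    find /backups -type f -printf '%P\n' | sort > /tmp/local.txt

    # one listing call for the whole S3 prefix
    aws s3 ls s3://my-backup-bucket/sql/ --recursive \
      | awk '{print $4}' | sed 's|^sql/||' | sort > /tmp/s3.txt

    # anything local that never made it to S3
    comm -23 /tmp/local.txt /tmp/s3.txt

Run after each sync and alert on non-empty output; this checks presence rather than content, so pairing it with the sync tool's exit code (and periodic RESTORE VERIFYONLY from S3) covers the remaining gap.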
sheldonhull (280 rep)
Dec 16, 2020, 08:57 PM • Last activity: Apr 24, 2023, 04:59 PM
0 votes
1 answer
812 views
MongoDB 3.6.3 disappears after 2 weeks on CentOS 7 EC2 instance
My MongoDB has disappeared 2 times in a 4-week period, each time after about 2 weeks, over the weekend. I had a snapshot to recover from, but I can't keep on recovering my db every 2 weeks. In an EC2 instance where I hosted Mongo and the API in the same place, this is not an issue. Side note - does anyone know if it is possible to completely disable db.dropDatabase() or db.collection.drop()? Below is my mongod.conf file:

    # where to write logging data.
    systemLog:
      destination: file
      logAppend: true
      path: /var/log/mongodb/mongod.log

    # Where and how to store data.
    storage:
      dbPath: /data
      logpath: /log/mongod.log
      journal:
        enabled: true
    #  engine:
    #  mmapv1:
      wiredTiger:
        prefixCompression: true

    # how the process runs
    processManagement:
      fork: true  # fork and run in background
      pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
      timeZoneInfo: /usr/share/zoneinfo

    # network interfaces
    net:
      port: 27017
      bindIp: 0.0.0.0
    #  bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.

Any ideas how this happens, or anyone who has ever solved a similar problem, would be greatly appreciated. /var/log/mongodb/mongodb.log:

    2018-02-26T17:14:09.799+0000 I CONTROL [main] ***** SERVER RESTARTED *****
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] MongoDB starting : pid=# port=port dbpath=/var/lib/mongo 64-bit host=ip-*
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] db version v3.6.3
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] git version: 9586e557d54ef70f9ca4b43c26892cd55257e1a5
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.0-fips 29 Mar 2010
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] allocator: tcmalloc
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] modules: none
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] build environment:
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] distmod: amazon
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] distarch: x86_64
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] target_arch: x86_64
    2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod.pid", timeZoneInfo: "/usr/share/zoneinfo" }, storage: { dbPath: "/var/lib/mongo", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
    2018-02-26T17:14:09.988+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1382M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
    2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
    2018-02-26T17:14:10.693+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 038fb561-1163-46cb-bcae-b68ddebb4081
    2018-02-26T17:14:10.700+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.6
    2018-02-26T17:14:10.703+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: c994079b-4e9b-416f-91b4-4cbc60e2c118
    2018-02-26T17:14:10.710+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongo/diagnostic.data'
    2018-02-26T17:14:10.710+0000 I NETWORK [initandlisten] waiting for connections on port 27017
    2018-02-26T17:14:21.788+0000 I NETWORK [listener] connection accepted from 127.0.0.1:54282 #1 (1 connection now open)
    2018-02-26T17:14:21.788+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:54282 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "CentOS Linux release 7.4.1708 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-693.11.6.el7.x86_64" } }
    2018-02-26T17:14:23.882+0000 I NETWORK [conn1] end connection 127.0.0.1:54282 (0 connections now open)
    2018-02-26T17:19:10.711+0000 I STORAGE [thread2] createCollection: config.system.sessions with generated UUID: b181a7ff-d0e5-456e-bee5-ff7e40f32d0a
    2018-02-26T17:19:10.739+0000 I INDEX [thread2] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
    2018-02-26T17:19:10.739+0000 I INDEX [thread2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
    2018-02-26T17:19:10.740+0000 I INDEX [thread2] build index done. scanned 0 total records. 0 secs
    2018-02-26T17:28:02.551+0000 I NETWORK [listener] connection accepted from 127.0.0.1:54292 #2 (1 connection now open)
    2018-02-26T17:28:02.551+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:54292 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "CentOS Linux release 7.4.1708 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-693.11.6.el7.x86_64" } }
    2018-02-26T17:33:15.093+0000 I NETWORK [conn2] end connection 127.0.0.1:54292 (0 connections now open)
    2018-02-26T17:34:20.689+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
    2018-02-26T17:34:20.689+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
    2018-02-26T17:34:20.689+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
    2018-02-26T17:34:20.690+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
    2018-02-26T17:34:20.692+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
    2018-02-26T17:34:20.763+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
    2018-02-26T17:34:20.763+0000 I CONTROL [signalProcessingThread] now exiting
    2018-02-26T17:34:20.763+0000 I CONTROL [signalProcessingThread] shutting down with code:0

journalctl returns these red lines:

    Failed to create mount unit file /run/systemd/generator/data.mount, as it already exists. Duplicate entry in /etc/fstab? (also for log.mount and journal.mount)
    piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
    Error: Driver 'pcspkr' is already registered, aborting...
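A mongod reachable on 0.0.0.0 while its own log warns "Access control is not enabled" matches the well-known pattern of automated drop-and-ransom scans against open 27017 ports, which would explain a database vanishing on a schedule rather than a mongod bug. The durable fix is authentication plus a restricted bind address; a sketch of the mongod.conf additions, assuming an admin user is created first over localhost:

    sudo tee -a /etc/mongod.conf >/dev/null <<'EOF'
    security:
      authorization: enabled
    EOF
    # and prefer binding to localhost plus the API host's private IP only:
    #   net:
    #     bindIp: 127.0.0.1,<private API IP>
    sudo service mongod restart

On the side question: dropDatabase() and drop() can't be disabled outright, but once authorization is on, only users whose roles grant dbAdmin-level actions can run them, which in practice closes the door.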
Brandon (235 rep)
Mar 14, 2018, 05:15 PM • Last activity: Feb 23, 2023, 10:03 AM
2 votes
3 answers
1299 views
Backup TO URL S3
Can I do a SQL backup to an AWS S3 URL from an EC2 machine? I know it is possible on the Azure side, but I do not know if it is possible on the AWS side. Thank you for your advice.
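Natively this depends on the version: BACKUP ... TO URL only learned s3:// in SQL Server 2022 (it needs a CREATE CREDENTIAL matching the S3 URL); on earlier versions the common EC2 pattern is a two-step backup-then-copy. A sketch of both, with bucket and paths as placeholders:

    # SQL Server 2022+: native S3 target
    sqlcmd -S localhost -E -Q "BACKUP DATABASE [MyDb]
      TO URL = 's3://s3.us-east-1.amazonaws.com/my-bucket/MyDb.bak'
      WITH COMPRESSION, CHECKSUM;"

    # earlier versions: back up to disk, then copy with the AWS CLI
    sqlcmd -S localhost -E -Q "BACKUP DATABASE [MyDb]
      TO DISK = N'D:\backups\MyDb.bak' WITH COMPRESSION, CHECKSUM;"
    aws s3 cp "D:\backups\MyDb.bak" s3://my-bucket/MyDb.bak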
Abdallah Mehdoini (59 rep)
Jun 19, 2021, 01:45 PM • Last activity: Feb 7, 2023, 07:46 AM
0 votes
1 answer
265 views
MySQL Not Resolving % in Host
Running MySQL Server 8.0.31 on an Amazon EC2 instance (52.99.189.74) in Ubuntu 22.04. When I log in to the mysql prompt with sudo, this is how the user and host columns of the mysql.user table look:

    +------------------+-----------+
    | User             | Host      |
    +------------------+-----------+
    | della            | %         |
    | debian-sys-maint | localhost |
    | mysql.infoschema | localhost |
    | mysql.session    | localhost |
    | mysql.sys        | localhost |
    | root             | localhost |
    +------------------+-----------+
    6 rows in set (0.00 sec)

My Ubuntu user name, i.e. $USER, is della. However, when I try this as a non-root user, I get the following output:

    $ mysql -p
    Enter password:
    ERROR 1045 (28000): Access denied for user 'della'@'localhost' (using password: YES)

Shouldn't the first row of the mysql.user table resolve to my user's credentials? My understanding was that the % sign is a wildcard representing *any* hostname and IP address. In the same way, if I am on my laptop (also running Ubuntu, with my username being della) and I want to log in to the same MySQL server remotely, can I use this to resolve to the same user?

    $ mysql --user "della" --host 52.99.189.74 --port 3306 -p

But this also gives the exact same error. Things I have tried:

- Allowed inbound traffic to port 3306 in the EC2 security group from any IP address. This is how the rule looks:

    IPv4 MYSQL/Aurora TCP 3306 0.0.0.0/0

- Edited /etc/mysql/mysql.conf.d/mysqld.cnf to have the following parameters:

    bind-address = 0.0.0.0
    mysqlx-bind-address = 127.0.0.1

Anything else I need to do to get to the mysql prompt from the EC2 localhost (as a non-root user) and also remotely from my laptop?
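It's worth separating two questions: which account the server matched you to, and whether the failure is really the password or authentication plugin. A diagnostic sketch (the new password value is a placeholder; switching to mysql_native_password is only a suggestion for ruling out plugin mismatches with older clients):

    # force TCP so the socket-vs-TCP distinction can't confuse matters
    mysql --protocol=TCP -h 127.0.0.1 -u della -p \
          -e "SELECT USER(), CURRENT_USER();"

    # from the working sudo session: check the plugin, then re-set the password
    sudo mysql -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'della';"
    sudo mysql -e "ALTER USER 'della'@'%' IDENTIFIED WITH mysql_native_password BY '<new password>';"

If CURRENT_USER() comes back as 'della'@'%' after the password reset, the account matching was fine all along and the remote login from the laptop should start working too.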
Della (73 rep)
Nov 22, 2022, 08:53 AM • Last activity: Nov 23, 2022, 01:17 AM
1 vote
2 answers
297 views
Wordpress UPDATE queries on MySQL database stuck
I have an **Amazon 24XL server**:

- 96 Cores
- 378 GB RAM
- Database size 5.7G
- Debian GNU/Linux 9 (stretch)
- PHP 7.3.16
- mysql Ver 15.1 Distrib 10.3.22-MariaDB

I have only one WordPress site where users read articles, and there are 4-5 small plugins. One plugin is used to add points and rewards to my subscribers' profiles. When there are around 2500 users on my site, 3000+ UPDATE queries run to update the table 'wp_custom_points_user'. The issue is that the queries get stuck in "updating" and I can't find a way to fix it. The server has plenty of RAM and CPU available, but because of the queries against the same table it gets stuck and causes 502s on my site. I am looking to optimize MySQL to handle concurrent update queries, as they are taking longer to respond and I suspect there are some locks. All tables are indexed and use InnoDB.

Here is the SHOW ENGINE INNODB STATUS;

    =====================================
    2020-04-06 13:45:46 0x7f934bf24700 INNODB MONITOR OUTPUT
    =====================================
    Per second averages calculated from the last 27 seconds
    -----------------
    BACKGROUND THREAD
    -----------------
    srv_master_thread loops: 8196 srv_active, 0 srv_shutdown, 0 srv_idle
    srv_master_thread log flush and writes: 8196
    ----------
    SEMAPHORES
    ----------
    OS WAIT ARRAY INFO: reservation count 15589494
    OS WAIT ARRAY INFO: signal count 41500023
    RW-shared spins 0, rounds 154525943, OS waits 1459106
    RW-excl spins 0, rounds 10328037, OS waits 40587
    RW-sx spins 71982, rounds 547220, OS waits 3916
    Spin rounds per wait: 154525943.00 RW-shared, 10328037.00 RW-excl, 7.60 RW-sx
    ------------
    TRANSACTIONS
    ------------
    Trx id counter 351880322
    Purge done for trx's n:o < 351880319 undo n:o < 0 state: running
    History list length 1
    ... truncated...
    mpact format; info bits 0
    0: len 4; hex 80000001; asc     ;;
    1: len 6; hex 000000000000; asc       ;;
    2: len 7; hex 80000000000000; asc        ;;
    3: len 4; hex 80004cd6; asc   L ;;
    4: len 4; hex 800186b0; asc     ;;
    ------------------
    ---TRANSACTION 351861705, ACTIVE 40 sec starting index read
    mysql tables in use 1, locked 1
    LOCK WAIT 2 lock struct(s), heap size 1136, 1 row lock(s)
    MySQL thread id 421774, OS thread handle 140270896170752, query id 8187691 localhost wpdatabayf Updating
    UPDATE wp_custom_points_user SET total_points = '18900' WHERE user_id = 188768
    ------- TRX HAS BEEN WAITING 40 SEC FOR THIS LOCK TO BE GRANTED:
    RECORD LOCKS space id 1468 page no 5 n bits 568 index PRIMARY of table wpdatabayf.wp_custom_points_user trx id 351861705 lock_mode X waiting
    Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 0
    0: len 4; hex 80000001; asc     ;;
    1: len 6; hex 000000000000; asc       ;;
    2: len 7; hex 80000000000000; asc        ;;
    3: len 4; hex 80004cd6; asc   L ;;
    4: len 4; hex 800186b0; asc     ;;
    ---BUFFER POOL 1
    . . . .
    ---BUFFER POOL 31
    Buffer pool size   8192
    Free buffers       1725
    Database pages     6096
    Old database pages 2253
    Modified db pages  183
    Percent of dirty pages(LRU & free pages): 2.340
    Max dirty pages percent: 75.000
    Pending reads 0
    Pending writes: LRU 0, flush list 0, single page 0
    Pages made young 0, not young 0
    0.00 youngs/s, 0.00 non-youngs/s
    Pages read 5594, created 502, written 12349
    0.00 reads/s, 0.00 creates/s, 0.74 writes/s
    Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
    Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
    LRU len: 6096, unzip_LRU len: 0
    I/O sum:cur, unzip sum:cur
    --------------
    ROW OPERATIONS
    --------------
    0 queries inside InnoDB, 0 queries in queue
    2 read views open inside InnoDB
    Process ID=72108, Main thread ID=140323093444352, state: sleeping
    Number of rows inserted 88380, updated 193228, deleted 34011, read 84305568368
    11.44 inserts/s, 21.74 updates/s, 0.00 deletes/s, 12469107.55 reads/s
    Number of system rows inserted 0, updated 0, deleted 0, read 0
    0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
    ----------------------------
    END OF INNODB MONITOR OUTPUT
    ============================

UPDATE
======

SHOW CREATE TABLE wp_custom_points_user;

    CREATE TABLE wp_custom_points_user (
      id int(11) NOT NULL AUTO_INCREMENT,
      total_points int(11) NOT NULL,
      user_id int(11) NOT NULL,
      PRIMARY KEY (id),
      UNIQUE KEY ixd_uc_tzs_wp_custom_points_user (user_id)
    ) ENGINE=InnoDB AUTO_INCREMENT=199180 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci

SHOW INDEX FROM wp_custom_points_user;

    +-----------------------+------------+----------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
    | Table                 | Non_unique | Key_name                         | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
    +-----------------------+------------+----------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
    | wp_custom_points_user | 0          | PRIMARY                          | 1            | id          | A         | 171334      | NULL     | NULL   |      | BTREE      |         |               |
    | wp_custom_points_user | 0          | ixd_uc_tzs_wp_custom_points_user | 1            | user_id     | A         | 171334      | NULL     | NULL   |      | BTREE      |         |               |
    +-----------------------+------------+----------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
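The status shows sessions queueing for an exclusive lock on the single PRIMARY row for that user_id, and the absolute-value form (SET total_points = '18900') implies each request first read the value in PHP, stretching the lock window across a read-modify-write. A relative, single-statement form keeps each row lock short and also covers first-time users; a sketch (the point delta of 100 is illustrative):

    mysql wpdatabayf -e "
    INSERT INTO wp_custom_points_user (user_id, total_points)
    VALUES (188768, 100)
    ON DUPLICATE KEY UPDATE total_points = total_points + VALUES(total_points);"

This works because of the UNIQUE key on user_id; contention on the hot row still serializes, but each statement holds its lock for microseconds instead of the duration of a PHP round trip.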
Naqi (11 rep)
Apr 8, 2020, 01:39 PM • Last activity: Oct 31, 2022, 02:04 PM
3 votes
1 answer
1505 views
Configuring slave from EC2 master to Amazon RDS slave
I am configuring replication from an EC2 master to an Amazon RDS instance. After starting the slave, I don't see any errors, but the slave I/O thread stays in the connecting state.

Master version: 5.6.23
Slave version: 5.6.19

### show slave status \G

    mysql> show slave status \G
    *************************** 1. row ***************************
                   Slave_IO_State: Connecting to master
                      Master_Host: XXXXXXXXXXX
                      Master_User: XXXXXX
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: cli-bin.000032
              Read_Master_Log_Pos: 713
                   Relay_Log_File: relaylog.000001
                    Relay_Log_Pos: 4
            Relay_Master_Log_File: cli-bin.000032
                 Slave_IO_Running: Connecting
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table: mysql.plugin,innodb_memcache.cache_policies,mysql.rds_sysinfo,mysql.rds_replication_status,mysql.rds_history,innodb_memcache.config_options
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 713
                  Relay_Log_Space: 618
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File:
               Master_SSL_CA_Path:
                  Master_SSL_Cert:
                Master_SSL_Cipher:
                   Master_SSL_Key:
            Seconds_Behind_Master: NULL
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error:
                   Last_SQL_Errno: 0
                   Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
                 Master_Server_Id: 0
                      Master_UUID:
                 Master_Info_File: mysql.slave_master_info
                        SQL_Delay: 0
              SQL_Remaining_Delay: NULL
          Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
               Master_Retry_Count: 86400
                      Master_Bind:
          Last_IO_Error_Timestamp:
         Last_SQL_Error_Timestamp:
                   Master_SSL_Crl:
               Master_SSL_Crlpath:
               Retrieved_Gtid_Set:
                Executed_Gtid_Set:
                    Auto_Position: 0
    1 row in set (0.00 sec)

### show global variables

    mysql> show global variables like '%old_passwords%';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | old_passwords | 0     |
    +---------------+-------+
    1 row in set (0.01 sec)

    mysql> show global variables like '%secure_auth%';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | secure_auth   | OFF   |
    +---------------+-------+
    1 row in set (0.00 sec)

The problem is that Slave_IO_State is "Connecting to master" and the slave I/O thread stays in the connecting state, so replication is not happening.
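Slave_IO_Running: Connecting with an empty Last_IO_Error usually means the RDS instance simply cannot reach the master's port 3306 (the EC2 security group must allow inbound 3306 from the RDS instance) or the replication user's host/password doesn't match. Note also that on RDS the external master is managed through stored procedures rather than CHANGE MASTER directly; a sketch, with host names, user, and password as placeholders and the file/position taken from the status above:

    # on the EC2 master: is the repl account defined for remote hosts?
    mysql -e "SELECT user, host FROM mysql.user WHERE user = 'repl';"

    # on the RDS instance:
    mysql -h <rds-endpoint> -u admin -p -e "
    CALL mysql.rds_set_external_master('<ec2-master-host>', 3306,
         'repl', '<password>', 'cli-bin.000032', 713, 0);
    CALL mysql.rds_start_replication;"

After a minute, SHOW SLAVE STATUS on RDS should report Slave_IO_Running: Yes; if it stays in Connecting, test the network path by connecting to the master as the repl user from another host in the RDS subnet.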
chandu lakkineni (31 rep)
Mar 18, 2015, 11:15 AM • Last activity: Aug 12, 2022, 10:00 PM