Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
910
views
Queries return ReadFailure exception, "Replica(s) failed to execute read"
I have a cluster of five Cassandra nodes. The Cassandra version used is
cassandra-3.11.3-1.noarch
The keyspace replication strategy is defined as follows:
CREATE KEYSPACE my_keyspace
WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '2'}
AND durable_writes = true;
Now running queries on the cluster, the following issue occurs:
> ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read]
> message="Operation failed - received 0 responses and 1 failures"
> info={'consistency': 'LOCAL_ONE', 'received_responses': 0, 'required_responses': 1, 'failures': 1}
> info={'failures': None, 'consistency': 'Not Set', 'required_responses': None, 'received_responses': None}
Does anyone know what is causing this? If more information is needed to debug this problem, please let me know!
**UPDATE 1**
root# awk '!/#/' /etc/cassandra/conf/cassandra-rackdc.properties
dc=datacenter1
rack=rack1
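A code=1300 ReadFailure (as opposed to a timeout) usually means a replica hit a hard error while executing the read, and the real cause lands in that replica's system.log. Below is a hedged triage sketch; the log path and the message patterns are assumptions for a package install of Cassandra 3.11 (tombstone overruns are a common culprit for this error):

```shell
# Scan a replica's Cassandra log for common causes of server-side read failures.
scan_cassandra_log() {
    local log=${1:-/var/log/cassandra/system.log}
    grep -E 'TombstoneOverwhelming|tombstone|Corrupt|ERROR' "$log" | tail -n 20
}
```

Run this on each of the five nodes around the time the query failed; whichever node logged the error is the replica that reported the failure.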
Valentin Bajrami
(111 rep)
Aug 22, 2018, 08:29 AM
• Last activity: Jul 30, 2025, 05:07 AM
0
votes
1
answers
174
views
MariaDB false size
I am using MariaDB on a CentOS 7.5 VM with a 150 GB disk.
The df command shows me 30 GB free. I have two large databases of about 20 GB each,
which sums to 40 GB. I have three smaller databases of, let's say, 20 GB more, so I have a total of at most 80 GB.
Where are the other 50 GB of my disk space?
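Before suspecting the databases themselves, it can help to see where the space actually sits. Here is a minimal sketch, assuming the default datadir /var/lib/mysql (check `datadir` in /etc/my.cnf). Note that ibdata1, ib_logfile* and binary logs live outside the per-database folders and often account for "missing" space:

```shell
# Sum the on-disk size of each database directory under the MariaDB datadir.
db_disk_usage() {
    local datadir=${1:-/var/lib/mysql}
    du -sk "$datadir"/*/ 2>/dev/null | sort -rn |
        awk '{printf "%8.1f MB  %s\n", $1/1024, $2}'
}
```

Comparing this output against SUM(data_length + index_length) from information_schema.TABLES shows how much of the gap is binlogs, redo logs, or free space trapped inside ibdata1.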
Christoforos
(149 rep)
Dec 16, 2021, 11:48 PM
• Last activity: Jun 30, 2025, 08:09 AM
2
votes
1
answers
3540
views
Automatic recovery of the failed postgresql master node is not working with pgpool II
I am new to PostgreSQL and Pgpool-II. I have configured PostgreSQL HA/load balancing using Pgpool-II and repmgr.
I followed the link to do the setup.
The setup consists of 3 nodes; the application and OS versions are as follows:
**OS version** => CentOS 6.8 (On all the 3 nodes)
**Pgpool node** => 192.168.0.4
**Postgresql Nodes**:
**node1** (Master in read-write mode) => 192.168.0.6
**node2** (Standby node in read only mode) => 192.168.0.7
**Pgpool II version** => pgpool-II version 3.5.0 (ekieboshi).
**Postgresql Version** => PostgreSQL 9.4.8
**Repmgr Version** => repmgr 3.1.3 (PostgreSQL 9.4.8)
**I have configured the Pgpool in Master-Slave mode using Streaming replication.**
The setup is as shown in the below image:
When I bring down the Master node(192.168.0.6), the automatic failover happens successfully and the Slave node(192.168.0.7) becomes the new Master node.
After failover, I have to recover the failed node(192.168.0.6) manually and sync it with the new Master node. Then register the node(192.168.0.6) as a new Standby node.
I want to automate the Recovery process of the failed node and add it to the cluster back.
The **pgpool.conf** file on the Pgpool node(192.168.0.4) contains parameter **recovery_1st_stage_command**. I have set the parameter **recovery_1st_stage_command = 'basebackup.sh'**. I have placed the script 'basebackup.sh' file on **both** the Postgresql nodes(192.168.0.6, 192.168.0.7) under the data directory **'/var/lib/pgsql/9.4/data'**. Also I have placed the script **'pgpool_remote_start'** on both the Postgresql nodes(192.168.0.6, 192.168.0.7) under the directory '/var/lib/pgsql/9.4/data'.
I also created the pgpool extensions **pgpool_recovery** and **pgpool_adm** on both database nodes.
After the failover is completed, 'basebackup.sh' is not executed automatically. I have to run the **'pcp_recovery_node'** command manually on the **Pgpool node (192.168.0.4)** to recover the failed node (192.168.0.6).
How can I automate the execution of the **pcp_recovery_node** command on the Pgpool node without any manual intervention?
Scripts used by me as follows:
basebackup.sh script
-------------
#!/bin/bash
# first stage recovery
# $1 datadir
# $2 desthost
# $3 destdir
#as I'm using repmgr it's not necessary for me to know datadir(master) $1
RECOVERY_NODE=$2
CLUSTER_PATH=$3
#repmgr needs to know the master's ip
MASTERNODE=$(/sbin/ifconfig eth0 | grep inet | awk '{print $2}' | sed 's/addr://')
cmd1=$(ssh postgres@$RECOVERY_NODE "repmgr -D $CLUSTER_PATH --force standby clone $MASTERNODE")
echo $cmd1
pgpool_remote_start script
-------------
#! /bin/sh
if [ $# -ne 2 ]
then
echo "pgpool_remote_start remote_host remote_datadir"
exit 1
fi
DEST=$1
DESTDIR=$2
PGCTL=/usr/pgsql-9.4/bin/pg_ctl
ssh -T $DEST $PGCTL -w -D $DESTDIR start < /dev/null 2>&1 | tee -a /tmp/pgpool_failover.log &
Help me with the procedure to automate the recovery of the failed node.
Also, is repmgr compulsory for failover, or can it be done without repmgr? If there is another failover method that does not use repmgr, please describe it along with its advantages and disadvantages compared to repmgr.
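For the automation itself, one hedged workaround (an assumption on my part, not a documented pgpool-II mechanism; host 192.168.0.4, pcp port 9898 and user pcp_admin are placeholders) is to invoke pcp_recovery_node from the tail of the failover script once the new primary is promoted, so recovery_1st_stage_command fires without manual intervention:

```shell
# Hypothetical helper to append to the pgpool failover/follow-up script.
# The echo makes this a dry run; drop it to actually trigger the recovery.
recover_failed_node() {
    local failed_node_id=$1
    # -w reads the pcp password from ~/.pcppass instead of prompting
    echo pcp_recovery_node -h 192.168.0.4 -p 9898 -U pcp_admin -w -n "$failed_node_id"
}
```

This assumes a ~/.pcppass file exists for the user running pgpool, since the script cannot answer a password prompt.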
yravi104
(21 rep)
Sep 8, 2016, 04:51 PM
• Last activity: Jun 4, 2025, 03:08 AM
2
votes
1
answers
252
views
Search and Replace in multiple MySQL databases
I have multiple MySQL databases (50+) and I want to replace a string in all of them. Is there any way to search and replace across all the databases?
The server runs CentOS and I have root SSH access.
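One common approach can be sketched as a dump, edit, reload pipeline per database. This is a hedged sketch, not a vetted tool: plain-text sed can corrupt serialized blobs (e.g. PHP-serialized data), so review the generated commands before executing any of them.

```shell
# Read database names on stdin and emit one dump-edit-reload command per
# database, skipping system schemas. Emits commands only; nothing is executed.
search_replace_cmds() {
    local old=$1 new=$2 db
    while read -r db; do
        case $db in information_schema|performance_schema|mysql|sys) continue ;; esac
        echo "mysqldump $db | sed 's/${old}/${new}/g' | mysql $db"
    done
}
# usage: mysql -N -B -e 'SHOW DATABASES' | search_replace_cmds oldstring newstring
```

Keeping the generation separate from the execution lets you inspect, trim, or back up before piping the result to sh.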
Zaheer Ahmad Khan
(21 rep)
Jan 28, 2016, 12:20 AM
• Last activity: May 30, 2025, 04:03 PM
0
votes
1
answers
254
views
Two data folders in my PostgreSQL setup
I am using CentOS 6 and installed PostgreSQL 9.4 using the following commands:
yum install postgresql postgresql-contrib postgresql-client pgadmin3
yum install postgresql-server
After that I verified the data folder path in my config file and with the "SHOW data_directory" command.
It shows
/var/lib/pgsql/9.4/data
but I also have another data folder at /var/lib/pgsql/data:
data path 1 --> /var/lib/pgsql/9.4/data
data path 2 --> /var/lib/pgsql/data
My question is: which is my original data folder?
And if my actual data folder is the one configured in the config file, what is the other data folder used for?
Dharani Dharan
(101 rep)
May 5, 2016, 11:23 AM
• Last activity: May 18, 2025, 10:02 PM
0
votes
1
answers
305
views
Can I copy installed mysql from ubuntu to centos7 directly without installation?
I'm trying to copy an installed MySQL from Ubuntu to CentOS 7 because I do not want to do a fresh installation on CentOS 7. This looks like it works, but I'm not sure whether there are any potential issues that I do not know about.
Has anyone tried this operation before?
The reason for copying is that I'm writing a one-click installation shell script covering several pieces of software, such as the JDK, MySQL, nginx, Elasticsearch, and some Java and Python code. Copying an existing MySQL is obviously the easiest way to finish the MySQL part of the script, since otherwise I would need to install MySQL and create the accounts and databases in the traditional way.
The steps for copying MySQL directly from Ubuntu to CentOS 7 are:
1. create the mysql user/group on CentOS 7
2. copy the MySQL installation directory from Ubuntu to the same directory on CentOS 7, such as /usr/local/mysql
3. copy the my.cnf file from Ubuntu to /etc on CentOS 7
4. copy mysql.server to /etc/init.d/mysql
5. grant execute permission to mysql.server and chown /usr/local/mysql to mysql:mysql
6. start/stop the MySQL service manually; it looks fine
_BTW_, my CentOS 7 machine is freshly installed, which means I have not installed any other software on it, and it cannot reach the internet, so I cannot install MySQL the apt or yum way. I would need to download the installation package from a website first, then install MySQL through rpm or tar in the traditional way.
What I want to say is that no matter whether it is the apt, yum, rpm, or tar way, it is complicated to write these steps into my one-click installation script, so I chose to copy directly. But I have some worries about it, since I do not know whether there are potential issues.
huangxy
(1 rep)
May 19, 2021, 02:01 AM
• Last activity: May 8, 2025, 11:41 PM
0
votes
1
answers
1120
views
MySQL replication stops everyday for no (obvious) reasons
I've got a problem with my MySQL master-slave replication.
It works just fine after I start it up, but around 2:30am the next day it stops. Monitoring shows that the slave starts continuously reading until it is manually (forcefully) stopped and restarted. Neither mysqld.log nor mysql_general.log shows any errors, and SHOW SLAVE STATUS \G is also clear of any error messages (it just shows Seconds_Behind_Master increasing, as expected).
The setup uses row-based replication and runs on MySQL Community Server 5.54.
I've checked all crontabs for any recurring events, but it's all clear. Unless there is another way for timed jobs to be triggered, I'm out of ideas here, to be honest.
Both master and slave are identical in terms of setup (both are CentOS 6 VMs with 4 cores and 7.5 GB RAM), and neither experiences any load peaks around the time the issue appears. The only other thing I noticed was that disk latency spiked as soon as the slave started reading, but since it seems proportional to the reads/s graph I'll attribute it to that.
Disk performance shouldn't be an issue either, since both are on dedicated storage systems and were (until 2 weeks ago) on the same storage system (an IBM V7000).
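For what it's worth, cron is not the only source of timed jobs: MySQL ships its own Event Scheduler, which never shows up in any crontab. It may be worth checking whether it is enabled and whether any events are defined:

```sql
-- Is the event scheduler enabled, and are any scheduled events defined?
SHOW VARIABLES LIKE 'event_scheduler';
SELECT EVENT_SCHEMA, EVENT_NAME, STATUS, INTERVAL_VALUE, INTERVAL_FIELD
FROM information_schema.EVENTS;
```

An event firing at 2:30am on the master would replicate its effects to the slave while leaving every crontab clean.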
Edit:
There are several indicators that replication stopped and is indeed not just lagging behind. First of all, the obvious increase in Seconds_Behind_Master; but there is also the lack of further entries in mysql_general.log and, more importantly (and maybe a little more subtly), during the day, after getting it running again, there is a constant amount of writes (visible in our monitoring graphs). At more or less precisely 2:30am this just blatantly stops and turns into reads (interestingly enough, the same happens on the master, though I haven't managed to find any good reason for that), and that's also when the last commit event is logged in mysql_general.log.
It also doesn't just start again with START SLAVE; instead the server has to be forcefully stopped, as MySQL sees the slave process as still running. After restarting the server it is also not budging unless replication is started with sql_slave_skip_counter=1. I know this is less than ideal, and regardless of what happens I'll have to do a data integrity test at some point to verify that it isn't just messing up somewhere internally (which I wouldn't be surprised by).
Slave_IO_State: Queueing master event to the relay log
Master_Host:
Master_User:
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000879
Read_Master_Log_Pos: 628745545
Relay_Log_File: mysqld-relay-bin.001364
Relay_Log_Pos: 443942
Relay_Master_Log_File: mysql-bin.000879
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 520752778
Relay_Log_Space: 108436866
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 21194
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Edit 2:
Alright, I checked the binary log with mysqlbinlog to see what the master tried to do before and at the statement where it seems to be stuck. This is the position where it stopped today: 757885512
# at 757885512
#170405 2:00:02 server id 1 end_log_pos 757885592 Query thread_id=13818268 exec_time=1921 error_code=0
SET TIMESTAMP=1491350402/*!*/;
BEGIN
/*!*/;
And then there's just a bunch of "# at *increasing position number*" lines.
I've checked it again today, this time including the statement leading up to the one I've posted above, and it just seems like a regular update to a session table that basically looks like this:
Update `myDB`.`session`
WHERE
@1=
@2=
@3=
*A few more lines like the one above with some being NULL*
SET
@1=
@2=
@3=
*All the numbers again apart from one that's being incremented by 1*
Judging by the data that I can see here, it just looks like some counter for a session, with the numbers in the above-mentioned fields being UserID, SessionID and so on.
Edit 3:
Here's the CREATE TABLE for one of the tables where the replication seems to have been stuck:
Table: session
Create Table: CREATE TABLE `session` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`session_id` varchar(256) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`current_page` int(11) DEFAULT NULL,
`last_reload` int(11) DEFAULT NULL,
`ip_address` varchar(45) DEFAULT NULL,
`s_nbHostsUp` int(11) DEFAULT NULL,
`s_nbHostsDown` int(11) DEFAULT NULL,
`s_nbHostsUnreachable` int(11) DEFAULT NULL,
`s_nbHostsPending` int(11) DEFAULT NULL,
`s_nbServicesOk` int(11) DEFAULT NULL,
`s_nbServicesWarning` int(11) DEFAULT NULL,
`s_nbServicesCritical` int(11) DEFAULT NULL,
`s_nbServicesPending` int(11) DEFAULT NULL,
`s_nbServicesUnknown` int(11) DEFAULT NULL,
`update_acl` enum('0','1') DEFAULT '0',
PRIMARY KEY (`id`),
KEY `session_id` (`session_id`(255)),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=13493 DEFAULT CHARSET=utf8
Also I have to add: I wrote below that the engine is MyISAM. It is actually a mix of InnoDB and MyISAM, with the MyISAM tables making up the most frequently changed part of the DB, including the biggest table.
Edit 4:
Today's entry where the system is stuck looks pretty much the same; it is again just after an update to the session table. However (maybe by pure accident), I stumbled upon something:
#170408 2:35:05 server id 1 end_log_pos 815569300 Query thread_id=38771 exec_time=10 error_code=0
SET TIMESTAMP=1491611705/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
BEGIN
/*!*/;
# at
*some more # at entries later*
#170408 2:35:05 server id 1 end_log_pos 815569365 Table_map: 'application_storage'.'data_bin' mapped to number 296
#170408 2:35:05 server id 1 end_log_pos 815570402 Write_rows: table id 296
#170408 2:35:05 server id 1 end_log_pos 815571439 Write_rows: table id 296
#170408 2:35:05 server id 1 end_log_pos 815572476 Write_rows: table id 296
Now the data_bin table is a huge MyISAM table, around 20-23 GB in size (huge compared to all the other tables, making up about 4/5 of the DB's overall size).
Edit 5:
Something else I noticed today when I logged into the DB to check the replication status: it seems MySQL was stopped at some point (or at least my session somehow timed out). I had left an SSH session open where I was already logged in, and when I executed SHOW SLAVE STATUS \G it told me the server had "gone away", i.e. it was stopped or crashed, but apparently also restarted, as it managed to reconnect. Interestingly, the MySQL error log doesn't show any crashes or restarts.
KeyMangler
(1 rep)
Mar 24, 2017, 12:35 PM
• Last activity: Mar 20, 2025, 03:07 PM
1
votes
1
answers
2775
views
How would I sync tables in two separate servers in mysql?
> DB: mysql Ver 15.1 Distrib 5.5.60-MariaDB, for Linux (x86_64) using
> readline 5.1
I have two separate servers running MariaDB 5.5, both in production.
Both setups are live, and users might access any setup they want.
I need to sync certain table columns from these two databases.
The tables I need synced are Userprofile and Devicedetails. They have a lastlogin and a lastused column respectively, and the row with the later date needs to replace the one in the other database.
Schema for Userprofile:
+-------------------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------------+------------------+------+-----+---------+----------------+
| ProfileId | int(10) unsigned | NO | PRI | NULL | auto_increment |
| Username | varchar(255) | NO | | NULL | |
| Disabled | int(11) | NO | | 0 | |
| Tokenhash | varchar(50) | NO | | 0 | |
| LastLoginTime | datetime | NO | | NULL | |
+-------------------------+------------------+------+-----+---------+----------------+
Schema for Deviceprofile:
+------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+-------+
| Userid | varchar(255) | NO | | NULL | |
| Registerdate | date | YES | | NULL | |
| Approved | int | YES | | NULL | |
| MACaddress | varchar(512) | NO | | NA | |
| Hostname | varchar(255) | NO | | NA | |
| Domain | varchar(255) | NO | | NA | |
+------------------+--------------+------+-----+---------+-------+
As a user logs into a server, their lastlogintime is updated. If they create a new secret, that also updates (it is only used for login; think something like a TOTP token). The same user could log in to the other server tomorrow, and they should be able to use their existing tokens.
Similarly, their device details are captured and Approved is set to 1 via an external process. This should also be carried over, in case the user accesses the other setup the next day.
If the data from the remote side is newer, it should replace the row in the current table.
I have looked into INSERT ... ON DUPLICATE KEY UPDATE (IODKU), but that requires a primary-key clash to update the row, and ProfileId is auto-increment, so it would just add rows to the table.
So far, I've exported the data I need into a CSV, and I'm stuck at how to check which side is newer: looping over the file and checking each individual entry works, but there must be a more efficient way of doing this.
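The comparison step over the two CSV exports can be sketched like this (assumptions: each line is `username,lastlogin`, and the timestamps are ISO-8601 so string comparison matches chronological order):

```shell
# For each user, decide which server holds the newer row.
newest_side() {
    # $1 = export from server A, $2 = export from server B
    awk -F, 'NR==FNR { a[$1] = $2; next }
             { print $1 "," ((!($1 in a) || $2 > a[$1]) ? "B" : "A") }' "$1" "$2"
}
```

Users present only in the A export are not reported by this sketch; a complete pass would iterate A's keys too, and the output could then drive per-row UPDATE statements against whichever side is stale.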
John Doe
(23 rep)
Nov 26, 2020, 09:53 AM
• Last activity: Mar 10, 2025, 06:00 PM
0
votes
1
answers
1846
views
Unable to import DMP file that is made in 11g2 to Oracle 18c
My OS = CentOS 7
Oracle = 18c XE
I am trying to import a DMP file that was exported from XE 11g2 into 18c XE. Here are the errors:

[root@linux gsw]# impdp user/pass remap_tablespace=SYSTEM:myUserName remap_tablespace=USERS:myUserName dumpfile=myUserName.dmp directory=myUserName
Import: Release 18.0.0.0.0 - Production on Thu Jun 4 01:09:34 2020
Version 18.4.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
ORA-31626: job does not exist
ORA-31633: unable to create master table "myUserName.SYS_IMPORT_FULL_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-01950: no privileges on tablespace 'USERS'
ORA-06512: at "SYS.KUPV$FT", line 1035
ORA-06512: at "SYS.KUPV$FT", line 1023

What I did is:

create TABLESPACE myUserName DATAFILE '/opt/oracle/oradata/XE/myUserName01.dbf' SIZE 10M AUTOEXTEND ON NEXT 512K MAXSIZE 250M;
ALTER DATABASE DATAFILE '/opt/oracle/oradata/XE/myUserName01.dbf' AUTOEXTEND ON MAXSIZE UNLIMITED;
SQL> ALTER USER myUserName quota unlimited on myUserName;
User altered.
SQL> create user myUserName identified by pass;
User created.
SQL> grant connect, resource to myUserName;
Grant succeeded.
SQL> grant read, write on directory myUserName to myUserName;
Grant succeeded.
SQL> grant create database link to myUserName;
Grant succeeded.
SQL> grant create table to myUserName;
Grant succeeded.

**UPDATE-1**

SQL> GRANT CREATE ANY TABLE TO myUserName;
Grant succeeded.
SQL> GRANT UNLIMITED TABLESPACE TO myUserName;
Grant succeeded.

Now the errors changed to:

Import: Release 18.0.0.0.0 - Production on Thu Jun 4 01:39:50 2020
Version 18.4.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39143: dump file "/opt/oracle/admin/XE/dpdump/myUserName/myUserName_110516.dmp" may be an original export dump file

Best Regards
user987376746090
(13 rep)
Jun 3, 2020, 11:22 PM
• Last activity: Dec 30, 2024, 08:03 AM
3
votes
1
answers
9915
views
Is there any way to pg_upgrade PostgreSQL 12 to 15 over network?
My situation is a little bit complicated.
I have an older version of PostgreSQL, 12.9, installed on an older CentOS Stream 8 machine. There are about five custom databases under its roof, say database1, ozssc, database5, owned by different roles to handle different business applications.
As our business application has been updated recently, we plan to update the database as well.
Another, newer machine was set up with CentOS Stream 9, and PostgreSQL 15.2 was installed on it.
First, we tried to dump all database schemas and data using the newer version's (15.2) pg_dump from the newer machine:
pg_dump -h 10.0.1.105 -p 5433 -U postgres -v -n '*' -N 'pg_toast' -N 'information_schema' -N 'pg_catalog' $DB -Ft -f ${bkp_path}/${DB}_schema_bkp_${date}.tar
This dumps 5 XXX.tar files.
Then I tried to restore them (using the same version, 15.2, of pg_restore) on the newer machine (CentOS Stream 9):
pg_restore -h 10.0.1.103 -p 5433 -U postgres -w -d $db_name $db_tar_file
Running those commands, the system responds with an error:
pg_restore: error: could not execute query: ERROR: unacceptable schema name "pg_temp_1"
DETAIL: The prefix "pg_" is reserved for system schemas.
Command was: CREATE SCHEMA pg_temp_1;
As a result, quite a lot of database settings are incorrect as well:
pg_restore could not set the correct database owner from the dumped SQL statements. For example, the SQL statements for database ozssc are:
CREATE DATABASE ozssc WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE_PROVIDER = libc LOCALE = 'en_AU.UTF-8';
ALTER DATABASE ozssc OWNER TO tomcat;
Those statements should change the owner of database ozssc to tomcat.
After restoration, I checked the owner of database ozssc; it is still postgres instead of tomcat.
Another significant error is that pg_dump does not seem to dump any extensions. For example, there are three other extensions, such as cutest, cube, and earth distance, in the original database ozssc, but I don't find anything about them in the dumped SQL statements.
As this operation (pg_dump/restore) failed, I looked into pg_upgrade, but unfortunately I found that pg_upgrade only supports same-host upgrades between different install directories.
I tried a workaround:
I logged in to the newer host (CentOS Stream 9, with PostgreSQL 15.2 installed) and mounted the remote file system (CentOS Stream 8, with PostgreSQL 12.9 installed) using fuse and fuse-sshfs:
sshfs postgres@10.0.1.105:/var/lib/pgsql/data /mntsshfs/pgsql/12/data
sshfs postgres@10.0.1.105:/usr/bin /mntsshfs/pgsql/12/bin
Then I ran the following command on my newer host:
pg_upgrade --old-datadir=/mntsshfs/pgsql/12/data --new-datadir=/usr/local/pgsql/data \
--old-bindir=/mntsshfs/pgsql/12/bin --new-bindir=/usr/local/pgsql/bin \
--old-options '-c config_file=/mntsshfs/pgsql/12/data/postgresql.conf' --new-options '-c config_file=/usr/local/pgsql/data/postgresql.conf' --check
The response is:
/mntsshfs/pgsql/12/bin/postgres: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
It seems /mntsshfs/pgsql/12/bin/postgres (the remotely mounted binary) is trying to find libssl.so.1.1 on the local machine, but no such version of libssl is installed on the newer host.
I believe this is quite a common use case when a company migrates between PostgreSQL versions over the network, with the older and newer PostgreSQL servers sitting on different machines.
Could anyone advise how we can use pg_upgrade to migrate PostgreSQL 12 to 15 over the network instead of on the same host?
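Since pg_upgrade requires both binary sets and both data directories on one host, the dump/restore route is the usual network fallback. A hedged sketch follows (a dry run: the function only prints the pipeline, and should be run from the new host so the new binaries do the dumping). Using pg_dumpall rather than pg_dump -n '*' matters here: it emits roles, CREATE DATABASE and ALTER DATABASE ... OWNER statements, and it applies no schema filter, which is likely why the earlier dumps lacked extensions, since pg_dump's -n option skips non-schema objects.

```shell
# Build the classic dump-over-the-network pipeline (host/port from the question).
migrate_cmd() {
    local src_host=$1 src_port=$2
    echo "pg_dumpall -h $src_host -p $src_port -U postgres --clean --if-exists | psql -U postgres -d postgres"
}
```

Once the printed command looks right, remove the quotes and run the pipeline itself as the postgres user on the new host.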
cidy.long
(153 rep)
Mar 2, 2023, 11:13 PM
• Last activity: Dec 26, 2024, 09:00 AM
1
votes
1
answers
1092
views
auto restart mariadb.service if oom-kill killed main process
I need some help figuring out why my MariaDB service isn't automatically restarting after crashing, likely due to insufficient memory on the server.
Essentially, my server operates at around 75%-90% memory utilisation all the time, with 8 GB of memory in total. Today, at around 8am, I observed mariadb.service going into the failed state. Moments before, I noticed that the server's memory consumption had been maxed out at around 98.5% for a few minutes.
Here's what I observed when running systemctl status mariadb.service:
× mariadb.service - MariaDB 10.11 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: disabled)
Active: failed (Result: oom-kill) since Thu 2024-03-28 04:00:36 EDT; 5min ago
Duration: 1d 1h 19min 29.156s
Docs: man:mariadbd(8)
https://mariadb.com/kb/en/library/systemd/
Process: 766 ExecStartPre=/usr/libexec/mariadb-check-socket (code=exited, status=0/SUCCESS)
Process: 825 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir mariadb.service (code=exited, status=0/SUCCESS)
Process: 874 ExecStart=/usr/libexec/mariadbd --basedir=/usr $MYSQLD_OPTS $_WSREP_NEW_CLUSTER (code=killed, signal=KILL)
Process: 1443 ExecStartPost=/usr/libexec/mariadb-check-upgrade (code=exited, status=0/SUCCESS)
Main PID: 874 (code=killed, signal=KILL)
Status: "Taking your SQL requests now..."
CPU: 2h 39min 52.288s
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade: 1. Back-up your data before with 'mariadb-upgrade'
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade: 2. Start the database daemon using 'systemctl start mariadb.service'
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade: 3. Run 'mariadb-upgrade' with a database user that has sufficient privileges
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade: Read more about 'mariadb-upgrade' usage at:
Mar 27 02:41:07 domain-monitor-2022 mariadb-check-upgrade: https://mariadb.com/kb/en/mysql_upgrade/
Mar 27 02:41:07 domain-monitor-2022 systemd: Started MariaDB 10.11 database server.
Mar 28 04:00:36 domain-monitor-2022 systemd: mariadb.service: A process of this unit has been killed by the OOM killer.
Mar 28 04:00:36 domain-monitor-2022 systemd: mariadb.service: Main process exited, code=killed, status=9/KILL
Mar 28 04:00:36 domain-monitor-2022 systemd: mariadb.service: Failed with result 'oom-kill'.
Mar 28 04:00:36 domain-monitor-2022 systemd: mariadb.service: Consumed 2h 39min 52.288s CPU time.
I then ran systemctl restart mariadb.service and it restarted without a problem, likely because the server's memory usage had come down a little.
However, it was down for a few minutes. Why didn't it automatically try to restart? And how can I make it automatically attempt a restart when a failure is observed?
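On the restart question: the stock mariadb.service unit typically ships without a Restart= directive, so after an oom-kill systemd simply leaves the unit in the failed state. A drop-in override can change that; the path below follows the standard systemd drop-in convention, so verify it against your distribution:

```ini
# /etc/systemd/system/mariadb.service.d/restart.conf
# Apply with: systemctl daemon-reload && systemctl restart mariadb
[Service]
Restart=on-failure
RestartSec=5s
```

An oom-kill counts as a failure, so Restart=on-failure covers this case. Note that restarting only masks the symptom; keeping the total memory footprint (notably innodb_buffer_pool_size) within RAM is the durable fix.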
I've since upgraded my server at a cost, from 8 GB to 16 GB of memory. The database has now been running for around an hour; here's the status output:
● mariadb.service - MariaDB 10.11 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: disabled)
Active: active (running) since Thu 2024-03-28 04:25:45 EDT; 29min ago
Docs: man:mariadbd(8)
https://mariadb.com/kb/en/library/systemd/
Process: 834 ExecStartPre=/usr/libexec/mariadb-check-socket (code=exited, status=0/SUCCESS)
Process: 887 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir mariadb.service (code=exited, status=0/SUCCESS)
Process: 1182 ExecStartPost=/usr/libexec/mariadb-check-upgrade (code=exited, status=0/SUCCESS)
Main PID: 946 (mariadbd)
Status: "Taking your SQL requests now..."
Tasks: 94 (limit: 100205)
Memory: 463.1M
CPU: 3min 6.222s
CGroup: /system.slice/mariadb.service
└─946 /usr/libexec/mariadbd --basedir=/usr
Mar 28 04:25:44 domain-monitor-2022 systemd: Starting MariaDB 10.11 database server...
Mar 28 04:25:44 domain-monitor-2022 mariadb-prepare-db-dir: Database MariaDB is probably initialized in /var/lib/mysql already, nothing is done.
Mar 28 04:25:44 domain-monitor-2022 mariadb-prepare-db-dir: If this is not the case, make sure the /var/lib/mysql is empty before running mariadb-prepare-db-dir.
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade: The datadir located at /var/lib/mysql needs to be upgraded using 'mariadb-upgrade' tool. This can be done using the fo>
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade: 1. Back-up your data before with 'mariadb-upgrade'
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade: 2. Start the database daemon using 'systemctl start mariadb.service'
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade: 3. Run 'mariadb-upgrade' with a database user that has sufficient privileges
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade: Read more about 'mariadb-upgrade' usage at:
Mar 28 04:25:45 domain-monitor-2022 mariadb-check-upgrade: https://mariadb.com/kb/en/mysql_upgrade/
Mar 28 04:25:45 domain-monitor-2022 systemd: Started MariaDB 10.11 database server.
So I'll conclude with what I need help with:
1. Why isn't mariadb.service automatically restarted?
2. It initially failed with result 'oom-kill'. Why?
Output from running: SELECT 'SLEEPING MDB Ram use', COUNT(*),SUM(time),SUM(memory_used),SUM(max_memory_used) FROM information_schema.processlist WHERE command="Sleep";
- COUNT(*): 63
- SUM(time): 341
- SUM(memory_used): 4947024
- SUM(max_memory_used): 9846392
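For the restart question: the packaged unit typically ships without a Restart= policy, and a systemd drop-in can add one. A minimal sketch, assuming the stock RHEL/Fedora mariadb.service unit (check `systemctl cat mariadb` for what your package actually sets):

```shell
# Sketch: add an override so systemd restarts mariadbd after a crash/OOM kill.
sudo mkdir -p /etc/systemd/system/mariadb.service.d
sudo tee /etc/systemd/system/mariadb.service.d/override.conf <<'EOF' >/dev/null
[Service]
Restart=on-failure
RestartSec=5s
EOF
sudo systemctl daemon-reload
sudo systemctl restart mariadb
```

Note this only treats the symptom: an OOM kill usually means innodb_buffer_pool_size plus per-connection buffers exceed available RAM, so a restart policy should be paired with reviewing those settings.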
Ryan H
(111 rep)
Mar 28, 2024, 08:58 AM
• Last activity: Dec 20, 2024, 01:00 AM
0
votes
1
answers
503
views
Why is there a big gap in the actual Table space? (calculated vs information_schema vs file size)
I have a InnoDB table with the following columns: `BIGINT` -> 8 bytes `BIGINT` -> 8 bytes `ENUM(0,1)` -> 1 byte `MEDIUMINT` -> 3 bytes `INT` -> 4 bytes = 24 bytes per row It contains 10454004 rows (based on `COUNT(*)`), so I calculate the Data size as **250MB**. This table's `PRIMARY` key is `BIGINT...
I have an InnoDB table with the following columns:
- `BIGINT` -> 8 bytes
- `BIGINT` -> 8 bytes
- `ENUM('0','1')` -> 1 byte
- `MEDIUMINT` -> 3 bytes
- `INT` -> 4 bytes
= 24 bytes per row
It contains 10454004 rows (based on COUNT(*)), so I calculate the data size as **250MB**.
This table's PRIMARY key is (`BIGINT`, `BIGINT`, `ENUM`), and it has an INDEX on the `INT` column.
Based on information_schema:
- DATA_LENGTH = **530.8MiB**
- INDEX_LENGTH = 272.0MiB
- DATA_FREE = 6.0MiB
- Total: 808.8MiB = **848 MB**
Also, AVG_ROW_LENGTH = 87 instead of 24.
And then I look at the actual .ibd file size, which is reported as 1451229184 bytes = **1451.23 MB**.
I understand there is fragmentation and sparse-file behavior involved (the records in this table are regularly expired, but the retention time was never larger than it currently is).
But I don't understand why I see three different sizes with such a big gap between each. It goes from 250MB of real calculable data to 530MiB of DATA_LENGTH, while DATA_FREE is only 6MiB. And the 848 MB reported by information_schema (which, per my understanding, includes free space as well) is still much lower than the actual .ibd file.
**From Comment**
CREATE TABLE `mytable` (
  `uid` bigint(18) unsigned NOT NULL,
  `tid` bigint(18) unsigned NOT NULL,
  `fst` enum('0','1') COLLATE utf8mb4_unicode_ci NOT NULL,
  `sn` mediumint(8) unsigned NOT NULL DEFAULT 1,
  `ls` int(10) unsigned NOT NULL,
  PRIMARY KEY (`uid`,`tid`,`fst`) USING BTREE,
  KEY `ls` (`ls`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
**What am I missing or not considering?**
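As a rough sanity check: the 24-byte figure ignores InnoDB's per-record overhead in the clustered index (roughly a 5-byte record header plus the hidden 6-byte DB_TRX_ID and 7-byte DB_ROLL_PTR columns), and AVG_ROW_LENGTH is DATA_LENGTH divided by the *estimated* TABLE_ROWS, not the exact COUNT(*). A back-of-the-envelope sketch using the numbers from the question (the overhead figures are typical for COMPACT/DYNAMIC rows, not exact):

```python
# Back-of-the-envelope reconciliation of the quoted sizes.
MIB = 1024 * 1024

rows = 10_454_004           # exact COUNT(*)
user_bytes = 24             # sum of the declared column widths
hidden = 5 + 6 + 7          # record header + DB_TRX_ID + DB_ROLL_PTR

naive_mb = rows * user_bytes / 1e6              # the ~250 MB estimate
with_overhead_mb = rows * (user_bytes + hidden) / 1e6

data_length = 530.8 * MIB                       # DATA_LENGTH from information_schema
true_avg = data_length / rows                   # bytes per row actually stored
est_table_rows = data_length / 87               # rows implied by AVG_ROW_LENGTH=87

print(round(naive_mb), round(with_overhead_mb), round(true_avg, 1),
      round(est_table_rows))
# → 251 439 53.2 6397519
```

The remaining gap up to the .ibd size would then come from the ~15/16 page fill factor, non-leaf pages, the secondary index, and space freed by row expiry that the file never returns to the OS.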
Nuno
(829 rep)
Mar 11, 2022, 12:01 AM
• Last activity: Sep 9, 2024, 02:01 PM
0
votes
2
answers
46
views
MySQL 5.1 Time format of the 6-digit string
What time format is the 6-digit string in? Or is a transaction ID of some sort? MySQL server is 5.1. Errors log ``` 240110 9:00:01 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction 240110 9:00:02 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to...
What time format is the 6-digit string in? Or is it a transaction ID of some sort? The MySQL server is 5.1.
Errors log
240110 9:00:01 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
240110 9:00:02 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
240124 9:00:04 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
240124 9:00:04 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
Slow queries log
# Time: 181018 12:24:24
# User@Host: dms[dms] @ [127.0.0.1]
# Query_time: 1.210543 Lock_time: 0.000051 Rows_sent: 0 Rows_examined: 30758
SET timestamp=1539825864;
--
# Time: 181018 12:24:40
# User@Host: dms[dms] @ [127.0.0.1]
# Query_time: 1.186729 Lock_time: 0.000044 Rows_sent: 0 Rows_examined: 122775
SET timestamp=1539825880;
--
# Time: 181018 12:25:10
# User@Host: dms[dms] @ [127.0.0.1]
# Query_time: 1.207168 Lock_time: 0.000047 Rows_sent: 0 Rows_examined: 122775
SET timestamp=1539825910;
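For what it's worth, the 6-digit prefix looks like a pre-5.7 YYMMDD log date rather than a transaction ID, and this can be cross-checked against the Unix epoch in the matching SET timestamp=... line. A small sketch (the clock time will differ by the server's local timezone):

```python
from datetime import datetime, timezone

# The "# Time: 181018 12:24:24" header pairs with "SET timestamp=1539825864;".
# Converting that epoch back to a date reproduces the 6-digit prefix,
# suggesting it is simply YYMMDD.
utc = datetime.fromtimestamp(1539825864, tz=timezone.utc)
print(utc.strftime("%y%m%d"))  # → 181018
```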
supmethods
(129 rep)
Jun 25, 2024, 02:21 AM
• Last activity: Jun 25, 2024, 05:35 PM
0
votes
0
answers
318
views
Pgbouncer: got packet 'E' from server when not linked
Does anyone knows what this error below means? We running `pgbouncer` 1.21 on centos using `transaction` pooling mode. 2024-04-29 15:00:02.378 CST [9680] WARNING S-0x197b028: db/user_a@10.0.0.1:5432 got packet 'E' from server when not linked 2024-04-29 15:00:02.379 CST [9680] LOG S-0x197b028: db/use...
Does anyone know what the error below means? We are running pgbouncer 1.21 on CentOS using transaction pooling mode.
2024-04-29 15:00:02.378 CST WARNING S-0x197b028: db/user_a@10.0.0.1:5432 got packet 'E' from server when not linked
2024-04-29 15:00:02.379 CST LOG S-0x197b028: db/user_a@10.0.0.1:5432 closing because: server conn crashed? (age=2409s)
We started seeing this warning after enabling min_pool_size.
goodfella
(595 rep)
Apr 29, 2024, 07:31 AM
0
votes
1
answers
133
views
MongoDB: What cause bytesRead in insert operation?
Recently we had high CPU/Memory and I/O usage on our MongoDB. While checking the logs all I found is some `insert` during this period. While inspecting logs I noticed most of the insert logs have `bytesRead` in the storage section. So I suspect this cause I/O then caching the data cause high memory....
Recently we had high CPU, memory, and I/O usage on our MongoDB. Checking the logs, all I found was some insert operations during this period. While inspecting the logs I noticed most of the insert entries have bytesRead in the storage section, so I suspect this caused the I/O, and then caching that data caused the high memory usage.
After the insert spike the I/O and CPU went down, but memory stayed the same until a restart resolved it.
Are these disk reads normal for insert operations? We are using **Mongo v4.0** with the WiredTiger storage engine in a CentOS 7 VM.
2024-02-14T23:39:44.533+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 34390, timeReadingMicros: 140837 } } 141ms
2024-02-14T23:40:16.785+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 24150, timeReadingMicros: 506594 } } 507ms
goodfella
(595 rep)
Feb 15, 2024, 09:09 AM
• Last activity: Feb 23, 2024, 07:29 PM
12
votes
4
answers
36154
views
postgresql-setup --initdb with custom data directory
I am trying to setup Postgres 12 on Centos 8. Postgres shall be installed in the default directory i.e. `/var/lib/pgsql`, however I want the data directory to be in `/data/pgsql` I want to use postgresql-setup as root, as I believe it will create systemd service files along with it, rather than usin...
I am trying to set up Postgres 12 on CentOS 8.
Postgres shall be installed in the default directory, i.e. /var/lib/pgsql, but I want the data directory to be in /data/pgsql.
I want to use postgresql-setup as root, as I believe it will create the systemd service files along the way, rather than using pg_ctl or running initdb as the postgres user.
However, if I try
$ postgresql-setup --initdb --pgdata=/data/pgsql/
I receive the following error:
postgresql-setup: unrecognized option '--pgdata=/data/pgsql'
FATAL: can't parse arguments
What is the best way to achieve this?
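One approach (a sketch, assuming the Fedora/RHEL postgresql-setup wrapper, which reads PGDATA from the systemd unit rather than from a command-line flag; the PGDG repo packages use postgresql-12-setup and postgresql-12.service instead, but the idea is the same): point the unit's Environment=PGDATA at the new location with a drop-in, then run the setup tool.

```shell
# Sketch: tell the unit (and therefore postgresql-setup) where the data dir is.
sudo mkdir -p /data/pgsql
sudo chown postgres:postgres /data/pgsql
sudo mkdir -p /etc/systemd/system/postgresql.service.d
sudo tee /etc/systemd/system/postgresql.service.d/override.conf <<'EOF' >/dev/null
[Service]
Environment=PGDATA=/data/pgsql
EOF
sudo systemctl daemon-reload
sudo postgresql-setup --initdb        # should now initialize /data/pgsql
sudo systemctl enable --now postgresql
```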
zaidwaqi
(221 rep)
May 28, 2021, 04:53 PM
• Last activity: Feb 22, 2024, 08:08 PM
0
votes
0
answers
187
views
Master-Slave replication in mysql error connecting to master 'replicator@xxx:3306' Can't connect to server on 'xxx' (111 "Connection refused")
Hi i am trying to make a connection between my Primary Server and backup server to have Master-slave mysql connection this is my Primary server ip = xxx and this is my Backup server ip = yyy both of servers are centos 7 and in master (primary) server mariadb version is: ```mysql Ver 15.1 Distrib 10....
Hi, I am trying to make a connection between my primary server and backup server to have master-slave MySQL replication.
My primary server IP is xxx, and my backup server IP is yyy.
Both servers are CentOS 7. On the master (primary) server the MariaDB version is:
Ver 15.1 Distrib 10.6.9-MariaDB, for Linux (x86_64) using readline 5.1
and on the backup (slave) server the version is:
Ver 15.1 Distrib 10.6.15-MariaDB, for Linux (x86_64) using readline 5.1
On the master server I made these changes to my.cnf:
[mysqld]
bind-address=0.0.0.0
server-id = 1
binlog-do-db=x_test
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
#log-error = /var/lib/mysql/mysql.err
master-info-file = /var/lib/mysql/mysql-master.info
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
log-bin = /var/lib/mysql/mysql-bin.log
local-infile = 0
innodb_file_per_table
max_allowed_packet = 256M
max_connections = 2000
tmpdir =/var/lib/mysqltmp
query_cache_size = 0M
join_buffer_size = 512K
tmp_table_size = 64M
max_heap_table_size = 64M
thread_cache_size = 4
innodb_buffer_pool_size = 4G
log-error = /var/log/mysqld.log
innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:15G
sql_mode=""
#server-id=1
#log_bin=mysql-bin
[mysqldump]
max_allowed_packet = 512M
#log-error=/var/log/mysql/mysql_error.log
[client-server]
#
# include *.cnf from the config directory
#
!includedir /etc/my.cnf.d
and I allowed my backup IP in the primary server's iptables to permit connections on port 3306:
sudo iptables -L -n | grep 3306
ACCEPT tcp -- [my backup ip] 0.0.0.0/0 tcp dpt:3306
I can even connect as the replicator user on my primary server from my backup server with this command:
mysql -h xxx -u replicator -p
and when I enter the password it connects successfully, so the problem is not the port or the network.
On the master server I executed these commands in mysql (logged in as the root user):
CREATE USER 'replicator'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
and then I ran:
MariaDB [(none)]> SHOW MASTER STATUS;
+------------------+----------+---------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+---------------+------------------+
| mysql-bin.000020 | 342 | x_test | |
+------------------+----------+---------------+------------------+
1 row in set (0.000 sec)
On my slave (backup) server I did the following. I changed /etc/my.cnf to:
[mysqld]
server-id = 2
replicate_do_db = x_test
then I restarted with:
restart mysqld
Then in mysql (as the root user) I executed this command:
CHANGE MASTER TO
MASTER_HOST='xxx', # The IP of the Master server
MASTER_USER='replicator',
MASTER_PASSWORD='password',
MASTER_LOG_FILE='mysql-bin.000020', # The file you noted earlier
MASTER_LOG_POS=342; # The position you noted earlier
after that:
START SLAVE;
Then when I executed:
SHOW SLAVE STATUS\G
the result was this:
*************************** 1. row ***************************
Slave_IO_State: Connecting to master
Master_Host: xxx
Master_User: replicator
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000020
Read_Master_Log_Pos: 342
Relay_Log_File: server-51-210-183-10-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000020
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Replicate_Do_DB: x_test
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 342
Relay_Log_Space: 256
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 2003
Last_IO_Error: error connecting to master 'replicator@xxx:3306' - retry-time: 60 maximum-retries: 100000 message: Can't connect to server on 'xxx' (111 "Connection refused")
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0
1 row in set (0.001 sec)
How can I solve this problem?
Thanks in advance.
Matthew
(1 rep)
Nov 4, 2023, 09:33 AM
-2
votes
1
answers
54
views
Database Select statment not tatking input from html input field
Traceback (most recent call last): > File "/home/will/.local/lib/python3.6/site-packages/flask/app.py", line 2073, in wsgi_app\ response = self.full_dispatch_request()\ File "/home/will/.local/lib/python3.6/site-packages/flask/app.py", line 1518, in full_dispatch_request\ rv = self.handle_user_excep...
Traceback (most recent call last):
> File "/home/will/.local/lib/python3.6/site-packages/flask/app.py", line 2073, in wsgi_app\
response = self.full_dispatch_request()\
File "/home/will/.local/lib/python3.6/site-packages/flask/app.py", line 1518, in full_dispatch_request\
rv = self.handle_user_exception(e)\
File "/home/will/.local/lib/python3.6/site-packages/flask/app.py", line 1516, in full_dispatch_request\
rv = self.dispatch_request()\
File "/home/will/.local/lib/python3.6/site-packages/flask/app.py", line 1502, in dispatch_request\
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)\
File "main.py", line 30, in do_admin_login
conn.execute(query)\
pyodbc.ProgrammingError: ('42S22', "[42S22] [ma-3.1.12][10.3.28-MariaDB]Unknown column 'tvandiver' in 'where clause' (1054) (SQLExecDirectW)")
This is the error I get when submitting the login form; 'tvandiver' is what was entered on the HTML page.
If I run the statement in the SQL console I get back a response of 1. When I run it from the HTML page through python3 it returns an error. I have tried wrapping the input from the page in str(), but it still won't accept the entered text. Also, if I run the same SELECT statement just in python3 it returns True, but not with the HTML input field's value.
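The "Unknown column 'tvandiver'" error is the classic symptom of building SQL by string concatenation without quoting: the unquoted value is parsed as an identifier, hence MariaDB's 1054 error. A parameterized query avoids this. A sketch using the stdlib sqlite3 driver to illustrate the principle (pyodbc uses the same '?' placeholder style; table and column names here are made up):

```python
import sqlite3

# Hypothetical users table, just to demonstrate placeholders.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("tvandiver", "secret"))

user, pw = "tvandiver", "secret"
# Broken pattern: f"... WHERE username = {user}" produces
#   WHERE username = tvandiver
# where tvandiver is read as a column name. Placeholders send it as a value:
row = conn.execute(
    "SELECT COUNT(*) FROM users WHERE username = ? AND password = ?",
    (user, pw),
).fetchone()
print(row[0])  # → 1
```

Besides fixing the error, placeholders also close the SQL-injection hole that the concatenation pattern opens in a login form.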
tvandiver
(1 rep)
May 9, 2023, 10:53 PM
• Last activity: May 14, 2023, 09:33 PM
1
votes
0
answers
23
views
Mysterious Increase in Server Load Every Hour with MySQL 5.7.4
I have a CentOS 7 server with MySQL 5.7.4 installed. The machine has 8 cores and 16 GB of RAM, and only the MySQL service is running. The database is not very large, and there are not many queries. Normally, the server load is around 0.x. However, recently, I observed that every hour, the server loa...
I have a CentOS 7 server with MySQL 5.7.4 installed. The machine has 8 cores and 16 GB of RAM, and only the MySQL service is running. The database is not very large, and there are not many queries. Normally, the server load is around 0.x. However, recently, I observed that every hour, the server load increases to about 10, and this lasts for about 10 minutes. The business department says that there are no scheduled tasks during this time, and during high load, the business system does not show any obvious problems.
MySQL uses InnoDB, and the pool size is set to 10 GB. When the load is high, I logged in to MySQL with the root account and ran 'show processlist', but there were only a few queries, and the QPS was only about a dozen on average according to Prometheus. Other metrics did not show any significant differences.
During this time, I used tools such as iostat/iotop to check for high I/O, but there were no significant differences compared to normal operation. Memory usage was also normal, with only 3-4 GB used out of 16 GB. The InnoDB pool hit rate and page free metrics were also normal. I enabled slow query logging with 'set global long_query_time=1', but there were no slow queries logged.
The only abnormality I could observe was that the 'r' value reported by vmstat was around 10 during high load. Other metrics showed no significant differences compared to normal operation.
Please advise, thanks!
normal state
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 5667436 5440 7967916 0 0 0 2 0 0 8 0 92 0 0
0 0 0 5667536 5440 7967916 0 0 0 0 3013 226 18 0 82 0 0
0 0 0 5667520 5440 7967916 0 0 0 0 2918 213 17 0 83 0 0
0 0 0 5667544 5440 7967916 0 0 0 0 3097 291 18 0 82 0 0
1 0 0 5667596 5440 7967916 0 0 0 0 3626 180 22 0 78 0 0
0 0 0 5667596 5440 7967916 0 0 0 0 3822 240 22 0 78 0 0
high load state
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
14 0 0 5666088 5440 7967860 0 0 0 2 0 0 8 0 92 0 0
12 0 0 5665908 5440 7967860 0 0 0 0 4612 340 24 0 76 0 0
10 0 0 5666032 5440 7967860 0 0 0 0 3055 318 17 0 83 0 0
Mr.spark
(23 rep)
Mar 21, 2023, 02:12 PM
• Last activity: May 5, 2023, 11:10 PM
1
votes
0
answers
232
views
Mariadb keeps crashing after sometime
I am getting a weird problem with Mariadb server. After starting the service after few mins the service will go down automatically. Upon checking status this is what I find. ``` mariadb.service - MariaDB 10.3 database server Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor pr...
I am having a weird problem with the MariaDB server. A few minutes after starting the service, it goes down automatically. Upon checking the status, this is what I find.
mariadb.service - MariaDB 10.3 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
Active: failed (Result: timeout) since Thu 2023-03-23 12:32:26 UTC; 19h ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 2260927 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
Process: 2262690 ExecStartPre=/usr/libexec/mysql-prepare-db-dir mariadb.service (code=exited, status=0/SUCCESS)
Process: 2262649 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
Main PID: 2262727
Status: "Waiting for page cleaner"
Tasks: 10 (limit: 11310)
Memory: 136.0M
CGroup: /system.slice/mariadb.service
├─2211841 /usr/libexec/mysqld --basedir=/usr
├─2218737 /usr/libexec/mysqld --basedir=/usr
├─2220075 /usr/libexec/mysqld --basedir=/usr
├─2222515 /usr/libexec/mysqld --basedir=/usr
└─2262727 /usr/libexec/mysqld --basedir=/usr
Mar 23 12:17:25 ip-172-x-x-x.ap-south-1.compute.internal systemd:
mariadb.service: Found left-over process 2220075 (mysqld) in control group while starting unit. Ignoring.
Mar 23 12:17:25 ip-172-x-x-x.ap-south-1.compute.internal systemd:
This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Mar 23 12:17:25 ip-172-x-x-x.ap-south-1.compute.internal systemd:
mariadb.service: Found left-over process 2222515 (mysqld) in control group while starting unit. Ignoring.
Mar 23 12:17:25 ip-172-x-x-x.ap-south-1.compute.internal systemd:
This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Mar 23 12:17:25 ip-172-x-x-x.ap-south-1.compute.internal mysqld:
2023-03-23 12:17:25 0 [Note] /usr/libexec/mysqld (mysqld 10.3.28-MariaDB) starting as process 2262727 ...
Mar 23 12:22:25 ip-172-x-x-x.ap-south-1.compute.internal systemd:
mariadb.service: start operation timed out. Terminating.
Mar 23 12:27:25 ip-172-x-x-x.ap-south-1.compute.internal systemd:
mariadb.service: State 'stop-sigterm' timed out. Skipping SIGKILL.
Mar 23 12:32:26 ip-172-x-x-x.ap-south-1.compute.internal systemd:
mariadb.service: State 'stop-final-sigterm' timed out. Skipping SIGKILL. Entering failed mode.
Mar 23 12:32:26 ip-172-x-x-x.ap-south-1.compute.internal systemd:
mariadb.service: Failed with result 'timeout'.
Mar 23 12:32:26 ip-172-x-x-x.ap-south-1.compute.internal systemd:
Failed to start MariaDB 10.3 database server.
Anybody got an idea how to fix this issue?
Hari
(11 rep)
Mar 24, 2023, 08:16 AM
• Last activity: Mar 27, 2023, 10:08 AM
Showing page 1 of 20 total questions