Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
139
views
Unable to Add New Database in CloudPanel VPS - Access denied for user 'root'@'localhost'
**Problem**
I'm facing an issue with my VPS running CloudPanel where I cannot add new databases due to MySQL root access being denied.
**Environment**
- CloudPanel on VPS
- MySQL Server
- Hosting Provider: Hostinger
**Error Message**
Access denied for user 'root'@'localhost'
**Steps I've Tried**
**1. Attempted direct MySQL login:**
mysql -u root -p
Result: Access denied error, tried multiple password combinations including:
- Root user password
- Hostinger password
- Default passwords (12345, admin, root, etc.)
- No password
**2. Attempted MySQL safe mode:**
sudo systemctl stop mysql
sudo mysqld_safe --skip-grant-tables &
Result: Error message:
MySQL cannot start in safe mode because the required directory for the UNIX socket file does not exist.
**3. Attempted to kill running processes:**
sudo kill -9
Result: No process found with specified PID
**4. Created missing directory and set permissions:**
sudo mkdir -p /var/run/mysqld
sudo chown -R mysql:mysql /var/run/mysqld
**5. Restarted MySQL in safe mode and verified process:**
sudo mysqld_safe --skip-grant-tables &
ps aux | grep mysql
Result: Process found running
**6. Attempted passwordless root login:**
mysql -u root
Result: Still encountering access denied error
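For reference, when the server really is running with --skip-grant-tables, a typical full reset sequence looks like this. This is only a sketch: the socket path assumes the directory created in step 4, the ALTER USER syntax assumes MySQL 5.7+, and the new password is a placeholder. Pointing the client at the socket explicitly also confirms you are talking to the skip-grant instance rather than one restarted by systemd:
sudo systemctl stop mysql
sudo mysqld_safe --skip-grant-tables --skip-networking &
mysql -u root --socket=/var/run/mysqld/mysqld.sock <<'SQL'
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewStrongPassword!';
FLUSH PRIVILEGES;
SQL
sudo systemctl restart mysql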
**Additional Context**
- Hostinger support was unable to resolve the issue
- CloudPanel tutorials show database creation through UI without command-line intervention
- Suspect issue might be related to CloudPanel's default MySQL user restrictions
*I have reached out to Hostinger support, but they were unable to provide a solution beyond what I have already attempted. Additionally, all the YouTube tutorials on setting up and creating a database in CloudPanel show a straightforward process with no access issues, as everything can be done directly from the UI without requiring any command-line input.
I suspect it might be related to CloudPanel’s default MySQL user restrictions or authentication settings, but I’m not sure how to proceed. Has anyone encountered a similar issue or knows how to regain root access?*
Joe
(1 rep)
Feb 2, 2025, 10:31 AM
• Last activity: Aug 5, 2025, 02:10 PM
0
votes
1
answers
191
views
Is it possible to allocate data free for certain table?
After deleting a large dataset (~50 GB) and optimizing the tables, some tables have a large data_free (~50 GB) while others have only a tiny data_free (~KB). As I understand it, data_free space is reused, which keeps ibdata from growing.
The situation is: my commonly used tables are growing really fast, but unfortunately the data_free is allocated to the rarely used tables. The disk space of ibdata keeps growing and the data_free is not reused.
The question is: can the data_free be allocated to the commonly used tables, so the ~50 GB of space can be reused, since storage is costly in the cloud?
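For reference, a quick way to see where the free space actually sits (a sketch, assuming InnoDB; data_free comes straight from information_schema):
SELECT table_schema, table_name,
       ROUND(data_free/1024/1024/1024, 2) AS data_free_gb
FROM information_schema.tables
WHERE engine = 'InnoDB'
ORDER BY data_free DESC
LIMIT 20;
If innodb_file_per_table is ON, the free space sits inside each table's own .ibd file and cannot be used by other tables; rebuilding a rarely used table (OPTIMIZE TABLE, which InnoDB implements as a table rebuild) returns that space to the OS so the growing tables can use it. Free space inside the shared ibdata file, by contrast, is reusable by any table stored there, but the file itself never shrinks.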
Ben
(143 rep)
Mar 15, 2018, 06:33 PM
• Last activity: Jun 19, 2025, 01:04 PM
0
votes
2
answers
253
views
What is cool-down time in Alibaba Cloud Auto Scaling?
I am using Auto Scaling on my Alibaba Cloud Elastic Compute Service instances. What is the cool-down time, and what are its implications?
Bhtrox
(9 rep)
Aug 26, 2018, 12:42 PM
• Last activity: May 19, 2025, 05:07 AM
0
votes
2
answers
297
views
How to install MongoDB database as a service in ESxi?
I want to know: is it possible to install MongoDB directly on ESXi, without a guest OS?
Please tell me how to set up a data-storage-as-a-service environment inside ESXi.
Sayed Uz Zaman
(101 rep)
Apr 26, 2017, 06:54 PM
• Last activity: May 13, 2025, 01:09 AM
0
votes
1
answers
74
views
I need to move from sqlite to a distributed setup. What are my options?
I have a SQLite DB that has grown to 30 GB, and I'm still pushing data to it every day. I have a couple of services that write to it using libsqlite3. Soon it will become too big to keep locally. What is the best way for me to migrate to a distributed setup? One way I can think of is sshfs, but that won't work if my machine is offline. A solution that caches the most recently used parts of my DB locally and keeps the rest somewhere else would be perfect. Most of the time I'll be using recently pushed data. I could start archiving years-old data, but accessing it would require manual work.
thewolf
(103 rep)
May 26, 2024, 01:27 PM
• Last activity: Jun 3, 2024, 09:19 PM
0
votes
0
answers
17
views
Zero Access Encryption on AWS, Azure or Google Cloud possible with MongoDB?
My question is the following: is it possible, in general, to store data on a cloud platform (specifically AWS, Azure, or Google Cloud) without the cloud provider itself being able to access the data at any time, similar to Proton's "Zero Access Encryption"?
I've seen that encryption at rest is possible with the associated Key Management Services, but the keys ultimately still reside under the control of the respective cloud provider, don't they?
Is it possible to avoid that?
User2363562
(1 rep)
Apr 8, 2024, 08:35 AM
0
votes
1
answers
28
views
MariaDB in the cloud....need some guidance
My plan is to move a MariaDB database into the cloud in Germany. The clients will connect to it from 2 different office locations over 1 Gbit/50 Mbit cable internet. Approximately 10 clients will use the cloud DB via Connector/NET. The database is currently 2 TB and stored in SQLite.
The DB provider must be in Germany.
Can anybody guide me in the right direction, please: a managed server or a VServer? Hardware configuration? Storage space? I want to avoid pitfalls.
Thank you in advance!
kongootto
(1 rep)
Jan 28, 2024, 01:58 PM
• Last activity: Jan 29, 2024, 06:15 AM
1
votes
1
answers
833
views
AWS iops vs disk iops?
Recently a DevOps guy from our company told me that our local MySQL stage environment is much faster than AWS, because the local disk has 10k IOPS but our AWS RDS instance has only 500 IOPS.
As far as I understand, the AWS IOPS figure states provisioned throughput capacity, not raw disk speed. Isn't the underlying SSD still capable of 10k IOPS or more? And considering that the load is less than 500 IOPS, might RDS be just as fast?
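One way to make the comparison concrete is to run the same benchmark on both environments, for example with fio on the local box (a sketch; the test file path, size, and the 16k block size, chosen to resemble InnoDB pages, are assumptions):
fio --name=randrw --filename=/var/lib/mysql-test/fio.bin --size=4G \
    --rw=randrw --bs=16k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
On RDS there is no host access, so a sysbench OLTP run against both MySQL instances is the closer apples-to-apples test; if the working set fits in the buffer pool, the provisioned IOPS number may matter far less than the raw figures suggest.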
ogbofjnr
(123 rep)
May 30, 2020, 08:29 AM
• Last activity: Dec 21, 2022, 03:09 AM
1
votes
3
answers
3391
views
Syncing data between onprem and Azure SQL
I have the following
**Scenario:** an on-premise SQL Server 2008 R2 and an app using it as its DB. I want to create an Azure SQL database that will be synced (**asynchronous replication**) with the on-premise instance. I do not want just a one-time migration like you can get with the SQL Database Migration Wizard (if I understand it correctly).
**Goal:** I need to have the on-premise instance synced, let's say via daily job that will transfer the data, with my Azure SQL database.
Is it necessary to set up **Azure Data Factory** for this? What about **Data Sync**? I can see that it is not available on the "new portal". Any advice on how to approach this in a simple way would be great. Thanks!
EDIT: So, I have looked into using transactional replication as Kin suggests. However, it supports only versions newer than SQL Server 2008. Any other ideas on how to cope with this? Thanks!
JS_Diver
(165 rep)
Apr 3, 2017, 01:30 PM
• Last activity: Sep 12, 2022, 07:56 PM
0
votes
2
answers
189
views
Fully qualified or synonym for table in another database, cloud migration plan
Scenario:
We are on SQL Server on-prem and plan to go to the cloud in a year or two.
I am writing a stored procedure in database A and I need to access a table from database B.
Is it a best practice to create a synonym that points to the table in database B or just fully qualify the table in the package?
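For illustration, the synonym approach keeps the cross-database reference in one place (a sketch; the database, schema, and table names are placeholders):
USE DatabaseA;
CREATE SYNONYM dbo.OrdersB FOR DatabaseB.dbo.Orders;
-- procedures reference dbo.OrdersB; after the migration only the synonym
-- definition changes, e.g. to a four-part name via a linked server:
-- DROP SYNONYM dbo.OrdersB;
-- CREATE SYNONYM dbo.OrdersB FOR [CloudServer].DatabaseB.dbo.Orders;
SELECT COUNT(*) AS order_count FROM dbo.OrdersB;
That single point of change is the usual argument for synonyms when a migration is planned.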
aimyee
(9 rep)
Jun 23, 2022, 03:41 AM
• Last activity: Jun 23, 2022, 04:58 PM
0
votes
1
answers
618
views
Galera Cluster Crashes together?
Hello, I'm currently having trouble with my Galera cluster. It was working fine earlier, but suddenly all of the nodes crashed. I have 3 local nodes and 1 node in the cloud.
During the time it went down the logs shows the following:
2021-09-23 8:24:24 1 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: 0
members(3):
0: 077cec27-1b75-11ec-842c-5f218e28b692, Strike
1: 25c6e10d-1b75-11ec-9ff7-de60adb87197, unspecified
2: 4e87b761-1b96-11ec-b561-272e3101cb38, Duel
=================================================
2021-09-23 8:24:24 1 [Note] WSREP: Non-primary view
2021-09-23 8:24:24 1 [Note] WSREP: Server status change connected -> connected
2021-09-23 8:24:24 1 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:24:24 1 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:24:25 0 [Note] WSREP: (077cec27-842c, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.10.10.10:4567 timed out, no messages seen in PT3S, socket stats: rtt: 1166000 rttvar: 583000 rto: 3498000 lost: 0 last_data_recv: 1835 cwnd: 10 last_queued_since: 1835197309 last_delivered_since: 1835197309 send_queue_length: 0 send_queue_bytes: 0
**This happened for ~90 retries**
2021-09-23 8:29:42 0 [Note] /usr/sbin/mariadbd (initiated by: unknown): Normal shutdown
2021-09-23 8:29:42 0 [Note] WSREP: Shutdown replication
2021-09-23 8:29:42 0 [Note] WSREP: Server status change connected -> disconnecting
2021-09-23 8:29:42 0 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:29:42 0 [Note] WSREP: Closing send monitor...
2021-09-23 8:29:42 0 [Note] WSREP: Closed send monitor.
2021-09-23 8:29:42 0 [Note] WSREP: gcomm: terminating thread
2021-09-23 8:29:42 0 [Note] WSREP: gcomm: joining thread
2021-09-23 8:29:42 0 [Note] WSREP: gcomm: closing backend
2021-09-23 8:29:42 0 [Note] WSREP: view(view_id(NON_PRIM,077cec27-842c,167) memb {
077cec27-842c,0
} joined {
} left {
} partitioned {
25c6e10d-9ff7,0
4e87b761-b561,0
ed270aba-aedd,0
})
2021-09-23 8:29:42 0 [Note] WSREP: PC protocol downgrade 1 -> 0
2021-09-23 8:29:42 0 [Note] WSREP: view((empty))
2021-09-23 8:29:42 0 [Note] WSREP: Deferred close timer started for socket with remote endpoint: tcp://10.10.10.20:50904
2021-09-23 8:29:42 0 [Note] WSREP: gcomm: closed
2021-09-23 8:29:42 0 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2021-09-23 8:29:42 0 [Note] WSREP: Flow-control interval: [16, 16]
2021-09-23 8:29:42 0 [Note] WSREP: Received NON-PRIMARY.
2021-09-23 8:29:42 0 [Note] WSREP: New SELF-LEAVE.
2021-09-23 8:29:42 0 [Note] WSREP: Flow-control interval: [0, 0]
2021-09-23 8:29:42 0 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2021-09-23 8:29:42 0 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 9842007)
2021-09-23 8:29:42 0 [Note] WSREP: RECV thread exiting 0: Success
2021-09-23 8:29:42 6 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: 0
members(1):
0: 077cec27-1b75-11ec-842c-5f218e28b692, Strike
=================================================
2021-09-23 8:29:42 6 [Note] WSREP: Non-primary view
2021-09-23 8:29:42 6 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:29:42 0 [Note] WSREP: recv_thread() joined.
2021-09-23 8:29:42 0 [Note] WSREP: Closing replication queue.
2021-09-23 8:29:42 0 [Note] WSREP: Closing slave action queue.
2021-09-23 8:29:42 6 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: yes
own_index: -1
members(0):
=================================================
Same happened to my 2nd Node:
2021-09-23 8:31:01 7 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: 0
members(1):
0: 25c6e10d-1b75-11ec-9ff7-de60adb87197, Aegis
=================================================
2021-09-23 8:31:01 7 [Note] WSREP: Non-primary view
2021-09-23 8:31:01 7 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:31:01 0 [Note] WSREP: recv_thread() joined.
2021-09-23 8:31:01 0 [Note] WSREP: Closing replication queue.
2021-09-23 8:31:01 0 [Note] WSREP: Closing slave action queue.
2021-09-23 8:31:01 7 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: yes
own_index: -1
members(0):
=================================================
And third Node:
2021-09-23 8:30:59 2 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: 0
members(1):
0: 4e87b761-1b96-11ec-b561-272e3101cb38, Duel
=================================================
2021-09-23 8:30:59 2 [Note] WSREP: Non-primary view
2021-09-23 8:30:59 2 [Note] WSREP: Server status change connected -> connected
2021-09-23 8:30:59 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:30:59 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:31:03 0 [Note] WSREP: (4e87b761-b561, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.10.10.10:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 2000000 lost: 1 last_data_recv: 385031380 cwnd: 1 last_queued_since: 385331379302597 last_delivered_since: 385331379302597 send_queue_length: 0 send_queue_bytes: 0
2021-09-23 8:31:04 0 [Note] WSREP: (4e87b761-b561, 'tcp://0.0.0.0:4567') reconnecting to ed270aba-aedd (tcp://10.10.10.10:4567), attempt 90
2021-09-23 8:31:05 0 [Note] WSREP: cleaning up 25c6e10d-9ff7 (tcp://10.10.10.20:4567)
2021-09-23 8:31:08 0 [Note] WSREP: (4e87b761-b561, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.10.10.41:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 385036380 cwnd: 1 last_queued_since: 385336379962010 last_delivered_since: 385336379962010 send_queue_length: 0 send_queue_bytes: 0
2021-09-23 8:31:49 2 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: yes
own_index: -1
members(0):
=================================================
As for my cloud node, this happened:
2021-09-23 8:29:07 0 [Note] WSREP: (ed270aba-aedd, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.10.10.30:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 684592357 cwnd: 1 last_queued_since: 684892356898817 last_delivered_since: 684892356898817 send_queue_length: 0 send_queue_bytes: 0
2021-09-23 8:29:07 0 [Note] WSREP: (ed270aba-aedd, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.10.10.40:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 684592358 cwnd: 1 last_queued_since: 684892357027577 last_delivered_since: 684892357027577 send_queue_length: 0 send_queue_bytes: 0
2021-09-23 8:29:09 0 [Note] WSREP: (ed270aba-aedd, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.10.10.20:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 684593857 cwnd: 1 last_queued_since: 684893856941066 last_delivered_since: 684893856941066 send_queue_length: 0 send_queue_bytes: 0
2021-09-23 8:32:28 0 [Note] /usr/sbin/mariadbd (initiated by: unknown): Normal shutdown
2021-09-23 8:32:28 0 [Note] WSREP: Shutdown replication
2021-09-23 8:32:28 0 [Note] WSREP: Server status change connected -> disconnecting
2021-09-23 8:32:28 0 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2021-09-23 8:32:28 0 [Note] WSREP: Closing send monitor...
2021-09-23 8:32:28 0 [Note] WSREP: Closed send monitor.
2021-09-23 8:32:28 0 [Note] WSREP: gcomm: terminating thread
2021-09-23 8:32:28 0 [Note] WSREP: gcomm: joining thread
2021-09-23 8:32:28 0 [Note] WSREP: gcomm: closing backend
2021-09-23 8:32:28 0 [Note] WSREP: PC protocol downgrade 1 -> 0
2021-09-23 8:32:28 0 [Note] WSREP: view((empty))
2021-09-23 8:32:28 0 [Note] WSREP: gcomm: closed
2021-09-23 8:32:28 0 [Note] WSREP: New SELF-LEAVE.
2021-09-23 8:32:28 0 [Note] WSREP: Flow-control interval: [0, 0]
2021-09-23 8:32:28 0 [Note] WSREP: Received SELF-LEAVE. Closing connection.
2021-09-23 8:32:28 0 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 9842007)
2021-09-23 8:32:28 0 [Note] WSREP: RECV thread exiting 0: Success
2021-09-23 8:32:28 0 [Note] WSREP: recv_thread() joined.
2021-09-23 8:32:28 0 [Note] WSREP: Closing replication queue.
2021-09-23 8:32:28 0 [Note] WSREP: Closing slave action queue.
2021-09-23 8:32:28 2 [Note] WSREP: ================================================
View:
id: d326832d-56e2-11eb-80c1-760504343273:9842007
status: non-primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: yes
own_index: -1
members(0):
=================================================
I have the following configurations:
Node 1:
[galera] # Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
#add your node ips here
wsrep_cluster_address="gcomm://strike,aegis,duel,clone"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#Cluster name
wsrep_cluster_name="cloud_test_cluster"
# Allow server to accept connections on all interfaces.
bind-address=0.0.0.0
# this server ip, change for each server
wsrep_node_address="strike"
# this server name, change for each server
wsrep_node_name="Strike"
wsrep_sst_method=rsync
wsrep_sst_donor="Aegis,Duel"
Node 2 is same as above but:
wsrep_sst_donor="Strike,Duel"
Node 3:
wsrep_sst_donor="Strike,Aegis"
Finally cloud:
wsrep_sst_donor="Duel,Aegis,Strike"
All of this happened at almost the same time. Did the nodes just lose connection to each other? Did losing the connection to 10.10.10.10 cause the crash? Why did the member count each node reported keep decreasing? This happened yesterday and today. I have had this set up since last weekend, and there were no problems until yesterday...
Can someone explain to me what happened?
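When a cluster partitions like this, the stock wsrep status variables on each surviving node are the first thing to check:
SHOW STATUS LIKE 'wsrep_cluster_status';   -- Primary vs non-Primary
SHOW STATUS LIKE 'wsrep_cluster_size';     -- how many members this node still sees
SHOW STATUS LIKE 'wsrep_last_committed';   -- compare across nodes before re-bootstrapping
A node in a non-Primary component refuses writes by design, which matches the views in the logs above; once every node is non-Primary, the cluster has to be re-bootstrapped from the most advanced node (on MariaDB, the galera_new_cluster helper).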
Glen G
(3 rep)
Sep 23, 2021, 06:50 AM
• Last activity: Sep 24, 2021, 12:10 AM
0
votes
1
answers
35
views
Replicating to 3 databases one is web server
I have a customer that is looking for a very unusual replication configuration. I am hoping for some insight on why this is a bad idea, or even the possibility that it can be done.
There are two systems where replication will be deployed: the Publisher/Distributor will replicate to a Subscriber on the local LAN. We also need to populate a cloud server for the web site.
**Question 1:** Could this subscriber become a publisher to another cloud-based subscriber? Before you point out that the original publisher could update both the local and the cloud subscribers: the customer doesn't want that workload on the publisher system.
**Question 2:** The customer suggested sending a backup of the local subscriber to the cloud and restoring it on the cloud DB server at least daily. I don't believe permissions are replicated, among other issues. Comments?
I could go on and on, but I will wait on any feedback before going further.
thanks
htm11h
(123 rep)
Aug 6, 2021, 08:05 PM
• Last activity: Aug 7, 2021, 05:21 AM
1
votes
1
answers
819
views
MongoDB deployment across cloud and On Premises
We are planning to implement MongoDB as a stretch cluster: a primary and a secondary in DC 1, two secondaries in DC 2, and an arbiter in the cloud (DC 3), where DC 1 and DC 2 are on-premise servers and DC 3 is in the cloud.
My questions are:
1) Should we keep the same OS in AWS as on our on-premise servers, or can we use Amazon Linux? In our case it is RHEL 7.2 on the on-premise servers.
2) If no, will there be a performance impact if we use different OSes between replica nodes?
3) If yes, do we need to do patch upgrades as frequently as we do on the on-prem servers?
4) How does failover work here with a write concern of majority? (See the sketch after this list.)
5) What are the other trade-offs in this approach, if any?
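For context on questions 4 and 5: with this topology the replica set has five voting members, so elections need 3 votes, but the arbiter stores no data and cannot acknowledge writes. A sketch of the initiation (hostnames and the set name are placeholders):
mongo --host dc1-a --eval '
rs.initiate({
  _id: "stretchRS",
  members: [
    { _id: 0, host: "dc1-a:27017", priority: 2 },
    { _id: 1, host: "dc1-b:27017" },
    { _id: 2, host: "dc2-a:27017" },
    { _id: 3, host: "dc2-b:27017" },
    { _id: 4, host: "dc3-arbiter:27017", arbiterOnly: true }
  ]
})'
The trade-off to note: if one data DC is lost, a primary can still be elected (3 of 5 votes remain), but w:majority needs acknowledgement from 3 data-bearing members and only 2 remain reachable, so majority writes can stall until the DC comes back. A data-bearing node in DC 3 instead of the arbiter avoids that.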
vignesh
(25 rep)
Oct 31, 2018, 05:14 PM
• Last activity: Jun 9, 2021, 03:04 AM
7
votes
1
answers
5129
views
Confused about load balancing and horizontal scaling in postgresql
Please correct me if I am wrong, but I understand that handling more requests and load by adding more machines, or balancing the load between multiple servers, is horizontal scaling. So, if I add more servers, how do I distribute the database? Do I create one database to hold the user records across multiple servers, or do I split the database too? What about database integrity? How do I synchronize it? Or else, what do I do? I am a newbie and really confused, but eager to learn. I would like to use Postgres for my project and would like to know some basic things before I start. I was thinking of using two small EC2 instances, but I got confused about the database. How do I go about creating the database? Do I need to go through sharding for this? What would be the best approach to horizontal scaling with Postgres? I would really appreciate it if you could explain this to me. Thank you!
**Edit:**
How do I load balance using multiple machines and manage the database?
I have an app where users can upload videos, which are converted to MP4 using Elastic Transcoder. There are about 10k users. So, how do I load balance using multiple machines and manage the database? What I want is load balancing for performance, and I have read in many posts that adding more machines can achieve it, so I thought of horizontal scaling. But since horizontal scaling is scary, how do I load balance and manage my database?
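As a starting point: Postgres read scaling is usually one primary for writes plus streaming-replication standbys for reads; sharding only becomes necessary when a single primary can no longer absorb the writes. A minimal sketch (hostname and role name are placeholders; syntax as of PostgreSQL 12+):
# on the primary, postgresql.conf
wal_level = replica
max_wal_senders = 5

# seed a standby from the primary; afterwards it streams WAL continuously
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/data -R -P
Integrity then stays simple: every standby replays the primary's WAL, so there is one consistent write path and any number of read-only copies to balance reads across.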
Benjamin Smith Max
(213 rep)
Feb 22, 2014, 11:24 PM
• Last activity: Mar 28, 2021, 03:34 PM
-2
votes
1
answers
56
views
What are good options for cloud storage of audit data? (written continuously, rarely queried)
*Cross-post from StackOverflow as I expect it may get closed there because S.O is so nitpicky...*
https://stackoverflow.com/questions/63432484/what-are-good-options-for-cloud-storage-of-audit-data-written-continuously-ra
I'm working on something which wants to store a lot of "audit record" style data. Such as "user created a thing", "user deleted a thing", "user had 27 things".
We want to store it in the cloud (AWS or possibly Azure). Ideally I'd like to use a cloud-native service such that I don't have to worry about things like scaling up server instances, sharding, etc.
This kind of data is mostly a continuous stream of writes, and every so often a big "report" style query where we summarize or count a large block of data over a given date range. We need to query individual records approximately never.
The go-to for "cloud database in AWS" seems to be DynamoDB; however, the pricing model and the articles I've been reading imply it is designed for a more transactional model. Writes are an order of magnitude more expensive than reads, which is the opposite of my use case.
Those of you with deep cloud knowledge - please: What service(s) would you use for storing this kind of data and running reports on it? Is DynamoDB a bad choice here?
Orion Edwards
(107 rep)
Aug 16, 2020, 02:14 AM
• Last activity: Aug 18, 2020, 05:17 AM
2
votes
1
answers
723
views
Is it possible to Increase storage performance in azure vm using disk striping?
Given the following Azure VM:
E8s_v3 (8 cpu, 64 GB RAM, 12800 IOPS, 192 MBps)
I added 2 SSD Premium drives, one for data (1024 GBs / 200 MBps / Host Caching enabled read-only) and one for Logs (256 GBs / 125 MBps / No Cache enabled)
When I perform a simple test with CrystalDiskMark I get 134 MBps for the data disk and 127 MBps for the log disk. I assume the data disk is not getting close to 192 MBps (the Azure VM limit) because of the enabled host caching.
**My question is: if I use disk striping and, instead of one 1 TB disk, add two 512 GB disks (150 MBps each), would that give me around ~300 MBps? Or is there simply no way for one volume (striped or not) inside the Azure VM to run faster than the limit set by the VM size, 192 MBps for our example VM model, the E8s_v3?**
**Any idea on how to improve storage performance?** The only way I see would be to increase the VM size and the disk size, but of course that would increase our cost a lot. On our local servers we have really fast SANs doing thousands of MBps, and it seems impossible to get those speeds in the cloud at a rational cost, if ever.
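On a Linux VM the striping itself is straightforward with LVM (a sketch; the device names are assumptions, and on Windows a Storage Spaces simple/striped space plays the same role):
sudo pvcreate /dev/sdc /dev/sdd
sudo vgcreate datavg /dev/sdc /dev/sdd
sudo lvcreate --type striped -i 2 -l 100%FREE -n datalv datavg
sudo mkfs.xfs /dev/datavg/datalv
The striped volume aggregates the per-disk limits, but the VM-level uncached throughput cap still applies, so two 150 MBps disks on an E8s_v3 would top out near the 192 MBps VM limit rather than ~300 MBps.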
Thanks!
Victor Barajas
(363 rep)
Feb 18, 2020, 10:03 PM
• Last activity: Jul 31, 2020, 09:02 AM
0
votes
1
answers
707
views
Running forms in Oracle Cloud Service
For a few years now I have worked on and extended an application built with Forms Builder 10g and running via Oracle Application Server 10g. The application is launched as:
http://portal.dev:7778/forms/frmservlet?config=db
Now we want to take a new step: migrating our forms to the cloud so they can be accessed from anywhere. My first task was to find a suitable resource among the Oracle services, and I found [Java Cloud Service - SaaS Extension](https://cloud.oracle.com/en_US/java/servicesaas/pricing):
> 1. This is the number of Oracle WebLogic Server Managed Server instances to which the applications are deployed (customer managed Java EE application container)
Preconfigured Oracle WebLogic Server 12c or 11g cluster configured with Oracle Database Cloud Service
2. This is the amount of RAM allocated to the Java heap for all of the service's Managed Servers combined to run the Oracle WebLogic Server Managed Servers and the objects consumed by your application code
So, I decided to try the free month there. The account was created successfully and I can navigate between all the services listed there. But the real question is: which of all those services do I need to open? My first impression is to start with "Java":
But I would like to take your advice, because I am not sure at all. Please, let me know if more details are needed.

manix
(385 rep)
Oct 26, 2017, 11:11 PM
• Last activity: May 23, 2020, 10:05 AM
-1
votes
1
answers
427
views
how to put encryption in place while creating db link between on premise db 12c and cloud db 11g
We need to create a DB link between our on-premise database (12c) and our medical database on Oracle Cloud (11g). Our main concern is encrypting the data while the transfer happens. We need a solution.
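For the in-transit concern, Oracle Native Network Encryption can be enabled in sqlnet.ora on both endpoints without application changes (a sketch; the algorithm choices are examples picked to work with both 11g and 12c):
# on the server side of the link
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)

# on the side that initiates the db link (acting as the client)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
With both sides set to "required", connections that cannot negotiate encryption are refused rather than silently sent in cleartext.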
sajad
(1 rep)
May 11, 2020, 07:26 AM
• Last activity: May 12, 2020, 06:47 AM
0
votes
0
answers
428
views
Exporting from Oracle Cloud, importing on local database
I am currently experimenting with Oracle databases.
I have a free tier at Oracle Cloud, with a free instance of Autonomous Database.
As I like to have my backups locally, I did an export, basically following this tutorial, but on a local VM.
Export works fine and I moved it to the Oracle object storage, with the script described in the blog post.
Next, I wanted to import it into a local Oracle database I have setup on a VM on my local PC. (Guest operating System: Oracle Linux, DB: Oracle 18 Enterprise)
How I do the export from Oracle cloud:
[root@oracle-linux-vm ~]# expdp admin@dbname_high TABLES=TABLE1 data_options=group_partition_table_data parallel=1 dumpfile=table.dmp
Export: Release 18.0.0.0.0 - Production on Thu Apr 2 08:05:19 2020
Version 18.5.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Starting "ADMIN"."SYS_EXPORT_TABLE_01": admin/********@dbname_high TABLES=TABLE1 data_options=group_partition_table_data parallel=1 dumpfile=table.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "ADMIN"."TABLE1" 5.078 KB 2 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "ADMIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ADMIN.SYS_EXPORT_TABLE_01 is:
/u03/dbfs/A0AF0C3CCE4D3568E0535A18000AD882/data/dpdump/table.dmp
Job "ADMIN"."SYS_EXPORT_TABLE_01" successfully completed at Thu Apr 2 06:07:37 2020 elapsed 0 00:02:06
[root@oracle-linux-vm ~]#
I download the file from Oracle object storage and move it to the DATA_PUMP_DIR Folder
[root@oracle-linux-db-ee ~]# ls -al /opt/oracle/admin/ORCLCDB/dpdump
total 172
drwxrwxrwx. 2 oracle oinstall 23 Apr 2 06:12 .
drwxr-x---. 6 oracle oinstall 64 Mar 31 06:41 ..
-rw-r--r--. 1 oracle oinstall 176128 Apr 2 06:10 table.dmp
[root@oracle-linux-db-ee ~]#
When trying to import the data, it always fails.
In my local DB/VM:
[root@oracle-linux-db-ee ~]# impdp admin/pwd4admin@ORCLCDB dumpfile=table.dmp logfile=1.log TABLES=TABLE1 DIRECTORY=DATA_PUMP_DIR
Import: Release 18.0.0.0.0 - Production on Thu Apr 2 05:31:46 2020
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
ORA-39002: invalid operation
Looking at the logfile, I can't find a hint as to what is going wrong:
[root@oracle-linux-db-ee ~]# cat /opt/oracle/admin/ORCLCDB/dpdump/1.log
;;;
Import: Release 18.0.0.0.0 - Production on Thu Apr 2 06:16:52 2020
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "ADMIN"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Job "ADMIN"."SYS_IMPORT_TABLE_01" successfully completed at Thu Apr 2 06:17:02 2020 elapsed 0 00:00:06
Job "ADMIN"."SYS_IMPORT_TABLE_01" successfully completed at Thu Apr 2 06:17:02 2020 elapsed 0 00:00:07
The schema "admin" is already created manually on my local db.
Any ideas what I am doing wrong?
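One thing worth trying, since a dump taken from Autonomous Database rarely imports one-to-one into a stock local database: add remapping and exclusions to the impdp call (a sketch; the DATA:USERS tablespace mapping is an assumption about the target database):
impdp admin/pwd4admin@ORCLCDB directory=DATA_PUMP_DIR dumpfile=table.dmp \
  tables=TABLE1 remap_tablespace=DATA:USERS exclude=statistics logfile=imp.log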
natenjo
(1 rep)
Apr 2, 2020, 06:20 AM
1
votes
1
answers
1124
views
High data traffic from MySQL server to web server
I have one web server running Magento Community Edition 1.7.0.2 and a separate machine with MySQL 5.5.42 on it.
Both machines are EC2 instances.
I am seeing huge unexpected data traffic from the MySQL server to the web server - approximately 400 Mbps for (mostly) an hour which completely occupies the CPU of the web server. There also seems to be a mismatch between the data reported by CloudWatch and IPTraf.
We're trying to diagnose the issue - what can I run which will help identify why this is happening?
Observations:
1. As per CloudWatch, the Magento webserver seems to receive 400MBps of data (at certain times) - screenshot 1
2. As per IPTraf, the Magento webserver seems to receive only 50MBps - screenshot 2
3. As per CloudWatch, the MySQL server seems to send 400MBps of data (at certain times) - screenshot 3
Screenshot 1
Screenshot 2
Screenshot 3
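To tie the bytes to the database side, the server's own counters are a low-impact check (standard status counters; sample twice and divide the delta by the interval):
SHOW GLOBAL STATUS LIKE 'Bytes_sent';
-- wait N seconds, sample again: (delta / N) = bytes per second sent to clients
SHOW GLOBAL STATUS LIKE 'Bytes_received';
If the delta matches CloudWatch, the traffic really is query results, and pt-query-digest over a tcpdump capture of port 3306 can then name the statements producing the large result sets.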
Others:
EDIT1
During the high data traffic to the webserver, the htop output is as follows (on the MySQL server):
(I think very temporarily it shows up as high CPU usage)
Mostly it's like this:
EDIT2:
top command output on MySQL server while high CPU load on web server
top - 00:08:54 up 28 days, 23:46, 2 users, load average: 0.07, 0.06, 0.07
Tasks: 97 total, 1 running, 96 sleeping, 0 stopped, 0 zombie
Cpu(s): 11.2%us, 1.2%sy, 0.0%ni, 86.8%id, 0.6%wa, 0.0%hi, 0.2%si, 0.1%st
Mem: 31420088k total, 30966400k used, 453688k free, 166384k buffers
Swap: 4194300k total, 61328k used, 4132972k free, 8084828k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13387 mysql 20 0 25.6g 21g 6884 S 17.9 70.5 263:34.95 mysqld
1 root 20 0 19596 1068 828 S 0.0 0.0 0:02.09 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:13.48 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root 20 0 0 0 0 S 0.0 0.0 20:42.18 rcu_sched
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root RT 0 0 0 0 S 0.0 0.0 0:00.54 migration/0
10 root RT 0 0 0 0 S 0.0 0.0 0:00.46 migration/1
11 root 20 0 0 0 0 S 0.0 0.0 0:44.45 ksoftirqd/1
13 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:0H
14 root RT 0 0 0 0 S 0.0 0.0 0:01.48 migration/2
15 root 20 0 0 0 0 S 0.0 0.0 0:44.13 ksoftirqd/2
17 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:0H
18 root RT 0 0 0 0 S 0.0 0.0 0:01.44 migration/3
19 root 20 0 0 0 0 S 0.0 0.0 0:12.68 ksoftirqd/3
21 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:0H
22 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper
23 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 netns
31 root 20 0 0 0 0 S 0.0 0.0 0:00.04 xenwatch
32 root 20 0 0 0 0 S 0.0 0.0 0:00.00 xenbus
130 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 writeback
133 root 25 5 0 0 0 S 0.0 0.0 0:00.00 ksmd
134 root 39 19 0 0 0 S 0.0 0.0 0:00.00 khugepaged
135 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kintegrityd
EDIT3
I ran the following (during the heavy traffic phase) to track the MySQL processes, but there was nothing amiss in the results. The queries mentioned were all "fine" - nothing which could explain this really odd behavior.
mysqladmin -u DATABASEUSER -p -hIPADDRESS -i 1 --verbose processlist > /tmp/mysql-processlist3.txt
Have a look at the initial output (which WAS during the heavy traffic phase). Edited to remove redundancy.
+--------+--------------+-------------------+---------------+---------+------+-------+-----------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+--------------+-------------------+---------------+---------+------+-------+-----------------------+
| 113267 | DATABASEUSER | IPADDRESS_1:55847 | DATABASE_NAME | Sleep | 12 | | |
| 113297 | DATABASEUSER | IPADDRESS_1:55942 | DATABASE_NAME | Sleep | 0 | | |
...
| 113310 | DATABASEUSER | IPADDRESS_1:55978 | DATABASE_NAME | Sleep | 0 | | |
| 113311 | DATABASEUSER | IPADDRESS_1:55982 | | Query | 0 | | show full processlist |
+--------+--------------+-------------------+---------------+---------+------+-------+-----------------------+
+--------+--------------+-------------------+---------------+---------+------+-------+-----------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+--------------+-------------------+---------------+---------+------+-------+-----------------------+
| 113267 | DATABASEUSER | IPADDRESS_1:55847 | DATABASE_NAME | Sleep | 13 | | |
| 113301 | DATABASEUSER | IPADDRESS_1:55952 | DATABASE_NAME | Sleep | 0 | | |
...
| 113310 | DATABASEUSER | IPADDRESS_1:55978 | DATABASE_NAME | Sleep | 0 | | |
| 113311 | DATABASEUSER | IPADDRESS_1:55982 | | Query | 0 | | show full processlist |
| 113312 | DATABASEUSER | IPADDRESS_1:55985 | DATABASE_NAME | Sleep | 0 | | |
...
+--------+--------------+-------------------+---------------+---------+------+-------+-----------------------+
...
9 such instances (as above)
...
+--------+--------------+-------------------+---------------+---------+------+----------+----------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+--------------+-------------------+---------------+---------+------+----------+----------------------------------------------------------------------+
| 113267 | DATABASEUSER | IPADDRESS_1:55847 | DATABASE_NAME | Sleep | 23 | | |
| 113311 | DATABASEUSER | IPADDRESS_1:55982 | | Query | 0 | | show full processlist |
| 113333 | DATABASEUSER | IPADDRESS_1:56047 | DATABASE_NAME | Sleep | 0 | | |
| 113334 | DATABASEUSER | IPADDRESS_1:56050 | DATABASE_NAME | Query | 0 | updating | DELETE FROM core_session WHERE (session_expires ... |
+--------+--------------+-------------------+---------------+---------+------+----------+----------------------------------------------------------------------+
EDIT4
A further top output fragment captured during the heavy traffic phase:
12198 root 20 0 117m 3356 2648 S 2.0 0.0 1:17.01 htop
17348 root 20 0 112m 7460 6368 S 1.7 0.0 0:00.05 sshd
32098 root 20 0 0 0 0 S 1.7 0.0 0:31.38 kworker/u16:0
6527 do_ssh_a 20 0 117m 3372 2524 S 1.3 0.0 41:25.85 htop
11820 do_ssh_a 20 0 117m 3728 2752 S 1.3 0.0 1:04.59 htop
13854 do_ssh_a 20 0 119m 5256 2616 S 1.3 0.0 0:49.03 htop
14045 do_ssh_a 20 0 117m 3356 2756 S 1.3 0.0 0:42.00 htop
17346 do_ssh_a 20 0 15268 2248 1872 R 0.7 0.0 0:00.02 top
7 root 20 0 0 0 0 S 0.3 0.0 8:01.96 rcu_sched
EDIT5
Full processlist output (which was referred to in EDIT3) is at: http://pastebin.com/7QBwnAW3
EDIT6
The difference in the data transfer speeds reported by iptraf and CloudWatch has been resolved. CloudWatch reports speeds per minute (as per email from AWS helpdesk inquiry), so:
As per CloudWatch,
we are consuming ~400 million (not mega) bytes per minute,
so 400/60 = 6.66 million bytes per second,
which is 6666 kilo bytes per second,
which at 8 bits per byte is: 53328 kilo bits per second.
siliconpi
(111 rep)
Dec 17, 2015, 01:37 PM
• Last activity: Jan 10, 2020, 07:01 AM