Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
4
views
Data duplication after Apache IoTDB pipe restart, how to maintain the existing sync progress?
When using Apache IoTDB's pipe feature to sync data to another Apache IoTDB instance, I encountered duplicate data transmission. The scenario is as follows:
I configured a Pipe from root.source to a remote IoTDB instance (root.target):
CREATE PIPESINK remote_sink AS IoTDB ('ip:6667', 'username', 'password');
CREATE PIPE source_to_target TO remote_sink FROM (SOURCE 'root.source.we4rwe.**') INTO (TARGET 'root.target.faknlv93.**');
START PIPE source_to_target;
The initial sync works fine, but when I manually restart the pipe (e.g., STOP PIPE source_to_target followed by START PIPE source_to_target), some historical data (e.g., already-synced root.source.d1.sensor data) is retransmitted, causing duplicates on the target.
The pipe status (SHOW PIPES) shows status=RUNNING with no error logs.
Is this the expected behavior of IoTDB Pipe? How can I avoid duplicate data transmission during a pipe restart? Are additional configurations (e.g., sync_progress or other parameters) required?
Hester Tso
(101 rep)
Aug 1, 2025, 08:12 AM
0
votes
1
answers
147
views
MySQL synchronization
I have a cloud-based MySQL instance (slave) and am planning a solution where a remote instance (master) will be based on a ship. The ship will go in and out of international waters and therefore will have an intermittent terrestrial connection. When there is no connection, data will be stored locally. When the vessel arrives in an area with a terrestrial connection, I want the vessel database to synchronize with the cloud-based instance.
This scenario will be duplicated with multiple vessels, with their own instances, and each vessel will need to send local changes to the cloud-based instance.
We have looked at third-party open-source tools like SymmetricDS and wanted to ask the experts about the preferred approach. Is there a best-in-breed third-party tool, or is the out-of-the-box MySQL solution sufficient? Thank you in advance.
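For reference, a minimal sketch of the out-of-the-box route (MySQL 5.7+ multi-source replication, configured on the cloud instance). Host names, credentials, and the channel name are placeholders; it assumes GTID-based replication is enabled on both sides and that the cloud replica can reach each vessel when a link is up. When a ship regains connectivity, its channel simply resumes from the last applied GTID.
```
-- Hypothetical sketch: one replication channel per vessel on the cloud
-- instance (requires master_info_repository = TABLE and
-- relay_log_info_repository = TABLE on MySQL 5.7).
CHANGE MASTER TO
    MASTER_HOST = 'vessel1.example.com',   -- placeholder host
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_AUTO_POSITION = 1               -- GTID-based, tolerates outages
    FOR CHANNEL 'vessel1';

START SLAVE FOR CHANNEL 'vessel1';

-- Check how far behind the channel is after the ship reconnects.
SHOW SLAVE STATUS FOR CHANNEL 'vessel1'\G
```
The main caveat is that the replica initiates the connection, so each vessel's master must be reachable from the cloud (e.g., over a VPN) whenever it has a link.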
Robert Forman
(1 rep)
Feb 14, 2019, 11:06 PM
• Last activity: Jul 16, 2025, 01:03 AM
0
votes
1
answers
191
views
How to use a MySQL database both remotely and locally
I cannot find the best solution for my problem.
I made a C++ application which runs on a Kiosk under Ubuntu that needs to store and retrieve data from a MySQL database and these same data need to be accessed from a remote web application.
These are the requirements:
- The kiosk is connected to the Internet, but we cannot assume that the connection is always available
- The kiosk always needs to be able to access the database, because users can access the kiosk services only after logging in (user data is stored in the database)
- The remote web application needs to insert or modify data stored in the database
At the moment, I'm using a local MySQL database installed on the kiosk (managed with phpMyAdmin), and the application directly accesses the local data. I use cron to upload the database once a day and then import it on my server so it can be accessed by the remote web application.
This is really a bad solution, so I would like to find another one.
What do you suggest?
I would like to have a database on my server and let the remote web application directly use it and receive updates from the Kiosk.
Marcus Barnet
(113 rep)
Nov 24, 2019, 06:04 PM
• Last activity: Jun 26, 2025, 01:00 PM
0
votes
1
answers
198
views
Keeping schemas in sync with a database?
I have been asked to work on a database that has a regrettable design and cannot be changed, so I would ask that we work within these parameters.
This is all in SQL Server 2019.
I have a SQL Server instance that has many databases, one for each field operation, typically named after the city and state. So imagine databases named like
New York City, NY
Princeton, NJ
...
and so on, for about 80 databases.
Now, I am being asked to keep these in sync with a SQL Server that is hosted on Azure, except there is only one database there and I am not allowed to create more to make a 1-1 relationship.
I am allowed to create schemas under that database, so I can have schemas with names that match the databases exactly. That is what my coworker started doing, so the Azure database looks like
Main Database:
Schemas:
-New York City, NY
-Princeton, NJ
....
What would be the easiest way to keep the schema versions of the databases on the Azure server in sync with the source databases? Right now my coworker has written a stored procedure that runs every X minutes and pushes the data, as a proof of concept on the first database/schema, but I wanted to know if there is any built-in way to do this.
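For context, a minimal sketch of what such a per-table push might look like, assuming a linked server named [AzureSync] pointing at the Azure database and a placeholder table Customers with a ModifiedDate column. MERGE cannot target a remote table, so the upsert is split into an UPDATE and an INSERT over the four-part name.
```
-- Hypothetical sketch of the kind of per-table push described above.
-- [AzureSync] is an assumed linked server; table and column names are
-- placeholders.
UPDATE remote
SET    remote.Name         = local.Name,
       remote.ModifiedDate = local.ModifiedDate
FROM   [AzureSync].[MainDatabase].[New York City, NY].Customers AS remote
JOIN   [New York City, NY].dbo.Customers AS local
       ON local.CustomerID = remote.CustomerID
WHERE  local.ModifiedDate > remote.ModifiedDate;

INSERT INTO [AzureSync].[MainDatabase].[New York City, NY].Customers
        (CustomerID, Name, ModifiedDate)
SELECT  local.CustomerID, local.Name, local.ModifiedDate
FROM    [New York City, NY].dbo.Customers AS local
WHERE   NOT EXISTS (SELECT 1
                    FROM  [AzureSync].[MainDatabase].[New York City, NY].Customers AS remote
                    WHERE remote.CustomerID = local.CustomerID);
```
Whether writes over the four-part name perform acceptably depends on the provider and network latency, so batching on a ModifiedDate watermark keeps each run small.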
Brian Karabinchak
(187 rep)
Apr 15, 2022, 05:11 PM
• Last activity: Jun 20, 2025, 01:02 PM
0
votes
1
answers
229
views
Sync different table structure in MySQL Database
I have two tables on two different database hosts: activity_table on **Host A** and presence_table on **Host B**. presence_table is used by a binary app whose source code I don't have access to, but I have full access to the database. The presence_table table has the following structure:
| column | type |
| -------- | -------------- |
| id | integer |
| finger_id | varchar |
| time_in | timestamp |
| time_out | timestamp |
| ... | ...|
**Example record**
| id | finger_id | time_in | time_out|
|-|-|-|-|
|1|12a766d|2022-08-01 12:32:01|2022-08-01 20:00:03|
The activity_table is used by an app which we are developing, with the following structure:
| column | type |
| -------- | -------------- |
| id | integer |
| employee_id | integer |
| description | text |
| activity_time | datetime |
| created_at | timestamp |
| updated_at | timestamp |
**Example Record**
|id|employee_id|description|activity_time|created_at|updated_at|
|-|-|-|-|-|-|
|1|21|Enter the office|2022-08-01 12:32:01|2022-08-01 12:32:02||
|2|21|Exit from office|2022-08-01 20:00:03|2022-08-01 12:32:02||
And of course I have an employee_finger_table to pair employees with fingerprints.
My question is: how can I automatically sync presence_table to activity_table, so that when there is a new or updated record in presence_table it is synced to activity_table as in the example above? Are there any tools or ideas to address this problem?
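A minimal sketch of one way to express the mapping, assuming presence_table is reachable from the same MySQL instance as activity_table (e.g. via a FEDERATED table or a periodic copy) and assuming employee_finger_table has the columns (employee_id, finger_id):
```
-- Each presence row is expanded into an "Enter" and an "Exit" activity
-- row. A UNIQUE KEY on (employee_id, activity_time) plus INSERT IGNORE
-- keeps repeated runs from creating duplicates. Column names for
-- employee_finger_table are assumptions.
INSERT IGNORE INTO activity_table (employee_id, description, activity_time, created_at)
SELECT ef.employee_id, 'Enter the office', p.time_in, NOW()
FROM   presence_table p
JOIN   employee_finger_table ef ON ef.finger_id = p.finger_id
WHERE  p.time_in IS NOT NULL
UNION ALL
SELECT ef.employee_id, 'Exit from office', p.time_out, NOW()
FROM   presence_table p
JOIN   employee_finger_table ef ON ef.finger_id = p.finger_id
WHERE  p.time_out IS NOT NULL;
```
Scheduling this with the MySQL event scheduler or cron (or firing it from an AFTER INSERT trigger on presence_table, if triggers are acceptable on that table) covers the "automatically" part.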
Abu Dawud
(1 rep)
Sep 5, 2022, 01:38 AM
• Last activity: Jun 13, 2025, 02:04 PM
0
votes
1
answers
211
views
ADD new node Replication in MongoDB
I ran into some trouble here and I don't know why.
I have one replica set with 3 members (1 primary, 2 secondaries), and I want to add another member (a secondary) to sync the data. The data is about 367 GB.
When the data sync finished, the new secondary shut itself down; when I start this node again, it deletes the data and starts the sync over.
I don't know why, please help me.
1. MongoDB version 3.3.10
2. The oplog on the primary is 9 GB
3. The disk has more than 654 GB, 38% used
4. I sent this command in the mongo shell:
rs.add({host:'172.16.30.123:27017',priority:0,votes:0})
rs.status()
{
"_id": 3,
"name": "172.16.30.123:27017",
"health": 1,
"state": 5,
"stateStr": "STARTUP2",
"uptime": 10029,
"optime": {
"ts": Timestamp(0, 0),
"t": NumberLong("-1")
},
"optimeDurable": {
"ts": Timestamp(0, 0),
"t": NumberLong("-1")
},
"optimeDate": ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate": ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat": ISODate("2018-05-10T12:28:40.256Z"),
"lastHeartbeatRecv": ISODate("2018-05-10T12:28:40.722Z"),
"pingMs": NumberLong("0"),
"syncingTo": "172.16.30.225:27017",
"configVersion": 9
},
Now you can see the node is syncing to 172.16.30.225.
Here is the node's log:
2018-05-10T17:01:14.247+0800 I REPL [InitialSyncInserters-0] starting to run synchronous task on runner.
2018-05-10T17:01:14.373+0800 I REPL [InitialSyncInserters-0] done running the synchronous task.
2018-05-10T17:01:14.458+0800 I REPL [InitialSyncInserters-0] starting to run synchronous task on runner.
2018-05-10T17:01:14.575+0800 I REPL [InitialSyncInserters-0] done running the synchronous task.
2018-05-10T17:01:14.575+0800 I REPL [InitialSyncInserters-0] starting to run synchronous task on runner.
2018-05-10T17:01:14.989+0800 I REPL [InitialSyncInserters-0] done running the synchronous task.
2018-05-10T17:01:14.989+0800 I REPL [replication-82] data clone finished, status: OK
2018-05-10T17:01:14.991+0800 F EXECUTOR [replication-82] Exception escaped task in thread pool replication
2018-05-10T17:01:14.991+0800 F - [replication-82] terminate() called. An exception is active; attempting to gather more information
2018-05-10T17:01:15.008+0800 F - [replication-82] DBException::toString(): 2 source in remote command request cannot be empty
Actual exception type: mongo::UserException
0x55d8112ca811 0x55d8112ca0d5 0x55d811d102a6 0x55d811d102f1 0x55d811249418 0x55d811249dd0 0x55d81124a979 0x55d811d2b040 0x7f04a287ce25 0x7f04a25aa34d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"55D80FE7F000","o":"144B811","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55D80FE7F000","o":"144B0D5"},{"b":"55D80FE7F000","o":"1E912A6","s":"_ZN10__cxxabiv111__terminateEPFvvE"},{"b":"55D80FE7F000","o":"1E912F1"},{"b":"55D80FE7F000","o":"13CA418","s":"_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockISt5mutexE"},{"b":"55D80FE7F000","o":"13CADD0","s":"_ZN5mongo10ThreadPool13_consumeTasksEv"},{"b":"55D80FE7F000","o":"13CB979","s":"_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE"},{"b":"55D80FE7F000","o":"1EAC040","s":"execute_native_thread_routine"},{"b":"7F04A2875000","o":"7E25"},{"b":"7F04A24B2000","o":"F834D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.3.10", "gitVersion" : "4d826acb5648a78d0af0fefac5abe6fbbe7c854a", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-693.17.1.el7.x86_64", "version" : "#1 SMP Thu Jan 25 20:13:58 UTC 2018", "machine" : "x86_64" }, "somap" : [ { "b" : "55D80FE7F000", "elfType" : 3, "buildId" : "76E66D90C81BC61AF236A1AF6A6F753332397346" }, { "b" : "7FFE6A6DB000", "elfType" : 3, "buildId" : "47E1DE363A68C3E5970550C87DAFA3CCF9713953" }, { "b" : "7F04A3816000", "path" : "/lib64/libssl.so.10", "elfType" : 3, "buildId" : "ED0AC7DEB91A242C194B3DEF27A215F41CE43116" }, { "b" : "7F04A33B5000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "BC0AE9CA0705BEC1F0C0375AAD839843BB219CB1" }, { "b" : "7F04A31AD000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "6D322588B36D2617C03C0F3B93677E62FCFFDA81" }, { "b" : "7F04A2FA9000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1E42EBFB272D37B726F457D6FE3C33D2B094BB69" }, { "b" : "7F04A2CA7000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "808BD35686C193F218A5AAAC6194C49214CFF379" }, { "b" : "7F04A2A91000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "3E85E6D20D2CE9CDAD535084BEA56620BAAD687C" }, { "b" : "7F04A2875000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "A48D21B2578A8381FBD8857802EAA660504248DC" }, { "b" : "7F04A24B2000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "95FF02A4BEBABC573C7827A66D447F7BABDDAA44" }, { "b" : "7F04A3A88000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "22FA66DA7D14C88BF36C69454A357E5F1DEFAE4E" }, { "b" : "7F04A2265000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "DA322D74F55A0C4293085371A8D0E94B5962F5E7" }, { "b" : "7F04A1F7D000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "B69E63024D408E400401EEA6815317BDA38FB7C2" }, { "b" : "7F04A1D79000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "A3832734347DCA522438308C9F08F45524C65C9B" }, { "b" : "7F04A1B46000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "A48639BF901DB554479BFAD114CB354CF63D7D6E" }, { "b" : "7F04A1930000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "EA8E45DC8E395CC5E26890470112D97A1F1E0B65" }, { "b" : "7F04A1722000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "6FDF5B013FD2739D304CFB9D723DCBC149EE03C9" }, { "b" : "7F04A151E000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "2E01D5AC08C1280D013AAB96B292AC58BC30A263" }, { "b" : "7F04A1304000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "FF4E72F4E574E143330FB3C66DB51613B0EC65EA" }, { "b" : "7F04A10DD000", "path" : "/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "A88379F56A51950A33198890D37F5F8AEE71F8B4" }, { "b" : 
"7F04A0E7B000", "path" : "/lib64/libpcre.so.1", "elfType" : 3, "buildId" : "9CA3D11F018BEEB719CDB34BE800BF1641350D0A" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55d8112ca811]
mongod(+0x144B0D5) [0x55d8112ca0d5]
mongod(_ZN10__cxxabiv111__terminateEPFvvE+0x6) [0x55d811d102a6]
mongod(+0x1E912F1) [0x55d811d102f1]
mongod(_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockISt5mutexE+0x3C8) [0x55d811249418]
mongod(_ZN5mongo10ThreadPool13_consumeTasksEv+0xC0) [0x55d811249dd0]
mongod(_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x149) [0x55d81124a979]
mongod(execute_native_thread_routine+0x20) [0x55d811d2b040]
libpthread.so.0(+0x7E25) [0x7f04a287ce25]
libc.so.6(clone+0x6D) [0x7f04a25aa34d]
----- END BACKTRACE -----
Actual exception type: mongo::UserException
In the end the mongod process shut down; when I start it again, it deletes the data and starts the sync again.
Here is the full output of rs.status():
{
"set": "rs0",
"date": ISODate("2018-05-11T00:53:19.313Z"),
"myState": 1,
"term": NumberLong("3"),
"heartbeatIntervalMillis": NumberLong("2000"),
"optimes": {
"lastCommittedOpTime": {
"ts": Timestamp(1525999999, 6),
"t": NumberLong("3")
},
"appliedOpTime": {
"ts": Timestamp(1525999999, 6),
"t": NumberLong("3")
},
"durableOpTime": {
"ts": Timestamp(1525999999, 5),
"t": NumberLong("3")
}
},
"members": [
{
"_id": 0,
"name": "172.16.30.223:27017",
"health": 1,
"state": 1,
"stateStr": "PRIMARY",
"uptime": 20032944,
"optime": {
"ts": Timestamp(1525999999, 6),
"t": NumberLong("3")
},
"optimeDate": ISODate("2018-05-11T00:53:19Z"),
"electionTime": Timestamp(1505967067, 1),
"electionDate": ISODate("2017-09-21T04:11:07Z"),
"configVersion": 9,
"self": true
},
{
"_id": 1,
"name": "172.16.30.224:27017",
"health": 1,
"state": 2,
"stateStr": "SECONDARY",
"uptime": 9748305,
"optime": {
"ts": Timestamp(1525999998, 23),
"t": NumberLong("3")
},
"optimeDurable": {
"ts": Timestamp(1525999998, 23),
"t": NumberLong("3")
},
"optimeDate": ISODate("2018-05-11T00:53:18Z"),
"optimeDurableDate": ISODate("2018-05-11T00:53:18Z"),
"lastHeartbeat": ISODate("2018-05-11T00:53:18.751Z"),
"lastHeartbeatRecv": ISODate("2018-05-11T00:53:18.697Z"),
"pingMs": NumberLong("0"),
"syncingTo": "172.16.30.223:27017",
"configVersion": 9
},
{
"_id": 2,
"name": "172.16.30.225:27017",
"health": 1,
"state": 2,
"stateStr": "SECONDARY",
"uptime": 20032938,
"optime": {
"ts": Timestamp(1525999998, 21),
"t": NumberLong("3")
},
"optimeDurable": {
"ts": Timestamp(1525999998, 21),
"t": NumberLong("3")
},
"optimeDate": ISODate("2018-05-11T00:53:18Z"),
"optimeDurableDate": ISODate("2018-05-11T00:53:18Z"),
"lastHeartbeat": ISODate("2018-05-11T00:53:18.751Z"),
"lastHeartbeatRecv": ISODate("2018-05-11T00:53:19.029Z"),
"pingMs": NumberLong("0"),
"syncingTo": "172.16.30.223:27017",
"configVersion": 9
},
{
"_id": 3,
"name": "172.16.30.123:27017",
"health": 0,
"state": 8,
"stateStr": "(not reachable/healthy)",
"uptime": 0,
"optime": {
"ts": Timestamp(0, 0),
"t": NumberLong("-1")
},
"optimeDurable": {
"ts": Timestamp(0, 0),
"t": NumberLong("-1")
},
"optimeDate": ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate": ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat": ISODate("2018-05-11T00:53:17.635Z"),
"lastHeartbeatRecv": ISODate("2018-05-10T15:24:23.323Z"),
"pingMs": NumberLong("0"),
"lastHeartbeatMessage": "Connection refused",
"configVersion": -1
},
{
"_id": 4,
"name": "172.16.30.127:27017",
"health": 0,
"state": 8,
"stateStr": "(not reachable/healthy)",
"uptime": 0,
"optime": {
"ts": Timestamp(0, 0),
"t": NumberLong("-1")
},
"optimeDurable": {
"ts": Timestamp(0, 0),
"t": NumberLong("-1")
},
"optimeDate": ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate": ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat": ISODate("2018-05-11T00:53:17.454Z"),
"lastHeartbeatRecv": ISODate("2018-05-10T09:01:26.996Z"),
"pingMs": NumberLong("0"),
"lastHeartbeatMessage": "Connection refused",
"configVersion": -1
}
],
"ok": 1
And here is rs.conf(). The situation is that two nodes finished the sync, but both of those nodes' mongod processes have died, and the primary shows the status above. Normally the status should go from STARTUP2 to SECONDARY.
{
"_id": "rs0",
"version": 9,
"protocolVersion": NumberLong("1"),
"members": [
{
"_id": 0,
"host": "172.16.30.223:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": {
},
"slaveDelay": NumberLong("0"),
"votes": 1
},
{
"_id": 1,
"host": "172.16.30.224:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": {
},
"slaveDelay": NumberLong("0"),
"votes": 1
},
{
"_id": 2,
"host": "172.16.30.225:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": {
},
"slaveDelay": NumberLong("0"),
"votes": 1
},
{
"_id": 3,
"host": "172.16.30.123:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 0,
"tags": {
},
"slaveDelay": NumberLong("0"),
"votes": 0
},
{
"_id": 4,
"host": "172.16.30.127:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 0,
"tags": {
},
"slaveDelay": NumberLong("0"),
"votes": 0
}
],
"settings": {
"chainingAllowed": true,
"heartbeatIntervalMillis": 2000,
"heartbeatTimeoutSecs": 10,
"electionTimeoutMillis": 10000,
"getLastErrorModes": {
},
"getLastErrorDefaults": {
"w": 1,
"wtimeout": 0
},
"replicaSetId": ObjectId("5994eb51712e4cd82e549341")
}
}
Here is the output of rs.printSlaveReplicationInfo():
localhost(mongod-3.3.10)[PRIMARY:rs0] admin> rs.printSlaveReplicationInfo()
source: 172.16.30.224:27017
syncedTo: Fri May 11 2018 09:37:13 GMT+0800 (CST)
1 secs (0 hrs) behind the primary
source: 172.16.30.225:27017
syncedTo: Fri May 11 2018 09:37:13 GMT+0800 (CST)
1 secs (0 hrs) behind the primary
source: 172.16.30.123:27017
syncedTo: Thu Jan 01 1970 08:00:00 GMT+0800 (CST)
1526002634 secs (423889.62 hrs) behind the primary
source: 172.16.30.127:27017
syncedTo: Thu Jan 01 1970 08:00:00 GMT+0800 (CST)
1526002634 secs (423889.62 hrs) behind the primary
localhost(mongod-3.3.10)[PRIMARY:rs0] admin>
And the oplog information:
localhost(mongod-3.3.10)[PRIMARY:rs0] local> show tables
me → 0.000MB / 0.016MB
oplog.rs → 9319.549MB / 3418.199MB
replset.election → 0.000MB / 0.035MB
startup_log → 0.009MB / 0.035MB
system.replset → 0.001MB / 0.035MB
localhost(mongod-3.3.10)[PRIMARY:rs0] local>
Tony Damon
(1 rep)
May 10, 2018, 01:02 PM
• Last activity: Jun 13, 2025, 11:05 AM
3
votes
1
answers
888
views
Best way to synchronize offline data to master when deleting items
I have an application that can work offline, as well as the same application via a web browser.
The web browser version is connected to the main DB, so inserts, updates and deletes are always live.
When the application is offline, you can insert and delete into a local SQLite-like store. A unique ID is created for any new item added in the application, and it can be identified as coming from the app by its UUID and the time it was created.
I have a scenario that is confusing me.
Let's say I am offline on the application: item 1 was created while online in the application and exists in the main DB; item 2 was created offline on the application and does not exist in the main DB.
Then item 1 is deleted from the web browser, so it is deleted from the main DB but still exists locally on the currently offline device.
When we go back online we need to sync with the information we have (PHP backend): item 1 should be deleted from the local app, as it was deleted from the main DB while we were offline; item 2 should be added to the main DB.
Additionally, when a sync is completed, a lastsync timestamp is stored locally and sent whenever we sync to the main DB.
Now the problem I have with this scenario is that there is no way to properly differentiate between whether we should insert into the master or delete from local. I'm wondering how I could overcome this.
One idea I have is to create a table of deleted UUIDs, populated when something is deleted directly on the main DB, so we can tell whether we should insert into the main DB or delete from local. Is this a good idea, and what are the disadvantages of having a table of deleted IDs?
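A minimal sketch of that tombstone idea, assuming the main DB is MySQL and a placeholder table items with a uuid column; the sync endpoint would compare against the lastsync timestamp the client sends:
```
-- Tombstone table: one row per item deleted directly on the main DB.
CREATE TABLE deleted_items (
    uuid       CHAR(36)  NOT NULL PRIMARY KEY,
    deleted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Record every delete on the main table (names are placeholders).
CREATE TRIGGER trg_items_delete
AFTER DELETE ON items
FOR EACH ROW
    INSERT INTO deleted_items (uuid) VALUES (OLD.uuid);

-- At sync time: anything listed here since the client's lastsync must
-- be deleted locally; anything the client sends that is not listed
-- here (and not already present) can be inserted into the main DB.
SELECT uuid FROM deleted_items
WHERE deleted_at > :lastsync;   -- :lastsync bound by the PHP backend
```
The main disadvantage is that the tombstone table grows without bound unless rows older than the oldest client's lastsync are periodically purged.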
Thanks for your time and help
flyingman
(31 rep)
Apr 2, 2022, 11:24 PM
• Last activity: Jun 13, 2025, 09:00 AM
5
votes
1
answers
2949
views
Setting up cloud database for syncing to local SQLite database
**Background**
I have a SQL Server database hosted on AWS RDS and there are web applications and WEB APIs that talk to the database. The database is a multi-tenant database and we are currently using SQL Server 2014 although we can upgrade if required.
A third party developed a local client application on our behalf which has its own SQLite database. This application is developed in Xamarin, so it runs on Windows, iOS and Android. The local SQLite database must be kept in sync with the cloud database. Syncing data up to the cloud database is not a problem, but syncing data down is causing us issues. Currently we sync data down to the local database by asking the WEB API, every minute, to return all changes that have occurred since a particular date. The cloud database has DateCreated, DateModified and DateDeleted columns in every table, and these columns are queried to see what data has changed since the last time the client synced data. The local application records the last successful sync date for each table.
**Problem**
This approach worked when there were few local clients and few tables to sync but as our client base has grown this approach doesn't seem scalable. We are running into performance issues on our cloud database and a lot of the time the sync-down tasks are cancelled due to timeouts or take ages to run. Our customers are complaining about the time it takes for changes they make on the cloud to sync down to the local application.
**Potential Solution**
Having researched various methods of tracking changes on SQL Server, I believe that using the built-in Change Tracking feature is a better approach than using the DateCreated, DateModified and DateDeleted columns for tracking changes. What I am not sure about is how best to set this up.
Things to consider:
- Not all columns on the cloud database tables need to sync to the local database - for example, TableA on the cloud database has 20 columns but its corresponding client TableA may only have 5
- Not all data relating to a tenant needs to sync to their local database - for example if a record is marked as "inactive" for that tenant it should never be synced locally
- A table on the local database may contain data from two or more tables on the cloud database
- Not all tenants have the local application yet but they will eventually (this may take a year or more to roll out)
What I am thinking of doing is as follows:
- Create a separate database in AWS RDS that exactly matches the local database
- Enable change tracking on this database rather than on the main database
- Use triggers to keep the main database in sync with the new database
- Query the change tracking tables on the new database and return the changes to the local application
- Create a new table to track if data has changed or not for each tenant and table - this way we won't need to query the change tracking tables each minute only to find that nothing has changed
The reason for the second database is to reduce the strain on the main database when clients are trying to sync data down and also keeping the schemas in sync reduces the complexity on the queries when a client requests to sync changes. For example, if a record is marked as "inactive" for the tenant in the main database, but that record has been changed, I don't want to have to filter this record out when the client requests to sync the data down. I would prefer to already have those records filtered out so that they would never exist in the second database at all. Hope that makes sense!
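For reference, a minimal sketch of the built-in Change Tracking feature applied to the proposed second database; [SyncDB], dbo.TableA, and the Id/Col1/Col2 columns are placeholders:
```
-- Enable change tracking on the database and on each table to sync.
ALTER DATABASE [SyncDB]
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.TableA
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);

-- Each client stores the version it last synced to and passes it back.
DECLARE @last_synced_version BIGINT = 0;   -- supplied by the client
DECLARE @current_version     BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

SELECT ct.SYS_CHANGE_OPERATION,   -- I / U / D
       ct.Id,                     -- primary key of the changed row
       t.Col1, t.Col2
FROM   CHANGETABLE(CHANGES dbo.TableA, @last_synced_version) AS ct
LEFT JOIN dbo.TableA AS t ON t.Id = ct.Id;
-- The client then records @current_version as its new watermark.
```
Change Tracking only reports which rows changed (by primary key) and the type of operation, not the old values, which matches the "return the current row, or a delete marker" style of sync described above.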
I would very much value your feedback on this approach and please feel free to suggest better ways of doing it. If there is something that is not clear please let me know and I'll update the question!
keithm
(151 rep)
Oct 26, 2020, 10:56 AM
• Last activity: May 31, 2025, 02:04 PM
2
votes
2
answers
268
views
SQL Server 2012 AlwaysOn Availability Group
We have an application that needs access to the database at all times to work properly and we are planning on deploying SQL Server 2012 AlwaysOn Availability Groups on Windows Server 2012.
We have two geographically separated data centers and we plan to keep one DB server in DC1 and the other in DC2.
All the information that I've seen shows a local synchronous copy and an asynchronous copy in another data center. I wanted to know if there is a way to configure the availability group with just two SQL servers which are separated geographically and can support automatic failover.
user72471
(21 rep)
Aug 10, 2015, 08:20 AM
• Last activity: May 16, 2025, 04:03 PM
0
votes
1
answers
299
views
SQL transaction: Delete between two selects possible?
I have two SELECT statements within a transaction (repeatable read)
SELECT @firstItem = id FROM myTable WHERE ....
-- Do some more magic, so I can't concatenate the two queries!
and
SELECT * FROM myTable WHERE parent = @firstItem
Is it possible that the children (second query) of the parent (first query) are deleted (by another transaction) after the initial/first select?
How can I prevent this with locks?
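A minimal sketch (T-SQL, assuming SQL Server): REPEATABLE READ only locks rows that have already been read, so the child rows are unprotected until the second SELECT touches them. Taking a range lock on the children right after the first SELECT blocks a concurrent DELETE until the transaction commits; the WHERE predicate below is a placeholder.
```
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;

DECLARE @firstItem INT;
SELECT @firstItem = id FROM myTable WHERE name = N'example';   -- placeholder predicate

-- UPDLOCK + HOLDLOCK takes key-range locks on the child rows and holds
-- them to the end of the transaction, so another session cannot delete
-- (or insert) matching children in the meantime.
SELECT id FROM myTable WITH (UPDLOCK, HOLDLOCK) WHERE parent = @firstItem;

-- ... the "more magic" goes here ...

SELECT * FROM myTable WHERE parent = @firstItem;

COMMIT;
```
Note that simply raising the isolation level does not help here, because no lock exists on the child rows until something actually reads them.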
CSharpProblemGenerator
(1 rep)
Sep 18, 2019, 09:09 AM
• Last activity: May 9, 2025, 04:03 PM
0
votes
1
answers
369
views
Synchronizing two databases on different servers
I have two databases on different servers. Each one of them contains 350+ tables and users have read and write access on both databases.
I want to synchronize them, but in special cases, like if a specific record is updated on both databases, I have to be able to prioritize one of updates over the other.
One of the databases is always online and the other one is never online and only has access to the organization's local network. Synchronization happens every day at a specific time by downloading all files from the internet and uploading them to the local network, and the reverse.
I have considered merge replication and trigger-based replication, but I'm open to any other solutions.
I can't choose the best solution for this task, so I would really appreciate your advice.
hanie
(101 rep)
Mar 9, 2024, 08:18 AM
• Last activity: May 9, 2025, 11:01 AM
0
votes
1
answers
933
views
Add a new node to Mongo Replicaset using Rsync
I have a 3 shard mongo cluster. Each shard has a 3 node replica set. On each shard, the data size is 3 TB.
I want to add 1 more node to each replica set, but it's taking a lot of time. The initial sync is very slow: it synced 130 GB over a whole day.
My servers have 16GB network speed and very high IOPS SSD disks (everything is on GCP).
So I tried to set up the new node with Rsync.
1. Install and configure mongo db
2. Stop mongod service.
3. Add replica set and shard details in the config file.
4. Run Rsync from an existing secondary node.
5. Start MongoDB
But it's not working. Am I missing something, or is there any alternate approach for this?
## Update: Adding more info
MongoDB version: 3.6
### Config file on existing and new node:
storage:
dbPath: /mongodb/data
journal:
enabled: true
systemLog:
destination: file
logAppend: true
path: /mongodb/logs/mongod.log
net:
port: 27017
bindIp: 0.0.0.0
compression:
compressors: snappy
security:
keyFile: /mongodb/mongodb-keyfile
replication:
replSetName: prod-rpl-1
sharding:
clusterRole: myshard
### MongoDB Log
2019-08-09T14:42:27.604+0000 I CONTROL [initandlisten] MongoDB starting : pid=23478 port=27017 dbpath=/mongodb/data 64-bit host=Server-new-1
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] db version v3.6.13
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] git version: db3c76679b7a3d9b443a0e1b3e45ed02b88c539f
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] modules: none
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] build environment:
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] distmod: rhel70
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] distarch: x86_64
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-08-09T14:42:27.605+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "0.0.0.0", compression: { compressors: "snappy" }, port: 27017 }, replication: { replSetName: "production-1" }, security: { keyFile: "/mongodb/mongodb-keyfile" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/mongodb/data", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/mongodb/logs/mongod.log" } }
2019-08-09T14:42:27.607+0000 I - [initandlisten] Detected data files in /mongodb/data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-08-09T14:42:27.607+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=51778M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-08-09T14:42:28.643+0000 I STORAGE [initandlisten] WiredTiger message [1565361748:642980][23478:0x7f58ac14fb80], txn-recover: Main recovery loop: starting at 3/768
-----------
-----------
index rebuilding
-----------
-----------
2019-08-09T14:47:46.828+0000 I INDEX [repl writer worker 15] build index on: config.cache.chunks.se4_manage_media_production_aispartner.twitter_auth_apps properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1", ns: "config.cache.chunks.se4_manage_media_production_aispartner.twitter_auth_apps" }
2019-08-09T14:47:46.828+0000 I INDEX [repl writer worker 15] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-08-09T14:47:46.830+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-08-09T14:47:46.830+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2019-08-09T14:47:46.830+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2019-08-09T14:47:46.831+0000 I REPL [signalProcessingThread] shutdown: removing all drop-pending collections...
2019-08-09T14:47:46.831+0000 I REPL [signalProcessingThread] shutdown: removing checkpointTimestamp collection...
2019-08-09T14:47:46.831+0000 I REPL [signalProcessingThread] shutting down replication subsystems
2019-08-09T14:47:46.831+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host Server-old-1:27017 due to bad connection status; 2 connections to that host remain open
2019-08-09T14:47:46.832+0000 I INDEX [repl writer worker 15] build index on: config.cache.chunks.se4_manage_media_production_aispartner.twitter_auth_apps properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.cache.chunks.se4_manage_media_production_aispartner.twitter_auth_apps" }
2019-08-09T14:47:46.832+0000 I INDEX [repl writer worker 15] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-08-09T14:47:46.833+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host Server-old-1:27017 due to bad connection status; 1 connections to that host remain open
2019-08-09T14:47:46.897+0000 E REPL [replication-1] Initial sync attempt failed -- attempts left: 9 cause: CallbackCanceled: error fetching oplog during initial sync: initial syncer is shutting down
2019-08-09T14:47:46.897+0000 I STORAGE [replication-1] Finishing collection drop for local.temp_oplog_buffer (no UUID).
2019-08-09T14:47:46.903+0000 I REPL [replication-1] Initial Sync has been cancelled: CallbackCanceled: failed to schedule work _startInitialSyncAttemptCallback-1 at 2019-08-09T14:47:47.897+0000: initial syncer is shutting down
2019-08-09T14:47:46.986+0000 I REPL [signalProcessingThread] Stopping replication storage threads
2019-08-09T14:47:46.990+0000 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for Server-new-1:27017:1565361980:-4300361147585369138 :: caused by :: ShutdownInProgress: Shutdown in progress
2019-08-09T14:47:46.990+0000 W SHARDING [signalProcessingThread] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
2019-08-09T14:47:46.990+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-08-09T14:47:47.001+0000 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down
2019-08-09T14:47:47.004+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2019-08-09T14:47:48.758+0000 I STORAGE [signalProcessingThread] WiredTiger message [1565362068:758636][23917:0x7faef147d700], txn-recover: Main recovery loop: starting at 7/65416960
2019-08-09T14:47:48.875+0000 I STORAGE [signalProcessingThread] WiredTiger message [1565362068:875207][23917:0x7faef147d700], txn-recover: Recovering log 7 through 8
2019-08-09T14:47:48.919+0000 I STORAGE [signalProcessingThread] WiredTiger message [1565362068:919195][23917:0x7faef147d700], txn-recover: Recovering log 8 through 8
2019-08-09T14:47:48.992+0000 I STORAGE [signalProcessingThread] WiredTiger message [1565362068:992751][23917:0x7faef147d700], txn-recover: Set global recovery timestamp: 0
2019-08-09T14:47:49.065+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2019-08-09T14:47:49.066+0000 I CONTROL [signalProcessingThread] now exiting
2019-08-09T14:47:49.066+0000 I CONTROL [signalProcessingThread] shutting down with code:0
TheDataGuy
(1986 rep)
Aug 11, 2019, 08:29 AM
• Last activity: Apr 12, 2025, 07:04 PM
0
votes
0
answers
24
views
Periodic copying, synchronizing, forwarding and merging of SQL database tables with MSSQL Express?
There is a modification project in our Process Plant related to SCADA application (Process Automation System) and I am looking for reliable advice/ suggestions from experienced members related to SQL Database Management.
**Background:** We have multiple Workstations, each running a stand-alone SCADA application with SQL Express, writing real-time alarms to a SQL DB with occasional reads. We need to forward the Alarms table from the SQL DB on each Workstation to the Centralized Station, merge all the Alarm tables into one, and then import it into the SQL Express DB on the Centralized Station for read-only purposes. These tasks need to be automated and triggered at an appropriate frequency (e.g., every hour).
Please note that all Workstations are Workgroups and Stand-alone.
Please also note that communication between Workstations and Centralized Station is strictly uni-directional i.e. Workstation -> Centralized Workstation (so no option for SQL Server Replication).
***
### My Rough Plan
#### On Each Workstation:
1. Duplicate the Production_DB to Offline_DB (in same SQL Server Instance)
2. Synchronous/Asynchronous update of Alarms Table from Production_DB to Offline_DB.
3. Periodic Conversion of the Alarms Table from Offline_DB to CSV.
4. Forwarding the CSV to Centralized Workstation.
#### On Centralized Workstation:
5. Creating Workstation_DB for each Workstation in SQL Server instance.
6. Converting and importing CSV file in each Workstation_DB to Alarms Table.
7. Creating one Central_DB and merging all Workstations Alarms Table to one Table.
8. Assign Central_DB to SCADA Application on Central Workstation.
Please advise if the rough plan needs correction or optimization.
I have no background knowledge of SQL scripting, and I would be grateful if someone could also guide me through the execution of each step in SQL (excluding step 4 and step 8).
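As a starting point, a minimal sketch of steps 6-7 on the Centralized Workstation (T-SQL); the file path, databases, and column names are placeholders, and it assumes one staging table per Workstation_DB:
```
-- Step 6: bulk-load one workstation's CSV into its staging table.
BULK INSERT Workstation1_DB.dbo.Alarms_Staging
FROM 'D:\Import\Workstation1_Alarms.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- Step 7: append new rows to the merged central alarms table,
-- skipping rows that were already imported on a previous run.
INSERT INTO Central_DB.dbo.Alarms (SourceStation, AlarmTime, TagName, AlarmMessage)
SELECT 'Workstation1', s.AlarmTime, s.TagName, s.AlarmMessage
FROM   Workstation1_DB.dbo.Alarms_Staging AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM Central_DB.dbo.Alarms AS c
                   WHERE c.SourceStation = 'Workstation1'
                     AND c.AlarmTime     = s.AlarmTime
                     AND c.TagName       = s.TagName);

TRUNCATE TABLE Workstation1_DB.dbo.Alarms_Staging;
```
SQL Server Express has no SQL Server Agent, so the periodic trigger would typically come from Windows Task Scheduler running the script via sqlcmd.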
Hmbl3Lrnr
(1 rep)
Feb 18, 2025, 08:31 AM
• Last activity: Feb 18, 2025, 08:33 AM
0
votes
0
answers
48
views
Scripted daily SQL dump from Oracle 19.c to SQL Server 2016
* I have an SQL user to an Oracle 19c instance, without access to the server's operating system.
* Secondly, I have an SQL user to a SQL Server 2016 instance, and a Remote Desktop user.
Both servers live in different networks, and the connection between them is rather slow, let's guess it is 20 MBit per second.
Now I have to get the data of 5 tables each day from Oracle to SQL Server, roughly 3 GB overall. I created the table structure in SQL Server once, so I only need INSERTs, probably preceded by a TRUNCATE TABLE.
As a MySQL user I would like to find a console tool like mysqldump, but any solution is appreciated.
I tried a linked server using syntax like INSERT INTO ... SELECT * FROM OPENQUERY(...), but that runs into a timeout after 30 seconds, ending up with ~1000 rows on the target server.
I also exported by hand via Oracle SQL Developer, which took me one hour for one table, and on top of that I needed to edit some date/time values to make them compatible with SQL Server.
I could live with the slowness, if at least there was a real *automated* and *stable* way of doing it.
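One avenue worth checking, sketched below (T-SQL): if the 30-second cutoff comes from the server-wide remote query timeout rather than a client command timeout, it can be raised or disabled, and each table loaded with one OPENQUERY statement. The linked server name ORA19, schema, and columns are placeholders.
```
-- Check/raise the server-wide remote query timeout (0 = unlimited).
EXEC sp_configure 'remote query timeout (s)', 0;
RECONFIGURE;

-- One truncate-and-load per table; pushing the column list into the
-- Oracle-side query keeps the transferred data to a minimum.
TRUNCATE TABLE dbo.Table1;

INSERT INTO dbo.Table1 (Id, Name, CreatedAt)
SELECT Id, Name, CreatedAt
FROM OPENQUERY(ORA19, 'SELECT ID, NAME, CREATED_AT FROM SCHEMA1.TABLE1');
```
If the timeout turns out to be client-side (e.g., the tool issuing the statement), the fix belongs on that side instead.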
Ansgar Becker
(1 rep)
Feb 7, 2025, 09:49 AM
1
votes
1
answers
2468
views
Why availability group resource goes offline in WSFC?
Sometimes I find the AG cluster role in the offline state, which leaves the AG in the RESOLVING state. It doesn't allow the application to access the databases during this time. And then after some time it comes back online. Why is that happening?
If one of the cluster networks is down, does the cluster role go to the offline state?
Sree Lekha
(15 rep)
Nov 7, 2020, 03:00 PM
• Last activity: Feb 7, 2025, 08:10 AM
0
votes
0
answers
69
views
MS SQL Server: How to sync multiple local database instances to single central instance?
I have let’s say 3 local database instances deployed at 3 different geographical locations around the world.
I want to sync all these three local instances to one central instance deployed at yet another geographical location.
I am looking for only one-way sync; from Local to Central.
Finally, central instance will have data from all three local instances.
Database structure of all the databases involved is exactly the same.
Each local instance and the central instance are connected through internet; other necessary security is implemented and connectivity is available.
If this is real-time, that will be great; but if it causes some lag or delay, let's say an hour or so, that is acceptable.
I cannot use MS SQL Server's Replication feature, as the connectivity will not be consistent; connectivity may not be available for hours in some cases. As per my understanding and previous experience with the Replication feature, this causes many problems and also needs manual involvement to get things back on track. I have read [this](https://dba.stackexchange.com/q/77335/172299). In general, Replication is not suitable in my scenario.
Is there any other way to sync local and central instances?
Amit Joshi
(121 rep)
Jan 29, 2025, 01:42 PM
• Last activity: Jan 30, 2025, 07:19 AM
0
votes
1
answers
606
views
mysql diff and add diff records into new database
I have two databases on two different hosts:
* host A has database A and table A
* table A contains records with primary key 1 to 10
* host B has database B and table B
* table B contains records with primary key 1 to 5
Now I have to take a diff of the two separate hosts/databases/tables,
i.e. "host A/database A/table A" contains 1-10 and "host B/database B/table B" contains 1-5, so records 6-10 are missing in host B/database B/table B. Then I need to add these diff records into "host B/database B/table B" from "host A/database A/table A".
In short, update the development database with new production data since the last update.
What I have tried:
mysqldump --replace --complete-insert --skip-disable-keys --no-create-info \
  -h A --port=3306 -u A --password=A database_A table_A \
  | mysql -h B -u B --password=B database_B
mOsEs
(11 rep)
Feb 18, 2016, 12:39 PM
• Last activity: Jan 26, 2025, 12:08 AM
0
votes
1
answers
58
views
Question on Long-Running Transactions and Secondary Replica Visibility
I have an Availability Group (AG) setup and tried simulating a long-running transaction on the primary. Here’s what I did:
On the primary replica, I ran the following commands:
BEGIN TRAN;
INSERT INTO employees VALUES (1);
In a new tab on the primary, I created a new table (Table1) and inserted some records into it:
CREATE TABLE Table1 (ID INT, Name NVARCHAR(50));
INSERT INTO Table1 VALUES (1, 'Test');
On the secondary replica, I ran:
SELECT * FROM Table1;
This query on the secondary replica returned the data, showing the changes made in Table1.
However, according to the Microsoft documentation:
> If a transaction has not committed for hours, the open transaction blocks all read-only queries from seeing any new updates.
This behavior seems to suggest that the impact of a long-running transaction is database-wide, meaning we should not be able to access any table on the database from the secondary replica when there’s an active, long-running transaction on the primary.
Can you clarify if the behavior described in the documentation is database-level, and if so, why was I able to see changes in Table1 on the secondary replica?
Shahezar khan
(1 rep)
Jan 7, 2025, 05:16 PM
• Last activity: Jan 7, 2025, 08:53 PM
0
votes
1
answers
100
views
How can I isolate sensitive information from a publicly accessible database?
I am working for a medical private practice, and we are planning to install an appointment booking system via our webpage.
The appointment system is a PHP application storing its records in an SQL database that can be on the webserver itself or on a remote server.
Due to the use case, the database will contain very sensitive personal information* that must be kept secret from the public.
Hardening access to the database is one point, but I guess that an important entry hole to the data would be the PHP application itself.
Is there a recommended way to isolate the sensitive data from public access? I could imagine a system such as the following:
*public* -> access via PHP app on publicly accessible server -> *SQL database with masked sensitive entries* <-> *private SQL database with full entries* <- *employees*
In that way, even if the PHP application exposed data to the public, the sensitive data would be protected. Also, gaining access to the publicly available SQL database itself would be less critical.
Mapping between the two databases would be through a unique hash-code for each of the entries. Effectively, this is just a pseudonymization and not an anonymization. But still, it is an additional layer of security.
I just don't know whether there is a ready-made way for such an implementation. Basically, any change in the *SQL database with masked sensitive entries* should trigger a sync with a subsequent masking (since customer entries made from *public* need to be masked).
Is there a better implementation? I would be very grateful for any suggestions.
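A minimal sketch of the masking sync, assuming both databases are MySQL and that private_db is reachable from the public server (e.g. via FEDERATED tables) or that the job is split across the two hosts; the tables public_db.appointments (with an added masked flag column) and private_db.appointments_full are placeholders:
```
-- Copy newly entered rows into the private database, keyed by a hash
-- that also serves as the mapping between the two copies.
INSERT INTO private_db.appointments_full
       (public_hash, patient_name, contact, medical_note, slot_time)
SELECT SHA2(CONCAT(a.id, '|', a.created_at), 256),
       a.patient_name, a.contact, a.medical_note, a.slot_time
FROM   public_db.appointments AS a
WHERE  a.masked = 0;

-- Then overwrite the sensitive columns in the public copy.
UPDATE public_db.appointments
SET    patient_name = '*****',
       contact      = '*****',
       medical_note = '*****',
       masked       = 1
WHERE  masked = 0;
```
Run as a scheduled event or cron job, this is the "sync with subsequent masking" described above; the window between insert and masking is the residual exposure, so the more frequently it runs the better.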
---
\* Patients would enter sensitive data in the public database, and we need this data to do our job (we need names, contact, and some medical information). But as soon as this data is entered, it should be transferred to the private database and be made unavailable in the public database.
But still, I don't want to fuss with the PHP application, which is pre-made. Masking the data in the public database (e.g., with asterisks) would let the PHP application just work as usual.
Gunnar
(3 rep)
Jan 4, 2025, 10:11 AM
• Last activity: Jan 6, 2025, 12:33 PM
4
votes
1
answers
1171
views
MongoDB can't add new replica set member [rsSync] SEVERE: Got signal: 6
I have a replica set and I added a new member to it. The initial sync begins for the new member and rs.status() (on the primary) shows STARTUP2 as the status. However, after a long enough time, there's a cryptic fassert error on the new instance.
The log dump is as follows:
2014-11-02T22:53:23.995+0000 [clientcursormon] mem (MB) res:330 virt:45842
2014-11-02T22:53:23.995+0000 [clientcursormon] mapped (incl journal view):45038
2014-11-02T22:53:23.995+0000 [clientcursormon] connections:27
2014-11-02T22:53:23.995+0000 [clientcursormon] replication threads:32
2014-11-02T22:53:25.427+0000 [conn2012] end connection xx.xx.xx.xx:1201 (26 connections now open)
2014-11-02T22:53:25.433+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1200 #2014 (27 connections now open)
2014-11-02T22:53:25.436+0000 [conn2014] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:26.775+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1058 #2015 (28 connections now open)
2014-11-02T22:53:26.864+0000 [conn1993] end connection xx.xx.xx.xx:1059 (27 connections now open)
2014-11-02T22:53:29.090+0000 [rsSync] Socket recv() errno:110 Connection timed out xx.xx.xx.xx:27017
2014-11-02T22:53:29.096+0000 [rsSync] SocketException: remote: xx.xx.xx.xx:27017 error: 9001 socket exception [RECV_ERROR] server [168.63.252.61:27017]
2014-11-02T22:53:29.099+0000 [rsSync] DBClientCursor::init call() failed
2014-11-02T22:53:29.307+0000 [rsSync] replSet initial sync exception: 13386 socket error for mapping query 0 attempts remaining
2014-11-02T22:53:36.113+0000 [conn2013] end connection xx.xx.xx.xx:1056 (26 connections now open)
2014-11-02T22:53:36.153+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1137 #2016 (27 connections now open)
2014-11-02T22:53:36.154+0000 [conn2016] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:55.541+0000 [conn2014] end connection xx.xx.xx.xx:1200 (26 connections now open)
2014-11-02T22:53:55.578+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1201 #2017 (27 connections now open)
2014-11-02T22:53:55.580+0000 [conn2017] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:56.861+0000 [conn2015] authenticate db: admin { authenticate: 1, user: "root", nonce: "xxx", key: "xxx" }
2014-11-02T22:53:59.310+0000 [rsSync] Fatal Assertion 16233
2014-11-02T22:53:59.723+0000 [rsSync] 0x11c0e91 0x1163109 0x114576d 0xe84c1f 0xea770e 0xea7800 0xea7af8 0x1205829 0x7ff728cf8e9a 0x7ff72800b3fd
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11c0e91]
/usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x1163109]
/usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xcd) [0x114576d]
/usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xe84c1f]
/usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x18e) [0xea770e]
/usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x30) [0xea7800]
/usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0xa8) [0xea7af8]
/usr/bin/mongod() [0x1205829]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7ff728cf8e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7ff72800b3fd]
2014-11-02T22:53:59.723+0000 [rsSync]
***aborting after fassert() failure
2014-11-02T22:53:59.728+0000 [rsSync] SEVERE: Got signal: 6 (Aborted)
The worst part is that when I try to restart the mongod service, the replication begins afresh, trying to re-sync all the files which are already there. This seems bizarre and useless.
Can someone please help me understand what is going on?
VaidAbhishek
(253 rep)
Nov 3, 2014, 12:54 AM
• Last activity: Jan 6, 2025, 07:02 AM