Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
20 votes · 1 answer · 16267 views
what is "planSummary: IDHACK"?
This query scans only one document and returns only one document, but it is very slow:
2017-05-22T07:13:24.548+0000 I COMMAND [conn40] query databasename.collectionname query: { _id: ObjectId('576d4ce3f2d62a001e84a9b8') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8009ms
2017-05-22T07:13:24.549+0000 I COMMAND [conn10] query databasename.collectionname query: { _id: ObjectId('576d4db35de5fa001ebdd77a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8010ms
2017-05-22T07:13:24.553+0000 I COMMAND [conn47] query databasename.collectionname query: { _id: ObjectId('576d44b7ea8351001ea5fb22') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8014ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn52] query databasename.collectionname query: { _id: ObjectId('576d457ceb82a0001e205bfa') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn41] query databasename.collectionname query: { _id: ObjectId('576d457ec0697c001e1e1779') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn39] query databasename.collectionname query: { _id: ObjectId('576d44b8ea8351001ea5fb27') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.561+0000 I COMMAND [conn34] query databasename.collectionname query: { _id: ObjectId('576d44b55de5fa001ebdd31e') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8022ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn32] query databasename.collectionname query: { _id: ObjectId('576d4df6d738a7001ef2a235') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn51] query databasename.collectionname query: { _id: ObjectId('576d48165de5fa001ebdd55a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8024ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn17] query databasename.collectionname query: { _id: ObjectId('576d44c19f2382001e953717') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn8] query databasename.collectionname query: { _id: ObjectId('576d45d256c22e001efdb382') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn42] query databasename.collectionname query: { _id: ObjectId('576d44bd57c75e001e6e2302') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn6] query databasename.collectionname query: { _id: ObjectId('576d44b394e731001e7cd530') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.571+0000 I COMMAND [conn31] query databasename.collectionname query: { _id: ObjectId('576d4dbcb7289f001e64e32b') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8032ms
This looks like very slow disk I/O. What does planSummary: IDHACK mean? Is there any more information available about IDHACK?
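For reference, the same plan can be confirmed with explain(); a quick sketch against one of the _id values from the log above:
use databasename
db.collectionname.find({ _id: ObjectId('576d4ce3f2d62a001e84a9b8') }).explain()
// on these versions, queryPlanner.winningPlan.stage comes back as "IDHACK" for an equality match on _id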
Sybil
(2578 rep)
May 23, 2017, 03:36 PM
• Last activity: Jul 10, 2025, 01:38 AM
1 vote · 1 answer · 680 views
mongodump: fact.bson: input/output error
We are trying to export a 1.5 TB database from MongoDB, but the dump fails after about 100 GB with the error below. We noticed that the root mount point reached 100% disk usage. We are using the following command:
mongodump -u admin -p xx_admin_db -d xx --host xx.xx.xx.xx --authenticationDatabase admin
2018-06-28T12:20:30.687+0000 [........................] xx1.fact 14679565/468546715 (3.1%)
2018-06-28T12:20:31.192+0000 [........................] xx1.fact 14680477/468546715 (3.1%)
2018-06-28T12:20:31.195+0000 Failed: error writing to file: write dump/xx/xxx.bson: input/output error
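For reference, the uncompressed size the dump will need can be estimated from the shell before picking a target disk (a rough sketch; xx is the database from the command above):
use xx
db.stats(1024*1024*1024)
// dataSize is roughly the uncompressed BSON that mongodump writes out, shown here in GB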
Irfi
(53 rep)
Jun 29, 2018, 08:21 AM
• Last activity: May 5, 2025, 02:06 AM
0 votes · 1 answer · 2538 views
mongodb setFeatureCompatibilityVersion command not found
I'm trying to upgrade my MongoDB from 3.2 to 3.4, and I try to set the compatibility version with:
db.adminCommand( { setFeatureCompatibilityVersion: "3.2" } )
But it says
{
"ok" : 0,
"errmsg" : "no such command: 'setFeatureCompatibilityVersion', bad cmd: '{ setFeatureCompatibilityVersion: \"3.2\" }'",
"code" : 59
}
I'm not sure why the command is not found. Can somebody point out what's wrong?
Here is the MongoDB version info from db.serverStatus().version:
> 3.2.20
P.S.: I'm running the command on the admin database, authenticated as a user with the root role.
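For reference, a quick sanity check of what the shell is actually connected to (a sketch); the setFeatureCompatibilityVersion command only exists once the binaries are 3.4:
db.version()
db.adminCommand({ buildInfo: 1 }).version
// both should report 3.4.x before setFeatureCompatibilityVersion becomes available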
Vijai
(101 rep)
Aug 31, 2018, 12:11 PM
• Last activity: Mar 9, 2025, 06:00 AM
1
votes
1
answers
465
views
MongoDB instance for production
I am seeking advice before moving my MongoDB server from development mode into production.
**Just to point out: the server is working fine in development mode.**
**Scenario:** Let's say I have two Ubuntu 16.04 server instances, where instance **A** is the application instance and instance **B** is the MongoDB 3.2 database.
For instance B I don't think I need **scale/sharding/replicas**; the idea is to let mongo occupy this instance all by itself to store secured user credentials and some API data. (**if you object, I am listening :)** )
**How they communicate:** Instance B only talks to Instance A through port 27077, since it's not recommended to use the default port 27017. Moreover, in the mongo config file I bind A's static IP.
**Just to point out: only three ports are allowed on instance B: SSH and the two MongoDB ports 27077 and 28017.**
> First Question: In order for this communication to be secure, would it
> be a better idea to use HTTPS on port 443 instead of the basic TCP port 27077,
> and then disable the HTTP interface in the mongo.conf file? Is that enough to make the communication secure?
**Authentication & Authorization**: I create two users with different roles on the database, where one is the full admin and the other has only basic rights, which might prevent me from doing something stupid one day, for example deleting the wrong collection.
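For reference, a sketch of that kind of split (user names, passwords, and the application database name are placeholders):
use admin
db.createUser({ user: "siteAdmin", pwd: "<admin password>", roles: [ { role: "root", db: "admin" } ] })
db.createUser({ user: "appUser", pwd: "<app password>", roles: [ { role: "readWrite", db: "appdb" } ] })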
> Second Question: What would be the best way to make sure the
> mongo server is always responsive and running? (For example, if the
> server is rebooted, I need to make sure that mongo is still
> running, and to avoid the mongodb.lock file issue, because I had that problem before.)
> Third Question: Since I am not running a replica set, what would be the
> best way to create backups?
If you have any other advice, I am listening, and thank you for your time.
Lamour
(111 rep)
Aug 21, 2016, 08:31 PM
• Last activity: Mar 2, 2025, 09:00 PM
0 votes · 1 answer · 2979 views
mongodb set feature compatibility to 3.4 fail
I want to enable the features of my MongoDB 3.4 after upgrading. I have a sharded cluster environment, and I ran the commands:
use admin
db.adminCommand ({setFeatureCompatibilityVersion: "3.4"})
On the mongos instance I got the following output:
> {"ok": 1}
But when I check whether it succeeded with the command:
db.adminCommand ({getParameter: 1, featureCompatibilityVersion: 1})
I get the following output:
> {"ok": 0, "errmsg": "no option found to get"}
And when I check in Ops Manager, I see that the command did not take effect.
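One thing worth double-checking (a sketch, assuming direct access to a shard or config server primary rather than going through mongos, since the parameter lives on the mongod processes):
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// on 3.4 mongod this returns something like { "featureCompatibilityVersion" : "3.4", "ok" : 1 }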
thecollector
(9 rep)
May 8, 2018, 11:05 AM
• Last activity: Jan 9, 2025, 10:01 AM
2 votes · 6 answers · 60859 views
Mongodb exception: connect failed
# Here's the environment:
1. Linux (x64): **16.04**
2. MongoDB: **3.2.15**
# Error:
root@system-test:~# mongo
MongoDB shell version: 3.2.15
connecting to: test
2017-07-25T07:31:28.528+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: errno:111 Connection refused
2017-07-25T07:31:28.530+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:229:14
@(connect):1:6
exception: connect failed
root@system-test:~#
**When I tried to see the status I get this:**
root@system-test:~# sudo service mongodb status
● mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2017-07-25 07:45:32 UTC; 1min 8s ago
Process: 1721 ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf (code=exited, status=14)
Main PID: 1721 (code=exited, status=14)
Jul 25 07:45:32 system-test systemd: Started High-performance, schema-free document-oriented database.
Jul 25 07:45:32 system-test systemd: mongodb.service: Main process exited, code=exited, status=14/n/a
Jul 25 07:45:32 system-test systemd: mongodb.service: Unit entered failed state.
Jul 25 07:45:32 system-test systemd: mongodb.service: Failed with result 'exit-code'.
# How it happened:
I tried to stop MongoDB with **sudo service mongo stop** [this didn't work].
Then I tried **kill -2** [that didn't seem to work either].
But after this, the **mongo** command stopped working and I get the error above.
Please help, guys.
Siddhartha Chowdhury
(131 rep)
Jul 25, 2017, 07:59 AM
• Last activity: Aug 17, 2023, 09:57 AM
0 votes · 1 answer · 990 views
Help required to find correct MongoDB size
Please help me find the correct MongoDB size.
DB version: v3.4.10
I have also executed the command below to check whether there are any ghost (deleted but still open) files; its output is zero.
lsof | grep deleted | grep mongo | wc -l
1. In db.stats, the data size is showing 1236GB
"collections" : 23,
"views" : 0,
"objects" : 702876605,
"avgObjSize" : 1914.5578033287934,
**"dataSize" : 1253.2788225263357,**
"storageSize" : 374.982120513916,
"numExtents" : 0,
"indexes" : 52,
"indexSize" : 21.016620635986328,
"ok" : 1
2. Disk used is showing as 916GB (/data/mongo/) out of 1.4T
3. In the show dbs output, the DB size is showing as 396 GB:
admin 0.000GB
local 15.906GB
**abcd 396.008GB**
I need to sync the DB from the primary site to a secondary (replication). How much free space is required on the secondary site (915 GB or 1236 GB)? Also, why is the DB size shown differently in points 1, 2 and 3? Which value is the correct DB size?
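For reference, a sketch of how the different numbers relate (run against the abcd database):
db.stats(1024*1024*1024)   // sizes scaled to GB; with no argument db.stats() reports bytes
// dataSize    = uncompressed size of all documents
// storageSize = compressed space WiredTiger has actually allocated on disk (closer to what du and show dbs report)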
Thanks in advance.
Sridhar G
(1 rep)
Oct 27, 2022, 10:24 AM
• Last activity: Oct 30, 2022, 03:45 AM
0 votes · 1 answer · 996 views
MongoDB 2 node cluster
I am trying to understand the implications of setting up a 2-node MongoDB cluster with the following configuration (the documentation does say a minimum of 3 nodes is required).
MongoDB Version is 3.2.
If I configure both nodes with "arbiterOnly": false, and set Node1 with Priority: 2 and Node2 with Priority: 1, then if Node1 goes down, won't Node2 take over? Why not?
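For reference, a minimal sketch of the two-member configuration being described (host names and ports are placeholders):
rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "node1:27017", priority: 2 },
        { _id: 1, host: "node2:27017", priority: 1 }
    ]
})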
Jayadevan
(1051 rep)
Nov 2, 2020, 05:49 AM
• Last activity: Sep 25, 2022, 09:04 PM
0 votes · 1 answer · 114 views
Unable to add replica set to shard
I was trying to add a new replica set to my mongodb sharded cluster but I keep getting the following error:
{
"ok" : 0,
"errmsg" : "Could not verify that config servers were active and reachable before write",
"code" : 25
}
I checked all the config servers and they are up and running. The problem is new, as I was previously able to add shards without any issues.
The MongoDB version I am using is 3.2 running on Ubuntu 16.02.
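For reference, adding a shard from mongos normally looks something like this (replica set and host names are placeholders):
sh.addShard("newReplSet/host1:27017,host2:27017,host3:27017")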
How can I fix this?
Lincoln
(1 rep)
Sep 1, 2016, 09:51 AM
• Last activity: Apr 26, 2022, 12:00 PM
0 votes · 1 answer · 542 views
Database is not reflecting in sh.status()
I have 2 shard servers (s1 and s0), each a replica set. s1 has 2 DBs (db-A and db-B), and one of them (db-B) is sharded.
When I run sh.status() from mongos, only db-B is displayed. I am wondering what I need to do for db-A to be recognized by mongos.
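For reference, the databases mongos knows about (and their primary shard) can be listed directly from the config database; a sketch:
use config
db.databases.find()
// each entry shows the database name, its primary shard, and whether sharding is enabled ("partitioned")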
Geek
(101 rep)
Jul 7, 2017, 11:47 AM
• Last activity: Oct 14, 2021, 09:09 AM
0 votes · 1 answer · 360 views
mysql_install_db equivalent for MongoDB? Error 'handle-open: open: Too many open files'
A developer created millions of collections and indexes, which means millions of collection-* and index-* files.
I have no administrative login (DB user) on this replica set (3 nodes) running mongod 3.2.6. I wish to initialize the MongoDB data directory using only Linux commands. In MariaDB there is the mysql_install_db command.
Error message:
2017-01-14T05:25:38.674+0000 E STORAGE [thread2] WiredTiger (24) [1484371538:674300][60390:0x7fcc4bc69700], file:WiredTiger.wt, WT_SESSION.checkpoint: /var/vcap/store/mongodb-data/example/WiredTiger.turtle: handle-open: open: Too many open files
2017-01-14T05:25:38.737+0000 I - [conn2] Assertion: 13538:couldn't open [/proc/60390/stat] errno:24 Too many open files
2017-01-14T05:25:38.738+0000 I NETWORK [conn2] end connection 127.0.0.1:39289 (12 connections now open)
2017-01-14T05:25:38.738+0000 I NETWORK [conn6] end connection 127.0.0.1:39347 (12 connections now open)
2017-01-14T05:25:38.738+0000 I NETWORK [conn9] end connection 127.0.0.1:39453 (12 connections now open)
2017-01-14T05:25:38.769+0000 I COMMAND [conn10] command example command: collStats { collstats: "090f42d6-5368-46c6-80
ca-2a8e7e380693" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:9450 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r:
1 } } } protocol:op_query 161ms
2017-01-14T05:25:38.783+0000 E STORAGE [thread2] WiredTiger (24) [1484371538:783882][60390:0x7fcc4bc69700], checkpoint-server: checkpoint server error: Too many open files
2017-01-14T05:25:38.783+0000 E STORAGE [thread2] WiredTiger (-31804) [1484371538:783983][60390:0x7fcc4bc69700], checkpoint-server: the process must exit and restart: WT_PANIC: WiredTiger
library panic
2017-01-14T05:25:38.784+0000 I - [thread2] Fatal Assertion 28558
2017-01-14T05:25:38.784+0000 I - [thread2]
***aborting after fassert() failure
2017-01-14T05:25:38.784+0000 I - [WTJournalFlusher] Fatal Assertion 28559
2017-01-14T05:25:38.784+0000 I - [WTJournalFlusher]
***aborting after fassert() failure
2017-01-14T05:25:38.789+0000 F - [thread2] Got signal: 6 (Aborted).
0x1383b32 0x1382c89 0x1383492 0x7fcc50af2330 0x7fcc50753c37 0x7fcc50757028 0x130dba2 0x1109083 0x1ae49dc 0x1ae4e9d 0x1ae5284 0x1a69dbb 0x7fcc50aea184 0x7fcc5081737d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"F83B32","s":"_ZN5mongo15printStackTraceERSo"},{"b":"400000","o":"F82C89"},{"b":"400000","o":"F83492"},{"b":"7FCC50AE2000","o":"10330"},{"b":"7FCC5071D000","o":"36C37","s":"gsignal"},{"b":"7FCC5071D000","o":"3A028","s":"abort"},{"b":"400000","o":"F0DBA2","s":"_ZN5mongo13fassertFailedEi"},{"b":"400000","o":"D09083"},{"b":"400000","o":"16E49DC"
,"s":"__wt_eventv"},{"b":"400000","o":"16E4E9D","s":"__wt_err"},{"b":"400000","o":"16E5284","s":"__wt_panic"},{"b":"400000","o":"1669DBB"},{"b":"7FCC50AE2000","o":"8184"},{"b":"7FCC5071D00
0","o":"FA37D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.2.6", "gitVersion" : "05552b562c7a0b3143a729aaa0838e558dc49b25", "compiledModules" : [ "enterprise" ], "uname" : { "sysna
me" : "Linux", "release" : "3.19.0-64-generic", "version" : "#72~14.04.1-Ubuntu SMP Fri Jun 24 17:59:48 UTC 2016", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "bui
ldId" : "88B6197786F38A44F8D924B095DAD84A9E55F2C8" }, { "b" : "7FFF49FFB000", "elfType" : 3, "buildId" : "C89BD46B7CFC47F3E55EF539B3FAF8E450562F6A" }, { "b" : "7FCC52C73000", "path" : "/us
r/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : "666B276BD134B0E9579B67D4EE333F2D0FB813CD" }, { "b" : "7FCC52806000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmpmibs.s
o.30", "elfType" : 3, "buildId" : "4630C89B4E7BCCAD8B1B4FFB508962666D6663C2" }, { "b" : "7FCC525F7000", "path" : "/usr/lib/x86_64-linux-gnu/libsensors.so.4", "elfType" : 3, "buildId" : "85
9FDBFDD82F0EFDEB44A433D9D8020A232A35E2" }, { "b" : "7FCC523F3000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "DA9B8C234D0FE9FD8CAAC8970A7EC1B6C8F6623F" }, { "
b" : "7FCC5218A000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmpagent.so.30", "elfType" : 3, "buildId" : "9E02A41B22FEB1F704B60BA109EC5785131A5090" }, { "b" : "7FCC51F80000", "path" : "
/lib/x86_64-linux-gnu/libwrap.so.0", "elfType" : 3, "buildId" : "54FCBC5B0F994A13A9B0EAD46F23E7DA7F7FE75B" }, { "b" : "7FCC51CA6000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmp.so.30",
"elfType" : 3, "buildId" : "E9B667050A5D6C0C4D58826C32DEEAF38B16EBAA" }, { "b" : "7FCC518CA000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "AAE7CFF83
51B730830BDBCE0DCABBE06574B7144" }, { "b" : "7FCC51683000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "55F72A23CB9C0F7529F0E0BEE43981864B74C4FE"
}, { "b" : "7FCC5137D000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "D144258E614900B255A31F3FD2283A878670D5BC" }, { "b" : "7FCC5111E000", "path" : "/lib/x86_6
4-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "74864DB9D5F69D39A67E4755012FB6573C469B3D" }, { "b" : "7FCC50F16000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "
buildId" : "E2A6DD5048A0A051FD61043BDB69D8CC68192AB7" }, { "b" : "7FCC50D00000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "36311B4457710AE5578C4BF00791DED
7359DBB92" }, { "b" : "7FCC50AE2000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "31E9F21AE8C10396171F1E13DA15780986FA696C" }, { "b" : "7FCC5071D000", "pa
th" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "CF699A15CAAE64F50311FC4655B86DC39A479789" }, { "b" : "7FCC52E8E000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType"
: 3, "buildId" : "D0F537904076D73F29E4A37341F8A449E2EF6CD0" }, { "b" : "7FCC50394000", "path" : "/usr/lib/libperl.so.5.18", "elfType" : 3, "buildId" : "73F1E3843DAEC3F6114D138BF002C0A122CC
3707" }, { "b" : "7FCC5017A000", "path" : "/lib/x86_64-linux-gnu/libnsl.so.1", "elfType" : 3, "buildId" : "32E56CFD30B8B4FCD8AA69CED88A9782814A9D18" }, { "b" : "7FCC4FEAF000", "path" : "/u
sr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "77287B3AF8DD293D7367EEF27C652C04353752EC" }, { "b" : "7FCC4FC80000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3
", "elfType" : 3, "buildId" : "49E3D743C2B3741229AD3892B22C4794C646E1F2" }, { "b" : "7FCC4FA7C000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "8D56938ABD
6462C4C29822D8E48A131BE1C61F6A" }, { "b" : "7FCC4F871000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", "elfType" : 3, "buildId" : "0B3ABC152466DE0C69954405A0E980B6E0D4B78F" }
, { "b" : "7FCC4F638000", "path" : "/lib/x86_64-linux-gnu/libcrypt.so.1", "elfType" : 3, "buildId" : "FDB9E0552092EB7559F27C7A9915665408D930F0" }, { "b" : "7FCC4F434000", "path" : "/lib/x8
6_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "0F03635F97B93D3DACD84F0ED363C56BD266044F" }, { "b" : "7FCC4F219000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType
Sybil
(2578 rep)
Jan 14, 2017, 06:09 AM
• Last activity: Jun 9, 2021, 03:05 AM
1 vote · 3 answers · 14712 views
Mongodb Not Restarting with Collections, WiredTiger.wt may be corrupt
I believe my MongoDB did not have a clean shutdown. I am able to restart it in a new location which doesn't have all of my collections. If I try to repair or start it in the old location, it gives the following error:
[ec2-user@ip-172-31-30-192 tmp]$ mongod --repair --dbpath /data
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] MongoDB starting : pid=31865 port=27017 dbpath=/data 64-bit host=ip-172-31-30-192
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] db version v3.2.16
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] git version: 056bf45128114e44c5358c7a8776fb582363e094
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.0-fips 29 Mar 2010
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] allocator: tcmalloc
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] modules: none
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] build environment:
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] distmod: amazon
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] distarch: x86_64
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] target_arch: x86_64
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] options: { repair: true, storage: { dbPath: "/data" }
} 2017-08-20T16:20:30.972+0000 I - [initandlisten] Detected data files in /data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-08-20T16:20:30.972+0000 I STORAGE [initandlisten] Detected WT journal files. Running recovery from last checkpoint.
2017-08-20T16:20:30.972+0000 I STORAGE [initandlisten] journal to nojournal transition config: create,cache_size=17G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (-31802) [1503246030:981472][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (0) [1503246030:981530][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (0) [1503246030:981548][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (0) [1503246030:981564][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options
2017-08-20T16:20:30.981+0000 I - [initandlisten] Assertion: 28718:-31802: WT_ERROR: non-specific WiredTiger error
2017-08-20T16:20:30.982+0000 I STORAGE [initandlisten] exception in initAndListen: 28718 -31802: WT_ERROR: non-specific WiredTiger error, terminating
2017-08-20T16:20:30.982+0000 I CONTROL [initandlisten] dbexit: rc: 100
Is there a way to fix my wiredtiger.wt file or move my collections and indexes from the old location into the new location?
Jay Gupta
(11 rep)
Aug 20, 2017, 11:12 PM
• Last activity: Feb 6, 2021, 12:02 AM
3 votes · 1 answer · 2947 views
How to improve performance of this aggregate query?
I have the following collections in MongoDB:
1. Users collection (200+ users):
{
"_id" : ObjectId("56dd6204ce47a3c44d8b4567"),
"u_role" : "1",
"u_fname" : "dsfsd",
"u_lname" : "dsfdsf",
"u_email" : "dsfds@sfds.df",
"u_password" : "$2y$10$/sGOrJNJHsgE1buAvVfObObgsRxA/KquVcJzUdMwoKjGsbyQDuXCq",
"u_phone" : "sdfsdf",
"u_dealer_name" : "dsfdff",
"u_code" : "dsfdf",
"u_dealer_phone" : "sdf",
"u_address" : "sdfdsf",
"u_city" : "dsfdf",
"u_state" : "sdfdsf",
"u_country_id" : "1",
"u_zip_code" : "dsfdf",
"u_forgot_token" : "",
"u_status" : NumberLong(9),
"updated_at" : ISODate("2016-07-13T05:57:10.196Z"),
"created_at" : ISODate("2016-03-07T11:12:04.647Z"),
"u_id" : "56dd6204ce47a3c44d8b4567",
"coordinates" : [
0,
0
]}
2. User sales collection (5,000,000+ records):
{
"_id" : ObjectId("56fce996ce47a3e0448b4590"),
"us_u_id" : "56f32ca1ce47a323638b4567",
"us_dealer_u_id" : "56f32ca1ce47a323638b4567",
"us_corporate_dealer_u_id" : "56f32ca1ce47a323638b4567",
"us_oem_u_id" : "1459249076s48FgbBXG4",
"us_part_number" : "002005973000",
"us_sup_part_number" : "",
"us_alter_part_number" : "",
"us_qty" : NumberLong(0),
"us_sale_qty" : NumberLong(1),
"us_date" : "20160321",
"us_source_name" : "BOMAG",
"us_source_address" : "",
"us_source_city" : "",
"us_source_state" : "",
"us_zip_code" : "",
"us_alternet_source_code" : "",
"updated_at" : ISODate("2016-03-31T09:10:46.798Z"),
"created_at" : ISODate("2016-03-31T09:10:46.798Z")
}
My search query is:
db.hh_users.aggregate(
[
{
"$geoNear": {
"near": {
"coordinates": [
77.3847,
17.7284
]
},
"distanceField": "dist",
"spherical": true,
"limit": 192
}
},
{
"$match": {
"u_status": 1
}
},
{
"$lookup": {
"from": "hh_user_sales",
"localField": "u_id",
"foreignField": "us_dealer_u_id",
"as": "usersales"
}
},
{
"$unwind": "$usersales"
},
{
"$project": {
"u_fname": "$u_fname",
"u_lname": "$u_lname",
"u_dealer_phone": "$u_dealer_phone",
"u_email": "$u_email",
"u_city": "$u_city",
"u_state": "$u_state",
"updated_at": "$updated_at",
"us_part_number": {
"$toLower": [
"$usersales.us_part_number"
]
},
"us_qty": "$usersales.us_qty",
"us_dealer_u_id": "$usersales.us_dealer_u_id",
"dist": "$dist"
}
},
{
"$match": {
"us_part_number": {
"$in": [
"va32a4000400",
null,
null,
null,
null
]
}
}
},
{
"$group": {
"u_fname": {
"$last": "$u_fname"
},
"u_lname": {
"$last": "$u_lname"
},
"u_dealer_phone": {
"$last": "$u_dealer_phone"
},
"u_email": {
"$last": "$u_email"
},
"u_city": {
"$last": "$u_city"
},
"u_state": {
"$last": "$u_state"
},
"updated_at": {
"$last": "$updated_at"
},
"dist": {
"$last": "$dist"
},
"_id": {
"us_dealer_u_id": "$us_dealer_u_id"
},
"us_part_number": {
"$last": "$us_part_number"
},
"us_qty": {
"$last": "$us_qty"
},
"us_dealer_u_id": {
"$last": "$us_dealer_u_id"
},
"part1_qty": {
"$max": {
"$cond": [
{
"$eq": [
"$us_part_number",
null
]
},
"$us_qty",
0
]
}
},
"part2_qty": {
"$max": {
"$cond": [
{
"$eq": [
"$us_part_number",
null
]
},
"$us_qty",
0
]
}
},
"part3_qty": {
"$max": {
"$cond": [
{
"$eq": [
"$us_part_number",
null
]
},
"$us_qty",
0
]
}
},
"part4_qty": {
"$max": {
"$cond": [
{
"$eq": [
"$us_part_number",
null
]
},
"$us_qty",
0
]
}
},
"part5_qty": {
"$max": {
"$cond": [
{
"$eq": [
"$us_part_number",
null
]
},
"$us_qty",
0
]
}
}
}
},
{
"$project": {
"u_fname": "$u_fname",
"u_lname": "$u_lname",
"u_dealer_phone": "$u_dealer_phone",
"u_email": "$u_email",
"u_city": "$u_city",
"u_state": "$u_state",
"updated_at": "$updated_at",
"us_part_number": "$us_part_number",
"us_qty": "$us_qty",
"us_dealer_u_id": "$us_dealer_u_id",
"part1_qty": "$part1_qty",
"part2_qty": "$part2_qty",
"part3_qty": "$part3_qty",
"part4_qty": "$part4_qty",
"part5_qty": "$part5_qty",
"total": {
"$add": [
"$part1_qty",
"$part2_qty",
"$part3_qty",
"$part4_qty",
"$part5_qty"
]
},
"dist": "$dist"
}
},
{
"$sort": {
"part1_qty": -1,
"part2_qty": -1,
"part3_qty": -1,
"part4_qty": -1,
"part5_qty": -1,
"total": -1,
"dist": 1,
"us_qty": -1
}
},
{
"$skip": 0
},
{
"$limit": 10
}
]
).pretty()
It's taking more than 2 minutes to complete. How can I improve its performance?
The purpose of the query is: I need to search for 5 parts at a time, join the two collections (users and user sales), and order the results so that the maximum per-part quantities come first, followed by the maximum total quantity.
I have already created the following indexes:
users collection: u_id
user sales collection: u_dealer_id, us_part_number
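For reference, a sketch of the indexes that would typically back this pipeline, plus how to inspect the plan it actually gets (field names are taken from the pipeline above; pipeline stands for the stage array shown there):
// $geoNear requires a geospatial (2dsphere or 2d) index on the queried location field
db.hh_users.createIndex({ coordinates: "2dsphere" })
// the $lookup joins on us_dealer_u_id and the later $match filters on us_part_number
db.hh_user_sales.createIndex({ us_dealer_u_id: 1, us_part_number: 1 })
// show the plan chosen for the aggregation
db.hh_users.aggregate(pipeline, { explain: true })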
Dipesh Shihora
(131 rep)
Jul 15, 2016, 11:54 AM
• Last activity: Jan 7, 2021, 11:01 PM
1 vote · 2 answers · 2015 views
I am trying to achieve a Disaster recovery for Mongo DB
Right now I have a MongoDB HA setup in site one, and I need another site in a different location with asynchronous replication enabled. This is my base plan to achieve disaster recovery.
I don't have much idea how to establish asynchronous replication for MongoDB.
Also, I need this to be done on Windows systems.
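For what it's worth, replication between replica set members is already asynchronous in MongoDB, so one common pattern is a non-voting, priority-0 (optionally hidden) member in the remote site; a sketch run on the primary (the host name is a placeholder):
rs.add( { host: "drsite-host:27017", priority: 0, votes: 0, hidden: true } )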
linn zacharias
(11 rep)
Mar 11, 2019, 05:21 AM
• Last activity: Jul 24, 2020, 09:27 AM
5 votes · 2 answers · 10639 views
How can we use transaction in mongodb standalone connection?
I want to use transactions in MongoDB, but I'm told they require a replica set. Can we perform transactions with a standalone MongoDB instance? If yes, please share how, because when I try I get the error:
This MongoDB deployment does not support retryable writes. Please add retryWrites=false to your connection string
How do we perform transactions without retryable writes?
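For reference, transactions require a replica set, but a standalone mongod can be converted into a single-member replica set, which is enough for local use; a sketch, assuming mongod was restarted with --replSet rs0:
rs.initiate()
rs.status().myState   // 1 (PRIMARY) once the single-member set is up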
Thanks
Priyanka Sankhala
(151 rep)
Apr 17, 2020, 10:04 AM
• Last activity: Apr 21, 2020, 04:55 AM
0 votes · 0 answers · 327 views
MongoDB works only on user is assigned to admin database
I am using MongoDB with authentication, and it works as expected when I create the user in the admin database, like below:
use admin;
db.getUsers();
{
"_id" : "admin.test1",
"userId" : UUID("abee0e0c-1d0b-44fa-84ad-594159b66aab"),
"user" : "test1",
"db" : "admin",
"roles" : [
{
"role" : "readWrite",
"db" : "test_database"
}
],
"mechanisms" : [
"SCRAM-SHA-1",
"SCRAM-SHA-256"
]
}
I am able to connect to the database using the 'test1' user and access 'test_database'.
But when I create a user in the actual test_database, it does not allow me to connect:
use test_database;
db.getUsers();
{
"_id" : "test_database.test2",
"userId" : UUID("abee0e0c-1d0b-44fa-84ad-594159b66aab"),
"user" : "test2",
"db" : "test_database",
"roles" : [
{
"role" : "readWrite",
"db" : "test_database"
}
],
"mechanisms" : [
"SCRAM-SHA-1",
"SCRAM-SHA-256"
]
}
When I connect using the 'test1' user to access 'test_database', I am able to connect and it works as expected (weird how being defined in the admin database gives access to 'test_database', though) - why?
But when I connect using the user 'test2', it fails with the following error:
**pymongo.errors.OperationFailure: Authentication failed.**
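For reference, a user created in test_database has test_database as its authentication database, so the login has to target that database; a sketch (the password is a placeholder):
use test_database
db.auth("test2", "<password>")
// or set authSource=test_database in the connection string / pymongo MongoClient options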
Coder
(71 rep)
Mar 16, 2020, 06:40 PM
3 votes · 1 answer · 2476 views
adding and removing a node from a mongo replica set
I have a new backup server I want to add to our mongo replica sets. As far as I can tell, it's as easy as logging on to the replica set's primary and running the following command:
rs.add( { host: "mongobackup:10003", priority: 0, votes: 0, hidden: true } )
My understanding is that after doing so the primary should sync all its configuration and data with the new node.
Additionally, I have an old backup server I want to remove from the replica set, which it looks like I can do by running this command on the replica set's primary:
rs.remove( "oldmongobackup:10003" )
Is there anything else I am missing?
rs2:PRIMARY> rs.conf()
{
"_id" : "rs2",
"version" : 26,
"members" : [
{
"_id" : 7,
"host" : "mongo03:10001",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 20,
"tags" : {
"dc" : "maid"
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 8,
"host" : "mongo01:10002",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 10,
"tags" : {
"dc" : "maid"
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 9,
"host" : "mongo02:10003",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 10,
"host" : "oldmongobackup:10003",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : true,
"priority" : 0,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
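After the rs.add(), the new member's progress through its initial sync can be watched from the primary; a sketch:
rs.status().members              // the new host should move through STARTUP2 to SECONDARY
rs.printSlaveReplicationInfo()   // per-member replication lag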
CrazyHorse019
(33 rep)
Jan 29, 2020, 01:26 PM
• Last activity: Jan 30, 2020, 05:22 PM
2 votes · 0 answers · 344 views
How to validate a mongorestore
I have a script that creates a zipped dump file using mongodump for all of my sibling databases in Mongo. I then transfer these to a different MongoDB instance (having the same sibling DBs) and use mongorestore to fill up the collections and their respective documents.
The DB version is v3.2.19.
Does anybody know of a way, or can you point me to a resource, that helps validate the two mongo processes (preferably programmatically)? I would like to say for certain that the number of documents/collections dumped equals the number of documents/collections inserted by the restore.
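For what it's worth, a rough programmatic check is to record per-collection document counts on the source right after the dump and compare them with the target after the restore; a sketch, run against each sibling database on both sides:
db.getCollectionNames().forEach(function (name) {
    print(name + "\t" + db.getCollection(name).count());
});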
Thanks again!
Chinmoy Sharma
(21 rep)
Jun 13, 2018, 03:33 PM
• Last activity: Jan 2, 2020, 10:19 PM
2 votes · 2 answers · 3389 views
How to run Mongo database db.currentOp(true) command using API
Using the Mongo Java API I can run the currentOp() command like this:
MongoClient mongoClient = new MongoClient("127.0.0.1", 27017);
DB db = mongoClient.getDB("admin");  // com.mongodb.DB (legacy driver API)
db.command("currentOp");
But I only get details of current operations. I need to get details of idle connections too.
With reference to this:
**Behavior**
If you pass in true to db.currentOp(), the method returns information on all operations, including operations on idle connections and system operations.
db.currentOp(true)
Passing in true is equivalent to passing in a query document of { '$all': true }.
If you pass a query document to db.currentOp(), the output returns information only for the current operations that match the query. You can query on the Output Fields. See Examples.
You can also specify { '$all': true } query document to return information on all in-progress operations, including operations on idle connections and system operations. If the query document includes '$all': true as well as other query conditions, only the '$all': true applies.
While using the command db.command("currentOp(true)"), I get an exception like this:
> "ok" : 0.0 , "errmsg" : "no such command: 'currentOp(true)', bad cmd: '{ currentOp(true): true }'" , "code" : 59}
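For reference, the shell helper corresponds to the currentOp command run against the admin database, so the command document (which, I believe, the legacy Java driver's db.command(...) accepts as a DBObject) would look roughly like this:
db.adminCommand({ currentOp: 1, "$all": true })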
Velkumar
(41 rep)
Mar 9, 2017, 07:30 AM
• Last activity: Dec 18, 2019, 02:02 AM
0 votes · 0 answers · 23 views
MongoDB Intermittent Little to No Insert/Write Activity
I have a MongoDB 3.2 WiredTiger server running on Windows Server 2008 R2 Enterprise in Production, and a MongoDB 3.2 WiredTiger server running on Windows Server 2012 R2 in Testing. Production has 32 GB and Testing has 16 GB of RAM. MongoDB is the only thing running on these two servers.
We have a python script that runs in the morning and it ingests a large amount of data into a single collection.
> EDIT: The data being ingested is from a large .csv file and not coming
> directly from a SQL DB.
This happens every morning. There are two collections, Collections A and Collections B. One is active while the other is dormant. Each morning the dormant one gets loaded with the new data and becomes the active collection.
The problem is that the process to populate this collection takes about 12 hours to complete in Production while only taking about 4 hours in Testing. It's the same data, from the same .csv file, running the same python script, same python version, inserting into the same collections the same way.
Now Production is much more active while Testing has little to no activity. Though the resources on Production are not strained. The CPU usage is typically below 10% consistently. Using Mongostat and Mongotop it looks like our Production server can take 10s of seconds or more before doing any inserts while our Testing server is doing them practically every second.
Production: [screenshot]
Testing: [screenshot]
At first I thought perhaps this was due to an IO bottleneck with the HD. Our Production server was running on a 15k HDD at the time while Testing was on an iSCSI disk array which is much faster. So we worked on getting Production setup like that too. It improved the ingestion by maybe an hour.
I've spent weeks researching this and I am at a loss as to why our Production server takes 3 times longer to ingest the same data as our Testing server. At this point it seems to have something to do with either the activity in Production or the difference in operating system.
Can incoming activity prevent Mongo from making inserts even though there's plenty of CPU and disk IO to do both? Can the OS cause such a massive disparity? What else should I look at?
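For what it's worth, a few server-side counters that could be sampled on both machines during the load to see whether the inserts are waiting on something; a sketch:
db.currentOp({ waitingForLock: true })                // operations currently queued behind a lock
db.serverStatus().globalLock.currentQueue             // readers/writers queued right now
db.serverStatus().wiredTiger.concurrentTransactions   // WiredTiger read/write ticket availability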


iamcyruss
(1 rep)
Nov 25, 2019, 04:41 PM
• Last activity: Nov 25, 2019, 09:43 PM
Showing page 1 of 20 total questions