
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

7 votes
1 answer
13258 views
Restore mongoDB by --repair and WiredTiger
We accidentally deleted our **MongoDB data path** with `rm -rf /data/db`, and thanks to **extundelete** we recovered it and got the directory `/data/db` back. The files in it were generated under MongoDB **version 3.4**. [screenshots: the recovered directory listing, the contents of the diagnostic.data folder, and the contents of the journal folder]

# Step 1: try to run mongod as normal

**a)** We ran `mongod --port 27017 --dbpath /data/db --bind_ip_all` and connected with `mongo`, expecting to see a user-defined database wecaXX. **But it did not show up.**
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
The same was true in Robo3T [screenshot].

**b)** Then I tried to run `mongod --port 27017 --dbpath /data/db --bind_ip_all --repair`. The result was:
2019-03-25T14:10:02.170+0800 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] MongoDB starting : pid=23018 port=27017 dbpath=/data/db 64-bit host=iZj6c0pipuxk17pb7pbaw0Z
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] db version v4.0.7
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] git version: 1b82c812a9c0bbf6dc79d5400de9ea99e6ffa025
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] allocator: tcmalloc
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] modules: none
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] build environment:
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten]     distarch: x86_64
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten]     target_arch: x86_64
2019-03-25T14:10:02.191+0800 I CONTROL  [initandlisten] options: { net: { bindIpAll: true, port: 27017 }, repair: true, storage: { dbPath: "/data/db" } }
2019-03-25T14:10:02.191+0800 W STORAGE  [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
2019-03-25T14:10:02.194+0800 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-03-25T14:10:02.194+0800 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
2019-03-25T14:10:02.194+0800 I STORAGE  [initandlisten]
2019-03-25T14:10:02.194+0800 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-03-25T14:10:02.194+0800 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem 
2019-03-25T14:10:02.194+0800 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-03-25T14:10:02.818+0800 E STORAGE  [initandlisten] WiredTiger error (17) [1553494202:818725][23018:0x7f6119074a40], connection: __posix_open_file, 715: /data/db/WiredTiger.wt: handle-open: open: File exists Raw: [1553494202:818725][23018:0x7f6119074a40], connection: __posix_open_file, 715: /data/db/WiredTiger.wt: handle-open: open: File exists
2019-03-25T14:10:02.818+0800 I STORAGE  [initandlisten] WiredTiger message unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.1
2019-03-25T14:10:03.832+0800 I STORAGE  [initandlisten] WiredTiger message [1553494203:832267][23018:0x7f6119074a40], txn-recover: Main recovery loop: starting at 4/11366912 to 5/256
2019-03-25T14:10:03.832+0800 I STORAGE  [initandlisten] WiredTiger message [1553494203:832674][23018:0x7f6119074a40], txn-recover: Recovering log 4 through 5
2019-03-25T14:10:03.898+0800 I STORAGE  [initandlisten] WiredTiger message [1553494203:898252][23018:0x7f6119074a40], txn-recover: Recovering log 5 through 5
2019-03-25T14:10:03.964+0800 I STORAGE  [initandlisten] WiredTiger message [1553494203:964880][23018:0x7f6119074a40], txn-recover: Set global recovery timestamp: 0
2019-03-25T14:10:03.998+0800 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-03-25T14:10:03.999+0800 E STORAGE  [initandlisten] WiredTiger error (17) [1553494203:999855][23018:0x7f6119074a40], WT_SESSION.create: __posix_open_file, 715: /data/db/_mdb_catalog.wt: handle-open: open: File exists Raw: [1553494203:999855][23018:0x7f6119074a40], WT_SESSION.create: __posix_open_file, 715: /data/db/_mdb_catalog.wt: handle-open: open: File exists
2019-03-25T14:10:04.000+0800 I STORAGE  [initandlisten] WiredTiger message unexpected file _mdb_catalog.wt found, renamed to _mdb_catalog.wt.1
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten]
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten]
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten]
2019-03-25T14:10:04.015+0800 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 3824 processes, 65535 files. Number of processes should be at least 32767.5 : 0.5 times number of files.
2019-03-25T14:10:04.020+0800 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: 47d8713d-ac61-4081-83bf-60209ad60a7d
2019-03-25T14:10:04.034+0800 W ASIO     [initandlisten] No TransportLayer configured during NetworkInterface startup
2019-03-25T14:10:04.036+0800 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.0
2019-03-25T14:10:04.036+0800 I STORAGE  [initandlisten] repairDatabase admin
2019-03-25T14:10:04.037+0800 I STORAGE  [initandlisten] Repairing collection admin.system.version
2019-03-25T14:10:04.040+0800 I STORAGE  [initandlisten] Verify succeeded on uri table:collection-0-4352287918877967674. Not salvaging.
2019-03-25T14:10:04.048+0800 I INDEX    [initandlisten] build index on: admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }
2019-03-25T14:10:04.048+0800 I INDEX    [initandlisten]          building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-03-25T14:10:04.055+0800 I STORAGE  [initandlisten] finished checking dbs
2019-03-25T14:10:04.055+0800 I STORAGE  [initandlisten] WiredTigerKVEngine shutting down
2019-03-25T14:10:04.056+0800 I STORAGE  [initandlisten] Shutting down session sweeper thread
2019-03-25T14:10:04.057+0800 I STORAGE  [initandlisten] Finished shutting down session sweeper thread
2019-03-25T14:10:04.140+0800 I STORAGE  [initandlisten] shutdown: removing fs lock...
2019-03-25T14:10:04.140+0800 I CONTROL  [initandlisten] now exiting
2019-03-25T14:10:04.140+0800 I CONTROL  [initandlisten] shutting down with code:0
**c)** After the repair I re-ran `mongod --port 27017 --dbpath /data/db --bind_ip_all`, and it still showed **nothing** (the same result as in step a)).

# Step 2: use the WiredTiger tool

Since that didn't work as I expected, I started looking for other tools or approaches that might help. I found the article "Recovering a WiredTiger collection from a corrupt MongoDB installation", which explains how to use the WiredTiger `wt` utility to recover collections, and decided to give it a try. I created a folder `db_backup` and put all my files into it, created a folder `wiredtiger-3.0.0` under `db_backup`, and installed WiredTiger there [screenshot].

**a) Salvage the collection**
root@iZj6c0pipuxk17pb7pbaw0Z:/data/db_backup/# cd ./wiredtiger-3.0.0
root@iZj6c0pipuxk17pb7pbaw0Z:/data/db_backup/wiredtiger-3.0.0# ./wt -v -h ../db_for_wt -C "extensions=[./ext/compressors/snappy/.libs/libwiredtiger_snappy.so]" -R salvage collection-23--360946650994865928.wt
        WT_SESSION.salvage 100
Based on the article above, this means we successfully salvaged the collection.

# Error occurred

**b) Dump the collection**
root@iZj6c0pipuxk17pb7pbaw0Z:/data/db_backup/wiredtiger-3.0.0# ./wt -v -h ../../db_backup -C "extensions=[./ext/compressors/snappy/.libs/libwiredtiger_snappy.so]" -R dump -f ../collection.dump collection-23--360946650994865928
lt-wt: cursor open(table:collection-23--360946650994865928) failed: No such file or directory
Note that the dump command above does indeed end without `.wt`. I have checked my directory argument and found no fault; the file `collection-23--360946650994865928.wt` **is right there** in the directory listing [screenshot]. A screenshot of the contents of `collection-23--360946650994865928.wt`, taken just after it was salvaged, shows some English and Chinese characters, and that data really does come from one of the collections of the database wecaXX [screenshot].

# Questions

**1)** I'm stuck at dumping the collection. Does anyone know what is wrong with that command?

**2)** `collection-23--360946650994865928.wt` contains our most important data. Even if we cannot restore the whole database, extracting data from that file would still be very useful. However, if we copy and paste manually, not all of the characters are readable; they still need to be decoded. Does anyone know how to do that?

**3)** Is it possible that we did not recover all the files (or the full contents of all the files) under /data/db?

**Any comment or answer will be appreciated!**
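For reference, a minimal sketch of the salvage-then-dump sequence from the linked article, using the same -h home directory for both steps; the paths below are illustrative and assume that directory holds the WiredTiger metadata files (WiredTiger.wt, WiredTiger.turtle) alongside the collection file, which is not necessarily the layout in the question:

cd /data/db_backup/wiredtiger-3.0.0
# salvage the collection file (the -h directory holds WiredTiger.wt, WiredTiger.turtle and the .wt file)
./wt -v -h ../db_for_wt -C "extensions=[./ext/compressors/snappy/.libs/libwiredtiger_snappy.so]" -R salvage collection-23--360946650994865928.wt
# dump the salvaged table from the same -h directory (no .wt suffix here, per the article)
./wt -v -h ../db_for_wt -C "extensions=[./ext/compressors/snappy/.libs/libwiredtiger_snappy.so]" -R dump -f ../collection.dump collection-23--360946650994865928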
国贵棠 (71 rep)
Mar 25, 2019, 10:10 AM • Last activity: Jun 29, 2025, 11:03 PM
0 votes
1 answer
372 views
MongoDB secondaries become unresponsive during replication
We have 3 servers running: one primary and two secondaries. The primary has 4 vCPUs and 16 GB memory, both secondaries have 8 vCPUs and 64 GB memory. Every night we run a full sync of several large collections on the primary with multiple threads. During that sync, both secondaries become unavailable from time to time. mongod.log records the following notice:

serverstatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after network: 0, after opLatencies: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 0, after security: 0, after storageEngine: 0, after tcmalloc: 0, after wiredTiger: 4992, at end: 4992 }

[screenshot of mongostat statistics during that time]

Our clients have readPreference set to secondaries only, but we don't have many connections during that time. The configuration is the default mongod config with no special tweaks. So the only thing I see is that the mongod log reports an "after wiredTiger" entry with a high value. Any clue what's happening here? The MongoDB version in use is 3.4.16.
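One hedged way to see whether the secondaries are stalling on WiredTiger cache pressure during the nightly sync is to sample the standard cache counters from serverStatus while the sync runs (a minimal sketch; these field names are the usual wiredTiger.cache keys):

// run on a secondary while the nightly sync is in progress
var c = db.serverStatus().wiredTiger.cache;
print("bytes currently in the cache:     " + c["bytes currently in the cache"]);
print("maximum bytes configured:         " + c["maximum bytes configured"]);
print("tracked dirty bytes in the cache: " + c["tracked dirty bytes in the cache"]);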
Chris (1 rep)
Aug 30, 2018, 08:13 AM • Last activity: May 29, 2025, 12:01 PM
0 votes
1 answer
552 views
In MongoDB serverStatus(), what is the meaning of "connection data handles currently active"?
When running db.serverStatus() in the MongoDB shell (3.4, WiredTiger), the results contain the following data:

{
    "connection data handles currently active" : 94862,
    "connection sweep candidate became referenced" : 0,
    "connection sweep dhandles closed" : 91885,
    "connection sweep dhandles removed from hash list" : 648514,
    "connection sweep time-of-death sets" : 1514067,
    "connection sweeps" : 40077,
    "session dhandles swept" : 110175,
    "session sweep attempts" : 710643
}

What is the meaning of "connection data handles currently active"? Does it have an impact on the instance's performance?
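For context, a minimal sketch of pulling just this section out of serverStatus; the assumption here is that these counters live under the wiredTiger["data-handle"] sub-document of the output, and that each data handle roughly corresponds to an open WiredTiger table (a collection or index):

// print only the data-handle statistics
var dh = db.serverStatus().wiredTiger["data-handle"];
printjson(dh);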
Tzach (307 rep)
Sep 19, 2017, 10:46 AM • Last activity: Apr 24, 2025, 11:02 PM
0 votes
0 answers
23 views
Trying to understand db.Stats() for mongodb
I'm trying to manage disk space usage for my MongoDB deployment, which uses WiredTiger. So far I have understood that when you delete documents, WiredTiger doesn't give the freed space back to the OS but keeps it for itself. Now I'm trying to make use of the metrics that db.stats() provides (https://www.mongodb.com/docs/manual/reference/command/dbStats/). I'm especially interested in the 'free_' metrics, to calculate how much of the disk space the OS has given to MongoDB is still usable.

After several hours of testing, the changes in the metrics didn't make sense to me given the operations I performed, so I came up with the following hypothesis, for which I'm still looking for confirmation because the MongoDB docs seem imprecise on this point: if documents are deleted from collection A, WiredTiger keeps the freed space for itself and the 'freeStorage' metrics from db.stats() increase, but MongoDB (or rather WiredTiger) reuses the space counted in the 'freeStorage' metrics ONLY if you insert new documents into the same collection A.

Is this assumption correct? It would not reuse that space if you insert new documents into collection B, right? In that case it would consume new OS disk space, which increases fsUsedSize. Can you confirm this, and is there any official documentation that spells it out?
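A minimal sketch of how you might watch this per database and per collection while testing; the freeStorage option to dbStats is assumed to be available on your server version, and the block-manager counter is the standard per-collection equivalent (the collection name "A" is illustrative):

// database-level metrics including the free-storage fields
db.runCommand({ dbStats: 1, freeStorage: 1 })

// per-collection bytes WiredTiger keeps for reuse inside the file
db.getCollection("A").stats().wiredTiger["block-manager"]["file bytes available for reuse"]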
TheDude (101 rep)
Apr 18, 2025, 11:51 AM
0 votes
0 answers
35 views
MongoDB page size fine tuning disabling the compression
We have partitioned collections. Due to AWS disk limitations we set allocation_size to 64KB, internal_page_max to 64KB and leaf_page_max to 64KB. IOPS decreased dramatically, but we noticed a roughly 4x increase in storage. Collection r_09_2024 still uses the default parameters (allocation_size 4KB, internal_page_max 4KB, leaf_page_max 32KB):

Collection: r_09_2024 - Size: 1.23 TB, Storage Size: 412.03 GB
Collection: r_10_2024 - Size: 1.43 TB, Storage Size: 1.71 TB
Collection: r_11_2024 - Size: 1.13 TB, Storage Size: 1.18 TB

What should we do to keep compression effective with a bigger block size?
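For context, per-collection WiredTiger options of this kind are normally passed through a configString when the collection is created; a minimal sketch follows, where the collection name and the exact option values are illustrative and snappy is simply the default block compressor rather than a recommendation:

db.createCollection("r_12_2024", {
  storageEngine: {
    wiredTiger: {
      // illustrative values: larger pages plus an explicit block compressor
      configString: "allocation_size=64KB,internal_page_max=64KB,leaf_page_max=64KB,block_compressor=snappy"
    }
  }
})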
Yulian Oifa (1 rep)
Nov 26, 2024, 02:48 PM • Last activity: Nov 26, 2024, 02:49 PM
0 votes
1 answer
133 views
MongoDB: What causes bytesRead in insert operations?
Recently we had high CPU, memory and I/O usage on our MongoDB server. Checking the logs, all I found was some insert activity during that period. Inspecting the logs, I noticed most of the insert entries have bytesRead in the storage section, so I suspect this caused the I/O, and caching that data caused the high memory usage. After the insert spike, I/O and CPU went down but memory stayed the same, which only a restart resolved. Are these disk reads normal for insert operations? We are using **Mongo v4.0** with the WiredTiger storage engine on a CentOS 7 VM.

2024-02-14T23:39:44.533+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 34390, timeReadingMicros: 140837 } } 141ms
2024-02-14T23:40:16.785+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 24150, timeReadingMicros: 506594 } } 507ms
goodfella (595 rep)
Feb 15, 2024, 09:09 AM • Last activity: Feb 23, 2024, 07:29 PM
0 votes
1 answer
812 views
Mongodb 3.6.3 disappears after 2 weeks on centos 7 ec2 instance
My MongoDB has disappeared twice, after two weeks each time, over the weekend, within a four-week period. I had a snapshot to recover from, but I can't keep restoring my database every two weeks. On an EC2 instance where I hosted Mongo and the API in the same place, this was not an issue. Side note: does anyone know if it is possible to completely disable db.dropDatabase() or db.collection.drop()?

Below is my mongod.conf file:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /data
  logpath: /log/mongod.log
  journal:
    enabled: true
#  engine:
#  mmapv1:
  wiredTiger:
    prefixCompression: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
#  bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.

Any ideas about how this happens, or anyone who has solved a similar problem, would be greatly appreciated.

/var/log/mongodb/mongodb.log:

2018-02-26T17:14:09.799+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] MongoDB starting : pid=# port=port dbpath=/var/lib/mongo 64-bit host=ip-*
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] db version v3.6.3
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] git version: 9586e557d54ef70f9ca4b43c26892cd55257e1a5
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.0-fips 29 Mar 2010
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] allocator: tcmalloc
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] modules: none
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] build environment:
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten]     distmod: amazon
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten]     distarch: x86_64
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten]     target_arch: x86_64
2018-02-26T17:14:09.987+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod.pid", timeZoneInfo: "/usr/share/zoneinfo" }, storage: { dbPath: "/var/lib/mongo", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2018-02-26T17:14:09.988+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1382M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-02-26T17:14:10.692+0000 I CONTROL [initandlisten]
2018-02-26T17:14:10.693+0000 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 038fb561-1163-46cb-bcae-b68ddebb4081
2018-02-26T17:14:10.700+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.6
2018-02-26T17:14:10.703+0000 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: c994079b-4e9b-416f-91b4-4cbc60e2c118
2018-02-26T17:14:10.710+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongo/diagnostic.data'
2018-02-26T17:14:10.710+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2018-02-26T17:14:21.788+0000 I NETWORK [listener] connection accepted from 127.0.0.1:54282 #1 (1 connection now open)
2018-02-26T17:14:21.788+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:54282 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "CentOS Linux release 7.4.1708 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-693.11.6.el7.x86_64" } }
2018-02-26T17:14:23.882+0000 I NETWORK [conn1] end connection 127.0.0.1:54282 (0 connections now open)
2018-02-26T17:19:10.711+0000 I STORAGE [thread2] createCollection: config.system.sessions with generated UUID: b181a7ff-d0e5-456e-bee5-ff7e40f32d0a
2018-02-26T17:19:10.739+0000 I INDEX [thread2] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2018-02-26T17:19:10.739+0000 I INDEX [thread2] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-02-26T17:19:10.740+0000 I INDEX [thread2] build index done. scanned 0 total records. 0 secs
2018-02-26T17:28:02.551+0000 I NETWORK [listener] connection accepted from 127.0.0.1:54292 #2 (1 connection now open)
2018-02-26T17:28:02.551+0000 I NETWORK [conn2] received client metadata from 127.0.0.1:54292 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "CentOS Linux release 7.4.1708 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-693.11.6.el7.x86_64" } }
2018-02-26T17:33:15.093+0000 I NETWORK [conn2] end connection 127.0.0.1:54292 (0 connections now open)
2018-02-26T17:34:20.689+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-02-26T17:34:20.689+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2018-02-26T17:34:20.689+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2018-02-26T17:34:20.690+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-02-26T17:34:20.692+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-02-26T17:34:20.763+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2018-02-26T17:34:20.763+0000 I CONTROL [signalProcessingThread] now exiting
2018-02-26T17:34:20.763+0000 I CONTROL [signalProcessingThread] shutting down with code:0

journalctl returns these red lines (also for log.mount and journal.mount):

Failed to create mount unit file /run/systemd/generator/data.mount, as it already exists. Duplicate entry in /etc/fstab?
piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Error: Driver 'pcspkr' is already registered, aborting...
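On the side note about disabling db.dropDatabase(): there is no single server option for that, but enabling access control and giving the application a role without the dropDatabase privilege has much the same effect (and, given the startup warning above that access control is not enabled, is worth considering here anyway). A minimal sketch; the user names, passwords and database name are placeholders:

use admin
db.createUser({ user: "siteAdmin", pwd: "changeThis", roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] })
use myAppDb
db.createUser({ user: "appUser", pwd: "changeThis", roles: [ { role: "readWrite", db: "myAppDb" } ] })
// readWrite cannot run dropDatabase (that needs dbAdmin/dbOwner), though it can still drop individual collections

After creating the users, set security.authorization: enabled in mongod.conf and restart mongod so unauthenticated clients can no longer modify data.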
Brandon (235 rep)
Mar 14, 2018, 05:15 PM • Last activity: Feb 23, 2023, 10:03 AM
3 votes
1 answer
347 views
WiredTiger Page Size
I'm unable to find out whether the WiredTiger storage engine in MongoDB has a page size, that is, some analog of innodb_page_size in MySQL.
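A minimal way to see the page-size related settings WiredTiger applied to a given collection is to read its creationString from collStats; the collection name below is illustrative:

// internal_page_max, leaf_page_max and allocation_size appear in this string
db.getCollection("mycoll").stats().wiredTiger.creationString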
Pavel Sapezhko (183 rep)
Nov 23, 2022, 01:34 PM • Last activity: Nov 24, 2022, 01:15 PM
0 votes
2 answers
751 views
usage percentage of Mongodb internal cache
I've run two performance tests on a MongoDB server with an identical environment and settings, as far as I know, and found that the throughput was 10% apart. When I inspected the mongostat logs, I found that in the first test, with the faster throughput, MongoDB used 95% of its cache_size; in the second test it stayed at 80%. I rebooted the server between the runs, but didn't change any of the settings of the server or the client. The mongodb log reports the exact same setting in both tests.

First test:

2019-08-14T00:42:32.588-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=96122M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),

Second test (after reboot):

2019-08-14T01:11:15.722-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=96122M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),

Here is a sample from mongostat of the first test:

insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time
*0 104209 *0 *0 0 9|0 0.0% 94.9% 0 96.7G 94.7G 107|0 128|0 27.0m 40.2m 272 Aug 14 00:51:13.759
*0 102610 *0 *0 0 0|0 0.0% 94.9% 1 96.7G 94.6G 109|0 128|0 26.6m 39.6m 272 Aug 14 00:51:44.092
*0 104724 *0 *0 0 9|0 0.0% 94.9% 0 96.7G 94.6G 87|0 128|0 27.1m 40.4m 272 Aug 14 00:52:13.767

And a sample from the second test:

insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time
*0 94009 *0 *0 0 9|0 0.0% 80.9% 1 83.1G 80.9G 127|0 81|0 24.4m 36.3m 272 Aug 14 01:17:29.289
*0 93680 *0 *0 0 0|0 0.0% 80.3% 0 83.1G 81.0G 0|0 107|0 24.3m 36.2m 272 Aug 14 01:17:59.119
*0 93068 *0 *0 0 9|0 0.0% 80.9% 1 83.1G 81.0G 130|0 128|0 24.1m 35.9m 272 Aug 14 01:18:29.123
*0 92599 *0 *0 0 0|0 0.0% 80.0% 0 83.1G 80.9G 137|0 128|0 24.0m 35.8m 272 Aug 14 01:18:59.186

- What could have caused the difference in cache usage percentage and throughput?
- Is it safe to change this setting explicitly so MongoDB uses more of its cache?
- Can I change this setting from the command line? (I'm starting mongod from the command line, not as a service.)
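On the last point, the cache size can be set at startup and, on versions that support the runtime parameter, adjusted while the server is running; a minimal sketch where the 100 GB figure is purely illustrative:

# at startup
mongod --dbpath /data/db --wiredTigerCacheSizeGB 100

# at runtime, from the mongo shell (availability of this runtime parameter depends on the server version)
db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=100G" })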
Yuval (33 rep)
Aug 14, 2019, 04:37 PM • Last activity: Jul 22, 2022, 04:04 AM
1 votes
1 answers
4777 views
Understanding why MongoDB uses more RAM than the allowed wiredTigerCacheSizeGB
My mongod command
mongod --wiredTigerCacheSizeGB 5
In practice, the mongod instance uses up to 8.6 GB (on a VM with 50 GB RAM). Indexes use about 100 MB (checked with db.stats()). tcmalloc information:
MALLOC:     3776745552 ( 3601.8 MiB) Bytes in use by application
MALLOC: +   4744544256 ( 4524.8 MiB) Bytes in page heap freelist
MALLOC: +     20067616 (   19.1 MiB) Bytes in central cache freelist
MALLOC: +      3584128 (    3.4 MiB) Bytes in transfer cache freelist
MALLOC: +     12470800 (   11.9 MiB) Bytes in thread cache freelists
MALLOC: +     22544384 (   21.5 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =   8579956736 ( 8182.5 MiB) Actual memory used (physical + swap)
MALLOC: +     29933568 (   28.5 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =   8609890304 ( 8211.0 MiB) Virtual address space used
MALLOC:
MALLOC:          21650              Spans in use
MALLOC:             59              Thread heaps in use
MALLOC:           4096              Tcmalloc page size
The server had been running for 3 days. How can I tell why mongod is consuming more memory than the allowed cache size? Is there any way to manually free up the "page heap freelist"? The command output recommends calling ReleaseFreeMemory() to release freelist memory to the OS, but I don't think I can do that from outside the process. The MongoDB version is 4.4.1; I'm using the mongo:4.4.1-bionic docker image.
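One knob that exists for this, sketched below and hedged: on versions where the parameter is available, tcmallocAggressiveMemoryDecommit makes the allocator return freed pages to the OS more eagerly, at the cost of some CPU; whether that is the right trade-off for this workload is a separate question.

// enable aggressive decommit at runtime
db.adminCommand({ setParameter: 1, tcmallocAggressiveMemoryDecommit: 1 })
// or at startup: mongod --setParameter tcmallocAggressiveMemoryDecommit=1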
Mugen (117 rep)
Jun 30, 2021, 01:16 PM • Last activity: Jul 2, 2021, 02:48 AM
18 votes
3 answers
29053 views
How to restore .wt backup file to local MongoDB?
This is a question that was asked before, but I have tried all solutions, and simply cannot get it right. I have spent quite some time researching before posting this question. I have looked at the official MongoDB documents and many other blogs. How to restore a .wt MongoDB backup file to a local MongoDB database?
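A minimal sketch of the usual approach, assuming the .wt files are a complete copy of a dbPath (including WiredTiger.wt, WiredTiger.turtle and the journal directory) taken from the same MongoDB major version; the paths and port numbers are illustrative:

# start a throwaway mongod on a copy of the backed-up data directory
cp -a /backup/db /tmp/restore-db
mongod --dbpath /tmp/restore-db --port 27018 --fork --logpath /tmp/restore.log

# dump from it, then restore into the local target instance
mongodump --port 27018 --out /tmp/dump
mongorestore --port 27017 /tmp/dump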
Saketh (323 rep)
Apr 20, 2018, 12:41 AM • Last activity: Mar 4, 2021, 05:42 AM
50 votes
3 answers
108248 views
MongoDB using too much memory
We've been using MongoDB for several weeks now, and the overall trend we've seen is that mongodb is using **way too much memory** (much more than the whole size of its dataset plus indexes). I've already read through this question and this question, but none seem to address the issue I've been facing; they actually explain what's already explained in the documentation. The following are the results of the *htop* and *show dbs* commands. [screenshots: htop output and show dbs output]

I know that mongodb uses memory-mapped I/O, so basically the OS handles caching things in memory, and mongodb should theoretically let go of its cached memory when another process requests free memory, but from what we've seen, it doesn't. OOM kicks in and starts killing other important processes, e.g. postgres, redis, etc. (As can be seen, to overcome this problem we've increased the RAM to 183 GB, which now works but is pretty expensive. mongo is using ~87 GB of RAM, nearly 4x the size of its whole dataset.)

So,

1. Is this much memory usage really expected and normal? (As per the documentation, WiredTiger uses at most ~60% of RAM for its cache, but considering the dataset size, does it even have enough data to be able to take 86 GB of RAM?)
2. Even if the memory usage is expected, why won't mongo let go of its allocated memory when another process starts requesting more memory? Various other running processes were constantly being killed by the Linux OOM killer, including mongodb itself, before we increased the RAM, and it made the system totally unstable.

Thanks!
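For reference, a hedged sketch of capping the WiredTiger cache explicitly in mongod.conf; the 60 GB figure is purely illustrative and should be tuned to leave headroom for other processes and the OS file cache:

storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 60   # illustrative cap, not a recommendation for this workload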
Alireza (1011 rep)
Aug 22, 2016, 07:50 PM • Last activity: Feb 25, 2021, 09:01 AM
1 votes
3 answers
14711 views
Mongodb Not Restarting with Collections, WiredTiger.wt may be corrupt
I believe my MongoDB did not have a clean shutdown. I am able to restart it in a new location which doesn't have all of my collections. If I try to repair or start it in the old location, it gives the following error:

[ec2-user@ip-172-31-30-192 tmp]$ mongod --repair --dbpath /data
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] MongoDB starting : pid=31865 port=27017 dbpath=/data 64-bit host=ip-172-31-30-192
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] db version v3.2.16
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] git version: 056bf45128114e44c5358c7a8776fb582363e094
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.0-fips 29 Mar 2010
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] allocator: tcmalloc
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] modules: none
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] build environment:
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten]     distmod: amazon
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten]     distarch: x86_64
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten]     target_arch: x86_64
2017-08-20T16:20:30.951+0000 I CONTROL [initandlisten] options: { repair: true, storage: { dbPath: "/data" } }
2017-08-20T16:20:30.972+0000 I - [initandlisten] Detected data files in /data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-08-20T16:20:30.972+0000 I STORAGE [initandlisten] Detected WT journal files. Running recovery from last checkpoint.
2017-08-20T16:20:30.972+0000 I STORAGE [initandlisten] journal to nojournal transition config: create,cache_size=17G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (-31802) [1503246030:981472][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (0) [1503246030:981530][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (0) [1503246030:981548][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk
2017-08-20T16:20:30.981+0000 E STORAGE [initandlisten] WiredTiger (0) [1503246030:981564][31865:0x7f6ad1d9fd80], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options
2017-08-20T16:20:30.981+0000 I - [initandlisten] Assertion: 28718:-31802: WT_ERROR: non-specific WiredTiger error
2017-08-20T16:20:30.982+0000 I STORAGE [initandlisten] exception in initAndListen: 28718 -31802: WT_ERROR: non-specific WiredTiger error, terminating
2017-08-20T16:20:30.982+0000 I CONTROL [initandlisten] dbexit: rc: 100

Is there a way to fix my wiredtiger.wt file or move my collections and indexes from the old location into the new location?
Jay Gupta (11 rep)
Aug 20, 2017, 11:12 PM • Last activity: Feb 6, 2021, 12:02 AM
0 votes
1 answers
728 views
MongoDB high usage WiredTiger
We are having difficulty with high MongoDB RAM usage on an AWS EC2 instance (Ubuntu) with 30 GB RAM. On this server there are over 3K databases, and it runs out of memory from time to time. I have read a lot about MongoDB memory usage, with most advice recommending a smaller WiredTiger cache size, but I am not sure that would help. The following information might help:
insert query update delete getmore command dirty  used flushes vsize   res qrw arw net_in net_out conn set repl                time
    *0    *0     *3     *0       0    39|0  0.4% 79.3%       0 13.1G 10.3G 0|0 1|0  6.26k   52.0k  251 rs0  SEC Nov 10 05:13:34.022
    *0    *0     *1     *0       0    58|0  0.4% 79.3%       0 13.1G 10.3G 0|0 1|0  10.4k   56.9k  250 rs0  SEC Nov 10 05:13:37.020
    *0    *0     *2     *0       0   120|0  0.4% 79.3%       0 13.1G 10.3G 0|0 2|0  22.6k   83.7k  250 rs0  SEC Nov 10 05:13:40.027
------------------------------------------------
MALLOC:    17644185184 (16826.8 MiB) Bytes in use by application
MALLOC: +    504643584 (  481.3 MiB) Bytes in page heap freelist
MALLOC: +    977738416 (  932.4 MiB) Bytes in central cache freelist
MALLOC: +      1190016 (    1.1 MiB) Bytes in transfer cache freelist
MALLOC: +    312620656 (  298.1 MiB) Bytes in thread cache freelists
MALLOC: +    137748736 (  131.4 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =  19578126592 (18671.2 MiB) Actual memory used (physical + swap)
MALLOC: +    488529920 (  465.9 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =  20066656512 (19137.1 MiB) Virtual address space used
MALLOC:
MALLOC:        1730087              Spans in use
MALLOC:           3006              Thread heaps in use
MALLOC:           4096              Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.
``` { "application threads page read from disk to cache count" : 1308217, "application threads page read from disk to cache time (usecs)" : 91179623, "application threads page write from cache to disk count" : 95640493, "application threads page write from cache to disk time (usecs)" : 1699078290, "bytes belonging to page images in the cache" : 5025574875, "bytes belonging to the cache overflow table in the cache" : 182, "bytes currently in the cache" : 12703066940, "bytes dirty in the cache cumulative" : NumberLong("4256396040375"), "bytes not belonging to page images in the cache" : 7677492064, "bytes read into cache" : 28764089566, "bytes written from cache" : NumberLong("2070527030210"), "cache overflow cursor application thread wait time (usecs)" : 0, "cache overflow cursor internal thread wait time (usecs)" : 0, "cache overflow score" : 0, "cache overflow table entries" : 0, "cache overflow table insert calls" : 0, "cache overflow table max on-disk size" : 0, "cache overflow table on-disk size" : 0, "cache overflow table remove calls" : 0, "checkpoint blocked page eviction" : 0, "eviction calls to get a page" : 3086214, "eviction calls to get a page found queue empty" : 1983708, "eviction calls to get a page found queue empty after locking" : 7223, "eviction currently operating in aggressive mode" : 0, "eviction empty score" : 0, "eviction passes of a file" : 1255755, "eviction server candidate queue empty when topping up" : 11915, "eviction server candidate queue not empty when topping up" : 2358, "eviction server evicting pages" : 0, "eviction server slept, because we did not make progress with eviction" : 884827, "eviction server unable to reach eviction goal" : 0, "eviction server waiting for a leaf page" : 350079123, "eviction server waiting for an internal page sleep (usec)" : 0, "eviction server waiting for an internal page yields" : 0, "eviction state" : 128, "eviction walk target pages histogram - 0-9" : 1104474, "eviction walk target pages histogram - 10-31" : 150582, "eviction walk target pages histogram - 128 and higher" : 0, "eviction walk target pages histogram - 32-63" : 4, "eviction walk target pages histogram - 64-128" : 695, "eviction walk target strategy both clean and dirty pages" : 0, "eviction walk target strategy only clean pages" : 1255755, "eviction walk target strategy only dirty pages" : 0, "eviction walks abandoned" : 102722, "eviction walks gave up because they restarted their walk twice" : 1129365, "eviction walks gave up because they saw too many pages and found no candidates" : 2637, "eviction walks gave up because they saw too many pages and found too few candidates" : 165, "eviction walks reached end of tree" : 2359112, "eviction walks started from root of tree" : 1249356, "eviction walks started from saved location in tree" : 6399, "eviction worker thread active" : 4, "eviction worker thread created" : 0, "eviction worker thread evicting pages" : 1098213, "eviction worker thread removed" : 0, "eviction worker thread stable number" : 0, "files with active eviction walks" : 0, "files with new eviction walks started" : 1229747, "force re-tuning of eviction workers once in a while" : 0, "forced eviction - pages evicted that were clean count" : 2715, "forced eviction - pages evicted that were clean time (usecs)" : 5182, "forced eviction - pages evicted that were dirty count" : 1919, "forced eviction - pages evicted that were dirty time (usecs)" : 7555109, "forced eviction - pages selected because of too many deleted items count" : 3202, "forced eviction 
- pages selected count" : 6776, "forced eviction - pages selected unable to be evicted count" : 1031, "forced eviction - pages selected unable to be evicted time" : 65, "hazard pointer blocked page eviction" : 623, "hazard pointer check calls" : 1105566, "hazard pointer check entries walked" : 28919669, "hazard pointer maximum array length" : 2, "in-memory page passed criteria to be split" : 2230, "in-memory page splits" : 1111, "internal pages evicted" : 5255, "internal pages queued for eviction" : 1818, "internal pages seen by eviction walk" : 39433, "internal pages seen by eviction walk that are already queued" : 1642, "internal pages split during eviction" : 5, "leaf pages split during eviction" : 2742, "maximum bytes configured" : 15881732096, "maximum page size at eviction" : 76162, "modified pages evicted" : 103981, "modified pages evicted by application threads" : 0, "operations timed out waiting for space in cache" : 0, "overflow pages read into cache" : 0, "page split during eviction deepened the tree" : 0, "page written requiring cache overflow records" : 0, "pages currently held in the cache" : 242898, "pages evicted by application threads" : 0, "pages queued for eviction" : 1213436, "pages queued for eviction post lru sorting" : 1215321, "pages queued for urgent eviction" : 2479, "pages queued for urgent eviction during walk" : 0, "pages read into cache" : 1376530, "pages read into cache after truncate" : 227271, "pages read into cache after truncate in prepare state" : 0, "pages read into cache requiring cache overflow entries" : 0, "pages read into cache requiring cache overflow for checkpoint" : 0, "pages read into cache skipping older cache overflow entries" : 0, "pages read into cache with skipped cache overflow entries needed later" : 0, "pages read into cache with skipped cache overflow entries needed later by checkpoint" : 0, "pages requested from the cache" : 60531186418, "pages seen by eviction walk" : 3226999, "pages seen by eviction walk that are already queued" : 828715, "pages selected for eviction unable to be evicted" : 1870, "pages selected for eviction unable to be evicted as the parent page has overflow items" : 0, "pages selected for eviction unable to be evicted because of active children on an internal page" : 1246, "pages selected for eviction unable to be evicted because of failure in reconciliation" : 0, "pages selected for eviction unable to be evicted due to newer modifications on a clean page" : 0, "pages walked for eviction" : 23891436, "pages written from cache" : 95646477, "pages written requiring in-memory restoration" : 16, "percentage overhead" : 8, "tracked bytes belonging to internal pages in the cache" : 30689239, "tracked bytes belonging to leaf pages in the cache" : 12672377701, "tracked dirty bytes in the cache" : 158416275, "tracked dirty pages in the cache" : 6130, "unmodified pages evicted" : 998576 }
Orgil (101 rep)
Nov 10, 2020, 05:15 AM • Last activity: Nov 16, 2020, 04:19 AM
0 votes
1 answer
1714 views
mongodb v3.4 - wiredtiger - compact does not work
I am using mongo v3.4.10 with WiredTiger. I run the following command: db.runCommand({ compact: "my_collection" }); the response is: /* 1 */ { "ok" : 1.0 } and the mongo log shows: 2017-10-31T00:12:20.282+0000 I COMMAND [conn23] compact mydb.my_collection begin, options: paddingMode: NONE validateDo...
I am using mongo v3.4.10 with WiredTiger. I run the following command:

db.runCommand({ compact: "my_collection" });

The response is:

/* 1 */
{
    "ok" : 1.0
}

and the mongo log shows:

2017-10-31T00:12:20.282+0000 I COMMAND [conn23] compact mydb.my_collection begin, options: paddingMode: NONE validateDocuments: 1
2017-10-31T00:12:21.662+0000 I COMMAND [conn23] compact mydb.my_collection end
2017-10-31T00:12:21.663+0000 I COMMAND [conn23] command mydb.my_collection appName: "MongoDB Shell" command: compact { compact: "my_collection" } numYields:0 reslen:22 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 1381ms

In short, everything looks fine, but the storageSize is exactly the same :/ I deleted lots of fields with a value of null, so I would expect a **significant** drop in disk usage. I read online that early v3 had issues with this, but I'm on the latest version right now. The database is quite large, so if I can avoid repairDatabase that would be good. Any help is appreciated, thanks.

UPDATE: I performed a mongodump / mongorestore, and it resulted in a 9% disk-use reduction (admittedly I was expecting more). Nonetheless, it seems like compact *should* have had an impact.

UPDATE 2: I performed a repairDatabase and it had no effect. This is explicitly stated in the mongo docs (https://docs.mongodb.com/manual/reference/command/repairDatabase):

> For WiredTiger, the operation rebuilds the database but does not result in the compaction of the collections in the database

but I thought I'd give it a crack. Still not sure why compact did nothing, though.
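One hedged way to see whether compact has anything to reclaim at all is to compare the block-manager counters from collStats before and after running it; the collection name matches the question, and the field path is the standard WiredTiger section of the stats output:

var s = db.getCollection("my_collection").stats();
print("file size in bytes:             " + s.wiredTiger["block-manager"]["file size in bytes"]);
print("file bytes available for reuse: " + s.wiredTiger["block-manager"]["file bytes available for reuse"]);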
mils (175 rep)
Oct 31, 2017, 02:33 AM • Last activity: Sep 8, 2020, 05:05 PM
3 votes
2 answers
12293 views
MongoDB Maxing CPU Usage
I have a weird one here. We set up our production MongoDB, which was populated from our test one so we had something to work off of. As soon as we started the service we hit an instant 100% CPU load with nothing running, so we removed the server from its replica set and got it down to 2-3% CPU. When we had all of our users active, we saw it spike back up to 100%, and we decided to leave it overnight and monitor it for any signs of improvement. The next morning we came in and the CPU was down to around 40% and it seemed very stable.

Now, about a week later, we have come across the same issue as above, except this has been happening for almost 4 days where we are maxed out on CPU. I used the db.currentOp() command and found one operation from "WT RecordStoreThread: local.oplog.rs" that has been running for 355254 seconds, which equates to around 98 hours and matches the time the CPU has been maxed.

The machine has 8 cores @ 2.4GHz and 16GB RAM, so I wouldn't think that CPU would be this massive an issue at such an early stage. Our config file looks like the following, with the IP and port starred out:

systemLog:
  destination: file
  path: E:\data\log\mongod.log
storage:
  dbPath: E:\data\db
net:
  port: xxxxx
  bindIp: xxx.xxx.xxx.xxx
#replication:
#  replSetName: "rs1"
security:
  authorization: enabled
#  keyFile: E:\data\mongodb.key

I have restarted the service today hoping to maybe release some resources, but it instantly maxed out again. I have looked at other issues on SO and none seem to have a real answer to this problem.

Thanks for taking the time to read this; if anybody has come across the same issue or has any idea how to solve it, I would be glad to hear your feedback!
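A minimal sketch of narrowing currentOp to long-running operations like the one mentioned; the one-hour threshold is illustrative:

// show operations that have been running for more than an hour
db.currentOp({ "secs_running": { $gt: 3600 } }).inprog.forEach(function (op) {
  print(op.opid + "  " + op.secs_running + "s  " + (op.desc || op.op) + "  " + (op.ns || ""));
});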
Jonathan Coffey (131 rep)
Sep 20, 2017, 02:26 PM • Last activity: Apr 13, 2020, 04:14 PM
0 votes
1 answer
760 views
MongoDB - Does WiredTiger benefit from virtual cpus cores the same way it does with physical cores?
According to https://docs.mongodb.com/manual/administration/production-notes/#id5 The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores. We will be running mongoDB in a VMware vSphere enviroment and I am tasked with allocating resources. Does that statement hold true when it comes to virtual CPU cores? Thanks!
Dylan (103 rep)
May 3, 2019, 06:16 PM • Last activity: Apr 8, 2020, 12:01 PM
1 votes
1 answers
1168 views
First query is really slow with WiredTiger, not with mmapv1
We recently upgraded to MongoDB 4.0. Loving the performance improvements with WiredTiger, but all of a sudden our test suite is timing out on certain machines. Before running our tests, we run the following command:
mongod --dbpath ./.test-data --bind_ip 127.0.0.1 --fork --logpath ../testlog.txt
Everything seems great. The database starts. Node connects to it. We create a bunch of collections and indexes. Then once we run the first query, it takes 10+ seconds to run. To be clear, at this point the database has never had any data inserted into it. It's brand new. Using MMAPv1 (with the following command) resolves the problem, but I think it'd be great to run our tests with WiredTiger, if possible.
mongod --dbpath ./.test-data --storageEngine mmapv1 --bind_ip 127.0.0.1 --fork --logpath ../testlog.txt
Running mongod v4.0.3 and OS X 10.13.6. Connecting with node v6.9.1 and mongoose v5.4.19. After each test run, we nuke the data directory:
mongo admin --eval 'db.shutdownServer()' > /dev/null
rm -rf ./.test-data/*
I also tried setting --wiredTigerJournalCompressor none --wiredTigerCacheSizeGB 1.0 --wiredTigerCollectionBlockCompressor none --wiredTigerIndexPrefixCompression false and it didn't make a difference.
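For test suites that always start from an empty database, one hedged option is the in-memory ephemeralForTest storage engine, assuming the mongod build ships it (4.0 community builds generally did); it avoids most of the on-disk WiredTiger setup and makes the post-run cleanup unnecessary:

mongod --dbpath ./.test-data --storageEngine ephemeralForTest --bind_ip 127.0.0.1 --fork --logpath ../testlog.txt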
Andrew Rasmussen (119 rep)
Apr 4, 2019, 09:57 AM • Last activity: Mar 1, 2020, 05:00 AM
0 votes
1 answer
774 views
MongoDB Fragmentation affect the Performance in WiredTiger
I'm using MongoDB 3.6 and I have a huge amount of fragmentation in my collections: 300+ GB on one collection. WiredTiger is the storage engine. I know fragmentation is bad for the OS and wastes a lot of space, and that it affects memory as well when using the MMAPv1 engine. But I don't know how fragmentation affects WiredTiger.
TheDataGuy (1986 rep)
Nov 25, 2019, 09:25 AM • Last activity: Nov 27, 2019, 01:30 AM
2 votes
1 answer
620 views
Is MongoDB secondary index size affected by primary key size?
If I understand correctly, secondary index leaf nodes in MySQL InnoDB engine hold the table primary key value (at least for a unique index). Thus, looking up a value in the secondary index results in two BTREE lookups: one for the secondary index and another one for the clustered index (clustered on the primary key). This also means that the primary key size affects the size of all secondary indexes. *Is this how MongoDB WiredTiger secondary indexes work too?* Or do MongoDB secondary indexes store a reference to the physical block where the document resides? (I believe this is how Postgres handles indexes)
nimrodm (165 rep)
May 29, 2019, 08:13 PM • Last activity: May 31, 2019, 02:55 AM