Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
4242
views
Access denied when connecting to mongosqld with MySQL
I'm trying to connect to my MongoDB using MySQL through mongosqld. I have mongosqld running with a config file that looks like:

security:
  enabled: true
mongodb:
  net:
    uri: test-db
    auth:
      username: usertest
      password: pass
      source: admin
schema:
  path: schema.drdl
I have it hosted on a [mongo ODBC manager](https://github.com/mongodb/mongo-odbc-driver/releases/) with:

SERVER=127.0.0.1
PORT=3071
DATABASE=test_db
UID=usertest?source=admin
PWD=pass

I am able to connect to and query this MongoDB through Excel using [Mongo's tutorial for that](https://docs.mongodb.com/bi-connector/current/connect/excel/), but I am not able to do the same with MySQL using [Mongo's tutorial](https://docs.mongodb.com/bi-connector/current/connect/mysql/). When I try to connect from the terminal with

mysql --user='usertest?source=admin' --default-auth=mongosql_auth -p
I get

ERROR 1045 (28000): Access denied for user 'usertest'

and

handshake error: unable to saslStart conversation 0: Authentication failed.

from the mongosqld side. I am doing this on macOS. What could be causing this problem only when connecting from MySQL?
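For comparison, a sketch of the connection command with the host and port made explicit (the port is taken from the DSN above; whether mongosqld actually listens on 3071 is an assumption to verify):

mysql --host 127.0.0.1 --port 3071 --user='usertest?source=admin' --default-auth=mongosql_auth -p

If this form also fails, the mismatch is likely on the mongosqld side rather than in the client invocation.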
TheStrangeQuark
(101 rep)
Oct 8, 2018, 04:09 PM
• Last activity: Aug 5, 2025, 04:04 AM
0
votes
0
answers
8
views
Mongo prod migration from Amazon Linux 1 to Ubuntu 24.04
I am planning to migrate our production database from Amazon Linux 1 to Ubuntu 24.04, and to the latest MongoDB. How can I migrate without data loss and with low downtime? I already tried migrating a demo server from Amazon Linux 1 to Ubuntu 24.04 and it was successful; the only thing was that there was no real-time data ingestion happening on it.
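One common low-downtime pattern (a sketch of a generic replica-set approach, not something stated in the question; the hostname is hypothetical, and a jump to "the latest Mongo" still has to step through compatible major versions) is to add the new Ubuntu host as a replica set member, let it sync, then step down the old primary:

// on the current primary, in the mongo shell
rs.add("ubuntu-host:27017")   // new member performs an initial sync
// once the new member reports SECONDARY and has caught up:
rs.stepDown()                 // triggers an election; clients fail over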
Roronoa Zoro
(1 rep)
Aug 4, 2025, 06:25 AM
• Last activity: Aug 4, 2025, 07:57 AM
2
votes
2
answers
470
views
Efficient way to store geographical routes in MongoDB
I need to store a taxi route in MongoDB, however I'm not sure what the best way to do this is.
Since the route is only needed for the taxi order, the first idea I had was to store it in the order; this would make retrieval quite fast, since I won't need to join the order collection with a route collection. Also, the route can be stored in a simplified way, for example:
{
  order_id: 1,
  ...
  route: [
    [ 1567767312, 58.542454, 110.15454, 45 ], // timestamp, lat, long, speed
    ...
    [ 1567768312, 59.448488, 10.484684, 20 ]
  ]
}
If, say, a measurement is taken every 2 seconds, an average 15-minute ride would be 450 points, which is not a lot.
However, I read that MongoDB does not like documents that vary widely in size, and since every route is different, this might cause issues as the number of orders grows.
The other obvious approach is a separate collection, storing each reading as a separate document:
{
  timestamp: "2019-09-06T10:49:38+00:00",
  coordinates: [ 58.45464, 128.45465 ],
  order_number: 1
}
With indexing, this should not be much slower in terms of fetching or writing than the method above. However, this will occupy way more space and the collection might grow really fast.
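For the second option, a compound index would keep per-order fetches cheap; a minimal sketch, assuming a hypothetical route_points collection with the fields from the example:

db.route_points.createIndex({ order_number: 1, timestamp: 1 })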
Just as clarification: the route is only used for displaying it on the map when somebody opens the order; there is no need for geo queries.
Combustible Pizza
(121 rep)
Sep 6, 2019, 11:07 AM
• Last activity: Aug 1, 2025, 08:09 PM
2
votes
1
answers
1417
views
How to properly use moveChunk when chunks of a certain range need to be moved?
A MongoDB 3.6.3 database has two shards, and a mongodump file with partial data of one collection needs to be restored. The first shard is named "fast" and the second "slow". The idea is to restore the dump to the "slow" shard. According to the sharding rules the data should go to the "slow" shard, but it actually goes to the wrong one when the restore is attempted.
Before restoring the data, I want to manually move a range of chunks from the fast to the slow shard, but I am unable to properly issue the command. All examples I found show moving only one exact chunk.
`_id` is used as the shard key.
Try 1:
use admin
db.runCommand({ moveChunk: "db.use", bounds : [ {_id : ObjectId("58b60e73e5d4e7019aa2be17")}, {_id : ObjectId("58bca60f5067031c77b03807")} ], to: "rs1" })
This is the response:
{
"ok" : 0,
"errmsg" : "no chunk found with the shard key bounds [{ _id: ObjectId('58b60e73e5d4e7019aa2be17') }, { _id: ObjectId('58bca60f5067031c77b03807') })",
"$clusterTime" : {
"clusterTime" : Timestamp(1523459407, 14673),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1523459407, 14673)
}
Try 2:
sh.moveChunk("db.use", {_id:{$gt: ObjectId("58b60e73e5d4e7019aa2be17"), $lt: ObjectId("58bca60f5067031c77b03807") }},"rs1")
Response:
{
"ok" : 0,
"errmsg" : "no shard key found in chunk query { _id: { $gt: ObjectId('58b60e73e5d4e7019aa2be17'), $lt: ObjectId('58bca60f5067031c77b03807') } }",
"$clusterTime" : {
"clusterTime" : Timestamp(1523460271, 11742),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1523460271, 11742)
}
Any idea how to move chunks that belong to a certain shard key range?
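One workable approach (a sketch, not from the question: moveChunk's bounds must match one existing chunk exactly, which is why both attempts above fail, so a range has to be walked chunk by chunk via config.chunks):

// run against mongos; ns, ObjectIds and target shard taken from the question
var lower = ObjectId("58b60e73e5d4e7019aa2be17");
var upper = ObjectId("58bca60f5067031c77b03807");
db.getSiblingDB("config").chunks.find({
    ns: "db.use",
    "min._id": { $gte: lower },
    "max._id": { $lte: upper }
}).forEach(function (chunk) {
    // each call uses the exact bounds of one existing chunk
    printjson(db.adminCommand({
        moveChunk: chunk.ns,
        bounds: [chunk.min, chunk.max],
        to: "rs1"
    }));
});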
ssasa
(261 rep)
Apr 11, 2018, 03:30 PM
• Last activity: Aug 1, 2025, 10:06 AM
0
votes
2
answers
10816
views
For mongo, not able to create users other than admin: error "command createUser requires authentication"
This is the script I am using to create user accounts in MongoDB:

mongo <<EOF
use admin;
db.createUser({user: "ram", pwd: "ram123!", roles: ['root']});
db.createUser({user: "sam", pwd: "sam123!", roles: [{db: "config", role: 'readWrite'}]});
EOF

This works for creating the first user, but not the second. This is the error that is returned:
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f648b868-7863-4d5c-9912-e3e87b24f4e8") }
MongoDB server version: 4.4.6
switched to db admin
Successfully added user: { "user" : "ram", "roles" : [ "root" ] }
uncaught exception: Error: couldn't add user: command createUser requires authentication :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1386:11
@(shell):1:1
bye
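A likely culprit (an assumption; the question doesn't show the server's auth settings) is the localhost exception, which only allows the very first user to be created; once that user exists, further createUser calls must be authenticated. A sketch of the same script with an auth step added:

mongo <<EOF
use admin;
db.createUser({user: "ram", pwd: "ram123!", roles: ['root']});
db.auth("ram", "ram123!");
db.createUser({user: "sam", pwd: "sam123!", roles: [{db: "config", role: 'readWrite'}]});
EOF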
Acmakala Eesha
(13 rep)
May 31, 2021, 04:27 AM
• Last activity: Aug 1, 2025, 07:07 AM
4
votes
1
answers
824
views
Why does restarting MongoDB 3.0.x boost performance?
I have a MongoDB in my development environment running on Windows. The server has 64GB of RAM, but MongoDB has only consumed 26.9GB (here we are building the database), so it should be able to hold all the indexes and data in RAM:
DB Stats:
Data Size: 27.8071 GB
Index Size: 3.96 GB
After 3 or 4 hours of running, insert performance drops to only 2-3k inserts per second, despite all indexes being in RAM (or they should be).
However, I've noticed that if I restart the server, insert rates increase 3x or 4x.
I'm wondering why this is, and could it be caused by running on Windows?
The collections being inserted into (with any frequency) currently have 20 million, 44 million, and 51 million documents.
The first collection of 20 million has a very random index based on a 256-bit hash; the other two collections are indexed on ObjectId. This is the bottleneck I cannot shift without sharding, but it is concerning that the insert rate changes so much after a restart.
I do not want to periodically restart my primary node, causing clients to have to fail over to a secondary.
**Edit: I should also say, I am running with write concern unacknowledged.**
Please find an example image of the logs below; apologies, I did not capture a text output of this log.
Some typical slow queries as reported by the profiler, set at 100ms:
2015-08-18T11:56:34.768+0100 I COMMAND [conn7] command Slice_BTC.$cmd command: insert { insert: "Transactions", writeConcern: { w: 0 }, ordered: false, documents: 765 } keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 7 } }, Collection: { acquireCount: { w: 7 } } } 790ms
2015-08-18T11:56:35.184+0100 I QUERY [conn10] query Slice_BTC.Outputs query: { $query: { t: ObjectId('55d2cf76137e6e233c231ecf'), i: 96 } } planSummary: IXSCAN { t: 1, i: 1 } ntoreturn:1 ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:224 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 378ms
Typical size of Transactions object: 178 bytes.
Typical size of Outputs object: 204 bytes.

James
(141 rep)
Aug 18, 2015, 09:56 AM
• Last activity: Jul 31, 2025, 09:02 PM
1
votes
1
answers
389
views
Sharded MongoDB stalls randomly
I have set up a sharded MongoDB cluster using hashed sharding in Kubernetes. I first created the config server replica set and then created 2 shard replica sets. Finally, I created mongos to connect to the sharded cluster.
I followed this link to set up the sharded MongoDB cluster: https://docs.mongodb.com/manual/tutorial/deploy-sharded-cluster-hashed-sharding/
After creating mongos, I enabled sharding for the database and sharded the collection using the hashed sharding strategy.
After all this setup, I'm able to connect to mongos, I have added some data to some of the collections in the database, and I am able to check the distribution of data across the different shards.
The issue I'm facing is that when accessing MongoDB from my Java Spring Boot project, the connection stalls randomly. But once the connection is established for a particular query, that particular query won't stall for the next few tries. After some idle time, if I make a request to MongoDB again, it will start to stall again.
Note: MongoDB is hosted on "DS2 v2" VMs, and this cluster has 4 nodes: 1 for the config server, 2 for shards and 1 for mongos.
One link advised setting a proper shard key for all the collections, since this has an impact on MongoDB's performance. There were a couple of factors to consider before selecting the right shard key, and I had considered all of them; I read through this post to select the shard key: https://www.mongodb.com/blog/post/on-selecting-a-shard-key-for-mongodb
Another solution I came across was to set ShardingTaskExecutorPoolMaxConnecting to limit the rate at which mongos nodes add connections to connection pools. I tried setting it to 20, 5, 100 and 150, and none of these resolved the stalling issue I'm facing. This is the link: https://jira.mongodb.org/browse/SERVER-29237
I tried tweaking other parameters like ShardingTaskExecutorPoolMinSize and taskExecutorPoolSize. Even this did not resolve the stalling issue.
I also set --serviceExecutor to adaptive.
I increased wiredTigerCacheSizeGB from 0.25 to 2. This also didn't make any difference to the stalling issue.
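For reference, a sketch of how these server parameters can be set on mongos via its config file (the values are simply the ones tried above; treat the exact layout as an assumption to check against your MongoDB version's documentation):

setParameter:
  ShardingTaskExecutorPoolMaxConnecting: 20
  ShardingTaskExecutorPoolMinSize: 5
  taskExecutorPoolSize: 4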
1) YAML file of the Service and Deployment for the MongoDB config server:
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-conf-service
name: mongo-conf-service
spec:
type: LoadBalancer
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongo-conf-service
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-conf-service
name: mongo-conf-service
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongo-conf-service
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_USERNAME
value: #Username
- name: MONGO_INITDB_ROOT_PASSWORD
value: #Password
command:
- "mongod"
- "--storageEngine"
- "wiredTiger"
- "--port"
- "27017"
- "--bind_ip"
- "0.0.0.0"
- "--wiredTigerCacheSizeGB"
- "2"
- "--configsvr"
- "--replSet"
- "ConfigDBRepSet"
image: #MongoImageName
name: mongo-conf-service
ports:
- containerPort: 27017
resources: {}
volumeMounts:
- name: mongo-conf
mountPath: /data/db
restartPolicy: Always
volumes:
- name: mongo-conf
persistentVolumeClaim:
claimName: mongo-conf
2) YAML file of the Service and Deployment for the shard MongoDB:
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-shard
name: mongo-shard
spec:
type: LoadBalancer
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongo-shard
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-shard
name: mongo-shard
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongo-shard
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_USERNAME
value: #Username
- name: MONGO_INITDB_ROOT_PASSWORD
value: #Password
command:
- "mongod"
- "--storageEngine"
- "wiredTiger"
- "--port"
- "27017"
- "--bind_ip"
- "0.0.0.0"
- "--wiredTigerCacheSizeGB"
- "2"
- "--shardsvr"
- "--replSet"
- "Shard1RepSet"
image: #MongoImage
name: mongo-shard
ports:
- containerPort: 27017
resources: {}
3) YAML file of the mongos server:
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongos-service
name: mongos-service
spec:
type: LoadBalancer
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongos-service
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongos-service
name: mongos-service
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongos-service
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_USERNAME
value: #USername
- name: MONGO_INITDB_ROOT_PASSWORD
value: #Password
command:
- "numactl"
- "--interleave=all"
- "mongos"
- "--port"
- "27017"
- "--bind_ip"
- "0.0.0.0"
- "--configdb"
- "ConfigDBRepSet/mongo-conf-service:27017"
image: #MongoImageName
name: mongos-service
ports:
- containerPort: 27017
resources: {}
The logs of the mongos server are:
2019-08-05T05:27:52.942+0000 I NETWORK [listener] connection accepted from 10.0.0.0:5058 #308807 (79 connections now open)
2019-08-05T05:27:52.964+0000 I ACCESS [conn308807] Successfully authenticated as principal Assist_Random_Workspace on Random_Workspace from client 10.0.0.0:5058
2019-08-05T05:27:54.267+0000 I NETWORK [worker-3] end connection 10.0.0.0:52954 (78 connections now open)
2019-08-05T05:27:54.269+0000 I NETWORK [listener] connection accepted from 10.0.0.0:52988 #308808 (79 connections now open)
2019-08-05T05:27:54.275+0000 I NETWORK [listener] connection accepted from 10.0.0.0:7174 #308809 (80 connections now open)
2019-08-05T05:27:54.279+0000 I ACCESS [conn308809] SASL SCRAM-SHA-1 authentication failed for Assist_Refactored_Code_DB on Refactored_Code_DB from client 10.0.0.:7174 ; UserNotFound: User "Assist_Refactored_Code_DB@Refactored_Code_DB" not found
2019-08-05T05:27:54.281+0000 I NETWORK [worker-1] end connection 10.0.0.5:7174 (79 connections now open)
2019-08-05T05:27:54.342+0000 I NETWORK [worker-1] end connection 10.0.0.6:57391 (78 connections now open)
2019-08-05T05:27:54.343+0000 I NETWORK [listener] connection accepted from 10.0.0.0:57527 #308810 (79 connections now open)
2019-08-05T05:27:55.080+0000 I NETWORK [worker-3] end connection 10.0.0.0:56021 (78 connections now open)
2019-08-05T05:27:55.081+0000 I NETWORK [listener] connection accepted from 10.0.0.0:56057 #308811 (79 connections now open)
2019-08-05T05:27:56.054+0000 I NETWORK [worker-1] end connection 10.0.0.0:59137 (78 connections now open)
2019-08-05T05:27:56.055+0000 I NETWORK [listener] connection accepted from 10.0.0.0:59184 #308812 (79 connections now open)
2019-08-05T05:27:59.268+0000 I NETWORK [worker-1] end connection 10.0.0.5:52988 (78 connections now open)
2019-08-05T05:27:59.270+0000 I NETWORK [listener] connection accepted from 10.0.0.0:53047 #308813 (79 connections now open)
2019-08-05T05:27:59.343+0000 I NETWORK [worker-3] end connection 10.0.0.6:57527 (78 connections now open)
2019-08-05T05:27:59.344+0000 I NETWORK [listener] connection accepted from 10.0.0.0:57672 #308814 (79 connections now open)
2019-08-05T05:28:00.080+0000 I NETWORK [worker-3] end connection 10.0.1.1:56057 (78 connections now open)
2019-08-05T05:28:00.081+0000 I NETWORK [listener] connection accepted from 10.0.0.0:56116 #308815 (79 connections now open)
2019-08-05T05:28:01.054+0000 I NETWORK [worker-3] end connection 10.0.0.0:59184 (78 connections now open)
2019-08-05T05:28:01.058+0000 I NETWORK [listener] connection accepted from 10.0.0.0:59225 #308816 (79 connections now open)
2019-08-05T05:28:01.763+0000 I NETWORK [listener] connection accepted from 10.0.0.0:7173 #308817 (80 connections now open)
2019-08-05T05:28:01.768+0000 I ACCESS [conn308817] SASL SCRAM-SHA-1 authentication failed for Assist_Sharded_Database on Sharded_Database from client 10.0.0.0:7173 ; UserNotFound: User "Assist_Sharded_Database@Sharded_Database" not found
2019-08-05T05:28:01.770+0000 I NETWORK [worker-3] end connection 10.0.0.0:7173 (79 connections now open)
2019-08-05T05:28:04.271+0000 I NETWORK [worker-3] end connection 10.0.0.0:53047 (78 connections now open)
2019-08-05T05:28:04.272+0000 I NETWORK [listener] connection accepted from 10.0.0.0:53083 #308818 (79 connections now open)
2019-08-05T05:28:04.283+0000 I NETWORK [listener] connection accepted from 10.0.0.0:7105 #308819 (80 connections now open)
2019-08-05T05:28:04.287+0000 I ACCESS [conn308819] SASL SCRAM-SHA-1 authentication failed for Assist_Refactored_Code_DB on Refactored_Code_DB from client 10.0.0.0:7105 ; UserNotFound: User "Assist_Refactored_Code_DB@Refactored_Code_DB" not found
The Java code block to connect to MongoDB is below.
Note: the code below supports MongoDB multitenancy at the database level. Based on one of the parameters in every request, we determine which database to query.
The code below works fine for a standalone MongoDB instance.
1) Application property
mongodb.uri=${mongoURI:mongodb://username:password@IPaddress:portNumber}
mongodb.defaultDatabaseName=assist
assist-server-address1 = IpAddress
2) Spring Boot Application
@SpringBootApplication
@ServletComponentScan
public class ServiceApplication extends RepositoryRestConfigurerAdapter {
public static void main(String[] args) {
SpringApplication.run(ServiceApplication.class, args);
}
@Autowired
public MongoDBCredentials mongoDBCredentials;
@Bean
public MongoTemplate mongoTemplate() {
return new MongoTenantTemplate(
new SimpleMongoDbFactory(new MongoClient(new MongoClientURI(mongoDBCredentials.getUri())),
mongoDBCredentials.getDefaultDatabaseName()));
}
}
3) MongoTemplate -> This is used to establish the Mongo connection. We have implemented a multitenant Mongo (to connect to multiple databases based on one of the parameters of the request).
public class MongoTenantTemplate extends MongoTemplate {
private static Map<String, MongoTemplate> tenantTemplates = new HashMap<>();
@Value("${assist-server-address1}")
public String ServerAddress1;
@Value("${spring.data.mongodb.username}")
public String ServerUsername;
@Value("${spring.data.mongodb.password}")
public String ServerPassword;
@Value("${spring.data.mongodb.database}")
public String ServerDbName;
@Value("${assist-current-environment}")
public String currentEnv;
@Autowired
public MongoDBCredentials mongoDBCredentials;
@Autowired
WorkspacesRepository workspaceRepository;
private static final Logger LOG = LoggerFactory.getLogger(MongoTenantTemplate.class);
Marker marker;
public MongoTenantTemplate(MongoDbFactory mongoDbFactory) {
super(mongoDbFactory);
tenantTemplates.put(mongoDbFactory.getDb().getName(), new MongoTemplate(mongoDbFactory));
}
protected MongoTemplate getTenantMongoTemplate(String tenant) {
MongoTemplate mongoTemplate = tenantTemplates.computeIfAbsent(tenant, k -> null);
LOG.info(marker, "Tenant is (MongoDBCredentials) : {}",tenant);
try {
if (mongoTemplate == null) {
MongoCredential mongoCredential;
// Username,databaseName,password
if (tenant.equals(ServerDbName)) { // compare string contents, not references
mongoCredential = MongoCredential.createCredential(ServerUsername, ServerDbName,
ServerPassword.toCharArray());
} else {
Workspaces workspace = workspaceRepository.findByDbName(tenant);
mongoCredential = MongoCredential.createCredential(workspace.getDbUserName(), workspace.getDbName(),
workspace.getDbPassword().toCharArray());
}
ServerAddress address1 = new ServerAddress(ServerAddress1, port);
List<ServerAddress> serverAddressList = new ArrayList<>();
serverAddressList.add(address1);
SimpleMongoDbFactory mongoDbFactory = new SimpleMongoDbFactory(
new MongoClient(serverAddressList, Arrays.asList(mongoCredential)), tenant);
mongoTemplate = new MongoTemplate(mongoDbFactory);
}
else {
}
} catch (Exception e) {
tenantTemplates.remove(tenant);
}
return mongoTemplate;
}
...
In the above logs, there is an authentication error for Assist_Refactored_Code_DB (this database was not created by me). I'm not sure why this authentication is failing, nor in which Mongo URI the username and password should be mentioned. I'm also not sure whether this is one of the reasons for the stalling. This is the only error log that I could find in mongos; all the other logs in the config server and shard mongod don't have any errors.
I expect the sharded MongoDB to not stall at any point in time and to work similarly to a standalone MongoDB.
Can anyone guide me to resolve the stalling of the sharded MongoDB?
Prajwal M
(11 rep)
Aug 6, 2019, 11:14 AM
• Last activity: Jul 31, 2025, 08:04 AM
1
votes
1
answers
1688
views
How to create a MongoDB index with ISODate?
I've created an index for a query, and it runs in less than 1 second to retrieve millions of rows, but as soon as I add a greater-than ISODate condition, it doesn't use indexes anymore.
The query is this one:
db.getCollection("Coll").find({"Field1.date" : {
$gte: ISODate("2019-11-27T00:00:00.000Z"),
$lte: ISODate("2019-11-28T00:00:00.000Z")
}},{
_id: 1,
"Field2": 1,
"Field3":1,
Field4: 1
})
and I created an index like this:

db.Coll.createIndex(
  {
    _id: 1,
    "Field2": 1,
    Field4: 1,
    "Field1.date": 1
  },
  { background: true, name: "idx_name_date" }
)
but it seems this `Field1.date` part doesn't work with ISODate.
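A plausible explanation (an assumption, since no explain output is shown) is field order: `Field1.date` sits last in the compound index, but it is the only field the query filters on, so the range predicate cannot use the index. A sketch of an index matching the query shape:

db.Coll.createIndex(
  { "Field1.date": 1 },
  { background: true, name: "idx_field1_date" }
)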
Racer SQL
(7546 rep)
Aug 7, 2020, 01:00 PM
• Last activity: Jul 31, 2025, 02:01 AM
0
votes
1
answers
364
views
Mongostat display size / res
I would like to have your opinion and of course an explanation on the following.
My configuration:
- MongoDB version 3.4.4
- 1 replica set with three identical servers (4 CPUs / 32 GB RAM, Linux Ubuntu 16)
- 1 primary and two secondaries
- No delay in replication; everything is up to date
- The engine is WiredTiger
During insertion or update time I have the following:
- Primary : vsize : 3.73 G/ res : 3.02 G
- 1st secondary : vsize : 3.28 G / res : 2.63 G
- 2nd secondary : vsize : 5.66 G / res : 5.02G
I am wondering why there is this difference between the vsize/res of the 2nd secondary and the 1st one.
I was guessing that I should see some kind of equality between them, because they both receive the same writes from the primary.
Any help or information will be appreciated.
Yrk
(1 rep)
Jul 17, 2017, 08:31 AM
• Last activity: Jul 29, 2025, 07:05 PM
1
votes
2
answers
2168
views
Copy Specific Columns from DB 1 To DB 2 on MongoDB
I have two instances of a Node JS server, handled by NGINX. The first server connects to the first MongoDB database, i.e.
> mongoose.connect('mongodb://127.0.0.1/db_primary', { useMongoClient: true });
while the second Node JS server connects to the second MongoDB database, i.e.
> mongoose.connect('mongodb://127.0.0.1/db_secondary', { useMongoClient: true });
Actually, I have to do some heavy calculations on the DB, and that is the main reason I am dividing my DB into two pieces.
Initially, when I had one server, the server would completely freeze whenever the backend was busy with calculations (millions of entries). With the separation, there is no longer a freezing point.
My question is: how can I copy selected columns of an entire table from DB 1 to DB 2, keeping only the selected columns?
Example: Database 1

{
  _id: ObjectID(54815dfsfsd1f564168ad51a5s),
  a: 123,
  b: 768,
  c: 89.67,
  d: 8976,
  e: 45
} .......... up to 10 million entries

Now I want to copy only columns c & e from Database 1 to Database 2.
Expected result:

{
  _id: ObjectID(54815dfsfsd1f564168ad51a5s),
  c: 89.67,
  e: 45
} .......... up to 10 million entries

And all of this is to be done by a Node.js cron job; I mean, not a manual task.
I am a newbie on MongoDB. Any help is appreciated.
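One way to express the copy server-side (a sketch; $merge into another database needs MongoDB 4.2+, and the collection names here are hypothetical):

// run periodically, e.g. from the Node.js cron job
db.getSiblingDB("db_primary").entries.aggregate([
  { $project: { c: 1, e: 1 } },  // _id is included by default
  { $merge: { into: { db: "db_secondary", coll: "entries" } } }
])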
Ankit
(121 rep)
Jan 29, 2018, 06:54 AM
• Last activity: Jul 29, 2025, 12:03 PM
1
votes
2
answers
368
views
Mongodb tree structure questions and answers
I'm looking to build a question-answer survey system where some questions will be based on the answer to the parent question. The hierarchy of questions can go any number of levels deep, depending on the questions. The questions and answers are structured like the diagram shown here. [diagram in original post]
User 99x
(111 rep)
May 23, 2020, 07:15 PM
• Last activity: Jul 29, 2025, 09:07 AM
4
votes
2
answers
9090
views
MongoDB restore with replica set
I have MongoDB setup like this:
Primary | Server1
-----------+---------
Secondary | Server2
-----------+---------
Arbiter | Server3
-----------+---------
Backup | Server4
I have a backup server for daily backup using **mongodump**. Now I have to test my backup dump using **mongorestore**.
The main question is the following: What is the best way to restore the backup?
Q1: Do I have to follow these steps?
1. Stop the arbiter & secondary
2. Drop the database on the primary
3. Restore the backup to the primary
4. Drop the database on the secondary
5. Restart all the servers
Q2: Or restore the backup on the primary without stopping any of the servers?
Q3: Or make the secondary primary, restore the backup to the former primary (now secondary), and then make it primary again?
Q4: Is there any way to restore both servers at once?
Please suggest the best approach to restore MongoDB.
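For reference, the usual pattern (a sketch of a generic approach, not from the question; the dump path is a placeholder) is to restore through the primary and let replication propagate the writes, so the secondary never needs to be touched directly:

# run from the backup server (Server4); --drop replaces existing collections
mongorestore --host Server1 --drop /path/to/dump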
SGRao
(41 rep)
Mar 1, 2018, 05:23 AM
• Last activity: Jul 28, 2025, 03:10 AM
3
votes
1
answers
1182
views
Reset mongo cache memory for test performance - Windows
Recently I have been working on MongoDB, and I'm trying to check the performance of MongoDB with 2 million records. I need to clear the cache every time I start the test, but I don't want to restart the computer/server for each test run. I tried clearing the collections' cache, but the result wasn't what I expected. Is there a way to clear the cache like the way that exists in Linux?
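(For context, the Linux mechanism the question presumably refers to is dropping the OS page cache, shown below; note that this does not release WiredTiger's internal cache, which only a mongod restart clears:)

# Linux: drop the OS page cache
sync; echo 3 > /proc/sys/vm/drop_caches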
John Sans
(31 rep)
Jan 18, 2020, 11:08 AM
• Last activity: Jul 27, 2025, 12:03 AM
0
votes
1
answers
725
views
MongoDB Indexing - How many fields should I index?
I have a `Collection` in my db, called `Post`. Here's how it looks:
Post {
user: ,
school: ,
hashtag: ,
numberOfReports: ,
viewRanking: ,
type: ,
isPollOfTheDay: ,
createdAt: ,
endsAt:
}
(Obviously it also contains other, unrelated fields)
So this `Post` collection is being queried by 5 different screens in my app: **School**, **Hashtag**, **All Polls**, **Discover** & **Profile**. All queries are very similar to each other, but they differ.
Let's have a look at them individually:
> School

Here, I have 2 queries:
1. I compare by `school.name` (equal to), `numberOfReports` (less than), `user.id` (for blocks checking (not contained in)) and lastly, we sort in descending order either by `createdAt` or `viewRanking` (depends on the user)
2. I compare by `isPollOfTheDay` (equal to `true`) and `endsAt` (greater than or equal to)

> Hashtag

I compare by `hashtag.id` (equal to), `numberOfReports` (less than), `user.id` (for blocks checking (not contained in)) and lastly, we sort in descending order either by `createdAt` or `viewRanking` (depends on the user)

> All Polls

I compare by `type` (equal to), `isPollOfTheDay` (equal to `false`), `numberOfReports` (less than), `user.id` (for blocks checking (not contained in)) and lastly, we sort in descending order either by `createdAt` or `viewRanking` (depends on the user)

> Discover

I compare by `school.name` (not equal to), `type` (equal to), `numberOfReports` (less than), `user.id` (for blocks checking (not contained in)) and lastly, we sort in descending order by `createdAt`

> Profile

I compare by `user.id` (equal to) and we sort in descending order by `createdAt`
These are all of my queries! I can say that they are all called with almost the same frequency. My question is: should I just index all 9 fields? Are they too many to index? Should I ignore the `isPollOfTheDay` field, since it's a Boolean? (I've read that we shouldn't index Booleans.)
EDIT: Every document occupies about 200 bytes. We currently have 25K documents, growing at a pace of ~300/day.
The only fields that can change are `viewRanking` and `numberOfReports`, where the first one will change often, whereas the second far less often!
The selectivity of each query is high (I think), since the needed documents are found mainly by their first comparison. There are about 50 different `school.name`s & another 50 `hashtag.id`s.
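For illustration, compound indexes shaped around these queries (a sketch only, following the common guideline of equality fields before the sort field; names are taken from the question) might look roughly like:

db.Post.createIndex({ "school.name": 1, createdAt: -1 })
db.Post.createIndex({ "hashtag.id": 1, createdAt: -1 })
db.Post.createIndex({ type: 1, isPollOfTheDay: 1, createdAt: -1 })
db.Post.createIndex({ "user.id": 1, createdAt: -1 })

The `numberOfReports` range and `user.id` exclusion filters can still be applied after the index narrows the candidates.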
Sotiris Kaniras
(151 rep)
Jan 9, 2019, 08:09 PM
• Last activity: Jul 26, 2025, 05:09 PM
1
votes
1
answers
756
views
MongoDB + Compass exporting strings and numbers "columns" with value of 1 as "true"(boolean) and 2 or null as "false" to CSV
When I export my MongoDB collection to a CSV, the String and Number "column" data type values are converted to "true" for 1, and to "false" for 2 and null. How do I retain the values as numbers, strings and integers on export to CSV from Compass?
CoolElectricity
(21 rep)
Dec 9, 2020, 05:19 PM
• Last activity: Jul 26, 2025, 03:04 AM
0
votes
2
answers
5966
views
Replica Set in MongoDB by using IPs instead of domains?
I want to use MongoDB in replica-set mode over 2 servers by IPs. But when I deployed the secondary on the other server, I ran into issues connecting the 2 instances. For example, the primary is located on server 192.168.1.1:27018 with hostname COM1, and the secondary is located on server 192.168.1.2:27018 with hostname COM2, despite configuring it like this:

net:
  bindIp: 0.0.0.0, COM1|192.168.1.1, COM2|192.168.1.2
  port: 27018

But in MongoDB's log, it reports that it can't hear from COM1:27018 or can't find COM2:27018. I don't know how to map these hostnames to these IPs. Please help me, thanks!
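One common fix (a sketch of a generic approach, not confirmed by the question) is to map the hostnames to the IPs in /etc/hosts on both servers, since replica set members resolve each other by the names used in the replica set configuration:

# /etc/hosts on both servers
192.168.1.1  COM1
192.168.1.2  COM2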
iamatsundere181
(101 rep)
Mar 27, 2019, 10:36 AM
• Last activity: Jul 25, 2025, 08:04 AM
1
votes
2
answers
721
views
RHEL 7.4, heavy workload(s) and SWAP usage
This might not be the correct board for this, but since I'm a DBA, I'll ask here.
I have several RHEL 7.4 servers, running a mix of MariaDB (10.1, 10.2) and MongoDB (3.4). The problem was happening with RHEL 7.3 as well.
All of these servers have 256 GB of memory with local SSD array storage, and even under heavy workloads the highest active memory-in-use footprint is less than 100 GB at any one time. I've been profiling this for quite some time.
On each of these servers, even though there is always plenty of free memory available, the systems are incrementally going into swap. I've tried setting the vm.swappiness value to 1, but incremental jumps of swap are still happening.
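(For reference, a sketch of how that setting is typically applied and persisted; the sysctl.d file name is arbitrary:)

# apply immediately
sysctl -w vm.swappiness=1
# persist across reboots
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf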
Is this happening to anyone else? Does anyone know, with a large amount of memory available, if setting swappiness to 0 has ill effects?
Thanks
zstokes
(11 rep)
Jan 12, 2018, 09:09 PM
• Last activity: Jul 25, 2025, 02:00 AM
0
votes
1
answers
139
views
Error configuring MongoDB replica set with in-memory storage
I'm trying to configure a MongoDB replica set with 3 replicas. The primary and one other member use the in-memory storage engine, and the third uses WiredTiger. I am using this configuration.
When I restart the cluster, the data does not persist.
Server has startup warnings:
** WARNING: This replica set node is running without journaling enabled but the
** writeConcernMajorityJournalDefault option to the replica set config
** is set to true. The writeConcernMajorityJournalDefault
** option to the replica set config must be set to false
** or w:majority write concerns will never complete.
** In addition, this node's memory consumption may increase until all
** available free RAM is exhausted.
** WARNING: This replica set node is using in-memory (ephemeral) storage with the
** writeConcernMajorityJournalDefault option to the replica set config
** set to true. The writeConcernMajorityJournalDefault option to the
** replica set config must be set to false
** or w:majority write concerns will never complete.
** In addition, this node's memory consumption may increase until all
** available free RAM is exhausted.
And my configuration:
rs0:PRIMARY> cfg = rs.conf()
{
"_id" : "rs0",
"version" : 102744,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : false,
"members" : [
{
"_id" : 0,
"host" : "mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.default.svc.cluster.local:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "mongodb-mongodb-replicaset-1.mongodb-mongodb-replicaset.default.svc.cluster.local:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "mongodb-mongodb-replicaset-2.mongodb-mongodb-replicaset.default.svc.cluster.local:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : true,
"priority" : 0,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5e736baa5c51ff3e198dd880")
}
}
The writeConcernMajorityJournalDefault option IS set to false!
Thanks in advance!
user1646040
(1 rep)
Mar 22, 2020, 06:27 AM
• Last activity: Jul 24, 2025, 01:04 AM
1
votes
2
answers
6902
views
Why is an aggregate query with $lookup extremely slow?
I have a MongoDB query that works but takes too long to execute, and it causes my CPU to spike to 100% while it is executing. It is this query here:
db.logs.aggregate([
{
$lookup:
{
from: 'graphs',
let: { logId : '$_id' },
as: 'matched_docs',
pipeline:
[
{
$match: {
$expr: {
$and: [
{ $eq: ['$$logId', '$lId'] },
{ $gte: [ '$d', new Date('2020-12-21T00:00:00.000Z') ] },
{ $lt: [ '$d', new Date('2020-12-23T00:00:00.000Z') ] }
]
}
}
}
],
}
},
{
$match: {
$expr: {
$and: [
{ $eq: [ '$matched_docs', [] ] },
{ $gte: [ '$createDate', new Date('2020-12-21T00:00:00.000Z') ] },
{ $lt: [ '$createDate', new Date('2020-12-23T00:00:00.000Z') ] }
]
}
}
},
{ $limit: 5 }
]);
This query looks for all records in the `db.logs` collection which have not been transformed and loaded into `db.graphs`. It's analogous to this SQL approach:

WHERE db.logs._id NOT IN (
    SELECT lId FROM db.graphs
    WHERE db.graphs.d >= @startTime
    AND db.graphs.d < @endTime
)
AND db.logs.createDate >= @startTime
AND db.logs.createDate < @endTime
The `db.logs` collection has over 1 million records, and here are the indexes:

db.logs.getIndexes();
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_"
    },
    {
        "v" : 2,
        "key" : {
            "createDate" : 1
        },
        "name" : "createDate_1"
    }
]

And `db.reportgraphs` has fewer than 100 records, with indexes on every property/column.
In my attempt to analyze why the Mongo query is so slow and CPU intensive, I suffixed my query with `.explain()`, but Mongo gave me an error saying `db.logs.aggregate(...).explain() is not a function`. I also tried adding `, {$explain: 1}` immediately after `{ $limit: 5 }` and got an error saying `Unrecognized pipeline stage name $explain`.
So I guess I have two questions:
1. Can someone give feedback on why my mongo query is so slow or possible solutions?
2. Is there a way to see the execution plan of my Mongo query, so that I can review where the performance bottlenecks are?
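On question 2, the mongo shell exposes explain output for aggregations in either of these standard forms (the placeholder comment stands for the full pipeline above):

// wrap the collection first, then aggregate
db.logs.explain("executionStats").aggregate([ /* pipeline from above */ ])

// or pass explain as an aggregate option
db.logs.aggregate([ /* pipeline from above */ ], { explain: true })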
---
UPDATE
A possible solution I'm considering is to have a boolean property `db.logs.isGraphed`, and then use a simple `db.logs.find({isGraphed: false, createDate: {...date filter...}}).limit(5)`. Wasn't sure if this is the approach most people would have considered in the first place?
learningtech
(189 rep)
Dec 23, 2020, 04:12 PM
• Last activity: Jul 23, 2025, 03:01 PM
1
votes
1
answers
18
views
I'm getting a 'Not authorized on admin' error with MongoDB Atlas even though I made both users admins
Here is my server.js file
const dotenv = require('dotenv');
dotenv.config({ path: `${__dirname}/config.env` });
const mongoose = require('mongoose');

const db = process.env.DATABASE.replace(
  '<PASSWORD>',
  process.env.DATABASE_PASSWORD
);

mongoose.connect(db).then(() => {
  console.log('DB connection successful');
});

const TourSchema = new mongoose.Schema({
  name: {
    type: String,
    required: [true, 'Tour must have a name!'],
    unique: [true, 'Tours must have a different name'],
  },
  price: {
    type: Number,
    required: [true, 'Tour must have a price!'],
  },
  rating: {
    type: Number,
    default: 4.5,
  },
});

const Tour = mongoose.model('Tour', TourSchema);

const testTour = new Tour({
  name: 'Vilusi Hike',
  rating: 5.0,
  price: 250,
});

testTour
  .save()
  .then((doc) => console.log(doc))
  .catch((err) => console.log('Error: ', err));

const app = require(`${__dirname}/app`);

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Listening on port ${port}...`);
});
Here is config env
NODE_ENV = development
PORT = 8000
DATABASE = mongodb+srv://petarkukric08:<PASSWORD>@natours.qmojzsy.mongodb.net/?retryWrites=true&w=majority&appName=Natours
DATABASE_PASSWORD = xxxxxxxxxxx
ADMIN_USER_DB_PASS = yyyyyyyyyy
I have two users: adminUser and petarkukric08.
Peter Kukric
(11 rep)
Jul 20, 2025, 01:02 PM
• Last activity: Jul 21, 2025, 02:32 AM
Showing page 1 of 20 total questions