
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
324 views
How to fix mongosh v2.3.3 connection error with AWS DocumentDB v5.0.0
When I try to connect to a DocumentDB cluster version 5.0.0 using mongosh version 2.3.3, it returns the error message `MongoServerError: Unsupported mechanism [ -301 ]`. I am using a connection string with this format:
mongosh 'mongodb://my_user:my_password@my-host-name.cluster-id-aws.us-east-1.docdb.amazonaws.com:27017/my_db_name&tls=true&tlsCAFile=rds-global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false'
I searched the AWS DocumentDB and MongoDB Community documentation, but I didn't find a fix for this issue.
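Two details worth checking, offered as guesses rather than a confirmed fix: the first option in the string above is introduced with `&` where a URI's first query parameter normally takes `?`, so the options after the database name may not be parsed at all; and Amazon DocumentDB authenticates with SCRAM-SHA-1, so pinning the mechanism explicitly rules out mongosh negotiating one the server rejects. A sketch of the adjusted invocation:

```bash
# Hedged sketch: same endpoint and options as in the question, but with '?'
# before the first query parameter and the auth mechanism pinned to
# SCRAM-SHA-1. Whether this resolves the -301 error is an assumption,
# not a verified fix.
mongosh 'mongodb://my_user:my_password@my-host-name.cluster-id-aws.us-east-1.docdb.amazonaws.com:27017/my_db_name?tls=true&tlsCAFile=rds-global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false&authMechanism=SCRAM-SHA-1'
```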
Emerson Vicente (11 rep)
Nov 5, 2024, 04:03 PM • Last activity: May 23, 2025, 08:05 AM
0 votes
0 answers
29 views
How to manage indexes that don't fit into memory in DocumentDB?
The problem: from time to time I get slow (>10s) queries even though they are based on indexed fields. I'm using a DocDB instance that has 128 GB of memory. The total size of all of the indexes (across all collections) is 1.5 TB, and the data size is 1 TB. That obviously means that all of the indexes can't fit into memory together. In addition, when I check the BufferCacheHitRatio metric I see values between 88% and 95%, which as far as I understand is considered very low and means that between 5% and 12% of queries have to go to the disk, which is slower.

The way I thought of tackling it is to split my system into logical applications based on how each one queries the DB, i.e. the indexes it needs. Then I can create a read replica for every such application (all of the applications only perform read operations) and enforce that every application connects only to its corresponding read replica. I can pick the instance type of every read replica based on the size of all of the relevant indexes, as sketched below.

The reasons I like this approach, as opposed to simply scaling the single instance I currently have so that the indexes fit in memory, are:

- I'm not sure I can scale forever, i.e. is there a DocDB instance with 10 TB of memory?
- It enables me to have different QoS for different applications. If one application is client facing and it's extremely important that its queries always use the in-memory index, I can guarantee that specifically for this application, while another application can still go to the disk 10% of the time and that's OK. So it gives me flexibility.

Is this the right approach for addressing such issues? Or should this be avoided, and is there another pattern to get what I want with DocDB? Thanks
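As a hedged starting point for the sizing step above: mongosh can report each collection's total index footprint via collStats, which makes it possible to group collections by application and match each group's index size to a replica instance class. The endpoint, credentials, and database name below are placeholders, and `totalIndexSize` is reported in bytes:

```bash
# Hedged sketch: print total index size per collection so each application's
# set of indexes can be matched to an appropriately sized read replica.
# Connection details are placeholders, not the asker's real endpoint.
mongosh 'mongodb://my_user:my_password@my-cluster.cluster-id.us-east-1.docdb.amazonaws.com:27017/my_db?tls=true&tlsCAFile=rds-global-bundle.pem' \
  --quiet --eval '
    db.getCollectionNames().forEach(function (name) {
      var stats = db.getCollection(name).stats();            // collStats per collection
      var gib = stats.totalIndexSize / (1024 * 1024 * 1024); // bytes -> GiB
      print(name + ": " + gib.toFixed(2) + " GiB of indexes");
    });
  '
```

Each application could then be pointed at its own replica by connecting to that replica's instance endpoint rather than the cluster endpoint, which is how DocumentDB exposes individual instances.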
Noam (101 rep)
Mar 25, 2025, 09:44 PM
Showing page 1 (2 total questions)