
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
1365 views
How to get current date and time in Elasticsearch
I need to retrieve the current date and time zone from Elasticsearch. I checked the Elastic documentation, and it says the default time zone is UTC. In other databases I use:
SELECT NOW();
Is there any similar function for Elasticsearch?
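The query DSL itself has no `SELECT NOW()` equivalent, but the SQL endpoint that ships with X-Pack (Elasticsearch 6.3+, license permitting) accepts exactly that query over HTTP. A minimal sketch in Python, assuming a local, unsecured node on port 9200:

```python
# A minimal sketch, assuming Elasticsearch 6.3+ with the SQL endpoint
# enabled and a local unsecured node (both assumptions).
import requests

resp = requests.post(
    "http://localhost:9200/_sql?format=txt",
    json={"query": "SELECT NOW()"},
)
print(resp.text)  # the cluster's current timestamp, UTC by default
```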
Carolina (47 rep)
May 10, 2023, 07:22 PM • Last activity: Aug 4, 2025, 09:03 PM
0 votes
1 answer
142 views
Is storing redundant data part of the (CouchDB) self-contained data concept?
The [CouchDB concept](http://docs.couchdb.org/en/2.1.1/intro/why.html#a-better-fit-for-common-applications) speaks about self-contained documents. In other words: ALL related data is stored in a single document, instead of being split across multiple tables that reference each other with FKs.

> An invoice contains all the pertinent information about a single transaction: the seller, the buyer, the date, and a list of the items or services sold.

But this means we end up with redundant data in our database. Consider the invoice example: I buy 10 products over a period of time, which results in 10 invoices. Now several pieces of information are redundant, e.g. my billing address, delivery address, and so on. That's at least a waste of space.

Considering other use cases, updating data can become very complex. If we have a blogging CMS like WordPress, every post document contains metadata like the author's name and email. If that information changes, we have to update a lot of documents, and we have to know the structure of all of them: the email could be used in posts, comments, log entries, and so on. Given that CouchDB allows a different structure for every document, updating such information looks like a horrible task.

I used MySQL as a relational database for years, and it seems much more meaningful to normalize such information, e.g. to put users in a user table/collection, so that we can refer to some kind of PK.

### What is the purpose behind this concept in CouchDB?

So my question is: did I understand the concept of CouchDB properly, that it drops normalisation completely? I see the (theoretical?) benefit of better performance when no joins are required. Is the idea that we get a database with high throughput, but have to live with a (possibly massive) amount of redundant data, and have to deal with a complex update process when related metadata changes?
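To make the trade-off concrete, here is a sketch of one self-contained invoice document being written through CouchDB's HTTP API, using Python's requests. The database name, document id, credentials, and local server are all assumptions; the buyer block is the part that gets duplicated into every invoice:

```python
# A sketch of a self-contained invoice document, assuming a local CouchDB
# on localhost:5984 with an existing "invoices" database; the credentials
# are placeholders.
import requests

invoice = {
    "type": "invoice",
    "date": "2018-05-08",
    "buyer": {  # this block is duplicated into every invoice for this buyer
        "name": "Lion",
        "billing_address": "Example Street 1, Berlin",
    },
    "items": [
        {"description": "Widget", "quantity": 2, "price": 9.99},
    ],
}
# PUT /{db}/{docid} creates the document.
requests.put(
    "http://localhost:5984/invoices/invoice-2018-0001",
    json=invoice,
    auth=("admin", "password"),
)
```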
Lion (151 rep)
May 8, 2018, 08:17 AM • Last activity: Jul 23, 2025, 01:04 AM
0 votes
1 answer
142 views
NOSQL Manager for MongoDB throws command failed collection already exists
While copying collections from one MongoDB server (host A) to another (host B) using the "Copy collection to another database" tool of NoSQL Manager, we got the error "Command create failed: collection already exists, code: 48". The option we specified in NoSQL Manager for copying collections is "Replace existing collections". https://www.mongodbmanager.com/clone-mongodb-collection#copy
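Error code 48 is MongoDB's NamespaceExists: the tool issued a `create` for a collection that is already there, so its "Replace existing collections" option apparently did not drop the target first. A hedged manual workaround with PyMongo, where hostnames, database, and collection names are placeholders:

```python
# Drop the target collection first, then copy the documents over;
# all names and hosts below are placeholders, not from the question.
from pymongo import MongoClient

src = MongoClient("mongodb://hostA:27017")["mydb"]["mycoll"]
dst_db = MongoClient("mongodb://hostB:27017")["mydb"]
dst_db["mycoll"].drop()  # avoids NamespaceExists (code 48) on create
docs = list(src.find())
if docs:
    dst_db["mycoll"].insert_many(docs)
```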
satya (11 rep)
Apr 21, 2020, 05:50 PM • Last activity: Jul 19, 2025, 10:08 AM
1 vote
3 answers
160 views
How to structure db where main records are large
I was wondering if I could ask for some quick opinions. I am prototyping an app which seems simple, but I can't readily arrive at an architecture that feels right.

We have data that is simple and fits well into standard SQL format, e.g. users, organisations, projects, reports. However, the 'reports' are the problem. They can typically contain between 1,000 and 100,000 'line items', each with around 10 fields. A bit like a large spreadsheet, and in fact that is usually where they originate. Say 100 users each generate 10 projects a year, each project has 10 reports, and each report has 10,000 items (rows): one massive table quickly growing to millions of rows, with each row 'belonging' to a report. This doesn't feel right.

To make this trickier, the reports themselves can almost be considered 'mini-databases': although the meat of the data is thousands of line items (with uniform column requirements), they are structured into 'zones', 'sections', and 'subsections', and there is also an 'overview/meta' section. In theory, I could normalize this right from 'user' to 'item', something like users->projects->reports->zones->sections->subsections->items, but again this just doesn't feel right; what the DB has to do simply to fetch a report seems like overkill. All of this report info would also be mashed together across the different user accounts. Every report query would have to collect thousands of rows out of millions and assemble the client JSON from different tables, and there would be no other real benefit from being able to query the table this way. The reports are really self-contained.

As such, I am wondering whether this is a case for a NoSQL route, like Mongo or DocumentDB, where I can just throw thousands of lines of JSON into a Report collection or similar. This has further implications, though, as the backend frameworks I like to use don't play all that nicely with NoSQL, and we lose a bit of the standard, easy relational schema that would still seem to fit the other tables of the app (e.g. user accounts, RBAC, users belonging to organisations, etc.). But there would be a similar question with a JSON store: is it better to maintain a 'tree' structure and keep all the line items within their sections, and sections within their zones, or to normalize out so there is one big load of line items that carry references to their sections?

I've messed around with using MySQL for everything and storing each report as one big JSON in a JSON field, but I don't know whether this pushes the intention of the field type a little, and it's hard to query into. A consideration here is that on the frontend, users will update one (or a few) row(s) at a time, and I need to be able to handle that (see the sketch below). In this way, I suppose there's a similarity with Google Sheets or Airtable. Do these big apps tend to use an RDBMS?

I notice there seem to be more and more database options appearing all the time now, e.g. Fauna, Supabase, etc., which makes it even trickier to choose the right solution. Thanks in advance for any tips.
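On the "users update one row at a time" concern: if each report were stored as a document, a single line item can still be updated in place with an array-filtered update, so the whole report is not rewritten client-side. A minimal sketch with PyMongo, assuming MongoDB 3.6+ and that each item carries its own `item_id` field (both assumptions, not from the question):

```python
# Update one line item inside a report document via arrayFilters
# (MongoDB 3.6+); all names and values are illustrative.
from pymongo import MongoClient

reports = MongoClient()["app"]["reports"]
reports.update_one(
    {"_id": "report-123"},
    {"$set": {"items.$[it].value": 42}},
    array_filters=[{"it.item_id": 9876}],
)
```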
slickdaze (11 rep)
Apr 21, 2021, 10:48 AM • Last activity: Jul 14, 2025, 04:08 PM
4 votes
3 answers
170 views
Best practices for relating NoSQL data to RDMBS
I am looking to use a NoSQL solution (Redis, Mongo, etc.) alongside a MySQL/Laravel application. My reason is that I need schemaless key-value storage to go with my relational data. I could easily build key-value storage within MySQL, but I think this is a case where NoSQL could shine, and I would rather not take the easy way out. Other than coming here for help, that is.

What I am a little foggy on is how to tie these two together. An example scenario: I have 2 forms that generate leads. One form has email, best time to call, and date of birth. The other has email and "are you wearing pants?". Is it as simple as storing data in my NoSQL store as follows:

    { 'lead_id': 3829, 'email': '3829@test.com' }
    { 'lead_id': 3829, 'best_time': 'evening' }
    { 'lead_id': 3829, 'dob': '1980-07-01' }
    { 'lead_id': 3830, 'email': '3830@test.com' }
    { 'lead_id': 3830, 'pants': 'off' }

Then my MySQL lead table could look like:

    id   | form_id | timestamp
    3829 | 1       | 2014-09-18 11:02:32
    3830 | 2       | 2014-09-18 09:12:10

Since this will be my first NoSQL excursion, I am also not certain of the functionality available for my NoSQL data. Would it still be possible to retrieve data based on my lead_id and sort on a particular key? For example, filter form #2 results by email containing 'foo', sorted by the date they were submitted?
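For what it's worth, a more common document shape than one record per attribute is one document per lead carrying whatever fields its form collected; lead_id lookups, filters, and sorts then work directly, given an index on the queried fields. A sketch with PyMongo, where the document shape and field names are assumptions:

```python
# One document per lead (a shape assumption, not the only option),
# stored in MongoDB; names below are illustrative.
from pymongo import MongoClient

leads = MongoClient()["app"]["leads"]
leads.insert_one({
    "lead_id": 3830, "form_id": 2,
    "email": "3830@test.com", "pants": "off",
    "submitted": "2014-09-18T09:12:10Z",
})

# Form #2 results whose email contains "foo", sorted by submission time:
cursor = leads.find(
    {"form_id": 2, "email": {"$regex": "foo"}}
).sort("submitted", 1)
```

One caveat on the design: a substring `$regex` cannot use an ordinary index efficiently, so if "contains" searches are frequent, a text index or a dedicated search store may serve better.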
Don Boots (141 rep)
Sep 18, 2014, 03:07 PM • Last activity: Jul 14, 2025, 03:07 PM
1 vote
2 answers
4735 views
How can I query matching MongoDB documents and return a subset of fields from an array of embedded documents?
I have documents similar to the following:

    db.etudiants.insert({
        nom: 'emile',
        prenom: 'zola',
        cours_reussis: [
            { intitule: ['PAI'], Periodes: NumberInt(60), note: 60,
              date_reussite: NumberInt(2014), dispense: true, nom_ecole: 'EPFC' },
            { intitule: 'TGP', Periodes: NumberInt(40), note: 75,
              date_reussite: NumberInt(2013), dispense: true, nom_ecole: 'HEB' },
            { intitule: 'SO2', Periodes: NumberInt(60), note: 70,
              date_reussite: NumberInt(2015), dispense: true, nom_ecole: 'ISFCE' },
            { intitule: 'IBD', Periodes: NumberInt(60), note: 80,
              date_reussite: NumberInt(2015), dispense: true, nom_ecole: 'ISFCE' }
        ]
    })

How can I query matching documents and return only the nom_ecole: 'ISFCE' entries and their year (date_reussite) from the cours_reussis array?
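One way, sketched with PyMongo under the assumption that every matching embedded document should come back as its own result row: unwind the array, match on nom_ecole, and project the wanted fields. ($filter is an alternative if the array should stay nested.)

```python
# Unwind the cours_reussis array, keep only ISFCE entries, and project
# the two wanted fields; database name is an assumption.
from pymongo import MongoClient

etudiants = MongoClient()["test"]["etudiants"]
results = etudiants.aggregate([
    {"$unwind": "$cours_reussis"},
    {"$match": {"cours_reussis.nom_ecole": "ISFCE"}},
    {"$project": {
        "nom": 1,
        "nom_ecole": "$cours_reussis.nom_ecole",
        "date_reussite": "$cours_reussis.date_reussite",
    }},
])
for doc in results:
    print(doc)
```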
zouhair (113 rep)
Dec 5, 2016, 11:14 PM • Last activity: Jul 9, 2025, 03:07 AM
-1 votes
1 answer
258 views
Better way to store panel data in SQL or NoSQL
I have an Excel file of raw panel data (cross-sectional + time series) sent to me by customers (I cannot change this). For those not familiar: imagine the GDP time series for each country. I need to transfer this data into a database, and I am about to choose the best technology, either SQL (Microsoft SQL Server) or NoSQL (MongoDB). In principle I am quite sure you can use both, but probably an SQL-like database is less suitable. My thoughts:

- SQL: I would need to repeat the country name once per timestamp available for its GDP, stacking each country below the other. When I need to update (because a new GDP value comes out every 3 months), I would need to code complex queries to correctly insert the new value after a country's last row but before the next country starts. (FYI: to insert data into the db I will have Python code which pushes new data into the db.)
- NoSQL: here I can have a collection named GDP and a document per country, so for the document "UK" I'd have a key (time-series) and a value like [[1-1-2022, 2.3], [1-4-2022, 3.1], ...] (a list of lists where the first element is the timestamp and the second the GDP value), and so on. In addition, since this data is quite messy, it can happen that from time to time some data is missing, so a NoSQL document will simply not have a value, which is less problematic than in SQL.

To me, NoSQL is better in this case. Since there are expert people in this forum (I am not...), I would like to know if my reasoning is right, and whether for panel data a NoSQL database (e.g. MongoDB, because it's free...) is more suitable.
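One hedged correction to the SQL bullet: rows in a SQL table are unordered, so a new quarterly value never has to be inserted "between" countries. In long format (one row per country per period), appending a new observation is a single INSERT. A sketch with pyodbc against SQL Server, where the table definition and DSN are assumptions:

```python
# Assumed long-format table (not from the question):
#   gdp(country VARCHAR, obs_date DATE, value FLOAT NULL,
#       PRIMARY KEY (country, obs_date))
# A missing observation is simply an absent row or a NULL value.
import pyodbc

conn = pyodbc.connect("DSN=mydb")  # placeholder connection string
cur = conn.cursor()
cur.execute(
    "INSERT INTO gdp (country, obs_date, value) VALUES (?, ?, ?)",
    ("UK", "2022-04-01", 3.1),
)
conn.commit()
```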
Dark2018 (99 rep)
Apr 19, 2022, 09:34 AM • Last activity: Jun 2, 2025, 02:07 AM
1 vote
1 answer
250 views
How to convert a datefield from string to date?
Here is the query that I am using. I need to extract the data for the last 10 minutes, but this query returns no records. I think this is due to the type of data that was inserted into the created_at field: it should be a date field, but strings were inserted. How can I convert this field so I can use it in the **$match**?

    db.collectionName.aggregate([
        { $match: { created_at: { $gte: new Date(ISODate().getTime() - 1000 * 60 * 10) } } }
    ]);
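If the field can't be backfilled to real dates, one option is to convert at query time inside $expr; this assumes MongoDB 4.0+ (for $toDate) and that created_at holds ISO-8601 strings:

```python
# Convert the string on the fly inside $expr, then compare; database and
# collection names follow the question, the rest is an assumption.
import datetime
from pymongo import MongoClient

coll = MongoClient()["mydb"]["collectionName"]
ten_min_ago = datetime.datetime.utcnow() - datetime.timedelta(minutes=10)
cursor = coll.aggregate([
    {"$match": {"$expr": {
        "$gte": [{"$toDate": "$created_at"}, ten_min_ago]
    }}},
])
```

Note that the per-document conversion bypasses any index on created_at, so backfilling the field to real Date values remains the better long-term fix.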
Driggs Alonzo (31 rep)
Sep 13, 2019, 02:23 AM • Last activity: May 25, 2025, 04:04 AM
1 vote
1 answer
324 views
How to fix mongosh v2.3.3 connection error with AWS DocumentDB v5.0.0
When I try to connect to a DocumentDB cluster (version 5.0.0) using mongosh version 2.3.3, the following error is returned: `MongoServerError: Unsupported mechanism [ -301 ]`. I am using a connection string with this format:
mongosh 'mongodb://my_user:my_password@my-host-name.cluster-id-aws.us-east-1.docdb.amazonaws.com:27017/my_db_name&tls=true&tlsCAFile=rds-global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false'
I searched the AWS DocumentDB and MongoDB Community documentation, but I didn't find a fix for this issue.
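One thing worth checking first, offered as an observation rather than a confirmed fix: in the connection string above, the options are appended to the database name with `&`, and there is no `?` starting the query string. Per the MongoDB URI format it should read `.../my_db_name?tls=true&tlsCAFile=rds-global-bundle.pem&replicaSet=rs0&...`. As written, the options are likely never parsed, so TLS is not applied, and an unencrypted handshake against a TLS-only DocumentDB cluster can surface as odd authentication-mechanism errors.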
Emerson Vicente (11 rep)
Nov 5, 2024, 04:03 PM • Last activity: May 23, 2025, 08:05 AM
1 vote
1 answer
265 views
Cannot run nodetool commands after enabling JMX authentication in DSE
I would like to use the two pages below to enable DSE Unified Authentication and set up a secure environment. I have tried to configure it, but it does not work. The symptoms:

1. When I type nodetool -u <username> -pw <password> status on node1, I get

    "nodetool: Failed to connect to '127.0.0.1:7199' - FailedLoginException: 'Failed to login. Please re-try.'."

2. Running nodetool status from node2, which has not had any DSE Unified Authentication files touched from the initial state, confirms that node1 is working as usual.

Please let me know if you have any suggestions on what else I should try, or whether I should change a file here or there. If I run nodetool -h <host> drain, etc. from another node, I also get an error, but it is not an authentication error; it is a ConnectException.

Documentation:
https://docs.datastax.com/en/dse68-security/docs/Auth/secEnableDseAuthenticator.html
https://docs.datastax.com/en/dse68-security/docs/secJmxAuth.html

(The "Enabling JMX Native Authentication" procedure has not been implemented.)
Youjie (41 rep)
Oct 4, 2022, 11:13 PM • Last activity: May 22, 2025, 08:04 PM
1 vote
3 answers
356 views
Which Python 2.7.x version is compatible with both Cassandra 3.11 and 4.1?
I'm upgrading Cassandra from 3.11 to 4.1. One of the requirements is the latest Python 2.7 updates. I want to upgrade Cassandra with no downtime, so I need to know which Python 2.7 update is compatible with both Cassandra 3.11 and Cassandra 4.1. The DataStax doc on upgrading Apache Cassandra asks for the latest Python 2.7, but it doesn't explicitly say which version, or whether that Python version is compatible with both Cassandra 3.11 and 4.1. I tried Python 2.7.5, which works with cqlsh on Cassandra 3.11, but it does not work with cqlsh on Cassandra 4.1.
Thu Anh (11 rep)
May 30, 2024, 12:31 PM • Last activity: Apr 27, 2025, 02:09 PM
1 vote
1 answer
402 views
How do I model a Cassandra table that stores friends of a user?
Recently I started to learn Cassandra, and I needed to design the database for my web application. I prepared a conceptual data model as well as an application workflow, and currently I'm stuck on something. Let me provide some details of the issue. I want to show all friends of the currently logged-in user WITH PROFILE PICTURES AND THEIR FULL NAME. So I probably need two tables (image: fragment of the application workflow):
**users_by_id**
user_id PARTITION KEY
email
password
profile_image
full_name
**friends_by_user_id**
user_id PARTITION KEY (whose friend is it)
friend_id (user id of the friend)
And now let's say I want to display all friends in a list, but the problem is the user expects the app to show their profile pictures and full names (not just the friend's user id), so the user can recognize who is who (pretty logical, right?). So, how do I do that? I mean, I could get each friend's user id and then query the users table to get the full name and profile picture individually, but I don't think that would be very efficient (what if the user has hundreds of friends?!). ***EDIT: and what if a friend changes their profile picture or name?*** What is the right way to solve this problem? Thanks in advance!
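For what it's worth, the textbook Cassandra answer is to denormalize: copy the display fields into the friends table so the whole list is served by one partition read, and accept that a profile change must then be fanned out to every friend list that references it (often via a batch job or an extra lookup table). A sketch with the DataStax Python driver, assuming a keyspace named `app` and uuid keys (both assumptions):

```python
# Denormalized friends table: one partition read renders the list.
import uuid
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("app")
session.execute("""
    CREATE TABLE IF NOT EXISTS friends_by_user_id (
        user_id        uuid,
        friend_id      uuid,
        full_name      text,   -- copied from users_by_id
        profile_image  text,   -- copied from users_by_id
        PRIMARY KEY ((user_id), friend_id)
    )
""")

some_user_id = uuid.uuid4()  # placeholder for the logged-in user's id
rows = session.execute(
    "SELECT friend_id, full_name, profile_image "
    "FROM friends_by_user_id WHERE user_id = %s",
    (some_user_id,),
)
```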
Jacobino (11 rep)
Nov 21, 2022, 07:12 PM • Last activity: Apr 16, 2025, 08:04 AM
0 votes
1 answer
524 views
NoSQL vs SQL during sharding when a single read operation only hits one shard
I have a use case where I'm stuck figuring out whether to use a SQL or NoSQL db. The db has 2 fields: A (PK) and B. A is 8 chars long, while B is 100-400 chars long. The operations are simply:

1. Write: given strings A and B, store both in the db.
2. Read: given input A, return the B associated with it.

The database is queried by multiple application servers for reads and writes, implying parallel read/write transactions (operations). Now comes the scalability part.

**SQL:** Scaling writes: assuming partitioning is already done, enable sharding (shard based on some hash function). As every write operation comes in, write to the appropriate shard. This scales the writes. Scaling reads: enable replication (not sure if this is enabled by default for SQL dbs).

The assumption is that we are scaled for both writes and reads, so sharding and replication are both enabled. As you can tell from the above, each read operation only needs to hit a single shard, so no SQL JOINs are required, and no combining of results from multiple shards is required either.

**1. Will a SQL db perform better in the above case than NoSQL if we are scaling for a high number of reads and writes (#reads equivalent to #writes)?** I see that SQL dbs have to maintain ACID properties, and maintaining strong consistency slows down writes and reads because holding locks across shards is time-consuming, but our case doesn't require cross-shard locks.

**2. Does the above decision change when reads >>> writes OR writes >>> reads?**

**3. If there's no difference between using SQL and NoSQL dbs, does it basically boil down to whether eventual consistency is permitted, in which case NoSQL should be the better choice (I hope that's because writes will be much faster in NoSQL, given that the strong consistency SQL provides is not a must)?**

**A very basic question: given that the db grows to terabyte/petabyte size, does it make sense to use a SQL db with a single server instance which holds that much data on a single piece of hardware with replication turned ON?**
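For concreteness, here is the routing both halves of the question assume: the key is hashed to pick a shard, so each point read or write touches exactly one shard, whatever engine sits behind it. A minimal sketch (shard names are placeholders):

```python
# Static hash routing: every read/write for key A lands on one shard.
import zlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(key: str) -> str:
    return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

print(shard_for("ABCD1234"))  # always the same shard for a given key
```

Production systems usually prefer consistent hashing over this modulo scheme, so that adding a shard relocates only a fraction of the keys.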
asn (127 rep)
Jul 2, 2023, 10:57 AM • Last activity: Apr 10, 2025, 03:05 PM
1 vote
1 answer
642 views
NoSQL- automatically cache and update references to other documents as embedded
To avoid having to look up referenced documents, one practice is to embed them (e.g. https://docs.mongodb.com/manual/tutorial/model-embedded-one-to-many-relationships-between-documents/). Is there a way to have those embedded documents actually be separate, referenced documents yet be "automatically embedded": some functionality which automatically stores a copy of their data in the referencing document, almost like an automatic "embedding cache"? I'm not referring to documents being automatically retrieved by reference, but to an actual copy being stored and updated automatically. I know this could be built manually, but I'm wondering what may already exist that can do this, preferably something inherent to MongoDB, but it doesn't have to be. While I'm asking this about MongoDB in particular, if someone knows of something like this for other NoSQL databases, I'd be interested to hear about that as well.
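I'm not aware of anything built into MongoDB that does this automatically, but since MongoDB 3.6 change streams shrink the "build it manually" option to a small watcher process: subscribe to changes on the source collection and propagate them into the embedded copies. A sketch with PyMongo, assuming a replica set (required for change streams); collection and field names are illustrative:

```python
# Watch the source collection and refresh embedded copies on update.
from pymongo import MongoClient

db = MongoClient()["blog"]
pipeline = [{"$match": {"operationType": "update"}}]
with db.users.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        user = change["fullDocument"]
        db.posts.update_many(
            {"author._id": user["_id"]},
            {"$set": {"author.name": user["name"],
                      "author.email": user["email"]}},
        )
```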
g491 (121 rep)
Mar 9, 2017, 05:13 PM • Last activity: Apr 3, 2025, 12:04 PM
3 votes
1 answer
2153 views
Are there any known benchmarks/statistics on UnQLite?
I came across UnQLite the other day on Stack Overflow, and it has me intrigued for some software I'm working on. However, very few tests have been published on how well it performs. There are plenty of questions and public data available on SQLite and others, but nothing on UnQLite. (I'm looking at serverless databases, hence my mention of SQLite. I understand that SQLite is relational whereas UnQLite is KVP, and the two are not forked from the same parent projects.) I'd like to know:

- What UnQLite seems to achieve in read/write speeds
- What types of reads and writes (sequential, synchronous, large data, batch, random, etc.) it does well at compared to some other databases
- Where bottlenecking seems to occur
- How large the database tends to get (disk size or rows) before the time it takes to find and extract data begins to increase significantly (i.e. slows down)
- If at all possible, what disk formats (ext3/4, NTFS, exFAT, HFS+) cause problems for the database
- Any recommended ways of optimizing performance with it

Thank you.
Kamikaze Rusher (131 rep)
Apr 16, 2015, 02:12 PM • Last activity: Apr 1, 2025, 04:14 AM
-1 votes
1 answer
61 views
Best Database for Bulk Reads and Fast Key Lookups for Genomic Data
I'm developing a system to store and query genomic data using a relational database (PostgreSQL being my primary candidate). The dataset is quite large: around 9 billion records with over 300 columns, totaling more than 30 TB of data. Here's an example of the data structure in Go:

    type Variant struct {
        VariantVcf pgtype.Text `json:"variant_vcf"`
        Chromosome pgtype.Text `json:"chromosome"`
        Position   pgtype.Text `json:"position"`
        BravoAn    pgtype.Int8 `json:"bravo_an"`
        // ... many additional fields
    }

Here are some sample SQL queries.

Single ID lookup:

    SELECT * FROM variants WHERE variant_vcf = '1-14809-A-T';

Bulk ID lookup:

    SELECT * FROM variants WHERE variant_vcf IN ('1-14809-A-T', '2-23456-G-C', 'X-78901-T-G');

Questions for the community: although I plan to stick with relational databases, are there scenarios where a key-value store or another specialized database might outperform PostgreSQL, especially for bulk key-based retrievals? We expect around 2,000 concurrent users (a conservative estimate), each performing 1-4 lookups on average. The majority of these lookups are bulk key-based lookups (e.g., 1k-10M IDs per request).
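On the bulk-lookup side, one PostgreSQL-specific note: passing the whole batch as a single array parameter with `= ANY` avoids building a giant IN list and still lets the planner use the primary-key index. A sketch with psycopg2, where the DSN and ID list are placeholders:

```python
# Bulk primary-key lookup in one round trip via an array parameter.
import psycopg2

conn = psycopg2.connect("dbname=genomics")  # placeholder DSN
ids = ["1-14809-A-T", "2-23456-G-C", "X-78901-T-G"]
with conn.cursor() as cur:
    cur.execute(
        "SELECT * FROM variants WHERE variant_vcf = ANY(%s)", (ids,))
    rows = cur.fetchall()
```

For the largest batches (millions of IDs), loading the IDs into a temporary table and joining can outperform `ANY`, since the planner then has statistics to work with.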
mad (1 rep)
Mar 27, 2025, 10:29 PM • Last activity: Mar 28, 2025, 08:58 PM
0 votes
0 answers
29 views
How to manage indexes not fitting into memory in Document DB?
The problem: from time to time I get slow (>10s) queries even though they filter on indexed fields. I'm using a DocumentDB instance that has 128 GB of memory. The total size of all of the indexes (across all collections) is 1.5 TB, and the data size is 1 TB. That obviously means that all of the indexes can't fit into memory together. In addition, when I check the BufferCacheHitRatio metric I see values between 88% and 95%, which as far as I understand are considered very low, and mean that between 5% and 12% of queries have to go to the disk, which is slower.

The way I thought of tackling this is to split my system into logical applications based on the way they query the DB, i.e. the indexes they need. Then I can create a read replica for every such application (all of the applications only perform read operations) and enforce that every application connects only to its corresponding read replica. I can choose the instance type of every read replica based on the size of the relevant indexes. The reasons I like this approach, as opposed to simply scaling up the single instance I currently have so that the indexes fit in memory, are:

1. I'm not sure I can scale up forever; i.e., is there a DocDB instance with 10 TB of memory?
2. It lets me offer a different QoS to different applications. If one application is client-facing and it's extremely important that its queries always hit an in-memory index, I can provision for that application specifically, while for others it's acceptable to go to disk 10% of the time. So it gives me flexibility.

Is this the right approach for addressing such issues? Or should this be avoided, and is there another pattern to get what I want with DocDB? Thanks
Noam (101 rep)
Mar 25, 2025, 09:44 PM
1 vote
1 answer
4815 views
Efficient way to get data for each date range intervals using single query | MongoDB
I have collections about products and purchases, and I need statistics for each time interval, like this:

| Name | Quantity (date range 1) | Quantity (date range 2) |
|:---- |:------:| -----:|
| Name One | 500 | 400 |
| Name Two | 251 | 450 |
| Name Three | 444 | 600 |

I have the following collections:

    // Products model schema
    {
        name: {
            type: String,
            required: [true, "The product name is required"],
            unique: true,
        },
        price: {
            type: mongoose.Types.Decimal128,
            default: "0.0",
        },
    }

    // Orders model schema
    {
        product: {
            type: Schema.Types.ObjectId,
            ref: "Products",
        },
        price: {
            type: mongoose.Types.Decimal128,
            default: "0.0",
        },
        quantity: {
            type: Number,
            default: 0,
        },
        orderDate: {
            type: Date,
            default: Date.now, //new Date(),
        },
    }

And I have the following aggregate function:

    OrdersModel.aggregate([
        { $match: { orderDate: { $gte: new Date(sdt), $lte: new Date(edt) } } },
        {
            $group: {
                _id: "$product_name", //{ name: "$name"},
                totalQuantity: { $sum: "$quantity" },
            }
        },
        {
            $project: {
                _id: 1,
                qty: "$totalQuantity",
                //...
            }
        }
    ])

This query works, but only for one time interval, and I need results for multiple time intervals. I'm new to NoSQL; I've tried retrieving data for one or more time intervals, but I couldn't find out how to achieve this. How can I solve this problem? I need the total sales quantity of each product in different time intervals, e.g.:

    2010-01-01 to 2012-01-01
    2015-01-01 to 2017-01-01
    ...

The time intervals can be dynamic: there can be a minimum of 1 and a maximum of 4 intervals.
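One way to run several intervals in a single query is $facet (MongoDB 3.4+): each requested range becomes its own sub-pipeline over the same input, so 1-4 dynamic ranges can be assembled in application code. A sketch with PyMongo, where the database and collection names are assumptions and the field names follow the question's schema:

```python
# Build one $facet sub-pipeline per requested date range.
import datetime
from pymongo import MongoClient

orders = MongoClient()["shop"]["orders"]
ranges = [
    ("range_1", datetime.datetime(2010, 1, 1), datetime.datetime(2012, 1, 1)),
    ("range_2", datetime.datetime(2015, 1, 1), datetime.datetime(2017, 1, 1)),
]
facets = {
    name: [
        {"$match": {"orderDate": {"$gte": start, "$lte": end}}},
        {"$group": {"_id": "$product", "qty": {"$sum": "$quantity"}}},
    ]
    for name, start, end in ranges
}
result = list(orders.aggregate([{"$facet": facets}]))
# result[0]["range_1"] and result[0]["range_2"] hold per-product totals.
```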
MinorDestroyer (11 rep)
Jan 19, 2021, 07:14 AM • Last activity: Feb 14, 2025, 05:05 AM
1 vote
1 answer
588 views
What makes many NoSQL databases non-ACID compliant when both SQL and NoSQL can scale horizontally
A lot of the discussion of the differences between SQL and NoSQL focuses on the fact that SQL databases support ACID properties while many NoSQL dbs compromise on ACID. I'm unsure of the reason, but scalability seems to be the culprit from what I've read (do correct me here). A lot of discussion wrongly associates the consistency in CAP with the consistency in ACID. People say that NoSQL doesn't provide ACID but provides BASE (Basically Available, Soft state, Eventual consistency), where E stands for eventual consistency. But SQL dbs can provide eventual consistency too (master-slave architecture: MySQL by default performs asynchronous replication).

Apart from the fact that SQL dbs have a rigid schema and NoSQL dbs have a flexible schema, what other differences do they have? Is there a difference in scalability? I know NoSQL is an umbrella term, but I'm referring to most NoSQL dbs here. If both SQL and NoSQL can scale (and I found some NoSQL dbs that provide ACID), why can't every NoSQL db provide ACID?

And when it's said that some NoSQL dbs (the ones that don't provide ACID properties) can't provide ACID, does it mean that a transaction can leave a database table in an inconsistent state? This inconsistency is wrongly interpreted as CAP inconsistency when in fact it means ACID inconsistency. Can concurrent transactions move a NoSQL table into an inconsistent state? For example, is it possible for dirty reads to be performed by one transaction running concurrently with another, parallel transaction?

Here are some discussions that seem to mix up ACID and CAP consistency (I'll add more if I find them):

1. https://dba.stackexchange.com/questions/18435/cap-theorem-vs-base-nosql
2. https://stackoverflow.com/a/3423750/7547722
asn (127 rep)
Jul 6, 2023, 02:29 PM • Last activity: Feb 13, 2025, 05:00 AM
0 votes
3 answers
793 views
show users doesn't show anything, although it has users
So I'm using MongoDB, and this is my first time dealing with a VPS on my own. I'm trying to view the users I have on the website. I know for a fact that there are signed-up users, because I can sign up and log in and it all works. I also tried to connect to the database from MongoDB Compass, but that failed. (screenshot of the failed attempt omitted)
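A hedged note on what `show users` actually lists: it reports *database* users (authentication accounts) of the current database, not the people who signed up on the website; those sign-ups live as ordinary documents in whatever collection the application writes to. A sketch with PyMongo showing both, where the database and collection names are guesses:

```python
# Distinguish database users from application users.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Database users (what `show users` reports for the selected db):
print(client["admin"].command("usersInfo"))

# Application sign-ups are plain documents, e.g. in a "users" collection:
print(client["mywebsite"]["users"].count_documents({}))
```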
lunix (11 rep)
Jan 10, 2021, 12:04 PM • Last activity: Feb 6, 2025, 08:48 AM