Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
295
views
How does an encrypted DynamoDB (the same question applies to RDS as well) query work?
I am trying to understand whether the primary key is encrypted when I choose to [encrypt at rest for AWS DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.howitworks.html)
1. If the primary key is encrypted, how does a primary-key lookup get performed under the hood?
2. If the primary key is not encrypted, then I know how DynamoDB is implemented under the hood, but it seems that it's not entirely safe for some use cases. Any comment on this?
Furthermore, if I choose to have a Global Secondary Index for my DynamoDB table, is the primary key of the Global Secondary Index table encrypted?
1. If not encrypted, I kind of understand how it works under the hood, but certainly some of my data gets revealed in plain text;
2. If encrypted, I'd appreciate understanding how it works.
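To make the setup concrete, here is roughly how encryption at rest is switched on at table creation (a hedged boto3 sketch; the table name and key schema are placeholders). At the API level, the key lookup is written exactly as on an unencrypted table, which is what makes me wonder what happens underneath:
```python
import boto3

# Hedged sketch: table name and key schema are made up for illustration.
dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="example",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Encryption at rest: with SSEType "KMS" the table (including its keys and
    # any GSIs) is encrypted with an AWS KMS key, while the client still sends
    # and receives plaintext attribute values.
    SSESpecification={"Enabled": True, "SSEType": "KMS"},
)

# A primary-key lookup is issued exactly as it would be without encryption at rest:
dynamodb.get_item(TableName="example", Key={"pk": {"S": "user#123"}})
```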
chen
(121 rep)
Sep 16, 2018, 09:50 PM
• Last activity: May 12, 2025, 06:09 AM
1
votes
0
answers
35
views
Unable to create a GSI on dynamodb from aws cli on local
I am trying to add a GSI to an existing table on DynamoDB Local.
I downloaded the latest version (2.5.1) and ran it with:
`java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -port 8001`
Creating a table works fine:
aws dynamodb create-table \
--endpoint-url http://localhost:8001 \
--table-name GameScores2 \
--attribute-definitions AttributeName=GameTitle,AttributeType=S AttributeName=UserId,AttributeType=S \
--key-schema AttributeName=UserId,KeyType=HASH AttributeName=GameTitle,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
But when I try to create a GSI, just as the Amazon docs describe:
aws dynamodb update-table --endpoint-url http://localhost:8001 --table-name GameScores2 --attribute-definitions AttributeName=TopScore,AttributeType=N --global-secondary-index-updates "[
{
\"Create\": {
\"IndexName\": \"GameTitleIndex\",
\"KeySchema\": [{\"AttributeName\":\"GameTitle\",\"KeyType\":\"HASH\"},
{\"AttributeName\":\"TopScore\",\"KeyType\":\"RANGE\"}],
\"Projection\":{
\"ProjectionType\":\"INCLUDE\",
\"NonKeyAttributes\":[\"UserId\"]
}
}
}
]"
I always get the error: An error occurred (InternalFailure) when calling the UpdateTable operation (reached max retries: 9): The request processing has failed because of an unknown error, exception or failure.
I've searched all over the internet, but as far as I can see the syntax is OK, the DB is running, and I have the latest version...
Any idea what could be happening?
If I try to create the table directly with the GSI, it works without error, but I need to update an existing table.
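For comparison, the same update expressed through boto3 against the local endpoint might look like this (a sketch; the extra ProvisionedThroughput block under Create is an addition worth trying, since a provisioned table normally requires it for a new GSI, and it is missing from the CLI call above):
```python
import boto3

# Sketch: same GSI creation as the CLI call above, issued via boto3 against
# DynamoDB Local (endpoint and port taken from the question; the credentials
# and region are dummies, which DynamoDB Local accepts).
client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8001",
    region_name="us-east-1",
    aws_access_key_id="local",
    aws_secret_access_key="local",
)

client.update_table(
    TableName="GameScores2",
    AttributeDefinitions=[{"AttributeName": "TopScore", "AttributeType": "N"}],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "GameTitleIndex",
                "KeySchema": [
                    {"AttributeName": "GameTitle", "KeyType": "HASH"},
                    {"AttributeName": "TopScore", "KeyType": "RANGE"},
                ],
                "Projection": {
                    "ProjectionType": "INCLUDE",
                    "NonKeyAttributes": ["UserId"],
                },
                # For a provisioned-capacity table the Create block needs its
                # own throughput settings; DynamoDB Local largely ignores the
                # numbers but the field may still be required.
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
)
```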
Tesouro
(11 rep)
Jul 9, 2024, 10:51 AM
• Last activity: Apr 9, 2025, 09:26 AM
0
votes
0
answers
13
views
Database migrations under DynamoDB - Move entries to code, and create a pipeline
We have a DynamoDB-based config. We want to transfer all the configs inside the DDB to a code package. We then want to create a pipeline from the said code package to the DDB, which then handles all the changes inside the DDB.
I believe this comes under the topic of "database migrations"; one example is Flyway (but that is aimed at SQL-based DBs).
I wanted to check whether there are some solutions already built for this.
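In the absence of an off-the-shelf tool, the fallback is usually a small deploy step inside the code package's pipeline that upserts every config item into the table. A hedged sketch, assuming the config lives as a JSON list in the package and the table/key names below are placeholders:
```python
import json
import boto3

# Sketch of a "config package -> DynamoDB" deploy step.
# Assumptions: config lives in config/items.json as a list of item dicts,
# the table is named "app-config", and its key attributes are in each item.
TABLE_NAME = "app-config"

def sync_config(path: str = "config/items.json") -> None:
    with open(path) as f:
        items = json.load(f)

    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)  # upsert: existing keys are overwritten

if __name__ == "__main__":
    sync_config()
```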
Mooncrater
(101 rep)
Sep 5, 2024, 09:38 AM
0
votes
1
answers
61
views
Is DynamoDB a good choice for a multi-tenant file management service?
I am planning to build a file management microservice for a multi-tenant SaaS platform that uses Amazon S3 as the file storage backend. I intend to use DynamoDB as the database because it is fast and scalable. The platform may host many tenants, with each tenant having many users, and each user having many files in their storage space.
I need to keep a record of each file's metadata and the user directory structure (hierarchy) in the database. However, I'm uncertain if DynamoDB is the best choice. A user might want to list all their directories and files, search by name, or sort them alphabetically or by creation date. This could lead to the "hot partition" problem (at least for Global Secondary Indexes). The situation could become even more challenging when tracking all the files uploaded by all users within a tenant.
Therefore, I am considering a hybrid approach: using PostgreSQL for relational data and DynamoDB for storing file information and directory hierarchies.
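To make the DynamoDB side concrete, this is the sort of single-table key layout I have been sketching for the per-user listing pattern (every name here is illustrative, nothing is settled):
```python
# Illustrative item shapes only -- not a settled design.
# The partition key groups one user's tree, and the sort key encodes the path,
# so a single Query can list a directory. Sorting by date would need a GSI (or
# a client-side sort), and search-by-name likely needs an external index.
file_item = {
    "PK": "TENANT#acme#USER#42",           # tenant + user => natural isolation
    "SK": "PATH#/photos/2024/cat.jpg",     # Query with begins_with("PATH#/photos/")
    "type": "file",
    "size_bytes": 123456,
    "s3_key": "acme/42/photos/2024/cat.jpg",
    "created_at": "2024-08-11T08:53:00Z",  # candidate GSI sort key for date ordering
}

directory_item = {
    "PK": "TENANT#acme#USER#42",
    "SK": "PATH#/photos/2024/",
    "type": "directory",
}
```
Listing one user's files then stays inside one partition key, which eases the per-user hot-partition concern but not the tenant-wide reporting one.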
What are your thoughts on this idea? Can anyone point me in the right direction?
Thanks in advance.
Majid
(125 rep)
Aug 11, 2024, 08:53 AM
• Last activity: Aug 12, 2024, 12:53 PM
-1
votes
1
answers
139
views
History of the provisioned write capacity in DynamoDB beyond the last two weeks
Is there any way to see the history of the provisioned write capacity for a DynamoDB table beyond the last two weeks? When I go to the Amazon AWS console, select a DynamoDB table and look at the monitoring tab, the time range does not allow me to go beyond two weeks. Is there any other place to look at that could give me a longer history?

Franck Dernoncourt
(2093 rep)
Feb 15, 2015, 06:24 AM
• Last activity: Mar 25, 2023, 09:23 PM
1
votes
0
answers
36
views
Best practice to store subitems
I am going to create my first DynamoDB table. I have 3 options in mind:
**1. Create a long table where every subitem is a new row.**
Example:
user John | item 1 | 2021
user John | item 2 | 2022
user John | item 3 | 2023
user Mark | item 1 | 2021
user Mark | item 2 | 2022
**2. Create a short table where subitems are organized inside arrays.**
Example:
user John | [item 1, item 2, item 3] | ['2021', '2022', '2023']
user Mark | [item 1 ,item 2] | ['2021', '2022']
**3. Create multiple tables.**
Example:
table John
item 1 | 2021
item 2 | 2022
item 3 | 2023
-----------------
table Mark
item 1 | 2021
item 2 | 2022
**I would like to know which one is the best practice for keeping my AWS hosting bill low.**
EDIT: As requested in the comments, more details about the table: it would be like "save notes in the cloud", where users can store comments and links to retrieve later. For example, the user Mark discovers a cool website, so he creates a note for himself like: "www.coolwebsite.com very interesting... I must check later". I don't know how many users I will have or how many notes they will save in a day; however, I think the number of users will be low at the start and the number of notes will be no more than 10 per day.
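To make option 1 concrete, it maps naturally onto a single table with the user as the partition key and a date-prefixed note id as the sort key; a hedged sketch with made-up names:
```python
import boto3
from boto3.dynamodb.conditions import Key

# Sketch of option 1: one item per note, keyed by user + note date/id.
# Table and attribute names are made up for illustration.
table = boto3.resource("dynamodb").Table("notes")

table.put_item(
    Item={
        "user": "Mark",                     # partition key
        "note_id": "2023-01-08#note-001",   # sort key: date first => sorted listing
        "text": "www.coolwebsite.com very interesting... I must check later",
    }
)

# Listing all of one user's notes is then a single Query on the partition key.
notes = table.query(KeyConditionExpression=Key("user").eq("Mark"))["Items"]
```
With on-demand pricing the bill follows requests and stored GB, so at roughly 10 notes a day the three layouts should cost about the same; the difference is mostly query convenience.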
L777
(111 rep)
Jan 8, 2023, 12:29 PM
• Last activity: Jan 8, 2023, 02:48 PM
0
votes
0
answers
33
views
Access the same Entity via various attributes in DynamoDB
I'll begin by saying I am new to DynamoDB and the world of NoSQL in general. I have a patient entity and I am storing it in a DynamoDB table. I'm still in the data modelling phase, so I can ditch that and make as many changes as I want.
My access patterns for the patient entity are:
1. Get patient by id
2. Get patient by first name
3. Get patient by last name
4. Get patient by date of birth
5. Get patient by post code
How can I achieve this with my DynamoDB table? Right now, the basic model I've put together is shown in a screenshot (not included here).
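Separately, as a hedged sketch of one common direction for access patterns 2-5 (one global secondary index per lookup attribute; every name below is a placeholder, not a recommendation):
```python
import boto3

# Sketch only: a base table keyed on the patient id, plus one GSI per
# secondary lookup attribute. All names are placeholders.
boto3.client("dynamodb").create_table(
    TableName="patients",
    AttributeDefinitions=[
        {"AttributeName": "patient_id", "AttributeType": "S"},
        {"AttributeName": "last_name", "AttributeType": "S"},
        {"AttributeName": "post_code", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "patient_id", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "last_name-index",
            "KeySchema": [{"AttributeName": "last_name", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        },
        {
            "IndexName": "post_code-index",
            "KeySchema": [{"AttributeName": "post_code", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        },
        # first_name and date_of_birth would follow the same pattern.
    ],
    BillingMode="PAY_PER_REQUEST",
)
```
A lookup by, say, last name then becomes a Query with `IndexName="last_name-index"` rather than a Scan.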

J86
(331 rep)
Oct 24, 2022, 04:19 PM
0
votes
1
answers
702
views
parse json into a relational database or use a NoSQL?
I am retrieving a JSON file from an API whose size varies between 100 KB and 7 MB. The structure of the response is (screenshot not included):
I initially thought of storing the response in a relational database, with the following tables and fields:
- operator
- id
- name
- short_name
- operator_accounts:
- id
- operator_id
- account_type
- account_name
- ….
- operator_records
- id
- operator_account_id
- date
- text
- amount
- type
- category
- operator_kpis:
- id
- operator_id
- kpi1
- kp2
An operator will have between 1 and 20 operator_accounts. An operator_account might have thousands of operator_records. operator_kpis will always be 1 row per operator.
I intend to build an application where users can visualize the operator_records and change/fix some of the rows. The operator_kpis are mainly calculated from the operator_records; each time the user changes/fixes values in the operator_records, the operator_kpis will be updated.
My question is:
I recently saw a video about NoSQL (I am just getting familiar with it), and now I am confused: I am not sure whether I should stay with a relational database like Postgres or use something like AWS DynamoDB or MongoDB.
From my understanding, DynamoDB might not work for me due to its item size limit, but I am not sure whether there is a better way to distribute the JSON in DynamoDB, or whether to use Mongo to store the file. Or I could simply store it in S3, read the information, and amend the S3 object if the user makes any changes to the records.
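If DynamoDB were chosen, the usual way around the 400 KB item limit is not to store the whole response as one item but to split it, e.g. one item per operator_record; a hedged sketch (the key design and all names are assumptions):
```python
import boto3

# Sketch: one DynamoDB item per operator_record instead of one giant JSON
# blob, so no single item approaches the 400 KB limit. Names are assumptions.
# Note: numeric amounts must be decimal.Decimal for boto3, not float.
table = boto3.resource("dynamodb").Table("operators")

def store_record(operator_id: str, account_id: str, record: dict) -> None:
    table.put_item(
        Item={
            "PK": f"OPERATOR#{operator_id}",
            "SK": f"ACCOUNT#{account_id}#RECORD#{record['id']}",
            **record,  # date, text, amount, type, category, ...
        }
    )
```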

Manza
(103 rep)
Nov 8, 2021, 02:24 AM
• Last activity: Nov 8, 2021, 04:07 AM
1
votes
0
answers
1066
views
How to fully migrate data from dynamodb to postgresql
I have a large data set in DynamoDB and want to move it all to PostgreSQL, and I can only have 1 hour of downtime. What's the best tool or option to migrate and map all of my data from DynamoDB to Postgres without losing any?
I tried handling it manually in JavaScript but lost a lot of data.
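For reference, the hand-rolled route usually comes down to a paginated Scan feeding bulk inserts; a hedged sketch (names, column mapping, and connection details are placeholders, and a real cutover would also need parallel scan segments plus a way to capture writes made during the copy):
```python
import boto3
import psycopg2
from psycopg2.extras import execute_values

# Sketch: copy every item from a DynamoDB table into a Postgres table using a
# paginated Scan. Table/column names and the connection string are placeholders.
source = boto3.resource("dynamodb").Table("my_table")

conn = psycopg2.connect("dbname=target user=app")
cur = conn.cursor()

kwargs = {}
while True:
    page = source.scan(**kwargs)
    rows = [(item["id"], item.get("name")) for item in page["Items"]]
    if rows:
        execute_values(cur, "INSERT INTO my_table (id, name) VALUES %s", rows)
        conn.commit()
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```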
Mj Ebrahimzadeh
(111 rep)
Nov 5, 2021, 11:51 AM
0
votes
2
answers
5205
views
How do I search by attribute in DynamoDB?
I inherited a site that uses DynamoDB for its database, which I know nothing about unfortunately, and I'm trying to change a user's email for them. There appears to be a users table in DynamoDB, but I don't know the user's id, only their username and current email, so at this point, I am just trying to find the record before updating it. When I attempt to scan for these values in the web interface, I get no results; it just keeps searching. If I scan in the web interface for an email address which I know exists, I also get no results.
Using these placeholder values,
Username: example
Email: example@example.com
I have tried a few commands using AWS CLI, but I get various errors:
aws dynamodb get-item --table-name users --key '{"Username": {"S": "example"}}'
This yields this error:
Unknown options: example}}', {S:
I have also tried this:
aws dynamodb query --table-name users --key-condition-expression "Username = :v1" --expression-attribute-values "{ \":v1\" : { \"S\" : \"example\" } }"
This yields this error:
An error occurred (ValidationException) when calling the Query operation: Query condition missed key schema element: id
I have also tried printing out all users, but it appears to fail for anything but the smallest tables:
aws dynamodb scan --table-name users
This yields this error:
An error occurred (ProvisionedThroughputExceededException) when calling the Scan operation (reached max retries: 2): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
Not sure what to do at this point. Any advice?
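Since the key schema error suggests the table's key is `id` rather than `Username`, a filtered, paginated Scan is the usual fallback for a one-off lookup like this; a hedged boto3 sketch (attribute names are taken from the question and may not match the real items), though low provisioned throughput can still throttle it, so pausing between pages or temporarily raising capacity may be needed:
```python
import boto3
from boto3.dynamodb.conditions import Attr

# Sketch: one-off lookup of a user by non-key attributes via a filtered Scan.
# Attribute names ("Username", "Email") come from the question and may not
# match the real item layout.
table = boto3.resource("dynamodb").Table("users")

kwargs = {
    "FilterExpression": Attr("Username").eq("example")
    | Attr("Email").eq("example@example.com"),
}
while True:
    page = table.scan(**kwargs)
    for item in page["Items"]:
        print(item)                      # should reveal the item's id
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```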
tklodd
(111 rep)
Feb 3, 2021, 05:33 PM
• Last activity: May 15, 2021, 07:55 AM
0
votes
0
answers
43
views
Does it make sense to separate records in a document DB (e.g. DynamoDB) for better query performance?
I've recently started working with DynamoDB, and most of my experience is in traditional data warehousing (e.g. SQL Server). One common pattern I saw when working with SQL Server was that we would sometimes separate "active" data from "historical" data, so we would have one table with the last month's or year's data (the time interval most of our queries were based on), and another table with the rest of the history, for occasional retrieval. We would do this to improve query performance.
Now my question is: does this design pattern make any sense when working with, e.g., DynamoDB with a well-partitioned key? If I've partitioned my DynamoDB table by customer id + time, and I'm occasionally interested in getting each customer's consumption (whenever they check their dashboard in the app) during the past month or so, does it make sense to have separate tables for the past month and the rest of the history? As I understand partitioning, a query doesn't even need to touch the partitions that contain data outside of my search range (since time was part of the partition key), but this is just my assumption. Query speed is of utmost importance, and the data I'm working with spans multiple years.
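That assumption does match how Query behaves when the customer id is the partition key and a timestamp is the sort key: the key condition bounds what is read, regardless of how much older history the table holds. A hedged sketch (key and table names are assumptions):
```python
from datetime import datetime, timedelta
import boto3
from boto3.dynamodb.conditions import Key

# Sketch: assuming customer_id is the partition key and an ISO timestamp is
# the sort key, this Query only touches last month's items for one customer.
table = boto3.resource("dynamodb").Table("consumption")

since = (datetime.utcnow() - timedelta(days=30)).isoformat()
resp = table.query(
    KeyConditionExpression=Key("customer_id").eq("cust-123") & Key("ts").gte(since)
)
```
Read cost and latency follow what the key condition selects, so arguably a separate "active" table buys operational knobs (capacity mode, TTL, backups) more than query speed.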
Anton
(101 rep)
Apr 13, 2021, 10:41 AM
• Last activity: Apr 13, 2021, 11:17 AM
0
votes
1
answers
20
views
Help needed with several engines use case
We are developing an app, approx. 50k RPM read and 1k RPM write; it asks by key and gets a JSON document back.
The search is always made by key.
I'm inclined to use one MySQL 8 table with an Id field and a JSON field on InnoDB. It seems simple, cheap, and fast when accessing by index. Each index can have n rows (30 max); the total size of the table is less than 100 GB.
Response time is important; I think 2-10 ms is achievable on MySQL.
The other, more expensive options that I have are DynamoDB and Elasticsearch (I can't use another tool).
I can't find a comparison for this use case to help me know whether I'm on the correct path. Do you see any cons of using MySQL, or am I missing something?
Thanks!!
Alejandro
(113 rep)
Dec 14, 2020, 02:42 AM
• Last activity: Dec 14, 2020, 03:49 AM
1
votes
3
answers
899
views
SQL vs NoSQL: Fetching All records in DynamoDB vs SQL Database
I am designing the database for a project which requires frequent querying of all the records. The total number of records will be less than 500K. One of the databases I am considering is AWS DynamoDB, where the Scan operation meant for this is very inefficient. What I fail to understand is whether this is just as inefficient in a SQL-based database, or whether one is better than the other for frequent calls that fetch all records.
Oliver Blue
(111 rep)
Nov 29, 2020, 04:42 PM
• Last activity: Dec 1, 2020, 05:02 PM
4
votes
2
answers
4370
views
Using AWS DynamoDB Geospatial index with Python
Amazon's AWS DynamoDB features Geospatial Indexing, which facilitates geo queries:
> Query Support: Box queries return items that fall within a pair of geo
> points that define a rectangle as projected on a sphere. Radius
> queries return items that fall within a given distance from a geo
> point.
The problem is that boto, the Python library for accessing AWS services, lacks any reference to geo objects. The Java equivalent, on the other hand, has such support.
**Is there a way to use the geo features of DynamoDB on AWS using Python?**
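One workaround when a library binding is missing is to precompute a coarse geo cell and store it as the hash key, which plain boto/boto3 can query like any other attribute; a deliberately simplified, hedged sketch (a real implementation would use geohash or S2 cells and also query the neighbouring cells around the search area):
```python
import math
import boto3
from boto3.dynamodb.conditions import Key

# Simplified sketch: bucket points into ~0.1-degree grid cells and use the
# cell id as the partition key. The table is assumed to be keyed on
# (cell HASH, name RANGE); all names are placeholders.
table = boto3.resource("dynamodb").Table("places")

def cell_id(lat: float, lng: float) -> str:
    return f"{math.floor(lat * 10)}:{math.floor(lng * 10)}"

def put_place(name: str, lat: float, lng: float) -> None:
    table.put_item(Item={"cell": cell_id(lat, lng), "name": name,
                         "lat": str(lat), "lng": str(lng)})

def places_near(lat: float, lng: float):
    # Single-cell lookup only; a box or radius query would union several cells.
    return table.query(KeyConditionExpression=Key("cell").eq(cell_id(lat, lng)))["Items"]
```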
Adam Matan
(12079 rep)
Jun 8, 2014, 07:40 AM
• Last activity: Apr 16, 2020, 08:01 AM
0
votes
1
answers
188
views
DynamoDB: How to model data that's shared and queried by all users?
I am very intrigued by DynamoDB, and it works incredibly well when I model the data for my main use case for my application. That being said, there is one specific use case that I can't wrap my head around.
Let's say I have users in a table, with the user id being the primary key. Most information is specific to the user. I want to be able to communicate with my users so I want the ability to make announcements to them. These announcements are shared across all the users. I can store user specific information about announcements in their own attributes like read and unread announcements.
The problem (if it's not clear already) is that there is only one set of announcements but they will be queried by every user frequently, leading to an anti-pattern of DynamoDB and potential throttling.
My initial thoughts are to make k copies of announcements and label the keys announcement_copy_1, announcement_copy_2 ... announcement_copy_k, and then on the query to check for new announcements, I would randomly assign an integer 1-k to query the announcements. Each announcement copy would be the partition key and I would have sort keys with the date of the specific announcement, and attributes with the text and type of announcement.
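To sketch what I mean by the k copies on the read side (names and k are placeholders):
```python
import random
import boto3
from boto3.dynamodb.conditions import Key

# Sketch of the read-sharding idea above: k identical copies of the
# announcements partition, with a random one picked per request.
K = 10
table = boto3.resource("dynamodb").Table("app")

def latest_announcements(since_iso: str):
    shard = f"announcement_copy_{random.randint(1, K)}"
    resp = table.query(
        KeyConditionExpression=Key("PK").eq(shard) & Key("SK").gt(since_iso),
        ScanIndexForward=False,   # newest first
    )
    return resp["Items"]
```
The write path would fan each new announcement out to all k partition keys (directly or via a DynamoDB Stream), and a cache such as DAX or a short-TTL application cache is the other common way to take pressure off a hot key.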
I'm not confident if this is the best approach to this problem, or if I'm missing something. Also I am looking at going serverless with AWS Lambdas if that affects anything.
Thank you in advance for any suggestions or advice!
Bradass
(1 rep)
Jul 16, 2019, 08:51 AM
• Last activity: Sep 10, 2019, 09:35 PM
1
votes
0
answers
122
views
Dynamo get items from last X days using composite key
I have the following sort key in a DynamoDB table:
YYYY-MM-DD##
I'm trying to solve the following two access patterns:
1. get all items for a user in the last x days
2. get all items for a user and category in the last x days
I can satisfy the 2nd access pattern with the following conditional expression:
sort_key BETWEEN 2018-01-02## and 2019-01-02##
This returns several items between 2018-01-02 and 2019-01-02
However, if I try to satisfy the 1st access pattern, I can't get it to work. I've tried:
sort_key BETWEEN 2018-01-02# and 2019-01-02#
But only one item is returned, the item with date 2018-01-02, but I should be getting multiple items returned.
I searched online for any wildcard operators with conditional expression, but couldn't find any. Any ideas how I can satisfy access pattern 1?
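One pattern that may apply here, assuming the sort key is `<YYYY-MM-DD>#<category>#<id>` (the placeholders in the key format above appear to have been stripped): since key conditions have no wildcard, pad the upper bound with a high sentinel character so every category/id for the end date still sorts inside the range. A hedged sketch:
```python
import boto3
from boto3.dynamodb.conditions import Key

# Hedged sketch, assuming the partition key is the user and the sort key is
# "<YYYY-MM-DD>#<category>#<id>". "\uffff" is a high code point used as a
# sentinel so the upper bound sorts after every real suffix on that date.
table = boto3.resource("dynamodb").Table("items")

def items_for_user(user_id: str, start: str, end: str):
    return table.query(
        KeyConditionExpression=Key("user_id").eq(user_id)
        & Key("sort_key").between(start, end + "#\uffff")
    )["Items"]
```
With a date-first sort key, the category cannot also be range-restricted in the same key condition (access pattern 2), so that one would need a FilterExpression or a second index whose sort key puts the category before the date.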
GWed
(519 rep)
Sep 10, 2019, 06:37 AM
1
votes
0
answers
48
views
How to implement correctly from SQL to NoSQL (DynamoDB)?
I am new to NoSQL and DynamoDB. I am trying to move from AWS RDS to DynamoDB.
I have a user table; a user can have more than one work experience and more than one interest (the master data for interests lives in the interest table). There will be a lot of queries filtering users by their work experiences and interests. Example: find users whose work experience title is 'Programmer', or find users whose interest is 'SQL, NoSQL'.
User Table:
- user_id (PK)
- name
- birthdate
- ...
Work Experience Table:
- user_id (PK)
- work_sequence_id (PK)
- work_title
- work_description
Interest Table:
- interest_id (PK)
- interest_name
User Interest Table:
- user_id (PK)
- interest_id (PK)
So I tried to convert it to DynamoDB:
User Table:
- user_id (PK/Partition Key)
- name
- birth_date
- ...
Work Experience Table:
- user_id (PK)
- work_sequence_id (SK/Sort Key)
- work_title (global secondary index)
- work_description
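With work_title as a GSI hash key, the "find users whose work experience title is 'Programmer'" lookup becomes a Query against that index; a hedged sketch (table and index names are assumptions):
```python
import boto3
from boto3.dynamodb.conditions import Key

# Sketch: querying the work-experience table through a GSI on work_title.
# Table and index names are assumptions.
work_exp = boto3.resource("dynamodb").Table("work_experience")

programmers = work_exp.query(
    IndexName="work_title-index",
    KeyConditionExpression=Key("work_title").eq("Programmer"),
)["Items"]

user_ids = {item["user_id"] for item in programmers}  # then fetch the user items
```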
For the interest table, I want to restructure it. Basically, besides filtering users by interest, I just want to get the most popular interests for word suggestions (filtering by words that contain 'search_word'). For example, if 7 users have an interest in 'SQL' and 10 users have an interest in 'NoSQL', then when a user searches for the word 'SQ', the word 'NoSQL' should appear first and 'SQL' second.
So I create like this:
User Interest Table:
- user_id (PK)
- interest_name (SK)
Interest Table:
- interest_name (PK)
- count (SK)
Then I will use the count to get the popular interest.
Usually I would use joins to get the user data, work experiences, and interests, plus an aggregate function to get the most popular interests. Now I need four query operations to get all the data. Basically, I need to store the data efficiently and effectively in terms of cost.
Do you have any database design suggestion for this scenario in DynamoDB?
Ragam
(11 rep)
Jun 5, 2019, 07:40 PM
-2
votes
1
answers
303
views
Can PostGIS or Oracle Spatial and Graph scale to hundreds of terabytes of data, or should I use a NoSQL option like DynamoDB?
I'm going to have a very large data set (a few hundred terabytes big eventually), but it is very well structured, relatively simple data involving lat, long points (which is why I want some GIS compatibility).
From what I can tell, PostGIS can only handle 32 TB or so, and I'm unsure what Oracle Spatial can scale to. Amazon's DynamoDB can go up to petabyte scale, but I've read complaints that it can become rather complicated and should be avoided without a good reason to use it. Using NoSQL for such structured data seems wrong, but I can't find other alternatives at this size. As far as costs go, yes, it's going to be expensive, but let's assume that's not a huge problem. Retrieval speed for spatial and temporal queries is the main deciding factor.
Dylan
(19 rep)
Feb 12, 2019, 12:29 AM
• Last activity: Feb 12, 2019, 08:11 AM
7
votes
4
answers
34144
views
Import CSV or JSON file into DynamoDB
I have 1000 CSV files. Each CSV file is between 1 and 500 MB and is formatted the same way (i.e. same column order). I have a header file for column headers, which match my DynamoDB table's column names. I need to import those files into a DynamoDB table. What's the best way / tool to do so?
I can concatenate those CSV files into a single giant file (I'd rather avoid to though), or convert them into JSON if needed. I am aware of the existence of [BatchWriteItem](http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html) so I guess a good solution would involve batch writing.
----------
Example:
- The DynamoDB table has two columns: first_name, last_name
- The header file only contains:
first_name,last_name
- One CSV file looks like:
John,Doe
Bob,Smith
Alice,Lee
Foo,Bar
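For reference, a hedged sketch of the BatchWriteItem route via boto3's batch_writer, which batches 25 items at a time and retries unprocessed items (file paths and the table name are placeholders):
```python
import csv
import glob
import boto3

# Sketch: stream every CSV through BatchWriteItem via boto3's batch_writer.
# The header file supplies the column names; paths and table name are placeholders.
table = boto3.resource("dynamodb").Table("people")

with open("header.csv") as f:
    columns = next(csv.reader(f))          # e.g. ["first_name", "last_name"]

with table.batch_writer() as batch:
    for path in glob.glob("data/*.csv"):
        with open(path, newline="") as f:
            for row in csv.reader(f):
                batch.put_item(Item=dict(zip(columns, row)))
```
With 1000 files, the table's write capacity (or on-demand mode) is usually the real bottleneck rather than the scripting.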
Franck Dernoncourt
(2093 rep)
Feb 14, 2015, 10:46 PM
• Last activity: Feb 6, 2019, 10:52 PM
1
votes
0
answers
263
views
Most effective way of logging user query activity
We are logging and permanently storing all the queries performed about private persons in our database. Due to GDPR we need to be able to show all the entities who have requested personal data and the timestamps.
The queries can be single or batch involving hundreds of thousands of individuals.
What is the most cost-effective way of keeping such logs? The primary index should be the unique personal code which allows efficient retrieval of query information. For analytics purposes we sometimes need to index by entity and/or query date, but this doesn't need to be optimized. The number of unique personal codes is about 1.5 million and the number of entities is ~300,000 and growing.
I have tried InfluxDB, but the limitation is that the time makes up part of the key and batch queries will contain the same timestamp. I have tried storing the data as JSON in a relational database, but it takes too long to update both single and batch entries. With DynamoDB the problem was batch-write efficiency: it took 2 hours to batch-write a million items at 500 WCU.
The data could look like this:
{'personal_code':123456789,
'data_requests':{
'entity_1':{
'query_type_1':['2018-09-25 12:00:00.000000',...],
'query_type_2':['2018-10-02 12:00:00.000000',...]
},
'entity_2':{
'query_type_1':['2018-09-25 12:00:00.000000',...],
'query_type_2':['2018-10-02 12:00:00.000000',...]
}
}
}
The goal is to use as few hardware resources as possible while achieving <2 sec latencies for retrieving the data by personal code and appending new timestamps to the arrays.
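For the append path specifically, DynamoDB can do it in a single UpdateItem with list_append, avoiding a read-modify-write round trip; a hedged sketch against the item shape above (the table name is a placeholder, and the nested maps for the entity are assumed to exist already):
```python
import boto3

# Sketch: append one timestamp for (entity, query_type) to an item shaped like
# the JSON above, in a single UpdateItem. Assumes the nested maps for the
# entity already exist (otherwise they must be created first).
table = boto3.resource("dynamodb").Table("query_log")

def log_query(personal_code: int, entity: str, query_type: str, ts: str) -> None:
    table.update_item(
        Key={"personal_code": personal_code},
        UpdateExpression=(
            "SET data_requests.#e.#q = "
            "list_append(if_not_exists(data_requests.#e.#q, :empty), :ts)"
        ),
        ExpressionAttributeNames={"#e": entity, "#q": query_type},
        ExpressionAttributeValues={":empty": [], ":ts": [ts]},
    )
```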
Karl Märka
(11 rep)
Nov 8, 2018, 03:15 PM
Showing page 1 of 20 total questions