Database to save measurements

0 votes
1 answer
153 views
I am creating an infrastructure to save measurements coming from a fleet of around 2,000 cars. Each car contains about 60 sensors (depending on the car), producing a total of about 800 values per second across all sensors. Each sensor reads from 2 to 50 values of different types (boolean, integer, and comma-separated). I would like to save all these values in a database (in the cloud) so that we can read them back in case of error and use them for future reports.

After studying the possible databases, we have to choose between:

* Postgres with auto-partitioning
* TimescaleDB
* InfluxDB

Knowing the scenario, my engineering side leans toward InfluxDB, since the use case better fits a schemaless option. However, my conservative side says to use a database with a 25-year history. In that case, from your experience, is it better to adopt approach 1 or approach 2?

* Approach 1: each row is a reading of one value from one sensor -> `[timestamp, sensor_id, measure_title, measure_value]` (so 800 * 2000 rows every second).
* Approach 2: each row is a reading of one whole sensor -> `[timestamp, sensor_id, measure_value_1, …, measure_value_50]` (so 60 * 2000 rows every second), where potentially 49 columns can be NULL, plus a second table holding descriptive metadata for each measure_value_n title.

Otherwise, do you know of other approaches?

**Edit 1.** Data must be retained indefinitely; rows are never deleted.

* Approach 1 will store around 138 billion rows per day.
* Approach 2 will store around 10 billion rows per day.
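To make the two approaches concrete, here is a minimal sketch of both schemas using SQLite as a stand-in for Postgres (the column names `timestamp`, `sensor_id`, `measure_title`, `measure_value`, and `measure_value_n` come from the question; the sample measure names, the `measure_catalog` table, and the three-column wide table are illustrative assumptions, not part of the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Approach 1: one row per (sensor, measure) value -- narrow, EAV-style.
cur.execute("""
CREATE TABLE readings_narrow (
    ts            INTEGER NOT NULL,   -- epoch millis
    sensor_id     INTEGER NOT NULL,
    measure_title TEXT    NOT NULL,
    measure_value TEXT    NOT NULL    -- boolean/integer/comma-separated, stored as text
)""")

# Approach 2: one row per sensor reading, fixed value columns (many NULLs).
# Only 3 value columns shown; the real schema would go up to measure_value_50.
cur.execute("""
CREATE TABLE readings_wide (
    ts        INTEGER NOT NULL,
    sensor_id INTEGER NOT NULL,
    measure_value_1 TEXT,
    measure_value_2 TEXT,
    measure_value_3 TEXT
)""")

# Companion table mapping each wide column number to a measure title.
cur.execute("""
CREATE TABLE measure_catalog (
    sensor_type INTEGER NOT NULL,
    column_n    INTEGER NOT NULL,
    title       TEXT    NOT NULL,
    PRIMARY KEY (sensor_type, column_n)
)""")

# One sensor tick with three values, written both ways (sample data).
tick = [("rpm", "3200"), ("oil_ok", "1"), ("gps", "45.07,7.68")]
for title, value in tick:
    cur.execute("INSERT INTO readings_narrow VALUES (?, ?, ?, ?)",
                (1668346500000, 42, title, value))
cur.execute("INSERT INTO readings_wide VALUES (?, ?, ?, ?, ?)",
            (1668346500000, 42, tick[0][1], tick[1][1], tick[2][1]))

narrow = cur.execute("SELECT COUNT(*) FROM readings_narrow").fetchone()[0]
wide = cur.execute("SELECT COUNT(*) FROM readings_wide").fetchone()[0]
print(narrow, wide)  # the same tick costs 3 narrow rows vs 1 wide row
```

The same trade-off shows up at fleet scale: the narrow layout multiplies row count by values-per-sensor (800 * 2000 rows/s, ~138 billion rows/day) while the wide layout pays in NULL columns instead (60 * 2000 rows/s, ~10 billion rows/day).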
Asked by Jam. G. (1 rep)
Nov 13, 2022, 01:35 PM
Last activity: Nov 13, 2022, 11:12 PM