
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
174 views
Does reordering table rows reduce the time for subsequent ordering with the ORDER BY clause?
Is there any run-time advantage gained by changing the order of a table's rows to match the expected ordering of the `ORDER BY` that is in a slow select? (assuming unique alphanumeric index and no auto-increments if these matter)
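For context on what usually removes the sort cost: rather than relying on physical row order (which most engines do not guarantee to preserve for heap tables), the common fix is an index whose order matches the ORDER BY. A minimal sketch, using a hypothetical table t with sort column sort_key:

CREATE INDEX t_sort_key_idx ON t (sort_key);

-- With the index in place, the plan can return rows in index order and skip
-- the explicit sort step (no "Using filesort" in MySQL, no Sort node in Postgres).
EXPLAIN SELECT * FROM t ORDER BY sort_key LIMIT 100;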
beppe9000 (101 rep)
Jan 31, 2020, 06:54 PM • Last activity: Jul 7, 2025, 10:02 PM
0 votes
1 answer
481 views
Performance impact when querying without partition key on a partitioned table
Assume I partition a large table (say 6TB) by month. We are serving very high traffic of around 300 RPS. Sample schema:

Create table mydata (
    id int not null,
    date_created not null,
    uuid_val not null,
    Primary key (id, date_created),
    key uuid_idx (uuid_val)
) Partition by range (to_days(date_created))

What will be the performance impact if I query this table without date_created in the WHERE clause? It certainly won't perform better, since partition pruning is impossible, but what is the impact of querying via the uuid_val index compared to a non-partitioned table with the same schema? I benchmarked with 6GB of data, running the query 500,000 times by index rather than by partition key, and the overhead of searching multiple partitions was very small (about 0.1ms) compared to running the same query on the non-partitioned table. Why is the overhead so small? Can I also expect such numbers with 6TB of data when serving high traffic? Can I query the partitioned table without the partition key, since I might not have it in the flow? My MySQL version is 8.
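A quick way to see the difference, as a sketch against the table above (MySQL 8; the literal values are made up), is to compare the partitions column in EXPLAIN for the two access paths:

-- Pruned: only the partitions covering that date range are listed.
EXPLAIN SELECT * FROM mydata
WHERE date_created >= '2023-09-01' AND date_created < '2023-10-01';

-- Not pruned: every partition is listed, and the uuid_idx B-tree in each
-- partition has to be probed (one small extra lookup per partition).
EXPLAIN SELECT * FROM mydata
WHERE uuid_val = '3f2c9a7e-0000-0000-0000-000000000000';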
Harshita Mishra (1 rep)
Oct 7, 2023, 11:34 AM • Last activity: Jul 1, 2025, 01:07 PM
1 vote
1 answer
4045 views
Is mongodb upsert the right option for write intensive use case?
I'm designing a data model for a transactional use case which is write intensive. The write load would be 900 - 1,600 transactions per second (TPS), which accounts for 30 million inserts/updates per day, and the read load would be 600 - 800 TPS. Since we have opted for an embedded data model, we are using upsert queries in most places, and we have unique indexes built on the right fields to make the search faster. Out of the 30 million transactions per day, we will be performing at least 10 million upserts per day.

I would like to know:

1. Is it wise to keep the documents embedded and perform upserts every time (considering the load), rather than keeping the documents in 2 separate collections and performing joins to report the data to client applications?
2. Considering the load, I suspect that too many upserts would create high I/O and crash the system.
Anban (45 rep)
May 27, 2019, 10:02 AM • Last activity: Jun 2, 2025, 07:06 PM
0 votes
1 answer
282 views
Does Greg Rahn's "join order benchmark" reflect real world OLAP join scenarios?
Greg Rahn's "join order benchmark" has dozens of inner-join queries on the IMDB database. Do this benchmark and the database itself reflect common real-world OLAP join scenarios? How does a join optimizer's performance on this benchmark correlate with general join performance in the real world? For example:

SELECT MIN(cn.name) AS movie_company,
       MIN(mi_idx.info) AS rating,
       MIN(t.title) AS complete_euro_dark_movie
FROM complete_cast AS cc, comp_cast_type AS cct1, comp_cast_type AS cct2,
     company_name AS cn, company_type AS ct, info_type AS it1, info_type AS it2,
     keyword AS k, kind_type AS kt, movie_companies AS mc, movie_info AS mi,
     movie_info_idx AS mi_idx, movie_keyword AS mk, title AS t
WHERE cct1.kind = 'crew'
  AND cct2.kind != 'complete+verified'
  AND cn.country_code != '[us]'
  AND it1.info = 'countries'
  AND it2.info = 'rating'
  AND k.keyword IN ('murder', 'murder-in-title', 'blood', 'violence')
  AND kt.kind IN ('movie', 'episode')
  AND mc.note NOT LIKE '%(USA)%'
  AND mc.note LIKE '%(200%)%'
  AND mi.info IN ('Sweden', 'Norway', 'Germany', 'Denmark', 'Swedish', 'Danish', 'Norwegian', 'German', 'USA', 'American')
  AND mi_idx.info < 8.5
  AND t.production_year > 2000
  AND kt.id = t.kind_id
  AND t.id = mi.movie_id
  AND t.id = mk.movie_id
  AND t.id = mi_idx.movie_id
  AND t.id = mc.movie_id
  AND t.id = cc.movie_id
  AND mk.movie_id = mi.movie_id
  AND mk.movie_id = mi_idx.movie_id
  AND mk.movie_id = mc.movie_id
  AND mk.movie_id = cc.movie_id
  AND mi.movie_id = mi_idx.movie_id
  AND mi.movie_id = mc.movie_id
  AND mi.movie_id = cc.movie_id
  AND mc.movie_id = mi_idx.movie_id
  AND mc.movie_id = cc.movie_id
  AND mi_idx.movie_id = cc.movie_id
  AND k.id = mk.keyword_id
  AND it1.id = mi.info_type_id
  AND it2.id = mi_idx.info_type_id
  AND ct.id = mc.company_type_id
  AND cn.id = mc.company_id
  AND cct1.id = cc.subject_id
  AND cct2.id = cc.status_id;
larryliu (1 rep)
Apr 3, 2022, 12:59 PM • Last activity: May 22, 2025, 01:03 PM
3 votes
1 answer
2153 views
Are there any known benchmarks/statistics on UnQLite?
I came across UnQLite the other day in Stack Overflow and it has me intrigued for some software I'm working on. However, very few tests have been published on how well it performs. There are plenty of questions and public data available that help on SQLite and others, but nothing on UnQLite. (I'm looking at serverless databases, hence my mention of SQLite. I understand that SQLite is relational whereas UnQLite is KVP and the two are not forked from the same parent projects.)

I'd like to know:

- What UnQLite seems to achieve in read/write speeds
- What types of reads and writes (sequential, synchronous, large data, batch, random, etc.) it does well at compared to some other databases
- Where bottlenecking seems to occur
- How large the database tends to get (disk size or rows) before the time it takes to find and extract data begins to see a significant increase (i.e. slows down)
- If at all possible, what disk format(s) (ext3/4, NTFS, exFAT, HFS+) cause problems for the database
- Any recommended ways of optimizing performance with it

Thank you.
Kamikaze Rusher (131 rep)
Apr 16, 2015, 02:12 PM • Last activity: Apr 1, 2025, 04:14 AM
0 votes
0 answers
90 views
Seeking Fastest Hash-Based NoSQL Database: GDBM vs Kyoto Cabinet & Other Alternatives
# Background

I'm currently researching fast, hash-based key-value NoSQL databases for [my use case](https://stackoverflow.com/questions/78986903), which requires high performance. I was surprised by the limited availability of benchmarks, particularly for GDBM. Despite being a long-established library, I found little comparative data on its performance, especially against other key-value databases like Kyoto Cabinet.

# Questions

1. **Performance of GDBM vs Kyoto Cabinet:** Has anyone come across reliable benchmarks that compare GDBM's speed to Kyoto Cabinet's? Specifically, I'm interested in ingestion and lookup speeds for large datasets.
2. **Faster Alternatives:** Are there any other key-value, hash-based NoSQL databases that you've found to outperform these two in terms of speed? If so, could you point me to specific benchmarks or sources?

# Requirements

The database needs to meet the following criteria:

- **Corruption resistance:** Safe from data corruption in case of power failure or unexpected shutdown.
- **Concurrency:** Support for multiple users (multiple readers with one writer at the same time).
- **Capacity:** Able to handle a few million key-value pairs efficiently.
- **Bindings:** Must have bindings for the C language.

# Research

I've gathered some relevant links and partial benchmarks, but there's a gap in comprehensive or head-to-head comparisons between GDBM and other hash-based systems:

- ...benchmarks of some embedded databases with Python
- Are there any known benchmarks/statistics on UnQLite?
- Kyoto Cabinet ingestion rate measurement
- HaloDB against RocksDB and KyotoCabinet
- LevelDB vs Kyoto Cabinet my findings

I'm looking for data beyond these sources, particularly regarding hash engines, and would appreciate any insights or references to benchmarks that are directly related to GDBM, Kyoto Cabinet, or other alternatives.
Giorgos Xou (101 rep)
Oct 6, 2024, 08:38 PM • Last activity: Oct 6, 2024, 10:27 PM
-2 votes
1 answer
41 views
Guidance on how to assess the impact of multiple parallel SELECT queries
Setup: SQL Server 2019 on Azure VM.

We have an application that, on a specific date, will execute multiple queries to load data into their system. These queries will be run via Databricks using an ODBC connection, and they will primarily consist of SELECT statements with various joins and filters. I'm looking for guidance on how to assess whether the workload generated by these queries could be problematic. Specifically, I want to understand what metrics I should monitor during this workload to evaluate its impact on server performance.

Key Questions:

1. How can I determine if the workload of their queries is causing issues? What signs should I look for?
2. How many queries can be run in parallel without negatively affecting server performance? Is there a way to calculate or estimate this?
3. What are the most important statistics to measure during the execution of these queries? How can I compare server performance during the workload against periods without workload?
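As a concrete starting point for question 3, a sketch (assuming VIEW SERVER STATE permission) that can be polled while the Databricks queries run: sys.dm_exec_requests shows what is executing and waiting right now, and a before/after snapshot of sys.dm_os_wait_stats shows which waits the workload added.

-- Currently executing requests and what they are waiting on.
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
       r.cpu_time, r.logical_reads, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;   -- skip system sessions

-- Cumulative wait statistics; capture before and after the load window and diff.
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;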
adam.g (465 rep)
Sep 16, 2024, 12:05 PM • Last activity: Sep 17, 2024, 04:40 AM
1 vote
1 answer
26 views
GridDB v5.6 Compression Type (ZSTD). Querying much faster?
GridDB 5.6 has a new compression method that I wanted to test. I made a simple test where I ingested X amount of rows and tested the new compression method against the old compression available prior to 5.6 (ZLIB) and against no compression. The results were what you would expect: no compression had the highest data footprint for 100,000,000 rows, ZLIB next, and finally the new compression method ZSTD had the smallest footprint. I also tested the query speed of these compression methods and, to my surprise, the one with the smallest footprint (ZSTD) also had the quickest lookup times. I am curious as to how this could be; from my understanding, there must be some tradeoff when doing a more advanced method of compressing similar data. I'd at the very least expect the newest compression method to be on par with ZLIB but with a smaller footprint.

And now for the results. As explained above, I inserted 100m rows of 'random' data and timed ingestion, the size of the data directory, and lookup times:

|                     | NO_COMPRESSION   | COMPRESSION_ZLIB | COMPRESSION_ZSTD |
|---------------------|------------------|------------------|------------------|
| Search (ms)         | 32644            | 20666            | 11475            |
| Aggregation (ms)    | 30261            | 13302            | 8402             |
| Storage (gridstore) | 11968312 (17GB)  | 7162824 (6.9GB)  | 6519520 (6.3GB)  |
| Storage (/data)     | 17568708 (12GB)  | 1141152 (1.1GB)  | 1140384 (1.1GB)  |
| Insert (m:ss.mmm)   | 14:42.452        | 15:02.748        | 15:05.404        |

If anybody has any insight into this perplexing compression issue, please share any expertise.
L. Connell (69 rep)
Aug 28, 2024, 02:18 PM • Last activity: Aug 28, 2024, 03:12 PM
5 votes
3 answers
13788 views
Speed difference between Drop table and Truncate table in Postgres
I currently have a program that inserts into a database by creating temp tables, filling the tables, then merging that data into the main tables, then dropping the tables and doing it all again. I'm wondering what the speed difference is if, instead of dropping and creating, I just truncate.
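For reference, the two patterns being compared look roughly like this (a sketch with a hypothetical staging/main table pair; with the TRUNCATE variant the temp table is created once and reused for the whole session or loop):

-- Pattern 1: drop and recreate every cycle
CREATE TEMP TABLE staging (LIKE main_table INCLUDING DEFAULTS);
-- ... fill staging, merge into main_table ...
DROP TABLE staging;

-- Pattern 2: create once, then truncate between cycles
CREATE TEMP TABLE staging (LIKE main_table INCLUDING DEFAULTS);
-- ... fill staging, merge into main_table ...
TRUNCATE staging;   -- table definition survives for the next cycle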
Francis Lewis (150 rep)
Mar 25, 2019, 09:51 PM • Last activity: Nov 13, 2023, 04:12 AM
3 votes
2 answers
2706 views
How can I generate TPC-DS queries for PostgreSQL?
I'm using TPC-DS to run some benchmarks on a few versions of postgres that I have. I noticed that there is no template for generating the TPC-DS queries with the target database as postgres. The query_templates directory contains template files for ansi, sqlserver, db2, oracle, and netezza, but none for postgres. I've tried running the queries generated by each of the previously listed targets on postgres, but they all seem to have some kind of syntax that postgres doesn't support, because most of them fail to run. Is it possible to generate these TPC-DS queries for postgres? Or is there a postgres.tpl file that can be downloaded somewhere?
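For what it's worth, a commonly reported workaround is to generate with the netezza (or ansi) dialect and then hand-patch the few constructs PostgreSQL rejects, date interval arithmetic being the usual offender. A sketch of that kind of rewrite, with a made-up date literal and assuming the standard TPC-DS date_dim table:

-- As generated (fails in PostgreSQL):
SELECT d_date_sk FROM date_dim
WHERE d_date BETWEEN cast('2000-02-01' as date)
                 AND (cast('2000-02-01' as date) + 30 days);

-- Hand-patched version that PostgreSQL accepts:
SELECT d_date_sk FROM date_dim
WHERE d_date BETWEEN cast('2000-02-01' as date)
                 AND (cast('2000-02-01' as date) + interval '30 days');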
bddicken (131 rep)
Aug 8, 2013, 07:52 PM • Last activity: Oct 16, 2023, 11:16 PM
0 votes
0 answers
213 views
How to calculate hitrate of shared buffer in postgresql, using TPC-C workload?
I use TPC-C (benchmarksql 4.0) to test Postgres. I set warehousenum=1000, so the whole database is about 100GB, and I set shared_buffers=10GB in postgresql.conf. I also set terminalWarehouseFixed=false so the warehouse changes, and I run the benchmark for 20 minutes. But when I calculate the hit rate with the SQL below, I get a very high hit rate, which is strange. I also printed the workload of benchmarksql, and it does choose warehouse, district, and customer at random. And I modified Postgres to output the access counts of the blocks of the table files; although the access pattern is skewed, that can't explain such a high hit rate in the shared buffer.
with 
all_tables as
(
SELECT  *
FROM    (
    SELECT  'all'::text as table_name, 
        sum( (coalesce(heap_blks_read,0) + coalesce(idx_blks_read,0) + coalesce(toast_blks_read,0) + coalesce(tidx_blks_read,0)) ) as from_disk, 
        sum( (coalesce(heap_blks_hit,0)  + coalesce(idx_blks_hit,0)  + coalesce(toast_blks_hit,0)  + coalesce(tidx_blks_hit,0))  ) as from_cache    
    FROM    pg_statio_USER_tables  --> change to pg_statio_USER_tables if you want to check only user tables (excluding postgres's own tables)
    ) a
WHERE   (from_disk + from_cache) > 0 -- discard tables without hits
),
tables as 
(
SELECT  *
FROM    (
    SELECT  relname as table_name, 
        ( (coalesce(heap_blks_read,0) + coalesce(idx_blks_read,0) + coalesce(toast_blks_read,0) + coalesce(tidx_blks_read,0)) ) as from_disk, 
        ( (coalesce(heap_blks_hit,0)  + coalesce(idx_blks_hit,0)  + coalesce(toast_blks_hit,0)  + coalesce(tidx_blks_hit,0))  ) as from_cache    
    FROM    pg_statio_USER_tables --> change to pg_statio_USER_tables if you want to check only user tables (excluding postgres's own tables)
    ) a
WHERE   (from_disk + from_cache) > 0 -- discard tables without hits
)
SELECT  table_name as "table name",
    from_disk as "disk hits",
    round((from_disk::numeric / (from_disk + from_cache)::numeric)*100.0,2) as "% disk hits",
    round((from_cache::numeric / (from_disk + from_cache)::numeric)*100.0,2) as "% cache hits",
    (from_disk + from_cache) as "total hits"
FROM    (SELECT * FROM all_tables UNION ALL SELECT * FROM tables) a
ORDER   BY (case when table_name = 'all' then 0 else 1 end), from_disk desc;
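A simpler cross-check, assuming the statistics are reset right before the run (SELECT pg_stat_reset();) so the counters cover only the benchmark window, is the per-database counters in pg_stat_database:

SELECT datname,
       blks_hit,
       blks_read,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();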
Hongtao Wang (1 rep)
Jul 4, 2023, 07:30 AM
0 votes
1 answer
472 views
pgbench: generating a random padded string
I have the IMDb database in PostgreSQL 16, and I am trying to generate random strings for a performance benchmark. The relevant table is title_basics with the primary key column tconst, VARCHAR(9). tconst has values in the form of 'tt0000000', where the zeroes represent numbers, e.g., 'tt0000001' and 'tt9999999'. I want to write an .sql file which can be run with pgbench and which generates random, padded numbers appended after the string 'tt', e.g.:

SELECT * FROM title_basics WHERE tconst = 'tt0000001';

With simple SQL this would be trivial, but since pgbench supports only a small number of scalar functions, it seems difficult. I have also explored the use of SQL in the pgbench script, but constructs such as this do not seem to be supported:
\set random_number (SELECT random());
or:
\set random_number :randint(0, 9999999)
\set padding_length 7 - length(:'random_number')
\set primary_key 'tt' || repeat('0', :'padding_length') || :'random_number'
I also considered generating the numbers with SQL, but this effectively prevents PostgreSQL from using the index on tconst. I do not want to create an additional index to satisfy the query, as that would be problematic for other reasons.
SELECT * 
FROM title_basics 
WHERE tconst = 'tt' || LPAD((FLOOR(RANDOM()*(9916880 - 1 + 1)) + 1)::text, 7, '0');
Edit: I found a somewhat functional solution of generating the random value in a CTE. This does not prevent PostgreSQL from using the index on tconst. However, I would be happy to see a more pgbench-oriented approach.
WITH a AS (
SELECT 'tt' || LPAD((FLOOR(RANDOM()*(9916880 - 1 + 1)) + 1)::text, 7, '0') AS tconst_random
)
SELECT * 
FROM title_basics, a 
WHERE tconst = a.tconst_random;
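A more pgbench-oriented sketch (assuming pgbench 9.6 or later, where random(lb, ub) is available in \set): draw the number in a meta-command and do the padding in SQL. Since lpad() and || are immutable, the expression collapses to a constant at plan time and the index on tconst remains usable.

\set n random(1, 9916880)
SELECT *
FROM title_basics
WHERE tconst = 'tt' || lpad(:n::text, 7, '0');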
Fuiba (5 rep)
May 16, 2023, 09:13 AM • Last activity: May 17, 2023, 07:20 AM
0 votes
1 answer
566 views
Can this slow sysbench mysql result be "normal" for consumer hardware?
I have 4 different MySQL 8.0.30 servers on Rocky Linux 9; on one of them I replaced the NVMe SSD hoping it would perform better. I made an image with "Macrium Reflect" software and copied the exact same partitions

- **from** a Kingston DC1000 "*enterprise*" NVMe (3000 MB read / **550 MB write**)
- **to** a Samsung 980 PRO NVMe (6900 MB read / **5000 MB write**)

Now I have 3 servers with that previous Kingston enterprise SSD and one with the new Samsung 980 PRO. The problem is that the Samsung, which is much faster in specs, performs many times slower in MySQL queries. I am able to compare because all tests on the other 3 servers are close to one another, but the server with the Samsung SSD is always much slower. **For example**, in 10 seconds all 3 servers insert ~80000 records while the problematic server only inserts 2000-5000 records. If I set these 2 variables in my.cnf on all 4 servers, then they perform the same (as I understand it, these settings take the write speed out of the equation):
innodb_flush_log_at_trx_commit = 0
innodb_flush_log_at_timeout = 3
- Can the slow benchmarks below just be the result of the Samsung "PRO" SSD not being enterprise grade, with its write cache filling up and performance dropping that quickly? (**0 QPS** 12 seconds after the benchmark starts, *see the second benchmark*)
- Can slow writes like these happen because I cloned two different drives, with the write problem being related to storage cluster / block size? I don't know much about that, not even whether it is a partition thing or a hardware thing; partitions should be the same on the cloned drive.

I ran tests on a Samsung 980 PRO and a Samsung 970 PRO; the 970 is just ~30% faster, but still extremely slow compared to the Kingston "enterprise" SSD. The 970 PRO has better endurance from what I know. Are all these performance drops somehow related to endurance? I also found some topics on the Samsung site and on the MySQL bug report forums where users complain about similar Samsung SSDs having slow writes.

**Read** speed is better on the Samsung SSD (like in the specs); only **write** is problematic. The sysbench output is attached below; notice how much slower the slow one is.

**The "normal", faster server(s):**
sysbench oltp_write_only --threads=16 --events=0 --time=60 --mysql-host=localhost --mysql-user=sbtest --mysql-password='****' --mysql-port=3306 --tables=10 --table-size=1000000 --range_selects=off --db-ps-mode=disable --report-interval=1 run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 16
Report intermediate results every 1 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 1s ] thds: 16 tps: 18905.49 qps: 113467.89 (r/w/o: 0.00/75640.93/37826.96) lat (ms,95%): 1.06 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 16 tps: 1850.30 qps: 11089.78 (r/w/o: 0.00/7389.19/3700.60) lat (ms,95%): 2.76 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 16 tps: 7985.75 qps: 47912.49 (r/w/o: 0.00/31940.99/15971.50) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 16 tps: 7614.21 qps: 45734.27 (r/w/o: 0.00/30506.85/15227.42) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 5s ] thds: 16 tps: 5932.28 qps: 35586.69 (r/w/o: 0.00/23721.13/11865.56) lat (ms,95%): 2.52 err/s: 0.00 reconn/s: 0.00
[ 6s ] thds: 16 tps: 5713.84 qps: 34284.01 (r/w/o: 0.00/22856.34/11427.67) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 7s ] thds: 16 tps: 6260.42 qps: 37520.50 (r/w/o: 0.00/24999.67/12520.83) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 8s ] thds: 16 tps: 6811.44 qps: 40876.61 (r/w/o: 0.00/27260.74/13615.87) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 9s ] thds: 16 tps: 6927.07 qps: 41554.42 (r/w/o: 0.00/27693.28/13861.14) lat (ms,95%): 2.30 err/s: 0.00 reconn/s: 0.00
[ 10s ] thds: 16 tps: 8123.90 qps: 48767.38 (r/w/o: 0.00/32522.59/16244.79) lat (ms,95%): 2.30 err/s: 0.00 reconn/s: 0.00
[ 11s ] thds: 16 tps: 5210.95 qps: 31243.67 (r/w/o: 0.00/20818.78/10424.89) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 12s ] thds: 16 tps: 7038.81 qps: 42255.86 (r/w/o: 0.00/28181.24/14074.62) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 13s ] thds: 16 tps: 5205.27 qps: 31241.63 (r/w/o: 0.00/20828.09/10413.54) lat (ms,95%): 2.26 err/s: 0.00 reconn/s: 0.00
[ 14s ] thds: 16 tps: 6899.46 qps: 41369.77 (r/w/o: 0.00/27570.85/13798.93) lat (ms,95%): 2.26 err/s: 0.00 reconn/s: 0.00
[ 15s ] thds: 16 tps: 6247.58 qps: 37476.48 (r/w/o: 0.00/24981.32/12495.16) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 16s ] thds: 16 tps: 5494.83 qps: 33008.00 (r/w/o: 0.00/22019.33/10988.67) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 17s ] thds: 16 tps: 5644.39 qps: 33835.36 (r/w/o: 0.00/22545.57/11289.79) lat (ms,95%): 2.14 err/s: 0.00 reconn/s: 0.00
[ 18s ] thds: 16 tps: 6123.58 qps: 36762.48 (r/w/o: 0.00/24516.32/12246.16) lat (ms,95%): 2.18 err/s: 0.00 reconn/s: 0.00
[ 19s ] thds: 16 tps: 5539.89 qps: 33263.36 (r/w/o: 0.00/22182.57/11080.79) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 16 tps: 8187.36 qps: 49074.18 (r/w/o: 0.00/32699.45/16374.73) lat (ms,95%): 2.30 err/s: 0.00 reconn/s: 0.00
[ 21s ] thds: 16 tps: 6125.19 qps: 36764.12 (r/w/o: 0.00/24513.75/12250.37) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 22s ] thds: 16 tps: 4325.04 qps: 25977.23 (r/w/o: 0.00/17328.15/8649.08) lat (ms,95%): 2.66 err/s: 0.00 reconn/s: 0.00
[ 23s ] thds: 16 tps: 5452.18 qps: 32672.05 (r/w/o: 0.00/21766.70/10905.35) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 24s ] thds: 16 tps: 4860.67 qps: 29221.04 (r/w/o: 0.00/19499.69/9721.35) lat (ms,95%): 3.43 err/s: 0.00 reconn/s: 0.00
[ 25s ] thds: 16 tps: 2931.71 qps: 17542.27 (r/w/o: 0.00/11678.84/5863.42) lat (ms,95%): 3.68 err/s: 0.00 reconn/s: 0.00
[ 26s ] thds: 16 tps: 3928.16 qps: 23562.93 (r/w/o: 0.00/15706.62/7856.31) lat (ms,95%): 4.49 err/s: 0.00 reconn/s: 0.00
[ 27s ] thds: 16 tps: 4209.08 qps: 25245.47 (r/w/o: 0.00/16827.32/8418.16) lat (ms,95%): 3.82 err/s: 0.00 reconn/s: 0.00
[ 28s ] thds: 16 tps: 6064.60 qps: 36389.63 (r/w/o: 0.00/24260.42/12129.21) lat (ms,95%): 2.66 err/s: 0.00 reconn/s: 0.00
[ 29s ] thds: 16 tps: 7259.95 qps: 43579.71 (r/w/o: 0.00/29059.81/14519.90) lat (ms,95%): 3.49 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 16 tps: 4897.71 qps: 29385.28 (r/w/o: 0.00/19589.86/9795.43) lat (ms,95%): 3.49 err/s: 0.00 reconn/s: 0.00
[ 31s ] thds: 16 tps: 6064.17 qps: 36410.03 (r/w/o: 0.00/24284.69/12125.34) lat (ms,95%): 3.89 err/s: 0.00 reconn/s: 0.00
[ 32s ] thds: 16 tps: 6080.33 qps: 36443.01 (r/w/o: 0.00/24279.34/12163.67) lat (ms,95%): 3.75 err/s: 0.00 reconn/s: 0.00
[ 33s ] thds: 16 tps: 5415.57 qps: 32549.39 (r/w/o: 0.00/21718.26/10831.13) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 34s ] thds: 16 tps: 5098.88 qps: 30577.27 (r/w/o: 0.00/20379.52/10197.76) lat (ms,95%): 2.26 err/s: 0.00 reconn/s: 0.00
[ 35s ] thds: 16 tps: 7266.62 qps: 43558.74 (r/w/o: 0.00/29025.49/14533.25) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 36s ] thds: 16 tps: 7066.32 qps: 42430.91 (r/w/o: 0.00/28298.27/14132.64) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 37s ] thds: 16 tps: 5372.87 qps: 32212.22 (r/w/o: 0.00/21466.48/10745.74) lat (ms,95%): 2.30 err/s: 0.00 reconn/s: 0.00
[ 38s ] thds: 16 tps: 7014.31 qps: 42073.85 (r/w/o: 0.00/28045.23/14028.62) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 39s ] thds: 16 tps: 6805.10 qps: 40891.63 (r/w/o: 0.00/27281.42/13610.21) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 16 tps: 5137.25 qps: 30821.48 (r/w/o: 0.00/20546.99/10274.49) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 41s ] thds: 16 tps: 5341.71 qps: 31993.26 (r/w/o: 0.00/21309.84/10683.42) lat (ms,95%): 2.26 err/s: 0.00 reconn/s: 0.00
[ 42s ] thds: 16 tps: 7085.02 qps: 42505.15 (r/w/o: 0.00/28335.10/14170.05) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 43s ] thds: 16 tps: 5437.44 qps: 32688.66 (r/w/o: 0.00/21813.77/10874.89) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 44s ] thds: 16 tps: 6667.54 qps: 39945.23 (r/w/o: 0.00/26610.15/13335.08) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 45s ] thds: 16 tps: 6886.09 qps: 41320.51 (r/w/o: 0.00/27548.34/13772.17) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 46s ] thds: 16 tps: 3925.06 qps: 23606.38 (r/w/o: 0.00/15756.26/7850.13) lat (ms,95%): 2.48 err/s: 0.00 reconn/s: 0.00
[ 47s ] thds: 16 tps: 6490.99 qps: 38890.93 (r/w/o: 0.00/25908.96/12981.98) lat (ms,95%): 2.66 err/s: 0.00 reconn/s: 0.00
[ 48s ] thds: 16 tps: 4353.70 qps: 26123.18 (r/w/o: 0.00/17415.79/8707.39) lat (ms,95%): 3.96 err/s: 0.00 reconn/s: 0.00
[ 49s ] thds: 16 tps: 2337.32 qps: 14077.90 (r/w/o: 0.00/9403.27/4674.63) lat (ms,95%): 9.06 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 16 tps: 4153.69 qps: 24910.13 (r/w/o: 0.00/16603.75/8306.38) lat (ms,95%): 3.43 err/s: 0.00 reconn/s: 0.00
[ 51s ] thds: 16 tps: 5466.85 qps: 32762.12 (r/w/o: 0.00/21827.42/10934.71) lat (ms,95%): 2.39 err/s: 0.00 reconn/s: 0.00
[ 52s ] thds: 16 tps: 5962.55 qps: 35765.27 (r/w/o: 0.00/23840.18/11925.09) lat (ms,95%): 2.18 err/s: 0.00 reconn/s: 0.00
[ 53s ] thds: 16 tps: 5421.69 qps: 32539.12 (r/w/o: 0.00/21695.74/10843.37) lat (ms,95%): 2.14 err/s: 0.00 reconn/s: 0.00
[ 54s ] thds: 16 tps: 4658.89 qps: 27954.31 (r/w/o: 0.00/18636.54/9317.77) lat (ms,95%): 2.07 err/s: 0.00 reconn/s: 0.00
[ 55s ] thds: 16 tps: 8168.23 qps: 49014.40 (r/w/o: 0.00/32677.93/16336.46) lat (ms,95%): 2.30 err/s: 0.00 reconn/s: 0.00
[ 56s ] thds: 16 tps: 4832.03 qps: 28976.18 (r/w/o: 0.00/19312.12/9664.06) lat (ms,95%): 2.43 err/s: 0.00 reconn/s: 0.00
[ 57s ] thds: 16 tps: 7029.96 qps: 42184.73 (r/w/o: 0.00/28124.82/14059.91) lat (ms,95%): 2.26 err/s: 0.00 reconn/s: 0.00
[ 58s ] thds: 16 tps: 6834.35 qps: 41061.10 (r/w/o: 0.00/27392.40/13668.70) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
[ 59s ] thds: 16 tps: 5067.55 qps: 30366.31 (r/w/o: 0.00/20233.21/10133.10) lat (ms,95%): 2.30 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 16 tps: 5989.09 qps: 35921.51 (r/w/o: 0.00/23941.34/11980.17) lat (ms,95%): 2.35 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            0
        write:                           1444912
        other:                           722456
        total:                           2167368
    transactions:                        361228 (6012.31 per sec.)
    queries:                             2167368 (36073.85 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0811s
    total number of events:              361228

Latency (ms):
         min:                                    0.31
         avg:                                    2.66
         max:                                  830.42
         95th percentile:                        2.61
         sum:                               960943.62

Threads fairness:
    events (avg/stddev):           22576.7500/131.34
    execution time (avg/stddev):   60.0590/0.00
**The slower server with [faster specs] Samsung SSD**
sysbench oltp_write_only --threads=16 --events=0 --time=60 --mysql-host=localhost --mysql-user=sbtest --mysql-password='****' --mysql-port=3306 --tables=10 --table-size=1000000 --range_selects=off --db-ps-mode=disable --report-interval=1 run
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 16
Report intermediate results every 1 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 1s ] thds: 16 tps: 1001.31 qps: 6086.80 (r/w/o: 0.00/4068.19/2018.61) lat (ms,95%): 30.81 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 16 tps: 476.98 qps: 2811.91 (r/w/o: 0.00/1857.94/953.97) lat (ms,95%): 37.56 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 16 tps: 407.03 qps: 2493.17 (r/w/o: 0.00/1679.11/814.06) lat (ms,95%): 55.82 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 16 tps: 48.00 qps: 224.00 (r/w/o: 0.00/128.00/96.00) lat (ms,95%): 20.74 err/s: 0.00 reconn/s: 0.00
[ 5s ] thds: 16 tps: 150.00 qps: 963.99 (r/w/o: 0.00/663.99/300.00) lat (ms,95%): 1032.01 err/s: 0.00 reconn/s: 0.00
[ 6s ] thds: 16 tps: 486.00 qps: 2852.02 (r/w/o: 0.00/1880.01/972.01) lat (ms,95%): 26.68 err/s: 0.00 reconn/s: 0.00
[ 7s ] thds: 16 tps: 423.00 qps: 2538.00 (r/w/o: 0.00/1692.00/846.00) lat (ms,95%): 62.19 err/s: 0.00 reconn/s: 0.00
[ 8s ] thds: 16 tps: 62.00 qps: 436.00 (r/w/o: 0.00/312.00/124.00) lat (ms,95%): 943.16 err/s: 0.00 reconn/s: 0.00
[ 9s ] thds: 16 tps: 376.97 qps: 2209.80 (r/w/o: 0.00/1455.87/753.93) lat (ms,95%): 59.99 err/s: 0.00 reconn/s: 0.00
[ 10s ] thds: 16 tps: 355.03 qps: 2138.20 (r/w/o: 0.00/1428.14/710.07) lat (ms,95%): 337.94 err/s: 0.00 reconn/s: 0.00
[ 11s ] thds: 16 tps: 378.00 qps: 2247.99 (r/w/o: 0.00/1491.99/756.00) lat (ms,95%): 41.10 err/s: 0.00 reconn/s: 0.00
[ 12s ] thds: 16 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 13s ] thds: 16 tps: 367.03 qps: 2257.17 (r/w/o: 0.00/1523.11/734.05) lat (ms,95%): 530.08 err/s: 0.00 reconn/s: 0.00
[ 14s ] thds: 16 tps: 249.99 qps: 1444.94 (r/w/o: 0.00/944.96/499.98) lat (ms,95%): 44.98 err/s: 0.00 reconn/s: 0.00
[ 15s ] thds: 16 tps: 338.00 qps: 2028.03 (r/w/o: 0.00/1352.02/676.01) lat (ms,95%): 68.05 err/s: 0.00 reconn/s: 0.00
[ 16s ] thds: 16 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 17s ] thds: 16 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 18s ] thds: 16 tps: 131.01 qps: 845.08 (r/w/o: 0.00/583.05/262.02) lat (ms,95%): 3151.62 err/s: 0.00 reconn/s: 0.00
[ 19s ] thds: 16 tps: 309.00 qps: 1858.99 (r/w/o: 0.00/1240.99/618.00) lat (ms,95%): 530.08 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 16 tps: 117.00 qps: 638.00 (r/w/o: 0.00/404.00/234.00) lat (ms,95%): 21.50 err/s: 0.00 reconn/s: 0.00
[ 21s ] thds: 16 tps: 317.00 qps: 1902.01 (r/w/o: 0.00/1268.01/634.00) lat (ms,95%): 282.25 err/s: 0.00 reconn/s: 0.00
[ 22s ] thds: 16 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 23s ] thds: 16 tps: 191.00 qps: 1171.00 (r/w/o: 0.00/789.00/382.00) lat (ms,95%): 1739.68 err/s: 0.00 reconn/s: 0.00
[ 24s ] thds: 16 tps: 225.99 qps: 1332.93 (r/w/o: 0.00/880.95/451.98) lat (ms,95%): 287.38 err/s: 0.00 reconn/s: 0.00
[ 25s ] thds: 16 tps: 101.01 qps: 604.03 (r/w/o: 0.00/402.02/202.01) lat (ms,95%): 1050.76 err/s: 0.00 reconn/s: 0.00
[ 26s ] thds: 16 tps: 375.96 qps: 2267.77 (r/w/o: 0.00/1515.84/751.92) lat (ms,95%): 43.39 err/s: 0.00 reconn/s: 0.00
[ 27s ] thds: 16 tps: 173.01 qps: 1059.09 (r/w/o: 0.00/713.06/346.03) lat (ms,95%): 1032.01 err/s: 0.00 reconn/s: 0.00
[ 28s ] thds: 16 tps: 218.01 qps: 1275.03 (r/w/o: 0.00/839.02/436.01) lat (ms,95%): 325.98 err/s: 0.00 reconn/s: 0.00
[ 29s ] thds: 16 tps: 79.00 qps: 492.00 (r/w/o: 0.00/334.00/158.00) lat (ms,95%): 861.95 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 16 tps: 416.00 qps: 2477.99 (r/w/o: 0.00/1646.00/832.00) lat (ms,95%): 73.13 err/s: 0.00 reconn/s: 0.00
[ 31s ] thds: 16 tps: 107.00 qps: 642.00 (r/w/o: 0.00/428.00/214.00) lat (ms,95%): 1032.01 err/s: 0.00 reconn/s: 0.00
[ 32s ] thds: 16 tps: 360.95 qps: 2165.71 (r/w/o: 0.00/1443.81/721.90) lat (ms,95%): 102.97 err/s: 0.00 reconn/s: 0.00
[ 33s ] thds: 16 tps: 330.04 qps: 2044.27 (r/w/o: 0.00/1384.18/660.09) lat (ms,95%): 48.34 err/s: 0.00 reconn/s: 0.00
[ 34s ] thds: 16 tps: 16.00 qps: 32.00 (r/w/o: 0.00/0.00/32.00) lat (ms,95%): 24.38 err/s: 0.00 reconn/s: 0.00
[ 35s ] thds: 16 tps: 333.00 qps: 2062.00 (r/w/o: 0.00/1396.00/666.00) lat (ms,95%): 31.37 err/s: 0.00 reconn/s: 0.00
[ 36s ] thds: 16 tps: 207.93 qps: 1183.62 (r/w/o: 0.00/767.75/415.87) lat (ms,95%): 42.61 err/s: 0.00 reconn/s: 0.00
[ 37s ] thds: 16 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 38s ] thds: 16 tps: 633.09 qps: 3802.56 (r/w/o: 0.00/2540.37/1262.18) lat (ms,95%): 52.89 err/s: 0.00 reconn/s: 0.00
[ 39s ] thds: 16 tps: 2.00 qps: 8.00 (r/w/o: 0.00/0.00/8.00) lat (ms,95%): 58.92 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 16 tps: 479.94 qps: 2879.63 (r/w/o: 0.00/1919.75/959.88) lat (ms,95%): 383.33 err/s: 0.00 reconn/s: 0.00
[ 41s ] thds: 16 tps: 54.01 qps: 388.06 (r/w/o: 0.00/280.04/108.02) lat (ms,95%): 1050.76 err/s: 0.00 reconn/s: 0.00
[ 42s ] thds: 16 tps: 236.00 qps: 1416.00 (r/w/o: 0.00/944.00/472.00) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 43s ] thds: 16 tps: 234.98 qps: 1345.87 (r/w/o: 0.00/875.92/469.95) lat (ms,95%): 41.85 err/s: 0.00 reconn/s: 0.00
[ 44s ] thds: 16 tps: 87.01 qps: 524.05 (r/w/o: 0.00/350.03/174.02) lat (ms,95%): 1453.01 err/s: 0.00 reconn/s: 0.00
[ 45s ] thds: 16 tps: 318.97 qps: 1963.83 (r/w/o: 0.00/1325.89/637.95) lat (ms,95%): 38.25 err/s: 0.00 reconn/s: 0.00
[ 46s ] thds: 16 tps: 133.00 qps: 748.01 (r/w/o: 0.00/482.01/266.00) lat (ms,95%): 29.19 err/s: 0.00 reconn/s: 0.00
[ 47s ] thds: 16 tps: 105.01 qps: 692.05 (r/w/o: 0.00/482.03/210.01) lat (ms,95%): 1129.24 err/s: 0.00 reconn/s: 0.00
[ 48s ] thds: 16 tps: 298.99 qps: 1767.92 (r/w/o: 0.00/1169.95/597.97) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 49s ] thds: 16 tps: 73.00 qps: 454.01 (r/w/o: 0.00/308.01/146.00) lat (ms,95%): 943.16 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 16 tps: 304.01 qps: 1834.05 (r/w/o: 0.00/1226.03/608.02) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 51s ] thds: 16 tps: 284.97 qps: 1656.83 (r/w/o: 0.00/1086.89/569.94) lat (ms,95%): 38.25 err/s: 0.00 reconn/s: 0.00
[ 52s ] thds: 16 tps: 231.02 qps: 1435.15 (r/w/o: 0.00/973.10/462.05) lat (ms,95%): 1032.01 err/s: 0.00 reconn/s: 0.00
[ 53s ] thds: 16 tps: 117.00 qps: 655.00 (r/w/o: 0.00/421.00/234.00) lat (ms,95%): 33.12 err/s: 0.00 reconn/s: 0.00
[ 54s ] thds: 16 tps: 14.00 qps: 135.00 (r/w/o: 0.00/107.00/28.00) lat (ms,95%): 1903.57 err/s: 0.00 reconn/s: 0.00
[ 55s ] thds: 16 tps: 145.01 qps: 870.04 (r/w/o: 0.00/580.03/290.01) lat (ms,95%): 831.46 err/s: 0.00 reconn/s: 0.00
[ 56s ] thds: 16 tps: 389.93 qps: 2275.62 (r/w/o: 0.00/1495.75/779.87) lat (ms,95%): 31.94 err/s: 0.00 reconn/s: 0.00
[ 57s ] thds: 16 tps: 125.02 qps: 750.09 (r/w/o: 0.00/500.06/250.03) lat (ms,95%): 1050.76 err/s: 0.00 reconn/s: 0.00
[ 58s ] thds: 16 tps: 430.99 qps: 2585.97 (r/w/o: 0.00/1723.98/861.99) lat (ms,95%): 38.25 err/s: 0.00 reconn/s: 0.00
[ 59s ] thds: 16 tps: 153.00 qps: 928.01 (r/w/o: 0.00/622.01/306.00) lat (ms,95%): 1032.01 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 16 tps: 398.98 qps: 2383.90 (r/w/o: 0.00/1585.94/797.97) lat (ms,95%): 39.65 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            0
        write:                           57580
        other:                           28790
        total:                           86370
    transactions:                        14395  (236.10 per sec.)
    queries:                             86370  (1416.61 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.9691s
    total number of events:              14395

Latency (ms):
         min:                                    4.38
         avg:                                   67.76
         max:                                 3169.51
         95th percentile:                      325.98
         sum:                               975468.52

Threads fairness:
    events (avg/stddev):           899.6875/15.79
    execution time (avg/stddev):   60.9668/0.00
adrianTNT (206 rep)
Apr 9, 2023, 02:46 AM • Last activity: Apr 11, 2023, 11:04 AM
1 vote
1 answer
51 views
Could cloning / imaging SSD cause MySQL data to be more fragmented?
On a MySQL server (InnoDB tables) with /var/lib/mysql around **350 GB**, I cloned the NVMe SSD (using Macrium Reflect) in order to compare query speeds with a faster SSD. But queries are around 4 times slower after cloning to the new disk. I can compare because there are 4 computers with the same config and data sharded equally across them, so they all normally run at about the same speed; the one whose disk was cloned completes the same tasks 4 times slower. I restarted all servers to rule out any existing caching in RAM. I also noticed that the other 3 servers rebooted in seconds, while the one with the cloned SSD needed a few minutes to complete some background MySQL tasks before shutting down, even though I didn't notice any background SQL tasks running while it was up. I don't think it was one of those long background processes that roll back failed long operations. Could **cloning the SSD** cause the data to be somehow **more fragmented**? I tried to "optimize" tables just to see if it improves anything, but optimizing a ~15GB table has been running for 24 hours and counting, so that was a bad idea. The SSDs are different: the previous one was a Kingston DC1000B ("enterprise") and the second one is a Samsung 980 PRO. From a CrystalDiskMark benchmark, the Samsung should be 2-3x faster at reading, 6x faster at writing, with double the IOPS on small files.
adrianTNT (206 rep)
Mar 31, 2023, 12:39 PM • Last activity: Mar 31, 2023, 04:21 PM
0 votes
1 answer
492 views
How to generate each query in a separate SQL file using TPC-DS dsqgen tool?
I'm using TPC-DS dsqgen to translate its templates into runnable queries. I tried the following argument variations to save each query in a dedicated .sql file, but they all save the runnable query in query_0.sql, even if I'm using a template other than query1.tpl!

$ ./dsqgen -v -out ../output/queries -dir ../query_templates -dialect netezza -input ../query_templates/templates.lst
qgen2 Query Generator (Version 3.2.0)
Copyright Transaction Processing Performance Council (TPC) 2001 - 2021
Warning: This scale factor is valid for QUALIFICATION ONLY
Parsed 99 templates

$ ls -ltrh ../output/queries
total 156K
-rw-rw-r-- 1 mgelbana mgelbana 156K Feb 9 20:42 query_0.sql

$ ./dsqgen -v -out ../output/queries -dir ../query_templates -dialect netezza -template ../query_templates/query2.tpl
qgen2 Query Generator (Version 3.2.0)
Copyright Transaction Processing Performance Council (TPC) 2001 - 2021
Parsed 1 templates

$ ls -ltrh ../output/queries
total 4.0K
-rw-rw-r-- 1 mgelbana mgelbana 2.1K Feb 9 20:44 query_0.sql

So regardless of whether I pick a particular template or choose to process all templates, the output is written to a single file with the exact same name every time! How can I direct this tool to write each generated SQL query to a dedicated .sql file named after the associated template?
Muhammad Gelbana (135 rep)
Feb 9, 2023, 06:52 PM • Last activity: Feb 9, 2023, 08:51 PM
10 votes
3 answers
30223 views
How to benchmark Microsoft SQL Server
In my company we have multiple virtual machines hosting SQL Server 2008 R2, and some of these machines perform differently than others: some because of very busy VMware hosts, some because of slower connections to the NAS. Is there some SQL code I could run in a test database, or other known best practices I could use, to run performance tests on each VM and baseline/benchmark performance, so that I can understand which machine behaves better than the others before moving those machines to Prod or UAT environments? Thanks, Davide.
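As one rough illustration of the idea (a hypothetical probe, not an established benchmark; it only exercises CPU and memory, so storage and network need their own tests), a fixed chunk of work can be timed and the elapsed milliseconds compared across the VMs:

-- Run the identical script on every VM and compare elapsed_ms.
DECLARE @start datetime2 = SYSDATETIME();

SELECT COUNT_BIG(*) AS row_combinations   -- a few million rows of pure CPU work
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;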
Davide Piras (324 rep)
Aug 26, 2014, 01:38 PM • Last activity: Dec 27, 2022, 11:02 AM
2 votes
2 answers
53 views
Benchmarks for custom database
I am looking for an open source benchmark tool that can easily be extended to custom databases. The reason is that I have created a new NoSQL DBMS (called Borrego) that supports both JSON queries, like MongoDB and DB2 JSON, and generic SQL statements via a JDBC driver. I would like to run some performance tests that I can compare with other existing databases. Any pointers would be helpful. Thanks
Andy (131 rep)
Aug 18, 2017, 04:36 PM • Last activity: Nov 30, 2022, 09:47 PM
0 votes
1 answer
103 views
How to evaluate Database benchmarks ? What to consider given a specific example?
I'm trying to figure out which database to use for a project which is supposed to implement a temporal property graph model, and I am looking into some benchmarks for that. I found some papers which provided some insights and results, and I also found this benchmark from TigerGraph: https://www.tigergraph.com.cn/wp-content/uploads/2021/07/EN0302-GraphDatabase-Comparision-Benchmark-Report.pdf Does anyone have any idea why ArangoDB is performing so poorly here, especially in comparison to Neo4j? Furthermore, are there any preferences regarding a NoSQL database which consistently needs to write data while answering mostly queries that result in large subtrees? Edit: Also, if someone has links to other benchmarks, I'd welcome that.
L.Rex (101 rep)
Nov 14, 2022, 06:43 PM • Last activity: Nov 30, 2022, 09:42 PM
1 vote
1 answer
1045 views
pgbench - 20-30% variation in benchmark results (non-repeatable benchmarks)
## Problem

I'm trying to use `pgbench` to help me tune Postgres for my hardware, but I'm seeing a very strange behaviour. I cannot seem to get a stable number for TPS in consecutive runs of `pgbench`. Since I was running `pgbench` for 60 secs, I _assumed_ that this is because of checkpointing and auto-vacuuming, so I made the following changes to my config:

- autovacuum=off
- max_wal_size=5GB

But this led to even more wildly varying numbers! For example, here's the output of the exact same pgbench command run consecutively:

**Output 1**

starting vacuum...end.
progress: 5.0 s, 566.0 tps, lat 10.577 ms stddev 2.788
progress: 10.0 s, 513.0 tps, lat 11.689 ms stddev 2.907
progress: 15.0 s, 513.8 tps, lat 11.680 ms stddev 2.995
progress: 20.0 s, 519.6 tps, lat 11.546 ms stddev 2.969
progress: 25.0 s, 518.4 tps, lat 11.576 ms stddev 2.929
progress: 30.0 s, 518.2 tps, lat 11.576 ms stddev 2.978
progress: 35.0 s, 522.8 tps, lat 11.472 ms stddev 2.966
progress: 40.0 s, 521.0 tps, lat 11.516 ms stddev 2.962
progress: 45.0 s, 521.2 tps, lat 11.510 ms stddev 2.909
progress: 50.0 s, 581.6 tps, lat 10.313 ms stddev 2.636
progress: 55.0 s, 520.8 tps, lat 11.526 ms stddev 2.919
progress: 60.0 s, 522.2 tps, lat 11.494 ms stddev 2.927
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 2000
query mode: simple
number of clients: 6
number of threads: 6
duration: 60 s
number of transactions actually processed: 31699
latency average = 11.357 ms
latency stddev = 2.938 ms
tps = 528.185674 (including connections establishing)
tps = 528.269291 (excluding connections establishing)

**Output 2**

starting vacuum...end.
progress: 5.0 s, 528.4 tps, lat 11.318 ms stddev 2.940
progress: 10.0 s, 526.0 tps, lat 11.418 ms stddev 2.884
progress: 15.0 s, 522.8 tps, lat 11.473 ms stddev 2.892
progress: 20.0 s, 525.6 tps, lat 11.409 ms stddev 3.008
progress: 25.0 s, 528.0 tps, lat 11.366 ms stddev 2.858
progress: 30.0 s, 525.6 tps, lat 11.412 ms stddev 2.893
progress: 35.0 s, 521.8 tps, lat 11.503 ms stddev 2.973
progress: 40.0 s, 524.4 tps, lat 11.439 ms stddev 2.966
progress: 45.0 s, 736.6 tps, lat 8.152 ms stddev 3.801
progress: 50.0 s, 1101.2 tps, lat 5.447 ms stddev 0.738
progress: 55.0 s, 1012.2 tps, lat 5.929 ms stddev 0.609
progress: 60.0 s, 723.4 tps, lat 8.285 ms stddev 2.969
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 2000
query mode: simple
number of clients: 6
number of threads: 6
duration: 60 s
number of transactions actually processed: 38886
latency average = 9.257 ms
latency stddev = 3.629 ms
tps = 647.993705 (including connections establishing)
tps = 648.099359 (excluding connections establishing)

That's a 20% variation in TPS for the exact same configuration! What am I missing here?

## Hardware setup

- postgres server: 32 GB RAM / 6-core (12 thread) / SSD with RAID1
- pgbench server: 32 GB RAM / 4-core (8 thread) / SSD

## Relevant Postgres config for the above output

max_connection=100
work_mem=4MB
maintenance_work_mem=64MB
shared_buffers=12288MB
temp_buffers=8MB
effective_cache_size=16GB
wal_buffers=-1
wal_sync_method=fsync
max_wal_size=5GB
autovacuum=off

## pgbench settings

**Initialisation**

pgbench --initialize --init-steps=dtgpf --scale=2000 --username=benchmarking   # Results in approx 30-32GB of data

**Benchmarking**

pgbench --builtin=tpcb-like --client=6 --jobs=6 --time=60 --progress=5 --username=benchmarking

## Network connectivity between the two servers

# iperf -t 60 -c [IP-ADDRESS]
------------------------------------------------------------
Client connecting to [IP-ADDRESS], TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local [IP-ADDRESS] port 40494 connected with [IP-ADDRESS] port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  3.47 GBytes   496 Mbits/sec
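One way to test the checkpoint theory directly (a sketch; these pg_stat_bgwriter columns exist in the PostgreSQL versions current at the time of the question) is to snapshot the checkpoint and background-writer counters immediately before and after each 60-second run and compare the deltas; a run that overlaps a checkpoint will show it:

SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_backend,
       buffers_backend_fsync
FROM pg_stat_bgwriter;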
Saurabh Nanda (333 rep)
Jan 22, 2019, 07:03 PM • Last activity: Sep 22, 2022, 05:46 AM
1 vote
0 answers
822 views
Why is this `pgbench` benchmark faster on postgresql 11 than on postgresql 14?
To check postgresql performance, I ran the following benchmark with varying postgresql versions and filesystems:
pgbench --initialize --scale=50 bench
# Run for 3 minutes
postgres pgbench --client=10 --jobs=2 --time=180 bench
#### System environment

- postgresql settings: shared_buffers = 8 GiB (of 32 GiB total system RAM)
- 2x 7200 RPM HDDs

#### Results

| Filesystem | TPS (postgresql 11.16) | TPS (postgresql 14.4) |
| ---------- | ---------------------- | --------------------- |
| zfs compression=lz4 | 497.21 | 288.93 |
| zfs compression=lz4 recordsize=8K | 488.11 | 430.83 |
| ext4 | 492.01 | 480.66 |
| btrfs mirror compression=zstd | 193.65 | 187.4 |

Overall, pg 11 is faster than pg 14, especially on ZFS with the default 128K recordsize. What's the explanation for this result? Did pgbench change between versions? Or did postgres increase data robustness/crash resistance at the cost of performance?
Gustav Almgren (11 rep)
Jul 1, 2022, 08:27 PM • Last activity: Jul 2, 2022, 12:33 PM