
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
1547 views
How to speed up full text query on a table with 4 million rows? (MariaDB)
I have an InnoDB full-text table that serves the Ajax-powered search box at the top of my website. I generate it with a daily script that pulls data from a dozen entity tables on the site and amalgamates them all into one FT table for searching.

To give users the best experience (IMHO) I take whatever their input is, clean certain characters out of it (all full-text modifiers, for example), then prepend every term with + and append * to each. So a search for "stack overflow" becomes +stack* +overflow*

The column that I'm searching on the FT table is small, with a typical length of 30 characters: event names, people's names, geographical locations, that sort of thing. Not huge passages of text.

It works, but queries take on the order of 1 second to return. **EDIT: just after posting I rebuilt the index and it's down to 0.4 seconds now, but I'd still like to improve it, if possible.** How could I get that to 0.1 seconds, or is that a pipe dream?

My server is a dual Xeon with 16 cores/32 threads and 128GB of memory. I serve a million pages or so each month, and rarely see server load above 1-2, with plenty of spare memory. I wonder if I can somehow force this table to reside permanently in memory (rebuilding it after a server reboot or MySQL restart only takes 30 seconds or so), and whether that would help. Or maybe MySQL is already holding it in memory - how can I check?

I'm happy with the query itself; I don't think there's much I can improve about it, but I know very little about how to maximize server potential through configuration. FWIW SELECT VERSION() gives me 10.3.20-MariaDB-log.
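One way to answer the "is it already in memory" part is to inspect the InnoDB buffer pool. A hedged sketch (the schema and table names `mysite`/`search_index` are assumptions, and this query can itself be slow on a large buffer pool):

```sql
-- Approximate how much of the FT table is currently cached in the buffer pool.
-- Assumes 16KB pages (the default).
SELECT TABLE_NAME,
       COUNT(*) AS pages_cached,
       ROUND(COUNT(*) * 16 / 1024, 1) AS approx_mb_cached
FROM information_schema.INNODB_BUFFER_PAGE
WHERE TABLE_NAME LIKE '%search_index%'
GROUP BY TABLE_NAME;
```

If the cached size roughly matches the table plus its FTS auxiliary tables, the data is already memory-resident and the remaining time is spent in query processing rather than I/O.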
Codemonkey (265 rep)
Dec 9, 2019, 01:19 PM • Last activity: Aug 3, 2025, 11:10 PM
0 votes
1 answers
2218 views
How to change query tool font and colors in PgAdmin 4?
In PgAdmin III, it is possible to change the font and colors for the query tool editor window in the options menu (screenshot omitted). I can't find any similar settings in the preferences window for PgAdmin 4. How and where can I change the font and colors?
Joe M (121 rep)
May 1, 2023, 09:03 PM • Last activity: Jul 30, 2025, 05:02 PM
1 votes
1 answers
393 views
macos ventura postgres 15 remote connections server closed the connection unexpectedly
macOS Ventura 13.5.1, PostgreSQL 15.4 (Homebrew).

Remote connections to the server are closed "unexpectedly". I have done all the steps needed for Postgres to accept remote connections on other Unix systems, e.g. Ubuntu. I know it's seeing the connection attempt, since a line is added to the log at the same time, so this doesn't seem to be a firewall issue. On that Mac, all local connections are happy and the server works normally. I don't recall ever seeing the **TCP_NODELAY** message on any other installation I've done, and googling this issue didn't find meaningful results. Postgres was installed via:
brew install postgresql@15
connecting to the database like this shows an error:
psql a_database --host an_ip_address --username a_username
psql: error: connection to server at "an_ip_address", port 5432 failed: server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
Yes, I know I might want a more specific IP address range, e.g. 10.0.0.0, but I wanted to make sure I could connect first. In pg_hba.conf:
# remote connections with password verification
host    all             all             0.0.0.0/0               md5
In postgresql.conf:
listen_addresses = '*'
In the log for postgres at the time of connection is:
2023-08-25 13:55:02.966 PDT  LOG:  setsockopt(TCP_NODELAY) failed: Invalid argument
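One quick check (an assumption: macOS with lsof available) is whether the running server is actually bound to the external interface, since listen_addresses changes take effect only after a full restart, not a reload:

```shell
# Show which addresses postgres is listening on.
sudo lsof -nP -iTCP:5432 -sTCP:LISTEN
# An entry for 127.0.0.1:5432 only (no *:5432) would mean the running
# server never picked up listen_addresses = '*'.
```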
boar (13 rep)
Aug 25, 2023, 09:38 PM • Last activity: Jul 23, 2025, 10:07 PM
0 votes
1 answers
150 views
Mongod starting issue on ubuntu
When I try to check the status of mongodb I get the following:
× mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2024-04-16 16:40:14 EDT; 19s ago
       Docs: https://docs.mongodb.org/manual 
    Process: 10689 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
   Main PID: 10689 (code=exited, status=1/FAILURE)
        CPU: 39ms

Apr 16 16:40:14 rime-VirtualBox systemd: Started MongoDB Database Server.
Apr 16 16:40:14 rime-VirtualBox mongod: {"t":{"$date":"2024-04-16T20:40:14.057Z"},"s":"I",  "c":"CONTROL",  "id":5760901, ">
Apr 16 16:40:14 rime-VirtualBox mongod: about to fork child process, waiting until server is ready for connections.
Apr 16 16:40:14 rime-VirtualBox mongod: forked process: 10691
Apr 16 16:40:14 rime-VirtualBox mongod: ERROR: child process failed, exited with 1
Apr 16 16:40:14 rime-VirtualBox mongod: To see additional information in this output, start without the "--fork" option.
Apr 16 16:40:14 rime-VirtualBox systemd: mongod.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 16:40:14 rime-VirtualBox systemd: mongod.service: Failed with result 'exit-code'.
Here are some important files. The config (/etc/mongod.conf):
storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 127.0.0.1
processManagement:
   fork: true
setParameter:
   enableLocalhostAuthBypass: false
The systemd service file:
[Service]
User=mongodb
Group=mongodb
EnvironmentFile=-/etc/default/mongod
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
RuntimeDirectory=mongodb
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false

# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings 

[Install]
WantedBy=multi-user.target
The log (/var/log/mongodb/mongod.log):
{"t":{"$date":"2024-04-16T16:45:41.024-04:00"},"s":"D1", "c":"STORAGE",  "id":8097401, "ctx":"TimestampMonitor","msg":"No drop-pending idents have expired","attr":{"timestamp":{"$timestamp":{"t":0,"i":0}},"pendingIdentsCount":0}}
[... the TimestampMonitor line above repeats roughly once per second; repeats trimmed ...]
{"t":{"$date":"2024-04-16T16:45:55.367-04:00"},"s":"D1", "c":"INDEX",    "id":22533,   "ctx":"TTLMonitor","msg":"running TTL job for index","attr":{"namespace":"config.system.sessions","key":{"lastUse":1},"name":"lsidTTLIndex"}}
{"t":{"$date":"2024-04-16T16:45:55.367-04:00"},"s":"I",  "c":"INDEX",    "id":5479200, "ctx":"TTLMonitor","msg":"Deleted expired documents using index","attr":{"namespace":"config.system.sessions","index":"lsidTTLIndex","numDeleted":0,"durationMillis":0}}
{"t":{"$date":"2024-04-16T16:45:56.838-04:00"},"s":"D1", "c":"WTCHKPT",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":{"ts_sec":1713300356,"ts_usec":838383,"thread":"9327:0x76d97d11b640","session_name":"WT_SESSION.checkpoint","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 69, snapshot max: 69 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}}
{"t":{"$date":"2024-04-16T16:45:56.864-04:00"},"s":"D1", "c":"STORAGE",  "id":7702900, "ctx":"Checkpointer","msg":"Checkpoint thread sleeping","attr":{"duration":60}}
[... TimestampMonitor repeats continue through 16:46:28 ...]
Could you please help me solve this issue? It seems to start fine when I run it manually with --verbose. I checked the ExecStart parameter in the .service file, but there was no --fork option to remove.
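One detail worth noting from the files above: mongod.conf sets fork: true, while the systemd unit has no Type=forking line, so systemd's default service type expects mongod to stay in the foreground. A sketch of the change (an assumption about the cause, not a confirmed fix):

```yaml
# /etc/mongod.conf: let systemd supervise the process directly
processManagement:
  fork: false   # was: true
```

With fork disabled, early startup errors should also appear in journalctl -u mongod instead of being lost by the forked child.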
Rime Tazi (1 rep)
Apr 16, 2024, 08:59 PM • Last activity: Jul 16, 2025, 07:11 PM
0 votes
1 answers
232 views
How to enhance (the speed of) my database
First of all, I should say that I don't have much formal training in SQL, especially on the backend side, so everything I tested was trial and error based on advice found on the internet.

I have a DB which stores selected statistics of players from an online game, World of Tanks. A program I made calls the external API, parses the received data, and writes it into the DB. In my program, I used to construct queries as strings to deal with the varying length of data returned per API call, which worked decently. Then I switched to prepared statements, but that required sending everything row by row, which was terrible. Finally, I implemented buffers for the data, which coupled with batched prepared statements is probably the fastest version: when a buffer is full, it gets pushed to a queue and a worker thread executes the query.

The game is split into several regions (5, to be precise). Each region has separate servers, players, API, etc. Each account has some general data (account ID, nickname, creation time, games, wins, last update time, etc.) and per-vehicle stats (games, winrate, average damage). I have one table storing the general data of players per region. At first I stored all vehicle stats in one table per region, but performance soon became abysmal, so I divided the data into separate tables per vehicle per server. This has the obvious disadvantages of needing to create tables dynamically and requiring separate prepared statements for each table.
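The buffer-plus-batch pattern described above can be sketched as follows; this is illustrative only, using SQLite in place of MySQL, with simplified table and column names (all assumptions, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (accid INTEGER PRIMARY KEY, battles INTEGER, wins INTEGER)"
)

BATCH_SIZE = 100
buffer = []

def flush(buf):
    # One prepared statement services the entire batch in a single call.
    conn.executemany(
        "INSERT OR REPLACE INTO accounts (accid, battles, wins) VALUES (?, ?, ?)",
        buf,
    )
    conn.commit()
    buf.clear()

# Simulated rows arriving from the API parser.
for accid in range(250):
    buffer.append((accid, accid * 10, accid * 4))
    if len(buffer) >= BATCH_SIZE:
        flush(buffer)
if buffer:
    flush(buffer)

print(conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0])  # prints 250
```

The point is that one prepared statement handles a whole batch via executemany, which is the same shape as multi-row batched inserts in MySQL.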
CREATE TABLE accounts_ (
  accid int unsigned NOT NULL,
  nick varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '???',
  tcreated bigint NOT NULL DEFAULT '0',
  tlastbttl bigint NOT NULL DEFAULT '0',
  battles mediumint unsigned NOT NULL DEFAULT '0',
  wins mediumint unsigned NOT NULL DEFAULT '0',
  t_updt int NOT NULL DEFAULT '0',
  PRIMARY KEY (accid)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
CREATE TABLE tank__ (
  accid int unsigned NOT NULL,
  battles int unsigned NOT NULL DEFAULT '0',
  winrate float NOT NULL DEFAULT '0',
  dpb float NOT NULL DEFAULT '0',
  PRIMARY KEY (accid),
  CONSTRAINT tank___fk FOREIGN KEY (accid) REFERENCES accounts_ (accid) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Unfortunately, when I want to generate *leaderboards* that include players from all servers, or all vehicles, I first have to select data from each table into a temporary table and then select again. Additionally, some rare vehicles have barely any rows, some are not available in all regions, and some are played by almost everyone, so the tables differ a lot in size. Is there any way to join all the vehicle tables together (or even all regions together, because IDs will not collide) without hurting performance even more? I thought about partitioning, but after reading more I'm not sure that's what I need.

The most pressing problem is insert speed. The incoming data rate is around 5k rows per second. After batching in groups of 100, that comes to around 50 queries per second, while the DB is currently only able to write around 20 qps. The largest account table is ~50 million rows; the total is ~150 million rows, 16GB in 5 files. The largest vehicle table is ~7 million rows; 90GB in ~4400 files.

I think the sudden decrease in performance I noticed is because, when inserting into an "empty" table, the DB can just write the rows in one place. Now that the DB has to actually update rows that are scattered across the table/file, a single query takes much longer. Does this make sense? Maybe I could insert into a temporary table and then move all the data to the main table?

Are there any settings I can change to improve performance? Any indexes? Other than what I wrote above, I also tried disabling FK constraints, but I couldn't notice any difference. I tested various values for settings like pool_size, redo_log, thread_concurrency, etc. Can some of the "dangerous" settings be applied to this DB only, not the entire MySQL server? For this database I don't really care if something goes missing or a false value occurs, as long as it doesn't irreversibly break the entire table; it will get re-downloaded and corrected after some time anyway.

How do big companies handle databases like these? My DB is only a small fraction of the original one, and it still has to perform at unbelievably higher rates. Is my hardware just too bad? Are UPDATEs so much slower than plain INSERTs?

Hardware: some of the specs as shown by inxi: https://pastebin.com/raw/MfcExvYU Some info from hdparm:
/dev/sda:
 Timing cached reads:   20346 MB in  1.98 seconds = 10275.64 MB/sec
 Timing buffered disk reads: 1256 MB in  3.00 seconds = 418.14 MB/sec
/dev/sdb:
 Timing cached reads:   29782 MB in  1.97 seconds = 15092.17 MB/sec
 Timing buffered disk reads: 1386 MB in  3.00 seconds = 461.46 MB/sec
sdb is the system disk; sda is currently only used for the MySQL data directory. (Yeah, I know, the letters got swapped at some point for some reason.)

MySQL config files. Changes from the default mysqld.cnf:
- general log disabled
- slow log disabled
- bin log disabled

Additional settings:
table_open_cache = 10000
table_definition_cache = 10000

#innodb_thread_concurrency = 4
innodb_flush_log_at_trx_commit = 2
innodb_buffer_pool_size = 2G
innodb_flush_method = O_DIRECT
innodb_flush_neighbors = 0
innodb_io_capacity = 1000
innodb_io_capacity_max = 4000
#innodb_redo_log_capacity = 4194304
innodb_doublewrite = 0

transaction-isolation = READ-COMMITTED

connect_timeout=28800
interactive_timeout=28800
wait_timeout=28800
Edit: As suggested by @mustaccio:
- Status: https://pastebin.com/raw/1FhqJaPa
- Variables: https://pastebin.com/raw/q3P15JeQ
- Processlist: https://pastebin.com/raw/t5NUnJfh
- InnoDB monitor: https://pastebin.com/raw/U2u0hxKL
- InnoDB metrics: https://pastebin.com/raw/38gQKCek
- MySQL tuner: https://pastebin.com/raw/DbmTDpuU
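On the leaderboard point above, one hedged option is a UNION ALL view over the per-vehicle tables, which avoids the explicit temporary-table step (table names here are placeholders; the view still scans every underlying table, so it helps convenience more than speed):

```sql
CREATE OR REPLACE VIEW all_tanks AS
  SELECT 'eu' AS region, t.* FROM tank_eu_1 t
  UNION ALL
  SELECT 'na' AS region, t.* FROM tank_na_1 t;
  -- ... one UNION ALL branch per vehicle/region table

SELECT accid, winrate
FROM all_tanks
ORDER BY winrate DESC
LIMIT 100;
```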
herhor67 (23 rep)
Jul 29, 2023, 07:22 PM • Last activity: Jun 24, 2025, 09:04 PM
0 votes
1 answers
61 views
MariaDB installation setting lower_case_table_names
I had MySQL installed on Windows 11 with lower_case_table_names=2 to preserve mixed-case table names. To migrate to MariaDB, I backed up the database with mysqldump.

When installing MariaDB using either the MSI package or the ZIP, it seems that mariadb-install-db.exe creates the my.ini file in the data folder with lower_case_table_names=1, and it cannot be changed afterwards. I tried deleting the contents of the data folder, creating a my.ini in the MariaDB root, and manually rerunning mariadb-install-db.exe, but it ignores that ini file. The MySQL installer asks what the lower_case_table_names setting should be during setup, but MariaDB's does not. When restoring the database from the dump into MariaDB, SHOW TABLES lists the tables all in lower case.

Other sites and the MariaDB documentation say that lower_case_table_names=2 is supported, but not how to achieve it. Does anyone know how lower_case_table_names can be set to 2?
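For reference, the fragment one would expect to place in my.ini, under the [mysqld] section as in MySQL (whether MariaDB's Windows installer honors the value 2 at first initialization is exactly the open question here):

```ini
[mysqld]
lower_case_table_names=2
```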
Peter Brand (111 rep)
Jun 19, 2025, 10:32 PM • Last activity: Jun 20, 2025, 03:24 PM
0 votes
2 answers
133 views
Sql Server Agent service not starting because the Agent XPs is disabled. How to solve this problem?
I have a bunch of servers hosted in Canada; one of them behaves erratically when under too much pressure for some time: it just loses connectivity, probably because the CPU is running high. That is beyond the point of this question, just a bit of background. What is relevant is that when they reboot the box, it starts working fine again.

Now the concern: after the box is rebooted, the SQL Server Agent does not start. The reason it does not start is the "Agent XPs disabled" error; the page I found about it has pictures that match the situation I see on my rogue server. After I manually enable the Agent XPs and manually start the SQL Server Agent, all works. But I don't want to start it manually. Why is it not starting on its own?

Running this query I can see the status of Agent XPs and Show Advanced Options:
use master
go
select * from sys.configurations where name in ('Agent XPs','Show advanced options')
(Query result screenshot omitted.)

SQL Server Agent should not be in the Windows admin group, but I wonder what permission might be missing so that the Agent XPs can be re-enabled and our SQL Server Agent can be started. I have NOT set the SQL Agent service to Automatic (Delayed). The SQL Browser service is running. I see this in the error log:

> The current event was not reported to the Windows Events log. Operating system error = (null). You may need to clear the Windows Events log if it is full.

That took me to have a look at this:

> The "Configure log access" policy under "Computer Configuration" -> "Administrative Templates" -> "Windows Components" -> "Event Log Service" -> "Application" is enabled. You can check this policy by running gpedit.msc.

This is the current setting for that policy (screenshot omitted). As for the startup options set for the service, nothing is outstanding.
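For completeness, the manual re-enable step described above as T-SQL (this restores the state after a reboot, but does not explain why it reverts):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE;
```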
Marcello Miorelli (17274 rep)
May 29, 2025, 10:56 AM • Last activity: Jun 9, 2025, 02:19 PM
0 votes
1 answers
218 views
Should I change parallelism (MaxDop and Threshold)?
It seems I have problems with parallelism, because the largest wait types are CXCONSUMER and CXPACKET. The server has 8 cores, so I am planning to bump Cost Threshold up to **50** and MaxDop to **4**. Currently I have the default values:

* 5 - cost threshold for parallelism
* 0 - max degree of parallelism

Mostly, only a couple of databases out of all the databases on the instance are used intensively, which makes me wonder whether I should implement those changes on the whole instance or only on those databases.
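Both scopes can be set in T-SQL; a sketch using the values proposed above (note the cost threshold exists only at the instance level, while MAXDOP can also be overridden per database on SQL Server 2016 and later):

```sql
-- Instance-wide
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per-database override (run in the target database)
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
```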
Serdia (707 rep)
Jul 6, 2021, 10:27 PM • Last activity: Jun 9, 2025, 01:08 AM
0 votes
1 answers
769 views
possible to show maximum query text in trino?
We have some queries that are hundreds or thousands of lines long that create a QUERY_TEXT_TOO_LARGE error. Is there a query I can run to show the maximum query text length in Trino/hdfs? I couldn't find it here: https://trino.io/docs/current/admin/properties-query-management.html or by doing a search: https://trino.io/docs/current/search.html?q=maximum# I don't have direct access to the server this runs on, but I can run queries on it.
raphael75 (244 rep)
Jan 10, 2022, 06:13 PM • Last activity: Jun 7, 2025, 06:03 PM
0 votes
1 answers
702 views
Why is MySQL not connecting when reading from mylogin.cnf?
On Ubuntu 20.04, I can log into a server by specifying the log credentials on the CLI: ``` $ mysql -h172.30.0.2 -uroot -p bar -e "SELECT id FROM users LIMIT 1;" Enter password: +----+ | id | +----+ | 1 | +----+ ``` However, I cannot log in when using `~/.mylogin.cnf`: ``` $ cat ~/.mylogin.cnf [foo]...
On Ubuntu 20.04, I can log into a server by specifying the login credentials on the CLI:
$ mysql -h172.30.0.2 -uroot -p bar -e "SELECT id FROM users LIMIT 1;"
Enter password:
+----+
| id |
+----+
|  1 |
+----+
However, I cannot log in when using ~/.mylogin.cnf:
$ cat ~/.mylogin.cnf
[foo]
user=root
password="notTheRealPassword"
host=172.30.0.2

$ mysql --login-path=foo bar -e "SELECT id FROM users LIMIT 1;"
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I have confirmed that the file is being read by changing the permissions of the file and noticing MySQL complaining:
$ chmod 660 ~/.mylogin.cnf

$ mysql --login-path=foo bar -e "SELECT id FROM users LIMIT 1;"
mysql: [Warning] /home/dotancohen/.mylogin.cnf should be readable/writable only by current user.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
**Why might MySQL not connect when using ~/.mylogin.cnf?** How should I debug why MySQL is not connecting?
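For what it's worth, .mylogin.cnf is not meant to be a plain-text file: it is an obfuscated file normally created by mysql_config_editor, and the client cannot read login paths from a hand-written copy (so it falls back to the default localhost socket, which matches the error above). A quick check (sketch; host/user values as in the question):

```
# List the login paths the client can actually decode
# (empty output for a hand-written file)
mysql_config_editor print --all

# Recreate the login path properly; mysql_config_editor
# obfuscates the file itself and prompts for the password
mysql_config_editor set --login-path=foo --host=172.30.0.2 --user=root --password
```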
dotancohen (1106 rep)
Oct 24, 2020, 09:01 AM • Last activity: May 28, 2025, 02:06 PM
5 votes
2 answers
1491 views
what are the reasons why show advanced options is a security threat when left enabled?
I generally only [temporarily change][1] the values of [Show Advanced Options][2]. I have servers where this would be a convenient setting to leave on. Are there real threats that could be avoided by denying this option on the server? Talking about SQL Server 2019 here, although some servers are sti...
I generally only temporarily change the values of Show Advanced Options. I have servers where this would be a convenient setting to leave on. Are there real threats that could be avoided by denying this option on the server? I'm talking about SQL Server 2019 here, although some servers are still SQL Server 2016.
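For context, a sketch of what the option actually gates: it only controls whether advanced settings are visible to and changeable through sp_configure; each sensitive feature still has its own switch and remains off by default:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Advanced options are now visible/changeable, but a risky feature such as
-- xp_cmdshell still defaults to 0 and needs its own explicit enable:
EXEC sp_configure 'xp_cmdshell';
```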
Marcello Miorelli (17274 rep)
May 27, 2025, 05:30 PM • Last activity: May 28, 2025, 10:31 AM
1 votes
1 answers
251 views
Why do some applications use implicit transactions in SQL Server?
I have run into several applications that were written for one DBMS and then ported to SQL Server, and many use implicit transactions--which oftentimes really makes it difficult to manage on the SQL side of things. I came across Kendra Little's website and post here and she seems to see the sa...
I have run into several applications that were written for one DBMS and then ported to SQL Server, and many use implicit transactions--which oftentimes really makes things difficult to manage on the SQL side. I came across Kendra Little's website and post here and she seems to see the same thing: > 2. Implicit transactions > > Implicit transactions are a bit weird, and I typically only run into them when applications have been written for a different relational database and then ported to SQL Server. My question is why? What benefit do implicit transactions provide to the application? It seems like extra work and makes things more difficult all around.
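For illustration (the table name here is hypothetical), this is the behavior that makes implicit transactions awkward to manage on the SQL Server side:

```sql
-- With implicit transactions ON, the first DML statement silently opens
-- a transaction that stays open until an explicit COMMIT or ROLLBACK.
SET IMPLICIT_TRANSACTIONS ON;
UPDATE dbo.SomeTable SET SomeCol = 1 WHERE Id = 42;  -- opens a transaction
SELECT @@TRANCOUNT;  -- 1: locks are held until the application commits
COMMIT;
```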
Mike S (177 rep)
Dec 2, 2019, 01:07 AM • Last activity: May 24, 2025, 07:03 PM
0 votes
1 answers
344 views
How much memory using for adaptive hash indexes in MariaDB?
I'm reading about [`adaptive hash indexes`](https://dev.mysql.com/doc/refman/5.7/en/innodb-adaptive-hash.html) in MariaDB. The index is kept in memory, so it is logical to want to know how much memory is typically used for it. For example, my local development database has only 2GB of RAM. Is it...
I'm reading about [adaptive hash indexes](https://dev.mysql.com/doc/refman/5.7/en/innodb-adaptive-hash.html) in MariaDB. The index is kept in memory, so it is logical to want to know how much memory is typically used for it. For example, my local development database has only 2GB of RAM. Is it theoretically possible that the adaptive hash index will use > 50% of total memory? I tried to find out how much memory is usually used for the AHI, but there is no info on the internet about its memory use or how to limit it. The only thing I found is [innodb-adaptive-hash-index-parts](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_adaptive_hash_index_parts) . However, there is no explanation of what a specific partition means or what its size is.
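A sketch of what can actually be inspected (MariaDB/MySQL expose a status section and activity counters rather than a byte-level cap):

```sql
-- The "INSERT BUFFER AND ADAPTIVE HASH INDEX" section reports the AHI
-- hash table size and its node heap buffers:
SHOW ENGINE INNODB STATUS\G

-- Activity counters for the AHI:
SELECT NAME, COUNT FROM information_schema.INNODB_METRICS
WHERE NAME LIKE 'adaptive_hash%';

-- There is no size-cap setting; the practical "limit" is disabling it:
SET GLOBAL innodb_adaptive_hash_index = OFF;
```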
rzlvmp (111 rep)
Jul 26, 2023, 01:04 AM • Last activity: May 23, 2025, 12:01 AM
0 votes
1 answers
263 views
Use data from Mysql to ElasticSearch with Logstash
I'm using Logstash to load my MySQL database into Elasticsearch. My conf is the following. input { jdbc { jdbc_connection_string => "jdbc:mysql://[ip]:3306/nextline_dev" jdbc_user => "[user]" jdbc_password => "[pass]" #schedule => "* * * * *" #jdbc_validate_connection => true jdbc_driver_library => "/p...
I'm using Logstash to load my MySQL database into Elasticsearch. My conf is the following.
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://[ip]:3306/nextline_dev"
        jdbc_user => "[user]"
        jdbc_password => "[pass]"
        #schedule => "* * * * *"
        #jdbc_validate_connection => true
        jdbc_driver_library => "/path/mysql-connector-java-6.0.5.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        statement => "SELECT * FROM Account"
    }
}
output {
    elasticsearch {
        index => "account"
        document_id => "%{id}"
        hosts => ["127.0.0.1:9200"]
    }
}
But I have some questions. I want to schedule more than one query, but the index will always be account. Can I make a dynamic index for the Elasticsearch output? And how can I use more than one statement (to export more than one table)?
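For what it's worth, one common pattern (a sketch; the second table and the type values are illustrative) is one jdbc block per table, each setting a field that the single output then interpolates into the index name:

```
input {
  jdbc {
    # ... same connection settings as above ...
    statement => "SELECT * FROM Account"
    type => "account"
  }
  jdbc {
    # ... same connection settings as above ...
    statement => "SELECT * FROM Invoice"   # illustrative second table
    type => "invoice"
  }
}
output {
  elasticsearch {
    index => "%{type}"        # index name taken from each event's type field
    document_id => "%{id}"
    hosts => ["127.0.0.1:9200"]
  }
}
```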
BlueSeph (121 rep)
Mar 15, 2019, 08:29 PM • Last activity: May 20, 2025, 03:02 PM
0 votes
1 answers
254 views
Having two data folder in my postgresql setup- Postgresql
I am using CentOS 6 and installed PostgreSQL 9.4 using the following commands yum install postgresql postgresql-contrib postgresql-client pgadmin3 yum install postgresql-server After that I checked my config file and ran the "SHOW data_directory" command to verify the data folder path. It shows `/var/li...
I am using CentOS 6 and installed PostgreSQL 9.4 using the following commands yum install postgresql postgresql-contrib postgresql-client pgadmin3 yum install postgresql-server After that I checked my config file and ran the "SHOW data_directory" command to verify the data folder path. It shows /var/lib/pgsql/9.4/data, but I also have another data folder at /var/lib/pgsql/data. data path 1 --> /var/lib/pgsql/9.4/data data path 2 --> /var/lib/pgsql/data My question is: which is my original data folder? And if my actual data folder is the one configured in the config file, then what is the use of the other data folder?
Dharani Dharan (101 rep)
May 5, 2016, 11:23 AM • Last activity: May 18, 2025, 10:02 PM
1 votes
2 answers
282 views
Make data rotation with MariaDB database
We have a small local computer that we use as a temporary database backup to automatically recover data from sensors when we lose connection to the main cloud server or when we have to perform maintenance on this server. Due to the small disk space available for data (32 GB) the idea was to keep the last 3...
We have a small local computer that we use as a temporary database backup to automatically recover sensor data when we lose the connection to the main cloud server or when we have to perform maintenance on that server. Due to the small disk space available for data (32 GB), the idea was to keep the last 3 months of sensor data and delete the oldest entries. Here is the description of the database:
CREATE TABLE IF NOT EXISTS machine (
  Nom varchar(50) NOT NULL,
  Site smallint(5) unsigned NOT NULL,
  Emplacement smallint(5) unsigned DEFAULT NULL,
  ID smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (ID),
  UNIQUE KEY Nom (Nom),
  KEY FK__site (Site),
  CONSTRAINT FK__site FOREIGN KEY (Site) REFERENCES site (ID)
) ENGINE=InnoDB AUTO_INCREMENT=77 DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS mesure (
  Machine smallint(5) unsigned NOT NULL,
  Date datetime NOT NULL,
  Valeur decimal(13,2) NOT NULL,
  PRIMARY KEY (Machine,Date),
  KEY Date (Date) USING BTREE,
  CONSTRAINT FK__machine FOREIGN KEY (Machine) REFERENCES machine (ID)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS site (
  Nom varchar(50) NOT NULL,
  ID smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (ID)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
At first I thought that periodic deletions would be sufficient, but I discovered that InnoDB does not release disk space back to the OS, and now the disk partition is full (it filled up in 4 months). Moreover, these deletes are a bit slow. After some research I found that partitioning could be the solution to my issue, but it seems that it's not possible to partition a table that has a foreign key. I tried this code, just for testing purposes:
ALTER TABLE mesure
PARTITION BY RANGE(UNIX_TIMESTAMP(Date))
(
	PARTITION START VALUES LESS THAN (UNIX_TIMESTAMP("2021-10-01 00:00:00")),
	PARTITION MONTH1 VALUES LESS THAN (UNIX_TIMESTAMP("2021-11-01 00:00:00")),
	PARTITION MONTH2 VALUES LESS THAN (UNIX_TIMESTAMP("2021-12-01 00:00:00")),
	PARTITION MONTH3 VALUES LESS THAN (UNIX_TIMESTAMP("2022-01-01 00:00:00")),
	PARTITION END VALUES LESS THAN MAXVALUE
	
);
That results in:
/* Erreur SQL (1217) : Cannot delete or update a parent row: a foreign key constraint fails */
I have now run out of ideas to make this viable on our hardware. Any idea how I can manage this?
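For what it's worth, a sketch of one possible direction (untested against this schema): MariaDB does not support foreign keys on partitioned tables, so the constraint on mesure would have to be dropped first, leaving referential integrity to the application; old months can then be dropped instantly, which also returns the space to the OS:

```sql
-- Sketch: partitioning requires removing the FK first (MariaDB limitation)
ALTER TABLE mesure DROP FOREIGN KEY FK__machine;

ALTER TABLE mesure
PARTITION BY RANGE (UNIX_TIMESTAMP(`Date`)) (
    PARTITION START  VALUES LESS THAN (UNIX_TIMESTAMP('2021-10-01 00:00:00')),
    PARTITION MONTH1 VALUES LESS THAN (UNIX_TIMESTAMP('2021-11-01 00:00:00')),
    PARTITION MONTH2 VALUES LESS THAN (UNIX_TIMESTAMP('2021-12-01 00:00:00')),
    PARTITION MONTH3 VALUES LESS THAN (UNIX_TIMESTAMP('2022-01-01 00:00:00')),
    PARTITION END    VALUES LESS THAN MAXVALUE
);

-- Dropping a partition is fast and releases its disk space immediately:
ALTER TABLE mesure DROP PARTITION START;
```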
Afaeld (23 rep)
Sep 13, 2022, 05:55 PM • Last activity: May 12, 2025, 08:03 PM
2 votes
1 answers
300 views
Update the current session's default parameter value to reflect a new default
Say I `ALTER` the database default `search_path` to include a new schema: ALTER DATABASE my_db SET search_path TO "$user",public,other_schema; Then I also update the current `search_path` to match: SET search_path "$user",public,other_schema; This is great. Now I can use objects from that schema in...
Say I ALTER the database default search_path to include a new schema: ALTER DATABASE my_db SET search_path TO "$user",public,other_schema; Then I also update the current search_path to match: SET search_path TO "$user",public,other_schema; This is great. Now I can use objects from that schema in my current session without including the schema. ...unless someone RESETs the parameter: RESET search_path; This doesn't reset it back to the new default. It resets it back to the original default before the session started. Even COMMITing the current transaction doesn't change this. Trying to UPDATE pg_settings doesn't work: UPDATE pg_settings SET reset_val = '"$user",public,other_schema' WHERE name = 'search_path'; SELECT * FROM pg_settings WHERE name = 'search_path'; This shows no change to the view and no difference in the behavior of RESET. I could see use cases for this, but in my use case, I need this to not happen. How can I get PostgreSQL to update the current session default alongside my other changes, so that RESET will set it back to that value? Especially following a COMMIT?
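For what it's worth, a minimal illustration of the behavior described: the reset value is captured when a session starts, so only sessions opened after the ALTER DATABASE pick up the new default:

```sql
-- Session A (started BEFORE the ALTER DATABASE):
RESET search_path;   -- goes back to the OLD default

-- Session B (started AFTER the ALTER DATABASE):
SHOW search_path;    -- "$user", public, other_schema
RESET search_path;   -- resets to the NEW database default
```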
jpmc26 (1652 rep)
Jul 29, 2016, 12:08 AM • Last activity: May 10, 2025, 07:06 PM
0 votes
1 answers
53 views
How do I avoid writing SQL that depends on settings?
[PostgreSQL docs note](https://www.postgresql.org/docs/17/datatype-binary.html): > The bytea type supports two formats for input and output: “hex” format and PostgreSQL's historical “escape” format. Both of these are always accepted on input. The output format depends on the configuration parameter...
[PostgreSQL docs note](https://www.postgresql.org/docs/17/datatype-binary.html): > The bytea type supports two formats for input and output: “hex” format and PostgreSQL's historical “escape” format. Both of these are always accepted on input. The output format depends on the configuration parameter bytea_output; the default is hex. I always interpreted this as affecting only how output is displayed, but I see now that you can get differing query results based on the setting. For example, on PostgreSQL:
=> set bytea_output to 'hex';
SET
=> select length('\x2020'::bytea::text);
 length
--------
      6
(1 row)

=> set bytea_output to 'escape';
SET
=> select length('\x2020'::bytea::text);
 length
--------
      2
(1 row)
To make my application more robust, I want the SQL I write to be independent of toggles like this. How can I achieve this?
Janus Troelsen (139 rep)
May 8, 2025, 07:16 PM • Last activity: May 9, 2025, 04:24 AM
1 votes
2 answers
3566 views
mongodb conf node won't start - compatibility error
I recently upgraded a dev sharded-cluster from 3.6 to 4.0 - one of the conf servers is now failing to start: > 2020-01-30T13:19:02.972-0600 F CONTROL [initandlisten] ** IMPORTANT: > UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document > (ERROR: BadValue: Invalid value for version,...
I recently upgraded a dev sharded-cluster from 3.6 to 4.0 - one of the conf servers is now failing to start: > 2020-01-30T13:19:02.972-0600 F CONTROL [initandlisten] ** IMPORTANT: > UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document > (ERROR: BadValue: Invalid value for version, found 3.6, expected '4.2' > or '4.0'. Contents of featureCompatibilityVersion document in > admin.system.version: { _id: "featureCompatibilityVersion", version: > "3.6" }. See > http://dochub.mongodb.org/core/4.0-feature-compatibility.) . If the > current featureCompatibilityVersion is below 4.0, see the > documentation on upgrading at > http://dochub.mongodb.org/core/4.0-upgrade-fcv . The upgrade from 3.6 -> 4.0 was reported as successful - I checked the data shards but didn't think to check each conf server... Today, I upgraded the mongo binaries, and on reboot encountered the problem with one of the conf servers failing to start with the above-mentioned error. The cluster had been running at 4.0 for weeks before I noticed this problem following the update reboot. The other two conf servers (running) report that they're at 4.0, so life is good there, as are all of the data shards. I cannot start the conf server without encountering this error, which shuts down the server, making it impossible to issue the set-feature-compatibility-version directive from mongos. Since the other two conf nodes are running and reporting the correct release version, would it be best to just nuke the down server's data, attempt to restart the node, and then issue the command to ensure that the restarted node's data version is correct? Or is there some sort of force command that would bypass the version check? TIA!
Micheal Shallop (111 rep)
Jan 30, 2020, 07:30 PM • Last activity: Apr 23, 2025, 06:02 PM
3 votes
1 answers
68 views
MYSQL and random performance of store procedure
I run this SP at night, when there’s almost no activity. The SP performs an accumulative calculation of machine usage from a laundromat—an artisanal BI query. It writes the result to a table. On most days, it takes about 5 minutes to finish. But randomly, it can take up to an hour—or it keeps runnin...
I run this SP at night, when there’s almost no activity. The SP performs an accumulative calculation of machine usage from a laundromat—an artisanal BI query. It writes the result to a table. On most days, it takes about 5 minutes to finish. But randomly, it can take up to an hour—or it keeps running into business hours and I have to kill it. The weirdest part is its inconsistency. For example, last week I had to kill it on Friday and Saturday, but on Sunday it finished quickly. Today it took around 1 hour and 40 minutes. More details: MySQL on AWS RDS, t3.micro, GP3 volume, no replica or multi-AZ setup. I don’t see any CPU/memory issues in the Monitoring tab (as far as I can tell). I recently upgraded MySQL from v5 to v8. On v5, the process always took more than an hour—and sometimes didn’t finish at all. After upgrading, I refactored the SP to use CTEs, which greatly improved performance (down to 5 minutes). I also created a few indexes to help with that. I’ve tried rebuilding all indexes, tweaking the DB parameter group (based on some ChatGPT suggestions), and upgrading the RDS volume from GP2 to GP3, but none of that has helped. Everything still behaves the same. Tables involved: preventive_maintenance_building_entry. This table is filled by previous steps of an "orchestrator" SP, but those other steps are really fast.
SHOW CREATE TABLE preventive_maintenance_building_entry;
CREATE TABLE preventive_maintenance_building_entry (
  building_id int NOT NULL,
  maintenance_type varchar(255) NOT NULL,
  machine_id int DEFAULT NULL,
  maintenance_date datetime DEFAULT NULL,
  technician varchar(255) DEFAULT NULL,
  uses int DEFAULT NULL,
  created_at datetime NOT NULL,
  PRIMARY KEY (building_id,created_at,maintenance_type),
  KEY CREATED_AT (created_at),
  KEY CREATED_AT_MAINTENANCE_TYPE (created_at,maintenance_type),
  KEY BUILDING_ID_MAINTENANCE_TYPE (building_id,maintenance_type)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
machine_use
CREATE TABLE machine_use (
  id int NOT NULL AUTO_INCREMENT,
  headline varchar(255) DEFAULT NULL,
  timestamp datetime DEFAULT NULL,
  card_id int DEFAULT NULL,
  machine_id int DEFAULT NULL,
  uid varchar(255) DEFAULT NULL,
  energy_consumption double NOT NULL,
  result varchar(255) DEFAULT NULL,
  water_consumption double NOT NULL,
  bill_id int DEFAULT NULL,
  accredited bit(1) DEFAULT b'1',
  reason varchar(255) DEFAULT NULL,
  transaction_id int NOT NULL,
  audit_id int DEFAULT NULL,
  channel varchar(255) DEFAULT NULL,
  building_id int DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uk_machine_timestamp (machine_id,timestamp),
  KEY FK_c4b0xhdiy6ifa6dr0qhiybfyy (card_id),
  KEY FK_87unujtk3bdckfoj3ts7qxj0o (bill_id),
  KEY FK_pnc8o8pmdu5nhkuv6ex0c6j3u (audit_id),
  KEY RESULT (result),
  KEY TIMESTAMP (timestamp),
  KEY BUILDING_ID_MACHINE_ID (building_id,machine_id),
  KEY BUILDING_ID_TIMESTAMP (building_id,timestamp),
  CONSTRAINT FK_87unujtk3bdckfoj3ts7qxj0o FOREIGN KEY (bill_id) REFERENCES bill (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_c4b0xhdiy6ifa6dr0qhiybfyy FOREIGN KEY (card_id) REFERENCES part (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_lkllr5f16o42yu0xykdjocjcj FOREIGN KEY (building_id) REFERENCES building (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_mypy14i1gkixyeavmot7srv96 FOREIGN KEY (machine_id) REFERENCES part (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_pnc8o8pmdu5nhkuv6ex0c6j3u FOREIGN KEY (audit_id) REFERENCES audit (id) ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE=InnoDB AUTO_INCREMENT=22049277 DEFAULT CHARSET=utf8mb3
part
CREATE TABLE part (
  from_class varchar(50) NOT NULL,
  id int NOT NULL AUTO_INCREMENT,
  description varchar(255) DEFAULT NULL,
  model varchar(255) DEFAULT NULL,
  name varchar(255) DEFAULT NULL,
  serial_number varchar(255) DEFAULT NULL,
  state varchar(255) DEFAULT NULL,
  uuid varchar(255) DEFAULT NULL,
  english_description varchar(255) DEFAULT NULL,
  machine_type varchar(255) DEFAULT NULL,
  unit_price double DEFAULT NULL,
  uy_price double DEFAULT NULL,
  anual_consumption int DEFAULT NULL,
  minimum_stock int DEFAULT NULL,
  request_point int DEFAULT NULL,
  unit_id int DEFAULT NULL,
  building_id int DEFAULT NULL,
  current_uses int DEFAULT NULL,
  expected_uses int DEFAULT NULL,
  balance double DEFAULT NULL,
  contract_type varchar(255) DEFAULT NULL,
  master bit(1) DEFAULT NULL,
  last_alive datetime DEFAULT NULL,
  port varchar(255) DEFAULT NULL,
  private_ip varchar(255) DEFAULT NULL,
  public_ip varchar(255) DEFAULT NULL,
  upgrade_to varchar(255) DEFAULT NULL,
  firmware_id int DEFAULT NULL,
  average_use_time int DEFAULT NULL,
  sub_state varchar(255) DEFAULT NULL,
  pending_uses int DEFAULT NULL,
  prepaidcardholder_id int DEFAULT NULL,
  end_time_of_use datetime DEFAULT NULL,
  start_time_of_use datetime DEFAULT NULL,
  machinerate_id int DEFAULT NULL,
  alias varchar(255) DEFAULT NULL,
  activation_status varchar(255) DEFAULT NULL,
  discount double DEFAULT NULL,
  sort_index int DEFAULT NULL,
  is_topic_enable bit(1) DEFAULT NULL,
  capacity int DEFAULT NULL,
  reference varchar(255) DEFAULT NULL,
  quantity int DEFAULT NULL,
  machinemodel_id int DEFAULT NULL,
  rpichild_id int DEFAULT NULL,
  firmware_version varchar(255) DEFAULT NULL,
  pre_blocked_uses int DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY PART_CARD_UUID (uuid),
  KEY FK_68gfqjsfqvgxh7o10olfs4cin (unit_id),
  KEY FK_k6kwvobmnq67he07u9shakwmv (building_id),
  KEY FK_o6toyd4jag26vwtyayo7l2ng4 (firmware_id),
  KEY FK_fnbvj52u2i90s78wfqjgiip0x (prepaidcardholder_id),
  KEY FK_shyirawpsc2lwvrj0hyo1o5hc (machinerate_id),
  KEY MACHINE_KEEP_ALIVE (last_alive),
  KEY FK_qd2kalbf8g1ep7be556fb9bm9 (machinemodel_id),
  KEY FK_nccjoenep9lhexhsr34ccrle2 (rpichild_id),
  KEY MACHINE_TYPE (machine_type),
  KEY ID_MACHINE_TYPE (id,machine_type),
  KEY FROM_CLASS (from_class),
  CONSTRAINT FK_68gfqjsfqvgxh7o10olfs4cin FOREIGN KEY (unit_id) REFERENCES unit (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_fnbvj52u2i90s78wfqjgiip0x FOREIGN KEY (prepaidcardholder_id) REFERENCES user (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_k6kwvobmnq67he07u9shakwmv FOREIGN KEY (building_id) REFERENCES building (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_nccjoenep9lhexhsr34ccrle2 FOREIGN KEY (rpichild_id) REFERENCES part (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_o6toyd4jag26vwtyayo7l2ng4 FOREIGN KEY (firmware_id) REFERENCES firmware (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_qd2kalbf8g1ep7be556fb9bm9 FOREIGN KEY (machinemodel_id) REFERENCES machine_model (id) ON DELETE RESTRICT ON UPDATE RESTRICT,
  CONSTRAINT FK_shyirawpsc2lwvrj0hyo1o5hc FOREIGN KEY (machinerate_id) REFERENCES rate (id) ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE=InnoDB AUTO_INCREMENT=55621 DEFAULT CHARSET=utf8mb3
Query:
REPLACE INTO preventive_maintenance_building_entry(building_id,
                                                       maintenance_type,
                                                       machine_id,
                                                       maintenance_date,
                                                       technician,
                                                       uses,
                                                       created_at)
    WITH PreventiveEntriesMP1200 AS (SELECT e1.building_id,
                                            e1.maintenance_type,
                                            e1.maintenance_date,
                                            e1.technician,
                                            e1.created_at
                                     FROM preventive_maintenance_building_entry e1
                                     WHERE e1.created_at = CURDATE()
                                       AND e1.maintenance_type = 'MP1200'),
         MachineUsage AS (SELECT mu1.building_id, mu1.machine_id, COUNT(*) AS use_count
                          FROM machine_use mu1
                                   INNER JOIN PreventiveEntriesMP1200 pe
                                              ON pe.building_id = mu1.building_id
                                   INNER JOIN part p1
                                              ON mu1.machine_id = p1.id
                                                  AND p1.machine_type = 'DRYER'
                          WHERE mu1.result IN ('0', '1', '5', '6', '7', '8', '30')
                            AND mu1.timestamp > pe.maintenance_date
                          GROUP BY mu1.building_id, mu1.machine_id),
         MachineWithMostUses AS (SELECT building_id,
                                        machine_id,
                                        use_count,
                                        ROW_NUMBER() OVER (
                                            PARTITION BY building_id
                                            ORDER BY use_count DESC
                                            ) AS ranking
                                 FROM MachineUsage)
    SELECT pmbe.building_id      AS building_id,
           pmbe.maintenance_type AS maintenance_type,
           mwmu.machine_id       AS machine_id,
           pmbe.maintenance_date AS maintenance_date,
           pmbe.technician       AS technician,
           mwmu.use_count        AS uses,
           pmbe.created_at       AS created_at
    FROM PreventiveEntriesMP1200 pmbe
             INNER JOIN MachineWithMostUses mwmu
                        ON pmbe.building_id = mwmu.building_id
    WHERE mwmu.ranking = 1;
Execution plan:

| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|----|-------------|-------|------------|------|---------------|-----|---------|-----|------|----------|-------|
| 1 | REPLACE | preventive_maintenance_building_entry | NULL | ALL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
| 1 | PRIMARY | e1 | NULL | index_merge | PRIMARY,CREATED_AT,CREATED_AT_MAINTENANCE_TYPE,BUILDING_ID_MAINTENANCE_TYPE | CREATED_AT,CREATED_AT_MAINTENANCE_TYPE | 5,262 | NULL | 3 | 99.43 | Using intersect(CREATED_AT,CREATED_AT_MAINTENANCE_TYPE); Using where; Using temporary |
| 1 | PRIMARY | | NULL | ref | | | 13 | lavomat.e1.building_id,const | 10 | 100.00 | NULL |
| 3 | DERIVED | | NULL | ALL | NULL | NULL | NULL | NULL | 429 | 100.00 | Using filesort |
| 4 | DERIVED | e1 | NULL | index_merge | PRIMARY,CREATED_AT,CREATED_AT_MAINTENANCE_TYPE,BUILDING_ID_MAINTENANCE_TYPE | CREATED_AT,CREATED_AT_MAINTENANCE_TYPE | 5,262 | NULL | 3 | 99.43 | Using intersect(CREATED_AT,CREATED_AT_MAINTENANCE_TYPE); Using where; Using temporary |
| 4 | DERIVED | mu1 | NULL | ref | uk_machine_timestamp,RESULT,TIMESTAMP,BUILDING_ID_MACHINE_ID,BUILDING_ID_TIMESTAMP | BUILDING_ID_MACHINE_ID | 5 | lavomat.e1.building_id | 9598 | 29.81 | Using index condition; Using where |
| 4 | DERIVED | p1 | NULL | eq_ref | PRIMARY,MACHINE_TYPE,ID_MACHINE_TYPE | PRIMARY | 4 | lavomat.mu1.machine_id | 1 | 5.00 | Using where |

This is a little benchmark table that I record when SPs start/finish.
| Timestamp | Status | Process |
|-----------|--------|---------|
| 2025-04-09 05:01:31 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-09 05:07:09 | Finish | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-10 05:01:43 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-10 05:07:24 | Finish | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-11 07:58:07 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-12 05:55:34 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-13 05:01:47 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-13 05:04:46 | Finish | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-14 06:14:19 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-14 06:50:36 | Finish | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-15 06:14:29 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-15 06:50:43 | Finish | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-16 06:12:31 | Start | FillPreventiveMP1200MaintenanceBuildingEntry |
| 2025-04-16 06:48:06 | Finish | FillPreventiveMP1200MaintenanceBuildingEntry |

Days with no Finish record mean the run was killed.
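One avenue worth noting (a sketch, not verified against this workload): the plan's Using intersect(CREATED_AT,CREATED_AT_MAINTENANCE_TYPE) index merge is a known source of unstable run times; it can be switched off for the session running the SP, or the CTE's filter can be covered by a single composite index (the index name below is hypothetical):

```sql
-- Option 1: disable index-merge intersection for the session running the SP
SET SESSION optimizer_switch = 'index_merge_intersection=off';

-- Option 2: one composite index covering the WHERE clause of
-- PreventiveEntriesMP1200, so no merge is needed at all
ALTER TABLE preventive_maintenance_building_entry
  ADD INDEX CREATED_AT_TYPE_BUILDING (created_at, maintenance_type, building_id);
```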
chronotrigger (31 rep)
Apr 14, 2025, 02:33 PM • Last activity: Apr 20, 2025, 11:35 PM
Showing page 1 of 20 total questions