Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
2
answers
145
views
How to concatenate multiple JSON arrays into a single array in MariaDB?
I'm working with a MariaDB database where I have a table with two columns: `id` and `values`, where `values` contains JSON arrays. Here is a simplified example of the data:
| id | values |
|----|-------------|
| 1 | "[1, 2, 3]" |
| 1  | "[5]"       |
| 2  | "[4]"       |
I want to group the rows by `id` and concatenate the JSON arrays in the `values` column into a single array, like this:
| id | values |
|----|---------------|
| 1 | "[1, 2, 3, 5]"|
| 2  | "[4]"         |
I tried using `JSON_ARRAYAGG`, but it creates an array of arrays, and I can't figure out how to merge the arrays into one. How can I achieve this in MariaDB?
SELECT id, JSON_ARRAYAGG(values)
FROM REC
GROUP BY id
Bonus question: How to output only unique values?
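A minimal sketch of one direction this is sometimes taken, assuming MariaDB 10.6+ (which has `JSON_TABLE`) and the `REC` table and `values` column from the query above: unnest every array into one row per element, then aggregate the elements back into a single array per `id`.
```
-- Sketch only: assumes MariaDB 10.6+ so that JSON_TABLE is available.
SELECT r.id,
       JSON_ARRAYAGG(j.v) AS merged_values
FROM REC r,
     JSON_TABLE(r.`values`, '$[*]' COLUMNS (v INT PATH '$')) AS j
GROUP BY r.id;
-- For the bonus question, JSON_ARRAYAGG(DISTINCT j.v) drops duplicates on
-- versions whose JSON_ARRAYAGG accepts DISTINCT.
```
The `INT` path type matches the integer elements in the example data; it would need widening if the arrays hold other types.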
Emax
(111 rep)
Dec 16, 2024, 01:48 PM
• Last activity: Aug 5, 2025, 01:04 PM
21
votes
4
answers
5279
views
SQL Server cardinality hint
Is there a way to 'inject' a cardinality estimate into the SQL Server optimizer (any version)? i.e., something similar to Oracle's cardinality hint.
My motivation is driven by the article How Good Are Query Optimizers, Really? [1], where they test the influence of the cardinality estimator on the selection of a bad plan. Therefore, it would be sufficient if I could force SQL Server to 'estimate' the cardinalities precisely for complex queries.
---
[1] Leis, Viktor, et al. "How good are query optimizers, really?"
Proceedings of the VLDB Endowment 9.3 (2015): 204-215.
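SQL Server has no direct counterpart to Oracle's CARDINALITY hint, but one pattern occasionally used to steer an estimate is wrapping a branch in `TOP (@n)` combined with `OPTIMIZE FOR`, so the optimizer plans as if `@n` rows flow out of that branch while the runtime value imposes no real limit. A sketch with hypothetical table names:
```
-- Hypothetical tables; the point is only the TOP(@rows) + OPTIMIZE FOR pattern.
DECLARE @rows bigint = 9223372036854775807;  -- effectively unlimited at run time

SELECT o.OrderID, c.CustomerName
FROM (
    SELECT TOP (@rows) *
    FROM dbo.Orders
    WHERE OrderDate >= '20170101'
) AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
OPTION (OPTIMIZE FOR (@rows = 1000));        -- planned as if ~1000 rows
```
This is not a true cardinality hint; it only nudges the estimate at the point where the TOP sits.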
Radim Bača
(233 rep)
Mar 31, 2017, 07:17 AM
• Last activity: Aug 5, 2025, 12:16 PM
1
votes
1
answers
1537
views
MySQL Workbench Administrator - GUI "Access denied for user root " gotcha
Here is a small newbie problem I found and resolved. The behavior of the GUI is a little confusing, so I figured I would share.
Running some Java examples, I have MySQL set up on Windows. After making a little progress I ran into a problem with a sample database JDBC connection. I wanted to look at that database connection's admin user info in the Workbench to see what was going on.
So I clicked "Users and Privileges" and the GUI popped up a message:
"Access denied for user 'root'@'localhost' (using password: YES)" - what? I know the root password; I set it myself. There was no opportunity to enter it in the GUI just now, only a rude popup.
So I went hunting for some tip on how to update a config file for the GUI to read the root password.
charles ross
(163 rep)
May 29, 2018, 03:38 PM
• Last activity: Aug 5, 2025, 12:04 PM
0
votes
1
answers
1542
views
MySQL FULLTEXT Match exponentially slower with search phrase
I'm trying to track down some performance issues in my FULLTEXT MATCH() AGAINST() query, and it seems to be related (at least in part) to queries requiring an exact phrase.
My table stores system process information, has roughly 2M rows, a `keywords` TEXT column, and a FULLTEXT index on said column. The keywords column averages ~600 characters, with several "words" in that column containing one or many special characters.
The query I'm trying to run is an exact match on "net.exe":
select id from processes where match(keywords) against('"net.exe"' in boolean mode);
From the MySQL documentation on full-text phrase searches:
> A phrase that is enclosed within double quote (") characters matches
> only rows that contain the phrase literally, as it was typed. The
> full-text engine splits the phrase into words and performs a search in
> the FULLTEXT index for the words. Nonword characters need not be
> matched exactly: Phrase searching requires only that matches contain
> exactly the same words as the phrase and in the same order. For
> example, "test phrase" matches "test, phrase".
...and indeed, running queries with `... AGAINST('"net exe"' ...)` take just as long. So it seems to just be searching for exact phrases in general.
My latest theory is that because my table has process info (e.g. system paths and cmdline arguments, which have many special characters), the normal FULLTEXT index isn't useful for my query and MySQL is effectively re-indexing the whole table when I search for phrases like "net.exe". Some supporting evidence for this is that the original creation of the FULLTEXT index took roughly 30 minutes. However, I find it hard to believe that would be the full explanation. Regardless, I ran `explain` on my query (which itself took 30 minutes to resolve), and got the following:
```
mysql> explain select id from processes where match(keywords) against('"net.exe"' in boolean mode);
```
The relevant part of the output is `Ft_hints: sorted`, which seems to only be due to the lack of quotes. If I run `explain` when querying for `"net"` it goes back to `Ft_hints: no_ranking`.
Lastly, I tried running `CHECK TABLE` and even making a fresh temp table with only the id and keywords columns, but the above numbers were consistent, so I don't feel this is specific to my table's condition.
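One workaround sometimes tried in this situation (sketched against the same `processes`/`keywords` names, untested here) is to let the FULLTEXT index narrow the candidate set on the individual words in boolean mode and verify the exact phrase with `LIKE` afterwards, since the phrase check is what appears to be expensive:
```
-- The index does the word lookups; LIKE only runs on the reduced candidate set.
SELECT id
FROM processes
WHERE MATCH(keywords) AGAINST('+net +exe' IN BOOLEAN MODE)
  AND keywords LIKE '%net.exe%';
```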
Eric
(103 rep)
Oct 4, 2022, 11:51 PM
• Last activity: Aug 5, 2025, 11:03 AM
0
votes
1
answers
560
views
MySQL query - Passing variable into subquery's derived table
Is there a way to pass a variable into a subquery, and use that variable in a derived table in the subquery?
I am trying to identify "active" organizations, and there are multiple actions that constitute "activity". One is creating an invoice, represented here (others not shown here for the sake of readability). However, because there are more types of actions, I cannot do the simpler approach of just querying the `Invoice` table, like in the `thisWorks` column.
Therefore, I am trying to create a derived table, where multiple sources can be queried, but also filtering on the `@start` and `@end` timestamps, as seen in the `thisDoesntWork` column.
Here is a simplified version of what the query looks like:
select
@row:=@row+1 as row,
@sixPrior:=date_sub(date_sub(date_sub(curdate(), interval dayofweek(curdate())-2 day), interval @row-1 week), interval 6 month) as sixMonthsPrior,
@start:=date_sub(date_sub(curdate(), interval dayofweek(curdate())-2 day), interval @row-1 week) as start,
@end:=date_sub(date_add(curdate(), interval dayofweek(curdate())+5 day), interval @row-1 week) as end,
@sixPrior as sixCheck,
@start as startCheck,
@end as endCheck,
( select count(distinct(organizationId))
from Invoice
where createdTime between @start and @end
) as thisWorks,
(
select count(distinct(organizationId))
from (
select organizationId
from Invoice
where createdTime between @start and @end
) d
) as thisDoesntWork
from Organization o
join (select @row:=0) r
limit 10;
In the example above, `thisDoesntWork` has the same value for every row (the first value). I assume it has something to do with `@start` and `@end` never getting updated values.
Is there any way to filter a derived table, with values that change each row, from an outer table?
I'm also open to alternative approaches, if derived table restrictions will never allow it to work.
I'm also using MySQL 5.7, so `LATERAL JOIN` is not a possibility.
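One direction that can work on 5.7 without LATERAL, sketched below with the question's table names (aliases such as `wk_start` are invented here): materialize the ten week windows in a single derived table first, then join `Invoice` to those windows, so no user variable has to be referenced inside another derived table.
```
SELECT w.wk_num,
       w.wk_start,
       w.wk_end,
       COUNT(DISTINCT i.organizationId) AS activeByInvoice
FROM (
    SELECT @row := @row + 1 AS wk_num,
           DATE_SUB(DATE_SUB(CURDATE(), INTERVAL DAYOFWEEK(CURDATE()) - 2 DAY),
                    INTERVAL @row - 1 WEEK) AS wk_start,
           DATE_SUB(DATE_ADD(CURDATE(), INTERVAL DAYOFWEEK(CURDATE()) + 5 DAY),
                    INTERVAL @row - 1 WEEK) AS wk_end
    FROM Organization, (SELECT @row := 0) r
    LIMIT 10
) AS w
LEFT JOIN Invoice i
       ON i.createdTime BETWEEN w.wk_start AND w.wk_end
GROUP BY w.wk_num, w.wk_start, w.wk_end;
```
Other activity sources could be added as further LEFT JOINs against the same `w` windows.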
Jared Anderton
(101 rep)
Dec 1, 2020, 06:05 PM
• Last activity: Aug 5, 2025, 10:03 AM
2
votes
2
answers
1941
views
MariaDB 11.2 using too much RAM
I'm running MariaDB 11.2 in Docker Swarm on Ubuntu 20.04 and its memory usage keeps growing until the database is killed by the OOM killer.
According to MySQLTuner, max memory should be around 7.1GB, but after 1-2 days memory usage goes up to 30GB. So when a new task is started on the server and there is not enough memory, the OOM killer selects mariadbd and kills it.
Server:
- Ubuntu 20.04.1 (5.4.0-169-generic Kernel)
- 64 GB RAM
- Docker 24.0.6
- MariaDB 11.2.3 (in container)
- Also Apache 2 (in container)
Here's `docker-compose.yaml`:
version: "3.8"
services:
  database:
    image: mariadb:11.2
    deploy:
      restart_policy:
        condition: any
    networks:
      - database
    ports:
      - "3306:3306"
    ulimits:
      memlock: 2097152 # 2MB
    volumes:
      - "db_data:/var/lib/mysql"
      - "db_log:/var/log/mysql"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./my.cnf:/etc/mysql/my.cnf"
networks:
  database:
    driver: overlay
    attachable: true
    name: database
...
Here's `my.cnf`:
[server]
table_definition_cache = 2048
innodb_buffer_pool_size = 6G
innodb_log_file_size = 1536M
innodb_log_buffer_size = 256M
log_error=/var/log/mysql/error.log
[client-server]
socket = /run/mysqld/mysqld.sock
!includedir /etc/mysql/mariadb.conf.d/
!includedir /etc/mysql/conf.d/
*Includes are from my.cnf template.*
Content of `/etc/mysql/mariadb.conf.d/`:
# 05-skipcache.cnf:
[mariadb]
host-cache-size=0
skip-name-resolve
# 50-server.cnf:
[mariadbd]
pid-file = /run/mysqld/mysqld.pid
basedir = /usr
expire_logs_days = 10
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
Directory `/etc/mysql/conf.d/` is empty.
## What I tried
### Update database
I had the same problem with version 10.6, so I tried upgrading to 11.2, but that didn't fix the problem.
### Performance schema
I enabled the performance schema with memory instruments.
...
performance_schema = ON
performance-schema-instrument='memory/%=COUNTED'
...
The sum from `sys.x$memory_global_by_current_bytes` is around 7.1GB, but the memory used by the mariadbd process is much higher.
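For reference, that check spelled out as a query (standard sys schema view, so it only covers memory that performance_schema instruments):
```
SELECT ROUND(SUM(current_alloc) / 1024 / 1024 / 1024, 2) AS instrumented_gib
FROM sys.x$memory_global_by_current_bytes;
```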
### Increase memlock in container to 2MB
I found a warning in `mysql/error.log`:
2024-02-11 21:09:42 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOMEM: try larger memory locked limit, ulimit -l, or https://mariadb.com/kb/en/systemd/#configuring-limitmemlock under systemd (262144 bytes required)
2024-02-11 21:09:42 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
So I increased the memory lock limit to 2MB. The warning is gone, but the memory problem persists.
---
## Update 1
I had to reduce innodb_buffer_pool_size to 1GB to reduce the number of crashes. I also adjusted innodb_log_file_size to 256MB.
MySQLTuner output when MariaDB takes up 20GB of RAM (total system usage 48GB, 70%):
...
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 21h 59m 30s (142M q [564.348 qps], 336K conn, TX: 531G, RX: 107G)
[--] Reads / Writes: 80% / 20%
[--] Binary logging is disabled
[--] Physical Memory : 62.8G
[--] Max MySQL memory : 2.0G
[--] Other process memory: 0B
[--] Total buffers: 1.5G global + 2.9M per thread (151 max threads)
[--] Performance_schema Max memory usage: 0B
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 1.6G (2.59% of installed RAM)
[OK] Maximum possible memory usage: 2.0G (3.11% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (612/142M)
[OK] Highest usage of available connections: 25% (38/151)
[OK] Aborted connections: 0.00% (0/336034)
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (113 temp sorts / 1M sorts)
[!!] Joins performed without indexes: 1341
[OK] Temporary tables created on disk: 4% (16K on disk / 358K total)
[OK] Thread cache hit rate: 99% (38 created / 336K connections)
[OK] Table cache hit rate: 99% (143M hits / 143M requests)
[OK] table_definition_cache (2048) is greater than number of tables (441)
[OK] Open file limit used: 0% (32/32K)
[OK] Table locks acquired immediately: 100% (468 immediate / 468 locks)
...
-------- MyISAM Metrics ----------------------------------------------------------------------------
[--] General MyIsam metrics:
[--] +-- Total MyISAM Tables : 0
[--] +-- Total MyISAM indexes : 0B
[--] +-- KB Size :128.0M
[--] +-- KB Used Size :23.3M
[--] +-- KB used :18.2%
[--] +-- Read KB hit rate: 0% (0 cached / 0 reads)
[--] +-- Write KB hit rate: 0% (0 cached / 0 writes)
[--] No MyISAM table(s) detected ....
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[OK] InnoDB File per table is activated
[OK] InnoDB Buffer Pool size ( 1.0G ) under limit for 64 bits architecture: (17179869184.0G )
[!!] InnoDB buffer pool / data size: 1.0G / 34.4G
[OK] Ratio InnoDB log file size / InnoDB Buffer pool size: 256.0M * 1/1.0G should be equal to 25%
[--] Number of InnoDB Buffer Pool Chunk: 64 for 1 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 99.34% (42353558185 hits / 42633926388 total)
[OK] InnoDB Write Log efficiency: 91.45% (130133891 hits / 142298603 total)
[OK] InnoDB log waits: 0.00% (0 waits / 12164712 writes)
-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/2.6M
[OK] Aria pagecache hit rate: 99.9% (507M cached / 385K reads)
...
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
You are using an unsupported version for production environments
Upgrade as soon as possible to a supported version !
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_join_buffer_size
Performance schema should be activated for better diagnostics
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
join_buffer_size (> 256.0K, or always use indexes with JOINs)
performance_schema=ON
innodb_buffer_pool_size (>= 34.4G) if possible.
---
## Update 2
I found a warning in the log file:
[Warn] [Entrypoint]: /sys/fs/cgroup/pids:/docker/
11:net_cls,net_prio:/docker/
10:cpuset:/docker/
9:memory:/docker/
8:cpu,cpuacct:/docker/
7:hugetlb:/docker/
6:freezer:/docker/
5:devices:/docker/
4:blkio:/docker/
3:rdma:/docker/
2:perf_event:/docker/
1:name=systemd:/docker/
0::/docker//memory.pressure not writable, functionality unavailable to MariaDB
Jakub Marek
(21 rep)
Feb 16, 2024, 12:16 PM
• Last activity: Aug 5, 2025, 09:08 AM
2
votes
1
answers
69
views
Slow queries on JSON_TABLE based view on mariaDB
We are running an instance of Apache Syncope with around 130k users (ca. 33k with `active` or `pending` state). When searching for users by attributes we see long query times, often over 50 seconds. One example query might be:
select * from user_search_attr
where schema_id = 'familyName'
and stringvalue = 'Bergmann'
`user_search_attr` is a view defined as follows:
CREATE OR REPLACE
ALGORITHM = UNDEFINED VIEW user_search_attr AS
select
  u.id AS any_id,
  attrs.schema_id AS schema_id,
  attrs.booleanvalue AS booleanvalue,
  attrs.datevalue AS datevalue,
  attrs.doublevalue AS doublevalue,
  attrs.longvalue AS longvalue,
  attrs.stringvalue AS stringvalue
from
  (SyncopeUser u
   left join JSON_TABLE(coalesce(u.plainAttrs, '[]'), '$[*]'
     COLUMNS (schema_id varchar(255) PATH '$.schema',
              NESTED PATH '$.values[*]'
                COLUMNS (booleanvalue int(11) PATH '$.booleanValue',
                         datevalue varchar(32) PATH '$.dateValue',
                         doublevalue double PATH '$.doubleValue',
                         longvalue bigint(20) PATH '$.longValue',
                         stringvalue varchar(255) PATH '$.stringValue'))) attrs
   on (1 = 1))
where
  attrs.schema_id is not null
  and (attrs.booleanvalue is not null
       or attrs.datevalue is not null
       or attrs.doublevalue is not null
       or attrs.longvalue is not null
       or attrs.stringvalue is not null);
As you can see, the data comes from the table `SyncopeUser`, which is defined as follows:
CREATE TABLE SyncopeUser (
  id varchar(36) NOT NULL,
  creationContext varchar(255) DEFAULT NULL,
  creationDate datetime(3) DEFAULT NULL,
  creator varchar(255) DEFAULT NULL,
  lastChangeContext varchar(255) DEFAULT NULL,
  lastChangeDate datetime(3) DEFAULT NULL,
  lastModifier varchar(255) DEFAULT NULL,
  status varchar(255) DEFAULT NULL,
  changePwdDate datetime(3) DEFAULT NULL,
  cipherAlgorithm varchar(20) DEFAULT NULL,
  failedLogins int(11) DEFAULT NULL,
  lastLoginDate datetime(3) DEFAULT NULL,
  mustChangePassword int(11) DEFAULT NULL,
  password varchar(255) DEFAULT NULL,
  passwordHistory text DEFAULT NULL,
  securityAnswer varchar(255) DEFAULT NULL,
  suspended int(11) DEFAULT NULL,
  token text DEFAULT NULL,
  tokenExpireTime datetime(3) DEFAULT NULL,
  username varchar(255) DEFAULT NULL,
  plainAttrs longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL CHECK (json_valid(plainAttrs)),
  REALM_ID varchar(36) DEFAULT NULL,
  SECURITYQUESTION_ID varchar(36) DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY U_SYNCPSR_USERNAME (username),
  UNIQUE KEY SyncopeUser_username (username),
  KEY SECURITYQUESTION_ID (SECURITYQUESTION_ID),
  KEY SyncopeUser_realm_id (REALM_ID),
  CONSTRAINT SyncopeUser_ibfk_1 FOREIGN KEY (REALM_ID) REFERENCES Realm (id),
  CONSTRAINT SyncopeUser_ibfk_2 FOREIGN KEY (SECURITYQUESTION_ID) REFERENCES SecurityQuestion (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3 COLLATE=utf8mb3_general_ci;
The relevant column `plainAttrs` contains the user's attributes as a JSON string. One example might be `[{"uniqueValue":{"stringValue":"123456789"},"schema":"lockSystemId"},{"values":[{"stringValue":"Bergmann"}],"schema":"familyName"}]`.
The following indexes are defined on `SyncopeUser`:
Table |Non_unique|Key_name |Seq_in_index|Column_name |Collation|Cardinality|Sub_part|Packed|Null|Index_type|Comment|Index_comment|Ignored|
-----------+----------+--------------------+------------+-------------------+---------+-----------+--------+------+----+----------+-------+-------------+-------+
SyncopeUser| 0|PRIMARY | 1|id |A | 149635| | | |BTREE | | |NO |
SyncopeUser| 0|U_SYNCPSR_USERNAME | 1|username |A | 149635| | |YES |BTREE | | |NO |
SyncopeUser| 0|SyncopeUser_username| 1|username |A | 149635| | |YES |BTREE | | |NO |
SyncopeUser| 1|SECURITYQUESTION_ID | 1|SECURITYQUESTION_ID|A | 1| | |YES |BTREE | | |NO |
SyncopeUser| 1|SyncopeUser_realm_id| 1|REALM_ID |A | 1| | |YES |BTREE | | |NO |
As most normal users have over 15 attributes, the `user_search_attr` view contains over 2M rows.
The servers are VMs, currently with 8 GB of RAM and 6 CPUs, but this can be changed. We are currently running MariaDB 11.4.7 in a Galera cluster of 3 servers. Syncope is only connected to one of the servers, as this cluster is also used for other applications and we don't want to influence those applications.
Here are some of the IMHO relevant configuration options:
[mysqld]
aria_pagecache_buffer_size = 32M
basedir = /usr
bind-address = ::
binlog_format = ROW
character_set_server = utf8
collation_server = utf8_general_ci
innodb_adaptive_hash_index = OFF
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 4096M
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size = 16MB
innodb_log_file_size = 512M
key_buffer_size = 1M
log-error = /var/log/mariadb/mariadb.log
log_queries_not_using_indexes = OFF
long_query_time = 4.0
max_allowed_packet = 128M
max_binlog_size = 100M
max_connections = 400
max_heap_table_size = 256M
performance_schema = ON
query_cache_limit = 16M
query_cache_size = 0
query_cache_type = OFF
skip-external-locking
skip-name-resolve = 1
slow_query_log = 0
slow_query_log_file = /var/log/mariadb/slow.log
sort_buffer_size = 4M
table_definition_cache = 800
table_open_cache = 1000
thread_cache_size = 8
thread_stack = 256K
tmp_table_size = 256M
wsrep_on = ON
wsrep_provider = /usr/lib64/galera-4/libgalera_smm.so
As most queries only search for `active` or `pending` users, one option that I have not tried is indexing or partitioning the table on the `status` column.
Currently we are not in a position to change the design of the database or switch to another IDM solution. We hope to find optimization potential on the DB layer, or maybe improvements to some queries.
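A rough sketch of that untried idea (an index on `status` plus querying the base table directly, so only the ~33k active/pending rows go through `JSON_TABLE`); whether it actually helps depends on how selective `status` turns out to be:
```
ALTER TABLE SyncopeUser ADD INDEX SyncopeUser_status (status);

SELECT u.id AS any_id, attrs.stringvalue
FROM SyncopeUser u
JOIN JSON_TABLE(COALESCE(u.plainAttrs, '[]'), '$[*]'
       COLUMNS (schema_id varchar(255) PATH '$.schema',
                NESTED PATH '$.values[*]'
                  COLUMNS (stringvalue varchar(255) PATH '$.stringValue'))
     ) AS attrs ON (1 = 1)
WHERE u.status IN ('active', 'pending')
  AND attrs.schema_id = 'familyName'
  AND attrs.stringvalue = 'Bergmann';
```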
Clemens Bergmann
(121 rep)
Aug 3, 2025, 06:44 AM
• Last activity: Aug 5, 2025, 08:07 AM
2
votes
1
answers
1455
views
How to model a database for prices that vary depending on their categories and parameters?
My first post on DBA! Apologies in advance if I made any mistake.
Before jumping into the schema and tables, I would like to share what I am trying to achieve first. I am working on sort of a courier application, where I have some `categories` and each category has a pre-defined `price`. But determining the price is a bit ugly (absence of symmetry and patterns; at least, I can't seem to find any). I will give you an example:
Consider the following categories: Document, Heavy Document, Laptop, Carton, Heavy Carton.
1) **Document:** It's for the lighter documents, which are below 0.5kg. The price is 20$, fixed.
*[price stored in the prices table: 20.00]*
> e.g. For an item of 300g, the price will be 20$.
2) **Heavy Document:** This is for the documents that are over 0.5kg. Unlike the Document category, it doesn't have a fixed price! Rather, it has a unit price: 10$ per kg, which will be applied to each kg exceeding 0.5kg.
*[price stored in the prices table: 10.00]*
> e.g. For an item of 2kg, the price will be 35$ (1.5kg over the limit at 10$/kg = 15$, plus the 20$ base for the first 0.5kg)
3) **Laptop:** Straightforward, 100$. Nothing special about it, no constraint whatsoever.
*[price stored in the prices table: 100.00]*
> e.g. For an item of 2kg, the price will be 100$, the flat category price.
4) **Carton:** Here comes an interesting one. Until now, there was only one dependency: weight. But this one has an additional dependency: dimension. This is somewhat similar to the Document category. For the cartons that are below 3 Cubic Feet (CF), the price is 80$ per CF. The difference between the Document and Carton categories is that Document has a fixed price, whereas Carton has a unit price. But wait, there's more. There is an additional constraint: the dimension-weight ratio. In this case, it is 7kg per CF. And if the item's weight crosses the ratio, each extra kg will be charged 5$. It's confusing, I know. An example might help:
[price stored in the prices table: 80.00]
> e.g. For a carton of 80kg and 2CF; the price will be 490$. Here is how:
First calculate the regular charge: 80$*2CF = 160$
Now let's figure out if it crosses **Ratio**: Since, 1 CF = 7kg, hence, 2CF = 14kg. But the item's weight is 80kg, so it *crosses the ratio (14kg)*
Since it crosses the ratio, for all the extra kgs (80-14 = 66kg), each kg will cost 5$: 66*5 = 330$. After adding it with regular charge: 330$+160$ = 490$.
5) **Heavy Carton:** This one is for the cartons having the dimension bigger than 3CF. The difference with Carton is the unit price. Heavy Carton is 60$ per CF.
[price stored in the prices table: 60.00]
> e.g. For a carton of 80kg and 5CF; the price will be 525$. Here is how:
First calculate the regular charge: 60$*5CF = 300$
Now let's figure out if it crosses **Ratio**: Since, 1 CF = 7kg, hence, 5CF = 35kg. But the item's weight is 80kg, so it *crosses the ratio (35kg)*
Since it crosses the ratio, for all the extra kgs (80-35 = 45kg), each kg will cost 5$: 45*5 = 225$. After adding it to the regular charge: 300$+225$ = 525$.
If you've read this far, I think I have convinced you that the business structure is really complicated. Now let's take a look at my `categories` schema:
+-------------------------+---------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------------+---------------------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(191) | NO | | NULL | |
| created_at | timestamp | YES | | NULL | |
| updated_at | timestamp | YES | | NULL | |
| dim_dependency | tinyint(1) | NO | | NULL | |
| weight_dependency | tinyint(1) | NO | | NULL | |
| distance_dependency | tinyint(1) | NO | | NULL | |
| dim_weight_ratio | varchar(191) | YES | | NULL | |
| constraint_value | decimal(8,2) | YES | | NULL | |
| constraint_on | enum('weight','dim') | YES | | NULL | |
| size | enum('short','regular','large') | YES | | regular | |
| over_ratio_price_per_kg | decimal(8,2) | YES | | NULL | |
| deleted_at | timestamp | YES | | NULL | |
+-------------------------+---------------------------------+------+-----+---------+----------------+
Also, the schema of the `prices` table (it's a polymorphic table; I'm hoping to create a `subcategories` table someday):
+----------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+---------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| amount | decimal(8,2) | NO | | NULL | |
| created_at | timestamp | YES | | NULL | |
| updated_at | timestamp | YES | | NULL | |
| priceable_type | varchar(191) | NO | MUL | NULL | |
| priceable_id | bigint(20) unsigned | NO | | NULL | |
| deleted_at | timestamp | YES | | NULL | |
+----------------+---------------------+------+-----+---------+----------------+
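To make the pricing rules concrete against these columns, here is the Carton example from above (80 kg, 2 CF) as a single expression; the shipment values are parameters here and the 80.00 unit price is the amount stored in `prices` for Carton:
```
SET @weight_kg = 80, @dim_cf = 2, @unit_price = 80.00;

SELECT @unit_price * @dim_cf                                   -- 160$ regular charge
       + GREATEST(@weight_kg - c.dim_weight_ratio * @dim_cf, 0)
         * c.over_ratio_price_per_kg AS total_price            -- + 66 kg * 5$ = 330$
FROM categories c
WHERE c.name = 'Carton';                                       -- total: 490$
```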
How can I improve this structure to keep things as dynamic and coherent as possible?
Eisenheim
(121 rep)
Aug 11, 2018, 09:37 AM
• Last activity: Aug 5, 2025, 08:06 AM
4
votes
1
answers
1048
views
Different collation between master database and application database
I just installed SQL Server 2019. Unfortunately I didn't notice the collation while installing. Following are the collations:
SELECT name, collation_name
FROM sys.databases
| name | collation_name |
|--------------------|--------------------------------|
| master | Latin1_General_CI_AS |
| tempdb | Latin1_General_CI_AS |
| model | Latin1_General_CI_AS |
| msdb | Latin1_General_CI_AS |
| ReportServer | Latin1_General_100_CI_AS_KS_WS |
| ReportServerTempDB | Latin1_General_100_CI_AS_KS_WS |
| ApplicationDB | SQL_Latin1_General_CP1_CI_AS |
use ApplicationDB
GO
SELECT SERVERPROPERTY(N'Collation')
> SQL_Latin1_General_CP1_CI_AS
The application recommends that the database be SQL_Latin1_General_CP1_CI_AS, but they're unsure about the system databases, as they're not database experts. Will there be any consequences? We would be running SSRS reports to pull data from the application database. Will this be an issue? I'm confused.
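The usual place a master/tempdb vs. user-database mismatch surfaces is in comparisons between #temp-table columns (which inherit tempdb's collation) and ApplicationDB columns; a sketch with made-up table names:
```
CREATE TABLE #names (name varchar(50));    -- picks up tempdb's Latin1_General_CI_AS

SELECT c.CustomerID
FROM ApplicationDB.dbo.Customers AS c      -- SQL_Latin1_General_CP1_CI_AS column
JOIN #names AS n
  ON c.CustomerName = n.name COLLATE SQL_Latin1_General_CP1_CI_AS;
-- Without the COLLATE clause this raises "Cannot resolve the collation conflict".
```
Declaring temp-table columns with COLLATE DATABASE_DEFAULT is another common way to sidestep it.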
Master_Roshy
May 12, 2022, 05:46 PM
• Last activity: Aug 5, 2025, 07:09 AM
2
votes
1
answers
667
views
Split SSIS project in to multiple files to avoid merge hell
Since SSIS packages do not merge nicely, I was wondering how to split up a big SSIS package, with the idea of having multiple devs working on it simultaneously and a minimal chance of merge conflicts.
In SSIS 2016 I found these options:
- ***package parts***, but apparently they don't share connection managers. I don't want to have 100 different connection managers.
- ***subpackages***, but this doesn't look very clean and I also wonder whether that is what they are intended for. Also, the debugger goes crazy opening the subpackages while running. Any other drawbacks I should know of?
I can't be the only person with this problem. Is there another way to achieve this?
Sam Segers
(129 rep)
Jan 19, 2017, 10:47 AM
• Last activity: Aug 5, 2025, 06:05 AM
0
votes
0
answers
26
views
How to update existing data in Master Data Services SQL Server 2022?
I am learning to use Master Data Services for the first time and am currently stuck on **updating** existing data. This is also my first time using SSIS, so I am currently learning how to use the SQL Command to update data.
**Overview data load workflow**
1. Data is being stored into a staging table (DQS_STAGING_DATA)
2. When load successful, data then will be loaded from DQS_STAGING_DATA into each staging table in MDS with import type 0 (Ex: stg.Person).
**My current SSIS workflow**
[Loading data into MDS stg.Person and stg.Company](https://i.sstatic.net/LBhe3Ldr.png)
**What I have tried**
Changed the import type from 1 to 0.
> 1: Create new members only. Any updates to existing MDS data fail.
> 0: Create new members. Replace existing MDS data with staged data, but only if the staged data is not NULL.
How do I update data inside of the stg.Person and stg.Company using my current SSIS workflow and ensure that Master Data Excel Add-ins will reflect the new data? Both of these staging tables also have their own subscription view.
**My expectation**
1. A simple to follow step by step and beginner explanation to update existing data in Master Data Services.
2. Comment and feedback on my current SSIS pipeline.
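On expectation 1, a hedged sketch of the usual staging flow (assuming the leaf staging table really is `stg.Person` with the standard MDS staging columns, that `DQS_STAGING_DATA` exposes `Code`/`Name`, and that the batch-start procedure follows the default `stg.udp_<name>_Leaf` naming; adjust the names to your model):
```
INSERT INTO stg.Person (ImportType, ImportStatus_ID, BatchTag, Code, Name)
SELECT 0,                      -- 0 = create new members / replace non-NULL values
       0,                      -- 0 = ready to be processed
       'Person_20250805',      -- arbitrary batch tag
       s.Code, s.Name
FROM dbo.DQS_STAGING_DATA AS s;

-- Start the staging batch so MDS actually applies the rows to the model.
EXEC stg.udp_Person_Leaf @VersionName = N'VERSION_1',
                         @LogFlag     = 1,
                         @BatchTag    = N'Person_20250805';
```
After the batch completes, refreshing the entity in the Excel Add-in (or re-querying the subscription view) should show the replaced values.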
Amir Hamzah
(11 rep)
Aug 5, 2025, 05:18 AM
0
votes
1
answers
728
views
Restore backup global using pgadmin
I created a backup of the globals by right-clicking Postgres 15 in pgAdmin. It generates a `.sql` file. How do I restore the globals backup in pgAdmin? I tried searching on Google but didn't find the answer.
LLL
(101 rep)
Jun 21, 2023, 10:35 AM
• Last activity: Aug 5, 2025, 05:01 AM
10
votes
2
answers
1863
views
Scalar function used in calculated column, what's the cleanest way to update the function?
I want to update a scalar function in our database. This function is used for several calculated columns in a key table so I get a dependency error if I try to update it. I can explicitly remove the columns, update the function and re-add the columns but, without a lot of extra fannying about, that will change column orders and might have other unintended consequences. I wondered if there was a cleaner way of doing it?
robertc
(317 rep)
Mar 22, 2012, 11:31 AM
• Last activity: Aug 5, 2025, 04:20 AM
0
votes
2
answers
4242
views
Access denied when connecting to mongosqld with MySQL
I'm trying to connect to my MongoDB using MySQL through mongosqld. I have mongosqld running with a config file that looks like:
security:
  enabled: true
mongodb:
  net:
    uri: test-db
    auth:
      username: usertest
      password: pass
      source: admin
schema:
  path: schema.drdl
I have it hosted on a [mongo ODBC manager](https://github.com/mongodb/mongo-odbc-driver/releases/) with `SERVER=127.0.0.1`, `PORT=3071`, `DATABASE=test_db`, `UID=usertest?source=admin`, `PWD=pass`. I am able to connect to and query this MongoDB through Excel using [Mongo's tutorial for that](https://docs.mongodb.com/bi-connector/current/connect/excel/), but I am not able to do the same with MySQL using [Mongo's tutorial](https://docs.mongodb.com/bi-connector/current/connect/mysql/). When I try to connect from the terminal with `mysql 'user=usertest?source=admin' --default-auth=mongosql_auth -p` I get `ERROR 1045 (28000): Access denied for user 'usertest'` and `handshake error: unable to saslStart conversation 0: Authentication failed.` from the mongosqld side. I am doing this on macOS. What could be causing this problem only when trying to connect from MySQL?
TheStrangeQuark
(101 rep)
Oct 8, 2018, 04:09 PM
• Last activity: Aug 5, 2025, 04:04 AM
0
votes
1
answers
1425
views
How to update mysql 5.7 to 8.0 Amazon linux 1, ec2
I'm using MySQL 8.0.14 for my project and I want to use it for my AWS ec2 server.
I updated from 5.5 to 5.7 by following this link: https://stackoverflow.com/questions/37027744/upgrade-mysql-to-5-6-on-ec2
But I can't find any information on updating to MySQL 8.0.
An Dương
Jul 4, 2020, 10:52 AM
• Last activity: Aug 5, 2025, 03:08 AM
1
votes
1
answers
819
views
MySQLdump leads to exceeding max connections. Skip-Quick as a solution?
Every night I do a full mysqldump of a 17G DB in MySQL 5.7.32. This worked for years; now I am doing some heavy insert load during the night, which caused, at exactly the backup time, the connections to rise to max_connections, which led to connection errors.
As the server has enough RAM, 64G (30G free), I increased max_connections from 150 to 300 as a first reaction.
However, looking at the dump command I found the option --quick (also enabled by default), which tells me that it is exporting row by row.
--single-transaction --routines --quick --compact
I am thinking of changing this to --skip-quick, but I hesitate to change it since I would need to check the restore again, and that is very time consuming.
Looking at the connections over time, I also noticed that there are some interruptions around that time period. So maybe connections stack up since there is a block during the mysqldump?
The MySQL error log shows a large number of the following errors, although not at these time points but continuously throughout the day:
Aborted connection 63182018 to db: 'mydb' user: 'test' host: 'myhost' (Got an error reading communication packets)
How would you approach this problem?
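One quick check while the dump is running, to confirm whether sessions are piling up behind it (plain `information_schema`, available in 5.7):
```
SELECT id, user, state, time, LEFT(info, 80) AS query_start
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;
-- States like "Waiting for global read lock" or "Waiting for table flush"
-- during the dump window would point at the dump blocking the inserts.
```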



merlin
(323 rep)
Dec 27, 2020, 08:08 AM
• Last activity: Aug 5, 2025, 02:05 AM
0
votes
1
answers
1208
views
Delete backup directories using RMAN
Is there any way RMAN can delete empty backup directories?
Rajat Saxena
(75 rep)
Jun 20, 2013, 07:32 AM
• Last activity: Aug 5, 2025, 01:06 AM
0
votes
1
answers
142
views
How can I manually backup a large Azure Flexible Postgres database?
I have a large (3TB+) single database on a Azure Postgres Flexible Server. This is a managed service, so it takes backups automatically and the frequency can be adjusted. As a disaster recovery/emergency option, I'd like to periodically take a snapshot of the database separately from these incremental backups and store it somewhere else.
I don't need ongoing connectivity for incremental backups (current managed system does that), I just want a periodic backup that'd be restorable to a new server. I would just choose Azure's own Vaulted Backup option, which provides enough separation for this purpose. But according to the docs :
>Vaulted backups are supported for server size <= 1 TB. If backup is configured on server size larger than 1 TB, the backup operation fails.
So I'm looking for other options. I checked out Barman, which looks like it should be perfect. But the managed Postgres Flexible Server doesn't support ssh/rsync access, and I'm getting conflicting info on whether `pg_basebackup` is supported; it seems unlikely, along with other physical replication options.
I can't use Azure's own tools to back up my server (it's too big), and I can't use external tools (they can't have shell access). The last option is just `pg_dump`, which in my experience will take days to finish and may need to be performed on a point-in-time restore so that performance isn't affected. I'd prefer to create and restore from a physical backup rather than a logical one, so this is a last resort.
Is there a better option for a managed server this size than `pg_dump`?
zach
(1 rep)
May 28, 2025, 11:05 PM
• Last activity: Aug 5, 2025, 12:06 AM
1
votes
1
answers
139
views
Error "could not reserve shared memory region ... error code 487" on Postgresql 17.5 in Windows
In PG 17.5 on Windows, I'm seeing errors in the log that look like:
2025-06-10 12:53:44.548 EDT [18612] LOG: could not reserve shared memory region (addr=0000022916760000) for child 00000000000014C4: error code 487
2025-06-10 13:09:44.667 EDT [18612] LOG: could not reserve shared memory region (addr=0000022916760000) for child 00000000000014A8: error code 487
2025-06-10 13:14:44.797 EDT [18612] LOG: could not reserve shared memory region (addr=0000022916760000) for child 00000000000014CC: error code 487
I was able to duplicate the issue fairly easily in two unconnected/unrelated environments. The issue appears to be triggered/exacerbated by higher settings of shared_buffers in postgresql.conf
Steps to replicate:
1. Install PG 17.5 on a windows machine from the EDB community release
2. In the default postgresql.conf that is created by the installer, change shared_buffers as follows: On an 8GB laptop (windows 11), I changed shared_buffers to 3GB. On a 768 GB Dell server (Windows Server 2019), changing shared_buffers to 256GB showed the error message quite often but with shared_buffers=128GB it took a couple of minutes for the message to appear.
3. Start the cluster with pg_ctl and watch the output. The higher the setting of shared_buffers, the more frequent the messages appear. On the 8 GB laptop, it took over 30 minutes to see the first message. On the Dell server I was getting those messages sometimes even before the server was ready to accept connections.
Notes:
1. the same hardware and Operating system runs PG 13.18 without those error messages.
2. There are no active connections or queries running during my testing
I set the logging to Debug5 and here are some associated messages. PID 19048 is the postmaster, not sure what PID 14992 is doing:
2025-06-10 10:11:47.402 EDT [] DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2025-06-10 10:11:47.402 EDT [] DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2025-06-10 10:11:47.402 EDT [] DEBUG: postmaster received pmsignal signal
2025-06-10 10:11:47.402 EDT [] DEBUG: mapped win32 error code 2 to 2
2025-06-10 10:11:47.403 EDT [] DEBUG: mapped win32 error code 2 to 2
2025-06-10 10:11:47.406 EDT [] LOG: could not reserve shared memory region (addr=000001A228F90000) for child 0000000000002168: error code 487
sevzas
(373 rep)
Jun 10, 2025, 06:06 PM
• Last activity: Aug 4, 2025, 11:07 PM
0
votes
1
answers
1227
views
Use ENUM in array of objects
Is it possible to set up an ARRAY column with objects/JSON on a TABLE, forcing a parameter of the objects in the ARRAY to adhere to an ENUM, while keeping that object parameter unique?
Data examples:
ENUM val1, val2, val3
[{p1: val1, p2: 'something'}, {p1: val2, p2: 'something'}] <-- valid
[{p1: val1, p2: 'something'}, {p1: val4, p2: 'something'}] <-- not valid, val4 not an ENUM value
[{p1: val1, p2: 'something'}, {p1: val1, p2: 'something else'}] <-- not valid, p1 not unique
If it is possible, using PostgreSQL and sequelize, how would I go about setting up the column?
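If a plain `jsonb` column (rather than a native ARRAY of a composite type) is acceptable, one way to approximate both rules in PostgreSQL is a CHECK constraint backed by a validation function; a sketch with invented names (`p1_enum`, `things`), which sequelize would then map as a `JSONB` attribute:
```
CREATE TYPE p1_enum AS ENUM ('val1', 'val2', 'val3');

-- true only if arr is a JSON array whose p1 values are all in p1_enum and unique
CREATE FUNCTION p1_array_valid(arr jsonb) RETURNS boolean
LANGUAGE sql AS $$
  SELECT CASE
           WHEN jsonb_typeof(arr) <> 'array' THEN false
           ELSE COALESCE(
                  (SELECT bool_and((elem ->> 'p1') = ANY (enum_range(NULL::p1_enum)::text[]))
                          AND count(elem ->> 'p1') = count(DISTINCT elem ->> 'p1')
                   FROM jsonb_array_elements(arr) AS elem),
                  true)
         END
$$;

CREATE TABLE things (
  id   serial PRIMARY KEY,
  objs jsonb NOT NULL CHECK (p1_array_valid(objs))
);
```
Note that uniqueness here is only enforced within one row's array; uniqueness across rows would need a different design.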
David Gustavsson
(235 rep)
Sep 12, 2017, 02:34 PM
• Last activity: Aug 4, 2025, 10:02 PM
Showing page 4 of 20 total questions