Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
161
views
Considerable size difference between mysql versions
We usually employ different MySQL versions for different scenarios, and I realized that similar databases, with the same number of tables and the same data, differ in size from 43M to 223M (~5x more) just because of the MySQL version.
My goal is to understand why 5x more disk space is required.
A 5.5.46 instance has this size:
du -hs /var/lib/mysql/
43M
And a 5.7.11 instance has this size:
du -hs /var/lib/mysql/
223M
Inside it we have the usual default data files: ib_logfile0, ib_logfile1, ibdata1, mysql...
Within 5.5.46 they usually have these sizes, respectively:
5.1M ib_logfile0
5.1M ib_logfile1
26M ibdata1
6.5M mysql
...but within 5.7.11 they are far bigger:
48M ib_logfile0
48M ib_logfile1
76M ibdata1
12M ibtmp1 (a file that 5.5 does not have)
25M mysql
In order to understand all that difference, I took a look at the release notes where Oracle writes down what they're changing: https://dev.mysql.com/doc/relnotes/mysql/5.5/en/
But I could not find anything conclusive.
I appreciate your help on that.
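For context, those files are sized by server settings whose defaults changed between versions (for example, the innodb_log_file_size default grew from 5 MB to 48 MB, and 5.7 introduced the ibtmp1 temporary tablespace). A minimal sketch for comparing the relevant settings on both servers:
~~~sql
-- run on both servers and diff the results
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_data_file_path';
SHOW VARIABLES LIKE 'innodb_temp_data_file_path'; -- 5.7 only; sizes ibtmp1
~~~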
Hudson Santos
(1 rep)
Mar 28, 2019, 07:35 AM
• Last activity: Jul 14, 2025, 12:04 AM
3
votes
1
answers
192
views
How does row versioning impact the size of a non-clustered columnstore index?
I have a table with a non-clustered columnstore index. The database has a readable secondary AG replica ([filled with problems...](https://dba.stackexchange.com/q/346216/277982)) and the table has triggers. Neither snapshot isolation nor RCSI are enabled. I infer that this means the table must be paying the 14-bytes-per-row penalty for row versioning. However, I generally regard compressed columnstore segments as immutable. This gives me my question: **how does row versioning impact the size of a non-clustered columnstore index?**
I've consulted the [documentation for row versioning](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver16) but it only mentions columnstore once, and in an irrelevant way. I have also read [Niko Neugebauer's entire 131-part series](https://www.nikoport.com/columnstore/), but I do not recall it ever being mentioned there, even in [this very relevant part](https://www.nikoport.com/2015/09/22/columnstore-indexes-part-67-clustered-columstore-isolation-levels-transactional-locking/). This [blog post](https://sql-sasquatch.blogspot.com/2021/06/sqlserver-snapshot-isolation-level.html) suggests that snapshot isolation inflicts a massive performance penalty on columnstore, but does not explain the internals.
You may assume that I do not care about the in-memory OLTP version of columnstore.
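For what it's worth, the version store's footprint can at least be observed from the outside while exercising the table; a minimal sketch, assuming SQL Server 2016 or later:
~~~sql
-- tempdb version store consumption, per database
SELECT database_id,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;
~~~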
J. Mini
(1237 rep)
Apr 18, 2025, 10:14 PM
• Last activity: May 13, 2025, 05:41 PM
4
votes
2
answers
281
views
Alert on Table size
I need to create an alert on my application table when it reaches 98% of its threshold. I wrote an SP which checks the rows and logs the information into a log file, which was monitored by the alert system and raised alarms successfully.
But now my DBA team is saying they cannot grant the **application DB user** read/write privileges on UTL_FILE in production, and OEM (*Oracle Enterprise Manager*) can't be used at the application level.
- What are my options?
- How can I get an alert on table size?
**Reference:** UTL_FILE
> With the UTL_FILE package, PL/SQL programs can read and write operating system text files. UTL_FILE provides a restricted version of operating system stream file I/O.
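One direction, sketched under the assumption that the alert system could poll a database table instead of a file (the table names and the 98,000-row threshold here are hypothetical):
~~~sql
-- log a row whenever the table crosses the threshold; the alert system
-- then watches app_alert_log instead of an OS file
INSERT INTO app_alert_log (checked_at, row_count)
SELECT SYSDATE, COUNT(*)
FROM app_table
HAVING COUNT(*) > 98000;
~~~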
Sakshi Sharma
(41 rep)
Sep 19, 2017, 09:50 AM
• Last activity: May 12, 2025, 04:07 AM
0
votes
2
answers
303
views
Row size limits in Mysql w.r.t to TEXT and BLOB columns
There is a well-known limitation in MySQL on row size, which is 65,535 bytes.
While going through the docs, I found that
> BLOB and TEXT columns only contribute 9 to 12 bytes toward the row
> size limit because their contents are stored separately from the rest
> of the row.
I did go through the MySQL documentation but was unable to understand when a TEXT column will contribute 9 bytes and when it will contribute 12. Can anyone explain how to interpret the statement?
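As a small illustration of the quoted rule (a sketch; the point is only that the declared length of off-page types barely counts toward the limit):
~~~sql
-- fails: the two latin1 VARCHAR(40000) columns alone need ~80,004 bytes,
-- which exceeds the 65,535-byte row size limit
CREATE TABLE too_wide (a VARCHAR(40000), b VARCHAR(40000)) CHARSET latin1;

-- works: each TEXT column counts only ~9-12 bytes toward the limit,
-- because its contents are stored off the row
CREATE TABLE fits (a TEXT, b TEXT, c TEXT, d TEXT);
~~~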
Dinesh Kumar
(664 rep)
May 14, 2020, 12:46 PM
• Last activity: May 1, 2025, 01:07 PM
35
votes
3
answers
9730
views
Why is the default page size for databases so small?
In PostgreSQL and SQL Server, the default page size is 8 KB. In MySQL, it is 16 KB. In IBM DB2, Oracle, and H2 it is only 4 KB.
Why are these page sizes so small?
Is there a historical or memory usage reason?
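(For reference, the page size in use can be confirmed at runtime; in PostgreSQL, for example:)
~~~pgsql
SELECT current_setting('block_size'); -- 8192 on a default build
~~~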
Mate Michelisz
(477 rep)
Jun 21, 2021, 08:15 AM
• Last activity: Feb 4, 2025, 05:44 AM
10
votes
6
answers
34490
views
Estimate projected database growth
I recently started working with SQL Server 2008 as a DBA trainee. I need to calculate the size of the database, but also estimate its growth over recent months and the predicted growth for the next 12 months.
I can use sp_spaceused to calculate the actual size, but how do I calculate everything else?
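One common approach, sketched here with a hypothetical SizeHistory table, is to snapshot the size on a schedule and then trend the deltas:
~~~sql
-- run daily (e.g. from an Agent job) and chart/regress the results
CREATE TABLE dbo.SizeHistory (
    captured_at datetime       NOT NULL DEFAULT GETDATE(),
    size_mb     decimal(18, 2) NOT NULL
);

INSERT INTO dbo.SizeHistory (size_mb)
SELECT SUM(size) * 8 / 1024.0  -- size is counted in 8 KB pages
FROM sys.database_files;
~~~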
Diego Flores
(459 rep)
Jul 24, 2014, 08:48 PM
• Last activity: Dec 5, 2024, 04:04 PM
7
votes
2
answers
35003
views
Choosing the right storage block size for sql server
There are many articles on what storage block size should be used for SQL Server, e.g. Disk Partition Alignment Best Practices for SQL Server. The right block size should improve the performance of a SQL Server database. I'm looking for recommendations and methods to identify which storage block size is appropriate for a database. Is there a guide on how to identify an appropriate block size?
r0tt
(1078 rep)
Jun 28, 2016, 12:41 PM
• Last activity: Oct 8, 2024, 08:19 PM
0
votes
1
answers
1884
views
Oracle 11g - Decreasing Big Table Size
I want to reduce the size of a table.
> I thought I could do this with export/import, but when I check it
> with the ESTIMATE_ONLY option, it doesn't seem to work.
>
> I have no chance of using partitioning.
>
> If I do it with CTAS (create table as select), the archive production
> will peak and the disaster database will be affected.
>
> How can I reduce the size of this table? If I do shrink, will UNDO be
> enough and how much space will be enough to add UNDO tablespace to
> overcome this problem? Do you have an estimate of how long it will
> take if I shrink?
>
> I have to be very careful as it will be done directly in the
> production database.
> Version: Oracle 11g Release 2 - Standard Edition
>
> ASM option: yes
>
> Cluster: yes
>
> Partitioning option: no
>
> Compress data pump option: no
Table size information:
SQL> select segment_name,sum(bytes)/1024/1024/1024 GB from dba_segments where owner='OWNER_NAME' and segment_type='TABLE' and segment_name=upper('TABLE_NAME') group by segment_name;
SEGMENT_NAME GB
--------------------------------------------------------------------------------- ----------
TABLE_NAME 392.493164
Export information (it does not help in reducing the table size):
nohup expdp "'/ as sysdba'" directory=DP_DIR tables=OWNER_NAME.TABLE_NAME ESTIMATE_ONLY=Y &
Output:
Starting "SYS"."SYS_EXPORT_TABLE_01": "/******** AS SYSDBA" directory=DP_DIR tables=OWNER_NAME.TABLE_NAME ESTIMATE_ONLY=Y
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. estimated "OWNER_NAME"."TABLE_NAME" 392.4 GB
Total estimation using BLOCKS method: 392.4 GB
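On the shrink route, the usual sequence looks like this (a sketch only; it requires an ASSM tablespace, and SHRINK SPACE COMPACT can be run, and re-run, ahead of the final step):
~~~sql
ALTER TABLE owner_name.table_name ENABLE ROW MOVEMENT;
ALTER TABLE owner_name.table_name SHRINK SPACE COMPACT; -- relocate rows online
ALTER TABLE owner_name.table_name SHRINK SPACE;         -- then lower the high-water mark
~~~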
Best regards,
jrdba123
(29 rep)
Mar 21, 2021, 10:27 AM
• Last activity: Aug 12, 2024, 01:06 AM
14
votes
1
answers
14536
views
Why *not* ERROR: index row size xxxx exceeds maximum 2712 for index "foo"?
We have repeatedly seen failing attempts to index columns with values exceeding a maximum size. Postgres 10 has this kind of error message for it:
> ERROR: index row size xxxx exceeds maximum 2712 for index "foo_idx"
> HINT: Values larger than 1/3 of a buffer page cannot be indexed.
> Consider a function index of an MD5 hash of the value, or use full text indexing.
Examples:
- https://dba.stackexchange.com/questions/69161/character-varying-index-overhead-length-limit/69164
- https://dba.stackexchange.com/questions/25138/index-max-row-size-error
Etc.
Now, a_horse_with_no_name demonstrated a case with much larger text values (10000 characters) that still seems to work with a UNIQUE index in Postgres 9.6. Quoting his test case:
create table tbl (col text);
create unique index on tbl (col);
insert into tbl
values (rpad(md5(random()::text), 10000, md5(random()::text)));
select length(col) from tbl; -- 10000
No error, and the column value indeed tested with a length of 10000 characters.
Have there been recent changes or how is this possible?
Erwin Brandstetter
(185717 rep)
Sep 8, 2018, 03:56 PM
• Last activity: Jul 21, 2024, 02:05 AM
1
votes
1
answers
1053
views
MongoDB conditional query for array $size
I have a user collection, and each user has a different number of id_cards; all id_cards are stored in an array. I can get all users that have exactly 7 id_cards by using the array $size operator. How do I get the users with, say, more than 5 id_cards?
Edik Mkoyan
(229 rep)
Dec 18, 2015, 05:25 PM
• Last activity: May 6, 2024, 04:01 PM
2
votes
3
answers
557
views
Azure SQL index rebuild after huge size reduction
We have a table in an Azure SQL database that used to have an nvarchar(max) column storing pdf files. (The wonders of external developers.) The table grew to 156 GB. It has 476000 rows.
After changing the logic, we no longer needed the pdf column. The only reasonable way to get rid of the data in it was to drop the column and recreate the column (in case some weird process was still referencing it).
However, the table size is still reported as 156 GB.
The backup table I just created (SELECT INTO) is 128 MB, so that seems to be the real size of the data.
I let an index rebuild (ONLINE) run overnight on the clustered PK index. It failed with a TCP error somewhere between 8 and 12 hours in. The index is still 95% fragmented, and the size is still reported as 156 GB.
Is there an explanation why this is so eye-wateringly slow? **Is there a better way?** It's a production database, the table is used by a website and has to stay accessible, so I can't do it OFFLINE unless it takes less than 10 minutes, which nobody can guarantee.
Can I just *build all the indexes on the backup table, drop the original table and rename the backup*? That sounds risky (a small risk of losing a record created at just the wrong time).
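(One possibly relevant tool, offered only as a sketch: DBCC CLEANTABLE exists specifically to reclaim space left behind by dropped variable-length columns, and works in batches:)
~~~sql
-- database/table names and batch size are hypothetical
DBCC CLEANTABLE ('MyDb', 'dbo.MyTable', 100000);
~~~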
-----
I'm trying to make Azure realize the space is no longer used. Allocated, I'm OK with that; used, not so much. Again, it's not the reserved space that is the issue, it is the used space.
(Screenshots of the database space report and of the table in question omitted.)
vacip
(133 rep)
Jan 13, 2024, 08:30 AM
• Last activity: Jan 17, 2024, 01:16 PM
2
votes
2
answers
1472
views
Insert 2 output result sets from sp_spaceused into Temp Table (SQL Server non-2016)
sp_spaceused, when executed without any parameters, produces 2 result sets:
- database_name, database_size, unallocated space
- reserved, data, index_size, unused
Usually when you execute some SP, you can insert output into temp table like below example:
insert #temp (column1, column2, column3)
exec sp_someprocedure
but in the case of sp_spaceused (without any parameters) you cannot insert the output, since it produces 2 result sets.
I know 2016 has the nice parameter @oneresultset = 1, but is there any way to insert 2 or more output result sets into #temp in SQL Server 2014 or lower?
Are there any tricks?
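One workaround, sketched here rather than offered as the canonical trick: skip sp_spaceused and compute the equivalent numbers from the system views directly, which a plain INSERT ... SELECT can capture as a single result set:
~~~sql
-- roughly the first result set of sp_spaceused, as one result set;
-- assumes #temp was created with matching columns
INSERT INTO #temp (database_name, database_size_mb)
SELECT DB_NAME(),
       SUM(size) * 8 / 1024.0  -- size is counted in 8 KB pages
FROM sys.database_files;
~~~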
Aleksey Vitsko
(6195 rep)
Dec 19, 2017, 03:02 AM
• Last activity: Nov 21, 2023, 05:38 PM
0
votes
0
answers
52
views
/var/lib/mysql grows indefinitely in size
Having the need to upgrade from Ubuntu Bionic to Ubuntu Focal, I made an upgrade from server 5.7 (deduced afterward) to 8.0 by simply inheriting the original /var/lib/mysql directory; that is, I left those files where they were (after making a backup of that directory). To get rid of some incompatibility found in the original configuration file (my.cnf), I purged everything from the old package and started the new 8.0 server with the configuration shipped with its package.
Apparently the server was working fine; that is, I didn't find any particular complaint in /var/log/mysql/error.log and websites were finding their expected data.
But after a few days, I found the disk space was totally or almost exhausted. I freed some space and noticed /var/lib/mysql had grown from about 38G to more than 100G (I issued the command du -sh * in /var/lib), and it was still growing.
What did I do wrong? Perhaps I made a big mistake in upgrading the server in the way I mentioned.
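To narrow down where the growth is, a sketch that sums per-schema table sizes; anything large that does not show up here (binary logs, for instance) lives directly in the filesystem rather than in the tables:
~~~sql
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS gb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY gb DESC;
~~~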
AppLEaDaY
(1 rep)
Jun 6, 2023, 01:17 AM
• Last activity: Jun 6, 2023, 06:18 PM
-1
votes
1
answers
86
views
I'd like to know how to monitor tablespaces
I'd like to know how to monitor tablespaces.
I don't have much knowledge of Postgres.
When I create a tablespace, how can I measure the tablespace size?
How can I monitor the size of the tablespace so it doesn't overflow? Should I monitor the file system instead? Does someone have a script that could help with these questions?
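A minimal sketch using the built-in size functions:
~~~pgsql
SELECT spcname,
       pg_size_pretty(pg_tablespace_size(spcname)) AS size
FROM pg_tablespace;
~~~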
Fernanda Afonso
(1 rep)
May 23, 2023, 03:40 PM
• Last activity: May 23, 2023, 05:43 PM
1
votes
1
answers
105
views
The size of my table is not growing
I couldn't see any growth in table size, down to a single byte, even though I added 1 million rows to the table over the last few days.
I'm using this query:
~~~pgsql
select
table_name,
pg_relation_size(quote_ident(table_name)) as data_size
from
information_schema.tables
where table_schema = SOME_SCHEMA
and table_name = SOME_TABLE;
~~~
And it returns the same result all these days.
What could cause this unexpected result?
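One thing worth ruling out (a sketch, with placeholder names): quote_ident() produces an unqualified name that is resolved via search_path, so the table_schema filter in the WHERE clause never actually qualifies the name passed to pg_relation_size. A schema-qualified variant avoids measuring the wrong table:
~~~pgsql
SELECT table_name,
       pg_relation_size(format('%I.%I', table_schema, table_name)::regclass) AS data_size
FROM information_schema.tables
WHERE table_schema = 'some_schema'
  AND table_name = 'some_table';
~~~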
Morgan
(21 rep)
May 22, 2023, 08:07 PM
• Last activity: May 22, 2023, 10:13 PM
2
votes
1
answers
69
views
How do you make sure a data page can only have 3 rows
For a table with the setting (fillfactor = 75), I am trying to make sure that each data page can hold only 3 rows.
select ((8192 * 0.75 - 24) / 3) - 4 - 24; -- returns 2012
The first 24 is the page header, in bytes.
The last 24 is t_hoff (row header bytes for every row).
The 4 is the line pointer for each inserted row.
2012 % 8 = 4, so I guess that to make sure one page has only 3 rows I can only use 2008 bytes?
~~~pgsql
CREATE TABLE hot (hotid int,s char(2000))WITH (fillfactor = 75,autovacuum_enabled = OFF);
INSERT INTO hot VALUES (1, 'A');
SELECT
lp,lp_off,lp_len,t_hoff,t_ctid,t_infomask::bit(16),t_infomask2
FROM
heap_page_items (get_raw_page ('hot', 0));
~~~
returns
~~~pgsql
lp | lp_off | lp_len | t_hoff | t_ctid | t_infomask | t_infomask2
----+--------+--------+--------+--------+------------------+-------------
1 | 6160 | 2032 | 24 | (0,1) | 0000100000000010 | 2
(1 row)
~~~
My guess at why **lp_len = 2032**: 2032 = 8 + 2000 + 24. The first 4-byte int is padded to 8 bytes, 2000 is for column s, and the last 24 bytes are the row header.
For the following table I expected lp_len to be 2032, but it's not. What did I miss?
~~~pgsql
CREATE TABLE hot_test (
hotid int, hotid1 int,s char(2000)
)WITH (fillfactor = 75,autovacuum_enabled = OFF);
INSERT INTO hot_test VALUES (1, 1, 'A');
SELECT
lp,lp_off,lp_len,t_hoff,t_ctid,t_infomask::bit(16),t_infomask2
FROM
heap_page_items (get_raw_page ('hot_test', 0));
~~~
returns
~~~pgsql
lp | lp_off | lp_len | t_hoff | t_ctid | t_infomask | t_infomask2
----+--------+--------+--------+--------+------------------+-------------
1 | 8128 | 59 | 24 | (0,1) | 0000100000000010 | 3
(1 row)
~~~
Update:
~~~pgsql
drop table hot_test;
CREATE TABLE hot_test (
hotid int, hotid1 int,s char(1999)
)WITH (fillfactor = 75,autovacuum_enabled = OFF);
ALTER TABLE hot_test ALTER s SET STORAGE PLAIN;
INSERT INTO hot_test VALUES (1, 1, 'A');
SELECT
lp,lp_off,lp_len,t_hoff,t_ctid,t_infomask::bit(16),t_infomask2
FROM
heap_page_items (get_raw_page ('hot_test', 0));
~~~
returns
~~~pgsql
lp | lp_off | lp_len | t_hoff | t_ctid | t_infomask | t_infomask2
----+--------+--------+--------+--------+------------------+-------------
1 | 6152 | 2035 | 24 | (0,1) | 0000100000000010 | 3
(1 row)
~~~
2035 = 4 + 4 + 1999 + 4 + 24
So I guess varlena text needs 4 bytes of padding?
~~~pgsql
drop table paddling;
CREATE TABLE paddling (
hotid int, hotid1 int,s char(127)
)WITH (fillfactor = 75,autovacuum_enabled = OFF);
ALTER TABLE paddling ALTER s SET STORAGE PLAIN;
INSERT INTO paddling VALUES (1, 1, 'ASDFGHJKLQWERTYUI');
SELECT
lp,lp_off,lp_len,lp_len - 24 - 8,t_hoff,t_ctid,t_infomask::bit(16),t_infomask2
FROM
heap_page_items (get_raw_page ('paddling', 0));
~~~
returns
~~~pgsql
lp | lp_off | lp_len | ?column? | t_hoff | t_ctid | t_infomask | t_infomask2
----+--------+--------+----------+--------+--------+------------------+-------------
1 | 8024 | 163 | 131 | 24 | (0,1) | 0000100000000010 | 3
(1 row)
~~~
~~~pgsql
drop table paddling;
CREATE TABLE paddling (
hotid int, hotid1 int,s char(126)
)WITH (fillfactor = 75,autovacuum_enabled = OFF);
ALTER TABLE paddling ALTER s SET STORAGE PLAIN;
INSERT INTO paddling VALUES (1, 1, 'ASDFGHJKLQWERTYUI');
SELECT
lp,lp_off,lp_len,lp_len - 24 - 8,t_hoff,t_ctid,t_infomask::bit(16),t_infomask2
FROM
heap_page_items (get_raw_page ('paddling', 0));
~~~
returns
~~~pgsql
lp | lp_off | lp_len | ?column? | t_hoff | t_ctid | t_infomask | t_infomask2
----+--------+--------+----------+--------+--------+------------------+-------------
1 | 8032 | 159 | 127 | 24 | (0,1) | 0000100000000010 | 3
(1 row)
~~~
It seems that for char(x), if x > 126 the varlena header is 4 bytes (1 byte otherwise). Tested; this also applies to the text data type.
jian
(487 rep)
Apr 27, 2023, 05:18 AM
• Last activity: Apr 28, 2023, 11:59 PM
2
votes
1
answers
1666
views
Difference between total table size and size of all pages in postgresql
Working with Postgres I noticed the following:
When I run this query I get the total table size:
SELECT pg_size_pretty( pg_table_size('tenk1') );
I get 384 kB.
And when I run:
SELECT relpages FROM pg_class WHERE relname = 'tenk1';
I get 45 pages. I know that one page is 8 kB. I confirmed that with
SELECT current_setting('block_size');
Therefore, my table size should be 45 * 8 kB = 360 kB, instead of 384 kB.
Where does the difference of 24 kB come from?
What else could take up this space? I know that indexes are excluded by pg_table_size, but its TOAST table (if any), free space map, and visibility map are included. Is there a way to determine their sizes?
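On the last part, pg_relation_size accepts a fork argument, so the candidates can be measured one by one; a minimal sketch:
~~~pgsql
SELECT pg_relation_size('tenk1', 'main') AS main_bytes,
       pg_relation_size('tenk1', 'fsm')  AS fsm_bytes,  -- free space map
       pg_relation_size('tenk1', 'vm')   AS vm_bytes;   -- visibility map
~~~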
Piet Pro
Mar 27, 2023, 03:06 PM
• Last activity: Mar 28, 2023, 09:13 PM
1
votes
1
answers
281
views
MySQL 5.7 --> MariaDB 10.8 switch: mysqlcheck --check --extended hangs on table
Recently I switched from MySQL 5.7 to MariaDB 10.8 and it's been generally great as a drop-in replacement, but I've run into a problem: the process for a cron job that daily runs
/usr/bin/nice /usr/bin/ionice -c3 \
/usr/bin/mysqlcheck \
--all-databases --check --extended --check-only-changed --silent 2>&1
hangs indefinitely on one of the larger tables (981 MB). Here's the relevant output of the process list four days after a server restart (there is one process for each day the cron job has run):
MariaDB [(none)]> show full processlist;
+-------+--------------+-----------------+-----------+---------+--------+-----------+----------------------------------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+-------+--------------+-----------------+-----------+---------+--------+-----------+----------------------------------------------+----------+
| 7719 | | localhost | | Query | 305690 | Executing | CHECK TABLE `` EXTENDED CHANGED | 0.000 |
| 23567 | | localhost | | Query | 219929 | Executing | CHECK TABLE `` EXTENDED CHANGED | 0.000 |
| 38185 | | localhost | | Query | 133529 | Executing | CHECK TABLE `` EXTENDED CHANGED | 0.000 |
| 57790 | | localhost | | Query | 47127 | Executing | CHECK TABLE `` EXTENDED CHANGED | 0.000 |
Nothing about this appears in the /var/log/mysql/ error logs.
## What next?
I'm hoping someone can point me to other things I can check to help diagnose the problem. If someone is kind enough to do so I'll update the question. Thanks!
#### OS and mariadb info
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
$ mariadb --version
mariadb Ver 15.1 Distrib 10.8.6-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
AJ Livingston
(13 rep)
Nov 22, 2022, 09:02 PM
• Last activity: Feb 22, 2023, 10:09 PM
0
votes
2
answers
10285
views
Calculating the total row size of a table - SQL Server
I have a table 'Connections' with the following schema in **SQL Server**:
[ID] [int] IDENTITY(1,1) NOT NULL,
[User] [nvarchar](20) NOT NULL,
[UserID] [int] NULL,
[True] [bit] NOT NULL,
[Comment] [nvarchar](max) NOT NULL,
[Created] [datetime] NOT NULL,
[Flag] [bit] NULL,
[Destination] [varchar](20) NULL
and it has a primary key clustered index on the ID column.
I need to get the size generated by the entries in this table per month. I have searched for an existing SP, function, or DMV that can help with this, but I only found how to get the size of the table, not per row. Also, since I need the total size of the rows per month, I cannot use the total/minimum/maximum row size over the whole table (the solution provided in other Stack Exchange posts).
My attempt to this is as follows:
USE DB1;
SELECT DATEPART(year,created),
DATEPART(month,created),
(count (*))*(4+2*ISNULL((max(len([User]))),2)+4+1+2*ISNULL((max(len([Comment]))),2)+8+1+ISNULL(max(len([Destination])),2)) 'BytesPerMonth'
FROM Connections
GROUP BY DATEPART(year,created),DATEPART(month,created)
In the above I multiplied the number of rows by the byte size of a row and considered the following:
int - 4 bytes
nvarchar - 2 bytes per character ([nvarchar](max) also takes 2 bytes per character, same as if we had [nvarchar](40), correct?)
bit - 1 byte
datetime - 8 bytes
varchar - 1 byte per character
However, this only provides an estimate, because it considers only the maximum length of the variable columns and multiplies that maximum by the number of rows, which results in a much bigger value than the actual size of the rows. Is there a way to get the actual size per row in this context?
Furthermore, I am aware of the row header, another 4 bytes per row (I did not include this, since my result was already huge from using the max of the variable columns). I have also found that I should count 3 bytes for the NULL values and 8 bytes for the variable columns in my schema; do they need to be counted per row or per column? How can I calculate the size of the index?
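Assuming the goal is the actual stored bytes rather than a maximum-based estimate, DATALENGTH measures each value as stored, which turns the query into a per-row sum (a sketch; the fixed-width part is simplified, and the per-row header is still left out):
~~~sql
SELECT DATEPART(year, Created)  AS yr,
       DATEPART(month, Created) AS mo,
       SUM(4 + 4 + 8 + 1                           -- ID, UserID, Created; both bits share a byte
           + ISNULL(DATALENGTH([User]), 0)         -- nvarchar: 2 bytes per character
           + ISNULL(DATALENGTH([Comment]), 0)
           + ISNULL(DATALENGTH([Destination]), 0)
          ) AS BytesPerMonth
FROM dbo.Connections
GROUP BY DATEPART(year, Created), DATEPART(month, Created);
~~~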
user1930901
(349 rep)
May 7, 2020, 04:03 PM
• Last activity: Feb 10, 2023, 07:41 AM
2
votes
1
answers
780
views
Table size is not changed after new column is added
I am trying to check how the table size is going to increase after adding a new column with default values.
I use the following default values: NULL, '' (empty string), and 'some text' (some random text), but it seems the table size is not changed.
I am using the following statements:
EXEC sp_spaceused '[Table_A]';
ALTER TABLE [dbo].[Table_A]
ADD [Loops] NVARCHAR(900)
CONSTRAINT [DF_Loops] DEFAULT('XXX') WITH VALUES;
EXEC sp_spaceused '[Table_A]';
ALTER TABLE [dbo].[Table_A]
DROP CONSTRAINT [DF_Loops]
ALTER TABLE [dbo].[Table_A]
DROP COLUMN [Loops];
but sp_spaceused always returns:
name rows reserved data index_size unused
------------------------------------------------- -------------------- ------------------ ------------------ ------------------ ------------------
Table_A 73540994 8701688 KB 6658544 KB 2042976 KB 168 KB
Could anyone tell me what I am doing wrong?
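One way to see whether the rows were physically widened at all (a sketch; note that in newer versions of SQL Server, adding a column with a constant default can be a metadata-only change that leaves existing rows untouched):
~~~sql
SELECT index_id,
       avg_record_size_in_bytes,
       max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Table_A'),
                                    NULL, NULL, 'DETAILED');
~~~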
gotqn
(4348 rep)
Jan 29, 2015, 01:13 PM
• Last activity: Nov 24, 2022, 05:45 PM