
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
5851 views
Postgres: How to find where database size growth is coming from
We have a PostgreSQL database that has grown significantly in size recently, from about 340GB to 571GB over the past couple of months, and we are not tracking any significant change in user behavior over that time. Our primary DBA has made a couple of recommendations, his chief one being to export the entire database and then re-import it; from his tests on a second server cloned from our primary, this requires about 3 hours of downtime and gets the size down to only 300GB. My two main areas of concern are finding out where this significant growth is coming from (using du -h I can at least see it's in the /data directory, with no significant growth in tablespaces or pg_wal), and understanding how exporting and re-importing the database can recover almost 300GB of space without actually losing any production data.
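
A starting point for locating the growth (a sketch, not from the question; relation names will differ): list the largest relations, including materialized views and TOAST tables, and check the dead-tuple counts that a dump/restore would discard.

-- Top 20 relations by total size (table + indexes + TOAST)
SELECT n.nspname, c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;

-- Tables with the most dead tuples (bloat that plain VACUUM does not return to the OS)
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

If the dead-tuple counts are large, table and index bloat would explain why an export and re-import shrinks the database: the restore rewrites every table and index compactly.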
awestover89 (101 rep)
Jul 5, 2022, 12:47 PM • Last activity: Jul 22, 2025, 03:58 PM
9 votes
1 answer
2004 views
AWS RDS Postgres massive disk use, small tables
I can't figure out why our AWS Postgres server has consumed all of its space. We just had to increase the storage allocated to it, but can't find any hint from Postgres that it's using that much space. Amazon says that we've eaten up about 60GB in the last 2 weeks; Postgres says our whole DB is barely over 5GB. How can I track down and reclaim our storage? Here's the output of some size commands in psql:

select pg_size_pretty(pg_database_size('my_db')); -- 3730 MB

SELECT pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND C.relkind <> 'i'
AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC
LIMIT 20;

gives

1540 MB
1286 MB
235 MB
191 MB

Curiously, the autovacuumer has never triggered on any of the tables (I'm assuming this is because we rarely delete a row?). The number of dead rows is quite low relative to table size (these tables have row counts in the hundreds of thousands or millions):

SELECT vacuum_count, autovacuum_count, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 5;

0 0 161
0 0 119
1 0 76
0 0 25
0 0 11

Predominantly the operations on the database are INSERT and SELECT, with some UPDATEs and very rarely a DELETE. We use a lot of JSONB. **UPDATE**: \l+ shows the same as the other queries: 3730 MB
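
To reconcile the instance-level number with pg_database_size(), a few hedged checks (a sketch; pg_ls_waldir() requires PostgreSQL 10+ and superuser or pg_monitor membership, so it may not apply to older RDS instances):

-- All databases on the instance, not just my_db
SELECT pg_size_pretty(sum(pg_database_size(datname))) FROM pg_database;

-- WAL directory size (PostgreSQL 10+)
SELECT pg_size_pretty(sum(size)) FROM pg_ls_waldir();

-- Temporary files written by queries since the last stats reset
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database;

None of these show up in pg_database_size() for a single database, and on RDS the server logs retained on the instance are another common consumer of storage that Postgres itself never reports.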
ChrisJ (621 rep)
Aug 17, 2018, 01:39 AM • Last activity: May 31, 2025, 06:07 AM
12 votes
1 answer
8925 views
MySQL Workbench Database Sizes
I'm trying to find the total size on the hard disk that all of my MySQL Workbench databases are using. Does anyone know of an easy way to figure this out? If nothing else, what is the default location MySQL/Workbench uses for saving the raw data on a Windows machine? Thanks in advance! Quintis
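
Workbench is only a client; the sizes live in the MySQL server it connects to. A minimal sketch for totaling on-disk use per schema from information_schema (InnoDB figures are approximate):

-- Approximate size per schema, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_mb DESC;

The server's data directory is shown by SHOW VARIABLES LIKE 'datadir'; on Windows it commonly defaults to C:\ProgramData\MySQL\MySQL Server X.Y\data.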
Quintis555 (323 rep)
Dec 1, 2011, 05:02 PM • Last activity: May 20, 2025, 09:46 PM
0 votes
1 answer
315 views
Issue with table size after upgrading to MySQL 8
I have two MySQL servers. One is running 5.7.24 and the other is running 8.0.29. On 5.7.24 I have a table with 71M (million) records and its size is about 6GB. The same table on 8.0.29 (with the same indexes) is 13.5GB. This table size significantly affects my queries, both inserts and selects. The steps I took in this process were: - Dumping the table from MySQL 5.7.24 - Importing the dump into MySQL 8.0.29 Both MySQL servers are running in Docker containers. MySQL 5.7.24 is running on port 3306; MySQL 8.0.29 is running on port 3307. An important thing to mention is that after the table dump, MySQL 5.7.24 is no longer running, so it doesn't use any hardware resources. Does anyone know what could be the reason for this behaviour?
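
A hedged first check (a sketch; the schema and table names are placeholders): compare data, index, and free space for the table on both servers, since a freshly imported InnoDB table can carry very different page-fill and free-space overhead.

-- Run on both servers; InnoDB figures are approximate
SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free    / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE table_schema = 'my_schema' AND table_name = 'my_table';

If free_mb is large on the 8.0 server, OPTIMIZE TABLE my_table; rebuilds the table and may reclaim it.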
Heril Muratovic (123 rep)
Jul 27, 2022, 12:05 PM • Last activity: May 7, 2025, 03:03 AM
8 votes
3 answers
40221 views
Table has 14 GB in unused space - How to shrink table size
I use the following script to gather data from my database, which is SQL Server on Azure.

-- Script to run against database to gather metrics
CREATE TABLE #SpaceUsed
(name sysname, rows bigint, reserved sysname, data sysname, index_size sysname, unused sysname)

DECLARE @Counter int
DECLARE @Max int
DECLARE @Table sysname

SELECT name, IDENTITY(int,1,1) ROWID
INTO #TableCollection
FROM sysobjects
WHERE xtype = 'U'
ORDER BY lower(name)

SET @Counter = 1
SET @Max = (SELECT Max(ROWID) FROM #TableCollection)

WHILE (@Counter <= @Max) … 4000) or varchar(MAX). Another 8 columns are unique-identifier types.
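
For comparison, a compact sketch (not the asker's script) that produces the same per-table numbers in one set-based query and also works on Azure SQL Database:

-- Reserved, used, and unused space per table, in MB
SELECT s.name AS schema_name, t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
       SUM(ps.used_page_count)     * 8 / 1024.0 AS used_mb,
       SUM(ps.reserved_page_count - ps.used_page_count) * 8 / 1024.0 AS unused_mb
FROM sys.dm_db_partition_stats ps
JOIN sys.tables  t ON t.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY unused_mb DESC;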
webworm (555 rep)
Jun 13, 2017, 04:12 PM • Last activity: Apr 29, 2025, 01:02 PM
16 votes
4 answers
38087 views
Show data and disk use breakdown by table
I have a SQL Server 2008 R2 database being used by several deployed programs. **Question: Is there an easy way to display how much space each table consumes, for all of the tables in the database, and distinguish logical space from disk space?** If I use SSMS (Management Studio), the storage properties shown for the database read 167 MB with 3 MB "available" (about the right size, but I'm concerned about the 3 MB available - is this a limit to be concerned about, assuming I know I have enough disk space?). I can drill into each table, but that takes forever to do. I know I can write my own queries and test around, but I'd like to know if there's already an easy (built-in?) way to do this.
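
One built-in shortcut (a sketch; sp_MSforeachtable is undocumented but present in SQL Server 2008 R2) is to run sp_spaceused per table into a temp table:

CREATE TABLE #sizes
(name sysname, rows bigint, reserved varchar(50),
 data varchar(50), index_size varchar(50), unused varchar(50));

EXEC sp_MSforeachtable 'INSERT #sizes EXEC sp_spaceused ''?'';';

SELECT * FROM #sizes ORDER BY name;

Here "reserved" is the disk space allocated to the table, "data" plus "index_size" is the logically used portion, and the difference shows up in "unused".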
Dronz (263 rep)
Aug 25, 2015, 07:51 PM • Last activity: Mar 13, 2025, 04:51 PM
0 votes
2 answers
2833 views
How to reduce the size of a SQL Server database?
## Context ## We have a Backup DB, where we normally store backups of tables from the production DB before doing any updates/deletes. If anything goes wrong, we can restore the data from the table created in the Backup DB. ## Problem ## The size of the Backup DB is rapidly increasing and **I need a way to reduce its size**. ### Steps so far ### I tried deleting old tables and shrinking the Backup DB, but shrinking takes too much time.
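
If dropping old tables has already freed space inside the files, a hedged middle ground (a sketch; the logical file name is a placeholder) is to release only the trailing free space rather than running a full shrink:

-- Fast: returns unused space at the end of the file without relocating pages
DBCC SHRINKFILE (N'BackupDB_Data', TRUNCATEONLY);

A full DBCC SHRINKFILE to a target size moves pages one by one, which is what takes so long (and fragments indexes); TRUNCATEONLY skips that work.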
Bishal (1 rep)
Jan 15, 2022, 05:42 PM • Last activity: Feb 13, 2025, 06:03 PM
5 votes
1 answer
493 views
Estimate database growth caused by Accelerated Database Recovery
Accelerated Database Recovery (ADR) is on by default in Azure and an option in SQL Server 2019. It has some really interesting potential. I have experimented by turning ADR on in a static test database, and if nothing happens there is no growth. It seems like growth is only going to occur on an active database. So unless you have a test system that allows you to fully reproduce your production workload, you are going to be doing your growth experiments in production. Add to this that turning ADR on or off essentially requires downtime, as there is an "exclusive lock" requirement to change the state, and you have a recipe for a career-changing event. If you have a 10GB database with 50GB available, it's probably not a big concern. But if you have a 3.5TB database and only 4TB of space available, this could be a big problem. From the Manage accelerated database recovery documentation: > PVS is considered large if it's significantly larger than baseline or if it is close to 50% of the size of the database. ### Question How can I accurately estimate the growth caused by ADR, prior to implementing it?
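
While a true estimate needs the production workload, the PVS component of ADR can at least be measured once enabled; SQL Server 2019 exposes it through a DMV (a sketch):

-- Current persistent version store size per database (SQL Server 2019+)
SELECT DB_NAME(database_id) AS database_name,
       persistent_version_store_size_kb / 1024.0 AS pvs_mb
FROM sys.dm_tran_persistent_version_store_stats;

Sampling this during a representative busy period on a test system gives a defensible baseline before touching production.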
James Jenkins (6318 rep)
Jun 10, 2021, 06:18 PM • Last activity: Dec 12, 2024, 06:57 PM
0 votes
1 answer
139 views
When should sp_spaceused be used to get the size of a database, instead of querying sys.database_files?
I frequently find the output of sp_spaceused misleading. What appears to be the size of all data in a database is actually [the combined size of both the data file and the log file](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-spaceused-transact-sql?view=sql-server-ver16#result-set). I have recently discovered sys.database_files and I have found it superior in every way. It is so much better than sp_spaceused that I am planning to drink until I forget about that procedure. This gives me my question: **When checking the size of a database, is there any feature or return value offered by sp_spaceused that cannot be obtained from sys.database_files?** I've checked over the documentation and I'm pretty sure that there isn't.
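
For reference, the separation the asker wants is straightforward from the catalog view (a sketch, using the FILEPROPERTY approach):

-- Data and log files reported separately, in MB
SELECT type_desc, name,
       size * 8 / 1024.0 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024.0 AS used_mb
FROM sys.database_files;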
J. Mini (1237 rep)
Oct 30, 2024, 09:10 PM • Last activity: Oct 31, 2024, 12:22 AM
3 votes
1 answer
210 views
Does rebuilding a clustered index offline require extra space for each non-clustered index?
I have an uncompressed clustered primary key. It consumes 63.6 GB in the main part of the clustered index, with 17.9 GB of LOB. The table's only non-clustered index is 57.3 GB, also with 17.9 GB of LOB. I wish to rebuild the clustered index offline without the database growing. Running sp_spaceused with no arguments reports 66 GB in its "unallocated space" column. Is what I wish to do possible? Or will rebuilding the compressed index force a rebuild of the non-clustered index, thus taking at least 100 GB? Despite my best efforts, I have found an answer in neither the documentation nor Stack Overflow. My experiments with a small database that I had handy suggest that rebuilding the clustered index does not force a rebuild of the non-clustered indexes.
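
One knob worth knowing when rebuilding under space pressure (a sketch; the index and table names are placeholders): SORT_IN_TEMPDB moves the sort work out of the user database, although the new copy of the index is still written there.

-- Offline rebuild with the sort spilled to tempdb
ALTER INDEX PK_BigTable ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = OFF);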
J. Mini (1237 rep)
Sep 21, 2024, 03:08 PM • Last activity: Sep 21, 2024, 03:57 PM
47 votes
5 answers
63553 views
I need to run VACUUM FULL with no available disk space
I have one table that is taking up close to 90% of the disk space on our server. I have decided to drop a few columns to free up space, but I need to return the space to the OS. The problem, though, is that I'm not sure what will happen if I run VACUUM FULL and there is not enough free space to make a copy of the table. I understand that VACUUM FULL should not be used, but I figured it was the best option in this scenario. Any ideas would be appreciated. I'm using PostgreSQL 9.0.6
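
Worth quantifying before acting (a sketch; 'big_table' is a placeholder): VACUUM FULL writes a complete new copy of the table and its indexes before removing the old one, so the free space required is roughly the size of the surviving data.

-- Worst case, the rewrite needs about this much free space
SELECT pg_size_pretty(pg_total_relation_size('big_table'));

-- Plain VACUUM needs no copy: it makes dead space reusable within the table
-- (only trailing empty pages are returned to the OS)
VACUUM VERBOSE big_table;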
Justin Rhyne (573 rep)
Apr 25, 2012, 08:31 PM • Last activity: Aug 9, 2024, 10:09 AM
0 votes
1 answer
32 views
What is a viable low-cost DR option for a large cluster?
We have a Cassandra cluster running on GKE with a 32-CPU node pool and SSD disks. The current data size is nearly 1 PB, with each node utilizing an average of 5 TB of its 10 TB allocated SSD disk. The cluster comprises 200 nodes, each with 10 TB of disk, for a total of 2 PB allocated. Given this cluster size, the maintenance costs are substantial. How can we achieve low-cost disaster recovery for such a large cluster? One option I am considering is creating a new data center in a different region with a replication factor of 1 (RF1). While this is not recommended, it would at least reduce the cluster size by a factor of three. Any suggestions would be greatly appreciated.
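
For reference, the RF1 remote data center the asker describes is configured per keyspace (a CQL sketch; keyspace and data-center names are placeholders):

ALTER KEYSPACE my_keyspace
WITH replication = {'class': 'NetworkTopologyStrategy',
                    'dc_primary': 3, 'dc_dr': 1};

The new DC then has to be populated (for example, nodetool rebuild -- dc_primary on each new node); with RF1 there, losing any single DR node means losing that node's share of the copy until it is rebuilt again.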
Sai (39 rep)
Jul 12, 2024, 07:47 PM • Last activity: Jul 26, 2024, 09:59 AM
0 votes
2 answers
251 views
Database size is far larger than sum of tables, even after VACUUM FULL?
Here is my query and its output in psql, run as an admin on that database. There is a massive discrepancy between the 'total size of the database' at 50GB, and the sum of the tables, ~7MB. This is immediately after running VACUUM FULL ANALYZE table_name for every table. My understanding is that pg_total_relation_size includes the size of things like indexes, so that does not account for the discrepancy. What could be causing this?
core=# DO $$
core$# DECLARE
core$#     table_name TEXT;
core$#     db_size TEXT;
core$#     table_size TEXT;
core$#     index_size TEXT;
core$# BEGIN
core$#     -- Get the total size of the database
core$#     SELECT pg_size_pretty(pg_database_size(current_database())) INTO db_size;
core$#     RAISE NOTICE 'Total size of the database: %', db_size;
core$#     
core$#     -- Cursor to fetch table names
core$#     FOR table_name IN 
core$#         SELECT t.table_name
core$#         FROM information_schema.tables t
core$#         WHERE t.table_schema = 'public' -- Change 'public' to your schema name if it's different
core$#         AND t.table_type = 'BASE TABLE'
core$#     LOOP
core$#         -- Get the size of each table and print
core$#         EXECUTE format('SELECT pg_size_pretty(pg_total_relation_size(''%I.%I''))', 'public', table_name) INTO table_size;
core$#         RAISE NOTICE 'Table % size: %', table_name, table_size;
core$# 
core$#     END LOOP;
core$# END $$;
NOTICE:  Total size of the database: 50 GB
NOTICE:  Table table1 size: 16 kB
NOTICE:  Table table2 size: 48 kB
NOTICE:  Table table3 size: 16 kB
NOTICE:  Table table4 size: 32 kB
NOTICE:  Table table5 size: 32 kB
NOTICE:  Table table6 size: 40 kB
NOTICE:  Table table7 size: 72 kB
NOTICE:  Table table8 size: 80 kB
NOTICE:  Table table9 size: 24 kB
NOTICE:  Table table10 size: 2440 kB
NOTICE:  Table table11 size: 32 kB
NOTICE:  Table table12 size: 32 kB
NOTICE:  Table table13 size: 120 kB
NOTICE:  Table table14 size: 24 kB
NOTICE:  Table table15 size: 24 kB
NOTICE:  Table table16 size: 24 kB
NOTICE:  Table table17 size: 32 kB
NOTICE:  Table table18 size: 32 kB
NOTICE:  Table table19 size: 3352 kB
NOTICE:  Table table20 size: 40 kB
NOTICE:  Table table21 size: 144 kB
NOTICE:  Table table122 size: 24 kB
DO
It doesn't seem to be related to large objects either:
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));
 pg_size_pretty 
----------------
 8192 bytes
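
Two hedged checks the DO block above would miss (a sketch): relations outside the public schema, and the per-database breakdown confirming which database actually holds the 50 GB.

-- Size per schema, across all schemas (the loop only looked at 'public')
SELECT n.nspname, pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS total
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'm')
GROUP BY n.nspname
ORDER BY sum(pg_total_relation_size(c.oid)) DESC;

-- Size of every database in the cluster
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

If neither accounts for the space, orphaned data files left behind by a crash during a table rewrite are a possibility; they live in the database directory but belong to no pg_class entry.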
Datguy (11 rep)
Apr 27, 2024, 02:32 PM • Last activity: Apr 29, 2024, 03:46 PM
2 votes
1 answer
657 views
Why is there a huge gap between allocated and used size in a database?
I want to find the total used and remaining space for each database in Azure SQL MI. As an example, when I right-click on a database and select Properties, I see the following output, where the total size should be ~365 GB:
data_size       log_size         total_size
TEST_DB                  355.69042968750	1.31347656250	357.00390625000
When I run the following script, "Get size of all tables in database", the sum of the tables is around ~500MB, and I have no idea where the remaining ~364.5 GB went. When I instead run the script from this answer (https://dba.stackexchange.com/a/339009/289736), I again see data sizes that are much smaller, around ~765MB. I am lost as to the correct way to get the size of a database, since different approaches show different sizes. If there is a huge gap between allocated size and used size, where does the gap come from?
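
A hedged way to see allocated versus used directly (a sketch; run in the database's own context):

-- Allocated vs. actually used pages in the current database's data files
SELECT SUM(total_page_count)              / 128 AS size_mb,
       SUM(allocated_extent_page_count)   / 128 AS allocated_mb,
       SUM(unallocated_extent_page_count) / 128 AS free_mb
FROM sys.dm_db_file_space_usage;

A large free_mb means the files were grown (or pre-sized) well beyond the data they currently hold, which is the usual source of an allocated-versus-used gap.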
alper (147 rep)
Apr 26, 2024, 11:46 AM • Last activity: Apr 29, 2024, 02:04 AM
1 votes
1 answer
361 views
How to find out the free space inside a database file when the database is not online?
Normally when the database is online I use the following query and get how much space is free inside each file:
SELECT CONVERT(decimal(12,2), (CONVERT(decimal(12,2), size) / 128) / 1024.00)
     - CONVERT(decimal(12,2), ROUND(FILEPROPERTY(a.name, 'SpaceUsed') / 128.000, 2) / 1024.00) AS [FREESPACE GB],
       *
FROM sys.database_files a WITH (NOLOCK)
You need to be in the database you want to look at. However, how can you do that when the database is offline? I have not found a way. I was trying this, but it did not work out:
select object_name,counter_name,instance_name,cntr_value,
case cntr_type 
	when 65792 then 'Absolute Meaning' 
	when 65536 then 'Absolute Meaning' 
	when 272696576 then 'Per Second counter and is Cumulative in Nature'
	when 1073874176 then 'Bulk Counter. To get correct value, this value needs to be divided by Base Counter value'
	when 537003264 then 'Bulk Counter. To get correct value, this value needs to be divided by Base Counter value' 
    else 'I don''t know'
end as counter_comments
from sys.dm_os_performance_counters with(nolock)
where cntr_type not in (1073939712)
and counter_name in ('Data File(s) Size (KB)', 'Database pages')
Is there a way to find out the free space inside the files of offline databases?
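
For what it's worth, while a database is offline the instance still exposes its file sizes (though not the free space inside them) through the server-level catalog (a sketch):

-- Allocated file sizes for offline databases
SELECT DB_NAME(mf.database_id) AS database_name, mf.name, mf.type_desc,
       mf.size * 8 / 1024.0 AS size_mb
FROM sys.master_files mf
JOIN sys.databases d ON d.database_id = mf.database_id
WHERE d.state_desc = 'OFFLINE';

FILEPROPERTY and sys.database_files both need the database context to be usable, which is why the internal free-space number cannot be read while it is offline.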
Marcello Miorelli (17274 rep)
Feb 19, 2024, 12:23 PM • Last activity: Feb 20, 2024, 02:32 PM
0 votes
1 answer
141 views
Can a postgres autovacuum increase the TOAST size of the table?
A table's TOAST size went from 7.5 GB to 9.2 GB in one hour. Normal database activity does not seem responsible: there were no more queries than usual, and producing that much growth would take a far larger volume of requests than we serve. The only event that matches is an autovacuum that ran shortly before. Is it possible for a table's TOAST size to become bigger after an autovacuum, for some reason? [Graph: the table-size change matches the dead-row reduction]
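
To watch this directly, the TOAST relation's size can be tracked on its own (a sketch; 'my_table' is a placeholder):

-- Size of the table's TOAST relation, if it has one
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size
FROM pg_class c
WHERE c.relname = 'my_table' AND c.reltoastrelid <> 0;

Autovacuum itself does not write new row versions, so growth observed alongside it normally comes from concurrent inserts or updates of TOASTed values landing in freshly extended pages.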
Slim (291 rep)
Feb 13, 2024, 10:50 AM • Last activity: Feb 13, 2024, 10:55 AM
2 votes
1 answer
104 views
Why is my composite index on 3 int columns smaller than the PK index on 1 int column?
I have a table called data_table. Below are the schema, the indexes, and their sizes.

                             Table "public.data_table"
  Column   |           Type           | Collation | Nullable |                Default
-----------+--------------------------+-----------+----------+----------------------------------------
 id        | integer                  |           | not null | nextval('data_table_id_seq'::regclass)
 col1_id   | integer                  |           | not null |
 col2_id   | integer                  |           | not null |
 col3_id   | integer                  |           | not null |
 col4      | double precision         |           | not null |
 timestamp | timestamp with time zone |           | not null |

Indexes:                                                      Size
 "data_table_1col_idx" btree (col1_id)                        8032 KB
 "data_table_2col_idx" btree (col2_id)                        8040 KB
 "data_table_3col_idx" btree (col3_id)                        8048 KB
 "data_table_4col_idx" btree (col4)                           25 MB
 "data_table_epe_idx" btree (col1_id, col2_id, col3_id)       8216 KB
 "data_table_pkey" btree (id)                                 25 MB

data_table has a total of 1184330 rows.

select count(*) from "data_table";
  count
---------
 1184330
(1 row)

select * from "data_table" limit 5;
    id     | col1_id | col2_id | col3_id |   col4    |         timestamp
-----------+---------+---------+---------+-----------+----------------------------
 180102529 |       1 |       1 |       1 |  83.70361 | 2023-12-13 09:30:49.257+00
 180102530 |       1 |       1 |       2 |    2827.6 | 2023-12-13 09:30:49.257+00
 180102531 |       1 |       1 |       3 | 124.25156 | 2023-12-13 09:30:49.257+00
 180102532 |       1 |       1 |       4 |      43.3 | 2023-12-13 09:30:49.257+00
 180102533 |       1 |       1 |       5 |     282.4 | 2023-12-13 09:30:49.257+00
(5 rows)

col1_id is the primary key of table_1, which has 11 rows in total. col2_id is the primary key of table_2, which has 19 rows. col3_id is the primary key of table_3, which has 115 rows. As you can see, data_table_epe_idx is my composite index at around 8MB, whereas data_table_pkey is the default index at 25MB. **Why is the size of the composite index (3x integer) less than the size of the PK index (1x integer)?**
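
One concrete way to test whether bloat, rather than index structure, explains the gap (a sketch): rebuild the primary-key index and re-measure. Also note that PostgreSQL 13+ B-tree deduplication makes low-cardinality composite indexes like (col1_id, col2_id, col3_id) very compact, since with only ~24,000 distinct key combinations across 1.18M rows each combination repeats roughly 49 times.

REINDEX INDEX data_table_pkey;
SELECT pg_size_pretty(pg_relation_size('data_table_pkey'));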
Ezhar (21 rep)
Dec 22, 2023, 02:03 PM • Last activity: Dec 26, 2023, 06:34 AM
0 votes
1 answer
140 views
Create DB: Initial Size vs Max Size
In SQL Server, when creating a database we can specify the (initial) SIZE and the MAX_SIZE. I don't understand the difference. When I create the database, it's empty, so its size is **zero**. Therefore, the initial size seems to be more like an estimate of how much of the hard drive I should ask the operating system to reserve for me. But then, that is just what I think the MAX_SIZE is. So can anyone tell me what SIZE and MAX_SIZE actually are?
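
A minimal sketch showing the settings together (names and paths are placeholders): SIZE is pre-allocated on disk at creation (the file really occupies that space, even while logically empty), FILEGROWTH is the increment when it fills, and MAXSIZE is the ceiling growth may never exceed.

CREATE DATABASE Demo
ON PRIMARY
( NAME = Demo_data,
  FILENAME = 'C:\Data\Demo.mdf',
  SIZE = 100MB,        -- reserved from the OS immediately
  FILEGROWTH = 64MB,   -- grow in 64MB steps when full
  MAXSIZE = 1GB )      -- never grow past this
LOG ON
( NAME = Demo_log,
  FILENAME = 'C:\Data\Demo_log.ldf',
  SIZE = 32MB,
  FILEGROWTH = 32MB,
  MAXSIZE = 256MB );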
Juan Perez (121 rep)
Dec 12, 2023, 01:32 PM • Last activity: Dec 12, 2023, 01:49 PM
2 votes
2 answers
1472 views
Insert 2 output result sets from sp_spaceused into Temp Table (SQL Server non-2016)
sp_spaceused, when executed without any parameters, produces 2 result sets: the first with database_name, database_size, and unallocated space; the second with reserved, data, index_size, and unused. Usually when you execute some SP, you can insert its output into a temp table like this: insert #temp (column1, column2, column3) exec sp_someprocedure. But in the case of sp_spaceused (without any parameters) you cannot insert the output, since it produces 2 result sets. I know 2016 has the nice @oneresultset = 1 parameter, but is there any way to insert 2 or more output result sets into #temp in SQL 2014 or lower? Are there any tricks?
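
One pre-2016 workaround (a sketch, since INSERT ... EXEC cannot capture two result sets): skip the procedure and approximate its numbers from the views it reads.

-- First result set equivalent: total file size of the database
SELECT DB_NAME() AS database_name,
       SUM(size) * 8 / 1024.0 AS database_size_mb
FROM sys.database_files;

-- Second result set equivalent: reserved / data / index / unused, database-wide
SELECT SUM(reserved_page_count) * 8 AS reserved_kb,
       SUM(in_row_data_page_count + lob_used_page_count
           + row_overflow_used_page_count) * 8 AS data_kb,
       SUM(used_page_count - in_row_data_page_count - lob_used_page_count
           - row_overflow_used_page_count) * 8 AS index_kb,
       SUM(reserved_page_count - used_page_count) * 8 AS unused_kb
FROM sys.dm_db_partition_stats;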
Aleksey Vitsko (6195 rep)
Dec 19, 2017, 03:02 AM • Last activity: Nov 21, 2023, 05:38 PM
1 votes
1 answer
1090 views
How to find whole cluster size in PostgreSQL?
I know how to find relation and database sizes using pg_total_relation_size() and pg_database_size(), but I want to find the size of the whole cluster. Is there a way other than calculating the disk space of the data directory with a file manager, or using a query like this? SELECT pg_size_pretty(sum(pg_database_size(datname))) AS "total databases size" FROM pg_database;
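
Another in-SQL option (a sketch): sum the tablespaces instead of the databases. Note this still excludes WAL under pg_wal, which belongs to no tablespace.

-- Sum of all tablespaces in the cluster
SELECT pg_size_pretty(sum(pg_tablespace_size(oid))) FROM pg_tablespace;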
Jaikaran saini (13 rep)
Oct 18, 2023, 04:11 AM • Last activity: Oct 18, 2023, 06:09 AM
Showing page 1 of 20