Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
173
views
ALTER TABLE on a table under load (SQL Server)
We have a table that is used for sending SMS and other notifications for confirmation as a second factor.
Some changes in business logic require us to alter the table to make some columns NOT NULL.
Our stumbling block is that the table has to be accessible 24/7: the application writes to and reads from it quite frequently, and the table has tens of millions of rows. And when a command like
alter table NOTIFICATION_TABLE
alter column C1 int null
starts to execute, it obviously tries to lock the table and apparently cannot, because the table is in use by other requests. The result is that the alter command hangs and nothing happens for 5 minutes. After that we stop it ourselves, because we do not want to accidentally bring down our system.
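One thing worth knowing while experimenting: the session's lock wait can be capped with SET LOCK_TIMEOUT, so the ALTER gives up after a bounded wait instead of sitting in the lock queue for minutes (and blocking new requests that queue behind it). A minimal sketch, using the table and column from the question and the intended NOT NULL change:
SET LOCK_TIMEOUT 5000;  -- milliseconds; the ALTER fails with error 1222 instead of waiting indefinitely
ALTER TABLE NOTIFICATION_TABLE
    ALTER COLUMN C1 int NOT NULL;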
Any thoughts about how we can implement this modification without stopping our system?
DotNetter
(101 rep)
May 1, 2017, 04:12 PM
• Last activity: Jul 15, 2025, 07:06 AM
1
votes
1
answers
188
views
TokuDB Hot Column Expansion
I need to expand a varchar field length from 255 to 4000.
I am using tokudb_version:
tokudb-7.5.8 running on Linux 3.16.0-60-generic #80~14.04.1-Ubuntu SMP Wed Jan 20 13:37:48 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
I know TokuDB supports Hot Column operations, but this is not working for me (the table has ~210 million rows).
Show variables file: https://drive.google.com/file/d/0B5noFLrbjDjzSW9wdnVjb095Q0U/view?usp=sharing
Alter command
alter table test_table modify test_column varchar(4000);
Show processlist:
mysql> show processlist;
+----+------+-----------+---------------+---------+------+---------------------------------------------------------+------------------------------------------------------------+-----------+---------------+
| Id | User | Host | db | Command | Time | State | Info | Rows_sent | Rows_examined |
+----+------+-----------+---------------+---------+------+---------------------------------------------------------+------------------------------------------------------------+-----------+---------------+
| 6 | root | localhost | NULL | Query | 0 | init | show processlist | 0 | 0 |
| 7 | root | localhost | test | Query | 461 | Queried about 2445001 rows, Inserted about 2445000 rows | alter table test_table modify test_column varchar(4000) | 0 | 0 |
+----+------+-----------+---------------+---------+------+---------------------------------------------------------+------------------------------------------------------------+-----------+---------------+
2 rows in set (0.00 sec)
Any idea which options I might need to set? It's currently processing at ~6k rows per second, which would take ~10 hours.
Kyalo
(11 rep)
Jul 26, 2016, 09:16 AM
• Last activity: Jun 30, 2025, 10:04 AM
0
votes
2
answers
236
views
Changing int column to bigint on table with blob column
I have an AWS-hosted RDS production table called external_documents, defined below. It has about 35k rows and each blob is about 0.5 MB.
CREATE TABLE `external_documents` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `filename` varchar(191) NOT NULL,
  `mime_type` varchar(191) NOT NULL,
  `mpi_id` int unsigned NOT NULL,
  `created_at` timestamp NULL DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL,
  `contents` longblob NOT NULL,
  `deleted_at` timestamp NULL DEFAULT NULL,
  `title` varchar(191) DEFAULT NULL,
  `file_size` int unsigned DEFAULT NULL,
  `category` varchar(191) DEFAULT NULL,
  `modality` varchar(191) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `external_documents_mpi_id_title_index` (`mpi_id`,`title`)
) ENGINE=InnoDB AUTO_INCREMENT=35836 DEFAULT CHARSET=utf8mb3 COMMENT='Externally derived documents (e.g. lab results from a fax server)'
We are changing the mpi_id column from int to bigint. The problem is that the contents column is a longblob, and changing the key takes hours and blocks access to the table during that time.
**What I've tried**
1. I tried to use Percona Toolkit's pt-online-schema-change, but I don't have the PROCESS privilege, so that won't work.
2. gh-ost has a similar limitation.
3. I tried to make a duplicate table using create table ed2 select * from external_documents, but it blocked access to the external_documents table.
4. I tried to make a duplicate (empty) table using create table ed2 like external_documents, which completed very quickly, but when I tried to copy the data into the new table using insert into ed2 select * from external_documents, it again blocked the original table for 20 minutes (a chunked sketch of this copy follows after this list).
5. I tried online DDL, e.g. ALTER TABLE external_documents MODIFY COLUMN mpi_id bigint unsigned NOT NULL, ALGORITHM=INPLACE, LOCK=NONE; but I got ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.
6. Similarly, ALTER TABLE external_documents MODIFY COLUMN mpi_id bigint unsigned NOT NULL, ALGORITHM=INSTANT, LOCK=NONE; gives ERROR 1221 (HY000): Incorrect usage of ALGORITHM=INSTANT and LOCK=NONE/SHARED/EXCLUSIVE.
The table has row_type=dynamic, and since there are only 20 bytes of the blob stored on-page, I figured that this should work faster. Any other recommendations?
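For what it's worth, here is a chunked sketch of approach 4 above: copying in primary-key ranges keeps each statement short, so no single INSERT ... SELECT holds locks on external_documents for 20 minutes. The batch boundaries are placeholders, ed2 is assumed to have been created LIKE external_documents and already altered to bigint, and rows changed during the copy still have to be caught up before swapping tables:
INSERT INTO ed2 SELECT * FROM external_documents WHERE id >= 1 AND id < 1001;
INSERT INTO ed2 SELECT * FROM external_documents WHERE id >= 1001 AND id < 2001;
-- ...continue range by range, then swap with a quick RENAME TABLE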
mankowitz
(156 rep)
Aug 21, 2023, 04:59 PM
• Last activity: Jun 20, 2025, 11:04 PM
2
votes
2
answers
2735
views
Postgresql explain alter table add column doesn't work
I'm using Postgresql 12 (AWS RDS) and pgAdmin4.
While the following statement works
explain select * from table "Commands";
the next one
explain alter table "Commands" drop column "test";
gives this error:
ERROR: syntax error at or near "alter"
LINE 1: explain alter table "Commands" drop column "test";
        ^
SQL state: 42601
Character: 9
I made sure the statement alter table "Commands" drop column "test" itself runs successfully, so this cannot be an issue with that statement.
Does explain not work with alter, or am I missing anything?
Tran Triet
(123 rep)
Sep 16, 2021, 10:05 AM
• Last activity: Jun 20, 2025, 11:01 AM
0
votes
1
answers
184
views
H2 Database - Alter multiple databases
I need to alter a table in an H2 database, but we have millions of databases that all contain the same table. What would be a feasible way to alter all of them?
user3111540
(1 rep)
Feb 9, 2018, 06:27 AM
• Last activity: Jun 20, 2025, 07:08 AM
4
votes
1
answers
104
views
"The data type of a column of a partitioned table can't be changed."
[The documentation claims](https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver17#alter-column)
> The data type of a column of a partitioned table can't be changed.
It repeats this claim elsewhere, [such as here](https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver17#partitioned-tables).
Yet I have never seen this fail.
CREATE PARTITION FUNCTION pf(int)
AS RANGE RIGHT FOR VALUES(10, 20, 30)
GO
CREATE PARTITION SCHEME ps
AS PARTITION pf
ALL TO ([primary]);
GO
CREATE TABLE Part
(
id INT IDENTITY PRIMARY KEY CLUSTERED,
name VARCHAR(50)
) ON ps(id);
GO
INSERT INTO Part (name)
SELECT TOP(40) CONVERT(VARCHAR(50), [text])
FROM sys.messages;
GO
ALTER TABLE Part
ALTER COLUMN [name]
SQL_VARIANT;
GO
So what am I missing?
J. Mini
(1225 rep)
May 29, 2025, 03:29 PM
• Last activity: May 29, 2025, 07:17 PM
1
votes
0
answers
56
views
ALTER TABLE ... REORGANIZE PARTITION on huge table takes longer after upgrading to MySQL 8.0
After upgrading from MySQL 5.7 to MySQL 8.0.33, I’ve observed a dramatic slowdown in partition management operations, particularly:
- ALTER TABLE ... REORGANIZE PARTITION
- ALTER TABLE ... DROP PARTITION
Before the upgrade, these commands finished in less than 1 second.
ALTER TABLE hugeTable
REORGANIZE PARTITION pMax INTO (
PARTITION p202506 VALUES LESS THAN (1748707200),
PARTITION pMax VALUES LESS THAN MAXVALUE
);
ALTER TABLE hugeTable
DROP PARTITION p202405;
Every month, we run a scheduled job that:
- Adds the next month’s partition using REORGANIZE PARTITION
- Removes the oldest partition using DROP PARTITION
This pattern worked fine for years on MySQL 5.7. Now, on 8.0.33, both steps take much longer and affect performance, even though the partitions being dropped are already empty.
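(As an aside, whether a partition really is empty can be confirmed from information_schema; TABLE_ROWS is only an estimate for InnoDB, but it is a quick sanity check before the DROP:)
SELECT PARTITION_NAME, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'hugeTable';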
Is there any way to make these ALTER TABLE operations faster and without performance impact, like in MySQL 5.7?
Note: I can’t use pt-online-schema-change, because it takes too long for a table this big.
Environment:
- OS: Ubuntu 18.04.6 LTS (Bionic Beaver)
- MySQL Version: 8.0.33
- Table Size: ~2000 GB, ~40B rows
- PARTITION BY RANGE (timestamp)
Thanks in advance for any guidance or workarounds!
Billy Hsu
(11 rep)
May 20, 2025, 10:22 AM
• Last activity: May 23, 2025, 08:06 AM
0
votes
1
answers
305
views
Server and Database role memberships required to add columns and constraints
I have a user with the following server and database role memberships:
1. securityadmin (server)
2. public (server)
3. db_owner (database)
4. db_accessadmin (database)
5. db_securityadmin (database)
When running the below script in SQL 2005, it runs through without any issues:
BEGIN TRAN
IF NOT EXISTS (SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Opfix'
AND COLUMN_NAME = 'ModifiedBy')
BEGIN
ALTER TABLE [OpFix] ADD [ModifiedBy] [int] NOT NULL
CONSTRAINT DF_OpFixModifiedBy DEFAULT ([dbo].[fnSoftmarLoginUserID]())
END
ROLLBACK TRAN
When running the same script on SQL 2008 and SQL 2012, I get the below error:
> Cannot find the object "Opfix" because it does not exist or you do
> not have permissions.
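To narrow down the second possibility in that error, the permissions the affected user actually holds on the object can be listed with fn_my_permissions; a quick diagnostic sketch, with the object name taken from the script above:
-- run in the target database in the context of the affected user
SELECT entity_name, subentity_name, permission_name
FROM sys.fn_my_permissions('dbo.OpFix', 'OBJECT');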
So, my questions are:
1. Does it make sense that I get different results between SQL 2005 and SQL 2008+, with the same update and the exact same user permissions?
2. What are the minimum requirements for this script to run through?
3. The only way I could get this script to run through on SQL 2008+ was to grant sysadmin membership; is that expected?
Marieke Smit
(1 rep)
Nov 4, 2016, 08:21 AM
• Last activity: May 8, 2025, 04:03 PM
6
votes
1
answers
4107
views
ALTER timestamp column to timestamptz, without "converting" data?
It seems that altering a column from timestamp without time zone to timestamp with time zone converts the existing data based on the current session time zone at the time of the alter statement.
See this example, output shown after each statement
create table tztest (col1 timestamp without time zone);
set timezone = 'UTC';
insert into tztest (col1) values ('2023-02-01 10:10:10');
select col1 as t1, extract(epoch FROM col1) from tztest;
-- → 2023-02-01 10:10:10 1675246210
set timezone = 'America/New_York';
alter table tztest alter column col1 type timestamp with time zone;
select col1 as t2, extract(epoch FROM col1) from tztest;
-- → 2023-02-01 10:10:10-05 1675264210
set timezone = 'UTC';
select col1 as t3, extract(epoch FROM col1) from tztest;
-- → 2023-02-01 15:10:10+00 1675264210
The epoch value changes after the alter command.
I was expecting that PG would assume the timestamp value was UTC and not adjust it when changing the type to timestamp with time zone (giving me t2 of 05:10:10 and t3 of 10:10:10).
However, it seems PG assumes the session time zone at the time of the alter, and converts the values to UTC.
Is my understanding correct, and is it expected behavior?
On a large table the alter will update every row, which is something we are concerned about and certainly want to understand.
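For reference, the intended interpretation can be made explicit with a USING clause instead of relying on the session time zone at the time of the alter. A sketch against the example table above (recent Postgres versions can reportedly also skip the full table rewrite when the session time zone is UTC, which is worth verifying on your version):
set timezone = 'UTC';
alter table tztest
    alter column col1 type timestamp with time zone
    using col1 at time zone 'UTC';  -- read the stored naive values explicitly as UTC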
Padraic Renaghan
(73 rep)
Feb 1, 2023, 11:34 PM
• Last activity: May 4, 2025, 08:50 PM
0
votes
1
answers
560
views
Change collation of column without SET DATA TYPE
I need to change column collation from default to "C.utf8".
This is the statement I am using:
ALTER TABLE <table_name>
ALTER COLUMN <column_name> SET DATA TYPE VARCHAR COLLATE "C.utf8";
The issue is that SET DATA TYPE is causing errors, as there are views and triggers that rely on that specific column. I would rather not have to drop and recreate the views and triggers and touch a lot of additional code. Is there a syntax for updating the collation without requiring SET DATA TYPE?
Postgres version: 14.10
kravb
(179 rep)
Jan 31, 2024, 02:15 PM
• Last activity: Mar 4, 2025, 03:09 PM
0
votes
2
answers
1053
views
How do you change the storage parameters of an already-created table (SQL Developer)?
**Summarize the problem**
I have been given 6 tables that have all been created already, and I was told to change the storage parameters of those tables. The issue I run into is that I can only change pctfree, pctused, initrans and maxtrans, and not the others (initial, next, pctincrease and maxextents).
**Provide details and any research**
I have done a lot of research, but some of the approaches do not work with SQL Developer at all, while the others do work, but as stated only for those four storage parameters.
**When appropriate, describe what you’ve tried**
alter table CUSTOMERS
pctused 20 pctfree 80;
This works perfectly for those two, but I am unable to add the others. From what I found online, this was in fact the only thing that worked for me.
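For the remaining parameters, Oracle expects a STORAGE clause rather than listing them next to PCTFREE/PCTUSED. A sketch with placeholder values (INITIAL cannot be changed for an existing segment, and in a locally managed tablespace most of these settings are ignored anyway):
ALTER TABLE CUSTOMERS
  PCTFREE 80
  PCTUSED 20
  STORAGE (NEXT 1M PCTINCREASE 0 MAXEXTENTS UNLIMITED);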
I appreciate all input!
XileVoid
(107 rep)
Sep 2, 2020, 06:53 AM
• Last activity: Feb 26, 2025, 06:23 AM
0
votes
1
answers
60
views
Phenomenon when adding a column via ALTER TABLE to a table with 300M records
In the context of an existing question I asked, https://dba.stackexchange.com/questions/345282/performant-way-to-perform-update-over-300m-records-in-mysql , I assumed that altering the table schema would be fairly trivial for the following.
As soon as the command was triggered, I expected the CPU to be consistently high, but that was evident only for around 20 minutes; then there was a sudden dip to 10% (**Q1**: what underlying event could cause this to occur?), which further dropped below 5% and remained there until the execution of the command completed.
**Q2.** The command execution took **~35 hrs**; is there scope for improving this further?
**Update**: The MySQL version in use is 5.7.0.
interaction | CREATE TABLE `interaction` (
  `id` varchar(36) NOT NULL,
  `entity_id` varchar(36) DEFAULT NULL,
  `request_date` datetime DEFAULT NULL,
  `user_id` varchar(255) DEFAULT NULL,
  `sign` varchar(1) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `entity_id_idx` (`entity_id`),
  KEY `user_id_idx` (`user_id`),
  KEY `req_date_idx` (`request_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Apparently the recommended approach is not to add any default value to the new column, and to proceed with something like
ALTER TABLE interaction
ALGORITHM=INPLACE,
LOCK=NONE,
ADD COLUMN type VARCHAR(32);
I attempted this on a test machine, but I want to understand the underlying behaviour of the MySQL server while it processes this command.
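One way to see which phase such an ALTER is in (and roughly how far along it is) on 5.7 is to enable the InnoDB online-DDL stage instruments in performance_schema; a monitoring sketch:
-- enable the ALTER TABLE stage instruments and the stage-event consumers
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES' WHERE NAME LIKE 'stage/innodb/alter%';
UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES' WHERE NAME LIKE 'events_stages_%';
-- while the ALTER runs, from another session:
SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
FROM performance_schema.events_stages_current;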
Here is a screenshot, for reference, of GCP observability for the VM this server is hosted on:

Query OK, 0 rows affected (1 day 10 hours 46 min 13.20 sec)
Records: 0 Duplicates: 0 Warnings: 0
**Q3.** Below is the observability reference for this duration. Is this behaviour guaranteed to be consistent, such that if I run the same command on a production environment that also serves reads/writes besides this query, the resource consumption would remain similar and we could sustain it? [My test VM disk is 300 GB; do I really need to allow for this much additional disk, or more, when executing this query on production too?]

Naman
(123 rep)
Feb 16, 2025, 05:45 PM
• Last activity: Feb 18, 2025, 01:51 AM
1
votes
2
answers
842
views
Alter column type from int to bigint with minimal downtime
I have several tables with id columns that are declared to be serial (i.e. integer type with default nextval()).
Converting them to bigint seems to be quite straightforward:
ALTER TABLE my_table
ALTER COLUMN id
TYPE bigint;
But that might take a lot of time and lock the whole table. It's an okay-ish solution for cases where the whole site is already down due to integer overflow, but not so good for provisional changes.
How would one do that concurrently?
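One commonly described pattern, sketched very roughly below: add a parallel bigint column, backfill it in batches, and swap it in during a brief lock. my_table and id come from the question; the helper column name and batch range are placeholders, and the trigger, primary-key, and sequence handling are only hinted at in comments:
-- 1. add a parallel column; cheap, no table rewrite
ALTER TABLE my_table ADD COLUMN id_big bigint;
-- 2. backfill in small batches to keep locks and WAL bursts short
UPDATE my_table SET id_big = id
WHERE id >= 1 AND id < 100001 AND id_big IS NULL;
-- ...repeat for further ranges; keep newly inserted rows in sync (e.g. with a trigger)...
-- 3. brief switchover in one short transaction: move the sequence default to id_big,
--    add the primary key (using a unique index built CONCURRENTLY beforehand),
--    then drop the old column and rename id_big to id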
Eduard Sukharev
(129 rep)
Dec 25, 2023, 12:12 PM
• Last activity: Nov 28, 2024, 01:50 PM
1
votes
2
answers
1338
views
Alter table generated text column to normal text column
If I have a table with a generated text column, how do I alter the column to remove the generation expression to get a normal, not-generated text type? Here's the schema:
CREATE TABLE product_history (
product_id bigint,
name text GENERATED ALWAYS AS ( 'some name' ) STORED
);
What I'd like is a command to use generation by default but allow custom values. As of Postgres 13.3, I don't think that's possible. The default clause looks like it only applies to generated identity columns:
-- Command I want:
ALTER TABLE product_history ALTER COLUMN name SET GENERATED BY DEFAULT;
Here's the insert query I want to work:
INSERT INTO product_history (product_id, name) VALUES (1, 'foo');
-- ERROR: cannot insert into column "name"
-- Detail: Column "name" is a generated column.
I'm running Postgres 13.3.
The use case is creating a history table such that I can insert the transition table of a statement-level trigger into the history table. I can't do a blind insert from a select, because the transition table rows include the generated column.
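For the first part (turning name back into an ordinary, non-generated text column), Postgres 13 does provide DROP EXPRESSION; a sketch against the schema above (the already-stored generated values are kept):
ALTER TABLE product_history ALTER COLUMN name DROP EXPRESSION;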
Joe
(179 rep)
Jul 15, 2021, 04:23 AM
• Last activity: Nov 26, 2024, 10:00 AM
2
votes
2
answers
558
views
Do you need NOT VALID when adding a new foreign key column?
I know that adding a foreign key constraint requires a table scan and a SHARE ROW EXCLUSIVE lock on both tables.
To prevent the possibly lengthy table scan, the constraint can be added with the NOT VALID approach. But I'm wondering: when adding a new column, should you also use NOT VALID, or is Postgres smart enough to recognize that it's a new column and thus the whole table doesn't need to be scanned?
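For comparison, the explicit two-step NOT VALID pattern mentioned above would look roughly like this, reusing the table and column names from the generated SQL below (the constraint name is arbitrary):
ALTER TABLE "example"
    ADD CONSTRAINT "example_new_column_id_fk"
    FOREIGN KEY ("new_column_id") REFERENCES "another_table" ("id") NOT VALID;
-- later; scans the table but only takes a SHARE UPDATE EXCLUSIVE lock
ALTER TABLE "example" VALIDATE CONSTRAINT "example_new_column_id_fk";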
I'm using Django, and the generated SQL of adding a foreign key column looks like this:
ALTER TABLE "example"
    ADD COLUMN "new_column_id" integer NULL
    CONSTRAINT "example_new_column_id_fk"
    REFERENCES "another_table" ("id") DEFERRABLE INITIALLY DEFERRED;
SET CONSTRAINTS "example_new_column_id_b781b6be_fk" IMMEDIATE;
Ruud van den Boomen
(123 rep)
Nov 22, 2024, 12:35 PM
• Last activity: Nov 24, 2024, 09:13 PM
2
votes
1
answers
588
views
Unable to alter CDC tables
I have an issue with being unable to alter a CDC table that previously caused us no issue.
We run ALTER statements on our CDC tables to add a column to them, and we are now starting to face an error when doing so.
The query we are trying to run is
ALTER TABLE cdc.dbo_TABLE_NAME_CT
ADD processed BIT DEFAULT 0 NULL
Which now returns the following error
> Msg 2104, Level 16, State 13, Line 34
> Cannot alter the Error: 8277, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
'dbo_TABLE_NAME', because you do not have permission.
I'm using SQL Server 2016 (v13.0.7045.2).
On a 13.0.7029 SQL server we are able to run the alter commands, so it feels like something has changed as part of a patch.
The problem is similar to this issue here https://stackoverflow.com/questions/79075830/error-while-trying-to-alter-cdc-table-on-sql-server
This has happened on multiple environments now, out of the blue, where it previously worked fine; I am unsure whether something changed in a recent patch that causes this.
I know altering cdc tables is not recommended but this didn't cause us any issues previously.
I wondered if anyone had any ideas on this as I'm a bit unsure where to go with this.
I am unsure if this is intended behavior or a bug - there is no mention in the Microsoft service packs that they have done something to disable the ability to add a column to the tables.
If this is indeed how things are going to work going forward, we will adjust accordingly, but right now I don't know why this has suddenly stopped working.
Fabio
(23 rep)
Nov 18, 2024, 07:54 PM
• Last activity: Nov 20, 2024, 07:12 AM
0
votes
1
answers
121
views
Add RowVersion column to a very large table
We need to add a rowversion column to a very large table. We are on SQL Server 2022 and came across a post where Paul White mentioned trace flag 4085.
Is there any information if that trace flag is officially supported?
Thank you.
SqlData
(39 rep)
Oct 22, 2024, 06:00 AM
• Last activity: Oct 30, 2024, 02:26 PM
2
votes
3
answers
8043
views
Add ENUM attribute to existing table using ALTER
I am using Oracle XE and trying to add an attribute restricted to only three available values, 'Small', 'Medium', 'Large', to an existing table using ALTER TABLE and ENUM.
I tried
ALTER TABLE TRUCK ADD TTYPE ENUM('SMALL','MEDIUM','LARGE');
but I get an "invalid option" error:
ALTER TABLE TRUCK ADD TTYPE ENUM('SMALL','MEDIUM','LARGE');
*
where the error highlights after ENUM.
I think I have a syntax error. Please help me resolve it.
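For what it's worth, Oracle has no ENUM type at all; the usual substitute is a VARCHAR2 column with a CHECK constraint. A sketch with a hypothetical constraint name and length:
ALTER TABLE TRUCK ADD (
  TTYPE VARCHAR2(10)
    CONSTRAINT TRUCK_TTYPE_CHK CHECK (TTYPE IN ('SMALL', 'MEDIUM', 'LARGE'))
);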
nubbear
(23 rep)
Nov 3, 2014, 06:24 PM
• Last activity: Sep 2, 2024, 04:30 PM
47
votes
5
answers
63552
views
I need to run VACUUM FULL with no available disk space
I have one table that is taking up close to 90% of the disk space on our server. I have decided to drop a few columns to free up space, but I need to return the space to the OS. The problem, though, is that I'm not sure what will happen if I run VACUUM FULL and there is not enough free space to make a copy of the table.
I understand that VACUUM FULL should not be used but I figured it was the best option in this scenario.
Any ideas would be appreciated.
I'm using PostgreSQL 9.0.6
Justin Rhyne
(573 rep)
Apr 25, 2012, 08:31 PM
• Last activity: Aug 9, 2024, 10:09 AM
0
votes
3
answers
138
views
Alter table - add a new column as the first column in the table - sql server
Before doing a restore of a database I check how much space I have in each drive on the destination server.
I use this script:
IF OBJECT_ID ('tempdb..#FREE_SPACE_DRIVES','u') IS NOT NULL
BEGIN
    DROP TABLE #FREE_SPACE_DRIVES;
END
CREATE TABLE #FREE_SPACE_DRIVES(
    DRIVE CHAR(1) PRIMARY KEY,
    FREESPACE BIGINT NOT NULL
)
INSERT INTO #FREE_SPACE_DRIVES
EXECUTE master.dbo.xp_fixeddrives;
Then, to see the available disk space, I use the following query:
select d.DRIVE
      ,[DriveFreeSpace(GB)] = REPLACE(CONVERT(VARCHAR(50),CAST( CAST(d.FREESPACE/1024.00 as DECIMAL(12,2)) AS MONEY),1), '.00','')
from #FREE_SPACE_DRIVES D with(nolock)
and that gives me:
(screenshot of the drive/free-space output)
That is all good, but now I want the server name shown in the picture as well. I can add @@servername to my script and it will show the server name too:
select [Server Name] = 'DBA Paradise' --@@servername
      ,d.DRIVE
      ,[DriveFreeSpace(GB)] = REPLACE(CONVERT(VARCHAR(50),CAST( CAST(d.FREESPACE/1024.00 as DECIMAL(12,2)) AS MONEY),1), '.00','')
from #FREE_SPACE_DRIVES D with(nolock)
(screenshot of the output with the server name column)
My question is actually about ALTER TABLE. When I do this alter table:
ALTER TABLE #FREE_SPACE_DRIVES
add server_name nvarchar(128) not null default (@@servername)
and then run this select:
select * from #FREE_SPACE_DRIVES
I get:
(screenshot of the output with server_name as the last column)
That is all good, but I need the new column to be the first one in my table. Is there a way to get this done? The result would then be
> server_name, drive, freespace
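For context, SQL Server's ALTER TABLE ... ADD has no FIRST/AFTER clause (unlike MySQL), so column position can only be chosen when the table is created or by rebuilding it. Since this is a temp table recreated on every run, one sketch is simply to declare the new column first at creation time:
CREATE TABLE #FREE_SPACE_DRIVES(
    server_name NVARCHAR(128) NOT NULL DEFAULT (@@SERVERNAME),
    DRIVE       CHAR(1) PRIMARY KEY,
    FREESPACE   BIGINT NOT NULL
);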
Marcello Miorelli
(17274 rep)
Jul 3, 2024, 10:29 AM
• Last activity: Aug 3, 2024, 06:56 AM