Adding SPARSE made the table much bigger
9 votes, 1 answer, 1200 views
I have a generic log table, about 5m rows.
There's a "strongly typed" field that stores event type, and a bunch of "losely typed" columns that contain data relevant to the event. That is, meaning of those "losely typed" columns depends on the type of the event.
These columns are defined as:
USER_CHAR1 nvarchar(150) null,
USER_CHAR2 nvarchar(150) null,
USER_CHAR3 nvarchar(150) null,
USER_CHAR4 nvarchar(150) null,
USER_CHAR5 nvarchar(150) null,
USER_INTEGER1 int null,
USER_INTEGER2 int null,
USER_INTEGER3 int null,
USER_INTEGER4 int null,
USER_INTEGER5 int null,
USER_FLAG1 bit null,
USER_FLAG2 bit null,
USER_FLAG3 bit null,
USER_FLAG4 bit null,
USER_FLAG5 bit null,
USER_FLOAT1 float null,
USER_FLOAT2 float null,
USER_FLOAT3 float null,
USER_FLOAT4 float null,
USER_FLOAT5 float null
Columns 1 and 2 in each type are heavily used, but starting from number 3, very few event types provide that much information. I therefore decided to mark columns 3-5 in each type as SPARSE.
I did some analysis first, and saw that, indeed, at least 80% of the data in each of those columns is null, and in some of them 100% is null. According to the 40% savings threshold table, SPARSE would be a huge win on them.
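For reference, the check was along these lines (dbo.EventLog stands in for the actual table name):

SELECT
    COUNT(*) AS total_rows,
    100.0 * SUM(CASE WHEN USER_CHAR3 IS NULL THEN 1 ELSE 0 END) / COUNT(*) AS pct_null_user_char3,
    100.0 * SUM(CASE WHEN USER_INTEGER3 IS NULL THEN 1 ELSE 0 END) / COUNT(*) AS pct_null_user_integer3
FROM dbo.EventLog;
-- ...and similarly for the remaining 3-5 columns in each group.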
So I went ahead and applied SPARSE to columns 3-5 in each group.
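The change was a series of ALTER COLUMN statements, roughly like this (dbo.EventLog is a placeholder again):

ALTER TABLE dbo.EventLog ALTER COLUMN USER_CHAR3 nvarchar(150) SPARSE NULL;
ALTER TABLE dbo.EventLog ALTER COLUMN USER_INTEGER3 int SPARSE NULL;
ALTER TABLE dbo.EventLog ALTER COLUMN USER_FLAG3 bit SPARSE NULL;
ALTER TABLE dbo.EventLog ALTER COLUMN USER_FLOAT3 float SPARSE NULL;
-- ...and the same for columns 4 and 5 of each group.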
Now my table takes about 1.8 GB in data space as reported by sp_spaceused, whereas before sparsing it was 1 GB.
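To be precise, it's the "data" column in the output of

EXEC sp_spaceused N'dbo.EventLog';  -- table name is a placeholder as before

that went from roughly 1 GB to roughly 1.8 GB.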
I tried DBCC CLEANTABLE, but it had no effect. Then DBCC SHRINKDATABASE, no effect either. Puzzled, I removed SPARSE and repeated the DBCCs. The size of the table remained at 1.8 GB.
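For completeness, the cleanup attempts looked roughly like this (database and table names are placeholders):

DBCC CLEANTABLE (MyLogDb, 'dbo.EventLog');  -- reclaim space left by dropped/altered variable-length columns
DBCC SHRINKDATABASE (MyLogDb);

-- removing SPARSE again was the reverse ALTER, e.g.:
ALTER TABLE dbo.EventLog ALTER COLUMN USER_CHAR3 nvarchar(150) NULL;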
What gives?
Asked by GSerg
(1353 rep)
Feb 4, 2012, 03:49 PM
Last activity: Aug 23, 2018, 09:19 AM