Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
335
views
Any potential complications from ms sql backup db to disk = 'nul', disk = 'nul', ...?
Our development team has a tool that notifies us of issues with our DB backups, but they have not yet given us the ability to exclude certain databases from the alerts - DBs that we do not want backups of are alerting us.
The only way I've found to resolve this is backups to the Windows NUL device; however, that takes much more time than I expected, and it seems to still use processor time, and probably takes RAM and I/O as well.
In attempting to speed up the process I tried several sets of backup options, collected data about time, and found that the more backup files I allocate as NUL, the faster it went. SQL 2008 R2 capped me at 64 files, i.e.
BACKUP DATABASE TO DISK = 'NUL', DISK = 'NUL', ...
with 64 × DISK = 'NUL' being the clear winner as the fastest option.
I'm worried about the server resources though, and am now wondering about striking a balance between 'speed' and 'not hogging all resources while this is happening'.
Any advice on resources or other ideas would be awesome. Thanks.
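For illustration, a minimal sketch of a throttled NUL backup (the database name and the WITH values below are placeholders, not recommendations); COPY_ONLY keeps the dummy backup from affecting the differential/log chain, and BUFFERCOUNT * MAXTRANSFERSIZE roughly bounds the memory used for backup buffers:
-- Sketch only: 'MyUnwantedDb' and the numbers below are placeholders.
-- COPY_ONLY avoids disturbing the differential/log backup chain;
-- BUFFERCOUNT * MAXTRANSFERSIZE roughly bounds the backup buffer memory.
BACKUP DATABASE MyUnwantedDb
TO DISK = 'NUL', DISK = 'NUL', DISK = 'NUL', DISK = 'NUL'
WITH COPY_ONLY,
     BUFFERCOUNT = 16,
     MAXTRANSFERSIZE = 1048576;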
Dave Sims
(333 rep)
Aug 17, 2017, 08:09 PM
• Last activity: Apr 30, 2025, 11:08 PM
-1
votes
1
answers
49
views
MySQL insert fails with column cannot be null, but the select doesn't return anything null
I want to insert a few rows into my database but it fails.
insert
into `ticket uses`
(`journey serial`
, `ticket serial`
)
select journeys.serial as `journey serial`
, tickets.serial as `ticket serial`
from journeys, tickets
where journeys.serial between 16326 AND 16329
and tickets.serial in (1319, 1751);
It fails with
Column 'journey serial' cannot be null
However, the select statement does not return anything null.
MariaDB [transport]> select journeys.serial as `journey serial`
, tickets.serial as `ticket serial`
from journeys, tickets where journeys.serial between 16326 and 16329 and tickets.serial in (1319, 1751);
+----------------+---------------+
| journey serial | ticket serial |
+----------------+---------------+
| 16326 | 1319 |
| 16326 | 1751 |
| 16327 | 1319 |
| 16327 | 1751 |
| 16328 | 1319 |
| 16328 | 1751 |
| 16329 | 1319 |
| 16329 | 1751 |
+----------------+---------------+
8 rows in set (0.006 sec)
What's wrong with my query?
Michael Tsang
(109 rep)
Mar 31, 2025, 10:59 AM
• Last activity: Apr 19, 2025, 04:24 PM
0
votes
1
answers
837
views
postgres set null will improve performance in sparse data?
I have ~50 million data entries, and most of them (maybe 80%) have null values in a certain column (currently I assign a dummy value to those entries). However, I do frequent searching on the non-null (with respect to that column) entries.
I'm not sure what the performance implications of null values and indexing are. (What I can do is assign a dummy value to those 80% of entries if null values will hurt performance in my use case.)
I guess null entries will be excluded from the normal index (or put into some separate null index), so when I explicitly search on non-null values this should improve performance, since the index is much smaller in my case.
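For reference, Postgres supports partial indexes, which index only the rows matching a predicate; a minimal sketch (table and column names are hypothetical):
-- Hypothetical names: only rows where some_col is non-null end up in the index,
-- so it stays small and matches the frequent "non-null only" searches.
CREATE INDEX entries_some_col_idx
    ON entries (some_col)
    WHERE some_col IS NOT NULL;

-- A query like this can use the partial index (equality implies NOT NULL):
SELECT * FROM entries WHERE some_col = 42;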
Ziqi Liu
(125 rep)
Oct 6, 2021, 06:25 PM
• Last activity: Apr 8, 2025, 10:22 AM
2
votes
2
answers
17780
views
SELECT only IS NOT NULL columns with LEFT JOIN
I have a query where I need to select product options from multiple tables.
SELECT
p.option_id,
p.product_id,
p.sku,
p.qty,
p.price,
c.color_label,
s.size_label,
m.material_label,
v.variation_label
FROM product_options p
LEFT JOIN option_color c ON c.color_id = p.color_id
LEFT JOIN option_size s ON s.size_id = p.size_id
LEFT JOIN option_material m ON m.material_id = p.material_id
LEFT JOIN option_variation v ON v.variation_id = p.variation_id
WHERE p.product_id = 1 ;
This works fine. However, I get NULL values, like so:
**Business rule**: if any of these columns contains NULL, the whole product variation (column) for that particular option will be NULL.
MY QUESTION
How can I dynamically remove any of the columns whose result contains NULL?
Example from the above picture:
> material_label = NULL
then the whole column should be removed.
I hope I explained it well.
**EDIT**
Just adding further explanation.
What I'm trying to achieve is as follows:
1. Get a specific product from the DB with its variants (try to minimize
non-required data).
2. Then do some array cleaning and grouping, then
maybe convert it to JSON using PHP.
3. Then use JavaScript to get the user's selection on the product.
Maybe this link can clear things up:

AmQ7
(53 rep)
May 18, 2022, 11:15 PM
• Last activity: Apr 5, 2025, 07:12 AM
2
votes
3
answers
100
views
Best database design for a dress up game?
I am making a dress up game using Postgres as the database. A player can optionally wear a hat, pants, glasses, shirt, and shoes. I currently have a `player` table and it has columns like `name`, `created_at`, etc.
It is pretty common to query a player (or to view another player), but pretty rare for a player to change their cosmetics. Generally most players will equip whatever they want to make them look nice, and change their outfit occasionally, especially when they find a new wearable item to equip.
There are many different items for each category of wearables. For example, there might be 20 different items for pants, 40 different items for hats, etc. but a player can only wear one hat at a time (or none!), one pair of pants at a time, one pair of glasses at a time, etc.
I envision three possible ways to do this:
### 1. Integer columns
Just make the player table something like this:
player_id, name, created_at, hat, pants, glasses, shirt, shoes
Where the wearable columns (hat, pants, glasses, shirt, shoes) are all non-nullable integers, and the `hat` value could either be `0` (user is not wearing an item) or, say, 35 (the integer ID of the hat item).
This is very easy to implement, pretty efficient storage-wise, and very fast to query, since it is very common to query the players (avoids a join) but pretty rare to update a player's wearables.
Although another issue with this approach is that updating a row in postgres clones the entire row, so if the player table is quite wide, then it would be more expensive to update a player's wearables, because it would have to clone the entire player row to update that column. But again, reads are more common than writes.
### 2. Nullable integer columns
The same table layout as above, except instead the wearable columns are nullable integers, where null means the player does not have a wearable equipped in that slot. This should have the same query efficiency but probably be even more storage efficient, because nulls in postgres often don't take up any space.
In addition, you could later add another column (if a new wearable category was added to the game, like socks), and because it was nullable, we could use postgres' internal optimizations to avoid having to update every row (since it would be null by default and adding a new column to a postgres table that is null by default doesn't require updating the affected rows)
### 3. Separate tables
Add a `wearable` table with the following structure:
player_id, wearable_category_id, wearable_id
And probably have another definition table for wearable categories like:
wearable_category_id, wearable_name
1, Hat
2, Pants
etc..
This solution is the commonly recommended one I believe. But.. every cosmetic you equip requires inserting a new row (per-row space overhead and also need to have a column for the player id and wearable category id for each, to ensure you aren't able to equip two hats at once, for example.) In addition, query performance is worse because you need to join the wearables table every time you want to look at a player.
The advantage is that adding a new wearable category doesn't require updating every row (although I believe the null solution also has this feature).
Another benefit though is that you only need to write one set of queries to add and remove wearables (removing one from your player character should return it to your inventory, and adding one should likewise remove it from your inventory). The issue with the separate columns approach is that you need to write a new set of queries for every new wearable column, basically only changing the column name (unless you use dynamic SQL generation).
And a final benefit is that the queries to add or remove a cosmetic would probably feel more natural to write and likely be simpler. Inserting a row and checking if it inserted, or deleting a row and checking if it was deleted, is a bit more natural than checking if the value in that column is 0 (or null) or not, for determining whether or not there was a change.
Lastly, if we ever wanted to have any metadata (unlikely but possible), for example (an equipped_at timestamp), then this solution would obviously be superior, but I doubt I'll ever need anything like that (who knows though I suppose).
Anyways, I've been stuck on this design decision for a while. What is best?
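For reference, a minimal Postgres sketch of option 3 (names are illustrative, and it assumes a player table keyed by player_id); the composite primary key is what prevents equipping two hats at once:
-- Sketch only; assumes a player table keyed by player_id.
CREATE TABLE wearable_category (
    wearable_category_id int  PRIMARY KEY,
    wearable_name        text NOT NULL
);

CREATE TABLE player_wearable (
    player_id            int NOT NULL REFERENCES player (player_id),
    wearable_category_id int NOT NULL REFERENCES wearable_category (wearable_category_id),
    wearable_id          int NOT NULL,
    -- one row per (player, category): at most one hat, one pair of pants, and so on
    PRIMARY KEY (player_id, wearable_category_id)
);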
Ryan Peschel
(141 rep)
Mar 16, 2025, 07:20 PM
• Last activity: Mar 25, 2025, 12:12 PM
2
votes
3
answers
226
views
When I combine the NOT and BETWEEN operators, the query unexpectedly retrieves additional null values
Query 1 and Query 2 share the same semantics, both involving the combination of the NOT and BETWEEN operators, with the evaluation of the BETWEEN expression being FALSE in both cases. However, Query 1 unexpectedly retrieves more null values than Query 2. Could this be a logical error?
DROP TABLE tsqsdb0_t0;
CREATE TABLE tsqsdb0_t0(
time TIMESTAMP PRIMARY KEY,
c0 INTEGER
) USING TIMESERIES ;
INSERT OR REPLACE INTO tsqsdb0_t0(time, c0)
VALUES (TIMESTAMP('2022-01-01T16:00:00Z'), 0),
(TIMESTAMP('2022-01-01T16:00:05Z'), null);
---------------------------
**Query 1**
SELECT c0,
time
FROM tsqsdb0_t0
WHERE NOT ( 2 BETWEEN c0 AND 1);
------------------------------------
**Query 2**
SELECT c0,
time
FROM tsqsdb0_t0
WHERE NOT ( -1 BETWEEN c0 AND 1);
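For context, assuming the engine follows the SQL-standard expansion of BETWEEN and three-valued logic, the two predicates evaluate differently on the NULL row (a sketch, written as comments):
-- Sketch of the standard BETWEEN expansion for the row where c0 IS NULL:
-- Query 1:  NOT (2 BETWEEN c0 AND 1)
--         = NOT (2 >= c0 AND 2 <= 1)
--         = NOT (UNKNOWN AND FALSE) = NOT FALSE = TRUE      -- row is returned
-- Query 2:  NOT (-1 BETWEEN c0 AND 1)
--         = NOT (-1 >= c0 AND -1 <= 1)
--         = NOT (UNKNOWN AND TRUE) = NOT UNKNOWN = UNKNOWN  -- row is filtered out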
Alice
(163 rep)
Jan 24, 2025, 12:21 PM
• Last activity: Jan 26, 2025, 03:37 AM
3
votes
1
answers
361
views
Firebird: table and null data
I'm working with a Firebird database. I have a big table with many thousands of rows. Each row records the status of an object at regular intervals, for many objects. I need to register new properties for one object only.
I think I have 2 possibilities:
1. add the new columns to the table even if I have a lot of null values
2. create a table with only the properties that interest me and bind it appropriately to the first one
Which solution is best to avoid a loss of performance?
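If option 2 were chosen, a minimal sketch in Firebird SQL might look like this (all names are hypothetical); the extension table holds the new properties only for the rows that need them:
-- Sketch with hypothetical names: one optional extension row per status row.
CREATE TABLE OBJECT_STATUS_EXTRA (
    STATUS_ID  INTEGER NOT NULL PRIMARY KEY,
    NEW_PROP_1 VARCHAR(50),
    NEW_PROP_2 INTEGER,
    CONSTRAINT FK_STATUS_EXTRA
        FOREIGN KEY (STATUS_ID) REFERENCES OBJECT_STATUS (ID)
);

-- Reading a status row together with its (possibly absent) extra properties:
SELECT s.*, e.NEW_PROP_1, e.NEW_PROP_2
FROM OBJECT_STATUS s
LEFT JOIN OBJECT_STATUS_EXTRA e ON e.STATUS_ID = s.ID;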
salem
(33 rep)
Feb 19, 2018, 01:38 PM
• Last activity: Jan 7, 2025, 12:55 PM
0
votes
1
answers
37
views
How to properly design a membership history across multiple groups?
I'm looking to design an application (Python with sqlite) which deals with some contributors moving across several groups through time.
My design looks currently like (PK in bold weight, FK in italic, both if a FK is part of the PK):
* Group(**Id**, Name)
* Contributor(**Id**, Name, FirstName)
* Action(**ActionId**, Name)
* ContributorAction (***ContributorId***, ***ActionId***, Date)
* Membership (***GroupId***, ***ContributorId***, **StartDate**, **EndDate**)
My aim is to keep a history of memberships in order to retrieve the right membership group of a contributor at the time of a given contribution.
Doing that involves using date columns in a primary key (the Membership table). However, `EndDate` could be `NULL` for the current membership of a contributor, and I read that this may not be a good (or even permitted) practice.
Did I miss something here to get at least a 3NF design?
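For what it's worth, one common pattern (shown here only as a sketch, reusing the names from the question) is to key the membership on the start date alone and keep EndDate nullable, outside the primary key:
-- Sketch only: EndDate stays nullable (NULL = current membership) and is not part of the key.
CREATE TABLE Membership (
    GroupId       INTEGER NOT NULL REFERENCES "Group"(Id),
    ContributorId INTEGER NOT NULL REFERENCES Contributor(Id),
    StartDate     TEXT    NOT NULL,
    EndDate       TEXT,               -- NULL while the membership is current
    PRIMARY KEY (GroupId, ContributorId, StartDate)
);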
Amessihel
(103 rep)
Dec 23, 2024, 04:22 PM
• Last activity: Dec 23, 2024, 04:49 PM
0
votes
1
answers
37
views
How to handle the deletion of records with "shared ownership"?
Consider the following scenario:
In a trading card game, a user may offer to trade their cards. The offer can be public (any user can accept, fk_receiver_id null on insert) or private (only one user can accept, fk_receiver_id set on insert). So we have the following junction table
I want to allow users to delete their accounts. I would be happy to cascade delete most things, but the trades have two users associated, so I can't just delete everything. I could make fk_offerer_id nullable and use ON DELETE SET NULL, but I don't want to allow null values on insert.
Should I make fk_offerer_id nullable anyway? Is there a better solution?
The behavior I'm looking for when deleting is:
- If fk_receiver_id is set, set fk_offerer_id to NULL or other value that signals "deleted user"
- If fk_receiver_id is NULL (trade is pending/cancelled), delete the record
Is there a way to handle this in the db? (Ideally without triggers)

Fullaccess
(1 rep)
Nov 27, 2024, 10:33 PM
• Last activity: Nov 28, 2024, 01:39 PM
161
votes
3
answers
179752
views
PostgreSQL multi-column unique constraint and NULL values
I have a table like the following:
create table my_table (
id int8 not null,
id_A int8 not null,
id_B int8 not null,
id_C int8 null,
constraint pk_my_table primary key (id),
constraint u_constrainte unique (id_A, id_B, id_C)
);
And I want `(id_A, id_B, id_C)` to be distinct in any situation. So the following two inserts must result in an error:
INSERT INTO my_table VALUES (1, 1, 2, NULL);
INSERT INTO my_table VALUES (2, 1, 2, NULL);
But it doesn't behave as expected because, according to the documentation, two `NULL` values are not compared to each other, so both inserts pass without error.
How can I guarantee my unique constraint even if `id_C` can be `NULL` in this case?
Actually, the real question is: can I guarantee this kind of uniqueness in "pure sql" or do I have to implement it on a higher level (java in my case)?
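For reference, the two pure-SQL approaches usually mentioned for this in Postgres are partial unique indexes (any version) and, from PostgreSQL 15 onward, UNIQUE NULLS NOT DISTINCT; a sketch of both:
-- Option A (any supported version): two partial unique indexes.
CREATE UNIQUE INDEX u_constrainte_not_null
    ON my_table (id_A, id_B, id_C)
    WHERE id_C IS NOT NULL;
CREATE UNIQUE INDEX u_constrainte_null
    ON my_table (id_A, id_B)
    WHERE id_C IS NULL;

-- Option B (PostgreSQL 15+): make the constraint itself treat NULLs as equal.
ALTER TABLE my_table
    ADD CONSTRAINT u_constrainte_nnd UNIQUE NULLS NOT DISTINCT (id_A, id_B, id_C);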
Manuel Leduc
(1721 rep)
Dec 27, 2011, 09:10 AM
• Last activity: Nov 10, 2024, 01:56 AM
2
votes
1
answers
356
views
Can one define a NULLable composite type whose fields are NOT NULL?
I would like to define a composite type whose fields are `NOT NULL`, while at the same time allowing the value itself to be `NULL` in a table column. My first attempt was to define a `DOMAIN` on the composite type with a `CHECK` constraint that ensured the fields are `NOT NULL`; unfortunately this prevents `NULL` itself from being `INSERT`ed into the table:
BEGIN;
CREATE TYPE foo AS (x int, y int);
CREATE DOMAIN non_null_foo AS foo CHECK((VALUE).x IS NOT NULL AND (VALUE).y IS NOT NULL);
CREATE TABLE bar(y non_null_foo);
INSERT INTO bar VALUES (NULL);
ROLLBACK;
This errors with: ERROR: value for domain non_null_foo violates check constraint "non_null_foo_check".
My second attempt was to allow `NULL` for `VALUE` in the `DOMAIN`, but this also does not work since it now allows a value where all fields are `NULL`:
BEGIN;
CREATE TYPE foo AS (x int, y int);
CREATE DOMAIN non_null_foo AS foo CHECK(VALUE IS NULL OR ((VALUE).x IS NOT NULL AND (VALUE).y IS NOT NULL));
CREATE TABLE bar(y non_null_foo);
INSERT INTO bar VALUES ((NULL, NULL)); --succeeds
INSERT INTO bar VALUES ((1, NULL)); --fails
ROLLBACK;
It's as if PostgreSQL is unable to distinguish between `NULL` and a value where all the fields are `NULL`. Is there something I'm missing?
philomathic_life
(472 rep)
Oct 7, 2024, 08:37 PM
• Last activity: Oct 10, 2024, 03:06 AM
8
votes
2
answers
1456
views
Is an invalid date considered the same as a NULL value?
I am using MySQL 5.7 and I have noticed something that I cannot explain with my current knowledge, so here it goes.
Is an "invalid" date `0000-00-00` considered the same as `NULL` for a `date` column type with no possibility of nulls? See the following example:
create table if not exists example
(
id int auto_increment primary key,
doskip date default '0000-00-00' not null
);
INSERT INTO example VALUES(1, '0000-00-00'), (2, '0000-00-00'), (3, '0000-00-00');
Given the above, running the following queries gave me the exact same count (3), and I wonder why. Is there any explanation for this?
SELECT COUNT(*) FROM example j WHERE j.doskip = '0000-00-00';
SELECT COUNT(*) FROM example j WHERE j.doskip IS NULL;
Here is a Fiddle showing this behavior; is it because of the mode?
ReynierPM
(1888 rep)
Aug 14, 2024, 02:21 PM
• Last activity: Aug 17, 2024, 05:06 PM
5
votes
2
answers
30015
views
How do I select arrays that are not empty?
Why is this so tricky? What is the token set to, such that it isn't equal to null nor to an empty string?
SELECT lexemes
FROM ts_debug('This is a title')
WHERE alias = 'asciiword';
lexemes
---------
{}
{}
{}
{titl}
(4 rows)
Ok.. So I want to get rid of `{}`,
SELECT lexemes
FROM ts_debug('This is a title')
WHERE alias = 'asciiword'
AND lexemes <> '{}'
AND lexemes <> ARRAY[]::text[]
AND lexemes IS NOT NULL
AND lexemes <> ARRAY[' ']
AND lexemes <> ARRAY[null]::text[];
**I knew most of these wouldn't work**, but I'm totally confused why `<> '{}'` wouldn't work, nor `<> ARRAY[]::text[]`. How do I filter this out?
Evan Carroll
(65502 rep)
Sep 23, 2017, 04:11 AM
• Last activity: May 29, 2024, 01:32 PM
10
votes
1
answers
13561
views
How to get 0 as array_length() result when there are no elements
I have a query like this:
select id, array_length(users_who_like_ids,1) as ct
from queryables
order by 2 desc;
But empty arrays (with no elements) sort on top. I'd rather have `array_length()` return 0 for that case so that this would get sorted to the bottom.
I'm probably not understanding `array_length()` (ok, definitely) but:
embers_dev2=# select array_length(array[1,2], 1), array_length(ARRAY[]::integer[],1);
should return 0 rather than nothing (NULL), right? Could I do an if statement like inline on it?
## Edit
According to this related answer on SO, it looks like `COALESCE` might be what I want - but I am open to better ideas:
- postgresql return 0 if returned value is null
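A sketch of the COALESCE route applied to the original query (same table and column names assumed); PostgreSQL 9.4+ also offers cardinality(), which returns 0 for an empty array:
-- COALESCE turns the NULL that array_length() yields for an empty array into 0,
-- so empty arrays sort to the bottom of the descending order.
select id, coalesce(array_length(users_who_like_ids, 1), 0) as ct
from queryables
order by 2 desc;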
timpone
(657 rep)
Dec 29, 2015, 02:52 AM
• Last activity: May 28, 2024, 10:20 PM
-1
votes
2
answers
8008
views
How to return zero if no rows are returned?
I spent half my day on it but still did not find a solution. I tried COALESCE, IFNULL, IS NULL, IF conditions, IS NOT NULL, etc., but nothing worked for me. The query is:
SELECT
COALESCE(SUM(leave_duration),0) AS On_leaves
FROM
`emp_leave`
WHERE DATE_FORMAT(start_date,'%Y-%m-%d')>=DATE_FORMAT(CURDATE(), '%Y-%m-%01')
AND DATE_FORMAT(end_date,'%Y-%m-%d')<=DATE_FORMAT(CURDATE(), '%Y-%m-%31') AND em_id =724 AND leave_status= 'Approved' GROUP BY em_id
I want it to show 0 if there are no leaves in the table :/
please help
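For what it's worth, a sketch of one likely explanation: with GROUP BY em_id there is no group at all when no rows match, so no row is returned; since the query already filters on a single em_id, dropping the GROUP BY makes the aggregate return exactly one row:
SELECT
    COALESCE(SUM(leave_duration), 0) AS On_leaves
FROM
    `emp_leave`
WHERE DATE_FORMAT(start_date,'%Y-%m-%d') >= DATE_FORMAT(CURDATE(), '%Y-%m-%01')
  AND DATE_FORMAT(end_date,'%Y-%m-%d') <= DATE_FORMAT(CURDATE(), '%Y-%m-%31')
  AND em_id = 724
  AND leave_status = 'Approved';
-- Note: without GROUP BY, an aggregate query always returns exactly one row,
-- so COALESCE(SUM(...), 0) can actually produce the 0.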
Nasir Hussain
(3 rep)
Apr 16, 2022, 11:42 AM
• Last activity: May 14, 2024, 06:55 AM
19
votes
1
answers
10293
views
SQL Server - Adding non-nullable column to existing table - SSDT Publishing
Due to business logic, we need a new column in a table that is critical to ensure is always populated. Therefore it should be added to the table as `NOT NULL`. Unlike previous questions that explain how to do this *manually*, this needs to be managed by the SSDT publish.
I have been banging my head against the wall for a while over this simple-sounding task due to some realizations:
1. A default value is not appropriate, and it cannot be a computed column. Perhaps it is a foreign key column, but for others we cannot use a fake value like 0 or -1 because those values might have significance (e.g. numeric data).
2. Adding the column in a pre-deployment script will fail the publish when it automatically tries to create the same column, a second time (even if the pre-deployment script is written to be idempotent) (this one is really aggravating as I can otherwise think of an easy solution)
3. Altering the column to NOT NULL in a post-deployment script will be reverted each time the SSDT schema refresh occurs (so at the very least our codebase will mismatch between source control and what is actually on the server)
4. Adding the column as nullable now with the intention of changing to NOT NULL in the future does not work across multiple branches/forks in source control, as the target systems will not necessarily all have the table in the same state next time they are upgraded (not that this is a good approach anyway IMO)
The approach I have heard from others is to directly update the table definition (so the schema refresh is consistent), write a predeployment script that *moves* the entire contents of the table to a temporary table with the new column population logic included, then to move the rows back in a postdeployment script. This seems risky as all hell though, and still pisses off the Publish Preview when it detects a NOT NULL column is being added to a table with existing data (since that validation runs before the predeployment scripting).
How should I go about adding a new, non-nullable column without risking orphaned data, or moving data back and forth on every publish with lengthy migration scripts?
Thanks.
Elaskanator
(761 rep)
Jun 1, 2018, 08:07 PM
• Last activity: Apr 19, 2024, 11:35 AM
0
votes
0
answers
739
views
In Snowflake, default value NULL on a not-nullable column?
In Snowflake, what are the implications of having a default value of null on a non-nullable column?
Also, when I describe a particular table in Snowflake, I see a default value of NULL in all columns. When my colleague describes the same table in Snowflake, in the same environment (PROD), they are seeing the default value as null (lower-case) but with all those lower case nulls greyed-out. Anyone out there run into this? What could it mean?
Jimbo
(65 rep)
Apr 2, 2024, 02:11 PM
0
votes
1
answers
82
views
How does a NULL value get created in the various DBMS?
I'm wondering if the creation of a NULL value is standardized across the various DBMS and, if not, how it differs across them.
Andrea Nerla
(1 rep)
Mar 27, 2024, 08:40 AM
• Last activity: Mar 27, 2024, 11:23 AM
1
votes
3
answers
1979
views
Take average of multiple nullable columns
I have an example table (year) as below in PostgreSQL 9.5:
Name | 2010 | 2011 | 2012
-------------------------
A | 10 | | 40
B | 10 | 20 | 30
Now, if I write a simple query as shown below to take the average of the columns `(2010, 2011, 2012)`, I will get the correct result for B, but the result for A will be NULL because of the NULL in the `2011` column:
select (2010+2011+2012)/3 as avg from year
Is there any way to write a query so that I can take the average of only the non-NULL values in a row?
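One pattern that fits here (a sketch, assuming the year columns are the quoted identifiers "2010", "2011", "2012"): feed the three values to avg() through a VALUES list, since aggregate functions ignore NULLs:
-- avg() skips NULLs, so row A yields (10 + 40) / 2 = 25 instead of NULL.
select name,
       (select avg(v)
          from (values (y."2010"), (y."2011"), (y."2012")) as t(v)) as avg
from year y;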
Jio
(113 rep)
Aug 10, 2019, 01:01 PM
• Last activity: Mar 13, 2024, 05:58 AM
2
votes
2
answers
101
views
NULL rows and IS NULL
I've been writing some queries that look like this:
SELECT
...
FROM table_a a
LEFT JOIN table_b b
ON b.id = a.id
LEFT JOIN table_c c
ON c.id = a.id
In these queries only one of the joins can be successful. In other words, a row from `table_a` is matched by either a row from `table_b` or a row from `table_c`.
In the select list, there is some logic based on which join succeeds. I can test this by doing the following:
SELECT
CASE
WHEN b IS NULL THEN
-- do something
ELSE NULL
END
FROM table_a a
LEFT JOIN table_b b
ON b.id = a.id
LEFT JOIN table_c c
ON c.id = a.id
When testing if a row is `NULL` you can use the table alias in an `IS NULL` expression, but it does not work as expected in an `IS NOT NULL` expression. If there are columns in `table_b` that are not `NULL`, the expression `table_b IS NOT NULL` will return `FALSE`.
However, the expression `NOT (table_b IS NULL)` will return `TRUE` as expected.
Why is this the case?
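A quick illustration in Postgres of how the standard defines the two tests on row values; they are not negations of each other:
-- ROW(...) IS NULL is TRUE only when every field is NULL;
-- ROW(...) IS NOT NULL is TRUE only when every field is NOT NULL;
-- a partly-null row fails both tests, so NOT (row IS NULL) can differ from row IS NOT NULL.
SELECT ROW(1, NULL) IS NULL        AS is_null,       -- false
       ROW(1, NULL) IS NOT NULL    AS is_not_null,   -- false
       NOT (ROW(1, NULL) IS NULL)  AS not_is_null;   -- true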
oorst
(133 rep)
Oct 7, 2019, 10:21 PM
• Last activity: Mar 10, 2024, 05:30 AM
Showing page 1 of 20 total questions