No space left on device but only 50% space and 1% inodes used
1 vote, 0 answers, 67 views
This is on an Iomega IX200 NAS which has been expanded from the original 2 TB disks to 4 TB disks.
It all looks good.
But when I try to save data to a new file I get a "No space left on device" error. I get this whether I try to save a file via the NAS drive share from another device or via an SSH session on the NAS itself.
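On the NAS itself the failure reproduces when creating any new file under the share, for example (path taken from the mounts below; the file name is just an illustration):
$ touch /mnt/pools/A/A0/testfile    # fails with the same "No space left on device" error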
df -h
reports:
Filesystem Size Used Avail Use% Mounted on
rootfs 50M 3.7M 47M 8% /
/dev/root.old 6.5M 2.1M 4.4M 33% /initrd
none 50M 3.7M 47M 8% /
/dev/md0_vg/BFDlv 4.0G 624M 3.2G 17% /boot
/dev/loop0 592M 538M 54M 91% /mnt/apps
/dev/loop1 4.9M 2.2M 2.5M 48% /etc
/dev/loop2 260K 260K 0 100% /oem
tmpfs 122M 0 122M 0% /mnt/apps/lib/init/rw
tmpfs 122M 0 122M 0% /dev/shm
/dev/mapper/md0_vg-vol1
16G 1.5G 15G 10% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
3.7T 2.0T 1.7T 55% /mnt/pools/A/A0
/mnt/pools/A/A0 is the volume that provides the shared storage.
df -h -i reports:
Filesystem Inodes IUsed IFree IUse% Mounted on
rootfs 31K 567 30K 2% /
/dev/root.old 1.7K 130 1.6K 8% /initrd
none 31K 567 30K 2% /
/dev/md0_vg/BFDlv 256K 20 256K 1% /boot
/dev/loop0 25K 25K 11 100% /mnt/apps
/dev/loop1 1.3K 1.1K 139 89% /etc
/dev/loop2 21 21 0 100% /oem
tmpfs 31K 4 31K 1% /mnt/apps/lib/init/rw
tmpfs 31K 1 31K 1% /dev/shm
/dev/mapper/md0_vg-vol1
17M 9.7K 16M 1% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d
742M 2.6M 739M 1% /mnt/pools/A/A0
When the partition was grown I ran lvresize and xfs_growfs, after which the volume started showing a capacity of 3.7 TB.
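The grow commands were along these lines (the exact extent argument isn't recorded; the LV path and mount point are taken from the df output above):
$ lvresize -l +100%FREE /dev/mapper/5244dd0f_vg-lv58141b0d   # extent argument assumed; extends the LV over the new free space
$ xfs_growfs /mnt/pools/A/A0                                 # grows the mounted XFS filesystem to fill the enlarged LV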
Disks/partitions:
$ parted -l
Model: Seagate ST4000VN008-2DR1 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 36.9kB 21.5GB 21.5GB primary
2 21.5GB 4001GB 3979GB primary
Model: Seagate ST4000VN008-2DR1 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 36.9kB 21.5GB 21.5GB primary
2 21.5GB 4001GB 3979GB primary
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/5244dd0f_vg-lv58141b0d: 3979GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 3979GB 3979GB xfs
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/md0_vg-vol1: 17.2GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 17.2GB 17.2GB xfs
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/md0_vg-BFDlv: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 4295MB 4295MB ext2
Error: /dev/mtdblock0: unrecognised disk label
Error: /dev/mtdblock1: unrecognised disk label
Error: /dev/mtdblock2: unrecognised disk label
Error: /dev/mtdblock3: unrecognised disk label
Error: /dev/md0: unrecognised disk label
Error: /dev/md1: unrecognised disk label
When I ran mdadm --detail
I noticed the second partition of the RAID1 pair was set to 'removed':
mdadm --detail /dev/md1
/dev/md1:
Version : 01.00
Creation Time : Mon Mar 7 08:45:49 2011
Raid Level : raid1
Array Size : 3886037488 (3706.01 GiB 3979.30 GB)
Used Dev Size : 7772074976 (7412.03 GiB 7958.60 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Jan 23 03:29:04 2025
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : ix2-200-DC386F:1
UUID : 8a192f2c:9829df88:a6961d81:20478f62
Events : 365631
Number Major Minor RaidDevice State
0 0 0 0 removed #### <<<< THIS ONE is /dev/sdb2
2 8 2 1 active sync /dev/sda2
When I run mdadm --examine on /dev/sdb2 I get:
$ mdadm --examine /dev/sdb2
mdadm: No md superblock detected on /dev/sdb2.
I was wondering whether there is still a limit within the firmware (the original disks had a capacity of 1.8 TB), but I am now thinking the 'removed' disk is the problem. Would that explain why I can only use 1.7 TB of the 3.7 TB filesystem? /dev/sda2 reports as 'clean'.
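For reference, the XFS geometry can be cross-checked against the df figures with xfs_info (mount point as above; output not included here):
$ xfs_info /mnt/pools/A/A0    # reports block size, allocation group count, and data block totals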
Edit:
I tried
mdadm --zero-superblock /dev/sdb2
followed by
mdadm --manage /dev/md1 --add /dev/sdb2
but I got the error: mdadm: add new device failed for /dev/sdb2 as 3: Invalid argument.
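For completeness, the two members and the current array state can be sanity-checked like this (a size mismatch between the partitions is one possible cause of an 'Invalid argument' on --add, though that is only a guess here):
$ blockdev --getsz /dev/sda2    # partition size in 512-byte sectors
$ blockdev --getsz /dev/sdb2    # should match /dev/sda2
$ cat /proc/mdstat              # current state of md0 and md1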
Asked by Pro West (111 rep) on Jan 23, 2025, 11:44 AM
Last activity: Jan 23, 2025, 01:23 PM