sudo updatedb: `/var/lib/mlocate/mlocate.db' is locked due to a faulty drive; how to resolve this permanently?
2 votes, 1 answer, 4213 views
On this 4.20.3 Arch system, the BTRFS-formatted / disk has no free space left. It turns out that mlocate is the cause:
# du -h --exclude=Volumes -- * 2>/dev/null | sort -hr | head -2
11G var
9.6G var/lib/mlocate
The accepted answer to https://unix.stackexchange.com/questions/82985/updatedb-can-not-open-a-temporary-file-for-var-lib-mlocate-mlocate-db suggests prepending sudo, though that doesn't change a thing:
# sudo updatedb
updatedb: `/var/lib/mlocate/mlocate.db' is locked (probably by an earlier updatedb)
There seems to be a temporary file in /var/lib/mlocate that is eating up all disk space:
# ls -lh var/lib/mlocate/
-rw-r----- 1 root locate 1.1M Oct 21 00:00 mlocate.db
-rw------- 1 root root 9.6G Dec 30 19:46 mlocate.db.PRvfsw
Could the root cause be that the .timer update job is hanging?
# systemctl status updatedb.timer
* updatedb.timer - Daily locate database update
Loaded: loaded (/usr/lib/systemd/system/updatedb.timer; static; vendor preset: disabled)
Active: active (running) since Mon 2019-10-21 16:05:10 CEST; 2 months 9 days ago
Trigger: n/a
Neither restart nor stop removes the large temporary .db file, and updatedb still reports the database as locked.
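For clarity, by restart and stop I mean roughly the following (unit names assumed from the Arch mlocate package):
# systemctl restart updatedb.service
# systemctl stop updatedb.service updatedb.timer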
There seems to be an updatedb process still running:
# ps -ef | grep updatedb
root 3249 1 99 Oct22 ? 213573-14:47:11 /usr/bin/updatedb
I know I can kill this process. The root cause is most likely a faulty USB stick:
# ls /Volumes/RM_GUE__
ls: cannot access '/Volumes/RM_GUE__/'$'\001\020': Input/output error
ls: cannot access '/Volumes/RM_GUE__/)': Input/output error
However, the next time a USB stick becomes faulty, / will fill up again.
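The manual cleanup is simple enough (a sketch using the PID and temporary file name from the outputs above), but it only treats the symptom:
# kill 3249
# rm /var/lib/mlocate/mlocate.db.PRvfsw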
### updatedb.conf
The options in updatedb.conf (sketched below) don't offer me any useful filter:
- by path: I can't guess the name the partition will have after it gets corrupted.
- by filesystem: in this case VFAT was corrupted (and read-only), but I can't look into the future to know which filesystem will get corrupted next.
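For reference, the filtering that /etc/updatedb.conf offers looks roughly like this (illustrative values, not my actual configuration); every knob keys off a name or type that has to be known in advance:
PRUNE_BIND_MOUNTS = "yes"
PRUNEFS = "autofs proc sysfs tmpfs vfat"
PRUNENAMES = ".git .hg .svn"
PRUNEPATHS = "/media /mnt /tmp /var/cache /var/tmp"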
How can I resolve this neatly and permanently, for example by limiting how long updatedb.timer may run, skipping disks that suffer input/output errors, limiting the file size with LimitFSIZE=, or something better still?
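To make that last idea concrete, this is the kind of systemd drop-in I have in mind for the service behind the timer (a sketch: the unit name is assumed, the values are placeholders, and I don't know whether an aborted run leaves the lock or the temporary file behind), e.g. in /etc/systemd/system/updatedb.service.d/override.conf:
[Service]
# Abort the run if it takes longer than this.
RuntimeMaxSec=1h
# Cap the size of any file updatedb may write (sets RLIMIT_FSIZE).
LimitFSIZE=2G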
Asked by Pro Backup (5114 rep) on Dec 30, 2019, 07:25 PM
Last activity: Jul 13, 2025, 12:04 AM