Listing directory takes forever on a folder that used to have millions of files
2 votes · 1 answer · 1417 views
The filesystem is ext4, and the machine hasn't been rebooted in years; we don't want to reboot it now either.
We used to have a folder with millions of small files (2-3 kB each). This almost broke the system, so we fixed the code that was generating so many files and wrote a cron task that erased all the files within the directory (because plain `rm` wasn't working).
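For context, the usual reason a plain `rm dir/*` fails on such a folder is that the shell expands the glob into a single argument list that exceeds the kernel's `ARG_MAX` limit; `find -delete` avoids this by unlinking entries one at a time. A minimal sketch against a throwaway directory (the path and file counts are stand-ins, not the real system):

```shell
# /tmp/bigdir_demo is a hypothetical stand-in for the problem folder.
demo=/tmp/bigdir_demo
mkdir -p "$demo"
touch "$demo"/file_{1..1000}       # stand-in for the millions of small files

getconf ARG_MAX                    # the per-exec argument size limit rm runs into

# Deletes regardless of how many entries the directory holds:
find "$demo" -mindepth 1 -delete
```

An `xargs`-based pipeline (`find "$demo" -type f -print0 | xargs -0 rm`) works too, but `-delete` avoids spawning any extra processes.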
At first everything went smoothly: you typed `ls` and got the full list of the 4-5 remaining files.
The next day, however, when I typed `ls`, the command took minutes to complete and the system load went over **20**, which scared me a lot.
It's been like this for months now. The first time each day that I run `ls`, the system borderline slows to a crawl and eventually returns a list of ... 5 files and no subfolders.
I believe it's some ext4 cache; I've tried running various commands to no avail.
Is there anything else I could do to force ext4 to flush the cache?
The system is running in RAID 1 mode. Running `cat /proc/mdstat` shows that both drives are fully functional and synchronized, and `smartctl` says the drives are in good health as well. `hdparm` returns the following:
```
hdparm -tT /dev/sda1

/dev/sda1:
 Timing cached reads:       19238 MB in  2.00 seconds = 9629.50 MB/sec
 Timing buffered disk reads:  316 MB in  3.01 seconds =  104.92 MB/sec
```
Asked by Sk1ppeR
(123 rep)
Nov 26, 2021, 12:02 PM
Last activity: Nov 26, 2021, 02:43 PM