Consider the practice of mounting the /tmp directory on a memory-backed tmpfs filesystem, as can be done with:

```
systemctl enable tmp.mount
```
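For reference, here is how I've been confirming what actually backs /tmp after enabling the unit (`findmnt` and `systemctl cat` are standard tools; the exact unit contents and mount options vary by distro and version):

```
# Show the unit file systemd ships for /tmp
systemctl cat tmp.mount

# After a reboot (or `systemctl start tmp.mount`), confirm /tmp is tmpfs
findmnt /tmp
# Output should look something like:
#   TARGET SOURCE FSTYPE OPTIONS
#   /tmp   tmpfs  tmpfs  rw,nosuid,nodev
```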
And consider the following:
> **One justification:** Using separate filesystems for different paths can protect the system from failures resulting from one filesystem becoming full or failing.

> **Another justification:** Some applications that write files in the /tmp directory can see huge improvements when memory is used instead of disk.
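A rough way to sanity-check that second claim would be comparing a flushed write to /tmp against the same write to a directory on the SSD root filesystem (the sizes and paths here are just examples; `conv=fsync` forces the data to actually reach the backing store, which is what slows the disk case down):

```
# Write 512 MiB to the tmpfs-backed /tmp, forcing a flush at the end
dd if=/dev/zero of=/tmp/bench bs=1M count=512 conv=fsync

# Same write to a directory that lives on the SSD (on / here)
dd if=/dev/zero of=/var/tmp/bench bs=1M count=512 conv=fsync

rm -f /tmp/bench /var/tmp/bench
```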
Is disk caching always in effect? By that I mean: when you write to **any** folder (not just /tmp), you are probably writing to RAM anyway, until such time as it gets flushed to disk. The kernel handles all of this under the hood, and my opinion is that I don't need to go meddling and tweaking things. So does doing `systemctl enable tmp.mount` have any real value, and if so, what?
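To illustrate what I mean about writes landing in RAM first, the page cache can be watched directly (the test path is arbitrary; `Dirty` in /proc/meminfo is written data that has not yet been flushed to disk):

```
# Dirty pages before the write
grep -E 'Dirty|Writeback' /proc/meminfo

# Write 256 MiB without forcing a flush; it lands in the page cache
dd if=/dev/zero of=/root/cachetest bs=1M count=256

# Dirty jumps, showing the data is still sitting in RAM
grep -E 'Dirty|Writeback' /proc/meminfo

# Force the flush, after which Dirty drops back down
sync
grep -E 'Dirty|Writeback' /proc/meminfo
rm -f /root/cachetest
```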
Also, I am testing this on CentOS 7.6 to try to understand what's happening. Here is what I am seeing:
- CentOS 7.6 installed on one 500GB SSD with simple disk partitioning:
  - 1GB `/dev/sda1` as /boot
  - 100MB `/dev/sda2` as /boot/efi
  - 475GB `/dev/sda3` as /
- The PC has 8GB of DDR4 RAM.
- If I just do `systemctl enable tmp.mount`, I then get a 3.9GB tmpfs as /tmp.
How is this tmpfs /tmp at 3.9GB any better than the default arrangement, which would (a) first use up to ~8GB of RAM anyway thanks to disk caching, and (b) then, once the cache is at capacity, still have > 400GB of disk available to use?
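One detail I noticed while digging: the 3.9GB figure isn't arbitrary; tmpfs defaults to half of physical RAM, which lines up with the 8GB here. If size were the only concern, it appears it can be overridden with a drop-in for the unit (a sketch; the `size=2G` value is just an example):

```
# Override the tmpfs size via a drop-in (size=2G is only an example).
# Options= is a plain string setting, so the stock options are restated.
mkdir -p /etc/systemd/system/tmp.mount.d
cat > /etc/systemd/system/tmp.mount.d/size.conf <<'EOF'
[Mount]
Options=mode=1777,strictatime,size=2G
EOF
systemctl daemon-reload
# Remounts /tmp; a reboot may be simpler if /tmp is busy
systemctl restart tmp.mount
```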
Asked by ron
(8647 rep)
Apr 12, 2019, 04:34 PM
Last activity: May 5, 2023, 06:18 PM