
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

8 votes
1 answer
2401 views
Tmpfs with overflow on disk?
I'm looking for a way to have a tmpfs-like file system that can be unlimited in size, but will use only a specified amount of RAM, after which the "oversize" data is stored on another disk-backed filesystem. I'm running on an SSD-only system with low available space (usually < 3 GB), so I don't want to reserve any space for swap or similar (that's my main requirement). Do you know of any solution that would fit my use case?
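For reference on the mechanism being asked about: a size-capped tmpfs does not overflow onto another filesystem by itself; once the cap is reached, writes simply fail with "No space left on device". A minimal demonstration (sketch; /mnt/demo is an illustrative scratch mountpoint):

sudo mkdir -p /mnt/demo
sudo mount -t tmpfs -o size=64M tmpfs /mnt/demo
sudo dd if=/dev/zero of=/mnt/demo/fill bs=1M count=100   # stops at ~64M with ENOSPC
df -h /mnt/demo
sudo umount /mnt/demo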
mickael9 (181 rep)
Apr 22, 2016, 10:12 PM • Last activity: Aug 6, 2025, 03:08 AM
0 votes
2 answers
4267 views
How can I increase the size of /tmp directory without affecting RAM or anything else? (Redhat 8.2)
I'm new to Linux and could use your help with this. I want to increase the size of the /tmp directory without affecting RAM or anything else, on Red Hat 8.2. Any suggestions on how to do that? Thanks!
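If /tmp on that system is a tmpfs, its cap is just a mount option and can be raised without reserving RAM up front (tmpfs only consumes memory for what is actually stored). A hedged sketch, with an illustrative 4G figure:

df -h /tmp                           # check current size and filesystem type
sudo mount -o remount,size=4G /tmp   # takes effect immediately if /tmp is tmpfs
# to persist across reboots, an /etc/fstab line along these lines (adjust the size):
# tmpfs  /tmp  tmpfs  defaults,size=4G  0  0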
Kai Mo (1 rep)
Sep 7, 2021, 06:42 PM • Last activity: Jul 20, 2025, 07:08 AM
0 votes
1 answer
1980 views
Read-only filesystem - Considerations and Loss of Functionality
I'm creating an embedded system using Buildroot. Currently, my Buildroot configuration ensures the rootfs is remounted read/write during startup. However, I would like to remove this feature and keep my rootfs read-only. I have a few questions regarding this:

- How do I change a user's password? This would require changing /etc/passwd and /etc/shadow.
- How do I change the timezone? This would require changing /etc/localtime.
- How do I create ssh keys for sshd? ssh-keygen creates the keys in /etc/ssh/.

According to the Filesystem Hierarchy Standard, a Linux system is required to function with a read-only /etc/ directory, but it seems I'm finding a distinct loss of functionality as described above.

Secondly, after specifying that the rootfs is to remain read-only in my Buildroot configuration, it elects to mount /var/ as a tmpfs (in RAM, so it is writable). But this is volatile; how can I ensure runtime files (which I need to save) aren't lost on reboot or unexpected power loss? I'm using UBIFS in my embedded system. Am I required to create a read/write UBI volume which I use as persistent storage? Is this the standard in embedded systems? (See the sketch below.)

And finally, should I re-evaluate my idea to use a read-only rootfs at all? Given that I am using UBI, and that wear-levelling is implemented across all the UBI volumes (they exist on the same device, of course), will I receive any benefit from making my rootfs read-only?
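On the persistence question, one common pattern is a small dedicated writable UBIFS volume, mounted somewhere like /data, for runtime state while the rootfs stays read-only. A sketch, assuming mtd-utils is available and ubi0 has free space (the volume name, size, and mountpoint are illustrative):

ubimkvol /dev/ubi0 -N data -s 64MiB   # create a writable volume named "data"
mkdir -p /data
mount -t ubifs ubi0:data /data        # persistent storage for runtime files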
Mattmatician (1 rep)
Apr 29, 2018, 01:47 PM • Last activity: Jun 9, 2025, 07:05 AM
-1 votes
1 answer
59 views
Why does the position of `exec` flag in `/etc/fstab` matter?
I find it peculiar that I could not execute binaries/scripts on my new RAM disk (tmpfs). **1)**
tmpfs  /ramdisk  tmpfs  exec,size=3G,noauto,sync,user,rw,x-gvfs-show,x-gvfs-name=RAM  0  0
versus **2)**
tmpfs  /ramdisk  tmpfs  size=3G,noauto,sync,user,rw,x-gvfs-show,x-gvfs-name=RAM,exec  0  0
**In the first case**, I can't execute anything; the exact error message is:

> bash: ./a.sh: Permission denied

and the return code to my Bash shell is 126.

For completeness, I created the file as follows:
vlastimil@rog-g713pi /ramdisk $ cat a.sh
#!/bin/sh
echo A
EOF
and I tried to run it directly as follows:
chmod 755 a.sh; ./a.sh; echo $?
Why does the position of the exec flag matter? (And is there possibly some other mystery waiting for me... :)) Thanks.

OS: Linux Mint 22.1 (Cinnamon), based on Ubuntu 24.04.
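The likely explanation: mount(8) documents that the user option implies noexec, nosuid, and nodev unless overridden by subsequent options, and fstab options are applied left to right. So exec placed before user gets cancelled again, while exec placed after user sticks. A quick way to verify (sketch):

mount /ramdisk
findmnt -no OPTIONS /ramdisk   # with "...,user,...,exec" the output should not contain noexec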
Vlastimil Burián (30505 rep)
May 29, 2025, 08:20 PM • Last activity: May 30, 2025, 05:46 AM
5 votes
2 answers
8333 views
/tmp mounting options as tmpfs: Compatibility & Security
Having an SSD, it is recommended to mount /tmp as tmpfs. Examples:

- https://askubuntu.com/questions/550589/best-way-to-mount-tmp-in-fstab
- https://yktoo.com/en/blog/post/233
- https://askubuntu.com/questions/173094/how-can-i-use-ram-storage-for-the-tmp-directory-and-how-to-set-a-maximum-amount

The mounting options are different in each example - why? The default Ubuntu 16 installation sets the mounting options for root (/) as (from /etc/mtab):

/dev/sda1 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0

Ergo, all other options - as suggested in the examples/links - shouldn't be applied. Some of the mounting options in the various examples on the web are:

defaults,noatime,mode=1777

or:

defaults,noatime,nosuid,nodev,noexec,mode=1777,size=512M

But:
- Having noatime feels useless, because the data is stored in RAM, which is fast anyway.
- Why nosuid,nodev,noexec? How do they know whether software depends on certain options or not?

I think it is best to stick with the default permissions that the installation applied, meaning:

rw,relatime,mode=1777,uid=0,gid=0

in order to ensure proper operation of various software:
- The permissions are 1777 because the default permissions for /tmp are also drwxrwxrwt (see stat -c "%a %n" /tmp).
- The uid and gid are root because /tmp has the same.

Is there something which I'm missing here?
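For comparison, a middle-ground fstab line that keeps the stock 1777 permissions while adding the hardening options most guides agree on (sketch; the size figure is illustrative, and noexec is deliberately left out because it is the option most likely to upset software that runs helpers out of /tmp):

tmpfs  /tmp  tmpfs  mode=1777,nosuid,nodev,size=2G  0  0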
Dor (2635 rep)
Mar 18, 2017, 01:45 PM • Last activity: May 29, 2025, 05:01 AM
1 vote
1 answer
1110 views
How to create a file-system in RAM? (a virtual disk-drive)
I'm aware of tmpfs, but what I really need is a disk partition residing in RAM that I could format with BTRFS. How can I create a raw device in RAM and format it with any regular filesystem? Or is the only option to create a raw file on tmpfs, format it with a filesystem, and mount it through a loop device ([source](https://unix.stackexchange.com/a/433720/65781))?
mkdir /ramdisks
mount -t tmpfs tmpfs /ramdisks
dd if=/dev/zero of=/ramdisks/disk0 bs=1M count=100
losetup /dev/loop0 /ramdisks/disk0
mke2fs /dev/loop0
...
losetup -d /dev/loop0
rm /ramdisks/disk0
# Use Case

I'm using BTRFS for a number of reasons. My current attempt is to use Overlayfs as rootfs (just like [Slax does with Aufs](https://www.slax.org/internals.php)) and I want the underlying directory structure to be BTRFS. Slax uses the following trick: while the system is booting,

1. instead of the actual switch_root operation that normal installations usually do, Slax creates a transient folder and
2. mounts it as tmpfs (puts it in RAM),
3. switch_root to it as if it were the actual filesystem.
4. Then it does some Kung-Fu: mounts its "modules" (the squashfs files) to somewhere (say /modules),
5. mounts all folders under /modules/ to /union along with a writable changes folder,
6. pivot_root to the /union folder.

What I want is to mimic that, which doesn't require what I asked so far, except: I want to place the changes folder in RAM with the support of BTRFS goodies:

1. support for snapshotting,
2. support for btrfs send | btrfs receive,
3. possibly BTRFS RAID-1.
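Besides the loop-on-tmpfs route shown above, the kernel's brd module provides real RAM block devices that any filesystem, including BTRFS, can be created on. A sketch (size and mountpoint are illustrative; rd_size is in KiB, and everything on the device is lost at reboot or module unload):

sudo modprobe brd rd_nr=1 rd_size=$((2*1024*1024))   # one /dev/ram0 of 2 GiB
sudo mkfs.btrfs /dev/ram0
sudo mkdir -p /mnt/ramdisk
sudo mount /dev/ram0 /mnt/ramdisk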
ceremcem (2451 rep)
Feb 17, 2020, 08:48 AM • Last activity: Apr 25, 2025, 07:22 PM
5 votes
2 answers
2373 views
/var/cache on temporary filesystem
Due to flash degradation concerns, I would like to lower the amount of unnecessary disk writes on a headless, light-duty 24/7 system as much as sensible. In case it matters, this is a Debian-flavored system, but I think the issue might be relevant to a wider audience.

To achieve this, I am already using **tmpfs** for /tmp and /var/log in addition to the defaults. At this point, by monitoring idle IO activity with various tools like *fatrace*, I see that after long periods one of the most prominent directories in number of write accesses is /var/cache, especially /var/cache/man related to *man-db*. Note that I don't have automatic package updates on this system, so I don't get any writes to /var/cache/apt, but for others that might be relevant too.

The question is: could it cause any trouble if **tmpfs** were used for /var/cache? On startup I would populate it with data from disk, and possibly *rsync* it back from time to time. Of course the elevated RAM usage might be an issue on some systems, but it would be interesting to hear your opinions on whether it would be problematic for some of the common systems using the cache to have data absent in the early boot process, or generally in a slightly outdated state (after a crash, for example).
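A sketch of the populate/flush pattern described above, assuming the on-disk copy lives in a hypothetical /var/cache.disk directory and the fstab entry is illustrative:

# /etc/fstab (illustrative size)
# tmpfs  /var/cache  tmpfs  defaults,size=256M  0  0

# at boot, after /var/cache is mounted:
rsync -a /var/cache.disk/ /var/cache/
# periodically (cron) and at shutdown:
rsync -a --delete /var/cache/ /var/cache.disk/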
rockfort (85 rep)
Aug 24, 2020, 03:15 PM • Last activity: Mar 18, 2025, 08:29 PM
0 votes
0 answers
1896 views
How to clear the contents under /run/user/1000/doc?
I am on Ubuntu 20.04 and installed evince via flatpak. When I open evince, it usually makes a copy of the opened file in /run/user/1000/doc. I learned that /run/user/1000/doc is a temporary directory for storing files used by running applications, but when I quit evince the files created by it do not go away. Even if I reboot my machine, the files still exist. As the directory cannot be accessed in the usual way, it is not writable and cannot be changed with chmod. How can I remove the contents of /run/user/1000/doc?
Simon.Zh.1234 (9 rep)
Apr 18, 2023, 07:04 AM • Last activity: Mar 18, 2025, 09:17 AM
-1 votes
1 answer
108 views
Resizing filesystem to increase space in AWS EC2 instance
I created an AWS EC2 instance, and I am using it to maintain a database. I'm currently trying to perform some post-processing on my database (by querying it in batches), but this doesn't work for me because I get a Postgres error message telling me I am out of space: sqlalchemy.exc.OperationalError: (psycopg2.errors.DiskFull) could not write to file "base/pgsql_tmp/pgsql_tmp5592.0": No space left on device. My current filesystem layout looks like this (output from df -h):
Filesystem        Size  Used Avail Use% Mounted on
devtmpfs          4.0M     0  4.0M   0% /dev
tmpfs              63G  1.1M   63G   1% /dev/shm
tmpfs              25G  512K   25G   1% /run
/dev/nvme0n1p1    8.0G  7.4G  615M  93% /
tmpfs              63G  4.0K   63G   1% /tmp
/dev/nvme0n1p128   10M  1.3M  8.7M  13% /boot/efi
tmpfs              13G     0   13G   0% /run/user/1000
I noticed that my /dev/nvme0n1p1 filesystem is mostly in use, but I have lots of space free in both /dev/shm and /tmp. Could my issue be solved by removing space from these filesystems and moving it to /dev/nvme0n1p1? If so, how would I go about doing this? Thank you!
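Space cannot be moved out of the tmpfs mounts (they are RAM-backed, not disk allocations), but the usual fix here is to grow the EBS volume in AWS and then expand the partition and filesystem. A sketch, assuming the volume has already been resized via the console/CLI and cloud-utils-growpart is installed:

sudo growpart /dev/nvme0n1 1      # grow partition 1 to fill the enlarged volume
sudo resize2fs /dev/nvme0n1p1     # if / is ext4
# sudo xfs_growfs -d /            # if / is XFS instead (common on Amazon Linux)
df -h /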
Joey (107 rep)
Dec 27, 2024, 06:15 PM • Last activity: Dec 28, 2024, 10:16 AM
0 votes
0 answers
94 views
Declaring a bind mount in systemd-controlled initrd and persisting systemd attributes for the mount into the system
I know my title is a little bit convoluted, so let me explain what I'm doing here in more detail. I am using an ephemeral root setup on my machine, essentially meaning that my / mount is a tmpfs, and files which I wish to actually use and preserve across reboots must be linked up in some way onto this / mount. For the most part, I already have a trivially working solution which bind mounts different files from a persistent device onto /sysroot in the initrd, which are then preserved in the actual system and can be used like normal.

A small problem with this approach is that, because the mount occurs within the initrd, I do not have much practical control over the systemd dependencies for the generated mount unit. I cannot declare a mount unit in the initrd because its attributes will not be preserved after switch_root, and I can't declare another mount unit for my bind mount because then it'll remount over itself twice (admittedly I do not know if this is a problem, but it doesn't seem "good").

My main concern is that I want to declare some special ordering for these bind mounts so they aren't killed too early by systemd upon system shutdown (when reaching umount.target). The _main_ reason for this is that I persist /var/log as a bind mount to preserve journalctl logs, and when umount.target tries to unmount the bind mount at this location it fails. I want to somehow remove the Conflicts=umount.target that is attached to all generic mount units, so that there isn't a requirement to unmount so early. Ideally I'd not perform an unmount at all and simply allow the exitrd to smoothly kill the mount alongside /, which it automatically does.

I'm not particularly sure how I can do this, though, especially given that mounts occurring in the initrd do not really preserve any systemd-specific information that I can use. These mounts are also dynamically generated through configuration, so hard-coding them into /etc/fstab with options like x-systemd.some-opt is not feasible.

So far the only thing I've tried was creating a .mount unit in the system, which, as mentioned prior, would "remount" the mount unit over top, leading to two reported mounts and thus requiring two unmount operations on the same mount location. Again, no idea if this is an actual problem, but it doesn't seem particularly clean, which does bother me. This remount would occur even if I specified Options=remount,..., which is admittedly a bit confusing but not something I could really understand either way.
Frontear (43 rep)
Dec 7, 2024, 10:10 PM
2 votes
1 answer
170 views
Weird result mounting a tmpfs as root in the directory tree
Using unshare -Umr I created new user and mount namespaces into which the calling process is moved. Then, via mount -t tmpfs tmpfs /, I mounted a new tmpfs instance on the root / of the directory tree within the new mount namespace. Since the tmpfs is empty, I would expect to see an empty list from the command ls -la /; however, here is the output:

ubuntu@ubuntu:~$ unshare -Umr /bin/bash
root@ubuntu:~# mount -t tmpfs tmpfs /
root@ubuntu:~# ls -la /
total 5309704
drwxr-xr-x  24 nobody nogroup       4096 Nov 22  2023 .
drwxrwxrwt   2 root   root            40 Nov 11 15:47 ..
drwxr-xr-x   2 nobody nogroup       4096 Jan 25  2023 bin
drwxr-xr-x   3 nobody nogroup       4096 Jan 25  2023 boot
drwxr-xr-x   2 nobody nogroup       4096 Nov 11  2019 cdrom
drwxr-xr-x  17 nobody nogroup       3820 Aug 22 14:22 dev
drwxr-xr-x 105 nobody nogroup       4096 Mar 14  2024 etc
-rw-r--r--   1 root   root    1688371200 Jan 19  2021 GISO
drwxr-xr-x   3 nobody nogroup       4096 Nov 11  2019 home
lrwxrwxrwx   1 nobody nogroup         34 Jan 25  2023 initrd.img -> boot/initrd.img-4.15.0-202-generic
lrwxrwxrwx   1 nobody nogroup         34 Jan 25  2023 initrd.img.old -> boot/initrd.img-4.15.0-132-generic
drwxr-xr-x  21 nobody nogroup       4096 Jan 25  2023 lib
drwxr-xr-x   2 nobody nogroup       4096 Jan 25  2023 lib64
drwx------   2 nobody nogroup      16384 Nov 11  2019 lost+found
drwxr-xr-x   2 nobody nogroup       4096 Feb 10  2021 media
drwxr-xr-x   2 nobody nogroup       4096 Aug  5  2019 mnt
drwxr-xr-x   3 nobody nogroup       4096 Nov 26  2020 opt
dr-xr-xr-x 123 nobody nogroup          0 Aug 22 12:22 proc
drwx------   4 nobody nogroup       4096 Dec  6  2023 root
drwxr-xr-x  24 nobody nogroup        820 Nov 11 15:20 run
drwxr-xr-x   2 nobody nogroup      12288 Jan 25  2023 sbin
drwxr-xr-x   4 nobody nogroup       4096 Nov 11  2019 snap
drwxr-xr-x   3 nobody nogroup       4096 Jan 24  2020 srv
-rw-------   1 nobody nogroup 3748659200 Nov 11  2019 swap.img
dr-xr-xr-x  13 nobody nogroup          0 Nov 11 15:40 sys
drwxrwxrwt  10 nobody nogroup       4096 Nov 11 15:42 tmp
drwxr-xr-x  10 nobody nogroup       4096 Aug  5  2019 usr
drwxr-xr-x  13 nobody nogroup       4096 Aug  5  2019 var
lrwxrwxrwx   1 nobody nogroup         31 Jan 25  2023 vmlinuz -> boot/vmlinuz-4.15.0-202-generic
lrwxrwxrwx   1 nobody nogroup         31 Jan 25  2023 vmlinuz.old -> boot/vmlinuz-4.15.0-132-generic
root@ubuntu:~#

i.e. the filesystem that was mounted as / before the tmpfs was mounted over it. Why am I getting this result?
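A minimal contrast that illustrates what is going on (sketch): the shell's root directory reference still points at the old mount, so a plain "/" resolves to the old filesystem, whereas a path that is walked through the new mount behaves as expected:

unshare -Umr /bin/bash -c 'mount -t tmpfs tmpfs /mnt && ls -la /mnt'   # shows an empty tmpfs, as expected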
CarloC (385 rep)
Nov 11, 2024, 01:03 PM • Last activity: Dec 2, 2024, 03:54 PM
16 votes
2 answers
13344 views
How to make tmpfs use only the physical RAM and not the swap?
How can I be sure that a tmpfs filesystem only uses physical RAM and never a swap partition on disk? Since I have a slow HDD and fast RAM, I would like, at least, to give higher priority to RAM usage for swap and tmpfs, or to disable disk usage for tmpfs-related mount points.
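For completeness, ramfs is the variant that never touches swap at all; the trade-off is that it enforces no size limit, so a runaway writer can exhaust RAM. A sketch:

sudo mkdir -p /mnt/ram
sudo mount -t ramfs ramfs /mnt/ram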
user1717079 (485 rep)
Oct 10, 2012, 03:36 PM • Last activity: Nov 15, 2024, 07:47 PM
9 votes
3 answers
10005 views
Per user tmpfs directories
Using the shared /tmp directory is known to have led to many security vulnerabilities when predictable filenames have been used. And randomly generated names aren't really nice looking. I am thinking that maybe it would be better to use a per-user temporary directory instead. Many applications will use the TMPDIR environment variable to decide where temporary files go. On login I could simply set TMPDIR=/temp/$USER, where /temp would then have to contain a directory for each user, with that directory being writable by that user and nobody else. But in that case I would still like /temp to be a tmpfs mountpoint, which means that the subdirectories would not exist after a reboot and would need to be recreated somehow. Is there any (de-facto) standard for how to create a tmpfs with per-user subdirectories? Or would I have to come up with my own non-standard tools to dynamically generate such directories?
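On systemd-based systems, pam_systemd/logind already provides a per-user tmpfs at /run/user/$UID, so one low-effort approach is to point TMPDIR there from a login script. A sketch of a profile snippet (the file name /etc/profile.d/user-tmpdir.sh is an assumption):

# /etc/profile.d/user-tmpdir.sh
if [ -n "$XDG_RUNTIME_DIR" ] && [ -d "$XDG_RUNTIME_DIR" ]; then
    export TMPDIR="$XDG_RUNTIME_DIR/tmp"
    mkdir -p "$TMPDIR"
fi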
kasperd (3650 rep)
Mar 25, 2015, 12:18 PM • Last activity: Nov 11, 2024, 07:45 PM
20 votes
3 answers
89073 views
Can fstab options uid and gid be the user-group name or must they be numeric?
I'm learning how to set up a tmpfs in fstab for my www-data user, and I was wondering if I can use the actual user/group name instead of the numeric ids (personal preference). I'm on Debian with ext4, formatted with "msdos" during setup. It seems to be working, but I'm wondering if this is a Debian-specific feature or will it work across platforms (I like portability)? Here's what I've got:

$ vim /etc/fstab

# PHP temporary files.
tmpfs /tmpfs/php-session tmpfs defaults,size=512M,mode=1700,uid=www-data,gid=www-data,noexec,nodev,nosuid 0 0
tmpfs /tmpfs/php-upload tmpfs defaults,size=256M,mode=1700,uid=www-data,gid=www-data,noexec,nodev,nosuid 0 0
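If portability of the names is in doubt, the numeric IDs can be resolved when the fstab entry is generated rather than hard-coded by hand. A sketch:

uid=$(id -u www-data); gid=$(id -g www-data)
printf 'tmpfs /tmpfs/php-session tmpfs defaults,size=512M,mode=1700,uid=%s,gid=%s,noexec,nodev,nosuid 0 0\n' "$uid" "$gid"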
Jeff (846 rep)
Aug 20, 2013, 06:56 PM • Last activity: Oct 19, 2024, 12:37 AM
0 votes
0 answers
26 views
modifying systemd tmp.mount parameters
In RHEL 7 and later, as well as other Linux distributions, systemctl enable tmp.mount makes the /tmp folder a tmpfs (a temporary file system). In RHEL 8.10, for example, by default those parameters are *I think*:

# mount | grep tmp
tmpfs on /tmp type tmpfs (rw,seclabel)

In addition, on my particular server:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           378G  120K  378G   1% /tmp

What is the correct or preferred way to alter these mount options, and the size, of /tmp? I want to see **Avail** as 10 GB, and have nodev,nosuid,noexec also present as mount options. In addition to systemctl enable tmp.mount, is putting this entry in /etc/fstab good or bad practice to achieve this, given the systemd association with tmp.mount? Is there a better way?

tmpfs /tmp tmpfs defaults,nodev,nosuid,noexec,size=10G 0 0
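One systemd-native way to do this, rather than competing with tmp.mount via fstab, is a drop-in that overrides the unit's mount options (sketch; the option string is illustrative):

sudo systemctl edit tmp.mount
# in the editor, add:
#   [Mount]
#   Options=mode=1777,strictatime,nodev,nosuid,noexec,size=10G
sudo systemctl daemon-reload
sudo systemctl restart tmp.mount   # or wait for the next boot if /tmp is busy
findmnt /tmp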
ron (8647 rep)
Oct 3, 2024, 05:01 PM
5 votes
2 answers
12097 views
tmp on tmpfs: fstab vs tmp.mount with systemd
To have /tmp on tmpfs, I know I can use an entry in /etc/fstab, but I do not understand the role of /etc/default/tmpfs, mentioned sometimes, and in what cases I need to create or modify it. Recently, I have often seen it suggested to use the systemd tmp.mount configuration. For example, on Debian:
$ sudo cp /usr/share/systemd/tmp.mount /etc/systemd/system/
$ sudo systemctl enable tmp.mount
Which of the two methods is more appropriate for everyday use? In what situations is one better than the other? When do I need to deal with /etc/default/tmpfs?
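For context, the fstab route boils down to a single line, and after a reboot either mechanism can be checked the same way (sketch; options illustrative):

# /etc/fstab
# tmpfs  /tmp  tmpfs  mode=1777,nosuid,nodev  0  0
findmnt /tmp
systemctl status tmp.mount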
Alexey (2310 rep)
Oct 26, 2022, 09:20 AM • Last activity: Oct 3, 2024, 12:09 PM
1 vote
2 answers
991 views
Move compressed log files outside /var/log [logrotate with log2ram]
I am looking for advice about logrotate. I have recently installed log2ram to spare my SSD. Since I was not using all 24 GB of RAM, I assigned 2 GB to /var/log/; currently about 300 MB is used. I would like logrotate to move any file ending in .gz, potentially in any sub-directory, to /var/old.log/ and maintain the sub-directory structure. I have done some research and found a possible solution, but it raises questions:

https://unix.stackexchange.com/questions/59112/preserve-directory-structure-when-moving-files-using-find
https://unix.stackexchange.com/questions/319020/compress-old-log-files-and-move-to-new-directory

1. When logrotate runs this side script to move these files, for example moving /var/log/file.log.2.gz, will it then not create a new file.log.2.gz when rotating logs, instead of moving file.log.2.gz to file.log.3.gz, since file.log.2.gz is no longer there? Eventually the script would then overwrite the old logs due to the same name.
2. I do not fully understand the olddir option; as I understand it, this could do what I want, but the manual states to be cautious with wildcards. Any help here?
3. When to time this: /var/log is created during boot with the contents of /var/hdd.log/, and it then updates /var/hdd.log when rebooting. I want logrotate to move the compressed files to another directory so they will not be loaded onto the tmpfs, but I don't want to edit each /etc/logrotate.d/* file. I thought maybe to put something before log2ram creates /var/log during boot, but I do not know where. This system runs 24/7 and I try to reboot as little as possible.

OS: Ubuntu 22.04 LTS

Sometimes manuals are too obvious, sometimes they're very vague. Maybe someone can explain it in a more complex or easier way for me to understand. Thank you for your time.
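On point 2, a single logrotate stanza using olddir looks roughly like the sketch below (paths are illustrative). Note the logrotate manual's caveat that olddir must be on the same physical device as the log unless a copy-based option such as copytruncate is used, which matters here because /var/log is a tmpfs and /var/old.log would be on disk; the target directory must also already exist (or see the createolddir directive):

/var/log/myapp/*.log {
    weekly
    rotate 8
    compress
    copytruncate
    olddir /var/old.log/myapp
    missingok
}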
Vincent Stans (136 rep)
Apr 27, 2023, 01:55 PM • Last activity: Sep 7, 2024, 08:23 PM
16 votes
2 answers
1913 views
What could explain this strange sparse file handling of/in tmpfs?
On my ext4 filesystem partition I can run the following code:

fs="/mnt/ext4"

# create sparse 100M file on ${fs}
dd if=/dev/zero \
   of=${fs}/sparse100M conv=sparse seek=$((100*2*1024-1)) count=1 2> /dev/null

# show its actual used size before
echo "Before:"
ls ${fs}/sparse100M -s

# set the sparse file up as a loopback device and run md5sum on it
losetup /dev/loop0 ${fs}/sparse100M
md5sum /dev/loop0

# show its actual used size afterwards
echo "After:"
ls ${fs}/sparse100M -s

# release loopback and remove file
losetup -d /dev/loop0
rm ${fs}/sparse100M

which yields

Before:
0 sparse100M
2f282b84e7e608d5852449ed940bfc51  /dev/loop0
After:
0 sparse100M

Doing the very same thing on tmpfs, as with:

fs="/tmp"

yields

Before:
0 /tmp/sparse100M
2f282b84e7e608d5852449ed940bfc51  /dev/loop0
After:
102400 /tmp/sparse100M

which basically means that something I expected to merely read the data caused the sparse file to "blow up like a balloon". I expect this is because of less perfect support for sparse files in the tmpfs filesystem, and in particular because of the missing FIEMAP ioctl, but I am not sure what causes this behaviour. Can you tell me?
humanityANDpeace (15072 rep)
Mar 8, 2017, 11:54 PM • Last activity: Aug 23, 2024, 11:27 PM
1 vote
1 answer
365 views
What creates tmpfs directories /run, /dev/shm on systemd?
My systemd Linux systems show several tmpfs mounts (/run, /dev/shm, /run/lock, and on my Raspberry Pi also /sys/fs/cgroup). Alas, I cannot find where these top-level directories are created. I know that systemd-tmpfiles creates temporary files according to the config files in tmpfiles.d/* under /etc and /usr/lib, but I only see subdirectories defined in those, not the top directories themselves.
user333869 (331 rep)
Aug 13, 2024, 04:42 AM • Last activity: Aug 13, 2024, 04:56 AM
15 votes
1 answer
32954 views
Where does tmpfs come from and how is it mounted
I am using a BeagleBone board with Linux. When I type the command df -h, I see tmpfs mounted a few times. Does this mean that all these entries get mounted at the same place, or at different parts of the tmpfs?

That brings me to another thing I don't quite understand. Where is this tmpfs filesystem actually created in the first place? I guess it happens when Linux boots. Should I be able to find a script which creates this filesystem?

tmpfs 242.4M 0 242.4M 0% /dev/shm
tmpfs 242.4M 8.3M 234.2M 3% /run
tmpfs 242.4M 0 242.4M 0% /sys/fs/cgroup
tmpfs 242.4M 36.0K 242.4M 0% /tmp
tmpfs 242.4M 16.0K 242.4M 0% /var/volatile
tmpfs 242.4M 16.0K 242.4M 0% /var/lib
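Each of those lines is an independent tmpfs instance created by its own mount call (by the init system or boot scripts), not slices of one shared filesystem; they can be listed together with their individual size caps via:

findmnt -t tmpfs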
Engineer999 (1233 rep)
Jun 3, 2019, 08:35 PM • Last activity: Jul 26, 2024, 01:45 PM
Showing page 1 of 20 total questions