Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0 votes · 0 answers · 55 views
Need option to prevent SSHFS from changing server's file modification date on download(!)
SSHFS changes modification date (not access date) of server files on download(!). Is there an option/setting to prevent that?
The problems this causes include:
- after the next download, you can't sort the files by date anymore.
- when we back up to the server, every file is transferred on every occasion (because their dates are always different)
- in the use-case of sharing updated files on the server: everything is downloaded on every occasion, even if no one uploaded anything
UPDATE:
After changing operating systems and taking 2 weeks to restore modification dates from backups (making sure that actually-modified files don't get their date restored...), there are two machines remaining with which we can reproduce the behavior:
Server: Linux Mint 21.2, sshfs 3.7.1, fuse 3.10.5, fuse ki 7.31, fusermount3 3.10.5
Client: GhostBSD 14.2 (a modified FreeBSD 14.2-RELEASE-p1), sshfs 3.7.3, fuse 3.17.1, fuse ki 7.40, mount-fusefs 0.3.9-pre1
- Download Command:
cp -pr mnt/content/ local/content/
- mount command: sshfs a@192.168.1.4:/home/a/server-dir mnt/
----
```
df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
tmpfs tmpfs 1618628 2160 1616468 1% /run
/dev/sda5 ext3 435281408 433288172 1993236 100% /
tmpfs tmpfs 8093124 4 8093120 1% /dev/shm
tmpfs tmpfs 5120 4 5116 1% /run/lock
/dev/sda1 vfat 98304 31835 66469 33% /boot/efi
tmpfs tmpfs 1618624 104 1618520 1% /run/user/1000
/home/a/.Private ecryptfs 435281408 433288172 1993236 100% /home/a
/dev/mapper/veracrypt1 ext3 391090108 368891012 2276168 100% /media/veracrypt1
```
----
```
sshfs a@192.168.1.4:/media/notcrypted/ mnt/
cp mnt/file .   # this changes the server file's date
```
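Until there is an option for this, one workaround consistent with the symptoms above is to snapshot the server-side mtimes before a download and restore them afterwards. A minimal sketch, assuming GNU find/touch/date; `demo/` is an illustrative stand-in for the sshfs mount point:

```shell
# Sketch: record mtimes before a download, restore them after.
# GNU find/touch/date assumed; "demo" stands in for the sshfs mount.
mkdir -p demo
touch -d '2020-01-01 00:00:00' demo/file
# snapshot: one "epoch-seconds<TAB>path" line per file
find demo -type f -printf '%T@\t%p\n' > mtimes.txt
# ... the download would happen here; simulate the mtime damage:
touch demo/file
# restore the recorded mtimes
while IFS=$'\t' read -r ts path; do
    touch -d "@${ts%.*}" "$path"
done < mtimes.txt
date -r demo/file +%Y   # → 2020
```

This does not prevent the mtime change, it only undoes it after the fact, so backup tools that run between download and restore would still see changed dates.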
Bernd Elkemann
(121 rep)
Jul 20, 2025, 11:48 AM
• Last activity: Jul 20, 2025, 02:45 PM
48 votes · 2 answers · 139333 views
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system
What exactly is happening here?
root@bob-p7-1298c:/# ls -l /tmp/report.csv && lsof | grep "report.csv"
-rw-r--r-- 1 mysql mysql 1430 Dec 4 12:34 /tmp/report.csv
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
jmunsch
(4456 rep)
Dec 4, 2014, 06:53 PM
• Last activity: Jul 2, 2025, 08:25 AM
7 votes · 1 answer · 6413 views
iconv module (to use with rsync) to avoid windows-illegal filenames in local NTFS partition
I would like to locally attach an NTFS volume to my unix (Ubuntu) machine, and copy (replicate) some unix directories to it, using rsync, in a way that the result is readable under Windows.
I do not care about ownership and permissions. It would be nice if modification dates would be preserved. I only need directories and files (symbolic links would be nice, too; but not a problem if they cannot be copied).
Two obvious problems are case (in)sensitivity and characters that are illegal in Windows filenames. For example, in Linux I can have two files "a" and "A"; I can copy them to the NTFS volume, but in Windows I will be able to access (at most?) one of them. But I am happy to ignore that problem. What I am interested in are the characters that are illegal in Windows filenames, namely `<`, `>`, `:`, `"`, `/`, `\`, `|`, `?`, and `*` (well, strictly also ASCII 0-31, but I do not care about that; there might also be problems with files ending in a ".").
I would like rsync to automatically "rename", e.g., a file called "a:" to, say, "a(COLON)", so that it ends up with a legal name (and, ideally, to translate "a(COLON)" back to "a:").
**Is it possible to have rsync automatically rename files to avoid characters forbidden in Windows?**
- As far as I understand, rsync can use **iconv** for such tasks; is there a standard iconv module for Windows filenames? (I briefly looked into writing my own gconv module, but lacking C knowledge this seems too complicated.)
- I have been told that **rdiff-backup** can do some conversions like that, but the homepage just mentions something being done "automatically", and I am not sure whether a locally mounted NTFS volume would trigger the renaming reliably.
- I am aware of **fuse-posixovl**, but it seems like overkill for my purpose, and it also doesn't seem to be well documented (which characters will be translated in which way? Will all filenames be truncated to 8.3 or whatever? Can I avoid the additional files carrying owner/permission information, which I will not need? etc.)
- I am aware that I could avoid all these problems by using, e.g., a **tar** file; but this is not what I want. (In particular, I would like in Windows to further replicate from the NTFS volume to another backup partition, copying only the changed files)
- I am aware of the "**windows_names**" option when mounting NTFS; but this will prevent creating offending files, not rename them.
**Update:** As it seems my question was not quite clear, let me give a more explicit example. WINDOWS-1251, for instance, is of no use for me:

```
iconv -f utf-8 -t WINDOWS-1251//TRANSLIT
```

transliterates characters like `äö` in a test string such as `123 abc ABC äö &` but leaves the Windows-illegal characters alone. Some commands to find offending names:

```
# find stuff containing windows-illegal characters
find $MYDIR -regex '.*[:*"\\|?][^/]*'
# find stuff containing dot as last character (supposedly bad for windows)
find $MYDIR -regex '.*\.'
# find stuff that is identical case insensitive
find $MYDIR -print0 | sort -z | uniq -diz | tr '\0' '\n'
```

(the last line is from https://unix.stackexchange.com/questions/22870/case-insensitive-search-of-duplicate-file-names/ )
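Absent a ready-made iconv module, the requested renaming can at least be sketched in plain shell. The `(COLON)`-style tokens follow the hypothetical convention from the question, and the function only covers the printable offenders (not ASCII 0-31); this is an illustration, not a drop-in rsync filter:

```shell
# Hypothetical renaming sketch (not an iconv/gconv module): replace the
# printable Windows-illegal characters with reversible bracketed tokens.
sanitize() {
    printf '%s' "$1" | sed -e 's/:/(COLON)/g'  -e 's/\*/(STAR)/g' \
                           -e 's/?/(QMARK)/g'  -e 's/"/(QUOTE)/g' \
                           -e 's/|/(PIPE)/g'   -e 's/</(LT)/g' \
                           -e 's/>/(GT)/g'     -e 's/\\/(BSLASH)/g'
}
sanitize 'a:b*c'   # → a(COLON)b(STAR)c
```

A matching `unsanitize` would apply the inverse substitutions; note the scheme is only reversible if the original names never contain the literal tokens themselves.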
Jakob
(171 rep)
Feb 7, 2017, 02:53 PM
• Last activity: Jun 24, 2025, 07:10 AM
1 vote · 1 answer · 2086 views
Using rm --one-file-system to only delete files on the local filesystem
I have a FUSE mount located at `/media/writable`; however, this FUSE mount sometimes disconnects, and some programs will still attempt to write to `/media/writable`.
When I restart the FUSE mount service, it fails to mount because the directory is non-empty.
How can I use `rm`'s `--one-file-system` argument to ensure that when I `rm /media/writable`, it only deletes files located on the **local** filesystem, as opposed to the **FUSE-mounted** filesystem? There are other folders located in `/media/`, so I am unable to run `rm` on the parent folder.
Am I better off moving the folder one layer down (e.g. `/media/writable/mount`) so I can `rm` the parent folder, or is there a way of selecting which filesystem I wish to delete from?
I'm running Ubuntu 18.04.1 LTS with coreutils 8.28.
Edit: My current method is this, but I'd like to see if there's a better way to do it:
ExecStartPre=-/bin/mountpoint /media/writable/ || /usr/bin/stat /media/writable/MOUNTED || /bin/rm --one-file-system -rf /media/writable/*
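For reference, a small illustration of the flag's semantics (GNU coreutils; scratch paths, not the real mount): when no foreign filesystem is present under the directory, `--one-file-system` behaves like a plain recursive delete, and it only refuses to descend into a directory that is a different filesystem.

```shell
# --one-file-system (GNU rm) skips directories on *other* filesystems;
# with no mount present, it behaves like a plain recursive delete.
mkdir -p /tmp/writable/sub
touch /tmp/writable/sub/f
rm --one-file-system -rf /tmp/writable
[ -e /tmp/writable ] && echo present || echo gone   # → gone
```

So in the disconnected-mount scenario above, stray files written into the unmounted directory are all local and are removed; the protection only matters if the FUSE filesystem happens to be mounted when the cleanup runs.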
Connor Bell
(23 rep)
Mar 9, 2019, 01:27 PM
• Last activity: Jun 16, 2025, 06:02 PM
3 votes · 1 answer · 195 views
systemd .mount for FUSE fs via executable (or waiting for transient mount)
I have a Python script that utilizes FUSE bindings to create a FUSE filesystem. I would like to run the script automatically via systemd and have other units wait until the mount is created.
I can easily have a systemd `.service` file to launch the script:

```
[Service]
Type=simple
ExecStart=/usr/bin/python -u my-fuse-script.py /some/mount/point
```

but I also want other units to wait until the mount is actually created. The script can take seconds or minutes before actually doing the mount, so I can't just use `Requires=` and `After=` on the above unit. I do end up with a transient `.mount` unit like `some-mount-point.mount` after the FUSE fs is actually created, but having other units depend on it via `Requires=some-mount-point.mount` (with `After=`) will fail because the `.mount` does not exist for some time.

Maybe writing a persistent `.mount` file is a possible solution, but then I'm not sure what to put in its `What=` field, given that the mount is created by a script. I see in `man systemd.mount` that a `.mount` can have `[Unit]` and `[Install]` sections, which sounds promising if I want to run my script there, but `What=` is still required.

Or is there some way to wait on, or start units after, a transient `.mount`?
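One possible pattern (a sketch, not tested against this exact script): keep the `.service`, but gate its start-up completion on the mount's appearance with `ExecStartPost=`. Units ordered `After=` this service wait until `ExecStartPost=` has finished, i.e. until the mount exists. The poll loop and the timeout values are illustrative:

```
[Service]
Type=simple
ExecStart=/usr/bin/python -u my-fuse-script.py /some/mount/point
# Block start-up completion until the FUSE mount appears (illustrative):
ExecStartPost=/bin/sh -c 'for i in $(seq 1 300); do mountpoint -q /some/mount/point && exit 0; sleep 1; done; exit 1'
TimeoutStartSec=320
```

If the mount never appears within the window, the unit fails, which also propagates correctly to `Requires=` dependents.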
tsj
(207 rep)
May 12, 2025, 01:31 AM
• Last activity: May 12, 2025, 12:25 PM
4 votes · 1 answer · 1777 views
FUSE Overlay filesystem for "too long filenames"
Is there a FUSE overlay filesystem that:
* resolves "too long filenames" on its own for the underlying filesystem
* otherwise (for filenames fitting into the limits of the underlying filesystem) just proxies 1:1?

Example of how this could work: for each file `fabc...yxz` whose name is too long for the given underlying filesystem, translate it into a shorter name and use a second file as metadata with the full filename details.
Use case: the limitation of encrypted filesystems like EncFS or ecryptfs. When encrypting filenames, they can only store filenames shorter than the underlying filesystem allows, with the result that you cannot rsync into them content that requires longer filenames (e.g. ext4 allows 255 bytes, but ecryptfs on ext4 allows only 143 bytes for filenames).
Example problem reported by `rsync`:

```
rsync: mkstemp "/mnt/naswaw2016/ext4/asusm2n1934/enc/home/gwpl/dane/cs/reed-solomon/.CS-05-569 - reed-solomon [vg][vgvg] - Optimizing Cauchy Reed-Solomon Codes for Fault-Tolerant Storage Applications - by James S. Plank.pdf.CwyPQH" failed: File name too long (36)
```
Some references:
* same idea proposed earlier: https://github.com/vgough/encfs/issues/7#issuecomment-160678136
* ecryptfs bug describing issue: https://bugs.launchpad.net/ecryptfs/+bug/344878
* SE answer about filename limits of ecryptfs : https://unix.stackexchange.com/a/32834/9689
* ecryptfs bug with rsync usecase: https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/592303
(P.S. And yes, I am aware of encrypting at the block layer with LUKS, but encrypting above the fs layer suits my use case so much better that I'd rather stick to it.)
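The limits discussed above are per path component (POSIX `NAME_MAX`), which is easy to verify locally. This sketch assumes a filesystem with the usual 255-byte limit, such as ext4 or tmpfs:

```shell
# NAME_MAX is per component: a 255-byte name works, 256 bytes fails
# with ENAMETOOLONG (assumes ext4/tmpfs-style 255-byte NAME_MAX).
dir=$(mktemp -d)
name=$(printf 'a%.0s' $(seq 1 255))      # 255-byte file name
touch "$dir/$name" && echo "255: ok"
touch "$dir/${name}b" 2>/dev/null || echo "256: File name too long"
```

An overlay as requested would intercept exactly the second case, map the name to something under the limit, and stash the full name in its metadata file.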
Grzegorz Wierzowiecki
(14740 rep)
May 14, 2016, 03:02 PM
• Last activity: May 7, 2025, 09:37 AM
0 votes · 1 answer · 42 views
bindfs: Can remounting with new mapping cause side-effects?
I want to create an Ansible role that creates some bindfs mounts and maps them to a bunch of users. Those users can change, which is why I cannot just check whether the mount already exists and then call bindfs: the mount might exist, but with the wrong user mapping.

So basically, every time the playbook runs, the bindfs command is fired. Additionally, I cannot extract the current mapping from the `mount` overview.

So my question is: given that userA is uploading a laaaaarge file to the mounted directory, and at this moment the bindfs command is fired again, can this cause data corruption?

The command is nothing fancy and looks like this:

```
bindfs -o nonempty --map=userA/userB /var/foo/bar /mnt/foo/bar
```

One option I thought of was to create a passwd file, as bindfs offers `--map-passwd`. If that file changes, I can register a variable and only in that case remount.
But still: in this event, would I risk corrupting data?
Thanks for your help.
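The "only remount when the mapping changed" idea can be sketched without touching a live mount: persist the mapping used last time and compare. Paths and the mapping are illustrative, and the actual `fusermount -u`/`bindfs` calls are left commented out since they need FUSE and the real directories:

```shell
# Guard sketch: remount only when the desired mapping has changed,
# so in-flight I/O on an unchanged mount is never disturbed.
state=/tmp/bindfs-mapping.state    # illustrative state file
want="userA/userB"
if [ ! -f "$state" ] || [ "$(cat "$state")" != "$want" ]; then
    echo "mapping changed, remounting"
    # fusermount -u /mnt/foo/bar 2>/dev/null || true
    # bindfs -o nonempty --map="$want" /var/foo/bar /mnt/foo/bar
    printf '%s' "$want" > "$state"
else
    echo "mapping unchanged, leaving mount alone"
fi
```

In Ansible terms, the state file comparison maps naturally onto a `copy`/`template` task whose `changed` result gates the remount handler.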
SPQRInc
(105 rep)
May 4, 2025, 03:54 PM
• Last activity: May 6, 2025, 06:46 PM
2 votes · 2 answers · 5399 views
sshfs mounts share in read-only mode
I need to mount a share using `sshfs`. When I mount it using `sshfs` directly, i.e.

```
# sshfs martin@192.168.1.10:/home/data/ -o allow_other /mnt/data/
```

or

```
# sshfs martin@192.168.1.10:/home/data/ -o allow_other,default_permissions /mnt/data/
```

the share is mounted, but only in "read-only" mode. However, when I add the following line to `/etc/fstab` and do `mount /mnt/data`, the share is mounted read-write:

```
# martin@192.168.1.10:/home/data/ /mnt/data fuse.sshfs noauto,_netdev,allow_other,default_permissions 0 0
```

Both commands are issued under root. Why does `sshfs` from the command line mount the filesystem read-only?
How can I mount it read-write using the `sshfs` command?
Martin Vegter
(586 rep)
Nov 1, 2015, 08:07 AM
• Last activity: May 3, 2025, 08:35 PM
1 vote · 3 answers · 1015 views
bash count files and directory, summary size and EXCLUDE folders that are fuse|sshfs
I need help with a bash script that counts files and folders in a specified directory on a Linux system (Debian), excluding a specified folder.

I have a main directory named `workdir` with various script files and folders. Inside `workdir`, I have a directory named `mysshfs`. I use `fuse/sshfs` to mount an external folder on the `mysshfs` folder.

Now I want to run some commands to get the **file/directory** count and the **file/directory** size, but I want to exclude the directory `mysshfs`.
My bash commands that work:

1. Get the full size of `workdir` | **no fuse/sshfs in use**

   ```
   $ du -hs workdir
   ```

2. Get the full size of `workdir`, excluding `mysshfs` | **fuse/sshfs in use**

   ```
   $ du -hs --exclude=mysshfs workdir
   ```

3. Count files in `workdir` | **no fuse/sshfs in use**

   ```
   $ find workdir -type f | wc -l
   ```

4. Count folders in `workdir` | **no fuse/sshfs in use**

   ```
   $ find workdir -type d | wc -l
   ```

5. Count files in `workdir`, excluding `mysshfs` | **no fuse/sshfs in use**

   ```
   $ find workdir -type f -not -path "*mysshfs*" | wc -l
   ```

6. Count folders in `workdir`, excluding `mysshfs` | **no fuse/sshfs in use**

   ```
   $ find workdir -type d -not -path "*mysshfs*" | wc -l
   ```
When I use commands **5 & 6** and the remote directory is mounted under the `mysshfs` directory, the commands hang.
They eventually finish and show the correct output, but it looks like they are still looking inside the excluded directory even though they shouldn't, so it takes a long time to display the result.
Where is my error, or did I forget something in commands 5 & 6?
Or can I use other commands to get my results?
I need to count files and directories using two separate commands, excluding a specified folder that is mounted over `fuse/sshfs`, and get a fast result.
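A likely cause of the slowness described above: `-not -path` still descends into `mysshfs` and stats every entry over sshfs, filtering names only afterwards, whereas `-prune` stops `find` from entering the directory at all. A local sketch with stand-in directories:

```shell
# -prune keeps find from descending into mysshfs at all,
# unlike -not -path, which walks it and filters afterwards.
mkdir -p workdir/mysshfs workdir/other
touch workdir/a workdir/other/b workdir/mysshfs/c
find workdir -path workdir/mysshfs -prune -o -type f -print | wc -l   # → 2
find workdir -path workdir/mysshfs -prune -o -type d -print | wc -l   # → 2
```

The `-o -type f -print` branch only runs for paths that were not pruned, so the sshfs mount is never traversed.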
ReflectYourCharacter
(8185 rep)
Feb 22, 2022, 11:25 AM
• Last activity: Mar 24, 2025, 07:31 PM
0 votes · 0 answers · 884 views
How to allow users to mount FUSE/exfat filesystems in fstab without root?
I have a filesystem listed in `/etc/fstab` and I'd like a normal user to be able to mount it. Normally I would just add the `users` option and it would work; however, it seems that this doesn't work if the filesystem is mounted by a FUSE helper:

```
/dev/disk/by-label/SDCARD /mnt/sdcard exfat x-gvfs-hide,users,noatime,fmask=0133,dmask=0022,codepage=437,iocharset=iso8859-1,uid=1000,gid=985,noauto
```

Should a normal user try to mount this:

```
$ mount /mnt/sdcard/
FUSE exfat 1.4.0 (libfuse3)
ERROR: failed to open '/dev/sda1': Permission denied.
```

It seems that although the `users` option allows the `mount` command to run, it still runs as the normal user and not as root.

I don't particularly want to give the normal user direct access to the device, and `sudo` would already solve the problem, but I am trying to make this work without it. The SD card is used in a device that requires exfat, so reformatting to FAT32 won't work.

Is there any way to make this proceed as it does with a non-FUSE filesystem, specifically so that the device file keeps the same restrictions but a normal user can mount and unmount the device without root or sudo access?
Malvineous
(7395 rep)
Sep 17, 2023, 11:42 AM
• Last activity: Mar 18, 2025, 01:26 PM
28 votes · 6 answers · 89777 views
Mount with sshfs and write file permissions
I am trying to sshfs-mount a remote dir, but the mounted files are not writable. I have run out of ideas and ways to debug this. Is there anything I should check on the remote server?
I am on Xubuntu 14.04 and mount a remote dir of an Ubuntu 14.04 machine.
local $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
I changed the /etc/fuse.conf
local $ sudo cat /etc/fuse.conf
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#mount_max = 1000
# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other
And my user is in the fuse group
local $ sudo grep fuse /etc/group
fuse:x:105:MY_LOACL_USERNAME
And I mount the remote dir with (tried with/without combinations of sudo, default_permissions, allow_other):
local $sudo sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/
The `REMOTE_USERNAME` has write permissions to the dir/files (on the remote server).
I tried the above command without sudo and without default_permissions; in all cases I get:
local $ ls -al /mnt/LOCAL_DIR_NAME/a_file
-rw-rw-r-- 1 699 699 1513 Aug 12 16:08 /mnt/LOCAL_DIR_NAME/a_file
local $ test -w /mnt/LOCAL_DIR_NAME/a_file && echo "Writable" || echo "Not Writable"
Not Writable
Clarification 0
------
In response to user3188445's comment:
$ whoami
LOCAL_USER
$ cd
$ mkdir test_mnt
$ sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ test_mnt/
$ ls test_mnt/
I see the contents of the dir correctly
$ ls -al test_mnt/
total 216
drwxr-xr-x 1 699 699 4096 Aug 12 16:42 .
drwxr----- 58 LOCAL_USER LOCAL_USER 4096 Aug 17 15:46 ..
-rw-r--r-- 1 699 699 2557 Jul 30 16:48 sample_file
drwxr-xr-x 1 699 699 4096 Aug 11 17:25 sample_dir
$ touch test_mnt/new_file
touch: cannot touch ‘test_mnt/new_file’: Permission denied
# extra info: SSH to the remote host and check file permissions
$ ssh REMOTE_USERNAME@REMOTE_HOST
# on remote host
$ ls -al /remote/dir/path/
lrwxrwxrwx 1 root root 18 Jul 30 13:48 /remote/dir/path/ -> /srv/path/path/path/
$ cd /remote/dir/path/
$ ls -al
total 216
drwxr-xr-x 26 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 12 13:42 .
drwxr-xr-x 4 root root 4096 Jul 30 14:37 ..
-rw-r--r-- 1 REMOTE_USERNAME REMOTE_USERNAME 2557 Jul 30 13:48 sample_file
drwxr-xr-x 2 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 11 14:25 sample_dir
vkats
(661 rep)
Aug 12, 2015, 02:40 PM
• Last activity: Mar 8, 2025, 06:54 AM
1 vote · 1 answer · 1155 views
FUSE in rootless, unprivileged podman
# Question
Is adding `CAP_SYS_ADMIN` still the only way to get FUSE working inside a rootless container (with either native overlay or `fuse-overlayfs`/other methods)?
----------
# Examples
## Podman in podman
This example from https://www.redhat.com/en/blog/podman-inside-container
podman run --user podman --security-opt label=disable --device /dev/fuse -ti quay.io/podman/stable podman run -ti docker.io/busybox echo hello
works for me without a problem. But it made me assume that `fusermount` is used inside the container, or at least that it is possible to use it; that, however, does not work when I try it with the same setup.
What's more, it doesn't seem to mount anything:
podman run --user podman --security-opt label=disable --device /dev/fuse -ti quay.io/podman/stable
# running commands in container
cat /proc/mounts > /home/podman/before
podman run -d docker.io/busybox sleep 100
cat /proc/mounts > /home/podman/during
diff /home/podman/before /home/podman/during
# (no difference)
It seems like omitting `/dev/fuse` also works (tested with native overlay):
podman run --user podman --security-opt label=disable -ti quay.io/podman/stable podman run -ti docker.io/busybox echo hello
## Bindfs
Just adding bindfs
to the image
FROM quay.io/podman/stable
RUN dnf -y install bindfs
And running a container
podman run --user podman --security-opt label=disable --device /dev/fuse -ti built_image_from_above:latest
# inside the container
cd ~ && mkdir test1 test2
bindfs --no-allow-other test1 test2
fusermount: mount failed: Operation not permitted
I'm assuming the behaviour will be the same for other FUSE mounts, such as `sshfs`.
Might this be a permission issue within the container, or is the host denying it?
----------
# Ideas
## needs privilege
This answer implies that to use FUSE, you need to be privileged.
Setuid is mentioned, but I'm not sure what is meant by that.
Within the container:
ls -l $(which fusermount3)
-rwsr-xr-x. 1 root root 40800 Jul 17 00:00 /usr/bin/fusermount3
## rootless overlay
I've also tried removing the `mount_program` line from `storage.conf` and running `podman system reset`, as described here. But I'm not sure if this is only about overlay or also about FUSE. If I don't add `/dev/fuse`, the device isn't present in the container:
> One other disadvantage of fuse-overlayfs is it requires access to
> /dev/fuse. When people try to run Podman and Buildah within a confined
> container, we take away the CAP_SYS_ADMIN privileges, even when
> running as root. This forces us to use a user namespace so that we can
> mount volumes. In order to make this work, users have to add /dev/fuse
> to the container. Once we have native overlay for rootless mode (no
> CAP_SYS_ADMIN), /dev/fuse will no longer be required.
----------
# Versions
Host: Fedora 41
Podman: 5.3.1
rudib
(1764 rep)
Jan 9, 2025, 01:08 PM
• Last activity: Jan 10, 2025, 07:30 PM
0 votes · 1 answer · 111 views
Is there a FUSE-based caching solution for selective prefetching from a remote filesystem?
I am working with a remote parallel file system (CephFS), mounted at `/mnt/mycephfs/`, which contains a large dataset of small files (200 GB+). My application trains on these files, but reading directly from `/mnt/mycephfs/` is slow due to parallel file system contention and network latency.
I am looking for a FUSE-based solution that can:
1. Take a list of files required by the application.
2. Prefetch and cache these files into a local mount point (e.g., `/mnt/prefetched/`) without replicating the entire remote storage (as my local RAM and disk space are limited).

The desired behavior:
• If a file (e.g., `/mnt/mycephfs/file`) is already cached at `/mnt/prefetched/file`, it should be served from the cache.
• If not cached, the solution should fetch the file (along with other files from the prefetch list), cache it at `/mnt/prefetched/`, and then serve it from there.
Are there existing tools or frameworks that support this kind of selective caching and prefetching using FUSE?
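For comparison, the prefetch-and-cache step itself (without the FUSE read-through part) is a few lines of shell. Paths here are scratch directories standing in for `/mnt/mycephfs` and `/mnt/prefetched`:

```shell
# Prefetch sketch: copy listed files from the slow mount into a local
# cache, preserving relative paths and skipping files already cached.
src=/tmp/mycephfs; cache=/tmp/prefetched       # stand-ins for the mounts
mkdir -p "$src/sub" "$cache"
echo data > "$src/sub/f1"
printf 'sub/f1\n' > /tmp/prefetch-list.txt
while IFS= read -r rel; do
    mkdir -p "$cache/$(dirname "$rel")"
    [ -e "$cache/$rel" ] || cp -p "$src/$rel" "$cache/$rel"
done < /tmp/prefetch-list.txt
cat /tmp/prefetched/sub/f1   # → data
```

What a FUSE layer adds on top of this loop is the miss path: serving an uncached open by fetching on demand and then reading from the cache, plus eviction when local space runs out.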
H.Jamil
(31 rep)
Dec 11, 2024, 04:23 AM
• Last activity: Dec 11, 2024, 06:45 PM
-1 votes · 1 answer · 2075 views
SSHFS: Permission denied when opening directory from application even with "allow_other"
I'm trying to use sshfs to open a remote directory in the Spotify snap application on Linux Mint.
sshfs -o allow_other user@example.com:/remote/dir /local/dir
I've mounted my directory using the "allow_other" fuse option, and the directory opens without having to use sudo in terminal and file explorer. However, when I try adding it as a directory in Spotify for local files, the file explorer gives me this message:
Any advice on fixing this? I've tried modifying the permissions of the folder and "default_permissions" option, no luck yet.

Jonathan Ting
(1 rep)
Jul 6, 2020, 06:20 PM
• Last activity: Dec 3, 2024, 06:05 AM
11 votes · 4 answers · 1124 views
How can I secure unencrypted credential files, for programs that assume them (like gmi/lieer)?
### Brief
Q: How can I cryptographically secure a credentials file that is stored on disk as plaintext?
Or, rather: how can I avoid storing credentials like those for Gmail and other API keys on disk? For existing programs that assume such an unencrypted file containing secrets.
I ask this question motivated by wanting to access Gmail using gmi/lieer and notmuch - which AFAICT use an unencrypted credentials file on disk. But there are lots of other programs that require similar credentials files.
Surely there must already be a generic solution to this problem? Something like ssh-agent, that asks the user for a passphrase and then decrypts the secrets into memory for some time. But not necessarily as fancy as ssh-agent... the agent doesn't need to do all of the crypto operations, which might differ by application or API or protocol. IMHO just decrypting the credentials file into memory would be of value.
TL;DR - You might be able to stop here without reading the rest
---
Some people will understand what I'm asking for from the above BRIEF section.
Others, probably not.
### Surely there must be a generic solution to this problem?
Surely there is something like ssh-agent that reads such secrets from an encrypted file, asks the user for password (or better), decrypts the secrets, and keeps them only in memory for some time, so that you don't constantly have to reenter the password/etc?
Doesn't have to go quite as far as ssh-agent, where the agent does all or most of the cryptographic operations - and hence the protocol between ssh client and ssh-agent is not just "give me the credential", but must also describe the operations to be performed. Since there are lots of different protocols that have lots of different credentials with lots of slightly different operations, there may be an obstacle to creating a custom agent for each right away. But simply having a persistent agent ask the user and then decrypt credentials from disk into memory would be an improvement over nothing at all.
Surely this has already been done, in a manner that can work with lots of different apps XYZ?
But I certainly don't know of anything like this. Nor, for that matter, do any AI assistants that I have tried - although it might be a question of me not phrasing the LLM prompt or Google search correctly.
For that matter, ChatGPT suggested that I do the following:
* encrypt the credentials file on disk
* when I want to use it
* temporarily create an unencrypted credentials file - on disk
* let the client program like gmi/lieer access the unencrypted credential file while it is running
* and when I no longer am running the client, delete the unencrypted temporary credentials file
I hope I don't need to explain how unsatisfactory this is.
### Could this be done using UNIX domain sockets or FUSE? Has it been done already?
If I knew that the client application always reads or replaces the entire credential file, I could imagine having an XYZ-agent write the unencrypted secrets to a socket all at once. Or, if I do not know the access pattern (e.g. if the secret is large enough that seeks and random accesses are performed), I could imagine that a userspace filesystem like FUSE could be used.
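The socket idea for the whole-file-read case can be sketched with a plain FIFO, no FUSE needed: the "agent" side writes the decrypted secret into the pipe, the client reads it like a file, and the plaintext never lands on disk. Here `printf` is an illustrative stand-in for the real decryption step (e.g. `gpg -d credentials.gpg`):

```shell
# FIFO sketch: serve a secret as a "file" without storing plaintext on disk.
fifo=/tmp/creds.fifo
rm -f "$fifo" && mkfifo -m 600 "$fifo"
# agent side: would be `gpg -d credentials.gpg > "$fifo"` in real life
printf 'secret-token' > "$fifo" &
# client side: reads the "credentials file" once
cat "$fifo"   # → secret-token
wait
```

The limitation matches the caveat above: a FIFO only supports one sequential read per write, so software that reopens or seeks in its credentials file would need the FUSE variant instead.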
Q: has anyone created such a generic "decrypt secrets into memory" tool, so that it looks like an unencrypted credentials file to software that cannot handle an encrypted credentials file?
* Using UNIX domain sockets
* or FUSE
* or whatever
Even better if such a change to the namespace were limited to a parent process and its children, as you might be able to do in OSes like Plan9 or Brazil, although AFAIK existing UNIXes like Linux do not make this easy to do.
### Details
As is my wont, I provide way too much background detail for my question. For many people reasonably knowledgeable about security this much detail should not be necessary. But sometimes it may not be clear exactly what I am talking about. Sometimes I may be using incorrect terms. And so on.
Hence, I provide all this extra detail hoping to short-circuit misunderstandings.
If you truly know of an answer to my question, you can probably stop without reading all the rest.
Heck, I might as well admit it: I'm trying to short-circuit stupid non-answers to my question. But in previous attempts to do this I have not always been successful.
### Motivating Example: gmi/lieer access to gmail uses an unencrypted credential file
E.g. lieer, a program to synchronize gmail with local storage, stores an unencrypted credentials file for Gmail in the filesystem. This file, .credentials.gmailieer.json, is completely unencrypted ordinary plaintext.
> Excerpting:
> gmi init will now open your browser and request limited access to your e-mail.
> …
> The access token is stored in .credentials.gmailieer.json in the local mail repository. If you wish, you can specify your own api key that should be used.
Of course file system permissions should make it accessible only by my UNIX login id. It is used by the gmi/lieer program to access my gmail account. But unless I am totally missing something, any program running as me can access this file. E.g. one of the umpteen sandbox escapes in web browsers might allow it to access this file. Or I might have filesystem permissions set incorrectly. Or I might have misconfigured filesystem/disk encryption, and other user IDs on my machine may be able to access it. Etc.
I thought that it was standard/best practice for security that plaintext secrets should never be stored on disk. I have long been somewhat surprised by how many software systems require credentials like API keys to be stored on disk. I have usually avoided using such systems, although that gets in the way of doing things like Google API development that require such API keys. Or I might use such systems at work, but resist using them for stuff that is personal. However, I really do want to use such systems for personal stuff. Not just personal software development, but for gmi/lieer's access to my Gmail account, which is about as personal as it gets, much more sensitive to me than a GitHub project.
This is not just an issue with gmi/lieer. Many programs, many software systems, require you to store credentials like API keys on disk. I don't think I've encountered any of them that keep them encrypted on disk.
#### ssh-agent => no plaintext credentials on disk
Except, of course, for ssh/ssh-agent and gpg/gpg-agent:
+ the key files are protected
+ not only by file system permissions,
+ but are also encrypted by a passphrase.
* When you load an identity into ssh-agent,
* it asks you for the passphrase,
* reads the encrypted key file(s),
* and decrypts them into its process memory.
* ssh-agent is persistent, so you only have to do this once in a while.
* ssh, if configured appropriately,
* won't be able to run without asking ssh-agent to "do stuff",
* and communicates with ssh-agent via a UNIX domain socket.
* ssh-agent actually does all or most of the public-key (signature) computations
* => ssh itself never holds the private keys
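For reference, the client/agent exchange above is just length-framed messages over the socket. A sketch of the request framing, using the message number for "list identities" from the published ssh-agent protocol draft:

```python
# Sketch of ssh-agent request framing: a 4-byte big-endian length prefix,
# then a one-byte message type plus any payload. Message number 11 is
# SSH_AGENTC_REQUEST_IDENTITIES in the ssh-agent protocol draft.
import struct

SSH_AGENTC_REQUEST_IDENTITIES = 11

def agent_request(msg_type: int, payload: bytes = b"") -> bytes:
    """Frame one ssh-agent request for writing to the agent socket."""
    body = bytes([msg_type]) + payload
    return struct.pack(">I", len(body)) + body

req = agent_request(SSH_AGENTC_REQUEST_IDENTITIES)
print(req.hex())  # 000000010b
```

The point is that only such requests (not key material) ever cross the socket; the agent answers with signatures or public keys.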
I thought that it was standard/best practice for security that
1. Plaintext secrets should never be stored on disk
* with the possible exception of swap files,
* but that should be a solved problem
* so even if somebody can access the raw data on disk you should be safe
* e.g. if you don't have full disk or filesystem encryption
* and the disk drive is accessed outside of its "home" OS
2. Plaintext private keys may be stored in ssh-agent's process memory
* not in the ssh client program
* and should not be accessible by any other programs,
* even running as the same user in the same machine
* possibly also not by more privileged users like root or admin
* with the possible exception of debuggers
* but that also should be a solved problem (although not so much in my experience)
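As a small illustration of the spirit of point (1) in application code, a secret held in a mutable buffer can at least be overwritten when no longer needed. A best-effort sketch only: it does not stop copies the runtime may have made, nor swapping, which would need mlock(2):

```python
# Sketch: keep a secret in a mutable bytearray so it can be overwritten
# when done, instead of an immutable str/bytes that lingers until GC.
# Best-effort only; not a substitute for mlock(2) or memory encryption.
secret = bytearray(b"hunter2-api-key")

def wipe(buf: bytearray) -> None:
    """Overwrite every byte of the buffer with zeros in place."""
    for i in range(len(buf)):
        buf[i] = 0

# ... use the secret ...
wipe(secret)
print(secret == bytearray(len(secret)))  # True: buffer is all zero bytes
```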
#### Going beyond ssh-agent…
Skip this section; it isn't really necessary for my question, except that it helps me organize the issues in my mind. Also, if somebody can tell me that items (3) and (4) below are in much wider use than I currently know, I'd love to hear about it.
Items (1) and (2) are, AFAIK, the state of the art, or at least of practice. But they leave open some hardware-level attack surfaces (DRAM snooping, logic analyzers on buses), which have been addressed by certain academic and industry projects but which, as far as I know, are much less common:
3. In most present-day systems plaintext secrets may be stored in DRAM
+ unless the programmer has been very careful to keep them only in registers
+ and has control of context switches that might save the registers to memory
+ but various hardware memory encryption proposals and products prevent even this from happening
+ e.g. data may be stored unencrypted in cache, but may be encrypted between cache and DRAM.
4. and similarly various proposals and products ensure that all of the traffic on buses and connections etc. where you could attach a logic analyzer are encrypted.
* 2.5: I'm actually a little bit uncomfortable that the ssh client/agent communication is done via a UNIX domain socket
+ AFAIK any process running with the appropriate user ID can access that socket, and can get the ssh-agent to do stuff
+ AFAIK the UNIX domain socket is protected only by filesystem permissions
+ AFAIK the ssh-agent and ssh program do not talk via an encrypted channel
+ Although the fact that the UNIX domain socket's path can be somewhat random reduces exposure.
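One partial mitigation for this concern: on Linux, a daemon can ask the kernel which uid is on the other end of a UNIX domain socket via SO_PEERCRED, rather than trusting filesystem permissions alone. A Linux-specific sketch, demonstrated against our own process via socketpair:

```python
# Sketch: query peer credentials (struct ucred: pid, uid, gid) on a UNIX
# domain socket with SO_PEERCRED. Linux-specific. An agent could use this
# to reject connections from unexpected uids. Here both ends of the
# socketpair are our own process, so the peer uid is our own uid.
import os
import socket
import struct

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
creds = b.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                     struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)
print(uid == os.getuid())  # True
a.close()
b.close()
```

Note this still cannot distinguish *which program* running as that uid is connecting, which is exactly the limitation described above.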
And I know that some operating systems (not standard Linux, AFAIK) allow permissions to be restricted not just by user ID but also by executable ID, or by position in the process tree.
+ You can of course use separate UNIX user IDs to accomplish this, but as far as I know this is not commonly done.
#### Example: plaintext file containing gmail credentials for gmi/lieer
The credential lieer stores looks like the following. I hope that I have edited out anything sensitive. "Gibberish" is of course a stand-in for what looks like random letters and numbers with occasional punctuation, the sort of thing one associates with a credential.
{"access_token": "xyzzy ~200 bytes of gibberish",
 "client_id": "~40 bytes of gibberish.apps.googleusercontent.com",
 "client_secret": "~20 bytes of gibberish",
 "refresh_token": "~100 bytes of gibberish",
 "token_expiry": "2024-09-15T07:16:24Z",
 "token_uri": "https://accounts.google.com/o/oauth2/token",
 "user_agent": "Lieer",
 "revoke_uri": "https://oauth2.googleapis.com/revoke",
 "id_token": null,
 "id_token_jwt": null,
 "token_response": {"access_token": "~200 bytes of gibberish",
                    "expires_in": 3599,
                    "scope": "https://www.googleapis.com/auth/gmail.readonly https://www.googleapis.com/auth/gmail.modify https://www.googleapis.com/auth/gmail.labels",
                    "token_type": "Bearer"},
 "scopes": ["https://www.googleapis.com/auth/gmail.labels",
            "https://www.googleapis.com/auth/gmail.readonly",
            "https://www.googleapis.com/auth/gmail.modify"],
 "token_info_uri": "https://oauth2.googleapis.com/tokeninfo",
 "invalid": false,
 "_class": "OAuth2Credentials",
 "_module": "oauth2client.client"
}
This is completely plaintext, although of course it is accessible only by my Linux user id.
It is used by the gmi program to authenticate to gmail. If it is not present, I cannot access my gmail; I don't get asked for a password, etc.
Unless I am missing something, this credential could allow almost any program that can read the file to access my Gmail. This concerns me.
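To make the exposure concrete: any process that can read the file can parse it and decide for itself whether the access token is still live. The field names below mirror the file shown above; the values are placeholders:

```python
# Sketch: anything that can read the credentials file can trivially parse
# it and check token liveness. Field names follow the lieer file above;
# the values here are placeholders, not real credentials.
import json
from datetime import datetime, timezone

cred = json.loads('{"access_token": "xyzzy", '
                  '"refresh_token": "longer-lived", '
                  '"token_expiry": "2024-09-15T07:16:24Z"}')

expiry = datetime.strptime(cred["token_expiry"],
                           "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
expired = datetime.now(timezone.utc) >= expiry
print(expired)  # True by now for the 2024 date above
```

Even after the short-lived access_token expires, the refresh_token in the same file is long-lived, so the exposure does not decay with time.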
It's not just gmi/lieer; many programs do this. I'm not going to bother listing more examples, but just googling "API key" should yield a lot of them.
### Is it just obsolete legacy software?
Possibly, but IMHO not completely.
E.g. the gmi/lieer source code and/or documentation indicates that it is using an old Gmail API and should be upgraded. Possibly more recent APIs solve this problem, but not as far as I can tell. Possibly there is already a generic OAuth agent, but not one that I can find. AFAICT Google really prefers to keep the OAuth stuff in its own libraries, used by Google Chrome and other web browsers, and has not done much to support command-line or other non-browser utilities. They would really prefer that you did not use such utilities unless Google wrote them. They only grudgingly support such utilities, allowing you to obtain API keys, etc. If there are security holes caused by storing such credentials unencrypted on disk, they will just use that as more evidence to justify locking things down, and locking other software out.
Anyway: if there is a generic OAuth agent (probably not called that), I would love to hear about it.
But furthermore: even if there is a generic OAuth agent, there are a lot of existing programs that assume an unencrypted credentials file on disk. There would be value in having a generic solution for these, until they can be upgraded. Assuming they can be.
Krazy Glew
(287 rep)
Oct 20, 2024, 11:35 PM
• Last activity: Oct 22, 2024, 01:00 PM
3
votes
2
answers
7665
views
How do I make `modprobe fuse` and `modprobe loop` persistent?
This used to not be a problem, but now it is. I haven't changed anything significant so probably an update broke it. When I run VeraCrypt it complains that it couldn't set up loop device and suggests running `modprobe fuse`. Running it doesn't work. However, running `modprobe fuse` **and** `modprobe...
This used to not be a problem, but now it is. I haven't changed anything significant so probably an update broke it.
When I run VeraCrypt it complains that it couldn't set up a loop device and suggests running `modprobe fuse`. Running that alone doesn't help. However, running `modprobe fuse` **and** `modprobe loop` fixes it, until the next restart.
Shouldn't these modules be loaded automatically at boot? Why aren't they? How do I make them load persistently?
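For what it's worth, on systemd-based distributions the usual way to make modules load at every boot is a modules-load.d drop-in; a sketch (the filename is arbitrary):

```
# /etc/modules-load.d/veracrypt.conf  (hypothetical filename)
# One module name per line; loaded by systemd-modules-load at boot.
fuse
loop
```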
Bagalaw
(1085 rep)
Jan 23, 2019, 02:48 PM
• Last activity: Oct 9, 2024, 01:01 AM
1
votes
1
answers
4192
views
Cannot mount rcloned drive because of FUSE error
I wanted to mount my rcloned drive. When I try to mount that rclone using this command: rclone mount --allow-other Webseries: /webseries I get the following error: 2022/04/28 21:59:46 mount helper error: fusermount: fuse device not found, try 'modprobe fuse' first 2022/04/28 21:59:46 Fatal error: fa...
I wanted to mount my rcloned drive. When I try to mount it using this command:
rclone mount --allow-other Webseries: /webseries
I get the following error:
2022/04/28 21:59:46 mount helper error: fusermount: fuse device not found, try 'modprobe fuse' first
2022/04/28 21:59:46 Fatal error: failed to mount FUSE fs: fusermount: exit status 1
I want to mount it and have referred to many threads related to this.
**What I tried**
* I tried `whereis modprobe`. Output is: `modprobe: /usr/lib/modprobe.d`
* I have tried running `modprobe fuse`. It responds with `bash: modprobe: command not found`
I feel like fuse isn't installing; I can't find any file related to it. I installed fuse using `sudo apt-get install fuse` and it installs successfully.
**I'm running Ubuntu:20.04 on docker**.
And it seems like Docker doesn't like fuse very much.
I even tried google-drive-ocamlfuse, but the VNC/RDP session disconnects while opening the browser for Google authentication.
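A note on the Docker angle: the fuse module lives in the host kernel (hence no modprobe inside the container), and the container additionally needs the /dev/fuse device and mount privileges. A hedged sketch of the usual Compose settings; the service name is hypothetical, the image is from the question, and whether SYS_ADMIN alone suffices depends on the host:

```yaml
services:
  rclone-box:
    image: ubuntu:20.04
    devices:
      - /dev/fuse            # expose the host's FUSE device
    cap_add:
      - SYS_ADMIN            # fusermount needs mount(2) inside the container
    security_opt:
      - apparmor:unconfined  # may be needed on some hosts
```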
Devansh Shrivastava
(39 rep)
Apr 28, 2022, 10:06 PM
• Last activity: Aug 30, 2024, 04:00 PM
1
votes
0
answers
226
views
OverlayFS for User writing changes to root-owned directory
I am trying to give all Users on a system their own writable copy of a root-owned directory, and OverlayFS sounds like the tool for the job, but I am finding it not as straightforward as it sounded at first. First, the setup: > mkdir upper work merged merged-user > sudo mkdir -p lower/path/to > sudo...
I am trying to give all Users on a system their own writable copy of a root-owned directory, and OverlayFS sounds like the tool for the job, but I am finding it not as straightforward as it sounded at first.
First, the setup:
> mkdir upper work merged merged-user
> sudo mkdir -p lower/path/to
> sudo touch lower/path/to/file
> ls -l
drwx------ 2 user group 4096 Aug 10 00:00 merged
drwx------ 2 user group 4096 Aug 10 00:00 merged-user
drwxr-xr-x 3 root root 4096 Aug 10 00:00 lower
drwx------ 2 user group 4096 Aug 10 00:00 upper
drwx------ 2 user group 4096 Aug 10 00:00 work
The goal at the end is to allow user `user` to write to any directory or path in the `merged` or `merged-user` mount point.
First attempt, with `mount`:
> sudo mount -t overlay overlay -o "lowerdir=$PWD/lower,upperdir=$PWD/upper,workdir=$PWD/work" merged
> ls -l merged/path/to
total 4
-rw-r--r-- 1 root root 5 Aug 10 00:00 file
> echo me > merged/path/to/file
sh: merged/path/to/file: Permission denied
I get it; at the kernel level, OverlayFS isn't touching the permissions, so the `merged` directory doesn't have the permissions setup I would like. Enter `fuse-overlayfs`:
fuse-overlayfs -o "lowerdir=lower,upperdir=upper,workdir=work,squash_to_uid=$(id -u)" merged-user
> ls -la merged-user/
total 8
drwx------ 3 user group 4096 Aug 10 00:00 .
drwxr-xr-x 3 root root 4096 Aug 10 00:00 path
> echo me > merged-user/path/to/file
sh: merged-user/path/to/file: Permission denied
Still permission denied, even with the `squash_to_uid` option. Am I missing some other parameter that would enable `merged` or `merged-user` to appear as the user's own directories?
palswim
(5597 rep)
Aug 10, 2024, 11:22 PM
1
votes
1
answers
371
views
Does guestmount support ufs?
I am trying to use `guestmount` to inspect/modify a FreeBSD .qcow2 image (I saw several answers here, but I found nothing actually solving my problem). I am working under Debian Sid. Apparently image is correctly recognized: ``` mcon@cinderella:~/projects/LXD$ guestfish -a /home/mcon/VirtualBox\ VMs...
I am trying to use `guestmount` to inspect/modify a FreeBSD .qcow2 image (I saw several answers here, but found nothing actually solving my problem).
I am working under Debian Sid.
Apparently the image is correctly recognized:
mcon@cinderella:~/projects/LXD$ guestfish -a /home/mcon/VirtualBox\ VMs/FreeBSD-13.2-RELEASE-amd64.qcow2
Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.
Type: ‘help’ for help on commands
‘man’ to read the manual
‘quit’ to quit the shell
> run
> list-filesystems
/dev/sda1: unknown
/dev/sda2: vfat
/dev/sda3: unknown
/dev/sda4: ufs
>
but mount fails badly:
mcon@cinderella:~/projects/LXD$ guestmount -a /home/mcon/VirtualBox\ VMs/FreeBSD-13.2-RELEASE-amd64.qcow2 -m /dev/sda4 --rw -o subtype=ufs2 /mnt
libguestfs: error: mount_options: mount exited with status 32: mount: /sysroot: wrong fs type, bad option, bad superblock on /dev/sda4, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
guestmount: ‘/dev/sda4’ could not be mounted.
guestmount: Did you mean to mount one of these filesystems?
guestmount: /dev/sda1 (unknown)
guestmount: /dev/sda2 (vfat)
guestmount: /dev/sda3 (unknown)
guestmount: /dev/sda4 (ufs)
I did some variations and also tried recompiling the whole package from sources, but the result is the same.
What am I missing?
Note: I cannot use `qemu-nbd` and `sudo mount -t ufs -o ufstype=ufs2 /dev/nbd0p4 /mnt/` because the `ufs.ko` kernel module is compiled without write support.
ZioByte
(910 rep)
Jul 16, 2023, 07:24 AM
• Last activity: Jul 19, 2024, 03:21 PM
0
votes
0
answers
37
views
Shared folder remounts itself after umount
On a Oracle Linux 7.9 server, I have a directory that is mounted from another system. The fstab entry is: ```` myuser@192.168.10.2:/opt/commonfiles /opt/commonfiles/ fuse.sshfs identityfile=/home/myuser/.ssh/id_rsa,x-systemd.automount,x-systemd.nofail,_netdev,user,idmap=user,transform_symlinks,defau...
On an Oracle Linux 7.9 server, I have a directory that is mounted from another system. The fstab entry is:
    myuser@192.168.10.2:/opt/commonfiles /opt/commonfiles/ fuse.sshfs identityfile=/home/myuser/.ssh/id_rsa,x-systemd.automount,x-systemd.nofail,_netdev,user,idmap=user,transform_symlinks,default_permissions,allow_other,rw,uid=1004,gid=1004,umask=0003 0 0
I need the ownership set as myuser so that the applications that run on both machines would have suitable rights. This has been working satisfactorily for a number of years.
Now, the server is being replaced by a newer, shinier one; let's call it 192.168.10.3.
I duplicated the fstab line with the second IP, mounted the directory in a temporary path to test, and unmounted it. Next, I wanted to unmount the original, point it at the new server, and remount.
When I give the command `sudo umount /opt/commonfiles`, I get the bash prompt back with no errors or warnings. Similarly, nothing in /var/log/messages. The command `df -h` shows that it has been unmounted, but only for a few seconds.
Then it gets auto-remounted. If I run `sudo umount /opt/commonfiles; sudo mount /opt/commonfiles; df -h`, I get the response that the directory is not empty, and df shows the old server is still mounted even though its fstab entry has been commented out.
Firstly, this is a production server with 24/7 transactions, so there is no time to reboot it unless I wake up in the wee hours to do this.
Secondly, lsof shows no locks on any files within /opt/commonfiles.
It looks to me that the reason is the x-systemd.automount option, although I thought it only meant that the device is to be mounted at boot time.
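If x-systemd.automount is indeed the culprit, systemd will have generated a .automount unit named after the mount path, and stopping that unit (not just umount) is what prevents the re-mount. A simplified sketch of how the unit name is derived; real systemd-escape also encodes special characters, but this handles plain ASCII paths like the one above:

```python
# Simplified version of `systemd-escape --path`: strip the leading '/',
# replace the remaining '/' with '-', then append the unit suffix.
# (Real escaping also encodes other special characters as \\xXX.)
def systemd_path_to_unit(path: str, suffix: str = "automount") -> str:
    trimmed = path.strip("/")
    return trimmed.replace("/", "-") + "." + suffix

unit = systemd_path_to_unit("/opt/commonfiles")
print(unit)  # opt-commonfiles.automount
# Stopping that unit would keep the path from re-mounting:
#   sudo systemctl stop opt-commonfiles.automount
```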
Hussain Akbar
(145 rep)
May 3, 2024, 06:52 AM