Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
2
answers
1039
views
fstab entry to automate mounting B2 Backblaze bucket to local mountpoint
I maintain an Ubuntu 20.04 LTS headless server, and I can manually mount a Backblaze B2 bucket using s3fs (FUSE) for backups.
How do I add a line to `fstab` to mount the B2 bucket automatically at power-up?
I know that altering `fstab` can be tricky. The mount command I obtained from the Backblaze [FAQ](https://help.backblaze.com/hc/en-us/articles/360047773653-Using-S3FS-with-B2) is:
> sudo s3fs \
> mybucket \
> /path/to/mountpoint \
> -o passwd_file=/etc/passwd-s3fs \
> -o url=https://s3.your-region.backblazeb2.com
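For reference, a minimal sketch of what the matching `/etc/fstab` entry could look like, assuming the same bucket name, mountpoint, password file, and endpoint as in the command above (the `_netdev` and `allow_other` options are additions here, untested):

    # /etc/fstab entry for the B2 bucket via s3fs (names and paths assumed from the command above)
    mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other,passwd_file=/etc/passwd-s3fs,url=https://s3.your-region.backblazeb2.com 0 0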
philip mirabelli
(21 rep)
May 28, 2022, 06:35 PM
• Last activity: Jan 19, 2025, 01:19 AM
8
votes
4
answers
18516
views
Search inside s3 bucket with logs
How can I search for a string inside a lot of .gz files in an Amazon S3 bucket subfolder? I tried mounting it via s3fs and using zgrep, but it's painfully slow. Do you use any other methods?
Is there perhaps an Amazon service I could use to zgrep them quickly?
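One s3fs-free approach worth sketching is to stream each object and grep it locally with the AWS CLI; the bucket, prefix, and pattern below are placeholders, not taken from the question:

    #!/bin/sh
    # Stream every .gz object under a prefix and grep the decompressed contents
    # without mounting the bucket (bucket, prefix, and pattern are hypothetical).
    BUCKET=mybucket
    PREFIX=logs/subfolder/
    PATTERN='something interesting'

    aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
        --query 'Contents[].Key' --output text | tr '\t' '\n' | grep '\.gz$' |
    while read -r key; do
        if aws s3 cp "s3://$BUCKET/$key" - 2>/dev/null | gunzip -c | grep -q "$PATTERN"; then
            echo "match: $key"
        fi
    done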
Michal_Szulc
(266 rep)
Sep 26, 2016, 02:04 PM
• Last activity: Jun 4, 2024, 11:24 AM
2
votes
2
answers
894
views
Moving files without duplicating them to an S3FS-mounted storage bucket
Does the `mv` command temporarily duplicate the moved files when the target is an S3FS mount?
I have a VM that is close to running out of disk space, so I'd like to move some files to a storage bucket (**mounted using S3FS**) to free up space.
Since I'm currently using 88.7% of 28.90 GB (approximately 25.70 GB) and the directory I plan to move is 8.2 GB, I'm afraid of hitting 100% and getting stuck.
    mv /path/sourcefolder/* /path/destinationfolder/
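If the worry is a large transient copy, one hedged alternative is an explicit copy-then-delete per file, so each source file is removed as soon as it has been transferred; the paths are the ones from the question, and this is untested against an s3fs target:

    # Copy files to the s3fs mount and delete each source file only after a
    # successful transfer, then prune the now-empty source directories.
    rsync -av --remove-source-files /path/sourcefolder/ /path/destinationfolder/
    find /path/sourcefolder/ -type d -empty -delete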
Sig
(133 rep)
Sep 15, 2023, 09:44 AM
• Last activity: Sep 18, 2023, 03:16 PM
2
votes
1
answers
1739
views
s3fs complains about SSH key or SSL cert - how to fix?
I downloaded and installed [s3fs 1.73](https://code.google.com/p/s3fs/wiki/FuseOverAmazon) on my Debian Wheezy system. The specific steps I took were, all as root:
apt-get -u install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support
./configure --prefix=/usr/local
make
make install
The installation went well, and I proceeded to create a file `/usr/local/etc/passwd-s3fs` with my credentials copied from past notes (I'm pretty sure those are correct). That file is mode 0600, owner 0:0. Piecing together the example on the web page and the man page, I then try a simple mount as a proof of concept to make sure everything works:
$ sudo -i
# s3fs mybucketname /mnt -o url=https://s3.amazonaws.com -o passwd_file=/usr/local/etc/passwd-s3fs
In short: it doesn't.
The mount point exists with reasonable permissions, and I get no error output from s3fs. However, nothing gets mounted on /mnt, `mount` has no idea about anything of the sort, and if I try `umount` it says the directory is not mounted. The system logs say `s3fs: ###curlCode: 51 msg: SSL peer certificate or SSH remote key was not OK`, but **how do I find out which SSL certificate it is talking about, or in what way it was not OK?** Firefox has no complaints when I connect to that URL, but it also redirects me to https://aws.amazon.com/s3/ .
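To see which certificate the endpoint actually presents (and what curl thinks of it), a sketch using `openssl` and `curl`; the bucket name is the placeholder from the mount command above:

    # Show subject/issuer/validity of the certificate served for the bucket's
    # virtual-hosted endpoint (bucket name assumed from the mount command).
    openssl s_client -connect s3.amazonaws.com:443 \
        -servername mybucketname.s3.amazonaws.com </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates

    # Let curl report its own verification result verbosely:
    curl -v https://mybucketname.s3.amazonaws.com/ -o /dev/null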
**How do I get s3fs to actually work?**
user
(29991 rep)
Sep 20, 2013, 05:30 PM
• Last activity: Feb 1, 2023, 02:19 PM
2
votes
1
answers
2749
views
Mount specific folder in bucket using s3fs in /etc/fstab
Using S3FS, a specific folder in a bucket can be mounted using `s3fs bucket:/path/to/folder`. This works fine for me.
I'd like to mount in the same way using an entry in `/etc/fstab`, but can't figure out how to specify the path to the folder. Mounting the entire bucket works just fine:
bucket local_path fuse.s3fs _netdev,allow_other,passwd_file=/etc/password-s3fs 0 0
However, specifying the folder path results in the bucket being unrecognized:
bucket:/folder local_path fuse.s3fs _netdev,allow_other,passwd_file=/etc/password-s3fs 0 0
s3fs_check_service(3711): bucket not found - result of checking service.
Is there a different way of specifying a path to a folder?
leecbaker
(121 rep)
Nov 17, 2019, 09:59 PM
• Last activity: Mar 21, 2021, 05:41 AM
2
votes
0
answers
1001
views
FUSE hangs trying to mount network filesystem on login
I am using Fedora 23, and I have autofs set up to automount some s3fs FUSE filesystems when directories under `/mnt/s3` are accessed. Since I set this up, I always experience an additional delay upon logging in: from looking at `journalctl`, it looks like something immediately activates one of those s3fs filesystems as soon as I log in, but this fails and times out because the wireless network connection is not yet up.
How can I find out what is causing this unwanted mounting, and/or disable it?
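One way to pin down the culprit (a sketch, assuming the audit daemon is available and the mountpoints live under `/mnt/s3` as described) is to put an audit watch on the directory and see which process touches it at the next login:

    # Record every access to the automount root, keyed for easy searching
    sudo auditctl -w /mnt/s3 -p rwxa -k s3fs-probe

    # ...log out and back in, then see which executables triggered the accesses:
    sudo ausearch -k s3fs-probe -i | grep -E 'exe=|comm=' | sort | uniq -c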
Robin Green
(1299 rep)
Feb 25, 2016, 08:46 AM
• Last activity: Feb 8, 2019, 09:16 AM
0
votes
1
answers
401
views
Installing s3fuse on Ubuntu ( bitnami ec2)
I am installing s3fs-fuse on Ubuntu 14.04 (Bitnami, EC2) because I want to mount an S3 bucket.
I installed the required dependencies successfully by running the following command:

    apt-get install build-essential libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support automake libtool

The rest of the procedure is as follows:
    cd /tmp
    wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.77.tar.gz
    mv v1.77.tar.gz s3fs-fuse-1.77.tar.gz
    tar zxvf s3fs-fuse-1.77.tar.gz
    cd s3fs-fuse-1.77/
    ./autogen.sh
    ./configure --prefix=/usr
    make
    make install

While running `make` (the second-to-last step) I get the following error:
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_sasl_bind@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_get_dn_ber@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ber_sockbuf_add_io@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_unbind_ext@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_get_attribute_ber@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_parse_result@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_set_option@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_abandon_ext@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_msgfree@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_result@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_search_ext@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_get_option@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ber_memfree@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_memfree@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_pvt_url_scheme2proto@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_next_message@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ber_free@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_err2string@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_init_fd@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_msgtype@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_free_urldesc@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_url_parse@OPENLDAP_2.4_2'
    /usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libcurl.so: undefined reference to `ldap_first_message@OPENLDAP_2.4_2'
    collect2: error: ld returned 1 exit status
    make: *** [s3fs] Error 1
    make: Leaving directory `/tmp/s3fs-fuse-1.80/src'
    make: *** [all-recursive] Error 1
    make: Leaving directory `/tmp/s3fs-fuse-1.80'
    make: *** [all] Error 2

Please help.
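The failure comes from the linker rather than from s3fs itself, so a first diagnostic step (a sketch; the library path is the usual Ubuntu 14.04 location and may differ on a Bitnami image) is to check which libcurl is being linked and whether it resolves its OpenLDAP dependency:

    # Which libcurl/libldap libraries does the dynamic linker know about?
    ldconfig -p | grep -E 'libcurl|libldap|liblber'

    # Does the libcurl the compiler picked up actually pull in an LDAP library?
    ldd /usr/lib/x86_64-linux-gnu/libcurl.so 2>/dev/null | grep -i ldap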
user175609
(1 rep)
Jun 17, 2016, 06:10 PM
• Last activity: Feb 8, 2019, 09:15 AM
1
votes
1
answers
3616
views
How to test speed of an s3 bucket mounted via s3fs-fuse?
I have an `s3` share that's mounted via s3fs-fuse and would like to run some speed tests to compare throughput on DreamHost's DreamObjects vs. Amazon S3. Everything is mounted and working just fine (`s3fs testbucket ~/mnt/test -o passwd_file=/path/to/passwd-s3fs -o url=http://objects-us-west-1.dream.io`), but traditional tests like `dd` and `hdparm` just don't work out.
Any recommendations on running a successful speed test on a block storage device in such a scenario?
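Since an s3fs mount is a FUSE filesystem rather than a block device, `hdparm` does not apply; a rough throughput probe can still be made by timing plain reads and writes through the mountpoint. A sketch, reusing the mountpoint from the question and an arbitrary 256 MiB test size:

    # Write test: push 256 MiB through the mount and force it to be flushed
    dd if=/dev/zero of=~/mnt/test/speedtest.bin bs=1M count=256 conv=fsync

    # Read test: drop the page cache first so the data really comes over the network
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    dd if=~/mnt/test/speedtest.bin of=/dev/null bs=1M

    # Clean up
    rm ~/mnt/test/speedtest.bin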
ylluminate
(686 rep)
Apr 11, 2017, 02:29 AM
• Last activity: Jan 17, 2019, 08:36 AM
2
votes
1
answers
688
views
How can I set up a common SFTP directory for certain users, mounted with s3fs?
I'm looking to set up a common directory which is writable via SFTP by a certain set of users. This set of users should be able to access only this directory, and only via SFTP.
I have successfully set this up, using the following sshd configuration:
Subsystem sftp internal-sftp
Match Group sftponly
ChrootDirectory /mnt/filebucket
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no
PasswordAuthentication yes # temporary for testing
My users are part of the `sftponly` group; they can log in and are successfully chrooted into the directory.
The catch, though, is that I want to mount an S3 bucket (using s3fs) in this /mnt/filebucket directory. Once I mount it, the permissions on the directory change from `drwxr-xr-x 2 root root` (which sshd accepts for a chroot) to `drwxrwxrwx 1 root root` (which sshd does *not* accept).
Is there something about how I'm mounting this directory that is causing this issue?
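One layout that might sidestep the conflict (a sketch only; the mount options are assumptions, not taken from the question) is to keep the chroot directory itself root-owned and mount the bucket one level below it, trimming the exposed permissions with s3fs's `umask` option:

    # Keep the chroot itself owned by root and not group/world-writable (sshd requirement)
    mkdir -p /mnt/filebucket/data
    chown root:root /mnt/filebucket
    chmod 755 /mnt/filebucket

    # Mount the bucket below the chroot; umask trims the default wide-open mode
    s3fs mybucket /mnt/filebucket/data \
        -o allow_other,umask=0022,passwd_file=/etc/passwd-s3fs

The `sftponly` users would then write to the `data` subdirectory inside their chroot, while `ChrootDirectory /mnt/filebucket` keeps the ownership sshd insists on.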
Mark
(193 rep)
May 22, 2017, 06:06 PM
• Last activity: May 23, 2017, 11:41 AM
2
votes
2
answers
2627
views
s3fs refuses to compile on CentOS 7, why's it not finding Fuse?
The FUSE packages that are available by default on CentOS 7.3 are a bit dated. The compilation process for FUSE 3 and s3fs should be pretty straightforward. FUSE compiles and installs fine:
mkdir ~/src && cd src
# Most recent version: https://github.com/libfuse/libfuse/releases
wget https://github.com/libfuse/libfuse/releases/download/fuse-3.0.0/fuse-3.0.0.tar.gz
tar xvf fuse-3.0.0.tar.gz && cd fuse-3.0.0
./configure --prefix=/usr
make
make install
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64
ldconfig
modprobe fuse
pkg-config --modversion fuse
No problems there... Things show up where they should, it seems.

`$ ls /usr/lib`:

> libfuse3.a
> libfuse3.la
> libfuse3.so
> libfuse3.so.3
> libfuse3.so.3.0.0
> pkgconfig
> udev

`$ ls /usr/local/lib/pkgconfig/`:

> fuse3.pc

`$ which fusermount3`:

> /usr/bin/fusermount3

So I proceed to install `s3fs`:
cd ~/src
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure --prefix=/usr
And then every time, I hit this:
...
configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6) were not met:
No package 'fuse' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
Any idea why `s3fs` is not finding FUSE properly?
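Note that the configure check in the error output asks pkg-config for a package literally named `fuse` (FUSE 2.x), while the build above installed `fuse3.pc`; a quick sanity check of what pkg-config can see (a sketch, with the search path widened to the usual pkgconfig directories; the original export pointed at `/usr/lib64` rather than `/usr/lib64/pkgconfig`):

    # What FUSE packages does pkg-config currently see?
    pkg-config --list-all | grep -i fuse

    # Widen the search path to real pkgconfig directories and re-test
    export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/local/lib/pkgconfig
    pkg-config --exists fuse && echo "fuse found" || echo "fuse still not found"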
ylluminate
(686 rep)
Feb 19, 2017, 12:44 AM
• Last activity: May 3, 2017, 08:03 AM
2
votes
1
answers
839
views
Incrontab doesn't detect modifications on a s3fs mount
This is my `incrontab` line:

    /srv/www IN_MODIFY,IN_ATTRIB,IN_CREATE,IN_DELETE,IN_CLOSE_WRITE,IN_MOVE rsync --quiet --recursive --links --hard-links --perms --acls --xattrs --owner --group --delete --force /var/www_s3/ /var/www
`/var/www_s3/` is an s3fs mount. However, the command only gets kicked off when a file is modified manually; nothing happens when a file is changed or added on S3.
Is there a way to get incrontab to detect these changes?
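Changes made directly on S3 never pass through the local kernel, so inotify (and therefore incron) has nothing to react to on the s3fs side; a hedged fallback is to run the same rsync periodically from cron instead. A sketch reusing the paths and flags from the incrontab line, with an arbitrary five-minute schedule:

    # crontab entry (every five minutes): same rsync invocation as the incrontab line
    */5 * * * * rsync --quiet --recursive --links --hard-links --perms --acls --xattrs --owner --group --delete --force /var/www_s3/ /var/www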
ddario
(763 rep)
Nov 12, 2013, 12:32 AM
• Last activity: Nov 12, 2013, 11:16 PM