
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
0 answers
63 views
Overriding delete operation inside a mounted directory as moving operation
Suppose a mounted directory /mnt/rclone. The problem is I have insufficient permission to perform, e.g.
rm -rf /mnt/rclone/file.txt
Outputting:
rm: cannot remove '/mnt/rclone/file.txt': Input/output error
It seems like a generic error, but I know the cause: I don't have permission to perform delete operations. Not only remove operations, but also modification operations, so when using nano or vim I still get such an error. Note that I'm allowed to perform move or copy operations!

So, I expect there is a wrapper that wraps /mnt/rclone as /mnt/rclone_wrapper. When I perform a command like modifying the text in nano /mnt/rclone_wrapper/file.txt, behind the scenes it works like this:

1. Copy /mnt/rclone/file.txt to local storage (e.g., /tmp/file.txt)
2. Move /mnt/rclone/file.txt to /mnt/rclone/.trash/
3. Move /tmp/file.txt to /mnt/rclone/file.txt

Is there any way to do this?

Clarification:

1. I use FUSE, since rclone uses it.
2. I mounted it to a remote (Google Drive).
3. Note that I have root access. The permission issue comes from the remote, which is Google Drive.
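For what it's worth, the three steps can be sketched as a small shell function. This is only a sketch under assumptions (the .trash layout, a flat file name, $EDITOR): a transparent /mnt/rclone_wrapper would need a FUSE overlay of its own, but the mechanics are the same.

```shell
#!/bin/sh
# Sketch of the copy / trash / replace dance described above.
# MOUNT and the .trash location are assumptions; adjust as needed.
MOUNT=${MOUNT:-/mnt/rclone}

edit_via_move() {
    rel=$1                                 # path relative to $MOUNT, e.g. file.txt
    tmp=$(mktemp)
    cp "$MOUNT/$rel" "$tmp"                # 1. copy the remote file to local storage
    ${EDITOR:-nano} "$tmp"                 # edit the local copy
    mkdir -p "$MOUNT/.trash"
    mv "$MOUNT/$rel" "$MOUNT/.trash/$rel"  # 2. move the original into .trash (allowed)
    mv "$tmp" "$MOUNT/$rel"                # 3. move the edited copy back (allowed)
}
```

Note that this only handles flat file names; a path containing subdirectories would need a matching mkdir -p under .trash first.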
Muhammad Ikhwan Perwira (319 rep)
Jun 9, 2025, 06:31 PM • Last activity: Jun 10, 2025, 11:36 AM
1 votes
1 answers
2247 views
ERROR : : error reading source directory: directory not found on linux when copying with rclone
I am trying to copy from Google Drive to my remote with rclone on Linux using

$ rclone copy name:a/b/c/myfolder/. /home/b/c/myfolder

and get:

ERROR : : error reading source directory: directory not found

With the same command line I was able to copy some other folders in the same directory (c), but for some folders I get this error. When I do

$ rclone lsd name:a/b/c

myfolder is listed. I have also tried letting rclone copy the entire folder itself:

$ rclone copy name:a/b/c/myfolder /home/b/c/myfolder

and get the same error. How can I fix this bug? Thanks
kutlus (375 rep)
Jan 27, 2019, 03:38 AM • Last activity: Apr 26, 2025, 01:02 PM
0 votes
0 answers
62 views
Where did my disk space go and why can't I analyze it?
I am using ZorinOS Lite, mainly for rclone. The PC's main job is to copy/sync files from one cloud to another. A few very important business applications, such as the tabletop simulator, are also installed. Now I see that my 240GB SSD is already almost full. Using sudo baobab I can see that most space is taken up by the /home/pc directory, but I cannot analyze much further. I could imagine that rclone has something to do with my problem, but I can't seem to get any more information. Here is the result of the sudo baobab command:

[screenshot]

Also, using sudo du -h /home/pc | sort -hr | head did not give me any useful information about where the majority of my disk space has gone.

[screenshot]

Now I would like to know what the exact problem is and how to solve it (delete temporary files, maybe created by rclone?). Thanks in advance for any help. I am not very experienced with Linux systems, so easily understandable answers are appreciated.

Edit: Using the command ls -la /home/pc I get the following output:

[screenshot]
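One generic way to narrow this down (a sketch, not specific to rclone): summarize every visible and hidden entry of the directory separately, largest first. rclone's VFS cache, if one is used, typically lives under a hidden directory such as ~/.cache/rclone, though that exact path is an assumption that depends on configuration.

```shell
# Print the size of each visible and hidden entry in a directory, largest first.
# Hidden entries like .cache are easy to miss in a quick glance at du output.
biggest() {
    du -sh "$1"/* "$1"/.[!.]* 2>/dev/null | sort -hr | head -n 15
}

# e.g.: biggest /home/pc
```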
owmal (101 rep)
Dec 5, 2023, 04:12 PM • Last activity: Mar 18, 2025, 12:17 PM
1 votes
1 answers
256 views
How to troubleshoot systemd automount error?
I have written the following automount unit for systemd, /etc/systemd/user/home-federico-Cloud-unipg.automount:

[Unit]
Description="automount"

[Automount]
Where=/home/federico/Cloud/unipg

[Install]
WantedBy=default.target

The mount unit is /etc/systemd/user/home-federico-Cloud-unipg.mount:

[Unit]
Description="mount"

[Mount]
Type=rclone
What=unipg:
Where=/home/federico/Cloud/unipg
Options=vfs-cache-mode=full,config=/home/federico/.config/rclone/rclone.conf,cache-dir=/var/rclone

After reloading the systemd user daemon, I run the command

systemctl --user enable --now home-federico-Cloud-unipg.automount

and I get the following error in systemctl status:

home-federico-Cloud-unipg.automount: Failed with result 'resources'.
Failed to set up automount "automount".

The command journalctl -xe is not helpful, since it only says:

A start job for unit UNIT has finished with a failure.

The mount unit itself is working, because if I run systemctl --user start home-federico-Cloud-unipg.mount I get the storage correctly mounted, so the problem is somewhere in the automount part. Moreover, if I repeat the process as a system instance instead of a user instance (replacing default.target with multi-user.target) I get the thing to work, even though the mountpoint is not accessible to the user. Can somebody help me troubleshoot this?
Fede Rico (11 rep)
Jan 14, 2025, 12:54 PM • Last activity: Jan 15, 2025, 10:05 AM
0 votes
0 answers
1102 views
mounting Proton drive with rclone
I am trying to mount a Proton Drive using rclone. I was following the instructions here: mounting proton drive using rclone, but I am stumped when I get to "Type of storage to configure" and do not know what to do. Specifically, it is unclear to me what the

[snip]
XX / Proton Drive
\ "protondrive"
[snip]

portion should be, or how it should be entered. Any suggestions, or something else to try, would be very helpful here! Just to be clear, I tried the following and got this far:

$ rclone config
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Enter name for new remote.
name> myproton

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier \ (fichier)
 2 / Akamai NetStorage \ (netstorage)
 3 / Alias for an existing remote \ (alias)
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others \ (s3)
 5 / Backblaze B2 \ (b2)
 6 / Better checksums for other remotes \ (hasher)
 7 / Box \ (box)
 8 / Cache a remote \ (cache)
 9 / Citrix Sharefile \ (sharefile)
10 / Combine several remotes into one \ (combine)
11 / Compress a remote \ (compress)
12 / Dropbox \ (dropbox)
13 / Encrypt/Decrypt a remote \ (crypt)
14 / Enterprise File Fabric \ (filefabric)
15 / FTP \ (ftp)
16 / Google Cloud Storage (this is not Google Drive) \ (google cloud storage)
17 / Google Drive \ (drive)
18 / Google Photos \ (google photos)
19 / HTTP \ (http)
20 / Hadoop distributed file system \ (hdfs)
21 / HiDrive \ (hidrive)
22 / ImageKit.io \ (imagekit)
23 / In memory object storage system. \ (memory)
24 / Internet Archive \ (internetarchive)
25 / Jottacloud \ (jottacloud)
26 / Koofr, Digi Storage and other Koofr-compatible storage providers \ (koofr)
27 / Linkbox \ (linkbox)
28 / Local Disk \ (local)
29 / Mail.ru Cloud \ (mailru)
30 / Mega \ (mega)
31 / Microsoft Azure Blob Storage \ (azureblob)
32 / Microsoft Azure Files \ (azurefiles)
33 / Microsoft OneDrive \ (onedrive)
34 / OpenDrive \ (opendrive)
35 / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH) \ (swift)
36 / Oracle Cloud Infrastructure Object Storage \ (oracleobjectstorage)
37 / Pcloud \ (pcloud)
38 / PikPak \ (pikpak)
39 / Put.io \ (putio)
40 / QingCloud Object Storage \ (qingstor)
41 / Quatrix by Maytech \ (quatrix)
42 / SMB / CIFS \ (smb)
43 / SSH/SFTP \ (sftp)
44 / Sia Decentralized Cloud \ (sia)
45 / Sugarsync \ (sugarsync)
46 / Transparently chunk/split large files \ (chunker)
47 / Uloz.to \ (ulozto)
48 / Union merges the contents of several upstream fs \ (union)
49 / Uptobox \ (uptobox)
50 / WebDAV \ (webdav)
51 / Yandex Disk \ (yandex)
52 / Zoho \ (zoho)
53 / premiumize.me \ (premiumizeme)
54 / seafile \ (seafile)
Storage>

Here, I tried putting in

[snip]
XX / Proton Drive
\ "protondrive"
[snip]

and several other things, but I could not get it to accept them. So clearly, I am not understanding these instructions correctly. I am wondering what should be put in here. Thanks a bunch for any help!

Update: The following are the details of my rclone:

$ rclone --version
rclone v1.67.0
- os/version: fedora 41 (64 bit)
- os/kernel: 6.12.6-200.fc41.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.1
- go/linking: dynamic
- go/tags: none
user3236841 (137 rep)
Jan 11, 2025, 11:04 PM • Last activity: Jan 12, 2025, 12:44 AM
1 votes
1 answers
4192 views
Cannot mount rcloned drive because of FUSE error
I wanted to mount my rclone drive. When I try to mount it using this command:

rclone mount --allow-other Webseries: /webseries

I get the following error:

2022/04/28 21:59:46 mount helper error: fusermount: fuse device not found, try 'modprobe fuse' first
2022/04/28 21:59:46 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

I want to mount it and have referred to many threads related to this.

**What I tried**

* I tried
whereis modprobe
Output is :
modprobe: /usr/lib/modprobe.d
* I have tried running
modprobe fuse
It responds with
bash: modprobe: command not found
I feel like FUSE isn't installing; I can't find any file related to fuse. I installed fuse using sudo apt-get install fuse and it gets installed successfully.

**Kindly refer to the log.** (Logs were posted on pastebin.)

**I'm running Ubuntu 20.04 on Docker**, and it seems like Docker doesn't like FUSE very much. I even tried using google-drive-ocamlfuse, but the VNC RDP disconnects while opening the browser for Google authentication.
Devansh Shrivastava (39 rep)
Apr 28, 2022, 10:06 PM • Last activity: Aug 30, 2024, 04:00 PM
0 votes
0 answers
26 views
Help diagnose rclone beginner mistake
While starting to learn about rclone, I made the mistake of running the following command, as a test, from my Ubuntu home directory:
sudo rclone sync /media/foo/bar .
My assumption was that a new directory called bar would appear, but I know now that rclone sync would instead try to create a copy of the *contents* of /media/foo/bar at the second directory specified (including by deleting files). While it was running, I was lucky to at least be concerned by the number of warning messages (relating to links within my git working directories), and so I hit CTRL+C to stop.

I came to realise that for my usage requirements, sudo is not required; and I understand that the sync option (which is the option I want) should be used with care, since the command's second/target directory may have files/directories deleted.

As for now, after getting rclone working nicely, I realised my earlier experiment had some unwanted side-effects. I first noticed that the .git directories of all of my git repos were gone. Then, that ~/.git-credentials was gone, and that my browser settings and passwords were gone. The basic directory structure seems unchanged. As I start to recover, can anyone advise as to which files (or directories) I should expect to be missing?
user7543 (274 rep)
Aug 14, 2024, 11:31 AM
1 votes
0 answers
50 views
rclone group permissions not working
I'm using rclone to mount Google Drive, but the permissions are completely broken. I'm using Debian 12.
-> sudo namei -l /mnt/cloud/gdrive/gd1/3D
f: /mnt/cloud/gdrive/gd1/3D
drwxr-xr-x root root   /
drwxr-xr-x root root   mnt
drwxr-xr-x root root   cloud
drwxr-xr-x root root   gdrive
drwxrwx--- root gdrive gd1
drwxrwx--- root gdrive 3D
-> sudo eza --icons -lg --time-style "+%s %Y-%m-%d %H:%M"
drwxrwx--- - root 937400039 1719190094 2024-06-23 17:48  gd1
-> eza --icons -lg --time-style "+%s %Y-%m-%d %H:%M"
[./gd1: Permission denied (os error 13)]
-> id
uid=937400003(irl_name) gid=937400003(irl_name) groups={19 others excluded for space},937400039(gdrive)
autofs map:
gd1 -fstype=rclone,rw,uid=0,gid=937400039,dir-perms=0770,file-perms=0770,allow-non-empty,umask=000,fast-list,vfs-cache-mode=writes,config=/etc/rclone/gd/rclone.conf,cache-dir=/var/cache/rclone :gd:
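One thing worth checking first (an assumption, not a confirmed diagnosis): FUSE by default denies access to every user other than the one who performed the mount (root here), regardless of what the POSIX group bits say, unless the mount is made with allow-other. If that is the cause, the map entry would need the option added and /etc/fuse.conf would need user_allow_other enabled, roughly like this:

```
# /etc/fuse.conf must contain the line: user_allow_other
gd1 -fstype=rclone,rw,allow-other,uid=0,gid=937400039,dir-perms=0770,file-perms=0770,allow-non-empty,umask=000,fast-list,vfs-cache-mode=writes,config=/etc/rclone/gd/rclone.conf,cache-dir=/var/cache/rclone :gd:
```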
BPplays (11 rep)
Jun 24, 2024, 01:00 AM • Last activity: Jun 24, 2024, 10:45 AM
3 votes
3 answers
376 views
Strip filesize from rclone file list
I use rclone and I want to have the file list without the size of the files mentioned. I can't find how to do this in rclone, so I thought of stripping it with awk or something like that. My output looks like this:
59183070 fileserver/transfer_kimberly_2022-12-18_0558 (1).zip
     3690 fileserver/transfer_kimberly_2022-12-18_0558 (1).zip - Shortcut.lnk
 35961190 fileserver/transfer_2023-06-27_0814.zip
  7803667 fileserver/woodproject.zip
7437905920 Them/Data/Before_20230526132130642.FDB
1064525824 Them/Data/Updating_20220705231152059.FDB
1064525824 Them backup/Data/Updating_20220706231124156.FDB
1064525824 Them backup/Data/Updating_20220705231152059.FDB
1064525824 Them backup/Data/Updating_20220706231124156.FDB
  7004362 test.zip
  7004362 test (1).zip
  7803667 37939 37/Data/Updating_20220706231124156.FDB
  7803667 37939/Data/Updating_20220706231124156.FDB
The first number on the left is the filesize. I want this output:
fileserver/transfer_kimberly_2022-12-18_0558 (1).zip
fileserver/transfer_kimberly_2022-12-18_0558 (1).zip - Shortcut.lnk
fileserver/transfer_2023-06-27_0814.zip
fileserver/woodproject.zip
Them/Data/Before_20230526132130642.FDB
Them/Data/Updating_20220705231152059.FDB
Them backup/Data/Updating_20220706231124156.FDB
Them backup/Data/Updating_20220705231152059.FDB
Them backup/Data/Updating_20220706231124156.FDB
test.zip
test (1).zip
37939 37/Data/Updating_20220706231124156.FDB
37939/Data/Updating_20220706231124156.FDB
I thought about stripping everything to the left of the last space that comes before the first /, but the spaces in some directory and file names make that complicated. Help is appreciated.
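Since the size is always the first whitespace-delimited field, and everything after the single space following it is the path (spaces in names included), it is enough to delete the leading blanks, the run of digits, and one separating space. A sed sketch wrapped in a function (as an aside, rclone lsf prints paths without sizes, which may remove the need for stripping at all; worth checking):

```shell
# Strip the leading size column: optional blanks, a run of digits, one space.
# The rest of the line, including any spaces, is kept as the path.
strip_size() {
    sed -E 's/^[[:space:]]*[0-9]+ //'
}

# e.g.: rclone ls remote: | strip_size
```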
unixcandles (87 rep)
Mar 12, 2024, 07:58 AM • Last activity: Mar 13, 2024, 11:54 AM
1 votes
1 answers
2755 views
Job status after ssh session stopped on MobaXterm
I am copying some large datasets to Google Drive using rclone sync in Linux on MobaXterm. I submit the job as below:

-cpu:~$ nohup rclone sync /path_to_source/. /path_to_destination &
[1] 16310

and when I do 'jobs' to see the job status, it is running:

-cpu:~$ jobs
[1]- Running    nohup rclone sync /path_to_source /path_to_destination &

However, maybe a couple of hours later the session stops for some reason:

Session stopped
- Press <Return> to exit tab
- Press R to restart session
- Press S to save terminal output to file
Server unexpectedly closed network connection

So I restart the session, log in, and do 'jobs'; nothing is shown:

-cpu:~$ jobs
-cpu:~$

I know that the job is not completed yet, because I used nohup; also, when I run the same sync command line above, it is still copying. After I restart the session and use 'jobs', why can't I see the jobs running? Or how can I see them, possibly with job IDs? Thanks
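As a side note on the mechanics: jobs only lists jobs started by the current shell, so a fresh login after a dropped session always shows an empty list even while the nohup'd process lives on. Searching the process table finds it again. A sketch using a long sleep as a stand-in for the rclone sync, with a PID file name that is purely illustrative:

```shell
# Start a detached long-runner, as `nohup rclone sync ... &` would,
# and remember its PID so later sessions can find it.
nohup sleep 30 >/dev/null 2>&1 &
echo $! > /tmp/sync_job.pid

# Later, from a brand-new shell where `jobs` shows nothing:
ps -p "$(cat /tmp/sync_job.pid)" -o pid=,etime=,args=
# or search by command line:
pgrep -f 'sleep 30'
```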
kutlus (375 rep)
Mar 24, 2019, 01:05 AM • Last activity: Mar 12, 2024, 10:59 PM
0 votes
1 answers
453 views
Is it possible to mount google drive in vifm using rclone?
I can mount my google drive using `rclone mount gdrive: localfolder`. I also know how to automatically mount a remote file system through `sshfs` within `vifm`. But is it possible to use `rclone` to mount a cloud drive within `vifm` automatically? I've tried to add the following line in `vifmrc`: fi...
I can mount my google drive using rclone mount gdrive: localfolder. I also know how to automatically mount a remote file system through sshfs within vifm. But is it possible to use rclone to mount a cloud drive within vifm automatically? I've tried to add the following line in vifmrc: filetype *.drive FUSE_MOUNT2|rclone %PARAM %DESTINATION_DIR and create a file named google.drive with the following line in it: mount gdrive: When I try to open the file google.drive, vifm displays the message of trying to mount and then hangs there forever. To be more general, is there a generic way in vifm to handle all kinds of remote mounting programs?
Jing (339 rep)
Apr 8, 2020, 11:22 AM • Last activity: Mar 12, 2024, 10:57 PM
0 votes
0 answers
360 views
Running backup script at shutdown (using rclone utility)
**General**: I want to run a back-up script during shutdown (not reboot). I've tried tons of systemd service configurations, but none of them works.

**Aim**: When the PC is shutting down I want to perform a back-up to cloud storage using a bash script and the rclone utility. Syncing may take some time (up to several minutes) and it requires networking and a user being logged in.

**Question**: What is the appropriate .service file structure for my bash script? Right now I have something like this and it doesn't work at all; the script doesn't run at shutdown.

[Unit]
Description=Syncing with MEGA cloud storage
DefaultDependencies=no
Conflicts=reboot.target
After=network-online.target
Before=shutdown.target halt.target poweroff.target

[Service]
User=yevhenii
Type=oneshot
ExecStart=/bin/true
ExecStop=/home/yevhenii/Projects/ubuntu-scripts/mega_sync_pc.sh
RemainAfterExit=true
TimeoutSec=0
StandardOutput=file:/home/yevhenii/Projects/ubuntu-scripts/output.txt
StandardError=file:/home/yevhenii/Projects/ubuntu-scripts/error.txt

[Install]
WantedBy=shutdown.target poweroff.target halt.target

P.S. I'm using Ubuntu 19.10 & systemd 242
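For comparison, the pattern I have seen work for "run a script at shutdown" is to start the unit at boot, so that it is active and its ExecStop fires when the system goes down, and to enable it into the normal boot target rather than shutdown.target. A hedged sketch; paths and the user are taken from the question, everything else is an assumption to verify on your system:

```ini
[Unit]
Description=Syncing with MEGA cloud storage
# Started at boot; its ExecStop (the backup) then runs during shutdown,
# while network-online.target is still up thanks to the ordering below.
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
User=yevhenii
ExecStart=/bin/true
ExecStop=/home/yevhenii/Projects/ubuntu-scripts/mega_sync_pc.sh
TimeoutStopSec=600

[Install]
# Enable into the boot target, not shutdown.target, so the unit is
# actually running when shutdown begins.
WantedBy=multi-user.target
```

Skipping the backup on reboot is a separate problem; one approach is to have the script inspect systemctl list-jobs and bail out if a reboot.target job is queued, but I would treat that as an assumption to test.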
Yevhenii Nadtochii (121 rep)
Apr 12, 2020, 02:37 PM • Last activity: Mar 12, 2024, 10:55 PM
2 votes
2 answers
1694 views
What does it mean 'exit 1' for a job status after rclone sync
I am copying some large datasets to Google Drive using rclone in Linux on MobaXterm. First, I copy the dataset using:

-cpu:~$ nohup rclone copy /path_to_source/. /path_to_destination &

Once copying is completed, I use sync to make sure everything is copied:

-cpu:~$ nohup rclone sync /path_to_source/. /path_to_destination &

Now when I check the job status using

ps -ef | grep rclone

for one of the jobs it gives:

+ Exit 1    nohup rclone sync /path_to_source/. /path_to_destination &

I was expecting to see 'Done' instead of 'Exit 1'. What does this mean? Does it mean the sync is unsuccessful? If so, what would be the reason?
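That "+ Exit 1" line is the shell's job-status notification: the process's exit status was 1, and rclone exits non-zero when a run finished with errors (0 is reserved for success, and as far as I know 1 is its generic catch-all error code), so the sync did report at least one failure. Testing the status explicitly and keeping a log makes this easier to see; a sketch with a stand-in command, since the paths above are placeholders:

```shell
# Run a command and report its exit status; in real use the command
# would be: rclone sync /path_to_source/. /path_to_destination --log-file=sync.log
run_sync() {
    "$@"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "sync finished OK"
    else
        echo "sync FAILED with exit status $status (check the log)"
    fi
    return "$status"
}
```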
kutlus (375 rep)
Apr 1, 2019, 07:32 PM • Last activity: Mar 12, 2024, 10:55 PM
0 votes
1 answers
1217 views
pause during copying files with rclone on linux
I am copying some files from Google Drive to my remote using rclone in Linux. It has been an hour or so, and it looks like this on the terminal:

[screenshot]

On average the copy time is much less than an hour for similar folders in the drive. I understand that it found duplicates, but I don't know why it just stays like this. Is it still copying, or does this mean something different?
kutlus (375 rep)
Jan 29, 2019, 08:51 PM • Last activity: Mar 12, 2024, 10:55 PM
6 votes
0 answers
165 views
does rclone 'sync' work with google drive to remove files?
I am trying to use rclone to sync a directory to my Google Drive. The code I am using is something like:

rclone sync ~/Desktop/myfolder mygoogledrive:myfolder

This works to upload the contents just fine. But when I remove a file from my local "myfolder", the corresponding file in Google Drive stays undeleted when I run the sync command. So clearly I expect the file to be deleted by rclone, but should I? Is this a supported feature for Google Drive? If so, does anyone have any thoughts on why it's not working? Am I missing a flag?
WillD (193 rep)
Jun 30, 2018, 04:41 AM • Last activity: Mar 12, 2024, 10:54 PM
2 votes
0 answers
8740 views
A fast way to sync local files with Proton Drive? (here exploring rclone)
I want to create a sync system between the newly available Proton Drive and my local filesystem. However, since it's very new, it's only barely supported by rclone (I'm lucky to be on openSUSE Tumbleweed, because rclone in my other distros doesn't support it), and it doesn't have a sync client for Linux yet. I've considered 3 options:

1. mount the whole drive using rclone mount and work always online; but that's slow and sluggish and requires a constant Internet connection;
2. mount the drive with rclone mount and sync the folders I need using the tool [unison](https://github.com/bcpierce00/unison). But my testing shows that it can take a very long time for the software to browse for changes, even though once the scanning is done it's pretty fast to correct/copy;
3. use rclone bisync. But it's advertised to be in testing and not very stable, and anyway I have the same problem as with unison: it's very slow to browse for changes.

So if anyone has a better idea how to do that ("that" being setting up a sync system where, when I change/add/remove something either on the computer or the drive, the change propagates to the other side immediately, or at least fast, like under a minute's delay), please help!
rclone v1.64.2
- os/version: opensuse-tumbleweed (64 bit)
- os/kernel: 6.5.9-1-default (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: dynamic
- go/tags: none
That's the latest version of rclone. I'm using Proton Drive's cloud storage solution. rclone config redacted gives:
[proton]
type = protondrive
username = my.username
password = XXX
mailbox_password = XXX
client_uid = XXX
client_access_token = XXX
client_refresh_token = XXX
client_salted_key_pass = XXX
replace_existing_draft = true
### Double check the config for sensitive info before posting publicly
Ul Tome (137 rep)
Nov 1, 2023, 06:25 PM • Last activity: Nov 1, 2023, 06:30 PM
0 votes
1 answers
210 views
Starting rclone service via .sh file after network has connected with systemd
I'm trying to get the rclone service to mount a drive as soon as the system has received a network connection on boot/reboot. So far I have all of the mounting working correctly via terminal. I have written a simple .sh file to execute it, which basically is:

#!/bin/sh
! mountpoint -q /home/{user}/{location}/{location} || umount /home/{user}/{location}/{location}
rclone mount {nameofservice}: /home/{user}/{location}/{location} --config /home/{user}/.config/rclone/rclone.conf

Running this in a terminal works as expected. I have followed an online tutorial to get this working after a network connection has been achieved using systemd, and have created the following file with 755 +x permissions in /etc/systemd/system/{nameof.service}:

[Unit]
Description=Starts {nameof.service} rclone service on startup
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User={user}
Group={group}
ExecStart=/home/{user}/{nameofsh}.sh
TimeoutStartSec=5
RemainAfterExit=yes

[Install]
WantedBy=network-online.target

I need it to run as that particular user. Following creation of this file I also ran:

systemctl daemon-reload
systemctl enable {nameof.service}

In the tutorial I expected the enable command to return a message about creating a symlink, which I didn't get; it just returned with a new line, but I didn't think this was major. Regardless, when I run systemctl restart {nameof.service} I get the expected outcome, but not on startup or reboot. At this point I'm not sure where I've gone wrong and would appreciate any help.
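For what it's worth, the pattern I have seen recommended is to enable such a unit into multi-user.target and treat network-online.target only as a Wants/After dependency; a unit whose [Install] section points at network-online.target is not reliably pulled into the boot transaction, which would match the "works on restart, not on boot" symptom. A hedged sketch reusing the placeholders from the question:

```ini
[Unit]
Description=Starts {nameof.service} rclone service on startup
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User={user}
Group={group}
ExecStart=/home/{user}/{nameofsh}.sh
Restart=on-failure
RestartSec=5

[Install]
# Enable into the normal boot target; network-online.target is only
# an ordering/wants dependency, not the install target.
WantedBy=multi-user.target
```

After editing, systemctl daemon-reload followed by systemctl reenable {nameof.service} should now print the symlink-creation message.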
jamesrsg (3 rep)
Jul 9, 2023, 01:28 PM • Last activity: Jul 9, 2023, 02:34 PM
0 votes
1 answers
62 views
rsync does not append some files
I have mounted Dropbox using rclone to /home/user/Dropbox. I'm using rsync to upload the files to Dropbox; I have uploaded around 400GB just using this method.

[screenshot: dropbox capacity]

Today, however, it uploaded some files and stopped with the summary of the transfer (no errors), as if it had completed transferring all the files. However, when I check the source directories, there are some files left. Since I'm using the --append and --remove-source-files flags, there shouldn't be any files left. I tried to copy one of them using the file manager and it shows that the file already exists at the target and prompts to replace. The following is the command I used:
❯ rsync \
-var \
--append-verify \
--remove-source-files \
/media/s1n7ax/92802CA0802C8CB3/Digikam/ \
/home/s1n7ax/Dropbox/Camera\ Album/
sending incremental file list
./
2022-07-23 Wewathenna/
2022-07-23 Wewathenna/Raw/
2022-10-16 Dolukanda/
2022-10-16 Dolukanda/Edited/
2022-10-16 Dolukanda/Raw/
2022-10-16 Dolukanda/Udul's Photos/
2022-11-06 Nuwara Eliya/
2022-11-06 Nuwara Eliya/Raw/
2022-11-20 Peradeniya Botanical Garden/
2022-11-20 Peradeniya Botanical Garden/Raw/
2022-12-13 Pinnawala/
2022-12-13 Pinnawala/Raw/
2023-01-21 Riverston/
2023-01-21 Riverston/Anura's Photos/
2023-03-25 Gatemore/
2023-03-25 Gatemore/Raw/
2023-04-08 Kanawiddagala/
2023-04-08 Kanawiddagala/Raw/

sent 36,726 bytes  received 153 bytes  1,715.30 bytes/sec
total size is 121,892,420,904  speedup is 3,305,198.65
I tried to copy a directory that only contains files too.
❯ rsync \
-var \
--append-verify \
--remove-source-files \
/media/s1n7ax/92802CA0802C8CB3/Digikam/2022-07-23\ Wewathenna/Raw/ \
/home/s1n7ax/Dropbox/Camera\ Album/2022-07-23\ Wewathenna/Raw/
sending incremental file list
./

sent 505 bytes  received 19 bytes  1,048.00 bytes/sec
total size is 1,316,281,020  speedup is 2,511,986.68
Details of one file. Source:
ls -l 1Y3A8976.CR3
-rwxrwxrwx 1 s1n7ax s1n7ax 50107478 Jul 23  2022 1Y3A8976.CR3
Target:
ls -l 1Y3A8976.CR3
-rw-rw-r-- 1 s1n7ax s1n7ax 50107478 Jun 29 20:56 1Y3A8976.CR3
s1n7ax (437 rep)
Jun 29, 2023, 04:41 PM • Last activity: Jun 30, 2023, 06:03 AM
0 votes
0 answers
54 views
Fish shell can't change directory over a Webdav resource through Rclone
Fish can't cd into a folder on a Webdav remote directory through Rclone:
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
drive ) ls
Documents/  Music/           Videos/
Gem/        Misc/            Pictures/
drive ) cd Documents
cd: The directory “Documents” does not exist
drive ) sh
$ cd Documents
$ pwd
/mnt/drive/Documents
$ exit
drive )
Perhaps it's because fish uses a wrapper around cd, or maybe it is a bug; alternatively I may have to disable or rename the cd function. I'm not sure how to address it.
freezr (11 rep)
Jun 14, 2023, 08:28 PM • Last activity: Jun 15, 2023, 04:54 AM
0 votes
0 answers
58 views
Only able to fix rclone certificate error over ethernet (it won't work over wifi) Why?
rclone will run fine for months at a time. The command I run most often is:
rclone sync {{path/to/file_or_directory}} {{remote_name}}:{{path/to/directory}}
Then - _seemingly out of nowhere_ - I run my rclone sync ... command and get the following error:
failed to open source object: certificate has expired or is not yet valid
I always do the same thing, issue the following commands, and the problem usually goes away:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install --reinstall ca-certificates
Occasionally running those three commands above does not work, and I'm forced to take drastic actions:

1. Disable wifi
2. Connect to a router through an ethernet cable
3. Restart the computer
4. Retry all of the commands above (upgrade, reinstall ca-certificates, rclone sync *)

and somehow the ethernet connection fixes everything.

**Why do I occasionally have to use an ethernet cable to make my rclone certificate errors go away?** I don't always carry an ethernet cable on me, and even if I did, it's not always easy to connect to the router that is offering you wifi access. I don't really know the root cause of the problem or how to fix it.
jophuh (123 rep)
May 31, 2023, 03:36 PM