Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
1
answers
2123
views
Need to unzip files and then move to AWS S3 bucket automatically
I have a directory on Ubuntu, for example:
/home/ubuntu/mainfiles/
where only zip files get uploaded daily. I need to unzip these files and move the extracted files to an S3 bucket. The files from each unzipped archive should be moved to S3 at 5-minute intervals. I also need to make sure duplicate files don't get uploaded to S3.
How do I write a script for this?
What I am currently doing is:
* Copy the oldest file from the /home/ubuntu/mainfiles/ directory every 5 minutes with cron and store it in a temp1 directory.
* Then unzip all the files from the temp1 directory and move the extracted files to a temp2 directory.
* Finally, move all files from the temp2 directory to S3.
But I think the above approach might fail because I have to clean the temp1 folder each time. Also, duplicate files can get moved to S3 because I don't know how to get each file name, rename it with a random name, and then move it to S3.
Can someone provide a demo shell script for this with the right approach?
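A minimal sketch of one way to structure this, assuming the AWS CLI for the upload step. Everything here is a placeholder: the bucket name, the directory layout, and the `aws` shell function, which stubs out the real CLI so the sketch runs without credentials. In production the script would first run `unzip -q "$oldest" -d "$WORK"` on the oldest archive in /home/ubuntu/mainfiles and be driven by a 5-minute cron entry. Deduplication records a SHA-256 of every uploaded file, which also gives each object a collision-free name:

```shell
#!/bin/sh
set -e
DEMO=$(mktemp -d)              # throwaway area standing in for real paths
WORK="$DEMO/extracted"         # where unzip would place extracted files
STATE="$DEMO/uploaded.sha256"  # checksums of files already uploaded
BUCKET="s3://example-bucket"   # placeholder bucket
mkdir -p "$WORK"; : > "$STATE"

aws() { echo "stub: aws $*"; } # stub; delete this line to really upload

upload_new_files() {
    # Upload every file under $WORK whose content has not been seen yet.
    # Keying the object name by checksum avoids duplicate uploads even
    # when different archives contain files with the same name.
    find "$WORK" -type f | while IFS= read -r f; do
        sum=$(sha256sum "$f" | cut -d' ' -f1)
        if ! grep -q "^$sum\$" "$STATE"; then
            aws s3 cp "$f" "$BUCKET/$sum-$(basename "$f")"
            echo "$sum" >> "$STATE"
        fi
    done
}

# Demo data: two files with identical content, i.e. one duplicate.
echo "alpha" > "$WORK/a.txt"
echo "alpha" > "$WORK/a-copy.txt"
upload_new_files
upload_new_files               # second cron run: nothing new to upload
wc -l < "$STATE"               # 1 — only one unique checksum recorded
```

Because the state file, not the temp folder, tracks what has been sent, the work directory can be wiped after every run without risking re-uploads.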
Anirban Dutta
(1 rep)
May 28, 2020, 09:57 AM
• Last activity: Jul 10, 2025, 09:02 AM
2
votes
2
answers
2123
views
Slow transfer speed through Samba using software RAID
I have a mini PC (Intel Celeron J4005, 4GB RAM, Intel Gigabit NIC), configured with:
- Ubuntu (5.4.0-81-generic, installed on sda)
- Samba (version 4.11.6-Ubuntu)
- FTP (vsftpd, no encryption)
- RAID5 (mdadm, md0: sdb-sdc-sdd, USB-SATA)
The RAID array is shared via Samba and FTP, but I want to eliminate FTP, since all major clients are Windows machines.
The problem is that I get way slower speeds through Samba share than FTP:
| Device | Method | Read Speed (Mbyte/s, one large file) |
|--------|--------|--------------------------------------|
| md0 | local | ~220 |
| md0 | LAN, FTP | ~115 (network limit) |
| md0 | LAN, Samba | ~48 |
| md0 | LAN, Samba, second run (cached in memory) | ~115 (network limit) |
| sda | LAN, Samba | ~115 (network limit) |
I tried with the default Samba settings and with the current configuration (attached below), but I got the same result. I flushed the cache between tests.
iostat output sample (LAN, Samba, first run):
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
md0 793.00 433408.00 0.00 0.00 0.00 546.54 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 254.00 16768.00 8.00 3.05 14.74 66.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.27 84.80
sdc 171.00 16896.00 93.00 35.23 2.99 98.81 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.32 60.80
sdd 161.00 16640.00 101.00 38.55 11.74 103.35 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.57 96.00
iostat output sample (LAN, FTP, first run):
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
md0 1828.00 292480.00 0.00 0.00 0.00 160.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 458.00 39040.00 153.00 25.04 1.66 85.24 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.18 75.60
sdc 457.00 38976.00 152.00 24.96 1.45 85.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 70.40
sdd 457.00 38976.00 152.00 24.96 1.59 85.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.15 75.20
I have no clue what the problem can be, can someone help me, or at least where I should start investigating?
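One way to narrow this down is to time an uncached sequential read locally, separating the array's speed from the SMB path. A sketch follows; the real test would run as root against a large file on /media/hdd after dropping caches, while the demo below uses a small scratch file so it runs anywhere:

```shell
#!/bin/sh
# On the real machine, run as root against a big file on the array:
#   sync; echo 3 > /proc/sys/vm/drop_caches
#   dd if=/media/hdd/bigfile of=/dev/null bs=1M
# If that reads at ~220 MB/s while Samba only delivers ~48 MB/s, the
# disks are fine and the bottleneck is in the SMB request path.
set -e
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null  # small scratch file
bytes=$(wc -c < "$f")
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1     # throughput summary
rm -f "$f"
```

The iostat samples already hint at this: the per-disk rkB/s under Samba is roughly a third of what FTP achieves, which points at smaller, less sequential read requests rather than a slow array.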
----------
Samba config:
[global]
workgroup = WORKGROUP
min protocol = SMB3
log level = 1
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536 IPTOS_LOWDELAY SO_KEEPALIVE
use sendfile = true
aio read size = 65536
aio write size = 65536
read raw = yes
write raw = yes
getwd cache = yes
acl allow execute always = true
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
[Share]
path = /media/hdd
writable = yes
valid users = myuser
directory mode = 0770
create mode = 0660
RAID array configuration:
/dev/md0:
Version : 1.2
Creation Time : Tue Sep 7 13:19:26 2021
Raid Level : raid5
Array Size : 976441344 (931.21 GiB 999.88 GB)
Used Dev Size : 488220672 (465.60 GiB 499.94 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Sep 9 14:37:52 2021
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : bitmap
Filesystem info:
root@MiniPC:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 371M 12M 360M 3% /run
/dev/sda2 58G 3.4G 55G 6% /
tmpfs 1.9G 12K 1.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 500M 79M 422M 16% /var/cache/apt
tmpfs 500M 0 500M 0% /tmp
tmpfs 500M 0 500M 0% /var/backups
tmpfs 500M 2.2M 498M 1% /var/log
tmpfs 500M 0 500M 0% /var/tmp
/dev/sda1 511M 5.3M 506M 2% /boot/efi
/dev/md0 917G 356G 562G 39% /media/hdd
root@MiniPC:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 59.6G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 59.1G 0 part /
sdb 8:16 0 465.7G 0 disk
└─sdb1 8:17 0 465.7G 0 part
└─md0 9:0 0 931.2G 0 raid5 /media/hdd
sdc 8:32 0 465.8G 0 disk
└─sdc1 8:33 0 465.8G 0 part
└─md0 9:0 0 931.2G 0 raid5 /media/hdd
sdd 8:48 0 465.8G 0 disk
└─sdd1 8:49 0 465.8G 0 part
└─md0 9:0 0 931.2G 0 raid5 /media/hdd
S-Zoli
(21 rep)
Sep 14, 2021, 01:36 PM
• Last activity: Jul 2, 2025, 01:02 AM
1
votes
0
answers
306
views
Ubuntu 20.04 slow file transfer across disks
I'm trying to move files from an SSD to an HDD on Ubuntu 20.04, but the transfer speeds are extremely slow.
After half an hour, only 1.3 GB has been copied.
My research hasn't helped; most solutions I found discuss network or USB issues, which don't apply here.
- SSD: 500GB, ext4, boot drive, 95% used
- HDD: 2TB, ext4, mounted on /media/disk
I attempted several solutions.
- Dropping the RAM cache (the system is using 28.5 GiB of cache out of 31.1 GiB available), but the drop does not work and hangs forever, using this solution
- Using ionice through iotop to increase priority, which helps.
- Using rsync instead of cp
No matter what I try, I only get at best 2.5 MB/s on average. Considering that 99% of the files are 3.5 MB and above, I think the transfer speed should be much faster.
I have a Seagate ST2000DM008 7500rpm 2TB hdd.
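A synchronous write test can rule the HDD itself in or out by establishing its real sequential write speed. A sketch (the output file is a temp file here; on the real system it would live on /media/disk):

```shell
#!/bin/sh
# conv=fdatasync (GNU dd) forces the data to the platters before dd
# exits, so the page cache cannot inflate the reported rate.
set -e
out=$(mktemp)
dd if=/dev/zero of="$out" bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
bytes=$(wc -c < "$out")
rm -f "$out"
```

If the drive sustains well over 100 MB/s here, the 2.5 MB/s figure points elsewhere, e.g. at the nearly full source SSD or many small-file seeks.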
Cédric Vallée
(11 rep)
Dec 5, 2024, 04:28 PM
• Last activity: Jun 25, 2025, 08:31 AM
0
votes
1
answers
1978
views
how to troubleshoot slow usb drive?
TLDR: How do I troubleshoot a slow USB 3.1 device plugged into a laptop?
ISSUE: When I copy (tried GUI and terminal), the first few .iso files copy almost instantly at 300 MB/s+, but then the 3rd/4th start to slow to below 12 MB/s (even if copying one at a time).
HARDWARE:
- Dell XPS 15 9520 (Fedora Linux 37 - Workstation)
- Sandisk Extreme GO USB 3.1 64GB (using Ventoy)
- Dell USB-C to USB-A/HDMI Adapter
- Anker PowerExpand+ 7-in-1 USB-C PD Hub
TRIED:
- Reformatting usb (gparted - exfat).
- Using USB with and without Ventoy installed.
- Using different ports, different adapters/hubs.
- Copying lots of .iso files in one go vs copying one file at a time - waiting until each file fully copied.
Either way, after a few files (around 4 GB) the USB drive becomes very slow. Ejecting the USB drive (via GUI or terminal) can take a long time, but after remounting, fast speeds return. When ejecting via the GUI I get the message "device busy", so now I use the terminal and wait until the command has completed.
DRIVER | PORT DETAILS:
$ udevadm info -q path -n /dev/sdc
/devices/pci0000:00/0000:00:14.0/usb4/4-1/4-1.1/4-1.1:1.0/host1/target1:0:0/1:0:0:0/block/sdc
$ lsusb -t
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 10000M
|__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/1p, 5000M
|__ Port 1: Dev 3, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/12p, 480M
|__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/5p, 480M
|__ Port 5: Dev 6, If 0, Class=, Driver=, 480M
|__ Port 2: Dev 3, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 1: Dev 5, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 1: Dev 9, If 0, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 4: Dev 8, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 3: Dev 11, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
|__ Port 3: Dev 11, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
|__ Port 4: Dev 12, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 4: Dev 13, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 3: Dev 15, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 2: Dev 16, If 0, Class=, Driver=, 12M
|__ Port 6: Dev 4, If 0, Class=Video, Driver=uvcvideo, 480M
|__ Port 6: Dev 4, If 1, Class=Video, Driver=uvcvideo, 480M
|__ Port 6: Dev 4, If 2, Class=Video, Driver=uvcvideo, 480M
|__ Port 6: Dev 4, If 3, Class=Video, Driver=uvcvideo, 480M
|__ Port 9: Dev 7, If 0, Class=Vendor Specific Class, Driver=, 12M
|__ Port 10: Dev 10, If 0, Class=Wireless, Driver=btusb, 12M
|__ Port 10: Dev 10, If 1, Class=Wireless, Driver=btusb, 12M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 20000M/x2
|__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
|__ Port 2: Dev 3, If 0, Class=Vendor Specific Class, Driver=r8152, 5000M
|__ Port 3: Dev 4, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/1p, 480M
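One common cause of this exact pattern (fast at first, then a sharp slowdown after a few GB) is the page cache: the first files land in RAM and are flushed to the stick in the background, and once the kernel's dirty-page limit is reached, copies run at the stick's true write speed. Whether that applies here can be checked by watching the dirty counters during a copy; this is a diagnostic sketch, not a fix:

```shell
#!/bin/sh
# Snapshot of the dirty-page counters; during a copy, run repeatedly,
# e.g.  watch -n1 'grep -E "Dirty|Writeback" /proc/meminfo'
# Large, steady values mean the kernel is throttling writers down to
# the device's real sustained write speed.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

This would also explain the slow eject: "eject" has to wait for the cached gigabytes to finish flushing to the stick.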
Xueshe
(21 rep)
Apr 10, 2023, 12:16 PM
• Last activity: Jun 2, 2025, 01:03 PM
0
votes
3
answers
90
views
What are the options to transfer a package from source machine to target machine over the network with only sudo user login via ssh?
**adm package** folder structure:
/opt/adm
├── bin
├── cli
├── dev
├── mod
├── pkg
└── sys
root@deb4 /opt/adm
# rwx
To change permissions use this command syntax:
$ sudo chmod
USER = root GROUPS = root
PERMISSION | OCTAL | OWNER GROUP NAME
drwxr-xr-x | 755 | root root .
drwxr-xr-x | 755 | root root ..
drwxr-xr-x | 755 | root root bin
drwxr-xr-x | 755 | root root cli
drwxr-xr-x | 755 | root root dev
drwxr-xr-x | 755 | root root mod
drwxr-xr-x | 755 | root root pkg
drwxr-xr-x | 755 | root root sys
In this context **adm package** refers to all the files stored in the folder structure under /opt/adm/. On my development machine the **adm package** contains custom bash scripts, text files, and package setup files in the folders shown above. The rwx script is among them; any user on the host machine can execute it to inspect octal permissions in the present working directory, as shown above.
In general the **adm package** enables users on the host machine to run the stored bash scripts from the command line and to read the stored text files. One folder is for developing and testing new bash scripts before they are transferred to the scripts folder. The idea behind this package is to teach principles of bash setup, bash scripts, and Linux administration, with folders providing script resources and examples, notes, and system configuration notes.
I am not a power user or bash script expert. So the package and scripts might need improvement and feedback from the bash community to develop and evaluate it for teaching purposes. For now the package is private. I may publish it on Github someday.
I have two bash scripts to transfer the whole **adm package** to another Debian-based target machine. These scripts use rsync and tar respectively to show a dry run and then execute the transfer. The tar-based script overwrites all existing files with the same names on the target. The rsync-based script only updates files that have changed or that don't already exist. I prefer the rsync approach. Both scripts work fine provided I have root password login enabled via ssh for the target machine, and I install the needed tool on the target if it does not already exist.
I have a machine that blocks root password login over ssh. I can only log in as a sudo user. What are my options to copy the **adm package** to /opt/adm on the target machine with only sudo user login via ssh? Must I use keys to log in as root? Or is there a way to transfer a .tar file and then untar it using sudo on the target machine?
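One option that needs only a sudo-capable login is streaming a tar archive over ssh and unpacking it with sudo on the far side. This assumes the sudo user may run tar without an interactive password prompt (e.g. a NOPASSWD sudoers rule, or `ssh -t` to get a terminal for the prompt). The sketch below replaces the ssh hop with a plain pipe so it runs locally:

```shell
#!/bin/sh
# Real form (assumption: "user" may run tar via sudo on "target"):
#   tar -C / -czf - opt/adm | ssh user@target 'sudo tar -C / -xzf -'
# Local simulation: the pipe stands in for the ssh hop.
set -e
SRCROOT=$(mktemp -d)   # pretend source machine's /
DSTROOT=$(mktemp -d)   # pretend target machine's /
mkdir -p "$SRCROOT/opt/adm/bin"
echo '#demo' > "$SRCROOT/opt/adm/bin/rwx"
tar -C "$SRCROOT" -czf - opt/adm | tar -C "$DSTROOT" -xzf -
```

Because tar extracts with the invoking user's privileges, the sudo on the receiving end is what lets an ordinary login write into root-owned /opt/adm; no root ssh login is needed.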
SystemTheory
(121 rep)
Oct 4, 2024, 02:13 AM
• Last activity: Apr 6, 2025, 03:40 PM
12
votes
4
answers
47361
views
How to send/upload a file from Host OS to guest OS in KVM?(not folder sharing)
I have to make a configuration file available to a guest OS running on top of the KVM hypervisor.
I have already read about folder-sharing options between host and guest in KVM with qemu and 9p virtio support. I would like to know about any simple procedure that can help with a one-time file transfer from host to guest.
Please let me know how to transfer a file while the guest OS is running, as well as a possible way to make the file available to the guest OS by the time it starts running (like packaging the file and integrating it with the disk image, if possible).
The host OS will be Linux.
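Two commonly used options, sketched as commands (the guest name, IP, and destination path are assumptions, not taken from the question):

```
# 1) While the guest is running and has network + sshd:
#    scp /path/to/config.file user@GUEST_IP:/etc/myapp/
#
# 2) While the guest is SHUT DOWN, inject the file into its disk image
#    with the libguestfs tools (packaged as guestfs-tools on many distros):
#    virt-copy-in -d guestname /path/to/config.file /etc/myapp/
```

virt-copy-in edits the disk image directly, so the file is present the next time the guest boots; it must never be run against a live guest's image.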
MVSR
(273 rep)
Jun 2, 2015, 09:54 AM
• Last activity: Feb 8, 2025, 08:14 PM
0
votes
2
answers
300
views
Transferring a file with ymodem over TCP network
I have a device (RELAY) I am able to telnet into using its IP address from a Linux machine (CLIENT), and I'd like to copy a file from the RELAY to the CLIENT. I have very limited privileges on RELAY, but I can view a list of files on it with `file dir`, and then I should be able to download a file from it using YModem, per the manufacturer, who instructed that the command to send the file is `file read filename.ext`. After typing that it displays
> #000 Ready to send file
but I'm not sure how to set up a method to receive the file on CLIENT, which I have full access to. I've read in this post about receiving using `minicom`, but that appears to be for a serial connection and I'm doing this over a TCP network. I've tried with `sz` as well and tried to use the `--tcp` options, but again I'm not sure if it's simply me not understanding which options to use or if this method isn't correct.
What are possible methods for me to connect to RELAY from CLIENT and tell it I'm ready to receive the file?
(edit) I had a request to edit this question about why it's different than the one I linked to, and the reason is that for my application I'm using telnet over TCP, while the previously linked discussion uses a serial connection.
Tilden
(11 rep)
Oct 15, 2024, 12:42 AM
• Last activity: Oct 28, 2024, 08:12 AM
0
votes
1
answers
532
views
How to force TCP window scaling using SSH?
Inter continent data transfer's speed is maximum 2MB/s.
I checked and the SSH server of my server doesn't even use window scaling, and the window itself is very small, around 22KB...
Flags [S], seq 1433200120, win 29200, options [mss 1420,sackOK,TS val 1451891061 ecr 0,nop,wscale 7], length 0
Flags [S.], seq 3549718494, ack 1433200121, win 65535, options [mss 1460,sackOK,TS val 4214039974 ecr 1451891061,nop,wscale 9], length 0
The server's window is 65535*9 = **590 KB**.
Window scaling is enabled.
$ cat /proc/sys/net/ipv4/tcp_window_scaling
1
And I've already increased all the parameters to 25 MB max, with 16 MB minimum and default, in /etc/sysctl.conf:
net.core.wmem_max=25165824
net.core.rmem_max=25165824
net.ipv4.tcp_rmem = 16777216 16777216 25165824
net.ipv4.tcp_wmem = 16777216 16777216 25165824
I'm using Fedora 39 and RHEL 8. What can I do to force the SSH server to handle much more in-flight data?
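As a side note, the three fields of tcp_rmem/tcp_wmem are min, default, and max; setting min and default to 16 MB is unusual and can itself cause trouble. A more conventional shape looks like this (illustrative values only, not a guaranteed fix, and note that OpenSSH also has its own internal channel windowing that can cap throughput independently of TCP):

```
# /etc/sysctl.d/90-tcp-buffers.conf — illustrative values
net.core.rmem_max = 25165824
net.core.wmem_max = 25165824
# fields: min  default  max
net.ipv4.tcp_rmem = 4096 131072 25165824
net.ipv4.tcp_wmem = 4096 131072 25165824
```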
無名前
(729 rep)
Jan 26, 2024, 08:13 AM
• Last activity: Oct 4, 2024, 10:02 PM
1
votes
1
answers
183
views
Why doesn't rsync perform delta transfer when copying from a disk to cloud storage?
I believe the rsync manual says that incremental file transfers are performed when transferring files across file systems. However, rsync's output below shows that delta-transmission is disabled when I'm transferring files from a disk to cloud file storage mounted on a directory. Why isn't delta-transmission enabled for this transfer across a filesystem boundary, and how can it be forced?
$ tail -f out.log
[sender] expand file_list pointer array to 1024 bytes, did move
[sender] hiding directory home/user/cloudDrive because of pattern cloudDrive/
[sender] expand file_list pointer array to 1024 bytes, did not move
[sender] expand file_list pointer array to 1024 bytes, did not move
[sender] expand file_list pointer array to 1024 bytes, did not move
[sender] expand file_list pointer array to 1024 bytes, did not move
[sender] expand file_list pointer array to 1024 bytes, did not move
[sender] expand file_list pointer array to 1024 bytes, did not move
created directory /home/user/cloudDrive/folder/2024-09-27T00:06:05+02:00
delta-transmission disabled for local transfer or --whole-file
bit
(1176 rep)
Sep 26, 2024, 10:42 PM
• Last activity: Sep 27, 2024, 01:51 AM
31
votes
3
answers
79842
views
Discover MTU between me and destination IP
In a case where I can use only `UDP` and `ICMP` protocols, how can I discover, in bytes, the path MTU for packet transfer from my computer to a destination IP?
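The usual ICMP-only technique is to send echo requests with the don't-fragment bit set and shrink the payload until one gets through; the path MTU is then the surviving payload plus the 20-byte IP header and the 8-byte ICMP header. The probing commands need a live network, so they are sketched as comments, while the arithmetic below runs:

```shell
#!/bin/sh
# Probe sketch (DEST is a placeholder; needs a live network):
#   ping -c1 -M do -s 1472 DEST   # Linux ping: DF bit set, 1472-byte payload
#   tracepath DEST                # reports pmtu per hop, no root needed
# If -s 1472 succeeds and -s 1473 fails, the path MTU is:
set -e
payload=1472
ip_header=20
icmp_header=8
mtu=$((payload + ip_header + icmp_header))
echo "$mtu"   # 1500
```

A binary search over the payload size converges on the exact value in a handful of probes.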
URL87
(417 rep)
Dec 30, 2012, 02:01 PM
• Last activity: Sep 22, 2024, 10:01 AM
2
votes
2
answers
15265
views
File transfer using YMODEM sz
I'm trying to upload a firmware file over a serial connection to a device that requires the YMODEM protocol, from a Raspberry Pi. After a lot of digging, I keep finding that the `sz --ymodem [file]` command is the tool to do this. I've already managed to communicate with the device using this example, but I'm having no luck with `sz`.
I've read through the `sz` documentation and it leaves me with a question: how do I determine if it is sending to the device? It is plugged in via USB and has port `/dev/ttyACM0`. Other examples talk about sending from a remote host to a local host via `sz` by default, but that's as deep as any explanation goes.
The device has a command which tells it to anticipate a file transfer; I believe this takes the place of `rz`, but the device documentation says it "Prepares the device for YMODEM transfer via HyperTerminal." I've sent it the files via HyperTerminal and a proprietary program successfully, but I need to be able to do it on the Linux command line.
I'm sure this is a case of inexperience and I'm missing something obvious, but how can I fully execute this file transfer from start to finish / what am I doing wrong?
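For reference, a frequently cited invocation (untested here) ties sz's stdin and stdout to the serial port, since sz speaks YMODEM on those streams rather than opening a device itself; the port name and speed below are assumptions:

```
# stty -F /dev/ttyACM0 115200 raw -echo
# sz --ymodem firmware.bin < /dev/ttyACM0 > /dev/ttyACM0
```

The device's "prepare for YMODEM transfer" command would be issued first, then sz run with both streams redirected as above.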
Jack Mason
(21 rep)
Mar 30, 2016, 04:39 PM
• Last activity: Aug 3, 2024, 04:50 PM
4
votes
2
answers
14755
views
sftp "remote open failure"
I sftp'ed into my Linux server from my MacBook. I have never had issues transferring files through sftp until now.
I am getting the following error when I try to `put` something on my Linux server:
sftp> put test.cpp
Uploading test.cpp to /home/mylin/test.cpp
remote open("/home/mylin/test.cpp"): Failure
When I try to `get` something, it works. Any suggestions?
anonuser01
(229 rep)
Aug 9, 2019, 02:16 AM
• Last activity: Jul 25, 2024, 10:15 AM
0
votes
1
answers
470
views
"Connection refused lost connection" error while using scp command
I tried to transfer a file from an HPC system to a remote CentOS system
using the following command line:
scp /ANKAN/data/abc.pdf maslab-3@192.168.69.231:/data/ANKAN
But I am getting the following error:
ssh: connect to host 192.168.69.231 port 22: Connection refused lost connection
I tried all the commands given in the "How to Fix the SSH 'Connection Refused' Error" webpage at the phoenixNAP knowledge base, but nothing worked!
For your information, the SSH port number of the HPC (from where I am transferring the files) is 4422. But when I ran the following command on the remote CentOS system:
sudo grep Port /etc/ssh/sshd_config
the following lines were printed:
Port 22
#GatewayPorts no
So, I tried using the following command lines:
scp -P 4422 /ANKAN/data/abc.pdf maslab-3@192.168.69.231:/data/ANKAN
and
scp -P 22 /ANKAN/data/abc.pdf maslab-3@192.168.69.231:/data/ANKAN
But I am getting the same error.
I can't figure out what is going wrong here.
Can anyone please help me to solve this issue?
Ankan Sarkar
(1 rep)
May 6, 2024, 04:41 AM
• Last activity: Jul 17, 2024, 12:01 PM
0
votes
4
answers
1205
views
Is it possible to use bluetooth for transferring files between an Android phone and a Linux computer?
I was wondering if it is possible to use Bluetooth for transferring files between an Android phone and a Linux computer. Is it a bad idea compared to FTP and adb, given that no one has mentioned using Bluetooth in https://unix.stackexchange.com/questions/87762/how-do-i-transfer-files-between-android-and-linux-over-usb/ ?
Tim
(106420 rep)
Jul 5, 2024, 07:47 AM
• Last activity: Jul 5, 2024, 11:21 AM
3
votes
2
answers
1781
views
How shall I find the device of a phone's storage so that I can mount it in Linux?
I plugged an Android (HarmonyOS 2.0.0) phone into a Linux laptop using USB connection. I want to mount the storage of the phone in Linux, so that I can transfer files between the phone and the laptop. But I can't find the phone in
$ sudo fdisk -l
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000LM014-1EJ1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ...
Device Start End Sectors Size Type
/dev/sda1 1048576 69206015 68157440 32.5G Linux swap
/dev/sda2 69206016 195035135 125829120 60G Linux filesystem
/dev/sda3 195035136 1953523711 1758488576 838.5G Linux filesystem
/dev/sda4 2048 1048575 1046528 511M EFI System
Partition table entries are not in disk order.
but in
$ sudo lshw
...
*-usb
physical id: 14
bus info: pci@0000:00:14.0
version: 31
width: 64 bits
clock: 33MHz
capabilities: pm msi bus_master cap_list
configuration: driver=xhci_hcd latency=0
resources: irq:129 memory:df410000-df41ffff
*-usbhost:0
product: xHCI Host Controller
vendor: Linux 5.19.0 xhci-hcd
physical id: 0
bus info: usb@1
logical name: usb1
version: 5.19
capabilities: usb-2.00
configuration: driver=hub slots=16 speed=480Mbit/s
*-usb:0
description: Mass storage device
product: BKL-AL20
vendor: HUAWEI
physical id: 1
bus info: usb@1:1
version: 2.99
serial:
capabilities: usb-2.10 scsi
configuration: driver=usb-storage maxpower=500mA speed=480Mbit/s
How shall I find the device of the phone's storage so that I can mount it in Linux?
The phone is in "File Transfer via USB" mode. I would like to transfer any type of file, not just pictures. When I plug the phone into the laptop over USB, there are options to choose on the phone: transfer photos, transfer files, charge only, reverse charge, and input MIDI. I chose transfer files.
The phone has internal memory and probably an SD card.
-----------------
$ lsusb -v
...
Bus 001 Device 078: ID 12d1:107e Huawei Technologies Co., Ltd. P10 smartphone
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.10
bDeviceClass 0
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x12d1 Huawei Technologies Co., Ltd.
idProduct 0x107e P10 smartphone
bcdDevice 2.99
iManufacturer 1 HUAWEI
iProduct 2 BKL-AL20
iSerial 3 ...
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 0x006c
bNumInterfaces 4
bConfigurationValue 1
iConfiguration 4
bmAttributes 0xc0
Self Powered
MaxPower 500mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 3
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 255 Vendor Specific Subclass
bInterfaceProtocol 0
iInterface 5
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x01 EP 1 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x82 EP 2 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x001c 1x 28 bytes
bInterval 6
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 1
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 80 Bulk-Only
iInterface 6
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x83 EP 3 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 1
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 2
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 66
bInterfaceProtocol 1
iInterface 8
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x03 EP 3 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x84 EP 4 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 3
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 255 Vendor Specific Class
bInterfaceSubClass 72
bInterfaceProtocol 1
iInterface 9
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x04 EP 4 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x85 EP 5 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
and I can't mount the storage of the phone:
$ aft-mtp-mount ~/media/phone/
connect failed: no MTP device found
The "transfer photos" mode allows me to run
$ aft-mtp-mount ~/media/phone/
but it only shows me image files, not files of other types.
Tim
(106420 rep)
Jul 3, 2024, 09:04 AM
• Last activity: Jul 4, 2024, 08:49 AM
1
votes
2
answers
94
views
How to distribute HTTPS certificate/key securely and automatically on internal servers
I have some internally available servers (all Debian) that share a LetsEncrypt wildcard certificate (*.local.example.com). One server (Server1) keeps the certificate up to date, and now I'm looking for a process to automatically distribute the .pem files from Server1 to the other servers (e.g. Server2 and Server3).
I don't allow root logins via SSH, so I believe I need an intermediary user.
I've considered using a cronjob on Server1 to copy the updated .pem files to a user's directory, where an unprivileged user uses scp or rsync (private-key authentication) via another cronjob to copy the files to Server2/3. However, to make this a more secure process, I wanted to restrict the user's privileges on Server2/3 to chroot to their home directory and only allow them to use scp or rsync. It seems like this isn't a trivial configuration, and most methods are outdated, flawed, or require an extensive setup (rbash, ForceCommand, chroot, ...).
I've also considered changing the protocol to SFTP, which should allow me to use OpenSSH's restricted SFTP environment, but I have no experience with it.
An alternative idea was to use an API endpoint (e.g. FastAPI, which is already running on Server1) or simply a webserver via HTTPS with custom API-Secrets or mTLS on Server1 to allow Server2/3 to retrieve the .pem-files.
At the moment, the API/webserver approach seems most reasonable and least complex, yet feels unnecessarily convoluted. I'd prefer a solution that doesn't require additional software.
Server1 has .pem-files (owned by root) and Server2/3 need those files updated regularly (root-owned location). What method can I use to distribute those files automatically in a secure manner?
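One pattern that needs nothing beyond OpenSSH and rsync is a push from Server1 with a dedicated key that the receiving side locks down with rrsync, the restriction wrapper shipped with rsync (its install path and option set vary by version and distribution). A sketch under those assumptions; the user name, paths and times are placeholders:

```shell
# ~certsync/.ssh/authorized_keys on Server2/3 -- this key may only run rrsync,
# write-only (-wo, in newer rrsync versions), confined to one drop directory:
#   restrict,command="/usr/bin/rrsync -wo /home/certsync/incoming" ssh-ed25519 AAAA... cert-push

# Root crontab on Server1 -- push the renewed files (-L dereferences the
# live/ symlinks; the destination "/" is relative to the rrsync-confined dir):
#   15 3 * * * rsync -aL --chmod=F640 /etc/letsencrypt/live/local.example.com/ certsync@server2:/

# Root crontab on Server2/3 -- install into the root-owned location and reload:
#   30 3 * * * install -o root -m 640 /home/certsync/incoming/*.pem /etc/ssl/internal/ && systemctl reload nginx
```

The chrooted-SFTP route mentioned in the question is also built in: OpenSSH's `Match User` with `ChrootDirectory` and `ForceCommand internal-sftp` needs no external binaries in the chroot.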
emma.makes
(31 rep)
Jun 2, 2024, 03:34 PM
• Last activity: Jun 9, 2024, 02:26 PM
27
votes
8
answers
175256
views
Downloading and uploading files via telnet session
I have an attendance device running `Linux OS`. I can connect this device via telnet session. There are some files in the device that I would like to download and upload with new files. How can I do that? I have very low knowledge on Linux OS. Would you please help! ![enter image description here][1...
I have an attendance device running
Linux OS
. I can connect to this device via a telnet session. There are some files on the device that I would like to download and then replace with new files. How can I do that? I have very little knowledge of Linux. Would you please help!

PRdeep Kumawat
(371 rep)
Dec 4, 2014, 12:37 PM
• Last activity: Jun 6, 2024, 06:08 PM
1
votes
0
answers
203
views
Rsync total progress with interrupted transfer
I'm aware of being able to display the total progress of an rsync transfer using `--info=progress2`, and adding `--no-i-r` to not have it scan incrementally and display the total progress from the start. rsync -a --info=progress2 --no-i-r : When I interrupt a large transfer using these options and r...
I'm aware of being able to display the total progress of an rsync transfer using
--info=progress2
, and adding --no-i-r
to not have it scan incrementally and display the total progress from the start.
rsync -a --info=progress2 --no-i-r <source> <destination>
When I interrupt a large transfer using these options and restart it, I can see that it continues where it left off and skips everything that has already been transferred, but the percentage and ETA do not take into account that the transfer was already partially completed; rsync simply restarts the progress display from zero. This also means it does not finish at 100%, but at (100 - X)%, where X is the percentage that had already been transferred before the interruption. Nor does the ETA end at 0:00:00; the last remaining time it shows is the time it thinks it still needs to transfer everything that had already been sent.
Is there any way to have rsync show the actual total progress of an interrupted transfer? Or to just discard the progress of that which has already been transferred and only show the progress for the newly transferred files?
I'm using rsync v3.2.7
Bas van den Wollenberg
(11 rep)
May 8, 2024, 10:29 PM
• Last activity: May 15, 2024, 07:49 AM
1
votes
1
answers
134
views
scp copies folder but not contents of the folder
I have been trying to get this to work for a few days now, but I can't figure it out. I am trying to `scp` a folder full of `.tar` files from my Ubuntu-Server Server to my Windows Desktop. I want to push it from my server to my Desktop, because I'd like to automate the process via a `bash` script. I...
I have been trying to get this to work for a few days now, but I can't figure it out. I am trying to
scp
a folder full of .tar
files from my Ubuntu-Server Server to my Windows Desktop. I want to push it from my server to my Desktop, because I'd like to automate the process via a bash
script.
I am using a command like this:
scp -r path/to/folder Username@Windowsmachineip:C:/path/to/folder/
When I execute the command, I get the error no such file or directory
, but it does create a folder with the name of the folder on my server.
What I **can** do is copy single files, but only if I specify a name for the file on my desktop, like this:
scp -r path/to/folder/file Username@Windowsmachineip:C:/path/to/folder/file
If I try it without the filename at the end, I get the same error. I also tried it with the -p
flag, following a suggestion, but that throws the same error. Pulling from the server to my desktop fails the same way. I also tried sftp, which gives this output:
dest open "/E:/backup/minecraft/backup_minecraft_24_03_2024_06:01:04.tar": No such file or directory
upload "backup/minecraft/backup_minecraft_24_03_2024_06:01:04.tar" to "/E:/backup/minecraft/backup_minecraft_24_03_2024_06:01:04.tar" failed
The error I get from scp with the -v flag is:
scp: debug1: fd 3 clearing O_NONBLOCK
Sending file modes: C0666 471111680 backup_minecraft_23_03_2024_22:29:33.tar
Sink: C0666 471111680 backup_minecraft_23_03_2024_22:29:33.tar
scp: E:/backup/minecraft/backup_minecraft_23_03_2024_22:29:33.tar: No such file or directory
Why doesn't this work?
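A likely culprit, going by the sftp and scp errors above: the archive names contain colons (06:01:04.tar), and Windows filesystems reject `:` in file names, so the Windows side can create the directory but fails on every file named with a colon. A sketch that renames the colons away before pushing, shown on a scratch directory (the scp target in the comment is the one from the question):

```shell
# Windows (NTFS) rejects ':' in file names, so scp/sftp can create the folder
# but fail on each file whose name contains a colon. Rename first (bash):
mkdir -p /tmp/demo                            # scratch stand-in for path/to/folder
: > '/tmp/demo/backup_minecraft_23_03_2024_22:29:33.tar'

for f in /tmp/demo/*:*; do                    # only names that contain ':'
  mv -- "$f" "${f//:/-}"                      # replace every ':' with '-'
done

# After renaming, the original push should work unchanged:
#   scp -r /tmp/demo Username@Windowsmachineip:C:/path/to/folder/
```

Generating the backups with a Windows-safe timestamp in the first place (e.g. `date +%H-%M-%S`) avoids the rename step entirely.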
Tim_Schmock
(13 rep)
Mar 25, 2024, 06:15 PM
• Last activity: Mar 26, 2024, 11:35 AM
0
votes
4
answers
1937
views
How shall I fast and reliably transfer 300G files from a computer to another?
I want to transfer ~300G files from a laptop to other in the same local wifi network, for back up purpose. (I am running out of the space of my only external hard drive, and can't afford anything. Someone was kind enough to leave an old laptop in a dumpster for me to uncover.) Both now run Lubuntu....
I want to transfer ~300 GB of files from one laptop to another on the same local Wi-Fi network, for backup purposes. (I am running out of space on my only external hard drive, and can't afford anything. Someone was kind enough to leave an old laptop in a dumpster for me to uncover.) Both now run Lubuntu.
What is the fastest and most reliable way to transfer the files? With rsync, scp, or some other command? Could you also give some concrete commands to transfer, for example, two directories (and the files under them)?
How does the transfer speed of that command compare to physically connecting an external hard drive to each computer in turn via a USB cable and performing the transfer indirectly, if another external hard drive becomes available in a dumpster in the future?
Thanks.
Tim
(106420 rep)
Feb 18, 2019, 02:02 PM
• Last activity: Jan 31, 2024, 11:13 AM