
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
0 answers
111 views
Unable to write to client over ssh
## Preface

I am unable to write to my client user connected over SSH; there are no issues with users logged in on the server itself. I run as root, and SELinux is disabled for all the tests below (sudo setenforce 0). Yet I always get the error below:
[vtian@vbox ~]$ write vtian pts/0
write: vtian is not logged in on pts/0

[vtian@vbox ~]$ who | grep [p]ts/0
vtian    pts/0        2025-06-24 17:55 (10.0.2.2)
If I write to vtian tty1, which is logged in server-side, there are no issues at all. I have another question [here](https://unix.stackexchange.com/questions/797201/unable-to-write-to-self-in-graphical-terminal-session/797235#797235), now answered, which rules out an issue with my client gnome-terminal.

## Prerequisites

I have checked that:
* the user is logged in utmp (according to who, loginctl, and last)
* the user's tty is correct (tty returns pts/0)
* the user can receive messages when writing directly to the fd /dev/pts/0
* the user is accepting messages (according to who -T and mesg)
* write has the setgid bit set (g+s)

## Write Version [Edit 1]
[vtian@vbox ~]$ sudo rpm -qf $(which write)
util-linux-2.40.2-10.el10.x86_64
## who and last and ls output [Edit 1]
[vtian@vbox ~]$ last | head -n 5
vtian    pts/0        10.0.2.2         Mon Jun 23 16:18   still logged in
vtian    tty1                          Mon Jun 23 16:18   still logged in
reboot   system boot  6.12.0-55.16.1.e Mon Jun 23 16:17   still running
vtian    pts/0        10.0.2.2         Sat Jun 21 16:19 - 16:29  (00:09)
vtian    tty1                          Sat Jun 21 16:18 - 16:29  (00:11)
[vtian@vbox ~]$ who -aT
           system boot  2025-06-23 16:17
vtian    + tty1         2025-06-23 16:18 00:01        4299
           run-level 3  2025-06-23 16:17
vtian    + pts/0        2025-06-23 16:18   .          4903 (10.0.2.2)
           pts/1        2025-06-23 16:18              4939 id=ts/1  term=0 exit=0
[vtian@vbox ~]$ sudo ls -l $(which write)
-rwxr-sr-x. 1 root tty 24152 Feb 13 08:00 /usr/bin/write
## Disk Mount Status [Edit 1]

My disk uses LVM. None of the LVs are mounted with nosuid; the only nosuid mounts are /proc, as well as some entries under /dev, /sys, and /run.

## loginctl output [Edit 2]

I have just found something really strange and reproducible. Below is the output of loginctl (and sudo loginctl) as seen from vtian on tty1, and of loginctl from vtian on pts/0, given the condition stated below.
[vtian@vbox ~]$ loginctl
SESSION  UID USER  SEAT  LEADER CLASS   TTY   IDLE SINCE
      2 1000 vtian -     3697   manager -     no   -    
      3 1000 vtian seat0 3149   user    tty1  no   -    
      6 1000 vtian -     3960   user    pts/0 no   -    
3 sessions listed.
Output for sudo loginctl on pts/0
[vtian@vbox ~]$ sudo loginctl
[sudo] password for vtian: 
SESSION  UID USER  SEAT  LEADER CLASS   TTY  IDLE SINCE
      2 1000 vtian -     3697   manager -    no   -    
      3 1000 vtian seat0 3149   user    tty1 no   -    
      6 1000 vtian -     3960   user    -    no   -    

3 sessions listed.
After I run sudo loginctl from vtian on pts/0, vtian pts/0 is gone from logind for the rest of the session! If I run loginctl or sudo loginctl from vtian on tty1, I continue to get the earlier output. The only way to fix it is to restart the SSH session.

## Other Outputs [Edit 2]
[vtian@vbox ~]$ sudo grep -e @include -e pam_systemd /etc/pam.d/{sshd,common*}
grep: /etc/pam.d/common*: No such file or directory
## PAM Systemd Output [Edit 3]
[root@vbox vtian]# grep -r pam_systemd /etc/pam*
/etc/pam.d/runuser-l:-session	optional	pam_systemd.so
## uname and distro info [Edit 4]
[root@vbox vtian]# uname -a
Linux vbox 6.12.0-55.16.1.el10_0.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 10 18:27:04 UTC 2025 x86_64 GNU/Linux
[root@vbox vtian]# cat /etc/os-release
NAME="Rocky Linux"
VERSION="10.0 (Red Quartz)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="10.0"
PLATFORM_ID="platform:el10"
PRETTY_NAME="Rocky Linux 10.0 (Red Quartz)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:10::baseos"
HOME_URL="https://rockylinux.org/"
VENDOR_NAME="RESF"
VENDOR_URL="https://resf.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2035-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-10"
ROCKY_SUPPORT_PRODUCT_VERSION="10.0"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="10.0"
## strace [Edit 5]

I don't know how to interpret the strace data, but it looks to me like write does iterate over the session belonging to vtian on pts/0, yet disregards it as if it did not match vtian on pts/0.
[root@vbox vtian]# strace -o /tmp/tracefile write vtian pts/0
write: vtian is not logged in on pts/0
[root@vbox vtian]# grep '/run/systemd/sessions/' /tmp/tracefile
openat(AT_FDCWD, "/run/systemd/sessions/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
openat(AT_FDCWD, "/run/systemd/sessions/1", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/1", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/2", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/2", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/6", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/7", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/run/systemd/sessions/7", O_RDONLY|O_CLOEXEC) = 3
[root@vbox vtian]# loginctl list-sessions
SESSION  UID USER  SEAT  LEADER CLASS         TTY   IDLE SINCE
      1    0 root  seat0 1249   user-early    tty1  no   -    
      2    0 root  -     1799   manager-early -     no   -    
      6 1000 vtian -     2033   user          pts/0 no   -    
      7 1000 vtian -     2041   manager       -     no   -
I know this trace shows a failure: if it were successful, there would be instructions like the following after openat(AT_..., ../sessions/7" ...), as this is what it looks like when writing successfully to root on tty1:
openat(AT_FDCWD, "/run/systemd/sessions/1", O_RDONLY|O_CLOEXEC) = 3
...
newfstatat(AT_FDCWD, "/dev/tty1", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x4, 0x1), ...}, 0) = 0
...
openat(AT_FDCWD, "/dev/tty1", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
...
In addition, I don't really know why /dev/pts/0 is involved here when I'm writing to /dev/tty1:
execve("/usr/bin/write", ["write", "root", "/dev/tty1"], 0x7ffc4f216350 /* 30 vars */) = 0
...
ioctl(0, TCGETS, {c_iflag=ICRNL|IXON|IUTF8, c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD, c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE, ...}) = 0
ioctl(0, TCGETS, {c_iflag=ICRNL|IXON|IUTF8, c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD, c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE, ...}) = 0
ioctl(0, TCGETS, {c_iflag=ICRNL|IXON|IUTF8, c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD, c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE, ...}) = 0
fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0), ...}) = 0
readlink("/proc/self/fd/0", "/dev/pts/0", 4095) = 10
newfstatat(AT_FDCWD, "/dev/pts/0", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0), ...}, 0) = 0
newfstatat(AT_FDCWD, "/dev/pts/0", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0), ...}, 0) = 0
getuid()                                = 0
getuid()                                = 0
...
Vesta Tian (81 rep)
Jun 20, 2025, 02:47 PM • Last activity: Jun 24, 2025, 01:06 PM
2 votes
2 answers
56 views
Unable to write to self in graphical terminal session?
Essentially, I noticed I am unable to write to my user who is using gnome-terminal. tty returns /dev/pts/1, but I am unable to write there as root. Instead, it returns as follows:
myuser@pegasus:/$ tty
/dev/pts/1
root@pegasus:/# write myuser pts/1
write: myuser is not logged in on pts/1
I have also tried write myuser tty2, and tried not specifying the terminal, but nothing happens. How can I write to my session? The inverse works fine:
myuser@pegasus:/$ write root pts/0
hi!
please respond
root@pegasus:/# 
Message from myuser@pegasus on pts/1 at 22:05 ...
hi!
please respond
EOF
Here is what the logins look like.
root@pegasus:/# who -aT
           system boot  2025-06-19 21:34
           run-level 5  2025-06-19 21:34
myuser   ? seat0        2025-06-19 21:34   ?          2982 (login screen)
myuser   + tty2         2025-06-19 21:34 00:37        2982 (tty2)
           pts/1        2025-06-19 21:41             25698 id=ts/1  term=0 exit=0

myuser@pegasus:/$ loginctl
SESSION  UID USER   SEAT  LEADER CLASS         TTY   IDLE SINCE
     11    0 root   -     79869  manager-early -     no   -
      2 1000 myuser seat0 2891   user          tty2  no   -
      3 1000 myuser -     2911   manager       -     no   -
    c11    0 root   -     79732  user-early    pts/0 no   -

4 sessions listed.
Vesta Tian (81 rep)
Jun 19, 2025, 02:10 PM • Last activity: Jun 21, 2025, 01:20 PM
0 votes
2 answers
44 views
How to copy one HD to another if the PC freezes after a few GB?
How do I copy one HD to another if the PC freezes after a few GB? I had to copy a friend's notebook Windows 11 install from a 128 GB NVMe to a 1 TB NVMe. I put the 128 GB NVMe in a USB adapter and the 1 TB NVMe inside the notebook. The only cost-free and guaranteed way to copy the drive was through Linux (which I use every day), so I booted a live USB of Ubuntu 24.04. When I try to copy using gparted or dd, the PC freezes every time at around 15 GB :( What is going on?
VeganEye (101 rep)
Jun 2, 2025, 09:12 PM • Last activity: Jun 2, 2025, 10:04 PM
0 votes
1 answer
37 views
dd cannot write to an ext4 HDD using oflag=direct
I attempted to write a file to an ext4 volume on an external USB hard disk drive:

$ dd if=/dev/zero of=file bs=1M count=10 iflag=fullblock oflag=direct
dd: failed to open 'file': Invalid argument

Then I tried it without oflag=direct and it worked. Then I tried to write a file using the original command to an ext4 volume on an internal solid-state drive and it worked. Why couldn't dd write to the HDD?
EmmaV (4359 rep)
May 25, 2025, 08:08 AM • Last activity: May 25, 2025, 09:47 AM
1 vote
1 answer
2355 views
How can I get write/wall commands working as intended?
OS: xUbuntu 22.04. I want to use the write / wall commands to send messages to other users sharing the same computer. But when I try the write command, I get the following error:
√ ~ $ who
user1    tty7         2024-05-12 06:40 (:0)
user2    tty8         2024-05-13 06:56 (:1)
user3    tty9         2024-05-16 06:09 (:2)
user4    tty10        2024-05-16 11:54 (:3)
√ ~ $ write user2 tty8
write: effective gid does not match group of /dev/pts/13
The error is the same no matter what variation of the command I try: write user2, write user2 /dev/pts/13, or write user2 pts/13. I have been searching online and found just a few blurbs about the error. One such blurb seemed to suggest this behaviour is intentional, at least for Debian/Ubuntu: https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/2064685 I did try the advice in the link above:

>If you wish to restore the previous behavior, it should be sufficient to change /usr/bin/write.ul to root:tty 02755.

So now my /usr/bin/write.ul has the setgid bit set:

√ ~ $ sudo chmod 02755 /usr/bin/write.ul
√ ~ $ ls /usr/bin/write.ul
-rwxr-sr-x 1 root root 23K Apr  9 10:32 /usr/bin/write.ul*
but doing so has made no change in the error received. Any ideas what more I likely need to do to get these commands working as intended?
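One thing worth noting: the ls output above shows the binary's group is still root (-rwxr-sr-x 1 root root), while the linked advice says root:tty 02755 — group and mode. Since /dev/pts/* devices are normally owned by group tty, the setgid bit only helps once the group matches. A sketch of the full change as the bug report describes it:

```shell
# setgid only helps if the group the binary runs as matches the group
# that owns the target terminals (/dev/pts/* are group tty by default),
# so the group must be changed along with the mode:
sudo chown root:tty /usr/bin/write.ul
sudo chmod 2755 /usr/bin/write.ul
ls -l /usr/bin/write.ul    # expected: -rwxr-sr-x 1 root tty ...
```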
naphelge (43 rep)
May 20, 2024, 11:35 PM • Last activity: Apr 26, 2025, 12:10 AM
0 votes
1 answer
88 views
How can I make writes to a named pipe block if the reader closes, and vice versa?
Right now, if I write to a named pipe and then close the reader, the writer gets a SIGPIPE and exits. For example,
$ mkfifo pipe
$ cat pipe & # read from the pipe in the background
$ cat > pipe # write to the pipe
line1
line2
...
If I then stop the reader process, the writer process dies from the SIGPIPE raised when the reader stops. I don't want this to happen. If the writer tries writing to the pipe *before* a reader appears, the writer blocks and waits for a reader; I want behavior like this. If the reader closes the pipe, the writer should block and wait for a new reader. Likewise, if I stop the writer process, the reader process closes because it sees an EOF (or rather, a 0-byte read); I would like it to block until a new writer appears instead. Does cat have a flag for doing this? If not, this is probably a pretty simple C program.
Joseph Camacho (109 rep)
Jan 30, 2025, 07:00 PM • Last activity: Jan 30, 2025, 09:21 PM
0 votes
3 answers
312 views
Append N hexadecimal numbers to a binary file with Bash
I have a Bash script that appends bytes written as hexadecimal values. I use echo to write the bytes and it works:

hex="1F"
byte="\x$hex"
echo -en $byte >> "output.bin"

At some point I need to pad the file with a byte that could be anything from 00 to FF. I want to specify the byte as a hex value and the total number of repetitions. I tried doing this with a for loop, but it just takes too long, especially since I sometimes need to add something like 65535 bytes:

byte="00"
total=65515
for (( i=1; i<=total; i++ )); do
    echo -en "\x$byte" >> "output.bin"
done

I am looking for a more performant way that does not use a for loop. At the moment I am stuck with something between printf and echo, but instead of writing the values as binary it writes them as text:

result=$(eval printf "\x$byte%0.s" {1..$total})
echo -en $result >> "output.bin"

The result in the output file is 78 30 30, which is the text "x00", instead of 00. If I directly use echo -en "\x00" >> "output.bin", it does write one byte holding the value 00 and not the text "x00". This is where I don't know how to proceed in order to make it write the actual binary value.
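A loop-free approach (a sketch; byte, total, and output.bin follow the question's naming) is to take the right number of bytes from /dev/zero and rewrite them with tr, converting the hex value to the octal escape that tr accepts:

```shell
byte="1F"      # the pad byte, as hex
total=65535    # how many copies to append

# /dev/zero supplies $total NUL bytes in one stream; tr rewrites each one
# to the requested value (tr understands \NNN octal, so convert the hex first)
oct=$(printf '%03o' "0x$byte")
head -c "$total" /dev/zero | tr '\0' "\\$oct" >> "output.bin"
```

This does the whole pad in a single pipeline instead of one write() per byte.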
alexmro (3 rep)
Sep 4, 2024, 09:45 PM • Last activity: Sep 5, 2024, 12:48 PM
1 vote
1 answer
61 views
Linux kernel phantom reads
Why, if I write to a raw hard disk (with no filesystem on it), does the kernel also issue reads?

$ sudo dd if=/dev/zero of=/dev/sda bs=32k count=1 oflag=direct status=none
$ iostat -xc 1 /dev/sda | grep -E "Device|sda"
Device   r/s  w/s   rkB/s  wkB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda    45,54 0,99 1053,47  31,68   0,00   0,00  0,00  0,00    1,17 3071,00   3,04    23,13    32,00  66,38 308,91

Is it readahead? Instead of dd I wrote a C program that does the same; I even used posix_fadvise to hint to the kernel that I do not want readahead.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define BLOCKSIZE 512
#define COUNT 32768

int main(void)
{
    // read COUNT bytes from /dev/zero
    int fd;
    mode_t mode = O_RDONLY;
    char *filename = "/dev/zero";
    fd = openat(AT_FDCWD, filename, mode);
    if (fd < 0) {
        perror("Creation error");
        exit(1);
    }

    void *pbuf;
    posix_memalign(&pbuf, BLOCKSIZE, COUNT);
    size_t a = COUNT;
    ssize_t ret;
    ret = read(fd, pbuf, a);
    if (ret < 0) {
        perror("read error");
        exit(1);
    }
    close(fd);

    // write COUNT bytes to /dev/sda
    int f = open("/dev/sda", O_WRONLY|__O_DIRECT);
    ret = posix_fadvise(f, 0, COUNT, POSIX_FADV_NOREUSE);
    if (ret < 0)
        perror("posix_fadvise");
    ret = write(f, pbuf, COUNT);
    if (ret < 0) {
        perror("write error");
        exit(1);
    }
    close(f);
    free(pbuf);
    return 0;
}

But the result is the same:

$ iostat -xc 1 /dev/sda | grep -E "Device|sda"
Device   r/s  w/s   rkB/s wkB/s r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sda    46,00 1,00 1064,00 32,00   10,78    1,00   0,43    23,13    32,00 10,55 49,60

It does not matter whether it is a spindle disk or an SSD; the result is the same. I also tried different kernels.
Alex (923 rep)
Jun 18, 2024, 10:53 AM • Last activity: Jun 20, 2024, 03:22 PM
1 vote
3 answers
591 views
Can we use read(), write() on a directory in unix/linux?
Can we use read() and write() on a directory just like on any other file in Unix/Linux? I am confused here because directories are also considered files.
Nht_e0 (43 rep)
Jul 13, 2018, 06:01 PM • Last activity: Feb 28, 2024, 01:18 PM
0 votes
0 answers
71 views
Circular pipe situation
More of an academic/theory question. Say I have process A piping into process B:

A | B

Normally, the way pipes are designed, if process A dies the pipeline shuts down gracefully. However, if process B dies, A will get a write error if it tries to keep writing to B. Is there an acceptable way for B to die first? Is it possible to make it circular in some fashion? This probably won't work, but something like this:

mkfifo circ
A < circ | B > circ

The question, again, is whether there is a graceful way for B to die before A.
Alexander Mills (10734 rep)
Oct 10, 2023, 07:35 PM
0 votes
0 answers
112 views
What size does stat return during a write?
On Linux, suppose I do a write() at the end of a file, and while it is still completing, from another thread I make a stat-type call on that file (such as fstat() or lstat()). I would expect the st_size field of the stat buffer during that time to fall between the previous size of the file before the write() and the current amount of valid data already written. So that, if I were to use st_size to mmap() the file with offset 0, I would get valid data; perhaps not all the data yet, but at least all the previous contents of the file, plus perhaps some newly written valid data. Is that guaranteed?
Mark Galeck (223 rep)
May 17, 2023, 02:14 AM • Last activity: May 17, 2023, 02:23 AM
0 votes
1 answer
464 views
Does write(fd with O_SYNC) only flush data of THAT fd instead of all caches caused by other fds of same file?
I was using dd command to change a single byte of a block device(not the partition block device), such as /dev/nvme0n1, at a specific position (not managed by normal file).
dd of=${DEV:?DEV} seek=${POS:?POS} bs=1 count=1 oflag=seek_bytes conv=notrunc status=none
I encountered an issue with the sync command: it hangs, or takes too long to finish, on some machines. The sync command seems to involve the caches of all files, which is obviously slow and can even hang due to inconsistent kernel state. Especially when several big VMs are running on the host, sync can be very slow, sometimes 30 minutes. So I started to think I should not call the sync command directly; I should instead tell dd to sync only the part it touched, via oflag=sync, like this:
dd of=${DEV:?DEV} seek=${POS:?POS} bs=1 count=1 oflag=sync,seek_bytes conv=notrunc status=none
Since the difference between oflag=direct, oflag=sync, and conv=fsync is not obvious, I dived into the source of dd. It turns out that:
- oflag=sync opens the output file with the O_SYNC flag, so each write syscall automatically behaves like a write followed by fsync(fd).
- conv=fsync causes an additional fsync syscall after the writes.
- oflag=direct requires the block size to be a multiple of 512 etc.; for my case of just 1 byte, dd simply turns the flag off and changes it to conv=fsync.

All seems good, but I am not sure about one thing:

### If the output file /dev/nvme0n1 has many files cached by Linux, will my dd command eventually trigger a sync of all of them?

(I actually just want dd to sync the 1 byte to the device, not other contents.) I checked the kernel source, and guess that write(2) on an fd with the O_SYNC flag eventually calls [fs/sync.c#L180](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/fs/sync.c#L180) (at least this is what the fsync syscall eventually calls):
int vfs_fsync_range(struct file *file, loff_t start, loff_t end, int datasync)
{
	struct inode *inode = file->f_mapping->host;

	if (!file->f_op->fsync)
		return -EINVAL;
	if (!datasync && (inode->i_state & I_DIRTY_TIME))
		mark_inode_dirty_sync(inode);
	return file->f_op->fsync(file, start, end, datasync);
}
but then I was stuck at
file->f_op->fsync(file, start, end, datasync)
I am not sure how the file system driver handles the fsync, and whether it involves the caches created through other fds; it is not obvious. I will continue checking the kernel source and append an EDIT later.

EDIT: I am almost sure that vfs_fsync_range is the one eventually called by the write syscall. The call stack is like this:
- [fs/read_write.c#L649](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/fs/read_write.c#L649)
SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
		size_t, count)
{
	return ksys_write(fd, buf, count);
}
- [fs/read_write.c#L637](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/fs/read_write.c#L637)
ssize_t ksys_write(unsigned int fd, const char __user *buf, size_t count)
{
...
		ret = vfs_write(f.file, buf, count, ppos);
}
- [fs/read_write.c#L584](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/fs/read_write.c#L584)
ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_t *pos)
{
...
	if (file->f_op->write)
		ret = file->f_op->write(file, buf, count, pos);
	else if (file->f_op->write_iter)
		ret = new_sync_write(file, buf, count, pos);
...
}
- [block/fops.c#L551](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/block/fops.c#L551)
static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
...
	ret = __generic_file_write_iter(iocb, from);
	if (ret > 0)
		ret = generic_write_sync(iocb, ret);
...
}
- [include/linux/fs.h](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/include/linux/fs.h#L2466)
static inline ssize_t generic_write_sync(struct kiocb *iocb, ssize_t count)
{
	if (iocb_is_dsync(iocb)) {
		int ret = vfs_fsync_range(iocb->ki_filp,
				iocb->ki_pos - count, iocb->ki_pos - 1,
				(iocb->ki_flags & IOCB_SYNC) ? 0 : 1);
		if (ret)
			return ret;
	}

	return count;
}
To be continued... - [block/fops.c#L451](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/block/fops.c#L451)
static int blkdev_fsync(struct file *filp, loff_t start, loff_t end,
		int datasync)
{
	struct block_device *bdev = filp->private_data;
	int error;

	error = file_write_and_wait_range(filp, start, end);
	if (error)
		return error;

	/*
	 * There is no need to serialise calls to blkdev_issue_flush with
	 * i_mutex and doing so causes performance issues with concurrent
	 * O_SYNC writers to a block device.
	 */
	error = blkdev_issue_flush(bdev);
	if (error == -EOPNOTSUPP)
		error = 0;

	return error;
}
### It should be the above blkdev_fsync doing the sync work.

From this function on it becomes hard to analyze; I hope some kernel developers can help me. The function further calls into [mm/filemap.c](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/mm/filemap.c) and [block/blk-flush.c](https://github.com/torvalds/linux/blob/16a8829130ca22666ac6236178a6233208d425c3/block/blk-flush.c#L459); hope this helps. I will do a test, but a test alone cannot make me confident, which is why I came here to ask this question. I did test it, but since the sync command itself also finished quickly, I could not tell whether dd oflag=sync is safer than the sync command.

EDIT:

### I have managed to confirm that dd oflag=sync is safer and quicker than the sync command, so I believe the answer to this question is yes.

> Does write(fd with O_SYNC) only flush data of THAT fd instead of all caches caused by other fds of same file?

YES. The test is like this:
- repeatedly create big files with random data
for i in {1..10}; do echo $i; dd if=/dev/random of=tmp$i.dd count=$((10*1024*1024*1024/512)); done
- in another terminal, run sync to confirm that it is very slow, as if hung. Interrupt the sync command.
- create a test file and get its physical LBA:
echo test > z
DEV=$(df . | grep /dev |awk '{print $1}')
BIG_LBA=$(sudo debugfs -R "stat $PWD/z" $DEV | grep -F '(0)' | awk -F: '{print $2}')
- in another terminal, run the dd command and confirm it is very fast:
dd of=${DEV:?DEV} seek=$((BIG_LBA*8*512)) bs=1 count=1 oflag=sync,seek_bytes conv=notrunc status=none <<<"x"
But I still hope someone can point out where in the kernel source I can confirm the answer.
osexp2000 (622 rep)
May 10, 2023, 11:32 AM • Last activity: May 10, 2023, 03:55 PM
0 votes
3 answers
4710 views
Chat over LAN in Linux
I am trying to set up a LAN chat between two users on a Linux server, where neither of them is root. I have tried these two methods:

write account_name on both computers

And:

nc -l port_number on the first computer
nc IP_address port_number on the second computer

But the problem is that whenever I am typing something and the person on the other side hits enter, it breaks my line as well. E.g. I am typing "This is just a sim[enter]ple text", and the other person's enter splits my line. Is there a way to fix that? Or another way I can set up this chat?
Krzysztof Majewski (103 rep)
Oct 22, 2015, 09:28 AM • Last activity: Apr 30, 2023, 05:27 PM
0 votes
1 answer
113 views
Does the echo command create a swap file when it writes a line to a file?
I am currently working on a personal project, and I would like to record logs simply, in the following way, using the echo command. My question is: if echo commands access the same file at the same time, will a swap file be created? Or is there a queue inside the echo command that makes things happen in order? Example:
$ cat a.txt

aa
bb
cc
$ echo "apple" >> a.txt

aa
bb
cc
apple
JongSun Park (11 rep)
Feb 16, 2023, 04:35 AM • Last activity: Feb 16, 2023, 09:04 AM
1 vote
1 answer
369 views
Does rm -rf /mnt/ delete files in subfolders if you do not have write access to /mnt/?
I accidentally ended up running rm -rf some-text-folder-I-had-already-deleted-previously.txt /mnt/ because of arrow-up into bash history. My screen flashed with lots of subfolders that I actually do have write permissions for (I don't have write permission for /mnt/ itself, so even the mount folders inside /mnt/ were created with sudo mkdir), but all of the lines ended with Operation not permitted (and I stopped the command before it could finish).

I am worried that files inside those folders might have been deleted, because a df -h the previous day showed one file server volume at **85%** and 24 hours later it is at **83%**. But I do have some scripts cleaning up old files on that file server volume, so that could be the reason. Since the mishap, I haven't been able to find any missing files (I even have two daily tree -ahfq output files that I have diff'ed, but the entries missing from file2 are not actually deleted as far as I can see).

Can a simple rm -rf /mnt/ actually harm subfolders and files if I don't own or have write permissions to /mnt/ (owned by root etc. as default on Ubuntu)?
jmkane (111 rep)
Dec 15, 2022, 11:58 PM • Last activity: Dec 16, 2022, 12:16 PM
0 votes
1 answer
194 views
Capture incoming terminal messages (write/mesg)
write allows sending messages to a connected user's terminal:

$ echo "hello budy" | write budy

This can become quite annoying when messages interfere with a terminal operation. A drastic solution consists in blocking all incoming messages. Is there an intermediate solution where messages would be dumped to a file without interfering with the terminal?
Jav (1030 rep)
Nov 28, 2022, 02:32 PM • Last activity: Dec 2, 2022, 07:47 AM
0 votes
1 answers
267 views
How does the Kernel implement synchronisation techniques on file access
I've read that the kernel implements synchronisation mechanisms when accessing files. For example, if we try and write or read to a file in the file system using `read()` or `write()` from different processes at the same time, the kernel will prevent race conditions. How exactly is this implemented?...
I've read that the kernel implements synchronisation mechanisms when accessing files. For example, if we try to write or read a file in the file system using read() or write() from different processes at the same time, the kernel will prevent race conditions. How exactly is this implemented? I have used mutexes and semaphores when writing code before, which prevent different threads or processes from executing a certain part of the code at the same time. In this case, I assume that the kernel should only implement a locking mechanism when more than one process or thread tries to read or write to the same file descriptor, not any time read() or write() is called, which could be for any file descriptor. How would this be achieved?
Engineer999 (1233 rep)
Nov 30, 2022, 11:03 AM • Last activity: Nov 30, 2022, 11:49 AM
1 votes
1 answers
281 views
How `stdio` recognizes whether the output is redirected to the terminal or a disk file?
```c #include #include int main(void) { printf("If I had more time, \n"); write(STDOUT_FILENO, "I would have written you a shorter letter.\n", 43); return 0; } ``` I read that > I/O handling functions (`stdio` library functions) and system calls perform buffered operations for increased performance....
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  printf("If I had more time, \n");
  write(STDOUT_FILENO, "I would have written you a shorter letter.\n", 43);

  return 0;
}
I read that

> I/O handling functions (stdio library functions) and system calls perform buffered operations for increased performance.

The printf(3) function uses the stdio buffer in user space. The kernel also buffers I/O so that it does not have to write to the disk on every system call. By default, when the output file is a terminal, writes using printf(3) are line-buffered, i.e. when a newline character '\n' is found the buffer is flushed to the **Buffer Cache**. However, when the standard output is not a terminal, i.e. it is redirected to a disk file, the contents are only flushed when there is no more space left in the buffer (or when the file stream is closed).

If the standard output of the program above is a terminal, the first call to printf will flush its buffer to the *Kernel Buffer (Buffer Cache)* when it finds the newline character '\n'; hence the output appears in the same order as the statements above. However, if the output is redirected to a disk file, the stdio buffer is not flushed, and the contents of the write(2) system call hit the kernel buffers first, reaching the file before the contents of the printf call.

#### When stdout is a terminal
~~~
If I had more time, 
I would have written you a shorter letter.
~~~

#### When stdout is a disk file
~~~
I would have written you a shorter letter.
If I had more time, 
~~~

But my question is: how do the stdio library functions know whether stdout is directed to a terminal or to a disk file?
arka (253 rep)
Nov 8, 2022, 04:26 PM • Last activity: Nov 8, 2022, 07:55 PM
1 votes
2 answers
1078 views
Is there a way to force the write command to block until all bytes have been written?
[Per the man pages][1], the write command "writes up to count bytes" and then returns the actual number of bytes written. Thus if I wanted to ensure all bytes were written to the file descriptor, I would need to put the write within a loop and monitor that all bytes have been written. However, is th...
Per the man pages, the write system call "writes up to count bytes" and then returns the actual number of bytes written. Thus if I wanted to ensure all bytes were written to the file descriptor, I would need to put the write within a loop and monitor that all bytes have been written. However, is there a way to configure the file descriptor such that writes block until *all* bytes have been written? Edit: I'm writing to pipes, if that makes any difference.
Izzo (1013 rep)
Oct 20, 2022, 02:27 AM • Last activity: Oct 21, 2022, 08:03 AM
3 votes
0 answers
74 views
Minimizing disk usage with parallel calls to GCC
I am experimenting with testing GCC in parallel. My setup will run 96 tests before giving me the test report. If I run these tests sequentially it will invoke GCC once, run the executable, gather diagnostics and then repeat. However, when I try to run these tests in parallel the call to GCC takes mu...
I am experimenting with testing GCC in parallel. My setup will run 96 tests before giving me the test report. If I run these tests sequentially it will invoke GCC once, run the executable, gather diagnostics and then repeat. However, when I try to run these tests in parallel the call to GCC takes much more time. My profiler tells me that (when averaged over the 96 tests) the calls to GCC account for 2% of total execution time when running the 96 tests in sequence. My machine has 8 cores, and when I run the same profiler over my program, but with 8 available threads (12 tests on each thread), the calls to GCC account for 12% of the total time. I theorized that perhaps the OS in this case is a shared resource, and tried to tell GCC to output the executable to a tmpfs location, but that only got me down to 11% of total time spent there. Can anyone help guide me a bit here? I am not too familiar with how the file system and IO writes work on Linux (Ubuntu 20). I don't think my testing code is necessarily erroneous, but I'll include it anyway. It is written in Haskell. This is the function that takes 2% of all time when the tests are run sequentially, but (now) 11% when the tests are run in parallel on 8 threads (I do have 8 available cores).
create_test_executable :: String -> State -> String -> IO ()
create_test_executable p s path = do -- p = the c program, s = unused here, path = where to write executable
    let process = proc "gcc" ["-xc", "-"]
    x@(Just sin, Just sout, Just serr, _) <-
        createProcess process { std_in  = CreatePipe
                              , std_err = CreatePipe
                              , std_out = CreatePipe
                              , cwd     = Just path
                              }
    hPutStr sin p
    hFlush sin
    hClose sin
    o <- hGetContents sout
    length o `seq` return () -- fully force the output so the child's stdout pipe is drained
    cleanupProcess x
To clarify: My suspicion is that GCC will write the executable to the disk, and that this contention for the disk IO is what is making each individual test slower. I tried to remedy this by writing the executable to a tmpfs location, but that barely made any difference. The write to this location is specified by the path I am passing in as a parameter to the function above.
Rewbert (131 rep)
Oct 19, 2022, 11:13 AM
Showing page 1 of 20 total questions