
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

8 votes
3 answers
12014 views
Behaviour of the backspace on terminal
This is about the behaviour of the backspace (\b) character. I have the following C program:

int main() { printf("Hello\b\b"); sleep(5); printf("h\n"); return 0; }

The output on my terminal is Helho with the cursor advancing to the first position of the following line. First, the entire thing prints only after the 5-second sleep, so from that I deduced that the output from the kernel to the terminal is line buffered. So now, my questions are: 1. Since the \b\b goes back two spaces, to the position of the (second) l, then similar to how l was replaced by h, the o should have been replaced by \n. Why wasn't it? 2. If I remove the line printf("h\n");, it prints Hello and goes back two characters, without erasing. From other answers I gather this is because of a non-destructive backspace. Why is this behaviour different for input and output? That is, if I input something into the terminal (even to the very same program) and press Backspace, it erases the last character, but not for the output. Why? I'm on an Ubuntu system in the xterm terminal using bash, if that helps.
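The non-destructive behaviour can be seen from the shell as well: \b is just an ordinary byte (0x08) that the terminal driver passes through on output, and the terminal interprets it as "move the cursor left" without erasing anything. A small sketch:

```shell
# \b (0x08) only moves the cursor; by itself it erases nothing.
# Piping through od -c shows the raw bytes the terminal receives:
printf 'Hello\b\bh\n' | od -An -c

# To destructively erase a character, programs conventionally emit
# backspace, space, backspace (which is what the line editor does on input):
printf 'Hello\b \b\n'
```

The input case differs because, in canonical mode, the kernel's line discipline handles the erase character itself and repaints the line; on output, the bytes go to the terminal unmodified.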
forumulator (213 rep)
Jan 1, 2018, 05:43 PM • Last activity: Apr 17, 2024, 11:32 AM
0 votes
1 answer
321 views
How to pipe STDIO from a thread process to /dev/null?
I am trying to run Plarium Play with wine , but have encountered an odd issue. When trying to launch it from a regular desktop entry, I get this JavaScript error: error 0 This does not happen if I launch from the terminal. If I try and launch it from a desktop entry, even one piping to /dev/null, after first login, without starting Plarium Play from a terminal first, I get two JavaScript errors, one after the other: error 1 error 2 In both cases, after dismissing them, the splash hangs forever. This does not happen if I launch Plarium Play from the command line first, then use the modified desktop entry (with an stdio pipe to /dev/null) for later launches. Notably, the program launched from terminal keeps running even if I press Ctrl+C or close the terminal, and later startups with the piped desktop entry are faster than the initial terminal launch, so I assume it is starting a background process. Also notably, if I try to launch Plarium Play after getting the aforementioned launch errors without logging out, it thinks that an instance of Plarium Play is running, and exits immediately. I cannot launch Plarium Play from the desktop entry, even if I did start the service from the terminal, unless I modify it to at least pipe regular stdio somewhere. If I try with an unmodified desktop entry, I get a similar JavaScript error to the ones above: error 3 Notably, I can still relaunch Plarium Play after this last error without logging out, so long as I do it from terminal or pipe the output. My conclusion is that the initial stages of the program to call up an existing instance of the background service NEED to debug somewhere, as do initial stages of the service startup, because of a limitation in JavaScript and Electron (I know little about either). The thing is, the service startup is of course sent to a separate thread, and as such won't use a regular > pipe from the launch command, although it will successfully find a regular terminal's stdio. 
However, configuring the desktop file to execute in terminal does not fix the problem for some reason. Note that any successful launch whatsoever brings up the Electron GUI of Plarium Play, as it is meant to. Assuming this conclusion is correct (please tell me if you think it isn't), what can I do to also pipe the attempted stdio access of the server startup thread to /dev/null (or anywhere), so that I do not have to launch Plarium Play from the command line first every time I log in?
TheLabCat (133 rep)
Jun 9, 2023, 09:36 PM • Last activity: Jul 7, 2023, 06:35 PM
-1 votes
1 answer
138 views
Can I use KVM without ioctl?
Recently I've discovered that /dev/kvm doesn't seem to implement functionality for read() or write(), and any attempt to invoke them always results in Error 22 (Invalid argument). I'm trying to avoid using ioctl calls, and am wondering if it's possible to use KVM at all if I were to completely remove ioctl support from the kernel. How would I invoke access to KVM without ioctl?
Tcll (201 rep)
May 28, 2023, 12:47 AM • Last activity: May 28, 2023, 10:49 AM
1 vote
1 answer
524 views
Which commands need prefixing by "stdbuf"?
When I have a long-running Bash pipeline of commands, I often can't see any signs of life due to I/O buffering. I found online that buffering can be disabled using stdbuf. An example shown [here](https://linux.die.net/man/1/stdbuf) is:

tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq

However, it is unclear to me which commands in the pipeline need to be prefixed by the stdbuf command. I therefore add it to every command. For no buffering, I might do:

cd ~/tmp
stdbuf -i0 -o0 -e0 find /i \! -type d | \
stdbuf -i0 -o0 -e0 sed -u -n -e 's=.*\&1 | stdbuf -i0 -o0 -e0 tee find.out

This makes my code very noisy in a cognitive sense. How do I decide which commands need prefixing with stdbuf?
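The deciding factor is where each command's stdout goes: glibc fully buffers stdout when it is a pipe, so only commands writing *into* a pipe benefit from stdbuf; the last command in the pipeline writes to the terminal and is line-buffered already. A minimal, testable sketch of the man-page example, with synthetic input standing in for access.log:

```shell
# cut's stdout feeds uniq through a pipe, so without stdbuf it would be
# fully buffered; -oL forces line buffering back on for that stage only.
printf 'alice 1\nbob 2\nbob 3\n' |
  stdbuf -oL cut -d ' ' -f1 |
  uniq
```

(stdbuf has no effect on commands that manage their own buffers, such as tee with -p or anything calling setvbuf itself.)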
user2153235 (467 rep)
Jan 5, 2023, 10:01 PM • Last activity: Jan 5, 2023, 11:11 PM
0 votes
1 answer
191 views
Unexpected behavior of linux specific getline() function in C
#include <stdio.h>
#include <stdlib.h>
#define MAXLEN 1024

void reverse(FILE *, FILE *);

int main(int argc, char ** argv)
{
  ...
  reverse(fptr, stdout);
  ...

  return 0;
}

void reverse(FILE * instream, FILE * outstream)
{
  char ** buf;
  char * lbuf;
  int counter, i;
  size_t slen;

  counter = 0;

  buf = malloc(MAXLEN * sizeof(char));
  if (buf == NULL)
  {
    fputs("malloc failed\n", stderr);
    exit(EXIT_FAILURE);
  }

  lbuf = NULL;
  slen = 0;
  while (getline(&lbuf, &slen, instream) != -1)
  {
    buf[counter++] = lbuf;
    lbuf = NULL;
    slen = 0;
  }
  free(lbuf);

  for (i = counter - 1; i >= 0; i--)
  {
    fputs(buf[i], outstream);
    free(buf[i]);
  }

  free(buf);
}
I have written this function to print a file in reverse order, so for input like
Hello World
How are you ?
what are you doing ?
the output should look like this:
what are you doing ?
How are you ?
Hello World
But there is unexpected behavior. I am using the Linux-specific stdio function getline() to read arbitrarily long lines. The problem is that if the value of MAXLEN is 4, 5, 6, and so on, the program aborts with a double free or corrupted-address error, particularly in the free() function, while printing the output. I looked into the addresses returned by the getline() function; it seems that after about 4 iterations the address of the first buffer returned by getline() gets corrupted. If MAXLEN is 2, or big enough like 1024, the problem doesn't happen. **Sample input** which I took:
When to the sessions of sweet silent thought
I summon up remembrance  of things past,
I sigh the lack of many a thing I sought,
And with old woes new wail my dear time's waste:
Then can I drown an eye, unused to flow,
For precious friends hid in death's dateless night
And weep afresh love's long since cancell's woe,
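Two notes for checking this. First, the buffer of line pointers is allocated as MAXLEN * sizeof(char) rather than MAXLEN * sizeof(char *), which is too small as soon as MAXLEN pointers no longer fit in MAXLEN bytes — a likely source of the corruption described above. Second, the expected output can be cross-checked against tac(1), which performs exactly this line reversal:

```shell
# tac reverses the order of lines -- the behaviour the C function is
# meant to implement -- so it serves as a reference implementation:
printf 'Hello World\nHow are you ?\nwhat are you doing ?\n' | tac
```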
arka (253 rep)
Nov 13, 2022, 06:51 PM • Last activity: Nov 14, 2022, 10:05 AM
1 vote
1 answer
281 views
How `stdio` recognizes whether the output is redirected to the terminal or a disk file?
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  printf("If I had more time, \n");
  write(STDOUT_FILENO, "I would have written you a shorter letter.\n", 43);

  return 0;
}
I read that

> I/O handling functions (stdio library functions) and system calls perform buffered operations for increased performance. The printf(3) function uses a stdio buffer in user space. The kernel also buffers I/O so that it does not have to write to the disk on every system call.

By default, when the output file is a terminal, writes using the printf(3) function are line-buffered, i.e. when a newline character '\n' is found the buffer is flushed to the **Buffer Cache**. However, when the output is not a terminal, i.e. standard output is redirected to a disk file, the contents are only flushed when there is no more space in the buffer (or the file stream is closed). If the standard output of the program above is a terminal, then the first call to printf will flush its buffer to the *Kernel Buffer (Buffer Cache)* when it finds the newline character '\n'; hence, the output is in the same order as the statements above. However, if the output is redirected to a disk file, then the stdio buffer is not flushed, and the contents of the write(2) system call hit the kernel buffers first, causing them to be written before the contents of the printf call.

#### When stdout is a terminal
~~~
If I had more time,
I would have written you a shorter letter.
~~~

#### When stdout is a disk file
~~~
I would have written you a shorter letter.
If I had more time,
~~~

But my question is: how do the stdio library functions know whether stdout is directed to a terminal or to a disk file?
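In glibc, stdio decides at first use of the stream by calling isatty(3) on the underlying descriptor (an ioctl that succeeds only on terminal devices). The shell exposes the same check as test -t, which makes the distinction easy to observe: inside a command substitution, stdout is a pipe rather than a terminal:

```shell
# [ -t 1 ] is the shell-level analogue of isatty(STDOUT_FILENO).
# Inside $(...) the subshell's stdout is a pipe, so the check reports
# "redirected" here:
r=$(if [ -t 1 ]; then echo terminal; else echo redirected; fi)
echo "$r"
```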
arka (253 rep)
Nov 8, 2022, 04:26 PM • Last activity: Nov 8, 2022, 07:55 PM
0 votes
0 answers
107 views
How to access kallsyms from outside operational system (edk2 SMM driver)?
I'm using EDK2 to write a System Management Mode (SMM) driver. I think it uses "Pure C", given the fact that I'm not able to use the standard C library like stdio. Even if I #include <stdio.h>, it throws me an error, undefined reference to "fopen", when I use any function like fopen("/proc/kallsyms", "rb"). In my understanding, this SMM driver (btw I'm writing code inside PiSmmCore.c) doesn't run on top of the OS; it runs at a different layer (correct me if I'm wrong, please). So given that context, if I can't use fopen, fread etc., how can I access files like /proc/kallsyms? Any help would be appreciated (even if it's just to say "hey man, you're wrong in your assumptions, try reading this article" or something). Thank you!
Allan Almeida (13 rep)
Oct 26, 2022, 07:47 PM
0 votes
0 answers
16 views
How to escape standard input when "cat f - g"
man cat shows the following example: cat f - g Output f's contents, then standard input, then g's contents. How do you escape the standard input and proceed to output the 2nd file 'g'?
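For context: `-` makes cat copy standard input until end-of-file; at an interactive terminal you signal EOF with Ctrl+D at the start of a line, after which cat moves on to g. Non-interactively, the input stream closing has the same effect, as this sketch shows (the files f and g from the man page are stood in by a temp directory):

```shell
tmp=$(mktemp -d)
printf 'first\n' > "$tmp/f"
printf 'last\n'  > "$tmp/g"

# When the pipe feeding '-' reaches EOF, cat continues with the next operand:
out=$(printf 'middle\n' | cat "$tmp/f" - "$tmp/g")
echo "$out"
rm -r "$tmp"
```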
MattP (111 rep)
Oct 17, 2022, 10:19 PM
1 vote
1 answer
522 views
*nix shell: How to disable pipe buffering for ALL pipes in a command?
I want *every* pipe to be unbuffered, so I don't have to type stdbuf -oL for every piped command. When concocting commands with multiple pipes, it would be nice if there were an environment variable or something to enable it globally, or at least for the remainder of the pipes in the command. Yes, I know about unbuffer and stdbuf, but they need to be invoked for every pipe... I'm trying to save typing because I do this often. Something like: before:
stdbuf -oL command_1 | stdbuf -oL command_2 | stdbuf -oL command_3
after:
BUFFERING=-oL command_1 | command_2 | command_3
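To my knowledge there is no such global switch (stdbuf works per-process, by preloading libstdbuf into the command it launches), but one low-typing workaround is a short wrapper function; the name `lb` below is made up:

```shell
# Hypothetical helper: runs any command with line-buffered stdout.
lb() { stdbuf -oL "$@"; }

printf 'x 1\ny 2\n' | lb cut -d ' ' -f1 | lb tr 'a-z' 'A-Z'
```

This saves most of the typing while keeping it explicit which stages have their buffering overridden.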
Aaron Frantisak (111 rep)
Jun 23, 2022, 11:22 PM • Last activity: Aug 9, 2022, 11:07 PM
0 votes
2 answers
310 views
Unexpected expect/ssh question
I am seeking to automate ssh password based logins (and a series of actions after logging in). I am aware that the ssh password prompt bypasses STDIN. To that end I put together a quick expect script.
spawn ssh -o StrictHostKeyChecking=No $USERNAME@$HOST

expect {
  timeout { send_user "\nFailed to get password prompt\n"; exit 1 }
  eof { send_user "\nSSH failure for $HOST\n"; exit 1 }
  "*assword"
}

send "Pasword123\r"

expect {
  timeout { send_user "\nLogin failed. Password incorrect.\n"; exit 1}
  "*\$ "
}
sleep 1
send "echo 002-READY\r"
interact
This appeared to work as I expected. But when I feed further commands into STDIN of the running script after 'interact' they don't seem to arrive in the ssh session, e.g.
$ cat feed.sh
#!/bin/bash

sleep 3; echo "cat /etc/hosts" 

$ ./feed.sh | ./ssh_expect_script
However it does detect the EOF and terminates the session. (Please don't tell me the solution is to use key-pairs; there are reasons why interactive passwords are a specific constraint.) How do I get input routed via Expect to the remote session? Alternatively, how can I send the password to ssh? (Regarding the alternate question, I'm using PHP as the controlling mechanism. As a last resort, I could dynamically generate the whole expect script. I tried writing to the tty directly, but that just comes back on my screen. I also looked at ssh-askpass, but the documentation only mentions passphrases / I can't find a version that doesn't rely on having a desktop environment running.)
symcbean (6301 rep)
Jan 15, 2021, 01:51 PM • Last activity: Jan 15, 2021, 05:47 PM
3 votes
1 answer
781 views
Additional file descriptor for debugging and piped output (logging, metrics, etc)
For a bash script project, I write human-readable log info to stdout/stderr. Additionally, I want to write formatted metrics to a third stream that will be discarded by default but can be redirected for piped processing. Is doing this with an additional file descriptor an advisable approach?
exec 3> /dev/null
echo "This is stdout"
echo "This is stderr" >&2
echo "This is fd3" >&3
I'm fine with the third line not showing up under normal conditions. However, when used in a certain toolchain I want to pipe these messages. Simple example:
$ bash example.sh 3>&1
This is stdout
This is stderr
The third line does not appear in the console output. What am I doing wrong? Is there a solution to this? Is another approach advised?
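One plausible explanation: the script's own `exec 3> /dev/null` runs unconditionally, so it replaces whatever fd 3 the caller passed in with `3>&1`. A sketch of a guard that only installs the /dev/null fallback when fd 3 is not already open (the probe `true >&3` fails if the descriptor is closed):

```shell
# Hypothetical fixed script body, stored in a variable so both call
# styles can be demonstrated side by side:
script='
if ! { true >&3; } 2>/dev/null; then
  exec 3> /dev/null   # fall back only when the caller gave us no fd 3
fi
echo "This is fd3" >&3
'
bash -c "$script" 3>&1   # caller wires fd 3 to stdout: message appears
bash -c "$script"        # no fd 3 from caller: message is discarded
```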
Mazzen (33 rep)
Nov 7, 2020, 11:12 AM • Last activity: Nov 7, 2020, 01:12 PM
1 vote
1 answer
227 views
Did fwrite/fread(3) offer different "multiple items" behavior on different platforms historically?
The fread(3) and fwrite(3) functions have an extra parameter for a variable number of items, so a typical write often has a hardcoded count when all it has is a char buffer to begin with, e.g. fwrite(data, len, 1, stdout). What is the point of this parameter? Was this always just a convenience "let the system do the multiplication" thing, kind of like calloc(3), or did some historical operating systems and/or storage devices have special handling for individual items written? Fueling my curiosity here is that I stumbled across some IBM z/OS documentation for [their fwrite()](https://www.ibm.com/support/knowledgecenter/SSLTBW_2.2.0/com.ibm.zos.v2r2.bpxbd00/fwrite.htm) which makes something of a distinction between "record I/O output" and "block I/O output" and talks about how each item could get truncated past a certain length — making me wonder if the item count parameter used to map to, say, separate physical punchcards — or at least maybe data could get written behind the scenes using ASCII "record separator" characters or whatever. For contrast, the [POSIX fwrite spec](https://pubs.opengroup.org/onlinepubs/9699919799/functions/fwrite.html) just outright says: > For each object, size calls shall be made to the fputc() function, taking the values (in order) from an array of unsigned char exactly overlaying the object. Which invites the question of why fwrite didn't just take in a const uint8_t* buffer and an overall size_t total length in the first place, like write(2) does.
natevw (194 rep)
Oct 23, 2020, 09:14 PM • Last activity: Oct 24, 2020, 04:56 PM
1 vote
2 answers
300 views
How to read server stdout and continue only after message is outputted
Say I have a simple Node.js server like:
const http = require('http');

const server = http.createServer((req,res) => res.end('foobar'))

server.listen(3000, () => {
   console.log(JSON.stringify({"listening": 3000}));
});
and then with bash:

#!/usr/bin/env bash
node server.js | while read line; do
  if [[ "$line" == '{"listening":3000}' ]]; then
    :
  fi
done

# here I want to process more crap

my goal is to only continue the script after the server has started actually listening for requests. The best thing I can come up with is this:

#!/usr/bin/env bash
mkfifo foo
mkfifo bar

(node server.js &> foo) &

(
  while true; do
    cat foo | while read line; do
      if [[ "$line" != '{"listening":3000}' ]]; then
        echo "$line"
        continue;
      fi
      echo "ready" > bar
    done
  done
) &

cat bar && {
  # do my thing here?
}

is there a less verbose/simpler way to do this? I just want to proceed only when the server is ready, and the only good way I know of doing that is to use stdout to print a message and listen for that.
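A less verbose pattern is to read the server's stdout only until the ready line appears, with no FIFOs: put the "wait" loop on the reading side of a single pipe and let the pipeline's exit status say whether the message arrived. Sketched with a shell function standing in for `node server.js`:

```shell
# mock_server stands in for `node server.js` here.
mock_server() { printf 'starting up\n{"listening":3000}\n'; }

wait_ready() {             # consume lines until the ready message
  while IFS= read -r line; do
    [ "$line" = '{"listening":3000}' ] && return 0
  done
  return 1                 # EOF before the message: server never came up
}

mock_server | wait_ready && echo "server is ready"
```

With a real long-running server you would start it in the background and read from a process substitution (`wait_ready < <(node server.js)`), so returning from the loop doesn't close the server's stdout and provoke a SIGPIPE.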
user393961
Apr 30, 2020, 04:55 AM • Last activity: Apr 30, 2020, 02:41 PM
8 votes
3 answers
7205 views
Communicate backwards in a pipe
I have a simple pipeline: node foo.js | node bar.js bar.js will read from stdin to get data from foo.js. But what I want to do is ensure that bar.js gets one of the last messages from foo.js before foo.js decides it's OK to exit. Essentially I want to create a simple request/response pattern. > foo writes to stdout --> bar reads from stdin --> how can bar send a > message back to foo? Is there a way to communicate backwards in a pipeline or should there never be a need to do that?
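A pipe is strictly one-way, so a reply needs a second channel; a named FIFO is the classic choice. A sketch of the request/response pattern with shell functions standing in for foo.js and bar.js:

```shell
tmp=$(mktemp -d)
mkfifo "$tmp/reply"                # the backwards channel

producer() {                       # stands in for foo.js
  echo request                     # normal forward message via the pipe
  IFS= read -r ack < "$tmp/reply"  # block until the consumer responds
  echo "$ack" > "$tmp/result"      # record the reply before exiting
}

consumer() {                       # stands in for bar.js
  IFS= read -r msg
  echo "ack:$msg" > "$tmp/reply"   # respond on the FIFO
}

producer | consumer
result=$(cat "$tmp/result")
echo "$result"
rm -r "$tmp"
```

In Node.js the equivalent would be opening the FIFO path (or any second fd passed to the child) for the return direction; the forward pipe stays as-is.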
Alexander Mills (10734 rep)
Nov 13, 2017, 05:14 PM • Last activity: Feb 5, 2020, 06:30 PM
3 votes
2 answers
1290 views
Appending both stdout and stderr to file
I have this:

nohup_ntrs(){
  nohup_file="$HOME/teros/nohup/stdio.log"
  mkdir -p "$(dirname "$nohup_file")";
  echo " ------------- BEGIN $(date) -------------- " >> "$nohup_file"
  nohup "$@" &>> "$nohup_file"
  echo " ------------- END $(date) -------------- " >> "$nohup_file"
}

but this syntax is wrong:

nohup "$@" &>> "$nohup_file"

I was trying to use the >> operator for append but also send stderr there. My guess is that the only way to do this is this:

nohup "$@" 2>&1 >> "$nohup_file"

is there a less verbose way to do this?
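Two details worth noting: `&>>` is in fact valid in bash 4 and later and appends both streams, while the guessed `2>&1 >> file` has the redirections in the wrong order (it points stderr at the *old* stdout before stdout is moved). The portable spelling redirects stdout first, then duplicates stderr onto it:

```shell
log=$(mktemp)
# Portable: point stdout at the log first, then make stderr a copy of it.
{ echo out; echo err >&2; } >> "$log" 2>&1
cat "$log"
```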
Alexander Mills (10734 rep)
Aug 7, 2019, 05:50 PM • Last activity: Aug 7, 2019, 07:46 PM
0 votes
0 answers
36 views
How to determine if a process is part of a pipeline (producer in pipeline)?
So if I am in my terminal and run: $ my_proc | grep foo then in my_proc I know it's part of a pipeline because stdout is not attached to the terminal but to grep. On the other hand, if I just run this: $ my_proc in the terminal, then stdout is attached to the TTY, so I know that it's not part of a pipeline. However, if I run my_proc programmatically outside of the context of a terminal, how can I know if it's part of a pipeline or not? Or perhaps a better way to phrase the question: is there a way to discover what device/process is listening to the stdio from my_proc?
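On Linux, /proc answers the second phrasing directly: readlink on /proc/&lt;pid&gt;/fd/1 names the object stdout is attached to — `pipe:[inode]`, a socket, a tty device path, or a regular file. Inside a command substitution stdout is a pipe, which makes the check easy to demonstrate (Linux-specific sketch):

```shell
# /proc/self/fd/1 is a symlink describing whatever stdout points at.
target=$(readlink /proc/self/fd/1)   # inside $(...), this is a pipe
echo "$target"
case $target in
  pipe:*) kind="pipeline" ;;
  /dev/*) kind="terminal or device" ;;
  *)      kind="regular file or other" ;;
esac
echo "$kind"
```

The pipe's inode can then be matched against other processes' fd listings to find who holds the read end.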
user356473
Jul 18, 2019, 06:35 AM • Last activity: Jul 18, 2019, 10:10 AM
1 vote
1 answer
94 views
Writing and Executing Program to behave like console
I have written a set of programs with the intent of using a radio transmitter-receiver (NRF24L01) to connect two devices as if they were connected via a serial interface. Currently, I am able to send bash commands in one direction, let's say from device A to B. My A device is currently an AVR microcontroller. My B device is a Raspberry Pi. I use the following command to pipe the received text to bash. This allows commands to be sent, but not for their output to be sent back to the A device. ./program | bash I am not sure how to pipe the output from bash back into my program in a way that will not block and prevent the program from reacting to received data. If it is possible to set up a pipe in both directions, I still do not think I can use functions like fgets, as they are blocking. Both devices share the same library for transmit and receive functionality; these transmit and receive functions can be called with an option to make them non-blocking.
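For the shell side of this, attaching a program to both ends of bash is what a coprocess does: bash's coproc gives the script a write fd feeding the child's stdin and a read fd on its stdout, and neither direction blocks until you actually read. A sketch (assuming bash ≥ 4; in the real setup, `./program` would own these descriptors instead of the outer script):

```shell
coproc SH { bash; }                   # bash with both ends piped to us
pid=$SH_PID
echo 'echo $((6 * 7))' >&"${SH[1]}"   # send a command down its stdin
IFS= read -r result <&"${SH[0]}"      # read the command's output back
echo "$result"
eval "exec ${SH[1]}>&-"               # EOF on its stdin so it exits cleanly
wait "$pid" 2>/dev/null || true
```

The blocking concern is then handled by only calling read when the radio library has delivered a complete line, or by polling the read fd with `read -t 0`.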
fisherdog1 (23 rep)
Jun 7, 2019, 04:24 PM • Last activity: Jun 7, 2019, 05:03 PM
4 votes
1 answer
85 views
Determine if producer is outpacing consumer in pipeline
If I have: node foo.js | node bar.js is there a way to determine if there is a queue in between them that's building up? in other words if the producer is outpacing the consumer w.r.t. stdio?
Alexander Mills (10734 rep)
Apr 26, 2018, 11:38 PM • Last activity: Apr 27, 2018, 09:02 AM