Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
2
answers
203
views
Is there any command line tool for purchasing anything online?
I continue to pursue ways to do everything from the command line, and while this does not seem common at all, I am curious whether there is a single example of a command line tool that lets someone purchase something over the internet, make a payment, and expect the delivery of said goods. I don't mean using a terminal browser on a website or something, but an actual command line application.
Thank you.
Julius Hamilton
(159 rep)
Jan 10, 2023, 09:24 PM
• Last activity: May 9, 2025, 03:42 AM
31
votes
3
answers
2943
views
Why is sort -o useful?
UNIX philosophy says: do one thing and do it well. Make programs that handle text, because that is a universal interface.
The `sort` command, at least GNU sort, has an `-o` option to output to a file instead of `stdout`. Why is, say, `sort foobar -o whatever` useful when I could just `sort foobar > whatever`?
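A minimal illustration of one case where the two forms differ (my sketch, not part of the question): sorting a file in place. The shell truncates the redirection target before `sort` ever runs, while `-o` lets `sort` read all of the input before it opens the output.

```
printf 'b\na\nc\n' > data

sort data > data      # the shell truncates "data" first: sort reads an empty file
wc -c data            # 0 bytes - the data is gone

printf 'b\na\nc\n' > data
sort data -o data     # sort reads everything, then writes the result back
cat data              # a, b, c on separate lines
```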
strugee
(15371 rep)
Oct 6, 2013, 04:40 AM
• Last activity: Mar 1, 2025, 11:01 AM
0
votes
1
answers
92
views
GNU assembler alternatives
I am trying to build my system from scratch. Since I really like the idea of the atomicity of each program in the Unix-like approach, I would like to preserve it as much as possible in my build.
Since GNU binutils, in a way, violates this principle, I would like to know: is there a standalone GNU assembler that would not be dependent on the rest of binutils?
If not, are there any minimal and performant alternatives to the GNU assembler?
I know of yasm, but in case there is a better one, I would like to know about it.
Thank you in advance.
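As a concrete point of reference (my sketch, not part of the question): yasm can be used standalone to assemble NASM-syntax source, but producing an executable still needs a linker, which on most systems is binutils' `ld` unless something like LLVM's `lld` is substituted.

```
# Minimal x86-64 Linux "exit 0", assembled with yasm instead of GNU as.
cat > exit.asm <<'EOF'
global _start
section .text
_start:
    mov rax, 60        ; sys_exit
    xor rdi, rdi       ; status 0
    syscall
EOF

yasm -f elf64 exit.asm -o exit.o
ld exit.o -o exit      # the linker is still a separate program (binutils or lld)
./exit; echo "$?"      # prints 0
```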
Даниил Носиков
(5 rep)
Jan 20, 2025, 08:08 AM
• Last activity: Jan 20, 2025, 06:20 PM
0
votes
1
answers
130
views
Can bash natively reformat a relative path to an absolute path, or is it dependent on the tool "realpath"?
**UPDATE** Once again, I can't post an answer to my question! Clicking the button just triggers an error message: "Unable to load popup - please try again". So, I have no other choice than to share the solution like this:
="token="0"; if [[ "$input_path" =~ /$ ]]; then token="1"; input_path="${input_path%/}"; fi; cd "${input_path%/*}"; printf "$PWD/"; if [[ "$token" -eq 1 ]]; then printf "${input_path##*/}/"; else printf "${input_path##*/}"; fi
"
Evaluation:
0:1 - bash exhaustion : unix philosophy
If coding thoroughly is an ambition, the bash-exhaustion way just proves that in this case it's far better to rely on the unix philosophy, here meaning application of the tool `realpath`, instead of replicating what it does with bash. You see, it works, but it's a mess: very complicated and a lot more code.
----
Use the tool `find`, from a certain working directory, setting the scope with a relative path, to search for any files, and apply `-exec realpath` (which will have the same working directory as `find`, unless `-execdir` is used) on the resulting matches - relative paths - and the output will be that same list of relative paths, but transformed into absolute paths.
Out of curiosity: Can bash do this alone?
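For comparison, a minimal pure-bash sketch of my own (not the asker's one-liner), assuming the path exists; unlike `realpath` it does not canonicalise symlinks.

```
# Convert a relative path to an absolute one using only shell builtins.
abspath() {
    local p=$1 dir base
    if [[ -d $p ]]; then
        (CDPATH= cd -- "$p" && pwd)
    else
        dir=${p%/*} base=${p##*/}
        [[ $p != */* ]] && dir=.        # plain "file" lives in the current dir
        (CDPATH= cd -- "$dir" && printf '%s/%s\n' "$PWD" "$base")
    fi
}

abspath ../notes.txt    # e.g. /home/user/notes.txt
```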
futurewave
(213 rep)
May 10, 2024, 07:28 PM
• Last activity: May 11, 2024, 05:37 PM
-2
votes
1
answers
97
views
Why is it important for software to do one thing and do it well?
Actually, what makes a piece of software bad if it does most of the things in a system and does all of those jobs well? Why does the UNIX philosophy prefer "do one thing and do it well" as the standard for its programs?
Hussien
(5 rep)
Apr 23, 2024, 09:00 AM
• Last activity: Apr 23, 2024, 09:44 AM
2
votes
1
answers
2979
views
What's the Unix way of handling split-tunnels
I want to be able to control, on my servers, which programs are connected to the regular internet and which ones are only able to communicate through a VPN, in the most minimal, versatile and secure way.
## Current Setup
I have a server, a Raspberry Pi 4B with 1 GB of RAM, connected to my router. The ports I have forwarded to my RPi are one for a WireGuard interface, *wg0*, which I use for all of my personal devices, and a port for Transmission peers.
I'm running a few services on my RPi, some of which I would like to be routed through a VPN (e.g. Searx, Transmission), others to be kept bound to the *wg0* interface (e.g. Jellyfin), and in the future some which I would like to make public by exposing their ports on my RPi (e.g. a Minecraft server).
Currently I'm using *UFW* as my firewall, which doesn't block anything coming from *wg0*, and which leaves open the port for that same WireGuard interface as well as the peer port for Transmission.
## My Goal
I would like to find an elegant solution that allows me to configure which programs go which way, so that I could connect to my server via my *wg0* interface wherever I am, while some programs would communicate with the outside world through the VPN.
## Research Done
So far I know that Mullvad VPN can be used as a simple Wireguard interface - [source from Archwiki](https://wiki.archlinux.org/title/Mullvad) - and I would imagine that the same would hold true for any other VPN providers.
I also know that some programs, such as Searx, offer the possibility of binding outgoing requests to a given interface (by configuring `source_ips`) and of binding the listening address to a different interface (by configuring `bind_address`), which means I could redirect the outgoing requests through the VPN and still access the service from my *wg0* network.
It seems that most VPN clients already support this split-tunnel feature for an arbitrary program, which means that they can do something of this sort somehow (taking a given program off of the VPN connection)! And because they are open source, the solution should be known.
## My Questions/Concerns
Now let's say that I do exactly that, binding outbound Searx traffic to the VPN and its web interface to *wg0*. This leads me to question 1: how would DNS work in this situation? Would all the DNS requests made by Searx be routed through the VPN? Would that be automatic, manually configured, or perhaps not supported at all?
I also know that Docker is the cool kid in town that allows users to sandbox applications in a pre-configured environment, which would allow me to force the network connection for a given container. This leads me to question 2: should I be using Docker in this situation even though I'm on an RPi?
Due to performance, and out of minimalism, I would like to keep Docker out of the question unless I really have to, which leads me to question 3: if a program doesn't offer any such configuration, is it still possible to route it through a given interface? This seems like something that Linux does behind the scenes; if we have multiple interfaces that connect to the internet, I would imagine it's the OS that decides which interface a program uses, not the program itself. Who's responsible for that?
Obviously, knowing that VPN clients do this, question 3 doesn't make that much sense; it does seem to be possible, I'm just not sure how.
Finally, a more generic concern as question 4: does the program have to be aware of its IP address? Searx asks you to specify an outgoing IP, not an outgoing interface. I would imagine that most VPNs have fixed IPs, and as such this wouldn't be a problem anyway, right?
## Small Notes
As I said I have heard of Docker, but I'm trying to avoid it.
I also heard about iptables, but I don't know how to use it, and I know that it conflicts with UFW.
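One answer-shaped sketch for question 3 (my illustration, not from the question): a program can be pinned to a VPN interface without its cooperation by running it in a network namespace whose only route is the WireGuard tunnel. The interface name `wg-vpn`, the address and `some-program` are placeholders.

```
ip netns add vpnonly                     # a namespace with no connectivity yet

ip link add wg-vpn type wireguard        # create the tunnel interface...
ip link set wg-vpn netns vpnonly         # ...and move it into the namespace

# Configure it inside the namespace (keys/peers via `wg setconf`, not shown):
ip -n vpnonly addr add 10.64.0.2/32 dev wg-vpn
ip -n vpnonly link set lo up
ip -n vpnonly link set wg-vpn up
ip -n vpnonly route add default dev wg-vpn

# Anything started this way can only ever reach the network through the VPN:
ip netns exec vpnonly some-program
```

DNS for that namespace can be pinned with a per-namespace `/etc/netns/vpnonly/resolv.conf`, which also touches on the DNS concern in question 1.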
José Ferreira
(63 rep)
Jun 13, 2021, 11:29 PM
• Last activity: Apr 2, 2024, 02:07 PM
5
votes
1
answers
1318
views
Do other Unix-like kernels have stable syscall ABIs?
Linux has a stable syscall ABI, but NT doesn't; Windows only ensures that the Win32 ABI is stable, and Win32 calls do not trap into kernel space directly. Lower-level components of Windows, such as ntdll.dll, may change between Windows updates or Windows editions.
I want to know, for other kernels such as the FreeBSD kernel or Mach: do they provide stable syscalls, or do they just provide a stable ABI at the POSIX interface?
炸鱼薯条德里克
(1435 rep)
Oct 4, 2018, 04:46 AM
• Last activity: Nov 4, 2023, 10:50 PM
14
votes
2
answers
18343
views
Why is the primary admin UID 501?
I understand* the primary admin user is given a user ID of `501` and subsequent users get incremental numbers (`502`, `503`, …). But why `501`? What’s special about `50x`, what’s the historical/technical reason for this choice?
\* I started looking into this when I got curious as to why my external hard drive had all its trashed files inside `.Trashes/501`. My search led me to the conclusion `501` is the user ID for the primary admin in *nix systems (I am on macOS), but not *why*.
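A small check of where ordinary user IDs start on a given system (my addition; it assumes the stock `/etc/login.defs` on Linux and `dscl` on macOS):

```
# Many Linux distributions start regular users at 1000 (system accounts sit below):
grep -E '^UID_MIN' /etc/login.defs

# macOS gives the first user created by Setup Assistant UID 501; list users and IDs:
dscl . -list /Users UniqueID | sort -k2 -n | tail
```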
user137369
(584 rep)
Apr 14, 2017, 11:27 AM
• Last activity: Aug 16, 2023, 02:15 PM
57
votes
3
answers
19714
views
A layman's explanation for "Everything is a file" — what differs from Windows?
I know that "Everything is a file" means that even devices have their filename and path in Unix and Unix-like systems, and that this allows for common tools to be used on a variety of resources regardless of their nature. But I can't contrast that to Windows, the only other OS I have worked with. I...
I know that "Everything is a file" means that even devices have their filename and path in Unix and Unix-like systems, and that this allows for common tools to be used on a variety of resources regardless of their nature. But I can't contrast that to Windows, the only other OS I have worked with. I have read some articles about the concept, but I think they are somewhat uneasy to grasp for non-developers. A layman's explanation is what people need!
For example, when I want to copy a file to a CF card that is attached to a card reader, I will use something like:

```
zcat name_of_file > /dev/sdb
```

In Windows, I think the card reader will appear as a drive, and we will do something similar. So, how does the "Everything is a file" philosophy make a difference here?
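A small illustration of what the slogan buys you (my sketch; the device names are only examples): the same generic byte-oriented tools work on regular files, device nodes and kernel interfaces alike, because all of them are opened, read and written through the same file API.

```
head -c 16 /dev/urandom | xxd       # read bytes from a device node like any file
wc -l /proc/cpuinfo                 # kernel state exposed as a readable file
dd if=/dev/sdb of=card.img bs=4M    # image a whole block device with a file tool
cmp card.img /dev/sdb               # ...and compare the device against the image
```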
Mohamed Ahmed
(1393 rep)
Jul 6, 2014, 05:03 PM
• Last activity: Apr 25, 2023, 08:38 PM
-4
votes
3
answers
849
views
coreutils ls summary
Why is there no summary option in coreutils `ls` command, like MS-DOS/Windows has?
With summary option I mean:
count the *files* and *dirs* and sum up their *sizes*.
*Update:*
It should read: "*Even* DOS/Windows has one."
It's:
command.com vs. sh
cmd.exe vs. bash
with clear points for the latter.
But for some reason, and that is the question, Linux/Unix has *no summary* in the directory listing.
And instead of fixing that, statements go out that this is right, the right thing to do, and "well done"... Only after that do threads explode with solutions that fill this gap by scripting!
It seems to me a good example of the [**X-Y Problem**](https://mywiki.wooledge.org/XyProblem):
- User wants to do X.
- User doesn't know how to do X, but thinks they can fumble their way to a solution if they can just manage to do Y.
- User doesn't know how to do Y either.
- **User asks for help with Y.**
- Others try to help user with Y, but are confused because Y seems like a strange problem to want to solve.
- After much interaction and wasted time, it finally becomes clear that the user really wants help with X, and that Y was a dead end.
---
Imagine the following:
You sit in a restaurant and the waiter brings the bill. He has listed all the dishes, but there is no summary! You have to do it yourself - after all, he has already "done it well".
Or hasn't he?
---
*Closing remark*:
Of course I know - and love - the *UNIX toolkit*. But the basic functions should be provided by the tool itself. Adding up a few numbers - in the right place, and especially in such a commonly needed case - is no big thing. And I see no reason not to do it.
---
*Conclusion*:
My understanding is now: It's **POSIX**!
The [POSIX](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/ls.html) standard has no mention of a summary. And that's it.
It's carved in stone.
People don't even think about **X**. They are *used* to dealing with **Y**.
Nevertheless, it is astonishing how completely the possibility that it could be otherwise has been lost from view.
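For completeness, since scripting is the accepted workaround: a minimal sketch of mine of a DIR-style summary for the current directory, assuming GNU find and awk (sizes are apparent file sizes, not disk usage).

```
find . -maxdepth 1 -type f -printf '%s\n' |
    awk '{ n++; bytes += $1 } END { printf "%d file(s), %d bytes\n", n, bytes }'
find . -mindepth 1 -maxdepth 1 -type d | wc -l | awk '{ printf "%d dir(s)\n", $1 }'
```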
bashianer
(151 rep)
Nov 28, 2022, 03:55 PM
• Last activity: Dec 16, 2022, 03:38 PM
85
votes
1
answers
17893
views
Why a "login" shell over a "non-login" shell?
I have a basic understanding of *dotfiles* in *nix systems. But I am still quite confused about this question: https://unix.stackexchange.com/questions/38175/difference-between-login-shell-and-non-login-shell
A bunch of different answers (including multiple duplicates) have already addressed the following bullets:
- How to **invoke** a **login** or **non-login** shell
- How to **detect** a **login** or **non-login** shell
- What **startup files** will be consumed by a **login** or **non-login** shell
- Referred to documentation (e.g., `man bash`) for more details
What the answers didn't tell (and also something I'm still confused about) is:
- What is the **use case** of a **login** or **non-login** shell? (e.g., I only configured `zshrc` for `zsh` and that's enough for most personal dev requirements; I know it's not as simple as what `vimrc` is to `vim`)
- What is the **reason** to use a **login** over a **non-login** shell (besides consuming different startup files & life cycles)?
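One common convention that gives the distinction a use case (a sketch, not an authoritative rule): one-time environment setup goes in the login-shell file, per-shell interactive settings go in the rc file, because the environment is inherited by child processes while aliases and prompts are not.

```
# ~/.profile or ~/.zprofile - read by login shells, once per session:
export PATH="$HOME/bin:$PATH"
export EDITOR=vim

# ~/.bashrc or ~/.zshrc - read for interactive (in bash's case, non-login) shells:
alias ll='ls -l'
PS1='\u@\h:\w\$ '

# Checking which kind of shell you are in (bash):
shopt -q login_shell && echo login || echo non-login
```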
Allen
(1095 rep)
Nov 18, 2016, 06:26 PM
• Last activity: Oct 18, 2022, 10:10 AM
10
votes
3
answers
5767
views
What is the practical purpose of "./" in front of relative file paths (in the output from "find")?
Why are some relative file paths displayed in the form of `./file`, instead of just `file`? For example, when I do:

```
find .
```

I get this output:

```
./file1
./file2
./file3
```

What is the practical purpose, other than making the path more confusing? It's not like it is preventing me from some accident. Both are relative paths, and `cat ./file1` works the same as `cat file1`.
Is this behavior coming from the `find` command, or is it some system-wide C library?
OK, I understand why using `./file` for the `-exec` construct is necessary (to make sure I have `... | xargs rm ./-i`, and not `... | xargs rm -i`).
But in what situation would a missing `./` break anything when using the `-print` statement?
I am trying to construct a statement that breaks something:

```
touch -- -b -d -f -i
find -printf '%P\n' | sort
-b
-d
-f
-i
```

Everything works fine.
Just out of curiosity, how could I construct a `-print` statement that would demonstrate this issue?
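One constructed demonstration (my sketch, not from the question): drop the `./` with `-printf '%P\n'` and feed the names to another command, and a file whose name looks like an option gets swallowed as an option.

```
mkdir demo && cd demo
echo 'hello' > ./-n         # a file whose name matches cat's -n option

# Without "./" the name is parsed as an option: cat runs as `cat -n`
# with no file operand, so the file's contents never appear.
find . -maxdepth 1 -type f -printf '%P\n' | xargs cat

# With -print the leading "./" keeps it an unambiguous path: `cat ./-n`.
find . -maxdepth 1 -type f -print | xargs cat     # prints "hello"
```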
Martin Vegter
(586 rep)
Aug 9, 2022, 08:10 AM
• Last activity: Aug 14, 2022, 03:54 AM
1
votes
1
answers
122
views
Is "running a folder" possible in Linux?
Is there a philosophy behind running a folder as an executable in linux?
```
user@node main % ls -lash ./bin
total 0
0 drwxrwxrwx  2 user  staff    64B May 23 21:04 .
0 drwxr-xr-x  6 user  staff   192B May 23 21:04 ..
user@node main % ./bin
zsh: permission denied: ./bin
```
Permission denied implies that it may be allowed. If it's not, then why is it `permission denied` rather than something like `can't run a directory`?
Or is it just a weird artifact of the API when directories are involved in this way?
P.S. I am aware that the x flag is *adopted* in the directory context to allow/deny cd-ing into directories and long-listing (`ls -l`) them; this is not what this question is about.
P.P.S. In Python, a directory can be treated as a Python "executable" if it has a certain file structure inside (i.e. it's possible to pass a directory instead of a Python file to be run by the Python interpreter).
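A quick way to see where the wording comes from (my sketch; `./bin` is just the example directory): the shell merely reports the errno returned by `execve(2)`, and for a path that is not a regular file (a directory included) that errno is `EACCES`, which is rendered as "Permission denied".

```
mkdir -p ./bin

# Trace only the exec call; the refusal comes from the kernel, not the shell:
strace -e trace=execve ./bin
# typically ends with a line along the lines of:
#   execve("./bin", ["./bin"], ...) = -1 EACCES (Permission denied)
```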
Elijas Dapšauskas
(113 rep)
May 23, 2022, 06:30 PM
• Last activity: May 23, 2022, 08:39 PM
0
votes
1
answers
36
views
When filtering, never throw away information you don't need to
In [this text](https://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s08.html), the author says
>When filtering, never throw away information you don't need to
What does it mean in the context of applying the UNIX philosophy?
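One way to read the maxim (my illustration; `app.log` is a made-up file): a filter should drop only what its own job requires, leaving the rest intact for the next stage of the pipeline.

```
# Keeps whole matching lines: timestamps, PIDs and messages remain available
# to whatever filter runs next.
grep -i 'error' app.log

# Throws away everything but the matched word itself; later stages have
# nothing left to work with.
grep -io 'error' app.log
```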
Ashfame
(121 rep)
Apr 9, 2022, 04:52 AM
• Last activity: Apr 9, 2022, 07:21 AM
27
votes
4
answers
29596
views
Is it correct to use certain special characters when naming filenames in Linux?
Is it correct to use certain special characters, such as `+`, `&`, `'`, `.` (dot) and `,` (comma), in filenames?
I understand that you can use `-` and `_` with no problem, but doing some research I have been unable to find anything definite about the other symbols; some say that you can, some say that you can't, and some others say that it is "not encouraged" to use them (whatever that means).
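For context (my addition): on Unix filesystems any byte except NUL and the `/` separator is technically allowed in a filename, so the practical issue is shell quoting rather than legality. A few habits that keep such names safe in scripts:

```
# Quote expansions so &, ', spaces and commas survive word splitting:
file="Tom & Jerry's notes, v1.2.txt"
cp -- "$file" backup/

# Use "--" so a name starting with "-" is never parsed as an option:
rm -- '-odd+name&.txt'

# When feeding names from find to other tools, use NUL separators:
find . -type f -name '*&*' -print0 | xargs -0 ls -l --
```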
Chris Klein
(271 rep)
Sep 16, 2014, 02:32 AM
• Last activity: Jun 2, 2021, 12:14 PM
1
votes
2
answers
2943
views
Printing to the STDOUT vs writing to an output file directly
Is there any rule of thumb for when the result of a program should be printed to stdout by default, and when the more appropriate approach is to accept an output file as one of the arguments and write to it? One can always redirect stdout to a file.
I know that there are different kinds of programs, and for example it does not make sense to print to stdout if the result is multiple files rather than a single one. It also does not make sense to accept an output file as an argument if the output of the program is temporary in nature.
However, what about programs producing output that ultimately should be placed in a single file? Such programs can be further divided into programs accepting input from stdin and ones taking input from arguments.
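A common compromise, shown as a sketch of my own rather than a rule from the question: write to stdout by default, accept an optional `-o FILE`, and treat `-` as stdout, so both redirection and explicit output files work.

```
#!/usr/bin/env bash
# Hypothetical tool skeleton: stdout by default, -o FILE to write a file.
out=-
while getopts 'o:' opt; do
    case $opt in
        o) out=$OPTARG ;;
        *) exit 2 ;;
    esac
done
shift $((OPTIND - 1))

produce_result() { printf 'result for: %s\n' "$*"; }   # stand-in for real work

if [[ $out == - ]]; then
    produce_result "$@"            # plays nicely with pipes and redirection
else
    produce_result "$@" > "$out"   # explicit output file requested
fi
```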
Al Bundy
(201 rep)
Apr 12, 2021, 06:53 AM
• Last activity: Apr 12, 2021, 01:08 PM
-1
votes
2
answers
394
views
The UNIX Programming Environment by Kernighan and Pike
I have recently started reading "The UNIX Programming Environment" by Kernighan and Pike. My objective is to learn about the UNIX philosophy. My question is: do I need to install UNIX on my desktop to make the most out of the book, or will any *NIX system work? I currently use Linux (Ubuntu).
Kishan Dhakan
(3 rep)
Dec 16, 2020, 08:28 AM
• Last activity: Dec 16, 2020, 08:49 AM
5
votes
4
answers
2295
views
Why is Linux "Unix-like" if its kernel is monolithic?
As I understand it, part of the Unix identity is that it has a microkernel delegating work to highly modular file processes. So why is Linux still considered "Unix-Like" if it strays from this approach with a monolithic kernel?
Steve
(213 rep)
Jan 24, 2015, 04:01 AM
• Last activity: Oct 12, 2020, 01:39 AM
8
votes
2
answers
2929
views
Reading from /dev/urandom - system behaviour
When reading from `/dev/urandom`, with say `head` or `dd`, it’s of course expected that the output is always random and different.
How is this handled by UNIX at a low level? Is the file naturally truncated on reading, or is the file actually an interface to a symmetric cipher or equivalent, such that “reading” is really the act of executing the cipher?
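An empirical check (my addition): `/dev/urandom` is a character device, so each read is a call into the kernel's CSPRNG driver rather than a read of stored bytes; nothing is truncated and the node's reported size never changes.

```
ls -l /dev/urandom                    # the leading "c" marks a character device
stat -c '%F, size %s' /dev/urandom    # "character special file", size 0

head -c 8 /dev/urandom | xxd          # two reads return different bytes...
head -c 8 /dev/urandom | xxd          # ...and the device is never "consumed"
```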
Woodstock
(458 rep)
Aug 24, 2020, 09:07 PM
• Last activity: Aug 24, 2020, 09:44 PM
-12
votes
1
answers
208
views
Why do FOSS developers claim cross-platform support when their stuff is frequently broken on Windows in frustrating ways?
Two examples out of the thousands I've encountered:
Bitcoin Core on Windows has a very annoying glitch which causes a cmd.exe (or similar) window to briefly appear and immediately go away, showing only for a fraction of a second, whenever a `blocknotify` or `walletnotify` signal is received (necessary to properly implement a payment system). This slowly but surely drives the computer user insane, to the point where it's impossible to keep using the machine if Bitcoin Core is to be running on it (which is crucial in my case). Countless work-around commands for the directives were tested, and numerous attempts to talk with the developers were made, but they just claim that they don't have Windows and that it's "not a priority".
The `pg_dump` command in PostgreSQL on Windows, used to back up a PG database, ignores the `--exclude-table-data` parameter if the table or schema name contains a "special" character, even though everything is correctly escaped and named (according to both Windows and PostgreSQL) and verified to work with other programs. Same thing here: they basically claim that they have better things to do than fix bugs on Windows, such as fixing bugs on Linux. The practical end result is that I'm forced to dump my database in its entirety, meaning my backups become hugely inflated with useless debug data.
I could go on and on. I've numerous times had various scripts just assume that Linux is being run, calling Unix commands which are nonexistent on Windows. Reporting this always falls on deaf ears, but yet they keep on claiming that they produce "cross-platform" software instead of just being honest and saying that it's Linux-only software.
Of course, I'm not saying that it's trivial to figure out all the madness that Microsoft is doing, but that's kind of... the point. If it were somehow "automatic" and a non-issue, everyone would obviously be running the exact same operating system and there would be no need for "cross-platform" software or any extra work. It's not like I'm sticking to Windows "out of spite" to make life hard for myself; you wouldn't believe how many times I've said "good bye" to Microsoft's harassments, angrily downloaded and installed a Linux distribution, only to realize that the open source community's toxicity actually *far surpasses* Microsoft's (rightfully called) horrible actions. Whereas on Windows, "only" 90% of my time is spent working around idiocy and downright sadism, in the case of Linux, it's over 99%. Basically nothing but problems, no matter what I'd try to do. And I'm genuinely happy for you if your experience is somehow completely different.
The question is why they want to appear to support Windows when there are major issues with their releases for that platform which prevent them from being used correctly?
You might answer:
> They have limited time and resources. Patches/donations welcome.
That may be true, but they keep working to release major new versions with entirely new features and whatnot all the time. What if they maybe slowed down a bit with all the new stuff and simply fixed the current bugs until they add more stuff? Is that really such an unreasonable stance? The exact same criticism ironically applied to Windows, especially with their ever-changing Windows 10. There are so many bugs which are just left untouched for year after year, as they keep piling on more unwanted garbage. But in their case, it's financially motivated, whereas these open source projects are free of charge and they should be able to take as much time as they feel necessary to make their existing feature-set solid and robust before even thinking about adding new features.
I just don't understand the mentality.
K. Seer
(1 rep)
Jul 31, 2020, 10:24 PM
• Last activity: Jul 31, 2020, 11:55 PM
Showing page 1 of 20 total questions