
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
3 answers
144 views
How to find the python command in a shell script?
Let's suppose that you have a super python 2&3 one-liner that you need to use in a shell script. On system A the python command works, but on system B you have to use python3, on system C you need to use python3.12 or python2, etc... What would be a sensible way to check some, if not every, "python*" command name? I want to avoid using a construct like this:
#!/bin/bash

if command -v python
then
    python_exe=python
elif command -v python3
then
    python_exe=python3
elif command -v python2
then
    python_exe=python2
else
    echo "python executable not found" >&2
    exit 1
fi > /dev/null

"$python_exe" -c 'print("Hello world!");'
Fravadona (1581 rep)
Mar 11, 2025, 08:13 PM • Last activity: Jul 16, 2025, 01:38 PM
0 votes
1 answer
2234 views
Kali Linux on portable SSD with data persistence
I need to set up a portable Kali Linux environment that I can boot on any available computer. I don't want to use a virtual environment because I would need to download VMware on the host computer, and it's not practical since I will need to boot my OS on friends'/customers' computers. I just want to boot it from the BIOS, do what I have to do and leave without changing or downloading any files/software on machines that aren't mine. I want to leave the hosts as if I was never there after I'm done. Of course, I need data persistence so that I can access, change and keep my data across reboots on different machines. So, if I'm not wrong, what I need is:

- Adding Persistence to a Kali Linux Live USB Drive
- Making a Kali Bootable USB Drive (Windows)

I have two questions:

- Wouldn't something like that run too slowly on a classic USB flash drive?
- I kind of want to use a portable SSD for this. I found one that I like and I'd like your opinions on it and whether what I want to do with it is possible. Since this SSD has built-in features and software, is it going to be a problem to make it a Kali Linux portable environment? For example, what if I need to format the disk to a specific file system type?
mossonzdod (17 rep)
Jul 27, 2021, 01:52 AM • Last activity: Jun 24, 2025, 06:02 AM
2 votes
2 answers
1943 views
select(2) on FIFO on macOS
On Linux the included program returns from select and exits:

$ gcc -Wall -Wextra select_test.c -o select_test
$ ./select_test
reading from read end
closing write end
first read returned 0
second read returned 0
selecting with read fd in fdset
select returned

On OS X, the select blocks forever and the program does not exit. The Linux behavior matches my expectation and appears to conform to the following bit of the POSIX manual page for select:

> A descriptor shall be considered ready for reading when a call to an input function with O_NONBLOCK clear would not block, whether or not the function would transfer data successfully. (The function might return data, an end-of-file indication, or an error other than one indicating that it is blocked, and in each of these cases the descriptor shall be considered ready for reading.)

Since read(2) on the read end of the fifo will always return EOF, my reading says that it should always be considered ready by select. Is macOS's behavior here well-known or expected? Is there something else in this example that leads to the behavior difference?

A further note is that if I remove the read calls then macOS's select returns. This and some other experiments seem to indicate that once an EOF has been read from the file, it will no longer be marked as ready if select is called on it later.

## Example Program

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define FILENAME "select_test_tmp.fifo"

int main() {
    pid_t pid;
    int r_fd, w_fd;
    unsigned char buffer;
    fd_set readfds;

    mkfifo(FILENAME, S_IRWXU);

    pid = fork();
    if (pid == -1) {
        perror("fork");
        exit(1);
    }

    if (pid == 0) {
        w_fd = open(FILENAME, O_WRONLY);
        if (w_fd == -1) {
            perror("open");
            exit(1);
        }
        printf("closing write end\n");
        close(w_fd);
        exit(0);
    }

    r_fd = open(FILENAME, O_RDONLY);
    if (r_fd == -1) {
        perror("open");
        exit(1);
    }

    printf("reading from read end\n");
    if (read(r_fd, &buffer, 10) == 0) {
        printf("first read returned 0\n");
    } else {
        printf("first read returned non-zero\n");
    }

    if (read(r_fd, &buffer, 10) == 0) {
        printf("second read returned 0\n");
    } else {
        printf("second read returned non-zero\n");
    }

    FD_ZERO(&readfds);
    FD_SET(r_fd, &readfds);
    printf("selecting with read fd in fdset\n");
    if (select(r_fd + 1, &readfds, NULL, NULL, NULL) == -1) {
        perror("select");
        exit(1);
    }
    printf("select returned\n");

    unlink(FILENAME);
    exit(0);
}
Steven D (47418 rep)
Aug 2, 2017, 12:48 AM • Last activity: Apr 19, 2025, 02:01 AM
4 votes
2 answers
14856 views
How can one manually assign a permanent/static IP address with "ip addr add"?
After an IP address is assigned to this network interface with any of the following commands:

ip addr add 10.0.0.0 dev eth1 valid_lft forever preferred_lft forever
ip addr replace 10.0.0.0 dev eth1 valid_lft forever preferred_lft forever
ip addr add 10.0.0.0 dev eth1
ip addr replace 10.0.0.0 dev eth1

I can verify with ip addr that the IP address for eth1 is set to 10.0.0.0/32, which is excellent (I think):

...
3: eth1: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 08:00:27:4d:1e:43 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.0/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe4d:1e43/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
...

I begin to ping myself... the ping command hangs on the 31st ping:

username@computer:~$ ping 10.0.0.0
PING 10.0.0.0 (10.0.0.0) 56(84) bytes of data.
64 bytes from 10.0.0.0: icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from 10.0.0.0: icmp_seq=2 ttl=64 time=0.034 ms
...
64 bytes from 10.0.0.0: icmp_seq=30 ttl=64 time=0.038 ms
64 bytes from 10.0.0.0: icmp_seq=31 ttl=64 time=0.041 ms

Once the ping hangs, I can verify with ip addr that the IP address for eth1 has disappeared:

...
3: eth1: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 08:00:27:4d:1e:43 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe4d:1e43/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
...

**How can one assign a static IP address to a network interface using ip(8), and not let it disappear?** (disappearing after a system restart is OK)

I am running Ubuntu 14.04. From researching my problem on the Internet, it seems that modifying the file /etc/network/interfaces is the solution, but this is undesirable because that solution is not as portable as the ip(8) command.
Eric (143 rep)
Jan 29, 2016, 11:31 PM • Last activity: Mar 31, 2025, 02:50 PM
110 votes
23 answers
77752 views
Detect init system using the shell
This may have more to do with detecting operating systems, but I specifically need the init system currently in use on the system. Fedora 15 and Ubuntu now use systemd, Ubuntu used to use Upstart (the long-time default until 15.04), while others use variations of System V.

I have an application that I am writing to be a cross-platform daemon. The init scripts are dynamically generated based on parameters that can be passed in on configure. What I'd like to do is only generate the script for the particular init system that they are using. This way the install script can be run reasonably without parameters as root and the daemon can be "installed" automagically. This is what I've come up with:

* Search for systemd, upstart, etc. in /bin
* Compare /proc/1/comm to systemd, upstart, etc.
* Ask the user

**What would be the best cross-platform way of doing this?**

Kind of related, **can I depend on bash being present on the majority of *nix systems, or is it distribution/OS dependent?**

Target platforms:

* Mac OS
* Linux (all distributions)
* BSD (all versions)
* Solaris, Minix, and other *nix
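A rough sketch of the first two bullet points (assuming /proc/1/comm on Linux and a POSIX ps elsewhere; the markers probed here are illustrative, not exhaustive):

#!/bin/sh
# Ask PID 1 what it is, then probe a few well-known markers.
if [ -r /proc/1/comm ]; then
    init_name=$(cat /proc/1/comm)              # Linux-only; /proc may not exist elsewhere
else
    init_name=$(ps -p 1 -o comm= 2>/dev/null)  # POSIX ps fallback
fi

case $init_name in
    *systemd*) echo systemd ;;
    *launchd*) echo launchd ;;                 # Mac OS
    *)
        if command -v initctl >/dev/null 2>&1 && initctl version 2>/dev/null | grep -qi upstart; then
            echo upstart
        elif [ -d /etc/init.d ]; then
            echo "sysvinit (or compatible)"
        else
            echo "unknown: $init_name"
        fi
        ;;
esac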
beatgammit (7843 rep)
Aug 6, 2011, 11:29 PM • Last activity: Mar 31, 2025, 02:48 PM
9 votes
3 answers
4555 views
Who is responsible for providing `set -o pipefail`
I want strict mode in my scripts. I would also appreciate portability. [set -o pipefail](https://www.translucentcomputing.com/2020/11/unofficial-bash-strict-mode-pipefail/) seems compulsory. Yet shellcheck (a static linter) is unhappy, warning that "In POSIX sh, set option pipefail is undefined". Is that correct? If so, is this solely a bash feature, or is it more widespread?
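For context: pipefail originated in ksh93 and is also understood by bash, zsh and mksh; as far as I know it has only recently been adopted into the POSIX standard, so a minimal /bin/sh may not have it. A common hedge (a sketch, not from the question) is to enable it only when the running shell accepts it:

#!/bin/sh
# Enable pipefail only where the running shell actually implements it.
if (set -o pipefail) 2>/dev/null; then
    set -o pipefail
fi
set -eu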
Vorac (3197 rep)
Jun 19, 2021, 07:33 AM • Last activity: Mar 18, 2025, 04:32 PM
506 votes
6 answers
98589 views
Why not use "which"? What to use then?
When looking for the path to an executable or checking what would happen if you enter a command name in a Unix shell, there's a plethora of different utilities (which, type, command, whence, where, whereis, whatis, hash, etc). We often hear that which should be avoided. Why? What should we use instead?
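The alternative most often suggested is the shell's own command -v, which is POSIX-specified and reports what the shell itself would execute (a small illustration; the command name vi is arbitrary):

# What would the shell run if I typed "vi"? Prints a path, alias text, or a function/builtin name.
command -v vi

# Typical test inside a script:
if command -v vi >/dev/null 2>&1
then
    echo "vi is available"
else
    echo "vi not found" >&2
fi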
Stéphane Chazelas (579282 rep)
Aug 1, 2013, 10:58 PM • Last activity: Feb 7, 2025, 04:19 AM
0 votes
0 answers
31 views
Is this a portable way to escape MSYS2's Linux paths?
MSYS2-built Linux applications internally rewrite Linux paths to Windows paths. This can be problematic if those are supposed to interface with other, proper-Linux applications (e.g. in my case, MSYS2 rsync connecting to a Debian 12 rsync server). It can be configured, but the means are neither particularly elegant nor particularly dependable, it would seem. There is however a very appealing shorthand: use an absolute path prefixed with a second /. E.g. /home/me/something will become C:\msys64\home\me\something, but //home/me/something will remain unchanged (and the rsync server treats it as a single /, at least on Debian 10 and 12). I am using MSYS2 to make a single Windows machine work mostly-seamlessly in an environment otherwise dominated by Debian 12 machines, wanting to use the same tools/scripts there if possible, so I wonder: **Can I presume a path beginning with // equals a path starting with / on all or most Linux distributions / for all or most common CLI tools?** As per the Unix standard, interpretation of the first slash in // is implementation-defined. But what do the implementations define? (This question deals with the native Linux side, but if you just happen to know whether this is a stable solution on the MSYS2 side, I would be grateful for mentioning it.)
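One cheap probe for any single machine (a sketch; it tells you about the host you run it on, not about "all or most distributions") is to compare the inodes behind /. and //.:

# Does // name the same directory as / on this particular system?
root_inode=$(ls -di /. | awk '{print $1}')
dbl_inode=$(ls -di //. | awk '{print $1}')
if [ "$root_inode" = "$dbl_inode" ]
then
    echo "// behaves like / here"
else
    echo "// is treated as a distinct root here"
fi

On Linux itself the kernel gives // no special meaning, so native Linux tools resolve it exactly like /; the implementation-defined carve-out in POSIX exists mainly for systems such as Cygwin, where //host/share names a network path.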
Zsar (111 rep)
Jan 21, 2025, 04:30 PM • Last activity: Jan 21, 2025, 05:25 PM
1 vote
2 answers
153 views
Would a #! /bin/sh shebang prevent execution on other operating systems? How to create a shell-agnostic, portable script?
If I include a shebang line in my script like following #! /bin/sh Would it prevent from execution on other operating systems like Windows which may not have sh in bin folder? Is this better to omit shebang line to make a portable script? How to write a operating system agnostic, shell agnostic, portable script which would execute in the same way in all operating systems or shells? If the script has to be executed the same way on Mac OS, Windows, Ubuntu, any other Linux or any other operating systems? --- Assumptions. I assume to prepare a script for a workshop where workshop participants may use different operating systems and may have different shells; I share preparation notes to install all needed tools like jq, sed, sfdx, curl and I assume the script would run the same way on all machines and all the operating systems. I assume workshop participants mainly use Mac OS, Windows, Ubuntu. Also I assume that participants may install Git Bash tool on Windows if there is no posix compatible shell. --- The script itself unwrap2.sh verbose=$1 # deploy data cloud Data Kit sf project deploy start -d dc # unwrap Data Kit components x=$(sf apex run -f scripts/unwrap.apex --json | jq '.result.logs' -r) x1=${x#*EXECUTION_STARTED} x2=${x1#*Result is \(success): } if [[ "$verbose" = "verbose" ]]; then echo "x2: $x2" fi # Actually this was my mistake here which I have posted in another question # this line should be x3=$(echo "$x2" | head -n 1) to make it work # However, original version of this script with this error didn't have the doublequotes here x3=$(echo $x2 | head -n 1) if [[ "$verbose" = "verbose" ]]; then echo "x3: $x3" fi status=$(sf data query -q "select Id,Status FROM BackgroundOperation WHERE Id = '$x3'" --json | jq '.result.records.Status' -r) if [[ "$verbose" = "verbose" ]]; then echo "select Id,Status FROM BackgroundOperation WHERE Id = '$x3'" fi echo "Status: $status" while [[ "$status" != "Complete" ]]; do sleep 10 status=$(sf data query -q "select Id,Status FROM BackgroundOperation WHERE Id = '$x3'" --json | jq '.result.records.Status' -r) echo "Status: $status" done Also, I have other scripts, like setup.sh email=$1 integrationUsername=$2 defaultFolder="force-app/main/default" mkdir "$defaultFolder/connectedApps" cp "template/connectedApps/Integration.connectedApp-meta.xml" "$defaultFolder/connectedApps" cp "template/data/int.csv" "data" SEDOPTION= if [[ "$OSTYPE" == "darwin"* ]]; then SEDOPTION="-i ''" fi # transform connected app from the template folder into default folder key=${integrationUsername//@/..} echo "Key: $key" sed -e "s/{{email}}/$email/g" -e "s/{{integration.username}}/$integrationUsername/g" -e "s/{{key}}/$key/g" "template/connectedApps/Integration.connectedApp-meta.xml" > "$defaultFolder/connectedApps/Integration.connectedApp-meta.xml" # transform integration user sed -e "s/{{email}}/$email/g" -e "s/{{integration.username}}/$integrationUsername/g" "template/data/int.csv" > "data/int.csv" # create integration user ./insertUsers.sh data/int.csv # deploy connected app sf project deploy start -d force-app # assign perm set sf org assign permset -n GenieAdmin sf org assign permset -n GenieAdmin -b $integrationUsername sf org open -p "lightning/setup/SetupOneHome/home?setupApp=audience360" which invokes insertUsers.sh data=$(cat $1) username=$(sfdx force:org:display --json | jq '.result.username' -r) echo "Username = $username" echo "Last part is ${username##*.}" sandbox=${username##*.} echo "sandbox ${sandbox}" if [[ $sandbox == 'com' ]] then sandbox=${username%%@*} 
echo "First part is ${sandbox}" fi data=${data//.org/.$sandbox} echo "$data" > us.csv sf data upsert bulk -s User -f us.csv -w 500 -i Id --json > upsert.json echo $(cat upsert.json) users=$(cat upsert.json | jq '.result.records.successfulResults[].sf__Id' -r) echo "Users before transformation $users" users="${users//$'\n'/','}" echo "Users after transformation $users" code=$(cat scripts/updateUser.apex) echo "${code//users/$users}" > temp.apex sf apex run --file temp.apex rm temp.apex rm us.csv rm upsert.json IFS="$old_ifs" Also, I have deploy script ./deploy.sh verbose verbose=$1 # deploy data cloud Data Kit sf project deploy start -d dc # unwrap Data Kit components scDomain=$(sf org display --json | jq '.result.instanceUrl' -r) if [[ "$verbose" = "verbose" ]]; then echo "Sales Cloud domain: $scDomain" fi scToken=$(sf org display --json | jq '.result.accessToken' -r) if [[ "$verbose" = "verbose" ]]; then echo $scToken fi version=$(cat sfdx-project.json | jq '.sourceApiVersion' -r) result=$(curl --location --request POST "$scDomain/services/data/v$version/actions/custom/flow/sfdatakit__DeployDataKitComponents" \ --header "Authorization: Bearer $scToken" \ --header 'Content-Type: application/json' \ --data @data/unwrap.json ) figuid=$(echo $result | jq '..outputValues.Flow__InterviewGuid' -r) if [[ "$verbose" = "verbose" ]]; then echo "result: $result" echo "figuid: $figuid" fi sf data query -q "SELECT Id, BundleName, ComponentName, ComponentTemplateId, ComponentType, DataKitName, DataSpaceName, DeployJob, DeploymentError, DeploymentStatus , FlowInterviewIdentifier, Name, PublisherOrgComponentId, SubscriberOrgComponentId, TemplateVersion FROM DataKitDeploymentLog WHERE FlowInterviewIdentifier = '$figuid'" --json > soql.json count=$(cat soql.json | jq '.result.records | length') if [[ "$verbose" = "verbose" ]]; then echo "Count: $count" fi if [ "$count" = "1" ]; then x="count is one" else x="count is not one" fi if [[ "$verbose" = "verbose" ]]; then echo "x: $x" fi while [ "$count" = "1" ]; do echo "Unwrapping Data Kit components in progress, waiting another 5 seconds..." 
sleep 5 sf data query -q "SELECT Id, BundleName, ComponentName, ComponentTemplateId, ComponentType, DataKitName, DataSpaceName, DeployJob, DeploymentError, DeploymentStatus , FlowInterviewIdentifier, Name, PublisherOrgComponentId, SubscriberOrgComponentId, TemplateVersion FROM DataKitDeploymentLog WHERE FlowInterviewIdentifier = '$figuid'" --json > soql.json count=$(cat soql.json | jq '.result.records | length') done deploymentStatus1=$(cat soql.json | jq '.result.records.DeploymentStatus' -r) deploymentStatus2=$(cat soql.json | jq '.result.records.DeploymentStatus' -r) if [[ "$verbose" = "verbose" ]]; then echo "deploymentStatus1: $deploymentStatus1" echo "deploymentStatus2: $deploymentStatus2" fi echo "Unwrapping Data Kit components completed, with statuses $deploymentStatus1 and $deploymentStatus2" aid=$(sf data query -q "SELECT Id FROM ConnectedApplication WHERE Name = 'Integration'" --json | jq '.result.records.Id' -r) res=$(curl --location --request GET "$scDomain/$aid" --header "Cookie: sid=$scToken") r1=${res#*applicationId=} r2=${r1%\'); *} echo "r2: $r2" echo "Open the Connected App View page" sf org open -p "/app/mgmt/forceconnectedapps/forceAppDetail.apexp?applicationId=$r2" echo "Trying to open Connected App so that you can click Manage and Copy Consumer Details: Key and Secret" sf org open -p "/app/mgmt/forceconnectedapps/forceAppManageConsumer.apexp?applicationId=$r2" # Potentially we could try to follow the Manage Consumer Detail link to parse Key and Secret # However, it is been redirected, so we would have to fetch the redirection link first # res=$(curl --location --request GET "$iu/app/mgmt/forceconnectedapps/forceAppManageConsumer.apexp?applicationId=$r2" --header "Cookie: sid=$sid") # echo $res # r1=${res#*replace\(\'} # l1=${r1%\');*} # l2=${l1#*replace\(\'} # echo "link: $l2" # res2=$(curl --location --request GET "$iu/$l2" --header "Cookie: sid=$sid") # # which is in $l2 variable, then we would have to figure out how to avoid insufficient permission error # Currently if you follow this link from UI, it requires MFA for confirmation. 
IngestS.sh script scDomain=$(sf org display --json | jq '.result.instanceUrl' -r) echo "Sales Cloud domain: $scDomain" key=$1 secret=$2 verbose=$3 result=$( curl --location --request POST "$scDomain/services/oauth2/token" \ --header 'Content-Type: application/x-www-form-urlencoded' \ --data-urlencode 'grant_type=client_credentials' --data-urlencode "client_id=$key" --data-urlencode "client_secret=$secret" ) if [[ "$verbose" = "verbose" ]]; then echo $result fi scToken=$(echo $result | jq '.access_token' -r) if [[ "$verbose" = "verbose" ]]; then echo $scToken fi result=$(curl --location --request POST "$scDomain//services/a360/token" \ --header 'Content-Type: application/x-www-form-urlencoded'\ --data-urlencode 'grant_type=urn:salesforce:grant-type:external:cdp'\ --data-urlencode "subject_token=$scToken"\ --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:access_token") if [[ "$verbose" = "verbose" ]]; then echo $result fi dcToken=$(echo $result | jq '.access_token' -r) dcDomain=$(echo $result | jq '.instance_url' -r) if [[ "$verbose" = "verbose" ]]; then echo $dcToken echo $dcDomain fi result=$(curl --location --request POST "https://$dcDomain/api/v1/ingest/sources/Looker/treatments_statuses/actions/test " \ --header "Authorization: Bearer $dcToken" \ --header 'Content-Type: application/json' \ --data @data/dc.json ) if [[ "$verbose" = "verbose" ]]; then echo "Ingest results: $result" fi result=$(curl --location --request POST "https://$dcDomain/api/v1/ingest/sources/Looker/treatments_statuses " \ --header "Authorization: Bearer $dcToken" \ --header 'Content-Type: application/json' \ --data @data/dc.json ) echo "Ingest results: $result" IngestB.sh script set -e set -o pipefail scDomain=$(sf org display --json | jq '.result.instanceUrl' -r) if [[ "$verbose" = "verbose" ]]; then echo "Sales Cloud domain: $scDomain" fi key=$1 secret=$2 verbose=$3 result=$( curl --location --request POST "$scDomain/services/oauth2/token" \ --header 'Content-Type: application/x-www-form-urlencoded' \ --data-urlencode 'grant_type=client_credentials' --data-urlencode "client_id=$key" --data-urlencode "client_secret=$secret" ) if [[ "$verbose" = "verbose" ]]; then echo $result fi scToken=$(echo $result | jq '.access_token' -r) if [[ "$verbose" = "verbose" ]]; then echo $scToken fi result=$(curl --location --request POST "$scDomain//services/a360/token" \ --header 'Content-Type: application/x-www-form-urlencoded'\ --data-urlencode 'grant_type=urn:salesforce:grant-type:external:cdp'\ --data-urlencode "subject_token=$scToken"\ --data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:access_token") if [[ "$verbose" = "verbose" ]]; then echo $result fi dcToken=$(echo $result | jq '.access_token' -r) dcDomain=$(echo $result | jq '.instance_url' -r) if [[ "$verbose" = "verbose" ]]; then echo $dcToken echo $dcDomain fi result=$(curl --location --request POST "https://$dcDomain/api/v1/ingest/jobs " \ --header "Authorization: Bearer $dcToken" \ --header 'Content-Type: application/json' \ --data '{"object":"patients", "sourceName":"Looker", "operation":"upsert"}' ) if [[ "$verbose" = "verbose" ]]; then echo "Job create request: $result" fi jid=$(echo $result | jq '.id' -r) if [[ "$verbose" = "verbose" ]]; then echo "jid: $jid" fi if [[ "$jid" = "null" ]]; then error=$(echo $result | jq '.error' -r) message=$(echo $result | jq '.message' -r) echo "Process halted because of error: $error with message: $message" exit 1 fi result=$(curl --location --request PUT 
"https://$dcDomain/api/v1/ingest/jobs/$jid/batches " \ --header 'Content-Type: text/csv' \ --header "Authorization: Bearer $dcToken" \ --data-binary @data/dcPat.csv ) if [[ "$verbose" = "verbose" ]]; then echo "Data upload results: $result" fi result=$(curl --location --request PATCH "https://$dcDomain/api/v1/ingest/jobs/$jid " \ --header 'Content-Type: application/json' \ --header "Authorization: Bearer $dcToken" \ --data '{"state" : "UploadComplete"}' ) if [[ "$verbose" = "verbose" ]]; then echo "Patch job results: $result" fi result=$(curl --location --request GET "https://$dcDomain/api/v1/ingest/jobs/$jid " \ --header 'Content-Type: application/json' \ --header "Authorization: Bearer $dcToken" ) if [[ "$verbose" = "verbose" ]]; then echo "get job status results: $result" fi state=$(echo $result | jq '.state' -r) echo "state: $state" while [[ "$state" != "JobComplete" ]]; do sleep 10 state=$(echo $result | jq '.state' -r) echo "state: $state" done echo "Final state: $state"
Patlatus (123 rep)
Jan 14, 2025, 11:09 AM • Last activity: Jan 15, 2025, 11:04 AM
5 votes
3 answers
1138 views
How can I assign a heredoc to a variable in a way that's portable across Unix and Mac?
This code works fine on Linux but not Mac OS:

```
#!/usr/bin/env bash
foo=$(cat <<EOF
"\[^"\]+"
EOF
)
printf "%s" "$foo"
```

It fails on Mac with

```
./test.sh: line 6: unexpected EOF while looking for matching `"'
./test.sh: line 7: syntax error: unexpected end of file
```

If I do `cat <<EOF` instead of `foo=$(cat <<EOF`, it works fine. Is there a portable way to get heredocs (or multiline strings) into variables without using a file as an intermediate?

**Edit**: I want to use a heredoc because I have a multiline string with " and '. My actual example looks like:

```
EXPECTED_ERROR=$(cat <<EOF
```
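For what it's worth, one workaround to try (a sketch, assuming the failure comes from the old bash 3.2 shipped with macOS mis-parsing quote characters of a heredoc placed inside a command substitution) is to move the heredoc into a function and capture the function's output instead:

```
#!/usr/bin/env bash
# Keep the heredoc outside the $( ... ) so the shell never has to re-parse
# quote characters inside a command substitution.
expected_error() {
    cat <<EOF
"\[^"\]+"
EOF
}

foo=$(expected_error)
printf '%s\n' "$foo"
```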
Jason Gross (173 rep)
Sep 17, 2024, 02:55 AM • Last activity: Oct 9, 2024, 05:41 PM
21 votes
6 answers
4338 views
What's the best way to trace all the changes I've made to my system files over the years?
As I imagine a lot of Linux users do, over the years I've followed the advice of countless different threads, blogposts, videos etc. and made various changes to system files in order to improve my setup. Some of those were motivated by personal preferences and customisations, e.g. changing/modifying keyboard layouts or mouse settings. Others were more fix-oriented, such as fixing my laptop not being able to wake up from sleep due to some manufacturer-specific issue, or messing with audio driver configs to get audio to work properly. I now want to start over with a fresh installation. I'm planning to use the same distribution, and the same laptop, so most likely I will need to make those changes again to get things working the way I want them to - or even to get them working at all. Is there a **smart** way to go about figuring out what changes I've made over the years that I will likely need to remake after reinstalling? I've thought about checking to see which system files have been manually edited by me (assuming that it's possible), or even doing a diff on specific folders between my installation and a vanilla one. However, I definitely don't remember most of the files I've had to edit, and I don't know enough about Linux to know which files/directories are the important ones, which ones I can safely ignore etc. EDIT: To clarify, I'm not talking about dotfiles; I keep those under version control. EDIT 2: The distribution in question is Manjaro, i3wm version.
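Since the distribution is Manjaro, which uses pacman, one concrete starting point (a sketch; it assumes pacman's -Qkk and -Qo query options behave as they do on Arch) is to ask the package manager which installed files no longer match their packages, and which files under /etc no package owns:

# Which installed files no longer match what their package shipped (checksums, sizes, mtimes)?
pacman -Qkk | grep -v ' 0 altered files'

# Which files under /etc does no installed package claim (i.e. probably created by hand)?
find /etc -type f 2>/dev/null | while read -r f; do
    pacman -Qo "$f" >/dev/null 2>&1 || printf 'unowned: %s\n' "$f"
done

# Classic pacman breadcrumbs left next to configs you have edited
find /etc \( -name '*.pacnew' -o -name '*.pacsave' \) 2>/dev/null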
Dimitris (327 rep)
Feb 20, 2022, 04:11 PM • Last activity: Sep 25, 2024, 12:12 PM
1 vote
2 answers
911 views
Which version of split supports flag -p?
This command does not work in GNU Coreutils split, the split of CERN Linux 5 (Red Hat), or BSD split (Apple Yosemite 10.10.3):

split -p'\0' input.txt

where input.txt is masi\0hello\0world. Some comments about the versions follow:

- I run split -p'\0' input.txt with BSD split but I get nothing as output, on OSX Yosemite 10.10.3, GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin14).
- I run echo 'masi\0hello' | split -p'\\0' with split 5.97 GNU 2012 on CERN Linux 5 (Red Hat). Output: split: unrecognized option --p\\0'.
- There is no option -p in GNU Coreutils split.

I have forgotten where I successfully used the option -p with split. Which version of split *does* support the flag -p?
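A hedged aside, based on my reading of the BSD manual pages rather than on the exact systems above: the -p I know of is BSD split's -p pattern, which starts a new output file at each line matching an extended regular expression; GNU coreutils split has no such flag, and GNU users usually reach for csplit instead. Neither operates on NUL-delimited input like the example above:

# BSD split (e.g. macOS): begin a new chunk at every line matching the pattern
split -p '^CHAPTER' book.txt chapter_

# A roughly comparable idea with GNU csplit
csplit book.txt '/^CHAPTER/' '{*}'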
Léo Léopold Hertz 준영 (7138 rep)
Jun 24, 2015, 11:02 AM • Last activity: Aug 2, 2024, 07:15 PM
79 votes
6 answers
25685 views
How portable are /dev/stdin, /dev/stdout and /dev/stderr?
Occasionally I need to specify a "path-equivalent" of one of the standard IO streams (stdin, stdout, stderr). Since 99% of the time I work with Linux, I just prepend /dev/ to get /dev/stdin, etc., and this "*seems* to do the right thing". But, for one thing, I've always been uneasy about such a rationale (because, of course, "it seems to work" until it doesn't). Furthermore, I have no good sense for how portable this maneuver is. So I have a few questions:

1. In the context of Linux, is it safe (yes/no) to equate stdin, stdout, and stderr with /dev/stdin, /dev/stdout, and /dev/stderr?
2. More generally, is this equivalence "adequately *portable*"? I could not find any POSIX references.
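As a concrete illustration of why the "path-equivalent" is handy in the first place (a sketch, not a portability verdict): some tools only take file name arguments, and these paths let you hand them a standard stream; on Linux they are symlinks into /proc, which is exactly why the portability question arises:

# hand stdin to a tool that insists on a file name
printf 'one\ntwo\n' | wc -l /dev/stdin

# on Linux these are symlinks into /proc; other systems implement them differently, or not at all
ls -l /dev/stdin /dev/stdout /dev/stderr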
kjo (16299 rep)
Apr 13, 2012, 10:49 PM • Last activity: Jul 3, 2024, 11:30 PM
20 votes
3 answers
1628 views
Use of ^ as a shell metacharacter
I wrote a small script today which contained

grep -q ^local0 /etc/syslog.conf

During review, a coworker suggested that ^local0 be quoted because ^ means "pipe" in the Bourne shell. Surprised by this claim, I tried to track down any reference that mentioned this. Nothing I found on the internet suggested this was a problem. However, it turns out that the implementation of bsh (which claims to be the Bourne shell) on AIX 7 actually has this behaviour:

> bsh
$ ls ^ wc
23 23 183
$ ls | wc
23 23 183

None of the other "Bourne shell" implementations I tried behave this way (that is, ^ is not considered a shell metacharacter at all). I tried sh on CentOS (which is really bash), and sh on FreeBSD (which is not bash). I don't have many other systems to try. Is this behaviour expected? Which shells consider ^ to be a pipe metacharacter?
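Whichever shells turn out to treat ^ specially, the reviewer's suggestion is cheap insurance: quoting the regular expression sidesteps the question entirely (and also keeps characters like * or ? away from filename expansion):

grep -q '^local0' /etc/syslog.conf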
Greg Hewgill (7113 rep)
Dec 9, 2013, 12:43 AM • Last activity: Mar 29, 2024, 08:44 AM
10 votes
3 answers
7073 views
Can I read a single character from stdin in POSIX shell?
Only read -r is [specified by POSIX](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/read.html); read -n NUM, used to read NUM characters, is not. Is there a portable way to automatically return after reading a given number of characters from stdin? My use case is printing prompts like this:

Do the thing? [y/n]

If possible, I'd like to have the program automatically proceed after the user types y or n, without needing them to press Enter afterwards.
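The usual workaround with only POSIX tools (a sketch; it assumes standard input is a terminal) is to take the terminal out of canonical mode with stty and read a single byte with dd:

#!/bin/sh
# Read exactly one character without waiting for Enter.
printf 'Do the thing? [y/n] '
saved=$(stty -g)                      # remember current terminal settings
stty -icanon -echo min 1 time 0       # byte-at-a-time input, no echo
answer=$(dd bs=1 count=1 2>/dev/null)
stty "$saved"                         # restore the terminal
printf '%s\n' "$answer"
case $answer in
    [Yy]) echo "doing the thing" ;;
    *)    echo "not doing the thing" ;;
esac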
ash (730 rep)
Aug 26, 2018, 03:34 PM • Last activity: Dec 24, 2023, 08:42 AM
9 votes
5 answers
2742 views
execute a command in $PATH matching a wildcard
I'd like to find and execute a command in the current $PATH matching this wildcard: libreoffice?.? (e.g. libreoffice4.0, libreoffice4.3, etc.) EDIT: If multiple matches are found, you can pick one randomly. I prefer a POSIX-compliant solution.
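A POSIX-sh sketch of the brute-force approach (it assumes no directory in $PATH contains a colon, and simply takes the first match it encounters):

#!/bin/sh
# Walk $PATH by hand and exec the first executable matching libreoffice?.?
old_ifs=$IFS
IFS=:
for dir in $PATH; do
    IFS=$old_ifs
    for cmd in "${dir:-.}"/libreoffice?.?; do
        if [ -x "$cmd" ]; then
            exec "$cmd" "$@"
        fi
    done
done
IFS=$old_ifs
echo 'no libreoffice?.? found in $PATH' >&2
exit 127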
eadmaster (1723 rep)
Feb 22, 2015, 02:36 AM • Last activity: Nov 10, 2023, 05:53 PM
12 votes
6 answers
4415 views
How to programmatically detect awk flavor (e.g. gawk vs nawk)
I'm using a command-line application which is essentially a collection of bash shell scripts. The application was written to run on BSD/OSX and also on Linux. One of the scripts relies on awk. It contains two awk commands: one written for nawk (the standard BSD awk implementation) and one written for gawk (the GNU awk implementation). The two awk commands in question are not cross-compatible with the different environments; in particular the nawk command fails when run with gawk.

The script checks the kernel name (i.e. uname -s) in order to determine the host environment, and then runs the appropriate awk command. However I prefer to work on Mac OS X with the GNU core utilities installed, so the script fails to run correctly.

In the process of thinking about how best to fix this bug it occurred to me that it would be nice to know how to programmatically distinguish between different flavors of the common command-line utilities, preferably in a relatively robust and portable way. I noticed that nawk doesn't accept the '-V' flag to print the version information, so I figured that something like the following should work:

awk -V &>/dev/null && echo gawk || echo nawk

Another variation could be:

awk -Wversion &>/dev/null && echo gawk || echo nawk

This seems to work on my two testing environments (OS X and CentOS). Here are my questions:

* Is this the best way to go?
* Is there a way to extend this to handle other variations of awk (e.g. mawk, jawk, etc.)?
* Is it even worth worrying about other versions of awk?

I should also mention that I know very little about awk.
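To stretch the same trick a little further (a sketch: the version strings matched below are assumptions from implementations I have seen, so treat the patterns as examples rather than a complete list):

#!/bin/sh
# Let awk identify itself via whichever version flag it accepts; anything whose
# banner names neither GNU Awk nor mawk falls through to the last branch.
banner=$(awk --version 2>/dev/null || awk -W version 2>/dev/null)
case $banner in
    *"GNU Awk"*) echo gawk ;;
    *mawk*)      echo mawk ;;
    *)           echo "nawk or unknown" ;;
esac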
igal (10194 rep)
Oct 16, 2015, 01:22 PM • Last activity: Oct 30, 2023, 01:41 PM
2 votes
5 answers
2213 views
override hardcoded paths in executables
I'd like to override some hardcoded paths stored in pre-compiled executables like "/usr/share/nmap/" and redirect them to another dir. My ideal solution should not require root privileges, so creating a symlink is not OK. (Also, recompiling is not an option.)
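One approach that fits the no-root, no-recompile constraints (an assumption on my part, not something mentioned in the question: it needs the proot tool, which can be carried along as a static binary) is to bind your own directory over the hardcoded path for just that process:

# Make nmap see ~/my-nmap-data wherever it looks for /usr/share/nmap, without root.
proot -b "$HOME/my-nmap-data:/usr/share/nmap" nmap -sL 192.0.2.0/28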
eadmaster (1723 rep)
Mar 29, 2014, 11:33 PM • Last activity: Oct 19, 2023, 01:01 PM
4 votes
1 answer
594 views
List all processes without controlling terminal (only)?
Is there a portable way to do this? On Linux, I can use ps a -N but this option isn't available on other (POSIX) systems. Of course I can use grep '^?' with, say, -o tty,... but is there something more reliable?
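Sticking to options that POSIX ps specifies (a sketch; note the "no controlling terminal" marker in the TTY column varies between implementations, e.g. ? on Linux procps and ?? on the BSDs and macOS):

# Print PID and command for every process whose TTY column is a "none" marker.
ps -A -o tty= -o pid= -o comm= | awk '$1 ~ /^[?-]+$/ { print $2, $3 }'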
pashazz (41 rep)
Nov 10, 2015, 07:09 PM • Last activity: Oct 6, 2023, 03:32 PM
10 votes
6 answers
3943 views
Portable check for an empty directory
With Bash and Dash, you can check for an empty directory using just the shell (ignore dotfiles to keep things simple):

set *
if [ -e "$1" ]
then
    echo 'not empty'
else
    echo 'empty'
fi

However I recently learned that Zsh fails spectacularly in this case:

% set *
zsh: no matches found: *
% echo "$? $#"
1 0

So not only does the set command fail, but it doesn't even set $@. I suppose I could test if $# is 0, but it appears that Zsh even stops execution:

% { set *; echo 2; }
zsh: no matches found: *

Compare with Bash and Dash:

$ { set *; echo 2; }
2

Can this be done in a way that works in bash, dash and zsh?
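One way to sidestep the shells' different globbing behaviour altogether (a sketch; -mindepth/-maxdepth are not strictly POSIX find, but GNU, BSD and busybox find all have them, and unlike the snippet above this also counts dotfiles):

dir=.
if [ -z "$(find "$dir" -mindepth 1 -maxdepth 1 | head -n 1)" ]
then
    echo 'empty'
else
    echo 'not empty'
fi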
user327359
Jan 7, 2019, 01:06 AM • Last activity: Sep 27, 2023, 12:58 PM
Showing page 1 of 20 total questions