
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

3 votes
4 answers
422 views
Creating n folders with equal number of files from a large folder with all files
I have a large folder of data files, which I want to copy into subfolders to make a specified number of batches. Right now I count how many files there are and use that to make ranges, like so:
cd /dump
batches=4
files=$(cat /data/samples.txt | wc -l)

echo "$files $batches"
for ((i=0; i<=batches; i++))
do
    mkdir -p merged_batches/batch$i
    rm -f merged_batches/batch$i/*

    ls merged/*.sorted.labeled.bam* |
      head -n $(( $((files/batches)) * $((i+1)) * 2 )) |
      tail -n $((2 * files/batches)) |
      xargs -I {} cp {} merged_batches/batch$i

done
Is there a more convenient way to do this?
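One common alternative, sketched here under the assumption of GNU coreutils, bash, and file names without newlines: let a counter assign each file to a batch round-robin, so no head/tail range arithmetic is needed. The sample files created below are stand-ins for the real BAM files.

```shell
#!/bin/bash
cd "$(mktemp -d)" || exit 1

# Sample input: eight stand-in files (the real ones are *.sorted.labeled.bam).
mkdir merged
for n in 0 1 2 3 4 5 6 7; do
    touch "merged/sample$n.sorted.labeled.bam"
done

batches=4
i=0
for f in merged/*.sorted.labeled.bam*; do
    # round-robin: file 0 -> batch0, file 1 -> batch1, ... wrapping at $batches
    dest="merged_batches/batch$((i % batches))"
    mkdir -p "$dest"
    cp -- "$f" "$dest"
    i=$((i + 1))
done
```

Unlike the range arithmetic, this also handles a file count that is not an exact multiple of the batch count: the leftover files simply land in the first batches.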
rubberduck (43 rep)
Aug 7, 2025, 03:49 PM • Last activity: Aug 10, 2025, 08:13 AM
9 votes
7 answers
858 views
Check if multiple files exist on a remote server
I am writing a script that locates a special type of file on my system, and I want to check if those files are also present on a remote machine. To test a single file I use:

ssh -T user@host [[ -f /path/to/data/1/2/3/data.type ]] && echo "File exists" || echo "File does not exist";

But I have to check blocks of 10 to 15 files, and I would like to check them in one go, since I do not want to open a new ssh connection for every file. My idea was to do something like:

results=$(ssh "user@host" '
for file in "${@}"; do
    if [ -e "$file" ]; then
        echo "$file: exists"
    else
        echo "$file: does not exist"
    fi
done
' "${files_list[@]}")

where the file list contains multiple file paths. But this does not work. As a result, I would like to have the "echo" string for every file that was in the files_list.
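One shape this often takes, sketched under the assumption that paths contain no newlines: send the list on stdin so a single connection checks them all. Here `sh -c` stands in for `ssh user@host` so the effect is visible locally, and the file list is hypothetical.

```shell
#!/bin/bash
files_list=(/bin/sh /no/such/file)   # hypothetical list

# With ssh this would be:
#   printf '%s\n' "${files_list[@]}" | ssh user@host '<same loop>'
results=$(printf '%s\n' "${files_list[@]}" |
    sh -c 'while IFS= read -r file; do
               if [ -e "$file" ]; then
                   echo "$file: exists"
               else
                   echo "$file: does not exist"
               fi
           done')
echo "$results"
```

One reason the quoted attempt misbehaves is that ssh joins all its arguments into a single remote command string, so the array items never become positional parameters for the quoted script; piping the paths in sidesteps that quoting problem entirely.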
Nunkuat (193 rep)
Aug 6, 2025, 12:31 PM • Last activity: Aug 10, 2025, 07:56 AM
0 votes
2 answers
2094 views
Logging with shell script - catch STDERR to log with timestamp
**SOLVED:** A few months ago, I gained an interest in logging in shell scripts. The first idea was a manual logging function such as this one:
add2log() {
    printf "$(date)\tINFO\t%s\t%s\n" "$1" "$2" >>"$logPATH"
}
But I wanted to automate it, so that STDERR would be logged automatically. It's been some time now since I found a satisfying answer, and I'm finally taking the time to share it.

---------

For each of my shell scripts, I now use a "main.sh" that holds the log functions as well as the config (setting up the log and config files). Here's what it looks like:
#!/bin/bash

###################################################################
# MY HEADERS
###################################################################


#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~~~~~~~~~~~~~~~~~Global vars

mainScriptNAME=$(basename "$0")
mainScriptNAME="${mainScriptNAME%.*}"
mainScriptDIR=$(dirname "$0")
version="v0.0"
scriptsDIR="$mainScriptDIR/SCRIPTS"
addonsDIR="$mainScriptDIR/ADDONS"


#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~~~~~~~~~~~~~~~~~LOGGER FUNCS

manualLogger() {

    # $1=PRIORITY $2=FUNCNAME $3=message
    printf "$(date)\t%s\t%s()\t%s\n" "$1" "$2" "$3" >>"$logFilePATH"
}

stdoutLogger() {

    # $1=message
    printf "$(date)\tSTDOUT\t%s\n" "$1" >>"$logFilePATH"
}

stderrLogger() {

    # $1=message
    printf "$(date)\tSTDERR\t%s\n" "$1" >>"$logFilePATH"
}


#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~~~~~~~~~~~~~~~~~LOG & CONF

createLog() {
    # code to set and touch logFilePATH & confFilePATH
    manualLogger "INFO" "${FUNCNAME}" "Log file created."
}


#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#~~~~~~~~~~~~~~~~~~MAIN

#
createLog

#run scripts
{

    #
    source "$scriptsDIR/file1.sh"
    doSomthing1
    doSomthing2

    #
    source "$scriptsDIR/file2.sh"
    anotherFunction1
    anotherFunction2

    ...


} 2> >(while read -r line; do stderrLogger "$line"; done) \
1> >(while read -r line; do stdoutLogger "$line"; done)
In other words:

- all scripts/functions I want to run are in separate files in a ./SCRIPTS folder (so my main.sh is almost always the same)
- I call all these scripts in a group { ... } to catch their STDOUT and STDERR
- their STDOUT and STDERR are redirected to their respective logger functions

The log file looks like this (examples of one manual logger, one STDOUT logger, one STDERR logger):
Lun 23 mai 2022 12:20:42 CEST	INFO	createLog()	Log file created.
Lun 23 mai 2022 12:20:42 CEST	STDOUT	Some standard output
Lun 23 mai 2022 12:20:42 CEST	STDERR	ls: sadsdasd: No such file or directory
The group ensures that ALL output is gathered in the log. Alternatively, you can of course apply the redirections 2> >(while read ... and 1> >(while read ... individually to each function as you wish, so that doSomthing1 has its STDOUT and STDERR going to the log, but doSomthing2 only has its STDERR redirected.
YvUh (21 rep)
Feb 28, 2022, 04:04 PM • Last activity: Aug 10, 2025, 04:05 AM
2 votes
1 answers
130 views
How to wait for background commands that were executed within a subshell?
Given this code:
#!/bin/bash
set -euo pipefail
function someFn() {
  local input_string="$1"
  echo "$input_string start"
  sleep 3
  echo "$input_string end"
}

function blocking() {
  someFn "FOO"
  someFn "BAR"
  someFn "BAZ"
  echo "DONE"
}

function wellBehavedParallel() {
  someFn "FOO" &
  someFn "BAR" &
  someFn "BAZ" &
  wait
  echo "DONE"
}

function subShellMadness() {
  (someFn "FOO" &) &
  (someFn "BAR" &) &
  (someFn "BAZ" &) &
  wait
  echo "DONE"
}

echo "BLOCKING_APPROACH"
blocking

echo "WEL_WORKING_PARALLEL"
wellBehavedParallel

echo "THIS DOES NOT WORK"
subShellMadness
It showcases two expected behaviors and one unexpected one.

1. The blocking one. Simple: it executes one line, then the next. Slow and boring, but solid:
BLOCKING_APPROACH
FOO start
FOO end
BAR start
BAR end
BAZ start
BAZ end
DONE
2. The well-behaved parallel one. All commands launched with & are executed in parallel, the wait waits for all of them to finish, and only then does the main script progress further:
WEL_WORKING_PARALLEL
FOO start
BAR start
BAZ start
FOO end
BAR end
BAZ end
DONE
3. This one is (at least to me) unexpected, but I assume it is "by design" that once I use subshells, I cannot use wait in the main script anymore. The jobs still progress in parallel, but I have lost all control; the main script even ends, and afterwards output is still dumped on the terminal by the subshells:
THIS DOES NOT WORK
FOO start
BAR start
DONE
BAZ start
philipp@DESKTOP-H0QQ2H8:~$ FOO end
BAR end
BAZ end
Is there a way for a main script to wait for subshells to finish? I want to avoid PID-collecting solutions (I know that wait accepts a PID as a parameter), yet from what I gather, getting the right PID in the first place may be prone to race conditions (since $! will print the last executed command's PID, not necessarily my command's), and I fear PID reuse could also make such an approach prone to unexpected behavior (am I waiting on my original command, or did some other process take my PID? When I call wait, I seemingly have no way of knowing). Is there a best practice for waiting on subshells that reliably waits for them to finish? (Not using subshells is not an option for me right now.)
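For comparison, what usually restores control is dropping the inner &, so each subshell is itself the backgrounded direct child and a bare wait sees it. A minimal sketch, with short sleeps standing in for the real work:

```shell
#!/bin/bash
run() {
    # Each task still runs in its own subshell, but the subshell itself is
    # the backgrounded child, so a bare `wait` in the caller tracks it.
    (sleep 0.2; echo "FOO end") &
    (sleep 0.1; echo "BAR end") &
    wait    # blocks until both direct children have exited
    echo "DONE"
}
out=$(run)
echo "$out"
```

wait only tracks direct children of the current shell; the inner & in subShellMadness makes the worker a grandchild, which is why it escapes.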
k0pernikus (16521 rep)
Aug 9, 2025, 01:10 PM • Last activity: Aug 9, 2025, 02:45 PM
0 votes
2 answers
2343 views
How does dmenu_run work?
My system is Debian 9.4, which uses Linux kernel 4.9.0-8-amd64; echo $SHELL on my system gives /bin/bash, and /bin/sh is a link to /bin/dash. I was curious why, every time I run an application with dmenu_run from dwm, there is an additional /bin/bash process that runs as the parent, so I dug further into the script of dmenu_run:

#!/bin/sh
dmenu_path | dmenu "$@" | ${SHELL:-"/bin/sh"} &

I can't understand why my computer has /bin/bash instead of /bin/sh. I also read the corresponding source code in dwm; it shows that it simply forks and execs. There is no reason for /bin/bash to run instead of /bin/sh.
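The last pipeline stage of dmenu_run is the answer in miniature: ${SHELL:-"/bin/sh"} only falls back to /bin/sh when $SHELL is unset, and a bash login shell exports SHELL=/bin/bash, so dmenu's output is piped into bash. A tiny demonstration of that expansion:

```shell
#!/bin/bash
SHELL=/bin/bash
with_shell=${SHELL:-/bin/sh}      # SHELL is set, so the fallback is ignored

unset SHELL
without_shell=${SHELL:-/bin/sh}   # only now does /bin/sh win

echo "$with_shell $without_shell"
```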
JiaHao Xu (248 rep)
Nov 10, 2018, 08:22 AM • Last activity: Aug 8, 2025, 07:10 PM
21 votes
5 answers
53748 views
What is a best practice to represent a boolean value in a shell script?
I know that there are boolean values in bash, but I don't ever see them used anywhere. I want to write a wrapper for some often looked-up information on my machine, for example, whether a particular USB drive is inserted/mounted. What would be the best practice to achieve that?

* A string?

drive_xyz_available=true

* A number (0 for true, ≠0 for false)?

drive_xyz_available=0 # evaluates to true

* A function?

drive_xyz_available() {
    if available_magic; then
        return 0
    else
        return 1
    fi
}

I mostly wonder what would be expected by other people who would want to use the wrapper. Would they expect a boolean value, a command-like variable, or a function to call? From a security standpoint I would think the second option is the safest, but I would love to hear your experiences.
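A sketch of the third option in use, since its exit status plugs straight into if; the mount-point check below is only a stand-in for the real available_magic, and the variable name is hypothetical:

```shell
#!/bin/bash
# Function style: the exit status *is* the boolean, so callers never
# compare strings or numbers against a convention.
drive_xyz_available() {
    # stand-in check; a real script might grep /proc/mounts here
    [ -d "${DRIVE_XYZ_MOUNTPOINT:-/tmp}" ]
}

if drive_xyz_available; then
    state=present
else
    state=absent
fi
echo "$state"
```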
Minix (6065 rep)
Feb 19, 2015, 09:57 AM • Last activity: Aug 8, 2025, 12:39 PM
2 votes
2 answers
2806 views
How to use afl-fuzz (American Fuzzy Lop) with openssl
I am trying to use afl-fuzz with openssl in Ubuntu. A normal usage of afl-fuzz would be:

afl-gcc test.c    //-- this will produce a.out
mkdir testcases
echo "Test case here." > testcases/case1
afl-fuzz -i testcases -o findings ./a.out

Now for openssl it would be something like:

afl-gcc ./config
make              //-- not sure of this :)
afl-fuzz -i test -o findings

where "test" is the folder with test cases for openssl. My question is: what is the parameter for "exe_name" for openssl? And please correct me if I'm wrong with the rest of the code. Thank you
Bigulinis (21 rep)
Jun 4, 2015, 05:15 AM • Last activity: Aug 8, 2025, 09:08 AM
36 votes
12 answers
56208 views
How to compare a program's version in a shell script?
Suppose I want to compare the gcc version to see whether the system has the minimum version installed or not. To check the gcc version, I executed the following:

gcc --version | head -n1 | cut -d" " -f4

The output was:

4.8.5

So, I wrote a simple if statement to check this version against some other value:

if [ "$(gcc --version | head -n1 | cut -d" " -f4)" -lt 5.0.0 ]; then
    echo "Less than 5.0.0"
else
    echo "Greater than 5.0.0"
fi

But it throws an error:

[: integer expression expected: 4.8.5

I understood my mistake: I was comparing strings, and -lt requires an integer. So, is there any other way to compare the versions?
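One frequently used approach, assuming GNU sort (for the -V version-sort flag): ask sort which of the two strings orders last. A sketch, with the gcc output hard-coded as a stand-in:

```shell
#!/bin/bash
# Succeeds when $1 >= $2 in version order (GNU `sort -V`).
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

gcc_version=4.8.5   # stand-in for $(gcc --version | head -n1 | cut -d" " -f4)
if version_ge "$gcc_version" 5.0.0; then
    result="Greater than or equal to 5.0.0"
else
    result="Less than 5.0.0"
fi
echo "$result"
```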
Abhimanyu Saharan (911 rep)
May 27, 2016, 02:01 PM • Last activity: Aug 6, 2025, 04:04 AM
0 votes
4 answers
1869 views
create variables from CSV with varying number of fields
Looking for some help turning a CSV into variables. I tried using IFS, but it seems you need to define the number of fields. I need something that can handle a varying number of fields. *I am modifying my original question with the current code I'm using (taken from the answer provided by hschou), which includes updated variable names using type instead of row, section, etc. I'm sure you can tell by my code, but I am pretty green with scripting, so I am looking for help to determine if and how I should add another loop or take a different approach to parsing the typeC data, because although they follow the same format, there is only one entry for each of the typeA and typeB data, and there can be between 1-15 entries for the typeC data. The goal being only 3 files, one for each of the data types.

Data format:

Container: PL[1-100]
TypeA: [1-20].[1-100].[1-1000].[1-100]-[1-100]
TypeB: [1-20].[1-100].[1-1000].[1-100]-[1-100]
TypeC (1 to 15 entries): [1-20].[1-100].[1-1000].[1-100]-[1-100]

*There is no header in the CSV, but if there were it would look like this (Container, typeA, and typeB data always being in positions 1, 2, 3, and typeC data being all that follow):

Container,typeA,typeB,typeC,typeC,typeC,typeC,..

CSV:

PL3,12.1.4.5-77,13.6.4.5-20,17.3.577.9-29,17.3.779.12-33,17.3.802.12-60,17.3.917.12-45,17.3.956.12-63,17.3.993.12-42
PL4,12.1.4.5-78,13.6.4.5-21,17.3.577.9-30,17.3.779.12-34
PL5,12.1.4.5-79,13.6.4.5-22,17.3.577.9-31,17.3.779.12-35,17.3.802.12-62,17.3.917.12-47
PL6,12.1.4.5-80,13.6.4.5-23,17.3.577.9-32,17.3.779.12-36,17.3.802.12-63,17.3.917.12-48,17.3.956.12-66
PL7,12.1.4.5-81,13.6.4.5-24,17.3.577.9-33,17.3.779.12-37,17.3.802.12-64,17.3.917.12-49,17.3.956.12-67,17.3.993.12-46
PL8,12.1.4.5-82,13.6.4.5-25,17.3.577.9-34

Code:

#!/bin/bash
#Set input file
_input="input.csv"

# Pull variables in from csv
# read file using while loop
while read; do
    declare -a COL=( ${REPLY//,/ } )
    echo -e "containerID=${COL[0]}\ntypeA=${COL[1]}\ntypeB=${COL[2]}" >/tmp/typelist.txt
    idx=1
    while [ $idx -lt 10 ]; do
        echo "typeC$idx=${COL[$((idx+2))]}" >>/tmp/typelist.txt
        let idx=idx+1
        #whack off empty variables
        sed '/\=$/d' /tmp/typelist.txt > /tmp/typelist2.txt && mv /tmp/typelist2.txt /tmp/typelist.txt
        #set variables from temp file
        . /tmp/typelist.txt
    done
    sleep 1

    #Parse data in this loop.#
    echo -e "\n"
    echo "Begin Processing for $container"
    #echo $typeA
    #echo $typeB
    #echo $typeC
    #echo -e "\n"

    #Strip - from sub data for extra parsing
    typeAsub="$(echo "$typeA" | sed 's/\-.*$//')"
    typeBsub="$(echo "$typeB" | sed 's/\-.*$//')"
    typeCsub1="$(echo "$typeC1" | sed 's/\-.*$//')"

    #strip out first two decimals for extra parsing
    typeAprefix="$(echo "$typeA" | cut -d "." -f1-2)"
    typeBprefix="$(echo "$typeB" | cut -d "." -f1-2)"
    typeCprefix1="$(echo "$typeC1" | cut -d "." -f1-2)"
    #echo $typeAsub
    #echo $typeBsub
    #echo $typeCsub1
    #echo -e "\n"
    #echo $typeAprefix
    #echo $typeBprefix
    #echo $typeCprefix1
    #echo -e "\n"

    echo "Getting typeA dataset for $typeA"
    #call api script to pull data ; echo out for test
    echo "API-gather -option -b "$typeAsub" -g all > "$container"typeA-dataset"
    sleep 1
    echo "Getting typeB dataset for $typeB"
    #call api script to pull data ; echo out for test
    echo "API-gather -option -b "$typeBsub" -g all > "$container"typeB-dataset"
    sleep 1
    echo "Getting typeC dataset for $typeC1"
    #call api script to pull data ; echo out for test
    echo "API-gather -option -b "$typeCsub" -g all > "$container"typeC-dataset"
    sleep 1
    echo "Getting additional typeC datasets for $typeC2-15"
    #call api script to pull data ; echo out for test
    echo "API-gather -option -b "$typeCsub2-15" -g all >> "$container"typeC-dataset"
    sleep 1
    echo -e "\n"
done < "$_input"
exit 0

Speed isn't a concern, but if I've done anything really stupid up there, feel free to slap me in the right direction. :)
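One way to sidestep the fixed field count, sketched against the sample rows above (bash assumed): let read split off the three fixed columns and explode whatever remains into a typeC array.

```shell
#!/bin/bash
out=$(
    while IFS=, read -r container typeA typeB rest; do
        # everything after the third comma lands in $rest;
        # split it into however many typeC entries there are
        IFS=, read -ra typeC <<<"$rest"
        echo "$container: typeA=$typeA typeB=$typeB typeC_count=${#typeC[@]}"
    done <<'CSV'
PL4,12.1.4.5-78,13.6.4.5-21,17.3.577.9-30,17.3.779.12-34
PL8,12.1.4.5-82,13.6.4.5-25,17.3.577.9-34
CSV
)
echo "$out"
```

Each iteration then has $container, $typeA, $typeB, and the array ${typeC[@]} ready for the API calls, with no temp file or sed cleanup of empty variables.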
Jdubyas (45 rep)
Jul 12, 2017, 05:09 AM • Last activity: Aug 6, 2025, 12:04 AM
32 votes
8 answers
14366 views
Simultaneously calculate multiple digests (md5, sha256)?
Under the assumption that disk I/O and free RAM are the bottleneck (while CPU time is not the limitation), does a tool exist that can calculate multiple message digests at once?

I am particularly interested in calculating the MD5 and SHA-256 digests of large files (size in gigabytes), preferably in parallel. I have tried openssl dgst -sha256 -md5, but it only calculates the hash using one algorithm.

Pseudo-code for the expected behavior:

for each block:
    for each algorithm:
        hash_state[algorithm].update(block)
for each algorithm:
    print algorithm, hash_state[algorithm].final_hash()
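In bash specifically, one sketch of the single-read idea uses tee with process substitutions, so the file is read once and both hashers run in parallel (coreutils md5sum/sha256sum assumed; the tiny sample file stands in for the gigabyte ones):

```shell
#!/bin/bash
# Sample input file (stand-in for the multi-gigabyte originals).
f=$(mktemp)
printf 'hello\n' > "$f"

# tee duplicates the single read of the file into both hashers, which run
# concurrently; the command substitution collects both result lines.
digests=$(tee >(md5sum) >(sha256sum) < "$f" > /dev/null)
echo "$digests"
```

The command substitution only returns once both hashers have closed their copy of the output pipe, so no explicit wait is needed.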
Lekensteyn (21600 rep)
Oct 23, 2014, 10:00 AM • Last activity: Aug 4, 2025, 06:51 AM
23 votes
2 answers
9084 views
List of shells that support `local` keyword for defining local variables
I know that Bash and Zsh support local variables, but there are systems that only have POSIX-compatible shells, and local is undefined in POSIX shells. So I want to ask: which shells support the local keyword for defining local variables? **Edit**: By shells I mean the default /bin/sh shell.
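For reference, the behavior being asked about, runnable under bash or dash (both accept this form; what /bin/sh guarantees on a given system is exactly the question):

```shell
#!/bin/sh
x=outer
f() {
    local x=inner   # shadows the outer x only inside the function
    echo "$x"
}
inner_val=$(f)
echo "$inner_val $x"
```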
mja (1525 rep)
Jan 10, 2019, 02:13 PM • Last activity: Aug 1, 2025, 01:19 PM
1 votes
1 answers
3180 views
How to automatically detect and write to usb with variable spaces in its name
I am doing the second BASH exercise from the TLDP Bash-Scripting Guide, and I have most of it figured out up until the part where it comes time to copy the compressed files to an inserted USB.

> Home Directory Listing
>
> Perform a recursive directory listing on the user's home directory and save the information to a file. Compress the file, have the script prompt the user to insert a USB flash drive, then press ENTER. Finally, save the file to the flash drive after making certain the flash drive has properly mounted by parsing the output of df. Note that the flash drive must be unmounted before it is removed.

As I progress with the script it is becoming less elegant, and I was wondering if there was a better way to do this. I know creating files is likely not the most efficient way to do the comparisons, but I have not got the shell expansions figured out yet, and I intend to change those once I get it working. The problem, specifically, is to ensure that the USB is mounted and that I am writing to the USB and nowhere else. I am comparing the last line of df after the USB is plugged in with the last line of the diff between df before the USB is plugged in and df after it is plugged in, and looping until they match. Unfortunately, the diff result starts with a >, but I intend to use sed to get rid of that. The real problem is that the path to where my USB is mounted is:

> /media/flerb/"Title of USB with spaces"

To make this portable for USBs that may have different names, is my best bet from here to do something with awk and field separators? And as a follow-up, I know this is pretty inelegant, and I wonder if there is a cleaner way to go about this... especially because this is the second exercise and still marked EASY.

The output from the df tails is:

/dev/sdb1       15611904   8120352   7491552  53% /media/flerb/CENTOS 7 X8
> /dev/sdb1       15611904   8120352   7491552  53% /media/flerb/CENTOS 7 X8

The script so far:

#!/bin/bash

if [ "$UID" -eq 0 ] ; then
    echo "Don't run this as root"
    exit 1
fi

#Create a backup file with the date as title in a backup directory
BACKUP_DIR="$HOME/backup"
DATE_OF_COPY=$(date --rfc-3339=date)
BACKUP_FILE="$BACKUP_DIR/$DATE_OF_COPY"

[ -d "$BACKUP_DIR" ] || mkdir -m 700 "$BACKUP_DIR"

#find all files recursively in $HOME directory
find -P $HOME >> "$BACKUP_FILE"

#use lzma to compress
xz -zk --format=auto --check=sha256 --threads=0 "$BACKUP_FILE"

#making files to use in operations
BEFORE="$BACKUP_DIR"/before_usb.txt
AFTER="$BACKUP_DIR"/after_usb.txt
DIFFERENCE="$BACKUP_DIR"/difference.txt

df > "$BEFORE"
read -p 'Enter USB and press any button' ok
sleep 2
df > "$AFTER"
diff "$BEFORE" "$AFTER" > "$DIFFERENCE"
sleep 2
echo

TAIL_AFTER=$(tail -n 1 "$AFTER")
TAIL_DIFF=$(tail -n 1 "$DIFFERENCE")

until [ "$TAIL_AFTER" == "$TAIL_DIFF" ] ;
do
    echo "Not yet"
    df > "$AFTER"
    TAIL_AFTER=$(tail -n 1 "$AFTER")
    diff "$BEFORE" "$AFTER" > "$DIFFERENCE"
    TAIL_DIFF=$(tail -n 1 "$DIFFERENCE")
    echo "$TAIL_AFTER"
    echo "$TAIL_DIFF"
    sleep 1
done
exit $?
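On the mount-point-with-spaces point, one sketch: POSIX df -P output has exactly five fixed leading fields, so blanking them in awk leaves the whole mount point, spaces and all (the line below is taken from the question's df output):

```shell
#!/bin/sh
line='/dev/sdb1 15611904 8120352 7491552 53% /media/flerb/CENTOS 7 X8'

# Blank the five fixed df -P columns; whatever remains is the mount point,
# even when it contains spaces.
mnt=$(printf '%s\n' "$line" |
    awk '{ for (i = 1; i <= 5; i++) $i = ""; sub(/^ +/, ""); print }')
echo "$mnt"
```

This survives any number of words in the mount point, though runs of multiple consecutive spaces inside the name would be collapsed to single spaces.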
flerb (983 rep)
Jul 12, 2017, 05:28 PM • Last activity: Jul 30, 2025, 05:07 AM
-2 votes
2 answers
59 views
HISTTIMEFORMAT not working as desired in RHEL 8 bash 4.4.20
So, trying to capture history with dates and readable timestamps, AND the command should appear on the same line. The following is failing.

Bash version: GNU bash, version 4.4.20(1)-release (x86_64-redhat-linux-gnu)
OS: RHEL 8.x

In .bashrc:

setopt EXTENDED_HISTORY
export HISTTIMEFORMAT='%F %I:%M:%S %T'
#export HISTTIMEFORMAT="%F %T "
export HISTSIZE=1000
export HISTFILE=$HOME/.bash_history-$USER
export HISTFILESIZE=1000
export PROMPT_COMMAND='history -a'

Environment variables:

$ env|grep HIST
HISTCONTROL=ignoredups
HISTTIMEFORMAT=%F %I:%M:%S %T
HISTFILE=/home/user1/.bash_history-user1
HISTSIZE=1000
HISTFILESIZE=1000

The records appear as:

#1753745611
cd
#1753745616
ls -ltra|tail
#1753745626
cat .profile
#1753745633
cat .kshrc

**Expected:**

Mon Jul 28 23:33:31 GMT 2025 cd
Mon Jul 28 23:33:36 GMT 2025 ls -ltra|tail
Mon Jul 28 23:33:46 GMT 2025 cat .profile
Mon Jul 28 23:33:53 GMT 2025 cat .kshrc

Two problems with this:

1. The timestamp appears in UNIX epoch format.
2. The timestamp and the command appear on separate lines. They should be together.

It also behaves the same way when using KSH. How can this be fixed, preferably using HISTTIMEFORMAT? Thank you.
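Two things worth separating here (and note that setopt is zsh syntax, which bash rejects): the history file always stores raw epoch seconds on their own # line, and HISTTIMEFORMAT only controls how the history builtin renders them on screen. The same strftime format can be applied to a stored stamp by hand, sketched with GNU date:

```shell
#!/bin/bash
stamp=1753745611   # as stored in the history file (from the question)

# HISTTIMEFORMAT-style rendering done manually (GNU date, pinned to UTC):
rendered=$(TZ=UTC0 date -d "@$stamp" '+%F %T')
echo "$rendered"
```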
Rajeev (258 rep)
Jul 29, 2025, 03:33 PM • Last activity: Jul 29, 2025, 10:01 PM
2 votes
1 answers
134 views
Semicolon in conditional structures after the closing double bracket in a bash/zsh script?
Continuing https://unix.stackexchange.com/questions/48805/semicolon-in-conditional-structures (which handles single brackets), what's the point of having a semicolon after the closing DOUBLE bracket ]]? In my tests, running
#!/bin/zsh --
if [[ "a" == "a" ]] then
	echo "true"
else
	echo "false"
fi

if [[ "a" == "a" ]]; then
	echo "true"
else
	echo "false"
fi

if [[ "a" == "a" ]]
then
	echo "true"
else
	echo "false"
fi

if [[ "a" == "a" ]];
then
	echo "true"
else
	echo "false"
fi
yields
true
true
true
true
and running
#!/bin/zsh --
if [[ "a" == "b" ]] then
	echo "true"
else
	echo "false"
fi

if [[ "a" == "b" ]]; then
	echo "true"
else
	echo "false"
fi

if [[ "a" == "b" ]]
then
	echo "true"
else
	echo "false"
fi

if [[ "a" == "b" ]];
then
	echo "true"
else
	echo "false"
fi
yields
false
false
false
false
No error is reported. In zsh, what's the difference between the conditionals with a semicolon ; after the closing double bracket and the conditionals without a semicolon after the closing double bracket? The same question goes for bash.
user743115 (1 rep)
Jul 26, 2025, 12:42 AM • Last activity: Jul 26, 2025, 06:12 PM
2 votes
2 answers
2496 views
Shell-/Bash-Script to delete old backup files by name and specific pattern
Every hour, backup files of a database are created. The files are named like this:

prod20210528_1200.sql.gz
pattern: prod`date +\%Y%m%d_%H%M`

The pattern could be adjusted if needed. I would like to have a script that:

- keeps all backups for the last x (e.g. 3) days
- for backups older than x (e.g. 3) days, only the backup from time 00:00 shall be kept
- for backups older than y (e.g. 14) days, only one file per week (Monday) shall be kept
- for backups older than z (e.g. 90) days, only one file per month (1st of each month) shall be kept
- the script should rather use the filename instead of the date (created) information of the file, if that is possible
- the script should run every day

Unfortunately, I have very little knowledge of the shell/bash script language. I would do something like this:

if (file < today - (x + 1)) {
    if (%H_of_file != 00 AND %M_of_file != 00) {
        delete file
    }
}
if (file < today - (y + 1)) {
    if (file != Monday) {
        delete file
    }
}
if (file < today - (z + 1)) {
    if (%m_of_file != 01) {
        delete file
    }
}

Does this make any sense to you? Thank you very much! All the best, Phantom
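Since the timestamp is embedded in the filename, the date parts can be peeled off with parameter expansion alone, which is the building block such a cleanup script needs; a sketch:

```shell
#!/bin/sh
f=prod20210528_1200.sql.gz

stamp=${f#prod}          # strip the prefix  -> 20210528_1200.sql.gz
stamp=${stamp%.sql.gz}   # strip the suffix  -> 20210528_1200
day=${stamp%_*}          # 20210528
hm=${stamp#*_}           # 1200
echo "$day $hm"
```

From $day, GNU date -d can then derive the weekday and day-of-month needed for the weekly and monthly rules.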
Phantom (143 rep)
May 28, 2021, 05:25 PM • Last activity: Jul 26, 2025, 04:04 AM
0 votes
1 answers
2207 views
/usr/share/bash-completion/bash_completion parse error
I am facing an error when I run $ source ~/.bashrc. The error is:

/usr/share/bash-completion/bash_completion:1512: parse error near `|'

The bash_completion file (the first line below is line 1512):
if ! [[ "$i" =~ ^\~.*|^\/.* ]]; then
            if [[ "$configfile" =~ ^\/etc\/ssh.* ]]; then
                i="/etc/ssh/$i"
            else
                i="$HOME/.ssh/$i"
            fi
        fi
Please help me solve this problem. Thanks!
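For what it's worth, the usual way to keep an alternation like the one on line 1512 portable across shells is to store the regex in a variable first; a sketch of the same test in that form (the value of $i is hypothetical):

```shell
#!/bin/bash
i='/etc/ssh/ssh_config'   # hypothetical value

# Keeping the alternation in a variable avoids quoting/parsing trouble:
re='^~.*|^/.*'
if [[ $i =~ $re ]]; then
    verdict=absolute
else
    verdict=relative
fi
echo "$verdict"
```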
Sơn Nguyễn Ngọc (9 rep)
Feb 19, 2021, 02:12 AM • Last activity: Jul 26, 2025, 02:05 AM
2 votes
2 answers
2845 views
Self update bash script if there are any updates first then continue on, with Git
I'm trying to add the ability for my ArchLinux installer script to check whether it's up to date, based on whether its version number matches the one on GitLab. The primary script that runs the installer (and all of the numbered script files) is the aalis.sh script; it basically runs the other files together. The version numbering would be something like 1.2.3 (major.minor.patch). Basically, whenever I make any changes to the script, I will change the script's version number on GitLab; and I want the script itself to be able to detect that its version number doesn't match the one on GitLab (for cases where someone has an outdated version of the script and tries to run it), automatically update itself using git fetch origin master, and then rerun itself using the updated contents.
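A sketch of the shape this usually takes; the hard-coded latest_version stands in for whatever the script would actually fetch from GitLab, and the git/exec lines are left commented since they depend on the repo layout:

```shell
#!/bin/bash
current_version="1.2.3"

# In the real script this would be fetched, e.g. from a VERSION file
# in the GitLab repo; hard-coded here as a stand-in:
latest_version="1.2.4"

# Succeeds when the two version strings differ.
needs_update() { [ "$1" != "$2" ]; }

if needs_update "$current_version" "$latest_version"; then
    echo "update required"
    # git fetch origin master && git merge --ff-only origin/master
    # exec "$0" "$@"    # re-run the freshly updated script in place
fi
```

The exec "$0" "$@" pattern replaces the running process with the updated script, so the old copy never continues past the update.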
Nova Leary (43 rep)
Jan 2, 2022, 01:00 AM • Last activity: Jul 24, 2025, 06:06 PM
1 votes
3 answers
2439 views
Searching /usr/dict/words to find words with certain properties
I would like to write a script to search through /usr/dict/words to find all words that meet some criteria I specify. For example, finding all palindromic words (like "racecar", "madam", etc.) or finding all words where the first and second halves reversed also form a word (like "german" and "manger"). The framework of the script would be a simple loop to read each word in the dictionary, and I could change the criteria depending on what I want to look for by substituting an expression or something similar. I figure I would need to involve regular expressions somehow (or otherwise find a way to look at individual characters in each word). I would also need a way to compare the characters in my current word to the other words in the dictionary (such as with my second example above). What would be the best tool(s) to use for this task?
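As a skeleton, the loop plus one such predicate might look like this (rev comes from util-linux; the inline word list is a stand-in for reading /usr/dict/words):

```shell
#!/bin/sh
# Predicate: succeeds when the word equals its own reversal.
is_palindrome() {
    [ "$1" = "$(printf '%s' "$1" | rev)" ]
}

# The real loop would be:  while read -r w; do ... done < /usr/dict/words
matches=$(for w in racecar madam german hello; do
    is_palindrome "$w" && echo "$w"
done)
echo "$matches"
```

Swapping in a different predicate function (e.g. one that tests whether the swapped halves are also in the dictionary, via grep or look) changes the criterion without touching the loop.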
user161121
Mar 14, 2016, 11:31 PM • Last activity: Jul 23, 2025, 10:05 AM
0 votes
3 answers
1964 views
sed? - Insert line after a string with special characters to Neutron service
I am attempting to write a bash script that will insert a string after matching on a string in /usr/lib/systemd/system/neutron-server.service. I have been able to do this on other files easily, as I was just inserting variables into the necessary config files, but this one seems to be giving me trouble. I believe the error is that sed is not ignoring the special characters. In my attempts I have tried using sed with single quotes and double quotes (which I understand are for variables, but thought it might change something). Is there a better way of going about this, or some special sed flags or syntax I am missing?

sed '/--config-file /etc/neutron/plugin.ini/a\--config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini' /usr/lib/systemd/system/neutron-server

TL;DR - Insert

--config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini

after

--config-file /etc/neutron/plugin.ini

Original file:

[Unit]
Description=OpenStack Neutron Server
After=syslog.target network.target

[Service]
Type=notify
User=neutron
ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.log
PrivateTmp=true
NotifyAccess=all
KillMode=process
TimeoutStartSec="infinity"

[Install]
WantedBy=multi-user.target

File after the desired change:

[Unit]
Description=OpenStack Neutron Server
After=syslog.target network.target

[Service]
Type=notify
User=neutron
ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.log
PrivateTmp=true
NotifyAccess=all
KillMode=process
TimeoutStartSec="infinity"

[Install]
WantedBy=multi-user.target
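One detail that typically unblocks this: sed's s command accepts any delimiter, so the slashes in the paths stop colliding, and & in the replacement re-inserts the matched text. A sketch on a shortened ExecStart line:

```shell
#!/bin/sh
line='ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common'

# Use | as the s-command delimiter so the / in the paths need no escaping;
# & stands for the matched text, so the new flag is appended right after it.
new=$(printf '%s\n' "$line" |
    sed 's|--config-file /etc/neutron/plugin.ini|& --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini|')
echo "$new"
```

The original attempt fails partly because /…/a\ is an address plus append command, where the unescaped slashes inside the address terminate it early; a substitution with a non-slash delimiter avoids both problems.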
fly2809 (1 rep)
Jun 13, 2017, 08:48 PM • Last activity: Jul 22, 2025, 09:04 AM
0 votes
2 answers
1982 views
How to pass command line arguments to bash script when executing with at?
I have a bash script that needs to run at a specific time, and I found out that at pretty much does what I need. But the problem is, I'm not sure how I can pass command-line arguments to the bash script through at. The command below is what I finally ended up with after looking through some other solutions:

echo "-f job.sh argument" | xargs at now + 2 minutes

But this does not work. Can anyone help me with this?
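at reads the command line to run from stdin, so the arguments can simply be embedded in that line; a sketch in which sh stands in for at so the effect shows immediately (the job script is generated here purely as a stand-in):

```shell
#!/bin/bash
# A stand-in job script that prints the argument it received:
job=$(mktemp)
printf '#!/bin/sh\necho "got: $1"\n' > "$job"
chmod +x "$job"

# Build the command line with the argument quoted safely (%q is bash printf):
cmd=$(printf '%q %q' "$job" "argument one")

# With at, this would be:  printf '%s\n' "$cmd" | at now + 2 minutes
out=$(printf '%s\n' "$cmd" | sh)
echo "$out"
```

The %q quoting matters once arguments contain spaces or shell metacharacters, since at re-parses the submitted line with the shell when the job fires.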
yash (1 rep)
Jan 23, 2020, 06:34 PM • Last activity: Jul 21, 2025, 10:06 AM