Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
-2 votes • 5 answers • 68 views
Perl or sed script to remove a specific content of a file from another bigger file
I have a file **(FILE1)** with some repeated sections as below (example):
LINE 1 ABCD
LINE 2 EFGA
LINE 3 HCJK
REMOVE LINE11
REMOVE LINE12
REMOVE LINE13
LINE 4 ABCDH
LINE 5 EFGAG
LINE 6 HCJKD
REMOVE LINE11
REMOVE LINE12
REMOVE LINE13
LINE 7 ABCDH
LINE 8 EFGAG
LINE 9 HCJKD
I have several such files. In a pattern file (**PATTERN**) I have these removable lines stored.
REMOVE LINE11
REMOVE LINE12
REMOVE LINE13
I want to write a sed, awk (bash) or Perl script to remove all the sections of **FILE1** that match the content of the file **PATTERN**. Another requirement: remove all occurrences but leave the first one in place.
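A possible approach, as an untested awk sketch (assuming **PATTERN** holds exactly the lines of one block, in order): slide a window the size of the pattern over **FILE1**, drop every matching window after the first, and print everything else.

awk '
    NR == FNR { pat[FNR] = $0; n = FNR; next }   # slurp the PATTERN block
    {
        buf[++m] = $0
        if (m == n) {                            # a full window is buffered
            hit = 1
            for (i = 1; i <= n; i++) if (buf[i] != pat[i]) { hit = 0; break }
            if (hit && seen++) { m = 0; next }   # later occurrence: drop it
            print buf[1]                         # no match (or first hit): emit
            for (i = 2; i <= n; i++) buf[i-1] = buf[i]
            m = n - 1
        }
    }
    END { for (i = 1; i <= m; i++) print buf[i] }
' PATTERN FILE1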
Pratap
(49 rep)
Aug 6, 2025, 06:03 AM
• Last activity: Aug 6, 2025, 03:31 PM
3 votes • 6 answers • 457 views
Awk prints only first word on the field column
Please see my Linux bash script below. I can't achieve my target output with my current code: it keeps reading the whole of column 4 as one block.
input_file.txt:
REV NUM |SVN PATH | FILE NAME |DOWNLOAD OPTIONS
1336 |svn/Repo/PROD | test2.txt |PROGRAM APPLICATION_SHORT_NAME="SQLGL" |
1334 |svn/Repo/PROD | test.txt |REQUEST_GROUP REQUEST_GROUP_NAME="Program Request Group" APPLICATION_SHORT_NAME="SQLGL" |
my code:
#!/bin/bash
REV_NUM=($(awk -F "|" 'NR>1 {print $1}' input_file.txt))
COMPONENT=($(awk -F "|" 'NR>1 {print $3}' input_file.txt))
DL_OPS="$(awk -F "|" 'NR>1 {print $4}' input_file.txt)"
#LOOP
REV_NUM_COUNT=${#REV_NUM[*]}
for (( x=0 ; x<$REV_NUM_COUNT ; x++ ))
do
echo "${COMPONENT[x]} ${DL_OPS[x]}"
done
actual output:
Exporting Component from SVN . . .
test2.txt PROGRAM APPLICATION_SHORT_NAME="SQLGL"
REQUEST_GROUP REQUEST_GROUP_NAME="Program Request Group" APPLICATION_SHORT_NAME="SQLGL"
test.txt
target output:
Exporting Component from SVN . . .
test2.txt PROGRAM APPLICATION_SHORT_NAME="SQLGL"
test.txt REQUEST_GROUP REQUEST_GROUP_NAME="Program Request Group" APPLICATION_SHORT_NAME="SQLGL"
Thank you so much
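A sketch of one way around this, assuming the input shown above: the loop goes wrong because DL_OPS is a plain string (not an array) and gets word-split, so ${DL_OPS[x]} only works for x=0. Processing each row in a single awk pass keeps columns 3 and 4 paired per line:

# print fields 3 and 4 together, row by row; trim blanks around the file name
awk -F'|' 'NR > 1 { gsub(/^[ \t]+|[ \t]+$/, "", $3); print $3, $4 }' input_file.txt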
user765641
(33 rep)
Aug 4, 2025, 02:53 AM
• Last activity: Aug 4, 2025, 07:20 PM
1 vote • 2 answers • 1850 views
Filter out command line options before passing to a program
I am running `cmake` and it is passing a flag to my linker that is unrecognized (`-rdynamic`), and it's causing an error. I cannot figure out where it is getting this flag from, so I want to just filter it out.
I can specify `-DCMAKE_LINKER=<program>`, so what I would like to do is set `<program>` to a program that reads its command line arguments, filters out the bad one, and then passes the result on to the actual linker.
I have been using `awk '{gsub("-rdynamic", "");print}'`, but I don't know how to make the input stdin and the output `ld`.
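A sketch of such a filter, assuming the real linker is plain `ld` (adjust as needed). The arguments arrive in "$@", not on stdin, so a small wrapper script is simpler than awk here; save it somewhere executable and point `-DCMAKE_LINKER=` at it:

#!/bin/bash
# filter-ld (hypothetical name): drop -rdynamic, pass everything else through
args=()
for a in "$@"; do
    [[ $a == -rdynamic ]] || args+=("$a")
done
exec ld "${args[@]}"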
RJTK
(123 rep)
Oct 5, 2018, 03:48 PM
• Last activity: Aug 3, 2025, 12:34 AM
2 votes • 3 answers • 7476 views
Parse value from different JSON strings (No jq)
I'm using a bash script that needs to read JSON output and parse a value for different JSON keys. Here's the sample input. It needs to read the value next to `Content` or any other key. For example: look up `Content` and print `Value1`; look up `DeviceType` and print `Value4`.
Sample Input:
{"Content":"Value1","CreationMethod":"Value2","database":"Value3","DeviceType":"Value4"}
I tried a combination of `sed` and `awk`:
sed 's/["]/ /g' | awk '{print $4}'
... but that only works if the position of `Content` remains the same in the output. In a different JSON output the position of `Content` changes, putting the value out of scope, so `awk '{print $4}'` picks up the wrong value.
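A sketch of a position-independent lookup, assuming flat, single-line JSON like the sample (a real JSON parser is still the robust choice): split on commas, then select the wanted key by name rather than by position.

key=Content   # or DeviceType, etc.
# input.json stands in for whatever produces the JSON
tr ',' '\n' < input.json | sed -n 's/.*"'"$key"'":"\([^"]*\)".*/\1/p'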
Riz
(59 rep)
Nov 8, 2019, 03:46 PM
• Last activity: Aug 3, 2025, 12:04 AM
1 vote • 1 answer • 3177 views
How to automatically detect and write to usb with variable spaces in its name
I am doing the second Bash exercise from the TLDP Bash-Scripting Guide, and I have most of it figured out, up until the part where it comes time to copy the compressed files to an inserted USB.
> Home Directory Listing
>
> Perform a recursive directory listing on the user's home directory and save the information to a file. Compress the file, have the script
> prompt the user to insert a USB flash drive, then press ENTER.
> Finally, save the file to the flash drive after making certain the
> flash drive has properly mounted by parsing the output of df. Note
> that the flash drive must be unmounted before it is removed.
As I progress with the script it is becoming less elegant, and I was wondering if there is a better way to do this. I know creating files is likely not the most efficient way to do the comparisons, but I have not got the shell expansions figured out yet, and intend to change those once I get it working.
The problem, specifically, is to ensure that the USB is mounted and that I am writing to the USB and nowhere else. I am comparing the last line of `df` after the USB is plugged in with the last line of the `diff` between `df` before the USB is plugged in and `df` after, looping until they match. Unfortunately, the `diff` result starts with a `>`, but I intend to use `sed` to get rid of that. The real problem is that the path to where my USB is mounted is:
> /media/flerb/"Title of USB with spaces"
To make this portable for USBs that may have different names, is my best bet from here to do something with awk and field separators? And as a follow-up: I know this is pretty inelegant, and wonder if there is a cleaner way to go about this... especially because this is the second exercise and still rated EASY.
The output from the df tails is:
/dev/sdb1 15611904 8120352 7491552 53% /media/flerb/CENTOS 7 X8
> /dev/sdb1 15611904 8120352 7491552 53% /media/flerb/CENTOS 7 X8
The script so far:
#!/bin/bash

if [ "$UID" -eq 0 ] ; then
    echo "Don't run this as root"
    exit 1
fi

#Create a backup file with the date as title in a backup directory
BACKUP_DIR="$HOME/backup"
DATE_OF_COPY=$(date --rfc-3339=date)
BACKUP_FILE="$BACKUP_DIR/$DATE_OF_COPY"

[ -d "$BACKUP_DIR" ] || mkdir -m 700 "$BACKUP_DIR"

#find all files recursively in $HOME directory
find -P $HOME >> "$BACKUP_FILE"

#use lzma to compress
xz -zk --format=auto --check=sha256 --threads=0 "$BACKUP_FILE"

#making files to use in operations
BEFORE="$BACKUP_DIR"/before_usb.txt
AFTER="$BACKUP_DIR"/after_usb.txt
DIFFERENCE="$BACKUP_DIR"/difference.txt

df > "$BEFORE"
read -p 'Enter USB and press any button' ok
sleep 2
df > "$AFTER"
diff "$BEFORE" "$AFTER" > "$DIFFERENCE"
sleep 2
echo

TAIL_AFTER=$(tail -n 1 "$AFTER")
TAIL_DIFF=$(tail -n 1 "$DIFFERENCE")

until [ "$TAIL_AFTER" == "$TAIL_DIFF" ] ;
do
    echo "Not yet"
    df > "$AFTER"
    TAIL_AFTER=$(tail -n 1 "$AFTER")
    diff "$BEFORE" "$AFTER" > "$DIFFERENCE"
    TAIL_DIFF=$(tail -n 1 "$DIFFERENCE")
    echo "$TAIL_AFTER"
    echo "$TAIL_DIFF"
    sleep 1

done
exit $?
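On the specific question of mount points with spaces, a sketch: `df`'s first five columns (filesystem, blocks, used, available, use%) normally contain no spaces, so rebuilding everything from field 6 onward recovers the full mount path. On a GNU userland, `df --output=target` is an even shorter route.

TAIL_AFTER=$(df | tail -n 1)
# everything from field 6 to NF is the mount point, spaces included
MOUNT_POINT=$(printf '%s\n' "$TAIL_AFTER" |
    awk '{ for (i = 6; i <= NF; i++) printf "%s%s", $i, (i < NF ? " " : "\n") }')
echo "USB mounted at: $MOUNT_POINT"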
flerb
(983 rep)
Jul 12, 2017, 05:28 PM
• Last activity: Jul 30, 2025, 05:07 AM
1 vote • 2 answers • 93 views
How to extract a sub-heading as string which is above a search for word
I'm new to Bash and I've been self-taught. I think I'm learning well, but I do have staggering gaps in my base knowledge. So sorry if this is woefully simple bbuuuttt...
Essentially, I need to sift through a large amount of data and pull out specific phrases. I've been making slow and steady progress, but I'm now stuck on getting a heading for a line of data.
Here's what the file looks like:
A lot (AND I MEAN A LOT) of data above
STATE 1:
133a -> 135a : 0.010884 (c= -0.10432445)
134a -> 135a : 0.933650 (c= -0.96625573)
STATE 2:
129a -> 135a : 0.016601 (c= -0.12884659)
130a -> 135a : 0.896059 (c= -0.94660402)
130a -> 136a : 0.011423 (c= 0.10687638)
130a -> 137a : 0.023884 (c= -0.15454429)
130a -> 138a : 0.020361 (c= -0.14269354)
STATE 3:
133a -> 135a : 0.899436 (c= -0.94838591)
134a -> 136a : 0.012334 (c= -0.11106052)
STATE 4:
129a -> 135a : 0.688049 (c= -0.82948703)
129a -> 136a : 0.212819 (c= -0.46132295)
129a -> 137a : 0.036987 (c= 0.19231930)
130a -> 135a : 0.011990 (c= 0.10949722)
134a -> 135a : 0.922010 (c= -0.98192034)
There are many more states (up to 30) of varying length below, which may also include what I'm looking for.
And then more data below that
I have got the numbers I am looking for saved in variables (134 and 135 for this example), and I can use:
grep "${a}a -> ${b}a" File.Name
to show me the lines that have 134 -> 135 on them, but I need the STATE that they are in.
I've tried using grep to look above the found lines to the nearest line with STATE in it, but I couldn't figure out how to set the length of -B as a condition rather than a number (I don't know if it can be done). I have also tried with awk and sed to find the line with STATE and look below to see if 134 -> 135 is beneath it before the next STATE, but I couldn't find a way to stop it from printing at the next STATE instead of just continuing until it found the next 134 -> 135. The ideal output (for the above example) would be:
STATE 1
STATE 4
but
STATE 1:
133a -> 135a : 0.010884 (c= -0.10432445)
134a -> 135a : 0.933650 (c= -0.96625573)
STATE 4:
129a -> 135a : 0.688049 (c= -0.82948703)
129a -> 136a : 0.212819 (c= -0.46132295)
129a -> 137a : 0.036987 (c= 0.19231930)
130a -> 135a : 0.011990 (c= 0.10949722)
134a -> 135a : 0.922010 (c= -0.98192034)
is also absolutely fine. I just need it to spit out the correct STATES and no others. It doesn't really matter what other data comes with it.
Also, this is going to be applied to about 40 other files with similar layouts, so I need it not to be specific to this one (aka not grep STATE 1 and grep state 4)
I'm hoping someone can help me or tell me if this is impossible to do.
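A sketch in awk, with the two numbers passed in as variables (assuming lines shaped like `134a -> 135a` as shown): remember the most recent STATE heading, and print it whenever the wanted transition appears beneath it.

awk -v a="$a" -v b="$b" '
    /^STATE [0-9]+:/ { state = $0 }      # remember the latest heading
    $0 ~ a"a -> "b"a" { print state }    # e.g. matches "134a -> 135a"
' File.Name

For the example above, with a=134 and b=135, this prints "STATE 1:" and "STATE 4:".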
TC575
(13 rep)
Jul 11, 2025, 08:37 PM
• Last activity: Jul 16, 2025, 09:53 AM
1 vote • 3 answers • 2435 views
find and replace with the value in another file
I have two files, with different formats, whose columns are tab-spaced. I have to compare columns `column1` and `column2` of `file1` with `file2`. If they match, I need to replace the value in `column6` of `file1` with the value in `column3` of `file2`.
I have tried using `awk` but I am not able to replace the value. Could you please advise on the below snippet?
awk 'FILENAME == ARGV {
m[$1,$2] = $6;
next;
}
{
if (($1,$2) in m) {
m[$6]= $3; print m[$6];
}
}' file1 file2
top few lines of file1
1201 12011 1 0 0 0 1
1202 12021 1 0 0 0 1
1203 12031 1 0 0 0 1
1204 12041 1 0 0 0 2
1207 12071 1 0 0 0 2
1209 12091 1 0 0 0 1
1210 12101 1 0 0 0 1
1212 12121 1 0 0 0 1
1213 12131 1 0 0 0 1
1214 12141 1 0 0 0 2
top few lines of file2
1201 12011 1
1202 12021 1
1203 12031 1
1204 12041 1
1206 NA 1
1207 12071 2
1208 NA 1
1209 12091 2
1210 12101 2
I want to assign the values from file2 to the file1 column, as I would like to write the updated content into another file, out.txt.
I tried the below code as per the comments:
awk '{
if (FNR==NR) {
a[FNR]=$1;b[FNR]=$2;c[FNR]=$3}
else {
if (a[FNR] == $1 && b[FNR] ==$2) {
$6=c[FNR]} else {$6=$6};
print $0;
}
}' file2 file1
Got this output
1201 12011 1 0 0 1 1
1202 12021 1 0 0 1 1
1203 12031 1 0 0 1 1
1204 12041 1 0 0 1 2
1207 12071 1 0 0 0 2
1209 12091 1 0 0 0 1
1210 12101 1 0 0 0 1
1212 12121 1 0 0 0 1
1213 12131 1 0 0 0 1
1214 12141 1 0 0 0 2
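A sketch of a keyed join that avoids the line-number pairing problem (the FNR approach above breaks as soon as the files drift out of step, as they do after line 4): index file2 by its first two columns, then look each line of file1 up by content.

awk '
    # add BEGIN { FS = OFS = "\t" } here if the columns are strictly tab-separated
    NR == FNR { repl[$1,$2] = $3; next }     # file2: (col1,col2) -> col3
    ($1,$2) in repl { $6 = repl[$1,$2] }     # file1: rewrite column 6 on a match
    { print }
' file2 file1 > out.txt

Note that awk re-emits modified lines joined with OFS, so the original spacing of changed lines is normalized.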
Prradep
(203 rep)
Aug 4, 2015, 01:29 PM
• Last activity: Jul 15, 2025, 09:02 PM
108 votes • 6 answers • 431760 views
Using awk to sum the values of a column, based on the values of another column
I am trying to sum certain numbers in a column using `awk`. I would like to sum just column 3 of the "smiths" to get a total of 212. I can sum the whole column using `awk`, but not just the "smiths". I have:
awk 'BEGIN {FS = "|"} ; {sum+=$3} END {print sum}' filename.txt
Also, I am using PuTTY. Thank you for any help.
smiths|Login|2
olivert|Login|10
denniss|Payroll|100
smiths|Time|200
smiths|Logout|10
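A sketch of the usual pattern: put a condition on column 1 in front of the action, so only the "smiths" rows feed the sum.

awk -F'|' '$1 == "smiths" { sum += $3 } END { print sum }' filename.txt
# prints 212 for the sample data: 2 + 200 + 10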
jake
(1225 rep)
Nov 14, 2015, 04:56 AM
• Last activity: Jul 14, 2025, 05:03 AM
6 votes • 5 answers • 1500 views
Summing rows in a new column using sed, awk and perl?
I have a file that contains numbers, something like:
1 11 323
2 13 3
3 44 4
4 66 23
5 70 23
6 34 23
7 24 22
8 27 5
How can I sum the rows and output the results in a column, so the results are as follows:
1 11 323 335
2 13 3 18
3 44 4 51
4 66 23 93
5 70 23 98
6 34 23 63
7 24 22 53
8 27 5 40
I would like to see solutions in sed, awk, and perl.
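An awk sketch (the same idea ports directly to perl): total the fields of each line, then print the line with the sum appended.

awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print $0, s }' file
# first row: 1 + 11 + 323 = 335, printed as "1 11 323 335"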
user311543
Sep 19, 2018, 03:07 PM
• Last activity: Jul 14, 2025, 02:54 AM
0 votes • 5 answers • 569 views
How can I `cat` files with a number of fixed lines before/between/after?
I am looking for a bash one-liner which can `cat` a number of files with a number of fixed lines between them.
file1.txt:
file1 line 1
file1 line 2
file2.txt:
file2 line 1
file2 line 2
Then I am looking for something like
cat-with-strings foo file1.txt bar file2.txt baz
producing output
foo
file1 line 1
file1 line 2
bar
file2 line 1
file2 line 2
baz
How can I do this in a single line of bash, using standard Linux tools (sed, awk, cat, etc.) and *without* creating any files to hold `foo`, `bar`, or `baz`?
spraff
(951 rep)
Jan 25, 2021, 07:00 PM
• Last activity: Jul 7, 2025, 08:30 AM
1 vote • 7 answers • 250 views
Extracting paragraphs with awk
What is the correct way to extract paragraphs in this log file using awk:
$ cat log.txt
par1, line1
par1, line2
par1, line3
par1, line4
par1, line5
par1, last line
par2, line1
par2, line2
par2, line3
par2, last line
par3, line1
par3, line2
par3, line3
par3, line4
par3, last line
Note that text as well as blank lines may have one or more spaces or tabs.
Also note that blank lines could come in multiples.
I tried this (which failed):
awk 'BEGIN {RS="^[[:space:]]*$"} NR==2 {print "--- Paragraph",NR; print; exit}' log.txt
Desired output:
--- Paragraph 2
par2, line1
par2, line2
par2, line3
par2, last line
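A sketch assuming GNU awk, whose record separator may be a regex: treating any run of whitespace-only lines as the paragraph break sidesteps the spaces-and-tabs problem that defeats plain paragraph mode (RS="").

gawk 'BEGIN { RS = "\n([ \t]*\n)+" }         # break on runs of blank-ish lines
      NR == 2 { print "--- Paragraph", NR; print; exit }' log.txt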
userene
(1856 rep)
May 17, 2025, 06:12 PM
• Last activity: Jul 4, 2025, 07:02 PM
3 votes • 2 answers • 400 views
Get Length of Function in Shell
**UPDATE:**
**Some Background:**
`zsh` returns the line number inside a function when `$LINENO` is used inside a function. I need a way to get the line number in the file, and to differentiate when `zsh` is giving me a file line number vs. a function line number.
I couldn't find a `zsh` environment variable to change this behavior to match other Bourne shells (e.g. `bash` always gives the file line number), so I was trying to see if I could create a function with logic that could always output the file line number regardless of context. This is why I was trying to determine the length of the function.
If anyone knows of a good way to get the file line number with `$LINENO` in `zsh` in all contexts, I'd appreciate it!
---
**QUESTION:**
I've searched this and this, but can't seem to find an answer. Is there a portable way to get the number of lines a function definition has? *(Please see "Some Background" above.)*
My initial thought was to capture the function contents and pipe them to `wc -l`.
Consider the following test file:
**Test File:**
#! /bin/sh
#
# test_file.sh
func1() { echo 'A one-liner'; } # With a nasty comment at the end
func2 (){
echo "A sneaky } included"
# Or an actual code block
{
echo 'hi'
echo 'there'
}
}
func3() { echo "I'm on a line."; }; echo 'And so am I'
func4(){ echo "But I'm a \"stand-alone\" one-liner."; }
func5() {
echo "I'm a nice function."
echo "And you can too!"
}
echo "Can we do this?"
My initial attempt was to match corresponding pairs of {}'s with sed:
**Solution Attempt:**
#! /bin/sh
#
# function_length
#
# $1: string: absolute path to file
# $2: string: name of function (without ()'s)
fp=$(realpath "$1")
func_name="$2"
func_contents=$(cat "${fp}" |
sed -E -n '
/'"${func_name}"' ?[(][)]/{
:top
/[}]/!{
H
d
}
/[}]/{
x
s/[{]//
t next
G
b end
}
:next
x
b top
:end
p
q
}')
echo "${func_contents}"
echo
func_len=$(echo "${func_contents}" | wc -l)
echo "Function Length: ${func_len}"
However, running this in zsh gives
$ ./function_length ./test_file.sh func1
func1() { echo 'A one-liner'; } # With a nasty comment at the end
Function Length: 2
$ ./function_length ./test_file.sh func2
Function Length: 1
$ ./function_length ./test_file.sh func3
func3() { echo "I'm on a line."; }; echo 'And so am I'
Function Length: 2
$ ./function_length ./test_file.sh func4
func4(){ echo "But I'm a \"stand-alone\" one-liner."; }
Function Length: 2
$ ./function_length ./test_file.sh func5
Function Length: 1
Does anyone know of a solution? Thank you!
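On the underlying zsh question, a sketch that may remove the need to measure functions at all: zsh's prompt escapes `%x` and `%I` expand (even outside prompts, via the `${(%):-...}` form) to the source file currently executing and the line number within that file, regardless of function context.

# zsh: %I is the line in file %x, even inside a function body
myfunc() {
    echo "LINENO says: $LINENO"       # function-relative in zsh
    echo "file line:   ${(%):-%I}"    # line within the source file
    echo "file name:   ${(%):-%x}"
}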
adam.hendry
(243 rep)
Jun 13, 2021, 09:13 PM
• Last activity: Jun 28, 2025, 01:30 PM
4 votes • 6 answers • 636 views
Remove the first field (and leading spaces) with a single AWK
Consider this input and output:
Input:
foo bar baz
Output:
bar baz
How do you achieve this with a single AWK? Please explain your approach too.
These are a couple of tries:
$ awk '{ $1 = ""; print(substr($0, 2)) }' <<<'foo bar baz'
bar baz
I know that `$1 = ""` deletes the first field, so anything left in `$0` starts at the second character; but it seems kinda obtuse.
This is another way:
$ awk '{ $1 = ""; print }' <<<'foo bar baz' | awk '{ $1 = $1; print }'
bar baz
The second awk "recompiles" `$0`; but what I'd really like to do is recompile `$0` in the first awk call.
This approach doesn't work:
$ awk '{ $1 = ""; $0 = $0; print }' <<<'foo bar baz'
 bar baz
Notice the leading space: I was hoping the `$0 = $0` would recompile `$0`, but it didn't work.
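A sketch that edits `$0` directly, so awk never rebuilds the record and no stray OFS is left behind: strip the first field and its trailing separators with sub().

awk '{ sub(/^[ \t]*[^ \t]+[ \t]+/, "") } 1' <<<'foo bar baz'
# prints "bar baz" with no leading space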
mbigras
(3462 rep)
Apr 9, 2025, 06:44 AM
• Last activity: Jun 26, 2025, 09:32 AM
1 vote • 1 answer • 2662 views
How can I use a variable in awk command
With my code I am trying to sum up the values in a specific named column of a CSV file, depending on which column name is given as input.
Here's my code:
#!/bin/bash
updatedata() {
index=0
while IFS="" read -r line
do
IFS=';' read -ra array <<< "$line"
for arrpos in "${array[@]}"
do
if [ "$arrpos" == *"$1"* ] || [ "$1" == "$arrpos" ]
then
break
else
let index=index+1
fi
done
break
done < data.csv
((index=$index+1))
if [ $pos -eq 0 ]
then
v0=$(awk -F";", -v index=$index '{x+=$index}END{print x}' ./data.csv )
elif [ $pos -eq 1 ]
then
v1=$(awk -F";" '{x+=$index}END{print x}' ./data.csv )
elif [ $pos -eq 2 ]
then
v2=$(awk -F";" '{x+=$index}END{print x}' ./data.csv )
elif [ $pos -eq 3 ]
then
v3=$(awk -F";" '{x+=$index}END{print x}' ./data.csv )
fi
}
In the middle of the code, in the `v0=` line, you can see I was trying to experiment a little, but I just keep getting errors.
First I tried this:
v0=$(awk -F";" '{x+=$index}END{print x}' ./data.csv)
but it gave me this error:
'awk: line 1: syntax error at or near }'
so then I decided to try this (as you can see in the code):
v0=$(awk -F";", -v index=$index '{x+=$index}END{print x}' ./data.csv )
And I got this error:
'awk: run time error: cannot command line assign to index
type clash or keyword
FILENAME="" FNR=0 NR=0'
I don't know what to do. Can you guys help me.
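For what it's worth, the second error message is the clue: `index` is a built-in awk *function*, so it cannot be assigned from the command line. A sketch with the column number passed under a different name:

# pass the 1-based column number as "col" (not "index", which awk reserves)
v0=$(awk -F';' -v col="$index" '{ x += $col } END { print x }' ./data.csv)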
anonymous
(11 rep)
Aug 28, 2020, 09:26 AM
• Last activity: Jun 25, 2025, 10:52 AM
0 votes • 1 answer • 1953 views
Include spaces and tabs in "awk" search and replace
Another user helped me earlier to fix something I'm doing with awk, where I search for a string at any point in all files and replace two numbers in the same line when I find it.
awk -i inplace '/^gene_height/{ $3=sprintf("%.0f",(169+rand()*51));$5=sprintf("%.0f",(169+rand()*51)) }1' *
This worked in the test files that I made (much fewer tags to read), but in the actual files I'm trying to change for a Crusader Kings mod it fails, because each line in the config file starts with a space and then two tabs. I tried removing the `^` before gene_height, and that kind of works, but it removes the space and two tabs from the file, which might mess up the format and break the mod.
Does anyone know how I can get the above script to match lines that start with a space, two tabs, THEN the string "gene_height", and keep the space and two tabs when doing the replacement?
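A sketch, assuming GNU awk (needed for -i inplace): match optional leading blanks, save them, and glue them back on after the field assignments rebuild the line. Note that spacing *between* fields still collapses to single spaces when a line is rewritten; only the leading indentation is preserved here.

awk -i inplace '/^[ \t]*gene_height/ {
    indent = $0
    sub(/[^ \t].*$/, "", indent)             # keep only the leading blanks
    $3 = sprintf("%.0f", 169 + rand() * 51)  # field assignment rebuilds $0...
    $5 = sprintf("%.0f", 169 + rand() * 51)
    $0 = indent $0                           # ...so restore the indentation
}1' *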
the_pocket_of_big_noob
(11 rep)
Jun 26, 2022, 12:43 AM
• Last activity: Jun 25, 2025, 08:02 AM
0 votes • 5 answers • 156 views
Rename a set of files according to a specific scheme, with rename back option
In Linux, in a directory, there are files; `ls -1` shows me this output:
file1.1-rvr
file1.2-rvr
file1.3
file1.4-rvr
file1.5
file1.6-rvr
file2.1
file2.2
file3.1
file3.10
file3.2-rvr
file3.3-rvr
file3.4
file3.5
file3.6
file3.7
file3.8
file3.9
file4.1
file4.2
file5.1-rvr
file5.2
file6.1
file6.2
file6.3
file6.4
file7.1
file7.2-rvr
file7.3
file7.4
file7.5
file7.6
file8.1
file8.2
file8.3-rvr
file8.4
In this directory there are only files whose names begin with `file`; there are no other files in there.
`file1.*` is a package, `file2.*` is a (different) package too, and so on. Each package should get its own random name. For the randomness I will use:
cat /dev/urandom | tr -cd 'a-zA-Z0-9' | head -c $(shuf -i 8-32 -n 1)
In each package, the `.N` suffix should be renamed to `.partN.rar`. Files ending in `-rvr` should be renamed too.
For example:
file3.1
file3.10
file3.2-rvr
file3.3-rvr
file3.4
file3.5
file3.6
file3.7
file3.8
file3.9
should be renamed for testing and experimenting to:
rfiqDLhZF5XxRcJXkqR1LrwniDi.part01.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part10.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part02.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part03.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part04.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part05.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part06.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part07.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part08.rar
rfiqDLhZF5XxRcJXkqR1LrwniDi.part09.rar
(If there is a *.10, then 1 must be renamed to 01.)
But there must be a way to note the renaming (in a file `../renamed`) in order to rename the files back to their original names later.
For renaming all files to random names I can use the loop below, but I don't know how to handle the .partN.rar renaming:
for file in file*; do
while true; do
RANDOM_NAME=$(cat /dev/urandom | tr -cd 'a-zA-Z0-9' | head -c $(shuf -i 8-32 -n 1));
if [ ! -f ${RANDOM_NAME} ]; then
echo "${file}" "${RANDOM_NAME}" >> ../renamed
mv "${file}" "${RANDOM_NAME}";
break;
fi
done
done
For renaming back, I can use this one-liner:
while read old new rest ; do mv "$new" "$old" ; done < ../renamed
to get from the random names back to the original ones again.
**EDIT**
I am looking for a way to do this in Bash.
Here is a complete list with the current file names and the new filename scheme:
current name new name
------------ --------------------------------------
file1.1-rvr --> cL5617iQyc8kT5GwNoi.part1.rar
file1.2-rvr --> cL5617iQyc8kT5GwNoi.part2.rar
file1.3 --> cL5617iQyc8kT5GwNoi.part3.rar
file1.4-rvr --> cL5617iQyc8kT5GwNoi.part4.rar
file1.5 --> cL5617iQyc8kT5GwNoi.part5.rar
file1.6-rvr --> cL5617iQyc8kT5GwNoi.part6.rar
file2.1 --> QuMPmQjppRSuG3QL9xy5.part1.rar
file2.2 --> QuMPmQjppRSuG3QL9xy5.part2.rar
file3.1 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part01.rar
file3.10 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part02.rar
file3.2-rvr --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part03.rar
file3.3-rvr --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part04.rar
file3.4 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part05.rar
file3.5 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part06.rar
file3.6 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part07.rar
file3.7 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part08.rar
file3.8 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part09.rar
file3.9 --> rfiqDLhZF5XxRcJXkqR1LrwniDi.part10.rar
file4.1 --> cLYtCmUW.part1.rar
file4.2 --> cLYtCmUW.part2.rar
file5.1-rvr --> uXjOgcreTUCC7aHKkXeWPL1SiVdX.part1.rar
file5.2 --> uXjOgcreTUCC7aHKkXeWPL1SiVdX.part2.rar
file6.1 --> 9CcsiBYcuASF0ECoS.part1.rar
file6.2 --> 9CcsiBYcuASF0ECoS.part2.rar
file6.3 --> 9CcsiBYcuASF0ECoS.part3.rar
file6.4 --> 9CcsiBYcuASF0ECoS.part4.rar
file7.1 --> SXKymGb5Z9ImrQ0K51IUAA.part1.rar
file7.2-rvr --> SXKymGb5Z9ImrQ0K51IUAA.part2.rar
file7.3 --> SXKymGb5Z9ImrQ0K51IUAA.part3.rar
file7.4 --> SXKymGb5Z9ImrQ0K51IUAA.part4.rar
file7.5 --> SXKymGb5Z9ImrQ0K51IUAA.part5.rar
file7.6 --> SXKymGb5Z9ImrQ0K51IUAA.part6.rar
file8.1 --> 5poLf4stv6.part1.rar
file8.2 --> 5poLf4stv6.part2.rar
file8.3-rvr --> 5poLf4stv6.part3.rar
file8.4 --> 5poLf4stv6.part4.rar
But everything before `.partX(X).rar` should be random, generated with:
cat /dev/urandom | tr -cd 'a-zA-Z0-9' | head -c $(shuf -i 8-32 -n 1)
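A bash sketch along those lines (assuming the fileN.M / fileN.M-rvr layout shown, names without spaces, and that the numeric suffix maps to the part number): one random stem per package, two-digit part numbers once a package reaches 10 parts, and an undo log in ../renamed.

for pkg in $(printf '%s\n' file*.* | cut -d. -f1 | sort -u); do
    stem=$(tr -cd 'a-zA-Z0-9' < /dev/urandom | head -c "$(shuf -i 8-32 -n 1)")
    parts=$(printf '%s\n' "$pkg".* | wc -l)
    width=1; [ "$parts" -ge 10 ] && width=2       # pad 1 -> 01 when a .10 exists
    for f in "$pkg".*; do
        num=${f#"$pkg".}                          # strip "fileN."
        num=${num%-rvr}                           # strip a trailing -rvr
        new=$(printf "%s.part%0${width}d.rar" "$stem" "$num")
        printf '%s %s\n' "$f" "$new" >> ../renamed
        mv -- "$f" "$new"
    done
done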
Banana
(189 rep)
Jun 20, 2025, 01:32 AM
• Last activity: Jun 24, 2025, 11:13 AM
6 votes • 7 answers • 1253 views
How to find numbers in a textfile that are not divisible by 4096, round them up and write new file?
In Linux there is a file `numbers.txt`. It contains a few numbers, all separated by spaces, like this:
5476089856 71788143 9999744134 114731731 3179237376
In this example only the first and the last numbers are divisible by 4096; all the others are not. I need all numbers to be divisible by 4096: the ones which are not should be rounded up, and the rest left untouched. All of them should be written to a new file, `numbers_4096.txt`.
Numbers like the second, `71788143`, must be rounded up to `71790592`, `9999744134` to `9999745024`, and so on. Numbers like `5476089856` and `3179237376` should not be changed; they are already divisible by 4096.
This script can do this with a single number:
#!/bin/sh
number=9999744134
divisor=4096
remainder=$((number % divisor))
new_number=$((number - remainder))
roundedup=$((new_number + divisor))
echo "Original number: $number"
echo "Divisor: $divisor"
echo "New number (divisible by $divisor): $new_number"
echo "Roundedup Number (divisible by $divisor): $roundedup"
But how to do that with a whole list that has to be rewritten, rounded up, into a new file? Note that this script also adds 4096 to numbers which are already divisible by 4096, which is not what I want.
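An awk sketch for the whole list: round each number up to the next multiple of 4096 with a ceiling division, which (unlike the remainder+divisor approach above) leaves exact multiples untouched.

awk '{ for (i = 1; i <= NF; i++) $i = int(($i + 4095) / 4096) * 4096 } 1' \
    numbers.txt > numbers_4096.txt
# 71788143 -> 71790592, 9999744134 -> 9999745024; multiples of 4096 unchanged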
Banana
(189 rep)
Jun 17, 2025, 12:14 AM
• Last activity: Jun 24, 2025, 09:37 AM
1 vote • 5 answers • 2275 views
Shell script to process latest log files and count submitted vs. not submitted entries
A log file gets generated every minute in a directory called
data_logs
.
Log file names:
abc.log.2019041607
abc.log.2019041608
...
Contents of the log file look like this:
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:1
R_MT|D:1234|ID:413|S:0
R_MT|D:1234|ID:413|S:0
R_MT|D:1234|ID:413|S:0
R_MT|D:1234|ID:413|S:0
R_MT|D:1234|ID:413|S:0
k_MT|D:1234|ID:414|S:1
k_MT|D:1234|ID:414|S:1
k_MT|D:1235|ID:413|S:1
k_MT|D:1235|ID:413|S:1
I am writing a shell script which, when executed, looks for the files that were created in the last 5 minutes (1 file gets created every minute), opens each file one by one, and processes the contents. It creates an output.txt
file with the following structure:
For the combination R_MT|D:1234|ID:413
, the total count with S=0
is stored in the submitted
column, and S=1
is stored in the notsubmitted
column.
Expected output.txt
:
Type,Number,ID,submitted,notsubmitted
R_MT,D:1234,ID:413,5,10
R_MT,D:1234,ID:414,0,2
R_MT,D:1235,ID:413,0,2
I have used this command to get the submitted and notsubmitted values:
zcat abc.log.2019041607.gz | grep "R_MT" | awk -F"|" '{print $2","$3","$4}' | sort | uniq -c
Sample output:
5 D:1234,ID:413,S:0
10 D:1234,ID:413,S:1
2 D:1234,ID:414,S:1
2 D:1235,ID:413,S:1
With the above command I am getting the count, but I am not sure how to assign it to variables so I can write them into the submitted
and notsubmitted
fields in the output file. Also, I am not sure how to obtain the last 5 minutes' files.
user365760
(11 rep)
Aug 6, 2019, 11:54 AM
• Last activity: Jun 22, 2025, 10:04 PM
0 votes • 2 answers • 2845 views
delete multiple users
I am the root user and I am setting up a menu for another user to use. This other user will only get this menu.
There are two options that are interlinked: the first option is to search users. The code I got is:
last | awk '{print $1,$4,$5,$6,$7} '
I have checked this code and it works, it shows me the usernames and the day they last logged on.
For the second option: I want to be able to set a date, and then delete users who haven't been active since that date, using the output of the above command.
I am using Linux Mint and Vim text editor.
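A sketch of the second option using `lastlog` rather than parsing `last` (assuming the shadow-utils `lastlog -b DAYS` flag, which lists only accounts whose last login is more than DAYS days old). Review the list before deleting anything, since lastlog also shows system accounts and never-logged-in users:

days=90   # hypothetical cutoff: users inactive for 90+ days
lastlog -b "$days" | awk 'NR > 1 { print $1 }' |
while read -r user; do
    userdel -r "$user"    # -r also removes the home directory; run as root
done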
user93524
Dec 5, 2014, 12:26 PM
• Last activity: Jun 16, 2025, 05:06 PM
4 votes • 5 answers • 5991 views
Print only unique lines from file not the duplicates
I have a sorted word list, one word per line, in a file like so:
apple
apple
grapes
lime
orange
orange
pear
pear
peach
strawberry
strawberry
I only want to print out unique lines and drop duplicates:
grapes
peach
lime
How can I do this with `sed`, `awk` or `perl`?
user155704
Feb 9, 2016, 02:36 PM
• Last activity: Jun 16, 2025, 01:34 AM