Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
120
votes
4
answers
54341
views
Does sort support sorting a file in-place, like `sed --in-place`?
Is there no option like `--in-place` for `sort`? In order to save results to the input file, `sed` uses `-i` (`--in-place`). Redirecting the output of `sort` to the input file (`sort f > f`) results in making it empty.
If there is no `--in-place` option, maybe there is some trick to do this in a *handy* way?
(The only thing that comes to my mind:

```
sort f > /tmp/f$$ ; cat /tmp/f$$ > f ; rm /tmp/f$$
```

Moving is not the right choice, because *file permissions* might be changed. That's why I overwrite with the contents of the temp file, which I then remove.)
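For what it's worth, `sort` does have a safe way to write back to its input: the `-o` option. POSIX explicitly allows the `-o` output file to be the same as the input file, because sort reads all input before opening the output. A minimal sketch:

```shell
# sort a file "in place": sort(1) reads everything before it
# opens the -o target, so input and output may be the same file
printf 'banana\napple\ncherry\n' > f
sort -o f f
cat f
```

Unlike `mv`-based approaches, this preserves the file's inode, permissions, and ownership.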
Grzegorz Wierzowiecki
(14740 rep)
Jan 22, 2012, 10:02 AM
• Last activity: Aug 4, 2025, 01:30 PM
101
votes
13
answers
188287
views
How to print all lines after a match up to the end of the file?
Input file1 is:

```
dog 123 4335
cat 13123 23424
deer 2131 213132
bear 2313 21313
```

I take the match pattern from another file (like `dog 123 4335` from file2). I match the line `dog 123 4335`, and after printing all lines that follow the matched line (without the match itself) my output is:

```
cat 13123 23424
deer 2131 213132
bear 2313 21313
```

Using only the pattern, without a line address (for example `1s`), how do I match and print the lines?
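A hedged sketch of two common approaches (the `0,/pattern/` start address is a GNU sed extension; the awk form is portable):

```shell
printf '%s\n' 'dog 123 4335' 'cat 13123 23424' \
              'deer 2131 213132' 'bear 2313 21313' > file1

# GNU sed: delete from the start of input through the first match,
# leaving only the lines after it
sed '0,/^dog 123 4335$/d' file1

# portable awk: start printing only after the matching line is seen
awk 'found; $0 == "dog 123 4335" { found = 1 }' file1
```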
loganaayahee
(1209 rep)
Nov 23, 2012, 07:17 AM
• Last activity: Aug 2, 2025, 07:26 AM
214
votes
7
answers
755346
views
Return only the portion of a line after a matching pattern
So pulling open a file with `cat` and then using `grep` to get matching lines only gets me so far when I am working with the particular log set that I am dealing with. I need a way to match lines to a pattern, but only to return the portion of the line after the match. The portion before and after the match will consistently vary. I have played with using `sed` or `awk`, but have not been able to figure out how to filter the line to either delete the part before the match, or just return the part after the match; either will work.
This is an example of a line that I need to filter:

```
2011-11-07T05:37:43-08:00 isi-udb5-ash4-1(id1) /boot/kernel.amd64/kernel: [gmp_info.c:1758](pid 40370="kt: gmp-drive-updat")(tid=100872) new group: : { 1:0-25,27-34,37-38, 2:0-33,35-36, 3:0-35, 4:0-9,11-14,16-32,34-38, 5:0-35, 6:0-15,17-36, 7:0-16,18-36, 8:0-14,16-32,34-36, 9:0-10,12-36, 10-11:0-35, 12:0-5,7-30,32-35, 13-19:0-35, 20:0,2-35, down: 8:15, soft_failed: 1:27, 8:15, stalled: 12:6,31, 20:1 }
```

The portion I need is everything after "stalled".
The background behind this is that I can find out how often something stalls:

```
cat messages | grep stalled | wc -l
```

What I need to do is find out how many times a certain node has stalled (indicated by the portion before each colon after "stalled"). If I just grep for that (i.e. `20:`) it may return lines that have soft fails but no stalls, which doesn't help me. I need to filter only the stalled portion so I can then grep for a specific node out of those that have stalled.
For all intents and purposes, this is a FreeBSD system with standard GNU core utils, but I cannot install anything extra to assist.
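A sketch of the usual tools for this: `grep -o` prints only the matching part of each line, and `sed` can delete everything up to the match. (The sample line is abbreviated here.)

```shell
printf '%s\n' '... soft_failed: 1:27, 8:15, stalled: 12:6,31, 20:1 }' > messages

# print only the part of each line from "stalled" onward
grep -o 'stalled.*' messages

# equivalent sed: strip everything up to and including "stalled";
# -n plus p prints nothing for lines without a match
sed -n 's/.*stalled//p' messages
```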
MaQleod
(2734 rep)
Nov 7, 2011, 11:18 PM
• Last activity: Aug 1, 2025, 11:13 AM
3
votes
1
answers
328
views
How to do non-greedy multiline capture with recent versions of pcre2grep?
I noticed a difference in behavior between an older `pcre2grep` version (10.22) and a more recent one (10.42), and I am wondering how I can get the old behavior back.
Take the following file:

```
aaa
bbb
XXX
ccc
ddd
eee
XXX
fff
ggg
```

Back with v10.22 (Debian 9), I could achieve non-greedy multi-line captures:

```
$ pcre2grep --version
pcre2grep version 10.22 2016-07-29
$ pcre2grep -nM '(.|\n)*?XXX' file
1:aaa
bbb
XXX
4:ccc
ddd
eee
XXX
```

Notice how it captured two multi-line groups, one starting at line 1 (`1:aaa`), and a second starting at line 4 (`4:ccc`).
Now, with a more recent version (10.42, Debian 12), its behaviour changed:

```
$ pcre2grep --version
pcre2grep version 10.42 2022-12-11
$ pcre2grep -nM '(.|\n)*?XXX' file
1:aaa
bbb
XXX
ccc
ddd
eee
XXX
```

Now I only have one group, starting with `1:aaa`. Basically, it seems to ignore the non-greedy operator (`?`). The result is the same if I omit it:

```
$ pcre2grep -nM '(.|\n)*XXX' file
1:aaa
bbb
XXX
ccc
ddd
eee
XXX
```

How can I get the behavior of v10.22 back? In other words, how can I do non-greedy multiline captures in recent versions of `pcre2grep`?
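Not an answer for pcre2grep itself, but as a workaround sketch: the same shortest-match blocks (minus the line numbering) can be produced with plain awk by buffering lines and flushing the buffer at each delimiter:

```shell
printf '%s\n' aaa bbb XXX ccc ddd eee XXX fff ggg > file

# accumulate lines into buf; at each XXX print the buffered block
# and reset, reproducing the non-greedy "up to the next XXX" match
awk '{ buf = buf $0 ORS } /XXX/ { printf "%s", buf; buf = "" }' file
```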
ChennyStar
(1969 rep)
Jul 27, 2025, 12:42 PM
• Last activity: Jul 29, 2025, 02:43 PM
101
votes
16
answers
85915
views
Removing control chars (including console codes / colours) from script output
I can use the "script" command to record an interactive session at the command line. However, this includes all control characters *and* colour codes. I can remove control characters (like backspace) with "col -b", but I can't find a simple way to remove the colour codes.
Note that I want to use the command line in the normal way, so I don't want to disable colours there - I just want to remove them from the script output. Also, I know I can play around and try to find a regexp to fix things up, but I am hoping there is a simpler (and more reliable - what if there's a code I don't know about when I develop the regexp?) solution.
To show the problem:
```
spl62 tmp: script
Script started, file is typescript
spl62 lepl: ls
add-licence.sed  build-example.sh  commit-test         push-docs.sh
add-licence.sh   build.sh          delete-licence.sed  setup.py
asn              build-test.sh     delete-licence.sh   src
build-doc.sh     clean             doc-src             test.ini
spl62 lepl: exit
Script done, file is typescript
spl62 tmp: cat -v typescript
Script started on Thu 09 Jun 2011 09:47:27 AM CLT
spl62 lepl: ls^M
^[[0m^[[00madd-licence.sed^[[0m  ^[[00;32mbuild-example.sh^[[0m  ^[[00mcommit-test^[[0m  ^[[00;32mpush-docs.sh^[[0m^M
^[[00;32madd-licence.sh^[[0m  ^[[00;32mbuild.sh^[[0m  ^[[00mdelete-licence.sed^[[0m  ^[[00msetup.py^[[0m^M
^[[01;34masn^[[0m  ^[[00;32mbuild-test.sh^[[0m  ^[[00;32mdelete-licence.sh^[[0m  ^[[01;34msrc^[[0m^M
^[[00;32mbuild-doc.sh^[[0m  ^[[00;32mclean^[[0m  ^[[01;34mdoc-src^[[0m  ^[[00mtest.ini^[[0m^M
spl62 lepl: exit^M
Script done on Thu 09 Jun 2011 09:47:29 AM CLT
spl62 tmp: col -b
```
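One commonly used sketch, assuming GNU sed (which understands `\x1b` and `\r`): strip CSI escape sequences and stray carriage returns, then let `col -b` handle the backspaces:

```shell
# sample input with a colour code and a carriage return
printf 'ls \033[00;32mbuild.sh\033[0m\r\n' > typescript

# delete ANSI escape sequences of the form ESC [ ... <letter>
# (covering the colour codes above) and trailing carriage returns
sed -e 's/\x1b\[[0-9;]*[a-zA-Z]//g' -e 's/\r$//' typescript
```

This regex only covers CSI sequences; exotic terminal codes may need extra patterns, which is exactly the reliability worry raised above.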
andrew cooke
(1121 rep)
Jun 9, 2011, 01:51 PM
• Last activity: Jul 25, 2025, 08:26 AM
30
votes
7
answers
31505
views
How to cat files together, adding missing newlines at end of some files
I have a bunch of `.text` files, _most_ of which end with the standard newline. A couple don't have any terminator at the end; the last physical byte is (generally) an alphameric character.
I was using `cat *.text >| /tmp/joined.text`, but then noticed a couple of places in joined.text where the first line of a file appeared at the end of the last line of the previous file. Inspecting the previous file, I saw there wasn't a line terminator -- concatenation explained.
That raised the question: what's the easiest way to concatenate, sticking in the missing newline? Either of these would do:
1. A solution that might effectively add a blank line to some input files. For me, that's not a problem, as the processing of joined.text can handle it.
2. A solution that adds the newline only to files that do not already end that way.
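A sketch of option 2: append each file, and add a newline only when the file's last byte isn't one already. `$(tail -c 1 "$f")` expands to the empty string exactly when the file ends in a newline, because command substitution strips a trailing newline:

```shell
cd "$(mktemp -d)"
printf 'ends with nl\n' > a.text
printf 'no terminator'  > b.text

# emit each file, adding a newline only if its last byte isn't one
for f in *.text; do
    cat "$f"
    [ -n "$(tail -c 1 "$f")" ] && echo
done > /tmp/joined.text
cat /tmp/joined.text
```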
HiTechHiTouch
(991 rep)
Feb 16, 2017, 07:17 PM
• Last activity: Jul 23, 2025, 10:22 AM
4
votes
2
answers
2385
views
jq: Printing multiple values from multiple arrays at once
The default functionality of `jq` is to send each object from an array one at a time, though the `join` operator can merge those values. My problem is in trying to print all the values from multiple arrays at once. Taking this example:

```json
{
  "key1": {
    "list1": [
      "val1",
      "val2",
      "val3"
    ]
  },
  "key2": {
    "list1": [
      "val4",
      "val5"
    ]
  },
  "key3": {
    "list1": [
      "val6"
    ]
  }
}
```

I'd like to print:

```
val1 val2 val3 val4 val5 val6
```

And so far have this:

```
jq -r 'to_entries[] | { list: .value.list1 } | .list | join(" ")' test.json
```

*(More verbose than necessary to help reviewers.)*
which gives:

```
val1 val2 val3
val4 val5
val6
```
Is there a way to gather all the values together in one command?
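A sketch of one way: gather everything into a single array before joining, so `join` runs once over all values (this assumes every top-level value has a `list1` key, as in the example):

```shell
cat > test.json <<'EOF'
{"key1":{"list1":["val1","val2","val3"]},
 "key2":{"list1":["val4","val5"]},
 "key3":{"list1":["val6"]}}
EOF

# .[] iterates the top-level object's values; the surrounding [...]
# collects all list1 elements into one array before the join
jq -r '[.[].list1[]] | join(" ")' test.json
# → val1 val2 val3 val4 val5 val6
```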
T145
(223 rep)
Apr 27, 2024, 07:33 PM
• Last activity: Jul 21, 2025, 12:35 PM
2
votes
5
answers
109
views
formatting git log messages for later processing
I am trying to format and connect git log messages for later processing.
I am using `git log --pretty=format:'%H %s'` to get the commit hash and the complete message at the moment.
I need commit messages to be formatted like this:

```
f0fd5dc0f5e051e75b841b3830e3a54cdf779200^^ci-488: fetch blueprint (#4741)|e858626e6b4be1efe3e4ac5cbb55be459d762d74^^DCI-1891 : Remove Hidden Category from Static Landing Pages (#4716)|7ab33428d8b04015ce08241933195cb0264dd1fe^^[HAKA-836] Updated missed syncing dependencies with Adjust (#4737)
```

Input:

```
f0fd5dc0f5e051e75b841b3830e3a54cdf779200 ci-488: fetch blueprint (#4741)
e858626e6b4be1efe3e4ac5cbb55be459d762d74 DCI-1891 : Remove Hidden Category from Static Landing Pages (#4716)
7ab33428d8b04015ce08241933195cb0264dd1fe [HAKA-836] Updated missed syncing dependencies with Adjust (#4737)
```
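A sketch: let the `--pretty` format emit the `^^` separator itself, then join the lines with `|` using `paste`. Simulated here on the sample input; in a real repository the pipeline would start with `git log`:

```shell
# with a repo: git log --pretty=format:'%H^^%s' | paste -s -d'|' -
# simulated on the sample "hash subject" lines: sed turns the first
# space into ^^, then paste -s joins all lines with |
printf '%s\n' \
  'f0fd5dc0f5e051e75b841b3830e3a54cdf779200 ci-488: fetch blueprint (#4741)' \
  'e858626e6b4be1efe3e4ac5cbb55be459d762d74 DCI-1891 : Remove Hidden Category from Static Landing Pages (#4716)' \
  '7ab33428d8b04015ce08241933195cb0264dd1fe [HAKA-836] Updated missed syncing dependencies with Adjust (#4737)' |
  sed 's/ /^^/' | paste -s -d'|' -
```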
xerxes
(359 rep)
May 7, 2025, 05:02 PM
• Last activity: Jul 14, 2025, 02:04 AM
0
votes
5
answers
2465
views
Shell Script: Want to delete two consecutive lines matching pattern from specific line
I want to delete two specific consecutive lines matching patterns, starting from a specific line of a file.
For e.g. the file contents are like below:

```
a
b
c
Name: 123
 xyz
Name: 456
 abc
```

I want to find, starting from line 4, a 1st line matching a pattern starting with `Name: ` and a 2nd line matching a pattern starting with whitespace, and delete the two consecutive lines.
Any efficient way to do this in shell using `sed` or something else?
To be a bit more clear, I want to remove signing/checksum information from a `MANIFEST.MF`. A sample `MANIFEST.MF` is below. From it, I want to remove each `Name:` entry, where a `Name:` entry can be on one line or span 2 (or more) lines.
Initially my solution was to find the first `Name:` entry followed by a `SHA-256-Digest:` entry and delete to the end of the file. Unfortunately this solution has the problem of removing a needed entry in the middle: for e.g. `NetBeans-Simply-Convertible:` is also being removed.
So, now I want to remove each `Name:` entry, whether it is on 1 line or spans 2 or more lines, but I should not lose entries like `NetBeans-Simply-Convertible:` while removing `Name:` entries.
I am already removing `SHA-256-Digest:` entries with `sed -i "/^SHA-256-Digest: /d" $manifest_file`.

```
Manifest-Version: 1.0
Version-Info: ....
Name: com/abc/xyz/pqr/client/relationship/message/notifier/Relati
 onshipUpdateNotifierFactory.class
SHA-256-Digest: cSSyk6Y2L2F9N6FPtswUkxjF2kelMkGe4bFprcQ+3uY=
Name: com/abc/xyz/pqr/client/relationship/ui/BaseRelationshipView
 $5.class
SHA-256-Digest: w9HgRjDuP024U4CyxeKPYFe6rzuzxZF3b+9LVG36XP8=
Name: com/abc/xyz/pqr/client/impl/MofRelationshipAgentImpl.class
SHA-256-Digest: GwIBIU+UdPtjyRhayAVM90Eo+SwCT/kP65dI59adEnM=
Name: com/abc/xyz/pqr/client/settings/ConvertibleProperties.class
NetBeans-Simply-Convertible: {com/abc/xyz/pqr/client/settings}Con
 vertibleProperties
SHA-256-Digest: 5FszAtfpPXcLx/6FBWbfeg6E4fwFMRozV+Q+3rReATc=
...
```
Expected Output:

```
Manifest-Version: 1.0
Version-Info: ....
NetBeans-Simply-Convertible: {com/abc/xyz/pqr/client/settings}Con
 vertibleProperties
...
```
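One possible awk sketch (not the only way): drop every line that starts a `Name:` or `SHA-256-Digest:` entry, plus the continuation lines (leading space) that immediately follow a dropped line; any other key resets the skip flag, so entries like `NetBeans-Simply-Convertible:` keep their continuations. Shown on a shortened sample manifest:

```shell
cat > MANIFEST.MF <<'EOF'
Manifest-Version: 1.0
Name: com/abc/Foo.class
 continuation.class
SHA-256-Digest: cSSyk6Y2L2F9N6FPtswUkxjF2kelMkGe4bFprcQ+3uY=
NetBeans-Simply-Convertible: {com/abc}Con
 vertibleProperties
SHA-256-Digest: w9HgRjDuP024U4CyxeKPYFe6rzuzxZF3b+9LVG36XP8=
EOF

# skip Name:/SHA-256-Digest: lines and any continuation lines
# (leading space) directly after a skipped line; any other line
# resets the skip flag and is printed
awk '
    /^(Name|SHA-256-Digest): / { skip = 1; next }
    skip && /^ /               { next }
                               { skip = 0; print }
' MANIFEST.MF
```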
onlysrinivas
(1 rep)
Apr 15, 2017, 06:15 AM
• Last activity: Jul 11, 2025, 09:39 AM
0
votes
5
answers
1319
views
Transpose multiple rows into multiple columns
I have this data in multiple rows, which I want to transpose into tab-separated multiple columns, i.e.,

```
ABC 0.98 0.58 5.87 0.01
DEF 0.88 5.85 6.89 0.25
GHI 8.99 5.66 4.78 6.22
```

into

```
ABC   DEF   GHI
0.98  0.88  8.99
0.58  5.85  5.66
5.87  6.89  4.78
0.01  0.25  6.22
```
Could you please help me with this so that I can get the output in the above format?
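A sketch of the standard awk transpose: collect column *i* of every row into `col[i]`, then print each collected column as an output row:

```shell
printf '%s\n' 'ABC 0.98 0.58 5.87 0.01' \
              'DEF 0.88 5.85 6.89 0.25' \
              'GHI 8.99 5.66 4.78 6.22' > data.txt

# build one tab-separated output row per input column; taking NF
# in the END block is fine because the data is rectangular
awk '{ for (i = 1; i <= NF; i++) col[i] = col[i] (NR > 1 ? "\t" : "") $i }
     END { for (i = 1; i <= NF; i++) print col[i] }' data.txt
```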
Mohsin Raza
(1 rep)
Apr 28, 2022, 07:30 AM
• Last activity: Jul 10, 2025, 10:04 AM
25
votes
7
answers
20564
views
Suppress filename from output of sha512sum
Maybe it is a trivial question, but in the `man` page I didn't find anything useful. I am using Ubuntu and `bash`.
The normal output for `sha512sum testfile` is

```
<hash>  testfile
```

How to suppress the filename output? I would like to obtain just the hash.
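A common sketch: the digest is simply the first whitespace-separated field of the output, so either of these works:

```shell
printf 'some content\n' > testfile

# keep only the first field (the digest), dropping the filename
sha512sum testfile | awk '{ print $1 }'
sha512sum testfile | cut -d' ' -f1
```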
BowPark
(5155 rep)
Feb 25, 2016, 01:41 PM
• Last activity: Jul 5, 2025, 09:57 PM
1
votes
7
answers
250
views
Extracting paragraphs with awk
What is the correct way to extract paragraphs in this log file using awk?

```
$ cat log.txt
par1, line1
par1, line2
par1, line3
par1, line4
par1, line5
par1, last line

par2, line1
par2, line2
par2, line3
par2, last line

par3, line1
par3, line2
par3, line3
par3, line4
par3, last line
```

Note that text as well as blank lines may have one or more spaces or tabs. Also note that blank lines could come in multiples.
I tried this (which failed):

```
awk 'BEGIN {RS="^[[:space:]]*$"} NR==2 {print "--- Paragraph",NR; print; exit}' log.txt
```

Desired output:

```
--- Paragraph 2
par2, line1
par2, line2
par2, line3
par2, last line
```
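A hedged sketch: awk's paragraph mode (`RS=""`) treats blank-line-separated blocks as records, but it requires truly empty separator lines, so whitespace-only separators are blanked with sed first:

```shell
printf 'par1, line1\npar1, last line\n \t \npar2, line1\npar2, last line\n\npar3, line1\n' > log.txt

# empty out whitespace-only lines, then let paragraph mode (RS="")
# deliver one paragraph per record; runs of blanks collapse too
sed 's/^[[:blank:]]*$//' log.txt |
  awk -v RS= 'NR == 2 { print "--- Paragraph", NR; print; exit }'
```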
userene
(1856 rep)
May 17, 2025, 06:12 PM
• Last activity: Jul 4, 2025, 07:02 PM
158
votes
20
answers
171750
views
Decoding URL encoding (percent encoding)
I want to decode URL encoding. Is there any built-in tool for doing this, or could anyone provide me with `sed` code that will do it?
I did search a bit through unix.stackexchange.com and on the internet but I couldn't find any command line tool for decoding url encoding.
What I want to do is simply in-place edit a txt file so that:
- `%21` becomes `!`
- `%23` becomes `#`
- `%24` becomes `$`
- `%26` becomes `&`
- `%27` becomes `'`
- `%28` becomes `(`
- `%29` becomes `)`
And so on.
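A frequently used bash-only sketch (it assumes the input is valid percent-encoding): rewrite each `%` as a `\x` escape and let `printf '%b'` expand it:

```shell
#!/usr/bin/env bash
# %21 -> \x21 etc., which printf %b decodes to the raw byte
urldecode() {
    local s="${1//%/\\x}"
    printf '%b\n' "$s"
}

urldecode 'say%20%22hello%21%22%20%23now'
# → say "hello!" #now
```

For in-place editing of a whole file, a small perl one-liner such as `perl -pi -e 's/%([0-9A-Fa-f]{2})/chr(hex($1))/ge' file` is another common route.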
DisplayName
(12016 rep)
Oct 4, 2014, 01:13 PM
• Last activity: Jul 3, 2025, 05:20 PM
6
votes
3
answers
12915
views
Show only changed lines without syntax with git diff
I have a list of usernames that are in a text file. I update this file and commit it. I am looking for a way to get a list of the changes since the last commit.
I do not want any diff formatting at all, I just want to get an output that has usernames one per line (as they are added to each commit) since the last commit.
I can't find a setting that will remove all the git diff syntax from the output, so that it's purely a list of the new lines added.
# Example
Original file:

```
user1
user2
user3
```

I then add:

```
user4
user5
```

Then commit. I want to be able to do a git diff and see only:

```
user4
user5
```
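A sketch of the usual trick: ask for zero context lines and keep only the `+` lines, filtering out the `+++` header (the file name `users.txt` here is illustrative):

```shell
# against the previous commit; -U0 removes context lines, the grep
# keeps added lines but not the "+++ b/..." header, cut strips the +
git diff -U0 HEAD~1 -- users.txt | grep '^+[^+]' | cut -c2-
```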
Michael
(161 rep)
Jun 15, 2018, 09:47 AM
• Last activity: Jun 30, 2025, 01:12 PM
14
votes
3
answers
8090
views
Remove accents from characters
I'm quite certain this has been asked and answered before; however, I cannot find the answer to my specific use-case.
I've got this file with accented characters in it:

```bash
> ~ cat file
ë
ê
Ý,text
Ò
É
```

How would I convert them to their respective non-accented letters? So the outcome would be something along the lines of:

```bash
> ~ convert file out.txt
> ~ cat out.txt
e
e
Y,text
O
E
```

Note that the actual file itself contains more characters.
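A common sketch on glibc systems: `iconv` transliteration maps accented characters to their ASCII base forms. The exact results depend on the iconv implementation and the active locale, so treat this as an approximation rather than a guaranteed mapping:

```shell
printf 'ë\nê\nÝ,text\n' > file

# transliterate to ASCII; characters with no mapping become ? or
# are dropped, depending on the iconv implementation
iconv -f UTF-8 -t ASCII//TRANSLIT file > out.txt
cat out.txt
```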
Kevin C
(357 rep)
Jan 29, 2021, 03:10 PM
• Last activity: Jun 30, 2025, 09:01 AM
3
votes
5
answers
651
views
Randomly pick single line from multiple lines while assigning value to environment variable
In a certain script that we run routinely we configure hostnames in environment variables. Since hostnames can change over time, we try to dynamically pick the current set of hosts using command substitution with backticks:

```
export EU_GAMMA_HOST_NAME=`<command>`
```

*Can't share `<command>` since it uses proprietary tools.*

-----

Originally, when we wrote the script, there used to be a single host that would get picked and its hostname assigned to our env-var. But now there are multiple hosts, such that a list of hostnames (one per line) gets assigned to our env-var, resulting in failure of our script and requiring us to manually fix the value of the environment variable.
We tried adding `| head -1` at the end of our command, but then it would always use the same host, which we don't want:

```
export EU_GAMMA_HOST_NAME=`<command> | head -1`
```

-----

How can I randomly pick a single hostname out of the list of hostnames while assigning the command-substitution value to my environment variable?
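A sketch using GNU `shuf`, which emits input lines in random order. `list_hosts` below is a hypothetical stand-in for the proprietary command that prints one hostname per line:

```shell
# hypothetical stand-in for the proprietary host-listing command
list_hosts() { printf 'host-a\nhost-b\nhost-c\n'; }

# shuf -n 1 picks one input line uniformly at random
export EU_GAMMA_HOST_NAME=$(list_hosts | shuf -n 1)
echo "$EU_GAMMA_HOST_NAME"
```

On systems without `shuf`, `sort -R | head -n 1` or an awk `rand()` pick are common substitutes.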
y2k-shubham
(359 rep)
Jun 4, 2025, 05:22 AM
• Last activity: Jun 28, 2025, 02:00 AM
4
votes
6
answers
638
views
Remove the first field (and leading spaces) with a single AWK
Consider this input and output:

```
foo bar baz
```

```
bar baz
```

How do you achieve this with a single AWK? Please explain your approach too.
These are a couple of tries:

```
$ awk '{ $1 = ""; print(substr($0, 2)) }' <<<'foo bar baz'
bar baz
```

Since I know the `$1 = ""` deletes the first field, anything in `$0` will start at the second character; but it seems kinda obtuse.
This is another way:

```
$ awk '{ $1 = ""; print }' <<<'foo bar baz' | awk '{ $1 = $1; print }'
bar baz
```

since the second awk "recompiles `$0`"; but what I'd really like to do is recompile `$0` in the first awk call.
This approach doesn't work:

```
$ awk '{ $1 = ""; $0 = $0; print }' <<<'foo bar baz'
 bar baz
```

notice the leading spaces. I was hoping the `$0 = $0` would recompile `$0`, but it didn't work.
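One way to do the whole job in a single awk call is to skip field assignment entirely and edit `$0` directly with `sub()`, which removes the first field and its trailing separator without triggering the OFS rebuild (and thus without the leading space):

```shell
# strip leading whitespace, the first field, and the separator after
# it from $0 itself; the bare 1 prints the modified record
printf 'foo bar baz\n' |
  awk '{ sub(/^[[:space:]]*[^[:space:]]+[[:space:]]+/, "") } 1'
# → bar baz
```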
mbigras
(3472 rep)
Apr 9, 2025, 06:44 AM
• Last activity: Jun 26, 2025, 09:32 AM
2
votes
7
answers
5384
views
Removing new line character from a column in a CSV file
We are getting a newline character in one of the columns of a CSV file, so the data for that column is coming in consecutive rows.
Eg:

```
ID,CODE,MESSAGE,DATE,TYPE,OPER,CO_ID
12202,INT_SYS_OCS_EX_INT-0000,"""OCSSystemException: HTTP transport error: java.net.ConnectException: Tried all: '1' addresses, but could not connect over HTTP to server: '10.244.166.9', port: '8080'
failed reasons:
address:'/10.244.166.9',port:'8080' : java.net.ConctException: Connection refused
""",06-09-2021 05:52:32,error,BillCycle,6eb8642aa4b
20840,,,06-09-2021 16:17:18,response,changeLimit,1010f9ea05ff
```

The issue is for column `MESSAGE` and id `12202`, in which the data comes in triple quotes and in consecutive rows.
My requirement is that for the column `MESSAGE`, the data should come in a single row rather than multiple rows, because my ETL loader fails to import an embedded newline.
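A sketch that works for this shape of data (it assumes quotes are balanced within each logical record, which CSV quoting guarantees): join physical lines until the accumulated record contains an even number of `"` characters, replacing embedded newlines with spaces:

```shell
cat > in.csv <<'EOF'
ID,MSG
1,"line one
line two"
2,ok
EOF

# gsub() replaces each " with itself just to count them; a record
# is complete once its quote count is even, so print and reset
awk '{
    rec = (rec == "" ? $0 : rec " " $0)
    if (gsub(/"/, "\"", rec) % 2 == 0) { print rec; rec = "" }
}' in.csv
```

Escaped quotes (`""`) inside fields come in pairs, so they don't upset the parity count.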
mansi bajaj
(29 rep)
Sep 22, 2021, 11:47 AM
• Last activity: Jun 24, 2025, 01:25 PM
6
votes
7
answers
1253
views
How to find numbers in a textfile that are not divisible by 4096, round them up and write new file?
In Linux there is a file `numbers.txt`. It contains a few numbers, all separated by spaces, like this:

```
5476089856 71788143 9999744134 114731731 3179237376
```

In this example only the first and the last numbers are divisible by 4096; all the others are not. I need all numbers to be divisible by 4096.
The numbers which are not divisible by 4096 should be rounded up; the others should be untouched. All of them should be written to a new file `numbers_4096.txt`.
Numbers like the second, `71788143`, must be rounded up to `71790592`; `9999744134` to `9999745024`; and so on. Numbers like `5476089856` and `3179237376` should not be changed, as they are divisible by 4096.
This script can do this with a single number:

```
#!/bin/sh
number=9999744134
divisor=4096

remainder=$((number % divisor))
new_number=$((number - remainder))
roundedup=$((new_number + divisor))

echo "Original number: $number"
echo "Divisor: $divisor"
echo "New number (divisible by $divisor): $new_number"
echo "Roundedup number (divisible by $divisor): $roundedup"
```

But how to do that with a whole list that has to be rewritten, rounded up, into a new file? Note that this script adds 4096 even to numbers which are already divisible by 4096, which is not what I want.
Banana
(189 rep)
Jun 17, 2025, 12:14 AM
• Last activity: Jun 24, 2025, 09:37 AM
8
votes
4
answers
12373
views
Determine maximum column length for every column in a simplified csv-file (one line per row)
To determine the maximum length of each column in a comma-separated csv-file I hacked together a bash script. When I ran it on a Linux system it produced the correct output, but I need it to run on OS X, and it relies on the GNU version of `wc`, which can be used with the parameter `-L` (`--max-line-length`). The version of `wc` on OS X does not support that specific option, and I'm looking for an alternative.
My script (which may not be that good - it reflects my poor scripting skills, I guess):

```
#!/bin/bash
for ((i=1; i < $(head -1 $1 | awk '{print NF}' FS=,)+1; i++));
do echo | xargs echo -n "Column$i: " &&
cut -d, -f $i $1 | wc -L ; done
```

Which prints:

```
Column1: 6
Column2: 7
Column3: 4
Column4: 4
Column5: 3
```

For my test-file:

```
123,eeeee,2323,tyty,3
154523,eegfeee,23,yty,343
```

I know installing the GNU coreutils through Homebrew might be a solution, but that's not a path I want to take, as I'm sure it can be solved without modifying the system.
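A portable sketch that avoids `wc -L` entirely: `length()` and `printf` are POSIX awk, so one pass can track the longest value seen per comma-separated column:

```shell
printf '123,eeeee,2323,tyty,3\n154523,eegfeee,23,yty,343\n' > test.csv

# remember the longest value seen in each column, then report them
awk -F, '{ for (i = 1; i <= NF; i++) if (length($i) > max[i]) max[i] = length($i) }
         END { for (i = 1; i <= NF; i++) printf "Column%d: %d\n", i, max[i] }' test.csv
```

On the test-file above this prints the same `Column1: 6` through `Column5: 3` figures as the original script.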
jpw
(183 rep)
Sep 4, 2014, 07:55 AM
• Last activity: Jun 22, 2025, 04:39 PM
Showing page 1 of 20 total questions