Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
8
votes
4
answers
12373
views
Determine maximum column length for every column in a simplified csv-file (one line per row)
To determine the maximum length of each column in a comma-separated csv-file I hacked together a bash-script. When I ran it on a Linux system it produced the correct output, but I need it to run on OS X, and it relies on the GNU version of `wc`, which supports the `-L` (`--max-line-length`) option. The version of `wc` on OS X does not support that option, so I'm looking for an alternative.
My script (which may not be that good; it reflects my poor scripting skills, I guess):
#!/bin/bash
for ((i=1; i < $(head -1 "$1" | awk '{print NF}' FS=,)+1; i++));
do echo | xargs echo -n "Column$i: " &&
cut -d, -f "$i" "$1" | wc -L ; done
Which prints:
Column1: 6
Column2: 7
Column3: 4
Column4: 4
Column5: 3
For my test-file:
123,eeeee,2323,tyty,3
154523,eegfeee,23,yty,343
I know installing the GNU CoreUtils through Homebrew might be a solution, but that's not a path I want to take as I'm sure it can be solved without modifying the system.
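Since the lengths are needed per column anyway, one alternative is a single `awk` pass that tracks the longest value seen in every field, with no GNU `wc` involved. A sketch (the sample file name `test.csv` is assumed):

```shell
# Build the sample file from the question.
printf '%s\n' '123,eeeee,2323,tyty,3' '154523,eegfeee,23,yty,343' > test.csv

# One pass: remember the longest value seen in each column, print at the end.
awk -F, '
{ for (i = 1; i <= NF; i++) if (length($i) > max[i]) max[i] = length($i) }
END { for (i = 1; i <= NF; i++) print "Column" i ": " max[i] }
' test.csv
```

This reads the file once instead of once per column, and uses only POSIX awk features, so it behaves the same on OS X and Linux.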
jpw
(183 rep)
Sep 4, 2014, 07:55 AM
• Last activity: Jun 22, 2025, 04:39 PM
0
votes
4
answers
8176
views
Group by and sum in shell script without awk
I have a file like:
$ cat input.csv
201,100
201,300
300,100
300,500
100,400
I want to add the values in column 2 for rows that have the same value in column 1. The expected output is as follows:
$ cat output.csv
201,400
300,600
100,400
I tried to do this with the `awk` command, but it is not working on Solaris. Please provide an alternative.
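On Solaris, `/usr/bin/awk` is the old awk; `nawk` or `/usr/xpg4/bin/awk` usually accept modern scripts. If awk must be avoided entirely, the grouping can be done in plain POSIX shell; this sketch keeps first-seen key order at the cost of re-reading the file once per key (fine for small inputs):

```shell
printf '%s\n' 201,100 201,300 300,100 300,500 100,400 > input.csv

# Pass 1: collect the distinct keys in first-seen order.
keys=""
while IFS=, read -r key val; do
    case " $keys " in *" $key "*) ;; *) keys="$keys $key" ;; esac
done < input.csv

# Pass 2: for each key, sum the matching values.
for key in $keys; do
    sum=0
    while IFS=, read -r k v; do
        [ "$k" = "$key" ] && sum=$((sum + v))
    done < input.csv
    printf '%s,%s\n' "$key" "$sum"
done > output.csv
cat output.csv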
Sunny Monga
(29 rep)
Nov 21, 2014, 11:02 AM
• Last activity: Apr 23, 2025, 04:49 AM
1
votes
3
answers
1767
views
How can I replace Nth delimiter in a csv file?
I have a comma-separated csv file, for example:
1,92345,92345,Dear user, this is your amount , 2016-10-10
2,92345,92345,Dear user, this is your amount , 2016-10-09
I need to replace the 4th comma `,` on each line (the one just after `Dear user`) with a pipe `|`.
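`sed`'s substitute command takes an occurrence number, so replacing only the Nth match per line needs no capture groups. A sketch (file name `sample.csv` assumed):

```shell
printf '%s\n' \
'1,92345,92345,Dear user, this is your amount , 2016-10-10' \
'2,92345,92345,Dear user, this is your amount , 2016-10-09' > sample.csv

# s/,/|/4 replaces only the 4th comma on each line.
sed 's/,/|/4' sample.csv
```

The numeric flag is POSIX, so this works with both GNU and BSD sed.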
adnan noor
(31 rep)
Nov 8, 2017, 10:38 AM
• Last activity: Mar 12, 2025, 07:15 AM
3
votes
2
answers
368
views
Transform .csv into 3 columns and a row
Sample Input:
id,Product1,Product2,Product3,Product4
1,0.1,0.3,0.8,0.7
2,0.6,0.7,0.5,0.9
I need output as:
id,productname,product_val
1,Product1,0.1
1,Product2,0.3
1,Product3,0.8
1,Product4,0.7
2,Product1,0.6
2,Product2,0.7
2,Product3,0.5
I had tried
awk -F, 'NR==1 { for (i=1; i<=NF; i++) header[i]=$i; next }
NR>1 { for (i=2; i<=NF; i++) print $1","header[i]","$i }'
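That header-remembering approach can be completed as below (input file name `in.csv` assumed). Note it also emits a `Product4` row for id 2, which the sample output above seems to have dropped:

```shell
printf '%s\n' 'id,Product1,Product2,Product3,Product4' \
              '1,0.1,0.3,0.8,0.7' '2,0.6,0.7,0.5,0.9' > in.csv

# Remember the header names, then emit one id/name/value row per product.
awk -F, 'NR==1 { for (i=2; i<=NF; i++) name[i]=$i; print "id,productname,product_val"; next }
         { for (i=2; i<=NF; i++) print $1 "," name[i] "," $i }' in.csv
```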
Hardik
(33 rep)
Mar 17, 2015, 07:12 AM
• Last activity: Mar 2, 2025, 10:51 AM
4
votes
2
answers
8511
views
Convert json to csv with headers in jq
Is it possible to convert this JSON:
[
{
"bytes": 276697,
"checked": false
},
{
"bytes": 276697,
"checked": false
}
]
to a table WITH headers in jq?
I've tried:
cat file.json | jq '.[] | join(",")'
but it omits headers:
"276697,false"
"276697,false"
it should be:
"bytes,checked"
"276697,false"
"276697,false"
I hoped that you can just run two commands:
cat file.json | jq '.[] | keys, .[] | join(",")'
but the second one fails:
"bytes,checked"
jq: error (at <stdin>:64): Cannot iterate over null (null)
Ideally it would be simpler than this.
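One way (assuming jq 1.6+, whose `join` stringifies numbers and booleans): emit the first object's key list once, then each object's values. A sketch:

```shell
cat > file.json <<'EOF'
[
  { "bytes": 276697, "checked": false },
  { "bytes": 276697, "checked": false }
]
EOF

# Header row from the first object's keys, then one joined row per object.
# Drop -r to keep the JSON quotes shown in the desired output above.
jq -r '(.[0] | keys_unsorted), (.[] | [.[]]) | join(",")' file.json
```

`keys_unsorted` preserves the key order of the input object (`keys` would sort them), and `[.[]]` collects an object's values into an array in that same order.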
Daniel Krajnik
(371 rep)
Aug 25, 2023, 05:23 PM
• Last activity: Feb 3, 2025, 11:25 PM
2
votes
2
answers
2834
views
read line by line and execute cat with grep
I'm building a script that loads content from a file and then uses it with cat and grep on another file.
The problem is that it does not show any values when executing the grep. If I run the command manually it retrieves the data.
I already tried a while loop, but the issue is the same: output6.txt is always empty.
The echo seems to work, so it seems to be a grep or cat issue.
Sample values from BlockspamCLIs.txt:
4412345
441236
2367890
Sample All_events.csv file:
1,441236,20220909120909,test
2,441237,20220909120909,test
3,441232,20220909120909,test
4,44136,20220909120909,test
5,2367890,20220909120909,test
As output I'm expecting to retrieve and store the records from the CSV file that contain those numbers, for example:
1,441236,20220909120909,test
5,2367890,20220909120909,test
my script:
for i in $(cat BlockspamCLIs.txt)
do
grep $i *_All_events.csv >> output6.txt
echo $i
done
thank you in advance.
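When "manually it works but the loop finds nothing", the usual culprit is Windows carriage returns in the pattern file (each `$i` becomes `441236\r`, which matches nothing). Stripping them and letting grep read all patterns at once also removes the loop entirely. A sketch with the question's sample data (the events file name `x_All_events.csv` is invented to match the glob):

```shell
printf '%s\n' 4412345 441236 2367890 > BlockspamCLIs.txt
printf '%s\n' '1,441236,20220909120909,test' '2,441237,20220909120909,test' \
              '3,441232,20220909120909,test' '4,44136,20220909120909,test' \
              '5,2367890,20220909120909,test' > x_All_events.csv

# Strip any CR characters (a frequent cause of "no matches"), then let
# grep read every pattern at once as fixed strings.
tr -d '\r' < BlockspamCLIs.txt > patterns.txt
grep -F -f patterns.txt *_All_events.csv > output6.txt
cat output6.txt
```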
macieira
(123 rep)
Sep 28, 2022, 08:27 AM
• Last activity: Aug 7, 2024, 10:21 AM
2
votes
4
answers
4708
views
Count the maximum character length for all the data fields in a simplified csv file and output to txt
Given a simplified CSV (max one line per row) with many data fields (>50), how can I count the maximum character length for each data field and then export all the counts to a txt file?
BTW, I want to ignore the first line of the file which contains the column headings.
For example, given the input
These,are,the,column_headings_which_may_be_very_long_but_they_don't_count
abcdefghij,abcdefghijk,abcdefghijkl,abc
aardvark,bat,cat,dog
ant,bee,cow,abcdefghijklm
The end result could be something like the following, where the first column indicates the data fields in the original file and the second column indicates the maximum length of the field:
1 | 10
2 | 11
3 | 12
4 | 13
i.e., the length of the longest value in column 1 is 10 (`abcdefghij`), the length of the longest value in column 2 is 11 (`abcdefghijk`), etc.
I have researched the site a little and found a couple of ways to count the maximum length in a fairly straightforward manner when a specific data field is given. For example, using the `cut` and `wc` commands to count the maximum length of the second field in the file:
cut -d, -f2 test.csv | wc -L
But how can I loop that command over all the data fields and then output the results?
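Rather than looping `cut | wc` per field, a single `awk` pass can track every field's maximum while skipping the header, then write the report. A sketch (file names `test.csv` and `lengths.txt` assumed):

```shell
cat > test.csv <<'EOF'
These,are,the,column_headings_which_may_be_very_long_but_they_don't_count
abcdefghij,abcdefghijk,abcdefghijkl,abc
aardvark,bat,cat,dog
ant,bee,cow,abcdefghijklm
EOF

# NR > 1 skips the heading row; track the longest value per field,
# then print "field | max" lines to the output file.
awk -F, 'NR > 1 { for (i = 1; i <= NF; i++) if (length($i) > max[i]) max[i] = length($i) }
         END { for (i = 1; i <= NF; i++) print i " | " max[i] }' test.csv > lengths.txt
cat lengths.txt
```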
QY Luo
(23 rep)
Sep 15, 2014, 04:44 PM
• Last activity: Apr 19, 2024, 10:39 AM
7
votes
6
answers
1491
views
Split a CSV file based on second column value
I am using Ubuntu and I want to split my csv file into two csv files based on the value in the second column (age): the first file for patients under 60 (age < 60), the second for patients aged 60 or over (age >= 60). For example, if I have the following input:
id,age
1,65
2,63
3,5
4,55
5,78
The desired output is:
file_under:
id,age
3,5
4,55
file_over:
id,age
1,65
2,63
5,78
I have tried the following code, but it removes the header (the line of column names). How can I avoid this?
awk -F ',' '($2>=60){print}' file.csv > file_over.csv
The input file is about 50k rows (lines).
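Writing both output files from one awk invocation lets `NR == 1` send the header to each before routing the data rows. A sketch (file names assumed as in the question):

```shell
printf '%s\n' id,age 1,65 2,63 3,5 4,55 5,78 > file.csv

# NR == 1 is the header: copy it to both files, then route rows by age.
awk -F, 'NR == 1 { print > "file_under.csv"; print > "file_over.csv"; next }
         $2 >= 60 { print > "file_over.csv"; next }
                  { print > "file_under.csv" }' file.csv
cat file_under.csv file_over.csv
```

One pass over the 50k rows, and the header lands in both outputs.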
Solomon123
(131 rep)
Apr 5, 2023, 12:37 PM
• Last activity: Sep 27, 2023, 04:41 PM
6
votes
6
answers
1458
views
Making single digit numbers between 1 and 9 into double digits, inside CSV file
I have a CSV file with thousands of lines like these
1664;4;5;35;37;43;5;6
1663;21;23;32;40;49;8;11
1662;16;17;34;35;44;5;10
1661;2;9;23;32;40;6;7
1660;23;25;30;44;47;9;12
1659;3;5;9;32;43;6;10
1658;4;6;10;13;34;3;5
1657;8;9;33;35;40;3;6
1656;15;20;31;44;48;1;3
1655;25;27;35;40;45;7;11
1654;7;32;33;34;38;6;9
1653;5;7;11;27;37;6;12
1652;7;31;33;35;36;7;10
1651;4;12;34;35;45;1;9
1650;5;8;29;35;48;5;6
1649;2;11;28;42;48;4;9
1648;2;11;12;19;38;4;8
You can see that the numbers between 1 and 9 are single digits.
How can I use `sed` or something similar to convert these numbers to double digits by prepending a zero, i.e.
01 02 03 04 05 06 07 08 09
instead of
1 2 3 4 5 6 7 8 9
Thanks in advance.
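A plain `s/;([1-9]);/;0\1;/g` in sed misses every other field because adjacent matches share a `;`, so it needs two passes. `awk`'s `sprintf("%02d", ...)` avoids that entirely, assuming every field after the first is numeric (as in the sample, where the first field is a 4-digit draw number that `%02d` leaves untouched anyway):

```shell
printf '%s\n' '1664;4;5;35;37;43;5;6' '1663;21;23;32;40;49;8;11' > draws.csv

# Reformat every field after the first to at least two digits:
# %02d turns 4 into 04 and leaves 21 as 21.
awk -F';' -v OFS=';' '{ for (i = 2; i <= NF; i++) $i = sprintf("%02d", $i); print }' draws.csv
```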
Duck
(4794 rep)
Sep 4, 2023, 03:04 PM
• Last activity: Sep 24, 2023, 11:14 PM
0
votes
4
answers
1098
views
Removing white space using sed but skipping Date-Time stamp
I wanted to remove whitespace from a CSV file, which I am able to do using `s/\ //g`, but at the same time I want to avoid removing the space inside fields that contain timestamps, e.g. `"06-JAN-15 13:20:00"`: currently it joins them, as expected, giving `"06-JAN-1513:20:00"`.
One solution would be to delete all the whitespace and then grep for the date `06-JAN-15` and add a space after it. I'm not sure how that can be done.
Sample CSV file (one line only):
294335,"17-APR-15 00:00:00 ",6258,"C"," <huge blank space> ","07-JAN-15 00:00:00"
The <huge blank space> field will contain an XML message if not blank.
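The "delete everything, then put the timestamp space back" idea works as two substitutions in one sed call: after all spaces are gone, a `DD-MON-YY` date is immediately followed by the `HH:` of its time, which is easy to match. A sketch (GNU or BSD `sed -E` assumed; the sample line is simplified):

```shell
printf '%s\n' '294335,"17-APR-15 00:00:00 ",6258,"C"," ","07-JAN-15 00:00:00"' > sample.csv

# Pass 1 removes every space; pass 2 re-inserts one space between a
# DD-MON-YY date and the HH: that now touches it.
sed -E 's/ +//g; s/([0-9]{2}-[A-Z]{3}-[0-9]{2})([0-9]{2}:)/\1 \2/g' sample.csv
```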
JavaQuest
(207 rep)
Jun 30, 2015, 07:03 PM
• Last activity: Aug 17, 2023, 02:13 PM
2
votes
7
answers
746
views
How to extract columns in which their name containing the word "chronic" from CSV file
I have a big csv file (around 1000 columns) and I want to extract to a new file only the columns whose header name contains the word "chronic". How can I do that?
For example if I have:
gender,chronic_disease1,chronic_disease2
male,2008,2009
The desired output is:
chronic_disease1,chronic_disease2
2008,2009
Note: the column/field separator is a comma (`,`). If there is no `chronic` match, then there should be no output at all.
Solomon123
(131 rep)
Mar 25, 2023, 11:58 AM
• Last activity: Apr 10, 2023, 12:20 AM
-3
votes
2
answers
474
views
Extracting columns with similar name from two different csv files
I have two different csv files; one file contains extended column names while the other contains shortened versions of the same column names.
For example:
csv file 1 is :
gender,aciclovir drug,aclidinium bromide abc,acenocoumarol drdd
male,2008,2009,2009
csv file 2 is :
gender,aciclovir,aclidinium bromide,ajmaline
male,2008,2009,2010
Now I want to extract the columns from file 1 whose names share a common word with the columns of file 2.
Desired output in my example:
gender,aciclovir drug,aclidinium bromide abc
male,2008,2009
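One sketch, assuming the shared word is always the first word of each header (true for the sample): collect the first header words of file 2 as a key set, then keep the file 1 columns whose first header word is in that set:

```shell
printf '%s\n' 'gender,aciclovir drug,aclidinium bromide abc,acenocoumarol drdd' \
              'male,2008,2009,2009' > file1.csv
printf '%s\n' 'gender,aciclovir,aclidinium bromide,ajmaline' 'male,2008,2009,2010' > file2.csv

awk -F, '
# First file (file2): record the first word of every header name.
NR == FNR { if (FNR == 1) for (i = 1; i <= NF; i++) { split($i, w, " "); seen[w[1]] }
            next }
# Second file (file1): keep columns whose first header word was seen.
FNR == 1  { for (i = 1; i <= NF; i++) { split($i, w, " "); if (w[1] in seen) keep[++n] = i } }
{ line = ""; for (j = 1; j <= n; j++) line = line (j > 1 ? "," : "") $keep[j]; print line }
' file2.csv file1.csv
```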
Solomon123
(131 rep)
Mar 27, 2023, 07:08 PM
• Last activity: Mar 29, 2023, 12:07 PM
-1
votes
1
answers
234
views
Compare different column values together between two files and if match print different column value
File1 has data as below; `,` is the field separator:
1,T_EXIT,9053.0,10325.0,, ,
2,V_TURN,120.0,11334.0,,GOAL,RECK
3,Q_ENTRY,122.0,190.0,, ,
4,Q_ENTRY_RUN,130.0,569.0,, ,
File2 has data
SYNC CLK
T_EXIT OPEN
Q_ENTRY CLOSE ALLOW
CORE_T MODE
I want to compare column1 of file2 with column2 of file1, and if there is an EXACT MATCH I want to copy column2 of file2 into column6 of file1 and column3 of file2 into column7 of file1.
I want the output to be:
1,T_EXIT,9053.0,10325.0,,OPEN,
2,V_TURN,120.0,11334.0,,GOAL,RECK
3,Q_ENTRY,122.0,190.0,,CLOSE,ALLOW
4,Q_ENTRY_RUN,130.0,569.0,, ,
I tried the code below, but I could not work out how to save the two column values from file2 together.
awk 'NR==FNR{A[$1]=$2;B[$1]=$3;next} ($1 in A) {$6=A[$1]; $7=B[$1]}1' file2 FS=, OFS=, file1 > test
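That attempt is close: the two arrays `A` and `B` already carry file2's two columns per name; the lookup just has to use `$2` of file1 (the name field) rather than `$1`. A sketch with the sample data:

```shell
printf '%s\n' '1,T_EXIT,9053.0,10325.0,, ,' '2,V_TURN,120.0,11334.0,,GOAL,RECK' \
              '3,Q_ENTRY,122.0,190.0,, ,' '4,Q_ENTRY_RUN,130.0,569.0,, ,' > file1
printf '%s\n' 'SYNC CLK' 'T_EXIT OPEN' 'Q_ENTRY CLOSE ALLOW' 'CORE_T MODE' > file2

# file2 is whitespace-separated: remember its 2nd and 3rd columns per name,
# then switch FS to comma and match on column 2 of file1.
awk 'NR==FNR { A[$1]=$2; B[$1]=$3; next }
     ($2 in A) { $6=A[$2]; $7=B[$2] } 1' file2 FS=, OFS=, file1 > test
cat test
```

Rows whose name is missing from file2 (like `Q_ENTRY_RUN`) pass through untouched.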
Shreya
(139 rep)
Mar 16, 2023, 03:22 PM
• Last activity: Mar 17, 2023, 01:25 PM
0
votes
1
answers
412
views
Adding an empty column to a CSV file with Miller
I have a CSV file that looks like this:
0
1
2
3
I'd like to use Miller to append an empty column x
to every row so that the output file looks like this:
0,x
1,
2,
3,
How do I do that?
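With Miller's `put` verb, assigning a new field appends a column to every record, and an empty-string value leaves the cell blank in CSV output. A sketch (assuming `mlr --csv`, which treats the first line, here the lone `0`, as the header):

```shell
printf '%s\n' 0 1 2 3 > in.csv

# Add an empty field named "x" to every record; the header gains ",x"
# and each data row gains a trailing ",".
mlr --csv put '$x = ""' in.csv
```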
Mateusz Piotrowski
(4983 rep)
Jan 26, 2023, 04:53 PM
-1
votes
2
answers
155
views
Merge duplicate values in columns
Given a file like this
value,value,value,value
value1,value1,value,value1
value2,value2,value,value2
How can I transform it to look like this:
value,value,value,value
value1,value1, ,value1
value2,value2, ,value2
Basically, I want to merge the duplicate values in column 3, keeping the value in the first row only while retaining the other data, and keep each line as one record within the csv.
I have tried `cat file | sort -u -t, -k3` but it does not work.
ayush
(1 rep)
Jan 19, 2023, 12:02 PM
• Last activity: Jan 19, 2023, 03:07 PM
-2
votes
5
answers
116
views
Selecting all the records based on filter criteria on 2 fields
ABC,1234.5333,5733.9374,5673.352,352,2.346374,-0.6686874
XYZ,5463.674,93773.683,5734.874,432,-5.683423,-10.38393
AES,7436874.5743,937.6843,8464.5634,564,6.35739,10.6834
PQR,784945.464,57484.8647,57484.453,5764,-10.67484,5.74764
From the above csv file, I need to write a shell script which will select all the records where the absolute value of either of the last two fields [ABS(6th field) or ABS(7th field)] is >= 10.
As a result, my output should look like this:
XYZ,5463.674,93773.683,5734.874,432,-5.683423,-10.38393
AES,7436874.5743,937.6843,8464.5634,564,6.35739,10.6834
PQR,784945.464,57484.8647,57484.453,5764,-10.67484,5.74764
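awk has no built-in `abs`, but a one-line user function covers it, and a bare pattern with no action prints matching records. A sketch (file name `data.csv` assumed):

```shell
printf '%s\n' \
'ABC,1234.5333,5733.9374,5673.352,352,2.346374,-0.6686874' \
'XYZ,5463.674,93773.683,5734.874,432,-5.683423,-10.38393' \
'AES,7436874.5743,937.6843,8464.5634,564,6.35739,10.6834' \
'PQR,784945.464,57484.8647,57484.453,5764,-10.67484,5.74764' > data.csv

# Print a record when the absolute value of field 6 or field 7 is >= 10.
awk -F, 'function abs(x) { return x < 0 ? -x : x }
         abs($6) >= 10 || abs($7) >= 10' data.csv
```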
John
(73 rep)
Jan 12, 2023, 11:04 AM
• Last activity: Jan 12, 2023, 04:53 PM
0
votes
1
answers
281
views
Merging two Unix files
I have 2 pipe-delimited files, say file1 and file2.
file1 has, say, 33 columns and file2 may have 34/35/36 columns, so let's not worry about exactly how many columns there are.
What I want to do is compare the values in file1 & file2 (from column 1 to column 32). If all those values are the same, then take the extra values from file2 and append them to all matching records in file1.
Say the 1st record in file2 has 5 matches in file1; then take the value `|84569|21.5|1` and append it to all matches in file1 (see file3 for the expected result).
Similarly, for the 2nd record in file2 we have 5 matches in file1, so take the value `|0` and append it to all matched records in file1.
The same goes for the 3rd record from file2: there are 3 matches, so take the value `|21457879|12.4` and append it to all 3 matched rows in file1.
If you are wondering where the appended values come from in file2: from column 34 onwards. The start position is fixed but the end position is not; for example, for "a" we take values from columns 34/35/36, for "b" just column 34, and for "c" columns 34/35.
I do not know how to format the data in my examples below, so I'm giving it as it is.
file1
a|a1|a2|a3|a4|...|a32|acb@sma.com
a|a1|a2|a3|a4|...|a32|acd@sm.com$1553:2015-02-14
a|a1|a2|a3|a4|...|a32|axwer@xi.com30:2015-03-01
a|a1|a2|a3|a4|...|a32|acbw@ma.com$121:2015-01-31
a|a1|a2|a3|a4|...|a32|art@ma.com$293:2015-02-28
b|b1|b2|b3|b4|...|b32|asmi@g.in$542:2013:05:24
b|b1|b2|b3|b4|...|b32|kasmi@g.in$542:2013:05:24
b|b1|b2|b3|b4|...|b32|asmi@g.in14:2013:05:24
b|b1|b2|b3|b4|...|b32|asmi@g.in$542:2013:05:24
b|b1|b2|b3|b4|...|b32|asmi@g.in232:2014:05:24
c|c1|c2|c3|c4|...|c32|Asce@ita.in
c|c1|c2|c3|c4|...|c32|$200:2011:12:06
c|c1|c2|c3|c4|...|c32|kst@gre.in$214:2001:01:31
file2
a|a1|a2|a3|a4|...|a32|acb@sma.com|84569|21.5|1
b|b1|b2|b3|b4|...|b32|asmi@g.in$542:2013:05:24|0
c|c1|c2|c3|c4|...|c32|Asce@ita.in|21457879|12.4
Expected File: File3
a|a1|a2|a3|a4|...|a32|acb@sma.com|84569|21.5|1
a|a1|a2|a3|a4|...|a32|acd@sm.com$1553:2015-02-14|84569|21.5|1
a|a1|a2|a3|a4|...|a32|axwer@xi.com30:2015-03-01|84569|21.5|1
a|a1|a2|a3|a4|...|a32|acbw@ma.com$121:2015-01-31|84569|21.5|1
a|a1|a2|a3|a4|...|a32|art@ma.com$293:2015-02-28|84569|21.5|1
b|b1|b2|b3|b4|...|b32|asmi@g.in$542:2013:05:24|0
b|b1|b2|b3|b4|...|b32|kasmi@g.in$542:2013:05:24|0
b|b1|b2|b3|b4|...|b32|asmi@g.in14:2013:05:24|0
b|b1|b2|b3|b4|...|b32|asmi@g.in$542:2013:05:24|0
b|b1|b2|b3|b4|...|b32|asmi@g.in232:2014:05:24|0
c|c1|c2|c3|c4|...|c32|Asce@ita.in|21457879|12.4
c|c1|c2|c3|c4|...|c32|$200:2011:12:06|21457879|12.4
c|c1|c2|c3|c4|...|c32|kst@gre.in$214:2001:01:31|21457879|12.4
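One sketch with `awk`: build a key from the first 32 columns of file2, store everything from column 34 on, then append that to every file1 row with the same key. The key width is a variable so the toy data below can use `k=2` (set `k=32` for the real files; the sample rows here are invented stand-ins, since the `...` in the question's data is not literal):

```shell
# Toy analogue: 2 key columns, 1 "email" column, extras from column k+2 on.
printf '%s\n' 'a|a1|x@x.com' 'a|a1|y@y.com' 'b|b1|z@z.com' > file1
printf '%s\n' 'a|a1|x@x.com|84569|21.5|1' 'b|b1|z@z.com|0' > file2

awk -F'|' -v OFS='|' -v k=2 '
# First file (file2): key = columns 1..k, extras = columns k+2..NF.
NR == FNR { key = $1; for (i = 2; i <= k; i++) key = key OFS $i
            extra = ""; for (i = k + 2; i <= NF; i++) extra = extra OFS $i
            add[key] = extra; next }
# Second file (file1): append the stored extras (empty when no match).
{ key = $1; for (i = 2; i <= k; i++) key = key OFS $i
  print $0 add[key] }' file2 file1
```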
mdx
(11 rep)
Apr 24, 2015, 10:44 AM
• Last activity: Jan 5, 2023, 08:19 AM
2
votes
3
answers
1046
views
Delete rows from a simplified CSV where some columns matching specific pattern
I have the following simplified CSV file (no separators or newlines embedded in fields):
ID,PDBID,FirstResidue,FirstChain,SecondResidue,SecondChain,ThirdResidue,ThirdChain,FourthResidue,FourthChain,Pattern
RZ_AUTO_505,1hmh,A22L,C,A22L,A,G21L,A,A23L,A,AA/GA Naked ribose
RZ_AUTO_506,1hmh,A22L,C,A22L,A,G114,A,A23L,A,AA/GA Naked ribose
RZ_AUTO_507,1hmh,A130,E,A90,A,G80,A,A130,A,AA/GA Naked ribose
RZ_AUTO_508,1hmh,A140,E,A90,E,G120,A,A90,A,AA/GA Naked ribose
RZ_AUTO_509,1hmh,G102,A,C103,A,G102,E,A90,E,GC/GA Single ribose
RZ_AUTO_510,1hmh,G102,A,C103,A,G120,E,A90,E,GC/GA Single ribose
RZ_AUTO_511,1hmh,G113,C,C112,C,G21L,A,A23L,A,GC/GA Single ribose
RZ_AUTO_512,1hmh,G113,C,C112,C,G114,A,A23L,A,GC/GA Single ribose
RZ_AUTO_513,1hnw,C1496,A,G1497,A,A1518,A,A1519,A,CG/AA Canonical ribose
RZ_AUTO_514,1hnw,C1496,A,G1497,A,A1519,A,A1518,A,CG/AA Canonical ribose
RZ_AUTO_515,1hnw,C221,A,U222,A,A195,A,A196,A,CU/AA Canonical ribose
RZ_AUTO_516,1hnw,C221,A,U222,A,A196,A,A195,A,CU/AA Canonical ribose
I need to remove the CSV rows where the value of FirstResidue, SecondResidue, ThirdResidue or FourthResidue doesn't end with an integer.
The output should look something like this:
RZ_AUTO_507,1hmh,A130,E,A90,A,G80,A,A130,A,AA/GA Naked ribose
RZ_AUTO_508,1hmh,A140,E,A90,E,G120,A,A90,A,AA/GA Naked ribose
RZ_AUTO_509,1hmh,G102,A,C103,A,G102,E,A90,E,GC/GA Single ribose
RZ_AUTO_510,1hmh,G102,A,C103,A,G120,E,A90,E,GC/GA Single ribose
RZ_AUTO_513,1hnw,C1496,A,G1497,A,A1518,A,A1519,A,CG/AA Canonical ribose
RZ_AUTO_514,1hnw,C1496,A,G1497,A,A1519,A,A1518,A,CG/AA Canonical ribose
RZ_AUTO_515,1hnw,C221,A,U222,A,A195,A,A196,A,CU/AA Canonical ribose
RZ_AUTO_516,1hnw,C221,A,U222,A,A196,A,A195,A,CU/AA Canonical ribose
So I'm just wondering how to achieve this using `awk`. I'm using Mac OS X.
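The four residue fields are columns 3, 5, 7 and 9, so the filter is a bare awk pattern requiring each to end in a digit; the macOS `/usr/bin/awk` handles this fine. A sketch with a trimmed sample (file name `rz.csv` assumed):

```shell
cat > rz.csv <<'EOF'
ID,PDBID,FirstResidue,FirstChain,SecondResidue,SecondChain,ThirdResidue,ThirdChain,FourthResidue,FourthChain,Pattern
RZ_AUTO_505,1hmh,A22L,C,A22L,A,G21L,A,A23L,A,AA/GA Naked ribose
RZ_AUTO_507,1hmh,A130,E,A90,A,G80,A,A130,A,AA/GA Naked ribose
EOF

# NR > 1 drops the header (absent from the desired output); keep a row
# only when all four residue fields end in a digit.
awk -F, 'NR > 1 && $3 ~ /[0-9]$/ && $5 ~ /[0-9]$/ && $7 ~ /[0-9]$/ && $9 ~ /[0-9]$/' rz.csv
```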
Sri
(165 rep)
Feb 13, 2015, 04:27 AM
• Last activity: Dec 14, 2022, 09:44 PM
0
votes
5
answers
237
views
Rolling up Multiple Rows into a Single Row
How can I roll up multiple rows from a csv file into one row? I have worked the query out in SQL, but I am not sure how to achieve the same in Linux.
This is how my current file looks :
swainb02,Ben Swain,1015
swainb02,Ben Swain,1016
swainb02,Ben Swain,1018
swainb02,Ben Swain,1020
shaiks21,Sarah Shaikh,0073
shaiks21,Sarah Shaikh,0080
shaiks21,Sarah Shaikh,0082
There are multiple users with access to multiple area codes. What I am looking for is a simpler version of this file for better readability.
Desired Output :
swainb02,Ben Swain,1015,1016,1018,1020
shaiks21,Sarah Shaikh,0073,0080,0082
Any idea how this can be worked out? Thanks.
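This is the shell equivalent of SQL's `GROUP_CONCAT`: group on the first two fields and append each code, keeping first-seen order. A sketch (file name `access.csv` assumed; string concatenation preserves leading zeros like `0073`):

```shell
printf '%s\n' 'swainb02,Ben Swain,1015' 'swainb02,Ben Swain,1016' \
              'swainb02,Ben Swain,1018' 'swainb02,Ben Swain,1020' \
              'shaiks21,Sarah Shaikh,0073' 'shaiks21,Sarah Shaikh,0080' \
              'shaiks21,Sarah Shaikh,0082' > access.csv

# Group on user,name; start each group from its full first row, then
# append the code of every later row; order[] preserves input order.
awk -F, '{ k = $1 FS $2
           if (!(k in row)) { order[++n] = k; row[k] = $0 }
           else row[k] = row[k] "," $3 }
         END { for (i = 1; i <= n; i++) print row[order[i]] }' access.csv
```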
Aayush Jain
(93 rep)
Jun 26, 2022, 06:35 AM
• Last activity: Dec 9, 2022, 11:59 AM
-1
votes
3
answers
2254
views
Add a header field, and the file's name at the end of the data lines, for all the csv files in a folder
I want to add the filename (without extension) at the end of every line of all the csv files in a folder. All the files have the same header.
Let's say I have two files a.csv and b.csv in a folder.
a.csv contains (first line is a header)
num1,num2,num3
1,2,3
b.csv contains (first line is a header)
num1,num2,num3
4,5,6
I want a.csv file to be (first line is a header)
num1,num2,num3,filename
1,2,3,a
I want b.csv file to be (first line is a header)
num1,num2,num3,filename
4,5,6,b
How can I do it in Unix?
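A shell loop over the csv files can pass each basename into `awk`, which appends a `filename` header field on line 1 and the name on every other line, rewriting each file through a temp file. A sketch (the `csvdir` directory is invented for the example):

```shell
mkdir -p csvdir
printf '%s\n' num1,num2,num3 1,2,3 > csvdir/a.csv
printf '%s\n' num1,num2,num3 4,5,6 > csvdir/b.csv

# For every csv: add a "filename" header field, then the basename
# (without .csv) to each data row; rewrite in place via a temp file.
for f in csvdir/*.csv; do
    base=$(basename "$f" .csv)
    awk -F, -v OFS=, -v n="$base" 'NR == 1 { print $0, "filename"; next }
                                   { print $0, n }' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
cat csvdir/a.csv csvdir/b.csv
```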
Thomas
(39 rep)
Jul 23, 2017, 01:04 AM
• Last activity: Oct 16, 2022, 04:57 PM
Showing page 1 of 20 total questions