Concatenate the output of multiple cuts in one line in shell script [closed] - bash

I am trying to concatenate the output of three cut commands with - in a shell script, on a single line. I tried the command below, but it doesn't work. How do I do this?
echo "$(cut -d',' -f2 FILE.csv)-$(cut -d',' -f1 FILE.csv)-$(cut -d',' -f3 FILE.csv)"

Using awk to change the delimiter:
awk -F, '{ print $2, $1, $3 }' OFS='-' FILE.csv
Or with csvkit commands (especially useful if your file has more complex CSV, with things like commas in quoted fields or multi-line fields that a naive split on comma can't handle correctly):
csvcut --columns 2,1,3 FILE.csv | csvformat -D'-'
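If you want to stay with cut: the echo in the question joins the entire output of each command substitution rather than the corresponding lines, which is why it doesn't work. paste can stitch the three streams together line by line; a minimal sketch, assuming bash process substitution is available (note it reads FILE.csv three times):
paste -d'-' \
  <(cut -d',' -f2 FILE.csv) \
  <(cut -d',' -f1 FILE.csv) \
  <(cut -d',' -f3 FILE.csv)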

Related

grep an element in a df and display only selected columns with bash [closed]

Hello, I have a df file such as
col1;col2;col3;col4
A;B;C;D
E;F;G;H
I;J;K;L
and I would like to grep for I and display only col1 and col2, to get:
I;J
So far I only know how to do:
grep 'I' df.csv
I;J;K;L
Try this:
grep 'I' df.csv | cut -d';' -f1-2
The cut command will treat each input line as a list of fields separated by ; (-d';'), and will select only the first two fields (-f1-2) for output.
Sample session:
$ cat df.csv
col1;col2;col3;col4
A;B;C;D
E;F;G;H
I;J;K;L
$ grep 'I' df.csv | cut -d';' -f1-2
I;J
$
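Note that grep 'I' matches an I anywhere on the line, so a row such as A;B;I;D would also be selected. A hedged awk alternative that restricts the match to the first field only (a sketch, not something the question strictly requires):
awk -F';' -v OFS=';' '$1 == "I" { print $1, $2 }' df.csv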

how to pass text file as argument in shell script [closed]

The text file contains:
from:sender#gmail.com
to:receiver#gmail.com
subject:
attachment:asdfg.xlsx
All arguments should be handled in the shell script.
I tried the following, but if the subject contains a space it causes problems:
from=$(echo $1|cut -d ":" -f 2 <<< "$1")
to=$(echo $2|cut -d ":" -f 2 <<< "$2")
subject="$3"
attachment=$(echo $4|cut -d ":" -f 2 <<< "$4")
Since you can read the Input_file directly, passing each field as a separate argument is not a good option IMHO. Instead, create the variables inside the script by reading Input_file, keeping the OP's approach of one variable per field but doing the extraction with awk:
from=$(awk -F':' '/^from/{print $NF}' Input_file)
to=$(awk -F':' '/^to/{print $NF}' Input_file)
subject=$(awk -F':' '/^subject/{if($NF){print $NF} else {print "NULL subject"}}' Input_file)
attachment=$(awk -F':' '/^attachment/{print $NF}' Input_file)
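An alternative sketch that reads the file once in plain bash instead of running awk four times (the file path $1 and the case labels are assumptions based on the sample file):
#!/usr/bin/env bash
# Sketch: read key:value lines from the file passed as $1 into variables
while IFS=':' read -r key value; do
    case "$key" in
        from)       from=$value ;;
        to)         to=$value ;;
        subject)    subject=${value:-"NULL subject"} ;;
        attachment) attachment=$value ;;
    esac
done < "$1"
echo "from=$from to=$to subject=$subject attachment=$attachment"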

How to crop a word [closed]

I have this list of inputs:
imalex
thislara
heiscarl
How do I get:
alex
lara
carl
grep
Use grep to take the last four chars:
grep -o '.\{4\}$' file
The -o option makes sure only matched parts are printed.
sed
Using sed we can achieve a similar result:
sed 's/.*\(.\{4\}\)$/\1/' file
Here we capture the last four characters and replace each line with just those characters. They are captured in a group \( \) and inserted with \1.
read & tail
We can also grab the last five characters of each line (the four we want plus the trailing newline) using tail with its -c option, running it once per line via read:
while read -r line; do
    tail -c 5 <<< "$line"
done < file
Two answers using substring arithmetic
bash:
while read -r word; do
    echo "${word:${#word}-4}"
done <<< "$list"
awk
echo "$list" | awk '{print substr($NF, length($NF)-4+1)}'

How to filter records from a file using bash script [closed]

I'm supposed to filter records from multiple files. The files are delimited by |, and I need to keep only the records whose 24th field is "9120". How can I do this with a bash script?
20|09328833007|0|0|9222193385|0|GS|515051032704315|0|*HOME||20140311|101640|0|0|||12|18|0|0|1||3100|00886FC0||0|0|| |||N|N|||N||||N|||||| 301131301|||||||||||11|0||00|FF|11|FF|140311101640|352912058113130000||CEBS1|MSH15
A more concise way, using awk:
awk '$24=="9120"' FS='|' file*
Or, using variables for the column and the value:
awk -v col=24 -v value="9120" '$col==value' FS='|' file*
awk is well suited to processing files like this; you set the field separator to | with -F '|'.
To print the 24th field:
$ awk -F '|' '{ print $24 }' sample.txt
3100
To print lines where the 24th field is the value you specified:
awk -F '|' '$24=="9120" { print; }' sample.txt
Try this:
cat file* | awk -F"|" '$24=="9120" {print $0}'
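Since the question asks for a bash script, here is a hedged pure-bash sketch of the same filter (much slower than awk on large files; the zero-based index 23 corresponds to the 24th field):
while IFS= read -r line; do
    IFS='|' read -r -a fields <<< "$line"        # split the line on |
    [[ "${fields[23]}" == "9120" ]] && printf '%s\n' "$line"
done < file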

Print Record Number with a prefix [closed]

I have a shell script that creates a CSV output file like the one below (each record on a separate line):
aaa,111
bbb,222
ccc,999
I want to add a record number as the first column of the above output, such as:
dm1,aaa,111
dm2,bbb,222
dm3,ccc,999
How do I create a variable for dm within my shell script?
Use awk:
awk '{print "dm" NR "," $0}' input.csv >output.csv
or
... | awk '{print "dm" NR "," $0}' >output.csv
You can use a variable and expr to do the arithmetic, like:
dm=`expr $dm + 1`
echo "dm$dm,...rest of output..."
Pipe your output to awk
yourcommand | awk -F, -v prefix="dm" '{printf "%s%d,%s\n", prefix, NR, $0}'
We set -F, to set the input field separator and -v to pass the prefix into awk. You can replace "dm" with a shell variable if you want.
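For example, with the prefix taken from a shell variable (yourcommand is the placeholder from the answer above):
prefix="dm"
yourcommand | awk -F, -v prefix="$prefix" '{printf "%s%d,%s\n", prefix, NR, $0}'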
