Print Record Number with a prefix [closed] - bash

I have a shell script that creates a CSV output file like the one below (each record on a separate line):
aaa,111
bbb,222
ccc,999
I want to add a record number as the first column of that output, like this:
dm1,aaa,111
dm2,bbb,222
dm3,ccc,999
How do I create a variable for the dm record number within my shell script?

Use awk:
awk '{print "dm" NR "," $0}' input.csv >output.csv
or
... | awk '{print "dm" NR "," $0}' >output.csv

You can use a variable and expr to do the arithmetic, like:
dm=`expr $dm + 1`
echo "dm$dm,...rest of output..."

Pipe your output to awk:
yourcommand | awk -F, -v prefix="dm" '{printf "%s%d,%s\n", prefix, NR, $0}'
We set -F, to set the input field separator, and -v to set a prefix. You can replace "dm" with a shell variable if you want.
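For instance, a quick sketch with the prefix taken from a shell variable (the variable name p is an assumption):
p="dm"
yourcommand | awk -F, -v prefix="$p" '{printf "%s%d,%s\n", prefix, NR, $0}' >output.csv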

Related

Concatenate the output of multiple cuts in one line in shell script [closed]

I am trying to concatenate the output of three cut commands with - in a shell script, on a single line. I tried the command below, but it doesn't work. How do I do this?
echo "$(cut -d',' -f2 FILE.csv)-$(cut -d',' -f1 FILE.csv)-$(cut -d',' -f3 FILE.csv)"
Using awk to change the delimiter:
awk -F, '{ print $2, $1, $3 }' OFS='-' FILE.csv
Or with csvkit commands (especially useful if your file contains more complex CSV, with things like commas in quoted fields or multi-line fields that a naive split on comma can't handle correctly):
csvcut --columns 2,1,3 FILE.csv | csvformat -D'-'
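If you want to stay close to the original cut approach, here is a sketch using paste with process substitution (so it assumes bash or ksh rather than plain sh); it joins the three column streams line by line with -:
paste -d'-' <(cut -d',' -f2 FILE.csv) <(cut -d',' -f1 FILE.csv) <(cut -d',' -f3 FILE.csv)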

How can I combine multiple regular expression conditions in a single awk command? [closed]

The exercise: write the shell command that reports how many lines in a particular file contain a whitespace-delimited number of at least four digits (for example: " 1945 ").
When I tried to solve the exercise I could not get the result I wanted, so I am asking for help here.
First of all, I created a txt file and filled it with random numbers. The - sign below represents a space.
---234352432-
-123---
-12342---
-1-
-12345-
122333
I wrote the following to count the numbers that have at least 4 digits and spaces both in front of and behind them.
cat text1.txt | awk '/^[[:space:]]&&[0-9]{4,}&&[[:space:]]$/' | awk 'END {print NR}'
returned 0
cat text1.txt | awk '/^" "/' | awk '/[0-9] {4, }/' | awk '/" "$/' | awk '{print NR}'
returned 6
This might be easier with grep:
$ grep -Ec '\s[0-9]{4,}\s' file
3
To verify the matches:
$ grep -E '\s[0-9]{4,}\s' file | tr ' ' '-'
---234352432--
-12342---
-12345-
To match a line that starts with white space then has 4 or more contiguous digits then white space to the end of the line:
$ awk '/^[[:space:]]+[0-9]{4,}[[:space:]]+$/{c++} END{print c+0}' file
3
To match a line that starts with white space, ends with white space and contains 4 or more contiguous digits somewhere on the line:
$ awk '/^[[:space:]]+/ && /[0-9]{4,}/ && /[[:space:]]+$/{c++} END{print c+0}' file
3
They'll behave the same with the input you provided but try them both with:
3 foo 12345 bar 7
for example (where that line has blanks at the start and end).
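A quick check shows the difference on that line (the counts are what each command prints):
$ printf ' 3 foo 12345 bar 7 \n' | awk '/^[[:space:]]+[0-9]{4,}[[:space:]]+$/{c++} END{print c+0}'
0
$ printf ' 3 foo 12345 bar 7 \n' | awk '/^[[:space:]]+/ && /[0-9]{4,}/ && /[[:space:]]+$/{c++} END{print c+0}'
1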
You never need to cat a file into a pipe to awk (or any other command), nor do you need a pipeline of multiple awk commands (or of awk+sed+grep, etc.). If you ever find yourself doing any of that, you're using the wrong approach.
$ awk '{for(i=1; i<=NF; i++) if($i ~ /^[0-9]/ && $i>999) print $i}' text1.txt >> text2.txt
$ awk 'END {print NR}' text2.txt
That worked in my case. Thank you for everything.

how to pass text file as argument in shell script [closed]

The text file contains:
from:sender@gmail.com
to:receiver@gmail.com
subject:
attachment:asdfg.xlsx
All arguments should be handled in the shell script. I tried the following, but if the subject contains a space it causes problems:
from=$(echo $1|cut -d ":" -f 2 <<< "$1")
to=$(echo $2|cut -d ":" -f 2 <<< "$2")
subject="$3"
attachment=$(echo $4|cut -d ":" -f 2 <<< "$4")
When you can read the Input_file directly, passing its contents in as variables is not a good option IMHO. So create the variables inside the script by reading the Input_file, keeping the OP's method of creating variables but enhancing the code with awk:
from=$(awk -F':' '/^from/{print $NF}' Input_file)
to=$(awk -F':' '/^to/{print $NF}' Input_file)
subject=$(awk -F':' '/^subject/{if($NF){print $NF} else {print "NULL subject"}}' Input_file)
attachment=$(awk -F':' '/^attachment/{print $NF}' Input_file)
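Alternatively, a minimal sketch that collects all four variables in one pass with a plain shell loop; it splits each line only on the first colon, so a subject containing spaces survives, and the NULL subject fallback mirrors the awk version above:
while IFS=: read -r key value; do
  case $key in
    from)       from=$value ;;
    to)         to=$value ;;
    subject)    subject=${value:-"NULL subject"} ;;
    attachment) attachment=$value ;;
  esac
done < Input_file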

How to filter records from a file using bash script [closed]

I'm supposed to filter records from multiple files. The files are delimited by |. I will filter the records on the 24th field by the value "9120". How am I supposed to filter the files using a bash script? A sample record:
20|09328833007|0|0|9222193385|0|GS|515051032704315|0|*HOME||20140311|101640|0|0|||12|18|0|0|1||3100|00886FC0||0|0|| |||N|N|||N||||N|||||| 301131301|||||||||||11|0||00|FF|11|FF|140311101640|352912058113130000||CEBS1|MSH15
A concise way, using awk:
awk '$24=="9120"' FS='|' file*
Using variables for the column and value:
awk -v col=24 -v value="9120" '$col==value' FS='|' file*
awk is useful for processing files like this. You set the field separator to |.
To print the 24th field:
$ awk -F '|' '{ print $24 }' sample.txt
3100
To print lines where the 24th field is the value you specified:
awk -F '|' '$24=="9120" { print; }' sample.txt
Try this:
awk -F'|' '$24=="9120" {print $0}' file*
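If you want the filter wrapped in a bash script, here is a minimal sketch (filter.sh is a hypothetical name; the value and file names are passed as arguments):
#!/bin/bash
# Usage: ./filter.sh 9120 file1 file2 ...
value=$1
shift
awk -F'|' -v v="$value" '$24 == v' "$@"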

Trying to create so many .dat files using loop [closed]

My requirement is to create files in an existing directory on the server.
#!/bin/ksh
file1=$1 # Signifies DATE config file
file2=$2 # Signifies MONT config file
file4=$3 # Signifies SEQN config file
config1=`cat $file1 | awk -F "," '{print $1}'`
config2=`cat $file2 | awk -F "," '{print $2}'`
config3=`cat $file3 | awk -F "," '{print $3}'`
echo AT_EXTENSION_"$config1""$config2""$config3".dat
cat MY_DIR/$echo
Will this KornShell (ksh) script work to create a file in the respective directory, named with the output of the echo?
As the question currently stands, it appears that you want to write the contents of files $file1, $file2, and $file3 into a file called AT_EXTENSION_"$config1""$config2""$config3".dat. If this is the case, then do:
cat "$file1" "$file2" "$file3" >AT_EXTENSION_"$config1""$config2""$config3".dat
You should not pipe cat into awk, you should change the backticks to parentheses $(code), and you should also add double quotes around the variable name, to make sure the file is read correctly.
Not like this:
config1=`cat $file1 | awk -F "," '{print $1}'`
But like this:
config1=$(awk -F "," '{print $1}' "$file1")
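Putting the two answers together, a sketch of the corrected script (the file-name and directory conventions are taken from the question; note the question assigns $3 to file4 but later reads $file3, so the sketch uses file3 throughout, and writing the combined config contents into the new .dat file is one reading of the intent):
#!/bin/ksh
file1=$1   # DATE config file
file2=$2   # MONT config file
file3=$3   # SEQN config file

config1=$(awk -F "," '{print $1}' "$file1")
config2=$(awk -F "," '{print $2}' "$file2")
config3=$(awk -F "," '{print $3}' "$file3")

outfile="AT_EXTENSION_${config1}${config2}${config3}.dat"
cat "$file1" "$file2" "$file3" > "MY_DIR/$outfile"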
