Bash: Syntax error near unexpected token `<' in done [duplicate]

This question already has answers here:
Why does my Bash code fail when I run it with 'sh'?
(2 answers)
Closed 10 months ago.
I am using the script below to read the values of the 3rd column of a CSV file. However, I am getting a syntax error.
tr -d '\r' < p_test.csv > fixed_p_test.csv
while IFS="," read -r rec1 rec2
do
echo "Displaying Record-$rec1"
echo "Price: $rec2"
done < <(cut -d "," -f1,3 fixed_p_test.csv | tail -n +2)
While running the above script I am getting the below error:
pScanFinal.sh: line 9: syntax error near unexpected token `<'
pScanFinal.sh: line 9: `done < <(cut -d "," -f1,3 fixed_p_test.csv | tail -n +2)'
My p_test.csv looks like:
A,B,C,D,E
192.158,True,12,HT,Open
254.658,False,58,SM,Closed
How can I resolve this error?

The error occurs because process substitution (<(...)) is a bash feature that sh does not understand, so running the script with sh turns that line into a syntax error. You don't actually need cut, tail, or process substitution here, which means you can make your script POSIX compliant to match how you are running it.
{
read -r _ # Skip the header line (POSIX read requires a variable name)
while IFS=, read -r rec1 _ rec2 _; do
...
done
} < fixed_p_test.csv
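For instance, a complete version with the loop body filled in from the echo lines in the question (a sketch, assuming the reconstructed CSV layout above) might look like this:
{
read -r _ # Skip the header line
while IFS=, read -r rec1 _ rec2 _; do
echo "Displaying Record-$rec1"
echo "Price: $rec2"
done
} < fixed_p_test.csv
Run it with sh and it should no longer trip over the process substitution.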

As jq is tagged (yet I don't know why), here's how to do it with jq.
Input and output should be read (-R) and written (-r) as raw text. Then split (/) each line by comma (,), and select column [2] (0-based).
jq -Rr '(./",")[2]' file.csv
12
58
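Note that if the file still contains its header row, this would also print the header's third field; one way (an addition, not part of the original answer) is to strip the header first:
tail -n +2 file.csv | jq -Rr '(./",")[2]'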
Not asked/tagged, but imho awk would be more appropriate:
awk -F, '{print $3}' file.csv
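If the header row needs skipping here too, a small variation of the same one-liner would do it:
awk -F, 'NR > 1 { print $3 }' file.csv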

Related

Parsing CSV records when a value is multiline

Source file looks like this:
"google.com", "vuln_example1
vuln_example2
vuln_example3"
"facebook.com", "vuln_example2"
"reddit.com", "stupidly_long_vuln_name1"
"stackoverflow.com", ""
I've been trying to get the output to look something like the example below, but the line breaks cause me no end of problems. I'm using a "while read line" loop to do this because I do some processing on the columns (e.g. the vulnerability count and URL in this example). The output goes into a Jenkins job (yuk).
The basic summary of the problem is getting the line breaks in the CSV into the third column of the output while retaining the table structure. I've got a sort of weird example of the desired output below.
||hostname ||Vulnerability count|| Vulnerability list || URL ||
|google.com |3 |vuln_example1 |http://cve.com/vuln_example1|
| | |vuln_example2 |http://cve.com/vuln_example2|
| | |vuln_example3 |http://cve.com/vuln_example3|
|facebook.com |1 |vuln_example2 |http://cve.com/vuln_example2|
|reddit.com |1 |stupidly_long_vuln_name1 |http://cve.com/stupidly_long_vuln_name1|
|stackoverflow.com |0 | ||
Looking at this... I've got a feeling it might be easier by showing some code and example output.
Parsing your input with the command line below makes the problem easier (I'm assuming the inputs are correct):
perl -0777 -pe 's/([^"])\s*\n/\1 /g ; s/[",]//g' < sample.txt
This line invokes Perl to perform two regex substitutions:
s/([^"])\s*\n/\1 /g: This substitution removes an end of line if it doesn't terminate by a quote " (i.e. if a host entry, with all vulnerabilities isn't yet complete).
s/[",]//g removes all quotes and commas remaining.
For each host entry like this one:
"google.com", "vuln_example1
vuln_example2
vuln_example3"
You'll get:
google.com vuln_example1 vuln_example2 vuln_example3
Then you can assume that each line holds one host and its set of vulnerabilities.
The example below stores the vulnerabilities in an array and loops through it, formatting and printing each line:
# Replace this by your custom function
# to get an URL for a given vulnerability
function get_vuln_url () {
# This just displays a random url for a non-empty arg
[[ -z "$1" ]] || echo "http://host/$1.htm"
}
# Format your line (see printf help)
function print_row () {
printf "%-20s|%5s|%-30s|%s\n" "$@"
}
# The perl line reformat
perl -0777 -pe 's/([^"])\s*\n/\1 /g ; s/[",]//g' < sample.txt |
while read -r line ; do
arr=(${line})
print_row "${arr[0]}" "$((${#arr[@]} - 1))" "${arr[1]}" "$(get_vuln_url "${arr[1]}")"
for v in "${arr[@]:2}" ; do
print_row " " " " "$v" "$(get_vuln_url "$v")"
done
done
Output:
google.com | 3|vuln_example1 |http://host/vuln_example1.htm
| |vuln_example2 |http://host/vuln_example2.htm
| |vuln_example3 |http://host/vuln_example3.htm
facebook.com | 1|vuln_example2 |http://host/vuln_example2.htm
reddit.com | 1|stupidly_long_vuln_name1 |http://host/stupidly_long_vuln_name1.htm
stackoverflow.com | 0| |
Update.
If you don't have Perl, and if your file doesn't have tabulations, you can use this command as a workaround instead:
tr '\n' '\t' < sample.txt | sed -r -e 's/([^"])\s*\t/\1 /g' -e 's/[",]//g' -e 's/\t/\n/g'
tr '\n' '\t' replaces every newline with a tab.
The sed part then acts like the Perl line, except that it works on tabs instead of newlines and finally restores the tabs back to newlines.
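Applied to the sample.txt above, the pipeline should produce the same one-host-per-line form as the Perl version, e.g.:
tr '\n' '\t' < sample.txt | sed -r -e 's/([^"])\s*\t/\1 /g' -e 's/[",]//g' -e 's/\t/\n/g'
google.com vuln_example1 vuln_example2 vuln_example3
facebook.com vuln_example2
reddit.com stupidly_long_vuln_name1
stackoverflow.com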

How do you stop sh from interpreting the '\' character? [duplicate]

This question already has answers here:
How to keep backslash when reading from a file?
(4 answers)
sh read command eats backslashes in input?
(2 answers)
Closed 3 years ago.
I have a script where I am attempting to read from a manifest file, translate DOS paths in that manifest to UNIX paths, and then operate on those files. Here is a snippet of code that I am trying to debug:
while read line
do
srcdir=$(printf '%s' "$line" | awk -F \\ -v OFS=/ '{ gsub(/\r|^[ \t]+|[ \t]+$/, "") } !NF { next } /^\\\\/ { sub(/^.*\\prj\\/, "\\prj\\") } { $1 = $1 } 1')
done < manifest.txt
My input file looks like this:
$ cat manifest.txt
\\server\mount\directory
When I debug my little shell snippet, I get the following:
+ read line
++ printf %s '\servermountdirectory
'
++ awk -F '\' -v OFS=/ '{ gsub(/\r|^[ \t]+|[ \t]+$/, "") } !NF { next } /^\\\\/ { sub(/^.*\\prj\\/, "\\prj\\") } { $1 = $1 } 1'
+ srcdir=\servermountdirectory
So... Either at read or at printf, the \ characters are being interpreted as escape characters -- how do I work around that?
Note... I know I could just run the while loop in awk... the thing is that in my real program, I have other things inside that while loop that need to be done with "$srcdir" -- and for this, sh is the right tool... So I really need a solution in sh.
From posix read:
By default, unless the -r option is specified, <backslash> shall act as an escape character. An unescaped <backslash> shall preserve the literal value of the following character, with the exception of a <newline>. If a <newline> follows the <backslash>, the read utility shall interpret this as line continuation. The <backslash> and <newline> shall be removed before splitting the input into fields. All other unescaped <backslash> characters shall be removed after splitting the input into fields.
and:
-r
Do not treat a character in any special way. Consider each to be part of the input line.
Just:
while read -r line; do
Also remember that without IFS= this will not preserve leading and trailing whitespace.
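A quick sketch showing both effects on inputs like yours:
$ printf '%s\n' '\\server\mount\directory' | { read line; echo "$line"; }
\servermountdirectory
$ printf '%s\n' '\\server\mount\directory' | { read -r line; echo "$line"; }
\\server\mount\directory
$ printf '%s\n' '  indented path' | { read -r line; echo "$line"; }
indented path
$ printf '%s\n' '  indented path' | { IFS= read -r line; echo "$line"; }
  indented path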
Remember to always do read -r. Here is a good read: bashfaq How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?.
Also remember that reading a file line by line is very inefficient in bash. It's much better to process the whole file using commands, tools, streams and pipes. If you have to read the file line by line, let the "preprocessing" stage parse the whole file, then read its output line by line:
awk .... manifest.txt |
while read -r srcdir; do
echo "$srcdir"
done
or with command redirection, if you need the loop to run in the same shell:
while read -r srcdir; do
echo "$srcdir"
done < <(awk ... manifest.txt)

Bash script: Using variables in executing sed in FreeBSD, which expects \ after a

I'm trying to use variables in a sed command on FreeBSD, where sed expects \ after the a command. Basically, I want to append a line when a particular line in the file matches a pattern, so I'm using sed's append command.
#!/usr/bin/bash
SYSLOG_SERVER="192.168.1.36"
SYSLOG_PORT="514"
syslog_conf_file="/etc/syslog.conf"
send_logs() {
logs=(messages auth.log )
send_logs[0]=`awk '(index($2, "messages") != 0) {print $1}' $syslog_conf_file`
send_logs[1]=`awk '(index($2, "auth.log") != 0) {print $1}' $syslog_conf_file`
for (( i = 0 ; i < ${#send_logs[@]} ; i++ ))
do
if [ ! -z "${send_logs[$i]}" ]; then
send_logs[i]=${send_logs[i]}" \t"@$SYSLOG_SERVER:$SYSLOG_PORT
sed "/${logs[$i]}$/a\
${send_logs[$i]} \
" $syslog_conf_file
fi
done
}
I'm facing the error below. The variables are printed properly, but the way I'm running sed is wrong. How can I fix this?
root@Great# bash temp.sh
send_logs *.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err \t@192.168.1.36:514
logs messages
sed: 1: "/messages$/a ...": command a expects \ followed by text
send_logs auth.info;authpriv.info \t@192.168.1.36:514
logs auth.log
sed: 1: "/auth.log$/a ...": command a expects \ followed by text
Sample expected input for sed:
root@Great# sed '/messages$/a\
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err @192.168.1.36:514 \
' /etc/syslog.conf
Expected output:
*.err;kern.warning;auth.notice;mail.crit /dev/console
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages
*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err @192.168.1.36:514
security.* /var/log/security
auth.info;authpriv.info /var/log/auth.log
It's because, inside double quotes, the shell treats a \ followed by a newline as a line continuation and removes both, so sed never receives the backslash the a command needs. To avoid that, escape it as \\:
sed "/${logs[$i]}$/a\\
${send_logs[$i]} \\
" $syslog_conf_file

Print line after the match in grep [duplicate]

This question already has answers here:
How to show only next line after the matched one?
(14 answers)
Closed 6 years ago.
I'm trying to get the current track running from 'cmus-remote -Q'
It's always underneath this line:
tag genre Various
<some track>
Now, I need to keep it simple because I want to add it to my i3 bar. I used
cmus-remote -Q | grep -A 1 "tag genre"
but that greps the 'tag' line AND the line underneath.
I want ONLY the line underneath.
With sed:
sed -n '/tag genre/{n;p}'
Output:
$ cmus-remote -Q | sed -n '/tag genre/{n;p}'
<some track>
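Since only the first match matters for an i3 bar, you could also tell sed to quit right after printing, so it stops reading the rest of the input (the extra ; before } keeps BSD sed happy):
cmus-remote -Q | sed -n '/tag genre/{n;p;q;}'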
If you want to use grep as the tool for this, you can achieve it by adding another segment to your pipeline:
cmus-remote -Q | grep -A 1 "tag genre" | grep -v "tag genre"
This will fail in cases where the string you're searching for is on two lines in a row. You'll have to define what behaviour you want in that case if we're going to program something sensible for it.
Another possibility would be to use a tool like awk, which allows for greater complexity in the line selection:
cmus-remote -Q | awk '/tag genre/ { getline; print }'
This searches for the string, then gets the next line, then prints it.
Another possibility would be to do this in bash alone:
while read line; do
[[ $line =~ tag\ genre ]] && read line && echo "$line"
done < <(cmus-remote -Q)
This implements the same functionality as the awk script, only using no external tools at all. It's likely slower than the awk script.
You can use awk instead of grep:
awk 'p{print; p=0} /tag genre/{p=1}' file
<some track>
/tag genre/{p=1} - sets a flag p=1 when it encounters tag genre in a line.
p{print; p=0} - when p is non-zero, it prints the line and resets p to 0.
I'd suggest using awk:
awk 'seen && seen--; /tag genre/ { seen = 1 }'
when seen is true, print the line.
when seen is true, decrement the value, so it will no longer be true after the desired number of lines has been printed
when the pattern matches, set seen to the number of lines to be printed
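This also generalizes: setting seen to a larger number prints that many lines after each match. For example, to print the two lines following the match (a variation, not part of the original answer):
awk 'seen && seen--; /tag genre/ { seen = 2 }'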

Convert data from a simple JSON format to a DSV format

I have a file in Unix, with data sample like the following:
{"ID":"123", "Region":"Asia", "Location":"India"}
{"ID":"234", "Region":"APAC", "Location":"Australia"}
{"ID":"345", "Region":"Americas", "Location":"Mexio"}
{"ID":"456", "Region":"Americas", "Location":"Canada"}
{"ID":"567", "Region":"APAC", "Location":"Japan"}
The desired output is
ID|Region|Location
123|Asia|India
234|APAC|Australia
345|Americas|Mexico
456|Americas|Canada
567|APAC|Japan
I tried a few sed commands and could remove the following: '{', '}', '"', ':'.
There are 2 issues with the output file:
All rows from the input appear on a single line in the output.
The pipe ('|') still needs to be added as the delimiter.
Any pointers are highly appreciated.
I recommend the tool jq (http://stedolan.github.io/jq/); jq is a lightweight and flexible command-line JSON processor.
jq -r '"\(.ID)|\(.Region)|\(.Location)"' < infile
123|Asia|India
234|APAC|Australia
345|Americas|Mexio
456|Americas|Canada
567|APAC|Japan
Explanation
-r is --raw-output
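If you want jq to emit the header line as well, one way (a sketch using jq's -n and inputs, not part of the original answer) is:
jq -nr '["ID","Region","Location"], (inputs | [.ID, .Region, .Location]) | join("|")' < infile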
Through awk,
awk -F'"' -v OFS="|" 'BEGIN{print "ID|Region|Location"}{print $4,$8,$12}' file
Example:
$ cat file
{"ID":"123", "Region":"Asia", "Location":"India"}
{"ID":"234", "Region":"APAC", "Location":"Australia"}
{"ID":"345", "Region":"Americas", "Location":"Mexio"}
{"ID":"456", "Region":"Americas", "Location":"Canada"}
{"ID":"567", "Region":"APAC", "Location":"Japan"}
$ awk -F'"' -v OFS="|" 'BEGIN{print "ID|Region|Location"}{print $4,$8,$12}' file
ID|Region|Location
123|Asia|India
234|APAC|Australia
345|Americas|Mexio
456|Americas|Canada
567|APAC|Japan
Explanation:
-F'"' sets " as the field separator value.
OFS="|" sets | as the output field separator value.
awk first executes the BEGIN block, which prints the header line. With " as the separator, the quoted values land in fields 4, 8 and 12, which is why $4, $8 and $12 are printed.
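Note that this relies on the keys always appearing in the same order. As a sketch (not part of the original answer), you could instead look the values up by key name, assuming the same quoting style:
awk -F'"' 'BEGIN{print "ID|Region|Location"} { for (i=2; i<NF; i+=4) v[$i]=$(i+2); print v["ID"] "|" v["Region"] "|" v["Location"] }' file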
This sed one-liner does what you want. It captures the field values with parenthesized expressions and then inserts them into the output using \1, \2, and \3.
s/^{"ID":"\([^"]*\)", "Region":"\([^"]*\)", "Location":"\([^"]*\)"}$/\1|\2|\3/
Invoke it like:
$ sed -f one-liner.sed input.txt
Or you can invoke it within a Bash script, producing the header:
echo 'ID|Region|Location'
sed -e 's/^{"ID":"\([^"]*\)", "Region":"\([^"]*\)", "Location":"\([^"]*\)"}$/\1|\2|\3/' $input
It is a JSON file, so it is best to use a JSON parser. Here is a Perl implementation:
#!/usr/bin/perl
use strict;
use warnings;
use JSON;
# open the input file, aborting on failure
open my $fh, '<', 'path/to/your/file' or die $!;
# keys of your structure
my @key = qw(ID Region Location);
print join ("|", @key), "\n";
# iterate over your file, decode it and print in order of your key structure
while (my $json = <$fh>) {
my $text = decode_json($json);
print join ("|", map { $$text{$_} } @key ), "\n";
}
Output:
ID|Region|Location
123|Asia|India
234|APAC|Australia
345|Americas|Mexio
456|Americas|Canada
567|APAC|Japan
Using sed as follows
Command line
echo "my_string" |
sed -e 's#[,:"{}]##g' -e 's#ID##g' -e "s#Region##g" -e 's#Location##g' \
-e '1 s#^.*$#ID Region Location\n&#' -e 's# #|#g'
or
sed -e 's#[,:"{}]##g' -e 's#ID##g' -e "s#Region##g" -e 's#Location##g' \
-e '1 s#^.*$#ID Region Location\n&#' -e 's# #|#g' my_file
I tried this in a terminal as follows:
echo '{"ID":"123", "Region":"Asia", "Location":"India"}
{"ID":"234", "Region":"APAC", "Location":"Australia"}
{"ID":"345", "Region":"Americas", "Location":"Mexio"}
{"ID":"456", "Region":"Americas", "Location":"Canada"}
{"ID":"567", "Region":"APAC", "Location":"Japan"}' |
sed -e 's#[,:"{}]##g' -e 's#ID##g' -e "s#Region##g" -e 's#Location##g' \
-e '1 s#^.*$#ID Region Location\n&#' -e 's# #|#g'
Output
ID|Region|Location
123|Asia|India
234|APAC|Australia
345|Americas|Mexio
456|Americas|Canada
567|APAC|Japan
Many thanks for your responses; the pointers and solutions did help a lot.
For some mysterious reason, I couldn't get any of the sed commands to work, so I devised my own solution. Although it's not elegant, it still worked.
Here is the script I prepared which resolved the issue.
#!/bin/bash
# source file path.
infile=/home/exfile.txt
# remove these temp files if they already exist.
rm -f ./efile.txt ./xfile.txt ./yfile.txt ./zfile.txt
# remove the curly braces from the input file.
cut -d "{" -f2 < "$infile" | cut -d "}" -f1 >> ./efile.txt
# setting input file name to different value.
infile=./efile.txt
# remove double quotes from the file.
while IFS= read -r line
do
echo $line | sed 's/\"//g' >> ./xfile.txt
done < "$infile"
# creating another temp file.
infile2=./xfile.txt
# remove colon from file.
while IFS= read -r line
do
echo $line | sed 's/\:/,/g' >> ./yfile.txt
done < "$infile2"
# set input file path to new temp file.
infile3=yfile.txt
# initialize variables to hold header column values.
t1=0
t3=0
t5=0
# read the first line to extract the header row; exit the loop after reading it.
once=1
while IFS=',' read -r f1 f2 f3 f4 f5 f6
do
echo "$f1 $f2 $f3 $f4 $f5 $f6"
t1=$f1
t3=$f3
t5=$f5
if [ "$once" -eq 1 ]; then
break
fi
done < "$infile3"
# Read each line from the input file and write only the values to another output file.
while IFS=',' read -r f1 f2 f3 f4 f5 f6
do
echo "$f2|$f4|$f6" >> ./zfile.txt
done < "$infile3"
# insert the header column row into the file generated in the step above.
frstline="$t1|$t3|$t5"
sed -i '1i ID|Region|Location' ./zfile.txt
