I have seen the concept of process substitution, but the following code still gives me this syntax error:
script.sh: syntax error at line 44: `<' unexpected
Here is the code:
#!/bin/bash
count=1
FILENAME=$1
JUDGE="YATES"
echo "VALUE OF JUDGE IS $JUDGE"
STATUS=""
#The file is read using a while loop; the file is supplied as a cmd line arg and simply contains the list of courts.
#cat $FILENAME | while read LINE
while read LINE
do
#Selecting the filepath; here $LINE contains the court on each iteration
FILEPATH=/elFZ/dZcollection/$LINE/DETER_JUDGE
#Checking whether the DETER_JUDGE file exists or not
cat $FILEPATH >> yatisawhney.txt 2>> yati_errors.txt
#if the DETER_JUDGE file exists then
if [ $? = 0 ]
then
echo "INSIDE IF"
STATUS="Yes"
#cat $FILEPATH | while read -r JUDGELINE
#open the DETER_JUDGE file, read the values and update the JUDGE variable
while read JUDGELINE
do
line_length=$JUDGELINE
JUDGE=$JUDGE$line_length"||||||"
#JUDGE=1000
done < < ( $FILEPATH )
echo "Value of judge is $JUDGE"
else
FILEPATH="N.A."
STATUS="No"
JUDGE="N.A."
fi
#here I am not getting the updated value
echo $JUDGE >> JUDGE_NAME
echo $count","$LINE","$STATUS","$JUDGE","$FILEPATH >> judgeData.csv
count=`expr "$count" + 1`
JUDGE=""
done < < ( $FILENAME )
I am unable to fetch the values from the inner while loop. I am able to fetch them inside the loop, but once I get outside, the values are lost.
Process substitution uses the <(...) construct; there must be no space between the < and the left parenthesis.
To read from a file, you don't need process substitution at all:
done < "$FILENAME"
Related
I have a couple of files with certain keywords (one is MAT1).
For this keyword I would like to read the ID corresponding to it and put it, together with the filename, into an array.
I tried the following (I am not very familiar with bash programming):
#!/bin/bash
Num=0
arr=( $(find . -name '*.mat' | sort) )
for i in "${arr[#]}"
do
file=$(basename "${i}")
while read -r line
do
name="$line"
IFS=' ' read -r -a array <<< "$line"
for index in "${!array[@]}"
do
if [ ${array[index]} == "MAT1" ]
then
out[$num] = "${array[index+1]} $file "
let num++
#printf "%-32s %8i\n" "$file" "${array[index+1]}"
fi
done
done < "$i"
done
With this I get the message
make_mat_list.bsh: line 21: out[0]: command not found
make_mat_list.bsh: line 21: out[1]: command not found
What is wrong here?
bash is whitespace-sensitive; the line below cannot have spaces around the =.
out[$num] = "${array[index+1]} $file "
As for the reason for the error: the shell treats the first word of that line as a command, out[$num] i.e. out[1] etc., and the rest of it as arguments to that command, = and "${array[index+1]} $file ", which does not make any sense. Remove the spaces and just do
out[$num]="${array[index+1]} $file"
I am trying to write a bash function I can call regularly from within a larger set of scripts. I want to pass this function the name of a file containing a plain list of text strings:
blue
red
green
... and have the function write out these strings to a different file (the name of which is also passed as parameter to the function) in bash-compatible array format:
[Bb]lue [Rr]ed [Gg]reen
I can't get the function to (internally) recognise the name of the output file being passed. It throws an "ambiguous redirect" error and then a bunch of "No such file or directory" errors after that. It is, however, processing the input file OK. The problem appears to be how I am assigning the parameter to a local string in the function. Unfortunately I have changed the loc_out= line in the function so many times that I can no longer recall all the forms I have tried. Hopefully the example is clear, if not best practice:
process_list () {
# assign input file name to local string
loc_in=(${1});
# assign output file name to local string
loc_out=($(<${2})); # this is not right
while read line
do
echo "loc_out before: $loc_out";
echo "loc_in term: $line";
item_length=${#line};
# loop until end of string
for (( i=0; i<$item_length; i++ ));
do
echo "char $i of $line: ${line:$i:1}";
# write out opening bracket and capital
if [ ${i} -eq 0 ]; then
echo -e "[" >> $loc_out;
echo -e ${line:$i:1} | tr '[:lower:]' '[:upper:]' >> "${loc_out}";
fi;
# write out current letter
echo -e ${line:$i:1} >> "${loc_out}";
# write out closing bracket
if [ ${i} -eq 0 ]; then
echo -e "]" >> "${loc_out}";
fi;
done;
# write out trailing space
echo -e " " >> "${loc_out}";
# check the output file
echo "loc_out after: ${loc_out}";
done < $loc_in;
}
f_in="/path/to/colour_list.txt";
f_out="/path/to/colour_array.txt";
echo "loc_in (outside function): ${loc_in}";
echo "loc_out (outside function): ${loc_out}";
process_list $f_in $f_out;
Any assistance on what I am doing wrong would be much appreciated.
Change:
loc_out=($(<${2})); # this is not right
To this:
loc_out=(${2}); # this should be right
In that line you just want the file name.
Hopefully this will solve your problem.
EDIT:
Besides you could/should write this:
loc_in=${1};
loc_out=${2};
You do not need parentheses, as far as I understand.
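For what it's worth, a minimal sketch of the whole function with plain assignments, using bash 4 case conversion (${var^^} and ${var,,}) to build the [Bb]lue style entries described in the question:
process_list () {
    local loc_in=$1     # input file, one word per line
    local loc_out=$2    # output file
    : > "$loc_out"      # start with an empty output file
    while read -r line; do
        [ -z "$line" ] && continue
        first=${line:0:1}
        rest=${line:1}
        # e.g. "blue" -> "[Bb]lue "
        printf '[%s%s]%s ' "${first^^}" "${first,,}" "$rest" >> "$loc_out"
    done < "$loc_in"
}

process_list /path/to/colour_list.txt /path/to/colour_array.txt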
How come the additional 'Line' inside echo "Line $line" is not prepended to every file listed inside the for loop?
#!/bin/bash
INPUT=targets.csv
IFS=","
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
while read target user password path
do
result=$(sshpass -p "$password" ssh -n "$user"@"$target" ls "$path"*file* 2>/dev/null)
if [ $? -ne 0 ]
then
echo "No Heap dumps detected."
else
echo "Found a Heap dump! Possible OOM issue detected"
for line in $result
do
echo "Line $line"
done
fi
done < $INPUT
.csv file contents ..
rob@laptop:~/scripts$ cat targets.csv
server.com,root,passw0rd,/root/
script output ..
rob@laptop:~/scripts$ ./checkForHeapdump.sh
Found a Heap dump! Possible OOM issue detected
Line file1.txt
file2.txt
The statement:
for line in $result
performs word splitting on $result to get each element that $line should be set to. Word splitting uses the delimiters in $IFS. Earlier in the script you set this to just ,. So this loop will iterate over comma-separated data in $result. Since there aren't any commas in it, it's just a single element.
If you want to split it by lines, do:
IFS="
"
for line in $result
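The newline can also be written with the $'\n' quoting form, or word splitting can be avoided entirely by reading $result line by line; a small sketch of both:
# split $result on newlines only, for this loop
oldIFS=$IFS
IFS=$'\n'
for line in $result
do
    echo "Line $line"
done
IFS=$oldIFS

# or skip word splitting altogether and read line by line
while read -r line
do
    echo "Line $line"
done <<< "$result"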
The following program reads a file and intends to store all the values (each line) in a variable, but it doesn't store the last line. Why?
file.txt :
1
2
.
.
.
n
Code :
FileName=file.txt
if test -f $FileName # Check if the file exists
then
while read -r line
do
fileNamesListStr="$fileNamesListStr $line"
done < $FileName
fi
echo "$fileNamesListStr" // 1 2 3 ..... n-1 (but it should print up to n.)
Instead of reading line-by-line, why not read the whole file at once?
[ -f $FileName ] && fileNamesListStr=$( tr '\n' ' ' < $FileName )
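As a quick check with hypothetical contents (1, 2, 3 and no final newline), this picks up the last line as well:
printf '1\n2\n3' > file.txt                # note: no final newline
fileNamesListStr=$( tr '\n' ' ' < file.txt )
echo "$fileNamesListStr"                   # prints: 1 2 3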
One probable cause is that a newline is missing after the last line, n.
Use the following command to check it:
tail -1 file.txt
And the following fixes it:
echo >> file.txt
If you really need to keep the last line without a trailing newline, here is a reorganized while loop:
#!/bin/bash
FileName=file.txt
if test -f $FileName ; then
    while [ 1 ] ; do
        read -r line
        # stop when read returns an empty line
        if [ -z "$line" ] ; then
            break
        fi
        fileNamesListStr="$fileNamesListStr $line"
    done < $FileName
fi
echo "$fileNamesListStr"
The issue is that when the file does not end in a newline, read returns non-zero and the loop body does not run for that last piece of data. read still reads the data into the variable, so you need to handle it after the loop. You also probably want an array instead of a space-separated string.
FileName=file.txt
if test -f $FileName # Check if the file exists
then
while read -r line
do
fileNamesListArr+=("$line")
done < $FileName
[[ -n $line ]] && fileNamesListArr+=("$line")
fi
echo "${fileNameListArr[#]}"
See the "My text files are broken! They lack their final newlines!" section of this article:
http://mywiki.wooledge.org/BashFAQ/001
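The idiom from that FAQ folds the missing-newline case into the loop condition itself, so nothing extra is needed after the loop; a minimal sketch with the question's variable names:
while read -r line || [[ -n $line ]]    # read fails on the last chunk but still fills $line
do
    fileNamesListStr="$fileNamesListStr $line"
done < "$FileName"
echo "$fileNamesListStr"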
As a workaround, a newline can be appended to the file before reading it:
echo >> "$file_path"
This ensures that every line that was previously in the file will be read. Now the file can be read line by line.
I need to validate my log files:
- All new log lines shall start with a date.
- This date respects the ISO 8601 standard. Example:
2011-02-03 12:51:45,220Z -
Using a shell script, I can validate this by looping over each line and verifying the date pattern.
The code is below:
#!/bin/bash
processLine(){
# get all args
line="$#"
result=`echo $line | egrep "[0-9]{4}-[0-9]{2}-[0-9]{2} [012][0-9]:[0-9]{2}:[0-9]{2},[0-9]{3}Z" -a -c`
if [ "$result" == "0" ]; then
echo "The log is not with correct date format: "
echo $line
exit 1
fi
}
# Make sure we get file name as command line argument
if [ "$1" == "" ]; then
echo "You must enter a logfile"
exit 0
else
file="$1"
# make sure file exist and readable
if [ ! -f $file ]; then
echo "$file : does not exists"
exit 1
elif [ ! -r $file ]; then
echo "$file: can not read"
exit 2
fi
fi
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<"$file"
while read -r line
do
# use $line variable to process line in processLine() function
processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
IFS=$BAKIFS
echo SUCCESS
But, there is a problem. Some logs contains stacktraces or something that uses more than one line, in other words, stacktrace is an example, it can be anything. Stacktrace example:
2011-02-03 12:51:45,220Z [ERROR] - File not found
java.io.FileNotFoundException: fred.txt
at java.io.FileInputStream.<init>(FileInputStream.java)
at java.io.FileInputStream.<init>(FileInputStream.java)
at ExTest.readMyFile(ExTest.java:19)
at ExTest.main(ExTest.java:7)
...
will not pass with my script, but is valid!
So if I run my script on a log file that contains stack traces, for example, it will fail, because it checks line by line.
I have the correct pattern and I need to validate the log date format, but I don't have a pattern for the wrong formats that would let me skip those lines.
I don't know how to solve this problem. Can somebody help me?
Thanks
You need to anchor your search for the date to the start of the line (otherwise the date could appear anywhere in the line - not just at the beginning).
The following snippet will loop over all lines that do not begin with a valid date. You still have to determine if the lines constitute errors or not.
DATEFMT='^[0-9]{4}-[0-9]{2}-[0-9]{2} [012][0-9]:[0-9]{2}:[0-9]{2},[0-9]{3}Z'
egrep -v ${DATEFMT} /path/to/log | while read LINE; do
echo ${LINE} # did not begin with date.
done
So just (silently) discard a single stack trace. In somewhat verbose bash:
STATE=idle
while read -r line; do
case $STATE in
idle)
if [[ $line =~ ^java\..*Exception ]]; then
STATE=readingexception
else
processLine "$line"
fi
;;
readingexception)
if ! [[ $line =~ ^' '*'at ' ]]; then
STATE=idle
processLine "$line"
fi
;;
*)
echo "Urk! internal error [$STATE]" >&2
exit 1
;;
esac
done <logfile
This relies on processLine not continuing on error, else you will need to track a tad more state to avoid two consecutive stack traces.
This makes 2 assumptions:
1. Lines that begin with whitespace are continuations of previous lines. We're matching a leading space or a leading tab.
2. Lines that have non-whitespace characters starting at ^ are new log lines.
If a line matching #2 doesn't match the date format, we have an error, so print the error, and include the line number.
count=0
processLine() {
count=$(( count + 1 ))
line="$#"
result=$( echo $line | egrep '^[0-9]{4}-[0-9]{2}-[0-9]{2} [012][0-9]:[0-9]{2}:[0-9]{2},[0-9]{3}Z' -a -c )
if (( $result == 0 )); then
# if result = 0, then my line did not start with the proper date.
# if the line starts with whitespace, then it may be a continuation
# of a multi-line log entry (like a java stacktrace)
continues=$( echo "$line" | egrep -a -c "^[[:blank:]]" )   # a leading space or tab
if (( $continues == 0 )); then
# if we got here, then the line did not start with a proper date,
# AND the line did not start with white space. This is a bad line.
echo "The line is not with correct date format: "
echo "$count: $line"
exit 1
fi
fi
}
Create a condition to check if the line starts with a date. If not, skip that line as it is part of a multi-line log.
processLine(){
# get all args
line="$#"
result=`echo $line | egrep "[0-9]{4}-[0-9]{2}-[0-9]{2} [012][0-9]:[0-9]{2}:[0-9]{2},[0-9]{3}Z" -a -c`
if [ "$result" == "0" ]; then
echo "Log entry is multi-lined - continuing."
fi
}
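If all you need is a pass/fail check, the two rules above can also be collapsed into a single grep call that counts lines which neither start with a valid date nor with leading whitespace (a sketch assuming GNU grep; [[:blank:]] matches a space or a tab):
DATEFMT='^[0-9]{4}-[0-9]{2}-[0-9]{2} [012][0-9]:[0-9]{2}:[0-9]{2},[0-9]{3}Z'
bad=$( grep -Ecv "($DATEFMT)|^[[:blank:]]" "$file" )
if [ "$bad" -ne 0 ]; then
    echo "$bad line(s) neither start with a valid date nor continue a previous entry"
    exit 1
fi
echo SUCCESS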