Setting a shell variable in an if statement inside a while loop - shell

I have the while loop below, which reads a file of error codes line by line. If an error code is found in the log.txt file, I need to set alert=1 and append the error to a shell variable, concatenating the matches with semicolons. But when I run the loop below, it does not give the desired results.
log.txt file data:
ORA-03113
ORA-00933
errors.lst file data:
ORA-03113
ORA-00933
ERROR
export LOGFILE=/temp/log.txt
alert=0
error=""
while IFS= read -r line || [[ -n "$line" ]]; do
    if grep -q $line "$LOGFILE"; then
        alert=1
        error="${error};${line}"
    fi
done < errors.lst
echo $alert
echo $error
I was expecting the output below:
;ORA-03113;ORA-00933
But I am getting the output below:
;ORA-00933
Can you please help me figure out where I am going wrong?

It looks like your files contain \r (carriage return, ASCII
0xd). When your terminal emulator sees it, it moves the cursor back to
the beginning of the line, so whatever is printed next overwrites what
was already shown. These carriage return characters most commonly come
from Windows-style line endings. Run dos2unix on the input files to get
rid of them:
dos2unix log.txt errors.lst
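If dos2unix is not available, or you'd rather not modify the input files, here is a minimal sketch of the same loop that strips a trailing carriage return from each line instead (quoting "$line" is also added, since a pattern containing spaces would otherwise be split before grep sees it):

while IFS= read -r line || [[ -n "$line" ]]; do
    line=${line%$'\r'}          # drop a trailing carriage return, if any
    if grep -q "$line" "$LOGFILE"; then
        alert=1
        error="${error};${line}"
    fi
done < errors.lst

This works even if log.txt keeps its CRLF endings, because grep matches the error code as a substring of each log line.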

Related

Split and display file line in bash

I have a simple bash script and I don't understand its output.
My script:
#!/bin/bash
string=$(head -n 1 test.txt)
IFS=":"
read -r pathfile line <<< "$string"
echo "left"$line"right"
And my test.txt
filepath:file content
others lines
...
I get this output on the console:
rightfile content
The problem does not occur when the file has only one line.
I don't know why the result is not left value right.
Your input file has Windows line endings (\x0D\x0A). Therefore, \x0D becomes part of $line and, when printed, it moves the cursor back to the beginning of the line, so right overwrites the text printed before it.
Run dos2unix or fromdos on the input file to fix it.
BTW, you don't need to quote left and right. Quoting the variable might be needed, though.
echo left"$line"right
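One quick way to confirm this diagnosis is to make the carriage returns visible with cat -v, which prints them as ^M. The output below is illustrative, assuming test.txt really has Windows line endings:

$ cat -v test.txt
filepath:file content^M
others lines^M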

Writing a Unix filter using bash

A Unix/Linux command that accepts its input from standard input and produces its output (result) on standard output is known as a filter.
The trivial filter is cat. It just copies stdin to stdout without any modification whatsoever.
How do I implement cat in bash? (neglecting the case where the command gets command-line arguments)
I came up with
#! /bin/bash
while IFS="" read -r line
do
    echo -E "$line"
done
That seems to work in most cases, also for text files containing some binary bytes as long as they are not null bytes. However, if the last line does not end in a newline character, it will be missing from the output.
How can that be fixed?
I'm nearly sure this must have been answered before, but my searching skills don't seem to be good enough.
Obviously I don't want to re-implement cat in bash: It wouldn't work anyway because of the null byte problem. But I want to extend the basic loop to do some custom processing to certain lines of a text file. However, we have all seen text files without the final line feed, so I'd prefer if that case could be handled.
Assuming you don't need to work with arbitrary binary files (since the shell cannot store null bytes in a variable), you can handle a file that isn't terminated by a newline by checking if line is not empty once the loop exits.
while IFS= read -r line; do
    printf '%s\n' "$line"
done
if [ -n "$line" ]; then
    printf '%s' "$line"
fi
In the loop, we output the newline that read stripped off. In the final if, we don't output a newline because $line would be empty if the last line read by the while loop had ended with one.
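A quick sanity check of the edge case, assuming the loop above is saved as filter.sh (my name for it): feed it input without a trailing newline and inspect the bytes that come out:

$ printf 'foo\nbar' | bash filter.sh | od -c
0000000   f   o   o  \n   b   a   r
0000007

The trailing bar comes through, and no newline is invented after it, so the output is byte-for-byte identical to the input.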

Spaces are not recognised in the same way in a txt file and in a command line

I am using a Mac. I have a compiled bytecode program, run, which works on a string input on the command line like:
./run "Here is a text input"
To test run, I wrote inputs.txt, which contains several lines of inputs, one line per input:
"input1"
"This is input 2"
"input 3"
And here is test.sh:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "Text read from file: $line";
./run $line
done < inputs.txt
The problem is that the test fails for input lines that contain spaces, although testing the same line directly on the command line works.
So I guess spaces are not interpreted the same way in inputs.txt as on the command line.
Does anyone know how to make spaces in inputs.txt be interpreted the same way as on the command line? Or should I change the format of inputs.txt?
I think you just need to change
./run $line
to
./run "$line"
That way, the value of $line won't be split into separate arguments before run is called.
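To see the difference, here is a small sketch; the printargs function is my own illustration, not part of the question:

printargs() {
    # print each argument on its own line so the splitting is visible
    for arg in "$@"; do
        printf '<%s>\n' "$arg"
    done
}
line='This is input 2'
printargs $line      # unquoted: <This> <is> <input> <2>
printargs "$line"    # quoted:   <This is input 2>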

Delimiting single quote in BASH script from a SQL dump

I am scrubbing a SQL dump file from MySQL so that it is free of user information. The file is hundreds of megabytes in size, and I have it all working except for the SQL quotes. I go line by line through the file and then use statements of the form:
RESULT=echo $LINE | sed "<something>"
This worked great until I came across this line:
INSERT INTO `brand` VALUES (42,84,'','brands/large_logo/L\'OrealLogo.jpg',0);
When I echo the line, the result is that I lose the backslash in L\'Oreal, and when I try to load it back via SQL, I get an error. Here's the line via echo $LINE:
The problem is here                                      v
INSERT INTO `brand` VALUES (42,84,'','brands/large_logo/L'OrealLogo.jpg',0);
Is there a way to keep echo from treating the \' as an escape sequence for '? I feel like I am missing something obvious here, but just cannot put my finger on it.
Psychic debugging suggests that you are using a while read loop, but not supplying -r:
$ cat file
'Notice the \' here'
$ while read LINE; do echo "$LINE"; done < file
'Notice the ' here'
$ while read -r LINE; do echo "$LINE"; done < file
'Notice the \' here'
There are other concerns, like the missing $(..) and quoting in your RESULT=echo $LINE | sed "<something>", and the fact that you're running sed once for each line rather than once over the whole stream, but these are separate issues.
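For completeness, a minimal sketch of both fixes; the sed expression and the dump.sql filename are placeholders, not the actual scrubbing logic:

# capture the transformed line correctly: $(..) plus quoting
RESULT=$(echo "$LINE" | sed 's/placeholder/replacement/')

# better: run sed once over the whole file instead of once per line
sed 's/placeholder/replacement/g' dump.sql > scrubbed.sql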

Read file line by line and perform action for each in bash

I have a text file that contains a single word on each line.
I need a loop in bash to read each line, then perform a command each time it reads a line, using the input from that line as part of the command.
I am just not sure of the proper syntax to do this in bash. If anyone can help, it would be great. I need to use the line obtained from the text file as a parameter to call another function. The loop should stop when there are no more lines in the text file.
Pseudo code:
Read testfile.txt.
For each in testfile.txt
{
some_function linefromtestfile
}
How about:
while read line
do
    echo "$line"
    # or some_function "$line"
done < testfile.txt
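A slightly more robust variant of the same loop, in case lines may contain backslashes or leading whitespace that should be preserved (this mirrors the IFS= read -r pattern used elsewhere on this page):

while IFS= read -r line
do
    printf '%s\n' "$line"
    # or some_function "$line"
done < testfile.txt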
As an alternative, using a file descriptor (#4 in this case):
file='testfile.txt'
exec 4<"$file"
while read -r -u4 t; do
    echo "$t"
done
Don't use cat! In a loop, cat is almost always wrong. Besides starting a useless extra process, the pipe runs the while loop in a subshell, so any variables set inside the loop are lost afterwards. That is, avoid:
cat testfile.txt | while read -r line
do
    # do something with "$line" here
done
and people might start to throw a UUoC award (Useless Use of Cat Award) at you.
while read line
do
    nikto -Tuning x 1 6 -h $line -Format html -o NiktoSubdomainScans.html
done < testfile.txt
I tried this to automate a nikto scan of a list of domains after switching away from the cat approach. It still just read the first line and ignored everything else.
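The most likely cause, though this is an assumption since it depends on nikto's behaviour: the command inside the loop reads from standard input itself and consumes the rest of testfile.txt, so read never sees the remaining lines. A common sketch of a fix is to redirect the command's stdin away from the file:

while IFS= read -r line
do
    # </dev/null stops the command from swallowing the rest of testfile.txt
    nikto -Tuning x 1 6 -h "$line" -Format html -o NiktoSubdomainScans.html </dev/null
done < testfile.txt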
