I am currently trying to evaluate .txt files in a directory using bash. I want to know whether the third line of a .txt file matches a certain string. The file starts with two empty lines, then the target string. I tested the following one-liner:
if [[ $(head -n 3 a_txt_file.txt) == "target_string" ]]; then echo yes; else echo no; fi
I can imagine that since head -n 3 also prints out the two empty lines, I have to add them to the if condition. But "\n\ntarget_string" and "\n\ntarget_string\n" also don't work.
How would one do this correctly (And I guess it can be done more elegantly as well)?
Try this instead - it will print only the third line:
sed -n 3p file.txt
If you prefer to stick with head, pipe it through tail to keep only the third line:
head -n 3 file.txt | tail -n 1
You'll want to use sed instead of head. This gets the third line, tests if it matches, and then you can do whatever you want with it if it does match.
if [[ $(sed '3q;d' test_text.txt ) == "target_string" ]]; then echo yes; else echo no; fi
Besides sed, you can also use awk to print the 3rd line:
awk 'NR==3'
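Plugged into the original test, that would look something like this (just a sketch; the file name is taken from the question):
if [[ $(awk 'NR==3{print; exit}' a_txt_file.txt) == "target_string" ]]; then echo yes; else echo no; fi
The exit keeps awk from reading the rest of the file once it has printed line 3.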
A pure bash solution:
if { read; read; read line; }
   [[ $line = target_string ]]
then
    echo yes
else
    echo no
fi < test_text.txt
This takes advantage of the fact that the condition of the if statement can be a sequence of commands. First, read twice from the file to discard the empty lines; the third read sets line to the third line. After that, you can test it against the target string. The redirection on fi applies to the whole if compound command, so all three reads take their input from test_text.txt.
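Since the original question mentions checking every .txt file in a directory, a small loop built on any of the above might look like this (just a sketch; the directory path and the helper name third_line_matches are placeholders):
third_line_matches() {
    [[ $(sed -n '3{p;q}' "$1") == "target_string" ]]
}
for f in /path/to/dir/*.txt; do
    if third_line_matches "$f"; then
        echo "$f: yes"
    else
        echo "$f: no"
    fi
done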
Hi, I am trying to print/echo the lines whose line numbers are multiples of 5. I am doing this in a shell script. I am getting errors and am unable to proceed. Below is the script:
#!/bin/bash
x=0
y=$wc -l $1
while [ $x -le $y ]
do
sed -n `$x`p $1
x=$(( $x + 5 ))
done
When executing the above script I get the errors below:
#./echo5.sh sample.h
./echo5.sh: line 3: -l: command not found
./echo5.sh: line 4: [: 0: unary operator expected
Please help me with this issue.
For efficiency, you don't want to be invoking sed multiple times on your file just to select a particular line. You want to read through the file once, filtering out the lines you don't want.
#!/bin/bash
i=0
while IFS= read -r line; do
    (( ++i % 5 == 0 )) && echo "$line"
done < "$1"
Demo:
$ i=0; while read line; do (( ++i % 5 == 0 )) && echo "$line"; done < <(seq 42)
5
10
15
20
25
30
35
40
A funny pure Bash possibility:
#!/bin/bash
mapfile ary < "$1"
printf "%.0s%.0s%.0s%.0s%s" "${ary[#]}"
This slurps the file into an array ary, with each line of the file in a field of the array. Then printf takes care of printing one line in every 5: %.0s consumes a field but prints nothing, and %s prints the field. Since mapfile is used without the -t option, the newlines are included in the array elements. Of course this really slurps the file into memory, so it might not be good for huge files. For large files you can use a callback with mapfile:
#!/bin/bash
callback() {
printf '%s' "$2"
ary=()
}
mapfile -c 5 -C callback ary < "$1"
We're removing all the elements of the array during the callback, so that the array doesn't grow too large, and the printing is done on the fly, as the file is read.
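A quick demo of the callback version, assuming it is saved as every5.sh (a placeholder name) and fed a file generated with seq:
$ seq 12 > /tmp/nums
$ bash every5.sh /tmp/nums
5
10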
Another funny possibility, in the spirit of glenn jackman's solution, yet without a counter (and still pure Bash):
#!/bin/bash
while read && read && read && read && IFS= read -r line; do
printf '%s\n' "$line"
done < "$1"
Use sed.
sed -n '0~5p' "$1"
This prints every fifth line of the file (lines 5, 10, 15, and so on). Note that the first~step address form is a GNU sed extension.
Also
y=$wc -l $1
won't work. You need command substitution:
y=$(wc -l < "$1")
Without command substitution, bash treats y=$wc as a temporary assignment (expanding to nothing, since $wc is unset) and then tries to run -l as a command, which is exactly the "-l: command not found" error you saw. If you just want the number, redirect the file into wc so the file name isn't printed along with the count.
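For example (the file here is just a throwaway created for the demo):
$ printf 'a\nb\nc\n' > /tmp/three.txt
$ y=$(wc -l < /tmp/three.txt)
$ echo "$y"
3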
Your increment line
x=$(( $x + 5 ))
is actually valid arithmetic expansion, so that part is fine. An equivalent form using the arithmetic compound command is
(( x = x + 5 ))
or simply (( x += 5 )).
Hope this helps
There are cleaner ways to do it, but what you're looking for is this.
#!/bin/bash
x=5
y=`wc -l $1`
y=`echo $y | cut -f1 -d\ `
while [ "$y" -gt "$x" ]
do
sed -n "${x}p" "$1"
x=$(( $x + 5 ))
done
Initialize x to 5, since there is no "line zero" in your file $1.
Also, wc -l $1 will display the line count followed by the name of the file. Use cut to strip the file name out and keep just the first field.
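For instance, with a hypothetical 12-line file named notes.txt:
$ y=$(wc -l notes.txt)
$ echo $y
12 notes.txt
$ echo $y | cut -f1 -d' '
12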
Remember that in Bash conditionals, an exit status of zero is what counts as "true".
Don't wrap $x in backticks in the sed command; backticks are command substitution, which would try to run the value of $x as a command. Write the line number and the p together as a single address, using the curly braces in ${x}p to keep the variable name unambiguous.
You can do this quite succinctly using awk:
awk 'NR % 5 == 0' "$1"
NR is the record number (line number in this case). Whenever it is a multiple of 5, the expression is true, so the line is printed.
You might also like the even shorter but slightly less readable:
awk '!(NR%5)' "$1"
which does the same thing.
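For example, with seq 20 as input:
$ seq 20 | awk '!(NR%5)'
5
10
15
20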
I am writing a script to manipulate a text file.
First thing I want to do is check if duplicate entries exist and, if so, ask the user whether they want to keep or remove them.
I know how to display duplicate lines if they exist, but what I want to learn is just to get a yes/no answer to the question "Do duplicates exist?"
It seems uniq returns 0 whether or not duplicates were found, as long as the command completed without issues.
What is that command that I can put in an if-statement just to tell me if duplicate lines exist?
My file is very simple; it is just values in a single column.
I'd probably use awk to do this but, for the sake of variety, here is a brief pipe to accomplish the same thing:
$ { sort | uniq -d | grep . -qc; } < noduplicates.txt; echo $?
1
$ { sort | uniq -d | grep . -qc; } < duplicates.txt; echo $?
0
sort + uniq -d make sure that only the duplicate lines (which don't have to be adjacent) get printed to stdout. grep . -c then counts those lines, emulating wc -l, with the useful side effect that it returns 1 when there is no match (i.e. a zero count). -q silences the output so the count isn't printed and you can use the pipeline quietly in your script. Wrapped up as a function:
has_duplicates()
{
    {
        sort | uniq -d | grep . -qc
    } < "$1"
}
if has_duplicates myfile.txt; then
    echo "myfile.txt has duplicate lines"
else
    echo "myfile.txt has no duplicate lines"
fi
You can use awk combined with the boolean || operator:
# Ask question if awk found a duplicate
awk 'a[$0]++{exit 1}' test.txt || (
echo -n "remove duplicates? [y/n] "
read answer
# Remove duplicates if answer was "y" . I'm using `[` the shorthand
# of the test command. Check `help [`
[ "$answer" == "y" ] && uniq test.txt > test.uniq.txt
)
The block after the || will only get executed if the awk command returns 1, meaning it found duplicates.
However, for a basic understanding I'll also show an example using an if block
awk 'a[$0]++{exit 1}' test.txt
# $? contains the return value of the last command
if [ $? != 0 ] ; then
    echo -n "remove duplicates? [y/n] "
    read answer
    # check answer
    if [ "$answer" == "y" ] ; then
        uniq test.txt > test.uniq.txt
    fi
fi
However, the [] are not just brackets like in other programming languages. [ is a synonym for the test bash builtin command, and ] is its last argument. Read help [ to understand the details.
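To see that equivalence in action, these two lines behave identically (= is the portable spelling of the comparison; bash's test also accepts ==):
[ "$answer" = "y" ] && echo matched
test "$answer" = "y" && echo matched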
A quick bash solution:
#!/bin/bash
INPUT_FILE=words
declare -A a
while read line ; do
    [ "${a[$line]}" = 'nonempty' ] && duplicates=yes && break
    a[$line]=nonempty
done < $INPUT_FILE
[ "$duplicates" = yes ] && echo -n "Keep duplicates? [Y/n]" && read keepDuplicates
removeDuplicates() {
    sort -u $INPUT_FILE > $INPUT_FILE.tmp
    mv $INPUT_FILE.tmp $INPUT_FILE
}
[ "$keepDuplicates" != "Y" ] && removeDuplicates
The script reads line by line from INPUT_FILE and stores each line in the associative array a as the key, with the string nonempty as the value. Before storing the value, it first checks whether the key is already there; if it is, it has found a duplicate, so it sets the duplicates flag and breaks out of the loop.
Later it only checks whether the flag is set and asks the user whether to keep the duplicates. If they answer anything other than Y, it calls the removeDuplicates function, which uses sort -u to remove the duplicates. ${a[$line]} evaluates to the value of the associative array a for the key $line. [ "$duplicates" = yes ] is the bash builtin test syntax; if the test succeeds, whatever follows the && is evaluated.
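Note that sort -u also reorders the file. If the original line order matters, a possible variant of removeDuplicates (just a sketch, assuming awk is acceptable here) keeps the first occurrence of each line in place:
removeDuplicates() {
    awk '!seen[$0]++' $INPUT_FILE > $INPUT_FILE.tmp
    mv $INPUT_FILE.tmp $INPUT_FILE
}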
But note that the awk solutions will likely be faster so you may want to use them if you expect to process bigger files.
You can do uniq=yes/no using this awk one-liner:
awk '!seen[$0]{seen[$0]++; i++} END{print (NR>i)?"no":"yes"}' file
awk keeps an array of unique lines called seen.
Every time we see a line for the first time, we store it in seen and increment a counter i.
Finally, in the END block, we compare the total number of records NR with the number of unique records i.
If NR > i, there are duplicate records and we print no; otherwise we print yes.
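For example:
$ printf 'a\nb\na\n' | awk '!seen[$0]{seen[$0]++; i++} END{print (NR>i)?"no":"yes"}'
no
$ printf 'a\nb\nc\n' | awk '!seen[$0]{seen[$0]++; i++} END{print (NR>i)?"no":"yes"}'
yes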
I have this scenarios:
File Content:
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
I want sed or awk so that every time I cat the file, the next line is returned.
like
First iteration:
cat ip | some magic
10.1.1.1
Second iteration returns
10.1.1.2
Third iteration returns
10.1.1.3
Fourth iteration returns
10.1.1.4
and after n iterations, it returns to line 1.
Fifth iteration returns:
10.1.1.1
Can we do it using sed or awk?
You will need to store the line number in a file and increment it at each invocation, wrapping it back to 1 after the last line.
get_line () {
    if [[ ! -e /var/local/get_line.next ]]
    then
        if [[ ! -e /var/local ]]
        then
            mkdir -p /var/local
        fi
        line_no=1
    else
        line_no=$(< /var/local/get_line.next)
    fi
    file_length=$(wc -l < ip_file)
    if ((file_length == 0))
    then
        echo "Error: Data file is empty" >&2
        return 1
    fi
    if ((line_no > file_length))
    then
        line_no=1
    fi
    sed -n "${line_no}{p;q}" ip_file
    echo "$((++line_no))" > /var/local/get_line.next
}
This is in the form of a function which you can incorporate in a script. Feel free to change the location of the get_line.next file. Note that permissions will need to be correct to read or write the files or to create the directory, if necessary.
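For example, assuming ip_file contains the four addresses from the question and the state file location is writable, successive calls walk through the file:
$ get_line
10.1.1.1
$ get_line
10.1.1.2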
You will not need to use cat.
You can't do this with cat alone. You also can't seek on a pipe, so you can't use a pipe.
You can do this with a nested while loop
while ((1))
do
    while read line
    do
        echo "$line"
    done < somefile
done
I'm trying to fix a bash script by adding in some error catching. I have a file (list.txt) that normally has content like this:
People found by location:
person: john [texas]
more info on john
Sometimes that file gets corrupted, and it only has that first line:
People found by location:
I'm trying to find a method to check that file to see if any data exists on line 2, and I want to include it in my bash script. Is this possible?
Simple and clean:
if test -n "$(sed -n 2p < /path/to/file)"; then
    # line 2 exists and it is not blank
    echo "line 2 present"
else
    # otherwise...
    echo "line 2 missing or blank"
fi
With sed we extract the second line only. The test expression will evaluate to true only if there is a second non-blank line in the file.
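For example, with a throwaway file that mimics the corrupted case (only the header line):
$ printf 'People found by location:\n' > corrupt.txt
$ if test -n "$(sed -n 2p < corrupt.txt)"; then echo "line 2 present"; else echo "corrupted"; fi
corrupted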
I assume that you want to check whether line 2 of a given file contains any data or not.
[ "$(sed -n '2p' inputfile)" != "" ] && echo "Something present on line 2" || echo "Line 2 blank"
This would work even if the inputfile has just one line.
If you simply want to check whether the inputfile has one line or more, you can say:
[ "$(sed -n '$=' z)" == "1" ] && echo "Only one line" || echo "More than one line"
Sounds like you want to check if your file has more than 1 line
if (( $(wc -l < filename) > 1 )); then
    echo I have a 2nd line
fi
Another approach which doesn't require external commands is:
if ( IFS=; read && read -r && [[ -n $REPLY ]]; ) < /path/to/file; then
    echo true
else
    echo false
fi
I'm trying to parse a csv file I made with Google Spreadsheet. It's very simple for testing purposes, and is basically:
1,2
3,4
5,6
The problem is that the csv doesn't end in a newline character so when I cat the file in BASH, I get
MacBook-Pro:Desktop kkSlider$ cat test.csv
1,2
3,4
5,6MacBook-Pro:Desktop kkSlider$
I just want to read it line by line in a BASH script using the while loop that every guide suggests, and my script looks like this:
while IFS=',' read -r last first
do
echo "$last $first"
done < test.csv
The output is:
MacBook-Pro:Desktop kkSlider$ ./test.sh
1 2
3 4
Any ideas on how I could have it read that last line and echo it?
Thanks in advance.
You can force the input to your loop to end with a newline thus:
#!/bin/bash
(cat test.csv ; echo) | while IFS=',' read -r last first
do
    echo "$last $first"
done
Unfortunately, this may result in an empty line at the end of your output if the input already has a newline at the end. You can fix that with a little addition:
#!/bin/bash
(cat test.csv ; echo) | while IFS=',' read -r last first
do
    if [[ $last != "" ]] ; then
        echo "$last $first"
    fi
done
Another method relies on the fact that the values are being placed into the variables by the read but they're just not being output because of the while statement:
#!/bin/bash
while IFS=',' read -r last first
do
    echo "$last $first"
done < test.csv
if [[ $last != "" ]] ; then
    echo "$last $first"
fi
That one works without creating another subshell to modify the input to the while statement.
Of course, I'm assuming here that you want to do more inside the loop than just output the values with a space rather than a comma. If that's all you wanted to do, there are other tools better suited than a bash read loop, such as:
tr "," " " <test.csv
cat file | sed -e '${/^$/!s/$/\n/;}' | while IFS=',' read -r last first; do echo "$last $first"; done
If the last (unterminated) line needs to be processed differently from the rest, @paxdiablo's version with the extra if statement is the way to go; but if it's going to be handled like all the others, it's cleaner to process it in the main loop.
You can roll the "if there was an unterminated last line" into the main loop condition like this:
while IFS=',' read -r last first || [ -n "$last" ]
do
    echo "$last $first"
done < test.csv
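For example, feeding the unterminated sample input straight in:
$ printf '1,2\n3,4\n5,6' | while IFS=',' read -r last first || [ -n "$last" ]; do echo "$last $first"; done
1 2
3 4
5 6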