How can I do an incremental for loop to vary the -n in the head command (head -n)?
Does this work?
for (( i = 1 ; i <= $NUMBER ; i++ ))
head -$(NUMBER) filename.txt
NUMBER=$((NUMBER+1))
done
The code is supposed to display different amounts of text from filename.txt using -n.
The following should work:
for (( i = 1 ; i <= `wc -l filename.txt | cut -f 1 -d ' '` ; i++ )); do
head -$i filename.txt | tail -1;
done
The wc -l filename.txt gets the number of lines in filename.txt. cut -f 1 -d ' ' takes the first field of that output, which is the line count. This is used as the upper bound for the loop.
head -$i takes the first $i lines and tail -1 takes the last line of those, giving you the file one line at a time.
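Note that this re-reads the file on every iteration. If the file is large, a single-pass alternative with a plain while read loop does the same job (a sketch, not part of the original answer):

i=1
while IFS= read -r line; do
    # $line holds line number $i of the file
    echo "line $i: $line"
    i=$((i + 1))
done < filename.txt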
As of now, the script counts the number of lines in the two files.
Then I put it through a condition to check whether the new count is greater than the old one.
However, I am not sure how to compare it based on a percentage of the old file.
Is there a better way to design the script?
#!/bin/bash
declare -i new=$(< "$(ls -t "file name"*.csv | head -n 1)" wc -l)
declare -i old=$(< "$(ls -t "file name"*.csv | head -n 2 | tail -n 1)" wc -l)
echo $new
echo $old
if [ $new -gt $old ];
then
echo "okay";
else
echo "fail";
fi
If you need to check for an x% maximum difference, you can count the number of '<' lines in the diff output. Recall that the diff output will look like:
+ diff node001.html node002.html
2,3c2,3
< 4
< 7
---
> 2
> 3
So the code would look like:
old=$(wc -l < file1)
diff1=$(diff file1 file2 | grep -c '^<')
pct=$((diff1*100/(old-1)))
# Check Percent
if [ "$pct" -gt 60 ] ; then
...
fi
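Putting the pieces together, the whole check might look like this (a sketch: the 60% threshold, the file pattern from above, and using the plain line count as the denominator are all assumptions):

#!/bin/bash
# Sketch: compare the newest CSV against the previous one and flag
# the run as failed if more than 60% of the old file's lines changed.
newfile=$(ls -t "file name"*.csv | head -n 1)
oldfile=$(ls -t "file name"*.csv | head -n 2 | tail -n 1)

old=$(wc -l < "$oldfile")          # assumes the old file is non-empty
changed=$(diff "$oldfile" "$newfile" | grep -c '^<')
pct=$((changed * 100 / old))

if [ "$pct" -gt 60 ]; then
    echo "fail: $pct% of $oldfile changed"
else
    echo "okay"
fi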
Basically, I'm trying to check whether there are any 200 HTTP responses in the last 3 lines of the log, but I'm getting the error below and the head command is failing. Please help.
LINES=`cat http_access.log |wc -l`
for i in $LINES $LINES-1 $LINES-2
do
echo "VALUE $i"
head -$i http_access.log | tail -1 > holy.txt
temp=`cat holy.txt| awk '{print $9}'`
if [[ $temp == 200 ]]
then
echo "line $i has 200 code at "
cat holy.txt | awk '{print $4}'
fi
done
Output:
VALUE 18
line 18 has 200 code at [21/Jan/2018:15:34:23
VALUE 18-1
head: invalid trailing option -- -
Try `head --help' for more information.
Use $((...)) to perform arithmetic.
for i in $((LINES)) $((LINES-1)) $((LINES-2))
Without it, it's attempting to run the commands:
head -18 http_access.log
head -18-1 http_access.log
head -18-2 http_access.log
The latter two are errors.
A more flexible way to write the for loop would be using C-style syntax:
for ((i = LINES - 2; i <= LINES; ++i)); do
...
done
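For reference, the original script rewritten around that loop might look like this (a sketch; the awk field positions are taken from the question's code):

#!/bin/bash
LINES=$(wc -l < http_access.log)

# Check the last 3 lines of the log for HTTP 200 responses.
for ((i = LINES - 2; i <= LINES; ++i)); do
    echo "VALUE $i"
    line=$(head -n "$i" http_access.log | tail -n 1)
    status=$(awk '{print $9}' <<< "$line")
    if [[ $status == 200 ]]; then
        echo "line $i has 200 code at $(awk '{print $4}' <<< "$line")"
    fi
done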
You got the why from JohnKugelman's answer; I will just propose simplified code that might work for you:
while read -ra fields; do
[[ ${fields[9]} = 200 ]] && echo "Line ${fields[0]} has 200 code: ${fields[4]}"
done < <(cat -n http_access.log | tail -n 3 | tac)
cat -n: Numbers the lines of the file
tail -n 3: Prints the last 3 lines. You can change this number for more lines
tac: Prints the lines output by tail in reverse order
read -ra fields: Reads the fields into the 0-indexed array fields (see the sketch after this list)
${fields[0]}: The line number (prepended by cat -n)
${fields[num_of_field]}: The individual fields
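Since the array is 0-indexed, awk's $9 corresponds to ${fields[8]} on a raw log line, or ${fields[9]} once cat -n has prepended the line number. A quick sketch with made-up input:

read -ra fields <<< "a b c"
echo "${fields[0]}"   # prints: a
echo "${fields[2]}"   # prints: c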
You can also use wc instead of numbering the lines with cat -n. For larger inputs, this will be slightly faster:
lines=$(wc -l < http_access.log)
while read -ra fields; do
[[ ${fields[8]} = 200 ]] && echo "Line $lines has 200 code: ${fields[3]}"
((lines--))
done < <(tail -n 3 http_access.log | tac)
Is it possible to use a line count to break a while loop in bash?
I have the following code, but it doesn't break and runs forever:
wc -l Database.faa > counter
perl -p -e 's/ /\t/g' counter | cut -f6 > temp; mv temp counter
while
($counter > '0')
do
# some commands
wc -l Database.faa > counter
perl -p -e 's/ /\t/g' counter | cut -f6 > temp; mv temp counter
done
My program reduces Database.faa with each run, but when Database.faa is empty it continues to run. Can anyone help?
Thanks.
In your condition, $counter does not look at the file counter, which is seemingly what you're trying to use. Also, ( isn't the traditional way to test values, though you can use the exit code of the subshell it creates. You're probably more interested in [, which is the same as the test command. Finally, when using test, > is not a comparison operator; it's a redirection operator. man test shows the comparisons you can do, and for integer comparisons you'd use -gt instead. Putting that together, we can write your loop like this:
counter=$(wc -l Database.faa | cut -f1 -d" ")
while [ $counter -gt 0 ]; do
# some commands
counter=$(wc -l Database.faa | cut -f1 -d" ")
done
or you could just test whether the file is non-empty:
while [ -s Database.faa ]; do
# some commands
done
EDIT: FIXED. Now concerned with optimizing the code.
I am writing a script to separate data from one file into multiple files. When I run the script, I get the error "sed: -e expression #1, char 2: unknown command: `.'" without any line number, making it somewhat hard to debug. I have checked the lines in which I use sed individually, and they work without problem. Any ideas? I realize that there are a lot of things that I did somewhat unconventionally and that there are faster ways of doing some things (I'm sure there's a way to avoid continuously re-reading somefile), but right now I'm just trying to understand this error. Here is the code:
x1=$(sed -n '1p' < somefile | cut -f1)
y1=$(sed -n '1p' < somefile | cut -f2)
p='p'
for i in 1..$(seq 1 $(cat "somefile" | wc -l))
do
x2=$(sed -n $i$p < somefile | cut -f1)
y2=$(sed -n $i$p < somefile | cut -f1)
if [ "$x1" = "$x2" ] && [ "$y1" = "$y2" ];
then
x1=$x2
y1=$x2
fi
s="$(sed -n $i$p < somefile | cut -f3) $(sed -n $i$p < somefile | cut$
echo $s >> "$x1-$y1.txt"
done
The problem is in the following line:
for i in 1..$(seq 1 $(cat "somefile" | wc -l))
If somefile were to have 3 lines, then this would result in the following values of i:
1..1
2
3
Clearly, something like sed -n 1..1p < filename would result in the error you are observing: sed: -e expression #1, char 2: unknown command: '.'
You rather want:
for i in $(seq 1 $(cat "somefile" | wc -l))
This is the cause of the problem:
for i in 1..$(seq 1 $(cat "somefile" | wc -l))
Try just
for i in $(seq 1 $(wc -l < somefile))
However, you are reading your file many, many times too often with all those sed commands. Read it just once:
read x1 y1 _ < <(sed 1q somefile)
while read x2 y2 f3 f4; do
if [[ $x1 = $x2 && $y1 = $y2 ]]; then
x1=$x2
y1=$y2
fi
echo "$f3 $f4"
done < somefile > "$x1-$y1.txt"
The line where you construct the s variable is truncated -- I'm assuming you have 4 fields per line.
Note: a problem with cut-and-paste coding is that you introduce errors: you assign y2 the same field as x2
This is my code:
nb_lignes=`wc -l $1 | cut -d " " -f1`
for i in $(seq $nb_lignes)
do
m=`head $1 -n $i | tail -1`
# command
done
Please, how can I change it to apply "command" to a random 20% of the lines in the file?
20%, 40%, or 60% (it's a parameter).
Thank you.
This will randomly select approximately 20% of the lines in the file (each line is kept independently with probability p/100, so the exact count varies):
awk -v p=20 'BEGIN {srand()} rand() <= p/100' filename
So something like this for the whole solution (assuming bash):
#!/bin/bash
filename="$1"
pct="${2:-20}" # specify percentage
while IFS= read -r line; do
: # some command with "$line"
done < <(awk -v p="$pct" 'BEGIN {srand()} rand() <= p/100' "$filename")
If you're using a shell without process substitution (the <(...) bit), you can do this instead, but the body of the loop won't be able to have any side effects in the outer script (e.g. any variables it sets won't be set anymore once the loop completes):
#!/bin/sh
filename="$1"
pct="${2:-20}" # specify percentage
awk -v p="$pct" 'BEGIN {srand()} rand() <= p/100' "$filename" |
while IFS= read -r line; do
: # some command with "$line"
done
Try this:
file=$1
nb_lignes=$(wc -l $file | cut -d " " -f1)
num_lines_to_get=$((20*${nb_lignes}/100))
for (( i=0; i < $num_lines_to_get; i++))
do
line=$(head -$((${RANDOM} % $nb_lignes + 1)) $file | tail -1)
echo "$line"
done
Note that ${RANDOM} only generates numbers less than 32768 so this approach won't work for large files.
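If you need a larger range without shuf, one common workaround (shown here as a sketch, reusing nb_lignes from the script above) is to combine two RANDOM draws into a single 30-bit value:

# RANDOM is 15 bits (0..32767); combining two draws gives 0..2^30-1.
big_random=$(( (RANDOM << 15) | RANDOM ))
line_number=$(( big_random % nb_lignes + 1 ))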
If you have shuf installed, you can use the following to get a random line instead of using $RANDOM.
line=$(shuf -n 1 $file)
You can do it with awk; see below:
awk -v b=20 '{a[NR]=$0}END{val=((b/100)*NR)+1;for(i=1;i<val;i++)print a[i]}' all.log
The above command prints the first 20% of the lines, counting from the beginning of the file.
You just have to change the value of b on the command line to get the required percentage of lines.
Tested below:
> cat temp
1
2
3
4
5
6
7
8
9
10
> awk -v b=10 '{a[NR]=$0}END{val=((b/100)*NR)+1;for(i=1;i<val;i++)print a[i]}' temp
1
> awk -v b=20 '{a[NR]=$0}END{val=((b/100)*NR)+1;for(i=1;i<val;i++)print a[i]}' temp
1
2
>
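Note that this approach takes the first b% of lines rather than a random b%. If the selection needs to be random, one option (assuming shuf is available) is to shuffle the input first:

shuf temp | awk -v b=20 '{a[NR]=$0}END{val=((b/100)*NR)+1;for(i=1;i<val;i++)print a[i]}'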
shuf will output the file's lines in a randomized order; if you know how many lines you want, you can give that number to the -n parameter. There is no need to get them one at a time. So:
shuf -n $(( $(wc -l < "$file") * pct / 100 )) "$file" |
while IFS= read -r line; do
    # do something with $line
done
shuf comes standard with GNU/Linux distros, as far as I know.
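Tying it back to the question's parameterized script, the whole thing might look like this (a sketch; the default of 20 is an assumption):

#!/bin/bash
file=$1
pct=${2:-20}   # percentage of lines to process: 20, 40, 60, ...

# Pick a random pct% of the file's lines and run the command on each.
shuf -n $(( $(wc -l < "$file") * pct / 100 )) "$file" |
while IFS= read -r line; do
    echo "$line"   # replace with the real command
done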