Getting error: sed: -e expression #1, char 2: unknown command: `.' - bash

EDIT: FIXED. Now concerned with optimizing the code.
I am writing a script to separate data from one file into multiple files. When I run the script, I get the error "sed: -e expression #1, char 2: unknown command: `.'" without any line number, which makes it somewhat hard to debug. I have checked the lines in which I use sed individually, and they work without problem. Any ideas? I realize that I did a lot of things somewhat unconventionally and that there are faster ways of doing some of them (I'm sure there's a way to avoid re-reading somefile over and over), but right now I'm just trying to understand this error. Here is the code:
x1=$(sed -n '1p' < somefile | cut -f1)
y1=$(sed -n '1p' < somefile | cut -f2)
p='p'
for i in 1..$(seq 1 $(cat "somefile" | wc -l))
do
    x2=$(sed -n $i$p < somefile | cut -f1)
    y2=$(sed -n $i$p < somefile | cut -f1)
    if [ "$x1" = "$x2" ] && [ "$y1" = "$y2" ];
    then
        x1=$x2
        y1=$x2
    fi
    s="$(sed -n $i$p < somefile | cut -f3) $(sed -n $i$p < somefile | cut$
    echo $s >> "$x1-$y1.txt"
done

The problem is in the following line:
for i in 1..$(seq 1 $(cat "somefile" | wc -l))
If somefile were to have 3 lines, then this would result in the following values of i:
1..1
2
3
Clearly, something like sed -n 1..1p < filename would result in the error you are observing: sed: -e expression #1, char 2: unknown command: '.'
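You can reproduce the error in isolation (a quick demo with a throwaway file):
printf 'a\tb\tc\n' > somefile
sed -n '1..1p' < somefile   # sed: -e expression #1, char 2: unknown command: `.'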
What you want instead is:
for i in $(seq 1 $(cat "somefile" | wc -l))

This is the cause of the problem:
for i in 1..$(seq 1 $(cat "somefile" | wc -l))
Try just
for i in $(seq 1 $(wc -l < somefile))
However, you are reading your file many, many times too often with all those sed commands. Read it just once:
read x1 y1 < <(sed 1q somefile)
while read x2 y2 f3 f4; do
    if [[ $x1 = $x2 && $y1 = $y2 ]]; then
        x1=$x2
        y1=$y2
    fi
    echo "$f3 $f4"
done < somefile > "$x1-$y1.txt"
The line where you construct the s variable is truncated -- I'm assuming you have 4 fields per line.
Note: a problem with cut-and-paste coding is that you introduce errors: you assign y2 the same field as x2 (both use cut -f1).

Related

How to find all non-dictionary words in a file in bash/zsh?

I'm trying to find all words in a file that don't exist in the dictionary. If I look for a single word, the following works:
b=ther; look $b | grep -i "^$b$" | ifne -n echo $b => ther
b=there; look $b | grep -i "^$b$" | ifne -n echo $b => [no output]
However, if I try to run a "while read" loop:
while read a; do look $a | grep -i "^$a$" | ifne -n echo "$a"; done < <(tr -s '[[:punct:][:space:]]' '\n' <lotr.txt |tr '[:upper:]' '[:lower:]')
The output seems to contain all (?) words in the file. Why doesn't this loop only output non-dictionary words?
Regarding ifne
If stdin is non-empty, ifne -n reprints stdin to stdout. From the manpage:
-n  Reverse operation. Run the command if the standard input is empty.
    Note that if the standard input is not empty, it is passed through
    ifne in this case.
strace on ifne confirms this behavior.
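You can see it directly (a quick demo, assuming moreutils' ifne is installed):
echo hello | ifne -n echo EMPTY   # stdin non-empty: prints "hello" (passed through)
ifne -n echo EMPTY < /dev/null    # stdin empty: runs the command, prints "EMPTY"
So in the loop above, dictionary words produce grep output that ifne -n passes straight through, and non-dictionary words trigger the echo, which is why every word ends up printed.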
Alternative
Perhaps, as an alternative:
#!/bin/bash -e

export PATH=/bin:/sbin:/usr/bin:/usr/sbin

while read a; do
    look "$a" | grep -qi "^$a$" || echo "$a"
done < <(
    tr -s '[[:punct:][:space:]]' '\n' < lotr.txt \
        | tr '[A-Z]' '[a-z]' \
        | sort -u \
        | grep .
)
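The sort -u pass deduplicates the word list so each word is looked up only once, and the final grep . drops any empty lines left over from the tr splitting.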

Intermittent piping failure in bash

I have a code snippet that looks like this
while grep "{{SECRETS}}" /tmp/kubernetes/$basefile | grep -v "#"; do
    grep -n "{{SECRETS}}" /tmp/kubernetes/$basefile | grep -v "#" | head -n1 | while read -r line ; do
        lineno=$(echo $line | cut -d':' -f1)
        spaces=$(sed "${lineno}!d" /tmp/kubernetes/$basefile | awk -F'[^ \t]' '{print length($1)}')
        spaces=$((spaces-1))
        # Delete line that had {{SECRETS}}
        sed -i -e "${lineno}d" /tmp/kubernetes/$basefile
        while IFS='' read -r secretline || [[ -n "$secretline" ]]; do
            newline=$(printf "%*s%s" $spaces "" "$secretline")
            sed -i "${lineno}i\ ${newline}" /tmp/kubernetes/$basefile
            lineno=$((lineno+1))
        done < "/tmp/secrets.yaml"
    done
done
In /tmp/kubernetes/$basefile, the string {{SECRETS}} appears twice 100% of the time.
Almost every single time, this completes fine. However, very infrequently, the script errors on its second loop through the file, like so, according to set -x:
...
+ IFS=
+ read -r secretline
+ [[ -n '' ]]
+ read -r line
exit code 1
When it works, the set -x looks like this, and continues processing the file correctly.
...
+ IFS=
+ read -r secretline
+ [[ -n '' ]]
+ read -r line
+ grep '{{SECRETS}}' /tmp/kubernetes/deployment.yaml
+ grep -v '#'
I have no answer for how this can only happen occasionally, so I think there's something about bash piping's parallelism I don't understand. Is there something in grep -n "{{SECRETS}}" /tmp/kubernetes/$basefile | grep -v "#" | head -n1 | while read -r line ; do that could lead to out-of-order execution somehow? Based on the error, it seems like it's trying to read a line, but can't because previous commands didn't work. But there's no indication of that in the set -x output.
A likely cause of the problem is that the pipeline containing the inner loop both reads and writes the "basefile" at the same time. See How to make reading and writing the same file in the same pipeline always “fail”?.
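A minimal sketch of the hazard (hypothetical file and pattern names, for illustration only):
# grep may still be streaming file.txt into the pipe while sed -i
# replaces the file underneath it; what the downstream readers see,
# and when head's early exit stops the pipeline, depends on timing.
grep -n pattern file.txt | head -n1 | while read -r line; do
    sed -i "${line%%:*}d" file.txt
done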
One way to fix the problem is to do a full read of the file before trying to update it. Try:
basepath=/tmp/kubernetes/$basefile
secretspath=/tmp/secrets.yaml
while
    line=$(grep -n "{{SECRETS}}" "$basepath" | grep -v "#" | head -n1)
    [[ -n $line ]]
do
    lineno=$(echo "$line" | cut -d':' -f1)
    spaces=$(sed "${lineno}!d" "$basepath" \
        | awk -F'[^ \t]' '{print length($1)}')
    spaces=$((spaces-1))
    # Delete line that had {{SECRETS}}
    sed -i -e "${lineno}d" "$basepath"
    while IFS='' read -r secretline || [[ -n "$secretline" ]]; do
        newline=$(printf "%*s%s" $spaces "" "$secretline")
        sed -i "${lineno}i\ ${newline}" "$basepath"
        lineno=$((lineno+1))
    done < "$secretspath"
done
(I introduced the variables basepath and secretspath to make the code easier to test.)
As an aside, it's also possible to do this with pure Bash code:
basepath=/tmp/kubernetes/$basefile
secretspath=/tmp/secrets.yaml
updated_lines=()
is_updated=0
while IFS= read -r line || [[ -n $line ]] ; do
    if [[ $line == *'{{SECRETS}}'* && $line != *'#'* ]] ; then
        spaces=${line%%[^[:space:]]*}
        while IFS= read -r secretline || [[ -n $secretline ]]; do
            updated_lines+=( "${spaces}${secretline}" )
        done < "$secretspath"
        is_updated=1
    else
        updated_lines+=( "$line" )
    fi
done <"$basepath"
(( is_updated )) && printf '%s\n' "${updated_lines[@]}" >"$basepath"
The whole updated file is stored in memory (in the updated_lines array) but that shouldn't be a problem, because any file that's too big to store in memory will almost certainly be too big to process line-by-line with Bash. Bash is generally extremely slow.
In this code spaces holds the actual space characters for indentation, not the number of them.
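To see what that parameter expansion does (a small self-contained sketch):
line='    value: {{SECRETS}}'
spaces=${line%%[^[:space:]]*}   # remove everything from the first non-space character on
printf '[%s]\n' "$spaces"       # -> [    ]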

Arithmetic operation fails in Shell script

Basically I'm trying to check if there are any 200 HTTP responses in the last 3 lines of the log, but I'm getting the error below. Because of this the head command is failing. Please help.
LINES=`cat http_access.log | wc -l`
for i in $LINES $LINES-1 $LINES-2
do
    echo "VALUE $i"
    head -$i http_access.log | tail -1 > holy.txt
    temp=`cat holy.txt | awk '{print $9}'`
    if [[ $temp == 200 ]]
    then
        echo "line $i has 200 code at "
        cat holy.txt | awk '{print $4}'
    fi
done
Output:
VALUE 18
line 18 has 200 code at [21/Jan/2018:15:34:23
VALUE 18-1
head: invalid trailing option -- -
Try `head --help' for more information.
Use $((...)) to perform arithmetic.
for i in $((LINES)) $((LINES-1)) $((LINES-2))
Without it, it's attempting to run the commands:
head -18 http_access.log
head -18-1 http_access.log
head -18-2 http_access.log
The latter two are errors.
A more flexible way to write the for loop would be using C-style syntax:
for ((i = LINES - 2; i <= LINES; ++i)); do
...
done
You got the why from JohnKugelman's answer; I will just propose simplified code that might work for you:
while read -ra fields; do
    [[ ${fields[9]} = 200 ]] && echo "Line ${fields[0]} has 200 code: ${fields[4]}"
done < <(cat -n http_access.log | tail -n 3 | tac)
cat -n: Numbers the lines of the file
tail -n 3: Prints the last 3 lines; change this number for more lines
tac: Prints the lines output by tail in reverse order
read -ra fields: Reads the fields into the array fields
${fields[0]}: The line number
${fields[num_of_field]}: Individual fields
You can also use wc instead of numbering with cat -n. For larger inputs, this will be slightly faster:
lines=$(wc -l < http_access.log)
while read -ra fields; do
    [[ ${fields[8]} = 200 ]] && echo "Line $lines has 200 code: ${fields[3]}"
    ((lines--))
done < <(tail -n 3 http_access.log | tac)
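Note that the field indices shift down by one here (fields[8] and fields[3] instead of fields[9] and fields[4]) because there is no longer a cat -n line number occupying the first slot of the array.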

Assigning variables to values in a text file with 3 columns, line by line

I've got a .txt file called PowerCoords.txt with three columns, each separated by a tab, and 264 rows. Each row contains an x (column 1), y (column 2), and z (column 3) coordinate. I want to go through this file, line by line, assign each value to X, Y, and Z, and then input those variables into another function.
I'm new to bash, and I don't understand how to specify that I want the value in Row 1, Column 2 to be the variable Y, and so on...
I know this is likely super simple and I could do it in a flash in Matlab, but I'm trying to keep everything in bash.
while read x y z; do
    echo x=$x y=$y z=$z
done < input.txt
The above requires that none of your columns contain any whitespace.
EDIT:
In response to comments, here is one technique to handle numbering the lines:
nl -ba < input.txt | while read line x y z rest; do
    fslmaths ~/data/standard/MNI152_T1_2mm -mul 0 \
        -add 1 -roi $x 1 $y 1 $z 1 0 1 point -odt float > NewFile$line
done
William Pursell's answer is much smarter, but in my straightforward beginner's mind I tried the following some time ago:
#!/bin/bash
data="data.dat"
datalength=`wc $data | awk '{print $1;}'`
for (( i=1; i<=$datalength; i++ )); do
    x=`cat $data | awk '{print $1;}' | sed -n "$i"p | sed -e 's/[eE]+*/\\*10\\^/'`; x=`echo "$x" | bc -l`; echo "x$i=$x"
    y=`cat $data | awk '{print $2;}' | sed -n "$i"p | sed -e 's/[eE]+*/\\*10\\^/'`; y=`echo "$y" | bc -l`; echo "y$i=$y"
    z=`cat $data | awk '{print $3;}' | sed -n "$i"p | sed -e 's/[eE]+*/\\*10\\^/'`; z=`echo "$z" | bc -l`; echo "z$i=$z"
    # do something with xyz:
    fslmaths ~/data/standard/MNI152_T1_2mm -mul 0 -add 1 -roi $x 1 $y 1 $z 1 0 1 point -odt float > NewFile$i
done
The bc call and the sed -e 's/[eE]+*/\\*10\\^/' have to be added if you want to use floating-point numbers, and to handle input that uses exponential notation.
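For example, the sed step rewrites exponential notation into a form bc accepts (a quick sketch):
x='1.5e+03'
x=$(echo "$x" | sed 's/[eE]+*/*10^/')   # -> 1.5*10^03
echo "$x" | bc -l                       # -> 1500.0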
I had a similar problem, but for lots of input data those bash scripts are very slow, so I migrated to Perl. In Perl it would look like this:
#!/usr/bin/perl -w
use strict;
open (IN, "data.dat") or die "Error opening";
my $i = 0;
for my $line (<IN>) {
    $i++;
    open(OUT, ">NewFile$i.out");
    chomp $line;
    (my $x, my $y, my $z) = split '\t', $line;
    print "$x $y $z\n";
    # do something with xyz:
    my $f = `fslmaths ~/data/standard/MNI152_T1_2mm -mul 0 -add 1 -roi $x 1 $y 1 $z 1 0 1 point -odt float`;
    print OUT "f= $f\n";
    close OUT;
}
close IN;

Get just the integer from wc in bash

Is there a way to get the integer that wc returns in bash?
Basically I want to write the line numbers and word counts to the screen after the file name.
output: filename linecount wordcount
Here is what I have so far:
files=`ls`
for f in $files;
do
    if [ ! -d $f ] #only print out information about files !directories
    then
        # some way of getting the wc integers into shell variables and then printing them
        echo "$f $lines $words"
    fi
done
Most simple answer ever:
wc < filename
Just:
wc -l < file_name
will do the job. But note that on some platforms (e.g. BSD/macOS) this output includes leading whitespace, because those wc implementations right-align the number.
You can use the cut command to get just the first word of wc's output (which is the line or word count):
lines=`wc -l $f | cut -f1 -d' '`
words=`wc -w $f | cut -f1 -d' '`
wc $file | awk '{print $4, $2, $1}'
Adjust as necessary for your layout.
It's also nicer to use positive logic ("is a file") than negative ("not a directory"):
[ -f $file ] && wc $file | awk '{print $4, $2, $1}'
Sometimes wc outputs in different formats on different platforms. For example:
In OS X:
$ echo aa | wc -l
       1
In CentOS:
$ echo aa | wc -l
1
So using only cut may not retrieve the number. Instead, try tr to delete the space characters:
$ echo aa | wc -l | tr -d ' '
The accepted/popular answers do not work on OS X.
Any of the following should be portable on BSD and Linux.
wc -l < "$f" | tr -d ' '
OR
wc -l "$f" | tr -s ' ' | cut -d ' ' -f 2
OR
wc -l "$f" | awk '{print $1}'
If you redirect the file into wc, it omits the filename from its output.
Bash:
read lines words characters <<< $(wc < filename)
or
read lines words characters <<EOF
$(wc < filename)
EOF
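For example (a quick sketch with a throwaway file):
printf 'a b\nc\n' > filename
read lines words characters <<< $(wc < filename)
echo "$lines $words $characters"   # -> 2 3 6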
Instead of using for to iterate over the output of ls, do this:
for f in *
which will work even when filenames include spaces.
If you can't use globbing, you should pipe into a while read loop:
find ... | while read -r f
or use process substitution
while read -r f
do
    something
done < <(find ...)
If the file is small, you can afford to call wc twice, and use something like the following, which avoids piping into an extra process:
lines=$((`wc -l < "$f"`))
words=$((`wc -w < "$f"`))
The $((...)) is the Arithmetic Expansion of bash. It removes any whitespace from the output of wc in this case.
This solution makes more sense if you need either the linecount or the wordcount.
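A quick illustration of that whitespace stripping:
n=$((`printf '   7  '`))   # arithmetic expansion ignores surrounding whitespace
echo "$n"                  # -> 7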
How about with sed?
wc -l /path/to/file.ext | sed 's/ *\([0-9]*\) .*/\1/'
typeset -i a=$(wc -l fileName.dat | xargs echo | cut -d' ' -f1)
Try this for a numeric result:
nlines=$( wc -l < $myfile )
Something like this may help:
#!/bin/bash
printf '%-10s %-10s %-10s\n' 'File' 'Lines' 'Words'
for fname in file_name_pattern*; {
    [[ -d $fname ]] && continue
    lines=0
    words=()
    while read -r line; do
        ((lines++))
        words+=($line)
    done < "$fname"
    printf '%-10s %-10s %-10s\n' "$fname" "$lines" "${#words[@]}"
}
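The unquoted $line in words+=($line) is deliberate: word splitting breaks each line into individual words, so ${#words[@]} ends up being the total word count.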
To (1) run wc once, and (2) not assign any superfluous variables, use
read lines words <<< $(wc < $f | awk '{ print $1, $2 }')
Full code:
for f in *
do
    if [ ! -d $f ]
    then
        read lines words <<< $(wc < $f | awk '{ print $1, $2 }')
        echo "$f $lines $words"
    fi
done
Example output:
$ find . -maxdepth 1 -type f -exec wc {} \; # without formatting
1 2 27 ./CNAME
21 169 1065 ./LICENSE
33 130 961 ./README.md
86 215 2997 ./404.html
71 168 2579 ./index.html
21 21 478 ./sitemap.xml
$ # the above code
404.html 86 215
CNAME 1 2
index.html 71 168
LICENSE 21 169
README.md 33 130
sitemap.xml 21 21
The solutions proposed above don't work on Darwin kernels.
Please consider the following solutions, which work on all UNIX systems:
print exactly the number of lines of a file:
wc -l < file.txt | xargs
print exactly the number of characters of a file:
wc -m < file.txt | xargs
print exactly the number of bytes of a file:
wc -c < file.txt | xargs
print exactly the number of words of a file:
wc -w < file.txt | xargs
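(With no command named, xargs defaults to running echo, which collapses wc's leading whitespace.)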
There is a great solution with examples on stackoverflow here
I will copy the simplest solution here:
FOO="bar"
echo -n "$FOO" | wc -c | bc # "3"
Maybe these pages should be merged?
Try this:
wc `ls` | awk '$4 != "total" { LINE += $1; WC += $2 } END { print "lines: " LINE " words: " WC }'
It creates a line count and a word count (LINE and WC), increases them with the values extracted from wc (using $1 for the first column's value and $2 for the second), skips the "total" summary line that wc prints when given several files, and finally prints the results.
"Basically I want to write the line numbers and word counts to the screen after the file name."
answer=(`wc $f`)
echo -e "${answer[3]}
lines: ${answer[0]}
words: ${answer[1]}
bytes: ${answer[2]}"
Outputs:
myfile.txt
lines: 10
words: 20
bytes: 120
files=`ls`
echo "$files" | wc -l | perl -pe "s#^\s+##"
You have to use input redirection for wc:
number_of_lines=$(wc -l <myfile.txt)
or, in your context:
echo "$f $(wc -l <"$f") $(wc -w <"$f")"
