bash loop skip commented lines

I'm looping over lines in a file. I just need to skip lines that start with "#".
How do I do that?
#!/bin/sh
while read line; do
    if [ "$line doesn't start with #" ]; then   # pseudocode
        echo "$line"
    fi
done < /tmp/myfile
Thanks for any help!

while read line; do
    case "$line" in \#*) continue ;; esac
    ...
done < /tmp/my/input
Frankly, however, it is often clearer to turn to grep:
grep -v '^#' < /tmp/myfile | { while read line; do ...; done; }
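Putting the pieces together, a minimal runnable sketch of the case-based skip (the here-document stands in for /tmp/myfile):

```shell
#!/bin/sh
# Skip lines that start with '#', print everything else.
while IFS= read -r line; do
    case "$line" in
        \#*) continue ;;   # comment line: skip it
    esac
    printf '%s\n' "$line"
done <<'EOF'
keep one
# skip me
keep two
EOF
```

IFS= and -r keep leading whitespace and backslashes in each line intact, which plain `read line` does not.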

This is an old question but I stumbled upon this problem recently, so I wanted to share my solution as well.
If you are not against using some python trickery, here it is:
Let this be our file called "my_file.txt":
this line will print
this will also print # but this will not
# this wont print either
# this may or may not be printed, depending on the script used, see below
Let this be our bash script called "my_script.sh":
#!/bin/sh
line_sanitizer="""import sys
with open(sys.argv[1], 'r') as f:
    for l in f.read().splitlines():
        line = l.split('#')[0].strip()
        if line:
            print(line)
"""
python -c "$line_sanitizer" ./my_file.txt
Calling the script will produce something similar to:
$ ./my_script.sh
this line will print
this will also print
Note: the blank line was not printed
If you want to keep the spacing (whitespace-only lines and trailing blanks before a #), drop the .strip() call:
#!/bin/sh
line_sanitizer="""import sys
with open(sys.argv[1], 'r') as f:
    for l in f.read().splitlines():
        line = l.split('#')[0]
        if line:
            print(line)
"""
python -c "$line_sanitizer" ./my_file.txt
Calling this script will produce something similar to:
$ ./my_script.sh
this line will print
this will also print
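For comparison, the same comment stripping can be sketched in plain POSIX sh using parameter expansion, with no Python at all (the here-document stands in for my_file.txt):

```shell
#!/bin/sh
# Remove everything from the first '#' onward, trim trailing
# whitespace, and skip lines that end up empty.
while IFS= read -r line; do
    line=${line%%#*}                       # drop first '#' and the rest
    line=${line%"${line##*[![:space:]]}"}  # trim trailing whitespace
    if [ -n "$line" ]; then
        printf '%s\n' "$line"
    fi
done <<'EOF'
this line will print
this will also print # but this will not
# this wont print either
EOF
```

The trim line is the classic two-step idiom: the inner expansion strips everything up to the last non-space character, leaving only the trailing whitespace, which the outer expansion then removes from the end.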

Related

Reading a variable from a file becomes literal text

I must read include-lines from a script. The include-lines are built from variables. When I ls the file based on these variables, it works fine. When I do the same after collecting the line into my array, the variables behave as if they were somehow invisibly changed from $VAR to \$VAR
#!/bin/bash
cd ..
DIR=scratchpad
SCRIPT=$(basename $0)
echo $DIR/$SCRIPT
ls -1 $DIR/$SCRIPT
f_doNothing () {
    . $DIR/$SCRIPT
}
while read LINE ; do
    if [[ "$LINE" = ". "* ]] ; then
        LINE=${LINE/. /}
        LINE=${LINE/\#*/}
        LINE=${LINE//\"/}
        LIBS+=($LINE)
        echo "${LIBS[@]}"
        ls -l "${LIBS[@]}"
    fi
done < <(cat $DIR/$SCRIPT)
The two last lines of output show the damaged variable and the non-working ls.
$ ./read_include.sh
scratchpad/read_include.sh
scratchpad/read_include.sh
$DIR/$SCRIPT
ls: cannot access $DIR/$SCRIPT: No such file or directory
Can you please help me to understand what I am doing wrong. Thanks!

Copying leading lines that start with a special character in bash

I have an exercise where I have a file and at the begin of it I have something like
#!usr/bin/bash
# tototata
#tititutu
#ttta
Hello world
Hi
Test test
#zabdazj
#this is it
And I have to take the first block of lines starting with #, up to the first line that doesn't, and store it in a variable. The shebang has to be skipped, and any blank lines in between have to be skipped too. We just want the comments between the shebang and the first real content.
I'm new to bash; is there a way to do this?
Expected output:
# tototata
#tititutu
#ttta
Try this simple approach to better understand:
#!/bin/bash
sed 1d your_input_file | while read line
do
    check=$(echo $line | grep ^"[#;]")
    if [ -n "$check" ] || [ -z "$line" ]
    then
        echo $line
    else
        exit 1
    fi
done
This may be more correct, although your question was unclear about whether the input file had a script shebang, whether the shebang had to be skipped to match your sample output, or whether the input file's shebang was just bogus.
It is also unclear what to do if the first lines of the input file do not start with #.
You should really post your assignment's text as a reference.
Anyway, here is a script that collects the first set of consecutive lines starting with a sharp # into the arr array variable.
It may not be an exact solution to your assignment (which you should be able to solve with what your previous lessons taught you), but it will give you some clues and keys for iterating over lines from a file and testing whether a line starts with a #.
#!/usr/bin/env bash

# Our variable to store parsed lines
# is an array of strings with an entry per line
declare -a arr=()

# Iterate reading lines from the file
# while each matches the regex ^[#],
# i.e. while lines start with a sharp #
while IFS=$'\n' read -r line && [[ "$line" =~ ^[#] ]]; do
    # Add the line to the arr array variable
    arr+=("$line")
done <a.txt

# Print each array entry followed by a newline
printf '%s\n' "${arr[@]}"
How about this (not tested, so you may have to debug it a bit, but my comments in the code should explain what is going on):
while read line
do
    # initial is 1 on the first line, and 0 after that. When the script
    # starts, the variable is undefined.
    : ${initial:=1}
    # Test for lines starting with #. Need to quote the hash
    # so that it is not taken as a comment.
    if [[ $line == '#'* ]]
    then
        # Test for an initial #!
        if (( initial == 1 )) && [[ $line == '#!'* ]]
        then
            : # ignore it
        else
            echo $line # or do whatever you want to do with it
        fi
    fi
    # Stop on a non-blank, non-comment line
    if [[ $line == *[^[:space:]]* && $line != '#'* ]]
    then
        break
    fi
    initial=0 # Next line won't be an initial line
done < your_file
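The same "collect the leading comment block" idea also fits in a single awk call; a sketch, where a.txt stands in for the sample input file from the question:

```shell
# Print the leading comment block: skip a shebang on line 1 and any
# blank lines, print comment lines, and stop at the first content line.
awk 'NR == 1 && /^#!/ { next }
     /^[[:space:]]*$/ { next }
     /^#/             { print; next }
                      { exit }' a.txt
```

To capture the result in a variable instead of printing it, wrap the call in command substitution: `comments=$(awk '...' a.txt)`.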

How can you have grep start searching from a specified line number

If I need to start grepping a file from line num 1293 all the way to the end of the file how can I do that?
More detailed info in case it helps:
I am trying to whip up a quick function in my bashrc that lets me quickly search vim snippet files for a particular snippet, echoing the snippet name and associated command(s) to the screen. I have no problem getting the line number for the snippet name, and even printing the command on the following line. But if the snippet is a multi-line command, then I need to grep for the next line beginning with "^snippet " and return all the lines in between, and I cannot find any details on how to get grep to start its search from a particular line number.
A secondary question: how can I exit a .bashrc function early? When I use the 'exit' command currently commented out in the function below, the terminal itself exits/closes rather than just the function.
function vsls() {
    if [[ "$2" =~ ^(html|sh|vim)$ ]] ; then
        sPath="$2".snippets
    elif [[ "$2" =~ ^(html|sh|vim).snippets$ ]] ; then
        sPath="$2"
    else
        echo "\nExiting. You did not enter a recognized vim snippets file name."
        # exit 69
    fi
    lnN=$(more $HOME/.vim/snippets/"$sPath" | grep -nm 1 $1 | sed -r 's/^([0-9]*):.*$/\1/') ; echo "\$lnN: ${lnN}"
    cntr="$lnN"
    sed -n "$cntr"p "$HOME/.vim/snippets/$sPath"
    ((cntr++))
    sed -n "$cntr"p "$HOME/.vim/snippets/$sPath"
}
@chepner
I don't know why (likely a lack of know-how) but without specifying 'more' I get a permissions error:
03:43 ~ $ fLNum=$($HOME/.vim/snippets/"$sPath"|grep -nm 1 tdotti|sed -r 's/^([0-9]*):.*$/\1/') ; echo "\$fLNum: ${fLNum}"
bash: /home/user/.vim/snippets/html.snippets: Permission denied
$fLNum:
03:43 ~ $ fLNum=$(more $HOME/.vim/snippets/"$sPath"|grep -nm 1 tdotti|sed -r 's/^([0-9]*):.*$/\1/') ; echo "\$fLNum: ${fLNum}"
$fLNum: 1293
Now working as desired:
I stuck with sed since I feel most comfortable using sed. I have used the -n print option before, but not too often, so it totally escaped my mind to try something like that.
function vsls() {
    if [[ "$2" =~ ^(html|sh|vim)$ ]] ; then
        sPath="$2".snippets
    elif [[ "$2" =~ ^(html|sh|vim).snippets$ ]] ; then
        sPath="$2"
    else
        echo "\nExiting. You did not enter a recognized vim snippets file name."
        # exit 69
    fi
    # Get the line number of the snippet name searched for, entered as input $1
    fLNum=$(more $HOME/.vim/snippets/"$sPath" | grep -nm 1 "snippet $1" | sed -r 's/^([0-9]*):.*$/\1/') ; echo "\$fLNum: ${fLNum}"
    # Next line number, from which to start the next grep search for the line
    # number of the next snippet entry, to determine where the commands of the
    # desired snippet end
    ((tLNum1 = fLNum+=1)) ; echo "\$tLNum1: ${tLNum1}"
    # Line number of the next 'snippet' entry. tLNum2 is not an actual line
    # number in the file, but the number of lines since the start of the second
    # search: if the second search begins on line 1294, then actual line 1294
    # is line 1 of that search. So tLNum2 must be added to fLNum to get the
    # actual line number of the next snippet entry in the file.
    tLNum2=$(sed -n "${tLNum1},$ p" $HOME/.vim/snippets/"$sPath" | grep -nm 1 "snippet" | sed -r 's/^([0-9]*):.*$/\1/') ; echo "\$tLNum2: ${tLNum2}"
    let sLNum=tLNum2+fLNum sLNum-=1 ; let sLNum-=1 ; echo "\$sLNum: ${sLNum}"
    echo ""
    sed -n "${fLNum},${sLNum} p" "$HOME/.vim/snippets/$sPath"
    echo ""
}
But it is curious why I needed to do:
let sLNum=tLNum2+fLNum sLNum-=1 ; let sLNum-=1
to get the correct line number from the second grep search. I only got lucky fooling around, because I would have thought:
let sLNum=tLNum2+fLNum sLNum-=1
or:
let sLNum=tLNum2+fLNum ; let sLNum-=1
should have done the trick; that is, secondLineNum = tmpLNum2 + firstLineNum, and then secondLineNum - 1. But the result never ended up 1 less; it was always equal to tLNum2+fLNum. It would be good to learn why that did not work as expected.
But it's working, so thanks.
Or with sed like this:
sed -n "1293,$ p" yourfile | grep xyz
Or, if the line number is in a variable called line:
sed -n "${line},$ p" yourfile | grep xyz
Or, if you want your grep to find nothing in the first 1292 lines but still report the correct line numbers when you use grep -n, you can replace each of lines 1 to 1292 with the (empty) hold buffer for grep to look at:
sed "1,1292g" yourfile | grep -n xyz
awk is better suited for this
awk '/search_pattern/ && NR > 1292' filename
tail -n +1293 file | grep ....
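For the original multi-line snippet problem, the line-number arithmetic can be avoided entirely by letting awk toggle printing between snippet headers; a sketch, where the snippet name greet and the file snippets.txt are made-up examples:

```shell
# Print the block for one snippet: turn printing on at the matching
# "snippet <name>" header and off again at the next header.
awk '/^snippet / { p = ($2 == "greet") } p' snippets.txt
```

This prints the matching header line followed by every line up to (but not including) the next `snippet` header, which is exactly the snippet-name-plus-commands output the function builds with grep and sed.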

Parsing a config file in bash

Here's my config file (dansguardian-config):
banned-phrase duck
banned-site allaboutbirds.org
I want to write a bash script that will read this config file and create some other files for me. Here's what I have so far, it's mostly pseudo-code:
while read line
do
# if line starts with "banned-phrase"
# add rest of line to file bannedphraselist
# fi
# if line starts with "banned-site"
# add rest of line to file bannedsitelist
# fi
done < dansguardian-config
I'm not sure if I need to use grep, sed, awk, or what.
Hope that makes sense. I just really hate DansGuardian lists.
With awk:
$ cat config
banned-phrase duck frog bird
banned-phrase horse
banned-site allaboutbirds.org duckduckgoose.net
banned-site froggingbirds.gov
$ awk '$1=="banned-phrase"{for(i=2;i<=NF;i++)print $i >"bannedphraselist"}
$1=="banned-site"{for(i=2;i<=NF;i++)print $i >"bannedsitelist"}' config
$ cat bannedphraselist
duck
frog
bird
horse
$ cat bannedsitelist
allaboutbirds.org
duckduckgoose.net
froggingbirds.gov
Explanation:
In awk, by default, each line is split into fields on whitespace, and each field is accessed as $i, where i is the field number: the first field on each line is $1, the second is $2, and so on up to $NF, where NF is the variable that contains the number of fields on the given line.
So the script is simple:
Check the first field against our required strings $1=="banned-phrase"
If the first field matched, then loop over the remaining fields for(i=2;i<=NF;i++), print each field print $i, and redirect the output to the file >"bannedphraselist".
You could do
sed -n 's/^banned-phrase *//p' dansguardian-config > bannedphraselist
sed -n 's/^banned-site *//p' dansguardian-config > bannedsitelist
Although that means reading the file twice. I doubt that the possible performance loss matters though.
You can read multiple variables at once; by default they're split on whitespace.
while read command target; do
case "$command" in
banned-phrase) echo "$target" >>bannedphraselist;;
banned-site) echo "$target" >>bannedsitelist;;
"") ;; # blank line
*) echo >&2 "$0: unrecognized config directive '$command'";;
esac
done < dansguardian-config
Just as an example. A smarter implementation would read the list files first, make sure things weren't already banned, etc.
What is the problem with all the solutions which use echo text >> file? It can be checked with strace that at each such step the file is opened, positioned to the end, the text is written, and the file is closed. So if echo text >> file runs 1000 times, there will be 1000 calls each to open, lseek, write, and close. The number of open, lseek, and close calls can be reduced considerably in the following way:
while read key val; do
    case $key in
        banned-phrase) echo $val >&2;;
        banned-site) echo $val;;
    esac
done >bannedsitelist 2>bannedphraselist <dansguardian-config
Stdout and stderr are redirected to files and kept open while the loop runs, so each file is opened once and closed once, and no lseek is needed. File caching is also used more effectively this way, since there are no repeated close calls to flush the buffers each time.
while read name value
do
    if [ "$name" = banned-phrase ]
    then
        echo "$value" >> bannedphraselist
    elif [ "$name" = banned-site ]
    then
        echo "$value" >> bannedsitelist
    fi
done < dansguardian-config
Better to use awk:
awk '$1 ~ /^banned-phrase/{print $2 >> "bannedphraselist"}
$1 ~ /^banned-site/{print $2 >> "bannedsitelist"}' dansguardian-config

Manually iterating over the lines of a file | bash

I could do this in any other language, but with Bash I've looked far and wide and could not find the answer.
I need to manually increase $line in a script. Example:
for line in `cat file`
do
    foo()
    foo_loop(condition)
    {
        do_something_to_line($line)
    }
done
If you notice, every time foo_loop iterates, $line stays the same. I need $line to advance there, while making sure the original for loop only runs once per line in the file.
I have thought about finding the number of lines in file using a different loop and iterating the line variable inside the inner loop of foo().
Any ideas?
EDIT:
Sorry for being so vague.
Here we go:
I'm trying to make a section of my code execute multiple times (parallel execution)
Function foo() # Does something
for line in `cat $temp_file`;
foo($line)
That code works just fine, because foo is just taking in the value of line; but if I wanted to do this:
Function foo() # Does something
for line in `cat $temp_file`;
while (some condition)
foo($line)
end
$line will keep the same value throughout the while loop. I need it to change with the while loop, and then continue from where it left off when control goes back to the for. Example:
line = Hi
foo{ echo "$line" };
for line in `cat file`;
while ( number_of_processes_running -lt max_number_allowed)
foo($line)
end
If the contents of file were
Hi \n Bye \n Yellow \n Green \n
The output of the example program would be (if max number allowed was 3)
Hi Hi Hi Bye Bye Bye Yellow Yellow Yellow Green Green Green.
Where I want it to be
Hi Bye Yellow Green
I hope this is better. I'm doing my best to explain my problem.
Instead of using a for loop to read through the file, you should read through it like so:
#!/bin/bash
while read line
do
    do_something_to_line "$line"
done < "your.file"
Long story short, while read line; do _____ ; done
Then, make sure you have double-quotes around "$line" so that the parameter isn't split on spaces.
Example:
$ cat /proc/cpuinfo | md5sum
c2eb5696e59948852f66a82993016e5a *-
$ cat /proc/cpuinfo | while read line; do echo "$line"; done | md5sum
c2eb5696e59948852f66a82993016e5a *-
Second example
# add .gz to every file in the current directory:
# If any files had spaces, the mv command for that line would return an error.
$ find . -maxdepth 1 -type f | while read line; do mv "$line" "$line.gz"; done
You should post follow-ups as edits to your question or in comments rather than as an answer.
This structure:
while read line
do
for (( i=1; i<$max_number_allowed; i++ ))
do
foo $line
done
done < file
Yields:
Hi
Hi
Hi
Bye
Bye
Bye
...etc.
While this one:
for (( i=1; i<$max_number_allowed; i++ ))
do
while read line
do
foo $line
done < file
done
Yields:
Hi
Bye
Yellow
Green
Hi
Bye
Yellow
Green
...etc.
