How to prevent a for loop from using space as delimiter in a bash script

I am trying to write a bash script to do multiple checks and searches for a CMS my company uses. I am trying to implement a function that lets a user search for a certain macro call; the function should return all the files that contain the call, the line the macro is called on, and the actual code in the macro call. What I have seems to be getting screwed up by the fact that I am using a for loop to format the output. Here's the snippet of the script I am working on:
elif [ "$choice" = "2" ]
then
echo -e "\n What macro call are we looking for $name?"
read macrocall
for i in $(grep -inR "$macrocall" $sitepath/templates/macros/); do
file=$(echo $i | cut -d\: -f1 | awk -F\/ '{ print $NF }')
line=$(echo $i | cut -d\: -f2)
calltext=$(echo $i | cut -d\: -f3-)
echo -e "\nFile: $file"
echo -e "\nLine: $line"
echo -e "\nMacro Call from file: $calltext"
done
fi
The current script handles the first few fields until it hits a space, and then everything gets all screwy. Does anybody have any idea how I can make the for loop's delimiter be each result of the grep? Any suggestions would be helpful. Let me know if any of you need more info. Thanks!

The right way to do this would be more like:
printf "\n What macro call are we looking for %s?" "$name"
read macrocall
# ensure globbing is off and set IFS to a newline after saving original values
oSET="$-"; set -f; oIFS="$IFS"; IFS=$'\n'
awk -v macrocall="$macrocall" '
    BEGIN { lc_macrocall = "\\<" tolower(macrocall) "\\>" }
    tolower($0) ~ lc_macrocall {
        file = FILENAME
        sub(/.*\//, "", file)
        printf "\nFile: %s\n", file
        printf "\nLine: %d\n", FNR
        printf "\nMacro Call from file: %s\n", $0
    }
' $(find "$sitepath/templates/macros" -type f -print)
# restore original IFS and globbing values
IFS="$oIFS"; set +f -"$oSET"
This solves the problem of having spaces in your file names as originally requested, but also handles globbing characters in your file names, and the various typical echo issues.
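For comparison, a while read loop sidesteps word splitting entirely without touching IFS or globbing settings (a minimal sketch, assuming grep's file:line:text output format and the same macrocall and sitepath variables from the question):
grep -inR "$macrocall" "$sitepath/templates/macros/" |
while IFS= read -r i; do
    file=${i%%:*}                     # everything before the first colon: the path
    file=${file##*/}                  # strip the directories, keep the file name
    rest=${i#*:}
    line=${rest%%:*}                  # the line number grep inserted with -n
    calltext=${rest#*:}               # everything after the second colon, later colons preserved
    printf '\nFile: %s\nLine: %s\nMacro Call from file: %s\n' "$file" "$line" "$calltext"
done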

You can set the internal field separator $IFS (which is normally set to space, tab and newline) to just a newline to get around this problem. Note that the ANSI-C quoting form $'\n' is required; IFS="\n" would set the separator to the two literal characters backslash and n:
IFS=$'\n'

Related

How to parse multiple line output as separate variables

I'm relatively new to bash scripting and I would like someone to explain this properly, thank you. Here is my code:
#! /bin/bash
echo "first arg: $1"
echo "first arg: $2"
var="$( grep -rnw $1 -e $2 | cut -d ":" -f1 )"
var2=$( grep -rnw $1 -e $2 | cut -d ":" -f1 | awk '{print substr($0,length,1)}')
echo "$var"
echo "$var2"
The problem I have is with the output. The script I'm trying to write is a C++ function searcher, so upon launching my script I pass two arguments: one for the directory and the second one for the function name. This is what my output looks like:
first arg: Projekt
first arg: iseven
Projekt/AX/include/ax.h
Projekt/AX/src/ax.cpp
h
p
Now my question is: how can I save the line-by-line output as a variable, so that later on I can use var as a path, or use var2 as a character to compare? My plan was to use IF() statements to determine the type, idea: IF(last_char == p){echo "something"}. What I've tried was this question: Capturing multiple line output into a Bash variable, and then giving it an array, so my code looked like: "${var[0]}". Please explain how I can use my line output later on, as variables.
I'd use readarray to populate an array variable, just in case there are spaces in your command's output that shouldn't be treated as field separators; those would end up messing up foo=( ... ). And you can use shell parameter expansion substring syntax to get the last character of a variable; no need for that awk bit in your var2:
#!/usr/bin/env bash
readarray -t lines < <(printf "%s\n" "Projekt/AX/include/ax.h" "Projekt/AX/src/ax.cpp")
for line in "${lines[#]}"; do
printf "%s\n%s\n" "$line" "${line: -1}" # Note the space before the -1
done
will display
Projekt/AX/include/ax.h
h
Projekt/AX/src/ax.cpp
p
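Applied to the asker's grep pipeline, that might look like this (a sketch; $1 and $2 are the directory and function name, as in the question):
readarray -t paths < <(grep -rnw "$1" -e "$2" | cut -d ":" -f1)
for p in "${paths[@]}"; do
    if [[ ${p: -1} == p ]]; then      # last character, e.g. the p in .cpp
        echo "source file: $p"
    fi
done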

How to read multiple lines in while statement in ksh

I am creating a script to help me through my daily work and automate it. I have run into a problem when trying to read multiple lines of input in my while loop. I usually do this with a for loop that I run directly from the command line.
Sample:
for i in `cat listoffiles.txt`
do
    echo $i
    find <path> -name *$i* | awk -F "." {'print $4'} # to display a specific value
done
Now I am trying to automate it with a while loop, but I am having problems reading multiple input lines in it.
For example:
I want to search for these inputs:
For
Example
only
Here is my script for it:
#!/bin/ksh
echo Please enter file #:
read Var1
while true
do
    VarSession=`find $OT_DIR/archive*/ -name *$Var1* | awk -F "." {'print $4'}`
    if [ "$VarSession" = "" ]
    then
        echo No match for File# $Var1 on this leg or is out of retention.
    else
        echo File# $Var1 is under Session# $VarSession
    fi
done
VarSession=`find $OT_DIR/archive*/ -name *$Var1* | awk -F "." {'print $4'}`
Assuming that you provide 1 2 3 as input, the line above translates to this:
VarSession=`find $OT_DIR/archive*/ -name "1 2 3" | awk -F "." {'print $4'}`
But you want to search all those values separately, so you need another loop. A for loop serves the purpose of traversing white-space separated entries.
Also, based upon the original script that you showed, I assume you want the script to search file by file, rather than scanning entire directories. However, the statement above puts all output into the variable without traversing it. To traverse line by line, a while loop does the job.
#!/bin/ksh
# -n switch suppresses printing a newline
echo -n 'Please enter file #: '
read Var1
# Traverse over all entered values in Var1 (separated by white space)
for i in $Var1
do
    # Set a flag to zero; logic explained later
    Flag=0
    find $OT_DIR/archive*/ -name "*$i*" | while read FileName
    do
        # Set the flag to 1 if the find command finds something
        Flag=1
        VarSession=`echo $FileName | awk -F "." {'print $4'}`
        if [ "$VarSession" = "" ]
        then
            # If find found a file but VarSession is empty, the file name does not follow the convention
            echo "Some conventions went wrong in file name: $FileName"
        else
            echo "File# $i is under Session# $VarSession"
        fi
    done
    # If find found nothing, there was no match
    if [ $Flag -eq 0 ]
    then
        echo "No match for File# $i on this leg or is out of retention."
    fi
done
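One caveat: this works because ksh runs the last part of a pipeline in the current shell, so the Flag=1 set inside the while loop survives after done. In bash, the loop after the pipe would run in a subshell and the flag change would be lost; a bash-friendly sketch would feed find through process substitution instead:
Flag=0
while read FileName
do
    Flag=1
    # ... same body as above ...
done < <(find $OT_DIR/archive*/ -name "*$i*")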

Trying to take a file name and text line from a given file and save the line to the named file, using bash

What I have is a file (let's call it 'xfile'), containing lines such as
file1 <- this line goes to file1
file2 <- this goes to file2
and what I want to do is run a script that does the work of actually taking the lines and writing them into the corresponding files.
The way I would do that manually could be like the following (for the first line)
(echo "this line goes to file1"; echo) >> file1
So, to automate it, this is what I tried to do
IFS=$'\n'
for l in $(grep '[a-z]* <- .*' xfile); do
    $(echo $l | sed -e 's/\([a-z]*\) <- \(.*\)/(echo "\2"; echo)\>\>\1/g')
done
unset IFS
But what I get is
-bash: file1(echo "this content goes to file1"; echo)>>: command not found
-bash: file2(echo "this goes to file2"; echo)>>: command not found
(on OS X)
What's wrong?
This solves your problem on Linux:
awk -F ' <- ' '{print $2 >> $1}' xfile
Take care in choosing the field separator so that the new files do not have leading or trailing spaces.
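For example, with the xfile shown in the question, this produces:
$ awk -F ' <- ' '{print $2 >> $1}' xfile
$ cat file1
this line goes to file1
$ cat file2
this goes to file2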
Give this a try on OS X:
You can use the regex capabilities of bash directly. When you use the =~ operator to compare a variable to a regular expression, bash populates the BASH_REMATCH array with matches from the groups in the regex.
re='(.*) <- (.*)'
while read -r; do
    if [[ $REPLY =~ $re ]]; then
        file=${BASH_REMATCH[1]}
        line=${BASH_REMATCH[2]}
        printf '%s\n' "$line" >> "$file"
    fi
done < xfile

Using cut on stdout with tabs

I have a file which contains lines of tab-separated text:
echo -e "foo\tbar\tfoo2\nx\ty\tz" > file.txt
I'd like to get the first column with cut. It works if I do
$ cut -f 1 file.txt
foo
x
But if I read it in a bash script
while read line
do
    new_name=`echo -e $line | cut -f 1`
    echo -e "$new_name"
done < file.txt
Then I get instead
foo bar foo2
x y z
What am I doing wrong?
Edit: my script looks like this right now:
while IFS=$'\t' read word definition
do
    clean_word=`echo -e $word | external-command`
    echo -e "$clean_word\t<b>$word</b><br>$definition" >> $2
done < $1
The external command removes diacritics from a Greek word. Can the script be optimized any further without changing external-command?
What is happening is that you did not quote $line when echoing it. The original tab-delimited format was therefore lost: instead of tabs, single spaces appear between the words. And since cut's default delimiter is a tab, it does not find any and prints the whole line.
So quoting works:
while read line
do
    new_name=`echo -e "$line" | cut -f 1`
    #-----------------^^^^^^^
    echo -e "$new_name"
done < file.txt
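which now prints the expected first column:
foo
x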
Note, however, that you could have used IFS to set the tab as field separator and read more than one parameter at a time:
while IFS=$'\t' read name rest
do
    echo "$name"
done < file.txt
returning:
foo
x
And, again, note that awk is even faster for this purpose:
$ awk -F"\t" '{print $1}' file.txt
foo
x
So, unless you want to call some external command while looping over the file, awk (or sed) is a better fit.
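For instance, the sed equivalent (with GNU sed, which understands \t in a regex) would be:
$ sed 's/\t.*//' file.txt
foo
x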

Loop through a comma-separated shell variable

Suppose I have a Unix shell variable as below:
variable=abc,def,ghij
I want to extract all the values (abc, def and ghij) using a for loop and pass each value to a procedure.
The script should allow extracting an arbitrary number of comma-separated values from $variable.
Without messing with IFS, and without calling an external command:
variable=abc,def,ghij
for i in ${variable//,/ }
do
    # call your procedure/other scripts here below
    echo "$i"
done
Using bash string manipulation http://www.tldp.org/LDP/abs/html/string-manipulation.html
You can use the following script to dynamically traverse through your variable, no matter how many fields it has, as long as it is only comma-separated.
variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g")
do
    # call your procedure/other scripts here below
    echo "$i"
done
Instead of the echo "$i" call above, between the do and done inside the for loop, you can invoke your procedure proc "$i".
Update: The above snippet only works if the value of variable does not contain spaces. If you have such a requirement, please use one of the solutions that change IFS and then parse your variable.
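A quick demonstration of that caveat: the embedded space in "a bc" is split just like the commas.
$ variable="a bc,def"
$ for i in $(echo $variable | sed "s/,/ /g"); do echo "$i"; done
a
bc
def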
If you set a different field separator, you can directly use a for loop:
IFS=","
for v in $variable
do
# things with "$v" ...
done
You can also store the values in an array and then loop through it as indicated in How do I split a string on a delimiter in Bash?:
IFS=, read -ra values <<< "$variable"
for v in "${values[@]}"
do
    # things with "$v"
done
Test
$ variable="abc,def,ghij"
$ IFS=","
$ for v in $variable
> do
> echo "var is $v"
> done
var is abc
var is def
var is ghij
You can find a broader approach in this solution to How to iterate through a comma-separated list and execute a command for each entry.
Examples on the second approach:
$ IFS=, read -ra vals <<< "abc,def,ghij"
$ printf "%s\n" "${vals[#]}"
abc
def
ghij
$ for v in "${vals[#]}"; do echo "$v --"; done
abc --
def --
ghij --
#!/bin/bash
TESTSTR="abc,def,ghij"
for i in $(echo $TESTSTR | tr ',' '\n')
do
    echo $i
done
I prefer to use tr instead of sed, because sed has problems with special chars like \r and \n in some cases.
Another solution is to set IFS to a certain separator.
Another solution not using IFS and still preserving the spaces:
$ var="a bc,def,ghij"
$ while read line; do echo line="$line"; done < <(echo "$var" | tr ',' '\n')
line=a bc
line=def
line=ghij
Here is an alternative tr based solution that doesn't use echo, expressed as a one-liner.
for v in $(tr ',' '\n' <<< "$var") ; do something_with "$v" ; done
It feels tidier without echo but that is just my personal preference.
The following solution:
doesn't need to mess with IFS
doesn't need helper variables (like i in a for-loop)
should be easily extensible to work for multiple separators (with a bracket expression like [:,] in the patterns)
really splits only on the specified separator(s), unlike some other solutions presented here that also split on e.g. spaces
is POSIX compatible
doesn't suffer from any subtle issues that might arise when bash’s nocasematch is on and a separator that has lower/upper case versions is used in a match like with ${parameter/pattern/string} or case
beware that:
it does however work on the variable itself and pop each element from it - if that is not desired, a helper variable is needed
it assumes var to be set and would fail if it's not and set -u is in effect
while true; do
    x="${var%%,*}"
    echo "$x"
    # x is not really needed here; one can of course directly use "${var%%,*}"
    if [ -z "${var##*,*}" ] && [ -n "${var}" ]; then
        var="${var#*,}"
    else
        break
    fi
done
Beware that separators that would be special characters in patterns (e.g. a literal *) would need to be quoted accordingly.
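A quick run of the loop above, showing that var is consumed down to its last element (per the first caveat):
$ var="abc,def,ghij"
$ while true; do x="${var%%,*}"; echo "$x"; if [ -z "${var##*,*}" ] && [ -n "${var}" ]; then var="${var#*,}"; else break; fi; done
abc
def
ghij
$ echo "$var"
ghij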
Here's my pure bash solution that doesn't change IFS and can take a custom (regex) delimiter.
loop_custom_delimited() {
    local list=$1
    local delimiter=$2
    local item
    if [[ $delimiter != ' ' ]]; then
        # shield real spaces with a backspace character (\010), then turn delimiters into spaces
        list=$(echo $list | sed 's/ /'`echo -e "\010"`'/g' | sed -E "s/$delimiter/ /g")
    fi
    for item in $list; do
        # turn the shielded spaces back into real spaces
        item=$(echo $item | sed 's/'`echo -e "\010"`'/ /g')
        echo "$item"
    done
}
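For example, with the function defined in the current shell, the embedded space in "ghi j" survives because it is shielded before splitting:
$ loop_custom_delimited "abc,def,ghi j" ","
abc
def
ghi j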
Try this one.
#!/bin/bash
testpid="abc,def,ghij"
count=`echo $testpid | grep -o ',' | wc -l` # this is not a good way
count=`expr $count + 1`
while [ $count -gt 0 ] ; do
    echo $testpid | cut -d ',' -f $count    # note: fields print in reverse order (last first)
    count=`expr $count - 1`
done
