This question already has answers here:
How to grep asterisk without escaping?
(2 answers)
When to wrap quotes around a shell variable?
(5 answers)
Closed 3 years ago.
I am trying to come up with a function that searches a given file for a given pattern. If the pattern is not found, it should be appended to the file.
This works fine in some cases, but when my pattern includes special characters such as wildcards (*), the method fails.
Patterns that work:
pattern="some normal string"
Patterns that don't work:
pattern='#include "/path/to/dir/\*.conf"'
This is my function:
check_pattern() {
if ! grep -q "$1" $2
then
echo $1 >> $2
fi
}
I'm calling my function like this:
check_pattern $pattern $target_file
When I escape the wildcard in my pattern variable so that grep matches correctly, echo then writes the \ out as a literal character.
So when I run my script a second time, grep does not find the pattern and it gets appended again.
To put it simply:
Stuff that gets appended:
#include "/path/to/dir/\*.conf"
Stuff that i want to have appended:
#include "/path/to/dir/*.conf"
Is there some way to get my expected result without storing my echo-pattern in a second variable?
Use
grep -F
and call your function with the arguments quoted:
check_pattern "$pattern" "$target_file"
Thanks all, I got it now.
Using grep -F as pointed out by Gem Taylor, in combination with calling my function as check_pattern "$pattern" "$target_file", did the trick.
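For reference, a minimal sketch of the repaired function (assuming the target file already exists; -F makes grep treat the pattern as a fixed string, and printf sidesteps the echo backslash problem described above):
#!/bin/bash
# Append a literal line to a file unless it is already present.
# $1 = fixed-string pattern, $2 = target file
check_pattern() {
    if ! grep -qF -- "$1" "$2"; then
        printf '%s\n' "$1" >> "$2"
    fi
}

pattern='#include "/path/to/dir/*.conf"'
check_pattern "$pattern" "$target_file"
If the pattern must match a whole line rather than a substring, grep's -x flag can be added alongside -F.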
This question already has answers here:
How to check if a files exists in a specific directory in a bash script?
(3 answers)
Closed 4 years ago.
I'm not sure how to word my question exactly...
I have the code
if grep "mynamefolder" /vol/Homefs/
then
echo "yup"
else
echo "nope"
fi
which gives me the output
grep: /vol/Homefs/: Is a directory
nope
The sh file containing the code and the directory I'm targeting are not in the same directory (if that makes sense).
I want to find the name mynamefolder inside /vol/Homefs/ without going through any subdirectories. Using grep -d skip, which I hoped would "skip" subdirectories and look only at the directory itself, just gives me nope even though the entry I'm testing for does exist.
Edit: I forgot to mention that I would also like mynamefolder to be a variable that I can pass in from PuTTY, something like
./file spaing, with spaing taking the place of mynamefolder.
I'm not sure if I explained that well enough, let me know!
You just want
if [ -e /vol/Homefs/"$1" ]; then
echo yup
else
echo nope
fi
The [ command, with the -e operator, tests if the named file entry exists.
vim is not involved, and grep is not needed.
If you insist on using grep, you should know that grep doesn't work on directories. You can, however, expand the directory listing into a string and search that:
echo /vol/Homefs/* | grep mynamefolder
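Putting it together with the argument the question asks for, a minimal wrapper script might look like this (a sketch; the script name and path are taken from the question):
#!/bin/bash
# Usage: ./file spaing
# Reports whether an entry named $1 exists directly under /vol/Homefs/.
if [ -e "/vol/Homefs/$1" ]; then
    echo yup
else
    echo nope
fi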
This question already has answers here:
Remove/replace html tags in bash
(2 answers)
Closed 6 years ago.
I've been trying to convert the following string into a more readable and usable form in a bash script. Certain parts are not static.
(&lt;font color='whisper'&gt;[ &lt;name shortname='UserName' src='whisper-from'&gt;UserName&lt;/name&gt; whispers, "test" to you. ]&lt;/font&gt;
A lot of the stuff in fact is not static. Basically, I want the end result to look like:
([UserName whispers, "test" to you. ]
I have done this time and time again in Java, PHP, and even VB6. However I am new to bash scripts, and can't seem to get it to work.
Could someone help me convert this Java code to bash script?
data = MyString.replaceAll("&lt;", "<");
data = data.replaceAll("&gt;", ">");
data = data.replaceAll("<.*?>", "");
In bash, you can use pattern substitution. Let's start with this string:
$ s='&lt;Name&gt;'
And, let's do substitutions on it:
$ s="${s//&lt;/<}"
$ s="${s//&gt;/>}"
$ echo "$s"
<Name>
Bash works on globs. If you need regular expressions, try sed:
$ s='&lt;Name&gt;'
$ echo "$s" | sed 's/&lt;/</g; s/&gt;/>/g; s/<[^>]*>/<>/g'
<>
In a more complex example:
$ MyStr="(&lt;font color='whisper'&gt;[ &lt;name shortname='UserName' src='whisper-from'&gt;UserName&lt;/name&gt; whispers, \"test\" to you. ]&lt;/font&gt;"
$ echo "$MyStr" | sed 's/&lt;/</g; s/&gt;/>/g; s/<[^>]*>//g'
([ UserName whispers, "test" to you. ]
Use sed by itself or, since you mentioned bash, use sed within a bash script (for example b.sh):
#!/bin/bash
sed 's/&gt;/>/g' | sed 's/&lt;/</g' | sed 's/<.*?>//g'
Input data (for example b.txt file):
asdasdasdds&lt;.*?&gt;dasdasassxrh
sadaswqqw&lt;ssadasdasdsdvvxc
sadssadadsads&gt;dsdsdewpppp
Output results:
asdasdasddsdasdasassxrh
sadaswqqw<ssadasdasdsdvvxc
sadssadadsads>dsdsdewpppp
Usage:
b.sh < b.txt
NOTE: I broke each replaceAll into a separate sed call in case you want to modify or add more formatting changes.
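If you need this in several places, the same pipeline can be wrapped in a small function (a sketch assuming input arrives on stdin and only the &lt; and &gt; entities occur):
#!/bin/bash
# Decode &lt;/&gt; entities, then strip anything that looks like an HTML tag.
strip_tags() {
    sed 's/&lt;/</g; s/&gt;/>/g; s/<[^>]*>//g'
}

strip_tags <<'EOF'
(&lt;font color='whisper'&gt;[ &lt;name shortname='UserName' src='whisper-from'&gt;UserName&lt;/name&gt; whispers, "test" to you. ]&lt;/font&gt;
EOF
# -> ([ UserName whispers, "test" to you. ]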
This question already has answers here:
How to keep backslash when reading from a file?
(4 answers)
Closed 8 years ago.
I have some XSL spitting out Samba paths on stdout. I am iterating over these paths to locate them at their mountpoints on disk, so I have something along the lines of:
while read src dst ; do
...
done < <(xsltproc - file.xml <<XSL
...
XSL
)
Now, I can trivially solve the problem by performing the path escaping either in the XSL stylesheet or by using sed. However, I am curious from a bash perspective, how to solve the problem. Here is a working example of the problem:
a='\\a\b\c\d\e'
echo $a
\\a\b\c\d\e
echo ${a//\\//}
//a/b/c/d/e
b=$a
echo $b
\\a\b\c\d\e
b=$(echo $a)
echo $b
\\a\b\c\d\e
That's all fine, does exactly what I expect it to do. This is where bash gets a bit funny:
read b < <(echo $a)
echo $b
\abcde
echo ${b//\\//}
/abcde
As you can see, read has stripped all of the unescaped backslashes when it read them in, so the directory information gets lost.
Reading the bash manual, it seems this works just fine:
read -r b < <(echo $a)
The -r flag tells read not to treat backslashes as escape characters.
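A quick demonstration of the difference (a sketch using a here-string in place of the process substitution above):
#!/bin/bash
a='\\a\b\c\d\e'

read b <<< "$a"      # without -r, read eats the backslashes
echo "$b"            # -> \abcde

read -r b <<< "$a"   # with -r, the backslashes survive
echo "$b"            # -> \\a\b\c\d\e
In the original loop that becomes while read -r src dst; do ... done.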
This question already has answers here:
Extract substring in Bash
(26 answers)
Closed 9 years ago.
We were trying to extract the username from a Mercurial URL:
default = ssh://someone@acme.com//srv/hg/repo
Supposing there's always a username, I came up with:
tmp=${a#*//}
user=${tmp%%@*}
Is there a way to do this in one line?
Assuming your string is in a variable like this:
url='default = ssh://someone@acme.com//srv/hg/repo'
You can do:
[[ $url =~ //([^@]*)@ ]]
Then your username is here:
echo ${BASH_REMATCH[1]}
This works in Bash versions 3.2 and higher.
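If you want it as a single line, the test and the capture combine naturally (a sketch reusing the url variable from above):
[[ $url =~ //([^@]*)@ ]] && user=${BASH_REMATCH[1]}
echo "$user"   # -> someone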
You pretty much need more than one statement, or to call out to external tools. I think sed is best for this:
sed -r -e 's|.*://(.*)@.*|\1|' <<< "$default"
Not within bash itself. You'd have to delegate to an external tool such as sed.
Not familiar with Mercurial, but using your URL, you can do
echo 'ssh://someone@acme.com/srv/hg/repo' | grep -E --only-matching '\w+@' | cut --delimiter=@ -f 1
Probably not the most efficient way with the two pipes, but it works.
I'm new to bash scripts (and the *nix shell altogether) but I'm trying to write this script to make grepping a codebase easier.
I have written this
#!/bin/bash
args=("$#");
for arg in args
grep arg * */* */*/* */*/*/* */*/*/*/*;
done
when I try to run it, this is what happens:
~/Work/richmond $ ./f.sh "\$_REQUEST\['a'\]"
./f.sh: line 4: syntax error near unexpected token `grep'
./f.sh: line 4: ` grep arg * */* */*/* */*/*/* */*/*/*/*;'
~/Work/richmond $
How do I do this properly?
And, I think a more important question is, how can I make grep recurse through subdirectories properly like this?
Any other tips and/or pitfalls with shell scripting and using bash in general would also be appreciated.
The syntax error is because you're missing do. As for searching recursively if your grep has the -R option you would do:
#!/bin/bash
for arg in "$#"; do
grep -R "$arg" *
done
Otherwise you could use find:
#!/bin/bash
for arg in "$#"; do
find . -exec grep "$arg" {} +
done
In the latter example, find will execute grep and replace the {} braces with the file names it finds, starting in the current directory ..
(Notice that I also changed arg to "$arg". You need the dollar sign to get the variable's value, and the quotes tell the shell to treat its value as one big word, even if $arg contains spaces or newlines.)
On recursive grepping:
Depending on your grep version, you can pass -R to your grep command to have it search recursively through subdirectories.
The best solution is stated above, but you can also try putting your statement in backticks:
`grep ...`
You should use 'find' plus 'xargs' to do the file searching.
for arg in "$#"
do
find . -type f -print0 | xargs -0 grep "$arg" /dev/null
done
The '-print0' and '-0' options assume you're using GNU find and GNU xargs; they ensure that the script works even if there are spaces or other unexpected characters in your path names. Using xargs like this is more efficient than having find execute grep for each file; the /dev/null appears in the argument list so grep always reports the name of the file containing the match.
You might decide to simplify life - perhaps - by combining all the searches into one using either egrep or grep -E. An optimization would be to capture the output from find once and then feed that to xargs on each iteration.
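As a sketch of that combining idea (assuming the arguments are valid extended regexes and at least one is given), the patterns can be joined into a single alternation so the tree is walked only once:
#!/bin/bash
# Join all arguments with '|' into one extended regex: pat1|pat2|...
pattern=$(IFS='|'; printf '%s' "$*")
# /dev/null keeps grep printing file names even when only one file is searched.
find . -type f -print0 | xargs -0 grep -E "$pattern" /dev/null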
Have a look at the findrepo script, which may give you some pointers.
If you just want a better grep and don't want to do anything yourself, use ack, which you can get at http://betterthangrep.com/.