I need to know what exactly this will do: (grep & xargs) - bash

Here is the line:
find /localdir/ | grep '[0-9']$ | xargs -i% cp % /tftpboot
I specifically want to know what grep is looking for exactly here.
Can anyone translate it for me please ?
I am also kind of interested in what the xargs cmd is going to look like...
Thanks in advance.

[0-9] means any character from 0 through 9. $ means the end of a line. So your grep will find any line (i.e., filename) that ends with a digit, and xargs will copy them each to /tftpboot.
Of course, you'll have some surprises if any of those filenames contain spaces. You can do this entirely within the shell in zsh (and I think in recent versions of bash):
cp /localdir/**/*[0-9] /tftpboot
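If you want to stay with find but be safe with names containing spaces, a null-delimited pipeline is one option. This is only a sketch, assuming GNU find and xargs and the same /localdir/ and /tftpboot paths as above:
find /localdir/ -name '*[0-9]' -print0 | xargs -0 -I{} cp {} /tftpboot
Note that -name matches only the final component of each path, so this is close to, but not exactly, what the grep '[0-9]$' filter did on full paths. A plain find /localdir/ -name '*[0-9]' -exec cp {} /tftpboot \; works too, without xargs.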
addendum: If you're asking about the funny quoting, that will work, though it's not very human-friendly.
The key is that you can have quoted strings and non-quoted strings right next to each other in shell, and they'll become a single string; echo "fo"ob'ar' will produce foobar.
The first part is quoted because [ is special to bash. ] is also special, but since bash never saw a special (unquoted) [, it leaves the ] alone. $ would normally substitute a variable, but nothing comes after it, so bash leaves that alone too.
The string that actually gets passed to grep is still [0-9]$.

Related

Bash/Shell: Why am I getting the wrong output for if-else statements? [duplicate]

I'm writing a shell script that should be somewhat secure, i.e., does not pass secure data through parameters of commands and preferably does not use temporary files. How can I pass a variable to the standard input of a command?
Or, if it's not possible, how can I correctly use temporary files for such a task?
Passing a value to standard input in Bash is as simple as:
your-command <<< "$your_variable"
Always make sure you put quotes around variable expressions!
Be cautious: this will probably work only in bash and will not work in plain sh.
Simple, but error-prone: using echo
Something as simple as this will do the trick:
echo "$blah" | my_cmd
Do note that this may not work correctly if $blah contains -n, -e, -E, etc., or if it contains backslashes (bash's copy of echo preserves literal backslashes in the absence of -e by default, but will treat them as escape sequences and replace them with the corresponding characters even without -e if the optional XSI extensions are enabled).
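A quick illustration of the option pitfall (assuming bash's builtin echo with default settings):
blah=-n
echo "$blah"    # prints nothing useful: -n is consumed as an option rather than echoed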
More sophisticated approach: using printf
printf '%s\n' "$blah" | my_cmd
This does not have the disadvantages listed above: all possible C strings (strings not containing NULs) are printed unchanged.
You can also use a here document:
(cat <<END
$passwd
END
) | command
The cat is not really needed, but it helps to structure the code better and allows you to use more commands in parentheses as input to your command.
Note that the echo "$var" | command operation means that standard input is limited to the line(s) echoed. If you also want the terminal to be connected, then you'll need to be fancier:
{ echo "$var"; cat - ; } | command
( echo "$var"; cat - ) | command
This means that the first line(s) will be the contents of $var but the rest will come from cat reading its standard input. If the command does not do anything too fancy (try to turn on command line editing, or run like vim does) then it will be fine. Otherwise, you need to get really fancy - I think expect or one of its derivatives is likely to be appropriate.
The two command line notations are practically identical, except that the second semicolon (the one before the closing brace) is required with braces, whereas it is not with parentheses.
This robust and portable way has already appeared in comments. It should be a standalone answer.
printf '%s' "$var" | my_cmd
or
printf '%s\n' "$var" | my_cmd
Notes:
It's better than echo; the reasons are explained here: Why is printf better than echo?
printf "$var" is wrong. The first argument is the format string, in which sequences like %s or \n are interpreted. To pass the variable through correctly, it must not be treated as the format.
Usually variables don't contain trailing newlines. The former command (with %s) passes the variable as it is. However, tools that work with text may ignore or complain about an incomplete line (see Why should text files end with a newline?), so you may want the latter command (with %s\n), which appends a newline character to the content of the variable.
Non-obvious facts:
Here string in Bash (<<<"$var" my_cmd) does append a newline.
Any method that appends a newline results in non-empty stdin of my_cmd, even if the variable is empty or undefined.
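A small demonstration of the trailing-newline behavior; the byte counts are what GNU wc prints:
var=hello
printf '%s' "$var" | wc -c      # 5  (no trailing newline)
printf '%s\n' "$var" | wc -c    # 6  (newline appended)
wc -c <<< "$var"                # 6  (the here string appends a newline)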
I liked Martin's answer, but it has some problems depending on what is in the variable. This
your-command <<< """$your_variable"""
is better if your variable contains " or !.
As per Martin's answer, there is a Bash feature called Here Strings (which itself is a variant of the more widely supported Here Documents feature):
3.6.7 Here Strings
A variant of here documents, the format is:
<<< word
The word is expanded and supplied to the command on its standard
input.
Note that Here Strings would appear to be Bash-only, so, for improved portability, you'd probably be better off with the original Here Documents feature, as per PoltoS's answer:
( cat <<EOF
$variable
EOF
) | cmd
Or, a simpler variant of the above:
(cmd <<EOF
$variable
EOF
)
You can omit ( and ), unless you want to have this redirected further into other commands.
Try this:
echo "$variable" | command
If you came here from a duplicate, you are probably a beginner who tried to do something like
"$variable" >file
or
"$variable" | wc -l
where you obviously meant something like
echo "$variable" >file
echo "$variable" | wc -l
(Real beginners also forget the quotes; usually use quotes unless you have a specific reason to omit them, at least until you understand quoting.)

Why don't I need to pass "\"{}\"" to xargs when handling filenames with spaces?

I have the following problem:
I want to use a xargs-find construction for finding files.
The difference is, that I do not want to use find as command1 but as command2 in:
command1 | xargs command2
The problem occurs if the file names contain spaces.
For example:
If I am trying:
echo 01.Here Comes The Night Time II.flac | xargs -pi find ~/Multimedia/Musik/flac/ -name "\""{}"\""
find ~/Multimedia/Musik/flac/ -name "01.Here Comes The Night Time II.flac" ?...yes
Nothing will be found.
It also doesn't work with the -0 option of xargs.
If I copy and paste the interactive printed request from xargs, the file will be found:
find ~/Multimedia/Musik/flac/ -name "01.Here Comes The Night Time II.flac"
~/Multimedia/Musik/flac/Arcade Fire/Reflektor (CD 2)/01.Here Comes The Night Time II.flac
Is there something wrong in the way I "feed" the pipe, or in the way I include the " in the find command (which I figured out by trial and error), or something else?
You seem to be confused about the way programs are started internally and how commands are interpreted by the shell.
In unix, starting a program involves three parameters:
A file name. This is a string containing the path to the program to be run.
A list of strings. By convention we call these "command line arguments".
Another list of strings. By convention we call these "the environment", but at the OS level it's just another string list. However, all programs/libraries conspire to keep this hidden from both the user and the application programmer.
When you type a command into the shell, many things happen, but in the simplest case it's just a bunch of space-separated words:
$ foo bar baz
($ represents the shell prompt, not something you type.)
The shell splits this line into three words (foo, bar, baz) and interprets the first one as the name of a program (to be looked up in the directories listed in the PATH variable). Let's assume PATH lists /usr/bin and there is indeed a /usr/bin/foo program.
Now the shell starts the program as follows (pseudo-code):
exec("/usr/bin/foo", ["foo", "bar", "baz"], [...])
I.e. we run the executable in /usr/bin/foo, passing a list of three strings as arguments.
([...] represents the environment, which we're going to ignore from now on.)
What happens if you do this instead?
$ foo "bar baz"
The quotes affect the way the shell splits the line into words. In particular, " " (a space) in quotes does not act as a separator but is taken literally. This gives us a two-element list (foo, bar baz). Note that the quotes are not part of the words themselves.
Internally this translates to the following invocation:
exec("/usr/bin/foo", ["foo", "bar baz"], [...])
Again, the second argument simply contains a space. There are no embedded quotes.
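A quick way to see exactly which words a command receives is a small throwaway helper that prints each of its arguments on its own line (printargs is a made-up name, not something from the question):
printargs() { printf '<%s>\n' "$@"; }
printargs foo "bar baz"
# <foo>
# <bar baz>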
So what happens with a command like
$ xargs -pi find ~/Multimedia/Musik/flac/ -name "\""{}"\""
?
This will again be parsed into a list of words by the shell. ~ is replaced by the name of your home directory. "\"" is just a convoluted way of writing \" or '"' (i.e. a literal " character). The list we end up with is xargs, -pi, find, /home/madZeo/Multimedia/Musik/flac/, -name, "{}". This translates to the following invocation:
exec("/usr/bin/xargs", ["xargs", "-pi", "find", "/home/madZeo/Multimedia/Musik/flac/", "-name", "\"{}\""], [...])
Note how the last argument is the 4-character string "{}".
xargs treats its first argument (-pi) as an option specification. In particular, -i tells it to replace {} in the argument list by the current value read from standard input.
xargs then reads a line from its standard input, which (because of your echo pipe) gives 01.Here Comes The Night Time II.flac.
This gets substituted in place of {}, yielding the list find, /home/madZeo/Multimedia/Musik/flac/, -name, "01.Here Comes The Night Time II.flac". xargs then invokes find like this:
exec("/usr/bin/find", ["find", "/home/madZeo/Multimedia/Musik/flac/", "-name", "\"01.Here Comes The Night Time II.flac\""], [...])
This tells find to look for a file whose name literally starts with " (a quote character). No such file exists, so this fails.
The fix is to write the command like this:
$ xargs -pi find ~/Multimedia/Musik/flac/ -name {}
This ultimately ends up running
exec("/usr/bin/find", ["find", "/home/madZeo/Multimedia/Musik/flac/", "-name", "01.Here Comes The Night Time II.flac"], [...])
, which is what you want.
The issue is that xargs runs its subcommand (find in this case) directly. It does not construct a new command line that gets re-parsed by the shell. It does not split its incoming argument on spaces, it does not interpret quotes, it doesn't care about "special" characters like $ or * or \. It simply takes the list of words it was given, replaces any occurrence of the substring {} by the current input, then executes it.
If you take the command line that xargs -p printed and paste it into your shell, it will undergo word splitting, quote removal, etc., which is why the pasted version behaves differently (and happens to work).
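As a side note, GNU xargs documents -i as deprecated in favor of -I. A sketch of an equivalent invocation with the modern spelling, using NUL delimiters so that arbitrary file names survive the pipe (this assumes GNU xargs and find):
printf '%s\0' '01.Here Comes The Night Time II.flac' |
    xargs -0 -I{} find ~/Multimedia/Musik/flac/ -name {}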

Extracting snmpdump values (with an exact MIB) from a shell script

I have an SNMP dump:
1.3.6.1.2.1.1.2.0|5|1.3.6.1.4.1.9.1.1178
1.3.6.1.2.1.1.3.0|7|1881685367
1.3.6.1.2.1.1.4.0|6|""
1.3.6.1.2.1.1.5.0|6|"hgfdhg-4365.gfhfg.dfg.com"
1.3.6.1.2.1.1.6.0|6|""
1.3.6.1.2.1.1.7.0|2|6
1.3.6.1.2.1.1.8.0|7|0
1.3.6.1.2.1.1.9.1.2.1|5|1.3.6.1.4.1.9.7.129
1.3.6.1.2.1.1.9.1.2.2|5|1.3.6.1.4.1.9.7.115
I need to grep all the data on the first line after 1.3.6.1.2.1.1.2.0|5|, without including that prefix in the match itself. So I should get 1.3.6.1.4.1.9.1.1178 from grep. I've tried the regex:
\b1.3.6.1.2.1.1.2.0\|5\|\s*([^\n\r]*)
But without any success. If a regular expression, or grep, is in fact the right tool, can you help me find the right regex? Otherwise, what tools should I consider instead?
With GNU grep built with PCRE support, you can use Perl's \K escape to discard part of the matched string:
grep -Po "1\.3\.6\.1\.2\.1\.1\.2\.0\|5\|\K.*"
-P enables Perl's regex mode and -o switches output to matched parts rather than whole lines.
I had to escape the characters that have special meaning in Perl regexes, but this can be avoided, as 123 suggests, by enclosing the characters to interpret literally between \Q and \E:
grep -Po "\Q1.3.6.1.2.1.1.2.0|5|\E\K.*"
I would usually solve this with sed as follows:
sed -n 's/1\.3\.6\.1\.2\.1\.1\.2\.0|5|\(.*\)/\1/p'
The -n flag disables implicit output and the search and replace command will remove the searched prefix from the line, leaving the relevant part to be printed.
The characters that have special meaning in GNU Basic Regular Expressions (BRE) must be escaped, which in this case is only the dot (.). Also note that the grouping tokens are \( and \) rather than the usual ( and ).
An alternate way to do this is in native shell, without any regexes at all. Consider:
prefix='1.3.6.1.2.1.1.2.0|5|'
while read -r line; do
    [[ $line = "$prefix"* ]] && printf '%s\n' "${line#$prefix}"
done
If your original string is piped into the while read loop, the output is precisely 1.3.6.1.4.1.9.1.1178.
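Putting it together in a script: assuming the dump is stored in a file called dump.txt (a made-up name) and GNU grep is available, you could capture the value like this:
value=$(grep -Po '\Q1.3.6.1.2.1.1.2.0|5|\E\K.*' dump.txt)
printf '%s\n' "$value"    # 1.3.6.1.4.1.9.1.1178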

How to use bash variables in sed command?

I want to do (in bash script):
NEWBASE=`echo $NAME | sed "s/${DIR}//g" | sed 's/.\///g'`
I read on the net that I have to replace the single quotes with double quotes.
This is unfortunately not working. Why? Thanks
sed is overkill for this. Use parameter expansion:
NEWBASE=${NAME//$DIR/}
NEWBASE=${NEWBASE//.\//}
It is important to understand that bash and sed are two completely independent things. When you give bash a command, it first processes it according to its rules, in order to come up with a utility name and a set of arguments for that utility (in this case sed), and then calls the utility with the arguments.
Probably $DIR contains a slash character. Perhaps it looks something like /usr/home/codyline/src.
So when bash substitutes that into the argument to the sed command:
"s/${DIR}//g"
the result is
s//usr/home/codyline/src//g
which is what is then passed to sed. But sed can't understand that command: it has (many) too many / characters.
If you really want to use sed for this purpose, you need to use a delimiter other than /, and it needs to be a character you are confident will never appear in $DIR. Fortunately, the sed s command allows you to use any character as a delimiter: whatever character follows the s is used as the delimiter. But there must always be exactly three of them in the command.
For example, you might believe that no directory path contains a colon (:), in which case you could use:
sed "s:${DIR}::g"
Of course, someday that will fail precisely because you have a directory with a colon in its name. So you could make things more general by using bash's substitute-and-replace feature to backslash-escape all the colons:
sed "s:${DIR//:/\:}::g"
But you could have used this bash feature in order to avoid the use of sed altogether:
NEWBASE=${NAME//$DIR}
Unfortunately, you can't nest bash substitute-and-replaces, so you need to do them sequentially:
NEWBASE=${NEWBASE//.\/}
Note: I used ${var//...}, which is the equivalent of specifying the g flag in a sed s command, but I really don't know if it is appropriate. Do you really expect multiple instances of $DIR in a single path? If there are multiple instances, do you really want to remove all of them? You'll have to decide.
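For illustration, here is what the two expansions do with some made-up values (the paths are hypothetical, not taken from the question):
DIR=/usr/home/codyline/src
NAME=$DIR/project/file.txt
NEWBASE=${NAME//$DIR}      # remove every occurrence of $DIR
NEWBASE=${NEWBASE//.\/}    # remove every occurrence of ./
echo "$NEWBASE"            # /project/file.txt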

shell script: check directory name and convert to lowercase

I would like my bash script to check the name of the directory where it is run. Something like:
#!/bin/bash
path=eval 'pwd'
dirname=eval 'basename $path'
But it doesn't work: I get
./foo.sh: line 5: basename $path: command not found
How can I fix it? Also, once I get dirname to contain the correct dirname, I'd like to convert it to lowercase, to test it. I'm able to do this on the command line with awk:
echo $dirname | awk '{print tolower($0)}'
but how do I capture the return value into a variable?
Why not use:
#!/bin/bash
path=`pwd`
dirname=`basename $path | awk '{print tolower($0)}'`
Or if you want to do it as a one liner:
dirname=`pwd | xargs basename | awk '{print tolower($0)}'`
You can rewrite it to
dirname=eval "basename $path"
With single quotes, you don't get shell expansion, but you want $path to be expanded.
BTW: I'd suggest using
path=$(basename $path)
It's way more generic and better readable if you do something like
path=$(basename $(pwd))
or to get the lowercase result
path=$(basename $(pwd) | awk '{print tolower($0)}')
or
path=$(basename $(pwd) | tr 'A-Z' 'a-z' )
The form
x=y cmd
means to temporarily set environment variable x to value y and then run cmd, which is how these lines are interpreted:
path=eval 'pwd'
dirname=eval 'basename $path'
That is, they aren't doing what you seem to expect at all, instead setting an environment variable to the literal value eval and then running (or failing to find) a command. As others have said, the way to interpolate the results of a command into a string is to put it inside $(...) (preferred) or `...` (legacy). And, as a general rule, it's safer to wrap those in double quotes (as it is safer to wrap any interpolated reference in quotes).
path="$(pwd)"
dirname="$(basename "$path")"
(Technically, in this case the outer quotes aren't strictly necessary. However, I'd say it's still a good habit to have.)
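A quick throwaway demonstration of the x=y cmd form (the variable name greeting is made up):
greeting=hello sh -c 'echo "$greeting"'    # prints hello: greeting is in sh's environment only
echo "$greeting"                           # prints an empty line: the current shell never set it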
B=$(echo "Some text that has CAPITAL letters " | awk '{print tolower($0)}')
eval executes the command passed to it, but it only returns the command's exit status, so you cannot really use it in an assignment. The way to embed a command's output in an assignment is to use either backquotes or $().
So the script will look like this:
#!/bin/bash
curr_path=$(pwd)
echo $curr_path
curr_dir=$(basename $curr_path)
echo $curr_dir
echo $curr_dir | awk '{print tolower($0)}'
Your code doesn't work because you use single quotes rather than double quotes. Single quotes prevent variable expansion, thus $path is not expanded into the path you want to use and is taken as it is, as it if were a string.
Your awk invocation would not work for the same reason as well.
Although you could solve the problem replacing single quotes with double quotes, like this:
#!/bin/bash
path=eval "pwd"
dirname=eval "basename $path"
I would suggest using grave accents (backquotes) instead. There's no reason to use eval in this case. Plus, you can also use them to collect the return value you are interested in:
#!/bin/bash
path=`pwd`
dirname=`basename $path`
variable=`echo $dirname | awk '{print tolower($0)}'`
Here's an excerpt from my answer to What platform independent way to find directory of shell executable in shell script? which, in itself, fully answers your question aside from the lowercase part, which, in my opinion, has been duly addressed many times in other answers here.
What's unique about my answer is that when I was attempting to write it for the other question I encountered your exact problem - how do I store the function's results in a variable? Well, as you can see, with some help, I hit upon a pretty simple and very powerful solution:
I can pass the function a sort of messenger variable, dereference any explicit use of the function's $1 argument with eval as necessary, and, when the function completes, use eval and a backslashed quoting trick to assign my messenger variable the value I desire without ever having to know its name.
In full disclosure, though this was the solution to my problem, it was not by any means my solution. I've visited that page several times before, but some of its descriptions, though probably brilliant, are a little out of my league, and so I thought others might benefit if I included my own version of how this works in the previous paragraph. Though of course it was very simple to understand once I did, for this one especially I had to think long and hard to figure out how it might work. Anyway, you can find that and more at Rich's sh tricks, and I have also excerpted the relevant portion of his page below my own answer's excerpt.
...
EXCERPT:
...
Though not strictly POSIX yet, realpath has been a GNU coreutils program since 2012. Full disclosure: I had never heard of it before I noticed it in the info coreutils TOC and immediately thought of [the linked] question, but using the following function as demonstrated should reliably (soon POSIXLY?), and, I hope, efficiently
provide its caller with an absolutely sourced $0:
% _abs_0() {
> o1="${1%%/*}"; ${o1:="${1}"}; ${o1:=`realpath "${1}"`}; eval "$1=\${o1}";
> }
% _abs_0 ${abs0:="${0}"} ; printf %s\\n "${abs0}"
/no/more/dots/in/your/path2.sh
EDIT: It may be worth highlighting that this solution uses POSIX parameter expansion to first check whether the path actually needs expanding and resolving at all before attempting to do so. This should return an absolutely sourced $0 via a messenger variable (with the notable exception that it will preserve symlinks) as efficiently as I could imagine it could be done, whether or not the path is already absolute.
...
(minor edit: before finding realpath in the docs, I had at least pared down my version of [the version below] so as not to depend on the time field [as it does in the first ps command], but, fair warning, after some testing I'm less convinced ps is fully reliable in its command path expansion capacity)
On the other hand, you could do this:
ps ww -fp $$ | grep -Eo '/[^:]*'"${0#*/}"
eval "abs0=${`ps ww -fp $$ | grep -Eo ' /'`#?}"
...
And from Rich's sh tricks:
...
Returning strings from a shell function
As can be seen from the above pitfall of command substitution, stdout is not a good avenue for shell functions to return strings to their caller, unless the output is in a format where trailing newlines are insignificant. Certainly such practice is not acceptable for functions meant to deal with arbitrary strings. So, what can be done?
Try this:
func () {
    body here
    eval "$1=\${foo}"
}
Of course ${foo} could be replaced by any sort of substitution. The key trick here is the eval line and the use of escaping. The “$1” is expanded when the argument to eval is constructed by the main command parser. But the “${foo}” is not expanded at this stage, because the “$” has been quoted. Instead, it’s expanded when eval evaluates its argument. If it’s not clear why this is important, consider how the following would be bad:
foo='hello ; rm -rf /'
dest=bar
eval "$dest=$foo"
But of course the following version is perfectly safe:
foo='hello ; rm -rf /'
dest=bar
eval "$dest=\$foo"
Note that in the original example, “$1” was used to allow the caller to pass the destination variable name as an argument to the function. If your function needs to use the shift command, for instance to handle the remaining arguments as “$@”, then it may be useful to save the value of “$1” in a temporary variable at the beginning of the function.
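Tying this back to the original question, here is a minimal sketch of the same trick used to return the lowercased directory name through a messenger variable (the function and variable names are made up for illustration):
lower_dirname() {
    # store the lowercased basename of the current directory in the variable named by $1
    _result=$(basename "$PWD" | tr '[:upper:]' '[:lower:]')
    eval "$1=\${_result}"
}

lower_dirname mydir
printf '%s\n' "$mydir"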

Resources