I am getting an issue running the command below.
When I run cmssh 12np, I mean to invoke the function cmssh defined below, and I expect it to eventually build a shell command line like this:
sshpass -p password ssh mydomain\\userxyz#host.com
See the source below:
alias 12np='ssh mydomain\\stephencheng#userxyz#host.com'
function cmssh () {
  set -x
  local aliascmd=$1
  cmdstr=$(alias $aliascmd | cut -d'=' -f2 | cut -d"'" -f2)
  echo $cmdstr
  sshpass -p "$cmpw" $cmdstr
  # debug cmdstr : sshpass -p password 'ssh mydomain\\userxyz#host.com'
}
However, the final step always renders the result as:
sshpass -p password 'ssh mydomain\\userxyz#host.com'
I have attempted many ways to remove the single quotes, but have no idea how to do it.
See the debug info:
~ cmssh 12np
+cmssh:2> local 'aliascmd=12np'
+cmssh:4> cmdstr=+cmssh:4> alias 12np
+cmssh:4> cmdstr=+cmssh:4> cmdstr=+cmssh:4> cut '-d=' -f2
+cmssh:4> cut '-d'\' -f2
+cmssh:4> cmdstr='ssh mydomain\\userxyz#host.com'
+cmssh:7> echo 'ssh mydomain\\userxyz#host.com'
ssh mydomain\\userxyz#host.com
+cmssh:8> sshpass -p password 'ssh mydomain\\userxyz#host.com'
sshpass: Failed to run command: No such file or directory
Thanks
Note: At this point it is not clear what specific shell the OP uses. This answer makes a general point and then discusses bash and zsh.
Your immediate problem is unrelated to quoting: The error message suggests that the executable sshpass is not in your $PATH.
Your string ($cmdstr) doesn't actually contain single quotes - they are an artifact of running bash or zsh with set -x in effect.
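A quick way to see this for yourself (a minimal bash sketch; the variable is made up):
$ set -x
$ var='hello\world'
$ echo "$var"
+ echo 'hello\world'
hello\world
The quotes appear only in the + trace line; the value itself contains none.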
However, there are additional issues:
The way you parse the alias definition in cmssh() turns the \\ in your alias definition from a (conceptually) quoted single \ into the literal string \\ (two literal \ characters).
A simple fix in this case is to add another layer of evaluation by having xargs (without arguments) echo the string:
cmdstr=$(alias $aliascmd | cut -d"'" -f2 | xargs)
Note that I've eliminated the =-based cut command, which is not needed.
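To see what the extra xargs layer does (a small sketch with a throwaway alias name):
$ alias demo='ssh mydomain\\userxyz#host.com'
$ alias demo | cut -d"'" -f2
ssh mydomain\\userxyz#host.com
$ alias demo | cut -d"'" -f2 | xargs
ssh mydomain\userxyz#host.com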
In general, though, parsing alias definitions this way is fragile.
At that point I would expect your code to work in bash, though not yet in zsh (see below).
Note that simply removing function from the beginning of your function definition would make your code work in sh (a POSIX-features-only shell), too.
The final problem affects only zsh, not bash:
zsh by default doesn't perform word splitting on $cmdstr in the sshpass -p "$cmpw" $cmdstr command, so that 'ssh mydomain\userxyz#host.com' is passed as a SINGLE argument to sshpass.
To turn on word splitting when evaluating $cmdstr and thus pass its tokens as separate arguments, refer to it as ${=cmdstr}:
sshpass -p "$cmpw" ${=cmdstr}
Note that this feature is zsh-specific.
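For illustration, in a zsh session:
% cmdstr='echo hello world'
% $cmdstr
zsh: command not found: echo hello world
% ${=cmdstr}
hello world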
Try this
mydomain="'ssh mydomain\\userxyz#host.com'"
echo "${mydomain//\'/}"  # strip the single quotes, then assign the result to a variable
Another way to do this, using sed:
echo $mydomain | sed -s "s/^\(\(\"\(.*\)\"\)\|\('\(.*\)'\)\)\$/\\3\\5/g"
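For example, the same sed call strips either style of surrounding quotes (GNU sed, because of the \| alternation):
$ echo "'hello'" | sed "s/^\(\(\"\(.*\)\"\)\|\('\(.*\)'\)\)\$/\3\5/g"
hello
$ echo '"hello"' | sed "s/^\(\(\"\(.*\)\"\)\|\('\(.*\)'\)\)\$/\3\5/g"
hello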
Related
Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's no guarantee that a string representing a pipeline of programs being used to generate stdin exists at all, even if your stdin really is a pipe connected to another program's stdout; it could be that the pipeline was set up by programs calling those syscalls directly.
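You can see this from the receiving end on Linux: the reader's stdin is just an anonymous pipe inode, with no command string attached (illustrative sketch; the inode number will vary):
$ ls -l | readlink /proc/self/fd/0
pipe:[123456]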
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"   # generate a shell command representing our arguments
while IFS= read -r line; do
  printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                 # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running, or accepting new commands, (or failing that, running myscript.sh), since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script, (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
  "$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output (minus the "&"-associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
I think it is possible; have a look at this example:
#!/bin/bash
result=""
while IFS= read -r line; do
  result="$result$line"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)
I'm trying to export some environment variables for use by a Tomcat process.
There's a few ways to do this (I know how to solve the overall problem), but it bugged me that I didn't know how to do this particular shell task.
Tomcat recommends that all your environment customizations should be exported by "$CATALINA_HOME/bin/setenv.sh".
This whole thing is gonna be stuffed into a Docker container, so the only parameterizability will be via Docker env variables (let's assume for this task that I don't want to use volume mounts or create setenv.sh during the build process).
First, observe that docker run -e can be used to pass environment into the container:
🍔 docker run -eMY_VAR=SUP alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=a528b6fc264b
MY_VAR=SUP
no_proxy=*.local, 169.254/16
HOME=/root
If we wanted to copy all of that env into setenv.sh, it's as simple as:
SETENV="/usr/local/tomcat/bin/setenv.sh"
echo '#!/bin/sh' > "$SETENV"
echo 'export -p' >> "$SETENV"
env >> "$SETENV"
But copying everything somewhat defeats the point of setenv.sh -- which is to give your Tomcat process a clean environment, with only intentional customizations.
So, we can agree on a convention for which env vars we want to pass through to setenv.sh: everything prefixed with MY_.
And now we get to an interesting shell problem.
env | grep '^MY_' | sed 's/^MY_/EXPORT /'
This gets us pretty close. Output looks like:
🍔 docker run -e MY_VAR=hey alpine sh -c "env | grep '^MY_' | sed 's/^MY_/EXPORT /'"
EXPORT VAR=hey
So, we've selected from the env command: only env vars prefixed with MY_. And we can redirect that output to setenv.sh.
Why do I say "pretty close"? Looks like we're done, right?
Try this for size:
🍔 docker run -e MY_VAR='multi
quote> line
quote> string' alpine sh -c "env | grep '^MY_' | sed 's/^MY_/EXPORT /'"
EXPORT VAR=multi
The script only worked for a simple subset of possibilities; i.e., we only managed to export the first line of our multi-line string.
For your convenience: env output for multi-line strings looks like this:
🍔 docker run -e MY_VAR='multi
line
string' alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=0d0afaac6bec
MY_VAR=multi
line
string
no_proxy=*.local, 169.254/16
HOME=/root
I hesitate to try and tackle this using awk; there may be further string escaping complications that I have not considered.
I wonder whether there's a better way altogether to select & serialize a subset of exported environment?
EDIT: I negligently tagged this as a bash question, when really my intention was to pose an sh question. Specifically my intention is to get something that will work with no dependencies other than those that come with the alpine docker image. i.e. BusyBox sh, sed, grep, awk, env.
I've retained the bash tag so as not to punish the initial answer that was submitted when this was a bash-only question.
But I will give preference to an sh-compatible answer, and in particular to one that works with just the BusyBox UNIX utils.
So you need several things:
Enumerate the environment variables and select a subset.
For each selected environment variable, emit sh code that sets the variable to the desired value.
You can use export -p if you want to export all variables in a form that can be read back in, but parsing it to select only certain variables is harder. One way to make use of export -p is to unset the other variables. This only works if none of the environment variables is read-only, but you can work around that by running a separate shell instance (as opposed to a subshell).
To gather the list of variables to unset, you only need to get a superset of the list of all environment variables, and remove the ones you want to keep. You can easily do that by filtering the env output. I do that with a simple grep, you may want to use more complex code if your criteria for inclusion are more complex than “begins with a specific prefix”.
The occasional false positive due to a variable containing a newline followed by a valid variable name and an equal sign will only lead to calling unset on a non-existent variable, which does nothing. The desired variables are removed from the exclusion list, so the final output will never omit a desired variable.
excluded=$(env | LC_ALL=C sed -n 's/^\([A-Z_a-z][0-9A-Z_a-z]*\)=.*/\1/p' |
           grep -v '^MY_')
sh -c 'unset $1; export -p' sh "$excluded" >setenv.sh
Dash prints an extra export PATH (with no value) if PATH was in the environment when it was invoked. If that bothers you, change sh -c … to (unset PATH; sh -c …).
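In other words (a sketch of that variant):
(unset PATH; sh -c 'unset $1; export -p' sh "$excluded") >setenv.sh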
Assuming GNU grep:
grep --null-data '^MY_' </proc/self/environ
...will emit your environment variables in NUL-delimited form (newlines intact).
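Anything consuming that output must be NUL-aware as well; for example, with GNU xargs (a small sketch that prints each variable followed by a separator):
grep --null-data '^MY_' </proc/self/environ | xargs -0 -n1 printf '%s\n---\n'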
Similarly, if you have bash:
while IFS= read -r -d '' vardef; do
  [[ $vardef = MY_* ]] && printf '%s\0' "$vardef"
done </proc/self/environ
Note that if these variables were set in the same shell session, you may need to create a subprocess for /proc/self/environ to be updated:
(while IFS= read -r -d '' vardef; do
  [[ $vardef = MY_* ]] && printf '%s\0' "$vardef"
done </proc/self/environ)
The alpine image doesn't ship with bash.
You can use this script to extract all MY_* variables, including those whose values contain newlines:
docker run -e MY_FOO=bar -e MY_VAR="multi' export MY_INJECTED='val" -e MY_VAR2=$'multi
0MY_line=val
string' alpine sh -c "awk -v RS='\06' -F= '/^MY_/{k=\$1; sub(/^[^=]+=/, \"\");
gsub(/\047/, \"\047\\\\\\047\047\"); printf \"export %s=\047%s\047\n\", k, \$0
}' /proc/self/environ"
This will output:
export MY_FOO='bar'
export MY_VAR='multi'\'' export MY_INJECTED='\''val'
export MY_VAR2='multi
0MY_line=val
string'
Here is how awk works:
-v RS='\06': sets the record separator to the \06 byte, which here stands in for the NUL byte separator as well (assuming you don't have \06 in a value)
-F=: sets field separator as =
/^MY_/: Only process records starting with MY_
stores the variable name ($1) in the variable k
uses the sub function to keep only the part after = in $0
uses printf to format the output so that it can be used in the $CATALINA_HOME/bin/setenv.sh file
\047 prints a single quote
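The \047 escaping relies on the standard shell idiom for embedding a single quote inside a single-quoted string; in isolation:
$ echo 'a'\''b'
a'b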
what about
declare -p ${!MY_*}
and
declare -p ${!MY_*} | sed -r 's/^declare (-[^ ]*)* MY_/export /'
or
declare -p ${!MY_*} | sed 's/^declare \(-[^ ]*\)* MY_/export /'
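Sample behaviour in a bash session (variable made up for illustration; note that, like the OP's sed attempt, this drops the MY_ prefix):
$ export MY_VAR=hey
$ declare -p ${!MY_*} | sed 's/^declare \(-[^ ]*\)* MY_/export /'
export VAR="hey"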
EDIT: POSIX-compliant version:
Some env or printenv implementations accept a -0 option to end each output entry with \0 rather than a newline. Thus:
env -0 | perl -ne 'BEGIN{$/="\0";$\="\n";$q="\047"}next unless /^MY_/;chomp;s/$q/$q\\$q$q/g;s/=/=$q/;s/$/$q/;print'
How it works
$/ : input record separator
$\ : output record separator
$q : variable to store single quote (\047) because of surrounding single quotes in command
next : to filter "MY_" variables
chomp : removes the input separator
s/// : quote substitution
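Sample behaviour (hypothetical variable; note that both the embedded quote and the newline survive):
$ MY_VAR=$'it\'s\nmulti line' env -0 | perl -ne 'BEGIN{$/="\0";$\="\n";$q="\047"}next unless /^MY_/;chomp;s/$q/$q\\$q$q/g;s/=/=$q/;s/$/$q/;print'
MY_VAR='it'\''s
multi line'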
EDIT: a variation of the perl version in POSIX shell (using case rather than the bash-only [[ ... ]]):
env -0 | xargs -0 sh -c 'for entry; do case $entry in MY_*) ;; *) continue ;; esac; printf "%s=\047%s\047\n" "${entry%%=*}" "$(echo "${entry#*=}" | sed '\''s/\x27/\x27\\\x27\x27/g'\'' )"; done' -
I am using the below command on the local machine and it gives me the expected result:
sed -n 's/^fname\(.*\)".*/\1/p' file.txt
When I use the same command (only changing ' to ") on the same file present on the remote system, I do not get any output.
ssh remote-system "sed -n "s/^fname\(.*\)".*/\1/p" file.txt"
Please help me to get this corrected. Thanks for your help.
" and ' are different things in bash, and they are not interchangeable (they're not interchangeable in many languages, however the differences are more subtle) The single quote means 'pretend everything inside here is a string'. The only thing that will be interpreted is the next single quote.
The double quote allows bash to interpret stuff inside
For example,
echo "$TERM"
and
echo '$TERM'
return different things.
(Untested) you should be able to use single quotes and escape the internal single quotes :
ssh remote-system 'sed -n \'s/^fname\(.*\)".*/\1/p\' file.txt'
Looks like you can send a single quote with the sequence '"'"' (from this question)
so :
ssh remote-machine 'sed -n '"'"'s/^fname\(.*\)".*/\1/p'"'"' file.txt'
This runs on my machine if I ssh into localhost, there's no output because file.txt is empty, but it's a proof-of-concept.
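The same sequence in isolation:
$ echo 'it'"'"'s working'
it's working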
Or - can you do the ssh session interactively/with a heredoc?
ssh remote-system
[sed command]
exit
or (again untested, look up heredocs for more info)
ssh remote-system <<-EOF
[sed command]
EOF
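Filled in with the sed command from the question, that might look like this (untested sketch; quoting the delimiter stops the local shell from expanding anything in the heredoc body):
ssh remote-system <<'EOF'
sed -n 's/^fname\(.*\)".*/\1/p' file.txt
EOF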
I have a script named password-for-object which I normally run like that:
$ password-for-object example.com
sOtzC0UY1K3EDYp8a6ltfA
I.e. it does an intricate hash calculation and outputs a password that I should use when accessing an object (for example, a website) named example.com. I just double-click the whole password, it gets copied into my buffer, and I paste it into the form.
I've also learnt a trick on how to use such a script without making my password visible:
$ password-for-object example.com | xclip
This way the output of the script ends up in X's primary buffer and I can insert it right into the password field in the form, without it being shown on the screen.
The only problem with this approach is that password-for-object outputs a string with a trailing newline, so xclip always picks up one extra character: the newline. If I omit the newline in password-for-object, then I end up with a messed-up display without xclip, i.e. when I just print the password to stdout. I use two shells, zsh and bash, and I get the following in zsh (note the extra % sign):
$ password-for-object example.com
sOtzC0UY1K3EDYp8a6ltfA%
$
Or the following in bash (note that the prompt starts on the same line):
$ password-for-object example.com
sOtzC0UY1K3EDYp8a6ltfA$
Any ideas on how to work around this issue? Is it possible to modify the script so that it detects whether xclip is in the pipeline, and only outputs the newline if it isn't?
If you change password-for-object so that it doesn't output a newline, you can call it with a script like:
#!/bin/bash
password-for-object "$1"
if [ -t 1 ]
then
  echo
fi
The -t condition is described in the bash manual as:
-t fd
True if file descriptor fd is open and refers to a terminal.
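You can check the behaviour of -t 1 directly:
$ [ -t 1 ] && echo terminal || echo pipe
terminal
$ ([ -t 1 ] && echo terminal || echo pipe) | cat
pipe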
See the following question:
How to detect if my shell script is running through a pipe?
Give this a try:
$ password-for-object example.com | tr -d '\n' | xclip
tr -d '\n' deletes the newline
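A quick way to confirm the newline is gone (using the sample password from the question):
$ printf 'sOtzC0UY1K3EDYp8a6ltfA\n' | tr -d '\n' | od -c
0000000   s   O   t   z   C   0   U   Y   1   K   3   E   D   Y   p   8
0000020   a   6   l   t   f   A
0000026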
I'm facing a small problem here: I want to pass a string containing whitespace to another program, such that the whole string is treated as a single command-line argument.
In short, I want to execute a command of the following structure through a bash shell script:
command_name -a arg1 -b arg2 -c "arg with whitespaces here"
But no matter how I try, the whitespace is not preserved in the string; it is tokenized by default. A solution, please.
Edit: This is the main part of my script:
#!/bin/bash
#-------- BLACKRAY CONFIG ---------------#
# Make sure the current user is in the sudoers list
# Running all instances with sudo
BLACKRAY_BIN_PATH='/opt/blackray/bin'
BLACKRAY_LOADER_DEF_PATH='/home/crozzfire'
BLACKRAY_LOADER_DEF_NAME='load.xml'
BLACKRAY_CSV_PATH='/home/crozzfire'
BLACKRAY_END_POINT='default -p 8890'
OUT_FILE='/tmp/out.log'
echo "The current binary path is $BLACKRAY_BIN_PATH"
# Starting the blackray 0.9.0 server
sudo "$BLACKRAY_BIN_PATH/blackray_start"
# Starting the blackray loader utility
BLACKRAY_INDEX_CMD="$BLACKRAY_BIN_PATH/blackray_loader -c $BLACKRAY_LOADER_DEF_PATH/$BLACKRAY_LOADER_DEF_NAME -d $BLACKRAY_CSV_PATH -e "\"$BLACKRAY_END_POINT\"""
sudo time $BLACKRAY_INDEX_CMD -a $OUT_FILE
#--------- END BLACKRAY CONFIG ---------#
You're running into this problem because you store the command in a variable, then expand it later; unless there's a good reason to do this, don't:
sudo time $BLACKRAY_BIN_PATH/blackray_loader -c $BLACKRAY_LOADER_DEF_PATH/$BLACKRAY_LOADER_DEF_NAME -d $BLACKRAY_CSV_PATH -e "$BLACKRAY_END_POINT" -a $OUT_FILE
If you really do need to store the command and use it later, there are several options; the bash-hackers.org wiki has a good page on the subject. It looks to me like the most useful one here is to put the command in an array rather than a simple variable:
BLACKRAY_INDEX_CMD=($BLACKRAY_BIN_PATH/blackray_loader -c $BLACKRAY_LOADER_DEF_PATH/$BLACKRAY_LOADER_DEF_NAME -d $BLACKRAY_CSV_PATH -e "$BLACKRAY_END_POINT")
sudo time "${BLACKRAY_INDEX_CMD[#]}" -a $OUT_FILE
This avoids the whole confusion between spaces-separating-words and spaces-within-words because words aren't separated by spaces -- they're in separate elements of the array. Expanding the array in double-quotes with the [#] suffix preserves that structure.
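A compact illustration of the difference (made-up command):
$ cmd_str='printf [%s] "a b"'
$ $cmd_str; echo
["a][b"]
$ cmd_arr=(printf '[%s]' "a b")
$ "${cmd_arr[@]}"; echo
[a b]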
(BTW, another option would be to use escaped quotes rather like you're doing, then run the command with eval. Don't do this; it's a good way to introduce weird parsing bugs.)
Edit:
Try:
BLACKRAY_END_POINT="'default -p 8890'"
or
BLACKRAY_END_POINT='"default -p 8890"'
or
BLACKRAY_END_POINT="default\ -p\ 8890"
or
BLACKRAY_END_POINT='default\ -p\ 8890'
and
BLACKRAY_INDEX_CMD="$BLACKRAY_BIN_PATH/blackray_loader -c $BLACKRAY_LOADER_DEF_PATH/$BLACKRAY_LOADER_DEF_NAME -d $BLACKRAY_CSV_PATH -e $BLACKRAY_END_POINT"
Original answer:
Is blackray_loader a shell script?
Here is a demonstration that you have to deal with this issue both when specifying the parameter and when handling it:
A text file called "test.txt" (include the line numbers):
1 two words
2 two        words
3 two
4 words
A script called "spacetest":
#!/bin/bash
echo "No quotes in script"
echo $1
grep $1 test.txt
echo
echo "With quotes in script"
echo "$1"
grep "$1" test.txt
echo
Running it with ./spacetest "two--------words" (replace the hyphens with spaces):
No quotes in script
two words
grep: words: No such file or directory
test.txt:1 two words
test.txt:2 two        words
test.txt:3 two
With quotes in script
two        words
2 two        words
You can see that in the "No quotes" section it tried to do grep two words test.txt which interpreted "words" as a filename in addition to "test.txt". Also, the echo dropped the extra spaces.
When the parameter is quoted, as in the second section, grep saw it as one argument (including the extra spaces) and handled it correctly. And echo preserved the extra spaces.
I used the extra spaces, by the way, merely to aid in the demonstration.
I have a suggestion:
# iterate through the passed arguments, save them to a new, properly quoted ARGS string
while [ -n "$1" ]; do
  ARGS="$ARGS '$1'"
  shift
done
# invoke the command with properly quoted arguments
my_command $ARGS
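Note that for the stored quotes to take effect, the final call needs another round of shell parsing, e.g. via eval (with all the usual eval caveats, and it still breaks on arguments that themselves contain single quotes):
eval "my_command $ARGS"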
Probably you need to surround the argument with double quotes (e.g. "${6}").
Following the OP's comment, it should be "$BLACKRAY_END_POINT".
Below is my example of restarting a script via exec su USER or exec su - USER. It accommodates:
being called from a relative path or current working directory
spaces in script name and arguments
single and double-quotes in arguments, without crazy escapes like: \\"
#
# This script should always be run-as a specific user
#
user=jimbob
if [ $(whoami) != "$user" ]; then
exec su -c "'$(readlink -f "$0")' $(printf " %q" "$#")" - $user
exit $?
fi
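The printf %q part is what re-quotes each argument so it survives the shell that su spawns; in isolation it produces output like:
$ printf ' %q' "two words" 'a"b'
 two\ words a\"b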
A post on another blog saved me from this whitespace problem: http://logbuffer.wordpress.com/2010/09/23/bash-scripting-preserve-whitespaces-in-variables/
By default, contiguous whitespace is collapsed:
bash> VAR1="abc   def   gh   ijk"
bash> echo $VAR1
abc def gh ijk
bash>
"The cause of this behaviour is the internal shell variable $IFS (Internal Field Separator), that defaults to whitespace, tab and newline.
To preserve all contiguous whitespaces you have to set the IFS to something different"
With IFS bypass:
bash> IFS='%'
bash> echo $VAR1
abc   def   gh   ijk
bash> unset IFS
bash>
It works wonderfully for my command case:
su - user1 -c 'test -r "'${filepath}'"; ....'
Hope this helps.