How to escape commands in Makefile for killing process remotely? - bash

server_stop:
ssh $(SERVER_USERNAME)@$(SERVER_HOSTNAME) \
"kill $$(ps aux | grep '[p]ython abc-server' | awk '{print $$2}')"
This gives
bash: line 0: kill: (60403) - No such process
bash: line 1: 60364: command not found
I believe the brackets around p are not escaped correctly. How do I do this?

If you know the command line, you don't need ps + grep; use pgrep instead.
server_stop:
ssh $(SERVER_USERNAME)@$(SERVER_HOSTNAME) \
'kill $$(pgrep -f "[p]ython abc-server")'
The -f allows you to pass the full command line to be found.
To keep your local shell from evaluating $$(pgrep -f "[p]ython abc-server") itself, surround the command with single quotes, so the evaluation happens on the target server.
Note: If possible, keep a start/stop script inside your server, so your ssh command will only call the script, avoiding the current issue.
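Following that note, a minimal server-side stop script might look like this. This is a sketch: the pidfile path, the `stop_server` name, and the process pattern are assumptions for illustration, not taken from the question.

```shell
#!/usr/bin/env bash
# Hypothetical stop script kept on the server; the pidfile path and
# the process pattern below are assumptions for illustration.
stop_server() {
    local pidfile="${1:-/tmp/abc-server.pid}"
    if [ -r "$pidfile" ]; then
        # Preferred: kill the PID the server recorded at startup.
        kill "$(cat "$pidfile")" 2>/dev/null
        rm -f "$pidfile"
    else
        # Fallback: match the command line, as in the answer above.
        pkill -f 'python abc-server' || true
    fi
}
```

With something like this in place, the Makefile target collapses to a plain `ssh $(SERVER_USERNAME)@$(SERVER_HOSTNAME) ./stop.sh` (assuming the script is named `stop.sh`), with no quoting gymnastics at all.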

Related

Bash get the command that is piping into a script

Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs being used to generate stdin at all, even if your stdin really is a FIFO connected to another program's stdout; it could be that that pipeline was generated by programs calling execve() and friends directly.
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@" # generate a shell command representing our arguments
while IFS= read -r line; do
printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@") # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running, or accepting new commands, (or failing that, running myscript.sh), since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script, (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
"$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter like this
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible, have a look at this example:
#!/bin/bash
result=""
while IFS= read -r line; do
result="$result$line"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)

Why doesn't LIMIT=\`ulimit -u\` work in bash?

In my program I need to know the maximum number of processes I can run, so I wrote a script. It works when I run it in a shell, but not when my program runs it with system("./limit.sh"). I work in bash.
Here is my code:
#/bin/bash
LIMIT=\`ulimit -u\`
ACTIVE=\`ps -u | wc -l \`
echo $LIMIT > limit.txt
echo $ACTIVE >> limit.txt
Anyone can help?
Why The Original Fails
Command substitution syntax doesn't work if escaped. When you run:
LIMIT=\`ulimit -u\`
...what you're doing is running a command named
-u`
...with the environment variable named LIMIT containing the value
`ulimit
...and unless you actually have a command that starts with -u and contains a backtick in its name, this can be expected to fail.
This is because using backticks makes characters which would otherwise be syntax into literals, and running a command with one or more var=value pairs preceding it treats those pairs as variables to export in the environment for the duration of that single command.
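The temporary-environment behaviour described here is easy to check directly. This is a minimal demonstration added for illustration (the variable name `DEMO_VAR` is arbitrary):

```shell
# A var=value pair before a command exports the variable only for
# that one command; the calling shell itself never sees it.
seen_by_child=$(DEMO_VAR=bar bash -c 'echo "$DEMO_VAR"')
seen_by_shell=${DEMO_VAR-unset}
echo "$seen_by_child"   # bar
echo "$seen_by_shell"   # unset
```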
Doing It Better
#!/bin/bash
limit=$(ulimit -u)
active=$(ps -u | wc -l)
printf '%s\n' "$limit" "$active" >limit.txt
Leave off the backticks.
Use modern $() command substitution syntax.
Avoid multiple redirections.
Avoid all-caps names for your own variables (these names are used for variables with meaning to the OS or system; lowercase names are reserved for application use).
Doing It Right
#!/bin/bash
exec >limit.txt # open limit.txt as output for the rest of the script
ulimit -u # run ulimit -u, inheriting that FD for output
ps -u | wc -l # run your pipeline, likewise with output to the existing FD
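The `exec >limit.txt` trick generalises: with no command word, exec only performs its redirections, and they persist for the rest of the script. A small demonstration (the temp-file path is arbitrary):

```shell
#!/usr/bin/env bash
# With no command word, exec rewires the current shell's stdout,
# and the redirection stays in effect for everything that follows.
exec >/tmp/exec_demo.txt
echo "first line"      # goes to the file, not the terminal
echo "second line"     # so does this
```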
You have a typo on the very first line: #/bin/bash should be #!/bin/bash - this is often known as a "shebang" line, for "hash" (#) + "bang" (!)
Without that syntax written correctly, the script is run through the system's default shell, which will see that line as just a comment.
As pointed out in comments, that also means only the standardised options are available to the built-in ulimit command, and those don't include -u.

echo $(command) gets a different result with the output of the command

The Bash command I used:
$ ssh user@myserver.com ps -aux|grep -v \"grep\"|grep "/srv/adih/server/app.js"|awk '{print $2}'
6373
$ ssh user@myserver.com echo $(ps -aux|grep -v \"grep\"|grep "/srv/adih/server/app.js"|awk '{print $2}')
8630
The first result is the correct one, and the second one changes each time I execute it. But I don't know why they are different.
What I am doing:
My workstation has very limited resources, so I use a remote machine to run my Node.js application. I run it using ssh user@remotebox.com "cd /application && grunt serve" in debug mode. When I press Ctrl + C, the grunt task is stopped, but the application is still running in debug mode. I just want to kill it, and I need to get the PID first.
The command substitution is executed by your local shell before ssh runs.
If your local system's name is here and the remote is there,
ssh there uname -n
will print there whereas
ssh there echo $(uname -n) # should have proper quoting, too
will run uname -n locally and then send the expanded command line echo here to there to be executed.
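The same local-versus-remote split can be demonstrated without a second machine, using `bash -c` as a stand-in for the remote shell. This is an added illustration, not part of the original answer:

```shell
# $$ is each shell's own PID, so it plays the role of
# "which machine am I on" in this demonstration.
outer=$$
expanded_locally=$(bash -c "echo $$")    # $$ substituted before bash -c runs
expanded_remotely=$(bash -c 'echo $$')   # literal $$ reaches the inner shell
echo "$outer $expanded_locally $expanded_remotely"
```

With double quotes the inner shell just echoes the outer shell's PID back; with single quotes the inner shell expands `$$` itself and prints its own PID, exactly as the remote host would.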
As an additional aside, echo $(command) is a useless use of echo unless you specifically require the shell to perform wildcard expansion and whitespace tokenization on the output of command before printing it.
Also, grep x | awk { y } is a useless use of grep; it can and probably should be refactored to awk '/x/ { y }' -- but of course, here you are reinventing pidof so better just use that.
ssh user@myserver.com pidof /srv/adih/server/app.js
If you want to capture the printed PID locally, the syntax for that is
pid=$(ssh user@myserver.com pidof /srv/adih/server/app.js)
Of course, if you only need to kill it, that's
ssh user@myserver.com pkill /srv/adih/server/app.js
Short answer: the $(ps ... ) command substitution is being run on the local computer, and then its output is sent (along with the echo command) to the remote computer. Essentially, it's running ssh user#myserver.com echo 8630.
Your first command is also probably not doing what you expect; the pipes are interpreted on the local computer, so it's running ssh user#myserver.com ps -aux, piping the output to grep on the local computer, piping that to another grep on the local computer, etc. I'm guessing that you wanted that whole thing to run on the remote computer so that the result could be used on the remote computer to kill a process.
Long answer: the order things are parsed and executed in shell is a bit confusing; with an ssh command in the mix, things get even more complicated. Basically, what happens is that the local shell parses the command line, including splitting it into separate commands (separated by pipes, ;, etc), and expanding $(command) and $variable substitutions (unless they're in single-quotes). It then removes the quotes and escapes (they've done their jobs) and passes the results as arguments to the various commands (such as ssh). ssh takes its arguments, sticks all the ones that look like parts of the remote command together with spaces between them, and sends them to a shell on the remote computer which does this process over again.
This means that quoting and/or escaping things like $ and | is necessary if you want them to be parsed/acted on by the remote shell rather than the local shell. And quotes don't nest, so putting quotes around the whole thing may not work the way you expect (e.g. if you're not careful, the $2 in that awk command might get expanded on the local computer, even though it looks like it's in single-quotes).
When things get messy like this, the easiest way is sometimes to pass the remote command as a here-document rather than as arguments to the ssh command. But you want quotes around the here-document delimiter to keep the various $ expansions from being done by the local shell. Something like this:
ssh user@myserver.com <<'EOF'
echo $(ps -aux|grep -v "grep"|grep "/srv/adih/server/app.js"|awk '{print $2}')
EOF
Note: be careful with indenting the remote command, since the text will be sent literally to the remote computer. If you indent it with tab characters, you can use <<- as the here-document delimiter (e.g. <<-'EOF') and it'll remove the leading tabs.
EDIT: As @tripleee pointed out, there's no need for the multiple greps, since awk can do the whole job itself. It's also unnecessary to exclude the search command from the results (grep -v grep), because the "/" characters in the pattern need to be escaped, meaning that it won't match itself. So you can simplify the pipeline to:
ps -aux | awk '/\/srv\/adih\/server\/app.js/ {print $2}'
Now, I've been assuming that the actual goal is to kill the relevant pid, and echo is just there for testing. If that's the case, the actual command winds up being:
ssh user@myserver.com <<'EOF'
kill $(ps -aux | awk '/\/srv\/adih\/server\/app.js/ {print $2}')
EOF
If that's not right, then the whole echo $( ) thing is best skipped entirely. There's no reason to capture the pipeline's output and then echo it, just run it and let it output directly.
And if pkill (or something similar) is available, it's much simpler to use that instead.

Bash script from Codesourcery arm-2011.03 can't find grep or sed

I'm trying to run the CodeSourcery arm-2011.03.42 BASH script in Ubuntu 12.04. At the top of the script is the following:
#! /usr/bin/env bash
But, when I execute it, I get the following errors:
line 140: grep: command not found
line 140: sed: command not found
I can run both grep and sed from the command line, but not in the script.
Here's what line 140 looks like:
env_var_list=$(export | \
grep '^declare -x ' | \
sed -e 's/^declare -x //' -e 's/=.*//')
If I change the first line to
#!/bin/sh
I get the following error:
Line 51: Syntax error: "(" unexpected (expecting "}")
Here's what Line 51 looks like
check_pipe() {
local -a status=("${PIPESTATUS[@]}") #<-- Line 51
local limit=$1
local ix
The #<-- Line 51 actually doesn't appear in the shell script. I just added it to this post for clarity.
I've tried dos2unix and a number of other things, but I just can't win. I would very much appreciate your help.
I changed this line in the script
pushenvvar PATH /usr/local/tools/gcc-4.3.3/bin
to
pushenvvar PATH /usr/local/tools/gcc-4.3.3/bin:/bin
and it seems to work now.
The shell script must be run with bash, as arrays don't exist in sh.
Check your PATH environment variable, and the path of grep and sed (normally /bin).
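When a script reports grep or sed missing, a couple of quick checks narrow the problem down. These are generic diagnostics, not specific to the CodeSourcery script:

```shell
# Where do grep and sed actually live, and what PATH is in effect?
grep_path=$(command -v grep)
sed_path=$(command -v sed)
echo "grep: ${grep_path:-NOT FOUND}"
echo "sed:  ${sed_path:-NOT FOUND}"
echo "PATH: $PATH"
```

If both commands resolve here but not inside the script, something in the script is rewriting PATH, which is exactly what the `pushenvvar PATH` fix above addresses.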
There might be several possible reasons.
As @AntonioD pointed out, there must not be any space between '#!' and '/usr/bin/env' at the beginning of the file.
The grep and sed commands may not exist in your $PATH; check /bin and /usr/bin to see if they are there, or run which grep and which sed in your shell.
If grep and sed do exist, make sure they have the right permissions: they should be readable and executable. In general, this should not happen.
You must not use #!/bin/sh instead of #!/usr/bin/env bash or #!/bin/bash, because that would cause the shell to run in POSIX-compatible mode, in which most of bash's advanced features, such as arrays, do not work.
If none of the above is the case, then it is really weird.

pgrep prints a different pid than expected

I wrote a small script and for some reason I need to escape any spaces passed in parameters to get it to work.
I read numerous other articles about people with this issue and it is typically due to not quoting $@, but all of my variables are quoted within the script and the parameters quoted on the command line as well. Also, if I run the script in debug mode the line that is returned can be run successfully via copy paste but fails when executed from within the script.
CODE:
connections ()
{
args="$@"
pid="$(pgrep -nf "$args")"
echo $pid
# code that shows TCP and UDP connections for $pid
}
connections "$@"
EXAMPLE:
bash test.sh "blah blah"
fails and instead returns the pid of the currently running shell
bash test.sh "blah\ blah"
succeeds and returns the pid of the process you are searching for via pgrep
Your problem has nothing to do with "$@".
If you add a -l option to pgrep, you can see why it's matching the current process.
The script you're running also includes what you're trying to search for in its own arguments.
It's like doing this, and seeing grep:
$ ps -U $USER -o pid,cmd | grep gnome-terminal
12410 grep gnome-terminal
26622 gnome-terminal --geometry=180x65+135+0
The reason the backslash makes a difference? pgrep thinks backslash+space just means space. It doesn't find your script, because that contains blah\ blah, not blah blah.
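One way around the self-match is to filter the current process out of pgrep's output. This is a sketch (pgrep has no built-in option to exclude arbitrary PIDs, so we post-filter; the `find_pids` name is an invention for illustration):

```shell
# List PIDs matching a pattern, excluding the current shell and its
# parent, so a script never reports itself just because the pattern
# appears in its own argument list.
find_pids() {
    pgrep -f "$1" | grep -vx -e "$$" -e "$PPID" || true
}
```

Inside the question's `connections` function, something like `pid=$(find_pids "$args" | tail -n 1)` would then behave the same whether or not the argument contains spaces, with no backslash-escaping needed.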