'grep' not recognized over ssh - bash

I'm trying to close my app remotely like this:
ssh pi@192.168.0.227 "kill $(ps aux | grep '[M]yApp' | awk '{print $2}')"
It fails and prompts:
grep : The term 'grep' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
If I log in via SSH first and then run the command, it works, but I need it to be a one-liner. I've set the /etc/ssh/sshd_config variable PermitUserEnvironment to yes, tried using the full path to grep (/bin/grep), and even removed the spaces around the pipe (these were all answers to questions similar to mine), but nothing lets me pass the command. What am I missing?

The string is expanded by your local shell before being passed to the other host. Since it is a double-quoted string the command within $() runs on your local host. The easiest way to pass such a command to a remote host is with a "quoted" here document:
ssh pi@192.168.0.227 <<'EOF'
kill $(ps aux | grep '[M]yApp' | awk '{print $2}')
EOF
Similar: How have both local and remote variable inside an SSH command
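A single-quoted one-liner is an alternative to the here document: with single quotes the $(...) is not expanded locally, so everything runs on the Pi (a sketch, assuming the same process name as above):
ssh pi@192.168.0.227 'kill $(ps aux | grep "[M]yApp" | awk "{print \$2}")'
If pkill is installed on the remote side, ssh pi@192.168.0.227 'pkill MyApp' is an even shorter equivalent.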

Related

Remote ssh command with arguments [duplicate]

This question already has answers here:
How to use bash $(awk) in single ssh-command?
(6 answers)
Closed 5 years ago.
I know that I can run a command remotely from another machine using ssh -t login@machine "command"; however, I am struggling to run a more complex command like this:
watch "ps aux | awk '{print $1}' | grep php-fpm | wc -l"
I have tried different kinds of quotes; although the watch command seems to be firing, it shows errors like:
awk: cmd. line:1: {print
awk: cmd. line:1:        ^ unexpected newline or end of string
The thing is, the $ is expanded by the local shell before the string is passed to the ssh command. You need to deprive it of its special meaning locally by escaping it before passing it to ssh:
ssh -t login@machine watch "ps aux | awk '{print \$1}' | grep php-fpm | wc -l"
The error you are seeing occurs because when the local shell tries to expand $1 it does not find a value for it and leaves an empty string, which results in a broken awk command being passed to the remote side.
Also, you could replace the shell pipeline containing awk and grep with a single awk program:
ssh -t login@machine watch "ps aux | awk '\$1 == \"php-fpm\"{count++}END{print count}'"
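A quick way to see the difference locally, without ssh at all (purely illustrative):
echo "ps aux | awk '{print $1}'"   # the local shell eats $1, prints: ps aux | awk '{print }'
echo "ps aux | awk '{print \$1}'"  # escaped, prints: ps aux | awk '{print $1}'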

bash variable not available after running script [duplicate]

This question already has answers here:
Global environment variables in a shell script
(7 answers)
Closed 5 years ago.
I have a shell script that assigns my IP address to a variable, but after running the script, I cannot access the variable in bash. If I put an echo in the script, it will print the variable, but it does not save it after the script is done running.
Is there a way to change the script so that I can access the variable after it runs?
ip=$(/sbin/ifconfig | grep "inet " | awk '{print $2}' | grep -v 127 | cut -d":" -f2)
I am using terminal on a Mac.
A script by default runs in a child process, which means the current (calling) shell cannot see its variables.
You have the following options:
Make the script output the information (to stdout), so that the calling shell can capture it and assign it to a variable of its own. This is probably the cleanest solution.
ip=$(my-script)
Source the script to make it run in the current shell as opposed to a child process. Note, however, that all modifications to the shell environment you make in your script then affect the current shell.
. my-script # any variables defined (without `local`) are now visible
Refactor your script into a function that you define in the current shell (e.g., by placing it in ~/.bashrc); again, all modifications made by the function will be visible to the current shell:
# Define the function
my-func() { ip=$(/sbin/ifconfig | grep "inet " | awk '{print $2}' | grep -v 127 | cut -d":" -f2); }
# Call it; $ip is implicitly defined when you do.
my-func
As an aside: You can simplify your command as follows:
/sbin/ifconfig | awk '/inet / && $2 !~ /^127/ { print $2 }'
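To make the first option concrete, a minimal sketch of what my-script itself could contain (the shebang and comment are just one way to write it; the pipeline is the simplified form from the aside above):
#!/bin/bash
# my-script: print the primary non-loopback IPv4 address to stdout
/sbin/ifconfig | awk '/inet / && $2 !~ /^127/ { print $2 }'
The calling shell then captures the output with ip=$(my-script), as shown above.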

How to denote that pipes on a remote machine end, and a pipe on the local machine follows

I want to get some files from a remote machine. For this I build a pipeline to determine which files have to be fetched, and I want to feed those results into another pipe as well. The remote pipes combine into one command which is given to ssh.
I do not know how to indicate where the pipes on the remote machine end and where the local pipe begins. So I do:
ssh user@remote find ... | grep ...| awk ...| ls
The first two pipes are remote (find, grep, awk run on the remote machine), and the last pipe is local (ls runs on the local machine).
Wrap the part of the command you want to execute on the remote machine in double quotes. E.g. find, grep and awk will be executed remotely, while less will be executed locally:
ssh user@remote "find ... | grep ...| awk ... "| less
As "tripleee" added in the comments, it's better to use single quotes if there is no variable substitution in the quoted string. So use " if there is a variable inside the remote command:
ssh user@remote "find $foo | grep ...| awk ... "| less
or use ' if there is no variable involved:
ssh user@remote 'find "foo" | grep ...| awk ... '| less
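A concrete, hypothetical example with made-up paths and patterns: find, grep and sort run on the remote machine, and only the combined output is paged locally:
ssh user@remote 'find /var/log -name "*.log" | grep -v archive | sort' | less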

Bash script: How to remote into a computer, run a command, and have the output piped to another computer?

I need to create a Bash script that will be able to ssh into a computer (machine B), run a command, and have the output piped back to a .txt file on machine A. How do I go about doing this? Ultimately it will be a list of computers that I will ssh to and run a command on, with all of the output appended to the same .txt file on machine A.
UPDATE: OK, so I followed what That other Guy suggested, and this is what seems to work:
File=/library/logs/file.txt
ssh -n username@<ip> "$(< testscript.sh)" > $File
What I need to do now is, instead of manually entering an IP address, have it read from a list of hostnames in a .txt file and placed into a variable that substitutes for the IP address. An example would be ssh username@Variable, where "Variable" changes each time a hostname is read from the file. Any ideas how to go about this?
This should do it:
ssh userB@machineB "some command" | ssh userA@machineA "cat - >> file.txt"
With your commands:
ssh userB@machineB <<'END' | ssh userA@machineA "cat - >> file.txt"
echo Hostname=$(hostname) LastChecked=$(date)
ls -l /applications/utilities/Disk\ Utility.app/contents/Plugins/*Partition.dumodule* | awk '{printf "Username=%s DateModified=%s %s %s\n", $3, $6, $7, $8}'
END
You could replace the ls -l | awk pipeline with a single stat call, but it appears that the OSX stat does not have a way to return the user name, only the user id.
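For the follow-up in the UPDATE (reading hostnames from a .txt file instead of a fixed IP), a minimal sketch, assuming hosts.txt holds one hostname per line and testscript.sh is the script already used above:
File=/library/logs/file.txt
while IFS= read -r host; do
    # -n keeps ssh from swallowing the rest of hosts.txt on stdin
    ssh -n "username@$host" "$(< testscript.sh)" >> "$File"
done < hosts.txt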

Complicated grep commands not executing in shell script

I am trying to execute a couple of complicated grep commands via a shell script that work fine when executed manually in the terminal. I can't for the life of me figure out why this doesn't work.
The goal of the first grep is to get the process id of any process attached to the parent myPattern. The second gets the process id of the process myPattern itself.
Currently my shell script:
returns nothing for the 1st.
ignores the "grep -v 'grep'" part in the 2nd.
#!/bin/sh
ps -ef | grep "$(ps -ef | grep 'myPattern' | grep -v grep | awk '{print $2}')" | grep -v grep | grep -v myPattern | awk '{print $2}'
ps -ef | grep 'myPattern' | grep -v 'grep' | awk '{print $2}'
This works fine when run in the terminal manually. Any ideas where I have stuffed this up?
Your first command is vague; I don't think it would reliably do what you describe. It also does not guard against picking up the id of the first grep call. The second one works for me. For the first query it depends highly on the system you are using. It's easier to use pstree to show you the whole process tree under a pid, like:
pstree -p 1782 | sed 's/-/\n/g' | sed -n -e 's/.*(\([0-9]\+\)).*/\1/p'
You need to limit pid to a single value. If you have more values, then you have to loop through them (see the sketch below). If you don't have pstree, then you can craft a similar loop around ps. Note that even if your current commands worked, they would catch only one level of parent/child relationships; pstree handles any depth.
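A minimal sketch of such a loop (myPattern stays the OP's placeholder; the [m] bracket trick keeps the awk process itself out of the match):
for pid in $(ps -ef | awk '/[m]yPattern/ {print $2}'); do
    pstree -p "$pid" | sed 's/-/\n/g' | sed -n -e 's/.*(\([0-9]\+\)).*/\1/p'
done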
I also have to tell you that a process can escape its original parent by forking, in which case it gets re-parented and a parent-based lookup no longer finds it.
In any case, without exact details of what you are trying to achieve and why, and on what platform, it is hard to give you a great answer. Also, these utilities, albeit present virtually everywhere, are not as portable as one would wish.
One more note: /bin/sh is often not your current shell. On many Linux systems the user's default shell is bash while /bin/sh is dash or some other shell variant. So if you see differences between the console and the script, it can be a difference in the actual shell you are using.
Based on user feedback, it would be much easier to have something like this in the java process launching script:
java <your params here> &
echo $! > /var/run/myprog.pid
Then the kill script would look like cat /var/run/myprog.pid | xargs kill. There are shorter commands, but I think this is more portable. I can give actual code if you want something more specific.
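A hypothetical stop script built around that pid file (the script name and the cleanup step are my own choices):
#!/bin/sh
# stop-myprog.sh: kill the recorded pid, then remove the stale pid file
kill "$(cat /var/run/myprog.pid)" && rm -f /var/run/myprog.pid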
