SSH - Connect & Execute : "echo $SSH_CONNECTION | awk '{print $1}'" - bash

Here's my bash (shell?) script.
command="ssh root#$ip";
command2="\"ls /;bash\"";
xfce4-terminal -x sh -c "$command $command2; bash"
It connects to the server and executes the command
ls /
and it works just fine.
But instead of ls /..
I want to execute this command:
echo $SSH_CONNECTION | awk '{print $1}'
I replaced "ls /" with the code above, but as soon as it connects,
it simply prints a blank line.
Based on my understanding, the code is being expanded locally before it reaches the server, because things are not escaped.
If I manually paste this code on my remote server..
echo $SSH_CONNECTION | awk '{print $1}'
it works just fine. Prints out exactly what it should be printing out.
So the question is: where do the backslashes go in my code?
I know it sounds like simply trying a bunch of backslashes
until something works.
I tried many ways. I even tried triple and sextuple backslashes to escape things.
Update
This is not sufficient.
It still only prints a blank line as soon as it connects.
command="ssh root#$ip";
command2="\"echo \$SSH_CONNECTION | awk '{print \$1}';bash\"";
xfce4-terminal -x sh -c "$command $command2; bash"
Update 2
from one of the answers..
The code below works OK, but it looks "un-light" to my eyes, or maybe just to my mind, because I am not used to exec and right-to-left piping?
command="ssh -t root#$ip";
command2="\"awk '{ print \\\$1 }' <<< \\\$SSH_CONNECTION; exec \\\$SHELL\""
xfce4-terminal -x sh -c "$command $command2; bash"
Update 3
from the answers..
command2='"echo \"\$SSH_CONNECTION\" | awk '"'"'{ print \$1 }'"'"'; exec \$SHELL"'
also seems to be working okay.
Although I'm told that exec is less resource-consuming, I am still looking for a solution without the exec command, because exec reminds me of PHP, which is not light stuff. So maybe it is just perception.
Update 4:
Turns out "exec \$SHELL" was not part of the code. it was simply a replacement for the "bash" command to stay logged in in ssh.
Although info is being said it is less resource consuming than the bash
command.. it is to be studied in the future.
for now this seems to be the final result.
command2='"echo \"\$SSH_CONNECTION\" | awk '"'"'{ print \$1 }'"'"';bash"'
It looks very reasonable, simply piping from left to right.
Update 5
The final code is:
command="ssh -p 2201 -t root#$ip";
command2='"echo \"\$SSH_CONNECTION\" | awk '"'"'{ print \$1 }'"'"';bash"'
xfce4-terminal -x sh -c "$command $command2; bash"

You have to escape twice: once for SSH, and once for the shell command you give to xfce4-terminal. I've tested this with xterm instead of xfce4-terminal, but it should be the same:
$ cmd1='ssh -t root@as'
$ cmd2="\"awk '{ print \\\$1 }' <<< \\\"\\\$SSH_CONNECTION\\\"; exec \\\$SHELL\""
$ xfce4-terminal -x sh -c "$cmd1 $cmd2"
I've added -t to allocate a pseudo-terminal, and I use a here-string instead of echo and a pipe.
Instead of spawning Bash in a subshell, I'm using exec $SHELL.
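In case the here-string is the unfamiliar part: <<< feeds a single string to the command's standard input. A trivial illustration (the string here is arbitrary):
$ awk '{ print $1 }' <<< "a b c"
a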
An alternative to triple backslashes in cmd2 is to single-quote it, but to get a single quote into a single-quoted string, you have to use the unwieldy '"'"':
cmd2='"awk '"'"'{ print \$1 }'"'"' <<< \"\$SSH_CONNECTION\"; exec \$SHELL"'

Instead of dealing with all the escaping problems, you could just access the variable in another way:
Just substitute printenv SSH_CONNECTION for echo $SSH_CONNECTION. Notice that now there is no dollar sign, so the local shell will not expand the variable.
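For example, plugged into the final version from Update 5, it might look like this (an untested sketch; note that \$1 still needs one level of escaping so the local sh -c passes a literal $1 through to the remote awk):
command="ssh -p 2201 -t root@$ip";
command2='"printenv SSH_CONNECTION | awk '"'"'{ print \$1 }'"'"';bash"'
xfce4-terminal -x sh -c "$command $command2; bash"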

Related

Why does nesting this awk command with eval produce a different result than running it?

I have this script that's designed to assign the output of commands that collect information about a system to variables, and then echo them back. This works very well for the first few commands, but the last one keeps returning the value without "PRETTY_NAME=" stripped out of the output.
Is there some problem with this that I'm not seeing?
I have tried using grep to separate awk:
grep PRETTY_NAME /etc/*-release | awk -F '=' '{print $2}'
Using escaped quotes:
awk -F \"=\" '/PRETTY_NAME/ {print $2}' /etc/*-release
Whole block (edited somewhat for relevance)
declare -A CMDS=(
    [primaryMacAddress]="cat /sys/class/net/$(ip route show default | awk '/default/ {print $5}')/address"
    [primaryIpAddress]="hostname --ip-address"
    [hostname]="hostname"
    [osType]="awk -F '=' '/PRETTY_NAME/ {print $2}' /etc/*-release"
)
#This bit is actually nested in another function
for kpair in "${!CMDS[@]}"; do
    echo "$kpair=\"$( eval ${CMDS[$kpair]} )\""
done
Results when run from .sh file:
osType="PRETTY_NAME="Red Hat Enterprise Linux Server 7.4 (Maipo)""
expected:
osType=""Red Hat Enterprise Linux Server 7.4 (Maipo)""
When this command is run by itself, it seems to work as intended:
$ awk -F '=' '/PRETTY_NAME/ {print $2}' /etc/*-release
"Red Hat Enterprise Linux Server 7.4 (Maipo)"
Because your Awk command is specified in double quotes, interior dollar signs are subject to special treatment: the $2 is treated as a parameter substitution by your shell, and so the array element doesn't store the text $2 but rather its expansion. The Awk interpreter never sees the $2 syntax.
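One minimal fix, keeping the double-quoted array values, is to escape the dollar sign so a literal $2 survives until Awk sees it (a sketch of just that one entry):
[osType]="awk -F '=' '/PRETTY_NAME/ {print \$2}' /etc/*-release"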
However, you have a second problem in your command dispatcher. Your eval command does not prevent word splitting:
eval ${CMDS[$kpair]}
you want this:
eval "${CMDS[$kpair]}"
Without the quotes, your command is arbitrarily chopped into fields on whitespace. Then eval catenates the pieces together, using one space between them, and evaluates the resulting syntax. The difference can be demonstrated with the following example (note the two spaces between the quoted fields):
$ cmd="awk '/foo/ { print \$1\"  \"\$2 }'"
$ echo 'foo a' | eval $cmd
foo a
$ echo 'foo a' | eval "$cmd"
foo  a
We can just use echo to understand the issue:
$ echo $cmd
awk '/foo/ { print $1" "$2 }'
$ echo "$cmd"
awk '/foo/ { print $1"  "$2 }'
The substitution of $cmd and the subsequent word splitting are done irrespective of any shell syntax that $cmd contains. We can see the pieces like this:
$ for x in $cmd ; do echo "<$x>" ; done
<awk>
<'/foo/>
<{>
<print>
<$1">
<"$2>
<}'>
When we execute eval $cmd, the above pieces are generated and re-combined by eval and evaluated. Needless to say, you don't want your command syntax to be chopped up and re-combined like this; who knows what sort of hidden bug will arise. It may be okay for the commands you have now, but as a generic command dispatch mechanism, it is flawed.
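As an aside, a common way to sidestep eval and its quoting pitfalls altogether is to store the commands as shell functions rather than strings. This is only a sketch (the get_ naming scheme is mine, not part of the question):
# One function per key; the single-quoted Awk program needs no extra escaping.
get_osType() { awk -F '=' '/PRETTY_NAME/ {print $2}' /etc/*-release; }
get_hostname() { hostname; }

for key in osType hostname; do
    echo "$key=\"$(get_"$key")\""
done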

Bash code error unexpected syntax error

I am not sure why I am getting the unexpected syntax '(' error.
#!/bin/bash
DirBogoDict=$1
BogoFilter=/home/nikhilkulkarni/Downloads/bogofilter-1.2.4/src/bogofilter
echo "spam.."
for i in 'cat full/index |fgrep spam |awk -F"/" '{if(NR>1000)print$2"/"$3}'|head -500'
do
cat $i |$BogoFilter -d $DirBogoDict -M -k 1024 -v
done
echo "ham.."
for i in 'cat full/index | fgrep ham | awk -F"/" '{if(NR>1000)print$2"/"$3}'|head -500'
do
cat $i |$BogoFilter -d $DirBogoDict -M -k 1024 -v
done
Error:
./score.bash: line 7: syntax error near unexpected token `('
./score.bash: line 7: `for i in 'cat full/index |fgrep spam |awk -F"/" '{if(NR>1000)print$2"/"$3}'|head -500''
Uh, because you have massive syntax errors.
The immediate problem is that you have an unpaired single quote before the cat which exposes the Awk script to the shell, which of course cannot parse it as shell script code.
Presumably you want to use backticks instead of single quotes, although you should actually not read input with for.
With a fair bit of refactoring, you might want something like
for type in spam ham; do
    awk -F"/" -v type="$type" '$0 ~ type && NR>1000 && i++<500 {
        print $2"/"$3 }' full/index |
    xargs $BogoFilter -d $DirBogoDict -M -k 1024 -v
done
This refactors the useless cat | grep | awk | head into a single Awk script, and avoids the silly loop over each output line. I assume bogofilter can read file name arguments; if not, you will need to refactor the xargs slightly. If you can pipe all the files in one go, try
... xargs cat | $BogoFilter -d $DirBogoDict -M -k 1024 -v
or if you really need to pass in one at a time, maybe
... xargs sh -c 'for f; do $BogoFilter -d $DirBogoDict -M -k 1024 -v <"$f"; done' _
... in which case you will need to export the variables BogoFilter and DirBogoDict to expose them to the subshell (or just inline them -- why do you need them to be variables in the first place? Putting command names in variables is particularly weird; just update your PATH and then simply use the command's name).
In general, if you find yourself typing the same commands more than once, you should think about how to avoid that. This is called the DRY principle.
The syntax error is due to bad quoting. The expression whose output you want to loop over should be in command substitution syntax ($(...) or backticks), not single quotes.
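Applied to the original script, the first loop would become something like this (a minimal sketch that only fixes the quoting; as the first answer notes, reading lines with for is still not ideal):
for i in $(cat full/index | fgrep spam | awk -F"/" '{if(NR>1000)print $2"/"$3}' | head -500)
do
    cat "$i" | $BogoFilter -d $DirBogoDict -M -k 1024 -v
done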

I want to port a zsh function to fish shell

I have a question related to fish shell.
I wrote this zsh function:
function agvim() {
    vim $(ag "$@" | peco --query "$LBUFFER" | awk -F : '{print "-c " $2 " " $1}')
}
It works.
But the version I ported from zsh to fish doesn't work properly:
function agvim
    ag $argv | peco --query "$LBUFFER" | awk -F : '{print "-c " $2 " " $1}' >> $HOME/.agvim_history
    vim (tail -n 1 $HOME/.agvim_history)
end
However, it doesn't work properly: vim opens with tail's output as the filename.
I think this is because command substitution expansion is a little different from zsh's.
This is an example of tail's output: -c 3 bin/ec, and I want to use this output as options.
Please tell me a better solution.
This is an example of tail's output: -c 3 bin/ec, and I want to use this output as options.
The issue you are running into here is that zsh, like bash, will split command substitutions on spaces, while fish only splits on newlines.
That means zsh will send "-c", "3" and "bin/ec" to vim, while fish will send "-c 3 bin/ec" as one argument.
There are a few ways to get around this. One, if you are running fish from git, is to use tail | string split " ". Another, which should work with pretty much any fish version, is to use sed (tail | sed "s/ /\n/g").
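Plugged back into the function, the string split route might look like this (an untested sketch, assuming a fish new enough to have string):
function agvim
    vim (ag $argv | peco --query "$LBUFFER" | awk -F : '{print "-c " $2 " " $1}' | string split " ")
end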
If you trust the input, this is a place to use eval
function agvim
eval vim (ag $argv | peco --query "$LBUFFER" | awk -F: '{print "-c",$2,$1}')
end

Use output of bash command (with pipe) as a parameter for another command

I'm looking for a way to use the output of a command (say command1) as an argument for another command (say command2).
I encountered this problem when trying to grep the output of the who command using a pattern given by another set of commands (actually tty piped to sed).
Context:
If tty displays:
/dev/pts/5
And who displays:
root pts/4 2012-01-15 16:01 (xxxx)
root pts/5 2012-02-25 10:02 (yyyy)
root pts/2 2012-03-09 12:03 (zzzz)
Goal:
I want only the line(s) regarding "pts/5"
So I piped tty to sed as follows:
$ tty | sed 's/\/dev\///'
pts/5
Test:
The attempted following command doesn't work:
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
Possible solution:
I've found out that the following works just fine:
$ eval "who | grep $(echo $(tty) | sed 's/\/dev\///')"
But I'm sure the use of eval could be avoided.
As a final side note: I've noticed that the -m argument to who gives me exactly what I want (only the line of who that relates to the current user). But I'm still curious how I could make this combination of pipes and command nesting work...
One usually uses xargs to make the output of one command an option to another command. For example:
$ cat command1
#!/bin/sh
echo "one"
echo "two"
echo "three"
$ cat command2
#!/bin/sh
printf '1 = %s\n' "$1"
$ ./command1 | xargs -n 1 ./command2
1 = one
1 = two
1 = three
$
But ... while that was your question, it's not what you really want to know.
If you don't mind storing your tty in a variable, you can use bash variable mangling to do your substitution:
$ tty=`tty`; who | grep -w "${tty#/dev/}"
ghoti pts/198 Mar 8 17:01 (:0.0)
(You want the -w because if you're on pts/6 you shouldn't see pts/60's logins.)
You're limited to doing this in a variable, because if you try to put the tty command into a pipe, it thinks that it's not running associated with a terminal anymore.
$ true | echo `tty | sed 's:/dev/::'`
not a tty
$
Note that nothing in this answer so far is specific to bash. Since you're using bash, another way around this problem is to use process substitution. For example, while this does not work:
$ who | grep "$(tty | sed 's:/dev/::')"
This does:
$ grep $(tty | sed 's:/dev/::') < <(who)
You can do this without resorting to sed with the help of Bash variable mangling, although as @ruakh points out this won't work in the single-line version (without the semicolon separating the commands). I'm leaving this first approach up because I think it's interesting that it doesn't work in a single line:
TTY=$(tty); who | grep "${TTY#/dev/}"
This first puts the output of tty into a variable, then strips the leading /dev/ where grep uses it. But without the semicolon, TTY is not yet set at the moment bash does the variable expansion/mangling for grep.
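A tiny demonstration of the same effect, using an arbitrary (unset) variable name FOO:
$ FOO=bar echo "$FOO"

$ FOO=bar; echo "$FOO"
bar
In the first command, "$FOO" is expanded by the current shell before the temporary assignment takes effect, so echo prints an empty line.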
Here's a version that does work because it spawns a subshell with the already modified environment (that has TTY):
TTY=$(tty) WHOLINE=$(who | grep "${TTY#/dev/}")
The result is left in $WHOLINE.
@Eduardo's answer is correct (and as I was writing this, a couple of other good answers have appeared), but I'd like to explain why the original command is failing. As usual, set -x is very useful to see what's actually happening:
$ set -x
$ who | grep $(echo $(tty) | sed 's/\/dev\///')
+ who
++ sed 's/\/dev\///'
+++ tty
++ echo not a tty
+ grep not a tty
grep: a: No such file or directory
grep: tty: No such file or directory
It's not completely explicit in the above, but what's happening is that tty is outputting "not a tty". This is because it's part of the pipeline being fed the output of who, so its stdin is indeed not a tty. This is the real reason everyone else's answers work: they get tty out of the pipeline, so it can see your actual terminal.
BTW, your proposed command is basically correct (except for the pipeline issue), but unnecessarily complex. Don't use echo $(tty), it's essentially the same as just tty.
You can do it like this:
tid=$(tty | sed 's#/dev/##') && who | grep "$tid"

Simple bash script using "set" command

I am supposed to make a script that prints all sizes and file-names in the current directory, ordered by size, using the "set" command.
#!/bin/bash
touch /tmp/unsorted
IFS='#'
export IFS
ls -l | tr -s " " "#" | sed '1d' > /tmp/tempLS
while read line
do
##set probably goes here##
echo $5 $9 >> /tmp/unsorted
done < /tmp/tempLS
sort -n /tmp/unsorted
rm -rf /tmp/unsorted
By logic, this is the script that should work, but it produces only blank lines.
After discussing with my classmates, we think that the set command must go first in the while loop. The problem is that we can't understand what the set command does and how to use it. Please help. Thank you.
ls -l | while read line; do
    set - $line
    echo $5 $9
done | sort -n
or simply
ls -l | awk '{print $5, $9}' | sort -n
set manipulates shell options and positional parameters. This allows you to adjust your current environment for specific situations, for example to adjust the current globbing rules; when called with plain arguments, it assigns them to the positional parameters $1, $2, and so on.
Sometimes it is necessary to adjust the environment in a script, so that it will have an option set correctly later on. Since the script runs in a subshell, the options you adjust will have no effect outside of the script.
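A quick illustration of the positional-parameter behaviour the loop above relies on (the words here are arbitrary):
$ set -- alpha beta gamma
$ echo $2
beta
set - $line in the loop does the same with the fields of each ls -l line, which is why $5 and $9 pick out the size and the file name.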
