How to port a zsh function to fish shell

I have a question about fish shell.
I wrote this zsh function:
function agvim() {
vim $(ag "$@" | peco --query "$LBUFFER" | awk -F : '{print "-c " $2 " " $1}')
}
It works.
But my port of it from zsh to fish doesn't work properly:
function agvim
ag $argv | peco --query "$LBUFFER" | awk -F : '{print "-c " $2 " " $1}' >> $HOME/.agvim_history
vim (tail -n 1 $HOME/.agvim_history)
end
vim opens with tail's entire output as a single filename; I think this is because fish expands command substitutions a little differently from zsh.
For example, tail's output is -c 3 bin/ec, and I want vim to treat that output as options.
Is there a better solution?

The issue you are running into here is that zsh, like bash, will split command substitutions on spaces, while fish only splits on newlines.
That means zsh will send "-c", "3" and "bin/ec" to vim, while fish will send "-c 3 bin/ec" as one argument.
There are a few ways to get around this. One, if you are running fish from git (which has the string builtin), is to use tail | string split " " (see the sketch below). Another, which should work with pretty much any fish version, is to use sed (tail | sed "s/ /\n/g").
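For example, a sketch of the whole function using string split (assuming a fish version that ships the string builtin, i.e. fish 2.3.0 or later; the history-file step is kept from the question):

function agvim
ag $argv | peco --query "$LBUFFER" | awk -F : '{print "-c " $2 " " $1}' >> $HOME/.agvim_history
vim (tail -n 1 $HOME/.agvim_history | string split " ")
end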

If you trust the input, this is a place to use eval:
function agvim
eval vim (ag $argv | peco --query "$LBUFFER" | awk -F: '{print "-c",$2,$1}')
end
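This works because eval makes fish re-parse the command line: the substitution's single word -c 3 bin/ec is split again on spaces, so vim receives -c, 3 and bin/ec as three separate arguments.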

How to properly pass filenames with spaces with $* from xargs to sed via sh?

Disclaimer: this happens on macOS (Big Sur); more info about the context below.
I have to write (and almost did) a script which will replace image URLs in big text (XML) files with their Base64-encoded values.
The script should run the same way with single filenames or patterns, or both, e.g.:
./replace-encode single.xml
./replace-encode pattern*.xml
./replace-encode single.xml pattern*.xml
./replace-encode folder/*.xml
Note: it should properly handle files\ with\ spaces.xml
So I ended up with this script:
#!/bin/bash
#needed for `ls` command
IFS=$'\n'
ls -1 $* | xargs -I % sed -nr 's/.*>(https?:\/\/[^<]+)<.*/\1/p' % | xargs -tI % sh -c 'sed -i "" "s#%#`curl -s % | base64`#" $0' "$*"
What it does: ls all the files and pipe the list to xargs, then search for all URLs surrounded by anchors (hence the > and < in the search expression; I also had to use sed because grep is limited on macOS), then pipe again to a sh script which runs the sed search & replace, where the replacement is the big Base64 string.
This works perfectly fine... but only for fileswithoutspaces.xml
I tried to play with $0 vs $1, $* vs $@, with or without quotes, but to no avail.
I don't understand exactly how variable substitution (is that what it's called? not a native English speaker, and above all, not a script writer at all!!! just a Java dev all day long...) works between xargs, sh, or even bash with arguments like filenames.
The xargs -t is there to let me check how the substitution works, and that's how I noticed that using a pattern worked, but that I have to keep the quotes around the last $*, otherwise only the 1st file is searched & replaced; the output looks like:
user@host % ./replace-encode pattern*.xml
sh -c sed -i "" "s#https://www.some.com/public/123456.jpg#`curl -s https://www.some.com/public/123456.jpg | base64`#" $0 pattern_123.xml
pattern_456.xml
Both pattern_123.xml and pattern_456.xml are handled here; with $* instead of "$*" at the end of the command, only pattern_123.xml is handled.
So is there a simple way to "fix" this?
Thank you.
Note: macOS commands have some limitations (I know), but as this script is intended for non-technical users, I can't ask them to install (or have the IT team install on their behalf) alternate GNU versions, e.g. pcregrep or ggrep, as I've read suggested many times...
Also: I don't intend to change from xargs to for loops or the like because 1/ I don't have the time, and 2/ I might want to optimize the second step, where some URLs might be duplicated.
There's no reason for your software to use ls or xargs, and certainly not $*.
./replace-encode single.xml
./replace-encode pattern*.xml
./replace-encode single.xml pattern*.xml
./replace-encode folder/*.xml
...will all work fine with:
#!/usr/bin/env bash
while IFS= read -r line; do
replacement=$(curl -s "$line" | base64)
in="$line" out="$replacement" perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' "$#"
done < <(sed -nr 's/.*>(https?:\/\/[^<]+)<.*/\1/p' "$#" | sort | uniq)
Finally ended up with this single-line script:
sed -nr 's/.*>(https?:\/\/[^<]+)<.*/\1/p' "$@" | xargs -I% sh -c 'sed -i "" "s#%#`curl -s % | base64`#" "$@"' _ "$@"
which does properly support filenames with or without spaces.
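As a side illustration (my own example, not part of the answers above), the whole spaces problem boils down to "$@" expanding to one word per argument, while unquoted $* re-splits everything on whitespace:

#!/bin/sh
set -- "file with spaces.xml" "plain.xml"
for f in "$@"; do printf '[%s]\n' "$f"; done   # [file with spaces.xml] [plain.xml]
for f in $*; do printf '[%s]\n' "$f"; done     # [file] [with] [spaces.xml] [plain.xml]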

How to remove the username/hostname line from an output on Korn Shell?

I run the command
df -gP /data1 /data2 | grep -v File | awk '{print $1}' |
awk -F/dev/ '$0=$2' | tr '\n' '
on the AIX shell (ksh) and it prints the output below:
lv_data01 lv_data02 root@testhost:/
However, I would like the output to be printed this way. Could someone help?
lv_data01 lv_data02
Using grep … | awk … | awk … is not necessary; a single awk could do the whole job. So could sed, and it might even be easier. I'd be tempted to deal with the spacing by using:
x=$(df … | sed …); echo $x
The tr command, once corrected, replaces newlines with spaces, so the prompt follows without a newline before it. The ; echo suggestion adds the missing newline; the echo $x suggestion (note no double quotes) does too.
As for the sed command:
sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'
Don't print anything by default
If the line doesn't match File (doing the work of grep -v):
remove the first space (blank or tab) and everything after it (doing the work of awk '{print $1}')
replace everything up to /dev/ with nothing and print (doing the work of awk -F/dev/ '{$0=$2}')
The command substitution and capture, followed by echo, deals with spaces and newlines.
So, my suggested solution is:
x=$(df -gP /data1 /data2 | sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'); echo $x
You could add unset x after the echo if you are going to be using this directly in the shell and not in a shell script. If it'll be encapsulated in a shell script, you don't have to worry about it.
I'm blithely assuming the output from df -gP won't contain a path such as this, with two occurrences of /dev:
/who/knows/dev/lv_data01/dev/bin
If that's a real problem, you can fix the sed script, but I don't think it will be. It's one thing the second awk script in the question handles differently.
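A quick check with made-up df -gP output (the device names and numbers here are illustrative only, not real AIX output):

$ df -gP /data1 /data2
Filesystem    GB blocks  Used Available Capacity Mounted on
/dev/lv_data01    10.00  6.00      4.00      60% /data1
/dev/lv_data02    20.00 12.00      8.00      60% /data2
$ x=$(df -gP /data1 /data2 | sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'); echo $x
lv_data01 lv_data02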

SSH - Connect & Execute : "echo $SSH_CONNECTION | awk '{print $1}'"

Here's my bash (shell?) script.
command="ssh root#$ip";
command2="\"ls /;bash\"";
xfce4-terminal -x sh -c "$command $command2; bash"
This connects to the server and executes
ls /
which works just fine.
But instead of ls /, I want to execute this command:
echo $SSH_CONNECTION | awk '{print $1}'
I replaced ls / with the code above, but as soon as it connects,
it simply prints a blank line.
Based on my understanding, the code is being executed locally before it reaches the server because stuff is not escaped.
If I manually paste this code on my remote server,
echo $SSH_CONNECTION | awk '{print $1}'
it works just fine and prints out exactly what it should.
So the question is: where do the backslashes go in my code?
I know it sounds like I'm simply trying bunches of backslashes until something works.
I tried many ways. I even tried triple and sextuple backslashes to escape things.
Update
This is not sufficient.
It still only prints a blank line as soon as it connects.
command="ssh root#$ip";
command2="\"echo \$SSH_CONNECTION | awk '{print \$1}';bash\"";
xfce4-terminal -x sh -c "$command $command2; bash"
Update 2
From one of the answers:
The code below works okay, but it looks "un-light" to my eyes, or maybe just to my mind, because I am not used to exec and right-to-left piping?
command="ssh -t root#$ip";
command2="\"awk '{ print \\\$1 }' <<< \\\$SSH_CONNECTION; exec \\\$SHELL\""
xfce4-terminal -x sh -c "$command $command2; bash"
Update 3
From the answers:
command2='"echo \"\$SSH_CONNECTION\" | awk '"'"'{ print \$1 }'"'"'; exec \$SHELL"'
also seems to be working okay.
Although I'm told exec is less resource-consuming, I am still looking for a solution without the exec command, because exec reminds me of PHP's exec, which is not lightweight.. so maybe it is just perception.
Update 4
Turns out "exec \$SHELL" was not part of the code; it was simply a replacement for the "bash" command to stay logged in over ssh.
Although it is said to be less resource-consuming than the bash command, that is something to study in the future.
For now, this seems to be the final result:
command2='"echo \"\$SSH_CONNECTION\" | awk '"'"'{ print \$1 }'"'"';bash"'
It looks very reasonable, simply piping from left to right.
Update 5
The final code is:
command="ssh -p 2201 -t root#$ip";
command2='"echo \"\$SSH_CONNECTION\" | awk '"'"'{ print \$1 }'"'"';bash"'
xfce4-terminal -x sh -c "$command $command2; bash"
You have to escape twice: once for SSH, once for the shell command you give to xfce4-terminal. I've tested this with xterm instead of xfce4-terminal, but it should be the same:
$ cmd1='ssh -t root@as'
$ cmd2="\"awk '{ print \\\$1 }' <<< \\\"\\\$SSH_CONNECTION\\\"; exec \\\$SHELL\""
$ xfce4-terminal -x sh -c "$cmd1 $cmd2"
I've added -t to allocate a pseudo-terminal, and I use a here-string instead of echo and a pipe.
Instead of spawning Bash in a subshell, I'm using exec $SHELL.
An alternative to triple backslashes in cmd2 is to single-quote it, but to get a single quote into a single-quoted string, you have to use the unwieldy '"'"':
cmd2='"awk '"'"'{ print \$1 }'"'"' <<< \"\$SSH_CONNECTION\"; exec \$SHELL"'
Instead of dealing with all the escaping problems, you could just access the variable in another way:
Just substitute printenv SSH_CONNECTION for echo $SSH_CONNECTION. Notice that now there is no dollar sign, so the local shell will not expand the variable
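A sketch of that variant in the shape of the question's script (untested; root@$ip and the xfce4-terminal invocation come from the question, and note that awk's column reference still needs one layer of escaping):

command="ssh -t root@$ip"
# \\\$1 becomes \$1 in the variable, and a literal $1 on the remote side
command2="\"printenv SSH_CONNECTION | awk '{ print \\\$1 }'; bash\""
xfce4-terminal -x sh -c "$command $command2; bash"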

how to grep multiple variables in bash

I need to grep multiple strings, but I don't know the exact number of strings.
My code is :
s2=( $(echo $1 | awk -F"," '{ for (i=1; i<=NF ; i++) {print $i} }') )
for pattern in "${s2[@]}"; do
ssh -q host tail -f /some/path |
grep -w -i --line-buffered "$pattern" > some_file 2>/dev/null &
done
Now, the code is not doing what it's supposed to do. For example, if I run ./script s1,s2,s3,s4,.....
it prints all lines that contain any of s1,s2,s3....
The script is supposed to do something like grep "$s1" | grep "$s2" | grep "$s3" ...., i.e. only print lines that contain all of them.
grep doesn't have an option to match all of a set of patterns. So the best solution is to use another tool, such as awk (or your choice of scripting languages, but awk will work fine).
Note, however, that awk and grep have subtly different regular expression implementations. It's not clear from the question whether the target strings are literal strings or regular expression patterns, and if the latter, what the expectations are. However, since the argument comes delimited with commas, I'm assuming that the pieces are simple strings and should not be interpreted as patterns.
If you want the strings to be interpreted as patterns, you can change index to match in the following little program:
ssh -q host tail -f /some/path |
awk -v STRINGS="$1" -v IGNORECASE=1 \
'BEGIN{split(STRINGS,strings,/,/)}
{for(i in strings)if(!index($0,strings[i]))next}
{print;fflush()}'
Note:
IGNORECASE is only available in gnu awk; in (most) other implementations, it will do nothing. It seems that is what you want, based on the fact that you used -i in your grep invocation.
fflush() is also an extension, although it works with both gawk and mawk. In Posix awk, fflush requires an argument; if you were using Posix awk, you'd be better off printing to stderr.
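A quick local test of the filter (made-up input standing in for the ssh ... tail -f stream; only lines containing both strings survive):

$ printf 'alpha beta\nalpha gamma\nbeta gamma alpha\n' |
awk -v STRINGS="alpha,beta" -v IGNORECASE=1 '
BEGIN{split(STRINGS,strings,/,/)}
{for(i in strings)if(!index($0,strings[i]))next}
{print;fflush()}'
alpha beta
beta gamma alpha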
You can use extended grep:
egrep "$s1|$s2|$s3" fileName
If you don't know how many patterns you need to grep, but you have all of them in an array called s, you can use
egrep $(sed 's/ /|/g' <<< "${s[@]}") fileName
This creates a herestring from all elements of the array; sed replaces bash's field separator (the space) with |, and feeding that to egrep greps for all the strings in the array s.
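For example (the array contents here are made up):

$ s=(foo bar baz)
$ sed 's/ /|/g' <<< "${s[@]}"
foo|bar|baz

so egrep receives the pattern foo|bar|baz.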
test.sh:
#!/bin/bash -x
a=" $#"
grep ${a// / -e } .bashrc
It works like this:
$ ./test.sh 1 2 3
+ a=' 1 2 3'
+ grep -e 1 -e 2 -e 3 .bashrc
(here follows lots of text matching the arguments)

how to print user1 from user1@10.129.12.121 using shell scripting or sed

I want to print the name part of the address using shell scripting. So user1@12.12.23.234 should give the output "user1", and similarly 11234@12.123.12.23 should give the output 11234.
Reading from the terminal:
$ IFS=@ read user host && echo "$user"
<user1@12.12.23.234>
user1
Reading from a variable:
$ address='user1@12.12.23.234'
$ cut -d@ -f1 <<< "$address"
user1
$ sed 's/@.*//' <<< "$address"
user1
$ awk -F@ '{print $1}' <<< "$address"
user1
Using bash parameter expansion:
EMAIL='user@server.com'
echo "${EMAIL%@*}"
This is shell parameter expansion; although often described as a Bash feature, the % form is specified by POSIX, so it works in plain sh as well, and it is probably faster since it doesn't fork a process to handle the editing.
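As a side note (my own illustration), % removes the shortest matching suffix and %% the longest, which only matters if the string can contain more than one @:

x='a@b@c'
echo "${x%@*}"    # a@b
echo "${x%%@*}"   # a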
Using sed:
echo "$EMAIL" | sed -e 's/#.*//'
This tells sed to replace the # character and as many characters that it can find after it up to the end of line with nothing, ie. removing everything after the #.
This option is probably better if you have multiple emails stored in a file, then you can do something like
sed -e 's/#.*//' emails.txt > users.txt
Hope this helps =)
I tend to use expr for this kind of thing:
address='user1@12.12.23.234'
expr "$address" : '\([^@]*\)'
This is a use of expr for its pattern matching and extraction abilities. Translated, the above says: Please print out the longest prefix of $address that doesn't contain an @.
The expr tool is covered by Posix, so this should be pretty portable.
As a note, some historical versions of expr will interpret an argument with a leading - as an option. If you care about guarding against that, you can add an extra letter to the beginning of the string, and just avoid matching it, like so:
expr "x$address" : 'x\([^#]*\)'
