Fish shell input redirection from subshell output - shell

When I want to run Wireshark locally to display a packet capture running on another machine, this works on bash, using input redirection from the output of a subshell:
wireshark -k -i <(ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0")
From what I could find, the syntax for similar behavior in the fish shell is the same, but when I run that command in fish, I get Wireshark's output on the terminal and the Wireshark window never appears.
Is there something I'm missing?

What you're using there in bash is process substitution (the <() syntax). It is bash-specific (although zsh adopted the same syntax, along with its own =()).
fish does have process substitution, but under a different syntax: (process | psub). For example:
wireshark -k -i (ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | psub)
bash        | equivalent in fish
----------- | ------------------
cat <(ls)   | cat (ls | psub)
ls > >(cat) | N/A (need to find a way to use a pipe, e.g. ls | cat)
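As an aside, a minimal fish sketch of psub outside this Wireshark case, assuming two small text files a.txt and b.txt exist:
# fish: compare the sorted contents of two files, like diff <(sort a.txt) <(sort b.txt) in bash
diff (sort a.txt | psub) (sort b.txt | psub)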

The fish equivalent of <() isn't well suited to this use case: psub normally buffers the producer's output to a temporary file before the consuming command even starts, so a never-ending live capture would never hand Wireshark anything. Is there some reason you can't use this simpler and more portable formulation?
ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | wireshark -k -i -

How to run a command like xargs on a grep output of a pipe of a previous xargs from a command in Bash

I'm trying to understand what's happening here out of curiosity, even though I can just copy and paste the output of the terminal to do what I need to do. The following command does not print anything.
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install" {} 2>&1 | grep -Po "Use '\K.*(?=')" | parallel "{}"
The directory I call ls on contains a bunch of filenames that start with the string I want to extract, which ends at the first dash (so stringexample-4.2009 yields stringexample). Each result is piped into parallel (like xargs, but running each line as a separate command). After running sudo port install <stringexample>, I get error output like this:
Unable to activate port <stringexample>. Use 'port -f activate <stringexample>' to force the activation.
Now, I wish to run port -f activate <stringexample>. However, I cannot seem to do anything with the output (for example port -f activate gettext) that I get on the terminal.
I cannot even do ... | grep -Po "Use '\K.*(?=')" | xargs echo or ... | grep -Po "Use '\K.*(?=')" >> commands_to_run.txt (the redirection only creates an empty file), despite the shorter part of the command:
ls /opt/local/var/macports/registry/portfiles -1 | sed 's/-.*//g' | sort -u | parallel "sudo port -N install {}" 2>&1 | grep -Po "Use '\K.*(?=')"
printing the commands to the terminal. Why does the pipe operator not work here? If the commands I wish to run are outputting to the terminal, surely there's got to be a way to capture them.
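For what it's worth, a hedged two-step sketch that sidesteps the buffering question entirely by saving the install output to a log first (the paths and the grep pattern are taken from the question; the log is assumed to suggest only port -f activate style commands):
# stage 1: run all installs, saving combined stdout+stderr to a log file
ls -1 /opt/local/var/macports/registry/portfiles | sed 's/-.*//' | sort -u \
    | parallel "sudo port -N install {}" > install.log 2>&1
# stage 2: extract the suggested commands from the log and run each one
grep -Po "Use '\K.*(?=')" install.log | while read -r cmd; do sudo $cmd; done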

bash config file from remote source with an argument [duplicate]

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work the way it would if I saved the script to a file and then executed it. For example, readline doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternatively, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and the <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
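On a typical Linux system that prints a path under /dev/fd (the exact descriptor number varies), for example:
$ echo <(cat /dev/null)
/dev/fd/63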
This is the way to execute a remote script while passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bitly/10hA8iC | bash
Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight variation of the answer by @user77115.
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often use the following, and it is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems. I tried many alternatives, and only the following worked:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or by a bash version too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.
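Everything after -- lands in the script as positional parameters; a hedged sketch of how a script like bbstart.sh might consume repeated -v flags (the actual contents of bbstart.sh are not shown here, this is just an assumed pattern):
#!/bin/bash
# count how many times -v was passed
verbose=0
while getopts "v" opt; do
  case "$opt" in
    v) verbose=$((verbose + 1)) ;;
  esac
done
echo "verbosity level: $verbose"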
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding the execution of scripts directly from URLs. You should be sure the URL is safe and check the content of the script before executing it; you can use a SHA256 checksum to validate the file first.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
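A hedged sketch of the SHA256 check mentioned above; <expected-sha256> is a placeholder for the hash published alongside the script:
curl -s "$SOURCE" -o ./my_sample.sh
# note: sha256sum -c expects "HASH  FILENAME" (two spaces) lines on stdin
echo "<expected-sha256>  ./my_sample.sh" | sha256sum -c - || exit 1
chmod +x my_sample.sh && ./my_sample.sh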
This way is good and conventional:
17:04:59@itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31@node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
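The reason read prompts keep working here, unlike with curl ... | bash, is that the script text arrives via command substitution inside -c rather than on stdin, so stdin stays attached to the terminal. With the question's script, a session would look roughly like:
$ ${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
Hello, world!
What is your name? Alice
Hello, Alice!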
curl http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$

Using shell expansion with Ansible

I'm trying to execute a remote command via Ansible which requires gathering the PID of the process:
ansible webserver -m shell -a 'jstack -l $(pgrep -f java)'
However, it seems Ansible is not able to expand the shell command in the parentheses (I tried with backticks as well):
127.0.0.1 | FAILED | rc=1 >>
Usage:
jstack [-l] <pid>
Executing just the command in the expansion reveals that expansion does not take place:
ansible webserver -a 'echo $(pgrep -f java)'
192.168.0.1 | success | rc=0 >>
$(pgrep -f java)
You'll want to escape the dollar sign, so that your local shell doesn't expand the command substitution before Ansible sees it, like so:
ansible all -i inventories/prod/hosts -m shell -a "echo \$(pgrep -f java)"
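A hedged illustration of where the expansion can go wrong: inside unescaped double quotes, the control machine's own shell expands the substitution before ansible runs, so the remote host receives whatever pgrep found locally (often nothing) instead of its own PID:
# expanded locally on the control machine -- usually not what you want
ansible webserver -m shell -a "jstack -l $(pgrep -f java)"
# escaped, so the shell on webserver performs the expansion
ansible webserver -m shell -a "jstack -l \$(pgrep -f java)"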

SSH to server, Sudo su - then run commands in bash [duplicate]

I have the following
#!/bin/bash
USER='scott'
PASS='tiger'
ssh -t $USER#server006.web.com "sudo su - http"
This works, but I was then trying to get it to run a script afterwards, and couldn't manage it using either -c or <.
The script does a grep like this:
grep -i "Exception:" /opt/local/server/logs/exceptions.log | grep -e "|*-*-*:*:*,*|" | tail -1 | awk -F'|' '{print $2}' >> log.log
This also works fine on its own, but I need to be http to do it.
I cannot SCP the output of the script back to server001 either, so I'm stuck here.
Any ideas would be really appreciated.
Ben
Try
ssh -t $USER@server006.web.com 'sudo -u http grep -i "Exception:" /opt/local/server/logs/exceptions.log | grep -e "|*-*-*:*:*,*|" | tail -1 | awk -F"|" "{print \$2}" >> log.log'
sudo already runs the command as a different user, so there's no need to su again.
The only reason to do sudo su is to have a quick way to start a new shell as another user.
You probably want sudo -u instead of sudo su -:
ssh -t $USER@server006.web.com sudo -u http script
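If the grep pipeline lives in a local script, one hedged variant is to feed it to a remote shell running as http over stdin (get_exceptions.sh is a hypothetical local file holding the pipeline; this assumes sudo does not prompt for a password, since stdin is carrying the script rather than a terminal):
ssh $USER@server006.web.com 'sudo -u http bash -s' < get_exceptions.sh
If the >> log.log redirection is dropped from the script itself, the output can instead be appended to a local file by adding >> log.log after the ssh command, which also avoids the need to SCP anything back.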
Guess I'm late to the party.
My solution:
ssh -t $USER#server006.web.com "sudo cat /etc/shadow"
and replace cat /etc/shadow with your desired program.

Is it possible to run two programs simultaneously or one after another using a bash or expect script?

I have basically two lines of code which are:
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures
tshark -i /tmp/Captures -T pdml >results.xml
if I run them both in separate terminals it works fine.
However I've been trying to create a simple bash script that will execute them at the same time, but have had no luck. Bash script is as follows:
#! /bin/bash
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures &
tshark -i /tmp/Captures -T pdml >results.xml &
If anyone could possibly help in getting this to work, or in getting it to "run tcpdump until a key is pressed, then run tshark, then close when a key is pressed again", I'd be grateful.
I have only a little bash scripting experience.
Do you need to run tcpdump and tshark separately? A pipe will feed the output of tcpdump straight into tshark:
tcpdump -i eth0 -s 65535 -w - | tshark -i - -T pdml > results.xml
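If the original "capture until a key is pressed, then analyse, then exit on another key press" flow is still wanted, here is a hedged bash sketch (assuming tcpdump has the needed privileges and /tmp/Captures is writable):
#!/bin/bash
# start the capture in the background and remember its PID
tcpdump -i eth0 -s 65535 -w /tmp/Captures &
capture_pid=$!

read -n 1 -s -r -p "Capturing... press any key to stop and run tshark"
echo
kill "$capture_pid"
wait "$capture_pid" 2>/dev/null

# analyse the saved capture file (tshark reads files with -r, not -i)
tshark -r /tmp/Captures -T pdml > results.xml

read -n 1 -s -r -p "Done, results.xml written. Press any key to exit"
echo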
