bash config file from remote source with an argument [duplicate] - bash

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work the way it would if I saved the script to a file and then executed it. For example, the read prompt doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.

source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternately, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
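On a typical Linux system that prints a file-descriptor path (the exact number varies), which is the "filename" bash receives in the command above:
$ echo <(cat /dev/null)
/dev/fd/63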

This is how to execute a remote script while passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2

For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.

Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bitly/10hA8iC | bash

Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)

You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash

The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight variation of the answer by @user77115

You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)

I often use the following, and it is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems; I tried many alternatives, and only the following worked:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or by a bash version that is too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
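If you do need the download-first workaround, a slightly tidier sketch of the same idea uses mktemp and a cleanup trap, so the temp file is removed even if the script fails (someurl stands for your script's URL, as above):
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
curl -s someurl -o "$tmpfile" && sh "$tmpfile"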

Also:
curl -sL https://.... | sudo bash -

Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.

In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding the execution of scripts directly from URLs. You should be sure the URL is safe and check the content of the script before executing it; you can use a SHA256 checksum to validate the file before running it.
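A minimal sketch of that checksum step (the URL is a placeholder, and expected_sha256 stands for a known-good hash obtained out of band):
expected_sha256='<known-good sha256 hash>'
curl -fsSL https://example.com/script.sh -o /tmp/script.sh
echo "${expected_sha256}  /tmp/script.sh" | sha256sum -c - && sh /tmp/script.sh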

Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh

This way is good and conventional:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#

If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
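For instance, with the script from the question and a bash login shell, the interactive prompt comes through as expected (illustrative transcript; "Alice" is typed by the user):
$ ${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
Hello, world!
What is your name? Alice
Hello, Alice!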

curl http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl -s https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$

Related

Why can't I use 'sudo su' within a shell script? How to make a shell script run with sudo automatically

I cannot figure out what's wrong with my bash script. When I run the commands in the terminal one by one, each command separately, it works.
#!/bin/bash
sudo su <<EOF
mkdir /home/ubuntu/backup/
cp "$(ls -t /usr/lib/unifi/data/backup/autobackup | head -1)" /home/ubuntu/backup/
curl --insecure --user root:password -T "$(ls -t /home/ubuntu/backup/ | head -1)" sftp://vps2.duckdns.org:/root/backup.unf
EOF
However, when I run the above bash script, it gives me plenty of errors:
bash -v test.sh
#!/bin/bash
sudo su <<EOF
mkdir /home/ubuntu/backup/
cp "$(ls -t /usr/lib/unifi/data/backup/autobackup | head -1)" /home/ubuntu/backup/
curl --insecure --user root:password -T "$(ls -t /home/ubuntu/backup/ | head -1)" sftp://vps2.duckdns.org:/root/backup.unf
EOF
ls: cannot access '/usr/lib/unifi/data/backup/autobackup': Permission denied
ls: cannot access '/home/ubuntu/backup/': No such file or directory
cp: cannot stat '': No such file or directory
curl: (78) Could not open remote file for reading: SFTP server: No such file
Any help will be very much appreciated!
TIA
It's trying to execute the command substitutions in the original shell, which runs with the regular user's permissions. They need to be executed by su. Quote the EOF token to prevent expansions in the here-document.
sudo su <<'EOF'
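Applied to the script from the question, that looks like this (only the EOF quoting changes; note that ls prints bare file names, so the cp may still need a cd or a full path to succeed):
#!/bin/bash
sudo su <<'EOF'
mkdir /home/ubuntu/backup/
cp "$(ls -t /usr/lib/unifi/data/backup/autobackup | head -1)" /home/ubuntu/backup/
curl --insecure --user root:password -T "$(ls -t /home/ubuntu/backup/ | head -1)" sftp://vps2.duckdns.org:/root/backup.unf
EOF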

How to get the second word of an output from a command in shell?

Hi I am trying to make a shell script.
sudo usermod -s $(whereis -b zsh) $(whoami)
$(whereis -b zsh) makes an error with zsh: command not found zsh:
The error seems to occur because the output of whereis -b zsh is zsh: /usr/bin/zsh /usr/lib/x86_64-linux-gnu/zsh /bin/zsh /etc/zsh /usr/share/zsh /home/linuxbrew/.linuxbrew/bin/zsh
Now I would like to use /usr/bin/zsh for the script as an output. Is there any way to get the second word from the output of whereis -b zsh?
How should the script look to get what I need?
Shell scripting is more difficult than I thought. Thank you everyone in advance!
Better to add quotes around the command expansions:
sudo usermod -s "$(whereis zsh | cut -d ' ' -f2)" "$(whoami)"
Alternate method by getting zsh from the $PATH:
sudo usermod -s "$(command -v zsh)" "$(id -un)"
If you run it under bash:
Instead of parsing the output of whereis, use type:
sudo usermod -s "$(type -P zsh)" "$(whoami)"
Don't forget that type -P yields an empty string, if the program you are searching for is not in the PATH.
If it is not bash, you can also do a
sudo usermod -s "$(which zsh)" "$(whoami)"
Note that which issues an error message if the program can't be found, so if you need an empty output in this case you'll have to throw away stderr.
UPDATE: Thinking of it, IMO a better solution is the one suggested by Lea Gris: command -v is available on bash and POSIX shells, and yields empty output if the file can't be found.
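A small sketch combining that suggestion with a guard for the not-found case:
zsh_path=$(command -v zsh) || { echo 'zsh is not in PATH' >&2; exit 1; }
sudo usermod -s "$zsh_path" "$(whoami)"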
You can do something like:
whereis -b zsh | awk '{print $2}'
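Plugged back into the command from the question, that becomes:
sudo usermod -s "$(whereis -b zsh | awk '{print $2}')" "$(whoami)"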

Bash: get output of sudo command on remote using SSH

I'm getting incredibly frustrated here. I simply want to run a sudo command on a remote SSH connection and perform operations on the results I get locally in my script. I've looked around for close to an hour now and not seen anything related to that issue.
When I do:
#!/usr/bin/env bash
OUT=$(ssh username@host "command" 2>&1 )
echo $OUT
Then, I get the expected output in OUT.
Now, when I try to do a sudo command:
#!/usr/bin/env bash
OUT=$(ssh username@host "sudo command" 2>&1 )
echo $OUT
I get "sudo: no tty present and no askpass program specified". Fair enough, I'll use ssh -t.
#!/usr/bin/env bash
OUT=$(ssh -t username@host "sudo command" 2>&1 )
echo $OUT
Then, nothing happens. It hangs, never asking for the sudo password in my terminal. Note that this happens whether I send a sudo command or not; the ssh -t hangs, period.
Alright, let's forget the variable for now and just issue the ssh -t command.
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1
Then, well, it works no problem.
So the issue is that ssh -t inside a variable just doesn't do anything, but I can't figure out why or how to make it work for the life of me. Anyone with a suggestion?
If your script is rather concise, you could consider this:
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1 \
| ( \
read output
# do something with $output, e.g.
echo "$output"
)
For more information, consider this: https://stackoverflow.com/a/15170225/10470287
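If you need every line of the output rather than just the first, the same pipe can feed a loop (a sketch along the same lines):
ssh -t username@host "sudo command" 2>&1 | while IFS= read -r line; do
    # handle each line here, e.g.
    echo "$line"
done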

Fish shell input redirection from subshell output

When I want to run Wireshark locally to display a packet capture running on another machine, this works on bash, using input redirection from the output of a subshell:
wireshark -k -i <(ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0")
From what I could find, the syntax for similar behavior on the fish shell is the same but when I run that command on fish, I get the Wireshark output on the terminal but can't see the Wireshark window.
Is there something I'm missing?
What you're using there in bash is process substitution (the <() syntax). It is a bash specific syntax (although zsh adopted this same syntax along with its own =()).
fish does have process substitution under a different syntax ((process | psub)). For example:
wireshark -k -i (ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | psub)
bash           fish equivalent
cat <(ls)      cat (ls|psub)
ls > >(cat)    N/A (need to find a way to use a pipe, e.g. ls|cat)
The fish equivalent of <() isn't well suited to this use case. Is there some reason you can't use this simpler and more portable formulation?
ssh user@machine "sudo dumpcap -P -w - -f '<filter>' -i eth0" | wireshark -k -i -

Update root crontab remotely for many systems by script

I am trying to update the crontab file of 1000+ systems using a for loop from a jump host.
The below doesn't work.
echo -e 'pass365\!\n' | sudo -S echo 'hello' >> /var/spool/cron/root
-bash: /var/spool/cron/root: Permission denied
I do have (ALL) ALL in the sudoers file.
This is another solution:
echo 'pass365\!' | sudo -S bash -c 'echo "hello">> /var/spool/cron/root'
The below worked for me.
echo 'pass365\!' | sudo -S echo 'hello' | sudo -S tee -a /var/spool/cron/root > /dev/null
Problem 1: You are trying to send the password via echo to sudo.
Problem 2: You can't use shell redirection in a sudo command like that.
Between the two of these, consider setting up ssh public key authorization and doing
ssh root@host "echo 'hello' >> /var/spool/cron/root"
You may eventually get sudo working but it will be so much more pain than this.
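Combined with the jump-host loop mentioned in the question, a minimal sketch (hosts.txt is a hypothetical list of target systems; key-based ssh as root is assumed):
while read -r host; do
    # -n stops ssh from swallowing the rest of hosts.txt on stdin
    ssh -n "root@${host}" "echo 'hello' >> /var/spool/cron/root"
done < hosts.txt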
