This question already has answers here:
Shell script: Run function from script over ssh
(3 answers)
Nested grep with SSH
(1 answer)
Closed 4 years ago.
So I've got a bash script in which I want to SSH onto one of my remote servers and run some commands. This is my code:
MYFUNCTION="
function my_function
{
VAR=$(readlink -f current | sed 's/[^0-9]*//g')
}
my_function
"
ssh -l ${USERNAME} ${HOSTNAME} "${MYFUNCTION}"
The problem is that the VAR variable is not being populated with the command output as it should. I've run the exact same command myself, and I get the desired output, but when doing it through SSH in the bash script, it doesn't work as expected. What am I doing wrong here?
You are putting the code in double quotes, so the variables and the command substitution $(...) are expanded on your local machine before ssh even runs. Do echo "$MYFUNCTION" and you'll probably be surprised.
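To see the problem in isolation, here is a quick local demonstration (not from the original post); the command substitution runs at assignment time, not when the string is later used:
VAR="captured at: $(date)"
sleep 2
echo "$VAR"   # still shows the old timestamp: $(date) ran two seconds ago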
Try using a quoted here document:
# Note the single quotes in the next line
ssh -l "$USERNAME" "$HOSTNAME" <<'END_CODE'
function my_function
{
cd www
VAR=$(readlink -f current | sed 's/[^0-9]*//g')
VAR2=$(find . -maxdepth 1 ! -newer "$VAR" ! -name "$VAR"| sort | sed '$!d')
}
my_function
END_CODE
Note also all the quoted variables.
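If you do need to hand a local value to the remote side despite the quoted here document, one option is to pass it as a positional parameter, using printf %q so it survives quoting on the remote end (a sketch; LOCAL_DIR is a hypothetical variable, not from the original question):
ssh -l "$USERNAME" "$HOSTNAME" "bash -s -- $(printf '%q' "$LOCAL_DIR")" <<'END_CODE'
# $1 arrives from the local machine, already safely quoted
cd "$1"
readlink -f current | sed 's/[^0-9]*//g'
END_CODE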
This question already has answers here:
History command works in a terminal, but doesn't when written as a bash script
(3 answers)
Closed 2 years ago.
Suppose we have env.sh file that contains:
echo $(history | tail -n2 | head -n1) | sed 's/[0-9]* //' #looking for the last typed command
When executing this script with bash env.sh, the output is empty, but when we execute the script with ./env.sh, we get the last typed command.
I just want to know the difference between them.
Notice that if we add #!/bin/bash at the beginning of the script, ./env.sh will no longer output anything either.
Bash disables history in non-interactive shells by default. If you want to enable it anyway, you can do so like this:
#!/bin/bash
echo $HISTFILE # will be empty in a non-interactive shell
HISTFILE=~/.bash_history # set it again
set -o history
# the command will work now
history
This is done to avoid cluttering your history with every command run by every shell script.
Adding a hashbang (meaning the file is to be interpreted by the program named on that line) changes how ./env.sh runs: the script is now executed by the /bin/bash binary, i.e. a fresh non-interactive bash, so it again prints no history. Without the hashbang, ./env.sh is run by a forked copy of your current interactive shell, which still has the history list in memory.
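If you want to check which case you are in, bash puts an i in $- for interactive shells; a minimal check (not from the original answer):
case $- in
    *i*) echo "interactive shell: history is available" ;;
    *)   echo "non-interactive shell: history is off by default" ;;
esac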
This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Script fails with spaces in directory names
(1 answer)
Closed 2 years ago.
I would like to make a bash script where I read a list of IP addresses and run the following command:
smbclient \\\\$ip\\ipc$ -U ".\user" --pw-nt-hash
which should exit and then try the next IP, printing a message regardless of whether the connection was successful. However, it does not run against all the IPs in the list; it only tries the first one.
#!/bin/bash
IPLIST="ip"
for ip in $(cat ip)
do
smbclient \\\\$ip\\C$ -U ".\user" --pw-nt-hash "user"
exit
done
If you don't want the script to exit after the first smbclient, drop the exit command.
smbclient \\\\$ip\\ipc$ -U ".\user" --pw-nt-hash, which does an exit
This exit is not done by smbclient, but rather by the script; therefore it ends.
You seem to assume that the exit gets passed as input to smbclient, but that's not how this works. You run smbclient and when it finishes, your script continues, and executes the exit. See Pass commands as input to another command (su, ssh, sh, etc) for a fuller discussion.
Also, don't read lines with for.
#!/bin/bash
while read -r ip; do
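# <<<exit feeds the word "exit" to smbclient's standard input; it also
# keeps smbclient from consuming the remaining IPs from the loop's stdin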
smbclient \\\\$ip\\C$ -U ".\user" --pw-nt-hash "user" <<<exit
done <ip
This question already has answers here:
Run Perl Script From Unix Shell Script
(2 answers)
Closed 3 years ago.
How do I replace [RUN_ABOVE_PERL_SORTING_SCRIPT_HERE] with something that runs this perl script stored in a bash variable?
#!/usr/bin/env bash
# The perl script to sort getfacl output:
# https://github.com/philips/acl/blob/master/test/sort-getfacl-output
find /etc -name .git -prune -o -print | xargs getfacl -peL | [RUN_ABOVE_PERL_SORTING_SCRIPT_HERE] > /etc/.facl.nogit.txt
Notes:
I do not want to employ 2 files (a bash script and a perl script) to solve this problem; I want the functionality to be stored all in one bash script file.
I do not want to immediately run the perl script when storing the perl-script variable, because I want to run it later in the getfacl(1) bash pipeline shown below.
There are many similar Stack Overflow questions and answers, but none that I can find (with clean-reading code, anyway) that solve both the a) multi-line and b) delayed-execution portions of this problem.
And to clarify: this problem is not specifically about getfacl(1), which is simply a catalyst to explore how to embed perl scripts--and possibly other scripting languages like python--into bash variables for delayed execution in a bash script.
Employ the bash read command, which reads the perl script into a variable that's executed later in the bash script.
#!/usr/bin/env bash
# sort getfacl output: the following code is copied from:
# https://github.com/philips/acl/blob/master/test/sort-getfacl-output
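# read -d '' reads everything up to a NUL byte -- here, the entire quoted
# here-document below -- into SCRIPT without executing any of it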
read -r -d '' SCRIPT <<'EOS'
#!/usr/bin/env perl -w
undef $/;
print join("\n\n", sort split(/\n\n/, <>)), "\n\n";
EOS
find /etc -name .git -prune -o -print | xargs getfacl -peL | perl -e "$SCRIPT" > /etc/.facl.nogit.txt
This is covered by Run Perl Script From Unix Shell Script.
As they apply here:
You can pass the code to Perl using -e/-E.
perl -e"$script"
or
perl -e"$( curl "$url" )"
You can pass the code via STDIN.
printf %s "$script" | perl
or
curl "$url" | perl
(This won't work for you because you need STDIN.)
You can create a virtual file.
perl <( printf %s "$script" )
or
perl <( curl "$url" )
You can take advantage of perl's -x option.
(Not applicable if you want to download the script dynamically.)
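Not from the original answer, but as a minimal sketch of the -x route, assuming the perl code is embedded at the bottom of the bash script itself: perl -x skips everything before the first line that starts with #! and contains "perl".
#!/usr/bin/env bash
find /etc -name .git -prune -o -print | xargs getfacl -peL | perl -x "$0" > /etc/.facl.nogit.txt
exit # stop bash before it reaches the perl code below
#!perl
undef $/;
print join("\n\n", sort split(/\n\n/, <>)), "\n\n";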
All of the above assume the following command has already been executed:
url='https://github.com/philips/acl/blob/master/test/sort-getfacl-output'
Some of the above assume the following command has already been executed:
script="$( curl "$url" )"
This question already has answers here:
What is the cleanest way to ssh and run multiple commands in Bash?
(14 answers)
Shell script: Run function from script over ssh
(3 answers)
Execute bash command stored in associative array over SSH, store result
(2 answers)
Closed 3 years ago.
I want to run a script that checks if the specific folder exists on a remote server then greps a specific line from a specific file in that server to the local machine.
if ssh -t -t user@server [ -d /etc/nginx ]; then
ssh -t -t user@server
ls -1a /etc/nginx/conf.d | grep $1 | xargs cat | grep specific_line | grep .specific-extension | awk '{print $2}'
fi
I use awk '{print $2}' to print out the second field of the grepped line.
So I want this to be output on my local machine, or even better, I want to put it in a variable in the script.
I haven't found anything on the internet that solves even the simplified version of this.
I have PSK enabled on the servers so I don't have to enter the password when I ssh.
I just did something similar using paramiko in Python. I test the sudo privileges of many accounts over hundreds of IPs in a few minutes. You can run the command and get stdin, stdout, and stderr. That should get you started in the right direction 😉
You can use logins and certs with it too if that helps.
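For reference, the capture-into-a-variable part can also be done in plain bash; a sketch, assuming key-based auth (user@server and the simple pattern handling are illustrative assumptions, not from the original post):
pattern=$1
# one connection: test for the directory and run the pipeline remotely,
# capturing the remote stdout into a local variable
result=$(ssh user@server "[ -d /etc/nginx ] && ls -1a /etc/nginx/conf.d \
  | grep \"$pattern\" | xargs cat | grep specific_line \
  | grep .specific-extension | awk '{print \$2}'")
echo "$result"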
This question already has answers here:
Difference between single and double quotes in Bash
(7 answers)
Closed 6 years ago.
I am trying to write a script that adds IPs to /etc/hosts, but when it does add a line to /etc/hosts, the line is empty.
I guess there is an issue with the variable name not being replaced by its value inside the double quotes:
machines=("dell" "pb")
ips=( "192.168.0.70" "192.168.0.60")
n=-1
for nom_machine in "${machines[@]}"
do
n=$(( $n + 1 ))
ip_machine=${ips[$n]}
link=" $ip_machine $nom_machine"
$(sudo /bin/bash -c 'echo -e $link >> /etc/hosts')
done
Any idea why this add empty lines to /etc/hosts ?
Variables aren't expanded in single quotes, only in double quotes. You also don't need the $(...) around the sudo command.
sudo bash -c "echo -e $link >> /etc/hosts"
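A quick way to see the difference (a self-contained demonstration, separate from the fix above):
link="192.168.0.70 dell"
bash -c 'echo $link'   # prints an empty line: the child shell has no $link
bash -c "echo $link"   # prints the line: $link was expanded by the outer shell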
As a script style issue, I would suggest removing the sudo call altogether. Instead, expect the person running the script run it with sudo if they don't have sufficient permissions. Your script would just have:
echo -e "$link" >> /etc/hosts
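Putting both fixes together, the loop might look like this (a sketch; run the whole script with sudo as suggested above):
machines=("dell" "pb")
ips=("192.168.0.70" "192.168.0.60")

for n in "${!machines[@]}"; do   # iterate over the array indices
    link=" ${ips[$n]} ${machines[$n]}"
    echo "$link" >> /etc/hosts
done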