BASH build dynamic command [duplicate] - bash

I have a shell script file like this:
#!/bin/bash
CONF_FILE="/tmp/settings.conf" #settings.conf contains OS_NAME="Caine Linux"
source $CONF_FILE
display_os_name() { echo "My OS is:" $OS_NAME; }
#using the function locally works fine
display_os_name
#displays: My OS is: Caine Linux
#using the function on the remote host doesn't work
ssh user@host "$(declare -f); display_os_name"
#displays: My OS is:
If I remove the -f and use just ssh user@host "$(declare); display_os_name", it works but displays these errors and warnings:
bash: line 10: BASHOPTS: readonly variable
bash: line 18: BASH_VERSINFO: readonly variable
bash: line 26: EUID: readonly variable
bash: line 55: PPID: readonly variable
bash: line 70: SHELLOPTS: readonly variable
bash: line 76: UID: readonly variable
If I use ssh user@host "$(declare); display_os_name >/dev/null" to suppress the warnings, only the output of the function (My OS is: Caine Linux) is suppressed, not the warnings.
Is there a way to run local functions together with sourced local files on a remote SSH host?

An easy approach (if your local side is Linux) is to use set -a to enable automatic export before your source command; pass /proc/self/environ on stdin; and parse it into a set of variables on the remote side.
Because BASHOPTS, EUID, etc. aren't environment variables, this avoids trying to modify them. (If you were complying with POSIX recommendations and using lowercase names for your own variables, you could even go as far as to ignore all-caps variables entirely; a sketch of that variant follows the code below.)
set -a # enable export of all variables defined, **before** the source operation
source /tmp/settings.conf
import_env() {
  while IFS= read -r -d '' item; do
    printf -v "${item%%=*}" "%s" "${item#*=}" && export "$item"
  done
}
cat /proc/self/environ | ssh user@host "$(declare -f); import_env; display_os_name"
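If your own variables do use lowercase names, a variant of import_env along these lines (a sketch, not tested against the original setup) would skip all-caps names entirely and so never touch PATH, HOME, or the readonly specials:
import_env() {
  while IFS= read -r -d '' item; do
    name=${item%%=*}
    # hypothetical refinement: skip names that contain no lowercase letters
    [[ $name == *[[:lower:]]* ]] || continue
    printf -v "$name" "%s" "${item#*=}" && export "$name"
  done
}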
Even easier is to just copy the file you want to source over the wire.
ssh user#host "$(declare -f); $(</tmp/settings.conf); display_os_name"

This method works with GNU bash, version 5.1.4(1)-release (x86_64-pc-linux-gnu):
#!/bin/bash
#################################################################################
CONF_FILE="/tmp/settings.conf"
#settings.conf contains OS_NAME="Caine Linux"
source $CONF_FILE
special_file='!abc123'
OS_NAME='my_server'
display_os_name()
{
echo "My OS is:" $OS_NAME
}
ssh -tt -q user@host << EOT
CONF_FILE=$CONF_FILE
special_file=$\\special_file
OS_NAME=$OS_NAME
$(typeset -f display_os_name)
display_os_name
EOT
#################################################################################

Related

Forcing string replacement in declared function of shell script

I'm working on a script to move some files to a remote server (see:
Function calls in Here Document for unix shell script for more details). To allow the script to work both on a local machine and on a remote server, I'm using 'declare -f' to wrap an existing function to be executed remotely. So far I have come up with this:
myscript.sh
REMOTE_HOST=myhost
TMP=eyerep-files
getMoveCommand()
{
    echo Src Dir: $2
    sudo cp ~/$TMP/start.ini ~/$1/start_b.ini
    ls ~/$2
    echo Target Dir: $1
    ls ~/$1
}
moveRemote()
{
    echo "attempting move with here doc"
    echo $(declare -fp getMoveCommand )
    ssh -t "$REMOTE_HOST" "$(declare -fp getMoveCommand); getMoveCommand ${1@Q} ${TMP@Q}"
}
moveFiles()
{
    case "$1" in
        # remote deploy
        remote)
            moveRemote $2
            ;;
        # local deploy
        local)
            getMoveCommand $2
            ;;
        *)
            echo "Usage: myscript.sh {local|remote}"
            exit 1
            ;;
    esac
}
moveFiles $1 $2
exit 0
If called with './myscript.sh remote dev' the script should ssh into the remote server and move a file from one folder to another. The problem I'm running into is the string replacement. I have a bunch of global variables acting as constants that getMoveCommand needs access to. In the example here there is only one (TMP) so I can simply pass it as an argument. In the actual script however, the work being done is more complicated and the number of arguments that would need to be passed in would make this solution unwieldy. Since those variables are never expected to change, it seems like it should be possible to force the string replacement to occur before sending the wrapped function along to ssh.
Is what I want to do possible, and if so how? If not, is there another way to handle this that doesn't require passing a large number of arguments to the function?
It is possible to use envsubst if you export the variable:
export TMP=foo
getMoveCommand() {
    echo TMP is $TMP
}
declare -fp getMoveCommand | envsubst
The script above prints:
getMoveCommand ()
{
    echo TMP is foo
}
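One way this might be combined with the ssh call from the question (a sketch, untested; the '$TMP' shell-format argument restricts envsubst to that one variable, so positional parameters and any other $ references in the function body are left alone):
export TMP=eyerep-files
ssh -t "$REMOTE_HOST" "$(declare -fp getMoveCommand | envsubst '$TMP'); getMoveCommand dev"
Here dev stands in for the target directory argument from the question.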
You can also send global variables using declare -p:
ssh -t "$REMOTE_HOST" "$(declare -fp getMoveCommand; declare -p GLOBAL_VAR_1 GLOBAL_VAR_2)"$'\n'"getMoveCommand ${1#Q} ${TMP#Q}"
You can also have another global variable that lists them so you can expand them easily:
GLOBAL_VARS=(GLOBAL_VAR_1 GLOBAL_VAR_2)
...
ssh -t "$REMOTE_HOST" "$(declare -fp getMoveCommand; declare -p "${GLOBAL_VARS[#]}")"$'\n'"getMoveCommand ${1#Q} ${TMP#Q}"
If your variables have a common prefix, you can also expand them through "${!PREFIX@}". No need to store them in a variable.
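For example, assuming the globals share the prefix GLOBAL_, something like this sketch ships all of them without naming each one:
ssh -t "$REMOTE_HOST" "$(declare -fp getMoveCommand; declare -p "${!GLOBAL_@}")"$'\n'"getMoveCommand ${1@Q} ${TMP@Q}"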
Or you might as well create an "export" function to keep things cleaner:
dump_env() {
    declare -fp getMoveCommand
    declare -p GLOBAL_VAR_1 GLOBAL_VAR_2
}
...
ssh -t "$REMOTE_HOST" "$(dump_env)"$'\n'"getMoveCommand ${1@Q} ${TMP@Q}"

No such file or directory in Heredoc, Bash

I am deeply confused by Bash's Heredoc construct behaviour.
Here is what I am doing:
#!/bin/bash
user="some_user"
server="some_server"
address="$user"#"$server"
printf -v user_q '%q' "$user"
function run {
    ssh "$address" /bin/bash "$@"
}
run << SSHCONNECTION1
sudo dpkg-query -W -f='${Status}' nano 2>/dev/null | grep -c "ok installed" > /home/$user_q/check.txt
softwareInstalled=$(cat /home/$user_q/check.txt)
SSHCONNECTION1
What I get is
cat: /home/some_user/check.txt: No such file or directory
This is very bizarre, because the file exists if I connect over SSH and check that path myself.
What am I doing wrong? The file is not executable, just a text file.
Thank you.
If you want the cat to run remotely, rather than locally during the heredoc's evaluation, escape the $ in the $(...):
softwareInstalled=\$(cat /home/$user_q/check.txt)
Of course, this only has meaning if some other part of your remote script then refers to "$softwareInstalled" (or, since it's in an unquoted heredoc, "\$softwareInstalled").
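For completeness, here is a sketch of how the heredoc from the question might look with the remote expansions escaped (note that ${Status} also needs a backslash, otherwise the local shell expands it to an empty string before ssh ever sees it):
run << SSHCONNECTION1
sudo dpkg-query -W -f='\${Status}' nano 2>/dev/null | grep -c "ok installed" > /home/$user_q/check.txt
softwareInstalled=\$(cat /home/$user_q/check.txt)
echo "\$softwareInstalled"
SSHCONNECTION1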

permission error on modifying root owned authorized keys file

I need to exchange public keys between two systems, A and B.
These are the steps I am following:
Copy the content of id_rsa.pub from the /root/.ssh directory and save it in the variable 'key'.
SSH to B as the ubuntu user: ssh -i key_file ubuntu@B
Switch to the root login with sudo su.
Append the variable $key to /root/.ssh/authorized_keys.
But the file authorized_keys is owned by root, hence I get the permission error.
I cannot connect directly to system B as root. The only way is to connect as ubuntu and change to root.
I tried the following shell script:
# Get all the IPs from the source file
sudo grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $1 | sort -u > /tmp/list_of_servers.txt
# Get the public key
pubkey=$(sudo cat /root/.ssh/id_rsa.pub)
# For each server
while read ip;
do
    (echo "$ip"
    # ssh to the server
    ssh -i $2 $3@$ip
    # append key to authorized_keys file
    sudo -c "echo $pubkey >> /root/.ssh/authorized_keys" root
    echo "done $ip" )
done < /tmp/list_of_servers.txt
but it didn't work; it's giving me a permission error.
Can someone help me with the last step?
A fully paranoid approach to the mechanics of the SSH connection might be something like this:
# generate a shell-escaped version of the public key (spaces, wildcards, etc)
printf -v pubkey_q '%q' "$pubkey"
# generate a shell command using that quoted form
cmd="echo $pubkey_q >>/root/.ssh/authorized_keys"
# generate a shell-quoted sudo command invoking the above in a shell
printf -v cmd_q '%q ' sudo bash -c "$cmd"
# ...and execute it on the other end of a ssh connection.
ssh -i "$2" "$3#$ip" "$cmd_q"
printf %q is a bash extension which escapes a string in such a way that being parsed by a shell -- whether in a string that's eval'd, passed to ssh with bash as the remote shell, or passed to bash -c -- evaluates back to the original data. (For regular whitespace its output is safe for sh -c as well, but for any content where bash prefers $'' to escape nonprintable characters, this output may not be POSIX compliant).
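As a quick illustration of what %q produces (with a shortened, made-up key):
$ printf '%q\n' 'ssh-ed25519 AAAA...key with spaces'
ssh-ed25519\ AAAA...key\ with\ spaces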
This code doesn't do what you think it does:
# ssh to the server
ssh -i $2 $3@$ip
# append key to authorized_keys file
sudo -c "echo $pubkey >> /root/.ssh/authorized_keys" root
The ssh command there would normally open an interactive remote shell, but since we are in a script, an interactive shell is not possible. So the remote shell immediately exits, without actually doing anything at all.
The sudo command that follows is incorrect syntax; it cannot work that way with the -c flag. Check the man page of sudo. And since you are not actually in the remote shell as you may have believed, the command is running on your local system, not the remote one where you want to append your key.
To run sudo remotely, use something like this:
ssh -i $2 $3@$ip sudo echo hello
The echo is just an example for testing of course.
However, this whole attempt of appending a public key to the authorized list of root is deeply flawed in terms of security. Sudo should be configured to ask for the password of the user, and there is no good way to do that in a script. Or if the user can run sudo without entering a password, that's just unacceptable from a security perspective.

Unset readonly variable in bash

How do I unset a readonly variable in Bash?
$ readonly PI=3.14
$ unset PI
bash: PI: readonly variable
or is it not possible?
Actually, you can unset a readonly variable, but I must warn that this is a hacky method. I'm adding this answer only as information, not as a recommendation. Use it at your own risk. Tested on Ubuntu 13.04, bash 4.2.45.
This method involves knowing a bit of bash's source code, and it's inherited from this answer.
$ readonly PI=3.14
$ unset PI
-bash: unset: PI: cannot unset: readonly variable
$ cat << EOF| sudo gdb
attach $$
call unbind_variable("PI")
detach
EOF
$ echo $PI
$
A one-liner answer is to use batch mode and other command-line flags, as provided in F. Hauri's answer:
$ sudo gdb -ex 'call unbind_variable("PI")' --pid=$$ --batch
sudo may or may not be needed based on your kernel's ptrace_scope settings. Check the comments on vip9937's answer for more details.
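On Linux systems that use the Yama security module you can check the setting like this; 0 means an unprivileged process may attach to other processes of the same user, so no sudo is needed:
cat /proc/sys/kernel/yama/ptrace_scope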
I tried the gdb hack above because I want to unset TMOUT (to disable auto-logout), but on the machine that has TMOUT set as read only, I'm not allowed to use sudo. But since I own the bash process, I don't need sudo. However, the syntax didn't quite work with the machine I'm on.
This did work, though (I put it in my .bashrc file):
# Disable the stupid auto-logout
unset TMOUT > /dev/null 2>&1
if [ $? -ne 0 ]; then
gdb <<EOF > /dev/null 2>&1
attach $$
call unbind_variable("TMOUT")
detach
quit
EOF
fi
In short: inspired by anishsane's answer.
Edit 2021-11-10: Add (int) to cast unbind_variable result.
But with simpler syntax:
$ gdb -ex 'call (int) unbind_variable("PI")' --pid=$$ --batch
With some improvement, as a function:
My destroy function:
Or: how to play with variable metadata. Note the usage of rare bashisms: local -n VARIABLE=$1 and ${VARIABLE@a}...
destroy () {
    declare -p $1 &>/dev/null || return -1 # Return if variable does not exist
    local -n variable=$1
    local resline result flags=${variable@a}
    [ -z "$flags" ] || [ "${flags//*r*}" ] && {
        unset $1 # Don't run gdb if variable is not readonly.
        return $?
    }
    while read -r resline; do
        [ "$resline" ] && [ -z "${resline%%\$1 = *}" ] &&
            result=${resline##*1 = }
    done < <(
        exec gdb 2>&1 -ex 'call (int) unbind_variable("'$1'")' --pid=$$ --batch
    )
    return $result
}
You could copy this to a bash source file called destroy.bash, for example...
Explanation:
1 destroy () {
2 local -n variable=$1
3 declare -p $1 &>/dev/null || return -1 # Return if variable not exist
4 local resline result flags=${variable@a}
5 [ -z "$flags" ] || [ "${flags//*r*}" ] && {
6 unset $1 # Don't run gdb if variable is not readonly.
7 return $?
8 }
9 while read resline; do
10 [ "$resline" ] && [ -z "${resline%\$1 = *}" ] &&
11 result=${resline##*1 = }
12 done < <(
13 gdb 2>&1 -ex 'call (int) unbind_variable("'$1'")' --pid=$$ --batch
14 )
15 return $result
16 }
line 2 creates a local reference to the submitted variable.
line 3 prevents running on a nonexistent variable.
line 4 stores the parameter's attributes (metadata) in $flags.
lines 5 to 8 run unset instead of gdb if the readonly flag is not present.
lines 9 to 12 (while read ... result=... done) get the return code of call (int) unbind_variable() from the gdb output.
line 13 is the gdb syntax, using --pid and -ex (see gdb --help).
line 15 returns the $result of the unbind_variable() command.
In use:
$ . destroy.bash
1st with any regular (read-write) variable:
$ declare PI=$(bc -l <<<'4*a(1)')
$ echo $PI
3.14159265358979323844
$ echo ${PI@a} # flags
$ declare -p PI
declare -- PI="3.14159265358979323844"
$ destroy PI
$ echo $?
0
$ declare -p PI
bash: declare: PI: not found
2nd with a read-only variable:
$ declare -r PI=$(bc -l <<<'4*a(1)')
$ declare -p PI
declare -r PI="3.14159265358979323844"
$ echo ${PI@a} # flags
r
$ unset PI
bash: unset: PI: cannot unset: readonly variable
$ destroy PI
$ echo $?
0
$ declare -p PI
bash: declare: PI: not found
3rd with a nonexistent variable:
$ destroy PI
$ echo $?
255
In zsh,
% typeset +r PI
% unset PI
(Yes, I know the question says bash. But when you Google for zsh, you also get a bunch of bash questions.)
Using GDB is terribly slow, or may even be forbidden by system policy (i.e. it can't attach to the process).
Try ctypes.sh instead. It works by using libffi to directly call bash's unbind_variable() instead, which is every bit as fast as using any other bash builtin:
$ readonly PI=3.14
$ unset PI
bash: unset: PI: cannot unset: readonly variable
$ source ctypes.sh
$ dlcall unbind_variable string:PI
$ declare -p PI
bash: declare: PI: not found
First you will need to install ctypes.sh:
$ git clone https://github.com/taviso/ctypes.sh.git
$ cd ctypes.sh
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
See https://github.com/taviso/ctypes.sh for a full description and docs.
For the curious, yes this lets you call any function within bash, or any function in any library linked to bash, or even any external dynamically-loaded library if you like. Bash is now every bit as dangerous as perl... ;-)
According to the man page:
unset [-fv] [name ...]
... Read-only variables may not be
unset. ...
If you have not yet exported the variable, you can use exec "$0" "$@" to restart your shell; of course, you will lose all other un-exported variables as well. It seems if you start a new shell without exec, it loses its read-only property for that shell.
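A sketch of that idea (hypothetical, assuming PI was never exported, so nothing readonly survives the exec):
# restart this script in a fresh process if PI has already been marked readonly
if readonly -p | grep -q ' PI='; then
    exec "$0" "$@"
fi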
Specifically with regard to the TMOUT variable: another option, if gdb is not available, is to copy bash to your home directory and patch the TMOUT string in the binary to something else, for instance XMOUX. Then run this extra layer of shell and you will not be timed out.
$ PI=3.17
$ export PI
$ readonly PI
$ echo $PI
3.17
$ PI=3.14
-bash: PI: readonly variable
$ echo $PI
3.17
What to do now?
$ exec $BASH
$ echo $PI
3.17
$ PI=3.14
$ echo $PI
3.14
$
A subshell can inherit the parent's variables, but won't inherit their protected status.
The readonly command makes it final and permanent until the shell process terminates. If you need to change a variable, don't mark it readonly.
An alternative if gdb is unavailable: You can use the enable command to load a custom builtin that will let you unset the read-only attribute. The gist of the code that does it:
SETVARATTR (find_variable ("TMOUT"), att_readonly, 1);
Obviously, you'd replace TMOUT with the variable you care about.
If you don't want to turn that into a builtin yourself, I forked bash on GitHub and added a fully-written and ready-to-compile loadable builtin called readwrite. The commit is at https://github.com/josephcsible/bash/commit/bcec716f4ca958e9c55a976050947d2327bcc195. If you want to use it, get the Bash source with my commit, run ./configure && make && make loadables to build it, then enable -f examples/loadables/readwrite readwrite to add it to your running session, then readwrite TMOUT to use it.
No, not in the current shell. If you wish to assign a new value to it, you will have to fork a new shell, where it will have a new meaning and will not be considered read-only.
$ { ( readonly pi=3.14; echo $pi ); pi=400; echo $pi; unset pi; echo [$pi]; }
3.14
400
[]
You can't; from the manual page of unset:
For each name, remove the corresponding variable or function. If no options are supplied, or the -v option is given, each name refers to a shell variable. Read-only variables may not be unset. If -f is specified, each name refers to a shell function, and the function definition is removed. Each unset variable or function is removed from the environment passed to subsequent commands. If any of RANDOM, SECONDS, LINENO, HISTCMD, FUNCNAME, GROUPS, or DIRSTACK are unset, they lose their special properties, even if they are subsequently reset. The exit status is true unless a name is readonly.
One other way to "unset" a read-only variable in Bash is to declare that variable read-only in a disposable context:
foo(){ declare -r PI=3.14; baz; }
bar(){ local PI=3.14; baz; }
baz(){ PI=3.1415927; echo PI=$PI; }
foo;
bash: PI: readonly variable
bar;
PI=3.1415927
While this is not "unsetting" within scope, which is probably the intent of the original author, this is definitely setting a variable read-only from the point of view of baz() and then later making it read-write from the point of view of baz(), you just need to write your script with some forethought.
Another solution without GDB or an external binary (in fact, an elaboration of Graham Nicholls's comment) would be the use of exec.
In my case there was an annoying read-only variable set in /etc/profile.d/xxx.
Quoting the bash manual:
"When bash is invoked as an interactive login shell [...] it first reads and executes commands from the file /etc/profile" [...]
When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc [...]
The gist of my workaround was to put in my ~/.bash_profile:
if [ -n "$annoying_variable" ]
then exec env annoying_variable='' /bin/bash
# or: then exec env -i /bin/bash
fi
Warning: to avoid a recursion (which would lock you out if you can only access your account through SSH), ensure that the "annoying variable" will not be automatically set by the bashrc, or check another variable as a guard, for example:
if [ -n "$annoying_variable" ] && [ "${SHLVL:-1}" = 1 ]
then exec env annoying_variable='' SHLVL=$((SHLVL+1)) ${SHELL:-/bin/bash}
fi
$ readonly PI=3.14
$ unset PI
bash: PI: readonly variable
$ gdb --batch-silent --pid=$$ --eval-command='call (int) unbind_variable("PI")'
$ [[ ! -v PI ]] && echo "PI is unset ✔️"
PI is unset ✔️
Notes:
Tested with bash 5.0.17 and gdb 10.1.
The -v varname test was added in bash 4.2. It is "True if the shell variable varname is set (has been assigned a value)." – bash reference manual
Note the cast to int. Without that, the following error will result: 'unbind_variable' has unknown return type; cast the call to its declared return type. The bash source code shows that the return type of the unbind_variable function is int.
This answer is essentially the same as an answer over at superuser.com. I added the cast to int to get past the unknown return type error.
If nothing helps, you could go back in time, to a time when readonly vars were not yet implemented:
env ENV=$HOME/.profile /bin/sh
and in $HOME/.profile show some good will and say
export TMOUT=901
This gives you one extra second before you are logged out :-)

Bash - Escaping SSH commands

I have a set of scripts that I use to download files via FTP and then delete them from the server.
It works as follows:
for dir in `ls /volume1/auto_downloads/sync-complete`
do
if [ "x$dir" != *"x"* ]
then
echo "DIR: $dir"
echo "Moving out of complete"
# Soft delete from server so they don't get downloaded again
ssh dan@172.19.1.15 mv -v "'/home/dan/Downloads/complete/$dir'" /home/dan/Downloads/downloaded
Now $dir could be "This is a file" which works fine.
The problem I'm having is with special characters, e.g.:
"This is (a) file"
"This is a file & stuff"
tend to error:
bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `mv -v '/home/dan/Downloads/complete/This is (a) file' /home/dan/Downloads/downloaded'
I can't work out how to escape it so that the variable gets evaluated and the command gets escaped properly. I've tried various combinations of escape characters, literal quotes, normal quotes, etc.
If both sides are using bash, you can escape the arguments using printf '%q ', e.g.:
ssh dan@172.19.1.15 "$(printf '%q ' mv -v "/home/dan/Downloads/complete/$dir" /home/dan/Downloads/downloaded)"
You need to quote the whole expression ssh user@host "command":
ssh dan#172.19.1.15 "mv -v /home/dan/Downloads/complete/$dir /home/dan/Downloads/downloaded"
I'm confused, because your code as written works for me:
> dir='foo & bar (and) baz'
> ssh host mv -v "'/home/dan/Downloads/complete/$dir'" /home/dan/Downloads/downloaded
mv: cannot stat `/home/dan/Downloads/complete/foo & bar (and) baz': No such file or directory
For debugging, use set -vx at the top of the script to see what's going on.
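That is, near the top of the script:
#!/bin/bash
set -vx   # -v echoes each input line as it is read, -x traces each expanded command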
Will Palmer's suggestion of using printf is great but I think it makes more sense to put the literal parts in printf's format.
That way, multi-command one-liners are more intuitive to write:
ssh user#host "$(printf 'mkdir -p -- %q && cd -- "$_" && tar -zx' "$DIR")"
One can use Python's shlex.quote(s) to "return a shell-escaped version of the string s" (docs).
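For example, a sketch of calling it from the original bash loop via python3 -c (variable names taken from the question):
quoted=$(python3 -c 'import shlex, sys; print(shlex.quote(sys.argv[1]))' "/home/dan/Downloads/complete/$dir")
ssh dan@172.19.1.15 "mv -v $quoted /home/dan/Downloads/downloaded"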
