gnome terminal tabs open multiple ssh connections - shell

I have a file with a list of servers:
SERVERS.TXT:
192.168.0.100
192.168.0.101
192.168.0.102
From a gnome-terminal script, I want to open a new terminal with a tab for each server.
Here is what I tried:
gnome-terminal --profile=TabProfile `while read SERVER ; do echo "--tab -e 'ssh usr@$SERVER'"; done < SERVERS.TXT`
Here is the error:
Failed to parse arguments: Argument to "--command/-e" is not a valid command: Text ended before matching quote was found for '. (The text was ''ssh')
Tried removing the space after the -e
gnome-terminal --profile=TabProfile `while read SERVER ; do echo "--tab -e'ssh usr@$SERVER'"; done < SERVERS.TXT`
And I get a similar error:
Failed to parse arguments: Argument to "--command/-e" is not a valid command: Text ended before matching quote was found for '. (The text was 'usr@192.168.0.100'')
Obviously there is a parsing error, since the shell is trying to be helpful by splitting on the spaces and placing its own delimiters. The server file changes without notice, and many different sets of servers need to be looked at.

I found this question while searching for an answer to the issue the OP had, but my issue was a little different: I knew the list of servers, and they were not in a file.
Anyway, the other solutions posted did not work for me, but the following script does work, and is what I use to get around the "--command/-e is not a valid command" error.
The script should be very easy to change to suit any need:
#!/bin/sh
# Open a terminal with a tab for each of the servers
#
# The list of servers
LIST="server1.info server2.info server3.info server4.info"
cmdssh=$(which ssh)
for s in $LIST
do
    # Capitalize the first letter of the server name for the tab title
    title=$(echo -n "${s}" | sed 's/^\(.\)/\U\1/')
    args="${args} --tab --title=\"$title\" --command=\"${cmdssh} ${s}.com\""
done
tmpfile=$(mktemp)
echo "gnome-terminal${args}" > "$tmpfile"
chmod 744 "$tmpfile"
. "$tmpfile"
rm "$tmpfile"
Now the big question is why this works when run from a file, but not from within a script. Sure, the issue is the escaping of the --command part, but everything I tried failed unless it was exported to a temp file.
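For what it's worth, the temp-file detour can be avoided entirely in bash by collecting the options in an array instead of a string; each element then survives word splitting as a single argument. A minimal sketch of that idea, assuming a gnome-terminal version that still accepts --tab and --command:
#!/bin/bash
# Sketch: build the argument list as a bash array so each option,
# spaces and all, reaches gnome-terminal as exactly one word.
LIST="server1.info server2.info server3.info server4.info"
args=()
for s in $LIST
do
    args+=(--tab --title="$s" --command="ssh ${s}.com")
done
gnome-terminal "${args[@]}"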

I would try something like:
$ while read SERVER;do echo -n "--tab -e 'ssh usr@$SERVER' "; \
done < SERVERS.txt | xargs gnome-terminal --profile=TabProfile
This is to avoid any interpretation the shell could make of the parameters (anything starting with a dash).
Because the strings are concatenated (using -n), it is necessary to add a space between them.
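A note on why this can work where the backtick version fails: xargs performs its own quote processing, so the single-quoted ssh command arrives at gnome-terminal as one argument. A quick illustrative check of how xargs splits such input:
# Print each argument xargs produces on its own line:
printf '%s' "--tab -e 'ssh usr@192.168.0.100' " | xargs -n 1 printf '%s\n'
# --tab
# -e
# ssh usr@192.168.0.100   <- the quotes are stripped, but it stays one word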

Is this a problem of parsing command-line options? Sometimes if you have one command sending arguments to another command, the first can get confused. The convention is to use a -- like so:
echo -- "--tab -e 'ssh usr#$SERVER'";

Try putting
eval
before the gnome-terminal command.
It should look something like this:
eval /usr/bin/gnome-terminal $xargs
It worked for me!
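Piecing this together with the original one-liner, the eval approach might look like the sketch below (untested; $args is an assumed variable name). eval gives the generated string a second parsing pass, in which the embedded single quotes are finally honored. Only do this if SERVERS.TXT is trusted, since eval will execute anything that ends up in the string:
# Build the option string, then let eval re-parse it:
args=$(while read -r SERVER; do
    printf '%s ' "--tab -e 'ssh usr@$SERVER'"
done < SERVERS.TXT)
eval gnome-terminal --profile=TabProfile "$args"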

Related

taking input in while loop overrides the same line in bash script

When I try to read command line arguments using a while loop in bash, it overwrites the prompt. How can I prevent this problem? I can't remove the -r flag from the read command; if I do that, I won't be able to use the arrow keys.
A sample of the code is here:
while :
do
    # easy life things
    name=$(whoami)
    prompt=$'\033[1;33m;-)\033[0m\033[1;31m'${name}$'\033[0m\033[1;34m@'$(hostname)$'\033[0m\033[1;32m>>\033[0m\033[1;31m🕸️ \033[0m'
    echo -n -e "${blue}"
    read -r -e -p "${prompt} " cmd
    history -s ${cmd}
    echo -n -e "${nc}"
    # the code that got erased doesn't have any problem; the problem is with the read command
done
I was expecting it not to overwrite the same line, without my having to remove those two flags from the read command.
The script snippet you gave does not represent the environment or code that generated the conditions shown in the image you provided.
What you show in the image is a condition some users encounter when the PS1 prompt string is incorrectly defined, containing malformed color-coding sequences.
You are facing the same conditions as in this other Question, where I gave a detailed explanation.
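For reference, the usual fix for this class of problem is to mark every non-printing escape sequence in the prompt so readline can compute the visible prompt width. With read -e the markers are the raw bytes \001 and \002, readline's equivalents of PS1's \[ and \]. A minimal sketch, assuming bash:
# Wrap each color escape in \001 ... \002 so readline excludes it from
# the prompt-width calculation and stops overwriting the line.
name=$(whoami)
prompt=$'\001\033[1;31m\002'"${name}"$'\001\033[0m\002'"@$(hostname)>> "
read -r -e -p "${prompt}" cmd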

What is the meaning of "${psql[@]}" in this script?

I came across a script that is supposed to set up postgis in a docker container, but it references this "${psql[@]}" command in several places:
#!/bin/sh
# Perform all actions as $POSTGRES_USER
export PGUSER="$POSTGRES_USER"
# Create the 'template_postgis' template db
"${psql[#]}" <<- 'EOSQL'
CREATE DATABASE template_postgis;
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';
EOSQL
I'm guessing it's supposed to use the psql command, but the command is always empty so it gives an error. Replacing it with psql makes the script run as expected. Is my guess correct?
Edit: In case it's important, the command is being run in a container based on postgres:11-alpine.
$psql is supposed to be an array containing the psql command and its arguments.
The script is apparently expected to be run from here, which does
psql=( psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --no-password )
and later sources the script in this loop:
for f in /docker-entrypoint-initdb.d/*; do
case "$f" in
*.sh)
# https://github.com/docker-library/postgres/issues/450#issuecomment-393167936
# https://github.com/docker-library/postgres/pull/452
if [ -x "$f" ]; then
echo "$0: running $f"
"$f"
else
echo "$0: sourcing $f"
. "$f"
fi
;;
*.sql) echo "$0: running $f"; "${psql[@]}" -f "$f"; echo ;;
*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${psql[@]}"; echo ;;
*) echo "$0: ignoring $f" ;;
esac
echo
done
See Setting an argument with bash for the reason to use an array rather than a string.
The #!/bin/sh and the [@] are incongruous. This is a bash-ism, where the psql variable is an array. This literal quote-dollarsign-psql-bracket-at-bracket-quote is expanded into "psql" "array" "values" "each" "listed" "and" "quoted" "separately". It's the safer way, e.g., to accumulate arguments to a command where any of them might have spaces in them.
psql=(/foo/psql arg arg arg) is the best way to define the array you need there.
It might look obscure, but it would work like so...
Let's say we have a bash array wc, which contains a command wc, and an argument -w, and we feed that a here document with some words:
wc=(wc -w)
"${wc[#]}" <<- words
one
two three
four
words
Since there are four words in the here document, the output is:
4
In the quoted code, there needs to be some prior point (perhaps a calling script) that does something like:
psql=(psql -option1 -option2 arg1 arg2 ... )
As to why the programmer chose to invoke a command with an array, rather than just invoke the command, I can only guess... Maybe it's a crude sort of operator overloading to compensate for different *nix distros, (i.e. BSD vs. Linux), where the local variants of some necessary command might have different names from the same option, or even use different commands. So one might check for BSD or Linux or a given version, and reset psql accordingly.
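A hypothetical illustration of that guess (the path and flags here are invented for the example): define the array once per platform, then invoke it the same way everywhere.
# Pick the right binary and options once, then reuse the array.
if command -v psql >/dev/null 2>&1; then
    psql=(psql -v ON_ERROR_STOP=1)
else
    psql=(/usr/local/pgsql/bin/psql -v ON_ERROR_STOP=1)
fi
"${psql[@]}" -c 'SELECT version();'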
The answer from @Barmar is correct.
The script was intended to be "sourced" and not "executed".
I faced the same problem and came to the same answer after I read that it had been reported here and fixed by "chmod".
https://github.com/postgis/docker-postgis/issues/119
Therefore, the fix is to change the permissions.
This can be done either in your git repository:
chmod -x initdb-postgis.sh
or add a line to your Dockerfile:
RUN chmod -x /docker-entrypoint-initdb.d/10_postgis.sh
I like to do both so that it is clear to others.
Note: if you are using git on Windows, the permissions can be lost, so the "chmod" in the Dockerfile is needed.
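If the working tree can't be trusted to keep file modes (as on Windows), the bit can also be cleared directly in git's index, which survives checkouts:
# Clear the executable bit in the index so the script is sourced, not executed.
git update-index --chmod=-x initdb-postgis.sh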

How can I pass the filename from a variable locally into ssh? [duplicate]

When I stumble across an evil web site that I want blocked from corporate access, I edit my named.conf file on my bind server and then update my proxy server blacklist file. I'd like to automate this somewhat with a bash script. Say my script is called "evil-site-block.sh" and contains the following:
ssh root@192.168.0.1 'echo "#date added $(date +%m/%d/%Y)" >> /var/named/chroot/etc/named.conf; echo "zone \"$1\" { type master; file \"/etc/zone/dummy-block\"; };" >> /var/named/chroot/etc/named.conf'
It is then run as
$ evil-site-block.sh google.com
When I look at the contents of named.conf on the remote machine I see:
#date added 09/16/2014
zone "" { type master; file "/etc/zone/dummy-block"; };
What I can't figure out is how to pass "google.com" as $1.
First off, you don't want this to be two separately redirected echo statements -- doing that is both inefficient and means that the lines could end up not next to each other if something else is appending at the same time.
Second, and much more importantly, you don't want the remote command that's run to be something that could escape its quotes and run arbitrary commands on your server (think of what happens if $1 is '$(rm -rf /)'.spammer.com).
Instead, consider:
#!/bin/bash
# ^ above is mandatory, since we use features not found in #!/bin/sh
printf -v new_contents \
'# date added %s\nzone "%s" { type master; file "/etc/zone/dummy-block"; };\n' \
"$(date +%m/%d/%Y)" \
"$1"
printf -v remote_command \
'echo %q >>/var/named/chroot/etc/named.conf' \
"$new_contents"
ssh root@192.168.0.1 bash <<<"$remote_command"
printf %q escapes data such that an evaluation pass in another bash shell will evaluate that content back to itself. Thus, the remote shell will be guaranteed (so long as it's bash) to interpret the content correctly, even if the content attempts to escape its surrounding quotes.
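A quick demonstration of what %q buys you (the exact escaping can vary slightly between bash versions):
printf '%q\n' 'a "$(rm -rf /)" b'
# typical output: a\ \"\$\(rm\ -rf\ /\)\"\ b
# evaluating that output yields the original string back, quotes and all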
Your problem: Your entire command is put into single quotes – obviously so that bash expressions are expanded on the server and not locally.
But this also applies to your $1.
Simple solution: “Interrupt” the quotation by wrapping your local variable in single quotes.
ssh root@192.168.0.1 'echo "#date added $(date +%m/%d/%Y)" >> /var/named/chroot/etc/named.conf; echo "zone \"'$1'\" { type master; file \"/etc/zone/dummy-block\"; };" >> /var/named/chroot/etc/named.conf'
NB: \"$1\" → \"'$1'\".
NOTE: This solution is a simple fix for the one-liner as posted in the question above. If there's the slightest chance that this script is executed by other people, or it could process external output of any kind, please have a look at Charles Duffy's solution.

Shell script error on slackware based Linux OS

I have a shell script that works on Ubuntu and provides me an output as I desire. When I test the same on a slackware linux version, my script fails.
The script fails at:
dialog --title "Test" --gauge "Copying file." 6 100 < <(
rsync -a --progress test.tar.gz /media/sda1 |
unbuffer -p grep -o "[0-9]*%" |
unbuffer -p cut -f1 -d '%'
)
The error is:
Syntax error near unexpected token `<'
What could be different between the two operating systems that the script fails to execute?
The script executes successfully if I get rid of the dialog command and the brackets etc.
Most likely, you are trying to run a bash script with a non-bash shell, or with an older bash version.
First, try running it through bash explicitly, i.e.:
bash script.sh
You should also fix your shebang to point at bash:
#!/bin/bash
[Update below]
The < <( ... ) notation is unique to bash and zsh. The syntax error is a clear sign it is not recognised by the shell on your Slackware system.
Either Slackware does not use bash as its default shell, or its version of bash is too old for this feature.
Check the value of $BASH_VERSION on both platforms.
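A quick way to check both possibilities at once (paths vary by distro, so treat this as a sketch):
echo "$BASH_VERSION"   # empty if the script is not actually running under bash
ls -l /bin/sh          # often a symlink to bash, dash, or ash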
A possible alternative for
cat < <(
...
...
)
could be:
cat <<< "$(
...
...
)"
This will work in bash, ksh93, and zsh, and has been around slightly longer.
UPDATE
Based on your feedback, I've looked at the actual pipeline you are trying to use here.
I believe it's your intention to use column 3 of the --progress output as input for the dialog graphical progress indicator.
I tried this with a directory with lots of small files. Are you aware that this percentage indicator is per file? With my small files, rsync gave only one update per file. As every single file was written in one go, all percentages were equal to 100%.
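If an overall percentage is what you're after, newer rsync (3.1 and up) supports --info=progress2, which reports progress for the whole transfer rather than per file. A hedged variant of the pipeline:
# --info=progress2 prints one running percentage for the entire transfer.
dialog --title "Test" --gauge "Copying file." 6 100 < <(
    rsync -a --info=progress2 test.tar.gz /media/sda1 |
    unbuffer -p grep -o "[0-9]*%" |
    unbuffer -p cut -f1 -d '%'
)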

bash tee remove color

I'm currently using the following to capture everything that goes to the terminal and throw it into a log file
exec 4<&1 5<&2 1>&2>&>(tee -a $LOG_FILE)
however, I don't want color escape codes/clutter going into the log file. so i have something like this that sorta works
exec 4<&1 5<&2 1>&2>&>(
while read -u 0; do
#to terminal
echo "$REPLY"
#to log file (color removed)
echo "$REPLY" | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' >> $LOG_FILE
done
unset REPLY #tidy
)
except that read waits for a newline, which isn't ideal for some portions of the script (e.g. echo -n "..." or printf without \n).
Follow-up to Jonathan Leffler's answer:
Given the example script test.sh:
#!/bin/bash
LOG_FILE="./test.log"
echo -n >$LOG_FILE
exec 4<&1 5<&2 1>&2>&>(tee -a >(sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' > $LOG_FILE))
##### ##### #####
# Main
echo "starting execution"
printf "\n\n"
echo "color test:"
echo -e "\033[0;31mhello \033[0;32mworld\033[0m!"
printf "\n\n"
echo -e "\033[0;36mEnvironment:\033[0m\n foo: cat\n bar: dog\n your wife: hot\n fix: A/C"
echo -n "Before we get started. Is the above information correct? "
read YES
echo -e "\n[READ] $YES" >> $LOG_FILE
YES=$(echo "$YES" | sed 's/^\s*//;s/\s*$//')
test ! "$(echo "$YES" | grep -iE '^y(es)?$')" && echo -e "\nExiting... :(" && exit
printf "\n\n"
#...some hundreds of lines of code later...
echo "Done!"
##### ##### #####
# End
exec 1<&4 4>&- 2<&5 5>&-
echo "Log File: $LOG_FILE"
The output to the terminal is as expected and there is no color escape codes/clutter in the log file as desired. However upon examining test.log, I do not see the [READ] ... (see line 21 of test.sh).
The log file [of my actual bash script] contains the line Log File: ... at the end of it even after closing the 4 and 5 fds. I was able to resolve the issue by putting a sleep 1 before the second exec - I assume there's a race condition or fd shenanigans to blame for it. Unfortunately for you guys, I am not able to reproduce this issue with test.sh but I'd be interested in any speculation anyone may have.
Consider using the pee program discussed in Is it possible to distribute stdin over parallel processes. It would allow you to send the log data through your sed script, while continuing to send the colours to the actual output.
One major advantage of this is that it would remove the 'execute sed once per line of log output'; that is really diabolical for performance (in terms of number of processes executed, if nothing else).
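A sketch of how that could look, assuming moreutils' pee is available; some_command is a stand-in for whatever produces the colored output, and the sed pattern is the one from the question (with the stray | dropped from the bracket expression):
# pee feeds a copy of stdin to each command: one copy keeps its colors
# for the terminal, the other is stripped and appended to the log.
some_command 2>&1 | pee cat \
    "sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mK]//g' >> \"$LOG_FILE\""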
I know it's not a perfect solution, but cat -v will convert invisible characters like \x1B into a visible form like ^[[1;34m. The output will be messy, but at least it will be plain ASCII text.
I used to do this kind of thing by setting TERM=dumb before running my command. That pretty much removes any control characters except for tab, CR, and LF. I have no idea if this works for your situation, but it's worth a try. The downside is that you won't see color encodings on your terminal either, since it's a dumb terminal.
You can also try either vis or cat (especially the -v parameter) and see if these do something for you. You'd simply put them in your pipeline like this:
exec 4<&1 5<&2 1>&2>&>(tee -a >(cat -v > $LOG_FILE))
By the way, almost all terminal programs have an option to capture the input, and most clean it up for you. What platform are you on, and what type of terminal program are you using?
You could attempt to use the -n option for read. It reads n characters instead of waiting for a newline. You could set it to one. This would increase the number of iterations the code runs, but it would not wait for newlines.
From the man page:
-n NCHARS read returns after reading NCHARS characters rather than waiting for a complete line of input.
Note: I have not tested this
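An untested sketch of the idea, in the same spirit as the note above. One caveat: stripping escape codes gets harder in this mode, because a sequence like \033[0;31m now arrives one byte at a time and would need to be buffered before it can be matched.
# Read one character at a time so output without a trailing newline
# still reaches the terminal promptly.
while IFS= read -r -n 1 ch; do
    # read strips the delimiter, so an empty $ch means a newline was read
    if [ -z "$ch" ]; then printf '\n'; else printf '%s' "$ch"; fi
done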
You can use ANSIFilter to strip or transform console output with ANSI escape sequences.
See http://www.andre-simon.de/zip/download.html#ansifilter
Might screen -L or the script command be viable options instead of this exec loop?
