Efficiently handle multiple arguments - bash

I use Jenkins to trigger a deployment of multiple services (using a deployment script).
There are 6 services in total, and I have used Jenkins Boolean Parameters to select which services are to be deployed.
So, if the 1st, 4th and 5th services are to be deployed, the input to the deployment script looks like the below in the Jenkins Execute shell tab.
#!/bin/bash
sshpass -p <> ssh username@host "copy the deployment script to the deployment VM/server and give execute permission...."
sshpass -p <> ssh username@host "./mydeploy.sh ${version_to_be_deployed} ${1st_service} ${4th_service} ${5th_service}"
Note: deployment happens on a different server with very restricted access, so the deployment script mydeploy.sh has to be copied from the Jenkins slave to the deployment server and then executed with the respective arguments.
How can I make this setup more robust and elegant? I don't want to pass 6 arguments if all 6 services are selected. What's the better way to do it?

An array would help here.
#!/bin/bash
# hardcoded for demo purposes, but you can build it dynamically from arguments
services_to_deploy=( 1 4 5 )
sshpass -p <> ssh username@host "copy the deployment script to the deployment VM/server and give execute permission...."
sshpass -p <> ssh username@host "./mydeploy.sh ${version_to_be_deployed} ${services_to_deploy[@]}"
${services_to_deploy[@]} will expand to the list of all the services you want to deploy, so that you don't have to set a unique variable for each one.
One caveat, though, is that running a command over ssh is similar to running a command with eval, because the remote shell will re-parse whatever comes through before executing it. If your services have simple names this might not matter, but if you had a hypothetical Hello World service, the remote script would treat Hello and World as two separate arguments due to word splitting, which probably isn't what you want.
If this is a problem for you, you could fix it with either printf %q (supported by most Bash versions) or by expanding the array as "${services_to_deploy[@]@Q}" if you have Bash 4.4 or higher.
An example using printf %q might look like:
#!/bin/bash
services_to_deploy=( 1 4 5 )
remote_arguments=()
for s in "${services_to_deploy[@]}" ; do
    remote_arguments+=( "$( printf '%q' "${s}" )" )
done
sshpass -p <> ssh username@host "copy the deployment script to the deployment VM/server and give execute permission...."
sshpass -p <> ssh username@host "./mydeploy.sh ${version_to_be_deployed} ${remote_arguments[@]}"
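For completeness, the @Q route mentioned above avoids the loop entirely. A minimal sketch, assuming Bash 4.4+ and using [*] so the quoted elements are joined with spaces inside the quoted ssh command:
#!/bin/bash
services_to_deploy=( 1 4 5 )
# @Q (Bash 4.4+) expands each element already quoted, so the remote shell re-parses them safely
sshpass -p <> ssh username@host "./mydeploy.sh ${version_to_be_deployed} ${services_to_deploy[*]@Q}"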

How about improving your script and introducing some flags? A sketch of how the parsing could look follows the examples below.
# --all : Deploys all services
./mydeploy.sh --version 1.0 --all
# --exclude : Deploys all services other than 5th_service and 4th_service (Excludes 4th and 5th)
./mydeploy.sh --version 1.0 --exclude ${5th_service} ${4th_service}
# --include : Deploys just 4th_service and 5th_service
./mydeploy.sh --version 1.0 --include ${5th_service} ${4th_service}
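A minimal sketch of how mydeploy.sh could parse such flags (the flag names above and the deploy_service helper below are illustrative, not from the original post):
#!/bin/bash
# Hypothetical flag parsing for mydeploy.sh; all_services and deploy_service are placeholders
all_services=( svc1 svc2 svc3 svc4 svc5 svc6 )
version='' mode='' selected=()
while [[ $# -gt 0 ]]; do
    case "$1" in
        --version) version="$2"; shift 2 ;;
        --all)     mode=all; shift ;;
        --include) mode=include; shift ;;
        --exclude) mode=exclude; shift ;;
        *)         selected+=( "$1" ); shift ;;
    esac
done
for svc in "${all_services[@]}"; do
    case "$mode" in
        all)     deploy_service "$svc" "$version" ;;
        include) [[ " ${selected[*]} " == *" $svc "* ]] && deploy_service "$svc" "$version" ;;
        exclude) [[ " ${selected[*]} " == *" $svc "* ]] || deploy_service "$svc" "$version" ;;
    esac
done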

Is there any way to execute script commands by reading from GitHub and executing in Airflow on a different server

We have all the scripts available in GitHub and need to execute the shell/Python scripts by logging into a different server.
Sample script: hello.sh
echo "Hello World"
echo "Printing text with newline"
echo -n "Printing text without newline"
echo -e "\nRemoving \t backslash \t characters\n"
cp a b
We can use the SSH operator, but it only executes commands, so how do we run a complete script, reading it from GitHub and executing it on a different Unix server?
We cannot copy the scripts onto the Unix server and need to execute them by reading from GitHub. Is there any way to do this in Airflow?
You can use curl to get the script from GitHub and then execute it (if you trust it):
bash <(curl -sL https://raw.githubusercontent.com/path/to/your/script) [script-args...]
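If the script has to run on the other Unix server rather than on the Airflow worker itself, one common variation (a sketch with placeholder host and arguments, not from the original answer) is to stream the fetched script over ssh:
# Fetch the script locally and feed it to bash on the remote host; script arguments go after --
curl -sL https://raw.githubusercontent.com/path/to/your/script | ssh user@remote-host 'bash -s -- arg1 arg2'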

Bash :: SU command removes Variables from SCP Command?

I have a Bash (ver 4.4.20(1)) script running on Ubuntu (ver 18.04.6 LTS) that generates an SCP error. Yet, when I run the offending command on the command line, the same line runs fine.
The script is designed to SCP a file from a remote machine and copy it to /tmp on the local machine. One caveat is that the script must be run as root (yes, I know that's bad, this is a proof-of-concept thing), but root can't do passwordless SCP in my environment. User me can do passwordless SCP, so when root runs the script, it must "borrow" me's public SSH key.
Here's my script, slightly abridged for SO:
#!/bin/bash
writeCmd() { printf '%q ' "$@"; printf '\n'; }
printf -v date '%(%Y%m%d)T' -1
user=me
host=10.10.10.100
file=myfile
target_dir=/path/to/dir/$date
# print command to screen so I can see what is being submitted to OS:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
Output is:
su - me -c scp-Cme@10.10.10.100://.txt/tmp/.
It looks like the ' ' characters are not being printed, but for the moment I'll assume that is a display issue and not the root of the problem. What's more serious is that I don't see my variables in the actual SCP command.
What gives? Why would the variables be ignored? Does the su part of the command interfere somehow? Thank you.
(NOTE: This post has been re-edited from its earlier form, in case you are wondering why the comments below seem off-topic.)
When you run:
writeCmd su - me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'
you'll see that its output is (something equivalent to -- may change version-to-version):
su - me -c scp\ -C\ me@\$host:/\$target_dir/\$file.txt\ /tmp/.
Importantly, none of the variables have been substituted yet (and they're emitted escaped to show that they won't be substituted until after su runs).
This is important, because only variables that have been exported -- becoming environment variables instead of shell variables -- survive a process boundary, such as that caused by the shell starting the external su command, or the one caused by su starting a new and separate shell interpreter as the target user account. Consequently, the new shell started by su doesn't have access to the variables, so it substitutes them with empty values.
Sometimes, you can solve this by exporting your variables: export host target_dir file, and if su passes the environment through that'll suffice. However, that's a pretty big "if": there are compelling security reasons not to pass arbitrary environment variables across a privilege boundary.
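A sketch of that export route (my assumption, not part of the original answer: plain su without the -, since a login shell typically clears the environment anyway):
# Only works if su passes the environment through (the big "if" mentioned above)
export host target_dir file
su me -c 'scp -C me@$host:/$target_dir/$file.txt /tmp/.'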
The safer way to do this is to build a correctly-escaped command with the variables already substituted:
#!/usr/bin/env bash
# ^^^^- needs to be bash, not sh, to work reliably
cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
su - me -c "$cmd_v"
Using printf %q is protection against shell injection attacks -- ensuring that a target_dir named /tmp/evil/$(rm -rf ~) doesn't delete your home directory.
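To see what su will actually receive, a quick illustrative check (the values below are made up, not from the original post):
# Populate the variables with a deliberately hostile target_dir, then print the built command
host=10.10.10.100 target_dir='/tmp/evil/$(rm -rf ~)' file=myfile
cmd=( scp -C "me@$host:/$target_dir/$file.txt" /tmp/. )
printf -v cmd_v '%q ' "${cmd[@]}"
printf 'su will run: %s\n' "$cmd_v"   # the $(...) shows up escaped, so it is never executed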

Send commands directly to a running process and indirectly (e.g. with tail)

I am currently building a docker project for running a Minecraft Spigot server.
To achieve this I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when running detached. I achieved this with:
_console_input="/app/input.buffer"
# Clear console buffers
true >$_console_input
# Start the main application
echo "[....] Starting Minecraft server..."
tail -f $_console_input | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$@" >>/app/input.buffer
The code can be found here
Does someone know a way to add the ability to enter commands directly as well?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASES THREE AND FOUR: Use docker-compose (see use case two, it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/sh
while IFS= read -rp '$ ' command; do
    printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin
CMD interactive_console
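With that in place, usage might look like this (image and container names taken from the question, purely illustrative):
# Attached: type commands at the "$ " prompt and they land in /app/input.buffer
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
# Detached, or from a second terminal: the indirect path keeps working
docker exec spigot console "time set day"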

PuTTY: trying to send multiple commands to remote server but only the first is executed [duplicate]

I want to run multiple commands automatically, like sudo bash, ssh server01, ls, cd /tmp etc., at server login.
I am using the Remote command option under SSH in PuTTY.
I tried multiple commands with the && delimiter, but it's not working.
There is some information lacking in your question.
You say you want to run sudo bash, then ssh server01.
Will sudo prompt for a password on your remote server?
Assuming sudo does not ask for a password, running bash will open another shell waiting for user input. The command ssh server01 will not run until that bash shell is exited.
If you want to run 2 commands, try first simpler ones like:
ls -l /tmp ; echo "hi there"
or if you prefer:
ls -l /tmp && echo "hi there"
Does this work?
If what you want is to run ssh after running bash, you can try:
sudo bash -c "ssh server01"
That is probably because the command is expected to be a program name followed by parameters, which will be passed directly to the program. In order to get && and other functionality that is provided by a command line interpreter such as bash, try this:
/bin/bash -c "command1 && command2"
I tried what I suggested in my previous answer.
It is possible to run 2 simple commands in PuTTY separated by a semicolon, as in my example with ls and echo. The remote server runs them and then the session closes.
I also tried to ssh to a remote server that is configured not to ask for a password. In that case it also works: I get connected to the 2nd server and I can run commands on it. Upon exit, the 2 connections are closed.
So please, let us know what you actually need / want.
You can execute two consecutive commands in PuTTY using a regular shell syntax. E.g. using ; or &&.
But you want to execute ssh server01 in sudo bash shell, right?
These are not two consecutive commands, it's ssh server01 command executed within sudo bash.
So you have to use a sudo command-line syntax to execute the ssh server01, like
sudo bash -c "ssh server01"

File not created in Ansible

I have installed Ansible on one machine and am trying to execute commands on another (remote) machine.
Ansible successfully installed
Able to reach all hosts (local and remote). Tested with
ansible all -m ping
This was successful
Then I tried to execute a simple command:
ansible all -a 'echo "hello world" > ~/test'
It executed successfully, but the file test is not created.
I cannot find the reason why.
Executing a command via ansible -a is equivalent to using the command module (see the command module documentation).
It is not processed via a shell; therefore > and >> (as well as other redirection operators) and $HOME are not available.
In your case I would use
ansible -m 'shell' --args 'echo "hello world">>/home/ansibleremoteuser/test' all
In this case you would use the shell module, which allows redirections.
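To confirm the file actually landed on the remote host, a quick check could look like this (path taken from the command above):
ansible all -m shell -a 'cat /home/ansibleremoteuser/test'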
