How can I enter a directory (using cd command) and execute a specific file in a shell script? - shell

I need to connect to a Redis server from my local machine. I have installed Redis on my machine under the path: /Users/huangkunlun/Work/soft/redis/redis-6.0.10
If I enter the Redis installation directory and run the following commands from my terminal, they succeed:
cd /Users/huangkunlun/Work/soft/redis/redis-6.0.10/src
redis-cli -h redis-host-server -p 6379 -a psw
But if I run the previous two commands from a shell script file, I get an error saying the command redis-cli cannot be found.
So the question is: how can I enter a specific directory and execute a specific file under that directory from a shell script?
The shell script file looks like the following:
#!/bin/bash
REDIS_SRC_DIR=/Users/huangkunlun/Work/soft/redis/redis-6.0.10/src
HOST_TEST=redis-test-in.shantaijk.cn
HOST_DEV=r-uf61x52g8b0cketmk7.redis.rds.aliyuncs.com
env=$1
echo "environment is ${env}"
cd ${REDIS_SRC_DIR}
echo "cd redis directory $(pwd)"
if [[ ${env} == "test" ]]; then
    #/Users/huangkunlun/Work/soft/redis/redis-6.0.10/src/redis-cli -h ${HOST_TEST} -p 6379 -a Cloudhis1234
    redis-6.0.10/src/redis-cli -h ${HOST_TEST} -p 6379 -a Cloudhis1234
elif [[ ${env} == "dev" ]]; then
    #/Users/huangkunlun/Work/soft/redis/redis-6.0.10/src/redis-cli -h ${HOST_DEV} -p 6379 -a Cloudhis1234
    redis-cli -h ${HOST_DEV} -p 6379 -a Cloudhis1234
else
    echo "no env args is specified, exit..."
fi
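The cd itself works; the failure is in how the binary is invoked afterwards. In the test branch, redis-6.0.10/src/redis-cli is resolved relative to the src directory the script just entered, so that path does not exist; in the dev branch, a bare redis-cli is looked up only through PATH, and the current directory is normally not in PATH. A minimal sketch of the fix, invoking the binary with an explicit ./ path (alternatively, skip the cd entirely and run "${REDIS_SRC_DIR}/redis-cli" by its absolute path):
#!/bin/bash
REDIS_SRC_DIR=/Users/huangkunlun/Work/soft/redis/redis-6.0.10/src
HOST_TEST=redis-test-in.shantaijk.cn
HOST_DEV=r-uf61x52g8b0cketmk7.redis.rds.aliyuncs.com
env=$1
cd "${REDIS_SRC_DIR}" || exit 1          # stop if the directory is missing
if [[ ${env} == "test" ]]; then
    ./redis-cli -h "${HOST_TEST}" -p 6379 -a Cloudhis1234   # explicit ./ path
elif [[ ${env} == "dev" ]]; then
    ./redis-cli -h "${HOST_DEV}" -p 6379 -a Cloudhis1234
else
    echo "no env arg specified, exiting..."
fi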

Related

rsync multiple paths from one server to another server

I want to copy files from one server to another server, and I have more than one path for the files.
I want to enter the SSH username and password only once when I run the script.
And how can I repeat the script for more than one path/directory?
This is the script:
#!/bin/bash
sudo apt-get install sshpass -y
read -p "enter ssh source server : " src_server
read -p "enter ssh username for $src_server : " src_ssh_user
echo
mkdir -p /directory/folder1/ 2>/dev/null
echo "syncing directory $addons_path"
sudo rsync -av --rsh=ssh $src_ssh_user@$src_server:/directory/folder1/ /directory/folder1/
mkdir -p /directory/folder2/ 2>/dev/null
echo "syncing directory $addons_path"
sudo rsync -av --rsh=ssh $src_ssh_user@$src_server:/directory/folder2/ /directory/folder2/
A way to achieve this is with a for loop:
#!/bin/bash
PATHS=$1
sudo apt-get install sshpass -y
read -p "enter ssh source server : " src_server
read -p "enter ssh username for $src_server : " src_ssh_user
echo
for path in $PATHS    # word-splits the space-separated list of paths
do
    mkdir -p "$path" 2>/dev/null
    echo "syncing directory $path"
    sudo rsync -av --rsh=ssh "$src_ssh_user@$src_server:$path" "$path"
done
And execute the script
./myscript "/directory/folder1/ /directory/folder2/"
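A hedged variant of the same idea (my sketch, not part of the original answer): pass each path as a separate argument and loop over "$@", which avoids relying on word splitting and so also handles paths containing spaces:
#!/bin/bash
sudo apt-get install sshpass -y
read -p "enter ssh source server : " src_server
read -p "enter ssh username for $src_server : " src_ssh_user
echo
for path in "$@"                      # one argument per path
do
    mkdir -p "$path" 2>/dev/null
    echo "syncing directory $path"
    sudo rsync -av --rsh=ssh "$src_ssh_user@$src_server:$path" "$path"
done
invoked as ./myscript /directory/folder1/ /directory/folder2/ (no quotes around the list).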

Running a bash script after the kafka-connect docker is up and running

I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
#script to configure kafka connect with plugins
#export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
#export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
echo "In"
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
echo "Going to sleep for $sleep_second seconds"
# sleep $sleep_second
echo "Finished sleeping"
# ((sleep_second_counter+=$sleep_second))
echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I run the container and use docker logs to see what is happening. I expected the script to run and wait until Kafka Connect has started, but after a few seconds the script (or something, I don't know what) hangs and I don't see any console output anymore.
I am a bit lost as to what is wrong, so I need some guidance on what I am missing, or whether this is not the correct approach.
What I am trying to do
I want to have logic that waits for Kafka Connect to start and then runs the curl command:
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use docker-compose for this, since in some places I have to use docker run.
The problem here is that your ENTRYPOINT runs when the container starts and prevents the image's default CMD from running, and that CMD is what launches the Kafka Connect server. Since the server is never started, your script loops forever waiting for it.
You need to do one of the following:
start the Kafka Connect server from your ENTRYPOINT and run your script as the CMD, or run your script from outside the container.
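A minimal sketch of the first option, assuming the base image launches the Connect worker via /etc/confluent/docker/run (the standard launcher in confluentinc images; verify for your tag). The wrapper starts the worker in the background and then runs the configuration script; note that the commented-out sleep and counter lines in the wait loop should be restored, otherwise it busy-waits:
#!/bin/bash
# wrapper-entrypoint.sh (hypothetical name): start the Connect worker,
# then wait for its REST port and register the connectors.
/etc/confluent/docker/run &
/usr/share/kafka-connect-script/plugins-config.sh
wait   # keep the container alive as long as the worker runs
with the Dockerfile line changed to:
ENTRYPOINT ["/usr/share/kafka-connect-script/wrapper-entrypoint.sh"]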

Some Output Lost in Command Passed to SSH

I'm trying to use an ssh command to connect to a server and run the useradd command I pass to it. It seems to run OK for the most part (no errors produced), but the hashed password in the /etc/shadow file is missing the salt (I believe that's the portion that's missing).
I'm not sure whether the quoting is incorrect or not. Running this command manually on the server works fine, so I'm assuming it's the expansion that's messed up.
The command below is run from inside a Bash script.
Command:
ssh user@$host "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios"
When I escape the double quotes inside the Perl one-liner, I get the error:
Can't find string terminator '"' anywhere before EOF at -e line 1.
Usage: useradd [options] LOGIN
Any idea what I'm doing wrong here?
Instead of enclosing the entire command in double quotes and making sure to correctly escape everything in it, it is more robust to use single quotes and handle embedded single quotes as necessary.
In fact there are no embedded single quotes to handle here, only the embedded literal $ in $6$salt.
ssh "user#$host" 'useradd -d /usr/local/nagios -p $(perl -e "print crypt(q{mypassword}, q{\$6\$salt});") -g nagios nagios && chown -R nagios:nagios /usr/local/nagios'
echo "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios" > /tmp/tempcommand && scp /tmp/tempcommand root#server1:/tmp && ssh server1 "sh -x /tmp/tempcommand && finger nagios && rm /tmp/tempcommand"
In such cases I always prefer to keep the command set in a local file and execute it from there; that saves a lot of "quote debugging time". What I am doing above is first saving the long one-liner to a file locally, "as is" and "as it works" locally, then copying it to the remote server with scp and executing it there with the shell.
A more secure way (no need to copy the file over): again, save it locally and pass it to the remote bash with the -s option:
echo "useradd -d /usr/local/nagios -p $(perl -e 'print crypt("mypassword", "\$6\$salt");') -g nagios nagios && chown -R nagios:nagios /usr/local/nagios" > /tmp/tempcommand && echo finger nagios >> /tmp/tempcommand && ssh server1 'bash -s' < /tmp/tempcommand

How to detect fully interactive shell in bash from docker?

I want to detect, in the entrypoint script, whether -ti has been passed to docker run.
From docker run --help for -t and -i:
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY
I tried the following, but even when tested locally (not inside Docker) it always printed "Not interactive". (A script runs in a non-interactive shell regardless of whether stdin is a terminal, so $- never contains i there.)
#!/bin/bash
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
entrypoint.sh:
#!/bin/bash
set -e
if [ -t 0 ] ; then
    echo "(interactive shell)"
else
    echo "(not interactive shell)"
fi
/bin/bash -c "$@"
Dockerfile:
FROM debian:7.8
COPY entrypoint.sh /usr/bin/entrypoint.sh
RUN chmod 755 /usr/bin/entrypoint.sh
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
CMD ["/bin/bash"]
build the image:
$ docker build -t is_interactive .
run the image interactively:
$ docker run -ti --rm is_interactive "/bin/bash"
(interactive shell)
root@dd7dd9bf3f4e:/$ echo something
something
root@dd7dd9bf3f4e:/$ echo $HOME
/root
root@dd7dd9bf3f4e:/$ exit
exit
run the image not interactively:
$ docker run --rm is_interactive "echo \$HOME"
(not interactive shell)
/root
$
This Stack Overflow answer helped me find [ -t 0 ].
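A small generalization of that check (an illustrative sketch, not from the original post): [ -t FD ] works for any file descriptor, so a script can also distinguish the case where a TTY was allocated but output is being redirected:
#!/bin/bash
# Test stdin and stdout separately; both are TTYs under docker run -it.
if [ -t 0 ] && [ -t 1 ]; then
    echo "stdin and stdout are TTYs (fully interactive)"
elif [ -t 0 ]; then
    echo "stdin is a TTY but stdout is redirected"
else
    echo "no TTY on stdin"
fi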

Bash script failing [duplicate]

This question already has answers here:
write a shell script to ssh to a remote machine and execute commands
(10 answers)
Closed 9 years ago.
I'm writing a script whose purpose is to connect to a number of servers and create an account. The "core" is:
ssh user@ip
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
    echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
I have set up a public/private key relationship between the servers so that the ssh doesn't prompt for a password. However, when I run the script it performs the ssh but then doesn't run the subsequent commands on the target machine. Instead, when I manually exit from the target server, I see that those commands were executed (or rather, attempted) on the local machine.
This way there should be no password prompt when running both the ssh and sudo commands:
ssh user@ip bash -c "'
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
    echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
'"
If you are planning to sudo, why don't you just ssh as root (root@ip)? Just do:
ssh root@ip 'command1; command2; command3'
In your case, if you want to be sure each one succeeds before proceeding:
ssh root@ip 'USER=someUser; useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'
EDIT:
If root access is not allowed, I would do the following:
Create the script with the commands you want to execute on the remote machine, for instance script.sh:
#!/bin/bash
USER=someUser
useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER
Copy the script to the remote machine:
scp script.sh user@ip:/destination/dir
Invoke it remotely:
ssh user@ip 'sudo /destination/dir/script.sh'
EDIT2:
Other option without creating any files:
ssh user#ip "sudo bash -c 'USER=someUser && useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'"
It won't work this way. You should do it like:
ssh user@ip 'yourcommands ; listed ; etc.' or
copy the script you want to execute on the servers via scp /your/scriptname user@ip:/tmp/, then execute it with ssh user@ip 'sh /tmp/yourscriptname'.
But note that you start another shell when you run sudo su -, so the commands after it never reach that shell.
Now you have (at least) two options:
ssh user@ip 'sudo -s -- "yourcommands ; listed ; etc."' or
copy the part after the sudo to a different script, then:
ssh user@ip 'sudo -s -- "sh differentscript"'
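A related pattern (my addition, not from the answers above): feed the commands to a sudo'd remote shell through a quoted heredoc, which keeps all expansion on the remote side and avoids nested quoting entirely. This assumes passwordless sudo on the target, since there is no TTY for a password prompt:
ssh user@ip 'sudo bash -s' <<'EOF'
USER=someUser
useradd -m -p 123 "$USER"
chage -d 0 "$USER"
chown -R "$USER" "/home/$USER"
EOF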
