I am trying to write a script that will ping hosts from a file; if a host fails, it should move on to the next one and ideally output the failed ones. On the successfully pinged hosts it should make a directory, from the host running this script. It should be something like this:
#!/bin/bash
prod.txt=$(/usr/local/bin/prod.txt)
for hosts in $(prod.txt); do
I am having issues getting the ping part of it to work.
For the make-directory part I have:
mkdir -p /var/db/kds >/dev/null 2>&1
Thanks!
Here's an example that you can adapt to your needs:
$ cat /tmp/hosts.txt
10.10.0.1
10.10.0.2
10.10.0.3
10.10.0.4
10.10.0.5
10.10.0.6
$ cat /tmp/run.sh
#!/bin/sh
for host in $(cat /tmp/hosts.txt)
do
    if ping -c 2 "$host" >/dev/null 2>&1; then
        mkdir -p "/tmp/path/$host"
    else
        echo "$host is down"
    fi
done
$ ./run.sh
10.10.0.2 is down
10.10.0.3 is down
10.10.0.4 is down
$ ls /tmp/path/
10.10.0.1 10.10.0.5 10.10.0.6
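Since you also want to output the failed hosts, a small variant of the loop can collect them in a file; a sketch, where /tmp/failed.txt is just an illustrative path:

while IFS= read -r host; do
    if ping -c 2 "$host" >/dev/null 2>&1; then
        mkdir -p "/tmp/path/$host"
    else
        echo "$host" >> /tmp/failed.txt   # record unreachable hosts
    fi
done < /tmp/hosts.txt

Reading the file with while read also avoids the word-splitting pitfalls of for host in $(cat ...).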
I have the following docker file
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e "$(date) Kafka Connect listener HTTP state: $(eval $curl_command) (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The entrypoint gets called correctly, but CMD does not get invoked.
I also tried to follow the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile,
but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish:
I am trying to have a single Docker container image which will start the kafka-connect server (ENTRYPOINT) and then configure the plugins via a bash file (CMD). The requirement is that the same sequence of steps gets executed every time the container restarts.
CMD is run after ENTRYPOINT, like parameters after a function invocation, on the same command line.
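For example (an illustrative pair, not your actual images):

ENTRYPOINT ["/bin/echo", "hello"]
CMD ["world"]

A container built from this runs /bin/echo hello world: the CMD value is appended to ENTRYPOINT as arguments, not executed as a second command.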
In your case you want two different commands running sequentially. Then, you may add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run & # run in the background so we don't get stuck here
/usr/share/kafka-connect-script/plugins-config.sh # apply the plugin configuration
sleep 100000000 # keep the script alive; if it exited, the container would be killed
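To wire that in, the Dockerfile would replace the existing ENTRYPOINT/CMD pair with the script; a minimal sketch, assuming you copy it to /startup_script.sh:

COPY startup_script.sh /startup_script.sh
RUN chmod +x /startup_script.sh
ENTRYPOINT ["/startup_script.sh"]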
The piece of code below is part of my build script, and I'm running it from Jenkins as a parameterized build option (node).
It is able to connect to server_b and does the tasks as expected; the only command not working is "hostname -f".
It still gives server_a's hostname value instead of server_b's.
I'm not sure what exactly I'm doing incorrectly. Thanks.
#!/bin/bash
server_b(){
folder="/home/mylogin/server_b"
ssh -tt myuser@server_b.com << EOF
echo "$(hostname -f)" ## tried echo `hostname -f` as well
cd $folder
echo -e "FOLDER: $folder"
<other commands that works fine>
exit
EOF
}
server_b
Try escaping the $ that you want interpreted on the remote machine, e.g.:
echo \$(hostname -f)
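Alternatively, quoting the heredoc delimiter stops all local expansion, so the command substitution runs on the remote side; a sketch of the same function under that change:

ssh -tt myuser@server_b.com << 'EOF'
echo "$(hostname -f)"       # now expanded on server_b
cd /home/mylogin/server_b   # $folder would no longer expand locally, so inline it
exit
EOF

Note the trade-off: with << 'EOF', local variables such as $folder are not expanded either, so their values have to be inlined or passed some other way.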
I'm trying to rsync a DIR from one server to hundreds of servers using the script at the bottom.
But when I put single or double quotes around the ${host} variable, the host names are not picked up properly or not resolved.
The error is like below:
server1.example.com
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
When I run only the rsync command like below, it works, but the output doesn't contain the hostname, which is important for me to correlate the output with the associated host:
hostname -f && rsync -arpn --stats /usr/xyz ${host}:/usr/java
Can you please review and suggest how to make the script work even with quotes around the host variable, so that the output contains the hostname together with the output of rsync?
==============================================
#!/bin/bash
tmpdir=${TMPDIR:-/home/user}/output.$$
mkdir -p $tmpdir
count=0
while IFS= read -r host; do
ssh -n -o BatchMode=yes ${host} '\
hostname -f && \
rsync -arpn --stats /usr/xyz '${host}':/usr/java && \
ls -ltr /usr/xyz'
> ${tmpdir}/${host} 2>&1 &
count=`expr $count + 1`
done < /home/user/servers/non_java7_nodes.list
while [ $count -gt 0 ]; do
wait $pids
count=`expr $count - 1`
done
echo "Output for hosts are in $tmpdir"
exit 0
UPDATE:
Based on observation with set -x, the hostname is being resolved on the remote host itself; it is supposed to be resolved on the initiating host. I think this will work once we know how to make the hostname resolve on the initiating host even when the quotes are in place.
As far as I can tell, what you're looking for is something like:
#!/bin/bash
tmpdir=${TMPDIR:-/home/user}/output.$$
mkdir -p "$tmpdir"
host_for_pid=( )
while IFS= read -r host <&3; do
{
ssh -n -o BatchMode=yes "$host" 'hostname -f' && \
rsync -arpn --stats /usr/xyz "$host:/usr/java" && \
ssh -n -o BatchMode=yes "$host" 'ls -ltr /usr/java'
} </dev/null >"${tmpdir}/${host}" 2>&1 & host_for_pid[$!]=$host
done 3< /home/user/servers/non_java7_nodes.list
for pid in "${!host_for_pid[@]}"; do
if wait "$pid"; then
:
else
echo "ERROR: Process for host ${host_for_pid[$pid]} had exit status $?" >&2
fi
done
echo "Output for hosts are in $tmpdir"
Note that the rsync is no longer inside the ssh command, so it's run locally, not remotely.
I am a newbie to bash scripting. I am trying to copy a gz file, then change permissions and untar it on remote servers (all CentOS machines).
#!/bin/bash
pwd=/home/sujatha/downloads
cd $pwd
logfile=$pwd/log/`echo $0|cut -f1 -d'.'`.log
rm $logfile
touch $logfile
server="10.1.0.22"
for a in $server
do
scp /home/user/downloads/prometheus-2.0.0.linux-amd64.tar.gz
ssh -f sujatha@10.1.0.22 "tar -xvzf /home/sujatha/downloads/titantest/prometheus-2.0.0.linux-amd64.tar.gz"
sleep 2
echo
done
exit
The scp part is successful, but I am not able to do the remaining actions. After untarring, I also want to add more actions, like appending a variable to the config files, all through the script. Any advice would be helpful.
Run a bash session in your ssh connection:
ssh 192.168.2.9 bash -c "ls; sleep 2; echo \"bye\""
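Applied to your script, the untar and any follow-up configuration steps can be chained in one remote invocation; a sketch using the paths from your script, with the final echo standing in for your config edits:

ssh sujatha@10.1.0.22 "tar -xvzf /home/sujatha/downloads/titantest/prometheus-2.0.0.linux-amd64.tar.gz && echo 'append config variables here'"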
The script is supposed to distribute keys from an OSSEC server to its clients.
cat /usr/local/bin/dist-ossec-keys.sh
#!/bin/sh
#
for host in chef-production2-doctor-simulator01
do
echo "host is $host"
key=`mktemp`
grep $host /var/ossec/etc/client.keys > $key
echo "key is $key"
scp -i /var/ossec/.ssh/id_rsa -B -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $key ossecd@$host:/var/ossec/etc/client.keys >/dev/null 2>/dev/null
rm $key
ech "done"
done
I ran the script line by line, and its output is:
host is chef-production2-doctor-simulator01
key is /tmp/tmp.fAZqvNkJ9f
The bash script is created from this template
cat /var/ossec/etc/client.keys
001 #*#*#*#*#*#*#*#*#*#*#197.221.226 7196c76c568258e2ad836f8e1aa37e0758dee969f560ceb59be76879c3f3412d
002 test-agent 10.128.0.9 e2c9b117f26a202598007dcb4ec64e01f18000f9d820f6b3508a95e5313e6537
What is it supposed to do?
Why is it not working?
The error was in the scp -i /var/ossec/... line.
My poor Linux knowledge failed to notice the
>/dev/null 2>/dev/null
Removing that revealed the real error, which was Permission denied (publickey).
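In general, running the scp line without the redirects is the quickest way to surface such failures; the same line, as a debugging variant:

scp -i /var/ossec/.ssh/id_rsa -B -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $key ossecd@$host:/var/ossec/etc/client.keys
# with no >/dev/null 2>/dev/null, errors like "Permission denied (publickey)" appear on the terminal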