I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
    echo "In" >> log.log
    echo -e $(date) " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter" >> log.log
    echo "Going to sleep for $sleep_second seconds" >> log.log
    sleep $sleep_second
    echo "Finished sleeping" >> log.log
    ((sleep_second_counter+=$sleep_second))
    echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The ENTRYPOINT gets called correctly, but the CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile, but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish
I am trying to have a single Docker container image which will start the kafka-connect server (ENTRYPOINT) and then configure the plugins via a bash file (CMD). The requirement is that the same sequence of steps gets executed every time the container restarts.
CMD is not run after ENTRYPOINT; its words are passed to ENTRYPOINT as arguments, like parameters after a function invocation, on the same command line.
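With your Dockerfile, the container therefore starts as if you had typed the following single command line (illustrative; whether the run script does anything with that extra argument is entirely up to the script):
./etc/confluent/docker/run /usr/share/kafka-connect-script/plugins-config.sh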
In your case you want two different commands running sequentially. Then, you may add them to a startup_script.sh whose content is:
#!/bin/bash
/etc/confluent/docker/run &   # start the Connect server in the background so we don't get stuck here
/usr/share/kafka-connect-script/plugins-config.sh   # apply the plugin configuration
sleep 100000000   # keep the script alive; if it exited, the container would stop
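The Dockerfile then points its ENTRYPOINT at that script instead (a minimal sketch, reusing the COPY/chmod pattern from your Dockerfile; the startup_script.sh name and path are assumptions):
COPY startup_script.sh /usr/share/kafka-connect-script/startup_script.sh
RUN chmod +x /usr/share/kafka-connect-script/startup_script.sh
ENTRYPOINT ["/usr/share/kafka-connect-script/startup_script.sh"]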
Related
I recently made a bash script which automatically creates a Minecraft server using the server.jar file. I told it to echo a string, which is a Minecraft command, after a certain delay, so that it would be issued after the server is done loading. For some reason, after the Minecraft world finishes loading, it just doesn't echo that string.
Here's the script:
#!/bin/bash
mkdir minecraft_server
cd minecraft_server
wget https://launcher.mojang.com/v1/objects/e00c4052dac1d59a1188b2aa9d5a87113aaf1122/server.jar
java -Xmx5G -Xms3G -jar server.jar nogui
sed -i "s/^eula.*/eula=true/" /home/my name/minecraft_server/eula.txt
sed -i "s/^server-port.*/server-port=40004/" /home/my name/minecraft_server/server.properties
java -Xmx5G -Xms3G -jar server.jar nogui
sleep 60
echo "/op My Username"
There are many ready-made Minecraft server launch scripts, which also feature screen for remote management. But if you want to go a bit further by yourself, here is a cleaned-up version of your script. It works but does not include the op command; that needs to be dealt with separately, because echo only writes to your terminal's stdout and never reaches the server's stdin. See the sketch after the script.
#!/bin/sh
server_dl_url='https://launcher.mojang.com/v1/objects/e00c4052dac1d59a1188b2aa9d5a87113aaf1122/server.jar'
server_dir='./minecraft_server'
server_jar='server.jar'
if mkdir -p "${server_dir}" && cd "${server_dir}"; then
    # If server.jar does not exist, download it
    if ! [ -e "${server_jar}" ]; then
        if ! wget --output-document="${server_jar}" "${server_dl_url}"; then
            printf "Could not download %s from %s\\n" \
                "${server_jar}" "${server_dl_url}"
            exit 1
        fi >&2
    fi
    # Check if it has eula.txt; if not, generate it with --initSettings
    if ! [ -e 'eula.txt' ]; then
        # Initializes 'server.properties' and 'eula.txt', then quits
        if ! java -jar "${server_jar}" --initSettings --nogui; then
            printf "Could not initialize 'server.properties' and 'eula.txt'\\n"
            exit 1
        fi >&2
    fi
    if ! grep -q '^eula=true' 'eula.txt'; then
        sed -i.bak 's/^eula=.*/eula=true/' 'eula.txt'
    fi
    # Run the actual server
    java -Xmx5G -Xms3G -jar "${server_jar}" nogui &
fi
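To actually issue the op command, remember that a bare echo only writes to your terminal, not to the server's stdin. One common approach (a sketch, not part of the cleaned-up script above; My_Username is a placeholder) is to run the server under screen and inject the command into its console later:
#!/bin/sh
# Start the server in a detached screen session named "mc".
screen -dmS mc java -Xmx5G -Xms3G -jar server.jar nogui
# Crude: wait for the world to finish loading.
sleep 60
# Type the console command, followed by a carriage return, into the session.
screen -S mc -p 0 -X stuff "op My_Username$(printf '\r')"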
entrypoint.sh contains various cqlsh commands that require Cassandra. Without something like script.sh, cqlsh commands fail because Cassandra doesn't have enough time to start. When I execute the following locally, everything appears to work properly. However, when I run via Docker, script.sh never finishes. In other words, $status never changes from 1 to 0.
Dockerfile
FROM cassandra
RUN apt-get update && apt-get install -y netcat
RUN mkdir /dir
ADD ./scripts /dir/scripts
RUN /bin/bash -c 'service cassandra start'
RUN /bin/bash -c '/dir/scripts/script.sh'
RUN /bin/bash -c '/dir/scripts/entrypoint.sh'
script.sh
#!/bin/bash
set -e
cmd="$#"
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
sleep 3s
status=$(nc -z localhost 9042; echo $?)
echo $status
done
exec $cmd
Alternatively, I could do something like until cqlsh -e 'some code'; do ..., as noted here for psql, but that doesn't appear to work for me. I'm wondering how best to approach the problem.
You're misusing the RUN command in your Dockerfile. It's not for starting services; it's for making filesystem changes in your image. The reason $status doesn't update is that you can't start Cassandra via a RUN command.
You should add service cassandra start and /dir/scripts/entrypoint.sh to your script.sh file, and make that the CMD that's executed by default:
Dockerfile
CMD ["/bin/bash", "-c", "/dir/scripts/script.sh"]
script.sh
#!/bin/bash
set -e
# NOTE: I removed your `cmd` processing in favor of invoking entrypoint.sh
# directly.
# Start Cassandra before waiting for it to boot.
service cassandra start
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
    sleep 3s
    status=$(nc -z localhost 9042; echo $?)
    echo $status
done
exec /bin/bash -c /dir/scripts/entrypoint.sh
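Putting it together, the Dockerfile from the question shrinks to the following (the RUN lines that tried to start services are gone, since RUN only executes at build time):
FROM cassandra
RUN apt-get update && apt-get install -y netcat
RUN mkdir /dir
ADD ./scripts /dir/scripts
CMD ["/bin/bash", "-c", "/dir/scripts/script.sh"]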
I used crontab -e to schedule the execution of a shell script that makes ssh calls to a list of servers, gathers information, and prints it to a file. The output of crontab -l is:
SHELL = /bin/sh
PATH = /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh
The script I am running logs the output of echo "Beginning remote connections..." >> $logfile to the file as expected, however it does not log the output of the following loop:
for servers in $(cat hostnames.txt); do
    echo "Starting connection to $servers" >> $logfile
    (rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
    echo ""
    ssh -t $servers "sudo ./checkup.sh") >> $logfile
    echo ""
done
Pastebin of the full script: http://pastebin.com/3vD7Bba0
An additional note: this script pushes the latest version of a management script and then ssh's into the remote server to execute it and capture the output. This works 100% of the time when run manually. Any assistance would be helpful, thanks!
You need to write SHELL=/bin/sh, and the same for PATH; the spaces around = are wrong.
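With the spaces removed (and keeping your schedule), the top of the crontab should read:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh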
Also, use full paths when referring to files in your script, since cron runs it from a different working directory:
From
for servers in $(cat hostnames.txt); do
    echo "Starting connection to $servers" >> $logfile
    (rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
    echo ""
    ssh -t $servers "sudo ./checkup.sh") >> $logfile
    echo ""
done
to
while read servers
do
    echo "Starting connection to $servers" >> $logfile
    (rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
    echo ""
    ssh -t $servers "sudo ./checkup.sh") >> $logfile
    echo ""
done < /path/to/hostnames.txt
^^^^^^^^^
Note the usage of while read; do ... done < file instead of the unnecessary for host in $(cat ...).
I have this code, copied from linuxaria.com as an example, and it works just fine in my case. The problem is that when I exit the terminal, inotifywait stops. I want it to run in the background even after I exit the terminal. How can I do that?
#!/bin/sh
# CONFIGURATION
DIR="/tmp"
EVENTS="create"
FIFO="/tmp/inotify2.fifo"
on_event() {
    local date=$1
    local time=$2
    local file=$3
    sleep 5
    echo "$date $time File created: $file"
}
# MAIN
if [ ! -e "$FIFO" ]
then
    mkfifo "$FIFO"
fi
inotifywait -m -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" &
INOTIFY_PID=$!
while read date time file
do
    on_event $date $time $file &
done < "$FIFO"
You can run the script with screen or nohup but I'm not sure how that would help since the script does not appear to log its output to any file.
nohup bash script.sh </dev/null >/dev/null 2>&1 &
Or
screen -dm bash script.sh </dev/null >/dev/null 2>&1 &
Disown could also apply:
bash script.sh </dev/null >/dev/null 2>&1 & disown
You should just test which one would not allow the command to suspend or hang up when the terminal exits.
If you want to log the output to a file, you can try these versions:
nohup bash script.sh </dev/null >/path/to/logfile 2>&1 &
screen -dm bash script.sh </dev/null >/path/to/logfile 2>&1 &
bash script.sh </dev/null >/path/to/logfile 2>&1 & disown
I made a 'service' out of it, so I could stop/start it like a normal service and it would also start after a reboot.
This was made on a CentOS distro, so I'm not sure if it works on others right away.
Create a file with execute rights in the service directory:
/etc/init.d/servicename
#!/bin/bash
# chkconfig: 2345 90 60
case "$1" in
start)
nohup SCRIPT.SH > /dev/null 2>&1 &
echo $!>/var/run/SCRIPT.SH.pid
;;
stop)
pkill -P `cat /var/run/SCRIPT.SH.pid`
rm /var/run/SCRIPT.SH.pid
;;
restart)
$0 stop
$0 start
;;
status)
if [ -e /var/run/SCRIPT.SH.pid ]; then
echo SCRIPT.SH is running, pid=`cat /var/run/SCRIPT.SH.pid`
else
echo SCRIPT.SH is not running
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
Everything in caps you should change to your script's name.
The line # chkconfig: 2345 90 60 makes it possible to start the service when the system is rebooted. This probably doesn't work on Ubuntu-like distros.
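For reference, the usual CentOS steps to register and start the service would be (assuming you named the file servicename):
chmod +x /etc/init.d/servicename
chkconfig --add servicename
chkconfig servicename on
service servicename start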
The best way I found is to create a systemd service.
Create systemd file in /lib/systemd/system/checkfile.service:
sudo vim /lib/systemd/system/checkfile.service
And paste this there:
[Unit]
Description = Run inotifywait in the background
[Service]
User=ubuntu
Group=ubuntu
ExecStart=/bin/bash /path_to/script.sh
RestartSec=10
[Install]
WantedBy=multi-user.target
and in /path_to/script.sh, you can have this:
inotifywait -m /path-to-dir -e create -e moved_to |
while read dir action file; do
    echo "The file '$file' appeared in directory '$dir' via '$action'" >> /dir/event.txt
done
Make sure that your file is executable by the user:
sudo chmod +x /path_to/script.sh
After creating two files, reload systemd manager configuration with:
sudo systemctl daemon-reload
Now you can use start/stop/enable on your script:
sudo systemctl enable checkfile
sudo systemctl start checkfile
Make sure to replace file/directory/user/group values before executing.
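To confirm the unit is running and to follow its output, the standard systemd commands apply:
sudo systemctl status checkfile
sudo journalctl -u checkfile -f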
replace -m with
-d -o /dev/null
i.e.:
inotifywait -d -o /dev/null -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" & INOTIFY_PID=$!
You can check the inotifywait help manual at:
https://helpmanual.io/help/inotifywait/
Method that will work even if the file to be watched is not there yet, or gets deleted in between (just watch the whole directory instead of a single file, and then do the action on a particular file):
nohup inotifywait -m -e close_write /var/opt/some_directory/ |
while read -r directory events filename; do
    if [ "$filename" = "file_to_be_watched.log" ]; then
        # do your stuff here; I'm just printing the events to file
        echo "$events" >> /tmp/events.log
    fi
done &
I have a script that uses ssh to login to a remote machine, cd to a particular directory, and then start a daemon. The original script looks like this:
ssh server "cd /tmp/path ; nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
This script appears to work fine. However, it is not robust to the case when the user enters the wrong path so the cd fails. Because of the ;, this command will try to run the nohup command even if the cd fails.
The obvious fix doesn't work:
ssh server "cd /tmp/path && nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
That is, with &&, the SSH command does not return until the server is stopped. Putting nohup in front of the cd instead of in front of the java didn't work either.
Can anyone help me fix this? Can you explain why this solution doesn't work? Thanks!
Edit: cbuckley suggests using sh -c, from which I derived:
ssh server "nohup sh -c 'cd /tmp/path && java server 0</dev/null 1>master_stdout 2>master_stderr' 2>/dev/null 1>/dev/null &"
However, now the exit code is always 0 when the cd fails; whereas if I do ssh server cd /failed/path then I get a real exit code. Suggestions?
See Bash's Operator Precedence.
The & is being attached to the whole statement because it has a higher precedence than &&. You don't need ssh to verify this. Just run this in your shell:
$ sleep 100 && echo yay &
[1] 19934
If the & were only attached to the echo yay, then your shell would sleep for 100 seconds and then report the background job. However, the entire sleep 100 && echo yay is backgrounded and you're given the job notification immediately. Running jobs will show it hanging out:
$ sleep 100 && echo yay &
[1] 20124
$ jobs
[1]+ Running sleep 100 && echo yay &
You can use parentheses to create a subshell around echo yay &, giving you what you'd expect:
sleep 100 && ( echo yay & )
This would be similar to using bash -c to run echo yay &:
sleep 100 && bash -c "echo yay &"
Tossing these into an ssh, and we get:
# using parenthesis...
$ ssh localhost "cd / && (nohup sleep 100 >/dev/null </dev/null &)"
$ ps -ef | grep sleep
me 20136 1 0 16:48 ? 00:00:00 sleep 100
# and using `bash -c`
$ ssh localhost "cd / && bash -c 'nohup sleep 100 >/dev/null </dev/null &'"
$ ps -ef | grep sleep
me 20145 1 0 16:48 ? 00:00:00 sleep 100
Applying this to your command, we get:
ssh server "cd /tmp/path && (nohup java server 0</dev/null 1>server_stdout 2>server_stderr &)"
or:
ssh server "cd /tmp/path && bash -c 'nohup java server 0</dev/null 1>server_stdout 2>server_stderr &'"
Also, with regard to your comment on the post,
"Right, sh -c always returns 0. E.g., sh -c exit 1 has error code 0"
this is incorrect. Directly from the manpage:
Bash's exit status is the exit status of the last command executed in
the script. If no commands are executed, the exit status is 0.
Indeed:
$ bash -c "true ; exit 1"
$ echo $?
1
$ bash -c "false ; exit 22"
$ echo $?
22
ssh server "test -d /tmp/path" && ssh server "nohup ... &"
Answer roundup:
Bad: Using sh -c to wrap the entire nohup command doesn't work for my purposes because it doesn't return error codes. (@cbuckley)
Okay: ssh <server> <cmd1> && ssh <server> <cmd2> works but is much slower. (@joachim-nilsson)
Good: Create a shell script on <server> that runs the commands in succession and returns the correct error code (see the sketch below).
The last is what I ended up using. I'd still be interested in learning why the original use-case doesn't work, if someone who understands shell internals can explain it to me!
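For the "Good" option, here is a minimal sketch of such a remote script (start_server.sh is a hypothetical name; place it on <server> and make it executable):
#!/bin/sh
# Exits with a real error code if the directory is missing.
cd /tmp/path || exit 1
nohup java server 0</dev/null 1>server_stdout 2>server_stderr &
It is then invoked with ssh server '/path/to/start_server.sh', and a cd failure propagates as the ssh exit status.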