Upstart service in start/killed state - bash

I have a python application that needs to run as a service on Ubuntu 14.04. This application needs to have the following capabilities:
When the service is started, a cron entry is created in the crontab, which will periodically run the application.
When the service is stopped, the crontab entry is removed.
When the system/server is rebooted, the service needs to be started.
I have the following upstart script to run my service:
start on [2345]
stop on [!2345]
script
LOGDIR=/usr/local/etc/myservice/logs/
CFGFILE=/usr/local/etc/myservice/myservice.conf
echo $$ > /var/run/myservice.pid
# If there is no cronjob by the name myservice, then add a cronjob to the crontab
set -x
exec bash -c '
if (( $(crontab -l | grep -c myservice) == 0 )); then
(crontab -l ; echo "1 * * * * myservice") | crontab -
fi'
end script
pre-start script
set -x
echo "[`date`] Starting myservice Service" >> /var/log/myservice.log
# Testing to see if myservice has been installed, else exit
[ -x /usr/local/bin/myservice ] || exit 0
mkdir -p /usr/local/etc/myservice/logs/
end script
pre-stop script
set -x
echo "[`date`] Stopping myservice Service" >> /var/log/myservice.log
end script
post-stop script
set -x
rm /var/run/myservice.pid
# If there is at least 1 cronjob by the name myservice, remove all such entries from crontab
exec bash -c '
if (( $(crontab -l | grep -c myservice) >= 0 )); then
(crontab -l | grep -v myservice) | crontab -
fi'
pkill -f myservice
end script
However, when I try to start the service, it hangs and I have to hit Ctrl+C to get the command line back. The same happens when stopping the service. Am I missing something here? Any help will be appreciated!
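(For readability, the crontab bookkeeping that the job tries to perform, pulled out of the Upstart stanzas above, boils down to the two snippets below; this is only a sketch of the intent, not a fix for the hang.)
# On start: add the entry only if no line mentioning myservice exists yet
if ! crontab -l 2>/dev/null | grep -q myservice; then
    (crontab -l 2>/dev/null; echo "1 * * * * myservice") | crontab -
fi
# On stop: remove every line mentioning myservice
crontab -l 2>/dev/null | grep -v myservice | crontab -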

Related

Email not sent in the cron job shell script

I have this cron job entry:
0 0 * * * /application-data/sendUrl.sh
/application-data/sendUrl.sh has this code:
auditFile="/application-data/auditUrl.txt"
[ -e $auditFile ] && (echo -e "Subject: Foo\nTo: user@example.com\n\n" `cat $auditFile` | sendmail -t ; rm -f $auditFile )
The shell script has all root privileges and correct file permissions. Running it from the command line, it sends the email. Only when it is executed by the cron job is the email not sent; however, the file at the end of the command list is deleted, so I know the shell script has been executed.
Any idea what I am doing wrong, so that the email is not sent when running as a cron job?
Your script doesn't have a shebang, so it will be executed with sh; echo -e behavior is implementation-defined with sh.
Also, you're deleting the file even if sendmail fails; you should at least test the return status before doing the deletion.
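A quick way to see that implementation-defined behavior in practice (assuming Ubuntu/Debian, where /bin/sh is dash): dash's built-in echo keeps -e as an ordinary argument, so it lands in front of the Subject: header, while bash's echo consumes it:
$ /bin/sh -c 'echo -e "Subject: Foo\nTo: user@example.com"'
-e Subject: Foo
To: user@example.com
$ /bin/bash -c 'echo -e "Subject: Foo\nTo: user@example.com"'
Subject: Foo
To: user@example.com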
Does it work better like this?
#!/bin/sh
auditFile="/application-data/auditUrl.txt"
[ -f "$auditFile" ] || exit
printf 'Subject: %s\nTo: %s\n\n' "Foo" "user@example.com" |
cat - "$auditFile" |
sendmail -t &&
rm -f "$auditFile"

How to run shell script via kubectl without interactive shell

I am trying to export a configuration from a service called Keycloak using a shell script. To do that, export.sh is run from the pipeline.
The script connects to the k8s cluster and runs the commands there.
So far everything goes okay and the export works perfectly.
But when I exit from the pod with exit, the whole shell script ends right there, so it moves back to the pipeline host instead of staying on the remote machine.
Running the command from the pipeline
ssh -t ubuntu@example1.com 'bash' < export.sh
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
kubectl -n keycloak exec -it keycloak-0 bash
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
exit
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
exit
exit
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
After the first exit the whole shell script stops; the remaining commands don't run, and it doesn't stay on ubuntu@example1.com.
Is there any solution?
Run the commands inside the pod, without an interactive shell, using a heredoc (EOF).
Note that it's not EOF but 'EOF': quoting the delimiter prevents variable expansion in the current shell,
so in the script below /tmp/export/master-* will expand inside the pod, as you expect.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
<put your codes here, which you type interactively>
EOF
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
# the suggested code.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
EOF
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
Whether the scp succeeds or not, the script will now run to the end and exit.
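One side note that is not part of the original answer: since stdin here is a heredoc rather than a terminal, the -t flag has no TTY to attach and kubectl will warn about it; -i alone is enough to feed the commands in, for example:
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
echo "running inside the pod"
EOF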

Cron + nohup = script in cron cannot find command?

There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv
if [ ! -f $NPID ]
then
echo "Not started"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
NUM=$(ps ax | grep $(cat $NPID) | grep -v grep | wc -l)
if [ $NUM -lt 1 ]
then
echo "Not working"
echo "Starting"
nohup home/user/server/websrv &> my_script.out &
else
ps ax | grep $(cat $NPID) | grep -v grep
echo "All Ok"
fi
fi
websrv gets JSON from the user and then runs the work.sh script itself.
The problem is that this sh script, invoked by websrv, "does not see" commands and stops with exit 1.
The script work.sh is like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Using full paths does not help.
If the script is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup home/user/server/websrv & it works as well.
However, if this whole chain of tools is started by cron at boot, work.sh is not able to perform any command except cp, pwd, which, etc. Invoking ssh-add, git, cp, rm, make, etc. forces the exit 1 status of the script. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log which might explain the particular errors.
Try adding the PATH from the session that runs the script correctly to the cron entry (or inside the script).
Get the current path (where the script runs fine) with echo $PATH and add that to the crontab, replacing the placeholder below with the output -> <REPLACE_WITH_OUTPUT_FROM_ABOVE>
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't necessarily use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.).
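If you prefer the "inside the script" variant mentioned above, a minimal sketch is to export your interactive PATH at the top of run.sh (the directory list below is just the example output shown above; substitute your own echo $PATH output):
#!/usr/bin/env bash
# Give cron the same PATH as the interactive shell so ssh-add, git, etc. can be found
export PATH="/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH"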

CMD does not run if used after ENTRYPOINT

I have the following docker file
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file as this
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The ENTRYPOINT gets called correctly, but the CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile,
but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish
I am trying to have a single docker container image which starts the kafka-connect server (ENTRYPOINT) and then configures the plugins via a bash file (CMD). The requirement is that the same sequence of steps is executed every time the container restarts.
CMD is not run after ENTRYPOINT; it is passed to the ENTRYPOINT as arguments, like parameters after a function invocation, on the same command line.
In your case you want two different commands running sequentially, so you can add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run & # run in background not to get stuck in here
/usr/share/kafka-connect-script/plugins-config.sh # apply configuration
sleep 100000000 # keep the startup script from exiting, since exiting would kill the container
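To wire that in, the tail of the Dockerfile could look something like the sketch below (the /startup_script.sh path is an assumption, not taken from the original Dockerfile):
# In place of the separate ENTRYPOINT and CMD lines, run the combined script
COPY startup_script.sh /startup_script.sh
RUN chmod +x /startup_script.sh
ENTRYPOINT ["/startup_script.sh"]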

Starting multiple services using shell script in Dockerfile

I am creating a Dockerfile to install the WebLogic 12c services and start them with startup scripts at "docker run" time. I am passing a shell script to the CMD instruction which executes the startWebLogic.sh and startNodeManager.sh scripts. But when I log in to the container, only the first script, startWebLogic.sh, has been started and the second has not even run, which is obvious from the docker logs.
The same script, executed manually inside the container, starts both services. What is the right instruction for running the script so that it starts multiple processes in the container and does not exit the container?
What am I missing in this script and in the Dockerfile? I know that a container should run only one process, but, even in a dirty way, how can I start multiple services for an application like WebLogic, which has a nameserver, node manager, managed server, plus managed domains and machines to create? The managed server can only be started when the WebLogic nameserver is running.
Script: startscript.sh
#!/bin/bash
# Start the first process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start my_first_process: $status"
exit $status
fi
# Start the second process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start my_second_process: $status"
exit $status
fi
while sleep 60; do
ps aux |grep "Name=adminserver" |grep -q -v grep
PROCESS_1_STATUS=$?
ps aux |grep node |grep -q -v grep
PROCESS_2_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Truncated Dockerfile:
RUN unzip $WLS_PKG
RUN $JAVA_HOME/bin/java -Xmx1024m -jar /u01/app/oracle/$WLS_JAR -silent -responseFile /u01/app/oracle/wls.rsp -invPtrLoc /u01/app/oracle/oraInst.loc > install.log
RUN rm -f $WLS_PKG
RUN . $WLS_HOME/server/bin/setWLSEnv.sh && java weblogic.version
RUN java weblogic.WLST -skipWLSModuleScanning create_basedomain.py
WORKDIR /u01/app/oracle
CMD ./startscript.sh
docker build and run commands:
docker build -f Dockerfile-weblogic --tag="weblogic12c:startweb" /var/dprojects
docker run -d -it weblogic12c:startweb
docker exec -it 6313c4caccd3 bash
Please use supervisord for running multiple services in a docker container. It will make the whole process more robust and reliable.
Run supervisord -n as your CMD and configure all your services in /etc/supervisord.conf.
A sample conf would look like this:
[program:WebLogic]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
stderr_logfile = /var/log/supervisord/WebLogic-stderr.log
stdout_logfile = /var/log/supervisord/WebLogic-stdout.log
autorestart=unexpected
[program:NodeManager]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
stderr_logfile = /var/log/supervisord/NodeManager-stderr.log
stdout_logfile = /var/log/supervisord/NodeManager-stdout.log
autorestart=unexpected
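A rough sketch of the Dockerfile side of this (treat the install line as an assumption; it depends on what the base image provides):
# Install supervisord, add the config, and run it in the foreground
RUN pip install supervisor    # or yum/apt-get install supervisor, depending on the base image
COPY supervisord.conf /etc/supervisord.conf
RUN mkdir -p /var/log/supervisord
CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]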
It will handle all the things you are trying to do with a shell script.
Hope it helps!
