Simple daemon process in Ubuntu - bash

I want to start a simple daemon process in Ubuntu that writes the current time to a log file every 5 seconds.
start-stop-daemon --start --user root --make-pidfile --pidfile /home/manjesh/test.pid --exec /home/manjesh/simplescript.sh
simplescript.sh
#!/bin/bash
echo $(date)" SNMP Monitoring and Log aggregator service " >> /home/manjesh/log.txt
while true
do
echo $(date) >> /home/dcae/snmp-service/log
sleep 5
done
When I execute the command it says "No such file or directory", even though the file does exist.
Any help would be appreciated. Thanks.

The way I would do this is to use a cron job that triggers every minute and calls a script that writes the time every 5 seconds (12 iterations of 5 seconds cover the minute), like this:
Cron:
* * * * * /usr/local/bin/script >/dev/null 2>&1
Script:
#!/bin/bash
mkdir -p /home/dcae/snmp-service/
i=0
while [ "$i" -lt 12 ]
do
    echo "$(date)" >> /home/dcae/snmp-service/log
    i=$((i + 1))
    sleep 5
done

The problem was that I had created the file on Windows and moved it to Ubuntu, and the Windows line endings (CRLF) broke the interpreter line:
-bash: ./my_script: /bin/bash^M: bad interpreter: No such file or directory
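The stray carriage returns can be stripped in place with sed (or with dos2unix, if installed). A quick demonstration on a throwaway file:

```shell
# Create a script with Windows CRLF line endings to reproduce the problem
printf '#!/bin/bash\r\necho hello\r\n' > /tmp/crlf_demo.sh
# Strip the trailing carriage return from every line, in place
sed -i 's/\r$//' /tmp/crlf_demo.sh
head -n 1 /tmp/crlf_demo.sh
```

After the sed run the shebang line no longer carries the invisible ^M, and the script executes normally.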

CMD does not run if used after ENTRYPOINT

I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e "$(date) Kafka Connect listener HTTP state: $(eval $curl_command) (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
Entry point gets called correctly but CMD does not get invoked.
I also tried to understand the solution given here: CMD doesn't run after ENTRYPOINT in Dockerfile,
but I did not understand it.
Could someone explain a bit more what is wrong here?
What I am trying to accomplish
I am trying to have a single Docker container image which starts the kafka-connect server (ENTRYPOINT) and then configures the plugins via a bash file (CMD). The requirement is that the same sequence of steps is executed every time the container restarts.
CMD is not run after ENTRYPOINT; Docker appends CMD to ENTRYPOINT as arguments, like parameters after a function invocation, forming a single command line.
In your case you want two different commands running sequentially. Then, you may add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run &                       # run the server in the background so we don't block here
/usr/share/kafka-connect-script/plugins-config.sh  # apply the plugin configuration
wait                                                # keep the container alive while the server runs (if this script exited, the container would stop)
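Wiring that wrapper into the image could then look like this (a sketch; the wrapper replaces the original ENTRYPOINT/CMD pair, and the destination path is an assumption):

```dockerfile
COPY startup_script.sh /usr/local/bin/startup_script.sh
RUN chmod +x /usr/local/bin/startup_script.sh
# one entrypoint that starts the server and then applies the configuration
ENTRYPOINT ["/usr/local/bin/startup_script.sh"]
```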

Can't run a shell script every 24 hours

I have written a shell script that runs some commands, and I added logic to run the script once every 24 hours. But it runs once and then doesn't run again.
The script is as below:
#!/bin/bash
while true; do
cd /home/ubuntu/;
DATE=`date '+%Y-%m-%d'`;
aws s3 cp --recursive "/home/ubuntu/" s3://bucket_name/$DATE/;
rm -r -f ./*;
# sleep 24 hours
sleep $((24 * 60 * 60))
done
Why does it not run once every 24 hours? I do not get any errors when the script runs. The copy takes about 10 minutes.
Good practice is to protect your script against being run multiple times in parallel.
That way you can be sure only one instance is running:
#!/bin/bash
LOCKFILE=/tmp/block_file
if ( set -o noclobber; echo "$$" > "$LOCKFILE" ) 2> /dev/null
then
    trap 'rm -f "$LOCKFILE"; exit $?' INT TERM EXIT
    while true; do
        cd /home/ubuntu/
        DATE=$(date '+%Y-%m-%d')
        aws s3 cp --recursive "/home/ubuntu/" "s3://bucket_name/$DATE/"
        rm -rf ./*
        # sleep 24 hours
        sleep $((24 * 60 * 60))
    done
    rm -f "$LOCKFILE"
    trap - INT TERM EXIT
else
    echo "Warning. Script is already running!"
    echo "Blocked by PID $(cat "$LOCKFILE")."
    exit
fi
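A shorter equivalent on Linux (a sketch, not the poster's script; it uses the util-linux flock tool instead of the noclobber trick above) is:

```shell
#!/bin/bash
# Open file descriptor 200 on the lock file and try to take an
# exclusive lock without blocking (-n); fail fast if it is held
exec 200>/tmp/s3_upload.lock
if ! flock -n 200; then
    echo "Warning. Script is already running!"
    exit 1
fi
echo "lock acquired"
# ... the upload loop from above would go here ...
```

The kernel releases the lock automatically when the process exits, so no trap or cleanup is needed.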
You can also run the script immune to hangups.
nohup is a UNIX utility that runs the specified command while ignoring hangup signals (SIGHUP). Thus, the script will continue to work in the background even after the user logs out.
nohup ./yourscript.sh
The lock file /tmp/block_file guards the script against a second simultaneous run. To stop the script, press Ctrl+C or kill its PID from a terminal; the trap then deletes /tmp/block_file.
The output of the script goes to the file nohup.out.
To run in background (preferred way):
nohup ./yourscript.sh &
Your script is probably killed due to inactivity, or when you exit the shell. The proper way to do this is to use cron, as @Christian.K mentioned. See https://help.ubuntu.com/community/CronHowto
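With cron there is no need for the loop or the sleep at all; a daily crontab entry could look like this (a sketch; the script path and log location are assumptions):

```
# run the S3 upload once a day at midnight; cron handles the scheduling
0 0 * * * /home/ubuntu/s3_upload.sh >> /var/log/s3_upload.log 2>&1
```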

On closing the Terminal the nohupped shell script (with &) is stopped

I'm developing a simple screenshot spyware which takes a screenshot every 5 seconds from the start of the script. I want it to keep running after the terminal is closed. Even after nohupping the script along with '&', my script exits on closing the terminal.
screenshotScriptWOSleep.sh
#!/bin/bash
echo "Starting Screenshot Capture Script."
echo "Process ID: $$"
directory=$(date "+%Y-%m-%d-%H:%M")
mkdir "${directory}"
cd "${directory}"
shotName=$(date "+%s")
while true
do
    if [ "$(date "+%Y-%m-%d-%H:%M")" != "${directory}" ]
    then
        directory=$(date "+%Y-%m-%d-%H:%M")
        cd ..
        mkdir "${directory}"
        cd "${directory}"
    fi
    if [ $(( shotName + 5 )) -eq "$(date "+%s")" ]
    then
        shotName=$(date "+%s")
        screencapture -x "$(date "+%Y-%m-%d-%H:%M:%S")"
    fi
done
I ran the script with,
nohup ./screenshotScriptWOSleep.sh &
On closing the terminal window, it warns with,
"Closing this tab will terminate the running processes: bash, date."
I have read that nohup applies to the child processes too, but I'm stuck here. Thanks.
Either you're doing something really weird or that's referring to other processes.
nohup bash -c 'sleep 500' &
Shutdown that terminal; open another one:
ps aux | grep sleep
409370294 26120 1 0 2:43AM ?? 0:00.01 sleep 500
409370294 26330 26191 0 2:45AM ttys005 0:00.00 grep -i sleep
As you can see, sleep is still running.
Just ignore that warning; your process is not terminated. Verify with:
watch wc -l nohup.out
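If the terminal's warning itself bothers you, the job can additionally be removed from the shell's job table with disown (a minimal demonstration with sleep standing in for the script):

```shell
# Start a long-running command immune to hangups, in the background
nohup sleep 500 > /dev/null 2>&1 &
pid=$!
# Remove the job from the shell's job table, so closing the
# tab neither warns about it nor signals it
disown
kill -0 "$pid" && echo "still running"
kill "$pid"    # clean up the demo process
```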

process not starting completely, when called inside crontab

I have a script (let us call it watcher) which checks for a particular process; if the process is not running, the watcher will start it through a script.
I run this watcher from crontab every minute. Now the problem is that it does not work from crontab, but it works if I run the watcher directly from the command line.
Suppose the watcher starts a script file called serverA.
serverA code:
echo -n "Starting $NAME: "
# start network server
start-stop-daemon --start --background --make-pidfile \
--pidfile $net_server_pidfile --startas /bin/bash -- -c "exec $angel $net_server \
-c $conf_file --lora-eui $lora_eui --lora-hw-1 $lora_hw --lora-prod-1 $lora_id \
--lora-path $run_dir --db $conf_db \
--noconsole >> $net_server_log 2>&1"
sleep 2
# start packet forwarder
/usr/sbin/start-stop-daemon --chdir $run_dir/1 --start --background --make-pidfile \
--pidfile $pkt_fwd_pidfile --exec $angel -- $pkt_fwd
renice -n -20 -p $(pgrep lora-network-se)
renice -n -20 -p $(pgrep $(basename $pkt_fwd))
echo "OK"
Now if I run the watcher directly, serverA will echo "Starting ..." and after some time finish with "OK" at the end.
But in the crontab logs I don't see the "OK", so the service never completes and serverA never starts.
watcher.sh
else
echo "$(date) do something, no packet forwader runnig"
exec /etc/init.d/lora-network-server start
fi
I think you need to check the differences between the runtime environment in a terminal and under cron.
First, check whether lora-network-server depends on shell environment variables such as JAVA_HOME or PATH (e.g. whether the binary can be executed without its absolute path). If the settings differ, give cron the same shell environment.
For example, to diff the cron environment against the runtime environment:
runtime
$ env | tee ./runtime.output
cron
$ crontab <<EOF
* * * * * /bin/env > /path/to/cron.output 2>&1
EOF
The cron output file will appear after 1 minute; remove the crontab entry after the test.
You can then compare the variables in cron.output and runtime.output.
I hope this helps.
Cron runs with a mostly empty environment. Are you setting all necessary environment variables in your scripts?
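A common fix is to declare the environment at the top of the crontab itself (a sketch; the exact PATH and the watcher's location are assumptions):

```
# crontab: give jobs an explicit shell and PATH, since cron's
# default environment is nearly empty
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /home/user/watcher.sh >> /var/log/watcher.log 2>&1
```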

cronjob with email not working

update:
When I echo $res in the script below, I get the following. I guess it's because the script's own name contains the word searchd, so the grep matches the cron job's own process and $res is never empty!
I renamed the script; problem solved!
root 10769  7177 0 23:31 pts/1 00:00:00 /bin/bash /home/scripts/monitor_searchd.sh
root 10770 10769 0 23:31 pts/1 00:00:00 /bin/bash /home/scripts/monitor_searchd.sh
original problem:
I have a shell script that works when I invoke it from the shell, but when I put it in the crontab in the following way, it doesn't work (no email is sent). The strange thing is that the cron log shows the job being run every minute!
*/1 * * * * root /home/scripts/monitor_searchd.sh
here's the script:
process="java"
res="`ps -ef|grep $process|grep -v grep`"
if [ ! -n "$res" ]; then
echo "$process is down!" | mail -s "$process is down" xxx@gmail.com
fi
cron log
CROND[7370]: (root) CMD (/home/scripts/monitor_searchd.sh)
Can you please add:
echo "$process is down!" >> /tmp/monitor_searchd.log
before the line with mail? It could help localize the problem.
And a couple of small remarks:
Please add
#!/bin/bash
as the first line of the script.
Change
[ ! -n "$res" ]
to
[ -z "$res" ]
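A common alternative that sidesteps the self-match problem entirely (a sketch, not the poster's exact script) is pgrep, which never matches its own command line:

```shell
#!/bin/bash
# A name that is certainly not running, to demonstrate the "down" branch
process="no_such_process_demo"
# pgrep -x looks for an exact process name and, unlike `ps -ef | grep`,
# never matches the monitoring script itself
if ! pgrep -x "$process" > /dev/null; then
    echo "$process is down!"   # in the real script, pipe this into mail -s ...
fi
```

With pgrep there is no need for the `grep -v grep` dance, and renaming the script becomes unnecessary.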