How can I start and then stop a tcpdump command (or a bash script) via crontab? Thanks
Running tcpdump via crontab is no different from running any other command. You can create a script that kills any running tcpdump at the beginning and then starts it again, e.g. crontab.sh:
#!/bin/bash
# Stop any capture left over from the previous run
killall tcpdump
# Timestamp for the output file, e.g. 20240101-0915
var=$(date +%Y%m%d-%H%M)
# Start a new capture, rotating the output file every 10 MB
/usr/sbin/tcpdump -i eth0 -C 10 -w ~/$var.pcap
Then add a crontab entry for the script:
00,15,30,45 * * * * /path_to_crontab.sh
That means the script will run every 15 minutes, every day.
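If you also need the capture to stop for good at a fixed time rather than just restart, a separate crontab entry that only kills tcpdump can handle that. The time window below is purely an illustrative assumption:

# Restart the capture every 15 minutes during the day (example window)
00,15,30,45 8-17 * * * /path_to_crontab.sh
# Stop whatever capture is still running at 18:00
00 18 * * * killall tcpdump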
Hey guys, I'm not sure what I am doing wrong here, but I was hoping for some help.
I have a bash script with the following:
#!/bin/bash
docker exec -t wekan-db bash -c "scripts/wekandb_backup.sh"
docker exec -t wekan-db bash -c "rm -r /dump/*; cp -r /mongodb_backup/ /dump/mongodb_backup"
docker cp wekan-db:/dump /home/ikadmin/codes/backup/wekan/$(date +%Y-%m-%d)
Everything executes correctly when I run the bash script from the terminal.
However, when I try to run it via crontab -e it does not work. The logs do show cron trying to run it.
Just in case it matters, the bash script's permissions are currently set to 777 as well.
Any help would be appreciated.
EDIT: crontab command
19 8 * * * /bin/bash /home/ikadmin/codes/scripts/backup-wekan-docker.sh
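One difference worth checking when a script works from a terminal but not from cron is the environment: cron runs jobs with a minimal PATH and no login profile, so docker may not be found. Here is a sketch of a crontab that sets PATH explicitly and captures the script's output for debugging; the PATH value and log location are assumptions, adjust them for your host:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
19 8 * * * /bin/bash /home/ikadmin/codes/scripts/backup-wekan-docker.sh >> /tmp/backup-wekan.log 2>&1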
I would like to run a bash script periodically inside a Docker container (my work is based on this answer: https://stackoverflow.com/a/37458519/6240756)
Here is my script hello.sh:
#!/bin/sh
echo "Hello world" >> /var/log/cron.log 2>&1
Here is the cron file hello-cron:
# m h dom mon dow command
* * * * * /app/hello.sh
# Empty line
And here is my Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y cron
# Add backup script
COPY hello.sh /app/
RUN chmod +x /app/hello.sh
# Configure the cron
# Copy file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Start the cron
CMD cron && tail -f /var/log/cron.log
I build and run the container but nothing happens. I should see "Hello world" displayed every minute.
If I replace the call to the script in the cron file with the direct command echo "Hello world" >> /var/log/cron.log 2>&1, it works: I see "Hello world" every minute.
What am I doing wrong?
EDIT
And the Docker commands:
docker build -t hello-cron .
docker run -t -i hello-cron
About your concrete question
The problem is that you're launching your container with -t -i, when what you want is to run it in the background and then check its output interactively.
Try with:
docker run -d --name mycontainer hello-cron
docker logs -f mycontainer
Best practices
First, if you are going to execute something periodically, consider whether it belongs in a HEALTHCHECK definition, where you can set the period and other parameters (see the sketch below).
Second, note that your cron.log is not mounted as a volume, so you would lose this information if the container is recreated.
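As a rough sketch of the HEALTHCHECK idea (the interval, timeout, and the reuse of hello.sh here are assumptions, not part of the original setup), the Dockerfile could contain:

# Hypothetical: run the script once a minute as the container's health check
HEALTHCHECK --interval=1m --timeout=10s CMD /app/hello.sh || exit 1

For the log, mounting it out of the container, e.g. docker run -d -v "$PWD/logs:/var/log" --name mycontainer hello-cron (the host path is just an example), would keep it across container restarts.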
Hi guys, I'm running Arch with i3 as my WM. i3lock works fine when executed manually via a keybinding, xautolock is of course installed, and the script is launched at startup (when I try to launch it manually I get the message "xautolock is already running (PID 1302)"), but my screen never locks automatically.
Here is the script:
#!/bin/sh
exec xautolock -detectsleep \
-time 3 -locker "i3lock -d -c 000000" \
-notify 30 \
-notifier "notify-send -u critical -t 10000 -- 'LOCKING screen in 30 seconds'"
Thanks in advance.
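One hedged idea for narrowing this down (the script path below is a hypothetical placeholder for wherever the script above lives): kill the instance that was started at login, relaunch the script by hand, and ask it to lock immediately. If -locknow locks the screen but the idle timeout still never fires, the problem is likely the idle detection rather than the locker command.

# Stop the copy of xautolock started at login
xautolock -exit
# Relaunch the script above (hypothetical path)
~/.config/i3/lock.sh &
sleep 2
# Ask the running xautolock to lock right now
xautolock -locknow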
I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
while /bin/true; do
su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
echo $$(date) - Running cron finished
sleep 900
done
EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not have bash, so that entrypoint cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working with my docker-compose.yml file.
I don't want to use an external file; I'd rather have the script entirely in the docker-compose.yml file, to make preparation and changes a bit easier.
Thank you!
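For what it's worth, here is a minimal sketch of how the same loop might be inlined for an Alpine-based image using /bin/sh (busybox ash) instead of bash; the php path, the www-data user, and the 900-second interval are assumptions carried over from the bash version, and $$ is kept so that Compose does not try to interpolate the variable itself:

entrypoint: |
  /bin/sh -c '
  trap "exit" HUP INT TERM
  while true; do
    su -s /bin/sh -c "php -f /var/www/html/cron.php" www-data
    echo "$$(date) - Running cron finished"
    sleep 900
  done'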
I have basically two lines of code which are:
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures
tshark -i /tmp/Captures -T pdml >results.xml
If I run them both in separate terminals, it works fine.
However, I've been trying to create a simple bash script that will execute them at the same time, but have had no luck. The bash script is as follows:
#! /bin/bash
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures &
tshark -i /tmp/Captures -T pdml >results.xml &
If anyone could possibly help me get this to work, or get it to "run tcpdump until a key is pressed, then run tshark, then when a key is pressed again, close", I would appreciate it.
I have only a little bash scripting experience.
Do you need to run tcpdump and tshark separately? A pipe will feed the output of tcpdump directly into the input of tshark:
tcpdump -i eth0 -s 65535 -w - | tshark -r - -T pdml > results.xml
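If you do want the two-step, key-driven flow described in the question (capture until a key is pressed, then convert with tshark), here is a rough sketch; the interface, file locations, and prompts are assumptions:

#!/bin/bash
# Capture in the background until a key is pressed, then convert with tshark.
capture=/tmp/capture.pcap

tcpdump -i eth0 -s 65535 -w "$capture" &
pid=$!

read -r -n 1 -p "Capturing... press any key to stop and run tshark. "
kill "$pid"
wait "$pid" 2>/dev/null    # let tcpdump flush its buffers and exit

tshark -r "$capture" -T pdml > results.xml

read -r -n 1 -p "Done. Press any key to exit. "
echo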