I would like to understand how logadm works.
So, going by the online materials, I wrote a small script which appends the date to a log file and sleeps for 1 second, running in an infinite loop.
#!/usr/bin/bash
while true
do
echo `date` >>/var/tmp/temp.log
sleep 1
done
After this, I executed the commands below:
logadm -w /var/tmp/temp.log -s 100b
logadm -V
My intention with the above commands is that the log (/var/tmp/temp.log) should be rotated every 100 bytes.
But after setting this up, when I run the script in the background, I see that the log file is not rotated.
# ls -lrth /var/tmp/temp.log*
-rw-r--r-- 1 root root 7.2K Jun 15 08:56 /var/tmp/temp.log
#
As I understand it, you have to call logadm for it to do the work, e.g. from crontab or manually, like logadm -c /var/tmp/temp.log (usually placed in crontab).
Sidenote: you could simply write date >> /var/tmp/temp.log, without the echo.
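For example, on Solaris the periodic run normally comes from root's crontab; a minimal sketch of such an entry (the 3:10 am schedule matches the stock root crontab mentioned in the next answer):
10 3 * * * /usr/sbin/logadm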
This is not how I would normally do this; also, I think you may have misunderstood the -w option.
The -w option updates /etc/logadm.conf with the parameters on the command line, and logadm is then run at 10 minutes past 3am (on the machine I checked).
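For reference, the entry that your -w command writes to /etc/logadm.conf should look roughly like this (a sketch from memory, so double-check your own file):
/var/tmp/temp.log -s 100b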
I took your script and ran it, then ran:
"logadm -s 100b /var/tmp/temp.log"
and it worked fine. Give it a try! :-)
I'm setting up a cron job that is a bash script containing the following:
#!/bin/bash
NUM_CONTAINERS=$(docker ps -q | wc -l)
if [ $NUM_CONTAINERS -lt 40 ]
then
echo "Time: $(date). Restart containers."
cd /opt
pwd
sudo docker kill $(docker ps -q)
docker-compose up -d
echo "Completed."
else
echo Nothing to do
fi
The output is appended to a log file:
>> cron.log
However, the output in cron.log only shows:
Time: Sun Aug 15 10:50:01 UTC 2021. Restart containers.
/opt
Completed.
Both commands do not seem to execute, as I don't see any change in my containers either.
These two non-working commands work fine in a standalone .sh script without the condition, though.
What am I doing wrong?
The user running the cron job has sudo privileges, and we can see the second echo printing.
Lots of times, things that work outside of cron don't work within cron because the environment is not set up in the same way.
You should generally capture standard output and standard error, to see if something is going wrong.
For example, use >> cron.log 2>&1 in your crontab file; this will capture both.
There's at least the possibility that docker is not in your path or, even if it is, that the docker commands are failing for some other reason (which you're not seeing, since you only capture standard output).
Capturing standard error should help out with that, if it is indeed the issue.
As an aside, I tend to use full path names inside cron scripts, or set up very limited environments at the start to ensure everything works correctly (once I've established why it's not working correctly).
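As a rough sketch of that last point (the PATH value, the docker location, and the script path are all assumptions of mine; check command -v docker on your machine):

#!/bin/bash
# Pin down the environment so the script doesn't depend on cron's minimal PATH.
PATH=/usr/local/bin:/usr/bin:/bin
DOCKER=/usr/bin/docker   # hypothetical location; verify with: command -v docker

NUM_CONTAINERS=$("$DOCKER" ps -q | wc -l)
echo "Running containers: $NUM_CONTAINERS"

And a crontab entry that captures both output streams (the schedule is just an example):

*/5 * * * * /opt/check-containers.sh >> /opt/cron.log 2>&1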
I am working on a project in Controlled Environment Agriculture. I am done with most of the sensor coding, and I even wrote a bash script to call each sensor's code at the needed time. Now, coming to the RPi NoIR camera code and the OpenCV code, the problem I have is that they need to be executed only once per day. The RPi camera code captures an image; next, control must go to the OpenCV code and run it to completion. Once both are done, the rest of the code needs to be executed.
I tried giving an upper and a lower limit on the time and executing the code within that window, as you can see below.
now=$(date +"%T")   # this checks what time it is
if [[ "$now" > "09:58:59" && "$now" < "10:01:00" ]]
then
python camera.py   # this code captures an image
sleep 30s
python cv.py   # this is the CV code, which performs edge detection and area detection on the crops
else
sleep 5s
python interrupt.py 1
cat test_data.txt
python ph_test.py
cat line.txt
sleep 10s
python temphumi.py
cat dht.txt
python dht_test.py
cat line.txt
sleep 10s
python watertemp.py
cat water_sensor.txt
python water_test.py
cat line.txt
sleep 10s
python interrupt.py 2
cat test_data.txt
python ec_test.py
cat line.txt
fi
done
I just want camera.py and cv.py in the if part to be executed once per day, and the else part at any other time of the day.
One thing you can do is remove the time-checking code from your script, and then use cron to schedule it to run once a day at 10 in the morning. Run crontab -e to edit your "crontab" (the file that lists jobs to execute regularly), and add this line:
0 10 * * * /path/to/my-script
Replace /path/to/my-script with the path to where your script is. Ensure that the script is executable (chmod +x my-script).
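To make this concrete, the once-a-day part could live in its own small script (a sketch reusing the file names from the question; the 30-second sleep mirrors the original):

#!/bin/bash
# Hypothetical daily script, run by cron at 10:00.
python camera.py   # capture an image
sleep 30s
python cv.py       # edge and area detection on the captured image

The sensor-reading commands from the else branch would then stay in your main loop, with the time check removed.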
I'm at a loss over what I think may be a simple syntax error. What on line two is causing crontab to throw a "bad minute" error? Thanks in advance for the help.
#!/bin/bash
if pgrep -fx "plexdrive mount -v 3 --chunk-check-threads=16 --chunk-load-threads=16 --chunk-load-ahead=16 --max-chunks=256 /home/username/files/Google/" > /dev/null
then
echo "Plexdrive is running."
else
echo "Plexdrive is not running, starting Plexdrive"
fusermount -uz /home/username/files/Google/
screen -dmS plexdrive plexdrive mount -v 3 --chunk-check-threads=16 --chunk-load-threads=16 --chunk-load-ahead=16 --max-chunks=256 /home/username/files/Google/
fi
exit
The command: pgrep -fx "plexdrive mount -v 3 --chunk-check-threads=16 --chunk-load-threads=16 --chunk-load-ahead=16 --max-chunks=256 /home/username/files/Google/"
runs perfectly fine directly from the command line (returns the process number), so I'm pretty sure I'm just not understanding how to write a logic statement correctly.
Note: The server is remote and I'm merely a user. I have the ability to add to cron but not to services - hence this approach to solving the problem of ensuring that plexdrive (via fuse) always keeps this mount point alive.
You should read up on what a crontab should look like. It is not bash source, in any case: it's a configuration file for starting programs (including bash scripts), not for containing bash script.
A crontab line contains the following fields:
minute,
hour,
day of month,
month,
day of week,
each of which specifies when to run the command, and
the command to run.
I.e., if you want your script to run at five minutes after each full hour, and your script is named "my_check_script" (and in PATH), the crontab line should look something like this:
5 * * * * my_check_script
Check the linked documentation for more details.
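Applied to the plexdrive check from the question, the entry might look like this (the script path is hypothetical; adjust it to wherever you saved the script):

*/5 * * * * /home/username/bin/check_plexdrive.sh

That runs the check every five minutes, which is usually frequent enough to keep a mount point alive.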
I prepared a script that finds and displays the last created file.
file_to_search=$(find /var/lib/occas/domains/domain1/servers/traffic-1/logs/ -name "traffic-1.log*" 2>/dev/null | sort -n | tail -1 | cut -f2 -d" ")
grep "Event: Invoke :" $file_to_search | awk 'BEGIN { FS = ">" } ; { print $1 }' | sort | uniq -ic >> /home/appuser/scripts/Traffic/EventInvoke_pl-1_Istanbul.txt.backup.$(date +"%Y-%m-%d")
I have the following log files in this path: /var/lib/occas/domains/domain1/servers/traffic-1/logs/, but these files are created at irregular intervals. So if I put this script into crontab to run, for example, every 5 minutes, it can sometimes show the same file, and this is not what I want. I need a script that shows the last created file, but only when a new file appears. Help me, please?
10:54 traffic-1.log00023
11:01 traffic-1.log00024
11:05 traffic-1.log00025
11:06 traffic-1.log00026
11:09 traffic-1.log00027
11:18 traffic-1.log00028
11:23 traffic-1.log00029
11:34 traffic-1.log00030
11:39 traffic-1.log00031
11:40 traffic-1.log00032
How much delay between the generation of the log entry and the display would you be willing to accept? In theory, you could start the cron job every minute, but I wouldn't do this.
Much easier would be a script which runs unattended and, in a loop, repeatedly checks the last line of the log file, and if it changes, does whatever needs to be done.
There are however two issues to observe:
The first is easy: you should sleep for at least 1 or 2 seconds after each polling pass, otherwise your script will eat up a lot of system resources.
The second is a bit tricky: it could be that your script terminates for whatever reason, and if this happens, you need to have it restarted automatically. One way would be to set up a "watchdog": your script, in addition to checking the log file, touches a certain file every time it does the check (no matter whether a new event needed to be reported or not). The watchdog, which could be a cron job running every, say, 10 minutes, would verify that your script is still alive (i.e. that the file had been touched during the past couple of seconds), and if not, would start a new copy of your script. This means that you could lose a 10-minute time window, but since it is likely a very rare event for your script to crash, this will hopefully not be an issue.
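A minimal sketch of such a polling script, reusing the log directory from the question (the heartbeat path and the 2-second sleep are arbitrary choices of mine):

#!/bin/bash
logdir=/var/lib/occas/domains/domain1/servers/traffic-1/logs
last=""
while true; do
    # newest file by name; the zero-padded suffixes sort correctly
    newest=$(ls "$logdir"/traffic-1.log* 2>/dev/null | sort | tail -1)
    if [ -n "$newest" ] && [ "$newest" != "$last" ]; then
        last=$newest
        # a new log file has appeared: process it here
        echo "new file: $newest"
    fi
    touch /var/tmp/traffic_poller.alive   # heartbeat for the watchdog cron job
    sleep 2
done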
I am starting an FTAM server (ft820.rc on CentOS 5) using bash 3.0, and I am having an issue with starting it from a script; namely, in the script I do
ssh -nq root@$ip /etc/init.d/ft820.rc start
and the script won't continue after this line, although when I run, on the machine defined by $ip,
/etc/init.d/ft820.rc start
I will get the prompt back just after the service is started.
This is the code for "start" in ft820.rc:
SPOOLPATH=/usr/spool/vertel
BINPATH=/usr/bin/osi/ft820
CONFIGFILE=${SPOOLPATH}/ffs.cfg
# Set DBUSERID to any value at all. Just need to make sure it is non-null for
# lockclr to work properly.
DBUSERID=
export DBUSERID
# if startup requested then ...
if [ "$1" = "start" ]
then
mask=`umask`
umask 0000
# startup the lock manager
${BINPATH}/lockmgr -u 16
# update attribute database
${BINPATH}/fua ${CONFIGFILE} > /dev/null
# clear concurrency locks
${BINPATH}/finit -cy ${CONFIGFILE} >/dev/null
# startup filestore
${BINPATH}/ffs ${CONFIGFILE}
if [ $? = 0 ]
then
echo Vertel FT-820 Filestore running.
else
echo Error detected while starting Vertel FT-820 Filestore.
fi
umask $mask
fi
I repost here (at the request of @Patryk) what I put in the comments on the question:
"is it the same when doing the ssh... in the commandline? ie, can you indeed connect without entering a password, using the pair of private_local_key and the corresponding public_key that you previously inserted in the destination root#$ip:~/.ssh/authorized_keys file ? – Olivier Dulac 20 hours ago "
"you say that, at the commandline (and NOT in the script) you can ssh root#.... and it works without asking for your pwd ? (ie, it can then be run from a script?) – Olivier Dulac 20 hours ago "
" try the ssh without the '-n' and even without -nq at all : ssh root#$ip /etc/init.d/ft820.rc start (you could even add ssh -v , which will show you local (1:) and remote (2:) events in a very verbose way, helping in knowing where it gets stuck exactly) – Olivier Dulac 19 hours ago "
"also : before the "ssh..." line in the script, make another line with, for example: ssh root#ip "set ; pwd ; id ; whoami" and see if that works and shows the correct information. This may help be sure the ssh part is working. The "set" part will also show you the running shell (ex: if it contains BASH= , you're running bash. Otherwise SHELL=... should give a good hint (sometimes not correct) about which shell gets invoked) – Olivier Dulac 19 hours ago "
" please try without the '-n' (= run in background and wait, instead of just run and then quit). It it doesn't work, try adding -t -t -t (3 times) to the ssh, to force it to allocate a tty. But first, please drop the '-n'. – Olivier Dulac 18 hours ago "
Apparently what worked was to add the -t option to the ssh command. (You can go up to '-t -t -t' to further force it to try to allocate a tty, depending on the situation.)
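In other words, something along these lines (a sketch using the same $ip variable as in the question; -n dropped, -t added):

ssh -tq root@$ip /etc/init.d/ft820.rc start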
I guess it's because the invoked command expected to be run within an interactive session, and so needed a "tty" as its stdout.
A possibility (but just a wild guess): the invoked rc script outputs information, but in a buffered environment (i.e. when not launched via your terminal), the calling script couldn't see enough lines to fill the buffer and start printing anything out. It's like doing a "grep something | something_else" in a buffered environment and pressing Ctrl+C before the buffer is big enough to display anything: you end up thinking no lines were found by the grep, whereas there were maybe already a few lines in the buffer. There is a ton to be said about buffering, and I am just beginning to read about it all. Forcing ssh to allocate a tty made the called command think it was outputting to a live terminal session, and that may have turned off the buffering and allowed the result to show. Maybe in the first case it worked too, but you could never see the output?