I am using inotifywait to track users' file changes, and I was able to effectively trace whether a file was created/edited/deleted with the inotifywait tool by logging the events to a log file.
However, when bulk actions are performed, e.g. rsync, all of their changes are written to the log file as well.
Here is an example of performing rsync:
Mon Nov 23 15:42:56 .sidebar.php.KNYJir:DELETED
Mon Nov 23 15:42:56 .sidebar.php.KNYJir:DELETED
Mon Nov 23 15:42:56 .sidebar.php.KNYJir:DELETED
Mon Nov 23 15:42:56 sidebar.php
Attached below is the command which I am using:
/usr/bin/inotifywait -e create,delete,modify,move -mrq --format %w%f
I then pipe the output into an endless while loop that tests whether the changed file still exists, in order to determine whether the event was a create/modify or a delete.
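For reference, a minimal sketch of such a pipeline (/path/to/watch and /var/log/filechanges.log are placeholder names, not from the original setup):

/usr/bin/inotifywait -e create,delete,modify,move -mrq --format %w%f /path/to/watch |
while read -r file; do
  if [ -e "$file" ]; then
    echo "$(date) ${file}:CHANGED" >> /var/log/filechanges.log  # still exists: created or modified
  else
    echo "$(date) ${file}:DELETED" >> /var/log/filechanges.log  # gone: deleted or moved away
  fi
done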
Is there any way to exclude logging of the actions performed by root?
I do not think that is possible, certainly not with inotifywait. The inotify API itself simply does not provide that information. As the manpage states:
The inotify API provides no information about the user or process that triggered the inotify event. In particular, there is no easy way for a process that is monitoring events via inotify to distinguish events that it triggers itself from those that are triggered by other processes.
What you can do is filter based on the filename. Or, if you know the process that makes the additional changes, compare against its own log file.
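For instance, inotifywait can filter names itself with its --exclude option, which takes a POSIX extended regular expression. A hedged sketch: rsync temporary files typically look like .name.XXXXXX (leading dot plus a random six-character suffix), so a pattern along these lines should suppress them, though that naming convention is an assumption, not a guarantee:

/usr/bin/inotifywait -e create,delete,modify,move -mrq --format %w%f \
  --exclude '/\.[^/]+\.[A-Za-z0-9]{6}$' /path/to/watch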
Related
I received an application that runs on Linux kernel 4.4.60 and, per its app note, is launched with the command below:
/usr/sbin/server_application | logger -t tag &
If I run the server_application with just "server_application &" (in the background), then the socket which the process attempts to create fails to initialize, and obviously the client_application (run separately, of course) times out.
From my information, the Linux logger utility only makes entries in the system log.
Q: What might the application need that requires its output to be logged with a tag to the syslog?
I am trying to reverse engineer why it needs logger specifically.
Any thoughts on this would be greatly appreciated.
Thanks in advance.
If you run the server_application in the background, the process might not have standard output opened to anything at all, and any writes to stdout will fail. If you create a pipeline piping the standard output to a program, then the server_application will see different characteristics on its stdout.
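If that is the cause, any open destination for stdout should do, not just logger. As a quick test (hedged; /tmp/app.out is just a placeholder name), you could redirect to a file instead and see whether the socket still fails:

/usr/sbin/server_application > /tmp/app.out 2>&1 &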
You could also try to figure out the difference by running these two with strace, for example:
strace -o /tmp/syscall-plain.log /usr/sbin/server_application &
strace -o /tmp/syscall-logger.log /usr/sbin/server_application | logger -t tag &
and by reading /tmp/syscall-plain.log, looking for failed system calls near the end of the run, and then comparing them with the calls in /tmp/syscall-logger.log. (Use two different log files so the second run does not overwrite the first trace.)
I'm trying to use gammu and gammu-smsd to send and receive SMS on my Raspberry Pi using a Huawei Internet key.
My problem is that when I send an SMS from my phone to the Raspberry Pi, it reads the SMS and tries to start the program linked at RunOnReceive = in the /etc/gammu-smsdrc file, but then it says: Process failed with exit status 1.
I have tried all kinds of solutions but cannot solve this problem by myself; I've set every permission on the script.
Can someone help me?
Thank you a lot.
You no doubt have this sorted by now, but I have just been through the same trip, tore out a lot of hair and finally made it out the back .... :-)
I'm using a ZTE stick with wvdial for the Internet connection. The stick appears as modems on /dev/ttyUSB0, 1 and 2. wvdial uses ttyUSB2, so gammu (I think) has to use a different one.
So I installed gammu/gammu-smsd on ttyUSB1 in gammu-config and /etc/gammu-smsdrc. The receive daemon gammu-smsd fires up automatically on boot.
First trap for young players: if you want to send an SMS with
echo "whatever" | gammu sendsms TEXT xxxyyyzzzz (where the last argument is the phone number), you need to kill the receive daemon for that to work, i.e.
service gammu-smsd stop # kill receive daemon
echo etc etc gammu etc etc # send the SMS
service gammu-smsd start # revive the receive daemon
Now for the RunOnReceive thing ...
Start with sudo visudo, which opens the sudoers file for editing. There's a line in there about pi BLAH-BLAH-BLAH as a sudoer. Duplicate it with gammu BLAH-BLAH-BLAH. Same BLAHs. Save it.
It's something to do with permissions - I'm not an expert here :-)
So my RunOnReceive line is { sudo /home/pi/procSMS.sh $SMS_1_TEXT }
The script didn't seem to know what $SMS_1_TEXT was, so I passed it through as a parameter; inside the script it's treated as $1. It works.
While testing I ran a process in another window - just tail -f /var/log/syslog which lets you watch it all in real time ...
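For illustration, a minimal sketch of what such a /home/pi/procSMS.sh could look like (the log destination is a placeholder; the explicit exit 0 matters, as a later answer explains):

#!/bin/sh
# $1 is the SMS text passed in from the RunOnReceive line
TEXT="$1"
echo "$(date): received SMS: $TEXT" >> /home/pi/sms.log
exit 0  # explicit success code; without it gammu-smsd may report exit status 1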
I was getting the same error on a Raspberry Pi in combination with a Huawei E3131 (Process failed with exit status 1), but I solved it.
Make sure you have the file permissions set correctly. Gammu runs its daemon under the "gammu" user by default. You can change that (in /etc/init.d/gammu-smsd) to a user who already exists on your system and has the rights to execute the script. Or change the script permissions with chmod 755 script.sh, which gives execute rights to other users too.
In fact, there is an additional option: run the gammu daemon with the parameter -U username. Unfortunately, it did not work for me when I used the root user.
PS: I would recommend not placing the script inside the /etc directory. Use the /home directory instead.
Turn on debugging in /etc/gammu-smsdrc: use the logformat and debuglevel parameters in the smsd section. The default log is located in /var/log/syslog. Maybe it helps you localize the problem more precisely.
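As a hedged illustration, an smsd fragment along these lines (option values are examples from memory; check the gammu-smsdrc documentation before relying on them):

[smsd]
logfile = syslog
debuglevel = 255  # verbose debugging; 0 turns it off
RunOnReceive = /home/pi/procSMS.sh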
And the best at the end... I found that gammu returns the error even if it runs the script fine! You have to set an exit code inside your bash script. If you do not specify one, gammu reports it as error 1. Add exit 0 at the end of the script in the success case and the error message disappears.
In some cases, I want to decrease the running process's priority to execute a costly operation and then raise it back to the original value. The process should do this by itself, without root permission.
I've tried to do that inside a Ruby app with Process.setpriority.
Isn't this a flaw in the kernel's design?
Here is an example in the shell:
$ nice
0
Then
$ renice -n 19 -p $$
5094 (process' ID) old priority 0, new priority 19
Then
$ renice -n 0 -p $$
renice: failed to set priority for 5094 (process' ID): Permission denied
Usually only root (or processes with the CAP_SYS_NICE capability) may change their "nice" to a higher priority (lower values). But since kernel 2.6.12 (http://man7.org/conf/lca2006/Linux_2.6_changes/rlimit_5.html), Linux has provided an additional way to increase priority (lower nice): the RLIMIT_NICE rlimit (man getrlimit). You can check it with ulimit -e in bash, and root can change it before switching (su) to an ordinary user or to nobody (for example).
So, here is what you can do to let your process temporarily have a low priority (a higher nice value):
Get the CAP_SYS_NICE privilege from root
Set the right RLIMIT_NICE (the default is 0, so RLIMIT_NICE is effectively disabled, and this is the flaw in Linux distributions), either as root with ulimit -e VALUE or system-wide in /etc/security/limits.conf via the nice item (this config is used by the pam_limits.so PAM module, so check that it is invoked before you start the process; it is usually invoked by login, the *dm display managers, crond and atd. I don't know whether it is invoked for processes started by init.d scripts)
Two examples of the second variant from brauliobo, allowing nice back to 0. System-wide, add to /etc/security/limits.conf:
* soft nice 0 # ranges from -20 to 19
Or using sudo to root to change RLIMIT_NICE for single shell:
sudo bash
ulimit -e 20 # equivalent to 0, as it ranges from 0 to 40
sudo -u youruser bash
# now you can renice back to 0
Without help from the root user, you can:
Fork a second process to do the low-priority work and renice only it; use inter-process communication to send work items and get results (see the sketch after this list).
Call sched_yield often when doing low-priority work. This will enable other processes to preempt your program earlier.
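A minimal bash sketch of the first idea, assuming the costly work fits in a subshell (do_costly_work is a hypothetical placeholder for the real work):

# run the costly part in a subshell and renice only that subshell
(
  renice -n 19 -p $BASHPID > /dev/null  # $BASHPID is the subshell's own PID (bash-specific)
  do_costly_work                        # hypothetical command standing in for the real work
)
# the parent keeps its original priority, so nothing has to be reniced back up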
I am trying to do some profiling on my embedded Linux box, which runs a piece of software.
I want to profile my software using strace.
The application is the main software and it keeps running forever.
How can I run strace and log its output to a file?
In my rcS script, I run the application like this:
./my_app
Now, with strace:
strace ./my_app
I want to log this output to a file, and I should be able to access the file without killing the application. Remember, this application never terminates.
Please help!
Instead of a target filename, use the -p option to strace to specify the process ID of an already running process you wish to attach to.
Chris is actually right. strace takes the -p option, which enables you to attach to a running process just by specifying the process's PID.
Let's say your 'my_app' process runs with PID 2301 (you can see the PID by logging into your device and using 'ps'). Try 'strace -p 2301', and you will see all system calls for that PID. You can write them to a file with the -o option: 'strace -p 2301 -o /tmp/my_app-strace' (note that strace prints to stderr by default, so a plain '>' redirection would not capture it).
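Putting it together, a hedged one-liner (assuming pidof is available on the box, the binary is named my_app, and your strace version accepts a PID list from command substitution):

strace -f -tt -p "$(pidof my_app)" -o /tmp/my_app-strace &

Here -f follows forked children and -tt adds timestamps; the log file can then be tailed while the application keeps running.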
Hope this helps.
I find debugging monit to be a major pain. Monit's shell environment basically has nothing in it (no paths or other environment variables). Also, there is no log file that I can find.
The problem is, if the start or stop command in the monit script fails, it is difficult to discern what is wrong with it. Often it is not as simple as just running the command in a shell, because the shell environment differs from the monit shell environment.
What are some techniques that people use to debug monit configurations?
For example, I would be happy to have a monit shell, to test my scripts in, or a log file to see what went wrong.
I've had the same problem. Using monit's verbose command-line option helps a bit, but I found the best way was to create an environment as similar as possible to the monit environment and run the start/stop program from there.
# monit runs as superuser
$ sudo su
# the -i option ignores the inherited environment
# this PATH is what monit supplies by default
$ env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh
# try running start/stop program here
$
I've found the most common problems are environment variable related (especially PATH) or permission-related. You should remember that monit usually runs as root.
Also if you use as uid myusername in your monit config, then you should change to user myusername before carrying out the test.
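For example, a hedged one-liner combining both steps (substitute the uid from your own config for myusername):

sudo -u myusername env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh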
Be sure to always double-check your conf and monitor your processes by hand before letting monit handle everything. systat(1), top(1) and ps(1) are your friends for figuring out resource usage and limits. Knowing the process you monitor is essential too.
Regarding the start and stop scripts, I use a wrapper script to redirect output and inspect the environment and other variables. Something like this:
$ cat monit-wrapper.sh
#!/bin/sh
{
echo "MONIT-WRAPPER date"
date
echo "MONIT-WRAPPER env"
env
echo "MONIT-WRAPPER $#"
$#
R=$?
echo "MONIT-WRAPPER exit code $R"
} >/tmp/monit.log 2>&1
Then in monit:
start program = "/home/billitch/bin/monit-wrapper.sh my-real-start-script and args"
stop program = "/home/billitch/bin/monit-wrapper.sh my-real-stop-script and args"
You still have to figure out what information you want in the wrapper, like process info, IDs, system resource limits, etc.
You can start Monit in verbose/debug mode by adding MONIT_OPTS="-v" to /etc/default/monit (don't forget to restart; /etc/init.d/monit restart).
You can then capture the output using tail -f /var/log/monit.log
[CEST Jun 4 21:10:42] info : Starting Monit 5.17.1 daemon with http interface at [*]:2812
[CEST Jun 4 21:10:42] info : Starting Monit HTTP server at [*]:2812
[CEST Jun 4 21:10:42] info : Monit HTTP server started
[CEST Jun 4 21:10:42] info : 'ocean' Monit 5.17.1 started
[CEST Jun 4 21:10:42] debug : Sending Monit instance changed notification to monit@example.io
[CEST Jun 4 21:10:42] debug : Trying to send mail via smtp.sendgrid.net:587
[CEST Jun 4 21:10:43] debug : Processing postponed events queue
[CEST Jun 4 21:10:43] debug : 'rootfs' succeeded getting filesystem statistics for '/'
[CEST Jun 4 21:10:43] debug : 'rootfs' filesytem flags has not changed
[CEST Jun 4 21:10:43] debug : 'rootfs' inode usage test succeeded [current inode usage=8.5%]
[CEST Jun 4 21:10:43] debug : 'rootfs' space usage test succeeded [current space usage=59.6%]
[CEST Jun 4 21:10:43] debug : 'ws.example.com' succeeded testing protocol [WEBSOCKET] at [ws.example.com]:80/faye [TCP/IP] [response time 114.070 ms]
[CEST Jun 4 21:10:43] debug : 'ws.example.com' connection succeeded to [ws.example.com]:80/faye [TCP/IP]
monit -c /path/to/your/config -v
By default, monit logs to your system message log and you can check there to see what's happening.
Also, depending on your config, you might be logging to a different place
tail -f /var/log/monit
http://mmonit.com/monit/documentation/monit.html#LOGGING
Assuming defaults (as of whatever old version of monit I'm using), you can tail the logs as follows:
CentOS:
tail -f /var/log/messages
Ubuntu:
tail -f /var/log/syslog
Mac OS X:
tail -f /var/log/system.log
Windows:
Here be Dragons
But there is a neato project I found while searching on how to do this out of morbid curiosity: https://github.com/derFunk/monit-windows-agent
Yeah, monit isn't too easy to debug.
Here are a few best practices.
Use a wrapper script that sets up your log file, and write your command arguments into it while you are at it:
shell:
#!/usr/bin/env bash
logfile=/var/log/myjob.log
touch ${logfile}
echo $$ ": ################# Starting " $(date) "########### pid " $$ >> ${logfile}
echo "Command: the-command $#" >> ${logfile} # log your command arguments
{
exec the-command "$@"
} >> ${logfile} 2>&1
That helps a lot.
The other thing I find helps is to run monit with '-v', which gives you verbosity. So the workflow is:
get your wrapper working from the shell "sudo my-wrapper"
then try and get it going from monit, run from the command line with "-v"
then try and get it going from monit, running in the background.
You can also try running monit validate once processes are running, to try to find out if any of them are having problems (and sometimes to get more information than the log files give you). Beyond that, there's not much more you can do.