systemd-udev rule applied multiple times (twice in my case) - systemd

I have a udev rule with the following content:
DRIVERS=="adt7310", RUN+="/bin/ln -s /sys//devices/platform/soc/fff00000.spi/spi_master/spi0/spi0.0/temp1_input /dev/temperature_adt"
The problem is that this rule is applied twice, and this annoying line appears in the log:
localhost systemd-udevd[1104]: Process '/bin/ln -s /sys//devices/platform/soc/fff00000.spi/spi_master/spi0/spi0.0/temp1_input /dev/temperature_adt' failed with exit code 1.
I have seen a lot of similar issues on the internet, many of them still unresolved, but most were about PCs and quite complicated rules.
Here it is an embedded system; the link is created and nothing actually goes wrong, but I simply don't know what to tell the QA people...
Thanks

Please be very careful when writing rules, especially when you are reusing someone else's rules. Always run
udevadm info -a /sys/...device
and read the information very carefully.
In my case the solution was to use
DRIVER=="adt7310" instead of DRIVERS=="adt7310"
My apologies


Status return nagios plugins using sed

I have a Nagios plugin, "check_icmp", which returns 4 values if the command can ping the requested host, but I only get 1 value if the ping fails, so I used the following command:
/usr/lib64/nagios/plugins/check_icmp -w 1000.0,20% -c 1400.0,60% -H 8.8.4.5 -m 5 | sed 's/pl=100%;20;60;0;100/rta=nan;;;; rtmax=nan;;;; rtmin=nan;;;; pl=100%;20;60;0;100/g'
and it returns
CRITICAL - 8.8.4.5: rta nan, lost 100%|rta=nan;;;; rtmax=nan;;;; rtmin=nan;;;; pl=100%;20;60;0;100
instead of
CRITICAL - 8.8.4.5: rta nan, lost 100%|pl=100%;20;60;0;100
So it works great on the host, but if I put this command in NagiosQL the current status stays green ("OK") even when the ping fails:
https://ibb.co/LCzvXrV
I would guess that the reason you're getting an OK despite the "CRITICAL" text is that the only thing Nagios cares about is the return code of the command you ran (also called an exit code). In the pipeline, sed has the last word: it exits with return code 0, which means "everything went fine", and it did, obviously, for sed.
Since you don't explain why you're doing any of this, I can't comment on whether it is a good idea or not, but it seems very odd to me to include a pipe to sed in your check command. I understand that you are sanitizing the performance data returned by the plugin, but why?
Nagios is built to interpret return codes as the status of your check; that's just the way it is. Whatever issues you are seeing because of this plugin's behaviour can surely be resolved in another way, without running sed magic on your checks that may or may not give the intended results. It's also worth noting that if you're inserting this into RRD files, messing with the order or number of values may create headaches, so I really wouldn't recommend it.
As a constructive suggestion for the future, please include what brought you to your current solution when asking these questions, as you can never know whether it's actually the best way to resolve your original issue. This is often called an XY Problem.
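That said, if the performance-data rewrite really is needed, a wrapper script that preserves the plugin's exit code would at least keep Nagios seeing the CRITICAL. A minimal sketch, using the command and thresholds from the question (the wrapper itself is my own assumption, not an existing plugin):
#!/bin/bash
# run the plugin, remember its exit code, then rewrite the perfdata
out=$(/usr/lib64/nagios/plugins/check_icmp -w 1000.0,20% -c 1400.0,60% -H 8.8.4.5 -m 5)
rc=$?
printf '%s\n' "$out" | sed 's/pl=100%;20;60;0;100/rta=nan;;;; rtmax=nan;;;; rtmin=nan;;;; pl=100%;20;60;0;100/g'
# hand the plugin's own exit code back to Nagios
exit $rc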

Pattern to detect Ansible failure

I have a fairly big Ansible playbook with a lot of templates, and it generates tons of logs (hundreds of thousands of lines in my log file).
Whenever a task fails, I can spot it with failed=
My problem is how to see where the error is. As of today, all I'm doing is scrolling through the log and praying that my eyes find the error, but with that quantity of lines it can take time and be very frustrating.
Is there any pattern I should look for to find where the error is?
Thanks in advance for your inputs
By default, Ansible stops after the first failed task...
https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html
Ansible normally has defaults that make sure to check the return codes of commands and modules and it fails fast – forcing an error to be dealt with unless you decide otherwise.
If your playbook handles a lot of targets and you want to stop everything at the first failure on any target, you can use the any_errors_fatal: true play option.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html#aborting-the-play
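To actually locate the failing task inside a huge existing log, grepping for the strings Ansible's default output uses for failures usually works. A rough sketch (the exact strings are assumptions about the default callback format, and ansible.log stands in for your log file; failed= is the recap counter you already mentioned):
grep -nE 'fatal: \[|FAILED!|UNREACHABLE!|failed=[1-9]' ansible.log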

HTK HEREST ISSUE

I'm doing some speech recognition using HTK (HMM ToolKit) and I'm getting this odd error:
ERROR [+7390] StepAlpha: Alpha prune failed sq(16) > qHi(15)
I have tried to play around with pruning, but only those 15/16 change to other numbers; I keep receiving the same error. I've even tried to disable pruning and it still gives me this error.
I just don't know where to look; if I knew, I could fix it.
This is my HERest command:
HERest -C config -I Label.mlf -t 250 100 1000 -S trainlist.scp -H hmms\0\vFloors -H hmms\0\hmm0 -M hmms\1 wordlist
I've looked into the HTK book but there is nothing about the error number 7390.
I work with the HTK toolkit off and on, but not with HEREST. I also have never seen the error that you have encountered. A quick search on the web reveals some links that may be helpful.
Link 1
Scroll to the bottom of the link above, and you see an error code like ??9?. This means that a sanity check has failed. It further says that the error has nothing to do with your code, but that HTK itself may be faulty. I think, given that error 7390 has not occurred anywhere else, and that it also fits your error code, you might want to consider re-installing HTK.
Link 2
The link above shows a lot of common errors and their causes seen when working with the HTK toolkit. The original poster has painstakingly put up errors and the common conditions that cause them. Saved me a lot of effort. In my eyes, she deserves the Nobel (or something like that).
Link 3
This link gives more detailed, module by module error codes. Also worth a look.
Do let me know here how you were able to solve your problem.
HTH,
Sriram.
In my case it was related to a big (15 MB) file in the input data. It worked after deleting it. Run HERest with the -T 1 option to see which file causes the problem, then remove or split it.
You must adjust the -t parameter. This error occurs when pruning fails, so modify the pruning parameter. I had this problem and with -t 10 it was solved!
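For reference, the trace option from the previous answer added to the command in the question looks like this (the -t values shown are just the original ones, which these answers suggest tuning):
HERest -T 1 -C config -I Label.mlf -t 250 100 1000 -S trainlist.scp -H hmms\0\vFloors -H hmms\0\hmm0 -M hmms\1 wordlist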

Bash piping output and input of a program

I'm running a Minecraft server on my Linux box in a detached screen session. I'm not very fond of screen and would like to be able to constantly pipe the output of the server to a file (like a pipe) and pipe some input from a file to the server, so that I can send input to and read output from the server with remote programs, like a Python script. I'm not very experienced in bash, so could somebody tell me how to do this?
Thanks, NikitaUtiu.
It's not clear if you need screen at all. I don't know the minecraft server, but generally for server software, you can run it from a crontab entry and redirect output to log files.
Assuming your server kills itself at midnight on Sunday night (we can discuss changing this if restarting once per week is too little or too much, or if you require ad-hoc restarts), here is a crontab entry that starts the server each Monday at 1 minute after midnight, to give you a basic idea of what to do:
01 00 * * 1 dtTm=`/bin/date +\%Y\%m\%d.\%H\%M\%S`; export dtTm; { /usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... ; } > /tmp/mineserver_trace_log.${dtTm} 2>&1
Consult your man page for crontab to confirm that the day-of-week range is 0-6 (0 = Sunday), and change the day-of-week value if 0 != Sunday.
Normally I would break the code up so it is easier to read, but for crontab entries each entry has to be all on one line (with some weird exceptions), and there is usually a limit of 1024 bytes to 8 KB on how long the line can be. Note that the ';' just before the closing '}' is super-critical. If it is left out, you'll get undecipherable error messages, or no error messages at all.
Basically, you're redirecting any output into a file (including stderr output). Now you can do a lot of things with the output: use more or less to look at the file, grep ERR ${logFile}, write scripts that grep for error messages and then send you an email when errors are found, etc.
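As a minimal sketch of such a check script (the log-file location matches the crontab entry above, while the mail address is a placeholder and mail(1) is assumed to be installed):
#!/bin/bash
# pick the newest dated trace log and mail any ERR lines it contains
logFile=$(ls -t /tmp/mineserver_trace_log.* 2>/dev/null | head -n 1)
if [ -n "${logFile}" ] && grep -q ERR "${logFile}"; then
    grep ERR "${logFile}" | mail -s "mineserver errors in ${logFile}" you@example.com
fi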
You may have some sysadmin work on your hands to set up the mineserver user so it can run crontab entries. Also, if you're not comfortable using the vi or emacs editors, creating a crontab file may require help from others. Post to superuser.com to get answers for problems you have with Linux admin issues.
Finally, there are two points I'd like to make about dated logfiles.
Good:
a. If your app dies, you never have to rerun it to capture the output and figure out why something stopped working. For long-running programs this can save you a lot of time.
b. Keeping dated files gives you the ability to prove to yourself, your boss, or others that it used to work just fine: see, here are the log files.
c. Keeping the log files, assuming there is useful information in them, gives you the opportunity to mine those files for facts, e.g. the program used to take 1 second for processing and now it is taking 1 hour.
Bad:
a. You'll need to set up a mechanism to sweep old log files, otherwise at some point everything will have stopped, and when you finally figure out what the problem was, you discover that /tmp (or whatever dir you chose to use) is completely full.
There is a self-maintaining way to use dates on the log files that I can tell you about if you find this approach useful. It will take a little explaining, so I don't want to spend the time writing it up unless you find the crontab solution useful.
I hope this helps!

The bizarre case of the file that both is and isn’t there

In .Net 3.5, I have the following code.
If File.Exists(sFilePath & IndexFileName & ".NX") Then
Kill(sFilePath & IndexFileName & ".NX")
End If
At runtime, on one client's machine, I get the following exception, over and over, when this code executes
Source: Microsoft.VisualBasic
TargetSite: Microsoft.VisualBasic.FileSystem.Kill
Message: No files found matching 'I:\RPG\HGIAPVXD.NX'.
StackTrace:
at Microsoft.VisualBasic.FileSystem.Kill(String PathName)
(More trace that identifies the exact line of code.)
There are two people on different machines running this code, but only one of them is getting the exception. The exception does not happen every time, but it is happening regularly. (Multiple times every hour.) The code is not in a loop, nor does it run continuously, more like once every couple of minutes or so.
On the surface, this looks like a race condition, but given how infrequently this code is run and how often the error is happening I think there must be something else going on.
I would appreciate any suggestions on how I can track down what is really going on here. A solution to keep the error from happening would be even better.
I guess the first question to ask is "Is the file really there or not?" and, if so, does it have any special attributes (is it Read-only or Hidden, or System, or a Directory)?
Note that Microsoft.VisualBasic.FileSystem.Kill specifically looks for, and silently skips, any file marked "System" or "Hidden". For pretty much any other problem you would have gotten a different exception.
As James pointed out, the Kill function checks whether the file in question is system or hidden; you had better use System.IO.File.Delete() instead:
Try
System.IO.File.Delete(sFilePath & IndexFileName & ".NX")
Catch ex As System.Exception
...
End Try
Using File.Exists is not necessary because File.Delete() checks this by itself.
Is there any chance that the I: drive is a network drive? It could be some network issue... or maybe a race condition.
