I have a TellstickDuo, a small device capable of sending signals to my lamps at home to turn them on or off based on a schedule. This device sends around 3-5 "on-signals" and "off-signals" every time I press the remote control button (to make sure at least one signal gets through, I guess!?). I also have a Raspberry Pi that listens for these signals and starts a script when a specific signal is found (based on the lamp devices' IDs).
The problem is that every time it sends 3-5 signals, my script runs the same number of times, but I only want it to run once. Is there any way to capture these signals and, with bash (.sh), ignore all but one?
Code sent from device:
...RUNNING /usr/bin/php /var/www/autosys/python_to_raw.php "class:command;protocol:arctech;model:selflearning;house:13741542;unit:10;group:0;method:turnon;"
...RUNNING /usr/bin/php /var/www/autosys/python_to_raw.php "class:command;protocol:arctech;model:selflearning;house:13741542;unit:10;group:0;method:turnon;"
...RUNNING /usr/bin/php /var/www/autosys/python_to_raw.php "class:command;protocol:arctech;model:selflearning;house:13741542;unit:10;group:0;method:turnon;"
...RUNNING /usr/bin/php /var/www/autosys/python_to_raw.php "class:command;protocol:arctech;model:selflearning;house:13741542;unit:10;group:0;method:turnon;"
and my script is:
#!/bin/bash
# $RAWDATA is set by telldus-core when it invokes this script
if [[ ( "${RAWDATA}" == *13741542* ) && ( "${RAWDATA}" == *turnon* ) ]]; then
  : # Something will be done here, like turn on a lamp, send an email or something else
fi
(Some explanation to the RAWDATA code can be found here: http://developer.telldus.com/blog/2012/11/new-ways-to-script-execution-from-signals)
If I set my bash script to send an email I get 4 emails; if I set it to update a counter on my local webpage it updates it 4 times.
There is no way to control how many signals the device will send, but can I capture it only once somehow? Maybe some way to "run the script on the first signal and drop everything else for 5 seconds"?
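One simple way to get that behaviour (a minimal sketch of my own, not something from the Telldus docs) is to debounce inside the script itself with a timestamp file, so only the first matching signal within a 5-second window triggers the action:
#!/bin/bash
# Debounce sketch: act on the first matching signal, ignore repeats for 5 seconds.
# Assumes $RAWDATA is set by telldus-core when it invokes this script.
LOCK=/tmp/lamp_13741542.last

if [[ "${RAWDATA}" == *13741542* && "${RAWDATA}" == *turnon* ]]; then
    now=$(date +%s)
    last=$(cat "$LOCK" 2>/dev/null || echo 0)
    if (( now - last >= 5 )); then
        echo "$now" > "$LOCK"
        # turn on the lamp, send the email, or whatever should happen once
    fi
fi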
My goal is to filter notifications coming from different applications (mainly from different browser windows).
I found that with the help of dbus-monitor I can write a small script that filters the notification messages I am interested in.
The filter script is working well, but I have a small problem:
I am starting it with the
dbus-monitor "interface='org.freedesktop.Notifications', destination=':1.40'"
command. I had to add the "destination=':1.40'" part because on Ubuntu 20.04 I always got the same notification twice.
The following output of
dbus-monitor --profile "interface='org.freedesktop.Notifications'"
demonstrates the reason:
type timestamp serial sender destination path interface member
# in_reply_to
mc 1612194356.476927 7 :1.227 :1.56 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
mc 1612194356.483161 188 :1.56 :1.40 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
As you can see, the sender :1.227 first sends to :1.56, and :1.56 then becomes the sender to the :1.40 destination. (A simple notify-send hello test message was sent.)
My script works that way, but every time the system boots up, I have to check the destination number and modify my script accordingly to get it working.
I have two questions:
how to discover the destination string automatically? (:1.40 in the above example)
how to prevent the system from sending the same message twice? (If this question were answered, the question under point 1 would become pointless.)
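For what it is worth, the current owner of the well-known name can be looked up with the standard org.freedesktop.DBus.GetNameOwner call. This is my own sketch, and in the double-delivery situation shown above it may resolve to the intermediate owner (:1.56) rather than :1.40, so it only partly addresses question 1:
#!/bin/bash
# Ask the session bus which unique name currently owns
# org.freedesktop.Notifications, then use it in the match rule.
dest=$(dbus-send --session --print-reply --dest=org.freedesktop.DBus \
        /org/freedesktop/DBus org.freedesktop.DBus.GetNameOwner \
        string:org.freedesktop.Notifications \
      | awk -F'"' '/string/ {print $2}')

dbus-monitor "interface='org.freedesktop.Notifications', destination='$dest'"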
We have a shared server with multiple GPU nodes and no resource manager. We make agreements like: "this week you can use nodes ID1, ID2 and ID5". I have a program that gets this ID as a parameter.
When I need to run my program ten times with ten different sets of parameters $ARGS1, $ARGS2, ..., $ARGS10, I run the first three commands
programOnGPU $ARGS1 -p ID1 &
programOnGPU $ARGS2 -p ID2 &
programOnGPU $ARGS3 -p ID5 &
Then I must wait for any of those to finish, and if e.g. ID2 finishes first, I then run
programOnGPU $ARGS4 -p ID2 &
As this is not very convenient when you have a lot of processes, I would like to automate the process. I cannot use parallel, as I need to reuse the IDs.
The first use case is a script that needs to execute 10 a priori known commands of the type
programOnGPU $PARAMS -p IDX
and, when any of them finishes, assign its ID to the next one in the queue. Is this possible using bash without the overhead of something like SLURM? I don't need to check the state of the physical resource.
A general solution would be a queue in bash, or a simple command-line utility, to which I can submit commands of the type
programABC $PARAMS
and which will add the GPU ID parameter to them and manage the queue, preconfigured to use only the given IDs, one ID at a time. Again, I don't want this layer to touch physical GPUs, only to ensure that it executes consistently over the allowed IDs.
This is very simple with Redis. It is a very small, very fast, networked, in-memory data-structure server. It can store sets, queues, hashes, strings, lists, atomic integers and so on.
You can access it across a network in a lab, or across the world. There are clients for bash, C/C++, Ruby, PHP, Python and so on.
So, if you are allocated nodes 1, 2 and 5 for the week, you can just store those in a Redis "list" with LPUSH, using the Redis command-line interface (redis-cli) from bash:
redis-cli lpush VojtaKsNodes 1 2 5
If you are not on the Redis host, add its hostname/IP-address into the command like this:
redis-cli -h 192.168.0.4 lpush VojtaKsNodes 1 2 5
Now, when you want to run a job, get a node with BRPOP. I specify an infinite timeout with the zero at the end, but you could wait a different amount of time:
# Get a node, blocking with an infinite timeout
# (BRPOP returns the key name and the value; keep just the value)
node=$(redis-cli brpop VojtaKsNodes 0 | tail -1)

# ... run your job on "$node" here ...

# Give the node back
redis-cli lpush VojtaKsNodes "$node"
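Tying that together for the ten-job use case in the question, a driver loop might look like this (my own sketch; programOnGPU and the $ARGS* variables come from the question):
#!/bin/bash
# Run all parameter sets, using at most one job per allocated node at a time.
ARGSETS=("$ARGS1" "$ARGS2" "$ARGS3" "$ARGS4" "$ARGS5" \
         "$ARGS6" "$ARGS7" "$ARGS8" "$ARGS9" "$ARGS10")

for args in "${ARGSETS[@]}"; do
    (
        # Take a free node (blocks until one is available); BRPOP prints the
        # key name and the value, so keep only the value.
        node=$(redis-cli brpop VojtaKsNodes 0 | tail -1)
        programOnGPU $args -p "$node"    # $args unquoted on purpose: it holds several parameters
        # Return the node to the pool
        redis-cli lpush VojtaKsNodes "$node"
    ) &
done
wait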
I would do the following:
I have a list of IDs: IDS=(ID1 ID2 ID5)
I would make 3 files, one containing each ID.
Run <arguments xargs -L1 -P3 programOnGPUFromLockedFile so that up to 3 wrapper processes run at a time, one per line of arguments (a sketch of such a wrapper follows after this list).
Each process non-blockingly tries to flock the 3 files in a loop, endlessly (i.e. you can run more than 3 processes if you want to).
When a process succeeds in taking the lock, it:
reads the ID from the file
runs the action on that ID
When it terminates, it releases the flock, so the next process may lock the file and use the ID.
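A minimal sketch of what such a wrapper could look like (programOnGPUFromLockedFile is a hypothetical name, and the ID files are assumed to live under /tmp):
#!/bin/bash
# programOnGPUFromLockedFile - hypothetical wrapper: grab one free ID file,
# run the job on that ID, release the lock on exit.
# Assumes /tmp/gpu-ID1, /tmp/gpu-ID2 and /tmp/gpu-ID5 each contain their ID.
set -eu

while true; do
    for f in /tmp/gpu-ID1 /tmp/gpu-ID2 /tmp/gpu-ID5; do
        exec 9>"$f.lock"
        if flock -n 9; then              # non-blocking lock attempt
            id=$(cat "$f")
            programOnGPU "$@" -p "$id"   # the lock is held for the whole run
            exit $?                      # fd 9 closes on exit, releasing the lock
        fi
        exec 9>&-                        # not free: close fd, try the next file
    done
    sleep 1                              # all IDs busy, retry shortly
done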
I.e. it's very, very basic mutex locking. There are also other ways you can do it, like with an atomic fifo:
Create a fifo
Spawn one process for each argument you want to run, which will:
Read one line from the fifo
That line will be the ID to run on
Do the job on that ID
Write one line with the ID back to the fifo
Then write one ID per line to the fifo (in 3 separate writes, so that each write is hopefully atomic), so 3 processes may start.
wait until all except 3 child processes exit
read 3 lines from fifo
wait until all child processes exit
Complete AppleScript newbie here. How would I go about writing and executing a script that sends (let's say) 3 pings to a series of URLs?
Instead of manually sending 3 pings to address 1, address 2, etc.?
There would be approximately 100 addresses to send pings to.
Here is an example with only 1 URL address. You can loop via a repeat/end repeat to run it through your whole address list.
set myAddress to "myserveur.local"
set Feedback to do shell script "ping -c3 " & myAddress
The Feedback variable will contain the result of the ping. I suggest you include a grep in your ping command to extract whether it is OK or not.
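A rough sketch of that repeat loop (the address list below is only a placeholder; replace it with your 100 addresses):
-- Loop over a list of addresses, pinging each one 3 times
set addressList to {"myserveur.local", "example.com"}
set results to {}
repeat with theAddress in addressList
	try
		set Feedback to do shell script "ping -c3 " & quoted form of (theAddress as text)
		set end of results to (theAddress as text) & ": OK"
	on error
		set end of results to (theAddress as text) & ": FAILED"
	end try
end repeat
return results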
The following script checks a site's content every 10 seconds to see whether anything on it has changed. It's for a very time-sensitive application: if something on the site has changed, I merely have seconds to do something else. It will then start a new download-and-compare cycle and wait for the next change-and-do cycle. The "do something else" has yet to be scripted and is not relevant to the question.
The question: will it be a problem for a public website to have a script downloading a single page every 10-15 seconds? If so, is there any other way to monitor a site, unmanned?
#!/bin/bash
Domain="example.com"
Ocontent=$(curl -L "$Domain")
Ncontent="$Ocontent"
until [ "$Ocontent" != "$Ncontent" ]; do
Ocontent=$(curl -L "$Domain")
#CONTENT CHANGED TRUE
#if [ "$Ocontent" == "$Ncontent ]; then
# Ocontent=$(curl -L "$Domain")
#fi
echo "$Ocontent"
sleep 10
done
The problems you're going to run into:
If the site notices and has a problem with it, you may end up on a banned IP list. Using an IP pool or other distributed resource can mitigate this.
Pinging a website precisely every x number of seconds is unlikely. Network latency is likely to cause a great deal of variance in this.
If you get a network partition, your code should know how to cope. (What if your connection goes down? What should happen?)
Note that getting the immediate response is only part of downloading a webpage. There may be changes to referenced files, such as CSS, JavaScript or images, that are not immediately apparent from just the original HTTP response.
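For reference, here is a variation of the question's loop (my own sketch) that hashes the page instead of keeping two full copies, and that skips rounds where the download fails rather than treating them as a change:
#!/bin/bash
# Poll a page and report when its content hash changes.
Domain="https://example.com"

fetch_hash() {
    local page
    page=$(curl -fsSL "$Domain") || return 1   # -f: fail on HTTP errors
    printf '%s' "$page" | sha256sum | cut -d' ' -f1
}

Ohash=$(fetch_hash) || exit 1

while true; do
    sleep 10
    Nhash=$(fetch_hash) || continue            # download failed: try again next round
    if [ "$Nhash" != "$Ohash" ]; then
        echo "content changed at $(date)"
        Ohash="$Nhash"
        # the time-sensitive action goes here
    fi
done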
I want to monitor the IPMI System Event Log (SEL) in real time. What I want is that whenever an event is generated in the SEL, a mail alert is automatically sent to me.
One way for me to achieve this is to write a script and schedule it in cron. The script would run 3 or 4 times a day, so whenever a new event has been generated, a mail alert will be sent to me.
But I want the monitoring to be active: whenever an event is generated, a mail should be sent to me, instead of checking at regular intervals.
The SEL Log format is as follows:
server-001% sudo ipmitool sel list
b4 | 05/27/2009 | 13:38:32 | Fan #0x37 | Upper Critical going high
c8 | 05/27/2009 | 13:38:35 | Fan #0x37 | Upper Critical going high
dc | 08/15/2009 | 07:07:50 | Fan #0x37 | Upper Critical going high
So, for the above case, whenever a new event is generated, a mail alert should automatically be sent to me with the event.
How can I achieve this with a bash script? Any pointers will be highly appreciated.
I believe some vendors have special extensions in their firmware for exactly what you are describing (i.e. you just configure an e-mail address in the service processor), but I can't speak to each vendor's support. You'll have to look for your motherboard's documentation for that.
In terms of a standard mechanism, you are probably looking for IPMI PET (platform event trap) support. With PET, when certain SEL events are generated, it will generate a SNMP trap. The SNMP trap, once received by an SNMP daemon can do whatever you want, such as send an e-mail out.
A user of FreeIPMI wrote up his experiences in a doc and posted his scripts, which you can find here:
http://www.gnu.org/software/freeipmi/download.html
(Disclaimer: I maintain FreeIPMI so I know FreeIPMI better, unsure of support in other IPMI software.)
As an FYI, several IPMI SEL logging daemons (FreeIPMI's ipmiseld and ipmitool's ipmievtd are two I know) poll the SEL based on a configurable number of seconds and log the SEL information to syslog. A mail alert could also be configured in syslog to send out an e-mail when an event occurs. These daemons are still polling based instead of real-time, but the daemons will probably handle many IPMI corner cases that your cron script may not be aware of.
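If you do end up with the cron-based polling the question mentions, a minimal sketch (my own; it assumes ipmitool and a working mail(1), keeps its state in /var/tmp, and the mail address is a placeholder) could look like:
#!/bin/bash
# Mail any SEL entries that have appeared since the last run.
state=/var/tmp/sel.last
current=$(mktemp)

ipmitool sel list > "$current"
if [ -f "$state" ]; then
    # Lines present now but not in the previous snapshot
    new_events=$(comm -13 <(sort "$state") <(sort "$current"))
    if [ -n "$new_events" ]; then
        printf '%s\n' "$new_events" | mail -s "New IPMI SEL events on $(hostname)" admin@example.com
    fi
fi
mv "$current" "$state"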
Monitoring of IPMI SEL events can be achieved using the ipmievd tool. It is part of the ipmitool package.
# rpm -qf /usr/sbin/ipmievd
ipmitool-1.8.11-12.el6.x86_64
To send SEL events to syslog, execute the following command:
ipmievd sel daemon
Now, to simulate the generation of SEL events, we will execute the following command:
ipmitool event 2
This will generate the following event:
Voltage Threshold - Lower Critical - Going Low
To get the list of SEL events that can be generated, try:
# ipmitool event
usage: event <num>
Send generic test events
1 : Temperature - Upper Critical - Going High
2 : Voltage Threshold - Lower Critical - Going Low
3 : Memory - Correctable ECC
The event will be logged to /var/log/messages. The following message was generated in the log file:
Oct 21 15:12:32 mgthost ipmievd: Voltage sensor - Lower Critical going low
Just in case it helps anyone else...
I created a shell script to record data in this format, and I parse it with PHP and use Google's Chart API to make a nice line graph.
2016-05-25 13:33:15, 20 degrees C, 23 degrees C
2016-05-25 13:53:06, 21.50 degrees C, 24 degrees C
2016-05-25 14:34:39, 19 degrees C, 22.50 degrees C
#!/bin/sh
# Append a timestamped line with both CPU temperatures to the data file
DATE=$(date '+%Y-%m-%d %H:%M:%S')
temp0=$(ipmitool sdr type Temperature | grep "CPU0 Diode" | cut -f5 -d"|")
temp1=$(ipmitool sdr type Temperature | grep "CPU1 Diode" | cut -f5 -d"|")
echo "$DATE,$temp0,$temp1" >> /events/temps.dat
The problem I'm having now is getting the cron job to access the data properly, even though it's set in the root crontab.
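In case it is the usual cron environment issue (a guess on my part), pointing the crontab at the script with an explicit PATH line often sorts it out; the script path below is just an example:
# root crontab: log temperatures every 20 minutes
PATH=/usr/sbin:/usr/bin:/sbin:/bin
*/20 * * * * /events/log_temps.sh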