I am currently trying to set up a Splunk search query in a dashboard that checks a specific time interval. The job I am setting it up for runs three times a day: once at 6:00am, once at 12:20pm, and once at 16:20 (4:20pm). Currently the query just searches for the latest run and sets the panel background according to whether it received an error or not, but the users want the three daily runs displayed separately. So now I need to set up a time interval for each of the three panels, and I have tried a lot of things with no luck (I am new to Splunk, so I have mostly been trying different syntax at random).
I have tried a search command like |search Time>6:00:00 Time<7:00:00, and also tried other commands placed before the stats command that gets the latest time, with no luck. I'm just stuck at this point and have no clue what to try next.
I have my index at the top here but don't think it's necessary to show.
| rex field=_raw ".+EVENT:\s(?<event>\S+)\s.+STATUS:\s(?<status>\S+)\s.+JOB:\s(?<job>\S+)"
| stats latest(_time) as Time by status
| eval Time=strftime(Time, "%H:%M:%S")
| sort 1 -Time
| table Time status
| append [| makeresults | eval Time="06:10:00"]
| eval range = case(status="FAILURE", "severe", status="SUCCESS", "low", 1==1, "guarded")
| head 1
I was having the same issue as you (the solutions here and elsewhere were not working); however, the below ended up working for me:
(adding this, to properly format / strip the hours)
| eval date_hour=strftime(_time, "%H")
and my full working search (each day, between 6am and 11pm , prior 25 days):
index=mymts earliest=-25d | eval date_hour=strftime(_time, "%H") | search date_hour>=6 date_hour<=23 host="172.17.172.1" "/netmap/*"
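Applied to the original question, each of the three panels could use the same hour-stripping idea with its own window. A sketch only: the index name, the rex field, and the 6–7am window are illustrative, and each panel would swap in its own hour range:

```
index=myindex earliest=-1d
| eval date_hour=tonumber(strftime(_time, "%H"))
| search date_hour>=6 date_hour<7
| rex field=_raw ".+STATUS:\s(?<status>\S+)\s"
| stats latest(_time) as Time by status
| eval Time=strftime(Time, "%H:%M:%S")
| eval range=case(status="FAILURE", "severe", status="SUCCESS", "low", 1==1, "guarded")
```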
My issue is that I want to get two time stamps and check whether the second (later) one is less than 59 minutes after the first one.
Following this thread, Compare two dates with JavaScript,
the Date object may do the job.
But the first thing I am not happy with is that it takes the time from my system.
Is it possible to get the time from some public server or something?
There is always a chance that the system clock gets manipulated between the time stamps, so that would be too unreliable.
Some outside source would be great.
Then I am not too sure how to get the difference between two times (using two Date objects).
Many issues may pop up:
the times could be something like 3:59 and 6:12,
so just comparing minutes would give the wrong idea;
so we consider hours too,
but then there is the issue with the modulo 24:
day 3 23:59 and day 4 0:33 wouldn't be handled properly either;
so we include days too.
Then there is the modulo 30 thing, which on top changes month by month,
so month and year would have to be included as well.
So we would need the whole date, everything from the current year down to the second (seconds would be nice too, for precision),
and comparing them would seem to require tons of if clauses for year, month, etc.
Do the Date objects have some predefined date comparison function that actually keeps all these things in mind (I haven't even mentioned leap years yet, have I)?
Time would be very important, because exactly at the 59-minute mark (plus or minus at most 5 seconds wouldn't matter, but getting remotely close to 60 is forbidden)
a certain function would have to be used that closes a website without fail.
The script opens the website at the 0-minute mark, does some stuff rinse-and-repeat style, and closes the page at the 59-minute mark.
Checking the time every few seconds would be smart.
Any good ideas how to implement such a time comparison that doesn't take too much computing power, yet is robust enough that a new month starting and so on doesn't mess it up?
You can compare two Date objects directly, and when creating one, the constructor accepts a value parameter (new Date(value)) which you can use.
You can use a public time API to get the current UTC time, which returns an example JSON object like this:
{
  "$id": "1",
  "currentDateTime": "2019-11-09T21:12Z",
  "utcOffset": "00:00:00",
  "isDayLightSavingsTime": false,
  "dayOfTheWeek": "Saturday",
  "timeZoneName": "UTC",
  "currentFileTime": 132178075626292927,
  "ordinalDate": "2019-313",
  "serviceResponse": null
}
So you can use either the currentFileTime or the currentDateTime returned from that API to construct your Date object.
Example:
const date1 = new Date('2019-11-09T21:12Z') // time when I started writing this answer
const date2 = new Date('2019-11-09T21:16Z') // time when I finished writing this answer
const diffMs = date2 - date1 // subtracting Dates yields milliseconds
console.log(diffMs / 60000) // time it took me to write this, in minutes
Please keep in mind that due to network latency, the time from the API will be a little bit off (by a few milliseconds).
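To address the asker's 59-minute check directly, here is a minimal sketch (the timestamps are illustrative; in practice they would come from the time API above). Because Date subtraction works on epoch milliseconds, day, month, and leap-year boundaries are handled for free:

```javascript
// Minutes elapsed between two Date objects.
function minutesBetween(earlier, later) {
  return (later.getTime() - earlier.getTime()) / 60000;
}

// True if `later` is less than `limitMinutes` after `earlier`.
function withinLimit(earlier, later, limitMinutes = 59) {
  const m = minutesBetween(earlier, later);
  return m >= 0 && m < limitMinutes;
}

// Day boundary: 23:59 on day 3 vs 00:33 on day 4 is only 34 minutes.
console.log(withinLimit(new Date('2019-11-03T23:59Z'), new Date('2019-11-04T00:33Z'))); // true
// 3:59 vs 6:12 the same day is 133 minutes, so it fails the check.
console.log(withinLimit(new Date('2019-11-03T03:59Z'), new Date('2019-11-03T06:12Z'))); // false
```

For the close-at-59-minutes behaviour, calling withinLimit from a setInterval every few seconds, as the question already suggests, would be enough.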
I want to develop a small Spring Shell project where I display static information which is potentially updated when an event is received. Is this functionality possible with Spring Shell, or is it even the right tool for the job?
To give an example of what I want to achieve:
-------------------
| Stock Value: 5$ | < This information should be always displayed
-------------------
shell:> I can put input here
While I type an event in the application should be able to change it to e.g.
-------------------
| Stock Value: 7$ | < This information gets updated
-------------------
shell:> I can still type here
Spring Shell can do that for you.
See the tutorial here: https://github.com/spring-projects/spring-shell/blob/master/spring-shell-docs/src/main/asciidoc/using-spring-shell.adoc
I need to run a task at a dynamic time held in a variable (whose value is in HH.mm.ss format), between 2 and 5 minutes after that value. Then I could add the job to crontab, scheduled for every minute, and the script would run when the current time reaches the variable's time plus 2 (or a bit more, but no more than 5) minutes.
Thank you.
Update:
Thanks to l0b0, all that is left is to find a way to subtract 2 minutes from the HH.mm.ss variable, for example to get 05:28:00 after subtracting from 05:30:00. I think it must be somehow simple. Thanks for the help.
at should do the trick. Based on man at and an offset variable $offset, you should be able to use this (untested):
echo 'some_command with arguments' | at "now + ${offset}"
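For the update about subtracting 2 minutes from the HH.mm.ss variable, GNU date's relative-time parsing can do the arithmetic. A sketch assuming GNU date (on BSD/macOS the flags differ); the variable name is illustrative:

```shell
# Subtract two minutes from an HH:MM:SS value using GNU date.
# "2 minutes ago" is applied relative to the given time of day.
var="05:30:00"
earlier=$(date -d "$var 2 minutes ago" '+%H:%M:%S')
echo "$earlier"   # 05:28:00
```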
I want to monitor the IPMI System Event Log (SEL) in real time: whenever an event is generated in the SEL, a mail alert should be generated automatically.
One way to achieve this is to write a script and schedule it in cron. The script would run 3 or 4 times a day, so whenever a new event had been generated, a mail alert would be sent to me.
But I want the monitoring to be active: whenever an event is generated, a mail should be sent to me immediately, instead of checking at regular intervals.
The SEL Log format is as follows:
server-001% sudo ipmitool sel list
b4 | 05/27/2009 | 13:38:32 | Fan #0x37 | Upper Critical going high
c8 | 05/27/2009 | 13:38:35 | Fan #0x37 | Upper Critical going high
dc | 08/15/2009 | 07:07:50 | Fan #0x37 | Upper Critical going high
So, for the above case, whenever a new event is generated, a mail alert with the event should automatically be sent to me.
How can I achieve this with a bash script? Any pointers will be highly appreciated.
I believe some vendors have special extensions in their firmware for exactly what you are describing (i.e. you just configure an e-mail address in the service processor), but I can't speak to each vendor's support. You'll have to check your motherboard's documentation for that.
In terms of a standard mechanism, you are probably looking for IPMI PET (platform event trap) support. With PET, when certain SEL events are generated, it will generate a SNMP trap. The SNMP trap, once received by an SNMP daemon can do whatever you want, such as send an e-mail out.
A user of FreeIPMI wrote up his experiences in a doc and posted his scripts, which you can find here:
http://www.gnu.org/software/freeipmi/download.html
(Disclaimer: I maintain FreeIPMI so I know FreeIPMI better, unsure of support in other IPMI software.)
As an FYI, several IPMI SEL logging daemons (FreeIPMI's ipmiseld and ipmitool's ipmievtd are two I know) poll the SEL based on a configurable number of seconds and log the SEL information to syslog. A mail alert could also be configured in syslog to send out an e-mail when an event occurs. These daemons are still polling based instead of real-time, but the daemons will probably handle many IPMI corner cases that your cron script may not be aware of.
Monitoring of IPMI SEL events can be achieved using the ipmievd tool. It is part of the ipmitool package.
# rpm -qf /usr/sbin/ipmievd
ipmitool-1.8.11-12.el6.x86_64
To send SEL events to syslog, execute the following command:
ipmievd sel daemon
Now, to simulate the generation of SEL events, we will execute the following command:
ipmitool event 2
This will generate the following event
`Voltage Threshold - Lower Critical - Going Low`
To get the list of SEL events that can be generated, try:
# ipmitool event
usage: event <num>
Send generic test events
1 : Temperature - Upper Critical - Going High
2 : Voltage Threshold - Lower Critical - Going Low
3 : Memory - Correctable ECC
The event will be written to /var/log/messages. The following message was generated in the log file:
Oct 21 15:12:32 mgthost ipmievd: Voltage sensor - Lower Critical going low
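If a plain bash script is still wanted despite the polling caveats mentioned above, its core is just diffing the current SEL listing against the last one seen. A minimal sketch: the ipmitool and mail invocations are left as comments and replaced with sample data from the question, so only the diff logic is shown, and the file names are illustrative:

```shell
#!/bin/sh
# State file holding the SEL listing from the previous run.
state_file=sel.last
current_file=sel.now
touch "$state_file"

# In a real script this line would be: ipmitool sel list > "$current_file"
printf '%s\n' \
  'b4 | 05/27/2009 | 13:38:32 | Fan #0x37 | Upper Critical going high' \
  'c8 | 05/27/2009 | 13:38:35 | Fan #0x37 | Upper Critical going high' \
  > "$current_file"

# Keep only the lines that were not present in the previous listing.
new_events=$(grep -Fxv -f "$state_file" "$current_file")
if [ -n "$new_events" ]; then
  # In a real script: printf '%s\n' "$new_events" | mail -s 'New IPMI SEL events' admin@example.com
  printf '%s\n' "$new_events"
fi
cp "$current_file" "$state_file"
```

Run from cron every minute, this only mails lines it has not seen before; as noted above, daemons like ipmiseld handle IPMI corner cases such a script will not.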
Just in case it helps anyone else...
I created a shell script to record data in this format, and I parse it with PHP and use Google's Chart API to make a nice line graph.
2016-05-25 13:33:15, 20 degrees C, 23 degrees C
2016-05-25 13:53:06, 21.50 degrees C, 24 degrees C
2016-05-25 14:34:39, 19 degrees C, 22.50 degrees C
#!/bin/sh
DATE=$(date '+%Y-%m-%d %H:%M:%S')
temp0=$(ipmitool sdr type Temperature | grep "CPU0 Diode" | cut -f5 -d"|")
temp1=$(ipmitool sdr type Temperature | grep "CPU1 Diode" | cut -f5 -d"|")
echo "$DATE,$temp0,$temp1" >> /events/temps.dat
The problem I'm having now is getting the cron job to access the data properly, even though it's set in the root crontab.
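If the cron problem is the usual environment issue (cron runs with a minimal PATH, so ipmitool is not found even though the script works interactively), setting PATH in the crontab or using absolute paths usually fixes it. A sketch only; the script path and schedule are hypothetical:

```
# Root crontab sketch: give cron a PATH that includes ipmitool's
# location, and invoke the script by absolute path.
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/bin:/bin
*/20 * * * * /events/log_temps.sh
```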
I have some code which gets all the records from a MongoDB collection and then performs some computations.
My program takes too much time, as the "coll_id.find().each do |eachitem|......." loop returns only 300 records at a time.
If I place a counter inside the loop, it prints 300 records and then sleeps for around 3 to 4 seconds before printing the counter values for the next set of 300 records.
counter = 0
coll_id.find.each do |eachcollectionitem|
  puts "counter value for record " + counter.to_s
  counter = counter + 1
  # ---- My computations here -----
end
Is this a limitation of the ruby-mongodb API, or does some configuration need to be done so that the code can access all the records at once?
How large are your documents? It's possible that the deserialization is taking a long time. Are you using the C extensions (bson_ext)?
You might want to try passing a logger when you connect. That could help sort out what's going on. Alternatively, can you paste in the MongoDB log? What's happening there during the pause?