I want to get the top 5 most frequent ERROR log messages for a particular service. I tried to write a query for this:
" service:"USER" | level_name:"ERROR" | stats count by message | sort -count | head 5 "
but it did not work because Graylog can't count on the message field.
Is there any other way we can do this?
Graylog version: 4.3
I'd like to write a script for Windows 10 to add 1 to a count in a .txt each time a print job completes. Ideally a separate count for each day, so I can see how many print jobs were completed in a day.
Any help in understanding how to go about this is appreciated!
The print service already logs every time it prints - you just need to enable the appropriate event log channel and consume the resulting log events:
# Enable the Microsoft-Windows-PrintService/Operational log channel
wevtutil.exe set-log Microsoft-Windows-PrintService/Operational /enabled:true
Now that the log channel is enabled, the print service will log an event with event ID 307 every time it executes a local print job. Since the log events all have timestamps, getting a count per day is as simple as using the Group-Object cmdlet:
# Fetch the print job events from the event log
$printJobEvents = Get-WinEvent -FilterHashtable @{ LogName='Microsoft-Windows-PrintService/Operational'; Id=307 }
# Group by date logged, to get a count-per-day
$printJobEvents | Group-Object { '{0:yyyy-MM-dd}' -f $_.TimeCreated.Date } -NoElement | Sort-Object Name
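Since the question asks for a running count kept in a .txt file, here is a minimal sketch of how the grouped counts could be written out (the output path is my own example, not part of the original answer, and the directory must already exist):
# Write the per-day counts to a text file (example path)
$printJobEvents |
    Group-Object { '{0:yyyy-MM-dd}' -f $_.TimeCreated.Date } -NoElement |
    Sort-Object Name |
    ForEach-Object { '{0}: {1}' -f $_.Name, $_.Count } |
    Set-Content -Path 'C:\PrintCounts\print-jobs-per-day.txt'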
One technique that might be useful is to query stats for the spooler service like this:
Get-CimInstance 'Win32_PerfFormattedData_Spooler_PrintQueue' |
Format-Table -Property Name,Jobs,TotalJobsPrinted,TotalPagesPrinted -AutoSize
This gives output like this:
Name     Jobs TotalJobsPrinted TotalPagesPrinted
----     ---- ---------------- -----------------
Printer1    0               50               212
Printer2    3               13               118
Printer3    1               33               306
_Total      4               96               636
The stats are reset each time the Print Spooler service restarts, so you'll need to take that into account in your final script, which might make this a trickier option than Mathias' event log solution.
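If you go this route, a rough sketch (the output path is my own example) of snapshotting the _Total counters once a day, so you can diff successive days yourself:
# Append today's cumulative totals to a text file (remember the counters reset when the spooler restarts)
$total = Get-CimInstance 'Win32_PerfFormattedData_Spooler_PrintQueue' |
    Where-Object { $_.Name -eq '_Total' }
'{0:yyyy-MM-dd} {1} jobs {2} pages' -f (Get-Date), $total.TotalJobsPrinted, $total.TotalPagesPrinted |
    Add-Content -Path 'C:\PrintCounts\spooler-totals.txt'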
Below is my message field:
2020-03-05 13:00:03,957 | INFO | p1105444158-1049 | RouteEventNotifier | v1-individuals-retrieveContacts | 2 | Txid : d6946e71-deeb-4a63-9f78-65bdf3c67f7167_76df5b15-ca17-488e-a280-53d167222b70 | csr-web | | t-mcespinfdoza#gmail.com| 6a379b43-b849-455a-8e75-715f70649fe7 | AccountDataPrivacy,CARE_MANUAL_ASSIGN,AdjustmentCreate,CaseEdit,OrderAddOn,CARE_CASE_REASSIGN,OrderNextBestAction,CCADashboardAccess,AdjustmentApproval,CARE_AUTO_ASSIGN,AccountUpdate,AccountView,uma_authorization | | 361 - bil-core - 1.0.0.SNAPSHOT | >>>>Route:rt-BlCore Took 34 ms to send to: Endpoint[direct-vm://nullRoute]
Here I want to extract the "34 ms" value so I can build a visualization that calculates the average time taken by any connector, using a Kibana DSL query.
While indexing the data into Elasticsearch, you can apply an ingest pipeline with a Grok processor (or a Grok filter in Logstash) to extract the required information and index it into a separate field, on which you can then create a Kibana visualization.
I would recommend utilizing the Grok debugging tool in Kibana to come up with the Grok pattern that you need to extract the required information.
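As a rough sketch (the pipeline name and the took_ms field name are my own, not from the question), an ingest pipeline that pulls the duration out of the message into a numeric field could look something like this:
# Create an ingest pipeline that extracts the "Took 34 ms" value into a took_ms field
curl -X PUT 'http://localhost:9200/_ingest/pipeline/route-timing' \
  -H 'Content-Type: application/json' -d '
{
  "processors": [
    { "grok":    { "field": "message", "patterns": ["%{GREEDYDATA}Took %{NUMBER:took_ms} ms%{GREEDYDATA}"] } },
    { "convert": { "field": "took_ms", "type": "integer" } }
  ]
}'
You can paste the same body into Kibana Dev Tools and try it against a sample document with the _simulate endpoint before wiring it into your index.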
When a disk is inserted into my cluster, I want to know about it.
So I need to watch /var/adm/messages and, whenever I catch a new "online" line, write it to a different log file.
When a disk goes online I get this kind of log entry:
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk@g5000c50095f92a8f (sd69) online
tail works without the -F option, but I need the -F option :/
tail messages | grep 408114 | grep '/scsi_vhci/disk@' | egrep -wi --color 'online'
I have three fixed strings to grep for:
1- The ID "408114" is unique to the online status.
2- /scsi_vhci/disk@
3- online
P.S: Sorry for my english :)
For a grep AND, use .*:
$ grep '408114.*/scsi_vhci/disk@.*online' test
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk@g5000c50095f92a8f (sd69) online
Next time don't edit the question completely but ask another question.
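To cover the tail -F part of the question, a minimal sketch along the same lines (the output file name is my own; tail -F and grep --line-buffered assume GNU-style tools):
# Follow /var/adm/messages across rotations and append matching lines to a separate log
tail -F /var/adm/messages | grep --line-buffered '408114.*/scsi_vhci/disk@.*online' >> /var/adm/disk-online.log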
I'm doing some optimization work on a database system, and the system writes log files. Basically I want to optimize some things, then restore the original customer db-dump and re-run the whole thing. My log files look something like this:
2016-05-10 15:43:18,135 DEBUG [WorkerThread#0[127.0.0.1:64181]] d.h.h.h.c.d.m... [1517]: doing thing #2081087 took 26831ms
2016-05-10 15:05:18,135 DEBUG [WorkerThread#0[127.0.0.1:64181]] d.h.h.h.c.d.m... [1517]: doing thing #20887 took 4051ms
2016-05-10 15:02:18,135 DEBUG [WorkerThread#0[127.0.0.1:64181]] d.h.h.h.c.d.m... [1517]: doing thing #1087 took 261ms
Now I want to correlate these events across the different runs, to get some statistics on what effect each change had. So what I want is a result like:
             | run 1  | run 2  | run 3  | ... |
thing #20887 | 261ms  | 900ms  | 100ms  | ... |
thing #1087  | 4051ms | 9000ms | 2000ms | ... |
How can I correlate these events using Elasticsearch and visualize it with Kibana?
On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.
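If you only care about errors, the date filter can be combined with the -l flag documented above, for example:
parse logs -l ERROR -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredErrors.txt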