How do I get the queries that took more than 1 sec from a log file? - shell

I need a Unix command to pull out the queries that took more than 1 second from a log file, using sed/awk. $11 is the time field.
Please suggest an awk script to print all the queries that executed for more than 1 sec.
The log file pattern looks like this:
"PUT /XXX/YYY/test/1/test/1/test/json/query HTTP/1.1" 111 111 5.019

Related

How to make squeue display time limits in hours only?

When viewing submitted jobs managed by Slurm, I would like the time limit column (specified by %l) to show only hours, instead of the usual days-hours:minutes:seconds format. This is the command I am currently using:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me
and this is the example output:
276350 qgpu jobname username RUNNING 1:14:14 1-00:00:00 gres:gpu:v100:1 18 1 s31n02
So, in this case, I would like the elapsed time to remain as is (1:14:14), but the time limit to change from 1-00:00:00 to 24. Is there a way to do it?
This is the way Slurm displays its time values. The elapsed time will eventually be displayed the same way (days-hours:minutes:seconds) once it passes 23:59:59.
You can use a wrapper script to convert the output into a different format (see the sketch after the example output below). Or, if you know the time limit is never more than a day, just set the time limit to 23:59:00 by using --time=1439:
salloc -N1 --time=1439 bash
Using your squeue command:
166 mypartition interactive jyvet RUNNING 7:36 23:59:00 N/A 1 1 mynode
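For the general case, here is a minimal wrapper sketch. It assumes the time limit is the 7th field of the format string above and always has the [D-]HH:MM:SS or HH:MM:SS shape; other Slurm values such as MM:SS or UNLIMITED would need extra cases. Note that reassigning a field makes awk rebuild the line with single spaces, so the column alignment is lost:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me |
awk 'NR == 1 { print; next }                  # pass the header row through
     {
         n = split($7, t, /[-:]/)             # "1-00:00:00" -> 4 parts, "23:59:00" -> 3
         if (n == 4)      $7 = t[1] * 24 + t[2]   # days-hours -> total hours
         else if (n == 3) $7 = t[1]               # keep just the hours part
         print
     }'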

How do I write a Windows 10 script to increment a counter by 1 each time a print job prints?

I'd like to write a script for Windows 10 that adds 1 to a count in a .txt file each time a print job completes, ideally with a separate count for each day, so I can see how many print jobs were completed in a day.
Any help in understanding how to go about this is appreciated!
The print service already logs every time it prints - you just need to enable the appropriate event log channel and consume the resulting log events:
# Enable the Microsoft-Windows-PrintService/Operational log channel
wevtutil.exe set-log Microsoft-Windows-PrintService/Operational /enabled:true
Now that the log channel is enabled, the print service will log an event with event ID 307 every time it executes a local print job. Since the log events all have timestamps, getting a count per day is as simple as using the Group-Object cmdlet:
# Fetch the print job events from the event log
$printJobEvents = Get-WinEvent -FilterHashtable @{ LogName='Microsoft-Windows-PrintService/Operational'; Id=307 }
# Group by date logged, to get a count-per-day
$printJobEvents | Group-Object { '{0:yyyy-MM-dd}' -f $_.TimeCreated.Date } -NoElement | Sort-Object Name
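To get the running counts into a .txt file, as asked, a short sketch (the output path is a placeholder):
# Write one "date count" line per day to a text file
$printJobEvents |
    Group-Object { '{0:yyyy-MM-dd}' -f $_.TimeCreated.Date } -NoElement |
    Sort-Object Name |
    ForEach-Object { '{0} {1}' -f $_.Name, $_.Count } |
    Set-Content -Path 'C:\PrintCounts\daily-counts.txt'  # placeholder path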
One technique that might be useful is to query stats for the spooler service like this:
Get-CimInstance 'Win32_PerfFormattedData_Spooler_PrintQueue' |
Format-Table -Property Name,Jobs,TotalJobsPrinted,TotalPagesPrinted -AutoSize
This gives output like this:
Name     Jobs TotalJobsPrinted TotalPagesPrinted
----     ---- ---------------- -----------------
Printer1    0               50               212
Printer2    3               13               118
Printer3    1               33               306
_Total      4               96               636
The stats are reset each time the Print Spooler service restarts, so you'll need to take that into account in your final script, which might make this a trickier option than Mathias' event log solution.

How to print lines extracted from a log file within a specified time range?

I'd like to fetch results, say from 2017-12-19 19:14 until the end of that day, from a log file that looks like this:
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:00.723 Info: Saving /var/opt/MarkLogic/Forests/Meters/00001829
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:01.134 Info: Saved 9 MB at 22 MB/sec to /var/opt/MarkLogic/Forests/Meters/00001829
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:01.376 Info: Merging 19 MB from /var/opt/MarkLogic/Forests/Meters/0000182a and /var/opt/MarkLogic/Forests/Meters/00001829 to /var/opt/MarkLogic/Forests/Meters/0000182c, timestamp=15137318408510140
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:02.585 Info: Merged 18 MB in 1 sec at 15 MB/sec to /var/opt/MarkLogic/Forests/Meters/0000182c
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:05.200 Info: Deleted 15 MB at 337 MB/sec /var/opt/MarkLogic/Forests/Meters/0000182a
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:05.202 Info: Deleted 9 MB at 4274 MB/sec /var/opt/MarkLogic/Forests/Meters/00001829
I am new to Unix but familiar with the grep command. I tried the command below:
date="2017-12-19 [19-23]:[14-59]"
echo "$date"
grep "$date" $root_path_values
but it throws an "invalid range end" error. Any solutions? The date will come from a variable, so it is unpredictable; please don't build a command around just this example. $root_path_values is a sequence of error files like errorLog.txt, errorLog_1.txt, errorLog_2.txt, and so on.
I'd like to fetch result, let's say from 2017-12-19 19:14 till the entire day … The date is going to be coming from a variable …
This is not a job for regular expressions. Since the timestamp has a sensible form, we can simply compare it as a whole string, which works because these timestamps sort lexically, e.g.:
start='2017-12-19 19:14'
end='2017-12-20'
awk -v start="$start" -v end="$end" 'start <= $0 && $0 < end' ErrorLog_1.txt
This compares against the raw log lines, which begin with the timestamp; if your input carries a leading filename: prefix (as in grep output across several files), strip that prefix first.
Try this regexp:
egrep '2017-12-19 (19:(1[4-9]|[2-5][0-9])|2[0-3]:[0-5][0-9])' path/to/your/file
It matches 19:14 through 19:59 as well as any minute in hours 20 to 23. If you need the pattern in a variable:
#!/bin/bash
date='2017-12-19 (19:(1[4-9]|[2-5][0-9])|2[0-3]:[0-5][0-9])'
egrep "$date" path/to/your/file

Delete all consecutive lines with sed, but not an isolated one

I have a log file which looks like the following text:
...
5 files analysed in 98 ms
7 files analysed in 654 ms
error1: ....
error2: ....
error3: ....
21 files analysed in 345 ms
3 files analysed in 78 ms
6 files analysed in 55 ms
...
I would like to use sed or awk to remove all consecutive lines containing the pattern "files analysed in", but not the one directly above the useful information, so that the output looks like this:
7 files analysed in 654 ms
error1: ....
error2: ....
error3: ....
I tried some tricks from this post, but nothing works quite the way I would like. The number of error lines is not always the same.
How could I proceed?
grep -v "files analysed in" -B 1
Select everything that doesn't match the pattern, but print one line of context before each match.
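Note that GNU grep prints a -- separator between non-adjacent groups of context output; if you don't want those, filter them out (logfile is a placeholder name):
grep -v "files analysed in" -B 1 logfile | grep -v '^--$'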
with awk:
$ awk '/files analysed in/ {p = $0; next} {if (p != "") print p; p = ""; print}' file
which, for the error block in the example, prints:
7 files analysed in 654 ms
error1: ....
error2: ....
error3: ....
You can also exit after the first match if you only need the first error block.

Read Native format bcp data file

From a Unix shell script, I am doing a bcp out from a table on Server1 in native format to a file, XXXX.bcpdat, then a bcp in of that file to a table of the same structure on Server2.
The bcp commands we have are:
bcp "$dbname".."$tablename" out XXXX.bcpdat -n
bcp "$dbname".."$tablename" in XXXX.bcpdat -n -b10000
This bcp out and bcp in work as expected from/into the tables.
But I need to make an urgent change here:
I want to get the total number of rows (a row may have 120, 30, or 40 records) in the bcp data file (XXXX.bcpdat).
With the file in native format I can't tell how the rows are separated. If I run head -10 XXXX.bcpdat or tail -10 XXXX.bcpdat, it prints everything in the file. wc -l, awk, and cut don't help me get the row count, because there is no marker for where a row ends, as there is with a character-mode bcp load. It would be great if someone could help me work out how to get the total number of rows (not records) in the bcpdat file. Thanks a lot in advance.
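The native file itself has no textual row delimiter, so counting rows from the file with wc/awk won't work. One workaround is to capture the count that the bcp client itself reports at export time; a minimal sketch, assuming your bcp prints an "N rows copied." summary line (both the Sybase and Microsoft clients do):
# export as before, but keep bcp's console output and pull the
# row count out of its "N rows copied." summary line
out=$(bcp "$dbname".."$tablename" out XXXX.bcpdat -n 2>&1)
rows=$(printf '%s\n' "$out" | awk '/rows copied/ { print $1 }')
echo "rows in XXXX.bcpdat: $rows"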
