Need a shell script for the following scenario

I have multiple log files in the directory /home/user/ matching the pattern x.log, y.log, z.log.
The content of each file is:
error
pass
fail
executed
not executed
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
done
completed
I want the output from all the log files combined into a single new file, like this:
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
Summary:
test 1
test 2
test 3
Finished in 2682 min 43.9 sec.
Can you help me out with a shell script?

You can use awk:
awk '/Summary/ {run=1} run==1 {print} /Finished/ {run=0}' *.log > log.agr
This takes the contents of every file ending in .log, starts writing to log.agr when it finds a line containing Summary, and stops again after the line containing Finished, so each Summary-to-Finished block is copied in full. It repeats that through the entire contents of all the *.log files.
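An equivalent, more compact form uses awk's range pattern, which selects every line from a Summary match through the next Finished match:
awk '/Summary/,/Finished/' *.log > log.agr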

Related

How to identify the failed test file with mocha parallel

I'm trying to use the mocha parallel flag (--parallel) with my tests. I would like to know whether it is possible to find out which test file failed, in case of a failure (for example: test fileB failed).
The folder structure would be like below:
Login
- fileA
- fileB
- fileC
- fileD
NODE_TLS_REJECT_UNAUTHORIZED=0 ./node_modules/mocha/bin/mocha $(find test/api/v4/Login -name '*.js') --timeout 60000 --parallel --jobs 3
I have 4 test files inside the 'Login' dir. If a test in fileB fails during the execution, is it possible to output which test file failed?

Scripting a clamscan summary that adds multiple "Infected files" outputs together

I want a simple way to add 2 numbers taken from a text file. Details below:
Daily, I run clamscan against my /home/ folder, which generates a simple log along the lines of this:
Scanning 851M in /home/.
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.000 sec (0 m 0 s)
Start Date: 2021:11:27 06:25:02
End Date: 2021:11:27 06:25:02
Weekly, I scan both my /home/ folder and an external drive, so I get twice as much in the log:
Scanning 851M in /home/.
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.000 sec (0 m 0 s)
Start Date: 2021:11:28 06:25:02
End Date: 2021:11:28 06:25:02
Scanning 2.8T in /mnt/ext/.
----------- SCAN SUMMARY -----------
Infected files: 0
Time: 0.005 sec (0 m 0 s)
Start Date: 2021:11:28 06:26:30
End Date: 2021:11:28 06:26:30
I don't email the log to myself, I just have a bash script that sends an email that (for the daily scan) reads the number that comes after "Infected files:" and says either "No infected files found" or "Infected files found, check log." (And, to be honest, once I'm 100% comfortable that it all works the way I want it to, I'll skip the "No infected files found" email.) The problem is, I don't know how to make that work for the weekly scan of multiple folders, because the summary I get doesn't combine those numbers.
I'd like the script to find both lines that start with "Infected files:", get the numbers that follow, and add them. I guess the ideal solution would use a loop, in case I ever need to scan more than two folders. I've taken a couple of stabs at it with grep and cut, but I'm just not experienced enough a coder to make it all work.
Thanks!
This bash script will print out the sum of infected files:
#!/bin/bash
# collect every number that follows "Infected files:" (one per line)
n=$(sed -n 's/^Infected files://p' logfile)
# replace the newlines with + and let arithmetic expansion do the addition
echo $((${n//$'\n'/+}))
or as a one-liner (sed turns each count into +N and the arithmetic expansion adds them up):
echo $(( $(sed -n 's/^Infected files: \(.*\)/+\1/p' logfile) ))
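Since you mentioned wanting something loop-like that scales to any number of scanned folders, an awk variant sums the counts in a single pass over however many SCAN SUMMARY blocks the log contains, and the total can then drive your existing email logic (logfile and the messages are just placeholders from your description):
total=$(awk -F': ' '/^Infected files:/ {sum += $2} END {print sum+0}' logfile)
if [ "$total" -gt 0 ]; then
    echo "Infected files found, check log."
else
    echo "No infected files found."
fi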

What is the numerical difference in the number of files in two different directories for every sequence (seq 1-current)?

Every time I write a new batch of data, two new directories are created; together they are called a sequence.
Directory 1 should always be 9 files larger than Directory 2.
I'm using ls | wc -l to output the number of files in each directory and then doing the difference manually.
For example
Sequence 151
Directory 1: /raid2/xxx/xxxx/NHY274938WSP1151-OnlineSEHD-hyp (1911 files); the sequence number follows WSP1
Directory 2: /raid/xxx/ProjectNumber/xxxx/seq0151 (1902 files)
Sequence 152
Directory 1: /raid2/xxx/xxxx/NHY274938WSP1152-OnlineSEHD-hyp (1525 files)
Directory 2: /raid/xxx/ProjectNumber/xxxx/seq0152 (1516 files)
Is there a script that will output the difference (minus 9) for every sequence? I.e.
151 diff=0
152 diff=0
That works great, however:
I can now see that in some sequences Directory 1 (RAW/all files) contains extra files that I don't want compared against Directory 2. These are:
warmup files at the beginning (not a set amount every sequence)
duplicate files with an _
For example:
20329.uutt -warmup
20328.uutt -warmup
.
.
21530.uutt First good file after warmup
.
.
19822.uutt
19821.uutt
19820.uutt
19821_1.uutt
Directory 2 (reprocessed/missing files) doesn't include warmup shots or duplicate files with an _.
For example:
Missing shots
*021386 - first available file (earlier files are missing)
*021385
.
.
*019822
*019821
*019820
If we remove the warmup files and any duplicates, I should be left with the number of missing files.
Or output:
diff, D1#warmup files, D1#duplicate files, TOTdiff
To get D1#duplicate files, maybe I could count the total number of occurrences of _.uutt.
To get D1#warmup files, I have a log file in /raid2/xxx/xxxx/NHY274938WSP1151.log where warmup shots have "WARM" at the end of each line,
i.e.
"01/27/21 15:33:51 :FLD211018WSP1004: SP:21597: SRC:2: Shots:1037: Manifold:2020:000 Vol:4000:828 Spread: 1.0:000 FF: nan:PtP: 0.000:000 WARM"
"01/27/21 15:34:04 :FLD211018WSP1004: SP:21596: SRC:4: Shots:1038: Manifold:2025:000 Vol:4000:000 Spread: 0.2:000 FF: nan:PtP: 0.000:000 WARM"
Is there a script that will output the difference (minus 9) for every sequence? I.e. 151 diff=0 152 diff=0
Here it is:
#!/bin/bash
d1p=/raid2/xxx/xxxx/NHY274938WSP1 # Directory 1 prefix
d1s=-OnlineSEHD-hyp # Directory 1 suffix
d2=/raid/xxx/ProjectNumber/xxxx/seq0
for d in $d2*
do s=${d: -3} # extract sequence from Directory 2
echo $s diff=$(expr `ls $d1p$s$d1s|wc -l` - `ls $d|wc -l` - 9)
done
With filename expansion * we get all the Directory 2 names, and the parameter expansion ${d: -3} (the ${parameter:offset} form with a negative offset) removes the fixed part by taking the last three characters, i.e. the sequence number.
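For example (using the path pattern from the question):
d=/raid/xxx/ProjectNumber/xxxx/seq0151
echo ${d: -3}    # prints 151
The space before -3 matters: without it, ${d:-3} is the unrelated "use default value" expansion.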
For comparison here's a variant using arrays as suggested by tripleee:
#!/bin/bash
d1p=/raid2/xxx/xxxx/NHY274938WSP1 # Directory 1 prefix
d1s=-OnlineSEHD-hyp # Directory 1 suffix
d2=/raid/xxx/ProjectNumber/xxxx/seq0
shopt -s nullglob # an empty directory expands to 0 entries, not a literal pattern
for d in $d2*
do s=${d: -3} # extract sequence from Directory 2
f1=($d1p$s$d1s/*) # expand files from Directory 1
f2=($d/*) # expand files from Directory 2
echo $s diff=$((${#f1[@]} - ${#f2[@]} - 9))
done
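For the follow-up (excluding warmup and duplicate files), here is a sketch along the lines you suggested. It assumes duplicates are names containing an _ before .uutt, that warmup shots are the log lines ending in WARM, and that the per-sequence log lives at $d1p$s.log as in your example:
#!/bin/bash
d1p=/raid2/xxx/xxxx/NHY274938WSP1 # Directory 1 prefix
d1s=-OnlineSEHD-hyp # Directory 1 suffix
d2=/raid/xxx/ProjectNumber/xxxx/seq0
shopt -s nullglob
for d in $d2*
do s=${d: -3}
   f1=($d1p$s$d1s/*) # files from Directory 1
   f2=($d/*) # files from Directory 2
   dup=$(ls $d1p$s$d1s | grep -c '_.*\.uutt$') # duplicates: an _ before .uutt
   warm=$(grep -c 'WARM$' $d1p$s.log) # warmup shots flagged in the log
   diff=$((${#f1[@]} - ${#f2[@]} - 9))
   echo "$s diff=$diff D1#warmup=$warm D1#duplicates=$dup TOTdiff=$((diff - warm - dup))"
done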

How to print lines extracted from a log file within a specified time range?

I'd like to fetch results, let's say from 2017-12-19 19:14 until the end of that day, from a log file that looks like this:
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:00.723 Info: Saving /var/opt/MarkLogic/Forests/Meters/00001829
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:01.134 Info: Saved 9 MB at 22 MB/sec to /var/opt/MarkLogic/Forests/Meters/00001829
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:01.376 Info: Merging 19 MB from /var/opt/MarkLogic/Forests/Meters/0000182a and /var/opt/MarkLogic/Forests/Meters/00001829 to /var/opt/MarkLogic/Forests/Meters/0000182c, timestamp=15137318408510140
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:02.585 Info: Merged 18 MB in 1 sec at 15 MB/sec to /var/opt/MarkLogic/Forests/Meters/0000182c
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:05.200 Info: Deleted 15 MB at 337 MB/sec /var/opt/MarkLogic/Forests/Meters/0000182a
/var/opt/MarkLogic/Logs/ErrorLog_1.txt:2017-12-19 19:14:05.202 Info: Deleted 9 MB at 4274 MB/sec /var/opt/MarkLogic/Forests/Meters/00001829
I am new to Unix and only familiar with the grep command. I tried the command below:
date="2017-12-19 [19-23]:[14-59]"
echo "$date"
grep "$date" $root_path_values
but it throws an "invalid range end" error. Any solution? The date will be coming from a variable, so it is unpredictable; please don't build a command around just this example. $root_path_values is a sequence of error files like errorLog.txt, errorLog_1.txt, errorLog_2.txt and so on.
I'd like to fetch results, let's say from 2017-12-19 19:14 until the end of that day … The date is going to be coming from a variable …
This is not a job for regular expressions. Since the timestamp has a sensible form (fixed width, most significant field first), plain string comparison orders it chronologically, so we can simply compare it as a whole, e.g.:
start='2017-12-19 19:14'
end='2017-12-20'
awk -v start="$start" -v end="$end" 'start <= $0 && $0 < end' ErrorLog_1.txt
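Because the date comes in through a variable, the end of "the entire day" can be derived rather than hard-coded. With GNU date (an assumption) and the file list from the question:
start='2017-12-19 19:14'
end=$(date -d "${start%% *} + 1 day" +%F) # the day after the start date
awk -v start="$start" -v end="$end" 'start <= $0 && $0 < end' $root_path_values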
Try this regexp; note that the first hour has to be treated specially (minutes 14-59 within hour 19, any minute in hours 20-23):
egrep '2017-12-19 (19:(1[4-9]|[2-5][0-9])|2[0-3]:[0-5][0-9])' path/to/your/file
In case you need the pattern in a variable, quote the expansion so the spaces survive word splitting:
#!/bin/bash
date="2017-12-19 (19:(1[4-9]|[2-5][0-9])|2[0-3]:[0-5][0-9])"
egrep "${date}" path/to/your/file

Ruby Test Unit: Multiple Scripts, One Output

Can I run multiple Test Cases from multiple scripts but have a single output that either says "100% Pass" or "X Failed" and lists out the failed tests?
For example I want to see something like:
>runtests.rb all #runs all the scripts in the directory
Finished in 4.523 Seconds
100% Pass
>runtests.rb category #runs all the scripts in a specified sub-directory
Finished in 2.1 Seconds
2 Failed:
test_my_test
test_my_test_2
1 Error:
test_my_test_3
I use the built-in MiniTest::Unit along with the autotest command that is part of ZenTest and get output like:
autotest
/Users/tinman/.rvm/rubies/ruby-1.9.2-p290/bin/ruby -I.:lib:test -rubygems -e "%w[test/unit tests/test_domains.rb tests/test_regex.rb tests/test_vlan.rb tests/test_nexus.rb tests/test_switch.rb tests/test_template.rb].each { |f| require f }"
Loaded suite -e
Started
........................................
Finished in 0.143375 seconds.
40 tests, 276 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 62474
Is that similar to what you are talking about?
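If you don't want the autotest watcher, the same aggregation works as a one-shot command: requiring every test file into a single process makes the runner print one combined summary (and list the failures) at exit. The tests/test_*.rb glob here is just an assumed layout:
ruby -I.:lib:test -rubygems -e 'require "test/unit"; Dir["tests/test_*.rb"].each { |f| require File.expand_path(f) }'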
