How to invert the result of a diff? - bash

I'm trying to create a logging script.
Right now I need a way to get the lines that differ.
I'm thinking of storing a copy of the sent log, and sending only the difference.
Example:
fullLog.txt
logged 1
logged 2
logged 3
cachedLog.txt
logged 1
And I want to get
logged 2
logged 3
as a variable
and then cp fullLog.txt cachedLog.txt
The issue is, diff fullLog.txt cachedLog.txt | sed 's/^[<>] //g' only prints
logged 1
How can I "invert" the result to get what I wanted?

Solved: comm -3 fullLog.txt cachedLog.txt (note: comm expects sorted input; comm -23 prints only the lines unique to fullLog.txt)
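The whole caching workflow from the question can be sketched like this; diff preserves the log's line order, which matters if the log isn't sorted (comm requires sorted input):

```shell
# Sketch of the caching workflow from the question, using diff so the
# original log order is preserved (comm -3 also works on sorted files).
printf 'logged 1\nlogged 2\nlogged 3\n' > fullLog.txt
printf 'logged 1\n' > cachedLog.txt

# Lines present in fullLog.txt but not yet in cachedLog.txt:
newLines=$(diff cachedLog.txt fullLog.txt | sed -n 's/^> //p')
echo "$newLines"

cp fullLog.txt cachedLog.txt   # remember what has already been sent
```

Putting cachedLog.txt first makes the new lines show up as `>` lines, which `sed -n 's/^> //p'` then strips and prints.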

Related

Avoid mass e-mail notification in error analysis bash script

I am selecting error log details from a Docker container and deciding within a shell script how and when to alert about the issue via Discord and/or email.
Because I am receiving the email alerts too often with the same information in the email body, I want to implement the following two adjustments:
Fatal error log selection:
FATS="$(docker logs --since 24h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
Email sent, in case FATS has some content:
swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
How can I send the email only when FATS has different content than on the previous run of the script? I have thought about hashing its content and storing the hash in a text file; if the hash is the same as on the previous run, the email is skipped.
Another option could be a local, temporary variable in the user's global bash profile, so that no file has to be stored on the file system (to avoid reads/writes).
How can I do that?
When you are writing a script for your monitoring, add functions for additional functionality, like:
logging all the alerts that have been sent
making sure you don't send more than one alert each hour
considering sending warnings only during working hours
escalating a message when it fails N times without intermediate success
possibly sending alerts to different receivers (different email addresses, or to SMS or Teams)
making an interface for an operator, so they can look back to when something first went wrong.
When you have control over which messages you send, it is easy to filter out duplicate messages (after changing --since).
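The hash idea from the question can be sketched as follows. The sample FATS content, the mktemp state file, and the echo placeholder are illustrative; the real script would set FATS from the docker logs pipeline shown in the question and call swaks where the echo is:

```shell
# Hedged sketch: send the alert only when the FATAL selection differs
# from the previous run. In the real script:
#   FATS="$(docker logs --since 24h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
# and a fixed path like /var/tmp/fats.sha256 instead of mktemp.
FATS='FATAL: example error'
state=$(mktemp)

hash=$(printf '%s' "$FATS" | sha256sum | cut -d' ' -f1)
if [ -n "$FATS" ] && [ "$hash" != "$(cat "$state" 2>/dev/null)" ]; then
    echo "would send alert"           # the swaks ... --body "$FATS" call goes here
    printf '%s' "$hash" > "$state"    # remember what was alerted
fi
```

On the next run with identical FATS, the stored hash matches and the send branch is skipped.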
I've chosen the proposal of #ralf-dreager and reduced the selection to 1d and 1h. Consequently, I've changed my monitoring script to either go through the results of 1d or just 1h, without the need to select again and again each time. Huge performance improvement, and no need to store anything else in a variable or on the file system.
FATS="$(docker logs --since 1h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"

Getting log entry "disk online" from system log

When a disk is inserted into my cluster, I want to know about it.
So I need to watch /var/adm/messages, and when I catch a new "online" line I must write it to a different log file.
When disk goes online I get this kind of log entries:
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online
Tail works without the -F option, but I need the -F option :/
tail messages | grep 408114 | grep '/scsi_vhci/disk#'| egrep -wi --color 'online'
I have three distinguishing strings for grep:
1- The ID "408114", which is unique to the online status.
2- /scsi_vhci/disk#
3- online
P.S.: Sorry for my English :)
For grep AND use .*:
$ grep '408114.*/scsi_vhci/disk#.*online' test
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online
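Since the asker needs continuous monitoring with tail -F, here is a sketch of that setup. Because tail -F never exits, the pattern is demonstrated on sample lines; the log paths in the comment are from the question, the output file name is an assumption:

```shell
# Regex for the "disk online" entries. Real use would be:
#   tail -F /var/adm/messages | grep --line-buffered "$pattern" >> /var/adm/disk-online.log
# (--line-buffered makes grep emit each match immediately on a live pipe.)
pattern='\[ID 408114 .*\] /scsi_vhci/disk#.* online$'

# Demo on sample lines: only the first one should match.
printf '%s\n' \
  'Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online' \
  'Dec 8 10:11:02 SMNODE01 genunix: [ID 999999 kern.info] unrelated entry' \
  | grep "$pattern"
```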
Next time, don't edit the question completely; ask another question instead.

How to save bash output in text/excel file?

I want to save a particular result to a text/Excel file through bash.
I tried the command below and it works fine, but I need only the final result (passed/failed), not each step of the execution.
I used the command below to execute it:
$ bash eg.sh Behatscripts.txt > output.xls or
$ bash eg.sh Behatscripts.txt > output.txt
Below is the console output in my case; this whole thing is written into the .txt/.xls file, but I need only the last part, which is:
1 scenario (1 passed)
3 steps (3 passed)
Executing the Script : eg.feature
----------------------------------------
#javascript
Feature: home page Validation
In order to check the home page of our site
As a website/normal user
I should be able to find some of the links/texts on the home page
Scenario: Validate the links in the header and footer # features\cap_english_home.feature:8
Given I am on the homepage # FeatureContext::iAmOnHomepage()
When I visit "/en" # FeatureContext::assertVisit()
Then I should see the following <links> # FeatureContext::iShouldSeeTheFollowingLinks()
| links |
| Dutch |
1 scenario (1 passed)
3 steps (3 passed)
0m14.744s
Any suggestion for a condition that saves only the last part of the console output would be appreciated; thanks in advance.
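One way is to keep only the summary lines with a grep filter. Since eg.sh isn't available here, the filter is demonstrated on sample output; the commented line shows the real invocation with the script and file names from the question:

```shell
# Keeps only Behat summary lines like "1 scenario (1 passed)" / "3 steps (3 passed)".
# Real use: bash eg.sh Behatscripts.txt | grep -E "$summary" > output.txt
summary='^[0-9]+ (scenario|step)s? \('

printf '%s\n' \
  'Feature: home page Validation' \
  '1 scenario (1 passed)' \
  '3 steps (3 passed)' \
  '0m14.744s' \
  | grep -E "$summary"
```

If the summary is always literally the last two lines before the timing, `tail -n 3 | head -n 2` would also work, but the grep filter doesn't depend on line positions.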

shell script display grep results

I need some help with displaying how many times two strings are found on the same line! Let's say I want to search the file 'test.txt', which contains names and IPs. I want to enter a name as a parameter when running the script; the script will then search the file for that name and check whether there is also an IP address on that line. I have tried using the 'grep' command, but I don't know how to display the results in a good way. I want it like this:
Name: John Doe IP: xxx.xxx.xx.x count: 3
The count is how many times this line was found; this is how my grep script looks right now:
#!/bin/bash
echo "Searching $1 for the Name '$2'"
result=$(grep "$2" $1 | grep -E "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)")
echo $result
I will run the script like 'sh search test.txt John'.
I'm having trouble displaying the information I get from the grep command, maybe there's a better way to do this?
EDIT:
Okay, I will try to explain a little better. Let's say I want to search a .log file; I want the script to search that file for a string the user enters as a parameter. I.e. if the user enters 'sh search test.log logged in', the script will search for the string "logged in" within the file 'test.log'. If the script finds this string on the same line as an IP address, the IP address is printed, along with how many times that line was found.
I simply don't know how to do it; I'm new to shell scripting and was hoping I could use grep along with regular expressions for this! I will keep trying, and update this question with an answer if I figure it out.
I don't have said file on my computer, but it looks something like this:
Apr 25 11:33:21 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
Apr 25 12:39:01 Admin CRON[2792]: pam_unix(cron:session): session closed for user 192.168.1.2
Apr 27 07:42:07 John CRON[2792]: pam_unix(cron:session): session opened for user 192.168.2.22 by (uid=0)
Apr 27 14:23:11 John CRON[2792]: pam_unix(cron:session): session closed for user 192.168.2.22
Apr 29 10:20:18 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
Apr 29 12:15:04 Admin CRON[2792]: pam_unix(cron:session): session closed for user 192.168.1.2
Here is a simple Awk script which does what you request, based on the log snippet you posted.
awk -v user="$2" '$4 == user { i[$11]++ }
END { for (a in i) printf ("Name: %s IP: %s count: %i\n", user, a, i[a]) }' "$1"
If the fourth whitespace-separated field in the log file matches the requested user name (which was passed to the shell script as its second parameter), add one to the count for the IP address (from field 11).
At the end, loop through all non-zero IP addresses, and print a summary for each. (The user name is obviously whatever was passed in, but matches your expected output.)
This is a very basic Awk script; if you think you want to learn more, I urge you to consult a simple introduction, rather than follow up here.
If you want a simpler grep-only solution, something like this provides the information in a different format:
grep "$2" "$1" |
grep -o -E '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' |
sort | uniq -c | sort -rn
The trick here is the -o option to the second grep, which extracts just the IP address from the matching line. It is however less precise than the Awk script; for example, a user named "sess" would match every input line in the log. You can improve on that slightly by using grep -w in the first grep -- that still won't help against users named "pam" --, but Awk really gives you a lot more control.
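The Awk one-liner from the answer, run against sample lines from the question (saved to test.log here to make the demo self-contained):

```shell
# Field 4 is the user name, field 11 is the IP address in this log format.
cat > test.log <<'EOF'
Apr 27 07:42:07 John CRON[2792]: pam_unix(cron:session): session opened for user 192.168.2.22 by (uid=0)
Apr 27 14:23:11 John CRON[2792]: pam_unix(cron:session): session closed for user 192.168.2.22
Apr 25 11:33:21 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
EOF

awk -v user=John '$4 == user { i[$11]++ }
  END { for (a in i) printf ("Name: %s IP: %s count: %i\n", user, a, i[a]) }' test.log
# -> Name: John IP: 192.168.2.22 count: 2
```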
My original answer is below this line, partly because it's tangentially useful, partly because it is required in order to understand the pesky comment thread below.
The following
result=$(command)
echo $result
is wrong. You need the second line to be
echo "$result"
but in addition, the detour over echo is superfluous; the simple way to write that is simply
command
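The quoting point can be seen directly: command substitution keeps embedded newlines, but an unquoted expansion is word-split and echo rejoins the words with single spaces.

```shell
result=$(printf 'line one\nline two')
echo $result      # -> line one line two   (newline lost to word splitting)
echo "$result"    # prints two lines, exactly as captured
```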

How do I echo username, full name, and login time from finger into columns? I'm using bash on openSUSE13.1

Basically I have three users logged in to my machine right now. Test User1, Test User2, and Test User3.
I would like to use finger to get username, full name and the time they logged into the machine.
I would like to output the information like so:
Login Name Login Time
testuser1 Test User1 1300
testuser2 Test User2 1600
testuser3 Test User3 1930
I have two tabs between Login and Name, and three tabs between Name and Login Time. The same goes for the user information below each header.
I cannot figure out how to pull this data from finger very well and I absolutely cannot figure out how to get the information into nice, neat, readable columns. Thanks in advance for any help!
This might not be perfect so you'll have to play around with substr starting and ending points. Should be good enough to get you started:
finger -s testuser1 testuser2 testuser3 | awk '{print substr($0,1,31),substr($0,46,14)}'
Try :r!finger (in Vim). On my Mac, I get nice columns. YMMV.
:help :r!
Here's another way using awk:
finger -l | awk '{ split($1, a, OFS); print a[2], a[4], substr($3, 20, 6) }' FS="\n" RS= | column -t
The -l flag of finger produces a multi-line format (and is compatible with the -s flag). This is useful when fields like 'name' are absent. We can then process the records using awk in paragraph mode. In the example above, you can adjust the substring to suit the datespec of your choice. If you have gawk, you'll have access to some time functions that may interest you if you wish to change the spec. Finally, you can pipe the fields of interest into column -t for pretty printing. HTH.
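If finger isn't available, the full name can also be pulled from the GECOS field (the 5th colon-separated field) of the passwd entry. This is a hedged sketch: the sample line stands in for `getent passwd testuser1`, and the column widths are illustrative:

```shell
# GECOS extraction: 5th colon field, first comma sub-field holds the full name.
entry='testuser1:x:1001:1001:Test User1,,,:/home/testuser1:/bin/bash'
name=$(printf '%s' "$entry" | cut -d: -f5 | cut -d, -f1)

# Fixed-width columns via printf instead of hand-counted tabs:
printf '%-12s%s\n' Login Name
printf '%-12s%s\n' testuser1 "$name"
```

Login time would still come from finger, who, or last, since passwd doesn't store it.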
