When a disk is inserted into my cluster, I want to know about it.
So I need to watch /var/adm/messages and, whenever a new "online" line appears, write it to a separate log file.
When disk goes online I get this kind of log entries:
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online
Tail works here without the -F option, but I need -F (to keep following across log rotation) :/
tail messages | grep 408114 | grep '/scsi_vhci/disk#'| egrep -wi --color 'online'
I have three fixed strings to grep for:
1- The ID "408114", which is unique to the online status.
2- /scsi_vhci/disk#
3- online
P.S.: Sorry for my English :)
To AND several patterns together in grep, join them with .*:
$ grep 408114.*/scsi_vhci/disk#.*online test
Dec 8 10:10:46 SMNODE01 genunix: [ID 408114 kern.info] /scsi_vhci/disk#g5000c50095f92a8f (sd69) online
Next time, please don't rewrite the question completely; ask a new question instead.
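If GNU tail and GNU grep are available, the whole pipeline can run continuously and append matches to a separate log. A minimal sketch (the output path /var/log/disk-online.log is made up):
# -F keeps following across log rotation; -n0 starts at the end of the file;
# --line-buffered makes grep flush each matching line immediately.
tail -Fn0 /var/adm/messages |
grep --line-buffered '408114.*/scsi_vhci/disk#.*online' >> /var/log/disk-online.log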
Related
Is there a way within Amazon/CentOS/Linux to switch the ordering of Nitro disks?
I have an AMI which consistently brings the devices up in the wrong order; by this I mean nvme1n1 and nvme2n1 should be swapped. If I run nvme id-ctrl -v /dev/nvme1n1 | grep sn, I get a different serial number back following a reboot. I know they're "wrong" because the serial numbers do not reflect the devices' capacities... I hope that makes sense (I appreciate it's a bit confusing). This only ever occurs on servers with two or more disks; upon a reboot the disks are "correct".
My question is: is there a method of forcing the NVMe devices to disconnect and reconnect (in the hope that the mapping then comes up in the expected order)?
Thanks guys
Amazon Linux version 2017.09.01 and later contains scripts and a udev rule that automatically map NVMe devices to /dev/xvd?. This is mentioned only briefly in the documentation, and there is not much information there.
You can obtain a copy by launching the Amazon Linux AMI, but there are also other places on the web where they have been posted. For example, I found this gist.
Very simple in the end:
# detach each NVMe controller's PCI device from the bus...
echo 1 > /sys/bus/pci/devices/$(readlink -f /sys/class/nvme/nvme1 | awk -F "/" '{print $5}')/remove
echo 1 > /sys/bus/pci/devices/$(readlink -f /sys/class/nvme/nvme2 | awk -F "/" '{print $5}')/remove
# ...then rescan the bus so the kernel re-enumerates them
echo 1 > /sys/bus/pci/rescan
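The same idea written as a loop, in case there are more controllers (a sketch; the controller names nvme1 and nvme2 and the sysfs layout are assumed to match the above):
#!/bin/bash
# Detach each NVMe controller's PCI device, then rescan the bus so
# the kernel re-enumerates them.
for ctrl in nvme1 nvme2; do
    pci=$(basename "$(readlink -f /sys/class/nvme/$ctrl/device)")
    echo 1 > "/sys/bus/pci/devices/$pci/remove"
done
echo 1 > /sys/bus/pci/rescan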
Thanks to everyone who has helped me so far; here is my problem: I have a folder which contains 825 files. These files contain reviews of hotels. An example name of one of these files is hotel_72572.dat, and such a file basically contains the following:
<Overall Rating>4
<Avg. Price>$173
<URL>http://www.tripadvisor.com/ShowUserReviews-g60878-d72572-r23327047-Best_Western_Pioneer_Square_Hotel-Seattle_Washington.html
<Author>everywhereman2
<Content>Old seattle...
<Date>Jan 6, 2009
<img src="http://cdn.tripadvisor.com/img2/new.gif" alt="New"/>
<No. Reader>-1
<No. Helpful>-1
<Overall>5
<Value>5
<Rooms>5
<Location>5
<Cleanliness>5
<Check in / front desk>5
<Service>5
<Business service>5
<Author> //repeats the fields again, each cluster of fields is a review
The fields (line 6 through <Business service>) are then repeated n times, where n is the number of reviews in the file. I thought counting the number of times "Author" appears per file would achieve this, but perhaps there is a better solution?
I am trying to write a script called countreviews.sh that counts the number of reviews per file in my folder (the folder name is reviews_folder) and then sorts the counts from highest to lowest. Example output:
hotel_72572 45
hotel_72579 33
hotel_73727 17
where the prefix is the file name and the number is the number of reviews in that file. My script must take the folder name as an argument; for example, I would type ./countreviews.sh reviews_folder and get the output above.
I have received lots of help over the past few days with many different suggestions, but none of them achieved what I am trying to do (my fault, due to poor explanations). I hope this finally explains it clearly enough. Thanks again to everyone who has helped me over the past few days, and thanks for any help with this question.
grep -c Author hotel_*.dat | sort -t : -k2nr | sed 's/\.dat:/ /'
Output (e.g.):
hotel_72572 45
hotel_72579 33
hotel_73727 17
Update
#!/bin/bash
# change into the folder given as the first argument
cd "$1" || exit 1
# count <Author> occurrences per file, sort by count (descending),
# and strip the .dat: suffix from the file names
grep -c Author hotel_*.dat | sort -t : -k2nr | sed 's/\.dat:/ /'
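If the word Author can also appear inside review text or URLs, anchoring the pattern to the start of the line makes the count stricter. A variant under that assumption (each review begins with a literal <Author> line):
grep -c '^<Author>' hotel_*.dat | sort -t : -k2nr | sed 's/\.dat:/ /'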
The goal is to frequently change the default outgoing source IP on a machine with multiple interfaces and live IPs.
I used ip route replace default as per its documentation and let a script run in a loop at some interval. It changes the source IP fine for a while, but then all network access to the machine is lost, and it has to be rebooted remotely from a web interface before anything works again.
Is there anything that could prevent this from working stably? I have tried this on more than one server.
A minimal example follows:
# extract all currently active source IPs except loopback
IPs="$(ifconfig | grep 'inet addr:' | grep -v '127.0.0.1' | cut -d: -f2 |
awk '{ print $1 }')"
read -a ip_arr <<< $IPs
# extract all currently active ethernet interface names
Int="$(ifconfig | grep 'eth' | grep -v 'lo' | awk '{print $1}')"
read -a eth_arr <<< $Int
ip_len=${#ip_arr[@]}
eth_len=${#eth_arr[@]}
i=0
e=0
while true; do
    # e.g.: ip route replace 0.0.0.0 dev eth0:1 src 192.168.1.18
    route_cmd="ip route replace 0.0.0.0 dev ${eth_arr[e]} src ${ip_arr[i]}"
    echo "$route_cmd"
    eval "$route_cmd"
    sleep 300
    ((i++))
    ((e++))
    if [ $i -eq $ip_len ]; then
        i=0
        e=0
        echo "all ips exhausted - starting from first again"
        # break
    fi
done
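As an aside, the address discovery is less fragile with iproute2 than with screen-scraping ifconfig. A sketch, assuming bash 4+ (for mapfile) and IPv4-only addresses:
# Collect global IPv4 addresses and their interface names; loopback
# has scope host, so it is excluded automatically.
mapfile -t ip_arr < <(ip -4 -o addr show scope global | awk '{ sub(/\/.*/, "", $4); print $4 }')
mapfile -t eth_arr < <(ip -4 -o addr show scope global | awk '{ print $2 }')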
I wanted to leave this as a comment, but as I don't have enough reputation points, it won't let me.
Consider:
Does varying the delay before the command runs again change the number of iterations before it fails?
Export the ifconfig and route output every time you change it, to see whether something is meaningfully different over time. Run some basic tests as well (ping, nslookup, etc.). Basically, find out exactly what is going wrong. Also log the commands you send (one text file per change?) to check whether they differ after x iterations.
What connectivity is lost? Incoming? Outgoing? Specific applications?
You say you do this on other servers without problems?
Are the IPs static (/etc/network/interfaces), BOOTP/DHCP, or semi-static (a BOOTP/DHCP server handing out leases based on MAC address)? And if they are served by BOOTP/DHCP, what is the lease duration?
On the last remark:
BOOTP/DHCP grants an IP for a fixed duration; say it's 60 minutes. After half that time the client "checks" with the BOOTP/DHCP server whether it can keep the IP and extends the lease to 60 minutes again. This can mean a small reconfiguration of the interface (maybe even at the same moment your script runs?).
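If the machine runs ISC dhclient, the current lease timers are easy to inspect; the lease file path varies by distribution (/var/lib/dhcp/dhclient.leases is a common Debian location):
# Show when the current leases renew, rebind, and expire.
grep -E 'renew|rebind|expire' /var/lib/dhcp/dhclient.leases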
hth
Currently I keep 6 weeks of Apache access_log files. If I generate an access summary at month end:
cat /var/log/httpd/access_log* | goaccess --output-format=csv
the summary will include some access data from the previous month.
How can I skip the previous month's logs and summarise from the first day of the current month?
P.S.: the date format is %d/%b/%Y
You can trade the Useless Use of cat for a useful grep. (Note the -h option: with multiple files, grep would otherwise prefix each line with the file name, which goaccess cannot parse.)
grep -h $(date +'[0-3][0-9]/%b/%Y') /var/log/httpd/access_log* |
goaccess --output-format=csv
If the logs are rotated by date, it would be a lot more economical to skip the files you know are too old or too new, i.e. modify the wildcard argument so you only match the files you really want (or run something like find -mtime -30 to at least narrow the set down to a few files).
(The cat is useless because, if goaccess is at all correctly written, it should be able to handle
goaccess --output-format=csv /var/log/httpd/access_log*
just fine.)
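Combining both points, a sketch that first narrows the file set by modification time and then filters the lines (assumes GNU find and GNU date):
# Only touch files modified since the first of this month, keep only
# this month's lines, and hand the result to goaccess.
find /var/log/httpd -name 'access_log*' -newermt "$(date +%Y-%m-01)" \
    -exec grep -h "$(date +'/%b/%Y')" {} + |
goaccess --output-format=csv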
I need some help with displaying how many times two strings are found on the same line! Let's say I want to search the file test.txt, which contains names and IPs. I want to enter a name as a parameter when running the script; the script will search the file for that name and check whether there is also an IP address on the same line. I have tried using the grep command, but I don't know how to display the results in a good way. I want it like this:
Name: John Doe IP: xxx.xxx.xx.x count: 3
The count is how many times this line was found. This is what my grep script looks like right now:
#!/bin/bash
echo "Searching $1 for the Name '$2'"
result=$(grep "$2" $1 | grep -E "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)")
echo $result
I will run the script like sh search test.txt John.
I'm having trouble displaying the information I get from the grep command; maybe there's a better way to do this?
EDIT:
Okay, I will try to explain a little better. Let's say I want to search a .log file; I want the script to search that file for a string the user enters as a parameter, i.e. if the user enters sh search test.log "logged in", the script will search for the string "logged in" within the file test.log. If the script finds this string on the same line as an IP address, the IP address is printed, along with how many times that line was found.
I simply don't know how to do it; I'm new to shell scripting and was hoping I could use grep along with regular expressions for this! I will keep trying, and update this question with an answer if I figure it out.
I don't have said file on my computer, but it looks something like this:
Apr 25 11:33:21 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
Apr 25 12:39:01 Admin CRON[2792]: pam_unix(cron:session): session closed for user 192.168.1.2
Apr 27 07:42:07 John CRON[2792]: pam_unix(cron:session): session opened for user 192.168.2.22 by (uid=0)
Apr 27 14:23:11 John CRON[2792]: pam_unix(cron:session): session closed for user 192.168.2.22
Apr 29 10:20:18 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
Apr 29 12:15:04 Admin CRON[2792]: pam_unix(cron:session): session closed for user 192.168.1.2
Here is a simple Awk script which does what you request, based on the log snippet you posted.
awk -v user="$2" '$4 == user { i[$11]++ }
END { for (a in i) printf ("Name: %s IP: %s count: %i\n", user, a, i[a]) }' "$1"
If the fourth whitespace-separated field in the log file matches the requested user name (which was passed to the shell script as its second parameter), add one to the count for the IP address (from field 11).
At the end, loop through all IP addresses with a non-zero count, and print a summary line for each. (The user name is obviously whatever was passed in, but this matches your expected output.)
This is a very basic Awk script; if you think you want to learn more, I urge you to consult a simple introduction, rather than follow up here.
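Run against the log snippet above, it produces output like this (the wrapper script name search.sh is assumed):
$ sh search.sh test.log John
Name: John IP: 192.168.2.22 count: 2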
If you want a simpler grep-only solution, something like this provides the information in a different format:
grep "$2" "$1" |
grep -o -E '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' |
sort | uniq -c | sort -rn
The trick here is the -o option to the second grep, which extracts just the IP address from each matching line. It is, however, less precise than the Awk script; for example, a user named "sess" would match every line in the log. You can improve on that slightly by using grep -w in the first grep (though that still won't help against users named "pam"), but Awk really gives you a lot more control.
My original answer is below this line, partly because it's tangentially useful, partly because it is required in order to understand the pesky comment thread below.
The following
result=$(command)
echo $result
is wrong. You need the second line to be
echo "$result"
but in addition, the detour via echo is superfluous; the simple way to write it is just
command
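A quick demonstration of why the quotes matter: without them, the shell re-splits the substituted value, so embedded newlines collapse into spaces.
result=$(printf 'one\ntwo')
echo $result     # prints: one two
echo "$result"   # prints: one
                 #         two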