I'm new to Ansible, so this question may seem silly to more advanced users. I'm not sure whether what I'm asking is even possible, since Ansible is very limited when it comes to loops and conditionals.
I'm performing tasks on a Virtual Connect switch, so I'm limited to using the raw module.
I have the following STDOUT:
=========================================================================
Profile Port Network PXE/IP MAC Address Allocated Status
Name Boot Order Speed
(min-max)
=========================================================================
CLO01ES 1 CLO_355 UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 _1 to 0
-------------------------------------------------------------------------
CLO01ES 2 CLO_355 UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 _2 to 2
-------------------------------------------------------------------------
CLO01ES 3 Multipl UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 e to 4
Network
-------------------------------------------------------------------------
CLO01ES 4 Multipl UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 e to 6
Network
-------------------------------------------------------------------------
<omitted>
The issue is that STDOUT can have multiple lines with different profiles in them, i.e. I don't know the line numbers or MAC addresses beforehand.
What I want to achieve is a status check: if profile CLO01ESX02 has the Network Name "Multiple Network" twice, then I want to skip the task.
Whenever I was googling parsing variables or STDOUT I would get just basic answers.
Is this possible with Ansible or am I forced to write a custom script?
It can't be done directly by any native Ansible module, but piping through shell utilities should do the trick (note that a pipeline needs the shell module; the command module does not support pipes):
- name: Check output
  shell: <your_command_here> | grep CLO01ESX02 | grep "Multiple Network" | wc -l
  register: wc
  failed_when: wc.stdout|int > 1
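If the goal is to skip the follow-up task rather than fail the play, a minimal sketch could gate the next task with when (the task names and the <your_configuration_command_here> placeholder are made up for illustration):

```yaml
- name: Count "Multiple Network" rows for profile CLO01ESX02
  shell: <your_command_here> | grep CLO01ESX02 | grep "Multiple Network" | wc -l
  register: wc

- name: Configure the profile (skipped when it already appears twice)
  raw: <your_configuration_command_here>
  when: wc.stdout | int < 2
```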
Is there capacity within amazon/centos/linux to switch the ordering round of nitro disks?
I have an ami which consistently has devices in the incorrect order, by this I mean nvme1n1 and nvme2n1 should be switched round. If I run nvme id-ctrl -v /dev/nvme1n1 | grep sn I get a different serial number back following a reboot. I know they're "wrong" as the serial numbers are not reflective of their capacity... Hope that makes sense (I appreciate it's a bit confusing). This only ever occurs on servers with two or more disks; upon a reboot the disks are "correct"
My question is, is there a method of forcing the nvme device to disconnect and reconnect (in the hope that the mapping works as expected in the correct order).
Thanks guys
Amazon Linux versions 2017.09.01 and later contain scripts and a udev rule that automatically map NVMe devices to /dev/xvd?. This is very briefly mentioned in the documentation, but there is not much information there.
You can obtain a copy by launching the Amazon Linux AMI, but there are also other places on the web where they have been posted. For example, I found this gist.
Very simple in the end:
# resolve each NVMe controller to its PCI address and detach the device
echo 1 > /sys/bus/pci/devices/$(readlink -f /sys/class/nvme/nvme1 | awk -F "/" '{print $5}')/remove
echo 1 > /sys/bus/pci/devices/$(readlink -f /sys/class/nvme/nvme2 | awk -F "/" '{print $5}')/remove
# rescan the PCI bus so the devices reattach
echo 1 > /sys/bus/pci/rescan
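To make the field extraction concrete, here is what the awk part of those commands does on a stubbed path (the resolved sysfs path below is an assumption of what readlink -f typically returns):

```shell
# stub of what readlink -f /sys/class/nvme/nvme1 might resolve to (assumption)
link="/sys/devices/pci0000:00/0000:00:1e.0/nvme/nvme1"
# the 5th "/"-separated field is the PCI address used in the remove path
pci=$(echo "$link" | awk -F "/" '{print $5}')
echo "$pci"
```

Echoing 1 into /sys/bus/pci/devices/0000:00:1e.0/remove would then detach that controller.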
I have a script that should do a port scan on a specific UDP port and check whether the correct service string can be grepped from the output.
It looks like the following:
Nmap returns the following (the same for root and User nagios):
Starting Nmap 6.40 ( http://nmap.org ) at 2019-02-23 12:33 CET
Nmap scan report for 172.32.0.1
Host is up.
PORT STATE SERVICE
1194/udp open|filtered openvpn
Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
Now I grep it in the script:
f_result=`echo $result | egrep -o "${port}/udp [a-zA-Z0-9_-\| ]+Nmap done"`
and this is where I get confused. I didn't write it myself and my bash know-how isn't the best.
The problem: when I execute the script, at that grep, one user gets an error and the other doesn't.
The script works for user root just fine, but for user "nagios", it returns:
egrep: Invalid range end
The error has to be around the backslash, but I don't get it: how can it work for root but not for a different user? Is it some kind of forbidden symbol?
I guess it's a layer-8 problem, so I'm sorry if this is a silly question to ask.
The error comes from the bracket expression [a-zA-Z0-9_-\| ]: the - between _ and \ is parsed as a range, and whether that range is valid depends on the user's locale collation, which is why root and nagios can behave differently. A better solution is:
f_result=$(grep -oP "${port}/udp.*?Nmap done" <<< "$result")
grep -P enables Perl-compatible regular expressions, where the lazy .*? does the job without a character class.
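If grep -P is not available, the range problem can also be avoided with plain ERE by placing - last in the bracket expression so it stays literal. A sketch on a sample of the output above:

```shell
# one line of the flattened nmap output from the question
result='PORT STATE SERVICE 1194/udp open|filtered openvpn Nmap done: 1 IP address (1 host up)'
port=1194
# "-" placed just before "]" is literal, so no locale-dependent range is formed
f_result=$(grep -oE "${port}/udp [a-zA-Z0-9_| -]+Nmap done" <<< "$result")
echo "$f_result"
```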
I wish to suppress the general information for the top command
using a top parameter.
By general information I mean the below stuff :
top - 09:35:05 up 3:26, 2 users, load average: 0.29, 0.22, 0.21
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.3%us, 0.7%sy, 0.0%ni, 96.3%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3840932k total, 2687880k used, 1153052k free, 88380k buffers
Swap: 3998716k total, 0k used, 3998716k free, 987076k cached
What I do not wish to do is :
top -u user | grep process_name
or
top -bp $(pgrep process_name) | do_something
How can I achieve this?
Note: I am on Ubuntu 12.04 and top version is 3.2.8.
Came across this question today. I have a potential solution: create a top configuration file from inside top's interactive mode with the summary area disabled. Since this file is also read when top starts in batch mode, the summary area will be disabled in batch mode too.
Follow these steps to set it up:
Launch top in interactive mode.
Once inside interactive mode, disable the summary area by successively pressing 'l', 'm' and 't'.
Press 'W' (upper case) to write your top configuration file (normally, ~/.toprc)
Exit interactive mode.
Now when you run top in batch mode the summary area will not appear (!)
Taking it one step further...
If you only want this for certain situations and still want the summary area most of the time, you could use an alternate top configuration file. However, AFAIK, the way to get top to use an alternate config file is a bit funky. There are a couple of ways to do this. The approach I use is as follows:
Create a soft-link to the top executable. This does not have to be done as root, as long as you have write access to the link's location...
ln -s /usr/bin/top /home/myusername/bin/omgwtf
Launch top by typing the name of the link ('omgwtf') rather than 'top'. You will be in normal top interactive mode, but when you save the configuration file it will write to ~/.omgwtfrc, leaving ~/.toprc alone.
Disable the summary area and write the configuration file same as before (press 'l', 'm', 't' and 'W')
In the future, when you're ready to run top without summary info in batch mode, you'll have to invoke top via the link name you created. For example,
% omgwtf -usyslog -bn1
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
576 syslog 20 0 264496 8144 1352 S 0.0 0.1 0:03.66 rsyslogd
%
If you're running top in batch mode (-b -n1), just delete the header lines with sed:
top -b -n1 | sed 1,7d
That removes the first 7 header lines that top outputs and returns only the processes.
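The idea can be checked on synthetic input (the fake summary lines below just stand in for real top output):

```shell
# fake batch-mode top output: 7 summary/blank lines, then the process table
sample='top - 09:35:05 up 3:26
Tasks: 1 total
Cpu(s): 2.3%us
Mem: 3840932k total
Swap: 3998716k total


  PID USER
  576 syslog'
# delete the first 7 lines, keeping only the process table
out=$(printf '%s\n' "$sample" | sed 1,7d)
printf '%s\n' "$out"
```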
It's known as the "Summary Area", and I don't think there is a way to disable it at top initialization.
But while top is running, you can disable those by pressing l, t, m.
From man top:
Summary-Area-defaults
'l' - Load Avg/Uptime On (thus program name)
't' - Task/Cpu states On (1+1 lines, see '1')
'm' - Mem/Swap usage On (2 lines worth)
'1' - Single Cpu On (thus 1 line if smp)
This will dump the output and it can be redirected to any file if needed.
top -n1 | grep -Ev "^top |Tasks:|Cpu\(s\):|Swap:|Mem:"
(Note: with -E the parentheses in Cpu(s): must be escaped; unescaped they form a group, so the pattern would match "Cpus:" rather than the literal "Cpu(s):" line.)
To monitor a particular process, the following command works for me:
top -sbn1 -p $(pidof <process_name>) | grep $(pidof <process_name>)
And to get all process information, you can use the following:
top -sbn1|sed -n '/PID/,/^$/p'
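The sed range in that command can be sketched on synthetic input: -n suppresses default printing and /PID/,/^$/p prints from the header line to the next blank line (or to end of input):

```shell
# fake top output: summary, blank line, then the process table
sample='top - 09:35:05 up 3:26
Tasks: 1 total

  PID USER
  576 syslog'
# print only the block starting at the PID header
out=$(printf '%s\n' "$sample" | sed -n '/PID/,/^$/p')
printf '%s\n' "$out"
```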
egrep may be good enough in this case, but I would add that perl -lane could do this kind of thing with lightning speed:
top -b -n 1 | perl -lane '/PID/ and $x=1; $x and print' | head -n10
This way you may forget the precise arguments of grep, sed, awk, etc. for good, and perl is usually at least as fast as those tools.
On a Mac you cannot use -b, which appears in many of the other answers.
In that case the command would be top -n1 -l1 | sed 1,10d
This grabs only the first process line and its header (-n1), logs once instead of running interactively (-l1), and then strips the general information for the top command, which is the first 10 lines.
The goal is to frequently change the default outgoing source IP on a machine with multiple interfaces and live IPs.
I used ip route replace default, as per its documentation, and let a script run in a loop at some interval. It changes the source IP fine for a while, but then all internet access to the machine is lost, and it has to be remotely rebooted from a web interface before anything works again.
Is there anything that could prevent this from working stably? I have tried this on more than one server.
The following is a minimal example:
# extract all currently active source ips except loopback
IPs="$(ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 |
awk '{ print $1}')"
read -a ip_arr <<< "$IPs"
# extract all currently active mac / ethernet addresses
Int="$(ifconfig | grep 'eth'| grep -v 'lo' | awk '{print $1}')"
read -a eth_arr <<< "$Int"
ip_len=${#ip_arr[@]}
eth_len=${#eth_arr[@]}
i=0;
e=0;
while true; do
#ip route replace 0.0.0.0 dev eth0:1 src 192.168.1.18
route_cmd="ip route replace 0.0.0.0 dev ${eth_arr[e]} src ${ip_arr[i]}"
echo $route_cmd
eval $route_cmd
sleep 300
((i++))
((e++))
if [ $i -eq $ip_len ]; then
i=0;
e=0;
echo "all ips exhausted - starting from first again"
# break;
fi
done
I wanted to comment, but since I don't have enough points, it won't let me.
Consider:
Does varying the delay before it runs again change the number of iterations before it fails?
Export the ifconfig and route output every time you change it, to see whether something meaningfully differs over time. Maybe run some basic tests as well (ping, nslookup, etc.); basically, find out what exactly goes wrong. Also log the commands you send (one text file per change?) to see whether they differ after x iterations.
What connectivity is lost? Incoming? Outgoing? Specific applications?
You say you use/do this on other servers without problems?
Are the IPs static (/etc/network/interfaces), bootp/DHCP, or semi-static (a bootp/DHCP server assigning them based on MAC address)? And if served by bootp/DHCP, what is the lease duration?
On the last remark:
bootp/DHCP gives out IPs for a set duration, say 60 minutes. After half that time the client "checks" with the bootp/DHCP server whether it can keep the IP and extends the lease to 60 minutes again; this can mean a small reconfiguration of the interface (maybe even at the same moment your script runs?).
hth
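The logging suggestion above could look something like this (the file locations and the use of the ip tool are assumptions; a sketch, not a hardened script):

```shell
# write one timestamped snapshot of routes and addresses per change
logdir=$(mktemp -d)
logfile="$logdir/net-$(date +%Y%m%d-%H%M%S).log"
{
  echo "== ip route =="
  ip route 2>/dev/null || echo "(ip command unavailable)"
  echo "== ip addr =="
  ip -brief addr 2>/dev/null || echo "(ip command unavailable)"
} > "$logfile"
echo "wrote $logfile"
```

Diffing consecutive snapshots should show whether the routing table actually drifts before connectivity is lost.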
I'm trying to set up a new cookbook for Cassandra, and the cassandra.yaml file has the following comments about optimal settings:
# For workloads with more data than can fit in memory, Cassandra's
# bottleneck will be reads that need to fetch data from
# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
# order to allow the operations to enqueue low enough in the stack
# that the OS and drives can reorder them.
#
# On the other hand, since writes are almost never IO bound, the ideal
# number of "concurrent_writes" is dependent on the number of cores in
# your system; (8 * number_of_cores) is a good rule of thumb.
However, there's no way to predefine the number of cores or the number of disk drives in the attributes, because the deployed servers could have different hardware.
Is it possible to dynamically override the attributes with the deployed hardware settings? I read the Opscode docs and I don't think there is a way to capture the output from
cat /proc/cpuinfo | grep processor | wc -l
I was thinking about something like this:
cookbook-cassandra/recipes/default.rb
cores = command "cat /proc/cpuinfo | grep processor | wc -l"
node.default["cassandra"]["concurrent_reads"] = cores*8
node.default["cassandra"]["concurrent_writes"] = cores*8
cookbook-cassandra/attributes/default.rb
default[:cassandra] = {
...
# determined by 8 * number of cores
:concurrent_reads => 16,
:concurrent_writes => 16,
..
}
You can capture stdout in Chef with mixlib-shellout (documentation here: https://github.com/opscode/mixlib-shellout).
In your example, you could do something like:
cc = Mixlib::ShellOut.new("cat /proc/cpuinfo | grep processor | wc -l")
cores = cc.run_command.stdout.to_i # runs it, gets stdout, converts to integer
I have found a way to do this in recipes, but I haven't yet deployed it to any box to verify it.
num_cores = Integer(`cat /proc/cpuinfo | grep processor | wc -l`)
if ( num_cores > 8 && num_cores != 0 ) # sanity check
node.default["cassandra"]["concurrent_reads"] = (8 * num_cores)
node.default["cassandra"]["concurrent_writes"] = (8 * num_cores)
end
I am using Chef 11, so this may not be available in previous versions, but there is a node['cpu'] attribute with information about the CPUs, cores, etc.
chef > x = nodes.show 'nodename.domain'; true
=> true
chef > x['cpu']['total']
=> 16
And you can use it on your recipes. That's how the Nginx cookbook does it.
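Putting the two answers together, the attribute arithmetic is easy to sanity-check outside Chef with a stubbed node hash (the nested structure mirrors node['cpu']['total']; everything else is made up for illustration):

```ruby
# stand-in for Chef's node object (a real recipe gets this from Ohai)
node = { 'cpu' => { 'total' => '16' } }

# same rule of thumb as the cassandra.yaml comment: 8 * number_of_cores
cores = Integer(node['cpu']['total'])
concurrent_writes = 8 * cores
puts concurrent_writes
```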