Bash script to extract XML data into column format

I'm trying to extract XML data from multiple string outputs dynamically (the data changes) into column format.
About 100 of these XML snippets are echoed out when I run a query against a SQL database:
<?xml version="1.0"?>
<Connection>
<ConnectionType>Putty</ConnectionType>
<CreatedBy>Someone</CreatedBy>
<CreationDateTime>2014-10-27T11:53:59.8993492-04:00</CreationDateTime>
<Events>
<OpenCommentPrompt>true</OpenCommentPrompt>
<WarnIfAlreadyOpened>true</WarnIfAlreadyOpened>
</Events>
<Group>Cloud Services Client Delivery\Willis\Linux\Test - SJC</Group>
<ID>77e96d52-f165-482f-8389-ffb95b9d8ccd</ID>
<KeyboardHook>InFullScreenMode</KeyboardHook>
<MetaInformation />
<Name>Hostname-H-A10D</Name>
<OpenEmbedded>true</OpenEmbedded>
<PinEmbeddedMode>False</PinEmbeddedMode>
<Putty>
<PortFowardingArray />
<Scripting />
<SessionHost>10.0.0.100</SessionHost>
<SessionName>10.0.0.100</SessionName>
<TelnetEncoding>IBM437</TelnetEncoding>
</Putty>
<ScreenColor>C24Bits</ScreenColor>
<SoundHook>DoNotPlay</SoundHook>
<Stamp>771324d1-0c59-4f12-b81e-96edb5185ef7</Stamp>
</Connection>
And what I need is the <Name> and <SessionHost> values in column format. Essentially, where the hostname equals Hostname-H-A10D, I want to match the D at the end and mark the first column with Dev, Q as Test, and no letter at the end as Prod. So the output would look like:
Dev Hostname-H-A10D 10.0.0.100
Dev Hostname-H-A11D 10.0.0.101
Prod Hostname-H-A12 10.0.0.201
Test Hostname-H-A13Q 10.0.0.10
I have played around with sed/awk/etc. and just cannot get the format I want without writing out temp flat files. I would prefer to get this into an array using something like xmlstarlet or xmllint. Of course, better suggestions are welcome, and that is why I am here :) Thanks, folks.

It would be better to use an XML parser.
Using awk:
$ awk -F'[<>]' 'BEGIN{a["D"]="Dev";a["Q"]="Test"} /Name/{name=$3; type=a[substr(name,length(name))]; if (length(type)==0) type="Prod";} /SessionHost/{print type, name, $3;}' s.xml
Dev Hostname-H-A10D 10.0.0.100
How it works
BEGIN{a["D"]="Dev";a["Q"]="Test"}
This defines the associative array a, which maps a trailing letter to a host type.
/Name/{name=$3; type=a[substr(name,length(name))]; if (length(type)==0) type="Prod";}
On the line that has the host name, this captures the host name and, from it, determines the host type.
/SessionHost/{print type, name, $3;}
On the line that contains the host IP, this prints the type, name, and IP.
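Since the question asks for the results in an array, here is a sketch of loading the awk output into a bash array with mapfile (bash 4+). The XML here is a trimmed-down, made-up two-host sample; note also that matching the exact tag name in $2 (instead of the regex /Name/) avoids accidentally firing on the <SessionName> line too:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
# Trimmed sample standing in for the real query output.
cat > conns.xml <<'EOF'
<?xml version="1.0"?>
<Connection>
<Name>Hostname-H-A10D</Name>
<Putty>
<SessionHost>10.0.0.100</SessionHost>
</Putty>
</Connection>
<?xml version="1.0"?>
<Connection>
<Name>Hostname-H-A12</Name>
<Putty>
<SessionHost>10.0.0.201</SessionHost>
</Putty>
</Connection>
EOF
# Each array element is one "type name ip" row.
mapfile -t rows < <(awk -F'[<>]' '
  BEGIN { a["D"]="Dev"; a["Q"]="Test" }
  $2 == "Name"        { name=$3; type=a[substr(name, length(name))]; if (type == "") type="Prod" }
  $2 == "SessionHost" { print type, name, $3 }
' conns.xml)
printf '%s\n' "${rows[@]}"
# -> Dev Hostname-H-A10D 10.0.0.100
# -> Prod Hostname-H-A12 10.0.0.201
```

This is still line-oriented, so it assumes one tag per line as in the sample; a real XML parser is safer if the formatting can vary.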

The XML file has no parameter saying whether the host is Dev, Prod, or Test.
But from the above XML file you can get the names in the following way:
$cat test.xml |grep Name |awk -F '[<,>]' '{print $3}' |xargs
Hostname-H-A10D 10.0.0.100


Extract the lines using sed or awk and save them in file

Dear Stack Overflow community,
I am trying to grab a value, part of a string, or certain lines.
kubeadm init prints two kubeadm join commands.
I want to extract the first one and save it in a file, and similarly extract the second one and save it in another file.
Below is the text that I am trying to extract from the file:
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 10.0.0.0:6443 --token jh88qi.uch1l58ri160bve1 \
--discovery-token-ca-cert-hash sha256:f9c9ab441d913fec7d157c20f1c5e93c496123456ac4ec14ca8e02ab7f916d7fb \
--control-plane --certificate-key 179e288571e33d3d68f5691b6d8e7cefa4657550fc0886856a52e2431hjkl7155
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.0:6443 --token jh88qi.uch1l58ri160bve1 \
--discovery-token-ca-cert-hash sha256:f9c9ab441d913fec7d157c20f1c5e93c496123456ac4ec14ca8e02ab7f916d7fb
Goal -
Extract both kubeadm join commands and save them in different files for automation.
Commands Used till now -
sed -ne '/--control-plane --certificate-key/p' token
With the above command, I want to extract the value if I can and save it in a file.
The other command -
awk '/kubeadm join/{x=NR+2}(NR<=x){print}' token
token is the filename
You didn't show the expected output, so it's a bit of a guess, but this:
awk -v RS= '/^ *kubeadm join/{print > ("out"NR); close("out"NR)}' file
should do what I think you want given the input you provided.
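A quick way to see what that one-liner does, using a cut-down stand-in for the kubeadm output (the token values are placeholders): RS= puts awk in paragraph mode, so each blank-line-separated block is one record, and NR numbers the paragraphs, which is why the output file names depend on where each join command sits in the file.

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
cat > token <<'EOF'
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.0.0.0:6443 --token TOKEN \
    --control-plane --certificate-key KEY

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.0:6443 --token TOKEN
EOF
# Paragraphs 2 and 4 start with "kubeadm join", so out2 and out4 are written;
# the multi-line command stays together because the whole paragraph is one record.
awk -v RS= '/^ *kubeadm join/{print > ("out" NR); close("out" NR)}' token
ls out*
```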

How can I use different CSVs for my JMeter script on different instances?

We have 20 workers on AWS and I want to parameterize the CSV file name for each instance. Please help.
I have divided my CSV by the number of load generator hosts:
$ wc -l "youroriginalcsv.csv"   # total number of rows in the CSV
$ split -l <rows-per-host> "youroriginalcsv.csv"   # splits the CSV into files named xaa, xab, ...
where <rows-per-host> is the row count above divided by the number of hosts.
Transfer each unique CSV to all available hosts
$ scp xaa host1_user@host1_ip:/csvpath/csvfile.csv
$ scp xab host2_user@host2_ip:/csvpath/csvfile.csv
$ scp xaz hostN_user@hostN_ip:/csvpath/csvfile.csv
Now I want to use a specific file name for a specific host.
What do you mean by "specific file name for specific host"? Your CSV files are all named csvfile.csv so it's sufficient to specify /csvpath/csvfile.csv in the CSV Data Set Config and each JMeter slave will pick up its own file containing partial data from the "big" CSV file.
If you want to use different names for the CSV files depending on the machine IP address or DNS hostname, go for a combination of the If Controller with the __machineName() or __machineIP() function.
Also, if you don't want the same data to be re-used by different JMeter slaves, you can consider using the Redis Data Set Config or the HTTP Simple Table Server; this way you won't have to split and copy CSV files and will be able to manage your test data centrally from a single location.
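As an aside, the split step from the question can be written with explicit shell arithmetic rather than a placeholder division; the row count, host count, and file names below are made-up examples:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
hosts=20
seq 1 95 > youroriginalcsv.csv               # stand-in for the real CSV
total=$(wc -l < youroriginalcsv.csv)
per_host=$(( (total + hosts - 1) / hosts ))  # ceiling division: 95/20 -> 5 rows per file
split -l "$per_host" youroriginalcsv.csv     # -> xaa, xab, ...
ls x?? | wc -l
```

Rounding up means every file except possibly the last holds exactly per_host rows, so no host's file comes out empty-handed from truncating division.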

Chef - create test data bags at Compile Time

In my chef recipe, I am basically decrypting a couple of data bags:
1. test.json
2. sample.json
The data obtained after decryption will then be used to create files on my kitchen node.
test.json and sample.json are encrypted with a secret key I have (test.json was generated from test.txt, and sample.json from sample.txt, both plaintext files) by a script called gendatabags.rb, which creates these files and puts them in their respective places. Note that gendatabags.rb takes the secret key path and the input file path as parameters.
Now, as I want to integration-test this flow, I am looking to use a test secret key I've generated, and to provide test versions of both test.txt and sample.txt that contain some dummy strings. The catch is that I'd like to run this script automatically at compile time of my recipe. Can someone please provide some info on how to achieve this?
Thank you!
I strongly wouldn't recommend this. Technically you could do it with the execute resource, but you'd have all sorts of timing issues and it would defeat the purpose of having the encrypted data bag anyway.
Now, if you're just trying to test with a dummy encrypted data bag, that is easy. Make a data bag as normal, but with the addition of the -z switch:
knife data bag create <data bag name> -z
knife data bag from file <data bag name> <path to .json file> --secret-file <path to encryption key file> -z
This will make a local directory with the name of your data bag and place the encrypted data bag item inside of it, with the name of the "id" value of the json file.
-z defaults to putting the data bag and items in /users//data_bags
From there you can edit your .kitchen.yml to point at both your data bag and secret key, like so:
suites:
  - name: default
    run_list:
    data_bags_path: <path to data_bags dir>
    encrypted_data_bag_secret_key_path: <path to secret_file>
and if you have multiple suites using the same data_bags path, you can move the declarations up to the provisioner:
provisioner:
  name: chef_zero
  data_bags_path: <path to data_bags dir>
  encrypted_data_bag_secret_key_path: <path to secret_file>
Hope this helps.

Bash script to audit Cisco configuration

I'm currently writing a script to generate a report from a Cisco configuration for audit purposes. Using grep, I was able to capture the global configuration successfully.
But the challenge is doing it per interface. For example, I want to know which interfaces have the lines 'no ip redirects', 'no ip unreachables', etc. How can I accomplish this in bash?
Thank you in advance!
This cannot be done easily with grep, but awk can handle it:
cat file
!
interface GigabitEthernet0/13
description Server_32_main
spanning-tree portfast
no ip redirects
!
interface GigabitEthernet0/14
description Server_32_log
switchport access vlan 666
spanning-tree portfast
!
interface GigabitEthernet0/15
description UPS_20
spanning-tree portfast
!
As you see, each group is separated by !, so we use that to separate each record.
To get only interface name do like this:
awk -v RS="!" -F"\n" '/no ip redirects/ {print $2}' file
interface GigabitEthernet0/13
To get interface config do:
awk -v RS="!" '/no ip redirects/' file
interface GigabitEthernet0/13
description Server_32_main
spanning-tree portfast
no ip redirects
To get more patterns in one go:
awk -v RS="!" '/no ip redirects/ || /no ip unreachables/' file
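For auditing, the inverse is often what you actually want (an assumption about the goal): interfaces that are missing a directive. It's the same "!"-as-record-separator trick with the match negated, shown here against the sample config from above:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
cat > file <<'EOF'
!
interface GigabitEthernet0/13
description Server_32_main
spanning-tree portfast
no ip redirects
!
interface GigabitEthernet0/14
description Server_32_log
switchport access vlan 666
spanning-tree portfast
!
interface GigabitEthernet0/15
description UPS_20
spanning-tree portfast
!
EOF
# $2 is the first line of each record; anchoring on it also skips the empty
# records before the first "!" and after the last one.
missing=$(awk -v RS='!' -F'\n' '$2 ~ /^interface/ && !/no ip redirects/ {print $2}' file)
printf '%s\n' "$missing"
# -> interface GigabitEthernet0/14
# -> interface GigabitEthernet0/15
```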

Use multiple column variables in bash script to pull output from routers

I have a script named routerauto that logs on to routers and pulls output. I would like to use data from a text file to automatically populate the commands required to pull info from a large number of routers.
Ultimately I would like the script to move through each line of the text file, filling in the gaps with the values from the columns, as below. The text file uses tab as the separator.
routerauto VARIABLE1 "sh service id VARIABLE2 sap VARIABLE4 detail"
Example data:
hostnamei serv-id cct sap
london-officei 123456 No987654321 8/1/4:100
Example output:
routerauto london-office "sh service id 123456 sap 8/1/4:100 detail"
Here is a bash-only solution:
#!/bin/bash
while read hostnamei servid cct sap; do
echo routerauto $hostnamei \"sh service id $servid sap $sap detail\"
done < <(tail -n +2 sample.data)
Producing given your sample file:
routerauto london-officei "sh service id 123456 sap 8/1/4:100 detail"
Please note this assumes no spaces are allowed in your various data fields.
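Since the file is stated to be tab-separated, a variant that sets IFS to a literal tab lifts that restriction and lets fields themselves contain spaces. The file name and sample row below just follow the question's example:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
# Recreate the sample data file, tab-separated, with a header row.
printf 'hostnamei\tserv-id\tcct\tsap\n'                   >  sample.data
printf 'london-officei\t123456\tNo987654321\t8/1/4:100\n' >> sample.data
# IFS=$'\t' splits on tabs only, so a field like "London Office" would survive.
out=$(tail -n +2 sample.data | while IFS=$'\t' read -r host servid cct sap; do
  printf 'routerauto %s "sh service id %s sap %s detail"\n' "$host" "$servid" "$sap"
done)
printf '%s\n' "$out"
# -> routerauto london-officei "sh service id 123456 sap 8/1/4:100 detail"
```

Using printf with %s placeholders also avoids the escaped-quote juggling in the echo version.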
