Netmiko / textfsm - netmiko

'Hello, I got my information parsed the way I want it, but now I'm trying to save the output
to a .txt file. I'm not sure what to pass to "backup.write()": if I pass the
"output" variable it saves the whole output, not the parsed section.'
connection = ConnectHandler(**cisco_device)
# print('Entering the enable mode...')
# connection.enable()
prompt = connection.find_prompt()
hostname = prompt[0:-1]
print(hostname)

output = connection.send_command('show interfaces status', use_textfsm=True)
for interface in output:
    if interface['status'] == 'notconnect':
        print(f"interface {interface['port']} \n shutdown")
print(hostname)
print('*' * 85)

now = datetime.now()
year = now.year
month = now.month
day = now.day
hour = now.hour
# minute = now.minute

# creating the backup filename (hostname_date_backup.txt)
filename = f'{hostname}_{month}-{day}-{year}_backup.txt'

# writing the backup to the file
with open(filename, 'w') as backup:
    backup.write()

print(f'Backup of {hostname} completed successfully')
print('#' * 30)
print('Closing connection')
connection.disconnect()

My desired result is to run the Cisco IOS command "show interfaces status" and parse the data with the TextFSM module so that it only shows the interfaces that are shut down / not connected.
I tried the same with show ip interface brief because I have no access to a Cisco switch right now. For show interfaces status both methods apply, just with a different output modifier or if condition.
So to get the following output, you can do it in two ways:
1- CLI Output Modifier
show ip interface brief | include down
And the rest is left for TextFSM to parse the output
[{'intf': 'GigabitEthernet2',
  'ipaddr': 'unassigned',
  'proto': 'down',
  'status': 'administratively down'},
 {'intf': 'GigabitEthernet3',
  'ipaddr': '100.1.1.1',
  'proto': 'down',
  'status': 'down'}]
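A minimal sketch of option 1 with Netmiko (the device details are placeholders); note that automatic TextFSM parsing may not match a piped command out of the box (see the index-escaping discussion further down), so the filtered output is taken here as raw text:

from netmiko import ConnectHandler

# Placeholder device details, mirroring the question's setup.
device = {
    "device_type": "cisco_ios",
    "ip": "x.x.x.x",
    "username": "xxxx",
    "password": "xxxx",
}

with ConnectHandler(**device) as conn:
    # The device filters the lines itself; the result comes back as plain text,
    # one line per matching (down) interface.
    raw = conn.send_command("show ip interface brief | include down")

print(raw)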
2- Python
You can get the whole output from show ip interface brief and loop over all parsed interfaces and set an if condition to get the down interfaces only. (Recommended)
# Condition for `show ip interface brief`
down = [
    intf
    for intf in intfs
    if intf["proto"] == "down" or intf["status"] in ("down", "administratively down")
]

# Condition for `show interfaces status`
down = [
    intf
    for intf in intfs
    if intf["status"] == "notconnect"
]
Exporting a List[Dict] to a .txt file doesn't buy you much, since a .txt file gives you no structure or formatting to work with; it's better to export it to a JSON file. A complete example of what you want to achieve can look something like this:
import json
from datetime import date

from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "ip": "x.x.x.x",
    "username": "xxxx",
    "password": "xxxx",
    "secret": "xxxx",
}

with ConnectHandler(**device) as conn:
    print(f'Connected to {device["ip"]}')
    if not conn.check_enable_mode():
        conn.enable()
    hostname = conn.find_prompt()[:-1]
    intfs = conn.send_command(
        command_string="show ip interface brief", use_textfsm=True
    )
print("Connection Terminated")

down = [
    intf
    for intf in intfs
    if intf["proto"] == "down" or intf["status"] in ("down", "administratively down")
]

with open(file=f"{hostname}_down-intfs_{date.today()}.json", mode="w") as f:
    json.dump(obj=down, fp=f, indent=4)
print(f"Completed backup of {hostname} successfully")

# In case you have to export to a text file instead, remember that write()
# needs a string, so serialize the list first:
# with open(file=f"{hostname}_down-intfs_{date.today()}.txt", mode="w") as f:
#     f.write(json.dumps(down, indent=4))
# print(f"Completed backup of {hostname} successfully")

Related

How to pass ip-address from terraform to ansible [duplicate]

I am trying to create an Ansible inventory file using the local_file resource in Terraform (I am open to suggestions to do it in a different way).
module "vm" config:
resource "azurerm_linux_virtual_machine" "vm" {
for_each = { for edit in local.vm : edit.name => edit }
name = each.value.name
resource_group_name = var.vm_rg
location = var.vm_location
size = each.value.size
admin_username = var.vm_username
admin_password = var.vm_password
disable_password_authentication = false
network_interface_ids = [azurerm_network_interface.edit_seat_nic[each.key].id]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
output "vm_ips" {
value = toset([
for vm_ips in azurerm_linux_virtual_machine.vm : vm_ips.private_ip_address
])
}
When I run terraform plan with the above configuration I get:
Changes to Outputs:
  + test = [
      + "10.1.0.4",
    ]
Now, in my main TF I have the configuration for local_file as follows:
resource "local_file" "ansible_inventory" {
filename = "./ansible_inventory/ansible_inventory.ini"
content = <<EOF
[vm]
${module.vm.vm_ips}
EOF
}
This returns the error below:
Error: Invalid template interpolation value

  on main.tf line 92, in resource "local_file" "ansible_inventory":
  90:   content = <<EOF
  91: [vm]
  92: ${module.vm.vm_ips}
  93: EOF

module.vm.vm_ips is set of string with 1 element
Cannot include the given value in a string template: string required.
Any suggestion on how to inject the list of IPs from the output into the local file while also being able to format the rest of the text in the file?
If you want the Ansible inventory to be statically sourced from a file in INI format, then you basically need to render a template in Terraform to produce the desired output.
module/templates/inventory.tmpl:
[vm]
%{ for ip in ips ~}
${ip}
%{ endfor ~}
Alternative suggestion from @mdaniel:
[vm]
${join("\n", ips)}
module/config.tf:
resource "local_file" "ansible_inventory" {
content = templatefile("${path.module}/templates/inventory.tmpl",
{ ips = module.vm.vm_ips }
)
filename = "${path.module}/ansible_inventory/ansible_inventory.ini"
file_permission = "0644"
}
A couple of additional notes though:
You can modify your output to be the entire map of objects of exported attributes like:
output "vms" {
value = azurerm_linux_virtual_machine.vm
}
and then you can access more information about the instances to populate in your inventory. Your templatefile argument would still be the module output, but the for expression(s) in the template would look considerably different depending upon what you want to add.
You can also utilize the YAML or JSON inventory formats for Ansible static inventory. With those, you can then leverage the yamlencode or jsonencode Terraform functions to make the transformation from the HCL2 data structure much easier. The template file would become a good bit cleaner in that situation for more complex inventories.

netmiko connections from device list is only connecting to first ip in list

I'm using:
(base) C:\>python
Python 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
My script (created on Wed Jun 16 10:15:59 2021):
import logging

from netmiko import ConnectHandler

logging.basicConfig(filename='test2.log', level=logging.DEBUG)
logger = logging.getLogger("netmiko")

with open('host.txt', "r") as host:
    for ip in host.read().splitlines():
        cisco = {
            'device_type': 'cisco_ios',
            'ip': ip,
            'username': 'user',
            'password': 'password',
        }

net_connect = ConnectHandler(**cisco)
print(' #### Connecting to ' + ip)
output = net_connect.find_prompt()
print(output)
net_connect.disconnect()
You need to create a for loop. Netmiko accepts one dictionary within the ConnectHandler class. So to make the code run "for each" device, you have to create a loop.
Also, in the for loop you created to read the IP addresses from hosts.txt, you keep overwriting the cisco dict every time in the loop. cisco = {} overwrites the previous value each time. New values should be appended to a list instead.
You can achieve this by doing:
from netmiko import ConnectHandler
import logging

logging.basicConfig(filename="test2.log", level=logging.DEBUG)
logger = logging.getLogger("netmiko")

with open(file="hosts.txt", mode="r") as hosts:
    # A list comprehension
    devices = [
        {
            "device_type": "cisco_ios",
            "ip": ip,
            "username": "cisco",
            "password": "cisco",
        }
        for ip in hosts.read().splitlines()
    ]

print(devices)  # <--- print value is below

# Connect to each device (one at a time)
for device in devices:
    print(f'Connecting to {device["ip"]}')  # Here you are still trying to connect
    net_connect = ConnectHandler(**device)
    print(f'Connected to {device["ip"]}')  # Here you are already connected
    prompt = net_connect.find_prompt()
    net_connect.disconnect()  # disconnect from the session
    # Finally, print the prompt within the for loop, but
    # after you disconnect. You no longer need the connection to print.
    print(prompt)
You can forget about net_connect.disconnect() by using a with statement (context manager), which disconnects for you when the block exits. It's important to clear the vty line after you are done:
for device in devices:
    print(f'Connecting to {device["ip"]}')  # Here you are still waiting to connect
    with ConnectHandler(**device) as net_connect:
        print(f'Connected to {device["ip"]}')  # Here you are already logged in
        prompt = net_connect.find_prompt()
    print(prompt)
If you print the devices list, you will get:
[{'device_type': 'cisco_ios',
  'ip': '192.168.1.1',  # From hosts.txt file
  'password': 'cisco',
  'username': 'cisco'},
 {'device_type': 'cisco_ios',
  'ip': '192.168.1.2',  # From hosts.txt file
  'password': 'cisco',
  'username': 'cisco'}]
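If you need to keep the results instead of just printing them, a minimal sketch is to collect each prompt in a dictionary keyed by the device IP (reusing the devices list built above):

# Sketch: gather the prompt of every device into one dict.
results = {}
for device in devices:
    with ConnectHandler(**device) as net_connect:
        results[device["ip"]] = net_connect.find_prompt()
print(results)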

TextFSM Template for Netmiko for "inc" phrase

I am trying to create a TextFSM template to use with the Netmiko library. While it works for most commands, it does not work when I try performing an "inc" operation on the network device. The TextFSM index file does not seem to recognize the same command for two different templates; for instance:
If I give the command - show running | inc syscontact
and give another command - show running | inc syslocation
in the TextFSM index, only the first command seems to be recognized, not the second.
I understand that I can get the necessary data with regex expressions for syscontact and syslocation in the templates themselves; however, I want to achieve this with the "inc" filter on the device itself. Is there a way this can be done?
You need to escape the pipe in the index file, e.g. sh[[ow]] ru[[nning]] \| inc syslocation
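Assuming a custom templates directory (the path and template file names below are hypothetical), you can point Netmiko at it with the NET_TEXTFSM environment variable and keep one escaped index entry per piped command; a rough sketch:

import os

from netmiko import ConnectHandler

# Hypothetical custom template directory. Its `index` file would contain one entry
# per piped command, with the pipe escaped, e.g.:
#   show_run_syscontact.textfsm, .*, cisco_ios, sh[[ow]] ru[[nning]] \| inc syscontact
#   show_run_syslocation.textfsm, .*, cisco_ios, sh[[ow]] ru[[nning]] \| inc syslocation
os.environ["NET_TEXTFSM"] = "/path/to/custom/templates"

device = {
    "device_type": "cisco_ios",
    "ip": "x.x.x.x",
    "username": "xxxx",
    "password": "xxxx",
}

with ConnectHandler(**device) as conn:
    parsed = conn.send_command("show running | inc syslocation", use_textfsm=True)
    print(parsed)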
There is a different way to parse the data you want, using the TTP module. You can take the code I wrote below as an example and create your own templates.
from ttp import ttp

with open("showSystemInformation.txt") as f:
    data_to_parse = f.read()

ttp_template = """
<group name="Show_System_Information">
System Name : {{System_Name}}
System Type : {{System_Type}} {{System_Type_2}}
System Version : {{Version}}
System Up Time : {{System_Uptime_Days}} days, {{System_Uptime_HR_MIN_SEC}} (hr:min:sec)
Last Saved Config : {{Last_Saved_Config}}
Time Last Saved : {{Last_Time_Saved_Date}} {{Last_Time_Saved_HR_MIN_SEC}}
Time Last Modified : {{Last_Time_Modified_Date}} {{Last_Time_Modifed_HR_MIN_SEC}}
</group>
"""

parser = ttp(data=data_to_parse, template=ttp_template)
parser.parse()

# print the result in JSON format
results = parser.result(format='json')[0]
print(results)
Example run:
[appadmin@ryugbz01 Nokia]$ python3 showSystemInformation.py
[
    {
        "Show_System_Information": {
            "Last_Saved_Config": "cf3:\\config.cfg",
            "Last_Time_Modifed_HR_MIN_SEC": "11:46:57",
            "Last_Time_Modified_Date": "2022/02/09",
            "Last_Time_Saved_Date": "2022/02/07",
            "Last_Time_Saved_HR_MIN_SEC": "15:55:39",
            "System_Name": "SR7-2",
            "System_Type": "7750",
            "System_Type_2": "SR-7",
            "System_Uptime_Days": "17",
            "System_Uptime_HR_MIN_SEC": "05:24:44.72",
            "Version": "C-16.0.R9"
        }
    }
]
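If you need the values back as Python objects rather than a JSON string, a small follow-up sketch (reusing the results string produced above):

import json

# `results` is the JSON string returned by parser.result(format='json')[0] above.
parsed = json.loads(results)
info = parsed[0]["Show_System_Information"]
print(info["System_Name"], info["Version"])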

Why does puppet think my custom fact is a string?

I am trying to create a custom fact I can use as the value for a class parameter in a hiera yaml file.
I am using the openstack/puppet-keystone module and I want to use fernet-keys.
According to the comments in the module I can use this parameter.
# [*fernet_keys*]
#   (Optional) Hash of Keystone fernet keys
#   If you enable this parameter, make sure enable_fernet_setup is set to True.
#   Example of valid value:
#   fernet_keys:
#     /etc/keystone/fernet-keys/0:
#       content: c_aJfy6At9y-toNS9SF1NQMTSkSzQ-OBYeYulTqKsWU=
#     /etc/keystone/fernet-keys/1:
#       content: zx0hNG7CStxFz5KXZRsf7sE4lju0dLYvXdGDIKGcd7k=
#   Puppet will create a file per key in $fernet_key_repository.
#   Note: defaults to false so keystone-manage fernet_setup will be executed.
#   Otherwise Puppet will manage keys with File resource.
#   Defaults to false
So I wrote this custom fact ...
[root@puppetmaster modules]# cat keystone_fernet/lib/facter/fernet_keys.rb
Facter.add(:fernet_keys) do
  setcode do
    fernet_keys = {}
    puts('Debug keyrepo is /etc/keystone/fernet-keys')
    Dir.glob('/etc/keystone/fernet-keys/*').each do |fernet_file|
      data = File.read(fernet_file)
      if data
        content = {}
        puts("Debug Key file #{fernet_file} contains #{data}")
        fernet_keys[fernet_file] = { 'content' => data }
      end
    end
    fernet_keys
  end
end
Then in my keystone.yaml file I have this line:
keystone::fernet_keys: '%{::fernet_keys}'
But when I run puppet agent -t on my node I get this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, "{\"/etc/keystone/fernet-keys/1\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}, \"/etc/keystone/fernet-keys/0\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}}" is not a Hash. It looks to be a String at /etc/puppetlabs/code/environments/production/modules/keystone/manifests/init.pp:1144:7 on node mgmt-01
I had assumed that I had formatted the hash correctly because facter -p fernet_keys output this on the agent:
{
  /etc/keystone/fernet-keys/1 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  },
  /etc/keystone/fernet-keys/0 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  }
}
The code in the keystone module looks like this (with line numbers)
1142
1143   if $fernet_keys {
1144     validate_hash($fernet_keys)
1145     create_resources('file', $fernet_keys, {
1146       'owner'     => $keystone_user,
1147       'group'     => $keystone_group,
1148       'subscribe' => 'Anchor[keystone::install::end]',
1149       }
1150     )
1151   } else {
Puppet does not necessarily think your fact value is a string; it might, if the client is set to stringify facts, but that's actually beside the point. The bottom line is that Hiera interpolation tokens don't work the way you think. Specifically:
Hiera can interpolate values of any of Puppet’s data types, but the
value will be converted to a string.
(Emphasis added.)

Add tag while creating EBS snapshot using boto3

Is it possible to add a tag when invoking the create_snapshot() method in boto3? When I run the following code:
client = boto3.client('ec2')
root_snap_resp = client.create_snapshot(
    Description='My snapshot description',
    VolumeId='vol-123456',
    Tags=[{'Key': 'Test_Key', 'Value': 'Test_Value'}]
)
I get the following error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "Tags", must be one of: DryRun, VolumeId, Description
Is the only way to add a tag after the fact using the create_tags() method?
In April 2018, the original answer (and the question itself) were made obsolete...
You can now specify tags for EBS snapshots as part of the API call that creates the resource or via the Amazon EC2 Console when creating an EBS snapshot.
https://aws.amazon.com/blogs/compute/tag-amazon-ebs-snapshots-on-creation-and-implement-stronger-security-policies/
...unless you are using an older version of an SDK that does not implement the feature.
The same announcement extended resource-level permissions to snapshots.
(The original answer follows.) The underlying CreateSnapshot action in the EC2 API doesn't have any provision for adding tags simultaneously with the creation of the snapshot. You have to go back and tag it after creating it.
ec2 = boto3.resource('ec2')
volume = ec2.Volume('vol-xxxxxxxxxx')

snapshot = ec2.create_snapshot(
    VolumeId=volume.id,
    TagSpecifications=[
        {
            'ResourceType': 'snapshot',
            'Tags': volume.tags,
        },
    ],
    Description='Snapshot of volume ({})'.format(volume.id),
)
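The same works with the low-level client used in the question; a minimal sketch (assuming a boto3/botocore version recent enough to support TagSpecifications on CreateSnapshot):

import boto3

client = boto3.client('ec2')
# TagSpecifications (not Tags) is the parameter CreateSnapshot accepts for tagging on creation.
root_snap_resp = client.create_snapshot(
    Description='My snapshot description',
    VolumeId='vol-123456',  # placeholder volume ID from the question
    TagSpecifications=[
        {
            'ResourceType': 'snapshot',
            'Tags': [{'Key': 'Test_Key', 'Value': 'Test_Value'}],
        },
    ],
)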
@fender4645 You can now specify tags for EBS snapshots as part of the API call that creates the resource.
Have a look at my backup script:
import boto3
import collections
import datetime

ec = boto3.client('ec2')


def lambda_handler(event, context):
    reservations = ec.describe_instances(
        Filters=[
            {'Name': 'tag:Backup', 'Values': ['Yes', 'yes']}
        ]
    ).get(
        'Reservations', []
    )
    instances = sum(
        [
            [i for i in r['Instances']]
            for r in reservations
        ], [])
    print("Found %d instances that need backing up" % len(instances))
    to_tag = collections.defaultdict(list)
    for instance in instances:
        try:
            retention_days = [
                int(t.get('Value')) for t in instance['Tags']
                if t['Key'] == 'Retention'][0]
        except IndexError:
            retention_days = 30
        for dev in instance['BlockDeviceMappings']:
            if dev.get('Ebs', None) is None:
                continue
            vol_id = dev['Ebs']['VolumeId']
            print("Found EBS volume %s on instance %s" % (
                vol_id, instance['InstanceId']))
            snap = ec.create_snapshot(
                VolumeId=vol_id,
            )
            to_tag[retention_days].append(snap['SnapshotId'])
            print("Retaining snapshot %s of volume %s from instance %s for %d days" % (
                snap['SnapshotId'],
                vol_id,
                instance['InstanceId'],
                retention_days,
            ))
            snapshot_name = 'N/A'
            if 'Tags' in instance:
                for tags in instance['Tags']:
                    if tags["Key"] == 'Name':
                        snapshot_name = tags["Value"]
            print("Tagging snapshot with Name: %s" % snapshot_name)
            ec.create_tags(
                Resources=[
                    snap['SnapshotId'],
                ],
                Tags=[
                    {'Key': 'Name', 'Value': snapshot_name},
                    {'Key': 'Description', 'Value': "Created by lambda automated backups"}
                ]
            )
    for retention_days in to_tag.keys():
        delete_date = datetime.date.today() + datetime.timedelta(days=retention_days)
        delete_fmt = delete_date.strftime('%Y-%m-%d')
        print("Will delete %d snapshots on %s" % (len(to_tag[retention_days]), delete_fmt))
        ec.create_tags(
            Resources=to_tag[retention_days],
            Tags=[
                {'Key': 'DeleteOn', 'Value': delete_fmt}
            ]
        )
And this is my script to delete old backups that have a "DeleteOn" tag with a value of today's date in the YYYY-MM-DD format:
import boto3
import re
import datetime

ec = boto3.client('ec2')
iam = boto3.client('iam')

"""
This function looks at *all* snapshots that have a "DeleteOn" tag containing
the current day formatted as YYYY-MM-DD. This function should be run at least
daily.
"""


def lambda_handler(event, context):
    account_ids = list()
    try:
        """
        You can replace this try/except by filling in `account_ids` yourself.
        Get your account ID with:
        > import boto3
        > iam = boto3.client('iam')
        > print(iam.get_user()['User']['Arn'].split(':')[4])
        """
        iam.get_user()
    except Exception as e:
        # use the exception message to get the account ID the function executes under
        account_ids.append(re.search(r'(arn:aws:sts::)([0-9]+)', str(e)).groups()[1])

    delete_on = datetime.date.today().strftime('%Y-%m-%d')
    filters = [
        {'Name': 'tag-key', 'Values': ['DeleteOn']},
        {'Name': 'tag-value', 'Values': [delete_on]},
    ]
    snapshot_response = ec.describe_snapshots(OwnerIds=account_ids, Filters=filters)

    for snap in snapshot_response['Snapshots']:
        print("Deleting snapshot %s" % snap['SnapshotId'])
        ec.delete_snapshot(SnapshotId=snap['SnapshotId'])
