How can I generate traffic from a trace file in Mininet?

I have a trace file named dec-pkt which has 6 columns, as follows:
1. Timestamp of packet arrival. For the first packet in the trace, this is the raw tcpdump timestamp. For the remaining packets, this is the offset from the integer part of that first timestamp. For example, if the first timestamp is 187.2, the second is 188.9, and the third is 191.3, then the first three timestamps in the ASCII file will be 187.2, 1.9 (= 188.9 - 187), and 4.3 (= 191.3 - 187). Note that sanitize-syn-fin uses as its base time the arrival of the first TCP packet in the file, not the first TCP SYN/FIN/RST packet (this helps when comparing sanitize-syn-fin times with those produced by sanitize-tcp).
2. (Renumbered) source host.
3. (Renumbered) destination host. Note that the renumbering process loses any IP network information.
4. Source TCP port.
5. Destination TCP port.
6. Number of data bytes in the packet, or 0 if none (this can happen for packets that only ack data sent by the other side).
So how can I generate this traffic from the file? Can iperf do that? If not, how can I do it?

You can generate traffic easily with Scapy. It comes preinstalled on the Mininet VM, which you can download from the official Mininet website, and it can generate both TCP and UDP packets.
Here is an example Python script. You can find more examples on GitHub or in Scapy's official tutorial.
import sys
import getopt
import time
from os import popen
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import sendp, IP, UDP, Ether, TCP
from random import randrange

def sourceIPgen():
    not_valid = [10, 127, 254, 1, 2, 169, 172, 192]
    first = randrange(1, 256)
    while first in not_valid:
        first = randrange(1, 256)
    ip = ".".join([str(first), str(randrange(1, 256)), str(randrange(1, 256)), str(randrange(1, 256))])
    return ip

def gendest(start, end):
    first = 10
    second = 0; third = 0
    ip = ".".join([str(first), str(second), str(third), str(randrange(start, end))])
    return ip

def main(argv):
    print argv
    try:
        opts, args = getopt.getopt(sys.argv[1:], 's:e:', ['start=', 'end='])
    except getopt.GetoptError:
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-s':
            start = int(arg)
        elif opt == '-e':
            end = int(arg)
    if start == '':
        sys.exit()
    if end == '':
        sys.exit()
    interface = popen('ifconfig | awk \'/eth0/ {print $1}\'').read()
    for i in xrange(1000):
        packets = Ether()/IP(dst=gendest(start, end), src=sourceIPgen())/UDP(dport=80, sport=2)
        print(repr(packets))
        sendp(packets, iface=interface.rstrip(), inter=0.1)

if __name__ == '__main__':
    main(sys.argv)
Reference
You can call it like this:
python ./launchTraffic -s 2 -e 28
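That script only generates random UDP traffic, though. If you actually want to replay your dec-pkt trace (inter-arrival times, ports, payload sizes), here is a rough sketch. The 10.0.0.x host mapping and the simplified handling of the first timestamp are my own assumptions, so adapt them to your Mininet topology:

import time
from scapy.all import IP, TCP, Raw, send

def replay_trace(path):
    """Replay a dec-pkt style ASCII trace: timestamp, src, dst, sport, dport, bytes."""
    prev_offset = None
    with open(path) as f:
        for line in f:
            ts, src, dst, sport, dport, nbytes = line.split()
            offset = float(ts)
            # Per the trace format, the first line carries the raw tcpdump
            # timestamp while later lines are offsets from its integer part;
            # for this simple sketch we only sleep between consecutive offsets.
            if prev_offset is not None and offset > prev_offset:
                time.sleep(offset - prev_offset)
            prev_offset = offset
            # Map the renumbered host IDs onto a private /24 (assumes IDs < 255)
            pkt = (IP(src="10.0.0.%s" % src, dst="10.0.0.%s" % dst)
                   / TCP(sport=int(sport), dport=int(dport))
                   / Raw(b"x" * int(nbytes)))
            send(pkt, verbose=False)

replay_trace("dec-pkt")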

Related

How to send data in a serial port using button on Python tkinter?

I define a function to be used by a button (tkinter) and send a string over the port.
But every time it shows that the "port is closed". Here is the code:
def myClick2(): # defines what the button should output when clicked and also makes an entry in the text box
    string = windspeeddata()
    button_rx.configure("state") == 'disabled'
    button_tx.configure("state") == 'normal'
    try:
        serialPort = serial.Serial(port=portno(), baudrate=baudval(), bytesize=8, timeout=None, stopbits=serial.STOPBITS_ONE)
        serialPort.isOpen()
        while True:
            windspeedstring = string + '\r\n'
            windspeed_encode = str.encode(windspeedstring)
            serialPort.write(windspeed_encode)
            # if button_rx.cget("state") == 'disabled' and button_tx.cget("state") == 'normal':
            #     f.writelines(winddirectionstring)
            #     serialPort.write(windspeed_encode)
            # else:
            #     serialPort.open()
    except IOError:
        print('Serial port is closed')
    if running:
        root.after(1000, myClick2)  # calls the function again every 1000 milliseconds
I am trying to send a string over the port. The conditions are:
If the port is closed, it should be opened.
If it is already open, it should send (or close the port and then send).
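One way to satisfy both conditions, as a rough sketch only (it reuses portno(), baudval(), windspeeddata(), running and root from your code and assumes pyserial 3.x, where Serial objects expose is_open and open()):

import serial

# Open the port once at startup; Serial(...) with a port name opens it immediately
serialPort = serial.Serial(port=portno(), baudrate=baudval(), bytesize=8,
                           timeout=None, stopbits=serial.STOPBITS_ONE)

def myClick2():
    try:
        if not serialPort.is_open:   # the port was closed somewhere: reopen it
            serialPort.open()
        windspeedstring = windspeeddata() + '\r\n'
        serialPort.write(windspeedstring.encode())
    except serial.SerialException as e:
        print('Serial port error:', e)
    if running:
        root.after(1000, myClick2)   # reschedule instead of blocking in a while loop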

Trying to convert this bash line to Python 2.6 [duplicate]

I'm writing a script to automate some command line commands in Python. At the moment, I'm doing calls like this:
cmd = "some unix command"
retcode = subprocess.call(cmd,shell=True)
However, I need to run some commands on a remote machine. Manually, I would log in using ssh and then run the commands. How would I automate this in Python? I need to log in with a (known) password to the remote machine, so I can't just use cmd = ssh user@remotehost. Is there a module I should be using?
I will refer you to paramiko
see this question
import paramiko

ssh = paramiko.SSHClient()
ssh.connect(server, username=username, password=password)
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(cmd_to_execute)
If you are using ssh keys, do:
k = paramiko.RSAKey.from_private_key_file(keyfilename)
# OR k = paramiko.DSSKey.from_private_key_file(keyfilename)
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=host, username=user, pkey=k)
Keep it simple. No libraries required.
import subprocess
# Python 2
subprocess.Popen("ssh {user}#{host} {cmd}".format(user=user, host=host, cmd='ls -l'), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
# Python 3
subprocess.Popen(f"ssh {user}#{host} {cmd}", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
Or you can just use commands.getstatusoutput:
commands.getstatusoutput("ssh machine 1 'your script'")
I used it extensively and it works great.
In Python 2.7+, use subprocess.check_output.
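For example (a minimal sketch; the host and command are placeholders):

import subprocess

# Runs the remote command and returns its stdout; raises CalledProcessError
# if ssh (or the remote command) exits with a non-zero status.
output = subprocess.check_output(["ssh", "user@remotehost", "ls -l"])
print(output)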
I found paramiko to be a bit too low-level, and Fabric not especially well-suited to being used as a library, so I put together my own library called spur that uses paramiko to implement a slightly nicer interface:
import spur
shell = spur.SshShell(hostname="localhost", username="bob", password="password1")
result = shell.run(["echo", "-n", "hello"])
print result.output # prints hello
If you need to run inside a shell:
shell.run(["sh", "-c", "echo -n hello"])
Others have already recommended paramiko, so I am just sharing Python code (an API, one might say) that lets you execute multiple commands in one go.
To execute commands on a different node, use: Commands().run_cmd(host_ip, list_of_commands)
You will see one TODO, which I kept as a reminder to stop execution if any of the commands fails; I didn't know how to do it, so please share your knowledge (a sketch of one approach is shown after the code below).
#!/usr/bin/python
import os
import sys
import select
import paramiko
import time

class Commands:
    def __init__(self, retry_time=0):
        self.retry_time = retry_time

    def run_cmd(self, host_ip, cmd_list):
        i = 0
        while True:
            # print("Trying to connect to %s (%i/%i)" % (self.host, i, self.retry_time))
            try:
                ssh = paramiko.SSHClient()
                ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
                ssh.connect(host_ip)
                break
            except paramiko.AuthenticationException:
                print("Authentication failed when connecting to %s" % host_ip)
                sys.exit(1)
            except:
                print("Could not SSH to %s, waiting for it to start" % host_ip)
                i += 1
                time.sleep(2)
            # If we could not connect within time limit
            if i >= self.retry_time:
                print("Could not connect to %s. Giving up" % host_ip)
                sys.exit(1)
        # After connection is successful
        # Send the command
        for command in cmd_list:
            print "> " + command
            # execute commands
            stdin, stdout, stderr = ssh.exec_command(command)
            # TODO() : if an error is thrown, stop further rules and revert back changes
            # Wait for the command to terminate
            while not stdout.channel.exit_status_ready():
                # Only print data if there is data to read in the channel
                if stdout.channel.recv_ready():
                    rl, wl, xl = select.select([stdout.channel], [], [], 0.0)
                    if len(rl) > 0:
                        tmp = stdout.channel.recv(1024)
                        output = tmp.decode()
                        print output
        # Close SSH connection
        ssh.close()
        return

def main(args=None):
    if args is None:
        print "arguments expected"
    else:
        # args = {'<ip_address>', <list_of_commands>}
        mytest = Commands()
        mytest.run_cmd(host_ip=args[0], cmd_list=args[1])
    return

if __name__ == "__main__":
    main(sys.argv[1:])
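Regarding the TODO about stopping when a command fails: one possible approach (a sketch only, not part of the class above) is to block on each command's exit status and break out of the loop:

import paramiko

def run_cmd_stop_on_error(host_ip, cmd_list):
    """Variant of Commands.run_cmd above that stops at the first failing command."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host_ip)
    for command in cmd_list:
        print("> " + command)
        stdin, stdout, stderr = ssh.exec_command(command)
        status = stdout.channel.recv_exit_status()  # blocks until the remote command exits
        print(stdout.read().decode())
        if status != 0:
            print("command exited with status %d, stopping" % status)
            break
    ssh.close()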
paramiko finally worked for me after adding one additional line, which is a really important one (line 3):
import paramiko
p = paramiko.SSHClient()
p.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # This script doesn't work for me unless this line is added!
p.connect("server", port=22, username="username", password="password")
stdin, stdout, stderr = p.exec_command("your command")
opt = stdout.readlines()
opt = "".join(opt)
print(opt)
Make sure that the paramiko package is installed.
Original source of the solution: Source
The accepted answer didn't work for me, here's what I used instead:
import paramiko
import os
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# ssh.load_system_host_keys()
ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
ssh.connect("d.d.d.d", username="user", password="pass", port=22222)
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("ls -alrt")
exit_code = ssh_stdout.channel.recv_exit_status() # handles async exit error
for line in ssh_stdout:
print(line.strip())
total 44
-rw-r--r--. 1 root root 129 Dec 28 2013 .tcshrc
-rw-r--r--. 1 root root 100 Dec 28 2013 .cshrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bashrc
...
Alternatively, you can use sshpass:
import subprocess
cmd = """ sshpass -p "myPas$" ssh user#d.d.d.d -p 22222 'my command; exit' """
print( subprocess.getoutput(cmd) )
References:
https://github.com/onyxfish/relay/issues/11
https://stackoverflow.com/a/61016663/797495
Notes:
Just make sure to connect manually to the remote system via ssh at least once (ssh root@ip) and accept the public key; this is often the reason for not being able to connect using paramiko or other automated ssh scripts.
I have used paramiko a bunch (nice) and pxssh (also nice). I would recommend either. They work a little differently but have a relatively large overlap in usage.
First: I'm surprised that no one has mentioned fabric yet.
Second: For exactly the requirements you describe, I've implemented my own Python module named jk_simpleexec. Its purpose: making running commands easy.
Let me explain a little bit about it for you.
The 'executing a command locally' problem
My python module jk_simpleexec provides a function named runCmd(..) that can execute a shell (!) command locally or remotely. This is very simple. Here is an example for local execution of a command:
import jk_simpleexec
cmdResult = jk_simpleexec.runCmd(None, "cd / ; ls -la")
NOTE: Be aware that the returned data is trimmed automatically by default to remove excessive empty lines from STDOUT and STDERR. (Of course this behavior can be deactivated, but for the purpose you have in mind, exactly that behavior is what you will want.)
The 'processing the result' problem
What you will receive is an object that contains the return code, STDOUT and STDERR. Therefore it's very easy to process the result.
And this is what you want to do, as the command you execute might exist and launch but still fail to do what it is intended to do. In the simplest case, where you're not interested in STDOUT and STDERR, your code will likely look something like this:
cmdResult.raiseExceptionOnError("Something went wrong!", bDumpStatusOnError=True)
For debugging purposes you may want to output the result to STDOUT at some point; for this you can just do:
cmdResult.dump()
If you want to process STDOUT, that's simple as well. Example:
for line in cmdResult.stdOutLines:
    print(line)
The 'executing a command remotely' problem
Now of course we might want to execute this command remotely on another system. For this we can use the same function runCmd(..) in exactly the same way but we need to specify a fabric connection object first. This can be done like this:
from fabric import Connection
REMOTE_HOST = "myhost"
REMOTE_PORT = 22
REMOTE_LOGIN = "mylogin"
REMOTE_PASSWORD = "mypwd"
c = Connection(host=REMOTE_HOST, user=REMOTE_LOGIN, port=REMOTE_PORT, connect_kwargs={"password": REMOTE_PASSWORD})
cmdResult = jk_simpleexec.runCmd(c, "cd / ; ls -la")
# ... process the result stored in cmdResult ...
c.close()
Everything remains exactly the same, but this time we run this command on another host. This is intended: I wanted to have a uniform API where there are no modifications required in the software if you at some time decide to move from the local host to another host.
The password input problem
Now of course there is the password problem. This has been mentioned above by some users: We might want to ask the user executing this python code for a password.
For this problem I created my own module quite some time ago: jk_pwdinput. The difference from regular password input is that jk_pwdinput outputs stars instead of printing nothing, so for every password character you type you will see a star. This makes it easier to enter a password.
Here is the code:
import jk_pwdinput
# ... define other 'constants' such as REMOTE_LOGIN, REMOTE_HOST ...
REMOTE_PASSWORD = jk_pwdinput.readpwd("Password for " + REMOTE_LOGIN + "@" + REMOTE_HOST + ": ")
(For completeness: If readpwd(..) returned None the user canceled the password input with Ctrl+C. In a real world scenario you might want to act on this appropriately.)
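For instance, a minimal way to handle that cancellation (just a sketch) could be:

import sys
import jk_pwdinput

REMOTE_PASSWORD = jk_pwdinput.readpwd("Password: ")
if REMOTE_PASSWORD is None:
    # the user canceled the password prompt with Ctrl+C
    sys.exit("No password entered, aborting.")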
Full example
Here is a full example:
import jk_simpleexec
import jk_pwdinput
from fabric import Connection
REMOTE_HOST = "myhost"
REMOTE_PORT = 22
REMOTE_LOGIN = "mylogin"
REMOTE_PASSWORD = jk_pwdinput.readpwd("Password for " + REMOTE_LOGIN + "@" + REMOTE_HOST + ": ")
c = Connection(host=REMOTE_HOST, user=REMOTE_LOGIN, port=REMOTE_PORT, connect_kwargs={"password": REMOTE_PASSWORD})
cmdResult = jk_simpleexec.runCmd(
    c = c,
    command = "cd / ; ls -la"
)
cmdResult.raiseExceptionOnError("Something went wrong!", bDumpStatusOnError=True)
c.close()
Final notes
So we have the full set:
Executing a command,
executing that command remotely via the same API,
creating the connection in an easy and secure way with password input.
The code above solves the problem quite well for me (and hopefully for you as well). And everything is open source: Fabric is BSD-2-Clause, and my own modules are provided under Apache-2.
Modules used:
fabric : http://www.fabfile.org/
jk_pwdinput : https://github.com/jkpubsrc/python-module-jk-pwdinput
jk_simpleexec : https://github.com/jkpubsrc/python-module-jk-simpleexec
Happy coding! ;-)
Works Perfectly...
import paramiko
import time
ssh = paramiko.SSHClient()
#ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.106.104.24', port=22, username='admin', password='')
time.sleep(5)
print('connected')
stdin, stdout, stderr = ssh.exec_command(" ")
def execute():
    stdin.write('xcommand SystemUnit Boot Action: Restart\n')
    print('success')

execute()
You can use either of these commands; they let you supply a password as well.
cmd = subprocess.run(["sshpass -p 'password' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@domain.com ps | grep minicom"], shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(cmd.stdout)
OR
cmd = subprocess.getoutput("sshpass -p 'password' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@domain.com ps | grep minicom")
print(cmd)
Have a look at spurplus, a wrapper we developed around spur that provides type annotations and some minor gimmicks (reconnecting SFTP, md5 etc.): https://pypi.org/project/spurplus/
This asks the user to enter the command appropriate for the device they are logging in to.
The code below was validated by PEP8online.com.
import paramiko
import xlrd
import time
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
loc = ('/Users/harshgow/Documents/PYTHON_WORK/labcred.xlsx')
wo = xlrd.open_workbook(loc)
sheet = wo.sheet_by_index(0)
Host = sheet.cell_value(0, 1)
Port = int(sheet.cell_value(3, 1))
User = sheet.cell_value(1, 1)
Pass = sheet.cell_value(2, 1)
def details(Host, Port, User, Pass):
    time.sleep(2)
    ssh.connect(Host, Port, User, Pass)
    print('connected to ip ', Host)
    stdin, stdout, stderr = ssh.exec_command("")
    x = input('Enter the command:')
    stdin.write(x)
    stdin.write('\n')
    print('success')

details(Host, Port, User, Pass)
#Reading the Host,username,password,port from excel file
import paramiko
import xlrd
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
loc = ('/Users/harshgow/Documents/PYTHON_WORK/labcred.xlsx')
wo = xlrd.open_workbook(loc)
sheet = wo.sheet_by_index(0)
Host = sheet.cell_value(0,1)
Port = int(sheet.cell_value(3,1))
User = sheet.cell_value(1,1)
Pass = sheet.cell_value(2,1)
def details(Host, Port, User, Pass):
    ssh.connect(Host, Port, User, Pass)
    print('connected to ip ', Host)
    stdin, stdout, stderr = ssh.exec_command("")
    stdin.write('xcommand SystemUnit Boot Action: Restart\n')
    print('success')

details(Host, Port, User, Pass)
The most modern approach is probably to use fabric. This module allows you to set up an SSH connection and then run commands and get their results over the connection object.
Here's a simple example:
from fabric import Connection
with Connection("your_hostname") as connection:
    result = connection.run("uname -s", hide=True)
    msg = "Ran {0.command!r} on {0.connection.host}, got stdout:\n{0.stdout}"
    print(msg.format(result))
I wrote a simple class to run commands on a remote host over native ssh, using the subprocess module:
Usage
from ssh_utils import SshClient
client = SshClient(user='username', remote='remote_host', key_path='path/to/key.pem')
# run a list of commands
client.cmd(['mkdir ~/testdir', 'ls -la', 'echo done!'])
# copy files/dirs
client.scp('my_file.txt', '~/testdir')
Class source code
https://gist.github.com/mamaj/a7b378a5c969e3e32a9e4f9bceb0c5eb
import subprocess
from pathlib import Path
from typing import Union


class SshClient():
    """Perform commands and copy files over ssh using subprocess
    and the native ssh client (OpenSSH).
    """

    def __init__(self,
                 user: str,
                 remote: str,
                 key_path: Union[str, Path]) -> None:
        """
        Args:
            user (str): username for the remote
            remote (str): remote host IP/DNS
            key_path (str or pathlib.Path): path to .pem file
        """
        self.user = user
        self.remote = remote
        self.key_path = str(key_path)

    def cmd(self,
            cmds: list[str],
            strict_host_key_checking=False) -> None:
        """Runs commands consecutively; each one runs only if the
        previous command succeeded.

        Args:
            cmds (list[str]): list of commands to run.
            strict_host_key_checking (bool, optional): Defaults to False.
        """
        strict_host_key_checking = 'yes' if strict_host_key_checking \
            else 'no'
        cmd = ' && '.join(cmds)
        subprocess.run(
            [
                'ssh',
                '-i', self.key_path,
                '-o', f'StrictHostKeyChecking={strict_host_key_checking}',
                '-o', 'UserKnownHostsFile=/dev/null',
                f'{self.user}@{self.remote}',
                cmd
            ]
        )

    def scp(self, source: Union[str, Path], destination: Union[str, Path]):
        """Copies `source` file to remote `destination` using the
        native `scp` command.

        Args:
            source (Union[str, Path]): Source file path.
            destination (Union[str, Path]): Destination path on remote.
        """
        subprocess.run(
            [
                'scp',
                '-i', self.key_path,
                str(source),
                f'{self.user}@{self.remote}:{str(destination)}',
            ]
        )
Below is an example, in case you want the user to input the hostname, username, password, and port number.
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
def details():
    Host = input("Enter the Hostname: ")
    Port = input("Enter the Port: ")
    User = input("Enter the Username: ")
    Pass = input("Enter the Password: ")
    ssh.connect(Host, Port, User, Pass, timeout=2)
    print('connected')
    stdin, stdout, stderr = ssh.exec_command("")
    stdin.write('xcommand SystemUnit Boot Action: Restart\n')
    print('success')

details()

how to run a remote command with telnetlib3 on python asyncio

I'm trying to write a simple telnet client that just runs a single command on a remote box using telnet. This needs to run over asyncio as other tasks are monitored at the same time under that framework.
I got it almost working with the code below, which I tweaked from the telnet-client example that ships with the telnetlib3 library, except that it does not return. I've had a hard time trying to figure out what this protocol.waiter_closed is all about.
In any case, how do I need to tweak this code so that it returns once the command has been dealt with on the remote end?
Thanks
#!/usr/bin/env python3

import logging
import asyncio
import telnetlib3

# just to check that connection is thrown away
class MyClient(telnetlib3.TelnetClient):
    def connection_lost(self, *args):
        print("connection lost on client {} - args={}".format(self, args))

@asyncio.coroutine
def register_telnet_command(loop, Client, host, port, command):
    transport, protocol = yield from loop.create_connection(Client, host, port)
    print("{} async connection OK for command {}".format(host, command))

    def send_command():
        EOF = chr(4)
        EOL = '\n'
        # adding newline and end-of-file for this simple example
        command_line = command + EOL + EOF
        protocol.stream.write(protocol.shell.encode(command_line))

    # one-shot invocation of the command
    loop.call_soon(send_command)
    # what does this do exactly ?
    yield from protocol.waiter_closed

port = 23
hostname = "fit01"

def main():
    def ClientFactory():
        return MyClient(encoding='utf-8', shell=telnetlib3.TerminalShell)

    # create as many clients as we have hosts
    loop = asyncio.get_event_loop()
    loop.run_until_complete(
        register_telnet_command(loop, ClientFactory,
                                host=hostname, port=port,
                                command="id"))
    return 0

main()
Sorry, my mistake: redefining connection_lost without calling the code from telnetlib3's own connection_lost is a bad idea, since that is the code that populates waiter_closed.
I should have done:
class MyClient(telnetlib3.TelnetClient):
    def connection_lost(self, *args):
        print("connection lost on client {} - args={}".format(self, args))
        super().connection_lost(*args)

Ruby, get incoming address from UDP message

I have a UDP server that binds to all addresses on a system, and I would like to know what IP address a message was addressed to. Any ideas how to do this?
Here is my example code:
sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
sock.bind(Addrinfo.udp('', 2400))

while(true)
  sockset = IO.select([sock])
  sockset[0].each do |sock|
    data = sock.recvfrom(1024)
    puts "data: " + data.inspect
  end
end

sock.close
This will produce something like:
data: ["test message\n", #<Addrinfo: 172.16.5.110:41949 UDP>]
Am I able to set a socket option, or something, to return the local IP?
Just a note, this needs to work for IPv6 too. Thanks in advance, Dave.
UNIX Network Programming has this to say about this very subject:
With a UDP socket, however, the destination IP address can only be obtained by setting the IP_RECVDSTADDR socket option for IPv4 or the IPV6_PKTINFO socket option for IPv6 and then calling recvmsg instead of recvfrom.
Ruby’s socket library has recvmsg which is a bit easier to use than the underlying C function, but still needs a bit of work to get the info needed. The destination IP address is included in the array of ancillary data returned from recvmsg. Here’s a version of your code adapted to use recvmsg and get the destination address for IPv4:
require 'socket'
require 'ipaddr'

sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)

# Set the required socket option:
sock.setsockopt :IPPROTO_IP, :IP_RECVDSTADDR, true

sock.bind(Addrinfo.udp('0.0.0.0', 2400))

while(true)
  sockset = IO.select([sock])
  sockset[0].each do |sock|
    mesg, sender, _, *anc_data = sock.recvmsg
    # Find the relevant data and extract it into a string
    dest = IPAddr.ntop(anc_data.find {|d| d.cmsg_is?(:IP, :RECVDSTADDR)}.data)
    puts "Data: #{mesg}, Sender: #{sender.ip_address}, Destination: #{dest}"
  end
end
And here is a version for IPv6. There is also a RECVPKTINFO socket option, which I think may have superseded PKTINFO – depending on your system you may need to use that instead.
require 'socket'

sock = Socket.new(Socket::AF_INET6, Socket::SOCK_DGRAM, 0)

# Set the socket option for IP6:
sock.setsockopt :IPPROTO_IPV6, :IPV6_PKTINFO, true

sock.bind(Addrinfo.udp('0::0', 2400))

while(true)
  sockset = IO.select([sock])
  sockset[0].each do |sock|
    mesg, sender, _, *anc_data = sock.recvmsg
    # Find and extract the destination address
    dest = anc_data.find {|d| d.cmsg_is?(:IPV6, :PKTINFO)}.ipv6_pktinfo_addr
    puts "Data: #{mesg}, Sender: #{sender.ip_address}, Destination: #{dest.ip_address}"
  end
end
Ruby also provides a Socket.udp_server_loop method, which yields the message and a UDPSource object to the block you provide, and this source object has a local_address field. Looking at the source, this appears to check the PKTINFO data like I do above to get the destination address for IPv6 requests, but not for IPv4. This method binds to all available IP addresses individually and just uses the address of the incoming interface for IPv4 requests, which may not be accurate for a weak end system model. However, it might be simpler for you to use Socket.udp_server_loop.

Get wifi BSSID programmatically using Ruby and ioctl

Using Getting essid via ioctl in ruby as a template I wanted to get the BSSID rather than the ESSID. However, not being a C developer, there are a few things that I don't understand.
Here is what I have so far, which does not work :( ...
NOTE: I'm a bit confused because part of me thinks, according to some comments in wireless.h, that the BSSID can only be set via ioctl. However, the ioctl to get it exists. That, along with my almost complete lack of understanding of the more intermediate C idioms (structs, unions, and stuff ;) ), means I simply don't know.
def _get_bssid(interface)
  # Copied from wireless.h
  # supposing a 16 byte address and 32 byte buffer but I'm totally
  # guessing here.
  iwreq = [interface, '' * 48, 0].pack('a*pI')
  sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
  # from wireless.h
  # SIOCGIWAP 0x8B15 /* get access point MAC addresses */
  sock.ioctl('0x8B15', iwreq) # always get an error: Can't convert string to Integer
  puts iwreq.inspect
end
So, in the meantime, I'm using a wpa_cli method for grabbing the BSSID but I'd prefer to use IOCTL:
def _wpa_status(interface)
  wpa_data = nil
  unless interface.nil?
    # need to write a method to get the src_sock_path
    # programmatically. Fortunately, for me
    # this is going to be the correct sock path 99% of the time.
    # Ideas to get programmatically would be:
    #   parse wpa_supplicant.conf
    #   check process table | grep wpa_suppl | parse arguments
    src_sock_path = '/var/run/wpa_supplicant/' + interface
  else
    return nil
  end
  client_sock_path = '/var/run/hwinfo_wpa'
  # open Domain socket
  socket = Socket.new(Socket::AF_UNIX, Socket::SOCK_DGRAM, 0)
  begin
    # bind client domain socket
    socket.bind(Socket.pack_sockaddr_un(client_sock_path))
    # connect to server with our client socket
    socket.connect(Socket.pack_sockaddr_un(src_sock_path))
    # send STATUS command
    socket.send('STATUS', 0)
    # receive 1024 bytes (totally arbitrary value)
    # split lines by \n
    # store in variable wpa_data.
    wpa_data = socket.recv(1024)
  rescue => e
    $stderr.puts 'WARN: unable to gather wpa data: ' + e.inspect
  end
  # close or next time we attempt to read it will fail.
  socket.close
  begin
    # remove the domain socket file for the client
    File.unlink(client_sock_path)
  rescue => e
    $stderr.puts 'WARN: ' + e.inspect
  end
  unless wpa_data.nil?
    @wifis = Hash[wpa_data.split(/\n/).map\
      {|line|
        # first, split into pairs delimited by '='
        key, value = line.split('=')
        # if key is camel-humped then put space in front
        # of capped letter
        if key =~ /[a-z][A-Z]/
          key.gsub!(/([a-z])([A-Z])/, '\\1_\\2')
        end
        # if key is "id" then rename it.
        key.eql?('id') && key = 'wpa_id'
        # fix key so that it can be used as a table name
        # by replacing spaces with underscores
        key.gsub!(' ', '_')
        # lower case it.
        key.downcase!
        [key, value]
      }]
  end
end
EDIT:
So far nobody has been able to answer this question. I think I'm liking the wpa method better anyway because I'm getting more data from it. That said, one call-out I'd like to make is if anyone uses the wpa code, be aware that it will require escalated privileges to read the wlan socket.
EDIT^2 (full code snippet):
Thanks to @dasup I've been able to refactor my class to correctly pull the BSSID and ESSID using system ioctls. (YMMV given the implementation, age, and other particulars of your Linux distribution; the following code snippet works with the 3.2 and 3.7 kernels though.)
require 'socket'

class Wpa
  attr_accessor :essid, :bssid, :if

  def initialize(interface)
    @if = interface
    puts 'essid: ' + _get_essid.inspect
    puts 'bssid: ' + _get_bssid.inspect
  end

  def _get_essid
    # Copied from wireless.h
    iwreq = [@if, " " * 32, 32, 0].pack('a16pII')
    sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
    sock.ioctl(0x8B1B, iwreq)
    @essid = iwreq.unpack('@16p').pop.strip
  end

  def _get_bssid
    # Copied from wireless.h
    # supposing a 16 byte address and 32 byte buffer but I'm totally
    # guessing here.
    iwreq = [@if, "\0" * 32].pack('a16a32')
    sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
    # from wireless.h
    # SIOCGIWAP 0x8B15 /* get access point MAC addresses */
    sock.ioctl(0x8B15, iwreq)
    @bssid = iwreq.unpack('@18H2H2H2H2H2H2').join(':')
  end
end

h = Wpa.new('wlan0')
I'm not very familiar with Ruby, but I spotted two mistakes:
The hex number for SIOCGIWAP should be given without quotes/ticks.
The initialization of the data buffer ends up with some trailing bytes after the interface name (debugged using gdb). The initialization given below works.
Be aware that your code will break if any of the data structures or constants change (IFNAMSIZ, sa_family, struct sockaddr etc.) However, I don't think that such changes are likely anytime soon.
require 'socket'

def _get_bssid(interface)
  # Copied from wireless.h
  # supposing a 16 byte address and 32 byte buffer but I'm totally
  # guessing here.
  iwreq = [interface, "\0" * 32].pack('a16a32')
  sock = Socket.new(Socket::AF_INET, Socket::SOCK_DGRAM, 0)
  # from wireless.h
  # SIOCGIWAP 0x8B15 /* get access point MAC addresses */
  sock.ioctl(0x8B15, iwreq)
  puts iwreq.inspect
end
You'll get back an array/buffer with:
The interface name you sent, padded with 0x00 bytes to a total length of 16 bytes.
Followed by a struct sockaddr, i.e. a two-byte identifier 0x01 0x00 (coming from ARPHRD_ETHER?) followed by the BSSID padded with 0x00 bytes to a total length of 14 bytes.
Good luck!
