We have a filter intermediate driver implemented in NDIS 5.x (miniport, protocol). Since support for this has been removed in NDIS 6.x, I am planning to convert my filter intermediate driver to a MUX intermediate driver with a 1:1 relationship.
I want to take this approach to minimize cost and schedule impact while moving to NDIS 6.x, even though NDIS 6.x introduced the Lightweight Filter (LWF) driver model.
Is there guidance available for implementing a MUX intermediate driver with a 1:1 relationship?
LWFs are really easy to write. It's probably easier to switch to a LWF than to retrofit your driver as a MUX driver -- even a 1:1 MUX. For example, building a MUX driver requires you to build a usermode notify object. A LWF does not.
LWFs also have better performance, more flexibility, and more features.
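For a rough sense of how small the boilerplate is, here is a hypothetical, abbreviated DriverEntry sketch for an NDIS 6.x LWF. The handler names are placeholders, the name/INF string fields are omitted, and the ndislwf sample in the WDK is the authoritative reference:

    #include <ndis.h>

    // Forward declarations of the four mandatory handlers (placeholder names).
    FILTER_ATTACH  FilterAttach;
    FILTER_DETACH  FilterDetach;
    FILTER_RESTART FilterRestart;
    FILTER_PAUSE   FilterPause;

    NDIS_HANDLE FilterDriverHandle;

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        NDIS_FILTER_DRIVER_CHARACTERISTICS FChars;

        UNREFERENCED_PARAMETER(RegistryPath);

        NdisZeroMemory(&FChars, sizeof(FChars));
        FChars.Header.Type     = NDIS_OBJECT_TYPE_FILTER_DRIVER_CHARACTERISTICS;
        FChars.Header.Size     = sizeof(FChars);
        FChars.Header.Revision = NDIS_FILTER_CHARACTERISTICS_REVISION_1;
        FChars.MajorNdisVersion = 6;
        FChars.MinorNdisVersion = 0;

        // FriendlyName, UniqueName, and ServiceName must also be filled in
        // from the INF-defined strings; omitted here for brevity.

        FChars.AttachHandler  = FilterAttach;
        FChars.DetachHandler  = FilterDetach;
        FChars.RestartHandler = FilterRestart;
        FChars.PauseHandler   = FilterPause;

        // No notify object and no virtual miniport edge as a MUX IM driver
        // would need -- just one registration call.
        return NdisFRegisterFilterDriver(DriverObject, (NDIS_HANDLE)DriverObject,
                                         &FChars, &FilterDriverHandle);
    }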
Vehicles communicate using 802.11p in a Veins 5.1, OMNeT++ 5.6.2, SUMO 1.8.0 environment.
My questions:
Do I have to implement a retransmission process (like CSMA/CA) when a collision occurs?
Or is such a retransmission process already implemented in a library or class?
I also want to use the RTS/CTS option; do I have to implement that myself as well?
Thank you
To directly answer your question: the 802.11p modules that are included in Veins 5.1 trigger automatic retransmissions (following the 802.11 specification) for lost unicast messages if the MAC layer useAcks parameter is set to true (which is not the default). RTS/CTS is not implemented, so if you want to use only modules from Veins you would need to implement this yourself.
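For example, assuming the usual Veins node layout where the MAC module is reachable as nic.mac1609_4 (adjust the module path to your own network definition), enabling acknowledgements in omnetpp.ini could look like this:

    # Hypothetical omnetpp.ini fragment; the exact module path depends on your network.
    *.node[*].nic.mac1609_4.useAcks = true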
More generally, though, your research sounds like it might be better served if you would combine Veins with The INET Framework (via veins_inet); this would allow you to use the more general 802.11 simulation model included in The INET Framework. It includes features like block-ACKs, RTS/CTS, fragmentation and reassembly, infrastructure mode, automatic rate selection, and many more.
I am testing a custom FPGA NIC, and I need to send management information (such as header info for matching) and traffic data to it using a traffic generator from user space.
The driver built for the FPGA is a modified version of IXGBE with DMA support for management, and also supports DPDK for kernel bypass to achieve high throughput.
I am trying to understand how the various software components (driver, userspace application, etc.) should be stacked and connected to each other so that I can read from and write to the NIC over PCIe using a set of scripts from user space.
I have also been looking at this project
https://github.com/CospanDesign/python-pci
which is useful; however, it is based on the Xilinx XDMA.
I would appreciate any help or pointers on this.
Sorry, the question is too broad. For such a broad question there is a generic answer: have a look at Inter Process Communication:
https://en.wikipedia.org/wiki/Inter-process_communication
There are a variety of methods, such as Unix sockets, shared memory, and netlink, to communicate between user-space processes, as well as a variety of methods to communicate between user space and kernel space.
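As one concrete illustration (a minimal sketch only; the socket path and command format are invented), a user-space control script could hand management commands to another user-space process, e.g. a DPDK-based agent that owns the NIC, over a Unix domain socket:

    /* Minimal sketch: user-space client sending a command over a Unix domain
     * socket to another user-space process (e.g. a DPDK-based agent that owns
     * the NIC). The socket path and message format are made up for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        const char *msg = "SET_MATCH_HEADER dst_port=4789";  /* hypothetical command */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0) { perror("socket"); return 1; }
        strncpy(addr.sun_path, "/tmp/fpga_nic_agent.sock", sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        write(fd, msg, strlen(msg));   /* the agent parses this and programs the NIC */
        close(fd);
        return 0;
    }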
Just pick the one that suits you best and try it. If it fails, come back to SO and ask ;)
I am developing Car2X applications in order to simulate case studies based on the Veins framework.
As an Information Systems student, I have been worried mostly about the code of my applications.
Recently I noticed that Veins has no LLC, network, or transport layers in its source code (/src).
My question is: how can I ensure that my simulation runs generate data close to reality for Car2X applications when none of these layers is present in the source code?
P.S.: I am aware of INET framework and its protocols, I was just wondering if I could use just Veins for my case studies :)
The layers you mentioned are not needed for most Car2X simulations. If you download, for example, Veins 4.4, you will find only simulation models for single hop broadcast transmission of frames, the most general use case. If you want to simulate a special protocol, say, for multi-hop transmission of frames, you will need to implement this as a network layer. Then, your simulation will have a network layer model.
I am trying to understand how wireless works in Linux. I started with the wpa_supplicant and hostapd applications with the help of their documentation and source code, and I understood the flow and basic functionality of:
wpa_supplicant, nl80211 (driver interface)
libnl library (socket communication between user space and kernel using the netlink protocol)
cfg80211 (kernel interface used for communicating with the driver from user space with the help of the nl80211 implementation in user space), mac80211 (software media access control layer)
driver (loadable driver, e.g. ath6kl, the Atheros driver).
I understood the above software flow, and in my exploration I learned that, to give developers flexibility, the MAC layer is implemented in software (a popular implementation being mac80211).
Is this true in all cases? If so, what are the pros and cons of SoftMAC and HardMAC? Does the cfg80211 interface in the kernel communicate directly with the driver? Who communicates with mac80211, and how does that communication happen?
Thanks in advance.
The term 'SoftMAC' refers to a wireless network interface controller (WNIC) which does not implement the MAC layer in hardware; rather, it expects the driver to implement the MAC layer.
'HardMAC' (also called 'FullMAC') describes a WNIC which implements the MAC layer in hardware.
The advantages of SoftMAC are:
Potentially lower hardware costs
Possibility to upgrade to newer standards by updating the driver only
Possibility to correct faults in the MAC implementation by updating the driver only
An additional advantage (in the Linux kernel at least) is that many different drivers for different types of WNIC can all share the same MAC implementation, provided by the kernel itself.
Despite these advantages, not all WNICs use SoftMAC. The main advantage of HardMAC is that, since the MAC functions are implemented in hardware, they place less load on the host CPU.
mac80211 is the framework within the Linux kernel for implementing SoftMAC drivers. It implements the cfg80211 callbacks which would otherwise have to be implemented by the driver itself, and also implements the MAC layer functions. As such it goes between cfg80211 and the SoftMAC drivers.
HardMAC drivers have to implement the cfg80211 interfaces fully themselves.
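To make that layering concrete, here is a rough, hypothetical SoftMAC driver skeleton: the driver only fills in hardware-facing ops and registers them with mac80211, which then provides the MLME and cfg80211 side. Callback prototypes vary between kernel versions, so treat the signatures as illustrative:

    /* Hypothetical SoftMAC driver skeleton: the driver implements only the
     * hardware-facing ops; mac80211 supplies the MAC/MLME logic and the
     * cfg80211 callbacks on top of it. Prototypes follow older kernels and
     * may differ in newer ones -- check include/net/mac80211.h. */
    #include <net/mac80211.h>

    struct my_priv { void __iomem *regs; };   /* made-up private state */

    static void my_tx(struct ieee80211_hw *hw,
                      struct ieee80211_tx_control *control,
                      struct sk_buff *skb)
    {
        /* Queue the already-built 802.11 frame to the hardware. */
    }

    static int my_start(struct ieee80211_hw *hw) { return 0; /* power up radio */ }
    static void my_stop(struct ieee80211_hw *hw) { /* power down radio */ }

    static const struct ieee80211_ops my_ops = {
        .tx    = my_tx,
        .start = my_start,
        .stop  = my_stop,
        /* .add_interface, .config, .configure_filter, ... are also mandatory */
    };

    static int my_probe(void)
    {
        struct ieee80211_hw *hw;

        /* mac80211 allocates the shared ieee80211_hw plus our private data */
        hw = ieee80211_alloc_hw(sizeof(struct my_priv), &my_ops);
        if (!hw)
            return -ENOMEM;

        /* ... describe supported bands, channels and capabilities here ... */

        /* registers with mac80211, which in turn registers with cfg80211 */
        return ieee80211_register_hw(hw);
    }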
Also, to add:
HardMAC drivers offer better power saving and faster connection/disconnection recovery than SoftMAC, because the MLME is implemented in hardware. Power saving is better because the hardware/firmware does not need to wake the host on disconnection and can still reconnect and recover on its own.
I am looking into developing a kernel packet filter in Linux for filtering packets in high-volume network traffic.
I would like to ask whether it is possible to use the Berkeley Packet Filter (BPF) to implement a Bloom filter in the Linux kernel. Is there any alternative, better way to implement the kernel filter?
The BPF syntax is rather low-level and difficult to understand. Are there any higher-level/easier ways to write BPF, and good examples/references/tutorials to start with? And how can I debug BPF during development?
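As an illustration of how low-level classic BPF is (and why higher-level front ends are attractive), here is a minimal sketch that hand-codes a filter accepting only IPv4/UDP frames and attaches it to a raw socket with SO_ATTACH_FILTER:

    /* Minimal sketch: classic BPF program attached via SO_ATTACH_FILTER that
     * accepts only IPv4/UDP frames. The filter is written by hand as raw
     * opcodes -- the "low-level syntax" referred to above. */
    #include <stdio.h>
    #include <arpa/inet.h>
    #include <linux/filter.h>
    #include <linux/if_ether.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sock_filter code[] = {
            BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS, 12),               /* load EtherType */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 3),      /* IPv4? */
            BPF_STMT(BPF_LD  | BPF_B   | BPF_ABS, 23),                /* load IP protocol */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, IPPROTO_UDP, 0, 1),   /* UDP? */
            BPF_STMT(BPF_RET | BPF_K, 0xFFFF),                        /* accept */
            BPF_STMT(BPF_RET | BPF_K, 0),                             /* drop */
        };
        struct sock_fprog prog = { .len = sizeof(code) / sizeof(code[0]), .filter = code };

        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
            perror("bpf");
            return 1;
        }
        /* recv() on fd now only returns IPv4/UDP frames */
        return 0;
    }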