Is there an option for time synchronization (e.g. IEEE AVB/TSN) of wireless nodes (IEEE 802.11 stations) within the INET framework?
And if so, how is the resource reservation realized on the medium access level?
The point coordination function (PCF) has still not been implemented: https://github.com/inet-framework/inet/blob/master/src/inet/linklayer/ieee80211/mac/coordinationfunction/Pcf.h#L40
2020-04-21 UPDATE:
Avnu has a white paper on the concept: https://avnu.org/wireless-tsn-paper/
Avnu Alliance members... wrote the Avnu Alliance Wireless TSN – Definitions, Use Cases & Standards Roadmap white paper in an effort to generate awareness and start defining work required in Avnu to enable wireless TSN extensions in alignment with wired TSN systems and operation models.
2018-04-03 UPDATE:
IEEE Std. 802.1AS-2011 (gPTP) has a clause 12 titled "Media-dependent layer specification for IEEE 802.11 links." So, yes, it seems time synchronization is possible over WIFI, and is in fact defined in an IEEE standard.
2017-12-13 UPDATE:
It looks like the OpenAvnu project has been working on this idea. Check out this pull request, which seems to implement the precision time-stamping required for AVB on a WIFI connection.
OpenAvnu Pull Request #734: "Added wireless timestamper and port code"
This should probably be asked as multiple questions, with each question relating to the implementation of one of the core AVB/TSN protocols on a WIFI (802.11) network. Audio video bridging (AVB) and time sensitive networking (TSN) are not themselves IEEE standards or protocols. What we call AVB or TSN (I'm just going to use AVB from now on) is a collective name for the use and implementation of multiple IEEE standards in order to achieve real-time media transfer.
These core protocols are:
IEEE Std. 802.1BA-2011: Profiles and configurations of IEEE standards which define what an AVB endpoint or bridge needs to do. This is the closest we get to one single standard for AVB.
IEEE Std. 1722(-2016): A Layer 2 audio video transport protocol (AVTP)
IEEE Std. 1722.1(-2013): Audio video discovery, enumeration, connection management and control (AVDECC) for 1722-based devices
IEEE Std. 802.1AS(-2011): Generalized precision time protocol (gPTP)
IEEE Std. 802.1Q(-2014): FQTSS and SRP
(note that according to the IEEE TSN webpage, currently published TSN-specific standards will be rolled into 802.1Q, so the list above should still be accurate)
Because stream reservation (SRP), timing (gPTP), and media transport (1722 or 1733) are independent, your question should probably ask about them independently.
How can/should IEEE 802.1AS (gPTP) be implemented in a WIFI (802.11) network?
How can/should IEEE 802.1Q (SRP and FQTSS) be implemented in a WIFI network?
1. I have nowhere near the experience these standards developers have, and some of them have explored gPTP on WIFI extensively. The "how" of gPTP is well explained by Kevin Stanton of Intel here.
And for WIFI in particular, Norman Finn from Cisco had some notes on using gPTP with WIFI networks here.
I couldn't find anything that explicitly laid out how best to use/implement gPTP with WIFI. Ethernet is really where a lot of this is happening right now.
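For intuition, here is a minimal Python sketch of the two timestamp calculations gPTP builds on: the peer-delay exchange and the Sync-based offset. The function names are mine, not an INET or OpenAvnu API, and on a WIFI link the t1..t4 timestamps would have to come from the MAC-level timing measurement mechanism that 802.1AS clause 12 maps them onto:

```python
def mean_link_delay(t1, t2, t3, t4, neighbor_rate_ratio=1.0):
    """Peer-delay exchange: Pdelay_Req sent at t1, received at t2;
    Pdelay_Resp sent at t3, received back at t4 (all in seconds)."""
    return ((t4 - t1) * neighbor_rate_ratio - (t3 - t2)) / 2.0

def clock_offset(sync_origin_ts, sync_receipt_ts, link_delay):
    """Offset of the local clock from the grandmaster, given a Sync
    message's (corrected) origin timestamp and the local receipt time."""
    return sync_receipt_ts - sync_origin_ts - link_delay

# Example: a 50 us one-way link delay and a local clock running 3 us ahead.
d = mean_link_delay(0.000000, 0.000050, 0.000060, 0.000110)   # -> 5e-05
print(clock_offset(0.001000, 0.001053, d))                    # -> ~3e-06
```

The hard part on wireless is not this arithmetic but obtaining t1..t4 with nanosecond-class accuracy, which is exactly what the OpenAvnu wireless timestamper work mentioned above is about.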
2. Craig Gunther from Harman says:
Simply implement[ing] the SRP control protocol without performing the related reservation actions. ... 802.11ak along with 802.1Qbz may make this much simpler. .... 802.11ac and 802.11ad have created some interesting new technologies that may help with reservations...
Source: http://www.ieee802.org/1/files/public/docs2014/at-cgunther-srp-for-802-11-0114-v01.pdf
Personally, I feel like guaranteed latency and reliability are very hard to ask for with a network that has to do things like carrier-sense multiple access with collision avoidance (CSMA/CA), but that's just my inexperienced opinion. It certainly would be cool, but it seems very... challenging.
I am designing a Linux device that will communicate with a Windows host via CDC-NCM. As this is intended only for point-to-point communication, I don't need a unique EUI-48 address; I intend to use a hard-coded value on every device. I do not want to purchase an IEEE MA-S assignment for this. Is there an IEEE standards-compliant, or at least generally accepted, practice for choosing an EUI-48 in scenarios like this?
Edit: Would using an LAA unicast address be a generally accepted practice for this sort of situation?
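In case it helps to see the bit layout being asked about: a locally administered unicast EUI-48 just needs the I/G bit (LSB of the first octet) clear and the U/L bit (the next bit) set. A minimal Python sketch follows; the helper name is mine, and randomizing the remaining bits is only one possible policy (a hard-coded value works equally well):

```python
import random

def random_laa_unicast():
    """Generate an EUI-48 that is unicast (I/G = 0) and locally
    administered (U/L = 1); the other 46 bits are free to choose."""
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] & 0xFC) | 0x02   # clear I/G bit, set U/L bit
    return ":".join(f"{o:02x}" for o in octets)

print(random_laa_unicast())   # second hex digit is always 2, 6, a or e
```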
I am new to INET and ns-3.
I am currently deciding between FLoRa (OMNeT++ based) and LoRaWAN (ns-3 based). Which one is better in terms of features, and which one is easier to learn quickly?
I would really appreciate it if someone could guide me. I am not focusing on machine learning, just resource allocation problems. Have a nice day.
Based on my personal experience, FLoRa (OMNeT++) has the following limitations:
FLoRa doesn't consider mobility.
FLoRa doesn't take into account any type of interference (intra/inter spreading factor interference).
A LoRaWAN gateway should implement 8 parallel reception paths, but this is not modeled in FLoRa.
With ADR, the network server should assign spreading factors; this feature is not supported in FLoRa (a minimal sketch of such assignment logic follows after this list).
FLoRa doesn't support ADR in unconfirmed mode.
Simulation with multiple gateways has problems.
FLoRa doesn't provide the long range defined by LoRaWAN.
The above features are implemented in the ns-3 based LoRaWAN module. Compared to ns-3 LoRaWAN, the FLoRa implementation is harder to work with.
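As a concrete illustration of the missing ADR piece, here is a hedged Python sketch of network-server spreading factor assignment, loosely following Semtech's recommended ADR algorithm; the required-SNR table and the 10 dB installation margin are common defaults, not values taken from either simulator:

```python
# Demodulation-floor SNR per spreading factor (dB), a commonly used table.
REQUIRED_SNR_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
INSTALLATION_MARGIN_DB = 10.0   # safety margin; an assumption to be tuned

def adr_assign_sf(current_sf, recent_uplink_snrs_db):
    """Pick a new SF from the best SNR among recent uplinks."""
    snr_margin = (max(recent_uplink_snrs_db)
                  - REQUIRED_SNR_DB[current_sf] - INSTALLATION_MARGIN_DB)
    steps = int(snr_margin // 3)     # each ~3 dB of margin buys one SF step
    sf = current_sf
    while steps > 0 and sf > 7:      # a lower SF means a faster data rate
        sf -= 1
        steps -= 1
    return sf

print(adr_assign_sf(12, [-2.0, -4.5, -3.1]))   # 8 dB margin -> SF 10
```

(A full implementation would also adjust the device's TX power once SF7 is reached.)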
I would like to reproduce the experiment of Dr. Adrian Thompson, who used a genetic algorithm to configure a chip (an FPGA) so it could distinguish between two different sound signals in an extremely efficient way. For more information please visit this link:
http://archive.bcs.org/bulletin/jan98/leading.htm
After some research I found this FPGA board:
http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=836&PartNo=1
Is this board capable of reproducing Dr. Adrian Thompson's experiment, or am I in need of another?
Thank you for your support.
In terms of programmable logic, the DE1-SoC is roughly 20x bigger and has roughly 70x as much embedded memory. Practically any modern FPGA is bigger than the "Xilinx XC6216" cited by his papers, as was linked to you in the other instance of this question you asked.
That said, most modern FPGAs don't allow the same fine granularity of configuration as older FPGAs - the internal routing and block structures are more complex, and FPGA vendors want to protect their products and compel you to use their CAD tools.
In short, yes, the DE1-SoC will be able to contain any design from 12+ years ago. As for replicating the specific functions, you should do some more research to determine if the methods used are still feasible with modern chips and CAD tools.
Edit:
user1155120 elaborated on the features of the XC6216 (see link below) that were of value to Thompson.
Fast Configuration: A larger device will generally take longer to configure, as you have to send more configuration data. That said, I/O interfaces are faster than they were 15 years ago, so it depends on your definition of "fast".
Reconfiguration: Cyclone V chips (like the one in the DE1-SoC) do support partial reconfiguration, but the subscription version of the Quartus II software is required, in addition to a separate license to support PR. I don't believe it supports wildcard reconfiguration, though I could be mistaken.
Memory-Mapped Addressing: The DE1-SoC's internal data can be accessed through the USB Blaster interface. However, this requires using the System Console on the host PC, so it's not direct access.
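If you do attempt a replication, the outer loop of Thompson's experiment is just a genetic algorithm wrapped around in-circuit evaluation. A rough Python sketch, where program_fpga() and measure_fitness() are hypothetical helpers you would have to implement for your own board (crossover is omitted for brevity, and all the constants are guesses):

```python
import random

POP_SIZE, GENOME_BITS, GENERATIONS, MUTATION_RATE = 50, 1800, 5000, 0.002

def evolve(program_fpga, measure_fitness):
    """Intrinsic hardware evolution: every fitness test reconfigures the
    real device and measures its response to the two test tones."""
    pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = []
        for genome in pop:
            program_fpga(genome)                  # configure the device
            scored.append((measure_fitness(), genome))
        scored.sort(key=lambda s: s[0], reverse=True)
        elite = [g for _, g in scored[:POP_SIZE // 5]]   # keep the top 20%
        pop = elite + [
            [bit ^ (random.random() < MUTATION_RATE)     # mutate an elite parent
             for bit in random.choice(elite)]
            for _ in range(POP_SIZE - len(elite))
        ]
    return scored[0]
```

Whether measure_fitness() can exploit the same analog-domain effects Thompson saw depends heavily on the device, which is really the point of the answer above.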
Assuming there is a task (e.g. an image processing method with a lot of math) which is reasonable to implement on an FPGA, in the sense of this answer: https://stackoverflow.com/a/8695228/544463
Is there any known (that you can actually name) successful application or practice of combining it with a "dedicated" (custom-designed) supercomputing cluster (HPC), e.g. with an Infiniband stack? I wonder if that has already been done and to what extent it was successful.
My main motivation for the question is that http://en.wikipedia.org/wiki/Reconfigurable_computing is a long-term (academic) perspective for the future development of cluster computing as a distinctive alternative to cloud computing (the latter concentrates more on flexibility at the software (higher) level, but also through possible "reconfiguration"). Is it already practical?
I would also expect somebody is doing research on this... It would be nice to learn about results.
Well, it's not FPGA, but D. E. Shaw's Anton computer for molecular dynamics is famously ASICs connected with a custom high-speed network; J. P. Morgan uses clusters of FPGAs in its risk-analysis calculations (recent Forbes article here). Convey Computer has been pushing FPGA + x86 + high-speed networking fairly hard for the past couple of years, so presumably there's some sort of market there...
http://www.maxeler.com/ - they build racks of Intel PCs hosting custom boards stuffed with FPGAs (and - critically - the associated software and FPGA code) to speed up seismic processing, financial analysis and the like.
I think they could be regarded as successful (I gather they turn a profit) and have big customers from finance and oil companies amongst their clientele.
Is there any known (that you can actually name) successful application or practice of combining it with a "dedicated" (custom-designed) supercomputing cluster (HPC), e.g. with an Infiniband stack? I wonder if that has already been done and to what extent it was successful.
It's being attempted academically with Novo-G.
You might be interested in Maxwell.
I know that Cray used to have a series of supercomputers some years ago that combined AMD Opterons with Xilinx FPGAs (iirc) through a HyperTransport bus, basically allowing you to create your own specialized processor for custom workloads. According to their website though, they now seem to have dropped FPGAs in favor of GPUs.
For the current research, there's always Google Scholar...
Update: After a bit of searching, it appears to have been the Cray XT5h, which had the possibility of using FPGA coprocessors...
Some have already been mentioned (Convey, Cray), some not (e.g. BEEcube).
But one of the biggest FPGA clusters I ever heard of is missing:
the Large Hadron Collider at CERN. It produces enormous amounts of data (2.7 Terabit/s), and they use more than 100 FPGAs to filter and reduce that data down to a manageable rate.
It does not fit your request of being connected to a dedicated HPC cluster, but it is an HPC cluster of its own (the FPGAs used at the higher hierarchy levels are FX parts that include two PowerPCs, so they are also some kind of "normal" cluster).
There is quite a lot of published work in reconfigurable computing applications.
Here's a list of links to SRC Computers-centric published papers.
There's the Center for High-Performance Reconfigurable Computing.
Google search "FPGA" or "reconfigurable" along with these academic institution names and you'll find many published papers. Some of the papers you'll find go back to 2004.
Jackson State University
Clemson University
Catholic University
George Washington University
George Mason University
National Center for Supercomputing Applications (NCSA)
University of Illinois (UIUC)
Naval Postgraduate School (NPS)
Air Force Research Lab (AFRL)
University of Dayton Research Institute (UDRI)
University of Florida
University of Arkansas
There also was a reconfigurable-centric conference hosted by NCSA, the Reconfigurable Systems Summer Institute (RSSI).
This list is certainly not exhaustive, but it will get you started.
Disclosures: I currently work for SRC Computers, LLC; I worked at NCSA/UIUC; and I chaired the RSSI conference for its first two years.
Yet another great use case was developed by Adapteva, called Parallella (they ran a Kickstarter project).
They are developing an Epiphany-series coprocessor controlled by a dual-core ARM processor (which shares the board).
I am very much looking forward to having this toy in my hands!
PS
Since it was largely inspired by Arduino (and similar ARM-based) systems, this project is still limited to 1 Gbps networking.
I have raw data grabbed from a spectrometer that was monitoring wifi (802.11b) channel 6
(two laptops in ad-hoc mode pinging each other).
I would like to decode this data in MATLAB.
I have it as a complex vector of 4.6 million complex samples, and its spectrum looks quite nice. I am looking for a document a bit less complicated than the IEEE 802.11 standard (which I have).
I can share the measurement data with other people.
There are now a few solutions around for decoding 802.11 using Software Defined Radio (SDR) techniques. As mentioned in a previous answer, there is software based on GNU Radio - specifically there's gr-ieee802-11 and also 802.11n+. Plus, higher-end SDR boards like WARP utilise FPGA-based implementations of 802.11. There's also a bunch of implementations of 802.11 for MATLAB available, e.g. 802.11a.
If your data is really raw then you basically have to build every piece of the signal-processing chain in software, which is possible but not really straightforward. Have you checked the relevant Wikipedia page? You might use GNU Radio instead of starting from scratch.
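To make "not really straightforward" concrete, here is a hedged first step in Python/NumPy rather than MATLAB: despreading the 1 Mbps DSSS/DBPSK portion of such a capture with the 11-chip Barker sequence. The file name, the 2 samples per chip, and the crude timing estimate are assumptions about the recording, not known properties of it:

```python
import numpy as np

samples = np.fromfile("capture.cfile", dtype=np.complex64)  # hypothetical file

# 11-chip Barker sequence used by 802.11b for the 1 and 2 Mbps rates.
BARKER = np.array([+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1], dtype=np.float32)
SPC = 2                                   # samples per chip at 22 Msps
ref = np.repeat(BARKER, SPC)              # one despreading reference symbol

corr = np.correlate(samples, ref, mode="valid")
start = int(np.argmax(np.abs(corr[:len(ref)])))   # crude symbol timing
sym = corr[start::len(ref)]               # one correlator output per symbol

# DBPSK: a ~180 degree phase jump between symbols is a 1, otherwise a 0.
bits = (np.abs(np.angle(sym[1:] * np.conj(sym[:-1]))) > np.pi / 2).astype(int)
```

After this you would still need descrambling, PLCP preamble/header parsing, and CRC checking before you see actual MAC frames, which is where the standard (or existing SDR decoder sources) becomes unavoidable.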
I have used the IEEE 802.11 standard to encode and decode data in MATLAB.
Encoding data is an easy task.
Decoding is a bit more sophisticated.
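To illustrate why the encode side is the easy part, here is a minimal Python/NumPy sketch of the 1 Mbps scheme (DBPSK plus 11-chip Barker spreading); it ignores scrambling, the PLCP preamble/header, and pulse shaping, so it is a toy, not a standard-conformant transmitter:

```python
import numpy as np

BARKER = np.array([+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1], dtype=np.float32)

def dsss_dbpsk_encode(bits):
    """DBPSK: a 1 bit flips the carrier phase, a 0 bit keeps it;
    every resulting symbol is spread by the 11-chip Barker sequence."""
    phase = 1.0
    symbols = []
    for b in bits:
        if b:
            phase = -phase
        symbols.append(phase * BARKER)
    return np.concatenate(symbols).astype(np.complex64)

chips = dsss_dbpsk_encode([1, 0, 1, 1, 0])   # 5 bits -> 55 chips
```

Feeding these chips into a Barker correlator (like the despreading sketch earlier, with SPC = 1) recovers the original bits, which is a useful sanity check before touching real captures.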
I agree with Stan, it is going to be tough doing everything yourself. You may get some ideas from the projects on CGRAN, like:
https://www.cgran.org/wiki/WifiLocalization