Forward Error Correction and Packet Order Correction - wireless

This article describes two techniques used to improve wireless throughput: Forward Error Correction and Packet Order Correction.
http://www.enterprisenetworksandservers.com/monthly/art.php?3514
Does anyone know how to enable them?

Well, for WAN FEC and POC you need a piece of network hardware (an appliance) that does this. Here is one from Silver Peak:
Silver Peak Systems
I know your question was related to wireless, but I don't think such an appliance/router exists yet for this market.
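For intuition, the core idea behind packet-level FEC can be sketched in a few lines: alongside every group of k data packets, send one XOR parity packet, and the receiver can rebuild any single lost packet without a retransmission. This is a toy illustration of the principle only, not what any commercial appliance actually implements (real products use stronger erasure codes):

```python
# Toy packet-level FEC: one XOR parity packet per group of k
# equal-length data packets. Any single lost packet in the group
# can be reconstructed from the survivors plus the parity.

def make_parity(packets):
    """XOR equal-length packets together into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet from the k-1 received ones."""
    return make_parity(received + [parity])

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
# Suppose pkt2 is lost in transit; XORing the rest with the parity
# recovers it without asking the sender to retransmit.
rebuilt = recover([group[0], group[2]], parity)
assert rebuilt == b"pkt2"
```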


Which FPGA should I choose? (or should I choose another hardware) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
You see, guys, I've always been interested in buying one of these development boards, but they were too expensive for me as a student, since I had to spend on other projects. However, I sold some things I don't use and finally made the money to buy one.
So here is my situation: I am currently studying electronic engineering, but I've been dedicating a lot of time to programming, reverse-engineering stuff, and understanding some fairly complex cryptographic algorithms (mainly the ones used for hashing), prime-number testing, NP-hard kinds of algorithms, and some graph path-search algorithms. So I wanted to buy an FPGA, anywhere under $200, that could do the job if I wanted to compute these kinds of tasks on it; right now I use my computer for some of them.
Let's say, as an example, that I wanted to build an architecture for WPA or MD5 brute-forcing. We all know the numbers go nuts if the password is longer than 8 characters, and even though I'm more interested in deeply understanding how the protocols work and how to implement these ideas, it would just be nice to see it working.
The options I've looked at so far are:
-Cyclone V GX Starter Kit ($179)
which has: Cyclone V GX 5CGXFC5C6F27C7N Device
https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=830
-DE10-Nano Kit ($130)
which has: Cyclone V 5CSEBA6U23I7N Device
https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=1046
But I'm kind of new to FPGAs; I mean, I've worked with them, but only for university projects with the university's FPGAs, so I didn't get to know them very well.
So my final question is: does FPGA speed depend only on the number of logic elements it has? Or should I care more about the other "add-ons" the boards have? Because even though the second one is cheaper, it has like 30% more logic elements than the first one, but I don't know whether that would mean better performance.
Also, here's the datasheet for the Cyclone V devices:
https://www.altera.com/en_US/pdfs/literature/hb/cyclone-v/cv_51001.pdf
Also, thank you for taking the time to read this, guys. I know it's usually more interesting to solve programming issues and that kind of stuff, haha.
EDIT: Forgot the "1" on the "$179"
The boards you've listed have the same speed grade, so there won't be any difference in raw speed.
The GX series includes 3 Gbps transceivers, and that particular Starter Kit has 2.5 V levels on the HSMC connector. Unless you will be using that connector with some really fast hardware (like an 80 Msps ADC/DAC, etc.), it's unlikely you will benefit from the GX, except perhaps from the slightly larger number of hardware multipliers available, but that depends on your exact project and needs.
Lots of GPIO lines will be lost to the HSMC connector. There are boards that fan the HSMC connector out into convenient 40-pin GPIO connectors, but that will cost another $56. And you might still have difficulties with the external hardware you'll be playing with, since the I/O banks on those lines use 2.5 V levels, while most likely you will have lots of 3.3 V devices. They're compatible to a degree and under some conditions, but it's safer to assume there will be issues.
If you're eventually going to play with DIY stuff, you will need more I/O lines at the more convenient voltage of 3.3 V. The DE10-Nano kit looks more promising to me in the general case. There are two ARM cores you can use to run higher-level logic under Linux. It has Arduino-compatible connectors, so you can play with existing shields. It's also larger than the Starter Kit in terms of ALMs and on-chip memory; you will need those to instantiate lots of parallel blocks to crunch your numbers.
Sure, if you already have some daughterboards in HSMC format, or are planning to get one, then you will need a kit with HSMC support.

INET time synchronization of IEEE 802.11 stations

Is there an option for time-synchronization e.g. IEEE AVB/TSN of wireless nodes (IEEE 802.11 stations) within the inet framework?
And if so, how is the resource reservation realized on the medium access level?
The point coordination function (PCF) is still not implemented: https://github.com/inet-framework/inet/blob/master/src/inet/linklayer/ieee80211/mac/coordinationfunction/Pcf.h#L40
2020-04-21 UPDATE:
Avnu has a white paper on the concept: https://avnu.org/wireless-tsn-paper/
Avnu Alliance members... wrote the Avnu Alliance Wireless TSN – Definitions, Use Cases & Standards Roadmap white paper in an effort to generate awareness and start defining work required in Avnu to enable wireless TSN extensions in alignment with wired TSN systems and operation models.
2018-04-03 UPDATE:
IEEE Std. 802.1AS-2011 (gPTP) has a clause 12 titled "Media-dependent layer specification for IEEE 802.11 links." So, yes, it seems time synchronization is possible over WIFI, and is in fact defined in an IEEE standard.
2017-12-13 UPDATE:
It looks like the OpenAvnu project has been working on this idea. Check out this pull request, which seems to implement the precision time-stamping required for AVB on a WIFI connection.
OpenAvnu Pull Request #734: "Added wireless timestamper and port code"
This should probably be asked in multiple questions, with each question relating to the implementation of one of the core AVB/TSN protocols on a WIFI (802.11) network. Audio video bridging (AVB) and time sensitive networking (TSN) are not IEEE standards or protocols. What we call AVB or TSN (I'm just going to use AVB from now on) is a singular name for the use and implementation of multiple IEEE standards in order to achieve real-time media transfer.
These core protocols are:
IEEE Std. 802.1BA-2011: Profiles and configurations of IEEE standards which define what an AVB endpoint or bridge needs to do. This is the closest we get to one single standard for AVB.
IEEE Std. 1722(-2016): A Layer 2 audio video transport protocol (AVTP)
IEEE Std. 1722.1(-2013): Audio video discovery, enumeration, connection management and control (AVDECC) for 1722-based devices
IEEE Std. 802.1AS(-2011): Generalized precision time protocol (gPTP)
IEEE Std. 802.1Q(-2014): FQTSS and SRP
(note that according to the IEEE TSN webpage, currently published TSN-specific standards will be rolled into 802.1Q, so the list above should still be accurate)
Because stream reservation (SRP), timing (gPTP), and media transport (1722 or 1733) are independent, your question should probably be asking about them independently.
How can/should IEEE 802.1AS (gPTP) be implemented in a WIFI (802.11) network?
How can/should IEEE 802.1Q (SRP and FQTSS) be implemented in a WIFI network?
1. I have nowhere near the experience these standards developers have, and some of them have explored gPTP on WIFI extensively. The "how" of gPTP is well explained by Kevin Stanton of Intel here.
And for WIFI in particular, Norman Finn from Cisco had some notes on using gPTP with WIFI networks here.
I couldn't find anything that explicitly laid out how best to use/implement gPTP with WIFI. Ethernet is really where a lot of this is happening right now.
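For what it's worth, the arithmetic at the heart of gPTP (inherited from IEEE 1588 PTP) is the standard two-way timestamp exchange; on WIFI the hard part is obtaining accurate hardware timestamps, not the math. A minimal sketch, with made-up timestamp values:

```python
# Standard PTP/gPTP two-way time transfer: four timestamps per exchange.
#   t1: master sends Sync          (master clock)
#   t2: slave receives Sync        (slave clock)
#   t3: slave sends Delay_Req      (slave clock)
#   t4: master receives Delay_Req  (master clock)

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (slave's clock offset from master, mean path delay),
    assuming the link delay is symmetric in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Illustrative numbers: slave runs 5 units ahead, link delay is 3 units.
t1 = 100
t2 = 108   # = t1 + delay(3) + offset(5)
t3 = 110
t4 = 108   # = t3 + delay(3) - offset(5)
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
assert (offset, delay) == (5.0, 3.0)
```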
2. Craig Gunther from Harman says:
Simply implement[ing] the SRP control protocol without performing the related reservation actions. ... 802.11ak along with 802.1Qbz may make this much simpler. .... 802.11ac and 802.11ad have created some interesting new technologies that may help with reservations...
Source: http://www.ieee802.org/1/files/public/docs2014/at-cgunther-srp-for-802-11-0114-v01.pdf
Personally, I feel like guaranteed latency and reliability are very hard to ask for with a network that has to do things like carrier-sense multiple access with collision avoidance (CSMA/CA), but that's just my inexperienced opinion. It certainly would be cool, but it seems very... challenging.

Choosing FPGA with enough inputs

I need an FPGA that has 50 I/O pins. I'm going to use it as a MUX. I thought about using a MUX or a CPLD, but the guy I'm designing this circuit for says that he might need more features in the future, so it has to be an FPGA.
So I'm looking for one with enough design examples on the internet. Can you suggest anything (a family, for example)?
Also if you could tell me what I should consider when picking, that would be great. I'm new to this and still learning.
This is a very open question, and the answer as stated could be very long, if it's possible at all given all the options. What I suggest is that you make a list of all current and future requirements. This will help you communicate your needs (here and elsewhere) and force you, and the people you work with on this project, to think about them more carefully. Saying that "more features in the future" will be needed is meaningless; would you buy the most capable FPGA on the market? No.
When you've compiled this list and thought about the requirements, post them here again, and then you'd get plenty of help.
Another possibility to get feedback and help is to describe what you are trying to do/solve. Maybe an FPGA is not the best solution -- people here will tell you that.
I agree with Saar, but you have to go back one step further: when you decide which technology to target, keep in mind that an FPGA needs a lot of things to run, i.e. different voltages for core, I/O, auxiliary, and probably more. You also need some kind of configuration mechanism, as an FPGA is in general (there are exceptions) SRAM-based and therefore needs to be configured at startup. CPLDs are less flexible but much easier to handle...

Neural Networks package in Wolfram Mathematica is not Parallel?

I just created a VERY large neural net, albeit on very powerful hardware, and imagine my shock and disappointment when I realized that NeuralFit[] from the NeuralNetworks` package seems to use only one core, and not even to its fullest capacity. I was heartbroken. Do I really have to write an entire NN implementation from scratch? Or did I miss something simple?
My net takes 200 inputs through 2 hidden layers of 300 neurons to produce 100 outputs. I understand we're talking about trillions of calculations, but as long as I know my hardware is the weak point, that can be upgraded. It should handle training such a net fairly well if left alone for a while (a 4 GHz 8-thread machine with 24 GB of 2000 MHz CL7 memory running RAID-0 SSDs on SATA-III; I'm fairly sure).
Ideas? Suggestions? Thanks in advance for your input.
I am the author of the Neural Networks package. It is easy to parallelize the evaluation of a neural network given the input, that is, to compute the output of the network given the inputs (and all the weights, the parameters of the network). However, this evaluation is not very time consuming, and for most problems it is not very interesting to parallelize. On the other hand, the training of the network is often time consuming and, unfortunately, not easy to parallelize. The training can be done with different algorithms, and the best ones are not easy to parallelize. My contact info can be found at the product's homepage on the Wolfram web site. Improvement suggestions are very welcome.
The latest version of the package works fine on versions 9 and 10 if you switch off the suggestions bar (under Preferences). The reason is that the package uses the old HelpBrowser for its documentation, which crashes in combination with the suggestions bar.
Yours, Jonas
You can contact the author of the package directly, he is a very approachable fellow and might be able to make some suggestions.
I'm not sure how you wrote your code or how it is written inside the package you are using, but try to use vectorization; it really speeds up linear algebra computations. The ml-class.org course shows how it's done.
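To illustrate the vectorization point outside Mathematica, here is a NumPy sketch with toy data sized like the 200-input layer from the question: a single batched matrix product computes the same result as a per-input loop, but hands the whole job to the optimized linear algebra library at once.

```python
import time
import numpy as np

# Toy layer sized like the question's net: 200 inputs -> 300 neurons.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((200, 300))
x_batch = rng.standard_normal((1000, 200))  # 1000 training examples

# Looped: one input vector at a time.
start = time.perf_counter()
loop_out = np.array([x @ W1 for x in x_batch])
loop_time = time.perf_counter() - start

# Vectorized: the whole batch as one matrix product.
start = time.perf_counter()
vec_out = x_batch @ W1
vec_time = time.perf_counter() - start

# Same numbers either way; the batched version is typically much faster.
assert np.allclose(loop_out, vec_out)
```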

Best Practice in designing a client/server communication protocol

I am currently integrating a server functionality into a software that runs a complicated measuring system.
The client will be a software from another company that will periodically ask my software for the current state of the system.
Now my question is: what is the best way to design the protocol to provide this state information? There are many different states that have to be transmitted.
I have seen solutions that define a set of state flags and then transfer only, for example, a 32-bit number in which each bit stands for a different state.
Example:
Bit 0 - System Is online
Bit 1 - Measurement in Progress
Bit 2 - Temperature stabilized
... and so on.
This solution produces very little traffic, but it seems very inflexible to me and also very hard to debug.
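For reference, decoding such a packed status word takes only a few lines; the bit layout below is the one from the example above, and everything else is illustrative:

```python
# Decode a packed 32-bit status word into named boolean flags.
# Bit positions follow the example in the question.
STATUS_BITS = {
    0: "SystemOnline",
    1: "MeasurementInProgress",
    2: "TemperatureStabilized",
}

def decode_status(word):
    """Map a packed status word to {flag_name: bool}."""
    return {name: bool(word >> bit & 1) for bit, name in STATUS_BITS.items()}

# 0b101: system online and temperature stabilized, no measurement running.
state = decode_status(0b101)
assert state == {"SystemOnline": True,
                 "MeasurementInProgress": False,
                 "TemperatureStabilized": True}
```

The debugging pain the question mentions is real: on the wire you only ever see an opaque integer, so both sides must agree on a table like STATUS_BITS and keep it in sync forever.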
The other way I think it could be done is to transfer each state preceded by its name:
Example:
#SystemOnline#1#MeasurementInProgress#0#TemperatureStabilized#0#.....
This solution produces a lot more traffic, but it appears much more flexible, because the order in which the states are transferred is irrelevant. It should also be a lot easier to debug.
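A minimal parser for this kind of delimited name/value message could look like the following (the format and flag names are the ones from the example above; a real protocol would also need error handling for malformed input):

```python
# Parse a '#Name#Value#Name#Value#...' message into a dict of booleans.
def parse_state(msg):
    parts = msg.strip("#").split("#")
    # Fields alternate name, value: pair them up.
    return {name: value == "1" for name, value in zip(parts[::2], parts[1::2])}

msg = "#SystemOnline#1#MeasurementInProgress#0#TemperatureStabilized#0#"
assert parse_state(msg) == {"SystemOnline": True,
                            "MeasurementInProgress": False,
                            "TemperatureStabilized": False}
```

Note how the receiver never needs a shared bit-position table: any state it doesn't recognize can simply be ignored, which is what makes this format easier to evolve.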
Does anybody know from experience a good way to solve this problem, or a good source of best practices? I just want to avoid reinventing the wheel.
Once you've made a network request to a remote system, waited for the response, and received and decoded the response, it hardly matters whether the response is 32 bits or 32K. And how many times a second will you be generating this traffic? If less than 1, it matters even less. So use whatever is easiest to implement and most natural for the client, be it a string, or XML.
