I am new to INET and ns-3.
I am currently deciding between FLoRa (OMNeT++ based) and LoRaWAN (ns-3 based). Which one is better in terms of features, and which one is easier to learn quickly?
Would really appreciate it if someone could guide me. I am not focusing on machine learning, just on resource allocation problems. Have a nice day.
Based on my personal experience, FLoRa (OMNeT++) has the following limitations:
FLoRa does not model mobility.
FLoRa does not take into account any type of interference (intra- or inter-spreading-factor interference).
A LoRaWAN gateway should implement 8 parallel reception paths, but FLoRa does not model them.
With ADR, the network server should assign spreading factors; this feature is not supported in FLoRa.
FLoRa does not support ADR in unconfirmed mode.
Simulations with multiple gateways are problematic.
FLoRa does not provide the long range defined by LoRaWAN.
The above features are implemented in the ns-3 based LoRaWAN module. Compared to ns-3 LoRaWAN, FLoRa is also more difficult to work with.
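To give a feel for the ns-3 side, a minimal end-device/gateway scenario looks roughly like the sketch below. The helper names follow the bundled examples of the signetlabdei/lorawan module (e.g. simple-network-example.cc); treat them as assumptions and check them against the version you install.

```cpp
// Minimal ns-3 LoRaWAN scenario (a sketch, assuming the signetlabdei/lorawan
// module; helper names follow its simple-network-example.cc -- verify them
// against the module version you install).
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/mobility-module.h"
#include "ns3/propagation-loss-model.h"
#include "ns3/propagation-delay-model.h"
#include "ns3/lora-channel.h"
#include "ns3/lora-phy-helper.h"
#include "ns3/lorawan-mac-helper.h"
#include "ns3/lora-helper.h"

using namespace ns3;
using namespace ns3::lorawan;  // newer module versions; older ones use ns3 directly

int main (int argc, char *argv[])
{
  // Channel: log-distance path loss + constant-speed propagation delay.
  Ptr<LogDistancePropagationLossModel> loss = CreateObject<LogDistancePropagationLossModel> ();
  Ptr<PropagationDelayModel> delay = CreateObject<ConstantSpeedPropagationDelayModel> ();
  Ptr<LoraChannel> channel = CreateObject<LoraChannel> (loss, delay);

  LoraPhyHelper phyHelper;
  phyHelper.SetChannel (channel);
  LorawanMacHelper macHelper;
  LoraHelper helper;

  NodeContainer endDevices, gateways;
  endDevices.Create (10);
  gateways.Create (1);
  MobilityHelper mobility;               // defaults to constant positions
  mobility.Install (endDevices);
  mobility.Install (gateways);

  // Class A end devices.
  phyHelper.SetDeviceType (LoraPhyHelper::ED);
  macHelper.SetDeviceType (LorawanMacHelper::ED_A);
  helper.Install (phyHelper, macHelper, endDevices);

  // Gateway (the GW PHY models the 8 parallel reception paths).
  phyHelper.SetDeviceType (LoraPhyHelper::GW);
  macHelper.SetDeviceType (LorawanMacHelper::GW);
  helper.Install (phyHelper, macHelper, gateways);

  // Assign spreading factors from the link budget.
  LorawanMacHelper::SetSpreadingFactorsUp (endDevices, gateways, channel);

  Simulator::Stop (Hours (1));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```

The module also ships NetworkServerHelper/ForwarderHelper for the network-server side (including ADR), which is the part the list above says FLoRa is missing.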
Is there an option for time synchronization, e.g. IEEE AVB/TSN, of wireless nodes (IEEE 802.11 stations) within the INET framework?
And if so, how is the resource reservation realized on the medium access level?
The point coordination function (PCF) has not been implemented yet: https://github.com/inet-framework/inet/blob/master/src/inet/linklayer/ieee80211/mac/coordinationfunction/Pcf.h#L40
2020-04-21 UPDATE:
Avnu has a white paper on the concept: https://avnu.org/wireless-tsn-paper/
Avnu Alliance members... wrote the Avnu Alliance Wireless TSN – Definitions, Use Cases & Standards Roadmap white paper in an effort to generate awareness and start defining work required in Avnu to enable wireless TSN extensions in alignment with wired TSN systems and operation models.
2018-04-03 UPDATE:
IEEE Std. 802.1AS-2011 (gPTP) has a clause 12 titled "Media-dependent layer specification for IEEE 802.11 links." So, yes, it seems time synchronization is possible over WIFI, and is in fact defined in an IEEE standard.
2017-12-13 UPDATE:
It looks like the OpenAvnu project has been working on this idea. Check out this pull request, which seems to implement the precision time-stamping required for AVB on a WIFI connection.
OpenAvnu Pull Request #734: "Added wireless timestamper and port code"
This should probably be asked in multiple questions, with each question relating to the implementation of one of the core AVB/TSN protocols on a WIFI (802.11) network. Audio video bridging (AVB) and time sensitive networking (TSN) are not IEEE standards or protocols. What we call AVB or TSN (I'm just going to use AVB from now on) is a singular name for the use and implementation of multiple IEEE standards in order to achieve real-time media transfer.
These core protocols are:
IEEE Std. 802.1BA-2011: Profiles and configurations of IEEE standards which define what an AVB endpoint or bridge needs to do. This is the closest we get to one single standard for AVB.
IEEE Std. 1722(-2016): A Layer 2 audio video transport protocol (AVTP)
IEEE Std. 1722.1(-2013): Audio video discovery, enumeration, connection management and control (AVDECC) for 1722-based devices
IEEE Std. 802.1AS(-2011): Generalized precision time protocol (gPTP)
IEEE Std. 802.1Q(-2014): FQTSS and SRP
(note that according to the IEEE TSN webpage, currently published TSN-specific standards will be rolled into 802.1Q, so the list above should still be accurate)
Because stream reservation (SRP), timing (gPTP), and media transport (1722 at Layer 2 or 1733 at Layer 3) are independent, your question should probably address them independently.
How can/should IEEE 802.1AS (gPTP) be implemented in a WIFI (802.11) network?
How can/should IEEE 802.1Q (SRP and FQTSS) be implemented in a WIFI network?
1. I have nowhere near the experience these standards developers have, and some of them have explored gPTP on WIFI extensively. The "how" of gPTP is well explained by Kevin Stanton of Intel here.
And for WIFI in particular, Norman Finn from Cisco had some notes on using gPTP with WIFI networks here.
I couldn't find anything that explicitly laid out how best to use/implement gPTP with WIFI. Ethernet is really where a lot of this is happening right now.
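Independent of the transport, what gPTP actually computes is small: the peer-delay exchange yields four timestamps, from which the link delay and the clock offset fall out. A minimal sketch of that arithmetic (plain C++, not tied to any framework):

```cpp
#include <cstdint>

// Timestamps from the gPTP peer-delay exchange, in nanoseconds:
//   t1: Pdelay_Req sent by initiator     t2: Pdelay_Req received by responder
//   t3: Pdelay_Resp sent by responder    t4: Pdelay_Resp received by initiator
int64_t meanLinkDelay(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
{
    // Round trip minus the responder's turnaround time, halved.
    return ((t4 - t1) - (t3 - t2)) / 2;
}

// Offset of the local clock from the grandmaster, given a Sync message
// sent at ts (master time) and received at tr (local time).
int64_t offsetFromMaster(int64_t ts, int64_t tr, int64_t linkDelay)
{
    return tr - ts - linkDelay;
}
```

The hard part on 802.11 is not this arithmetic but capturing t1..t4 close to the antenna, which is exactly what the OpenAvnu wireless timestamper pull request mentioned above is after.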
2. Craig Gunther from Harman says:
Simply implement[ing] the SRP control protocol without performing the related reservation actions. ... 802.11ak along with 802.1Qbz may make this much simpler. .... 802.11ac and 802.11ad have created some interesting new technologies that may help with reservations...
Source: http://www.ieee802.org/1/files/public/docs2014/at-cgunther-srp-for-802-11-0114-v01.pdf
Personally, I feel like guaranteed latency and reliability are very hard to ask for with a network that has to do things like carrier-sense multiple access with collision avoidance (CSMA/CA), but that's just my inexperienced opinion. It certainly would be cool, but it seems very... challenging.
I am implementing spectrum sensing for VANETs using SUMO, OMNeT++ and Veins. With these three, I believe I can simulate traffic scenarios. Is it also possible to perform spectrum sensing within the nodes (secondary users in VANETs) with only those three software packages, or do I need to install MiXiM for cognitive radios as well?
Thanks,
Rop
You ask "is it possible" and you mention C++ libraries containing simulation models. This makes the question somewhat hard to answer. Yes, the libraries you mention can support you in writing a simulation that does what you described.
If your question is whether any of the libraries already contains code that implements the functionality you describe, the answer is no. You need to write that part yourself.
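To make the "write it yourself" part concrete: an energy-detection sensing component can be an ordinary OMNeT++ cSimpleModule that periodically samples the received power on the channel and compares it against a threshold. The sketch below uses only plain OMNeT++ APIs; measureChannelPowerDbm() is a hypothetical hook that you would implement against Veins' PHY.

```cpp
#include <omnetpp.h>

using namespace omnetpp;

// Periodic energy-detection spectrum sensing (sketch).
// measureChannelPowerDbm() is a hypothetical hook: wire it to the Veins PHY.
class SpectrumSensor : public cSimpleModule
{
  protected:
    cMessage *senseTimer = nullptr;
    double thresholdDbm;    // decision threshold (NED parameter)
    simtime_t interval;     // sensing period (NED parameter)

    virtual void initialize() override
    {
        thresholdDbm = par("thresholdDbm");
        interval = par("senseInterval");
        senseTimer = new cMessage("sense");
        scheduleAt(simTime() + interval, senseTimer);
    }

    virtual void handleMessage(cMessage *msg) override
    {
        if (msg == senseTimer) {
            double powerDbm = measureChannelPowerDbm();
            bool channelBusy = (powerDbm > thresholdDbm);
            EV << "sensed " << powerDbm << " dBm, busy=" << channelBusy << "\n";
            // A real secondary user would record/act on the decision here.
            scheduleAt(simTime() + interval, senseTimer);
        }
    }

    // Placeholder so the sketch runs; replace with a query of the received
    // power tracked by the Veins PHY (e.g. its interference bookkeeping).
    double measureChannelPowerDbm() { return uniform(-110, -60); }

  public:
    virtual ~SpectrumSensor() { cancelAndDelete(senseTimer); }
};

Define_Module(SpectrumSensor);
```

The NED file would declare the thresholdDbm and senseInterval parameters; the real work is replacing the placeholder measurement with a query of the reception/interference state Veins already tracks.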
I would like to reproduce the experiment of Dr. Adrian Thompson, who used a genetic algorithm to produce a chip (FPGA) which can distinguish between two different sound signals in an extremely efficient way. For more information please visit this link:
http://archive.bcs.org/bulletin/jan98/leading.htm
After some research I found this FPGA board:
http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=836&PartNo=1
Is this board capable of reproducing Dr. Adrian Thompson's experiment, or do I need another?
Thank you for your support.
In terms of programmable logic, the DE1-SoC is about 20x bigger and has about 70x as much embedded memory. Practically any modern FPGA is bigger than the "Xilinx XC6216" cited by his papers, as was linked to you in the other instance of this question you asked.
That said, most modern FPGAs don't allow the same fine granularity of configuration as older FPGAs: the internal routing and block structures are more complex, and FPGA vendors want to protect their products and compel you to use their CAD tools.
In short, yes, the DE1-SoC will be able to contain any design from 12+ years ago. As for replicating the specific functions, you should do some more research to determine whether the methods used are still feasible with modern chips and CAD tools (a sketch of the evolutionary loop itself follows at the end of this answer).
Edit:
user1155120 elaborated on the features of the XC6216 (see link below) that were of value to Thompson.
Fast Configuration: A larger device will generally take longer to configure, as you have to send more configuration data. That said, I/O interfaces are faster than they were 15 years ago, so it depends on your definition of "fast".
Reconfiguration: Cyclone V chips (like the one in the DE1-SoC) do support partial reconfiguration, but the subscription version of the Quartus II software is required, in addition to a separate license to support PR. I don't believe it supports wildcard reconfiguration, though I could be mistaken.
Memory-Mapped Addressing: The DE1-SoC's internal data can be accessed through the USB Blaster interface. However, this requires using the System Console on the host PC, so it's not direct access.
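For orientation, the method itself ("intrinsic" hardware evolution) is an ordinary generational GA whose fitness evaluation happens to program the real chip and measure its behaviour. A sketch, where programAndMeasureFitness() stands in for the board-specific part (hypothetical; on a DE1-SoC you would drive it through the vendor's configuration interface):

```cpp
#include <algorithm>
#include <random>
#include <vector>

using Bitstream = std::vector<int>;  // one 0/1 entry per configuration bit

// Hypothetical board hook: configure the FPGA with this bitstream, feed it
// the two test tones, and score how well the output separates them.
// Placeholder body so the sketch compiles; replace with real board I/O.
double programAndMeasureFitness(const Bitstream &b)
{
    return std::count(b.begin(), b.end(), 1);
}

int main()
{
    const int popSize = 50, generations = 5000, genomeBits = 1800;
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5), mutate(0.002);
    std::uniform_int_distribution<int> pick(0, popSize - 1);

    // Random initial population of configuration bitstreams.
    std::vector<Bitstream> pop(popSize, Bitstream(genomeBits));
    for (auto &b : pop)
        for (auto &bit : b)
            bit = coin(rng);

    for (int g = 0; g < generations; ++g) {
        // Evaluate every genome on the real silicon.
        std::vector<double> fit(popSize);
        for (int i = 0; i < popSize; ++i)
            fit[i] = programAndMeasureFitness(pop[i]);

        // Tournament selection, uniform crossover, per-bit mutation.
        std::vector<Bitstream> next(popSize);
        for (int i = 0; i < popSize; ++i) {
            int a = pick(rng), b = pick(rng);
            const Bitstream &p1 = fit[a] > fit[b] ? pop[a] : pop[b];
            int c = pick(rng), d = pick(rng);
            const Bitstream &p2 = fit[c] > fit[d] ? pop[c] : pop[d];
            next[i] = p1;
            for (int j = 0; j < genomeBits; ++j) {
                if (coin(rng)) next[i][j] = p2[j];
                if (mutate(rng)) next[i][j] ^= 1;
            }
        }
        pop = std::move(next);
    }
    return 0;
}
```

One feasibility caveat worth researching up front: the XC6216 could safely be loaded with arbitrary bitstreams, whereas on most modern FPGAs a random configuration can create internal driver contention, so you need a safe encoding before evaluating random genomes on real silicon.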
Assuming there is a task (e.g. an image processing method with a lot of math) which is reasonable to implement on an FPGA in the sense of this answer: https://stackoverflow.com/a/8695228/544463
Is there any known (that you can actually name) successful application or practice of combining it with a "dedicated" (custom-designed) supercomputing cluster (HPC), e.g. with an Infiniband stack? I wonder if that has already been done and to what extent it was successful.
My main motivation for the question is that http://en.wikipedia.org/wiki/Reconfigurable_computing is a long-term (academic) perspective for the future development of cluster computing as a distinctive alternative to cloud computing (the latter concentrates more on flexibility at the software (higher) level, but also through possible "reconfiguration"). Is it already practical?
I would also expect somebody to be doing research on this... It would be nice to learn about their results.
Well, it's not FPGA, but D.E. Shaw's Anton computer for molecular dynamics is famously ASICs connected with a custom high-speed network; J. P. Morgan uses clusters of FPGAs in its risk-analysis calculations (recent Forbes article here). Convey Computer has been pushing FPGA + x86 + high-speed networking fairly hard for the past couple of years, so presumably there's some sort of market there...
http://www.maxeler.com/ - they build racks of Intel PCs hosting custom boards stuffed with FPGAs (and - critically - the associated software and FPGA code) to speed up seismic processing, financial analysis and the like.
I think they could be regarded as successful (I gather they turn a profit) and have big customers from finance and oil companies amongst their clientele.
Is there any known (that you can actually name) successful application or practice of combining it with a "dedicated" (custom-designed) supercomputing cluster (HPC), e.g. with an Infiniband stack? I wonder if that has already been done and to what extent it was successful.
It's being attempted academically with Novo-G.
You might be interested in Maxwell.
I know that Cray used to have a series of supercomputers some years ago that combined AMD Opterons with Xilinx FPGAs (iirc) through a HyperTransport bus, basically allowing you to create your own specialized processor for custom workloads. According to their website though, they now seem to have dropped FPGAs in favor of GPUs.
For the current research, there's always Google Scholar...
Update: After a bit of searching, it appears to have been the Cray XT5h, which had the possibility of using FPGA coprocessors...
Some have already been mentioned (Convey, Cray), some not (e.g. BeeCube).
But one of the biggest FPGA clusters I have ever heard of is missing:
The Large Hadron Collider at CERN. It produces enormous amounts of data (2.7 Terabit/s), and more than 100 FPGAs are used to filter and reduce that data to make it manageable.
This does not fit your request of being connected to a dedicated HPC cluster, but it is an HPC cluster of its own (at the higher hierarchy levels the FPGAs used are FX parts, which include two PowerPCs, so they also form a kind of "normal" cluster).
There is quite a lot of published work in reconfigurable computing applications.
Here's a list of links to SRC Computers-centric published papers.
There's the Center for High-Performance Reconfigurable Computing.
Google search "FPGA" or "reconfigurable" along with these academic institution names and you'll find many published papers. Some of the papers you'll find go back to 2004.
Jackson State University
Clemson University
Catholic University
George Washington University
George Mason University
National Center for Supercomputing Applications (NCSA)
University of Illinois (UIUC)
Naval Postgraduate School (NPS)
Air Force Research Lab (AFRL)
University of Dayton Research Institute (UDRI)
University of Florida
University of Arkansas
There also was a reconfigurable-centric conference hosted by NCSA, the Reconfigurable Systems Summer Institute (RSSI).
This list is certainly not exhaustive, but it will get you started.
Disclosures: I currently work for SRC Computers, LLC, I worked at NCSA/UIUC and I chaired the RSSI conference its first two years.
Yet another great use case is Parallella, developed by Adapteva (they have a Kickstarter project).
They are developing the Epiphany series of processors, controlled by a dual-core ARM processor that shares the board.
I am very much looking forward to having this toy in my hands!
PS
Since it was largely inspired by Arduino (and similar ARM-based) systems, this project is still limited to 1 Gbps networking.
I have a computationally intensive task which I implemented with CUDA, and now I want to make it even faster with FPGAs (if possible).
The system I want to implement is a series of computations, each similar to matrix multiplication in the sense of being parallel. It also has some non-parallel parts in between, and it works with large amounts of data.
Although I want it as fast as possible, I have enough time to learn about and explore FPGAs.
Here I'm asking for suggestions on how to start down that path. Which FPGA should I choose, and where can I learn about it? Any websites, online classes, or books? I've decided to do this anyway, but your opinion on whether this will be faster on an FPGA would be helpful too.
The big wins from an FPGA over using a GPU come from:
Using non-standard word widths optimised for your application. This allows denser logic, which allows more parallel processing blocks (see the sketch after this list).
Using your knowledge of the required accesses to external RAM to schedule them in hardware more efficiently than a general-purpose memory controller can.
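As an illustration of the word-width point (a sketch assuming Xilinx's HLS ap_int types; Intel/Altera has equivalent arbitrary-precision types):

```cpp
#include <ap_int.h>  // Xilinx HLS arbitrary-precision integer types

// A multiply-accumulate sized exactly for 12-bit samples and 9-bit
// coefficients: each product fits in 21 bits and the 4-term sum in 23,
// so a 26-bit accumulator is comfortable. A GPU would spend a full
// 32-bit lane on each of these values.
ap_int<26> mac4(const ap_int<12> x[4], const ap_int<9> h[4])
{
    ap_int<26> acc = 0;
    for (int i = 0; i < 4; ++i)
        acc += ap_int<21>(x[i] * h[i]);
    return acc;
}
```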
The downside is getting data to and from the FPGA. Draw a data-transfer diagram before you start. Even if the FPGA provides infinite speedup, you might still find it's not worth the effort if there's loads of data to be shuffled to and fro!
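To make that concrete: end-to-end time is roughly t_in + t_compute + t_out, so even if the FPGA drove t_compute to zero, the best possible overall speedup is t_cpu / (t_in + t_out). For example, moving 1 GB each way over a PCIe link sustaining ~3 GB/s costs about 0.7 s in transfers alone; if the CPU version takes 1 s, nothing the FPGA does can get you past roughly 1.5x.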
It's likely you'll want a PCI Express based board, which is (I imagine) a whole new learning curve before you get to do anything with the FPGA - but if you're up for it, it'll be a very interesting task!
In terms of choosing FPGAs, have a play with the software tools from the various vendors - at the learning stage that's much more important than the chips themselves. You won't find (at this early learning stage) a show-stopper feature in any of the various chips. Also take into account the availability of boards with your required interfaces, and any IP cores you might need for high-speed interfacing (e.g. PCIe).
You can get a substantial speedup on most parallel problems with an FPGA.
However, in addition to implementing your computation on the FPGA, there's a lot of work involved in getting the data back and forth from the CPU/main memory. This will require implementation of (for example) a PCI Express endpoint in the FPGA logic (bus mastering for maximum speed) and custom drivers on the software side. Most operating systems will require those drivers to be developed in kernel mode.
And you can't just use the most straightforward approach for FPGA programming either. You're going to need to worry about pipelining and clock synchronization in order to maximize throughput.
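For instance, in an HLS flow the pipelining concern shows up as getting the initiation interval (II) of your inner loops down to 1, i.e. one new iteration issued every clock (a sketch, assuming Vivado HLS pragmas):

```cpp
#include <ap_int.h>

// Pipelined inner loop: the pragma asks for an initiation interval of 1,
// i.e. a new multiply-accumulate issued every clock. With an integer
// accumulator this is achievable; a floating-point accumulator carries a
// multi-cycle dependency and forces a larger II unless you split it into
// partial sums.
ap_int<48> dot(const ap_int<16> a[1024], const ap_int<16> b[1024])
{
    ap_int<48> acc = 0;
    for (int i = 0; i < 1024; ++i) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];
    }
    return acc;
}
```

Clock synchronization shows up the same way at the RTL level: every domain crossing (e.g. between the PCIe clock and your compute clock) needs explicit FIFOs or synchronizers.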
In other words, it's a substantially difficult task even for engineers with years of FPGA experience. I strongly suggest you find someone to work with on this. Depending on how proprietary your project is, you might find skilled academics willing to work with you as long as you provide them with all materials and publication rights.
If you're determined to go it alone, you'll need some hardware. Many different companies offer FPGAs wired up as accelerators, for example http://www.nallatech.com/pci-express-cards.html
Depending on whether you choose a Xilinx or Altera FPGA, you'll find considerable sample code and tutorials for getting PCI express working.