How to determine communication range in OMNeT++ 5.6.1 and INET 4.2

I use the following line in omnetpp.ini, but I get an "Unused entry (does not match any parameters)" warning.
*.host*.wlan[0].radio.transmitter.communicationRange = 500m
How do I set the communication range? And the interference range?

Getting that warning may or may not indicate a problem: the IDE cannot always determine the exact type of a module, so false-positive warnings about unused parameters are to be expected. Whether it matters depends on the rest of your configuration. The INET wireless tutorial gives ample examples of how to configure a wireless network.
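For the record, the default radios have no communicationRange parameter at all; their range follows from transmit power, receiver sensitivity and the path-loss model, which is why the entry can be reported as unused. The parameter does exist in the unit disk model. A minimal omnetpp.ini sketch (module and parameter names are from INET 4.2's unit disk radio; the exact paths are assumptions that depend on your network):
*.host*.wlan[0].typename = "AckingWirelessInterface"
*.host*.wlan[0].radio.typename = "UnitDiskRadio"
*.radioMedium.typename = "UnitDiskRadioMedium"
# the unit disk transmitter exposes the ranges directly
*.host*.wlan[0].radio.transmitter.communicationRange = 500m
*.host*.wlan[0].radio.transmitter.interferenceRange = 800m
*.host*.wlan[0].radio.transmitter.detectionRange = 1000m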


How to get control info in the PPP module in the INET (version 4) framework

I am not new to OMNeT++ simulation, but this is challenging for me.
I want to develop some sniffing functionality that needs to obtain source and destination addresses. My code is based on the PPP module. I have tried many approaches, but the simulation either halts unexpectedly or fails with an invalid operation.
I tried:
auto *info = packet->getControlInfo();
then
EV_INFO << " Details" << info.detailedinfo;
Any help will be much appreciated.
Thank you.
Since INET 4.x, control info is no longer used; Chunks and Tags have been introduced instead. Take a look at the INET Developer's Guide - Working with Packets.
In short, to obtain the address field from the PPP header, the following code may be used:
auto packet = check_and_cast<Packet *>(msg);
const auto& pppHeader = packet->peekAtFront<PppHeader>();
auto addr = pppHeader->getAddress();
By the way: a PPP frame carries neither a source nor a destination address.
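Since the addresses you are after live in the network-layer header behind the PPP header, a sniffing sketch can pop the PPP header and peek at what follows. A hedged sketch, assuming the INET 4.x Packet API and an IPv4 payload:
auto packet = check_and_cast<Packet *>(msg);
packet->popAtFront<PppHeader>();  // consume the PPP header
const auto& ipv4Header = packet->peekAtFront<Ipv4Header>();
EV_INFO << "src=" << ipv4Header->getSrcAddress()
        << " dest=" << ipv4Header->getDestAddress() << endl;
Note that popAtFront() advances the packet's front pointer, so work on a copy (packet->dup()) if the packet still needs to be processed normally afterwards.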

Does INET 4.2 support partially overlapped channels?

Does INET 4.2 support POC (Partially Overlapped Channels)?
I ran a simulation, but a partially-overlapped-channel error occurred during the run. The error message says to set the ignorePartialInterference parameter, but I think ignoring partial interference would lower the accuracy of the results.
Thanks
INET 4.2 supports partially overlapping Wi-Fi channels, but you have to use a dimensional analog model.
These should be helpful:
https://inet.omnetpp.org/docs/showcases/wireless/crosstalk/doc/index.html
https://inet.omnetpp.org/docs/showcases/wireless/coexistence/doc/index.html
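For reference, a minimal omnetpp.ini sketch of the dimensional setup, based on the crosstalk showcase (module names are from INET 4.2; the channel numbers are illustrative assumptions):
*.radioMedium.typename = "Ieee80211DimensionalRadioMedium"
*.host*.wlan[0].radio.typename = "Ieee80211DimensionalRadio"
# put the hosts on partially overlapping 2.4 GHz channels
*.host1.wlan[0].radio.channelNumber = 1
*.host2.wlan[0].radio.channelNumber = 2
The dimensional analog model represents signal power over both time and frequency, which is what lets it compute the partial spectral overlap that the scalar model rejects.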

How to drive the DDS Compiler IP core from Xilinx

I completed Anton Potočnik's introductory guide to the Red Pitaya board and I am now able to send commands from the Linux machine running on the SoC to its FPGA logic.
I would like to modify the project further so that I can control the phase of the signal transmitted via the Red Pitaya's DAC. Some pins (7 down to 1) of the first GPIO port were still unused, so I started setting them from within the OS and used the Red Pitaya's LEDs to confirm that they were being set without interfering with the functionality of Anton Potočnik's "high bandwidth averager".
I then set the DDS compiler's Phase Offset Programmability to "streaming" mode so that it can be configured on the fly using the bits that currently control the Red Pitaya's LEDs. I used some slices to connect my signals to the AXI4-Stream Constant IP core, which in turn drives the DDS compiler.
Unfortunately the DAC just gives me a constant output of 500 mV.
I created a new project with a testbench for the DDS compiler, because synthesis takes a long time and doesn't give me much insight into what is happening.
Unfortunately all the output signals of the DDS compiler are undefined.
My question:
What am I doing wrong, and how can I proceed to control the DAC's phase?
EDIT 1: Here is my test bench.
The IP core is configured as follows, so many of the control signals that I provided should not be required:
EDIT 2: I changed declarations of the form m_axis_data_tready => '0' to m_axis_phase_tready => m_axis_phase_tready_signal. I also took a look at the wrapper file called dds_compiler_0.vhd and saw that it treats both m_axis_phase_tready and m_axis_data_tready as inputs.
My simulation results remained unchanged...
My new test bench can be found here.
EDIT 3: Vivado was just giving me the old simulation results; creating a new testbench, deleting the file under <project_name>.sim/sim_1/behav/xsim/simulate.log and restarting Vivado solved this problem.
I noticed that the wrapper file (dds_compiler_0.vhd) only has five ports:
aclk (in)
s_axis_phase_tvalid (in)
s_axis_phase_tdata (in)
m_axis_data_tvalid (out)
and m_axis_data_tdata (out)
So I removed all the unnecessary control signals and got a new simulation result, but I am still not receiving any useful output from the dds_compiler:
The corresponding testbench can be found here.
I also don't get any valid output when I include the control signals.
The corresponding testbench can be found here.
Looks like m_axis_data_tready is not connected. No data will come out unless that's asserted.
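As an aside: when an AXI4-Stream master is generated without a tready port (as in the five-port wrapper above), the core behaves as if the consumer were always ready, so data should flow as soon as the inputs are driven. For illustration, a self-contained testbench sketch for such a five-port wrapper; the port widths, clock period and stimulus value are assumptions:
library ieee;
use ieee.std_logic_1164.all;

entity tb_dds_compiler is
end entity;

architecture sim of tb_dds_compiler is
  signal aclk                : std_logic := '0';
  signal s_axis_phase_tvalid : std_logic := '0';
  signal s_axis_phase_tdata  : std_logic_vector(15 downto 0) := (others => '0');
  signal m_axis_data_tvalid  : std_logic;
  signal m_axis_data_tdata   : std_logic_vector(15 downto 0);
begin
  aclk <= not aclk after 4 ns;  -- 125 MHz, the Red Pitaya's sampling clock

  uut : entity work.dds_compiler_0
    port map (
      aclk                => aclk,
      s_axis_phase_tvalid => s_axis_phase_tvalid,
      s_axis_phase_tdata  => s_axis_phase_tdata,
      m_axis_data_tvalid  => m_axis_data_tvalid,
      m_axis_data_tdata   => m_axis_data_tdata
    );

  stimulus : process
  begin
    wait for 100 ns;
    s_axis_phase_tvalid <= '1';      -- phase word is valid from here on
    s_axis_phase_tdata  <= x"4000";  -- 90-degree offset for a 16-bit phase word
    wait;
  end process;
end architecture;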

IPv6: Interface IP operations are stopped with floating IP in HA failover

When the main node fails, its IPv6 address floats to the standby node, which is supposed to provide service on that IP from then on.
Given that both nodes coexist in the same LAN, the standby node often becomes unreachable. The interface is UP and RUNNING with the IPv6 address assigned, but all IP operations are stopped.
One possibility is that Duplicate Address Detection (DAD) kicks in when the IP is configured on the standby node. The RFC says all IP operations must then be stopped.
My question is about the specifics of the Linux kernel's IPv6 implementation. From the kernel code, I previously supposed that the sysctl variable "disable_ipv6" must be getting set. But the kernel does not disable IPv6; it just stops all IP operations on that interface.
Can anyone explain what the Linux kernel's IPv6 stack does when it "disables these IP operations" on DAD failure? Can this be reset to normal without taking the interface DOWN and UP? Any pointers into the code would be very helpful.
This article elaborates on the specification and on what actually happens in the kernel's IPv6 implementation with a floating IP configuration. It also suggests a solution:
http://criticalindirection.com/2015/06/30/ipv6_dad_floating_ips/
It mentions that for a user-assigned link-local address, the IPv6 allocation gets stuck in the tentative state, marked by IFA_F_TENTATIVE in the kernel. This state implies DAD is in progress and the IP is not yet validated. For an auto-assigned link-local address, if DAD fails the kernel retries accept_dad times (with a new auto-generated IP each time), and after that it disables IPv6 on the interface.
The solution it suggests: disable DAD before configuring the floating IP, and enable it again once the address is out of the tentative state.
For more details, refer to the link above.
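A shell sketch of that sequence (the interface name and address are placeholders):
# disable DAD on the interface before adding the floating IP
sysctl -w net.ipv6.conf.eth0.accept_dad=0
ip -6 addr add 2001:db8::10/64 dev eth0
# wait until the address has left the tentative state, then re-enable DAD
while ip -6 addr show dev eth0 | grep -q tentative; do sleep 0.1; done
sysctl -w net.ipv6.conf.eth0.accept_dad=1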
This is related to a bug in nova, bug #101134
The documentation for accept_dad says:
accept_dad - INTEGER
    Whether to accept DAD (Duplicate Address Detection).
    0: Disable DAD
    1: Enable DAD (default)
    2: Enable DAD, and disable IPv6 operation if MAC-based duplicate
       link-local address has been found.
So you can use sysctl -w net.ipv6.conf.default.accept_dad=0 to workaround the bug and disable DAD.
Alternatively, you can fix the bug by applying the proposed patches to nova/virt/libvirt/firewall.py from that same bug report.
If it is not already present in the NWFilterFirewall class, add the following method:
def nova_no_nd_reflection_filter(self):
    """This filter protects false positives on IPv6 Duplicate Address
    Detection (DAD).
    """
    uuid = self._get_filter_uuid('nova-no-nd-reflection')
    return '''<filter name='nova-no-nd-reflection' chain='ipv6'>
                  <!-- no nd reflection -->
                  <!-- drop if destination mac is v6 mcast mac addr and
                       we sent it. -->
                  <uuid>%s</uuid>
                  <rule action='drop' direction='in'>
                      <mac dstmacaddr='33:33:00:00:00:00'
                           dstmacmask='ff:ff:00:00:00:00' srcmacaddr='$MAC'/>
                  </rule>
              </filter>''' % uuid
Then, add this filter to your filter lists in _ensure_static_filters() by adding:
self._define_filter(self.nova_no_nd_reflection_filter())
after filter_set is defined.

Discovering maximum packet size

I'm working on a network-related project and I am using DTLS (TLS over UDP) to secure communications.
Reading the specifications for DTLS, I've noted that DTLS requires the DF flag (Don't Fragment) to be set.
On my local network, if I try to send a message bigger than 1500 bytes, nothing is sent. That makes perfect sense. On Windows, sendto() reports success, but nothing is sent.
I obviously cannot unset the DF flag manually, since it is mandatory for DTLS, and I'm not sure whether the 1500-byte limit (the MTU?) could change in some situations. I guess it can.
So my question is: is there a way to discover this limit using APIs?
If not, what would be the lowest possible value?
My software runs under UNIX (Linux/Mac OS X) and Windows, so different solutions for each OS are welcome ;)
Many thanks.
There is a minimum datagram size that every host must be able to accept: 576 bytes for IPv4, including the IP headers. So if you keep your packets below that, you don't have to worry about path-MTU discovery (that's what DNS does).
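On Linux specifically, a hedged sketch in C: enable path-MTU discovery on a connected UDP socket and read back the kernel's current estimate via the IP_MTU socket option (this option is Linux-only; Windows and Mac OS X need other mechanisms):
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Query the kernel's path-MTU estimate towards a UDP peer (Linux only). */
int query_path_mtu(const struct sockaddr_in *peer)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    /* Set DF on outgoing datagrams and let the kernel track the path MTU. */
    int pmtud = IP_PMTUDISC_DO;
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtud, sizeof(pmtud));

    /* Connecting fixes the destination, so IP_MTU has a path to report on. */
    if (connect(fd, (const struct sockaddr *)peer, sizeof(*peer)) < 0) {
        close(fd);
        return -1;
    }

    int mtu = -1;
    socklen_t len = sizeof(mtu);
    getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len);
    close(fd);
    return mtu;  /* an estimate; it can shrink as ICMP feedback arrives */
}
Note that the value starts out as the first-hop MTU and is only refined after ICMP "fragmentation needed" feedback, so it is a starting point rather than a guarantee.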
You probably need to auto-tune it by sending a range of packet sizes to the target and seeing which arrive; think binary search...
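A sketch of that probing loop; probe() stands for whatever send-and-wait-for-ack mechanism the application has (it is a placeholder, not a real API):
/* Binary-search the largest datagram size that gets through. */
int find_max_payload(int lo, int hi, int (*probe)(int size))
{
    int best = lo;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (probe(mid)) {   /* a datagram of `mid` bytes was acknowledged */
            best = mid;
            lo = mid + 1;
        } else {
            hi = mid - 1;
        }
    }
    return best;
}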
