VLAN Priority, DEI and ID are missing in UDP packets - Windows

I am building an Ethernet simulation project to send and receive UDP packets to and from an external device (let's call it A).
I am supposed to simulate multiple devices: some of them send UDP packets (let's call them B) and some receive UDP packets (let's call them C). B and C are on two different VLANs with two different IDs.
I used an external USB/Ethernet adapter for each of B and C; both are connected to a switch alongside the main device A (which can see both VLANs). I then configured the two adapters on Windows by setting "VLAN and Priority" to Enabled and setting the VLAN ID to the correct value for B and C respectively; finally, I set a static IP for each of them.
I then used Qt to create the simulation project. The receiving part works perfectly: device A transmits UDP packets to a multicast group, C joins that group, and I read the incoming frames.
The problem is with sending. I am able to send the frames, however the 4 bytes of the 802.1Q tag that carry the Priority, DEI and VLAN ID are missing, which means device A does not accept these frames and drops them.
In the screenshot below you can see, on the right, the healthy packets that are accepted by device A and, on the left, the simulated frames that are not accepted.
Comparison between accepted and rejected packets
Here is the code I use to bind and join the multicast group:
socket_1 = new QUdpSocket(this);
qDebug() << "Binding UDP Socket ...";
bool bind_res = socket_1->bind(QHostAddress("192.168.11.4"), 51011, QUdpSocket::ShareAddress);
if (!bind_res)
{
    qDebug() << "Failed to bind with error: " + socket_1->errorString();
    QApplication::quit();
}
bool join_res = socket_1->joinMulticastGroup(interface->GRP_IP, interface->Qinterface);
if (!join_res)
{
    qDebug() << "Failed to join with error: " + socket_1->errorString();
    QApplication::quit();
}
connect(socket_1, SIGNAL(readyRead()), this, SLOT(handleReadyRead()));
qDebug() << "UDP Socket initialized successfully ...";
And here is the function I use to send (interface->GRP_IP is the multicast IP):
void UDPSocket_VLAN11::sendUDP_1(QByteArray data)
{
    qint64 res = socket_1->writeDatagram(data, interface->GRP_IP, 50011);
    qDebug() << " --- Sending UDP Packet ---";
    qDebug() << "Sending to: " << interface->GRP_IP;
    qDebug() << "Sending port: " << port;
    qDebug() << "Sending Size: " << data.size();
    qDebug() << "Sending: " << data.toHex().toLower();
    qDebug() << "Sending Result: " << res;
}
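As a side note (this does not by itself add the 802.1Q tag), when sending multicast from a machine with several adapters it can help to pin the outgoing interface explicitly, so the datagrams are guaranteed to leave through the VLAN adapter. A minimal sketch, assuming interface->Qinterface is the QNetworkInterface of that adapter as used in the bind code above:
// Sketch only: QUdpSocket::setMulticastInterface() selects the adapter used
// for outgoing multicast datagrams; interface->Qinterface is assumed to be
// the QNetworkInterface of the VLAN-tagged adapter.
socket_1->setMulticastInterface(interface->Qinterface);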
Can someone please point out how to set these values, whether it is done in the VLAN configuration or on the Qt socket?

So yes, as #Zac67 mentioned, the main issue was that the USB/Ethernet adapters did not support this protocol. I had the choice of either keeping on looking for the right adapters or, as I finally did, changing the hardware setup: I got rid of the adapters, used the machine's native NIC Ethernet port instead, and configured it with Hyper-V to simulate the VLANs.

Related

How to find out the number of received packets from each sender

In OMNET++ with the INET framework, I want to find out how many packets are received from each sending node. I found the code below. Can anyone tell me what the function of the "it->second++" statement in this code is?
std::map<L3Address, uint64_t> recPkt;
auto it = recPkt.find(senderAddr);
if (it == recPkt.end())
    recPkt[senderAddr] = 1;
else
    it->second++;
Also, can anyone suggest how to display the number of received packets per node?
it is an iterator to an element of the std::map. An iterator is something like a pointer: here it points to a pair <L3Address, uint64_t>. The address of the sender is the first element of this pair, and the second one is the number of received packets.
The first element of this pair can be obtained using it->first, and the second via it->second.
The operation recPkt.find(senderAddr) checks whether recPkt contains an entry with the address senderAddr:
if not, it points to recPkt.end(); in that case a new entry is created and its value is set to 1 (because the first packet has just been received),
if an entry for senderAddr already exists, the second value of this element (the counter) is incremented using it->second++.
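As a side note, the whole check-and-insert block can be shortened, because std::map::operator[] value-initializes a missing entry to 0 before it is incremented:
// Equivalent shorter form: operator[] creates the entry with value 0 if it
// does not exist yet, so the counter can simply be incremented.
recPkt[senderAddr]++;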
To show the current number of received packets in the internal log window one may use:
for (auto it : recPkt) {
    EV << "From address " << it.first << " received " << it.second << " packets." << std::endl;
}
However, the better way is to write these values to statistics. The best place for that is the finish() method of your module:
void YourModule::finish() {
    // ...
    for (auto it : recPkt) {
        std::string name = "Received packet from ";
        name += it.first.str();                  // sender address
        recordScalar(name.c_str(), it.second);   // record one scalar per sender
    }
}
Reference: C++ Reference, std::map
EDIT
One more thing: the declaration of recPkt, i.e. the line
std::map<L3Address, uint64_t> recPkt;
must be a member of your class; recPkt cannot be declared just before its use, otherwise the counters would not survive between packets.
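For illustration, a minimal sketch of such a member declaration (the class name and include paths are assumptions based on a typical INET module):
#include <cstdint>
#include <map>
#include <omnetpp.h>
#include "inet/networklayer/common/L3Address.h"   // assumed location of L3Address

class YourModule : public omnetpp::cSimpleModule   // hypothetical module class
{
  protected:
    // One counter of received packets per sender address; as a member it
    // keeps its contents for the whole lifetime of the module.
    std::map<inet::L3Address, uint64_t> recPkt;

    virtual void finish() override;
    // ...
};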

OMNET++: How to obtain wireless signal power?

I am using the newly released INET 4.0 framework for OMNET++ and I would like to obtain the received signal strength value in a wireless host (of type AdhocHost). How may I do that?
In INET 4.0.0 the packet received by a module contains several tags. Among others there is the SignalPowerInd tag. According to SignalTag.msg:
This indication specifies the average analog signal power that was detected during receiving the packet.
It may be present on a packet from the physical layer to the application.
This tag is already present while the packet is being processed by the wireless MAC layer, and the packet received by the application layer still contains SignalPowerInd.
One can obtain the value of `SignalPowerInd` from a received radio packet in any layer using the standard API. For example, to obtain it in `UdpBasicApp` one should add the following to `UdpBasicApp.cc`:
#include "inet/physicallayer/common/packetlevel/SignalTag_m.h"
// ...
void UdpBasicApp::socketDataArrived(UdpSocket *socket, Packet *packet) {
if (packet->findTag<SignalPowerInd>() != nullptr) {
auto signalPowerInd = packet->getTag<SignalPowerInd>();
auto rxPower = signalPowerInd->getPower().get();
EV_INFO << "RX power= " << rxPower << "W" << endl;
}
// process incoming packet
processPacket(packet);
}
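If a logarithmic value is easier to read in the log, the raw value in watts can be converted to dBm with a small helper; this is only a convenience sketch, not part of INET's API:
#include <cmath>

// Convert a power value given in watts to dBm (0 dBm corresponds to 1 mW).
static double wattsToDbm(double watts)
{
    return 10.0 * std::log10(watts * 1000.0);
}

// Usage inside the tag check above:
//   EV_INFO << "RX power = " << wattsToDbm(rxPower) << " dBm" << endl;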

gammu phones table with multiple modem ports

EDIT: after some research, I know this problem is caused by the IMEI field in the phones table being used as the primary key; with a modem pool like a Wavecom with 16 ports, gammu detects just one IMEI.
I have one modem connected with 16 SIM card ports. Each config connects to the same database on my server, and sending and receiving SMS works like a charm. Each port has its own smsd service, like:
gammu-smsd -c /etc/gammu-smsdrc-modem1 --pid /var/run/gammu-smsdrc-modem1 --daemon
gammu-smsd -c /etc/gammu-smsdrc-modem2 --pid /var/run/gammu-smsdrc-modem2 --daemon
Each port has its own PhoneID, like modem1 and modem2. The problem is: why does the phones table in the gammu database keep replacing its data with that of the last gammu-smsd service run?
For example: if I run the first config, the phones table will contain all the information (signal, IMEI, ...) from the 1st port, but when I run the 2nd gammu-smsd, the data from the 1st port is gone, replaced by the 2nd port's config.
Here is my smsdrc config for modem1 (/etc/gammu-smsdrc-modem1):
[gammu]
port = /dev/ttyUSB0
model =
connection = at115200
synchronizetime = yes
logfile = /var/log/gammu-smsdrc-modem1
logformat = nothing
use_locking =
gammuloc =
[smsd]
service=sql
logfile=/var/log/gammu-smsdrc-modem1
debuglevel=0
Driver=native_mysql
User=root
Password=root
PC=localhost
Database=test
PhoneID=modem1
And here is my smsd config for modem2 (/etc/gammu-smsdrc-modem2):
[gammu]
port = /dev/ttyUSB1
model =
connection = at115200
synchronizetime = yes
logfile = /var/log/gammu-smsdrc-modem2
logformat = nothing
use_locking =
gammuloc =
[smsd]
service=sql
logfile=/var/log/gammu-smsdrc-modem2
debuglevel=0
Driver=native_mysql
User=root
Password=root
PC=localhost
Database=test
PhoneID=modem2
After some reading of the gammu API docs I have figured it out. Yes, as in the edit above, it is because I use one modem with 16 SIM card ports and gammu detects a single IMEI even though the modem has 16 ports. The quick answer to my question is that no configuration file can handle this problem, so we have to modify some lines of code in smsd/services/sql.c:
if (SMSDSQL_option(Config, SQL_QUERY_DELETE_PHONE, "delete_phone",
"DELETE FROM phones WHERE ", ESCAPE_FIELD("IMEI"), " = %I", NULL) != ERR_NONE) {
return ERR_UNKNOWN;
}
.......
.......
.......
if (SMSDSQL_option(Config, SQL_QUERY_UPDATE_RECEIVED, "update_received",
"UPDATE phones SET ",
ESCAPE_FIELD("Received"), " = ", ESCAPE_FIELD("Received"), " + 1"
" WHERE ", ESCAPE_FIELD("IMEI"), " = %I", NULL) != ERR_NONE) {
return ERR_UNKNOWN;
}
The final code will be:
if (SMSDSQL_option(Config, SQL_QUERY_DELETE_PHONE, "delete_phone",
"DELETE FROM phones WHERE ", ESCAPE_FIELD("ID"), " = %P", NULL) != ERR_NONE) {
return ERR_UNKNOWN;
}
.......
.......
.......
if (SMSDSQL_option(Config, SQL_QUERY_UPDATE_RECEIVED, "update_received",
"UPDATE phones SET ",
ESCAPE_FIELD("Received"), " = ", ESCAPE_FIELD("Received"), " + 1"
" WHERE ", ESCAPE_FIELD("ID"), " = %P", NULL) != ERR_NONE) {
return ERR_UNKNOWN;
}
Then recompile gammu as usual and modify the phones table to set ID as the primary key. I'm not an expert in C, so I hope someone can make a better change for a better result, but for me it's enough for temporary use.

libusb-win32 - can't read from keyboard

I'm trying to write a custom 'driver' for a keyboard (HID, if it matters), under Windows 7. The final goal is having two keyboards connected to the computer, but mapping all of the keys of one of them to special (custom) functions.
My idea is to use libusb-win32 as the 2nd keyboard's driver and write a small program that reads data from the keyboard and acts upon it. I've successfully installed the driver and the device is recognized by my program, but all transfers time out, even though I'm pressing keys.
Here's my code:
struct usb_bus *busses;
struct usb_device *dev;
char buf[1024];
usb_init();
usb_find_busses();
usb_find_devices();
busses = usb_get_busses();
dev = busses->devices;
cout << dev->descriptor.idVendor << '\n' << dev->descriptor.idProduct << '\n';
usb_dev_handle *h = usb_open(dev);
cout << usb_set_configuration(h, 1) << '\n';
cout << usb_claim_interface(h, 0) << '\n';
cout << usb_interrupt_read(h, 129, buf, 1024, 5000) << '\n';
cout << usb_strerror();
cout << usb_release_interface(h, 0) << '\n';
cout << usb_close(h) << '\n';
and it returns:
1133
49941
0
0
-116
libusb0-dll:err [_usb_reap_async] timeout error
0
0
(I'm pressing lots of keys in those 5 seconds)
There's only one bus, one device, one configuration, one interface and one endpoint.
The endpoint has bmAttributes = 3, which implies I should use interrupt transfers (right?).
So why am I not getting anything? Am I misusing libusb? Do you know a way to do this without libusb?
It's pretty simple, actually: when reading from the USB device, you must read exactly the right number of bytes. You know what that number is by reading wMaxPacketSize from the endpoint descriptor.
Apparently a read request with any other size simply results in a timeout.
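A minimal sketch of that fix against the libusb-0.1 style API used in the question, reusing dev and h from the code above (the 0/0/0/0 descriptor indices rely on the single-configuration, single-interface, single-endpoint layout described there):
// Read wMaxPacketSize from the endpoint descriptor and request exactly that
// many bytes (a boot-protocol keyboard typically reports 8 bytes).
struct usb_endpoint_descriptor *ep =
    &dev->config[0].interface[0].altsetting[0].endpoint[0];
int len = ep->wMaxPacketSize;
char buf[64];                         // large enough for any low/full-speed packet

int r = usb_interrupt_read(h, ep->bEndpointAddress, buf, len, 5000);
if (r < 0)
    cout << "interrupt read failed: " << usb_strerror() << '\n';
else
    cout << "got " << r << " bytes\n";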

Multiple applications write to one console - mixed/messed output

I have the following system architecture (which cannot be changed - legacy code): one main application invokes one or more other applications, and these applications interact over an IP protocol.
All applications write to one console window. Unfortunately, the console output can get messed up (one character from app 1, the next character from app 2, the next character from app 4, etc.).
All applications write to the console via one Logger.dll (which provides static logging functions) using cout/cerr.
Is there a way to prevent mixed logging messages in this setup?
Thanks in advance.
EDIT: code added:
void Logger::Log(const std::string & componentName, const std::string & Text, LogLevel logLevel, bool logToConsole, bool beep)
{
    std::ostringstream stream;
    switch (logLevel)
    {
    case LOG_INFO:
        if (logToConsole)
        {
            stream << componentName << ": INFO " << Text;
            mx_console.lock();   // this is a static boost::mutex
            std::cout << stream.str() << std::endl;
            std::cout.flush();
            mx_console.unlock();
        }
        break;
    case LOG_STATUS:
        stream << componentName << ": STATUS " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    case LOG_WARNING:
        stream << componentName << ": WARNING " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    default:;
    }
    if (beep)
        Beep(500, 50);
}
Since you have a separate logging facility, you can at minimum use some kind of locking (a global, cross-process mutex) to avoid interspersing messages from different applications too much. To make the output more readable and greppable, add some identifying information, like the process name or PID. Wrapping your Logger.dll around an existing logging library sounds like an option as well.
Alternatively, you could have the logging functions forward messages to your main application and let it sort out the synchronization and interleaving.
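A minimal sketch of such a cross-process lock on Windows, using a named Win32 mutex around the console write (the function and mutex names are arbitrary examples, not part of the existing Logger.dll):
#include <windows.h>
#include <iostream>
#include <string>

// Sketch: serialize console writes across processes with a named mutex.
// Every process that creates/opens a mutex with the same name shares one lock.
void LogLine(const std::string& line)
{
    HANDLE hMutex = CreateMutexA(nullptr, FALSE, "MyAppConsoleLock");
    if (hMutex != nullptr)
        WaitForSingleObject(hMutex, INFINITE);

    std::cout << line << std::endl;

    if (hMutex != nullptr)
    {
        ReleaseMutex(hMutex);
        CloseHandle(hMutex);
    }
}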
Syslog might be a solution for you, as it is intended to handle logs from various sources. Syslog was developed for Unix, but this answer shows versions for Windows.
You can change your logger to log to syslog instead of the console.
I have now replaced all the
std::cout << stream.str();
statements with
std::string str = stream.str();
printf(str.c_str());
and now the output isn't messed up character-wise anymore.
But I don't have a good explanation for this behavior; does anybody know why?
