How to control packet generation rate and send interval - OMNeT++

I am trying to create a UDP application in which the packet generation rate and the packet send interval can be controlled separately.
My code is in https://github.com/11187162/udpApp
With the above code I am not getting the expected outcome; instead, I get the following runtime error:
scheduleAt(): Message (omnetpp::cMessage)sendTimer is currently scheduled, use cancelEvent() before rescheduling -- in module (inet::UdpOwnApp) SensorNetworkShowcaseA.sensor3.app[0] (id=176), at t=0.058384669093s, event #10
The code for handleMessageWhenUp() is given below.
void UdpOwnApp::handleMessageWhenUp(cMessage *msg)
{
    if (msg->isSelfMessage()) {
        ASSERT(msg == selfMsg);
        switch (selfMsg->getKind()) {
            case START:
                processStart();
                break;
            case GENERATE:
                generatePacket();
                break;
            case SEND:
                processSend();
                break;
            case STOP:
                processStop();
                break;
            default:
                throw cRuntimeError("Invalid kind %d in self message", (int)selfMsg->getKind());
        }
    }
    else
        socket.processMessage(msg);
}
Would anyone please help me?
Thank you

You have written that "generation rate and packet send interval can be controlled separately", but you use the same self-message both to control the generation of packets and to send them. While a self-message is scheduled it cannot be scheduled again.
Consider adding a new self-message for generation of packets.
By the way: numGenerate is set to zero and it is never changed.
EDIT
Assuming that selfMsg1 is used only for generating packets, the following code may be used:
void UdpOwnApp::handleMessageWhenUp(cMessage *msg) {
    if (msg->isSelfMessage()) {
        if (msg == selfMsg) {
            switch (selfMsg->getKind()) {
                case START:
                    processStart();
                    break;
                case SEND:
                    processSend();
                    break;
                case STOP:
                    processStop();
                    break;
                default:
                    throw cRuntimeError("Invalid kind %d in self message", (int)selfMsg->getKind());
            }
        } else if (msg == selfMsg1) {
            if (selfMsg1->getKind() == GENERATE) {
                generatePacket();
            }
        }
    }
    else
        socket.processMessage(msg);
}
And in initialize() you should create an instance of selfMsg1.
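For example, a minimal sketch of that part of initialize(), assuming UdpOwnApp is derived from INET's UdpBasicApp; the timer name "generateTimer" is illustrative, and the first scheduleAt() calls for both timers may live in processStart():
void UdpOwnApp::initialize(int stage)
{
    ApplicationBase::initialize(stage);
    if (stage == INITSTAGE_LOCAL) {
        // ... existing parameter reading ...
        selfMsg = new cMessage("sendTimer");
        // separate self-message that only drives packet generation
        selfMsg1 = new cMessage("generateTimer", GENERATE);
    }
}
Remember to cancelAndDelete(selfMsg1) in the destructor, and to reschedule selfMsg1 (not selfMsg) at the end of generatePacket().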

Related

Problem with QueueReceiving value in FreeRTOS

I'm implementing a task handler for a GUI task using LVGL and FreeRTOS on an ESP32.
Whenever I receive a queue message, the handler reads the message ID as 0.
When I send the message, it looks fine.
LOG:
I (105690) wifi_app: WIFI_APP_MSG_STA_DISCONNECTED: USER REQUESTED DISCONNECTION
I (105700) LVGL: SEND MESSAGE
I (105700) LVGL: SENDING msgID 4 // when send
I (105710) LVGL: RECEVING msgID: 0 // read message in handler
I (105710) LVGL: Get NTU from sensor
I (105720) http_server: HTTP_MSG_WIFI_USER_DISCONNECT
Handler: parsing the message ID with a switch statement
static void cmd_parsing(void *arg)
{
    lvgl_queue_message_t lvglMsg;
    for (;;)
    {
        if (xQueueReceive(lvgl_queue_handle, &lvglMsg, portMAX_DELAY))
        {
            ESP_LOGI(TAG, "Recieving msgID: %d", lvglMsg.msgID);
            switch (lvglMsg.msgID)
            {
            case LVGL_MSG_SEND_NTU:
                ESP_LOGI(TAG, "Get NTU from sensor");
                // lv_label_set_text(label_ntu, evt.string1);
                lv_label_set_text_fmt(label_ntu, "%0.3f NTU", lvglMsg.valueFloat);
                break;
            case LVGL_MSG_WIFI_CONNECT_INIT:
                ESP_LOGI(TAG, "Create spinner for connecting Wi-Fi");
                create_spinner();
                xSemaphoreGive(xGuiSemaphore);
                break;
            case LVGL_MSG_WIFI_USER_DISCONNECT:
                ESP_LOGI(TAG, "WIFI_USER_DISCONNECT");
                lv_obj_clean(settings_tab);
                create_list_settings();
                break;
            default:
                break;
            }
        }
    }
}
Send: sending the message ID from the Wi-Fi task
BaseType_t lvgl_send_message(lvgl_message_e msgID)
{
    ESP_LOGI(TAG, "SEND MESSAGE");
    ESP_LOGI(TAG, "Sending msgID %d", msgID);
    lvgl_queue_message_t lvglMsg;
    lvglMsg.msgID = msgID;
    return xQueueSend(lvgl_queue_handle, &lvglMsg, portMAX_DELAY);
}

How to check a received message type on OMNeT++?

I have declared three different message types in OMNeT++:
Layer
Ack
Reject
What I want to achieve is that every node in my network can send any of the message types mentioned above, and every message type has its own fields. But since the handleMessage(cMessage *) function accepts a cMessage*, I need to know the type of the received message to be able to cast it accordingly.
How would I go about it?
Here is my Layer message type:
message Layer {
    int layer;
    simtime_t timeFrame;
}
Each of your message types is represented by a class that inherits from cMessage. Therefore, dynamic_cast may be used to recognize the type of a message, for example this way:
void YourClass::handleMessage(cMessage *msg) {
    Layer *layer = dynamic_cast<Layer *>(msg);
    if (layer != nullptr) {
        // received Layer
    } else {
        Ack *ack = dynamic_cast<Ack *>(msg);
        if (ack != nullptr) {
            // received Ack
        } else {
            Reject *rej = dynamic_cast<Reject *>(msg);
            if (rej != nullptr) {
                // received Reject
            }
        }
    }
}
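The same dispatch reads a little flatter with early returns; this is only a stylistic sketch, still assuming the classes Layer, Ack and Reject generated from your .msg files:
void YourClass::handleMessage(cMessage *msg) {
    if (auto *layer = dynamic_cast<Layer *>(msg)) {
        // received Layer; the generated accessors, e.g. layer->getLayer(),
        // are available here
        return;
    }
    if (auto *ack = dynamic_cast<Ack *>(msg)) {
        // received Ack
        return;
    }
    if (auto *rej = dynamic_cast<Reject *>(msg)) {
        // received Reject
        return;
    }
}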

rocketmq throws exception "[TIMEOUT_CLEAN_QUEUE]broker busy, start flow control for a while"

version: rocketmq-all-4.1.0-incubating
We send messages at 1000 QPS using sync send, but the broker throws this exception:
[TIMEOUT_CLEAN_QUEUE] broker busy, start flow control for a while
Here is the related code:
while (true) {
    try {
        if (!this.brokerController.getSendThreadPoolQueue().isEmpty()) {
            final Runnable runnable = this.brokerController.getSendThreadPoolQueue().peek();
            if (null == runnable) {
                break;
            }
            final RequestTask rt = castRunnable(runnable);
            if (rt == null || rt.isStopRun()) {
                break;
            }
            final long behind = System.currentTimeMillis() - rt.getCreateTimestamp();
            if (behind >= this.brokerController.getBrokerConfig().getWaitTimeMillsInSendQueue()) {
                if (this.brokerController.getSendThreadPoolQueue().remove(runnable)) {
                    rt.setStopRun(true);
                    rt.returnResponse(RemotingSysResponseCode.SYSTEM_BUSY, String.format("[TIMEOUT_CLEAN_QUEUE]broker busy, start flow control for a while, period in queue: %sms, size of queue: %d", behind, this.brokerController.getSendThreadPoolQueue().size()));
                }
            } else {
                break;
            }
        } else {
            break;
        }
    } catch (Throwable ignored) {
    }
}
I find that the broker's default value of sendMessageThreadPoolNums is 1:
/**
 * thread numbers for send message thread pool, since spin lock will be used by default since 4.0.x, the default value is 1.
 */
private int sendMessageThreadPoolNums = 1; //16 + Runtime.getRuntime().availableProcessors() * 4;
private int pullMessageThreadPoolNums = 16 + Runtime.getRuntime().availableProcessors() * 2;
but in previous versions it wasn't 1. If I configure sendMessageThreadPoolNums = 100, will that resolve this issue? And how will it behave differently from the default value?
Thanks
SHORT ANSWER:
You have two choices:
Set sendMessageThreadPoolNums to a small number, say 1, which is the default value since version 4.1.x, and keep the default value of useReentrantLockWhenPutMessage=false, which was introduced in 4.1.x:
sendMessageThreadPoolNums=1
useReentrantLockWhenPutMessage=false
If you need a large number of threads to process message sending, you had better use useReentrantLockWhenPutMessage=true:
sendMessageThreadPoolNums=128 // large thread number
useReentrantLockWhenPutMessage=true // i.e. do NOT use the spin lock but a ReentrantLock when putting messages
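For reference, the corresponding broker.conf lines for the second choice; waitTimeMillsInSendQueue is the timeout checked by the cleanup loop shown above, and the value here is only illustrative:
sendMessageThreadPoolNums=128
useReentrantLockWhenPutMessage=true
# optional: give requests more time in the send queue before they are rejected
waitTimeMillsInSendQueue=400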

OMNeT++: changing the location of a function didn't work as expected

I am actually trying to edit the etherhost2 application so it sends to several destinations, and I have reached a point where it works only for the first time.
The original code works properly; by just moving the two calls sendBurstPackets() and scheduleNextPacket(simTime()) into the if condition together with destMACAddress = resolveDestMACAddress(), those two functions are only called once.
Does that mean that destMACAddress is set once for the whole simulation?
Original Code
void EtherTrafGen::handleMessage(cMessage *msg)
{
    if (!isNodeUp())
        throw cRuntimeError("Application is not running");
    if (msg->isSelfMessage()) {
        if (msg->getKind() == START) {
            destMACAddress = resolveDestMACAddress();
            // if no dest address given, nothing to do
            if (destMACAddress.isUnspecified())
                return;
        }
        sendBurstPackets();
        scheduleNextPacket(simTime());
    }
    else
        receivePacket(check_and_cast<cPacket *>(msg));
}
My Changes
void EtherTrafGen::handleMessage(cMessage *msg)
{
    if (!isNodeUp())
        throw cRuntimeError("Application is not running");
    if (msg->isSelfMessage()) {
        if (msg->getKind() == START) {
            if (!multipacket)
            {
                destMACAddress = resolveDestMACAddress();
                sendBurstPackets();
                scheduleNextPacket(simTime());
            }
            // if no dest address given, nothing to do
            if (destMACAddress.isUnspecified())
                return;
        }
    }
    else
        receivePacket(check_and_cast<cPacket *>(msg));
}
Only the first self-message satisfies the condition (msg->getKind() == START), which means the MAC address is set once for each host for the whole simulation. Removing that condition made it work, as sketched below.
I am worried that other self-messages might be mistaken for that one. It would be better to have a separate EtherHost app that works only for my simulation.
If anyone has an idea how to look at all self-messages, I would appreciate being informed.
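For clarity, here is a sketch of the change described above, based on the original handleMessage() shown earlier: the destination MAC is resolved for every self-message, not only for START, so later bursts can pick up a new destination.
void EtherTrafGen::handleMessage(cMessage *msg)
{
    if (!isNodeUp())
        throw cRuntimeError("Application is not running");
    if (msg->isSelfMessage()) {
        // resolve on every self-message instead of only on START
        destMACAddress = resolveDestMACAddress();
        // if no dest address given, nothing to do
        if (destMACAddress.isUnspecified())
            return;
        sendBurstPackets();
        scheduleNextPacket(simTime());
    }
    else
        receivePacket(check_and_cast<cPacket *>(msg));
}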

Poco c++ Websocket server connection reset by peer

I am writing a kind of chat server app where a message received from one websocket client is sent out to all other websocket clients. To do this, I keep the connected clients in a list. When a client disconnects, I need to remove it from the list (so that future "sends" do not fail).
However, sometimes when a client disconnects, the server just gets a "connection reset by peer" exception, and the code never gets a chance to remove the client from the list. Is there a way to guarantee a "nice" notification that the connection has been reset?
My code is:
void WsRequestHandler::handleRequest(HTTPServerRequest &req, HTTPServerResponse &resp)
{
    int n;
    Poco::Timespan timeOut(5, 0);
    try
    {
        req.set("Connection", "Upgrade"); // knock out any extra tokens firefox may send such as "keep-alive"
        ws = new WebSocket(req, resp);
        ws->setKeepAlive(false);
        connectedSockets->push_back(this);
        do
        {
            flags = 0;
            // note: bitwise OR of the select flags, not logical ||
            if (!ws->poll(timeOut, Poco::Net::Socket::SELECT_READ | Poco::Net::Socket::SELECT_ERROR))
            {
                // cout << ".";
            }
            else
            {
                n = ws->receiveFrame(buffer, sizeof(buffer), flags);
                if (n > 0)
                {
                    if ((flags & WebSocket::FRAME_OP_BITMASK) == WebSocket::FRAME_OP_BINARY)
                    {
                        // process and send out to all other clients
                        DoReceived(ws, buffer, n);
                    }
                }
            }
        }
        while ((flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE);
        // client has closed, so remove from list
        for (vector<WsRequestHandler *>::iterator it = connectedSockets->begin(); it != connectedSockets->end(); ++it)
        {
            if (*it == this)
            {
                connectedSockets->erase(it);
                logger->information("Connection closed %s", ws->peerAddress().toString());
                break;
            }
        }
        delete ws;
        ws = NULL;
    }
    catch (WebSocketException &exc)
    {
        // never gets called
    }
}
See the receiveFrame() documentation:
Returns the number of bytes received. A return value of 0 means that the peer has shut down or closed the connection.
So if the receiveFrame() call returns zero, you can act accordingly.
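Applied to the receive loop in the question, that check might look like this minimal sketch (the list-removal code stays the poster's own and runs after the loop exits):
n = ws->receiveFrame(buffer, sizeof(buffer), flags);
if (n == 0)
{
    // peer shut down or closed the connection; leave the loop so the
    // clean-up below removes this handler from connectedSockets
    break;
}
if ((flags & WebSocket::FRAME_OP_BITMASK) == WebSocket::FRAME_OP_BINARY)
{
    DoReceived(ws, buffer, n);
}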
I do not know if this is an answer to the question, but the implementation you have shown does not deal with PING frames. This is currently (as of my POCO version, 1.7.5) not done automatically by the POCO framework. I put up a question about that recently. According to the RFC (6455), the ping and pong frames are used (among other things) as a keep-alive mechanism. Getting this right may therefore be critical for keeping your connection stable over time. Much of this is guesswork from my side, as I am experimenting with this myself.
@Alex, you are a main developer of POCO I believe; a comment on my answer would be much appreciated.
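A minimal sketch of answering PING frames inside a receive loop like the one in the question; WebSocket::FRAME_OP_PING, FRAME_OP_PONG, FRAME_FLAG_FIN and sendFrame() are part of Poco's WebSocket API, while the surrounding variables are assumed from the question:
n = ws->receiveFrame(buffer, sizeof(buffer), flags);
if ((flags & WebSocket::FRAME_OP_BITMASK) == WebSocket::FRAME_OP_PING)
{
    // echo the ping payload back as a pong so the peer's keep-alive succeeds
    ws->sendFrame(buffer, n, WebSocket::FRAME_FLAG_FIN | WebSocket::FRAME_OP_PONG);
}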
I extended the catch to do some exception handling for "Connection reset by peer":
catch (Poco::Net::WebSocketException &exc)
{
    // Do something
}
catch (Poco::Exception &e)
{
    // This is where the "Connection reset by peer" lands
}
A bit late to the party here... but I am using Poco and WebSockets as well, and properly handling disconnects was tricky.
I ended up implementing a simple ping functionality myself, where the client side sends an ACK message for every WS frame it receives. A separate thread on the server side reads the ACK messages, and it detects that the client has disconnected by checking (flags & WebSocket::FRAME_OP_BITMASK) against WebSocket::FRAME_OP_CLOSE.
// Serverside - POCO. Start thread for receiving ACK packages. Needed in order to detect when websocket is closed!
thread t0([&]() -> void {
    while (!KillFlag && ws != nullptr
           && (flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE
           && machineConnection != nullptr) {
        try {
            if (ws == nullptr) {
                return;
            }
            if (ws->available() > 0) {
                // drain the ACK frame; its content does not matter
                int len = ws->receiveFrame(buffer, sizeof(buffer), flags);
            }
            else {
                Util::Sleep(10);
            }
        }
        catch (Poco::Exception &pex) {
            // treat any socket error as a close so the outer loop ends
            flags = flags | WebSocket::FRAME_OP_CLOSE;
            return;
        }
        catch (...) {
            //log::info(string("Unknown exception in ACK Thread drained"));
            return;
        }
    }
    log::debug("OperatorWebHandler::HttpRequestHandler() Websocket Acking thread DONE");
});
On the client side (JS), I just send a dummy "ACK" message back to the server (POCO) every time I receive a WS frame from the server.
websocket.onmessage = (evt) => {
    _this.receivedData = JSON.parse(evt.data);
    websocket.send("ACK");
};
This is not about disconnect handling, but rather about the stability of the connection.
I had some issues with the POCO WebSocket server in StreamSocket mode with a C# client. Sometimes the client sends Pong messages with a zero-length payload and a disconnect occurs, so I added Ping and Pong handling code.
int WebSocketImpl::receiveBytes(void *buffer, int length, int)
{
    char mask[4];
    bool useMask;
    _frameFlags = 0;
    for (;;) {
        int payloadLength = receiveHeader(mask, useMask);
        int frameOp = _frameFlags & WebSocket::FRAME_OP_BITMASK;
        if (frameOp == WebSocket::FRAME_OP_PONG || frameOp == WebSocket::FRAME_OP_PING) {
            std::vector<char> tmp(payloadLength);
            if (payloadLength != 0) {
                receivePayload(tmp.data(), payloadLength, mask, useMask);
            }
            if (frameOp == WebSocket::FRAME_OP_PING) {
                sendBytes(tmp.data(), payloadLength, WebSocket::FRAME_OP_PONG);
            }
            continue;
        }
        if (payloadLength <= 0)
            return payloadLength;
        if (payloadLength > length)
            throw WebSocketException(Poco::format("Insufficient buffer for payload size %d", payloadLength), WebSocket::WS_ERR_PAYLOAD_TOO_BIG);
        return receivePayload(reinterpret_cast<char *>(buffer), payloadLength, mask, useMask);
    }
}
