TinyOS/nesC Receive.receive event is signalled periodically but processed only once

I'm currently working with an implementation of the AODV protocol for TinyOS, and I'm seeing weird behaviour when the network layer signals the application about a received message.
Below are the relevant pieces of the application and AODV library code, plus some debug output showing what is going on.
Test application
Configuration
configuration BasicTestAppC{
}
implementation{
  components MainC, BasicTestC, AODV, LedsC;

  BasicTestC.Boot -> MainC.Boot;
  BasicTestC.SplitControl -> AODV.SplitControl;
  BasicTestC.AMSend -> AODV.AMSend[1];
  BasicTestC.Receive -> AODV.Receive[1];
  ...
}
Implementation
#include "BasicTest.h"
module BasicTestC {
uses {
interface Boot;
interface SplitControl;
interface Timer<TMilli> as MilliTimer;
interface AMSend;
interface Receive;
interface Leds;
interface Packet;
}
}
implementation {
message_t pkt;
message_t * p_pkt;
uint16_t src = 0x0007;
uint16_t dest = 0x000A;
uint16_t ctr = 0;
test_msg* test_pkt;
test_msg* rcv_pkt;
...
//Send counter value to node 10 on every timer tick
event void MilliTimer.fired() {
call Leds.led0Toggle();
ctr = ctr + 1;
test_pkt = (test_msg*)(call Packet.getPayload(p_pkt, sizeof (test_msg)));
test_pkt->counter = ctr;
call AMSend.send(dest, p_pkt, sizeof(test_msg));
}
event void AMSend.sendDone(message_t * bufPtr, error_t error) {
test_pkt = (test_msg*)(call Packet.getPayload(p_pkt, sizeof (test_msg)));
dbg("APPS", "%s\t APPS: sendDone!! (error=%d) ctr=%u\n", sim_time_string(), error, test_pkt->counter);
}
event message_t* Receive.receive(message_t * bufPtr, void * payload, uint8_t len) {
rcv_pkt = (test_msg * ) payload;
dbg("APPS", "%s\t APPS: receive!! %u\n", sim_time_string(), rcv_pkt->counter);
return bufPtr;
}
}
AODV module
Processing of the receive event from the AMReceiverC component:
event message_t* SubReceive.receive( message_t* p_msg, void* payload, uint8_t len ) {
  uint8_t i;
  aodv_msg_hdr* aodv_hdr = (aodv_msg_hdr*)(p_msg->data);
  test_msg_y* tmp;
  uint16_t ctr;

  dbg("AODV", "%s\t AODV: SubReceive.receive() dest: %d src:%d\n", sim_time_string(), aodv_hdr->dest, aodv_hdr->src);

  if( aodv_hdr->dest == call AMPacket.address() ) {
    for( i = 0; i < len; i++ ) {
      p_app_msg_->data[i] = aodv_hdr->data[i];
    }
    tmp = (test_msg_y*) p_app_msg_->data;
    ctr = tmp->counter;
    // Send signal to application layer
    p_msg = signal Receive.receive[aodv_hdr->app]( p_app_msg_, p_app_msg_->data, len - AODV_MSG_HEADER_LEN );
    dbg("AODV", "%s\t AODV: SubReceive.receive() delivered to upper layer - %u\n", sim_time_string(), ctr);
  } else {
    am_addr_t nexthop = get_next_hop( aodv_hdr->dest );
    dbg("AODV", "%s\t AODV: SubReceive.receive() deliver to next hop:%x\n", sim_time_string(), nexthop);
    /* If there is a next hop for the destination of the message,
       the message will be forwarded to that next hop. */
    if (nexthop != INVALID_NODE_ID) {
      forwardMSG( p_msg, nexthop, len );
    }
  }
  return p_msg;
}
Debug output
DEBUG (7): 0:0:2.006653503 APPS: sendDone!! (error=0) ctr=2
DEBUG (10): 0:0:2.019577622 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:2.019577622 AODV: SubReceive.receive() delivered to upper layer - 2
DEBUG (10): 0:0:2.019577622 APPS: receive!! 2
DEBUG (7): 0:0:3.010407143 APPS: sendDone!! (error=0) ctr=3
DEBUG (10): 0:0:3.021820651 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:3.021820651 AODV: SubReceive.receive() delivered to upper layer - 3
DEBUG (7): 0:0:4.005264961 APPS: sendDone!! (error=0) ctr=4
DEBUG (10): 0:0:4.023239710 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:4.023239710 AODV: SubReceive.receive() delivered to upper layer - 4
DEBUG (7): 0:0:5.010010417 APPS: sendDone!! (error=0) ctr=5
DEBUG (10): 0:0:5.024780838 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:5.024780838 AODV: SubReceive.receive() delivered to upper layer - 5
DEBUG (7): 0:0:6.003983230 APPS: sendDone!! (error=0) ctr=6
DEBUG (10): 0:0:6.010147745 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:6.010147745 AODV: SubReceive.receive() delivered to upper layer - 6
DEBUG (7): 0:0:7.008331960 APPS: sendDone!! (error=0) ctr=7
DEBUG (10): 0:0:7.020187970 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:7.020187970 AODV: SubReceive.receive() delivered to upper layer - 7
DEBUG (7): 0:0:8.004013748 APPS: sendDone!! (error=0) ctr=8
DEBUG (10): 0:0:8.013474142 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:8.013474142 AODV: SubReceive.receive() delivered to upper layer - 8
DEBUG (7): 0:0:9.009140671 APPS: sendDone!! (error=0) ctr=9
DEBUG (10): 0:0:9.020233746 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:9.020233746 AODV: SubReceive.receive() delivered to upper layer - 9
DEBUG (7): 0:0:10.010391884 APPS: sendDone!! (error=0) ctr=10
DEBUG (10): 0:0:10.018341667 AODV: SubReceive.receive() dest: 10 src:7
DEBUG (10): 0:0:10.018341667 AODV: SubReceive.receive() delivered to upper layer - 10
As you can see, the receive event at the application layer is triggered only once. All subsequent messages reach the destination node but never make it above the network layer.
Any thoughts as to what might be going on here?

The problem was in the following line:
p_msg = signal Receive.receive[aodv_hdr->app]( p_app_msg_, p_app_msg_->data, len - AODV_MSG_HEADER_LEN );
The documentation on Receive.receive says that the handler should not keep using the receive message buffer, and that the most common thing to do is to return the same pointer that was passed in as the first argument.
The problem with this is: if the user application follows the guidelines when implementing its Receive.receive event handler, it returns the pointer to the message buffer it was given (the first argument). However, the first argument passed to Receive.receive in the line above is p_app_msg_, which means that after the first message has been received, p_msg no longer points to the original message buffer.
I am still considering the best way to fix this, but for the moment I simply don't assign the result of Receive.receive back to p_msg, and I avoid reusing the receive message buffer in the application code.
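For reference, here is a minimal sketch of that interim workaround applied to the SubReceive.receive handler shown above (debug statements elided; p_app_msg_ and forwardMSG are as in the AODV module):

event message_t* SubReceive.receive( message_t* p_msg, void* payload, uint8_t len ) {
  uint8_t i;
  aodv_msg_hdr* aodv_hdr = (aodv_msg_hdr*)(p_msg->data);

  if( aodv_hdr->dest == call AMPacket.address() ) {
    for( i = 0; i < len; i++ ) {
      p_app_msg_->data[i] = aodv_hdr->data[i];
    }
    // Do NOT assign the result back to p_msg: an application that follows
    // the guidelines returns its first argument, which here is p_app_msg_,
    // not the buffer the lower layer handed us.
    signal Receive.receive[aodv_hdr->app]( p_app_msg_, p_app_msg_->data, len - AODV_MSG_HEADER_LEN );
  } else {
    am_addr_t nexthop = get_next_hop( aodv_hdr->dest );
    if (nexthop != INVALID_NODE_ID) {
      forwardMSG( p_msg, nexthop, len );
    }
  }
  // Always hand the lower layer back the buffer it gave us.
  return p_msg;
}

This keeps the lower layer's buffer ownership intact, at the cost of requiring the application not to hold on to p_app_msg_ after the event handler returns.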

Related

Control SIM7080G CAT-M/NB-IoT Unit from ESP32-DevKitC-32E

We want to control SIM7080G CAT-M/NB-IoT Unit from ESP32-DevKitC-32E.
SIM7080G CAT-M/NB-IoT Unit
https://shop.m5stack.com/products/sim7080g-cat-m-nb-iot-unit
ESP32-DevKitC-32E
https://www.espressif.com/en/products/devkits/esp32-devkitc
We created a project with PlatformIO and installed TinyGSM, EspSoftwareSerial and ArduinoHttpClient.
We wired #16 to CAT-M's TXD and #17 to CAT-M's RXD and modified the code as follows.
- #ifndef __AVR_ATmega328P__
- #define SerialAT Serial1
-
- // or Software Serial on Uno, Nano
- #else
- #include <SoftwareSerial.h>
- SoftwareSerial SerialAT(2, 3); // RX, TX
- #endif
+ #include <SoftwareSerial.h>
+ SoftwareSerial SerialAT(16, 17); // RX, TX
We modified the defines as follows.
- #define TINY_GSM_MODEM_SIM800
+ // #define TINY_GSM_MODEM_SIM800
- // #define TINY_GSM_MODEM_SIM7080
+ #define TINY_GSM_MODEM_SIM7080
We have edited the following information to match the SIM inserted in our CAT-M.
- const char apn[] = "YourAPN";
- const char gprsUser[] = "";
- const char gprsPass[] = "";
+ const char apn[] = "OurAPN";
+ const char gprsUser[] = "OurUser";
+ const char gprsPass[] = "OurPass";
However, it failed to connect and only kept logging Unhandled responses, as follows.
[2062] Modem responded at rate 115200
Initializing modem...
[18066] ### Unhandled: +CPIN: N E#DY
␂
[19066] ### Unhandled: 15104
Modem Info:
[20067] ### Unhandled: ␂ERROR
[22068] ### Unhandled: ERO
[24069] ### Unhandled: ␄ERRO
[26070] ### Unhandled: ERROH
[28071] ### Unhandled: EROR␂
Waiting for network...[30072] ### Unhandled: +C#REG: 0,0OK
[31073] ### Unhandled: +␝ 0
We tried deleting the following lines, but nothing changed.
- SerialMon.println("Initializing modem...");
- modem.restart();
- // modem.init();
- String modemInfo = modem.getModemInfo();
- SerialMon.print("Modem Info: ");
- SerialMon.println(modemInfo);
We believe that something must be initialized in the following area.
// !!!!!!!!!!!
// Set your reset, enable, power pins here
// !!!!!!!!!!!
The following document mentions PWRKEY, but the SIM7080G CAT-M/NB-IoT Unit does not have that terminal.
SIM7080G_Hardware_Design_V1.04
https://www.simcom.com/product/SIM7080G.html
We are not very familiar with microcontrollers or single-board computers. What can we do to communicate via CAT-M?
We rewired and reprogrammed the boards, and they worked as expected. We could not figure out the cause, but the problem was resolved. Thank you.

First request for Okta authentication taking a lot of time

I am using an Okta authorization server to secure a Spring Boot app.
I have configured the following three properties:
okta.oauth2.issuer={URI}
okta.oauth2.client-secret={SECRET}
okta.oauth2.client-id={CLIENTID}
I have protected the endpoints using the code below:
@Bean
public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
    Okta.configureResourceServer401ResponseBody(http);
    return http
        .csrf(spec -> spec.disable())
        .authorizeExchange().pathMatchers("/api/**").permitAll()
        .anyExchange().authenticated().and()
        .oauth2Login()
        .and()
        .oauth2ResourceServer()
        .jwt().and().and().build();
}
However, when I issue the first request to any protected endpoint with a bearer token, it takes a very long time, almost one minute. Subsequent requests take milliseconds.
From the log I can see that DNS resolution (io.netty.resolver.dns.DnsNameResolver) in the first request takes most of that time, almost 40 seconds. The application is of type Web in Okta.
The application is running on a Netty server.
Any suggestion to fix this?
I was able to debug and find that on Windows 10 with JDK 11, each Netty DnsQueryContext query takes 5 seconds to resolve, whereas the same resolution on macOS or Linux happens in milliseconds.
I tried disabling Netty as the default server and using Tomcat, but even then the Netty DNS resolver kicks in when an incoming request arrives; I need Netty on my classpath for WebClient to work.
It looks like an issue with Netty, as I was able to reproduce it with the code below on Windows while the same code works fine on macOS.
@SpringBootApplication
public class GqlApplicationStarter {

    public static void main(String[] args) throws UnknownHostException, ExecutionException, InterruptedException {
        // SpringApplication.run(GqlApplicationStarter.class, args);
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                .channelFactory(NioDatagramChannel::new)
                .optResourceEnabled(false)
                .build();
        try {
            System.out.println("2" + resolver.resolveAll("dev-542348.okta.com").get());
        } finally {
            resolver.close();
            group.shutdownGracefully();
        }
    }
}
Below are the logs from Windows.
13:11:27.823 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [53282: /10.150.1.252:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:27.830 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
13:11:27.831 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
13:11:27.831 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.chunkSize: 32
13:11:27.831 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.blocking: false
13:11:27.834 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
13:11:27.834 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
13:11:27.834 [nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@14896267
13:11:32.859 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [10561: /10.202.1.252:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:37.868 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [40341: /10.180.1.20:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:42.880 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [10043: /10.180.1.201:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:47.892 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [28012: /10.180.1.202:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:52.898 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [8689: /180.235.155.185:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:53.045 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xc4d4fa95] RECEIVED: UDP [8689: /180.235.155.185:53], DatagramDnsResponse(from: /180.235.155.185:53, to: /0.0.0.0:56937, 8689, QUERY(0), Refused(5), RD) DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:53.045 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [43413: /192.168.1.1:53], DefaultDnsQuestion(dev-542348.okta.com. IN A)
13:11:53.171 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xc4d4fa95] RECEIVED: UDP [43413: /192.168.1.1:53], DatagramDnsResponse(from: /192.168.1.1:53, to: /0.0.0.0:56937, 43413, QUERY(0), NoError(0), RD RA) DefaultDnsQuestion(dev-542348.okta.com. IN A) DefaultDnsRawRecord(dev-542348.okta.com. 300 IN CNAME 25B) DefaultDnsRawRecord(ok11-crtrs.tng.okta.com. 202 IN CNAME 66B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41957 IN NS 23B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41957 IN NS 25B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41957 IN NS 21B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41957 IN NS 22B)
13:11:53.175 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [61505: /10.150.1.252:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:11:58.185 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [46866: /10.202.1.252:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:03.187 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [39078: /10.180.1.20:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:08.196 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [50281: /10.180.1.201:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:13.208 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [53416: /10.180.1.202:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:18.214 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [19383: /180.235.155.185:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:18.320 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xc4d4fa95] RECEIVED: UDP [19383: /180.235.155.185:53], DatagramDnsResponse(from: /180.235.155.185:53, to: /0.0.0.0:56937, 19383, QUERY(0), Refused(5), RD) DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:18.320 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xc4d4fa95] WRITE: UDP, [39446: /192.168.1.1:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:12:18.367 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xc4d4fa95] RECEIVED: UDP [39446: /192.168.1.1:53], DatagramDnsResponse(from: /192.168.1.1:53, to: /0.0.0.0:56937, 39446, QUERY(0), NoError(0), RD RA) DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 34 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 34 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 34 IN A 4B)
2[dev-542348.okta.com/3.15.36.194, dev-542348.okta.com/3.15.36.192, dev-542348.okta.com/3.15.36.193]
And below are the logs from macOS.
13:23:20.937 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
13:23:20.937 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
13:23:20.937 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.chunkSize: 32
13:23:20.937 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.blocking: false
13:23:20.941 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
13:23:20.941 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
13:23:20.941 [nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@41e61236
13:23:21.046 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0x0ea5f4b3] RECEIVED: UDP [52950: /192.168.1.1:53], DatagramDnsResponse(from: /192.168.1.1:53, to: /0.0.0.0:49776, 52950, QUERY(0), NoError(0), RD RA) DefaultDnsQuestion(dev-542348.okta.com. IN A) DefaultDnsRawRecord(dev-542348.okta.com. 300 IN CNAME 25B) DefaultDnsRawRecord(ok11-crtrs.tng.okta.com. 217 IN CNAME 66B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 23B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 25B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 21B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 22B)
13:23:21.048 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0x0ea5f4b3] WRITE: UDP, [2623: /192.168.1.1:53], DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A)
13:23:21.084 [nioEventLoopGroup-2-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0x0ea5f4b3] RECEIVED: UDP [2623: /192.168.1.1:53], DatagramDnsResponse(from: /192.168.1.1:53, to: /0.0.0.0:49776, 2623, QUERY(0), NoError(0), RD RA) DefaultDnsQuestion(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. IN A) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(ok11-crtr-tls12-nlb-7429507134c4aa04.elb.us-east-2.amazonaws.com. 60 IN A 4B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 23B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 25B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 21B) DefaultDnsRawRecord(elb.us-east-2.amazonaws.com. 41273 IN NS 22B)
2[dev-542348.okta.com/3.15.36.192, dev-542348.okta.com/3.15.36.194, dev-542348.okta.com/3.15.36.193]
13:23:23.184 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.PoolThreadCache - Freed 8 thread-local buffer(s) from thread: nioEventLoopGroup-2-1
Since the reactor-netty code shows the same issue on Windows but works fine on macOS, this looks like an issue with Netty.
Any suggestion to fix this?

Modbus Error: [Input/Output] No Response received from the remote unit/Unable to decode response

I'm trying to connect from a Raspberry Pi 3 B to a Modbus device (a temperature sensor) over a serial connection, using a USB RS485 converter. This is my code:
from pymodbus.constants import Endian
from pymodbus.payload import BinaryPayloadDecoder
from pymodbus.client.sync import ModbusSerialClient as ModbusClient
from pymodbus.constants import Defaults
Defaults.RetryOnEmpty = True
import time
import logging

FORMAT = ('%(asctime)-15s %(threadName)-15s '
          '%(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger()
log.setLevel(logging.DEBUG)

def run_sync_client():
    while True:
        #try:
        # Implementation of the SHT31 as a Modbus serial client
        client = ModbusClient(method='rtu', port='/dev/ttyUSB0', timeout=9,
                              baudrate=9600, parity='N', stopbits=1,
                              bytesize=8, reset_socket=False)
        if client.connect():
            print("connected")
            print("begin reading")
            #for i in range(0, 100):
            request = client.read_holding_registers(address=0, count=2, unit=4)
            if request.isError():
                print(request)
            else:
                print("Read OK! " + str(request.registers[1]))
                time.sleep(1)
            print(request)
            result = request.registers
            print(result)
            temp = BinaryPayloadDecoder.fromRegisters(result, Endian.Little, wordorder=Endian.Little)
            temperature = temp.decode_16bit_int()
            print(temperature)
            time.sleep(4)
        #except:
        #    print("An exception occurred")
        client.close()

if __name__ == "__main__":
    run_sync_client()
The output in the console is:
connected
begin reading
2022-02-24 14:56:37,901 MainThread DEBUG transaction :139 Current transaction state - IDLE
2022-02-24 14:56:37,902 MainThread DEBUG transaction :144 Running transaction 1
2022-02-24 14:56:37,904 MainThread DEBUG transaction :273 SEND: 0x4 0x3 0x0 0x1 0x0 0x2 0x95 0x9e
2022-02-24 14:56:37,905 MainThread DEBUG sync :76 New Transaction state 'SENDING'
2022-02-24 14:56:37,906 MainThread DEBUG transaction :287 Changing transaction state from 'SENDING' to 'WAITING FOR REPLY'
2022-02-24 14:56:38,068 MainThread DEBUG transaction :375 Changing transaction state from 'WAITING FOR REPLY' to 'PROCESSING REPLY'
2022-02-24 14:56:38,069 MainThread DEBUG transaction :297 RECV: 0x4 0x3 0x2 0x17 0x58 0x2c 0x2e 0x3f 0x88
2022-02-24 14:56:38,071 MainThread DEBUG rtu_framer :237 Frame check failed, ignoring!!
2022-02-24 14:56:38,072 MainThread DEBUG rtu_framer :119 Resetting frame - Current Frame in buffer - 0x4 0x3 0x2 0x17 0x58 0x2c 0x2e 0x3f 0x88
2022-02-24 14:56:38,073 MainThread DEBUG transaction :465 Getting transaction 4
2022-02-24 14:56:38,074 MainThread DEBUG transaction :224 Changing transaction state from 'PROCESSING REPLY' to 'TRANSACTION_COMPLETE'
Modbus Error: [Input/Output] No Response received from the remote unit/Unable to decode response
I would love a little help understanding the error I'm getting. Thanks.

Intel VT-x: Configuring debug registers to debug from host

I am trying to configure the debug registers on the host so that I can monitor an address of a guest running on Intel VT-x. For this I have called the KVM_SET_GUEST_DEBUG ioctl.
struct kvm_guest_debug guest_debug;

guest_debug.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_HW_BP;
guest_debug.arch.debugreg[0] = addr; // DR0
guest_debug.arch.debugreg[7] = encode_dr7(0, len, bpType);

if (ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &guest_debug) < 0)
    return false;
This successfully sets up the debug registers. But a debug-register read/write in the guest causes a VM exit with EXIT_REASON_EXCEPTION_NMI, whereas I am expecting EXIT_REASON_DR_ACCESS. What causes an NMI exit instead of a DR_ACCESS exit? Did I set the registers right?
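For reference, encode_dr7 is my own helper that packs the breakpoint slot, length, and condition into the DR7 value; a simplified sketch of it, following the standard DR7 bit layout from the Intel SDM, looks like this:

/* Sketch of an encode_dr7()-style helper. For breakpoint slot n, DR7 has:
 *   - local enable Ln at bit 2n
 *   - condition R/Wn at bits 16+4n..17+4n (0 = execute, 1 = write, 3 = read/write)
 *   - length LENn at bits 18+4n..19+4n (0 = 1 byte, 1 = 2, 3 = 4, 2 = 8)
 */
static unsigned long encode_dr7(int slot, unsigned int len, unsigned int type)
{
    unsigned long dr7 = 0;

    dr7 |= 1UL << (slot * 2);                      /* local enable Ln */
    dr7 |= (unsigned long)type << (16 + slot * 4); /* R/Wn condition  */
    dr7 |= (unsigned long)len << (18 + slot * 4);  /* LENn length     */
    return dr7;
}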

add_timer causes kernel stack dump for multiple PCI boards

We are using FPGA cards with PCI Express drivers to move data around with DMA engines. This all works fine for a single card in a machine, but with two cards it fails. As an initial investigation, I have narrowed the error down to the add_timer function that is used to set up the polling mechanism. When insmod loads the driver module, a stack trace is produced because the same poll_timer is used for both instances. The code has been reduced to:
static int dat_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
    struct timer_list *timer = &poll_timer;
    int i;

    /* Start polling routine */
    log_normal(KERN_INFO "DEBUG ADD TIMER: Starting poll routine with %x\n", pdev);
    init_timer(timer);
    // random number added so that the expires value is different for both instances of the timer
    get_random_bytes(&i, 1);
    timer->expires = jiffies + HZ + i;
    timer->data = (unsigned long)pdev;
    timer->function = poll_routine;
    log_verbose("DEBUG ADD TIMER: Timer expires %x\n", timer->expires);
    log_verbose("DEBUG ADD TIMER: Timer data %x\n", timer->data);
    log_verbose("DEBUG ADD TIMER: Timer function %x\n", timer->function);
    // ***** THIS IS WHERE STACK TRACE OCCURS (WHEN CALLED FOR SECOND TIME)
    add_timer(timer);
    log_verbose("DEBUG ADD TIMER: Value of HZ is %d\n", HZ);
    log_verbose("DEBUG ADD TIMER: End of probe\n");
    return 0;
}
The stack trace reports
list_add corruption. prev->next should be next (ffffffff81f76228), but was (null). (prev=ffffffffa050a3c0).
and
list_add double add: new=ffffffffa050a3c0, prev=ffffffffa050a3c0, next=ffffffff81f76228.
Looking at the printk statements, it is clear that add_timer is being asked to add the same timer to the linked list twice. Is that what is happening?
DEBUG ADD TIMER: Timer expires fffd9cd3
DEBUG ADD TIMER: Timer data 6c0ac000
DEBUG ADD TIMER: Timer function **a0508150**
DEBUG ADD TIMER: Value of HZ is 1000
DEBUG ADD TIMER: End of probe
DEBUG ADD TIMER: Starting poll routine with 6c0ad000
DEBUG ADD TIMER: Timer expires fffd9c7d
DEBUG ADD TIMER: Timer data 6c0ad000
DEBUG ADD TIMER: Timer function **a0508150**
So my question(s) is(are): how should I configure the timer for multiple instantiations of the same driver? (Assuming that is what is happening when multiple boards are inserted into the machine.)
full stack trace
DEBUG ADD TIMER: Inserting driver into kernel.
DEBUG ADD TIMER: Starting poll routine with 6c0ac000
DEBUG ADD TIMER: Timer expires fffd9cd3
DEBUG ADD TIMER: Timer data 6c0ac000
DEBUG ADD TIMER: Timer function a0508150
DEBUG ADD TIMER: Value of HZ is 1000
DEBUG ADD TIMER: End of probe
DEBUG ADD TIMER: Starting poll routine with 6c0ad000
DEBUG ADD TIMER: Timer expires fffd9c7d
DEBUG ADD TIMER: Timer data 6c0ad000
DEBUG ADD TIMER: Timer function a0508150
------------[ cut here ]------------
WARNING: CPU: 0 PID: 2201 at lib/list_debug.c:33 __list_add+0xa0/0xd0()
list_add corruption. prev->next should be next (ffffffff81f76228), but was (null). (prev=ffffffffa050a3c0).
Modules linked in: xdma_v7(POE+) xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw intel_rapl iosf_mbi x86_pkg_temp_thermal coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_controller crc32c_intel eeepc_wmi ghash_clmulni_intel asus_wmi ftdi_sio iTCO_wdt snd_hda_codec sparse_keymap raid0 iTCO_vendor_support
snd_hda_core rfkill sb_edac ipmi_ssif video mxm_wmi edac_core snd_hwdep mei_me snd_seq snd_seq_device ipmi_msghandler snd_pcm mei acpi_pad tpm_infineon lpc_ich mfd_core snd_timer tpm_tis shpchp tpm snd soundcore i2c_i801 wmi nfsd auth_rpcgss nfs_acl lockd grace sunrpc ast drm_kms_helper ttm drm igb serio_raw ptp pps_core dca i2c_algo_bit
CPU: 0 PID: 2201 Comm: insmod Tainted: P OE 4.1.8-100.fc21.x86_64 #1
Hardware name: ASUSTeK COMPUTER INC. Z10PE-D8 WS/Z10PE-D8 WS, BIOS 1001 03/17/2015
0000000000000000 00000000ec73155d ffff880457123928 ffffffff81792065
0000000000000000 ffff880457123980 ffff880457123968 ffffffff810a163a
0000000000000246 ffffffffa050a3c0 ffffffff81f76228 ffffffffa050a3c0
Call Trace:
[<ffffffff81792065>] dump_stack+0x45/0x57
[<ffffffff810a163a>] warn_slowpath_common+0x8a/0xc0
[<ffffffff810a16c5>] warn_slowpath_fmt+0x55/0x70
[<ffffffff810f8250>] ? vprintk_emit+0x3b0/0x560
[<ffffffff813c7c30>] __list_add+0xa0/0xd0
[<ffffffff81108412>] __internal_add_timer+0xb2/0x130
[<ffffffff811084bf>] internal_add_timer+0x2f/0xb0
[<ffffffff8110a1ca>] mod_timer+0x12a/0x210
[<ffffffff8110a2c8>] add_timer+0x18/0x30
[<ffffffffa050810f>] dat_probe+0xbf/0x100 [xdma_v7]
[<ffffffff813f6da5>] local_pci_probe+0x45/0xa0
[<ffffffff812a8da2>] ? sysfs_do_create_link_sd.isra.2+0x72/0xc0
[<ffffffff813f8109>] pci_device_probe+0xf9/0x150
[<ffffffff814e7e59>] driver_probe_device+0x209/0x4b0
[<ffffffff814e81db>] __driver_attach+0x9b/0xa0
[<ffffffff814e8140>] ? __device_attach+0x40/0x40
[<ffffffff814e5973>] bus_for_each_dev+0x73/0xc0
[<ffffffff814e772e>] driver_attach+0x1e/0x20
[<ffffffff814e72e0>] bus_add_driver+0x180/0x250
[<ffffffffa000a000>] ? 0xffffffffa000a000
[<ffffffff814e89d4>] driver_register+0x64/0xf0
[<ffffffff813f662c>] __pci_register_driver+0x4c/0x50
[<ffffffffa000a02c>] dat_init+0x2c/0x1000 [xdma_v7]
[<ffffffff81002148>] do_one_initcall+0xd8/0x210
[<ffffffff812094f9>] ? kmem_cache_alloc_trace+0x1a9/0x230
[<ffffffff817911bc>] ? do_init_module+0x28/0x1cc
[<ffffffff817911f5>] do_init_module+0x61/0x1cc
[<ffffffff811270bb>] load_module+0x20db/0x2550
[<ffffffff81122990>] ? store_uevent+0x70/0x70
[<ffffffff8122e860>] ? kernel_read+0x50/0x80
[<ffffffff81127766>] SyS_finit_module+0xa6/0xe0
[<ffffffff8179892e>] system_call_fastpath+0x12/0x71
---[ end trace 340e5d7ba2d89081 ]---
------------[ cut here ]------------
WARNING: CPU: 0 PID: 2201 at lib/list_debug.c:36 __list_add+0xcb/0xd0()
list_add double add: new=ffffffffa050a3c0, prev=ffffffffa050a3c0, next=ffffffff81f76228.
Modules linked in: xdma_v7(POE+) xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw intel_rapl iosf_mbi x86_pkg_temp_thermal coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_controller crc32c_intel eeepc_wmi ghash_clmulni_intel asus_wmi ftdi_sio iTCO_wdt snd_hda_codec sparse_keymap raid0 iTCO_vendor_support
snd_hda_core rfkill sb_edac ipmi_ssif video mxm_wmi edac_core snd_hwdep mei_me snd_seq snd_seq_device ipmi_msghandler snd_pcm mei acpi_pad tpm_infineon lpc_ich mfd_core snd_timer tpm_tis shpchp tpm snd soundcore i2c_i801 wmi nfsd auth_rpcgss nfs_acl lockd grace sunrpc ast drm_kms_helper ttm drm igb serio_raw ptp pps_core dca i2c_algo_bit
CPU: 0 PID: 2201 Comm: insmod Tainted: P W OE 4.1.8-100.fc21.x86_64 #1
Hardware name: ASUSTeK COMPUTER INC. Z10PE-D8 WS/Z10PE-D8 WS, BIOS 1001 03/17/2015
0000000000000000 00000000ec73155d ffff880457123928 ffffffff81792065
0000000000000000 ffff880457123980 ffff880457123968 ffffffff810a163a
0000000000000246 ffffffffa050a3c0 ffffffff81f76228 ffffffffa050a3c0
Call Trace:
[<ffffffff81792065>] dump_stack+0x45/0x57
[<ffffffff810a163a>] warn_slowpath_common+0x8a/0xc0
[<ffffffff810a16c5>] warn_slowpath_fmt+0x55/0x70
[<ffffffff810f8250>] ? vprintk_emit+0x3b0/0x560
[<ffffffff813c7c5b>] __list_add+0xcb/0xd0
[<ffffffff81108412>] __internal_add_timer+0xb2/0x130
[<ffffffff811084bf>] internal_add_timer+0x2f/0xb0
[<ffffffff8110a1ca>] mod_timer+0x12a/0x210
[<ffffffff8110a2c8>] add_timer+0x18/0x30
[<ffffffffa050810f>] dat_probe+0xbf/0x100 [xdma_v7]
[<ffffffff813f6da5>] local_pci_probe+0x45/0xa0
[<ffffffff812a8da2>] ? sysfs_do_create_link_sd.isra.2+0x72/0xc0
[<ffffffff813f8109>] pci_device_probe+0xf9/0x150
[<ffffffff814e7e59>] driver_probe_device+0x209/0x4b0
[<ffffffff814e81db>] __driver_attach+0x9b/0xa0
[<ffffffff814e8140>] ? __device_attach+0x40/0x40
[<ffffffff814e5973>] bus_for_each_dev+0x73/0xc0
[<ffffffff814e772e>] driver_attach+0x1e/0x20
[<ffffffff814e72e0>] bus_add_driver+0x180/0x250
[<ffffffffa000a000>] ? 0xffffffffa000a000
[<ffffffff814e89d4>] driver_register+0x64/0xf0
[<ffffffff813f662c>] __pci_register_driver+0x4c/0x50
[<ffffffffa000a02c>] dat_init+0x2c/0x1000 [xdma_v7]
[<ffffffff81002148>] do_one_initcall+0xd8/0x210
[<ffffffff812094f9>] ? kmem_cache_alloc_trace+0x1a9/0x230
[<ffffffff817911bc>] ? do_init_module+0x28/0x1cc
[<ffffffff817911f5>] do_init_module+0x61/0x1cc
[<ffffffff811270bb>] load_module+0x20db/0x2550
[<ffffffff81122990>] ? store_uevent+0x70/0x70
[<ffffffff8122e860>] ? kernel_read+0x50/0x80
[<ffffffff81127766>] SyS_finit_module+0xa6/0xe0
[<ffffffff8179892e>] system_call_fastpath+0x12/0x71
---[ end trace 340e5d7ba2d89082 ]---
DEBUG ADD TIMER: Value of HZ is 1000
DEBUG ADD TIMER: End of probe
The problem is that the second call to dat_probe is clobbering the poll_timer variable that was initialized and queued by the first call to dat_probe, corrupting the pointers in the kernel's timer list.
You need to get rid of the global poll_timer variable and give each device its own dynamically allocated private data structure containing its own struct timer_list member. Call pci_set_drvdata to set the private data pointer for the PCI device; the other PCI driver functions can call pci_get_drvdata to retrieve that pointer. A sketch of that approach is shown below.
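For illustration, a minimal sketch of that approach using the same (pre-4.15) timer API as the question; the dat_dev structure and dat_remove are illustrative names, poll_routine is the handler from the question, and error-path cleanup is elided:

#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/timer.h>

/* One instance per board, instead of a single global poll_timer. */
struct dat_dev {
    struct pci_dev *pdev;
    struct timer_list poll_timer;
};

static int dat_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
    struct dat_dev *dat = kzalloc(sizeof(*dat), GFP_KERNEL);

    if (!dat)
        return -ENOMEM;
    dat->pdev = pdev;
    pci_set_drvdata(pdev, dat);       /* other callbacks use pci_get_drvdata() */

    init_timer(&dat->poll_timer);
    dat->poll_timer.expires = jiffies + HZ;
    dat->poll_timer.data = (unsigned long)pdev;
    dat->poll_timer.function = poll_routine;
    add_timer(&dat->poll_timer);      /* each board queues its own timer_list */
    return 0;
}

static void dat_remove(struct pci_dev *pdev)
{
    struct dat_dev *dat = pci_get_drvdata(pdev);

    del_timer_sync(&dat->poll_timer); /* never free a queued timer */
    kfree(dat);
}

With this layout each board queues its own struct timer_list, so the second probe no longer corrupts the list links set up by the first.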
