How can I turn the LIN master off using CAPL? - capl

I'm a newbie to CANoe and CAPL.
I'm trying to build a LIN slave tester using CANoe, CAPL, and a panel.
The LIN network has two channels:
ch1 - a BLDC motor
ch2 - an actuator
I have already implemented functions such as voltage and current monitoring and a pass/fail check.
I also need to create an error condition, such as a "LIN off" state for a while.
There are many CAPL functions similar to what I want to make:
linStopScheduler
linActivateGlobalNetworkManagement
linDeactivateSlot
linGotoSleep
linChangeSchedTable
but none of them stops the master from sending frames.
CANoe shows this message:
"Node 'tesr' (CAPL) LIN channel 2: LIN API call LINStopScheduler() ignored! due to LIN Interactive Master settings!"
So I turned off the master simulation, but then the schedule table was also disabled permanently.
How can I do this?
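Edit: for clarity, this is the kind of toggle I'm trying to achieve, assuming the channel's master is simulated by this CAPL test node rather than the Interactive Master (otherwise linStopScheduler() is ignored, as the message above says):

on key 'o'
{
  // emulate the "LIN off" error state: the master stops sending headers
  linStopScheduler();
}

on key 'r'
{
  // resume the schedule table configured for this master node
  linStartScheduler();
}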

Related

Unable to receive URC for an incoming SMS from a modem

I am unable to receive the URC message from the modem whenever it receives an SMS.
I know that it receives them, since I can find and read them if I use AT+CMGL, but I don't receive any notification when the modem gets them. I have played around with the URC-related commands, but I've been unable to get it to work (other URCs work fine).
The modem is a BG600L M3 from Quectel, and the following is the sequence of commands I'm sending ("AT" is always omitted, and the first command is literally "AT\r", basically an empty one).
//general config
AT\r
CFUN=1,0
E1
+QCFG=\"urc/ri/other\",\"pulse\",8,1
H0
&F
V1
+CMEE=1
&D0
E1
+CREG=2
+CGREG=2
+CEREG=2
//sms config
+CPMS=\"ME\",\"ME\",\"ME\"
+QINDCFG=\"smsincoming\",1
+CMGF=1
+CSDH=0
+CSCS=\"GSM\"
+CNMI=2,2,0,2,0
//doing some deleting and reading
+CMGD=1,3
+CPMS?
//getting the gps fix
+QGPS=1
+QGPSCFG=\"gnssconfig\",3
+QGPSLOC=1
+QGPSEND
//resetting the gms connection
+CFUN=0
+CFUN=1,0
//setting up the gsm connection
+QICFG=\"dataformat\",0,0
+QICFG=\"viewmode\",0
+QICFG=\"recvind\",1
+QICFG=\"tcp/retranscfg\",3,600
+QISDE=0
+QCFG=\"band\",0xf,0x80085,0x80085,1
+QCFG=\"nwscanmode\",1,1
+QCFG=\"nwscanseq\",010101,1
+QCFG=\"iotopmode\",2,1
// checking if it's connected
+CREG?
+QNWINFO
+COPS?
//Getting the time
+CTZU=3
+CTZR=0
+QLTS
+CCLK?
You can set AT+CNMI=2,1,2,0,0; that should do the trick.
According to the specification ETSI TS 127 005 V11.0.0 (2012-10):
+CNMI: <mode>,<mt>,<bm>,<ds>,<bfr>
By keeping the <mt> value at 1, we should get an indication when a message is stored in ME/TA:
<mt>: integer type (the rules for storing received SMs depend on the data coding scheme)
0: No SMS-DELIVER indications are routed to the TE.
1: If SMS-DELIVER is stored in ME/TA, an indication of the memory location is routed to the TE using the unsolicited result code:
+CMTI: <mem>,<index>
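For illustration, with <mt>=1 the dialogue should then look roughly like this (the index 3, the sender number, and the timestamp are made-up example values; "ME" matches the +CPMS configuration in the question):

AT+CNMI=2,1,2,0,0
OK
(an SMS arrives)
+CMTI: "ME",3
AT+CMGR=3
+CMGR: "REC UNREAD","+15551234567",,"24/01/01,12:00:00+00"
Hello
OK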

How to detect sender and destination of a notification in dbus-monitor?

My goal is to filter notifications coming from different applications (mainly from different browser windows).
I found that, with the help of dbus-monitor, I can write a small script that filters the notification messages I am interested in.
The filter script is working well, but I have a small problem:
I am starting it with the
dbus-monitor "interface='org.freedesktop.Notifications', destination=':1.40'"
command. I had to add destination=':1.40' because on Ubuntu 20.04 I always got the same notification twice.
The following output of
dbus-monitor --profile "interface='org.freedesktop.Notifications'"
demonstrates the reason:
type timestamp serial sender destination path interface member
# in_reply_to
mc 1612194356.476927 7 :1.227 :1.56 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
mc 1612194356.483161 188 :1.56 :1.40 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
As you can see, the sender :1.227 sends to :1.56 first, and then :1.56 is the sender to the destination :1.40. (A simple notify-send hello test message was sent.)
My script works that way, but every time the system boots up, I have to check the destination number and modify my script accordingly to make it work.
I have two questions:
How can I discover the destination string automatically? (:1.40 in the above example)
How can I prevent the system from sending the same message twice? (If this question were answered, question 1 would become pointless.)
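For question 1, a sketch: the unique name that currently owns org.freedesktop.Notifications can be resolved at script start with the standard GetNameOwner method on the bus itself (the awk extraction is just one way to pull the string out of dbus-send's reply):

DEST=$(dbus-send --session --print-reply \
  --dest=org.freedesktop.DBus /org/freedesktop/DBus \
  org.freedesktop.DBus.GetNameOwner \
  string:org.freedesktop.Notifications |
  awk -F'"' '/string/ { print $2 }')
dbus-monitor "interface='org.freedesktop.Notifications', destination='$DEST'"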

RXTXLostPackets count is non zero even when allowTxDuringRx=false

I am using veins 4.6 with SUMO 0.30 and OMNeT++ 5.1.1 on Ubuntu 14.04. I created a custom network with a cross (one intersection with four roads) and ran the simulation with 200 vehicles. I did not observe this behaviour with 4 vehicles, but I have seen it with 50. I need the count of total lost packets for my master's project, so I was looking at the statistics and found that RXTXLostPackets is non-zero. As far as I understood from the documentation, it should be zero if allowTxDuringRx=false, and the default is false (PhyLayer80211p.ned). As I have not changed any code yet, I was confused about whether this is expected behaviour.
What I have found so far:
In Mac1609_4::handleLowerControl, statsTXRXLostPackets is updated when Decider80211p responds with RECWHILESEND.
In Decider80211p::processSignalEnd, if the value of whileSending is true, RECWHILESEND is sent to the MAC layer as a control message.
In Decider80211p::processSignalEnd, if (frame->getWasTransmitting() || phy11p->getRadioState() == Radio::TX), the frame is considered to have been received while sending, and whileSending is set to true.
The wasTransmitting variable is set to true in the Decider80211p::switchToTx and Decider80211p::processNewSignal functions:
currentFrame->setWasTransmitting(true);
currentFrame->setBitError(true);
In Decider80211p::processNewSignal:
if (phy11p->getRadioState() == Radio::TX) {
    frame->setBitError(true);        // disabling both of these values made RXTXLostPackets zero
    frame->setWasTransmitting(true);
    DBG_D11P << "AirFrame: " << frame->getId() << " (" << recvPower << ") received, while already sending. Setting BitErrors to true" << std::endl;
}
There is one thread with a similar issue, with the fix of adding these lines to the processSignalEnd function, but it looks like veins 4.6 does not use curSyncFrame anymore:
Veins - Unexpected behavior with lost packets in certain vehicles
if (!frame->getWasTransmitting()) {
    curSyncFrame = 0;
}
I could not clearly understand the issue. The code and configuration files I have used are here: https://github.com/Rajeswar59/veins_learning.
Can anyone please take a look and help me with this? Thanks in advance.
Edit: I went through the logs. This is what I could understand so far.
https://drive.google.com/open?id=0BzjDW8PQhkSmSEUtZ2lpcld4ZXc --> some portion of the logs is here.
Order of sending:
#13332 0.247987176594 node[30] --> node[48] id=22266
#13375 0.247987796864 node[18] --> node[20] id=22447
#13384 0.247987864534 node[20] --> node[30] id=22573
From the logs I have concentrated on node 18. The two nodes that transmitted before 30 are 32 and 4, and these two messages were received successfully by all three nodes. When a message arrives, the decider sets the channel state to busy in processNewSignal and back to idle after processing the packet. This calls the Mac1609_4 channelBusy and channelIdle functions, respectively, so the channelIdle state is set accordingly. If the channel is being set busy, contention is stopped and currentBackoff is calculated if any packet is waiting to be transmitted; if the channel is being set idle at the end of a reception, startContent is called. Based on this, the lastIdle variable is set, which is used to calculate nextMacEvent.
So when the last successful message was received, all the nodes that have a packet to send decide on a nextMacEvent, which is scheduled as a self-message in Mac1609_4.cc. On receiving the nextMacEvent self-message, a node starts transmitting without checking whether any other node has already started a transmission. It cannot detect that, presumably because the channel is only marked busy when the other message arrives after some propagation delay. So between the last successful transmission and nextMacEvent, other nodes also decide to transmit without seeing the current channel state. That is why a node has receive events while it is sending.
As far as my understanding goes, before transmitting we should sense the current state of the channel and retry the backoff accordingly, but we do not check this at nextMacEvent. It looks like collision behaviour, but should we not check the current state of the channel when the backoff counter reaches zero, and retry? Please correct me if I am wrong anywhere.
Thanks for your patience.
Any help or advice?
My learnings (probably the last update):
After some digging, here are my learnings, in case they help someone. The basic CSMA mechanism says that before attempting a transmission, the node has to sense the channel and initiate the transmission if the channel is sensed idle for the AIFS time, or go into backoff if the channel is busy. In veins, the channel-busy status is stored in the idleChannel variable, whose state is checked in the Mac1609_4::channelBusySelf() function before initiating a transmission (nextMacEvent in Mac1609_4::handleSelfMsg). idleChannel is updated in the Mac1609_4::channelBusy and Mac1609_4::channelIdle functions when a message reception starts and ends, respectively. So when a previously transmitting node sends a packet, all the receiving nodes receive the packet with varying delay, i.e., they start receiving at different times and only then update their idleChannel variable. After that, they calculate the best time to transmit and start transmitting. Each node does check whether the channel is idle, but because idleChannel is only updated when the next reception starts, and because of the propagation delay between transmission start at the sender and reception start at the receiver, neither transmitting node can see the other's transmission. As far as I understand, this is called a collision: two or more nodes start transmitting at the same time. So the BitError statistic is set and statsTXRXLostPackets is set as well; when calculating totalLostPackets, we should therefore take only one of these two values. (See the standalone sketch below.)
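To make the timing issue concrete, here is a small self-contained C++ model of the re-sensing rule described above. All names are hypothetical stand-ins; veins keeps the equivalent state in Mac1609_4's idleChannel flag and checks it via channelBusySelf():

#include <cstdio>

struct Node {
    bool channelIdle = true;      // updated when a reception starts/ends
    bool hasPendingFrame = false;

    // Called when the backoff counter reaches zero.
    void onBackoffExpired() {
        if (!hasPendingFrame) return;
        if (channelIdle) {
            std::printf("channel idle: transmitting\n");
            hasPendingFrame = false;
        } else {
            // In a real MAC: restart contention with a fresh backoff
            // instead of transmitting into an ongoing reception.
            std::printf("channel busy: restarting contention\n");
        }
    }
};

int main() {
    Node n;
    n.hasPendingFrame = true;
    n.channelIdle = false;   // another node's frame is still arriving
    n.onBackoffExpired();    // must back off, not collide
    n.channelIdle = true;    // reception has ended
    n.onBackoffExpired();    // now it may transmit
}

The point of the sketch: if the idle flag is only refreshed when a reception actually starts at the receiver, two senders whose backoffs expire within one propagation delay of each other both see "idle" and collide, which is exactly the RXTXLostPackets behaviour described above.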

XBee - XBee-API and multiple endpoints

Using Andrew Rapp's XBee-API, how can I sample I/O data via a coordinator from more than two endpoints?
I have 17 Series 1 XBees. I have programmed one to be a coordinator (API mode = 2) and the rest to be endpoints. Using XBee-API, I am sending a Force I/O Sample ("IS") remote AT command, unicast to each endpoint. This works perfectly well when there are up to two endpoints, but as soon as a third is added, one of the three always becomes non-responsive (times out with XBeeTimeoutException). It's not always the same physical unit that stops responding, but it is always the third one (for example, if I send Force I/O Sample to Device1, Device2, and Device3, Device3 will time out, and if I change the order to Device3, Device1, Device2, then Device2 will time out).
If I set up more than three XBees, about 1 in 3 will time out, but not every third one.
I've verified that the XBees themselves are fine. I've searched the Internet, and Stack Overflow in particular, to no avail. I've tried using a simple ZNetRemoteAtRequest. I've tried opening and closing the XBee coordinator serial connection once for all three devices, once per device, and once per program run. I've tried varying the distance between the coordinator and endpoints (never more than five feet apart). I've tried different coordinator configuration parameters (from the Digi documentation). I've tried changing out the XBee used as the coordinator.
This is the code I'm using to send the Force I/O Sample request to each endpoint and read the response:
xbee = new XBee(); // Coordinator
xbee.open("/dev/ttyUSB0", 115200); // Happens before any of the endpoints are contacted
... // Loop through known endpoint addresses
XBeeRequest request = new ZBForceSampleRequest(new XBeeAddress64(endpointAddress));
ZNetRemoteAtResponse response = null;
response = (ZNetRemoteAtResponse) xbee.sendSynchronous(request, remoteXBeeTimeout);
if (response.isOk()) {
    // Process response payload
}
... // End loop and finally close coordinator connection
What might help with polling I/O samples from more than two endpoints?
EDIT: I found that Andrew Rapp's XBee-API library fakes multithreaded behavior, which causes the synchronization issues described in this question. I wrote a replacement library that is actually multithreaded and correctly maps responses from multiple XBee endpoints: https://github.com/steveperkins/xbee-api-for-java-1-4. When I wrote it, Java 1.4 was necessary for use on the BeagleBone, Plug, and Zotac single-board PCs, but it's an easy conversion to 1.7+.
Are you using hardware flow control on your serial port? Is it possible that you're sending requests out when the local XBee has deasserted CTS (i.e., asking you to stop sending)? I assume you're running at 115200 bps, so that the XBee serial port can keep up with the network data rate.
Can you turn on debugging information, or connect some port monitoring hardware/software to display the data going over the serial port to the local XBee?
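As a stopgap while debugging, the poll loop can also tolerate a timeout instead of aborting. A sketch using only the xbee-api calls already shown in the question; MAX_RETRIES, endpointAddresses, and the logging are my own hypothetical additions:

// Assumes the same objects as in the snippet above: xbee is already
// open and remoteXBeeTimeout is defined.
final int MAX_RETRIES = 2;
for (String endpointAddress : endpointAddresses) {
    XBeeRequest request = new ZBForceSampleRequest(new XBeeAddress64(endpointAddress));
    ZNetRemoteAtResponse response = null;
    for (int attempt = 1; attempt <= MAX_RETRIES && response == null; attempt++) {
        try {
            response = (ZNetRemoteAtResponse) xbee.sendSynchronous(request, remoteXBeeTimeout);
        } catch (XBeeTimeoutException e) {
            // Don't let one slow endpoint derail the whole poll; note it and retry.
            System.err.println("Timeout from " + endpointAddress + " (attempt " + attempt + ")");
        }
    }
    if (response != null && response.isOk()) {
        // Process response payload
    }
}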

Failed TWI transaction after sleep on Xmega

We've had some trouble with TWI/I2C after waking from sleep on the Atmel XMega256A3. Instead of digging into the details of TWI/I2C, we decided to use the twi_master_driver supplied by Atmel with the AVR1308 application note.
The problem is one or a few failed TWI transactions just after waking up from sleep. On the I2C bus connected to the XMega we have a few potentiometers, a thermometer, and an RTC. The XMega acts as the only master on the bus.
We use the sleep functions found in avr-libc:
{code for turning of VCC to all I2C connected devices}
set_sleep_mode(SLEEP_MODE_PWR_DOWN);
sleep_enable();
sleep_cpu();
{code for turning on VCC to all I2C connected devices}
The XMega is woken from sleep by the RTC, which sets a pin high. After the XMega wakes, we want to set a value on one of the potentiometers, but this fails: for some reason, the result of the first TWI transaction is TWIM_RESULT_NACK_RECEIVED instead of TWIM_RESULT_OK. After that, everything seems to work again.
Have we missed anything here? Are there any known issues with the XMega, sleep, and TWI? Do we need to reset the TWI or clear any flags after waking from sleep?
Best regards
Fredrik
There is a common problem on I2C/TWI where the internal state machine of a slave gets stuck in an intermediate state if a transaction is not completed fully. The slave then does not respond correctly when addressed in the next transaction. This commonly happens when the master is reset or stops outputting the SCK signal partway through a read or write. A solution is to toggle the SCK line manually 8 or 9 times before starting any data transactions, so that the internal state machines in the slaves are all reset to the start-of-transfer point and are then all looking for their address byte. A sketch of that bus-clear follows.
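This is a minimal sketch only: it assumes the TWI in question is TWIC on port C (PC0 = SDA, PC1 = SCL) and that F_CPU is defined for the delay macros; adjust the port and pins for your wiring. Call it after wake-up, before re-initializing the TWI module:

#include <avr/io.h>
#include <stdint.h>
#include <util/delay.h>   /* requires F_CPU to be defined */

/* Toggle SCL manually so a stuck slave clocks out the byte it believes
 * it is still transferring, then release the pin back to the TWI module. */
static void twi_bus_clear(void)
{
    PORTC.DIRSET = PIN1_bm;               /* drive SCL as a GPIO output */
    for (uint8_t i = 0; i < 9; i++) {     /* 8-9 pulses, as described above */
        PORTC.OUTCLR = PIN1_bm;           /* SCL low  */
        _delay_us(5);                     /* ~100 kHz half-period */
        PORTC.OUTSET = PIN1_bm;           /* SCL high */
        _delay_us(5);
    }
    PORTC.DIRCLR = PIN1_bm;               /* release SCL back to the TWI module */
}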
