Repeating +CMGS - first ok, then not working - sms

I'm using this code to talk to a stock GSM modem... (Telit/Simcom etc)
// ------------------------------
sprintf(localbuf, "AT+CMGS=\"%s\"\r", recipient);
Serial1.write(localbuf); // initiate the SMS conversation
if (waitFor(5000, "> ", "ERR")) {
    sprintf(localbuf, "%s%c", textMessage, CTRL_Z);
    Serial1.write(localbuf); // send the message body
    //... I wait for the +CMGS: response here - all good
} else {
    Serial.write("\r\n-- SMS >PROMPT FAIL --");
    retval = false;
}
... and move on
The first message - no problem - it works fine.
If I do other things and come back to send another - no problem, including other modem conversations (CSQ, CCLK, etc.) in between.
But if I try to send more than one message fairly close together (in a loose loop), the second +CMGS request fails to return the '>' prompt.
Any thoughts?
Thanks in advance.

SOLVED - Well - so far so good (SFSG?)
I discovered that if I hold off for a second AFTER the final +CMGS: ... OK is received, the following messages work as expected.
So I guess that 'OK' really isn't OK (!), no matter what testing or polling I had tried earlier - the modem simply isn't ready until it's ready.
Thanks for reading. I hope this helps someone else.
EDIT: The data sheet quotes 20 ms between sequential commands, but it's more like 200 ms in worst cases...
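For reference, a minimal sketch of the fix, reusing waitFor(), localbuf, recipient, textMessage and CTRL_Z from the code above; sendOneSms() and CMGS_GUARD_MS are made-up names for illustration:

const unsigned long CMGS_GUARD_MS = 1000; // datasheet says 20 ms; ~200 ms seen in practice, 1 s adds margin

bool sendOneSms(const char* recipient, const char* textMessage) {
    sprintf(localbuf, "AT+CMGS=\"%s\"\r", recipient);
    Serial1.write(localbuf);                        // initiate the SMS conversation
    if (!waitFor(5000, "> ", "ERR"))                // must see the "> " prompt first
        return false;
    sprintf(localbuf, "%s%c", textMessage, CTRL_Z); // body terminated with Ctrl-Z
    Serial1.write(localbuf);
    if (!waitFor(30000, "OK", "ERR"))               // wait for the final +CMGS: ... OK
        return false;
    delay(CMGS_GUARD_MS);                           // hold off before the next AT command
    return true;
}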


Does an OMNET++ / Veins simulation get very slow if both Vehicles and RSUs broadcast messages periodically?

Let me give some brief context first:
I have a scenario where the RSUs broadcast a fixed message 'RSUmessage' about every TRSU seconds. I have implemented the following code for the RSU broadcast (these fixed messages also have Psid = -100 to differentiate them from others):
void TraCIDemoRSU11p::handleSelfMsg(cMessage* msg) {
    if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
        if (wsm->getPsid() == -100) {
            sendDown(RSUmessage->dup());
            scheduleAt(simTime() + trsu + uniform(0.02, 0.05), RSUmessage);
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg); // wsm is null in this branch, so pass the original msg
    }
}
A car can receive these messages from other cars as well as from RSUs. RSUs discard the messages received from cars. The cars receive multiple such messages, do some comparison work and periodically broadcast a similar type of message, 'aggregatedMessage', per interval Tcar. aggregatedMessage also has Psid = -100, so that the message can easily be differentiated from other messages.
I am scheduling the car events using self-messages. (Though it could have been done inside handlePositionUpdate, I believe.) The handleSelfMsg of a car is as follows:
void TraCIDemo11p::handleSelfMsg(cMessage* msg) {
    if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
        wsm->setSerial(wsm->getSerial() + 1);
        if (wsm->getPsid() == -100) {
            sendDown(aggregatedMessage->dup());
            //sendDelayedDown(aggregatedMessage->dup(), simTime()+uniform(0.1,0.5));
            scheduleAt(simTime() + tcar + uniform(0.01, 0.05), aggregatedMessage);
        }
        //send this message on the service channel until the counter is 3 or higher.
        //this code only runs when channel switching is enabled
        else if (wsm->getSerial() >= 3) {
            //stop service advertisements
            stopService();
            delete(wsm);
        }
        else {
            scheduleAt(simTime() + 1, wsm);
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg);
    }
}
PROBLEM: With this setting, the simulation is very slow. I get about 50 simulation seconds in 5-6 hours or more in Express mode in the OMNeT++ GUI. (Number of RSUs: 64, number of vehicles: 40, roughly a 1 km x 1 km map.)
Also, I am referring to this post. The OP says that he got faster speed by removing the sending of a message after each RSU received one. In my case I cannot remove that, because I need to send out the broadcast messages after each interval.
Question: I think this slowness is because every node tries to sendDown messages at the beginning of each simulated second. Is it the case that OMNeT++ slows down when all vehicles and nodes schedule and send messages at the same time? (It makes sense that it would slow down, but by what degree?) There are only around 100 nodes overall in the simulation; surely it cannot be this slow.
What I tried: I tried using sendDelayedDown(wsm->dup(), simTime()+uniform(0.1,0.5)); to spread the sending of the messages throughout the first half of each simulated second. This seems to stop messages piling up at the beginning of each simulated second and sped things up a bit, but not much overall.
Can anybody please let me know whether this is normal behavior or whether I am doing something wrong?
Also, I apologize for the long post. I could not explain my problem without giving the context.
It seems you are flooding your network with messages: every message from an RSU gets duplicated and transmitted again by every car that has received it. Hence, the computational time increases quadratically with the number of nodes (senders of messages) in your network (every sent message has to be handled by every node in range to receive it). The limit of 3 transmissions per message does not seem to help much and, as the comment in the code indicates, is not used at all if there is no channel switching.
Therefore, if you cannot improve/change your code to simply send fewer messages, you will have to live with that. Your little tweak of sending the messages in a delayed manner only distributes them over one second; it does not solve the problem of flooding. One way to send fewer messages is to rebroadcast each distinct message at most once, as sketched below.
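A minimal sketch of that idea, assuming received broadcasts arrive in a handler such as onData() (the handler name varies between Veins versions) and relying on cMessage::getTreeId(), which is preserved across dup(), so all copies of one broadcast share the same id; the member seenIds is hypothetical:

#include <set>

std::set<long> seenIds; // add as a member of the application module

void TraCIDemo11p::onData(WaveShortMessage* wsm) {
    if (wsm->getPsid() == -100) {
        if (!seenIds.insert(wsm->getTreeId()).second)
            return;                // already seen: do not process or rebroadcast
        // ... comparison/aggregation logic, at most one sendDown() per id ...
    }
}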
However, there are still some hints you can follow to improve the performance of your simulation:
Compile in release mode: make MODE=Release
Run your simulation in the terminal environment Cmdenv: ./run -u Cmdenv ...
If you need to use the graphical environment by all means, you should at least speed up the animations by using the slider in the upper part of the interface.
Removing the simtime-resolution parameter from the omnetpp.ini file solves the problem.
It seems the simulation kernel has an issue when the channel delay does not match the simulation-time resolution.
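For reference, the kernel option in question looks like the line below in omnetpp.ini (a minimal sketch; the network name and the ps value are only examples). Deleting or commenting out that line restores the default simulation-time resolution:

[General]
network = FloodingNetwork   # hypothetical network name
# remove (or comment out) this line to restore the default resolution:
# simtime-resolution = ps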
You can verify the solution by cloning the following repository. Note that you need a functional installation of the OMNeT++ framework. Specifically, I tested this fix on OMNeT++ 5.6.2.
https://github.com/Ryuuba/flooding

sendReliable message sometimes not received by opposite peer

I've created a real-time game for Google Play Game Services. It's in the later alpha stages right now. I have a question about sendReliableMessage. I've noticed certain cases where the other peer doesn't receive the message. I am aware that there is a callback, onRealTimeMessageSent, and I have some code in my MainActivity:
@Override
public void onRealTimeMessageSent(int i, int i2, String s) {
    if (i == GamesStatusCodes.STATUS_OK)
    {
    }
    else
    {
        lastMessageStatus = i;
        sendToast("lastMessageStatus:" + Integer.toString(lastMessageStatus));
    }
}
My game's render loop checks the value of lastMessageStatus every iteration, and if there was something other than STATUS_OK I'm painting a T-Rex right now.
My question is: is checking the sent status really enough? I could also write code where the sender has to wait for an acknowledgement message. Each message would be stamped with a UUID, and if an ACK is not received within a timeout the sender would send the message again. Is an ACK-based system necessary to create a persistent connection?
I've noticed certain cases where there is some lag before the opposite peer receives the reliable message, and I was wondering: is there a timeout on sendReliableMessage? The Google Play Services documentation doesn't seem to indicate that there is a timeout at all.
Thank you
Reliable messages are just that: reliable. There are not a lot of use cases for the onRealTimeMessageSent callback for reliable messages because, as you said, it does not guarantee that the recipient has processed the message yet, only that it was sent.
It may seem annoying, but an ACK-based system is the best way to know for sure that your user has received the message. A UUID is one good way to do this. I have done this myself and found it to work great (although now you have round-trip latency).
As for a timeout, that is not implemented in the RealTime Messaging API. I have personally found round-trip latency (send message, receive ACK in callback) to be about 200 ms, and I have never found a way to make a message fail to deliver eventually, even when purposefully using bad network conditions.
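A sketch of such a UUID-stamped, resend-on-timeout scheme (shown as language-agnostic C++ for illustration; every name below is hypothetical and none of it is part of the Play Games API):

#include <chrono>
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

struct PendingMessage {
    std::vector<uint8_t> payload;                  // message body, UUID included
    std::chrono::steady_clock::time_point sentAt;  // for timeout checks
};

std::map<std::string, PendingMessage> pending;     // keyed by UUID

// Call from the game loop: resend anything unacknowledged for too long.
void resendExpired(const std::function<void(const std::vector<uint8_t>&)>& send) {
    const auto now = std::chrono::steady_clock::now();
    for (auto& entry : pending) {
        PendingMessage& msg = entry.second;
        if (now - msg.sentAt > std::chrono::milliseconds(500)) {
            send(msg.payload);  // hand back to the transport (e.g. the reliable-send call)
            msg.sentAt = now;
        }
    }
}

// Call when an ACK arrives carrying the original message's UUID.
void onAck(const std::string& uuid) { pending.erase(uuid); }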

Boost async_receive_from makes std::cin crash with buffer overflow

While coding a server supporting both TCP and UDP with the Boost library, I encountered a strange problem: after the server receives any UDP message, a call to std::cin (or std::getline) will crash if I try to put the input into a string.
This does only happen after at least one UDP message was received. I have no idea what happens here, because I hardly do anything when receiving a message. I broke the important code down:
void AsynchronousServer::DoReceiveUDP()
{
    m_udp_socket.async_receive_from(
        boost::asio::buffer(m_udp_receive_buffer, m_udp_receive_buffer.size()),
        udp::endpoint(),
        [this](boost::system::error_code error, std::size_t bytes_transferred)
        {
        });
}
The DoReceiveUDP() method is called right when the server is up, before io_service.run(). Usually it does a bit more (e.g. calls itself again), but for testing purposes I commented everything out so that it really does nothing more than receive once. m_udp_receive_buffer is a std::array<char, 8196>, an attribute of the AsynchronousServer class that is not used anywhere else.
In the main thread, this is all I really do after setting up the server:
while (true)
{
    std::string message;
    std::getline(std::cin, message); // on this line the program crashes
    //server.SendMessageTCP(1, message);
}
Now as I said, the crash (the debug message says buffer overflow) only happens after a message was received via UDP. My server also reads TCP messages via async_read; this does not provoke the error, though.
I also tested storing the getline input in a constant-sized array, which works fine. But I can't really do that, since I don't know how long the message is, which means the buffer is filled with a lot of useless characters when I send the message. Besides, I don't feel safe with strange stuff like that happening and would rather solve the problem than bypass it.
Do any of you have some ideas on what could be the problem here? If you need more code, just ask, but I think I already posted everything relevant. :)
EDIT: I commented out the error code and bytes transferred too, but they are in the "full" version. I don't get any errors, and bytes transferred is exactly the length of the message.
After some more tests I can at least guess a little more. The problem seems to occur if I am expected to enter input via cin and, during this, a message is received.
E.g. if I do this:
while (true)
{
    std::string message;
    boost::this_thread::sleep(boost::posix_time::seconds(3));
    std::getline(std::cin, message);
}
and the client sends a UDP message within the three seconds the thread sleeps, everything goes fine. If the three seconds pass and THEN the message is received, it crashes as before.
However, there is one really strange behaviour: after I have sent a UDP message within these three seconds, the program won't crash anymore at all - even if I wait with the next message until the thread has reached getline again. I have no idea why that happens...
Alright, so I found a "solution" for this problem. I still don't know why it happens, whether this is really a solution at all, or whether I'll run into other problems later.
Also, I have no idea why this solution works. :D
Anyway, it works if the buffer is not a member variable but is created anew for every call of DoReceiveUDP() (note that the shared_ptr must actually be allocated, and binding it into the completion handler keeps the buffer alive until the handler runs):
void AsynchronousServer::DoReceiveUDP()
{
    // allocate a fresh buffer for this receive; without make_shared the
    // pointer would be null and dereferencing it would be undefined behaviour
    auto udp_receive_buffer = std::make_shared<std::array<char, 8192>>();
    m_udp_socket.async_receive_from(
        boost::asio::buffer(*udp_receive_buffer, udp_receive_buffer->size()),
        udp::endpoint(), // NB: the sender-endpoint object should also outlive the async operation
        boost::bind<void>([this](boost::system::error_code error, std::size_t bytes_transferred,
                                 std::shared_ptr<std::array<char, 8192>> udp_receive_buffer)
        {
            // the bound shared_ptr keeps the buffer alive until this handler runs
        }, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred,
           udp_receive_buffer));
}

GSM SM5100B +CME ERROR: 4 error

I am using Arduino to control an SM5100B GSM device, everything works except when I want to send an SMS after receiving another. I get this,
Error code:
OK > +CMGS: 25 OK +CME ERROR: 4
My code for handling the aforementioned received SMS:
#include <SoftwareSerial.h> // include the SoftwareSerial library to send serial commands to the cellular module

char inchar; // will hold the incoming character from the serial port
SoftwareSerial cell(2, 3);
char mobilenumber[] = "0597010129";

void setup() {
    //GSM
    Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
    Serial.println("Initialize GSM module serial port for communication.");
    cell.begin(9600);
    delay(35000); // give time for GSM module to register on network etc.
    Serial.println("delay off");
    cell.println("AT+CMGF=1"); // set SMS mode to text
    delay(200);
    cell.println("AT+CNMI=3,3,0,0"); // set module to send SMS data to serial out upon receipt
    delay(200);
}
void loop() {
    if (cell.available() > 0) // if a character comes in from the cellular module
    {
        inchar = cell.read();
        Serial.println(inchar);
        if (inchar == '#') { // OK - the start of our command
            delay(10);
            inchar = cell.read();
            Serial.println(inchar);
            if (inchar == 'a') {
                delay(10);
                Serial.println("The following SMS : \n");
                inchar = cell.read();
                Serial.println(inchar);
                if (inchar == '0') { // sequence = #a0
                    Serial.println("#a0 was received");
                }
                else if (inchar == '1') { // sequence = #a1
                    Serial.println("#a1 was received ");
                    sendSms();
                }
            }
            cell.println("AT+CMGD=1,4"); // AT command to delete all msgs
            Serial.println(" delete all SMS");
        }
    } // end of if (cell.available() > 0) {...}
}
void sendSms() {
    //cell.println("AT+CMGF=1"); // set SMS mode to text
    cell.print("AT+CMGS="); // now send message...
    cell.print((char)34); // ASCII equivalent of "
    cell.print(mobilenumber);
    cell.println((char)34); // ASCII equivalent of "
    delay(500); // give the module some thinking time
    cell.print(":D hello m3alleg :D"); // our message to send
    cell.println((char)26); // ASCII equivalent of Ctrl-Z
    delay(20000);
}
General note about your handling of AT commands.
No, no, no! This way of doing it will never work reliably. You MUST wait for the > character to be received before sending "text to send". Or actually it is not just the > character, it is four characters. Quote from the 3GPP specification 27.005:
the TA shall send a four character sequence <CR><LF><greater_than><space> (IRA 13, 10, 62, 32) after command line is terminated with <CR>; after that text can be entered from TE to ME/TA.
(TA (terminal adapter) here means modem and TE (terminal equipment) the sender of AT commands)
For any abortable AT command (and 27.005 clearly states for AT+CMGS that This command should be abortable.), the sending of any character will abort the operation of the command. To quote ITU V.250:
5.6.1 Aborting commands
...
Aborting of commands is accomplished by the transmission from the DTE to the DCE of any character.
(DCE (data communication equipment) here means modem and DTE (data terminal equipment) the sender of AT commands)
This means that when you send "text to send" before "\r\n> " has been sent by the modem, the command will be aborted. There is no way to wait "long enough" and just expect the response to have been sent. You MUST read and parse the response text you get back from the modem.
The same applies to the final result code after each command (e.g. OK, ERROR, CME ERROR and a few more). For instance, sending "AT+CMGF=1" and then sending the next command without first waiting for OK is begging for problems. So always, when sending AT commands, you MUST wait for the final result code before sending the next command.
Please never, never use delay to wait for any AT command response. It's as useful as kicking dogs that stand in your way in order to get them to move. Yes, it might actually work sometimes, but at some point you will be sorry for taking that approach...
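To illustrate reading instead of delaying, here is a minimal sketch of a helper that consumes modem output until an expected token arrives or a timeout expires. cell is the SoftwareSerial from the question; the waitFor() name and the simple matching are my own, and a real parser should also recognise error responses:

bool waitFor(const char* token, unsigned long timeoutMs) {
    size_t matched = 0;
    const size_t len = strlen(token);
    unsigned long start = millis();
    while (millis() - start < timeoutMs) {
        if (cell.available()) {
            char c = cell.read();
            // naive prefix matcher: advance on a match, otherwise restart
            matched = (c == token[matched]) ? matched + 1 : (c == token[0] ? 1 : 0);
            if (matched == len) return true; // saw the whole token
        }
    }
    return false; // timed out
}

// Usage: wait for the full four-character prompt before sending the body,
// then wait for the final result code before issuing the next command:
//   cell.print("AT+CMGS=\"...\"\r");
//   if (waitFor("\r\n> ", 5000)) { /* send text + Ctrl-Z, then waitFor("OK", 30000) */ }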
Answer to your question.
Based on the response you get, I can see that your problem is not command abortion (although your parsing has serious problems, as described above, that you should fix), and the CME ERROR is your best clue. Section "9.2.1 General errors" in 27.007 gives operation not supported as the description for value 4.
27.005 states that:
If sending fails in a network or an ME error, final result code +CMS ERROR: is returned.
Notice that this is +CMS ERROR and not +CME ERROR, but it is applicable, see below.
I guess that the sequence of actions is as follows: the AT command handling part of the SM5100B GSM modem accepts the SMS data, formats it appropriately and sends it off to the part of the modem that communicates with the GSM network. That part successfully sends the SMS data to the network and reports this back to the AT command handling part, which then prints +CMGS: 25 and the final result code OK. However, after a short time the network sends back a rejection message for the SMS, which is then given as the +CME ERROR response.
If my guess above is correct, should the response have been delivered as +CMS ERROR instead? No, because the final response for the AT+CMGS command has already been given (OK), and returning multiple final result codes for a command should never be done (except by mistake (note 1)).
And while +CME ERROR can replace the ERROR final result code, it is not only a final result code. From the AT+CMEE command description:
Set command disables or enables the use of result code +CME ERROR: as an indication of an error relating to the functionality of the MT. When enabled, MT related errors cause +CME ERROR: final result code instead of the regular ERROR final result code. ERROR is returned normally when error is related to syntax, invalid parameters, or TA functionality.
Thus +CME ERROR can be both a final result code as well as an unsolicited result code (possibly also an intermediate result code).
But could the AT+CMGS command not have waited for the network rejection and returned +CMS ERROR? Probably not. Without knowing too much about the network details of SMS sending, it might be the case that rejection today can occur at a much later time than it used to. Such changes are sometimes a problem with GSM-related AT commands, which have an old heritage that was originally tightly tied to GSM behaviour that becomes less and less true as the technology moves to GPRS, UMTS, LTE, etc.
Note 1:
One of my former colleagues used to complain about the way the standard has specified voice call handling, because after an ATD1234; command you first get the final result code OK, and then later, when the call ends, you get a new final result code NO CARRIER. This is just horribly bad design; the call end indication should have been a specific unsolicited response and not a final response.
So to summarise:
Your SMS seems to be rejected by the network. Try to find out why.
You also have some serious problems with your AT command handling that you should fix; there is no way to handle AT commands without reading and parsing the response text from the modem.
cell.println("AT+CNMI=3,3,0,0"); // set module to send SMS data to
serial out upon receipt
For anyone who is looking for answer to the same problem I had:
I was trying to wake up gsm module from sleep mode by sending sms and it didn't work right away. Phone call goes straight to UART, but for sms you have to use this command to set module to send SMS data to serial port upon receipt .
AT+CNMI=3,3,0,0

Writing to channel in a loop

I have to send a lot of data in small blocks to a client connected to my server.
So, I have something like:
for (;;) {
    messageEvent.getChannel().write("Hello World");
}
The problem is that, for some reason, the client is receiving dirty data, as if the Netty buffer were not cleared at each iteration, so we get something like "Hello WorldHello".
If I make a little change in my code, putting in a thread sleep, everything works fine:
for (;;) {
    messageEvent.getChannel().write("Hello World");
    Thread.sleep(1000);
}
As MRAB said, if the server sends multiple messages on a channel without indicating the end of each message, the client cannot always read the messages correctly. Adding sleep time after writing a message will not solve the root cause of the problem either.
To fix this problem, you have to mark the end of each message in a way the other party can identify. If client and server both use Netty, you can add LengthFieldPrepender and LengthFieldBasedFrameDecoder before your JSON handlers, as sketched below.
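The underlying idea is length-prefix framing: the sender puts the byte count in front of every message, so the receiver knows exactly where each message ends regardless of how TCP splits or merges the writes. A conceptual sketch (plain C++ for illustration only; in Netty the two handlers named above do this for you):

#include <cstdint>
#include <string>
#include <vector>

// sender side: prepend a 4-byte big-endian length to the payload
std::vector<uint8_t> frame(const std::string& payload) {
    const uint32_t n = payload.size();
    std::vector<uint8_t> out = {
        uint8_t(n >> 24), uint8_t(n >> 16), uint8_t(n >> 8), uint8_t(n) };
    out.insert(out.end(), payload.begin(), payload.end());
    return out;
}

// receiver side: only consume a message once all of its bytes have arrived
bool tryReadFrame(std::vector<uint8_t>& buf, std::string& msg) {
    if (buf.size() < 4) return false;
    const uint32_t n = (uint32_t(buf[0]) << 24) | (uint32_t(buf[1]) << 16)
                     | (uint32_t(buf[2]) << 8) | uint32_t(buf[3]);
    if (buf.size() < 4 + size_t(n)) return false; // wait for more bytes
    msg.assign(buf.begin() + 4, buf.begin() + 4 + n);
    buf.erase(buf.begin(), buf.begin() + 4 + n);
    return true;
}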
String encodedMsg = new Gson().toJson(
    sendToClient, new TypeToken<ArrayList<CoordinateVO>>() {}.getType());
By default, Gson uses HTML escaping for content; sometimes this leads to weird encoding. You can disable this if required by using a Gson factory:
final static GsonBuilder gsonBuilder = new GsonBuilder().disableHtmlEscaping();
....
String encodedMsg = gsonBuilder.create().toJson(object);
In neither case are you sending anything to indicate where one item ends and the next begins, or how long each item is.
In the second case the sleep lets the channel time out and flush, so the client sees a 'break', which it interprets as the end of the item.
The client should never see this "dirty data". If that's really the case then it's a bug. But to be honest, I can't think of anything in Netty that could lead to this: every Channel.write(..) event is added to a queue, which then gets written to the client when possible. So all data passed to the write(..) method just gets written; there is no "concat" of the data.
Do you maybe have a custom encoder in the pipeline that buffers the data before sending it to the client?
It would also help if you could show the complete code that gives this behaviour, so we can see what handlers are in the pipeline, etc.
