How to close a serial communication in Cocoa background thread - cocoa

I'm trying to run a serial communication example that sends data from an Arduino to a Cocoa application, following the code provided at http://playground.arduino.cc/Interfacing/Cocoa (IOKit/ioctl method). It works, but I cannot stop the receiver thread once it has started.
I've implemented a toggle button (Start/Stop) which, on Start, opens the serial port and launches the receiver thread:
- (IBAction) startButton: (NSButton *) btn {
    (…)
    error = [self openSerialPort: [SelectPort titleOfSelectedItem] baud:[Baud intValue]];
    (…)
    [self refreshSerialList:[SelectPort titleOfSelectedItem]];
    [self performSelectorInBackground:@selector(incomingTextUpdateThread:) withObject:[NSThread currentThread]];
    (…)
}
The thread code is practically the same as in the example, except that I've added code to rebuild the serial packet from the received buffers and save it to a SQLite database:
- (void)incomingTextUpdateThread: (NSThread *) parentThread {
    // mark that the thread is running
    readThreadRunning = TRUE;
    const int BUFFER_SIZE = 100;
    char byte_buffer[BUFFER_SIZE]; // buffer for holding incoming data
    int numBytes = 0; // number of bytes read during read
    (…)
    // assign a high priority to this thread
    [NSThread setThreadPriority:1.0];
    // this will loop until the serial port closes
    while(TRUE) {
        // read() blocks until some data is available or the port is closed
        numBytes = (int) read(serialFileDescriptor, byte_buffer, BUFFER_SIZE); // read up to the size of the buffer
        if(numBytes > 0) {
            // format serial data into packets, but first append at start the end of last read
            buffer = [[NSMutableString alloc] initWithBytes:byte_buffer length:numBytes encoding:NSASCIIStringEncoding];
            if (status == 1 && [ipacket length] != 0) {
                [buffer insertString:ipacket atIndex:0];
                numBytes = (int) [buffer length];
            }
            ipacket = [self processSerialData:buffer length:numBytes]; // Recompose data and save to database.
        } else {
            break; // Stop the thread if there is an error
        }
    }
    // make sure the serial port is closed
    if (serialFileDescriptor != -1) {
        close(serialFileDescriptor);
        serialFileDescriptor = -1;
    }
    // mark that the thread has quit
    readThreadRunning = FALSE;
}
I try to close the port in the main thread with this code, also part of the startButton selector, following the provided example:
if (serialFileDescriptor != -1) {
    [self appendToIncomingText:@"Trying to close the serial port...\n"];
    close(serialFileDescriptor);
    serialFileDescriptor = -1;
    // To review... I think the thread doesn't notice that the file descriptor has been closed...
    // wait for the reading thread to die
    while(readThreadRunning);
    // re-opening the same port REALLY fast will fail spectacularly... better to sleep a sec
    sleep(0.5);
    //[btn setTitle:@"Start"];
    [Start setTitle:@"Start"];
}
But it seems that the receiver thread is not aware of the change to the global variable serialFileDescriptor.

So, startButton: opens the port, spawns off a thread to start reading from it, and then immediately closes the port? That's not going to turn out well.
startButton: should not close the port. Leave that for the reading thread to do when it's done, and do it on the main thread only when you need to close the port for some other reason (e.g., quitting).
Global variables are, by definition, visible throughout the program, and this includes across thread boundaries. If readThreadRunning is not getting set to FALSE (which assumes that FALSE hasn't been defined to something exotic), then your read thread's loop must still be running. Either it is still reading data, or read is blocked (it is waiting for more data).
Note that read has no way to know whether there will be more data. As your comment in the code says, it will block until either it has some data to return or the port gets closed. You should either work out a way to know ahead of time how much data you'll need to read, and stop when you've read that much, or see if you can close the port at the opposite end when everything has been sent and received.
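For completeness, if the port really must be closed from another thread, one common POSIX workaround (not part of the answer above, just a standard pattern) is to have the read loop wait on select() with a timeout so it can periodically check a stop flag instead of blocking in read() forever. A minimal sketch, assuming the question's serialFileDescriptor plus a hypothetical stopRequested flag set by the main thread:

#include <sys/select.h>
#include <unistd.h>

extern int serialFileDescriptor;        // from the question's code
extern volatile bool stopRequested;     // hypothetical flag set by the main thread

static void readLoop(void)
{
    char buf[100];

    while (!stopRequested) {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(serialFileDescriptor, &readSet);

        struct timeval timeout = { 0, 200 * 1000 };   // 200 ms

        // Wait until data arrives or the timeout expires, so the stop
        // flag is re-checked at least every 200 ms.
        int ready = select(serialFileDescriptor + 1, &readSet, NULL, NULL, &timeout);
        if (ready < 0)
            break;          // select() failed; give up
        if (ready == 0)
            continue;       // timeout; loop around and test the flag again

        ssize_t n = read(serialFileDescriptor, buf, sizeof(buf));
        if (n <= 0)
            break;          // port closed or read error

        // ... hand the bytes to the packet-rebuilding code ...
    }
}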

Related

How to receive messages via wifi while running main program in ESP32?

I've incorporated multiple features I want in a microcontroller program (ESP32 WROOM-32) and need some advice on the best way to keep the program running and receive messages while it is running.
Current code:
//includes and declarations
setup()
{
    //set up wifi, server
}

main(){
    WiFiClient client = server.available();
    byte new_command[40];
    if (client) // If a client object is created, a connection is set up
    {
        Serial.println("New wifi Client.");
        String currentLine = ""; //Used to print messages
        while (client.connected())
        {
            recv_byte = client.read();
            new_command = read_incoming(&recv_byte, client); //Returns received command and checks the format. If invalid, returns a zero array
            if (new_command[0] != 0) //Checks that the message is not zero; none of the valid messages start with zero
            {
                execute_command(new_command);
                //new_command is set to zero
            }
        } //end of while loop
    } //end of if block
}
The downside of this is that the ESP32 waits until the command has finished executing before it is ready to receive a new message. It is desired that the ESP32 receive commands, store them, and execute them at its own pace. I am planning to change the current code to receive messages while the code is running, as follows:
main()
{
    WiFiClient client = server.available();
    byte new_command[40];
    int command_count = 0;
    byte command_array[50][40];
    if (command_count != 0)
    {
        execute_command(command_array[0]);
        //Decrement command_count
        //Shift all commands in command_array up by one row
        //Set last executed command to zero
    }
} //end of main loop

void message_interrupt(int recv_byte, WiFiClient& running_client)
{
    if (running_client.connected())
    {
        recv_byte = running_client.read();
        new_command = read_incoming(&recv_byte, running_client); //Returns received command and checks the format. If invalid, returns a zero array
        //add new command to command_array after last command
        //increment command_count
    }
}
Which interrupt do I use to receive the message and update command_array? https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/wifi.html doesn't mention any receive/transmit events. I couldn't find any receive/transmit interrupt either, or maybe I searched for the wrong term.
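For reference, here is a minimal sketch of the queued-command idea described above, using plain polling in loop() rather than an interrupt (as far as I know the stock Arduino WiFiClient API is polled, not interrupt-driven). read_incoming and execute_command are the question's own helpers; the signature assumed for read_incoming (filling a caller-supplied buffer) is a placeholder:

// Simple ring buffer of commands: loop() drains the socket quickly and
// executes at most one queued command per pass, so receiving is never
// blocked by a long-running command.
const int QUEUE_LEN = 50;
const int CMD_LEN   = 40;

byte command_queue[QUEUE_LEN][CMD_LEN];
int  queue_head = 0;   // next command to execute
int  queue_tail = 0;   // next free slot

bool queueEmpty() { return queue_head == queue_tail; }
bool queueFull()  { return (queue_tail + 1) % QUEUE_LEN == queue_head; }

void loop()
{
    WiFiClient client = server.available();

    // 1. Pull any pending bytes off the socket and queue complete commands.
    if (client && client.connected() && client.available() > 0)
    {
        int recv_byte = client.read();
        byte new_command[CMD_LEN];
        read_incoming(&recv_byte, client, new_command);   // assumed signature: fills new_command
        if (new_command[0] != 0 && !queueFull())
        {
            memcpy(command_queue[queue_tail], new_command, CMD_LEN);
            queue_tail = (queue_tail + 1) % QUEUE_LEN;
        }
    }

    // 2. Execute at most one queued command per pass through loop().
    if (!queueEmpty())
    {
        execute_command(command_queue[queue_head]);
        queue_head = (queue_head + 1) % QUEUE_LEN;
    }
}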

How to terminate all threads reading from Pipe (NSPipe) related to Process (NSTask)?

I am writing a macOS/Cocoa app that monitors a remote log file using a common recipe that launches a Process (formerly NSTask) instance on a background thread and reads stdout of the process via a Pipe (formerly NSPipe), as listed below:
class LogTail {
    var process : Process? = nil

    func dolog() {
        //
        // Run ssh fred@foo.org /usr/bin/tail -f /var/log.system.log
        // on a background thread and monitor its stdout.
        //
        let processQueue = DispatchQueue.global(qos: .background)
        processQueue.async {
            //
            // Create the process and associated command.
            //
            let process = Process()
            self.process = process
            process.launchPath = "/usr/bin/ssh"
            process.arguments = ["fred@foo.org",
                                 "/usr/bin/tail", "-f",
                                 "/var/log.system.log"]
            process.environment = [ ... ]
            //
            // Create a pipe to read stdout of the command as data becomes available.
            //
            let pipe = Pipe()
            process.standardOutput = pipe
            let outHandle = pipe.fileHandleForReading
            outHandle.readabilityHandler = { pipe in
                if let string = String(data: pipe.availableData,
                                       encoding: .utf8) {
                    // write string to NSTextView on main thread
                }
            }
            //
            // Launch the process and block the background thread
            // until the process completes.
            //
            process.launch()
            process.waitUntilExit()
            //
            // What do I do here to make sure all related
            // threads terminate?
            //
            outHandle.closeFile()              // XXX
            outHandle.readabilityHandler = nil // XXX
        }
    }
}
Everything works just dandy, but when the process quits (killed via process.terminate) I notice (via Xcode's Debug Navigator and the Console app) that there are multiple threads consuming 180% or more of the CPU!?!
Where is this CPU leak coming from?
I threw in outHandle.closeFile() (see the code marked XXX above) and that reduced the CPU usage down to just a few percent, but the threads still existed! What am I doing wrong, or how do I make sure all the related threads terminate? (I'd prefer graceful termination, i.e., the thread bodies finish executing.)
Someone posted a similar question almost 5 years ago!
UPDATE:
The documentation for NSFileHandle's readabilityHandler says:
To stop reading the file or socket, set the value of this property to nil. Doing so cancels the dispatch source and cleans up the file handle’s structures appropriately.
so setting outHandle.readabilityHandler = nil seems to solve the problem too.
Even though I have seemingly solved the problem, I really don't understand where this massive CPU leak comes from -- very mysterious.

CFRunLoop non-blocking wait for a buffer to be filled

I am porting an app that reads data from a BT device to the Mac. In the Mac-specific code, I have a class with the delegate methods for the BT callbacks, like -(void) rfcommChannelData:(...)
In that callback, I fill a buffer with the received data. I have a function that is called from the app:
-(int) m_timedRead:(unsigned char*)buffer length:(unsigned long)numBytes time:(unsigned int)timeout
{
    double steps = 0.01;
    double time = (double)timeout/1000;
    bool ready = false;
    int read, total = 0;
    unsigned long restBytes = numBytes;
    while(!ready){
        unsigned char *ptr = buffer+total;
        read = [self m_readRFCOMM:(unsigned char*)ptr length:(unsigned long)restBytes];
        total += read;
        if(total >= numBytes){
            ready = true; continue;
        }
        restBytes = numBytes-total;
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, .4, false);
        time -= steps;
        if(time <= 0){
            ready = true; continue;
        }
    }
    return total;
}
My problem is that this run loop makes the whole app extremely slow. If I don't use the default mode and instead create my own run loop with a run loop timer, the callback method rfcommChannelData never gets called. I create my own run loop with the following code:
// CFStringRef myCustomMode = CFSTR("MyCustomMode");
// CFRunLoopTimerRef myTimer;
// myTimer = CFRunLoopTimerCreate(NULL,CFAbsoluteTimeGetCurrent()+1.0,1.0,0,0,foo,NULL);
// CFRunLoopAddTimer(CFRunLoopGetCurrent(), myTimer, myCustomMode);
// CFRunLoopTimerInvalidate(myTimer);
// CFRelease(myTimer);
Any idea why the default run loop slows down the whole app, or how to make my own run loop allow the rfcommChannelData callbacks to be triggered?
Many thanks,
Anton Albajes-Eizagirre
If you're working on the main thread of a GUI app, don't run the run loop internally to your own methods. Install run loop sources (or let the frameworks' asynchronous APIs install sources on your behalf) and just return to the main event loop. That is, let the flow of execution return out of your code and back to your caller. The main event loop runs the run loop of the main thread and, when sources are ready, their callbacks will fire, which will probably call your methods.
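To illustrate "install run loop sources and return to the caller": a custom version-0 CFRunLoopSource can be signalled from the Bluetooth data callback and do its work on the main run loop. A minimal CoreFoundation sketch (InstallBufferSource, NotifyBufferReady, and the processing placeholder are hypothetical names, not from the question):

#include <CoreFoundation/CoreFoundation.h>

static CFRunLoopSourceRef gBufferSource;
static CFRunLoopRef       gMainRunLoop;

// Runs on the main run loop whenever the source has been signalled.
static void BufferReadyPerform(void *info)
{
    // Consume the bytes accumulated by rfcommChannelData: here,
    // e.g. check whether numBytes have arrived and notify the caller.
}

// Call once on the main thread.
static void InstallBufferSource(void)
{
    CFRunLoopSourceContext ctx = {0};
    ctx.perform = BufferReadyPerform;

    gMainRunLoop = CFRunLoopGetCurrent();
    gBufferSource = CFRunLoopSourceCreate(kCFAllocatorDefault, 0, &ctx);
    CFRunLoopAddSource(gMainRunLoop, gBufferSource, kCFRunLoopDefaultMode);
}

// Call from the RFCOMM data callback after appending to the buffer.
static void NotifyBufferReady(void)
{
    CFRunLoopSourceSignal(gBufferSource);
    CFRunLoopWakeUp(gMainRunLoop);
}

This way the polling loop in m_timedRead disappears: the caller simply returns to the main event loop and reacts when the perform callback fires.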

Duplex named pipe hangs on a certain write

I have a C++ pipe server app and a C# pipe client app communicating via Windows named pipe (duplex, message mode, wait/blocking in separate read thread).
It all works fine (both sending and receiving data via the pipe) until I try and write to the pipe from the client in response to a forms 'textchanged' event. When I do this, the client hangs on the pipe write call (or flush call if autoflush is off). Breaking into the server app reveals it's also waiting on the pipe ReadFile call and not returning.
I tried running the client write on another thread -- same result.
Suspect some sort of deadlock or race condition but can't see where... don't think I'm writing to the pipe simultaneously.
Update1: tried pipes in byte mode instead of message mode - same lockup.
Update2: Strangely, if (and only if) I pump lots of data from the server to the client, it cures the lockup!?
Server code:
DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
    DWORD byteCount;
    if (ReadFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        aBytesRead = (int)byteCount;
        aBuff[byteCount] = 0;
        return ERROR_SUCCESS;
    }
    return GetLastError();
}

DWORD SendMsg(const char* aBuff, unsigned int aBuffLen)
{
    DWORD byteCount;
    if (WriteFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        return ERROR_SUCCESS;
    }
    mClientConnected = false;
    return GetLastError();
}

DWORD CommsThread()
{
    while (1)
    {
        std::string fullPipeName = std::string("\\\\.\\pipe\\") + mPipeName;
        mPipe = CreateNamedPipeA(fullPipeName.c_str(),
                                 PIPE_ACCESS_DUPLEX,
                                 PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                 PIPE_UNLIMITED_INSTANCES,
                                 KTxBuffSize,   // output buffer size
                                 KRxBuffSize,   // input buffer size
                                 5000,          // client time-out ms
                                 NULL);         // no security attribute
        if (mPipe == INVALID_HANDLE_VALUE)
            return 1;
        mClientConnected = ConnectNamedPipe(mPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
        if (!mClientConnected)
            return 1;

        char rxBuff[KRxBuffSize+1];
        DWORD error = 0;
        while (mClientConnected)
        {
            Sleep(1);
            int bytesRead = 0;
            error = ReadMsg(rxBuff, KRxBuffSize, bytesRead);
            if (error == ERROR_SUCCESS)
            {
                rxBuff[bytesRead] = 0; // terminate string.
                if (mMsgCallback && bytesRead > 0)
                    mMsgCallback(rxBuff, bytesRead, mCallbackContext);
            }
            else
            {
                mClientConnected = false;
            }
        }
        Close();
        Sleep(1000);
    }
    return 0;
}
Client code:
public void Start(string aPipeName)
{
    mPipeName = aPipeName;
    mPipeStream = new NamedPipeClientStream(".", mPipeName, PipeDirection.InOut, PipeOptions.None);
    Console.Write("Attempting to connect to pipe...");
    mPipeStream.Connect();
    Console.WriteLine("Connected to pipe '{0}' ({1} server instances open)", mPipeName, mPipeStream.NumberOfServerInstances);
    mPipeStream.ReadMode = PipeTransmissionMode.Message;
    mPipeWriter = new StreamWriter(mPipeStream);
    mPipeWriter.AutoFlush = true;
    mReadThread = new Thread(new ThreadStart(ReadThread));
    mReadThread.IsBackground = true;
    mReadThread.Start();
    if (mConnectionEventCallback != null)
    {
        mConnectionEventCallback(true);
    }
}

private void ReadThread()
{
    byte[] buffer = new byte[1024 * 400];
    while (true)
    {
        int len = 0;
        do
        {
            len += mPipeStream.Read(buffer, len, buffer.Length);
        } while (len > 0 && !mPipeStream.IsMessageComplete);
        if (len == 0)
        {
            OnPipeBroken();
            return;
        }
        if (mMessageCallback != null)
        {
            mMessageCallback(buffer, len);
        }
        Thread.Sleep(1);
    }
}

public void Write(string aMsg)
{
    try
    {
        mPipeWriter.Write(aMsg);
        mPipeWriter.Flush();
    }
    catch (Exception)
    {
        OnPipeBroken();
    }
}
If you are using separate threads, you will be unable to read from the pipe at the same time you write to it. For example, if one thread is doing a blocking read from the pipe and another thread then issues a blocking write, the write call will block until the read call has completed; if this behavior is unexpected, your program will become deadlocked.
I have not tested overlapped I/O, but it MAY be able to resolve this issue. However, if you are determined to use synchronous calls, then the models below may help you solve the problem.
Master/Slave
You could implement a master/slave model in which either the client or the server is the master and the other end only responds; this is generally how the MSDN examples are structured.
In some cases you may find this problematic when the slave periodically needs to send data to the master. You must either use an external signaling mechanism (outside of the pipe), have the master periodically query/poll the slave, or swap the roles so that the client is the master and the server is the slave.
Writer/Reader
You could use a writer/reader model with two different pipes. However, you must associate those two pipes somehow if you have multiple clients, since each pipe will have a different handle. You could do this by having the client send a unique identifier value on connection to each pipe, which would then let the server associate the two pipes. This number could be the current system time or even a unique identifier that is global or local.
Threads
If you are determined to use the synchronous API, you can use threads with the master/slave model if you do not want to be blocked while waiting for a message on the slave side. You will, however, want to lock the reader after it reads a message (or encounters the end of a series of messages), then write the response (as the slave should), and finally unlock the reader. You can lock and unlock the reader using locking mechanisms that put the thread to sleep, as these are most efficient.
Security Problem With TCP
The biggest drawback of going with TCP instead of named pipes is security. A TCP stream does not contain any security natively, so if security is a concern you will have to implement it yourself, and you risk creating a security hole since you would have to handle authentication on your own. A named pipe can provide security if you set the parameters properly. Also, to note again more clearly: security is no simple matter, and generally you will want to use existing facilities that have been designed to provide it.
I think you may be running into problems with named pipes message mode. In this mode, each write to the kernel pipe handle constitutes a message. This doesn't necessarily correspond with what your application regards a Message to be, and a message may be bigger than your read buffer.
This means that your pipe reading code needs two loops, the inner reading until the current [named pipe] message has been completely received, and the outer looping until your [application level] message has been received.
Your C# client code does have a correct inner loop, reading again if IsMessageComplete is false:
do
{
len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len>0 && !mPipeStream.IsMessageComplete);
Your C++ server code doesn't have such a loop - the equivalent at the Win32 API level is testing for the return code ERROR_MORE_DATA.
My guess is that somehow this is leading to the client waiting for the server to read on one pipe instance, whilst the server is waiting for the client to write on another pipe instance.
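For reference, a server-side read loop that handles ERROR_MORE_DATA might look roughly like this. It is only a sketch, reusing the question's mPipe-style handle and KRxBuffSize constant:

#include <windows.h>
#include <string>

// Read one complete named-pipe message, continuing across multiple
// ReadFile calls when the message is larger than the local buffer.
DWORD ReadWholeMsg(HANDLE pipe, std::string& message)
{
    char chunk[KRxBuffSize];      // KRxBuffSize as in the question's code
    message.clear();

    for (;;)
    {
        DWORD bytesRead = 0;
        BOOL  ok = ReadFile(pipe, chunk, sizeof(chunk), &bytesRead, NULL);

        if (bytesRead > 0)
            message.append(chunk, bytesRead);

        if (ok)
            return ERROR_SUCCESS;          // whole message received

        DWORD err = GetLastError();
        if (err != ERROR_MORE_DATA)
            return err;                    // real error; give up
        // ERROR_MORE_DATA: part of the message remains, keep reading.
    }
}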
It seems to me that what you are trying to do will not work as expected. Some time ago I tried to do something that looked like your code and got similar results: the pipe just hung, and it was difficult to establish what had gone wrong.
I would rather suggest using the client in a very simple way:
CreateFile
Write request
Read answer
Close pipe.
If you want two-way communication with clients that are also able to receive unrequested data from the server, you should implement two servers instead. This was the workaround I used: here you can find sources.
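A bare-bones sketch of that request/answer pattern (the pipe name, buffer size, and omitted error handling are placeholders):

#include <windows.h>
#include <string>

// One request, one answer, then close - the simple client shape suggested above.
std::string SendRequest(const std::string& request)
{
    HANDLE pipe = CreateFileA("\\\\.\\pipe\\MyPipe",
                              GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return "";

    DWORD written = 0;
    WriteFile(pipe, request.data(), (DWORD)request.size(), &written, NULL);

    char  answer[4096];
    DWORD bytesRead = 0;
    ReadFile(pipe, answer, sizeof(answer), &bytesRead, NULL);

    CloseHandle(pipe);
    return std::string(answer, bytesRead);
}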

Events/Interrupts in Serial Communication

I want to read and write from serial using events/interrupts.
Currently, I have it in a while loop, and it continuously reads and writes through the serial port. I want it to read only when something arrives from the serial port. How do I implement this in C++?
This is my current code:
while(true)
{
    //read
    if(!ReadFile(hSerial, szBuff, n, &dwBytesRead, NULL)){
        //error occurred. Report to user.
    }
    //write
    if(!WriteFile(hSerial, szBuff, n, &dwBytesRead, NULL)){
        //error occurred. Report to user.
    }
    //print what you are reading
    printf("%s\n", szBuff);
}
Use select(), which will check the read and write buffers without blocking and return their status, so you only need to read when you know the port has data, or write when you know there's room in the output buffer.
The third example at http://www.developerweb.net/forum/showthread.php?t=2933 and the associated comments may be helpful.
Edit: The man page for select has a simpler and more complete example near the end. You can find it at http://linux.die.net/man/2/select if man 2 select doesn't work on your system.
Note: Mastering select() will allow you to work with both serial ports and sockets; it's at the heart of many network clients and servers.
For a Windows environment the more native approach would be to use asynchronous I/O. In this mode you still use calls to ReadFile and WriteFile, but instead of blocking you pass in a callback function that will be invoked when the operation completes.
It is fairly tricky to get all the details right though.
Here is a copy of an article that was published in the C/C++ Users Journal a few years ago. It goes into detail on the Win32 API.
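As a rough sketch of that asynchronous style (not taken from the linked article): open the port with FILE_FLAG_OVERLAPPED and pass a completion routine to ReadFileEx, then keep the thread in an alertable wait. The COM3 name and buffer size are just examples, and real code would also configure the DCB and timeouts:

#include <windows.h>
#include <stdio.h>

static char gBuffer[256];

// Invoked when an overlapped read completes; the issuing thread must be
// in an alertable wait (e.g. SleepEx(..., TRUE)) for this to run.
static VOID CALLBACK OnReadComplete(DWORD errorCode, DWORD bytesRead, LPOVERLAPPED ov)
{
    if (errorCode == 0 && bytesRead > 0)
        printf("%.*s\n", (int)bytesRead, gBuffer);
}

int main()
{
    // FILE_FLAG_OVERLAPPED is required for asynchronous I/O on the handle.
    HANDLE hSerial = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (hSerial == INVALID_HANDLE_VALUE)
        return 1;

    OVERLAPPED ov = {0};
    for (;;)
    {
        if (!ReadFileEx(hSerial, gBuffer, sizeof(gBuffer), &ov, OnReadComplete))
            break;                   // the read could not be queued
        SleepEx(INFINITE, TRUE);     // sleep until the completion routine has run
    }

    CloseHandle(hSerial);
    return 0;
}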
Here is code that reads incoming serial data using events on Windows. You can see the time elapsed while waiting for the event.
int pollComport(int comport_number, LPBYTE buffer, int size)
{
    BYTE Byte;
    DWORD dwBytesTransferred;
    DWORD dwCommModemStatus;
    int n;
    double TimeA, TimeB;

    // Specify a set of events to be monitored for the port.
    SetCommMask(m_comPortHandle[comport_number], EV_RXCHAR);

    while (m_comPortHandle[comport_number] != INVALID_HANDLE_VALUE)
    {
        // Wait for an event to occur for the port.
        TimeA = clock();
        WaitCommEvent(m_comPortHandle[comport_number], &dwCommModemStatus, 0);
        TimeB = clock();
        if (TimeB - TimeA > 0)
            cout << " ok " << TimeB - TimeA << endl;

        // Re-specify the set of events to be monitored for the port.
        SetCommMask(m_comPortHandle[comport_number], EV_RXCHAR);

        if (dwCommModemStatus & EV_RXCHAR)
        {
            // Loop for waiting for the data.
            do
            {
                ReadFile(m_comPortHandle[comport_number], buffer, size, (LPDWORD)((void *)&n), NULL);
                // Display the data read.
                if (n > 0)
                    cout << buffer << endl;
            } while (n > 0);
        }
        return(0);
    }
    return 0;
}
