AVFormatContext: interrupt callback proper usage? - ffmpeg

AVFormatContext's interrupt_callback field is described as
"Custom interrupt callbacks for the I/O layer."
Its type is AVIOInterruptCB, and the comment in the header explains:
Callback for checking whether to abort blocking functions.
AVERROR_EXIT is returned in this case by the interrupted function. During blocking operations, callback is called with opaque as parameter. If the callback returns 1, the blocking operation will be aborted.
No members can be added to this struct without a major bump, if new elements have been added after this struct in AVFormatContext or AVIOContext.
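For reference, the struct itself (from libavformat/avio.h) is tiny:

typedef struct AVIOInterruptCB {
    int (*callback)(void*);
    void *opaque;
} AVIOInterruptCB;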
I have 2 questions:
1. What does that last sentence mean? Especially "without a major bump"?
2. If I use this along with an RTSP source: when I close the input with avformat_close_input, the "TEARDOWN" message is sent out, but it never reaches the RTSP server.
For 2: here is a quick pseudo-code for demo:
int pkts = 0;
bool early_exit = false;

int InterruptCallback(void* ctx) {
    return early_exit ? 1 : 0;
}

int main(void) {
    AVFormatContext* ctx = avformat_alloc_context();
    AVPacket pkt;

    ctx->interrupt_callback.callback = InterruptCallback;
    avformat_open_input(&ctx, "rtsp://...", NULL, NULL);
    avformat_find_stream_info(ctx, NULL);

    pkts = 0;
    while (!early_exit) {
        av_read_frame(ctx, &pkt);
        av_packet_unref(&pkt);
        if (pkts++ > 100)
            early_exit = true;
    }
    avformat_close_input(&ctx); /* TEARDOWN should be sent here */
    return 0;
}
If I don't use the interrupt callback at all, TEARDOWN is sent out and reaches the RTSP server, so the server can actually tear down the connection. Otherwise the connection is not torn down on the server side, and I have to wait until the TCP socket times out.
What is the proper way of using this interrupt callback?

It means that they are not going to change anything in this structure (AVIOInterruptCB). If that ever were to change, it would only happen in a major version bump (a major change, e.g. from 4.4 to 5.0).
You need to pass a meaningful pointer as the opaque member, which is what your callback receives as void* ctx. It can be anything you like, as long as you can check it within the static function: for example, a bool that you set when you want to cancel, so the pending av_read_frame is interrupted (and returns AVERROR_EXIT). Usually you pass the class of your decoder context, or something similar that holds all the information you need to decide whether to return 1 (interrupt) or 0 (continue the request normally). A real example: you open the wrong RTSP URL and then want to open another one (the right one), so you need to cancel the previous, still-blocking request.
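For example, here is a minimal sketch of that idea (MyStream, open_stream and cancel_requested are hypothetical names of mine; the interrupt_callback fields are the actual FFmpeg API):

typedef struct MyStream {
    AVFormatContext *fmt;
    volatile int cancel_requested; /* set from another thread to abort */
} MyStream;

static int interrupt_cb(void *opaque)
{
    MyStream *s = (MyStream *)opaque;
    return s->cancel_requested ? 1 : 0; /* 1 aborts the blocking call */
}

static int open_stream(MyStream *s, const char *url)
{
    s->cancel_requested = 0;
    s->fmt = avformat_alloc_context();
    s->fmt->interrupt_callback.callback = interrupt_cb;
    s->fmt->interrupt_callback.opaque = s;
    /* Any blocking call from here on (avformat_open_input, av_read_frame,
       avformat_close_input) can be aborted by setting s->cancel_requested = 1
       from another thread; the interrupted call returns AVERROR_EXIT. */
    return avformat_open_input(&s->fmt, url, NULL, NULL);
}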

Related

Linux USB driver: Interrupt URBs

I suppose I actually have two separate questions, but I think that they are related enough to include them both. The context is a Linux USB device driver (not userspace).
After transmitting a request URB, how do I receive the response once my complete callback is called?
How can I use interrupt URBs for single request/response pairs, and not as actual continuous interrupt polling (as they are intended)?
So for some background, I'm working on a driver for the Microchip MCP2210, a USB-to-SPI protocol converter with GPIO (USB 2.0, datasheet here). This device advertises itself as a generic HID and exposes two interrupt endpoints (an IN and an OUT) as well as its control endpoint.
I am starting from a working (but alpha-quality) demo driver written by somebody else and kindly shared with the community. However, this is a HID driver, and the mechanism it uses to communicate with the device is very expensive! (Sending a 64-byte message requires allocating a 6k HID report struct, and the allocation is sometimes performed in the context of an interrupt, requiring GFP_ATOMIC!) We'll be accessing this from an embedded low-memory device.
I'm new to USB drivers and still pretty green with Linux device drivers in general. However, I'm trying to convert this to a plain-jane USB driver (not HID) so I can use the less expensive interrupt URBs for my communications. Here is my code for transmitting my request. For the sake of (attempted) brevity, I'm not including the definition of my structs, etc, but please let me know if you need more of my code. dev->cur_cmd is where I'm keeping the current command I'm processing.
/* use a local for brevity */
cmd = dev->cur_cmd;

if (cmd->state == MCP2210_CMD_STATE_NEW) {
    usb_fill_int_urb(dev->int_out_urb,
                     dev->udev,
                     usb_sndintpipe(dev->udev,
                                    dev->int_out_ep->desc.bEndpointAddress),
                     &dev->out_buffer,
                     sizeof(dev->out_buffer), /* always 64 bytes */
                     cmd->type->complete,
                     cmd,
                     dev->int_out_ep->desc.bInterval);

    ret = usb_submit_urb(dev->int_out_urb, GFP_KERNEL);
    if (ret) {
        /* snipped: handle error */
    }
    cmd->state = MCP2210_CMD_STATE_XMITED;
}
And here is my complete fn:
/* note that by "ctrl" I mean a control command, not the control endpoint */
static void ctrl_complete(struct urb *urb)
{
    struct mcp2210_device *dev = urb->context;
    struct mcp2210_command *cmd = dev->cur_cmd;
    int ret;

    if (unlikely(!cmd || !cmd->dev)) {
        printk(KERN_ERR "mcp2210: ctrl_complete called w/o valid cmd "
               "or dev\n");
        return;
    }

    switch (cmd->state) {
    /* Time to rx the response */
    case MCP2210_CMD_STATE_XMITED:
        /* FIXME: I think that I need to check the response URB's
         * status to find out if it was even transmitted or not */
        usb_fill_int_urb(dev->int_in_urb,
                         dev->udev,
                         usb_rcvintpipe(dev->udev, /* IN endpoint: receive pipe */
                                        dev->int_in_ep->desc.bEndpointAddress),
                         &dev->in_buffer,
                         sizeof(dev->in_buffer),
                         cmd->type->complete,
                         dev,
                         dev->int_in_ep->desc.bInterval);
        /* completion handlers run in atomic context: no GFP_KERNEL here */
        ret = usb_submit_urb(dev->int_in_urb, GFP_ATOMIC);
        if (ret) {
            dev_err(&dev->udev->dev,
                    "while attempting to rx response, "
                    "usb_submit_urb returned %d\n", ret);
            free_cur_cmd(dev);
            return;
        }
        cmd->state = MCP2210_CMD_STATE_RXED;
        return;

    /* got response, now process it */
    case MCP2210_CMD_STATE_RXED:
        process_response(cmd);
        free_cur_cmd(dev);
        return;

    default:
        dev_err(&dev->udev->dev,
                "ctrl_complete called with unexpected state: %d",
                cmd->state);
        free_cur_cmd(dev);
    }
}
So am I at least close here? Secondly, both dev->int_out_ep->desc.bInterval and dev->int_in_ep->desc.bInterval are equal to 1; will this keep sending my request every 125 microseconds? And if so, how do I say "ok, ty, now stop this interrupt"? The MCP2210 offers only one configuration and one interface, and that has just the two interrupt endpoints. (I know everything has the control endpoint, I'm just not sure where that fits into the picture.)
Rather than spam this question with the lsusb -v, I'm going to pastebin it.
Typically, request/response communication works as follows:
1. Submit the response URB;
2. submit the request URB;
3. in the request completion handler, if the request was not actually sent, cancel the response URB and abort;
4. in the response completion handler, handle the response data.
All that asynchronous completion handler stuff is a big hassle if you have a single URB that is completed almost immediately; therefore, there is the helper function usb_interrupt_msg() which works synchronously.
URBs to be used for polling must be resubmitted (typically from the completion handler).
If you do not resubmit the URB, no polling happens.
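For the synchronous route, here is a minimal sketch of one exchange, reusing the dev fields from the question's driver (a sketch under those assumptions, error handling trimmed; the 1000 ms timeouts are arbitrary):

static int mcp2210_exchange(struct mcp2210_device *dev)
{
    int actual_len;
    int ret;

    /* send the 64-byte request on the interrupt OUT endpoint */
    ret = usb_interrupt_msg(dev->udev,
                            usb_sndintpipe(dev->udev,
                                           dev->int_out_ep->desc.bEndpointAddress),
                            &dev->out_buffer, sizeof(dev->out_buffer),
                            &actual_len, 1000);
    if (ret)
        return ret;

    /* then read the response on the interrupt IN endpoint */
    return usb_interrupt_msg(dev->udev,
                             usb_rcvintpipe(dev->udev,
                                            dev->int_in_ep->desc.bEndpointAddress),
                             &dev->in_buffer, sizeof(dev->in_buffer),
                             &actual_len, 1000);
}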

implementing a scheduler class in Windows

I want to implement a scheduler class which any object can use to schedule timeouts and cancel them if necessary. When a timeout expires, the timeout setter/owner is notified asynchronously.
So, for this purpose, I have 2 fundamental classes WindowsTimeout and WindowsScheduler.
class WindowsTimeout
{
    bool mCancelled;
    int mTimerID; // Windows handle to identify the actual timer set.
    ITimeoutReceiver* mSetter;

    int cancel()
    {
        mCancelled = true;
        if (timeKillEvent(mTimerID) == SUCCESS) // Line under question # 1
        {
            delete this; // Timeout instance is self-destroyed.
            return 0;    // ok. OS timer resource given back.
        }
        return 1;        // fail. OS timer resource not given back.
    }

    WindowsTimeout(ITimeoutReceiver* setter, int timerID)
    {
        mSetter = setter;
        mTimerID = timerID;
    }
};
class WindowsScheduler
{
    static void CALLBACK timerFunction(UINT uID, UINT uMsg, DWORD dwUser, DWORD dw1, DWORD dw2)
    {
        WindowsTimeout* timeout = (WindowsTimeout*) dwUser; // dwUser carries the user data
        if (timeout->mCancelled)
            delete timeout;
        else
            timeout->mSetter->GEN(evTimeout(timeout));
    }

    WindowsTimeout* schedule(ITimeoutReceiver* setter, TimeUnit t)
    {
        int timerID = timeSetEvent(...);
        if (timerID != 0) // timeSetEvent returns 0 on failure
        {
            return new WindowsTimeout(setter, timerID);
        }
        return 0;
    }
};
My questions are:
Q.1. When a WindowsScheduler::timerFunction() call is made, in which context is the call performed? It is simply a callback function, and I think it is performed in an OS context, right? If so, does the call pre-empt any tasks already running? I mean, do callbacks have higher priority than any user task?
Q.2. When a timeout setter wants to cancel its timeout, it calls WindowsTimeout::cancel().
However, there is always the possibility that the OS calls back the static timerFunction, pre-empting the cancel operation, for example just after the mCancelled = true statement. In such a case, the timeout instance will be deleted by the callback function.
When the pre-empted cancel() resumes after the callback completes execution, it will try to access a member of the deleted instance (mTimerID), as you can see on the line marked "Line under question # 1" in the code.
How can I avoid such a case?
Please note that this question is an improved version of a previous one of my own:
Windows multimedia timer with callback argument
Q1 - I believe it gets called within a thread allocated by the timer API. I'm not sure, but I wouldn't be surprised if the thread ran at a very high priority. (In Windows, that doesn't necessarily mean it will completely preempt other threads, it just means it will get more cycles than other threads).
Q2 - I started to sketch out a solution for this, but then realized it was a bit harder than I thought. Personally, I would maintain a hash table that maps timer IDs to WindowsTimeout instances. The hash table could be a simple std::map guarded by a critical section. When the timer callback occurs, it enters the critical section, tries to look up the WindowsTimeout instance pointer, flags the instance as having been executed, exits the critical section, and then actually executes the callback. If the hash table doesn't contain the WindowsTimeout instance, the caller has already removed it. Be very careful here.
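A minimal sketch of that scheme (names of mine, error handling omitted; link with winmm.lib). The map is the single source of truth for who owns the object, so whichever side removes the entry first wins the race:

#include <windows.h>
#include <map>

class WindowsTimeout;                       // as in the question
extern void fireTimeout(WindowsTimeout* t); // hypothetical: delivers evTimeout

static CRITICAL_SECTION gTimerLock;                    // guards gActiveTimers
static std::map<UINT, WindowsTimeout*> gActiveTimers;  // timerID -> instance

static void CALLBACK timerFunction(UINT uID, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    WindowsTimeout* timeout = NULL;
    EnterCriticalSection(&gTimerLock);
    std::map<UINT, WindowsTimeout*>::iterator it = gActiveTimers.find(uID);
    if (it != gActiveTimers.end()) {
        timeout = it->second;
        gActiveTimers.erase(it); // claim it: cancel() can no longer win
    }
    LeaveCriticalSection(&gTimerLock);
    if (timeout)                 // NULL means cancel() got there first
        fireTimeout(timeout);    // deliver the event outside the lock
}

static bool cancelTimer(UINT timerID)
{
    WindowsTimeout* timeout = NULL;
    EnterCriticalSection(&gTimerLock);
    std::map<UINT, WindowsTimeout*>::iterator it = gActiveTimers.find(timerID);
    if (it != gActiveTimers.end()) {
        timeout = it->second;
        gActiveTimers.erase(it); // the callback can no longer find it
    }
    LeaveCriticalSection(&gTimerLock);
    if (!timeout)
        return false;            // lost the race: callback already fired
    timeKillEvent(timerID);
    delete timeout;
    return true;
}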
One subtle bug in your own code above:
WindowsTimeout* schedule(ITimeoutReceiver* setter, TimeUnit t)
{
    int timerID = timeSetEvent(...);
    if (timerID != 0)
    {
        return new WindowsTimeout(setter, timerID);
    }
    return 0;
}
In your schedule method, it's entirely possible that the callback scheduled by timeSetEvent will fire BEFORE timeSetEvent even returns, i.e. before you have had a chance to create the WindowsTimeout instance.

How to detect WinSock TCP timeout with BindIoCompletionCallback

I am building a Visual C++ WinSock TCP server using BindIoCompletionCallback. It works fine receiving and sending data, but I can't find a good way to detect timeouts: setsockopt with SO_RCVTIMEO/SO_SNDTIMEO has no effect on nonblocking sockets, and if the peer is not sending any data, the CompletionRoutine is not called at all.
I am thinking about using RegisterWaitForSingleObject with the hEvent field of OVERLAPPED. That might work, but then CompletionRoutine is not needed at all; am I still using IOCP? Is there a performance concern if I use only RegisterWaitForSingleObject and don't use BindIoCompletionCallback?
Update: Code Sample:
My first try:
bool CServer::Startup() {
    SOCKET ServerSocket = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED);
    WSAEVENT ServerEvent = WSACreateEvent();
    WSAEventSelect(ServerSocket, ServerEvent, FD_ACCEPT);
    ......
    bind(ServerSocket......);
    listen(ServerSocket......);
    _beginthread(ListeningThread, 128 * 1024, (void*) this);
    ......
    ......
}
void __cdecl CServer::ListeningThread(void* param) // static
{
    CServer* server = (CServer*) param;
    while (true) {
        if (WSAWaitForMultipleEvents(1, &server->ServerEvent, FALSE, 100, FALSE) == WSA_WAIT_EVENT_0) {
            WSANETWORKEVENTS events = {};
            if (WSAEnumNetworkEvents(server->ServerSocket, server->ServerEvent, &events) != SOCKET_ERROR) {
                if ((events.lNetworkEvents & FD_ACCEPT) && (events.iErrorCode[FD_ACCEPT_BIT] == 0)) {
                    SOCKET socket = accept(server->ServerSocket, NULL, NULL);
                    if (socket != INVALID_SOCKET) {
                        BindIoCompletionCallback((HANDLE) socket, CompletionRoutine, 0);
                        ......
                    }
                }
            }
        }
    }
}
VOID CALLBACK CServer::CompletionRoutine(__in DWORD dwErrorCode, __in DWORD dwNumberOfBytesTransfered, __in LPOVERLAPPED lpOverlapped) // static
{
    ......
    BOOL res = GetOverlappedResult(......, TRUE);
    ......
}

class CIoOperation {
public:
    OVERLAPPED Overlapped;
    ......
    ......
};
bool CServer::Receive(SOCKET socket, PBYTE buffer, DWORD length, void* context)
{
    if (socket != INVALID_SOCKET) {
        CIoOperation* io = new CIoOperation();
        WSABUF buf = {length, (PCHAR) buffer};
        DWORD flags = 0;
        if ((WSARecv(socket, &buf, 1, NULL, &flags, &io->Overlapped, NULL) != 0) && (WSAGetLastError() != WSA_IO_PENDING)) {
            delete io;
            return false;
        } else return true;
    }
    return false;
}
As I said, it works fine if the client is actually sending data to me: Receive is not blocking, CompletionRoutine gets called, and data is received. But here is one gotcha: if the client is not sending any data to me, how can I give up after a timeout?
Since setsockopt with SO_RCVTIMEO/SO_SNDTIMEO won't help here, I think I should use the hEvent field in the OVERLAPPED structure, which will be signaled when the I/O completes. But a WaitForSingleObject / WSAWaitForMultipleEvents on that would block the Receive call, and I want Receive to always return immediately, so I used RegisterWaitForSingleObject and a WAITORTIMERCALLBACK. It worked: the callback gets called after the timeout, or when the I/O completes. But now I have two callbacks for every single I/O operation, the CompletionRoutine and the WaitOrTimerCallback:
if the I/O completed, they will be called simultaneously; if the I/O did not complete, WaitOrTimerCallback will be called, then I call CancelIoEx, which causes the CompletionRoutine to be called with an ABORTED error. But here is a race condition: maybe the I/O completes right before I cancel it, then... blah blah. All in all, it's quite complicated.
Then I realized I don't actually need BindIoCompletionCallback and CompletionRoutine at all, and I can do everything from the WaitOrTimerCallback. That may work, but here is the interesting question: I wanted to build an IOCP-based Winsock server in the first place, and thought BindIoCompletionCallback was the easiest way to do that, using the thread pool provided by Windows itself. Now I end up with a server without any IOCP code at all? Is it still IOCP? Or should I forget BindIoCompletionCallback and build my own IOCP thread-pool implementation? Why?
What I did was to force the timeout/completion notifications to enter a critical section in the socket object. Once in, the winner can set a socket state variable and perform its action, whatever that might be. If the I/O completion gets in first, the I/O buffer array is processed in the normal way, and any timeout is directed to restart by the state machine. Similarly, if the timeout gets in first, the I/O gets CancelIoEx'd, and any later queued completion notification is discarded by the state engine. Because of these possible 'late' notifications, I put released sockets onto a timeout queue and only recycle them onto the socket object pool after five minutes, similar to how the TCP stack itself puts its sockets into TIME_WAIT.
To do the timeouts, I have one thread that operates on FIFO delta-queues of timing-out objects, one queue for each timeout limit. The thread waits on an input queue for new objects with a timeout calculated from the smallest timeout-expiry-time of the objects at the head of the queues.
There were only a few timeouts used in the server, so I used queues fixed at compile-time. It would be fairly easy to add new queues or modify the timeout by sending appropriate 'command' messages to the thread input queue, mixed-in with the new sockets, but I didn't get that far.
Upon timeout, the thread calls an event in the object which, in the case of a socket, enters the socket object's CS-protected state machine. (There was a TimeoutObject class which the socket descended from, amongst other things.)
More:
I wait on the semaphore that controls the timeout thread's input queue. If it's signaled, I get the new TimeoutObject from the input queue and add it to the end of whatever timeout queue it asks for. If the semaphore wait times out, I check the items at the heads of the timeout FIFO queues and recalculate their remaining interval by subtracting the current time from their timeout time. If the interval is 0 or negative, the timeout event gets called. While iterating the queues and their heads, I keep in a local the minimum remaining interval before the next timeout. When all the head items in all the queues have a non-zero remaining interval, I go back to waiting on the queue semaphore using the minimum remaining interval I have accumulated.
The event call returns an enumeration. This enumeration instructs the timeout thread how to handle an object whose event it has just fired. One option is to restart the timeout by recalculating the timeout time and pushing the object back onto the end of its timeout queue.
I did not use RegisterWaitForSingleObject() because it needed .NET and my Delphi server was all unmanaged, (I wrote my server a long time ago!).
That, and because, IIRC, it has a limit of 64 handles, like WaitForMultipleObjects(). My server had upwards of 23000 clients timing out. I found the single timeout thread and multiple FIFO queues to be more flexible - any old object could be timed out on it as long as it was descended from TimeoutObject - no extra OS calls/handles needed.
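A rough sketch of that timeout thread, under stated assumptions (the original was Delphi; all names here are hypothetical, and the input-queue plumbing is reduced to a semaphore plus a DrainInputList() helper):

#include <windows.h>
#include <deque>

struct TimeoutObject {
    DWORD expiryTick;             // GetTickCount() value at which to fire
    virtual void onTimeout() = 0; // enters the object's own state machine
    virtual ~TimeoutObject() {}
};

static const int kNumQueues = 3; // one FIFO per timeout limit, fixed at compile time
static std::deque<TimeoutObject*> gQueues[kNumQueues];
extern HANDLE gInputSemaphore;   // released once per object pushed to the input list
extern void DrainInputList();    // moves new objects onto their chosen queues

DWORD WINAPI TimeoutThread(LPVOID)
{
    DWORD waitMs = INFINITE;
    for (;;) {
        if (WaitForSingleObject(gInputSemaphore, waitMs) == WAIT_OBJECT_0)
            DrainInputList();    // a new object arrived on the input queue
        waitMs = INFINITE;
        for (int i = 0; i < kNumQueues; ++i) {
            std::deque<TimeoutObject*>& q = gQueues[i];
            while (!q.empty()) {
                DWORD remaining = q.front()->expiryTick - GetTickCount();
                if ((int)remaining <= 0) {  // head has expired: fire it
                    TimeoutObject* o = q.front();
                    q.pop_front();
                    o->onTimeout();         // may re-queue itself
                } else {                    // FIFO: the rest expire even later
                    if (remaining < waitMs)
                        waitMs = remaining;
                    break;
                }
            }
        }
    }
}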
The basic idea is that, since you're using asynchronous I/O with the system thread pool, you shouldn't need to check for timeouts via events because you're not blocking any threads.
The recommended way to check for stale connections is to call getsockopt with the SO_CONNECT_TIME option. This returns the number of seconds that the socket has been connected. I know that's a poll operation, but if you're smart about how and when you query this value, it's actually a pretty good mechanism for managing connections. I explain below how this is done.
Typically I'll call getsockopt in two places: one is during my completion callback (so that I have a timestamp for the last time that an I/O completion occurred on that socket), and one is in my accept thread.
The accept thread monitors my socket backlog via WSAEventSelect and the FD_ACCEPT parameter. This means that the accept thread only executes when Windows determines that there are incoming connections that require accepting. At this time I enumerate my accepted sockets and query SO_CONNECT_TIME again for each socket. I subtract the timestamp of the connection's last I/O completion from this value, and if the difference is above a specified threshold my code deems the connection as having timed out.
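A minimal sketch of that check (function and parameter names are mine; SO_CONNECT_TIME is the real option, declared in mswsock.h):

#include <winsock2.h>
#include <mswsock.h> // SO_CONNECT_TIME

// Returns true if the connection has been idle longer than thresholdSecs.
// lastIoSecs is the SO_CONNECT_TIME value recorded at the last I/O completion.
bool ConnectionTimedOut(SOCKET s, DWORD lastIoSecs, DWORD thresholdSecs)
{
    DWORD connectSecs = 0;
    int optLen = sizeof(connectSecs);
    if (getsockopt(s, SOL_SOCKET, SO_CONNECT_TIME,
                   (char*)&connectSecs, &optLen) == SOCKET_ERROR)
        return false;             // query failed: treat as not timed out
    if (connectSecs == 0xFFFFFFFF)
        return false;             // socket is not connected yet
    return (connectSecs - lastIoSecs) > thresholdSecs;
}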

Duplex named pipe hangs on a certain write

I have a C++ pipe server app and a C# pipe client app communicating via Windows named pipe (duplex, message mode, wait/blocking in separate read thread).
It all works fine (both sending and receiving data via the pipe) until I try to write to the pipe from the client in response to a form's TextChanged event. When I do this, the client hangs on the pipe write call (or on the flush call if AutoFlush is off). Breaking into the server app reveals it's also stuck waiting on the pipe ReadFile call, which is not returning.
I tried running the client write on another thread -- same result.
I suspect some sort of deadlock or race condition but can't see where... I don't think I'm writing to the pipe simultaneously.
Update 1: I tried pipes in byte mode instead of message mode - same lockup.
Update 2: Strangely, if (and only if) I pump lots of data from the server to the client, it cures the lockup!?
Server code:
DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
    DWORD byteCount;
    if (ReadFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        aBytesRead = (int)byteCount;
        aBuff[byteCount] = 0;
        return ERROR_SUCCESS;
    }
    return GetLastError();
}

DWORD SendMsg(const char* aBuff, unsigned int aBuffLen)
{
    DWORD byteCount;
    if (WriteFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        return ERROR_SUCCESS;
    }
    mClientConnected = false;
    return GetLastError();
}

DWORD CommsThread()
{
    while (1)
    {
        std::string fullPipeName = std::string("\\\\.\\pipe\\") + mPipeName;
        mPipe = CreateNamedPipeA(fullPipeName.c_str(),
                                 PIPE_ACCESS_DUPLEX,
                                 PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                 PIPE_UNLIMITED_INSTANCES,
                                 KTxBuffSize, // output buffer size
                                 KRxBuffSize, // input buffer size
                                 5000,        // client time-out ms
                                 NULL);       // no security attribute
        if (mPipe == INVALID_HANDLE_VALUE)
            return 1;

        mClientConnected = ConnectNamedPipe(mPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
        if (!mClientConnected)
            return 1;

        char rxBuff[KRxBuffSize+1];
        DWORD error = 0;
        while (mClientConnected)
        {
            Sleep(1);
            int bytesRead = 0;
            error = ReadMsg(rxBuff, KRxBuffSize, bytesRead);
            if (error == ERROR_SUCCESS)
            {
                rxBuff[bytesRead] = 0; // terminate string.
                if (mMsgCallback && bytesRead > 0)
                    mMsgCallback(rxBuff, bytesRead, mCallbackContext);
            }
            else
            {
                mClientConnected = false;
            }
        }
        Close();
        Sleep(1000);
    }
    return 0;
}
Client code:
public void Start(string aPipeName)
{
    mPipeName = aPipeName;
    mPipeStream = new NamedPipeClientStream(".", mPipeName, PipeDirection.InOut, PipeOptions.None);
    Console.Write("Attempting to connect to pipe...");
    mPipeStream.Connect();
    Console.WriteLine("Connected to pipe '{0}' ({1} server instances open)", mPipeName, mPipeStream.NumberOfServerInstances);
    mPipeStream.ReadMode = PipeTransmissionMode.Message;
    mPipeWriter = new StreamWriter(mPipeStream);
    mPipeWriter.AutoFlush = true;
    mReadThread = new Thread(new ThreadStart(ReadThread));
    mReadThread.IsBackground = true;
    mReadThread.Start();
    if (mConnectionEventCallback != null)
    {
        mConnectionEventCallback(true);
    }
}

private void ReadThread()
{
    byte[] buffer = new byte[1024 * 400];
    while (true)
    {
        int len = 0;
        do
        {
            len += mPipeStream.Read(buffer, len, buffer.Length - len);
        } while (len > 0 && !mPipeStream.IsMessageComplete);
        if (len == 0)
        {
            OnPipeBroken();
            return;
        }
        if (mMessageCallback != null)
        {
            mMessageCallback(buffer, len);
        }
        Thread.Sleep(1);
    }
}

public void Write(string aMsg)
{
    try
    {
        mPipeWriter.Write(aMsg);
        mPipeWriter.Flush();
    }
    catch (Exception)
    {
        OnPipeBroken();
    }
}
If you are using separate threads, you will be unable to read from the pipe at the same time you write to it. For example, if you are doing a blocking read from the pipe, then a subsequent blocking write (from a different thread) will wait/block until the read call has completed; in many cases, if this behavior is unexpected, your program will become deadlocked.
I have not tested overlapped I/O, but it MAY be able to resolve this issue. However, if you are determined to use synchronous calls then the following models below may help you to solve the problem.
Master/Slave
You could implement a master/slave model in which the client or the server is the master and the other end only responds; this is generally the model you will find in the MSDN examples.
In some cases you may find this problematic, e.g. when the slave periodically needs to send data to the master. You must either use an external signaling mechanism (outside of the pipe), have the master periodically query/poll the slave, or swap the roles so that the client is the master and the server is the slave.
Writer/Reader
You could use a writer/reader model where you use two different pipes. However, you must associate those two pipes somehow if you have multiple clients since each pipe will have a different handle. You could do this by having the client send a unique identifier value on connection to each pipe which would then let the server associate the two pipes. This number could be the current system time or even a unique identifier that is global or local.
Threads
If you are determined to use the synchronous API, you can use threads with the master/slave model if you do not want to be blocked while waiting for a message on the slave side. You will, however, want to lock the reader after it reads a message (or encounters the end of a series of messages), then write the response (as the slave should), and finally unlock the reader. Lock and unlock the reader using a locking mechanism that puts the thread to sleep, as that is most efficient.
Security Problem With TCP
The biggest possible loss in going with TCP instead of named pipes is security. A TCP stream does not natively contain any security, so if security is a concern you will have to implement it yourself, and you risk creating a security hole since you would have to handle authentication on your own. A named pipe can provide security if you properly set the parameters. And to note again more clearly: security is no simple matter, and generally you will want to use existing facilities that have been designed to provide it.
I think you may be running into problems with named pipes message mode. In this mode, each write to the kernel pipe handle constitutes a message. This doesn't necessarily correspond with what your application regards a Message to be, and a message may be bigger than your read buffer.
This means that your pipe reading code needs two loops, the inner reading until the current [named pipe] message has been completely received, and the outer looping until your [application level] message has been received.
Your C# client code does have a correct inner loop, reading again if IsMessageComplete is false:
do
{
    len += mPipeStream.Read(buffer, len, buffer.Length - len);
} while (len > 0 && !mPipeStream.IsMessageComplete);
Your C++ server code doesn't have such a loop - the equivalent at the Win32 API level is testing for the return code ERROR_MORE_DATA.
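A sketch of what that loop could look like on the server side (ReadWholeMsg is a hypothetical replacement of mine for the question's ReadMsg, keeping its calling convention):

DWORD ReadWholeMsg(HANDLE aPipe, char* aBuff, int aBuffLen, int& aBytesRead)
{
    aBytesRead = 0;
    for (;;)
    {
        DWORD byteCount = 0;
        BOOL ok = ReadFile(aPipe, aBuff + aBytesRead, aBuffLen - aBytesRead,
                           &byteCount, NULL);
        aBytesRead += (int)byteCount;
        if (ok)
            return ERROR_SUCCESS;             // whole message received
        DWORD err = GetLastError();
        if (err != ERROR_MORE_DATA)
            return err;                       // real error (pipe broken, etc.)
        if (aBytesRead >= aBuffLen)           // message larger than our buffer
            return ERROR_INSUFFICIENT_BUFFER;
        // ERROR_MORE_DATA: part of the message is still queued; read again.
    }
}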
My guess is that somehow this is leading to the client waiting for the server to read on one pipe instance, whilst the server is waiting for the client to write on another pipe instance.
It seems to me that what you are trying to do will not work as expected. Some time ago I was trying to do something that looked like your code and got similar results: the pipe just hung, and it was difficult to establish what had gone wrong.
I would rather suggest using the client in a very simple way, as shown in the sketch below:
1. CreateFile
2. Write request
3. Read answer
4. Close pipe.
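For message-mode pipes, that whole sequence fits in a single call; a minimal sketch (the pipe name and buffer sizes are placeholders):

char request[64] = "hello";
char answer[1024];
DWORD answerLen = 0;
// CallNamedPipe does CreateFile + write + read + CloseHandle in one shot.
if (CallNamedPipeA("\\\\.\\pipe\\mypipe",
                   request, sizeof(request),
                   answer, sizeof(answer),
                   &answerLen, NMPWAIT_WAIT_FOREVER))
{
    // answerLen bytes of the server's response are now in answer[]
}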
If you want two-way communication, with clients that are also able to receive unrequested data from the server, you should rather implement two servers. This was the workaround I used; here you can find sources.

How do I perform a nonblocking read using asio?

I am attempting to use boost::asio to read from and write to a device on a serial port. Both boost::asio::read() and boost::asio::serial_port::read_some() block when there is nothing to read. Instead I would like to detect this condition and write a command to the port to kick-start the device.
How can I detect that no data is available?
If necessary I can do everything asynchronously; I would just rather avoid the extra complexity if I can.
You have a couple of options, actually. You can either use the serial port's built-in async_read_some function, or you can use the stand-alone function boost::asio::async_read (or async_read_some).
You'll still run into the situation where you are effectively "blocked", since neither of these will call the callback unless (1) data has been read or (2) an error occurs. To get around this, you'll want to use a deadline_timer object to set a timeout. If the timeout fires first, no data was available. Otherwise, you will have read data.
The added complexity isn't really all that bad. You'll end up with two callbacks with similar behavior. If either the "read" or the "timeout" callback fires with an error, you know it's the race loser. If either one fires without an error, then you know it's the race winner (and you should cancel the other call). In the place where you would have had your blocking call to read_some, you will now have a call to io_svc.run(). Your function will still block as before when it calls run, but this time you control the duration.
Here's an example:
void foo()
{
    io_service io_svc;
    serial_port ser_port(io_svc, "your string here");
    deadline_timer timeout(io_svc);
    unsigned char my_buffer[1];
    bool data_available = false;

    ser_port.async_read_some(boost::asio::buffer(my_buffer),
        boost::bind(&read_callback, boost::ref(data_available), boost::ref(timeout),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
    timeout.expires_from_now(boost::posix_time::milliseconds(<<your_timeout_here>>));
    timeout.async_wait(boost::bind(&wait_callback, boost::ref(ser_port),
                                   boost::asio::placeholders::error));

    io_svc.run(); // will block until async callbacks are finished

    if (!data_available)
    {
        kick_start_the_device();
    }
}
void read_callback(bool& data_available, deadline_timer& timeout, const boost::system::error_code& error, std::size_t bytes_transferred)
{
    if (error || !bytes_transferred)
    {
        // No data was read!
        data_available = false;
        return;
    }
    timeout.cancel(); // will cause wait_callback to fire with an error
    data_available = true;
}

void wait_callback(serial_port& ser_port, const boost::system::error_code& error)
{
    if (error)
    {
        // Data was read and this timeout was canceled
        return;
    }
    ser_port.cancel(); // will cause read_callback to fire with an error
}
That should get you started with only a few tweaks here and there to suit your specific needs. I hope this helps!
Another note: No extra threads were necessary to handle callbacks. Everything is handled within the call to run(). Not sure if you were already aware of this...
It's actually a lot simpler than the other answers here have implied, and you can do it synchronously.
Suppose your blocking read was something like this:
size_t len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint);
Then you replace it with:
socket.non_blocking(true);
size_t len = 0;
boost::system::error_code error = boost::asio::error::would_block;
while (error == boost::asio::error::would_block)
{
    // do other things here, like go and make coffee
    len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint, 0, error);
}
std::cout.write(recv_buf.data(), len);
You use the alternative overloaded form of receive_from which almost all the send/receive methods have. They unfortunately take a flags argument but 0 seems to work fine.
You have to use the free-function asio::async_read.
