What happens if I don't call ev_loop_fork in the child after fork?

I thought that if I didn't call ev_loop_fork in the child, the watcher in the child would not be triggered.
In my code below, I build the loop with the EVBACKEND_EPOLL and EVFLAG_NOENV flags, so EVFLAG_FORKCHECK is not set.
Then I comment out the ev_loop_fork call in the child.
If my understanding were right, the child would never run the timeout callback.
But the actual output is something like this:
$ 4980 fork 4981
$ time out at 4980
$ time out at 4981
It seems the watcher was still triggered in the child; it behaved the same as when ev_loop_fork is called.
So what is going on? Thank you.
#include <ev.h>
#include <stdio.h>
#include <unistd.h>

void timeout_cb(EV_P_ ev_timer *w, int revents)
{
    printf("time out at %d\n", getpid());
    ev_break(EV_A_ EVBREAK_ONE);
}

int main()
{
    int ret;
    ev_timer timeout_watcher;
    struct ev_loop *loop = ev_default_loop(EVBACKEND_EPOLL | EVFLAG_NOENV);

    ev_timer_init(&timeout_watcher, timeout_cb, 5.5, 0.);
    ev_timer_start(loop, &timeout_watcher);

    ret = fork();
    if (ret > 0)
        printf("%d fork %d\n", getpid(), ret);
    else if (ret == 0)
    {
        //ev_loop_fork(EV_DEFAULT);
    }
    else
        return -1;

    ev_run(loop, 0);
    return 0;
}

The libev manual does not say that an event loop will be stopped after a fork. All it says is that, to be sure the event loop works properly in the child, you need to call ev_loop_fork(). What actually happens depends on the backend.
And timers are in fact even more resilient to forks in most backends: select(), poll(), epoll(), and kqueue() all allow a timeout value to be specified, after which the call returns even if no event occurred. libev uses this feature to trigger timeouts when they are due, so no file descriptors need to be re-registered for timeouts to work.
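To see why the timer still fires in the child, consider how a poll-based backend can drive timers purely through the wait timeout. A minimal sketch of the idea (this is not libev's actual code; next_timeout_ms and run_expired_timers are hypothetical helpers):
#include <sys/epoll.h>

int next_timeout_ms(void);      /* hypothetical: ms until the earliest timer */
void run_expired_timers(void);  /* hypothetical: invoke callbacks now due */

void one_iteration(int epfd)
{
    struct epoll_event events[64];

    /* The backend sleeps at most until the next timer is due, so timers
       fire on time even if no fd was (re-)registered after the fork. */
    int n = epoll_wait(epfd, events, 64, next_timeout_ms());
    if (n == 0)
        run_expired_timers();
    /* for n > 0, dispatch the fd events here */
}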

Related

Why is it mandatory to check the condition in wait_event after prepare_to_wait?

I am trying to understand how wait_event is implemented in the Linux kernel. There is a code example in LDD3 where the internal implementation is explained using prepare_to_wait (http://www.makelinux.net/ldd3/chp-6-sect-2).
static int scull_getwritespace(struct scull_pipe *dev, struct file *filp)
{
    while (spacefree(dev) == 0) {
        DEFINE_WAIT(wait);

        up(&dev->sem);
        if (filp->f_flags & O_NONBLOCK)
            return -EAGAIN;
        PDEBUG("\"%s\" writing: going to sleep\n", current->comm);
        prepare_to_wait(&dev->outq, &wait, TASK_INTERRUPTIBLE);
        if (spacefree(dev) == 0) // Why is this check necessary ??
            schedule();
        finish_wait(&dev->outq, &wait);
        if (signal_pending(current))
            return -ERESTARTSYS; /* signal: tell the fs layer to handle it */
        if (down_interruptible(&dev->sem))
            return -ERESTARTSYS;
    }
    return 0;
}
In the book, it is explained as follows:
Then comes the obligatory check on the buffer; we must handle the case
in which space becomes available in the buffer after we have entered
the while loop (and dropped the semaphore) but before we put ourselves
onto the wait queue. Without that check, if the reader processes were
able to completely empty the buffer in that time, we could miss the
only wakeup we would ever get and sleep forever. Having satisfied
ourselves that we must sleep, we can call schedule.
I am not able to understand this explanation. How would we go to sleep indefinitely if the if (spacefree(dev) == 0) check were not done before calling schedule()?
Even if this obligatory check were not present, wouldn't wakeup() still reset the process state to TASK_RUNNING, so that schedule() returns, as explained in the next paragraph?
It is worth looking again at this case: what happens if the wakeup
happens between the test in the if statement and the call to schedule?
In that case, all is well. The wakeup resets the process state to
TASK_RUNNING and schedule returns—although not necessarily right away.
As long as the test happens after the process has put itself on the
wait queue and changed its state, things will work.
The important thing is that the (last) check is done after prepare_to_wait() has been called.
prepare_to_wait() puts a pointer to the current process into the wait queue. If the wakeup happened before the prepare_to_wait() call, it would not be able to affect the current process.
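The same lost-wakeup reasoning appears in userspace code. A minimal C++ sketch with std::condition_variable (illustrative names, not the kernel's API): the predicate is re-checked at a point where the waker can already see the waiter, which is exactly what the kernel's re-check between prepare_to_wait() and schedule() achieves:
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool space_free = false;   // plays the role of spacefree(dev) != 0

void writer_waits()
{
    std::unique_lock<std::mutex> lock(m);
    // The predicate is re-checked while holding the lock, and cv.wait
    // unlocks and sleeps atomically. If the reader set space_free between
    // our last unlocked check and this point, we see it here and never
    // block. Skipping this re-check is how you miss the only wakeup.
    while (!space_free)
        cv.wait(lock);
}

void reader_signals()
{
    {
        std::lock_guard<std::mutex> lock(m);
        space_free = true;
    }
    cv.notify_one();  // wakes the writer if it is already waiting
}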

CFRunLoop non-blocking wait for a buffer to be filled

I am porting an app that reads data from a BT device to the Mac. In the Mac-specific code, I have a class with the delegate methods for the BT callbacks, like -(void)rfcommChannelData:(...)
In that callback, I fill a buffer with the received data. I have a function called from the app:
-(int) m_timedRead:(unsigned char*)buffer length:(unsigned long)numBytes time:(unsigned int)timeout
{
    double steps = 0.01;
    double time = (double)timeout / 1000;
    bool ready = false;
    int read, total = 0;
    unsigned long restBytes = numBytes;

    while (!ready) {
        unsigned char *ptr = buffer + total;
        read = [self m_readRFCOMM:(unsigned char*)ptr length:(unsigned long)restBytes];
        total += read;
        if (total >= numBytes) {
            ready = true; continue;
        }
        restBytes = numBytes - total;
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, .4, false);
        time -= steps;
        if (time <= 0) {
            ready = true; continue;
        }
    }
    return total;
}
My problem is that this run loop makes the whole app extremely slow. If I don't use the default mode and instead create my own run loop with a run loop timer, the callback method rfcommChannelData: never gets called. I create my own run loop with the following code:
// CFStringRef myCustomMode = CFSTR("MyCustomMode");
// CFRunLoopTimerRef myTimer;
// myTimer = CFRunLoopTimerCreate(NULL,CFAbsoluteTimeGetCurrent()+1.0,1.0,0,0,foo,NULL);
// CFRunLoopAddTimer(CFRunLoopGetCurrent(), myTimer, myCustomMode);
// CFRunLoopTimerInvalidate(myTimer);
// CFRelease(myTimer);
Any idea why the default run loop slows down the whole app, or how to make my own run loop allow the rfcommChannelData: callbacks to be triggered?
Many thanks,
Anton Albajes-Eizagirre
If you're working on the main thread of a GUI app, don't run the run loop internally in your own methods. Install run loop sources (or allow asynchronous framework APIs to install sources on your behalf) and just return to the main event loop. That is, let the flow of execution return out of your code and back to your caller. The main event loop runs the run loop of the main thread and, when sources are ready, their callbacks will fire, which will probably call your methods.
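If you do still need a timer, one hedged sketch (reusing the question's hypothetical foo callback) is to schedule it in the default mode rather than a custom one, so the framework's RFCOMM source keeps being serviced alongside it:
CFRunLoopTimerRef myTimer =
    CFRunLoopTimerCreate(NULL, CFAbsoluteTimeGetCurrent() + 1.0, 1.0,
                         0, 0, foo, NULL);
CFRunLoopAddTimer(CFRunLoopGetCurrent(), myTimer, kCFRunLoopDefaultMode);
CFRelease(myTimer);  // the run loop retains the timer while it is scheduled
Note that the question's snippet calls CFRunLoopTimerInvalidate right after adding the timer, which stops it before it can ever fire.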

implementing a scheduler class in Windows

I want to implement a scheduler class which any object can use to schedule timeouts and cancel them if necessary. When a timeout expires, this information is sent to the timeout setter/owner asynchronously.
So, for this purpose, I have two fundamental classes: WindowsTimeout and WindowsScheduler.
class WindowsTimeout
{
    bool mCancelled;
    int mTimerID;              // Windows handle to identify the actual timer set.
    ITimeoutReceiver* mSetter;

    int cancel()
    {
        mCancelled = true;
        if (timeKillEvent(mTimerID) == SUCCESS) // Line under question # 1
        {
            delete this;       // Timeout instance is self-destroyed.
            return 0;          // ok. OS timer resource given back.
        }
        return 1;              // fail. OS timer resource not given back.
    }

    WindowsTimeout(ITimeoutReceiver* setter, int timerID)
    {
        mSetter = setter;
        mTimerID = timerID;
    }
};

class WindowsScheduler
{
    static void CALLBACK timerFunction(UINT uID, UINT uMsg, DWORD dwUser, DWORD dw1, DWORD dw2)
    {
        WindowsTimeout* timeout = (WindowsTimeout*) uMsg;
        if (timeout->mCancelled)
            delete timeout;
        else
            timeout->mDestination->GEN(evTimeout(timeout));
    }

    WindowsTimeout* schedule(ITimeoutReceiver* setter, TimeUnit t)
    {
        int timerID = timeSetEvent(...);
        if (timerID == SUCCESS)
        {
            return WindowsTimeout(setter, timerID);
        }
        return 0;
    }
};
My questions are:
Q.1. When WindowsScheduler::timerFunction() is called, in which context does the call execute? It is simply a callback function, so I assume it is executed in an OS context, right? If so, does the call pre-empt tasks that are already running? In other words, do callbacks have a higher priority than any user task?
Q.2. When a timeout setter wants to cancel its timeout, it calls WindowsTimeout::cancel().
However, there is always the possibility that the OS invokes the static timerFunction callback and pre-empts the cancel operation, for example just after the mCancelled = true statement. In such a case, the timeout instance will be deleted by the callback function.
When the pre-empted cancel() resumes after the callback completes, it will access an attribute of the deleted instance (mTimerID), as you can see on the line marked "Line under question # 1" in the code.
How can I avoid such a case?
Please note that this question is an improved version of my previous one:
Windows multimedia timer with callback argument
Q1 - I believe it gets called within a thread allocated by the timer API. I'm not sure, but I wouldn't be surprised if that thread runs at a very high priority. (In Windows, that doesn't necessarily mean it will completely pre-empt other threads; it just means it will get more cycles than other threads.)
Q2 - I started to sketch out a solution for this, but then realized it is a bit harder than I thought. Personally, I would maintain a hash table that maps timer IDs to your WindowsTimeout instances. The hash table could be a simple std::map instance guarded by a critical section. When the timer callback occurs, it enters the critical section, tries to obtain the WindowsTimeout instance pointer, flags the instance as having been executed, exits the critical section, and then actually executes the callback. If the hash table doesn't contain the WindowsTimeout instance, it means the caller has already removed it. Be very careful here.
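A minimal sketch of that guarded-map idea (using std::mutex for brevity where the answer suggests a critical section; the function names are illustrative):
#include <map>
#include <mutex>

class WindowsTimeout;  // as defined in the question

std::map<int, WindowsTimeout*> g_timeouts;  // timerID -> instance
std::mutex g_timeoutsLock;

// Called from the timer callback: claims the instance if it still exists.
WindowsTimeout* claim_timeout(int timerID)
{
    std::lock_guard<std::mutex> lock(g_timeoutsLock);
    std::map<int, WindowsTimeout*>::iterator it = g_timeouts.find(timerID);
    if (it == g_timeouts.end())
        return 0;              // cancel() already removed it: do nothing
    WindowsTimeout* t = it->second;
    g_timeouts.erase(it);      // exactly one side can ever claim it
    return t;
}

// Called from cancel(): returns false if the callback won the race.
bool unclaim_timeout(int timerID)
{
    std::lock_guard<std::mutex> lock(g_timeoutsLock);
    return g_timeouts.erase(timerID) != 0;
}
Whichever side successfully removes the entry owns the instance and is the only one allowed to delete it, which resolves the use-after-delete described in Q2.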
One subtle bug in your own code above:
WindowsTimeout* schedule(ITimeoutReceiver* setter, TimeUnit t)
{
    int timerID = timeSetEvent(...);
    if (timerID == SUCCESS)
    {
        return WindowsTimeout(setter, timerID);
    }
    return 0;
}
In your schedule method, it's entirely possible that the callback scheduled by timeSetEvent will fire BEFORE you can create your WindowsTimeout instance.
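One way to sidestep that ordering problem (a sketch, not the question's exact API: it constructs the object first and hands it to the callback through timeSetEvent's dwUser argument, so the callback would cast dwUser rather than uMsg; the cancellation race from Q2 still needs the guarded map above):
WindowsTimeout* schedule(ITimeoutReceiver* setter, UINT delayMs)
{
    // Construct before scheduling, so the callback can never observe a
    // half-created object; the pointer travels in dwUser.
    WindowsTimeout* timeout = new WindowsTimeout(setter, 0);
    MMRESULT timerID = timeSetEvent(delayMs, 1, timerFunction,
                                    (DWORD_PTR)timeout, TIME_ONESHOT);
    if (timerID == 0)  // timeSetEvent returns 0 on failure
    {
        delete timeout;
        return 0;
    }
    timeout->mTimerID = (int)timerID;
    return timeout;
}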

Does CancelSynchronousIo work with WNetAddConnection2?

I'm trying and failing to cancel a call to WNetAddConnection2 with CancelSynchronousIo.
The call to CancelSynchronousIo succeeds but nothing is actually cancelled.
I'm using a 32-bit console app running on Windows 7 x64.
Has anyone done this successfully? Am I doing something dumb? Here's a sample console app (which needs to be linked with mpr.lib):
DWORD WINAPI ConnectThread(LPVOID param)
{
    NETRESOURCE nr;
    memset(&nr, 0, sizeof(nr));
    nr.dwType = RESOURCETYPE_ANY;
    nr.lpRemoteName = L"\\\\8.8.8.8\\bog";

    // result is ERROR_BAD_NETPATH (i.e. the call isn't cancelled)
    DWORD result = WNetAddConnection2(&nr, L"pass", L"user", CONNECT_TEMPORARY);
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    // Create a new thread to run WNetAddConnection2
    HANDLE hThread = CreateThread(0, 0, ConnectThread, 0, 0, 0);
    if (!hThread)
        return 1;

    // Retry the cancel until it fails; keep track of how often
    int count = 0;
    BOOL ok;
    do
    {
        // Sleep to give the thread a chance to start
        Sleep(1000);
        ok = CancelSynchronousIo(hThread);
        ++count;
    }
    while (ok);

    // count will equal two here (i.e. one successful cancellation and
    // one failed cancellation)

    // err is ERROR_NOT_FOUND (i.e. nothing to cancel) which makes
    // sense for the second call
    DWORD err = GetLastError();

    // Wait for the thread to finish; this takes ages (i.e. the
    // WNetAddConnection2 call is not cancelled)
    WaitForSingleObject(hThread, INFINITE);
    return 0;
}
According to Larry Osterman (I hope he doesn't mind me quoting him): "The question was answered in the comments: wnetaddconnection2 isn’t a simple IOCTL call." So the answer (unfortunately) is no.
First, WNetAddConnection2 is system-wide, not per-process. This is important, as calling WNetAddConnection2 many times can wreck system stability - particularly with Explorer.
I use WNetGetResourceInformation first to check whether the connection already exists before even thinking of calling it - my process may have previously run and then shut down, and the connection may still exist. When my Windows services need to add such a connection, I use a nasty little trick to prevent these totally non-abortable APIs from stalling my own service shutdown.
The trick is to run these calls in a separate process: they are system-wide, after all. You can normally wait for the process to complete as if you had called the functions yourself, but you can terminate the process and give up waiting if you need to abort in order to shut down.
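A rough sketch of that separate-process trick (the helper executable and its command line are hypothetical; the helper would make the WNetAddConnection2 call itself):
#include <windows.h>

bool run_connect_helper(DWORD timeoutMs)
{
    wchar_t cmd[] = L"connect-helper.exe \\\\server\\share";  // hypothetical
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return false;

    // Wait as if we had made the call ourselves, but with an upper bound:
    // on shutdown we can simply kill the helper and move on.
    bool finished = (WaitForSingleObject(pi.hProcess, timeoutMs) == WAIT_OBJECT_0);
    if (!finished)
        TerminateProcess(pi.hProcess, 1);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return finished;
}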
Sadly, however, certain Windows resources, such as named-pipe handles and handles to files open on remote computers, can take about 16 seconds to close following failure or shutdown of a remote machine. CancelSynchronousIo does not seem to help with those either, and will likely add a further long delay.

How do I perform a nonblocking read using asio?

I am attempting to use boost::asio to read from and write to a device on a serial port. Both boost::asio::read() and boost::asio::serial_port::read_some() block when there is nothing to read. Instead, I would like to detect this condition and write a command to the port to kick-start the device.
How can I detect that no data is available?
If necessary I can do everything asynchronously, I would just rather avoid the extra complexity if I can.
You have a couple of options, actually. You can either use the serial port's built-in async_read_some function, or you can use the stand-alone function boost::asio::async_read (or async_read_some).
You'll still run into the situation where you are effectively "blocked", since neither of these will call the callback unless (1) data has been read or (2) an error occurs. To get around this, you'll want to use a deadline_timer object to set a timeout. If the timeout fires first, no data was available. Otherwise, you will have read data.
The added complexity isn't really all that bad. You'll end up with two callbacks with similar behavior. If either the "read" or the "timeout" callback fires with an error, you know it's the race loser. If either one fires without an error, then you know it's the race winner (and you should cancel the other call). In the place where you would have had your blocking call to read_some, you will now have a call to io_svc.run(). Your function will still block as before when it calls run, but this time you control the duration.
Here's an example:
void foo()
{
    io_service io_svc;
    serial_port ser_port(io_svc, "your string here");
    deadline_timer timeout(io_svc);
    unsigned char my_buffer[1];
    bool data_available = false;

    ser_port.async_read_some(boost::asio::buffer(my_buffer),
        boost::bind(&read_callback, boost::ref(data_available), boost::ref(timeout),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
    timeout.expires_from_now(boost::posix_time::milliseconds(<<your_timeout_here>>));
    timeout.async_wait(boost::bind(&wait_callback, boost::ref(ser_port),
                                   boost::asio::placeholders::error));

    io_svc.run();  // will block until async callbacks are finished

    if (!data_available)
    {
        kick_start_the_device();
    }
}

void read_callback(bool& data_available, deadline_timer& timeout,
                   const boost::system::error_code& error, std::size_t bytes_transferred)
{
    if (error || !bytes_transferred)
    {
        // No data was read!
        data_available = false;
        return;
    }
    timeout.cancel();  // will cause wait_callback to fire with an error
    data_available = true;
}

void wait_callback(serial_port& ser_port, const boost::system::error_code& error)
{
    if (error)
    {
        // Data was read and this timeout was canceled
        return;
    }
    ser_port.cancel();  // will cause read_callback to fire with an error
}
That should get you started with only a few tweaks here and there to suit your specific needs. I hope this helps!
Another note: no extra threads are necessary to handle the callbacks. Everything is handled within the call to run(). Not sure if you were already aware of this...
It's actually a lot simpler than the other answers imply, and you can do it synchronously.
Suppose your blocking read was something like this:
size_t len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint);
Then you can replace it with:
socket.non_blocking(true);
size_t len = 0;
error = boost::asio::error::would_block;
while (error == boost::asio::error::would_block)
{
    // do other things here, like go and make coffee
    len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint, 0, error);
}
std::cout.write(recv_buf.data(), len);
You use the alternative overloaded form of receive_from, which almost all of the send/receive methods have. These overloads unfortunately take a flags argument, but 0 seems to work fine.
You have to use the free-function asio::async_read.
