Order of tasks (handlers) put into the task queue via post and dispatch - c++11

The output is:
Handler A
Handler B
Handler D
Handler E
Handler C
Given that
post() puts the handler into the task queue and returns immediately
dispatch() can run the task immediately if the main thread already calls run() (which is the case here)
then,
why wasn't "Handler E" run before B and D? It was a dispatch(), and the main thread runs the io_context after all.
why was "Handler C" run last? It kind of makes sense, as it was a post within a post, but the order in which the tasks are put into the task queue still isn't self-explanatory.
#include <boost/asio.hpp>
#include <iostream>
using namespace std;

int main()
{
    boost::asio::io_service io_service;

    io_service.dispatch( [](){ cout << "Handler A\n"; } );

    io_service.post(
        [&io_service]() {
            cout << "Handler B\n";
            io_service.post(
                [](){
                    cout << "Handler C\n";
                }
            );
        }
    );

    io_service.post( []() {
            cout << "Handler D\n";
        }
    );

    io_service.dispatch( [](){ cout << "Handler E\n"; } );

    cout << "Running io_service\n";
    io_service.run();
    return 0;
}

Everything works fine here.
In your current code you call io_service.run() as the last statement, so all the post/dispatch calls are equivalent: each one puts a handler into the queue and returns immediately.
Completion handlers are invoked from within the run() method.
dispatch can invoke a handler without queuing it only if run() is working while dispatch is being called, which is not the case here.
What happens in details:
dispatch(A) // queue: A
post(B) // queue: A,B
post(D) // queue: A,B,D
dispatch(E) // queue: A,B,D,E
run() was invoked, now completion handlers can be called
pop A
pop B -> in here, C is pushed, so queue is: D,E,C
pop D
pop E
pop C
If you want to let dispatch invoke handlers without queuing them, you have to start run() in a background thread before putting in any tasks:
boost::asio::io_service io_service;
boost::asio::io_service::work work{io_service};
std::thread th([&](){ io_service.run(); }); // run started
io_service.dispatch( [](){ cout << "Handler A\n"; } );
// ...
th.join(); // we are waiting here forever
And now you can replace post with dispatch inside the B handler. After B has been pushed into the queue, C is called inline (dispatch is used while run() is working), and finally D and E are invoked.
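For reference, here is a minimal sketch of that modified program (assuming a standard Boost.Asio setup; the work object is released at the end so that run() can return instead of blocking forever):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <thread>
using namespace std;

int main()
{
    boost::asio::io_service io_service;
    // the work object keeps run() alive while the queue is temporarily empty
    auto work = std::make_shared<boost::asio::io_service::work>(io_service);
    std::thread th([&]{ io_service.run(); });   // run() is already working

    io_service.post(
        [&io_service]() {
            cout << "Handler B\n";
            // dispatch() is called from the thread that is running io_service,
            // so Handler C can be executed inline, before Handler D
            io_service.dispatch( [](){ cout << "Handler C\n"; } );
        }
    );
    io_service.post( [](){ cout << "Handler D\n"; } );

    work.reset();   // let run() return once the queued handlers are done
    th.join();
    return 0;
}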

Related

boost::asio::thread_pool - How to cancel workers before work is finished?

OS: Windows. I would like to create a back-buffer before painting. I want to use boost::asio::thread_pool to increase speed. I need to stop creating the back-buffer if my "input data" (tasks) is updated.
I wrote the Test_CreateAndCancel function to simplify the test.
class Task
{
public:
    virtual void operator()()
    {
        std::cout << "Task started " << std::endl;
        DoSomeWork();
        std::cout << "Task in progress" << std::endl;
        for (int i = 0; i < 15; ++i)
            boost::this_thread::sleep_for(boost::chrono::milliseconds(1000));
        std::cout << "Task ended" << std::endl;
    }
};
using TaskPtr = std::shared_ptr<Task>;

void Test_CreateAndCancel(std::vector<TaskPtr> &tasks)
{
    // start back-buffer creation
    boost::asio::thread_pool thread_pool(4);
    for (auto task : tasks)
    {
        boost::asio::post(thread_pool, [task] {
            task->operator()();
        });
    }
    // simulate cancel
    thread_pool.stop(); // wait until all threads are finished?
}
The tasks vector has 4 items.
The result is: 4 x "Task started", "Task in progress", "Task ended".
I am thinking of adding custom IsCanceled() checks in Task::operator()().
Are there any other ways to make my tasks cancelable?
How can I implement the cancel logic?
I will be grateful for any advice.
Thanks
The easiest approach is to add a (probably atomic) variable "please_stop" to your Task and
query it inside operator()() regularly
set it from the outside (another task)
The basic problem is that you cannot cancel an operation that is running in a different task. You can only "ask it politely" to stop working. A sketch of this approach follows.
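Here is a minimal sketch of that flag-based approach, assuming the Task shape from the question (the sleep loop stands in for your real work):

#include <boost/thread.hpp>
#include <boost/chrono.hpp>
#include <atomic>
#include <iostream>

class Task
{
public:
    std::atomic<bool> please_stop{false};

    virtual void operator()()
    {
        std::cout << "Task started" << std::endl;
        for (int i = 0; i < 15; ++i)
        {
            if (please_stop.load())          // query the flag regularly
            {
                std::cout << "Task canceled" << std::endl;
                return;
            }
            boost::this_thread::sleep_for(boost::chrono::milliseconds(1000));
        }
        std::cout << "Task ended" << std::endl;
    }
};

// cancelling from the outside:
//   for (auto& task : tasks) task->please_stop = true;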
boost::thread has an interrupt mechanism (see the link #sehe posted above). This basically does not do anything different from what I suggested, except that it's baked into boost::thread. There are certain "interruption points" that will query the "please stop" state and throw an exception if it is set.
You have to catch the exception, though, otherwise the thread itself will stop, and you want only the operation to stop.
So you could do something like this:
class Task {
public:
    virtual void operator()()
    {
        try {
            do_something();
            boost::this_thread::sleep_for(boost::chrono::seconds(10000));
        }
        catch (boost::thread_interrupted&) {
            handle_please_stop_request();
        }
    }
};
// and later
task_thread.interrupt();
The problem with this approach is that you have to know the thread, and you probably want to interrupt not the thread but the operation. Which is why the atomic approach has its charms.
BTW, your example has several problems. The task operation (operator()()) never stops at all. You are creating a thread pool for every vector of tasks. I assume these are just artifacts of your example and your real-world code is different.
One thing though: I haven't looked into asio::thread_pool yet, but I am missing the boost::asio::work object. Search Stack Overflow for how to use the work object.
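One more note on the question's "wait until all threads are finished?" comment: as far as I know, boost::asio::thread_pool offers stop() and join(), and neither one cancels a handler that is already running, which is why the cooperative flag above is still needed. A rough sketch, reusing the names from the question:

boost::asio::thread_pool thread_pool(4);

for (auto task : tasks)
    boost::asio::post(thread_pool, [task] { (*task)(); });

// cooperative cancel: ask the running tasks to stop...
for (auto& task : tasks)
    task->please_stop = true;

thread_pool.stop();   // handlers that have not started yet will not be invoked
thread_pool.join();   // block until the worker threads have finished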

c++11 condition_variable wait: spurious wake up is not working

I tried to write a simple producer/consumer using a condition_variable:
#include <iostream>
#include <thread>
#include <condition_variable>
#include <mutex>
#include <chrono>
#include <queue>

using namespace std;

condition_variable cond_var;
mutex m;

int main()
{
    int c = 0;
    bool done = false;
    cout << boolalpha;
    queue<int> goods;

    thread producer([&](){
        for (int i = 0; i < 10; ++i) {
            m.lock();
            goods.push(i);
            c++;
            cout << "produce " << i << endl;
            m.unlock();
            cond_var.notify_one();
            this_thread::sleep_for(chrono::milliseconds(100));
        }
        done = true;
        cout << "producer done." << endl;
        cond_var.notify_one();
    });

    thread consumer([&](){
        unique_lock<mutex> lock(m);
        while(!done || !goods.empty()){
            /*
            cond_var.wait(lock, [&goods, &done](){
                cout << "spurious wake check" << done << endl;
                return (!goods.empty() || done);
            });
            */
            while(goods.empty())
            {
                cout << "consumer wait" << endl;
                cout << "consumer owns lock " << lock.owns_lock() << endl;
                cond_var.wait(lock);
            }
            if (!goods.empty()){
                cout << "consume " << goods.front() << endl;
                goods.pop();
                c--;
            }
        }
    });

    producer.join();
    consumer.join();
    cout << "Net: " << c << endl;
}
The problem I have now is that when the consumer consumes the last item before the producer sets done to true, the consumer thread gets stuck in
while(goods.empty())
{
    cout << "consumer wait" << endl;
    cout << "consumer owns lock " << lock.owns_lock() << endl;
    cond_var.wait(lock);
}
My understanding is that cond_var.wait(lock) will wake up spuriously and thus exit the while(goods.empty()) loop, but that does not seem to be the case?
Spurious wakeups are not a regular occurrence that you can rely on to break a loop in the manner you're suggesting. The risk of a spurious wakeup is an unfortunate side effect of current condition variable implementations which you must account for, but there is no guarantee about when (if ever) you will experience one.
If you want to ensure that the consumer thread doesn't get stuck waiting for a notify that never comes, you might try using std::condition_variable::wait_for() instead. It takes a duration and will time out and reacquire the lock if the duration expires. It might be viewed as closer to a busy wait, but if the timeout is long enough the performance implications should be negligible.
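For illustration, a minimal sketch of that approach applied to the inner loop from the question (the 200 ms duration is an arbitrary choice):

while (goods.empty())
{
    // returns either because of a notification/spurious wakeup or because the timeout expired
    if (cond_var.wait_for(lock, chrono::milliseconds(200)) == cv_status::timeout)
        break;   // give the outer loop a chance to re-check done and goods.empty()
}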
As #Karlinde says, and as the name implies, spurious wakeups are not guaranteed to happen. Rather, they will normally not happen at all.
But even if spurious wakeups did happen, that would not fix your issue: you simply have an infinite loop in your program. Once the producer has stopped, goods.empty() is true and it will never change again. So change the while loop to:
while(!done && goods.empty())
{
...
}
Now it should exit... most of the time. You still have a possible race condition, because in the producer, you set done = true without holding the lock.
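Putting both remarks together, a minimal sketch of the corrected pieces, reusing the variables from the question (the predicate form of wait already covers spurious wakeups and the done check):

// producer, after the for loop: set done under the lock, then notify
{
    lock_guard<mutex> lk(m);
    done = true;
}
cout << "producer done." << endl;
cond_var.notify_one();

// consumer loop
unique_lock<mutex> lock(m);
while (!done || !goods.empty()) {
    cond_var.wait(lock, [&]{ return done || !goods.empty(); });
    if (!goods.empty()) {
        cout << "consume " << goods.front() << endl;
        goods.pop();
        c--;
    }
}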
If the producer calls cond_var.notify_one() without any consumer waiting in cond_var.wait(lock), then the first notification sent to the consumer goes unnoticed.
@tesla1060 "The problem I have now is that when the consumer consumes the last item before the producer sets done to true, the consumer thread gets stuck in" - this is not true. The fact is that the consumer has not received any notification from the producer (it has missed one notification, the first one).

Promise and future, why does the main exit successfully?

I started learning promise and future in C++11, but here I am stuck:
#include <iostream>
#include <future>
using namespace std;

void func(future<int> &ref)
{
    cout << ref.get();
}

int main()
{
    promise<int> prom;
    future<int> fut = prom.get_future();
    async(launch::deferred, func, ref(fut));
    prom.set_value(100);
    cout << "Exiting" << endl;
}
My understanding is that when we use async with launch::deferred it does not start a new thread.
So func won't return until ref.get() completes, which will not happen since the promise is set only after that.
But my code exits successfully. Where is my understanding wrong?
IDE: VS2013
A deferred async call simply stores the invokable object and parameters.
Nothing much happens until you call .get() on the returned future.
You discard that returned future, so your call to async was basically a no-op.
Nothing else prevents main from finishing, so...
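For illustration, a minimal sketch of the same program where the returned future is kept, which is what actually forces the deferred call to run:

#include <iostream>
#include <future>
using namespace std;

void func(future<int>& ref)
{
    cout << ref.get() << endl;   // runs only when the deferred call is forced
}

int main()
{
    promise<int> prom;
    future<int> fut = prom.get_future();

    // keep the future returned by async instead of discarding it
    future<void> deferred = async(launch::deferred, func, ref(fut));

    prom.set_value(100);
    deferred.get();              // forces the deferred call: prints 100
    cout << "Exiting" << endl;
}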

Callback passed to boost::asio::async_read_some never invoked in usage where boost::asio::read_some returns data

I have been working on implementing a half-duplex serial driver by learning from a basic serial terminal example using boost::asio::basic_serial_port:
http://lists.boost.org/boost-users/att-41140/minicom.cpp
I need to read asynchronously but still detect in the main thread when the handler has finished, so I pass async_read_some a callback bound to several additional reference parameters using boost::bind. The handler never gets invoked, but if I replace the async_read_some function with the read_some function it returns data without an issue.
I believe I'm satisfying all of the necessary requirements for this function to invoke the handler, because they are the same as for the asio::read_some function, which does return data:
The buffer stays in scope
One or more bytes is received by the serial device
The io service is running
The port is open and running at the correct baud rate
Does anyone know if I'm missing another assumption unique to the asynchronous read or if I'm not setting up the io_service correctly?
Here is an example of how I'm using the code with async_read_some (http://www.boost.org/doc/libs/1_56_0/doc/html/boost_asio/reference/basic_serial_port/async_read_some.html):
void readCallback(const boost::system::error_code& error, size_t bytes_transfered,
                  bool& finished_reading, boost::system::error_code& error_report,
                  size_t& bytes_read)
{
    std::cout << "READ CALLBACK\n";
    std::cout.flush();
    error_report = error;
    bytes_read = bytes_transfered;
    finished_reading = true;
    return;
}
int main()
{
    int baud_rate = 115200;
    std::string port_name = "/dev/ttyUSB0";

    boost::asio::io_service io_service_;
    boost::asio::serial_port serial_port_(io_service_, port_name);
    serial_port_.set_option(boost::asio::serial_port_base::baud_rate(baud_rate));

    boost::thread service_thread_;
    service_thread_ = boost::thread(boost::bind(&boost::asio::io_service::run, &io_service_));

    std::cout << "Starting byte read\n";
    boost::system::error_code ec;
    bool finished_reading = false;
    size_t bytes_read;
    int max_response_size = 8;
    uint8_t read_buffer[max_response_size];

    serial_port_.async_read_some(boost::asio::buffer(read_buffer, max_response_size),
        boost::bind(readCallback,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred,
            finished_reading, ec, bytes_read));

    std::cout << "Waiting for read to finish\n";
    while (!finished_reading)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1));
    }

    std::cout << "Finished byte read: " << bytes_read << "\n";
    for (int i = 0; i < bytes_read; ++i)
    {
        printf("0x%x ", read_buffer[i]);
    }
}
The result is that the callback never prints anything and the while (!finished_reading) loop never finishes.
Here is how I use the blocking read_some function (boost.org/doc/libs/1_56_0/doc/html/boost_asio/reference/basic_serial_port/read_some.html):
int main()
{
    int baud_rate = 115200;
    std::string port_name = "/dev/ttyUSB0";

    boost::asio::io_service io_service_;
    boost::asio::serial_port serial_port_(io_service_, port_name);
    serial_port_.set_option(boost::asio::serial_port_base::baud_rate(baud_rate));

    boost::thread service_thread_;
    service_thread_ = boost::thread(boost::bind(&boost::asio::io_service::run, &io_service_));

    std::cout << "Starting byte read\n";
    boost::system::error_code ec;
    int max_response_size = 8;
    uint8_t read_buffer[max_response_size];

    int bytes_read = serial_port_.read_some(boost::asio::buffer(read_buffer, max_response_size), ec);

    std::cout << "Finished byte read: " << bytes_read << "\n";
    for (int i = 0; i < bytes_read; ++i)
    {
        printf("0x%x ", read_buffer[i]);
    }
}
This version prints from 1 up to 8 characters that I send, blocking until at least one is sent.
The code does not guarantee that the io_service is running. io_service::run() will return when either:
All work has finished and there are no more handlers to be dispatched
The io_service has been stopped.
In this case, it is possible for the service_thread_ to be created and to invoke io_service::run() before the serial_port::async_read_some() operation is initiated and adds work to the io_service. Thus, the service_thread_ could return from io_service::run() immediately. To resolve this, either:
Invoke io_service::run() after the asynchronous operation has been initiated.
Create an io_service::work object before starting the service_thread_. A work object prevents the io_service from running out of work (see the sketch below).
This answer may provide some more insight into the behavior of io_service::run().
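A minimal sketch of the work-object option, reusing the declarations from the question (boost::ref is applied here as well, per the notes below):

boost::asio::io_service io_service_;
boost::asio::io_service::work work_(io_service_);   // keeps run() from returning while the queue is empty
boost::asio::serial_port serial_port_(io_service_, port_name);
serial_port_.set_option(boost::asio::serial_port_base::baud_rate(baud_rate));

// run() now blocks until io_service_.stop() is called (or work_ is destroyed),
// so it no longer matters whether it starts before or after async_read_some() is initiated
boost::thread service_thread_(boost::bind(&boost::asio::io_service::run, &io_service_));

serial_port_.async_read_some(boost::asio::buffer(read_buffer, max_response_size),
    boost::bind(readCallback,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred,
        boost::ref(finished_reading), boost::ref(ec), boost::ref(bytes_read)));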
A few other things to note and to expand upon Igor's answer:
If a thread is not progressing in a meaningful way while waiting for an asynchronous operation to complete (i.e. spinning in a loop sleeping), then it may be worth examining if mixing synchronous behavior with asynchronous operations is the correct solution.
boost::bind() copies its arguments by value. To pass an argument by reference, wrap it with boost::ref() or boost::cref():
boost::bind(..., boost::ref(finished_reading), boost::ref(ec),
boost::ref(bytes_read));
Synchronization needs to be added to guarantee memory visibility of finished_reading in the main thread. For asynchronous operations, Boost.Asio will guarantee the appropriate memory barriers to ensure correct memory visibility (see this answer for more details). In this case, a memory barrier is required within the main thread to guarantee the main thread observes changes to finished_reading by other threads. Consider using either a Boost.Thread synchronization mechanism like boost::mutex, or Boost.Atomic's atomic objects or thread and signal fences.
Note that boost::bind copies its arguments. If you want to pass an argument by reference, wrap it with boost::ref (or std::ref):
boost::bind(readCallback, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred, boost::ref(finished_reading), boost::ref(ec), boost::ref(bytes_read))
(However, strictly speaking, there's a race condition on the bool variable you pass to another thread. A better solution would be to use std::atomic_bool.)
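A minimal sketch combining both suggestions (the callback's finished_reading parameter would change to std::atomic<bool>& accordingly):

std::atomic<bool> finished_reading{false};   // instead of a plain bool

serial_port_.async_read_some(boost::asio::buffer(read_buffer, max_response_size),
    boost::bind(readCallback,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred,
        boost::ref(finished_reading), boost::ref(ec), boost::ref(bytes_read)));

while (!finished_reading.load())   // the atomic load makes the handler's store visible here
    boost::this_thread::sleep(boost::posix_time::milliseconds(1));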

Should I call socket::connect() from a handler invoked by resolver::async_resolve()?

I'm using a wrapper class to represent a network connection. My implementation contains a method, called async_connect(), which resolves a host/service and connects to a related endpoint (if possible). Something like this:
void tcp::connection::async_connect(std::string const& host, std::string const& service,
                                    protocol_type tcp = protocol_type::v4())
{
    std::cout << "thread [" << boost::this_thread::get_id() << "] tcp::connection::async_connect()" << std::endl;

    resolver(m_io_service).async_resolve(resolver::query(tcp, host, service),
        boost::bind(&connection::resolve_handler, this, _1, _2));
}
What I want to do now is establish the connection from the handler invoked on completion of the async_resolve method.
I'm not sure whether the main thread or a worker thread is used to invoke the handler. Thus, should I call socket::connect() (this would be the most sensible way if that code were executed from a worker thread) or start an asynchronous operation again (socket::async_connect(), which should be used when executed by the main thread)?
void tcp::connection::resolve_handler(boost::system::error_code const& resolve_error,
                                      tcp::resolver::iterator endpoint_iterator)
{
    std::cout << "thread [" << boost::this_thread::get_id() << "] tcp::connection::resolve_handler()" << std::endl;

    if (!resolve_error)
    {
        boost::system::error_code ec;
        m_socket.connect(*endpoint_iterator, ec);
    }
}
I've observed from the console output that my resolve_handler is called from a worker thread. So, is it okay to call socket::connect() here?
IMO it is good to stick to a single programming model when using asio.
You are free to use asio's synchronous (blocking) calls, where you call a number of methods (resolve, connect, etc) and each one blocks until the result or error is available.
However, if you're using the asynchronous programming model, your main or calling thread is typically blocked on io_service::run and the specified handlers are called from a different thread (as is the case in what you described). When using this programming model, you would typically call the next async method from the handler (on the worker thread), so instead of calling socket::connect you would call socket::async_connect. It looks to me like you are trying to mix the two models. I'm not sure what the implications are of mixing them, with your calling thread blocked on io_service::run while you call a synchronous method from the handler.
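For illustration, a minimal sketch of keeping the chain fully asynchronous, reusing the names from the question (connect_handler is a hypothetical next member function):

void tcp::connection::resolve_handler(boost::system::error_code const& resolve_error,
                                      tcp::resolver::iterator endpoint_iterator)
{
    if (!resolve_error)
    {
        // start the next asynchronous step instead of blocking the worker thread
        m_socket.async_connect(*endpoint_iterator,
            boost::bind(&connection::connect_handler, this,
                        boost::asio::placeholders::error));
    }
}

void tcp::connection::connect_handler(boost::system::error_code const& connect_error)
{
    if (!connect_error)
    {
        // connected: continue with async_write / async_read from here
    }
}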
