I've developed some concept code for a project that I will be working on shortly. The project lends itself to a state machine design, and I think boost::statechart will do a good job. However, I hit a roadblock when I tried to use context(). Here's a sample (I'm happy to put more code up, but I think this is the relevant part):
struct Wait : fsm::simple_state< Wait, Active > {
    typedef mpl::list< fsm::transition< UnderflowEvent, Exec > > reactions;
public:
    Wait()
        : m_wait_op() {
        std::cout << "entering wait state." << std::endl;
        wait();
    }
    ~Wait() { std::cout << "exiting wait state." << std::endl; }
private:
    default_wait m_wait_op;
    fsm::result wait() {
        if(context<Active>().underflow_condition()) {
            m_wait_op();
            return transit<Wait>();
        }
        else if(context<Active>().overflow_condition()) {
            return transit<Exec>();
        }
        else {
            // undefined - keep waiting
        }
    }
};
The state Active has methods called "[over|under]flow_condition" which just return true at this point. Problems with my design aside, I am getting the following assertion failure when I instantiate thusly:
int main(void) {
    Throttler my_throttler;
    my_throttler.initiate();
    return 0;
}
and here's the assertion:
assertion "get_pointer( stt.pContext_
) != 0" failed
I looked this assertion up in "/usr/include/boost/statechart/simple_state.hpp", line 689 (Boost 1.45), and the comments say it is there to prevent a simple_state from using contexts. This puzzled me, because when I revisited the StopWatch example I saw it doing the very thing I was trying to do. So I compiled it, and unsurprisingly the stopwatch code does not trip this assertion. Am I missing something? Maybe there's something elsewhere in the code that I missed? Here's the entire header (please remember it's concept code... I'm not releasing this into the wild until it's been thoroughly genericized):
#ifndef _THROTTLER_H_
#define _THROTTLER_H_

#include <queue>
#include <vector>
#include <ctime>
#include <boost/statechart/event.hpp>
#include <boost/statechart/transition.hpp>
#include <boost/statechart/state_machine.hpp>
#include <boost/statechart/simple_state.hpp>
#include <boost/mpl/list.hpp>
#include <iostream>

namespace mpl = boost::mpl;
namespace fsm = boost::statechart;

namespace {
    unsigned int DEFAULT_WAIT_TIME(1000);
}
struct noop {
public:
    noop() { m_priority = (1 << sizeof(int)); }
    noop(unsigned int priority) { m_priority = priority; }
    virtual ~noop() {}
    bool operator()(void) {
        return true;
    }
    friend bool operator<(noop a, noop b);
private:
    unsigned int m_priority;
};

bool operator<(noop a, noop b) {
    return a.m_priority < b.m_priority;
}
struct compare_noops {
    bool operator()(noop a, noop b) {
        return a < b;
    }
};
struct default_wait {
    void operator()(unsigned long msecs = DEFAULT_WAIT_TIME) {
        // busy-wait for the requested number of milliseconds
        std::clock_t endtime =
            std::clock() + (msecs * CLOCKS_PER_SEC) / 1000;
        while(std::clock() < endtime);
    }
};
struct OverflowEvent : fsm::event< OverflowEvent > {};
struct UnderflowEvent : fsm::event< UnderflowEvent > {};
struct ResetEvent : fsm::event< ResetEvent > {};
struct Active;
struct Throttler : fsm::state_machine< Throttler, Active > {};
struct Wait;
struct Active : fsm::simple_state< Active, Throttler, Wait > {
public:
    typedef mpl::list< fsm::transition< ResetEvent, Active > > reactions;
    bool overflow_condition(void) { return true; }
    bool underflow_condition(void) { return true; }
    void queue_operation(noop op) {
        m_operation_queue.push(op);
    }
    void perform_operation(void) {
        noop op(m_operation_queue.top());
        if(op())
            m_operation_queue.pop();
        else
            throw;
    }
private:
    std::priority_queue<noop, std::vector<noop>, compare_noops> m_operation_queue;
};
struct Exec : fsm::simple_state< Exec, Active > {
    typedef mpl::list< fsm::transition< OverflowEvent, Wait > > reactions;

    Exec() { std::cout << "entering exec state." << std::endl; }
    ~Exec() { std::cout << "exiting exec state." << std::endl; }
};
struct Wait : fsm::simple_state< Wait, Active > {
    typedef mpl::list< fsm::transition< UnderflowEvent, Exec > > reactions;
public:
    Wait()
        : m_wait_op() {
        std::cout << "entering wait state." << std::endl;
        wait();
    }
    ~Wait() { std::cout << "exiting wait state." << std::endl; }
private:
    default_wait m_wait_op;
    fsm::result wait() {
        if(context<Active>().underflow_condition()) {
            m_wait_op();
            return transit<Wait>();
        }
        else if(context<Active>().overflow_condition()) {
            return transit<Exec>();
        }
        else {
            // undefined - keep waiting
        }
    }
};
#endif
As you've noted in your comment, it's related to attempting to access the outer context from within the constructor, which is not allowed for a simple_state.
From simple_state.hpp:
// This assert fails when an attempt is made to access the state machine
// from a constructor of a state that is *not* a subtype of state<>.
// To correct this, derive from state<> instead of simple_state<>.
BOOST_ASSERT( get_pointer( pContext_ ) != 0 );
So you should be able to access the outer context from the constructor if you base your states on the state class (rather than a simple_state).
That said, I'm not sure what impacts or implications this may have for your states. If this question gets an answer it may be helpful to you as well (:
From what I understand, you'll need to change Wait to derive from state:
struct Wait : fsm::state< Wait, Active > {
and then change the Wait() constructor to something like
typedef fsm::state< Wait, Active > my_base;
Wait( my_context ctx ) : my_base( ctx )
// and any other pre-constructor initialisation...
The my_context type is defined (as protected) within state<>, and needs to be passed in from the derived class's constructor.
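Putting the two changes together, here is a minimal, untested sketch of how Wait might look; it keeps the members from the question and only changes the base class and the constructor signature (it also needs boost/statechart/state.hpp, which the header above doesn't include yet):
struct Wait : fsm::state< Wait, Active > {    // requires #include <boost/statechart/state.hpp>
    typedef fsm::state< Wait, Active > my_base;
    typedef mpl::list< fsm::transition< UnderflowEvent, Exec > > reactions;

    Wait( my_context ctx )
        : my_base( ctx )    // forward the context to state<>
        , m_wait_op() {
        std::cout << "entering wait state." << std::endl;
        // context<Active>() is now safe to call from the entry action
        if( context<Active>().underflow_condition() )
            m_wait_op();
    }
    ~Wait() { std::cout << "exiting wait state." << std::endl; }

private:
    default_wait m_wait_op;
};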
Related
I have code like this:
void function()
{
    auto isOk = task(1);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(2);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(3);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(4);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(5);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(6);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(7);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(8);
    if(!isOk)
    {
        return;
    }
    // more code here
    isOk = task(9);
    if(!isOk)
    {
        return;
    }
}
It should be noted that I cannot put these calls in a loop (my real code is similar to this, but not exactly this code).
The if blocks are quite ugly, so I may be able to write it as follows:
#define TASK(x) {if(!task(x)) return;}
void function()
{
    TASK(1);
    // more code here
    TASK(2);
    // more code here
    TASK(3);
    // more code here
    TASK(4);
    // more code here
    TASK(5);
    // more code here
    TASK(6);
    // more code here
    TASK(7);
    // more code here
    TASK(8);
    // more code here
    TASK(9);
}
My question is:
Is there any better way to do this when I am using C++11?
The problems with this code are:
I cannot debug it easily.
The macro is not inside a namespace and may conflict with other macros.
Update 1
As most of the answers here try to solve the problem for this specific code, while I am looking for a general solution, I am asking specific questions related to this code:
1- Can I use a lambda to mimic the macro?
2- Can I use a constexpr to mimic the macro?
3- Is there any other way to mimic a macro in a compiler-friendly way (with the same result as the macro) so that I can easily debug it?
void function() {
    if (!task(1)) return;
    // code here
    if (!task(2)) return;
    // more code here
    if (!task(3)) return;
    // more code here
}
This is small and tight, with no ugly bulky blocks.
If task(1) is much larger, you can put return; on the next line indented.
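For instance (the long call below is just a made-up illustration of the layout):
if (!task_with_a_rather_long_argument_list(first_argument, second_argument))
    return;
// more code here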
Instead of using a plain return, you could choose to use exceptions, which not only leave the current function but unwind all calling functions until a catch block is found.
Something like this:
#include <stdexcept>   // std::runtime_error
#include <string>      // std::to_string

void tryTask(int i){
    auto isOk = task(i);
    if(!isOk)
    {
        throw std::runtime_error("Task failed: Nr. " + std::to_string(i));
    }
}
void function()
{
    tryTask(1);
    // more code here
    tryTask(2);
    // more code here
    tryTask(3);
    ...
}
This, however, lets your function throw an exception instead of just returning when one of the tasks fails. If that is not what you want, either surround the calls with a try-catch block inside the function, or call it from a second function like this:
void callfunction(){
    try{
        function();
    } catch (std::exception& e) {
        // do whatever should happen if the function failed, or nothing
    }
}
If you have control over the task() function, you might also decide to throw the exception directly inside that function instead of returning a bool.
If you want to make sure you only catch your own exceptions, write a small class for this, taking only the information you need for handling the exception (if you don't need any, an empty class will do), and throw/catch an instance of your class instead.
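A minimal sketch of such a class, reusing the tryTask/callfunction shape from above (the name task_failed and the stored task number are just one possible choice):
#include <exception>

// Dedicated exception type: carries only what the handler needs.
struct task_failed : std::exception
{
    int task_number;
    explicit task_failed(int n) : task_number(n) {}
    const char* what() const noexcept override { return "task failed"; }
};

void tryTask(int i)
{
    if (!task(i))
        throw task_failed(i);
}

void callfunction()
{
    try
    {
        function();
    }
    catch (const task_failed& e)
    {
        // only our own failures land here; e.task_number says which task failed
    }
}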
Here's a quick and dirty approach with lambdas.
Assuming this is your task function:
#include <iostream>

/** Returns 0 on success; any other returned value is a failure */
int task(int arg)
{
    std::cout << "Called task " << arg << std::endl;
    return arg < 3 ? 0 : 1;
}
Invoke the tasks in a chain as follows:
#include <iostream>

int main()
{
    int result = Chain::start()
                     .and_then([]() -> int { return task(1); })
                     .and_then([]() -> int { return task(2); })
                     .and_then([]() -> int { return task(3); })
                     .and_then([]() -> int { return task(4); })
                     .and_then([]() -> int { return task(5); })
                     .and_then([]() -> int { return task(6); })
                     .and_then([]() -> int { return task(7); })
                     .and_then([]() -> int { return task(8); })
                     .and_then([]() -> int { return task(9); })
                     .result();
    std::cout << "Chain result: " << result << std::endl;
    return result;
}
Because the task returns success only when called with an argument value less than 3, the invocation chain stops as expected after the 3rd step:
$ ./monad
Called task 1
Called task 2
Called task 3
Chain result: 1
This is the implementation of the Chain class:
#include <functional>

class Chain
{
public:
    const int kSuccess = 0;

    Chain() { _result = kSuccess; }

    static Chain start() { return Chain(); }

    Chain& and_then(std::function<int()> nextfn) {
        if(_result == kSuccess) {
            _result = nextfn();
        }
        return *this;
    }

    int result() { return _result; }

private:
    int _result;
};
I know it looks ugly and it's non-generic, but if this is the general direction you were thinking of, let me know and we can evolve it.
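For example, one way the Chain above could evolve (just a sketch, keeping the int-based success convention) is to make and_then a template, so any callable works and the std::function allocation is avoided:
#include <utility>

class Chain
{
public:
    static Chain start() { return Chain(); }

    // Accept any callable returning int instead of a fixed std::function type.
    template <typename F>
    Chain& and_then(F&& nextfn) {
        if (_result == kSuccess) {
            _result = std::forward<F>(nextfn)();
        }
        return *this;
    }

    int result() const { return _result; }

private:
    static const int kSuccess = 0;
    int _result = kSuccess;
};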
I would put the code to execute between calls to task into a vector and then run a loop:
const size_t steps = 9;
using ops = std::function<void()>;
std::vector<ops> vops(steps);

vops[0] = [] { /* some code here to execute after task 0 */ };
...

for( size_t i = 0; i < steps; ++i ) {
    if( !task(i) ) return;
    if( vops[i] ) (vops[i])();
}
You can use an integer sequence.
// No task to call without an integer.
bool function(std::index_sequence<>) { return true; }

template<std::size_t I, std::size_t... S>
bool function(std::index_sequence<I, S...>) {
    return []() -> bool {
        auto isOk = task(I);
        if (!isOk) return false;
        // some code
        return true;
    }() && function(std::index_sequence<S...>{});
    // function is only called with the rest of the sequence if the lambda returns true.
}

void function() {
    // this calls function with an integer sequence from 0 to 9
    function(std::make_index_sequence<10>{});
}
This code will expand just as if you'd written it by hand.
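Spelled out, for make_index_sequence<3> the && chain short-circuits exactly like the hand-written version (a sketch; function_expanded is a made-up name for illustration):
// What function(std::index_sequence<0, 1, 2>{}) boils down to:
bool function_expanded() {
    if (!task(0)) return false;
    // some code
    if (!task(1)) return false;
    // some code
    if (!task(2)) return false;
    // some code
    return true;
}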
If the code between calls of task is different for each step, you can use a tuple.
auto afterTask = std::make_tuple(
    [](){ std::cout << "after task 0" << std::endl; },
    [](){ std::cout << "after task 1" << std::endl; },
    [](){ std::cout << "after task 2" << std::endl; },
    [](){ std::cout << "after task 3" << std::endl; },
    [](){ std::cout << "after task 4" << std::endl; },
    [](){ std::cout << "after task 5" << std::endl; },
    [](){ std::cout << "after task 6" << std::endl; },
    [](){ std::cout << "after task 7" << std::endl; },
    [](){ std::cout << "after task 8" << std::endl; },
    [](){ std::cout << "after task 9" << std::endl; }
);
And then change the definition of function with:
template<std::size_t I, std::size_t... S>
bool function(std::index_sequence<I, S...>) {
    return task(I) &&
        (static_cast<void>(std::get<I>(afterTask)()), true) &&
        function(std::index_sequence<S...>{});
}
I want to know when dispatch has finished with some specific work:
service.dispatch(&some_work);
I want to know this because I need to restart some_work if it has finished.
struct work
{
    std::shared_ptr<asio::io_service> io_service;
    bool ready;
    std::mutex m;

    template <class F>
    void do_some_work(F&& f)
    {
        if (io_service && ready) {
            m.lock();
            ready = false;
            m.unlock();
            io_service->dispatch([&f, this]() {
                f();
                m.lock();
                ready = true;
                m.unlock();
            });
        }
    }

    work(std::shared_ptr<asio::io_service> io_service)
        : io_service(io_service)
        , ready(true)
    {
    }
};

int main()
{
    auto service = std::make_shared<asio::io_service>();
    auto w = std::make_shared<asio::io_service::work>(*service);
    std::thread t1([&] { service->run(); });

    work some_work{ service };

    for (;;) {
        some_work.do_some_work([] {
            std::cout << "Start long draw on thread: " << std::this_thread::get_id()
                      << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(5));
            std::cout << "End long draw on thread: " << std::this_thread::get_id()
                      << std::endl;
        });
    }

    w.reset();
    t1.join();
}
There are some problems with the code; for example, if some_work goes out of scope, the running task would still write to ready.
I am wondering if something like this already exists in Asio?
For lifetime issues, the common idiom is indeed to use shared pointers, examples:
Ensure no new wait is accepted by boost::deadline_timer unless previous wait is expired
Boost::Asio Async write failed
Other than that, the completion handler is already that event. So you would do:
void my_async_loop() {
    auto This = shared_from_this();
    socket_.async_read(buffer(m_buffer, ...,
        [This, this](error_code ec, size_t transferred) {
            if (!ec) {
                // do something
                my_async_loop();
            }
        }
    );
}
This will re-schedule another async operation once the previous one has completed.
On the subject of threadsafety, see Why do I need strand per connection when using boost::asio?
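So one way to get a "finished" notification out of dispatch, without any extra Asio facility, is simply to wrap the work and invoke a callback after it. A sketch, using the io_service interface from the question (dispatch_with_done and the callback names are made up):
#include <asio.hpp>      // standalone Asio as in the question; boost/asio.hpp works the same way
#include <functional>

// Post the work, then run a user-supplied callback once it has finished.
void dispatch_with_done(asio::io_service& io,
                        std::function<void()> work,
                        std::function<void()> on_done)
{
    io.dispatch([work, on_done]() {
        work();     // the actual job
        on_done();  // the completion "event": e.g. mark ready again or restart some_work
    });
}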
I have to implement a valve Open function (for a specified duration).
I am using boost::asio::deadline_timer
My class member function to open valve is:
bool Valves::valveOpen(ValveType type)
{
    switch (type)
    {
    case eVentValve:
        tblMap_.digitalInput[eVentValveK1].setBit();
        if (tblMap_.digitalOutput[eOutK1VentValve].getBit())
        {
            isVentOpen_ = true;
        }
        return isVentOpen_;
    case eVacuumPumpValve:
        ....
        ....
}
Class member function to close the valve is:
bool Valves::valveClose(ValveType type)
{
    switch (type)
    {
    case eVentValve:
        tblMap_.digitalInput[eVentValveK1].clearBit();
        if (!tblMap_.digitalOutput[eOutK1VentValve].getBit())
        {
            isVentOpen_ = false;
        }
        return !isVentOpen_;
    case eVacuumPumpValve:
        ....
        ....
}
I am trying to achieve the timer action as below
bool Valves::valveTimedOpen(ValveType type, int sec)
{
    boost::asio::io_service io;
    switch (type)
    {
    case eVentValve:
    {
        std::bind(&Valves::valveOpen, this, type); //Here
        boost::asio::deadline_timer t(io, boost::posix_time::seconds(sec));
        t.async_wait(std::bind(&Valves::valveClose, this, type));
        boost::thread th(boost::bind(&boost::asio::io_service::run, &io));
        return true;
    }
    case eVacuumPumpValve:
        .....
        .....
}
The code hits the line marked Here, i.e. std::bind(&Valves::valveOpen, this, type);, but it never enters the bool Valves::valveOpen(ValveType type) function.
Can someone let me know the issue with this code?
Variables io and t go out of scope as soon as valveTimedOpen exits. You need to rethink the way you interact with the boost asio components e.g. the io_service could be a member of your class, and the timer could be dynamically allocated and needs to be deleted in the completion handler.
Also, keep in mind that if you plan on re-using an io_service object, you also need to reset it before calling run again.
auto fn = std::bind(&Test::Open, shared_from_this(), std::placeholders::_1);
fn(type);
This calls Open() correctly.
I had to make the io_service and the boost::asio::deadline_timer class members, as suggested by @Ralf.
Working Code:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread.hpp>
#include <boost/asio.hpp>

#include <cstring>
#include <functional>
#include <iostream>
#include <memory>

class Test : public std::enable_shared_from_this<Test>
{
public:
    Test() : io(), timer(io) {}
    void Open(int num);
    void Close(int num);
    void TimedOpen(int num, int dur);
    void Run();
private:
    boost::asio::io_service io;
    boost::asio::deadline_timer timer;
};

void Test::Open(int num)
{
    std::cout << "Open for Number : " << num << std::endl;
}

void Test::Close(int num)
{
    std::cout << "Close for Number : " << num << std::endl;
}

void Test::TimedOpen(int num, int dur)
{
    io.reset();
    auto fn = std::bind(&Test::Open, shared_from_this(), std::placeholders::_1);
    fn(num);
    timer.expires_from_now(boost::posix_time::seconds(dur));
    timer.async_wait(std::bind(&Test::Close, shared_from_this(), num));
    Run();
    std::cout << "Function Exiting" << std::endl;
}

void Test::Run()
{
    boost::thread th(boost::bind(&boost::asio::io_service::run, &io));
}

int main()
{
    auto t = std::make_shared<Test>();
    t->TimedOpen(5, 5);
    char line[128];
    while (std::cin.getline(line, 128))
    {
        if (strcmp(line, "\n")) break;
    }
    return 0;
}
With some code left out, elsewhere on Stack Overflow there is code that looks like this:
// CRTP Abstract Base class for implementing static subject.
// Example Subclass Usage -- Printing Observer:
class Printer : public Observer<Printer> {
public:
    Printer() : timesTriggered_(0) {}

    template <typename... Args>
    void OnNotify(Pressure<Args...> &subject, EventType event) {
        std::cout << "Observer ID: " << this->GetID() << std::endl;
        switch (event) {
        case EventType::UNKNOWN: {
            std::cout << "Unknown Event -- Event #" << timesTriggered_++
                      << std::endl;
            std::cout << "Pressure: " << subject.GetPressure() << std::endl;
            break;
        }
        default: { break; }
        }
    }

private:
    int timesTriggered_;
};
// CRTP Abstract Base class for implementing static subject.
// Example Subclass Usage -- Pressure Sensor:
template <typename... Obs>
class Pressure : public Subject<Pressure<Obs...>, Obs...> {
public:
    typedef Subject<Pressure<Obs...>, Obs...> BaseType;

    Pressure(std::tuple<Obs &...> &&observers, int pressure)
        : BaseType(std::move(observers)), pressure_(pressure) {}

    void Change(int value) {
        pressure_ = value;
        this->NotifyAll(EventType::UNKNOWN);
    }

    int GetPressure() const { return pressure_; }

private:
    int pressure_;
};
// Binding function for use with MakeSubject
// Arguments: observer objects to observe subject notifications
// Return: tuple of references to observers
template <typename... Obs> std::tuple<Obs &...> BindObservers(Obs &... obs) {
    return std::tuple<Obs &...>(obs...);
}
// Creator to ease subject creation
// Template Arguments: Subject subclass type
// Arguments: Result from BindObservers
// Any constructor arguments for Subject subclass
// Return: Subject subclass
// Example Usage:
// auto pressure = MakeSubject<Pressure>(BindObservers(printerObs), initialPressure);
template <template <typename...> class T, typename... Args, typename... Obs>
T<Obs...> MakeSubject(std::tuple<Obs &...> &&obs, Args &&... args) {
    return T<Obs...>(std::move(obs), args...);
}
In main.cpp
int main() {
    Printer printerObs1;
    Printer printerObs2;
    const int initialPressure = 1;
    auto pressure = MakeSubject<Pressure>(
        BindObservers(printerObs1, printerObs2), initialPressure);
    pressure.Change(12);
}
I need to break out the BindObservers call and the return type of MakeSubject, but I can't figure out what to replace both occurrences of auto with in the pseudo-code below:
auto obs = BindObservers(printerObs1, printerObs2);
auto pressure = MakeSubject<Pressure>(obs, initialPressure);
What are the expanded return types of both auto above? I need to store the return values in a std::vector and, AFAIK, I can't say
std::vector<auto> vec
[Although I don't see why not since the compiler can probably figure it out]
You can use std::vector<decltype(pressure)>.
But the type should be Pressure<Printer, Printer>.
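So, assuming the Subject base keeps Pressure copy- or move-constructible, a short sketch of storing it would be:
auto pressure = MakeSubject<Pressure>(
    BindObservers(printerObs1, printerObs2), initialPressure);

// decltype(pressure) names the concrete instantiation, here Pressure<Printer, Printer>.
std::vector<decltype(pressure)> subjects;
subjects.push_back(std::move(pressure));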
I hope you all had nice holidays.
This questions is related to my earlier question: std::condition_variable - Wait for several threads to notify observer
I'm trying to implement a threadpool based on my own mutable thread implementation below:
class MutableThread
{
private:
    std::thread m_Thread;
    std::function<void()> m_Function;
    bool m_bRun;
    std::mutex m_LockMutex;
    std::mutex m_WaitMutex;
    std::condition_variable m_CV;
    IAsyncTemplateObserver<MutableThread>* m_Observer = nullptr;

private:
    void Execute()
    {
        while (m_bRun)
        {
            {
                std::unique_lock<std::mutex> wait(m_WaitMutex);
                m_CV.wait(wait);
            }
            std::lock_guard<std::mutex> lock(m_LockMutex);
            if (m_bRun && m_Function)
            {
                m_Function();
                m_Function = std::function<void()>();
                if (m_Observer != nullptr)
                {
                    m_Observer->Signal(this);
                }
            }
        }
    }

public:
    HDEBUGNAME(TEXT("MutableThread"));

    MutableThread(const MutableThread& thread) = delete;

    MutableThread(IAsyncTemplateObserver<MutableThread>* _Observer)
    {
        m_Observer = _Observer;
        m_bRun = true;
        m_Thread = std::thread(&MutableThread::Execute, this);
    }

    MutableThread()
    {
        m_Observer = nullptr;
        m_bRun = true;
        m_Thread = std::thread(&MutableThread::Execute, this);
    }

    ~MutableThread()
    {
        m_bRun = false;
        m_CV.notify_one();
        try
        {
            if (m_Thread.joinable())
                m_Thread.join();
        }
        catch (std::system_error& ex)
        {
            HWARNINGD(TEXT("%s"), ex.what());
        }
    }

    inline bool Start(const std::function<void()>& f)
    {
        std::lock_guard<std::mutex> lock(m_LockMutex);
        if (m_Function != nullptr)
            return false;
        m_Function = f;
        m_CV.notify_one();
        return true;
    }
};
The IAsyncTemplateObserver simply derives from my IAsyncObserver class posted in the earlier question and adds a virtual function:
template <typename T>
class IAsyncTemplateObserver : public IAsyncObserver
{
public:
    virtual void Signal(T* _Obj) = 0;
};
What I want to do is signal the ThreadPool that the function has finished execution, so that a new task can be assigned to the mutable thread:
class MutableThread;

struct Task
{
    std::function<void()> m_Function;
    uint32_t m_uPriority;

    Task(const std::function<void()>& _Function, uint32_t _uPriority)
    {
        m_Function = _Function;
        m_uPriority = _uPriority;
    }
};

inline bool operator<(const Task& lhs, const Task& rhs)
{
    return lhs.m_uPriority < rhs.m_uPriority;
}
class ThreadPool : public IAsyncTemplateObserver<MutableThread>
{
private:
    std::list<MutableThread*> m_FreeThreads;
    std::list<MutableThread*> m_UsedThreads;
    std::set<Task> m_Tasks;
    std::mutex m_LockMutex;

public:
    ThreadPool()
    {
        //Grow(std::thread::hardware_concurrency() - 1);
    }

    ThreadPool(size_t n)
    {
        Grow(n);
    }

    ~ThreadPool()
    {
        //std::lock_guard<std::mutex> lock(m_Mutex);
        for (MutableThread* pUsed : m_UsedThreads)
        {
            HSAFE_DELETE(pUsed);
        }
        for (MutableThread* pFree : m_FreeThreads)
        {
            HSAFE_DELETE(pFree);
        }
    }

    inline void Grow(size_t n)
    {
        std::lock_guard<std::mutex> lock(m_LockMutex);
        for (size_t i = 0; i < n; i++)
        {
            m_FreeThreads.push_back(new MutableThread(this));
        }
    }

    inline void AddTask(const Task& _Task)
    {
        {
            std::lock_guard<std::mutex> lock(m_LockMutex);
            m_Tasks.insert(_Task);
        }
        AssignThreads();
    }

    virtual void Signal(MutableThread* _pThread)
    {
        {
            std::lock_guard<std::mutex> lock(m_LockMutex);
            m_UsedThreads.remove(_pThread);
            m_FreeThreads.push_back(_pThread);
        }
        AssignThreads();
        NotifyOne();
    }

    inline void WaitForAllThreads()
    {
        bool bWait = true;
        do
        {
            {
                //check if we have to wait
                std::lock_guard<std::mutex> lock(m_LockMutex);
                bWait = !m_UsedThreads.empty() || !m_Tasks.empty();
            }
            if (bWait)
            {
                std::unique_lock<std::mutex> wait(m_ObserverMutex);
                m_ObserverCV.wait(wait);
            }
        } while (bWait);
    }

private:
    inline void AssignThreads()
    {
        std::lock_guard<std::mutex> lock(m_LockMutex);
        if (m_FreeThreads.empty() || m_Tasks.empty())
            return;
        //Get free thread
        MutableThread* pThread = m_FreeThreads.back();
        m_FreeThreads.pop_back();
        //park thread in used list
        m_UsedThreads.push_back(pThread);
        //get task with highest priority
        std::set<Task>::iterator it = m_Tasks.end();
        --it; //last entry has highest priority
        //start the task
        pThread->Start(it->m_Function);
        //remove the task from the list
        m_Tasks.erase(it);
    }
};
The AddTask function is called several times by the same thread, but when a mutable thread signals the thread pool (via m_Observer->Signal(this)), the application freezes at the lock_guard in the AssignThreads() function. Now, the strange thing is that, unlike a normal deadlock, all callstack views in Visual Studio are empty as soon as I try to step over the line with the lock_guard.
Can anyone explain this behaviour? Is there any major design flaw or just a simple mix up?
Thanks for your help!
Greetings,
Fabian
Edit: I've added a minimal visual studio solution that reproduces the problem: ThreadPoolTest.zip
Thanks to a friend, I was able to fix the problem by moving the call m_Observer->Signal(this) outside of the lock_guard scope in the MutableThread::Execute() function. Secondly, I removed the lock_guard in the AssignThreads() function and moved its call into the scope of the lock_guard in the Signal()/AddTask() functions. Not really related, but still a flaw: all condition_variable wait() calls are now in a while (m_bNotified == false) loop.
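For reference, a sketch of what the reworked MutableThread::Execute() could look like along these lines (m_bNotified is an assumed new bool member, set under m_WaitMutex in Start() and in the destructor before notify_one(); this is not the exact production code):
void Execute()
{
    while (m_bRun)
    {
        {
            std::unique_lock<std::mutex> wait(m_WaitMutex);
            while (m_bNotified == false)   // guards against spurious wake-ups
                m_CV.wait(wait);
            m_bNotified = false;
        }

        bool bFinished = false;
        {
            std::lock_guard<std::mutex> lock(m_LockMutex);
            if (m_bRun && m_Function)
            {
                m_Function();
                m_Function = std::function<void()>();
                bFinished = true;
            }
        }   // m_LockMutex is released here

        // Signal() is now called without holding m_LockMutex, so the pool's
        // Signal() -> AssignThreads() -> Start() chain can no longer deadlock on it.
        if (bFinished && m_Observer != nullptr)
            m_Observer->Signal(this);
    }
}
On the pool side, AssignThreads() no longer takes m_LockMutex itself and is simply called while the caller already holds it, e.g.:
inline void AddTask(const Task& _Task)
{
    std::lock_guard<std::mutex> lock(m_LockMutex);
    m_Tasks.insert(_Task);
    AssignThreads();   // runs under the caller's lock
}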