Simple app, just learning to set up my Boost environment...
The program compiles, but at runtime I get a "libboost_thread-mgw44-mt-1_51.dll is missing" error.
On a side note, I had to change thread1.stop() to t1.stop(), since Boost said there was no stop() function.
I have the DLL; placing it in the same folder as the app doesn't do anything.
#include <boost/bind.hpp>
#include <boost/thread.hpp>
class Threadable
{
public:
void stop()
{
running = false;
}
int run()
{
running = true;
while(running)
{
//Do Something Meaningful
}
return 0;
}
private:
bool running;
};
int main()
{
Threadable t1, t2;
boost::thread thread1(boost::bind(&Threadable::run, t1));
boost::thread thread2(boost::bind(&Threadable::run, t2));
// Let the threads run for half a second
boost::this_thread::sleep(boost::posix_time::milliseconds(500));
// Signal them to stop
t1.stop();
t2.stop();
//thread1.stop();
//thread2.stop();
// Wait for them to gracefully exit
thread1.join();
thread2.join();
return 0;
}
Related
Consider the following code snippet
#include <future>
std::mutex asyncMut_;
std::atomic<bool> isAsyncOperAllowed = false;
std::condition_variable cv;
void asyncFunc()
{
while (isAsyncOperAllowed)
{
std::unique_lock<std::mutex> ul(asyncMut_);
cv.wait(ul, []()
{
return isAsyncOperAllowed == false;
});
}
}
int main()
{
isAsyncOperAllowed = true;
auto fut = std::async(std::launch::async, asyncFunc);
std::this_thread::sleep_for(std::chrono::seconds(3));
std::lock_guard<std::mutex> lg(asyncMut_);
isAsyncOperAllowed = false;
cv.notify_one();
fut.get();
}
I am expecting that once I change the isAsyncOperAllowed variable and notify the condition variable, the wait inside asyncFunc should return, asyncFunc should finish, and main should end.
I am observing that the condition variable keeps waiting indefinitely. What am I doing wrong?
P.S. I am on Win10 - VS2015
Deadlock: main() never releases lg, so even though the condition variable in asyncFunc() gets notified, the waiting thread never gets a chance to run because it can't reacquire the lock.
Try:
int main()
{
isAsyncOperAllowed = true;
auto fut = std::async(std::launch::async, asyncFunc);
std::this_thread::sleep_for(std::chrono::seconds(3));
{
std::lock_guard<std::mutex> lg(asyncMut_);
isAsyncOperAllowed = false;
}
cv.notify_one();
fut.get();
}
I am trying to mix Boost.Signals2 with Boost.Asio to do dispatch-based handler invocation. When the post method is invoked from a thread, io_service::run() exits immediately and the callback handed to post is never invoked; the callback is a C++11 lambda. I am pasting the code for more analysis.
#include<iostream>
#include<thread>
#include<boost/signals2/signal.hpp>
#include<boost/asio.hpp>
static boost::asio::io_service svc;
static boost::signals2::signal<void(std::string)> textEntered;
static void
handleInputText(std::string text)
{
std::cout<<"handleInputText()"<<" text provided: "<<text;
return;
}
static void
worker()
{
sleep(2);
svc.post([](){
std::cout<<"\nRaising signal.";
std::string hello("hello world");
textEntered(hello);
});
return;
}
int main(int ac, char **av)
{
try
{
textEntered.connect(&handleInputText);
std::thread w(std::bind(&worker));
svc.run();
w.join();
}
catch(std::exception &ex)
{
std::cerr<<"main() exited with exception:"<<ex.what();
}
return 0;
}
You don't actually post any work to the service.
You start a thread that will eventually post work, but by that time the main thread's call to io_service::run() has already returned, because the service had no work to do.
Either run the io_service on a dedicated thread, or make sure it has an io_service::work object.
Here's a fix with a dedicated service thread and a work item:
Live On Coliru
#include<boost/asio.hpp>
#include<iostream>
#include<boost/signals2.hpp>
#include<boost/thread.hpp>
#include<boost/make_shared.hpp>
#include<unistd.h> // for sleep()
static boost::asio::io_service svc;
static boost::shared_ptr<boost::asio::io_service::work> work_lock;
static boost::signals2::signal<void(std::string)> textEntered;
static void
handleInputText(std::string text)
{
std::cout<<"handleInputText()"<<" text provided: "<<text;
return;
}
static void
worker()
{
sleep(2);
svc.post([](){
std::cout<<"\nRaising signal.";
std::string hello("hello world");
textEntered(hello);
});
return;
}
int main()
{
try
{
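// The work object keeps io_service::run() from returning until work_lock is reset below.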
work_lock = boost::make_shared<boost::asio::io_service::work>(svc);
textEntered.connect(&handleInputText);
boost::thread_group tg;
tg.create_thread(boost::bind(&boost::asio::io_service::run, &svc));
tg.create_thread(&worker);
boost::this_thread::sleep_for(boost::chrono::seconds(3));
work_lock.reset();
tg.join_all();
}
catch(std::exception &ex)
{
std::cerr<<"main() exited with exception:"<<ex.what();
}
return 0;
}
Prints:
Raising signal.handleInputText() text provided: hello world
The following program produces a segmentation fault, although I don't see any undefined behaviour in the code. It has been compiled with GCC 4.7.3. Do you know the reason for the fault or a possible workaround? Also, it seems boost::future does not exist in v1.53 yet, so I should probably rely on boost::unique_future. I cannot upgrade to any version above 1.53, and I really need the "make_ready_at_thread_exit()" feature.
#define BOOST_THREAD_PROVIDES_SIGNATURE_PACKAGED_TASK
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>
namespace th = boost;
struct S {
th::packaged_task<void()> task;
th::unique_future<void> future;
void start();
void stop();
};
void S::start() {
task = th::packaged_task<void()>{ [this] () {}};
future = task.get_future();
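// Note: make_ready_at_thread_exit() runs the task immediately, but the shared state only becomes ready when the calling thread exits.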
task.make_ready_at_thread_exit();
}
void S::stop() {
future.wait();
}
int main() {
S s;
s.start();
s.stop();
}
Question
What can I do to get a locking mechanism that provides minimal and stable latency while guaranteeing that a thread cannot reacquire a resource before another thread has acquired and released it?
Answers to this question are ranked by desirability as follows:
Some combination of built-in C++11 features that work in MinGW on Windows 7 (note that the <thread> and <mutex> libraries do not work on a Windows platform)
Some combination of Windows API features
A modification to the FairLock listed below, my own attempt at implementing such a mechanism
Some features provided by a free, open-source library that does not require a .configure/make/make install process (getting that to work in MSYS is more of an adventure than I care for)
Background
I am writing an application which is effectively a multi-stage producer/consumer. One thread generates input consumed by another thread, which produces output consumed by yet another thread. The application uses pairs of buffers so that, after an initial delay, all threads can work nearly simultaneously.
Since I am writing a Windows 7 application, I had been using CriticalSections to guard the buffers. The problem with CriticalSections (or, so far as I can tell, any other Windows or built-in C++11 synchronization object) is that they provide no way to prevent a thread that has just released a lock from reacquiring it before another waiting thread gets a chance to. Because of this, many of my test drivers for the middle thread (the Encoder) never gave the Encoder a chance to acquire the test input buffers and completed without having tested them. The end result was a ridiculous process of trying to determine an artificial wait time that stochastically worked for my machine.
Since the structure of my application requires that each stage wait for the other stages to have acquired, finished using, and released the necessary buffers before it can use them again, I need, for lack of a better term, a fair locking mechanism. I took a crack at writing one (the source code is provided below). In testing, this FairLock lets my test driver run my Encoder at the same speeds I was able to achieve using the CriticalSection in maybe 60% of the runs. The other 40% of the runs take anywhere from 10 to 100 ms longer, which is not acceptable for my application.
FairLock
// FairLock.hpp
#ifndef FAIRLOCK_HPP
#define FAIRLOCK_HPP
#include <atomic>
using namespace std;
class FairLock {
private:
atomic_bool owned {false};
atomic<DWORD> lastOwner {0};
public:
FairLock(bool owned);
bool inline hasLock() const;
bool tryLock();
void seizeLock();
void tryRelease();
void waitForLock();
};
#endif
// FairLock.cpp
#include <windows.h>
#include "FairLock.hpp"
#define ID GetCurrentThreadId()
FairLock::FairLock(bool owned) {
if (owned) {
this->owned = true;
this->lastOwner = ID;
} else {
this->owned = false;
this->lastOwner = 0;
}
}
bool inline FairLock::hasLock() const {
return owned && lastOwner == ID;
}
bool FairLock::tryLock() {
bool success = false;
DWORD id = ID;
if (owned) {
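// Race (see the EDIT below): if another thread's CAS has set owned but not yet updated lastOwner, the previous owner also passes this check and both threads think they hold the lock.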
success = lastOwner == id;
} else if (
lastOwner != id &&
owned.compare_exchange_strong(success, true)
) {
lastOwner = id;
success = true;
} else {
success = false;
}
return success;
}
void FairLock::seizeLock() {
bool success = false;
DWORD id = ID;
if (!(owned && lastOwner == id)) {
while (!owned.compare_exchange_strong(success, true)) {
success = false;
}
lastOwner = id;
}
}
void FairLock::tryRelease() {
if (hasLock()) {
owned = false;
}
}
void FairLock::waitForLock() {
bool success = false;
DWORD id = ID;
if (!(owned && lastOwner == id)) {
while (lastOwner == id); // spin
while (!owned.compare_exchange_strong(success, true)) {
success = false;
}
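// Race window (see the EDIT below): owned is already true here, but lastOwner still names the previous owner until the next assignment runs.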
lastOwner = id;
}
}
EDIT
DO NOT USE THIS FairLock CLASS; IT DOES NOT GUARANTEE MUTUAL EXCLUSION!
I reviewed the above code, comparing it against the text of The C++ Programming Language: 4th Edition (which I had not read carefully enough) and CouchDeveloper's recommended Synchronous Queue. I realized that there are several sequences in which the thread that just released the FairLock can be tricked into thinking it still owns it. All it takes is interleaving the instructions as follows:
New owner: set owned to true
Old owner: is owned true? yes
Old owner: am I the last owner? yes
New owner: set me as the last owner
At this point, the old and new owners both enter their critical sections.
I am considering whether this problem has a solution and whether it is worth attempting to solve this at all. In the meantime, don't use this unless you see a fix.
I would implement this in C++11 using a condition_variable-per-thread setup so that I can choose exactly which thread to wake up when unlocking (Live demo at Coliru):
class FairMutex {
private:
class waitnode {
std::condition_variable cv_;
waitnode* next_ = nullptr;
FairMutex& fmtx_;
public:
waitnode(FairMutex& fmtx) : fmtx_(fmtx) {
*fmtx.tail_ = this;
fmtx.tail_ = &next_;
}
~waitnode() {
for (waitnode** p = &fmtx_.waiters_; *p; p = &(*p)->next_) {
if (*p == this) {
*p = next_;
if (!next_) {
fmtx_.tail_ = &fmtx_.waiters_;
}
break;
}
}
}
void wait(std::unique_lock<std::mutex>& lk) {
while (fmtx_.held_ || fmtx_.waiters_ != this) {
cv_.wait(lk);
}
}
void notify() {
cv_.notify_one();
}
};
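// Intrusive singly-linked list of waiters in arrival order; tail_ points at the last node's next_ pointer so new waiters can be appended in O(1).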
waitnode* waiters_ = nullptr;
waitnode** tail_ = &waiters_;
std::mutex mtx_;
bool held_ = false;
public:
void lock() {
auto lk = std::unique_lock<std::mutex>{mtx_};
if (held_ || waiters_) {
waitnode{*this}.wait(lk);
}
held_ = true;
}
bool try_lock() {
if (mtx_.try_lock()) {
std::lock_guard<std::mutex> lk(mtx_, std::adopt_lock);
if (!held_ && !waiters_) {
held_ = true;
return true;
}
}
return false;
}
void unlock() {
std::lock_guard<std::mutex> lk(mtx_);
held_ = false;
if (waiters_ != nullptr) {
waiters_->notify();
}
}
};
FairMutex models the Lockable concept so it can be used like any other standard library mutex type. Put simply, it achieves fairness by inserting waiters into a list in arrival order, and passing the mutex to the first waiter in the list when unlocking.
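For illustration, a minimal usage sketch (assuming only the FairMutex class above is in scope): since it models Lockable, std::lock_guard and friends work with it directly.
#include <mutex>
#include <thread>
#include <vector>

FairMutex fair_mtx;  // the class defined above
long counter = 0;

void bump()
{
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<FairMutex> lock(fair_mtx); // used exactly like std::mutex
        ++counter;                                 // waiters are served in arrival order
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(bump);
    for (auto& t : threads) t.join();
}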
If it's useful:
This demonstrates *) an implementation of a "synchronous queue" using semaphores as synchronization primitives.
Note: the actual implementation uses semaphores implemented with GCD (Grand Central Dispatch):
using gcd::mutex;
using gcd::semaphore;
// A blocking queue in which each put must wait for a get, and vice
// versa. A synchronous queue does not have any internal capacity,
// not even a capacity of one.
template <typename T>
class simple_synchronous_queue {
public:
typedef T value_type;
enum result_type {
OK = 0,
TIMEOUT_NOT_DELIVERED = -1,
TIMEOUT_NOT_PICKED = -2,
TIMEOUT_NOTHING_OFFERED = -3
};
simple_synchronous_queue()
: sync_(0), send_(1), recv_(0)
{
}
void put(const T& v) {
send_.wait();
new (address()) T(v);
recv_.signal();
sync_.wait();
}
result_type put(const T& v, double timeout) {
if (send_.wait(timeout)) {
new (address()) T(v);
recv_.signal();
if (sync_.wait(timeout)) {
return OK;
}
else {
return TIMEOUT_NOT_PICKED;
}
}
else {
return TIMEOUT_NOT_DELIVERED;
}
}
T get() {
recv_.wait();
T result = *address();
address()->~T();
sync_.signal();
send_.signal();
return result;
}
std::pair<result_type, T> get(double timeout) {
if (recv_.wait(timeout)) {
std::pair<result_type, T> result =
std::pair<result_type, T>(OK, *address());
address()->~T();
sync_.signal();
send_.signal();
return result;
}
else {
return std::pair<result_type, T>(TIMEOUT_NOTHING_OFFERED, T());
}
}
private:
using storage_t = typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type;
T* address() {
return static_cast<T*>(static_cast<void*>(&storage_));
}
storage_t storage_;
semaphore sync_;
semaphore send_;
semaphore recv_;
};
*) "demonstrates": be careful about potential issues; it could be improved, etc. ;)
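For reference, here is a minimal C++11 counting semaphore sketch that could stand in for gcd::semaphore above; this is an assumption on my part, since the actual GCD-based implementation is not shown here.
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical stand-in for gcd::semaphore (the original uses GCD primitives).
class semaphore {
public:
    explicit semaphore(long count) : count_(count) {}
    void wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return count_ > 0; });
        --count_;
    }
    // Returns false if the semaphore could not be acquired within 'seconds'.
    bool wait(double seconds) {
        std::unique_lock<std::mutex> lock(mutex_);
        if (!cv_.wait_for(lock, std::chrono::duration<double>(seconds),
                          [this] { return count_ > 0; }))
            return false;
        --count_;
        return true;
    }
    void signal() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            ++count_;
        }
        cv_.notify_one();
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    long count_;
};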
I accepted CouchDeveloper's answer since it pointed me down the right path. I wrote a Windows-specific C++11 implementation of a synchronous queue, and added this answer so that others could consider/use it if they so choose.
// SynchronousQueue.hpp
#ifndef SYNCHRONOUSQUEUE_HPP
#define SYNCHRONOUSQUEUE_HPP
#include <atomic>
#include <exception>
#include <windows.h>
using namespace std;
class CouldNotEnterException: public exception {};
class NoPairedCallException: public exception {};
template <typename T>
class SynchronousQueue {
private:
atomic_bool valueReady {false};
CRITICAL_SECTION getCriticalSection;
CRITICAL_SECTION putCriticalSection;
DWORD wait {0};
HANDLE getSemaphore;
HANDLE putSemaphore;
const T* address {nullptr};
public:
SynchronousQueue(DWORD waitMS): wait {waitMS}, address {nullptr} {
InitializeCriticalSection(&getCriticalSection);
InitializeCriticalSection(&putCriticalSection);
getSemaphore = CreateSemaphore(nullptr, 0, 1, nullptr);
putSemaphore = CreateSemaphore(nullptr, 0, 1, nullptr);
}
~SynchronousQueue() {
EnterCriticalSection(&getCriticalSection);
EnterCriticalSection(&putCriticalSection);
CloseHandle(getSemaphore);
CloseHandle(putSemaphore);
DeleteCriticalSection(&putCriticalSection);
DeleteCriticalSection(&getCriticalSection);
}
void put(const T& value) {
if (!TryEnterCriticalSection(&putCriticalSection)) {
throw CouldNotEnterException();
}
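// Announce this put() and wait up to 'wait' ms for a matching get(); on timeout, withdraw the announcement if no get() claimed it and report the missing pair.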
ReleaseSemaphore(putSemaphore, (LONG) 1, nullptr);
if (WaitForSingleObject(getSemaphore, wait) != WAIT_OBJECT_0) {
if (WaitForSingleObject(putSemaphore, 0) == WAIT_OBJECT_0) {
LeaveCriticalSection(&putCriticalSection);
throw NoPairedCallException();
} else {
WaitForSingleObject(getSemaphore, 0);
}
}
address = &value;
valueReady = true;
while (valueReady);
LeaveCriticalSection(&putCriticalSection);
}
T get() {
if (!TryEnterCriticalSection(&getCriticalSection)) {
throw CouldNotEnterException();
}
ReleaseSemaphore(getSemaphore, (LONG) 1, nullptr);
if (WaitForSingleObject(putSemaphore, wait) != WAIT_OBJECT_0) {
if (WaitForSingleObject(getSemaphore, 0) == WAIT_OBJECT_0) {
LeaveCriticalSection(&getCriticalSection);
throw NoPairedCallException();
} else {
WaitForSingleObject(putSemaphore, 0);
}
}
while (!valueReady);
T toReturn = *address;
valueReady = false;
LeaveCriticalSection(&getCriticalSection);
return toReturn;
}
};
#endif
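A hypothetical usage sketch (it assumes the header above is saved as SynchronousQueue.hpp and that std::thread is available):
#include "SynchronousQueue.hpp"
#include <iostream>
#include <thread>

int main() {
    SynchronousQueue<int> queue(1000); // wait up to 1000 ms for the paired call
    std::thread producer([&queue] {
        try { queue.put(42); }
        catch (const NoPairedCallException&) { std::cout << "no matching get()\n"; }
    });
    std::thread consumer([&queue] {
        try { std::cout << "got " << queue.get() << "\n"; }
        catch (const NoPairedCallException&) { std::cout << "no matching put()\n"; }
    });
    producer.join();
    consumer.join();
}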
I am trying to convert some existing code to use boost's asio tcp sockets instead of our current implementation. I am able to get a very similar example (of a chat client/server) from the boost site working, but when I attempt to put the code into my own program it stops working.
What I am doing:
Start a server process
The server process makes an empty socket and uses it to listen (with a tcp::acceptor) for TCP connections on a port (10010 for example)
Start a client process
Have the client process create a socket and connect to the server's port
When the server sees a client connecting, it starts listening for data (with async_read) on the socket and creates another empty socket to listen for another TCP connection on the port
When the client sees that the server has connected, it sends 100 bytes of data (with async_write) and waits for the socket to tell it the send is finished...when that happens it prints a message and shuts down
When the server gets notified that it has data that has been read, it prints a message and shuts down
Obviously, I have greatly trimmed this code down from what I'm trying to implement; this is as small as I could make something that reproduces the problem. I'm running on Windows and have a Visual Studio solution file you can get. There are some memory leaks, thread-safety problems, and such, but that's because I'm pulling stuff out of existing code, so don't worry about them.
Anyway, here are the files: one header with some common stuff, a server, and a client.
Connection.hpp:
#ifndef CONNECTION_HPP
#define CONNECTION_HPP
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/system/error_code.hpp>
class ConnectionTransfer
{
public:
ConnectionTransfer(char* buffer, unsigned int size) :
buffer_(buffer), size_(size) {
}
virtual ~ConnectionTransfer(void){}
char* GetBuffer(){return buffer_;}
unsigned int GetSize(){return size_;}
virtual void CallbackForFinished() = 0;
protected:
char* buffer_;
unsigned int size_;
};
class ConnectionTransferInProgress
{
public:
ConnectionTransferInProgress(ConnectionTransfer* ct):
ct_(ct)
{}
~ConnectionTransferInProgress(void){}
void operator()(const boost::system::error_code& error){Other(error);}
void Other(const boost::system::error_code& error){
if(!error)
ct_->CallbackForFinished();
}
private:
ConnectionTransfer* ct_;
};
class Connection
{
public:
Connection(boost::asio::io_service& io_service):
sock_(io_service)
{}
~Connection(void){}
void AsyncSend(ConnectionTransfer* ct){
ConnectionTransferInProgress tip(ct);
//sock_->async_send(boost::asio::buffer(ct->GetBuffer(),
// static_cast(ct->GetSize())), tip);
boost::asio::async_write(sock_, boost::asio::buffer(ct->GetBuffer(),
static_cast<std::size_t>(ct->GetSize())), boost::bind(
&ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
}
void AsyncReceive(ConnectionTransfer* ct){
ConnectionTransferInProgress tip(ct);
//sock_->async_receive(boost::asio::buffer(ct->GetBuffer(),
// static_cast(ct->GetSize())), tip);
boost::asio::async_read(sock_, boost::asio::buffer(ct->GetBuffer(),
static_cast<std::size_t>(ct->GetSize())), boost::bind(
&ConnectionTransferInProgress::Other, tip, boost::asio::placeholders::error));
}
boost::asio::ip::tcp::socket& GetSocket(){return sock_;}
private:
boost::asio::ip::tcp::socket sock_;
};
#endif //CONNECTION_HPP
BoostConnectionClient.cpp:
#include "Connection.hpp"
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread.hpp>
#include <iostream>
using namespace boost::asio::ip;
bool connected;
bool gotTransfer;
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
void ConnectHandler(const boost::system::error_code& error)
{
if(!error)
connected = true;
}
int main(int argc, char* argv[])
{
connected = false;
gotTransfer = false;
boost::asio::io_service io_service;
Connection* conn = new Connection(io_service);
tcp::endpoint ep(address::from_string("127.0.0.1"), 10011);
conn->GetSocket().async_connect(ep, ConnectHandler);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!connected)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Connected\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Done\n";
return 0;
}
BoostConnectionServer.cpp:
#include "Connection.hpp"
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread.hpp>
#include <iostream>
using namespace boost::asio::ip;
Connection* conn1;
bool conn1Done;
bool gotTransfer;
Connection* conn2;
class FakeAcceptor
{
public:
FakeAcceptor(boost::asio::io_service& io_service, const tcp::endpoint& endpoint)
:
io_service_(io_service),
acceptor_(io_service, endpoint)
{
conn1 = new Connection(io_service_);
acceptor_.async_accept(conn1->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn1,
boost::asio::placeholders::error));
}
void HandleAccept(Connection* conn, const boost::system::error_code& error)
{
if(conn == conn1)
conn1Done = true;
conn2 = new Connection(io_service_);
acceptor_.async_accept(conn2->GetSocket(),
boost::bind(&FakeAcceptor::HandleAccept, this, conn2,
boost::asio::placeholders::error));
}
boost::asio::io_service& io_service_;
tcp::acceptor acceptor_;
};
class FakeTransfer : public ConnectionTransfer
{
public:
FakeTransfer(char* buffer, unsigned int size) : ConnectionTransfer(buffer, size)
{
}
void CallbackForFinished()
{
gotTransfer = true;
}
};
int main(int argc, char* argv[])
{
boost::asio::io_service io_service;
conn1Done = false;
gotTransfer = false;
tcp::endpoint endpoint(tcp::v4(), 10011);
FakeAcceptor fa(io_service, endpoint);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while(!conn1Done)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Accepted incoming connection\n";
char data[100];
FakeTransfer* ft = new FakeTransfer(data, 100);
conn1->AsyncReceive(ft);
while(!gotTransfer)
{
boost::this_thread::sleep(boost::posix_time::millisec(1));
}
std::cout << "Success!\n";
return 0;
}
I've searched around a bit, but haven't had much luck. As far as I can tell, I'm almost exactly matching the sample, so it must be something small that I'm overlooking.
Thanks!
In your client code, your ConnectHandler() callback function just sets a value and then returns, without posting any more work to the io_service. At that point, that async_connect() operation is the only work associated with the io_service; so when ConnectHandler() returns, there is no more work associated with the io_service. Thus the background thread's call to io_service.run() returns, and the thread exits.
One potential option would be to call conn->AsyncReceive() from within ConnectHandler(), so that the async_read() gets called prior to the ConnectHandler() returning and thus the background thread's call to io_service.run() won't return.
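A sketch of that first option (hypothetical: it assumes conn and the FakeTransfer are reachable from the handler, for example as globals, whereas the posted client keeps them local to main()):
void ConnectHandler(const boost::system::error_code& error)
{
    if (!error)
    {
        connected = true;
        // Queue the read before this handler returns, so the io_service still
        // has outstanding work and the background thread's run() keeps going.
        conn->AsyncReceive(ft);
    }
}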
Another option, the more trivial one, would be to instantiate an io_service::work instance prior to creating your thread to call io_service::run (technically, you could do this at any point prior to the io_service.run() call's returning):
...
// some point in the main() method, prior to creating the background thread
boost::asio::io_service::work work(io_service);
...
This is documented in the io_service documentation:
Stopping the io_service from running out of work
Some applications may need to prevent an io_service object's run() call from returning when there is no more work to do. For example, the io_service may be being run in a background thread that is launched prior to the application's asynchronous operations. The run() call may be kept running by creating an object of type io_service::work:
http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/reference/io_service.html