C++11 chrono library - How to execute a method after a specific time interval?

I want to use the chrono library properly to configure my class so that a method is called after a given number of milliseconds.
#include <iostream>
#include <chrono>
#include <ctime>
class House
{
private:
    //...
public:
    House() {}
    ~House() {}
    void method1() { std::cout << "method1 called" << std::endl; }
    void method2() { std::cout << "method2 called" << std::endl; }
    void method3() { std::cout << "method3 called" << std::endl; }
};

int main()
{
    House h;

    // For the object 'h', I need to call method1() after 100ms
    // ???

    // For the object 'h', I need to call method2() after 200ms
    // ???

    // For the object 'h', I need to call method3() after 300ms
    // ???

    return 0;
}
Any ideas how to do this?

This is a snippet from a book I have been reading and studying since I'm just getting into C++ (I started about three months ago, but before that I practiced Java and Python a bit). It explains how to do what you're intending to do, along with an example. I could have explained it in my own words, but I feel this hits the nail on the head:
5.3.4.1 Waiting for Events
Sometimes, a thread needs to wait for some kind of external event, such as another thread completing a task or a certain amount of time having passed. The simplest “event” is simply time passing. Consider:
auto t0 = high_resolution_clock::now();
this_thread::sleep_for(milliseconds{20});
auto t1 = high_resolution_clock::now();
cout << duration_cast<nanoseconds>(t1 - t0).count() << " nanoseconds passed\n";
Note that I didn't even have to launch a thread; by default, this_thread refers to the one and only thread (§ 42.2.6). I used duration_cast to adjust the clock’s units to the nanoseconds I wanted. See § 5.4.1 and § 35.2 before trying anything more complicated than this with time. The time facilities are found in <chrono>.
— The C++ Programming Language 4th Edition by Bjarne Stroustrup
I feel using this method would help accomplish what you're trying to do: run tasks one after the other. Check out <chrono>. I found this answer because of a book I was reading; this isn't my work, it's from a book. If you intend to have many tasks running simultaneously, you will need to create threads, and if they happen to share a resource you will probably need locks, or just use unique_lock / lock_guard. I prefer unique_lock.
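To tie that back to the question's code, here is a minimal single-threaded sketch (my own, not from the book). Each sleep_for is relative to the previous call, so with three 100 ms sleeps, method1 runs roughly 100 ms, method2 roughly 200 ms and method3 roughly 300 ms after the start; note it blocks the calling thread while waiting.
#include <chrono>
#include <iostream>
#include <thread>

class House {
public:
    void method1() { std::cout << "method1 called" << std::endl; }
    void method2() { std::cout << "method2 called" << std::endl; }
    void method3() { std::cout << "method3 called" << std::endl; }
};

int main()
{
    House h;

    // Each sleep is relative to the previous statement, so the calls land
    // roughly 100 ms, 200 ms and 300 ms after the start of main().
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    h.method1();
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    h.method2();
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    h.method3();
}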

Related

Boost asio post with shared ptr passed as argument with std::move

I am new to boost::asio. I need to pass a shared_ptr as an argument to a handler function.
E.g.
boost::asio::post(std::bind(&::function_x, std::move(some_shared_ptr)));
Is using std::move(some_shared_ptr) correct? or should I use as below,
boost::asio::post(std::bind(&::function_x, some_shared_ptr));
If both are correct, which one is advisable?
Thanks in advance
Regards
Shankar
Bind stores arguments by value.
So both are correct and probably equivalent. Moving the argument into the bind is potentially more efficient if some_shared_ptr is not going to be used after the bind.
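A tiny sketch of the difference (my own illustration, not part of the question): copying into the bind bumps the reference count, moving transfers ownership without touching it.
#include <cassert>
#include <functional>
#include <memory>

void function_x(std::shared_ptr<int> p) { assert(p); }

int main()
{
    auto sp = std::make_shared<int>(1);

    auto copied = std::bind(&function_x, sp);           // use_count goes to 2
    assert(sp.use_count() == 2);

    auto moved = std::bind(&function_x, std::move(sp)); // ownership transferred
    assert(!sp);                                        // sp is now empty

    copied(); // both bound objects still hold a valid shared_ptr
    moved();
}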
Warning: Advanced Use Cases
(just skip this if you want)
Not what you asked: what if function_x took rvalue-reference arguments?
Glad you asked. You can't. However, you can still receive by lvalue reference and just move from that, because:
std::move doesn't move
The rvalue-reference is only there to indicate potentially-moved-from arguments enabling some smart compiler optimizations and diagnostics.
So, as long as you know your bound function is only executed once (!!) then it's safe to move from lvalue parameters.
In the case of shared-pointers there's actually a little bit more leeway, because moving from the shared-ptr doesn't actually move the pointed-to element at all.
So, a little exercise demonstrating it all:
Live On Coliru
#include <boost/asio.hpp>
#include <cassert>
#include <iostream>
#include <memory>

static void foo(std::shared_ptr<int>& move_me) {
    if (!move_me) {
        std::cout << "already moved!\n";
    } else {
        std::cout << "argument: " << *std::move(move_me) << "\n";
        move_me.reset();
    }
}

int main() {
    std::shared_ptr<int> arg = std::make_shared<int>(42);
    std::weak_ptr<int> observer = std::weak_ptr(arg);
    assert(observer.use_count() == 1);

    auto f = std::bind(foo, std::move(arg));
    assert(!arg);                      // moved
    assert(observer.use_count() == 1); // so still 1 usage

    {
        boost::asio::io_context ctx;
        post(ctx, f);
        ctx.run();
    }

    assert(observer.use_count() == 1); // so still 1 usage

    f(); // still has the shared arg
    // but now the last copy was moved from, so it's gone
    assert(observer.use_count() == 0);

    f(); // already moved!
}
Prints
argument: 42
argument: 42
already moved!
Why Bother?
Why would you care about the above? Well, since in Asio you have a lot of handlers that are guaranteed to execute precisely ONCE, you can sometimes avoid the overhead of shared pointers (the synchronization, the allocation of the control block, the type erasure of the deleter).
That is, you can use move-only handlers using std::unique_ptr<>:
Live On Coliru
#include <boost/asio.hpp>
#include <cassert>
#include <iostream>
#include <memory>

static void foo(std::unique_ptr<int>& move_me) {
    if (!move_me) {
        std::cout << "already moved!\n";
    } else {
        std::cout << "argument: " << *std::move(move_me) << "\n";
        move_me.reset();
    }
}

int main() {
    auto arg = std::make_unique<int>(42);
    auto f = std::bind(foo, std::move(arg)); // this handler is now move-only
    assert(!arg);                            // moved

    {
        boost::asio::io_context ctx;
        post(ctx,
             std::move(f)); // move-only, so move the entire bind (including arg)
        ctx.run();
    }

    f(); // f itself was moved from, so this prints "already moved!"
}
Prints
argument: 42
already moved!
This is going to help a lot in code that uses many composed operations: you can now bind the state of the operation into the handler with zero overhead, even when it's large or dynamically allocated.
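As an aside (my own addition, not from the original answer): with C++14 init-capture you can get the same move-only, shared_ptr-free handler without std::bind, by moving the state straight into a lambda. A rough sketch:
#include <boost/asio.hpp>
#include <iostream>
#include <memory>

int main()
{
    auto state = std::make_unique<int>(42); // hypothetical per-operation state

    boost::asio::io_context ctx;
    post(ctx, [p = std::move(state)]() mutable {
        std::cout << "argument: " << *p << "\n";
        p.reset(); // the handler runs exactly once, so consuming the state is fine
    });
    ctx.run();
}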

virtual method callbacks in C++11/14/17?

I have some subscription function that will call my callback when something happens. (Let's say it's a timer, and will pass me an object when a certain number of milliseconds elapses.) The thing I want to be called is a virtual method. I feel std::function and std::bind or lambdas are part of the solution.
The C++98 approach I've used until now involves one-line C functions that know how to call a virtual method. The subscription function takes the C function and a void* user data as arguments. For example:
class Foo {
    virtual void OnTimerA( Data* pd );
};

void OnTimerACB( Data* pd, void* pvUserData ) {
    ( (Foo*) pvUserData )->OnTimerA( pd );
}

/* Inside some method of Foo; 1000 is a number of milliseconds to call me back in;
   second arg is a function pointer; third is a void* user data that is passed back
   to the C callback. */
SubscribeToTimerOld( 1000, OnTimerACB, this );
What I'm hoping for is a way to write:
SubscribeToTimerNew( 1000, OnTimerA );
or something similar, at least that disposes of the need to write that one-line C binding callback.
I have a feeling that SubscribeToTimerNew()'s argument is probably a std:function of some sort and instead of merely writing OnTimerA I'd have to write something with std::bind to get the this pointer in there.
Alternatively to bind, perhaps a lambda is the way to do it? This compiles, though I don't see how to extend it to let the event handler pass an argument to OnTimerA(). (My linker isn't currently working, so I don't know if it links or runs as desired.)
SubscribeTimer( 1000, [this](){this->OnTimerA();} );
To mention one alternative I've discarded: give Foo a superclass with a method called OnTimer() that will be called when the timer goes off. Now SubscribeTimer() only needs to take an elapsed time. I don't like this, as it doesn't cleanly allow for multiple timers to be registered. If it did, you could give them (say) integer timer IDs and implement OnTimer() as a switch, but this seems a lot more complicated than the C++98 solution.
Ultimately, of the (I assume) several approaches, are there any trade-offs (e.g., heap use) in addition to the most obvious question of how much typing is involved? (This is a high-performance application and I'd prefer to minimize or eliminate heap usage.)
C++11, C++14 and C++17 are quite different, especially when it comes to lambdas. And lambdas are a great way to create callbacks. For instance, see Why use std::bind over lambdas in C++14?
Using modern C++, you can use std::function as your callback type and then you can use any callable stuff as an actual callback. Quote from https://en.cppreference.com/w/cpp/utility/functional/function:
Class template std::function is a general-purpose polymorphic function
wrapper. Instances of std::function can store, copy, and invoke any
Callable target -- functions, lambda expressions, bind expressions, or
other function objects, as well as pointers to member functions and
pointers to data members.
Example:
#include <functional>
#include <iostream>

using Callback = std::function<void(int)>;

void subscribe(Callback callback, int duration) {
    callback(duration);
}

struct Foo {
    void operator()(int duration) {
        std::cout << __PRETTY_FUNCTION__ << ' ' << duration << '\n';
    }
};

struct Bar {
    virtual void myFunction(int duration) {
        std::cout << __PRETTY_FUNCTION__ << ' ' << duration << '\n';
    }
};

void freeFunction(int duration) {
    std::cout << __PRETTY_FUNCTION__ << ' ' << duration << '\n';
}

struct Zorg {
    static void staticFunction(int duration) {
        std::cout << __PRETTY_FUNCTION__ << ' ' << duration << '\n';
    }
};

int main() {
    Foo foo;
    subscribe(foo, 128);

    Bar bar;
    auto lambda = [&bar](int duration) {
        bar.myFunction(duration);
    };
    subscribe(lambda, 256);

    subscribe(freeFunction, 512);
    subscribe(Zorg::staticFunction, 1024);
}
Output:
void Foo::operator()(int) 128
virtual void Bar::myFunction(int) 256
void freeFunction(int) 512
static void Zorg::staticFunction(int) 1024
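Not from the original answer, but worth noting: if you'd rather not write the lambda by hand, binding a pointer-to-member with std::bind produces the same Callback and still dispatches virtually. A small sketch reusing the Callback/subscribe/Bar definitions from above:
#include <functional>
#include <iostream>

using Callback = std::function<void(int)>;

void subscribe(Callback callback, int duration) { callback(duration); }

struct Bar {
    virtual void myFunction(int duration) {
        std::cout << "Bar::myFunction " << duration << '\n';
    }
};

int main()
{
    Bar bar;
    // The bound member call goes through the vtable, so overrides are honoured.
    subscribe(std::bind(&Bar::myFunction, &bar, std::placeholders::_1), 256);
}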

Boost process continuously read output

I'm trying to read outputs/logs from different processes and display them in a GUI. The processes will be running for a long time and produce huge output. I'm planning to stream the output from those processes and display it according to my needs, all the while allowing my GUI application to take user input and perform other actions.
What I've done here is, from main thread launch two threads for each process. One for launching the process and another for reading output from the process.
This is the solution I've come up with thus far.
// Process Class
namespace bp = boost::process;

class MyProcess {
    boost::asio::io_service mService;          // member variable of the class
    bp::ipstream mStream;                      // member variable of the class
    std::thread mProcessThread, mReaderThread; // member variables of the class
public:
    void launch();
};

void
MyProcess::launch()
{
    mReaderThread = std::thread([&]() {
        std::string line;
        while (getline(mStream, line)) {
            std::cout << line << std::endl;
        }
    });

    mProcessThread = std::thread([&]() {
        auto c = bp::child("/path/of/executable", bp::std_out > mStream, mService);
        mService.run();
        mStream.pipe().close();
    });
}

// Main Gui class
class MyGui
{
    MyProcess process;
    void launchProcess();
};

void MyGui::launchProcess()
{
    process.launch();
    doSomethingElse();
}
The program is working as expected so far. But I'm not sure if this is the correct solution. Please let me know if there's any alternative/better/correct solution
Thanks,
Surya
The most striking conceptual issues I see are
Processes are asynchronous; there is no need to add a thread to run them.¹
You prematurely close the pipe:
mService.run();
mStream.pipe().close();
Run is not "blocking" in the sense that it will not wait for the child to exit. You could use wait to achieve that. Other than that, you can just remove the close() call.
Closing the pipe there means you will lose all or part of the output. You might not see any of the output at all if the child process takes a while before it produces its first data.
You are accessing the mStream from multiple threads without synchronization. This invokes Undefined Behaviour because it opens a Data Race.
In this case you can remove the immediate problem by removing the mStream.close() call mentioned before, but you must take care to start the reader-thread only after the child has been initialized.
Strictly speaking the same caution should be taken for std::cout.
You are passing the io_service reference, but it's not being used. Just dropping it seems like a good idea.
The destructor of MyProcess needs to detach or join the threads. To prevent Zombies, it needs to detach or reap the child pid too.
In combination with the lifetime of mStream detaching the reader thread is not really an option, as mStream is being used from the thread.
Let's get the first fixes in place, and after that I'll show some more simplifications that make sense in the scope of your sample.
First Fixes
I used a simple bash command to emulate a command generating 1000 lines of ping:
Live On Coliru
#include <boost/process.hpp>
#include <thread>
#include <iostream>

namespace bp = boost::process;

/////////////////////////
class MyProcess {
    bp::ipstream mStream;
    bp::child mChild;
    std::thread mReaderThread;

  public:
    ~MyProcess();
    void launch();
};

void MyProcess::launch() {
    mChild = bp::child("/bin/bash",
                       std::vector<std::string>{"-c", "yes ping | head -n 1000"},
                       bp::std_out > mStream);

    mReaderThread = std::thread([&]() {
        std::string line;
        while (getline(mStream, line)) {
            std::cout << line << std::endl;
        }
    });
}

MyProcess::~MyProcess() {
    if (mReaderThread.joinable()) mReaderThread.join();
    if (mChild.running()) mChild.wait();
}

/////////////////////////
class MyGui {
    MyProcess _process;

  public:
    void launchProcess();
};

void MyGui::launchProcess() {
    _process.launch();
    // doSomethingElse();
}

int main() {
    MyGui gui;
    gui.launchProcess();
}
Simplify!
In the current model, the thread doesn't pull its weight.
If you used io_service with asynchronous IO instead, you could even do away with the whole thread to begin with, by polling the service from inside your GUI event loop² (a rough sketch of that option follows after the simplified version below).
If you're going to have the thread anyway, and since child processes naturally execute asynchronously,³ you could simply do:
Live On Coliru
#include <boost/process.hpp>
#include <thread>
#include <iostream>

std::thread launch(std::string const& command, std::vector<std::string> args = {}) {
    namespace bp = boost::process;
    return std::thread([=] {
        bp::ipstream stream;
        bp::child c(command, args, bp::std_out > stream);

        std::string line;
        while (getline(stream, line)) {
            // TODO likely post to some kind of queue for processing
            std::cout << line << std::endl;
        }

        c.wait(); // reap PID
    });
}
The demo displays exactly the same output as earlier.
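For completeness, here is a rough sketch of the fully asynchronous option mentioned above (an async_pipe read loop driven by the io_context, no reader thread at all). This is my own illustration, assuming the GUI can run or poll the io_context; it is not part of the original answer:
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <functional>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main()
{
    boost::asio::io_context io;
    bp::async_pipe pipe(io);

    bp::child c("/bin/bash",
                std::vector<std::string>{"-c", "yes ping | head -n 5"},
                bp::std_out > pipe);

    boost::asio::streambuf buf;
    std::function<void()> read_line = [&] {
        boost::asio::async_read_until(pipe, buf, '\n',
            [&](boost::system::error_code ec, std::size_t) {
                if (ec) return; // EOF or error ends the read loop
                std::istream is(&buf);
                std::string line;
                std::getline(is, line);
                std::cout << line << std::endl;
                read_line(); // keep reading
            });
    };
    read_line();

    io.run(); // in a GUI you would poll/run this from the event loop instead
    c.wait(); // reap the child
}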
¹ In fact, adding threads is asking for trouble with fork
² or perhaps idle tick or similar idea. Qt has a ready-made integration (How to integrate Boost.Asio main loop in GUI framework like Qt4 or GTK)
³ on all platforms supported by Boost Process

std::string::assign vs std::string::operator=

I coded in Borland C++ ages ago, and now I'm trying to understand the "new" (to me) C++11. (I know we're in 2015 and there's C++14, but I'm working on a C++11 project.)
Now I have several ways to assign a value to a string.
#include <iostream>
#include <string>

int main ()
{
    std::string test1;
    std::string test2;

    test1 = "Hello World";
    test2.assign("Hello again");

    std::cout << test1 << std::endl << test2;
    return 0;
}
They both work. I learned from http://www.cplusplus.com/reference/string/string/assign/ that there are other ways to use assign. But for a simple string assignment, which one is better? I have to fill 100+ structs with 8 std::string members each, and I'm looking for the fastest mechanism (I don't care about memory, unless there's a big difference).
Both are equally fast, but = "..." is clearer.
If you really want fast though, use assign and specify the size:
test2.assign("Hello again", sizeof("Hello again") - 1); // don't copy the null terminator!
// or
test2.assign("Hello again", 11);
That way, only one allocation is needed. (You could also .reserve() enough memory beforehand to get the same effect.)
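A tiny sketch of the reserve() idea (the sizes here are just illustrative):
#include <string>

int main()
{
    std::string s;
    s.reserve(64);               // one allocation up front
    s.assign("Hello again", 11); // fits in the reserved capacity, no reallocation
}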
I tried benchmarking both ways.
#include <benchmark/benchmark.h>
#include <string>

static void string_assign_method(benchmark::State& state) {
    std::string str;
    std::string base = "123456789";
    // Code inside this loop is measured repeatedly
    for (auto _ : state) {
        str.assign(base);
    }
}
// Register the function as a benchmark
BENCHMARK(string_assign_method);

static void string_assign_operator(benchmark::State& state) {
    std::string str;
    std::string base = "123456789";
    // Code before the loop is not measured
    for (auto _ : state) {
        str = base;
    }
}
BENCHMARK(string_assign_operator);
Comparing the two graphically, both methods seem to be equally fast; if anything, the assignment operator gives slightly better results.
Use string::assign only if you need to assign a specific substring (position and length) of the base string.
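For illustration, the positional overload that last sentence refers to looks like this (values are just examples):
#include <cassert>
#include <string>

int main()
{
    std::string base = "123456789";
    std::string str;

    str = base;             // plain copy assignment
    str.assign(base, 3, 4); // copies "4567": 4 characters starting at index 3
    assert(str == "4567");
}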

std::condition_variable::wait_for exits immediately when given std::chrono::duration::max

I have a wrapper around std::queue using C++11 semantics to allow concurrent access. The std::queue is protected with a std::mutex. When an item is pushed to the queue, a std::condition_variable is notified with a call to notify_one.
There are two methods for popping an item from the queue. One method will block indefinitely until an item has been pushed on the queue, using std::condition_variable::wait(). The second will block for an amount of time given by a std::chrono::duration unit using std::condition_variable::wait_for():
template <typename T> template <typename Rep, typename Period>
void ConcurrentQueue<T>::Pop(T &item, std::chrono::duration<Rep, Period> waitTime)
{
    std::cv_status cvStatus = std::cv_status::no_timeout;
    std::unique_lock<std::mutex> lock(m_queueMutex);

    while (m_queue.empty() && (cvStatus == std::cv_status::no_timeout))
    {
        cvStatus = m_pushCondition.wait_for(lock, waitTime);
    }

    if (cvStatus == std::cv_status::no_timeout)
    {
        item = std::move(m_queue.front());
        m_queue.pop();
    }
}
When I call this method like this on an empty queue:
ConcurrentQueue<int> intQueue;
int value = 0;
std::chrono::seconds waitTime(12);
intQueue.Pop(value, waitTime);
Then 12 seconds later, the call to Pop() will exit. But if waitTime is instead set to std::chrono::seconds::max(), then the call to Pop() will exit immediately. The same occurs for milliseconds::max() and hours::max(). But, days::max() works as expected (doesn't exit immediately).
What causes seconds::max() to exit right away?
This is compiled with mingw64:
g++ --version
g++ (rev5, Built by MinGW-W64 project) 4.8.1
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
To begin with, the timed wait should likely be a wait_until(lock, std::chrono::steady_clock::now() + waitTime); rather than wait_for, because the loop as written restarts the full waitTime on every wakeup, so spurious wake-ups can make the total wait far exceed waitTime.
Fix that part of the code by using the predicated wait methods:
template <typename Rep, typename Period>
bool pop(std::chrono::duration<Rep, Period> waitTime, int& popped)
{
    std::unique_lock<std::mutex> lock(m_queueMutex);

    if (m_pushCondition.wait_for(lock, waitTime, [] { return !m_queue.empty(); }))
    {
        popped = m_queue.back();
        m_queue.pop_back();
        return true;
    } else
    {
        return false;
    }
}
On my implementation at least seconds::max() yields 0x7fffffffffffffff
§30.5.1 ad 26 states:
Effects: as if
return wait_until(lock, chrono::steady_clock::now() + rel_time);
Doing
auto time = steady_clock::now() + seconds::max();
std::cout << std::dec << duration_cast<seconds>(time.time_since_epoch()).count() << "\n";
On my system, prints
265521
Using date --date='@265521' --rfc-822 told me that that is Sun, 04 Jan 1970 02:45:21 +0100
There's a wrap-around bug going on for GCC and Clang; see below.
Tester
Live On Coliru
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <deque>
#include <chrono>
#include <iomanip>

std::mutex m_queueMutex;
std::condition_variable m_pushCondition;
std::deque<int> m_queue;

template <typename Rep, typename Period>
bool pop(std::chrono::duration<Rep, Period> waitTime, int& popped)
{
    std::unique_lock<std::mutex> lock(m_queueMutex);

    if (m_pushCondition.wait_for(lock, waitTime, [] { return !m_queue.empty(); }))
    {
        popped = m_queue.back();
        m_queue.pop_back();
        return true;
    } else
    {
        return false;
    }
}

int main()
{
    int data;
    using namespace std::chrono;

    pop(seconds(2), data);

    std::cout << std::hex << std::showbase << seconds::max().count() << "\n";

    auto time = steady_clock::now() + seconds::max();
    std::cout << std::dec << duration_cast<seconds>(time.time_since_epoch()).count() << "\n";

    pop(seconds::max(), data);
}
The reason for the problem is this nasty bit in the description for rel_time parameter:
Note that rel_time must be small enough not to overflow when added to std::chrono::steady_clock::now().
So when you do m_pushCondition.wait_for(lock, std::chrono::seconds::max()); the parameter overflows inside the function. In fact, if you enable the undefined behaviour sanitizer (e.g. the -fsanitize=undefined option for GCC and Clang) and run the app, you may see the following runtime warning:
/usr/include/c++/9.1.0/chrono:456:34: runtime error: signed integer overflow: 473954758945968 + 9223372036854775807 cannot be represented in type 'long int'
Worth noting though that for some reason I did not have this warning for the actual app I was working on, probably a sanitizer bug. Anyway.
So what can you do? First: do not try to work around this by simply using the wait_for() overload with a predicate, because you are going to make yourself a bad spinlock burning your CPU core. Second: subtracting max() - now() doesn't seem to work because it changes the type.
One way to work around it is to conditionally use condition_variable::wait() and condition_variable::wait_for().
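A minimal sketch of that conditional dispatch (the helper name is mine, purely illustrative): call wait() when the caller effectively asks for "forever", otherwise wait_for().
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>

template <typename Rep, typename Period>
std::cv_status wait_maybe_forever(std::condition_variable& cv,
                                  std::unique_lock<std::mutex>& lock,
                                  std::chrono::duration<Rep, Period> waitTime)
{
    if (waitTime == std::chrono::duration<Rep, Period>::max()) {
        cv.wait(lock);                  // no timeout arithmetic, so no overflow
        return std::cv_status::no_timeout;
    }
    return cv.wait_for(lock, waitTime); // safe for "reasonable" durations
}

int main()
{
    std::mutex m;
    std::condition_variable cv;
    std::unique_lock<std::mutex> lock(m);

    // Times out after one second instead of overflowing or waiting forever.
    if (wait_maybe_forever(cv, lock, std::chrono::seconds(1)) == std::cv_status::timeout)
        std::cout << "timed out as expected\n";
}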
Another may be to just declare a big timespan and use it, e.g.:
// This is a replacement to chrono::seconds::max(). The latter doesn't work with
// `wait_for` call because its `rel_time` parameter description has the following
// sentence: "Note that rel_time must be small enough not to overflow when added to
// std::chrono::steady_clock::now()".
const chrono::seconds many_hours = 99h;
// …[snip]…
m_pushCondition.wait_for(lock, many_hours);
// …[snip]…
You can probably tolerate a "spurious" wakeup once every 99 hours :)
