How does C++ store variables captured by a lambda that have gone out of scope? - c++11

If a function returns a lambda that captures and mutates a value declared in the scope of the function, where/how is that value stored in memory so the lambda may safely use it?
This example is from listing 6.7 in 'Functional Programming in C++' by Ivan Čukić. It's a utility memoization method that caches results for fast lookup later. The contrived usage computes and then retrieves a cached Fibonacci number:
#include <iostream>
#include <map>
#include <tuple>
template <typename Result, typename... Args>
auto make_memoized(Result (*f)(Args...)) {
    std::map<std::tuple<Args...>, Result> cache;
    return [f, cache](Args... args) mutable -> Result {
        const auto args_tuple = std::make_tuple(args...);
        const auto cached = cache.find(args_tuple);
        if (cached == cache.end()) {
            auto result = f(args...);
            cache[args_tuple] = result;
            return result;
        } else {
            return cached->second;
        }
    };
}

unsigned int fib(unsigned int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main() {
    auto fibmemo = make_memoized(fib);
    std::cout << "fib(15) = " << fibmemo(15) << '\n';
    std::cout << "fib(15) = " << fibmemo(15) << '\n';
}
My expectation was that cache would be destroyed when make_memoized returned, so a later call to the lambda would refer to a value that had gone out of scope. However, it works fine (g++ 9.1 on OSX).
I can't find a concrete example of this sort of usage on cppreference.com. Any help leading me to the right terminology to search for is greatly appreciated.

The capture list [f, cache] captures the variables by value: the lambda expression creates a closure object, and each by-value capture becomes a copy stored inside that object. The lifetime of those copies is therefore the same as the lifetime of the lambda itself. ("Closure object" and "lambda capture" are the terms to search for on cppreference.)
EDIT: If cache were captured by reference instead (e.g. [f, &cache]), the lifetimes of cache and the lambda would no longer be linked. The code would still compile, but using the returned lambda would not be safe, because cache has already been destroyed by the time it is called.
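For intuition about the by-value case, the closure the compiler generates for the lambda in make_memoized behaves roughly like the hand-written functor below (a sketch only; the actual closure type and its member names are unspecified). The by-value captures become data members, which is why the copy of cache lives on inside the returned object:

template <typename Result, typename... Args>
struct memoized_closure { // illustrative name, not what the compiler uses
    Result (*f)(Args...);                        // captured copy of f
    std::map<std::tuple<Args...>, Result> cache; // captured copy of cache

    // 'mutable' on the lambda corresponds to a non-const call operator,
    // which is what allows it to modify the member cache
    Result operator()(Args... args) {
        const auto args_tuple = std::make_tuple(args...);
        const auto cached = cache.find(args_tuple);
        if (cached == cache.end()) {
            auto result = f(args...);
            cache[args_tuple] = result;
            return result;
        }
        return cached->second;
    }
};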

Related

Boost asio post with shared ptr passed as argument with std::move

I am new to boost::asio. I need to pass a shared_ptr as an argument to a handler function, e.g.:
boost::asio::post(std::bind(&::function_x, std::move(some_shared_ptr)));
Is using std::move(some_shared_ptr) correct, or should I use the following instead?
boost::asio::post(std::bind(&::function_x, some_shared_ptr));
If both are correct, which one is advisable?
Thanks in advance
Regards
Shankar
std::bind stores its arguments by value.
So both are correct and essentially equivalent. Moving the argument into the bind is potentially more efficient if some_shared_ptr is not going to be used after the bind.
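As a minimal sketch of the difference (with a stand-in function_x, since the real one isn't shown), both forms leave the handler owning its own shared_ptr; the moved version merely skips one atomic reference-count increment/decrement:

#include <functional>
#include <iostream>
#include <memory>

void function_x(std::shared_ptr<int> p) { std::cout << *p << '\n'; }

int main() {
    auto some_shared_ptr = std::make_shared<int>(42);

    auto h1 = std::bind(&function_x, some_shared_ptr);            // copy: use_count bumped
    auto h2 = std::bind(&function_x, std::move(some_shared_ptr)); // move: no refcount traffic

    h1(); // prints 42
    h2(); // prints 42
    // some_shared_ptr is now empty; don't use it after the move
}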
Warning: Advanced Use Cases
(just skip this if you want)
Not what you asked: what if function_x took rvalue-reference arguments?
Glad you asked. You can't: std::bind always passes its stored arguments as lvalues. However, the bound function can still take its parameter by lvalue reference and simply move from it, because:
std::move doesn't move
An rvalue reference only marks an argument as potentially moved-from, enabling certain compiler optimizations and diagnostics; std::move itself is just a cast.
So, as long as you know your bound function is executed only once (!!), it is safe to move from lvalue reference parameters.
In the case of shared pointers there's actually a bit more leeway, because moving from a shared_ptr doesn't move the pointed-to object at all; it only transfers ownership of it.
So, a little exercise demonstrating it all:
Live On Coliru
#include <boost/asio.hpp>
#include <cassert>
#include <iostream>
#include <memory>

static void foo(std::shared_ptr<int>& move_me) {
    if (!move_me) {
        std::cout << "already moved!\n";
    } else {
        std::cout << "argument: " << *std::move(move_me) << "\n";
        move_me.reset();
    }
}

int main() {
    std::shared_ptr<int> arg = std::make_shared<int>(42);
    std::weak_ptr<int> observer = arg;
    assert(observer.use_count() == 1);

    auto f = std::bind(foo, std::move(arg));
    assert(!arg);                      // moved into the bind object
    assert(observer.use_count() == 1); // so still 1 usage

    {
        boost::asio::io_context ctx;
        post(ctx, f); // posts a copy of f; the copy is destroyed after running
        ctx.run();
    }
    assert(observer.use_count() == 1); // still 1 usage

    f(); // still has the shared arg
    // but now the last copy was moved from, so it's gone
    assert(observer.use_count() == 0);

    f(); // already moved!
}
Prints
argument: 42
argument: 42
already moved!
Why Bother?
Why would you care about the above? Well, since in Asio you have a lot of handlers that are guaranteed to execute precisely ONCE, you can sometimes avoid the overhead of shared pointers (the synchronization, the allocation of the control block, the type erasure of the deleter).
That is, you can use move-only handlers using std::unique_ptr<>:
Live On Coliru
#include <boost/asio.hpp>
#include <cassert>
#include <iostream>
#include <memory>

static void foo(std::unique_ptr<int>& move_me) {
    if (!move_me) {
        std::cout << "already moved!\n";
    } else {
        std::cout << "argument: " << *std::move(move_me) << "\n";
        move_me.reset();
    }
}

int main() {
    auto arg = std::make_unique<int>(42);

    auto f = std::bind(foo, std::move(arg)); // this handler is now move-only
    assert(!arg);                            // moved

    {
        boost::asio::io_context ctx;
        post(ctx,
             std::move(f)); // move-only, so move the entire bind (including arg)
        ctx.run();
    }

    f(); // the bound unique_ptr was moved away with f, so: "already moved!"
}
Prints
argument: 42
already moved!
This is going to help a lot in code that uses a lot of composed operations: you can now bind the state of the operation into the handler with zero overhead, even if it's bigger and dynamically allocated.
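For illustration, here is a minimal sketch of that idea using a C++14 init-capture instead of std::bind, so the uniquely-owned state is moved straight into the handler (the string stands in for whatever per-operation state you have):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>

int main() {
    boost::asio::io_context ctx;

    // Per-operation state, dynamically allocated but uniquely owned
    auto state = std::make_unique<std::string>("composed operation state");

    // Moving the unique_ptr into the closure makes the handler move-only
    post(ctx, [state = std::move(state)] {
        std::cout << *state << '\n';
    });

    ctx.run();
}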

std::string::assign vs std::string::operator=

I coded in Borland C++ ages ago, and now I'm trying to understand the "new" (to me) C++11. (I know we're in 2015 and there's already C++14, but I'm working on a C++11 project.)
Now I have several ways to assign a value to a string.
#include <iostream>
#include <string>

int main()
{
    std::string test1;
    std::string test2;

    test1 = "Hello World";
    test2.assign("Hello again");

    std::cout << test1 << std::endl << test2;
    return 0;
}
They both work. I learned from http://www.cplusplus.com/reference/string/string/assign/ that there are other ways to use assign. But for simple string assignment, which one is better? I have to fill 100+ structs with 8 std::string members each, and I'm looking for the fastest mechanism (I don't care about memory, unless there's a big difference).
Both are equally fast, but = "..." is clearer.
If you really want fast though, use assign and specify the size:
test2.assign("Hello again", sizeof("Hello again") - 1); // don't copy the null terminator!
// or
test2.assign("Hello again", 11);
That way the length is known up front, so the implementation doesn't have to scan the literal for its terminator and can allocate exactly the right size in one go. (You could also .reserve() enough memory beforehand to get a similar effect.)
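For completeness, a small sketch of the .reserve() variant mentioned above; note that very short strings may not allocate at all because of the small-string optimization:

std::string test3;
test3.reserve(64); // at most one allocation, made up front
test3.assign("a value long enough not to fit in the small-string buffer");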
I tried benchmarking both ways (using Google Benchmark):
#include <benchmark/benchmark.h>
#include <string>

static void string_assign_method(benchmark::State& state) {
    std::string str;
    std::string base = "123456789";
    // Code inside this loop is measured repeatedly
    for (auto _ : state) {
        str.assign(base); // assign the whole of base, comparable to str = base
    }
}
// Register the function as a benchmark
BENCHMARK(string_assign_method);

static void string_assign_operator(benchmark::State& state) {
    std::string str;
    std::string base = "123456789";
    // Code before the loop is not measured
    for (auto _ : state) {
        str = base;
    }
}
BENCHMARK(string_assign_operator);

BENCHMARK_MAIN();
The graphical comparison (chart not reproduced here) shows both methods to be roughly equally fast, with the assignment operator giving slightly better results.
Use std::string::assign only when you need one of its extra overloads, for example to assign a substring starting at a specific position of the base string.
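For example, the (str, pos, count) overload that the previous sentence refers to (a minimal sketch):

#include <cassert>
#include <string>

int main() {
    std::string base = "123456789";
    std::string str;

    str = base;             // plain assignment copies the whole string
    str.assign(base, 3, 4); // assign(str, pos, count) copies only "4567"
    assert(str == "4567");
}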

Run a function when number of references decrease in shared_ptr

I am developing a cache and I need to know when an object expired.
Is it possible to run a function when the reference counter of a shared_ptr decreases?
std::shared_ptr<MyClass> p1 = std::make_shared<MyClass>();
std::shared_ptr<MyClass> p2 = p1; // p1.use_count() == 2
p2.reset();                       // [ run function ] p1.use_count() == 1
You can't have a function called every time the reference count decreases, but you can have one called when it hits zero. You do this by passing a "custom deleter" to the shared_ptr constructor (you can't use the make_shared utility for this); the deleter is a callable object that is passed the raw pointer and is responsible for deleting the shared object.
Example:
#include <iostream>
#include <memory>

using namespace std;

void deleteInt(int* i)
{
    std::cout << "Deleting " << *i << std::endl;
    delete i;
}

int main() {
    std::shared_ptr<int> ptr(new int(3), &deleteInt); // refcount now 1
    auto ptr2 = ptr;                                  // refcount now 2
    ptr.reset();                                      // refcount now 1
    ptr2.reset();                                     // refcount now 0, deleter called
    return 0;
}
You can specify a deleter functor when creating the shared_ptr. The following article shows an example use of a deleter:
http://en.cppreference.com/w/cpp/memory/shared_ptr/shared_ptr
Not using a vanilla std::shared_ptr, but if you only require customized behaviour when calling reset() (with no arguments), you can easily create a custom adapter:
template <typename T>
struct my_ptr : public std::shared_ptr<T> {
    using std::shared_ptr<T>::shared_ptr; // inherit the constructors

    void reset() {
        std::shared_ptr<T>::reset(); // Release the managed object.
        /* Run custom function */
    }
};
And use it like this:
my_ptr<int> p = std::make_shared<int>(5);
std::cout << *p << std::endl; // Works as usual.
p.reset(); // Customized behaviour.
Edit
This answer is meant to suggest a solution to an issue that I don't think the other answers address, namely executing custom behaviour every time the refcount is decreased by a call to reset().
If the issue is simply to make a call upon object release, then use a custom deleter functor as suggested in the answers by #Sneftel and #fjardon.

How to use a C++11 lambda asynchronously when capturing by reference

Can somebody explain the behavior of the following code?
When I explicitly convert my lambda to a std::function, the lambda correctly captures my variable n.
When it is implicitly converted to a std::function (via a temporary), the capture fails.
I am using g++-4.9 (Ubuntu 4.9-20140406-1ubuntu1) 4.9.0 20140405 (experimental) [trunk revision 209157]
#include <chrono>
#include <functional>
#include <iostream>
#include <memory>
#include <thread>

std::shared_ptr<std::thread> call(const std::function<void()>& functor)
{
    // Execute our functor asynchronously
    return std::make_shared<std::thread>([&functor]
    {
        // Make sure all temporaries are deallocated
        std::this_thread::sleep_for(std::chrono::seconds(1));
        // Execute our functor
        functor();
    });
}

int main()
{
    int n{};
    std::cout << "in main " << &n << std::endl;
    // -> in main 0x7fffd4e1a96c

    auto lambda = [&n]
    {
        std::cout << "in lambda " << &n << std::endl;
    };

    // Here we do an explicit conversion to std::function
    std::cout << "explicit conversion" << std::endl;
    auto function = std::function<void()>{ lambda };
    auto pThreadFunction = call(function);
    pThreadFunction->join();
    // -> in lambda 0x7fffd4e1a96c

    // Here we use an implicit conversion to std::function
    std::cout << "implicit conversion" << std::endl;
    auto pThreadLambda = call(lambda);
    pThreadLambda->join();
    // -> in lambda 0

    return 0;
}
The lifetime of a temporary constructed for binding to a const reference function parameter ends with the full-expression containing that function call, so by the time the thread body runs it is using a dangling reference.
You should only capture variables into a thread function by reference if you can guarantee that the lifetime of the variable contains the lifetime of the thread, as you have done in the case where function is a local variable in main.
One alternative would be to call join within the full-expression that constructs the temporary:
call(lambda)->join();
Another more general solution would be to capture functor by value in your thread function.
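A minimal sketch of that last suggestion, changing only the capture inside call() (same headers as the question's example) so the thread owns its own copy of the std::function:

std::shared_ptr<std::thread> call(const std::function<void()>& functor)
{
    // Copy the functor into the closure; the thread no longer depends on
    // the lifetime of the caller's argument (or of any temporary)
    return std::make_shared<std::thread>([functor]
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        functor();
    });
}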

In C++, how to iterate array in reverse using for_each?

In C++11, using lambda/for_each, how do we iterate an array from end?
I tried the following, but both result in an infinite loop:
for_each (end(A), begin(A), [](int i) {
....
});
for_each (A.rend(), A.rbegin(), [](int i) {
...
});
Any idea? Thanks.
You have the two iterators the wrong way round: flip your rbegin and rend.
for_each (A.rbegin(), A.rend(), [](int i) {
...
});
Incrementing a reverse iterator moves it towards the beginning of the container.
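A small illustration of that statement (assuming A is something like a std::vector<int>):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> A{1, 2, 3, 4, 5};

    auto rit = A.rbegin();     // refers to the last element
    std::cout << *rit << '\n'; // 5
    ++rit;                     // incrementing moves towards the front
    std::cout << *rit << '\n'; // 4
}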
std::for_each( A.rbegin(), A.rend(), [](int i) { /*code*/ } ); is the simple solution.
I instead have written backwards which takes a sequence, extracts the begin and end iterator from it using the free begin and end functions (with std::begin and std::end using declarations nearby -- full ADL), creates reverse iterators around them, then returns a sequence with those two reverse iterators.
It is sort of neat, because you get this syntax:
for( int i : backwards(A) ) {
// code
}
which I find easier to read than std::for_each or manual for loops.
But I am a bit nuts.
Here is a minimal backwards. A full-on solution handles ADL and a few corner cases better.
template<class It, class C>
struct range_for_t {
    It b, e;
    C c; // keeps the container alive if it was an rvalue
    It begin() const { return b; }
    It end() const { return e; }
};

template<class It, class C>
range_for_t<It, C> range_for(It b, It e, C&& c) {
    return {std::move(b), std::move(e), std::forward<C>(c)};
}

template<class It>
range_for_t<It, int> range_for(It b, It e) {
    return {std::move(b), std::move(e)};
}
It is a simple range type, intended only for use with range-based for; it could be augmented with perfect forwarding.
C is the container, passed along because its lifetime may need extending: if an rvalue is passed, the container is moved into the range object; otherwise only a reference is stored. It is not otherwise used.
Next part is easy:
template<class It>
auto reverse_it(It it) {
    return std::reverse_iterator<It>(std::move(it));
}

template<class C>
auto backwards(C&& c) {
    using std::begin; using std::end;
    auto b = begin(c), e = end(c);
    return range_for(
        reverse_it(e), reverse_it(b),
        std::forward<C>(c)
    );
}
That is untested but should work.
One important test is ensuring that it works when you feed it an rvalue container, like:
for (auto x : backwards(make_vec()))
That is what the machinery around storing C is about. It also assumes that a container's iterators remain valid after the container is moved.
Boost offers an adaptor named reversed that can be used with the C++11 range-based for loop in the style Yakk describes in his answer:
for (int i : reverse(A))
{
    // code
}
or
for (int i : A | reversed)
{
    // code
}
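The snippets above assume the appropriate header and using-declarations are in place; a self-contained version would look roughly like this (assuming Boost.Range is available):

#include <boost/range/adaptor/reversed.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> A{1, 2, 3, 4, 5};

    for (int i : boost::adaptors::reverse(A)) // or: A | boost::adaptors::reversed
        std::cout << i << ' ';                // prints 5 4 3 2 1
}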
Since C++20, there is a convenient adaptor for this:
#include <ranges>
...
for (auto& element: container | std::views::reverse)
For example:
#include <iostream>
#include <ranges>
#include <vector>

int main()
{
    std::vector<int> container {1, 2, 3, 4, 5};

    for (const auto& elem : container | std::views::reverse)
    {
        std::cout << elem << ' ';
    }
}
// Prints 5 4 3 2 1
Try it here:
https://coliru.stacked-crooked.com/a/e320e5eec431cc87
