C++11: will a lambda allocate space each time it is declared?

Just want to clarify: will the lambda allocate space each time, and free it when the block ends?
For example:
void func() {
    auto lambda = [] (args) { expressions; };
    static auto s_lambda = [] (args) { expressions; };
}
Here, will lambda be allocated in RAM each time I call func(), while s_lambda will not?
In that case, would the performance of lambda be slightly worse than s_lambda if they had a really huge function body?

A lambda object will take up memory, but not the way you're thinking.
auto lambda = [] (args) { expressions; };
gets translated by the compiler into something like (very much simplified)
struct __lambda {
    auto operator()(args) { expressions; }
};
__lambda lambda;
Because of how C++ works, every object has a strictly positive size, and sizeof(lambda) will be at least one. Depending on what your lambda captures, those captures may be stored as fields in the compiler-generated class as well, and in that case, the lambda will take up more memory to hold those captures.
But the actual body of its internal operator() is compiled once; it is not created at run time again and again. And if your lambda does not capture anything, the storage of at least one byte is likely to get optimised away.
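A minimal sketch illustrating this (the exact sizes are implementation-defined, but the relationships hold on mainstream compilers):

#include <cstdio>

int main() {
    int x = 0, y = 0;
    auto no_capture  = [] (int a) { return a + 1; };
    auto one_capture = [x] (int a) { return a + x; };
    auto two_capture = [x, y] (int a) { return a + x + y; };

    // A captureless lambda is an empty class: its size is at least 1.
    std::printf("%zu\n", sizeof(no_capture));   // typically 1
    // Captures are stored as fields of the compiler-generated class.
    std::printf("%zu\n", sizeof(one_capture));  // typically sizeof(int)
    std::printf("%zu\n", sizeof(two_capture));  // typically 2 * sizeof(int)
}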

Related

Removing a std::function&lt;void()&gt; from a vector in C++

I'm building a publish-subscribe class (called SystemInterface), which is responsible for receiving updates from its instances and publishing them to subscribers.
Adding a subscriber callback function is trivial and has no issues, but removing it yields an error, because std::function&lt;void()&gt; is not comparable in C++.
std::vector<std::function<void()>> subs;

void subscribe(std::function<void()> f)
{
    subs.push_back(f);
}

void unsubscribe(std::function<void()> f)
{
    std::remove(subs.begin(), subs.end(), f); // Error
}
I've narrowed it down to five solutions to this error:
1. Register the function using a weak_ptr, where the subscriber must keep the returned shared_ptr alive. Solution example at this link.
2. Instead of registering in a vector, map the callback function by a custom key, unique per callback function. Solution example at this link.
3. Use a vector of function pointers. Example
4. Make the callback function comparable by utilizing its address.
5. Use an interface class (parent class) to call a virtual function. In my design, all intended classes inherit a parent class called ServiceCore, so instead of registering a callback function, just register a ServiceCore reference in the vector. This works given that the SystemInterface class has a per-instance ID field (which is managed by ServiceCore, and supplied to SystemInterface by constructing a ServiceCore child instance).
From my perspective, the first solution is neat and would work, but it requires handling on the subscriber side, which is something I'd rather avoid.
The second solution would make my implementation more complex; it currently looks like:
using namespace std;
enum INFO_SUB_IMPORTANCE : uint8_t
{
    INFO_SUB_PRIMARY,       // Only gets the important updates.
    INFO_SUB_COMPLEMENTARY, // Gets more.
    INFO_SUB_ALL            // Gets all updates.
};
using CBF = function<void(string,string)>;
using INFO_SUBTREE = map<INFO_SUB_IMPORTANCE, vector<CBF>>;
using REQINF_SUBS = map<string, INFO_SUBTREE>; // Keyed by an iterator; explaining it is beyond the scope of this question.
using INFSRC_SUBS = map<string, INFO_SUBTREE>;
using WILD_SUBS = INFO_SUBTREE;
REQINF_SUBS infoSubrs;
INFSRC_SUBS sourceSubrs;
WILD_SUBS wildSubrs;
void subscribeInfo(string info, INFO_SUB_IMPORTANCE imp, CBF f) {
    infoSubrs[info][imp].push_back(f);
}

void subscribeSource(string source, INFO_SUB_IMPORTANCE imp, CBF f) {
    sourceSubrs[source][imp].push_back(f);
}

void subscribeWild(INFO_SUB_IMPORTANCE imp, CBF f) {
    wildSubrs[imp].push_back(f);
}
The second solution would require INFO_SUBTREE to be an extended map, but can be keyed by an ID:
using KEY_T = uint32_t; // or string...
using INFO_SUBTREE = map<INFO_SUB_IMPORTANCE, map<KEY_T,CBF>>;
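For illustration, a minimal sketch of the key-based variant (the flat map and the subscribe/unsubscribe names are simplifications of the structures above):

#include <cstdint>
#include <functional>
#include <map>
#include <string>

using CBF   = std::function<void(std::string, std::string)>;
using KEY_T = std::uint32_t;

std::map<KEY_T, CBF> subs;
KEY_T nextKey = 0;

// Returns the key the subscriber must keep in order to unsubscribe later.
KEY_T subscribe(CBF f) {
    KEY_T key = nextKey++;
    subs[key] = std::move(f);
    return key;
}

void unsubscribe(KEY_T key) {
    subs.erase(key);  // erase by key; no comparison of std::function needed
}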
For the third solution, I'm not aware of the limitations imposed by using function pointers, nor of the consequences of the fourth solution.
The fifth solution would eliminate the purpose of dealing with CBFs, but it would be more complex on the subscriber side: a subscriber is required to override the virtual function and so receives all updates in one place, which in turn requires filtering on the message ID and directing the payload to the intended routines using multiple if/else blocks, which grow as subscriptions grow.
What I'm looking for is an advice for the best available option.
Regarding your proposed solutions:
1. That would work. It can be made easy for the caller: have subscribe() create the shared_ptr and corresponding weak_ptr objects, and let it return the shared_ptr.
2. Then the caller must not lose the key. In a way this is similar to the above.
3. This of course is less generic, and you can then no longer have (the equivalent of) captures.
4. You can't: there is no way to get the address of the function stored inside a std::function. You could take &f inside subscribe(), but that would only give you the address of the local variable f, which goes out of scope as soon as you return.
5. That works, and is in a way similar to 1 and 2, although now the "key" is provided by the caller.
Options 1, 2 and 5 are similar in that there is some other data stored in subs that refers to the actual std::function: either a std::shared_ptr, a key or a pointer to a base class. I'll present option 6 here, which is kind of similar in spirit but avoids storing any extra data:
Store a std::function<void()> directly, and return the index in the vector where it was stored. When removing an item, don't std::remove() it, but just set it to nullptr. The next time subscribe() is called, it checks whether there is an empty element in the vector and reuses it:
std::vector<std::function<void()>> subs;

std::size_t subscribe(std::function<void()> f) {
    if (auto it = std::find(subs.begin(), subs.end(), nullptr); it != subs.end()) {
        *it = f;
        return std::distance(subs.begin(), it);
    } else {
        subs.push_back(f);
        return subs.size() - 1;
    }
}

void unsubscribe(std::size_t index) {
    subs[index] = nullptr;
}
The code that actually calls the functions stored in subs must now of course first check against nullptr. The above works because nullptr is treated as the "empty" function, and there is an operator==() overload that can compare a std::function against nullptr, thus making std::find() work.
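For example, the dispatch loop (the publish() name is invented here) might look like:

void publish() {
    for (auto& f : subs) {
        if (f) f();  // skip unsubscribed (empty) slots
    }
}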
One drawback of option 6 as shown above is that a std::size_t is a rather generic type. To make it safer, you might wrap it in a class SubscriptionHandle or something like that.
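A minimal sketch of such a wrapper, assuming the subscribe()/unsubscribe() functions above (the subscribe_checked/unsubscribe_checked names are invented here):

class SubscriptionHandle {
public:
    explicit SubscriptionHandle(std::size_t index) : index_(index) {}
    std::size_t index() const { return index_; }
private:
    std::size_t index_;  // only obtainable from subscribe_checked()
};

SubscriptionHandle subscribe_checked(std::function<void()> f) {
    return SubscriptionHandle{subscribe(std::move(f))};
}

void unsubscribe_checked(SubscriptionHandle h) {
    unsubscribe(h.index());
}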
As for the best solution: option 1 is quite heavy-weight. Options 2 and 5 are very reasonable, but 6 is, I think, the most efficient.

Go, encapsulation and performance

Code I'm exploring:
type Stack struct {
    length int
    values []int
}

func (s *Stack) Push(value int) {
    // ...
}

func (s *Stack) Pop() int {
    // ...
}

func (s *Stack) Length() int {
    return s.length
}
Methods Push and Pop change the length field in the Stack struct. I wanted to hide this field from other files to prevent code like stack.length = ... (manual length changes), but I still needed the ability to read it, so I added a getter method, Length.
And my question is:
Shouldn't stack.Length() be slower than stack.length, because it is a function call? I have learnt some assembler, and I know how many operations a program needs in order to call a function. Have I understood right: by adding the getter method stack.Length() I protected users of my library from bad usage, but the cost of it is the program's performance? This actually concerns not only Go.
Shouldn't stack.Length() be slower than stack.length, because it is a function call?
Objection! Assumes facts not in evidence.
Specifically:
Why do you think it is a function call? It looks like one, but actual Go compilers will often expand the code in line.
Why do you think a function call is slower than inline code? When measuring actual programs on actual computers, sometimes function calls are faster than inline code. It turns out the crucial part is usually whether the instructions being executed, and their operands, are already in the appropriate CPU caches. Sometimes, expanding functions inline makes the program run more slowly.
The compiler should do the inline expansion unless it makes the program run more slowly. How good the compiler is at pre- or post-detecting such slowdowns, if present, is a separate issue. In this particular case, given the function definition, the compiler is almost certain to just expand the function in line, as accessing stack.length will likely be one instruction, and calling a function will be one instruction, and deciding the tradeoff here will be easy.

lock-free synchronization, fences and memory order (store operation with acquire semantics)

I am migrating a project that ran on bare metal to Linux, and need to eliminate some {disable,enable}_scheduler calls. :)
So I need a lock-free sync solution for a single-writer, multiple-readers scenario, where the writer thread cannot be blocked. I came up with the following solution, which does not fit the usual acquire-release ordering:
class RWSync {
    std::atomic<int> version;  // incremented after every modification
    std::atomic_bool invalid;  // true during write
public:
    RWSync() : version(0), invalid(false) {}

    template<typename F> void sync(F lambda) {
        int currentVersion;
        do {
            do { // wait until the object is valid
                currentVersion = version.load(std::memory_order_acquire);
            } while (invalid.load(std::memory_order_acquire));
            lambda();
            std::atomic_thread_fence(std::memory_order_seq_cst);
            // check if something changed
        } while (version.load(std::memory_order_acquire) != currentVersion
                 || invalid.load(std::memory_order_acquire));
    }

    void beginWrite() {
        invalid.store(true, std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_seq_cst);
    }

    void endWrite() {
        std::atomic_thread_fence(std::memory_order_seq_cst);
        version.fetch_add(1, std::memory_order_release);
        invalid.store(false, std::memory_order_release);
    }
};
I hope the intent is clear: I wrap the modification of a (non-atomic) payload between beginWrite/endWrite, and read the payload only inside the lambda function passed to sync().
As you can see, here I have an atomic store in beginWrite() such that no writes after the store operation can be reordered before it. I did not find suitable examples, and I am not experienced in this field at all, so I'd like some confirmation that it is OK (verification through testing is not easy either).
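For context, the intended usage looks roughly like this (the Payload type and function names are illustrative, not part of the original code):

struct Payload { int a; int b; };

RWSync sync_obj;
Payload payload;  // non-atomic, guarded by sync_obj

void writer() {
    sync_obj.beginWrite();
    payload.a = 1;  // modify the payload only between beginWrite/endWrite
    payload.b = 2;
    sync_obj.endWrite();
}

void reader() {
    Payload copy;
    sync_obj.sync([&] { copy = payload; });  // read only inside the lambda
    // use copy...
}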
1. Is this code race-free, and does it work as I expect?
2. If I use std::memory_order_seq_cst in every atomic operation, can I omit the fences? (Even if yes, I guess the performance would be worse.)
3. Can I drop the fence in endWrite()?
4. Can I use memory_order_acq_rel in the fences? I don't really get the difference -- the single total order concept is not clear to me.
5. Is there any simplification / optimization opportunity?
+1: I'll happily accept any better idea for the name of this class :)
The code is basically correct.
Instead of having two atomic variables (version and invalid) you may use a single version variable with the semantic "odd values are invalid". This is known as the "sequential lock" (seqlock) mechanism.
Reducing the number of atomic variables simplifies things a lot:
class RWSync {
    // Incremented before and after every modification.
    // Odd values mean that the object is in an invalid state.
    std::atomic<int> version;
public:
    RWSync() : version(0) {}

    template<typename F> void sync(F lambda) {
        int currentVersion;
        do {
            currentVersion = version.load(std::memory_order_seq_cst);
            // This may reduce calls to lambda(), nothing more
            if (currentVersion & 1) continue;
            lambda();
            // Repeat while the object was in an invalid state or something changed.
        } while ((currentVersion & 1)
                 || version.load(std::memory_order_seq_cst) != currentVersion);
    }

    void beginWrite() {
        // The writer may read version with relaxed memory order
        int currentVersion = version.load(std::memory_order_relaxed);
        // Invalidation requires sequential order
        version.store(currentVersion + 1, std::memory_order_seq_cst);
    }

    void endWrite() {
        // The writer may read version with relaxed memory order
        int currentVersion = version.load(std::memory_order_relaxed);
        // Release order is sufficient for marking the object as valid
        version.store(currentVersion + 1, std::memory_order_release);
    }
};
Note the difference in memory orders in beginWrite() and endWrite():
endWrite() makes sure that all of the object's previous modifications have been completed. Release memory order is sufficient for that.
beginWrite() makes sure that a reader will detect that the object is in an invalid state before any further modification of the object is started. Such a guarantee requires seq_cst memory order. Because of that, the reader uses seq_cst memory order too.
As for fences, it is better to incorporate them into the preceding/following atomic operation: the compiler knows how to make the result fast.
Explanations of some modifications to the original code:
1) An atomic modification like fetch_add() is intended for cases when concurrent modifications (like another fetch_add()) are possible. For correctness, such modifications use memory locking or other very time-costly architecture-specific mechanisms.
An atomic assignment (store()) does not use memory locking, so it is cheaper than fetch_add(). You may use such an assignment because concurrent modifications are not possible in your case (the reader does not modify version).
2) Unlike release-acquire semantics, which differentiate load and store operations, sequential consistency (memory_order_seq_cst) is applicable to every atomic access, and provides a total order between these accesses.
A caveat on the accepted answer: the bit tests must be written as "currentVersion & 1" (odd means invalid); "currentVersion | 1" would be always non-zero and therefore wrong. A subtler mistake is that the reader thread can enter lambda(), and after that the writer thread can run beginWrite() and write a value to the non-atomic variable. In this situation the write to the payload and the read of the payload have no happens-before relationship, and concurrent access to a non-atomic variable without a happens-before relationship is a data race. Note that the single total order of memory_order_seq_cst does not imply a happens-before relationship; they are consistent, but two different things.

Equivalent of enumerators in C++11?

In C#, you can define a custom enumeration very trivially, eg:
public IEnumerable<Foo> GetNestedFoos()
{
    foreach (var child in _SomeCollection)
    {
        foreach (var foo in child.FooCollection)
        {
            yield return foo;
        }
        foreach (var bar in child.BarCollection)
        {
            foreach (var foo in bar.MoreFoos)
            {
                yield return foo;
            }
        }
    }
    foreach (var baz in _SomeOtherCollection)
    {
        foreach (var foo in baz.GetNestedFoos())
        {
            yield return foo;
        }
    }
}
(This can be simplified using LINQ and better encapsulation but that's not the point of the question.)
In C++11, you can do similar enumerations but AFAIK it requires a visitor pattern instead:
template<typename Action>
void VisitAllFoos(const Action& action)
{
    for (auto& child : m_SomeCollection)
    {
        for (auto& foo : child.FooCollection)
        {
            action(foo);
        }
        for (auto& bar : child.BarCollection)
        {
            for (auto& foo : bar.MoreFoos)
            {
                action(foo);
            }
        }
    }
    for (auto& baz : m_SomeOtherCollection)
    {
        baz.VisitAllFoos(action);
    }
}
Is there a way to do something more like the first, where the function returns a range that can be iterated externally rather than calling a visitor internally?
(And I don't mean by constructing a std::vector<Foo> and returning it -- it should be an in-place enumeration.)
I am aware of the Boost.Range library, which I suspect would be involved in the solution, but I'm not particularly familiar with it.
I'm also aware that it's possible to define custom iterators to do this sort of thing (which I also suspect might be involved in the answer) but I'm looking for something that's easy to write, ideally no more complicated than the examples shown here, and composable (like with _SomeOtherCollection).
I would prefer something that does not require the caller to use lambdas or other functors (since that just makes it a visitor again), although I don't mind using lambdas internally if needed (but would still prefer to avoid them there too).
If I'm understanding your question correctly, you want to perform some action over all elements of a collection.
C++ has an extensive set of iterator operations, defined in the iterator header. Most collection structures, including the std::vector that you reference, have .begin and .end methods which take no arguments and return iterators to the beginning and the end of the structure. These iterators have some operations that can be performed on them manually, but their primary use comes in the form of the algorithm header, which defines several very useful iteration functions.
In your specific case, I believe you want the for_each function, which takes a range (as a beginning to end iterator) and a function to apply. So if you had a function (or function object) called action and you wanted to apply it to a vector called data, the following code would be correct (assuming all necessary headers are included appropriately):
std::for_each(data.begin(), data.end(), action);
Note that for_each is just one of many functions provided by the algorithm header. It also provides functions to search a collection, copy a set of data, sort a list, find a minimum/maximum, and much more, all generalized to work over any structure that has an iterator. And if even these aren't enough, you can write your own by reading up on the operations supported on iterators. Simply define a template function that takes iterators of varying types and document what kind of iterator you want.
template <typename BidirectionalIterator>
void function(BidirectionalIterator begin, BidirectionalIterator end) {
    // Do something
}
One final note is that all of the operations mentioned so far also operate correctly on arrays, provided you know the size. Instead of writing .begin and .end, you write + 0 and + n, where n is the size of the array. The trivial zero addition is often necessary in order to decay the type of the array into a pointer to make it a valid iterator, but array pointers are indeed random access iterators just like any other container iterator.
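For instance, a small sketch of the array case:

#include <algorithm>
#include <cstdio>

int main() {
    int data[4] = {3, 1, 4, 1};
    // data + 0 decays the array to a pointer; pointers are random access iterators.
    std::for_each(data + 0, data + 4, [](int x) { std::printf("%d\n", x); });
}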
What you can do is write your own adapter function and call it with different ranges of elements of the same type.
This is an untested solution that will probably need some tweaking to compile, but it will give you an idea. It uses variadic templates to move from one collection to the next.
// Base case: no more ranges to visit.
void visitAllFoos() {}

template<typename Iterator, typename... Args>
void visitAllFoos(std::pair<Iterator, Iterator> collection, Args&&... args)
{
    std::for_each(collection.first, collection.second,
        [](typename std::iterator_traits<Iterator>::reference foo) { /* apply action */ });
    visitAllFoos(std::forward<Args>(args)...);
}

// You can call it with a sequence of begin/end iterator pairs:
visitAllFoos(std::make_pair(c1.begin(), c1.end()), std::make_pair(c2.begin(), c2.end()));
I believe what you're trying to do can be done with Boost.Range, in particular with join and any_range (the latter would be needed if you want to hide the types of the containers and remove joined_range from the interface).
However, the resulting solution would not be very practical both in complexity and performance - mostly because of the nested joined_ranges and type erasure overhead incurred by any_range. Personally, I would just construct std::vector<Foo*> or use visitation.
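For reference, a rough, untested sketch of that join/any_range combination (the JoinFoos helper is invented here):

#include <vector>
#include <boost/range/join.hpp>
#include <boost/range/any_range.hpp>

struct Foo {};

// any_range erases the concrete joined_range type from the interface.
using FooRange = boost::any_range<Foo, boost::forward_traversal_tag, Foo&, std::ptrdiff_t>;

FooRange JoinFoos(std::vector<Foo>& a, std::vector<Foo>& b)
{
    // boost::join concatenates the two ranges lazily, without copying elements.
    return boost::join(a, b);
}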
You can do this with the help of boost::asio::coroutine; see examples at https://pubby8.wordpress.com/2014/03/16/multi-step-iterators-using-coroutines/ and http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/overview/core/coroutine.html.

C++ memory management patterns for objects used in callback chains

A couple of codebases I use include classes that manually call new and delete in the following pattern:
class Worker {
public:
    static void DoWork(ArgT arg, std::function<void()> done) {
        (new Worker(std::move(arg), std::move(done)))->Start();
    }

private:
    Worker(ArgT arg, std::function<void()> done)
        : arg_(std::move(arg)),
          done_(std::move(done)),
          latch_(2) {} // The error-prone Latch interface isn't the point of this question. :)

    void Start() {
        Async1(<args>, [=]() { this->Method1(); });
    }

    void Method1() {
        StartParallel(<args>, [=]() { this->latch_.count_down(); });
        StartParallel(<other_args>, [=]() { this->latch_.count_down(); });
        latch_.then([=]() { this->Finish(); });
    }

    void Finish() {
        done_();
        // Note manual memory management!
        delete this;
    }

    ArgT arg_;
    std::function<void()> done_;
    Latch latch_;
};
Now, in modern C++, explicit delete is a code smell, as, to some extent, is delete this. However, I think this pattern (creating an object to represent a chunk of work managed by a callback chain) is fundamentally a good, or at least not a bad, idea.
So my question is, how should I rewrite instances of this pattern to encapsulate the memory management?
One option that I don't think is a good idea is storing the Worker in a shared_ptr: fundamentally, ownership is not shared here, so the overhead of reference counting is unnecessary. Furthermore, in order to keep a copy of the shared_ptr alive across the callbacks, I'd need to inherit from enable_shared_from_this, remember to call that outside the lambdas, and capture the resulting shared_ptr into the callbacks. If I ever wrote the straightforward code using this directly, or called shared_from_this() inside a callback lambda, the object could be deleted early.
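For illustration, the rejected shared_ptr version would look roughly like this (a sketch with a hypothetical Async1 primitive, not a recommendation):

#include <functional>
#include <memory>
#include <utility>

// Hypothetical async primitive: runs the callback at some later point.
void Async1(std::function<void()> cb);

struct ArgT {};  // placeholder argument type, as in the question

class Worker : public std::enable_shared_from_this<Worker> {
public:
    static void DoWork(ArgT arg, std::function<void()> done) {
        std::shared_ptr<Worker> w(new Worker(std::move(arg), std::move(done)));
        w->Start();
    }

private:
    Worker(ArgT arg, std::function<void()> done)
        : arg_(std::move(arg)), done_(std::move(done)) {}

    void Start() {
        // shared_from_this() must be called here, outside the lambda, and the
        // resulting shared_ptr captured by value; capturing plain `this` could
        // let the last reference die before the callback runs.
        auto self = shared_from_this();
        Async1([self]() { self->Method1(); });
    }

    void Method1() { done_(); }  // simplified: finish immediately

    ArgT arg_;
    std::function<void()> done_;
};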
I agree that delete this is a code smell, and to a lesser extent delete on its own. But I think that here it is a natural part of continuation-passing style, which (to me) is itself something of a code smell.
The root problem is that the design of this API assumes unbounded control-flow: it acknowledges that the caller is interested in what happens when the call completes, but signals that completion via an arbitrarily-complex callback rather than simply returning from a synchronous call. Better to structure it synchronously and let the caller determine an appropriate parallelization and memory-management regime:
class Worker {
public:
    void DoWork(ArgT arg) {
        // Async1 is a mistake; fix it later. For now, synchronize explicitly.
        Latch async_done(1);
        Async1(<args>, [&]() { async_done.count_down(); });
        async_done.await();

        Latch parallel_done(2);
        RunParallel([&]() { DoStuff(<args>); parallel_done.count_down(); });
        RunParallel([&]() { DoStuff(<other_args>); parallel_done.count_down(); });
        parallel_done.await();
    }
};
On the caller-side, it might look something like this:
Latch latch(tasks.size());
for (auto& task : tasks) {
    RunParallel([=]() { DoWork(<args>); latch.count_down(); });
}
latch.await();
Where RunParallel can use std::thread or whatever other mechanism you like for dispatching parallel events.
The advantage of this approach is that object lifetimes are much simpler. The ArgT object lives for exactly the scope of the DoWork call. The arguments to DoWork live exactly as long as the closures containing them. This also makes it much easier to add return-values (such as error codes) to DoWork calls: the caller can just switch from a latch to a thread-safe queue and read the results as they complete.
The disadvantage of this approach is that it requires actual threading, not just boost::asio::io_service. (For example, the RunParallel calls within DoWork() can't block on waiting for the RunParallel calls from the caller side to return.) So you either have to structure your code into strictly-hierarchical thread pools, or you have to allow a potentially-unbounded number of threads.
One option is to accept that the delete this here is not a code smell. At most, it should be wrapped into a small library that would detect whether all the continuation callbacks were destroyed without calling done_().
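A minimal sketch of such a guard (hypothetical; the CompletionGuard name and design are invented here):

#include <cassert>
#include <functional>
#include <memory>
#include <utility>

// Wraps a completion callback and asserts if every copy is destroyed
// without the callback having run -- i.e. a dropped continuation,
// which would mean a leaked Worker.
class CompletionGuard {
public:
    explicit CompletionGuard(std::function<void()> done)
        : done_(std::move(done)), called_(std::make_shared<bool>(false)) {}

    ~CompletionGuard() {
        // Check only when the last copy dies (copies share the flag).
        if (called_.use_count() == 1)
            assert(*called_ && "continuation dropped without calling done_()");
    }

    void operator()() {
        *called_ = true;
        done_();
    }

private:
    std::function<void()> done_;
    std::shared_ptr<bool> called_;
};

Wrapping the done callback as CompletionGuard before handing it to Worker::DoWork would then flag, at destruction time, any callback chain that dropped the continuation without running it.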
