I am writing a function in C++:
int maxsubarray(vector<int>& nums)
Say I have a vector
v = {1, 2, 3, 4, 5}
and I want to pass
{3, 4, 5}
to the function, i.e. pass the vector starting from index 2. In C, with an array, I know I could call maxsubarray(v + 2),
but in C++ that doesn't work. I could modify the function by adding a start-index parameter to make it work, of course. I just want to know: can I do it without modifying my original function?
Thanks.
You will have to create a temporary vector with the part you want to pass:
std::vector<int> v = {1,2,3,4,5};
std::vector<int> v2(v.begin() + 2, v.end());
maxsubarray(v2);
The obvious solution is to make a new vector and pass that one instead. I definitely do not recommend that. The most idiomatic way is to make your function take iterators:
template<typename It>
typename std::iterator_traits<It>::value_type maxsubarray(It begin, It end) { ... }
and then use it like this:
std::vector<int> nums(...);
auto max = maxsubarray(begin(nums) + 2, end(nums));
Anything else involving copies is just inefficient and unnecessary.
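For completeness, here is a minimal compilable sketch of the iterator-based version. The body is a stand-in (it just returns the largest element of a non-empty range, not the asker's actual maximum-subarray logic), and the use of std::iterator_traits is my addition:
#include <iterator>
#include <vector>

// Stand-in body: returns the largest element in [begin, end); assumes a non-empty range.
template <typename It>
typename std::iterator_traits<It>::value_type maxsubarray(It begin, It end)
{
    typename std::iterator_traits<It>::value_type best = *begin;
    for (It it = begin; it != end; ++it)
        if (*it > best)
            best = *it;
    return best;
}

int main()
{
    std::vector<int> v = {1, 2, 3, 4, 5};
    int m = maxsubarray(v.begin() + 2, v.end());  // operates on {3, 4, 5} without copying
    (void)m;
}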
Not without constructing another vector.
You can either build a new vector and pass it by reference to the function (but this might not be ideal from a performance point of view; you generally pass by reference to avoid unnecessary copies) or use pointers:
//copy the vector
std::vector<int> copy(v.begin()+2, v.end());
maxsubarray(copy);
//pass a pointer to the given element
int maxsubarray(int * nums)
maxsubarray(&v[2]);
You could try calling it with a temporary:
int myMax = maxsubarray(vector<int>(v.begin() + 2, v.end()));
That might require changing the function signature to
int maxsubarray(const vector<int> &nums);
since (I think) temporaries can't bind to non-const references, but that change should be preferred here if maxsubarray won't modify nums.
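A quick illustration of that binding rule (f and g are made-up names just for this example, not part of the question):
#include <vector>

void f(std::vector<int>&) {}        // non-const reference: rejects temporaries
void g(const std::vector<int>&) {}  // const reference: accepts temporaries

int main()
{
    std::vector<int> v = {1, 2, 3, 4, 5};
    // f(std::vector<int>(v.begin() + 2, v.end()));  // error: cannot bind a temporary to vector<int>&
    g(std::vector<int>(v.begin() + 2, v.end()));      // OK
}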
When casting a vector of integers (i.e. Eigen::VectorXi) to a vector of doubles, and then operating on that vector of doubles, the generated assembly is dramatically different if the return type of the cast is auto.
In other words, using:
Eigen::VectorXi int_vec(3);
int_vec << 1, 2, 3;
Eigen::VectorXd dbl_vec = int_vec.cast<double>();
Compared to:
Eigen::VectorXi int_vec(3);
int_vec << 1, 2, 3;
auto dbl_vec = int_vec.cast<double>();
Here are two examples on godbolt:
VectorXd return type: https://godbolt.org/z/0FLC4r
auto return type: https://godbolt.org/z/MGxCaL
What are the ramifications of using auto for the return here? I thought it would be more efficient by avoiding a copy, but now I'm not sure.
Indeed, in the code in your question you avoid a copy (until dbl_vec is used, the cast is essentially a no-op). However, in the code on godbolt you traverse the original int_vec and evaluate dbl_vec at least twice, possibly three times:
max + std::log((dbl_vec.array() - max)
^^^             ^^^^^^^           ^^^
I'm not sure if the two calls to max are collapsed into a temporary or not. I'd hope so.
In any case, kmdreko is right and you should avoid using auto with Eigen unless you know exactly what you're doing. In this case, the auto-deduced dbl_vec is an expression template that does not get evaluated until it is used. If you use it more than once, it gets evaluated more than once. If the evaluation is expensive, the savings from not making a copy are lost (with interest) to the additional evaluation times.
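As a small illustration of the difference (a sketch assuming Eigen is on the include path; the variable names are mine):
#include <Eigen/Dense>

int main()
{
    Eigen::VectorXi int_vec(3);
    int_vec << 1, 2, 3;

    // auto keeps the cast as an unevaluated expression template:
    // every use below re-runs the int -> double conversion.
    auto lazy = int_vec.cast<double>();
    double a = lazy.sum();           // converts the elements
    double b = lazy.squaredNorm();   // converts them again

    // Assigning to VectorXd evaluates the cast exactly once.
    Eigen::VectorXd eager = int_vec.cast<double>();
    double c = eager.sum();          // reads the stored doubles
    double d = eager.squaredNorm();  // no reconversion

    (void)a; (void)b; (void)c; (void)d;
}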
I would like to understand when it is more practical to use std::transform and when an old-fashioned for loop is better.
This is my code with a for loop, I want to combine two vectors into a complex one:
vector<double> vAmplitude = this->amplitudeData(N);
vector<double> vPhase = this->phaseData(N);
vector<complex<double>,fftalloc<complex<double> > > vComplex(N);
for (size_t i = 0; i < N; ++i)
{
vComplex[i] = std::polar(vAmplitude[i], vPhase[i]);
}
This is my std::transform code
vector<double> vAmplitude = this->amplitudeData(N);
vector<double> vPhase = this->phaseData(N);
vector<complex<double>,fftalloc<complex<double> > > vComplex;
std::transform(
begin(vPhase), end(vPhase), begin(vAmplitude),
std::back_inserter(vComplex),
[](double p, double a) { return std::polar(a, p); });
Note that vComplex is constructed without a size, so I wonder when the allocations happen. Also, I do not understand why, in the lambda expression, p and a must be reversed relative to their usage.
One consideration in favor of the standard algorithms is that they prepare your code (and you) for the C++17 parallel execution policy overloads.
To borrow from JoachimPileborg's answer, say you write your code as
vector<complex<double>,fftalloc<complex<double> > > vComplex(N);
std::transform(
begin(vAmplitude), end(vAmplitude), begin(vPhase),
std::begin(vComplex),
std::polar<double>);
After some time, you realize that this is the bottleneck in your code, and you need to run it in parallel. In that case, all you'd need to do is pass
std::execution::par as the first argument to std::transform. In the hand-rolled version, your (standard-compliant) parallelism choices are gone.
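For illustration, the parallel version could look roughly like this (a sketch assuming a C++17 standard library with <execution> support; to_complex is a made-up wrapper name, and I use a plain std::vector instead of the fftalloc allocator):
#include <algorithm>
#include <complex>
#include <execution>
#include <vector>

void to_complex(const std::vector<double>& vAmplitude,
                const std::vector<double>& vPhase,
                std::vector<std::complex<double>>& vComplex)
{
    vComplex.resize(vAmplitude.size());
    std::transform(std::execution::par,                  // parallel execution policy
                   vAmplitude.begin(), vAmplitude.end(),
                   vPhase.begin(),
                   vComplex.begin(),
                   [](double a, double p) { return std::polar(a, p); });
}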
Regarding the allocation, that's what std::back_inserter does: it calls push_back on vComplex for each result, so the vector grows (and reallocates) as needed during the std::transform call.
You could also set the size for the destination vector vComplex and use std::begin for it in the std::transform call:
vector<complex<double>,fftalloc<complex<double> > > vComplex(N);
std::transform(
begin(vPhase), end(vPhase), begin(vAmplitude),
std::begin(vComplex),
[](double p, double a) { return std::polar(a, p); });
As for the reversal of the arguments in the lambda, it's because you use vPhase as the first range in the std::transform call, so the lambda receives a phase first and an amplitude second, while std::polar takes (amplitude, phase). If you changed it to use vAmplitude as the first range, you could pass a pointer to std::polar<double> directly instead:
std::transform(
begin(vAmplitude), end(vAmplitude), begin(vPhase),
std::begin(vComplex),
std::polar<double>);
Lastly, as for when to use std::transform, it's more of a personal matter in most cases. I personally prefer to use the standard algorithm functions before trying to do everything myself.
I was playing with bind and I was thinking, are lambdas as expensive as function pointers?
What I mean is, as I understand lambdas, they are syntactic sugar for functors and bind is similar. However, if you do this:
#include <functional>
#include <iostream>
void fn2(int a, int b)
{
std::cout << a << ", " << b << std::endl;
}
void fn1(int a, int b)
{
//auto bound = std::bind(fn2, a, b);
//static auto bound = std::bind(fn2, a, b);
//auto bound = [&]{ fn2(a, b); };
static auto bound = [&]{ fn2(a, b); };
bound();
}
int main()
{
fn1(3, 4);
fn1(1, 2);
return 0;
}
Now, if I use the 1st option, auto bound = std::bind(fn2, a, b);, I get the output "3, 4" then "1, 2". With the 2nd I get "3, 4" then "3, 4". With the 3rd and 4th I get the same output as the 1st.
Now, I get why the 1st and 2nd work that way: they are initialised at the start of the function call (the static one only the first time it is called). However, the 3rd and 4th seem to have compiler magic going on, where the generated functors are not really holding references to the enclosing scope's variables but are latching onto the names themselves, whether the closure is initialised only the first time or every time.
Can someone clarify what is actually happening here?
Edit: What I was missing is using static auto bound = std::bind(fn2, std::ref(a), std::ref(b)); to have it work as the 4th option.
You have this code:
static auto bound = [&]{ fn2(a, b); };
Initialisation is done only the first time you invoke this function, because the variable is static. So in fact the lambda is created only once. The compiler creates a closure when you make a lambda, so the references to a and b from the first call to fn1 were captured. This is very risky: it may lead to dangling references. I'm surprised it didn't crash, since you are building a closure over function parameters passed by value, i.e. over local variables.
I recommend this excellent article about lambdas: http://www.cprogramming.com/c++11/c++11-lambda-closures.html .
As a general rule, only use [&] lambdas when your closure is going to go away by the end of the current scope.
If it is going to outlast the current scope, and you need by-reference, explicitly capture the things you are going to capture, or create local pointers to the things you are going to capture and capture them by-value.
In your case, your static lambda code is full of undefined behavior: you [&]-capture a and b in the first call, then use the lambda again in the second call, after those parameters are gone.
In theory, the compiler could rewrite your code to capture a and b by value instead of by reference, then call that every time, because the only difference between that implementation and the one you wrote occurs when the behavior is undefined, and the result will be much faster.
It could do a more efficient job by ignoring your static completely, as the entire state of your static object is undefined after you leave scope the first time you call, and the construction has no visible side effects.
To fix your problem with the lambdas, use [=] or [a,b] to introduce the lambda, and it will capture the a and b by value. I prefer to capture state explicitly on lambdas when I expect the lambda to persist longer than the current block.
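A minimal sketch of the by-value fix (the names frozen and fresh are mine; the comments assume the same fn1/fn2 setup as above):
#include <iostream>

void fn2(int a, int b) { std::cout << a << ", " << b << std::endl; }

void fn1(int a, int b)
{
    // Captures copies of the first call's arguments: safe, but frozen at "3, 4".
    static auto frozen = [=]{ fn2(a, b); };
    frozen();

    // Rebuilt every call: always prints the current arguments.
    auto fresh = [=]{ fn2(a, b); };
    fresh();
}

int main()
{
    fn1(3, 4);  // prints "3, 4" twice
    fn1(1, 2);  // prints "3, 4" (frozen) then "1, 2" (fresh)
}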
I have a stl map that's of type:
map<Object*, baseObject*>
where
class baseObject{
int ID;
//other stuff
};
If I wanted to return a list of the objects (std::list<Object*>), what's the best way to sort it in order of the baseObjects' IDs?
Am I just stuck looping through for every number or something? I'd prefer not to change the map to a boost map, although I wouldn't necessarily be against doing something that's self-contained within a return function like
GetObjectList(std::list<Object*> &objects)
{
//sort the map into the list
}
Edit: maybe I should iterate through and copy the obj->baseobj into a map of baseobj.ID->obj ?
What I'd do is first extract the keys (since you only want to return those) into a vector, and then sort that:
std::vector<Object*> out;
std::transform(myMap.begin(), myMap.end(), std::back_inserter(out), [](const std::pair<Object* const, baseObject*>& p) { return p.first; });
std::sort(out.begin(), out.end(), [&myMap](Object* lhs, Object* rhs) { return myMap[lhs]->ID < myMap[rhs]->ID; });
If your compiler doesn't support lambdas, just rewrite them as free functions or function objects. I just used lambdas for conciseness.
For performance, I'd probably reserve enough room in the vector initially, instead of letting it gradually expand.
(Also note that I haven't tested the code, so it might need a little bit of fiddling)
Also, I don't know what this map is supposed to represent, but holding a map where both key and value types are pointers really sets my "bad C++" sense tingling. It smells of manual memory management and muddled (or nonexistent) ownership semantics.
You mentioned getting the output in a list, but a vector is almost certainly a better performing option, so I used that. The only situation where a list is preferable is really when you have no intention of ever iterating over it, and if you need the guarantee that pointers and iterators stay valid after modification of the list.
The first thing is that I would not use a std::list, but rather a std::vector. Now as of the particular problem you need to perform two operations: generate the container, sort it by whatever your criteria is.
// Extract the data:
std::vector<Object*> v;
v.reserve( m.size() );
std::transform( m.begin(), m.end(),
std::back_inserter(v),
[]( const map<Object*, baseObject*>::value_type& v ) {
return v.first;
} );
// Order according to the values in the map
std::sort( v.begin(), v.end(),
[&m]( Object* lhs, Object* rhs ) {
return m[lhs]->ID < m[rhs]->ID;
} );
Without C++11 you will need to create functors instead of the lambdas, and if you insist in returning a std::list then you should use std::list<>::sort( Comparator ). Note that this is probably inefficient. If performance is an issue (after you get this working and you profile and know that this is actually a bottleneck) you might want to consider using an intermediate map<int,Object*>:
std::map<int,Object*> mm;
for ( auto it = m.begin(); it != m.end(); ++it ) {
    mm[ it->second->ID ] = it->first;
}
std::vector<Object*> v;
v.reserve( mm.size() ); // mm might have fewer elements than m!
std::transform( mm.begin(), mm.end(),
std::back_inserter(v),
[]( const map<int, Object*>::value_type& v ) {
return v.second;
} );
Again, this might be faster or slower than the original version... profile.
I think you'll do fine with:
void GetObjectList(std::list<Object*> &objects)
{
    std::vector<Object*> vec;
    vec.reserve(map.size());
    for (auto it = map.begin(), it_end = map.end(); it != it_end; ++it)
        vec.push_back(it->first);   // collect the keys (the Object pointers)
    std::sort(vec.begin(), vec.end(),
              [&](Object* a, Object* b) { return map[a]->ID < map[b]->ID; });
    objects.assign(vec.begin(), vec.end());
}
Here's how to do what you said, "sort it in order of the baseObject.ID's":
typedef std::map<Object*, baseObject*> MapType;
MapType mymap; // don't care how this is populated
// except that it must not contain null baseObject* values.
struct CompareByMappedId {
const MapType &map;
CompareByMappedId(const MapType &map) : map(map) {}
bool operator()(Object *lhs, Object *rhs) {
return map.find(lhs)->second->ID < map.find(rhs)->second->ID;
}
};
void GetObjectList(std::list<Object*> &objects) {
assert(objects.empty()); // pre-condition, or could clear it
// or for that matter return a list by value instead.
// copy keys into list
for (MapType::const_iterator it = mymap.begin(); it != mymap.end(); ++it) {
objects.push_back(it->first);
}
// sort the list
objects.sort(CompareByMappedId(mymap));
}
This isn't desperately efficient: it does more looking up in the map than is strictly necessary, and manipulating list nodes in std::list::sort is likely a little slower than std::sort would be at manipulating a random-access container of pointers. But then, std::list itself isn't very efficient for most purposes, so you expect it to be expensive to set one up.
If you need to optimize, you could create a vector of pairs of (int, Object*), so that you only have to iterate over the map once, no need to look things up. Sort the pairs, then put the second element of each pair into the list. That may be a premature optimization, but it's an effective trick in practice.
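A rough sketch of that optimization, reusing the MapType and mymap names from above (untested, like the rest of the answer; GetObjectListFast is a made-up name):
void GetObjectListFast(std::list<Object*> &objects) {
    // One pass over the map: pair each ID with its Object* key.
    std::vector<std::pair<int, Object*> > pairs;
    pairs.reserve(mymap.size());
    for (MapType::const_iterator it = mymap.begin(); it != mymap.end(); ++it) {
        pairs.push_back(std::make_pair(it->second->ID, it->first));
    }
    // std::pair sorts by its first member (the ID); ties fall back to pointer comparison.
    std::sort(pairs.begin(), pairs.end());
    for (std::vector<std::pair<int, Object*> >::const_iterator it = pairs.begin();
         it != pairs.end(); ++it) {
        objects.push_back(it->second);
    }
}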
I would create a new map that had a sort criterion that used the component ID of your objects. Populate the second map from the first map (just iterate through it; std::transform with std::inserter also works). Then you can read this map in order using its iterators.
This has a slight overhead in terms of insertion over using a vector or list (log(n) time instead of constant time), but it avoids the need to sort after you've created the vector or list which is nice.
Also, you'll be able to add more elements to it later in your program and it will maintain its order without need of a resort.
I'm not sure I completely understand what you're trying to store in your map but perhaps look here
The third template argument of a std::map is a "less" comparison functor. Perhaps you can utilize this to sort the data stored in the map on insertion. Then it would be a straightforward loop over a map iterator to populate the list.
Consider the following code
#include <boost/unordered_set.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>
int main()
{
boost::unordered_set<int> s;
s.insert(5);
s.insert(5);
// s.size() == 1
boost::unordered_set<boost::shared_ptr<int> > s2;
s2.insert(boost::make_shared<int>(5));
s2.insert(boost::make_shared<int>(5));
// s2.size() == 2
}
The question is: how come the size of s2 is 2 instead of 1? I'm pretty sure it must have something to do with the hash function. I tried looking at the boost docs and playing around with the hash function without luck.
Ideas?
make_shared allocates a new int, and wraps a shared_ptr around it. This means that your two shared_ptr<int>s point to different memory, and since you're creating a hash table keyed on pointer value, they are distinct keys.
For the same reason, this will result in a size of 2:
boost::unordered_set<int *> s3;
s3.insert(new int(5));
s3.insert(new int(5));
assert(s3.size() == 2);
For the most part you can consider shared_ptrs to act just like pointers, including for comparisons, except for the auto-destruction.
You could define your own hash function and comparison predicate, and pass them as template parameters to unordered_set, though:
struct your_equality_predicate
: std::binary_function<boost::shared_ptr<int>, boost::shared_ptr<int>, bool>
{
bool operator()(boost::shared_ptr<int> i1, boost::shared_ptr<int> i2) const {
return *i1 == *i2;
}
};
struct your_hash_function
: std::unary_function<boost::shared_ptr<int>, std::size_t>
{
std::size_t operator()(boost::shared_ptr<int> x) const {
return *x; // BAD hash function, replace with somethign better!
}
};
boost::unordered_set<boost::shared_ptr<int>, your_hash_function, your_equality_predicate> s4;
However, this is probably a bad idea for a few reasons:
You have the confusing situation where x != y but s4 treats x and y as the same key.
If someone ever changes the value pointed-to by a hash key your hash will break! That is:
boost::shared_ptr<int> tmp(new int(42));
s4.insert(tmp);
*tmp = 24; // UNDEFINED BEHAVIOR: the stored key now hashes differently
Typically with hash functions you want the key to be immutable; it will always compare the same, no matter what happens later. If you're using pointers, you usually want the pointer identity to be what is matched on, as in extra_info_hash[&some_object] = ...; this will normally always map to the same hash value whatever some_object's members may be. With the keys mutable after insertion, it is all too easy to actually do so, resulting in undefined behavior in the hash.
Notice that in Boost <= 1.46.0, the default hash_value of a boost::shared_ptr is its boolean value, true or false.
For any shared_ptr that is not NULL, hash_value evaluates to 1 (one), as the (bool)shared_ptr == true.
In other words, you downgrade a hash set to a linked list if you are using Boost <= 1.46.0.
This is fixed in Boost 1.47.0, see https://svn.boost.org/trac/boost/ticket/5216 .
If you are using std::shared_ptr, please define your own hash function, or use boost/functional/hash/extensions.hpp from Boost >= 1.51.0
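For the std::shared_ptr case, a minimal sketch of such a hash/equality pair (the names are mine; the same caveat as above applies: don't modify the pointees while they are used as keys):
#include <memory>
#include <unordered_set>

// Hash and compare by the pointed-to value rather than by pointer identity.
struct deref_hash {
    std::size_t operator()(const std::shared_ptr<int>& p) const {
        return std::hash<int>()(*p);
    }
};
struct deref_equal {
    bool operator()(const std::shared_ptr<int>& a,
                    const std::shared_ptr<int>& b) const {
        return *a == *b;
    }
};

int main() {
    std::unordered_set<std::shared_ptr<int>, deref_hash, deref_equal> s;
    s.insert(std::make_shared<int>(5));
    s.insert(std::make_shared<int>(5));  // same pointee value, so s.size() == 1
}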
As you found out, the two objects inserted into s2 are distinct.