It is kind of exasperating that the std collections don't provide a functional map interface for filling a collection:
std::vector< int > oldV = {1,3,5};
std::vector< int > newV = (oldV % [&](int v)-> int{ return v+1; });
newV.insert( oldV.begin(), oldV.end(), [&](int v)-> int{ return 2*v; });
Is there a simple header-only library that implements wrappers for functional-style programming with the std collections?
I don't see a way to do it such that it would apply both to things like std::vector and std::unordered_set without repeating the operator definition for each container. In the case of vector it would be like this:
#include <iostream>
#include <vector>

template <typename T, typename Lambda>
std::vector<T> operator |(const std::vector<T>& input, Lambda map)
{
    std::vector<T> output;
    output.reserve(input.size());
    for (const T& elem : input)
        output.push_back(map(elem));
    return output; // no std::move here: returning the local by value enables NRVO
}
int main()
{
    std::vector<int> oldV = {1, 3, 5};
    std::vector<int> newV = oldV | [&](int v) -> int { return v + 1; };
    for (int v : newV)
        std::cout << v << std::endl;
}
For the case of std::unordered_set you would only have to replace push_back with insert, as in the sketch below.
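A minimal sketch of that overload (same shape as the vector one above):

#include <unordered_set>

template <typename T, typename Lambda>
std::unordered_set<T> operator |(const std::unordered_set<T>& input, Lambda map)
{
    std::unordered_set<T> output;
    for (const T& elem : input)
        output.insert(map(elem)); // insert instead of push_back
    return output;
}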
The pipe operator here has the same well-known semantics as on Unix/Linux shells and in some languages.
You could use std::generate and std::transform to do this.
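For example, a minimal sketch using std::transform with std::back_inserter (standard library only, no custom operators):

#include <algorithm>
#include <iterator>
#include <vector>

std::vector<int> oldV = {1, 3, 5};
std::vector<int> newV;
newV.reserve(oldV.size());
// Applies the lambda to every element of oldV and appends the results to newV.
std::transform(oldV.begin(), oldV.end(), std::back_inserter(newV),
               [](int v) { return v + 1; });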
I want to create a heap data structure so that I can update values, but my simple code below throws an exception. Why does it give the following?
109 : 3
terminate called after throwing an instance of 'std::bad_function_call'
  what():  bad_function_call
#include <iostream>
#include <map>
#include <vector>
#include <functional>
#include <boost/heap/fibonacci_heap.hpp>

int main() {
    // Creating & initializing a map of ints to vectors of ints
    std::map<int, std::vector<int> > mapOfWordCount =
        { { 1000, {0,1,10,8} }, { 10001, {1,5,99} }, { 1008, {7,4,1} }, { 109, {1,5,3} } };

    // Declaring the type of predicate that accepts 2 pairs and returns a bool
    typedef std::function<bool(std::pair<int, std::vector<int> >,
                               std::pair<int, std::vector<int> >)> Comparator;

    // Defining a lambda function to compare two pairs using the second field
    Comparator compFunctor =
        [](std::pair<int, std::vector<int> > elem1, std::pair<int, std::vector<int> > elem2)
        {
            return elem1.second.size() > elem2.second.size();
        };

    boost::heap::fibonacci_heap<std::pair<int, std::vector<int> >,
                                boost::heap::compare<Comparator> > pq;
    typedef boost::heap::fibonacci_heap<std::pair<int, std::vector<int> >,
                                        boost::heap::compare<Comparator> >::handle_type handle_t;

    handle_t* tab_handle = new handle_t[mapOfWordCount.size()];
    unsigned iter(0);
    for (auto& element : mapOfWordCount) {
        tab_handle[iter++] = pq.push(element);
        std::cout << element.first << " : " << element.second.size() << std::endl;
    }
}
A std::bad_function_call exception is thrown (in this case) when an empty std::function is called.
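A minimal demonstration of that failure mode:

#include <functional>

int main() {
    std::function<bool(int, int)> cmp; // default-constructed: empty
    return cmp(1, 2);                  // throws std::bad_function_call
}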
I have made this work by making Comparator a functor.
struct Comparator
{
    bool operator()(std::pair<int, std::vector<int> > elem1,
                    std::pair<int, std::vector<int> > elem2) const
    {
        return elem1.second.size() > elem2.second.size();
    }
};
This can then be used in the declarations of pq and handle_t.
Output:
109 : 3
1000 : 4
1008 : 3
10001 : 3
You can figure out how to make it work with a lambda.
Hint: It involves using the lambda compFunctor as an argument for construction.
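For example (a sketch, assuming the original std::function-based Comparator typedef and the compFunctor lambda from the question):

// Passing the lambda to the heap's constructor makes the stored
// std::function non-empty, so comparisons no longer throw.
boost::heap::fibonacci_heap<std::pair<int, std::vector<int> >,
                            boost::heap::compare<Comparator> > pq(compFunctor);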
I was recently asked this question in a C++ interview, where I was asked to improve the piece of code below, which fails when adding two ints produces a result that only fits in a long and the return type needs to be derived accordingly.
The code fails because the decltype()-based derivation is not intelligent enough: it looks only at the types of the inputs, not the actual range of their values, so it derives the same type as the return type. Hence we perhaps need some template metaprogramming technique to derive the return type as long if T is int.
How can this be generalized? Any hints or clues?
I feel that decltype() won't be helpful here.
#include <iostream>
#include <string>
#include <climits>
using namespace std;

template<typename T>
auto adder(const T& i1, const T& i2) -> decltype(i1 + i2)
{
    return i1 + i2;
}

int main(int argc, char* argv[])
{
    cout << adder(INT_MAX - 10, INT_MAX - 3) << endl;       // wrong.
    cout << adder<long>(INT_MAX - 10, INT_MAX - 3) << endl; // correct!!
    return 0;
}
Hence we perhaps need some template metaprogramming technique to derive the return type as long if T is int.
Not so simple: if T is int, you can't be sure that long is enough.
The standard only guarantees that
1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16
2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32
3) sizeof(int) <= sizeof(long)
So if a compiler implements int with sizeof(int) == sizeof(long), that is perfectly legal, and
adder<long>(INT_MAX-10, INT_MAX-3);
doesn't work, because long may not be enough to contain (without overflow) the sum of two ints.
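Those guarantees can be spelled out as static_asserts (just an illustration of the three points above):

#include <climits>

static_assert(sizeof(int)  * CHAR_BIT >= 16, "int has at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long has at least 32 bits");
static_assert(sizeof(int) <= sizeof(long),   "int is no wider than long");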
I don't see a simple and elegant solution.
The best that comes to my mind is based on the fact that C++11 introduced the following types:
1) std::int_least8_t, the smallest integer type with at least 8 bits
2) std::int_least16_t, the smallest integer type with at least 16 bits
3) std::int_least32_t, the smallest integer type with at least 32 bits
4) std::int_least64_t, the smallest integer type with at least 64 bits
C++11 also introduced std::intmax_t as the maximum-width integer type.
So I propose the following template type selector
template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
    (N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };
which, given a number of bits, defines the corresponding smallest "at least" integer type.
I also propose the following using declaration
template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;
which, given a type T, selects the smallest integer type that can surely contain the sum of two T values (an integer with at least one bit more than T).
So your adder() simply becomes
template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }
Observe that the returned value isn't simply
return i1 + i2;
otherwise you'd return the correct type but the wrong value: i1 + i2 is calculated as a T value, so it can overflow before being assigned to the typeNext<T> return value.
To avoid this, initialize a typeNext<T> temporary with one of the two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2), obtaining a typeNext<T> value, and finally return that. This way the sum is calculated as a typeNext<T> sum and you don't get overflow.
The following is a complete, compilable example:
#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>

template <std::size_t N, typename = std::true_type>
struct typeFor;

/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
    (N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };

template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };

template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;

template <typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }

int main()
{
    auto x = adder(INT_MAX-10, INT_MAX-3);

    std::cout << "int:  " << sizeof(int)*CHAR_BIT << std::endl;
    std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
    std::cout << "x:    " << sizeof(x)*CHAR_BIT << std::endl;
    std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}
On my 64-bit Linux platform, I get 32 bits for int, 64 bits for long and for x, and confirmation that long and decltype(x) are the same type.
But that is true for my platform; nothing guarantees that long and decltype(x) are always the same.
Observe also that trying to get a type for the sum of two std::intmax_t values,
std::intmax_t y {};
auto z = adder(y, y);
gives an error and doesn't compile, because no typeFor is defined for an N bigger than sizeof(std::intmax_t)*CHAR_BIT.
I am trying to use recursion to solve this problem, where if I call
decimal<0,0,1>();
I should get the decimal number (4 in this case).
I am trying to use recursion with variadic templates but cannot get it to work.
Here's my code:
template<>
int decimal(){
    return 0;
}

template<bool a, bool... pack>
int decimal(){
    cout << a << "called" << endl;
    return a*2 + decimal<pack...>();
}

int main(int argc, char *argv[]){
    cout << decimal<0,0,1>() << endl;
    return 0;
}
What would be the best way to solve this?
template<typename = void>
int decimal(){
    return 0;
}

template<bool a, bool... pack>
int decimal(){
    cout << a << "called" << endl;
    return a + 2*decimal<pack...>();
}
The problem was with the recursive case, which expects to be able to call decimal<>() once the pack is empty. That is what the first overload above provides. You can essentially ignore the typename = void; it is just necessary to allow the first overload to compile.
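A quick check of the fixed overloads (assuming the usual includes and using namespace std, as in the question):

int main(){
    cout << decimal<0,0,1>() << endl; // prints 4: 0*1 + 0*2 + 1*4
    return 0;
}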
A possible solution is to use a constexpr function (so you can use its value at compile time or run time, as appropriate) where the values are arguments of the function.
Something like
#include <iostream>

constexpr int decimal ()
{ return 0; }

template <typename T, typename ... packT>
constexpr int decimal (T const & a, packT ... pack)
{ return a*2 + decimal(pack...); }

int main(int argc, char *argv[])
{
    constexpr int val { decimal(0, 0, 1) };
    static_assert( val == 2, "!" );

    std::cout << val << std::endl;
    return 0;
}
But I obtain 2, not 4.
Are you sure that your code should return 4?
-- EDIT --
As pointed out by aschepler, my example decimal() template function "returns twice the sum of its arguments, which is not" what you want.
Well, with 0, 1, true, and false you obtain the same results; with other numbers you obtain different ones.
But you can modify decimal() as follows
template <typename ... packT>
constexpr int decimal (bool a, packT ... pack)
{ return a*2 + decimal(pack...); }
to avoid this problem.
This is a C++14 solution. It is mostly C++11, except for std::integer_sequence and std::index_sequence, both of which are relatively easy to implement in C++11; a sketch follows.
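For reference, a minimal C++11 stand-in for those two utilities (a bare-bones sketch, not the full standard interface):

#include <cstddef>

// Bare-bones integer_sequence / index_sequence replacements.
template<typename T, T... Is>
struct integer_sequence {};

template<std::size_t... Is>
using index_sequence = integer_sequence<std::size_t, Is...>;

// Linear-recursive make_index_sequence; fine for small N.
template<std::size_t N, std::size_t... Is>
struct make_index_sequence_impl : make_index_sequence_impl<N - 1, N - 1, Is...> {};

template<std::size_t... Is>
struct make_index_sequence_impl<0, Is...> { using type = index_sequence<Is...>; };

template<std::size_t N>
using make_index_sequence = typename make_index_sequence_impl<N>::type;

The solution itself: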
template<bool... bs>
using bools = std::integer_sequence<bool, bs...>;

template<std::uint64_t x>
using uint64 = std::integral_constant<std::uint64_t, x>;

template<std::size_t N>
constexpr uint64< ((std::uint64_t)1) << (std::uint64_t)N > bit{};

template<std::uint64_t... xs>
struct or_bits : uint64<0> {};

template<std::uint64_t x0, std::uint64_t... xs>
struct or_bits<x0, xs...> : uint64< x0 | or_bits<xs...>{} > {};

template<bool... bs, std::size_t... Is>
constexpr
uint64<
    or_bits<
        uint64<
            bs ? bit<Is> : std::uint64_t(0)
        >{}...
    >{}
>
from_binary( bools<bs...> bits, std::index_sequence<Is...> ) {
    (void)bits; // suppress unused-parameter warning
    return {};
}

template<bool... bs>
constexpr
auto from_binary( bools<bs...> bits = {} )
-> decltype( from_binary( bits, std::make_index_sequence<sizeof...(bs)>{} ) )
{ return {}; }
It generates the resulting value as a type with a constexpr conversion to scalar. This is slightly more powerful than a constexpr function in its "compile-time-ness".
It assumes that the first bit is the most significant bit in the list.
You can use from_binary<1,0,1>() or from_binary( bools<1,0,1>{} ).
This particular style of type-based programming results in code that does all of its work in its signature. The bodies consist of return {};.
I have K objects (K is small, e.g. 2 or 5) and I need to iterate over them N times in random order, where N may be large. I need to iterate in a foreach loop, and for this I should provide an iterator.
So far I have created a std::vector of my K objects copied accordingly, so the size of the vector is N, and I use the begin() and end() provided by that vector. I use std::shuffle() to randomize the vector, and this takes up to 20% of the running time. I think it would be better (and more elegant, anyway) to write a custom iterator that returns one of my objects in random order without creating the helper vector of size N. But how do I do this?
It is obvious that your iterator must:
Store pointer to original vector or array: m_pSource
Store the count of requests (to be able to stop): m_nOutputCount
Use random number generator (see random): m_generator
Some iterator must be treated as end iterator: m_nOutputCount == 0
I've made an example for type int:
#include <iostream>
#include <random>
#include <vector>

class RandomIterator : public std::iterator<std::forward_iterator_tag, int>
{
public:
    // Creates the "end" iterator
    RandomIterator() : m_pSource(nullptr), m_nOutputCount(0), m_nCurValue(0) {}

    // Creates a random "start" iterator
    RandomIterator(const std::vector<int>& source, int nOutputCount) :
        m_pSource(&source), m_nOutputCount(nOutputCount + 1),
        m_distribution(0, source.size() - 1)
    {
        operator++(); // make the first random value
    }

    int operator* () const
    {
        return m_nCurValue;
    }

    RandomIterator& operator++()
    {
        if (m_nOutputCount == 0)
            return *this;
        --m_nOutputCount;

        static std::default_random_engine generator;
        static bool bWasGeneratorInitialized = false;
        if (!bWasGeneratorInitialized)
        {
            std::random_device rd; // expensive calls
            generator.seed(rd());
            bWasGeneratorInitialized = true;
        }

        m_nCurValue = m_pSource->at(m_distribution(generator));
        return *this;
    }

    RandomIterator operator++(int)
    { // postincrement
        RandomIterator tmp = *this;
        ++*this;
        return tmp;
    }

    bool operator== (const RandomIterator& other) const
    {
        if (other.m_nOutputCount == 0)
            return m_nOutputCount == 0; // "end" iterator
        return m_pSource == other.m_pSource;
    }

    bool operator!= (const RandomIterator& other) const
    {
        return !(*this == other);
    }

private:
    const std::vector<int>* m_pSource;
    int m_nOutputCount;
    int m_nCurValue;
    std::uniform_int_distribution<std::vector<int>::size_type> m_distribution;
};
int main()
{
    std::vector<int> arrTest{ 1, 2, 3, 4, 5 };

    std::cout << "Original =";
    for (auto it = arrTest.cbegin(); it != arrTest.cend(); ++it)
        std::cout << " " << *it;
    std::cout << std::endl;

    RandomIterator rndEnd;
    std::cout << "Random =";
    for (RandomIterator it(arrTest, 15); it != rndEnd; ++it)
        std::cout << " " << *it;
    std::cout << std::endl;
}
The output is:
Original = 1 2 3 4 5
Random = 1 4 1 3 2 4 5 4 2 3 4 3 1 3 4
You can easily convert it into a template and make it accept any random-access container.
I just want to complement Dmitriy's answer: reading your question, it seems you want that every time you iterate your newly created and shuffled collection the items do not repeat, and Dmitriy's answer does have repetition. So both iterators are useful.
#include <algorithm>
#include <iterator>
#include <numeric>
#include <vector>

template <typename T>
struct RandomIterator : public std::iterator<std::forward_iterator_tag, typename T::value_type>
{
    RandomIterator() : Data(nullptr)
    {
    }

    template <typename G>
    RandomIterator(const T& source, G& g) : Data(&source)
    {
        // Shuffle a vector of indices instead of the data itself.
        Order = std::vector<int>(source.size());
        std::iota(begin(Order), end(Order), 0);
        std::shuffle(begin(Order), end(Order), g);
        OrderIterator = begin(Order);
        OrderIteratorEnd = end(Order);
    }

    const typename T::value_type& operator* () const noexcept
    {
        return (*Data)[*OrderIterator];
    }

    RandomIterator<T>& operator++() noexcept
    {
        ++OrderIterator;
        return *this;
    }

    bool operator== (const RandomIterator<T>& other) const noexcept
    {
        if (Data == nullptr && other.Data == nullptr)
        {
            return true;
        }
        else if ((OrderIterator == OrderIteratorEnd) && (other.Data == nullptr))
        {
            return true;
        }
        return false;
    }

    bool operator!= (const RandomIterator<T>& other) const noexcept
    {
        return !(*this == other);
    }

private:
    const T *Data;
    std::vector<int> Order;
    std::vector<int>::iterator OrderIterator;
    std::vector<int>::iterator OrderIteratorEnd;
};
template <typename T, typename G>
RandomIterator<T> random_begin(const T& v, G& g) noexcept
{
    return RandomIterator<T>(v, g);
}

template <typename T>
RandomIterator<T> random_end(const T& v) noexcept
{
    return RandomIterator<T>();
}
The whole code is at
http://coliru.stacked-crooked.com/a/df6ce482bbcbafcf or
https://github.com/xunilrj/sandbox/blob/master/sources/random_iterator/source/random_iterator.cpp
Implementing custom iterators can be very tricky, so I tried to follow some tutorials, but please let me know if something has slipped through:
http://web.stanford.edu/class/cs107l/handouts/04-Custom-Iterators.pdf
https://codereview.stackexchange.com/questions/74609/custom-iterator-for-a-linked-list-class
Operator overloading
I think that the performance is satisfactory:
On the Coliru:
<size>:<time for 10 iterations>
1:0.000126582
10:3.5179e-05
100:0.000185914
1000:0.00160409
10000:0.0161338
100000:0.180089
1000000:2.28161
Of course it comes at the price of allocating a whole vector for the order, the same size as the original vector.
An improvement would be to pre-allocate the Order vector if for some reason you have to random-iterate very often, let the iterator use this pre-allocated vector, or support some form of reset() in the iterator, as sketched below.
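A sketch of that reset() idea (a hypothetical member added to the RandomIterator above; the shuffle reuses the existing allocation):

// Reshuffles the existing Order vector in place, so repeated random
// iterations do not pay for a fresh allocation each time.
template <typename G>
void reset(G& g)
{
    std::shuffle(begin(Order), end(Order), g);
    OrderIterator = begin(Order);
    OrderIteratorEnd = end(Order);
}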
I'm trying to measure a performance difference between using Boost.Variant and using virtual interfaces. For example, suppose I want to increment different types of numbers uniformly, using Boost.Variant I would use a boost::variant over int and float and a static visitor which increments each one of them. Using class interfaces I would use a pure virtual class number and number_int and number_float classes which derive from it and implement an "increment" method.
From my testing, using interfaces is far faster than using Boost.Variant.
I ran the code at the bottom and received these results:
Virtual: 00:00:00.001028
Variant: 00:00:00.012081
What do you suppose explains this difference? I thought Boost.Variant would be a lot faster.
Note: Usually Boost.Variant uses heap allocations to guarantee that the variant is always non-empty. But I read in the Boost.Variant documentation that if boost::has_nothrow_copy is true then it doesn't use heap allocations, which should make things significantly faster. For int and float, boost::has_nothrow_copy is true.
Here is my code for measuring the two approaches against each other.
#include <iostream>
#include <boost/variant/variant.hpp>
#include <boost/variant/static_visitor.hpp>
#include <boost/variant/apply_visitor.hpp>
#include <boost/date_time/posix_time/ptime.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <boost/date_time/posix_time/posix_time_io.hpp>
#include <boost/format.hpp>

const int iterations_count = 100000;

// a visitor that increments a variant by N
template <int N>
struct add : boost::static_visitor<> {
    template <typename T>
    void operator() (T& t) const {
        t += N;
    }
};

// a number interface
struct number {
    virtual void increment() = 0;
};

// number interface implementation for all types
template <typename T>
struct number_ : number {
    number_(T t = 0) : t(t) {}
    virtual void increment() {
        t += 1;
    }
    T t;
};

void use_virtual() {
    number_<int> num_int;
    number* num = &num_int;
    for (int i = 0; i < iterations_count; i++) {
        num->increment();
    }
}

void use_variant() {
    typedef boost::variant<int, float, double> number;
    number num = 0;
    for (int i = 0; i < iterations_count; i++) {
        boost::apply_visitor(add<1>(), num);
    }
}

int main() {
    using namespace boost::posix_time;
    ptime start, end;
    time_duration d1, d2;

    // virtual
    start = microsec_clock::universal_time();
    use_virtual();
    end = microsec_clock::universal_time();
    // store result
    d1 = end - start;

    // variant
    start = microsec_clock::universal_time();
    use_variant();
    end = microsec_clock::universal_time();
    // store result
    d2 = end - start;

    // output
    std::cout <<
        boost::format(
            "Virtual: %1%\n"
            "Variant: %2%\n"
        ) % d1 % d2;
}
For those interested: after I was a bit frustrated, I passed the -O2 option to the compiler, and boost::variant was way faster than a virtual call. Thanks!
It is obvious that -O2 reduces the variant time: the whole loop is optimized away. Change the implementation to return the accumulated result to the caller, so that the optimizer can't remove the loop, and you'll see the real difference:
Output:
Virtual: 00:00:00.000120 = 10000000
Variant: 00:00:00.013483 = 10000000
#include <iostream>
#include <boost/variant/variant.hpp>
#include <boost/variant/static_visitor.hpp>
#include <boost/variant/apply_visitor.hpp>
#include <boost/date_time/posix_time/ptime.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <boost/date_time/posix_time/posix_time_io.hpp>
#include <boost/format.hpp>

const int iterations_count = 100000000;

// a visitor that increments a variant by N
template <int N>
struct add : boost::static_visitor<> {
    template <typename T>
    void operator() (T& t) const {
        t += N;
    }
};

// a visitor that extracts the value of type T from a variant
template <typename T, typename V>
T get(const V& v) {
    struct getter : boost::static_visitor<T> {
        T operator() (T t) const { return t; }
    };
    return boost::apply_visitor(getter(), v);
}

// a number interface
struct number {
    virtual void increment() = 0;
};

// number interface implementation for all types
template <typename T>
struct number_ : number {
    number_(T t = 0) : t(t) {}
    virtual void increment() { t += 1; }
    T t;
};

int use_virtual() {
    number_<int> num_int;
    number* num = &num_int;
    for (int i = 0; i < iterations_count; i++) {
        num->increment();
    }
    return num_int.t;
}

int use_variant() {
    typedef boost::variant<int, float, double> number;
    number num = 0;
    for (int i = 0; i < iterations_count; i++) {
        boost::apply_visitor(add<1>(), num);
    }
    return get<int>(num);
}

int main() {
    using namespace boost::posix_time;
    ptime start, end;
    time_duration d1, d2;

    // virtual
    start = microsec_clock::universal_time();
    int i1 = use_virtual();
    end = microsec_clock::universal_time();
    // store result
    d1 = end - start;

    // variant
    start = microsec_clock::universal_time();
    int i2 = use_variant();
    end = microsec_clock::universal_time();
    // store result
    d2 = end - start;

    // output
    std::cout <<
        boost::format(
            "Virtual: %1% = %2%\n"
            "Variant: %3% = %4%\n"
        ) % d1 % i1 % d2 % i2;
}