remove repetition when switching enum class - c++11

When I switch on an enum class I have to restate the enum class in every case. This bugs me, since outside of constexpr constructs it is hard to imagine what else I could mean. Is there a way to inform the compiler that everything inside a block should be resolved to an enum class of my choice if there is a match?
Consider the following example, which contains a compiling snippet and, for comparison, a non-compiling snippet (commented out) that I would like to write.
#include <cstdint>
#include <iostream>
enum class State : std::uint8_t;
void writeline(const char * msg);
void Compiles(State some);
enum class State : std::uint8_t
{
zero = 0,
one = 1
};
int main()
{
Compiles(State::zero);
return 0;
}
void Compiles(State some)
{
switch (some)
{
case State::zero: //State::
writeline("0");
break;
case State::one: //State::
writeline("1");
break;
default:
writeline("d");
break;
}
}
//void WhatIWant(State some)
//{
// using State{ //this makes no sense to the compiler but it expresses what I want to write
// switch (some)
// {
// case zero: //I want the compiler to figure out State::zero
// writeline("0");
// break;
// case one: //I want the compiler to figure out State::one
// writeline("1");
// break;
// default:
// writeline("d");
// break;
// }
// }
//}
void writeline(const char * msg)
{
std::cout << msg << std::endl;
}
Is there a way to use a switch statement and have the compiler figure out the enum class, maybe after giving a hint once?

enum class is specially designed so that you have to apply State:: every time.
If you don't want to use the State:: prefix in every statement, just use an old-style enum from C++98.
NOTE: with C++11 you can still give a regular enum a fixed underlying type, e.g. enum MyEnum : std::uint8_t { ... }.
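For illustration, a minimal sketch of that (my own, not from the original answer): with an unscoped enum the enumerators are injected into the enclosing scope, so the case labels need no prefix, while the underlying type can still be fixed in C++11.
#include <cstdint>
#include <iostream>

// Unscoped enum with a fixed underlying type (C++11).
enum State : std::uint8_t { zero = 0, one = 1 };

void Handle(State some)
{
    switch (some)
    {
    case zero:   // no State:: prefix needed
        std::cout << "0\n";
        break;
    case one:
        std::cout << "1\n";
        break;
    default:
        std::cout << "d\n";
        break;
    }
}
The trade-off is that you lose the scoping and the no-implicit-conversion guarantees that enum class provides.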

Related

using C++11 templates to generate multiple versions of an algorithm

Say I'm making a general-purpose collection of some sort, and there are 4-5 points where a user might want to choose implementation A or B. For instance:
homogeneous or heterogeneous
do we maintain a count of the contained objects, which is slower
do we have it be thread-safe or not
I could just make 16 or 32 implementations, with each combination of features, but obviously this won't be easy to write or maintain.
I could pass in boolean flags to the constructor, that the class could check before doing certain operations. However, the compiler doesn't "know" what those arguments were so has to check them every time, and just checking enough boolean flags itself imposes a performance penalty.
So I'm wondering if template arguments can somehow be used so that at compile time the compiler sees if (false) or if (true) and therefore can completely optimize out the condition test, and if false, the conditional code. I've only found examples of templates as types, however, not as compile-time constants.
The main goal would be to utterly eliminate those calls to lock mutexes, increment and decrement counters, and so on, but additionally, if there's some way to actually remove the mutex or counters from the object structure as well, that would be truly optimal.
Conditional computation before C++17 was mostly about template specialization: either specializing the function itself
template<class T> void f(T &); // primary template, specialized below
template<> void f<int>(int &) {
    std::cout << "Locking an int...\n";
    std::cout << "Unlocking an int...\n";
}
template<> void f<std::mutex>(std::mutex &m) {
    m.lock();
    m.unlock();
}
But this actually creates rather branchy code (in your case, I suspect), so a more sound alternative would be to extract all the dependent, type-specific parts into a static interface and define a static implementation of it for each particular concrete type:
template<class T> struct lock_traits; // interface
template<> struct lock_traits<int> {
void lock(int &) { std::cout << "Locking an int...\n"; }
void unlock(int &) { std::cout << "Unlocking an int...\n"; }
};
template<> struct lock_traits<std::mutex> {
void lock(std::mutex &m) { m.lock(); }
void unlock(std::mutex &m) { m.unlock(); }
};
template<class T> void f(T &t) {
lock_traits<T>::lock(t);
lock_traits<T>::unlock(t);
}
In C++17, if constexpr was finally introduced, so now not all branches have to compile in all circumstances.
template<class T> void f(T &t) {
    if constexpr (std::is_same_v<T, std::mutex>) {
        t.lock();
    }
    else if constexpr (std::is_same_v<T, int>) {
        std::cout << "Locking an int...\n";
    }
    if constexpr (std::is_same_v<T, std::mutex>) {
        t.unlock();
    }
    // forgot to unlock an int here :(
}
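To tie this back to the original question: non-type (e.g. bool) template parameters combine naturally with either technique. A minimal sketch of the idea, assuming C++17; ThreadSafe and Counted are illustrative names of my own, not from the question.
#include <cstddef>
#include <mutex>

template <bool ThreadSafe, bool Counted>
class Collection {
public:
    void insert(int value) {
        if constexpr (ThreadSafe) {          // branch discarded at compile time when false
            std::lock_guard<std::mutex> guard(mutex_);
            do_insert(value);
        } else {
            do_insert(value);
        }
    }

    // Only meaningful when the Counted feature is enabled; the assert fires
    // only if size() is actually instantiated with Counted == false.
    std::size_t size() const {
        static_assert(Counted, "size() requires the Counted feature");
        return count_;
    }

private:
    void do_insert(int /*value*/) {
        if constexpr (Counted) ++count_;
        // ... actual storage omitted in this sketch ...
    }

    std::mutex mutex_;      // note: still part of the object layout even when
    std::size_t count_ = 0; // unused; removing it needs e.g. a conditional base class
};
Collection<false, false> then contains no locking or counting code at all on the hot path; making the unused members also vanish from the object layout takes extra work (conditional base classes or similar), as noted in the comments.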

Can a method of a class (in a shared_ptr) be tied to a static function in a traits class?

Historically, I've been using trait classes to hold information and feed that into a "generic" function that runs the same "algorithm", differing only by the trait class. For example: https://onlinegdb.com/ryUo7WRmN
#include <cstddef>
#include <iostream>
enum selector { SELECTOR1, SELECTOR2, SELECTOR3, };
// declaration
template < selector T> struct example_trait;
template<> struct example_trait<SELECTOR1> {
static constexpr size_t member_var = 3;
static size_t do_something() { return 0; }
};
template<> struct example_trait<SELECTOR2> {
static constexpr size_t member_var = 5;
static size_t do_something() { return 0; }
};
// pretend this is doing something useful but common
template < selector T, typename TT = example_trait<T> >
void function() {
std::cout << TT::member_var << std::endl;
std::cout << TT::do_something() << std::endl;
}
int main()
{
function<SELECTOR1>();
function<SELECTOR2>();
return 0;
}
I'm not sure how to create "generic" algorithms like this when dealing with polymorphic classes.
For example: https://onlinegdb.com/S1hFLGC7V
Below I have created an inheritance hierarchy. In this example I have a catch-all base class that defaults all the methods to something (0 in this case), and then each derived class overrides specific methods.
#include <iostream>
#include <memory>
#include <type_traits>
#include <assert.h>
using namespace std;
struct Base {
virtual int get_thing_one() {
return 0;
}
virtual int get_thing_two() {
return 0;
}
virtual int get_thing_three() {
return 0;
}
virtual int get_thing_four() {
return 0;
}
};
struct A : public Base {
virtual int get_thing_one() override {
return 1;
}
virtual int get_thing_three() override {
return 3;
}
};
struct B : public Base {
virtual int get_thing_one() override {
return 2;
}
virtual int get_thing_four() override{
return 4;
}
};
Here I created a simple factory; it is not elegant, but it works for illustrative purposes:
// example simple factory
std::shared_ptr<Base> get_class(const int input) {
switch(input)
{
case 0:
return std::shared_ptr<Base>(std::make_shared<A>());
break;
case 1:
return std::shared_ptr<Base>(std::make_shared<B>());
break;
default:
assert(false);
break;
}
}
So this is the class of interest. It is a class that does "something" with the data from the classes above. The methods below are a simple addition example, but imagine a more complicated algorithm that is very similar for every method.
// class that uses the shared_ptr
class setter {
private:
std::shared_ptr<Base> l_ptr;
public:
setter(const std::shared_ptr<Base>& input):l_ptr(input)
{}
int get_thing_a()
{
return l_ptr->get_thing_one() + l_ptr->get_thing_two();
}
int get_thing_b()
{
return l_ptr->get_thing_three() + l_ptr->get_thing_four();
}
};
int main()
{
constexpr int select = 0;
std::shared_ptr<Base> example = get_class(select);
setter l_setter(example);
std::cout << l_setter.get_thing_a() << std::endl;
std::cout << l_setter.get_thing_b() << std::endl;
return 0;
}
How can I make the "boilerplate" inside the setter class more generic? I can't use traits as I did in the example above because I can't tie static functions to an object. So is there a way to make the boilerplate example more common?
Something along the lines of having a selector, say:
enum thing_select { THINGA, THINGB, };
template < thing_select T >
struct thing_traits;
template <>
struct thing_traits<THINGA>
{
static int first_function() --> somehow tied to shared_ptr<Base> 'thing_one' method
static int second_function() --> somehow tied to shared_ptr<Base> 'thing_two' method
}
template <>
struct thing_traits<THINGB>
{
static int first_function() --> somehow tied to shared_ptr<Base> 'thing_three' method
static int second_function() --> somehow tied to shared_ptr<Base> 'thing_four' method
}
// generic function I'd like to create
template < thing_select T, typename TT = thing_traits<T> >
int perform_action(...)
{
return TT::first_function(..) + TT::second_function(..);
}
I ideally would like to modify the class above to something along the lines of
// Inside setter class further above
int get_thing_a()
{
return perform_action<THINGA>(...);
}
int get_thing_b()
{
return perform_action<THINGB>(...);
}
The answer is: maybe I can't, and I need to pass in the shared_ptr as a parameter and call the specific methods I need instead of trying to tie a shared_ptr method to a static function (in hindsight, that doesn't sound like a good idea... but I wanted to bounce the idea around).
Whoever makes the actual call will need a reference to the object, one way or another. Therefore, assuming you want perform_action to perform the actual call, you will have to pass the parameter.
Now, if you really want to store which function of Base to call as a static in thing_traits without passing a parameter, you can leverage pointer to member functions:
template <>
struct thing_traits<THINGA>
{
static constexpr int (Base::*first_function)() = &Base::get_thing_one;
...
}
template < thing_select T, typename TT = thing_traits<T>>
int perform_action(Base & b)
{
return (b.*TT::first_function)() + ...;
}
You can also play instead with returning a function object that does the call for you (and the inner function takes the parameter).
It all depends on who needs to make the call and what information/dependencies you assume are available in each class/template.
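For concreteness, here is a minimal self-contained sketch (my own, assuming C++17 so the static constexpr members are implicitly inline) of the pointer-to-member-function approach applied to the Base hierarchy from the question:
#include <iostream>

struct Base {
    virtual int get_thing_one()   { return 0; }
    virtual int get_thing_two()   { return 0; }
    virtual int get_thing_three() { return 0; }
    virtual int get_thing_four()  { return 0; }
    virtual ~Base() = default;
};

struct A : Base {
    int get_thing_one() override   { return 1; }
    int get_thing_three() override { return 3; }
};

enum thing_select { THINGA, THINGB, };

template <thing_select T> struct thing_traits;

template <> struct thing_traits<THINGA> {
    static constexpr int (Base::*first_function)()  = &Base::get_thing_one;
    static constexpr int (Base::*second_function)() = &Base::get_thing_two;
};

template <> struct thing_traits<THINGB> {
    static constexpr int (Base::*first_function)()  = &Base::get_thing_three;
    static constexpr int (Base::*second_function)() = &Base::get_thing_four;
};

template <thing_select T, typename TT = thing_traits<T>>
int perform_action(Base& b) {
    return (b.*TT::first_function)() + (b.*TT::second_function)();
}

int main() {
    A a;
    std::cout << perform_action<THINGA>(a) << "\n"; // 1 + 0
    std::cout << perform_action<THINGB>(a) << "\n"; // 3 + 0
}
Because the member functions are virtual, the call through the pointer-to-member still dispatches dynamically, so setter::get_thing_a() could simply become return perform_action<THINGA>(*l_ptr);.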

libtorrent - storage_interface readv explanation

I have implemented a custom storage interface in libtorrent as described in the help section here.
The storage_interface is working fine, although I can't figure out why readv is only called sporadically while downloading a torrent. From my point of view, the overridden virtual function readv should get called each time I call handle->read_piece in piece_finished_alert; shouldn't it read the piece for read_piece_alert?
The buffer is provided in read_piece_alert without readv ever being notified.
So the question is: why is readv called only sporadically, and why is it not called on a read_piece() call? Is my storage_interface perhaps wrong?
The code looks like this:
struct temp_storage : storage_interface
{
virtual int readv(file::iovec_t const* bufs, int num_bufs
, int piece, int offset, int flags, storage_error& ec)
{
// Only called on random pieces while downloading a larger torrent
std::map<int, std::vector<char> >::const_iterator i = m_file_data.find(piece);
if (i == m_file_data.end()) return 0;
int available = i->second.size() - offset;
if (available <= 0) return 0;
if (available > num_bufs) available = num_bufs;
memcpy(&bufs, &i->second[offset], available);
return available;
}
virtual int writev(file::iovec_t const* bufs, int num_bufs
, int piece, int offset, int flags, storage_error& ec)
{
std::vector<char>& data = m_file_data[piece];
if (data.size() < offset + num_bufs) data.resize(offset + num_bufs);
std::memcpy(&data[offset], bufs, num_bufs);
return num_bufs;
}
virtual bool has_any_file(storage_error& ec) { return false; }
virtual ...
virtual ...
}
Initialized with:
storage_interface* temp_storage_constructor(storage_params const& params)
{
printf("NEW INTERFACE\n");
return new temp_storage(*params.files);
}
p.storage = &temp_storage_constructor;
The function below sets up alerts and invokes read_piece on each completed piece.
while(true) {
std::vector<alert*> alerts;
s.pop_alerts(&alerts);
for (alert* i : alerts)
{
switch (i->type()) {
case read_piece_alert::alert_type:
{
read_piece_alert* p = (read_piece_alert*)i;
if (p->ec) {
// read_piece failed
break;
}
// piece buffer, size is provided without readv
// notification after invoking read_piece in piece_finished_alert
break;
}
case piece_finished_alert::alert_type: {
piece_finished_alert* p = (piece_finished_alert*)i;
p->handle.read_piece(p->piece_index);
// Once the piece is finished, we read it to obtain the buffer in read_piece_alert.
break;
}
default:
break;
}
}
Sleep(100);
}
I will answer my own question. As Arvid said in the comments: readv was not invoked because of caching. Setting settings_pack::use_read_cache to false makes readv get invoked every time.
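For reference, a small sketch of applying that setting (the setting and calls are from libtorrent's settings_pack API; exact availability depends on the libtorrent version you build against):
// Disable the read cache so readv is hit on every read.
libtorrent::settings_pack pack;
pack.set_bool(libtorrent::settings_pack::use_read_cache, false);
s.apply_settings(pack); // s is the session used in the alert loop above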

Efficient message factory and handler in C++

Our company is rewriting most of the legacy C code in C++11. (Which also means I am a C programmer learning C++). I need advice on message handlers.
We have distributed system - Server process sends a packed message over TCP to client process.
In C code this was being done:
- parse message based on type and subtype, which are always the first 2 fields
- call a handler as handler[type](Message *msg)
- handler creates temporary struct say, tmp_struct to hold the parsed values and ..
- calls subhandler[type][subtype](tmp_struct)
There is only one handler per type/subtype.
Moving to C++11 and a multi-threaded environment, the basic idea I had was to:
1) Register a processor object for each type/subtype combination. This is actually a vector of vectors - vector< vector >
class MsgProcessor {
    // Factory function
    virtual Message *create();
    virtual void handler(Message *msg);
};
This will be inherited by different message processors
class AMsgProcessor : public MsgProcessor {
    Message *create() override;
    void handler(Message *msg) override;
};
2) Get the processor using a lookup into the vector of vectors, and get the message using the overridden create() factory function, so that we can keep the actual message and the parsed values inside the message.
3) Now a bit of a hack: this message should be sent to other threads for the heavy processing. To avoid having to look it up in the vector again, I added a pointer to the processor inside the message.
class Message {
const MsgProcessor *proc; // set to processor,
// which we got from the first lookup
// to get factory function.
};
So other threads will just do:
msg->proc->handler(msg);
This looks bad, but the hope is that it will help to separate the message handler from the factory. This is for the case when multiple type/subtype combinations want to create the same Message but handle it differently.
I was searching about this and came across:
http://www.drdobbs.com/cpp/message-handling-without-dependencies/184429055?pgno=1
It provides a way to completely separate the message from the handler. But I was wondering if my simple scheme above will be considered an acceptable design or not. Also is this a wrong way of achieving what I want?
Efficiency, as in speed, is the most important requirement for this application. Already we are doing a couple of memory jumps => 2 vectors + a virtual function call to create the message. There are 2 dereferences to get to the handler, which is not good from a caching point of view, I guess.
Though your requirement is unclear, I think I have a design that might be what you are looking for.
Check out http://coliru.stacked-crooked.com/a/f7f9d5e7d57e6261 for the fully fledged example.
It has following components:
An interface class for message processors: IMessageProcessor.
A base class representing a message: Message.
A registration class, Registrator, which is essentially a singleton storing the message processors corresponding to each (Type, Subtype) pair. It keeps the mapping in an unordered_map (you can tweak it a bit for better performance), and all the exposed APIs of Registrator are protected by a std::mutex.
Concrete implementations of IMessageProcessor: AMsgProcessor and BMsgProcessor in this case.
A simulate function to show how it all fits together.
Pasting the code here as well:
/*
* http://stackoverflow.com/questions/40230555/efficient-message-factory-and-handler-in-c
*/
#include <iostream>
#include <vector>
#include <tuple>
#include <mutex>
#include <memory>
#include <cassert>
#include <unordered_map>
class Message;
class IMessageProcessor
{
public:
virtual Message* create() = 0;
virtual void handle_message(Message*) = 0;
virtual ~IMessageProcessor() {};
};
/*
* Base message class
*/
class Message
{
public:
virtual void populate() = 0;
virtual ~Message() {};
};
using Type = int;
using SubType = int;
using TypeCombo = std::pair<Type, SubType>;
using IMsgProcUptr = std::unique_ptr<IMessageProcessor>;
/*
* Registrator class maintains all the registrations in an
* unordered_map.
* This class owns the MessageProcessor instance inside the
* unordered_map.
*/
class Registrator
{
public:
static Registrator* instance();
// Disable other types of construction
Registrator(const Registrator&) = delete;
void operator=(const Registrator&) = delete;
public:
// TypeCombo assumed to be cheap to copy
template <typename ProcT, typename... Args>
std::pair<bool, IMsgProcUptr> register_proc(TypeCombo typ, Args&&... args)
{
auto proc = std::make_unique<ProcT>(std::forward<Args>(args)...);
bool ok;
{
std::lock_guard<std::mutex> _(lock_);
std::tie(std::ignore, ok) = registrations_.insert(std::make_pair(typ, std::move(proc)));
}
return (ok == true) ? std::make_pair(true, nullptr) :
// Return the heap allocated instance back
// to the caller if the insert failed.
// The caller now owns the Processor
std::make_pair(false, std::move(proc));
}
// Get the processor corresponding to TypeCombo
// IMessageProcessor passed is non-owning pointer
// i.e the caller SHOULD not delete it or own it
std::pair<bool, IMessageProcessor*> processor(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
auto fitr = registrations_.find(typ);
if (fitr == registrations_.end()) {
return std::make_pair(false, nullptr);
}
return std::make_pair(true, fitr->second.get());
}
// TypeCombo assumed to be cheap to copy
bool is_type_used(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.find(typ) != registrations_.end();
}
bool deregister_proc(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.erase(typ) == 1;
}
private:
Registrator() = default;
private:
std::mutex lock_;
/*
* Should be replaced with a concurrent map if at all this
* data structure is the main contention point (which I find
* very unlikely).
*/
struct HashTypeCombo
{
public:
std::size_t operator()(const TypeCombo& typ) const noexcept
{
return std::hash<decltype(typ.first)>()(typ.first) ^
std::hash<decltype(typ.second)>()(typ.second);
}
};
std::unordered_map<TypeCombo, IMsgProcUptr, HashTypeCombo> registrations_;
};
Registrator* Registrator::instance()
{
static Registrator inst;
return &inst;
/*
* OR some other DCLP based instance creation
* if lifetime or creation of static is an issue
*/
}
// Define some message processors
class AMsgProcessor final : public IMessageProcessor
{
public:
class AMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on AMsg\n";
}
AMsg() = default;
~AMsg() = default;
};
Message* create() override
{
std::unique_ptr<AMsg> ptr(new AMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<AMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
// Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~AMsgProcessor();
};
AMsgProcessor::~AMsgProcessor()
{
}
class BMsgProcessor final : public IMessageProcessor
{
public:
class BMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on BMsg\n";
}
BMsg() = default;
~BMsg() = default;
};
Message* create() override
{
std::unique_ptr<BMsg> ptr(new BMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<BMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
//Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~BMsgProcessor();
};
BMsgProcessor::~BMsgProcessor()
{
}
TypeCombo read_from_network()
{
return {1, 2};
}
struct ParsedData {
};
Message* populate_message(Message* msg, ParsedData& pdata)
{
// Do something with the message
// Calling a dummy populate method now
msg->populate();
(void)pdata;
return msg;
}
void simulate()
{
TypeCombo typ = read_from_network();
bool ok;
IMessageProcessor* proc = nullptr;
std::tie(ok, proc) = Registrator::instance()->processor(typ);
if (!ok) {
std::cerr << "FATAL!!!" << std::endl;
return;
}
ParsedData parsed_data;
//..... populate parsed_data here ....
proc->handle_message(populate_message(proc->create(), parsed_data));
return;
}
int main() {
/*
* TODO: Not making use of or checking the return types after calling register;
* it's a must in production code!!
*/
// Register AMsgProcessor
Registrator::instance()->register_proc<AMsgProcessor>(std::make_pair(1, 1));
Registrator::instance()->register_proc<BMsgProcessor>(std::make_pair(1, 2));
simulate();
return 0;
}
UPDATE 1
The major source of confusion here seems to be that the architecture of the event system is unknown.
Any self-respecting event system architecture would look something like this:
A pool of threads polling on the socket descriptors.
A pool of threads for handling timer-related events.
A comparatively small number of threads (depending on the application) to do long blocking jobs.
So, in your case:
You will get a network event on the thread doing epoll_wait or select or poll.
Read the packet completely and get the processor using the Registrator::processor call.
NOTE: the processor call can be made without any locking if one can guarantee that the underlying unordered_map does not get modified, i.e. no new inserts are made once we start receiving events.
Using the obtained processor, we can create the Message and populate it.
Now, this is the part I am not sure how you want it to be. At this point we have the processor, on which you can call handle_message either from the current thread, i.e. the thread doing epoll_wait, or dispatch it to another thread by posting the job (Processor and Message) to that thread's receiving queue (a rough sketch of that follows).
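A minimal sketch of that dispatch (my own illustration, not part of the original answer; it reuses the IMessageProcessor and Message types from the code above):
#include <condition_variable>
#include <deque>
#include <mutex>

// Job = processor (non-owning) + the message it created; ownership of the
// Message passes to whoever finally calls handle_message (see the ownership
// questions flagged in the processor code above).
struct Job {
    IMessageProcessor* proc;
    Message* msg;
};

class JobQueue {
public:
    void post(Job j) {
        {
            std::lock_guard<std::mutex> g(m_);
            jobs_.push_back(j);
        }
        cv_.notify_one();
    }

    // Called in a loop by a worker thread.
    Job wait_and_pop() {
        std::unique_lock<std::mutex> g(m_);
        cv_.wait(g, [this] { return !jobs_.empty(); });
        Job j = jobs_.front();
        jobs_.pop_front();
        return j;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<Job> jobs_;
};

// I/O thread:  queue.post({proc, populate_message(proc->create(), parsed_data)});
// Worker:      Job j = queue.wait_and_pop(); j.proc->handle_message(j.msg);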

Is there a way to check if RVO was applied?

I had a whole story here about my frustrating journey to finding out that an unordered map I was returning from a function was not in fact RVO'd, even though I was certain at an earlier time that it was, but that's irrelevant.
Is there a way to check if RVO is happening in any given function? Or like a list of do's and dont's to follow to get the outcome I desire?
Yes. Create hooks for the lifecycle methods of your class:
#include <iostream>
struct A{
A()
{ std::cout<<"Ctor\n"; }
A(const A& o)
{ std::cout<<"CCtor\n"; }
A(A&& o)
{ std::cout<<"MCtor\n"; }
~A()
{ std::cout<<"Dtor\n"; }
private:
int vl_;
};
A getA(){
A a;
return a;
}
int main(){
A b = getA();
return 0;
}
Now with RVO, b is the same object as a in getA so you'll only see
Ctor
Dtor
You can suppress RVO, e.g., by adding an additional return point:
return a;
return A{a};
or moving:
return std::move(a);
And then you'll see:
Ctor
MCtor
Dtor
Dtor
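As an aside, a minimal variation (my own, not from the original answer): when two named locals compete for the return slot, most compilers cannot apply NRVO to both paths, so the move constructor shows up in the output:
A getEither(bool flag) {
    A a;
    A b;
    if (flag) return a; // at most one of these paths can be NRVO'd,
    return b;           // so expect an MCtor for the other
}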
You can verify that RVO was used in all the places where it's important to you:
#include <map>

template<typename T>
struct force_rvo: T {
force_rvo() {}
using T::T;
force_rvo(const force_rvo &);
force_rvo(force_rvo &&);
};
force_rvo<std::map<int, int>> f() {
force_rvo<std::map<int, int>> m;
m[17] = 42;
return m;
}
int main() {
auto m = f();
return m[42];
}
The force_rvo type pretends to be copyable and movable, otherwise the compiler would reject return m. But if either of those constructors is actually used, the linker will fail and tell you exactly where that happened. The wrapper is zero-cost, but it requires being used on both the caller and implementation sides, which may not be very convenient.
