How to dynamically assemble a typelist for passing as a variadic template parameter - c++14

I have a function that takes a variadic template parameter as its input. What I really need to do is use some kind of magic container (typelist, tuple, etc.) to feed this function parameter. The main problem is that this magic container needs to be dynamically assembled at runtime based on inputs to previous function calls.
Standard tuple generation obviously cannot work in this environment, so I believe some type of wrapper or helper with some typename mangling is in order, but the way to do so eludes me. Some example pseudo-code of what I'm trying to do follows. The user will call AddComponent() any number of times to add components to an owning manager. For each call to AddComponent(), I need to store the passed-in 'Component' type in the magic container, so that I end up with a container of all the Component types that have been added. After all this, I need to call GetView() using the assembled typename list as the parameter to the variadic template. A tuple seems to fit best here, but how do I assemble it correctly? Here's the code:
template<typename Component, typename... Args>
void Blueprint::AddComponent(ComponentUsage usage, Args&&... args)
{
    // Create component object with given args
    // Add 'Component' type to magic container
}

template<typename... Component>
EntityView<Component...> EntityManager::GetView()
{
    // Create view from list of component types
}

What you're describing sounds a lot like a builder pattern and you can get similar behavior with syntax like this:
// view would be EntityView<Location, Enemy, Flying>
auto view = makeBlueprint()
    .AddComponent<Location>(...)
    .AddComponent<Enemy>(...)
    .AddComponent<Flying>(...)
    .GetView();
This would use a builder where each component added produces a slightly different builder type: starting from Builder<>, .AddComponent<Location>() would return a Builder<Location>, then Builder<Location, Enemy>, and so on.
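To make that concrete, here is a minimal, self-contained sketch of such a builder; Builder, makeBlueprint, EntityView, and the empty component structs are illustrative stand-ins, not the asker's real types:

template <typename... Components>
struct EntityView {};                        // stand-in for the real view type

template <typename... Components>
class Builder {
public:
    // Adding a component yields a *different* builder type that carries the
    // new component in its parameter pack.
    template <typename C, typename... Args>
    Builder<Components..., C> AddComponent(Args&&... /*args*/) const {
        // A real implementation would construct and store the component here.
        return Builder<Components..., C>{};
    }

    EntityView<Components...> GetView() const { return {}; }
};

inline Builder<> makeBlueprint() { return {}; }

// Hypothetical component types for the example:
struct Location {};
struct Enemy {};
struct Flying {};

int main() {
    auto view = makeBlueprint()
                    .AddComponent<Location>()
                    .AddComponent<Enemy>()
                    .AddComponent<Flying>()
                    .GetView();              // EntityView<Location, Enemy, Flying>
    (void)view;
}

Because every AddComponent call changes the static type of the builder, the whole chain has to be written as a single expression, which leads directly to the limitation below.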
However, this still does not allow for dynamic typing; something like this would not work:
auto blueprint = makeBlueprint()
    .AddComponent<Location>(...)
    .AddComponent<Enemy>(...);

if (... some check ...)
    blueprint = blueprint.AddComponent<Flying>(...);

auto view = blueprint.GetView();
I doubt this solves your problem, since the component list must still be fixed at compile time and is not "dynamically assembled at runtime". But I hope it offers you some insight regardless.

Related

What is the best type for a callable object in a template method?

Every time I write a signature that accepts a templated callable, I always wonder what the best type for the parameter is. Should it be a value type or a const reference type?
For example,
template <class Func>
void execute_func(Func func) {
    /* ... */
}

// vs.

template <class Func>
void execute_func(const Func& func) {
    /* ... */
}
Is there any situation where the callable is larger than 64 bits (i.e. bigger than a pointer to a function)? Maybe std::function behaves differently?
In general, I do not like passing callable objects by const reference, because it is not that flexible (e.g. it cannot be used with mutable lambdas). I suggest passing them by value. If you check the STL algorithm implementations (e.g. std::for_each), all of the callable objects are passed by value as well.
Doing this, users are still able to use std::ref(func) or std::cref(func) to avoid unnecessary copies of the callable object (via reference_wrapper), if desired.
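As a small illustration of that convention, here is a sketch using a made-up Accumulator functor; passing by value copies the state, while std::ref lets the caller keep a single instance:

#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

// A stateful functor used only for illustration.
struct Accumulator {
    long long sum = 0;
    void operator()(int x) { sum += x; }
};

int main() {
    std::vector<int> v{1, 2, 3, 4};

    // Passed by value: for_each works on (and returns) its own copy.
    Accumulator byValue = std::for_each(v.begin(), v.end(), Accumulator{});
    std::cout << byValue.sum << '\n';   // 10

    // std::ref wraps the caller's object, so no copy of the state is made.
    Accumulator shared;
    std::for_each(v.begin(), v.end(), std::ref(shared));
    std::cout << shared.sum << '\n';    // 10
}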
Is there any situation where the callable is greater than 64bits
From my experience working on CAD/CAE applications, quite a lot. Functors can easily hold more than 64 bits of data: more than two ints, more than one double, or more than one pointer is all it takes to exceed that limit in Visual Studio.
There is no best type. What if you have a noncopyable functor? The first template will not work, as it will try to use the deleted copy constructor. You would have to move it, but then you (probably) lose ownership of the object. It all depends on the intended use. And yes, std::function can be much bigger than size_t: if you bind a member function, it is already two words (object pointer and function pointer), and if you bind some arguments it may grow further. The same goes for lambdas; every captured value is stored in the lambda, which is basically a functor in this case. A const reference will not work if your callable has a non-const operator(). Neither signature is perfect for all uses. Sometimes the best option is to provide a few different overloads so you can handle all cases; SFINAE is your friend here.
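A short sketch of the two failure modes described above; the callables are invented for illustration:

#include <memory>
#include <utility>

template <class Func>
void by_value(Func func) { func(); }

template <class Func>
void by_const_ref(const Func& func) { func(); }

int main() {
    // Move-only callable: capturing a unique_ptr makes the lambda noncopyable.
    auto moveOnly = [p = std::make_unique<int>(42)] { return *p; };
    // by_value(moveOnly);              // error: copy constructor is deleted
    by_value(std::move(moveOnly));      // compiles, but the caller gives up the object

    // Mutable lambda: its operator() is non-const.
    int calls = 0;
    auto counting = [calls]() mutable { ++calls; };
    by_value(counting);                 // fine, operates on a copy
    // by_const_ref(counting);          // error: non-const operator() through a const reference
}

Neither signature covers both cases, which is why providing more than one overload (or constraining with SFINAE) can be the pragmatic answer.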

Arguments/Struct compatibility when calling a dynamic library on OSX

I have two separate projects on OSX:
- The first is a Mach-O dynamic library project in Xcode. It has a function that is called with a struct argument.
- The second is a Qt application project in Qt Creator. It loads the dynamic library and calls the function, passing a struct as an argument.
Of course both share the same declaration of that function and struct.
The problem is, when I call the function, the values in the struct received in the function have nothing to do with the values I sent from the application. A simple printf before calling the function and another one within the function shows completely different values.
What did I do wrong?
My struct is composed of the following elements:
- multiple std::string
- multiple int
- multiple char[64]
Thanks!
The problem was an incompatibility with std::string: compiler flags and standard library settings can change how std::string is implemented, so the two binaries disagreed on the struct layout. I just changed everything to char[].
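To illustrate the workaround, here is a sketch of the kind of plain-data struct that can safely cross such a boundary; the Message type and its fields are hypothetical, not the asker's actual struct:

// Sketch of a layout-stable struct for the library boundary. Only fixed-size,
// trivially copyable members are used; nothing whose layout depends on the
// standard library implementation or on compiler/build flags (like std::string).
struct Message {
    char name[64];
    char description[64];
    int  id;
};

// extern "C" keeps the exported symbol name (and calling convention)
// independent of the C++ compiler as well.
extern "C" void processMessage(const Message* msg);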

Can I create an alias of a type in Golang?

I'm struggling with my learning of Go.
I found this neat implementation of a Set in Go: gopkg.in/fatih/set.v0, but I'd prefer naming my sets with a more explicit name than set.Set, doing something like:
type View set.Set
In essence, I want my View type to inherit set.Set's methods. Because, well, a View is a set.Set of descriptors. But I know Go is pretty picky about inheritance, and about typing in general.
For now I've been trying the following kind of inheritance, but it causes loads of errors when I try to use functions like func Union(set1, set2 Interface, sets ...Interface) Interface or func (s *Set) Merge(t Interface):
type View struct {
    set.Set
}
I'd like to know if there's a way to achieve what I want in a Go-like way, or if I'm just trying to apply my good ol' OO practices to a language that discards them.
If anyone else is coming back to this question, as of Go 1.9 type aliases are now supported.
A type alias has the form: type T1 = T2
So in your example you can just do type View = set.Set and everything will work as you want.
Note that the simple aliasing you proposed initially is syntactically valid. However, having had a quick look at the set library, rather than aliasing set.Set it might make more sense to alias set.Interface, e.g.:
package main

import (
    "fmt"

    set "gopkg.in/fatih/set.v0"
)

// View is a type alias for the set.Interface interface
type View set.Interface

// Display takes one of our View types in order to print it.
func Display(view View) {
    fmt.Println(view.List())
}

func main() {
    // create our first set.Interface or View
    v1 := set.New()
    v1.Add("foo")

    // create our second set.Interface or View
    v2 := set.New("bar")

    // call a set function
    v3 := set.Union(v1, v2)

    // call our function that takes a View
    Display(v3)
}
You may have noticed I'm cheating somewhat, because I make no real mention of the aliased type in the above code other than when defining the parameter to the Display function, which, you'll note, takes a View rather than a set.Interface. If you have lots of functions working on these things, that might read more expressively for your domain.
Note that because our View type refers to an interface type, it precludes adding your own methods to that type, as Go doesn't allow an interface type to be used as a method receiver. By this I mean that you can't do anything like:
func (v View) Display() string {
    return v.String()
}
In summary, I think aliasing things is fine: it can make internal APIs more readable, and you can lean on the compiler to help eliminate certain classes of errors. However, it doesn't allow you to add functionality to the custom type. If that is required, an alternate approach would be necessary, either embedding or simple composition (i.e. a View has a Set).

How do I discern whether a Type is a static array initializer?

I'll start by saying that I'm working off the assumption that static array initializers are turned into private nested classes by the compiler, usually with names like __StaticArrayInitTypeSize=12. As I understand it, having read this extremely informative article, these private classes are value types, and they aren't tagged with the CompilerGeneratedAttribute class.
I'm working on a project that needs to process certain types and ignore others.
I have to be able to process custom struct types, which, like the generated static array initializer classes, are value types. I must ignore the generated static array initializer classes. I also must ignore enumerations and delegates.
I'm pulling these classes with Linq, like so:
var typesToProcess = allTypes.Where(type => !type.IsEnum &&
                                            !type.IsArray &&
                                            !type.IsSubclassOf(typeof(Delegate)));
I'm fairly sure that the IsArray property isn't what I think it is. At any rate, the generated static array initializer class still shows up in the typesToProcess Enumerable.
Has anyone else dealt with this? How can I discern the difference between a custom struct and a generated static array initializer class? I could hack it by doing a string comparison of the type name against __StaticArrayInitTypeSize, but is there a cleaner solution?
Well, having just tried it myself with the C# 4 compiler, I got an internal class called <PrivateImplementationDetails>{D1E23401-19BC-4B4E-8CC5-2C6DDEE7B97C} containing a private nested struct called __StaticArrayInitTypeSize=12.
The class contained an internal static field of the struct type called $$method0x6000001-1. The field itself was decorated with CompilerGeneratedAttribute.
The problem is that all of this is implementation-specific. It could change in future releases, or it could be different from earlier releases too.
Any member name containing <, > or = is an "unspeakable" name which will have been generated by the compiler, so you can view that as a sort of implicit CompilerGenerated, if that's any use. (There are any number of other uses for such generated types though.)

Where is the best place to locate enum types?

I have found that there is generally a single type or namespace that takes any particular enum as a parameter, and as a result I have always defined those enums there. Recently though, a co-worker made a big deal about how that was a stupid thing to do, and that you should always have an enum namespace at the root of your project where you define every one of your enum types.
Where is the best place to locate enum types?
Why treat enums differently to other types? Keep them in the same namespace as they're likely to be used - and assuming they're going to be used by other classes, make them top-level types in their own files.
The only type of type which I do commonly clump together is delegates - I sometimes have a Delegates.cs file with a bunch of delegates in. Less so with .NET 3.5 and Func/Action, mind you.
Also, namespaces are for separation of things that belong together logically. Not all classes belong in the same namespace just because they are classes. Likewise, not all enums belong in the same namespace just because they are enums. Put them with the code they logically belong in.
I generally try to put all my different types (classes, interfaces and enums) in their own files, regardless of how small they are. It just makes it much easier to find and manage the file they're in, especially if you don't happen to be in Visual Studio and have the "go to definition" feature available. I've found that nearly every time I've put a "simple" type like that in another class, I end up either adding on to it later on, or reusing it in a way that it no longer makes sense for it to not have its own file.
As far as which namespace, it really depends on the design of whatever you're developing. In general, I try to mimic the .NET framework's convention.
I try to put everything associated with a class in the class. That includes not just enums, but also constants. I don't want to go searching elsewhere for the file or class containing the enums. In a large app with lots of classes and folders, it wouldn't always be obvious where to put the enum file so it would be easy to find.
If the enum is used in several closely related classes, you could create a base class so that common types like enums are shared there.
Of course, if an enum is really generic and widely used, you may want to create a separate class for them, along with other generic utilities.
I think you put enums and constants in the class that consumes them or that uses them to control code decisions the most, and you use code completion to find them. That way you don't have to remember where they are; they are associated with the class. So for example, if I have a ColoredBox class, I don't have to think about where they are. They would be part of ColoredBox: ColoredBox.Colors.Red, ColoredBox.Colors.Blue, etc.
I think of the enum and constant as a property or description of that class.
If it is used by multiple classes and no one class reigns supreme, then it is appropriate to have an enum class or constants class.
This follows the rules of encapsulation: isolating properties from dissimilar classes. What if you decide to change the RGB of Red in Circle objects but you don't want to change the red for ColoredBox objects? Encapsulating their properties enables this.
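A sketch of that idea in C++ terms; ColoredBox and its colors are just the example from this answer, not real project code:

#include <iostream>

class ColoredBox {
public:
    // The enum lives inside the class that uses it, so callers read it as a
    // property of ColoredBox instead of hunting for a separate enums file.
    enum class Color { Red, Green, Blue };

    explicit ColoredBox(Color c) : color_(c) {}
    Color color() const { return color_; }

private:
    Color color_;
};

int main() {
    ColoredBox box(ColoredBox::Color::Red);
    std::cout << (box.color() == ColoredBox::Color::Red) << '\n';  // prints 1
}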
I use nested namespaces for this. I like them better than putting the enum within a class because outside of the class you have to use the full MyClass::MyEnum usage even if MyEnum is not going to clash with anything else in scope.
By using a nested namespace you can use the "using" syntax. Also I will put enums that relate to a given subsystem in their own file so you don't get dependency problems of having to include the world to use them.
So in the enum header file you get:
// MyEnumHeader.h
// Consolidated enum header file for this dll, lib, subsystem, whatever.
namespace MyApp
{
    namespace MyEnums
    {
        enum SomeEnum { EnumVal0, EnumVal1, EnumVal2 };
    }
}
And then in the class header file you get:
// MyInterfaceHeader.h
// Class interfaces for the subsystem with all the expected dependencies.
#include "MyEnumHeader.h"

namespace MyApp
{
    class MyInterface
    {
    public:
        virtual void DoSomethingWithEnumParam(MyEnums::SomeEnum enumParam) = 0;
    };
}
Or use as many enum header files as makes sense. I like to keep them separate from the class headers so the enums can be params elsewhere in the system without needing the class headers. Then if you want to use them elsewhere you don't have to have the encapsulating class defs as you would if the enums were declared within the classes.
And as mentioned before, in the outer code you can use the following:
using namespace MyApp::MyEnums;
What environment?
In .NET I usually create an empty class file, rename it to MyEnum or whatever to indicate it holds my enum and just declare it in there.
If my enumeration has any chance of ever being used outside the class I intend to use it, I create a separate source file for the enum. Otherwise I will place it inside the class I intend to use it.
Usually I find that the enum is centered around a single class -- as a MyClassOptions type of thing.
In that case, I place the enum in the same file as MyClass, but inside the namespace but outside the class.
namespace mynamespace
{
    public partial class MyClass
    {
    }

    enum MyClassOptions
    {
    }
}
I tend to define them where their use is evident. If I have a typedef for a struct that makes use of one, for example:
typedef enum {
    HI,
    GOODBYE
} msg_type;

typedef struct {
    msg_type type;
    union {
        int hivar;
        float goodbyevar;
    };
} msg;
