typedef enums provide a convenient way to describe a set of name-value pairs. Is there a way to chain them together to create deeper structures, with enums at every level?
For instance, I have the following:
typedef enum logic {ALPHA=0, BETA=1} a_t;
typedef enum logic {GAMMA=0, DELTA=1} b_t;
typedef enum logic {ZETA=0, ETA=1} c_t;
...
I want to create a variable c which is formed of a_t and b_t. Is this possible?
Something like:
a_t b_t c;
so at every dimension of c, I can have enums.
EDIT: Some clarification - assume a_t, b_t and c_t are immutable, as they are generated automatically, and there are hundreds of such different enums. I want to compose bigger structures from them as needed, because automatically generating all combinations would make the code too big and messy.
For instance, say my a_t describes the masters and b_t describes the slaves. I want to create a structure with this hierarchy in my signal, while still using enums for readability and ease of use.
So, something like this:
c[MASTER_0][SLAVE_0]
c[MASTER_0][SLAVE_1]
c[MASTER_1][SLAVE_0]
c[MASTER_1][SLAVE_1]
Are you perhaps referring to an associative array, such as:
c[ALPHA] = BETA;
If so, you could simply refer to it as:
b_t c[a_t];
This means: create an associative array c whose keys are of enum type a_t and whose values are of enum type b_t. You could keep going if you'd like :)
typedef enum logic {ALPHA=0, BETA=1} a_t;
typedef enum logic {GAMMA=0, DELTA=1} b_t;
typedef enum logic {BAD_AT=0, GREEK_LETTERS=1} c_t;
c_t my_data_structure[a_t][b_t];
// Assigning a value
my_data_structure[ALPHA][GAMMA] = GREEK_LETTERS;
Also, I think you're slightly misunderstanding the use of typedef. It does not exactly describe a set of name-value pairs; rather, it gives a new name to a data type. It is the enum that actually creates the 'set of name-value pairs', though I'd clarify that it is essentially assigning identifiers to values. It would help if you could explain the application for a clearer answer.
You cannot create one enum typedef from another or from a group of others; some might call that extending an enum. You also cannot have an enum with multiple names for the same value.
What you can do is have an associative array with name/value pairs, and join those arrays together.
int a[string], b[string], c[string];

initial begin
  a = '{"ALPHA":0, "BETA":1};
  b = '{"GAMMA":0, "DELTA":1};
  c = a;                      // start from a's name/value pairs
  foreach (b[s]) c[s] = b[s]; // merge in b's pairs
end
There are ways of gathering the names of each enumerated type to initialize the associative array as well.
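For example, here is a minimal sketch (reusing the a_t from the question) that walks an enum with the built-in first()/next()/name() methods to populate such an associative array; the loop ends when next() wraps back around to the first literal:

typedef enum logic {ALPHA=0, BETA=1} a_t;

int c[string];
a_t e;

initial begin
  e = e.first();
  do begin
    c[e.name()] = int'(e); // e.g. c["ALPHA"] = 0
    e = e.next();          // next() wraps to first() after the last literal
  end while (e != e.first());
end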
While researching interface values in Go, I found a great (maybe outdated) article by Russ Cox.
According to it:
The itable begins with some metadata about the types involved and then becomes a list of function pointers.
The implementation for this itable should be the one from src/runtime/runtime2.go:
type itab struct {
    inter *interfacetype
    _type *_type
    hash  uint32 // copy of _type.hash. Used for type switches.
    _     [4]byte
    fun   [1]uintptr // variable sized. fun[0]==0 means _type does not implement inter.
}
The first confusing thing is: how can an array be variable sized?
Second, assuming we have a function pointer at index 0 for a method that satisfies the interface, where would a second/third/... function pointer be stored?
The compiled code and runtime access fun as if the field were declared fun [n]uintptr, where n is the number of methods in the interface. The second method is stored at fun[1], the third at fun[2], and so on. The Go language does not have a variable-size array feature like this, but unsafe shenanigans can be used to simulate it.
Here's how itab is allocated:
m = (*itab)(persistentalloc(unsafe.Sizeof(itab{})+uintptr(len(inter.mhdr)-1)*goarch.PtrSize, 0, &memstats.other_sys))
The function persistentalloc allocates memory; its first argument is the size to allocate. The expression len(inter.mhdr) is the number of methods in the interface.
Here's code that creates a slice over the variable-size array:
methods := (*[1 << 16]unsafe.Pointer)(unsafe.Pointer(&m.fun[0]))[:ni:ni]
The expression methods[i] refers to the same element as m.fun[i] in a hypothetical world where m.fun is a variable size array with length > i. Later code uses normal slice syntax with methods to access the variable size array m.fun.
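Here is a self-contained sketch of the same trick applied to a made-up header type (not the runtime's itab; the struct and the sizes are illustrative only):

package main

import (
    "fmt"
    "unsafe"
)

// header ends in a one-element array that we treat as variable sized,
// just like itab.fun.
type header struct {
    count int
    vals  [1]uint64
}

func main() {
    const n = 4
    // Allocate room for the header plus n-1 extra elements, mirroring
    // the persistentalloc call above. A byte slice stands in for the
    // runtime allocator here; it is aligned well enough for a sketch.
    buf := make([]byte, unsafe.Sizeof(header{})+uintptr(n-1)*unsafe.Sizeof(uint64(0)))
    h := (*header)(unsafe.Pointer(&buf[0]))
    h.count = n

    // View the trailing array as a slice of length n, as the runtime does.
    vals := (*[1 << 16]uint64)(unsafe.Pointer(&h.vals[0]))[:n:n]
    for i := range vals {
        vals[i] = uint64(i * i)
    }
    fmt.Println(vals) // [0 1 4 9]
}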
I have two concrete types called CreationOperator and AnnihilationOperator and I want to define a new concrete type that represents a string of operators and a real coefficient that multiplies it.
I found it natural to define an abstract type FermionicOperator, from which both CreationOperator and AnnihilationOperator inherit, i.e.
abstract type FermionicOperator end

struct CreationOperator <: FermionicOperator
    ...
end

struct AnnihilationOperator <: FermionicOperator
    ...
end
because I can define many functions with signatures of the type function(op1::FermionicOperator, op2::FermionicOperator) = ..., such as arithmetic operations (I am building an algebra system, so I have to define operations such as *, +, etc. on the operators).
Then I would go on and define a concrete type OperatorString
struct OperatorString
    coef::Float64
    ops::Vector{FermionicOperator}
end
However, according to the Julia manual, I believe that OperatorString is not ideal for performance, because the compiler does not know anything about FermionicOperator and thus functions involving OperatorString will be inefficient (and I will have many functions manipulating strings of operators).
I found the following solution, however I am not sure about its implication and if it really makes a difference.
Instead of defining FermionicOperator as an abstract type, I define it is as the Union of CreationOperator and AnnihilationOperator, i.e.
struct CreationOperator
    ...
end

struct AnnihilationOperator
    ...
end

FermionicOperator = Union{CreationOperator,AnnihilationOperator}
This would still allow functions of the form function(op1::FermionicOperator, op2::FermionicOperator) = ..., while at the same time, to my understanding, Union{CreationOperator,AnnihilationOperator} acts as a concrete type, so that OperatorString is well-defined and the compiler can optimize where possible.
I am particularly in doubt because I also considered using the built-in Expr struct to define my string of operators (it would actually be more general), whose field args is a vector with abstract-type elements: very similar to my first design attempt. However, while implementing arithmetic operations on Expr I had the feeling I was doing something "wrong" and that I was better off defining my own types.
If your ops field is a vector that, in any given instance, is either all CreationOperators or all AnnihilationOperators, then the recommended solution is to use a parameterized struct.
abstract type FermionicOperator end

struct CreationOperator <: FermionicOperator
    ...
end

struct AnnihilationOperator <: FermionicOperator
    ...
end

struct OperatorString{T<:FermionicOperator}
    coef::Float64
    ops::Vector{T}
    function OperatorString(coef::Float64, ops::Vector{T}) where {T<:FermionicOperator}
        return new{T}(coef, ops)
    end
end
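A quick usage sketch (assuming, hypothetically, that CreationOperator holds a single value::Int field as in the Union example further down); note how the vector's concrete element type flows into the struct parameter:

ops = [CreationOperator(1), CreationOperator(2)]  # Vector{CreationOperator}
s = OperatorString(0.25, ops)                     # OperatorString{CreationOperator}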
If your ops field is a vector that, in any given instance, may be a mixture of CreationOperators and AnnihilationOperators, then you can use a Union. Because the union is small (2 types), your code will remain performant.
struct CreationOperator
    value::Int
end

struct AnnihilationOperator
    value::Int
end

const Fermionic = Union{CreationOperator, AnnihilationOperator}

struct OperatorString
    coef::Float64
    ops::Vector{Fermionic}
    function OperatorString(coef::Float64, ops::Vector{Fermionic})
        return new(coef, ops)
    end
end
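A matching usage sketch; the typed literal Fermionic[...] matters, because an untyped literal mixing the two structs would produce a Vector{Any} and miss the inner constructor above:

ops = Fermionic[CreationOperator(1), AnnihilationOperator(2)]
s = OperatorString(0.5, ops)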
Although not shown, even with the Union approach, you may want to use the abstract type also -- just for future simplicity and flexibility in function dispatch. It is helpful in developing robust multidispatch-driven logic.
std::move() is stealing the string's value but not the int's; please help me understand why.
#include <iostream>
#include <string>
using std::string;

int main()
{
    int i = 50;
    string str = "Mahesh";
    int j = std::move(i);
    string name = std::move(str);
    std::cout << "i: " << i << " J: " << j << std::endl;
    std::cout << "str: " << str << " name: " << name << std::endl;
    return 0;
}
Output
i: 50 J: 50
str: name: Mahesh
std::move is a cast to an rvalue reference. This can change overload resolution, particularly with regard to constructors.
int is a fundamental type; it doesn't have any constructors. The rules for int initialisation do not care whether the source expression is const, volatile, lvalue or rvalue, so the behaviour is a copy.
One reason for this is that there is no benefit to a (destructive) move. Another is that there is no such thing as an "empty" int, in the sense that there are "empty" std::strings and "empty" std::unique_ptrs.
std::move() itself doesn't actually do any moving. It is simply used to indicate that an object may be moved from. The actual moving must be implemented for the respective types by a move constructor/move assignment operator.
std::move(x) returns an unnamed rvalue reference to x. rvalue references are really just like normal references. Their only purpose is simply to carry along the information about the "rvalue-ness" of the thing they refer to. When you then use the result of std::move() to initialize/assign to another object, overload resolution will pick a move constructor/move assignment operator if one exists. And that's it. That is literally all that std::move() does. However, the implementation of a move constructor/move assignment operator knows that the only way it could have been called is when the value passed to it is about to expire (otherwise, the copy constructor/copy assignment operator would have been called instead). It, thus, can safely "steal" the value rather than make a copy, whatever that may mean in the context of the particular type.
There is no general answer to the question what exactly it means to "steal" a value from an object. Whoever defines a type has to define whether it makes sense to move objects of this type and what exactly it means to do so (by declaring/defining the respective member functions). Built-in types don't have any special behavior defined for moving their values. So in the case of an int you just get what you get when you initialize an int with a reference to another int, which is a copy…
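To make the last point concrete, here is a minimal sketch of a hypothetical buffer-owning type whose move constructor "steals" in the same way std::string's does:

#include <iostream>
#include <utility>

struct Buffer {
    int* data = nullptr;

    explicit Buffer(int n) : data(new int[n]) {}
    ~Buffer() { delete[] data; }

    Buffer(const Buffer&) = delete;           // copying disabled to keep the sketch small
    Buffer(Buffer&& other) noexcept : data(other.data) {
        other.data = nullptr;                 // the "steal": the source is left empty
    }
};

int main() {
    Buffer a(4);
    Buffer b = std::move(a);                  // overload resolution picks the move constructor
    std::cout << (a.data == nullptr) << '\n'; // prints 1: a's value was stolen
    return 0;
}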
Assume I write the following code:
template<typename T1, typename T2>
struct dummy {
    T1 first;
    T2 second;
};
I would like to know in general how I can order members in a template class by descending size. In other words, I would like the above class to be
struct dummy {
    int first;
    char second;
};
when instantiated as dummy<int, char>. However, I would like to obtain
struct dummy {
    int second;
    char first;
};
in the case dummy<char, int>.
On most platforms, padding for std::pair occurs only at "natural" alignment. This sort of padding will end up the same for either order.
For std::tuple, some arrangements can be more efficient than others, but the library can choose any memory layout it likes, so any TMP you add on top is only second-guessing.
In general, yes, you can define a sorting algorithm using templates, but it would be a fair bit of work.
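To give a flavour of the two-type case, here is a minimal sketch (pair_desc and sorted_pair are made-up names, and it sidesteps the member-naming problem raised in the next answer):

#include <type_traits>

// Storage with the larger type first; "a" and "b" are placeholder names.
template<typename A, typename B>
struct pair_desc {
    A a;
    B b;
};

// Pick the member order at compile time by comparing sizes.
template<typename T1, typename T2>
using sorted_pair = std::conditional_t<
    (sizeof(T1) >= sizeof(T2)),
    pair_desc<T1, T2>,
    pair_desc<T2, T1>>;

// Both instantiations share the same descending-size layout.
static_assert(std::is_same_v<sorted_pair<char, int>, sorted_pair<int, char>>);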
This can be done; the only issue is the naming: how would you name your fields?
I did what you are asking for not long ago. I used std::tuple and some metaprogramming, with a merge sort to reorder the template arguments. It is really fun to do (if you like functional programming).
For the naming I used some macros to access the fields.
I really encourage you to do it yourself, as it is really interesting intellectually; however, if you'd like to see some code, please tell me!
I've searched for the use of @specialized in the source code of the standard library of Scala 2.8.1. It looks like only a handful of traits and classes use this annotation: Function0, Function1, Function2, Tuple1, Tuple2, Product1, Product2, AbstractFunction0, AbstractFunction1, AbstractFunction2.
None of the collection classes are @specialized. Why not? Would this generate too many classes?
This means that using collection classes with primitive types is very inefficient, because there will be a lot of unnecessary boxing and unboxing going on.
What's the most efficient way to have an immutable list or sequence (with IndexedSeq characteristics) of Ints, avoiding boxing and unboxing?
Specialization has a high cost on the size of classes, so it must be added with careful consideration. In the particular case of collections, I imagine the impact would be huge.
Still, it is an ongoing effort -- the Scala library has barely started to be specialized.
Specialization can be expensive (exponential) in both class-file size and compile time; it's not just the class size the accepted answer mentions.
Open your Scala REPL and type this:
import scala.{specialized => sp}
trait S1[@sp A, @sp B, @sp C, @sp D] { def f(p1: A): Unit }
Sorry :-). It's like a compiler bomb.
Now, let's take a simple trait:
trait Foo[A] { }
The above will result in two compiled classes: Foo, the pure interface, and Foo$class, the class implementation.
Now,
trait Foo[@specialized A] { }
A specialized type parameter here gets expanded/rewritten for 9 different primitive types (void, boolean, byte, char, int, long, short, double, float). So you basically end up with 20 classes (the 9 specializations plus the generic version, each as interface plus implementation) instead of 2.
Going back to the trait with 4 specialized type parameters, classes get generated for every combination of possible primitive types, i.e. it is exponential in complexity:
2 * 10 ^ (number of specialized parameters)
For S1 above, that is 2 * 10^4 = 20,000 classes.
If you are defining a class for specific primitive types, you should be more explicit about it, such as:
trait Foo[@specialized(Int) A, @specialized(Int, Double) B] { }
Understandably, one has to be frugal with @specialized when building general-purpose libraries.
Partial answer to my own question: I can wrap an array in an IndexedSeq like this:
import scala.collection.immutable.IndexedSeq

def arrayToIndexedSeq[@specialized(Int) T](array: Array[T]): IndexedSeq[T] =
  new IndexedSeq[T] {
    def apply(idx: Int): T = array(idx)
    def length: Int = array.length
  }
(Of course you could still modify the contents if you have access to the underlying array, but I would make sure the array isn't passed to other parts of my program.)
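For completeness, a quick usage sketch (how much boxing is actually avoided depends on how the compiler specializes the anonymous class):

val xs = arrayToIndexedSeq(Array(1, 2, 3))
println(xs(1))     // 2
println(xs.length) // 3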