Chisel function on the left side of the assignment - assign

What I tend to do:
Cat( ...a list of inputs... ) := a wide wire of UInt
When I do this, Scala reports a syntax error about Bits, so I change it to:
Cat( ...a list of inputs... ).flatten := a wide wire of UInt
This time Scala is satisfied, but FIRRTL reports an error, because the left side of the assignment gets turned into a wire and connecting to it conflicts with the input:
node _T_47 = bits(...one piece from the list which is a Input...)
_T_47 <= ... // this violates the (FE)MALE rule
Could you enlighten me on any other way to use a function on the left side of the assignment?
Or am I forbidden from using functions in this way?

You are correct that you cannot use the outputs of functions like Cat on the left hand side of connections.
Check out the Chisel3 Cookbook which discusses (and solves) this problem here: https://github.com/freechipsproject/chisel3/wiki/Cookbook#how-do-i-unpack-a-value-reverse-concatenation-like-in-verilog

Why should I use the & sign on structs?

In the Go tour, there is a section on struct literals.
package main

import "fmt"

type Vertex struct {
    X, Y int
}

var (
    v1 = Vertex{1, 2}  // has type Vertex
    v2 = Vertex{X: 1}  // Y:0 is implicit
    v3 = Vertex{}      // X:0 and Y:0
    p  = &Vertex{1, 2} // has type *Vertex
)

func main() {
    fmt.Println(v1, p, v2, v3)
}
What's the difference between a struct with an ampersand and the one without? I know that the ones with ampersands point to the same reference, but why should I use them over the regular ones?
var p = &Vertex{} // why should I use this
var c = Vertex{} // over this
It's true that p (&Vertex{}) has type *Vertex and that c (Vertex{}) has type Vertex.
However, I don't believe that statement really answers the question of why one would choose one over the other. It'd be kind of like answering the question "why use planes over cars" with something like "planes are winged, cars aren't." (Obviously we all know what wings are, but you get the point).
But it also wouldn't be very helpful to simply say "go learn about pointers" (though I think it is a really good idea to do so nonetheless).
How you choose basically boils down to the following.
Realize that &Vertex{} and Vertex{} are fundamentally initialized in the same way.
There might be some low-level memory allocation differences (i.e. stack vs heap), but you really should just let the compiler worry about these details
What makes one more useful and performant than the other is determined by how they are used in the program.
"Do I want a pointer to the struct (p), or just the struct (c)?"
Note that you can get a pointer using the & operator (e.g. &c); you can dereference a pointer to get the value using * (e.g. *p)
So depending on what you choose, you may end up doing a lot of *p or &c
Bottom-line: create what you will use; if you don't need a pointer, don't make one (this will help more with "optimizations" in the long run).
Should I use Vertex{} or &Vertex{}?
For the Vertex given in your example, I'd choose Vertex{} to get a simple value object.
Some reasons:
Vertex in the example is pretty small in terms of size. Copying is cheap.
Garbage collection is simplified, garbage creation may be mitigated by the compiler
Pointers can get tricky and add unnecessary cognitive load to the programmer (i.e. gets harder to maintain)
Pointers aren't something you want if you ever get into concurrency (i.e. it's unsafe)
Vertex doesn't really contain anything worth mutating (just return/create a new Vertex when needed).
Some reasons why you'd want &Struct{}
If Struct has a member caching some state that needs to be changed inside the original object itself (see the sketch after this list).
Struct is huge, and you've done enough profiling to determine that the cost of copying is significant.
As an aside: you should try to keep your structs small, it's just good practice.
You find yourself making copies by accident (this is a bit of a stretch):
v := Struct{}
v2 := v // Copy happens
versus:
v := &Struct{}
v2 := v // Only a pointer is copied
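To make the mutation point above concrete, here is a minimal sketch using the Vertex type from the question (moveValue and movePtr are made-up names): a change made through a pointer is visible to the caller, while a change made to a copied value is not.

package main

import "fmt"

type Vertex struct {
    X, Y int
}

// moveValue receives a copy of v; changes here never reach the caller.
func moveValue(v Vertex) { v.X += 10 }

// movePtr receives a pointer; changes here are visible to the caller.
func movePtr(v *Vertex) { v.X += 10 }

func main() {
    c := Vertex{1, 2}
    moveValue(c)
    fmt.Println(c) // {1 2} -- the copy was modified, not c

    p := &Vertex{1, 2}
    movePtr(p)
    fmt.Println(*p) // {11 2} -- the original was modified
}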
The comments pretty much spell it out:
v1 = Vertex{1, 2} // has type Vertex
p = &Vertex{1, 2} // has type *Vertex
As in many other languages, & takes the address of the identifier following it. This is useful when you want a pointer rather than a value.
If you need to understand more about pointers in programming, you could start with this for go, or even the wikipedia page.

Making maps in go before anything

I am following the go tour and something bothered me.
Maps must be created with make (not new) before use
Fair enough:
m := make(map[int]Cats)
However the very next slide shows something different:
var m = map[string]Vertex{
    "Bell Labs": Vertex{
        40.68433, -74.39967,
    },
    "Google": Vertex{
        37.42202, -122.08408,
    },
}
This slide shows how you can ignore make when creating maps
Why did the tour say maps have to be created with make before they can be used? Am I missing something here?
Actually the only reason to use make to create a map is to preallocate a specific number of values, just like with slices (except you can't set a cap on a map)
m := map[int]Cats{}
s := []Cats{}
//is the same as
m := make(map[int]Cats)
s := make([]Cats, 0, 0)
However if you know you will have a minimum of X amount of items in a map you can do something like:
m := make(map[int]Cats, 100)// this will speed things up initially
Also check http://dave.cheney.net/2014/08/17/go-has-both-make-and-new-functions-what-gives
So they're actually right that you always need to use make before using a map. The reason it looks like they aren't in the example you gave is that the make call happens implicitly. So, for example, the following two are equivalent:
m := make(map[int]string)
m[0] = "zero"
m[1] = "one"

// Equivalent to:
m := map[int]string{
    0: "zero",
    1: "one",
}
Make vs New
Now, the reason to use make vs new is slightly more subtle. The reason is that new only allocates space for a variable of the given type, whereas make actually initializes it.
To give you a sense of this distinction, imagine we had a binary tree type like this:
type Tree struct {
    root *node
}

type node struct {
    val         int
    left, right *node
}
Now you can imagine that if we had a Tree which was allocated and initialized and had some values in it, and we made a copy of that Tree value, the two values would point to the same underlying data since they'd both have the same value for root.
So what would happen if we just created a new Tree without initializing it? Something like t := new(Tree) or var t Tree? Well, t.root would be nil, so if we made a copy of t, both variables would not point to the same underlying data, and so if we added some elements to the Tree, we'd end up with two totally separate Trees.
The same is true of maps and slices (and some others) in Go. When you make a copy of a slice variable or a map variable, both the old and the new variables refer to the same underlying data, just like an array in Java or C. Thus, if you just use new, and then make a copy and initialize the underlying data later, you'll have two totally separate data structures, which is usually not what you want.
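As a small sketch of that distinction for maps (the variable names are just illustrative): a map obtained via new (or merely declared) is still nil, so writes to it panic, whereas make returns a map that is ready for use.

package main

import "fmt"

func main() {
    p := new(map[int]string) // new allocates the map variable; *p is still a nil map
    fmt.Println(*p == nil)   // true
    // (*p)[0] = "zero"      // would panic: assignment to entry in nil map

    m := make(map[int]string) // make initializes the map, so it is ready for writes
    m[0] = "zero"
    fmt.Println(m) // map[0:zero]
}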

Why do we use functions in VHDL

Functions are obviously less verbose to write than entities, but they come with many drawbacks, including:
No generic keyword equivalent
Only one output possible
It appears that functions can be called recursively. Is that not the case with entities? If it is, is there any good reason to use functions except for aesthetic purposes?
Functions can't create hardware directly - they have to exist within an architecture to do so. There's nothing to stop you putting all your functionality into a function (or procedure) and then just calling that within a process though.
Regarding some of your other points:
With procedures you can have multiple inout or out parameters.
Entities can recurse... Consider:
entity recurse is
  generic (
    depth : integer := 1;
    param : integer := 3);
  port (
    a : in  integer;
    b : out integer);
end entity recurse;

architecture a1 of recurse is
  signal c : integer;
begin
  c <= a + 1;

  bottom: if depth = param generate
    b <= a + 1;
  end generate bottom;

  mid: if depth /= param generate
    recurse_1: entity work.recurse
      generic map (
        param => param,
        depth => depth+1)
      port map (
        a => c,
        b => b);
  end generate mid;

end architecture a1;
Not very useful, but it synthesises and simulates just fine.
And finally, of course you only use functions for aesthetic purposes (assuming you include maintainability and readability in the definition of aesthetic, which most programming types do in my experience). You only use enumerated types, entities, records and a whole host of other language features for 'aesthetic purposes'. Even assembly mnemonics are aesthetic! Maybe we should return to toggling DIP switches :)
Functions in VHDL make the code easier to maintain and read. Architectures are generally very big, and while debugging, if something is not working, you can easily locate the problematic function and correct it without having to analyse the entire architecture body.
For small designs this is of little use, but in bigger machines it helps you understand the design better if you consider it function by function.
There is no rule for this, so all comments are welcome.
In short, the advantages of functions are:
overloading
operator definitions
and therefore operator overloading
better structure of the code
I can see why you are confused, another good question would be why there's both procedure and function. (VHDL seems quite inelegant sometimes!)
That being said, I use both procedures and functions all the time, although mostly in testbenches. For example, for a testbench for a firewall system I made a while back I wrote a procedure called pd_tb_send_udp_packet() that I use repeatedly in the main process, e.g.,
pd_tb_send_udp_packet("10.10.10.2", 1234, false);
pd_tb_send_udp_packet("10.10.10.1", 1234, true);
pd_tb_send_udp_packet("10.10.10.1", 1235, false);
pd_tb_send_udp_packet("ff02:100::1", 1234, false);
pd_tb_send_udp_packet("ff02:101::1", 1234, true);
This procedure generates a random UDP packet with the given addr/port and sends it to the firewall system, then tests whether it is forwarded or not based on the final boolean parameter. Here are the first lines of it, where I use functions from a library:
if f_atvtb_is_ipv6_addr(dest_ip_addr) then
  v_ipv6      := true;
  v_ipv6_addr := f_atvtb_ipv6_addr(dest_ip_addr);
else
  v_ipv6      := false;
  v_ipv4_addr := f_atvtb_ip_addr(dest_ip_addr);
end if;
The latter two return 128 and 32 bit std_logic_vectors from the string input, respectively.
While I could probably do all this without using procedures and functions somehow, it would definitely be a lot more messy.

What is "A Tour of Go" trying to say? [closed]

There are a few points in the tutorial that sort of leave you on your own without a clue or link if you're not in the know I guess. So I'm sorry about the length of these:
http://tour.golang.org/#15
Try printing needInt(Big) too
I'm guessing ints are allowed fewer bits than constants?
http://tour.golang.org/#21
the { } are required.
(Sound familiar?)
Which language is alluded to?
http://tour.golang.org/#25
(And a type declaration does what you'd expect.)
Why do we need the word type and the word struct? What was I supposed to expect?
http://tour.golang.org/#28
Why implicit zeroes in the constructor? This sounds like a dangerous design choice by Go. Is there a PEP or anything beyond http://golang.org/doc/go_faq.html on this?
http://tour.golang.org/#30
Make? Are there constructors? What's the difference between new and make?
http://tour.golang.org/#33
Where did delete come from? I didn't import it.
http://tour.golang.org/#36
What's the %v formatter stand for? Value?
http://tour.golang.org/#47
panic: runtime error: index out of range
goroutine 1 [running]:
tour/pic.Show(0x400c00, 0x40ca61)
go/src/pkg/tour/pic/pic.go:24 +0xd4
main.main()
/tmpfs/gosandbox-15c0e483_5433f2dc_ff6f028f_248fd0a7_d7c2d35b/prog.go:14 +0x25
I guess I broke go somehow....
package main
import "tour/pic"
func Pic(dx, dy int) [][]uint8 {
    image := make([][]uint8, 10)
    for i := range image {
        image[i] = make([]uint8, 10)
    }
    return image
}

func main() {
    pic.Show(Pic)
}
http://tour.golang.org/#59
I return error values when a function fails? I have to qualify every single function call with an error check? The flow of the program is uninterrupted when I write crazy code? E.g. Copy(only_backup, elsewhere);Delete(only_backup) and Copy fails....
Why would they design it like that?
#15:
I'm guessing ints are allowed fewer bits than constants?
Yes, exactly. According to the spec, "numeric constants represent values of arbitrary precision and do not overflow", whereas type int has either 32 or 64 bits.
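A small sketch of that, mirroring the tour's Big/Small constants (the exact compiler message may vary by Go version):

package main

import "fmt"

const Big = 1 << 100    // fine: untyped constants have arbitrary precision
const Small = Big >> 99 // still a constant, value 2

func main() {
    // var x int = Big  // would not compile: constant overflows int
    var y int = Small
    fmt.Println(y) // 2
}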
#21:
Which language is alluded to?
None; it's alluding to #16, which says the same thing, in the same words, about for-loops.
#25 :
a type declaration does what you'd expect is a little unfortunate, I agree (as it assumes too much about what a reader might expect...), but it means you're defining a struct (with the struct keyword) and binding the type name "Vertex" to it, with the type Vertex part (see http://golang.org/ref/spec#Type_declarations)
#28:
the fact that uninitialized structs are zeroed is really really useful in many cases (many standard structs like buffers use it also)
It's not implicit in the constructor only. Look at this:
var i int; fmt.Println(i)
This prints out 0. This is similar to something like java where primitive types have an implicit default value. booleans are false, integers are zero, etc. The spec on zero values.
#30:
new allocates memory and returns a pointer to it, while make is a special function used only for slices, maps and channels.
See http://golang.org/doc/effective_go.html#allocation_new for a more in-depth explanation of make vs new
#33:
delete, like append or copy, is one of the built-in functions of the language. See the full list of predeclared identifiers at: http://golang.org/ref/spec#Predeclared_identifiers
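For example (a tiny sketch):
m := map[string]int{"a": 1, "b": 2}
delete(m, "a") // removes the key "a"; deleting a missing key is a harmless no-op
// m is now map[b:2]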
#36:
Yes, %v stands for "value". See http://golang.org/pkg/fmt/
#47:
try with this:
func Pic(dx, dy int) [][]uint8 {
    image := make([][]uint8, dy) // dy, not 10
    for x := range image {
        image[x] = make([]uint8, dx) // dx, not 10
        for y := range image[x] {
            image[x][y] = uint8(x*y) // let's try one of the mentioned
                                     // "interesting functions"
        }
    }
    return image
}
#59:
The language's design and conventions encourage you to explicitly
check for errors where they occur (as distinct from the convention in
other languages of throwing exceptions and sometimes catching them).
In some cases this makes Go code verbose, but fortunately there are
some techniques you can use to minimize repetitive error handling.
(quoted from Error handling and Go )
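For instance, the Copy/Delete scenario from the question might look like this with explicit checks; Copy and Delete here are hypothetical stand-ins, but the point is that Delete never runs if Copy failed:

package main

import (
    "errors"
    "log"
)

// Hypothetical stand-ins for the calls in the question.
func Copy(src, dst string) error { return errors.New("disk full") }
func Delete(path string) error   { return nil }

func main() {
    if err := Copy("only_backup", "elsewhere"); err != nil {
        log.Fatalf("copy failed, keeping the backup: %v", err) // stop here, backup preserved
    }
    if err := Delete("only_backup"); err != nil {
        log.Fatalf("delete failed: %v", err)
    }
}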
I'm guessing ints are allowed fewer bits than constants?
Yes, numeric constants are high-precision values. An int in any language doesn't have anywhere near the precision of such constants.
Which language is alluded to?
No clue but it is backwards from C and Java where ( ) is required and { } is optional.
Why do we need the word type and the word struct? What was I supposed to expect?
If you're familiar with C, then it does what you'd expect.
Why implicit zeroes in the constructor?
It's not implicit in the constructor only. Look at this:
var i int
fmt.Println(i)
This prints out 0. This is similar to something like java where primitive types have an implicit default value. booleans are false, integers are zero, etc.
Make? Are there constructors? What's the difference between new and make?
make accepts additional parameters for initializing the size of a slice, map, or channel. new, on the other hand, just returns a pointer to a type.
type Data struct {}

// both d1 and d2 are pointers
d1 := new(Data)
d2 := &Data{}
As for "are there constructors?": only if you write and reference them yourself. This is how one normally implements a constructor in Go:
type Data struct {}

func NewData() *Data {
    return new(Data)
}
What's the %v formatter stand for? Value?
Yep
I return error values when a function fails? ... Why would they design it like that?
I felt the same way at first. My opinion has changed though. You can ignore errors from the std library if you like and not bother with it yourself, but once I had a handle on it, I personally find I have better (and more readable) error checking.
What I can say is when I was doing it wrong, it felt like repetitive error handling that felt unnecessary. When I finally started doing it right... well, what I just said above.

How can I optimize a multiple (matrix) switch / case algorithm?

Is it possible to optimize this kind of (matrix) algorithm:
// | case 1 | case 2 | case 3 |
// ------|--------|--------|--------|
// | | | |
// case a| a1 | a2 | a3 |
// | | | |
// case b| b1 | b2 | b3 |
// | | | |
// case c| c1 | c2 | c3 |
// | | | |
switch (var)
{
    case 1:
        switch (subvar)
        {
            case a:
                process a1;
            case b:
                process b1;
            case c:
                process c1;
        }
    case 2:
        switch (subvar)
        {
            case a:
                process a2;
            case b:
                process b2;
            case c:
                process c2;
        }
    case 3:
        switch (subvar)
        {
            case a:
                process a3;
            case b:
                process b3;
            case c:
                process c3;
        }
}
The code is fairly simple but you have to imagine more complex with more "switch / case".
I work with 3 variables. Depending on whether they take the values 1, 2, 3, or a, b, c, or alpha, beta, charlie, there are different processes to carry out. Is it possible to optimize this any other way than through a series of "switch / case"?
(Question already asked in french here).
Edit: (from Dran Dane's responses to comments below. These might as well be in this more prominent place!)
"optimize" is to be understood in terms of having to write less code, fewer "switch / case". The idea is to improve readability, maintainability, not performance.
There is maybe a way to write less code via a "Chain of Responsibility", but this solution is not optimal in every respect, because it requires the creation of many objects in memory.
It sounds like what you want is a 'Finite State Machine' where using those cases you can activate different processes or 'states'. In C this is usually done with an array (matrix) of function pointers.
So you essentially make an array and put the right function pointers at the right indices, and then you use your 'var' as an index to the right 'process' and then you call it. You can do this in most languages. That way different inputs to the machine activate different processes and bring it to different states. This is very useful for numerous applications; I myself use it all of the time in MCU development.
Edit: Valya pointed out that I probably should show a basic model:
stateMachine[var1][var2](); // calls the right 'process' for input var1, var2
There are no good answers to this question :-(
because so much of the response depends on
The effective goals (what is meant by "optimize", what is unpleasing about the nested switches)
The context in which this construct is going to be applied (what are the ultimate needs implicit to the application)
TokenMacGuy was wise to ask about the goals. I took the time to check the question and its replies on the French site and I'm still puzzled as to the goals... Dran Dane's latest response seems to point towards lessening the amount of code / improving readability, but let's review for sure:
Processing speed: not an issue; the nested switches are quite efficient, possibly a tad cheaper than the roughly 3 multiplications needed to compute an index into a map table, but maybe not even that.
Readability: yes, possibly an issue. As the number of variables and levels increases, the combinatorial explosion kicks in, and the format of the switch statement tends to spread the branching spot and associated values over a long vertical stretch. In this case a 3-dimensional (or more) table initialized with function pointers puts the branching values and the function to be called back together on a single line.
Writing less code: sorry, not much help here; at the end of the day we need to account for a relatively high number of combinations, and the "map", whatever its form, must be written somewhere. Code generators such as TokenMacGuy's may come in handy, though they seem a bit of an overkill in this case. Generators have their place, but I'm not sure this is one of them. It's one of two cases: if the number of variables and levels is small enough, the generator is not worth it (it takes more time to set it up than to write the actual code in the first place); if the number of variables and levels is significant, the generated code is hard to read and hard to maintain.
In a nutshell, my recommendation with regards to making the code more readable (and a bit faster to write) is the table/matrix approach described on the French site.
This solution is in two part:
a one-time initialization of a 3-dimensional array (for 3 levels), or a "fancier" container structure if preferred (a tree, for example). This is done with code like:
// This is positively more compact / readable
...
FctMap[1][4][0] = fctAlphaOne;
FctMap[1][4][1] = fctAlphaOne;
..
FctMap[3][0][0] = fctBravoCharlie4;
FctMap[3][0][1] = NULL; // impossible case
FctMap[3][0][2] = fctBravoCharlie4; // note how the same fct may serve in mult. places
And a relatively simple snippet wherever the functions need to be called:
if (FctMap[cond1][cond2][cond3]) {
    retVal = FctMap[cond1][cond2][cond3](Arg1, Arg2);
    if (retVal < 0)
        DoSomething(); // anyway we're leveraging the common api to these fct not the switch alternative ....
}
Cases which may prompt one NOT to use the solution above are when the combination space is relatively sparsely populated (many "branches" in the switch "tree" are unused) or when some of the functions require a different set of parameters. For both of these cases, I'd like to plug a solution Joel Goodwin proposed first here, which essentially combines the various keys for the several levels into one longer key (with a separator character if need be), flattening the problem back to a long, but single-level, switch statement.
Now...
The real discussion should be about why we need such a mapping/decision tree in the first place. Answering that unfortunately requires understanding the true nature of the underlying application. To be sure, I'm not saying that this is indicative of bad design; a big dispatching section may make sense in some applications. However, even in the C language (which the French-site contributors seemed to consider unsuited to object-oriented design), it is possible to adopt object-oriented methodology and patterns. Anyway, I'm digressing... It is possible that the application would overall be better served with alternative design patterns, where the "information tree about what to call when" is distributed over several modules and/or several objects.
Apologies to speak about this in rather abstract terms, it's just the lack of application specifics... The point remains: challenge the idea that we need this big dispatching tree; think of alternative approaches to the application at large.
Alors, bonne chance! ;-)
Depending on the language, some form of hash map with the pair (var, subvar) as the key and first-class functions as the values (or whatever your language offers to best approximate that, e.g. instances of classes extending some proper interface in Java) is likely to provide top performance -- and the utter conciseness of fetching the appropriate function (or whatever;-) from the map based on the key, and executing it, leads to high readability for readers familiar with the language and such functional idioms.
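As a rough sketch of that idea in Go (the key type, the dispatch table and the process bodies here are illustrative, not taken from the question):

package main

import "fmt"

type key struct {
    v   int    // 1, 2, 3
    sub string // "a", "b", "c"
}

// dispatch maps each (var, subvar) pair directly to its process.
var dispatch = map[key]func(){
    {1, "a"}: func() { fmt.Println("process a1") },
    {1, "b"}: func() { fmt.Println("process b1") },
    {2, "a"}: func() { fmt.Println("process a2") },
    // ... one entry per combination
}

func main() {
    if fn, ok := dispatch[key{1, "b"}]; ok {
        fn()
    } else {
        fmt.Println("no process for this combination")
    }
}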
The idea of a function pointer is probably best (as per mjv, Shhnap). But, if the code under each case is fairly small, it may be overkill and result in more obfuscation than intended. In that case, I might implement something snappy and fast-to-read like this:
string decision = var1.ToString() + var2.ToString() + var3.ToString();

switch (decision)
{
    case "1aa":
        ....
    case "1ab":
        ....
}
Unfamiliar with your particular scenario so perhaps the previous suggestions are more appropriate.
I had exactly the same problem once, albeit for an imminent mess of a 5-parameter nested switch. I figured, why type all these O(N^5) cases myself, and why even invent 'nested' function names, if the compiler can do this for me. And all this resulted in a 'nested specialized template switch' referring to a 'specialized template database'.
It's a little complicated to write. But I found it worth it: it results in a 'knowledge' database that is very easy to maintain, to debug, to add to etc... And I must admit: a sense of pride.
// the return type: might be an object actually _doing_ something
struct Result {
    const char* value;
    Result() : value(NULL) {}
    Result( const char* p ) : value(p) {}
};
Some variable types for switching:
// types used:
struct A { enum e { a1, a2, a3 }; };
struct B { enum e { b1, b2 }; };
struct C { enum e { c1, c2 }; };
A 'forward declaration' of the knowledge base: the 'api' of the nested switch.
// template database declaration (and default value - omit if not needed)
// specializations may execute code in stead of returning values...
template< A::e, B::e, C::e > Result valuedb() { return "not defined"; };
The actual switching logic (condensed)
// template layer 1: work away the first parameter, then the next, ...
struct Switch {
    static Result value( A::e a, B::e b, C::e c ) {
        switch( a ) {
            case A::a1: return SwitchA<A::a1>::value( b, c );
            case A::a2: return SwitchA<A::a2>::value( b, c );
            case A::a3: return SwitchA<A::a3>::value( b, c );
            default:    return Result();
        }
    }

    template< A::e a > struct SwitchA {
        static Result value( B::e b, C::e c ) {
            switch( b ) {
                case B::b1: return SwitchB<a, B::b1>::value( c );
                case B::b2: return SwitchB<a, B::b2>::value( c );
                default:    return Result();
            }
        }

        // the inner template gets its own parameter names (aa, bb) so they
        // do not shadow the enclosing template's parameter 'a'
        template< A::e aa, B::e bb > struct SwitchB {
            static Result value( C::e c ) {
                switch( c ) {
                    case C::c1: return valuedb< aa, bb, C::c1 >();
                    case C::c2: return valuedb< aa, bb, C::c2 >();
                    default:    return Result();
                }
            }
        };
    };
};
And the knowledge base itself
// the template database
//
template<> Result valuedb<A::a1, B::b1, C::c1 >() { return "a1b1c1"; }
template<> Result valuedb<A::a1, B::b2, C::c2 >() { return "a1b2c2"; }
This is how it can be used.
int main()
{
    // usage:
    Result r = Switch::value( A::a1, B::b2, C::c2 );
    return 0;
}
Yes, there is definitely an easier way to do that, both faster and simpler. The idea is basically the same as the one proposed by Alex Martelli: instead of seeing your problem as two-dimensional, see it as a one-dimensional lookup table.
It means combining var, subvar, subsubvar, etc. to get one unique key and using it as your lookup table entry point.
The way to do it depends on the language used. With Python, combining var, subvar, etc. into a tuple and using it as a key in a dictionary is enough.
With C or the like it's usually simpler to convert each key to an enum, then combine them using bitwise operators to get just one number that you can use in your switch (that's also an easy way to use switch instead of string comparisons with cascading ifs). You also get another benefit from doing it: it's quite usual that several treatments in different branches of the initial switch are the same, and with the initial form it's quite difficult to make that obvious. You'll probably have some calls to the same functions, but at different points in the code. Now you can just group the identical cases when writing the switch.
I used such transformation several times in production code and it's easy to do and to maintain.
Summing up, you can get something like this... the mix function obviously depends on your application specifics (a small sketch of one possible mix function follows the switch below).
switch (mix(var, subvar))
{
    case a1:
        process a1;
    case b1:
        process b1;
    case c1:
        process c1;
    case a2:
        process a2;
    case b2:
        process b2;
    case c2:
        process c2;
    case a3:
        process a3;
    case b3:
        process b3;
    case c3:
        process c3;
}
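A tiny sketch of one possible mix function, written in Go and assuming the keys have already been converted to small non-negative integers: it packs both keys into a single integer, so each (var, subvar) pair maps to a distinct case label.

// mix packs var into the high bits and subvar into the low byte,
// giving one distinct value per (var, subvar) combination.
func mix(v, sub int) int {
    return v<<8 | sub
}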
Perhaps what you want is code generation?
#! /usr/bin/python

first = [1, 2, 3]
second = ['a', 'b', 'c']

def emit(first, second):
    result = "switch (var)\n{\n"
    for f in first:
        result += " case {0}:\n switch (subvar)\n {{\n".format(f)
        for s in second:
            result += " case {1}:\n process {1}{0};\n".format(f, s)
        result += " }\n"
    result += "}\n"
    return result

print emit(first, second)
#file("autogen.c","w").write(emit(first,second))
This is pretty hard to read, of course, and you might really want a nicer template language to do your dirty work, but this will ease some parts of your task.
If C++ is an option I would try using virtual functions and maybe double dispatch. That could make it much cleaner. But it will probably only pay off if you have many more cases.
This article on DDJ.com might be a good entry.
If you're just trying to eliminate the two-level switch/case statements (and save some vertical space), you can encode the two variable values into a single value, then switch on it:
// Assumes var is in [1,3] and subvar in [1,3]
// and that var and subvar can be cast to int values
switch (10*var + subvar)
{
    case 10+1:
        process a1;
    case 10+2:
        process b1;
    case 10+3:
        process c1;
    //
    case 20+1:
        process a2;
    case 20+2:
        process b2;
    case 20+3:
        process c2;
    //
    case 30+1:
        process a3;
    case 30+2:
        process b3;
    case 30+3:
        process c3;
    //
    default:
        process error;
}
If your language is C#, and your choices are short enough and contain no special characters you can use reflection and do it with just a few lines of code. This way, instead of manually creating and maintaining an array of function pointers, use one that the framework provides!
Like this:
using System.Reflection;
...

void DispatchCall(string var, string subvar)
{
    string functionName = "Func_" + var + "_" + subvar;
    MethodInfo m = this.GetType().GetMethod(functionName);
    if (m == null) throw new ArgumentException("Invalid function name " + functionName);
    m.Invoke(this, new object[] { /* put parameters here if needed */ });
}

void Func_1_a()
{
    // executed when var=1 and subvar=a
}

void Func_2_charlie()
{
    // executed when var=2 and subvar=charlie
}
Solution from developpez.com
Yes, you can optimize it and make it much cleaner. You can use such a "Chain of Responsibility" with a Factory:
import java.util.ArrayList;

public class ProcessFactory {

    private ArrayList<Process> processes = null;

    public ProcessFactory() {
        super();
        processes = new ArrayList<Process>();
        processes.add(new ProcessC1());
        processes.add(new ProcessC2());
        processes.add(new ProcessC3());
        processes.add(new ProcessC4());
        processes.add(new ProcessC5(6));
        processes.add(new ProcessC5(22));
    }

    public Process getProcess(int var, int subvar) {
        for (Process process : processes) {
            if (process.canDo(var, subvar)) {
                return process;
            }
        }
        return null;
    }
}
Then, as long as your processes implement a Process interface with the canDo method, you can easily use:
new ProcessFactory().getProcess(var,subvar).launch();
