Using tostring() to ensure functional conditionals in Terraform? - syntax

tl;dr: Is it an acceptable practice to use tostring() to cast values used for conditionals in Terraform >= 0.13 for handling a strictly defined set of input types?
Yesterday I asked a question that led me to a new question today:
Terraform count using bool?
What I learned is that Terraform applies some automatic type conversion to certain primitives (mainly converting between strings and other data types), but that these primitives cannot be used where a different type is expected (e.g. a bool cannot be passed to the count argument, because count only accepts a number).
One comment on that question had a very simple way to use a bool as a condition:
count = var.my_var ? 1 : 0
The only potential issue with this is if my_var can have different input types. In my use case it'll be added to a Terraform module, and the user will decide what to supply for this argument. Previously we've only been passing in a string or a number, but I find that a little less specific than I'd prefer, because Terraform can interpret count as more than one copy of a resource, and I want a discrete 0 or 1 (specifically for something like var.create_this_resource, whose value should only ever be true or false). It also just doesn't look as nice to see "1" instead of true, IMO. So I'd like to start using bool instead, but still be able to handle a user passing in a number. What I found is that I can use the following to accomplish this:
count = tostring(var.my_var) ? 1 : 0
Here, tostring() takes whatever the input is and, presumably, casts it to a string. It only works for string, number, and boolean values, and really I'm only using it to convert a number to a string, because that's the only case where passing the value into the conditional currently fails.
So my question is: is it acceptable to do this? I've tested it with string, bool, and number, as well as unsupported types (e.g. an empty list or null); it seems to work well in code, but the following made me think I shouldn't use it:
From the docs:
Explicit type conversions are rarely necessary in Terraform because it will convert types automatically where required. Use the explicit type conversion functions only to normalize types returned in module outputs.

In most cases I would suggest avoiding designs where a particular variable could have different types in different situations, unless your module is treating the value as entirely opaque and just passing it through to something else which has broader validation rules.
Since your module is working directly with this value, it would typically be best to specify an exact type constraint for the variable and make the caller of the module write expressions to convert the value if the automatic conversions are insufficient. That way the caller can get better feedback about what sort of value your module is expecting, and can decide for themselves how to convert their value of a different type.
Converting to string can only produce a value that can automatically convert to bool in the following situations:
The value was already a string, and was either "true" or "false".
The value was a bool value, in which case tostring will convert it to a string and then the conditional operator will immediately convert it back to bool again, which would be redundant.
If you declare the variable as being bool itself then the same rules will apply, but the conversion will happen inside the calling module block rather than in the count expression:
variable "my_var" {
type = bool
}
module "example" {
# ...
# This will automatically convert to bool true,
# just as it would've in the conditional operator.
my_var = "true"
}
If you really cannot avoid supporting various unusual ways of writing boolean values then you can potentially write your own conversion table which would be based on strings, and would specify the boolean value for each possible string after conversion:
locals {
  sloppy_bool = tomap({
    "1"     = true
    "true"  = true
    "0"     = false
    "false" = false
  })

  my_var = local.sloppy_bool[var.my_var]
}
Because mapping types (map types and object types) only support strings as keys, local.sloppy_bool[var.my_var] will automatically convert var.my_var to string, just as if you'd written tostring(var.my_var). It'll then look up the result in the table and return the corresponding boolean value, which means you can then use local.my_var instead of var.my_var elsewhere in your module and rely on it always being a true boolean value.
I would suggest doing this only if you had a previous version of the module which tolerated this sort of typing weirdness and you need to remain compatible with it. For an entirely new module, I would consider this to be non-idiomatic and probably confusing for anyone already familiar with Terraform who is trying to use the module, because they will need to become familiar with your unusual definition of the type conversion rather than relying on their knowledge of the built-in conversion rules.

Related

Anonymous struct as pipeline in template

Is there a way to do the following in a html/template?
{{template "mytemplate" struct{Foo1, Foo2 string}{"Bar1", "Bar2"}}}
Actually in the template, like above. Not via a function registered in FuncMap which returns the struct.
I tried it, but Parse panics, see Playground. Maybe just the syntax is wrong?
As noted by others, it's not possible. Templates are parsed at runtime, without the help of the Go compiler, so allowing arbitrary Go syntax would not be feasible (although it wouldn't be impossible: the standard library contains all the tools to parse Go source text; see the packages prefixed with go/). By design philosophy, complex logic should be kept outside of templates.
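For illustration, here's a minimal sketch showing that the go/parser package can indeed parse that exact composite literal (text/template never invokes it, of course):

package main

import (
    "fmt"
    "go/parser"
)

func main() {
    // Parse the same composite literal the template engine rejects.
    expr, err := parser.ParseExpr(`struct{ Foo1, Foo2 string }{"Bar1", "Bar2"}`)
    if err != nil {
        panic(err)
    }
    fmt.Printf("parsed as %T\n", expr) // *ast.CompositeLit
}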
Back to your example:
struct{Foo1, Foo2 string}{"Bar1", "Bar2"}
This is a struct composite literal, and it is not supported in templates, neither when invoking another template nor anywhere else.
Invoking another template with a custom "argument" has the following syntax (quoting from text/template: Actions):
{{template "name" pipeline}}
The template with the specified name is executed with dot set
to the value of the pipeline.
TL;DR; A pipeline may be a constant, an expression denoting a field or method of some value (where the method will be called and its return value will be used), it may be a call to some "template-builtin" function or a custom registered function, or a value in a map.
Where Pipeline is:
A pipeline is a possibly chained sequence of "commands". A command is a simple value (argument) or a function or method call, possibly with multiple arguments:
Argument
The result is the value of evaluating the argument.
.Method [Argument...]
The method can be alone or the last element of a chain but,
unlike methods in the middle of a chain, it can take arguments.
The result is the value of calling the method with the
arguments:
dot.Method(Argument1, etc.)
functionName [Argument...]
The result is the value of calling the function associated
with the name:
function(Argument1, etc.)
Functions and function names are described below.
And an Argument is:
An argument is a simple value, denoted by one of the following.
- A boolean, string, character, integer, floating-point, imaginary
or complex constant in Go syntax. These behave like Go's untyped
constants. Note that, as in Go, whether a large integer constant
overflows when assigned or passed to a function can depend on whether
the host machine's ints are 32 or 64 bits.
- The keyword nil, representing an untyped Go nil.
- The character '.' (period):
.
The result is the value of dot.
- A variable name, which is a (possibly empty) alphanumeric string
preceded by a dollar sign, such as
$piOver2
or
$
The result is the value of the variable.
Variables are described below.
- The name of a field of the data, which must be a struct, preceded
by a period, such as
.Field
The result is the value of the field. Field invocations may be
chained:
.Field1.Field2
Fields can also be evaluated on variables, including chaining:
$x.Field1.Field2
- The name of a key of the data, which must be a map, preceded
by a period, such as
.Key
The result is the map element value indexed by the key.
Key invocations may be chained and combined with fields to any
depth:
.Field1.Key1.Field2.Key2
Although the key must be an alphanumeric identifier, unlike with
field names they do not need to start with an upper case letter.
Keys can also be evaluated on variables, including chaining:
$x.key1.key2
- The name of a niladic method of the data, preceded by a period,
such as
.Method
The result is the value of invoking the method with dot as the
receiver, dot.Method(). Such a method must have one return value (of
any type) or two return values, the second of which is an error.
If it has two and the returned error is non-nil, execution terminates
and an error is returned to the caller as the value of Execute.
Method invocations may be chained and combined with fields and keys
to any depth:
.Field1.Key1.Method1.Field2.Key2.Method2
Methods can also be evaluated on variables, including chaining:
$x.Method1.Field
- The name of a niladic function, such as
fun
The result is the value of invoking the function, fun(). The return
types and values behave as in methods. Functions and function
names are described below.
- A parenthesized instance of one the above, for grouping. The result
may be accessed by a field or map key invocation.
print (.F1 arg1) (.F2 arg2)
(.StructValuedMethod "arg").Field
The proper solution would be to register a custom function that constructs the value you want to pass to the template invocation, as you can see in this related / possible duplicate: Golang pass multiple values from template to template?
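For example, a minimal sketch of that approach might look like this (the makePair function name and the inline template text are made up for illustration):

package main

import (
    "html/template"
    "os"
)

type pair struct {
    Foo1, Foo2 string
}

func main() {
    const src = `{{define "mytemplate"}}{{.Foo1}} / {{.Foo2}}{{end}}{{template "mytemplate" (makePair "Bar1" "Bar2")}}`

    t := template.Must(template.New("page").Funcs(template.FuncMap{
        // makePair builds the value that is passed to the inner template as dot.
        "makePair": func(a, b string) pair { return pair{Foo1: a, Foo2: b} },
    }).Parse(src))

    if err := t.Execute(os.Stdout, nil); err != nil {
        panic(err)
    }
    // Output: Bar1 / Bar2
}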
Another, partial solution could be to use the builtin print or printf functions to concatenate the values you want to pass, but that would require splitting them apart again in the other template.
As mentioned by @icza, this is not possible.
However, you might want to provide a generic dict function to templates to allow to build a map[string]interface{} from a list of arguments. This is explained in this other answer: https://stackoverflow.com/a/18276968/328115
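A rough sketch of such a dict helper, following the idea in that answer (the function name, error handling, and template text are illustrative):

package main

import (
    "fmt"
    "html/template"
    "os"
)

// dict builds a map from alternating key/value arguments, so a template can
// pass several named values to another template in one pipeline.
func dict(values ...interface{}) (map[string]interface{}, error) {
    if len(values)%2 != 0 {
        return nil, fmt.Errorf("dict: odd number of arguments")
    }
    m := make(map[string]interface{}, len(values)/2)
    for i := 0; i < len(values); i += 2 {
        key, ok := values[i].(string)
        if !ok {
            return nil, fmt.Errorf("dict: key %v is not a string", values[i])
        }
        m[key] = values[i+1]
    }
    return m, nil
}

func main() {
    const src = `{{define "mytemplate"}}{{.Foo1}} / {{.Foo2}}{{end}}{{template "mytemplate" (dict "Foo1" "Bar1" "Foo2" "Bar2")}}`

    t := template.Must(template.New("page").Funcs(template.FuncMap{"dict": dict}).Parse(src))
    if err := t.Execute(os.Stdout, nil); err != nil {
        panic(err)
    }
    // Output: Bar1 / Bar2
}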

Assigning types to variables in Go

How to have a variable whose type is a variable type?
What do I mean by that? I have a Python and Java background. In both languages I can do things like assigning a class name to a variable.
# Python
class A:
    def __init__(self):
        self.a = "Some Value"

# Assigning the class name to a variable
class_A = A

# Creating an instance of class A by using class_A
a = class_A()
Is there such a mechanism in Go that allows me to do that? I didn't know where to look in the documentation for such things. Honestly, I generally don't know what these are called in programming languages.
For example I would like to be able to do:
// For example, what would be the type of myType?
var myType type = int
I will use this mechanism to take "type" arguments. For example:
func Infer(value interface{}, type gotype) {...}
Is there such a mechanism in go that allows me to do that?
The short answer is: No.
The long answer is: This sounds like an XY Problem, so depending on what you're actually trying to accomplish, there may be a way. Jump to the end to see where I address your specific use-case.
I suspect that if you want to store a data type in a variable, it's because you either want to compare some other variable's type against it, or perhaps you want to create a variable of this otherwise unknown type. These two scenarios can be accomplished in Go.
In the first case, just use a type switch:
switch someVar.(type) {
case string:
    // Do stringy things
case int:
    // Do inty things
}
Semantically, this looks a lot like assigning the type of someVar to the implicit switch comparison variable, then comparing against various types.
In the second case I mentioned, if you want to create a variable of an unknown type, this can be done in a round-about way using reflection:
t := reflect.TypeOf(someVar)         // the dynamic type of someVar
newVar := reflect.New(t).Interface() // a pointer to a new zero value of that type
Here newVar is an interface{} that wraps a pointer to a new variable of the same type as someVar.
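For example, a small runnable sketch of that round trip (someVar and the printed values are just illustrative):

package main

import (
    "fmt"
    "reflect"
)

func main() {
    someVar := 42

    t := reflect.TypeOf(someVar)         // captures the dynamic type int
    newVar := reflect.New(t).Interface() // *int wrapped in an interface{}

    fmt.Printf("%T %v\n", newVar, *newVar.(*int)) // prints: *int 0
}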
In your specific example:
I will use this mechanism to take "type" arguments. For example:
func Infer(value interface{}, type gotype) {...}
No, there's no way to do this. And it actually has much less to do with Go's variable support than it does with the fact that Go is compiled.
Variables are entirely a runtime concept. Function signatures (like all other types in Go) are fixed at compilation time. It's therefore impossible for runtime variables to affect the compilation stage. This is true in any compiled language, not a special feature (or lack thereof) in Go.
Is there such a mechanism in go that allows me to do that?
No there is not.
Use the empty interface:
var x, y interface{}
var a uint32
a = 255
x = int8(a)
y = uint8(a)
Playground example

How does a loosely typed language know how to handle different data types?

I was working on a simple task yesterday: I just needed to sum the values in a handful of dropdown menus to display in a textbox via JavaScript. Unexpectedly, it was just building a string, so instead of giving me the value 4 it gave me "1111". I understand what was happening, but I don't understand how.
With a loosely typed language like JavaScript or PHP, how does the computer "know" what type to treat something as? If I just type everything as a var, how does it differentiate a string from an int from an object?
What the + operator will do in JavaScript is determined at runtime, when both actual arguments (and their types) are known.
If the runtime sees that one of the arguments is a string, it will do string concatenation. Otherwise it will do numeric addition (if necessary coercing the arguments into numbers).
This logic is coded into the implementation of the + operator (or any other function like it). If you looked at it, you would see if typeof(a) === 'string' statements (or something very similar) in there.
If I just type everything as a var
Well, you don't type it at all. The variable has no type, but any actual value that ends up in that variable has a type, and code can inspect that.
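To make that concrete, here is a rough Go sketch of the kind of runtime check such a + implementation performs; it's a model of the idea only, not JavaScript's actual coercion algorithm:

package main

import (
    "fmt"
    "strconv"
)

// plus mimics the runtime dispatch of a loosely typed "+":
// if either operand is a string, concatenate; otherwise add numerically.
func plus(a, b interface{}) interface{} {
    as, aIsStr := a.(string)
    bs, bIsStr := b.(string)
    if aIsStr || bIsStr {
        if !aIsStr {
            as = fmt.Sprint(a)
        }
        if !bIsStr {
            bs = fmt.Sprint(b)
        }
        return as + bs
    }
    // Neither operand is a string: coerce both to numbers and add.
    af, _ := strconv.ParseFloat(fmt.Sprint(a), 64)
    bf, _ := strconv.ParseFloat(fmt.Sprint(b), 64)
    return af + bf
}

func main() {
    fmt.Println(plus(1, 3))     // 4
    fmt.Println(plus("1", "1")) // 11
    fmt.Println(plus("1", 3))   // 13
}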

How to set a default value for a custom Google protobuf type?

I have a google protobuf structure:
message ResourceProto {
  optional int32 memory = 1;
  optional int32 core = 2;
}
And I have another structure:
message AnotherProto {
  optional ResourceProto resource = 1 [default to ResourceProto(100, 1)];
  ...
}
I know how to set a default value for a normal type like int, string, or bool, but how do I assign a default value to a custom structure? What is the syntax? Say, how do I set the default value of resource in AnotherProto to memory = 100 and core = 1?
Protocol Buffers doesn't support default values for fields of non-primitive types.
Not sure why exactly, but I would assume this is because it is rarely needed in practice and tricky to implement:
Arbitrary default values are rather difficult to self-describe in a consistent and portable way. Essentially you need to have a notion of a dynamically-typed any type, which is not supported by Protobuf2. Instead, they represent defaults as optional string default_value with some implementation-dependent syntax for the values.
When you allow this in the definition language, you need to introduce syntax for structured default values. This is slightly more complicated than supporting syntax for primitive values alone.
Depending on the target language, it may not be quite clear how to handle such default values at runtime with regard to dynamic object allocation and ownership. The safest option would involve copying, which may result in an unexpected performance hit.
That said, fundamentally, it can be done. For instance, I've implemented support for arbitrary defaults in piqi and it works well in OCaml and Erlang.
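Since the schema can't express this, the usual workaround is to apply the default in application code whenever the field is unset. Here's a minimal sketch of that idea in Go, using hand-written structs that merely mirror the messages (illustrative types, not generated protobuf code):

package main

import "fmt"

// Hand-written stand-ins for the generated message types.
type ResourceProto struct {
    Memory int32
    Core   int32
}

type AnotherProto struct {
    Resource *ResourceProto // nil means "not set"
}

// resourceOrDefault returns the configured resource, or the desired
// default (memory = 100, core = 1) when the field was never set.
func resourceOrDefault(m *AnotherProto) ResourceProto {
    if m.Resource != nil {
        return *m.Resource
    }
    return ResourceProto{Memory: 100, Core: 1}
}

func main() {
    fmt.Println(resourceOrDefault(&AnotherProto{}))                                          // {100 1}
    fmt.Println(resourceOrDefault(&AnotherProto{Resource: &ResourceProto{Memory: 512, Core: 4}})) // {512 4}
}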

Boolean parameters - should I name them?

So I just came across some code that reads like so:
checkCalculationPeriodFrequency("7D", "7D", SHOULD_MATCH);
and
checkCalculationPeriodFrequency("7D", "8D", SHOULD_NOT_MATCH);
Let's not worry about what the code does for now (or indeed, ever); instead, let's look at that last parameter: SHOULD_MATCH and SHOULD_NOT_MATCH.
It's something I've thought of before but thought might be "bad" to do (inasmuch as "bad" holds any real meaning in a postmodernist world).
above, those values are declared (as you might have assumed):
private boolean SHOULD_MATCH = true;
private boolean SHOULD_NOT_MATCH = false;
I can't recall reading about "naming" the boolean parameter passed to a method call to ease readability, but it certainly makes sense (for readability, but then, it also hides what the value is, if only a teeny bit). Is this a style thing that others have found is instagram or like, soooo facebook?
Naming the argument would help with readability, especially when the alternative is usually something like
checkCalculationFrequency("7D",
"8D",
true /* should match */);
which is ugly. Having context-specific constants could be a solution to this.
I would actually go a step further and redefine the function prototype to accept an enum instead:
enum MatchType {
    ShouldMatch,
    ShouldNotMatch
};
void checkCalculationFrequency(string a, string b, MatchType match);
I would prefer this over a boolean, because it gives you flexibility to extend the function to accept other MatchTypes later.
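For what it's worth, the same idea sketched in Go, where a defined type with constants plays the role of the enum (names are illustrative):

package main

import "fmt"

// MatchType replaces the bare boolean so call sites read unambiguously.
type MatchType int

const (
    ShouldMatch MatchType = iota
    ShouldNotMatch
)

func checkCalculationPeriodFrequency(a, b string, match MatchType) {
    fmt.Println(a, b, match == ShouldMatch)
}

func main() {
    checkCalculationPeriodFrequency("7D", "7D", ShouldMatch)
    checkCalculationPeriodFrequency("7D", "8D", ShouldNotMatch)
}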
I suggest not doing it the way shown in the question.
First, the two members SHOULD_MATCH and SHOULD_NOT_MATCH are created again for every object, which isn't good because they aren't behavior of the object. If you do want to keep them, at least declare them static final.
Second, I'd prefer an enum instead, because it gives you complete control over the value of the parameter: at the call site you must use either SHOULD_MATCH or SHOULD_NOT_MATCH, not just any true or false. That improves readability too.
It is indeed for readability. The idea is that the reader of the function call might not know immediately what the value true mean in the function call, but SHOULD_MATCH conveys the meaning immediately (and if you need to look up the actual value, you can do so with not much effort).
This becomes even more understandable if you have more than one boolean parameter in the function call: which true means what?
The next step in this logic is to create named object values (e.g. via enum) for the parameter values: you cannot pass on the wrong value to the function (e.g. in the example of three boolean parameters, nothing stops me from passing in SHOULD_MATCH for all of them, even though it does not make sense semantically for that function).
It's definitely more than a style thing.
We have a similar system that takes input from a switch in the form of boolean values, 1 or 0, which is pretty much the same as true or false.
In this system we declare our variables OPEN = true and CLOSED = false* and pass them into functions which perform different actions depending on the state of the switch. Now if someone happens to hook up the switch differently it may be that we now get the value 0 when it is OPEN and 1 when it is CLOSED.
By having named boolean variables we can easily adapt the system without having to change the logic throughout. The code becomes self-documenting because developers can see more clearly what action is meant to be taken in each case, without worrying about which raw value comes in.
Of course, the true purpose of the boolean value should be well documented elsewhere, and it is in our system....honest....
*(maybe we use OPEN, !OPEN I forget)
