How can I test validation functionality in a Terraform provider - Go

I have written some more sophisticated validation logic for fields I need to validate in a custom Terraform provider. I can, of course, test these as unit tests, but that's insufficient; what if I forgot to actually apply the validator?
So, I need to actually use Terraform config and have the provider do its normal, natural thing.
Basically, I expect it to error. The documentation seems to indicate that I should do a regex match on the output, but that can't be right; it seems super brittle. Can someone tell me how this is done?
func TestValidation(t *testing.T) {
    const userBasic = `
resource "my_user" "dude" {
  name     = "the.dude%d"
  password = "Password1" // needs a special char to pass
  email    = "the.dude%d@domain.com"
  groups   = ["readers"]
}
`
    rgx, _ := regexp.Compile("abc")
    resource.UnitTest(t, resource.TestCase{
        Providers: testAccProviders,
        Steps: []resource.TestStep{
            {
                Config:      userBasic,
                ExpectError: rgx,
            },
        },
    })
}
This code obviously doesn't work, and a lot of research isn't yielding answers.

Since SDK version 2.3.0 you can set an ErrorCheck function on resource.TestCase to provide more complex validation of errors.
For example:
resource.UnitTest(t, resource.TestCase{
    Providers: testAccProviders,
    ErrorCheck: func(err error) error {
        if err == nil {
            return errors.New("expected error but got none")
        }
        // your validation code here
        // some simple example with string matching
        if strings.Contains(err.Error(), "your expected error message blah") {
            return nil
        }
        // return original error if no match
        return err
    },
    Steps: []resource.TestStep{
        {
            Config: userBasic,
        },
    },
})
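For completeness, the step-level ExpectError approach from the question also works once the config placeholders are filled in and the regex matches the validator's actual message. A minimal sketch, reusing the question's userBasic (the error text here is hypothetical):
resource.UnitTest(t, resource.TestCase{
    Providers: testAccProviders,
    Steps: []resource.TestStep{
        {
            Config: fmt.Sprintf(userBasic, 1, 1),
            // Hypothetical message; match whatever your validator actually returns.
            ExpectError: regexp.MustCompile(`must contain a special character`),
        },
    },
})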

Related

AWS Route53 - Adding Simple Record

I am able to add "Weighted" A records to AWS Route53 using the API, with Weight: aws.Int64(weight); it works great using the code below. But how do I add a "Simple" A record? I did not see an option for Simple.
params := &route53.ChangeResourceRecordSetsInput{
    ChangeBatch: &route53.ChangeBatch{ // Required
        Changes: []*route53.Change{ // Required
            { // Required
                Action: aws.String("UPSERT"), // Required
                ResourceRecordSet: &route53.ResourceRecordSet{ // Required
                    Name: aws.String(name), // Required
                    Type: aws.String("A"), // Required
                    ResourceRecords: []*route53.ResourceRecord{
                        { // Required
                            Value: aws.String(target), // Required
                        },
                    },
                    TTL: aws.Int64(TTL),
                    //Region: aws.String("us-east-1"),
                    Weight:        aws.Int64(weight),
                    SetIdentifier: aws.String("-"),
                },
            },
        },
        Comment: aws.String("Sample update."),
    },
    HostedZoneId: aws.String(zoneId), // Required
}
A "Simple" record is just phrasing inside the web console. Just leave the record without any Weight or latency setting and it will be a standard DNS record inside Route53.
See the ResourceRecordSet type documentation; required fields are marked, and the rest, like Weight, are optional:
https://docs.aws.amazon.com/sdk-for-go/api/service/route53/#ResourceRecordSet
It's pretty much the same as using the CLI (https://aws.amazon.com/premiumsupport/knowledge-center/simple-resource-record-route53-cli/); just port the same fields to the Go struct.
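In other words, dropping Weight and SetIdentifier from the question's struct should be enough. A minimal sketch reusing the question's variables (name, target, TTL, zoneId):
params := &route53.ChangeResourceRecordSetsInput{
    ChangeBatch: &route53.ChangeBatch{
        Changes: []*route53.Change{
            {
                Action: aws.String("UPSERT"),
                ResourceRecordSet: &route53.ResourceRecordSet{
                    Name: aws.String(name),
                    Type: aws.String("A"),
                    ResourceRecords: []*route53.ResourceRecord{
                        {Value: aws.String(target)},
                    },
                    TTL: aws.Int64(TTL),
                    // No Weight or SetIdentifier: this is a "simple" record.
                },
            },
        },
        Comment: aws.String("Simple record update."),
    },
    HostedZoneId: aws.String(zoneId),
}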
After a couple of hours of searching that led nowhere, I found that aws-sdk-go-v2 works well for adding a simple record. In my experience, not adding a weight or latency flag only resulted in an error with version 1 of the SDK, so I did this and it worked fine:
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
    // handle error
    return err
}
client := route53.NewFromConfig(cfg)
params := &route53.ChangeResourceRecordSetsInput{
    ChangeBatch: &types.ChangeBatch{
        Changes: []types.Change{
            {
                Action: types.ChangeActionUpsert,
                ResourceRecordSet: &types.ResourceRecordSet{
                    Name: aws.String(name),
                    Type: types.RRTypeCname,
                    ResourceRecords: []types.ResourceRecord{
                        {
                            Value: aws.String(target),
                        },
                    },
                    TTL: aws.Int64(60),
                },
            },
        },
    },
    HostedZoneId: aws.String(zoneId),
}
output, err := client.ChangeResourceRecordSets(context.TODO(), params)
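From there you would typically check the returned error and, if useful, the propagation status. A minimal sketch, assuming the ChangeInfo field exposed by aws-sdk-go-v2's route53 package:
if err != nil {
    return err
}
// The change is PENDING until Route53 propagates it, then INSYNC.
fmt.Println(output.ChangeInfo.Status)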

Make a resource schema dependent on another variable

I'm creating a plugin in Terraform and I want to add a field to the schema which can be set only when another field has been provided.
"host_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("host_name", nil),
Description: "Should give name in FQDN if being used for DNS puposes .",
},
"enableDns": &schema.Schema{
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("host_name", nil),
Description: "Should give name in FQDN if being used for DNS puposes .",
Here I want to allow the enableDns string in the .tf file only when host_name is passed. If host_name is not given and I pass enableDns, it should throw an error during plan.
Terraform providers don't really have a first-class way to do things conditionally based on other parameters, other than the ConflictsWith attribute.
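For mutually exclusive fields, ConflictsWith is declared directly in the schema. A minimal sketch with hypothetical field names (not from the question):
"option_a": &schema.Schema{
    Type:          schema.TypeString,
    Optional:      true,
    ConflictsWith: []string{"option_b"},
},
"option_b": &schema.Schema{
    Type:          schema.TypeString,
    Optional:      true,
    ConflictsWith: []string{"option_a"},
},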
There is a hacky way to do some cross-parameter work using CustomizeDiff, but it's only really used in a couple of places where it's truly needed.
Normally the provider would simply validate the individual parameters; if the API the provider works against requires cross-parameter validation, this is only seen at apply time, when the API returns the error.
For an example of using CustomizeDiff to throw a plan-time error when doing cross-parameter validation, see the aws_elasticache_cluster resource:
CustomizeDiff: customdiff.Sequence(
    func(diff *schema.ResourceDiff, v interface{}) error {
        // Plan time validation for az_mode
        // InvalidParameterCombination: Must specify at least two cache nodes in order to specify AZ Mode of 'cross-az'.
        if v, ok := diff.GetOk("az_mode"); !ok || v.(string) != elasticache.AZModeCrossAz {
            return nil
        }
        if v, ok := diff.GetOk("num_cache_nodes"); !ok || v.(int) != 1 {
            return nil
        }
        return errors.New(`az_mode "cross-az" is not supported with num_cache_nodes = 1`)
    },
),
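Adapted to the question's schema, a sketch along the same lines (untested, and assuming enableDns is made Optional rather than Required) would fail the plan when enableDns is set without host_name:
CustomizeDiff: customdiff.Sequence(
    func(diff *schema.ResourceDiff, v interface{}) error {
        // Plan-time validation: enableDns may only be set together with host_name.
        if _, ok := diff.GetOk("enableDns"); !ok {
            return nil
        }
        if _, ok := diff.GetOk("host_name"); ok {
            return nil
        }
        return errors.New(`"enableDns" may only be set when "host_name" is provided`)
    },
),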

In GraphQL, how to "aggregate" properties

Excuse the vague code, I can't really copy/paste. :)
I have a type in GraphQL like this:
type Thing {
  toBe: Boolean
  orNot: Boolean
}
I'm trying to create a new property on this type that is an... aggregate of those two. Basically, it returns a new value based upon those values. The code would be something like:
if (this.toBe && !this.orNot) { return "To be!"; }
if (!this.toBe && !this.orNot) { return "OrNot!"; }
Does this make sense? So it would return something like:
Thing1 {
  toBe: true;
  orNot: false;
  newProp: "To be!"
}
Yes, you can easily create aggregated fields in your GraphQL object types by handling the required logic in that field's resolver. When resolving a field you have an instance of the parent object, so you can compute aggregated fields that are not present in your domain model from the object's data; this is one of the beauties of GraphQL. Note that the details can differ between GraphQL library implementations. Following are examples of such a use case in JavaScript and Scala.
Example in Graphql.js:
var FooType = new GraphQLObjectType({
    name: 'Foo',
    fields: {
        toBe: { type: GraphQLBoolean },
        orNot: { type: GraphQLBoolean },
        newProp: {
            type: GraphQLString,
            resolve(obj) {
                if (obj.toBe && !obj.orNot) { return "To be!"; }
                else { return "OrNot!"; }
            }
        }
    }
});
Example in Sangria-graphql:
ObjectType(
    "Foo",
    "graphql object type for foo",
    fields[Unit, Foo](
        Field("toBe", BooleanType, resolve = _.value.toBe),
        Field("orNot", BooleanType, resolve = _.value.orNot),
        Field("newProp", StringType, resolve = c => {
            if (c.value.toBe && !c.value.orNot) "To be!" else "OrNot!"
        })
    )
)
The various GraphQL server library implementations all have ways to provide resolver functions that supply the value for a field. You'd have to include the field in your schema and write the code for it, but this is a reasonable thing to do, and the code you quote is a good starting point.
In Apollo in particular, you pass a map of resolvers as the resolvers: option to the ApolloServer constructor. If a field doesn't have a resolver, it defaults to returning the relevant field from the parent JavaScript object. So you can write:
const resolvers = {
    Thing: {
        newProp: (parent) => {
            if (parent.toBe && !parent.orNot) { return "To be!"; }
            if (!parent.toBe && !parent.orNot) { return "OrNot!"; }
            return "That is the question";
        }
    }
};

Kubernetes: filter objects in Informer

I'm writing a custom controller for Kubernetes, and I'm creating a shared informer:
cache.NewSharedIndexInformer(
    &cache.ListWatch{
        ListFunc: func(options meta_v1.ListOptions) (k8sruntime.Object, error) {
            return client.CoreV1().ConfigMaps(nameSpace).List(options)
        },
        WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
            return client.CoreV1().ConfigMaps(nameSpace).Watch(options)
        },
    },
    &api_v1.ConfigMap{},
    0, // skip resync
    cache.Indexers{},
)
I have the option to add a filtering function to the callbacks to further decrease the number of objects I'm working with, something like this:
options.FieldSelector = fields.OneTermEqualSelector("metadata.name", nodeName).String()
I would like to filter out objects by regular expression, or at least by some label. Unfortunately, the documentation is not helping; I could not find anything except the tests for the code itself.
How do I apply a regular expression to the filtering mechanism? Where can I get some examples on this issue?
It's not possible to filter objects by regular expression. It is possible to filter objects by label; this is the code that will filter by label:
labelSelector := labels.Set(map[string]string{"mylabel": "ourdomain1"}).AsSelector()
informer := cache.NewSharedIndexInformer(
    &cache.ListWatch{
        ListFunc: func(options meta_v1.ListOptions) (k8sruntime.Object, error) {
            options.LabelSelector = labelSelector.String()
            return client.CoreV1().ConfigMaps(nameSpace).List(options)
        },
        WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
            options.LabelSelector = labelSelector.String()
            return client.CoreV1().ConfigMaps(nameSpace).Watch(options)
        },
    },
    &api_v1.ConfigMap{},
    0, // skip resync
    cache.Indexers{},
)
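That said, if you really need regex-style filtering, one workaround (my sketch, not part of the original answer) is to filter client-side when registering event handlers, using client-go's cache.FilteringResourceEventHandler. Note this only narrows what your handlers see; the informer still lists and watches everything the selector allows:
nameRe := regexp.MustCompile(`^config[0-9]+$`) // hypothetical name pattern

informer.AddEventHandler(cache.FilteringResourceEventHandler{
    FilterFunc: func(obj interface{}) bool {
        cm, ok := obj.(*api_v1.ConfigMap)
        return ok && nameRe.MatchString(cm.Name)
    },
    Handler: cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            // only ConfigMaps whose name matches the pattern arrive here
        },
    },
})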
Another thing that is important to remember is how you add new objects to Kubernetes. I was doing something like:
kubectl --namespace=ourdomain1 create configmap config4 -f ./config1.yaml
This is not good: it overwrites all the fields in the ConfigMap and puts the whole file content into the data of the new object. The proper way is:
kubectl create -f ./config1.yaml

Loopback custom password validation

Very simple question: if I try to validate a password in a User model, it seems I can only validate the already encrypted password?
So for example if I use
Customer.validatesLengthOf('password', { min: 8, message: 'Too short' })
Then the encrypted password is checked (which is always longer than 8 characters), so no good... If I try to use a custom validation, how can I get access to the original password (the original req.body.password basically)?
EDIT (August 20, 2019): I am unsure if this is still an issue in the latest loopback releases.
In fact, this is a known problem in loopback. The tacitly approved solution is to override the <UserModel>.validatePassword() method with your own. YMMV.
akapaul commented on Jan 10, 2017:
I've found another way to do this. The common User model has a method called validatePassword. If we extend our UserModel from User, we can redefine this method in JS, like the following:
var g = require('loopback/lib/globalize');

module.exports = function(UserModel) {
    UserModel.validatePassword = function(plain) {
        var err,
            passwordProperties = UserModel.definition.properties.password;
        if (plain.length > passwordProperties.max) {
            err = new Error(g.f('Password too long: %s (maximum %d symbols)', plain, passwordProperties.max));
            err.code = 'PASSWORD_TOO_LONG';
        } else if (plain.length < passwordProperties.min) {
            err = new Error(g.f('Password too short: %s (minimum %d symbols)', plain, passwordProperties.min));
            err.code = 'PASSWORD_TOO_SHORT';
        } else if (!(new RegExp(passwordProperties.pattern, 'g').test(plain))) {
            err = new Error(g.f('Invalid password: %s (symbols and numbers are allowed)', plain));
            err.code = 'INVALID_PASSWORD';
        } else {
            return true;
        }
        err.statusCode = 422;
        throw err;
    };
};
This works for me. I don't think the g (globalize) object is required here, but I added it just in case. Also, I've added my validator options in the JSON definition of UserModel, per the LoopBack docs.
To use the above code, one would put the validation rules in the model's .json definition like so (see max, min, and pattern under properties.password):
{
    "name": "UserModel",
    "base": "User",
    ...
    "properties": {
        ...
        "password": {
            "type": "string",
            "required": true,
            ...
            "max": 50,
            "min": 8,
            "pattern": "(?=.*[A-Z])(?=.*[!@#$&*])(?=.*[0-9])(?=.*[a-z])^.*$"
        },
        ...
    },
    ...
}
OK, no answer, so what I'm doing is using a remote hook to get access to the original plain password; that'll do for now.
var plainPwd

Customer.beforeRemote('create', function (ctx, inst, next) {
    plainPwd = ctx.req.body.password
    next()
})
Then I can use it in a custom validation:
Customer.validate('password', function (err, res) {
    const pattern = new RegExp(/some-regex/)
    if (plainPwd && !pattern.test(plainPwd)) err()
}, { message: 'Invalid format' })
OK, I guess the above answer is quite novel and obviously is accepted, but if you want a really easy solution with just some basic validations and not much code, then loopback-mixin-complexity is the solution for you.
If you don't want to create another dependency, you can go ahead with a custom mixin that you add to your user model, or any other model where you need some kind of validation, and it will do the validation for you.
Here's some sample code for how to create such a mixin:
module.exports = function(Model, options) {
    'use strict';

    // Observe any insert/update event on Model
    Model.observe('before save', function event(ctx, next) {
        if (ctx.instance) {
            if (!yourValidatorFn(ctx.instance.password))
                next('password not valid');
            else
                next();
        } else {
            if (!yourValidatorFn(ctx.data.password))
                next('password not valid');
            else
                next();
        }
    });
};
