Passing parameters to Azure ARM templates - windows

I have a template I'm using to deploy to a resource group which takes this parameter:
"envPrefixName": {
"type": "string",
"metadata": {
"description": "Prefix for the environment (2-5 characters)"
},
"defaultValue": "cust1",
"minLength": 2,
"maxLength": 5
},
I would like to make this parameter a value that can be overridden when the cmdlet is called, like so:
$AzureParams = @{
    ResourceGroupName = $ResourceGroup
    TemplateUri = $TemplateUri
    TemplateParameterUri = $TemplateParamUri
    Mode = "Complete"
    envPrefixName = "sunlb" # Override default parameter value
    Force = $true
}
New-AzureRmResourceGroupDeployment @AzureParams
I've tried this approach, but the deployment keeps using the value set in the template rather than the one passed as a parameter in my call.
EDIT: Is it possible that the TemplateParameterUri file is causing an issue?

If you supply the TemplateParameterUri, the deployment takes its parameter values from that file, and your envPrefixName gets "lost" because the inline parameter won't be evaluated.
Drop the TemplateParameterUri and it will work as you expect (but then you have to supply all the parameters yourself, unless they have default values).
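For illustration, a minimal sketch of the same call with the parameters file dropped, assuming $ResourceGroup and $TemplateUri are defined as in the question:
$AzureParams = @{
    ResourceGroupName = $ResourceGroup
    TemplateUri = $TemplateUri
    Mode = "Complete"
    envPrefixName = "sunlb" # inline value is now actually used
    Force = $true
}
New-AzureRmResourceGroupDeployment @AzureParams
Any template parameter without a defaultValue would also need to be added to this hashtable.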

fetch attribute type from terraform provider schema

I am trying to find a way to fetch the attribute type of a resource/data source from a Terraform provider's schema (I am currently using GCP, but will be extending to pretty much all providers).
My current setup flow:
1) I run terraform providers schema -json to fetch the provider's schema.
2) This generates a huge JSON file with the schema structure of the provider.
ref:
How to get that list of all the available Terraform Resource types for Google Cloud?
https://developer.hashicorp.com/terraform/cli/commands/providers/schema
3) From this I am trying to fetch the type of each attribute, e.g.:
"google_cloud_run_service": {
  "version": 1,
  "block": {
    "attributes": {
      "autogenerate_revision_name": {
        "type": "bool",
        "description_kind": "plain",
        "optional": true
      },
4) My end goal is to generate a variables.tf from the above schema, covering all resources and all attributes supported by each resource, along with the type constraint.
ref: https://developer.hashicorp.com/terraform/language/values/variables
5) I already got some help on how to generate that.
ref: Get the type of value using cty in hclwrite
6) Now the challenge is to handle complex structures like the one below, which is one of the attributes of "google_cloud_run_service":
"status": {
  "type": [
    "list",
    [
      "object",
      {
        "conditions": [
          "list",
          [
            "object",
            {
              "message": "string",
              "reason": "string",
              "status": "string",
              "type": "string"
            }
          ]
        ],
        "latest_created_revision_name": "string",
        "latest_ready_revision_name": "string",
        "observed_generation": "number",
        "url": "string"
      }
    ]
  ],
  "description": "The current status of the Service.",
  "description_kind": "plain",
  "computed": true
}
7) So, based on the above complex structure type, I want to generate the variables.tf entry for this kind of attribute using the code sample from point #5, and the desired output should look something like this:
variable "autogenerate_revision_name" {
  type        = string
  default     = ""
  description = "Sample description"
}

variable "status" {
  type = list(object({
    conditions = list(object({
      message = string
      reason  = string
      status  = string
      type    = string
    }))
    latest_created_revision_name = string
    latest_ready_revision_name   = string
    observed_generation          = number
    url                          = string
  }))
  default = "default values in the above type format"
}
The above was written manually, so it might not exactly align with the schema, but I hope it makes clear what I am trying to achieve.
The first variable in the above code comes from the example in point #3, which is easy to generate; the second, from point #6, has a complex type constraint, and I am seeking help getting it generated.
Is this possible to generate using the helper/schema SDK (https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk/v2@v2.24.0/helper/schema), along with the code example from point #5?
Summary: I am generating the JSON schema of a Terraform provider using terraform providers schema -json, reading that JSON file, and generating HCL code for each resource, but I am stuck generating the type constraints for the attributes/variables, hence this question.
Any sort of help is really appreciated, as I have been stuck on this for quite a while.
If you've come this far, thank you for reading such a lengthy question; any pointers are welcome.
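Not a full answer, but one possible building block: the "type" field in the provider schema JSON is go-cty's JSON type serialization, so it can be decoded with go-cty and rendered as an HCL type-constraint string with the typeexpr extension's TypeString helper (assuming a reasonably recent hcl/v2). A minimal sketch, where the raw value is abridged from the status attribute above:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	ctyjson "github.com/zclconf/go-cty/cty/json"
)

func main() {
	// Abridged "type" value of the status attribute from the schema JSON.
	raw := []byte(`["list",["object",{
		"latest_created_revision_name":"string",
		"observed_generation":"number",
		"url":"string"}]]`)

	// The provider schema serializes types in cty's JSON type format,
	// so UnmarshalType decodes it straight into a cty.Type.
	ty, err := ctyjson.UnmarshalType(raw)
	if err != nil {
		log.Fatal(err)
	}

	// TypeString renders the cty.Type as an HCL type constraint,
	// e.g. list(object({ ... })), ready for a variable's type argument.
	fmt.Printf("type = %s\n", typeexpr.TypeString(ty))
}

The resulting string could then be spliced into each variable block with hclwrite (for example via SetAttributeRaw and hclwrite.TokensForIdentifier), along the lines of the answer linked in point #5.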

ElasticsearchTransport use transform to change indexPrefix?

UPDATE: I can actually change the indexPrefix using the code below, but the actual _index that is used to filter in Kibana still gets its name from the original indexPrefix. It seems changing the indexPrefix in the transformer method is too late, because the _index has already been created with the original prefix.
I'm using winston and winston-elasticsearch in a Node.js/Express setup and want to use the same logger to log to different indexes (different indexPrefix).
const logger = winston.createLogger({
  transports
});
Here, transports is an array of different transports. One of them is an ElasticsearchTransport that takes parameters like indexPrefix and level, among others. The level can be changed based on the type of log by passing in a transformer method as a parameter.
new winstonElastic.ElasticsearchTransport({
  level: logLevel,
  indexPrefix: getIndex(),
  messageType: 'log',
  ensureMappingTemplate: true,
  mappingTemplate: indexTemplateMapping,
  transformer: (logData: any) => {
    const transformed: any = {};
    transformed['@timestamp'] = logData.timestamp ? logData.timestamp : new Date().toISOString();
    transformed.message = logData.message;
    transformed.level = parseWinstonLevel(logData.level);
    transformed.fields = _.extend({}, staticMeta, logData.meta);
    transformed.indexPrefix = getIndex(logData.indexPrefix);
    return transformed;
  },
});
The transformer method is called whenever the logger writes a new entry, and I can verify that it works by setting properties like message. It also overwrites the level to whatever the current log level is. For some reason it doesn't work on the indexPrefix property: even when it changes, nothing overwrites the initial indexPrefix. I even tried to remove the initial value, but then the logging fails, having never set the indexPrefix.
Does anyone know why? Does it have to do with the mappingTemplate listed below?
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "@version": { "type": "keyword" },
      "message": { "type": "text" },
      "severity": { "type": "keyword" },
      "fields": {
        "dynamic": true,
        "properties": {}
      }
    }
  }
}
OK, if anyone is interested: I ended up making a loggerFactory instead. I create a logger seeded with the correct indexPrefix through the factory; that way I end up with one logger instance per indexPrefix I want...
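A minimal sketch of that factory idea, assuming winston and winston-elasticsearch as used above (createLoggerFor and the caching map are made-up names):

import * as winston from 'winston';
import { ElasticsearchTransport } from 'winston-elasticsearch';

// One cached logger per index prefix, so each transport is constructed
// with the right indexPrefix up front instead of rewriting it later.
const loggers = new Map<string, winston.Logger>();

export function createLoggerFor(indexPrefix: string): winston.Logger {
  let logger = loggers.get(indexPrefix);
  if (!logger) {
    logger = winston.createLogger({
      transports: [new ElasticsearchTransport({ indexPrefix })],
    });
    loggers.set(indexPrefix, logger);
  }
  return logger;
}

Callers then ask the factory for a logger instead of sharing one instance, e.g. createLoggerFor('audit').info('saved').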
For those who are having the same problem, I solved this in a different way:
1 - I created a variable inside the ElasticsearchTransport scope;
2 - I changed the value of the variable inside the transformer method;
3 - I used the variable inside an indexPrefix function to define which prefix to use:
indexPrefix: () => variable ? 'test1' : 'test2'

Pass collection into custom angular schematic?

I want to create a custom Angular schematic that can accept a collection of action names. I will then generate 3 ngrx actions for every action name provided by the user.
For example, I want to create a schematic that can be invoked like this:
ng g my-collection:my-schematic --actions=GetById,GetByFirstName
Then I'll generate code for GetById, GetByIdSuccess, GetByIdError, GetByFirstName, GetByFirstNameSuccess, GetByFirstNameError.
The issue is that I've only seen Angular schematics that accept a single value as an input parameter. Does anyone know how to handle collections in a custom Angular schematic?
You can follow this blog; it will teach you how to create your own schematics project:
https://blog.angular.io/schematics-an-introduction-dc1dfbc2a2b2
After you generate your schematics project, you can extend @ngrx/schematics in your collection.json file:
{
  ...
  "extends": ["@ngrx/schematics"],
}
and then use the ngrx schematics to generate 3 actions like this:
externalSchematic('@ngrx/schematics', 'action')
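If the goal is several actions per name, a rough sketch of fanning that out inside a rule (the function name and the options passed to the action schematic are illustrative):

import { Rule, chain, externalSchematic } from '@angular-devkit/schematics';

// Illustrative: queue up one @ngrx/schematics action per requested name.
export function generateActions(actionNames: string[]): Rule {
  return chain(
    actionNames.map(name =>
      externalSchematic('@ngrx/schematics', 'action', { name })
    )
  );
}

The Success/Error variants could be derived from each base name before mapping.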
I haven't found a good example of how to pass an array of strings into a schematic parameter, but I found a workaround. What I did was have a simple string input parameter that is consistently delimited (I used , to delimit the values). Here is my schema:
export interface Schema {
  name: string;
  path?: string;
  project?: string;
  spec?: boolean;
  actions: string;
  __actions: string[];
  store?: string;
}
I parse the provided actions param into the string array __actions and use that property in my templates. Here is a snippet from my index.ts:
export default function(options: ActionOptions): Rule {
  return (host: Tree, context: SchematicContext) => {
    options.path = getProjectPath(host, options);
    const parsedPath = parseName(options.path, options.name);
    options.name = parsedPath.name;
    options.path = parsedPath.path;
    options.__actions = options.actions.split(',');
    options.__actions = options.__actions.map(_a => classify(_a));
    // ...
  };
}
If you know of a better way to process these please share. Thanks!
You need to pass the actions multiple times.
ng g my-collection:my-schematic --actions=GetById --actions=GetByFirstName
Define the parameter as an array within your schema.json file.
...
"actions": {
  "type": "array",
  "items": {
    "type": "string"
  },
  "description": "The name of the actions."
},
...
Also in your schema.ts.
export interface Schema {
  actions: string[];
}
If you want to pull them right off of the command args, you can do the following:
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "$id": "Sample",
  "title": "Sample Schematic",
  "type": "object",
  "description": "Does stuff.",
  "additionalProperties": false,
  "properties": {
    "things": {
      "type": "array",
      "items": {
        "type": "string"
      },
      "$default": {
        "$source": "argv"
      },
      "description": "Things from the command-line args."
    }
  }
}
Then when you run your schematic you can do:
schematics schematic-lib:sample stuff other-stuff more-stuff
In this case, the things property will be ['stuff', 'other-stuff', 'more-stuff'].
Edit: Note that the required value in the schema won't cause the schematic to fail if you don't provide any args. You'd need to do validation on that property in your schematic.

Can a lambda in an AWS Step Function know the "execution name" of the step function that launched it?

I have this step function that can sometimes fail, and I'd like to record this in a (Dynamo) DB. What would be handy is if I could just create a new error-handling step that would pick up the "execution name" from somewhere (I didn't find it in the context) and record the failure.
Is that possible?
AWS Step Functions recently released a feature called the context object.
Using the $$ notation inside a Parameters block, you can access information about your execution, including the execution name and ARN, the state machine name and ARN, and others.
https://docs.aws.amazon.com/step-functions/latest/dg/input-output-contextobject.html
You can create a state to extract the context details that are then accessible to all the other states, such as:
{
  "StartAt": "ExtractContextDetails",
  "States": {
    "ExtractContextDetails": {
      "Parameters": {
        "arn.$": "$$.Execution.Id"
      },
      "Type": "Pass",
      "ResultPath": "$.contextDetails",
      "Next": "NextStateName"
    },
    ...
  }
}
Yes, it can, but it is not as straightforward as you might hope.
You are right to expect that a Lambda should be able to get the name of the calling state machine. Lambdas are passed a context object that returns information about the caller. However, that object is null when a state machine calls your Lambda. This means two things: you will have to work harder to get what you need, and this might be implemented in the future.
Right now, the only way I know of achieving this is by starting the execution of the state machine from within another Lambda and passing in the name in the input Json. Here is my code in Java...
String executionName = ...; // generate a unique name
StartExecutionRequest startExecutionRequest = new StartExecutionRequest()
        .withStateMachineArn(stateMachineArn)
        .withInput("{\"executionName\": \"" + executionName + "\"}") // note the escaped quotes
        .withName(executionName);
StartExecutionResult startExecutionResult = sf.startExecution(startExecutionRequest);
String executionArn = startExecutionResult.getExecutionArn();
If you do this, you will now have the name of your execution in the input JSON of your first step. If you want to use it in other steps, you should pass it around.
You might also want the ARN of the execution so you can call state machine methods from within your activities or tasks. You can construct the ARN yourself using the executionName:
arn:aws:states:us-east-1:accountid:execution:statemachinename:executionName
No. Unless you pass that information in the event, Lambda doesn't know whether or not it's part of a step function. Step functions orchestrate lambdas and maintain state between lambdas.
"States": {
"Get Alter Query": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"OutputPath": "$.Payload",
"Parameters": {
"FunctionName": "arn:aws:lambda:ap-northeast-2:1111111:function:test-stepfuction:$LATEST",
"Payload": {
"body.$": "$",
"context.$": "$$"
}
},
"Retry": [
{
"ErrorEquals": [
"Lambda.ServiceException",
"Lambda.AWSLambdaException",
"Lambda.SdkClientException",
"Lambda.TooManyRequestsException"
],
"IntervalSeconds": 2,
"MaxAttempts": 6,
"BackoffRate": 2
}
],
"Next": "Alter S3 Location"
}
}
I solved it by adding context to the payload.
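With the context.$ / $$ payload above, the Lambda can read the execution name straight off the event; a minimal sketch (the field layout follows the Parameters block above, handler wiring assumed):

// The state passes { body, context }, where context is the Step Functions
// context object ($$), so the execution name is at context.Execution.Name.
export const handler = async (event: {
  body: unknown;
  context: { Execution: { Id: string; Name: string } };
}) => {
  const executionName = event.context.Execution.Name;
  console.log(`invoked by execution ${executionName}`);
  // ...record executionName in DynamoDB on failure, etc.
};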
When using Step Functions, I highly recommend specifying some sort of key in the step function configuration. For my step functions I always provide:
"ResultPath": "$",
"Parameters": {
"source": "StepFunction",
"type": "LAMBDA_METHOD_SWITCH_VALUE",
"payload.$": "$"
},
And have each call to Lambda use the type field to determine what code to call. When your code fails, wrap it in a try/catch and explicitly use the passed-in type, which can be the name of the step, to determine what to do next.

How can I make Bot Framework Prompt Choices only accept exact matches?

In the Bot Framework (Node.js API), how can I force my prompt choices to only match user input that matches exactly, rather than doing partial or fuzzy matching? Should I create a custom prompt for something so basic?
I'm using this code:
var choices_films = JSON.parse(fs.readFileSync('films.json', 'utf8'));
builder.Prompts.choice(session, "Say one film", choices_films, { listStyle: builder.ListStyle.button, minScore: 1.0 });
And films.json contains this:
[
  {
    "value": "House of Cards",
    "synonyms": ["house of cards", "house cards", "cards"]
  },
  {
    "value": "House of Kings",
    "synonyms": ["house kings", "house of kings", "kings"]
  },
  {
    "value": "Matrix Revolutions",
    "synonyms": ["matrix", "revolutions"]
  }
]
If I say "house", then "House of Cards" is selected, because it appears first, and the bot framework is ignoring my "minScore: 1.0". Any idea would be welcome, because at the moment I have to do a custom choice or use middleware to capture it and fix it...
If you're using the C# version of the SDK, then there's a PromptDialog.Choice signature which includes a parameter called minScore, described as follows:
(Optional) minimum score from 0.0 - 1.0 needed for a recognized choice to be considered a match. The default value is "0.4".
If minScore is set to a value less than 1, then fuzzy matching will be used, but if you set the value to 1, then only an exact match will be accepted.
The method signature is as follows:
public static void Choice<T>(IDialogContext context, ResumeAfter<T> resume, IPromptOptions<T> promptOptions, bool recognizeChoices = true, bool recognizeNumbers = true, bool recognizeOrdinals = true, double minScore = 0.4)
If you're using the Node.js version of the SDK, it looks like there's an equivalent minScore parameter in the IPromptChoiceFeatures interface (link to source), which is passed to the PromptChoice constructor, so you should be able to set the threshold similarly there as well. Beyond that I can't speak to the specific syntax, as I haven't worked with the Node.js SDK personally.
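An untested sketch of what that might look like in Node.js, assuming botbuilder v3's Prompts.customize hook exists as documented and that minScore on IPromptChoiceFeatures behaves like its C# counterpart:

var builder = require('botbuilder');

// Assumption: replace the default choice prompt with one whose recognizer
// only accepts exact (score 1.0) matches.
builder.Prompts.customize(
  builder.PromptType.choice,
  new builder.PromptChoice({ minScore: 1.0 })
);

After this, the Prompts.choice call from the question should reject partial matches like "house".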
