I have a custom resource that gets the API key from API Gateway and sends it as a header to CloudFront. When I create the stack, my custom resource is triggered because its logical ID is created for the first time. But when I update the stack (i.e. change the API key name), the resource of type AWS::ApiGateway::ApiKey gets a new logical ID, which in turn creates a new API key. At that point my custom resource is not invoked, since it still has the same logical ID, and because of this my CloudFront distribution ends up with the old API key rather than the new one.
Is there a way to invoke my custom resource every time an update happens to my stack?
As a workaround I change the logical ID of the custom resource to trigger it whenever I update a resource in my stack, but this is a bit difficult since the logical ID is referenced by many other resources.
By the way, my custom resource is backed by a lambda function. I even tried changing the Version field and adding values to the Properties field (i.e. stack name, parameters, etc.), but it is still not invoked.
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "MyFrontEndTest" : {
      "Type" : "Custom::PingTester",
      "Version" : "1.0", --> even changed the version to 2.0
      "Properties" : {
        "ServiceToken" : "arn:aws:lambda:us-east-1:*****",
        "InputparameterName" : "MYvalue" --> added this field
      }
    }
  }
}
Thanks
Any help is appreciated
One trick to get the custom resource to execute a lambda when the stack is updated is to configure the custom resource to pass all stack parameters to the lambda function. If any parameters change on stack update, the custom resource will change and trigger the lambda. Just ignore the unneeded keys in the lambda event data. This won't do anything for the scenario when just the template is updated.
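For example, a minimal sketch of that trick (the parameter names ApiKeyName and StageName are placeholders, not from the original template): every stack parameter is passed through to the custom resource's Properties, so any parameter change alters the custom resource and forces CloudFormation to send an Update event to the lambda.
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {
    "ApiKeyName" : { "Type" : "String" },
    "StageName" : { "Type" : "String" }
  },
  "Resources" : {
    "MyFrontEndTest" : {
      "Type" : "Custom::PingTester",
      "Properties" : {
        "ServiceToken" : "arn:aws:lambda:us-east-1:*****",
        "ApiKeyName" : { "Ref" : "ApiKeyName" },
        "StageName" : { "Ref" : "StageName" }
      }
    }
  }
}
The lambda receives these values under event['ResourceProperties'] and can simply ignore the ones it does not need.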
I am writing a post-operation plugin that changes the owner: when the owner has a substitution manager, the owner should be changed to the substitution manager. I tried a service.Update and an AssignRequest, but both throw an exception.
When I execute the request, my entity fails to update (and eventually throws "The request channel timed out while waiting for a reply after 10:00:00"). As far as I can see there is no recursion, because my logging shows only a single pass, and it stops at or before the line with the update.
var assignedIncident = new AssignRequest
{
    Assignee = substManagerRef, // obtained via another method; already verified in a test that it is correct
    Target = new EntityReference("incident", incidentId)
};
service.Execute(assignedIncident);
I tried specifying the target another way:
Target = postEntityImage.ToEntityReference()
I also tried a plain update, but the problem is the same:
Entity incident = new Entity("incident", incidentId);
incident["ownerid"] = substManagerRef;
service.Update(incident);
Can somebody help me with this, or point me toward a solution?
The plugin is triggering itself and getting into a loop. The system only allows plugin calls to go a maximum of 8 levels deep, and that is why it throws an error and rolls back the transaction.
To solve this issue, redesign your plugin in the following way (a minimal sketch follows the steps below):
Register your plugin on the Update message of your entity in the PreValidation stage.
In the plugin pick up the Target property from the InputParameters collection.
This is an Entity type. Modify or set the ownerid attribute on this Entity object.
That is all. You do not need to do an update; your modification is now passed along in the plugin pipeline.
Note: for EntityReference attributes this technique only works when your plugin is registered in the PreValidation stage. For other regular attributes it also works in the PreOperation stage.
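A minimal sketch of that approach, assuming a plugin registered on Update of incident in the PreValidation stage; the GetSubstitutionManager helper is a placeholder for the lookup logic described in the question, not part of the SDK:
using System;
using Microsoft.Xrm.Sdk;

public class ReassignToSubstitutionManagerPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // The record being updated arrives as the Target input parameter.
        if (!context.InputParameters.Contains("Target"))
            return;
        var target = context.InputParameters["Target"] as Entity;
        if (target == null)
            return;

        // Placeholder for the lookup described in the question.
        EntityReference substManagerRef = GetSubstitutionManager(target);
        if (substManagerRef == null)
            return;

        // Just set the attribute on the Target entity. No service.Update or
        // AssignRequest is needed: the change travels with the pipeline and is
        // saved by the platform as part of the same Update operation.
        target["ownerid"] = substManagerRef;
    }

    private EntityReference GetSubstitutionManager(Entity target)
    {
        // Hypothetical: resolve the owner's substitution manager here.
        return null;
    }
}
Because the plugin only modifies the Target it received, it does not issue a second Update and cannot loop.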
I can select an event from an event template when I trigger a lambda function in the console. How can I create a customized event template in terraform? I want to make it easier for developers to trigger the lambda by selecting this customized event template from the list.
I'd like to add an event to this list (screenshot of the console's event template list omitted).
Unfortunately, at the time of this answer (2020-02-21), there is no way to accomplish this via the APIs that AWS provides. Ergo, the terraform provider does not have the ability to accomplish this (it's limited to what's available in the APIs).
I have also wanted to be able to configure test events via terraform.
A couple of options:
Propose to AWS that they expose some APIs for managing test events. This would give contributors to the AWS terraform provider the opportunity to add this resource.
Provide the developers with a Postman collection, a set of shell scripts (or other scripts) using the awscli, or some other mechanism to invoke the lambdas (a small boto3 sketch follows). This is essentially the same as pulling the templating functionality out of the console and into your own tooling.
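For instance, a small boto3 sketch of option 2 (the function name and event file path are placeholders): keep canned test events in source control and let developers replay them against the lambda.
import json
import boto3

lambda_client = boto3.client("lambda")

# Canned event kept in the repository instead of in console templates.
with open("test-events/my-event.json", "rb") as f:
    payload = f.read()

response = lambda_client.invoke(
    FunctionName="my-function",          # placeholder name
    InvocationType="RequestResponse",    # synchronous, returns the result
    Payload=payload,
)

print(json.load(response["Payload"]))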
I did try something and it worked. I must warn that this is reverse engineering and may break at any time in the future, but it works well for me so far.
Per the Amazon documentation for testing lambda functions, whenever a shareable test event is created for any lambda, it is stored as a new schema in the lambda-testevent-schemas schema registry.
I made use of this information and figured out the conventions AWS follows to keep track of these events, so that I can manage the same resources using terraform.
The name of the schema is _<name_of_lambda_function>-schema, so from terraform I manage a schema with exactly that name:
resource "aws_schemas_schema" "my_lambda_shared_events" {
name = "_${aws_lambda_function.my_lambda.function_name}-schema"
registry_name = "lambda-testevent-schemas"
type = "OpenApi3"
description = "The schema definition for shared test events"
content = local.my_lambda_shared_events_schema
}
I create a JSON document (my_lambda_shared_events_schema) that follows the OpenAPI 3 convention. For example:
{
"components": {
"examples": {
"name_of_the_event_1": {
"value": {
... the value you need ...
}
},
"name_of_the_event_2": {
"value": {
... the value you need ...
}
}
},
"schemas": {
"Event": {
"properties": {
... structure of the event you need ...
},
"required": [... any required params ...],
"type": "object"
}
}
},
"info": {
"title": "Event",
"version": "1.0.0"
},
"openapi": "3.0.0",
"paths": {}
}
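For completeness, one possible way to define the local that the resource above refers to; this is only a sketch, and the event name and payload inside it are made-up placeholders (loading a separate .json file with file() works just as well):
locals {
  my_lambda_shared_events_schema = jsonencode({
    openapi = "3.0.0"
    info    = { title = "Event", version = "1.0.0" }
    paths   = {}
    components = {
      examples = {
        # The key becomes the event name shown in the lambda console.
        example_event = {
          value = { key1 = "value1" } # placeholder payload
        }
      }
      schemas = {
        Event = {
          type       = "object"
          properties = {}
        }
      }
    }
  })
}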
After terraform apply, you should be able to see the terraform-managed shareable events in the AWS console.
Some important gotchas when using this method:
If you add events with terraform using this method, any events added from the console will be lost.
The schema registry lambda-testevent-schemas is a special registry and must NOT be managed using terraform, as that may disrupt other lambda functions' events created outside the scope of this terraform module.
The lambda-testevent-schemas registry must exist beforehand. You can either add a check that creates this registry before the module is applied, or create any shareable event for any lambda function once from the console. This needs to be done once per region per account.
If you face difficulties creating the JSON schema for your lambda, you can create the events once from the console and then copy the JSON from the EventBridge schema registry, for example with the CLI call sketched below.
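For instance, something along these lines with the awscli (the schema name assumes a function called my_lambda, following the naming convention above):
aws schemas describe-schema --registry-name lambda-testevent-schemas --schema-name "_my_lambda-schema" --query Content --output text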
I am programmatically setting up a cluster resource (specifically, a Generic Service), using the Windows MI API (Microsoft.Management.Infrastructure).
I can add the service resource just fine. However, my service requires the "Use Network Name for computer name" checkbox to be checked (this is available in the Cluster Manager UI by looking at the Properties for the resource).
I can't figure out how to set this using the MI API. I have searched MSDN and multiple other resources without luck. Does anybody know if this is possible? Scripting with PowerShell would be fine as well.
I was able to figure this out, after a lot of trial and error, and the discovery of an API bug along the way.
It turns out cluster resource objects have a property called PrivateProperties, which is basically a property bag. Inside, there's a property called UseNetworkName, which corresponds to the checkbox in the UI (and also, the ServiceName property, which is also required for things to work).
The 'wbemtest' tool was invaluable in finding this out. Once you open the resource instance in it, you have to double-click the PrivateProperties property to bring up a dialog which has a "View Embedded" button, which is then what shows you the properties inside. Somehow I had missed this before.
Now, setting this property was yet another pain. Due to what looks like a bug in the API, retrieving the resource instance with CimSession.GetInstance() does not populate property values. This misled me into thinking I had to add the PrivateProperties property and its inner properties myself, which only resulted in lots of cryptic errors.
I finally stumbled upon this old MSDN post about it, where I realized the property is dynamic and automatically set by WMI. So, in the end, all you have to do is know how to get the property bag using CimSession.QueryInstances(), so you can then set the inner properties like any other property.
This is what the whole thing looks like (I omitted the code for adding the resource):
using (var session = CimSession.Create("YOUR_CLUSTER", new DComSessionOptions()))
{
// This query finds the newly created resource and fills in the
// private props we'll change. We have to do a manual WQL query
// because CimSession.GetInstance doesn't populate prop values.
var query =
"SELECT PrivateProperties FROM MSCluster_Resource WHERE Id=\"{YOUR-RES-GUID}\"";
// Lookup the resource. For some reason QueryInstances does not like
// the namespace in the regular form - it must be exactly like this
// for the call to work!
var res = session.QueryInstances(@"root/mscluster", "WQL", query).First();
// Add net name dependency so setting UseNetworkName works.
session.InvokeMethod(
res,
"AddDependency",
new CimMethodParametersCollection
{
CimMethodParameter.Create(
"Resource", "YOUR_NET_NAME_HERE", CimFlags.Parameter)
});
// Get private prop bag and set our props.
var privProps =
(CimInstance)res.CimInstanceProperties["PrivateProperties"].Value;
privProps.CimInstanceProperties["ServiceName"].Value = "YOUR_SVC_HERE";
privProps.CimInstanceProperties["UseNetworkName"].Value = 1;
// Persist the changes.
session.ModifyInstance(@"\root\mscluster", res);
}
Note how the quirks in the API make things more complicated than they should be: QueryInstances expects the namespace in a special way, and also, if you don't add the network name dependency first, setting private properties fails silently.
Finally, I also figured out how to do this through PowerShell: you have to use the Set-ClusterParameter command; see this other answer for the full info.
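For reference, a short PowerShell sketch of that route (the cluster, resource, and service names are placeholders):
# Requires the FailoverClusters module on a machine that can reach the cluster.
$res = Get-ClusterResource -Cluster "YOUR_CLUSTER" -Name "YOUR_RESOURCE"

# Same private properties as in the MI code above.
$res | Set-ClusterParameter -Name ServiceName -Value "YOUR_SVC_HERE"
$res | Set-ClusterParameter -Name UseNetworkName -Value 1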
I have a question about AWS AppSync: I wonder whether there is a way to check for existence while adding multiple values to one attribute of the parent.
So, here is my example:
This is the User type:
[screenshot: User type]
Here is what the DynamoDB item looks like:
[screenshot: DynamoDB item]
And here is what I want to accomplish: to add three programs into a user record, under the programs attribute.
[screenshot: the addProgramToUser mutation]
And here is my current resolver:
[screenshot: the addProgramToUser resolver]
So my logic is to extract the existing programs from DynamoDB first, and then check whether the to-be-added program IDs are already in there. If they are, stop the update or skip that program ID; if they are not, continue the update. So the question is: how do I extract the current data using VTL, and how do I compare the existing program IDs with the ones I want to add?
Or, if anyone has another idea of how I can accomplish this task, please help. Thanks so much. I cannot embed the pictures since I am new on Stack Overflow, so sorry for the inconvenience. Have a good day.
It seems you can use a condition in your resolver's request mapping template. A condition expression lets you tell AWS AppSync and DynamoDB whether the request should succeed or not, based on the state of the object already in DynamoDB before the operation is performed. For example, in your case, you only want the write request to succeed if the program ID is not in DynamoDB already.
{
"version" : "2017-02-28",
"operation" : "PutItem",
"key" : {
"id" : { "S" : "1" }
},
"condition" : {
"expression" : "attribute_not_exists(programId)"
}
}
I have created a processor using the REST API (NiFi 1.0) on Windows.
POST /process-groups/{id}/processors
JSON body:
{
"revision":{"version":0},
"component":
{
"name":"GetFile",
"type":"GetFile"
}
}
It creates the processor, but with empty properties in the UI. If I click "+" to add a new property in the UI, the property is created, but every property I add is treated as a sensitive value.
I am not able to create a property without it being marked as sensitive.
The type in the request needs to be fully qualified. It sounds like when you're attempting to create the component, it's actually creating a generic component that is used when the type is unknown.
If you use the UI and open Developer Tools in your browser you should be able to see an example of this.
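For example, assuming the standard GetFile processor (whose fully qualified class is org.apache.nifi.processors.standard.GetFile), the request body from the question would become:
{
  "revision": { "version": 0 },
  "component": {
    "name": "GetFile",
    "type": "org.apache.nifi.processors.standard.GetFile"
  }
}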