Unable to start AWSFIS-Run-CPU-Stress - amazon-ec2

While running AWSFIS-Run-CPU-Stress I am getting the error below:
Unable to start action, due to a platform mismatch between the specified document and the targeted instances. I am trying this on a Windows EC2 instance.
My experiment template looks like this (confidential server info removed):
{
"description": "Test CPU stress predefined SSM document",
"targets": {
"testInstance": {
"resourceType": "aws:ec2:instance",
"resourceArns": [
"arn:aws:ec2:region:123456789012:instance/instance_id"
],
"selectionMode": "ALL"
}
},
"actions": {
"runCpuStress": {
"actionId": "aws:ssm:send-command",
"parameters": {
"documentArn": "arn:aws:ssm:region::document/AWSFIS-Run-CPU-Stress",
"documentParameters": "{\"DurationSeconds\":\"120\"}",
"duration": "PT5M"
},
"targets": {
"Instances": "testInstance"
}
}
},
"stopConditions": [
{
"source": "aws:cloudwatch:alarm",
"value": "arn:aws:cloudwatch:region:123456789012:alarm:awsec2-instance_id-GreaterThanOrEqualToThreshold-CPUUtilization"
}
],
"roleArn": "arn:aws:iam::123456789012:role/AllowFISSSMActions",
"tags": {}
}
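As far as I know, the stock AWSFIS-Run-CPU-Stress SSM document drives stress-ng and targets Linux, which would explain a platform mismatch against a Windows instance. One quick way to confirm what platform the targeted instance actually reports is to describe it with boto3; this is only a diagnostic sketch, and the region and instance ID below are placeholders:

import boto3

# Placeholders: substitute the region and instance ID used in the experiment template.
ec2 = boto3.client("ec2", region_name="eu-west-1")
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])

instance = resp["Reservations"][0]["Instances"][0]
# 'Platform' is only present (value 'windows') for Windows instances;
# 'PlatformDetails' reports e.g. 'Linux/UNIX' or 'Windows'.
print(instance.get("Platform", "linux"), instance.get("PlatformDetails"))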

Related

Step Functions ignoring Parameters, executing it with default input

I don't know whether this is a bug in my code or on Amazon's side, as I can't properly understand what's going on.
I have the following Step Function (currently a work in progress), which validates an input and then sends it to an SQS-triggered Lambda (the CreateInAuth0 step).
My problem is that the SQS-triggered function apparently executes with the default parameters, fails, retries, and only then uses the ones specified in the Step Function (or at least that's what I'm gathering from the logs).
I want to know why it ignores the Parameters on the first attempt. Here's all the information I can gather.
There seems to be a time difference between TaskStateEntered and TaskScheduled (which I believe is OK).
{
"Comment": "Add the students to the platform and optionally adds them to their respective grades",
"StartAt": "AddStudents",
"States": {
"AddStudents": {
"Comment": "Validates that the students are OK",
"Type": "Task",
"Resource": "#AddStudents",
"Next": "Parallel"
},
"Parallel": {
"Type": "Parallel",
"Next": "FinishExecution",
"Branches": [
{
"StartAt": "CreateInAuth0",
"States": {
"CreateInAuth0": {
"Type": "Task",
"Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
"Parameters": {
"QueueUrl": "#SQS_StudentsRegistrationSQSFifo",
"MessageBody": {
"execId.$": "$.result.execId",
"tenantType.$": "$.result.tenantType",
"tenantId.$": "$.result.tenantId",
"studentsToCreate.$": "$.result.newStudents.success",
"TaskToken.$": "$$.Task.Token",
"studentsSucceeded": []
}
},
"End": true
}
}
}
]
},
"FinishExecution": {
"Comment": "This signals the ending in DynamoDB that the transaction finished",
"Type": "Task",
"End": true,
"Resource": "arn:aws:states:::dynamodb:putItem",
"ResultPath": "$.dynamoDB",
"Parameters": {
"TableName": "#SchonDB",
"Item": {
"pk": {
"S.$": "$.arguments.tenantId:exec:$.id"
},
"sk": {
"S": "result"
},
"status": {
"S": "FINISHED"
},
"type": {
"S": "#EXEC_TYPE"
}
}
},
"Retry": [
{
"ErrorEquals": [
" States.Timeout"
],
"IntervalSeconds": 1,
"MaxAttempts": 5,
"BackoffRate": 2.0
}
],
"Catch": [
{
"ErrorEquals": [
"States.ALL"
],
"Next": "FinishExecutionIfFinishErrored"
}
]
},
"FinishExecutionIfFinishErrored": {
"Type": "Pass",
"End": true
}
}
}
Here's the visual of the state machine and the execution event history (screenshots not included here).
Note: I have a try/catch statement inside the SQS-triggered function that wraps its entire execution.
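For reference, here is roughly what the consuming side of that waitForTaskToken integration looks like. This is only a sketch under assumptions (the handler name and the Auth0 call are hypothetical): the state machine serializes MessageBody into the SQS message, so the Lambda parses it from the record body and reports back with SendTaskSuccess or SendTaskFailure using the embedded TaskToken:

import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # SQS-triggered Lambda: each record body is the MessageBody built in the CreateInAuth0 state.
    for record in event["Records"]:
        body = json.loads(record["body"])
        token = body["TaskToken"]
        try:
            # Hypothetical placeholder for the real Auth0 creation logic.
            result = {"studentsSucceeded": body.get("studentsToCreate", [])}
            sfn.send_task_success(taskToken=token, output=json.dumps(result))
        except Exception as exc:
            sfn.send_task_failure(taskToken=token, error="CreateInAuth0Failed", cause=str(exc))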

Alexa.Discovery response: no device detected by Alexa

I am implementing my Alexa Smart Home Skill using AWS Lambda.
This is the request I receive when I try to discover new devices on the Alexa Skill test page:
{directive={header={namespace=Alexa.Discovery, name=Discover, payloadVersion=3, messageId=0160c7e7-031f-47ee-a1d9-a23f38f87a9e}, payload={scope={type=BearerToken, token=...}}}}
I respond with the following:
{
"event": {
"payload": {
"endpoints": [
{
"displayCategories": [
"SMARTPLUG"
],
"capabilities": [
{
"type": "AlexaInterface",
"interface": "Alexa",
"version": "3"
},
{
"type": "AlexaInterface",
"interface": "Alexa.PowerController",
"version": "3",
"properties": {
"retrievable": true,
"supported": [
{
"name": "powerState"
}
],
"proactivelyReported": true
}
},
{
"type": "AlexaInterface",
"interface": "Alexa.EndpointHealth",
"version": "3",
"properties": {
"retrievable": true,
"supported": [
{
"name": "connectivity"
}
],
"proactivelyReported": true
}
}
],
"manufacturerName": "mirko.io",
"endpointId": "ca84ef6d-53b1-430a-8a5e-a62f174eac5e",
"description": "mirko.io forno (id: ca84ef6d-53b1-430a-8a5e-a62f174eac5e)",
"friendlyName": "forno"
}
]
},
"header": {
"payloadVersion": "3",
"namespace": "Alexa.Discovery",
"name": "Discover.Response",
"messageId": "c0555cc8-ad7a-4377-b310-9de9b9ab6282"
}
}
}
Despite that, for some reason Alexa answers that it did not find any new devices.
I may be mistaken, but I am pretty sure it used to work before I decided to add the Alexa.EndpointHealth interface.
Your response object looks right to me, except for the extra "endpoint" field:
"endpoint": {
"endpointId": "INVALID",
"scope": {
"type": "BearerToken",
"token": "INVALID"
}
}
There's no such field in the Alexa.Discovery documentation. Try removing it and see if it resolves the issue.
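For what it's worth, a minimal Python sketch of how the Discover.Response envelope is usually assembled (the function name is hypothetical; the endpoints list would be the same one shown in the question), with nothing besides header and payload under event:

import uuid

def build_discover_response(endpoints):
    # Discover.Response carries only "header" and "payload" under "event";
    # there is no top-level "endpoint" element in this response type.
    return {
        "event": {
            "header": {
                "namespace": "Alexa.Discovery",
                "name": "Discover.Response",
                "payloadVersion": "3",
                "messageId": str(uuid.uuid4()),
            },
            "payload": {"endpoints": endpoints},
        }
    }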

Azure Data Factory how to deploy Alerts & Metrics to other environments with DevOps

We have an Azure Data Factory fully integrated with DevOps. Every change I make to the Data Factory is deployed to all environments (OTAP), except alerts & metrics. I cannot find anything on how to deploy these to the other environments. Is this possible at all?
"Is this possible at all?"
The quick answer is no, so far. I contacted the Microsoft ADF team and got the response below:
Azure Data Factory utilizes Azure Resource Manager templates to store the configuration of your various ADF entities. Entities under Alerts & Metrics do not get exported in the ARM template, so Alerts & Metrics won't be integrated using DevOps.
I did two verifications:
1. Checked the ARM-template-supported entities in ADF; Alerts & Metrics is not among them.
2. Tried to export the ARM template from the ADF UI; Alerts & Metrics is still not included.
I understand you would like to integrate all elements of Data Factory, including Alerts & Metrics, with DevOps. I suggest submitting feedback to push for improvements to ADF; any voice is welcome.
There is a way to work around this one.
An ADF alert is a "Microsoft.Insights/metricalerts" resource that you can deploy with an ARM deployment operation from Azure DevOps.
You can create an alert in ADF, then go to the Portal, navigate to Monitor > Alerts > Alert rules, and find the alert you created in ADF. In my case there is an alert called Test.
Here is the ARM template exported from the alert:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"metricalerts_Alert_name": {
"defaultValue": "Alert",
"type": "String"
},
"factories_test_externalid": {
"defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/Microsoft.DataFactory/factories/test",
"type": "String"
},
"actionGroups_actiongroup1_externalid": {
"defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/microsoft.insights/actionGroups/actiongroup1",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Insights/metricalerts",
"apiVersion": "2018-03-01",
"name": "[parameters('metricalerts_Alert_name')]",
"location": "global",
"tags": {
"CreatedTimeUtc": "2022-09-13T05:28:46.0663823Z"
},
"properties": {
"severity": 0,
"enabled": true,
"scopes": [
"[parameters('factories_test_externalid')]"
],
"evaluationFrequency": "PT1M",
"windowSize": "PT15M",
"criteria": {
"allOf": [
{
"threshold": 1,
"name": "PipelineFailedRuns",
"metricNamespace": "Microsoft.DataFactory/factories",
"metricName": "PipelineFailedRuns",
"dimensions": [
{
"name": "Name",
"operator": "Include",
"values": [
"pipeline2"
]
},
{
"name": "FailureType",
"operator": "Include",
"values": [
"UserError",
"SystemError",
"BadGateway"
]
}
],
"operator": "GreaterThanOrEqual",
"timeAggregation": "Total",
"criterionType": "StaticThresholdCriterion"
}
],
"odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria"
},
"actions": [
{
"actionGroupId": "[parameters('actionGroups_actiongroup1_externalid')]",
"webHookProperties": {}
}
]
}
}
]
}
For the action group it references, you can use a template like this:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"groupName": {
"defaultValue": "actiongroup1",
"type": "string"
},
"email_receiver_address": {
"defaultValue": "someEmail#gmail.com",
"type": "string"
}
},
"variables": {},
"resources": [
{
"type": "microsoft.insights/actionGroups",
"apiVersion": "2019-03-01",
"name": "[parameters('groupName')]",
"location": "global",
"tags": {
"CreatedTimeUtc": "2020-10-21T07:24:08.2808723Z",
},
"properties": {
"groupShortName": "test",
"enabled": true,
"emailReceivers": [
{
"name": "test email received",
"emailAddress": "[parameters('email_receiver_address')]",
"useCommonAlertSchema": false
}
],
"smsReceivers": [],
"webhookReceivers": [],
"itsmReceivers": [],
"azureAppPushReceivers": [],
"automationRunbookReceivers": [],
"voiceReceivers": [],
"logicAppReceivers": [],
"azureFunctionReceivers": []
}
}
]
}
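To get those exported templates into the other environments, the deployment itself is an ordinary ARM deployment. In an Azure DevOps release that is typically done with the ARM template deployment task; as a scriptable sketch of the same thing (the subscription ID, resource group, and file name below are placeholders), the azure-mgmt-resource SDK can be used:

import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders: target subscription and resource group of the environment being deployed to.
client = ResourceManagementClient(DefaultAzureCredential(), "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")

with open("alert-template.json") as f:  # the exported metricalerts template above
    template = json.load(f)

poller = client.deployments.begin_create_or_update(
    "yyyyy",             # resource group
    "deploy-adf-alert",  # deployment name
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {"metricalerts_Alert_name": {"value": "Alert"}},
        }
    },
)
print(poller.result().properties.provisioning_state)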

NLog: LayoutRenderer cannot be found: 'aspnet-user-identity

I am trying to implement NLog in my .NET Core API web service.
I want to log to an Oracle database. Everything works well through an nlog.config XML file.
But the goal is to put the NLog configuration into appsettings.json, and that is where the problem occurs.
I get the error from the title:
LayoutRenderer cannot be found: 'aspnet-user-identity'
My config file looks like this:
"NLog": {
"autoReload": true,
"throwConfigExceptions": true,
"internalLogLevel": "info",
"internalLogFile": "c:/app/log/dev/internal-appsetting-nlog.txt",
"extensions": {
"NLog.Extensions.Logging": {
"assembly": [
"NLog.Extensions.Logging",
"NLog.Web.AspNetCore"
]
}
},
"variables": {
"var_logdir": "c:/app/log/dev"
},
"default-wrapper": {
"type": "AsyncWrapper",
"overflowAction": "Block"
},
"targets": {
"all-file": {
"type": "File",
"fileName": "${var_logdir}/nlog-all-${shortdate}.log",
"layout": {
"type": "JsonLayout",
"Attributes": [
{
"name": "timestamp",
"layout": "${date:format=o}"
},
{
"name": "level",
"layout": "${level}"
},
{
"name": "logger",
"layout": "${logger}"
},
{
"name": "message",
"layout": "${message:raw=true}"
},
{
"name": "properties",
"encode": false,
"layout": {
"type": "JsonLayout",
"includeallproperties": "true"
}
}
]
}
},
"db": {
"type": "Database",
"commandText": "INSERT INTO logtable (LOGLEVEL,LOGGER,MESSAGE,MACHINENAME,USERNAME,CALLSITE, THREADID,EXCEPTIONMESSAGE,STACKTRACE,SESSIONID) VALUES (:pLEVEL,:pLOGGER,:pMESSAGE,:pMACHINENAME, :pCALLSITE,:pTHREADID,:pEXCEPTIONMESSAGE,:pSTACKTRACE)",
"parameters": [
{
"name": "#pLEVEL",
"layout": "${level}"
},
{
"name": "#pLOGGER",
"layout": "${logger}"
},
{
"name": "#pMESSAGE",
"layout": "${message}"
},
{
"name": "#pMACHINENAME",
"layout": "${machinename}"
},
{
"name": "#pUSERNAME",
"layout": "${aspnet-user-identity}"
},
{
"name": "#pCALLSITE",
"layout": "${callsite:filename=true}"
},
{
"name": "#pTHREADID",
"layout": "${threadid}"
},
{
"name": "#pEXCEPTIONMESSAGE",
"layout": "${exception}"
},
{
"name": "#pSTACKTRACE",
"layout": "${stacktrace}"
},
{
"name": "#pSESSIONID",
"layout": "${aspnet-sessionid}"
}
],
"dbProvider": "Oracle.ManagedDataAccess.Client.OracleConnection, Oracle.ManagedDataAccess",
"connectionString": "xxxxxxxxxxxx"
}
},
"rules": [
{
"logger": "*",
"minLevel": "Trace",
"writeTo": "all-file"
},
{
"logger": "*",
"minLevel": "Trace",
"writeTo": "db"
},
{
"logger": "Microsoft.*",
"maxLevel": "Info",
"final": true
}
]
},
The NLog internal logger reports:
2019-10-09 16:48:48.6665 Info Adding target AsyncTargetWrapper(Name=all-file)
2019-10-09 16:48:48.7859 Warn Error when setting property 'Layout' on 'NLog.Targets.DatabaseParameterInfo' Exception: System.ArgumentException: LayoutRenderer cannot be found: 'aspnet-user-identity'. Is NLog.Web not included?
at NLog.Config.Factory`2.CreateInstance(String itemName)
at NLog.Layouts.LayoutParser.GetLayoutRenderer(ConfigurationItemFactory configurationItemFactory, String name)
at NLog.Layouts.LayoutParser.ParseLayoutRenderer(ConfigurationItemFactory configurationItemFactory, SimpleStringReader stringReader)
at NLog.Layouts.LayoutParser.CompileLayout(ConfigurationItemFactory configurationItemFactory, SimpleStringReader sr, Boolean isNested, String& text)
at NLog.Layouts.SimpleLayout.set_Text(String value)
at NLog.Internal.PropertyHelper.TryNLogSpecificConversion(Type propertyType, String value, Object& newValue, ConfigurationItemFactory configurationItemFactory)
at NLog.Internal.PropertyHelper.SetPropertyFromString(Object obj, String propertyName, String value, ConfigurationItemFactory configurationItemFactory)
The error also occurs on ${aspnet-sessionid}. If I comment out both layouts, everything works fine.
I found various suggestions in GitHub issue reports, but everything I tried failed.
Could someone help?
The unknown aspnet-user-identity is probably an issue with how your extensions are declared; try this form:
"extensions": [
{ "assembly": "NLog.Extensions.Logging" },
{ "assembly": "NLog.Web.AspNetCore" }
],
Could you try the above suggestion?
P.S. I updated the wiki to include an example of multiple "extensions".

Why is my Alexa ReportState directive response not working?

I want to enable Alexa voice control for my smart home device. I was able to discover the device, and now all devices are showing in the Alexa app. But when I try to turn on the device from the Alexa app, it gets stuck: the loader spins indefinitely. It is actually calling the ReportState directive.
This is the JSON that I am getting from the Alexa app for a light. The light only has turn-on and turn-off capabilities.
{
"directive": {
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
This is the response I am sending from my Lambda function. It is written in Python 3.6.
{
"event": {
"context": {
"properties": [
{
"name": "powerState",
"namespace": "Alexa.PowerController",
"timeOfSample": "2018-12-17T18:17:35.00Z",
"uncertaintyInMilliseconds": 500,
"value": "ON"
}
]
},
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "StateReport",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Please help me. I have been stuck on this for the last two days.
Not sure if this is related to your problem. In your response, the context element is inside event. But according to the documentation and code sample, context and event should be at the same level.
{
"context": {
"properties": [...]
},
"event": {
"header": ...,
"endpoint": ...,
"payload": {}
}
}
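In Python 3.6 terms, that means building the reply roughly like this (a sketch with placeholder arguments; the correlation token, bearer token, and endpoint ID come from the incoming directive):

from datetime import datetime, timezone

def build_state_report(correlation_token, endpoint_id, bearer_token, message_id):
    # "context" and "event" are siblings at the top level of the StateReport.
    return {
        "context": {
            "properties": [
                {
                    "namespace": "Alexa.PowerController",
                    "name": "powerState",
                    "value": "ON",
                    "timeOfSample": datetime.now(timezone.utc).isoformat(),
                    "uncertaintyInMilliseconds": 500,
                }
            ]
        },
        "event": {
            "header": {
                "namespace": "Alexa",
                "name": "StateReport",
                "payloadVersion": "3",
                "messageId": message_id,
                "correlationToken": correlation_token,
            },
            "endpoint": {
                "scope": {"type": "BearerToken", "token": bearer_token},
                "endpointId": endpoint_id,
            },
            "payload": {},
        },
    }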
