Need to execute a Ruby file from Amazon Web Services Data Pipeline - ruby

I have a Ruby file in my application, and I need to call and execute it as a background job from AWS Data Pipeline.
I have given the JSON file below.
#json file
{ "objects": [
{
"id": "ScheduleId4",
"startDateTime": "2013-08-01T00:00:00",
"name": "schedule",
"type": "Schedule",
"period": "15 Minutes"
},
{
"id": "DataNodeId2",
"schedule": {
"ref": "ScheduleId4"
},
"name": "Input",
"directoryPath": "s3://pipeline_test/input/",
"type": "S3DataNode"
},
{
"id": "ActivityId1",
"input": {
"ref": "DataNodeId2"
},
"schedule": {
"ref": "ScheduleId4"
},
"stdout": "s3://pipeline_test/logs",
"scriptUri": "s3://pipeline_test/input/sample.sh",
"name": "Shell",
"runsOn": {
"ref": "ResourceId5"
},
"stderr": "s3://pipeline_test/logs",
"type": "ShellCommandActivity",
"output": {
"ref": "DataNodeId3"
},
"stage": "true"
},
{
"terminateAfter": "1 Hours",
"id": "ResourceId5",
"schedule": {
"ref": "ScheduleId4"
},
"name": "Resource1",
"logUri": "s3://pipeline_test/logs/",
"type": "Ec2Resource"
},
{
"id": "Default",
"scheduleType": "timeseries",
"name": "Default",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DataNodeId3",
"schedule": {
"ref": "ScheduleId4"
},
"directoryPath": "s3://pipeline_test/output1/",
"name": "Output",
"type": "S3DataNode"
}
]
}
sample.sh
echo "Hello"
ruby sample.rb
sample.rb
puts "Hello world"
I have given the correct path to the sample.sh file, but I still cannot tell whether sample.rb is being called or not.
Can anyone give me a step-by-step procedure to follow, as I am a newbie to AWS Data Pipeline?
Help me to solve it.

The default image launched by Data Pipeline does not actually have Ruby on it. You'll have to build your own image and install Ruby by hand first. Then, reference that image in your Ec2Resource via its imageId.
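For example, once you have baked a custom AMI with Ruby installed, the Ec2Resource from the pipeline above could reference it roughly like this (the AMI ID is a placeholder for your own image):
{
    "id": "ResourceId5",
    "name": "Resource1",
    "type": "Ec2Resource",
    "imageId": "ami-xxxxxxxx",
    "terminateAfter": "1 Hours",
    "schedule": {
        "ref": "ScheduleId4"
    },
    "logUri": "s3://pipeline_test/logs/"
}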

Related

Unable to start AWSFIS-Run-CPU-Stress

While running AWSFIS-Run-CPU-Stress I am getting the error below:
Unable to start action, due to a platform mismatch between the specified document and the targeted instances. I am trying this on a Windows EC2 instance.
My experiment template looks like this (confidential server info removed):
{
    "description": "Test CPU stress predefined SSM document",
    "targets": {
        "testInstance": {
            "resourceType": "aws:ec2:instance",
            "resourceArns": [
                "arn:aws:ec2:region:123456789012:instance/instance_id"
            ],
            "selectionMode": "ALL"
        }
    },
    "actions": {
        "runCpuStress": {
            "actionId": "aws:ssm:send-command",
            "parameters": {
                "documentArn": "arn:aws:ssm:region::document/AWSFIS-Run-CPU-Stress",
                "documentParameters": "{\"DurationSeconds\":\"120\"}",
                "duration": "PT5M"
            },
            "targets": {
                "Instances": "testInstance"
            }
        }
    },
    "stopConditions": [
        {
            "source": "aws:cloudwatch:alarm",
            "value": "arn:aws:cloudwatch:region:123456789012:alarm:awsec2-instance_id-GreaterThanOrEqualToThreshold-CPUUtilization"
        }
    ],
    "roleArn": "arn:aws:iam::123456789012:role/AllowFISSSMActions",
    "tags": {}
}

Alexa.Discovery response: no device detected by Alexa

I am implementing my Alexa Home Skill using AWS Lambda.
This is the request I receive when I try to detect new devices on the Alexa Skill test page:
{directive={header={namespace=Alexa.Discovery, name=Discover, payloadVersion=3, messageId=0160c7e7-031f-47ee-a1d9-a23f38f87a9e}, payload={scope={type=BearerToken, token=...}}}}
I respond with the following:
{
    "event": {
        "payload": {
            "endpoints": [
                {
                    "displayCategories": [
                        "SMARTPLUG"
                    ],
                    "capabilities": [
                        {
                            "type": "AlexaInterface",
                            "interface": "Alexa",
                            "version": "3"
                        },
                        {
                            "type": "AlexaInterface",
                            "interface": "Alexa.PowerController",
                            "version": "3",
                            "properties": {
                                "retrievable": true,
                                "supported": [
                                    {
                                        "name": "powerState"
                                    }
                                ],
                                "proactivelyReported": true
                            }
                        },
                        {
                            "type": "AlexaInterface",
                            "interface": "Alexa.EndpointHealth",
                            "version": "3",
                            "properties": {
                                "retrievable": true,
                                "supported": [
                                    {
                                        "name": "connectivity"
                                    }
                                ],
                                "proactivelyReported": true
                            }
                        }
                    ],
                    "manufacturerName": "mirko.io",
                    "endpointId": "ca84ef6d-53b1-430a-8a5e-a62f174eac5e",
                    "description": "mirko.io forno (id: ca84ef6d-53b1-430a-8a5e-a62f174eac5e)",
                    "friendlyName": "forno"
                }
            ]
        },
        "header": {
            "payloadVersion": "3",
            "namespace": "Alexa.Discovery",
            "name": "Discover.Response",
            "messageId": "c0555cc8-ad7a-4377-b310-9de9b9ab6282"
        }
    }
}
Despite that, for some reason Alexa answers that it did not find any new devices.
I may be mistaken, but I am pretty sure it used to work before I decided to add the Alexa.EndpointHealth interface.
Your response object looks right to me, except the extra "endpoint" field.
"endpoint": {
"endpointId": "INVALID",
"scope": {
"type": "BearerToken",
"token": "INVALID"
}
}
There's no such field in the Alexa.Discovery documentation. Try removing it and see if it resolves the issue.
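For reference, the documented Discover.Response is just an event containing a header and a payload with the endpoints array, and nothing alongside them; a sketch of the expected top-level shape (messageId shown as a placeholder):
{
    "event": {
        "header": {
            "namespace": "Alexa.Discovery",
            "name": "Discover.Response",
            "payloadVersion": "3",
            "messageId": "<unique message id>"
        },
        "payload": {
            "endpoints": [ ... ]
        }
    }
}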

Azure Data Factory how to deploy Alerts & Metrics to other environments with DevOps

We have an Azure Data Factory fully integrated with DevOps. Every change I make to the Data Factory is deployed to all environments (OTAP), except alerts & metrics. I cannot find anything on how to deploy these to the other environments. Is this possible at all?
"Is this possible at all?"
The quick answer is NO so far. I contacted the Microsoft ADF team and got the response below:
Azure Data Factory utilizes Azure Resource Manager templates to store
the configuration of your various ADF entities. Entities on Alerts &
Metrics does not get exported in the ARM template, so Alerts & Metrics
won’t be integrated using DevOps.
I did two verifications:
1. Checked the entities supported by the ARM template in ADF; Alerts & Metrics is not among them.
2. Tried to export the ARM template in the ADF UI, but still no Alerts & Metrics.
I really understand that you would like to integrate all elements of Data Factory, including Alerts & Metrics, with DevOps. I suggest you submit feedback here to push improvements to ADF; any voice is welcome.
There is a way to work around this one.
An ADF alert is a "Microsoft.Insights/metricalerts" resource that you can deploy using an ARM deployment operation from Azure DevOps.
You can try to create an alert in ADF and then go to the Portal, search for Monitor > Alerts > Alert rules, and find the alert you created in ADF. In my case there is an alert called Test.
Here is the ARM template exported from the alert:
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "metricalerts_Alert_name": {
            "defaultValue": "Alert",
            "type": "String"
        },
        "factories_test_externalid": {
            "defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/Microsoft.DataFactory/factories/test",
            "type": "String"
        },
        "actionGroups_actiongroup1_externalid": {
            "defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/microsoft.insights/actionGroups/actiongroup1",
            "type": "String"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Insights/metricalerts",
            "apiVersion": "2018-03-01",
            "name": "[parameters('metricalerts_Alert_name')]",
            "location": "global",
            "tags": {
                "CreatedTimeUtc": "2022-09-13T05:28:46.0663823Z"
            },
            "properties": {
                "severity": 0,
                "enabled": true,
                "scopes": [
                    "[parameters('factories_test_externalid')]"
                ],
                "evaluationFrequency": "PT1M",
                "windowSize": "PT15M",
                "criteria": {
                    "allOf": [
                        {
                            "threshold": 1,
                            "name": "PipelineFailedRuns",
                            "metricNamespace": "Microsoft.DataFactory/factories",
                            "metricName": "PipelineFailedRuns",
                            "dimensions": [
                                {
                                    "name": "Name",
                                    "operator": "Include",
                                    "values": [
                                        "pipeline2"
                                    ]
                                },
                                {
                                    "name": "FailureType",
                                    "operator": "Include",
                                    "values": [
                                        "UserError",
                                        "SystemError",
                                        "BadGateway"
                                    ]
                                }
                            ],
                            "operator": "GreaterThanOrEqual",
                            "timeAggregation": "Total",
                            "criterionType": "StaticThresholdCriterion"
                        }
                    ],
                    "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria"
                },
                "actions": [
                    {
                        "actionGroupId": "[parameters('actionGroups_actiongroup1_externalid')]",
                        "webHookProperties": {}
                    }
                ]
            }
        }
    ]
}
For the action group you can refer to:
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "groupName": {
            "defaultValue": "actiongroup1",
            "type": "string"
        },
        "email_receiver_address": {
            "defaultValue": "someEmail@gmail.com",
            "type": "string"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "microsoft.insights/actionGroups",
            "apiVersion": "2019-03-01",
            "name": "[parameters('groupName')]",
            "location": "global",
            "tags": {
                "CreatedTimeUtc": "2020-10-21T07:24:08.2808723Z"
            },
            "properties": {
                "groupShortName": "test",
                "enabled": true,
                "emailReceivers": [
                    {
                        "name": "test email received",
                        "emailAddress": "[parameters('email_receiver_address')]",
                        "useCommonAlertSchema": false
                    }
                ],
                "smsReceivers": [],
                "webhookReceivers": [],
                "itsmReceivers": [],
                "azureAppPushReceivers": [],
                "automationRunbookReceivers": [],
                "voiceReceivers": [],
                "logicAppReceivers": [],
                "azureFunctionReceivers": []
            }
        }
    ]
}

Why is my Alexa ReportState directive response not working?

I want to enable Alexa voice control for my smart home device. I was able to discover the device, and now all devices are showing in the Alexa app. But when I try to turn on the device from my Alexa app it gets stuck; the loader keeps spinning indefinitely. It is actually calling the ReportState directive.
This is the JSON that I am getting from the Alexa app for a light. The light has only turn-on and turn-off capabilities.
{
"directive": {
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
This is the response I am sending from the Lambda function. It is written in Python 3.6.
{
"event": {
"context": {
"properties": [
{
"name": "powerState",
"namespace": "Alexa.PowerController",
"timeOfSample": "2018-12-17T18:17:35.00Z",
"uncertaintyInMilliseconds": 500,
"value": "ON"
}
]
},
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "StateReport",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Please help me. I have been stuck on this for the last 2 days.
Not sure if this is related to your problem. In your response, the context element is inside event. But according to the documentation and code sample, context and event should be at the same level.
{
    "context": {
        "properties": [...]
    },
    "event": {
        "header": ...,
        "endpoint": ...,
        "payload": {}
    }
}
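Applied to your payload, a corrected StateReport would look roughly like this (the correlationToken and bearer token are elided here for readability; keep the exact values from the incoming directive, as in your original response):
{
    "context": {
        "properties": [
            {
                "namespace": "Alexa.PowerController",
                "name": "powerState",
                "value": "ON",
                "timeOfSample": "2018-12-17T18:17:35.00Z",
                "uncertaintyInMilliseconds": 500
            }
        ]
    },
    "event": {
        "header": {
            "namespace": "Alexa",
            "name": "StateReport",
            "payloadVersion": "3",
            "messageId": "<unique message id>",
            "correlationToken": "<correlationToken from the request>"
        },
        "endpoint": {
            "endpointId": "endpoint-001",
            "scope": {
                "type": "BearerToken",
                "token": "<token from the request>"
            }
        },
        "payload": {}
    }
}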

Run powershell command on Azure Windows via ARM template

I am trying to mount a data disk to a Windows VM on Azure through the ARM template that creates the VM. This is my ARM resource:
{
    "name": "[parameters('virtualMachineName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "apiVersion": "2016-04-30-preview",
    "location": "[parameters('location')]",
    "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"
    ],
    "tags": {
        "Busuness Group": "[parameters('busunessGroup')]",
        "Role": "[parameters('role')]"
    },
    "properties": {
        "osProfile": {
            "computerName": "[parameters('virtualMachineName')]",
            "adminUsername": "[parameters('adminUsername')]",
            "adminPassword": "[parameters('adminPassword')]",
            "windowsConfiguration": {
                "provisionVmAgent": "true"
            }
        },
        "hardwareProfile": {
            "vmSize": "[parameters('virtualMachineSize')]"
        },
        "storageProfile": {
            "imageReference": {
                "publisher": "microsoft-ads",
                "offer": "standard-data-science-vm",
                "sku": "standard-data-science-vm",
                "version": "latest"
            },
            "dataDisks": [
                {
                    "lun": 0,
                    "createOption": "Empty",
                    "caching": "None",
                    "managedDisk": {
                        "storageAccountType": "Premium_LRS"
                    },
                    "diskSizeGB": "[parameters('dataDiskSizeGB')]"
                }
            ]
        },
        "networkProfile": {
            "networkInterfaces": [
                {
                    "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('networkInterfaceName'))]"
                }
            ]
        }
    },
    "plan": {
        "name": "standard-data-science-vm",
        "publisher": "microsoft-ads",
        "product": "standard-data-science-vm"
    },
    "resources": [
        {
            "type": "extensions",
            "name": "CustomScriptExtension",
            "apiVersion": "2015-06-15",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[parameters('virtualMachineName')]"
            ],
            "properties": {
                "publisher": "Microsoft.Compute",
                "type": "CustomScriptExtension",
                "typeHandlerVersion": "1.8",
                "autoUpgradeMinorVersion": true,
                "settings": {
                    "fileUris": ["https://paste.fedoraproject.org/paste/FMoOq4E3sKoQzqB5Di0DcV5M1UNdIGYhyRLivL9gydE=/raw"]
                }
            }
        }
    ]
}
It failed with the following error:
{
    "code": "VMExtensionProvisioningError",
    "message": "VM has reported a failure when processing extension 'CustomScriptExtension'. Error message: \"Invalid Configuration - CommandToExecute is not specified in the configuration; it must be specified in either the protected or public configuration section\"."
}
I also tried passing the command directly:
"settings": {
    "commandToExecute": "Get-Disk | Where partitionstyle -eq 'raw' | Initialize-Disk -PartitionStyle MBR -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false"
}
Neither worked. What am I doing wrong here?
So you need to explicitly invoke powershell to run a PowerShell command, just like in the examples:
"commandToExecute": "[concat('powershell -command ', variables('command'))]"
You can attempt to paste your command directly, but due to all the quotes it won't parse properly, so save your command as a variable and concat it like that.
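As a minimal sketch (the script URL and file name below are placeholders, assuming you host the disk-initialization script as a .ps1 file), the extension settings could combine fileUris with a commandToExecute that runs the downloaded script:
"settings": {
    "fileUris": [
        "https://<your-storage-account>/scripts/mount-datadisk.ps1"
    ],
    "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File mount-datadisk.ps1"
}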
