Elasticsearch alias not being created on index creation - go

I'm using the go-elasticsearch API in my application to create indices in an Elastic.co cloud cluster. The application dynamically creates an index from a template and then starts indexing documents. The template includes an alias name and looks like this:
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text"
      },
      "created_at": {
        "type": "date"
      },
      "updated_at": {
        "type": "date"
      },
      "status": {
        "type": "keyword"
      }
    }
  },
  "aliases": {
    "rollout-nodes-f0776f0": {}
  }
}
The name of the alias can change, so we pass it to the template when we create a new index. This is done with the Create indices API in Go:
indexTemplate := getIndexTemplate()
res, err := n.client.Indices.Create(
    indexName,
    n.client.Indices.Create.WithBody(indexTemplate),
    n.client.Indices.Create.WithContext(ctx),
    n.client.Indices.Create.WithTimeout(time.Second),
)
In testing, this code works against a local cluster (without security enabled) but not against the Elastic.co cloud cluster: the index is created, but the alias is not.
I suspect the problem is related to either the API key permissions or some server-side configuration, but I have not yet been able to find which permission I'm missing.
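One thing worth checking first: go-elasticsearch only returns a non-nil err for transport-level problems, so an API-level rejection from the cluster (for example a security_exception mentioning the alias) is only visible in the response body. A minimal diagnostic sketch, continuing the snippet above and assuming the standard log package, would be:

// go-elasticsearch reports API errors in the response body, not in err,
// so inspect the body to see why the alias part might have been dropped.
if err != nil {
    log.Fatalf("request error: %s", err)
}
defer res.Body.Close()
if res.IsError() {
    // res.String() includes the HTTP status and the raw JSON error returned
    // by the cluster, which should name the missing privilege if there is one.
    log.Fatalf("index creation failed: %s", res.String())
}

If the create call comes back 200 OK and the alias is still missing, this at least rules out a silently swallowed error.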
For more context, this is the API Key I'm using:
{
  "id": "fakeID",
  "name": "index-service-key",
  "creation": 1675350573126,
  "invalidated": false,
  "username": "fakeUser",
  "realm": "cloud-saml-kibana",
  "metadata": {},
  "role_descriptors": {
    "logstash_writer": {
      "cluster": [
        "monitor",
        "transport_client",
        "read_ccr",
        "read_ilm",
        "manage_index_templates"
      ],
      "indices": [
        {
          "names": [
            "*"
          ],
          "privileges": [
            "all"
          ],
          "allow_restricted_indices": false
        }
      ],
      "applications": [],
      "run_as": [],
      "metadata": {},
      "transient_metadata": {
        "enabled": true
      }
    }
  }
}
Any ideas? I know I can use the POST _aliases API, but the index creation option should be working too.
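In case it helps while debugging, here is a minimal sketch of that fallback with the same client (the alias name is the one from the template above; Indices.PutAlias is the regular go-elasticsearch wrapper for PUT /<index>/_alias/<name>):

// Workaround: attach the alias explicitly after the index has been created.
res, err := n.client.Indices.PutAlias(
    []string{indexName},
    "rollout-nodes-f0776f0",
    n.client.Indices.PutAlias.WithContext(ctx),
)
if err != nil {
    log.Fatalf("request error: %s", err)
}
defer res.Body.Close()
if res.IsError() {
    log.Fatalf("alias creation failed: %s", res.String())
}

If this call fails with a 403 on the cloud cluster but succeeds locally, that would point at the API key rather than the request body.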

Related

elasticsearch filebeat mapper_parsing_exception when using decode_json_fields

I have ECK set up and I'm using Filebeat to ship logs from Kubernetes to Elasticsearch.
I've recently added the decode_json_fields processor to my configuration so that I can decode the JSON that is usually in the message field.
- decode_json_fields:
    fields: ["message"]
    process_array: false
    max_depth: 10
    target: "log"
    overwrite_keys: true
    add_error_key: true
However, logs have stopped appearing since I added it.
Example log:
{
"_index": "filebeat-7.9.1-2020.10.01-000001",
"_type": "_doc",
"_id": "wF9hB3UBtUOF3QRTBcts",
"_score": 1,
"_source": {
"#timestamp": "2020-10-08T08:43:18.672Z",
"kubernetes": {
"labels": {
"controller-uid": "9f3f9d08-cfd8-454d-954d-24464172fa37",
"job-name": "stream-hatchet-cron-manual-rvd"
},
"container": {
"name": "stream-hatchet-cron",
"image": "<redacted>.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4"
},
"node": {
"name": "ip-172-20-32-60.us-east-2.compute.internal"
},
"pod": {
"uid": "041cb6d5-5da1-4efa-b8e9-d4120409af4b",
"name": "stream-hatchet-cron-manual-rvd-bh96h"
},
"namespace": "default"
},
"ecs": {
"version": "1.5.0"
},
"host": {
"mac": [],
"hostname": "ip-172-20-32-60",
"architecture": "x86_64",
"name": "ip-172-20-32-60",
"os": {
"codename": "Core",
"platform": "centos",
"version": "7 (Core)",
"family": "redhat",
"name": "CentOS Linux",
"kernel": "4.9.0-11-amd64"
},
"containerized": false,
"ip": []
},
"cloud": {
"instance": {
"id": "i-06c9d23210956ca5c"
},
"machine": {
"type": "m5.large"
},
"region": "us-east-2",
"availability_zone": "us-east-2a",
"account": {
"id": "<redacted>"
},
"image": {
"id": "ami-09d3627b4a09f6c4c"
},
"provider": "aws"
},
"stream": "stdout",
"message": "{\"message\":{\"log_type\":\"cron\",\"status\":\"start\"},\"level\":\"info\",\"timestamp\":\"2020-10-08T08:43:18.670Z\"}",
"input": {
"type": "container"
},
"log": {
"offset": 348,
"file": {
"path": "/var/log/containers/stream-hatchet-cron-manual-rvd-bh96h_default_stream-hatchet-cron-73069980b418e2aa5e5dcfaf1a29839a6d57e697c5072fea4d6e279da0c4e6ba.log"
}
},
"agent": {
"type": "filebeat",
"version": "7.9.1",
"hostname": "ip-172-20-32-60",
"ephemeral_id": "6b3ba0bd-af7f-4946-b9c5-74f0f3e526b1",
"id": "0f7fff14-6b51-45fc-8f41-34bd04dc0bce",
"name": "ip-172-20-32-60"
}
},
"fields": {
"#timestamp": [
"2020-10-08T08:43:18.672Z"
],
"suricata.eve.timestamp": [
"2020-10-08T08:43:18.672Z"
]
}
}
In the Filebeat logs I can see the following error:
2020-10-08T09:25:43.562Z WARN [elasticsearch] elasticsearch/client.go:407 Cannot
index event
publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x36b243a0,
ext:63737745936, loc:(*time.Location)(nil)}, Meta:null,
Fields:{"agent":{"ephemeral_id":"5f8afdba-39c3-4fb7-9502-be7ef8f2d982","hostname":"ip-172-20-32-60","id":"0f7fff14-6b51-45fc-8f41-34bd04dc0bce","name":"ip-172-20-32-60","type":"filebeat","version":"7.9.1"},"cloud":{"account":{"id":"700849607999"},"availability_zone":"us-east-2a","image":{"id":"ami-09d3627b4a09f6c4c"},"instance":{"id":"i-06c9d23210956ca5c"},"machine":{"type":"m5.large"},"provider":"aws","region":"us-east-2"},"ecs":{"version":"1.5.0"},"host":{"architecture":"x86_64","containerized":false,"hostname":"ip-172-20-32-60","ip":["172.20.32.60","fe80::af:9fff:febe:dc4","172.17.0.1","100.96.1.1","fe80::6010:94ff:fe17:fbae","fe80::d869:14ff:feb0:81b3","fe80::e4f3:b9ff:fed8:e266","fe80::1c19:bcff:feb3:ce95","fe80::fc68:21ff:fe08:7f24","fe80::1cc2:daff:fe84:2a5a","fe80::3426:78ff:fe22:269a","fe80::b871:52ff:fe15:10ab","fe80::54ff:cbff:fec0:f0f","fe80::cca6:42ff:fe82:53fd","fe80::bc85:e2ff:fe5f:a60d","fe80::e05e:b2ff:fe4d:a9a0","fe80::43a:dcff:fe6a:2307","fe80::581b:20ff:fe5f:b060","fe80::4056:29ff:fe07:edf5","fe80::c8a0:5aff:febd:a1a3","fe80::74e3:feff:fe45:d9d4","fe80::9c91:5cff:fee2:c0b9"],"mac":["02:af:9f:be:0d:c4","02:42:1b:56:ee:d3","62:10:94:17:fb:ae","da:69:14:b0:81:b3","e6:f3:b9:d8:e2:66","1e:19:bc:b3:ce:95","fe:68:21:08:7f:24","1e:c2:da:84:2a:5a","36:26:78:22:26:9a","ba:71:52:15:10:ab","56:ff:cb:c0:0f:0f","ce:a6:42:82:53:fd","be:85:e2:5f:a6:0d","e2:5e:b2:4d:a9:a0","06:3a:dc:6a:23:07","5a:1b:20:5f:b0:60","42:56:29:07:ed:f5","ca:a0:5a:bd:a1:a3","76:e3:fe:45:d9:d4","9e:91:5c:e2:c0:b9"],"name":"ip-172-20-32-60","os":{"codename":"Core","family":"redhat","kernel":"4.9.0-11-amd64","name":"CentOS
Linux","platform":"centos","version":"7
(Core)"}},"input":{"type":"container"},"kubernetes":{"container":{"image":"700849607999.dkr.ecr.us-east-2.amazonaws.com/stream-hatchet:v0.1.4","name":"stream-hatchet-cron"},"labels":{"controller-uid":"a79daeac-b159-4ba7-8cb0-48afbfc0711a","job-name":"stream-hatchet-cron-manual-c5r"},"namespace":"default","node":{"name":"ip-172-20-32-60.us-east-2.compute.internal"},"pod":{"name":"stream-hatchet-cron-manual-c5r-7cx5d","uid":"3251cc33-48a9-42b1-9359-9f6e345f75b6"}},"log":{"level":"info","message":{"log_type":"cron","status":"start"},"timestamp":"2020-10-08T09:25:36.916Z"},"message":"{"message":{"log_type":"cron","status":"start"},"level":"info","timestamp":"2020-10-08T09:25:36.916Z"}","stream":"stdout"},
Private:file.State{Id:"native::30998361-66306", PrevId:"",
Finished:false, Fileinfo:(*os.fileStat)(0xc001c14dd0),
Source:"/var/log/containers/stream-hatchet-cron-manual-c5r-7cx5d_default_stream-hatchet-cron-4278d956fff8641048efeaec23b383b41f2662773602c3a7daffe7c30f62fe5a.log",
Offset:539, Timestamp:time.Time{wall:0xbfd7d4a1e556bd72,
ext:916563812286, loc:(*time.Location)(0x607c540)}, TTL:-1,
Type:"container", Meta:map[string]string(nil),
FileStateOS:file.StateOS{Inode:0x1d8ff59, Device:0x10302},
IdentifierName:"native"}, TimeSeries:false}, Flags:0x1,
Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400):
{"type":"mapper_parsing_exception","reason":"failed to parse field
[log.message] of type [keyword] in document with id
'56aHB3UBLgYb8gz801DI'. Preview of field's value: '{log_type=cron,
status=start}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:113"}}
It throws an error because log.message is apparently of type "keyword", yet that field does not exist in the index mapping.
I thought this might be an issue with "target": "log", so I tried changing it to something arbitrary like "my_parsed_message", "m_log" or "mlog", and I get the same error for all of them.
{"type":"mapper_parsing_exception","reason":"failed to parse field
[mlog.message] of type [keyword] in document with id
'J5KlDHUB_yo5bfXcn2LE'. Preview of field's value: '{log_type=cron,
status=end}'","caused_by":{"type":"illegal_state_exception","reason":"Can't
get text on a START_OBJECT at 1:217"}}
Elastic version: 7.9.2
The problem is that some of your JSON messages contain a message field that is sometimes a simple string and other times a nested JSON object (like in the case you're showing in your question).
After this index was created, the very first message field that was parsed was probably a plain string, and hence dynamic mapping added the following field (line 10553):
"mlog": {
"properties": {
...
"message": {
"type": "keyword",
"ignore_above": 1024
},
}
}
You'll find the same pattern for my_parsed_message (line 10902), my_parsed_logs (line 10742), etc...
Hence the next message that comes with message being a JSON object, like
{"message":{"log_type":"cron","status":"start"}, ...
will not work because it's an object, not a string...
Looking at the fields of your custom JSON, it seems you don't really have control over their taxonomy (i.e. naming) or over what they contain...
If you seriously intend to search within those custom fields (which I think you do, since you're parsing the field; otherwise you'd just store the stringified JSON), then I can only suggest working out a proper taxonomy so that each field always gets a consistent type.
If all you care about is logging your data, then I suggest simply disabling the indexing of that message field. Another solution is to set dynamic: false in your mapping so those fields are ignored, i.e. your mapping is not modified.
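For illustration only, a sketch of that second option (using the mlog target from the mapping excerpt above) would be to add something like this to the index template's mappings, so the decoded object stays in _source but no sub-fields are ever mapped under it:

{
  "mappings": {
    "properties": {
      "mlog": {
        "type": "object",
        "dynamic": false
      }
    }
  }
}

Setting "enabled": false on the same object instead corresponds to the first option: Elasticsearch then skips parsing the object for indexing entirely, while still storing it. Either way, this only takes effect on newly created indices; the existing mlog.message keyword mapping cannot be changed in place without reindexing.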

Alexa.Discovery response: no device detected by Alexa

I am implementing my Alexa Home Skill using AWS Lambda.
Given the following request, which I receive when I try to detect new devices on the Alexa Skill test page:
{directive={header={namespace=Alexa.Discovery, name=Discover, payloadVersion=3, messageId=0160c7e7-031f-47ee-a1d9-a23f38f87a9e}, payload={scope={type=BearerToken, token=...}}}}
I respond with the following:
{
"event": {
"payload": {
"endpoints": [
{
"displayCategories": [
"SMARTPLUG"
],
"capabilities": [
{
"type": "AlexaInterface",
"interface": "Alexa",
"version": "3"
},
{
"type": "AlexaInterface",
"interface": "Alexa.PowerController",
"version": "3",
"properties": {
"retrievable": true,
"supported": [
{
"name": "powerState"
}
],
"proactivelyReported": true
}
},
{
"type": "AlexaInterface",
"interface": "Alexa.EndpointHealth",
"version": "3",
"properties": {
"retrievable": true,
"supported": [
{
"name": "connectivity"
}
],
"proactivelyReported": true
}
}
],
"manufacturerName": "mirko.io",
"endpointId": "ca84ef6d-53b1-430a-8a5e-a62f174eac5e",
"description": "mirko.io forno (id: ca84ef6d-53b1-430a-8a5e-a62f174eac5e)",
"friendlyName": "forno"
}
]
},
"header": {
"payloadVersion": "3",
"namespace": "Alexa.Discovery",
"name": "Discover.Response",
"messageId": "c0555cc8-ad7a-4377-b310-9de9b9ab6282"
}
}
}
Despite that, for some reason Alexa answers that it did not find any new device.
I may be mistaken, but I am pretty sure it used to work before I added the Alexa.EndpointHealth interface.
Your response object looks right to me, except the extra "endpoint" field.
"endpoint": {
"endpointId": "INVALID",
"scope": {
"type": "BearerToken",
"token": "INVALID"
}
}
There's no such field in the Alexa.Discovery documentation. Try removing it and see if it resolves the issue.

Azure Data Factory how to deploy Alerts & Metrics to other environments with DevOps

We have an Azure Data Factory fully integrated with DevOps. Every change I make to the data factory is deployed to all environments (OTAP), except alerts & metrics. I cannot find anything on how to deploy these to the other environments. Is this possible at all?
Is this possible at all?
The quick answer is NO, so far. I contacted the Microsoft ADF team and got the response below:
Azure Data Factory utilizes Azure Resource Manager templates to store the configuration of your various ADF entities. Entities under Alerts & Metrics do not get exported in the ARM template, so Alerts & Metrics won't be integrated using DevOps.
I did two verifications:
1. Checked the ARM-template-supported entities in ADF; Alerts & Metrics is not among them.
2. Tried to export the ARM template from the ADF UI; Alerts & Metrics is still not included.
I really understand that you would like to integrate all elements of Data Factory, including Alerts & Metrics, with DevOps. I suggest submitting feedback here to push for improvements to ADF; every voice is welcome.
There is a way to work around this one.
An ADF alert is a "Microsoft.Insights/metricalerts" resource that you can deploy with an ARM deployment operation from Azure DevOps.
You can create an alert in ADF, then go to the Portal, search for Monitor > Alerts > Alert rules, and find the alert you created in ADF. In my case there is an alert called Test.
Here is the ARM template exported from the alert:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"metricalerts_Alert_name": {
"defaultValue": "Alert",
"type": "String"
},
"factories_test_externalid": {
"defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/Microsoft.DataFactory/factories/test",
"type": "String"
},
"actionGroups_actiongroup1_externalid": {
"defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/microsoft.insights/actionGroups/actiongroup1",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Insights/metricalerts",
"apiVersion": "2018-03-01",
"name": "[parameters('metricalerts_Alert_name')]",
"location": "global",
"tags": {
"CreatedTimeUtc": "2022-09-13T05:28:46.0663823Z"
},
"properties": {
"severity": 0,
"enabled": true,
"scopes": [
"[parameters('factories_test_externalid')]"
],
"evaluationFrequency": "PT1M",
"windowSize": "PT15M",
"criteria": {
"allOf": [
{
"threshold": 1,
"name": "PipelineFailedRuns",
"metricNamespace": "Microsoft.DataFactory/factories",
"metricName": "PipelineFailedRuns",
"dimensions": [
{
"name": "Name",
"operator": "Include",
"values": [
"pipeline2"
]
},
{
"name": "FailureType",
"operator": "Include",
"values": [
"UserError",
"SystemError",
"BadGateway"
]
}
],
"operator": "GreaterThanOrEqual",
"timeAggregation": "Total",
"criterionType": "StaticThresholdCriterion"
}
],
"odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria"
},
"actions": [
{
"actionGroupId": "[parameters('actionGroups_actiongroup1_externalid')]",
"webHookProperties": {}
}
]
}
}
]
}
For the actionGroups you can refer to the following template:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"groupName": {
"defaultValue": "actiongroup1",
"type": "string"
},
"email_receiver_address": {
"defaultValue": "someEmail#gmail.com",
"type": "string"
}
},
"variables": {},
"resources": [
{
"type": "microsoft.insights/actionGroups",
"apiVersion": "2019-03-01",
"name": "[parameters('groupName')]",
"location": "global",
"tags": {
"CreatedTimeUtc": "2020-10-21T07:24:08.2808723Z",
},
"properties": {
"groupShortName": "test",
"enabled": true,
"emailReceivers": [
{
"name": "test email received",
"emailAddress": "[parameters('email_receiver_address')]",
"useCommonAlertSchema": false
}
],
"smsReceivers": [],
"webhookReceivers": [],
"itsmReceivers": [],
"azureAppPushReceivers": [],
"automationRunbookReceivers": [],
"voiceReceivers": [],
"logicAppReceivers": [],
"azureFunctionReceivers": []
}
}
]
}
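To actually ship these with a release, the exported templates can be deployed like any other ARM template, for example from an Azure DevOps pipeline step or the Azure CLI. A sketch (the file names and resource group are placeholders; the parameter names are the ones from the templates above):

# Deploy the action group first, then the alert that references it.
az deployment group create \
  --resource-group <your-resource-group> \
  --template-file actiongroup.json \
  --parameters groupName=actiongroup1

az deployment group create \
  --resource-group <your-resource-group> \
  --template-file alert.json \
  --parameters metricalerts_Alert_name=Alert

The externalid parameters keep their defaults here; in a multi-environment setup you would override them per environment with that environment's data factory and action group resource IDs.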

spring mongodb criteria API: check two values on the same nested element

I have the following query:
Criteria crit = Criteria.where("nestedObj.date").lt(LocalDate.now())
.and("nestedObj.active").is(true)
.and("someId").is(null)
.and("somethingElse").exists(false);
How can I make sure that nestedObj.active and nestedObj.date are checked on the same nestedObj?
I only want this to match if a document has a nestedObj that is active AND has a date older than today.
Example:
If the nestedObj array on a document looks like this, the query should match:
[
  {
    "nestedObj": {
      "active": "true",
      "date": "2010-29-10"
    }
  },
  {
    "nestedObj": {
      "active": "false",
      "date": "2010-29-10"
    }
  },
  {
    "nestedObj": {
      "active": "true",
      "date": "2022-29-10"
    }
  }
]
But if it looks like this, it shouldn't:
[
  {
    "nestedObj": {
      "active": "false",
      "date": "2010-29-10"
    }
  },
  {
    "nestedObj": {
      "active": "true",
      "date": "2022-29-10"
    }
  }
]
Check the $elemMatch operator: https://docs.mongodb.com/manual/reference/operator/query/elemMatch/
For instance:
where("nestedObj").elemMatch(where("attribute1").is("value1").and("attribute2").regex("(?i).*$something.*"))

Why is my Alexa ReportStatus directive response not working?

I want to enable Alexa voice control for my smart home device. I was able to discover the device, and all devices now show up in the Alexa app. But when I try to turn on the device from the Alexa app it gets stuck, with the loader spinning indefinitely. The app is actually calling the ReportState directive.
This is the JSON that I am getting from the Alexa app for a light. The light only has turn-on and turn-off capabilities.
{
"directive": {
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
This is the response I am sending from the Lambda function. It is written in Python 3.6.
{
"event": {
"context": {
"properties": [
{
"name": "powerState",
"namespace": "Alexa.PowerController",
"timeOfSample": "2018-12-17T18:17:35.00Z",
"uncertaintyInMilliseconds": 500,
"value": "ON"
}
]
},
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "StateReport",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Please help me; I have been stuck on this for the last two days.
I'm not sure if this is related to your problem, but in your response the context element is inside event. According to the documentation and code samples, context and event should be at the same level:
{
"context": {
"properties": [...]
},
"event": {
"header": ...,
"endpoint": ...,
"payload": {}
}
}
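For illustration, a sketch of your StateReport rearranged that way (reusing your own header, endpoint and property values, with the long tokens elided):

{
  "context": {
    "properties": [
      {
        "namespace": "Alexa.PowerController",
        "name": "powerState",
        "value": "ON",
        "timeOfSample": "2018-12-17T18:17:35.00Z",
        "uncertaintyInMilliseconds": 500
      }
    ]
  },
  "event": {
    "header": {
      "namespace": "Alexa",
      "name": "StateReport",
      "payloadVersion": "3",
      "messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
      "correlationToken": "..."
    },
    "endpoint": {
      "endpointId": "endpoint-001",
      "scope": {
        "type": "BearerToken",
        "token": "..."
      }
    },
    "payload": {}
  }
}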
