netconf device datastore path to subscribe for notifications - opendaylight

I have followed the link below and it works without any issues for getting notifications from the toaster module.
https://docs.opendaylight.org/en/stable-oxygen/developer-guide/controller.html
The issue I am stuck on is how to create a notification stream from the northbound side of ODL for a NETCONF device that is mounted behind a yang-ext:mount point in my operational datastore (I used the netconf testtool and its test.yang file as my model).
To be more specific: how can one determine the exact path of the NETCONF device in the YANG datastore? The toaster path from the developer guide is pasted below as an example:
The RPC to create the stream can be invoked via RESTCONF like this:
URI: http://{odlAddress}:{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription
DATA:
{
  "input": {
    "path": "/toaster:toaster/toaster:toasterStatus",   <----- My problem
    "sal-remote-augment:datastore": "OPERATIONAL",
    "sal-remote-augment:scope": "ONE"
  }
}
Thanks
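A hedged sketch of what such a path might look like for a device mounted behind yang-ext:mount, assuming the default topology-netconf topology (example-device and test:cont are placeholders for the node id and a container from the device's model; this instance-identifier syntax is not verified against sal-remote):
{
  "input": {
    "path": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='topology-netconf']/network-topology:node[network-topology:node-id='example-device']/yang-ext:mount/test:cont",
    "sal-remote-augment:datastore": "OPERATIONAL",
    "sal-remote-augment:scope": "ONE"
  }
}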

Related

Unable to save configuration of Teams connector

For the past two days, I have been trying to configure an MS Teams connector following this tutorial:
https://learn.microsoft.com/en-us/learn/modules/msteams-webhooks-connectors/7-exercise-o365-connectors
I configured the connector via Connectors Developer Dashboard.
Then I tried both cloning and reconfiguring this sample:
https://github.com/OfficeDev/TrainingContent/tree/master/Teams/60%20Webhooks%20O365%20Connectors/Demos/03-o365-connector
and also bootstrapping the project via yo teams, following the tutorial step-by-step.
After building the project and serving it via ngrok, I can sideload the connector into Teams (I tried both the desktop app and the web client). It successfully brings me to the configuration page but never allows me to save the connector settings. I always get this error:
Unable to save “My First Teams Connector” connector configuration. Please try again.
I adapted the code and debugged it to verify that the call to /api/connector/connect succeeds and that saveEvent.notifySuccess() is called.
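(For context, the configuration page in the sample follows the standard save-handler pattern from the Teams JavaScript SDK v1; the sketch below is a generic version of that pattern with placeholder settings values, not the exact sample code.)
import * as microsoftTeams from "@microsoft/teams-js";

microsoftTeams.initialize();

// Called when the user presses Save on the connector configuration page
microsoftTeams.settings.registerOnSaveHandler((saveEvent) => {
  // Persist the connector configuration; the values here are placeholders
  microsoftTeams.settings.setSettings({
    entityId: "myFirstConnector",
    contentUrl: "https://xxxxxxxx.ngrok.io/config.html",
    configName: "My First Teams Connector"
  });
  // Tell Teams the save succeeded
  saveEvent.notifySuccess();
});

// Enable the Save button
microsoftTeams.settings.setValidityState(true);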
Then I noticed that, right after saving the connector via the browser, this error appears in the console:
{
  "seq": 1597590187271,
  "timestamp": 1597593891957,
  "flightSettings": {
    "Name": "ConnectorFrontEndSettings",
    "AriaSDKToken": "d127f72a3abd41c9b9dd94faca947689-d58285e6-3a68-4cab-a458-37b9d9761d35-7033",
    "SPAEnabled": true,
    "ClassificationFilterEnabled": true,
    "ClientRoutingEnabled": true,
    "EnableYammerGroupOption": true,
    "EnableFadeMessage": false,
    "EnableDomainBasedOwaConnectorList": false,
    "EnableDomainBasedTeamsConnectorList": false,
    "DevPortalSPAEnabled": true,
    "ShowHomeNavigationButtonOnConfigurationPage": false,
    "DisableConnectToO365InlineDeleteFeedbackPage": true
  },
  "status": 500,
  "clientType": "SkypeSpaces",
  "connectorType": "f39fe17c-6452-4879-b692-a93d73684348",
  "name": "handleMessageError"
}
Any idea what could be incorrectly configured, or whether there is a place to check for a more descriptive error? The desktop Teams log was not helpful either.
ConnectorID: f39fe17c-6452-4879-b692-a93d73684348
So, what really helped me in the end for that particular tutorial:
run gulp ngrok-serve
configure the connector following the tutorial (with valid domains; excluding protocol) on Connectors Developer Dashboard
extract the content of the packaged connector
adapt the extracted manifest.json with the newly created connector ID (both occurrences; see the manifest sketch at the end of this answer)
repackage the connector as zip
upload to Teams & configure it
I filled the valid domains field with xxxxxxxx.ngrok.io, which is the domain of my configuration page.
Be careful: if you update an existing connector, these changes apparently need time to take effect. To be sure, you can create a fresh new connector.
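For reference, one of the places the connector ID appears is the connectors section of manifest.json, which looks roughly like this (the configurationUrl below is a placeholder for your ngrok configuration page):
{
  "connectors": [
    {
      "connectorId": "f39fe17c-6452-4879-b692-a93d73684348",
      "configurationUrl": "https://xxxxxxxx.ngrok.io/config.html",
      "scopes": [ "team" ]
    }
  ]
}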

Azure IoT Edge module logs - Task upload logs failed because of error

I was following the experimental built-in log pull feature:
https://github.com/Azure/iotedge/blob/master/doc/built-in-logs-pull.md
When I try to upload logs using the following payload from the Azure portal (using a Direct Method under each module):
PAYLOAD:
{
  "schemaVersion": "1.0",
  "sasUrl": "https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs/iotedgeruntimelogs.txt?sv=2019-02-02&st=2020-08-08T08%3A56%3A00Z&se=2020-08-14T08%3A56%3A00Z&sr=b&sp=rw&sig=xyz",
  "items": [
    {
      "id": "zigbee_template-arm64v8",
      "filter": {
        "tail": 10
      }
    }
  ],
  "encoding": "none",
  "contentType": "text"
}
I get the error below when I check the task status.
ERROR:
{"status":200,"payload":{"status":"Failed",
"message":"Task upload logs failed because of error Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.",
"correlationId":"b85002d8-d8f9-49d5-851d-9123a8d7d740"}}
Please let me know where I am going wrong.
Digging into the code some more, I noticed the UploadLogs implementation doesn't create a container, but rather a folder structure within the container that you supply. As far as I can tell, the casing restriction applies when creating a blob container, but there is no such restriction on creating folders within the container.
Please check the SAS URL that you supplied, or something on the storage end. Double-check that your SAS URL is generated for a pre-existing blob container.
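In other words, the sasUrl should point at an existing container, not at a single blob. A rough comparison (account, container, and signature values are placeholders; note sr=c for a container SAS versus sr=b for a blob SAS, and the permissions shown are only illustrative):
Blob SAS (what the payload above uses, rejected):
https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs/iotedgeruntimelogs.txt?sv=2019-02-02&sr=b&sp=rw&sig=xyz
Container SAS (what the log upload expects):
https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs?sv=2019-02-02&sr=c&sp=rwcl&sig=xyz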

Grant Google App Maker Full Google Drive API Scope / Server Rejected Error

I have an App Maker application using the Google Drive file picker widget, but uploading files does not work; I get a server rejected error.
API Upload Response
{
  "errorMessage": {
    "reason": "REQUEST_REJECTED",
    "additionalInfo": {
      "uploader_service.GoogleRupioAdditionalInfo": {
        "completionInfo": {
          "status": "REJECTED"
        },
        "requestRejectedInfo": {
          "reasonDescription": "agent_rejected"
        }
      }
    },
    "upload_id": "UrqXRaPdnG_DZ0L-iDX5BJsG4XEg"
  }
}
I believe the issue is the app's OAuth scopes. It appears to only have read access, which is insufficient for uploading new files.
I'd like to grant it the full access scope https://www.googleapis.com/auth/drive, but I'm not clear on how to do this.
As far as I'm aware, this behavior is a bug that was introduced recently. To be honest, the full Drive scope should be granted when enabling the simple uploads feature in the Drive picker, but for some reason it is not. As a workaround, you can put the following in any server script:
/**
 * Dummy function for full Drive permissions.
 * It is never invoked; referencing DriveApp here is enough for App Maker
 * to request the full https://www.googleapis.com/auth/drive scope.
 */
function dummyForDrivePermission() {
  var file = DriveApp.addFile();
}
The above code will force App Maker to recognize that it needs to upload files; you will never invoke the function, but it will serve the purpose of granting the scope you need.

Failed Prop Type Error in Fine Uploader

I'm trying to get Fine Uploader React to work but keep running into issues.
I'm getting "Failed prop type" errors.
Here's the URL: http://fineuploader.azurewebsites.net/
Here's what I've done so far:
Downloaded the source on to my computer from https://github.com/FineUploader/react-fine-uploader
I then npm installed react-fine-uploader and fine-uploader as per instructions
I ran webpack to transpile and bundle the code
Added an entry point and index.html
Finally, I simply published the app to a new Azure app/website
Any idea what's causing the issue?
P.S. My goal is to use Fine Uploader to upload files to Azure Blob Storage. At this point, I'm simply trying to get Fine Uploader going. I do realize that I'll have to enter a few pieces of information about my blob storage endpoint, etc., but I don't think this error is related to any of that.
A Gallery (and every higher-level component of that library) needs an "uploader" prop, as explained in the section https://github.com/FineUploader/react-fine-uploader#high-level-components
An uploader is one of the 3 classes available in the fine-uploader-wrappers package https://github.com/FineUploader/fine-uploader-wrappers#wrapper-classes
These are for uploads to:
AWS S3
Azure
or your own endpoint
The uploader class needs all the configuration: endpoint, credentials, custom options, etc. (you can find a comprehensive list in the API section: https://docs.fineuploader.com/branch/master/api/options.html)
An example for S3 direct upload would be something like:
const uploader = new FineUploaderS3({
  options: {
    request: {
      endpoint: "http://fineuploadertest.s3.amazonaws.com",
      accessKey: "AKIAIXVR6TANOGNBGANQ"
    },
    signature: {
      endpoint: "/vendor/fineuploader/php-s3-server/endpoint.php"
    }
  }
})
and use that uploader in a gallery
<Gallery uploader={ uploader } />
There are many useful options for customization: callbacks, event handlers, etc. You can find them all in the Fine Uploader docs.
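Since the question's goal is Azure Blob Storage, a similar sketch using the Azure wrapper might look like the following (the container URL and the signature endpoint are placeholders; the signature endpoint is a backend route you have to provide that returns SAS tokens for each upload):
import Gallery from 'react-fine-uploader'
import FineUploaderAzure from 'fine-uploader-wrappers/azure'

const uploader = new FineUploaderAzure({
  options: {
    request: {
      // placeholder: URL of your Azure Blob Storage container
      endpoint: 'https://myaccount.blob.core.windows.net/my-container'
    },
    signature: {
      // placeholder: your server route that returns a SAS token per upload
      endpoint: '/azure/signature'
    }
  }
})

const App = () => <Gallery uploader={ uploader } />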
Edit: if I'm not mistaken, react-transition-group is necessary even though it's not listed anywhere in the docs...

Visual Studio diagnostics configuration error in event hub set up

I am trying to set up streaming from an Azure VM scale set to an event hub via Diagnostics configuration.
I have my public config which includes the SinksConfig as follows (I have omitted the rest of the config for the sake of brevity):
{
  "WadCfg": {
    "DiagnosticMonitorConfiguration": {
      *** config for performance counters and ETW ***
      "SinksConfig": {
        "Sink": [
          {
            "name": "eventhub",
            "EventHub": {
              "Url": "sb://myhub.servicebus.windows.net/mycompanyapplication",
              "SharedAccessKeyName": "RootManageSharedAccessKey"
            }
          }
        ]
      }
    }
  },
  "StorageAccount": "<storageaccount>"
}
and the private config:
{
  "storageAccountName": "<storageaccountname>",
  "storageAccountKey": "<storageaccountkey>",
  "storageAccountEndPoint": "https://core.windows.net",
  "EventHub": {
    "Url": "sb://myhub.servicebus.windows.net/mycompanyapplication",
    "SharedAccessKeyName": "RootManageSharedAccessKey",
    "SharedAccessKey": "<sharedaccesskey>"
  }
}
However, nothing is being received by the event hub. I can see in the storage account logs that the Diagnostics extension is running, but in the substatus there are many errors around the SAS key and the event hub. When I check back in the Visual Studio Diagnostics configuration on the scale set, I see an error there as well.
I have checked the naming convention on the SharedAccessKeyName (which is the default provided when the event hub was set up) and I know that the SAS key works, because I wrote a console app to send messages to the same event hub with the same credentials and it worked fine.
So there is obviously a problem with the authentication to the event hub as it can't read the access key from the config file. However, I can't see any other way of providing it.
Am I missing something obvious here in my config?
It turns out the problem was quite simple: I had grabbed the URL from the connection string in the portal, which was
sb://myhub.servicebus.windows.net/mycompanyapplication
when it should have been
https://myhub.servicebus.windows.net/mycompanyapplication
Now the data is flowing freely into the event hub.
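For reference, the corrected sink section of the public config is identical to the one above except for the URL scheme (presumably the Url in the private config needs the same change):
"SinksConfig": {
  "Sink": [
    {
      "name": "eventhub",
      "EventHub": {
        "Url": "https://myhub.servicebus.windows.net/mycompanyapplication",
        "SharedAccessKeyName": "RootManageSharedAccessKey"
      }
    }
  ]
}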
However, the diagnostics config in VS still shows the warning about not being able to read the SAS key, which now looks like a "red herring" that ended up costing me a lot of time :(
