I am using Google Smart Home Actions for IoT. I updated my action URL and account linking details. When I try to enable Test in the simulator to deploy my test app to the cloud, it fails with the error "GoogleFulfillment 'actions.fulfillment.devices' is not supported", and the linked app still points at the old URL. This worked a few days ago. Has anything changed on Google's side, or does anybody have a clue?
There is a manual workaround, thanks to the Google Assistant forum:
Steps:
1 - Download the gactions CLI at https://developers.google.com/actions/tools/gactions-cli
2 - Authenticate by running any command, for example:
./gactions list --project [YOUR_PROJECT_ID]
3 - Download the JSON representation of your action:
./gactions get --project [YOUR_PROJECT_ID] --version draft > action.json
4 - Edit the JSON: extract the only object from its array and remove the nested "googleFulfillments" object, which looks like this:
"googleFulfillments": [
{
"endpoint": {
"baseUrl": "[URL]"
},
"name": "actions.fulfillment.devices"
}
],
5 - Delete the brackets "[ ]" at the top and end of the file. Only one language can be activated at a time, so delete any unnecessary data from the action.json file. The file ends up looking like this, with your own parameter values:
{
"accountLinking": {
"accessTokenUrl": "xxxx",
"assertionTypes": [
"ID_TOKEN"
],
"authenticationUrl": "xxx",
"clientId": "xxx",
"clientSecret": "xxxx",
"grantType": "AUTH_CODE"
},
"actions": [
{
"description": "Smart home action for project xxxxxxx",
"fulfillment": {
"conversationName": "AoGSmartHomeConversation_xxxxxx"
},
"name": "actions.devices"
}
],
"conversations": {
"AoGSmartHomeConversation_xxxxxxxx": {
"name": "",
"url": "xxxxxxx"
}
},
"locale": "en",
"manifest": {
"category": "xxx",
"companyName": "xxx",
"contactEmail": "xxx",
"displayName": "xxx",
"largeLandscapeLogoUrl": "xxxxxx",
"longDescription": "xxxx",
"privacyUrl": "xxx",
"shortDescription": "xxxx",
"smallSquareLogoUrl": "xxxx",
"termsOfServiceUrl": "xxxxx",
"testingInstructions": "xxxxx"
}
}
6 - If you have updated the fulfillment, authentication, or token URL, go to the Google Actions Console and update its entry there;
7 - Push your fixed action into test:
./gactions test --project [YOUR_PROJECT_ID] --action_package ./action.json
This replaces the "Click Simulator under TEST" step in the Google Assistant manual setup. It worked for me!
More help here: https://community.home-assistant.io/t/google-assistant-trouble-shooting/99223/142
So I've run into this issue with a web app I've made:
it gets a file path as input
if the file exists on a bucket, it uses a python client api to create a compute engine instance
it passes the file path to the instance in the startup script
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it was using my personal account, so I switched to the service account on the command line with 'gcloud config set account', and it still ran fine.
When I simply deploy the source code directory as is to App Engine, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's audit logs, there is an error:
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "INVALID_PARAMETER"
},
"authenticationInfo": {
"principalEmail": "####"
},
"requestMetadata": {
"callerIp": "#####",
"callerSuppliedUserAgent": "(gzip),gzip(gfe)"
},
"serviceName": "compute.googleapis.com",
"methodName": "v1.compute.instances.insert",
"resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
"request": {
"#type": "type.googleapis.com/compute.instances.insert"
}
},
"insertId": "######",
"resource": {
"type": "gce_instance",
"labels": {
"instance_id": "#####",
"project_id": "someproject",
"zone": "somezone"
}
},
"timestamp": "2021-06-16T12:18:21.253551Z",
"severity": "ERROR",
"logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operation-#####",
"producer": "compute.googleapis.com",
"last": true
},
"receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled as to why it only fails on Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
The Cloud Run service is using a different service account, but it was given that exact same custom role.
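One thing worth checking, as a small debugging sketch (not something the app already does), is which service account the Cloud Run revision actually runs as; the metadata server, available on both Cloud Run and Compute Engine, can report it:
# Assumed debugging aid: ask the metadata server which service account
# this revision runs as, to rule out an unexpected identity.
import urllib.request

req = urllib.request.Request(
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email",
    headers={"Metadata-Flavor": "Google"})
print(urllib.request.urlopen(req).read().decode())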
Here is the method I use to call the API:
import googleapiclient.discovery  # google-api-python-client
from datetime import date

def create_instance(path):
compute = googleapiclient.discovery.build('compute', 'v1')
vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")
startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)
config = {
"kind": "compute#instance",
"name": vmname,
"zone": "projects/my-project/zones/northamerica-northeast1-a",
"machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
"displayDevice": {
"enableDisplay": False
},
"metadata": {
"kind": "compute#metadata",
"items": [
{
"key": "startup-script",
"value": startup_script
}
]
},
"tags": {
"items": []
},
"disks": [
{
"kind": "compute#attachedDisk",
"type": "PERSISTENT",
"boot": True,
"mode": "READ_WRITE",
"autoDelete": True,
"deviceName": vmname,
"initializeParams": {
"sourceImage": "projects/my-project/global/images/my-image",
"diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
"diskSizeGb": "100"
},
"diskEncryptionKey": {}
}
],
"canIpForward": False,
"networkInterfaces": [
{
"kind": "compute#networkInterface",
"subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
"accessConfigs": [
{
"kind": "compute#accessConfig",
"name": "External NAT",
"type": "ONE_TO_ONE_NAT",
"networkTier": "PREMIUM"
}
],
"aliasIpRanges": []
}
],
"description": "",
"labels": {},
"scheduling": {
"preemptible": False,
"onHostMaintenance": "MIGRATE",
"automaticRestart": True,
"nodeAffinities": []
},
"deletionProtection": False,
"reservationAffinity": {
"consumeReservationType": "ANY_RESERVATION"
},
"serviceAccounts": [
{
"email": "batch-service-accountg#my-project.iam.gserviceaccount.com",
"scopes": [
"https://www.googleapis.com/auth/cloud-platform"
]
}
],
"shieldedInstanceConfig": {
"enableSecureBoot": False,
"enableVtpm": True,
"enableIntegrityMonitoring": True
},
"confidentialInstanceConfig": {
"enableConfidentialCompute": False
}
}
return compute.instances().insert(
project="my-project",
zone="northamerica-northeast1",
body=config).execute()
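For reference, a minimal, hypothetical usage of this method (the file path is illustrative); instances.insert() returns an Operation resource whose name and status can be logged for debugging:
# Hypothetical usage: the path below is illustrative.
operation = create_instance("uploads/sample-input.csv")
print(operation["name"], operation["status"])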
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit.
return compute.instances().insert(
project="my-project",
zone="northamerica-northeast1",
body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before a new image can be used from Cloud Run.
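If the delay comes from the new image still being prepared, one way to check (a sketch using the same placeholder project and image names as above) is to confirm the image reports READY before creating instances from it:
# Sketch: images.get() returns a status of PENDING, READY or FAILED;
# waiting for READY rules out an image that is not usable yet.
import googleapiclient.discovery

compute = googleapiclient.discovery.build('compute', 'v1')
image = compute.images().get(project="my-project", image="my-image").execute()
print(image["status"])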
I am using the HTTP Request plugin in Jenkins with a declarative pipeline script, in which I have a stage that calls a Microsoft Teams incoming webhook URL and posts a message when a build starts. My stage looks like below:
stage('Build Notification'){
steps{
script{
def payload = '''{
"#type": "MessageCard",
"#context": "http://schema.org/extensions",
"themeColor": "0076D7",
"summary": "Larry Bryant created a new task",
"sections": [{
"activityTitle": "![TestImage](https://47a92947.ngrok.io/Content/Images/default.png)Larry Bryant created a new task",
"activitySubtitle": "On Project Tango",
"activityImage": "https://teamsnodesample.azurewebsites.net/static/img/image5.png",
"facts": [{
"name": "Assigned to",
"value": "Unassigned"
}, {
"name": "Build Number",
"value": "${BUILD_NUMBER}"
}, {
"name": "Status",
"value": "Not started"
}],
"markdown": true
}]
}'''
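// Note: the payload above is a triple-single-quoted Groovy string, which is
// never interpolated, so ${BUILD_NUMBER} is sent to Teams literally. A
// triple-double-quoted GString ("""...""") would let Jenkins substitute the
// real build number before the request is sent.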
httpRequest httpMode: 'POST', requestBody: payload, responseHandle: 'NONE' , url: 'blah/blah/blah', wrapAsMultipart: false
}
}
}
My issue is that when this message shows up in Teams, the Build Number field literally displays ${BUILD_NUMBER} instead of the real build number.
I tried echoing ${BUILD_NUMBER} elsewhere in the pipeline and it prints the build number, which proves it is available, but I cannot figure out why the ${BUILD_NUMBER} inside the HTTP request payload is not replaced with the real build number when the request is sent.
I am new to pipelines and have been stuck on this for the last 2 days. Please help.
I'm trying to build a Microsoft Teams connector that I have sideloaded into my team while developing. I've set up a testing config page on S3 and have pointed my app manifest to it. When I click the save button, it stays stuck on the "Setting up your connector" spinner for a while, before saying "Unable to save connector configuration. Please try again."
The JavaScript of the config page should be visible through the S3 link above; my app manifest is below. After looking at a few similar questions, you'll note that the contentUrl is included by wildcard in validDomains.
{
"$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.7/MicrosoftTeams.schema.json",
"manifestVersion": "1.7",
"version": "1.0.0",
"id": "0b73c39a-db1d-43eb-81bd-3813bef33713",
"packageName": "<redacted>",
"developer": {
"name": "Developer",
"websiteUrl": "<redacted>",
"privacyUrl": "<redacted>",
"termsOfUseUrl": "<redacted>"
},
"icons": {
"color": "color.png",
"outline": "outline.png"
},
"name": {
"short": "Test",
"full": "Test"
},
"description": {
"short": "Test notifications",
"full": "Test notifications"
},
"accentColor": "#FFFFFF",
"connectors": [
{
"connectorId": "0b73c39a-db1d-43eb-81bd-3813bef33713",
"configurationUrl": "https://wsk-teams-test.s3.amazonaws.com/teams_configure.html",
"scopes": [
"team"
]
}
],
"permissions": [
"identity",
"messageTeamMembers"
],
"validDomains": [
"wsk-teams-test.s3.amazonaws.com",
"<redacted>"
]
}
I'm not able to get any more detailed information when attempting this via the desktop Teams app, but in the browser I see this error in the console:
2020-09-02T23:05:20.879Z Received error from connectors {"seq":1599086774636,"timestamp":1599087920857,"flightSettings":{"Name":"ConnectorFrontEndSettings","AriaSDKToken":"<redacted>","SPAEnabled":true,"ClassificationFilterEnabled":true,"ClientRoutingEnabled":true,"EnableYammerGroupOption":true,"EnableFadeMessage":false,"EnableDomainBasedOwaConnectorList":false,"EnableDomainBasedTeamsConnectorList":false,"DevPortalSPAEnabled":true,"ShowHomeNavigationButtonOnConfigurationPage":false,"DisableConnectToO365InlineDeleteFeedbackPage":true},"status":500,"clientType":"SkypeSpaces","connectorType":"0b73c39a-db1d-43eb-81bd-3813bef33713","name":"handleMessageError"}
Thanks for any guidance you might have. If I can get in touch with someone from Microsoft privately, I'd be happy to provide the <redacted> information.
This issue is fixed by adding the content URL to the valid domains list in the Connector Developer Dashboard.
As recommended above, this issue is fixed by adding the content URL to the valid domains list in the Connector Developer Dashboard.
This helped me understand the direction of the problem.
But my accidental mistake was leaving off the https:// prefix.
Be sure to add the https:// prefix to your domain.
I'm testing a Teams Messaging Extension for search-based commands. I created the solution with "YO TEAMS", created a bot in Azure with a Bot ID / App ID and password / secret, and put these as values in the .env file of the solution.
By running the command "gulp ngrok-serve" I get an ngrok URL generated, and things look like they should be fine. But when uploading the .zip file from the package folder in Teams, I get the error message "Unable to reach app. Please try again".
Just to test, I created another solution with only a "Tab", and I get almost the same error message when I try to upload its .zip:
"There was a problem reaching this app"
There are many frustrating tutorials, some old and some new. Running the command "gulp ngrok-serve" starts ngrok for the tunnel, and the ngrok URL that is generated seems to work.
The URL is also stored as the endpoint for the bot in Azure.
So what have I missed here, since it does not work?
Here is my manifest file:
{
"$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.6/MicrosoftTeams.schema.json",
"manifestVersion": "1.6",
"id": "{{APPLICATION_ID}}",
"version": "{{VERSION}}",
"packageName": "{{PACKAGE_NAME}}",
"developer": {
"name": "gonadn consulting",
"websiteUrl": "https://{{HOSTNAME}}",
"privacyUrl": "https://{{HOSTNAME}}/privacy.html",
"termsOfUseUrl": "https://{{HOSTNAME}}/tou.html"
},
"name": {
"short": "TeamsMsgExtSearch",
"full": "TeamsMsgExtSearch"
},
"description": {
"short": "TODO: add short description here",
"full": "TODO: add full description here"
},
"icons": {
"outline": "icon-outline.png",
"color": "icon-color.png"
},
"accentColor": "#D85028",
"configurableTabs": [],
"staticTabs": [],
"bots": [],
"connectors": [],
"composeExtensions": [
{
"botId": "{{MICROSOFT_APP_ID}}",
"canUpdateConfiguration": false,
"commands": [
{
"id": "msgSearchCommandMessageExtension",
"title": "MsgSearchCommand",
"description": "Add a clever description here",
"initialRun": true,
"parameters": [
{
"name": "parameter",
"description": "Description of the parameter",
"title": "Parameter"
}
],
"type": "query"
}
]
}
],
"permissions": [
"identity",
"messageTeamMembers"
],
"validDomains": [
"{{HOSTNAME}}"
],
"showLoadingIndicator": false
}
Link to Git Repo
I have been trying to host a bot, which works locally, on Azure.
When I try to connect the hosted bot with the local emulator, I get a connection error (emulator: Cannot post activity. Unauthorized).
My .bot file:
{
"name": "production",
"description": "",
"services": [
{
"type": "endpoint",
"appId": "********************",
"appPassword": "*************",
"endpoint": "intermediatorbotsample2019.azurewebsites.net/api/messages",
"name": "AzureAccountLive",
"id": "178"
}
],
"padlock": "",
"version": "2.0",
"path": "D:\\Architecture\IntermediatorBot\\production.bot",
"overrides": null
}
I took a look at your bot file in your comment. The problem is that you have "name": "AzureAccountLive" in your services section. This name MUST be "production". The outer-level "name" has to match the name of the bot (in this case, it's probably intermediatorbotsample2019). It's the name "production" plus type "endpoint" combination that ABS (Azure Bot Service) looks for. If you update your bot file to match what I have below, your bot should work as expected.
{
"name": "YOURBOTNAMEHERE",
"description": "",
"services": [
{
"type": "endpoint",
"appId": "********************",
"appPassword": "*************",
"endpoint": "http://intermediatorbotsample2019.azurewebsites.net/api/messages",
"name": "production",
"id": "178"
}
],
"padlock": "",
"version": "2.0",
"path": "D:\\Architecture\IntermediatorBot\\production.bot",
"overrides": null
}
Regenerated the App ID and secret from https://dev.botframework.com/.
Previously I was using an AzureBotProject; I replaced it with an AzureBotChannelProject instead.