Unable to create openwhisk trigger on ubuntu local machine with /whisk.system/alarms/alarm feed - openwhisk

I was able to install the system package for alarms successfully, mostly following the link https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-294010619
So, I get the following:
bin/wsk package get --summary /whisk.system/alarms --insecure
package /whisk.system/alarms: Alarms and periodic utility
(parameters: *apihost, *cron, *trigger_payload)
feed /whisk.system/alarms/alarm: Fire trigger when alarm occurs
(parameters: none defined)
Features like actions, triggers, and rules are working on my local OpenWhisk installation.
I am running the command to create a trigger as follows:
bin/wsk trigger create convertTriggerPeriodic --feed /whisk.system/alarms/alarm -p cron "*/9 * * * * *" -p trigger_payload "{\"name\":\"Odin\",\"place\":\"Asgard\"}" -p maxTriggers 6 --insecure
ok: invoked /whisk.system/alarms/alarm with id d5879ab1c97745c9879ab1c977c5c967
{
"activationId": "d5879ab1c97745c9879ab1c977c5c967",
"annotations": [
{
"key": "limits",
"value": {
"logs": 10,
"memory": 256,
"timeout": 60000
}
},
{
"key": "path",
"value": "whisk.system/alarms/alarm"
}
],
"duration": 6402,
"end": 1508984964595,
"logs": [],
"name": "alarm",
"namespace": "guest",
"publish": false,
"response": {
"result": {
"error": {
"code": 30810,
"error": "There was an error processing your request."
}
},
"status": "application error",
"success": false
},
"start": 1508984958193,
"subject": "guest",
"version": "0.0.2"
}
ok: invoked /whisk.system/alarms/alarm with id 4fd67308821e4e0b967308821e4e0bdb
{
"activationId": "4fd67308821e4e0b967308821e4e0bdb",
"annotations": [
{
"key": "limits",
"value": {
"logs": 10,
"memory": 256,
"timeout": 60000
}
},
{
"key": "path",
"value": "whisk.system/alarms/alarm"
}
],
"duration": 4432,
"end": 1508984969257,
"logs": [],
"name": "alarm",
"namespace": "guest",
"publish": false,
"response": {
"result": {
"error": {
"code": 30822,
"error": "There was an error processing your request."
}
},
"status": "application error",
"success": false
},
"start": 1508984964825,
"subject": "guest",
"version": "0.0.2"
}
ok: deleted trigger convertTriggerPeriodic
Run 'wsk --help' for usage.
The feed action runs twice, each time reporting "error": "There was an error processing your request.", and then the trigger is deleted.
So there is no way for me to associate a rule/action with the trigger.

It looks like the alarms action was not installed properly. The directions listed in https://github.com/apache/incubator-openwhisk-package-alarms/issues/51 still work for running the alarms Docker container, but they are out of date for installing the action. Please see the comment I made on July 21 (https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-317007147) in that issue: the parameters for installCatalog.sh have changed. If you are having trouble following the install steps in the issue, you can also check out the comment I left on August 9 (https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-321320242). It contains a link to some Ansible code I wrote to handle the install for you.
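Once the alarms package has been reinstalled with the updated installCatalog.sh parameters, re-creating the trigger and wiring it to an action should work. A minimal sketch with the wsk CLI, where helloAction is a hypothetical action you have already created:
# Recreate the feed-backed trigger with the same parameters as in the question
wsk trigger create convertTriggerPeriodic \
  --feed /whisk.system/alarms/alarm \
  -p cron "*/9 * * * * *" \
  -p trigger_payload "{\"name\":\"Odin\",\"place\":\"Asgard\"}" \
  -p maxTriggers 6 --insecure
# Associate a rule so every firing of the trigger invokes an action
wsk rule create convertRule convertTriggerPeriodic helloAction --insecure
# Watch activations to confirm the alarm fires roughly every 9 seconds
wsk activation poll --insecure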

Related

consul deregister_critical_service_after is not working

Hello everyone, I have a health check on my Consul service. My goal is that whenever the service is unhealthy, Consul should remove it from the service catalog.
Below is my config:
{
"service": {
"name": "api",
"tags": [ "api-tag" ],
"port": 80
},
"check": {
"id": "api_up",
"name": "Fetch health check from local nginx",
"http": "http://localhost/HealthCheck",
"interval": "5s",
"timeout": "1s",
"deregister_critical_service_after": "15s"
},
"data_dir": "/consul/data",
"retry_join": [
"192.168.0.1",
"192.168.0.2",
]
}
Thanks for all the help.
The reason the service is not being de-registered is that the check is being specified outside of the service {} block in your JSON. This makes the check a node-level check, not a service-level check.
Here's a pretty-printed version of the config you provided.
{
  "service": {
    "name": "api",
    "tags": [
      "api-tag"
    ],
    "port": 80
  },
  "check": {
    "id": "api_up",
    "name": "Fetch health check from local nginx",
    "http": "http://localhost/HealthCheck",
    "interval": "5s",
    "timeout": "1s",
    "deregister_critical_service_after": "15s"
  },
  "data_dir": "/consul/data",
  "retry_join": [
    "192.168.0.1",
    "192.168.0.2",
  ]
}
Below is the configuration you should be using in order to correctly associate the check with the configured service, and de-register the service after the check has been marked as critical for more than 15 seconds.
{
  "service": {
    "name": "api",
    "tags": [
      "api-tag"
    ],
    "port": 80,
    "check": {
      "id": "api_up",
      "name": "Fetch health check from local nginx",
      "http": "http://localhost/HealthCheck",
      "interval": "5s",
      "timeout": "1s",
      "deregister_critical_service_after": "15s"
    }
  },
  "data_dir": "/consul/data",
  "retry_join": [
    "192.168.0.1",
    "192.168.0.2"
  ]
}
Note this statement from the docs for DeregisterCriticalServiceAfter.
If a check is in the critical state for more than this configured value, then its associated service (and all of its associated checks) will automatically be deregistered. The minimum timeout is 1 minute, and the process that reaps critical services runs every 30 seconds, so it may take slightly longer than the configured timeout to trigger the deregistration. This should generally be configured with a timeout that's much, much longer than any expected recoverable outage for the given service.
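Once the corrected config is in place, a quick way to confirm the check is now attached to the service rather than the node is to reload the agent and query the health API. A small sketch, assuming a local agent on the default port and jq installed:
# Reload the agent so it picks up the corrected service definition
consul reload
# The check should now show up with its ServiceID/ServiceName set to "api"
curl -s http://localhost:8500/v1/health/checks/api | jq '.[] | {CheckID, ServiceID, Status}'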

Manifest parsing error when trying to test app in Teams

From https://dev.teams.microsoft.com/, whenever I click "Preview in Teams", it shows an error in Teams with these details copied to the clipboard: "Error while reading manifest.json". If I download the app package and "upload a custom app", I get the same error. What can I do to resolve this? If I remove the messaging extension configuration, it works, but the messaging extension is the part I configured and it's what I want to build.
This is my manifest file:
{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.11/MicrosoftTeams.schema.json",
  "version": "1.0.0",
  "manifestVersion": "1.11",
  "id": "3fXXXX",
  "packageName": "com.package.name",
  "name": {
    "short": "Domo Integration",
    "full": ""
  },
  "developer": {
    "name": "Domo Inc.",
    "mpnId": "",
    "websiteUrl": "https://www.domo.com",
    "privacyUrl": "https://www.domo.com/company/privacy-policy",
    "termsOfUseUrl": "https://www.domo.com/company/service-terms"
  },
  "description": {
    "short": "short",
    "full": "full"
  },
  "icons": {
    "outline": "outline.png",
    "color": "color.png"
  },
  "accentColor": "#FFFFFF",
  "composeExtensions": [
    {
      "botId": "deXXXXXXXX",
      "commands": [],
      "canUpdateConfiguration": true,
      "messageHandlers": [
        {
          "type": "link",
          "value": {
            "domains": [
              "*.domo.com"
            ]
          }
        }
      ]
    }
  ],
  "validDomains": [
    "*.domo.com"
  ]
}
@ccnokes In order to use a messaging extension in your bot, you need to provide at least one command. commands is a required property in composeExtensions - see the docs.
I was also getting the same error when I tried your manifest, but after adding commands it worked fine.
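For reference, a minimal placeholder command could be patched into composeExtensions[0] before repackaging. A rough sketch using jq, where the command id, titles, and parameter names are made-up values and the field names should be double-checked against the v1.11 schema:
# Add one placeholder query command to the compose extension, then repackage
jq '.composeExtensions[0].commands = [{
      id: "searchQuery",
      type: "query",
      title: "Search",
      description: "Placeholder search command",
      initialRun: false,
      parameters: [{ name: "searchKeyword", title: "Keyword", description: "Text to search for" }]
    }]' manifest.json > manifest.tmp && mv manifest.tmp manifest.json
# Rebuild the app package with the icons the manifest references
zip app.zip manifest.json color.png outline.png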

web app works locally and on app engine, but not on cloud run

So I've run into this issue with a web app I've made:
it gets a file path as input
if the file exists on a bucket, it uses a python client api to create a compute engine instance
it passes the file path to the instance in the startup script
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it used my personal account, so I switched to the service account on the command line with 'gcloud config set account'; it ran fine once more.
When I simply go to the source code directory and deploy it as is to App Engine, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error:
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "INVALID_PARAMETER"
},
"authenticationInfo": {
"principalEmail": "####"
},
"requestMetadata": {
"callerIp": "#####",
"callerSuppliedUserAgent": "(gzip),gzip(gfe)"
},
"serviceName": "compute.googleapis.com",
"methodName": "v1.compute.instances.insert",
"resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
"request": {
"#type": "type.googleapis.com/compute.instances.insert"
}
},
"insertId": "######",
"resource": {
"type": "gce_instance",
"labels": {
"instance_id": "#####",
"project_id": "someproject",
"zone": "somezone"
}
},
"timestamp": "2021-06-16T12:18:21.253551Z",
"severity": "ERROR",
"logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operation-#####",
"producer": "compute.googleapis.com",
"last": true
},
"receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled why it only does this for Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
The Cloud Run service is using a different service account, but it was given that exact same custom role.
Here is the method I use to call the API.
from datetime import date

import googleapiclient.discovery  # google-api-python-client


def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")

    # Startup script: pulls the input from the bucket, runs the analysis,
    # then deletes the instance itself when finished.
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)

    # Instance configuration for the analysis VM.
    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg#my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }

    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit.
return compute.instances().insert(
    project="my-project",
    zone="northamerica-northeast1",
    body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before the new image can be used from Cloud Run.
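One thing worth checking in that situation (an assumption on my part, not something confirmed by the logs above) is whether recently granted IAM roles had simply not finished propagating to the service account Cloud Run runs as. The current bindings can be listed like this, where the service account email is a placeholder:
# List the roles currently bound to the Cloud Run service account
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:run-sa@my-project.iam.gserviceaccount.com" \
  --format="table(bindings.role)"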

passing more information to consul watch handler

I am wondering whether a consul watch handler can be passed some dynamic information when it is called.
In other words, can the watch mechanism pass the script more arguments, beyond the fixed arguments I provide, as in the example below?
{
"watches": [
{
"type": "service",
"args": ["/tmp/dosomething.sh", "how can i get responses from /v1/health/service here"]
}
]
}
By the way, when I want to 'watch' a service, the most important info to me is the service's state (passing or critical), but I don't understand:
when the watch type is 'service', why can't I specify the service?
when the watch type is 'checks', why can't I specify state and service at the same time?
consul watch passes the entire API response payload to the watch handler script on stdin (not as extra command-line arguments). Your script needs to consume and parse that JSON, and then act on the data provided.
When you watch a service, the data returned is from the /v1/health/service/:service endpoint. (See consul/api/watch/funcs.go.)
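A handler then only needs to read that JSON from stdin and act on it. A minimal sketch of the /tmp/dosomething.sh from the question, assuming jq is available:
#!/usr/bin/env bash
# Consul writes the watch result (the JSON array from /v1/health/service/<service>)
# to this script's stdin; any fixed "args" from the watch config arrive as $1, $2, ...
payload=$(cat)
# Report every check that is not passing, tagged with its service instance ID
echo "$payload" | jq -r '
  .[]
  | .Service.ID as $id
  | .Checks[]
  | select(.Status != "passing")
  | "\($id): \(.CheckID) is \(.Status)"'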
when the watch type is 'service', why can't I specify the service?
I assume you mean that you would like to watch a specific service. If so, this is supported. You can specify a specific service to watch using the -service flag. For example, consul watch -type=service -service=assets.
when the watch type is 'checks', why can't I specify state and service at the same time?
If you're interested in monitoring checks for a particular service, you should just use the aforementioned watch command for a specific service. The service check information is included in the API response.
$ consul watch -type=service -service=assets
[
{
"Node": {
"ID": "f013522f-aaa2-8fc6-c8ac-c84cb8a56405",
"Node": "hashicorp-consul-server-2",
"Address": "10.0.0.82",
"Datacenter": "dc2",
"TaggedAddresses": null,
"Meta": null,
"CreateIndex": 22898191,
"ModifyIndex": 22898191
},
"Service": {
"ID": "assets-v1",
"Service": "assets",
"Tags": [],
"Meta": null,
"Port": 9090,
"Address": "",
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"CreateIndex": 22898195,
"ModifyIndex": 22898195,
"Proxy": {
"MeshGateway": {},
"Expose": {}
},
"Connect": {}
},
"Checks": [
{
"Node": "hashicorp-consul-server-2",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {
"Interval": "0s",
"Timeout": "0s",
"DeregisterCriticalServiceAfter": "0s",
"HTTP": "",
"Header": null,
"Method": "",
"Body": "",
"TLSServerName": "",
"TLSSkipVerify": false,
"TCP": ""
},
"CreateIndex": 22898191,
"ModifyIndex": 22898191
}
]
}
]

Chat bot hand-off sample published on Azure gives connection error in emulator: Cannot post activity. Unauthorized

I have been trying to host a bot, which works locally, on Azure.
When I try to connect to the hosted bot with the local emulator, it gives a connection error (emulator: Cannot post activity. Unauthorized).
My .bot file:
{
"name": "production",
"description": "",
"services": [
{
"type": "endpoint",
"appId": "********************",
"appPassword": "*************",
"endpoint": "intermediatorbotsample2019.azurewebsites.net/api/messages",
"name": "AzureAccountLive",
"id": "178"
}
],
"padlock": "",
"version": "2.0",
"path": "D:\\Architecture\IntermediatorBot\\production.bot",
"overrides": null
}
I took a look at the bot file in your comment. The problem is that you have "name": "AzureAccountLive" in your services section. This name MUST be "production". The outer-level "name" has to match the name of the bot (in this case, it's probably intermediatorbotsample2019). It's the name "production", type "endpoint" combination that the Azure Bot Service looks for. If you update your bot file to match what I have below, your bot should work as expected.
{
"name": "YOURBOTNAMEHERE",
"description": "",
"services": [
{
"type": "endpoint",
"appId": "********************",
"appPassword": "*************",
"endpoint": "http://intermediatorbotsample2019.azurewebsites.net/api/messages",
"name": "production",
"id": "178"
}
],
"padlock": "",
"version": "2.0",
"path": "D:\\Architecture\IntermediatorBot\\production.bot",
"overrides": null
}
Regenerated the AppId and secret from https://dev.botframework.com/.
Previously I was using an AzureBotProject; I replaced it with an AzureBotChannelProject instead.
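To confirm that a regenerated AppId/password pair is actually valid before wiring it into the .bot file, you can request a Bot Framework token directly using the standard client-credentials flow (the two placeholders below are your values):
# A 200 response containing an access_token confirms the AppId/password pair works
curl -s -X POST https://login.microsoftonline.com/botframework.com/oauth2/v2.0/token \
  -d "grant_type=client_credentials" \
  -d "client_id=YOUR_APP_ID" \
  -d "client_secret=YOUR_APP_PASSWORD" \
  -d "scope=https://api.botframework.com/.default"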
