Can't create a new Lambda function from a Step Function - aws-lambda

I'm trying to create a Step Function in AWS that creates a Lambda function, but I get the following error whenever I try to run it:
"Could not unzip uploaded file. Please check your file, then try to upload again."
I tried uploading the ZIP file directly to a Lambda via the console, and it uploads without any issue; I also confirmed in my network tab that the ZIP file encoding matches what I'm sending via the Step Function.
Step function definition:
{
  "Comment": "Create lambda",
  "StartAt": "Create Lambda",
  "States": {
    "Create Lambda": {
      "Type": "Task",
      "Parameters": {
        "Code": {
          "ZipFile.$": "$.file"
        },
        "FunctionName.$": "$.name",
        "Role": "<<redacted>>",
        "Handler.$": "$.handler",
        "Runtime": "python3.7"
      },
      "Resource": "arn:aws:states:::aws-sdk:lambda:createFunction",
      "ResultPath": null,
      "End": true
    }
  }
}
Event I'm executing the step function with:
{
  "file": "UEsDBBQAAAAAAAl0R1YAAAAAAAAAAAAAAAASACAAbXktYXdlc29tZS1sYW1iZGEvVVQNAAfzYOJj82DiY/Ng4mN1eAsAAQT2AQAABBQAAABQSwMEFAAIAAgAFnRHVgAAAAAAAAAAPQAAABwAIABteS1hd2Vzb21lLWxhbWJkYS9oYW5kbGVyLnB5VVQNAAcNYeJjDmHiYw1h4mN1eAsAAQT2AQAABBQAAABLSU1TyEjMS8lJLdJILUvNK9FRSM7PK0mtKNG04lIAgqLUktKiPIVqBaXiksSS0mLn/JRUJSsFIwMDhVoAUEsHCJKJzTg9AAAAPQAAAFBLAQIUAxQAAAAAAAl0R1YAAAAAAAAAAAAAAAASACAAAAAAAAAAAADtQQAAAABteS1hd2Vzb21lLWxhbWJkYS9VVA0AB/Ng4mPzYOJj82DiY3V4CwABBPYBAAAEFAAAAFBLAQIUAxQACAAIABZ0R1aSic04PQAAAD0AAAAcACAAAAAAAAAAAACkgVAAAABteS1hd2Vzb21lLWxhbWJkYS9oYW5kbGVyLnB5VVQNAAcNYeJjDmHiYw1h4mN1eAsAAQT2AQAABBQAAABQSwUGAAAAAAIAAgDKAAAA9wAAAAAA",
  "name": "test-lambda",
  "handler": "handler.handler"
}
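For reference, a base64 payload like the one above can be produced by zipping the handler and base64-encoding the archive. Here is a minimal Python sketch (the inline handler body and file name are illustrative, not the actual contents of the archive in this question):

import base64
import io
import json
import zipfile

# Build an in-memory ZIP archive containing a single handler module.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("handler.py", "def handler(event, context):\n    return 'hello'\n")

# Base64-encode the archive so it can travel as a string inside a JSON event.
event = {
    "file": base64.b64encode(buf.getvalue()).decode("ascii"),
    "name": "test-lambda",
    "handler": "handler.handler",
}
print(json.dumps(event))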

Related

Web app works locally and on App Engine, but not on Cloud Run

So I've run into this issue with a web app I've made:
it gets a file path as input
if the file exists on a bucket, it uses a Python client API to create a Compute Engine instance
it passes the file path to the instance in the startup script
When I ran it locally, I created a Python virtual environment and then ran the app. When I submit the input in the web browser, the virtual machine is created by the API call. I assumed it used my personal account, so I switched to the service account on the command line with 'gcloud config set account', and it ran fine once more.
When I simply go to the source code directory and deploy it as is, the application can create the virtual machine instances as well.
When I use Google Cloud Build and deploy to Cloud Run, it doesn't create the VM instance.
The web app itself is not throwing any errors, but when I check Compute Engine's logs, there is an error in the logs:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 3,
      "message": "INVALID_PARAMETER"
    },
    "authenticationInfo": {
      "principalEmail": "####"
    },
    "requestMetadata": {
      "callerIp": "#####",
      "callerSuppliedUserAgent": "(gzip),gzip(gfe)"
    },
    "serviceName": "compute.googleapis.com",
    "methodName": "v1.compute.instances.insert",
    "resourceName": "projects/someproject/zones/somezone/instances/nameofinstance",
    "request": {
      "@type": "type.googleapis.com/compute.instances.insert"
    }
  },
  "insertId": "######",
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "#####",
      "project_id": "someproject",
      "zone": "somezone"
    }
  },
  "timestamp": "2021-06-16T12:18:21.253551Z",
  "severity": "ERROR",
  "logName": "projects/someproject/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operation-#####",
    "producer": "compute.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2021-06-16T12:18:21.253551Z"
}
In theory, it is the exact same code that worked from my laptop and on App Engine. I'm baffled why it only does this on Cloud Run.
App Engine's default service account was stripped of all its roles and given a custom role tailored to the web app's function.
Cloud Run is using a different service account, but it was given that exact same custom role.
Here is the method I use to call the API:
import googleapiclient.discovery
from datetime import date

def create_instance(path):
    compute = googleapiclient.discovery.build('compute', 'v1')
    vmname = "piinnuclei" + date.today().strftime("%Y%m%d%H%M%S")
    startup_script = "#! /bin/bash\napt update\npip3 install pg8000\nexport BUCKET_PATH=my-bucket/{}\ngsutil -m cp -r gs://$BUCKET_PATH /home/connor\ncd /home/connor\n./cloud_sql_proxy -dir=cloudsql -instances=sql-connection-name=unix:sql-connection-name &\npython3 run_analysis_upload.py\nexport ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')\nexport NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')\ngcloud --quiet compute instances delete $NAME --zone=$ZONE".format(path)
    config = {
        "kind": "compute#instance",
        "name": vmname,
        "zone": "projects/my-project/zones/northamerica-northeast1-a",
        "machineType": "projects/my-project/zones/northamerica-northeast1-a/machineTypes/e2-standard-4",
        "displayDevice": {
            "enableDisplay": False
        },
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": startup_script
                }
            ]
        },
        "tags": {
            "items": []
        },
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": vmname,
                "initializeParams": {
                    "sourceImage": "projects/my-project/global/images/my-image",
                    "diskType": "projects/my-project/zones/northamerica-northeast1-a/diskTypes/pd-balanced",
                    "diskSizeGb": "100"
                },
                "diskEncryptionKey": {}
            }
        ],
        "canIpForward": False,
        "networkInterfaces": [
            {
                "kind": "compute#networkInterface",
                "subnetwork": "projects/my-project/regions/northamerica-northeast1/subnetworks/default",
                "accessConfigs": [
                    {
                        "kind": "compute#accessConfig",
                        "name": "External NAT",
                        "type": "ONE_TO_ONE_NAT",
                        "networkTier": "PREMIUM"
                    }
                ],
                "aliasIpRanges": []
            }
        ],
        "description": "",
        "labels": {},
        "scheduling": {
            "preemptible": False,
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": True,
            "nodeAffinities": []
        },
        "deletionProtection": False,
        "reservationAffinity": {
            "consumeReservationType": "ANY_RESERVATION"
        },
        "serviceAccounts": [
            {
                "email": "batch-service-accountg@my-project.iam.gserviceaccount.com",
                "scopes": [
                    "https://www.googleapis.com/auth/cloud-platform"
                ]
            }
        ],
        "shieldedInstanceConfig": {
            "enableSecureBoot": False,
            "enableVtpm": True,
            "enableIntegrityMonitoring": True
        },
        "confidentialInstanceConfig": {
            "enableConfidentialCompute": False
        }
    }
    return compute.instances().insert(
        project="my-project",
        zone="northamerica-northeast1",
        body=config).execute()
The issue was with the zone. For some reason, when it was run on Cloud Run, the code below was the culprit.
return compute.instances().insert(
    project="my-project",
    zone="northamerica-northeast1",
    body=config).execute()
"northamerica-northeast1" should have been "northamerica-northeast1-a"
EDIT:
I made a new virtual machine image and quickly ran into the same problem: it would work locally and break down in the Cloud Run environment. After letting it sit for some time, it began to work again. This leads me to the conclusion that there is also some sort of delay before a new image can be used by Cloud Run.

Google Home actions.fulfillment.devices not getting enabled

I am using Google smart home actions for IoT. I updated my action URL and account linking details. When I try to enable the Test in the simulator to deploy my test app to the cloud, it fails with the error "GoogleFulfillment 'actions.fulfillment.devices' is not supported", and the linked app does not update the old URL. This worked a few days ago. Were there any changes on Google's side, or does anybody have any clue?
There is a manual workaround. Thanks to the Google Assistant forum:
Steps:
1 - Download the gactions cli at https://developers.google.com/actions/tools/gactions-cli
2 - Authenticate with any command:
./gactions list --project [YOUR_PROJECT_ID]
3 - Download the json representation of your action:
./gactions get --project [YOUR_PROJECT_ID] --version draft > action.json
4 - Edit the JSON. Extract the only object from its array and remove the nested "googleFulfillments" object:
"googleFulfillments": [
{
"endpoint": {
"baseUrl": "[URL]"
},
"name": "actions.fulfillment.devices"
}
],
5 - Delete the brackets "[ ]" at the top and end of the file. Only one language can be active at a time, so delete any unnecessary data from the action.json file. The file looks like this, with its parameters:
{
  "accountLinking": {
    "accessTokenUrl": "xxxx",
    "assertionTypes": [
      "ID_TOKEN"
    ],
    "authenticationUrl": "xxx",
    "clientId": "xxx",
    "clientSecret": "xxxx",
    "grantType": "AUTH_CODE"
  },
  "actions": [
    {
      "description": "Smart home action for project xxxxxxx",
      "fulfillment": {
        "conversationName": "AoGSmartHomeConversation_xxxxxx"
      },
      "name": "actions.devices"
    }
  ],
  "conversations": {
    "AoGSmartHomeConversation_xxxxxxxx": {
      "name": "",
      "url": "xxxxxxx"
    }
  },
  "locale": "en",
  "manifest": {
    "category": "xxx",
    "companyName": "xxx",
    "contactEmail": "xxx",
    "displayName": "xxx",
    "largeLandscapeLogoUrl": "xxxxxx",
    "longDescription": "xxxx",
    "privacyUrl": "xxx",
    "shortDescription": "xxxx",
    "smallSquareLogoUrl": "xxxx",
    "termsOfServiceUrl": "xxxxx",
    "testingInstructions": "xxxxx"
  }
}
6 - If you have updated the URL for fulfillment, authentication, or token, go to the Google Actions Console and update its entry there;
7 - Push your fixed action into test:
./gactions test --project [YOUR_PROJECT_ID] --action_package ./action.json
This replaces the "Click Simulator under TEST" step in the Google Assistant manual setup. It worked for me!
More help here: https://community.home-assistant.io/t/google-assistant-trouble-shooting/99223/142

Is it possible to create an AWS Lambda function from a JSON config file?

When I execute
$ aws lambda list-functions
I get a list of all my lambda functions:
{
  "Functions": [
    {
      "TracingConfig": {
        "Mode": "PassThrough"
      },
      "Version": "$LATEST",
      "CodeSha256": "aB+/Defg0+abcdefghijklmnopqerstuvwxyzABCDEF=",
      "FunctionName": "foofunction",
      "VpcConfig": {
        "SubnetIds": [],
        "SecurityGroupIds": []
      },
      "MemorySize": 128,
      "RevisionId": "123abc45-1234-1234-1234-123456789012",
      "CodeSize": 61521970,
      "FunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:foofunction",
      "Environment": {
        "Variables": {
          "FOO": "BAR",
          "ESCAPING": "[\"a\", \"b\", \"c\"]",
          "IS_VALUE": "1"
        }
      },
      "Handler": "lambda_function.lambda_handler",
      "Role": "arn:aws:iam::123456789012:role/service-role/lamdaRole",
      "Timeout": 300,
      "LastModified": "2018-03-01T12:11:10.987+0000",
      "Runtime": "python3.6",
      "Description": ""
    }
  ]
}
Is it possible to use this to create a new lambda function? I am looking for something like
$ aws lambda create-function --config myconfig.json
where myconfig.json would contain the name, environment variables, the region, the role, the handler, the runtime and a description.
Execute the lambda command with the --generate-cli-skeleton option to view the JSON skeleton, and direct the output to a file to save the skeleton locally:
aws lambda create-function --generate-cli-skeleton > cli.json
Open the skeleton in a text editor, remove any parameters that you will not use, and fill in the parameters that you need.
Pass the JSON configuration to the --cli-input-json parameter using the file:// prefix:
aws lambda create-function --cli-input-json file://cli.json
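For illustration, a trimmed cli.json for a function like the one listed above might look like the sketch below. The S3 bucket and key are placeholder assumptions (Code.ZipFile would instead need the base64-encoded archive contents), and note that the region is not part of this file; it comes from your CLI profile or the --region flag:

{
  "FunctionName": "foofunction",
  "Runtime": "python3.6",
  "Role": "arn:aws:iam::123456789012:role/service-role/lamdaRole",
  "Handler": "lambda_function.lambda_handler",
  "Description": "Created from a JSON config file",
  "Timeout": 300,
  "MemorySize": 128,
  "Environment": {
    "Variables": {
      "FOO": "BAR"
    }
  },
  "Code": {
    "S3Bucket": "my-bucket",
    "S3Key": "foofunction.zip"
  }
}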
Ref: https://docs.aws.amazon.com/cli/latest/userguide/generate-cli-skeleton.html

Unable to create OpenWhisk trigger on Ubuntu local machine with /whisk.system/alarms/alarm feed

I was able to install the system package for alarms successfully, mostly following the link https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-294010619
So, I get the following:
bin/wsk package get --summary /whisk.system/alarms --insecure
package /whisk.system/alarms: Alarms and periodic utility
(parameters: *apihost, *cron, *trigger_payload)
feed /whisk.system/alarms/alarm: Fire trigger when alarm occurs
(parameters: none defined)
Features like actions, triggers, and rules are working on my local OpenWhisk installation.
I am running the command to create a trigger as follows:
bin/wsk trigger create convertTriggerPeriodic --feed /whisk.system/alarms/alarm -p cron "*/9 * * * * *" -p trigger_payload "{\"name\":\"Odin\",\"place\":\"Asgard\"}" -p maxTriggers 6 --insecure
ok: invoked /whisk.system/alarms/alarm with id d5879ab1c97745c9879ab1c977c5c967
{
  "activationId": "d5879ab1c97745c9879ab1c977c5c967",
  "annotations": [
    {
      "key": "limits",
      "value": {
        "logs": 10,
        "memory": 256,
        "timeout": 60000
      }
    },
    {
      "key": "path",
      "value": "whisk.system/alarms/alarm"
    }
  ],
  "duration": 6402,
  "end": 1508984964595,
  "logs": [],
  "name": "alarm",
  "namespace": "guest",
  "publish": false,
  "response": {
    "result": {
      "error": {
        "code": 30810,
        "error": "There was an error processing your request."
      }
    },
    "status": "application error",
    "success": false
  },
  "start": 1508984958193,
  "subject": "guest",
  "version": "0.0.2"
}
ok: invoked /whisk.system/alarms/alarm with id 4fd67308821e4e0b967308821e4e0bdb
{
  "activationId": "4fd67308821e4e0b967308821e4e0bdb",
  "annotations": [
    {
      "key": "limits",
      "value": {
        "logs": 10,
        "memory": 256,
        "timeout": 60000
      }
    },
    {
      "key": "path",
      "value": "whisk.system/alarms/alarm"
    }
  ],
  "duration": 4432,
  "end": 1508984969257,
  "logs": [],
  "name": "alarm",
  "namespace": "guest",
  "publish": false,
  "response": {
    "result": {
      "error": {
        "code": 30822,
        "error": "There was an error processing your request."
      }
    },
    "status": "application error",
    "success": false
  },
  "start": 1508984964825,
  "subject": "guest",
  "version": "0.0.2"
}
ok: deleted trigger convertTriggerPeriodic
Run 'wsk --help' for usage.
It runs the trigger twice, each time reporting the error "There was an error processing your request.", and then it deletes the trigger.
So there is no way I can associate a rule or action with the trigger.
It looks like the alarms action was not installed properly. The directions listed in https://github.com/apache/incubator-openwhisk-package-alarms/issues/51 still work for running the alarms Docker container, but are out of date for installing the action. Please see the comment I made on July 21 (https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-317007147) in that issue; the parameters for installCatalog.sh have changed. If you are having trouble following the install steps in that issue, you can also check out the comment I left on August 9th (https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-321320242). It contains a link to some Ansible code I wrote to handle the install for you.

Heroku gives "wrongly formatted" error vs local working fine

The application works fine locally. However, deploying it to Heroku gives the following error without any further clue:
com.github.dandelion.core.DandelionException: The file
'WEB-INF/classes/dandelion/my-bundle.json' is wrongly formatted.
Please help me resolve this.
Below is the JSON file used in the deployment, for reference.
{
  "bundle": "my-bundle",
  "assets": [
    {
      "name": "jquery",
      "version": "2.1.1",
      "type": "js",
      "locations": {
        "webapp": "/webjars/jquery/2.1.1/jquery.min.js"
      }
    },
    {
      "name": "datatables",
      "version": "1.10.5",
      "type": "js",
      "locations": {
        "webapp": "/webjars/datatables/1.10.5/js/jquery.dataTables.js"
      }
    },
    {
      "name": "datatables",
      "version": "1.10.5",
      "type": "css",
      "locations": {
        "webapp": "/webjars/datatables/1.10.5/css/jquery.dataTables.css"
      }
    }
  ]
}
Thanks.
