After going through a ton of pages (including some SO ones) offering advice (see list below), I am still not able to give my API Gateway permission to execute a newly added Lambda function via the AWS CLI.
i.e. I'm trying to replicate this:
I've created a new endpoint, with the following integration setup:
As soon as I try to test it (from within the API Gateway console), I get this:
<AccessDeniedException>
<Message>Unable to determine service/operation name to be authorized</Message>
</AccessDeniedException>
I know this is because, although I have added the Lambda function to the method, the function's resource policy has still not been updated (image 1); hence the permission issues.
If I re-add the function and let the permissions be applied automatically (via the AWS console), testing and execution work fine.
The CLI command I am currently trying to execute is this (through PowerShell):
aws lambda add-permission --function-name xx-url --statement-id apigateway-perm-1 --action lambda:InvokeFunction --principal apigateway.amazonaws.com --source-arn "arn:aws:execute-api:{REGION}:{AWS_ACCOUNT_ID}:{API_ID}/*/*"
I have tried multiple versions of the above ARN (including /*/*, /{STAGE}/{METHOD} and /{STAGE}/{METHOD}/{RESOURCE}).
I've also tried deploying the API both before and after these changes, with no effect.
PS - I've also read the suggestion of changing the integration HTTP method of the function to POST (see this URL), but my requirement is to have a GET method. Besides, adding this GET method manually through the console works fine, so doing the same through the CLI should work too.
URL list (if anyone else is looking for resources on this issue/topic):
http://interworks.com.mk
docs AWS
docs AWS #2
remove-permission
UPDATE #1
I can also confirm that, after comparing the newly created policy (via get-policy) against an existing, working one, they look almost identical (just named differently):
AWS CLI command used: aws lambda get-policy --function-name {FunctionName}
Result of already working policy vs. the newly created one:
This makes me suspect it could be an additional step I'm missing.
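For reference, the relevant statement in that get-policy output has roughly this shape (values anonymised; this is the typical shape of a Lambda resource-policy statement created by add-permission, not my verbatim output):

{
  "Sid": "apigateway-perm-1",
  "Effect": "Allow",
  "Principal": { "Service": "apigateway.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:{REGION}:{AWS_ACCOUNT_ID}:function:xx-url",
  "Condition": {
    "ArnLike": { "AWS:SourceArn": "arn:aws:execute-api:{REGION}:{AWS_ACCOUNT_ID}:{API_ID}/*/*" }
  }
}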
EDIT (per request)
Test screenshot - this log goes on to display the AccessDeniedException error.
Log as text (made a little shorter for readability):
Execution log for request test-request
Tue Mar 28 22:59:40 UTC 2017 : Starting execution for request: test-invoke-request
Tue Mar 28 22:59:40 UTC 2017 : HTTP Method: GET, Resource Path: /api/v1/{path}
Tue Mar 28 22:59:40 UTC 2017 : Method request path: {}
Tue Mar 28 22:59:40 UTC 2017 : Method request query string: {fileName=x.doc}
Tue Mar 28 22:59:40 UTC 2017 : Method request headers: {}
Tue Mar 28 22:59:40 UTC 2017 : Method request body before transformations:
Tue Mar 28 22:59:40 UTC 2017 : Endpoint request URI: https://lambda.ap-southeast-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:ap-southeast-2:{accountid}:function:xx-url/invocations
Tue Mar 28 22:59:40 UTC 2017 : Endpoint request headers: {X-Amz-Date=20170328T240Z, x-amzn-apigateway-api-id={resouceId}, Accept=application/json, Access-Control-Allow-Origin=*, User-Agent=AmazonAPIGateway_f, Host=lambda.ap-southeast-2.amazonaws.com, X-Amz-Content-Sha256=93438097f7627fe6203432b05e2257de86b32f74f8306, X-Amzn-Trace-Id=Root=1-58daeadc-bdd8f80d35834164c70, x-amzn-lambda-integration-tag=test-request, Authorization=*********************************************d309e7, X-Amz-Source-Arn=arn:aws:execute-api:ap-southeast-2:{AccountId}:{resourceId}/null/GET/api/v1/{path}, X-Amz-Security-Token=FQoDYXdzEDcaDAzSjIbAbD9j0wBjWFBxP++dR0+CGiK3flLOatlCr2 [TRUNCATED]
Tue Mar 28 22:59:40 UTC 2017 : Endpoint request body after transformations: {"resource":"/api/v1/{path}","path":"/api/v1/{path}","httpMethod":"GET","headers":null,"queryStringParameters":{"fileName":"x.doc"},"pathParameters":null,"stageVariables":null,"requestContext":{"accountId":"{AccountId}","resourceId":"{AccountId}:{resourceId}","stage":"test-invoke-stage","requestId":"test-invoke-request","identity":{"cognitoIdentityPoolId":null,"accountId""{resourceId}","cognitoIdentityId":null,"caller":"ABPPLGO4:","apiKey":"test-invoke-api-key","sourceIp":"test-invoke-source-ip","accessKey":"ASHYYQ","cognitoAuthenticationType":null,"cognitoAuthenticationProvider":null,"userArn":"arn:aws:sts::111:assumed-role/AWS-Admins/{name}","userAgent":"Apache-HttpClient/4.5.x (Java/1.8.0_112)","user":"AROZBPPLGO4:{name}"},"resourcePath":"/api/v1/{path}","httpMethod":"GET","apiId":"{resourceId}"},"body":null,"isBase64Encoded":false}
Tue Mar 28 22:59:40 UTC 2017 : Endpoint response body before transformations:
<AccessDeniedException>
<Message>Unable to determine service/operation name to be authorized</Message>
</AccessDeniedException>
Tue Mar 28 22:59:40 UTC 2017 : Endpoint response headers: {x-amzn-RequestId=39398a3e-140a-11e7-92a3-3fdc0fbb61c2, Connection=keep-alive, Content-Length=130, Date=Tue, 28 Mar 2017 22:59:39 GMT}
Tue Mar 28 22:59:40 UTC 2017 : Execution failed due to configuration error: Malformed Lambda proxy response
Tue Mar 28 22:59:40 UTC 2017 : Method completed with status: 502
The fact that this ends up reading Malformed Lambda proxy response is not the real issue (the Lambda is not even getting invoked) - I have proven this by re-selecting the Lambda function manually, letting the permissions be applied, and retesting immediately, after which everything works fine.
To summarize the debugging from the chat:
The GET method was created with the wrong integration HTTP method for Lambda (GET instead of POST). Lambda could therefore not interpret the request from API Gateway and returned the XML error response, which is not a valid JSON proxy response and so produced a 502.
The console adds the necessary permissions and resets the integration HTTP method to POST, which is why everything works after using the console.
The step you are trying to work out is solved by the command:
aws apigateway put-integration
There's one very specific thing in the options of that command that you have to be aware of. A complete put-integration statement looks like this:
aws apigateway put-integration
--region us-west-2
--rest-api-id y0UrApI1D
--resource-id r35ourc3ID
--http-method GET
--type AWS
--integration-http-method POST
--uri arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:111111111111:function:functionname/invocations
In the --uri option, be aware of the following:
us-west-2 is an example region; be sure to put the region where your Lambda function resides
Do not change the "lambda:path/2015-03-31/functions" part; it must stay exactly as stated, otherwise the permission will not be granted
Change the value 111111111111 to your AWS account number
Change "functionname" to the exact name of your registered Lambda function
This will work, guaranteed.
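For completeness, here is a sketch of the full CLI sequence that mirrors what the console does. All IDs, names and paths below are placeholders to adjust for your API; the commands and flags are standard AWS CLI, but treat the sequence as illustrative rather than a verbatim transcript:

# 1. Point the GET method at Lambda, invoking Lambda with POST (the part the console does silently)
aws apigateway put-integration --rest-api-id {API_ID} --resource-id {RESOURCE_ID} --http-method GET --type AWS --integration-http-method POST --uri arn:aws:apigateway:{REGION}:lambda:path/2015-03-31/functions/arn:aws:lambda:{REGION}:{AWS_ACCOUNT_ID}:function:{FUNCTION_NAME}/invocations

# 2. Allow API Gateway to invoke the function
aws lambda add-permission --function-name {FUNCTION_NAME} --statement-id apigateway-invoke --action lambda:InvokeFunction --principal apigateway.amazonaws.com --source-arn "arn:aws:execute-api:{REGION}:{AWS_ACCOUNT_ID}:{API_ID}/*/*"

# 3. Redeploy the stage so the new integration takes effect
aws apigateway create-deployment --rest-api-id {API_ID} --stage-name {STAGE}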
Related
I'm trying to export a disk image I've built in GCP as a VMDK to a storage bucket.
The export throws an error message complaining about a service account not being found. I can't remember having deleted such an account; as far as I know it should have existed since the creation of the project.
How can I re-create the default service account without risking the loss of all my Compute Engine resources? Which roles should I give to this service account?
[image-export-ext.export-disk.setup-disks]: 2021-10-06T18:52:00Z CreateDisks: Creating disk "disk-export-disk-os-image-export-ext-export-disk-j8vpl".
[image-export-ext.export-disk.setup-disks]: 2021-10-06T18:52:00Z CreateDisks: Creating disk "disk-export-disk-buffer-j8vpl".
[image-export-ext.export-disk]: 2021-10-06T18:52:01Z Step "setup-disks" (CreateDisks) successfully finished.
[image-export-ext.export-disk]: 2021-10-06T18:52:01Z Running step "run-export-disk" (CreateInstances)
[image-export-ext.export-disk.run-export-disk]: 2021-10-06T18:52:01Z CreateInstances: Creating instance "inst-export-disk-image-export-ext-export-disk-j8vpl".
[image-export-ext]: 2021-10-06T18:52:07Z Error running workflow: step "export-disk" run error: step "run-export-disk" run error: operation failed &{ClientOperationId: CreationTimestamp: Description: EndTime:2021-10-06T11:52:07.153-07:00 Error:0xc000712230 HttpErrorMessage:BAD REQUEST HttpErrorStatusCode:400 Id:5314937137696624317 InsertTime:2021-10-06T11:52:02.707-07:00 Kind:compute#operation Name:operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee OperationGroupId: OperationType:insert Progress:100 Region: SelfLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/operations/operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee StartTime:2021-10-06T11:52:02.708-07:00 Status:DONE StatusMessage: TargetId:840687976797195965 TargetLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/instances/inst-export-disk-image-export-ext-export-disk-j8vpl User:494995903825#cloudbuild.gserviceaccount.com Warnings:[] Zone:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b ServerResponse:{HTTPStatusCode:200 Header:map[Cache-Control:[private] Content-Type:[application/json; charset=UTF-8] Date:[Wed, 06 Oct 2021 18:52:07 GMT] Server:[ESF] Vary:[Origin X-Origin Referer] X-Content-Type-Options:[nosniff] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]]} ForceSendFields:[] NullFields:[]}:
Code: EXTERNAL_RESOURCE_NOT_FOUND
Message: The resource '494995903825-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
[image-export-ext]: 2021-10-06T18:52:07Z Workflow "image-export-ext" cleaning up (this may take up to 2 minutes).
[image-export-ext]: 2021-10-06T18:52:08Z Workflow "image-export-ext" finished cleanup.
[image-export] 2021/10/06 18:52:08 step "export-disk" run error: step "run-export-disk" run error: operation failed &{ClientOperationId: CreationTimestamp: Description: EndTime:2021-10-06T11:52:07.153-07:00 Error:0xc000712230 HttpErrorMessage:BAD REQUEST HttpErrorStatusCode:400 Id:5314937137696624317 InsertTime:2021-10-06T11:52:02.707-07:00 Kind:compute#operation Name:operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee OperationGroupId: OperationType:insert Progress:100 Region: SelfLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/operations/operation-1633546321707-5cdb3a43ac385-839c7747-2ca655ee StartTime:2021-10-06T11:52:02.708-07:00 Status:DONE StatusMessage: TargetId:840687976797195965 TargetLink:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b/instances/inst-export-disk-image-export-ext-export-disk-j8vpl **User:494995903825#cloudbuild.gserviceaccount.com** Warnings:[] Zone:https://www.googleapis.com/compute/v1/projects/savvy-bonito-207708/zones/us-east1-b ServerResponse:{HTTPStatusCode:200 Header:map[Cache-Control:[private] Content-Type:[application/json; charset=UTF-8] Date:[Wed, 06 Oct 2021 18:52:07 GMT] Server:[ESF] Vary:[Origin X-Origin Referer] X-Content-Type-Options:[nosniff] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]]} ForceSendFields:[] NullFields:[]}: Code: EXTERNAL_RESOURCE_NOT_FOUND; Message: The resource **'494995903825-compute#developer.gserviceaccount.com' of type 'serviceAccount' was not found.**
ERROR
ERROR: build step 0 "gcr.io/compute-image-tools/gce_vm_image_export:release" failed: step exited with non-zero status: 1
Go to IAM & Admin > IAM and check whether your default SA is there.
If it was deleted, you can recover it within 30 days.
How to check whether it was deleted?
To recover it: note that one cannot recover a default compute service account more than 30 days after deletion (a command-line sketch follows below).
If all of the above fails, you might need to go the custom-SA route, or share the image with a project that still has a default service account.
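If you prefer to do the check and (within the 30-day window) the recovery from the command line, something along these lines should work; the numeric ID passed to undelete is a placeholder for the deleted account's unique ID (you can dig it out of the project's audit logs), and older gcloud releases may only offer the command under gcloud beta:

# List the service accounts currently in the project; the default one is {PROJECT_NUMBER}-compute@developer.gserviceaccount.com
gcloud iam service-accounts list --project savvy-bonito-207708

# Undelete a recently deleted service account by its numeric unique ID (placeholder value)
gcloud iam service-accounts undelete 123456789012345678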
I had uploaded some objects to Google Cloud Storage, for which I now get the error Forbidden Object Google::Cloud::PermissionDeniedError. Additionally, I do not have full rights to Cloud Storage, as I am working on a university class project.
Can you please tell me how to delete the objects? I was the one who uploaded them using the Google API. The interesting thing is that I can delete other files, but three files I uploaded were write-protected (if I remember correctly) and cannot be deleted now.
Here is the additional context to the issue.
I checked the retention policy for the storage bucket. It has no retention policy enabled, as can be seen from the output below
gsutil retention get gs://cs291project2
gs://cs291project2/ has no Retention Policy.
Yet, the remove command doesn't seem to work.
SISProject2$ gsutil rm gs://cs291project2/**
Removing gs://cs291project2/00/00/3Da608e50745f7fe13116e728cd0282fda42ce3f83d3f509d5a83f4cd580...
AccessDeniedException: 403 Object 'cs291project2/00/00/3Da608e50745f7fe13116e728cd0282fda42ce3f83d3f509d5a83f4cd580' is under active Temporary hold and cannot be deleted, overwritten or archived until hold is removed.
From the error message, Object ... is under active Temporary hold, it looks like you might have uploaded the file to a locked, retention-enabled bucket. You can check whether a retention policy is enabled for the bucket by running these commands:
Example:
$ gsutil retention get gs://bucket
Retention Policy (LOCKED):
Duration: 7 Day(s)
Effective Time: Thu, 11 Sep 2021 19:52:15 GMT
Example:
$ gsutil ls -Lb gs://bucket/object
gs://bucket/object:
Creation time: Thu, 27 Sep 2020 00:00:00 GMT
Update time: Thu, 27 Sep 2021 12:11:00 GMT
Event-Based Hold: Enabled
If that is the case, you cannot delete the object until its retention period is reached.
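Note that the error you pasted mentions a temporary hold rather than a retention policy; a temporary hold never expires on its own, so someone with sufficient permission has to release it before the object can be deleted, along these lines (object path shortened to a placeholder):

gsutil retention temp release gs://cs291project2/path/to/object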
If you receive a 403 error whilst running these commands, you most likely do not have the correct permissions configured. You can run the command below to review the IAM policies for the project. Please note that this command itself also requires permission.
gcloud projects get-iam-policy <project-id> | grep 'role\|user\|members'
You can then compare the result against the IAM permissions for gsutil. For example, the gsutil rm command requires these:
rm (Buckets): storage.buckets.delete, storage.objects.delete, storage.objects.list
rm (Objects): storage.objects.delete, storage.objects.get
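If you are able to read it, you can also inspect the bucket-level IAM bindings directly:

gsutil iam get gs://cs291project2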
As a last resort, to drill down further to see what might be happening you can add the -D switch to run the command in debug mode.
gsutil -D retention get gs://bucket
Please note, this comes with a warning:
***************************** WARNING *****************************
*** You are running gsutil with debug output enabled.
*** Be aware that debug output includes authentication credentials.
*** Make sure to remove the value of the Authorization header for
*** each HTTP request printed to the console prior to posting to
*** a public medium such as a forum post or Stack Overflow.
***************************** WARNING *****************************
I have a fresh opendistro cluster that works fine, but I am trying to disable some traces in the log and there is one that I can't get rid of.
The log lines look like this:
[2020-04-22T10:09:17,502][INFO ][stats_log ] [myhost01] ------------------------------------------------------------------------
Program=PerformanceAnalyzerPlugin
StartTime=1580542897.428
EndTime=Wed, 21 Apr 2020 10:09:17 CEST
Time=60074 msecs
Timing=total-time:60074.0/1
Counters=TotalError=0
EOE
It's clearly written by the PerformanceAnalyzer plugin provided by opendistro, so I tried to change the plugin's log config (config/log4j2.properties) and restarted the master (myhost01 in this example), but nothing changed in the log.
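For reference, this is the kind of entry I have been adding to that config/log4j2.properties (the logger name stats_log is only guessed from the log line above, so it may not be the right one):

logger.pa_stats.name = stats_log
logger.pa_stats.level = warn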
My question is: how do I change the log level of this plugin?
I'm working on a very large puppet deployment, but seem to be hitting a brick wall. My ideal setup is to use Nginx + Passenger to serve puppet. The problem I am having is that Puppet throws errors when running through passenger. If I start puppetmasterd, everything works fine, but serving through Passenger gives the following errors:
Jun 22 07:33:04 $master_hostname puppet-master[15710]: Starting Puppet master version 2.6.8
Jun 22 07:33:04 $master_hostname puppet-master[15720]: No support for http method POST
Jun 22 07:33:04 $master_hostname puppet-master[15720]: Denying access: Forbidden request: $client_hostname($client_ip) access to /report/$client_hostname [save] authenticated at line 0
Jun 22 07:33:04 $master_hostname puppet-master[15720]: Forbidden request: $client_hostname($client_ip) access to /report/$client_hostname [save] authenticated at line 0
Everything seems to point to an auth.conf problem, but my auth.conf file is about as generic as it could get, and like I said, everything works when I serve puppet using Rack directly.
Has anybody ever run into this issue?
Sounds like this:
http://groups.google.com/group/puppet-users/browse_frm/thread/910994e88f21a497/cae809c17a9acd8a?#cae809c17a9acd8a
The concept is that you need to configure NGINX to pass the SSL information through to Puppet, since NGINX now provides the SSL layer.
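The usual shape of that fix, when NGINX terminates SSL in front of Passenger, is to forward the client-certificate details to Puppet and tell Puppet which headers to read. A rough sketch (directive names are from the Passenger 3.x / Puppet 2.6 era, so verify them against your versions):

# nginx: inside the puppet master's server/location block
passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;

# puppet.conf, [master] section
ssl_client_header = HTTP_X_CLIENT_DN
ssl_client_verify_header = HTTP_X_CLIENT_VERIFY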
I'm getting a 500 error when I go to /users/sign_in (or any other devise page).
This is all the log says:
Started GET "/users/sign_in" for 67.161.236.149 at Mon Jun 13 02:51:47 +0000 2011
Processing by Devise::SessionsController#new as HTML
Completed 500 Internal Server Error in 10ms
ActiveRecord::StatementInvalid (Could not find table 'users'):
Started GET "/users/sign_out" for 67.161.236.149 at Mon Jun 13 10:40:25 +0000 2011
Processing by Devise::SessionsController#destroy as HTML
Completed 500 Internal Server Error in 135ms
NameError (undefined local variable or method `root_path' for #<Devise::SessionsController:0x605f360>):
What is going wrong?
This looks suspicious:
ActiveRecord::StatementInvalid (Could not find table 'users'):
Have you run db:migrate since creating your User model?
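If not, running the pending migrations should create it (standard Rails 3 invocation, assuming a typical setup):

bundle exec rake db:migrate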
Also,
NameError (undefined local variable or method `root_path' for #<Devise::SessionsController:0x605f360>)
suggests that you don't have a root path configured. This is something in routes.rb that matches requests to www.yourdomain.com/. You could use something like
root :to => "pages#home"
which would direct any request to www.yourdomain.com/ to the home action of the pages controller.