Description of the problem
I created a Lambda function with API Gateway in SAM, deployed it, and it was working as expected. In API Gateway I used an HTTP API, not a REST API.
Then I wanted to add a Lambda authorizer with Simple Response, so I followed the SAM and API Gateway docs and came up with the code below.
When I call the items-list route, it now returns 401 Unauthorized, which is expected.
However, when I add the header myappauth with the value "test-token-abc", I get a 500 Internal Server Error.
I checked this page, but it seems all of the steps listed there are OK: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-http-lambda-integrations/
I enabled logging for the API Gateway, following these instructions: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-logging.html
But all I get is something like this (my IP and request ID redacted):
[MY-IP] - - [07/Jul/2021:08:24:06 +0000] "GET GET /items-list/{userNumber} HTTP/1.1" 500 35 [REQUEST-ID]
(Perhaps I can configure the logger in such a way that it prints a more meaningful error message? EDIT: I've tried adding $context.authorizer.error to the logs, but it doesn't print any specific error message; it just prints a dash: -)
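For reference, by "adding $context.authorizer.error to the logs" I mean an access log format along these lines on the HttpApi resource (the log group name is a placeholder, and the other fields are just ones I'd expect to be useful):

AccessLogSettings:
  DestinationArn: !GetAtt MyAppApiLogGroup.Arn  # placeholder CloudWatch log group
  Format: '$context.requestId $context.httpMethod $context.routeKey $context.status $context.authorizer.error $context.integrationErrorMessage'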
I also checked the logs for the Lambda functions; there is nothing there (all logs were from the time before I added the authorizer).
So, what am I doing wrong?
What I tried:
This is my Lambda authorizer function, which I deployed using sam deploy. When I test it in isolation using an event with the myappauth header, it works:
exports.authorizer = async (event) => {
    let response = {
        "isAuthorized": false,
    };

    if (event.headers.myappauth === "test-token-abc") {
        response = {
            "isAuthorized": true,
        };
    }
    return response;
};
And this is the SAM template.yml, which I deployed using sam deploy:
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  myapp-v1
Transform:
  - AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: nodejs14.x
    MemorySize: 128
    Timeout: 100
    Environment:
      Variables:
        MYAPP_TOKEN: "test-token-abc"

Resources:
  MyAppAPi:
    Type: AWS::Serverless::HttpApi
    Properties:
      FailOnWarnings: true
      Auth:
        Authorizers:
          MyAppLambdaAuthorizer:
            AuthorizerPayloadFormatVersion: "2.0"
            EnableSimpleResponses: true
            FunctionArn: !GetAtt authorizerFunction.Arn
            FunctionInvokeRole: !GetAtt authorizerFunctionRole.Arn
            Identity:
              Headers:
                - myappauth
        DefaultAuthorizer: MyAppLambdaAuthorizer

  itemsListFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/v1-handlers.itemsList
      Description: A Lambda function that returns a list of items.
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        Api:
          Type: HttpApi
          Properties:
            Path: /items-list/{userNumber}
            Method: get
            ApiId: MyAppAPi

  authorizerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/v1-handlers.authorizer
      Description: A Lambda function that authorizes requests.
      Policies:
        - AWSLambdaBasicExecutionRole
Edit:
User @petey suggested that I try returning an IAM policy from my authorizer function, so I changed EnableSimpleResponses to false in the template.yml and changed my function as below, but got the same result:
exports.authorizer = async (event) => {
    let response = {
        "principalId": "my-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Deny",
                "Resource": event.routeArn
            }]
        }
    };

    if (event.headers.myappauth === "test-token-abc") {
        response = {
            "principalId": "my-user",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": event.routeArn
                }]
            }
        };
    }
    return response;
};
I am going to answer my own question because I have resolved the issue, and I hope this will help people who are going to use the new "HTTP API" format in API Gateway, since there are not a lot of tutorials out there yet; most examples you will find online are for the older API Gateway standard, which Amazon calls "REST API". (If you want to know the difference between the two, see here.)
The main problem lies in the example that is presented in the official documentation. They have:
MyLambdaRequestAuthorizer:
  FunctionArn: !GetAtt MyAuthFunction.Arn
  FunctionInvokeRole: !GetAtt MyAuthFunctionRole.Arn
The problem with this is that the template will create a new role called MyAuthFunctionRole, but that role will not have all the necessary policies attached to it!
The crucial part that I missed in the official docs is this paragraph:
You must grant API Gateway permission to invoke the Lambda function by using either the function's resource policy or an IAM role. For this example, we update the resource policy for the function so that it grants API Gateway permission to invoke our Lambda function.
The following command grants API Gateway permission to invoke your Lambda function. If API Gateway doesn't have permission to invoke your function, clients receive a 500 Internal Server Error.
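For completeness, if you don't set a FunctionInvokeRole at all, the resource-policy route the docs mention can also be expressed directly in the SAM template with an AWS::Lambda::Permission resource. A minimal sketch using the logical IDs from my template above (the permission's logical ID and the SourceArn pattern are just illustrative):

AuthorizerInvokePermission:  # illustrative logical ID
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref authorizerFunction
    Action: lambda:InvokeFunction
    Principal: apigateway.amazonaws.com
    SourceArn: !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${MyAppAPi}/authorizers/*

In my case, though, the missing permissions on the invocation role were the actual problem, so the role-based fix below is what worked.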
The best way to solve this is to include the Role definition in the SAM template.yml, under Resources:
MyAuthFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    # [... other properties...]
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Policies:
      # here you will put the InvokeFunction policy, for example:
      - PolicyName: MyPolicy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: 'lambda:InvokeFunction'
              Resource: !GetAtt MyAuthFunction.Arn
You can find a description of the various Properties for a role here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html
Another way to solve this is to separately create a new policy in the AWS Console that has InvokeFunction permission, and then, after deployment, attach that policy to the MyAuthFunctionRole that SAM created. The authorizer will then work as expected.
Another strategy would be to create a new role beforehand that has a policy with InvokeFunction permission, then copy and paste the ARN of that role into the SAM template.yml:
MyLambdaRequestAuthorizer:
  FunctionArn: !GetAtt MyAuthFunction.Arn
  FunctionInvokeRole: arn:aws:iam::[...]
Your Lambda authorizer is not returning what an actual Lambda authorizer is expected to return (an IAM policy). This could explain that 500 Internal Server Error.
To fix it, replace it with something like this, which returns an IAM policy (or rejects the request):
// A simple token-based authorizer example to demonstrate how to use an authorization token
// to allow or deny a request. In this example, the caller named 'user' is allowed to invoke
// a request if the client-supplied token value is 'allow'. The caller is not allowed to invoke
// the request if the token value is 'deny'. If the token value is 'unauthorized' or an empty
// string, the authorizer function returns an HTTP 401 status code. For any other token value,
// the authorizer returns an HTTP 500 status code.
// Note that token values are case-sensitive.

exports.handler = function(event, context, callback) {
    var token = event.authorizationToken;
    // modify switch statement here to your needs
    switch (token) {
        case 'allow':
            callback(null, generatePolicy('user', 'Allow', event.methodArn));
            break;
        case 'deny':
            callback(null, generatePolicy('user', 'Deny', event.methodArn));
            break;
        case 'unauthorized':
            callback("Unauthorized");   // Return a 401 Unauthorized response
            break;
        default:
            callback("Error: Invalid token"); // Return a 500 Invalid token response
    }
};

// Help function to generate an IAM policy
var generatePolicy = function(principalId, effect, resource) {
    var authResponse = {};
    authResponse.principalId = principalId;
    if (effect && resource) {
        var policyDocument = {};
        policyDocument.Version = '2012-10-17';
        policyDocument.Statement = [];
        var statementOne = {};
        statementOne.Action = 'execute-api:Invoke';
        statementOne.Effect = effect;
        statementOne.Resource = resource;
        policyDocument.Statement[0] = statementOne;
        authResponse.policyDocument = policyDocument;
    }

    // Optional output with custom properties of the String, Number or Boolean type.
    authResponse.context = {
        "stringKey": "stringval",
        "numberKey": 123,
        "booleanKey": true
    };
    return authResponse;
}
Lots more information here: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html#api-gateway-lambda-authorizer-lambda-function-create
Just to complete the answer: you have to add an AssumeRolePolicyDocument under Properties.
The role will then look like this:
MyAuthFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Policies:
      # see answer above
I was following the Vault Agent with AWS documentation and it works fine until I restart the service or reboot the instance. Any ideas on how I can overcome this problem?
Vault Agent configuration file vault_agent.hcl:
pid_file = "./pidfile"

vault {
  address = "http://vault-server:8200"
  retry {
    num_retries = 5
  }
}

listener "tcp" {
  address         = "{{ ansible_ssh_host }}:8200"
  cluster_address = "{{ ansible_ssh_host }}:8201"
  tls_disable     = 1
}

auto_auth {
  method "aws" {
    mount_path = "auth/aws/project"
    config = {
      type = "ec2"
      role = "test-role"
    }
  }

  cache {
    use_auto_auth_token = true
  }

  sink "file" {
    config = {
      path = "/tmp/test"
    }
  }
}
Now the login works without a problem.
login
vault login
output
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                          Value
---                          -----
token                        xxx
token_accessor               xxx
token_duration               46s
token_renewable              true
token_policies               ["admin-role" "default"]
identity_policies            []
policies                     ["admin-role" "default"]
token_meta_account_id        xxx
token_meta_auth_type         ec2
token_meta_role              test-role
token_meta_role_tag_max_ttl  0s
However, if I restart the agent or reboot the EC2 instance, I can't authenticate anymore:
[INFO] auth.handler: authenticating
[ERROR] auth.handler: error authenticating: error="Error making API request.
* client nonce mismatch and instance meta-data incorrect" backoff=1s
I am attempting to deploy a workflow using the https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/workflows_workflow Terraform resource and it's failing with this error:
Error: Error creating Workflow: googleapi: Error 400: request contains errors
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.BadRequest",
    "fieldViolations": [
      {
        "description": "The referenced service account is not user-managed, please verify the correctness of the service account name",
        "field": "workflow.service_account"
      }
    ]
  }
]
I can see from running terraform plan that this is the definition of my workflow:
  + resource "google_workflows_workflow" "my_first_workflow" {
      + create_time     = (known after apply)
      + description     = "Magic"
      + id              = (known after apply)
      + name            = "myworkflow"
      + name_prefix     = (known after apply)
      + project         = "myproject"
      + region          = "europe-west4"
      + revision_id     = (known after apply)
      + service_account = "projects/myproject/serviceAccounts/service-account"
      + source_contents = <<-EOT
            - postCallBigqueryStoredProcedure:
                call: http.post
                args:
                    url: https://bigquery.googleapis.com/bigquery/v2/projects/myproject/jobs
                    body: {
                        "configuration": {
                            "query": {
                                "query": "call mydataset.mystoredprocedure()"
                            }
                        }
                    }
        EOT
      + state           = (known after apply)
      + update_time     = (known after apply)
    }
The error message is complaining about the service account; however, I'm certain that the service account named here, projects/myproject/serviceAccounts/service-account, is valid and exists, so I'm clueless as to why I'm getting this error. Googling the error message hasn't turned up anything useful.
Does anyone know what might be the problem?
You mentioned that the service account is valid and it exists. When you are referencing it, are you including the full account name, including the details after the '@', i.e. 7**********-compute@developer.gserviceaccount.com?
I was able to replicate this behaviour by using either an incorrect name or a service account name without the full email address.
You must use the complete email address of your service account. Here's a sample of a correct format. I'm currently using Terraform v0.14.7:
service_account = "projects/project_id/serviceAccounts/7**********-compute@developer.gserviceaccount.com"
I am trying to release a specific version via Artifactory. Promoting to a release folder gives me a 403 with the error below:
{
  "errors" : [ {
    "status" : 403,
    "message" : "You are not permitted to execute the promotion 'snapshotToRelease'."
  } ]
}
Looking at the error, it's a permission issue, and I have tried most of the permissions for the user that tries to release the build, but no luck. Below is the API resource I'm trying to execute:
/api/plugins/build/promote/snapshotToRelease
I suppose you're using this plugin?
The way this promotion is configured, only the following users can run it:
Users with admin privileges
A user with the username "jenkins"
If neither of these describes the user in question, and you're allowed to edit the plugin file, you can change the code to work with other users:
snapshotToRelease(users: ["my-promoter"], params: ...) { buildName, buildNumber, params ->
    // ...
}
Or any user in a group:
snapshotToRelease(groups: ["my-promote-group"], params: ...) { buildName, buildNumber, params ->
    // ...
}
(See the documentation for more details.)
I want to use the AWS::Include transform macro with some dynamic parameters for my file.
Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      TestMacroVariable:
        Default: 2
        Type: Number
      Location: !Sub "s3://${InstallBucketName}/test.yaml"
test.yaml:
DataAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName:
      Ref: DataLaunchConfiguration
    MinSize: '1'
    MaxSize: '100'
    DesiredCapacity:
      Ref: TestMacroVariable
...
After calling: aws cloudformation describe-stack-events --stack-name $stack
I get:
"ResourceStatusReason": "The value of parameter TestMacroVariable
under transform Include must resolve to a string, number, boolean or a
list of any of these.. Rollback requested by user."
When I try to do it this way:
Resources:
  'Fn::Transform':
    Name: 'AWS::Include'
    Parameters:
      TestMacroVariable: 2
      Location: !Sub "s3://${InstallBucketName}/test.yaml"
I get:
"ResourceStatusReason": "Template format error: Unresolved resource
dependencies [TestMacroVariable] in the Resources block of the
template"
The error is the same when I don't provide TestMacroVariable at all.
I tried with different types: String, Number, Boolean, List; none of them work.
As far as I know, you cannot have anything other than the Location key in the Parameters section of AWS::Include. Check the AWS doc here.
As an alternative, you can pass in the whole S3 path as a parameter and reference it in Location:
Parameters:
  MyS3Path:
    Type: String
    Default: 's3://my-cf-templates/my-include.yaml'
...
'Fn::Transform':
  Name: 'AWS::Include'
  Parameters:
    Location: !Ref MyS3Path
Building on what @BatteryAcid said, you can reference the parameters of your CloudFormation template directly from the file you are including, using the Sub function.
In your CF template:
Parameters:
  TableName:
    Type: String
    Description: Table Name of the Dynamo DB Users table
    Default: 'Users'
In the file you are including:
"Resource": [
{
"Fn::Sub": [
"arn:${AWS::Partition}:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${tableName}",
{
"tableName": {
"Ref": "TableName"
}
}
]
}
Alternatively, it doesn't have to be a parameter; it can be any resource from your template:
Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${QueryTelemtryFunction.Arn}/invocations