Serverless config credentials not working when serverless.yml file present - aws-lambda

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}
custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}
plugins:
  - serverless-offline
  - serverless-stage-manager
...
TL;DR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains that the default profile doesn't exist. It's only an issue when the SSM Parameter Store is used in the serverless file.

The profile attribute in your serverless.yml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of two possible solutions:
1. Remove profile from your serverless.yml completely and use environment variables only.
2. Leave profile: default in your serverless.yml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you no longer need to run serverless config credentials at all.
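For option #1, a minimal sketch of what the pipeline step could look like (assuming the standard AWS SDK environment variables and reusing the $AWS_ACCESS_KEY / $AWS_ACCESS_SECRET secrets from the question):
# Export the credentials as the environment variables the AWS SDK (and serverless) read directly.
- export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
- export AWS_SECRET_ACCESS_KEY=$AWS_ACCESS_SECRET
# With no profile set in serverless.yml, no serverless config credentials step is needed.
- serverless deploy --stage staging
Since serverless never has to resolve a named profile this way, the Profile default does not exist error should not come up.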

Related

AWS::CloudFormation::Stack creation through serverless framework failed on localstack

I'm deploying a lambda on localstack using the serverless framework. I've configured AWS credentials in ~/.aws/credentials. When I run the deploy command, I get the following error and couldn't figure out the cause of the failure.
Command: serverless deploy --stage local --aws-profile default
Output:
✖ Stack lambda-api-local failed to deploy (12s)
Environment: darwin, node 16.14.0, framework 3.17.0 (local) 3.17.0v (global), plugin 6.2.2, SDK 4.3.2
Credentials: Local, "default" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
CREATE_FAILED: lambda-api-local (AWS::CloudFormation::Stack)
undefined
This is my ~/.aws/credentials
[default]
aws_access_key_id = test
aws_secret_access_key = test
This is my serverless.yml
service: lambda-api
plugins:
  - serverless-localstack
provider:
  name: aws
  stage: local
  runtime: go1.x
  profile: localstack
package:
  patterns:
    - '!./**'
    - './bin/**'
functions:
  hello:
    handler: bin/lambda-practice
    events:
      - http:
          path: /hello
          method: get
custom:
  localstack:
    debug: true
    endpointFile: localstack_endpoints.json
    stages:
      # Stages for which the plugin should be enabled
      - local
    host: http://localhost
    edgePort: 4567
    autostart: true
    lambda:
      mountCode: true
    docker:
      sudo: false
I'm trying to deploy and run a lambda on localstack through the serverless framework.

SSM parameters in a SAM local setup: unable to get parameters to work locally

I have been trying to set up a project to test my lambdas in a local environment. We are using SAM to mimic AWS resources, and everything works fine with one exception: for business reasons, I had to include SSM parameters. When we try to read the parameters using SAM with sam local start-lambda ..., the lambda code is not able to retrieve them. I know the AWS credentials are fine, as I can connect to AWS, but we don't want to use real AWS services for this: we want to pass, set, define (whatever you want to call it) SSM parameters in my local environment and then use them with sam local start-lambda, without an AWS connection, so the parameters are used only for local testing.
I have read the following post, How to access SSM Parameter Store from SAM lambda local in node
And this issue in github:
https://github.com/aws/aws-sam-cli/issues/616
It is mentioned that the way to do it is by using --env-vars but it is not working so far.
This is my template.
Parameters:
  IdentityNameParameter:
    Type: AWS::SSM::Parameter::Value<String>
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler::handleRequest
      CodeUri: lambdaFunction
      Runtime: java11
      Timeout: 40
      Environment:
        Variables:
          AWS_ACCESS_KEY_ID: "keyid"
          AWS_SECRET_ACCESS_KEY: "accesskey"
          AWS_DEFAULT_REGION: "us-west-1"
          AWS_REGION: "us-west-1"
This is what I use to start the lambda
sam local start-lambda --host 0.0.0.0 -d 5859 --docker-volume-basedir /folderWithClasses --container-host host.docker.internal --debug --env-vars env.json
This is the env.json:
{
"Parameters": {
"IdentityNameParameter": "admin"
}
}
I guess there is no support for this. If so, what's the point of using SAM to test locally if you still need AWS to actually test?
Any clue?

How can I add an HTTP API stage in serverless?

I am trying to deploy a serverless application to different stages (prod and dev). I want to deploy it to a single API Gateway on two different stages, like:
http://vfdfdf.execute-api.us-west-1.amazonaws.com/dev/
http://vfdfdf.execute-api.us-west-1.amazonaws.com/prod/
I have written this in my serverless config:
provider:
  name: aws
  runtime: nodejs14.x
  region: ${self:custom.${self:custom.stage}.lambdaRegion}
  httpApi:
    id: ${self:custom.${self:custom.stage}.httpAPIID}
  stage: ${opt:stage, 'dev'}
Edited to reflect the comments
That can be done during the serverless deployment phase. I would just keep dev as the default in the serverless.yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-1
  httpApi:
    # Attach to an externally created HTTP API via its ID:
    id: w6axy3bxdj
    # or leave it commented on the very first deployment so serverless creates the HTTP API
custom:
  stage: ${opt:stage, self:provider.stage}
functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /${self:custom.stage}/hello
          method: get
Then, the command:
serverless deploy
deploys to stage dev in region eu-west-1, using the default values.
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/dev/hello
For production, the default values can be overridden on the command line, so I would use:
serverless deploy --stage prod
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/prod/hello
In my understanding, you do not change the region between dev and prod, but in case you want to, the production deployment could be:
serverless deploy --stage prod --region eu-west-2
to deploy to a different region than the default one from the serverless.yml file.

Pointing Two AWS Lambda Functions to Same Domain

I am using the serverless framework and AWS Lambda to deploy two functions with different path names (/message and /subscribe) to my subdomain at form.example.com.
I am using the serverless-domain-manager plugin and successfully configured my domain for the /message function using serverless create_domain, but when I followed the same process for /subscribe I got messages that the domain already existed and hit the error Error: Unable to create basepath mapping..
After flipping a configuration option (createRoute53Record: false) and re-running, it started to work, but now when I run sls deploy for my /message function I get the error message I used to see for /subscribe.
Error (from sls deploy):
layers:
None
Error --------------------------------------------------
Error: Unable to create basepath mapping.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Here is my config for the serverless-domain-manager:
plugins:
  - serverless-offline
  - serverless-domain-manager
custom:
  transactionDomain:
    dev: ${file(./local-keys.yml):transactionDomain}
    prod: ${ssm:mg-production-transaction-domain~true}
  newsletterDomain:
    dev: ${file(./local-keys.yml):newsletterDomain}
    prod: ${ssm:mg-production-newsletter-domain~true}
  apiKey:
    dev: ${file(./local-keys.yml):apiKey}
    prod: ${ssm:mg-production-api-key~true}
  customDomain:
    domainName: form.example.com
    certificateName: 'www.example.com' # sub-domain is included in the certificate
    stage: 'prod'
    createRoute53Record: true
Does this have to do with the deployment of two functions to the same domain? Is there a proper process to allow that to happen?
If you do not need API Gateway-specific features, such as usage plans, you can put the two lambdas behind an ALB with per-path routing, as sketched below.
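A rough sketch of what that could look like with the Serverless Framework's alb event (the listener ARN, priorities, and handler names are placeholders, not taken from the question):
functions:
  message:
    handler: handler.message
    events:
      - alb:
          # ARN of an existing ALB listener (placeholder)
          listenerArn: arn:aws:elasticloadbalancing:eu-west-1:000000000000:listener/app/my-alb/abc123/def456
          priority: 1
          conditions:
            path: /message
  subscribe:
    handler: handler.subscribe
    events:
      - alb:
          listenerArn: arn:aws:elasticloadbalancing:eu-west-1:000000000000:listener/app/my-alb/abc123/def456
          priority: 2
          conditions:
            path: /subscribe
Both functions then sit behind the same load balancer DNS name (or a Route 53 record pointing at it), so no basepath mappings are involved.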

Can't access Google Cloud Datastore from Google Kubernetes Engine cluster

I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:
gcloud compute instances create hello-datastore --project <project> --zone <zone> --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (which are disabled for most services by default). I suppose that's what's causing the issue.
Strangely, I can use CloudSQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and, in the case of Datastore, setting the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
Deployment YAML snippet:
...
containers:
  - image: foo
    name: foo
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /auth/credentials.json
    volumeMounts:
      - name: foo-service-account
        mountPath: "/auth"
        readOnly: true
volumes:
  - name: foo-service-account
    secret:
      secretName: foo-service-account
After struggling for some hours, I was also able to connect to Datastore. Here are my results, most of them taken from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
    - name: app
      image: eu.gcr.io/google_project_id/springapplication:v1
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/credentials.json
      ports:
        - name: http-server
          containerPort: 8080
  volumes:
    - name: google-cloud-key
      secret:
        secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of playing around I figured out that the scratch Docker image might be missing certain tools / environment variables / libraries which the Google Cloud library requires. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to figure out where it is running (maybe from context.Background()?) and automatically uses the default service account which Google creates for you when you create your cluster on GKE.
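If image size matters, a common alternative (a sketch, not part of the original answer) is to keep a scratch final stage and copy in only the CA certificates that the Google client libraries need for TLS, using a multi-stage build:
# Stage used only to fetch CA certificates (the Go binary is still built elsewhere, as before).
FROM alpine:3 AS certs
RUN apk add --no-cache ca-certificates

# Final image stays minimal: just the certificates and the statically linked binary.
FROM scratch
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ADD main /
EXPOSE 80
CMD ["/main"]
This keeps the tiny image while still letting the Datastore client establish TLS connections.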
