Pointing Two AWS Lambda Functions to Same Domain - aws-lambda

I am using the Serverless Framework and AWS Lambda to deploy two functions with different path names (/message and /subscribe) to my subdomain at form.example.com.
I am using the serverless-domain-manager plugin for serverless and successfully configured my domain for the /message function using serverless create_domain. Since I also needed to do that for /subscribe, I tried to follow the same process, but I got messages that the domain already existed and then hit the error Error: Unable to create basepath mapping..
After flipping a configuration value (createRoute53Record: false) and re-running, it started to work, but now when I run sls deploy for my /message function I get the error message I used to see for /subscribe.
Error (from sls deploy):
layers:
None
Error --------------------------------------------------
Error: Unable to create basepath mapping.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Here is my config for the serverless-domain-manager:
plugins:
  - serverless-offline
  - serverless-domain-manager

custom:
  transactionDomain:
    dev: ${file(./local-keys.yml):transactionDomain}
    prod: ${ssm:mg-production-transaction-domain~true}
  newsletterDomain:
    dev: ${file(./local-keys.yml):newsletterDomain}
    prod: ${ssm:mg-production-newsletter-domain~true}
  apiKey:
    dev: ${file(./local-keys.yml):apiKey}
    prod: ${ssm:mg-production-api-key~true}
  customDomain:
    domainName: form.example.com
    certificateName: 'www.example.com' # the sub-domain is included in the certificate
    stage: 'prod'
    createRoute53Record: true
Does this have to do with the deployment of two functions to the same domain? Is there a proper process to allow that to happen?

If you do not need API Gateway-specific features such as usage plans, you can put the two Lambdas behind an ALB with path-based routing.
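A rough sketch of what that path-based routing could look like in serverless.yml, assuming an already existing ALB listener (the listener ARN, handler names and priorities below are placeholders, not taken from the question):

functions:
  message:
    handler: handler.message
    events:
      - alb:
          # placeholder ARN of an existing ALB listener
          listenerArn: arn:aws:elasticloadbalancing:us-east-1:ACCOUNTID:listener/app/my-alb/LB_ID/LISTENER_ID
          priority: 1
          conditions:
            path: /message
  subscribe:
    handler: handler.subscribe
    events:
      - alb:
          listenerArn: arn:aws:elasticloadbalancing:us-east-1:ACCOUNTID:listener/app/my-alb/LB_ID/LISTENER_ID
          priority: 2
          conditions:
            path: /subscribe

You would then point form.example.com at the ALB (for example via a Route 53 alias record) instead of at an API Gateway custom domain.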

Related

how to connect a serverless client with a given AWS Lambda function

I have a question about how to connect a serverless client with a given AWS Lambda function.
I'm building a system that provides the developers with a cloud-based dev environment.
It provides a serverless dev environment built atop AWS Lambda and DynamoDB.
Some developers have asked me how to use the Serverless Framework in this environment.
Because of the company's security policy, I can't grant admin authority to the developers, so they find it difficult to run the sls deploy command, which requires CRUD permissions on the IAM service.
I've tried connecting the serverless client to the AWS Lambda functions provided by my system without executing the deploy command, but all attempts failed.
It requires me to execute the sls deploy command before the deploy function command.
Is there any way to connect a serverless client with a given AWS Lambda function?
If there is a best practice for granting minimal permissions, please give me a suggestion.
Thank you in advance.
First of all, you will need to set up separate IAM groups with dedicated policies granting rights to your users. Here's a CloudFormation template, for instance (it references the default OrganizationAccountAccessRole, but you might/should create your own role with minimal access):
Resources:
  # ADMIN QA
  AssumeAdministratorQARolePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: "AdminQA"
      Description: "Assume the qa administrative role"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AssumeAdministratorQARolePolicy"
            Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Resource: "arn:aws:iam::ACCOUNTID:role/OrganizationAccountAccessRole"
  AdminQAGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: AdminQA
      ManagedPolicyArns:
        - !Ref AssumeAdministratorQARolePolicy
  AdminQAUsersToGroup:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref AdminQAGroup
      Users:
        - MYUSER
Then MYUSER can use this role through their ~/.aws/credentials, like:
[default]
aws_access_key_id = KEY
aws_secret_access_key = SECRET
[qa]
role_arn = arn:aws:iam::ACCOUNTID:role/OrganizationAccountAccessRole
source_profile = default
Once again, you might update OrganizationAccountAccessRole to your own role.
Finally, during deployment, you can use this profile with:
serverless deploy --stage qa --aws-profile qa
I recommend setting this command directly in package.json as an npm script.
Hope this helps and clarifies how you should grant rights and access through the whole deployment process.
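If you do create your own role instead of reusing OrganizationAccountAccessRole, a rough sketch of a more scoped deployment role, added under the same Resources: block as above, could look like this (the action list is illustrative, not a vetted minimal set for serverless deploy; ACCOUNTID is a placeholder):

  DeployQARole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: DeployQA # illustrative name
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              AWS: "arn:aws:iam::ACCOUNTID:root"
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: "ServerlessDeploy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "cloudformation:*"
                  - "s3:*"
                  - "lambda:*"
                  - "apigateway:*"
                  - "logs:*"
                  - "iam:GetRole"
                  - "iam:CreateRole"
                  - "iam:DeleteRole"
                  - "iam:PutRolePolicy"
                  - "iam:DeleteRolePolicy"
                  - "iam:PassRole"
                Resource: "*" # scope this down further where possible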

k3s gitlab ci-cd issue with default service account

I am trying to deploy a simple Angular app on k3s. I have installed the GitLab Runner, and the GitLab service account has a Role set up as cluster-admin, so it is supposed to be able to run everything, but I can't get it to deploy:
rules:
  - apiGroups:
      - ""
    resources:
      - '*'
    verbs:
      - '*'
I also tried specifically adding the verb 'apps' - no change in behavior
from server for: "deployment.yaml": deployments.apps "gitlab-master" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "deployments" in API group "apps" in the namespace "gitlab-managed-apps"
So far the only solution I have found is to use the gitlab-admin SA, which has the necessary privileges...
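For what it's worth, the error above names the apps API group, which the rules shown earlier (apiGroups: "") do not cover. A rough sketch of a ClusterRole that includes it, bound to the default service account in gitlab-managed-apps, might look like this (names are illustrative and the rules are still very broad):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gitlab-deploy # illustrative name
rules:
  - apiGroups:
      - ""
      - apps
    resources:
      - '*'
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-deploy-default # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gitlab-deploy
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab-managed-apps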

Multiple microservices inside same AWS API Gateway

I'm developing a series of microservices which need to share the same AWS API Gateway. Here's my structure:
/
/assessments
/skills
/work-values
/graphql
/skills, /work-values, and /graphql are 3 different microservices I'm trying to register with the same AWS API Gateway. The problem I'm having is getting the serverless.yaml files for /skills, /work-values routes to nest under 'assessments'. There is no functionality for /assessments in-and-of-itself. It exists just so we can organize all of our assessments under the same URL path structure.
Here's my serverless.yaml file for /work-values:
service:
  name: assessments-workvalues
...
custom:
  stage: ${opt:stage, self:provider.stage}
provider:
  ...
  apiGateway:
    restApiId:
      # THE FOLLOWING REFERENCES A VARIABLE FROM MY API GATEWAY ROOT
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': # HOW DO I GET THE PROPER VALUE HERE TO MAP TO `/assessments`?
  ...
functions:
  ...
Here's my serverless.yaml file for /assessments:
service:
  name: assessments
custom:
  stage: ${opt:stage, self:provider.stage}
provider:
  ...
  apiGateway:
    restApiId:
      # THE FOLLOWING REFERENCES A VARIABLE FROM MY API GATEWAY ROOT
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiRootResourceId
functions:
  ...
resources:
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ${self:custom.stage}-Assessments-ApiGatewayRestApiId
    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: ${self:custom.stage}-Assessments-ApiGatewayRestApiRootResourceId
The problem seems to be how the Outputs are coded in the serverless.yaml file for the assessments route. When I run serverless deploy, I get this error message:
Error: The CloudFormation template is invalid: Unresolved resource dependencies [ApiGatewayRestApi] in the Outputs block of the template
At the end of the Share an API Endpoint Between Services article, the author mentions: 'You HAVE TO import /billing from the billing-api, so the new service will only need to create the /billing/xyz part' (which seems to be the situation I'm in). But the author does not explain how to import /billing, or, in my case, how to import /assessments into the serverless.yaml files for each assessment microservice.
After further research, I found this link:
Splitting Your Serverless Framework API on AWS
I ended up reworking my original approach following what's in the article above. The piece I was missing was a root or base serverless file that creates the routing in AWS API Gateway and exposes those path resources as outputs, which the subsequent child serverless files consume as inputs to wire their lambda functions to routes under the shared API Gateway.
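For illustration, a rough sketch of such a root/base serverless file, assuming the REST API and the /assessments resource are created there and exported for the child services (service, resource and export names are placeholders, not taken from the article):

service: api-gateway-root # illustrative name

provider:
  name: aws

resources:
  Resources:
    SharedApiGateway:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: ${opt:stage, 'dev'}-shared-gateway
    AssessmentsResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId:
          Ref: SharedApiGateway
        ParentId:
          Fn::GetAtt:
            - SharedApiGateway
            - RootResourceId
        PathPart: assessments
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: SharedApiGateway
      Export:
        Name: ${opt:stage, 'dev'}-ApiGatewayRestApiId
    AssessmentsRootResourceId:
      Value:
        Ref: AssessmentsResource
      Export:
        Name: ${opt:stage, 'dev'}-AssessmentsRootResourceId

Each child service (e.g. /work-values) would then import the ${stage}-ApiGatewayRestApiId export as restApiId and the ${stage}-AssessmentsRootResourceId export as restApiRootResourceId, so its functions nest under /assessments.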

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}

custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}

plugins:
  - serverless-offline
  - serverless-stage-manager
...
TLDR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about profile default not existing. This is only an issue when using the SSM Parameter Store in the serverless file.
The profile attribute in your serverless.yml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of 2 possible solutions to this:
1. Remove profile from your serverless.yml completely and use environment variables only (see the pipeline sketch after this answer).
2. Leave profile: default in your serverless.yml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
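For option 1, a rough sketch of what that could look like in bitbucket-pipelines.yml, mapping the repository variables from the question onto the standard AWS SDK environment variable names (the image, branch and stage are illustrative):

image: node:12

pipelines:
  branches:
    master:
      - step:
          name: Deploy
          script:
            # map the pipeline variables onto the names the AWS SDK / serverless pick up
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
            - export AWS_SECRET_ACCESS_KEY=$AWS_ACCESS_SECRET
            - npm ci
            - npx serverless deploy --stage staging

With the credentials supplied through the environment, the serverless config credentials step (and the profile line) should no longer be needed.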

Can't access Google Cloud Datastore from Google Kubernetes Engine cluster

I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be configured with a couple of extra scopes. These can't be added to existing GCE instances, but you can create a new one with the following Cloud SDK command:
gcloud compute instances create hello-datastore --project --zone --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (which are disabled for most services by default). I suppose that's what's causing the issue.
Strangely, I can use CloudSQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and in the case of Datastore, I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
Deployment YAML snippet:
...
containers:
  - image: foo
    name: foo
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /auth/credentials.json
    volumeMounts:
      - name: foo-service-account
        mountPath: "/auth"
        readOnly: true
volumes:
  - name: foo-service-account
    secret:
      secretName: foo-service-account
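The Deployment above expects a Secret named foo-service-account that contains the downloaded key under the credentials.json key; a sketch of that Secret manifest (with the key contents elided) could look like:

apiVersion: v1
kind: Secret
metadata:
  name: foo-service-account
type: Opaque
stringData:
  # paste the contents of the downloaded service account JSON key here
  credentials.json: |
    { ... }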
After struggling for some hours, I was also able to connect to the Datastore. Here are my results, most of it from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit the deployment file
deployment.yaml:
...
spec:
  containers:
    - name: app
      image: eu.gcr.io/google_project_id/springapplication:v1
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/credentials.json
      ports:
        - name: http-server
          containerPort: 8080
  volumes:
    - name: google-cloud-key
      secret:
        secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of playing around I figured out that the scratch Docker image seems to be missing certain things (such as CA certificates) that the Google Cloud library requires. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to automatically detect where it is running (maybe from context.Background()?) and uses the default service account which Google creates for you when you create your cluster on GKE.
