I am using the Serverless Framework with C# to execute queries in Athena. My AWS Lambda function was deleted automatically, and when I try to deploy it again, the deployment fails.
sls deploy --stage dev -- To deploy function
sls remove --stage dev -- To remove function
When I tried to redeploy it, it gave an error (see the attached screenshot).
As suggested in that error output, I followed the link it mentions for more detail, which shows the stack detail; I have attached that as well.
serverless.yml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
# docs.serverless.com
#
# Happy Coding!
service: management-athena
custom:
  defaultStage: dev
  currentStage: ${opt:stage, self:custom.defaultStage} # 'dev' is default unless overridden by the --stage flag

provider:
  name: aws
  runtime: dotnetcore2.1
  stage: ${self:custom.currentStage}
  role: arn:aws:iam::***********:role/service-role/nexus_labmda_schema_partition # must validly reference a role defined in your account
  timeout: 300
  environment: # service-wide environment variables
    DATABASE_NAME: ${file(./config/config.${self:custom.currentStage}.json):DATABASE_NAME}
    TABLE_NAME: ${file(./config/config.${self:custom.currentStage}.json):TABLE_NAME}
    S3_PATH: ${file(./config/config.${self:custom.currentStage}.json):S3_PATH}
    MAX_SITE_TO_BE_PROCESSED: ${file(./config/config.${self:custom.currentStage}.json):MAX_SITE_TO_BE_PROCESSED}

package:
  artifact: bin/release/netcoreapp2.1/deploy-package.zip

functions:
  delete_partition:
    handler: CsharpHandlers::AwsAthena.AthenaHandler::DeletePartition
    description: Lambda function which runs at a specified interval to delete Athena partitions
    # The `events` block defines how to trigger the AthenaHandler.DeletePartition code
    events:
      - schedule:
          rate: cron(0 8 * * ? *) # triggered every day at 3:00 AM EST; the provided time is in UTC, so 3 AM EST is 8 AM UTC
          enabled: true
I found the solution!
Lambda functions can fail to deploy for many reasons. As @ASR mentioned in the comments, it can be a Serverless Framework version issue, but in my case that didn't solve it. Try deleting your function's log group from CloudWatch.
Go to the AWS console -> expand Services -> select CloudWatch -> select Logs -> search for your function's log group, select it, and delete it. For example, if your function name is my_function, the log group name will be something like /aws/lambda/my_function.
Then just redeploy your Lambda function.
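If you prefer the command line, the same cleanup can be done with a single command (a sketch, assuming the function is named my_function and the default /aws/lambda/ log group prefix):
# delete the stale log group left behind by the previous deployment
aws logs delete-log-group --log-group-name /aws/lambda/my_function
# then redeploy
sls deploy --stage dev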
I am posting this in the hope that it helps someone! Please correct me if I am wrong.
Related
We have a kubebuilder controller which is working as expected; now we need to create webhooks.
I followed the tutorial:
https://book.kubebuilder.io/reference/markers/webhook.html
Now I want to run and debug it locally,
however I'm not sure what to do regarding the certificate. Is there a simple way to create it? Any example would be very helpful.
BTW, I've installed cert-manager and applied the following sample YAML, but I'm not sure what to do next.
I need the simplest solution that lets me run and debug the webhooks locally, as I'm already doing with the controller (before using webhooks):
https://book.kubebuilder.io/cronjob-tutorial/running.html
Cert-manager
I've created the following inside my cluster
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: test
spec:
  # Secret names are always required.
  secretName: example-com-tls

  # secretTemplate is optional. If set, these annotations and labels will be
  # copied to the Secret named example-com-tls. These labels and annotations will
  # be re-reconciled if the Certificate's secretTemplate changes. secretTemplate
  # is also enforced, so relevant label and annotation changes on the Secret by a
  # third party will be overwritten by cert-manager to match the secretTemplate.
  secretTemplate:
    annotations:
      my-secret-annotation-1: "foo"
      my-secret-annotation-2: "bar"
    labels:
      my-secret-label: foo

  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - jetstack
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: example.com
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  # At least one of a DNS Name, URI, or IP address is required.
  dnsNames:
    - example.com
    - www.example.com
  uris:
    - spiffe://cluster.local/ns/sandbox/sa/example
  ipAddresses:
    - 192.168.0.5
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer).
    kind: Issuer
    # This is optional since cert-manager will default to this value; however,
    # if you are using an external issuer, change this to that issuer's group.
    group: cert-manager.io
I'm still not sure how to wire this up with kubebuilder so it works locally,
because when I run the operator in debug mode I get the following error:
setup problem running manager {"error": "open /var/folders/vh/_418c55133sgjrwr7n0d7bl40000gn/T/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}
What I need is the simplest way to run the webhooks locally.
Let me walk you through the process from the start.
Create the webhook as described in the CronJob tutorial: kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation. This will create webhooks for implementing defaulting and validation logic.
Implement the logic as instructed in "Implementing defaulting/validating webhooks".
Install cert-manager. I find the easiest way to install it is via this command: kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
Edit the config/default/kustomization.yaml file by uncommenting everything that has [WEBHOOK] or [CERTMANAGER] in its comments. Do the same for the config/crd/kustomization.yaml file.
Build your image locally using make docker-build IMG=<some-registry>/<project-name>:tag. You don't need to docker-push your image to a remote repository. If you are using a kind cluster, you can load your local image directly into it:
kind load docker-image <your-image-name>:tag --name <your-kind-cluster-name>
Now you can deploy it to your cluster with make deploy IMG=<some-registry>/<project-name>:tag.
You can also run the operator locally using the make run command, but that's a little tricky if you have webhooks enabled. I would suggest running it on a kind cluster as described above; that way you don't need to worry about injecting certificates, cert-manager will do that for you. You can check out the config/certmanager folder to figure out how this works.
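Once everything is deployed, a few quick checks can confirm the pieces are wired up (a sketch; namespaces and resource names will vary by project):
# cert-manager pods should be running
kubectl get pods -n cert-manager
# the webhook configurations created by make deploy
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
# the serving certificate issued by cert-manager for the webhook
kubectl get certificate -A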
I'm new to serverless,
So far I have been able to deploy and use a .env file for the app.
Then, under provider in the serverless.yml file, I changed the stage property to a different stage. I also made a new .env.{stage} file.
After redeploying using sls deploy, it still reads the default .env file.
The documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So, I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?
The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it in your deployment command.
This will use the .env.dev file:
useDotenv: true

provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage via the deploy command.
This will use the .env.prod file:
sls deploy --stage prod
In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod
As of Feb 2023, I'm going to attempt to give my solution. I'm using Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
I see the purpose of this as enhancing the developer experience: it is nice to just run nx run users:serve --stage=test (in my case, using Nx) or sls offline --stage=test and have Serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried to go this route, but because I'm not that good of a developer I couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file instead, but changing variable names based on the --stage. The magic happens in serverless.ts.
# .env
STAGE_development=test
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
READER_development=localhost # this could be an aws rds uri per db instance
WRITER_development=localhost # this could be an aws rds uri per db instance

# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
READER_test=localhost # this could be an aws rds uri per db instance
WRITER_test=localhost # this could be an aws rds uri per db instance
// serverless.base.ts or serverless.ts based on your configuration
...
useDotenv: true, // this property is at the root level
...
provider: {
  ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  ...,
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  },
  ...
}
When you use useDotenv: true, Serverless loads your variables from the .env file and exposes them via the env variable source, so you can access them as env:STAGE.
Now I can access the variable for the current stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable has _<stage> at the end, so I can retrieve each value dynamically.
I'm still figuring one thing out: I don't want the word production in my URL but still want to get the values dynamically, and since I'm concatenating this value, ${env:DB_PORT_${self:provider.stage}}, the actual variable becomes DB_PORT_ instead of DB_PORT.
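To double-check what a given stage actually resolves to, printing the fully resolved configuration is a quick sanity check (a sketch, using the framework's print command):
# print the resolved serverless configuration for the test stage
sls print --stage test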
I would like to use Secret Manager to store a credential for our Artifactory within a Cloud Build step. I have it working using a build similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems. I then try to slightly improve it to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, since it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID for a Secret Manager secret from within Cloud Build without using a trigger parameter?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 22) Thanks to Seza443's comment, I tested again, and it now works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER) as well as with user-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
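If you want to see the actual number that will be substituted, you can look it up with gcloud (a sketch; the project ID is a placeholder):
# print the project number for a given project ID
gcloud projects describe my-project-id --format='value(projectNumber)'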
The Lambda function's source is over 4096 characters, so I can't deploy the Lambda function as inline code in a CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and key details in CloudFormation, and deploy it.
I can't find a way to deploy with one command; if I update the Lambda code, I have to repeat the above steps.
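The manual cycle looks roughly like this (bucket, key, and stack names are placeholders):
zip function.zip lambda_function.py
aws s3 cp function.zip s3://my-deploy-bucket/lambda/function.zip
# then point Code.S3Bucket / Code.S3Key in the template at that object and update the stack
aws cloudformation deploy --template-file template.yaml --stack-name my-stack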
But both AWS SAM and the Serverless Framework can deploy Lambda functions without inline code.
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
Any solution or recommendations for me?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads it to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to set up an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
    --template-file input.yaml \
    --s3-bucket my-packaging-bucket \
    --s3-prefix my-project/ \
    --output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
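For instance, a small wrapper script along these lines (bucket and stack names are placeholders) packages and deploys in one go:
#!/bin/sh
set -e
# package local source, upload it to S3, and emit a rewritten template
aws cloudformation package \
    --template-file input.yaml \
    --s3-bucket my-packaging-bucket \
    --s3-prefix my-project/ \
    --output-template-file output.yaml
# deploy the packaged template
aws cloudformation deploy \
    --template-file output.yaml \
    --stack-name my-lambda-stack \
    --capabilities CAPABILITY_IAM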
You can zip the file first, then use the AWS CLI to update your Lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
Within CloudFormation (last 3 lines):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
The only issue is, aws SAM or serverless framework create API gateway as default, that I don't need it to be created
For the Serverless Framework, that's not true by default. The default generated serverless.yml file includes config for the Lambda function itself, but the configuration for API Gateway is provided only as an example in the following commented-out section.
If you uncomment the 'events' section for http then it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get
When I deploy my serverless API using:
serverless deploy
the Lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}

functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'

package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**

plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding

custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes

layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error while using the explicit layers keys that you are using to define a Lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I feel this is a temporary solution, because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under a folder named nodejs, so it looks like this when unzipped: nodejs/node_modules/<various packages>.
Then instead of using the explicit definition of layers I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function, the layer is referred to like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
TestLambdaLayer follows the convention <your layer name>LambdaLayer, as documented here.
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
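For reference, a rough sketch of building the layer artifact so it unzips to nodejs/node_modules/... (the folder and zip names are placeholders matching the examples above):
# install the layer's dependencies under a top-level nodejs/ folder
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install && cd ..
# zip the nodejs/ folder so the archive contains nodejs/node_modules/...
zip -r nodejs.zip nodejs
# sanity-check the archive layout
unzip -l nodejs.zip | head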
If anyone faces a similar Runtime.ImportModuleError, it is fair to say that another cause of this issue could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
it can cause exactly the same error at runtime once you've deployed your Lambda function (with the Serverless Framework). Just make sure to remove the patterns that could create a conflict with your dependencies.
I am using TypeScript with serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems the files were not included in the zip file by Serverless when I was using require.
PS: Removing serverless-plugin-typescript and switching back to JavaScript also solved the problem.