When I create a Lambda via the PyCharm AWS Toolkit, it appends a suffix to the name of the Lambda function I create, so the function ends up named something like "hello-world-12MJU0DB7Y99B". While that is OK for one-off functions, it is not something I can easily use to automate a multi-account AWS environment. I need the name of the function to be exactly "hello-world".
Is there any way to specify the exact function name?
The solution was indeed as Milan Cermak suggested: unlike the Serverless Framework, the AWS Toolkit requires the function name to be set explicitly in the template. Once that is done, the name is as expected!
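For reference, a minimal sketch of what that looks like in the SAM template.yaml the Toolkit generates (the handler, runtime, and CodeUri values here are assumptions, not taken from the question):

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: hello-world   # explicit name, no generated suffix
      Handler: app.lambda_handler
      Runtime: python3.9
      CodeUri: hello_world/

When FunctionName is omitted, CloudFormation generates a unique name by appending a random suffix, which is exactly the "hello-world-12MJU0DB7Y99B" behaviour described above.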
I am working on a project in Azure, using Terraform to implement RBAC in the environment. I wrote code that creates Azure AD groups, pulls users from Azure AD, adds them to the groups based on their role, and assigns permissions to those groups. I use many tfvars files in my variables folder, so in order to run or apply Terraform I have to pass the input variables as seen below:
terraform destroy --var-file=variables/service_desk_group_members.tfvars --var-file=variables/network_group_members.tfvars --var-file=variables/security_group_members.tfvars --var-file=variables/EAAdmin_goroup_members.tfvars --var-file=variables/system_group_members.tfvars
I wish to use a script, either bash or Python, to wrap these variables, so that I can just run:
terraform plan
I have not tried this before; it's a requirement I am attempting for the first time. If someone can share a sample script or point me in the right direction, I would appreciate it.
Terraform provides an autoloading mechanism: if you rename each file to something.auto.tfvars, Terraform will load them automatically, in lexical order by filename. You will have to move them out of the variables directory, though, since Terraform only autoloads files in the working directory.
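A minimal sketch of the one-time rename as a bash script, assuming the files currently live under variables/ as in the question:

#!/bin/bash
# Move each *.tfvars into the working directory and rename it so
# Terraform autoloads it (*.auto.tfvars) without any --var-file flags.
for f in variables/*.tfvars; do
  mv "$f" "./$(basename "${f%.tfvars}").auto.tfvars"
done

After that, a plain terraform plan or terraform apply picks all of them up automatically.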
I was learning Terraform and was asked to use it for a CI/CD pipeline on GitLab.
My doubt is this:
Let's say a lambda function is already running/live.
How can I provision it using Terraform?
Should I use a data block to consume the running AWS Lambda?
Or isn't this how it works? I am not sure how we can do this.
I searched the docs, but they don't cover this use case.
So with a Lambda function that is already running, you basically have two use cases:
Either you want to add further changes/updates to that Lambda later on using Terraform. In this case, you need to import it into your Terraform code, and all the changes you make to that Lambda can then be deployed via your CI/CD pipeline, e.g.:
terraform import aws_lambda_function.my_lambda existing_lambda_function_name
Note: my_lambda here is the Terraform resource block that defines the exact Lambda that is already running; the import matches the existing resource against your Terraform code so it can be added to the state.
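As a minimal sketch, the resource block that import command refers to might look like the following; the role, handler, runtime, and filename values are assumptions you must adjust to match the real function:

resource "aws_lambda_function" "my_lambda" {
  function_name = "existing_lambda_function_name"
  role          = aws_iam_role.lambda_role.arn   # hypothetical role resource
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "lambda.zip"
}

After the import, run terraform plan and adjust these attributes until the plan shows no unintended changes.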
Or you just need some attributes of that Lambda as inputs to other resources. Here you can simply keep the Lambda up and running and use a Terraform data source, e.g.:
data "aws_lambda_function" "existing_lambda" {
  function_name = var.function_name
}
And somewhere else in your code you can use it as follows:
function_name = data.aws_lambda_function.existing_lambda.function_name
I hope this was helpful.
I have a CodePipeline set up with GitHub as a source. I am trying, without success, to pass a single secret parameter (in this case a Stripe secret key, currently defined in an .env file; explanation down below) to a specific Lambda during the Deployment stage of CodePipeline's execution.
The Deployment stage in my case is basically a CodeBuild project that runs the deployment.sh script:
#! /bin/bash
npm install -g serverless@1.60.4
serverless deploy --stage $env -v -r eu-central-1
Explanation:
I've tried doing this with serverless-dotenv-plugin, which serves the purpose when the deployment is done locally, but when it's done through CodePipeline, it returns an error on the Lambda's execution, for the following reason:
Since CodePipeline's source is set to GitHub (the .env file is not committed), CodePipeline's execution is triggered whenever a change is committed to the repository. By the time it reaches the deployment stage, all node modules are installed (serverless-dotenv-plugin among them). When serverless deploy --stage $env -v -r eu-central-1 executes, serverless-dotenv-plugin searches for the .env file in which my secret is stored and won't find it, since there is no .env file outside the "local" scope. So when the Lambda requiring this secret triggers, it throws an error because the secret is undefined.
So my question is, is it possible to do it with dotenv/serverless-dotenv-plugin, or should that approach be discarded? Should I maybe use SSM Parameter Store or Secrets Manager? If yes, could someone explain how? :)
So, upon further investigation of this topic I think I have the solution.
SSM Parameter Store vs. Secrets Manager is an entirely different topic, but for my purpose I chose to go with SSM Parameter Store for this problem. Basically it can be done in two ways.
1. Use AWS Parameter Store
Simply add a secret in your AWS Parameter Store console, then reference the value in your serverless.yml as a Lambda environment variable. The Serverless Framework fetches the value from Parameter Store on deploy.
provider:
  environment:
    stripeSecretKey: ${ssm:stripeSecretKey}
Finally, you can reference it in your code just as before:
const stripe = Stripe(process.env.stripeSecretKey);
PROS: This can be used along with a local .env file for both local and remote usage while keeping your Lambda code the same, i.e. process.env.stripeSecretKey
CONS: Since the secrets are decrypted and then set as Lambda environment variables on deploy, anyone who opens your Lambda console can see the secret values in plain text, which is a security concern.
That brings me to the second way of doing this, which I find more secure and which I ultimately chose:
2. Store in AWS Parameter Store, and decrypt at runtime
To avoid exposing the secrets in plain text in your AWS Lambda Console, you can decrypt them at runtime instead. Here is how:
Add the secrets in your AWS Parameter Store Console just as in the above step.
Change your Lambda code to call the Parameter Store directly and decrypt the value at runtime:
const stripePackage = require('stripe');
const aws = require('aws-sdk');
const ssm = new aws.SSM();

exports.handler = async (event) => {
  // fetch and decrypt the secret from Parameter Store at runtime
  const param = await ssm.getParameter(
    {Name: 'stripeSecretKey', WithDecryption: true}
  ).promise();
  const stripe = stripePackage(param.Parameter.Value);
  // ... use stripe here
};
(Small tip: ssm.getParameter(...).promise() returns a promise, so it has to be awaited inside an async function, as shown above; passing the unresolved promise to stripePackage will fail.)
PROS: Your secrets are not exposed in plain text at any point.
CONS: Your Lambda code does get a bit more complicated, and there is some added latency, since it needs to fetch the value from the store. (But considering it's only one parameter and it's free, it's a good trade-off I guess)
In conclusion, I just want to mention that for all of this to work you will need to tweak your Lambda's IAM policy so it can access Systems Manager and the secret stored in Parameter Store; permission errors are easy to spot through CloudWatch.
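A minimal sketch of that permission in serverless.yml, assuming the parameter name and region used in the examples above:

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: arn:aws:ssm:eu-central-1:*:parameter/stripeSecretKey

If the parameter is encrypted with a custom KMS key, the role also needs kms:Decrypt on that key.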
Hopefully this helps someone out, happy coding :)
I am running a command like the following.
serverless invoke local --function twilio_incoming_call
When running locally, I plan to detect this in my code and, instead of looking for POST variables, look for a mock file I'll be giving it.
I don't know how to detect if I'm running serverless with this local command however.
How do you do this?
I looked around on the Serverless website and could find lots of info about running locally, but nothing about detecting whether you are running locally.
I found out the answer. process.env.IS_LOCAL will detect if you are running locally. Missed this on their website somehow...
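A minimal sketch of the check, with the mock file path being a hypothetical example:

exports.handler = async (event) => {
  // `serverless invoke local` sets IS_LOCAL; a deployed Lambda does not
  const payload = process.env.IS_LOCAL
    ? require('./mock-event.json')   // hypothetical mock file
    : JSON.parse(event.body);        // e.g. the POST body from API Gateway
  // ... handle payload
};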
If you're using AWS Lambda, it has some built-in environment variables. In the absence of those variables, you can conclude that your function is running locally.
https://docs.aws.amazon.com/lambda/latest/dg/lambda-environment-variables.html
const isRunningLocally = !process.env.AWS_EXECUTION_ENV
This method works regardless of the framework you use whether you are using serverless, Apex UP, AWS SAM, etc.
You can also check what is in process.argv:
process.argv[1] will equal '/usr/local/bin/sls'
process.argv[2] will equal 'invoke'
process.argv[3] will equal 'local'
I am no longer able to edit my AWS Lambda function using the inline editor because of the error "Your inline editor code size is too large. Maximum size is 51200." Yet I can't find a walk-through that explains how to do these things from localhost:
Upload a python script to Lambda
Supply "event" data to the script
View Lambda output
You'll need to create a deployment package for your code, which is just a zip archive but with a particular format. Instructions are in the AWS Python deployment documentation.
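A minimal sketch of building and uploading such a package from localhost with the AWS CLI; the file and function names are placeholders:

# package the handler (add any dependencies into the archive as well)
zip -r function.zip lambda_function.py
# upload the package to an existing function
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip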
Then you can supply event data to your script via your handler's event parameter; starter information is in the AWS Python programming model documentation.
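To supply a test event and view the output from localhost, something like this works with the AWS CLI (the payload is an arbitrary example, and newer CLI versions may additionally need --cli-binary-format raw-in-base64-out):

# invoke with a JSON event and capture the handler's return value
aws lambda invoke --function-name my-function --payload '{"key": "value"}' response.json
cat response.json   # the return value; print/logging output goes to CloudWatch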
Side note: once your Lambda code starts to get larger, it's often handy to move to some sort of management framework. Several have been written for Lambda. I use Apex, which is written in Go but works for any Lambda code, but you might be more comfortable with Gordon (which has a great list of examples and is more active) or Kappa, both of which are written in Python.