I am using a shell script that zips some custom packages and layers them on Lambda.
After deploying the layer via aws lambda publish-layer-version, the layer version, obviously, goes up. The next command in my .sh script is something like:
aws lambda update-function-configuration --function-name myfunc --layers arn:aws:lambda:<region>:273846758499:layer:<layer_name>:<version>
Since I am new to scripting in general, I am open to any workable solution, but I am looking to set <version> to the most recent version available on Lambda. How can this be written in a shell script?
You can simply parse the response from the first command:
For example, I'm using jq here, which parses JSON in bash.
version=$(aws lambda publish-layer-version --layer-name <your name> --zip-file <zip> --region "us-east-1" | jq -r '.LayerVersionArn')
Then, you can update the function with:
aws lambda update-function-configuration --function-name <name> --layers "$version"
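If you would rather look up the newest version of a layer that has already been published (instead of capturing it at publish time), a minimal sketch could look like this; the layer name, region, and account ID are placeholders you would fill in:
# Ask Lambda for the highest published version of the layer
latest=$(aws lambda list-layer-versions --layer-name <layer_name> --query 'max_by(LayerVersions, &Version).Version' --output text)
aws lambda update-function-configuration --function-name myfunc --layers "arn:aws:lambda:<region>:<account_id>:layer:<layer_name>:${latest}"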
Disclosure: I work for Lumigo, a company that provides serverless monitoring.
I'm trying to invoke an AWS Lambda function from another AWS Lambda function using boto3 in Python.
In order to build the first Lambda I used the Serverless Framework with custom: zip: true.
When I invoke it, I get the message:
{'errorMessage': "Unable to import module 'handler': No module named 'joblib'", 'errorType': 'Runtime.ImportModuleError'}
How can I unzip the first Lambda's requirements?
Thanks for your help
By default, Serverless creates a bucket with a generated name like
<service name>-serverlessdeploymentbuck-1x6jug5lzfnl7 to store your
service's stack state.
Go to S3 and find your deployment bucket. There you can download your latest Lambda archive and unzip it; that is how you can check whether all the required modules were actually uploaded.
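For example, something like this lets you check whether joblib made it into the package (the bucket name and object key below are placeholders for your actual deployment bucket and latest artifact):
# Download the latest deployment artifact and list its contents
aws s3 cp s3://<service-name>-serverlessdeploymentbuck-xxxxxxxx/serverless/<service>/<stage>/<timestamp>/<service>.zip .
unzip -l <service>.zip | grep joblib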
Here is how to add extra dependencies not included by AWS.
https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-dependencies
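In short, the approach from that page looks roughly like this: install the dependency into a local folder and zip it together with your handler. (handler.py and deployment.zip are just example names here.)
# Install the missing dependency into a local folder
pip install --target ./package joblib
# Zip the dependencies, then add the handler at the root of the archive
cd package
zip -r ../deployment.zip .
cd ..
zip deployment.zip handler.py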
I'm developing an application with Micronaut, using SAM CLI to deploy it on AWS Lambda. As I was including dependencies and developing new features, the function packages got bigger and bigger (they are now around 250 MB). This makes deployment take a while.
On top of that, every time I edit template.yaml and then run sam build && sam deploy to try a new configuration on S3, RDS, etc., I have to wait for Gradle to build the function again (even though it's unchanged since the last deployment) and upload the whole package to S3.
As I'm trying to configure this application with many trials and errors on SAM, waiting for this process to complete just to get an error because of some misconfiguration is getting quite counterproductive.
Also, my SAM S3 bucket is at 10 GB after just a single day of work. This may get expensive in the long run.
Is there a way to avoid those Gradle rebuilds and re-uploads when the function code is unchanged?
If you are only updating the template.yml file, you could copy the new version to the ./.aws-sam/build folder and then run sam deploy:
$ cp template.yml ./.aws-sam/build/template.yml
$ sam deploy
If you are editing a Lambda, you could try to update the function code by itself (after you create it in the template and deploy it, of course). That can be done via the AWS CLI update-function-code command:
rm index.zip
cd lambda
zip -X -r ../index.zip *
cd ..
aws lambda update-function-code --function-name MyLambdaFunction --zip-file fileb://index.zip
more info can be found here:
Alexa Blogs - Publishing Your Skill Code to Lambda via the Command Line Interface
AWS CLI Command Reference - lambda - update-function-code
my SAM s3 bucket is at 10GB size
Heh. Yeah, start deleting stuff. Maybe you can write a script using aws s3?
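A minimal sketch, assuming the bucket name below is a placeholder for your actual SAM deployment bucket:
# See what is taking up the space
aws s3 ls s3://<your-sam-deployment-bucket> --recursive --human-readable --summarize
# Delete everything in the bucket (only do this if you no longer need the old deployment packages)
aws s3 rm s3://<your-sam-deployment-bucket> --recursive
An S3 lifecycle rule that expires old objects would do the same cleanup automatically.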
I received the following error in CloudWatch Logs after using AWS CodePipeline (AWS CodeBuild) to deploy my C# Lambda function code:
Could not find the required 'MyAssembly.deps.json'.
This file should be present at the root of the deployment package.: LambdaException
The problem in my case was that the Linux file permissions on the files inside the zip were set to 000, so when AWS Lambda extracted the zip it did not have permission to access the file MyAssembly.deps.json.
I was using C# System.IO.Compression.ZipFile.CreateFromDirectory to author the zip file. I had to shell out to the native zip program to produce a zip file which worked.
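If you want to check the stored permission bits before uploading a package, zipinfo prints the Unix mode for each entry (the archive name below is a placeholder):
# Entries listed with no read bits (e.g. ----------) are the ones Lambda cannot read after extraction
zipinfo MyDeploymentPackage.zip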
Big thanks to https://forums.aws.amazon.com/message.jspa?messageID=856247
I know this is a bit of an old question, but I'm writing an answer for any user who is still facing the problem on a Windows system.
This is with .NET Core 3.1.
First, run this command in the Package Manager Console to ensure the .deps.json is included in the publish output:
dotnet publish /p:GenerateRuntimeConfigurationFiles=true
Then zip all the files in the publish folder, naming the archive after the namespace, and upload the zip file to AWS Lambda using the console.
That worked.
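From a plain command line, the same steps might look like this (a sketch; the output folder, archive name, and function name are placeholders, and a native zip tool is assumed, which also avoids the permission issue described in the answer above):
dotnet publish -c Release /p:GenerateRuntimeConfigurationFiles=true -o ./publish
cd publish
zip -r ../MyAssembly.zip .
cd ..
aws lambda update-function-code --function-name MyCSharpFunction --zip-file fileb://MyAssembly.zip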
If not, then zip all the project files (not the published ones) and upload that to AWS Lambda.
If I want to download all the contents of a directory on S3 to my local PC, which command should I use, cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation:
cp command
sync command
I just used version 2 of the AWS CLI. For the s3 commands, there is also a --dryrun option now to show you what will happen:
aws s3 cp s3://bucket/filename /path/to/dest/folder --recursive --dryrun
In case you need to use another profile, especially for cross-account access, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
and then if you are accessing only a single file
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
In the case you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is to use the AWS CLI.
Here's a walk-through:
Download and install AWS CLI in your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
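The interactive setup prompts you for the keys, default region, and output format:
aws configure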
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
The sync method first lists both the source and destination paths and copies only the differences (by name, size, etc.).
The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.
If you have possible matches in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests, meaning it's cheaper and possibly faster.
Question: Will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the solution to copy/sync while keeping the S3 folder structure? I mean, I have many files in different S3 bucket folders sorted by year, month, and day. I would like to copy them locally with the folder structure kept.