An error occurred: ArtisanLambdaFunction - Unzipped size must be smaller than 220606645 bytes (Service: Lambda, Status Code: 400) - laravel

I was trying to deploy my Laravel application to AWS Lambda using Bref. I tried excluding almost all of the images and videos, but I still get:

```
Serverless Error ----------------------------------------
An error occurred: ArtisanLambdaFunction - Resource handler returned message: "Unzipped size must be smaller than 235311048 bytes (Service: Lambda, Status Code: 400 ...
```
My serverless.yml file is:

```yaml
service: my-laravel-application

provider:
    name: aws
    # The AWS region in which to deploy (us-east-1 is the default)
    region: eu-west-1
    # The stage of the application, e.g. dev, production, staging… ('dev' is the default)
    stage: dev
    runtime: provided.al2

package:
    individually: true
    # Directories to exclude from deployment
    exclude:
        - node_modules/**
        - public/storage/**
        - resources/assets/**
        - storage/**
        - tests/**
        - public/images/**
        - public/uploads/**
        - public/videos/**

functions:
    # This function runs the Laravel website/API
    web:
        handler: public/index.php
        timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
        layers:
            - ${bref:layer.php-74-fpm}
        events:
            - httpApi: '*'
    # This function lets us run artisan commands in Lambda
    artisan:
        handler: artisan
        timeout: 120 # in seconds
        layers:
            - ${bref:layer.php-74} # PHP
            - ${bref:layer.console} # The "console" layer

plugins:
    # We need to include the Bref plugin
    - ./vendor/bref/bref
```
I have tried excluding almost all of the assets from the zip, but I still get the same error. The zipped size of my application is only 119.4 MB.

I was able to fix this issue by adding all of the public assets to the exclude list, which reduced the zip file to 43 MB; once extracted, the total unzipped size stays under the 220606645-byte limit. Then:

- upload all of the files and folders under public/ to S3
- add ASSET_URL to the .env file
- use the asset() helper wherever the application needs to reference those files from S3
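As a rough sketch of those three steps (the bucket name, region, and asset path below are placeholders, not values from the original setup):

```shell
# 1. Upload the contents of public/ to an S3 bucket (bucket name is hypothetical)
aws s3 sync public/ s3://my-laravel-assets/

# 2. Point Laravel's asset() helper at the bucket, in .env:
#    ASSET_URL=https://my-laravel-assets.s3.eu-west-1.amazonaws.com

# 3. Reference the files through asset() in Blade templates:
#    <img src="{{ asset('images/logo.png') }}">
```

With ASSET_URL set, asset('images/logo.png') generates a URL under the bucket instead of a path inside the Lambda package, so none of the public assets need to ship in the zip.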

Related

configure serverless.yml laravel bref

I tried to deploy a Laravel serverless app with Bref to AWS API Gateway and a Lambda function, but I keep getting a (strict-origin-when-cross-origin) response with a blank screen showing:
**{"message":"Internal Server Error"}**
And this is my serverless.yml file:

```yaml
service: laravelproject

provider:
    name: aws
    # The AWS region in which to deploy (us-east-1 is the default)
    region: eu-central-1
    # The stage of the application, e.g. dev, production, staging… ('dev' is the default)
    stage: dev
    runtime: provided.al2

package:
    # Directories to exclude from deployment
    patterns:
        - '!node_modules/'
        - '!public/storage'
        - '!resources/assets/'
        - '!storage/'
        - '!tests/'

functions:
    # This function runs the Laravel website/API
    web:
        handler: public/index.php
        timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
        layers:
            - ${bref:layer.php-80-fpm}
        events:
            - httpApi: '*'
    # This function lets us run artisan commands in Lambda
    artisan:
        handler: artisan
        timeout: 120 # in seconds
        layers:
            - ${bref:layer.php-80} # PHP
            - ${bref:layer.console} # The "console" layer

plugins:
    # We need to include the Bref plugin
    - ./vendor/bref/bref
```
Try increasing the PHP version you are specifying: change ${bref:layer.php-80} to ${bref:layer.php-81}.
I got it working with this.
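Applied to the file above, that means bumping the layer references in both functions, assuming the installed Bref version ships PHP 8.1 layers:

```yaml
functions:
    web:
        layers:
            - ${bref:layer.php-81-fpm}
    artisan:
        layers:
            - ${bref:layer.php-81} # PHP
            - ${bref:layer.console} # The "console" layer
```

Newer Bref releases drop layers for PHP versions that have reached end of life, so a layer variable that resolves for one Bref version may not resolve after an update.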

Serverless framework - Deploying a function with a different runtime

I'm using the Serverless framework to deploy multiple Lambda functions, all of which run on Node.js. Now I need to create a new Lambda function that runs on Java 11, and I want its configuration to live in the same YAML file as my other Lambda functions. My JAR file is uploaded to an S3 bucket, and I reference that bucket in my Serverless config so the package is fetched from there and deployed to the function. However, it seems that the wrong package is being deployed: the deployed file is larger than the actual size of my JAR file, and when I run the Lambda function it fails because it cannot find the handler. I verified this by manually uploading the JAR file to my Java Lambda function, which worked.
Below is the relevant snippet of my YAML file:

```yaml
---
service: api

provider:
  name: aws
  stackName: ${self:custom.prefix}-${opt:stage}-${self:service.name}
  runtime: nodejs14.x
  stage: ${opt:custom_stage, opt:stage}
  tracing:
    lambda: Active
  timeout: 30
  logRetentionInDays: 180
  environment:
    STAGE: ${opt:stage, opt:stage}
    ENVIRONMENT: ${self:provider.stage}
    SUPPRESS_NO_CONFIG_WARNING: true
    ALLOW_CONFIG_MUTATIONS: true

functions:
  sample-function-1:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-sample-function-1
    name: ${self:custom.prefix}-${opt:stage}-sample-function-1
    handler: authorizers/handler.authHandler1
  sample-function-2:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-sample-function-2
    name: ${self:custom.prefix}-${opt:stage}-sample-function-2
    handler: authorizers/handler.authHandler1
  myJavaFunction:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-myJavaFunction-role
    name: ${self:custom.prefix}-${opt:stage}-myJavaFunction
    runtime: java11
    package:
      artifact: s3://myBucket/myJarFile.jar
    handler: com.myFunction.LambdaFunctionHandler
    memorySize: 512
    timeout: 900
```
How can I deploy the correct package to my Lambda function by fetching the JAR file from the S3 bucket?

How to get rid of serverless "Warning: Invalid configuration encountered at root: unrecognized property 'deploymentBucket'"

I've got a web application running on the serverless framework version 3.7.5. Every time I deploy my lambda function I get this warning:
"Warning: Invalid configuration encountered at root: unrecognised property 'deploymentBucket'".
I have attached the "serverless.yml" file below for external scrutiny. Is my configuration of the "deploymentBucket" property not valid? Do I need to change or edit any of the properties?
Note: Deployment works fine as it's simply a warning and I am able to proceed to testing my api endpoints... I just find this warning a tad bothersome and would like to erase it once and for all. Thanks in advance!
Here's my serverless.yml file:

```yaml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: poppy-seed
# app and org for use with dashboard.serverless.com
#app: your-app-name
#org: your-org-name

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
frameworkVersion: '3.7.5'

provider:
  name: aws
  runtime: java11
  timeout: 30
  lambdaHashingVersion: 20201221
  # you can overwrite defaults here
  # stage: dev
  # region: us-east-1
  variable1: value1

# you can add packaging information here
package:
  artifact: build/libs/poppy-seed-dev-all.jar

functions:
  poppy-seed:
    handler: com.serverless.lambda.Handler
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: "{proxy+}"
          method: ANY
          cors: true

deploymentBucket:
  blockPublicAccess: true # Prevents public access via ACLs or bucket policies. Default is false
  skipPolicySetup: false # Prevents creation of default bucket policy when framework creates the deployment bucket. Default is false
  name: # Deployment bucket name. Default is generated by the framework
  maxPreviousDeploymentArtifacts: 5 # On every deployment the framework prunes the bucket to remove artifacts older than this limit. The default is 5
  versioning: false # enable bucket versioning. Default is false
deploymentPrefix: serverless # The S3 prefix under which deployed artifacts should be stored. Default is serverless
disableDefaultOutputExportNames: false # optional, if set to 'true', disables default behavior of generating export names for CloudFormation outputs
lambdaHashingVersion: 20201221 # optional, version of hashing algorithm that should be used by the framework

plugins:
  - serverless-sam

# Resources:
#   NewResource:
#     Type: AWS::S3::Bucket
#     Properties:
#       BucketName: my-new-bucket
# Outputs:
#   NewOutput:
#     Description: "Description for the output"
#     Value: "Some output value"
```
The warning means that the deploymentBucket property is not recognized and as such it is not doing what you think it should be doing.
According to serverless docs, deploymentBucket should be a property under provider not a root property.
I was able to get rid of this warning by moving the deploymentBucket property under provider instead of registering it as a root property. The modified serverless.yml file is attached below:
```yaml
service: poppy-seed

provider:
  name: aws
  runtime: java11
  timeout: 30
  lambdaHashingVersion: 20201221
  deploymentBucket:
    blockPublicAccess: true
    skipPolicySetup: false
    name: poppy-seed
    maxPreviousDeploymentArtifacts: 5
    versioning: false # enable bucket versioning. Default is false

package:
  artifact: build/libs/poppy-seed-dev-all.jar

functions:
  poppy-seed:
    handler: com.serverless.lambda.Handler
    events:
      - http:
          path: "{proxy+}"
          method: ANY
          cors: true

plugins:
  - serverless-sam
```
Also, read the Serverless documentation for more clarity. Thanks again to @NoelLlevares for the tip.
Also try updating to the latest version of Serverless; in my case, some keys were unrecognized in the old version.
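Assuming the framework was installed globally via npm, the upgrade is typically:

```shell
npm install -g serverless@latest
serverless --version   # confirm the new version is picked up
```

Properties such as deploymentBucket options were added to the schema over time, so a CLI that is older than the property it is asked to validate will report it as unrecognized.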

Serverless Laravel deploy issue

I have created my Laravel application and tried to deploy it on AWS Lambda using Bref and the Serverless framework, but the website link gives me a blank page. When I went back to CloudWatch to check the logs, this is what I found:
And here is my serverless.yml:

```yaml
service: app

provider:
    name: aws
    region: us-east-1
    runtime: provided

plugins:
    - ./vendor/bref/bref

package:
    exclude:
        - node_modules/**
        - public/storage
        - resources/assets/**
        - storage/**
        - tests/**

functions:
    website:
        handler: public/index.php
        timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
        layers:
            - ${bref:layer.php-73-fpm}
        events:
            - http: 'ANY /'
            - http: 'ANY /{proxy+}'
    artisan:
        handler: artisan
        timeout: 120 # in seconds
        layers:
            - ${bref:layer.php-73} # PHP
            - ${bref:layer.console} # The "console" layer
```
I have no idea where to look; can somebody help me, please?

Pointing Two AWS Lambda Functions to Same Domain

I am using the Serverless framework and AWS Lambda to deploy two functions with different paths (/message and /subscribe) to my subdomain at form.example.com.
I am using the serverless-domain-manager plugin for Serverless and successfully configured my domain for the /message function using serverless create_domain. But when I followed the same process for /subscribe, I received messages that the domain already existed and then hit the error Error: Unable to create basepath mapping..
After flipping a configuration flag (createRoute53Record: false) and re-deploying, /subscribe started to work, but now when I run sls deploy for my /message function I get the error message I used to see for /subscribe.
Error (from sls deploy):

```
layers:
  None

Error --------------------------------------------------

Error: Unable to create basepath mapping.

For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
```
Here is my config for serverless-domain-manager:

```yaml
plugins:
  - serverless-offline
  - serverless-domain-manager

custom:
  transactionDomain:
    dev: ${file(./local-keys.yml):transactionDomain}
    prod: ${ssm:mg-production-transaction-domain~true}
  newsletterDomain:
    dev: ${file(./local-keys.yml):newsletterDomain}
    prod: ${ssm:mg-production-newsletter-domain~true}
  apiKey:
    dev: ${file(./local-keys.yml):apiKey}
    prod: ${ssm:mg-production-api-key~true}
  customDomain:
    domainName: form.example.com
    certificateName: 'www.example.com' # the sub-domain is included in the certificate
    stage: 'prod'
    createRoute53Record: true
```
Does this have to do with the deployment of two functions to the same domain? Is there a proper process to allow that to happen?
If you do not need API Gateway-specific features, such as usage plans, you can put the two Lambdas behind an ALB with per-path routing.
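A minimal sketch of that approach using the Serverless Framework's alb event (the listener ARN and handler names below are placeholders for an existing Application Load Balancer, not values from the original setup):

```yaml
functions:
  message:
    handler: handler.message
    events:
      - alb:
          # hypothetical ARN of an existing ALB listener
          listenerArn: arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2
          priority: 1
          conditions:
            path: /message
  subscribe:
    handler: handler.subscribe
    events:
      - alb:
          listenerArn: arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2
          priority: 2
          conditions:
            path: /subscribe
```

Each alb event becomes a listener rule, so both functions can sit behind the same domain, with one DNS record pointing at the load balancer rather than per-function basepath mappings.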
