Upload lambda function.zip with small size - aws-lambda

I have two aws-lambda projects.
The first one uses serverless-bundle (serverless-bundle on GitHub).
When I deploy the first project, I see the following logs:
(...)
Serverless: Uploading service hello.zip file to S3 (34.56 KB)...
Serverless: Uploading service bye.zip file to S3 (12.34 KB)...
(...)
Each function's .zip is small, and the sizes differ.
The second project uses serverless-plugin-typescript. When I deploy it, I see:
(...)
Serverless: Uploading service hello.zip file to S3 (22.83 MB)...
Serverless: Uploading service bye.zip file to S3 (22.83 MB)...
(...)
Each function's .zip has the same size, and it is much bigger than in the first project.
I am going to use TypeScript, so I can't use serverless-bundle because it doesn't support TS yet.
So my question is: how can I reduce the function .zip sizes to something like what serverless-bundle produces?

The Serverless Framework now has native support for TypeScript via the aws-nodejs-typescript template.
For new projects you can create one using serverless create --template aws-nodejs-typescript && npm install.
For existing projects, you just need to include the serverless-webpack plugin.
You can use serverless-webpack like this:
service:
  name: my-functions

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
In your case the zip files end up with different sizes because the first approach, serverless-bundle, is an extension of serverless-webpack, which can bundle each function individually so that every artifact contains only what that function needs.
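If you want a similar result with serverless-webpack directly (it handles TypeScript through your webpack config, e.g. ts-loader), a minimal sketch could look like this; package.individually and custom.webpack.webpackConfig are standard options, while the config path is an assumption you would adapt to your project:

service:
  name: my-functions

plugins:
  - serverless-webpack

package:
  individually: true        # package each function separately, keeping the zips small

custom:
  webpack:
    webpackConfig: ./webpack.config.js   # assumed path; point it at your TypeScript-aware webpack config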

Related

Error generating documentation for my component

I have created a Backstage scaffolding template to create a Spring Boot REST service deployed to AWS EKS.
When a component is created from it in Backstage, the component builds using GitHub Actions, is deployed to AWS EKS, and is registered in Backstage.
However, clicking on Docs for the component fails with the following error:
info: Step 1 of 3: Preparing docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:54.963Z"}
info: Prepare step completed for entity component:default/stephendemo16, stored at /tmp/backstage-EjxBxi {"timestamp":"2022-04-28T22:36:56.663Z"}
info: Step 2 of 3: Generating docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:56.663Z"}
error: Failed to build the docs page:
Could not read MkDocs YAML config file mkdocs.yml or mkdocs.yaml for validation; caused by Error: ENOENT: no such file or directory,
open '/tmp/backstage-EjxBxi/mkdocs.yml' {"timestamp":"2022-04-28T22:36:56.664Z"}
ERROR 404: Page not found. This could be because there is no index.md file in the root of the docs directory of this repository.
Looks like someone dropped the mic!
My catalog-info.yaml registers the docs subdirectory:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: "stephendemo16"
  description: "try using template"
  annotations:
    github.com/project-slug: xxxx/stephendemo16
    backstage.io/techdocs-ref: dir:docs
The docs subdirectory contains an index.md file with:
## stephendemo16
try using template
## Getting started
Start writing your documentation by adding more markdown (.md) files to this folder (/docs) or replace the content in this file.
## Table of Contents
The Table of Contents on the right is generated automatically based on the hierarchy
of headings. Only use one H1 (`#` in Markdown) per file.
...
What have I missed?
Having an index.md alone is not sufficient.
Internally, TechDocs currently uses MkDocs. MkDocs has a config file called mkdocs.yaml that defines some metadata, plugins, and your file structure (table of contents).
Place an mkdocs.yaml inside your root directory. MkDocs expects all markdown files to be located inside a /docs subdirectory, and it references your index.md file relative to that folder:
# You can pass the custom site name here
site_name: 'example-docs'

nav:
  # relative reference to your Markdown file and an optional title
  - Home: index.md

plugins:
  - techdocs-core
The location of your mkdocs.yaml is the root folder of your documentation. Therefore you have to adjust your backstage.io/techdocs-ref annotation to dir:. (meaning the same folder as your catalog-info file), as sketched below.
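Applied to the catalog-info.yaml from the question, the relevant part would look roughly like this (only the annotation changes):

metadata:
  name: "stephendemo16"
  annotations:
    github.com/project-slug: xxxx/stephendemo16
    backstage.io/techdocs-ref: dir:.   # mkdocs.yaml lives next to this catalog-info file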
You can find more details about using the TechDocs setup in the Backstage docs.

Is it possible to generate tags dynamically in Google Cloud Build?

First of all: I am somewhat new to Cloud Build. Compared to previously used methods, I find it a wrenching, unripe and fairly annoying framework. Endless time is spent getting builders to work that supposedly work out of the box (like the helm builder, for example), and its limitations are astonishing and frustrating. Perhaps the following problem is a good example:
I would like to build and push a docker image. According to the documentation, the images to be pushed to the docker repository at the end (I'm using GCR for this) reside in the following configuration section in my cloudbuild.yaml file:
images:
  - 'eu.gcr.io/$PROJECT_ID/my-project:${_TAG}'
  - 'eu.gcr.io/$PROJECT_ID/my-project:latest'
I can set the _TAG substitution manually by using the section:
substitutions:
  _TAG: x.y.z
but that means I have to manually bump the version number in this file every time. Worse still: if I branch out, I have to maintain the version number all the time. In this case I have a Python project that uses setuptools, so the version naturally lives in setup.py and I can parse it out with no problem. Attempts to write the number to a file and use $(cat VERSION) in the images section fail, because the system claims it can't substitute the $(cat VERSION) part. So how can I overwrite the _TAG variable inside another build step so that it appears correctly in the images section?
If you are using triggered builds from Cloud Source Repositories, GitHub, or Bitbucket, you can tag your commit and use the $TAG_NAME default substitution variable:
images:
  - 'eu.gcr.io/$PROJECT_ID/my-project:$TAG_NAME'
  - 'eu.gcr.io/$PROJECT_ID/my-project:latest'
On the other hand, if you are using the Cloud SDK to submit the Cloud Build build, you can provide values with the --substitutions argument:
gcloud builds submit [SOURCE] --config config.yaml --substitutions _TAG=x.y.z
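Since you can already parse the version out of setup.py, a small sketch of that idea (assuming setup.py is a standard setuptools script, so it supports the --version command) is to compute _TAG when submitting the build:

gcloud builds submit [SOURCE] --config config.yaml \
  --substitutions _TAG="$(python setup.py --version)"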
Also I believe you would find this GitOps-style continuous delivery with Cloud Build tutorial very helpful. It explains how to create a continuous integration and delivery (CI/CD) pipeline on Google Cloud Platform using Cloud Build.
You can tag your image with several tags using cloudbuild.yaml and define the Docker build step with:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest'
      - .
      - '-f'
      - Dockerfile.prod
    id: Build
And declare the resulting images with:
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:latest'

Got "ZIP does not support timestamps before 1980" while deploying a Go Cloud Function on GCP via Triggers

Problem:
I am trying to deploy a function with this step in a second-level compilation (second-level-compilation.yaml):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions',
         'deploy', '${_FUNCTION_NAME}',
         '--source', 'path/to/function',
         '--runtime', 'go111',
         '--region', '${_GCP_CLOUD_FUNCTION_REGION}',
         '--entry-point', '${_ENTRYPOINT}',
         '--env-vars-file', '${_FUNCTION_PATH}/.env.${_DEPLOY_ENV}.yaml',
         '--trigger-topic', '${_TRIGGER_TOPIC_NAME}',
         '--timeout', '${_FUNCTION_TIMEOUT}',
         '--service-account', '${_SERVICE_ACCOUNT}']
I get this error from Cloud Build using the Console.
Step #1: Step #11: ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory path/to/function: ZIP does not support timestamps before 1980
Here is the global flow:
The following step is in a first-level compilation (first-level-compilation.yaml), which is triggered by Cloud Build from a GitHub repository (via the Cloud Build GitHub App):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'launch-second-level-compilation.sh ${_MY_VAR}']
The script "launch-second-level-compilation.sh" does specific operations based on ${_MY_VAR} and then launches a second-level compilation passing a lot of substitutions variables with "gcloud builds submit --config=second-level-compilation.yaml --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,....."
Then, the "second-level-compilation.yaml" described at the beginning of this question is executed, using the substitutions values generated and passed through the launch-second-level-compilation.sh script.
The main idea here is to have a generic first-level-compilation.yaml in charge of calling a second-level compilation with specific dynamically generated substitutions.
Attempts / Investigations
As described in the issue "Cloud Container Builder, ZIP does not support timestamps before 1980", I tried to ls the files in the /workspace directory, but none of the files at the /workspace root have a strange date.
I changed path/to/function from a relative path to /workspace/path/to/function, with no success; unsurprisingly, since it resolves to the same directory.
Please make sure you don't have folders without files. For example:
|--dir
|  |--subdir1
|  |  |--file1
|  |--subdir2
|  |  |--file2
In this example dir doesn't directly contain any file, only subdirectories. During local deployment the gcloud SDK puts dir into the tarball without copying its last-modified field,
so it ends up set to 1 Jan 1970, which ZIP cannot represent.
As a possible workaround, just make sure every directory contains at least one file; a quick check is sketched below.
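One way to spot such directories before deploying is sketched here; the source path and the .gitkeep file name are only illustrative:

# print every directory under the function source that contains no regular files directly
find path/to/function -type d | while read -r d; do
  if ! find "$d" -maxdepth 1 -type f | grep -q .; then
    echo "no files directly in: $d"   # drop e.g. an empty .gitkeep in here
  fi
done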

Serverless - Lambda Layers "Cannot find module 'request'"

When I deploy my serverless api using:
serverless deploy
The lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Does anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}

functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'

package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**

plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding

custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes

layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error to yours while using the explicit layers keys that you are using to define a lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I feel this is a temporary solution, because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under a folder named nodejs, so that it looks like this when unzipped: nodejs/node_modules/<various packages>.
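For reference, a layer zip with that layout can be produced roughly like this (the folder and archive names below are just examples; adjust them to your project):

mkdir -p nodejs
cp -r node_modules nodejs/
zip -r test.zip nodejs   # unzips to nodejs/node_modules/<various packages>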
Then instead of using the explicit definition of layers I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function definition, the layer is referred to like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
TestLambdaLayer follows the <your name of layer>LambdaLayer naming convention, as documented here.
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
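If you prefer the CLI over the Lambda UI, a sketch of the same check (the layer name and version number are only illustrative) is to fetch a download URL for the layer contents:

# prints a pre-signed URL; download and unzip it to verify node_modules is present
aws lambda get-layer-version --layer-name utilityLayer --version-number 1 \
  --query Content.Location --output text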
If anyone faces a similar Runtime.ImportModuleError, it is fair to say that another cause of this issue could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
It will cause exactly the same error at runtime once you've deployed your lambda function (with the Serverless Framework). Just make sure to remove the patterns that could create a conflict across your dependencies.
I am using TypeScript with serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems the files were not included in the zip file by Serverless when I was using require.
PS: Removing the serverless-plugin-typescript and switching back to javascript also solved the problem.

Parse CLI won't upload non-js files

I'm trying to send iOS push notifications using my Cloud Code (I can't use Parse's Push APIs as my app is built using ionic and all their docs expect native).
I have it working as a standalone script locally using nodejs, but when I go to upload it to parse, I get:
Uploading source files
Note that the following files will not be uploaded:
parse_cloud_code/cloud/cloud/cert.pem
parse_cloud_code/cloud/cloud/key.pem
Uploading recent changes to scripts...
The following files will be uploaded:
parse_cloud_code/cloud/cloud/cloud.js
parse_cloud_code/cloud/cloud/cloud_test.js
parse_cloud_code/cloud/cloud/credentials.js
parse_cloud_code/cloud/cloud/fs.js
parse_cloud_code/cloud/cloud/push-notification.js
parse_cloud_code/cloud/cloud/push-notifications_test.js
parse_cloud_code/cloud/cloud/tls.js
Finished uploading files
Error: Failed to load cloud/cert.pem with: Could not find file cloud/cert.pem
at Object.exports.readFile (cloud/fs.js:24:17)
at readFile (cloud/push-notification.js:45:8)
at body (cloud/push-notification.js:56:5)
at cloud/push-notification.js:147:3
at cloud/push-notification.js:156:3
at cloud/cloud.js:5073:5
at cloud/cloud.js:5082:3
at main.js:1:13
How can I get the .pem files into the cloud code? I tried renaming them to .js, but then Parse wanted them to actually be syntactically valid JS files. Imagine that.
I found a workaround for this, as I had a similar problem: rename any non-js file that you must upload to .ejs. The Parse CLI will then upload it, and you can use it inside your cloud code.
