Error: slug size too large
Hello,
I tried to deploy my application; it's a face detection app.
I want it to be online: when the user opens the link, the camera turns on and the app detects whether they are wearing a mask, with a bokeh output. But when I deploy, it says the compiled slug size is too big (over 500 MB), while in reality the project size is below 30 MB.
Update your requirements.txt to include only the modules you really need and use; this should reduce the app's footprint.
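For example, a trimmed requirements.txt for an OpenCV-based app might look like the sketch below. The package list is an assumption for illustration, not taken from the question; note that opencv-python-headless is considerably smaller than opencv-python and is usually sufficient on a server with no display:

# requirements.txt -- keep only the app's direct dependencies (illustrative)
flask==2.0.3
opencv-python-headless==4.5.5.64
numpy==1.22.3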
If trimming the dependencies is not possible, the alternative is to use the Heroku Container Registry: build and push a Docker image of your application; the slug size limit is not enforced for this type of deployment.
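A minimal sketch of that workflow, assuming a Dockerfile at the project root and an app named my-face-app (a placeholder):

heroku container:login
heroku container:push web -a my-face-app
heroku container:release web -a my-face-app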
Related
We are using Vapor to deploy our Laravel App to AWS Lambda.
After a small change (about 10 lines were introduced), the deploy process via GitHub Actions fails with the following message:
Message: Your application exceeds the maximum size allowed by AWS Lambda.
Now we cannot deploy to this environment (develop). All other branches (staging, production) are fine and we can use them.
No new libraries or other big changes were introduced since the last successful deploy.
Deploying via the Vapor CLI also fails with the same message.
Any ideas where we can search for the source of the problem?
Lambda deployment packages have a size limit of 50 MB when uploaded directly. The deployment package of your dev branch is crossing that limit; that's why you get the error Your application exceeds the maximum size allowed by AWS Lambda.
If reducing the size of your deployment package is not an option, upload the package to S3 & provide the S3 URL to the Lambda function. The deployment package is allowed to have a max size of 250 MB (uncompressed) when you go the S3 route.
See my blog post for more details!
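Outside of Vapor's own tooling, the raw S3 route looks roughly like this with the AWS CLI; the bucket and function names are placeholders:

aws s3 cp function.zip s3://my-deploy-bucket/function.zip
aws lambda update-function-code --function-name my-function --s3-bucket my-deploy-bucket --s3-key function.zip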
You can switch to a Docker runtime. Docker-based deploys can be up to 10 GB.
First, for each environment, create a Dockerfile named <environment>.Dockerfile, for example production.Dockerfile for the production environment.
Then in that file put:
# Vapor's PHP 8.1 base image
FROM laravelphp/vapor:php81
# Copy the application code into Lambda's task root
COPY . /var/task
Then change the "runtime" for each environment in vapor.yml to "docker".
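A minimal vapor.yml sketch of that change; the project id, name, and build step are placeholders:

id: 12345
name: my-app
environments:
    production:
        runtime: docker
        build:
            - 'composer install --no-dev'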
When I try to deploy our code to staging, I get an error message something like this:
Compressed application is greater than 45MB. Your application is 69 MB.
Whoops! There were some problems with your request. Vapor applications may not have more than 300 public assets.
Very small. It's not enough.
Looks like Taylor just launched the solution to this problem. You need to update your vapor-core and vapor-cli packages to the latest version, then add separate-vendor: true to your vapor.yml file. Details here: https://blog.laravel.com/vapor-reusable-vendors
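In vapor.yml that option sits under the environment's configuration; a sketch, assuming the affected environment is develop:

environments:
    develop:
        separate-vendor: true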
Another option is to switch to the docker runtime:
Application Size
AWS Lambda has strict limitations on the size of applications running within the environment. If your application exceeds this limit, you may take advantage of Vapor's Docker based deployments. Docker based deployments allow you to package and deploy applications up to 10GB in size.
The Vapor docs link to https://docs.vapor.build/1.0/projects/environments.html#building-custom-docker-images which seems to be broken or pointing to an old documentation structure.
I think the correct link to the docs about the docker runtime is here: https://docs.vapor.build/1.0/projects/environments.html#docker-runtimes
You should probably try this in a new environment, because once you switch an environment to Docker you can't switch back to the default Vapor runtime for some reason. So just try the Docker runtime in a test environment by passing the --docker flag:
vapor env docker-test --docker
I have created a Compute Engine VM instance in Google Cloud Platform. Then I installed Go using the standard procedure, downloading it from https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz. Everything worked properly and I was able to run Go applications. However, after closing the instance, when I reopened it, it said Go was not installed. The message is the following:
-bash: go: command not found
How can I save the instance setup?
Creating, Deleting, and Deprecating Custom Images
You can create custom images of boot disks and use these images to create new instances. This is ideal for situations where you have created and modified a persistent boot disk to a certain state and need to save that state to create new instances.
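A sketch of that flow with the gcloud CLI; the instance, image, and zone names are placeholders, and the boot disk is assumed to share the instance's name:

gcloud compute instances stop my-go-vm --zone=us-central1-a
gcloud compute images create my-go-image --source-disk=my-go-vm --source-disk-zone=us-central1-a
gcloud compute instances create my-new-vm --image=my-go-image --zone=us-central1-a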
I think you should also consider using Docker containers.
Pushing and pulling images
Container-optimized VM images
This may or may not be an easy question, but where can I pull images from to create a new Docker image via the API?
Documentation
My (unsuccessful) attempts have been at building an image from something local: using docker images to get a list of images, then trying to use their image ID or repository with the fromImage query param, like so:
curl --data '' "host:port/images/create?fromImage=test/hello-world&tag=webTesting"
I consistently get the following error:
{"errorDetail":{"message":"Error: image test/hello-world not found"},"error":"Error: image test/hello-world not found"}
In running docker images, we can very clearly see the following:
REPOSITORY         TAG      IMAGE ID       CREATED      VIRTUAL SIZE
test/hello-world   latest   6d9bd5e6da4e   2 days ago   556.9 MB
In all combinations of using the repository/tag/ID, the error still displays. I understand that we can create images from URLs with fromSrc, and there are alternative create-image routes that upload .tar files, but is it possible in this case to create an image from one that already exists locally? I've had success building images from ubuntu or centos, but I'm basically looking to replicate something local with new tags/repository.
I do see in the documentation that the fromImage parameter may only be used when pulling an image -- does this mean we can only import images that are hosted on Docker Hub?
As you noted, the Docker remote API documentation clearly states that a pull operation must be triggered for this image reference.
This does not require that you use Docker Hub, but it means that the image must be located on a registry and not simply in your daemon's local cache. If you were running a Docker registry instance on your host (which is easily done with the public registry image on Docker Hub) on port 5000, then you could use localhost:5000/test/hello-world as the fromImage, and it will pull it from your locally hosted registry (after you push it there first, of course).
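A sketch of that round trip, using the standard registry image and the names from the question:

docker run -d -p 5000:5000 --name registry registry:2
docker tag test/hello-world localhost:5000/test/hello-world
docker push localhost:5000/test/hello-world
curl --data '' "host:port/images/create?fromImage=localhost:5000/test/hello-world&tag=webTesting"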
I know that an image consists of many layers.
For example, if you run docker history [image], you get a sequence of IDs; the ID at the top is the same as the image ID, and the rest are layer IDs.
In this case, do these remaining layer IDs correspond to some other images? If so, can I view a layer as an image?
Layers are what compose the file system for both Docker images and Docker containers.
It is thanks to layers that, when you pull an image, you often don't have to download its entire filesystem: if you already have another image that shares some layers with the image you are pulling, only the missing layers are actually downloaded.
Do these remaining layer IDs correspond to some other images?
Yes, they are just like images, but without any tag to identify them.
Can I view a layer as an image?
Yes.
A quick demonstration:
docker pull busybox
docker history busybox
IMAGE          CREATED        CREATED BY                                      SIZE       COMMENT
d7057cb02084   39 hours ago   /bin/sh -c #(nop) CMD ["sh"]                    0 B
cfa753dfea5e   39 hours ago   /bin/sh -c #(nop) ADD file:6cccb5f0a3b3947116   1.096 MB
Now create a new container from layer cfa753dfea5e as if it was an image:
docker run -it cfa753dfea5e sh -c "ls /"
bin dev etc home proc root sys tmp usr var
Layers and images are not strictly synonymous.
https://windsock.io/explaining-docker-image-ids/
When you pull an image from Docker Hub, its "layers" have "<missing>" image IDs.
When you build images locally and commit changes, those layers do have image IDs, until you push to Docker Hub: for other users pulling the image you uploaded, only the leaf image has an image ID.
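You can see this with any freshly pulled image; the output below is illustrative rather than captured from a real run:

docker pull ubuntu
docker history ubuntu
IMAGE          CREATED       CREATED BY                        SIZE
2b4cba85892a   3 weeks ago   /bin/sh -c #(nop)  CMD ["bash"]   0B
<missing>      3 weeks ago   /bin/sh -c #(nop) ADD file:...    72.8MB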
From the Docker documentation:
A Docker image is a read-only template. For example, an image could contain an Ubuntu operating system with Apache and your web application installed. Images are used to create Docker containers. Docker provides a simple way to build new images or update existing images, or you can download Docker images that other people have already created. Docker images are the build component of Docker.
Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. Union file systems allow files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
One of the reasons Docker is so lightweight is because of these layers. When you change a Docker image—for example, update an application to a new version— a new layer gets built. Thus, rather than replacing the whole image or entirely rebuilding, as you may do with a virtual machine, only that layer is added or updated. Now you don’t need to distribute a whole new image, just the update, making distributing Docker images faster and simpler.
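To make the layering concrete, here is a minimal Dockerfile sketch matching the Ubuntu-plus-Apache example above; each instruction produces its own layer, and the paths are placeholders:

# Base layers come from the ubuntu image
FROM ubuntu:20.04
# This RUN instruction adds one new layer containing Apache
RUN apt-get update && apt-get install -y apache2
# COPY adds another layer with the web application's files
COPY ./app /var/www/html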
The way I like to look at these things is like backup types. We can create a full backup and, after that, incremental backups. The full backup is not changed (although in some systems, to decrease restore time, each incremental backup is merged into the full backup, we can ignore that case for this discussion) and only the changes are backed up separately. So we can have different layers of backups, just as we have different layers of images.
EDIT:
View the following links for more information:
Docker image vs container
Finding the layers and layer sizes for each Docker image