https://laravel.com/docs/9.x/sail
I did a clean Laravel install with Sail on computer 1. It is all set up and working, with all containers running (mysql, laravel, redis, etc.), and Docker Desktop shows that it is connected to Windows WSL2 and running in that environment. I never had to install PHP, as this install procedure took care of the entire environment.
However, I would now like to pull this repo down on Computer 2 and run it in containers as well. So I pulled down the repo; it didn't have a devcontainers folder, only a docker-compose file.
I tried running docker compose up, and the error is that the vendor folder is empty/non-existent, for obvious reasons.
Now, I could install PHP with the right version, run composer install, and then try again. But that doesn't seem right to me.
Shouldn't I be able to just run this repo as a remote container in VS Code and have it run everything on its own?
How do I get the vendor folder and vendor/bin/sail installed?
I went back to computer 1 and created a devcontainer folder using Remote - Containers, then pulled that down onto computer 2, but computer 2 still does not have the right vendor folder and files to complete the operations.
What am I doing wrong?
Assuming you have Docker working correctly on the second computer, you could run a temporary Sail container just to install the Composer dependencies in that project, as explained in the Laravel Sail documentation:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
https://laravel.com/docs/9.x/sail#installing-composer-dependencies-for-existing-projects
After this, the temporary container no longer exists and you can run ./vendor/bin/sail up -d normally.
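If the repository was cloned fresh, the app usually also needs an .env file and an application key before it will run; a minimal follow-up sketch, assuming the repo ships the standard .env.example (adjust to your project's setup):

cp .env.example .env
./vendor/bin/sail up -d
./vendor/bin/sail artisan key:generate
./vendor/bin/sail artisan migrate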
Related
Hi, I tried to install a fresh Laravel project using the Laravel Sail Docker environment. First it showed me a "Docker is not running" error. Then I found out I needed to run Docker rootless. I solved it by reading this URL: https://docs.docker.com/engine/install/linux-postinstall/
After that, I successfully installed Laravel using Laravel Sail. Then I ran
./vendor/bin/sail up -d
I was able to view the Laravel project in my browser. But when I ran any other artisan command, such as ./vendor/bin/sail artisan route:list, or Sail commands such as sail shell, all the Docker containers were forcefully shut down automatically. It showed me this error:
Sail is not running.
You may Sail using the following commands: './sail up' or './sail up -d'
Any suggestions? I am getting this issue on Ubuntu 20.04 LTS.
If you are on Windows and using WSL, make sure the WSL Integration is properly set:
Settings -> Resources -> WSL Integration + toggle your Linux distribution
I was using Laradock before installing Laravel Sail. Maybe there were some conflicts. So I backed up all my databases, then I removed all images using this command: sudo docker rmi -f $(docker images -a -q). Then I installed a fresh Laravel project and it worked.
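For reference, a fuller cleanup sketch, assuming you really do want to wipe everything on the machine and have backups (prefix with sudo if your user is not in the docker group):

docker stop $(docker ps -a -q)         # stop every container
docker rm $(docker ps -a -q)           # remove every container
docker rmi -f $(docker images -a -q)   # remove every image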
Please read my comment below, as it was a better solution for me.
Very easy: maybe you don't have permission to run Docker.
On Linux, first use sudo -s and then run ./vendor/bin/sail up -d
Sail first checks whether any current docker-compose services have a status of Exit. If any of them do, it forcefully brings down all the other services, which is what you are seeing whenever you type any sail sub-command. You can see the code here: https://github.com/laravel/sail/blob/87c63c2956749f66e43467d4a730b917ef7428b7/bin/sail#L44-L49
Run sail up to start the services and then use docker-compose ps to check that all services are running and none have an Exit status.
I had the same issue and when reviewing the code and checking my services I noticed the database had exited soon after I brought them up.
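A quick way to apply that check yourself, assuming the default Sail docker-compose setup (mysql is only an example service name):

docker-compose ps                      # look for any service whose State shows Exit
docker-compose logs mysql              # inspect why that service stopped
./vendor/bin/sail up -d                # bring everything back up once the cause is fixed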
So far I have set up a MySQL server on Amazon Lightsail and have successfully used it while running Strapi locally. How do I deploy Strapi itself on Lightsail and get a link to access it through a browser?
I have read through https://strapi.io/documentation/3.0.0-beta.x/deployment/amazon-aws.html, but the guide is for AWS EC2. Do the same steps apply to Lightsail?
So I ended up figuring it out. Please let me know if something is wrong or can be done better:
Start an Ubuntu instance in Lightsail (I picked the 2 GB RAM tier because that's the minimum requirement for Strapi, as listed in their documentation) and give it a static IP address.
Install node onto your server:
cd ~
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
...
sudo apt-get install nodejs
...
node -v && npm -v
I cloned my project from GitHub, so a lot of node modules weren't included because of .gitignore. Simply cd into the project directory and run npm install to install all the missing dependencies.
Run npm run build to build the admin panel, then npm start (a sketch of the full sequence follows after this list).
It should tell you to go to localhost:1337; instead, go to [your server's IP address]:1337.
Your Strapi app should be on the screen.
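Putting those steps together, a rough sketch of the sequence on the server; the repository URL and folder name are placeholders, and NODE_ENV=production is what Strapi typically expects for a production build:

git clone https://github.com/your-user/your-strapi-app.git
cd your-strapi-app
npm install                            # restore the modules excluded by .gitignore
NODE_ENV=production npm run build      # build the admin panel
npm start                              # then browse to http://<server-ip>:1337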
Yes, Strapi can be deployed to Lightsail, but there is no particular advantage.
Lightsail must be configured as a Node.js server.
Deploy from GitHub.
Install the PM2 runtime (a sketch follows below).
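A minimal PM2 sketch, assuming the app lives in ~/your-strapi-app (the process name "strapi" and the path are placeholders):

sudo npm install -g pm2                      # install the PM2 process manager
cd ~/your-strapi-app
pm2 start npm --name strapi -- run start     # keep "npm run start" alive in the background
pm2 save                                     # persist the process list
pm2 startup                                  # prints the command to register PM2 on boot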
I work with Docker for all my projects; however, I have a problem with Symfony 3.4 and Composer with Docker.
When I add some more packages, Composer freezes on update and I need to restart Docker to unblock the situation.
I have no idea whether the problem comes from Symfony or Composer.
I think it comes from Symfony, because when I try with Symfony 4.3 there are no problems.
Can you help me to find a clue to resolve this problem?
Generally, I launch Composer with the following command line:
docker run --rm --name composer -ti -w /var/www -v %cd%:/var/www composer ...
I have resolved my problem: I set 4 GB of RAM for Docker and it works.
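If raising Docker's RAM (Docker Desktop: Settings -> Resources) is not enough, lifting Composer's own PHP memory limit for the run can also help; a sketch based on the command line quoted above (COMPOSER_MEMORY_LIMIT is a standard Composer environment variable, the rest mirrors the original invocation):

docker run --rm --name composer -ti -e COMPOSER_MEMORY_LIMIT=-1 -w /var/www -v %cd%:/var/www composer update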
I started to learn Docker. I am a complete beginner to Docker. Now, what I am doing is trying to deploy a Docker image of a Laravel application onto Heroku. I have installed a Laravel project. My Laravel project has only one page, a welcome page showing a message. That's it. I am just trying to test Docker. I have created a Docker image for my Laravel project and successfully run it on my laptop as follows.
I created a Dockerfile in the project root folder with the following content.
FROM php:7
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
Then I built the image like this
docker build -t waiyanhein/laravel c:/xampp/htdocs/docker_laravel
Then I run it locally by running the following command.
docker run -p 8181:8181 waiyanhein/laravel
Everything was working. Then I tried to deploy the image to Heroku. I followed this link, https://devcenter.heroku.com/articles/container-registry-and-runtime. As in the link, I logged into heroku.
heroku container:login
Login succeeded. Then I created the app by running this command.
heroku create dockerlaravelwai
The command was successful.
Then I pushed it as the next step, as mentioned in the link, by running the following command.
heroku container:push web
When I ran the above command, I got the following error.
» Error: Missing required flag:
» -a, --app APP app to run command against
» See more help with --help
What went wrong? How can I easily deploy the Laravel Docker image to Heroku?
It's asking you to specify the app name:
heroku container:push web --app dockerlaravelwai
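After the push succeeds, the image typically still has to be released and then opened; the remaining steps from the same Heroku guide look roughly like this (same app name as above):

heroku container:release web --app dockerlaravelwai
heroku open --app dockerlaravelwai

Note that Heroku ignores EXPOSE and injects the port via the PORT environment variable, so the CMD in the Dockerfile may need to serve on --port=$PORT instead of the hard-coded 8181.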
If I run composer install from my host, I hit my local composer cache:
- Installing deft/iso3166-utility (1.0.0)
Loading from cache
Yet when building a container with this in its Dockerfile:
RUN composer install -n -o --no-dev
I download all the things, e.g.:
- Installing deft/iso3166-utility (1.0.0)
Downloading: 100%
It's expected, yet I would like to avoid it, as even on a rebuild it would download everything again.
I would like to have a universal cache for composer that I could also reshare for other docker projects.
I looked into this and found the approach to define a volume in the Dockerfile:
ENV COMPOSER_HOME=/var/composer
VOLUME /var/composer
I added that to my Dockerfile, and expected to only download the files once, and hit the cache afterwards.
Yet when I modify my composer command, e.g. remove the -o flag, and rerun docker build ., I expected to hit the cache on build, yet I still download the vendors again.
How are volumes supposed to work to have a data cache inside a docker container?
Use the experimental feature: Docker BuildKit (supported since Docker 18.09 and docker-compose 1.25.4).
In your Dockerfile:
# syntax=docker/dockerfile:experimental
FROM ....
# ......
RUN --mount=type=cache,target=/var/composer composer install -n -o --no-dev
Now before building, make sure the env var is exported:
export DOCKER_BUILDKIT=1
docker build ....
If you are using docker-compose, make sure to export also COMPOSE_DOCKER_CLI_BUILD :
export COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1
docker-compose build ...
If it does not work with docker-compose, make sure your docker-compose version is above 1.25.4
docker-compose version
I found two ways of dealing with this problem, though neither deals with Composer volumes anymore.
Speed up the Composer download process: use hirak/prestissimo
composer global require "hirak/prestissimo:^0.3"
💡 With Composer 2.0, the above step is no longer required for faster downloads. In fact, it won't install on Composer 2.0 environments.
Force Docker to use a cached composer install. Docker reuses the cache for a RUN instruction if the files added before it didn't change. If you only do COPY . /your-php-app, docker build will invalidate all the caches and re-run composer install even if only one unrelated file in the source tree changed. In order to make docker build run composer install only on package changes, you have to add the composer.json and composer.lock files before adding the source files. Since you also need the source files anyway, you have to use a different folder for composer install and rsync the content back into the folder added afterwards; furthermore, you then have to run the post-install scripts manually. It should look something like this (untested):
WORKDIR /tmp/
COPY composer.json composer.lock ./
RUN composer install -n -o --no-dev --no-scripts
WORKDIR /your-php-app/
COPY . /your-php-app/
RUN rsync -ah /tmp/* /your-php-app/
RUN composer run-script post-install-cmd
or combine the two =)
I would like to have a universal cache for composer that I could also reshare for other docker projects.
Using a shared volume for the Composer cache works great when working with containers. If you want to go broader than just containers and use a shared cache for, e.g., local development as well, I've developed a solution for that called Velocita - how it works.
Basically, you use one global Composer plugin for local projects and inside your build containers. This not only speeds up downloads tremendously, it also helps with third-party outages, for example.
I would consider utilizing the $HOME/.composer/cache/files directory. This is where Composer reads from and writes to when using composer install.
If you are able to mount it from your host into your container, that would work (a sketch follows below). Also, you could just tar it up after each time you run composer install and then drop that in before you run composer install the next time.
This is loosely how Travis CI recommends doing this.
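A sketch of the mount approach, assuming you run Composer through the official composer image (the paths are examples; COMPOSER_CACHE_DIR is standard Composer configuration):

docker run --rm -ti \
-v $(pwd):/app \
-v $HOME/.composer/cache:/tmp/composer-cache \
-e COMPOSER_CACHE_DIR=/tmp/composer-cache \
composer install -n -o --no-dev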
Also, consider using the --prefer-dist flag with your composer install command.
Info on that can be found here: https://getcomposer.org/doc/03-cli.md#install
--prefer-dist: Reverse of --prefer-source, composer will install from dist if possible. This can speed up installs substantially on build servers and other use cases where you typically do not run updates of the vendors. It is also a way to circumvent problems with git if you do not have a proper setup.
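For example, combined with the flags already used in the Dockerfile above, that would be:

RUN composer install -n -o --no-dev --prefer-dist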
Some references on utilizing the composer cache for you:
https://blog.wyrihaximus.net/2015/07/composer-cache-on-travis/
https://github.com/travis-ci/travis-ci/issues/4579