How to run a cloned Laravel project with sail on Windows? - laravel

I cloned an existing Laravel project from git which was dockerized with Sail. As vendor is in the .gitignore, I need to rebuild it before I can use Sail to run my app. According to the Laravel docs (https://laravel.com/docs/8.x/sail#installing-composer-dependencies-for-existing-projects), I need to install my dependencies using this command:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
The problem is that both cmd and PowerShell seem to struggle with the $'s; it seems they expect a cmdlet name, and I can't manage to run this. What am I missing?
The error I am getting with PS is
id : The term "id" is not recognized as the name of a cmdlet, function, script file or operable program.
In cmd, I got
docker: Error response from daemon: create $(pwd): "$(pwd)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
I also tried with Git Bash and got
docker: invalid reference format: repository name must be lowercase.

Just execute the command inside a WSL2 distribution (for example, Ubuntu).
First, open the WSL console from PowerShell in the project folder:
wsl -d ubuntu
Then execute the docker command to install the dependencies for Laravel Sail:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
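If you would rather stay in PowerShell instead of dropping into WSL, the same idea can usually be adapted by omitting the -u flag (there is no id command on Windows) and letting PowerShell expand the current directory; this is only a sketch, not the command from the docs, so adjust the image tag to your PHP version:
docker run --rm `
-v ${PWD}:/var/www/html `
-w /var/www/html `
laravelsail/php81-composer:latest `
composer install --ignore-platform-reqs
Note that without -u the container runs as root, so you may need to fix the ownership of the generated vendor folder afterwards.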
I recommend checking your PHP version against the available Laravel Sail images for compatibility.
When using the laravelsail/phpXX-composer image, you should use the same version of PHP that you plan to use for your application (74, 80, or 81).
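If you are not sure which version a project targets, the require block of its composer.json is the place to look; a typical entry (illustrative values only) looks like this:
"require": {
    "php": "^8.1",
    "laravel/framework": "^9.0"
}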
Source: Laravel Sail

Related

How to get laravel sail docker container running on a different computer?

https://laravel.com/docs/9.x/sail
I did a clean Laravel install with Sail on computer 1. It is all set up and working, with all containers running (mysql, laravel, redis, etc.), and Docker Desktop shows that it is connected to Windows WSL2 and running in that environment. I never had to install PHP, as this install procedure took care of the entire environment.
However, I would now like to pull this repo down on computer 2 and run it in containers as well. So I pulled down the repo; it didn't have a devcontainers folder, only a docker-compose file.
I tried running docker compose up and the error is that the vendor folder is empty/non-existent, for obvious reasons.
Now, I could install the right PHP version, run composer install, and then try again. But that doesn't seem right to me.
Shouldn't I be able to just run this repo as a remote container in VS Code and have it run everything on its own?
How do I get the vendor/bin/sail files installed?
I went back to computer 1 and created a devcontainer folder using Remote-Containers and pulled that down onto computer 2, but computer 2 still does not have the right vendor folder and files to complete the operations.
What am I doing wrong?
Assuming you have Docker working correctly on the second computer, you can run a temporary Sail container just to install the Composer dependencies in that project, as explained in the Laravel Sail documentation:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
https://laravel.com/docs/9.x/sail#installing-composer-dependencies-for-existing-projects
After this the temporary container will not exist anymore and you can now run ./vendor/bin/sail up -d normally.
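If the cloned project is also missing its .env file, the usual follow-up is the standard Laravel setup sequence (a sketch; adjust to your project):
cp .env.example .env
./vendor/bin/sail up -d
./vendor/bin/sail artisan key:generate
./vendor/bin/sail artisan migrate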

Opacity attribute only appears in Docker container

I have a Laravel app that is using the Metronic theme. As a part of the theme, they have their own implementation of BlockUI. I've been using this for years with no trouble. When the app runs bare-metal, everything works as expected.
However, when I Dockerize the app, everything works fine, but I notice that an extra opacity attribute is being applied to the BlockUI element(s). Not only that, but it's doing it on all of the pages except one.
Here is how it should appear (bare-metal version):
As you can see, it darkens the DataTable and puts up a "Please wait..." box when an AJAX request is made.
Now here's the exact same page, but within a Docker container:
In this case, the "Please wait..." box is only barely visible because it has been given an opacity of about 0.1, and you cannot tell that the DataTable has been darkened at all.
How can I track down where this is coming from? It only happens when the exact same app (no changes) is run from within a Docker container and on all pages but one. (The "Orders by Print Type" page works fine. No clue why.)
Here's the Dockerfile, in case it might have something to do with this:
FROM php:apache
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Set our application folder as an environment variable
ENV APP_HOME /var/www/html
# Set working directory
WORKDIR $APP_HOME
# Use the default production configuration
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
# Copy over project-specific PHP settings
COPY ./docker-config/php/local.ini /usr/local/etc/php/conf.d/local.ini
# Get NodeJS
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
# Install all the system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
libpq-dev \
libmcrypt-dev \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
git \
libzip-dev \
zip \
unzip \
nodejs \
build-essential \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql \
--with-pdo-mysql=mysqlnd \
&& docker-php-ext-configure gd \
--enable-gd \
--with-freetype=/usr/include/ \
--with-jpeg=/usr/include/ \
&& docker-php-ext-install \
intl \
pcntl \
pdo_mysql \
pdo_pgsql \
pgsql \
zip \
opcache \
gd \
&& pecl install -o -f redis \
&& rm -rf /tmp/pear \
&& docker-php-ext-enable redis
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
# Change uid and gid of apache to docker user uid/gid
RUN usermod -u $uid $user && groupmod -g $uid $user
# Copy existing application directory + permissions
COPY --chown=www-data:www-data . $APP_HOME
# Change the web_root to laravel /var/www/html/public folder
RUN sed -i -e "s/html/html\/public/g" /etc/apache2/sites-enabled/000-default.conf
# Fix the .env file for production.
RUN mv "$APP_HOME/.env.production" "$APP_HOME/.env"
# Enable apache module rewrite
RUN a2enmod rewrite
# Install dependencies
RUN npm install
# Compile CSS & JS
RUN npm run production
# Install all PHP dependencies
RUN composer install --no-interaction
# Create mountpoints and link them.
RUN ln -s /mnt/orders /var/www/html/public/orders
# Run artisan commands to set things up properly
RUN php artisan key:generate
RUN php artisan storage:link
# Optimization for production
RUN composer install --optimize-autoloader --no-dev
RUN php artisan config:cache
RUN php artisan route:cache
RUN php artisan view:cache
# Set the maintainer info metadata
LABEL maintainer="Sturm <email_hidden>"
And here is the relevant portion of the docker-compose.yml file:
# Laravel app (Apache & PHP services with Laravel)
schedule:
  build:
    args:
      user: www-data
      uid: 1000
    context: .
    dockerfile: Dockerfile
  image: "sturmb/sky-schedule:2021.6.1"
  container_name: schedule
  restart: unless-stopped
  working_dir: /var/www/html
  volumes:
    - /mnt/jobs_main:/mnt/jobs_main
    - /mnt/orders:/mnt/orders
  depends_on:
    - schedule-db
  ports:
    - "8081:80"
    - "4543:443"
  networks:
    - web
There are many moving parts here, so it is not trivial to pinpoint exactly where the change to your element is coming from. One possible way to find out is to use a MutationObserver and watch for changes being made to the DOM tree, something along the lines of:
var mutationObserver = new MutationObserver(function(mutations) {
  mutations.forEach(function(mutation) {
    console.log("Detected change: ", mutation);
  });
});
// getElementsByClassName returns a collection, so observe the first matching element
var blockElement = document.getElementsByClassName("blockUI blockMsg blockElement")[0];
mutationObserver.observe(blockElement, { attributes: true, attributeFilter: ["style"] });
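Once the observer fires, you can also check from the browser console whether the opacity is set inline or comes from a stylesheet; a quick sketch using the same class names as above:
var el = document.querySelector(".blockUI.blockMsg.blockElement");
console.log("inline opacity:  ", el.style.opacity);
console.log("computed opacity:", window.getComputedStyle(el).opacity);
If the inline value is empty but the computed value is around 0.1, the opacity is coming from a CSS rule, which would point at the assets compiled by npm run production inside the image rather than at BlockUI itself.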

Running "docker-compose run --rm composer update" not working in Jenkins pipeline

I have a Jenkins pipeline, just for learning purposes, which should build a Laravel app via docker-compose. The docker-compose build step works fine, but the next step, "docker-compose run --rm composer update", just stops with no error or output.
When I run the command manually after accessing the server via SSH, the command runs with no issues.
Composer service in docker-compose file:
composer:
  build:
    context: .
    dockerfile: composer.dockerfile
  container_name: composer
  volumes:
    - ./src:/var/www/html
  working_dir: /var/www/html
  depends_on:
    - php
  user: laravel
  entrypoint: ['composer', '--ignore-platform-reqs']
  networks:
    - laravel
Build step in jenkinsfile:
stage('Build') {
    steps {
        echo 'Building..'
        sh 'chmod +x scripts/jenkins-build.sh'
        sh './scripts/jenkins-build.sh'
    }
}
Command in shell script:
print "Building docker app"
sudo docker-compose up -d --build site # works fine
sudo chown jenkins -R ./
print "Running composer"
sudo docker-compose run --rm composer update # hangs in jenkins but works in cmd?
View in Jenkins:
Same command working on same server, via cmd:
I know there are some bad practices in here, but this is just for learning purposes. Jenkins server is running Ubuntu 20.04 on AWS EC2 instance.
In the end I resorted to installing Composer directly into my PHP Docker image. So instead of running the composer service, I now use docker exec php composer update.
From what I can see, any services that were invoked via docker-compose run did not work in the Jenkins pipeline. In my case these were all services that only run while performing some action (like composer update), so maybe that is why Jenkins did not like them.
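For reference, a minimal sketch of that change, assuming the PHP service is built from a file called php.dockerfile and its container is named php (both names are from my setup; adjust to yours): copy the Composer binary from the official composer image into the PHP image, then run updates through the long-running php container instead of a one-off service.
# in php.dockerfile: bring Composer into the PHP image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
# in the Jenkins shell script, replacing the docker-compose run step
sudo docker exec php composer update --ignore-platform-reqs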

How to use php artisan serve inside docker container?

I created a php-composer image using this Dockerfile:
FROM php:7
RUN apt-get update
RUN apt-get install curl
RUN curl -sS https://getcomposer.org/installer -o composer-setup.php
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN apt-get install -y git
And I run the following commands to create a container and start a Laravel app:
docker run -p 127.0.0.1:3000:8000 --name MyTest -dt php-composer   # create the container
docker cp laravelApp/ d4bbb5d36312:/usr/
docker exec -it MyTest bash
cd usr/laravelApp
php artisan serve
After that, the container's terminal shows the success message:
Laravel development server started: <http://127.0.0.1:8000>
But when I access 127.0.0.1:3000 in my local browser, I get nothing.
So is it possible to start a Laravel app inside a Docker container simply by running php artisan serve?
Or must I use nginx or Apache to run it?
This can be done like so:
$ docker container run -it --rm -v /host/path/laravel:/app -p 3000:8000 php bash
$ cd /app
$ php artisan serve --host 0.0.0.0
By default, containers are attached to the bridge network. For the published port to reach the server inside the container, the server has to listen on 0.0.0.0 (all interfaces) instead of only on 127.0.0.1, which is why the --host 0.0.0.0 flag is needed.
When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified.
https://docs.docker.com/network/bridge
Or like this (Linux only):
$ docker container run -it --rm --network host -v /host/path/laravel:/app php bash
$ cd /app
$ php artisan serve (or php artisan serve --port 3000)
If you use the host network driver for a container, that container’s network stack is not isolated from the Docker host.
https://docs.docker.com/network/host
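Applied to the container from the question, that means re-running serve bound to all interfaces (a sketch reusing the MyTest name and the paths from the question):
docker exec -it MyTest bash
cd /usr/laravelApp
php artisan serve --host 0.0.0.0
With the server listening on 0.0.0.0:8000 inside the container, the published mapping 127.0.0.1:3000:8000 should respond at 127.0.0.1:3000 on the host.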
You can debug the issue with two commands:
Run this on the host machine to check if the port mapping is correct:
docker port MyTest
Run this on the host machine to check the output from inside your container:
docker exec MyTest curl 127.0.0.1:8000
You should see the raw HTTP response of your Laravel application.

Unable to deploy the Docker Image of Laravel application on Heroku

I have started learning Docker and am a complete beginner. What I am doing now is trying to deploy a Docker image of a Laravel application to Heroku. I have installed a Laravel project. My Laravel project has only one page, a welcome page showing a message; that's it. I am just trying to test Docker. I created a Docker image for my Laravel project and successfully ran it on my laptop as follows.
I created a Dockerfile in the project root folder with the following content.
FROM php:7
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
Then I built the image like this
docker build -t waiyanhein/laravel c:/xampp/htdocs/docker_laravel
Then I ran it locally with the following command.
docker run -p 8181:8181 waiyanhein/laravel
Everything was working. Then I tried to deploy the image to Heroku. I followed this guide: https://devcenter.heroku.com/articles/container-registry-and-runtime. As in the guide, I logged in to Heroku.
heroku container:login
Login succeeded. Then I created the app by running this command.
heroku create dockerlaravelwai
The command was successful and this is the result.
Then, as the next step in the guide, I pushed it by running the following command.
heroku container:push web
When I ran the above command, I got the following error.
» Error: Missing required flag:
» -a, --app APP app to run command against
» See more help with --help
What went wrong? How can I easily deploy the Laravel Docker image to Heroku?
It's asking you to specify the app name:
heroku container:push web --app dockerlaravelwai
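Per the same Heroku guide, the pushed image also has to be released before it serves traffic, so the full sequence is roughly:
heroku container:push web --app dockerlaravelwai
heroku container:release web --app dockerlaravelwai
heroku open --app dockerlaravelwai
Also note that Heroku ignores EXPOSE and assigns the port at runtime through the $PORT environment variable, so the Dockerfile's CMD will likely need to become something like php artisan serve --host=0.0.0.0 --port=$PORT.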
