Can I create a single Dockerfile from Laradock? - laravel

I was instructed to create a single Dockerfile in the root of the project, but I also got a tip to use Laradock as a starting point.
How can I do this? So far, the only way I know to create a Docker environment is to run it with the docker-compose command.

No, a Dockerfile describes a single container (service) by design. Laradock provides a docker-compose file that references multiple Dockerfiles. However, you could create a smaller docker-compose file that only starts the containers you need (say, a webserver with PHP, a database server and Redis).
Laradock ships with far too many containers in its docker-compose file, which is why the tutorial tells you to specify which containers you want to run:
docker-compose up -d nginx mysql
But if you specify a minimal docker-compose.yml, you can just type
docker-compose up -d
without any additional arguments.
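For illustration, a minimal docker-compose.yml along those lines could look like the sketch below (the image names, versions and password are placeholders, not Laradock's actual configuration):

version: '3'
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    # nginx still needs a site config that forwards PHP requests to php-fpm
    depends_on:
      - php-fpm
  php-fpm:
    image: php:8.2-fpm
    volumes:
      - ./:/var/www
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
  redis:
    image: redis:alpine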
Yes, you could cram all the required services into a single container, but that would go against what you are trying to achieve by using Docker.

Related

Curious about the use of docker-compose and dockerfile

I'm studying Docker.
docker-compose is known as a tool that conveniently runs multiple containers from one script.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with the Dockerfile?
I thought Docker Compose could cover everything, but I have seen Docker Compose and Dockerfiles used together.
Let's take Spring Boot as an example.
Can I use only one docker-compose file to run the DB container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to separate the Dockerfiles and their roles and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc. (I'm oversimplifying the actual steps, but this is just to explain the point).
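As a rough sketch of the kind of Dockerfile described above (the file names, user and command are illustrative assumptions, not part of the original answer):

# Start from the Python 3 base image
FROM python:3.11-slim

# Install the requirements
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt

# Create the folders and copy the application code in
RUN mkdir -p /app
COPY . /app
WORKDIR /app

# Switch to a non-root user
RUN useradd -m appuser
USER appuser

CMD ["python", "app.py"]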
Once we have done that, we can create an image using Docker, by running docker build .. If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually with something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes and ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands with the appropriate configurations and ID's for each container you want to run, every time you want to start your service...
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running service should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
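For example, if a service declares both build and image, docker-compose build will build it from the Dockerfile and tag the result with that name (the repository and tag names here are just illustrative):

services:
  app:
    build:
      context: .
    image: myrepository/myapp:latest   # the tag docker-compose build will apply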
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in that article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running with one command by just running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
As for your question: Docker Compose is flexible enough that you can split your composition logic across multiple YAML files and combine them on the docker-compose command line as you need.
Here is an example:
# Build the Docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and the dependencies (a sketch of such a setup is shown below), but then the question becomes whether you are using the power of Compose properly. @Antonio Petricca has already given a well-described answer about Compose.
Regarding the compose file vs. Dockerfile question: it depends on how you write your compose file; technically the two are different.
So, in short:
1- Compose and Dockerfile are two different things.
2- Compose can use multiple modular files, even Dockerfiles, and that is why it is so popular: you can break the logic into multiple modules and then use Compose to build it. It also helps you debug and iterate faster.
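As a rough sketch of such a Compose-driven setup for a Spring Boot app and its database (image names, ports and credentials are assumptions; the app's own Dockerfile is what actually builds the jar):

version: '3.8'
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: appdb
  app:
    build:
      context: .           # Compose builds the app image from the project's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db                 # controls start order only, not database readiness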
Hope this answers your doubt.

How to restart Laravel queue workers inside a docker container?

I'm working on a production docker compose to run my Laravel app. It has the following containers (amongst others):
php-fpm for the app
nginx
mysql
redis
queue workers (a copy of my php-fpm, plus supervisord).
deployment (another copy of my php-fpm, with a Gitlab runner installed inside it, as well as node+npm, composer etc)
When I push to my production branch, the gitlab runner inside the deployment container executes my deploy script, which builds all the things, runs composer update etc
Finally, my deploy script needs to restart the queue workers, which are inside the queue workers container. When everything is installed together on a VPS, this is easy: php artisan queue:restart.
But how can I get the deployment container to run that command inside the queue workers container?
Potential solutions
My research indicates basically that containers should not talk to each other, but if you must, I have found four possible solutions:
install SSH in both containers
share docker.sock with the deployment container so it can control other containers via docker
have the queue workers container monitor a directory in the filesystem; when it changes, restart the queue workers
communicate between the containers with a tiny http server in the queue workers container
I really want to avoid 1 and 2, for complexity and security reasons respectively.
I lean toward 3 but am concerned about wasteful resource usage spent monitoring the fs. Is there a really lightweight method of watching a directory with as many files as a Laravel install has?
4 seems slightly crazy but certainly do-able. Are there any really tiny, simple http servers I could install into the queue workers container that can trigger a single command when the deployment container hits an endpoint?
I'm hoping for other suggestions, or if there really is no better way than 3 or 4 above, any suggestions on how to implement either of those options.
Delete the existing containers and create new ones.
A container is fundamentally a wrapper around a single process, so this is similar to stopping the workers with Ctrl+C or kill(1), and then starting them up again. For background workers this shouldn't interrupt more than their current tasks, and Docker gives them an opportunity to finish what they're working on before they get killed.
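As a side note on that last point: Docker sends SIGTERM first and only force-kills the process after a grace period (10 seconds by default). If your queue jobs need longer to finish, that period can be raised per service; a minimal sketch, with illustrative service and image names:

services:
  worker:
    image: my-php-worker:latest   # placeholder image name
    stop_grace_period: 1m         # wait up to a minute after SIGTERM before SIGKILL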
Since the code in the Docker image is fixed, when your CI system produces a new image, you need to delete and recreate your containers anyway to run them with the new image. In your design, the "deployment" container needs access to the host's Docker socket (option #2) to be able to do anything Docker-related. I might run the actual build sequence on a different system and push images via a Docker registry, but fundamentally something needs to sudo docker-compose ... on the target system as part of the deployment process.
A simple Compose-based solution would be to give each image a unique tag, and then pass that as an environment variable:
version: '3.8'
services:
  app:
    image: registry.example.com/php-app:${TAG:-latest}
    ...
  worker:
    image: registry.example.com/php-worker:${TAG:-latest}
    ...
Then your deployment just needs to re-run docker-compose up with the new tag:
ssh root@production.example.com \
  env TAG=20210318 docker-compose up -d
and Compose will take care of recreating the things that have changed.
I believe @David Maze's answer would be the recommended way, but I decided to post what I ended up doing in case it helps anyone.
I took a different approach because I am running my CI script inside my containers instead of using a Docker registry & having the CI script rebuild images.
I could still have given the deploy container access to the docker.sock (option #2) thereby allowing my CI script to control docker (eg rebuild containers etc) but I wasn't keen on the security implications of that, so I ended up doing #3, with a simple inotifywait watching for a change in a special 'timestamp.txt' file I am modifying in my CI script. Because it's monitoring only a single file it's light on the CPU and is working well.
# Start watching the special directory so we know when to restart the workers.
SITE_DIR=/var/www/projectname/public_html
WATCH_DIR=/var/www/projectname/updated_at
while true
do
    inotifywait -e create -e modify $WATCH_DIR
    if [ $? -eq 0 ]
    then
        echo "Detected Site Code Change. Executing artisan queue:restart."
        sudo -H -u www-data php $SITE_DIR/artisan queue:restart
    fi
done
All the deploy script has to do to trigger a queue:restart is:
date > $WATCH_DIR/timestamp.txt

Running tests with Visual Studio Docker compose support

I have added Docker Compose to my project. When I debug the project it loads the docker-compose file. In the override yml I have specified a PostgreSQL image and volume so it automatically brings up the development database. This is great because you can clone the repo and not have to install any local software apart from Docker.
The only thing that is not good is running tests. When I run tests it doesn't bring up the database container; it just executes the code inside the test project. So the tester has to manually start the database container.
I feel like I am probably doing something wrong. Is there a better way to make the tests work with the visual studio docker compose support so it brings up the database automatically?
I thought about running the tests inside the docker file but I think that might get in the way of development. What is a good approach here?
I would not recommend running tests inside your Dockerfile. This will complicate your development process as you have said.
In terms of the database, you can just run it outside of docker-compose so that it is always running in the background. Just remove the postgres config from your docker-compose.yml and run postgres with docker run ... instead. This way it will always be running until you stop it with docker stop ...
docker run -v /tmp/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=<PASSWORD> -d postgres

How can I access files from a Docker image?

First, I just want to mention that I am very new to Docker.
I am using Win 10, "Docker for Windows".
I am using the default linux containers option.
I have downloaded the latest image from here,
https://github.com/camunda/docker-camunda-bpm-platform.
So now my Docker is online, and the container + image are working. A Tomcat server and a Camunda engine are online and working.
My problem is the following:
I need to make some changes and I can't find where Tomcat and Camunda are stored. I need to edit some XML files in both Camunda and Tomcat (to set up which database to use, for example).
Can it be that they are not stored on my local machine?
For example, when I open the container with Kitematic (the Docker UI) I can see its environment variables; there is a SERVER_CONFIG whose value is /camunda/conf/server.xml (this is one of the files I need to edit, but I can't find it or anything else anywhere on my local machine).
You should access the container using the following command:
sudo docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS      PORTS   NAMES
5e978f353734   camunda/camunda-bpm-platform:latest   "/sbin/tini -- ./cam…"   4 days ago   Up 4 days
Then issue:
sudo docker exec -it 5e978f353734 /bin/bash
and you will be inside the container in a shell. Good luck!
You may want to consider using Camunda BPM RUN, which aims to allow configuration without having to change the WAR deployment or Tomcat. Instead configuration is done as described here:
https://docs.camunda.org/manual/latest/user-guide/camunda-bpm-run/
Config files can be mounted into the docker images, but you may prefer to compose your own docker image based on the Camunda BPM Run base image.
The example here shows another approach which sets Camunda properties from outside the docker image by passing the environment variable SPRING_APPLICATION_JSON into the docker image.
https://medium.com/@robert.emsbach/anyone-can-run-camunda-bpm-on-azure-in-10-minutes-4b4055cc8e9
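For example, a run command along those lines might look like the sketch below (untested; the image tag and property names are assumptions based on the Camunda docs, so double-check them for your version):

docker run -d -p 8080:8080 \
  -e SPRING_APPLICATION_JSON='{"camunda.bpm.admin-user.id":"demo","camunda.bpm.admin-user.password":"demo"}' \
  camunda/camunda-bpm-platform:run-latest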

DOCKER - LAMP Stack issues - Premade Image

Trying to set up a LAMP stack with Docker, I found and tried to use https://hub.docker.com/r/linode/lamp/
But I can't find, and don't know how to access, the files linked to the domain, or how to change the domain name from example.com, and so on.
I think my real question is: how do I change files in, or rebuild, an image from other people?
First of all, I want to mention I'm not a big fan of this image and approach, because it bundles multiple services into one container. I would recommend using a container for apache2, a container for mysql, etc.
But for the LAMP setup I'm following the documentation provided on the site.
I have a path /xx/test/index.html which contains some HTML. I will map the container's port to a host port and mount my files to the right folder in the container:
docker run -p 80:80 -t -i -v /root/test/:/var/www/example.com/public_html/ linode/lamp /bin/bash
I'm using -ti to start a bash session. Inside it you start the apache2 and mysql services yourself (this is the approach of the official documentation, not mine; it's a strange approach):
root@35d00285b625:/# service apache2 start
 * Starting web server apache2                                           *
root@35d00285b625:/# service mysql start
 * Starting MySQL database server mysqld                          [ OK ]
 * Checking for tables which need an upgrade, are corrupt or were
   not closed cleanly.
After starting the services you can detach from the container by pressing Ctrl+P then Ctrl+Q. Now you can open server-ip:80 to check your HTML code. If you want to replace example.conf, you can mount your own apache2 configuration too.
If you want to change folder names inside the image, I would recommend creating your own Dockerfile which starts with:
FROM linode/lamp
RUN <your changes...>
First of all, consider using microservices in separate containers. This will provide advantages like:
Fault Containment
Ease of Upgrades
Eliminates long-term commitment to a single technology stack
Easy to scale
System resilience
...
Now, Docker was created with microservices in mind, so for your LAMP stack I recommend using Apache+PHP in one container and MySQL in another container. To make your containers communicate with each other, create a user-defined network and put both containers in it.
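A minimal sketch of that network setup (container, network and image names are illustrative):

# Create a user-defined bridge network
docker network create lamp-net

# Run MySQL and Apache+PHP on that network; the containers can reach each other by container name
docker run -d --name db  --network lamp-net -e MYSQL_ROOT_PASSWORD=secret mysql:8.0
docker run -d --name web --network lamp-net -p 80:80 php:8.2-apache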
Now back to your question:
You have 3 options for using your custom configuration files:
You can mount your configuration files when creating the container (recommended):
sudo docker run -d --name my-apache -v /path/to/custom/httpd.conf:/usr/local/apache2/conf/httpd.conf httpd
Please note this example is using the library (official) Apache httpd image from Docker Hub; you should consult the image creator's instructions for custom images.
You can manually edit the configuration file inside a running container and commit it as a new image.
sudo docker commit my-apache myrepository/myimagename:tag
sudo docker run -d myrepository/myimagename:tag
Create your own image via a Dockerfile, using the FROM <base image> directive.
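A sketch of that third option, based on the same official httpd image as in option 1 (the local config file name is an assumption):

FROM httpd:2.4
# Bake your customised configuration into the image
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf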
