Access logs on Laravel Sail - laravel

I'm trying out Laravel Sail for a local project and I can't for the life of me find an easy way to exec into the container and tail my logs. I'm sick of googling and finding nothing -- does anyone have a link to docs, or know the easiest way to accomplish normal dev work with Laravel Sail? I'm considering giving up on this approach and doing it the normal Docker way.

Sail is just a way to configure your Docker environment easily; you can still run every Docker command as normal, or even publish the Sail files and modify them yourself (and then remove the package). To enter a container, execute docker exec -it <container> bash, or use ./vendor/bin/sail shell or ./vendor/bin/sail root-shell. To tail the logs of a container, you can run docker logs --follow <container>.
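For example, the day-to-day commands look roughly like this (a sketch; <container> is the app container's name from docker ps, and the log path is the Laravel default):
# Open a shell in the application container via Sail:
./vendor/bin/sail shell
# Or with plain Docker:
docker exec -it <container> bash
# Follow the container's stdout/stderr:
docker logs --follow <container>
# Follow Laravel's own log file; since Sail bind-mounts the project, this also works from the host:
tail -f storage/logs/laravel.log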

Related

Laravel Sail Docker Hub

I've got Laravel Sail, which as far as I know is a few containers (mysql, redis, laravel, ...). Is there an easy way to just pack the whole thing up to e.g. Docker Hub and easily download it on the production server, so that when I update it on localhost and run docker push, I can just run docker pull there? Then everything (like new commands in the Dockerfile | apt install things) would be updated and work exactly how it worked on localhost.
I read the documentation, but I cannot figure out how Docker works and how to easily change project location (e.g. I'm working on the project at work and sometimes at home, and it would be much easier to run docker push when I need to build the source code and deploy it).
I'm keeping the source code on GitHub, and that works for dev servers, but to deploy something I have to check all dependencies, the Dockerfile, the .env file and other things to make it work on production.
Thanks for the help!
You can use the existing docker-compose.yml and just run docker-compose up -d on production to start all the containers. Just be sure to, for example, disable Xdebug on production, as it slows down every request.
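A rough sketch of what that looks like on the production host (assuming the repository, including docker-compose.yml and a production .env, is already there; the Xdebug variable is how recent Sail setups expose it -- verify against your own compose file):
cd /path/to/project
git pull
# Keep Xdebug off in production, e.g. in .env:
# SAIL_XDEBUG_MODE=off
docker-compose build
docker-compose up -d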

Docker Build/Deploy using Bash Script

I have a deploy script that I am trying to use for my server for CD but I am running into issues writing the bash script to complete some of my required steps such as running npm and the migration commands.
How would I go about getting into a container bash, from this script, running the commands below and then exiting to finish pulling up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
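The pattern being described looks roughly like this in a docker-compose.yml (a sketch with assumed paths):
services:
  web:
    build: .
    volumes:
      - .:/app                # bind mount: the host source hides the code built into the image
      - /app/node_modules     # anonymous volume: keeps the old node_modules when the container is recreated
Deleting both volume lines makes the container use the code and node_modules baked into the image.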
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$@"
Make this script executable and set it as the image's ENTRYPOINT, leaving the CMD as the command that actually starts the application. On every container startup it will run the migrations and then run the main container command, whatever that may be.
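In Dockerfile terms that might look like this (the entrypoint filename and the CMD are assumptions for illustration):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
The entrypoint script runs the migrations, and its exec "$@" then hands control to whatever CMD (or docker-compose command:) was given.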
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes), or if you ever need to downgrade. In these cases it might make more sense to run the migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate

Unable to find docker image locally

I was following this post - the reference code is on GitHub. I have cloned the repository to my local machine.
The project has a React app inside it. I'm trying to run it on my local machine, following step 7 of the same post:
docker run -p 8080:80 shakyshane/cra-docker
This returns:
Unable to find image 'shakyshane/cra-docker:latest' locally
docker: Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'.
See 'docker run --help'.
I tried logging in to Docker again, but since the image belongs to @shakyShane it looks like I cannot access it.
I idiotically tried npm start too, but it's not a simple React app running on Node -- it's in the container, and containers are not controlled by npm.
Looks like docker pull shakyshane/cra-docker:latest throws this:
Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'
So the question is: how do I run this Docker image on my local Mac machine?
Well, this is a bit silly, but I'm still sharing it so future people like me don't get stuck.
The problem was that I was trying to run a Docker image that didn't exist (it had never been built).
I needed to build the image:
docker build . -t xameeramir/cra-docker
And then run it:
docker run -p 8080:80 xameeramir/cra-docker
In my case, my image had a TAG specified and I was not using it:
REPOSITORY TAG IMAGE ID CREATED SIZE
testimage testtag 189b7354c60a 13 hours ago 88.3MB
I got "Unable to find image 'testimage:latest' locally" for the command docker run testimage.
So specifying the tag, like docker run testimage:testtag, worked for me.
Posting my solution since none of the above worked.
Working on a MacBook M1 Pro.
The issue I had is that the image was built as arm64, and I was running the command:
docker run --platform=linux/amd64 ...
So I had to build the image for the amd64 platform in order to run it.
Command below:
docker buildx build --platform=linux/amd64 ...
In conclusion, your Docker image's platform and the docker run platform need to be the same, from what I experienced.
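If you're not sure which architecture an image was built for, something like this will show it:
docker image inspect --format '{{.Os}}/{{.Architecture}}' <image-name>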
In my case, the Docker image did exist on the system and I still couldn't run the container locally, so I used the exact image ID instead of the image name and tag, like this:
docker run --name myContainer c29150c8588e
I received this error message when I typed the image name with the wrong character, i.e. "name1\name2" instead of "name1/name2" (wrong slash).
In my case, I saw this error when I was logged in to Docker Hub in Docker Desktop. The repo I was pulling was internal to my enterprise. Once I logged out of Docker Hub, the pull worked.
This just happened to me because my local Docker VM on macOS ran out of disk space.
I just deleted some old images using docker image prune and it started working correctly again.
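A couple of commands that help in that situation (sketch; the last one removes more than just images, so read its prompt):
docker system df       # show how much space images, containers and volumes are using
docker image prune     # remove dangling images
docker system prune    # also remove stopped containers, unused networks and the build cache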
shakyshane/cra-docker does not exist in that user's repositories: https://hub.docker.com/u/shakyshane/
The problem is that you are trying to run an image that does not exist. If you are building from a Dockerfile, the image is not created until the build finishes with no errors, so if you try to run the image before that, Docker can't find it. Make sure your build scripts execute without errors.
The simplest answer can be the correct one! Make sure you have permission to execute the command; use:
sudo docker run -p 8080:80 shakyshane/cra-docker
In my case, I didn't realise there was a difference between docker run and docker start, and I kept using the run command when I should have been using the start command.
FYI: run creates (and starts) a new container from an image, while start just starts an existing, stopped container.
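A quick sketch of the difference (the container name is arbitrary):
docker run --name web -d nginx   # creates a new container named "web" from the nginx image and starts it
docker stop web                  # stops that container
docker start web                 # starts the same container again; nothing new is created
docker run --name web -d nginx   # fails: a container named "web" already exists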
Use -d:
sudo docker run -d -p 8000:8000 rasa/duckling
Learn about -d (detached mode) with:
sudo docker run --help
At first, I built the image on a MacBook M1 Pro with the command docker build -t hello_k8s_world:0.0.1 .; when I ran this image, the issue appeared.
After reading Master Yi's answer, I realized the crux of the matter and rebuilt my image like this: docker build --platform=arm64 -t hello_k8s_world:0.0.1 .
Finally, it worked.

Can I create a single Dockerfile from Laradock?

I was instructed to create a single Dockerfile in the root of the project, but I also got a tip to use Laradock as a starting point.
How can I do this? The only way I know so far to create a Docker environment is to run it with the docker-compose command.
No, a Dockerfile describes a single container (service) by design. Laradock provides a docker-compose file that references multiple Dockerfiles. However, you could create a smaller docker-compose file that only starts the containers you need (let's say a webserver with PHP, a database server and Redis).
Laradock ships with far too many containers in its docker-compose file, which is why the tutorial tells you to specify which containers you want to run:
docker-compose up -d nginx mysql
But if you write a minimal docker-compose.yml, you can just type
docker-compose up -d
without any additional arguments.
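For illustration, a minimal docker-compose.yml along those lines might look roughly like this (image names and credentials are placeholders):
version: "3"
services:
  app:
    build: .                 # your single Dockerfile with PHP and the application code
    ports:
      - "80:80"
    depends_on:
      - mysql
      - redis
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: laravel
  redis:
    image: redis:alpine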
Yes, you could add all the required services to a single container, but that would go against what you're trying to achieve by using Docker.

Saving Docker container state on Windows

I created a Docker machine using
docker-machine.exe create -d virtualbox --virtualbox-memory 2048 default
and I started a container with a bash shell using
docker run -ti ubuntu /bin/bash
and I got a prompt like root@ae78cd536ddf:/#, where I did a couple of apt-get installs.
Then I exited the bash session, and when I logged back in I could not find what I had installed. I wanted to do a docker commit, but I can't figure out where my installed stuff went.
UPDATE
Based on the answers I tried creating an image of the container. I have compiled all commands in a gist.
With docker run, you create a container from the ubuntu image. The container has the ID ae78cd536ddf (in your case; it appears as the hostname in the shell prompt). You can inspect containers and images with docker ps -a and docker images respectively.
Each time you run docker run, a new container is created. When using docker run --name Somename, you force the container to be named Somename which prevents you from creating another container with the same name.
Images are immutable, which means you cannot change them. So when you modify something in the running container, the image stays the same, and thus you can create more containers from the same image.
So after you stop a container (docker stop, exiting the containerized bash, or just a reboot), you can run docker start ae78cd536ddf to restart it. But it will be running in the background and you won't have a bash (check docker ps to see that it's running). Now you just need a bash: docker exec -it ae78cd536ddf /bin/bash will execute a bash in the container you started before.
Just a note about creating images. You might want to install the software you always need (I personally love vim, htop, ...) and then docker commit the container. This will create a new image which you can see in docker images. Now you can run containers from this image by replacing ubuntu with your image name.
To get more reproducible builds (when using a CI, for example), you can create a Dockerfile and run docker build.
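Put together, the workflow described above looks roughly like this (the image name is just an example):
docker start ae78cd536ddf                      # restart the stopped container in the background
docker exec -it ae78cd536ddf /bin/bash         # open a new bash in it
# ... apt-get install whatever you need, then exit ...
docker commit ae78cd536ddf my-ubuntu:custom    # save the container's current state as a new image
docker run -ti my-ubuntu:custom /bin/bash      # new containers from that image keep your installs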
Every docker run command creates a new container. The id in the hostname of the bash shell is the container ID. You can commit that.
To see all containers (including stopped containers), do docker ps -a.
