Laravel 8 storage access using Docker

I can't access images inside /public/storage.
I have run php artisan storage:link.
The files are visible in my local environment, but when I switch to the Docker environment with the same code, I can't access them (not found).
The file owner is the same as in my Docker config.
I test it by visiting http://127.0.0.1:8000/storage/test.png, where test.png is inside public/storage.
I have 3 services inside my docker-compose file:
Nginx
PHP
MySQL
I think the problem comes from permissions, but I can't figure out the solution.

As far as I know, each Docker container has its own IP address, so you have to start the server on an address the container actually exposes. To avoid problems with Docker's dynamic IP assignment, use 0.0.0.0 instead of the container's IP, because 0.0.0.0 means all IPv4 addresses on the local machine. To resolve your issue, run the following command inside the container:
php artisan serve --host 0.0.0.0
Note:
You may get another error after this; let me know if you do.
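For the test URL in the question (http://127.0.0.1:8000/storage/test.png) to work, port 8000 also has to be published from the container. A minimal sketch, assuming you run the dev server directly with docker run (your_php_image is a placeholder for your own image):
# publish the artisan dev server port so the host can reach it on 127.0.0.1:8000
docker run -p 8000:8000 your_php_image php artisan serve --host 0.0.0.0 --port 8000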

In case anyone faces the same issue:
I managed to solve this by removing the symlink created by Laravel and creating my own from my nginx container, based on the real path inside the container.
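A minimal sketch of that, assuming the application is mounted at /var/www/html inside the container (adjust the paths to your own volume layout):
# inside the container that serves the files: drop the stale symlink and recreate it
rm /var/www/html/public/storage
ln -s /var/www/html/storage/app/public /var/www/html/public/storage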

Related

How to visit a Docker service by IP address

I'm new to Docker and I'm probably missing a lot, although I went through the basic documentation. I'm trying to deploy a simple Spring Boot API.
I've packaged my API as a docker-spring-boot .jar file, then I installed Docker and pushed the image with the following commands:
sudo docker login
sudo docker tag docker-spring-boot phillalexakis/myfirstapi:01
sudo docker push phillalexakis/myfirstapi:01
Then I started the API with the docker run command:
sudo docker run -p 7777:8085 phillalexakis/myfirstapi:01
When I visit localhost:7777/hello I get the desired response.
This is my Dockerfile
FROM openjdk:8
ADD target/docker-spring-boot.jar docker-spring-boot.jar
EXPOSE 8085
ENTRYPOINT ["java","-jar","docker-spring-boot.jar"]
Based on this answered post, this is the command to get the IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
So I ran it with container_name_or_id = phillalexakis/myfirstapi:01 and I'm getting this error:
Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Networks>: map has no entry for key "NetworkSettings"
If I manage somehow to get the IP, will I be able to visit it and get the same response?
This is how I have it in my mind: ip:7777/hello
You have used the image name and not the container name.
Get the container name by executing docker ps.
The container ID is the value in the first column, the container name is the value in the last column. You can use both.
Then, when you have the IP, you will be able to access your API at IP:8085/hello, not IP:7777/hello
The port 7777 is available on the Docker Host and maps to the port 8085 on the container. If you are accessing the container directly - which you do, when you use its IP address - you need to use the port that the container exposes.
There is also another alternative:
You can give the container a name when you start it by specifying the --name parameter:
sudo docker run -p 7777:8085 --name spring_api phillalexakis/myfirstapi:01
Now, from your Docker host, you can access your API by using that name: spring_api:8085/hello
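Putting the two steps together, as a quick sketch (spring_api is the name given above; the inspect command only works against a container, not an image):
# list running containers to find the container name or ID
docker ps
# inspect the container (by name or ID) to get its internal IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' spring_api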
You should never need to look up that IP address, and it often isn't useful.
If you're trying to call the service from outside Docker space, you've done the right thing: use the docker run -p option to publish its port to the host, and use the name of the host to access it. If you're trying to call it from another container, create a network, make sure to run both containers with a --net option pointing at that network, and they can reach each other using the other's --name as a hostname and the container-internal port the other service is listening on (-p options have no effect and aren't required).
The Docker-internal IP address just doesn't work in a variety of common situations. If you're on a different host, it will be unreachable. If your local Docker setup uses a virtual machine (Docker Machine, Docker for Mac, minikube, ...) you can't reach the IP address directly from the host. Even if it does work, when you delete and recreate the container, it's likely to change. Looking it up as you note also requires an additional (privileged) operation, which the docker run -p path avoids.
The invocation you have matches the docker inspect documentation (as #DanielHilgarth notes, make sure to run it on the container and not the image). In the specific situation where it will work (you are on the same native-Linux host as the container) you will need to use the unmapped port, e.g. http://172.17.0.2:8085/hello.
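A minimal sketch of the container-to-container setup mentioned above (the network name and the curlimages/curl test image are assumptions; use whatever client container you already have):
# create a user-defined network and attach the API container to it by name
docker network create spring_net
docker run -d --net spring_net --name spring_api -p 7777:8085 phillalexakis/myfirstapi:01
# another container on the same network reaches the API by its name and container port
docker run --rm --net spring_net curlimages/curl http://spring_api:8085/hello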

How to find out (or what is) the correct Docker Host URL to use in Jenkins to specify the Docker host?

I have Docker (Docker for Windows Server) and Jenkins both running on Windows Server 2019, and I am trying to add the Docker Host URI in Jenkins, but I run into a timeout or connection refused error every time and cannot connect to the Docker host.
I tried the following URLs:
tcp://:2375 (or 2376),
tcp://localhost:2375 (or 2376),
tcp://127.0.0.1:2375 (or 2376)
Do I have to configure anything else on Docker or Jenkins? I cannot move forward on this issue. It would really help me if anyone could provide some guidance to solve this.
I was facing a similar issue. You need to expose your Docker daemon on port 2375. I did it via Docker Desktop.
Then I added the Host URI as tcp://127.0.0.1:2375
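Before configuring Jenkins, it's worth checking from a shell on the same machine that the daemon really answers on that TCP endpoint (a quick sanity check; the port must match whatever you exposed):
# should print both Client and Server sections if the TCP endpoint is reachable
docker -H tcp://127.0.0.1:2375 version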

How to configure kube-proxy bind IP address?

For testing purposes, I want to set up the Kubernetes master to be accessible only from the local machine and not from outside. Ultimately I am going to run a proxy server Docker container on the machine that is opened up to the outside. This is all inside a minikube VM.
I figured configuring kube-proxy is the way to go. I did the following:
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
Upon doing netstat -ln | grep 8443 I see tcp 0 0 :::8443 :::* LISTEN, which means it didn't take the IP.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1, but that only changes the advertised address to 10.x.x.x in kubeadm config view. I feel that is probably the wrong thing anyway; I don't want the API server to be inaccessible to the other Docker containers that need to access it.
I have also tried kubeadm config upload from-file --config ~/cluster.yaml and then attempting to manually restart the Docker container running kube-proxy.
I also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out. When you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. Various online methods of restarting things don't seem to work either (I could just be doing this wrong).
I also tried editing the kube-proxy container's config file (bound to a local dir), but it gets overwritten when I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that lets me edit the config file of kube-proxy either (since it's a DaemonSet).
Ultimately, I wish to use an authenticated proxy server sitting in front of the k8s master (the apiserver specifically). Direct access to the k8s master from outside the VM will not work.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible at least via the local network where the other nodes reside, unless you want a single-node "cluster".
So, if you do not have a different network card that you could advertise or bind the address to, you need to limit access with the firewall or route rules mentioned above; a sketch of the firewall approach follows below.
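For example, a minimal iptables sketch that keeps port 8443 (taken from the netstat output above) reachable only from the machine itself; the interface and port are assumptions, adjust them to your setup:
# allow API server traffic that arrives over the loopback interface
iptables -A INPUT -i lo -p tcp --dport 8443 -j ACCEPT
# drop API server traffic coming from anywhere else
iptables -A INPUT -p tcp --dport 8443 -j DROP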
On your initial question topic, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586

How to diagnose a 404 Not Found error from nginx on Docker?

I'm trying to get laradock (Docker + Laravel) working,
following the instructions at https://github.com/LaraDock/laradock
I installed Docker and cloned laradock.git.
The laradock folder is located at
/myHD/...path../www/laradock
At the same level I have my Laravel projects:
/myHD/...path../www/project01
I edited laradock/docker-compose.yml:
### Laravel Application Code Container ######################
volumes_source:
    image: tianon/true
    volumes:
        - ../project01/:/var/www/laravel
After this (though I'm not sure this is how to correctly reload after editing the compose file), I did:
docker-compose up -d nginx mysql
Now, since I get an nginx 404 Not Found error: how can I debug the problem?
Additional info:
I entered the machine via bash:
docker-compose exec --user=laradock workspace bash
but I can't find the
/etc/nginx/... path (the nginx folder doesn't exist!?)
Guessing your nginx is not located in the workspace container; it resides in a separate container. You've executed the following:
docker-compose up -d nginx mysql
That would probably only run the nginx and mysql containers, not your php-fpm container. Also, the path to your volume is important, as the configuration in your nginx server depends on it.
To run php-fpm, add php-fpm or something similar to the docker-compose up command; check what this is called in your docker-compose.yml file, e.g.
docker-compose up -d nginx mysql phpfpm
To access your nginx container, first execute
docker ps -a
From the list, look for the ID of your nginx container, then run
docker exec -it <container-id> bash
This should give you access to your nginx container so you can make the required changes.
Or, without directly accessing the container, simply make the change in the nginx configuration file: look for the 'server' block and its 'root' directive, and change the root from /var/www/laravel/public to the new directory /project01/public.
Execute the command to bring your containers down:
docker-compose down
Then start over again with
docker-compose up -d nginx mysql phpfpm
Give it a go.
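If the 404 persists, it usually helps to look at what nginx is actually serving. A short debugging sketch (service names follow the docker-compose.yml used above; nginx -T needs a reasonably recent nginx):
# tail the nginx access/error logs while reproducing the 404
docker-compose logs -f nginx
# dump the full configuration the nginx container is really using
docker-compose exec nginx nginx -T
# check that the Laravel code is mounted where the server root points
docker-compose exec nginx ls /var/www/laravel/public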

Find host IP from a Docker container running in boot2docker / OS X

I have just attempted to move an app into a docker container using boot2docker on OS X.
This app needs to connect to a MySQL server running on the host system (not in the app's container or in another container).
Now I'm struggling with configuring the MySQL hostname within the "dockerized" app: so far it just connected to localhost, but that doesn't work anymore because localhost no longer points to the host MySQL is actually running on.
As a quick workaround, I have added my workstation's private IP (10.0.0.X in my case) to the app's MySQL connection config.
However, I wonder:
Is there an automatic way to find the host's private IP address from within a Docker container?
You can provide the IP and port through environment variables to the contained application.
Create a script in your container like:
#!/bin/bash
# MYSQL_HOST and MYSQL_PORT are injected via docker run -e, so they are already set here
echo $MYSQL_HOST
echo $MYSQL_PORT
# TODO: maybe you have to write some config files at this point
/start_your_app.sh # use the environment variables, e.g. in your app config.
Run your container image with:
docker run -e MYSQL_HOST=192.168.172.1 -e MYSQL_PORT=3306 your_image
Have a look at e.g.
http://serverascode.com/2014/05/29/environment-variables-with-docker.html
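To avoid hard-coding the workstation's IP in the run command, you can also look it up on the OS X side and pass it in; a small sketch, assuming the workstation's LAN interface is en0 (yours may differ):
# on the OS X host: resolve the workstation's private IP and hand it to the container
MYSQL_HOST=$(ipconfig getifaddr en0)
docker run -e MYSQL_HOST=$MYSQL_HOST -e MYSQL_PORT=3306 your_image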
