I have set up the local environment of the project using Laradock and pushed the images to Amazon ECR.
Now I'm trying to deploy that to an EC2 instance.
My issues are:
Once I pull the images to the EC2 instance from ECR, should I include the docker-compose.yml file inside the laradock folder on the EC2 instance, or can I run the images using the "docker run" command?
Should I add the laradock folder to the remote server?
Once I pull the images to the EC2 instance from ECR, should I include the docker-compose.yml file inside the laradock folder on the EC2 instance, or can I run the images using the "docker run" command?
Yes, you should add docker-compose.yml, because that is the file docker-compose uses to know what services exist. You should run docker-compose up -d <service1> <service2> <service3> ... and figure out whether the configurations work for you out of the box.
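A minimal sketch of what that docker-compose.yml on the instance might look like, assuming hypothetical ECR image URIs (the registry host, repository names, and tags below are placeholders for whatever you pushed):
version: '3'
services:
  nginx:
    # placeholder URI - replace with your own ECR repository
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/laradock-nginx:latest
    ports:
      - "80:80"
    depends_on:
      - php-fpm
  php-fpm:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/laradock-php-fpm:latest
  mysql:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/laradock-mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: secret
Remember that the EC2 instance must be logged in to ECR (e.g. with aws ecr get-login-password piped into docker login) before docker-compose can pull these images.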
Should I add the laradock folder to the remote server?
Yes, this is where the specific images and configs are defined.
I can't access images inside /public/storage.
I have run php artisan storage:link.
The files are visible when using my local environment, but when switching to the Docker environment with the same code, I can't access them (not found).
The owner of my files is the same as in my Docker config.
I test it by visiting http://127.0.0.1:8000/storage/test.png,
where test.png is inside public/storage.
I have 3 services inside my docker-compose file:
Nginx
PHP
MySQL
I think the problem comes from permissions, but I can't figure out the solution.
As far as I know, each Docker container has its own IP address, so you have to start the server on the container's IP address. However, to avoid errors from the container's dynamic IP address, use 0.0.0.0 instead, because 0.0.0.0 means all IPv4 addresses on the local machine. To resolve your issue, run the following command inside the container:
php artisan serve --host 0.0.0.0
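For the server to be reachable from a browser on the host, the container port also has to be published; a hedged example, assuming artisan's default port 8000 and a hypothetical image name your-app:
# publish container port 8000 on the host, then bind artisan to all interfaces
docker run -p 8000:8000 your-app php artisan serve --host 0.0.0.0 --port 8000
With docker-compose, the equivalent is a ports: entry such as "8000:8000" on the service.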
Note:
You may run into other errors; let me know if you do.
In case anyone faces the same issue:
I managed to solve this by removing the symlink created by Laravel and making my own from my nginx container, based on the real path inside my container.
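A sketch of that fix, assuming the Laradock default mount point /var/www inside the container (adjust both paths to your own volume layout):
# open a shell in the container that owns the files
docker-compose exec workspace bash
# remove the symlink created on the host by `php artisan storage:link`
rm /var/www/public/storage
# recreate it against the real path inside the container
ln -s /var/www/storage/app/public /var/www/public/storage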
I have set up a laradock app on my local machine. I have followed the instructions as provided: http://laradock.io/
In addition to that, I am on Windows 7 and using Docker Toolbox.
When I access localhost I get this error: "This site can't be reached".
If you are using Docker Toolbox, you must place your project and laradock under C:\Users\your-user, or make a symlink to this folder. Check your Docker machine IP using docker-machine ip default. Then access your Docker machine IP and you will see your Laravel app.
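For example (192.168.99.100 below is only the usual Toolbox default, not necessarily your address):
$ docker-machine ip default
192.168.99.100
Then browse to http://192.168.99.100 instead of http://localhost.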
I have set up a laradock app on my local machine. I have followed the instructions as provided:
http://laradock.io/
In addition to that, as I am on Windows 10 and using Docker Toolbox, I have shared my folder with laradock's workspace. That's working fine, as I can see my app's folders inside the workspace when I run the following command:
docker-compose exec workspace bash
I have also added a host entry inside my hosts file on Windows:
127.0.0.1 localhost
But nothing works. I get the response "localhost refused to connect". Even CSS files inside the public folder are not accessible.
You should check out the nginx/sites folder inside your laradock folder. Check the root path inside default.conf - root /var/www/public; should correspond to where your project's /public folder actually is inside the workspace.
I personally use laradock for many projects and create multiple .conf files that correspond to my sites' names, like myproject1.test - myproject1.conf, and my file structure is like this:
/laradock
/myproject1
/myproject2
hosts file:
127.0.0.1 myproject1.test
127.0.0.1 myproject2.test
myproject1.conf inside nginx/sites:
...
server_name myproject1.test;
root /var/www/myproject1/public;
...
Hope that helps
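For reference, a minimal sketch of what a full myproject1.conf might contain, modeled on Laradock's bundled example config (php-upstream is the fastcgi target Laradock defines by default; verify the name in your copy):
server {
    listen 80;
    server_name myproject1.test;
    root /var/www/myproject1/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}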
I'm trying to get laradock (Docker + Laravel) working,
following the instructions at https://github.com/LaraDock/laradock.
I installed Docker and cloned laradock.git.
laradock folder is located at
/myHD/...path../www/laradock
at the same level I have my laravel projects
/myHD/...path../www/project01
I edited laradock/docker-compose.yml:
### Laravel Application Code Container ######################
volumes_source:
    image: tianon/true
    volumes:
        - ../project01/:/var/www/laravel
After this (though I'm not sure this is how to reload correctly after editing the compose file), I ran:
docker-compose up -d nginx mysql
Now I get an nginx 404 Not Found error: how can I debug the problem?
Additional info:
I entered the machine via bash:
docker-compose exec --user=laradock workspace bash
but I can't find the
/etc/nginx/... path (the nginx folder doesn't exist!?)
I'm guessing your nginx is not located in the workspace container; it resides in a separate container. You've executed the following:
docker-compose up -d nginx mysql
That would probably only run the nginx and mysql containers, not your php-fpm container. Also, the path to your volume is important, as the configuration in your nginx server depends on it.
To run php-fpm, add php-fpm or something similar to the docker-compose up command; check what this service is called in your docker-compose.yml file, e.g.
docker-compose up -d nginx mysql php-fpm
To access your nginx container, first of all execute
docker ps -a
From the list, look for the ID of your nginx container before running
docker exec -it <container-id> bash
This should then give you access to your nginx container to make the required changes.
Or, without directly accessing the container, simply make the changes in the nginx configuration file: look for 'server' and the 'root', and change the root from /var/www/laravel/public to the new directory /project01/public.
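A hedged sketch of that edit, assuming Laradock's default site config at nginx/sites/default.conf (filename and layout taken from Laradock's defaults). Note that with the volume mapping from the question (../project01/:/var/www/laravel), the path as seen inside the container would be /var/www/laravel/public:
server {
    ...
    # must match where your project is mounted inside the container
    root /var/www/laravel/public;
    ...
}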
Execute the command to bring down your containers:
docker-compose down
Start over again with
docker-compose up -d nginx mysql php-fpm
Give it a go
As part of "How to access a Docker container service from the outside world, e.g. from the parent Windows host machine",
I followed these steps:
1) On a Windows machine (10.204.255.*/16), I created a Vagrant VM (172.17.0.*/24). Inside the Vagrant VM I created different Docker images based on my requirements.
2) As part of the image creation, I built a centos:6.6 image and installed and ran the ACE-TAO service inside it.
3) The TAO service is running properly, and it is binding to the specific container IP and port (like 172.17.0.10:13021).
Reference: I am able to create the images, run them to create containers, and install the TAO rpm, and the TAO service runs successfully.
The issue is that I'm not able to ping this IP from the outside world, e.g. from my Windows machine.
I'm attaching my Dockerfile here:
FROM centos:6.6
MAINTAINER praveen
WORKDIR /root/
# copy and install the TAO rpm
ADD TAO-1.7.7-0.x86_64.rpm /root/TAO-1.7.7-0.x86_64.rpm
RUN rpm -ivh TAO-1.7.7-0.x86_64.rpm
# document the port the TAO service listens on
EXPOSE 13021
# start the TAO service when the container runs
CMD ["/etc/init.d/tao", "start"]
I believe this is a common use case of Docker:
a service installed in a dockerized container should be accessible from the host machine if we access it via ip:port.
ACE-TAO's behavior is that the rpm is installed on a specific host, so we can access the CORBA service via the URL corba://(tao_service_running_ip):(listening port).
In order to meet this requirement, I need to access it from the host machine.
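For context, a hedged sketch of how a container service is normally reached from outside: EXPOSE only documents the port, so the port has to be published when the container starts (tao-image is a placeholder name), and because the Docker host here is a Vagrant VM, the same port also has to be forwarded from the VM to Windows:
# on the Vagrant VM: publish container port 13021 on the VM's interfaces
docker run -d -p 13021:13021 tao-image

# in the Vagrantfile on the Windows host: forward the VM port to Windows
config.vm.network "forwarded_port", guest: 13021, host: 13021
Container IPs like 172.17.0.10 live on a bridge network private to the Docker host, which is why pinging them directly from Windows fails.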