So, I built a small application to test how Docker works. It's a small Laravel app that registers users along with a profile image. Everything works properly, but the profile image is not being displayed.
I assume this is because of how Docker works (ephemeral containers, immutable images, etc.), so I was reading a bit about volumes, but unfortunately I was not able to make it work.
The images are stored inside a folder called uploads within the public folder (standard Laravel structure).
In my docker-compose.yml file I have the following volumes defined:
volumes:
  - mysql-data:/var/lib/mysql
So I tried to add the one I need, something like this:
volumes:
  - mysql-data:/var/lib/mysql
  - user-images:/app/public/uploads

volumes:
  mysql-data:
  user-images:
I also tried with bind mounts, but I think those can only be used with docker container run (not quite sure).
Any idea how I could fix this?
Thanks
user-images:/app/public/uploads would be a named volume, stored under /var/lib/docker/volumes.
If you want to use a bind mount, that is, to mount a host folder as a volume, use a path:
volumes:
  - "./user-images:/app/public/uploads"
See also "Laravel + Docker" and its part 2 for a more complete example.
I'm assuming your images are in the directory tree with the Dockerfile/docker-compose.yml. If that is the case, you actually don't want to use a named volume, since those are stored elsewhere on your system (under the "Mountpoint" path reported by docker volume inspect) and would require you to move your images to that location.
What you likely want is a bind mount, which in your compose file (using the long volume syntax) would look like this:
volumes:
  - type: bind
    source: ./path/to/file
    target: /app/public/uploads
See the Docker volumes docs for more info. I wrote the long version, which I prefer as it's more explicit, but the short version does the same thing. You should be aware that a bind mount will hide any files you might have added to the actual image if the paths overlap. Tip: this is handy when you have a hot-reloading dev server, as you can change your files locally and they are run in the container's context.
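To make that concrete, a minimal docker-compose.yml along these lines might work; the service names, image names, and the host path ./public/uploads are assumptions about your project layout, so adjust them to match your app:

services:
  app:
    image: my-laravel-app:latest          # placeholder for your application image
    volumes:
      # bind mount: uploads written to /app/public/uploads inside the container
      # land in ./public/uploads on the host and survive container rebuilds
      - ./public/uploads:/app/public/uploads
  mysql:
    image: mysql:8.0
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:

With a bind mount there is no need to declare user-images under the top-level volumes key; only named volumes such as mysql-data are declared there.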
I have recently decided to try out Laravel Sail instead of my usual setup with Vagrant/Homestead. Everything seems to be beautifully and easily laid out, but I cannot seem to find a workaround for changing domain names in the local environment.
I tried serving the application on, say, port 89 with APP_PORT=89 sail up, which works fine using localhost:89; however, it seems cumbersome to try to remember which port belongs to which project before starting it up.
I am looking for a way to change the default port so that I don't have to specify the port every time I want to sail up. Then I can use an alias like laravel.test for localhost:89, so I don't have to remember ports anymore; I can just type the project names.
I tried changing the /etc/hosts file but found out it doesn't actually help with different ports.
I am essentially trying to access my project by simply typing 'laravel.test' in the browser, for example.
Also open for any other suggestions to achieve this.
Thanks
**Update**
After all this searching I actually decided to change my workflow to only have one app running at a time, so now I am just using localhost, and my CPU and RAM love it. So if you are here moving from Homestead to Docker, ask yourself whether you really need to run all these apps at the same time. If the answer is yes, research on; if not, just use localhost, there is nothing wrong with it.
To change the local name in Sail from the default 'laravel.test' and the port, add the following to your .env file:
APP_SERVICE="yourProject.local"
APP_PORT=89
This will take effect when you build (or rebuild using sail build --no-cache) your Sail container.
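For reference, the stock docker-compose.yml that Sail publishes into a project reads APP_PORT when mapping the web port, roughly like this fragment (the exact contents vary between Sail versions, and laravel.test is the default service name):

services:
  laravel.test:
    ports:
      - '${APP_PORT:-80}:80'

That is why setting APP_PORT in .env is enough to change the published port without editing the compose file.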
And to be able to type 'yourProject.local' into your web browser and have it load your web page, make sure your hosts file is updated by adding
127.0.0.1 yourProject.local
to your hosts file. This file is located at:
Windows 10 – “C:\Windows\System32\drivers\etc\hosts”
Linux – “/etc/hosts”
Mac OS X – “/private/etc/hosts”
You'll need to close all browser instances and reopen them after making changes to the hosts file. With this, try entering the alias both with and without the port number to see which works; since you already set the port via .env, you may not need to include it in the alias.
If this doesn't work, also change APP_URL in .env to http://yourProject.local:89.
OK, another option: in /routes/web.php I assume you have a route set up that either returns a view or calls a controller method. You could test whether you can have it do
return redirect('http://yourProject.local:89');
This may involve a little playing around with the routes/controller, but it may be worth looking into.
I have an application that uses about 20GB of raw data. The raw data consists of binaries.
The files rarely - if ever - change. Changes only happen if there are errors within the files that need to be resolved.
The simplest way to handle this would be to put the files in their own git repository and create a base image based on that, then build the application on top of the raw-data image.
Having a 20GB base image for a CI pipeline is not something I have tried and does not seem to be the optimal way to handle this situation.
The main reason for my approach here is to prevent extra deployment complexity.
Is there a best practice, "correct" or more sensible way to do this?
Huge, mostly-static data blocks like this are probably, to me, the one big exception to the “Docker images should be self-contained” rule. I'd suggest keeping this data somewhere else and downloading it separately from the core docker run workflow.
I have had trouble in the past with multi-gigabyte images. Operations like docker push and docker pull in particular are prone to hanging up on the second gigabyte of individual layers. If, as you say, this static content changes rarely, there’s also a question of where to put it in the linear sequence of layers. It’s tempting to write something like
FROM ubuntu:18.04
ADD really-big-content.tar.gz /data
...
But even the ubuntu:18.04 image changes regularly (it gets security updates fairly frequently; your CI pipeline should explicitly docker pull it), and when it does, a new build will have to transfer this entire unchanged 20 GB block again.
Instead I would put the data somewhere like an AWS S3 bucket or similar object storage. (This is a poor match for source control systems, which (a) want to keep old content forever and (b) tend to be optimized for text rather than binary files.) Then I'd have a script that runs on the host, downloads that content, and mounts the corresponding host directory into the containers that need it:
curl -LO http://downloads.example.com/really-big-content.tar.gz
tar xzf really-big-content.tar.gz
docker run -v $PWD/really-big-content:/data ...
(In Kubernetes or another distributed world, I’d probably need to write a dedicated Job to download the content into a Persistent Volume and run that as part of my cluster bring-up. You could do the same thing in plain Docker to download the content into a named volume.)
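For the plain-Docker variant, a compose sketch along these lines could populate a named volume with a one-off download service before the application starts. The image names and the app service are placeholders, and the depends_on condition needs a reasonably recent docker compose:

services:
  fetch-data:
    image: alpine:3.19
    command: >
      sh -c "wget -O /data/really-big-content.tar.gz http://downloads.example.com/really-big-content.tar.gz
      && tar xzf /data/really-big-content.tar.gz -C /data
      && rm /data/really-big-content.tar.gz"
    volumes:
      - big-data:/data
  app:
    image: my-app:latest                  # hypothetical application image
    depends_on:
      fetch-data:
        condition: service_completed_successfully
    volumes:
      - big-data:/data:ro                 # the application only needs to read the content

volumes:
  big-data:

Because the named volume persists between runs, you could also have the fetch service skip the download when the content is already present, so the 20 GB transfer only happens once per host.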
I'm not a Docker expert and have been struggling with this problem for a few hours now -
I have 2 independent images - one for a Python REPL and another for a Scheme REPL. I want to create an application that provides a single CLI interface wrapped around 2 containers running these images - so that when I enter python it connects to the Python REPL container and executes everything that follows there, whereas scheme connects to the Scheme REPL container.
I have 2 questions -
a) Is this possible at all using Docker Compose? Also, does this really qualify as a use case for Docker Compose?
b) Suppose I start with the following bare-bones docker-compose.yml -
version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
  racket:
    image: "m4burns/racket"
Do I set up the common CLI (say, a bash shell) in another container that communicates with the other two when I issue the python or scheme command? How do I define the entrypoints?
I know a simpler solution would simply be to make a Dockerfile that combines both the Python & Scheme setup into a single image. However, I really want to keep them separate and hence am going down this path.
Any help will be appreciated.
Using docker-compose does not give you a single CLI interface. Instead, two separate containers are created, but the advantage is that the containers can communicate with each other using the service names that you specify.
version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
  racket:
    image: "m4burns/racket"
From the example you gave, python can access racket by using http://racket. The containers can reach each other using their service names: by default, the service name is treated as the hostname, and that hostname is used for communication. Most people use this default behavior; however, you can also specify a hostname separately.
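As for the single-CLI part of the question, one pattern (a sketch, not something from the answer above) is to keep both services defined with a TTY and attach to whichever REPL you want via docker-compose run, assuming each image's default command starts its REPL:

version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
    stdin_open: true    # keep STDIN open so the REPL can read input
    tty: true           # allocate a pseudo-TTY for interactive use
  racket:
    image: "m4burns/racket"
    stdin_open: true
    tty: true

Then docker-compose run --rm python drops you into the Python REPL and docker-compose run --rm racket into the Scheme one; a pair of shell aliases named python and scheme gives the single-CLI feel without merging the two images.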
I am writing a small microservices-based app, and in it I have a Redis instance that some Ruby code/containers access via Resque. Currently in my docker-compose file I am linking like so:
redis:
  image: redis:latest
  ports:
    - '6379:6379'
ruby_worker:
  image: my_user/my_image:latest
  links:
    - redis:db
This works fine (I only name it :db for now because that is the example I found when looking up linking).
In my ruby code, I have to set up my Resque redis server like:
Resque.redis = ENV['DB_PORT_6379_TCP_ADDR'] + ':6379'
But this just doesn't seem right. It is dependent on that exact Redis name, and if I had to spin up a different Redis instance (like I did while playing with Docker Cloud today), it doesn't find the Redis server. In total I have 3 containers (so far) connecting to this Redis for Resque: a small Sinatra-based front end and 2 workers. I am not much of a Rails person and had never used Resque before 3 days ago, so sorry if I missed some of the basics.
Is there a better way to connect to my Redis instance in my Ruby code? Is there a way to pass the Redis name in my docker-compose? Right now my resque-web container is configured like below and it seems to work fine:
resque:
  image: ennexa/resque-web
  links:
    - redis:redisserver
  ports:
    - "5678:5678"
  command: "-r redis://redisserver:6379"
Don't use those link environment variables; they are deprecated, and they no longer exist in user-defined networks.
A good way to do this would be to default to the hostname redis. You can always alias any service to that name, so you should almost never need to change it from the default. For example:
links:
  - someservice:redis
To override the default, you can create some application-specific env var like REDIS_HOSTNAME and set it from the environment section (but again, this is almost never necessary):
environment:
  - REDIS_HOSTNAME=foo
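Putting that together, a sketch of the compose file without links might look like this (compose v2+ file format, where all services share a default network and resolve each other by service name; the env var is only there to show the override hook):

version: '2'
services:
  redis:
    image: redis:latest
  ruby_worker:
    image: my_user/my_image:latest
    environment:
      - REDIS_HOSTNAME=redis    # optional override; the code can just default to "redis"
  resque:
    image: ennexa/resque-web
    command: "-r redis://redis:6379"
    ports:
      - "5678:5678"

On the Ruby side, Resque.redis could then be built from ENV.fetch('REDIS_HOSTNAME', 'redis') plus the port, instead of the generated DB_PORT_6379_TCP_ADDR variable.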
I want to run an arbitrary application inside a Docker container safely, like within a VM. To do so I save the application (which I downloaded from the web and do not trust) inside a directory of the host system, create a volume that maps this directory to the home directory of the container, and then run the application inside the container. Are there any security issues with this approach? Are there better solutions to accomplish the same task?
Moreover, to install all the necessary dependencies, I let an arbitrary script execute in a bash shell running inside the container: could this be dangerous?
To add to Dimitris' answer, there are other things you need to consider.
There are certain things containers do not contain. Docker uses namespaces to alter the process's view of the system (network, shared memory, etc.). But keep in mind it is not like KVM: containers talk to the kernel directly, unlike KVM VMs, for example through /proc/sys.
So if the arbitrary application tries to access kernel subsystems like cgroups, /proc/sys, /proc/bus, etc., you could be in trouble. I would say it's fine unless it's a multi-tenant system.
As long as you do not give the application sudo access you should be good to try it out.
Dependencies are better off defined in the Dockerfile in a clear way for others to see. Opting to run a script instead will also do the job, but it's less convenient.
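As a rough sketch of those suggestions (run as a non-root user with no sudo, keep the untrusted code on a plain bind mount, drop privileges the application shouldn't need), a compose service could look like this; the image name, user ID, and paths are placeholders, and this reduces risk rather than giving VM-level isolation:

services:
  sandbox:
    image: untrusted-app-env:latest     # hypothetical image with the dependencies installed
    user: "1000:1000"                   # unprivileged user, no sudo inside the container
    cap_drop:
      - ALL                             # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true          # block privilege escalation via setuid binaries
    volumes:
      - ./untrusted-app:/home/app       # the downloaded application from the host

If the application does not need network access, network_mode: none tightens things further.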