Set up a single CLI to interface with multiple containers? - shell

I'm not a Docker expert and have been struggling with this problem for a few hours now -
I have 2 independent images - one for a Python REPL and another for a Scheme REPL. I want to create an application that provides a single CLI wrapped around two containers, one running each image - so that when I enter python it connects to the Python REPL container and executes everything that follows there, whereas scheme connects to the Scheme REPL container.
I have 2 questions -
a) Is this possible at all using Docker Compose? Also, does this really qualify as a use case for Docker Compose?
b) Suppose I start with the following bare-bones docker-compose.yml -
version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
  racket:
    image: "m4burns/racket"
Do I set up the common CLI (say, a bash shell) in a third container that communicates with the other two when I issue the python or scheme command? How do I define the entrypoints?
I know a simpler solution would be to write a Dockerfile that combines the Python and Scheme setups into a single image. However, I really want to keep them separate and hence am going down this path.
Any help will be appreciated.

Using docker-compose does not give you a single CLI interface. Instead, two separate containers are created; the advantage is that the containers can communicate with each other using the service names you specify.
version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
  racket:
    image: "m4burns/racket"
From the example you gave, python can access racket by using http://racket. The containers can reach each other using their service names: by default, the service name is treated as the hostname, and this hostname is used for communication. Most people rely on this default behavior, but you can also specify a hostname separately.
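To get the single CLI the question asks for, one option is a thin wrapper script on the host that uses docker-compose exec to attach to the right service. A minimal sketch, assuming the compose file above and that the racket image starts its REPL with the racket binary (the script name repl.sh is made up):

#!/bin/sh
# repl.sh - open an interactive REPL in the matching service container.
# Assumes both services are already running (docker-compose up -d).
case "$1" in
  python) exec docker-compose exec python python ;;
  scheme) exec docker-compose exec racket racket ;;
  *) echo "usage: $0 {python|scheme}" >&2; exit 1 ;;
esac

Note that exec only works while the containers are running; since both images default to an interactive REPL, you would typically add stdin_open: true and tty: true to each service (the compose equivalent of docker run -it) so they do not exit immediately.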

Related

Application dependencies (other apps)

We need to deploy our 4 applications (3 Spring Boot apps and 1 ZooKeeper) with docker stack. As our DevOps guy told us, there is no way to define in docker stack which application depends on another, as there is in docker compose, so we as developers need to solve it in code.
Can you tell me how to do that, or what the best way is? One of our applications has to start first because it manages the database (migrations and so on). The other applications can start once the database is prepared. Any ideas? Thanks.
If you want to run all 4 applications in one Docker container, you can refer to this post: Run multiple services in a container.
If you want to docker-compose the 4 applications, you can refer to this post on startup order; it uses depends_on between your app images.
Whichever way you go, you must write a script that checks whether your first app has finished managing the database; you can refer to wait-for-postgres.sh to learn how to use sleep in a shell loop to repeatedly check your first app's status.
A more precise way I can suggest is, for example:
set a shared static variable to false:
public static boolean is_app_start = false;
when you have finished managing your database, change this value to true
write a @RequestMapping("/is_app_start") in your controller to return this value
use curl in your shell script to check the value (see the sketch below)
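Putting those steps together, a hedged sketch of the polling script (the hostname app1, the port, and the interval are assumptions, not from the posts above):

#!/bin/sh
# wait-for-app.sh - block until the first app reports the database is ready,
# then exec the real command (same pattern as wait-for-postgres.sh).
until [ "$(curl -fs http://app1:8080/is_app_start)" = "true" ]; do
  echo "waiting for app1 to finish database setup..."
  sleep 2
done
exec "$@"

The dependent services would then start via something like command: ["./wait-for-app.sh", "java", "-jar", "app.jar"].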

Docker volume usage for storing jpg

So, I built a small application to test how Docker works; it is a small Laravel app that registers users with their profile image. Everything is working properly, but the profile image is not being displayed.
I assume this is because of how Docker works (ephemeral, unchanging, etc.), so I was reading a bit about volumes, but unfortunately I was not able to make it work.
The images are stored inside a folder called uploads within the public folder (Laravel structure).
In my docker-compose.yml file I have the following volumes defined:
volumes:
  - mysql-data:/var/lib/mysql
So I tried to add the one that I need, something like this:
volumes:
  - mysql-data:/var/lib/mysql
  - user-images:/app/public/uploads

volumes:
  mysql-data:
  user-images:
I also tried with bind mounts, but I think those can only be used with docker container run (not quite sure).
Any idea on how I could fix this?
Thanks
user-images:/app/public/uploads would be a named volume, stored by Docker under /var/lib/docker/volumes.
If you want to use a bind mount, that is, mounting a host folder as a volume, use a path:
volumes:
  - "./user-images:/app/public/uploads"
See also "Laravel + Docker" and its part 2 for a more complete example.
I'm assuming your image files are in the directory tree with the Dockerfile/docker-compose file. If that is the case, you actually don't want to use a named volume, given those are stored elsewhere on your system (the Mountpoint shown by docker volume inspect, under /var/lib/docker/volumes) and it would require you to move your images to that location.
What you likely want to use is a bind mount (the --mount flag on the CLI), which in your compose file's long syntax would look like so...
volumes:
  - type: bind
    source: ./path/to/file
    target: /app/public/uploads
See the Docker volumes docs for more info. I wrote the long version, which I prefer as it's more explicit, but the short version does the same thing. You should be aware that using a bind volume will overwrite any files you might have added to the actual image if they overlap. Tip: this is handy when you have a hot-reloading dev server, as you can change your files locally and they will be run in the container's context.
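For context, a minimal sketch of how this could look in the asker's compose file (the service name app and the image name are hypothetical; the paths follow the question's Laravel layout):

version: '3.3'
services:
  app:
    image: my/laravel-app          # hypothetical image name
    volumes:
      - type: bind
        source: ./public/uploads   # host folder holding the uploaded images
        target: /app/public/uploads
  mysql:
    image: mysql:5.7
    volumes:
      - mysql-data:/var/lib/mysql  # named volume, managed by Docker
volumes:
  mysql-data:

Note that the long volume syntax requires compose file format 3.2 or newer, so version: '3.3' above is fine.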

Docker and Rancher

I never really understood how to start a Docker container and how to keep it alive.
I have a question: when you start a container in the terminal you must provide a command so that it stays alive, and when you don't provide one it exits and restarts every time; you can provide /bin/bash so it stays open. (Could you show me the right way to do this, keeping it open with bash?)
When it comes to Rancher, when you create a new container you can provide the command too, but if you don't, the container doesn't restart; it stays alive. What does this mean, that it has a default command (/bin/bash)? What command exactly does Rancher execute to start the container?
Thank you all
It is probably best if you read up a bit on Docker, to get the various concepts clear. From your use of "a docker", it seems that you don't really have all the pieces yet for an easy understanding.
A quick layout would be that you have
Image. I have seen this compared to a 'class' in programming
Container. In the same comparison, this would be an object: an instance of a class.
If you want to run something with Docker, you start a container from an image, just like if you want to create an object, you create one from a class (let's not take this comparison/simile too far).
Now a container's purpose is to run something, or rather, to run a single something. So "keeping a docker open" is not something you should want. What you want is to run, for instance, a server. Or a script.
Every container runs a single process (or should run one). As the 'official' use case is not 'create a virtual server you can play around in', it might behave strangely or feel complicated if you want a place to ssh into rather than one specific thing to run.
This also means you don't want to run any services in the background: if you run apache, you want to run it not as a daemon, but just run it: that's what the docker container is for. If you need to run something else (for instance, a database server), you would start a second container.
There might be exceptions to this, but to get your head around why stuff works as it does, you should probably start somewhat religiously with these 'rules', and go on from there.
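To answer the practical part of the question, a few illustrative commands (image and container names are only examples):

# an interactive container kept open by bash: it lives as long as the shell does
docker run -it --name sandbox ubuntu /bin/bash

# the usual pattern: run the real workload in the foreground as PID 1
docker run -d --name web nginx    # the nginx image runs nginx in the foreground

# with no TTY and nothing long-running to execute, the container exits immediately
docker run ubuntu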

Correct way to link redis to sinatra docker containers

I am writing a small microservices-based app, and in it I have a Redis instance that some Ruby code/containers access via Resque. Currently in my docker-compose file I am linking like so:
redis:
  image: redis:latest
  ports:
    - '6379:6379'
ruby_worker:
  image: my_user/my_image:latest
  links:
    - redis:db
This works fine (I only named it :db for now because that is the example I found when looking up linking).
In my ruby code, I have to set up my Resque redis server like:
Resque.redis = ENV['DB_PORT_6379_TCP_ADDR'] + ':6379'
But this just doesn't seem right. It is dependent on that exact redis name, and if I had to spin up a different redis instance (like I did while playing with Docker Cloud today), it doesn't find the redis server. In total I have 3 containers (so far) connecting to this redis for Resque: a small Sinatra-based front end and 2 workers. I am not much of a Rails person, and had never used Resque until 3 days ago, so sorry if I missed some of the basics.
Is there a better way to connect to my redis instance in my ruby code? Is there a way to pass the redis name in my docker-compose? Right now my resque-web container is configured like below and it seems to work fine:
resque:
  image: ennexa/resque-web
  links:
    - redis:redisserver
  ports:
    - "5678:5678"
  command: "-r redis://redisserver:6379"
Don't use those link environment variables: they are deprecated, and they don't exist in user-defined networks anymore.
A good way to do this would be to default to the hostname redis. You can always alias any service to have that name, so you should almost never need to change it from the default. For example:
links:
  - someservice:redis
To override the default, you can create some application-specific env var like REDIS_HOSTNAME and set it from the environment section (but again, this is almost never necessary):
environment:
  - REDIS_HOSTNAME=foo
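Putting both pieces together, a minimal sketch using the question's service names (the REDIS_HOSTNAME override is the hypothetical app-specific variable described above, usually unnecessary):

redis:
  image: redis:latest
ruby_worker:
  image: my_user/my_image:latest
  links:
    - redis                  # the service is already named "redis", so no alias is needed
  environment:
    - REDIS_HOSTNAME=redis   # optional app-specific override

The Ruby side then stops depending on the link-generated variables, for example Resque.redis = (ENV['REDIS_HOSTNAME'] || 'redis') + ':6379'.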

How to run an application inside docker safely

I want to run an arbitrary application inside a Docker container safely, like within a VM. To do so, I save the application (which I downloaded from the web and do not trust) inside a directory on the host system, create a volume that maps this directory to the home directory of the container, and then run the application inside the container. Are there any security issues with this approach? Are there better solutions to accomplish the same task?
Moreover, to install all the necessary dependencies, I execute an arbitrary script in a bash shell running inside the container: could this be dangerous?
To add to @Dimitris's answer, there are other things you need to consider.
There are certain things containers do not contain. Docker uses namespaces to alter a process's view of the system, i.e. network, shared memory, etc. But you have to keep in mind it is not like KVM: Docker containers do talk to the kernel directly, unlike KVM VMs, e.g. through /proc/sys.
So if the arbitrary application tries to access kernel subsystems like cgroups, /proc/sys, /proc/bus, etc., you could be in trouble. I would say it's fine unless it's a multi-tenant system.
As long as you do not give the application sudo access you should be good to try it out.
Dependencies are better off defined in the Dockerfile in a clear way for others to see. Opting to run a script instead will also do the job, but it's more inconvenient.
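As an illustration of locking the container down further, a hedged sketch of commonly used docker run hardening flags (the image tag, user id, and paths are hypothetical):

# Drop all capabilities, forbid privilege escalation, cut off the network,
# run as an unprivileged user, and mount the untrusted app read-only.
docker run --rm -it \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --network none \
  --user 1000:1000 \
  -v "$PWD/untrusted-app:/home/user/app:ro" \
  alpine:3.6 /bin/sh /home/user/app/run.sh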
