How can I upload the Docker image into GitHub?

I wanted to use the Kibana Docker image.
Below is the docker-compose file that I am using:
kibana:
  image: kibana
  links:
    - "elasticsearch"
  ports:
    - "9201:9201"
  networks:
    - cloud
This image still runs on 5601 even though I am specifying 9201.
I learned about changing the port for Kibana here.
If I do so, every time I run the container using docker-compose up it will pull the latest image.
For that reason, I want to put this Docker image into VSTS/Git so that I can modify its contents there and use it.
I don't want just a Docker image; I want its configuration, which I can change as per my requirements.
Any help will be appreciated.
Thanks in advance.

The Docker image is the base from which a container is created. When you run docker-compose up, Compose uses the Docker engine to start all the services that you have specified in the docker-compose.yml file.
If the containers do not exist, it will create new ones, and if they do exist, it will start your existing containers.
The YML file (docker-compose.yml) can be uploaded to whichever repository you wish, but if you want to upload an image, you will have to use a Docker registry.
That is probably not really what you want to do. You should rather store all your configuration in a mounted directory that lives on the computer running the Docker containers, instead of inside the image or container itself.
You could also create a Docker volume to store the configuration in; it will persist as long as you do not remove it.
Now, when it comes to the port issue: you are binding the container's 9201 to your 9201, and if the service in the container is not listening on 9201 you will not be able to reach it through that port.
Port mapping is done as host:container, so if you want to bind your computer's 9201 to the container's 5601 you write:
ports:
  - "9201:5601"
That maps the container's 5601 to your host's 9201.
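
Putting the two suggestions together, a minimal sketch of the kibana service with a bind-mounted configuration file and the corrected port mapping could look like this (the ./kibana/kibana.yml host path and the cloud network are assumptions based on the snippet above, and the container-side config path may differ by Kibana image version):

kibana:
  image: kibana
  links:
    - "elasticsearch"
  ports:
    # host 9201 -> container 5601 (Kibana's default port)
    - "9201:5601"
  volumes:
    # keep the configuration on the host so it survives image pulls
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
  networks:
    - cloud

With this layout you version the docker-compose.yml and kibana.yml in Git, and the image itself stays in a registry.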

Related

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to Docker, and I've tried searching about networking but haven't found a solution that works.
I have a Laravel app that is using Laradock.
I also have an external third-party API that runs in its own Docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, Laravel can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default, Docker Compose uses a bridge network to provide inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that, by default, Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on its default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
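
For the original setup with two separate docker-compose.yml files, one way to get the same name resolution is to attach both services to a pre-created network and mark it as external in each file. A minimal sketch, assuming the network was created beforehand with docker network create my-network, and using a hypothetical my-api service name and image:

# in each project's docker-compose.yml
services:
  my-api:                    # hypothetical service; use your real service names
    image: my-api:latest     # hypothetical image
    networks:
      - my-network

networks:
  my-network:
    external: true           # do not create it; reuse the existing my-network

With both containers attached to my-network, docker exec -ti laradock-workspace-1 ping my-api should resolve the service name.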

How to connect my spring boot app to redis container on docker?

I have created a Spring app and I want to connect it to a Redis server which is deployed with docker-compose. I put the needed properties as follows:
spring.redis.host=redis
spring.redis.port=6379
But I keep getting a ConnexionException, so how can I know on which host Redis is running and how to connect to it?
Here is my docker-compose file:
version: '2'
services:
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
From the Docker Compose documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
If you want to access Redis by that name ('redis' in this case), the Spring Boot application also has to be deployed as a docker-compose service, but it doesn't appear in the docker-compose file that you've provided in the question, so please add it (a sketch follows below).
Alternatively, if you're trying to run the Spring Boot application on the host machine, use 'localhost' instead of 'redis' in order to access the Redis container.
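
A minimal sketch of what that service could look like, added under the services: key of the compose file above (the image name my-spring-app and port 8080 are assumptions; adjust them to your project):

  app:
    image: my-spring-app:latest    # hypothetical image built from your Spring Boot app
    ports:
      - '8080:8080'
    environment:
      # same effect as spring.redis.host / spring.redis.port in application.properties
      - SPRING_REDIS_HOST=redis
      - SPRING_REDIS_PORT=6379
    depends_on:
      - redis

With the app running as a service on the same Compose network, the hostname redis resolves to the Redis container.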
Another approach you can use is a Docker network. Below are the steps to follow:
Create a Docker network for Redis:
docker network create redis-docker
Spin up the Redis container in the redis-docker network:
docker run -d --net redis-docker --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
Inspect the redis-docker network:
docker network inspect redis-docker
Copy the "IPv4Address" IP and paste it into application.yml.
Now build and start your application.
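
If you'd rather not read the full inspect output, a quick sketch for pulling just the container's IP (the format string is standard docker inspect templating; redis-stack is the container name from the step above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis-stack

Keep in mind that this IP can change whenever the container is recreated, so the compose-service hostname approach above is usually the more robust choice.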

How to deploy Spring Boot RESTful Web Service Docker img to EC2?

What I'm trying to do is simple: deploy a Spring Boot RESTful web service to EC2 so it's accessible publicly.
For this I need to do the following:
Write Spring Boot web service, containerize and test locally - done
Here is my Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
When I docker run this locally on 8080 it works fine (it returns static JSON).
Push to Docker Hub - done
Launch an Amazon Linux AMI on AWS and make it HTTP accessible (port 80) - done
Install Apache (httpd) and start it - done
Here is where I need some help.
I run the Docker image from Docker Hub like so:
docker run --rm -p 80:8080 kaspartr/demo
It fails because, of course, port 80 is already taken by Apache. And if I stop Apache and run the container, it is deployed but I cannot access it online.
Can someone please explain how you deploy a Docker image behind Apache?
Do I need to change the Dockerfile or something else?
Thank you!!
Typically I run the application on a separate port and let Docker forward it.
Add to your application.properties:
server.port=9001
And add to docker-compose.yml:
version: '3'
services:
  your-service:
    build: .
    ports:
      - '9001:9001'
    environment:
      SERVICE_URL: http://service:9001/path
Since Apache has already taken port 80, your container cannot bind to port 80 as well.
My guess is that you are planning to use Apache as a reverse proxy in front of your container. Publish the container on another port, and proxy port 80 to it with Apache.
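
A minimal sketch of that setup, assuming the container is published on host port 8080 and that Apache's mod_proxy and mod_proxy_http modules are loaded (both are assumptions about your httpd install on Amazon Linux):

# run the container on a free host port instead of 80
docker run --rm -d -p 8080:8080 kaspartr/demo

# then add a reverse-proxy rule to the Apache config (e.g. /etc/httpd/conf.d/app.conf):
#   ProxyPass "/" "http://localhost:8080/"
#   ProxyPassReverse "/" "http://localhost:8080/"
# and restart Apache (use `sudo service httpd restart` on older Amazon Linux AMIs)
sudo systemctl restart httpd

Requests hitting port 80 are then forwarded by Apache to the Spring Boot container on 8080.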

How do I run a docker-compose container in interactive mode such that other containers see it?

The scenario: I set a breakpoint in code, which is mounted (as a volume) into a container, created by docker-compose. The code is an odoo module, so it's part of the odoo container.
There is also a webapp container, which has a link to odoo in order to use the API.
The odoo container doesn't expose the API port because the host doesn't need it; the webapp container can, of course, see it.
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
  webapp:
    links:
      - odoo
Of course, the purpose of the breakpoint is to allow me to control the application - so I need a TTY attached. I can do this with docker-compose run --rm odoo. However, when I do this, it creates a new container, so the webapp never actually hits it. Plus, it doesn't tell me what the new container is called, so I have to manually figure it out to do that.
I can use docker exec to run another odoo in the odoo container but then I have to run it on a new port, and thus change the webapp config to use this new instance.
Is there a way of achieving what I want, i.e. to run the odoo container in interactive mode, such that the webapp container can see it, and without reconfiguring the webapp container?
The answer is to use tty: true in the docker-compose file, and docker attach to actually get that process attached to your terminal.
Try this and see if it works
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
    tty: true
    stdin_open: true
  webapp:
    links:
      - odoo
I have also added stdin_open in case you need it; if you don't, just remove it.
Edit-1
Also, if you need to attach to the running container, you need to use docker attach, as docker-compose doesn't have that functionality:
docker attach <containername>
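
As a concrete sketch of the workflow (the container name below is an assumption; Compose usually generates <project>_<service>_1 or, in newer versions, <project>-<service>-1, and docker-compose ps will show the real name):

docker-compose up -d            # start the stack as usual
docker-compose ps               # find the generated name of the odoo container
docker attach myproject_odoo_1  # hypothetical name; attaches your terminal to the running process
# detach again without stopping the container with Ctrl-p Ctrl-q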

Docker-compose links vs external_links

I believe it is a simple question but I still do not get it from the docker-compose documentation. What is the difference between links and external_links?
I like external_links, as I want to have a core docker-compose file and I want to extend it without overriding the core links.
What exactly I have: I am trying to set up Logstash, which depends on Elasticsearch. Elasticsearch is in the core docker-compose file and Logstash is in the dependent one. So I had to define Elasticsearch in the dependent docker-compose file as a reference, as Logstash needs it as a link. BUT Elasticsearch already has its own links, which I do not want to repeat in the dependent one.
Can I do that with external_links instead of links?
I know that links will make sure that the link is up first before linking; will external_links do the same?
Any help is appreciated. Thanks.
Use links when you want to link together containers within the same docker-compose.yml. All you need to do is set the link to the service name. Like this:
---
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
logstash:
  image: logstash:latest
  command: logstash -f logstash.conf
  ports:
    - "5000:5000"
  links:
    - elasticsearch
If you want to link a container inside the docker-compose.yml to another container that was not included in the same docker-compose.yml, or that was started in a different manner, then you can use external_links and set the link to the container's name. Like this:
---
logstash:
  image: logstash:latest
  command: logstash -f logstash.conf
  ports:
    - "5000:5000"
  external_links:
    - my_elasticsearch_container
I would suggest the first way, unless your use case for some reason requires that they cannot be in the same docker-compose.yml.
I don't think external_links does the same as links during docker-compose up.
links waits for the linked container to boot up and get an IP address, which is then written into the /etc/hosts file, whereas external_links expects the named container to already be running, so the IP:hostname entry comes from that existing container.
Moreover, links will be deprecated.
Here is a link to a Docker Compose project that uses Elasticsearch, Logstash, and Kibana. You will see that I'm using links:
https://github.com/bahaaldine/elasticsearch-paris-accidentology-demo
