Docker Compose Kong with Deck installed - api-gateway

I'm looking into using Deck for Kong to perform synchronized migration. However, I can't seem to find a way to install Deck cli into my Kong container using docker-compose.
Is there any guide/documentation I can follow to perform such an installation?

I think this can be resolved with the kong/deck container image if you follow the steps below (a sketch of these steps follows the list).
1) Write a Dockerfile.deck based on the kong/deck image.
2) Write a wait script for Kong's admin port (8001).
3) kong/deck is just a container for the CLI, so note that its default ENTRYPOINT is the deck command; you should reset the ENTRYPOINT if you want to run the wait script instead.
4) Copy kong.yaml from your local directory into the container.
5) Add your deck service to docker-compose.yml with a build configuration pointing at the Dockerfile from step 1).
6) Run docker-compose up after the build.
This will work if you want to apply the kong.yaml at deployment time.
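For illustration, a minimal sketch of those steps; the file, service, and script names are assumptions, as is a shell being available in the kong/deck image, and newer deck releases use the deck gateway subcommands instead:

# Dockerfile.deck (assumed name)
FROM kong/deck
# kong/deck's default ENTRYPOINT is the deck binary, so override it to run the wait script instead
COPY wait-and-sync.sh /wait-and-sync.sh
COPY kong.yaml /kong.yaml
RUN chmod +x /wait-and-sync.sh
ENTRYPOINT ["/wait-and-sync.sh"]

#!/bin/sh
# wait-and-sync.sh -- wait until Kong's admin API (port 8001) answers, then apply kong.yaml
until deck ping --kong-addr http://kong:8001 >/dev/null 2>&1; do
  echo "waiting for the kong admin api..."
  sleep 2
done
deck sync -s /kong.yaml --kong-addr http://kong:8001

# docker-compose.yml fragment -- the "deck" service name is an assumption
services:
  deck:
    build:
      context: .
      dockerfile: Dockerfile.deck
    depends_on:
      - kong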

Related

Spring Boot app image not working on AWS ECS

I am learning AWS ECS. As part of that, I created a sample application with the following Dockerfile:
FROM adoptopenjdk/openjdk11
ARG JAR_FILE=build/libs/springboot-practice-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
EXPOSE 80
ENTRYPOINT ["java","-jar","/app.jar"]
The app has just one endpoint /employee which returns a hardcoded value at the moment.
I built this image locally and ran it successfully using the command
docker run -p 8080:8080 -t chandreshv85/demo
Pushed it to a Docker repository
Created a Fargate cluster and task definition
Created a task in running state based on the above definition.
Now when I try to access the endpoint it's giving 404.
Suspecting that something could be wrong with the security group, I created a different task definition based on another image (another repo) and it ran fine with the same security group.
Now, I believe that something is wrong with the springboot image. Can someone please help me identify what is wrong here? Please let me know if you need more information.

Curious about the use of docker-compose and dockerfile

I'm studying Docker.
I understand docker-compose to be a tool that conveniently runs multiple containers from one file.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with the Dockerfile?
I thought Docker Compose could cover everything, but I have seen docker-compose and Dockerfiles used together.
Let's take spring boot as an example.
Can I use only one docker-compose to run the db container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to separate the Dockerfiles by role and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file, built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
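For illustration, a minimal sketch of such a Dockerfile; the file names, user name, and start command are assumptions:

FROM python:3
# install the requirements, create the app folder, then switch to a non-root user
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
COPY . /app
WORKDIR /app
RUN adduser --disabled-password --gecos "" appuser
USER appuser
CMD ["python", "app.py"]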
Once we have done that, we can create an image using Docker by running docker build . (the trailing dot is the build context). If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to setup volumes, you need to run them like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands with the appropriate configurations and ID's for each container you want to run, every time you want to start your service...
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running service should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
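For example, assuming the docker-compose.yml shown above:

docker-compose build    # runs docker build for every service that has a build: section
docker-compose images   # lists the images Compose built and tagged for this project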
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running with one command, just by running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project's docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
As for your question: Docker Compose is so flexible that you can split your composition logic across multiple YAML files and combine them on the docker-compose command line as you need.
Here is an example:
# Build the docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and the dependencies, but then the question arises whether you are using Compose's power properly. @Antonio Petricca has already given a well-described answer for Compose.
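For illustration, a minimal sketch of such a single docker-compose.yml for a Spring Boot app; the database image, port, and credentials are assumptions, and the app service is built from your project's own Dockerfile:

services:
  db:
    image: postgres:15            # assumed database; swap in MySQL or whatever you use
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .                      # builds the Spring Boot image from ./Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db

Running docker-compose up then builds the app image if needed, starts the database, and starts the app, all in one command.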
Regarding the Compose file and Dockerfile question: it depends on how you write your Compose file; technically the two are different things.
So, in short:
1- Compose and Dockerfile are two different things.
2- Compose can use multiple modular files, even Dockerfiles, and that is why it is so popular; you can break your logic into multiple modules and then use Compose to build it. It also helps you debug and iterate faster.
Hope this answers your doubt.

How can I Dockerize my web API on Windows

Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows systems.
Start your favorite shell (cmd.exe, PowerShell, or other) to check your versions of docker and docker-compose, and verify the installation.
PS C:\Users\Docker> docker --version
Docker version 17.03.0-ce, build 60ccb22
PS C:\Users\Docker> docker-compose --version
docker-compose version 1.11.2, build dfed245
Your question is not very specific, but it appears that you are trying to containerize an ASP.NET web app. Here is a basic outline of what you want to accomplish by using Docker.
Docker is a Linux container system, meaning it is based on the Linux kernel; by installing Docker on Windows you are installing a Linux guest machine to build your containers in, and you will customize your containers to forward ports so that the app served from inside the container reaches your host machine. So, basically, how is this going to happen? After installing Docker, Docker first needs a base image (a Linux image) to run your containers from, and a great place to find Docker images is Docker Hub. For a basic scenario you need to:
1) Pull an image.
2) Run a container based on this image.
To accomplish number 1, we will use the Microsoft dotnet official Docker Hub repository as an example.
docker pull microsoft/aspnetcore
docker pull: will pull the dotnet:latest image from Docker Hub. :latest is a tag that specifies the latest stable release of dotnet; if you want another runtime version, you would use docker pull dotnet:runtime. In the dotnet official Docker Hub link above you will find the available tags listed under Supported tags.
To accomplish number 2: we need to run a container by using this image.
docker run -d -p 8000:80 --name firstwebapptest microsoft/aspnetcore
docker run: will create a container named firstwebapptest based on microsoft/aspnetcore, forwarding the container port 80 to the host port 8000, and all of that will run in detached mode (-d).
And now check your browser localhost:8000
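If the page does not load, you can confirm the container is up and that the port mapping took effect with:

docker ps --filter name=firstwebapptest   # should show the container and the 0.0.0.0:8000->80/tcp mapping
docker logs firstwebapptest               # shows the app's startup output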
This is a very basic scenario using the docker command line tools.
Another way to accomplish this scenario is by using a Dockerfile. You will find How to use this image in the Microsoft dotnet official Docker Hub link; it assumes that you are already in your app directory, which contains your compiled myapp.dll. What you will do is create a file called Dockerfile in this directory and write this inside:
FROM microsoft/aspnetcore
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
FROM: the base image that we already pulled.
WORKDIR: the working directory inside the Linux container.
COPY . .: the first . copies your host directory's content into the container; the second . is the destination directory inside the container, which in this case is /app (the WORKDIR).
ENTRYPOINT: the command that will run once this container is up and running, in this case dotnet myapp.dll. That means you are running the dotnet command from the Linux container inside the WORKDIR /app, with all of your host app directory's structure, including your compiled myapp.dll, that we already copied with COPY . .
So now we have the Dockerfile; all we need is to build and run it.
docker build -t secondwebapptest .
docker run -d -p 8001:80 secondwebapptest
docker build: will build an image tagged (-t) secondwebapptest from the current directory; the dot refers to the build context that contains the Dockerfile you just wrote, assuming you are already in that working directory. Otherwise you would have to specify a path to the Dockerfile by using -f, but that is not our case.
docker run: will run a container based on the secondwebapptest image, forwarding the container port 80 to the host port 8001, and all of that will run in detached mode (-d).
And now check your browser localhost:8001

Docker run script in host on docker-compose up

My question relates to best practices on how to run a script on a docker-compose up directive.
Currently I'm sharing a volume between host and container to allow for the script changes to be visible to both host and container.
It is similar to a watcher script polling for changes on a configuration file. The script has to act on the host when changes occur, according to predefined rules.
How could I start this script on a docker-compose up directive, or even from the Dockerfile of the service, so that whenever the container goes up the "watcher" can find any changes being made and written?
The container in question will always run over a Debian / Ubuntu OS and should be architecture independent, meaning it should be able to run on ARM as well.
I wish to run a script on the host, not inside the container. I need the host to change its network interface configuration to easily adapt to any environment; it is the HOST that needs to change, I repeat. This should be seamless to the user, and easily editable via a web interface running inside a CONTAINER, to adapt to new environments.
I currently do this with a script running on the host based on crontab. I just wish to know the best practices and examples of how to run a script on the HOST from INSIDE a CONTAINER, so that the deployment can be as easy for the installing operator as running docker-compose up.
I just wish to know the best practices and examples of how to run a script on HOST from INSIDE a CONTAINER, so that the deploy can be as easy for the installing operator to just run docker-compose up
It seems that there is no best practice that can be applied to your case. A workaround proposed here: How to run shell script on host from docker container? is to use a client/server trick.
The host should run a small server (choose a port and specify a request type that you should be waiting for)
The container, after it starts, should send this request to that server
The host should then run the script / trigger the changes you want
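For illustration, a minimal sketch of that trick; the port number, paths, and script names are assumptions, and netcat flag syntax differs between variants:

# on the HOST: wait for any connection on port 9999, then run the host-side script
while true; do
  nc -l -p 9999 >/dev/null                 # traditional netcat; OpenBSD netcat uses "nc -l 9999"
  /usr/local/bin/update-network.sh         # hypothetical script that reconfigures the host
done

# from INSIDE the container: trigger the listener when a change is detected
# (host.docker.internal resolves to the host on Docker Desktop; on Linux add it via extra_hosts)
curl -s --max-time 2 http://host.docker.internal:9999/ >/dev/null || true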
This is something that might have serious security issues, so use at your own risk.
The script needs to run continuously in the foreground.
In your Dockerfile use the CMD directive and define the script as the parameter.
When using the cli, use docker run -d IMAGE SCRIPT
You can create an alias for docker-compose up. Put something like this in ~/.bash_aliases (in Ubuntu):
alias up="docker-compose up; ~/your_script.sh"
I'm not sure if running scripts on the host from a container is possible, but if it's possible, it's a severe security flaw. Containers should be isolated, that's the point of using containers.

DOCKER - LAMP Stack issues - Premade Image

Trying to set up a LAMP stack with Docker, I found and tried to use https://hub.docker.com/r/linode/lamp/.
But I can't find, and don't know how to access, the files linked to the domain, or how to change the domain name from example.com, and so on.
I think my real question is how do I change files in, or rebuild, an image from other people.
First of all I want to mention I'm not a big fan of this image and approach, because it bundles multiple services; I would recommend using a container for apache2, a container for mysql, etc.
But for the LAMP setup I'm using the documentation provided on the site.
I have a path /xx/test/index.html which contains some HTML. I will map the container's port to a host port and mount my files to the right folder in the container.
docker run -p 80:80 -t -i -v /root/test/:/var/www/example.com/public_html/ linode/lamp /bin/bash
I'm using -ti and starting a bash session. Inside it, the apache2 and mysql services are started (this is the approach of the official documentation, not mine; it is a strange approach):
root@35d00285b625:/# service apache2 start
 * Starting web server apache2
root@35d00285b625:/# service mysql start
 * Starting MySQL database server mysqld            [ OK ]
 * Checking for tables which need an upgrade, are corrupt or were
   not closed cleanly.
After starting the services you can exit the container by pressing ctrl + p then ctrl + q. Now you can open your server-ip:80 in a browser to check your HTML page. If you want to replace example.conf you can mount your own apache2 configuration too.
If you want to change folder names inside the image, I would recommend creating your own Dockerfile which starts with:
FROM linode/lamp
RUN <your changes here>
First of all, Consider using microservices in separate containers. This will provide advantages like:
Fault Containment
Ease of Upgrades
Eliminates long-term commitment to a single technology stack
Easy to scale
System resilience
...
Now, Docker was created with microservices in mind, so for your LAMP stack I recommend using Apache+PHP in one container and MySQL in another container. To make your containers communicate with each other, create a user-defined network and put both containers in it.
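For illustration, a minimal sketch of that setup with the docker CLI; the network name, container names, and password are assumptions:

# create a user-defined bridge network
docker network create lamp-net

# run MySQL and Apache+PHP on that network; they can reach each other by container name
docker run -d --name db --network lamp-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web --network lamp-net -p 80:80 php:7.4-apache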
Now back to your question:
You have 3 options for using your custom configuration files:
Mount your configuration files when creating the container (recommended):
sudo docker run -d --name my-apache -v /path/to/custom/httpd.conf:/usr/local/apache2/conf/httpd.conf httpd
Please note this example is using the library (official) httpd image from Docker Hub; you should consult the image creator's instructions for custom images.
You can manually edit the configuration file inside a running container and commit it as a new image.
sudo docker commit my-apache myrepository/myimagename:tag
sudo docker run -d myrepository/myimagename:tag
Create your own image via a Dockerfile, using the FROM <base image> directive.
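For illustration, a minimal sketch of that third option, again using the official httpd image; the config file name is an assumption:

FROM httpd
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf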
