First I just want to mention I am very new to Docker.
I am using Win 10, "Docker for Windows".
I am using the default linux containers option.
I have downloaded the latest image from here,
https://github.com/camunda/docker-camunda-bpm-platform.
So now, my Docker is online, and the container + image are working. A tomcat server and a Camunda engine are online and working.
My problem is the following:
I need to make some changes and I can't find where Tomcat and Camunda are stored. I need to edit some XML files both in Camunda and in Tomcat (to set up which database to use, for example).
Can it be that it is not being stored on my local machine?
For example, when I open the container with Kitematic (the Docker UI) I can see its environment variables. There is a SERVER_CONFIG variable whose value is /camunda/conf/server.xml (this is one of the files I need to edit!), but I can't find it or anything else anywhere on my local machine.
You should access the container using the following commands:
sudo docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS      PORTS   NAMES
5e978f353734   camunda/camunda-bpm-platform:latest   "/sbin/tini -- ./cam…"   4 days ago   Up 4 days
then run:
sudo docker exec -it 5e978f353734 /bin/bash
You will then get a shell inside the container. Good luck!
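If you only need to edit a single file, such as the server.xml mentioned in the question, another option is to copy it out with docker cp, edit it on the host, and copy it back. A minimal sketch, using the container ID from the example above and the path from the SERVER_CONFIG variable:

docker cp 5e978f353734:/camunda/conf/server.xml ./server.xml
# edit server.xml on the host, then copy it back and restart the container
docker cp ./server.xml 5e978f353734:/camunda/conf/server.xml
docker restart 5e978f353734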
You may want to consider using Camunda BPM RUN, which aims to allow configuration without having to change the WAR deployment or Tomcat. Instead configuration is done as described here:
https://docs.camunda.org/manual/latest/user-guide/camunda-bpm-run/
Config files can be mounted into the docker images, but you may prefer to compose your own docker image based on the Camunda BPM Run base image.
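For example, a bind mount can replace the server.xml inside the container with one from your host. A sketch, where the host path and image tag are assumptions and the container path is the one shown by the SERVER_CONFIG variable:

docker run -d --name camunda -p 8080:8080 \
  -v /path/on/host/server.xml:/camunda/conf/server.xml \
  camunda/camunda-bpm-platform:latest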
The example here shows another approach, which sets Camunda properties from outside the image by passing the environment variable SPRING_APPLICATION_JSON into the container:
https://medium.com/@robert.emsbach/anyone-can-run-camunda-bpm-on-azure-in-10-minutes-4b4055cc8e9
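A minimal sketch of that approach, assuming the Camunda BPM Run image tag and illustrative property values:

docker run -d --name camunda -p 8080:8080 \
  -e SPRING_APPLICATION_JSON='{"camunda.bpm.admin-user.id":"admin","camunda.bpm.admin-user.password":"secret"}' \
  camunda/camunda-bpm-platform:run-latest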
I'm studying Docker.
I understand docker-compose as a tool that conveniently runs multiple containers from one script.
First, since a Dockerfile only handles one container, is it correct to think that Docker Compose is backwards compatible with the Dockerfile?
I thought Docker Compose could cover everything, but I have seen Docker Compose and Dockerfiles used together.
Let's take Spring Boot as an example.
Can I use only one docker-compose file to run the DB container required for the application, build the application, check the port being used, and run the jar file?
Or do I have to split the roles between Dockerfiles and docker-compose and use the two together?
When working with Docker, there are two concepts: Image and Container.
Images are like mini operating systems stored in a file which is built specifically with our application in mind. Think of it like a custom operating system which is sitting on your hard disk when your computer is switched off.
Containers are running instances of your image.
Imagine you had a shared hard disk (or even CD/DVD if you are old school) which had an operating system which can run on multiple machines. The files on the disk are the "image", and those files running on a machine are a "container".
This is how Docker works, you have files on the machine which are known as the image, and running instances of those files are referred to as the container. Images can also be uploaded and shared for other users to download and run on their machine too.
This brings us to Docker vs Docker Compose.
Docker is the underlying technology which manages (creates, stores or shares) images, and runs containers.
We provide the Dockerfile to tell Docker how to create our images. For example, we say: start from the Python 3 base image, then install these requirements, then create these folders, then switch to this user, etc... (I'm oversimplifying the actual steps, but this is just to explain the point).
Once we have done that, we can create an image using Docker by running docker build . in the project directory. If you run that, Docker will execute each step in our Dockerfile and store the result as an image on the system.
Once the image is built, you can run it manually using something like this:
docker run <IMAGE_ID>
However, if you need to set up volumes and ports, you need to run it like this:
docker run -v /path/to/vol:/path/to/vol -p 8000:8000 <IMAGE_ID>
Often applications need multiple images to run. For example, you might have an application and a database, and you may also need to setup networks and shared volumes between them.
So you would need to write the above commands with the appropriate configurations and ID's for each container you want to run, every time you want to start your service...
As you might expect, this could become complex, tedious and difficult to manage very quickly...
This is where Docker Compose comes in...
Docker Compose is a tool used to define how Docker runs our images, in a YAML file which can be stored with our code and reused.
So, if we need to run our app image as a container, share port 8000 and map a volume, we can do this:
services:
  app:
    build:
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./app:/app
Then, every time we need to start our app, we just run docker-compose up, and Docker Compose will handle all the complex docker commands behind the scenes.
So basically, the purpose of Docker Compose is to configure how our running services should work together to serve our application.
When we run docker-compose build, Docker Compose will run all the necessary docker build commands, to build all images needed for our project and tag them appropriately to keep track of them in the system.
In summary, Docker is the underlying technology used to create images and run them as containers, and Docker Compose is a tool that configures how Docker should run multiple containers to serve our application.
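For the Spring Boot case from the question, a docker-compose.yml along these lines would run both the application and its database together. This is only a sketch; the image names, ports, and credentials are assumptions:

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
  app:
    build:
      context: .          # uses the Dockerfile in this directory
    ports:
      - 8080:8080
    depends_on:
      - db

volumes:
  db-data:

With this, docker-compose up --build builds the app image from its Dockerfile and starts both containers with one command.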
I hope this makes sense?
I suggest you go deeper into what Docker Compose is by reading Difference between Docker Compose Vs Dockerfile.
I quote part of what is explained in the above article:
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
Docker Compose defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment. It gets an app running with one command, by just running docker-compose up. Docker Compose uses the Dockerfile if you add the build command to your project’s docker-compose.yml. Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use Compose to assemble the images using the build command.
As for your question: Docker Compose is flexible enough that you can split your composition logic across multiple YAML files and combine them on the docker-compose command line as you need.
Here an example:
# Build the docker infrastructure
docker-compose \
  -f network.yaml \
  -f database.yaml \
  -f application.yaml \
  build

# Run the application
docker-compose \
  -f application.yaml \
  up
To answer your question about the Spring Boot application: you can build the complete application through Compose alone, as long as you know the sequence and dependencies, but then the question arises whether you are using Compose's power properly. @Antonio Petricca has already given a well-described answer for Compose.
Regarding the Compose file and Dockerfile question: it depends on how you write your Compose file; technically the two are different.
So, in short:
1- Compose and Dockerfile are two different things.
2- Compose can use multiple modular files, even Dockerfiles, and that is why it is so popular: you can break the logic into multiple modules and then use Compose to build them. It also helps you debug and iterate faster. A minimal Dockerfile for the Spring Boot case is sketched below.
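A minimal Dockerfile that a Compose build: section could point at for a Spring Boot jar might look like this. It is only a sketch; the base image, jar path, and port are assumptions:

FROM eclipse-temurin:17-jre
WORKDIR /app
# copy the jar produced by the Maven/Gradle build into the image
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]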
Hope this answers your doubt.
This question already has an answer here:
MySQL container crash after /etc/mysql/my.cnf change, how to edit back?
Good morning.
I am currently using Docker version 19.03 on Mac OS X Catalina.
I installed MariaDB 10.3 in Docker, installed vim to set the character set, and modified the /etc/mysql/my.cnf file.
After the modification I tried to restart the container so the change would take effect, but it did not start normally.
When I checked with the docker ps -a command, STATUS showed an Exited (1) error.
When I checked the container's logs, I found the following error:
unknown variable 'collection-server=utf8_unicode_ci'
Stupidly there was a typo in the settings.
So I am trying to fix this setting, but there is no way to edit it because the Docker container will not start.
docker-compose.yml is not in use.
The simplest way would be to delete the Docker container and recreate it, but I don't think this is the right way.
Is there a way to modify /etc/mysql/my.cnf inside Docker Container without using docker-compose.yml?
You can use "docker config" to manage your configuration.
$ docker config --help

Usage:  docker config COMMAND

Manage Docker configs

Commands:
  create      Create a configuration file from a file or STDIN as content
  inspect     Display detailed information on one or more config files
  ls          List configs
  rm          Remove one or more configuration files

Run 'docker config COMMAND --help' for more information on a command
You can create a config and add it to, or remove it from, your container. You can add a configuration file to your container either using Docker Compose or using the docker command line.
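A minimal sketch of that flow. Note that docker config is a Swarm feature, so it needs docker swarm init first; the config name, target path, and image tag here are assumptions:

# create a config object from the corrected my.cnf on the host
docker config create my_cnf ./my.cnf

# run MariaDB as a service with the config mounted at the MySQL config path
docker service create --name mariadb \
  --config source=my_cnf,target=/etc/mysql/my.cnf \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb:10.3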
Please check this Medium article, which will help you try it hands-on.
Is it somehow possible to build images without having Docker installed? During the Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of a VirtualBox image with Docker installed, but that is a rather heavy solution. Is there a way to build the image with a Maven plugin only, with some Go code, or with an already prepared VirtualBox image made for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for builds or even for running Docker images.
UPDATE
There are some, not really up to date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker using Bazel.
So far I see two options: either I can somehow build the images only, or I can run some VM with a Docker daemon inside (which can be used not only for builds, but also for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don’t need docker to run it - it builds the image using the same standard output as you get from docker build but doesn’t use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don’t need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).

Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
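As a rough sketch, adding the Jib Maven plugin to your pom.xml and pointing it at a registry could look like this (the plugin version and target image are assumptions):

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>registry.example.com/my-app:latest</image>
    </to>
  </configuration>
</plugin>

Then mvn compile jib:build builds and pushes the image without a local Docker daemon (mvn jib:dockerBuild would use a local daemon instead).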
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For example with Spring Boot refer https://spring.io/blog/2018/11/08/spring-boot-in-a-container
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good maven integration, uses a remote docker (openshift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a docker daemon but does have other pre-requisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications on to Kubernetes and OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / Openshift cluster available. It uses the Openshift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly, no Dockerfile necessary, just write your application code and it takes care of all the boilerplate.
Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on Openshift. The minimum maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle this too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:
create a working container, either from scratch or using an image as a starting point
create an image, either from a working container or via the instructions in a Dockerfile
build images in either the OCI image format or the traditional upstream docker image format
mount a working container's root filesystem for manipulation
unmount a working container's root filesystem
use the updated contents of a container's root filesystem as a filesystem layer to create a new image
delete a working container or an image
rename a local container
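For example, a minimal sketch of building from an existing Dockerfile with Buildah, without a Docker daemon (the image name is an assumption):

# build an image from the Dockerfile in the current directory
buildah bud -t my-app:latest .

# list the resulting local images
buildah images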
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API, which allows you to call a Docker daemon on a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or the HTTP library which is part of most modern programming languages.
For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are lots of resources on this subject, such as the official docs, here or here). From then on, your Docker build can be done remotely without having to install Docker locally.
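As a sketch, assuming a remote daemon already secured with TLS on a host of your choosing and the Fabric8 docker-maven-plugin configured in the project, the build machine only needs the client environment variables:

# point the client and the docker-maven-plugin at the remote daemon (hostname is an assumption)
export DOCKER_HOST=tcp://docker-daemon.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker

# the image is then built on the remote daemon
mvn clean package docker:build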
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
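A rough sketch of running the Kaniko executor inside Docker itself (the registry destination and paths are assumptions):

docker run \
  -v "$(pwd)":/workspace \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile /workspace/Dockerfile \
  --context dir:///workspace/ \
  --destination registry.example.com/my-app:latest

Pushing to a registry additionally requires mounting credentials into the executor at /kaniko/.docker/config.json.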
I was running into the same problems and did not find any solution, so I developed odagrun. It is a runner for GitLab with an integrated registry API that can update Docker Hub, MicroBadger, etc.
It is open source and has an MIT license.
It is ideal for creating a Docker image on the fly, without the need for a Docker daemon, a root account, or any base image at all (image: scratch will do). It is currently still in development, but I use it every day.
Requirements
project repository on Gitlab
an OpenShift cluster (an openshift-online-starter will do for most medium/small projects)
An extract of how the Docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config
I was trying to add some files inside a Docker container, e.g. with touch. I found that after I shut down this container and bring it up again, all my files are lost. Also, I'm using an Ubuntu image; after shutting down and restarting the same image, all the software I had installed with apt-get is gone! It is just like running a new image. So how can I save any file that I created?
My question is: does Docker store all its file systems, like /tmp, as in-memory file systems, so that nothing is actually saved to disk?
Thanks.
This is normal behaviour for Docker. You have to define a volume to save your data; those volumes will persist even if you shut down your container.
For example with a simple apache webserver:
$ docker run -dit --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This will mount your "current" directory to /usr/local/apache2/htdocs in the container, so those files will be available there.
Another approach is to use named volumes; these are not linked to a directory on your disk. Please refer to the docs:
Docker Manage - Data
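A small sketch of the named-volume approach (the volume and container names are arbitrary):

# create a named volume managed by Docker
docker volume create my-data

# mount it into a container; files written to /data survive container removal
docker run -it --name test -v my-data:/data ubuntu bash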
When you start a container using the docker run command, e.g. docker run ubuntu, Docker starts a new container based on the image you specified. Any changes you made in a previous container will not be available, as this is a new instance spawned from the base image.
There are multiple ways to persist your data/changes in your container.
Use Volumes.
Data volumes are designed to persist data, independent of the container’s lifecycle. You could attach a data volume or mount a host directory as a volume.
Use Docker commit to create a new image with your changes and start future containers based on that image.
docker commit <container-id> new_image_name
docker run new_image_name
Use docker ps -a to list all the containers. It will list all containers including the ones that have exited. Find the docker id of the container that you were working on and start it using docker start <id>.
docker ps -a #find the id
docker start 1aef34df8ddf #start the container in background
References
Docker Volumes
Docker Commit
I am trying to set up a LAMP stack with Docker, and I found and tried to use https://hub.docker.com/r/linode/lamp/.
But I can't find, and don't know how to access, the files linked to the domain, or how to change the domain name from example.com, and so on.
I think my real question is: how do I change files in, or rebuild, an image from other people?
First of all I want to mention I'm not a big fan of this image and approach, because it bundles multiple services. I would recommend using one container for apache2, one container for mysql, etc.
But for the setup of LAMP, I'm using the documentation provided on the site.
I have a file /xx/test/index.html which contains some HTML. I will map the container's port to a host port and mount my files to the right folder in the container.
docker run -p 80:80 -t -i -v /root/test/:/var/www/example.com/public_html/ linode/lamp /bin/bash
I'm using -ti and starting a bash session. Inside it, you start the apache2 and mysql services yourself (this is the approach of the official documentation, not mine; it's a strange approach):
root@35d00285b625:/# service apache2 start
 * Starting web server apache2                                       *
root@35d00285b625:/# service mysql start
 * Starting MySQL database server mysqld                      [ OK ]
 * Checking for tables which need an upgrade, are corrupt or were
   not closed cleanly.
After starting the services you can exit the container by pressing ctrl + p then ctrl + q. Now you can open your server-ip:80 to see your HTML page. If you want to replace example.conf you can mount your own apache2 configuration too.
If you want to change folder names inside the image, I would recommend creating your own Dockerfile which starts with:
FROM linode/lamp
RUN <your changes...>
First of all, consider using microservices in separate containers. This will provide advantages like:
Fault Containment
Ease of Upgrades
Eliminates long-term commitment to a single technology stack
Easy to scale
System resilience
...
Now, Docker was created with microservices in mind, so for your LAMP stack I recommend using Apache+PHP in one container and MySQL in another container. To make your containers communicate with each other, create a user-defined network and put both containers in it.
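A minimal sketch of that layout (the image tags, names, and passwords are assumptions):

# create a user-defined bridge network
docker network create lamp-net

# database container
docker run -d --name db --network lamp-net \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8

# Apache+PHP container; it can reach the database at the hostname "db"
docker run -d --name web --network lamp-net \
  -p 80:80 -v "$PWD"/src:/var/www/html php:8.2-apache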
Now back to your question:
You have 3 options for using your custom configuration files:
Mount your configuration files when creating a container (recommended):
sudo docker run -d --name my-apache -v /path/to/custom/httpd.conf:/usr/local/apache2/conf/httpd.conf httpd
Please note this example is using the library (official) apache2 image from Docker Hub; you should consult the image creator's instructions for custom images.
You can manually edit the configuration file inside a running container and commit it as a new image.
sudo docker commit my-apache myrepository/myimagename:tag
sudo docker run -d myrepository/myimagename:tag
Create your own image via a Dockerfile, using the FROM <base image> directive.
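For this third option, a minimal sketch based on the official httpd image (the config file name is an assumption):

FROM httpd:2.4
# bake your customized configuration into the image
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf

Build it with docker build -t my-apache2 . and run it with docker run -d my-apache2.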