I am new to the Docker concept. For an application I need to create an nginx container.
The nginx configuration is in a Chef cookbook, so I have an example.conf.erb file and default.rb (containing the nginx setup) in my chef/cookbook/ directory. I am not sure how to containerise this. I copied the .conf.erb to /etc/nginx/conf.d/example.conf.erb, but I am not sure what else to do. I am confused and cannot find a resource online; I need help urgently.
default.rb :
include_recipe 'nginx_ldap_auth'
include_recipe 'nginx'

template 'nginx config' do
  source 'example.conf.erb'
  path '/etc/nginx/conf.d/example.conf.erb'
  owner 'root'
  group 'root'
  mode ''
  variables({'environment variables'})
  notifies :restart, 'service[nginx]'
end
My Dockerfile currently looks like this:
FROM nginx:alpine
COPY default.conf /etc/nginx/example.conf.erb/
I am not sure if I need docker-compose. Apart from the Dockerfile, there is not much else I have created. Please guide me.
Either let Chef create the config file first, and then run docker build to create the image.
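With that first option the Dockerfile stays trivial, along these lines (the rendered file name below is an assumption; Chef has already written it on the machine running the build):

FROM nginx:alpine
# example.conf was already rendered by Chef on the build host
COPY example.conf /etc/nginx/conf.d/example.conf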
Or you can look at multi-stage Dockerfiles.
With multiple stages you can first use an image that includes Chef, create the config file there, and then copy it into the nginx image.
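A rough sketch of that approach, assuming a Chef-capable base image such as chef/chefdk and a trimmed run list that only renders the config (the cookbook path and recipe name are placeholders, and the rendered path just follows the template resource in the question; installing nginx itself inside the build stage is unnecessary):

# Stage 1: render the config with Chef in local mode (no Chef server needed)
FROM chef/chefdk AS chef-config
WORKDIR /
COPY chef/cookbook/ /cookbooks/example/
RUN chef-client --local-mode --override-runlist 'recipe[example::default]'

# Stage 2: plain nginx image that only receives the rendered file
FROM nginx:alpine
COPY --from=chef-config /etc/nginx/conf.d/example.conf.erb /etc/nginx/conf.d/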
Related
I'm using the logstash image, and I have some Ruby scripts located in the same directory as my Dockerfile.
My goal is to create a scripts folder and copy my scripts into it.
My problem is that when I access the container instance, the folder is not created and no Ruby file exists.
This is my Dockerfile:
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
USER root
WORKDIR /usr/share/logstash
RUN mkdir scripts
COPY ./scripts/*.rb scripts
Thanks in advance.
EDIT 1:
This is the file structure:
I needed to create a Docker image of a Spring Boot application, and I achieved that by creating a Dockerfile and building it into an image. Then I used "docker run" to bring up a container. This container is used for all the activities for which my application was written.
My problem, however, is that the JAR file I use needs constant changes, and that requires me to rebuild the Docker image every time. Furthermore, I need to take the contents of the earlier running container and transfer them into a container created from the newly built image.
I know this whole process can be written as a shell script and executed every time I have changes to my JAR file. But is there any tool I can use to automate it in a simple manner?
Here is my Dockerfile:
FROM java:8
WORKDIR /app
ADD ./SuperApi ./SuperApi
ADD ./config ./config
ADD ./Resources ./Resources
EXPOSE 8000
CMD java -jar SuperApi/SomeName.jar --spring.config.location=SuperApi/application.properties
If you have a JAR file that you need to copy into an otherwise static Docker image, you can use a bind mount to save needing to rebuild repeatedly. This allows for directories to be shared from the host into the container.
Say your project directory (the build location where the JAR file is located) on the host machine is /home/vishwas/projects/my_project, and you need to have the contents placed at /opt/my_project inside the container. When starting the container from the command line, use the -v flag:
docker run -v /home/vishwas/projects/my_project:/opt/my_project [...]
Changes made to files under /home/vishwas/projects/my_project locally will be visible immediately inside the container [1], so there is no need to rebuild (and probably no need to restart) the container.
If using docker-compose, this can be expressed using a volumes stanza under the services listing for that container:
volumes:
  - type: bind
    source: /home/vishwas/projects/my_project
    target: /opt/my_project
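For context, a minimal sketch of a full compose file with that stanza in place (the service name app and the image are placeholders):

version: '3.7'
services:
  app:
    image: openjdk:8-jre   # placeholder; use whatever image runs your JAR
    volumes:
      - type: bind
        source: /home/vishwas/projects/my_project
        target: /opt/my_project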
This works for development, but later on it's likely you'll want to bundle the JAR file into the image instead of sharing it from the host system (so it can be shipped to production). When that time comes, just rebuild the image and add a COPY directive to the Dockerfile; note that the source path is resolved relative to the build context rather than as an absolute host path, so building from the project directory it would look like:
COPY . /opt/my_project
[1]: Worth noting that the mount defaults to read/write, so the container will also be able to modify your project files. To mount it read-only, use: docker run -v /home/vishwas/projects/my_project:/opt/my_project:ro
You are looking for Docker Compose.
You can build and start containers with a single command using Compose.
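A minimal sketch against the Dockerfile shown in the question (the service name and port mapping are assumptions):

version: '3'
services:
  superapi:
    build: .          # builds the Dockerfile from the question
    ports:
      - "8000:8000"

Then docker-compose up --build -d rebuilds the image and replaces the running container in one step.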
I have one installation of Rundeck on a Linux server and it is up and running on port 4440. But I want to have one more installation of it, running on another port. Is that possible? This question may look weird, but I want an additional Rundeck setup for personal reasons.
Eagerly looking for help. Thanks in advance.
You can test your "personal instance" with a Docker container without touching the "real instance" (or use two Docker containers if you want). In both cases, you need to specify different ports (for example, 4440 for the "real" instance/container and 5550 for the "test" container).
Here you have the official Docker image, and here is how to run it; check the "Environment variables" section for how to specify the TCP port of each container (you also have a lot of params to play with).
And here you have a lot of configurations to test (LDAP, DB backends, etc.).
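As a sketch, two instances side by side could look like the commands below (the rundeck/rundeck image name and the RUNDECK_GRAILS_URL variable come from the official image docs; verify them against the version you use):

# "real" instance stays on 4440
docker run -d --name rundeck-main -p 4440:4440 \
  -e RUNDECK_GRAILS_URL=http://localhost:4440 \
  rundeck/rundeck

# personal/test instance published on host port 5550 (inside the container Rundeck still listens on 4440)
docker run -d --name rundeck-test -p 5550:4440 \
  -e RUNDECK_GRAILS_URL=http://localhost:5550 \
  rundeck/rundeck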
If you use Rundeck with Docker, you must change init.sh.
It overwrites the configuration at each container creation, so all your configuration updates are lost.
Doing this also avoids having configuration params in clear text in your docker-compose file...
The steps are:
create the docker-compose file as mentioned on the Rundeck Docker Hub page
map volumes on your host so you can save Rundeck's files and directories (see the sketch after these steps)
stop your container
comment out the config overwrite in init.sh
restart your container
You can then update Rundeck's config on the fly and just restart the Rundeck container to see the changes...
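A minimal volume-mapping sketch for the second step (the host paths are placeholders and the container paths assume the default layout of the official image; adjust to your installation):

services:
  rundeck:
    image: rundeck/rundeck
    ports:
      - "4440:4440"
    volumes:
      # persist data and config on the host so edits survive container recreation
      - ./rundeck/data:/home/rundeck/server/data
      - ./rundeck/config:/home/rundeck/server/config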
Consider the following YAML code in my docker-compose.yml file that sets up volume mounting (using version 3.7), using short form syntax as specified in the docs:
volumes:
  - ./logging:/var/log/cron
This maps the relative path logging on my host machine to the /var/log/cron folder inside the container. When I run docker-compose up, if the logging folder doesn't exist on my host machine, Docker creates it. All good there.
Now, if I change the above to long-form syntax:
volumes:
  - type: bind
    source: ./logging
    target: /var/log/cron
Now when I run docker-compose up, it DOES NOT create the logging folder if it doesn't exist on my host machine. I get:
Cannot create container for service app: b'Mount denied:\nThe source path "C:/Users/riptusk331/logging"\ndoesn\'t exist and is not known to Docker'
Does anyone know why the short-form syntax creates the host path if it doesn't exist, but the long form does not and gives an error?
I am using Docker Desktop for Windows.
I have a Java WAR file that is part of a Docker image and is started inside a Tomcat (Docker) container. Since the code changes, the WAR will change as well. I would like to do the following:
1. Change the Java code and push the update to Git
2. Have a WAR file created (from the code just pushed to Git)
3. Create a NEW IMAGE (Docker) that uses the NEW WAR file
4. Stop all old containers (running the old image)
5. Restart the containers (which will then use the new image)
I am also using Portainer. Is there some series of commands that I can run so that items #4 and #5 happen automatically (without requiring human intervention)? Is there any way this can be done at all?
TIA
docker-compose can be helpful for this. You can create a YAML file for your application and use the docker-compose CLI to spin up new containers as required. For example, I have a Tomcat/Mongo based application with the following YAML file:
version: '3'
services:
  mongodb:
    image: mongo
    network_mode: host
  tomcat:
    build:
      context: ./app
      dockerfile: DockerfileTomcat
    network_mode: host
    depends_on:
      - mongodb
With folder layout as:
├── docker-compose.yml
└── app
    ├── DockerfileTomcat
    └── app.war
Where DockerfileTomcat takes care of copying the WAR file into the Tomcat container:
FROM tomcat:8.5-jre8
RUN rm -rf /usr/local/tomcat/webapps/*
COPY app.war /usr/local/tomcat/webapps/app.war
In order to start your application you need to run following command in the directory containing docker-compose.yml:
docker-compose up --build
Just copy the new WAR file over app.war each time and run the command above. It will rebuild the image and launch the updated container.
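For example, a tiny wrapper script could do both steps (the WAR path argument and folder layout are assumptions; adjust to your build output):

#!/usr/bin/env bash
set -euo pipefail
# copy the freshly built WAR over the one the Dockerfile bakes into the image
cp "$1" ./app/app.war
# rebuild the image and replace the running containers in one go
docker-compose up --build -d

Run it as ./redeploy.sh path/to/new.war after each build.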
If this isn't what you are looking for, you can write a Bash script to automate the whole process. Let me know if you want me to post it here.