Use environment variables in docker - amazon-ec2

I have implemented a Docker project for automated setup. I use Docker 1.9 on Ubuntu Server and rely on the build-arg feature to set a dynamic subdomain in the Apache virtual hosts file.
docker build --no-cache --build-arg domain=demo1.myapp.com -t imagename .
docker run -d -p 8080:80 imagename
I take the domain value and substitute it into the virtual hosts file using a sed command in my script file:
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
My Dockerfile contained:
ARG domain
RUN /bin/sh /script.sh $domain
Now I need to migrate the application to AWS, where I get an Amazon Linux AMI. But here the supported Docker version is 1.7, which does not support build-arg. I tried to upgrade, but a lot of dependencies block me.
So I decided to use ENV environment variables instead, like below:
docker run -d -p 8080:80 -e domain=demo1.myapp.com
I also changed the Dockerfile; it now contains:
RUN /bin/sh /script.sh
But it looks like this is not working in my scenario: at build time the sed script substitutes an empty value into the Apache file, and the build process fails.
Is this not possible without build-arg, or am I setting/using ENV the wrong way?

First, AWS can support docker 1.9.
See for instance "Getting overlay networking to work in AWS with Docker 1.9":
- use a Docker Machine version 0.5.2-dev, as explained here
- use the right AMI (Amazon Machine Image): Ubuntu 15.10
- set up the AWS environment variables
If you choose to remain with an older AMI and its Docker 1.7, then the -e option is for runtime only (creating/running containers), not build time (building the image).
That means that if your ENTRYPOINT or CMD were /script.sh, and that script used $domain before launching your main process, it would work.
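A minimal sketch of that approach, assuming the main process is Apache started in the foreground (the exact start command depends on your image):
In the Dockerfile, run the script at container start instead of at build time:
COPY script.sh /script.sh
ENTRYPOINT ["/bin/sh", "/script.sh"]
And in script.sh, read $domain at runtime, then exec the main process:
#!/bin/sh
# $domain is supplied by: docker run -e domain=demo1.myapp.com ...
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
exec apache2ctl -D FOREGROUND
Then docker run -d -p 8080:80 -e domain=demo1.myapp.com imagename works without any build-arg.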

Related

Docker image for executing gradle bootBuildImage command

I'm looking for a Docker image to build my Gradle project, which also needs a Docker engine to execute the gradle bootBuildImage command. Any recommendation?
Thanks,
Dan
If you use the Gradle wrapper scripts (which you should), you can use any image you like as long as it has Java on it. OpenJDK is a good match.
If you don't use the wrapper scripts, you need to have an image with Gradle installed. The official Gradle image should do.
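For instance, with the wrapper a plain JDK image is enough; a sketch, assuming the project sits in the current directory (the image tag and mount path are just examples):
docker run --rm -v "$PWD":/project -w /project openjdk:11-jdk ./gradlew build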
But I think what you are really asking is how to build a Docker image inside a container. The bootBuildImage task doesn't need the local Docker CLI tools; it only needs to connect to a daemon. That daemon could be running on a remote host, but you can also make it connect to the daemon on your local host, outside the container. To do this, mount the local Docker socket into the container.
Here is an example that mounts the current directory inside a container and builds a Docker image in it through the Spring Boot plugin for Gradle:
docker run --rm \
-v gradle-cache:/home/gradle/.gradle \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD":/home/gradle/project \
-w /home/gradle/project \
gradle:6.7.0-jdk11 \
gradle --no-daemon bootBuildImage
Note that it persists the Gradle home directory in a volume, which means you can't run this command concurrently. Delete the volume when no longer needed with docker volume rm gradle-cache.
Also note that it executes the build as root.

How can we send the aws path on my local machine to a Jenkins image run in a Docker container?

I am trying to pass the path of aws on my host machine to Jenkins, which will be run in a Docker container. I pulled the Jenkins image and I am trying to use aws CLI commands in a Jenkins pipeline to build a Node.js application and then deploy it to an S3 bucket. For that I need the aws CLI in the Jenkins image I am running through Docker. As far as I know, once you run an image in a Docker container it is a separate environment in itself, so Jenkins will not know that I have aws installed on my Mac unless I pass it the path of aws on my Mac, which is what I am trying to do with the
-v $(which aws): $(which aws)
command.
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $(which aws):$(which aws) jenkins/jenkins:2.190.2
However, when I run this container from the command line, it shows the following error response:
docker: Error response from daemon: Mounts denied:
The path /usr/local/bin/aws
is not shared from OS X and is not known to Docker.
According to some of the answers I found on Stack Overflow, I then tried to add the path of aws to the Docker file sharing panel. When I added the path of aws in Docker, it then showed that
The path /usr is reserved by Docker however it may be possible to export specific subdirectories.
I have not been able to get around this. I tried adding the whole /usr/local/bin/aws path in the Docker file sharing panel, but it still shows the same problem. Does anyone have any idea what else we can do to make the aws path on my local machine available to the Jenkins image that I am running in a Docker container?
You need to install the aws CLI in your Docker image; then you will be able to use it inside your container.
FROM jenkins/jenkins:2.190.2
USER root
RUN apt-get update && \
apt-get install awscli -y
USER jenkins
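To double-check that the CLI is present in the resulting image (jenkins-aws is just an example tag):
docker build -t jenkins-aws .
docker run --rm --entrypoint aws jenkins-aws --version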
-v (volumes) is not designed to bind host executables into the container; it is designed for files and folders used for persistent storage. If you need an executable, you need to add it to your Docker image.
To be able to save (persist) data and also to share data between containers, Docker came up with the concept of volumes. Quite simply, volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.
understanding-volumes-docker
For this question:
I am trying to use aws CLI command in jenkins pipeline in order to build nodejs application and then deploy it to s3 bucket.
If you are inside AWS, you can assign an IAM role to the Jenkins server and you will not need to bind host credentials.
Or, if you are outside AWS, you just need to bind-mount the host AWS config and credentials:
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $HOME/.aws/:/var/jenkins_home/.aws/ jenkins/jenkins:2.190.2
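Once the CLI can reach those credentials (or an instance role), a pipeline step is just a plain shell call; a minimal sketch, where my-bucket and dist/ are example names:
aws s3 sync dist/ s3://my-bucket/ --delete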

Fabric8 Maven Plugin to build docker image

I'm trying to configure Fabric8 Maven plugin to build a docker image for my Spring Boot application.
The following dockerHost configuration works in Ubuntu.
<dockerHost>unix:///var/run/docker.sock</dockerHost>
However, I'm trying to make it work in Windows 10 but couldn't do so.
What is the equivalent in Windows 10?
If you have installed Docker using the Docker toolbox, then you can find the Docker host by running:
docker-machine env default
Where default is the name of the docker machine, which can be found by running docker-machine ls.
It is also possible to not specify the dockerHost in the pom file, and use the DOCKER_HOST env variable. This variable can be exported using
eval "$(docker-machine env default)"

How can I Dockerize my web API on Windows

Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows systems.
Start your favorite shell (cmd.exe, PowerShell, or other) to check your versions of docker and docker-compose, and verify the installation.
PS C:\Users\Docker> docker --version
Docker version 17.03.0-ce, build 60ccb22
PS C:\Users\Docker> docker-compose --version
docker-compose version 1.11.2, build dfed245
Your question is not very specific, but it appears that you are trying to containerize an ASP.NET web app. Here is a basic outline of what you want to accomplish using Docker.
Docker is a Linux container system, meaning it is based on the Linux kernel; by installing Docker on Windows you are installing a Linux guest machine to build your containers in, and you will configure your containers to forward ports so that the app served from inside the container reaches your host machine. So, basically, how is this going to happen? After installing Docker, Docker first needs a base image (a Linux image) to run your containers from; a great place to find Docker images is Docker Hub. For a basic scenario you need to:
1) Pull an image.
2) Run a container based on this image.
To accomplish number 1, we will use the official Microsoft dotnet repository on Docker Hub as an example.
docker pull microsoft/aspnetcore
docker pull will pull the image tagged :latest from Docker Hub; :latest is a tag specifying the latest stable release of dotnet. This means that if you want another runtime version you would use, for example, docker pull dotnet:runtime. In the dotnet official Docker Hub link above you will find the available tags under Supported tags.
To accomplish number 2, we need to run a container using this image.
docker run -d -p 8000:80 --name firstwebapptest microsoft/aspnetcore
docker run will create a container named firstwebapptest based on microsoft/aspnetcore, forwarding the container port 80 to the host port 8000, and all of that will run in detached mode (-d).
Now check your browser at localhost:8000.
This is a very basic scenario using the docker command line tools.
Another way to accomplish this scenario is by using a Dockerfile; you will find "How to use this image" in the Microsoft dotnet official Docker Hub link. It assumes that you are already in your app directory, which contains your compiled myapp.dll. What you will do is create a file called Dockerfile in this directory and write this inside:
FROM microsoft/aspnetcore
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
FROM: the base image that we already pulled.
WORKDIR: the working directory inside the Linux container.
COPY . .: the first . copies your host directory content into the container; the second . is the destination directory inside the container, which in this case is /app.
ENTRYPOINT: the command that runs once the container is up. In this case dotnet myapp.dll means you are running the dotnet command inside the WORKDIR /app, which now contains your app directory structure with the compiled myapp.dll that we copied in with COPY . .
So now we have the Dockerfile; all we need to do is build and run it.
docker build -t secondwebapptest .
docker run -d -p 8001:80 secondwebapptest
docker build will build an image tagged (-t) secondwebapptest from . ; the dot refers to the directory containing the Dockerfile you just created, assuming you are already in that working directory. Otherwise you have to specify a path to the Dockerfile using -f, but that is not our case.
docker run will create and run a container from the secondwebapptest image, forwarding the container port 80 to the host port 8001, all in detached mode (-d).
Now check your browser at localhost:8001.

How to run a docker command in Jenkins Build Execute Shell

I'm new to Jenkins and I have been searching around but I couldn't find what I was looking for.
I'd like to know how to run a docker command in Jenkins (Build - Execute Shell):
Example: docker run hello-world
I have set Docker Installation to "Install latest from docker.io" in Jenkins Configure System and have also installed several Docker plugins. However, it still doesn't work.
Can anyone help me point out what else should I check or set?
John
One of the following plugins should work fine:
CloudBees Docker Custom Build Environment Plugin
CloudBees Docker Pipeline Plugin
I normally run my builds on slave nodes that have docker pre-installed.
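If the node already has Docker installed, the usual remaining blocker is that the jenkins user has no permission on the Docker socket. A typical fix on that node, assuming a standard Linux install where Jenkins runs as the jenkins user:
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
After that, a plain docker run hello-world in the Execute Shell step should work.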
I came across another generic solution. Since I'm no expert at creating a Jenkins plugin out of this, here are the manual steps:
Create/modify your Jenkins container (I use Portainer) with the environment variable DOCKER_HOST=tcp://192.168.1.50 (when using the unix protocol you also have to mount your Docker socket) and append :/var/jenkins_home/bin to the existing PATH variable
On your Docker host, copy the docker client binary into the Jenkins container: docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/
Restart the Jenkins container
Now you can use the docker command from any script or the command line. The changes will persist across image updates.
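For reference, a rough docker run equivalent of those steps (the IP, ports, image tag, and paths are examples and will differ in your setup):
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -e DOCKER_HOST=tcp://192.168.1.50:2375 \
  jenkins/jenkins:lts
docker exec jenkins mkdir -p /var/jenkins_home/bin
docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/
docker restart jenkins
Jobs can then call /var/jenkins_home/bin/docker directly, or you append that directory to PATH as described above.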
