Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows systems.
Start your favorite shell (cmd.exe, PowerShell, or other) to check your versions of docker and docker-compose, and verify the installation.
PS C:\Users\Docker> docker --version
Docker version 17.03.0-ce, build 60ccb22
PS C:\Users\Docker> docker-compose --version
docker-compose version 1.11.2, build dfed245
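If both commands print a version, the installation is working. As an optional extra check, you can run Docker's standard hello-world test image, which simply prints a confirmation message and exits:
PS C:\Users\Docker> docker run hello-world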
Your question is not very specific, but it appears you are trying to containerize an ASP.NET web app. Here is a basic outline of what you want to accomplish using Docker.
Docker is a Linux container system, meaning it is based on the Linux kernel; by installing Docker on Windows you are installing a Linux guest machine to build your containers in, and you configure your containers to forward ports so the app served from inside the container is reachable from your host machine. So how does this happen? After installation, Docker first needs a base image (a Linux image) to run your containers from, and a great place to find images is Docker Hub. For a basic scenario you need to:
1) Pull an image.
2) Run a container based on this image.
To accomplish step 1, we will use Microsoft's official ASP.NET Core repository on Docker Hub as an example.
docker pull microsoft/aspnetcore
docker pull will pull the microsoft/aspnetcore:latest image from Docker Hub. :latest is a tag specifying the latest stable release; if you want a different version you pull a specific tag instead. On the official Docker Hub page for the image you will find the available tags listed under Supported tags.
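For example, to pin a specific version instead of :latest (the 2.0 tag below is only an illustration; use whichever tag the Supported tags section actually lists):
docker pull microsoft/aspnetcore:2.0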
To accomplish step 2, we run a container based on that image.
docker run -d -p 8000:80 --name firstwebapptest microsoft/aspnetcore
docker run will create a container named firstwebapptest based on microsoft/aspnetcore, forwarding container port 80 to host port 8000 (-p 8000:80), and it will run in detached mode (-d).
Now check your browser at localhost:8000.
This is a very basic scenario using the docker command line tools.
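If you want to confirm the container is actually up before opening the browser, the generic Docker commands below will list running containers and show the app's log output (nothing here is specific to this image):
docker ps                      # firstwebapptest should appear in the list
docker logs firstwebapptest    # shows the output of the app running inside the container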
Another way to accomplish this scenario is by using a Dockerfile; you will find a "How to use this image" section on the official Docker Hub page linked above. It assumes you are already in your app directory, which contains your compiled myapp.dll. What you do is create a file called Dockerfile in this directory and write this inside:
FROM microsoft/aspnetcore
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
FROM: the base image that we already pulled.
WORKDIR: the working directory inside the Linux container.
COPY . .: the first . is your host directory, whose contents are copied into the container; the second . is the destination inside the container, which in this case is /app (the WORKDIR).
ENTRYPOINT: the command that runs once the container is up. In this case it is dotnet myapp.dll, meaning you run the dotnet command from inside the WORKDIR /app, which now contains your whole app directory structure including the compiled myapp.dll that we copied in with COPY . . above.
Now that we have the Dockerfile, all we need to do is build the image and run it.
docker build -t secondwebapptest .
docker run -d -p 8001:80 secondwebapptest
docker build builds an image tagged (-t) secondwebapptest. The trailing . is the build context: it assumes the Dockerfile is in your current working directory; otherwise you would have to specify a path to the Dockerfile with -f, but that is not our case.
docker run starts a container from the secondwebapptest image we just built, forwarding container port 80 to host port 8001, again in detached mode (-d).
Now check your browser at localhost:8001.
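When you are done experimenting, you can clean up both containers. Note that only the first container was given an explicit --name; the second one got an auto-generated name, so look it up with docker ps first (the <id-of-second-container> placeholder below stands for whatever ID that shows):
docker ps                       # note the name or ID of the second container
docker stop firstwebapptest
docker rm firstwebapptest
docker stop <id-of-second-container>
docker rm <id-of-second-container>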
I have tried this example out on a Windows 10 box which has WSL2 installed and integrated with the latest Docker version. After following the steps in the example and downloading the code in the Linux subsystem, I am able to build the image and run the container. The website is also available when I browse to it in a browser running on Windows 10. However, when I create a file or folder in the container, it is not reflected in the host filesystem, which in this case is the Linux subsystem. Similarly, a file created in the host Linux subsystem is not seen from the container's CLI when I use the ls command.
I ran this command to confirm that the folder has been mounted, where 44711fc95366 is my container ID:
docker inspect -f "{{ .Mounts }}" 44711fc95366
This gives an output like so:
[{bind /home/userlab1/my-proj/getting-started/app /usr/src/app true rprivate}]
If the mount shown above is correct, I should be able to create a file or folder in the host subsystem at /home/userlab1/my-proj/getting-started/app and see it at /usr/src/app in the container, correct?
The Docker image was built and the container run from the Linux subsystem command line like so:
docker run -it -v ~/my-proj/getting-started/app:/usr/src/app -p 3001:3000 --name cntr-linux-todo img-todo:in-linux
While the application runs, files updated in the container are not reflected on the website served from the container, and a new file or folder created in the container is not seen in the host subsystem, and vice versa. What am I missing?
As you are using the Windows version of Docker, it cannot see files/folders from WSL.
You can move ~/my-proj into C:\Users\user20358 and mount from there:
-v 'C:\Users\user20358\my-proj\getting-started\app:/usr/src/app'
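As a sketch of the full command once the code lives under the Windows filesystem (run from PowerShell on the Windows side; this reuses the image, port, and path from the question, and cntr-win-todo is just an illustrative container name):
docker run -it -v 'C:\Users\user20358\my-proj\getting-started\app:/usr/src/app' -p 3001:3000 --name cntr-win-todo img-todo:in-linux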
I am trying to pass the path of aws on my host machine to Jenkins, which will run in a Docker container. I downloaded the Jenkins image and I am trying to use the aws CLI in a Jenkins pipeline in order to build a Node.js application and then deploy it to an S3 bucket. For that I need the aws CLI in the Jenkins image that I am running through Docker. As far as I know, once you run an image in a Docker container it becomes a separate environment in itself, so Jenkins will not know that I have aws installed on my Mac unless I pass it the path of aws on my Mac, which is what I am trying to do with the
-v $(which aws):$(which aws)
command.
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $(which aws):$(which aws) jenkins/jenkins:2.190.2
However, after I run this container from the command line, it shows the following error response:
docker: Error response from daemon: Mounts denied:
The path /usr/local/bin/aws
is not shared from OS X and is not known to Docker.
According to some of the answers I found on Stack Overflow, I then tried to add the path of aws in the Docker file sharing panel. When I added the path in Docker, it then showed:
The path /usr is reserved by Docker however it may be possible to export specific subdirectories.
I have not been able to get around this. I tried adding the whole /usr/local/bin/aws path in the Docker file sharing panel, but it still shows the same problem. Does anyone have any idea what else we can do in order to pass the path of aws on my local machine to the Jenkins image that I am trying to run in a Docker container?
You need to install the AWS CLI in your Docker image, and then you will be able to use it inside your container.
FROM jenkins/jenkins:2.190.2
USER root
RUN apt-get update && \
apt-get install awscli -y
USER jenkins
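With that Dockerfile in place, you would build your own tagged image and run it instead of the stock one; jenkins-with-awscli below is just an illustrative tag, and the other flags are the ones from your original command:
docker build -t jenkins-with-awscli .
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home jenkins-with-awscli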
-v, or volumes, is not designed for binding host executables; it is designed for files and folders used as persistent storage. If you need an executable, you need to add it to your Docker image.
To be able to save (persist) data and also to share data
between containers, Docker came up with the concept of volumes. Quite
simply, volumes are directories (or files) that are outside of the
default Union File System and exist as normal directories and files on
the host filesystem.
understanding-volumes-docker
For this question
I am trying to use aws CLI command in jenkins pipeline in order to
build nodejs application and then deploy it to s3 bucket.
If you are inside AWS, you can assign an IAM role to the Jenkins server and you will not be required to bind the host's keys at all.
Or, if you are outside AWS, then you just need to bind the host's AWS config and credentials:
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $HOME/.aws/:/var/jenkins_home/.aws/ jenkins/jenkins:2.190.2
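To check that the credentials are actually visible inside the container, a rough verification (assuming you also gave the container a name with --name jenkins, and that the image has the AWS CLI installed as in the Dockerfile above) would be:
docker exec -it jenkins ls /var/jenkins_home/.aws        # should list config and credentials
docker exec -it jenkins aws sts get-caller-identity      # only works if the AWS CLI is installed in the image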
I am running Robot Framework (Selenium-based) tests inside a Docker container, but I need to access files outside the Docker container (on a Mac).
I have tried providing an absolute path on the Mac, but Docker treats its own core folder as the root folder.
I found the links below for Windows, but not for Mac.
Docker - accessing files inside container from host
Access file of windows machine from docker container
One approach is to copy your files into the Docker container at creation time, but if your files are updated by another service on the host and the container needs to access them too, just mount the directory as shown below.
docker run -d --name your-container -v /path/to/files/:/path/inside/container containername:version
This way, files on the host machine are mounted into the Docker container and the user inside the container can access them.
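For example on a Mac (the paths and image name here are only placeholders for your own), you could mount a folder from your home directory and then check it from inside the container:
docker run -d --name robot-tests -v /Users/yourname/testdata:/testdata my-robot-image:latest
docker exec -it robot-tests ls /testdata    # the files from the Mac folder should be listed here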
I installed Docker Toolbox 1.11.2 and Laradock v2 cloned from GitHub.
Everything seems to work except laradock_workspace_1: when it is generated, it does not create files on the host machine (Windows 7 64-bit). In docker-compose.yml I have tried playing with the volumes as suggested here:
### Laravel Application Code Container ######################
volumes_source:
    build: ./volumes/application
    volumes:
        - ../:/var/www/laravel
If I change the last line to ../.. and then run docker-compose up and docker exec -it laradock_workspace_1 ls, I can see that it is traversing the folders on the host machine. I just don't see any files.
My goal here is to make the actual Laravel code external so I can edit it on the host machine and use git.
I can use the Kitematic app to make the changes I want, but they seem to be lost if I do a docker-compose down (and I get errors about things still being in use).
I'm new to docker so any help is appreciated.
First, make sure your docker-machine is running. If it is, then follow the steps below.
Open the VirtualBox GUI, right-click your docker VM, select Settings, then go to Shared Folders.
Change the c\users share to whatever folder your code lies in.
This will mount your desired folder to /c/Users in the docker-machine vm.
After this, change the docker-compose.yml in the laradock folder to this:
### Laravel Application Code Container ######################
volumes_source:
    build: ./volumes/application
    volumes:
        - /c/Users/pomodoro.xyz/code:/var/www/laravel
The logic behind this is that, since we are running Docker in a VM, the docker-compose command looks for the folder inside the VM, not on the Windows machine. That is why we provided the VM path in the docker-compose file.
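As a rough way to verify this (assuming your docker-machine is named default, which is the Docker Toolbox default), restart the VM so the new shared folder is picked up, then check both sides of the mount:
docker-machine restart default
docker-machine ssh default ls /c/Users                      # the shared folder contents should show up here
docker-compose up -d
docker exec -it laradock_workspace_1 ls /var/www/laravel    # the Laravel code should now be visible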
I have implemented a Docker project for automated setup. I use Docker 1.9 on Ubuntu Server and utilize the build-arg feature, using it to set a dynamic subdomain in the Apache virtual hosts file.
docker build --no-cache --build-arg domain=demo1.myapp.com -t imagename .
docker run -d -p 8080:80 imagename
I take the domain argument and substitute it into the virtual hosts file using a sed command in my script file:
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
My Dockerfile had this code:
ARG domain
RUN /bin/sh /script.sh $domain
Now I need to migrate the application to AWS, where I get an Amazon Linux AMI. But there the supported Docker version is 1.7, which does not support build-arg. I tried to upgrade, but a lot of dependencies block me.
So now I have decided to use ENV environment variables instead, like below:
docker run -d -p 8080:80 -e domain=demo1.myapp.com imagename
I also changed the Dockerfile; it now has:
RUN /bin/sh /script.sh
But it does not look like this works in my scenario: at build time the sed script substitutes an empty value into the Apache file and the build process fails.
Is it not possible without build-arg, or am I setting/using ENV the wrong way?
First, AWS can support docker 1.9.
See for instance "Getting overlay networking to work in AWS with Docker 1.9"
1) Use a Docker Machine version 0.5.2-dev, as explained here.
2) Use the right AMI (Amazon Machine Image): Ubuntu 15.10.
3) Set up the AWS environment variables.
If you choose to remain with an old AMI and its Docker 1.7, then the -e option is for runtime only (creating/running containers), not build time (building the image).
That means if your ENTRYPOINT or CMD were /script.sh, with the script using $domain (and then launching your main process), that would work.
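A minimal sketch of that idea, assuming a Debian-based image in which Apache is the main process and the Dockerfile ends with ENTRYPOINT ["/script.sh"]: the sed now runs when the container starts, so the value passed with -e domain=... at docker run time is what gets substituted, rather than an empty build-time value.
#!/bin/sh
# /script.sh - executed at container start, not at build time
# substitute the runtime value of $domain into the Apache vhost file
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
# then start the main process in the foreground so the container stays up
exec apache2ctl -D FOREGROUND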