Running nginx container with $(pwd) doesn't work in bash but using the complete path in PowerShell works - bash

I have index.html file in a folder. I am mapping the local directory into the nginx docker container.
When I run the nginx docker container using the command
docker run --name website -v C:\Users\prash\Documents\Programming\Spring\Docker:/usr/share/nginx/html:ro -d -p 8080:80 nginx:latest
The container starts successfully. But when I run the following command in bash, although the container starts, I can't open the index.html file in the browser.
docker run --name website -v $(pwd):/usr/share/nginx/html:ro -d -p 8080:80 nginx:latest
What might be the issue here?
I am new to Docker. I tried to find a solution for this, but couldn't find anything.
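A likely culprit (not confirmed by the question, so treat this as an assumption) is Git Bash's automatic MSYS path conversion on Windows: $(pwd) expands to a POSIX-style path such as /c/Users/..., and Git Bash may additionally rewrite the container-side path before Docker ever sees it, leaving the bind mount pointing at an empty directory. Two commonly suggested workarounds, as a sketch:
# Option 1: have pwd emit a Windows-style path (Git Bash only)
docker run --name website -v "$(pwd -W)":/usr/share/nginx/html:ro -d -p 8080:80 nginx:latest
# Option 2: disable MSYS path conversion for this one command (Git for Windows)
MSYS_NO_PATHCONV=1 docker run --name website -v "$(pwd)":/usr/share/nginx/html:ro -d -p 8080:80 nginx:latest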

Related

Docker compose to have executed 'command: bash' and keep container open

The docker compose yml file below keeps the container open after I run docker compose up -d but command: bash does not get executed:
version: "3.8"
services:
service:
container_name: pw-cont
image: mcr.microsoft.com/playwright:v1.30.0-focal
stdin_open: true # -i
tty: true # -t
network_mode: host # --network host
volumes: # Ensures that updates in local/container are in sync
- $PWD:/playwright/
working_dir: /playwright
command: bash
After I spin the container up, I want to visit the running container's terminal in Docker Desktop.
Expectation: Since the file has command: bash, I expect that in Docker Desktop, when I go to the running container's terminal, it will show root@docker-desktop:/playwright#.
Actual: The container's terminal in Docker Desktop is showing #, and I still need to type bash to see root@docker-desktop:/playwright#.
Can the yml file be updated so that bash gets auto executed when spinning up the container?
docker compose doesn't provide that sort of interactive connection. Your docker-compose.yaml file is fine; once your container is running, you can attach to it using docker attach pw-cont to access the container's stdin/stdout.
$ docker compose up -d
[+] Running 1/1
⠿ Container pw-cont Started 0.1s
$ docker attach pw-cont
root@rocket:/playwright#
root@rocket:/playwright#
I'm not sure what you are trying to achieve, but using the run command
docker-compose run service
gives me the prompt you expect.
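If the goal is just an interactive bash prompt in the already-running container, docker compose exec (or plain docker exec) is another way in; a sketch, reusing the service and container names from the compose file above:
docker compose up -d
docker compose exec service bash
# or, equivalently, targeting the container by name:
docker exec -it pw-cont bash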

how to run docker with keycloak image as a daemon in prod environment

I am using Docker to run my Keycloak server in an AWS production environment. The problem is that Keycloak uses WildFly, which runs continuously in the foreground. Because of this I cannot close the shell. I am trying to find a way to run the docker container as a daemon.
The command I use to run docker
docker run -p 8080:8080 jboss/keycloak
Just use docker's detach option -d.
docker run -p 8080:8080 -d jboss/keycloak
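In a production environment you may also want the container to come back up after a reboot or daemon restart; a possible variant (the container name and restart policy are illustrative additions, not part of the original answer):
docker run -d --name keycloak --restart unless-stopped -p 8080:8080 jboss/keycloak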

How to mount windows folder using docker compose volumes?

How to mount windows folder using docker compose volumes?
I am trying to set up docker container using docker-compose.
My docker-compose.yml file looks as follow:
php-fpm:
  build: php-fpm
  container_name: php-fpm
  volumes:
    - ../project:/var/www/dev
When I enter the container like this:
docker exec -it php-fpm bash
and list its contents with the ls command, the /var/www/dev directory is empty.
Does anyone know the solution for this?
$ docker -v
Docker version 1.12.0, build 8eab29e
$ docker-compose -v
docker-compose version 1.8.0, build d988a55
I have Windows 10 and Docker is installed via Docker Toolbox 1.12.0.
Edit:
The mounted directory is also empty under a Linux environment.
I fixed it by going to Local Security Policy > Network List Manager Policies, double-clicking Unidentified Networks, changing the location type to Private, and restarting Docker. Source
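With Docker Toolbox specifically, it is also worth checking the host side of the mount (an assumption, not part of the answer above): the VirtualBox VM only shares C:\Users by default, and it appears inside the VM as /c/Users/..., so a relative path that resolves outside that tree mounts as an empty directory. The host path might need to look like this (the user and project names are placeholders):
php-fpm:
  build: php-fpm
  container_name: php-fpm
  volumes:
    - /c/Users/yourname/project:/var/www/dev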

Docker containers on travis-ci

I have a Dockerfile that I am attempting to test using RSpec, serverspec and docker-api. Locally (using boot2docker, as I am on OS X) this works great and all my tests pass, but on travis-ci none of the tests pass. My .travis.yml file is as such:
language: ruby
rvm:
- "2.2.0"
sudo: required
cache: bundler
services:
- docker
before_install:
- docker build -t tomasbasham/nginx .
- docker run -d -p 80:80 -p 443:443 --name nginx -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf tomasbasham/nginx
script: bundle exec rspec
Am I doing something noticeably wrong here? The repository I have created and run on travis-ci is on GitHub. There may be something else amiss that I am unaware of.
tl;dr
A container MUST run its program in the foreground.
Your Dockerfile is the issue. From the GitHub repository you provided, the Dockerfile content is:
# Dockerfile for nginx with configurable persistent volumes
# Select nginx as the base image
FROM nginx
MAINTAINER Tomas Basham <me@tomasbasham.co.uk>
# Install net-tools
RUN apt-get update -q && \
    apt-get install -qy net-tools && \
    apt-get clean
# Mount configurable persistent volumes
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Expose both HTTP and HTTPS ports
EXPOSE 80 443
# ENTRYPOINT
ENTRYPOINT ["service", "nginx", "start"]
Before debugging your RSpec/serverspec tests, you should make sure that docker image is able to run a container.
Type those commands from the directory which has the Dockerfile in:
docker build -t tmp .
docker run --rm -it tmp
If you get your shell prompt back, that means your container stopped running. If your container isn't staying up, then your test suite will fail.
What's wrong with the dockerfile
The entrypoint you defined, ENTRYPOINT ["service", "nginx", "start"], will execute a command that, in turn, starts the nginx program in the background. This means the process that docker initially ran (the service command) terminates, and docker detects that and stops your container.
To run nginx in the foreground, one must run nginx -g 'daemon off;', as you can find in the Dockerfile for the official nginx image. But since you put daemon off; in your nginx.conf file, you should be fine with just nginx.
I suggest you remove the entrypoint from your Dockerfile (and also remove the daemon off; from your nginx config) and it should work fine.
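For reference, a sketch of what the trimmed Dockerfile could look like, assuming daemon off; has also been removed from nginx.conf; with no ENTRYPOINT, the base image's default CMD ["nginx", "-g", "daemon off;"] keeps nginx in the foreground:
# Dockerfile for nginx with configurable persistent volumes
FROM nginx
MAINTAINER Tomas Basham <me@tomasbasham.co.uk>
# Install net-tools
RUN apt-get update -q && \
    apt-get install -qy net-tools && \
    apt-get clean
# Mount configurable persistent volumes
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Expose both HTTP and HTTPS ports
EXPOSE 80 443
# No ENTRYPOINT or CMD: the nginx base image already runs nginx in the foreground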
serverspec
Once you get a container which runs, you will have to focus on the serverspec part on which I'm not experienced with.

Passing Elasticsearch and Kibana config file to docker containers

I have found a docker image devdb/kibana which runs Elasticsearch 1.5.2 and Kibana 4.0.2. However, I would like to pass the configuration files for both Elasticsearch (i.e. elasticsearch.yml) and Kibana (i.e. config.js) into this docker container.
Can I do that with this image itself? Or for that would I have to build a separate docker container?
Can I do that with this image itself?
Yes, just use Docker volumes to pass in your own config files.
Let's say you have the following files on your docker host:
/home/liv2hak/elasticsearch.yml
/home/liv2hak/kibana.yml
you can then start your container with:
docker run -d --name kibana -p 5601:5601 -p 9200:9200 \
-v /home/liv2hak/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml \
-v /home/liv2hak/kibana.yml:/opt/kibana/config/kibana.yml \
devdb/kibana
I was able to figure this out by looking at your image Dockerfile parents which are: devdb/kibana→devdb/elasticsearch→abh1nav/java7→abh1nav/baseimage→phusion/baseimage
and also taking a peek into a devdb/kibana container: docker run --rm -it devdb/kibana find /opt -type f -name '*.yml'.
Or for that would I have to build a separate docker container?
I assume you mean build a separate docker image? That would also work; for instance, the following Dockerfile would do it:
FROM devdb/kibana
COPY elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml
COPY kibana.yml /opt/kibana/config/kibana.yml
Now build the image: docker build -t liv2hak/kibana .
And run it: docker run -d --name kibana -p 5601:5601 -p 9200:9200 liv2hak/kibana
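As a quick sanity check (not part of the original answer), you can confirm the custom files actually replaced the defaults inside the running container:
docker exec kibana cat /opt/elasticsearch/config/elasticsearch.yml
docker exec kibana cat /opt/kibana/config/kibana.yml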
