GitLab CI gives curl: (7) Failed to connect to localhost port 8090: Connection refused - shell

The issue is that I get the curl: (7) Failed to connect to localhost port 8090: Connection refused error on GitLab CI, but this does not happen on my laptop, where I get the source HTML of the webpage. The .gitlab-ci.yml below is a simple reproduction of the issue. I have spent numerous hours trying to figure this out - I'm sure someone else has as well.
Aside: this is not the same as this similar question, since that one doesn't offer a solution.
GitLab repo: https://gitlab.com/mudassir-ahmed/wordpress-testing-with-gitlab-ci/tree/another-approach - the only file it contains is the .gitlab-ci.yml shown below.
image: docker:stable

variables:
  # When using dind service we need to instruct docker, to talk with the
  # daemon started inside of the service. The daemon is available with
  # a network connection instead of the default /var/run/docker.sock socket.
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
  #
  # Note that if you're using the Kubernetes executor, the variable should be set to
  # tcp://localhost:2375/ because of how the Kubernetes executor connects services
  # to the job container
  # DOCKER_HOST: tcp://localhost:2375/
  #
  # For non-Kubernetes executors, we use tcp://docker:2375/
  DOCKER_HOST: tcp://docker:2375/
  # When using dind, it's wise to use the overlayfs driver for
  # improved performance.
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

build:
  stage: build
  script:
    - apk update
    - apk add curl
    #- hostname -i
    - docker container ls
    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    - curl localhost:8090 # This works on my laptop but not on a GitLab runner.

Referring to the answer from here: gitlab-ci.yml & docker-in-docker (dind) & curl returns connection refused on shared runner
There are two ways to fix this:
Option 1: Replace localhost in curl localhost:8090 with docker, i.e. curl docker:8090
Option 2:
services:
  - name: docker:dind
    alias: localhost
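Putting Option 2 together with the job above gives something like this (a sketch; the variables:, before_script:, and remaining script: lines stay exactly as in the question):

services:
  - name: docker:dind
    alias: localhost

build:
  stage: build
  script:
    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    - curl localhost:8090 # "localhost" now resolves to the dind service, where port 8090 is published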

docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Assuming that is your code, I think you should add some wait/retry logic between docker run and curl.
I had a similar issue some time ago: after starting a Docker container on a GitLab runner machine, I wasn't able to access my URL either. Adding a check that waits (for about one minute) until the container is running resolved my problem.
The check itself is docker inspect -f '{{.State.Running}}' <containerName>, but to run it repeatedly you need a small additional script.

Related

How to use local proxy settings in docker-compose

I am setting up a new server for our Redmine installation, since the old installation was done by hand, which makes it difficult to update everything properly. I decided to go with a Docker image, but I am having trouble starting the Docker container due to an error message. The host is running behind a proxy server, which I think is causing this problem, as everything else, such as wget, curl, etc., is working fine.
Error message:
Pulling redmine (redmine:)...
ERROR: Get https://registry-1.docker.io/v2/: dial tcp 34.206.236.31:443: connect: connection refused
I searched on Google about using Docker/Docker Compose with a proxy server in the background and found a few websites where people had the same issue, but none of these really helped me with my problem.
I checked the Docker documentation and found a guide, but it does not seem to work for me: https://docs.docker.com/network/proxy/
I also found an answered question here on Stack Overflow: Using proxy on docker-compose in server, which might be the solution I am after, but I am unsure where exactly I have to put the solution. I guess the person means the docker-compose.yml file, but I could be wrong.
This is what my docker-compose.yml looks like:
version: '3.1'

services:

  redmine:
    image: redmine
    restart: always
    ports:
      - 80:3000
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: SECRET_PASSWORD

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: SECRET_PASSWORD
      MYSQL_DATABASE: redmine
I expect to be able to run the following command without the above error message:
docker-compose -f docker-compose.yml up -d
I did a bit more research, and with better keywords I found my solution. I want to share it here in case someone else ever needs it.
Create a folder for configuring the docker service through systemd:
mkdir /etc/systemd/system/docker.service.d
Create a service configuration file at /etc/systemd/system/docker.service.d/http-proxy.conf and put the following in the newly created file:
[Service]
# NO_PROXY is optional and can be removed if not needed
# Change proxy_url to your proxy IP or FQDN and proxy_port to your proxy port
# For a proxy server which requires username and password authentication, just add the proper username and password to the URL (see example below)

# Example without authentication
Environment="HTTP_PROXY=http://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"

# Example with authentication
Environment="HTTP_PROXY=http://username:password@proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"

# Example for SOCKS5
Environment="HTTP_PROXY=socks5://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
Reload systemd so that the new settings are read:
sudo systemctl daemon-reload
Verify that the docker service's Environment is properly set:
sudo systemctl show docker --property Environment
Restart the docker service so that it uses the updated Environment settings:
sudo systemctl restart docker
Now you can execute the docker-compose command on your machine without getting any connection refused error messages.
For a proxy server which requires username and password for authentication: apart from adding the credentials in /etc/systemd/system/docker.service.d/http-proxy.conf, as suggested in this answer, I also had to add the same to the Dockerfile. The following is a snippet from the Dockerfile:
FROM ubuntu:16.04

ENV http_proxy http://username:password@proxy_url:proxy_port
ENV https_proxy http://username:password@proxy_url:proxy_port

RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
       build-essential \
       bla bla bla ...
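Since the question asked where the proxy settings would go in docker-compose.yml: one option is to pass them per service as environment variables, sketched below with the same proxy_url/proxy_port placeholders as above (this covers the running containers, while the systemd drop-in above covers the Docker daemon's own image pulls):

services:
  redmine:
    image: redmine
    restart: always
    environment:
      http_proxy: http://username:password@proxy_url:proxy_port
      https_proxy: http://username:password@proxy_url:proxy_port
      no_proxy: localhost,127.0.0.0/8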

Access a host from within a Docker container on Windows

I use Docker CE for Windows on the latest Windows 10 and have built an image with a script that runs a test against a web server.
(A litmus test suite for a WebDAV server, to be exact, but I think the problem is general.)
I run the web server on a Powershell console:
> wsgidav -p 8080 -H localhost
21:04:19.107 - <13348)> wsgidav INFO : Running WsgiDAV/3.0.0a3 Cheroot/6.4.0 Python/3.6.5
21:04:19.107 - <13348)> wsgidav INFO : Serving on http://localhost:8080 ...
From another Powershell console, I run my script in a Docker container (using FROM alpine).
The script starts and tries to access the endpoint, but does not succeed:
> docker pull mar10/litmus
> docker run --rm -p 8080:8080 mar10/litmus http://gateway.docker.internal:8080
-> running `basic':
0. init.................. FAIL (connection refused by `gateway.docker.internal' port 8080: Operation timed out)
So far I have tried:
- using the gateway.docker.internal hostname
- using -p PORT:PORT
- using --net=host
- restarting the docker daemon (which, interestingly, was sometimes also necessary to fix timeouts in docker pull)
- different IP addresses for the web server (127.0.0.1, localhost, 0.0.0.0, the local IP)
Nothing has worked so far (although the failure message varies).
Maybe I just missed a working combination of the above, or is there some other trick?
FWIW, I was able to solve it by building the container with the --network host option and using a real IP of the client (instead of localhost or 0.0.0.0).
Details here: https://hub.docker.com/r/mar10/docker-litmus/
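Concretely, that workaround might look like the following sketch (assumptions: 192.168.1.100 is a placeholder for the host's LAN IP, and wsgidav must listen on all interfaces rather than only localhost):

# serve on all interfaces instead of localhost
> wsgidav -p 8080 -H 0.0.0.0

# from another console, reach the server through the host's real IP
> docker run --rm --network host mar10/litmus http://192.168.1.100:8080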

Docker on Mac is running but refusing to expose port

Mac here, running Docker Community Edition Version 17.12.0-ce-mac49 (21995).
I have Dockerized a web app with a Dockerfile like so:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
I then build that image like so:
docker build -t myapp .
I then run a container of that image like so:
docker run -it -p 9200:9200 --net="host" --env-file ~/myapp-local.env --name myapp myapp
In the console I see the app start up without any errors, and all seems to be well. Even my metrics publishers (which publish heartbeat and other health metrics every 20 seconds) are printing to the console as I would expect. Everything seems to be fine.
Except when I go to run a curl against my app from another terminal/session:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"heyitsme","password":"12345"}' http://localhost:9200/v1/auth/signIn
curl: (7) Failed to connect to localhost port 9200: Connection refused
Now, if this were a situation where the /v1/auth/signIn path wasn't valid, or if there was something wrong with my request entity/payload, the server would pick up on it and send an error (I assure you, this exact same curl works when I run the server outside of Docker as a standalone service).
So this is definitely a situation where the curl command can't connect to localhost:9200. Again, when I run my app outside of Docker, that same curl command works perfectly, so I know my app is trying to stand up on port 9200.
Any ideas as to what could be going wrong here, or how I could begin troubleshooting?
The way you run your container has two conflicting parts:
- -p 9200:9200 says: "publish (bind) port 9200 of the container to port 9200 of the host"
- --net="host" says: "use the host's networking stack"
According to Docker for Mac - Networking docs / Known limitations, use cases, and workarounds, you should only publish a port:
I want to connect to a container from the Mac
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the Mac.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
The command to run the nginx webserver shown in Getting Started is an example of this.
$ docker run -d -p 80:80 --name webserver nginx
Check that your app binds to 0.0.0.0:9200 and not localhost:9200 or something similar.
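In other words, drop --net="host" and keep only the port publication; a sketch of the corrected command from the question:

docker run -it -p 9200:9200 --env-file ~/myapp-local.env --name myapp myapp

If the app binds to localhost inside the container, setting the Spring Boot property server.address=0.0.0.0 (or simply omitting server.address) makes it listen on all interfaces.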
The problem seems to be the network mode you are running the container in.
Quick test: log in to your container and run the curl command there; hopefully it works. That would isolate the problem to the request not being forwarded from host to container.
Try running your container on the default bridge network and test again.
Refer to this blog for details on the network modes in Docker.
TL;DR: You will need to add an iptables entry to allow the traffic to enter your container.
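For illustration, such a rule might look like the following sketch (Linux iptables syntax; the chain and port are assumptions to adapt to your setup):

# allow inbound traffic to the published port
sudo iptables -A INPUT -p tcp --dport 9200 -j ACCEPT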

cannot configure HDFS address using gethue/hue docker image

I'm trying to use the Hue Docker image from gethue/hue, but it seems to ignore the configuration I give it and always looks for HDFS on localhost instead of the Docker container I point it to.
Here is some context:
I'm using the following docker-compose file to launch an HDFS cluster:
hdfs-namenode:
  image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
  hostname: namenode
  environment:
    - CLUSTER_NAME=davidov
  ports:
    - "8020:8020"
    - "50070:50070"
  volumes:
    - ./data/hdfs/namenode:/hadoop/dfs/name
  env_file:
    - ./hadoop.env

hdfs-datanode1:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
  depends_on:
    - hdfs-namenode
  links:
    - hdfs-namenode:namenode
  volumes:
    - ./data/hdfs/datanode1:/hadoop/dfs/data
  env_file:
    - ./hadoop.env
This launches images from BigDataEurope, which are already properly configured, including:
- the activation of webhdfs (in /etc/hadoop/hdfs-site.xml):
  - dfs.webhdfs.enabled set to true
- the hue proxy user (in /etc/hadoop/core-site.xml):
  - hadoop.proxyuser.hue.hosts set to *
  - hadoop.proxyuser.hue.groups set to *
Then, I launch Hue following their instructions:
First, I launch a bash prompt inside the docker container:
docker run -it -p 8888:8888 gethue/hue:latest bash
Then, I modify desktop/conf/pseudo-distributed.ini to point to the correct hadoop "node" (in my case a docker container with the address 172.30.0.2):
[hadoop]
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://172.30.0.2:8020
# NameNode logical name.
## logical_name=
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
## webhdfs_url=http://172.30.0.2:50070/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True
And then I launch hue using the following command (still inside the hue container):
./build/env/bin/hue runserver_plus 0.0.0.0:8888
I then point my browser to localhost:8888, create a new user ('hdfs' in my case), and launch the HDFS file browser module. I then get the following error message:
Cannot access: /user/hdfs/.
HTTPConnectionPool(host='localhost', port=50070): Max retries exceeded with url: /webhdfs/v1/user/hdfs?op=GETFILESTATUS&user.name=hue&doas=hdfs (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 99] Cannot assign requested address',))
The interesting bit is that it still tries to connect to localhost (which of course cannot work), even though I modified its config file to point to 172.30.0.2.
Googling the issue, I found another config file: desktop/conf.dist/hue.ini. I tried modifying this one and launching hue again, but same result.
Does anyone know how I could correctly configure Hue in my case?
Thanks in advance for your help.
Regards,
Laurent.
Your one-off docker run command is not on the same network as the docker-compose containers.
You would need something like this, replacing [projectname] with the name of the folder you started docker-compose up in:
docker run -ti -p 8888:8888 --network="[projectname]_default" gethue/hue bash
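If you're unsure of the exact network name, listing the networks Compose created will show it (the default network is named after the project folder):

$ docker network ls   # look for an entry ending in _default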
I would suggest using Docker Compose for the Hue container as well, with a volume mount for the INI files under desktop/conf/; then you can specify simply
fs_defaultfs=hdfs://namenode:8020
(since you put hostname: namenode in the compose file)
You'll also need to uncomment the webhdfs_url line for your changes to take effect.
All INI files in the conf folder are merged by Hue.
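A sketch of what that could look like, added as a service to the compose file from the question (the in-container path /hue/desktop/conf is an assumption and may differ between gethue/hue image versions):

hue:
  image: gethue/hue:latest
  ports:
    - "8888:8888"
  volumes:
    # mount locally edited INI files over the image's conf directory
    - ./hue-conf:/hue/desktop/conf
  depends_on:
    - hdfs-namenode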

Failed to communicate a dockerized process with elastic search with "None of the configured nodes are available"

I have a Spring Boot application which communicates with Elasticsearch 5.0.0 alpha 2.
My application successfully communicates with Elasticsearch and performs several queries.
When I try to dockerize my application, it fails to communicate with Elasticsearch, and I get the following error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
I have spent a lot of time searching the internet. I found reports of problems when Elasticsearch itself is dockerized, but in my case the client is dockerized, and it works fine without Docker.
The command I used to create the docker image is: docker build -t my-service .
The DockerFile is:
FROM java:8
VOLUME /tmp
ADD ./build/libs/myjarfile-2.0.0.jar app.jar
EXPOSE 8090
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
To execute the image I use: docker run --name myname -d -p 8090:8090 -t my-service
Can someone share his/her experience with this issue?
Thanks
Guy Hudara
The problem is that your Elasticsearch is not reachable from your dockerized app. When you put something in a Docker container, it is also isolated on the network layer, and localhost is the localhost of the Docker container, not of the host itself. Therefore, if you have Elasticsearch in a Docker container as well, use container linking and environment variable injection, or point your app at the address of your host machine's main network interface (not the loopback).
Option 1
Assuming that Elasticsearch exposes port 9200, try running the following:
$ docker run -d --name=elasticsearch elasticsearch
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -p 8090:8090 my-app
Then you can define the Elasticsearch address in your app using the env variable ${ELASTICSEARCH_PORT_9200_TCP_ADDR}.
Option 2
Assuming your host machine runs on 192.168.1.10, you can also do the following:
$ docker run -d -p 9200:9200 elasticsearch
$ docker run -d -p 8090:8090 my-app
Note that naming the elasticsearch container is optional here, but exposing the Elasticsearch port is mandatory. In this case you'll have to configure the Elasticsearch host in your app as 192.168.1.10.
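For Option 2, one way to avoid hard-coding the address in the image is to pass it in as an environment variable (a sketch; ES_HOST is a hypothetical variable name the app would have to read):

$ docker run -d -p 8090:8090 -e ES_HOST=192.168.1.10 --name myname -t my-service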
