Why is curl shutting down the docker container? - bash

Good day!
I have a microservice that runs in a container and a registry that stores the addresses of the microservices.
I also have a script that runs when the container starts. The script gets the container's local IP and sends it to another server using curl. After the script finishes, exit code 0 is returned and the container exits. How can I fix this problem?
# docker-compose realtime logs
nginx_1 | "code":"SUCCESSFUL_REQUEST"
nginx_1 exited with code 0
My bash script
#!/bin/bash
# get this container's local IP
address=$(hostname -i)
# register this microservice (name + address) with the registry over HTTP
curl -X POST http://registry/service/register -H 'Content-Type: application/json' -d '{"name":"'"$MICROSERVICE_NAME"'","address":"'"$address"'"}'
The script itself runs fine, but unfortunately it ends the container's main process. Is it possible to somehow intercept this exit code so that it does not shut down the container?
I would be grateful for any help or comment!🙏
EDIT:
Dockerfile (the script is called after the container starts):
FROM nginx:1.21.1-alpine
WORKDIR /var/www/
COPY ./script.sh /var/www/script.sh
RUN apk add --no-cache --upgrade bash && \
    apk add nano
# launch script
CMD /var/www/script.sh
EDIT 2:
My docker-compose.yml:
version: "3.9"
services:
#database
pgsql:
hostname: pgsql
build: ./pgsql
ports:
- 5432:5432/tcp
volumes:
- ./pgsql/data:/var/lib/postgresql/data
#registry
registry_fpm:
build: ./fpm/registry
depends_on:
- pgsql
volumes:
- ./microservices/registry:/var/www/registry
registry_nginx:
hostname: registry
build: ./nginx/registry
depends_on:
- registry_fpm
volumes:
- ./microservices/registry:/var/www/registry
- ./nginx/registry/nginx.conf:/etc/nginx/nginx.conf
#server
nginx:
build: ./nginx
environment:
MICROSERVICE_NAME: Microservice_1
depends_on:
- registry_nginx
ports:
- 80:80/tcp

The purpose of the registry is to store only the IPs of all microservices. If you are familiar with microservices, you probably know that the registry acts as the custodian of all microservice addresses. The registry is used by the other microservices to obtain addresses so that they can communicate over HTTP.
There is no need for these addresses as far as I can tell: the microservices can simply use each other's hostnames.
You already do this with your curl call: the POST request goes to the server registry by hostname; and so on.
Docker Compose may be all the orchestration you require for your microservices.
Regarding IPs and networking
If you prefer more isolation and consistency, you can configure custom networks in your compose.yaml: virtualised network adapters; think of them as VLANs whose nodes are selected containers only.
For additional info on networking, refer to:
- custom IP addresses for each container
- hostnames for each container
- links (deprecated; do not use; for information only)
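A minimal sketch of such a custom network in compose.yaml (the network name backend and the subnet are made-up values for illustration):
networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  nginx:
    networks:
      backend:
        ipv4_address: 172.28.0.10   # static IP; only valid with an ipam subnet defined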
Regarding heartbeat
Keeping track of a heartbeat shouldn't be necessary.
But if you really need one, doing it from within the container is a no-no: a container should run only one process. And creating a new record is redundant, since the Docker daemon already keeps track of every container's IP and state (and loads of other things).
The function of the registry (keeping track of lifecycle) is instead played by the Docker daemon; try docker compose ps.
However, you can configure a container to restart automatically when it fails using the restart tag.
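For example, a minimal sketch for the nginx service from the question:
services:
  nginx:
    restart: on-failure   # restart the container automatically when it exits non-zero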
If you need a way to monitor these without the CLI, listening on the Docker socket is the way to go.
You could make your own dashboard that taps into the Docker API, whose endpoints are listed here. NB: the socket might need to be protected and, if possible, ought to be mounted read-only.
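For example, assuming the default socket path, the same API the CLI uses can be queried directly:
# list running containers via the Docker Engine API over the unix socket
curl --unix-socket /var/run/docker.sock http://localhost/containers/json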
But a better solution would be to use an image that already does this. I cannot give you recommendations, unfortunately; I have not used any.
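As a side note on the original exit: CMD /var/www/script.sh replaces the nginx image's default command, so once the script returns, the container's main process is gone and the container stops. A minimal sketch of a fix, registering first and then handing control to nginx in the foreground:
# run the registration script, then keep nginx as the long-running foreground process
CMD /var/www/script.sh && nginx -g 'daemon off;'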

Related

How to access a website running on docker after closing the debug on Visual Studio

I built a very simple web app and web API on .NET Core and configured docker-compose to get them to communicate over the same network correctly.
In Visual Studio, when I hit play on the Docker Compose project, it runs fine: both the web app and the web API work and communicate correctly.
In the Docker Desktop app I see them running (green).
But when I close/stop the debugger in VS, I can't access the websites anymore even though the containers are still running. I thought Docker worked as a sort of IIS.
Am I misunderstanding Docker's capabilities, or do I need to run them again from a CLI, or publish them somewhere, or what?
I thought the fact that the containers are up and running meant they're live for me to navigate to.
Help me out here, please.
You are correct: unless there is some special routing happening, the fact that the containers are running means your services are available.
You can see the ports being exposed in the output of the docker ps -a command:
CONTAINER ID   IMAGE                  COMMAND                    CREATED              STATUS              PORTS                                           NAMES
560f78689902   moviedecisionweb:dev   "C:\\remote_debugger\\…"   About a minute ago   Up About a minute   0.0.0.0:52002->80/tcp, 0.0.0.0:52001->443/tcp   mdweb
1cd7f72426fe   moviedecisionapi:dev   "C:\\remote_debugger\\…"   About a minute ago   Up About a minute   0.0.0.0:52005->80/tcp, 0.0.0.0:52004->443/tcp   mdapi
Based on the provided output, you have two docker containers running.
I'm assuming the ports 80 & 443 are serving the HTTP & HTTPS services (respectively) from your app/s.
Based on this...
For container "mdweb", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52002
https://0.0.0.0:52001
For container "mdapi", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52005
https://0.0.0.0:52004
I believe you can use localhost, 127.0.0.1 & 0.0.0.0 interchangeably in the above.
You cannot use the hostnames "mdweb" or "mdapi" from your docker HOST machine, unless you have explicitly set up your DNS to handle these names. However, you can use these hostnames from inside a docker container on the same docker network.
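For example (assuming a curl binary is present inside the image, which may not be the case for these .NET images), you could check the name resolution from one container to the other:
docker exec mdweb curl -s http://mdapi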
If you provide more information (e.g. your docker-compose.yml), we could help you further...

cannot configure HDFS address using gethue/hue docker image

I'm trying to use the Hue docker image from gethue/hue, but it seems to ignore the configuration I give it and always looks for HDFS on localhost instead of on the docker container I point it to.
Here is some context:
I'm using the following docker-compose to launch an HDFS cluster:
hdfs-namenode:
  image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
  hostname: namenode
  environment:
    - CLUSTER_NAME=davidov
  ports:
    - "8020:8020"
    - "50070:50070"
  volumes:
    - ./data/hdfs/namenode:/hadoop/dfs/name
  env_file:
    - ./hadoop.env
hdfs-datanode1:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
  depends_on:
    - hdfs-namenode
  links:
    - hdfs-namenode:namenode
  volumes:
    - ./data/hdfs/datanode1:/hadoop/dfs/data
  env_file:
    - ./hadoop.env
This launches images from BigDataEurope, which are already properly configured, including:
- the activation of webhdfs (in /etc/hadoop/hdfs-site.xml):
  - dfs.webhdfs.enabled set to true
- the hue proxy user (in /etc/hadoop/core-site.xml):
  - hadoop.proxyuser.hue.hosts set to *
  - hadoop.proxyuser.hue.groups set to *
Then, I launch Hue following their instructions:
First, I launch a bash prompt inside the docker container:
docker run -it -p 8888:8888 gethue/hue:latest bash
Then, I modify desktop/conf/pseudo-distributed.ini to point to the correct hadoop "node" (in my case a docker container with the address 172.30.0.2):
[hadoop]
  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://172.30.0.2:8020
      # NameNode logical name.
      ## logical_name=
      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      ## webhdfs_url=http://172.30.0.2:50070/webhdfs/v1
      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false
      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True
And then I launch hue using the following command (still inside the hue container):
./build/env/bin/hue runserver_plus 0.0.0.0:8888
I then point my browser to localhost:8888, create a new user ('hdfs' in my case), and launch the HDFS file browser module. I then get the following error message:
Cannot access: /user/hdfs/.
HTTPConnectionPool(host='localhost', port=50070): Max retries exceeded with url: /webhdfs/v1/user/hdfs?op=GETFILESTATUS&user.name=hue&doas=hdfs (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 99] Cannot assign requested address',))
The interesting bit is that it still tries to connect to localhost (which of course cannot work), even though I modified its config file to point to 172.30.0.2.
Googling the issue, I found another config file: desktop/conf.dist/hue.ini. I tried modifying this one and launching hue again, but same result.
Does any one know how I could correctly configure hue in my case?
Thanks in advance for your help.
Regards,
Laurent.
Your one-off docker run command is not on the same network as the docker-compose containers.
You would need something like this, replacing [projectname] with the folder you started docker-compose up in:
docker run -ti -p 8888:8888 --network="[projectname]_default" gethue/hue bash
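If you are unsure of the exact network name Compose generated, you can list all networks first:
docker network ls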
I would suggest using Docker Compose for the Hue container as well, with a volume mount for an INI file under desktop/conf/; then you can simply specify
fs_defaultfs=hdfs://namenode:8020
(since you put hostname: namenode in the compose file)
You'll also need to uncomment the WebHDFS line for your changes to take effect.
All INI files are merged in the conf folder for Hue
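A sketch of what that could look like as a service in the same compose file (the override file name and the in-container conf path are assumptions; check the image docs for the exact location):
hue:
  image: gethue/hue:latest
  ports:
    - "8888:8888"
  volumes:
    # mount a local INI with your overrides into Hue's conf directory
    - ./z-hue-overrides.ini:/usr/share/hue/desktop/conf/z-hue-overrides.ini
  depends_on:
    - hdfs-namenode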

Setting redis configuration with docker in windows

I want to set up the redis configuration in docker.
I have my own redis.conf under D:/redis/redis.conf; I have configured it with bind 127.0.0.1 and have uncommented requirepass foobared.
Then I used this command to load this configuration in docker:
docker run --volume D:/redis/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
Next, I have a docker-compose.yml in my Maven project under src/resources.
I have the following in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
And I execute the command:
docker-compose up
The server runs, but when I check with the command:
docker ps -a
it shows that the redis image runs at 0.0.0.0:6379.
I want it to run at 127.0.0.1.
How do I get that?
Isn't my configuration file loading, or is it wrong? Or are my commands wrong?
Any suggestions are of great help.
PS: I am using Windows.
Thanks
Try to execute:
docker inspect <container_id>
and use the "NetworkSettings" -> "Gateway" value (it should be 172.17.0.1) instead of 127.0.0.1.
You can't use 127.0.0.1, as your Redis was run in an isolated environment.
Or you can link your containers.
So first of all, you should not be worried about redis saying it is listening on 0.0.0.0:6379, because redis is running inside the container, and if it didn't listen on 0.0.0.0 then you wouldn't be able to make any connections at all.
Next, if you want redis to be reachable only on localhost of the host machine, you need to bind the published port like below:
redis:
  image: redis
  ports:
    - "127.0.0.1:6379:6379"
PS: I have not run a container, or Docker for Windows, with a 127.0.0.1 port mapping, so you will have to see if it works. Host networking differs between Windows, Mac and Linux, so it may not work this way.
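As a side note, neither compose snippet above actually loads the custom redis.conf from the question. A sketch of how the docker run flags could translate into compose (paths as in the question; untested on Windows):
redis:
  image: redis
  # start redis with the mounted config instead of the image default
  command: redis-server /usr/local/etc/redis/redis.conf
  volumes:
    - D:/redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "127.0.0.1:6379:6379"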

Docker on Windows 10 "driver failed programming external connectivity on endpoint"

I am trying to use $ docker-compose up -d for a project and am getting this error message:
ERROR: for api Cannot start service api: driver failed programming external connectivity on endpoint dataexploration_api_1 (8781c95937a0a4b0b8da233376f71d2fc135f46aad011401c019eb3d14a0b117): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9000:tcp:172.19.0.2:80: input/output error
Encountered errors while bringing up the project.
I am wondering if it is maybe the port? I had been trying port 8080 previously. The project was originally set up on a Mac, and I have cloned the repository from GitHub.
I got the same error message on my Windows 10 Pro / Docker v17.06.1-ce-win24 / Docker-Compose v1.14.0 setup, using Windows PowerShell x86 in admin mode.
The solution was to simply restart Docker.
If it happens once, restarting Docker will do the trick. In my case, it was happening every time I restarted my computer.
In that case, disable Fast Startup, or you will probably have to restart Docker every time your computer starts. This solution was obtained from here.
Simply restarting Docker didn't fix the problem for me on Windows 10.
In my case, I resolved the problem with the exact steps below:
1) Close "Docker Desktop"
2) Run the commands below:
net stop com.docker.service
net start com.docker.service
3) Launch "Docker Desktop" again
Hope this will help someone else.
I got that error too. If you want to know the main reason why the error happens: Docker is already running a similar container. To resolve the problem (and avoid restarting Docker), run:
docker container ls
You will get something similar to:
CONTAINER ID   IMAGE           COMMAND           CREATED
1fa4ab2cf395   friendlyhello   "python app.py"   28 seconds ago
This is a list of the running containers; take the CONTAINER ID (copy it with Ctrl+C).
Now you have to end that process (and let another image run), so run this command:
docker container stop <CONTAINER_ID>
And that's all! Now you can create the container.
For more information, visit https://docs.docker.com/get-started/part2/
Normally this error happens when you try to start a container but the ports the container needs are occupied, usually by Docker itself as the result of a previous bad stop.
For me the solution is:
Open a Windows CMD as administrator and type netstat -oan to find the process (Docker is that process) that is occupying your port.
In my case my docker ports are 3306 6001 8000 9001.
Now we need to free those ports, so we kill the process by its PID (column PID); type:
TASKKILL /PID 9816 /F
Restart docker.
Be happy.
Regards.
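For example, to check a single port instead of scanning the whole output (9000 is the port from the question's error; the PID is whatever the command prints on your machine):
netstat -ano | findstr :9000
TASKKILL /PID <pid_from_the_line_above> /F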
I am aware there are already a lot of answers, but none of them solved the problem for me. Instead, I got rid of this error message by resetting Docker to factory defaults.
In my case, the problem was that the docker container (Nginx) used port 80, and IIS used the same one. Setting up another port in IIS solved the problem.
In most cases, the first thing you should think about is that an old service is still running and using the port.
In my case, since I changed the image name, docker-compose stop (then up) didn't stop the old container (service), so the new container could not be started.
A bit of a late answer, but I will leave it here; it might help someone else.
On a Mac (Mojave), after a lot of restarts of both the Mac and Docker,
I had to sudo apachectl stop.
The simplest way to solve this is to restart Docker. But in some cases that might not work, even though you don't have any containers running on the port. You can check the running containers using the docker ps command, and you can see all the containers that exited without being cleaned up using docker ps -a.
Imagine there is a container with the container id 8e35276e845e. You can use the command docker rm 8e35276e845e, or docker rm 8e3, to remove the container. Note that the first 3 characters are enough to identify a particular container id; thus, in the above scenario, 8e3 refers to 8e35276e845e.
If restarting doesn't work, you can try changing the ports of the services in the docker-compose.yml file; according to the apache port, you also have to change the port of the v-host (if there is one). This will resolve your problem.
Ex:
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8082:80
  depends_on:
    - mysql_db
should be changed into
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8083:80
  depends_on:
    - mysql_db
and the particular v-host also has to be changed.
Ex (according to the above scenario):
<VirtualHost *:80>
    ProxyPreserveHost On
    ServerAlias phpadocker.lk
    ProxyPass / http://localhost:8083/
    ProxyPassReverse / http://localhost:8083/
</VirtualHost>
This will help you to solve the above problem.
For the many Windows users out there that have the same issue, I would suggest restarting the computer as well, because most of the time (for me at least) restarting just Docker doesn't work. So I would suggest you follow these steps:
Restart your PC.
Then start up your PowerShell as admin and run this:
Set-NetConnectionProfile -interfacealias "vEthernet (DockerNAT)" -NetworkCategory Private
After that, restart your Docker.
After completing these steps you will be able to run without problems.
I hope that helps.

How to run Redis on Docker using docker-compose.yml?

I found an official Spring tutorial that describes developing an application that uses a Redis keystore, but I know almost nothing about Docker and don't really want to learn it. The app's source code contains a docker-compose.yml file with multiple Redis-oriented settings, and the Spring docs say:
There is a docker-compose.yml file in the source code on GitHub which you can run really easily on the command line with docker-compose up.
But it seems to be not that easy, and the Docker docs are too complicated.
I have installed Docker and deployed Redis there:
CONTAINER ID   IMAGE   COMMAND                  CREATED        STATUS          PORTS      NAMES
81cbeeb08153   redis   "docker-entrypoint.sh"   22 hours ago   Up 21 minutes   6379/tcp   Server
The docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
What's next? How do I import this into Redis on Docker?
I'm trying to get Redis up on my Windows machine to let my simple localhost app finally work.
Do you have Docker Compose installed? If yes, just run docker-compose up; it will start the redis image and make it listen on the correct port.
Alternatively, you will have to start redis manually and correctly expose the specified port.
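A quick way to verify it came up (a usage sketch; the container name is whatever docker ps shows for your redis service):
docker-compose up -d
docker exec -it <redis_container_name> redis-cli ping
# expected reply: PONG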
