I found an official Spring tutorial that describes developing an application that uses Redis, but I know almost nothing about Docker and don't really want to learn it. The app's source code contains a docker-compose.yml file with several Redis-related settings, and the Spring docs say:
There is a docker-compose.yml file in the source code in Github which
you can run really easily on the command line with docker-compose up.
But it doesn't seem to be that easy, and the Docker docs are too complicated.
I have installed Docker and deployed Redis in it:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81cbeeb08153 redis "docker-entrypoint.sh" 22 hours ago Up 21 minutes 6379/tcp Server
The docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
What's next? How do I get this configuration into the Redis that is already running in Docker?
I'm trying to bring up Redis on a Windows machine so that my simple localhost app finally works.
Do you have Docker Compose installed? If yes, just run docker-compose up in the directory that contains docker-compose.yml - it will start the redis image and publish the correct port.
Alternatively, you will have to start Redis manually and expose the specified port yourself.
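For example, a minimal manual equivalent of that compose file (the container name myredis is just an illustrative choice):
docker run -d --name myredis -p 6379:6379 redis
The app on the host can then reach Redis at localhost:6379.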
Good day!
I have a microservice that runs in a container and a registry that stores the addresses of the microservices.
I also have a script that runs when the container starts. The script gets the container's local IP and sends it to another server using curl. After the script finishes, exit code 0 is returned and the container exits. How can this problem be fixed?
#docker-compose realtime logs
nginx_1 | "code":"SUCCESSFUL_REQUEST"
nginx_1 exited with code 0
My bash script
#!/bin/bash
address=$(hostname -i)
curl -X POST http://registry/service/register -H 'Content-Type: application/json' -d '{"name":"'"$MICROSERVICE_NAME"'","address":"'"$address"'"}'
The script itself runs fine, but unfortunately it ends the container's main process. Is it possible to somehow intercept this exit code so that it does not shut down the container?
I would be grateful for any help or comment!🙏
EDIT:
Dockerfile (the script is called when the container starts):
FROM nginx:1.21.1-alpine
WORKDIR /var/www/
COPY ./script.sh /var/www/script.sh
RUN apk add --no-cache --upgrade bash && \
apk add nano
#launch script
CMD /var/www/script.sh
EDIT 2:
my docker-compose.yml
version: "3.9"
services:
  #database
  pgsql:
    hostname: pgsql
    build: ./pgsql
    ports:
      - 5432:5432/tcp
    volumes:
      - ./pgsql/data:/var/lib/postgresql/data
  #registry
  registry_fpm:
    build: ./fpm/registry
    depends_on:
      - pgsql
    volumes:
      - ./microservices/registry:/var/www/registry
  registry_nginx:
    hostname: registry
    build: ./nginx/registry
    depends_on:
      - registry_fpm
    volumes:
      - ./microservices/registry:/var/www/registry
      - ./nginx/registry/nginx.conf:/etc/nginx/nginx.conf
  #server
  nginx:
    build: ./nginx
    environment:
      MICROSERVICE_NAME: Microservice_1
    depends_on:
      - registry_nginx
    ports:
      - 80:80/tcp
The purpose of the registry is only to store the IPs of all microservices. If you are familiar with microservices, you probably know that the registry acts as the keeper of all microservice addresses. Other microservices use the registry to look up those addresses so they can communicate over HTTP.
there is no need for these addresses as far as i can tell. the microservices can easily use each other's hostnames.
you already do this with your curl: the POST request goes to the hostname registry; and so on.
docker compose may just be all the orchestration you require for your microservices.
regarding IPs and networking
if you prefer, for more isolation and consistency, you can configure the following in your compose.yaml (see the sketch after this list):
- custom networks: virtualised network adapters; think of them as vLANs whose nodes are selected containers only (for additional info, refer to the Docker networking docs)
- custom IP addresses for each container
- hostnames for each container
- links: deprecated; do not use; information only
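a rough sketch of what that could look like in your compose.yaml (the network name backend and the hostname microservice1 are assumptions, just for illustration):
services:
  registry_nginx:
    build: ./nginx/registry
    hostname: registry
    networks:
      - backend
  nginx:
    build: ./nginx
    hostname: microservice1   # example hostname, pick your own
    networks:
      - backend
networks:
  backend:
    driver: bridge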
regarding heartbeat
keeping track of a heartbeat shouldn't be necessary.
but if you really need one, doing it from within the container is a no-no. a container should be only one running process. and creating a new record is redundant, as the docker daemon is already keeping track of all IPs and states (and loads of other details).
the function of the registry (keeping track of lifecycle) is instead played by the docker daemon. try docker compose ps
however, you can configure the container to restart automatically when it fails using the restart tag
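for example, a sketch in compose (unless-stopped is just one possible policy):
services:
  nginx:
    build: ./nginx
    restart: unless-stopped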
if you need a way to monitor these without the CLI, listening on the docker socket is the way to go.
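for instance, a quick sketch of querying the daemon over the socket (this assumes curl on the host and the default socket path /var/run/docker.sock):
curl --unix-socket /var/run/docker.sock http://localhost/containers/json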
you could make your own dashboard that taps into the Docker API, whose endpoints are listed in the Docker Engine API reference. NB: the socket might need to be protected and, if possible, ought to be mounted as read-only.
but a better solution would be using an image that already does this. i cannot give you recommendations unfortunately; i have not used any.
I built a very simple web app and web API on .NET Core and configured docker-compose so that they communicate correctly over the same network.
In Visual Studio, when I hit play on the Docker Compose project, it runs fine: both the web app and the web API work and communicate correctly.
In the Docker Desktop app I see them running (green).
But when I close/stop the debugger in VS I can't access the websites anymore, even though the containers are still running. I thought Docker worked as a sort of IIS.
Am I misunderstanding the docker capabilities or do I need to run them again from a CLI or publish them somewhere or what?
I thought the fact the containers are up and running should mean they're live for me to navigate to.
Help me out over here please.
You are correct: unless there is some special routing happening, the fact that the containers are running means your services are available.
You can see the ports being exposed from the docker ps -a command:
CONTAINER_ID: 560f78689902
IMAGE: moviedecisionweb:dev
COMMAND: "C:\\remote_debugger\\…"
CREATED: About a minute ago
STATUS: Up About a minute
PORTS: 0.0.0.0:52002->80/tcp, 0.0.0.0:52001->443/tcp
NAMES: mdweb
CONTAINER_ID: 1cd7f72426fe
IMAGE: moviedecisionapi:dev
COMMAND: "C:\\remote_debugger\\…"
CREATED: About a minute ago
STATUS: Up About a minute
PORTS: 0.0.0.0:52005->80/tcp, 0.0.0.0:52004->443/tcp
NAMES: mdapi
Based on the provided output, you have two docker containers running.
I'm assuming the ports 80 & 443 are serving the HTTP & HTTPS services (respectively) from your app/s.
Based on this...
For container "mdweb", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52002
https://0.0.0.0:52001
For container "mdapi", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52005
https://0.0.0.0:52004
I believe you can use localhost, 127.0.0.1 & 0.0.0.0 interchangeably in the above.
You cannot use the hostnames "mdweb" or "mdapi" from your docker HOST machine - unless you have explicitly set up your DNS to handle these names. However, you can use these hostnames if you are inside a docker container on the same docker network.
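For example, something like the following should work from inside the web container (a sketch; it assumes curl is available in that image and that both containers share the same compose network):
docker exec -it mdweb curl http://mdapi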
If you provide more information (e.g. your docker-compose.yml), we could help you further...
I have a spring-config-server project that I am trying to run via Docker. I can run it from the command line, and my other services and browser successfully connect via:
http://localhost:8980/aservice/dev
However, if I run it via Docker, the call fails.
My config-server has a Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=build/libs/my-config-server-0.1.0.jar
ADD ${JAR_FILE} my-config-server-0.1.0.jar
EXPOSE 8980
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/my-config-server-0.1.0.jar"]
I build via:
docker build -t my-config-server .
I am running it via:
docker run my-config-server -p 8980:8980
And then I confirm it is running via
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cecafdf99fe my-config-server "java -Djava.securit…" 14 seconds ago Up 13 seconds 8980/tcp suspicious_brahmagupta
When I run it via Docker, the browser fails with an "ERR_CONNECTION_REFUSED" and my calling services fail with:
Could not locate PropertySource: I/O error on GET request for
"http://localhost:8980/aservice/dev": Connection refused (Connection
refused);
Adding full answer based on comments.
First, you have to specify -p before the image name:
docker run -p 8980:8980 my-config-server
Second, just configuring localhost with the host port won't make your my-service container talk to the other container. localhost inside a container refers to the container itself (not the host). You will need to use an appropriate Docker networking model so both containers can talk to each other.
If you are on Linux, the default is the bridge network, so you can look up the my-config-server container's IP with docker inspect <container_id> and use it as your config server endpoint.
For example, if your my-config-server IP is 172.17.0.2, then the endpoint is http://172.17.0.2:8980/
spring:
  cloud:
    config:
      uri: http://172.17.0.2:8980
Just follow the Docker documentation for a little more understanding of how networking works.
https://docs.docker.com/network/network-tutorial-standalone/
https://docs.docker.com/v17.09/engine/userguide/networking/
If you want to spin up both containers using docker-compose, then you can link both containers using the service name, as sketched below. Just follow Networking in Compose.
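A rough sketch of such a compose file (the client service my-service and its image are hypothetical; the config server is reached by its service name):
version: "3"
services:
  my-config-server:
    build: .
    ports:
      - "8980:8980"
  my-service:
    image: my-service   # hypothetical client image
    depends_on:
      - my-config-server
    environment:
      SPRING_CLOUD_CONFIG_URI: http://my-config-server:8980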
I could imagine that the application only listens on localhost, ie 127.0.0.1.
You might want to try setting the property server.address to 0.0.0.0.
Then port 8980 should also be available externally.
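For example, in application.yml (a sketch; the equivalent in application.properties would be server.address=0.0.0.0):
server:
  address: 0.0.0.0
  port: 8980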
I want to set up a Redis configuration in Docker.
I have my own redis.conf under D:/redis/redis.conf, configured with bind 127.0.0.1 and with requirepass foobared uncommented.
Then used this command to load this configuration in docker:
docker run --volume D:/redis/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
Next, I have a docker-compose.yml in my Maven project under src/resources.
I have the following in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
And I execute the command:
docker-compose up
The Server runs, but when i check with the command:
docker ps -a
it shows that the Redis image runs at 0.0.0.0:6379.
I want it to run at 127.0.0.1.
How do i get that?
Isn't my configuration file loading, or is it wrong? Or are my commands wrong?
Any suggestions are of great help.
PS: I am using Windows.
Thanks
Try to execute:
docker inspect <container_id>
And use "NetworkSettings"->"Gateway" (it must be 172.17.0.1) value instead of 127.0.0.1.
You can't use 127.0.0.1 as your Redis was run in the isolated environment.
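For instance, a sketch that extracts just that field (assuming the container is on the default bridge network):
docker inspect -f '{{.NetworkSettings.Gateway}}' <container_id>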
Or you can link your containers.
So first of all, you should not be worried about Redis saying it is listening on 0.0.0.0:6379, because Redis is running inside the container, and if it didn't listen on 0.0.0.0 you wouldn't be able to make any connections.
Next, if you want the published port to be bound only to localhost on the host, then you need to use the mapping below:
redis:
  image: redis
  ports:
    - "127.0.0.1:6379:6379"
PS: I have not run a container on Docker for Windows with a 127.0.0.1 port mapping, so you will have to see if it works, because host networking differs between Windows, Mac and Linux and it may not work this way.
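If the goal is also to actually load D:/redis/redis.conf through compose, a sketch mirroring the docker run command from the question would be:
redis:
  image: redis
  command: redis-server /usr/local/etc/redis/redis.conf
  volumes:
    - D:/redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "127.0.0.1:6379:6379"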
I'm running boot2docker on a Mac for my development. I built a Docker image containing a Jetty server which connects to elasticsearch at localhost, along with Redis and MySQL.
I'm running docker-compose with a host bridge configuration which looks like the following:
api:
  image: api
  ports:
    - "8080:8080"
  environment:
    JETTY_ENVIRONMENT: dev
  net: "host"
What I want is to access elasticsearch, which I installed on my Mac, via localhost:9200.
Try this? I know we're not supposed to answer with links, but I thought it'd be OK since it's a link to a boot2docker file on Github.