Calling backend docker container from frontend + client side vs server side rendering - spring-boot

I'm currently working on a full-stack application using Spring Boot (Kotlin), SvelteKit (run with Vite), and MongoDB, each in its own Docker container. My backend service is forwarded to port 6868 on my localhost. When I run my frontend locally with "npm run dev" (which triggers the script vite dev --host 0.0.0.0 --port 8080) and remove the service from my docker-compose.yml (see the frontend-svelte service in the docker-compose below), I am able to call my backend at localhost:6868 (see +page.js below).

However, when I run my frontend inside a Docker container, the request to localhost:6868 fails. This makes some sense to me, since localhost:6868 would refer to the inside of the Docker container when the code runs on the server (the container) rather than in the browser. When I change localhost:6868 to spring-boot:8080 (the container's service name), the initial server-side request does succeed (i.e. the console.log below in /frontend-svelte/routes/+page.js does print), but the browser still reports an error for the subsequent requests sent from the client side.

It seems to me that the issue is the discrepancy between requests sent client-side vs. server-side, so how can I resolve it? Thanks everyone for your help!
docker-compose.yml
version: "3.8"
services:
  mongodb:
    image: mongo:5.0.2
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - db:/data/db
  spring-boot:
    image: bracket_backend
    build:
      context: ./backend
      dockerfile: Dev.Dockerfile
    depends_on:
      - mongodb
    ports:
      - 6868:8080
    stdin_open: true
    tty: true
    volumes:
      - ./backend/src:/app/src
    env_file:
      - ./.env
  frontend-svelte:
    image: bracket_frontend
    build:
      context: ./frontend-svelte
      dockerfile: Dev.Dockerfile
    ports:
      - 1234:8080
    stdin_open: true
    tty: true
    volumes:
      - ./frontend-svelte/src:/app/src
    depends_on:
      - spring-boot
volumes:
  db:
/backend/Dev.Dockerfile
FROM maven:3.8.6-openjdk-18-slim
WORKDIR /app
COPY ./.mvn ./.mvn
COPY ./mvnw ./
COPY ./pomDev.xml ./
# Note that src is mounted as a volume to allow code update w/o restarting container
ENTRYPOINT mvn spring-boot:run -f pomDev.xml
/frontend-svelte/Dev.Dockerfile
FROM node:16-slim
WORKDIR /app
COPY package.json .
RUN npm install --legacy-peer-deps
COPY svelte.config.js .
COPY vite.config.js .
COPY jsconfig.json .
COPY playwright.config.js .
# Note that src is mounted so changes will occur.
ENTRYPOINT npm run dev
/frontend-svelte/routes/+page.js (this is where the request to the backend is made; it succeeds when not run from a Docker container)
import { getBaseUrl } from '$lib/utils.js';

/** @type {import('./$types').PageLoad} */
export async function load({ params }) {
  console.log("test")
  const response = await fetch(`http://localhost:6868/api/groups`); // THIS LINE MAKES THE REQUEST TO THE BACKEND
  console.log("response is: ");
  console.log(response);
  if (!response.ok) {
    throw new Error(`Error! status: ${response.status}`);
  }
  return response.json();
}
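A common way to handle the server-side vs client-side discrepancy is to choose the base URL by execution context: during SSR the request originates inside the frontend container, where the backend is reachable by its compose service name, while in the browser it must go through the port published on the host. A minimal sketch (the helper name is illustrative; in SvelteKit you could branch on `browser` from `$app/environment`, and the URLs mirror the compose file above):

```javascript
// Sketch: pick the backend base URL depending on where the code runs.
// 'spring-boot:8080' is the compose service name + container port,
// reachable only on the Docker network (i.e. during SSR).
// 'localhost:6868' is the port published to the host, which is the
// only address the user's browser can reach.
function apiBase(isServer) {
  return isServer ? 'http://spring-boot:8080' : 'http://localhost:6868';
}

// e.g. in +page.js: await fetch(`${apiBase(isServer)}/api/groups`);
console.log(apiBase(true));  // http://spring-boot:8080
console.log(apiBase(false)); // http://localhost:6868
```

This keeps a single load function working in both contexts, since each environment resolves a hostname that is actually routable from where the fetch executes.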

Related

How to get docker-compose container to see Redis host?

I have this simple docker-compose.yml file:
version: '3.8'
services:
  bot:
    build:
      dockerfile: Dockerfile
      context: .
    links:
      - redis
    depends_on:
      - redis
  redis:
    image: redis:7.0.0-alpine
    ports:
      - "6379:6379"
    environment:
      - REDIS_REPLICATION_MODE=master
    restart: always
    volumes:
      - cache:/data
    command: redis-server
volumes:
  cache:
    driver: local
This is how the bot (in Go) connects to redis:
import "github.com/go-redis/redis/v8"

func setRedisClient() {
	rdb = redis.NewClient(&redis.Options{
		Addr:     "redis:6379",
		Password: "",
		DB:       0,
	})
}
bot Dockerfile:
FROM golang:1.18.3-alpine3.16
WORKDIR /go/src/bot-go
COPY . .
RUN go build .
RUN ./bot-go
But when I run docker-compose up --build I always get:
panic: dial tcp: lookup redis on 192.168.65.5:53: no such host
The redis host is never found, no matter what changes I make to the host or to the docker-compose file.
The app does work without Docker when I point the client at localhost.
What am I doing wrong, exactly?
The problem is that the bot-go image never finishes building: RUN ./bot-go executes the binary during the build, which blocks forever. Change RUN ./bot-go to CMD [ "./bot-go" ] in the Dockerfile and everything will work fine.
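With that change applied, the bot's Dockerfile would look like this (same base image and paths as in the question):

```dockerfile
FROM golang:1.18.3-alpine3.16
WORKDIR /go/src/bot-go
COPY . .
RUN go build .
# CMD runs the binary when the container starts. The original RUN
# executed it at build time, so the build never completed and the
# service never joined the compose network.
CMD ["./bot-go"]
```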

Spring boot + docker + rest template connection refused

I have two Spring Boot services running in Docker, and when I try to communicate between them with RestTemplate I get java.net.ConnectException: Connection refused (Connection refused).
The URL is http://localhost:8081/api/v1/package/250Mbps
Service 1 docker-compose.yml:
product_catalogue_service:
  image: openjdk:8-jdk-alpine
  ports:
    - "8081:8081"
  volumes:
    - .:/app
  working_dir: /app
  command: ./gradlew bootRun
  restart: on-failure
Service 2 docker-compose.yml:
order_service:
  image: openjdk:8-jdk-alpine
  ports:
    - "8083:8083"
  volumes:
    - .:/app
  working_dir: /app
  command: ./gradlew bootRun
  restart: on-failure
Rest template URL, and it is working when I run project 2 from the IntelliJ:
http://localhost:8081/api/v1/package/250Mbps
When I run docker ps, name of first service is:
productcatalogueservice_product_catalogue_service_1
I tried to use that instead of localhost - unknown host exception.
I tried "product_catalogue_service_1" instead, also unknown host exception,
and finally I tried "product_catalogue_service" also unknown host exception.
Any idea?
By default, docker-compose creates a network between your containers and assigns each service name as a host name.
So, you can reach product_catalogue_service from order_service like so: http://product_catalogue_service:8081/api/v1/package/250Mbps
Now, from your post it seems that you have two different docker-compose.yml files, one for each service. In this case, each container lives in its own network (created by docker-compose) and they can't see each other, at least by default.
Honestly, I've never tried to connect two containers created from two different docker-compose files. I always create one docker-compose.yml file, put all my services there, and let docker-compose manage the network between them.
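That said, Compose does support joining containers from separate projects through a pre-created external network. A sketch (the network name shared-net is an assumption, not from the answer above):

```yaml
# Create the network once on the host:
#   docker network create shared-net
# Then, in BOTH docker-compose.yml files, attach the service to it:
services:
  product_catalogue_service:
    image: openjdk:8-jdk-alpine
    networks:
      - shared-net
networks:
  shared-net:
    external: true
```

With both services on shared-net, the service-name hostnames resolve across the two compose projects.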
Using one docker-compose.yml
version: ...
services:
  product_catalogue_service:
    image: openjdk:8-jdk-alpine
    ports:
      - "8081:8081"
    volumes:
      - .:/app
    working_dir: /app
    command: ./gradlew bootRun
    restart: on-failure
  order_service:
    image: openjdk:8-jdk-alpine
    ports:
      - "8083:8083"
    volumes:
      - .:/app
    working_dir: /app
    command: ./gradlew bootRun
    restart: on-failure
One last thing, this answer explains in great detail why localhost:8081 is not working from your order_service container.

Spring boot docker updates do not appear

I am running a web project and a database through docker compose, but my updates do not appear on the page.
version: '3.2'
services:
  app:
    image: springio/gs-spring-boot-docker
    ports:
      - "8080:8080"
    depends_on:
      - mypostgres
  mypostgres:
    image: image
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=ps
      - POSTGRES_USER=us
      - POSTGRES_DB=db
I changed Application.java to print something other than "Hello World", and refreshed the page at localhost:8080, but there are still no changes on my web page.
Go to the directory of your Dockerfile and run the following commands:
Build the new image:
docker build --build-arg JAR_FILE=build/libs/*.jar -t springio/gs-spring-boot-docker .
and then run the new image:
docker run -p 8080:8080 -t springio/gs-spring-boot-docker

Jenkins doesn't notify after running docker compose up

I am using Jenkins to run my unit tests, and docker-compose to link the Spring Boot app with its Postgres database. Each time the Jenkinsfile is executed during a pull request or commit, I bring up the compose stack and check that the tests have run correctly.
If a test fails, the container aborts and Jenkins notifies; but in the positive scenario, when the Spring Boot application starts, Jenkins doesn't notify and gets stuck.
This is the Dockerfile:
FROM openjdk:10-jdk
COPY run.sh /
RUN chmod +x /run.sh
COPY ./target/*.jar /app.jar
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
This is the docker-compose file:
version: '3.2'
services:
  app:
    restart: always
    build: .
    container_name: app
    working_dir: /app
    volumes:
      - .:/app
    ports:
      - 8085:8080
    links:
      - pgsql
    depends_on:
      - pgsql
  pgsql:
    image: postgres:10
    container_name: pgsql
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=passwordorsomething
      - POSTGRES_USER=postgres
      - POSTGRES_DB=pgsql
    restart: always
This is the stage that runs docker-compose, starts Spring Boot, and runs the tests:
stage('Test') {
    agent {
        label "docker"
    }
    steps {
        sh 'docker rm -f $(docker ps -a -q)'
        sh 'docker-compose up --build --exit-code-from app'
    }
}
After Jenkins reaches 'docker-compose up --build --exit-code-from app' and Spring Boot starts, it sticks in the Test stage.
It's only a guess, but is restart: always making the container restart, assuming some of your tests are failing?
It's also a good idea to add a post block that runs docker-compose down, to avoid zombie containers.
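For the zombie-container point, a sketch of such a post block in declarative Jenkinsfile syntax (placement inside the pipeline is an assumption, not from the question):

```groovy
post {
    always {
        // Tear the compose stack down whether the tests passed or
        // failed, so stopped containers don't linger between builds.
        sh 'docker-compose down --remove-orphans'
    }
}
```

This also makes the initial 'docker rm -f $(docker ps -a -q)' step in the Test stage unnecessary in most runs.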

Docker compose, link file outside container

I'm working with docker-compose for a laravel project and nginx.
This is my docker-compose.yml :
version: '2'
services:
  backend:
    image: my_image
    depends_on:
      - elastic
      - mysql
  mysql:
    image: mysql:8.0.0
  nginx:
    depends_on:
      - backend
    image: my_image
    ports:
      - 81:80
So, my Laravel project is in the container backend, and If I run the command : docker-compose up -d it's ok, all containers are created and my project is running on port 81.
My problem is, In the Laravel project in my container backend, I have a .env file with database login, password and other stuff.
How can I edit this file after docker-compose up? Editing it directly in the container is not a good idea; is there a way to link a file outside the container with docker-compose?
Thanks
One approach to this is to use the env_file directive in docker-compose.yml; it points at a file of key=value pairs that will be exported into the container. For example:
web:
  image: nginx:latest
  env_file:
    - .env
  ports:
    - "8181:80"
  volumes:
    - ./code:/code
Then you can configure your application to use these env values.
One catch with this approach is that you need to recreate the containers if you change any value or add a new one (docker-compose down && docker-compose up -d).
