I am trying to create a Docker image and run a container, but the Maven build fails because the tests that use Testcontainers fail. I should also mention that I am a Windows user, but with Ubuntu 22.04 running over Windows 10 via WSL2; Docker successfully detects WSL2 in its settings.
P.S. The tests pass if I run mvn clean package / mvn clean install or start them manually.
When I run docker build -t *someName* or docker-compose up --build app, I get the following (most useful, I think) stack trace after the dependencies download successfully:
#0 78.13 12:55:52.042 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name
substitution will be performed by: DefaultImageNameSubstitutor
(composite of 'ConfigurationFileImageNameSubstitutor' and
'PrefixingImageNameSubstitutor')
#0 78.18 12:55:52.093 [main] DEBUG org.testcontainers.dockerclient.RootlessDockerClientProviderStrategy -
$XDG_RUNTIME_DIR is not set.
#0 78.18 12:55:52.094 [main] DEBUG org.testcontainers.dockerclient.RootlessDockerClientProviderStrategy -
'/root/.docker/run' does not exist.
#0 78.24 12:55:52.151 [main] DEBUG org.testcontainers.dockerclient.RootlessDockerClientProviderStrategy -
'/run/user/0' does not exist.
#0 78.24 12:55:52.151 [main] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - Trying
out strategy: UnixSocketClientProviderStrategy
#0 78.24 12:55:52.154 [main] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy -
UnixSocketClientProviderStrategy: failed with exception
InvalidConfigurationException (Could not find unix domain socket).
Root cause NoSuchFileException (/var/run/docker.sock)
#0 78.25 12:55:52.157 [main] INFO org.testcontainers.dockerclient.DockerMachineClientProviderStrategy -
docker-machine executable was not found on PATH
([/opt/java/openjdk/bin, /usr/local/sbin, /usr/local/bin, /usr/sbin,
/usr/bin, /sbin, /bin])
#0 78.25 12:55:52.158 [main] ERROR org.testcontainers.dockerclient.DockerClientProviderStrategy - Could
not find a valid Docker environment. Please check configuration.
Attempted configurations were:
#0 78.25 UnixSocketClientProviderStrategy: failed with exception InvalidConfigurationException (Could not find unix domain
socket). Root cause NoSuchFileException (/var/run/docker.sock). As no
valid configuration was found, execution cannot continue.
#0 78.25 See https://www.testcontainers.org/on_failure.html for more details.
===Dockerfile===
FROM maven:3.8.5 AS maven
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN mvn clean package
FROM openjdk:17-jdk-slim
ARG JAR_FILE=practise.jar
WORKDIR /opt/app
COPY --from=maven /usr/src/app/target/${JAR_FILE} /opt/app/
ENTRYPOINT ["java", "-jar", "practise.jar"]
EXPOSE 8080
===docker-compose.yml===
version: '3.1'
services:
  app:
    build: .
    image: 'practise'
    ports:
      - "8080:8080"
    links:
      - postgresdb
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgresdb:5432/practise
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  postgresdb:
    image: 'postgres:13.1-alpine'
    ports:
      - "5432:5432"
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=root
      - POSTGRES_DB=practise
Here is the declaration of the test class:
@AutoConfigureMockMvc
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class CompanyControllerTest {

    @Container
    public static final PostgreSQLContainer<?> container =
            new PostgreSQLContainer<>("postgres")
                    .withUsername("root")
                    .withPassword("root");

    @DynamicPropertySource
    static void properties(DynamicPropertyRegistry registry) {
        registry.add("hibernate.connection.url", container::getJdbcUrl);
        registry.add("hibernate.connection.username", container::getUsername);
        registry.add("hibernate.connection.password", container::getPassword);
    }

    // tests
}
Related
I hope someone can help me with this issue, as I'm no expert with Docker.
I have a Java Spring Boot application (let's call it my-app) that uses ScyllaDB. So far, I have been running the application with the embedded Apache Tomcat that Spring Boot provides, and the database has been running in Docker with no issues.
Here is the docker-compose file for the 3 Scylla nodes:
version: "3"
services:
scylla-node1:
container_name: scylla-node1
image: scylladb/scylla:4.5.0
restart: always
command: --seeds=scylla-node1,scylla-node2 --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
ports:
- 9042:9042
volumes:
- "./scylla/scylla.yaml:/etc/scylla/scylla.yaml"
- "./scylla/cassandra-rackdc.properties.dc1:/etc/scylla/cassandra-rackdc.properties"
networks:
- scylla-network
scylla-node2:
container_name: scylla-node2
image: scylladb/scylla:4.5.0
restart: always
command: --seeds=scylla-node1,scylla-node2 --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
ports:
- 9043:9042
volumes:
- "./scylla/scylla.yaml:/etc/scylla/scylla.yaml"
- "./scylla/cassandra-rackdc.properties.dc1:/etc/scylla/cassandra-rackdc.properties"
networks:
- scylla-network
scylla-node3:
container_name: scylla-node3
image: scylladb/scylla:4.5.0
restart: always
command: --seeds=scylla-node1,scylla-node2 --smp 1 --memory 750M --overprovisioned 1 --api-address 0.0.0.0
ports:
- 9044:9042
volumes:
- "./scylla/scylla.yaml:/etc/scylla/scylla.yaml"
- "./scylla/cassandra-rackdc.properties.dc1:/etc/scylla/cassandra-rackdc.properties"
networks:
- scylla-network
Using nodetool, I can see the DB is fine:
Datacenter: DC1
-- Address Load Tokens Owns Host ID Rack
UN 172.27.0.3 202.92 KB 256 ? 4e2690ec-393b-426d-8956-fb775ab5b3f9 Rack1
UN 172.27.0.2 99.5 KB 256 ? ae6a0b9f-d0e7-4740-8ebe-0ce1d2e9ea7e Rack1
UN 172.27.0.4 202.68 KB 256 ? 7a4b39bf-f38a-41ab-be33-c11a4e4e352c Rack1
In the application, the Java driver I'm using is the DataStax Java driver 3.11.2.0 for Apache Cassandra. This is how I connect to the DB:
@Bean
public Cluster cluster() {
    Cluster cluster = Cluster.builder().addContactPointsWithPorts(
            new InetSocketAddress("127.0.0.1", 9042),
            new InetSocketAddress("127.0.0.1", 9043),
            new InetSocketAddress("127.0.0.1", 9044))
            .build();
    return cluster;
}

@Bean
public Session session(Cluster cluster, @Value("${scylla.keyspace}") String keyspace) throws IOException {
    final Session session = cluster.connect();
    setupKeyspace(session, keyspace);
    return session;
}
When running the application with the embedded Tomcat server, I receive a lot of connection errors at startup:
2022-07-19 22:42:38.424 WARN 28228 --- [r1-nio-worker-3] com.datastax.driver.core.Connection : Error creating netty channel to /172.27.0.4:9042
However, after a short burst of error logs, the app eventually connects and is fully usable. I do have to wait for nodetool to confirm that all nodes are up, though.
2022-07-19 23:25:12.324 INFO 25652 --- [ restartedMain] c.d.d.c.p.DCAwareRoundRobinPolicy : Using data-center name 'DC1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
2022-07-19 23:25:12.324 INFO 25652 --- [ restartedMain] com.datastax.driver.core.Cluster : New Cassandra host /172.27.0.3:9042 added
2022-07-19 23:25:12.324 INFO 25652 --- [ restartedMain] com.datastax.driver.core.Cluster : New Cassandra host /172.27.0.2:9042 added
2022-07-19 23:25:12.324 INFO 25652 --- [ restartedMain] com.datastax.driver.core.Cluster : New Cassandra host /127.0.0.1:9044 added
Then I recently added "my-app" to my docker-compose file, but the app can't start and instantly shuts down, even if I wait for nodetool to confirm that all nodes are up.
Caused by: java.net.ConnectException: Connection refused
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042
Is there something wrong with the way I'm connecting to the DB? I wonder why the embedded Tomcat build works while the Docker one instantly shuts down. I was hoping someone here could help me find a way for the docker-compose build to wait for all the Scylla nodes to be up before starting my-app (I assume I can do it with a script in the Dockerfile, maybe?), but I can't even seem to start the app in Docker the same way I did with Tomcat. Maybe I'm missing something regarding the port and host when using Docker.
Any ideas on what I could try to solve this? Thanks in advance!
Docker compose file edited with the app:
  my-app:
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
    image: my-app
    ports:
      - 8082:8082
    depends_on:
      - scylla-node1
      - scylla-node2
      - scylla-node3
    networks:
      - scylla-network
You need to use IP addresses in the contact points which are accessible from outside the containers, not localhost.
Typically, it will be the IP address you've configured for CASSANDRA_RPC_ADDRESS (environment variable) or rpc_address (in your yaml).
If you didn't set the RPC addresses for the containers, you need to tell Cassandra what IP address to advertise to other nodes and clients by specifying a broadcast address with CASSANDRA_BROADCAST_ADDRESS or broadcast_rpc_address.
The important thing is that you need to use IP addresses which are reachable from your Spring Boot app. Cheers!
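For example, since my-app joins the same scylla-network as the nodes in the compose file above, one option (a minimal sketch of my own, not tested against this exact setup) is to point the contact points at the compose service names on the native port 9042 instead of 127.0.0.1; on that network the service names resolve to the containers' reachable addresses:

import java.net.InetSocketAddress;
import com.datastax.driver.core.Cluster;
import org.springframework.context.annotation.Bean;

@Bean
public Cluster cluster() {
    // Inside the compose network the service names resolve to the container
    // addresses, so all three nodes are reachable on 9042 directly; the
    // 9043/9044 port mappings only exist on the host.
    return Cluster.builder().addContactPointsWithPorts(
            new InetSocketAddress("scylla-node1", 9042),
            new InetSocketAddress("scylla-node2", 9042),
            new InetSocketAddress("scylla-node3", 9042))
            .build();
}

These are the same hostnames the nodes already use as seeds in the compose file, so they are resolvable on scylla-network.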
I have a Spring application with Flyway and PostgreSQL. After
mvn clean install
sudo docker build -t air-travels-api.jar .
docker run -p 8080:8080 air-travels-api.jar
I get this error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
2022-07-09 14:28:09.610 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.internal.exception.FlywaySqlException: Unable to obtain connection from database: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Here's my docker-compose.yaml:
version: '3'
services:
  air-travels-api:
    image: air-travels-api
    build:
      context: .
    container_name: air-travels-api
    ports:
      - "8080:8080"
    depends_on:
      - flyway
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://air-travels-api-db:5432/air-travels-api
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=postgres
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  flyway:
    image: boxfuse/flyway:5-alpine
    command: -url=jdbc:postgresql://air-travels-api-db:5432/air-travels-api -schemas=public -user=postgres -password=postgres migrate
    volumes:
      - ./migration:/flyway/sql
    depends_on:
      - air-travels-api-db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=air-travels-api
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SCHEMA=public
  air-travels-api-db:
    image: postgres:12
    restart: always
    ports:
      - "5432:5432"
    container_name: air-travels-api-db
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: air-travels-api
Dockerfile:
FROM adoptopenjdk:11-jre-hotspot
EXPOSE 8080
ADD target/air-travels-api-0.0.1-SNAPSHOT.jar air-travels-api.jar
ENTRYPOINT ["java", "-jar", "/air-travels-api.jar"]
Application.yaml
spring:
  datasource:
    url: jdbc:postgresql://air-travels-api-db:5432/air-travels-api
    username: postgres
    password: postgres
I found a similar question on Stack Overflow where they suggested making sure Postgres is running on the local machine. But I have it running inside a container (air-travels-api-db).
There are 2 issues that I see:
In the Application.yaml your url should be
jdbc:postgresql://air-travels-api-db:5432/air-travels-api
since your database service has the hostname air-travels-api-db and not localhost.
In your flyway service in docker-compose.yaml, the url is also incorrect. It should point to air-travels-api-db instead of postgres.
command: -url=jdbc:postgresql://air-travels-api-db:5432/air-travels-api -schemas=public -user=postgres -password=postgres migrate
You do set the environment variable, but it is possible the command-line argument will override that.
One suggestion: database containers are known to start slowly, so it is a good idea to either add a health check to your database service or implement retry logic in your application. Otherwise you will run into race conditions where the application starts before the database is available and crashes. This is very common.
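For the retry side, here is a rough sketch of my own (not part of the original answer; the JDBC URL and credentials are the ones used above) of a helper that blocks until Postgres accepts connections before Flyway and the rest of the application are initialized:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical helper: call this before bootstrapping Flyway / the datasource.
static void waitForDatabase() throws InterruptedException {
    String url = "jdbc:postgresql://air-travels-api-db:5432/air-travels-api";
    for (int attempt = 1; attempt <= 30; attempt++) {
        try (Connection ignored = DriverManager.getConnection(url, "postgres", "postgres")) {
            return; // the database accepted a connection, so it is ready
        } catch (SQLException e) {
            Thread.sleep(2000); // container is still starting; wait and retry
        }
    }
    throw new IllegalStateException("Database did not become available in time");
}

Alternatively, depending on your Compose version, you can put a healthcheck (for example one based on pg_isready) on the database service and use depends_on with condition: service_healthy so the app only starts once the check passes.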
CircleCI introduced orbs in 2.1, and I am trying to add a CircleCI config to my sample project.
In my test code I use Testcontainers to simplify the dependency setup of my integration tests.
When I commit my code, the CircleCI run fails.
org.testcontainers.containers.ContainerLaunchException: Container startup failed
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=mongo:4.0.10, imagePullPolicy=DefaultPullPolicy())
Caused by: java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration
My CircleCI config:
version: 2.1
orbs:
  maven: circleci/maven@1.0.1
  codecov: codecov/codecov@1.1.0
jobs:
  codecov:
    machine:
      image: ubuntu-1604:201903-01
    steps:
      - codecov/upload
workflows:
  build:
    jobs:
      - maven/test:
          command: "-q verify -Pcoverage"
      - codecov:
          requires:
            - maven/test
Got it running myself.
The maven orb provides reusable jobs and commands, but by default it uses a JDK executor and does not provide a Docker runtime.
My solution was to give up the reusable job and reuse some commands from the maven orb in my own job.
version: 2.1
orbs:
  maven: circleci/maven@1.0.1
  codecov: codecov/codecov@1.1.0
executors:
  docker-mongo:
    docker:
      - image: circleci/openjdk:14-jdk-buster
      - image: circleci/mongo:latest
jobs:
  build:
    executor: docker-mongo
    steps:
      - checkout
      - maven/with_cache:
          steps:
            - run: mvn -q test verify -Pcoverage
      - maven/process_test_results
      - codecov/upload:
          when: on_success
workflows:
  build:
    jobs:
      - build
I am trying to dockerize 4 services and I have a problem with one of them. In particular, this service is a Spring Boot service that uses the Google Vision API. When building the images and starting the containers everything works fine, until it gets to the part where the Google Vision API code is used. I then get the following runtime errors when running the containers:
netty-tcnative unavailable (this may be normal)
java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64, netty_tcnative_linux_x86_64_fedora, netty_tcnative_x86_64, netty_tcnative]
at io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:104) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.loadTcNative(OpenSsl.java:526) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.<clinit>(OpenSsl.java:93) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:244) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:385) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:435) [grpc-core-1.18.0.jar!/:1.18.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:223) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:164) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:156) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157) [gax-1.42.0.jar!/:1.42.0]
at com.google.cloud.vision.v1.stub.GrpcImageAnnotatorStub.create(GrpcImageAnnotatorStub.java:84) [google-cloud-vision-1.66.0.jar!/:1.66.0]
at com.google.cloud.vision.v1.stub.ImageAnnotatorStubSettings.createStub(ImageAnnotatorStubSettings.java:120) [google-cloud-vision-1.66.0.jar!/:1.66.0]
at com.google.cloud.vision.v1.ImageAnnotatorClient.<init>(ImageAnnotatorClient.java:136) [google-cloud-vision-1.66.0.jar!/:na]
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:117) [google-cloud-vision-1.66.0.jar!/:na]
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:108) [google-cloud-vision-1.66.0.jar!/:na]
The complete log file of the error can be found at this link:
Complete Log File.
Here are my docker-compose.yml file and the Dockerfile of the service causing the problem:
Dockerfile
FROM maven:3.6.0-jdk-8-alpine
WORKDIR /app/back
COPY src src
COPY pom.xml .
RUN mvn clean package
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache curl
WORKDIR /app/back
COPY --from=0 /app/back/target/imagescanner*.jar ./imagescanner.jar
COPY --from=0 /app/back/target/classes/API-Key.json .
ENV GOOGLE_APPLICATION_CREDENTIALS ./API-Key.json
EXPOSE 8088
ENTRYPOINT ["java", "-jar", "./imagescanner.jar"]
docker-compose.yml
version: '3'
services:
  front:
    container_name: demoLab_front
    build: ./front
    image: demolab/front:latest
    expose:
      - "3000"
    ports:
      - "8087:3000"
    restart: always
  back:
    container_name: demoLab_backGCV
    build: ./backGCV
    image: demolab/backgcv:latest
    depends_on:
      - lab
    ports:
      - "8088:8088"
    restart: always
  lab:
    container_name: demoLab_labGCV
    build: ./lab
    image: demolab/labgcv:latest
    expose:
      - "8089"
    ports:
      - "8089:8089"
    restart: always
  sift:
    container_name: demoLab_labSIFT
    build: ./detect-label-service
    image: demolab/labsift:latest
    expose:
      - "5000"
    ports:
      - "5000:5000"
    restart: always
EDIT
After some googling I found out that gRPC Java examples do not work on Alpine Linux, since the required libnetty-tcnative-boringssl-static depends on glibc. Alpine uses musl libc, and application startup fails with a message similar to mine.
I found a project that tries to build suitable images, but it seems broken for a lot of people (the build didn't work in my case).
Problem solved by replacing this line of the Dockerfile:
FROM openjdk:8-jdk-alpine
with this line:
FROM koosiedemoer/netty-tcnative-alpine
The problem: Suppressed: java.lang.UnsatisfiedLinkError: no netty_tcnative in java.library.path
on an Alpine container.
There is a simple workaround:
apk add libressl
apk add openssl
ln -s /lib/ld-musl-x86_64.so.1 /lib/libcrypt.so.1
I am new to Docker. I am using Windows 10, have installed Docker on my machine, and work with Docker via PowerShell.
My problem is that I can't copy my files using the docker-compose.yml file.
My file looks like the following:
version: '2'
services:
  maven:
    image: maven:3.3.3-jdk-8
    volumes:
      - ~/.m2:/root/.m2
      - /d/projects/test/:/code
    working_dir: /code
    links:
      - mongodb
    entrypoint: /code/wait-for-it.sh mongodb:27017 -- mvn clean install
    environment:
      - mongodb_hosts=mongodb
  mongodb:
    image: mongo:3.2.4
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
In this test project I'm using Maven and I have a lot of files in it. But it gives an error like:
ERROR: for maven Cannot start service maven: oci runtime error: exec:
"/code/wait-for-it.sh": stat /code/wait-for-it.sh: no such file or
directory ERROR: Encountered errors while bringing up the project.
I have also shared my local drive in the Docker settings, but the mount problem is still there.
Please help me; thanks in advance.