Is it expected that the client will discover each node twice in the Hazelcast sidecar caching pattern? - spring-boot

I'm pretty new to Hazelcast and its interesting feature of auto-syncing with other cache instances. My questions are at the bottom of the description.
Here was my initial goal:
Design an environment following the Hazelcast sidecar caching pattern.
There will be no cache on the application container side. Basically, I don't want to use a near-cache, to keep my JVM light and reduce GC time.
The application container in each node will communicate with its own sidecar cache container via the localhost IP.
The Hazelcast Management Center will be a separate node that communicates with all the nodes containing a Hazelcast sidecar cache container.
Here is the target design (architecture diagram omitted). I prepared the following Hazelcast configuration (hazelcast.yaml) for the Hazelcast container:
hazelcast:
  cluster-name: dev
  network:
    port:
      auto-increment: false
      port-count: 3
      port: 5701
I also prepared another hazelcast.yaml for my application container:
hazelcast:
  map:
    default:
      backup-count: 0
      async-backup-count: 1
      read-backup-data: true
  network:
    reuse-address: true
    port:
      auto-increment: true
      port: 5701
    join:
      multicast:
        enabled: true
      kubernetes:
        enabled: false
      tcp-ip:
        enabled: false
        interface: 127.0.0.1
        member-list:
          - 127.0.0.1:5701
Here is the client part; I used Spring Boot for it.
import java.io.IOException;
import java.util.UUID;

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.YamlClientConfigBuilder;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.stereotype.Component;

@Component
public class CacheClient {

    private static final String ITEMS = "items";
    private final HazelcastInstance client;

    CacheClient() throws IOException {
        ClientConfig config = new YamlClientConfigBuilder("hazelcast.yaml").build();
        config.setInstanceName(UUID.randomUUID().toString());
        client = HazelcastClient.getOrCreateHazelcastClient(config);
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
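For context, a minimal sketch of how such a component might be consumed from a REST controller follows. The endpoint paths and the use of the Item type here are illustrative assumptions, not taken from the original project:
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ItemController {

    private final CacheClient cacheClient;

    // Constructor injection of the cache component defined above.
    public ItemController(CacheClient cacheClient) {
        this.cacheClient = cacheClient;
    }

    // CacheClient.put delegates to putIfAbsent, which returns the previous
    // value, or null if the key was free.
    @PutMapping("/items/{number}")
    public ResponseEntity<Item> put(@PathVariable String number, @RequestBody Item item) {
        Item previous = cacheClient.put(number, item);
        return ResponseEntity.ok(previous != null ? previous : item);
    }

    @GetMapping("/items/{key}")
    public ResponseEntity<Item> get(@PathVariable String key) {
        Item item = cacheClient.get(key);
        return item != null ? ResponseEntity.ok(item) : ResponseEntity.notFound().build();
    }
}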
Here is the Dockerfile I used to build my application container image:
FROM adoptopenjdk/openjdk11:jdk-11.0.5_10-alpine-slim
# Expose port 8081 to Docker host
EXPOSE 8081
WORKDIR /opt
COPY /build/libs/hazelcast-client-0.0.1-SNAPSHOT.jar /opt/app.jar
COPY /src/main/resources/hazelcast.yaml /opt/hazelcast.yaml
COPY /src/main/resources/application.properties /opt/application.properties
ENTRYPOINT ["java","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.initial.min.cluster.size=1","-Dhazelcast.socket.bind.any=false","-Dhazelcast.socket.client.bind=false","-Dhazelcast.socket.client.bind.any=false","-Dhazelcast.logging.type=slf4j","-jar","app.jar"]
Here is the deployment manifest I used:
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: spring-hazelcast-service
spec:
  selector:
    app: spring-hazelcast-app
  ports:
    - protocol: "TCP"
      name: http-app
      port: 8081 # The port the service exposes inside the cluster
      targetPort: 8081 # The port the application container listens on
  type: LoadBalancer # LoadBalancer indicates that our service will be externally reachable.
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: spring-hazelcast-app
spec:
  selector:
    matchLabels:
      app: spring-hazelcast-app
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: spring-hazelcast-app
    spec:
      containers:
        - name: hazelcast
          image: hazelcast/hazelcast:4.0.2
          workingDir: /opt
          ports:
            - name: hazelcast
              containerPort: 5701
          env:
            - name: HZ_CLUSTERNAME
              value: dev
            - name: JAVA_OPTS
              value: -Dhazelcast.config=/opt/config/hazelcast.yml
          volumeMounts:
            - mountPath: "/opt/config/"
              name: allconf
        - name: spring-hazelcast-app
          image: spring-hazelcast:1.0.3
          imagePullPolicy: Never # IfNotPresent
          ports:
            - containerPort: 8081 # The port the container listens on
      volumes:
        - name: allconf
          hostPath:
            path: /opt/config/ # directory location on host
            type: Directory # this field is optional
---
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: hazelcast-mc-service
spec:
  selector:
    app: hazelcast-mc
  ports:
    - protocol: "TCP"
      name: mc-app
      port: 8080 # The port the service exposes inside the cluster
      targetPort: 8080 # The port the container listens on
  type: LoadBalancer # type of the service
  loadBalancerIP: "127.0.0.1"
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: hazelcast-mc
spec:
  selector:
    matchLabels:
      app: hazelcast-mc
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: hazelcast-mc
    spec:
      containers:
        - name: hazelcast-mc
          image: hazelcast/management-center
          ports:
            - containerPort: 8080 # The port the container listens on
Here are my application logs:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.5.4)
2021-09-27 06:42:51.274 INFO 1 --- [ main] com.caching.Application : Starting Application using Java 11.0.5 on spring-hazelcast-app-7bdc8b7f7-bqdlt with PID 1 (/opt/app.jar started by root in /opt)
2021-09-27 06:42:51.278 INFO 1 --- [ main] com.caching.Application : No active profile set, falling back to default profiles: default
2021-09-27 06:42:55.986 INFO 1 --- [ main] c.h.c.impl.spi.ClientInvocationService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Running with 2 response threads, dynamic=true
2021-09-27 06:42:56.199 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTING
2021-09-27 06:42:56.202 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (jar:file:/opt/app.jar!/BOOT-INF/lib/hazelcast-all-4.0.2.jar!/) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2021-09-27 06:42:56.277 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to cluster: dev
2021-09-27 06:42:56.302 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to [127.0.0.1]:5701
2021-09-27 06:42:56.429 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is CLIENT_CONNECTED
2021-09-27 06:42:56.429 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5701:c967f642-a7aa-4deb-a530-b56fb8f68c78, server version: 4.0.2, local address: /127.0.0.1:54373
2021-09-27 06:42:56.436 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:56.461 INFO 1 --- [21ad30a.event-4] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [1] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
}
2021-09-27 06:42:56.803 INFO 1 --- [ main] c.h.c.i.s.ClientStatisticsService : Client statistics is enabled with period 5 seconds.
2021-09-27 06:42:57.878 INFO 1 --- [ main] c.h.i.config.AbstractConfigLocator : Loading 'hazelcast.yaml' from the working directory.
2021-09-27 06:42:57.934 WARN 1 --- [ main] c.h.i.impl.HazelcastInstanceFactory : Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2021-09-27 06:42:57.976 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2021-09-27 06:42:57.987 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Picked [172.17.0.3]:5702, using socket ServerSocket[addr=/172.17.0.3,localport=5702], bind any local is false
2021-09-27 06:42:58.004 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Hazelcast 4.0.2 (20200702 - 2de3027) starting at [172.17.0.3]:5702
2021-09-27 06:42:58.005 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2021-09-27 06:42:58.047 INFO 1 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [172.17.0.3]:5702 [dev] [4.0.2] Backpressure is disabled
2021-09-27 06:42:58.373 INFO 1 --- [ main] com.hazelcast.instance.impl.Node : [172.17.0.3]:5702 [dev] [4.0.2] Creating MulticastJoiner
2021-09-27 06:42:58.380 WARN 1 --- [ main] com.hazelcast.cp.CPSubsystem : [172.17.0.3]:5702 [dev] [4.0.2] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2021-09-27 06:42:58.676 INFO 1 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [172.17.0.3]:5702 [dev] [4.0.2] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
2021-09-27 06:42:58.682 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : [172.17.0.3]:5702 [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:58.687 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTING
2021-09-27 06:42:58.923 INFO 1 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [172.17.0.3]:5702 [dev] [4.0.2] Trying to join to discovered node: [172.17.0.3]:5701
2021-09-27 06:42:58.932 INFO 1 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [172.17.0.3]:5702 [dev] [4.0.2] Connecting to /172.17.0.3:5701, timeout: 10000, bind-any: false
2021-09-27 06:42:58.955 INFO 1 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [172.17.0.3]:5702 [dev] [4.0.2] Initialized new cluster connection between /172.17.0.3:40242 and /172.17.0.3:5701
2021-09-27 06:43:04.948 INFO 1 --- [21ad30a.event-3] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [2] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3
}
2021-09-27 06:43:04.959 WARN 1 --- [ration.thread-0] c.h.c.i.operation.OnJoinCacheOperation : [172.17.0.3]:5702 [dev] [4.0.2] This member is joining a cluster whose members support JCache, however the cache-api artifact is missing from this member's classpath. In case JCache API will be used, add cache-api artifact in this member's classpath and restart the member.
2021-09-27 06:43:04.963 INFO 1 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [172.17.0.3]:5702 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3 this
]
2021-09-27 06:43:05.466 INFO 1 --- [ration.thread-1] c.h.c.i.p.t.AuthenticationMessageTask : [172.17.0.3]:5702 [dev] [4.0.2] Received auth from Connection[id=2, /172.17.0.3:5702->/172.17.0.3:40773, qualifier=null, endpoint=[172.17.0.3]:40773, alive=true, connectionType=JVM], successfully authenticated, clientUuid: 8843f057-c856-4739-80ae-4bc930559bd5, client version: 4.0.2
2021-09-27 06:43:05.468 INFO 1 --- [d30a.internal-3] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5702:08dfe633-46b2-4581-94c7-81b6d0bc3ce3, server version: 4.0.2, local address: /172.17.0.3:40773
2021-09-27 06:43:05.968 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTED
2021-09-27 06:43:06.237 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8081
2021-09-27 06:43:06.251 INFO 1 --- [ main] com.caching.Application : Started Application in 17.32 seconds (JVM running for 21.02)
The Hazelcast Management Center member list showed the same two members (screenshot omitted).
Finally, my questions are:
Why am I seeing 2 members when only one sidecar cache container is deployed?
What modifications are required to reach my initial goal?

According to the Spring Boot documentation for the Hazelcast feature:
If a client can’t be created, Spring Boot attempts to configure an embedded server.
Because your application container ships a member-style hazelcast.yaml, Spring Boot starts an embedded Hazelcast server from it, and that server then joins the Hazelcast container via multicast. This embedded member is the second entry in your member list.
You should replace the hazelcast.yaml in your Spring Boot app container with a hazelcast-client.yaml with the following content:
hazelcast-client:
  cluster-name: "dev"
  network:
    cluster-members:
      - "127.0.0.1:5701"
With the hazelcast-client.yaml in place, Spring Boot will autoconfigure a client HazelcastInstance bean, and you can change your cache client like this:
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.stereotype.Component;

@Component
public class CacheClient {

    private static final String ITEMS = "items";
    private final HazelcastInstance client;

    public CacheClient(HazelcastInstance client) {
        this.client = client;
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
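To confirm that the autoconfigured instance really is a lightweight client talking to the sidecar, and not a second embedded member, you can log the cluster view at startup. A minimal sketch under that assumption (the class and bean names are illustrative); with a single sidecar this should print exactly one member:
import com.hazelcast.core.HazelcastInstance;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ClusterViewLogger {

    // Prints every member the client currently sees in the cluster.
    @Bean
    CommandLineRunner logClusterMembers(HazelcastInstance client) {
        return args -> client.getCluster().getMembers()
                .forEach(member -> System.out.println("Hazelcast member: " + member.getAddress()));
    }
}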

Related

Eureka Registered Application is null: false

I'm new to Eureka and got this error while joining my API gateway to Eureka.
2022-06-02 06:51:45.941 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
2022-06-02 06:51:50.949 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
2022-06-02 06:51:50.952 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2022-06-02 06:51:50.952 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
2022-06-02 06:51:50.953 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2022-06-02 06:51:50.966 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
And this is my Eureka configuration, which I keep in a repository. The Eureka service itself is quite stable, but none of my other services can register with it:
# Eureka Client - Configuration
eureka:
  instance:
    preferIpAddress: true
    appname: ${spring.application.name}
    hostname: service-registry
    health-check-url-path: /actuator/health
    lease-renewal-interval-in-seconds: 10
  client:
    enabled: true
    healthcheck:
      enabled: true
    register-with-eureka: true # Wisnu
    fetch-registry: true # Wisnu
    registry-fetch-interval-seconds: 5
    service-url:
      defaultZone: http://34.101.154.152:8761/eureka

Docker Compose up service container is up and running but swagger not working

I am working on a core banking solution. I have two microservices, called account-query-service and account-cmd-service. When I run docker-compose up, the containers are up and running, but Swagger is not working for the services. There is no problem on the development side.
I can't see where the error is.
http://localhost:5002/swagger-ui.html
http://localhost:5003/swagger-ui.html
Here are the Docker logs for account-cmd-service.
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.4)
2022-03-22 10:46:03.474 INFO 1 --- [ main] c.b.account.cmd.CommandApplication : Starting CommandApplication using Java 11.0.4 on 52d897453453 with PID 1 (/usr/app/account.cmd-0.0.1-SNAPSHOT.jar started by root in /usr/app)
2022-03-22 10:46:03.501 INFO 1 --- [ main] c.b.account.cmd.CommandApplication : No active profile set, falling back to 1 default profile: "default"
2022-03-22 10:46:06.266 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
2022-03-22 10:46:06.403 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 122 ms. Found 1 MongoDB repository interfaces.
2022-03-22 10:46:07.971 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 5007 (http)
2022-03-22 10:46:08.027 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-03-22 10:46:08.028 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.58]
2022-03-22 10:46:08.225 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-03-22 10:46:08.226 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 4341 ms
2022-03-22 10:46:08.815 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[cmddb:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2022-03-22 10:46:09.084 INFO 1 --- [l'}-cmddb:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:2, serverValue:2}] to cmddb:27017
2022-03-22 10:46:09.079 INFO 1 --- [l'}-cmddb:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:1}] to cmddb:27017
2022-03-22 10:46:09.099 INFO 1 --- [l'}-cmddb:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=cmddb:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=97384900}
2022-03-22 10:46:17.416 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 5007 (http) with context path ''
2022-03-22 10:46:17.455 INFO 1 --- [ main] c.b.account.cmd.CommandApplication : Started CommandApplication in 15.482 seconds (JVM running for 16.813)
version: "3.4"
services:
customerdb:
container_name: customerdb
image: postgres
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
POSTGRES_USER: ${POSTGRES_USER:-postgres}
volumes:
- ./customer/postgres_init.sql:/docker-entrypoint-initdb.d/postgres_init.sql
ports:
- "5432:5432"
restart: unless-stopped
querydb:
container_name: querydb
image: postgres
environment:
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
volumes:
- ./account.query/postgres_init.sql:/docker-entrypoint-initdb.d/postgres_init.sql
ports:
- "5433:5432"
restart: unless-stopped
rabbitmq:
container_name: "bank_rabbitmq"
image: "rabbitmq:3.8-management"
hostname: "rabbitmq"
environment:
RABBITMQ_DEFAULT_USER: "guest"
RABBITMQ_DEFAULT_PASS: "guest"
RABBITMQ_DEFAULT_VHOST: "/"
ports:
- "15672:15672"
- "5672:5672"
cmddb:
container_name: "cmddb"
image: mongo
restart: always
ports:
- "27017:27017"
customer-service:
image: bank/customer-service-api
container_name: customer-service
build:
context: ./customer
dockerfile: Dockerfile
ports:
- "5000:5000"
depends_on:
- customerdb
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://customerdb:5432/customerdb
- SPRING_DATASOURCE_USERNAME=postgres
- SPRING_DATASOURCE_PASSWORD=postgres
account-cmd:
image: bank/account-cmd-service-api
container_name: account-cmd-service
build:
context: ./account.cmd
dockerfile: Dockerfile
ports:
- "5002:5002"
depends_on:
- cmddb
environment:
- SPRING_DATA_MONGODB_HOST=cmddb
- SPRING_DATA_MONGODB_PORT=27017
- SPRING_DATA_MONGODB_DATABASE=accountcmdb
- SPRING_RABBITMQ_HOST=rabbitmq
- SPRING_RABBITMQ_PORT=5672
- SPRING_RABBITMQ_USERNAME=guest
- SPRING_RABBITMQ_PASSWORD=guest
account-query:
image: bank/account-query-service-api
container_name: account-query-service
build:
context: ./account.query
dockerfile: Dockerfile
ports:
- "5003:5003"
depends_on:
- querydb
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://querydb:5433/accountingdb
- SPRING_DATASOURCE_USERNAME=postgres
- SPRING_DATASOURCE_PASSWORD=postgres
- SPRING_RABBITMQ_HOST=rabbitmq
- SPRING_RABBITMQ_PORT=5672
- SPRING_RABBITMQ_USERNAME=guest
- SPRING_RABBITMQ_PASSWORD=guest
volumes:
customerdb:
cmddb:
querydb:
I would be very happy if someone could help. Here is my GitHub repository:
https://github.com/dogaanismail/bank-solution
I have just found my mistake, one that I never thought of. I should have been more careful when coding the REST APIs.
You have to specify the mapping path of your REST API for Swagger: @PostMapping or @GetMapping must be given an explicit path.
For instance, here is the mistaken implementation:
@PostMapping
public ResponseEntity openAccount(@RequestBody AccountCreateRequest accountCreateRequest) {
    OpenAccountCommand command = ObjectMapperUtils.map(accountCreateRequest, OpenAccountCommand.class);
    var id = UUID.randomUUID().toString();
    command.setId(id);
    try {
        return ResponseEntity.ok(commandDispatcher.send(command));
    } catch (Exception e) {
        var safeErrorMessage = MessageFormat.format("Error while processing - {0}.", e.toString());
        logger.log(Level.SEVERE, safeErrorMessage, e);
        return new ResponseEntity<>(new ErrorResponse(safeErrorMessage, id), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
I changed the implementation to this:
#PostMapping("/openAccount")
public ResponseEntity openAccount(#RequestBody AccountCreateRequest accountCreateRequest) {
OpenAccountCommand command = ObjectMapperUtils.map(accountCreateRequest, OpenAccountCommand.class);
var id = UUID.randomUUID().toString();
command.setId(id);
try {
return ResponseEntity.ok(commandDispatcher.send(command));
} catch (Exception e) {
var safeErrorMessage = MessageFormat.format("Error while processing - {0}.", e.toString());
logger.log(Level.SEVERE, safeErrorMessage, e);
return new ResponseEntity<>(new ErrorResponse(safeErrorMessage, id), HttpStatus.INTERNAL_SERVER_ERROR);
}
}

spring Vault location [secret/my-application] not resolvable: Not found

I want to connect to the Vault server and read my secret in my Spring application.
Vault config:
spring:
  application:
    name: inquiry
  profiles:
    active: dev
  cloud:
    vault:
      kv:
        enabled: true
        backend: secret
        profile-separator: '/'
        application-name: inquiry
      host: development
      port: 8200
      scheme: https
      authentication: token
      token: my-token
      ssl:
        trust-store: development-truststore.jks
        trust-store-password: pass
In Vault, I have an inquiry policy, and I attached the inquiry token to it:
vault policy read inquiry
path "secret/*" {
  capabilities = ["read", "list"]
}
path "secret/data/inquiry/*" {
  capabilities = ["read", "create", "update"]
}
curl --header "X-Vault-Token:my-token" -k https://localhost:8200/v1/secret/data/inquiry/dev
It returns my data:
{"request_id":"35548b9e-3422-201b-6243-a600d7f61fc3","lease_id":"","renewable":false,"lease_duration":0,"data":{"data":{"DBPassword":"pass","DBUser":"user"},"metadata":{"created_time":"2020-07-08T09:02:42.237713857Z","deletion_time":"","destroyed":false,"version":1}},"wrap_info":null,"warnings":null,"auth":null}
But in Spring I got this error:
2020-07-08 13:55:50.131 INFO 83792 --- [ main] o.s.v.a.LifecycleAwareSessionManager : Scheduling Token renewal
2020-07-08 13:55:50.159 INFO 83792 --- [ main] o.s.v.c.e.LeaseAwareVaultPropertySource : Vault location [secret/inquiry] not resolvable: Not found
2020-07-08 13:55:50.167 INFO 83792 --- [ main] o.s.v.c.e.LeaseAwareVaultPropertySource : Vault location [secret/application/dev] not resolvable: Not found
2020-07-08 13:55:50.174 INFO 83792 --- [ main] o.s.v.c.e.LeaseAwareVaultPropertySource : Vault location [secret/application] not resolvable: Not found
2020-07-08 13:55:50.175 INFO 83792 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-secret/inquiry/dev'}, BootstrapPropertySource {name='bootstrapProperties-secret/inquiry'}, BootstrapPropertySource {name='bootstrapProperties-secret/application/dev'}, BootstrapPropertySource {name='bootstrapProperties-secret/application'}]
2020-07-08 13:55:50.181 INFO 83792 --- [ main] i.c.i.sepam.inquiry.InquiryApplication : The following profiles are active: dev
I use JDK 14.
How can I solve it? Thank you.
The issue is in your Vault policy:
path "secret/data/inquiry/*" {
  capabilities = ["read", "create", "update"]
}
Drop the trailing / and just have secret/data/inquiry*.
Spring is looking for access to a k/v store at inquiry, not in a sub-directory.
Spring is requesting access to k/v stores at secret/app-name, secret/application and secret/app-name/spring-active-profile. For each path, it expects a single k/v store that contains all the secrets.
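For illustration, a k/v entry that Spring can resolve under this layout could be written with the same CLI used above. The key names are taken from the earlier curl output, and the exact path is an assumption based on the backend and application-name in the question's config:
vault kv put secret/inquiry DBUser=user DBPassword=pass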
I'm assuming this was solved a while ago by the poster, but I ran into this exact same thing when I had someone unfamiliar with Spring setting up my app's permissions.

Configure GremlinServer to JanusGraph with HBase and Elasticsearch

I can't create an instance of GremlinServer with HBase and Elasticsearch.
When I run the shell script bin/gremlin-server.sh config/gremlin.yaml, I get this exception:
Exception in thread "main" java.lang.IllegalStateException: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
Gremlin Server logs:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/user/janusgraph/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/user/janusgraph/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
135 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from config/gremlin.yaml
211 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
557 [main] INFO org.janusgraph.diskstorage.hbase.HBaseCompatLoader - Instantiated HBase compatibility layer supporting runtime HBase version 1.2.6: org.janusgraph.diskstorage.hbase.HBaseCompat1_0
835 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - HBase configuration: setting zookeeper.znode.parent=/hbase-unsecure
836 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied host list from root.storage.hostname to hbase.zookeeper.quorum: main.local,data1.local,data2.local
836 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied Zookeeper Port from root.storage.port to hbase.zookeeper.property.clientPort: 2181
866 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1214 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x1e44b638 connecting to ZooKeeper ensemble=main.local:2181,data1.local:2181,data2.local:2181
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:host.name=main.local
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
1221 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/user/janusgraph/conf/gremlin-server:/home/user/janusgraph/lib/slf4j-log4j12-
// (long classpath listing of JanusGraph dependencies omitted)
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-862.el7.x86_64
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.name=user
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/user
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/user/janusgraph
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=main.local:2181,data1.local:2181,data2.local:2181 sessionTimeout=90000 watcher=hconnection-0x1e44b6380x0, quorum=main.local:2181,data1.local:2181,data2.local:2181, baseZNode=/hbase-unsecure
1274 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Opening socket connection to server data2.local/xxx.xxx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
1394 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Socket connection established to data2.local/xxx.xxx.xxx.xxx, initiating session
1537 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Session establishment complete on server data2.local/xxx.xxx.xxx.xxx:2181, sessionid = 0x26b266353e50014, negotiated timeout = 60000
3996 [main] INFO org.janusgraph.core.util.ReflectiveConfigOptionLoader - Loaded and initialized config classes: 13 OK out of 13 attempts in PT0.631S
4103 [main] INFO org.reflections.Reflections - Reflections took 60 ms to scan 2 urls, producing 0 keys and 0 values
4400 [main] WARN org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Local setting cache.db-cache-time=180000 (Type: GLOBAL_OFFLINE) is overridden by globally managed value (10000). Use the ManagementSystem interface instead of the local configuration to control this setting.
4453 [main] WARN org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Local setting cache.db-cache-clean-wait=20 (Type: GLOBAL_OFFLINE) is overridden by globally managed value (50). Use the ManagementSystem interface instead of the local configuration to control this setting.
4473 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing master protocol: MasterService
4474 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x26b266353e50014
4485 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Session: 0x26b266353e50014 closed
4485 [main-EventThread] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - EventThread shut down
4500 [main] INFO org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=c0a8873843641-main-local1
4530 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - HBase configuration: setting zookeeper.znode.parent=/hbase-unsecure
4530 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied host list from root.storage.hostname to hbase.zookeeper.quorum: main.local,data1.local,data2.local
4531 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied Zookeeper Port from root.storage.port to hbase.zookeeper.property.clientPort: 2181
4532 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x5bb3d42d connecting to ZooKeeper ensemble=main.local:2181,data1.local:2181,data2.local:2181
4532 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=main.local:2181,data1.local:2181,data2.local:2181 sessionTimeout=90000 watcher=hconnection-0x5bb3d42d0x0, quorum=main.local:2181,data1.local:2181,data2.local:2181, baseZNode=/hbase-unsecure
4534 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Opening socket connection to server main.local/xxx.xxx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
4534 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Socket connection established to main.local/xxx.xxx.xxx.xxx:2181, initiating session
4611 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Session establishment complete on server main.local/xxx.xxx.xxx.xxx:2181, sessionid = 0x36b266353fd0021, negotiated timeout = 60000
4616 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
5781 [main] INFO org.janusgraph.diskstorage.Backend - Initiated backend operations thread pool of size 16
6322 [main] INFO org.janusgraph.diskstorage.Backend - Configuring total store cache size: 186687592
7555 [main] INFO org.janusgraph.graphdb.database.IndexSerializer - Hashing index keys
7925 [main] INFO org.janusgraph.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time 2019-06-13T09:54:08.929Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller#656d10a4
7927 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] was successfully configured via [config/db.properties].
7927 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
Exception in thread "main" java.lang.IllegalStateException: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:522)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:126)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:83)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor$Builder.create(GremlinExecutor.java:813)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:169)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:89)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:110)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:363)
Caused by: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
at java.lang.Class.getMethod(Class.java:1786)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:492)
... 7 more
Graph configuration:
storage.backend=hbase
storage.hostname=main.local,data1.local,data2.local
storage.port=2181
storage.hbase.ext.zookeeper.znode.parent=/hbase-unsecure
cache.db-cache=true
cache.db-cache-clean-wait=20
cache.db-cache-time=180000
cache.db-cache-size=0.5
index.search.backend=elasticsearch
index.search.hostname=xxx.xxx.xxx.xxx
index.search.port=9200
index.search.elasticsearch.client-only=false
gremlin.graph=org.janusgraph.core.JanusGraphFactory
host=0.0.0.0
Gremlin Server configuration:
host: localhost
port: 8182
channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
graphs: { graph: config/db.properties }
scriptEngines: {
  gremlin-groovy: {
    plugins: {
      org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
      org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
      org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
      org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: { classImports: [java.lang.Math], methodImports: [java.lang.Math#*] },
      org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: { files: [scripts/janusgraph.groovy] }
    }
  }
}
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true } }
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }
metrics: {
  slf4jReporter: { enabled: true, interval: 180000 }
}
What do I need to do to start the server without this error?

Spring Config: It's not able to locate Config server

Config Server bootstrap.yml:
spring:
  application:
    name: configserver
  profiles:
    active: vault
  cloud:
    config:
      server:
        vault:
          host: ${vault_server_host:localhost}
          port: ${vault_server_port:8200}
          scheme: ${vault_server_scheme:https}
          backend: ${vault_backend:configserver}
Vault secrets:
$ vault kv get configserver/configclient
=== Data ===
Key    Value
---    -----
foo    VAUUULT
So, I'm able to get config values using curl:
$ curl -X GET http://localhost:8888/configclient/default -H "X-Config-Token: f7b238dd-425f-52f8-2104-1e37ecf65ede"
{
  "name": "configclient",
  "profiles": [
    "default"
  ],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [
    {
      "name": "vault:configclient",
      "source": {
        "foo": "VAUUULT"
      }
    }
  ]
}
So, I've tried to get the foo value from the Config server via the Config client. Config client bootstrap.yml:
spring:
  application:
    name: configclient
  cloud:
    config:
      uri: http://localhost:8888
      headers:
        X-Config-Token: ${vault_token}
However, it seems that the Config client is not able to locate the Config server:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.3.RELEASE)
2018-07-12 10:03:53.809 INFO 15448 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
2018-07-12 10:03:54.239 WARN 15448 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: 400 null
2018-07-12 10:03:54.256 INFO 15448 --- [ main] c.t.i.t.s.t.TdevConfigclientApplication : No active profile set, falling back to default profiles: default
And then I get this:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'foo' in value "${foo}"
foo is configured with @Value("${foo}"):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class TdevConfigclientApplication {

    @Value("${foo}")
    private String foo;

    @RequestMapping("/")
    public String home() {
        return "Hello World! " + this.foo;
    }

    public static void main(String[] args) {
        SpringApplication.run(TdevConfigclientApplication.class, args);
    }
}
Here you can see a more detailed config client trace snippet:
2018-07-12 10:29:05.249 INFO 17299 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
2018-07-12 10:29:05.457 DEBUG 17299 --- [ main] o.s.web.client.RestTemplate : Created GET request for "http://localhost:8888/configclient/default"
2018-07-12 10:29:06.023 DEBUG 17299 --- [ main] o.s.web.client.RestTemplate : Setting request Accept header to [application/json, application/*+json]
2018-07-12 10:29:06.092 DEBUG 17299 --- [ main] s.n.www.protocol.http.HttpURLConnection : sun.net.www.MessageHeader#3bb6b7e25 pairs: {GET /configclient/default HTTP/1.1: null}{Accept: application/json, application/*+json}{User-Agent: Java/10.0.1}{Host: localhost:8888}{Connection: keep-alive}
2018-07-12 10:29:06.121 DEBUG 17299 --- [ main] s.n.www.protocol.http.HttpURLConnection : sun.net.www.MessageHeader#3b1892d05 pairs: {null: HTTP/1.1 400}{Content-Type: application/json;charset=UTF-8}{Transfer-Encoding: chunked}{Date: Thu, 12 Jul 2018 08:29:06 GMT}{Connection: close}
2018-07-12 10:29:06.145 DEBUG 17299 --- [ main] o.s.web.client.RestTemplate : GET request for "http://localhost:8888/configclient/default" resulted in 400 (null); invoking error handler
2018-07-12 10:29:06.162 WARN 17299 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: 400 null
Any ideas?
In your bootstrap.yml you need to replace spring.cloud.config.headers with spring.cloud.config.token:
spring:
  application:
    name: configclient
  cloud:
    config:
      uri: http://localhost:8888
      token: ${vault_token}
See the documentation: http://cloud.spring.io/spring-cloud-config/1.4.x/single/spring-cloud-config.html
