JHipster microservices client: Spring Cloud Config with authorization header to JHipster Registry

I am using JHipster 7 with Spring Boot 2.5.4 for microservice application development (DEV environment).
I have set up a JHipster Registry server running on localhost port 8761 with Docker's help.
I am trying to access the JHipster central Spring Cloud config at
http://localhost:8761/config/application/prod/main
from all my microservices and the gateway.
However, I get the following 401 warning when starting any of my microservices or the gateway:
[JHipster ASCII art banner]
:: JHipster 🤓 :: Running Spring Boot 2.5.4 ::
:: https://www.jhipster.tech ::
2021-11-04 17:54:01.411 WARN 23468 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: 401 Unauthorized: [{
"timestamp" : "2021-11-05T00:54:01.400+00:00",
"status" : 401,
"error" : "Unauthorized",
"message" : "",
"path" : "/config/application/prod/main"
}]
2021-11-04 17:54:01.415 INFO 23468 --- [ main] com.okta.developer.alert.AlertApp : No active profile set, falling back to default profiles: dev,api-docs
2021-11-04 17:54:02.551 DEBUG 23468 --- [ main] i.g.r.utils.RxJava2OnClasspathCondition : RxJava2 related Aspect extensions are not activated, because RxJava2 is not on the classpath.
2021-11-04 17:54:02.552 DEBUG 23468 --- [ main] i.g.r.utils.ReactorOnClasspathCondition : Reactor related Aspect extensions are not activated because Resilience4j Reactor module is not on the classpath.
This is one of my Spring Cloud configs, from application-dev.yml (not bootstrap.yml, as I am not sure whether I have to put my configs into bootstrap.yml):
spring:
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  cloud:
    config:
      uri: http://admin:${jhipster.registry.password}@localhost:8761/config
      # name of the config server's property source (file.yml) that we want to use
      name: application
      profile: prod
      label: main # toggle to switch to a different version of the configuration as stored in git
                  # it can be set to any label, branch or commit of the configuration source Git repository
I am using Keycloak as my OAuth server, and I believe I need to send an authorization header along with the GET request to /config/application/prod/main.
My central config looks like this:
{
  "name" : "application",
  "profiles" : [ "prod" ],
  "label" : "main",
  "version" : null,
  "state" : null,
  "propertySources" : [ {
    "name" : "file:central-config/localhost-config/application.yml",
    "source" : {
      "configserver.name" : "Docker JHipster Registry",
      "configserver.status" : "Connected to the JHipster Registry running in Docker",
      "jhipster.security.authentication.jwt.base64-secret" : "xxxxxxxxxxxxx",
      "eureka.client.service-url.defaultZone" : "http://admin:${jhipster.registry.password}@localhost:8761/eureka/"
    }
  } ]
}
Can someone help me get rid of this 401 error and retrieve the central config successfully?

I spent some time investigating. The error is caused by an incorrect basic auth profile setup in my gateway and microservices.
I noticed that the config in application-*.yml shown below contains a registry password:
jhipster:
  xxxxxxxxx
  registry:
    password: xxxxxxxxxxxxx
and I thought that was good enough for my microservices to retrieve the Spring Cloud config from the JHipster Registry, but it actually is not. My microservices actually try to find the registry password in bootstrap-*.yml, and that failed because I did not fill it in with the appropriate value. So what I did was update the registry password in bootstrap-*.yml:
jhipster:
  registry:
    password: <correct_password>
and now the issue is resolved:
[JHipster ASCII art banner]
:: JHipster 🤓 :: Running Spring Boot 2.5.4 ::
:: https://www.jhipster.tech ::
2021-11-08 11:36:28.507 INFO 3872 --- [ main] com.okta.developer.store.StoreApp : No active profile set, falling back to default profiles: dev,api-docs
2021-11-08 11:36:29.639 DEBUG 3872 --- [ main] i.g.r.utils.RxJava2OnClasspathCondition : RxJava2 related Aspect extensions are not activated, because RxJava2 is not on the classpath.
2021-11-08 11:36:29.641 DEBUG 3872 --- [ main] i.g.r.utils.ReactorOnClasspathCondition : Reactor related Aspect extensions are not activated because Resilience4j Reactor module is not on the classpath.
2021-11-08 11:36:29.656 DEBUG 3872 --- [ main] i.g.r.utils.RxJava2OnClasspathCondition : RxJava2 related Aspect extensions are not activated, because RxJava2 is not on the classpath.
@Gaël Marziou, you are correct that I only set up basic auth instead of OAuth, and it is good enough for me to retrieve the config as of this fix: commit 452728a
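For anyone hitting the same wall, here is a minimal bootstrap-dev.yml sketch showing where these settings live in a stock JHipster 7 service. The service name alert matches the AlertApp logs above; treat the exact keys as an approximation and compare with your generated file:
jhipster:
  registry:
    password: <correct_password> # the JHipster Registry admin password
spring:
  application:
    name: alert
  cloud:
    config:
      # basic auth credentials are embedded in the URI
      uri: http://admin:${jhipster.registry.password}@localhost:8761/config
      name: alert
      profile: dev
      label: main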

Related

Eureka Registered Application is null: false

I'm new to Eureka and got this error while joining the API gateway to Eureka.
2022-06-02 06:51:45.941 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
2022-06-02 06:51:50.949 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
2022-06-02 06:51:50.952 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2022-06-02 06:51:50.952 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
2022-06-02 06:51:50.953 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2022-06-02 06:51:50.966 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
And this is my Eureka configuration, which I keep in a repository. The Eureka service is quite stable, but none of my other services can join the microservices:
# Eureka Client - Configuration
eureka:
  instance:
    preferIpAddress: true
    appname: ${spring.application.name}
    hostname: service-registry
    health-check-url-path: /actuator/health
    lease-renewal-interval-in-seconds: 10
  client:
    enabled: true
    healthcheck:
      enabled: true
    register-with-eureka: true # Wisnu
    fetch-registry: true # Wisnu
    registry-fetch-interval-seconds: 5
    service-url:
      defaultZone: http://34.101.154.152:8761/eureka

Is it to be expected that the client will discover each node twice for the Hazelcast sidecar caching pattern?

I'm pretty new to using Hazelcast and its interesting feature of auto-sync with other cache instances. My questions are at the bottom of the description.
Here was my initial goal:
Design an environment following the Hazelcast sidecar caching pattern.
There will be no cache on the application container side. Basically, I don't want to use a "near cache", just to avoid making my JVM heavy and to reduce GC time.
The application container in each node will communicate with its own sidecar cache container via the localhost IP.
The Hazelcast Management Center will be a separate node that communicates with all the nodes containing a Hazelcast sidecar cache container.
Here is the target design:
I prepared this Hazelcast configuration [hazelcast.yaml] for the Hazelcast container:
hazelcast:
  cluster-name: dev
  network:
    port:
      auto-increment: false
      port-count: 3
      port: 5701
I also prepared another hazelcast.yaml for my application container:
hazelcast:
  map:
    default:
      backup-count: 0
      async-backup-count: 1
      read-backup-data: true
  network:
    reuse-address: true
    port:
      auto-increment: true
      port: 5701
    join:
      multicast:
        enabled: true
      kubernetes:
        enabled: false
      tcp-ip:
        enabled: false
        interface: 127.0.0.1
        member-list:
          - 127.0.0.1:5701
Here is the client part; I used Spring Boot for it:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.YamlClientConfigBuilder;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.UUID;

@Component
public class CacheClient {

    private static final String ITEMS = "items";

    private HazelcastInstance client;

    CacheClient() throws IOException {
        ClientConfig config = new YamlClientConfigBuilder("hazelcast.yaml").build();
        config.setInstanceName(UUID.randomUUID().toString());
        client = HazelcastClient.getOrCreateHazelcastClient(config);
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
Here is the Dockerfile I used to build my application container image:
FROM adoptopenjdk/openjdk11:jdk-11.0.5_10-alpine-slim
# Expose port 8081 to Docker host
EXPOSE 8081
WORKDIR /opt
COPY /build/libs/hazelcast-client-0.0.1-SNAPSHOT.jar /opt/app.jar
COPY /src/main/resources/hazelcast.yaml /opt/hazelcast.yaml
COPY /src/main/resources/application.properties /opt/application.properties
ENTRYPOINT ["java","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.initial.min.cluster.size=1","-Dhazelcast.socket.bind.any=false","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.socket.client.bind=false","-Dhazelcast.socket.client.bind.any=false","-Dhazelcast.logging.type=slf4j","-jar","app.jar"]
Here is the deployment script I used:
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: spring-hazelcast-service
spec:
  selector:
    app: spring-hazelcast-app
  ports:
    - protocol: "TCP"
      name: http-app
      port: 8081 # The port that the service is running on in the cluster
      targetPort: 8081 # The port exposed by the service
  type: LoadBalancer # type of the service. LoadBalancer indicates that our service will be external.
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: spring-hazelcast-app
spec:
  selector:
    matchLabels:
      app: spring-hazelcast-app
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: spring-hazelcast-app
    spec:
      containers:
        - name: hazelcast
          image: hazelcast/hazelcast:4.0.2
          workingDir: /opt
          ports:
            - name: hazelcast
              containerPort: 5701
          env:
            - name: HZ_CLUSTERNAME
              value: dev
            - name: JAVA_OPTS
              value: -Dhazelcast.config=/opt/config/hazelcast.yml
          volumeMounts:
            - mountPath: "/opt/config/"
              name: allconf
        - name: spring-hazelcast-app
          image: spring-hazelcast:1.0.3
          imagePullPolicy: Never # IfNotPresent
          ports:
            - containerPort: 8081 # The port that the container is running on in the cluster
      volumes:
        - name: allconf
          hostPath:
            path: /opt/config/ # directory location on host
            type: Directory # this field is optional
---
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: hazelcast-mc-service
spec:
  selector:
    app: hazelcast-mc
  ports:
    - protocol: "TCP"
      name: mc-app
      port: 8080 # The port that the service is running on in the cluster
      targetPort: 8080 # The port exposed by the service
  type: LoadBalancer # type of the service
  loadBalancerIP: "127.0.0.1"
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: hazelcast-mc
spec:
  selector:
    matchLabels:
      app: hazelcast-mc
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: hazelcast-mc
    spec:
      containers:
        - name: hazelcast-mc
          image: hazelcast/management-center
          ports:
            - containerPort: 8080 # The port that the container is running on in the cluster
Here are my application logs:
[Spring Boot ASCII art banner]
:: Spring Boot :: (v2.5.4)
2021-09-27 06:42:51.274 INFO 1 --- [ main] com.caching.Application : Starting Application using Java 11.0.5 on spring-hazelcast-app-7bdc8b7f7-bqdlt with PID 1 (/opt/app.jar started by root in /opt)
2021-09-27 06:42:51.278 INFO 1 --- [ main] com.caching.Application : No active profile set, falling back to default profiles: default
2021-09-27 06:42:55.986 INFO 1 --- [ main] c.h.c.impl.spi.ClientInvocationService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Running with 2 response threads, dynamic=true
2021-09-27 06:42:56.199 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTING
2021-09-27 06:42:56.202 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (jar:file:/opt/app.jar!/BOOT-INF/lib/hazelcast-all-4.0.2.jar!/) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2021-09-27 06:42:56.277 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to cluster: dev
2021-09-27 06:42:56.302 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to [127.0.0.1]:5701
2021-09-27 06:42:56.429 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is CLIENT_CONNECTED
2021-09-27 06:42:56.429 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5701:c967f642-a7aa-4deb-a530-b56fb8f68c78, server version: 4.0.2, local address: /127.0.0.1:54373
2021-09-27 06:42:56.436 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:56.461 INFO 1 --- [21ad30a.event-4] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [1] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
}
2021-09-27 06:42:56.803 INFO 1 --- [ main] c.h.c.i.s.ClientStatisticsService : Client statistics is enabled with period 5 seconds.
2021-09-27 06:42:57.878 INFO 1 --- [ main] c.h.i.config.AbstractConfigLocator : Loading 'hazelcast.yaml' from the working directory.
2021-09-27 06:42:57.934 WARN 1 --- [ main] c.h.i.impl.HazelcastInstanceFactory : Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2021-09-27 06:42:57.976 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2021-09-27 06:42:57.987 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Picked [172.17.0.3]:5702, using socket ServerSocket[addr=/172.17.0.3,localport=5702], bind any local is false
2021-09-27 06:42:58.004 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Hazelcast 4.0.2 (20200702 - 2de3027) starting at [172.17.0.3]:5702
2021-09-27 06:42:58.005 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2021-09-27 06:42:58.047 INFO 1 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [172.17.0.3]:5702 [dev] [4.0.2] Backpressure is disabled
2021-09-27 06:42:58.373 INFO 1 --- [ main] com.hazelcast.instance.impl.Node : [172.17.0.3]:5702 [dev] [4.0.2] Creating MulticastJoiner
2021-09-27 06:42:58.380 WARN 1 --- [ main] com.hazelcast.cp.CPSubsystem : [172.17.0.3]:5702 [dev] [4.0.2] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2021-09-27 06:42:58.676 INFO 1 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [172.17.0.3]:5702 [dev] [4.0.2] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
2021-09-27 06:42:58.682 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : [172.17.0.3]:5702 [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:58.687 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTING
2021-09-27 06:42:58.923 INFO 1 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [172.17.0.3]:5702 [dev] [4.0.2] Trying to join to discovered node: [172.17.0.3]:5701
2021-09-27 06:42:58.932 INFO 1 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [172.17.0.3]:5702 [dev] [4.0.2] Connecting to /172.17.0.3:5701, timeout: 10000, bind-any: false
2021-09-27 06:42:58.955 INFO 1 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [172.17.0.3]:5702 [dev] [4.0.2] Initialized new cluster connection between /172.17.0.3:40242 and /172.17.0.3:5701
2021-09-27 06:43:04.948 INFO 1 --- [21ad30a.event-3] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [2] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3
}
2021-09-27 06:43:04.959 WARN 1 --- [ration.thread-0] c.h.c.i.operation.OnJoinCacheOperation : [172.17.0.3]:5702 [dev] [4.0.2] This member is joining a cluster whose members support JCache, however the cache-api artifact is missing from this member's classpath. In case JCache API will be used, add cache-api artifact in this member's classpath and restart the member.
2021-09-27 06:43:04.963 INFO 1 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [172.17.0.3]:5702 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3 this
]
2021-09-27 06:43:05.466 INFO 1 --- [ration.thread-1] c.h.c.i.p.t.AuthenticationMessageTask : [172.17.0.3]:5702 [dev] [4.0.2] Received auth from Connection[id=2, /172.17.0.3:5702->/172.17.0.3:40773, qualifier=null, endpoint=[172.17.0.3]:40773, alive=true, connectionType=JVM], successfully authenticated, clientUuid: 8843f057-c856-4739-80ae-4bc930559bd5, client version: 4.0.2
2021-09-27 06:43:05.468 INFO 1 --- [d30a.internal-3] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5702:08dfe633-46b2-4581-94c7-81b6d0bc3ce3, server version: 4.0.2, local address: /172.17.0.3:40773
2021-09-27 06:43:05.968 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTED
2021-09-27 06:43:06.237 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8081
2021-09-27 06:43:06.251 INFO 1 --- [ main] com.caching.Application : Started Application in 17.32 seconds (JVM running for 21.02)
Here is the Hazelcast Management Center member list:
Finally, my questions are:
Why am I seeing 2 members, when there is only one sidecar cache container deployed?
What modifications will be required to reach my initial goal?
According to the Spring Boot documentation for the Hazelcast feature:
If a client can't be created, Spring Boot attempts to configure an embedded server.
Spring Boot starts an embedded server from the hazelcast.yaml in your application container, and that embedded member joins the Hazelcast container using multicast. That is why the client sees two members.
You should replace the hazelcast.yaml in your Spring Boot app container with a hazelcast-client.yaml with the following content:
hazelcast-client:
  cluster-name: "dev"
  network:
    cluster-members:
      - "127.0.0.1:5701"
After doing that, Spring Boot will auto-configure a client HazelcastInstance bean, and you will be able to change your cache client like this:
@Component
public class CacheClient {

    private static final String ITEMS = "items";

    private final HazelcastInstance client;

    public CacheClient(HazelcastInstance client) {
        this.client = client;
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
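If the hazelcast-client.yaml is not picked up automatically (Spring Boot looks for it on the classpath), you can point to it explicitly through the spring.hazelcast.config property. A sketch, assuming the file is shipped to /opt/hazelcast-client.yaml the same way the Dockerfile above copies hazelcast.yaml:
spring:
  hazelcast:
    # any Spring resource location works here (classpath:, file:, ...)
    config: "file:/opt/hazelcast-client.yaml"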

Spring Vault location [secret/my-application] not resolvable: Not found

I want to connect to the Vault server and read my secret in a Spring application.
Vault config:
spring:
  application:
    name: inquiry
  profiles:
    active: dev
  cloud:
    vault:
      kv:
        enabled: true
        backend: secret
        profile-separator: '/'
        application-name: inquiry
      host: development
      port: 8200
      scheme: https
      authentication: token
      token: my-token
      ssl:
        trust-store: development-truststore.jks
        trust-store-password: pass
In Vault, I have an inquiry policy, and I attached the inquiry token to it:
vault policy read inquiry

path "secret/*" {
  capabilities = ["read", "list"]
}
path "secret/data/inquiry/*" {
  capabilities = ["read", "create", "update"]
}
curl --header "X-Vault-Token:my-token" -k https://localhost:8200/v1/secret/data/inquiry/dev
returns my data:
{"request_id":"35548b9e-3422-201b-6243-a600d7f61fc3","lease_id":"","renewable":false,"lease_duration":0,"data":{"data":{"DBPassword":"pass","DBUser":"user"},"metadata":{"created_time":"2020-07-08T09:02:42.237713857Z","deletion_time":"","destroyed":false,"version":1}},"wrap_info":null,"warnings":null,"auth":null}
But in Spring I get this error:
2020-07-08 13:55:50.131 INFO 83792 --- [ main] o.s.v.a.LifecycleAwareSessionManager : Scheduling Token renewal
2020-07-08 13:55:50.159 INFO 83792 --- [ main] o.s.v.c.e.LeaseAwareVaultPropertySource : Vault location [secret/inquiry] not resolvable: Not found
2020-07-08 13:55:50.167 INFO 83792 --- [ main] o.s.v.c.e.LeaseAwareVaultPropertySource : Vault location [secret/application/dev] not resolvable: Not found
2020-07-08 13:55:50.174 INFO 83792 --- [ main] o.s.v.c.e.LeaseAwareVaultPropertySource : Vault location [secret/application] not resolvable: Not found
2020-07-08 13:55:50.175 INFO 83792 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-secret/inquiry/dev'}, BootstrapPropertySource {name='bootstrapProperties-secret/inquiry'}, BootstrapPropertySource {name='bootstrapProperties-secret/application/dev'}, BootstrapPropertySource {name='bootstrapProperties-secret/application'}]
2020-07-08 13:55:50.181 INFO 83792 --- [ main] i.c.i.sepam.inquiry.InquiryApplication : The following profiles are active: dev
I am using JDK 14.
How can I solve this? Thank you.
The issue is in your Vault Policy.
path "secret/data/inquiry/*" {
capabilities = ["read", "create", "update"]
}
drop the trailing / and just have secret/data/inquiry*
Spring is looking for access to a k/v store at inquiry, not in a sub-directory.
Spring is requesting access to k/v stores at secret/app-name, secret/application and secret/app-name/spring-active-profile. For each path, it expects a single k/v store that contains all the secrets.
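For illustration, a policy along these lines should cover the paths Spring requests against a KV v2 backend. This is a sketch based on the locations listed above; verify the exact paths against your own mount:
path "secret/data/inquiry*" {
  capabilities = ["read", "list"]
}
path "secret/data/application*" {
  capabilities = ["read", "list"]
}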
I'm assuming the poster solved this a while ago, but I ran into this exact same thing when someone unfamiliar with Spring set up my app's permissions.

A component required a bean named '' that could not be found

I'm trying to build my first Grails application using the grails-spring-security-rest plugin, following this post's instructions.
However, when I try to run the application, it gives me the following output:
| Running application...
2017-05-07 20:18:54.614 WARN --- [ main] g.p.s.SpringSecurityCoreGrailsPlugin :
Configuring Spring Security Core ...
Configuring Spring Security Core ...
2017-05-07 20:18:54.688 WARN --- [ main] g.p.s.SpringSecurityCoreGrailsPlugin : ... finished configuring Spring Security Core
... finished configuring Spring Security Core
Configuring Spring Security REST 2.0.0.M2...
... finished configuring Spring Security REST
... with GORM support
2017-05-07 20:19:00.278 DEBUG --- [ost-startStop-1] o.s.s.w.a.i.FilterSecurityInterceptor : Validated configuration attributes
2017-05-07 20:19:00.527 DEBUG --- [ost-startStop-1] g.p.s.r.t.g.jwt.FileRSAKeyProvider : Loading public/private key from DER files
2017-05-07 20:19:00.531 DEBUG --- [ost-startStop-1] g.p.s.r.t.g.jwt.FileRSAKeyProvider : Public key path: /mnt/dev/Workspaces/LZR.RAS/RAS-API/security/public_key.der
2017-05-07 20:19:00.538 DEBUG --- [ost-startStop-1] g.p.s.r.t.g.jwt.FileRSAKeyProvider : Private key path: /mnt/dev/Workspaces/LZR.RAS/RAS-API/security/private_key.der
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] g.p.s.rest.RestTokenValidationFilter : Initializing filter 'restTokenValidationFilter'
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] g.p.s.rest.RestTokenValidationFilter : Filter 'restTokenValidationFilter' configured successfully
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] o.s.s.w.a.ExceptionTranslationFilter : Initializing filter 'restExceptionTranslationFilter'
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] o.s.s.w.a.ExceptionTranslationFilter : Filter 'restExceptionTranslationFilter' configured successfully
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] o.s.security.web.FilterChainProxy : Initializing filter 'filterChainProxy'
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] o.s.security.web.FilterChainProxy : Filter 'filterChainProxy' configured successfully
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestLogoutFilter : Initializing filter 'restLogoutFilter'
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestLogoutFilter : Filter 'restLogoutFilter' configured successfully
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestAuthenticationFilter : Initializing filter 'restAuthenticationFilter'
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestAuthenticationFilter : Filter 'restAuthenticationFilter' configured successfully
2017-05-07 20:19:02.731 DEBUG --- [ main] o.s.s.a.h.RoleHierarchyImpl : setHierarchy() - The following role hierarchy was set:
2017-05-07 20:19:03.064 ERROR --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
A component required a bean named '' that could not be found.
Action:
Consider defining a bean named '' in your configuration.
Here is my application.yml content:
---
grails:
    profile: rest-api
    codegen:
        defaultPackage: ras
    spring:
        transactionManagement:
            proxies: false
info:
    app:
        name: '@info.app.name@'
        version: '@info.app.version@'
        grailsVersion: '@info.app.grailsVersion@'
spring:
    main:
        banner-mode: "off"
    groovy:
        template:
            check-template-location: false

# Spring Actuator Endpoints are Disabled by Default
endpoints:
    enabled: false
    jmx:
        enabled: true

---
grails:
    mime:
        disable:
            accept:
                header:
                    userAgents:
                        - Gecko
                        - WebKit
                        - Presto
                        - Trident
        types:
            json:
                - application/json
                - text/json
            hal:
                - application/hal+json
                - application/hal+xml
            xml:
                - text/xml
                - application/xml
            atom: application/atom+xml
            css: text/css
            csv: text/csv
            js: text/javascript
            rss: application/rss+xml
            text: text/plain
            all: '*/*'
    urlmapping:
        cache:
            maxsize: 1000
    controllers:
        defaultScope: singleton
    converters:
        encoding: UTF-8

---
hibernate:
    cache:
        queries: false
        use_second_level_cache: true
        use_query_cache: false
        region.factory_class: org.hibernate.cache.ehcache.EhCacheRegionFactory

dataSource:
    pooled: true
    jmxExport: true
    driverClassName: com.mysql.jdbc.Driver
    dialect: org.hibernate.dialect.MySQL5InnoDBDialect
    username: *******
    password: *******

environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:mysql://localhost:3306/ras_dev?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8&useSSL=false
    test:
        dataSource:
            dbCreate: create-drop
            url: jdbc:mysql://localhost:3306/ras_test?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8&useSSL=false
    production:
        dataSource:
            dbCreate: update
            url: jdbc:mysql://localhost:3306/ras?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8
            properties:
                jmxEnabled: true
                initialSize: 5
                maxActive: 50
                minIdle: 5
                maxIdle: 25
                maxWait: 10000
                maxAge: 600000
                timeBetweenEvictionRunsMillis: 5000
                minEvictableIdleTimeMillis: 60000
                validationQuery: SELECT 1
                validationQueryTimeout: 3
                validationInterval: 15000
                testOnBorrow: true
                testWhileIdle: true
                testOnReturn: false
                jdbcInterceptors: ConnectionState
                defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
application.groovy
grails.plugin.springsecurity.useSecurityEventListener = true
grails.plugin.springsecurity.securityConfigType = 'InterceptUrlMap'
grails.plugin.springsecurity.rememberMe.persistent = true
grails.plugin.springsecurity.rest.login.active = true
grails.plugin.springsecurity.rest.login.useJsonCredentials = true
grails.plugin.springsecurity.rest.login.usernamePropertyName = 'username'
grails.plugin.springsecurity.rest.login.passwordPropertyName = 'password'
grails.plugin.springsecurity.rest.login.failureStatusCode = 401
grails.plugin.springsecurity.rest.login.endpointUrl = '/api/login'
grails.plugin.springsecurity.rest.logout.endpointUrl = '/api/logout'
grails.plugin.springsecurity.rest.token.storage.jwt.useEncryptedJwt = true
grails.plugin.springsecurity.rest.token.storage.jwt.privateKeyPath = 'security/private_key.der'
grails.plugin.springsecurity.rest.token.storage.jwt.publicKeyPath = 'security/public_key.der'
grails.plugin.springsecurity.rest.token.rendering.authoritiesPropertyName = 'permissions'
grails.plugin.springsecurity.rest.token.rendering.usernamePropertyName = 'username'
grails.plugin.springsecurity.rest.token.generation.useSecureRandom = true
grails.plugin.springsecurity.rest.token.validation.headerName = 'X-Auth-Token'
grails.plugin.springsecurity.rest.token.validation.useBearerToken = false
grails.plugin.springsecurity.filterChain.chainMap = [
    ['/api/**': 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
    ['/data/**': 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
    ['/**': 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter'] // Traditional chain
]
grails.plugin.springsecurity.interceptUrlMap = [
    [pattern: '/', access: ['permitAll']],
    [pattern: '/assets/**', access: ['permitAll']],
    [pattern: '/partials/**', access: ['permitAll']],
    [pattern: '/**/js/**', access: ['permitAll']],
    [pattern: '/**/css/**', access: ['permitAll']],
    [pattern: '/**/images/**', access: ['permitAll']],
    [pattern: '/**/favicon.ico', access: ['permitAll']],
    [pattern: '/api/login', access: ['permitAll']],
    [pattern: '/api/logout', access: ['isFullyAuthenticated()']],
    [pattern: '/api/validate', access: ['isFullyAuthenticated()']],
    [pattern: '/**', access: ['isFullyAuthenticated()']]
]
resources.groovy
import ras.bean.DefaultSecurityEventListener
import ras.auth.DefaultJsonPayloadCredentialsExtractor

beans = {
    credentialsExtractor(DefaultJsonPayloadCredentialsExtractor)
    defaultSecurityEventListener(DefaultSecurityEventListener)
}
Grails version:
$ grails --version
| Grails Version: 3.2.6
| Groovy Version: 2.4.7
| JVM Version: 1.8.0_121
UPDATE 1
I have added the following lines to logback.groovy:
logger("org.springframework.security", DEBUG, ['STDOUT'], false)
logger("grails.plugin.springsecurity", DEBUG, ['STDOUT'], false)
logger("org.pac4j", DEBUG, ['STDOUT'], false)
Yet, the console output and the stacktrace.log file show the same output as posted above.
I would really appreciate any suggestions on how to fix this error.
Finally, I was able to fix the problem:
Issue 1:
I created the User, Role and UserRole classes manually instead of using
grails s2-quickstart com.app-name User Role
as described here.
Issue 2:
I used the wrong format for the chainMap filters. Here is the one that worked for me:
grails.plugin.springsecurity.filterChain.chainMap = [
    [pattern: '/assets/**', filters: 'none'],
    [pattern: '/**/js/**', filters: 'none'],
    [pattern: '/**/css/**', filters: 'none'],
    [pattern: '/**/images/**', filters: 'none'],
    [pattern: '/**/favicon.ico', filters: 'none'],
    [pattern: '/api/**', filters: 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
    [pattern: '/data/**', filters: 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
    [pattern: '/**', filters: 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter'] // Traditional chain
]
Field springSecurityService in com.form.application.UserPasswordEncoderListener required a bean of type 'grails.plugin.springsecurity.SpringSecurityService' that could not be found.
Action:
Consider defining a bean of type 'grails.plugin.springsecurity.SpringSecurityService' in your configuration
I am still getting this issue.

Flume NG / Avro source, memory channel and HDFS sink: too many small files

I'm facing a strange issue.
I'm looking to aggregate a lot of information from Flume into HDFS.
I applied the recommended configuration to avoid too many small files, but it didn't work.
Here is my configuration file:
# single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5458
a1.sources.r1.threads = 20
# Describe the HDFS sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://myhost:myport/user/myuser/flume/events/%{senderType}/%{senderName}/%{senderEnv}/%y-%m-%d/%H%M
a1.sinks.k1.hdfs.filePrefix = logs-
a1.sinks.k1.hdfs.fileSuffix = .jsonlog
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 100
a1.sinks.k1.hdfs.useLocalTimeStamp = true
#never roll-based on time
a1.sinks.k1.hdfs.rollInterval=0
##10MB=10485760, 128MB=134217728, 256MB=268435456
a1.sinks.kl.hdfs.rollSize=10485760
##never roll base on number of events
a1.sinks.kl.hdfs.rollCount=0
a1.sinks.kl.hdfs.round=false
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 5000
a1.channels.c1.transactionCapacity = 1000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
This configuration works, and I see my files. But the average file size is 1.5 KB.
The Flume console output provides this kind of information:
16/08/03 09:48:31 INFO hdfs.BucketWriter: Creating hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Closing hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Renaming hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog.tmp to hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484507.jsonlog
16/08/03 09:48:31 INFO hdfs.BucketWriter: Creating hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Closing hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Renaming hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog.tmp to hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484508.jsonlog
16/08/03 09:48:31 INFO hdfs.BucketWriter: Creating hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484509.jsonlog.tmp
16/08/03 09:48:31 INFO hdfs.BucketWriter: Closing hdfs://myhost:myport/user/myuser/flume/events/a/b/c/16-08-03/0948/logs-.1470210484509.jsonlog.tmp
Does someone have an idea about the issue?
Here is some info about Flume's behavior.
The command is:
flume-ng agent -n a1 -c /path/to/flume/conf --conf-file sample-flume.conf -Dflume.root.logger=TRACE,console -Xms8192m -Xmx16384m
Note: the logger directive doesn't work. I don't understand why, but I'm ...
Flume's startup output is:
16/08/03 15:32:55 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
16/08/03 15:32:55 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:sample-flume.conf
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:kl
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:kl
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:kl
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
16/08/03 15:32:55 INFO node.AbstractConfigurationProvider: Creating channels
16/08/03 15:32:55 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
16/08/03 15:32:55 INFO node.AbstractConfigurationProvider: Created channel c1
16/08/03 15:32:55 INFO source.DefaultSourceFactory: Creating instance of source r1, type avro
16/08/03 15:32:55 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: hdfs
16/08/03 15:32:56 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
16/08/03 15:32:56 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
16/08/03 15:32:56 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:Avro source r1: { bindAddress: 0.0.0.0, port: 5458 } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#466ab18a counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
16/08/03 15:32:56 INFO node.Application: Starting Channel c1
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
16/08/03 15:32:56 INFO node.Application: Starting Sink k1
16/08/03 15:32:56 INFO node.Application: Starting Source r1
16/08/03 15:32:56 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5458 }...
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
16/08/03 15:32:56 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
16/08/03 15:32:56 INFO source.AvroSource: Avro source r1 started.
Since I cannot get more verbose output, I have to suppose that information like
[...]
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
16/08/03 15:32:55 INFO conf.FlumeConfiguration: Processing:k1
[...]
indicates that the sink is correctly configured.
PS: I saw the following answers, but none of them worked (I must be missing something ...):
flume-hdfs-sink-generates-lots-of-tiny-files-on-hdfs
too-many-small-files-hdfs-sink-flume
flume-tiering-data-flows-using-the-avro-source-and-sink
flume-hdfs-sink-keeps-rolling-small-files
Increase the batch size as per your requirements:
a1.sinks.k1.hdfs.batchSize =
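Note also that in the configuration posted above, the roll settings are written against a sink named kl (with a lowercase L) while the sink is actually named k1; the Processing:kl lines in the startup output show Flume parsing them as a different, undeclared sink. The rollSize/rollCount overrides therefore never reach the real sink, which keeps rolling on Flume's small defaults and produces tiny files. A corrected sketch of those lines, keeping the values intended in the question:
# never roll based on time
a1.sinks.k1.hdfs.rollInterval = 0
# roll at 10 MB (10485760 bytes)
a1.sinks.k1.hdfs.rollSize = 10485760
# never roll based on number of events
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.round = false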
