Unable to establish a connection to a Kerberos- and SASL-enabled Kafka cluster from a producer - spring-boot

I am trying to connect a Kafka producer to a cluster that is Kerberos and SSL enabled.
Here is the properties.yml:
spring:
  autoconfigure:
    exclude[0]: org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
    exclude[1]: org.springframework.boot.actuate.autoconfigure.security.servlet.ManagementWebSecurityAutoConfiguration
  kafka:
    topics:
      - name: SOME_TOPIC
        num-partitions: 5
        replication-factor: 1
    bootstrap-servers:
      - xxx:9092
      - yyy:9092
      - zzz:9092
    autoCreateTopics: false
    template:
      default-topic: SOME_TOPIC
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      properties:
        security:
          protocol: SASL_SSL
        ssl:
          enabled:
            protocols: TLSv1.2
          truststore:
            location: C:\\resources\\truststorecred.jks
            password: truststorepass
            type: JKS
        sasl:
          mechanism: GSSAPI
          kerberos:
            service:
              name: kafka
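Note that everything nested under `properties:` is flattened into dotted Kafka client keys (`security.protocol`, `ssl.truststore.location`, and so on), which is why the producer config dump below prints flat names. A minimal sketch of that flattening, using an illustrative helper of my own rather than Spring's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlattenProps {

    // Recursively flatten nested maps into dotted keys, mimicking how the nested
    // YAML under spring.kafka.producer.properties ends up as flat Kafka client
    // properties. Illustrative only -- not Spring Boot's actual implementation.
    @SuppressWarnings("unchecked")
    static Map<String, String> flatten(String prefix, Map<String, Object> node, Map<String, String> out) {
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                flatten(key, (Map<String, Object>) e.getValue(), out);
            } else {
                out.put(key, String.valueOf(e.getValue()));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> props = new LinkedHashMap<>();
        props.put("security", Map.of("protocol", "SASL_SSL"));
        props.put("sasl", Map.of("mechanism", "GSSAPI"));
        Map<String, String> flat = flatten("", props, new LinkedHashMap<>());
        System.out.println(flat); // {security.protocol=SASL_SSL, sasl.mechanism=GSSAPI}
    }
}
```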
and the following VM options:
-Djava.security.auth.login.config=C:\jaas.conf
-Djava.security.krb5.conf=C:\resources\krb5.ini
jaas.conf is as follows:
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="C:\\resources\\serviceacc#xxx.keytab"
  principal="serviceacc#xxx.COM"
  useTicketCache=true
  serviceName="kafka";
};
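As an aside, kafka-clients 0.10.2 and later (including the 1.0.2 client shown in the logs, whose config dump prints `sasl.jaas.config = null`) also accept the JAAS entry inline via the `sasl.jaas.config` producer property, which avoids the `-Djava.security.auth.login.config` VM option entirely. A sketch reusing the question's values (same caveats apply; this only relocates the config, it does not change the authentication itself):

```yaml
spring:
  kafka:
    producer:
      properties:
        sasl:
          jaas:
            config: >-
              com.sun.security.auth.module.Krb5LoginModule required
              useKeyTab=true
              storeKey=true
              keyTab="C:\\resources\\serviceacc#xxx.keytab"
              principal="serviceacc#xxx.COM"
              serviceName="kafka";
```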
The Kerberos login succeeds, but immediately afterwards it fails with the exception below.
bootstrap.servers = [xxxx.com:9092, yyyy.com:9092, zzzz.com:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SASL_SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = C:\\resources\\truststorecred.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
2019-12-21 14:56:16.115 INFO 24216 --- [ main] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2019-12-21 14:56:16.117 INFO 24216 --- [xxx.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=serviceacc#xxx.COM]: TGT refresh thread started.
2019-12-21 14:56:16.118 INFO 24216 --- [xxx.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=serviceacc#xxx.COM]: TGT valid starting at: Sat Dec 21 14:56:15 IST 2019
2019-12-21 14:56:16.119 INFO 24216 --- [xxx.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=serviceacc#xxx.COM]: TGT expires: Sun Dec 22 00:56:15 IST 2019
2019-12-21 14:56:16.119 INFO 24216 --- [xxx.COM] o.a.k.c.security.kerberos.KerberosLogin : [Principal=serviceacc#xxx.COM]: TGT refresh sleeping until: Sat Dec 21 23:13:36 IST 2019
2019-12-21 14:56:16.912 INFO 24216 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.2
2019-12-21 14:56:16.912 INFO 24216 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 2a121f7b1d402825
2019-12-21 14:56:22.085 WARN 24216 --- [| adminclient-1] o.a.k.common.network.SslTransportLayer : Failed to send SSL Close message
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.write0(Native Method) ~[na:1.8.0_191]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:51) ~[na:1.8.0_191]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_191]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_191]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_191]
at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:213) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:176) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:703) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:61) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.network.Selector.doClose(Selector.java:741) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.network.Selector.close(Selector.java:729) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:522) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:412) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) [kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1006) [kafka-clients-1.0.2.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
2019-12-21 14:56:22.087 WARN 24216 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -2 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
2019-12-21 14:56:26.598 WARN 24216 --- [| adminclient-1] o.a.k.common.network.SslTransportLayer : Failed to send SSL Close message
Help would be greatly appreciated.
Thank you

Just a small change worked for me:
security.protocol: SASL_PLAINTEXT
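In the Spring Boot YAML from the question, that property lives under `spring.kafka.producer.properties` (sketch below, mirroring the question's structure). Be aware that `SASL_PLAINTEXT` disables TLS on the connection, so this only helps if the broker listener you are connecting to is not actually SSL-enabled:

```yaml
spring:
  kafka:
    producer:
      properties:
        security:
          protocol: SASL_PLAINTEXT  # was SASL_SSL; TLS is no longer used
```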

Related

Eureka Registered Application is null: false

I'm new to Eureka and got this error while joining an API gateway to Eureka.
2022-06-02 06:51:45.941 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
2022-06-02 06:51:50.949 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Disable delta property : false
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Force full registry fetch : false
2022-06-02 06:51:50.951 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application is null : false
2022-06-02 06:51:50.952 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Registered Applications size is zero : true
2022-06-02 06:51:50.952 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Application version is -1: false
2022-06-02 06:51:50.953 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2022-06-02 06:51:50.966 INFO 8 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : The response status is 200
This is my Eureka configuration, which I keep in a repository. The Eureka server itself is quite stable, but none of my other services can join it.
#Eureka Client - Configuration
eureka:
  instance:
    preferIpAddress: true
    appname: ${spring.application.name}
    hostname: service-registry
    health-check-url-path: /actuator/health
    lease-renewal-interval-in-seconds: 10
  client:
    enabled: true
    healthcheck:
      enabled: true
    register-with-eureka: true # Wisnu
    fetch-registry: true # Wisnu
    registry-fetch-interval-seconds: 5
    service-url:
      defaultZone: http://34.101.154.152:8761/eureka
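Each joining service (not just the registry) needs its own client configuration pointing at the same `defaultZone`. A minimal sketch for one of the other services, assuming the registry is reachable at the same address (the application name is illustrative):

```yaml
spring:
  application:
    name: api-gateway   # illustrative name
eureka:
  instance:
    preferIpAddress: true
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://34.101.154.152:8761/eureka
```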

Is it to be expected that the client will discover each node twice for the Hazelcast sidecar caching pattern?

I'm pretty new to Hazelcast and its interesting feature of auto-syncing with other cache instances. My questions are at the bottom of the description.
Here was my initial goal:
Design an environment following Hazelcast sidecar caching pattern.
There will be no cache on the application container side; basically, I don't want to use a near cache, to keep my JVM light and reduce GC time.
Application Container in each Node will communicate with its own sidecar cache container via localhost IP.
Hazelcast management center will be a separate node that communicates with all the nodes containing Hazelcast sidecar cache container.
Here is the target design:
I prepared the following Hazelcast configuration [hazelcast.yaml] for the Hazelcast container:
hazelcast:
  cluster-name: dev
  network:
    port:
      auto-increment: false
      port-count: 3
      port: 5701
I also prepared another hazelcast.yaml for my application container,
hazelcast:
  map:
    default:
      backup-count: 0
      async-backup-count: 1
      read-backup-data: true
  network:
    reuse-address: true
    port:
      auto-increment: true
      port: 5701
    join:
      multicast:
        enabled: true
      kubernetes:
        enabled: false
      tcp-ip:
        enabled: false
        interface: 127.0.0.1
        member-list:
          - 127.0.0.1:5701
Here is the client part; I used Spring Boot for it.
@Component
public class CacheClient {

    private static final String ITEMS = "items";

    private HazelcastInstance client;

    CacheClient() throws IOException {
        ClientConfig config = new YamlClientConfigBuilder("hazelcast.yaml").build();
        config.setInstanceName(UUID.randomUUID().toString());
        client = HazelcastClient.getOrCreateHazelcastClient(config);
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
Here is the Dockerfile I used to build my application container image:
FROM adoptopenjdk/openjdk11:jdk-11.0.5_10-alpine-slim
# Expose port 8081 to Docker host
EXPOSE 8081
WORKDIR /opt
COPY /build/libs/hazelcast-client-0.0.1-SNAPSHOT.jar /opt/app.jar
COPY /src/main/resources/hazelcast.yaml /opt/hazelcast.yaml
COPY /src/main/resources/application.properties /opt/application.properties
ENTRYPOINT ["java","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.initial.min.cluster.size=1","-Dhazelcast.socket.bind.any=false","-Dhazelcast.socket.server.bind.any=false","-Dhazelcast.socket.client.bind=false","-Dhazelcast.socket.client.bind.any=false","-Dhazelcast.logging.type=slf4j","-jar","app.jar"]
Here is the deployment script I used,
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: spring-hazelcast-service
spec:
  selector:
    app: spring-hazelcast-app
  ports:
    - protocol: "TCP"
      name: http-app
      port: 8081 # The port that the service is running on in the cluster
      targetPort: 8081 # The port exposed by the service
  type: LoadBalancer # type of the service. LoadBalancer indicates that our service will be external.
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: spring-hazelcast-app
spec:
  selector:
    matchLabels:
      app: spring-hazelcast-app
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: spring-hazelcast-app
    spec:
      containers:
        - name: hazelcast
          image: hazelcast/hazelcast:4.0.2
          workingDir: /opt
          ports:
            - name: hazelcast
              containerPort: 5701
          env:
            - name: HZ_CLUSTERNAME
              value: dev
            - name: JAVA_OPTS
              value: -Dhazelcast.config=/opt/config/hazelcast.yml
          volumeMounts:
            - mountPath: "/opt/config/"
              name: allconf
        - name: spring-hazelcast-app
          image: spring-hazelcast:1.0.3
          imagePullPolicy: Never #IfNotPresent
          ports:
            - containerPort: 8081 # The port that the container is running on in the cluster
      volumes:
        - name: allconf
          hostPath:
            path: /opt/config/ # directory location on host
            type: Directory # this field is optional
---
apiVersion: v1 # Kubernetes API version
kind: Service # Kubernetes resource kind we are creating
metadata: # Metadata of the resource kind we are creating
  name: hazelcast-mc-service
spec:
  selector:
    app: hazelcast-mc
  ports:
    - protocol: "TCP"
      name: mc-app
      port: 8080 # The port that the service is running on in the cluster
      targetPort: 8080 # The port exposed by the service
  type: LoadBalancer # type of the service
  loadBalancerIP: "127.0.0.1"
---
apiVersion: apps/v1
kind: Deployment # Kubernetes resource kind we are creating
metadata:
  name: hazelcast-mc
spec:
  selector:
    matchLabels:
      app: hazelcast-mc
  replicas: 1 # Number of replicas that will be created for this deployment
  template:
    metadata:
      labels:
        app: hazelcast-mc
    spec:
      containers:
        - name: hazelcast-mc
          image: hazelcast/management-center
          ports:
            - containerPort: 8080 # The port that the container is running on in the cluster
Here are my application logs:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.5.4)
2021-09-27 06:42:51.274 INFO 1 --- [ main] com.caching.Application : Starting Application using Java 11.0.5 on spring-hazelcast-app-7bdc8b7f7-bqdlt with PID 1 (/opt/app.jar started by root in /opt)
2021-09-27 06:42:51.278 INFO 1 --- [ main] com.caching.Application : No active profile set, falling back to default profiles: default
2021-09-27 06:42:55.986 INFO 1 --- [ main] c.h.c.impl.spi.ClientInvocationService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Running with 2 response threads, dynamic=true
2021-09-27 06:42:56.199 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTING
2021-09-27 06:42:56.202 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (jar:file:/opt/app.jar!/BOOT-INF/lib/hazelcast-all-4.0.2.jar!/) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2021-09-27 06:42:56.277 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to cluster: dev
2021-09-27 06:42:56.302 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Trying to connect to [127.0.0.1]:5701
2021-09-27 06:42:56.429 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] HazelcastClient 4.0.2 (20200702 - 2de3027) is CLIENT_CONNECTED
2021-09-27 06:42:56.429 INFO 1 --- [ main] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5701:c967f642-a7aa-4deb-a530-b56fb8f68c78, server version: 4.0.2, local address: /127.0.0.1:54373
2021-09-27 06:42:56.436 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:56.461 INFO 1 --- [21ad30a.event-4] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [1] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
}
2021-09-27 06:42:56.803 INFO 1 --- [ main] c.h.c.i.s.ClientStatisticsService : Client statistics is enabled with period 5 seconds.
2021-09-27 06:42:57.878 INFO 1 --- [ main] c.h.i.config.AbstractConfigLocator : Loading 'hazelcast.yaml' from the working directory.
2021-09-27 06:42:57.934 WARN 1 --- [ main] c.h.i.impl.HazelcastInstanceFactory : Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2021-09-27 06:42:57.976 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2021-09-27 06:42:57.987 INFO 1 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [4.0.2] Picked [172.17.0.3]:5702, using socket ServerSocket[addr=/172.17.0.3,localport=5702], bind any local is false
2021-09-27 06:42:58.004 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Hazelcast 4.0.2 (20200702 - 2de3027) starting at [172.17.0.3]:5702
2021-09-27 06:42:58.005 INFO 1 --- [ main] com.hazelcast.system : [172.17.0.3]:5702 [dev] [4.0.2] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2021-09-27 06:42:58.047 INFO 1 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [172.17.0.3]:5702 [dev] [4.0.2] Backpressure is disabled
2021-09-27 06:42:58.373 INFO 1 --- [ main] com.hazelcast.instance.impl.Node : [172.17.0.3]:5702 [dev] [4.0.2] Creating MulticastJoiner
2021-09-27 06:42:58.380 WARN 1 --- [ main] com.hazelcast.cp.CPSubsystem : [172.17.0.3]:5702 [dev] [4.0.2] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2021-09-27 06:42:58.676 INFO 1 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [172.17.0.3]:5702 [dev] [4.0.2] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
2021-09-27 06:42:58.682 INFO 1 --- [ main] c.h.internal.diagnostics.Diagnostics : [172.17.0.3]:5702 [dev] [4.0.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2021-09-27 06:42:58.687 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTING
2021-09-27 06:42:58.923 INFO 1 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [172.17.0.3]:5702 [dev] [4.0.2] Trying to join to discovered node: [172.17.0.3]:5701
2021-09-27 06:42:58.932 INFO 1 --- [cached.thread-3] c.h.internal.nio.tcp.TcpIpConnector : [172.17.0.3]:5702 [dev] [4.0.2] Connecting to /172.17.0.3:5701, timeout: 10000, bind-any: false
2021-09-27 06:42:58.955 INFO 1 --- [.IO.thread-in-0] c.h.internal.nio.tcp.TcpIpConnection : [172.17.0.3]:5702 [dev] [4.0.2] Initialized new cluster connection between /172.17.0.3:40242 and /172.17.0.3:5701
2021-09-27 06:43:04.948 INFO 1 --- [21ad30a.event-3] c.h.c.impl.spi.ClientClusterService : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2]
Members [2] {
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3
}
2021-09-27 06:43:04.959 WARN 1 --- [ration.thread-0] c.h.c.i.operation.OnJoinCacheOperation : [172.17.0.3]:5702 [dev] [4.0.2] This member is joining a cluster whose members support JCache, however the cache-api artifact is missing from this member's classpath. In case JCache API will be used, add cache-api artifact in this member's classpath and restart the member.
2021-09-27 06:43:04.963 INFO 1 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [172.17.0.3]:5702 [dev] [4.0.2]
Members {size:2, ver:2} [
Member [172.17.0.3]:5701 - c967f642-a7aa-4deb-a530-b56fb8f68c78
Member [172.17.0.3]:5702 - 08dfe633-46b2-4581-94c7-81b6d0bc3ce3 this
]
2021-09-27 06:43:05.466 INFO 1 --- [ration.thread-1] c.h.c.i.p.t.AuthenticationMessageTask : [172.17.0.3]:5702 [dev] [4.0.2] Received auth from Connection[id=2, /172.17.0.3:5702->/172.17.0.3:40773, qualifier=null, endpoint=[172.17.0.3]:40773, alive=true, connectionType=JVM], successfully authenticated, clientUuid: 8843f057-c856-4739-80ae-4bc930559bd5, client version: 4.0.2
2021-09-27 06:43:05.468 INFO 1 --- [d30a.internal-3] c.h.c.i.c.ClientConnectionManager : b1bdd9bb-2879-4161-95fd-2b6e321ad30a [dev] [4.0.2] Authenticated with server [172.17.0.3]:5702:08dfe633-46b2-4581-94c7-81b6d0bc3ce3, server version: 4.0.2, local address: /172.17.0.3:40773
2021-09-27 06:43:05.968 INFO 1 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.3]:5702 [dev] [4.0.2] [172.17.0.3]:5702 is STARTED
2021-09-27 06:43:06.237 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8081
2021-09-27 06:43:06.251 INFO 1 --- [ main] com.caching.Application : Started Application in 17.32 seconds (JVM running for 21.02)
Here is the Hazelcast management center member list,
Finally, my questions are:
Why am I seeing 2 members when there is only one sidecar cache container deployed?
What modifications are required to reach my initial goal?
According to the Spring Boot documentation for the Hazelcast feature:
If a client can't be created, Spring Boot attempts to configure an embedded server.
So Spring Boot starts an embedded member from the hazelcast.yaml in your application container, and that member joins the Hazelcast container's cluster via multicast. That embedded member is the second member you see.
You should replace your hazelcast.yaml in the Spring Boot app container with hazelcast-client.yaml with the following content:
hazelcast-client:
  cluster-name: "dev"
  network:
    cluster-members:
      - "127.0.0.1:5701"
After doing that, Spring Boot will auto-configure a client HazelcastInstance bean, and you will be able to change your cache client like this:
@Component
public class CacheClient {

    private static final String ITEMS = "items";

    private final HazelcastInstance client;

    public CacheClient(HazelcastInstance client) {
        this.client = client;
    }

    public Item put(String number, Item item) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.putIfAbsent(number, item);
    }

    public Item get(String key) {
        IMap<String, Item> map = client.getMap(ITEMS);
        return map.get(key);
    }
}
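If the client file does not live at the classpath root (where Spring Boot looks for hazelcast-client.yaml automatically), you can point at it explicitly with the spring.hazelcast.config property. A sketch assuming the /opt layout from the Dockerfile above:

```properties
# application.properties (sketch): explicit location of the Hazelcast client config
spring.hazelcast.config=file:/opt/hazelcast-client.yaml
```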

Configure GremlinServer to JanusGraph with HBase and Elasticsearch

I can't create an instance of GremlinServer with HBase and Elasticsearch.
When I run the shell script bin/gremlin-server.sh config/gremlin.yaml, I get this exception:
Exception in thread "main" java.lang.IllegalStateException: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
Gremlin-server logs
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/user/janusgraph/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/user/janusgraph/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
0 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
135 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Configuring Gremlin Server from config/gremlin.yaml
211 [main] INFO org.apache.tinkerpop.gremlin.server.util.MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
557 [main] INFO org.janusgraph.diskstorage.hbase.HBaseCompatLoader - Instantiated HBase compatibility layer supporting runtime HBase version 1.2.6: org.janusgraph.diskstorage.hbase.HBaseCompat1_0
835 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - HBase configuration: setting zookeeper.znode.parent=/hbase-unsecure
836 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied host list from root.storage.hostname to hbase.zookeeper.quorum: main.local,data1.local,data2.local
836 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied Zookeeper Port from root.storage.port to hbase.zookeeper.property.clientPort: 2181
866 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1214 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x1e44b638 connecting to ZooKeeper ensemble=main.local:2181,data1.local:2181,data2.local:2181
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:host.name=main.local
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
1220 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
1221 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/user/janusgraph/conf/gremlin-server:/home/user/janusgraph/lib/slf4j-log4j12-
// ... (JanusGraph lists many more classpath entries here)
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-862.el7.x86_64
1256 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.name=user
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/user
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/user/janusgraph
1257 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=main.local:2181,data1.local:2181,data2.local:2181 sessionTimeout=90000 watcher=hconnection-0x1e44b6380x0, quorum=main.local:2181,data1.local:2181,data2.local:2181, baseZNode=/hbase-unsecure
1274 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Opening socket connection to server data2.local/xxx.xxx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
1394 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Socket connection established to data2.local/xxx.xxx.xxx.xxx, initiating session
1537 [main-SendThread(data2.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Session establishment complete on server data2.local/xxx.xxx.xxx.xxx:2181, sessionid = 0x26b266353e50014, negotiated timeout = 60000
3996 [main] INFO org.janusgraph.core.util.ReflectiveConfigOptionLoader - Loaded and initialized config classes: 13 OK out of 13 attempts in PT0.631S
4103 [main] INFO org.reflections.Reflections - Reflections took 60 ms to scan 2 urls, producing 0 keys and 0 values
4400 [main] WARN org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Local setting cache.db-cache-time=180000 (Type: GLOBAL_OFFLINE) is overridden by globally managed value (10000). Use the ManagementSystem interface instead of the local configuration to control this setting.
4453 [main] WARN org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Local setting cache.db-cache-clean-wait=20 (Type: GLOBAL_OFFLINE) is overridden by globally managed value (50). Use the ManagementSystem interface instead of the local configuration to control this setting.
4473 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing master protocol: MasterService
4474 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x26b266353e50014
4485 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Session: 0x26b266353e50014 closed
4485 [main-EventThread] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - EventThread shut down
4500 [main] INFO org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=c0a8873843641-main-local1
4530 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - HBase configuration: setting zookeeper.znode.parent=/hbase-unsecure
4530 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied host list from root.storage.hostname to hbase.zookeeper.quorum: main.local,data1.local,data2.local
4531 [main] INFO org.janusgraph.diskstorage.hbase.HBaseStoreManager - Copied Zookeeper Port from root.storage.port to hbase.zookeeper.property.clientPort: 2181
4532 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x5bb3d42d connecting to ZooKeeper ensemble=main.local:2181,data1.local:2181,data2.local:2181
4532 [main] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=main.local:2181,data1.local:2181,data2.local:2181 sessionTimeout=90000 watcher=hconnection-0x5bb3d42d0x0, quorum=main.local:2181,data1.local:2181,data2.local:2181, baseZNode=/hbase-unsecure
4534 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Opening socket connection to server main.local/xxx.xxx.xxx.xxx:2181. Will not attempt to authenticate using SASL (unknown error)
4534 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Socket connection established to main.local/xxx.xxx.xxx.xxx:2181, initiating session
4611 [main-SendThread(main.local:2181)] INFO org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - Session establishment complete on server main.local/xxx.xxx.xxx.xxx:2181, sessionid = 0x36b266353fd0021, negotiated timeout = 60000
4616 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
5781 [main] INFO org.janusgraph.diskstorage.Backend - Initiated backend operations thread pool of size 16
6322 [main] INFO org.janusgraph.diskstorage.Backend - Configuring total store cache size: 186687592
7555 [main] INFO org.janusgraph.graphdb.database.IndexSerializer - Hashing index keys
7925 [main] INFO org.janusgraph.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time 2019-06-13T09:54:08.929Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller#656d10a4
7927 [main] INFO org.apache.tinkerpop.gremlin.server.GremlinServer - Graph [graph] was successfully configured via [config/db.properties].
7927 [main] INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
Exception in thread "main" java.lang.IllegalStateException: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:522)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:126)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:83)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor$Builder.create(GremlinExecutor.java:813)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:169)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:89)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:110)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:363)
Caused by: java.lang.NoSuchMethodException: org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin.build()
at java.lang.Class.getMethod(Class.java:1786)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:492)
... 7 more
Graph configuration:
storage.backend=hbase
storage.hostname=main.local,data1.local,data2.local
storage.port=2181
storage.hbase.ext.zookeeper.znode.parent=/hbase-unsecure
cache.db-cache=true
cache.db-cache-clean-wait=20
cache.db-cache-time=180000
cache.db-cache-size=0.5
index.search.backend=elasticsearch
index.search.hostname=xxx.xxx.xxx.xxx
index.search.port=9200
index.search.elasticsearch.client-only=false
gremlin.graph=org.janusgraph.core.JanusGraphFactory
host=0.0.0.0
Gremlin Server configuration:
host: localhost
port: 8182
channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
graphs: { graph: config/db.properties }
scriptEngines: {
gremlin-groovy: {
plugins: {
org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: { classImports: [java.lang.Math], methodImports: [java.lang.Math#*] },
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: { files: [scripts/janusgraph.groovy] }
}
}
}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true } }
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] } }
metrics: {
slf4jReporter: {enabled: true, interval: 180000}
}
What do I need to do so that the server starts without this error?
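Not a definitive fix, but the stack trace points at plugin resolution: GremlinExecutor instantiates each class listed under plugins through a static build()/instance() factory, and a NoSuchMethodException there usually means the server's TinkerPop line does not match the one JanusGraph was built against. If the bundled Gremlin Server is actually on the 3.2 line (the one JanusGraph 0.2.x targets), the yaml uses a name-based top-level list rather than the class map shown above — a sketch, under that assumption:

```yaml
# TinkerPop 3.2.x-style registration (top-level list of registered plugin
# names); only applicable if the bundled gremlin-server is on the 3.2 line.
plugins:
  - janusgraph.imports
```

Otherwise, the first thing worth checking is that the janusgraph-* and gremlin-* jar versions in the server's lib/ directory agree with each other.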

A component required a bean named '' that could not be found

I'm trying to build my first Grails application using the grails-spring-security-rest plugin, following this post's instructions.
However, when I try to run the application it gives me the following output:
| Running application...
2017-05-07 20:18:54.614 WARN --- [ main] g.p.s.SpringSecurityCoreGrailsPlugin :
Configuring Spring Security Core ...
Configuring Spring Security Core ...
2017-05-07 20:18:54.688 WARN --- [ main] g.p.s.SpringSecurityCoreGrailsPlugin : ... finished configuring Spring Security Core
... finished configuring Spring Security Core
Configuring Spring Security REST 2.0.0.M2...
... finished configuring Spring Security REST
... with GORM support
2017-05-07 20:19:00.278 DEBUG --- [ost-startStop-1] o.s.s.w.a.i.FilterSecurityInterceptor : Validated configuration attributes
2017-05-07 20:19:00.527 DEBUG --- [ost-startStop-1] g.p.s.r.t.g.jwt.FileRSAKeyProvider : Loading public/private key from DER files
2017-05-07 20:19:00.531 DEBUG --- [ost-startStop-1] g.p.s.r.t.g.jwt.FileRSAKeyProvider : Public key path: /mnt/dev/Workspaces/LZR.RAS/RAS-API/security/public_key.der
2017-05-07 20:19:00.538 DEBUG --- [ost-startStop-1] g.p.s.r.t.g.jwt.FileRSAKeyProvider : Private key path: /mnt/dev/Workspaces/LZR.RAS/RAS-API/security/private_key.der
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] g.p.s.rest.RestTokenValidationFilter : Initializing filter 'restTokenValidationFilter'
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] g.p.s.rest.RestTokenValidationFilter : Filter 'restTokenValidationFilter' configured successfully
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] o.s.s.w.a.ExceptionTranslationFilter : Initializing filter 'restExceptionTranslationFilter'
2017-05-07 20:19:00.612 DEBUG --- [ost-startStop-1] o.s.s.w.a.ExceptionTranslationFilter : Filter 'restExceptionTranslationFilter' configured successfully
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] o.s.security.web.FilterChainProxy : Initializing filter 'filterChainProxy'
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] o.s.security.web.FilterChainProxy : Filter 'filterChainProxy' configured successfully
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestLogoutFilter : Initializing filter 'restLogoutFilter'
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestLogoutFilter : Filter 'restLogoutFilter' configured successfully
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestAuthenticationFilter : Initializing filter 'restAuthenticationFilter'
2017-05-07 20:19:00.613 DEBUG --- [ost-startStop-1] g.p.s.rest.RestAuthenticationFilter : Filter 'restAuthenticationFilter' configured successfully
2017-05-07 20:19:02.731 DEBUG --- [ main] o.s.s.a.h.RoleHierarchyImpl : setHierarchy() - The following role hierarchy was set:
2017-05-07 20:19:03.064 ERROR --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
A component required a bean named '' that could not be found.
Action:
Consider defining a bean named '' in your configuration.
Here is my application.yml content:
---
grails:
profile: rest-api
codegen:
defaultPackage: ras
spring:
transactionManagement:
proxies: false
info:
app:
name: '#info.app.name#'
version: '#info.app.version#'
grailsVersion: '#info.app.grailsVersion#'
spring:
main:
banner-mode: "off"
groovy:
template:
check-template-location: false
# Spring Actuator Endpoints are Disabled by Default
endpoints:
enabled: false
jmx:
enabled: true
---
grails:
mime:
disable:
accept:
header:
userAgents:
- Gecko
- WebKit
- Presto
- Trident
types:
json:
- application/json
- text/json
hal:
- application/hal+json
- application/hal+xml
xml:
- text/xml
- application/xml
atom: application/atom+xml
css: text/css
csv: text/csv
js: text/javascript
rss: application/rss+xml
text: text/plain
all: '*/*'
urlmapping:
cache:
maxsize: 1000
controllers:
defaultScope: singleton
converters:
encoding: UTF-8
---
hibernate:
cache:
queries: false
use_second_level_cache: true
use_query_cache: false
region.factory_class: org.hibernate.cache.ehcache.EhCacheRegionFactory
dataSource:
pooled: true
jmxExport: true
driverClassName: com.mysql.jdbc.Driver
dialect: org.hibernate.dialect.MySQL5InnoDBDialect
username: *******
password: *******
environments:
development:
dataSource:
dbCreate: create-drop
url: jdbc:mysql://localhost:3306/ras_dev?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8&useSSL=false
test:
dataSource:
dbCreate: create-drop
url: jdbc:mysql://localhost:3306/ras_test?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8&useSSL=false
production:
dataSource:
dbCreate: update
url: jdbc:mysql://localhost:3306/ras?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8
properties:
jmxEnabled: true
initialSize: 5
maxActive: 50
minIdle: 5
maxIdle: 25
maxWait: 10000
maxAge: 600000
timeBetweenEvictionRunsMillis: 5000
minEvictableIdleTimeMillis: 60000
validationQuery: SELECT 1
validationQueryTimeout: 3
validationInterval: 15000
testOnBorrow: true
testWhileIdle: true
testOnReturn: false
jdbcInterceptors: ConnectionState
defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
application.groovy
grails.plugin.springsecurity.useSecurityEventListener = true
grails.plugin.springsecurity.securityConfigType = 'InterceptUrlMap'
grails.plugin.springsecurity.rememberMe.persistent = true
grails.plugin.springsecurity.rest.login.active = true
grails.plugin.springsecurity.rest.login.useJsonCredentials = true
grails.plugin.springsecurity.rest.login.usernamePropertyName = 'username'
grails.plugin.springsecurity.rest.login.passwordPropertyName = 'password'
grails.plugin.springsecurity.rest.login.failureStatusCode = 401
grails.plugin.springsecurity.rest.login.endpointUrl = '/api/login'
grails.plugin.springsecurity.rest.logout.endpointUrl = '/api/logout'
grails.plugin.springsecurity.rest.token.storage.jwt.useEncryptedJwt = true
grails.plugin.springsecurity.rest.token.storage.jwt.privateKeyPath = 'security/private_key.der'
grails.plugin.springsecurity.rest.token.storage.jwt.publicKeyPath = 'security/public_key.der'
grails.plugin.springsecurity.rest.token.rendering.authoritiesPropertyName = 'permissions'
grails.plugin.springsecurity.rest.token.rendering.usernamePropertyName = 'username'
grails.plugin.springsecurity.rest.token.generation.useSecureRandom = true
grails.plugin.springsecurity.rest.token.validation.headerName = 'X-Auth-Token'
grails.plugin.springsecurity.rest.token.validation.useBearerToken = false
grails.plugin.springsecurity.filterChain.chainMap = [
['/api/**': 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
['/data/**': 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
['/**': 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter'] // Traditional chain
]
grails.plugin.springsecurity.interceptUrlMap = [
[pattern: '/', access: ['permitAll']],
[pattern: '/assets/**', access: ['permitAll']],
[pattern: '/partials/**', access: ['permitAll']],
[pattern: '/**/js/**', access: ['permitAll']],
[pattern: '/**/css/**', access: ['permitAll']],
[pattern: '/**/images/**', access: ['permitAll']],
[pattern: '/**/favicon.ico', access: ['permitAll']],
[pattern: '/api/login', access: ['permitAll']],
[pattern: '/api/logout', access: ['isFullyAuthenticated()']],
[pattern: '/api/validate', access: ['isFullyAuthenticated()']],
[pattern: '/**', access: ['isFullyAuthenticated()']]
]
resources.groovy
import ras.bean.DefaultSecurityEventListener
import ras.auth.DefaultJsonPayloadCredentialsExtractor
beans = {
credentialsExtractor(DefaultJsonPayloadCredentialsExtractor)
defaultSecurityEventListener(DefaultSecurityEventListener)
}
Grails version:
$ grails --version
| Grails Version: 3.2.6
| Groovy Version: 2.4.7
| JVM Version: 1.8.0_121
UPDATE 1
I have added the following lines to logback.groovy:
logger("org.springframework.security", DEBUG, ['STDOUT'], false)
logger("grails.plugin.springsecurity", DEBUG, ['STDOUT'], false)
logger("org.pac4j", DEBUG, ['STDOUT'], false)
Yet the console output and the stacktrace.log file contain the same output as posted above.
I would really appreciate any suggestions on how to fix this error.
Finally, I was able to fix the problem:
Issue 1:
I had created the User, Role, and UserRole classes manually instead of generating them with
grails s2-quickstart com.app-name User Role
as described here.
Issue 2:
I had used the wrong format for the chainMap filters. Here is the one that worked for me:
grails.plugin.springsecurity.filterChain.chainMap = [
[pattern: '/assets/**', filters: 'none'],
[pattern: '/**/js/**', filters: 'none'],
[pattern: '/**/css/**', filters: 'none'],
[pattern: '/**/images/**', filters: 'none'],
[pattern: '/**/favicon.ico', filters: 'none'],
[pattern: '/api/**', filters: 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
[pattern: '/data/**', filters: 'JOINED_FILTERS,-exceptionTranslationFilter,-authenticationProcessingFilter,-securityContextPersistenceFilter'], // Stateless chain
[pattern: '/**', filters: 'JOINED_FILTERS,-restTokenValidationFilter,-restExceptionTranslationFilter'] // Traditional chain
]
Field springSecurityService in com.form.application.UserPasswordEncoderListener required a bean of type 'grails.plugin.springsecurity.SpringSecurityService' that could not be found.
Action:
Consider defining a bean of type 'grails.plugin.springsecurity.SpringSecurityService' in your configuration
I am still getting this issue.
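As a hedged sketch for this follow-up error: springSecurityService is a bean registered by the spring-security-core plugin, so by-type injection into UserPasswordEncoderListener only resolves if the listener itself is a Spring-managed bean. Assuming that is the intent, wiring it explicitly inside the existing beans closure in resources.groovy would look something like:

```groovy
// Sketch only: registers the listener as a bean and wires the plugin-provided
// springSecurityService by name instead of relying on by-type autowiring.
userPasswordEncoderListener(com.form.application.UserPasswordEncoderListener) {
    springSecurityService = ref('springSecurityService')
}
```

If the plugin itself is missing from build.gradle, no such wiring will help — the bean simply does not exist in that case.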

org.apache.solr.common.SolrException: Not Found

I want to build a web crawler using Nutch 1.9 and Solr 4.10.2.
The crawling works, but indexing fails. I have searched for the cause and tried many approaches, but nothing seems to work. This is what I get:
Indexer: starting at 2015-03-13 20:51:08
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication
Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:114)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:176)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:186)
And this is what I get when I look at the log file:
2015-03-13 20:51:08,768 INFO indexer.IndexingJob - Indexer: starting at 2015-03-13 20:51:08
2015-03-13 20:51:08,846 INFO indexer.IndexingJob - Indexer: deleting gone documents: false
2015-03-13 20:51:08,846 INFO indexer.IndexingJob - Indexer: URL filtering: false
2015-03-13 20:51:08,846 INFO indexer.IndexingJob - Indexer: URL normalizing: false
2015-03-13 20:51:09,117 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
2015-03-13 20:51:09,117 INFO indexer.IndexingJob - Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication
2015-03-13 20:51:09,121 INFO indexer.IndexerMapReduce - IndexerMapReduce: crawldb: testCrawl/crawldb
2015-03-13 20:51:09,122 INFO indexer.IndexerMapReduce - IndexerMapReduce: linkdb: testCrawl/linkdb
2015-03-13 20:51:09,122 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150311221258
2015-03-13 20:51:09,234 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150311222328
2015-03-13 20:51:09,235 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150311222727
2015-03-13 20:51:09,236 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: testCrawl/segments/20150312085908
2015-03-13 20:51:09,282 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-03-13 20:51:09,747 INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
2015-03-13 20:51:20,904 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: content dest: content
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: title dest: title
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: host dest: host
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: segment dest: segment
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: boost dest: boost
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: digest dest: digest
2015-03-13 20:51:20,929 INFO solr.SolrMappingReader - source: tstamp dest: tstamp
2015-03-13 20:51:21,192 INFO solr.SolrIndexWriter - Indexing 250 documents
2015-03-13 20:51:21,192 INFO solr.SolrIndexWriter - Deleting 0 documents
2015-03-13 20:51:21,342 INFO solr.SolrIndexWriter - Indexing 250 documents
2015-03-13 20:51:21,437 WARN mapred.LocalJobRunner - job_local1194740690_0001
org.apache.solr.common.SolrException: Not Found
Not Found
request: http://127.0.0.1:8983/solr/update?wt=javabin&version=2
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430)
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.write(SolrIndexWriter.java:135)
at org.apache.nutch.indexer.IndexWriters.write(IndexWriters.java:88)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:50)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:41)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:458)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:500)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:323)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:53)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:522)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:398)
2015-03-13 20:51:21,607 ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:114)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:176)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:186)
Any help would be appreciated.
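One detail worth noting: the failing request URL ends in /solr/update with no core name, and on Solr 4.x the update handler lives under a specific core. A sketch of the core-qualified URL the indexer usually needs (with a hypothetical core name):

```shell
# Sketch: build a core-qualified Solr URL. "collection1" is a hypothetical
# core name; substitute the core you actually created in Solr.
SOLR_URL="http://127.0.0.1:8983/solr/collection1"
echo "${SOLR_URL}/update"
# Then re-run the indexing step against that URL, e.g.:
#   bin/nutch solrindex "$SOLR_URL" testCrawl/crawldb -linkdb testCrawl/linkdb testCrawl/segments/*
```

The echoed URL is the endpoint the stack trace's request line should show once a core is included.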

Resources