Java client for Hazelcast cluster not working with listener

1. We have a requirement to configure a Hazelcast client that listens to map events such as entry added and entry updated.
2. Please post a link to examples.
3. We currently use embedded Hazelcast and want to move to a client-server model.
The client is able to connect to the server:
INFO: hz.client_0 [simpleserver] [3.10.1] HazelcastClient 3.10.1
(20180521 - 66f881d) is CLIENT_CONNECTED
Code in the client to add the listener:
HazelcastInstance hz = HazelcastClient.newHazelcastClient(cg);
hz.getMap("READ_ONLY_MAP").addEntryListener(new EntryAdapter<String, String>() {
    @Override
    public void entryAdded(EntryEvent<String, String> event) {
        System.out.println("entry added: " + event.getValue());
    }
}, true);
hazelcast.xml:
<map name="READ_ONLY_MAP">
<max-size policy="FREE_HEAP_PERCENTAGE">30</max-size>
<eviction-policy>LFU</eviction-policy>
</map>

Update: I was able to solve this by registering the entry listener on the map on both the server and the client.
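For reference, a minimal sketch of what the member (server) side registration can look like, assuming an embedded 3.x member that owns the same READ_ONLY_MAP (the class name here is illustrative):
import com.hazelcast.core.EntryAdapter;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ServerSideListener {
    public static void main(String[] args) {
        // Start (or obtain) the embedded member that owns the map.
        HazelcastInstance member = Hazelcast.newHazelcastInstance();

        // Register the same kind of listener on the member side;
        // 'true' asks Hazelcast to include values in the delivered events.
        member.getMap("READ_ONLY_MAP").addEntryListener(new EntryAdapter<String, String>() {
            @Override
            public void entryAdded(EntryEvent<String, String> event) {
                System.out.println("member saw entry added: " + event.getValue());
            }
        }, true);
    }
}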

Related

Standalone hazelcast expired event not working, but embedded hazelcast expired event does work

I used embedded Hazelcast on localhost:5701.
Later I changed the config: now I use standalone Hazelcast instances on app-server:5701, three machines built into one cluster. Everything works fine except the expired event.
The cache configuration is rather simple and common:
@Slf4j
@RequiredArgsConstructor
@Component
public class ExpiredListener implements EntryEvictedListener<String, Application>, EntryExpiredListener<String, Application> {

    private final ApplicationEventPublisher publisher;

    @Override
    public void entryEvicted(EntryEvent<String, Application> event) {
        publisher.publishEvent(new ApplicationCacheExpireEvent(this, event.getOldValue()));
    }

    @Override
    public void entryExpired(EntryEvent<String, Application> event) {
        publisher.publishEvent(new ApplicationCacheExpireEvent(this, event.getOldValue()));
    }
}
and
@Bean
public List<MapConfig> mapConfigs(final ExpiredListener expiredListener) {
    final MapConfig applicationCache = getMapConfig(APPLICATIONS_CACHE_NAME)
            .setMaxSizeConfig(new MaxSizeConfig()
                    .setMaxSizePolicy(MaxSizeConfig.MaxSizePolicy.USED_HEAP_SIZE)
                    .setSize(properties.getMaxMapHeapSize()))
            .setTimeToLiveSeconds(properties.getTtlBeforeSave())
            .addEntryListenerConfig(new EntryListenerConfig(expiredListener, false, true));
    final MapConfig warmingCacheMapConfig = getMapConfig(LOCK_MAP_NAME);
    return List.of(applicationCache, warmingCacheMapConfig);
}
UPDATE
The Hazelcast service runs in a Docker container.
UPDATE 2
I used setTtl on my IMap, but that ruined my server (the REST API call is not supported, although the lib version and Hazelcast version are the same).
I used put(K, V, TTL, UNITS), and... it does not help!
I put <time-to-live-seconds> in the server XML config file, restarted it, and Hazelcast started to send events!
But then I removed those lines from the XML config, reloaded the server, and it kept working... (I don't use map backups right now.)
So, tomorrow I will recheck everything. It seems like either the Hazelcast Docker container hangs or put(K, V, TTL, UNITS) works.
UPDATE 3 (Solution)
This exact combination works for me:
1. Set up the TTL in the config bean (see code above).
2. Use put(K, V, ttl, TimeUnit) instead of put(K, V) (a short sketch follows below this list).
3. Explicitly add the event listener to the Hazelcast map:
...
applications = hazelcastInstance.getMap(APPLICATIONS_CACHE_NAME);
applications.addEntryListener(new ExpiredListener(this.applicationEventPublisher), true);
...
Tested both with and without a Hazelcast server restart.
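For completeness, a minimal sketch of step 2, assuming the same map and properties bean as above (the helper method and key/value names are illustrative):
import java.util.concurrent.TimeUnit;

// Write the entry with an explicit TTL so the member schedules expiration
// and the registered ExpiredListener receives the expired/evicted event.
void putWithTtl(String key, Application value) {
    applications.put(key, value, properties.getTtlBeforeSave(), TimeUnit.SECONDS);
}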

Capturing cassandra cql metrics from spring boot application

I want to capture the DB query metrics from a Spring Boot Cassandra application and expose them on a Prometheus endpoint.
I already have an implementation for Spring Boot + Postgres, and it works with r2dbc-proxy. Since r2dbc does not provide support for Cassandra, I'm looking for any sample implementation.
Code after editing for the comment below:
String contactPoint = System.getProperty("contactPoint", "127.0.0.1");

// init default prometheus stuff
DefaultExports.initialize();

// setup Prometheus HTTP server
Optional<HTTPServer> prometheusServer = Optional.empty();
try {
    prometheusServer = Optional.of(new HTTPServer(Integer.getInteger("prometheusPort", 9095)));
} catch (IOException e) {
    System.out.println("Exception when creating HTTP server for Prometheus: " + e.getMessage());
}

Cluster cluster = Cluster.builder()
        .addContactPointsWithPorts(new InetSocketAddress(contactPoint, 9042))
        .withoutJMXReporting()
        .build();

try (Session session = cluster.connect()) {
    MetricRegistry myRegistry = new MetricRegistry();
    myRegistry.registerAll(cluster.getMetrics().getRegistry());
    CollectorRegistry.defaultRegistry.register(new DropwizardExports(myRegistry));
    session.execute("create keyspace if not exists test with replication = {'class': 'SimpleStrategy', 'replication_factor': 1};");
    session.execute("create table if not exists test.abc (id int, t1 text, t2 text, primary key (id, t1));");
    session.execute("truncate test.abc;");
} catch (IllegalStateException ex) {
    System.out.println("metric registry fails to configure!!!!!");
    throw ex;
}
}
}
If using Micrometer:
1. Add the dependency com.datastax.oss:java-driver-metrics-micrometer.
2. Create a CqlSessionBuilderCustomizer and register the MeterRegistry (io.micrometer.core.instrument.MeterRegistry) bean using the withMetricRegistry method (see the sketch after this list).
3. Create a DriverConfigLoaderBuilderCustomizer with the metrics you want enabled (https://stackoverflow.com/a/62940370/12584290).
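A minimal sketch of steps 2 and 3, assuming Spring Boot's Cassandra auto-configuration and driver 4.x (the enabled metric names below are illustrative):
import java.util.List;

import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.boot.autoconfigure.cassandra.DriverConfigLoaderBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraMetricsConfig {

    // Hand the Micrometer registry to the driver so it reports into it.
    @Bean
    public CqlSessionBuilderCustomizer metricRegistryCustomizer(MeterRegistry meterRegistry) {
        return builder -> builder.withMetricRegistry(meterRegistry);
    }

    // Enable the session/node metrics the driver should collect.
    @Bean
    public DriverConfigLoaderBuilderCustomizer metricsOptionsCustomizer() {
        return loader -> loader
                .withStringList(DefaultDriverOption.METRICS_SESSION_ENABLED,
                        List.of("cql-requests", "cql-client-timeouts"))
                .withStringList(DefaultDriverOption.METRICS_NODE_ENABLED,
                        List.of("cql-messages"));
    }
}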
The DataStax Java driver exposes metrics via the Dropwizard Metrics library (driver version 3.x, driver version 4.x), and these can be exposed as a Prometheus endpoint using standard Prometheus libraries such as io.prometheus:simpleclient_dropwizard, which is part of the Prometheus Java client library.
Here is an example for driver version 4.x, but with a small modification it could work with 3.x as well. The main part is the following:
MetricRegistry registry = session.getMetrics()
.orElseThrow(() -> new IllegalStateException("Metrics are disabled"))
.getRegistry();
CollectorRegistry.defaultRegistry.register(new DropwizardExports(registry));
The rest is just creating the session, exposing the metrics via HTTP, etc.
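For reference, a minimal standalone sketch of that remaining wiring for driver 4.x, assuming io.prometheus:simpleclient_dropwizard and io.prometheus:simpleclient_httpserver are on the classpath (contact point, datacenter name and port are placeholders):
import java.net.InetSocketAddress;
import java.util.List;

import com.codahale.metrics.MetricRegistry;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.dropwizard.DropwizardExports;
import io.prometheus.client.exporter.HTTPServer;

public class CassandraPrometheusExample {
    public static void main(String[] args) throws Exception {
        // Enable the driver's session-level request metrics programmatically.
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withStringList(DefaultDriverOption.METRICS_SESSION_ENABLED, List.of("cql-requests"))
                .build();

        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .withConfigLoader(loader)
                .build()) {

            // Bridge the driver's Dropwizard registry into Prometheus.
            MetricRegistry registry = session.getMetrics()
                    .orElseThrow(() -> new IllegalStateException("Metrics are disabled"))
                    .getRegistry();
            CollectorRegistry.defaultRegistry.register(new DropwizardExports(registry));

            // Expose the default registry on http://localhost:9095/metrics.
            HTTPServer prometheusServer = new HTTPServer(9095);
            session.execute("SELECT release_version FROM system.local");
            prometheusServer.stop();
        }
    }
}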

Propagating errors between Hazelcast Server and Hazelcast Client

I have the following scenario:
- a Hazelcast Server as a microservice which performs some computations when it receives a method call;
- a Hazelcast Client as another microservice which calls the Hazelcast Server through the specified method call.
When I throw an exception from the Hazelcast Server, I want to receive it on the Hazelcast Client side as it is (currently, I'm receiving something like this: java.util.concurrent.ExecutionException: com.hazelcast.client.UndefinedErrorCodeException: Class name: ro.orange.eshop.personalisationengineapi.application.exception.ValidationException).
I've dug a little into the APIs, and on the Hazelcast Client side I've found a way to register a new exception:
@Bean
fun addHazelcastKnownExceptions(hazelcastInstance: HazelcastInstance): Int {
    val hazelcastClientInstance = (hazelcastInstance as HazelcastClientProxy).client
    hazelcastClientInstance.clientExceptionFactory.register(400, ValidationException::class.java) { message, cause -> ValidationException(message, cause) }
    return 1
}
But it seems that this exception must be registered on the server side as well. And here comes the problem! On the server side, I've found a class called ClientExceptions which has a method public void register(int errorCode, Class clazz), but I can't find a way to obtain a ClientExceptions instance (I should mention that I'm using Hazelcast Spring).
Thank you!
Registering a custom exception factory is not supported as a public API as of 3.12.x.
Related issue to follow: https://github.com/hazelcast/hazelcast/issues/9753
As a workaround, I could suggest using the class name (UndefinedErrorCodeException.getOriginalClassName()) to recreate the exception classes on the client side.
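A minimal sketch of that workaround on the client side, assuming your ValidationException has a (String, Throwable) constructor (the helper class and method names are illustrative):
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import com.hazelcast.client.UndefinedErrorCodeException;

public final class ClientExceptionTranslator {

    // Unwrap UndefinedErrorCodeException and recreate the original exception
    // on the client side based on the class name it reports.
    public static <T> T getAndTranslate(Future<T> future) throws Exception {
        try {
            return future.get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof UndefinedErrorCodeException) {
                UndefinedErrorCodeException undefined = (UndefinedErrorCodeException) e.getCause();
                if (ValidationException.class.getName().equals(undefined.getOriginalClassName())) {
                    throw new ValidationException(undefined.getMessage(), undefined);
                }
            }
            throw e;
        }
    }
}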
== EDIT ==
The client API does not support it; what you found is the private API.
If you are OK with relying on the private API, here is the hack for registering classes on the Hazelcast server.
Note that I DO NOT recommend this solution, since it relies on a private API that can change:
HazelcastInstance instance = Hazelcast.newHazelcastInstance();
if (instance instanceof HazelcastInstanceProxy) {
    HazelcastInstanceImpl original = ((HazelcastInstanceProxy) instance).getOriginal();
    ClientExceptions clientExceptions = original.node.getClientEngine().getClientExceptions();
    clientExceptions.register(USER_EXCEPTIONS_RANGE_START + 1, UndefinedCustomFormatException.class);
}

Clustered WildFly 10 domain messaging

I have three machines located in different networks:
as-master
as-node-1
as-node-2
In as-master I have WildFly as domain host-master and the two nodes have WildFly as domain host-slave each starting an instance in the full-ha server group. From the as-master web console I can see the two nodes in the full-ha profile runtime and if I deploy a WAR it gets correctly started on both nodes.
Now, what I'm trying to achieve is messaging between the two instances of the WAR, i.e. when a producer instance in as-node-1 sends a message, consumers on all nodes should receive it.
This is what I tried. I added a topic to the WildFly domain.xml:
<jms-topic name="MyTopic" entries="java:/jms/my-topic"/>
Create a JAX-RS endpoint to trigger a producer bound to the topic:
#Path("jms")
#RequestScoped
public class MessageEndpoint {
#Inject
JMSContext context;
#Resource(mappedName = "java:/jms/my-topic")
Topic myTopic;
#GET
public void sendMessage() {
this.context.createProducer().send(this.myTopic, "Hello!");
}
}
Create an MDB listening to the topic:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(
        propertyName = "destination",
        propertyValue = "java:/jms/my-topic"
    ),
    @ActivationConfigProperty(
        propertyName = "destinationType",
        propertyValue = "javax.jms.Topic"
    )
})
public class MyMessageListener implements MessageListener {

    private final static Logger LOGGER = /* ... */

    public void onMessage(Message message) {
        try {
            String body = message.getBody(String.class);
            LOGGER.info("Received message: " + body);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
But when I curl as-node-1/jms I see the log only in as-node-1, and when I curl as-node-2/jms I see the log only in as-node-2.
Shouldn't the message be delivered on all the nodes where the WAR is deployed? What am I missing?
Since I ran into exactly the same question, I'm putting the answer here.
Yes, messages should be delivered to all nodes if the destination is a Topic.
With the default configuration, ActiveMQ Artemis uses broadcast to discover and connect to the ActiveMQ instances on other nodes (within the same discovery-group):
<discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>
<cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>
You still need to ensure that the JNDI name for a jms-topic starts with "jms" to match the address="jms" in the line above (which is OK in your case: "java:/jms/my-topic").
The only thing missing in your example to get it working on all nodes is: <cluster password="yourPassword" user="activemqUser"/>
(of course, the activemqUser user must be added beforehand, e.g. with the addUser.sh script).
This lets the ActiveMQ instances communicate with each other. A so-called Core Bridge connection is created between the nodes. As stated in the ActiveMQ manual:
..this is done transparently behind the scenes - you don't have to
declare an explicit bridge for each node
If everything is OK, the bridge can be found in server.log: AMQ221027: Bridge ClusterConnectionBridge#63549ead [name=sf.my-cluster ...] is connected.
Btw, if the destination is a Queue, ActiveMQ will not send a message to other nodes unless the message is not consumed locally.
P.S. As answered here, this is the classic approach to distribute an event to all nodes in a cluster.

Spring websocket Client to Client communication

I have a requirement where my WebSocket sessions should be able to communicate with each other. I am creating a request-response model where my client A sends a request on a queue which has multiple subscriber agents (Ag1 and Ag2). I would expect my requests to round-robin between these two subscribers. Unfortunately, the event is broadcast to both agents rather than being one-to-one communication.
My Spring config
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
    registry.addEndpoint("/websocket").withSockJS();
}

@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.setApplicationDestinationPrefixes("/app");
    config.enableSimpleBroker("/queue", "/topic");
}
Client JS Code
requestResponse = new RequestResponse({
outgoingChannel : "/queue/clients",
incomingChannel : "/topic/broadcast/clients",
callbackFn : widget3eventHandler
},session);
Agent Subscriber Code
requestResponse = new RequestResponse({
outgoingChannel : "/topic/broadcast/clients",
incomingChannel : "/queue/clients",
callbackFn : widget3eventHandler,
processAll : true
},session);
Is this a bug in the simple (SIMP) broker, or am I doing something wrong?
You can check this sample chat application if you want to know how to achieve client-to-client communication.
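One common way to get one-to-one delivery with the simple broker is to send to a user-specific destination rather than a shared topic or queue. A minimal sketch of that idea with SimpMessagingTemplate; the destination names and the agent-selection logic are illustrative and assume authenticated STOMP sessions:
import java.security.Principal;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class PrivateMessageController {

    @Autowired
    private SimpMessagingTemplate messagingTemplate;

    // A client sends to /app/request; the message is delivered only to the chosen
    // agent, who subscribes to /user/queue/requests on its own session.
    @MessageMapping("/request")
    public void forwardToAgent(@Payload String request, Principal sender) {
        String targetAgent = pickAgent(); // placeholder for real round-robin selection
        messagingTemplate.convertAndSendToUser(targetAgent, "/queue/requests",
                sender.getName() + ": " + request);
    }

    private String pickAgent() {
        return "agent-1"; // hypothetical agent id
    }
}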
