I am working with a Spring Boot and Axon example. I implemented the snapshot feature, and with the code below it works fine: after 3 events I find the data in the snapshot_event_entry table in the database.
@Configuration
@AutoConfigureAfter(value = { AxonAutoConfiguration.class })
public class AxonConfig {

    @Bean
    public SnapshotTriggerDefinition catalogSnapshotTrigger(Snapshotter snapshotter) {
        return new EventCountSnapshotTriggerDefinition(snapshotter, 3);
    }
}
@Aggregate(snapshotTriggerDefinition = "catalogSnapshotTrigger")
public class CatalogAggregate { }
My question: is there a way to take a snapshot on demand? That is, I want to implement an API that triggers the snapshot, rather than having it happen automatically after 3 events.
There is nothing already in place for this.
One way to implement what you need is to create a dedicated command, e.g. PerformSnapshotCmd, that carries the aggregateId information, together with a @CommandHandler in your aggregate. You can then let Spring autowire the Snapshotter bean and call its scheduleSnapshot(Class<?> aggregateType, String aggregateIdentifier) method.
Below is a code snippet that could guide you.
data class PerformSnapshotCmd(@TargetAggregateIdentifier val id: String)

@CommandHandler
public void handle(PerformSnapshotCmd cmd, Snapshotter snapshotter) {
    logger.debug("handling {}", cmd);
    snapshotter.scheduleSnapshot(this.getClass(), cmd.getId());
}
You should also define a bean of type Snapshotter in your config:
@Bean
public SpringAggregateSnapshotterFactoryBean snapshotter() {
    SpringAggregateSnapshotterFactoryBean springAggregateSnapshotterFactoryBean = new SpringAggregateSnapshotterFactoryBean();
    // Setting async executors
    springAggregateSnapshotterFactoryBean.setExecutor(Executors.newSingleThreadExecutor());
    return springAggregateSnapshotterFactoryBean;
}
Please note that the first argument of your command handler needs to be the command; otherwise the framework will complain with an exception at startup time.
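To expose this as an API, here is a minimal sketch of a REST endpoint that dispatches the command through Axon's CommandGateway (the controller class and the endpoint path are illustrative, not part of the original code):

import java.util.concurrent.CompletableFuture;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SnapshotController {

    private final CommandGateway commandGateway;

    public SnapshotController(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // POST /aggregates/{id}/snapshot triggers a snapshot for the given aggregate.
    @PostMapping("/aggregates/{id}/snapshot")
    public CompletableFuture<Void> snapshot(@PathVariable String id) {
        return commandGateway.send(new PerformSnapshotCmd(id));
    }
}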
Can anyone suggest a tutorial/sample project for implementing snapshots in Axon 4.4.2 with Spring Boot 2.3.3?
I went through the documentation (https://docs.axoniq.io/reference-guide/axon-framework/tuning/event-snapshots#snapshotting) and did the below:
The AxonConfig class:
@Configuration
public class AxonConfig {

    @Bean
    public SnapshotTriggerDefinition app1SnapshotTrigger(Snapshotter snapshotter) {
        return new EventCountSnapshotTriggerDefinition(snapshotter, 10);
    }
}
The aggregate:
@Aggregate(snapshotTriggerDefinition = "app1SnapshotTrigger")
public class MyAggregate {

    @AggregateIdentifier
    private String id;
    private String name;

    @AggregateMember
    private List<Address> addresses = new ArrayList<>();

    private MyAggregate() {
    }

    @CommandHandler
    private MyAggregate(CreateNameCommand createNameCommand) {
        // ...
    }

    @EventSourcingHandler
    private void on(NameCreatedEvent nameCreatedEvent) {
        // ...
    }
}
Am I missing something? Will it create a snapshot at the threshold value of 10?
Thanks.
Unfortunately, we have no sample demo ready to show for this case.
From your code snippet it looks like everything is in place. Maybe some other configuration is taking precedence over your annotation.
To give it a try, I applied your configuration to our https://github.com/AxonIQ/giftcard-demo/.
The first note that can guide you is the following:
if you declared a Repository bean as we did in https://github.com/AxonIQ/giftcard-demo/blob/master/src/main/java/io/axoniq/demo/giftcard/command/GcCommandConfiguration.java#L17, this configuration will take precedence over the annotation placed on your aggregate. If you prefer the annotation, you can remove this bean definition.
Here, instead, is the piece of code to have this configured as a bean:
@Bean
public Repository<GiftCard> giftCardRepository(EventStore eventStore, SnapshotTriggerDefinition giftCardSnapshotTrigger) {
    return EventSourcingRepository.builder(GiftCard.class)
            .snapshotTriggerDefinition(giftCardSnapshotTrigger)
            .eventStore(eventStore)
            .build();
}

@Bean
public SpringAggregateSnapshotterFactoryBean snapshotter() {
    var springAggregateSnapshotterFactoryBean = new SpringAggregateSnapshotterFactoryBean();
    // Setting async executors
    springAggregateSnapshotterFactoryBean.setExecutor(Executors.newSingleThreadExecutor());
    return springAggregateSnapshotterFactoryBean;
}

@Bean("giftCardSnapshotTrigger")
public SnapshotTriggerDefinition giftCardSnapshotTriggerDefinition(Snapshotter snapshotter) {
    return new EventCountSnapshotTriggerDefinition(snapshotter, 10);
}
You can check that your snapshotting is working by looking at the client log: after 10 events on the same aggregateId, you should find this info log entry:
o.a.a.c.event.axon.AxonServerEventStore : Snapshot created
To check, you can use the REST API to retrieve the events for an aggregate:
curl -X GET "http://localhost:8024/v1/events?aggregateId=A01"
This will produce a stream containing the events starting from the latest snapshot: you will see up to nine events listed until the tenth event is processed. After that, the endpoint will list events starting from the snapshot.
You can also check the /actuator/health endpoint: it will show the last snapshot token if showDetails is enabled (enabled by default in EE, not enabled by default in SE).
Corrado.
I am trying to cache Kafka records for a 3-minute interval, after which they should expire and be removed from the cache.
Each incoming record, fetched by a Kafka consumer written in Spring Boot, needs to be put in the cache first; then, if the next record matches one already present in the cache, I need to discard it as a duplicate.
I have tried using a Caffeine cache as below:
@EnableCaching
public class AppCacheManagerConfig {

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        CaffeineCache bookCache = buildCache("declineRecords", ticker, 3);
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Collections.singletonList(bookCache));
        return cacheManager;
    }

    private CaffeineCache buildCache(String name, Ticker ticker, int minutesToExpire) {
        return new CaffeineCache(name, Caffeine.newBuilder()
                .expireAfterWrite(minutesToExpire, TimeUnit.MINUTES)
                .maximumSize(100)
                .ticker(ticker)
                .build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
and my Kafka consumer is as below:
@Autowired
CachingServiceImpl cachingService;

@KafkaListener(topics = "#{'${spring.kafka.consumer.topic}'}", concurrency = "#{'${spring.kafka.consumer.concurrentConsumers}'}", errorHandler = "#{'${spring.kafka.consumer.errorHandler}'}")
public void consume(Message<?> message, Acknowledgment acknowledgment,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long createTime) {
    logger.info("Received Message: " + message.getPayload());
    try {
        boolean approveTopic = false;
        boolean duplicateRecord = false;
        if (cachingService.isDuplicateCheck(declineRecord)) {
            // do something with records
        } else {
            // do something with records
        }
        cachingService.putInCache(xmlJSONObj, declineRecord, time);
and my caching service is as below:
@Component
public class CachingServiceImpl {

    private static final Logger logger = LoggerFactory.getLogger(CachingServiceImpl.class);

    @Autowired
    CacheManager cacheManager;

    @Cacheable(value = "declineRecords", key = "#declineRecord", sync = true)
    public String putInCache(JSONObject xmlJSONObj, String declineRecord, String time) {
        logger.info("Record is cached for the 3-minute interval check: {}", declineRecord);
        cacheManager.getCache("declineRecords").put(declineRecord, time);
        return declineRecord;
    }

    public boolean isDuplicateCheck(String declineRecord) {
        return null != cacheManager.getCache("declineRecords").get(declineRecord);
    }
}
But each time a record arrives at the consumer, my cache is empty. It is not holding the records.
Modifications Done:
After going through the suggestions and some more R&D, I added the configuration file below and removed some of the earlier logic. Now the caching is working as expected, but the duplicate check fails when all three consumers receive the same records.
@Configuration
public class AppCacheManagerConfig {

    public static Cache<String, Object> jsonCache =
            Caffeine.newBuilder().expireAfterWrite(3, TimeUnit.MINUTES)
                    .maximumSize(10000).recordStats().build();

    @Bean
    public CacheLoader<Object, Object> cacheLoader() {
        CacheLoader<Object, Object> cacheLoader = new CacheLoader<Object, Object>() {
            @Override
            public Object load(Object key) throws Exception {
                return null;
            }

            @Override
            public Object reload(Object key, Object oldValue) throws Exception {
                return oldValue;
            }
        };
        return cacheLoader;
    }
}
Now I am using the above cache with manual put and get.
I guess you're trying to implement record deduplication for Kafka.
Here is a similar discussion:
https://github.com/spring-projects/spring-kafka/issues/80
Here is the current abstract class which you may extend to achieve the necessary result:
https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/adapter/AbstractFilteringMessageListener.java
Your caching service is definitely incorrect: the @Cacheable annotation is meant to mark data getters and setters so that caching is added through AOP, while in your code you clearly implement low-level cache-updating logic of your own.
At least the following changes may help you (a minimal sketch follows this list):
Remove @Cacheable. You don't need it because you work with the cache manually, so it may be a source of conflicts (especially since you use sync = true). If it helps, remove @EnableCaching as well; it enables support for cache-related Spring annotations, which you don't need here.
Try removing the Ticker bean (adjusting the parameters of the other beans accordingly). It should not be harmful in your configuration, but usually it is only helpful for tests; there is no need to define it otherwise.
Double-check what declineRecord is. If it's a serialized object, ensure that serialization works properly.
Add recordStats() to the cache and output stats() to the log for further analysis.
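As a starting point, here is a minimal sketch of manual deduplication with a plain Caffeine cache and no Spring caching annotations (the DeduplicationService class and isDuplicate method are illustrative names, not from your code):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class DeduplicationService {

    // Entries expire 3 minutes after they are written, matching the stated requirement.
    private final Cache<String, Boolean> seenRecords = Caffeine.newBuilder()
            .expireAfterWrite(3, TimeUnit.MINUTES)
            .maximumSize(10_000)
            .recordStats()
            .build();

    // Returns true if the record was already seen within the last 3 minutes.
    // putIfAbsent on the cache's map view is atomic, so two concurrent consumers
    // cannot both treat the same key as "new" - which matters when all three
    // consumers receive the same record.
    public boolean isDuplicate(String recordKey) {
        return seenRecords.asMap().putIfAbsent(recordKey, Boolean.TRUE) != null;
    }
}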
I use Spring Boot 2.1.2.RELEASE, and I am trying to use Micrometer with a CompositeMeterRegistry. My goal is to publish some selected meters to Elasticsearch. The code below shows my sample config. The problem is that the filter is completely ignored (so all metrics are sent to Elasticsearch), although I can see in the logs that it was processed ("filter reply of meter ..." lines).
Strangely, if I define the MeterFilter as a Spring bean, it is applied to ALL registries (however, I want it applied only to "elasticMeterRegistry").
Here is a sample configuration class:
@Configuration
public class AppConfiguration {

    @Bean
    public ElasticConfig elasticConfig() {
        return new ElasticConfig() {
            @Override
            @Nullable
            public String get(final String k) {
                return null;
            }
        };
    }

    @Bean
    public MeterRegistry meterRegistry(final ElasticConfig elasticConfig) {
        final CompositeMeterRegistry registry = new CompositeMeterRegistry();
        registry.add(new SimpleMeterRegistry());
        registry.add(new JmxMeterRegistry(new JmxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            @Nullable
            public String get(String k) {
                return null;
            }
        }, Clock.SYSTEM));
        final ElasticMeterRegistry elasticMeterRegistry = new ElasticMeterRegistry(elasticConfig, Clock.SYSTEM);
        elasticMeterRegistry.config().meterFilter(new MeterFilter() {
            @Override
            public MeterFilterReply accept(Meter.Id id) {
                final MeterFilterReply reply =
                        id.getName().startsWith("logback")
                                ? MeterFilterReply.NEUTRAL
                                : MeterFilterReply.DENY;
                log.info("filter reply of meter {}: {}", id.getName(), reply);
                return reply;
            }
        });
        registry.add(elasticMeterRegistry);
        return registry;
    }
}
So, I expect Elasticsearch to receive only "logback" metrics, and JMX to receive all metrics.
UPDATE:
I have played with filters and found a "solution", but I don't really understand why the code above doesn't work.
This works:
elasticMeterRegistry.config().meterFilter(new MeterFilter() {
    @Override
    public MeterFilterReply accept(Meter.Id id) {
        final MeterFilterReply reply =
                id.getName().startsWith("logback")
                        ? MeterFilterReply.ACCEPT
                        : MeterFilterReply.DENY;
        log.info("filter reply of meter {}: {}", id.getName(), reply);
        return reply;
    }
});
The difference is: I return ACCEPT instead of NEUTRAL.
Strangely, the following code does not work (ES gets all metrics):
elasticMeterRegistry.config().meterFilter(
MeterFilter.accept(id -> id.getName().startsWith("logback")));
But this works:
elasticMeterRegistry.config().meterFilter(
MeterFilter.accept(id -> id.getName().startsWith("logback")));
elasticMeterRegistry.config().meterFilter(
MeterFilter.deny());
CONCLUSION:
So, it seems that instead of NEUTRAL, the filter should return ACCEPT. But for meters not starting with "logback", my original filter (with NEUTRAL) returns DENY. Then why are those metrics published to the Elasticsearch registry?
Can someone explain this?
This is really a composite of questions. I'll just point out a few things.
For the MeterRegistry bean you defined, Spring Boot will still auto-configure an ElasticMeterRegistry bean, because there is no ElasticMeterRegistry bean in the context (yours is created manually inside the factory method, not registered as a bean). Instead of creating a CompositeMeterRegistry bean on your own, define a custom ElasticMeterRegistry bean with the MeterFilter you want applied to it, and let Spring Boot create the CompositeMeterRegistry bean for you.
As for MeterFilterReply: ACCEPT accepts the meter immediately, DENY denies the meter immediately, and NEUTRAL postpones the decision to the next filter(s). Basically, meters will be accepted unless some filter returns DENY.
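A minimal sketch of that suggestion, assuming the same ElasticConfig bean as in the question (the configuration class name is illustrative); MeterFilter.denyUnless collapses the ACCEPT/DENY pair from the update into a single filter:

import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.elastic.ElasticConfig;
import io.micrometer.elastic.ElasticMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfiguration {

    // Exposing the ElasticMeterRegistry itself as a bean prevents Spring Boot from
    // auto-configuring a second, unfiltered one; Boot then adds every registry
    // bean to a CompositeMeterRegistry automatically.
    @Bean
    public ElasticMeterRegistry elasticMeterRegistry(ElasticConfig elasticConfig) {
        ElasticMeterRegistry registry = new ElasticMeterRegistry(elasticConfig, Clock.SYSTEM);
        // Only meters whose name starts with "logback" reach Elasticsearch.
        registry.config().meterFilter(
                MeterFilter.denyUnless(id -> id.getName().startsWith("logback")));
        return registry;
    }
}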
I am trying to set up a service that has both a REST (POST) endpoint and a Kafka endpoint, both of which should take a JSON representation of the request object (let's call it Foo). I would want to make sure that the Foo object is valid (via JSR-303 or whatever). So Foo might look like:
public class Foo {

    @Max(10)
    private int bar;

    // Getter and setter boilerplate
}
Setting up the REST endpoint is easy:
@PostMapping(value = "/", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<String> restEndpoint(@Valid @RequestBody Foo foo) {
    // Do stuff here
}
and if I POST { "bar": 9 } it processes the request, but if I POST { "bar": 99 } I get a BAD REQUEST. All good so far!
The Kafka endpoint is also easy to create (along with adding a StringJsonMessageConverter() to my KafkaListenerContainerFactory so that I get JSON-to-object conversion):
@KafkaListener(topics = "fooTopic")
public void kafkaEndpoint(@Valid @Payload Foo foo) {
    // I shouldn't get here with an invalid object!!!
    logger.debug("Successfully processed the object" + foo);
    // But just to make sure, let's see if hand-validating it works
    Validator validator = localValidatorFactoryBean.getValidator();
    Set<ConstraintViolation<Foo>> errors = validator.validate(foo);
    if (errors.size() > 0) {
        logger.debug("But there were validation errors!" + errors);
    }
}
But no matter what I try, I can still pass invalid requests in and they are processed without error.
I've tried both @Valid and @Validated. I've tried adding a MethodValidationPostProcessor bean. I've tried adding a Validator to the KafkaListenerEndpointRegistrar (a la the EnableKafka javadoc):
@Configuration
public class MiscellaneousConfiguration implements KafkaListenerConfigurer {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    LocalValidatorFactoryBean validatorFactory;

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        logger.debug("Configuring " + registrar);
        registrar.setMessageHandlerMethodFactory(kafkaHandlerMethodFactory());
    }

    @Bean
    public MessageHandlerMethodFactory kafkaHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setValidator(validatorFactory);
        return factory;
    }
}
I've now spent a few days on this, and I'm running out of ideas. Is this even possible (without writing validation into every one of my Kafka endpoints)?
Sorry for the delay; we are at SpringOne Platform this week.
The infrastructure currently does not pass a Validator into the payload argument resolver. Please open an issue on GitHub.
By default, Spring Kafka listeners do not honor @Valid on non-REST-controller classes. For more details, please refer to this answer:
https://stackoverflow.com/a/71859991/13898185
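For reference, a minimal sketch of the approach from the linked answer, assuming a spring-kafka version (2.2+) where KafkaListenerEndpointRegistrar exposes setValidator (the configuration class name is illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListenerConfigurer;
import org.springframework.kafka.config.KafkaListenerEndpointRegistrar;
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;

@Configuration
public class KafkaValidationConfig implements KafkaListenerConfigurer {

    @Autowired
    private LocalValidatorFactoryBean validator;

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        // Register the JSR-303 validator so that @Valid @Payload parameters on
        // @KafkaListener methods are validated before the handler is invoked.
        registrar.setValidator(this.validator);
    }
}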
Using Spring Boot 1.5.x, Spring Cloud, and JAX-RS:
I could use a second pair of eyes, since it is not clear to me whether the Spring-configured Javanica HystrixCommand works for all use cases or whether I have an error in my code. Below is an approximation of what I'm doing; the code will not actually compile.
In the code below, WebService lives in a library with a separate package path from the main application(s), while MyWebService lives in the application that is in the same context path as the Spring Boot application. MyWebService itself is functional, no issues there; this just has to do with the visibility of the HystrixCommand annotation with regard to the Spring Boot-based configuration.
At runtime, what I notice is that when code like the one below runs, I do see "commandKey=A" in my response. This one I did not quite expect, since it is still running while the data is obtained. And since we log the HystrixRequestLog, I also see this command key in my logs.
But none of the other command keys are visible at all, regardless of where I place them in the file. If I remove commandKey "A", then no commands are visible whatsoever.
Thoughts?
// Example WebService that we use as a shared component for performing a backend call that is the same across different resources
@RequiredArgsConstructor
@Accessors(fluent = true)
@Setter
public abstract class WebService {

    private final @Nonnull Supplier<X> backendFactory;

    @Setter(AccessLevel.PACKAGE)
    private @Nonnull Supplier<BackendComponent> backendComponentSupplier = () -> new BackendComponent();

    @GET
    @Produces("application/json")
    @HystrixCommand(commandKey = "A")
    public Response mainCall() {
        Object obj = new Object();
        try {
            otherCommandMethod();
        } catch (Exception commandException) {
            // do nothing (for this example)
        }
        // get the hystrix request information so that we can determine what was executed
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = hystrixExecutedCommands();
        // set the hystrix data, viewable in the response
        obj.setData("hystrix", executedCommands.orElse(Collections.emptyList()));
        if (hasError(obj)) {
            return Response.serverError()
                    .entity(obj)
                    .build();
        }
        return Response.ok()
                .entity(obj)
                .build();
    }

    @HystrixCommand(commandKey = "B")
    private void otherCommandMethod() {
        backendComponentSupplier
                .get()
                .observe()
                .toBlocking()
                .subscribe();
    }

    Optional<Collection<HystrixInvokableInfo<?>>> hystrixExecutedCommands() {
        Optional<HystrixRequestLog> hystrixRequest = Optional
                .ofNullable(HystrixRequestLog.getCurrentRequest());
        // get the hystrix executed commands
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = Optional.empty();
        if (hystrixRequest.isPresent()) {
            executedCommands = Optional.of(hystrixRequest.get()
                    .getAllExecutedCommands());
        }
        return executedCommands;
    }

    @Setter
    @RequiredArgsConstructor
    public class BackendComponent implements ObservableCommand<Void> {

        @Override
        @HystrixCommand(commandKey = "Y")
        public Observable<Void> observe() {
            // make some backend call
            return backendFactory.get()
                    .observe();
        }
    }
}
// Then later this component gets configured in the specific applications, with sample configuration that looks like this:
@SuppressWarnings({ "unchecked", "rawtypes" })
@Path("resource/somepath")
@Component
public class MyWebService extends WebService {

    @Inject
    public MyWebService(Supplier<X> backendSupplier) {
        super((Supplier) backendSupplier);
    }
}
There is an issue with mainCall() calling otherCommandMethod(): methods annotated with @HystrixCommand cannot be called from within the same class.
As discussed in the answers to this question, this is a limitation of Spring's AOP.
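A minimal sketch of the usual workaround: move the command method into a separate Spring bean, so the call crosses the proxy boundary and the Javanica aspect can intercept it (the class and method names below are illustrative):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Component;

@Component
public class OtherCommandService {

    // Invoked through the Spring proxy, so the Javanica aspect wraps it in a Hystrix command.
    @HystrixCommand(commandKey = "B")
    public void otherCommandMethod() {
        // backend call here
    }
}

// In a separate file:
@Component
public class CallerService {

    private final OtherCommandService otherCommandService;

    public CallerService(OtherCommandService otherCommandService) {
        this.otherCommandService = otherCommandService;
    }

    public void mainCall() {
        // Unlike this.otherCommandMethod(), a cross-bean call goes through the proxy,
        // so commandKey "B" shows up in the HystrixRequestLog.
        otherCommandService.otherCommandMethod();
    }
}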