Print output to console when using Kafka Streams Processor API - apache-kafka-streams

When using the Streams DSL, I can call .print(Printed.toSysOut()) to see the output in the console.
Is there something similar when using the Processor API? I would expect something like a PrintToConsoleProcessor.
Of course I can create a dummy processor myself, but a ready-made PrintToConsoleProcessor would be very useful.

OK, this is fairly easy:
topology.addProcessor("console", () -> new Processor<Object, Object>() {
    @Override
    public void init(ProcessorContext context) {
    }

    @Override
    public void process(Object key, Object value) {
        System.out.println(value.toString());
    }

    @Override
    public void punctuate(long timestamp) {
    }

    @Override
    public void close() {
    }
}, "PARENT");

Related

Give Priority to SFTP Remote Directories

Using a single SFTP channel I need to process two remote directories, lowpriority and highpriority, but the lowpriority files must be picked up only after the highpriority ones.
Please let me know how to handle multiple directories in an SFTP inbound adapter with a single channel.
We can do this with the Rotating Server advice (https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-rotating-server-advice) in Spring Integration 5.1.2, but what about 4.3.12?
It is not available in 4.3.x; the feature was added in 5.0.7.
It needs infrastructure changes, so it would be hard to replicate with custom code in 4.3.x.
You could use two adapters and stop/start them as necessary.
EDIT
Here is one solution; the advice on the primary flow starts the secondary flow when no new files are found. The secondary flow runs just once, then restarts the primary flow, and the cycle continues.
@SpringBootApplication
public class So54329898Application {

    public static void main(String[] args) {
        SpringApplication.run(So54329898Application.class, args);
    }

    @Bean
    public IntegrationFlow primary(SessionFactory<LsEntry> sessionFactory) {
        return IntegrationFlows.from(Sftp.inboundAdapter(sessionFactory)
                    .localDirectory(new File("/tmp/foo"))
                    .remoteDirectory("foo/foo"), e -> e
                .poller(Pollers.fixedDelay(5_000, 5_000)
                        .advice(startSecondaryAdvice())))
                .channel("channel")
                .get();
    }

    @Bean
    public IntegrationFlow secondary(SessionFactory<LsEntry> sessionFactory) {
        return IntegrationFlows.from(Sftp.inboundAdapter(sessionFactory)
                    .localDirectory(new File("/tmp/foo"))
                    .remoteDirectory("foo/bar"), e -> e
                .poller(Pollers.trigger(oneShotTrigger(sessionFactory)))
                .autoStartup(false))
                .channel("channel")
                .get();
    }

    @Bean
    public IntegrationFlow main() {
        return IntegrationFlows.from("channel")
                .handle(System.out::println)
                .get();
    }

    @Bean
    public Advice startSecondaryAdvice() {
        return new StartSecondaryWhenPrimaryIdle();
    }

    @Bean
    public FireOnceTrigger oneShotTrigger(SessionFactory<LsEntry> sessionFactory) {
        return new FireOnceTrigger((Lifecycle) primary(sessionFactory));
    }

    public static class StartSecondaryWhenPrimaryIdle extends AbstractMessageSourceAdvice
            implements ApplicationContextAware {

        private ApplicationContext applicationContext;

        @Override
        public boolean beforeReceive(MessageSource<?> source) {
            return true;
        }

        @Override
        public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
            this.applicationContext = applicationContext;
        }

        @Override
        public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
            if (result == null) {
                // Primary found nothing: stop it and fire the secondary exactly once.
                System.out.println("No more files on primary; starting single shot on secondary");
                this.applicationContext.getBean("primary", Lifecycle.class).stop();
                this.applicationContext.getBean("secondary", Lifecycle.class).stop();
                this.applicationContext.getBean(FireOnceTrigger.class).reset();
                this.applicationContext.getBean("secondary", Lifecycle.class).start();
            }
            return result;
        }

    }

    public static class FireOnceTrigger implements Trigger {

        private final Lifecycle primary;

        private volatile boolean done;

        public FireOnceTrigger(Lifecycle primary) {
            this.primary = primary;
        }

        @Override
        public Date nextExecutionTime(TriggerContext triggerContext) {
            if (done) {
                // Secondary has polled once: hand control back to the primary flow.
                System.out.println("One shot on secondary complete; restarting primary");
                this.primary.start();
                return null; // no further executions until reset()
            }
            done = true;
            return new Date(); // fire immediately, exactly once
        }

        public void reset() {
            done = false;
        }

    }

}
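
The sample assumes a SessionFactory<LsEntry> bean defined elsewhere; for completeness, a minimal sketch of one (host, user, and password are placeholders):

@Bean
public SessionFactory<LsEntry> sessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory();
    factory.setHost("localhost"); // placeholder
    factory.setPort(22);
    factory.setUser("user"); // placeholder
    factory.setPassword("secret"); // placeholder
    factory.setAllowUnknownKeys(true); // convenient for testing; not for production
    return new CachingSessionFactory<>(factory);
}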

Axon Register Tracking Processor with distributed query model

I have implemented a CQRS+ES application using Axon and Spring Boot, with separate query-model and command-model applications. I use RabbitMQ to publish events from the command model, and that part works correctly. But my tracking processor implementation does not work.
This is my query model:
@SpringBootApplication
public class SeatQueryPart1Application {

    public static void main(String[] args) {
        SpringApplication.run(SeatQueryPart1Application.class, args);
    }

    @Bean
    public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
        return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {

            @RabbitListener(exclusive = false, bindings = @QueueBinding(
                    value = @Queue,
                    exchange = @Exchange(value = "ExchangeTypesTests.FanoutExchange", type = ExchangeTypes.FANOUT),
                    key = "orderRoutingKey"))
            @Override
            public void onMessage(Message arg0, Channel arg1) throws Exception {
                super.onMessage(arg0, arg1);
            }
        };
    }

    @Autowired
    public void conf(EventHandlingConfiguration configuration) {
        configuration.registerTrackingProcessor("statistics");
    }
}
This is the event handler class:
@ProcessingGroup("statistics")
@Component
public class EventLoggingHandler {

    private SeatReservationRepository seatReservationRepo;

    public EventLoggingHandler(final SeatReservationRepository e) {
        this.seatReservationRepo = e;
    }

    @EventHandler
    protected void on(SeatResurvationCreateEvent event) {
        Timestamp timestamp = new Timestamp(System.currentTimeMillis());
        Seat seat = new Seat(event.getId(), event.getSeatId(), event.getDate(), timestamp, true);
        seatReservationRepo.save(seat);
    }
}
This is the YAML configuration:
axon:
  eventhandling:
    processors:
      statistics.source: statisticsQueue
How can I do this correctly? (Can anyone suggest a tutorial or code sample?)
The SpringAMQPMessageSource is a SubscribableMessageSource. This means you cannot use a tracking event processor to process its messages; it is only compatible with a subscribing event processor.
Removing configuration.registerTrackingProcessor("statistics"); and leaving it to the default (subscribing) should do the trick.
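So, assuming Axon 3.x's EventHandlingConfiguration API, either delete the conf method entirely or make the intent explicit; a sketch:

@Autowired
public void conf(EventHandlingConfiguration configuration) {
    // A subscribing processor is compatible with the SpringAMQPMessageSource
    // assigned via the axon.eventhandling.processors.statistics.source property.
    configuration.registerSubscribingEventProcessor("statistics");
}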

Session management in a Neo4J Flink sink

I am developing a data analytics app with Apache Flink and Neo4J (Community Edition).
In this application, the Flink sink must save/update relations in Neo4J.
What is the best way to manage Neo4j sessions here, and why?
First implementation:
public class MySink extends RichSinkFunction<Link> {

    private DbConfiguration dbconfig;

    private Driver driver;

    @Override
    public void open(Configuration parameters) throws Exception {
        this.driver = Neo4JManager.open(this.dbconfig);
    }

    @Override
    public void close() throws Exception {
        this.driver.close();
    }

    @Override
    public void invoke(Link link) throws Exception {
        Session session = this.driver.session();
        Neo4JManager.saveLink(session, link);
        session.close();
    }
}
Second implementation:
public class MySink extends RichSinkFunction<Link> {

    private DbConfiguration dbconfig;

    private Driver driver;

    private Session session;

    @Override
    public void open(Configuration parameters) throws Exception {
        this.driver = Neo4JManager.open(this.dbconfig);
        this.session = driver.session();
    }

    @Override
    public void close() throws Exception {
        this.session.close();
        this.driver.close();
    }

    @Override
    public void invoke(Link link) throws Exception {
        Neo4JManager.saveLink(this.session, link);
    }
}
In both implementations, the following functions have been used:
public class Neo4JManager {

    public static Driver open(DbConfiguration dbconf) {
        AuthToken auth = AuthTokens.basic(dbconf.getUsername(), dbconf.getPassword());
        Config config = Config.build().withEncryptionLevel(Config.EncryptionLevel.NONE).toConfig();
        return GraphDatabase.driver(dbconf.getHostname(), auth, config);
    }

    public static void saveLink(Session session, Link link) {
        Value params = parameters("x", link.x, "y", link.y);
        session.run("CREATE (:Person {id: {x}})-[:FOLLOWS]->(:Person {id: {y}})", params);
    }
}
Thank you.
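A note on the first variant: in the 1.x Java driver, Session is AutoCloseable, so the per-record session can be scoped with try-with-resources, which also closes the session when saveLink throws; a sketch:

@Override
public void invoke(Link link) throws Exception {
    // The driver pools connections, so opening a short-lived session
    // per record is cheap and keeps the sink safe for concurrent use.
    try (Session session = this.driver.session()) {
        Neo4JManager.saveLink(session, link);
    }
}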

JMeter Plugin - How to Listen to TestState

I am working on developing a JMeter plugin. I'm trying to create an AbstractVisualizer that is capable of monitoring the current test state. However, implementing TestStateListener doesn't seem to work.
I'm testing this with a basic listener that logs arbitrary info to JMeter's logging console. When a sample is sent through the add function, a line is sent to the console, but nothing is ever triggered on the various TestState functions. Is there something more structural I'm missing?
public class TestListener extends AbstractVisualizer implements TestStateListener {

    private static final Logger log = LoggingManager.getLoggerForClass();

    @Override
    public void add(SampleResult arg0) {
        log.info("add");
    }

    @Override
    public void clearData() {
        // TODO Auto-generated method stub
    }

    @Override
    public String getStaticLabel() {
        return "Test Listener";
    }

    @Override
    public String getLabelResource() {
        return null;
    }

    @Override
    public void testEnded() {
        log.info("Test Ended");
    }

    @Override
    public void testEnded(String arg0) {
        log.info("Test Ended");
    }

    @Override
    public void testStarted() {
        log.info("Test started");
    }

    @Override
    public void testStarted(String arg0) {
        log.info("Test started");
    }
}
I'm not sure how to do it in one class. I have two classes:
The UI:
public class MonitorGui extends AbstractListenerGui
{
    // ...

    @Override
    public TestElement createTestElement()
    {
        TestElement element = new Monitor(); // <-- this is the backend
        modifyTestElement(element);
        return element;
    }

    // ...
}
And then the backend goes like this:
public class Monitor extends AbstractListenerElement
        implements SampleListener,
                   Clearable, Serializable,
                   TestStateListener, Remoteable,
                   NoThreadClone
{
    private static final String TEST_IS_LOCAL = "*local*";

    // ...

    @Override
    public void testStarted()
    {
        testStarted(TEST_IS_LOCAL);
    }

    @Override
    public void testEnded()
    {
        testEnded(TEST_IS_LOCAL);
    }

    @Override
    public void testStarted(String host)
    {
        // ...
    }

    // ...
}
You may not need to implement SampleListener like I do, but other things will probably be quite similar.
I based that implementation on the built-in pair of ResultSaverGui and ResultCollector, which are the components that save results into file(s) for the Simple Data Writer, Summary Report, and so on.
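If it helps, the glue between the two classes lives in modifyTestElement on the GUI side; a minimal sketch (configureTestElement comes from AbstractJMeterGuiComponent):

@Override
public void modifyTestElement(TestElement element)
{
    // Copies name, enabled flag and the GUI/test class properties
    // from this GUI component onto the backend element.
    super.configureTestElement(element);
    // ...then copy any custom fields from Swing components into element properties
}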

Spring Batch: how to regroup/aggregate user data into a single object

I am trying to transform user operations (like purchases) into a user summary class (expenses by user). A user can have multiple operations but only one summary. I cannot sum purchases in the reader because I need a processor to reject some operations depending on another service.
So, some code:
class UserOperation {
    String userId;
    Integer price;
}

class UserSummary {
    String userId;
    Long sum;
}

@Bean
public Step retrieveOobClientStep1(StepBuilderFactory stepBuilderFactory,
        ItemReader<UserOperation> userInformationJdbcCursorItemReader,
        ItemProcessor<UserOperation, UserSummary> userInformationsProcessor,
        ItemWriter<UserSummary> flatFileWriter) {
    return stepBuilderFactory.get("Step1").<UserOperation, UserSummary>chunk(100) // chunked results that need to be aggregated... not good
            .reader(userInformationJdbcCursorItemReader) // read all user operations from DB
            .processor(userInformationsProcessor) // I need to reject some operations here - but 1 operation = 1 summary, which is not good
            .writer(flatFileWriter) // write result into flat file
            .build();
}
I think that ItemReader/ItemProcessor/ItemWriter is meant for single-item processing.
But how can I regroup multiple records into a single object using Spring Batch? Only with a Tasklet?
A possibility, but it causes problems with a small commit interval:
public class UserSummaryAggregatorItemStreamWriter implements ItemStreamWriter<UserSummary>, InitializingBean {

    private ItemStreamWriter<UserSummary> delegate;

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(delegate, "'delegate' may not be null.");
    }

    public void setDelegate(ItemStreamWriter<UserSummary> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void write(List<? extends UserSummary> items) throws Exception {
        Map<String, UserSummary> userSummaryMap = new HashMap<String, UserSummary>();
        // Aggregate the chunk, one entry per user id
        for (UserSummary item : items) {
            UserSummary savedUserSummary = userSummaryMap.get(item.getUserId());
            if (savedUserSummary != null) {
                savedUserSummary.incrementSum(item.getSum()); // sum
            } else {
                savedUserSummary = item;
            }
            userSummaryMap.put(item.getUserId(), savedUserSummary);
        }
        Collection<UserSummary> values = userSummaryMap.values();
        if (values != null) {
            delegate.write(new ArrayList<UserSummary>(values));
        }
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        delegate.open(executionContext);
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        delegate.update(executionContext);
    }

    @Override
    public void close() throws ItemStreamException {
        delegate.close();
    }
}
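
Wiring it up would then look something like this (a sketch; flatFileWriter is assumed to be an ItemStreamWriter<UserSummary>, such as a FlatFileItemWriter):

@Bean
public UserSummaryAggregatorItemStreamWriter aggregatingWriter(ItemStreamWriter<UserSummary> flatFileWriter) {
    // Wrap the real writer so each chunk is collapsed to one summary per user first.
    UserSummaryAggregatorItemStreamWriter writer = new UserSummaryAggregatorItemStreamWriter();
    writer.setDelegate(flatFileWriter);
    return writer;
}

Note the caveat above still applies: aggregation happens only within one chunk/commit interval, so operations for the same user spread across chunks will still produce multiple summary records.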
