I'm looking to write messages to a single queue. I'd like to use the sub-topics functionality, so that tailers can either read all of the sub-topics under one topic, or pick specific sub-topics to read from.
The documentation mentions that sub-topics are supported in a directory under the main topic, so in order to read from a sub-topic, do we just create a new queue and point it to the sub-topic path?
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary("Topic").build();
SingleChronicleQueue queue2 = SingleChronicleQueueBuilder.binary("Topic/SubTopic").build();
ExcerptAppender appender = queue.acquireAppender();
ExcerptAppender appender2 = queue2.acquireAppender();
appender.writeText("aaa");
appender2.writeText("bbb");
This only outputs aaa, but I want it to output both aaa and bbb.
There is no real concept of hierarchy in Chronicle-Queue; there is a one-to-one mapping between file-system directory and queue.
If you wish to filter certain records, you will need to do that when reading the records out of the queue. It will be up to your application to decide how to detect messages that should be filtered.
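For example, here is a minimal sketch of such application-level filtering, assuming you tag each message with a sub-topic prefix of your own choosing (the "SubTopicA|" convention below is purely illustrative, not part of the Chronicle API):
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

try (SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary("Topic").build()) {
    ExcerptAppender appender = queue.acquireAppender();
    // Tag each message with an application-level "sub-topic" prefix on write
    appender.writeText("SubTopicA|aaa");
    appender.writeText("SubTopicB|bbb");

    // On read, skip any message that does not belong to the wanted sub-topic
    ExcerptTailer tailer = queue.createTailer();
    String msg;
    while ((msg = tailer.readText()) != null) {
        if (msg.startsWith("SubTopicA|")) {
            System.out.println(msg.substring("SubTopicA|".length())); // prints aaa
        }
    }
}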
The documentation you refer to appears to have been copied from concepts that exist in Chronicle-Engine.
I am exploring ChronicleQueue to save events generated in one of my applications. I would like to publish the saved events to a different system, in their original order of occurrence, after some processing. I have multiple instances of my application, and each instance could run a single-threaded appender to append events to the ChronicleQueue. Although ordering across instances is a necessity, I would like to understand these 2 questions.
1) How would the read index for my events be saved, so that I don't end up reading and publishing the same message from the chronicle queue multiple times?
In the code below (picked from the example on GitHub), the index is saved only until we reach the end of the queue after restarting the application. The moment we reach the end of the queue, we end up reading all the messages again from the start. I want to make sure that, for a particular consumer identified by a tailer ID, the messages are read only once. Do I need to save the read index in another queue and use that to achieve what I need here?
String file = "myPath";
try (ChronicleQueue cq = SingleChronicleQueueBuilder.binary(file).build()) {
    for (int i = 0; i < 10; i++) {
        cq.acquireAppender().writeText("test" + i);
    }
}
try (ChronicleQueue cq = SingleChronicleQueueBuilder.binary(file).build()) {
    ExcerptTailer atailer = cq.createTailer("a");
    System.out.println(atailer.readText());
    System.out.println(atailer.readText());
    System.out.println(atailer.readText());
}
2) Also, is there a way to preserve the ordering of events across instances?
Using a named tailer should ensure that the tailer only reads each message once. If you have an example where this doesn't happen, can you create a test that reproduces it?
The order of entries in a queue is fixed at write time, and all tailers see the same messages in the same order; there isn't any option to change this.
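For illustration, a minimal sketch of the resume behaviour, continuing your snippet (the named tailer "a" has its read position persisted in the queue directory, so it survives restarts):
try (ChronicleQueue cq = SingleChronicleQueueBuilder.binary(file).build()) {
    ExcerptTailer atailer = cq.createTailer("a");
    System.out.println(atailer.readText()); // test0
    System.out.println(atailer.readText()); // test1
}

// After a restart, the same named tailer resumes where it left off
try (ChronicleQueue cq = SingleChronicleQueueBuilder.binary(file).build()) {
    ExcerptTailer atailer = cq.createTailer("a");
    System.out.println(atailer.readText()); // test2, not test0 again
}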
I have a Kafka Streams unit test based on the really great, reliable and convenient TopologyTestDriver:
try (TopologyTestDriver testDriver = new TopologyTestDriver(builder.build(),
streamsConfig(Serdes.String().getClass(), SpecificAvroSerde.class))) {
TestInputTopic<String, Event> inputTopic = testDriver.createInputTopic(inputTopicName,
Serdes.String().serializer(), eventSerde.serializer());
TestOutputTopic<String, Frame> outputWindowTopic = testDriver.createOutputTopic(
outputTopicName, Serdes.String().deserializer(), frameSerde.deserializer());
...
}
I'd like to test a bit more complex setup where an "output" topic is an "input" topic for another topology.
I can define several input and output topics inside the same topology. But as soon as I use the same topic as both an input and an output topic within the same topology, I get the following exception:
org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic events has already been registered by another source.
at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
at org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
at org.apache.kafka.streams.kstream.internals.graph.StreamSourceNode.writeToTopology(StreamSourceNode.java:94)
at org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:558)
at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:547)
It looks like the TopologyTestDriver doesn't provide the possibility to define input-output topics. Is that right?
Update
To better illustrate what I'm trying to achieve:
builder.stream("input-topic, ...)..to("intermediate-topic",...);
builder.stream("intermediate-topic", ...)..to("output-topic",...);
and I want to be able to verify (assert) the contents of the "intermediate-topic" in my unit test. By the way, I cannot "reuse" the result of the ".to()" call when building the next part of the topology, since that method returns void.
But I only have testDriver.createInputTopic() and testDriver.createOutputTopic() and no way of defining something like testDriver.createInputOutputTopic().
Using the same topic as an input and an output topic should work. However, you cannot use the same topic as an input topic multiple times (the stack trace indicates that you are trying to do this).
If you want to use the same input topic twice, you would just add it once, and "fan it out":
KStream stream = builder.stream(...);
stream.map(...); // first usage
stream.filter(...); // second usage
Using the same KStream object twice is basically a "fan out" (or "broadcast") that sends the input data to both operators.
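Back to your update: as far as I know, TopologyTestDriver captures records written to any sink topic, including one that is also consumed by another sub-topology, so you should be able to create a TestOutputTopic for "intermediate-topic" and assert its contents. A sketch only; the String serdes and props (standing in for your streamsConfig(...)) are placeholders:
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
       .to("intermediate-topic", Produced.with(Serdes.String(), Serdes.String()));
builder.stream("intermediate-topic", Consumed.with(Serdes.String(), Serdes.String()))
       .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

try (TopologyTestDriver testDriver = new TopologyTestDriver(builder.build(), props)) {
    TestInputTopic<String, String> input = testDriver.createInputTopic(
            "input-topic", Serdes.String().serializer(), Serdes.String().serializer());
    // An output topic can be created for the intermediate topic to assert its contents
    TestOutputTopic<String, String> intermediate = testDriver.createOutputTopic(
            "intermediate-topic", Serdes.String().deserializer(), Serdes.String().deserializer());
    TestOutputTopic<String, String> output = testDriver.createOutputTopic(
            "output-topic", Serdes.String().deserializer(), Serdes.String().deserializer());

    input.pipeInput("k", "v");
    System.out.println(intermediate.readKeyValue()); // KeyValue(k, v)
    System.out.println(output.readKeyValue());       // KeyValue(k, v), after the second sub-topology
}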
I am using Spring Cloud Stream to consume messages, with something like:
@StreamListener(target = "CONSTANT_CHANNEL_NAME")
public void readingData(String input) {
    System.out.println("consumed info is " + input);
}
But I want to set the channel name per environment, picked up from a property file, whereas according to Spring the channel name should be a constant.
Is there any workaround for this problem?
Edit 1:
Let's look at the actual situation:
I am using multiple queues and DLQ queues, and their binding is done with RabbitMQ.
I want to change my channel names and queue names per environment.
I want to do it all on the same AMQP host.
My Sink Code
public interface ProcessorSink extends Sink {
    @Input(CONSTANT_CHANNEL_NAME)
    SubscribableChannel channel();
    @Input(CONSTANT_CHANNEL_NAME_1)
    SubscribableChannel channel2();
    @Input(CONSTANT_CHANNEL_NAME_2)
    SubscribableChannel channel3();
}
You can pick the target value up from a property file as below:
@StreamListener(target = "${streamListener.target}")
public void readingData(String input) {
    System.out.println("consumed info is " + input);
}
application.yml
streamListener:
  target: CONSTANT_CHANNEL_NAME
While there are many ways to do that, I wonder why you even care. In fact, if anything, you want to keep the channel name constant so it is always the same, and then, through configuration properties, map it to different remote destinations (e.g., Kafka, Rabbit etc.). For example, spring.cloud.stream.bindings.input.destination=myKafkaTopic states that the channel named input will be mapped to (bridged with) the Kafka topic named myKafkaTopic.
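For example, a hypothetical application.yml that keeps the channel name constant but varies the destination per environment (the env.prefix placeholder and queue names are illustrative):
spring:
  cloud:
    stream:
      bindings:
        CONSTANT_CHANNEL_NAME:
          destination: ${env.prefix}-my-queue   # e.g. dev-my-queue, prod-my-queue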
In fact, to further prove my point, we completely abstracted away channels altogether for users who use the spring-cloud-function programming model, but that is a whole different discussion.
My point is that I believe you are actually creating a problem rather than solving one, since by externalising the channel name you create the risk that, due to misconfiguration, your actual bound channel and the channel you mention in your properties will not be the same.
Using Chronicle with vertx.io...
I create a new Chronicle per verticle, i.e. one instance per thread.
chronicle = ChronicleQueueBuilder.indexed("samePath").build(); // <-- created per thread, pointing at the same queue path
Now, for each HTTP POST request, I do the following. Each POST is handled by exactly one thread at a time.
String message = request.toString();
ExcerptAppender appender = chronicle.createAppender();
// Size the excerpt for the message plus some headroom
appender.startExcerpt(message.length()+100);
// Copy the content of the Object as binary
appender.writeObject(message);
// Commit
appender.finish();
This seems to work. But is it ok?
This is not OK for IndexedChronicle, whereas it is for VanillaChronicle.
If you can, it is best to share the same VanillaChronicle instance among verticles (in the same process, of course) and create an appender on demand.
Note that you can use writeUTF* instead of writeObject to serialize strings, as it is much more efficient.
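A minimal sketch of what was suggested, assuming the Chronicle v3 API from your snippet (paths and sizes are illustrative):
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptAppender;

// One shared instance for the whole process, created once (e.g. at startup)
Chronicle chronicle = ChronicleQueueBuilder.vanilla("samePath").build();

// Inside each verticle, per request: create an appender on demand
String message = request.toString();
ExcerptAppender appender = chronicle.createAppender();
appender.startExcerpt(message.length() + 100);
appender.writeUTF(message); // more efficient than writeObject for strings
appender.finish();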
In TIBCO EMS, which I am familiar with, there is a feature called "destination bridges".
Queues and topics can be bridged (linked) so that the second destination becomes a client of the first (queue to queue, topic to queue, queue to topic, topic to topic).
For instance, a topic can be bridged to a queue, which in essence becomes a durable subscriber to the messages submitted to the topic. Clients can subscribe to the topic OR read from the queue. This is a way to load-balance the reading of a pub/sub stream across multiple clients (the readers of the queue).
This "bridge" feature can also involve message selectors and destination name wilcards.
So a QUEUE X can be a client of TOPIC.* with the condition CUST_ID (a JMS attribute) > 30.
In that case, all messages submitted to TOPIC.A or TOPIC.B fitting the criteria would end up in QUEUE X. All this involves nothing but simple EMS configuration.
I don't know enough about WebSphere MQ, and I need similar behaviour. Will I have to develop a processing program outside of MQ, or can a feature within the product suffice?
Note: I have already gone through the MQ documentation and found the "alias queues" feature. Since that feature should really be called "shortcut queue" and does not involve 2 destinations, I don't think it can help me.
Thanks!
Edit : For reference, the command (DEF SUB) enabling this in MQ is documented here
Edit 2: The selected answer covers the "Topic -> Queue" pattern of the TIBCO EMS "destination bridge" feature. Please note that the "Q->Q", "T->T" and "Q->T" patterns are not covered here.
Easy! Define your queue to receive the subscription and then define a durable administrative subscription.
DEF QL(MY.SUSCRIBER.QUEUE)
DEF SUB('MY.SUBSCRIPTION') +
    TOPICSTR('SOME/TOPIC/#') +
    DEST('MY.SUSCRIBER.QUEUE') +
    SELECTOR('JMSType = ''car'' AND color = ''blue'' AND weight > 2500') +
    REPLACE
The Infocenter has a section on Selector Syntax and a page for the DEFINE SUB command.