In my scenario, a receiving vehicle gets BSMs from multiple senders. I need the BSM data recorded separately according to their respective senders.
Currently, I am achieving this using a custom logging system. However, since OMNeT++ has a sophisticated logging system built in, is it possible to achieve what I need using OMNeT++'s built-in tools?
OMNeT++ vectors log 2-tuples (TV: time+value) or 3-tuples (ETV: event+time+value) for each piece of data. You can use this additional information to find which values have been recorded at the same simulation time or as a consequence of the same event.
This is a very broad question; I'm new to Flink and looking into the possibility of using it as a replacement for our current analytics engine.
The scenario is: data is collected from various equipment and received as a JSON-encoded string with the format {"location.attribute": value, "TimeStamp": value}.
For example, a unitary traceability code is received for a location, after which various process parameters are received in a real-time stream. The analysis is to be run over the process parameters; however, the output needs to include a relation to the traceability code. For example: {"location.alarm": value, "location.traceability": value, "TimeStamp": value}.
What method does Flink use for caching values, in this case the current traceability code, whilst running analysis over other parameters received at a later time?
I'm mainly just looking for the right area to research, as so far I've been unable to find any examples of this kind of scenario. Perhaps it's not the kind of process that Flink can handle.
A natural way to do this sort of thing with Flink would be to key the stream by the location, and then use keyed state in a ProcessFunction (or RichFlatMapFunction) to store the partial results until ready to emit the output.
With a keyed stream, you are guaranteed that every event with the same key will be processed by the same instance. You can then use keyed state, which is effectively a sharded key/value store, to store per-key information.
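To make that concrete, here is a minimal sketch of such a function. The Reading and EnrichedReading POJOs, their field names, and the "traceability" attribute label are assumptions based on the format described in the question, not part of Flink's API:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical POJOs for the decoded JSON described in the question.
class Reading {
    public String location, attribute, value;
    public long timeStamp;
}

class EnrichedReading extends Reading {
    public String traceability; // the cached code attached to the output
}

// Keyed by location: every reading for the same location reaches the same
// instance, so per-location keyed state can cache the traceability code.
public class TraceabilityEnricher
        extends KeyedProcessFunction<String, Reading, EnrichedReading> {

    private transient ValueState<String> currentCode;

    @Override
    public void open(Configuration parameters) {
        currentCode = getRuntimeContext().getState(
                new ValueStateDescriptor<>("traceability-code", String.class));
    }

    @Override
    public void processElement(Reading in, Context ctx,
                               Collector<EnrichedReading> out) throws Exception {
        if ("traceability".equals(in.attribute)) {
            currentCode.update(in.value); // remember the latest code for this location
        } else {
            EnrichedReading result = new EnrichedReading();
            result.location = in.location;
            result.attribute = in.attribute;
            result.value = in.value;
            result.timeStamp = in.timeStamp;
            result.traceability = currentCode.value(); // may be null if none seen yet
            out.collect(result);
        }
    }
}
```

You would wire it in with something like readings.keyBy(r -> r.location).process(new TraceabilityEnricher()).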
The Apache Flink training includes some explanatory material on keyed streams and working with keyed state, as well as an exercise or two that explore how to use these mechanisms to do roughly what you need.
Alternatively, you could do this with the Table or SQL API, and implement this as a join of the stream with itself.
My team and I currently work on the read side of a CQRS and event-sourcing system.
We want our projectors to be able to listen to only a subset of all events and we want our projectors to be idempotent since an event can be published many times.
Here is our current architecture:
Since a projectionist doesn't handle all events, how can it know whether an event has been received out of order, or whether an event has already been received? We can't use the sequence number, because the sequence number is related to a stream and not to an event type.
The terms "projectionist", "projection ledger" and "projector" come from this article.
How do we know whether we have to reorder/ignore events on the read side?
The "Bus" is not the authority for order of events - that responsibility lies with the event store. So a projectionist that needs to know what order things happen should query the store, rather than trying to reconstruct the original ordering from the information on the bus.
Greg Young's 2014 talk Polyglot Data includes a good discussion of this point.
(The projectionist might query the event store via some API rather than talking to the store directly; for example, a curated Atom feed based on the data in the store.)
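To illustrate the shape of this, here is a minimal sketch. Every type in it is hypothetical (no real event store API is implied); it only shows a projectionist that pulls from the store in stream order and checkpoints a global position:

```java
import java.util.List;

// All of the following types are hypothetical illustrations, not a real API.
interface EventStoreClient {
    // Returns events strictly in store order, starting after `position`.
    List<StoredEvent> readAllAfter(long position, int batchSize);
}

interface Projector {
    boolean handles(String eventType);
    void project(StoredEvent event);
}

record StoredEvent(long globalPosition, String type, byte[] payload) {}

// The projectionist trusts the store (not the bus) for ordering: it reads
// from its last checkpointed position, so events arrive ordered, and skipped
// or duplicated bus deliveries become irrelevant.
final class Projectionist {
    private final EventStoreClient store;
    private final Projector projector;
    private long position = 0; // last projected global position (the ledger)

    Projectionist(EventStoreClient store, Projector projector) {
        this.store = store;
        this.projector = projector;
    }

    void pollOnce() {
        for (StoredEvent event : store.readAllAfter(position, 100)) {
            if (projector.handles(event.type())) {
                projector.project(event);
            }
            // Advance even past ignored events so the ledger stays monotonic.
            position = event.globalPosition();
        }
    }
}
```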
As proposed by @VoiceOfUnreason, we fixed the problem by ditching the bus and replacing it with the change feed processor of Cosmos DB, since our events are stored in Cosmos DB. We have had no problems with this solution so far. The change feed processor is capable of managing the checkpoints and broadcasting the events to every projector out of the box!
I am implementing an example with Spring Boot and Axon. I have two events (deposit and withdraw account balance). I want to know: is there any way to get the state of the Account Aggregate at a given date?
I want to get not just the final state, but to replay the events in a range of dates.
I think I can help with this.
In the context of Axon Framework, you can start a replay of events by telling a given TrackingEventProcessor to 'reset' its tokens. By the way, the current description of this in the Reference Guide can be found here.
These TrackingTokens are the objects which know how far a given TrackingEventProcessor has progressed in handling events from the event stream. Thus, resetting/adjusting these TrackingTokens is what triggers a replay of events.
Knowing all this, the next step is to look at the methods the TrackingEventProcessor provides to 'reset tokens', of which there are three:
TrackingEventProcessor#resetTokens()
TrackingEventProcessor#resetTokens(Function<StreamableMessageSource, TrackingToken>)
TrackingEventProcessor#resetTokens(TrackingToken)
Option one will reset your tokens to the beginning of the event stream, which will thus replay everything.
Options two and three, however, give you the opportunity to provide a TrackingToken.
Thus, you could provide a TrackingToken starting from several points in the Event Stream. So, how do you go about creating such a TrackingToken for a specific point in time? To that end, you should take a look at the StreamableMessageSource interface, which has the following operations:
StreamableMessageSource#createTailToken()
StreamableMessageSource#createHeadToken()
StreamableMessageSource#createTokenAt(Instant)
StreamableMessageSource#createTokenSince(Duration)
Option 1 is what's used to create a token at the start (tail) of the stream, whilst option 2 will create a token at the head of the stream.
Options 3 and 4, however, allow you to create a token at a specific point in time, thus letting you replay all the events from the defined instant up to now.
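Putting the two lists together, a replay from a given date could look roughly like this (a sketch assuming the Axon 4 API; the processor name "account-projection" is a placeholder for your own processing group):

```java
import java.time.Instant;

import org.axonframework.config.EventProcessingConfiguration;
import org.axonframework.eventhandling.TrackingEventProcessor;

public class ReplayTrigger {

    // Resets the tracking token of a processor so that all events recorded
    // since `from` are handled again. The processor must be stopped first.
    public static void replaySince(EventProcessingConfiguration config, Instant from) {
        config.eventProcessor("account-projection", TrackingEventProcessor.class)
              .ifPresent(processor -> {
                  processor.shutDown();
                  processor.resetTokens(source -> source.createTokenAt(from));
                  processor.start(); // resumes handling from the new token
              });
    }
}
```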
There is one caveat in this scenario, however. You're asking to replay an Aggregate. From Axon's perspective, the Aggregate is by default the Command Model in a CQRS set-up, thus dealing with Commands going into your system. In the majority of applications, you want Commands (i.e. the requests to change something) to act on the current state of the application. As such, the Repository provided to retrieve an Aggregate does not allow specifying a point in time.
The solution described above in regard to replaying is thus solely tied to Query Model creation, as the TrackingEventProcessor is part of the event handling side of your application, most often used to create views. This idea also ties in with your question: you want to know the "state of the Account Aggregate" at a given point in time. That's not a command but a query, as you have 'a request for data' instead of 'a request to change state'.
Hope this helps you out, @Safe!
I have implemented a relay server on top of WebSocket. The sender sends many small binary messages to the server, which are then relayed to all the connected clients.
What I am interested in is the time between the sender sending a message and a receiver receiving it. Right now I have already set up the Test Plan with a thread group of 25 receivers and another group of 1 sender, and they can receive and send messages respectively.
The aggregate report is considering the send message and read message as two different labels. How should I configure the Test Plan to record my desired time?
P.S. I am using this JMeter WebSocket sampler plugin: https://bitbucket.org/pjtr/jmeter-websocket-samplers
Thanks in advance.
"The aggregate report is considering the send message and read message as two different labels."
Sure it is, because according to your description there are two separate Thread Groups.
You need to sync and order the sampler results somehow, so I see two ways here:
1) Write the raw sampler results to a file (the Simple Data Writer, Aggregate Report and Summary Report listeners are all capable of doing that), then use an external tool (say, a spreadsheet such as Excel) to process them, do the simple math and show your desired timings (see the sketch at the end of this answer).
Or stream the results to a time-series DB (e.g. InfluxDB) with the Backend Listener and proceed from there: do the math and/or visualize them (say, with Grafana).
2) The second option is syncing the Thread Groups with each other using the Inter-Thread Communication plugin.
But that seems trickier to me and, what's more, may influence the timing readings (depending on the way you do it), so the results could get skewed.
Thus I personally would prefer passive metrics readings and post-calculations upon them (which could be made pretty much "live" too, if you want, with the Backend Listener + InfluxDB + Grafana bundle, or similar).
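For what it's worth, here is a rough sketch of that post-calculation. It assumes a CSV JTL with the default timeStamp,elapsed,label,... columns, uses the labels "Send Message" and "Read Message" as placeholders for yours, and naively pairs each read sample with the closest preceding send sample, which is only an approximation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Pairs each "Read Message" sample with the closest preceding "Send Message"
// sample and prints the difference of their start timestamps.
public class RelayLatency {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("results.jtl"));
        long lastSend = -1;
        for (String line : lines.subList(1, lines.size())) { // skip CSV header
            String[] fields = line.split(",");
            long timeStamp = Long.parseLong(fields[0]); // sample start, epoch millis
            String label = fields[2];
            if ("Send Message".equals(label)) {
                lastSend = timeStamp;
            } else if ("Read Message".equals(label) && lastSend >= 0) {
                System.out.println("relay latency ~ " + (timeStamp - lastSend) + " ms");
            }
        }
    }
}
```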
The question's environment relates to Java EE and Spring.
I am developing a system which can start and stop arbitrary TCP (or other) listeners for incoming messages. There could be a need to authenticate these messages. These messages need to be parsed and stored in some other entities. These entities model which fields they store.
So, for example, if I have property1 with two text fields, FillLevel1 and FillLevel2, I could receive messages over TCP which have both fill levels specified in text as F1=100;F2=90.
Later I could add another field, say FillLevel3, when I start receiving messages like F1=xx;F2=xx;F3=xx. But this is a conscious decision on the part of the system modeler.
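For reference, a minimal sketch of decoding this payload format (the class name is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Splits a payload like "F1=100;F2=90" into ordered attribute/value pairs;
// an unknown field such as F3 simply shows up as an extra entry.
public class FillLevelParser {

    public static Map<String, String> parse(String payload) {
        Map<String, String> values = new LinkedHashMap<>();
        for (String pair : payload.split(";")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                values.put(kv[0].trim(), kv[1].trim());
            }
        }
        return values;
    }

    public static void main(String[] args) {
        System.out.println(parse("F1=100;F2=90")); // prints {F1=100, F2=90}
    }
}
```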
My question is: what do you think is better to use for parsing and storing the messages? One option is ETL (using Pentaho, which is used in another system), where you store the raw message and use a task executor to consume the messages one by one, storing the transformed messages as per your rules.
One could use Esper or Drools to do the same thing, storing rules and executing them with a timer, but I am not sure how dynamic you can get with making rules (they have to be made by the end user in a running system, and preferably in the most user-friendly way, i.e. no scripts or code, only a GUI).
The end user should be capable of changing the parse rules. It is also possible that the end user might want to change the archived data as well (for example, in the scenario above, if a new FillLevel is added, one would like to put FillLevel=-99 in the previous records to make the data consistent).
Please ask for clarifications; I have the feeling that I need to revise this question a bit.
Thanks
Well, Esper is a great CEP engine, but Drools has its own implementation, Drools Fusion, which integrates really well with jBPM. That would be a good choice.
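For instance, such a session could be bootstrapped like this (a sketch: the session name "fusion-session" and the FillLevelEvent fact are placeholders; the rules themselves, and Fusion's stream event-processing mode, would live in your DRL files and kmodule.xml):

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RulesRunner {

    // A trivial placeholder fact; real rules would match your message entities.
    public static class FillLevelEvent {
        private final String field;
        private final String value;

        public FillLevelEvent(String field, String value) {
            this.field = field;
            this.value = value;
        }

        public String getField() { return field; }
        public String getValue() { return value; }
    }

    public static void main(String[] args) {
        // Load the rules packaged on the classpath; kmodule.xml defines the
        // session and, for Drools Fusion, eventProcessingMode="stream".
        KieServices services = KieServices.Factory.get();
        KieContainer container = services.getKieClasspathContainer();
        KieSession session = container.newKieSession("fusion-session");
        try {
            // Insert an incoming message as a fact/event and evaluate the rules.
            session.insert(new FillLevelEvent("F1", "100"));
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}
```

The advantage of this route is that new rules can be deployed without touching the ingestion code, though building a GUI for end users to author those rules remains a separate effort.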