Enforcing well-balanced parallelism in an unkeyed Flink stream

Based on my understanding of Flink, it introduces parallelism based on keys (keygroups). However, suppose one had a massive unkeyed stream and would like the work to be done in parallel, what would be the best way to achieve this?
If the stream has some fields, one might think about keying by one of those fields arbitrarily, but this does not guarantee that the workload will be balanced properly, for instance because one value in that field may occur in 90% of the messages. Hence my question:
How to enforce well-balanced parallelism in Flink, without prior knowledge of what is in the stream?
One potential solution I could think of is to assign a random number to each message (say 1-3 if you want to have a parallelism of 3, or 1-1000 if you want parallelism to be more flexible). However, I wondered if this was the recommended approach as it does not feel very elegant.

keyBy is one way to specify stream partitioning, and it is especially useful, since you are guaranteed that all stream elements with the same key will be processed together. This is the basis for stateful stream processing with Flink.
However, if you don't need to use key-partitioned state, and instead care about ensuring that the partitions are well balanced, you can use shuffle() or rebalance() to cause a random or round-robin partitioning. See the docs for more details. You can also implement a custom partitioner, if you want more explicit control.
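For illustration, here is a minimal sketch of the rebalance() route, assuming the Java DataStream API and a hypothetical Event type, source, and map function; shuffle() or partitionCustom() could be dropped into the same spot:

    // Sketch only: Event, EventSource and HeavyTransformation are hypothetical.
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    DataStream<Event> events = env.addSource(new EventSource());

    events
        .rebalance()                      // round-robin redistribution across downstream subtasks
        .map(new HeavyTransformation())   // the per-record work you want spread out evenly
        .setParallelism(3);

    env.execute("balanced-unkeyed-job");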
BTW, if you do want to key the stream by a random number, do not do something like keyBy(new Random().nextInt(n)). It's imperative that the key selector be deterministic. This is necessary because the keys do not travel with the stream records -- instead, the key selector function is used to compute the key whenever it is needed. So for random keying, add another field to your events, populate it with a random number, and use that field as the key. This technique is useful when you want to use keyed state or timers, but don't have anything suitable to use as a key.
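As a sketch of that last technique, with invented field and method names: assign the random key once, store it on the event, and key by the stored field so the selector stays deterministic:

    // Sketch only: withBucket()/getBucket() are hypothetical accessors on a hypothetical Event type.
    int n = 1000;  // number of key buckets; a larger n leaves more room to rescale later

    events
        .map(e -> e.withBucket(ThreadLocalRandom.current().nextInt(n)))  // roll the key exactly once
        .keyBy(e -> e.getBucket())     // deterministic: re-reads the stored field, never re-rolls
        .process(new MyKeyedWork());   // hypothetical keyed function that can use state or timers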


Parallel counting using a functional approach and immutable data structures?

I have heard, and bought, the argument that mutation and state are bad for concurrency. But I struggle to understand what the correct alternatives actually are.
For example, when looking at the simplest of all tasks: counting, e.g. word counting in a large corpus of documents. Accessing and parsing the documents takes a while, so we want to do it in parallel using k threads, actors, or whatever the abstraction for parallelism is.
What would be the correct, but also practical, purely functional way to do this using immutable data structures?
The general approach to analyzing data sets in a functional way is to partition the data set in some way that makes sense; for a document you might cut it up into sections based on size, i.e. four threads means the document is sectioned into four pieces.
The thread or process then executes its algorithm on each section of the data set and generates an output. All the outputs are gathered together and then merged. For word counts, for example, each section produces a collection of word counts sorted by the word, and then each list is stepped through looking for the same words. If a word occurs in more than one list, the counts are summed. In the end, a new list with the sums for all the words is output.
This approach is commonly referred to as map/reduce. The step of converting a document into word counts is a "map" and the aggregation of the outputs is a "reduce".
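That shape is easy to see in code. Here is a sketch in plain Java, not tied to any framework: each document is mapped, in parallel, to its own word-count map, and the partial maps are then reduced by summing counts per word:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class WordCount {
        public static void main(String[] args) {
            List<String> documents = List.of("to be or not to be", "to do is to be");

            Map<String, Long> counts = documents.parallelStream()
                    // "map": each document independently becomes its own word-count map
                    .map(doc -> Arrays.stream(doc.split("\\s+"))
                            .collect(Collectors.groupingBy(w -> w, Collectors.counting())))
                    // "reduce": merge the partial maps by summing the counts per word
                    .reduce(Map.of(), WordCount::merge);

            System.out.println(counts);
        }

        static Map<String, Long> merge(Map<String, Long> a, Map<String, Long> b) {
            Map<String, Long> merged = new HashMap<>(a);   // never mutates its inputs
            b.forEach((word, count) -> merged.merge(word, count, Long::sum));
            return merged;
        }
    }

No partial result is ever modified by another thread; each task only builds new maps, which is exactly what makes the parallelism safe.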
In addition to the advantage of eliminating the overhead of preventing data conflicts, a functional approach enables the compiler to optimize more aggressively. Not all languages and compilers do this, but because a compiler knows its variables are not going to be modified by an outside agent, it can apply transforms to the code to increase its performance.
In addition, functional programming lets systems like Spark dynamically create threads, because the boundaries of change are clearly defined. That's why you can write a single function chain in Spark and then just throw servers at it without having to change the code. Pure functional languages can do this in a general way, making every application intrinsically multi-threaded.
One of the reasons functional programming is "hot" is because of this ability to enable multiprocessing transparently and safely.
Mutation and state are bad for concurrency only if mutable state is shared between multiple threads for communication, because it's very hard to reason about impure functions and methods that silently trash some shared memory in parallel.
One possible alternative is using message passing for communication between threads/actors (as is done in Akka), and building ("reasonably pure") functional data analysis frameworks like Apache Spark on top of it. Apache Spark is known to be rather suitable for counting words in a large corpus of documents.
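As a concrete illustration of that last point, the canonical word count in Spark's Java API is one short chain; the HDFS paths below are placeholders:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    import scala.Tuple2;

    public class SparkWordCount {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("word-count");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                JavaRDD<String> lines = sc.textFile("hdfs:///path/to/corpus");          // placeholder path

                JavaPairRDD<String, Integer> counts = lines
                        .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())  // lines -> words
                        .mapToPair(word -> new Tuple2<>(word, 1))                       // (word, 1) pairs
                        .reduceByKey(Integer::sum);                                     // sum per word

                counts.saveAsTextFile("hdfs:///path/to/output");                        // placeholder path
            }
        }
    }

The same chain runs on one laptop core or on hundreds of executors; only the cluster configuration changes, not the code.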

Is there guaranteed order of subscribers in chronicle queue?

I'm looking at Chronicle and I don't understand one thing.
An example - I have a queue with one writer - a market data provider writes tick data as it appears.
Let's say the queue has 10 readers - each reader is a different trading strategy that reads the new tick and might send a buy or sell order; let's name them Strategy1 .. Strategy10.
Let's say there is a rule that I can have only one trade open at any given time.
Now the problem - as far as I understand, there is no guarantee on the order in which these subscribed readers process the tick event. Each strategy is subscribed to the queue, so each of them will get the new tick asynchronously.
So when I run it for the first time, it might be that Strategy1 receives the tick first and places the order, and all the other strategies will then be unable to place their orders.
If I replay the same sequence of events, it might be that a different strategy processes the tick first and places its order.
This will result in totally different results while using the same sequence of initial events.
Am I understanding something wrong or is this how it really works?
What are the possible solutions to this problem?
What I want to achieve is that the same sequence of source events (ticks) produces always the same sequence of trades.
If you want determinism in your system, then you need to remove sources of non-determinism. In your case, since you can only have a single trade open at one time, it sounds like it would be sensible to run all 10 strategies on a single thread (reader). This would also remove the need for any synchronisation on the reader side to ensure that there is only one open trade.
You can then use some fixed ordering to your strategies (e.g. round-robin) that will always produce the same output for a given set of inputs. Alternatively, if the decision logic is too expensive to run in serial, it could be executed in parallel, with each reader feeding into some form of gate (a Phaser-like structure), on which a decision about what strategy to use can be made deterministically. The drawback to this design is that the slowest trading strategy would hold back all the others.
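To make the single-reader idea concrete, here is a sketch with invented Strategy/Tick/Order types (deliberately not Chronicle-specific API): one thread walks the strategies in a fixed order per tick, so the winner for a given tick sequence is always the same:

    import java.util.List;
    import java.util.Optional;

    // Sketch only: Tick, Order and Strategy are hypothetical application types.
    class Tick { }
    class Order { }

    interface Strategy {
        Optional<Order> onTick(Tick tick);
    }

    class SingleReader {
        private final List<Strategy> strategies;  // fixed order, e.g. Strategy1..Strategy10

        SingleReader(List<Strategy> strategies) {
            this.strategies = strategies;
        }

        // Runs on one thread. For each tick the strategies are consulted in list order,
        // so replaying the same ticks always picks the same strategy.
        Optional<Order> onTick(Tick tick, boolean tradeOpen) {
            if (tradeOpen) {
                return Optional.empty();          // only one open trade allowed at a time
            }
            for (Strategy strategy : strategies) {
                Optional<Order> order = strategy.onTick(tick);
                if (order.isPresent()) {
                    return order;
                }
            }
            return Optional.empty();
        }
    }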
I think you need to make a choice about how much you want done concurrently and independently, and how much you want done in order and serially. I suggest you allow the strategies to place orders independently, but have the reader of those orders process them in the original order by checking a sequence number, such as the queue index in the first queue.
This way the reader of the orders will process them in the same order regardless of the order in which they were produced and written, which appears to be your goal.
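A rough sketch of how that could look (all types invented; it assumes every strategy writes a decision, possibly empty, for every tick, and, purely for the sketch, that tick indexes are consecutive from 0; in practice the excerpt index of the source queue could play that role):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;

    // Sketch only: Order is hypothetical, as in the previous sketch. Each strategy writes
    // one Decision per tick, even when it does not want to trade.
    record Decision(long tickIndex, int strategyId, Optional<Order> order) { }

    class OrderReader {
        private final int strategyCount;
        private final Map<Long, List<Decision>> byTick = new HashMap<>();
        private long nextTick = 0;   // assumes consecutive tick indexes, starting at 0

        OrderReader(int strategyCount) {
            this.strategyCount = strategyCount;
        }

        void onDecision(Decision decision) {
            byTick.computeIfAbsent(decision.tickIndex(), k -> new ArrayList<>()).add(decision);
            // Act on a tick only once every strategy has reported, and only in tick order,
            // so the same input sequence always yields the same trades.
            while (byTick.getOrDefault(nextTick, List.of()).size() == strategyCount) {
                List<Decision> decisions = byTick.remove(nextTick++);
                decisions.stream()
                        .sorted(Comparator.comparingInt(Decision::strategyId))
                        .map(Decision::order)
                        .filter(Optional::isPresent)
                        .findFirst()
                        .ifPresent(order -> place(order.get()));
            }
        }

        private void place(Order order) { /* submit the trade here */ }
    }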

What are the main advantages in using promises in scheme?

Pragmatically, what are the main advantages of using promises? Can you show me some examples of real-life useful usage of promises?
In Scheme a promise is just a value whose computation is not necessarily done yet, and if you never use the value it will never be calculated. In short, it is a way to do lazy evaluation in the otherwise eager Scheme. A typical use is to do computations on streams instead of lists.
With lists you can use higher-order functions: you can take a list, filter it for the values you are interested in, then transform those values, and perhaps at some point you have enough to produce the value you needed. This is nice since you can abstract each step, so that each piece of logic does only one thing and the steps compose into the whole program. But in this scenario the first step needs to finish in full before the next step can handle its result, and if you are, say, searching for the first prime number between 0 and 1000, iterating over all the numbers in each step might not be so effective. This is where streams come in.
With streams the code looks the same, but the intermediate results are produced by need. A stream is a pair whose parts are promises, so the code that would otherwise build a pair is delayed until the values are used. Every step produces just enough data for the next step, so if it is enough for the first step to iterate over only 20% of the elements for the last step to compute the final result, the remaining 80% will never be processed in any of the steps. With such a structure the initial stream can also be infinite, like all the numbers from 0 increased by 1.
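Though the answer is about Scheme, the delay/force machinery can be sketched in any language. Here is a rough Java analogy (all names invented): a promise is a memoized supplier, and a stream is a value for the head paired with a promise for the tail:

    import java.util.function.Supplier;

    // Sketch only: a Java analogy of Scheme's delay/force. Forcing runs the task at most
    // once and memoizes the result; an unused promise is never computed at all.
    final class Promise<T> {
        private Supplier<T> task;   // the delayed computation
        private T value;            // the memoized result once forced

        Promise(Supplier<T> task) { this.task = task; }

        T force() {
            if (task != null) {
                value = task.get();
                task = null;
            }
            return value;
        }
    }

    // A lazy stream: the head already exists, the tail exists only as a promise.
    record LazyStream(int head, Promise<LazyStream> tail) {

        // The infinite stream n, n+1, n+2, ...; nothing past the head is computed until forced.
        static LazyStream from(int n) {
            return new LazyStream(n, new Promise<>(() -> from(n + 1)));
        }
    }

With LazyStream.from(0), asking for stream.tail().force().head() computes exactly one more element; the rest of the infinite stream never exists.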
There are penalties involved in using streams. Imagine you make an algorithm that would visit all the elements anyway. Then a stream version of the algorithm will be slower, since the promises that are created and then forced give the program overhead compared with doing the computation without laziness.
You might be interested in seeing Hal Abelson explaining streams and their pros and cons.
There are other alternatives to streams and lazy evaluation. One is generators. Here you can also make composable procedures that take a generator and produce a generator. The iteration will be by need, as with streams.
Another alternative is transducers. These are also composable and iterate like streams and generators, but unlike with streams and generators the initial data cannot be an infinite sequence unless the underlying structure supports it.
The advantages of using promises, or any other technique in this answer, are not Scheme specific. They hold for all eager programming languages!

Can Aerospike's Large Ordered List match the Sorted Sets of Redis for Leaderboards?

I'm considering replacing Redis with Aerospike and I wanted to know if Aerospike is capable of delivering the same capabilities and performance as Redis's sorted sets for leaderboards within an application. I need to be able to quickly insert, read and update items in the set. I also need to be able to do range queries on them and retrieve the rank of an arbitrary item in the set quickly.
Aerospike does not currently have a built-in Leaderboard feature. However, this is one of many functions that anyone can build with User Defined Functions (UDFs) and Large Data Types (LDTs).
The way this would work is you would have a set of UDFs that employ two Large Ordered List (LLIST) LDTs. One LLIST would manage the primary collection, and the other LLIST would provide the Leaderboard/Scoreboard ordering (basically used as an index into the primary collection).
The UDFs would manage the user interaction (read/write/delete primary value and read/scan leaderboard value) and pass the work on to the LDT functions.
We've talked internally about building these examples to show the power of UDFs and LDTs. Perhaps, with a little incentive, we could raise the priority of getting these examples done.
The other issue is performance. What are your latency and throughput requirements?

Motivation behind an implicit sort after the mapper phase in map-reduce

I am trying to understand why map-reduce does an implicit sort during the shuffle and sort phase, both on the map side and the reduce side, which manifests as a mixture of in-memory and on-disk sorting (and can be really expensive for large sets of data).
My concern is that while running map-reduce jobs, performance is a significant consideration, and an implicit sort on the keys before handing the output of the mapper to the reducer will have a great impact on performance when dealing with large sets of data.
I understand that sorting can prove to be a boon in certain cases where it is explicitly required, but this is not always the case. So, why does the concept of implicit sorting exist in Hadoop Map-Reduce?
For a reference on what I am talking about when mentioning the shuffle and sort phase, feel free to give a brief read to the post Map-Reduce: Shuffle and Sort on my blog: Hadoop - Some Salient Understandings.
One possible explanation for the above, which came to my mind much later after posting this question, is:
The sorting is done just to aggregate all the records corresponding to a particular key together, so that all the records for a single key can be sent to a single reducer (the default partitioning logic in Hadoop Map-Reduce). So it may be said that sorting all the records by key after the Mapper phase simply brings all the records for a single key together; the sorted order of the keys itself only gets used for certain use cases, such as sorting large sets of data.
If people can verify the above, or confirm whether they think the same, that would be great. Thanks.
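To make that grouping argument concrete, here is a tiny sketch (plain Java, not the Hadoop API) of why key-sorted mapper output makes the reduce side cheap: one linear pass, constant memory, and no hash table over the whole data set:

    import java.util.List;
    import java.util.Map;

    // Sketch only: walks a key-sorted list of (word, count) pairs the way a reducer
    // walks its sorted, merged map outputs, summing the values for each run of equal keys.
    public class SortedGrouping {
        public static void main(String[] args) {
            List<Map.Entry<String, Integer>> sorted = List.of(   // already sorted by key
                    Map.entry("apple", 1), Map.entry("apple", 3),
                    Map.entry("pear", 2), Map.entry("pear", 5));

            String currentKey = null;
            int sum = 0;
            for (Map.Entry<String, Integer> e : sorted) {
                if (!e.getKey().equals(currentKey)) {
                    if (currentKey != null) {
                        System.out.println(currentKey + " -> " + sum);  // this key is finished for good
                    }
                    currentKey = e.getKey();
                    sum = 0;
                }
                sum += e.getValue();
            }
            if (currentKey != null) {
                System.out.println(currentKey + " -> " + sum);
            }
        }
    }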
