I have the following situation: I need to join two streams, Bid(Seller, Item, Price) and Ask(Buyer, Item, Price), and emit a tuple (Seller, Buyer) whenever a buyer offers a higher price than the seller is asking.
I know that I can configure the bolt's grouping with fieldsGrouping. But if I configure each input separately, is there a guarantee that data with the same field value will always go to the same bolt task?
Here is some pseudo code to help explain:
builder.setBolt("goodPrice", new GoodPriceBolt(), 5)
.fieldsGrouping("Bid", new Fields("Item"))
.FieldsGrouping("Ask", new Fields("Item"));
Now, as per the documentation http://storm.apache.org/releases/current/Concepts.html, all Bid data points with the same Item value are guaranteed to be delivered to the same task. But I am not sure whether the code above also guarantees that all Ask data points with the same Item value as the Bid will be delivered to the same task.
In other words, I need to partition on Bid.Item = Ask.Item. Is that possible in Storm?
Yes, as far as I know. Joins are listed as a common pattern on Storm's page: http://storm.apache.org/releases/2.0.0-SNAPSHOT/Common-patterns.html.
Here's the implementation of fields grouping in Storm https://github.com/apache/storm/blob/09e01231cc427004bab475c9c70f21fa79cfedef/storm-client/src/jvm/org/apache/storm/daemon/GrouperFactory.java#L160. The values list contains the values of the fields you've specified in the field grouping (in your case "Item"). The id of the task to send the tuple to is based on https://github.com/apache/storm/blob/09e01231cc427004bab475c9c70f21fa79cfedef/storm-client/src/jvm/org/apache/storm/utils/TupleUtils.java#L44, which uses the hash code of the values. As long as whatever is in your "Item" field implements hashCode properly, you should be good.
You might also be interested in http://storm.apache.org/releases/1.2.1/Joins.html, and maybe https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/org/apache/storm/starter/SingleJoinExample.java. Keep in mind that when you join streams, you should try to take into account that the matching tuples might not show up in the joiner at the same time, which is why Storm provides a join bolt that lets you specify a window for how long you want to wait on one part of the match.
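For example, here is a sketch using the windowed JoinBolt described on that Joins page (component names, field names and the window length are all placeholders; since both of your streams carry a price, I'm assuming distinct field names such as BidPrice and AskPrice so the actual price comparison can be done in a bolt after the join):

import java.util.concurrent.TimeUnit;

import org.apache.storm.bolt.JoinBolt;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseWindowedBolt.Duration;
import org.apache.storm.tuple.Fields;

TopologyBuilder builder = new TopologyBuilder();

// "bid" and "ask" are assumed to emit (Seller, Item, BidPrice) and (Buyer, Item, AskPrice).
JoinBolt joiner = new JoinBolt("bid", "Item")                      // keyed on Bid's Item field
        .join("ask", "Item", "bid")                                // Ask.Item = Bid.Item
        .select("Seller,Buyer,BidPrice,AskPrice")                  // fields passed downstream
        .withTumblingWindow(new Duration(10, TimeUnit.SECONDS));   // how long to wait for a match

builder.setBolt("joiner", joiner, 5)
        .fieldsGrouping("bid", new Fields("Item"))
        .fieldsGrouping("ask", new Fields("Item"));
// A downstream "goodPrice" bolt would then emit (Seller, Buyer) whenever AskPrice > BidPrice.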
I am working on a Talend transformation process (we are using Talend 6.4), and I don't know how to implement the current requirement.
I have an input consisting of:
Two columns that are my group keys (Account and Product), but are not unique (the same Account x Product pair can appear in multiple rows)
A criterion column (Contract end date), which will help me decide which row I want to keep for each group
Some "tail" data that need to be passed to the following step of the processing (the contract number)
The rule to implement is:
Keep only one record per group
The selected record must be one with no end date or, if all records have an end date, the one with the latest end date
The selected record can be chosen arbitrarily in case of a tie
See the transformation applying those rules on some dummy data:
I thought first to do the following:
sort by Account, Product, End_date (nulls first)
"select first" in each group
but I am not skilled enough to know whether the second transformation exists in Talend.
Regards,
Pierre
Very interesting Talend question.
You need to create something like this job.
Here is a link to the zip file to import into your Talend
The answer from #MBDIA seems to be working; however, I would like to share what we did to fulfil our requirement.
See our Talend process here:
The first tMap (tMap_3) acts like a tReplicate and a tMap, and sends:
in the upper branch, only the Account and Product references, which are then deduplicated by tAggregateRow_1.
in the lower branch, all data plus computed fields that enable us to handle the case where the date is missing (instead of defaulting to 31/12/9999, we compute a flag (0 or 1) that we use in the sort step afterwards).
In the second part of the process, we first sort the whole data set on Account, Product, the empty-date flag (computed before) and End date (descending), then use a second tMap to join both branches (on Account x Product), keeping only the First Match so that the first record per group is retained, as per our requirement.
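For reference, the selection rule itself can be expressed in plain Java; this is only an illustration (the Contract record and keepOnePerGroup method are made up to mirror the input columns), not what Talend generates:

import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class ContractDedup {

    // Illustrative row type: group keys, criterion column, and the "tail" data.
    record Contract(String account, String product, LocalDate endDate, String contractNumber) {}

    // One record per (Account, Product): a missing end date wins, otherwise the latest end date; ties are arbitrary.
    static Map<List<String>, Contract> keepOnePerGroup(List<Contract> rows) {
        Comparator<Contract> preference = Comparator.comparing(
                Contract::endDate,
                Comparator.nullsFirst(Comparator.<LocalDate>reverseOrder()));   // null first, then latest date
        return rows.stream().collect(Collectors.toMap(
                c -> List.of(c.account(), c.product()),                         // group key
                c -> c,
                (kept, next) -> preference.compare(kept, next) <= 0 ? kept : next));  // keep the preferred record
    }
}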
I want to implement this logic through a Transformer stage rather than the Aggregator stage: basically, merge these records based on the ID column. In my case there is no possibility of getting multiple values for the same field for the same ID.
I have this input data:
ID|VAL1|VAL2|VAL3|BAL1|BAL2|BAL3
10001|5|0|0|1000|0|0
10001|0|10|0|0|1200|0
10001|0|0|11|11|0|10500
and I want my output to be:
ID|VAL1|VAL2|VAL3|BAL1|BAL2|BAL3
10001|5|10|11|1000|1200|10500
Is it possible to implement this? If so, thanks in advance!
There are at least two options to do that:
Using the loop within the transformer
Storing the data of the previous row (with the help of stage variables) until LastRowInGroup
Some things common to both approaches are:
Get the data sorted before it reaches the Transformer
Use LastRowInGroup as the output constraint
Remember that stage and loop variables are processed top down, so the sequence matters; a variable can refer to one defined further down to get its old (previous-row) content
Be aware that this is a little advanced - the Aggregator would probably be the easier solution.
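Not DataStage syntax, but a plain Java sketch of the stage-variable idea, assuming rows are long[] arrays of (ID, VAL1..VAL3, BAL1..BAL3) already sorted by ID:

import java.util.ArrayList;
import java.util.List;

static List<long[]> mergeGroups(List<long[]> sortedRows) {
    List<long[]> out = new ArrayList<>();
    long[] acc = null;                               // plays the role of the stage variables
    for (long[] row : sortedRows) {
        if (acc == null || acc[0] != row[0]) {       // a new ID starts a new group
            if (acc != null) {
                out.add(acc);                        // emit on "LastRowInGroup" of the previous ID
            }
            acc = row.clone();
        } else {
            for (int col = 1; col < row.length; col++) {
                acc[col] += row[col];                // safe because only one row per ID has a non-zero value per column
            }
        }
    }
    if (acc != null) {
        out.add(acc);                                // emit the final group
    }
    return out;
}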
I have read about Apache Storm and done some basic tutorials. I have the following topology in mind that I would like to implement with Storm, but I am not sure how to handle the data distribution.
The business requirement is to evaluate customers' portfolios in real time.
In simplified form it involves:
1) Accept a live stream of market prices (currencies, commodities, etc...)
2) For every price tick calculate current profit of every position and convert it to customer account currency
3) Analyze total p/l and volume of all positions per customer and generate signals if required
4) At customer level calculation must be sequential and atomic/serialized.
I.e. all positions must be evaluated with every tick in the order it entered the system, and totals must be calculated based on the same price even if the customer has hundreds of positions.
5) Analyze volumes / trends of all positions in the system, aggregated by symbol / customer type / country / etc..., and make them available in some kind of dashboard.
All orders are executed and stored in rdbms.
My major question is how to distribute hundreds of thousands of positions across Storm bolts on different nodes so that every node handles its own part. Using modulo is good enough for partitioning the customers, but how can I provide an id to every instance of a bolt so that each of them handles only its own equal share of customers? Is there something out of the box in Storm to do that?
Another question is how to do the above aggregations efficiently.
You can use fieldsGrouping for that: you declare a field by which tuples are grouped (in your case, id).
I'll just suppose that your input stream is a JSON object with id and body fields, like
{"id":"1234","body":"some body"}
Also suppose your topology has one spout and two bolts, namely BoltA and BoltB.
In BoltA, override the declareOutputFields method and fill in the details.
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("id","log"));
}
And you can declare the topology like below:
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", spout, 1);
builder.setBolt("boltA", new BoltA(), 1)
.shuffleGrouping("spout");
builder.setBolt("counterBolt", new BoltB(), 1).fieldsGrouping("boltB", new Fields("id"));
In this case, tuples with the same id from BoltA will be delivered to the same instance of BoltB.
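For completeness, here is a rough sketch of what BoltA might look like; the JSON parsing is stubbed out (extractId and extractBody are placeholders), the only important part is that it emits the "id" field the fields grouping keys on:

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class BoltA extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        String json = input.getString(0);
        // Emit (id, log) anchored to the input tuple, then ack it.
        collector.emit(input, new Values(extractId(json), extractBody(json)));
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "log"));
    }

    private String extractId(String json) { return json; }     // placeholder for real JSON parsing
    private String extractBody(String json) { return json; }   // placeholder for real JSON parsing
}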
Suppose I have a large (300-500k) collection of text documents stored in a relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way I could implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that might be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database
Create a linked-list-based index for each category from the IDs of documents belonging to that category. Shuffle it
Create a Bloom filter containing all of the entries viewed by a particular user
Traverse the index using the iterator, using the Bloom filter to skip items the user has already viewed (a rough sketch follows)
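Roughly what I have in mind, as a sketch (assuming Guava's BloomFilter; how the shuffled index and the per-user filter are loaded and persisted is left out):

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

class CategoryPicker {

    static Long nextUnseen(LinkedList<Long> shuffledIndex, BloomFilter<Long> viewed) {
        for (Long id : shuffledIndex) {
            if (!viewed.mightContain(id)) {   // no false negatives: this id is definitely unseen
                viewed.put(id);               // remember it for this user
                return id;
            }
        }
        return null;                          // everything has (probably) been seen
    }

    static LinkedList<Long> buildIndex(List<Long> documentIdsForCategory) {
        LinkedList<Long> index = new LinkedList<>(documentIdsForCategory);  // IDs read from the database
        Collections.shuffle(index);
        return index;
    }

    static BloomFilter<Long> newUserFilter() {
        // Sized for the expected number of views per user, with a 1% false-positive rate.
        return BloomFilter.create(Funnels.longFunnel(), 100_000, 0.01);
    }
}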
If you track, via a table, which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This causes the database to do a one-for-one join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this one operation but there's no guarantee that the url will be passed successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through a few entities and then stops. For example, the user sees the current random entity, moves to the next one, reads it, does that a couple of times, and that's it.
The next time this user (or another one) gets an entity from this category, the set of already-viewed entities is cleared, so you can return a previously viewed entity.
In that option I would recommend saving a (hash) set of already-viewed entity IDs; every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of getting an already-viewed ID is so small that this will take O(1) most of the time.
Option 2:
A user pages through the entities, and the viewed entities are remembered across all users and across every visit to your page.
In that case you will probably end up using all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
In that option I would get all the IDs for this topic, shuffle them and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)). A small sketch of this is below.
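A small Java sketch of option 2 (the ID list is whatever you loaded from the DB for this topic):

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

class ShuffledTopicQueue {
    private final LinkedList<Long> unseen;

    ShuffledTopicQueue(List<Long> allIdsForTopic) {   // e.g. loaded once from the DB
        this.unseen = new LinkedList<>(allIdsForTopic);
        Collections.shuffle(this.unseen);
    }

    // Returns the next not-yet-viewed id in O(1), or null once everything has been served.
    Long next() {
        return unseen.pollFirst();
    }
}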
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...)
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [generate a stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
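A rough JDBC sketch of those three steps (table and column names are made up, and the LIMIT/OFFSET syntax is MySQL-style):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.ThreadLocalRandom;

// Returns the id of a pseudorandomly chosen document in the category, or -1 if the category is empty.
static long pickRandomDocument(Connection conn, int categoryId) throws SQLException {
    long count;
    try (PreparedStatement ps = conn.prepareStatement(
            "SELECT COUNT(*) FROM documents WHERE category_id = ?")) {
        ps.setInt(1, categoryId);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            count = rs.getLong(1);                                     // step 1: how many candidates
        }
    }
    if (count == 0) {
        return -1;
    }
    long offset = ThreadLocalRandom.current().nextLong(count);         // step 2: random position
    try (PreparedStatement ps = conn.prepareStatement(
            "SELECT id FROM documents WHERE category_id = ? ORDER BY id LIMIT 1 OFFSET ?")) {
        ps.setInt(1, categoryId);
        ps.setLong(2, offset);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);                                       // step 3: the chosen document
        }
    }
}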
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after the user to that document's row and set it to true. Obviously this table will be huge, with millions of columns and probably quite sparsely populated, but no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any result where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission critical that a user never gets a duplicate document).
I've solved something similar in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
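A rough sketch of the Lucene side (field names and values are just illustrative):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Indexing: one Lucene Document per text.
Document doc = new Document();
doc.add(new StringField("textId", "42", Field.Store.YES));
doc.add(new StringField("categoryId", "3", Field.Store.NO));    // repeated for each category it belongs to
// When a user reads the text, re-index it with an extra userId field:
// doc.add(new StringField("userId", "jj", Field.Store.NO));

// Querying: texts in category 3 that user "jj" has not read yet.
Query query = new BooleanQuery.Builder()
        .add(new TermQuery(new Term("categoryId", "3")), BooleanClause.Occur.MUST)
        .add(new TermQuery(new Term("userId", "jj")), BooleanClause.Occur.MUST_NOT)
        .build();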
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria
Randomly choose one of the texts satisfying the criteria until it is not a member of the last X selections of the user.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like X = 16.
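A small sketch of the server-side part of this, with X = 16 (the actual random pick is passed in as a supplier, since it depends on your storage):

import java.util.Deque;
import java.util.function.LongSupplier;

class LastXPicker {
    static final int X = 16;

    // lastSelections is the user's recent history, e.g. rebuilt from the cookie.
    static long pickFresh(Deque<Long> lastSelections, LongSupplier randomPick) {
        long id;
        do {
            id = randomPick.getAsLong();          // a random text id satisfying the criteria
        } while (lastSelections.contains(id));
        lastSelections.addLast(id);
        if (lastSelections.size() > X) {
            lastSelections.removeFirst();         // forget the oldest selection
        }
        return id;
    }
}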
Basically I have a 'thread line' where new threads are made and a TimeUUID is used as a key, which obviously makes sorting new threads quite easy, especially when, say, querying for the latest 20 threads etc.
My problem is that when a new 'post' is made to a thread, I want to be able to 'bump' that thread to the front of the 'thread line'. This is where the problem comes in: how do I make this happen so that queries still return threads in the right order without producing any kind of duplicates?
The only way I can see this working is if, rather than the column family sorting by TimeUUID, it sorted by insertion timestamp; then I could use the unique thread IDs as column keys and retrieve them in the order they were inserted or re-inserted rather than by TimeUUID. Is this possible, or am I missing a simple trick that allows for this? As far as I know you have to set a particular comparator, otherwise it defaults to bytes?
Columns within a row are always sorted by name with the given comparator. You cannot sort by timestamp or value or anything else, or Cassandra would not be able to merge multiple updates to the same column correctly.
As to your use case, I can think of two options.
The most similar to what you are doing now would be to create a second columnfamily, ThreadMostRecentPosts, with timeuuid columns (you said "keys" but it sounds like you mean "columns"). When a new post arrives, delete the old most-recent column and add a new one.
This has two problems:
The unit of replication is the row, so having this grow indefinitely could be problematic. (Using expiring columns to age out no-longer-relevant thread information might help.)
You need a lock manager so that multiple posts to the same thread don't race and possibly leave multiple entries in this row.
I would suggest instead creating a row per day (for instance), whose columns are the thread IDs and whose values are the most recent post. Adding a new post just updates the value in that column; no delete/re-add is done, so the race is not a problem anymore. You don't get sorting for free anymore but that's okay because you're limiting it to a small enough set that you can do that sort in memory (say, yesterday's threads and today's).
(Finally, I would add that I can say from experience that having a cutoff past which old threads don't get bumped to the front by a new reply is a Good Thing.)