Equal distribution of CouchDB document keys for parallel processing

I have a CouchDB instance where each document has a unique id (a string). I would like to go over each document in the db and perform an external operation based on its contents (for example, connecting to another web server to fetch specific details). However, instead of going over the documents sequentially, is it possible to first get a list of k buckets of document keys, each represented by a starting key and an ending key (the id being the key), then query for the documents in each bucket separately and run the external operation on each bucket's documents in parallel?
I currently use couchdb-python for accessing my db and views. For example, this is the code I currently use:
for res in db.view("mydbviews/id"):
    doc = db[res.id]
    do_external_operation(doc)  # Time-consuming operation
It would be great if I could do something like 'parallel for' for the above loop.

Assuming that you're only emitting one result per document in the view, then running the view with start and end keys combined with some Python parallelisation technique should be sufficient here. As @Ved says, the bigger issue here is the parallel processing, rather than generating the subsets of documents. I'd recommend the multiprocessing module, like so:
import multiprocessing

# `db` is the couchdb-python Database object from the question.
def work_on_subset(viewname, key_low, key_high):
    rows = db.view(viewname, startkey=key_low, endkey=key_high)
    for row in rows:
        pass  # Do your work here

viewname = '_design/designname/_view/viewname'
key_list = [('a', 'z'), ('1', '10')]  # (startkey, endkey) pairs; use whatever subsets you want
pool = multiprocessing.Pool(processes=10)  # Or however many workers you want
results = []
for (key_low, key_high) in key_list:
    results.append(pool.apply_async(work_on_subset, args=(viewname, key_low, key_high)))
pool.close()
pool.join()
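As for generating the key ranges themselves: since the document ids are unique and the view returns them in sorted order, one option is to fetch just the keys once and chop the sorted list into k roughly equal buckets. A minimal sketch (it assumes the view emits one row per document with the id as the key, and that the full key list fits in memory):
def make_key_ranges(db, viewname, k):
    # Pull only the keys; the view already returns them sorted.
    keys = [row.key for row in db.view(viewname)]
    bucket_size = max(1, len(keys) // k)
    ranges = []
    for i in range(0, len(keys), bucket_size):
        chunk = keys[i:i + bucket_size]
        ranges.append((chunk[0], chunk[-1]))  # (startkey, endkey) for this bucket
    return ranges

key_list = make_key_ranges(db, '_design/designname/_view/viewname', 10)
Because CouchDB's endkey is inclusive and the ids are unique, adjacent ranges built this way do not overlap.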

Related

Conditional array or hash combining

I am working on a project that has 2 separate input files, each with some information that relates to the other file.
I have loaded them each into their own arrays after parsing them like so
file_1 << "#{contract_id}|#{region}|#{category}|#{supplier_ID}"
file_2 << "#{contract_id}|#{region}|#{category}|#{manufacturer}|#{model}"
File 1 has 30,000 lines and File 2 has 400,000 lines. By my estimate, my desired output will have somewhere in the neighborhood of 600,000 lines.
Now my problem is figuring out a way to combine them, as they have a many-to-many relationship.
For every time the contract_id, region AND category match, I need to have a record that looks like the following:
supplier_ID region category manufacturer model.
My initial thought was to iterate over one of the arrays and put everything into a hash using #{contract_id}|#{region}|#{category}|#{manufacturer} as the KEY and #{model} as the value.
But the limitation there is that it only iterates over the array once and thus the output is limited to the number of elements in the respective array.
My understanding of your question:
File 1 has the columns contract_id, region, category, supplier_id.
File 2 has the columns contract_id, region, category, manufacturer, model.
You want a program that takes file 1 and file 2 as inputs and does the equivalent of an SQL join to produce a new file with the following columns: supplier_id, region, category, manufacturer, model. Your join condition is that the contract_id, region, and category need to match.
Here is how I would tackle this:
Step 1: Read both files into arrays that have the data from each. Don't store the data entries as an ugly pipe-delimited string; store them as an array or a hash.
file_1_entries << [contract_id, region, category, supplier_ID]
Step 2: Iterate over the data from both files and make hashes to index them by the columns you care about (contract_id, region, and category). For example, to index file 1, you would make a hash whose key is some combination of those three columns (either an array or a string) and the value is an array of entries from file 1 that match.
file_1_index = {}
file_1_entries.each do |x|
  key = some_function_of(x)
  file_1_index[key] ||= []
  file_1_index[key] << x
end
Step 3: Iterate over one of your index hashes, and use the index hashes to do the join you want to do.
file_1_index.keys.each do |key|
  file_1_matching_entries = file_1_index.fetch(key, [])
  file_2_matching_entries = file_2_index.fetch(key, [])
  # Nested loop to do the join: emit one output row per matching pair.
  file_1_matching_entries.each do |entry_1|
    file_2_matching_entries.each do |entry_2|
      # combine the columns you need from entry_1 and entry_2 here
    end
  end
end
I can't go into very much detail on each of these steps because you asked a pretty broad question and it would take a long time to add all the details. But you should try to do these steps and ask more specific questions if you get stuck.
Depending on how much memory your machine has, it's possible you will run out of memory while doing this. In that case, you might need to build a temporary database (e.g. with sqlite) and then perform the join with an actual SQL query instead of trying to do it yourself in Ruby.
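A minimal sketch of that fallback, shown here with Python's built-in sqlite3 module purely to illustrate the shape of the join (the Ruby sqlite3 gem works the same way); the sample rows and table names are made up:
import sqlite3

# Sample rows standing in for the parsed contents of the two files.
file_1_entries = [('C1', 'NA', 'laptops', 'S100')]
file_2_entries = [('C1', 'NA', 'laptops', 'Lenovo', 'T480')]

conn = sqlite3.connect('join_scratch.db')  # temporary working database
conn.execute('CREATE TABLE file1 (contract_id TEXT, region TEXT, category TEXT, supplier_id TEXT)')
conn.execute('CREATE TABLE file2 (contract_id TEXT, region TEXT, category TEXT, manufacturer TEXT, model TEXT)')
conn.executemany('INSERT INTO file1 VALUES (?, ?, ?, ?)', file_1_entries)
conn.executemany('INSERT INTO file2 VALUES (?, ?, ?, ?, ?)', file_2_entries)
conn.commit()

rows = conn.execute(
    'SELECT f1.supplier_id, f1.region, f1.category, f2.manufacturer, f2.model '
    'FROM file1 f1 JOIN file2 f2 '
    'ON f1.contract_id = f2.contract_id AND f1.region = f2.region AND f1.category = f2.category')
for supplier_id, region, category, manufacturer, model in rows:
    print(supplier_id, region, category, manufacturer, model)  # write to your output file instead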

How to sort an optimized keys only query

I'm using the Datastore native API to access the GAE database (for well-studied, specific reasons). I wanted to optimize the code and use memcache in my requests instead of directly grabbing the values; the issue is that my query is sorted.
When I call findProductsByFiltersQuery.setKeysOnly() on my query, I receive this error:
The provided keys-only multi-query needs to perform some sorting in
memory. As a result, this query can only be sorted by the key
property as this is the only property that is available in memory.
The weird thing is that it only starts happening above a certain query complexity. For example, this request fails:
SELECT __key__ FROM Product WHERE dynaValue = _Rs:2 AND productState = PUBLISHED AND dynaValue = _RC:2 AND dynaValue = RF:1 AND dynaValue = ct:1030003 AND dynaValue = _RS:4 AND dynaValue = _px:2 AND itemType = NEWS ORDER BY modificationDate DESC
while this one passes :
SELECT __key__ FROM Product WHERE itemType = CI AND productState = PUBLISHED ORDER BY modificationDate DESC
Can someone explain why this is happening, and if ordering is not possible when fetching only keys, what is that feature for? Since results are paginated, it is useless to get a badly ordered set of keys from the first filtering request. So how is this meant to be used?
Please also notice that when I run a very long non-keys-only request, I receive this message:
Splitting the provided query requires that too many subqueries are
merged in memory.
at com.google.appengine.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:129)
at com.google.appengine.api.datastore.QuerySplitHelper.splitQuery(QuerySplitHelper.java:99)
at com.google.appengine.api.datastore.QuerySplitHelper.splitQuery(QuerySplitHelper.java:71)
Can someone explain how there can be in-memory processing when the values are indexed? Or is it only the dev-mode server that raises this error?
In-memory queries are necessary when you use OR, IN, and != operators in Datastore. As described in this blog post, queries using these operators are split in the client into multiple Datastore queries. For example:
SELECT * FROM Foo WHERE A = 1 OR A = 2
gets split into two queries:
SELECT * FROM Foo WHERE A = 1
SELECT * FROM Foo WHERE A = 2
If you add ORDER BY B to your query, both sub-queries get this order:
SELECT * FROM Foo WHERE A = 1 ORDER BY B
SELECT * FROM Foo WHERE A = 2 ORDER BY B
While each of these Datastore queries returns results sorted by B, the union of the two result sets is not sorted. On the client side, the SDK merges the results from the two queries, ordered by B.
In order to do this, the Datastore queries must actually return the ordered property, otherwise the SDK won't know the correct way to merge these together.
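To illustrate the merge step, here is a tiny Python sketch of the idea (conceptual only, not the SDK's actual code); it also shows why the merged property has to be present in each row:
import heapq

# Results of the two sub-queries, each already sorted by property B.
results_a_eq_1 = [{'A': 1, 'B': 2}, {'A': 1, 'B': 5}, {'A': 1, 'B': 9}]
results_a_eq_2 = [{'A': 2, 'B': 1}, {'A': 2, 'B': 6}]

# The merge has to read B from every row, which is exactly what a
# keys-only result set no longer carries.
merged = heapq.merge(results_a_eq_1, results_a_eq_2, key=lambda row: row['B'])
print(list(merged))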
If you are writing queries with a large number of filters, make sure to only use AND filters. This will allow all the operations to be performed only in the Datastore, in which case no in-memory sorting is necessary.

Storing data with big and dynamic groupings / paths

I currently have the following pig script (column list truncated for brevity):
REGISTER /usr/lib/pig/piggybank.jar;
inputData = LOAD '/data/$date*.{bz2,bz,gz}' USING PigStorage('\\x7F')
AS (
SITE_ID_COL :int,-- = Item Site ID
META_ID_COL :int,-- = Top Level (meta) category ID
EXTRACT_DATE_COL :chararray,-- = Date for the data points
...
)
SPLIT inputData INTO site0 IF (SITE_ID_COL == 0), site3 IF (SITE_ID_COL == 3), site15 IF (SITE_ID_COL == 15);
STORE site0 INTO 'pigsplit1/0/' USING org.apache.pig.piggybank.storage.MultiStorage('pigsplit1/0/','2', 'bz2', '\\x7F');
STORE site3 INTO 'pigsplit1/3/' USING org.apache.pig.piggybank.storage.MultiStorage('pigsplit1/3/','2', 'bz2', '\\x7F');
STORE site15 INTO 'pigsplit1/15/' USING org.apache.pig.piggybank.storage.MultiStorage('pigsplit1/15/','2', 'bz2', '\\x7F');
And it works great for what I wanted it to do, but there's actually at least 22 possible site IDs and I'm not certain there's not more. I'd like to dynamically create the splits and store into paths based on that column. Is the easiest way to do this going to be through a two step usage of the MultiStorage UDF, first splitting by the site ID and then loading all those results and splitting by the date? That seems inefficient. Can I somehow do it through GROUP BYs? It seems like I should be able to GROUP BY the site ID, then flatten each row and run the multi storage on that, but I'm not sure how to concatenate the GROUP into the path.
The MultiStorage UDF is not set up to divide inputs on two different fields, but that's essentially what you're doing -- the use of SPLIT is just to emulate MultiStorage with two parameters. In that case, I'd recommend the following:
REGISTER /usr/lib/pig/piggybank.jar;
inputData = LOAD '/data/$date*.{bz2,bz,gz}' USING PigStorage('\\x7F')
AS (
SITE_ID_COL :int,-- = Item Site ID
META_ID_COL :int,-- = Top Level (meta) category ID
EXTRACT_DATE_COL :chararray,-- = Date for the data points
...
)
dataWithKey = FOREACH inputData GENERATE CONCAT(CONCAT(SITE_ID_COL, '-'), EXTRACT_DATE_COL), *;
STORE dataWithKey INTO 'tmpDir' USING org.apache.pig.piggybank.storage.MultiStorage('tmpDir', '0', 'bz2', '\\x7F');
Then go over your output with a simple script that lists the files in the output directories, extracts the site and date IDs from the directory names, and moves the files into whatever structure you like (a sketch of such a script follows below).
Not the most elegant workaround, but it could work all right for you. One thing to watch out for: the separator you choose in the key might not be allowed (it may have to be alphanumeric). Also, you'll be stuck with that extra key field in your output data.
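Here is a rough sketch of that post-processing script in Python, assuming the MultiStorage output has been pulled onto the local filesystem (on HDFS you would do the same moves with hdfs dfs -mv), that MultiStorage writes one subdirectory per distinct key value, and that the date part contains no '-' separator:
import os
import shutil

src_root = 'tmpDir'     # output directory from the STORE above
dst_root = 'pigsplit1'  # desired site/date directory tree

for key_dir in os.listdir(src_root):                  # e.g. '3-20130501'
    site_id, _, extract_date = key_dir.partition('-')
    dst = os.path.join(dst_root, site_id, extract_date)
    os.makedirs(dst, exist_ok=True)
    for part_file in os.listdir(os.path.join(src_root, key_dir)):
        shutil.move(os.path.join(src_root, key_dir, part_file), dst)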
I've actually submitted a patch to the MultiStorage module to allow splitting on multiple tuple fields rather than only one, resulting in a dynamic output tree.
https://issues.apache.org/jira/browse/PIG-3258
It hasn't gotten much attention yet, but I'm using it in production with no issues.

assigning IDs to hadoop/PIG output data

I'm working on a Pig script which performs heavy-duty data processing on raw transactions and comes up with various transaction patterns.
Say one of the patterns is: find all accounts that received cross-border transactions in a day (with the count and total amount of those transactions).
My expected output should be two data files:
1) Rollup data - e.g. account A1 received 50 transactions from country AU.
2) Raw transactions - all 50 of the above transactions for A1.
My Pig script currently creates output in the following format:
Account Country TotalTxns RawTransactions
A1 AU 50 [(Txn1), (Txn2), (Txn3)....(Txn50)]
A2 JP 30 [(Txn1), (Txn2)....(Txn30)]
Now the question is: when I get this data out of the Hadoop system (into some DB), I want to establish a link between each rollup record (A1, AU, 50) and its 50 raw transactions (e.g. an ID of 1 on the rollup record used as a foreign key on all 50 associated txns).
I understand that Hadoop, being distributed, should not be used for assigning IDs, but are there any options where I can assign non-unique IDs (they don't need to be sequential), or some other way to link this data?
EDIT (after using Enumerate from DataFu)
Here is the Pig script:
register /UDF/datafu-0.0.8.jar
define Enumerate datafu.pig.bags.Enumerate('1');
data_txn = LOAD './txndata' USING PigStorage(',') AS (txnid:int, sndr_acct:int,sndr_cntry:chararray, rcvr_acct:int, rcvr_cntry:chararray);
data_txn1 = GROUP data_txn ALL;
data_txn2 = FOREACH data_txn1 GENERATE flatten(Enumerate(data_txn));
dump data_txn2;
After running this, I get:
ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: java.lang.NullPointerException
at datafu.pig.bags.Enumerate.enumerateBag(Enumerate.java:89)
at datafu.pig.bags.Enumerate.accumulate(Enumerate.java:104)
....
I often assign random IDs in Hadoop jobs. You just need to generate IDs that contain enough random bits to make the probability of collisions sufficiently small (http://en.wikipedia.org/wiki/Birthday_problem).
As a rule of thumb I use 3*log(n) random bits where n = # of ids that need to be generated.
In many cases Java's UUID.randomUUID() will be sufficient.
http://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates
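A quick sketch of that rule of thumb in Python (taking the log as base 2); in the actual job this would live inside a UDF, or you would just use UUID.randomUUID() as mentioned above:
import math
import random

def random_id(expected_count):
    # ~3 * log2(n) random bits keeps the collision probability small (birthday problem).
    bits = max(1, math.ceil(3 * math.log2(expected_count)))
    return random.getrandbits(bits)

print(random_id(1000000))  # roughly 60 random bits for about a million IDs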
What is unique in your rows? It appears that account ID and country code are what you have grouped by in your Pig script, so why not make a composite key with those? Something like
CONCAT(CONCAT(account, '-'), country)
Of course, you could write a UDF to make this more elegant. If you need a numeric ID, try writing a UDF which will create the string as above, and then call its hashCode() method. This will not guarantee uniqueness of course, but you said that was all right. You can always construct your own method of translating a string to an integer that is unique.
But that said, why do you need a single ID key? If you want to join the fields of two tables later, you can join on more than one field at a time.
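The hashCode() suggestion above would live in a Java UDF; the same idea sketched in Python (the function name is made up, and as noted, collisions are possible):
import zlib

def numeric_id(account, country):
    # Deterministic, non-unique numeric ID derived from the composite key.
    composite = '{}-{}'.format(account, country)
    return zlib.crc32(composite.encode('utf-8'))

print(numeric_id('A1', 'AU'))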
DataFu had a bug in Enumerate which was fixed in 0.0.9, so use 0.0.9 or later.
In case your IDs are numbers and you cannot use UUIDs or other string-based IDs:
There is DataFu, a library of Pig UDFs from LinkedIn, with a very useful Enumerate UDF. So what you can do is group all records into a single bag and pass the bag to Enumerate. Here is the code from the top of my head:
-- REGISTER the DataFu jar and DEFINE the Enumerate UDF, as in the EDIT above
inpt = load '....' ....;
allGrp = group inpt all;
withIds = foreach allGrp generate flatten(Enumerate(inpt));

Windows Azure Paging Large Datasets Solution

I'm using Windows Azure Table Storage to store millions of entities, and I'm trying to figure out the best solution that easily allows for three things:
1) a search on an entity will retrieve that entity and at least (pageSize) entities either side of it
2) if there are more entities beyond (pageSize) entities either side of it, then next-page and previous-page links are shown; this continues until either the start or the end is reached
3) the order is reverse chronological
I've decided that the PartitionKey will be the Title provided by the user, as each container is unique in the system. The RowKey is Steve Marx's lexicographical algorithm:
http://blog.smarx.com/posts/using-numbers-as-keys-in-windows-azure
which, when converted from C# to JavaScript, looks like this:
pad(new Date(100000000 * 86400000).getTime() - new Date().getTime(), 19) + "_" + uuid()
uuid() is a JavaScript function that returns a GUID, and pad() left-pads the number with zeros up to 19 characters in length. So records in the system look something like this:
PK RK
TEST 0008638662595845431_ecf134e4-b10d-47e8-91f2-4de9c4d64388
TEST 0008638662595845432_ae7bb505-8594-43bc-80b7-6bd34bb9541b
TEST 0008638662595845433_d527d215-03a5-4e46-8a54-10027b8e23f8
TEST 0008638662595845434_a2ebc3f4-67fe-43e2-becd-eaa41a4132e2
This pattern allows for every new entity inserted to be at the top of the list which satisfies point number 3 above.
With a nice way of adding new records to the system, I thought I would then create a mechanism that looks at the first half of the RowKey, i.e. the 0008638662595845431_ part, and does a greater-than or less-than comparison depending on which direction from the already-found item I want. In other words, to get the row immediately before 0008638662595845431 I would run a query like so:
var tableService = azure.createTableService();
var minPossibleDateTimeNumber = pad(new Date(-100000000 * 86400000).getTime() - new Date().getTime(), 19);
tableService.getTable('testTable', function (error) {
    if (error === null) {
        var query = azure.TableQuery
            .select()
            .from('testTable')
            .where('PartitionKey eq ?', 'TEST')
            .and('RowKey gt ?', minPossibleDateTimeNumber + '_')
            .and('RowKey lt ?', '0008638662595845431_')
            .and('Deleted eq ?', 'false');
If more than 1000 results are returned and Azure gives me a continuation token, then I thought I would remember the last item's RowKey, i.e. the number part 0008638662595845431. The next query would then use the remembered value as its starting value, and so on.
I am using the Windows Azure Node.js SDK and the language is JavaScript.
Can anybody see gotcha's or problems with this approach?
I do not see how this can work effectively and efficiently, especially to get the rows for a previous page.
To be efficient, the prefix of your “key” needs to be a serially incrementing or decrementing value, instead of being based on a timestamp. A timestamp generated value would have duplicates as well as holes, making mapping page size to row count at best inefficient and at worst difficult to determine.
Also, this potential algorithm is dependent on a single partition key, destroying table scalability.
The challenge here would be to have a method of generating a serially incremented key. One solution is to use a SQL database and perform an atomic update on a single row, such that an incrementing or decrementing value is produced in sequence. Something like UPDATE … SET X = X + 1 and return X, perhaps using a stored procedure.
So the key could be a zero-left-padded, serially generated number, split such that, say, the first N digits of the number are the partition key and the remaining M digits are the row key.
For example
PKey RKey
00001 10321
00001 10322
….
00954 98912
Now, since the rows are in sequence it is possible to write a query with the exact key range for the page size.
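A tiny sketch of that key scheme (assuming a 10-digit serial number split 5/5 as in the example above; the serial itself would come from the atomic SQL counter described earlier):
def make_keys(serial, pkey_digits=5, rkey_digits=5):
    # Zero-pad the serial number and split it into partition key and row key.
    padded = str(serial).zfill(pkey_digits + rkey_digits)
    return padded[:pkey_digits], padded[pkey_digits:]

print(make_keys(110321))  # -> ('00001', '10321'), matching the example rows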
Caveat: there is a small risk of a failure occurring between generating a serial key and writing to table storage, in which case there may be holes in the table. However, your paging algorithm should be able to detect and work around such instances quite easily by specifying a page size slightly larger than necessary or by retrying with an adjusted range.
