Is Fork-Join framework in Java 8 the best option?

I have a scenario where I want to read a spreadsheet that consists of around 2000 records and insert them into a database.
Currently we are using the Executor framework. We have a limitation that the number of tasks should be only 5. Each task reads 20 rows from the Excel file. We provide each task with the start index and end index of the rows to read.
Say, currently,
Task 1 handles 1-20
Task-2 handles 21-40
Task-3 handles 41-60
Task-4 handles 61-80
Task-5 handles 81-100
If Task-1 finishes its execution, it takes the next 20 rows, that is 101-120. If Task-2 finishes before Task-1, it will start reading from 121-140 and not 101-120.
Can I handle this scenario more effectively with the Fork-Join framework, keeping the restriction of 5 tasks and 20 rows per task?
I also need some insight into the performance implications.

There is no need to switch thread pools. To make the load more balanced you can just maintain an atomic variable which points to the first row not yet taken:
AtomicInteger currentRow = new AtomicInteger(); // shared between tasks
final int maxRow = 2000;
final int batchSize = 20;

// Inside every task:
while (true) {
    int row = currentRow.getAndAdd(batchSize);
    if (row >= maxRow) return;
    int from = row + 1;
    int to = Math.min(row + batchSize, maxRow);
    // process rows from..to; it's guaranteed that other threads
    // do not process the same rows
}
The body of every task is exactly the same. This implementation also does not depend on the number of tasks created: if you later decide to have 3 or 7 tasks, just adjust the thread pool size and submit more (or fewer) tasks.
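As a minimal sketch of how this could be wired up with the existing Executor framework (the pool size of 5 and the readRows helper are assumptions for illustration, not part of the original answer):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SpreadsheetLoader {

    private static final int MAX_ROW = 2000;
    private static final int BATCH_SIZE = 20;
    private static final int TASK_COUNT = 5;

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger currentRow = new AtomicInteger(); // shared between tasks
        ExecutorService pool = Executors.newFixedThreadPool(TASK_COUNT);

        for (int i = 0; i < TASK_COUNT; i++) {
            pool.submit(() -> {
                while (true) {
                    int row = currentRow.getAndAdd(BATCH_SIZE);
                    if (row >= MAX_ROW) return;                   // no work left, task ends
                    int from = row + 1;
                    int to = Math.min(row + BATCH_SIZE, MAX_ROW);
                    readRows(from, to);                           // hypothetical helper
                }
            });
        }

        pool.shutdown();                         // no new tasks; wait for the submitted ones
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // Placeholder: read rows [from, to] from the spreadsheet and insert them into the database.
    private static void readRows(int from, int to) {
    }
}

Because getAndAdd is atomic, no two tasks ever claim the same batch, and a task that finishes early simply claims the next free batch.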

Related

KStream to KStream Join- Output record post a configurable time in event of no matching record within the window

I need some opinion/help on one use case of KStream/KTable usage.
Scenario:
I have 2 topics with a common key: requestId.
input_time(requestId,StartTime)
completion_time(requestId,EndTime)
The data in input_time is populated at time t1 and the data in completion_time is populated at time t1+n (n being the time taken for the process to complete).
Objective
To compare the time taken for a request by joining the data from the two topics and raise an alert if a threshold time is breached.
It may happen that the process fails and the data never arrives on the completion_time topic for that request.
In that case we intend to check whether the current time is well past a specific threshold (let's say 5s) since the start time.
input_time(req1,100) completion_time(req1,104) --> no alert to be raised, as 104-100 < 5 (the configured value)
input_time(req2,100) completion_time(req2,108) --> alert to be raised with (req2,108), as 108-100 > 5
input_time(req3,100) no completion_time record --> if the current time is beyond 105, raise an alert with (req3,currentSysTime), as currentSysTime - 100 > 5
Options tried:
1) Tried both KTable-KTable and KStream-KStream outer joins, but the third case always fails.
final KTable<String, Long> startTimeTable = builder.table("input_time", Consumed.with(Serdes.String(), Serdes.Long()));
final KTable<String, Long> completionTimeTable = builder.table("completion_time", Consumed.with(Serdes.String(), Serdes.Long()));
KTable<String, Long> thresholdBreached = startTimeTable.outerJoin(completionTimeTable,
        new MyValueJoiner());
thresholdBreached.toStream().filter((k, v) -> v != null)
        .to("finalTopic", Produced.with(Serdes.String(), Serdes.Long()));
Joiner
public Long apply(Long startTime, Long endTime) {
    // If the input record itself is not available then we can't do any alerting.
    if (null == startTime) {
        log.info("AlertValueJoiner check: the start time itself is null so returning null");
        return null;
    }
    // The current processing time is the time used.
    long currentTime = System.currentTimeMillis();
    log.info("Checking startTime {} end time {} sysTime {}", startTime, endTime, currentTime);
    if (null == endTime && currentTime - startTime > 5000) {
        log.info("Alert: no corresponding record from file completion yet currentTime {} startTime {}",
                currentTime, startTime);
        return currentTime - startTime;
    } else if (null != endTime && endTime - startTime > 5000) {
        log.info("Alert: threshold breach for file completion startTime {} endTime {}",
                startTime, endTime);
        return endTime - startTime;
    }
    return null;
}
2) Tried the custom logic approach recommended in the thread
How to manage Kafka KStream to KStream windowed join?
This approach stopped working for scenarios 2 and 3.
Is there any way of handling all three scenarios using the DSL or the Processor API?
I am not sure whether we can use some kind of punctuator to listen for when the window changes, check the stream records in the current window, and, if no matching record is found, produce a result with the system time.
Due to the nature of the logic involved, it had to be done with a combination of the DSL and the Processor API:
A custom transformer and state store to compare against the configured values (cases 1 and 2).
A punctuator based on wall-clock time to handle the 3rd case.
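A minimal sketch of what such a transformer with a wall-clock punctuator could look like, using the Transformer/transform API (the store name "pending-starts", the 1-second punctuation interval and the 5s threshold are illustrative assumptions; cases 1 and 2 would be handled by a similar transform on the completion_time stream that looks up and removes the stored start time):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class TimeoutAlerting {

    static final String STORE = "pending-starts"; // assumed store name
    static final long THRESHOLD_MS = 5000L;

    public static KStream<String, Long> timeoutAlerts(StreamsBuilder builder) {
        builder.addStateStore(Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore(STORE), Serdes.String(), Serdes.Long()));

        KStream<String, Long> starts =
                builder.stream("input_time", Consumed.with(Serdes.String(), Serdes.Long()));

        // Remember every start time; a wall-clock punctuator emits an alert for entries
        // that have been pending longer than the threshold (case 3).
        return starts.transform(() -> new Transformer<String, Long, KeyValue<String, Long>>() {
            private ProcessorContext context;
            private KeyValueStore<String, Long> store;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                this.context = context;
                this.store = (KeyValueStore<String, Long>) context.getStateStore(STORE);
                context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, now -> {
                    try (KeyValueIterator<String, Long> it = store.all()) {
                        while (it.hasNext()) {
                            KeyValue<String, Long> entry = it.next();
                            if (now - entry.value > THRESHOLD_MS) {
                                context.forward(entry.key, now - entry.value); // alert: elapsed time
                                store.delete(entry.key);
                            }
                        }
                    }
                });
            }

            @Override
            public KeyValue<String, Long> transform(String requestId, Long startTime) {
                store.put(requestId, startTime); // nothing emitted for the start event itself
                return null;
            }

            @Override
            public void close() { }
        }, STORE);
    }
}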

Grails 3 - globally incremented number

I have a global number that gets incremented frequently but gets reset yearly. Right now I'm storing this global number as a single row:
class GlobalCounter {
    Integer counter = 0

    static constraints = {
    }
}
When I increment it, I use a lock in a transaction:
def globalCounter = GlobalCounter.lock(1)
globalCounter.counter = globalCounter.counter + 1
globalCounter.save()
globalCounter.discard()
The problem is that when I update it frequently, I get an exception instead of the code waiting:
Transaction (Process ID 61) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I know that when you add a new row, the database gives the option of using a primary key with auto increment, so I think this should be possible without deadlocks. How can I do it?

Spark performance - how to parallelize large loops?

I have a Spark application containing 8000 loop iterations in total, and it runs on a cluster of 5 nodes. Each node has 125GB of memory and 32 cores. The code in question looks like the following:
for (m <- 0 until deviceArray.size) { // there are 1000 devices
  var id = deviceArray(m)
  for (t <- 1 to timePatterns) { // there are 8 time patterns
    var hrpvData = get24HoursPVF(dataDF, id, t).cache()
    var hrpvDataZI = hrpvData.zipWithIndex
    var clustersLSD = runKMeans(hrpvData, numClusters, numIterations)
    var clusterPVPred = hrpvData.map(x => clustersLSD.predict(x))
    var clusterPVMap = hrpvDataZI.zip(clusterPVPred)
    var pvhgmRDD = clusterPVMap.map { r => (r._2, r._1._2) }.groupByKey
    var arrHGinfo = pvhgmRDD.collect
    // Post process data
    // .....
    hrpvData.unpersist()
  }
}
The function call get24HoursPVF() prepares the feature vectors for k-means and takes about 40 seconds. Each loop iteration takes about 50 seconds to finish on the cluster. My data size is 2 to 3 GB (read from tables). Given 8000 iterations, the total running time of this Spark application is unacceptable (8000 x 50s).
Since each device is independent, is there any way to parallelize the 8000 iterations? Or how can I use the cluster to solve the problem of the long total running time? Scala Futures won't work, because they just submit the jobs nearly simultaneously, but Spark won't run these jobs simultaneously.
Aside from the for loops, you've got two of the slowest API calls in Spark in your code there: groupByKey and collect.
groupByKey should almost never be used; look at reduceByKey instead (see the Databricks blog post on this for more details).
collect transfers all the data in that RDD to an array on the driver node; unless that's a small amount of data, it will have a fairly big performance impact.
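As a generic illustration of the difference, not tied to the asker's data (a word-count style pair RDD, written against Spark's Java API): reduceByKey pre-aggregates values on each partition before the shuffle, whereas groupByKey ships every single value across the network.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class ReduceVsGroup {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("reduce-vs-group").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaPairRDD<String, Integer> pairs = sc
                .parallelize(Arrays.asList("a", "b", "a", "c", "b", "a"))
                .mapToPair(w -> new Tuple2<>(w, 1));

        // groupByKey shuffles every (key, 1) pair and only then sums on the reducer side.
        JavaPairRDD<String, Integer> viaGroup = pairs.groupByKey()
                .mapValues(values -> {
                    int sum = 0;
                    for (int v : values) sum += v;
                    return sum;
                });

        // reduceByKey combines per partition before the shuffle: same result, far less network I/O.
        JavaPairRDD<String, Integer> viaReduce = pairs.reduceByKey(Integer::sum);

        // Collecting is only safe here because the aggregated result is tiny.
        System.out.println(viaGroup.collectAsMap());
        System.out.println(viaReduce.collectAsMap());
        sc.stop();
    }
}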
On the for loops, I'm not particularly familiar with what you're trying to do, but in
var hrpvData = get24HoursPVF(dataDF, id, t).cache()
you're building and caching a new dataframe for each id and t value. I'm not sure why you couldn't just build one single dataframe containing every variant of id and t at the start, and then run your zipWithIndex, map, etc. over that whole dataframe.

How to run an open ended quartz job in a time window and then exit cleanly with spring

I have a quartz job that updates some records in a database. Since the number of records to update is huge, it will do it in batches of 100 for a period of 2 hours every night between 1 and 3 AM.
So if it is still processing a batch when the 3 AM deadline is reached, I would like it to finish processing the current batch of 100 records and then exit cleanly. How can this be achieved? Can it be done with a cron expression alone, or is there some mechanism to achieve this graceful shutdown?
I am using Spring's SchedulerFactoryBean and MethodInvokingJobDetailFactoryBean classes to configure the job in my application context XML.
Here is some pseudo code of my job implementation method:
public void updateRecords()
{
    while (true) // I need to replace true with some other logic to exit from the loop at the scheduled time.
    {
        // 1. select 100 records
        // 2. update 100 records
        // 3. commit
    }
}
When updateRecords starts, store System.currentTimeMillis() and, in the while loop, check whether the current time has exceeded that start time + 2*60*60*1000 ms.
public void updateRecords()
{
    long jobStartedAt = System.currentTimeMillis();
    long twoHoursLater = jobStartedAt + 2 * 60 * 60 * 1000; // 2 hours, 60 minutes/hour, 60 seconds/minute, 1000 ms/second
    while (System.currentTimeMillis() < twoHoursLater)
    {
        // 1. select 100 records
        // 2. update 100 records
        // 3. commit
    }
}
Also:
Depending on your database properties, 100 records per batch could be a bit too small to be efficient. Don't be afraid to try larger batches.
And if the job takes very long, it could be because of heavy indexing. If you have the possibility, drop the indexes before doing large-scale (dumb) batching and rebuild them afterwards, provided they are not explicitly needed for the batch processing itself. Every index update can cause many more disk writes than the single change of a record.

Hbase quickly count number of rows

Right now I implement the row count over ResultScanner like this:
for (Result rs = scanner.next(); rs != null; rs = scanner.next()) {
    number++;
}
If the data reaches millions of rows, the computation takes a long time. I want to compute the count in real time, so I don't want to use MapReduce.
How can I quickly count the number of rows?
Use RowCounter in HBase
RowCounter is a mapreduce job to count all the rows of a table. This is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns of metadata inconsistency. It will run the mapreduce all in a single process but it will run faster if you have a MapReduce cluster in place for it to exploit.
$ hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename>
Usage: RowCounter [options]
<tablename> [
--starttime=[start]
--endtime=[end]
[--range=[startKey],[endKey]]
[<column1> <column2>...]
]
You can use the count method in the HBase shell to count the number of rows. But yes, counting rows of a large table can be slow.
count 'tablename' [interval]
The return value is the number of rows.
This operation may take a LONG time (Run ‘$HADOOP_HOME/bin/hadoop jar
hbase.jar rowcount’ to run a counting mapreduce job). Current count is shown
every 1000 rows by default. Count interval may be optionally specified. Scan
caching is enabled on count scans by default. Default cache size is 10 rows.
If your rows are small in size, you may want to increase this
parameter.
Examples:
hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000
The same commands also can be run on a table reference. Suppose you had a reference to table 't1', the corresponding commands would be:
hbase> t.count
hbase> t.count INTERVAL => 100000
hbase> t.count CACHE => 1000
hbase> t.count INTERVAL => 10, CACHE => 1000
If you cannot use RowCounter for whatever reason, then a combination of these two filters should be an optimal way to get a count:
FirstKeyOnlyFilter() AND KeyOnlyFilter()
The FirstKeyOnlyFilter will result in the scanner only returning the first column qualifier it finds, as opposed to the scanner returning all of the column qualifiers in the table, which will minimize the network bandwidth. What about simply picking one column qualifier to return? This would work if you could guarantee that the column qualifier exists for every row, but if that is not true then you would get an inaccurate count.
The KeyOnlyFilter will result in the scanner only returning the column family, and will not return any value for the column qualifier. This further reduces the network bandwidth, which in the general case wouldn't account for much of a reduction, but there can be an edge case where the first column picked by the previous filter just happens to be an extremely large value.
I tried playing around with scan.setCaching but the results were all over the place. Perhaps it could help.
I had 16 million rows between a start and stop key, on which I did the following pseudo-empirical testing:
With FirstKeyOnlyFilter and KeyOnlyFilter activated:
With caching not set (i.e., the default value), it took 188 seconds.
With caching set to 1, it took 188 seconds.
With caching set to 10, it took 200 seconds.
With caching set to 100, it took 187 seconds.
With caching set to 1000, it took 183 seconds.
With caching set to 10000, it took 199 seconds.
With caching set to 100000, it took 199 seconds.
With FirstKeyOnlyFilter and KeyOnlyFilter disabled:
With caching not set (i.e., the default value), it took 309 seconds.
I didn't bother to do proper testing on this, but it seems clear that the FirstKeyOnlyFilter and KeyOnlyFilter are good.
Moreover, the cells in this particular table are very small - so I think the filters would have been even better on a different table.
Here is a Java code sample:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
public class HBaseCount {
    public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        HTable table = new HTable(config, "my_table");

        Scan scan = new Scan(
            Bytes.toBytes("foo"), Bytes.toBytes("foo~")
        );

        if (args.length == 1) {
            scan.setCaching(Integer.valueOf(args[0]));
        }
        System.out.println("scan's caching is " + scan.getCaching());

        FilterList allFilters = new FilterList();
        allFilters.addFilter(new FirstKeyOnlyFilter());
        allFilters.addFilter(new KeyOnlyFilter());
        scan.setFilter(allFilters);

        ResultScanner scanner = table.getScanner(scan);
        int count = 0;

        long start = System.currentTimeMillis();
        try {
            for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
                count += 1;
                if (count % 100000 == 0) System.out.println(count);
            }
        } finally {
            scanner.close();
        }

        long end = System.currentTimeMillis();
        long elapsedTime = end - start;
        System.out.println("Elapsed time was " + (elapsedTime / 1000F));
    }
}
Here is a pychbase code sample:
from pychbase import Connection
c = Connection()
t = c.table('my_table')
# Under the hood this applies the FirstKeyOnlyFilter and KeyOnlyFilter
# similar to the happybase example below
print t.count(row_prefix="foo")
Here is a Happybase code sample:
from happybase import Connection
c = Connection(...)
t = c.table('my_table')
count = 0
for _ in t.scan(filter='FirstKeyOnlyFilter() AND KeyOnlyFilter()'):
count += 1
print count
Thanks to @Tuckr and @KennyCason for the tip.
Use the HBase rowcount map/reduce job that's included with HBase
A simple, effective and efficient way to count rows in HBase:
Whenever you insert a row, trigger this API, which increments that particular cell:
Htable.incrementColumnValue(Bytes.toBytes("count"), Bytes.toBytes("details"), Bytes.toBytes("count"), 1);
To check the number of rows present in the table, just use the "Get" or "Scan" API for that particular row 'count'.
Using this method you can get the row count in less than a millisecond.
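A minimal sketch of this bookkeeping approach, written against the newer Connection/Table client API rather than the deprecated HTable (the table name "my_table", the column family "details" and the counter row "count" are taken over from the snippet above and are illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCountBookkeeping {

    private static final byte[] COUNTER_ROW = Bytes.toBytes("count");
    private static final byte[] FAMILY = Bytes.toBytes("details");
    private static final byte[] QUALIFIER = Bytes.toBytes("count");

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) {

            // Call this once per inserted row (and with -1 per deleted row).
            table.incrementColumnValue(COUNTER_ROW, FAMILY, QUALIFIER, 1);

            // Reading the counter back is a single Get, i.e. effectively constant time.
            Result result = table.get(new Get(COUNTER_ROW));
            long rowCount = Bytes.toLong(result.getValue(FAMILY, QUALIFIER));
            System.out.println("Approximate row count: " + rowCount);
        }
    }
}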
To get the record count of an HBase table on a proper YARN cluster you have to set the MapReduce job queue name as well:
hbase org.apache.hadoop.hbase.mapreduce.RowCounter -Dmapreduce.job.queuename=<your queue name, to which you have SUBMIT access> <TABLE_NAME>
You can use the coprocessor mechanism that has been available since HBase 0.92. See Coprocessor and AggregateProtocol and the example.
Two ways worked for me to get the row count of an HBase table quickly.
Scenario #1
If the HBase table size is small, then log in to the HBase shell with a valid user and execute:
>count '<tablename>'
Example:
>count 'employee'
6 row(s) in 0.1110 seconds
Scenario #2
If the HBase table size is large, then execute the built-in RowCounter MapReduce job:
Log in to the Hadoop machine with a valid user and execute:
/$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter '<tablename>'
Example:
/$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'employee'
....
....
....
Virtual memory (bytes) snapshot=22594633728
Total committed heap usage (bytes)=5093457920
org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters
ROWS=6
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
If you're using a scanner, try to have it return the smallest number of qualifiers possible. In fact, the qualifier(s) that you do return should be the smallest (in byte size) that you have available. This will speed up your scan tremendously.
Unfortunately this will only scale so far (millions? billions?). To take it further, you can do this in real time, but you will first need to run a MapReduce job to count all rows.
Store the MapReduce output in a cell in HBase. Every time you add a row, increment the counter by 1. Every time you delete a row, decrement the counter.
When you need to access the number of rows in real time, you read that field in HBase.
There is no fast way to count the rows otherwise in a way that scales. You can only count so fast.
You can find a sample example here:
/**
 * Used to get the number of rows of the table
 * @param tableName
 * @param familyNames
 * @return the number of rows
 * @throws IOException
 */
public long countRows(String tableName, String... familyNames) throws IOException {
    long rowCount = 0;
    Configuration configuration = connection.getConfiguration();
    // Increase RPC timeout, in case of a slow computation
    configuration.setLong("hbase.rpc.timeout", 600000);
    // Default is 1, set to a higher value for faster scanner.next(..)
    configuration.setLong("hbase.client.scanner.caching", 1000);
    AggregationClient aggregationClient = new AggregationClient(configuration);
    try {
        Scan scan = new Scan();
        if (familyNames != null && familyNames.length > 0) {
            for (String familyName : familyNames) {
                scan.addFamily(Bytes.toBytes(familyName));
            }
        }
        rowCount = aggregationClient.rowCount(TableName.valueOf(tableName), new LongColumnInterpreter(), scan);
    } catch (Throwable e) {
        throw new IOException(e);
    }
    return rowCount;
}
Go to the HBase home directory and run this command:
./bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'namespace:tablename'
This will launch a MapReduce job and the output will show the number of records existing in the HBase table.
You could try the HBase API methods:
org.apache.hadoop.hbase.client.coprocessor.AggregationClient
