Hadoop Cassandra job not reading all input rows

I have a quite simple Hadoop job using Cassandra as input and output. Here is the job configuration code (nothing special):
Job job = new Job(getConf(), JOB_NAME);
job.setJarByClass(getClass());
job.setMapperClass(CassandraHadoopCounterMapper.class);
job.setReducerClass(CassandraHadoopCounterReducer.class);
job.setCombinerClass(CassandraHadoopCounterCombiner.class);
job.setInputFormatClass(CqlInputFormat.class);
job.setOutputFormatClass(CqlOutputFormat.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Map.class);
job.setOutputValueClass(List.class);
ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE, INPUT_COLUMN_FAMILY, WIDE_ROWS);
ConfigHelper.setOutputColumnFamily(job.getConfiguration(), KEYSPACE, OUTPUT_COLUMN_FAMILY);
ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
ConfigHelper.setOutputRpcPort(job.getConfiguration(), "9160");
ConfigHelper.setInputInitialAddress(job.getConfiguration(), "localhost");
ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "localhost");
ConfigHelper.setInputPartitioner(job.getConfiguration(), Murmur3Partitioner.class.getName());
ConfigHelper.setOutputPartitioner(job.getConfiguration(), Murmur3Partitioner.class.getName());
String query = "UPDATE " + KEYSPACE + "." + OUTPUT_COLUMN_FAMILY + " SET c = ?";
CqlConfigHelper.setOutputCql(job.getConfiguration(), query);
// additional properties:
CqlConfigHelper.setInputCQLPageRowSize(job.getConfiguration(), "2000");
ConfigHelper.setInputSplitSize(job.getConfiguration(), 4 * 64 * 1024);
My input Cassandra table has 10k rows.
In Hadoop I have set max mappers = 2 and max reducers = 2.
In the job counters I can see the following:
Map input records=4000
which is InputCQLPageRowSize * mappers (2000 * 2).
If InputCQLPageRowSize is not set, Map input records equals 2000 (the default InputCQLPageRowSize is 1000).
My question:
How do I make my Hadoop job read all rows in the input table?
The job is run entirely locally on my PC.
I am using Cassandra v2.0.11 and Hadoop v1.0.4

My problem was related to a bug in Cassandra 2.0.11 that added a spurious LIMIT clause to the underlying CQL query used to read data into the map tasks.
I reported the issue in the Cassandra JIRA:
https://issues.apache.org/jira/browse/CASSANDRA-9074
It turned out that the problem was strictly related to the following bug, fixed in Cassandra 2.0.12:
https://issues.apache.org/jira/browse/CASSANDRA-8166
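As a sanity check while debugging this kind of mismatch, it helps to compare the job's "Map input records" counter against a direct row count of the input table. A minimal sketch, assuming the DataStax Java driver (com.datastax.driver.core) is available on the classpath, which the job itself does not otherwise require; KEYSPACE and INPUT_COLUMN_FAMILY are the constants from the job configuration above:
// Hedged sketch: count rows directly and compare with the Hadoop counter.
Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
Session session = cluster.connect();
long total = session.execute(
        "SELECT COUNT(*) FROM " + KEYSPACE + "." + INPUT_COLUMN_FAMILY).one().getLong(0);
System.out.println("Rows in input table: " + total); // expected 10000 here
cluster.close();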

Related

Hive internal table takes block size larger than configured

My configuration for dfs.blocksize is 128M, and if I upload or create any file it takes a block size of 128M, which is fine. But when I create a Hive table, however small it may be, it takes a block size of 256M.
Can we set the block size of a table when it is created? I don't know how that is done.
UPDATE
I am using Spark SQL.
SparkSession spark = SparkSession.builder()
    .appName("Java Spark SQL basic example")
    .enableHiveSupport()
    .config("spark.sql.warehouse.dir", "hdfs://bigdata-namenode:9000/user/hive/warehouse")
    .config("mapred.input.dir.recursive", true)
    .config("hive.mapred.supports.subdirectories", true)
    .config("spark.sql.hive.thriftServer.singleSession", true)
    .config("hive.exec.dynamic.partition.mode", "nonstrict")
    //.master("local")
    .getOrCreate();
String query1 = String.format("INSERT INTO TABLE bm_top."+orc_table+" SELECT icode, store_code,division,from_unixtime(unix_timestamp(bill_date,'dd-MMM-yy'),'yyyy-MM-dd'), qty, bill_no, mrp FROM bm_top.temp_ext_table");
spark.sql(query1);
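One hedged thing to check: since the target is an ORC table (the insert above goes into orc_table), the ORC writer has its own default block size (reportedly 256 MB) independent of dfs.blocksize, which would match the observed 256M blocks. A minimal sketch of overriding both per session; the hive.exec.orc.default.block.size key is an assumption about the Hive version in use:
// Hedged sketch: request 128 MB blocks from both HDFS and the ORC writer.
SparkSession spark = SparkSession.builder()
    .appName("Java Spark SQL basic example")
    .enableHiveSupport()
    .config("spark.sql.warehouse.dir", "hdfs://bigdata-namenode:9000/user/hive/warehouse")
    .config("dfs.blocksize", "134217728")                    // 128 MB HDFS block size
    .config("hive.exec.orc.default.block.size", "134217728") // 128 MB ORC block size (assumed key)
    .getOrCreate();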

Map Reduce job on EMR successfully running but no output data on S3

I'm running an MR job on the EMR master host.
My input file is in S3 and the output goes to a Hive table via HCatalog.
The job runs successfully and I do see reducer output rows, but looking at the new partition folders on S3 I can only see the 0-byte SUCCESS file from MR and no actual data files.
Note: when the reducer stage starts I do see files being written to a temp folder on S3, but it seems the final step moves the files somewhere else.
I don't see any errors in the MR logs.
Relevant MR driver code:
Job job = Job.getInstance();
job.setJobName("Build Events");
job.setJarByClass(LoggersApp.class);
job.getConfiguration().set("fs.defaultFS", "s3://my-bucket");
// set input paths
Path[] inputPaths = new Path[] { new Path("file on s3") };
FileInputFormat.setInputPaths(job, inputPaths);
// set input/output formats
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(HCatOutputFormat.class);
_configureOutputTable(job);

private void _setReducer(Job job) {
    job.setReducerClass(Reducer.class);
    job.setOutputValueClass(DefaultHCatRecord.class);
}

private void _configureOutputTable(Job job) throws IOException {
    OutputJobInfo jobInfo = OutputJobInfo.create(_cli.getOptionValue("hive-dbname"),
            _cli.getOptionValue("output-table"), null);
    HCatOutputFormat.setOutput(job, jobInfo);
    HCatSchema schema = HCatOutputFormat.getTableSchema(job.getConfiguration());
    HCatFieldSchema partitionDate = new HCatFieldSchema("date", TypeInfoFactory.stringTypeInfo, null);
    HCatFieldSchema partitionBatchId = new HCatFieldSchema("batch_id", TypeInfoFactory.stringTypeInfo, null);
    schema.append(partitionDate);
    schema.append(partitionBatchId);
    HCatOutputFormat.setSchema(job, schema);
}
Any help?
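For comparison, a hedged sketch of the output class wiring used in the stock HCatOutputFormat examples, which the snippet above does not show; nothing here is confirmed to be the cause of the missing files:
// Hedged sketch, following the standard HCatOutputFormat example wiring;
// the WritableComparable keys are ignored by HCatOutputFormat.
job.setMapOutputKeyClass(WritableComparable.class);
job.setMapOutputValueClass(DefaultHCatRecord.class);
job.setOutputKeyClass(WritableComparable.class);
job.setOutputValueClass(DefaultHCatRecord.class);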

Apache Spark 1.2.1 standalone cluster giving java heap space error

I need information about how to figure out how much heap space (memory) is needed to operate on x MB (say x is 600 MB) in a Spark standalone cluster.
Scenario:
I have a standalone cluster with 14 GB memory and 8 cores. I want to operate (reading data from files and writing it to Cassandra) on 600 MB of data.
For this task I have set the SparkConf as:
.set("spark.cassandra.output.throughput_mb_per_sec","800")
.set("spark.storage.memoryFraction", "0.3")
and --executor-memory=5g --total-executor-cores 6 --driver-memory 6g when submitting the task.
In spite of the above configuration, I am getting a Java heap space error while writing data to Cassandra.
Below is the java code:
public static void main(String[] args) throws Exception {
    String fileName = args[0];
    Long now = new Date().getTime();
    SparkConf conf = new SparkConf(true)
            .setAppName("JavaSparkSQL_" + now)
            .set("spark.cassandra.connection.host", "192.168.1.65")
            .set("spark.cassandra.connection.native.port", "9042")
            .set("spark.cassandra.connection.rpc.port", "9160")
            .set("spark.cassandra.output.throughput_mb_per_sec", "800")
            .set("spark.storage.memoryFraction", "0.3");
    JavaSparkContext ctx = new JavaSparkContext(conf);
    JavaRDD<String> input = ctx.textFile("hdfs://abc.xyz.net:9000/figmd/resources/" + fileName, 12);
    JavaRDD<PlanOfCare> result = input.mapPartitions(new ParseJson()).filter(new PickInputData());
    System.out.print("Count --> " + result.count());
    System.out.println(StringUtils.join(result.collect(), ","));
    javaFunctions(result).writerBuilder("ks", "pt_planofcarelarge", mapToRow(PlanOfCare.class)).saveToCassandra();
}
What configuration am I supposed to use? Am I missing anything?
Thanks in advance.
JavaRDD's collect method returns a list that contains all of the elements in the RDD.
So in your case it will create a collection with 340000 elements, which results in a Java heap error. You may want to take a small sample of your data and collect that, or save the RDD directly to disk.
For more information about JavaRDD, you can always refer to the official documentation.
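A minimal sketch of the two alternatives suggested above, reusing the result RDD from the question; the output path is hypothetical:
// Inspect only a small driver-side sample instead of collecting everything:
List<PlanOfCare> sample = result.take(10);
System.out.println(StringUtils.join(sample, ","));
// Or persist the full RDD to storage without pulling it into driver memory:
result.saveAsTextFile("hdfs://abc.xyz.net:9000/figmd/output/planofcare_debug"); // hypothetical output path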

hadoop mapreduce job on Cassandra result is wrong

I have a 7-node Cassandra (1.1.1) and Hadoop (1.0.3) cluster (a TaskTracker is installed on every Cassandra node).
My column family uses the wide-row pattern; one row contains about 200k columns (max about 300k).
My problem is that when we use Hadoop to run analytic jobs (counting the number of occurrences of a word), the result I receive is wrong (much lower than I expected from the test records).
There is also something strange when monitoring the JobTracker: the map task progress is reported incorrectly (see my screenshot, not included here), and the number of "Map input records" is not the same when I rerun the job on the same data.
Here is my job init code:
Job job = new Job(conf);
job.setJobName(this.jobname);
job.setJarByClass(BannerCount.class);
job.setMapperClass(BannerViewMapper.class);
job.setReducerClass(BannerClickReducer.class);
FileSystem fs = FileSystem.get(conf);
ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
ConfigHelper.setInputInitialAddress(job.getConfiguration(), "192.168.23.114,192.168.23.115,192.168.23.116,192.168.23.117,192.168.23.121,192.168.23.122,192.168.23.123");
ConfigHelper.setInputPartitioner(job.getConfiguration(), "org.apache.cassandra.dht.RandomPartitioner");
ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE, COLUMN_FAMILY, true);
ConfigHelper.setRangeBatchSize(job.getConfiguration(), 500);
SlicePredicate predicate = new SlicePredicate();
SliceRange sliceRange = new SliceRange();
sliceRange.setStart(ByteBufferUtil.EMPTY_BYTE_BUFFER);
sliceRange.setFinish(ByteBufferUtil.EMPTY_BYTE_BUFFER);
sliceRange.setCount(200000);
predicate.setSlice_range(sliceRange);
ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);
String outPathString = "BannerViewResultV3" + COLUMN_FAMILY;
if (fs.exists(new Path(outPathString)))
fs.delete(new Path(outPathString), true);
FileOutputFormat.setOutputPath(job, new Path(outPathString));
job.setInputFormatClass(ColumnFamilyInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(LongWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(LongWritable.class);
job.setNumReduceTasks(28);
job.waitForCompletion(true);
return 1;
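One detail worth double-checking, as a hedged sketch only: the slice predicate above caps each row at 200000 columns while rows are described as holding up to about 300k, so very wide rows could be truncated. This is a possible factor, not a confirmed cause (wide-row mode may page past the cap anyway):
// Hedged sketch: request more columns per row than the widest expected row.
sliceRange.setCount(400000); // illustrative value above the stated ~300k maximum
predicate.setSlice_range(sliceRange);
ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);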

How to set system env of mapper?

First I tried How to set system environment variable from Mapper Hadoop? but mapred.map.child.env doesn't work for me.
I am using Hadoop 0.20.1. I want to pass all system environment variables from the class that started the job to the mapper. Here is what I do:
StringBuilder envStr = new StringBuilder();
for (Entry<String, String> entry : System.getenv().entrySet()) {
envStr.append(entry.getKey() + "=" + entry.getValue() + ",");
}
if (envStr.length() > 0) {
envStr.deleteCharAt(envStr.length() - 1);
}
// System.out.println("Setting mapper child env to :" + envStr);
getConf().set("mapred.map.child.env", envStr.toString());
But it doesn't work. I also tried setting just a single value, but that doesn't work either. In the mapper, System.getenv() doesn't contain the value, yet job.xml has the key and value. Is there any way to do this?
It seems that your Hadoop is too old. This is a bug in Hadoop 0.20.
Please upgrade to 0.21 or the more stable 1.0.x.
See the related JIRA and the Hadoop 0.21.0 release notes for more information.
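Once on a fixed version, a quick hedged way to confirm that the variables actually reach the task is to log one of them from the mapper's setup(); MY_FLAG is a hypothetical variable name:
// Hedged sketch: verify inside the mapper that mapred.map.child.env took effect.
@Override
protected void setup(Context context) {
    System.out.println("MY_FLAG=" + System.getenv("MY_FLAG"));
}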
