I am writing a Spark Streaming application using a file stream:
val probeFileLines = ssc.fileStream[LongWritable, Text, TextInputFormat]("/data-sources/DXE_Ver/1.4/MTN_Abuja/DXE/20160221/HTTP", filterF, false) //.persist(StorageLevel.MEMORY_AND_DISK_SER)
But I get a file I/O exception:
16/09/07 10:20:30 WARN FileInputDStream: Error finding new files
java.io.FileNotFoundException: /mapr/cellos-mapr/data-sources/DXE_Ver/1.4/MTN_Abuja/DXE/20160221/HTTP
at com.mapr.fs.MapRFileSystem.listMapRStatus(MapRFileSystem.java:1486)
at com.mapr.fs.MapRFileSystem.listStatus(MapRFileSystem.java:1523)
at com.mapr.fs.MapRFileSystem.listStatus(MapRFileSystem.java:86)
The directory does exist in my cluster.
I am running the job with spark-submit:
spark-submit --class "StreamingEngineSt" target/scala-2.11/sprkhbase_2.11-1.0.2.jar
This could be related to file permissions or ownership (maybe it needs to run as the hdfs user).
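One quick way to check that theory is to compare the directory's owner and permissions with the user that runs spark-submit; a sketch using the path from the fileStream call above:

hadoop fs -ls /data-sources/DXE_Ver/1.4/MTN_Abuja/DXE/20160221/HTTP   # shows owner, group and permissions
whoami                                                                # user the job is submitted as

If the owner differs from the submitting user and the permissions are restrictive, fix the permissions or submit the job as that user.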
I am trying to access S3 files from a local Spark context using PySpark.
I keep getting:
File "C:\Spark\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o20.parquet.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3native.NativeS3FileSystem not found
I had set os.environ['AWS_ACCESS_KEY_ID'] and
os.environ['AWS_SECRET_ACCESS_KEY'] before I called df = sqc.read.parquet(input_path). I also added these lines:
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3.awsSecretAccessKey", os.environ["AWS_SECRET_ACCESS_KEY"])
hadoopConf.set("fs.s3.awsAccessKeyId", os.environ["AWS_ACCESS_KEY_ID"])
I have also tried changing s3 to s3n and s3a; none of them worked.
Any idea how to make it work?
I am on Windows 10, pySpark, Spark 1.6.1 built for Hadoop 2.6.0
I'm running pyspark with the libraries from hadoop-aws appended.
You will need to use s3n in your input path. I'm running this on macOS, so I'm not sure whether it will work on Windows.
$SPARK_HOME/bin/pyspark --packages org.apache.hadoop:hadoop-aws:2.7.1
This package declaration works even in spark-shell
spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.1
and then set the credentials in the shell:
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "xxxxxxxxxxxxx")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "xxxxxxxxxxxxxxxxx")
I am using Hortonworks Sandbox 2.0, which contains the following versions of HBase and Hive:
Component Version
------------------------
Apache Hadoop 2.2.0
Apache Hive 0.12.0
Apache HBase 0.96.0
Apache ZooKeeper 3.4.5
I am trying to register my HBase table in Hive using the following query:
CREATE TABLE IF NOT EXISTS Document_Table_Hive (key STRING, author STRING, category STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,metadata:author,categories:category')
TBLPROPERTIES ('hbase.table.name' = 'Document');
This does not work; I get the following exception:
2014-03-26 09:14:57,341 ERROR exec.DDLTask (DDLTask.java:execute(435)) - java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
at org.apache.hadoop.hive.hbase.HBaseStorageHandler.setConf(HBaseStorageHandler.java:249)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
2014-03-26 09:14:57,368 ERROR ql.Driver (SessionState.java:printError(419)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org/apache/hadoop/hbase/HBaseConfiguration
I have already created the HBase table 'Document', and the describe command gives the following description:
'Document',
{NAME => 'categories',..},
{NAME => 'comments',..},
{NAME => 'metadata',..}
I have tried the following things:
Add hive.aux.jars.path in hive-site.xml:
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///etc/hbase/conf/hbase-site.xml,file:///usr/lib/hbase/lib/hbase-common-0.96.0.2.0.6.0-76-hadoop2.jar,file:///usr/lib/hive/lib/hive-hbase-handler-0.12.0.2.0.6.0-76.jar,file:///usr/lib/hbase/lib/hbase-client-0.96.0.2.0.6.0-76-hadoop2.jar,file:///usr/lib/zookeeper/zookeeper-3.4.5.2.0.6.0-76.jar</value>
</property>
Add jars using the Hive add jar command:
add jar /usr/lib/hbase/lib/hbase-common-0.96.0.2.0.6.0-76-hadoop2.jar;
add jar /usr/lib/hive/lib/hive-hbase-handler-0.12.0.2.0.6.0-76.jar;
add jar /usr/lib/hbase/lib/hbase-client-0.96.0.2.0.6.0-76-hadoop2.jar;
add jar /usr/lib/zookeeper/zookeeper-3.4.5.2.0.6.0-76.jar;
add file /etc/hbase/conf/hbase-site.xml
Specify HADOOP_CLASSPATH:
export HADOOP_CLASSPATH=/etc/hbase/conf:/usr/lib/hbase/lib/hbase-common-0.96.0.2.0.6.0-76-hadoop2:/usr/lib/zookeeper/zookeeper-3.4.5.2.0.6.0-76.jar
And it is still not working!
How can I add the jars to the Hive classpath so that it finds the HBaseConfiguration class, or is this an entirely different issue?
There is no need to add all the jars. Just hbase-*.jar, zookeeper*.jar, and hive-hbase-handler*.jar are enough. By default all Hadoop-related jars are added to the classpath, since Hive internally uses the hadoop command to execute.
Or
Instead of copying the HBase jars into the Hive library directory, setting the HIVE_AUX_JARS_PATH environment variable to /usr/lib/hbase/lib/ in /etc/hive/conf/hive-env.sh will also do.
The second approach is preferred over the first.
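A sketch of that second approach, in /etc/hive/conf/hive-env.sh, using the path from the answer above:

# Point Hive at the HBase lib directory instead of listing individual jars
export HIVE_AUX_JARS_PATH=/usr/lib/hbase/lib/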
Has anyone successfully loaded data into hbase-0.98.0 from pig-0.12.0 on hadoop-2.2.0 without encountering this error:
ERROR 2998: Unhandled internal error.
org/apache/hadoop/hbase/filter/WritableByteArrayComparable
with this line in the log trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/WritableByteArra
I searched the web and found a handful of similar problems and solutions, but all of them refer to pre-Hadoop-2 and hbase-0.94.x setups, which do not apply to my situation.
I have a 5-node hadoop-2.2.0 cluster, a 3-node hbase-0.98.0 cluster, and a client machine with hadoop-2.2.0, hbase-0.98.0, and pig-0.12.0 installed. Each of them works fine on its own: HDFS, MapReduce, the region servers, and Pig all run without problems. To complete a "loading data to HBase from Pig" example, I have the following export:
export PIG_CLASSPATH=$HADOOP_INSTALL/etc/hadoop:$HBASE_PREFIX/lib/*.jar:$HBASE_PREFIX/lib/protobuf-java-2.5.0.jar:$HBASE_PREFIX/lib/zookeeper-3.4.5.jar
When I tried to run pig -x local -f loaddata.pig,
boom, the following error: ERROR 2998: Unhandled internal error. org/apache/hadoop/hbase/filter/WritableByteArrayComparable (this must be the 100th time I've hit it over countless attempts to find a working setup).
The trace log shows: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/WritableByteArrayComparable
The following is my Pig script:
REGISTER /usr/local/hbase/lib/hbase-*.jar;
REGISTER /usr/local/hbase/lib/hadoop-*.jar;
REGISTER /usr/local/hbase/lib/protobuf-java-2.5.0.jar;
REGISTER /usr/local/hbase/lib/zookeeper-3.4.5.jar;
raw_data = LOAD '/home/hdadmin/200408hourly.txt' USING PigStorage(',');
weather_data = FOREACH raw_data GENERATE $1, $10;
ranked_data = RANK weather_data;
final_data = FILTER ranked_data BY $0 IS NOT NULL;
STORE final_data INTO 'hbase://weather' USING
org.apache.pig.backend.hadoop.hbase.HBaseStorage('info:date info:temp');
I have successfully created an HBase table 'weather'.
Has anyone gotten this working and would be willing to share?
Rebuild Pig against the newer HBase API:
ant clean jar-withouthadoop -Dhadoopversion=23 -Dhbaseversion=95
By default Pig builds against HBase 0.94; 94 and 95 are the only options. WritableByteArrayComparable was removed after the 0.94 line, which is why a default-built Pig fails against hbase-0.98.
If you know which jar file contains the missing class, e.g. org/apache/hadoop/hbase/filter/WritableByteArrayComparable, then you can use the pig.additional.jars property when running the pig command to ensure that the jar file is available to all the mapper tasks.
pig -D pig.additional.jars=FullPathToJarFile.jar bulkload.pig
Example:
pig -D pig.additional.jars=/usr/lib/hbase/lib/hbase-protocol.jar bulkload.pig
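If more than one jar is needed, pig.additional.jars takes a colon-separated list. A sketch reusing the /usr/local/hbase/lib path from the script above; the exact jar file names depend on your HBase build and are placeholders here:

pig -D pig.additional.jars=/usr/local/hbase/lib/hbase-common.jar:/usr/local/hbase/lib/hbase-client.jar bulkload.pig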
I want to run a map-only job in Hadoop MapReduce. Here's my code:
Configuration conf = new Configuration();
Job job = new Job(conf);
job.setJobName("import");
job.setMapperClass(Map.class);//Custom Mapper
job.setInputFormatClass(TextInputFormat.class);
job.setNumReduceTasks(0);
TextInputFormat.setInputPaths(job, new Path("/home/jonathan/input"));
But I get the error:
13/07/17 18:22:48 ERROR security.UserGroupInformation: PriviledgedActionException
as: jonathan cause:org.apache.hadoop.mapred.InvalidJobConfException:
Output directory not set.
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException:
Output directory not set.
Then I tried to use this:
job.setOutputFormatClass(org.apache.hadoop.mapred.lib.NullOutputFormat.class);
But it gives me a compilation error:
java: method setOutputFormatClass in class org.apache.hadoop.mapreduce.Job
cannot be applied to given types;
required: java.lang.Class<? extends org.apache.hadoop.mapreduce.OutputFormat>
found: java.lang.Class<org.apache.hadoop.mapred.lib.NullOutputFormat>
reason: actual argument java.lang.Class
<org.apache.hadoop.mapred.lib.NullOutputFormat> cannot be converted to
java.lang.Class<? extends org.apache.hadoop.mapreduce.OutputFormat>
by method invocation conversion
What am I doing wrong?
Map-only jobs still need an output location specified. As the error says, you're not specifying this.
I think you mean that your job produces no output at all. Hadoop still wants you to specify an output location, though nothing need be written.
You want org.apache.hadoop.mapreduce.lib.output.NullOutputFormat, not org.apache.hadoop.mapred.lib.NullOutputFormat, which is what the second error indicates, though it's subtle.
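A minimal sketch of the corrected driver, assuming the same custom Map mapper and input path as in the question (the ImportDriver class name is made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ImportDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJobName("import");
        job.setMapperClass(Map.class);                  // custom mapper from the question
        job.setInputFormatClass(TextInputFormat.class);
        job.setNumReduceTasks(0);                       // map-only: no reduce phase
        TextInputFormat.setInputPaths(job, new Path("/home/jonathan/input"));
        // NullOutputFormat from the mapreduce (not mapred) package satisfies the
        // "output directory not set" check without writing anything.
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}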
I have a single-node cluster from which I collected logs and fed them to TraceBuilder, and it works.
I have also grouped a 5-node cluster under the default rack and collected logs; there the job and topology traces are generated properly.
Now I have set up a 5-node cluster with each node mapped to a different rack.
I have hadoop-0.20.2 set up in Eclipse Helios, so I ran TraceBuilder using
Main Class: org.apache.hadoop.tools.rumen.TraceBuilder
I ran some jobs on the cluster and used a copy of the master node's /usr/local/hadoop/logs/history folder as input to TraceBuilder.
Arguments: /home/arun/job.json /home/arun/topology.json /home/ubuntu/Documents/testlog
But I get:
11/12/16 12:02:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
11/12/16 12:02:38 WARN rumen.TraceBuilder: TraceBuilder got an error while processing the [possibly virtual] file master_1324011575958_job_201112161029_0001_hduser_word+count within Path file:/home/ubuntu/Documents/testlog/master_1324011575958_job_201112161029_0001_hduser_word+count
java.lang.NullPointerException
at org.apache.hadoop.tools.rumen.JobBuilder.processTaskAttemptFinishedEvent(JobBuilder.java:492)
at org.apache.hadoop.tools.rumen.JobBuilder.process(JobBuilder.java:149)
at org.apache.hadoop.tools.rumen.TraceBuilder.processJobHistory(TraceBuilder.java:310)
at org.apache.hadoop.tools.rumen.TraceBuilder.run(TraceBuilder.java:264)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:83)
at org.apache.hadoop.tools.rumen.TraceBuilder.main(TraceBuilder.java:142)
.....................
It generates the job trace JSON file, but fields like hostname and location are "null" in it, and the topology trace JSON file doesn't contain the 5 nodes' info; it looks like this:
{
"name" : "<root>",
"children" : [ ]
}
Can anyone help me out?
This error occurs because no expected input file was found in the input directory.
The input directory must contain job history files, for example job_201205192032_0006_conf.xml. These files are stored inside the logs/history folder, but under subdirectories generated according to the job and its execution date.
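One way to locate the actual history files before passing a directory to TraceBuilder; a sketch using the log path from the question:

# List the per-job history files wherever they sit under logs/history;
# the subdirectory that contains them is what TraceBuilder should be pointed at.
find /usr/local/hadoop/logs/history -name 'job_*'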