I'm on RHEL 7, 64-bit. I managed to build what appeared to be a complete Hadoop 2.4.1 distribution from source. Before that, I built Snappy from source and installed it. Then I built the Hadoop distribution with
mvn clean install -Pdist,native,src -DskipTests -Dtar -Dmaven.javadoc.skip=true -Drequire.snappy
Yet when I look in $HADOOP_HOME/lib/native I see the HDFS and Hadoop libraries but not Snappy, so when I run hadoop checknative it says that I don't have Snappy installed. Furthermore, I downloaded hadoop-snappy and compiled /that/, and it generated the Snappy libraries. I copied those over to $HADOOP_HOME/lib/native /and/ to $HADOOP_HOME/lib for good measure. STILL, hadoop checknative doesn't see it!
I found the non-obvious solution in an obscure place: http://lucene.472066.n3.nabble.com/Issue-with-loading-the-Snappy-Codec-td3910039.html
I needed to add -Dcompile.native=true. This was not highlighted in the Apache build documentation, nor in any build guide I've come across!
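For anyone hitting the same wall, here is the full invocation with that flag added, plus a quick verification step. This is a sketch: it assumes you are in the Hadoop source root and that $HADOOP_HOME points at the installed distribution.

```shell
# Rebuild with snappy required and native compilation explicitly enabled
mvn clean install -Pdist,native,src -DskipTests -Dtar \
    -Dmaven.javadoc.skip=true -Drequire.snappy -Dcompile.native=true

# Verify: libsnappy should now sit alongside the other native libraries,
# and checknative should report snappy as available
ls $HADOOP_HOME/lib/native
hadoop checknative -a
```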
Related
I found some references to the -Phadoop-provided flag for building Spark without Hadoop libraries, but I cannot find a good example of how to use it. How can I build Spark from source and make sure it does not add any of its own Hadoop dependencies? It looks like when I built the latest Spark it included a bunch of Hadoop 2.8.x artifacts, which conflict with my cluster's Hadoop version.
Spark has download options for "pre-built with user-provided Hadoop", which are accordingly named spark-VERSION-bin-without-hadoop.tgz.
If you would really like to build it yourself, run this from the project root:
./build/mvn -Phadoop-provided -DskipTests clean package
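Note that a "without Hadoop" build still has to find your cluster's Hadoop classes at runtime. Assuming the hadoop command from your cluster install is on PATH, this is the documented way to wire them in (a configuration sketch; adjust to your environment):

```shell
# In conf/spark-env.sh of the hadoop-provided build:
# point Spark at the Hadoop jars already installed on this machine,
# so it uses your cluster's Hadoop version instead of bundling its own
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
```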
I downloaded Hadoop 2.7, and every installation guide I have found mentions the /etc/hadoop/.. directory, but the distribution I downloaded doesn't have this directory.
I tried Hadoop 2.6 as well, and it doesn't have this directory either.
Should I create these directories?
Caveat: I am a complete newbie!
Thanks in advance.
It seems you have downloaded the source. Build the Hadoop source and then you will get that folder.
To build the Hadoop source, refer to the BUILDING.txt file available in the Hadoop package that you downloaded.
Try downloading hadoop-2.6.0.tar.gz instead of hadoop-2.6.0-src.tar.gz from the Hadoop archive. As Kumar mentioned, you might have the source distribution.
If you don't want to compile Hadoop from source, download hadoop-2.6.0.tar.gz from the link given above.
Try downloading the compiled Hadoop from the link below instead of the source:
http://a.mbbsindia.com/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
You will get the /etc/hadoop/.. path in it.
Download it from the Apache website:
Apache Hadoop link
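If in doubt about which tarball you have, you can list its contents before unpacking. The binary distribution contains an etc/hadoop directory with the configuration files; the source distribution does not. The file name below is illustrative:

```shell
# Peek inside the downloaded tarball without extracting it;
# a binary distribution will show entries like hadoop-2.6.0/etc/hadoop/core-site.xml
tar tzf hadoop-2.6.0.tar.gz | grep 'etc/hadoop' | head
```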
After compiling Hadoop 2.5.1 with Maven,
hadoop version
reports Hadoop 2.5.1. I then tried to compile Apache Spark using the following command:
mvn -Pyarn -Phadoop-2.5 -Dhadoop.version=2.5.1 -Pdeb -DskipTests clean package
But apparently there is no hadoop-2.5 profile.
My question is: what should I do?
rebuild with Hadoop 2.4,
compile Spark with the hadoop-2.4 profile,
or any other solution?
It looks like this was answered after the poster asked the same question on the Spark user list:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-1-0-with-Hadoop-2-5-0-td15827.html
"The hadoop-2.4 profile is really intended to be "Hadoop 2.4+". It
should compile and run fine with Hadoop 2.5 as far as I know. CDH 5.2
is Hadoop 2.5 + Spark 1.1, so there is evidence it works."
Just changing the profile name worked for me.
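Concretely, that means keeping the hadoop-2.4 profile but passing the real Hadoop version. A sketch based on the command in the question:

```shell
# hadoop-2.4 profile is "Hadoop 2.4+", so pair it with the actual version
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.5.1 -Pdeb -DskipTests clean package
```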
Thanks for the answers.
I'm using Mahout Cookbook, which shows examples for Mahout 0.8 and uses Hadoop 0.23.5.
I'm new to the whole system, so I would like to know which Hadoop version to use when running Mahout 0.9?
Thanks
When pulling Mahout 0.9 from Maven, it includes hadoop-core version 1.2.1. Mahout 0.9 does not work with Hadoop 2, according to this. That is resolved in the latest master branch on GitHub, but it requires you to recompile Mahout from source against the Hadoop 2 libraries. Mahout 1.0 should support Hadoop 2.x versions.
If you choose to run Mahout 0.9 with Hadoop 2, you can follow these steps to make it work:
git clone https://github.com/apache/mahout.git
In the Mahout folder, type:
mvn -Dhadoop2.version=2.2.0 -DskipTests clean install
mvn -Dhadoop2.version=2.2.0 clean package
And below is a usage example for recommenditembased:
bin/mahout recommenditembased --input input/input.txt --output output --usersFile input/users.txt --similarityClassname SIMILARITY_COOCCURRENCE
Edit: original source is http://mahout.apache.org/developers/buildingmahout.html
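The recommenditembased example above assumes a running Hadoop cluster. To smoke-test a freshly built Mahout without one, the launcher script can be forced into local mode; MAHOUT_LOCAL is the environment variable honored by bin/mahout (input paths here are the illustrative ones from the example):

```shell
# Any non-empty value makes bin/mahout run on the local filesystem
# instead of submitting to Hadoop/HDFS
export MAHOUT_LOCAL=true
bin/mahout recommenditembased --input input/input.txt --output output \
    --usersFile input/users.txt --similarityClassname SIMILARITY_COOCCURRENCE
```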
This version of Mahout also runs with the Hadoop 0.2 core JAR.
I am using it on a Windows machine, since from 0.2 onwards Hadoop gives a permission exception on Windows systems.
I need some guidance on installing Oozie on Hadoop 2.2. The Quick Start docs page indicates that
IMPORTANT: By default it builds against Hadoop 1.1.1. It's possible to
build against Hadoop 2.x versions as well, but it is strongly
recommend to use a Bigtop distribution if using Hadoop 2.x because the
Oozie sharelibs built from the tarball distribution will not work with
it.
I haven't been able to get Bigtop to work.
I tried following some guidance from here but it only tells me to edit the pom.xml files, not what to edit in them.
I have pig and maven installed.
Thanks in advance
This is a problem with the releases resolving shared libraries with Maven, and it has since been fixed in git master. I had this problem, so hopefully this solution will work for the Oozie version you are building from.
The advice here is of use. As in the blog post you linked, the grep command will indicate the offending files:
$ grep -l "2.2.0-SNAPSHOT" `find . -name "pom.xml"`
./hadooplibs/hadoop-2/pom.xml
./hadooplibs/hadoop-distcp-2/pom.xml
./hadooplibs/hadoop-test-2/pom.xml
./pom.xml
Any mentions of 2.2.0-SNAPSHOT in these files should be replaced with 2.2.0.
I would suggest removing the -SNAPSHOT part using the following command:
$ grep -l "2.2.0-SNAPSHOT" `find . -name "pom.xml"` | xargs sed -i 's|2.2.0-SNAPSHOT|2.2.0|g'
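As a sanity check of what that sed invocation does, here is the same substitution run against a throwaway file instead of the real pom.xml files (safe to execute anywhere with GNU sed):

```shell
# Demonstrate the -SNAPSHOT removal on a temporary file
tmp=$(mktemp)
echo '<version>2.2.0-SNAPSHOT</version>' > "$tmp"
sed -i 's|2.2.0-SNAPSHOT|2.2.0|g' "$tmp"
cat "$tmp"   # prints: <version>2.2.0</version>
rm -f "$tmp"
```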
UPDATE: If you don't have the Hadoop JARs from when you built Hadoop itself, then you will need to add the option -DincludeHadoopJars.
And then build the package:
$ mvn clean package assembly:single -Dhadoop.version=2.2.0 -DskipTests
Or if you're using JDK7 and/or targeting Java 7 (as I did):
$ mvn clean package assembly:single -Dhadoop.version=2.2.0 -DjavaVersion=1.7 -DtargetJavaVersion=1.7 -DskipTests
Documentation on building Oozie (version 4 docs) is available here.
The above worked building release-4.0.0 with Hadoop 2.2 and Java SDK 7.
The distro can then be found in distro/target.