I'm trying to install Hadoop on CentOS 7, following the steps here: https://www.vultr.com/docs/how-to-install-hadoop-in-stand-alone-mode-on-centos-7 (the only difference is that my Hadoop version is 3.2.1 instead of the article's 2.7.3).
I followed everything precisely, but at step 4, when I type hadoop in the terminal, it gives me an error: ERROR: Invalid HADOOP_YARN_HOME
Is there any setup related to YARN that's needed? I read the Apache docs and other links on the web, but they all mention that only the JAVA_HOME path is needed, which I did set as per the link above.
Any help appreciated.
Thanks!
Open ~/.bashrc and add:
export HADOOP_HOME=path_to_your_hadoop_package
export HADOOP_YARN_HOME=$HADOOP_HOME
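For example, assuming Hadoop was unpacked to /usr/local/hadoop-3.2.1 (an assumed path, adjust to your own), the full change plus a quick verification would look like this:
export HADOOP_HOME=/usr/local/hadoop-3.2.1   # assumption: your actual unpack location
export HADOOP_YARN_HOME=$HADOOP_HOME
source ~/.bashrc                             # reload so the current shell picks it up
hadoop version                               # should print version info instead of the YARN_HOME error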
Related
I want to install Hadoop 3.2 on my Linux system, which already has Hadoop 2.7 installed. When I execute hadoop, I only get the information for Hadoop 2.7, even after changing the environment variable. The most confusing thing is that when I run echo $HADOOP_HOME, sometimes I get the path of Hadoop 2.7 and sometimes Hadoop 3.2. I hope someone can help me.
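The usual cause of this symptom is that HADOOP_HOME is exported in more than one shell startup file. A quick way to find out which definition wins (a diagnostic sketch; which files your shell actually reads varies by setup):
which hadoop                 # which binary the shell resolves right now
echo $HADOOP_HOME            # value in the current session
grep -n HADOOP_HOME ~/.bashrc ~/.bash_profile /etc/profile.d/*.sh 2>/dev/null
                             # lists every place HADOOP_HOME is exported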
I am trying to install Kylo 0.8.4.
There is a step to install Kylo-specific components after installing NiFi, using the command:
sudo ./install-kylo-components.sh /opt /opt/kylo kylo kylo
but I am getting the following error:
Creating symlinks for NiFi version 1.4.0.jar compatible nars
ERROR: spark-submit not on path. Has spark been installed?
I do have Spark installed.
Any help is appreciated.
The script calls which spark-submit to check if Spark is available. If it is, the script uses spark-submit --version to determine the installed version of Spark.
The error indicates that spark-submit is not on the system path. Can you please execute which spark-submit on the command line and check the result?
If spark-submit is not on the system path, you can fix it by updating the PATH variable in your .bash_profile file to include the location of your Spark installation.
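For example (a sketch; /opt/spark is an assumed install location, substitute wherever Spark actually lives on your machine):
# in ~/.bash_profile
export SPARK_HOME=/opt/spark          # assumption: your Spark install directory
export PATH=$PATH:$SPARK_HOME/bin
Then reload and re-check:
source ~/.bash_profile
which spark-submit                    # should now print the full path to spark-submit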
As a next step, you can also verify the installed version of Spark by running spark-submit --version.
I am installing PredictionIO from source code. I have downloaded it and completed the PredictionIO installation successfully. I am now trying to install the dependencies (Spark, Elasticsearch, HBase), but I am running into errors for each of them. Below are the issues I am facing when I execute pio status:
1 - Unable to locate a proper Apache Spark installation
2 - It is also unable to find metadata files.
I have not changed any default settings. I'm using Windows 8.1. On localhost I have IIS running, and on 127.0.0.1:8888 I run IPython Notebook.
Please help me get PredictionIO up and running on my machine.
Thanks
If you are on Windows, you can install with Vagrant.
http://docs.prediction.io/community/projects/#vagrant-installation-for-predictionio
I believe the discussion has moved to the Google group:
https://groups.google.com/forum/#!searchin/predictionio-user/SAS/predictionio-user/ZamBr1ZaQ3o/fyNkXh3zsv0J
This is the relevant thread:
https://groups.google.com/forum/#!searchin/predictionio-user/SAS/predictionio-user/0awaASUR8lE/JkLtPeRrNt4J
Moreover, the PredictionIO docs had a few errors. Below are some of them with their corrected versions.
1 - Actual line: PATH=$PATH:/home/yourname/predictionio/bin; export PATH
Corrected version: PATH=$PATH:/home/yourname/PredictionIO/bin; export PATH
2 - Actual line: $ pio eventserver
Corrected version: $ pio eventserver --ip 0.0.0.0
3 - Actual line: pio template get PredictionIO/templates-scala-parallel-recommendation MyRecommendation
Corrected version: pio template get PredictionIO/template-scala-parallel-recommendation MyRecommendation
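For the "Unable to locate a proper Apache Spark installation" error specifically, PredictionIO reads the Spark location from conf/pio-env.sh, so pointing SPARK_HOME at your Spark directory there usually resolves it. A sketch (the vendors path below is an assumption; use wherever you unpacked Spark):
# in PredictionIO/conf/pio-env.sh
SPARK_HOME=/home/yourname/PredictionIO/vendors/spark-1.5.1-bin-hadoop2.6   # assumed path
Run pio status again afterwards to confirm Spark is detected.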
Are there any resources available for running Apache Pig in Cygwin? With the latest Hadoop version I was able to set up a Hadoop cluster on a Windows machine successfully, but I can't make Pig run in a Cygwin terminal. The following error is returned when attempting to invoke the Pig grunt shell.
$ pig -x local
cygwin warning:
MS-DOS style path detected: c:\pig/conf/pig.properties
Preferred POSIX equivalent is: /cygdrive/c/pig/conf/pig.properties
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
cygpath: cannot create short name of C:\pig\logs
Cannot locate pig-withouthadoop.jar. do 'ant jar-withouthadoop', and try again.
Any help would be appreciated.
Thanks
To resolve the above error, I rebuilt Pig for Hadoop 2.2.0 as described in the link below and was able to get rid of the exception.
http://javatute.com/javatute/faces/post/hadoop/2014/installing-pig-11-for-hadoop-2-on-ubuntu-12-lts.xhtml
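The rebuild itself is an Ant invocation from the Pig source directory. A sketch of what that looks like (the -Dhadoopversion=23 flag is the usual switch for selecting the Hadoop 2.x build in Pig releases of that era; the source path is an assumption):
cd /cygdrive/c/pig                                # assumption: your Pig source checkout
ant clean jar-withouthadoop -Dhadoopversion=23    # builds pig-withouthadoop.jar against Hadoop 2.x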
I installed Hadoop and Pig using brew install hadoop and brew install pig.
I read here that you will get an Unable to load realm info from SCDynamicStore error message unless you add:
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
to your hadoop-env.sh file, which I have done.
However, when I run hadoop namenode -format, I still see:
java[1548:1703] Unable to load realm info from SCDynamicStore
amongst the outputs.
Does anyone know why I'm still getting it?
As dturnanski suggests, you need to use an older JDK. You can set this in the hadoop-env.sh file by changing the JAVA_HOME setting to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
(Note the backticks here: they run /usr/libexec/java_home -v 1.6 and substitute its output, i.e. the path to a Java 6 home.) This fixed the problem for me.
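To double-check that a Java 6 home actually exists on your machine before pointing Hadoop at it (a quick verification; macOS only):
/usr/libexec/java_home -v 1.6                       # prints the Java 6 home path, if installed
`/usr/libexec/java_home -v 1.6`/bin/java -version   # should report a 1.6.x version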
I had the same issue with Java 7; it works with Java 6.