post cloudera hadoop installation -- No CDH version detected. Check Again - hadoop

I failed to start the scm-agent service on a single-node Hadoop cluster in a VM. After a long search, I removed the underscore from the hostname, uninstalled all packages, and the re-installation went well.
However, I am now seeing an error:
No CDH version detected. Check Again.
One of the solutions mentioned on other portals was yum info | grep <my version>, which actually led nowhere.
Can someone point out what could have gone wrong?

yum list | grep <your version>
then yum remove all the previous versions.
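A minimal sketch of that cleanup, assuming an RHEL/CentOS host; the package globs below are illustrative, not an exhaustive list:
# List any CDH packages left over from the failed attempt
yum list installed | grep -i cdh
# Remove the leftovers before letting Cloudera Manager re-detect
sudo yum remove 'hadoop*' 'bigtop*' 'cloudera-manager-*'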
https://community.cloudera.com/t5/Cloudera-Manager-Installation/Cluster-Installation-Detecting-CDH-versions-on-all-hosts/m-p/30090#M5105

Related

Updating individual CDH Components in a Community Edition via '1 Click Installer'

Can someone let me know if it is possible to update an individual CDH component to 5.13 from 5.7 via the "1 Click Installer" for the Community Edition?
For example, let's say I want to update only hadoop-hdfs-datanode to the latest version on a server. If I do sudo apt-get install hadoop-hdfs-datanode, it updates other CDH components also running on that node (like resource-manager, node-manager, etc.).
As discussed here, if I try to upgrade hadoop-yarn-resourcemanager, it upgrades almost all the CDH Hadoop components:
support#platform1:~$ sudo apt-get install hadoop-yarn-resourcemanager
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
hadoop hadoop-0.20-mapreduce hadoop-client hadoop-conf-pseudo hadoop-hdfs
hadoop-hdfs-datanode hadoop-hdfs-journalnode hadoop-hdfs-namenode
hadoop-hdfs-secondarynamenode hadoop-hdfs-zkfc hadoop-mapreduce
hadoop-mapreduce-historyserver hadoop-yarn hadoop-yarn-nodemanager
The following packages will be upgraded:
hadoop hadoop-0.20-mapreduce hadoop-client hadoop-conf-pseudo hadoop-hdfs
hadoop-hdfs-datanode hadoop-hdfs-journalnode hadoop-hdfs-namenode
hadoop-hdfs-secondarynamenode hadoop-hdfs-zkfc hadoop-mapreduce
hadoop-mapreduce-historyserver hadoop-yarn hadoop-yarn-nodemanager
hadoop-yarn-resourcemanager
15 upgraded, 0 newly installed, 0 to remove and 16 not upgraded.
"it is updating other CDH component also running in that node"
I doubt it is upgrading everything on the node, just the packages that depend on the upgraded Hadoop client libraries.
If you were to install Hadoop all by itself, it includes HDFS, MapReduce, YARN, and the Hadoop client libraries. Therefore, it makes sense that upgrading the datanode package would try to grab those, but not HBase, Hive, Pig, Spark, Oozie, etc. packages.
Essentially, you need to ensure all your Hadoop client libraries are the same version. CDH itself hasn't moved off of Hadoop 2.6.0 between those releases, although it has added patches to that base release, so it might be fine to upgrade.
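If you do decide to upgrade just the HDFS packages, a hedged sketch of keeping apt under control; the hold list is illustrative, and the exact version string should come from apt-cache madison, not from this example:
# See which CDH builds are available for the package
apt-cache madison hadoop-hdfs-datanode
# Install an explicit version rather than whatever is newest
sudo apt-get install hadoop-hdfs-datanode=<version-from-madison>
# Or hold components you do not want this upgrade to touch
sudo apt-mark hold hadoop-yarn-resourcemanager hadoop-yarn-nodemanager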
However, let's take HBase as an example. The documentation says that neither Hadoop 2.6.0, 2.7.0, nor 2.8.x is supported and Hadoop 3.x is not tested; only 2.6.1+ or 2.7.1+ are supported.
And it continues on to say:
In distributed mode, it is critical that the version of Hadoop that is out on your cluster match what is under HBase... Make sure you replace the jar in HBase across your whole cluster. Hadoop version mismatch issues have various manifestations but often all look like its hung
In short, component upgrades should be carried through for the whole stack; Cloudera makes the effort to ensure all components of a single release work together, not mixed across releases.

Install Spark 1.5 in existing Hortonworks HDP Cluster

I'm new to Hadoop and want to find out how to install Spark 1.5.1 on an existing Hadoop cluster: 4 nodes, Ubuntu 14.04, Hadoop 2.3.2, Ambari version 2.1.2.1. I followed a tutorial, but the Spark packages there are for Ubuntu 12 and I cannot install them on our system, so I got stuck after step 1: sudo apt-get install spark_2_3_2_1_12-master -y
Got an error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package spark_2_3_2_1_12-master
Can anyone provide some guidance on how to install 1.5?
Currently we have Spark 1.4 installed, up, and running, but due to functionality requirements we need 1.5!
Ubuntu 14.04 Trusty Tahr is not officially supported by HDP. If you look at the repos available for stack updates (the HDP stack public repos), they only have ones up for CentOS, Red Hat, and Oracle Linux. Did you try using Spark's Simple Build Tool to build the spark-1.5 source against your Hadoop install? You would need to set SPARK_HADOOP_HOME=<your hadoop location>. See this for a step-by-step with Ubuntu 14.04 and an earlier version of Spark. I don't see why the same steps would fail with Spark 1.5.
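A rough sketch of that build-from-source route, assuming the Spark 1.5.1 source release; the Maven profile and hadoop.version flags are assumptions you would match to your cluster's actual Hadoop version:
# Fetch and unpack the Spark 1.5.1 sources
wget https://archive.apache.org/dist/spark/spark-1.5.1/spark-1.5.1.tgz
tar -xzf spark-1.5.1.tgz && cd spark-1.5.1
# Build a distributable tarball against a specific Hadoop version
./make-distribution.sh --name custom --tgz -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0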

how to determine the cloudera minor release in the one click install debian package ? (i.e., 5.1 ? 5.2 ?)

I've managed to get cloudera single node hadoop cluster up and running from this package: http://archive.cloudera.com/cdh5/one-click-install/precise/amd64/cdh5-repository_1.0_all.deb
my colleagues asked me what minor release of cloudera this is installing.. and i am stumped as to which. Is there some info, readme, config or license file that gives this information for cloudera hadoop distros once they are installed ?
Or maybe someone just knows which minor release the above URL will install (if you could provide that info along with a link to a documentation source that would be fantastic.)
thanks in advance
-chris
The one-click install repo currently points to the latest Cloudera version, which is 5.3.0 as of earlier this week.
To check the version you installed, just list the installed packages; there should be a version number like '5.2.x' appended to the package name. An example command:
dpkg -l | grep 'cloudera'
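If that grep comes back empty, the CDH release is also embedded in the Hadoop package versions and in the Hadoop build string; a small follow-up sketch (the '+cdh' pattern below is standard CDH package naming, shown as an assumption):
# Package versions look like 2.5.0+cdh5.3.0+...
dpkg -l | grep 'hadoop'
# The Hadoop build string reports the CDH release too
hadoop version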

Ambari install script location(s)

I'm setting up an HDP 2.1 cluster with Apache Ambari. All servers run SLES 11 SP3. The setup fails if I select Ganglia for installation, because of some dependency conflicts:
Installing package apache2?mod_php* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm apache2?mod_php*')
Problem: apache2-mod_php53-5.3.17-0.27.1.x86_64 conflicts with apache2-mod_php5 provided by apache2-mod_php5-5.2.14-0.7.30.50.1.x86_64
Solution 1: Following actions will be done:
do not install apache2-mod_php5-5.2.14-0.7.30.50.1.x86_64
deinstallation of php5-5.2.14-0.7.30.50.1.x86_64
deinstallation of php5-xmlwriter-5.2.14-0.7.30.50.1.x86_64
[... more PHP 5.2.x packages ...]
Solution 2: do not install apache2-mod_php53-5.3.17-0.27.1.x86_64
Apparently the regex picks the 5.3 version, although a 5.2 version would be available.
So my question is: where is the install script stored that Ambari is running here? I would like to replace the regex with the correct version of the package.
Information about what packages are to be installed is stored in
/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/GANGLIA/metainfo.xml
Change the value and restart the Ambari Server for the changes to take effect.
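A quick sketch of locating and applying that change; the grep pattern is taken from the error output above:
# Find where the package pattern is defined in the stack definitions
grep -rn 'apache2?mod_php' /var/lib/ambari-server/resources/stacks/HDP/
# After editing the package entry in metainfo.xml, restart the server
ambari-server restart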

Hadoop showing old version despite latest version installation

I am trying to install Hadoop on my Ubuntu OS. I followed each and every step exactly from this link Hadoop Install Tutorial and everything was going as expected until I tried to run the $ start-dfs.sh and $ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5 commands. These commands don't work as expected. After some digging I came to know that I was running the older Hadoop 1.0.2 despite having installed the latest 2.2.0 version.
As I could not solve this, I tried to uninstall Hadoop completely. Now when I try doing that, it says:
$ sudo dpkg -r hadoop
dpkg: dependency problems prevent removal of hadoop:
hadoop-native depends on hadoop (= 1.0.2-0ubuntu1~hadoop1).
dpkg: error processing hadoop (--remove):
dependency problems - not removing
Errors were encountered while processing:
hadoop
Appreciate any help!
I don't know whether it is the proper way to remove Hadoop or not, but I removed it using the method below.
1. I first manually deleted the /usr/local/hadoop folder for all users (if any). If you are not able to remove it due to lack of permissions, check the permissions on the folder and grant every user the right to create and delete files, so each user can delete it from their own instance. Then, from the terminal, $ rm -r hadoop in /usr/local does the job.
After this, I checked $ hadoop version again in the terminal... and boom, it still showed its existence. So I did the step below.
2. From the terminal, run sudo apt-get purge hadoop or sudo apt-get remove hadoop. Then it worked.
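For the dpkg error in the question itself, a hedged alternative: dpkg refuses to remove hadoop while hadoop-native still depends on it, so removing both together also works (the package names are taken from the error output):
# Remove the dependent package together with hadoop
sudo dpkg -r hadoop-native hadoop
# or let apt resolve the pair
sudo apt-get purge hadoop-native hadoop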
