Upgrade my Hadoop to the latest stable version

I am trying to upgrade my Hadoop infrastructure, installed on Ubuntu 14.04, from hadoop-2.2.0-stable to hadoop-2.6.0-stable. Am I supposed to remove my previous version of Hadoop and install a fresh copy, or is it possible to move to a newer version without losing any data? Any help is appreciated.
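For what it's worth, Hadoop 2.x supports an in-place, metadata-preserving upgrade, so you do not have to wipe your data. A rough sketch of the procedure (the install paths and the dfs.namenode.name.dir location below are assumptions for illustration; take a full backup first):

# Stop the running 2.2.0 daemons
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
# Back up the NameNode metadata directory (wherever dfs.namenode.name.dir points)
cp -r /data/hadoop/name /data/hadoop/name.bak-2.2.0
# Unpack 2.6.0 alongside the old install and carry over your *-site.xml configs
tar -xzf hadoop-2.6.0.tar.gz -C /opt
cp /opt/hadoop-2.2.0/etc/hadoop/*-site.xml /opt/hadoop-2.6.0/etc/hadoop/
# Point HADOOP_HOME at the new version and start HDFS in upgrade mode
export HADOOP_HOME=/opt/hadoop-2.6.0
$HADOOP_HOME/sbin/start-dfs.sh -upgrade
# After verifying your data is intact, make the upgrade permanent
$HADOOP_HOME/bin/hdfs dfsadmin -finalizeUpgrade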

Related

How to install OpenModelica 1.9.5 on Ubuntu 20.04?

I installed the latest version of OpenModelica (version 1.16.5) on Ubuntu 20.04.
However, this version has problems with the packages (examples).
So, on a recommendation, I'm trying to install version 1.9.5 or 1.10.X, without success.
Since I'm a novice Linux user, I don't know how to do this.
On the download page, https://openmodelica.org/download/download-linux, it is suggested that older versions be installed using a line of the form
"deb https://build.openmodelica.org/omc/builds/linux/releases/1.xx.x/".
I honestly do not know how to do this, which is why I am asking for help carrying out the procedure.
Thank you in advance.
If you are on Ubuntu 20.04 (focal), you can only install OpenModelica versions released after April 2020 (because we don't rebuild old OpenModelica releases for newer Ubuntu versions). The oldest you can install without using Docker or compiling your own OpenModelica is the following, as /etc/apt/sources.list.d/openmodelica.list:
deb https://build.openmodelica.org/omc/builds/linux/releases/1.16.0/ focal release
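For a novice user, the complete procedure is roughly the following (a sketch; the signing-key URL follows the pattern given on the OpenModelica download page, so verify it there):

# Add the OpenModelica package signing key
wget -q https://build.openmodelica.org/apt/openmodelica.asc -O- | sudo apt-key add -
# Create the sources list entry for the 1.16.0 focal release
echo "deb https://build.openmodelica.org/omc/builds/linux/releases/1.16.0/ focal release" | sudo tee /etc/apt/sources.list.d/openmodelica.list
# Refresh the package index and install
sudo apt update
sudo apt install openmodelica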
Which example does not work in the latest OpenModelica for you? We have done some fairly extensive testing and the change from the old to the new frontend is a net gain in the number of models that simulate.
Now this error occurs after installing Scilab through the terminal.

Open Source Puppet Upgrade from 3.8.7 on Ubuntu OS

I am in the process of upgrading open source Puppet from server version 3.8.7 (hosted on Ubuntu 14.04) to a newer version of Puppet on Ubuntu 16.04.
I have listed all the existing modules in the current version; however, the documentation does not make it clear which open source version to choose for the upgrade.
Hence, I would appreciate help on the following:
How do I choose the next version of Puppet Server on Ubuntu 16.04?
Can we upgrade straight from 3.x to 5.x or 6.x?
If anyone has done this kind of upgrade, please share your learnings so we can complete the upgrade safely.
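Not an answer to the version question, but for the module inventory step mentioned above, a quick sketch using standard Puppet 3.x commands:

# On the 3.8.7 master: list installed modules with versions and dependencies
puppet module list --tree
# Record the exact version you are upgrading from
puppet --version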

Install specific version of hadoop using Ambari

I am new to HDP installation using Ambari. I want to install Hadoop 2.9.0 using Ambari web installation. My Ambari version is 2.7.0.0 and I am using HDP 3.0 which has Hadoop 3.1.0. But I need to install Hadoop 2.9.0. Can someone please let me know if this can be done? And how can this be achieved?
I have not started the cluster installation yet; I have only finished installing Ambari.
Ambari uses pre-defined software stacks.
HDP does not offer any stack with Hadoop 2.9.0.
You would therefore need to install that version of Hadoop manually yourself, although you can still manage the servers (but not the Hadoop configuration) using Ambari.
In any case, there's little benefit to installing a lower version of the software, and you won't get Hortonworks support if you do.
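If it helps, you can confirm which Hadoop (HDFS) version a stack ships before committing to it, using Ambari's REST API (the host and credentials below are placeholders):

# Ask Ambari what the HDP 3.0 stack provides for HDFS
curl -u admin:admin 'http://ambari-host:8080/api/v1/stacks/HDP/versions/3.0/services/HDFS'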

Install HDP 2.2 on CentOS 7

As far as I can see, HDP 2.2 needs CentOS 6.5 as an operating system, probably because Ambari needs CentOS 6.5. My question is whether anyone has installed it on CentOS 7. Are there any hard dependencies that will prevent me from completing the installation successfully?
Ambari 2.2+ can be installed successfully and works fine on CentOS 7. Then you can install HDP 2.0+.
As far as I'm aware there are no hard dependencies, per se. However, Ambari itself checks the operating system version, and if it's CentOS 7, it'll stop the install.
In order to work around that you'd need to edit Ambari's source code.
Just consult the official installation guide for the relevant Ambari version:
https://cwiki.apache.org/confluence/display/AMBARI/Install+Ambari+2.2.0+from+Public+Repositories
It's an up-to-date source of OS compatibility information; there you can see that CentOS 7 is officially supported.
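For completeness, a sketch of the CentOS 7 route from that guide (the repo URL follows the Hortonworks public-repo pattern; double-check it against the guide):

# Confirm the OS release Ambari will detect
cat /etc/redhat-release
# Add the Ambari 2.2.0 repo for CentOS 7, then install and start the server
sudo wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.0.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
sudo yum install ambari-server
sudo ambari-server setup
sudo ambari-server start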

Which version of CDH installed via Cloudera Manager automatically installs JDK 1.7?

I am using Cloudera Manager with CDH 4.2.2 for my 3+1 cluster. On starting the installation with Cloudera Manager, it automatically downloads and installs JDK 1.6. I would like to use JDK 1.7 with CDH instead. Is that possible, or is there a version of CDH that, while installing Hadoop on the cluster, automatically downloads, installs, and successfully runs Hadoop with JDK 1.7?
If yes, may I know which version of CDH it is and where I can download it from?
I want to work with JDK 1.7 instead of 1.6 because I want to install Apache Giraph on CDH, but Giraph does not seem to work well with JDK 1.6 and needs JDK 1.7.
JDK 1.7 is supported for all CDH applications as of CDH 4.4 and Cloudera Manager 4.7.
That being said, no version of Cloudera Manager 4.x installs JDK 1.7 during the installation (the latest 4.x release is 4.8.2). The only version of Cloudera Manager that installs JDK 1.7 automatically is 5.0.0.
To summarize: if you want an automated installation of JDK 1.7 via Cloudera Manager, you need to upgrade to CDH 5 and Cloudera Manager 5.0.0. Alternatively, you could upgrade to CDH 4.4 and then perform a manual installation of JDK 1.7.
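If you take the manual route on CDH 4.4, the gist is to install a JDK 1.7 build on every host and point Cloudera Manager at it. A sketch (the JDK archive and path below are example assumptions; /etc/default/cloudera-scm-server is the environment file the CM server init script reads):

# Unpack an Oracle JDK 1.7 build on each host
sudo mkdir -p /usr/java
sudo tar -xzf jdk-7u80-linux-x64.tar.gz -C /usr/java
# Tell the Cloudera Manager server which JDK to use, then restart it
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_80' | sudo tee -a /etc/default/cloudera-scm-server
sudo service cloudera-scm-server restart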
