HBase create command produces no response in shell - hadoop

I am new to HBase and installed version 0.20.6 on Cygwin, as that is a requirement of my project. The master is running along with the HRegionServer. However, when I try to create a table in the HBase shell, there does not seem to be any response.
When I look at the list of commands, none of the table-related commands are shown. How do I resolve this? Please see the screenshot.
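For reference, creating a table follows the usual HBase shell form; a minimal sketch with placeholder table and column family names:

$ hbase shell
hbase> create 'test', 'cf'   # create table 'test' with a single column family 'cf'
hbase> list                  # the new table should appear in the listing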

As stated in the documentation:
"HBase requires that a JDK be installed"
But there is no Java on Cygwin, so you cannot have a Cygwin-native version of HBase.
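In practice, an HBase install on Windows is pointed at a Windows JDK from inside Cygwin. A minimal sketch of that check, assuming a JDK installed at the hypothetical path C:\jdk1.6.0:

$ export JAVA_HOME=/cygdrive/c/jdk1.6.0   # Windows JDK seen through a Cygwin path
$ "$JAVA_HOME"/bin/java -version          # must print a version before HBase can start
# conf/hbase-env.sh should export the same JAVA_HOME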

Related

Unable to Launch Hive prompt in Windows 10

I have downloaded Hadoop 3.1.0 and Hive 2.1.0 on my Windows 10 machine. Hadoop runs properly when started with the start-all.cmd command from the terminal. When I try to run 'hive' from the command prompt, it gives the messages and errors shown in the attached screenshot. I am using Derby 10.12.1.1 with Hive, following this tutorial from YouTube.
I have also tried reinstalling Hive, but it still does not work. I have already spent a lot of time on this problem with no success.
Any sort of help will be appreciated. Thank you.
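Without the screenshot the exact failure is unknown, but a common culprit on Hive 2.x is an uninitialized metastore. A sketch of the usual first checks, assuming the Hadoop and Hive bin scripts work in your shell:

jps                                    # confirm the Hadoop daemons are actually running
schematool -dbType derby -initSchema   # one-time Derby metastore initialization (Hive 2.x)
hive                                   # then retry the Hive prompt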

Simple and quick Oozie installation steps on Unix

I am trying to install Oozie on my Unix machine; however, I am getting an error while executing the distro build.
I have tried making changes to Oozie's pom.xml, but that did not work either.
I have followed the links below:
https://gauravkohli.com/2014/08/26/apache-oozie-installation-on-hadoop-2-4-1/
https://milindjagre.wordpress.com/2016/01/05/oozie-4-1-0-installation-on-hadoop-2-6-0-on-ubuntu-14-04/
Any help or installation guide would be a great help.
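For reference, the distro build is normally run from the Oozie source root roughly as follows; the Hadoop version flag is an assumption and should match your cluster:

$ bin/mkdistro.sh -DskipTests -Dhadoop.version=2.6.0
# on success the distribution tarball lands under distro/target/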

Unable to execute cr9idata.pl

Oracle Database Home patches installed successfully
Executing cr9idata.pl
Executing: perl /u01/db/VIS/12.1.0/nls/data/old/cr9idata.pl
Unable to execute cr9idata.pl
RW-50010: Error: - script has returned an error: 2
RW-50004: Error code received when running external process. Check log file for details.
Running Database Install Driver for VIS instance
I executed the command in a terminal as root:
[root@ntcs ~]# perl /u01/db/VIS/12.1.0/nls/data/old/cr9idata.pl
Directory /u01/db/VIS/12.1.0/nls/data/9idata already exist. Overwriting...
Copying files to /u01/db/VIS/12.1.0/nls/data/9idata...
Copy finished.
Please reset environment variable ORA_NLS10 to /u01/db/VIS/12.1.0/nls/data/9idata!
Thanks in advance for helping!
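For reference, the reset that the script asks for at the end is a single environment variable assignment (bash syntax, path taken from the log above):

$ export ORA_NLS10=/u01/db/VIS/12.1.0/nls/data/9idata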
After several days of debugging, I found that the cause was a problem with my oracle user.
I had installed Perl into /home/oracle/perl5, and that is what broke the oracle user:
I could not su to oracle from the root user.
When I installed Perl for the oracle user, two lines were automatically added to the oracle user's .bashrc:
eval `perl -I ~/perl5/lib/perl5 -Mlocal::lib`
export MANPATH=$HOME/perl5/man:$MANPATH
I simply removed those two lines, and then I could su to oracle from root again.
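A quick sketch of confirming the fix as root, assuming the default .bashrc location:

$ su - oracle                                  # fails while the local::lib lines are present
$ sed -i.bak '/perl5/d' /home/oracle/.bashrc   # drop both perl5 lines (a backup is kept)
$ su - oracle                                  # succeeds afterwards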
Conclusion:
When you install Oracle EBS 12.2.0 on CentOS 7.3, the rapidwiz tool automatically does su to the oracle user to install the database. Because su to our oracle user failed, the install failed, but the symptom it showed was very strange, which made it difficult to debug.
Now I have installed Oracle EBS 12.2 successfully!
I am very happy to share this with anyone who meets this error.
I am installing EBS 12 at the moment.
Looking a bit deeper into the logs for the cr9idata.pl script, I saw that this error is caused by a missing Perl module. The Perl version that is installed with Oracle includes this library, so setting the path manually works in that case.
If you (like me) have installed Perl from YUM, install this module: perl-File-CheckTree
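A minimal sketch of that fix on CentOS/RHEL (run as root or via sudo):

$ sudo yum install perl-File-CheckTree        # provides the File::CheckTree module
$ perl -MFile::CheckTree -e 'print "ok\n"'    # verify the module now loads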

How to find the CDH version of a Hadoop cluster

When connecting to Hadoop cluster, how can I know which version of Hadoop this cluster is running? In particular this is important for proper configuration of libraries when compiling and packaging Hadoop Java jobs with Maven.
The simplest way, if you have SSH access to a Hadoop node, is to run:
$ hadoop version
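On a CDH node, the CDH release is embedded in the reported version string. Illustrative output only; the numbers below are made up:

$ hadoop version
Hadoop 2.6.0-cdh5.5.1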
If you are looking for the CDH version specifically, check /usr/lib/hadoop/cloudera/cdh_version.properties
On the CDH cluster I am using, there is no cdh_version.properties file (or I could not find it).
If your cluster uses "Parcels", you can check which version of CDH is in use by listing the parcels directory:
$ ls /opt/cloudera/parcels
The version appears as the name of the folder:
CDH-5.5.1-1.cdh5.5.1.p0.11
Note: I know this is not a general rule for finding which CDH version is in use; I am just showing an alternative way that worked for me.
We can also check the installed version with the following command:
cat /usr/lib/hadoop/cloudera/cdh_version.properties
Hope this helps.

Apache Pig in Cygwin

Are there any resources available for running Apache Pig in Cygwin? With the latest Hadoop version I was able to set up a Hadoop cluster on a Windows machine successfully, but I cannot make Pig run in a Cygwin terminal. The following error is returned when attempting to invoke the Pig grunt shell.
$ pig -x local
cygwin warning:
MS-DOS style path detected: c:\pig/conf/pig.properties
Preferred POSIX equivalent is: /cygdrive/c/pig/conf/pig.properties
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
cygpath: cannot create short name of C:\pig\logs
Cannot locate pig-withouthadoop.jar. do 'ant jar-withouthadoop', and try again.
Any help would be appreciated.
Thanks
To resolve the above error, I rebuilt Pig for Hadoop 2.2.0 as described in the link below and was able to get rid of the exception.
http://javatute.com/javatute/faces/post/hadoop/2014/installing-pig-11-for-hadoop-2-on-ubuntu-12-lts.xhtml
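For reference, the rebuild amounts to regenerating the jar from the Pig source tree; a sketch, where the -Dhadoopversion=23 flag selects the Hadoop 2.x build line and is an assumption based on the guide above:

$ ant clean jar-withouthadoop -Dhadoopversion=23   # rebuilds pig-withouthadoop.jar against Hadoop 2.x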
