How to test if HBase is running correctly - hadoop

I just installed HBase on an EC2 server (I also have HDFS installed, and it's working).
My problem is that I don't know how to check whether HBase is correctly installed.
To install HBase I followed this tutorial, which says you can check the HBase instance in the web UI at addressOfMyMachine:60010. I also checked port 16010, but this is not working.
I get an error saying this:
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
I managed to run the HBase shell, but I don't know if my installation is working properly.

To check whether HBase is running from a shell script, execute the command below.
if echo -e "list" | hbase shell 2>&1 | grep -q "ERROR:"; then echo "HBase is not running"; fi
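If you want a slightly fuller check, you can also ask the shell for the cluster status and probe the HMaster web UI. This is a minimal sketch assuming a default HBase 1.x+ install where the Master UI listens on port 16010 on the local machine (older releases used 60010); adjust the host and port to your setup.
#!/bin/bash
# Ask the HBase shell for the cluster status; an "ERROR:" line means it is not healthy.
if echo "status" | hbase shell 2>&1 | grep -q "ERROR:"; then
    echo "HBase is not running"
else
    echo "HBase shell reports the cluster is up"
fi
# Probe the HMaster web UI (16010 on HBase 1.x and later, 60010 on older releases).
if curl -sf "http://localhost:16010/" > /dev/null; then
    echo "HMaster web UI is reachable on port 16010"
else
    echo "HMaster web UI is not reachable on port 16010"
fi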

Related

Migration from Cloudera Hadoop to HDInsight

I have HQL scripts that I used to run on Cloudera with hive -f scriptname.hql. Now I want to run these scripts on HDInsight (Hadoop cluster), but the hive command line is not available in HDInsight. Can someone guide me on how to do that?
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -i query.hql
Does anyone have experience using the above rather than
hive -f query.hql
I don't see any other way to execute the HQL files. You can refer to this document: https://learn.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-use-hive-beeline#run-a-hiveql-file
You can also use the ZooKeeper quorum connection string to avoid query failures during head node failover:
beeline -u '<zookeeper quorum>' -i /path/query.hql
Create an environment variable:
export hivef="beeline -u 'jdbc:hive2://hn0-hdi-uk.witechmill.co.uk:10001/default;principal=hive/_HOST@witechmill.CO.UK;auth=kerberos;transportMode=http' -n umerrkhan -f"
(witechmill is my cluster name)
Then call the script as below:
$hivef scriptname.hql
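To tie the two suggestions together, a minimal wrapper sketch that runs an HQL file through Beeline using a ZooKeeper-quorum style URL could look like the script below. The zk0/zk1/zk2 hostnames and the default database are placeholders; copy the real quorum string for your cluster from Ambari before using it.
#!/bin/bash
# Run one HQL file through Beeline, discovering HiveServer2 via ZooKeeper.
# The quorum below is a placeholder; substitute your cluster's ZooKeeper hosts.
JDBC_URL='jdbc:hive2://zk0-myhdi:2181,zk1-myhdi:2181,zk2-myhdi:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2'
if [ $# -ne 1 ]; then
    echo "Usage: $0 <script.hql>" >&2
    exit 1
fi
beeline -u "$JDBC_URL" -f "$1"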

How to ignore a reboot prompt in a shell script

I am trying to create a shell script with the following commands.
#!/bin/bash
ipa-client-install --uninstall
/usr/local/sbin/new-clone.sh -i aws -s aws-dev
My problem is that the ipa-client-install --uninstall command prompts for a reboot at the end with the default value being no.
Here is the output.
Client uninstall complete. The original nsswitch.conf configuration
has been restored. You may need to restart services or reboot the
machine. Do you want to reboot the machine? [no]:
How can I suppress the reboot dialog and just accept the default "no"?
How can I check whether ipa-client-install is installed before attempting to remove it?
I am new to shell scripting, so I am struggling a bit :-)
Please be safe.
You can use a Linux pipe to take care of the prompt issue, and rpm -q will help you check whether the package is installed.
Your final script would look like this:
#!/bin/bash
# rpm -q checks whether the IPA client package is installed
if rpm -q ipa-client
then
    # pipe "no" into the uninstall so the reboot prompt gets its default answer
    echo no | ipa-client-install --uninstall
else
    echo "Package not found"
fi
/usr/local/sbin/new-clone.sh -i aws -s aws-dev
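Alternatively, you can test for the ipa-client-install command itself rather than the package, and let the tool answer its own prompts. This is only a sketch under the assumption that your ipa-client version supports the --unattended flag (it accepts the default answers to prompts); check ipa-client-install --help before relying on it.
#!/bin/bash
# Uninstall the IPA client only if the ipa-client-install command is present.
if command -v ipa-client-install > /dev/null 2>&1; then
    # --unattended takes the default answers, so no reboot prompt appears
    # (verify your ipa-client version supports this flag).
    ipa-client-install --uninstall --unattended
else
    echo "ipa-client-install not found, skipping uninstall"
fi
/usr/local/sbin/new-clone.sh -i aws -s aws-dev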

XDMoD : error "no aggregate table. have you ingested your data"

I am running XDMoD from http://open.xdmod.org/ on a CentOS virtual machine. The shredder command
xdmod-shredder -v -r resource -f pbs -d /var/spool/torque/server_priv/accounting
runs correctly, and so does the ingester command
xdmod-ingestor -v
But the error message "have you ingested your data" still shows up when I open the web browser. The cause is still unknown. Kindly help.
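For reference, the usual workflow is to shred the accounting logs and then run the ingestor in the same pass; the portal only has aggregate data to show once both finish without errors. Below is a minimal sketch that simply chains the two commands from the question (same resource name and log path) and stops on the first failure:
#!/bin/bash
# Shred the PBS/Torque accounting logs, then ingest; abort if either step fails.
set -e
xdmod-shredder -v -r resource -f pbs -d /var/spool/torque/server_priv/accounting
xdmod-ingestor -v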

Zeppelin can't execute Hive using the shell

I am using Zeppelin-0.5.6.
I checked the log and found the error shown below.
However, it works well when I use the terminal.
I also tried some other Hive commands. When I used the commands below in Zeppelin, I got the same error, while they worked well in the terminal.
%sh
hive -e "show databases"
Can anybody help?
Thanks.
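Since the same command works in a terminal but fails from Zeppelin, one thing worth comparing is the environment the %sh interpreter actually runs in (user, PATH, and Hive client). This is only a diagnostic sketch, not a known fix; run it in a %sh paragraph and compare the output with your terminal session.
%sh
whoami
echo $PATH
which hive
hive --version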

Messed-up sed syntax in Hadoop startup script after reinstalling the JVM

I'm trying to run a 3-node Hadoop cluster on the Windows Azure cloud. I've gone through configuration and a test launch, and everything looked fine. However, since I had been using OpenJDK, which is not recommended as the VM for Hadoop according to what I read, I decided to replace it with the Oracle Server JVM. I removed the old Java installation with Yum, along with all Java folders in /usr/lib, installed the most recent version of the Oracle JVM, and updated the PATH and JAVA_HOME variables. Now, on launch, I get the following messages:
sed: -e expression #1, char 6: unknown option to `s'
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
etc. (in total about 20-30 lines with words that should not have anything to do with hostnames)
To me it looks like it's trying to pass part of the code as a hostname because of incorrect usage of sed in the startup script:
if [ "$HADOOP_SLAVE_NAMES" != '' ] ; then
SLAVE_NAMES=$HADOOP_SLAVE_NAMES
else
SLAVE_FILE=${HADOOP_SLAVES:-${HADOOP_CONF_DIR}/slaves}
SLAVE_NAMES=$(cat "$SLAVE_FILE" | sed 's/#.*$//;/^$/d')
fi
# start the daemons
for slave in $SLAVE_NAMES ; do
ssh $HADOOP_SSH_OPTS $slave $"${#// /\\ }" \
2>&1 | sed "s/^/$slave: /" &
if [ "$HADOOP_SLAVE_SLEEP" != "" ]; then
sleep $HADOOP_SLAVE_SLEEP
fi
done
This looks unchanged, so the question is: how could a change of JVM affect sed? And how can I fix it?
So I found an answer to this question: my guess was wrong, and everything with sed is fine. The problem, however, was in how the Oracle JVM handles external libraries compared to OpenJDK. It threw an exception where the script was not expecting it, and that ruined the whole sed input.
You can fix it by adding the following environment variables:
HADOOP_COMMON_LIB_NATIVE_DIR, which should point to the lib/native folder of your Hadoop installation, and -Djava.library.path=/opt/hadoop/lib added to whatever options you already have in the HADOOP_OPTS variable (note that /opt/hadoop is my installation folder; you might need to change it for things to work properly).
I personally added the export commands to the hadoop-env.sh script, but adding them to your .bashrc or to start-all.sh should work as well.
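Concretely, the two exports described above would look something like this in hadoop-env.sh. This is a sketch that assumes the /opt/hadoop install path mentioned above; adjust the paths to your own layout.
# In hadoop-env.sh (or your shell profile / start-all.sh):
# HADOOP_COMMON_LIB_NATIVE_DIR points at Hadoop's bundled native libraries,
# and java.library.path is appended to HADOOP_OPTS so the JVM can find them.
# /opt/hadoop is the install path used in the answer above; change it to yours.
export HADOOP_COMMON_LIB_NATIVE_DIR=/opt/hadoop/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/opt/hadoop/lib"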
