I have set up Hadoop and Hive on an AWS EC2 server running Ubuntu 14.04. When I run Hive as a background process after starting the Hadoop services, the Hive server stops working after some time. I am still facing the issue; the command I use is given below:
hive --service hiveserver2 &
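One common cause, assuming the server is launched over SSH: a job backgrounded with a plain "&" is killed when the login session closes, which would explain it working only "for some time". A sketch that detaches it from the session (nohup and the log path are my additions, not from the original setup):

# keep hiveserver2 alive after the SSH session ends; log path is illustrative
nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &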
I have hadoop 3.1.2 and hive 3.1.2 on a cluster and I want to connect to hive with presto-server-0.265.1.
I have just one catalog file in /opt/presto/etc/catalog, hive.properties, which contains:
connector.name=hive-hadoop2
hive.metastore.uri=thrift://192.168.49.13:9083
The Presto service runs, but it cannot connect to Hive because I use Hadoop 3, and when I change hive.properties the Presto service will not start.
How can I connect to Hadoop 3?
Update:
It wasn't about Hadoop. The Hive metastore was not installed correctly, so Presto had trouble connecting to it.
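For anyone hitting the same symptom, a quick way to confirm the metastore is actually up before blaming Presto (port 9083 taken from the hive.properties above):

# start the metastore, then check that the thrift port is listening
hive --service metastore &
ss -ltn | grep 9083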
I am new to Hadoop/Hive and struggling to fix this: in a distributed Hadoop environment, where should Hive and Pig be installed? Is it the edge node, or the node where Hadoop is installed?
Hadoop is installed on a separate server, say hadoopVM, with two separate data nodes (DN1, DN2) and edge nodes from which I can submit jobs to Hadoop to load files into HDFS.
Up to here I have no issue. I am trying to install Hive on the edge node and getting the error below.
Attached is the error I am getting on the edge node server.
It seems that the metastore service is not started. Start the service by issuing the following command in one session and keep that session open, then start another session in parallel and try to use Hive.
Active session mode:
sudo hive --service metastore
Background service mode:
If you append "&" then the service will be started and keep running as a background process.
sudo hive --service metastore &
Alternative:
If you are still facing the problem, it is likely caused by the newer version of MySQL; you can refer to my answer at the link below.
SemanticException in Hive Shell Mode
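In particular, with newer MySQL versions the metastore schema often has to be initialized by hand. A minimal sketch, assuming MySQL is the metastore backend and the JDBC connector jar is already on Hive's classpath:

# create the metastore schema in MySQL (run once, with hive-site.xml already configured)
schematool -dbType mysql -initSchema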
I have a Hadoop 2.5 cluster already installed on my server, running on Docker, and we have a Spark standalone cluster running on the same server (also deployed with Docker). How can I switch the Spark cluster from standalone mode to Spark on YARN, given that Hadoop and Spark run in separate Docker containers?
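In case it helps: Spark does not need a separate install for YARN mode, it only needs the Hadoop client configuration visible inside the Spark container and network access to the ResourceManager. A minimal sketch, where the config path, class, and jar names are placeholders:

# mount or copy the cluster's yarn-site.xml / core-site.xml into the Spark container,
# then point Spark at that directory and submit against YARN instead of the standalone master
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
spark-submit --master yarn --deploy-mode cluster --class com.example.MyApp myapp.jar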
I have a cluster of two machines using Apache HBase and Apache Hadoop. I need to use Hue so that I can interact with HBase and HDFS through a GUI. I have installed it successfully on my machine (Ubuntu 14.04), but it shows nothing about HDFS or tables and gives errors like:
1. Oozie server is not running
2. Could not connect to local:9090
3. HBase Thrift server cannot be contacted
How do I configure Hue so that it connects to my running cluster?
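For what it's worth, Hue is pointed at a cluster through hue.ini. A sketch of the relevant sections, where the host names and ports are placeholders for your own cluster (50070 assumes a pre-Hadoop-3 WebHDFS port):

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://hadoopmaster:8020
      webhdfs_url=http://hadoopmaster:50070/webhdfs/v1

[hbase]
  # host:port of the HBase Thrift server
  hbase_clusters=(Cluster|hadoopmaster:9090)

The "could not connect to local:9090" error usually just means the HBase Thrift server is not running; it can be started on the HBase host with:

hbase-daemon.sh start thrift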
I have two Cloudera VMs, and on both I've configured Phoenix; it works fine as long as everything is on localhost.
When I try to connect to HBase on one VM from Phoenix on the other VM, I use this command:
$ ./sqlline.sh xxx.xx.xx.xx:2181
The connection is successful, but Phoenix still references the local HBase and not the remote HBase. Can anyone tell me where the problem is?
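One thing worth checking: sqlline.sh combines the argument you pass with whatever hbase-site.xml it finds on the local classpath, so a local config can silently take precedence. A sketch, keeping the placeholder IP from the question (/hbase is the default ZooKeeper parent znode; adjust if your cluster uses a different one):

# pass the full quorum:port:znode triple and make sure HBASE_CONF_DIR
# does not point at the local VM's hbase-site.xml
./sqlline.sh xxx.xx.xx.xx:2181:/hbase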