hcatUtil not found when Configuring HP Vertica for HCatalog - hadoop

I am trying to configure HP Vertica for HCatalog:
Configuring HP Vertica for HCatalog
But I cannot find hcatUtil on my Vertica cluster.
Where can I get this utility?

As this answer said, it's in /opt/vertica/packages/hcat/tools starting with version 7.1.1. But you probably need some further information:
You need to run hcatUtil on a node in your Hadoop cluster; the utility gathers up Hadoop libraries that Vertica also needs to access, so you need to have those libraries available. Assuming you're not co-locating Vertica nodes on your Hadoop nodes, the easiest way to do this is probably to copy the script to a Hadoop node, run it with output to a temporary directory, and then copy the contents of the temporary directory back to the Vertica node. (Put them in /opt/vertica/packages/hcat/lib.) Then proceed with installing the HCatalog connector.
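A rough sketch of that copy, run, and copy-back workflow, with placeholder host names and temporary paths (check the utility's own help output for the exact arguments your version expects):
# 1. from a Vertica node, copy the utility to a Hadoop node
scp /opt/vertica/packages/hcat/tools/hcatUtil hadoop-node:/tmp/
# 2. on the Hadoop node, run hcatUtil so it gathers the required Hadoop/Hive
#    libraries into a temporary directory such as /tmp/hcatLib
# 3. copy the gathered libraries back onto the Vertica node
scp -r hadoop-node:/tmp/hcatLib/. /opt/vertica/packages/hcat/lib/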
See this section in the Vertica documentation for more details. (Link is to 7.2.x, but the process has been the same since the tool was introduced.)

The hcatUtil utility was introduced in Vertica 7.1.1 and is located at /opt/vertica/packages/hcat/tools. If you do not have it there, you are most likely using an older Vertica version.

Related

Why can I use HBase without starting Hadoop/HDFS?

I am new to HBase. I recently installed HBase and tried to start it on my Mac. Everything is fine and I can play with HBase. Some articles say I should start Hadoop first when using HBase, so I am wondering whether this prerequisite has changed?
Hadoop is not a hard requirement for HBase unless you are running fully distributed, which you are not. Running on a single node as you are, you can use the local filesystem. See HBase run modes: Standalone and Distributed for more information.
Your local filesystem (the file:// URI) is Hadoop-compatible. HBase requires a Hadoop-compatible storage layer, but that does not mean it must literally be HDFS.
HDFS simply provides scalability and reliability.
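As a quick sanity check, a standalone run needs nothing beyond HBase itself; a minimal sketch, assuming HBase is unpacked under /usr/local/hbase (a hypothetical path):
# start HBase in standalone mode; no Hadoop/HDFS daemons are required,
# and data is written to the local filesystem by default
/usr/local/hbase/bin/start-hbase.sh
# open the interactive shell to verify the instance is up
/usr/local/hbase/bin/hbase shell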

Set up the MarkLogic connector for Hadoop on a Windows machine?

I need to setup MarkLogic connector for Hadoop for sending the ml files to HDFS storage and retrieving them.
I went through one of the MarkLogic documents, where the required-software section mentions that it requires Linux:
The MarkLogic Connector for Hadoop is a Java-only API and is only available on Linux. You can use the connector with any of the Hadoop distributions listed below. Though the Hadoop MapReduce Connector is only supported on the Hadoop distributions listed below, it may work with other distributions, such as an equivalent version of Apache Hadoop.
So, does this mean I can't achieve this on a Windows machine?

Can Tableau connect with Apache Hadoop, or only with the major Hadoop distributions?

I need help with a reporting tool. Basically, we are looking for the best reporting tool that can connect to Hive and pull reports, so we thought of using Tableau. We are using our own Hadoop distribution (not from Hortonworks, Cloudera, MapR, etc.). Will Tableau also connect to the Apache distribution of Hadoop? If not, please suggest a good reporting tool; freeware is strongly preferred.
Thank you.
Yes, Tableau will connect to your free Apache Hadoop distribution.
You will have to put all the necessary jar files, such as the Hadoop core and Hadoop common jars, into your Tableau lib directory; you also have to put the correct version of the Tableau driver into your Hadoop lib directory.
Then, with the help of HiveServer2 (also known as the Hive Thrift server), you can supply your driver name and connection string.
For more details:
http://kb.tableau.com/articles/knowledgebase/connecting-to-hive-server-2-in-secure-mode
http://kb.tableau.com/articles/knowledgebase/administering-hadoop-hive
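Before pointing Tableau at the cluster, it can also help to confirm that HiveServer2 is reachable; a minimal sketch with placeholder host, port, and user:
# start HiveServer2 on the Hive node if it is not already running
hive --service hiveserver2 &
# verify connectivity with beeline over the standard HiveServer2 JDBC URL;
# Tableau's Hive connection uses the same host and port
beeline -u jdbc:hive2://hive-server-host:10000/default -n hiveuser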

Can Apache Spark run without Hadoop?

Are there any dependencies between Spark and Hadoop?
If not, are there any features I'll miss when I run Spark without Hadoop?
Spark is an in-memory distributed computing engine.
Hadoop is a framework for distributed storage (HDFS) and distributed processing (YARN).
Spark can run with or without Hadoop components (HDFS/YARN).
Distributed Storage:
Since Spark does not have its own distributed storage system, it has to depend on one of these storage systems for distributed computing.
S3 – Non-urgent batch jobs. S3 fits very specific use cases when data locality isn’t critical.
Cassandra – Perfect for streaming data analysis and an overkill for batch jobs.
HDFS – Great fit for batch jobs without compromising on data locality.
Distributed Processing:
You can run Spark in three different modes: Standalone, YARN, and Mesos.
Have a look at the below SE question for a detailed explanation about both distributed storage and distributed processing.
Which cluster type should I choose for Spark?
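For illustration, the same application can be pointed at any of these cluster managers just by changing the --master flag; a sketch with a placeholder class, jar, and master URLs:
# standalone cluster manager
./bin/spark-submit --master spark://master-host:7077 --class com.example.MyApp myapp.jar
# YARN (HADOOP_CONF_DIR or YARN_CONF_DIR must point at your Hadoop configuration)
./bin/spark-submit --master yarn --deploy-mode cluster --class com.example.MyApp myapp.jar
# Mesos
./bin/spark-submit --master mesos://mesos-master:5050 --class com.example.MyApp myapp.jar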
Spark can run without Hadoop, but some of its functionality relies on Hadoop's code (e.g., handling of Parquet files). We're running Spark on Mesos and S3, which was a little tricky to set up but works really well once done (you can read a summary of what's needed to set it up properly here).
(Edit) Note: since version 2.3.0, Spark has also added native support for Kubernetes.
By default, Spark does not have a storage mechanism.
To store data, it needs a fast and scalable file system. You can use S3, HDFS, or any other file system. Hadoop is an economical option due to its low cost.
Additionally, if you use Tachyon, it will boost performance with Hadoop. Hadoop is highly recommended for Apache Spark processing.
As per the Spark documentation, Spark can run without Hadoop.
You may run it in standalone mode without any resource manager.
But if you want to run in a multi-node setup, you need a resource manager like YARN or Mesos and a distributed file system like HDFS, S3, etc.
Yes, Spark can run without Hadoop. All core Spark features will continue to work, but you'll miss things like easily distributing all your files (code as well as data) to all the nodes in the cluster via HDFS, etc.
Yes, you can install Spark without Hadoop.
That would be a little tricky.
You can refer to Arnon's post on using Parquet with S3 as the data storage:
http://arnon.me/2015/08/spark-parquet-s3/
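For instance, a minimal sketch of reading Parquet data from S3 without HDFS, assuming a hadoop-aws version that matches your Spark build's Hadoop libraries and placeholder credentials and bucket names:
# pull in the S3A filesystem implementation and pass credentials as Hadoop config
./bin/spark-shell \
  --packages org.apache.hadoop:hadoop-aws:3.3.4 \
  --conf spark.hadoop.fs.s3a.access.key=YOUR_ACCESS_KEY \
  --conf spark.hadoop.fs.s3a.secret.key=YOUR_SECRET_KEY
# inside the shell, Parquet files on S3 can then be read with
#   spark.read.parquet("s3a://your-bucket/path/")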
Spark only does processing, and it uses dynamic memory to perform the task, but to store the data you need some data storage system. This is where Hadoop comes into play with Spark: it provides the storage for Spark.
One more reason for using Hadoop with Spark is that both are open source and integrate with each other easily compared to other data storage systems. For other storage such as S3, configuration can be tricky, as mentioned in the link above.
But Hadoop also has its own processing unit, called MapReduce.
Want to know the difference between the two?
Check this article: https://www.dezyre.com/article/hadoop-mapreduce-vs-apache-spark-who-wins-the-battle/83
I think this article will help you understand
what to use,
when to use, and
how to use it!
Yes, of course. Spark is an independent computation framework. Hadoop is a distributed storage system (HDFS) with the MapReduce computation framework. Spark can get data from HDFS, as well as from any other data source such as a traditional database (JDBC), Kafka, or even local disk.
Yes, Spark can run with or without a Hadoop installation. For more details you can visit https://spark.apache.org/docs/latest/
Yes, Spark can run without Hadoop. You can install Spark on your local machine without Hadoop. However, the Spark distribution comes with pre-built Hadoop libraries, which are used when installing it on your local machine.
You can run Spark without Hadoop, but Spark depends on Hadoop's winutils on Windows, so some features may not work. Also, if you want to read Hive tables from Spark, then you need Hadoop.
My English is not good, forgive me!
TL;DR
Use local (single node) or standalone (cluster) mode to run Spark without Hadoop, but it still needs Hadoop dependencies for logging and some file processing.
Windows is strongly NOT recommended for running Spark!
Local mode
Spark has many run modes; one of them, called local mode, runs without Hadoop dependencies.
So, here is the first question: how do we tell Spark we want to run in local mode?
After reading this official doc, I gave it a try on my Linux OS:
Install Java and Scala; this is not the core content, so I skip it here.
Download the Spark package
There are two types of package: "without hadoop" and "hadoop integrated".
The most important thing is that "without hadoop" does NOT mean it runs without Hadoop; it is just not bundled with Hadoop, so you can bundle it with your own custom Hadoop!
Spark can run without Hadoop (HDFS and YARN), but it needs Hadoop dependency jars such as the Parquet/Avro SerDe classes, so I strongly recommend using the "integrated" package (you will also find some logging dependencies such as log4j and slf4j and other common utility classes missing if you choose the "without hadoop" package, while all of these come bundled with the Hadoop-integrated package)!
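If you do pick the "without hadoop" package anyway, the Spark documentation describes pointing it at an existing Hadoop installation via SPARK_DIST_CLASSPATH; a minimal sketch (the Hadoop path is a placeholder):
# in conf/spark-env.sh: let the hadoop-free Spark build pick up the jars
# from an existing Hadoop installation
export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)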
Run on local mode
The simplest way is to just run the shell, and you will see the welcome log:
# same as ./bin/spark-shell --master local[*]
./bin/spark-shell
Standalone mode
Same as above, but different at step 3.
# Start up the cluster
# if you want to run it in the foreground
# export SPARK_NO_DAEMONIZE=true
./sbin/start-master.sh
# run this on every one of your workers
./sbin/start-worker.sh spark://VMS110109:7077
# Submit a job or just start the shell
./bin/spark-shell --master spark://VMS110109:7077
On Windows?
I know many people run Spark on Windows just for study, but things are so different on Windows that I really, strongly do NOT recommend using Windows.
The most important thing is to download winutils.exe from here and configure the system variable HADOOP_HOME to point to where winutils is located.
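For example, after placing winutils.exe under a folder such as C:\hadoop\bin (a hypothetical location), the variables can be set from a Command Prompt:
:: HADOOP_HOME must be the parent of the bin folder that contains winutils.exe
setx HADOOP_HOME "C:\hadoop"
:: make winutils.exe reachable on the PATH for newly opened shells
setx PATH "%PATH%;C:\hadoop\bin"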
At this moment, 3.2.1 is the latest release version of Spark, but a bug exists. You will get an exception like Illegal character in path at index 32: spark://xxxxxx:63293/D:\classe when running ./bin/spark-shell.cmd; only starting up a standalone cluster and then using ./bin/spark-shell.cmd, or using a lower version, can temporarily work around this.
For more details and a solution, you can refer here.
No. It requires a full-blown Hadoop installation to start working: https://issues.apache.org/jira/browse/SPARK-10944

How to migrate data from a CDH3 cluster to a (different) CDH4 cluster?

I want to copy data from CDH3 to CDH4 (on a different server). My CDH4 server is set up such that it cannot see CDH3, so I have to push data upstream from CDH3 to CDH4 (which means I cannot run the distcp command from CDH4 to copy the data). How can I get my data over to CDH4's HDFS by running a command on the lower-version CDH3 Hadoop, or is this not possible?
Ideally, you should be able to use distcp to copy the data from one HDFS cluster to another.
hadoop distcp -p -update "hdfs://A:8020/user/foo/bar" "hdfs://B:8020/user/foo/baz"
-p to preserve status, -update to overwrite data if a file is already present but has a different size.
In practice, depending on the exact versions of Cloudera you're using, you may run into incompatibility issues such as CRC mismatch errors. In this case, you can try to use HFTP instead of HDFS, or upgrade your cluster to the latest version of CDH4 and check the release notes to see if there is any relevant known issue and workaround.
If you still have issues using distcp, feel free to create a new Stack Overflow question with the exact error message, the versions of CDH3 and CDH4, and the exact command.
You will have to use distcp with the following command when transferring between two different versions of HDFS (notice hftp):
hadoop distcp hftp://Source-namenode:50070/user/ hdfs://destination-namenode:8020/user/
DistCp is intra-cluster only.
The only way I know of is "fs -get" and "fs -put" for every subset of data that can fit on local disk.
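A rough sketch of that get/put workflow, with placeholder paths and host names:
# on the CDH3 cluster: pull a subset of the data down to local disk
hadoop fs -get /user/foo/bar /tmp/bar
# move it to a machine that can reach the CDH4 cluster, e.g. with scp or rsync
scp -r /tmp/bar cdh4-gateway:/tmp/bar
# on the CDH4 cluster: push it back into HDFS
hadoop fs -put /tmp/bar /user/foo/bar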
For copying between two different versions of Hadoop, one will usually use HftpFileSystem. This is a read-only FileSystem, so DistCp must be run on the destination cluster (more specifically, on TaskTrackers that can write to the destination cluster). Each source is specified as hftp://<dfs.http.address>/<path> (the default dfs.http.address is <namenode>:50070).
