MapR - How to Install Sqoop on a Client/Edge Node? - hadoop

I'm familiar with the Cloudera architecture but not MapR. I have a RHEL VM and previously installed the MapR client software using this documentation. I am able to submit MapReduce jobs and query HDFS as expected.
I followed this documentation (after I installed the MapR yum repo) and installed sqoop like so:
yum install mapr-sqoop
If I try to sqoop in some data, or even just issue the sqoop command on its own, I receive the following error:
/opt/mapr/sqoop/sqoop-1.4.4/bin/configure-sqoop: line 47: /opt/mapr/bin/versions.sh: No such file or directory
Error: /opt/mapr/hadoop/hadoop- does not exist!
Please set $HADOOP_COMMON_HOME to the root of your Hadoop installation.
I have a /opt/mapr/hadoop/hadoop-0.20.2 directory. I've tried setting HADOOP_COMMON_HOME and HADOOP_HOME to both /opt/mapr/hadoop and /opt/mapr/hadoop/hadoop-0.20.2 yet still receive the same error.
-- Update:
I issued find / -name hadoop and noted the last line, which was /usr/bin/hadoop.
I then set HADOOP_COMMON_HOME to /usr, and was then asked to set HADOOP_MAPRED_HOME, HBASE_HOME, and HCAT_HOME, all of which I set to /usr.
This error however is still present:
/opt/mapr/sqoop/sqoop-1.4.4/bin/configure-sqoop: line 47: /opt/mapr/bin/versions.sh: No such file or directory
I opened up this file and commented out line 47. This allowed me to use the sqoop command, but the import job failed with many "Error: Unsupported major.minor version" errors (a Java class-version mismatch between the compiled jars and the running JVM).

There should be a patch for this if it hasn't been fixed already.
Here is a temporary solution:
mapr-client does not ship versions.sh; only mapr-core does. A simple fix is to
manually copy that file from a node with mapr-core installed and adjust the
versions therein. Sqoop then works fine.
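The workaround above can be sketched as a short shell check. The hostname mapr-node01 is a hypothetical stand-in for one of your cluster nodes with mapr-core installed; the paths follow the MapR layout shown in the question.

```shell
# versions.sh ships with mapr-core but not mapr-client, so it has to be
# fetched from a full cluster node (mapr-node01 is a hypothetical hostname).
SRC=mapr-node01:/opt/mapr/bin/versions.sh
DEST=/opt/mapr/bin/versions.sh
if [ -f "$DEST" ]; then
  echo "versions.sh already present"
else
  # Copy it over, make it executable, then edit the version variables in
  # it to match this client (e.g. hadoop-0.20.2 under /opt/mapr/hadoop).
  echo "run: scp $SRC $DEST && chmod +x $DEST"
fi
```

After copying, open the file and check that every version variable it sets corresponds to a directory that actually exists on the client.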

Related

start hadoop failed with hadoop-functions.sh

I tried to start Hadoop, but it failed with nothing started. The console log follows.
Mac:sbin lqs2$ sh start-all.sh
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-functions.sh: line 398:
syntax error near unexpected token `<'
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-functions.sh: line 398:
`done < <(for text in "${input[@]}"; do'
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 70:
hadoop_deprecate_envvar: command not found
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 87:
hadoop_bootstrap: command not found
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 104:
hadoop_parse_args: command not found
/Users/lqs2/Library/hadoop-3.1.1/libexec/hadoop-config.sh: line 105:
shift: : numeric argument required
WARNING: Attempting to start all Apache Hadoop daemons as lqs2 in 10
seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
I have tried many ways to solve it, but nothing worked; I even reinstalled the latest version and the error is the same. It almost drives me mad.
Any answer is helpful. Thanks.
Hadoop scripts require bash, not sh. Run the script directly so its bash shebang takes effect:
$ chmod +x start-all.sh
$ ./start-all.sh
Though I would suggest starting HDFS and YARN separately so that you can isolate other issues
You also need to downgrade Hadoop to at least the latest 2.7 release for Spark to work
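The quoted error shows why sh fails: done < <( ... ) is process substitution, a bash-only construct that POSIX sh rejects. A small stand-alone demonstration (the temp script below is just an illustration, not part of Hadoop):

```shell
# Reproduce the pattern from hadoop-functions.sh line 398 in isolation.
cat > /tmp/psub_demo.sh <<'EOF'
while read -r line; do echo "got: $line"; done < <(printf 'a\nb\n')
EOF

# Under bash the loop runs fine:
bash /tmp/psub_demo.sh   # prints "got: a" then "got: b"

# Under a strict POSIX sh (e.g. dash), the same file dies with
# "syntax error near unexpected token `<'" -- the error in the question.
```

This is why "sh start-all.sh" breaks while "./start-all.sh" (which honors the script's bash shebang) does not.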
In my case, I was seeing this error on OS X after installing Hadoop using Homebrew. The solution was to do a fresh install after downloading the Hadoop (3.2.1) binary directly from the official website. While installing, I had set the HADOOP_HOME and JAVA_HOME environment variables.
A word of caution: I found that the issue can occur if the following environment variables are defined in hadoop-env.sh:
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
I had initially added these variables while trying to fix the issue. Ultimately I removed them and the error disappeared.
Note, I executed all the Hadoop commands and scripts as non-root user, and also upgraded bash to version 5.0.17.

Hadoop YARN SLS (Scheduler Load Simulator)

I run the simulator with the following command:
bin/slsrun.sh --input-rumen = <sample-data/2jobs2min-rumen-jh.json>
and it gave the following error:
-su: syntax error near unexpected token `newline'
Note: PWD is $HADOOP_ROOT/share/hadoop/tools/sls
It comes with your Hadoop distribution. The script is located in the tools directory: $HADOOP_HOME/share/hadoop/tools/sls/bin/slsrun.sh. A description of its usage is available at https://hadoop.apache.org/docs/r2.4.1/hadoop-sls/SchedulerLoadSimulator.html. I successfully followed the steps on my Hadoop 2.8.0 installation.
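The "-su: syntax error" itself comes from the angle brackets: in the documentation they mark a placeholder, but typed literally the shell parses "<" as input redirection. A minimal illustration (the second invocation is the corrected form; it of course needs a real Hadoop install to do anything):

```shell
# Literal "<...>" is parsed by the shell as redirection, which is what
# produces the "unexpected token" error:
ERR=$(bash -c 'bin/slsrun.sh --input-rumen = <sample-data/2jobs2min-rumen-jh.json>' 2>&1 || true)
echo "$ERR"   # contains: syntax error near unexpected token `newline'

# Correct invocation: no brackets, no spaces around "=":
# bin/slsrun.sh --input-rumen=sample-data/2jobs2min-rumen-jh.json
```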

install cloudera impala shell on mac os x and connect to impala cluster

We have an Impala server on prod and I need to connect to it with impala-shell from my local MacBook running Mac OS X (10.8).
I downloaded Impala-cdh5.1.0-release.tar.gz, unarchived it, and tried buildall.sh, which failed: .../bin/impala-config.sh: line 123: nproc: command not found
Trying impala-shell directly also fails:
$ shell/impala-shell
ls: /Users/.../Impala-cdh5.1.0-release/shell/ext-py/*.egg: No such file or directory
Traceback (most recent call last):
File "/Users/.../Impala-cdh5.1.0-release/shell/impala_shell.py", line 20, in <module>
import prettytable
ImportError: No module named prettytable
I have jdk installed and JAVA_HOME is set.
Cloudera Manager doesn't seem to support Mac OS, does it?
Based on your limited error message:
.../bin/impala-config.sh: line 123: nproc: command not found
I'd say that no, this package from Cloudera doesn't support OS X. nproc is a Linux (GNU coreutils) command, and anything that references it isn't going to work on OS X.
If you could provide more information - such as where you downloaded it, or what it is, for those of us who aren't Cloudera customers, we might be able to devise workarounds.
Or, contact Cloudera support and complain about the lack of OS X support?
Your second error message looks like Python, not Java, and you provide no context around it. The ImportError at least suggests a missing Python package; pip install prettytable should satisfy that dependency for impala_shell.py.

Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar', and try again

I downloaded Pig 0.14.0 and I am running Hadoop 2.6.0 on Mac OS X. I followed all the installation steps for Pig at https://github.com/ucbtwitter/getting-started/wiki/Installing-Pig . I had set JAVA_HOME correctly as mentioned.
Even after running the ant -Dhadoopversion=23 jar command I am getting the same error: "Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar', and try again".
This error keeps arising:
Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar', and try again.
I studied the shell script by opening the pig-0.14.0/bin/pig file and found that this error is related to the setting of the CLASSPATH, PIG_HOME, and JAVA_HOME variables.
Then I found that I had misspelled PIG_HOME, so I corrected it.
Next I ran the specified command (ant -Dhadoopversion=23 jar) in the Pig installation directory.
Then I got this error:
Not a valid JAR: /Users/../../../pig-0.14.0/pig-0.14.0-SNAPSHOT-core-h2.jar /Users/../../../pig-0.14.0/pig-0.14.0-core-h2.jar
To resolve it, remove the stale jar file at that location.
Then I got it working.
Find the path to the file pig-*-core-h2.jar.
I installed pig using brew install pig and found the jar in the path /usr/local/Cellar/pig/0.17.0/libexec
Run export PIG_HOME=/usr/local/Cellar/pig/0.17.0/libexec
This will fix your error.
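The fix works because bin/pig searches $PIG_HOME for a pig-*-core-h2.jar. A minimal sketch of that lookup, using a scratch directory as a stand-in for a real install root such as /usr/local/Cellar/pig/0.17.0/libexec (the jar name below is a dummy created for the demo):

```shell
# Scratch directory standing in for the Pig install root.
PIG_ROOT=$(mktemp -d)
touch "$PIG_ROOT/pig-0.17.0-core-h2.jar"

# bin/pig effectively performs this search; if it finds nothing it prints
# "Cannot locate pig-core-h2.jar ... and try again".
JAR=$(find "$PIG_ROOT" -maxdepth 1 -name 'pig-*-core-h2.jar' | head -n1)
export PIG_HOME="$PIG_ROOT"
echo "found: $JAR"
```

So the error goes away as soon as PIG_HOME points at a directory that actually contains exactly one matching jar.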
I did this to fix Pig:
mv /data/mapr/pig/pig-0.14/pig-0.14.0-mapr-1603-core-h2.jar /data/mapr/pig/pig-0.14/pig-0.14.0-mapr-1603-core-h2.jar.orig
The following solution works:
Make sure your .bash_profile or .bashrc contains the following environment variables:
export PIG_HOME="/Library/apache-pig-0.15.0"
export PATH="/Library/apache-pig-0.15.0/bin:${PATH}"
Then restart the machine, or restart the terminal.
(I replaced /Library/apache-pig-0.15.0/ with home/cwu/Downloads/pig-0.15.0-src.)

Error -60005 when install Cocos2d-iPhone v3 RC4

When I try to install Cocos2d-iphone 3.0.0 RC4 (run without sudo), I get this error:
Error -60005 occurred while executing script with privileges.
So I showed the installer's package contents and, in Terminal, ran: cd ...Cocos2D Installer 3.0.0.app/Contents/MacOS
Then I tried this command (with sudo):
sudo ./Cocos2D\ Installer\ 3.0.0
It works, but the log contains some errors:
>>> Installing Cocos2D-v3.0.0 files
>>> Installing Cocos2D-v3.0.0 templates
Cocos2D Template Installer (Cocos2D-v3.0.0)
Error: ✖ Script cannot be executed as root.
In order for it to work properly, please execute the script again without 'sudo'.
If you want to know more about how to use this script execute '/Users/viethung/Downloads/Cocos2D-v3.0.0/install.sh --help'.
>>> Building/Installing Cocos2D-v3.0.0 documentation, this may take a minute....
appledoc version: 2.2 (build 963)
Generation step 4/5 failed: GBDocSetInstallGenerator failed generating output, aborting!
Documentation set was installed, but couldn't reload documentation within Xcode.
Xcode got an error: No documentation set present at specified path.
>>> Cocos2D-v3.0.0 installation complete!
Is there any better way than this?
I had the same problem.
I think you have an old cocos2d-iphone installed and it caused this problem.
You should remove the old cocos2d-iphone first. I removed:
~/Library/Developer/Xcode/cocos2d v3.x
and installed again. It works for me.
Hope it works for you :)
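The removal step above can be scripted; the path contains a space, so it must be quoted (the path and version suffix come from the answer above; adjust them to whatever old templates you actually have):

```shell
# Remove stale Cocos2D Xcode templates before re-running the installer.
# rm -rf succeeds whether or not the directory exists.
OLD_TEMPLATES="$HOME/Library/Developer/Xcode/cocos2d v3.x"
rm -rf "$OLD_TEMPLATES"
echo "old templates removed"
```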