I received the error below from ZooKeeper when I log in to my cluster environment. I am using the default ZooKeeper that ships with HBase.
HBase is able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is the default)
Consider inspecting your ZK server logs for that error and then make sure you
are reusing HBase Configuration as often as you can. See HTable's javadoc for
more information.
It seems to be a file handle issue to me. HBase uses a lot of files, all at the same time. The default ulimit -n (i.e. the per-user open file limit) of 1024 on most *nix systems is insufficient. Increasing the maximum number of file handles to a higher value, say 10,000 or more, might help. Please note that increasing the file handle limit for the user who is running the HBase process is an operating system configuration, not an HBase configuration.
If you are on Ubuntu you will need to make the following changes:
In the file /etc/security/limits.conf add the following line
hadoop - nofile 32768
Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need two entries, one for each user. In the same file, set the nproc hard and soft limits. For example:
hadoop soft/hard nproc 32000
In the file /etc/pam.d/common-session add as the last line in the file:
session required pam_limits.so
Otherwise the changes in /etc/security/limits.conf won't be applied.
Don't forget to log out and back in again for the changes to take effect; you can verify the new limits as shown below.
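For example, a quick way to verify the new limits after logging back in (run these as the user that launches Hadoop/HBase):

ulimit -n   # open-file limit; should now report 32768
ulimit -u   # max user processes (nproc); should reflect the 32000 setting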
Reference: http://hbase.apache.org/book.html#basic.prerequisites
HTH
There can be many reasons.
I would say, first try this.
Verify that the /etc/hosts file is set up correctly.
Check your HBase configuration. Have you configured hbase-env.sh so that HBase manages ZooKeeper itself? Have you configured the ZooKeeper quorum in hbase-site.xml? (A sketch of both settings follows this list.)
Try copying zoo.cfg into the Hadoop conf directory (onto the classpath, really) across the entire cluster. (Source)
If that doesn't work out, go through your code and see if you are creating multiple HBaseConfiguration objects. The approach, as mentioned here, is to create one single HBaseConfiguration object and then reuse it in all your code.
You can also take a look at the hbase.regionserver.handler.count setting:
http://hbase.apache.org/configuration.html#recommended_configurations
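As a rough sketch of the ZooKeeper-related settings mentioned above (hostnames are placeholders; set HBASE_MANAGES_ZK to false instead if you run a separate ZooKeeper quorum):

# conf/hbase-env.sh is a shell script; this line tells HBase to start and stop its own bundled ZooKeeper
export HBASE_MANAGES_ZK=true

# conf/hbase-site.xml (XML, shown here as a comment) should list the quorum hosts:
#   <property>
#     <name>hbase.zookeeper.quorum</name>
#     <value>node1.example.com,node2.example.com,node3.example.com</value>
#   </property>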
Related
I have a Hadoop cluster consisting of 4 nodes on which I am running a pyspark script. I have a config.ini file which contains details like locations of certificates, passwords, server names, etc. which are needed by the script. Each time this file is updated I need to sync the changes across all 4 nodes. Is there a way to avoid that?
I have not needed to sync or update changes to my script; making them on just one node and running it from there is enough. Is the same possible for the config file?
The most secure answer is likely to learn how to use a keystore with Spark.
A little less secure but still good: have you considered just putting the file in HDFS and then referencing it? (Lower security, but easier to use.)
Insecure methods that are easy to use:
You can also pass it as a file to spark-submit, which will transfer the file for you.
Or you could add the values directly to your spark-submit command line (a sketch of both follows).
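As a rough sketch of the last two options (file names, paths and config keys are illustrative, not taken from the question):

# Ship config.ini with the job; spark-submit copies it into each executor's
# working directory, so the script can open it by its plain file name
spark-submit --files config.ini my_script.py

# Or pass individual values on the command line instead of shipping a file
spark-submit --conf spark.myapp.server=db01.example.com my_script.py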
Our Hadoop cluster shows the JobTracker process gradually eating up memory, to the point that we have to restart the cluster every week. I searched around for a possible solution. One post mentioned decreasing 'mapred.jobtracker.completeuserjobs.maximum' to 5, so I checked mapred-site.xml under the /hadoop-install/conf directory on the name node and found two entries for that parameter: one sets it to 30, the other sets it to 5. When I go to any of the data nodes and check mapred-site.xml, I don't find that parameter set at all. However, when I looked at a running job on the M/R administration page and checked its job file, it showed the parameter set to 100. I'm really confused about where this parameter is actually set, and if I update it, do I need to restart the cluster? We are running Apache Hadoop 1.2.1 on Google Cloud.
Hadoop does not automatically copy the configuration files from your driver machine to all of the cluster machines. You need to do that via scp and/or rsync, or preferably with an automated deployment tool like Chef, Ansible, Puppet, etc.
As far as the individual job parameters go: you can actually set them on a per-job basis by using -D:
<path to jar>/myHadoopJobJar.jar -Dmapred.jobtracker.completeuserjobs.maximum=5
Specifically, I want to change the maximum number of mappers and the maximum number of reducers for each node in an HDInsight cluster running on Microsoft Azure.
Using remote desktop, I logged in to the head node. I edited the mapred-site.xml file on the head node and changed the mapred.tasktracker.map.tasks.maximum and the mapred.tasktracker.reduce.tasks.maximum values. I tried rebooting the head node, but I was not able to reboot. I used the start-onebox.cmd and stop-onebox.cmd scripts to try and start/stop HDInsight.
I then ran a streaming mapreduce passing the desired number of reducers to the hadoop-streaming.jar, but the number of reducers was still limited by the previous value of mapred.tasktracker.reduce.tasks.maximum. Most of my reducers were pending execution.
Do I need to change the mapred-site.xml file on every node? Is there an easy way to change this, or do I need to remote desktop into every node? How do I reboot or restart the cluster so that my new values are used?
Thanks
I know it has been a while since the question was posted, but I would like to post this for other users who may find it useful.
There are 2 ways you can change Hadoop configuration files (such as mapred-site.xml, hive-site.xml, etc.) on HDInsight:
Option #1:
This is the easiest - you can supply the Hadoop configuration values per job, as shown in this blog.
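As a generic sketch of what a per-job override looks like for a streaming job (the jar path, input/output paths and the mapred.reduce.tasks value are illustrative; note that daemon-side limits such as mapred.tasktracker.reduce.tasks.maximum are read by the TaskTracker at startup, so they cannot be raised per job):

# -D generic options must come before the streaming-specific options
hadoop jar /path/to/hadoop-streaming.jar \
    -D mapred.reduce.tasks=16 \
    -input /user/me/input \
    -output /user/me/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc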
Option #2:
You can customize an HDInsight cluster with Hadoop configuration values during provisioning or installation of the cluster, as shown in this blog.
Manually modifying a config file is not supported and the change will be lost when the Azure VM gets re-imaged.
I am currently "playing around" with Hadoop in a VM (CDH4.1.3 image from cloudera). What I am wondering about is the following (and the documentation did not really help me in that regard).
Following the tutorial, I would format a NameNode first - OK, that is already done if one uses the Cloudera image. Likewise, the HDFS file structure is already present. In hdfs-site.xml, the datanode data dir is set to:
/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data
which is obviously where the blocks are supposed to be copied to in a real distributed setting. In the Cloudera tutorial, one is told to create HDFS "home directories" for each user (/users/<username>), but I do not understand what they are for. Are they just for local test runs in a single-node setup?
Say I really had petabytes of data on tape that does not fit into my local storage. This data would have to be distributed straight away, rendering a local "home directory" entirely useless.
Could someone tell me, just to give me an intuition, what a real Hadoop workflow with massive data would look like? What kind of distinct nodes would I have running for a start?
There's the master (JobTracker) with its slaves file (where would I put that?) allowing the master to resolve all the DataNodes. Then there is my NameNode that keeps track of where the block IDs are stored. The DataNodes also carry TaskTracker responsibility. In the config files, the NameNode's URI is included -- am I correct so far? Then there is still the ${user.name} variable in the configuration, which apparently, if I understood it right, has something to do with WebHDFS; it would also be great if someone could explain that to me. In the running examples, the directories tend to be hardcoded to
/var/lib/hadoop-hdfs/cache/1/dfs/data, /var/lib/hadoop-hdfs/cache/2/dfs/data and so on.
So, back to the example: say I have my tape and want to import the data into my HDFS (and I am required to stream the data into the filesystem because I lack the local storage to save it locally on a single machine). Where would I start with the migration process? On an arbitrary DataNode? On the NameNode that distributes the chunks? After all, I cannot assume the data will just "be there", because the NameNode has to be aware of the block IDs.
It would be great if someone could shortly elaborate on these topics:
What is the home directory really for?
Do I migrate data to the home directory first and to the real distributed system afterwards?
How does WebHDFS work and what role does it play with regard to the user.name variable?
How would I migrate "big data" into my HDFS on the fly - or even if it's not big data, how do I populate my file system in a proper way (meaning that the chunks get randomly distributed across the cluster)?
What is the home directory really for?
You have a small confusion here. Just like /home exists for local filesystems on Linux, where users are given their own storage space, /users is a home mount ON HDFS (the distributed FS). The tutorial needs you to administratively create a home directory for the user you wish to later run data loads and queries as, so that they get adequate permissions and storage access on HDFS. The tutorial is not asking you to create these directories locally.
Do I migrate data to the home directory first and to the real distributed system afterwards?
I believe my answer above should clarify this for you. You should create your home directory on HDFS, and then load all your data inside that directory.
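As a rough sketch of creating such a home directory from the command line (the user name tom is a placeholder, and the sudo -u hdfs prefix assumes a CDH-style setup where the HDFS superuser is hdfs):

# Create the home directory on HDFS and hand ownership to the user
# (-p creates missing parent directories on newer shells)
sudo -u hdfs hadoop fs -mkdir -p /users/tom
sudo -u hdfs hadoop fs -chown tom:tom /users/tom

# The user can then load data into it, e.g.:
hadoop fs -put localfile.txt /users/tom/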
How does WebHDFS work and what role does it play with regard to the user.name variable?
WebHDFS is one of the various ways to access HDFS. Regular clients talking to HDFS require the use of its Java APIs. WebHDFS (and also HttpFs) was added to HDFS to let other languages have their own set of APIs by providing a REST front-end to HDFS. WebHDFS also supports user authentication, which helps persist the permission and security models.
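As a small illustration of the REST front-end (hostname, port and user name are placeholders; 50070 is the usual NameNode HTTP port, and the user.name query parameter is how simple authentication is passed):

# List the contents of a user's HDFS home directory over WebHDFS
curl -i "http://namenode.example.com:50070/webhdfs/v1/users/tom?op=LISTSTATUS&user.name=tom"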
How would I migrate "big data" into my HDFS on the fly - or even if it's not big data, how do I populate my file system in a proper way (meaning that the chunks get randomly distributed across the cluster)?
A large part of the problem HDFS solves for you is managing the distribution of data. When loading files or data streams into HDFS (via the CLI tools, sinks from Apache Flume, etc.), the blocks are spread in an ideal distribution by HDFS itself, and the chunking is managed by it as well. All you need to do is use the regular, user-side FileSystem-style APIs and forget about what goes where underneath - it's all managed for you.
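For example, loading data with the CLI tools might look like this (paths are placeholders; the second form streams from stdin, which helps when the data never fits on local disk as a single file):

# Copy a local file into the user's HDFS home directory
hadoop fs -put /local/path/data.csv /users/tom/data/

# Stream data straight into HDFS without staging it locally first
tar -xOzf archive.tar.gz | hadoop fs -put - /users/tom/data/archive_contents.txt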
I'm trying to set up a Hadoop cluster on 5 machines on the same LAN with NFS. The problem I'm facing is that the copy of Hadoop on one machine is replicated to all the machines, so I can't provide exclusive properties for each slave. Because of this, I get "Cannot create lock" kinds of errors. The FAQ suggests that NFS should not be used, but I have no other option.
Is there a way I can specify properties such that the master picks its conf files from location1, slave1 picks its conf files from location2, and so on?
Just to be clear, there's a difference between configurations for compute nodes and HDFS storage. Your issue appears to be solely the storage for configurations. This can and should be done locally, or at least let each machine map to a symlink based on some locally identified configuration (e.g. Mach01 -> /etc/config/mach01, ...).
(Revision 1) Regarding the comment/question below about symlinks: First, I'm going to admit that this is not something I can immediately solve. There are 2 approaches I see:
Have a script (e.g. at startup or as a wrapper for starting Hadoop) on the machine determine the hostname (e.g. hostname -a), which then identifies a local symlink (e.g. /usr/local/hadoopConfig) to the correct directory on the NFS directory structure.
Set an environment variable, a la HADOOP_HOME, based on the local machine's hostname, and let various scripts work with this.
Although #1 should work, it is a method relayed to me, not one that I set up, and I'd be a little concerned about symlinks in the event that the hostname is misconfigured (this can happen). Method #2 seems more robust; a rough sketch of both approaches is below.
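A minimal sketch of both approaches, assuming per-host configuration directories already exist on the NFS mount (paths, hostnames and the use of HADOOP_CONF_DIR rather than HADOOP_HOME are my own illustrative choices):

#!/bin/sh
# Wrapper run on each machine before starting Hadoop
HOST=$(hostname)

# Approach 1: point a fixed local symlink at this host's config directory on NFS
ln -sfn "/nfs/hadoop/conf/${HOST}" /usr/local/hadoopConfig

# Approach 2: set an environment variable based on the hostname; Hadoop's
# start-up scripts honour HADOOP_CONF_DIR when locating the *-site.xml files
export HADOOP_CONF_DIR="/nfs/hadoop/conf/${HOST}"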