Namenode is not running without errors in cygwin - hadoop

I have installed Hadoop 2.2, SSH, and Java in my Cygwin environment. When I try to start the NameNode with sbin/hadoop-daemon.sh start namenode, I get no errors, and if I run the jps command right away I can see the NameNode running. But after about 10 seconds it is no longer running.

Related

JPS command shows only JPS

I installed Hadoop and tried to run it. The terminal shows that everything has started, but when I run the jps command it shows only jps. I am new to Ubuntu and need Hadoop for academic work; can anyone help me get it running?
I installed java using sudo apt-get install open-jdk
My /usr/lib/jvm directory looks like this:
The following are my Hadoop configuration files:
It's probably due to the user accounts you are using: it looks like start-all.sh was run as one user and jps as a different one. jps only lists the JVMs owned by the invoking user, so run both commands as the same user.
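A quick way to check this, sketched below; "hadoopuser" is just a placeholder for whichever account actually started the daemons:

```shell
# Find out which user owns the Hadoop JVMs
# (the [n] trick keeps grep from matching itself):
ps -ef | grep -i '[n]amenode'

# jps only lists JVMs started by the invoking user, so run it
# as the same user that ran start-all.sh:
sudo -u hadoopuser jps
```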

error while installing kylo specific services for nifi

I am trying to install kylo 0.8.4.
There is a step to install Kylo-specific components after installing NiFi, using the command:
sudo ./install-kylo-components.sh /opt /opt/kylo kylo kylo
but I am getting the following error:
Creating symlinks for NiFi version 1.4.0.jar compatible nars
ERROR: spark-submit not on path. Has spark been installed?
I have Spark installed. Need help.
The script calls which spark-submit to check whether Spark is available. If it is, the script uses spark-submit --version to determine the installed Spark version.
The error indicates that spark-submit is not on the system path. Can you please execute which spark-submit on the command line and check the result?
If spark-submit is not on the system path, you can fix it by updating the PATH variable in your .bash_profile file to include the location of your Spark installation.
As a next step, you can verify the installed version of Spark by running spark-submit --version.
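The PATH fix described above can be sketched as follows; /opt/spark is only a placeholder, so substitute the directory where Spark is actually installed:

```shell
# Add Spark's bin directory to PATH via .bash_profile.
# /opt/spark is a placeholder; use your real Spark install location.
echo 'export SPARK_HOME=/opt/spark' >> ~/.bash_profile
echo 'export PATH="$PATH:$SPARK_HOME/bin"' >> ~/.bash_profile
source ~/.bash_profile

# Verify the shell can now resolve spark-submit:
which spark-submit && spark-submit --version
```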

mpiexec hangs for remote execution

I have two EC2 instances.
Ubuntu 12.04 running OpenMPI 1.4.3
Ubuntu 14.04 running OpenMPI 1.6.5
I run this command:
mpiexec --hostfile machines ls
where "machines" is a file that contains the IP address of the other server (not the one the command is run from). Every time, it hangs indefinitely. When I replace that IP address with the address of the server the command is being run on, it works fine. Passwordless SSH between the machines also works fine.
I tried installing the same MPI version on both machines, but could not get that to work; apt-get installs different versions on the two machines for some reason.
What can I do to make MPI work between machines?
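For reference, a hostfile setup like the one described might look like this; the addresses below are placeholders, not the poster's actual instances:

```shell
# "machines": one host per line; Open MPI also accepts an
# optional slot count per host.
cat > machines <<'EOF'
203.0.113.10 slots=1
203.0.113.11 slots=1
EOF

# A quick connectivity test: run a trivial non-MPI command on every host.
mpiexec --hostfile machines hostname
```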

Not able to run Hadoop daemons

When I run the jps command:
I only see jps as the running java program in return.
When I run the start-all.sh command, I receive errors like: Connection to port 22 refused
Hadoop's start-all.sh script uses SSH to manage its services. It looks like your computer/cluster:
has no SSH daemon installed. You can install one on Ubuntu with sudo apt-get install ssh.
(less likely) sshd uses a non-default port. Check your sshd configuration.
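Both possibilities above can be checked with a few commands; this sketch assumes a Debian/Ubuntu system with the stock sshd_config location:

```shell
# Install and start the SSH daemon:
sudo apt-get install -y ssh
sudo service ssh start

# Check which port sshd listens on; Hadoop's scripts expect 22:
grep -i '^Port' /etc/ssh/sshd_config

# Confirm non-interactive SSH to localhost works,
# since that is what start-all.sh relies on:
ssh -o BatchMode=yes localhost true && echo "ssh OK"
```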

JPS command not showing all the working daemons of hadoop

I have installed Hadoop via Cygwin on Windows 7. All of the Hadoop daemons have started, but jps fails to show the TaskTracker, SecondaryNameNode, and DataNode. I checked the logs; everything is working fine there. What could be the problem?
Thanks in advance.
