Hadoop on OSX "Unable to load realm info from SCDynamicStore"

I am getting this error on startup of Hadoop on OSX 10.7:
Unable to load realm info from SCDynamicStore
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/travis/input/conf. Name node is in safe mode.
It doesn't appear to be causing any issues with the functionality of Hadoop.

Matthew Buckett's suggestion in HADOOP-7489 worked for me. Add the following to your hadoop-env.sh file:
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"

As an update to this (and to address David Williams' point about Java 1.7): I found that setting only the .realm and .kdc properties was not enough to stop the offending message.
However, by examining the source file that emits the message, I was able to determine that setting the .krb5.conf property to /dev/null was enough to suppress it. Obviously, if you actually have a krb5 configuration, it is better to specify the actual path to it.
In total, my hadoop-env.sh snippet is as follows:
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf=/dev/null"

I'm having the same issue on OS X 10.8.2, Java version 1.7.0_21. Unfortunately, the above solution does not fix the problem with this version :(
Edit: I found the solution to this, based on a hint I saw here. In the hadoop-env.sh file, change the JAVA_HOME setting to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
(Note the backticks here.)

FYI, you can simplify this further by only specifying the following:
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
This is mentioned in HADOOP-7489 as well.

I had a similar problem on macOS, and after trying different combinations this is what worked for me universally (both Hadoop 1.2 and 2.2):
In $HADOOP_HOME/conf/hadoop-env.sh, set the following lines:
# Set Hadoop-specific environment variables here.
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
# The java implementation to use.
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
Hope this helps.

And also add
YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
before executing start-yarn.sh (or start-all.sh) on CDH 4.1.3.
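A natural place to keep that line is yarn-env.sh so it is picked up automatically (a sketch; the file path is an assumption and varies by distribution, and the empty-value form from the simplification above should work here too):
# in $HADOOP_HOME/etc/hadoop/yarn-env.sh (or /etc/hadoop/conf on CDH); path is an assumption
export YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm= -Djava.security.krb5.kdc="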

I had this error when debugging MapReduce from Eclipse, but it turned out to be a red herring. The real problem was that I should have been remote debugging, by adding debugging parameters to JAVA_OPTS:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044
And then creating a new "Remote Java Application" profile in the debug configuration that pointed to port 1044.
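For example, a minimal sketch of wiring this into hadoop-env.sh (an assumption on my part: this covers runs where the job executes in a single local JVM via the hadoop script; for real task JVMs you would put the equivalent in mapred.child.java.opts):
# sketch: append the remote-debug flags to the Hadoop client JVM options
export HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044"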
This article has some more in-depth information about the debugging side of things. It's talking about Solr, but it works much the same with Hadoop. If you have trouble, stick a message below and I'll try to help.

Related

RuntimeError: Java not found

I have downloaded the JDK and set JAVA_HOME and so on. I can use "javac" on the command line, but when I use it like
nlp=StanfordCoreNLP(r'stanfordnlp',lang='zh')
there is a problem:
builtins.RuntimeError: Java not found.
Maybe you ran out of memory. In my experience, shutting down Java and restarting it can solve this issue.
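Before restarting, it may also be worth a quick sanity check that Java is actually reachable from the environment your Python process runs in (a minimal sketch; the exact output will differ on your machine):
which java        # should print a path rather than nothing
java -version     # should report your installed JDK
echo $JAVA_HOME   # should point at the JDK install directory
If any of these come up empty in the shell you launch Python from, the "Java not found" error is about PATH/JAVA_HOME rather than memory.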

Heap dump not working on CentOS 7

I have added the following setting in my catalina.sh file:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/root/logs"
But still the heap dump file is not created when Tomcat goes down. I have this setup on CentOS 7 on AWS.
Please help me in solving this issue... Thanks in advance.
I would suggest trying to create it manually with the JDK-bundled tool jmap if it is not being generated automatically, although the automatic method above is recommended for best results.
For Linux/Solaris-based operating systems, execute the following command:
$JAVA_HOME/bin/jmap -dump:format=b,file=heap.bin <pid>
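If you don't know the pid, something like this may work (a sketch; the pgrep pattern is an assumption and depends on how Tomcat was started):
$JAVA_HOME/bin/jmap -dump:format=b,file=/tmp/heap.bin $(pgrep -f org.apache.catalina)
Run it as the same user that owns the Tomcat process, otherwise jmap may refuse to attach.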
For more details, follow the link below:
https://confluence.atlassian.com/doc/generating-a-heap-dump-219024032.html
The Tomcat process is usually set up to run as the tomcat user, who most likely will not have write access to your /root folder.
Please try setting it to somewhere like /tmp.
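For example, a minimal sketch of the corrected setting (assuming the flags are passed via CATALINA_OPTS in catalina.sh or setenv.sh, and that the tomcat user can write to /tmp):
export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
Note that -XX:HeapDumpPath can point at a directory, in which case the JVM names the file itself (java_pid<pid>.hprof).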

spark jobserver Missing settings.sh, exiting

I am trying to run ./server_start.sh with spark-jobserver,
but it says
"Missing /home/spark/spark-jobserver1.5.1/bin/settings.sh, exiting".
I also checked the details of ./server_start.sh on GitHub, where I found the check that prints this message.
It means settings.sh should exist, but it does not.
You need the Spark binaries to be installed on your machine, and SPARK_HOME exported. See local.sh.template for an example.
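A minimal sketch of unblocking the startup (the path is taken from the error message above; the SPARK_HOME value is an example, and local.sh.template documents the full set of variables):
# create the missing settings.sh next to server_start.sh
cat > /home/spark/spark-jobserver1.5.1/bin/settings.sh <<'EOF'
export SPARK_HOME=/usr/local/spark
EOF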

Running Cassandra on Mac OS X

I am trying to run Cassandra on my Mac.
I installed it following the steps detailed here: http://www.datastax.com/docs/1.0/getting_started/install_singlenode_root
but when I run:
bin/nodetool ring -h localhost
I get the following error message:
Class JavaLaunchHelper is implemented in both
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java and
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
How can I make Cassandra work?
Many thanks.
You are using ancient docs. On a recent version of Cassandra, run the command like this:
bin/nodetool -h localhost ring (see http://www.datastax.com/documentation/cassandra/2.1/cassandra/tools/toolsRing.html)
If you are using vnodes (the default), use nodetool status for easier-to-read output.
Please use these docs or the docs that match your installation; I doubt you installed Cassandra 1.0. Check the installation instructions for the version you actually downloaded.
CORRECTION: on 2.0.10, the nodetool ring command worked for me with the option in either position:
bin/nodetool -h localhost ring
bin/nodetool ring -h localhost
and also when using --h instead of -h.
It is a known bug in the JDK, but it is not going to stop you from running Cassandra.
What you can do is set the JAVA_HOME variable explicitly.
It will not solve the bug, but it might remedy the error.
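On a Mac, a minimal sketch of doing that (using the same java_home helper as in the Hadoop answers above; the version is an example matching the JDK from the question):
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)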
This is a problem with the JDK version, so you have to do the following:
unset JAVA_HOME in your terminal;
edit nodetool and assign the JAVA variable a JDK older than JDK 7:
JAVA=/Library/Java/JavaVirtualMachines/jdk1.6.0_xx.jdk/Contents/Home/bin/java
then run nodetool, and you should be good to go without any issue.

"The system cannot find the path specified." error message when trying to start GlassFish with asadmin

I tried to follow The Java EE 6 Tutorial and start GlassFish with the command below, but I got an error message. How do I solve this?
C:\glassfish3\bin>asadmin start-domain --verbose
The system cannot find the path specified.
Go to the asenv.bat file in the config directory,
remove the line set AS_JAVA="C:/Program Files(x86)/Java/",
and retry asadmin.
It will work this time!
I fixed this issue by editing glassfish3\glassfish\config\asenv.bat as described in domain1 not configured -- The system cannot find the path specified
Then I got an error because no domains existed; that was solved by editing glassfish3\bin\asadmin.bat as described in Oracle Glassfish "There is no Domain" Issue Fix Solution.
Hi, I was facing the same issue and was able to resolve it by following these steps:
Go to \glassfish\config (Note: in my case it is c:\glassfish3\glassfish\config).
Now open asenv.bat in Notepad.
Make the value of AS_JAVA the same as your JAVA_HOME environment variable.
Now open a command prompt, go to the bin folder, and run asadmin start-domain domain1.
If you get an error that no domain exists, create a new domain by following the link below:
http://docs.oracle.com/cd/E19776-01/820-4497/create-domain-1/index.html
I got this error when installing Java EE (which includes GlassFish) on 64-bit Windows 7. For reference, installing the same latest Java EE on my 64-bit Linux worked well, and I could see how it set the default domain up.
It seems that on my 64-bit Windows 7, asadmin.bat looks for java in "C:\Program Files (x86)\Java\bin\java" even though I have installed the 64-bit version in "C:\Program Files\Java\jdk1.7.0_10\bin".
asadmin.bat first runs "%~dp0..\glassfish\config\asenv.bat" and then guesses where java is. There is something odd in this, at least in my configuration, but I can't fix it nicely:
%JAVA% -jar "%~dp0..\glassfish\modules\admin-cli.jar" %*
I could manually set %JAVA% right, but is there a nicer way to correct it?
Set your correct Java path in:
<glassfish_home>\glassfish\config\asenv.bat
e.g.
set AS_JAVA=C:\Program Files\Java\jdk1.7.0_80
Note: follow Oracle GlassFish's release notes for the supported JDKs.
