RuntimeError: Java not found - stanford-nlp

I have downloaded the JDK, set JAVA_HOME, and so on. I can run "javac" from the command prompt, but when I call
nlp=StanfordCoreNLP(r'stanfordnlp',lang='zh')
I get this error:
builtins.RuntimeError: Java not found.

You may have run out of memory. In my experience, shutting down Java and restarting it can resolve this issue.
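It is also worth checking that the java launcher itself, not just javac, is reachable, since the wrapper has to start a Java process. A minimal sketch of the check (POSIX shell shown; on Windows use setx and the system environment variables instead), with the install path below only an example:
# Verify that the java launcher (not only javac) is available:
java -version
# If it is not found, point JAVA_HOME at the JDK root and put its bin directory on PATH
# (example path; adjust to your own install):
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_281
export PATH="$JAVA_HOME/bin:$PATH"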

Related

Spark doesn't run on Windows anymore

I have Windows 10 and I followed this guide to install Spark and make it work on my OS, as well as using the Jupyter Notebook tool. I used this command to instantiate the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later figured out that no worker was instantiated by following the aforementioned guide, and my tasks were really slow. Therefore, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager (it was run by Docker), I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master started correctly, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
This returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module #60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they all returned the error you can see above. I guess I messed up some Java configuration, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be the Java version.
I installed Java, Python, Spark, etc., all at their latest versions.
I followed the steps in the link below:
https://phoenixnap.com/kb/install-spark-on-windows-10
I got the same error as you.
Then I downloaded Java SE 8 from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
I downloaded jdk-8u281-windows-x64.exe, reset JAVA_HOME, and started spark-shell. It opened perfectly without any issues.
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It worked for me, so I'm sharing the same solution here. :)
Thanks,
Karun
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade or use a different version.
I ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version.
Easiest solution:
The latest version of Java (JDK) is not supported by Spark.
Try installing JDK version 8. This will solve the error.
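In short, all of the answers above converge on pointing Spark at a JDK 8 or 11 installation instead of the newest JDK. A minimal sketch of that fix (POSIX shell shown; on Windows set JAVA_HOME through the system environment variables or setx), with the install path below only an example:
# Point JAVA_HOME at a Spark-supported JDK (example path; adjust to your install):
export JAVA_HOME=/usr/lib/jvm/temurin-11-jdk
export PATH="$JAVA_HOME/bin:$PATH"
# Confirm which Java will be picked up, then retry:
java -version
spark-shell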

Heap dump not working on CentOS 7

I have added the following settings to my catalina.sh file:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/root/logs"
But the heap dump file is still not created when Tomcat goes down. I have this set up on CentOS 7 on AWS.
Please help me solve this issue. Thanks in advance.
If the dump is not generated automatically, I would suggest creating it manually using a JDK-bundled tool called jmap, although the automatic method above is recommended for best results.
For Linux/Solaris-based operating systems, execute the following command:
$JAVA_HOME/bin/jmap -dump:format=b,file=heap.bin <pid>
For more details, follow the link below:
https://confluence.atlassian.com/doc/generating-a-heap-dump-219024032.html
The Tomcat process is usually set up to run as the tomcat user, who most likely does not have write access to your /root folder.
Try setting the dump path to somewhere like /tmp.
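Putting the two suggestions together, here is a minimal sketch, assuming a standard Tomcat layout where catalina.sh picks up bin/setenv.sh and /tmp is writable by the tomcat user:
# bin/setenv.sh -- dump to a directory the tomcat user can actually write to
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
# Or take a dump manually with jmap while the JVM is still running
# (pgrep on the Tomcat Bootstrap class is just one way to find the pid):
$JAVA_HOME/bin/jmap -dump:format=b,file=/tmp/heap.bin $(pgrep -f org.apache.catalina.startup.Bootstrap)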

"The system cannot find the path specified." error message when trying to start GlassFish with asadmin

I tried to follow The Java EE 6 Tutorial and start GlassFish with the command below, but I got an error message. How do I solve this?
C:\glassfish3\bin>asadmin start-domain --verbose
The system cannot find the path specified.
Go to the asenv.bat file in the config directory,
remove the line set AS_JAVA="C:/Program Files(x86)/Java/",
and retry asadmin.
It will work this time!
I fixed this issue by editing glassfish3\glassfish\config\asenv.bat as described in domain1 not configured -- The system cannot find the path specified
Then I got an error because no domains existed; that was solved by editing glassfish3\bin\asadmin.bat as described in Oracle Glassfish "There is no Domain" Issue Fix Solution
Hi, I was facing the same issue. I was able to resolve it by following the steps below:
Go to \glassfish\config (note: in my case it is c:\glassfish3\glassfish\config).
Now open asenv.bat in Notepad.
Set the value of AS_JAVA to the same path as your JAVA_HOME environment variable.
Now open a command prompt, go to the bin folder, and run asadmin start-domain domain1.
If you get an error that no domain exists, create a new domain by following the link below:
http://docs.oracle.com/cd/E19776-01/820-4497/create-domain-1/index.html
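For reference, a minimal sketch of creating and then starting a domain with asadmin (most versions will prompt for an admin user name and password):
asadmin create-domain domain1
asadmin start-domain domain1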
I got this error when installing Java EE (which includes GlassFish) on 64-bit Windows 7. For reference, installing the same latest Java EE on my 64-bit Linux machine worked well, and I could see how it set the default domain up.
It seems that on my 64-bit Windows 7, asadmin.bat looks for Java in "C:\Program Files (x86)\Java\bin\java" even though I have installed the 64-bit version in "C:\Program Files\Java\jdk1.7.0_10\bin".
asadmin.bat first runs "%~dp0..\glassfish\config\asenv.bat" and then works out where it guesses Java is. There is something odd in this, at least in my configuration, but I can't fix it nicely:
%JAVA% -jar "%~dp0..\glassfish\modules\admin-cli.jar" %*
I could manually set %JAVA% correctly, but is there a nicer way to fix this?
Set your correct Java path in:
<glassfish_home>\glassfish\config\osgi.properties
e.g.
set AS_JAVA=C:\Program Files\Java\jdk1.7.0_80
Note: follow Oracle GlassFish's release notes for the supported JDKs.
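For completeness, the same idea on a Unix-style install goes in <glassfish_home>/glassfish/config/asenv.conf, the shell counterpart of the asenv.bat edits above; the JDK path below is only an example and should match your JAVA_HOME:
# glassfish/config/asenv.conf (Unix) -- point AS_JAVA at the JDK you actually installed
AS_JAVA="/usr/lib/jvm/jdk1.7.0_80"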

Cloudfoundry setup error - caused by Chef

I followed the Readme on https://github.com/cloudfoundry/vcap
It should work fine...
but I got an error like this:
Does anyone know what's going on?
I am running Ubuntu 10.04.
I have not encountered this problem with the latest version of VCAP. How long has it been since you updated the copy of the VCAP source on the Ubuntu instance?
Can you also post the configuration file you are using, if any?

Hadoop on OSX "Unable to load realm info from SCDynamicStore"

I am getting this error on startup of Hadoop on OSX 10.7:
Unable to load realm info from SCDynamicStore
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/travis/input/conf. Name node is in safe mode.
It doesn't appear to be causing any issues with the functionality of Hadoop.
Matthew Buckett's suggestion in HADOOP-7489 worked for me. Add the following to your hadoop-env.sh file:
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
As an update to this (and to address David Williams' point about Java 1.7), I found that setting only the .realm and .kdc properties was insufficient to stop the offending message.
However, by examining the source file that is emitting the message, I was able to determine that setting the .krb5.conf property to /dev/null was enough to suppress it. Obviously, if you actually have a krb5 configuration, it is better to specify the actual path to it.
In total, my hadoop-env.sh snippet is as follows:
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf=/dev/null"
I'm having the same issue on OS X 10.8.2, Java version 1.7.0_21. Unfortunately, the above solution does not fix the problem with this version :(
Edit: I found the solution to this, based on a hint I saw here. In the hadoop-env.sh file, change the JAVA_HOME setting to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
(Note the backticks here.)
FYI, you can simplify this further by only specifying the following:
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
This is mentioned in HADOOP-7489 as well.
I had a similar problem on macOS, and after trying different combinations, this is what worked for me universally (for both Hadoop 1.2 and 2.2):
In $HADOOP_HOME/conf/hadoop-env.sh, set the following lines:
# Set Hadoop-specific environment variables here.
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
# The java implementation to use.
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
Hope this helps.
Also add
YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
before executing start-yarn.sh (or start-all.sh) on CDH 4.1.3.
I had this error when debugging MapReduce from Eclipse, but it was a red herring. The real problem was that I should have been remote debugging by adding debugging parameters to JAVA_OPTS:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044
Then I created a new "Remote Java Application" profile in the debug configuration that pointed to port 1044.
This article has some more in-depth information about the debugging side of things. It's talking about Solr, but it works much the same with Hadoop. If you have trouble, leave a message below and I'll try to help.
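As a concrete sketch of that last point, the debug flags can be appended to whatever options variable your Hadoop process actually reads, for example HADOOP_OPTS in hadoop-env.sh (the exact variable depends on how you launch the job, so treat this as an illustration only):
# hadoop-env.sh -- listen for a remote debugger on port 1044; suspend=y waits until it attaches
export HADOOP_OPTS="${HADOOP_OPTS} -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044"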
