I am trying to run Cassandra on my Mac.
I installed it following the steps detailed here: http://www.datastax.com/docs/1.0/getting_started/install_singlenode_root
but when I run:
bin/nodetool ring -h localhost
I get the following error message:
Class JavaLaunchHelper is implemented in both
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java and
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
How can I make Cassandra work?
Many thanks
You are using ancient docs. On a recent version of Cassandra, run the command like this:
bin/nodetool -h localhost ring (see http://www.datastax.com/documentation/cassandra/2.1/cassandra/tools/toolsRing.html)
If you installed vnodes (the default), use nodetool status for an easier-to-read output.
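For example, something like this should work on a recent install (the exact output depends on your cluster; this only shows the command form):
bin/nodetool -h localhost status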
Please use these docs, or the docs that match your installation; I doubt you installed Cassandra 1.0. Check the installation instructions for the version you actually downloaded.
CORRECTION: the nodetool ring command worked for me using options in any position on 2.0.10:
bin/nodetool -h localhost ring
bin/nodetool ring -h localhost
It also worked with --h in place of -h.
It is a known bug in the JDK, but it is not going to stop you from running Cassandra.
What you can do is set the JAVA_HOME variable explicitly.
It will not fix the bug, but it may make the warning go away.
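For example, on macOS something like this in your shell profile pins it explicitly (the java_home helper prints the path of the installed JDK; adjust if you manage JDKs differently):
export JAVA_HOME=$(/usr/libexec/java_home)
export PATH=$JAVA_HOME/bin:$PATH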
This is a problem with the JDK version, so you have to do the following:
Unset JAVA_HOME in your terminal.
Edit the nodetool script and point its JAVA variable at a JDK older than JDK 7, e.g.:
JAVA=/Library/Java/JavaVirtualMachines/jdk1.6.0_xx.jdk/Contents/Home/bin/java
Then run nodetool; it should work without any issue.
Related
I have Windows 10 and I followed this guide to install Spark and make it work on my OS, as well as to use it from the Jupyter Notebook tool. I used this command to start the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later realized that no worker was actually started when following the aforementioned guide, and my tasks were really slow. Therefore, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager because it was run by Docker, I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master started correctly, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
That command returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they all returned the error shown above. I guess I messed up some Java stuff, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be the Java version.
I installed Java, Python, Spark etc., all the latest versions, and followed the steps mentioned in the link below:
https://phoenixnap.com/kb/install-spark-on-windows-10
I got the same error as you. I then downloaded the Java SE 8 version from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
I downloaded jdk-8u281-windows-x64.exe, reset JAVA_HOME, and started spark-shell. It opened perfectly without any issues.
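Roughly, the JAVA_HOME reset from a Windows command prompt looks like this (the install path is just the jdk-8u281 default and is only an example; adjust it to wherever yours ended up, and use setx or the System Properties dialog to make it permanent):
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_281
set PATH=%JAVA_HOME%\bin;%PATH%
java -version
spark-shell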
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It worked for me, so I am sharing the same solution here. :)
Thanks,
Karun
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade or use a different version.
I ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version.
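Once the Temurin 11 build is installed, you can point your shell at it like this (assuming the standard macOS java_home helper):
export JAVA_HOME=$(/usr/libexec/java_home -v 11)
java -version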
Easiest solution: the latest version of Java (JDK) is not supported by Spark.
Please try installing JDK version 8. This will solve the error.
Maybe I am thick, but I can't seem to find a way to pass ES a config file path from the command line. I have been searching and reading for 45 mins now (including several posts on Stack Overflow), and none of the proposed solutions works.
Here are the ones I tried:
elasticsearch -Des.config=/path/to/my/elasticsearch.yml
==> ERROR: D is not a recognized option
elasticsearch -Ees.config=/path/to/my/elasticsearch.yml
==> org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [es.config] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch -Econfig=/path/to/my/config.yml
==> org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [config] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch -Epath.conf=/path/to/config/dir/with/elasticsearch.yml
==> No exception, but the program terminates without any output whatsoever (no error message). Since I didn't specify the -d option, I assume it's not running as a daemon, and therefore that the ES server is not running afterwards.
Can anyone pull me out of the mud here?
Thx.
I too struggled with the same issue and tried the same sort of commands as you did. The problem here is caused by the version of Elasticsearch.
If your version is above 5.0.0 then, as per this, none of the above commands will work. It also looks like they have limited the types of parameters that can be passed from the command line.
The easiest way is to just cd to the directory where you installed Elasticsearch and run ./bin/elasticsearch (make sure you don't execute it as root; it refuses to run as root).
The issue here is that after every new version of ES, some older functionality gets removed or updated, which is frustrating. I'm currently working with Elasticsearch v6.4.0 and as of now this works.
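So in practice it is just (as a regular user, not root):
cd /path/to/elasticsearch
./bin/elasticsearch
If you still need a non-default config location on recent versions, the route I know of is the ES_PATH_CONF environment variable pointing at the config directory, e.g. ES_PATH_CONF=/path/to/config/dir ./bin/elasticsearch, but this has changed between releases, so check the docs for your exact version.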
I created a Centos 7.3 VM using kickstart using the following command:
virt-install --name=vm1 --disk path=vm1.img,size=20 --vcpus=2 --ram=10240 --os-type=linux --os-variant=rhel7.0 --network bridge=br0 --graphics none --location=http://<IP>/centos7.3 -x "ks=http://<IP>/centos73vm-ks.cfg append ip=<VM IP> netmask=255.255.252.0 gateway=<gw> bootproto=static console=ttyS0"
This works fine: the VM is created, rebooted automatically, and the node is usable. However, the problem is that I cannot use this for automation since I don't get control back. To fix that, I added the --noautoconsole option of virt-install at the end of the above command.
After doing so, the VM is installed, but after the reboot it does not come up automatically; it remains in the shut-off state and I need to start it manually. There are no errors when logging in to the console. Can someone give me any leads on how to fix this?
Any help would be greatly appreciated. Thanks in advance.
You need to add --wait=-1 so that virt-install waits for the installation to complete before exiting. The VM will then start automatically when the installation completes.
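For example, keeping your original options (IPs and kickstart path as in your setup):
virt-install --name=vm1 --disk path=vm1.img,size=20 --vcpus=2 --ram=10240 --os-type=linux --os-variant=rhel7.0 --network bridge=br0 --graphics none --location=http://<IP>/centos7.3 --noautoconsole --wait=-1 -x "ks=http://<IP>/centos73vm-ks.cfg append ip=<VM IP> netmask=255.255.252.0 gateway=<gw> bootproto=static console=ttyS0"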
This sure sounds like an issue that was covered on the Red Hat customer portal. I'm not sure if that requires a paid subscription, but your company (or you) might have one already?
-- Jonas
I have added the following setting in my catalina.sh file:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/root/logs".
But the heap dump file is still not created when Tomcat goes down. I have this setup on CentOS 7 on AWS.
Please help me in solving this issue. Thanks in advance.
I would suggest trying to create the dump manually, if it is not generated automatically, using a JDK-bundled tool called jmap, although the automatic method is recommended for best results.
For Linux/Solaris-based operating systems, execute the following command:
$JAVA_HOME/bin/jmap -dump:format=b,file=heap.bin <pid>
For more details, follow the link below:
https://confluence.atlassian.com/doc/generating-a-heap-dump-219024032.html
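If you need the Tomcat process id for the jmap command, jps (also bundled with the JDK) lists running JVMs; the Tomcat entry is usually org.apache.catalina.startup.Bootstrap:
$JAVA_HOME/bin/jps -l
$JAVA_HOME/bin/jmap -dump:format=b,file=/tmp/heap.bin <pid>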
The Tomcat process is usually set up to run as the tomcat user, who most likely does not have write access to your /root folder.
Please try setting the dump path to somewhere like /tmp instead.
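A minimal sketch of what that might look like in catalina.sh (or better, in a bin/setenv.sh), assuming the tomcat user can write to /tmp:
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
export CATALINA_OPTS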
I have a host configured into Ambari which no longer exists. Ambari still thinks it's there. When I try to delete it through the UI I get:
400 status code received on DELETE method for API:
/api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com/host_components/ZOOKEEPER_CLIENT
Error message: Bad Request
When I try to delete it via the api, with the command below, I get the same host information as with a GET:
curl -H "X-Requested-By: ambari" -DELETE http://admin:admin@ambari.handy-internal.com//api/v1/clusters/handy091015/hosts/r-hadoopeco-celeryworker-07ac46a4.hbinternal.com
I have tried the instructions here to no avail:
https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
My question is: how do I get Ambari to no longer know about (or try to do things with) this host?
I am not able to reproduce your behaviour with Ambari 2.1.2 and HDP 2.3 stack.
Limitation:
Note that host removal is supported only for hosts with no master components; if any are present, deletion is not possible.
Options:
Try doing ambari-server restart; sometimes Ambari has intermittent issues (see the commands after this list).
If it is an option, I recommend doing ambari-server reset and installing from scratch. If you don't have much setup, it will probably save you time.
If not, you may want to additionally post the ambari-server.log file; this may help in debugging the core issue.
Another option: just ignore that host, it will not do much harm to you. You can move it to maintenance mode, which will ease cluster operation.
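For reference, the two server-side options above are just:
ambari-server restart
ambari-server reset
(reset wipes the Ambari database, so only use it if reinstalling from scratch really is acceptable.)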