I am posting this to hopefully help others who run into this issue on Mac. I recently updated ES to the 2.2.x branch using Homebrew:
brew uninstall --force elasticsearch
brew update
brew install elasticsearch
I repeatedly got connection errors trying both localhost and 127.0.0.1 on port 9200.
curl http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
I tried an unload and load.
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist
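To confirm whether the launch agent actually came up after the load, checking launchctl can help (a sketch):
launchctl list | grep elasticsearch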
Then tried starting manually.
elasticsearch
The following error appeared, indicating that the installed Java 1.7.x release was the problem and why ES would not start:
Exception in thread "main" java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_45 [Java HotSpot(TM) 64-Bit Server VM 24.45-b08] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:123)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:283)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
After getting past this error, there were also errors for plugins that had been installed under the previous 1.7.x branch.
Exception in thread "main" java.lang.IllegalStateException: Could not load plugin descriptor for existing plugin [bigdesk]. Was the plugin built before 2.0?
Likely root cause: java.nio.file.NoSuchFileException: /usr/local/var/lib/elasticsearch/plugins/bigdesk/plugin-descriptor.properties
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:315)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:380)
at java.nio.file.Files.newInputStream(Files.java:106)
at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:87)
at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:378)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:128)
at org.elasticsearch.node.Node.<init>(Node.java:146)
at org.elasticsearch.node.Node.<init>(Node.java:128)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
The solutions I discovered for these issues were the following:
Edit the /usr/local/etc/elasticsearch/elasticsearch.yml file and verify the bind_host settings are commented out so the host will default to 0.0.0.0.
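A quick way to double-check that no bind_host lines are still active (a sketch; the exact setting names vary slightly across ES versions):
grep -n bind_host /usr/local/etc/elasticsearch/elasticsearch.yml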
Edit the /usr/local/Cellar/elasticsearch/YOUR_VERSION/libexec/bin/elasticsearch.in.sh file and add the -XX:-UseSuperWord flag after the other JAVA_OPTS:
JAVA_OPTS="$JAVA_OPTS -XX:-UseSuperWord"
Manually remove the previous plugins so you can install the latest versions for that ES branch:
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin list
Installed plugins in /usr/local/var/lib/elasticsearch/plugins:
- bigdesk
- head
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin remove bigdesk
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin remove head
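Once removed, re-installing a 2.x-compatible version looked like this for head (a sketch; the plugin coordinates may have changed since):
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin install mobz/elasticsearch-head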
After these steps, I could once again start ES 2.x and re-install any desired plugins. I hope this helps others who run into similar issues.
I had this issue on macOS when I tried to uninstall Elasticsearch 7 and install elasticsearch@6 using Homebrew.
I resolved it by manually deleting the following directories and then installing elasticsearch@6:
Data: /usr/local/var/lib/elasticsearch/
Logs: /usr/local/var/log/elasticsearch/elasticsearch_<<user>>.log
Plugins: /usr/local/var/elasticsearch/plugins/
Config: /usr/local/etc/elasticsearch/
Steps:
brew uninstall elasticsearch
rm -rf <<above mentioned directories>>
brew install elasticsearch@6
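After installing, starting the service and checking the port again (a sketch; as a versioned formula, elasticsearch@6 may print extra caveats from brew):
brew services start elasticsearch@6
curl http://localhost:9200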
When I try to start Elasticsearch on my macOS laptop, it does not appear to start. Most of the answers I find on the internet do not relate to using brew on macOS.
Here is my command-line history from trying to start it.
:>brew services stop elasticsearch
Stopping `elasticsearch`... (might take a while)
==> Successfully stopped `elasticsearch` (label: homebrew.mxcl.elasticsearch)
:>brew services start elasticsearch
==> Successfully started `elasticsearch` (label: homebrew.mxcl.elasticsearch)
:>curl http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
:>curl https://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
:>lsof -i :9200
:>sudo ps -ef | grep elastic
501 85360 68989 0 9:51AM ttys000 0:00.00 grep elastic
Also, using Network Utility I see nothing listening on port 9200.
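For completeness, brew's own view of the service and the default log location can also be checked (a sketch; the log path is an assumption based on where the Homebrew formula usually writes):
brew services list
ls -l /usr/local/var/log/elasticsearch/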
I am using Catalina Version 10.15.7 (19H1030).
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
My elasticsearch version is
Version: 6.0.1, Build: 601be4a/2017-12-04T09:29:09.525Z, JVM: 1.8.0_121
The location of the binary on my PATH is /usr/local/bin/elasticsearch
EDIT:
There was a comment asking me to include any error messages or output. To be clear, what I wrote above is the only output: the only thing brew services start elasticsearch writes to stdout or stderr is "Successfully started elasticsearch".
However, when running elasticsearch directly from the command line I get this:
:>elasticsearch
2021-05-24 09:33:08,875 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.logging.log4j.core.jmx.Server.register(Server.java:389)
at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:167)
at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:556)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:261)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:206)
at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220)
at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197)
at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:172)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:141)
at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:120)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:290)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:130)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:121)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:69)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85)
ERROR: no log4j2.properties found; tried [/usr/local/etc/elasticsearch] and its subdirectories
The /usr/local/etc/elasticsearch directory exists but is empty.
:>ls -l /usr/local/etc
...
drwxr-xr-x 2 marlpier admin 64 May 19 16:30 elasticsearch
...
:>find /usr/local/etc/elasticsearch
/usr/local/etc/elasticsearch
Maybe my jvm.options file is not found. Where should it be?
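One way to look for a jvm.options file that shipped with the Cellar install (a sketch):
find /usr/local/Cellar/elasticsearch -name jvm.options 2>/dev/null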
The answer was that brew reinstall elasticsearch was not working.
Doing brew uninstall elasticsearch gave a Java error, so I tried brew uninstall java, which also gave a Java error.
The answer for uninstalling Java was to remove the Java directory manually.
After that I was able to uninstall Elasticsearch and install it again with brew. Now it works.
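For anyone needing the same fix, the sequence was roughly the following (a sketch; the JDK directory name is an assumption and depends on how and which Java was installed):
sudo rm -rf /Library/Java/JavaVirtualMachines/<jdk-version>.jdk
brew uninstall elasticsearch
brew install elasticsearch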
I have Windows 10 and I followed this guide to install Spark and make it work on my OS, along with the Jupyter Notebook tool. I used this command to instantiate the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later figured out that no worker was being instantiated according to the aforementioned guide, and my tasks were really slow. Therefore, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager (the guide ran it via Docker), I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master was correctly instantiated, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
which returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they all returned the error above. I guess I messed up some Java configuration, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be with the Java version.
I had installed Java, Python, Spark etc., all the latest versions!
I followed the steps mentioned in the link below and got the same error as you:
https://phoenixnap.com/kb/install-spark-on-windows-10
I then downloaded the Java SE 8 version (jdk-8u281-windows-x64.exe) from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
Then I reset JAVA_HOME to the new JDK 8 location.
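In case it helps, resetting JAVA_HOME persistently on Windows can be done like this (a sketch; the install path is the default for that JDK build and may differ on your machine):
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_281"
setx only affects new processes, so open a fresh terminal afterwards and verify with java -version.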
Started spark-shell, and it opened perfectly without any issues.
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It just worked for me, so I am sharing the same solution here. :)
Thanks,
Karun
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade or use a different version.
Ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version first.
Easiest solution:
The latest versions of Java (JDK) are not supported by Spark (at the time of writing, Spark 3.x supports Java 8 and 11).
Please try installing JDK version 8. This should solve the error.
I am trying to install Kibana using the RPM kibana-4.5.0-1.x86_64.rpm.
However, when I try to start the Kibana process, I get the prompt below:
Starting kibana....... unable to start process kibana.
To find the reason, I enabled a log file by setting the parameter below in kibana.yml:
logging.dest: /opt/kibana/kibana.log
However, no log file is created, and I am unable to identify why the Kibana process is not starting.
Any suggestion would be appreciated.
Please check:
RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and CentOS 5.
My suggestion: install Kibana from the .tar.gz archive instead.
You can follow this link:
https://www.elastic.co/guide/en/kibana/current/targz.html
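The .tar.gz route looks roughly like this (a sketch; the download URL is an assumption based on the 4.x naming scheme, so take the exact command for your version from the linked page):
curl -L -O https://download.elastic.co/kibana/kibana/kibana-4.5.0-linux-x64.tar.gz
tar -xzf kibana-4.5.0-linux-x64.tar.gz
cd kibana-4.5.0-linux-x64/
./bin/kibana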
As of last night, all our new Docker deployments started failing because the latest version of Docker (docker-1.3.2-1.0.amzn1.x86_64) in the Amazon repo fails to start up.
Steps to reproduce are:
# Launch instance with default Amazon AMI
yum install docker-1.3.2-1.0.amzn1.x86_64
service docker restart
# Get the following error in /var/log/docker
2014/11/26 05:14:16 docker daemon: 1.3.2 c78088f/1.3.2; execdriver: native; graphdriver:
[8f6d7cfb] +job serveapi(unix:///var/run/docker.sock)
[info] Listening for HTTP on unix (/var/run/docker.sock)
docker: relocation error: docker: symbol dm_task_get_info_with_deferred_remove,
version Base not defined in file libdevmapper.so.1.02 with link time reference
If I downgrade back to docker-1.3.1-1.0.amzn1.x86_64 everything seems to be fine.
Is the AWS package actually broken, or is it just our setup?
Is there a work around other than downgrading?
Yes, it is broken for me too.
Downgrading has been the only solution so far.
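For reference, the downgrade itself can be done with yum, using the package version from the question (a sketch):
yum downgrade docker-1.3.1-1.0.amzn1.x86_64
service docker restart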
I had the same error on a CentOS VM provisioned at my workplace; a yum update resolved it.
I suspect a broken build went out and has since been fixed.
I am facing problems with Elasticsearch.
I am unable to get results. When I checked the log files, I found the following error:
ERROR:
[2014-10-30 08:52:46,971][DEBUG][action.search.type ] [Lianda] [135] Failed to execute fetch phase
[Error: Runtime.getRuntime().exec("cd").getInputStream(): Cannot run program "cd": java.io.IOException: error=2, No such file or directory]
[Near : {... w InputStreamReader(Runtime.getRuntime().exec("cd" ....}]
Below are the versions I am using:
Elasticsearch version: 0.90.5
Java version: 1.6.0_33 64 bit
Plugin installed: phonetic
The strange thing is that whenever I get this error, restarting the Elasticsearch server fixes it.
So I think something is getting overloaded.
Based on seeing the Runtime.getRuntime().exec() call, it could be related to a dynamic scripting vulnerability in the defaults of Elasticsearch prior to version 1.2. See this document on scripting security.
If that is the source of your problem, you can put a fix into your current version (or upgrade to a newer one). From the link above:
If you are running an Elasticsearch node prior to the 1.2.x release,
you can make this change on your system by putting the following
setting into elasticsearch.yml:
script.disable_dynamic: true
Then restart each node in your cluster. Dynamic scripting will now be
disabled. If you are running Elasticsearch 1.2.x or later, dynamic
scripting is already disabled by default.
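For a package-based install, applying that on each node looks roughly like this (a sketch; the config path is the standard package location and is an assumption for your setup):
echo 'script.disable_dynamic: true' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo service elasticsearch restart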