Hello, I have started to learn Apache Flink. I tried to install Flink 1.7.2 on macOS 10.14.3 and, to check the installation, started local cluster mode by running start-cluster.sh from my terminal, following the Flink docs. When I opened the Dispatcher's web frontend at http://localhost:8081 in my browser, it showed a 403 Forbidden error. I went through the Flink log file and found:
Caused by: org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
Caused by: java.net.BindException: Address already in use
Here is a screenshot of my log file. Kindly help me solve this problem. I am running Java 1.8.
From a Jira reference, this was a bug in version 1.7, but it is resolved in 1.7.2. The Dispatcher frontend works after changing the rest port value to 8089 in flink-conf.yaml (since 8081 is a very common port, the default value may conflict with another process).
http://www.alternatestack.com/development/apache-flink-change-port-for-web-front-end/
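In practice the change is a single line in the Flink config. A minimal sketch, assuming the standard distribution layout (rest.port is the key that sets the web frontend port in Flink 1.7; adjust the path to your install):

# conf/flink-conf.yaml: move the web frontend off the crowded 8081 port
#   rest.port: 8089
# Then restart the local cluster so the change takes effect:
./bin/stop-cluster.sh
./bin/start-cluster.sh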
I have Windows 10 and I followed this guide to install Spark and make it work on my OS, along with the Jupyter Notebook tool. I used this command to instantiate the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later figured out that no worker had been instantiated according to the aforementioned guide, and my tasks were really slow. Therefore, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager because it was run by Docker, I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master was correctly instantiated, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
This returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they returned the message you can see above. I guess I messed up some Java configuration, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be with the Java version.
I installed Java, Python, Spark, etc., all the latest versions!
Followed the steps mentioned in the link below:
https://phoenixnap.com/kb/install-spark-on-windows-10
Got the same error as you!
Then I downloaded the Java SE 8 version from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
Downloaded jdk-8u281-windows-x64.exe
Reset JAVA_HOME to point at the new JDK.
Started spark-shell - it opened perfectly without any issues.
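For anyone repeating this, here is a sketch of the JAVA_HOME reset in a Windows command prompt. The install path matches the jdk-8u281 build mentioned above but is an assumption; adjust it to wherever the installer put your JDK:

rem Point the current session at JDK 8 and put it first on PATH
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_281
set PATH=%JAVA_HOME%\bin;%PATH%
rem Confirm 1.8 is now the active JDK, then start the shell
java -version
spark-shell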
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It just worked for me, so I am providing the same solution here. :)
Thanks,
Karun
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade or use a different version.
Ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version.
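On macOS, a sketch of pointing the shell at the newly installed JDK 11, using the stock /usr/libexec/java_home helper (works for any installed JDK):

# List installed JDKs, then select the 11.x one for this session
/usr/libexec/java_home -V
export JAVA_HOME=$(/usr/libexec/java_home -v 11)
java -version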
Easiest solution: the latest version of Java (JDK) is not supported by Spark. Please try installing JDK version 8; this will solve the error.
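A quick sanity check after installing JDK 8 (the exact build string will differ on your machine):

# Should report a 1.8.x version, not 16 or 17
java -version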
I have a Neo4j instance installed using the community AMI, and it keeps complaining with the error below even though I've configured Bolt to accept non-local connections, i.e. dbms.connector.bolt.address=0.0.0.0:7687
Caused by: java.net.URISyntaxException: Expected hostname at index 7: bolt://:7687
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.failExpecting(URI.java:2854)
at java.net.URI$Parser.parseHostname(URI.java:3390)
at java.net.URI$Parser.parseServer(URI.java:3236)
at java.net.URI$Parser.parseAuthority(URI.java:3155)
at java.net.URI$Parser.parseHierarchical(URI.java:3097)
at java.net.URI$Parser.parse(URI.java:3053)
at java.net.URI.<init>(URI.java:673)
at org.neo4j.server.rest.discovery.DiscoverableURIs$Builder.add(DiscoverableURIs.java:128)
... 15 more
I've installed the following version:
AWS AMI id : ami-0118d82e9da26d491
Neo4j version: 3.5.3
Is this a known issue? Thanks in advance.
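Not a confirmed fix, but the empty host in bolt://:7687 suggests the advertised address (what the server publishes to clients), rather than the bind address, is unset. A sketch of what to try in neo4j.conf, using the Neo4j 3.5 setting names (the host value is a placeholder):

# neo4j.conf: advertise an address clients can actually reach
# dbms.connectors.default_advertised_address=<public-dns-or-ip>
# dbms.connector.bolt.advertised_address=<public-dns-or-ip>:7687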
I am posting this in the hope of helping others who run into this issue on a Mac. I recently updated ES to the 2.2.x branch using Homebrew:
brew uninstall --force elasticsearch
brew update
brew install elasticsearch
I repeatedly got connection errors trying both localhost and 127.0.0.1 on port 9200.
curl http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
I tried an unload and load.
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.elasticsearch.plist
Then I tried starting it manually:
elasticsearch
The following errors appeared, indicating that Java version 1.7.x was the problem and why ES would not start:
Exception in thread "main" java.lang.RuntimeException: Java version: Oracle Corporation 1.7.0_45 [Java HotSpot(TM) 64-Bit Server VM 24.45-b08] suffers from critical bug https://bugs.openjdk.java.net/browse/JDK-8024830 which can cause data corruption.
Please upgrade the JVM, see http://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html for current recommendations.
If you absolutely cannot upgrade, please add -XX:-UseSuperWord to the JAVA_OPTS environment variable.
Upgrading is preferred, this workaround will result in degraded performance.
at org.elasticsearch.bootstrap.JVMCheck.check(JVMCheck.java:123)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:283)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
After getting past this error, there were also errors for plugins previously installed under the 1.7.x branch:
Exception in thread "main" java.lang.IllegalStateException: Could not load plugin descriptor for existing plugin [bigdesk]. Was the plugin built before 2.0?
Likely root cause: java.nio.file.NoSuchFileException: /usr/local/var/lib/elasticsearch/plugins/bigdesk/plugin-descriptor.properties
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:315)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:380)
at java.nio.file.Files.newInputStream(Files.java:106)
at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:87)
at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:378)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:128)
at org.elasticsearch.node.Node.<init>(Node.java:146)
at org.elasticsearch.node.Node.<init>(Node.java:128)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
The solutions I discovered for these issues were the following:
Edit the /usr/local/etc/elasticsearch/elasticsearch.yml file and verify the bind_host configs are commented so it will default to 0.0.0.0.
Edit the /usr/local/Cellar/elasticsearch/YOUR_VERSION/libexec/bin/elasticsearch.in.sh file and add the -XX:-UseSuperWord flag after the other JAVA_OPTS:
JAVA_OPTS="$JAVA_OPTS -XX:-UseSuperWord"
Manually remove the previous plugins so you can install the latest versions for that ES branch:
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin list
Installed plugins in /usr/local/var/lib/elasticsearch/plugins:
- bigdesk
- head
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin remove bigdesk
/usr/local/Cellar/elasticsearch/2.2.0_1/libexec/bin/plugin remove head
After these steps I could once again start ES 2.x and then re-install any desired plugins. I hope this helps others who run into similar issues.
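As a final sanity check that the node is reachable again (the same endpoint that was refusing connections above):

curl http://localhost:9200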
I had this issue on macOS when I tried to uninstall elasticsearch 7 and install elasticsearch@6 using Homebrew.
I resolved it by manually deleting the following directories and files and then installing elasticsearch@6:
Data: /usr/local/var/lib/elasticsearch/
Logs: /usr/local/var/log/elasticsearch/elasticsearch_<<user>>.log
Plugins: /usr/local/var/elasticsearch/plugins/
Config: /usr/local/etc/elasticsearch/
steps:
>brew uninstall elasticsearch
>rm -rf <<above mentioned directories>>
>brew install elasticsearch@6
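Spelled out, the steps above look like this; a sketch using the paths listed earlier, so double-check them against your system before deleting anything:

brew uninstall elasticsearch
rm -rf /usr/local/var/lib/elasticsearch/
rm -rf /usr/local/var/log/elasticsearch/
rm -rf /usr/local/var/elasticsearch/plugins/
rm -rf /usr/local/etc/elasticsearch/
brew install elasticsearch@6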
I have a Jenkins CI with the following configuration:
Tomcat on Open SUSE [openSUSE 13.1 (x86_64)] [Jenkins v1.532.3]
Java version "1.7.0_51" [OpenJDK Runtime Environment (IcedTea 2.4.4) (suse-24.13.5-x86_64)].
I am positive that I have my proxy configured correctly, but when I try to download or update a plugin on Jenkins, I get this error:
Git Client Plugin
Failure -
hudson.util.IOException2: Failed to download from http://updates.jenkins-ci.org/download/plugins/git-client/1.8.1/git-client.hpi
at hudson.model.UpdateCenter$UpdateCenterConfiguration.download(UpdateCenter.java:794)
.
.
Caused by: java.net.ProtocolException: Server redirected too many times (20)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
.
.
.
at hudson.model.UpdateCenter$UpdateCenterConfiguration.download(UpdateCenter.java:764)
... 7 more
Caused by: java.net.ProtocolException: Server redirected too many times (20)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1636)
But I am able to curl or wget the actual URL/URI to download the plugin.
Thanks for your help and feedback.
I don't think this is still relevant, but maybe it could help someone else.
I had a similar issue after I changed my password. I checked different proxies and each time had the same issue. Once I changed the password again to use more standard characters, the issue immediately went away.
So it seems that sometimes there can be a problem with certain special characters in the password.
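A hedged illustration of how special characters can bite: when credentials are embedded in a proxy URL, reserved characters in the password should be percent-encoded, e.g. @ as %40 and : as %3A (the host, port, and credentials below are placeholders):

# Literal password "p@ss:word", percent-encoded so the proxy URL still parses cleanly
curl -x "http://user:p%40ss%3Aword@proxy.example.com:3128" http://updates.jenkins-ci.org/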
Hope that could help someone.
Best Regards,
Viacheslav
The following error is seen no matter whether I stop or delete any service, such as Oozie, YARN, HBase, etc. Can anyone help?
A server error has occurred. Send the following information to Cloudera.
Path: http://172.29.36.58:7180/cmf/services/2/do
Version: Cloudera Standard 4.7.2 (#135 built by jenkins on 20130918-2007 git: 72d3f9dfa797fe2c627d00dc6414a1e0151b91d6)
java.lang.IllegalArgumentException: Service my_hbase1 (my_hbase) has unsatisfied dependencies: [ZOOKEEPER]
at Preconditions.java line 92
in com.google.common.base.Preconditions checkArgument()
Problem resolved. This was caused by an incorrect configuration for service my_hbase1 (my_hbase). I corrected the ZooKeeper dependency to the right service name and the issue is gone.